\section{Introduction}
The concept of phonemes is well developed in speech recognition and derives from a definition in phonetics as ``the smallest sound one can articulate'' \cite{international1999handbook}. Phonemes are analogous to atoms---they are the building blocks of speech. While they are an approximation, in~practice that approximation has been remarkably robust~\cite{hinton2012deep}. Not only are phonemes used by linguists and audiologists to describe speech, they are widely used in large-vocabulary speech recognition as the acoustic classes, or~`units', to~be recognized~\cite{hinton2012deep,wollmer2010recognition, triefenbach2010phoneme}. Sequences of unit estimates can be strung together to infer words and~sentences.
Comprehending visual speech, or~lipreading, is much less well developed~\cite{bear2016decoding}. The~units considered to be equivalent to phonemes are called visemes~\cite{cappelletta2011viseme} but, even in English, there is no clear agreement on the visemes~\cite{goldschen1996rationale}, and~in~\cite{bear2017phoneme} for example, it is noted that there are at least $120$ proposed viseme sets. This large number arises because some authors take vowels~\cite{montgomery1983physical}, and~others consonants~\cite{walden1977effects}, but~also because, of~the proposed sets, some are derived from linguistic principles~\cite{neti2000audio,woodward1960phoneme}, some are the results of human lipreading experiments~\cite{fisher1968confusions, finn1988automatic}, others are data-derived~\cite{bear2017phoneme, lee2002audio}, and~others still are hybrids of these approaches~\cite{bozkurt2007comparison}.
Despite the challenges, a~number of lipreading systems have been built using visemes (\cite{shaikh2010lip,bear2014resolution} for example). When building a viseme recognizer a complication is that multiple phonemes will map onto a single viseme~\cite{bear2017phoneme}. A~common example is the $/p/$, $/b/$, and~$/m/$ bilabial sounds which are often grouped into one viseme~\cite{binnie1976visual, disney, lip_reading18}. Attempts to draw mappings between the phonemes and visemes have been tested~\cite{bear2017phoneme,bear2014some} but to date these mappings have not yet proven to improve machine lipreading~significantly.
On the other hand, there is an emerging body of work~\cite{thangthai2017comparing, howell2013confusion} that, despite the caveats above, is demonstrating that phoneme lipreading systems can outperform viseme recognizers. In~essence it is a tradeoff: does one use viseme units which are tuned to the shape of the lips but suffer with inaccuracies caused by visual confusions between words that sound different but look identical~\cite{thangthai2017comparing}; or does one stick to phonetic units knowing that many of the phonemes are difficult to distinguish on the lips?
These visual confusions are called homophenes~\cite{10.1080/00221309}.
We demonstrate the homophenous word difficulty, with~some examples in Table~\ref{tab:homophones} from~\cite{thangthai2017comparing}. In~this example, Jeffers visemes~\cite{jeffers1971speechreading} have been used to translate the phonemes into viseme~strings.
\begin{table}[H]
\centering
\caption{Example of phoneme and viseme dictionary with its corresponding IPA symbols~\cite{thangthai2017comparing}.}
\begin{tabular}{lcc}
\toprule
\textbf{Word Entry} & \textbf{Phoneme Dictionary} & \textbf{Viseme Dictionary} \\
\midrule
TALK & /t/ /\textopeno/ /k/ & /C/ /V1/ /H/ \\
TONGUE & /t/ /\textturnv/ /\textipa{N}/ & /C/ /V1/ /H/ \\
DOG & /d/ /\textopeno/ /g/ & /C/ /V1/ /H/ \\
DUG & /d/ /\textturnv/ /g/ & /C/ /V1/ /H/ \\
\cmidrule{1-3}
CARE & /k/ /e/ /r/ & /H/ /V3/ /A/ \\
WELL & /w/ /e/ /l/ & /H/ /V3/ /A/ \\
WHERE & /w/ /e/ /r/ & /H/ /V3/ /A/ \\
WEAR & /w/ /e/ /r/ & /H/ /V3/ /A/ \\
WHILE & /w/ /ai/ /l/ & /H/ /V3/ /A/ \\
\bottomrule
\end{tabular}
\label{tab:homophones}
\end{table}
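The collapse of distinct words onto one viseme string can be made concrete with a short sketch. This is illustrative only: the phoneme-to-viseme fragment below is a hypothetical map written to match the rows of Table~\ref{tab:homophones}, not the full Jeffers set, and the ASCII phoneme names (e.g. \texttt{ao} for /\textopeno/) are our own shorthand.

```python
# Hypothetical phoneme-to-viseme fragment matching Table rows (not the full
# Jeffers set); ASCII names like "ao" stand in for IPA symbols.
P2V = {
    "t": "C", "d": "C",                       # alveolar stops
    "ao": "V1", "ah": "V1",                   # open vowels
    "k": "H", "g": "H", "ng": "H", "w": "H",
    "eh": "V3", "ay": "V3",
    "r": "A", "l": "A",
}

def to_visemes(phonemes):
    """Map a phoneme sequence to its viseme sequence."""
    return tuple(P2V[p] for p in phonemes)

words = {
    "TALK": ["t", "ao", "k"],
    "DOG":  ["d", "ao", "g"],
    "CARE": ["k", "eh", "r"],
    "WELL": ["w", "eh", "l"],
}

# TALK and DOG sound different but share one viseme string (a homophene):
assert to_visemes(words["TALK"]) == to_visemes(words["DOG"]) == ("C", "V1", "H")
assert to_visemes(words["CARE"]) == to_visemes(words["WELL"]) == ("H", "V3", "A")
```

The many-to-one map is what creates homophenes: once two phonemes share a viseme label, every word pair that differs only in those phonemes becomes indistinguishable.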
However, as~we shall show in this paper, it need not be an either/or approach to phonemes or visemes; we develop a novel method that allows us to vary the number of classes/visual units.
This~means we can tune the visual units as an intermediary state between the visual and audio spaces, and~we can also optimize against the competing trends of homopheneity~\cite{bear2017visual,bear2018comparing} and accuracy~\cite{htk34}. Thus, in this work, we use the term visemes for the traditional visemes, and~the term visual units for our new intermediary units which we propose will improve phoneme~classifiers.
We are motivated in our work because lipreading is a difficult challenge among speech tasks. Speech signals are bimodal (that is, they carry two channels of information, audio and visual) and significant prior work uses both; for~example,~\cite{ngiam2011multimodal} uses audio-visual speech recognition to demonstrate cross-modality learning. Lipreading, however, which is useful for understanding speech when the audio is too noisy to recognize easily, must classify speech from the visual channel alone. Thus, as~we shall present, we use a novel training method which uses new visual units and phonemes in a complementary~fashion.
This paper is an extended version of our prior work~\cite{bear2015findingphonemes,bear2016decoding}. This work is relevant to all classifiers since the choice of visual unit matters and is made before the classifier is trained. In~other words, the~choice of visual units must be made early in the design process, and~a non-optimal choice can be very expensive in terms of~performance.
The rest of this paper is structured as follows: we summarize prior viseme research for lipreading by both humans and machines, and~describe the state-of-the-art approaches for lipreading systems in a background section. Then we present an experiment in which we demonstrate how we can find the optimal number of visual units within a set; this is an essential preliminary test to define the scope of the second task. We present the data for all experiments within this section. The~preliminary test includes phoneme classification and clustering for new visual unit generation before analyzing the results to find the optimal visual unit~sets.
These optimal visual unit sets are used to test our novel method for training phoneme-labeled classifiers by using these sets as an initialization stage in the training phase of a conventional lipreading system. As~part of this second task, we also present a side task of deducing the right units for lipreading language models used in the lipreading system. Finally, we present the results of the new training method and draw conclusions before suggesting future work. Thus, we have three main contributions:
\begin{itemize}[leftmargin=2.3em,labelsep=6mm]
\item a method for finding optimal visual units,
\item a review of language model units for lipreading systems,
\item a new training paradigm for lipreading systems.
\end{itemize}
\section{Background}
Table~\ref{tab:Confusion_Factors} summarizes the most common viseme sets in the literature used for both human and machine lipreading.
The range of set sizes is from four (Woodward~\cite{woodward1960phoneme}) to 21 (Nichie~\cite{lip_reading18}). Note that not all viseme sets represent the same number of phonemes. Furthermore some of these use American English and others British English so there are minor variations in the phoneme sets. (American English phonemes tend to use diacritics~\cite{labov2005atlas}.)
\begin{table}[H]
\centering
\caption{Ratio of visemes to phonemes in previous viseme sets from the literature.}
\begin{tabular}{ l r l r }
\toprule
\textbf{Set} & \textbf{V:P} & \textbf{Set} & \textbf{V:P} \\
\midrule
Woodward~\cite{woodward1960phoneme} & 4:24 & Fisher~\cite{fisher1968confusions} & 5:21 \\
Lee~\cite{lee2002audio} & 9:38 & Jeffers~\cite{jeffers1971speechreading} & 11:42 \\
Neti~\cite{neti2000audio} & 12:43 & Franks~\cite{franks1972confusion}& 5:17 \\
Disney~\cite{disney} & 10:33 & Kricos~\cite{kricos1982differences} & 8:24 \\
Hazen~\cite{Hazen1027972} & 14:39 & Bozkurt~\cite{bozkurt2007comparison} & 15:41 \\
Montgomery~\cite{montgomery1983physical} & 8:19 & Finn~\cite{finn1988automatic}& 10:23 \\
Nichie~\cite{lip_reading18} & 21:48 & Walden~\cite{walden1977effects} & 9:20 \\
\bottomrule
\end{tabular}
\label{tab:Confusion_Factors}
\end{table}
Lipreading systems can be built with a range of architectures. Conventional systems are adopted from acoustic methods, often using Hidden Markov Models, for example as in~\cite{potamianos1998image}. More modern systems exploit deep learning methods~\cite{petridis2017end, stafylakis2017combining}. Deep learning has been deployed in two configurations: (i) as a replacement for the Gaussian Mixture Model (GMM) in GMM-Hidden Markov Model (HMM) systems and (ii) in a configuration known as end-to-end~learning.
However, the~high-level architectures have similarities: first the face of the speaker must be tracked or located; then some form of features are extracted; then a classification model is trained and tested on unseen data, optionally using a language model to improve the classification output (e.g.~\cite{le2017generating}). Throughout this process one must translate between the words spoken (and captured in the training videos), to~their phonetic pronunciation, to~their visual representation on the lips, and~back again for a useful transcript.
\section{Finding a Robust Range of Intermediate Visual~Units}
In our first example we use the RMAV dataset~\cite{improveVis} and the BEEP pronunciation dictionary~\cite{beep}. \textls[-15]{Figure~\ref{fig:process} shows a high-level overview of the first task. We begin with classification using phoneme-labeled} classifiers. The~output of this task is a set of speaker-dependent confusion matrices. The~data in these are used to cluster together single phonemes (monophones) into subgroups of visual units, based~upon~confusions.
However, conversely to the approach in~\cite{bear2017phoneme} we implement an alternative phoneme clustering process (described in detail in Section~\ref{sec:newclusteringalg}). The~key difference between the ad-hoc viseme choices compared in~\cite{bear2017phoneme} and our new clustering approach, is our ability to choose the number of visual units, whereas in prior viseme sets, this is~fixed.
With our new algorithm, we create a new phoneme-to-viseme (P2V) mapping every time a pair of classes is re-classified into a new class, thus reducing the number of classes in a set by one each time. In~the phonetic transcripts of our $12$ speakers, there is a maximum of $45$ phonemes, therefore we can create at most $45$ P2V maps for each speaker. We note that the real number of maps we can derive depends upon the number of phonemes classified during step one of Figure~\ref{fig:process}. During~this preliminary phoneme classification, should a phoneme not be classified, either incorrectly or correctly, then it is an omission in the confusion matrix from which our visual units are created. Thus, we have \emph{up to} $45$ sets of visual unit labels per speaker with which to label our~classifiers.
\begin{figure}[H]
\centering
\includegraphics[width=0.5\textwidth]{journal3Fig1.pdf}
\caption{Three-step high-level process for visual unit classification where the visual units are derived from phoneme~confusions.}
\label{fig:process}
\end{figure}
There is the option to measure performance using phoneme, viseme, or~word error. Here we choose word error~\cite{bearTaylor}: viseme error varies as the number of visemes varies, which leads to unfair comparisons, and~phoneme error is further removed from what we believe to be of interest to users, namely transcript~error.
\subsection{Data}
The RMAV dataset (formerly known as LiLIR)
consists of $20$ British English speakers (we use the $12$ speakers who had tracked features available; seven male and five female) and up to $200$ utterances per speaker of the Resource Management (RM) sentences, which total between $1362$ and $1802$ words each. The~sentences selected for the RMAV speakers are a subset of the full RM dataset~\cite{fisher1986darpa} transcripts. They were selected to maintain as much coverage of all phonemes as possible, as shown in Figure~\ref{fig:histogram}, while remaining realistic English conversation~\cite{improveVis}. The~original videos were recorded in high definition ($1920 \times 1080$), in a full-frontal position, at~$25$ frames per second. Individual speakers are tracked using Linear Predictors~\cite{ong2011robust}, and~Active Appearance Model~\cite{Matthews_Baker_2004} features of concatenated shape and appearance information have been~extracted.
\begin{figure}[H]
\centering
\includegraphics[width=.75\textwidth]{lilir_histo1.eps}
\caption{\textit{Cont.}}
\end{figure}
\begin{figure}[H]\ContinuedFloat
\centering
\includegraphics[width=.75\textwidth]{lilir_histo2.eps}
\caption{Occurrence frequency of phonemes in the RMAV~dataset.}
\label{fig:histogram}
\end{figure}
\unskip
\subsection{Linear Predictor~Tracking}
Linear Predictors (LP) are a person-specific and data-driven facial tracking method. Devised primarily for observing visual changes in the face during speech, these make it possible to cope with facial feature configurations not present in the training data by treating each feature~independently.
The linear predictor is the central point around which support pixels are used to identify the change in position of that central point over time. The~central point is observed as a landmark on the outline of a facial feature. In~this method both the shape (composed of landmarks) and the pixel information surrounding the linear predictor position are intrinsically linked. Linear predictors have been successfully used to track objects in motion, for~example in~\cite{matas2006learning}.
\subsection{Active Appearance Model~Features}
AAM features~\cite{Matthews_Baker_2004} of concatenated shape and appearance information have been extracted. We~track using a full-face model (Figure~\ref{fig:landmarks} (left)) but the final features are reduced to information from the lip area alone (Figure~\ref{fig:landmarks} (right)). Shape features (Equation~(\ref{eq:shapecombined})) are based solely upon the lip shape and positioning for the duration of the utterance. The~landmark positions can be compactly represented using a linear model of the form:
\begin{equation}
s = s_0 + \sum_{i=1}^ms_ip_i
\label{eq:shapecombined}
\end{equation}
where $s_0$ is the mean shape and $s_i$ are the modes. The~appearance features are computed over pixels, the~original images having been warped to the mean shape. So $A_0(x)$ is the mean appearance and appearance is described as a sum over modal appearances:
\begin{equation}
A(x) = A_0(x) + \sum_{i=1}^l{\lambda}_iA_i(x) \qquad \forall x \in S_0
\label{eq:appcombined}
\end{equation}
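The linear models above can be sketched numerically. This is an illustration only: the landmark count is an assumption, the $13$ shape parameters echo Table~\ref{tab:parameterslilirfeatures}, and the random modes stand in for real PCA output.

```python
import numpy as np

# Illustrative sketch of the linear shape model s = s0 + sum_i p_i * s_i.
# Landmark count is assumed; m = 13 matches the shape parameter counts
# reported for most RMAV speakers. Random values stand in for PCA output.
num_landmarks, m = 34, 13
rng = np.random.default_rng(0)

s0 = rng.normal(size=(num_landmarks, 2))        # mean shape (x, y landmarks)
modes = rng.normal(size=(m, num_landmarks, 2))  # modes s_i
p = rng.normal(size=m)                          # shape parameters p_i

s = s0 + np.tensordot(p, modes, axes=1)         # reconstructed shape

# With all parameters set to zero we recover the mean shape exactly.
assert np.allclose(s0 + np.tensordot(np.zeros(m), modes, axes=1), s0)
assert s.shape == (num_landmarks, 2)
```

The appearance model of Equation~(\ref{eq:appcombined}) has the same structure, with pixel images $A_i(x)$ in place of landmark vectors.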
Combined features are the concatenation of shape and appearance after PCA has been applied to each independently. The~AAM parameters for each speaker are listed in Table~\ref{tab:parameterslilirfeatures} (MATLAB files containing the extracted features can be downloaded from \url{http://zenodo.org/record/2576567}).
\begin{figure}[H]
\centering
\includegraphics[width=.33\textwidth]{avl2_face.png}
\includegraphics[width=.33\textwidth]{avl2_lips.png}
\caption{Landmarks in a full-face AAM used to track a face (\textbf{left}) and the lip-only AAM landmarks (\textbf{right}) for feature extraction.}
\label{fig:landmarks}
\end{figure}
\unskip
\begin{table}[H]
\centering
\caption{The number of parameters of shape, appearance, and~combined shape and appearance AAM features for the RMAV dataset speakers. Features retain 95\% variance of facial~information.}
\begin{tabular}{ l r r r }
\toprule
\textbf{Speaker} & \textbf{Shape} & \textbf{Appearance} & \textbf{Combined} \\
\midrule
S1 & 13 & 46 & 59 \\
S2 & 13 & 47 & 60 \\
S3 & 13 & 43 & 56 \\
S4 & 13 & 47 & 60 \\
S5 & 13 & 45 & 58 \\
S6 & 13 & 47 & 60 \\
S7 & 13 & 37 & 50 \\
S8 & 13 & 46 & 59 \\
S9 & 13 & 45 & 58 \\
S10 & 13 & 45 & 58 \\
S11 & 14 & 72 & 86 \\
S12 & 13 & 45 & 58 \\
\bottomrule
\end{tabular}
\label{tab:parameterslilirfeatures}
\end{table}
\unskip
\section{Clustering}
\label{sec:newclusteringalg}
\unskip
\subsection{Step One: Phoneme~Classification}
\label{sec:one}
To complete our preliminary phoneme classification, we implement $10$-fold cross-validation with replacement~\cite{efron1983leisurely} over~the $200$ sentences per speaker. This means $20$ test samples are randomly selected and omitted from the training folds. Our classifiers are based upon Hidden Markov Models (HMMs)~\cite{holmes2001speech} and implemented with the HTK toolkit~\cite{htk34}. We use the HTK tools as follows:
\begin{enumerate}[leftmargin=*,labelsep=4.9mm]
\item \texttt{HLed} creates our phoneme transcripts to be used as ground truth transcriptions.
\item \texttt{HCompV} initializes the HMMs with a `flat-start'~\cite{forney1973viterbi}, using manually made prototype files for each speaker based upon their AAM parameters (listed in Table~\ref{tab:parameterslilirfeatures}) and the desired HMM parameters. The~prototype HMM is based upon a Gaussian mixture of five components and three-state HMMs as per the work of~\cite{982900}.
\item Using \texttt{HERest} we train the classifiers by re-estimating the HMM parameters $11$ times via embedded training with the Baum-Welch algorithm~\cite{welch2003hidden}; beyond $11$ iterations the HMMs overfit. Our list of HMMs includes a single-state, short-pause model, labeled $/sp/$, to model the short silences between words in the spoken sentences. States are tied with \texttt{HHEd}.
\item We build a bigram word lattice using \texttt{HLStats} and \texttt{HBuild} and use this lattice to complete recognition with \texttt{HVite}. \texttt{HVite} uses our trained set of phoneme-labeled HMM classifiers to estimate what our test samples should be.
\item The output transcripts from \texttt{HVite} are used with our ground truth transcriptions from \texttt{HLEd} as inputs into \texttt{HResults} to produce confusion matrices and lipreading accuracy scores. \texttt{HResults} uses an optimal string match using dynamic programming~\cite{htk34} to compare the ground truths with the prediction transcripts.
\end{enumerate}
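The fold construction used throughout this pipeline can be sketched as follows; the function name and seed are illustrative, not the actual script, and we assume each fold draws its $20$ test sentences independently (so a sentence may be tested in more than one fold, hence "with replacement").

```python
import random

def make_folds(num_sentences=200, num_folds=10, test_size=20, seed=0):
    """Sketch of 10-fold cross-validation 'with replacement': each fold
    draws its test sentences at random from the full set (independently
    per fold), and the remaining sentences form that fold's training set."""
    rng = random.Random(seed)
    folds = []
    for _ in range(num_folds):
        test = set(rng.sample(range(num_sentences), test_size))
        train = [i for i in range(num_sentences) if i not in test]
        folds.append((train, sorted(test)))
    return folds

folds = make_folds()
assert len(folds) == 10
for train, test in folds:
    assert len(test) == 20 and len(train) == 180
    assert not set(train) & set(test)  # no train/test overlap within a fold
```

Within any single fold the train and test sets are disjoint; the "replacement" is across folds only.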
\subsection{Step Two: Phoneme~Clustering}
\label{sec:two}
We now have our phoneme confusions: ten confusion matrices per speaker, one for each fold of the cross-validation (an example matrix is shown in Figure~\ref{table:examplecm}). We cluster the $m$ phonemes into new visual unit classes, one iteration at a~time.
\begin{figure}[H]
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{|l l|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\toprule
& & \multicolumn{15}{c|}{Predicted classes} \\
& \multicolumn{1}{r|}{} & /ae/ & /ay/ & /b/ & /c/ & /d/ & /ea/ & /f/ & /iy/ & /l/ & /m/ & /n/ & /oy/ & /p/ & /s/ & /t/ \\
\midrule
\multirow{16}{*}{Actual classes} & /ae/ & 76 & 2 & 1 & 5 & 2 & 1 & 3 & 1 & 1 & 3 & 1 & 5 & 0 & 0 & 4 \\ \cmidrule{2-17}
& /ay/ & 0 & 28 & 0 & 1 & 0 & 2 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 2 \\ \cmidrule{2-17}
& /b/ & 0 & 4 & 17 & 0 & 1 & 2 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 \\ \cmidrule{2-17}
& /c/ & 3 & 6 & 6 & 163 & 3 & 7 & 7 & 2 & 8 & 7 & 1 & 4 & 2 & 0 & 1 \\ \cmidrule{2-17}
& /d/ & 4 & 2 & 2 & 3 & 33 & 0 & 0 & 1 & 3 & 0 & 1 & 2 & 1 & 0 & 1 \\ \cmidrule{2-17}
& /ea/ & 2 & 0 & 0 & 6 & 1 & 9 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ \cmidrule{2-17}
& /f/ & 4 & 1 & 0 & 3 & 1 & 1 & 40 & 0 & 0 & 1 & 5 & 2 & 0 & 0 & 0 \\ \cmidrule{2-17}
& /iy/ & 0 & 3 & 2 & 1 & 2 & 0 & 0 & 11 & 8 & 2 & 0 & 2 & 0 & 0 & 1 \\ \cmidrule{2-17}
& /l/ & 0 & 0 & 1 & 4 & 1 & 0 & 1 & 2 & 97 & 3 & 1 & 0 & 0 & 0 & 0 \\ \cmidrule{2-17}
& /m/ & 2 & 1 & 4 & 1 & 2 & 3 & 0 & 1 & 6 & 110 & 8 & 0 & 2 & 0 & 0 \\ \cmidrule{2-17}
& /n/ & 0 & 1 & 0 & 1 & 0 & 1 & 2 & 0 & 0 & 1 & 14 & 1 & 2 & 0 & 0 \\ \cmidrule{2-17}
& /oy/ & 0 & 0 & 0 & 3 & 1 & 1 & 4 & 1 & 1 & 3 & 1 & 16 & 1 & 0 & 0 \\ \cmidrule{2-17}
& /p/ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 3 & 0 & 0 \\ \cmidrule{2-17}
& /s/ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 84 & 0 \\ \cmidrule{2-17}
& /t/ & 1 & 3 & 0 & 2 & 1 & 1 & 1 & 0 & 0 & 1 & 2 & 0 & 0 & 0 & 28 \\
\bottomrule
\end{tabular}%
}
\caption{An example phoneme confusion matrix.}
\label{table:examplecm}
\end{figure}
First we sum all ten matrices into one matrix to represent all the confusions for each speaker. Our clustering begins with this single speaker-specific confusion matrix:
\begin{equation}
[K_{m}]_{ij} = N (\phat{p}_j | p_i)\quad
\label{eq3}
\end{equation}
where the $ij^{th}$ element is the count of the number of times phoneme $i$ is classified as phoneme $j$. This algorithm works with the column normalized version,
\begin{equation}
[P_m]_{ij} = Pr\{p_i | \phat{p}_j \} \quad
\label{eq4}
\end{equation}
the probability that, given a classification of $\phat{p}_j$, the phoneme really was $p_i$. Merging of phonemes is done by finding the two most confused phonemes and hence creating new matrices $K_{m-1}$ and $P_{m-1}$.
Specifically, for each possible merged pair a score, $q$, is calculated as:
\begin{equation}
q = [P_{m}]_{rs} + [P_{m}]_{sr} = Pr\{p_r | \phat{p}_s \} + Pr\{p_s | \phat{p}_r \}
\label{eq5}
\end{equation}
Vowels and consonants cannot be mixed: the significant negative effect of mixing vowel and consonant phonemes within visemes was demonstrated in~\cite{bear2017phoneme}, so each phoneme is assigned to one of two classes, $V$ or $C$, for~vowels and consonants respectively, and~only pairs within the same class may merge. The~pair with the highest $q$ is merged; equal scores are broken randomly. This process is repeated until $m=2$; we stop at two because at this point we have two single classes, one containing all vowel phonemes and the other all consonant phonemes. Each~intermediate step, $m = 45, 44, 43, \ldots, 2$, forms another set of prospective visual units.
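The clustering loop can be sketched end-to-end. The phoneme labels and confusion counts below are toy values chosen for illustration; the merge logic (column normalization as in Equation~(\ref{eq4}), the mutual-confusion score $q$ of Equation~(\ref{eq5}), and the vowel/consonant constraint) follows the description above.

```python
import numpy as np

# Toy example: 2 vowels and 3 consonants with invented confusion counts.
labels = ["ae", "iy", "b", "p", "m"]
is_vowel = {"ae": True, "iy": True, "b": False, "p": False, "m": False}

K = np.array([[30,  5,  0,  0,  0],   # K[i, j]: phoneme i classified as j
              [ 6, 25,  0,  0,  0],
              [ 0,  0, 20,  9,  2],
              [ 0,  0,  8, 15,  1],
              [ 0,  0,  1,  2, 18]], dtype=float)
classes = [[l] for l in labels]

def best_pair(K, classes):
    # [P]_{ij} = Pr{p_i | p-hat_j}: normalise each column by its sum
    # (assumes every class is predicted at least once, as unclassified
    # phonemes are omitted upstream).
    P = K / K.sum(axis=0, keepdims=True)
    best, best_q = None, -1.0
    for r in range(len(classes)):
        for s in range(r + 1, len(classes)):
            if is_vowel[classes[r][0]] != is_vowel[classes[s][0]]:
                continue                 # never merge a vowel with a consonant
            q = P[r, s] + P[s, r]        # mutual confusion score
            if q > best_q:
                best, best_q = (r, s), q
    return best

def merge(K, classes, r, s):
    K = K.copy()
    K[r, :] += K[s, :]                   # pool the counts of the pair
    K[:, r] += K[:, s]
    K = np.delete(np.delete(K, s, axis=0), s, axis=1)
    classes = [c[:] for c in classes]
    classes[r] = classes[r] + classes[s]
    del classes[s]
    return K, classes

# Repeatedly merge the most-confused same-type pair until m = 2.
while len(classes) > 2:
    r, s = best_pair(K, classes)
    K, classes = merge(K, classes, r, s)

assert sorted(classes[0]) == ["ae", "iy"]     # all vowels in one class
assert sorted(classes[1]) == ["b", "m", "p"]  # all consonants in the other
```

Each intermediate value of the loop (every state of `classes` between $m$ and $2$) corresponds to one prospective P2V map.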
An example P2V mapping is shown in Table~\ref{tab:example} for RMAV speaker number one with ten visual~units.
\begin{table}[H]
\centering
\caption{An example P2V map, (for RMAV Speaker 1 with ten visual units).}
\begin{tabular}{ll}
\toprule
\textbf{Visual Unit} & \textbf{Phonemes} \\
\midrule
$/v01/$ & /ax/ \\
$/v02/$ & /v/ \\
$/v03/$ & /\textopeno\textsci/ \\
$/v04/$ & /f/ /\textipa{Z}/ /w/ \\
$/v05/$ & /k/ /b/ /d/ /\textipa{T}/ /p/ \\
$/v06/$ & /l/ /d\textipa{Z}/ \\
$/v07/$ & /g/ /m/ /z/ /y/ /t\textipa{S}/ /\textipa{D}/ /s/ /r/ /t/ /\textipa{S}/ \\
$/v08/$ & /n/ /hh/ /\textipa{N}/ \\
$/v09/$ & /\textipa{E}/ /ae/ /\textopeno/ /uw/ /\textturnscripta/ /\textsci\textschwa/ /ey/ /ua/ /\textrevepsilon/ \\
$/v10/$ & /ay/ /\textscripta/ /\textturnv/ /\textscripta\textupsilon/ /\textupsilon/ /\textschwa\textupsilon/ /\textsci/ /iy/ /\textschwa/ /eh/ \\
\bottomrule
\end{tabular}
\label{tab:example}
\end{table}
\unskip
\subsection{Step Three: Visual Unit~Classification}
\label{sec:visRecog}
Step three is similar to step one. We again complete $10$-fold cross-validation with replacement~\cite{efron1983leisurely} over the $200$ sentences for each speaker using the same folds as the prior steps to prevent mixing the training and test data. Again, $20$ test samples are randomly selected to be omitted from the training folds. Again, with the HTK toolkit, we build new sets of HMM classifiers. This time however, our~classifiers are labeled with the visual units we have just created in step~two.
A Python script translates the phoneme transcripts produced by \texttt{HLEd} in step one, using the P2V maps from step two, into~visual unit transcripts, one for each P2V map. For~each set of visual units, visual unit HMMs are flat-started (\texttt{HCompV}) with the same speaker-specific HMM prototypes as before (Gaussian mixtures are uniform across prototypes) and re-estimated $11$ times with \texttt{HERest}. A~bigram word lattice supports classification, with a grammar scale factor of $1.0$ (shown to be optimal in~\cite{howell2013confusion}) and a transition penalty of $0.5$.
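The transcript translation step can be sketched as follows. The map fragment reuses $/v04/$ and $/v05/$ from Table~\ref{tab:example} (with ASCII stand-ins for IPA symbols), and the function name is illustrative rather than the actual script:

```python
# Fragment of a P2V map in the spirit of Table "example": ASCII names
# ("zh", "th") stand in for IPA symbols. Illustrative, not the real script.
p2v = {
    "f": "v04", "zh": "v04", "w": "v04",
    "k": "v05", "b": "v05", "d": "v05", "th": "v05", "p": "v05",
}

def translate(phoneme_transcript, p2v):
    """Rewrite a phoneme-labelled transcript with visual unit labels,
    passing through non-phoneme tokens (e.g. the short-pause model sp)."""
    return [p2v.get(p, p) for p in phoneme_transcript]

assert translate(["b", "sp", "f", "k"], p2v) == ["v05", "sp", "v04", "v05"]
```

The same routine, applied with each of the up-to-$44$ P2V maps, yields one visual unit transcript per map for labeling the new classifiers.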
The important difference this time is that the visual unit classes are now used as classifier labels. By~using these sets of classes which have been shown in step one to be visually confusing on the lips, we now perform classification for each class set. In~total this is at most $44$ sets, where the smallest set is of two classes (one with all the vowel phonemes and the other all the consonant phonemes), and~the largest set is of $45$ classes with one phoneme in each---thus the largest set for each speaker is a repeat of the phoneme classification task but using only phonemes which were originally recognized (either correctly or incorrectly) in step~one.
\section{Optimal Visual Unit Set~Sizes}
\label{sec:search}
Figure~\ref{fig:correctness} plots word correctness on the $y$-axis for all $12$ speakers with error bars showing $\pm$ one standard error (se). The~$x$-axis shows the number of visual units. In~green we plot mean weighted guessing over all speakers for each viseme set. Individual speaker variations are in Appendix~\ref{sec:app}, Figures~\ref{fig:sp01}--\ref{fig:sp11}.
It is important in this case to weight the chance of guessing by visual homophenes, as these vary with the size of the visual unit set. Visual unit sets which contain fewer visual units produce sequences of visual units which represent more than one word. These are homophenes. The~effect of homophenes can be seen on the left side of Figure~\ref{fig:correctness} and the graphs in Appendix~\ref{sec:app}: with visual unit sets of fewer than $11$ visual units, homophenes become noticeable and the language model can no longer correct these~confusions.
An example of a homophene in the RMAV data is the pair of words `tonnes' and `since'. If~one uses Speaker 1's 10-visual-unit P2V map, both words transcribe into visual units as `$/v7/$ $/v10/$ $/v8/$ $/v7/$'. In~practice a language model, or~word lattice, will tend to reduce such confusions since the lattice models the probability of word $N$-grams, which means that probable combinations such as ``metric tonnes'' will be favored over ``metric since'' \cite{thangthai2017comparing}.
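Homophene groups can be found mechanically by bucketing dictionary words on their visual-unit transcriptions; this is how the count that drives the weighted-guessing baseline grows as the unit set shrinks. The entries below are a toy fragment following Table~\ref{tab:homophones}:

```python
from collections import defaultdict

# Toy visual-unit dictionary following Table "homophones".
unit_dict = {
    "TALK":   ("C", "V1", "H"),
    "TONGUE": ("C", "V1", "H"),
    "DOG":    ("C", "V1", "H"),
    "CARE":   ("H", "V3", "A"),
    "WHERE":  ("H", "V3", "A"),
    "WEAR":   ("H", "V3", "A"),
}

# Group words by their visual-unit transcription.
groups = defaultdict(list)
for word, units in unit_dict.items():
    groups[units].append(word)

# Any group with more than one word is a set of homophenes.
homophene_groups = {u: ws for u, ws in groups.items() if len(ws) > 1}
assert sorted(homophene_groups[("C", "V1", "H")]) == ["DOG", "TALK", "TONGUE"]
assert len(homophene_groups) == 2
```

With fewer visual units, more transcriptions collide and these groups grow, which is why chance guessing must be weighted by the unit set size.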
We see all our word correctness scores are significantly above guessing, albeit still low. There~is variation between speakers, but~there is a clear overall trend: superior performance is found with larger numbers of visual units. An~important point is that some authors report viseme accuracy instead of word correctness~\cite{bearTaylor}. This is unhelpful as it masks the effect of homophenous words on performance. Had we reported viseme accuracy, the positive effect of larger visual unit sets would not be~visible.
In Figure~\ref{fig:correctness} we highlight in red the class sets which, for~any speaker, have shown a significant classification improvement (with non-overlapping error bars) over the adjacent set of units to its right along the $x$-axis. Error bars overlap once the correctness is averaged, so Table~\ref{tab:merges} lists these combinations for each speaker. These red points show where we can identify the pairs of classes which, when merged into one class, significantly improve classification. If~we refer to speaker demographic factors such as gender or age, we find no apparent pattern through these visual unit combinations. So, we have further evidence to reinforce the idea that all speakers have a unique visual speech signal~\cite{607030}. In~\cite{bear2015speaker} this is suggested to be due to how the trajectory between visual units varies by speaker, owing to such things as rate of speech~\cite{taylor2014effect}. This illustrates how difficult finding a set of cross-speaker visual units can be when phonemes need alternative groupings for each individual~\cite{bear2017visual}.
\begin{figure}[H]
\centering
\includegraphics[width=0.95\linewidth]{all_corr2.pdf}
\caption{All-speaker mean word classification correctness $C\pm1se$.}
\label{fig:correctness}
\end{figure}
\begin{table}[H]
\centering
\tablesize{\footnotesize}
\caption{Visual unit class merges which improve word classification correctness; $V_n=V_i+V_j$.}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{ l l l l l l }
\toprule
\textbf{Speaker} & \textbf{Set No} & \boldmath{$V_i$} & \boldmath{$V_j$} & \textbf{Set No} & \boldmath{$V_n$} \\
\midrule
Sp01 & 35 & /s/ /r/ & /\textipa{D}/ & 34 & /s/ /r/ /\textipa{D}/ \\
Sp02 & 22 & /d/ & /z/ /y/ & 21 & /d/ /z/ /y/ \\
Sp03 & 34 & /b/ /t\textipa{S}/ & /\textipa{Z}/ & 33 & /b/ /t\textipa{S}/ /\textipa{Z}/ \\
Sp03 & 31 & /\textipa{Z}/ /b/ /t\textipa{S}/ & /z/ & 30 & /\textipa{Z}/ /b/ /t\textipa{S}/ /z/ \\
Sp03 & 25 & /p/ /r/ & /\textipa{N}/ & 24 & /p/ /r/ /\textipa{N}/ \\
Sp05 & 17 & /ae/ & /eh/ & 16 & /ae/ /eh/ \\
Sp06 & 35 & /ae/ /\textturnv/ & /iy/ & 34 & /ae/ /\textturnv/ /iy/ \\
Sp09 & 12 & /b/ /w/ /v/ & /d\textipa{Z}/ /hh/ & 11 & /b/ /w/ /v/ /d\textipa{Z}/ /hh/ \\
Sp12 & 36 & /\textturnv/ & /\textopeno/ & 35 & /\textturnv/ /\textopeno/ \\
\bottomrule
\end{tabular} }%
\label{tab:merges}
\end{table}
\unskip
\section{Discussion}
\label{sec:discussion}
In Figure~\ref{fig:correctness} we have plotted mean word correctness, $C$, over~all $12$ speakers, with weighted guessing ($1/\text{number of units}$) in green. Here we see that, within~one standard error, there is a monotonic trend. Small numbers of units perform worse than phonemes, which supports the claim that phonemes are preferable to visemes; but~it would be an oversimplification to assert that higher-accuracy lipreading can be achieved with phonemes, as this has not been shown in our results with significance. Rather we say that, generally, visual unit sets with higher numbers of visual unit classes outperform the smaller sets. In~\cite{bear2017phoneme} the authors reviewed $120$ previous phoneme-to-viseme (P2V) maps; typically these consist of between $10$ and $35$ visual units~\cite{bear2014phoneme}. For~example, the Lee set consists of six consonant visemes and five vowel visemes~\cite{lee2002audio}, and~Jeffers~\cite{jeffers1971speechreading} groups phonemes into eight vowel and three consonant~visemes.
In Figures~\ref{fig:sp01}--\ref{fig:sp11} and Figure~\ref{fig:correctness} we observe a rapid decrease in lipreading word correctness for viseme sets containing fewer than ten visemes. More positively, the~region of viseme set sizes between $11$ and $20$ contains the optimum viseme set for three of the $12$ speakers, which is more than random chance would predict. This means, for~each speaker, we have found and presented an optimal number of visual units (shown by the best performing results in Figures~\ref{fig:sp01}--\ref{fig:sp11}), but~the optimal number is not related to any of the conventional viseme definitions, nor is it consistent across speakers. Table~\ref{tab:pr_vals} shows the word correctness, $C_w$, of~each speaker's phoneme~classification.
\begin{table}[H]
\centering
\caption{Phoneme correctness $C$ for each speaker (right-hand data points of Figures~\ref{fig:sp01}--\ref{fig:sp11}).}
\begin{tabular}{lrrrrrrrrrrrr}
\toprule
Speaker & 1 & 2 & 3 & 4 & 5 & 6 \\
Phoneme $C$ & 0.05 & 0.06 & 0.06 & 0.05 & 0.06 & 0.06 \\
\midrule
Speaker & 7 & 8 & 9 &10 & 11 & 12 \\
Phoneme $C$ & 0.06 & 0.06 & 0.06 & 0.07 & 0.06 & 0.06 \\
\bottomrule
\end{tabular}
\label{tab:pr_vals}
\end{table}
\unskip
\section{Hierarchical Training for Weak-Learned Visual~Units}
\label{chap:support network}
Figure~\ref{fig:correctness} showed our first results derived using an adapted version of the algorithm described in~\cite{bear2014phoneme}.
Table~\ref{tab:merges} also shows us, for~each of our $12$ speakers, the significantly improving visual unit sets. These sets are those where one single change of visual unit grouping has resulted in a significant (greater than one standard error over ten folds) increase in word correctness. This tells us that there are some units between the traditional visemes (for example~\cite{fisher1968confusions, lip_reading18, disney}) and~phonemes which are better for visual speech~classification.
Table~\ref{tab:merges} (\cite{bear2015findingphonemes}) shows us several significantly improving sets. Our suggestion for why these are interesting is the tradeoff of homophenes against accuracy. It is possible that these are the groupings where accuracy improves significantly despite the extra homophenes created as the number of visual units in the set decreases: either the increase in homophenes is negligible or the~number of training samples for two visually indistinguishable classes significantly increases when~combined.
We propose a novel idea: to implement hierarchical classifier training using both visual units and phonemes in sequence. Some work in acoustic speech recognition has used this layered approach to model building with success, e.g.,~\cite{morgan2012deep}. It is our intention to use our new range of visemes to test whether our new training algorithm can improve phoneme classification without the need for more training data, as this approach shares training data across models. This premise avoids the negative effects of introducing more homophenes because the second layer of training discriminates between the sub-units\textls[-15]{ within the first layer. This will assist the identification of the more subtle but important differences in visual gestures representing alternative phonemes. We note from~\cite{bear2014some} that using the wrong clusters of phonemes is worse than using none, and~also that this new approach aims to optimize performance within the scope of the datasets and system effects described previously in Sections~\ref{sec:search} and~\ref{sec:discussion}.}
A bonus of our revised classification scheme is that, because we weakly train the classifier before phoneme training, we remove any need for post-processing methods (e.g., weighted finite state transducers~\cite{howell2013confusion}) to reverse the P2V mapping in order to decode the real phoneme~recognized.
In Figure~\ref{fig:correctness}, the~performance of classifiers with small numbers of visual units (fewer than $10$) is poor. As~described previously, we attribute this to the large number of homophenes. At~the other end of the figure, sets containing large numbers of visual units (greater than $35$) do not significantly, or~even noticeably, improve the correctness; this is where many phonetic variations are visually indistinguishable on the lips. Also taking into account the set numbers printed in black (which are the significantly improving visual unit sets), we focus on sets of visual units in the size range $11$ to $35$, with the same $12$ RMAV speakers, for our experiments using hierarchical training of phoneme~classifiers.
\begin{figure}[H]
\centering
\includegraphics[width=0.9\textwidth]{journal3Fig6.pdf}
\caption{\textls[-25]{\textbf{Top}: a high-level lipreading system, and~\textbf{Bottom}: where conversions between words, phonemes,} and~visual units can occur in lipreading systems in three different~flows.}
\label{fig:unitphases}
\end{figure}
Here, we use our knowledge of visual speech to drive our novel redesign of the conventional training method; Figure~\ref{fig:unitphases} shows how we move the unit conversion earlier in the process. The~top of Figure~\ref{fig:unitphases}, in black boxes, shows the steps of a lipreading system, divided into phases where the units change from words, to~phonemes, to~visual units (where used). Flow 1 shows how we translate the word ground truth into phonemes using a pronunciation dictionary (e.g.~\cite{beep} or~\cite{cmudict}) for labeling the classifiers, before~decoding with a word language model. Flow 2, below this, uses visual units: we translate from visual-unit-trained classifiers back into words using the word network. Finally, flow 3 shows our new approach, in which we introduce an extra step into the training phase: classifiers are initialized as visual units, then retrained into phoneme classifiers before word decoding. We describe this new process in detail~now.
\section{Classifier Adaptation~Training}
The basis of our new training algorithm is a hierarchical structure with the first level based on visual units and~the second level based on phonemes. In~Figure~\ref{fig:wlt_process} we present an illustration based on a simple example using five phonemes (in reality there are up to $45$ in the RMAV sentences) mapped to two visual units (in reality there will be between $11$ and $35$, as we have refined our experiment to use only sets of visual units in the optimal size range from the preliminary test results). Each phoneme is mapped to a visual unit as in~\cite{bear2016decoding}; our example map is in Table~\ref{tab:exampleP2V}. Now, however, we learn intermediate visual-unit-labeled HMMs before we create the phoneme~models.
\begin{figure}[H]
\centering
\includegraphics[width=0.9\textwidth]{v2pHMMprocess2.pdf}
\caption{Hierarchical training strategy for training visual unit HMMs into phoneme-labeled HMM~classifiers.}
\label{fig:wlt_process}
\end{figure}
In this example $/p1/$, $/p2/$ and $/p4/$ are associated with $/v1/$, so are initialized as duplicate copies of HMM $/v1/$. Likewise, phoneme models labeled $/p3/$ and $/p5/$ are initialized as replicas of $/v2/$. We now retrain the phoneme models using the same training~data.
\begin{table}[H]
\centering
\tablesize{\small}
\caption{Our example P2V map to illustrate our novel training~algorithm.}
\begin{tabular}{ll}
\toprule
\textbf{Visual Units} & \textbf{Phonemes} \\
\midrule
/v1/ & /p1/ /p2/ /p4/ \\
/v2/ & /p3/ /p5/ \\
\bottomrule
\end{tabular}
\label{tab:exampleP2V}
\end{table}
In full, for each set of visual units of size $11$ to $35$:
\begin{enumerate}[leftmargin=*,labelsep=4.9mm]
\item \textls[-15]{We initialize \textit{visual unit} HMMs with \texttt{HCompV}; this tool initializes HMMs by defining all models to be equal~\cite{young2006htk}.}
\item With our prototype HMM based upon a Gaussian mixture of five components and three states, we use \texttt{HERest} 11 times over to re-estimate the HMM parameters, and we include short-pause model state tying (between re-estimates three and four, with \texttt{HHEd}). Training samples are from all phonemes in each visual unit cluster. These first two points are steps 1 and 2 in Figure~\ref{fig:wlt_process}.
\item \textls[-15]{Before classification, our visual unit HMM definitions are duplicated to be used as initialized definitions} for phoneme-labeled HMMs (Figure~\ref{fig:wlt_process} step 3). In~our Figure~\ref{fig:wlt_process} illustration, $/v1/$ is duplicated three times (once for each phoneme in its cluster) and $/v2/$ is copied twice. The~respective visual unit HMM definition is used for all the phonemes in its cluster of the P2V map.
\item These phoneme HMMs are retrained with \texttt{HERest} $11$ times over; this time, the training samples are divided by their unique phoneme labels.
\item We create a bigram word lattice with \texttt{HLStats} and \texttt{HBuild}, and as part of the classification we apply a grammar scale factor of $1.0$ and a transition penalty of $0.5$ (based on~\cite{howell2013confusion}) with \texttt{HVite}. In~Section~\ref{sec:ln} we present a test to determine the best language network units for this step.
\item Finally, the~output transcripts from \texttt{HVite} are used in \texttt{HResults} against the phoneme ground truths produced by \texttt{HLed}. This is all implemented using $10$-fold cross-validation with replacement~\cite{efron1983leisurely}.
\end{enumerate}
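The duplication in step 3 above can be sketched as follows. This is an illustrative outline using plain dictionaries in place of HTK HMM definition files, not the authors' implementation; the parameter values are placeholders, and the map is the example from Table~\ref{tab:exampleP2V}.

```python
from copy import deepcopy

# Hypothetical stand-in for trained HMM definitions: in HTK these would be
# the parameter blocks written by HERest; plain dicts suffice to show the idea.
viseme_hmms = {"/v1/": {"means": [0.1, 0.2, 0.3]},
               "/v2/": {"means": [0.4, 0.5, 0.6]}}

# The example P2V map.
p2v_map = {"/v1/": ["/p1/", "/p2/", "/p4/"],
           "/v2/": ["/p3/", "/p5/"]}

def init_phoneme_hmms(viseme_hmms, p2v_map):
    """Step 3: duplicate each visual-unit HMM once per phoneme in its
    cluster, giving every phoneme model a weakly trained starting point
    before phoneme-level re-estimation (step 4)."""
    phoneme_hmms = {}
    for viseme, phonemes in p2v_map.items():
        for phoneme in phonemes:
            # deepcopy so later retraining of one phoneme model does not
            # alter its siblings or the original visual-unit model.
            phoneme_hmms[phoneme] = deepcopy(viseme_hmms[viseme])
    return phoneme_hmms

phoneme_hmms = init_phoneme_hmms(viseme_hmms, p2v_map)
# /p1/, /p2/ and /p4/ start as copies of /v1/; /p3/ and /p5/ as copies of /v2/.
```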
The big advantage of this approach is that the phoneme classifiers have seen mostly positive cases and therefore have good model matching; the disadvantage is that they are limited in their exposure to negative cases, though less so than the visual~units.
\section{Language Network Units}
\label{sec:ln}
Step five in our novel hierarchical training method requires a language network. It has been consistently observed that language models are very powerful in lipreading systems (e.g., in~\cite{6288999}). Language models built upon the ground truth utterances of datasets learn the grammar and structure rules of words and sentences (the latter in the case of continuous speech). However, visual co-articulation effects damage the performance of visual speech language models, as visually, people do not say what the language model expects. These types of network are commonplace; we note that higher-order $N$-gram language models may improve classification rates, but the cost of such models is disproportionate to our goal of developing more accurate classifiers. Therefore, to~decide which unit would best optimize our language model, we test three units: visemes, phonemes, and words, as~bigram models in a second preliminary~test.
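The bigram estimation step can be sketched as follows. This is a minimal maximum-likelihood bigram model (no smoothing), shown only to illustrate what the lattice-building step estimates; the example transcripts are hypothetical, and HTK's \texttt{HLStats}/\texttt{HBuild} handle additional details such as discounting.

```python
from collections import defaultdict

def train_bigram(transcripts):
    """Estimate bigram probabilities P(w2 | w1) from ground-truth
    utterances, with explicit sentence-start and sentence-end tokens."""
    counts = defaultdict(lambda: defaultdict(int))
    for utterance in transcripts:
        tokens = ["<s>"] + utterance.split() + ["</s>"]
        for w1, w2 in zip(tokens, tokens[1:]):
            counts[w1][w2] += 1
    # Normalize counts into conditional probabilities.
    return {w1: {w2: c / sum(nxt.values()) for w2, c in nxt.items()}
            for w1, nxt in counts.items()}

# The same routine works for viseme, phoneme, or word units -- only the
# token inventory changes, which is exactly the choice under test here.
lm = train_bigram(["the cat sat", "the dog sat"])
```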
In the first two columns of Table~\ref{tab:sn_tests2} we list the possible pairs of classifier units and language model units. For~each of these pairs we use the common process previously described for lipreading in HTK, where our phonemes are based on the International Phonetic Alphabet~\cite{international1999handbook} and~our visemes are Bear's speaker-dependent visemes~\cite{bear2017phoneme}. Word labels are from the RMAV dataset. We define \textbf{classifier units} as the labels used to identify individual classification models and \textbf{language units} as the label scheme used for building the decoding network applied after~classification.
\subsection{Language Network Unit~Analysis}
In column four of Table~\ref{tab:sn_tests2} we list the one-standard-error values for these tests; the phoneme units are the most robust.
In Figure~\ref{fig:sn_effects_talker} we have plotted word correctness ($y$-axis) for each speaker (along the $x$-axis) over three subfigures, one per language network unit. The~viseme network is at the top, the phoneme network in the middle, and~the word network at the bottom. The~viseme network gives the lowest performing score ($0.02\pm0.0063$). On~the face of it, the~idea of viseme classifiers is a good one because they take visual co-articulation into account to some extent. However, as~seen here, a~language model of visemes is too ambiguous because of homophenes. This leaves us with a choice of either phoneme or word units for our language model in step five of our new hierarchical training~method.
\begin{figure}[H]
\centering
\begin{tabular}{c}
\includegraphics[width=0.65\textwidth]{viseme_network_units_by_talker} \\
\includegraphics[width=0.65\textwidth]{phoneme_network_units_by_talker} \\
\includegraphics[width=0.65\textwidth]{word_network_units_by_talker}
\end{tabular}
\caption{Effects of support network unit choice with each type of labeled HMM classifier units. Along~the $x$-axis is each speaker, $y$-axis values are correctness, $C$. Viseme network is at the top, phoneme network plotted in the middle, and~word networks at the~bottom.}
\label{fig:sn_effects_talker}
\end{figure}
\begin{table}[H]
\centering
\caption{Unit selection pairs for HMMs and language network combinations, and~the all-speaker mean $C_w$ achieved.}
\begin{tabular}{llrr}
\toprule
\textbf{Classifier units} & \textbf{Network units} & \boldmath{$C_w$} & \textbf{$1$se} \\
\midrule
Viseme & Viseme & $0.02$ & $0.0063$ \\
\midrule
Viseme & Phoneme & $0.19$ & $0.0036$ \\
Phoneme & Phoneme & $0.19$ & $0.0036$ \\
\midrule
Viseme & Word & $0.09$ & $0.0$ \\
Phoneme & Word & $0.20$ & $0.0043$ \\
Word & Word & $0.19$ & $0.0005$\\
\bottomrule
\end{tabular}
\label{tab:sn_tests2}
\end{table}
In Figure~\ref{fig:sn_effects_talker} (middle) we have our phoneme language network performance with both viseme- and phoneme-trained classifiers. This is more exciting because for all speakers we see a statistically significant increase in $C_w$ compared to the viseme network scores in Figure~\ref{fig:sn_effects_talker} (top). Looking more closely between speakers, we see that for four speakers (2, 9, 10, and 12) the~viseme classifiers outperform the phonemes, yet for all other speakers there is no significant difference between the two. On~average the two classifier types are identical, with an all-speaker mean $C_w$ of $0.19\pm0.0036$ for both (Table~\ref{tab:sn_tests2}, column 3).
In Figure~\ref{fig:sn_effects_talker} (bottom) we show $C_w$ for all speakers with a word network paired with classifiers built on viseme, phoneme, and~word units. Our first observation is that word classifiers perform very poorly. We attribute this to a low number of training samples per class, due to the larger number of classes in the word space compared to the phoneme space, so we do not continue our work with word-based classifiers.
Also shown in Figure~\ref{fig:sn_effects_talker} (bottom) are the phoneme and viseme classifiers (in green and red, respectively) with a word network. This time we see that for five of our 12 speakers (3, 5, 7, 8, and~11) the~phoneme classifiers outperform the visemes, and for our remaining speakers there is no significant difference once a word network is~applied.
These results tell us that some speakers are easier to lipread with viseme classifiers and a phoneme network, whereas others are easier to lipread with phoneme classifiers and a word network. Thus, we continue our work using both phoneme and word-based language~networks.
\section{Effects of Training Visual Units for Phoneme~Classifiers}
Here we present the results of our proposed hierarchical training method (described in Section~\ref{sec:newclusteringalg}) with two different language models. Figure~\ref{fig:res_all} shows the mean correctness, $C$, for~all 12 speakers over $10$ folds. We have plotted four symbols, one for each pairing of HMM unit labels and language network units (\{visual units and phonemes, visual units and words, phonemes and phonemes, phonemes and words\}). Random guessing is plotted in~orange.
\begin{table}[H]
\centering
\caption{Minimum and maximum all-speaker mean correctness, $C$, showing the effect of hierarchical training from visual units on phoneme-labeled HMM~classification.}
\begin{tabular}{lrrr}
\toprule
& \textbf{Min} & \textbf{Max} & \textbf{Range} \\
\midrule
Visual units + word net & 0.03 & 0.06 & 0.03 \\
Phonemes + word net & 0.09 & 0.10 & 0.01 \\
Effect of WLT & 0.06 & 0.04 & -- \\
Visual units + phoneme net & 0.20 & 0.22 & 0.02 \\
Phonemes + phoneme net & 0.26 & 0.24 & 0.01 \\
Effect of WLT & 0.02 & 0.02 & -- \\
\bottomrule
\end{tabular}
\label{tab:meanstats}
\end{table}
\unskip
\begin{figure}[H]
\centering
\includegraphics[width=0.9\textwidth]{wlt_agg.eps}
\caption{HTK Correctness $C$ for visual unit classifiers with either phoneme or word language models and phoneme classifiers with either phoneme or word language models averaged over all 12 speakers. The~correctness unit matches the paired network~unit. }
\label{fig:res_all}
\end{figure}
The $x$-axis of Figure~\ref{fig:res_all} is the size of the optimal visual unit sets from Figure~\ref{fig:correctness}, from~$11$ to $35$: the range of optimal numbers of visual units identified in our preliminary test. The~baseline of visual unit classification with a word network from~\cite{bear2015findingphonemes} is shown in blue and is not significantly different from conventionally learned phoneme \textls[-15]{classifiers. Based on our language network study in Section~\ref{sec:ln}, it is no surprise that simply by using a phoneme network instead of a word network to support visual unit classification we significantly improve our mean correctness score for all visual unit set sizes for all speakers (shown in pink). We have plotted weighted guessing in~orange.}
More interesting is that our new weakly trained phoneme HMMs are significantly better than the visual unit HMMs. In~the first part of our work, phoneme HMMs gave an all-speaker mean $C = 0.059$, which was not significantly different from the best visual units. Here, regardless of the size of the original visual unit set, $C$ is almost double. Weakly learned phoneme classifiers with a word network gain $0.031$ to $0.040$ in mean $C$, and~when these phoneme classifiers are supported with a phoneme network we see a correctness gain ranging from $0.17$ to $0.18$. These gains are supported by the all-speaker mean minimums and maximums listed in Table~\ref{tab:meanstats}. These gain scores span all the potential P2V mappings and show that there is little difference in which P2V map is used to select the set of visual units that initialize our phoneme classifiers. All results are significantly better than~guessing.
In Figures~\ref{fig:rmav1wlt}--\ref{fig:rmav7wlt} we have plotted, for each of our $12$ speakers, non-aggregated results showing $C \pm$ one standard error. While not monotonic, these graphs are much smoother than the speaker-dependent graphs shown in Appendix A. The~significant differences between visual unit set sizes (in Figure~\ref{fig:correctness}) have now disappeared because the learning of differences between visual units has been incorporated into the training of the phoneme classifiers, which in turn are now better trained (plotted in red and green, which improve on blue and pink, respectively).
\begin{figure}[H]
\centering
\includegraphics[width=0.68\textwidth]{wlt_sp01.eps} \\
\includegraphics[width=0.68\textwidth]{wlt_sp02.eps} \\
\includegraphics[width=0.68\textwidth]{wlt_sp03.eps}
\caption{\textls[-15]{Speaker 1 (\textbf{top}), Speaker 2 (\textbf{middle}), and~Speaker 3 (\textbf{bottom}) correctness with a word language model (blue) and the hierarchically trained phoneme classifiers with a phoneme or word~network.}}
\label{fig:rmav1wlt}
\end{figure}
\unskip
\begin{figure}[H]
\centering
\includegraphics[width=0.68\textwidth]{wlt_sp09.eps} \\
\includegraphics[width=0.68\textwidth]{wlt_sp10.eps} \\
\includegraphics[width=0.68\textwidth]{wlt_sp11.eps}
\caption{\textls[-15]{Speaker 4 (\textbf{top}), Speaker 5 (\textbf{middle}), and~Speaker 6 (\textbf{bottom}) correctness with a word language model (blue) and the hierarchically trained phoneme classifiers with a phoneme or word~network.}}
\label{fig:rmav3wlt}
\end{figure}
\unskip
\begin{figure}[H]
\centering
\includegraphics[width=0.68\textwidth]{wlt_sp15.eps} \\
\includegraphics[width=0.68\textwidth]{wlt_sp05.eps} \\
\includegraphics[width=0.68\textwidth]{wlt_sp06.eps}
\caption{\textls[-15]{Speaker 7 (\textbf{top}), Speaker 8 (\textbf{middle}), and~Speaker 9 (\textbf{bottom}) correctness with a word language model (blue) and the hierarchically trained phoneme classifiers with a phoneme or word~network.}}
\label{fig:rmav5wlt}
\end{figure}
\unskip
\begin{figure}[H]
\centering
\includegraphics[width=0.68\textwidth]{wlt_sp08.eps} \\
\includegraphics[width=0.68\textwidth]{wlt_sp13.eps} \\
\includegraphics[width=0.68\textwidth]{wlt_sp14.eps}
\caption{\textls[-15]{Speaker 10 (\textbf{top}), Speaker 11 (\textbf{middle}), and~Speaker 12 (\textbf{bottom}) correctness with a word language model (blue) and the hierarchically trained phoneme classifiers with a phoneme or word~network.}}
\label{fig:rmav7wlt}
\end{figure}
An intriguing observation comes from comparing the use of a phoneme network for visual units and for weakly taught phonemes. For~some speakers, the~weakly learned phonemes are not as important as having the right network unit. This is seen in Figure~\ref{fig:rmav1wlt} (top and bottom),~Figure \ref{fig:rmav3wlt} (middle),~Figure \ref{fig:rmav5wlt} (middle), and Figure \ref{fig:rmav7wlt} (bottom) for Speakers 1, 3, 5, 8, and~12. By~rewatching the original videos to estimate the age of our speakers, we categorize them by eye as either `older' or `younger' speakers, because the exact ages were not captured during filming. The~speakers with a less significant difference in the effect of hierarchical training from visual to audio units are younger. This implies that to lipread a younger person we need more support from the language model than~for an older speaker. We suggest this could be because young people show more co-articulation than older people, but~this requires further~investigation.
\section{Conclusions}
We have described a method that allows us to construct any number of visual units. The~presence of an optimum is a result of two competing effects on a lipreading system. In~the first, as~the number of visual units shrinks, the number of homophenes rises and it becomes more difficult to recognize words (correctness drops). In~the second, as~the number of visual units rises, we run out of training data to learn the subtle differences in lip-shapes (if they exist), so again, correctness drops. Thus, the~optimum number of visual units lies between one and $45$. In~practice we see that this optimum lies between eight (the size of one of the smaller visual unit sets) and the number of phonemes.
The choice of visual units in lipreading has caused some debate. Some workers use visemes, for~example Fisher~\cite{fisher1968confusions}, in which visemes are a theoretical construct representing phonemes that should look identical on the lips~\cite{hazen2006visual}. Others, e.g.,~\cite{howell2013confusion}, have noted that lipreading using phonemes can give superior performance to visemes.
Here, we supply further evidence for the more nuanced hypothesis first presented in~\cite{bear2015findingphonemes}: that there are intermediate units, which for convenience we call visual units, that can provide superior performance provided they are derived by an analysis of the data. A~good number of visual units in a set is higher than previously~thought.
We have also presented a novel learning algorithm which shows improved performance for these new data-driven visual units by using them as an intermediate step in training phoneme classifiers. The~essence of our method is to retrain the visual unit models in a fashion similar to hierarchical training. This two-pass approach on the same training data has improved the training of phoneme-labeled classifiers and increased the classification~performance.
We have also investigated the relationship between classifier unit choice and the unit choice for the supporting language network. We have shown that one can choose either phonemes or words without significantly different accuracy, but~recommend a word net as this reduces the effect of homophene error and enables unbiased comparison of classifier~performance.
In future work we would seek to test whether this hierarchical training method achieves the same benefit with other classification techniques, for~example RBMs. This is inspired by the work in~\cite{mousas2017real,nam2012learning} and other recent hybrid HMM studies such as~\cite{thangthai2018computer}.
\vspace{6pt}
\authorcontributions{H.B. and R.H. conceived and designed the experiments; H.B. performed the experiments; H.B. and R.H. analyzed the data; H.B. and R.H. wrote the paper.}
\funding{This research was funded by EPSRC grant number 1161995. The APC was funded by Queen Mary University of London.}
\acknowledgments{We would like to thank our colleagues at the University of Surrey who were collaborators in building the RMAV dataset.}
\conflictsofinterest{The authors declare no conflict of interest.}
\section{Introduction}
The concept of phonemes is well developed in speech recognition and derives from a definition in phonetics as ``the smallest sound one can articulate'' \cite{international1999handbook}. Phonemes are analogous to atoms---they are the building blocks of speech. While they are an approximation, in~practice that approximation has been remarkably robust~\cite{hinton2012deep}. Not only are phonemes used by linguists and audiologists to describe speech, they are widely used in large-vocabulary speech recognition as the acoustic classes, or~`units', to~be recognized~\cite{hinton2012deep,wollmer2010recognition, triefenbach2010phoneme}. Sequences of unit estimates can be strung together to infer words and~sentences.
Comprehending visual speech, or~lipreading, is much less well developed~\cite{bear2016decoding}. The~units considered to be equivalent to phonemes are called visemes~\cite{cappelletta2011viseme} but, even in English, there is no clear agreement on the visemes~\cite{goldschen1996rationale}, and~in~\cite{bear2017phoneme} for example, it is noted that there are at least $120$ proposed viseme sets. This large number arises because some authors take vowels~\cite{montgomery1983physical}, and~others consonants~\cite{walden1977effects}, but~also because, of~the proposed sets, some are derived from linguistic principles~\cite{neti2000audio,woodward1960phoneme}, some are the results of human lipreading experiments~\cite{fisher1968confusions, finn1988automatic}, others are data-derived~\cite{bear2017phoneme, lee2002audio}, and~others still are hybrids of these approaches~\cite{bozkurt2007comparison}.
Despite the challenges, a~number of lipreading systems have been built using visemes (\cite{shaikh2010lip,bear2014resolution} for example). When building a viseme recognizer a complication is that multiple phonemes will map onto a single viseme~\cite{bear2017phoneme}. A~common example is the $/p/$, $/b/$, and~$/m/$ bilabial sounds which are often grouped into one viseme~\cite{binnie1976visual, disney, lip_reading18}. Attempts to draw mappings between the phonemes and visemes have been tested~\cite{bear2017phoneme,bear2014some} but to date these mappings have not yet proven to improve machine lipreading~significantly.
On the other hand, there is an emerging body of work~\cite{thangthai2017comparing, howell2013confusion} that, despite the caveats above, is demonstrating that phoneme lipreading systems can outperform viseme recognizers. In~essence it is a tradeoff: does one use viseme units, which are tuned to the shape of the lips but suffer from inaccuracies caused by visual confusions between words that sound different but look identical~\cite{thangthai2017comparing}; or does one stick to phonetic units, knowing that many of the phonemes are difficult to distinguish on the lips?
These visual confusions are called homophenes~\cite{10.1080/00221309}.
We demonstrate the difficulty of homophenous words with~some examples in Table~\ref{tab:homophones} from~\cite{thangthai2017comparing}. In~this example, Jeffers visemes~\cite{jeffers1971speechreading} have been used to translate the phonemes into viseme~strings.
\begin{table}[H]
\centering
\caption{Example of phoneme and viseme dictionary with its corresponding IPA symbols~\cite{thangthai2017comparing}.}
\begin{tabular}{lcc}
\toprule
\textbf{Word Entry} & \textbf{Phoneme Dictionary} & \textbf{Viseme Dictionary} \\
\midrule
TALK & /t/ /\textopeno/ /k/ & /C/ /V1/ /H/ \\
TONGUE & /t/ /\textturnv/ /\textipa{N}/ & /C/ /V1/ /H/ \\
DOG & /d/ /\textopeno/ /g/ & /C/ /V1/ /H/ \\
DUG & /d/ /\textturnv/ /g/ & /C/ /V1/ /H/ \\
\cmidrule{1-3}
CARE & /k/ /e/ /r/ & /H/ /V3/ /A/ \\
WELL & /w/ /e/ /l/ & /H/ /V3/ /A/ \\
WHERE & /w/ /e/ /r/ & /H/ /V3/ /A/ \\
WEAR & /w/ /e/ /r/ & /H/ /V3/ /A/ \\
WHILE & /w/ /ai/ /l/ & /H/ /V3/ /A/ \\
\bottomrule
\end{tabular}
\label{tab:homophones}
\end{table}
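The collapse from distinct words to identical viseme strings in Table~\ref{tab:homophones} can be reproduced in a few lines. This sketch is for illustration only: it reuses the word-to-viseme rows of the table (phoneme transcriptions elided) and simply groups words whose viseme transcriptions coincide.

```python
# Jeffers-style viseme dictionary, reproducing the word-to-viseme rows
# of the homophene example table.
viseme_dict = {
    "TALK":   ("C", "V1", "H"),
    "TONGUE": ("C", "V1", "H"),
    "DOG":    ("C", "V1", "H"),
    "DUG":    ("C", "V1", "H"),
    "CARE":   ("H", "V3", "A"),
    "WELL":   ("H", "V3", "A"),
    "WHERE":  ("H", "V3", "A"),
    "WEAR":   ("H", "V3", "A"),
    "WHILE":  ("H", "V3", "A"),
}

def homophene_groups(viseme_dict):
    """Group words whose viseme transcriptions are identical: each group
    of two or more words is a set of homophenes the recognizer cannot
    separate from the visual channel alone."""
    groups = {}
    for word, visemes in viseme_dict.items():
        groups.setdefault(visemes, []).append(word)
    return [sorted(words) for words in groups.values() if len(words) > 1]

groups = homophene_groups(viseme_dict)
# Nine distinct words collapse into just two visually distinct classes.
```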
However, as~we shall show in this paper, it need not be an either/or approach to phonemes or visemes; we develop a novel method that allows us to vary the number of classes/visual units.
This~means we can tune the visual units as an intermediary state between the visual and audio spaces and we can also optimize against the competing trends of homopheneiosity~\cite{bear2017visual,bear2018comparing} and accuracy~\cite{htk34}. Thus, in this work, we use the term visemes for the traditional visemes, and~the term visual units for our new intermediary units which we propose will improve phoneme~classifiers.
We are motivated in our work because lipreading is a difficult challenge from speech signals. Speech signals are bimodal (that is, they have two channels of information, audio and visual) and significant prior work uses both. For~example,~\cite{ngiam2011multimodal} uses audio-visual speech recognition to demonstrate cross-modality learning. However, our case, lipreading, which is useful for understanding speech when the audio is too noisy to recognize easily, classifies speech from only the visual information channel of the speech signal; thus, as~we shall present, we use a novel training method which uses new visual units and phonemes in a complementary~fashion.
This paper is an extended version of our prior work~\cite{bear2015findingphonemes,bear2016decoding}. This work is relevant to all classifiers, since the choice of visual unit matters and is made before the classifier is trained. In~other words, the~choice of visual units must be made early in the design process, and a non-optimal choice can be very expensive in terms of~performance.
The rest of this paper is structured as follows: we summarize prior viseme research for lipreading by both humans and machines, and~describe the state-of-the-art approaches for lipreading systems in a background section. Then we present an experiment in which we demonstrate how to find the optimal number of visual units within a set; this is an essential preliminary test to define the scope of the second task. We present the data for all experiments within this section. The~preliminary test includes phoneme classification and clustering for new visual unit generation before analyzing the results to find the optimal visual unit~sets.
These optimal visual unit sets are used to test our novel method for training phoneme-labeled classifiers by using these sets as an initialization stage in the training phase of a conventional lipreading system. As~part of this second task, we also present a side task of deducing the right units for lipreading language models used in the lipreading system. Finally, we present the results of the new training method and draw conclusions before suggesting future work. Thus, we have three main contributions:
\begin{itemize}[leftmargin=2.3em,labelsep=6mm]
\item a method for finding optimal visual units,
\item a review of language model units for lipreading systems,
\item a new training paradigm for lipreading systems.
\end{itemize}
\section{Background}
Table~\ref{tab:Confusion_Factors} summarizes the most common viseme sets in the literature used for both human and machine lipreading.
The range of set sizes is from four (Woodward~\cite{woodward1960phoneme}) to 21 (Nichie~\cite{lip_reading18}). Note that not all viseme sets represent the same number of phonemes. Furthermore, some of these use American English and others British English, so there are minor variations in the phoneme sets (American English phonemes tend to use diacritics~\cite{labov2005atlas}).
\begin{table}[H]
\centering
\caption{Ratio of visemes to phonemes in previous viseme sets from literature.}
\begin{tabular}{ l r l r }
\toprule
\textbf{Set} & \textbf{V:P} & \textbf{Set} & \textbf{V:P} \\
\midrule
Woodward~\cite{woodward1960phoneme} & 4:24 & Fisher~\cite{fisher1968confusions} & 5:21 \\
Lee~\cite{lee2002audio} & 9:38 & Jeffers~\cite{jeffers1971speechreading} & 11:42 \\
Neti~\cite{neti2000audio} & 12:43 & Franks~\cite{franks1972confusion}& 5:17 \\
Disney~\cite{disney} & 10:33 & Kricos~\cite{kricos1982differences} & 8:24 \\
Hazen~\cite{Hazen1027972} & 14:39 & Bozkurt~\cite{bozkurt2007comparison} & 15:41 \\
Montgomery~\cite{montgomery1983physical} & 8:19 & Finn~\cite{finn1988automatic}& 10:23 \\
Nichie~\cite{lip_reading18} & 21:48 & Walden~\cite{walden1977effects} & 9:20 \\
\bottomrule
\end{tabular}
\label{tab:Confusion_Factors}
\end{table}
Lipreading systems can be built with a range of architectures. Conventional systems are adopted from acoustic methods, often using Hidden Markov Models, for example as in~\cite{potamianos1998image}. More modern systems exploit deep learning methods~\cite{petridis2017end, stafylakis2017combining}. Deep learning has been deployed in two configurations: (i) as a replacement for the GMM in the Hidden Markov Models (HMM) and (ii) in a configuration known as end-to-end~learning.
However, the~high-level architectures have similarities: first the face of the speaker must be tracked or located; then some form of features are extracted; then a classification model is trained and tested on unseen data, optionally using a language model to improve the classification output (e.g.~\cite{le2017generating}). Throughout this process one must translate between the words spoken (and captured in the training videos), to~their phonetic pronunciation, to~their visual representation on the lips, and~back again for a useful transcript.
\section{Finding a Robust Range of Intermediate Visual~Units}
In our first example we use the RMAV dataset~\cite{improveVis} and the BEEP pronunciation dictionary~\cite{beep}. \textls[-15]{Figure~\ref{fig:process} shows a high-level overview of the first task. We begin with classification using phoneme-labeled} classifiers. The~output of this task is a set of speaker-dependent confusion matrices. The~data in these are used to cluster together single phonemes (monophones) into subgroups of visual units, based~upon~confusions.
In contrast to the approach in~\cite{bear2017phoneme}, we implement an alternative phoneme clustering process (described in detail in Section~\ref{sec:newclusteringalg}). The~key difference between the ad hoc viseme choices compared in~\cite{bear2017phoneme} and our new clustering approach is that we can choose the number of visual units, whereas in prior viseme sets this number is~fixed.
With our new algorithm, we create a new phoneme-to-viseme (P2V) mapping every time a pair of classes is merged into a new class, reducing the number of classes in the set by one each time. In~the phonetic transcripts of our $12$ speakers there is a maximum of $45$ phonemes, so we can create at most $45$ P2V maps for each speaker. The real number of maps we can derive depends upon the number of phonemes classified during step one of Figure~\ref{fig:process}: should a phoneme never appear in the classification output, whether correctly or incorrectly recognized, it is an omission in the confusion matrix from which our visual units are created. Thus, we have \emph{up to} $45$ sets of visual unit labels per speaker with which to label our~classifiers.
\begin{figure}[H]
\centering
\includegraphics[width=0.5\textwidth]{journal3Fig1.pdf}
\caption{Three-step high-level process for visual unit classification where the visual units are derived from phoneme~confusions.}
\label{fig:process}
\end{figure}
There is the option to measure performance using phoneme, viseme, or~word error. Here we choose word error~\cite{bearTaylor}: viseme error varies with the number of visemes, which leads to unfair comparisons, and phoneme error is further from what we believe users care about, namely transcript~error.
\subsection{Data}
The RMAV dataset (formerly known as LiLIR)
consists of $20$ British English speakers (we use the $12$ speakers who had tracked features available; seven male and five female) and up to $200$ utterances per speaker of the Resource Management (RM) sentences, totaling between $1362$ and $1802$ words each. The~sentences selected for the RMAV speakers are a subset of the full RM dataset~\cite{fisher1986darpa} transcripts. They were selected to maintain as much coverage of all phonemes as possible (as shown in Figure~\ref{fig:histogram}) while remaining realistic for English conversation~\cite{improveVis}. The~original videos were recorded in high definition ($1920 \times 1080$), full-frontal, at~$25$ frames per second. Individual speakers are tracked using Linear Predictors~\cite{ong2011robust}, and~Active Appearance Model~\cite{Matthews_Baker_2004} features of concatenated shape and appearance information have been~extracted.
\begin{figure}[H]
\centering
\includegraphics[width=.75\textwidth]{lilir_histo1.eps}
\caption{Occurrence frequency of phonemes in the RMAV~dataset.}
\label{fig:histogram}
\end{figure}
\begin{figure}[H]\ContinuedFloat
\centering
\includegraphics[width=.75\textwidth]{lilir_histo2.eps}
\caption{\textit{Cont.}}
\end{figure}
\unskip
\subsection{Linear Predictor~Tracking}
Linear Predictors (LP) are a person-specific and data-driven facial tracking method. Devised primarily for observing visual changes in the face during speech, these make it possible to cope with facial feature configurations not present in the training data by treating each feature~independently.
A linear predictor is centered on a point whose motion is to be tracked; support pixels around it are used to predict the change in position of that central point over time. The~central point is a landmark on the outline of a facial feature. In~this method both the shape (comprised of landmarks) and the pixel information surrounding the linear predictor position are intrinsically linked. Linear predictors have been successfully used to track objects in motion, for~example in~\cite{matas2006learning}.
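A minimal sketch of the underlying idea follows: a linear mapping is learned from intensity differences at the support pixels to the displacement of the central point. The smooth synthetic intensity function below stands in for image grey-levels; this is an illustration of the principle only, not the implementation of~\cite{ong2011robust}.

```python
import numpy as np

rng = np.random.default_rng(0)

# A smooth synthetic "image": intensity as a continuous function of
# position, standing in for the grey-levels around a facial landmark.
def intensity(pts):
    x, y = pts[:, 0], pts[:, 1]
    return 10 * np.sin(x / 9.0) + 10 * np.cos(y / 11.0)

centre = np.array([32.0, 32.0])
support = centre + rng.uniform(-6, 6, size=(40, 2))   # support pixels
ref = intensity(support)                              # reference intensities

# Training: perturb the landmark by known offsets t, record the
# intensity differences, and solve D @ H ~ T by least squares.
T = rng.uniform(-1, 1, size=(300, 2))                 # known displacements
D = np.stack([intensity(support + t) - ref for t in T])
H, *_ = np.linalg.lstsq(D, T, rcond=None)             # the linear predictor

# Tracking: predict an unknown landmark displacement from the
# intensity differences alone.
true_offset = np.array([0.8, -0.5])
d = intensity(support + true_offset) - ref
pred = d @ H
```

Because each support pixel sits on a different local gradient, the least-squares fit can recover both components of the displacement from intensity differences alone, which is what lets the tracker treat each facial feature independently.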
\subsection{Active Appearance Model~Features}
AAM features~\cite{Matthews_Baker_2004} of concatenated shape and appearance information have been extracted. We~track using a full-face model (Figure~\ref{fig:landmarks}, left) but the final features are reduced to information from the lip area alone (Figure~\ref{fig:landmarks}, right). Shape features (Equation~(\ref{eq:shapecombined})) are based solely upon the lip shape and its positioning while the speaker is talking. The~landmark positions can be compactly represented using a linear model of the form:
\begin{equation}
s = s_0 + \sum_{i=1}^ms_ip_i
\label{eq:shapecombined}
\end{equation}
where $s_0$ is the mean shape and $s_i$ are the modes. The~appearance features are computed over pixels, the~original images having been warped to the mean shape. So $A_0(x)$ is the mean appearance and appearance is described as a sum over modal appearances:
\begin{equation}
A(x) = A_0(x) + \sum_{i=1}^l{\lambda}_iA_i(x) \qquad \forall x \in S_0
\label{eq:appcombined}
\end{equation}
Combined features are the concatenation of shape and appearance after PCA has been applied to each independently. The~AAM parameters for each speaker are listed in Table~\ref{tab:parameterslilirfeatures} (MATLAB files containing the extracted features can be downloaded from \url{http://zenodo.org/record/2576567}).
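Equations~(\ref{eq:shapecombined}) and~(\ref{eq:appcombined}) can be sketched numerically as follows; the dimensions and random modes are illustrative placeholders rather than values from our trained models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative dimensions: 34 lip landmarks (x, y), a 1000-pixel
# appearance patch, 13 shape modes and 45 appearance modes.
n_lm, n_px, m_s, m_a = 34, 1000, 13, 45

s0 = rng.normal(size=2 * n_lm)         # mean shape s_0
S = rng.normal(size=(m_s, 2 * n_lm))   # shape modes s_i
A0 = rng.normal(size=n_px)             # mean appearance A_0(x)
A = rng.normal(size=(m_a, n_px))       # appearance modes A_i(x)

def synthesise(p, lam):
    """Linear AAM synthesis: mean plus a weighted sum of modes."""
    shape = s0 + p @ S                 # s = s_0 + sum_i p_i s_i
    appearance = A0 + lam @ A          # A(x) = A_0(x) + sum_i lambda_i A_i(x)
    return shape, appearance

p = rng.normal(size=m_s)               # shape parameters p_i
lam = rng.normal(size=m_a)             # appearance parameters lambda_i
shape, appearance = synthesise(p, lam)

# The combined feature vector used as an HMM observation is the
# concatenation of the two parameter sets: 13 + 45 = 58 dimensions.
feature = np.concatenate([p, lam])
print(feature.shape)   # (58,)
```

The $13+45$ split here echoes the per-speaker parameter counts of Table~\ref{tab:parameterslilirfeatures}; the feature dimensionality is simply the sum of the retained shape and appearance modes.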
\begin{figure}[H]
\centering
\includegraphics[width=.33\textwidth]{avl2_face.png}
\includegraphics[width=.33\textwidth]{avl2_lips.png}
\caption{Landmarks in a full-face AAM used to track a face (\textbf{left}) and the lip-only AAM landmarks (\textbf{right}) for feature extraction.}
\label{fig:landmarks}
\end{figure}
\unskip
\begin{table}[H]
\centering
\caption{The number of parameters of shape, appearance, and~combined shape and appearance AAM features for the RMAV dataset speakers. Features retain 95\% variance of facial~information.}
\begin{tabular}{ l r r r }
\toprule
\textbf{Speaker} & \textbf{Shape} & \textbf{Appearance} & \textbf{Combined} \\
\midrule
S1 & 13 & 46 & 59 \\
S2 & 13 & 47 & 60 \\
S3 & 13 & 43 & 56 \\
S4 & 13 & 47 & 60 \\
S5 & 13 & 45 & 58 \\
S6 & 13 & 47 & 60 \\
S7 & 13 & 37 & 50 \\
S8 & 13 & 46 & 59 \\
S9 & 13 & 45 & 58 \\
S10 & 13 & 45 & 58 \\
S11 & 14 & 72 & 86 \\
S12 & 13 & 45 & 58 \\
\bottomrule
\end{tabular}
\label{tab:parameterslilirfeatures}
\end{table}
\unskip
\section{Clustering}
\label{sec:newclusteringalg}
\unskip
\subsection{Step One: Phoneme~Classification}
\label{sec:one}
To complete our preliminary phoneme classification, we implement $10$-fold cross-validation with replacement~\cite{efron1983leisurely} over the $200$ sentences per speaker. This means that for each fold $20$ test samples are randomly selected and omitted from the training samples. Our classifiers are based upon Hidden Markov Models (HMMs)~\cite{holmes2001speech} and implemented with the HTK toolkit~\cite{htk34}. We use the HTK tools as follows:
\begin{enumerate}[leftmargin=*,labelsep=4.9mm]
\item \texttt{HLEd} creates our phoneme transcripts to be used as ground truth transcriptions.
\item \texttt{HCompV} initializes the HMMs with a `flat start'~\cite{forney1973viterbi}, using manually made prototype files for each speaker based upon their AAM parameters (listed in Table~\ref{tab:parameterslilirfeatures}) and the desired HMM parameters. The~prototype HMM has three states, each with a five-component Gaussian mixture, as per the work of~\cite{982900}.
\item Using \texttt{HERest} we train the classifiers by re-estimating the HMM parameters $11$ times via embedded training with the Baum-Welch algorithm~\cite{welch2003hidden}; beyond $11$ iterations the HMMs overfit. Our list of HMMs includes a single-state short-pause model, labeled $/sp/$, to~model the short silences between words in the spoken sentences. States are tied with \texttt{HHed}.
\item We build a bigram word lattice using \texttt{HLStats} and \texttt{HBuild} and use this lattice to complete recognition with \texttt{HVite}. \texttt{HVite} uses our trained set of phoneme-labeled HMM classifiers to estimate what our test samples should be.
\item The output transcripts from \texttt{HVite} are used with our ground truth transcriptions from \texttt{HLEd} as inputs into \texttt{HResults} to produce confusion matrices and lipreading accuracy scores. \texttt{HResults} uses an optimal string match using dynamic programming~\cite{htk34} to compare the ground truths with the prediction transcripts.
\end{enumerate}
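The fold construction used in this and the later classification steps can be sketched as follows; the utterance identifiers are hypothetical placeholders for the RMAV sentences.

```python
import random

random.seed(0)

sentences = [f"utt{i:03d}" for i in range(200)]  # 200 sentences per speaker

# Ten folds `with replacement': for each fold, 20 test sentences are
# drawn at random and the remaining 180 form the training set. Because
# the draws are independent across folds, a sentence may appear in the
# test set of more than one fold.
folds = []
for _ in range(10):
    test = set(random.sample(sentences, 20))
    train = [s for s in sentences if s not in test]
    folds.append((train, sorted(test)))

for train, test in folds:
    assert len(test) == 20 and len(train) == 180
    assert not set(train) & set(test)  # no train/test overlap within a fold
```

Keeping these folds fixed between the phoneme and visual unit experiments prevents any mixing of training and test data across steps.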
\subsection{Step Two: Phoneme~Clustering}
\label{sec:two}
With our phoneme confusions computed (an example matrix is shown in Figure~\ref{table:examplecm}), we have ten confusion matrices per speaker (one for each fold of the cross-validation). We cluster the $m$ phonemes into new visual unit classes, one merge at a~time.
\begin{figure}[H]
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{|l l|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\toprule
& & \multicolumn{15}{c|}{Predicted classes} \\
& \multicolumn{1}{r|}{} & /ae/ & /ay/ & /b/ & /c/ & /d/ & /ea/ & /f/ & /iy/ & /l/ & /m/ & /n/ & /oy/ & /p/ & /s/ & /t/ \\
\midrule
\multirow{16}{*}{Actual classes} & /ae/ & 76 & 2 & 1 & 5 & 2 & 1 & 3 & 1 & 1 & 3 & 1 & 5 & 0 & 0 & 4 \\ \cmidrule{2-17}
& /ay/ & 0 & 28 & 0 & 1 & 0 & 2 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 2 \\ \cmidrule{2-17}
& /b/ & 0 & 4 & 17 & 0 & 1 & 2 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 \\ \cmidrule{2-17}
& /c/ & 3 & 6 & 6 & 163 & 3 & 7 & 7 & 2 & 8 & 7 & 1 & 4 & 2 & 0 & 1 \\ \cmidrule{2-17}
& /d/ & 4 & 2 & 2 & 3 & 33 & 0 & 0 & 1 & 3 & 0 & 1 & 2 & 1 & 0 & 1 \\ \cmidrule{2-17}
& /ea/ & 2 & 0 & 0 & 6 & 1 & 9 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ \cmidrule{2-17}
& /f/ & 4 & 1 & 0 & 3 & 1 & 1 & 40 & 0 & 0 & 1 & 5 & 2 & 0 & 0 & 0 \\ \cmidrule{2-17}
& /iy/ & 0 & 3 & 2 & 1 & 2 & 0 & 0 & 11 & 8 & 2 & 0 & 2 & 0 & 0 & 1 \\ \cmidrule{2-17}
& /l/ & 0 & 0 & 1 & 4 & 1 & 0 & 1 & 2 & 97 & 3 & 1 & 0 & 0 & 0 & 0 \\ \cmidrule{2-17}
& /m/ & 2 & 1 & 4 & 1 & 2 & 3 & 0 & 1 & 6 & 110 & 8 & 0 & 2 & 0 & 0 \\ \cmidrule{2-17}
& /n/ & 0 & 1 & 0 & 1 & 0 & 1 & 2 & 0 & 0 & 1 & 14 & 1 & 2 & 0 & 0 \\ \cmidrule{2-17}
& /oy/ & 0 & 0 & 0 & 3 & 1 & 1 & 4 & 1 & 1 & 3 & 1 & 16 & 1 & 0 & 0 \\ \cmidrule{2-17}
& /p/ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 3 & 0 & 0 \\ \cmidrule{2-17}
& /s/ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 84 & 0 \\ \cmidrule{2-17}
& /t/ & 1 & 3 & 0 & 2 & 1 & 1 & 1 & 0 & 0 & 1 & 2 & 0 & 0 & 0 & 28 \\
\bottomrule
\end{tabular}%
}
\caption{An example phoneme confusion matrix.}
\label{table:examplecm}
\end{figure}
First we sum all ten matrices into one matrix representing all the confusions for each speaker. Our clustering begins with this single speaker-specific confusion matrix,
\begin{equation}
[K_{m}]_{ij} = N (\phat{p}_j | p_i)\quad
\label{eq3}
\end{equation}
where the $ij^{th}$ element is the count of the number of times phoneme $i$ is classified as phoneme $j$. This algorithm works with the column normalized version,
\begin{equation}
[P_m]_{ij} = Pr\{p_i | \phat{p}_j \} \quad
\label{eq4}
\end{equation}
the probability that, given a classification of $\phat{p}_j$, the phoneme really was $p_i$. Phonemes are merged by looking for the two most confused phonemes, hence creating new matrices $K_{m-1}, P_{m-1}$.
Specifically, for each possible merged pair a score, $q$, is calculated as:
\begin{equation}
q = [P_{m}]_{rs} + [P_{m}]_{sr} = Pr\{p_r | \phat{p}_s\} + Pr\{p_s | \phat{p}_r\}
\label{eq5}
\end{equation}
Vowels and consonants cannot be mixed: the~significant negative effect of mixing vowel and consonant phonemes within visemes was demonstrated in~\cite{bear2017phoneme}, so phonemes are assigned to one of two classes, $V$ or $C$, for~vowels and consonants respectively. The~pair with the highest $q$ is merged; equal scores are broken randomly. This process is repeated until $m=2$, at which point we have two single classes: one containing the vowel phonemes and one containing the consonant phonemes. Each~intermediate step, $m = 45, 44, 43, \ldots, 2$, forms another set of prospective visual units.
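The merge loop can be sketched on a toy confusion matrix as follows; the five phonemes and their counts are invented for illustration, and ties are broken by first occurrence here rather than randomly.

```python
import numpy as np

# Toy confusion count matrix K (rows: actual, columns: predicted)
# over two vowels and three consonants. Counts are invented.
labels = [["/ae/"], ["/iy/"], ["/p/"], ["/b/"], ["/s/"]]
is_vowel = [True, True, False, False, False]
K = np.array([[30.0, 10, 0, 0, 1],
              [12, 25, 0, 1, 0],
              [0, 0, 20, 15, 2],
              [0, 1, 14, 18, 1],
              [1, 0, 2, 1, 40]])

while len(labels) > 2:
    P = K / K.sum(axis=0, keepdims=True)  # column-normalise: Pr{p_i | p^_j}
    best, pair = -1.0, None
    for r in range(len(labels)):
        for s in range(r + 1, len(labels)):
            if is_vowel[r] != is_vowel[s]:
                continue                   # never mix vowels and consonants
            q = P[r, s] + P[s, r]          # confusion score for this pair
            if q > best:
                best, pair = q, (r, s)
    r, s = pair
    # Merge class s into class r by summing its confusion counts.
    K[r, :] += K[s, :]
    K[:, r] += K[:, s]
    labels[r] = labels[r] + labels[s]
    K = np.delete(np.delete(K, s, axis=0), s, axis=1)
    del labels[s], is_vowel[s]
    print(len(labels), labels)
```

Each pass of the loop emits one prospective visual unit set, mirroring the intermediate steps $m = 45, 44, \ldots, 2$ of the full algorithm.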
An example P2V mapping is shown in Table~\ref{tab:example} for RMAV speaker number one with ten visual~units.
\begin{table}[H]
\centering
\caption{An example P2V map (for RMAV Speaker 1 with ten visual units).}
\begin{tabular}{ll}
\toprule
\textbf{Visual Unit} & \textbf{Phonemes} \\
\midrule
$/v01/$ & /ax/ \\
$/v02/$ & /v/ \\
$/v03/$ & /\textopeno\textsci/ \\
$/v04/$ & /f/ /\textipa{Z}/ /w/ \\
$/v05/$ & /k/ /b/ /d/ /\textipa{T}/ /p/ \\
$/v06/$ & /l/ /d\textipa{Z}/ \\
$/v07/$ & /g/ /m/ /z/ /y/ /t\textipa{S}/ /\textipa{D}/ /s/ /r/ /t/ /\textipa{S}/ \\
$/v08/$ & /n/ /hh/ /\textipa{N}/ \\
$/v09/$ & /\textipa{E}/ /ae/ /\textopeno/ /uw/ /\textturnscripta/ /\textsci\textschwa/ /ey/ /ua/ /\textrevepsilon/ \\
$/v10/$ & /ay/ /\textscripta/ /\textturnv/ /\textscripta\textupsilon/ /\textupsilon/ /\textschwa\textupsilon/ /\textsci/ /iy/ /\textschwa/ /eh/ \\
\bottomrule
\end{tabular}
\label{tab:example}
\end{table}
\unskip
\subsection{Step Three: Visual Unit~Classification}
\label{sec:visRecog}
Step three is similar to step one. We again complete $10$-fold cross-validation with replacement~\cite{efron1983leisurely} over the $200$ sentences for each speaker using the same folds as the prior steps to prevent mixing the training and test data. Again, $20$ test samples are randomly selected to be omitted from the training folds. Again, with the HTK toolkit, we build new sets of HMM classifiers. This time however, our~classifiers are labeled with the visual units we have just created in step~two.
A Python script translates the phoneme transcripts produced by \texttt{HLEd} in step one, using the P2V maps from step two, into~visual unit transcripts, one for each P2V map. For~each set of visual units, visual unit HMMs are flat-started (\texttt{HCompV}) with the same speaker-specific HMM prototypes as before (Gaussian mixtures are uniform across prototypes) and re-estimated $11$ times with \texttt{HERest}. A~bigram word lattice supports classification, with a grammar scale factor of $1.0$ (shown to be optimal in~\cite{howell2013confusion}) and a transition penalty of $0.5$.
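The transcript relabeling can be sketched as below; the P2V fragment here is a hypothetical excerpt in the style of Table~\ref{tab:example}.

```python
# Rewrite a phoneme transcript as a visual unit transcript via a P2V map.
# The map below is a hypothetical fragment, not a trained speaker map.
p2v = {
    "/f/": "/v04/", "/w/": "/v04/",
    "/k/": "/v05/", "/b/": "/v05/", "/p/": "/v05/",
    "/ae/": "/v09/", "/iy/": "/v10/",
}

def to_visual_units(phoneme_transcript):
    # Labels outside the map (e.g. the short-pause model /sp/) pass through.
    return [p2v.get(p, p) for p in phoneme_transcript]

print(to_visual_units(["/b/", "/ae/", "/k/", "/sp/"]))
# ['/v05/', '/v09/', '/v05/', '/sp/']
```

One such rewritten transcript is produced per P2V map, giving up to $44$ labelings of the same training data per speaker.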
The important difference this time is that the visual unit classes are now used as classifier labels. By~using these sets of classes which have been shown in step one to be visually confusing on the lips, we now perform classification for each class set. In~total this is at most $44$ sets, where the smallest set is of two classes (one with all the vowel phonemes and the other all the consonant phonemes), and~the largest set is of $45$ classes with one phoneme in each---thus the largest set for each speaker is a repeat of the phoneme classification task but using only phonemes which were originally recognized (either correctly or incorrectly) in step~one.
\section{Optimal Visual Unit Set~Sizes}
\label{sec:search}
Figure~\ref{fig:correctness} plots word correctness on the $y$-axis for all $12$ speakers with error bars showing $\pm$ one standard error (se). The~$x$-axis shows the number of visual units. In~green we plot mean weighted guessing over all speakers for each viseme set. Individual speaker variations are in Appendix~\ref{sec:app}, Figures~\ref{fig:sp01}--\ref{fig:sp11}.
It is important in this case to weight the chance of guessing by visual homophenes, as these vary with the size of the visual unit set. Visual unit sets which contain fewer visual units produce sequences of visual units which represent more than one word; these are homophenes. The~effect of homophenes can be seen on the left side of Figure~\ref{fig:correctness} and in the graphs in Appendix~\ref{sec:app}: with fewer than $11$ visual units, homophenes become noticeable and the language model can no longer correct these~confusions.
An example of a homophene in the RMAV data is the word pair `tonnes' and `since'. If~one uses Speaker 1's 10-visual unit P2V map, both words transcribe into visual units as `$/v7/$ $/v10/$ $/v8/$ $/v7/$'. In~practice a language model, or~word lattice, will tend to reduce such confusions, since the lattice models the probability of word $N$-grams, which means that probable combinations such as ``metric tonnes'' will be favored over ``metric since'' \cite{thangthai2017comparing}.
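Detecting homophenes under a given P2V map amounts to grouping words by their visual unit strings. The pronunciations and map below are simplified stand-ins chosen to mirror the `tonnes'/`since' example, not entries from the BEEP dictionary.

```python
from collections import defaultdict

# Hypothetical P2V fragment and pronunciations.
p2v = {"/t/": "/v7/", "/s/": "/v7/", "/ah/": "/v10/", "/ih/": "/v10/",
       "/n/": "/v8/", "/z/": "/v7/"}
pron = {"tonnes": ["/t/", "/ah/", "/n/", "/z/"],
        "since": ["/s/", "/ih/", "/n/", "/s/"],
        "ban": ["/b/", "/ah/", "/n/"]}

# Group words by their visual unit string; groups of size > 1 are
# homophenes, i.e. words indistinguishable on the lips under this map.
groups = defaultdict(list)
for word, phones in pron.items():
    key = tuple(p2v.get(p, p) for p in phones)
    groups[key].append(word)

homophenes = [ws for ws in groups.values() if len(ws) > 1]
print(homophenes)   # [['tonnes', 'since']]
```

The smaller the visual unit set, the more words collapse onto the same key, which is exactly the degradation visible on the left of Figure~\ref{fig:correctness}.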
We see all our word correctness scores are significantly above guessing, albeit still low. There~is variation between speakers, but~there is a clear overall trend: superior performance is found with larger numbers of visual units. An~important point is that some authors report viseme accuracy instead of word correctness~\cite{bearTaylor}. This is unhelpful as it masks the effect of homophenous words on performance. Had we reported viseme accuracy, the positive effect of larger visual unit sets would not be~visible.
In Figure~\ref{fig:correctness} we highlight in red the class sets which, for~any speaker, have shown a significant classification improvement (with non-overlapping error bars) over the adjacent set of units to their right along the $x$-axis. Error bars overlap once the correctness is averaged, so Table~\ref{tab:merges} lists these combinations for each speaker. These red points show where we can identify the pairs of classes which, when merged into one class, significantly improve classification. If~we refer to speaker demographic factors such as gender or age, we find no apparent pattern in these visual unit combinations. So we have further evidence reinforcing the idea that all speakers have a unique visual speech signal~\cite{607030}. In~\cite{bear2015speaker} this is suggested to be due to how the trajectory between visual units varies by speaker, owing to factors such as rate of speech~\cite{taylor2014effect}. This illustrates how difficult finding a set of cross-speaker visual units can be when phonemes need alternative groupings for each individual~\cite{bear2017visual}.
\begin{figure}[H]
\centering
\includegraphics[width=0.95\linewidth]{all_corr2.pdf}
\caption{All-speaker mean word classification correctness $C\pm1se$.}
\label{fig:correctness}
\end{figure}
\begin{table}[H]
\centering
\tablesize{\footnotesize}
\caption{Visual unit class merges which improve word classification correctness; $V_n=V_i+V_j$.}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{ l l l l l l }
\toprule
\textbf{Speaker} & \textbf{Set No} & \boldmath{$V_i$} & \boldmath{$V_j$} & \textbf{Set No} & \boldmath{$V_n$} \\
\midrule
Sp01 & 35 & /s/ /r/ & /\textipa{D}/ & 34 & /s/ /r/ /\textipa{D}/ \\
Sp02 & 22 & /d/ & /z/ /y/ & 21 & /d/ /z/ /y/ \\
Sp03 & 34 & /b/ /t\textipa{S}/ & /\textipa{Z}/ & 33 & /b/ /t\textipa{S}/ /\textipa{Z}/ \\
Sp03 & 31 & /\textipa{Z}/ /b/ /t\textipa{S}/ & /z/ & 30 & /\textipa{Z}/ /b/ /t\textipa{S}/ /z/ \\
Sp03 & 25 & /p/ /r/ & /\textipa{N}/ & 24 & /p/ /r/ /\textipa{N}/ \\
Sp05 & 17 & /ae/ & /eh/ & 16 & /ae/ /eh/ \\
Sp06 & 35 & /ae/ /\textturnv/ & /iy/ & 34 & /ae/ /\textturnv/ /iy/ \\
Sp09 & 12 & /b/ /w/ /v/ & /d\textipa{Z}/ /hh/ & 11 & /b/ /w/ /v/ /d\textipa{Z}/ /hh/ \\
Sp12 & 36 & /\textturnv/ & /\textopeno/ & 35 & /\textturnv/ /\textopeno/ \\
\bottomrule
\end{tabular} }%
\label{tab:merges}
\end{table}
\unskip
\section{Discussion}
\label{sec:discussion}
In Figure~\ref{fig:correctness} we have plotted mean word correctness, $C$, over~all $12$ speakers, with weighted guessing ($1/(\textrm{number of units})$) in green. Within~one standard error, there is a monotonic trend. Small numbers of units perform worse than phonemes, which supports the claim that phonemes are preferable to visemes; however, it would be an oversimplification to assert that higher-accuracy lipreading can be achieved with phonemes, as this has not been shown in our results with significance. Rather we say that, generally, visual unit sets with more classes outperform the smaller sets. In~\cite{bear2017phoneme} the authors reviewed $120$ previous phoneme-to-viseme (P2V) maps; typically these consist of between $10$ and $35$ visual units~\cite{bear2014phoneme}. For~example, the Lee set consists of six consonant visemes and five vowel visemes~\cite{lee2002audio}, and Jeffers~\cite{jeffers1971speechreading} groups phonemes into eight vowel and three consonant~visemes.
In Figures~\ref{fig:sp01}--\ref{fig:sp11} and Figure~\ref{fig:correctness} we see a rapid decrease in lipreading word correctness for viseme sets containing fewer than ten visemes. More positively, the~region of viseme set sizes between $11$ and $20$ contains the optimum viseme set for three out of the $12$ speakers, which is more than random chance. This means that, for~each speaker, we have found and presented an optimal number of visual units (shown by the best performing results in Figures~\ref{fig:sp01}--\ref{fig:sp11}), but the optimal number is not related to any of the conventional viseme definitions, nor is it consistent across speakers. Table~\ref{tab:pr_vals} shows the word correctness, $C_w$, of~each speaker's phoneme~classification.
\begin{table}[H]
\centering
\caption{Phoneme correctness $C$ for each speaker (right-hand data points of Figures~\ref{fig:sp01}--\ref{fig:sp11}).}
\begin{tabular}{lrrrrrrrrrrrr}
\toprule
Speaker & 1 & 2 & 3 & 4 & 5 & 6 \\
Phoneme $C$ & 0.05 & 0.06 & 0.06 & 0.05 & 0.06 & 0.06 \\
\midrule
Speaker & 7 & 8 & 9 &10 & 11 & 12 \\
Phoneme $C$ & 0.06 & 0.06 & 0.06 & 0.07 & 0.06 & 0.06 \\
\bottomrule
\end{tabular}
\label{tab:pr_vals}
\end{table}
\unskip
\section{Hierarchical Training for Weak-Learned Visual~Units}
\label{chap:support network}
Figure~\ref{fig:correctness} showed our first results derived using an adapted version of the algorithm described in~\cite{bear2014phoneme}.
Table~\ref{tab:merges} also shows us, for~each of our $12$ speakers, the significantly improving visual unit sets. These are the sets where a single change of visual unit grouping has resulted in a significant (greater than one standard error over ten folds) increase in word correctness. This tells us that there are some unit sets, lying between the traditional visemes (for example~\cite{fisher1968confusions, lip_reading18, disney}) and~phonemes, which are better for visual speech~classification.
Table~\ref{tab:merges} (\cite{bear2015findingphonemes}) shows several significantly improving sets. We suggest these are interesting because of the tradeoff between homophenes and accuracy: it is possible these are the groupings where accuracy improves significantly despite the extra homophenes created as the number of visual units in the set decreases. Either the increase in homophenes is negligible or the~number of training samples for two visually indistinguishable classes increases significantly when they are~combined.
We propose a novel idea: to implement hierarchical classifier training using both visual units and phonemes in sequence. Some work in acoustic speech recognition has used such a layered approach to model building with success, e.g.~\cite{morgan2012deep}. Our intention is to use our new range of visemes to test whether this new training algorithm can improve phoneme classification without the need for more training data, since the approach shares training data across models. This premise avoids the negative effects of introducing more homophenes, because the second layer of training discriminates between the sub-units within the first layer. This will assist the identification of the more subtle but important differences in visual gestures representing alternative phonemes. We note from~\cite{bear2014some} that using the wrong clusters of phonemes is worse than using none, and~also that this new approach aims to optimize performance within the scope of the datasets and system effects described previously in Sections~\ref{sec:search} and~\ref{sec:discussion}.
A bonus of our revised classification scheme is that, because we weakly train the classifiers before phoneme training, we remove any need for post-processing methods (e.g., weighted finite state transducers~\cite{howell2013confusion}) to reverse the P2V mapping in order to decode the actual phoneme~recognized.
In Figure~\ref{fig:correctness}, the~performance of classifiers with small numbers of visual units (fewer than $10$) is poor. As~described previously, we attribute this to the large number of homophenes. At~the other end of the figure, sets containing large numbers of visual units (greater than $35$) do not significantly, or~even noticeably, improve the correctness; this is where many phonetic variations are visually indistinguishable on the lips. Also taking into account the significantly improving visual unit sets highlighted in Figure~\ref{fig:correctness}, we focus on sets of visual units in the size range $11$ to $35$, with the same $12$ RMAV speakers, for our experiments using hierarchical training of phoneme~classifiers.
\begin{figure}[H]
\centering
\includegraphics[width=0.9\textwidth]{journal3Fig6.pdf}
\caption{\textls[-25]{\textbf{Top}: a high-level lipreading system, and~\textbf{Bottom}: where conversions between words, phonemes,} and~visual units can occur in lipreading systems in three different~flows.}
\label{fig:unitphases}
\end{figure}
Here, we use our knowledge of visual speech to drive a novel redesign of the conventional training method; Figure~\ref{fig:unitphases} shows how we move the unit conversion earlier in the process. The~top of Figure~\ref{fig:unitphases}, in black boxes, shows the steps of a lipreading system, divided into phases where the units change from words, to~phonemes, to~visual units (where used). Flow 1 shows how we translate the word ground truth into phonemes using a pronunciation dictionary (e.g.~\cite{beep} or~\cite{cmudict}) for labeling the classifiers, before~decoding with a word language model. Flow 2, below this, uses visual units: we translate from visual unit trained classifiers back into words using the word network. Finally, flow three shows our new approach, in which we introduce an extra step into the training phase: classifiers are initialized as visual units, then retrained into phoneme classifiers before word decoding. We describe this new process in detail~now.
\section{Classifier Adaptation~Training}
The basis of our new training algorithm is a hierarchical structure with the first level based on visual units, and~the second level based on phonemes. In~Figure~\ref{fig:wlt_process} we present an illustration based on a simple example using five phonemes (in reality there are up to $45$ in the RMAV sentences) mapped to two visual units (in reality there will be between $11$ and $35$ as we have refined our experiment to only use sets of visual units in the optimal size range from the preliminary test results). Each phoneme is mapped to a visual unit as in~\cite{bear2016decoding}, our example map is in Table~\ref{tab:exampleP2V}. But~now we are going to learn intermediate visual unit labeled HMMs before we create phoneme~models.
\begin{figure}[H]
\centering
\includegraphics[width=0.9\textwidth]{v2pHMMprocess2.pdf}
\caption{Hierarchical training strategy for training visual unit HMMs into phoneme-labeled HMM~classifiers.}
\label{fig:wlt_process}
\end{figure}
In this example $/p1/$, $/p2/$ and $/p4/$ are associated with $/v1/$, so are initialized as duplicate copies of HMM $/v1/$. Likewise, phoneme models labeled $/p3/$ and $/p5/$ are initialized as replicas of $/v2/$. We now retrain the phoneme models using the same training~data.
\begin{table}[H]
\centering
\tablesize{\small}
\caption{Our example P2V map to illustrate our novel training~algorithm.}
\begin{tabular}{ll}
\toprule
\textbf{Visual Units} & \textbf{Phonemes} \\
\midrule
/v1/ & /p1/ /p2/ /p4/ \\
/v2/ & /p3/ /p5/ \\
\bottomrule
\end{tabular}
\label{tab:exampleP2V}
\end{table}
In full for each set of visual units of sizes from $11$ to $35$:
\begin{enumerate}[leftmargin=*,labelsep=4.9mm]
\item We initialize \textit{visual unit} HMMs with \texttt{HCompV}; this tool initializes the HMMs by defining all models to be equal~\cite{young2006htk}.
\item With our prototype HMM based upon a Gaussian mixture of five components and three states, we use \texttt{HERest} 11 times over to re-estimate the HMM parameters and we include short-pause model state tying (between re-estimates three and four with \texttt{HHed}). Training samples are from all phonemes in each visual unit cluster. These first two points are steps 1 and 2 in Figure~\ref{fig:wlt_process}.
\item Before classification, our visual unit HMM definitions are duplicated to serve as initialized definitions for the phoneme-labeled HMMs (Figure~\ref{fig:wlt_process}, step 3). In~our Figure~\ref{fig:wlt_process} illustration, $/v1/$ is duplicated three times (once for each phoneme in its cluster) and $/v2/$ is copied twice. The~respective visual unit HMM definition is used for all the phonemes in its cluster of the P2V map.
\item These phoneme HMMs are retrained with \texttt{HERest} $11$ times; this time the training samples are divided by their unique phoneme labels.
\item We create a bigram word lattice with \texttt{HLStats} and \texttt{HBuild} and as part of the classification we apply a grammar scale factor of $1.0$ and a transition penalty of $0.5$ (based on~\cite{howell2013confusion}) with \texttt{HVite}. In~Section~\ref{sec:ln} we present a test to determine the best language network units for this step.
\item Finally, the~output transcripts from \texttt{HVite} are scored with \texttt{HResults} against the phoneme ground truths produced by \texttt{HLEd}. This is all implemented using $10$-fold cross-validation with replacement~\cite{efron1983leisurely}.
\end{enumerate}
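The duplication in step 3 can be sketched as follows; the HMMs here are stand-in dictionaries rather than HTK model definition files, and the `retraining' is a token parameter change standing in for the $11$ rounds of \texttt{HERest}.

```python
import copy

# Hypothetical P2V map from the example of Table 4.
p2v = {"/p1/": "/v1/", "/p2/": "/v1/", "/p4/": "/v1/",
       "/p3/": "/v2/", "/p5/": "/v2/"}

# Trained visual unit models (illustrative parameters only).
visual_hmms = {"/v1/": {"states": 3, "mixes": 5, "means": [0.1, 0.2, 0.3]},
               "/v2/": {"states": 3, "mixes": 5, "means": [0.7, 0.8, 0.9]}}

# Each phoneme HMM starts as a deep copy of its visual unit's HMM,
# so /p1/, /p2/ and /p4/ begin identical to /v1/, and /p3/, /p5/ to /v2/.
phoneme_hmms = {p: copy.deepcopy(visual_hmms[v]) for p, v in p2v.items()}

# Retraining each copy on its own phoneme's samples diverges the copies;
# here a per-phoneme nudge stands in for Baum-Welch re-estimation.
for p, hmm in phoneme_hmms.items():
    hmm["means"] = [round(m + 0.01 * int(p[2]), 2) for m in hmm["means"]]

print(phoneme_hmms["/p1/"]["means"])   # [0.11, 0.21, 0.31]
```

The deep copy matters: each phoneme model must own its parameters so that retraining one does not alter its siblings initialized from the same visual unit.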
The big advantage of this approach is that the phoneme classifiers have seen mostly positive cases and therefore have good mode matching; the~disadvantage is that they are more limited in their exposure to negative cases than the visual~units.
\section{Language Network Units}
\label{sec:ln}
Step five in our novel hierarchical training method requires a language network. It has been consistently observed that language models are very powerful in lipreading systems (e.g., in~\cite{6288999}). Language models built upon the ground truth utterances of a dataset learn the grammar and structure rules of words and sentences (the latter in the case of continuous speech). However, visual co-articulation effects damage the performance of visual speech language models because, visually, people do not say what the language model expects. These types of network are commonplace; higher-order $N$-gram language models may improve classification rates, but their cost is disproportionate to our goal of developing more accurate classifiers. Therefore, to~decide which unit would best optimize our language model, we test three units as bigram models in a second preliminary test: visemes, phonemes, and~words.
In the first two columns of Table~\ref{tab:sn_tests2} we list the possible pairs of classifier units and language model units. For~each of these pairs we use the common process previously described for lipreading in HTK, where our phonemes are based on the International Phonetic Alphabet~\cite{international1999handbook}, and~our visemes are Bear's speaker-dependent visemes~\cite{bear2017phoneme}. Word labels are from the RMAV dataset. We define \textbf{classifier units} as the labels used to identify individual classification models and \textbf{language units} as the label scheme used for building the decoding network used post~classification.
\subsection{Language Network Unit~Analysis}
In column four of Table~\ref{tab:sn_tests2} we list the one-standard-error values for these tests. The phoneme units are the most robust.
In Figure~\ref{fig:sn_effects_talker} we have plotted word correctness ($y$-axis) for each speaker ($x$-axis) over three figures, one figure per language network unit. The~viseme network is top, the phoneme network middle, and~the word network at the bottom. The~viseme network gives the lowest score ($0.02\pm0.0063$). On~the face of it, the~idea of viseme classifiers is a good one because they take visual co-articulation into account to some extent. However, as~seen here, a~language model of visemes is too complex because of homophenes. This leaves us with a choice of either phoneme or word units for our language model in step five of our new hierarchical training~method.
\begin{figure}[H]
\centering
\begin{tabular}{c}
\includegraphics[width=0.65\textwidth]{viseme_network_units_by_talker} \\
\includegraphics[width=0.65\textwidth]{phoneme_network_units_by_talker} \\
\includegraphics[width=0.65\textwidth]{word_network_units_by_talker}
\end{tabular}
\caption{Effects of support network unit choice with each type of labeled HMM classifier units. Along~the $x$-axis is each speaker, $y$-axis values are correctness, $C$. Viseme network is at the top, phoneme network plotted in the middle, and~word networks at the~bottom.}
\label{fig:sn_effects_talker}
\end{figure}
\begin{table}[H]
\centering
\caption{Unit selection pairs for HMMs and language network combinations, and~the all-speaker mean $C_w$ achieved.}
\begin{tabular}{llrr}
\toprule
\textbf{Classifier units} & \textbf{Network units} & \boldmath{$C_w$} & \textbf{$1$se} \\
\midrule
Viseme & Viseme & $0.02$ & $0.0063$ \\
\midrule
Viseme & Phoneme & $0.19$ & $0.0036$ \\
Phoneme & Phoneme & $0.19$ & $0.0036$ \\
\midrule
Viseme & Word & $0.09$ & $0.0$ \\
Phoneme & Word & $0.20$ & $0.0043$ \\
Word & Word & $0.19$ & $0.0005$\\
\bottomrule
\end{tabular}
\label{tab:sn_tests2}
\end{table}
In Figure~\ref{fig:sn_effects_talker} (middle) we have our phoneme language network performance with both viseme and phoneme trained classifiers. This is more exciting because for all speakers we see a statistically significant increase in $C_w$ compared to the viseme network scores in Figure~\ref{fig:sn_effects_talker} top. Looking more closely between speakers, we see that for four speakers (2, 9, 10 and 12) the~viseme classifiers outperform the phonemes, yet for all other speakers there is no significant difference between the two. On~average the two are identical, with an all-speaker mean $C_w$ of $0.19\pm0.0036$ for both viseme and phoneme classifiers (Table~\ref{tab:sn_tests2}, column 3).
In Figure~\ref{fig:sn_effects_talker} (bottom) we show our $C_w$ for all speakers with a word network paired with classifiers built on viseme, phoneme, and~word units. Our first observation is that word classifiers perform very poorly. We attribute this to a low number of training samples per class, due to the far greater number of classes in the word space compared to the phoneme space, so we do not continue our work with word-based classifiers.
Also shown in Figure~\ref{fig:sn_effects_talker} (bottom) are the phoneme and viseme classifiers (in green and red respectively) with a word network. This time we see that for five of our 12 speakers (3, 5, 7, 8, and~11) the phoneme classifiers outperform the visemes, and for our remaining speakers there is no significant difference once a word network is~applied.
These results tell us that for some speakers viseme classifiers with a phoneme network are the better choice, whereas other speakers are easier to lipread with phoneme classifiers and a word network. Thus, we continue our work using both phoneme and word-based language~networks.
\section{Effects of Training Visual Units for Phoneme~Classifiers}
Here we present the results of our proposed hierarchical training method (described in Section~\ref{sec:newclusteringalg}) with two different language models. Figure~\ref{fig:res_all} shows the mean correctness, $C$, for~all 12 speakers over $10$ folds. We have plotted four symbols, one for each of the pairings of our HMM unit labels and the language network unit (\{visual units and phonemes, visual units and words, phonemes and phonemes, phonemes and words\}). Random guessing is plotted in~orange.
\begin{table}[H]
\centering
\caption{Minimum and maximum all-speaker mean correctness, $C$, showing the effect of hierarchical training from visual units on phoneme-labeled HMM~classification.}
\begin{tabular}{lrrr}
\toprule
& \textbf{Min} & \textbf{Max} & \textbf{Range} \\
\midrule
visual units + word net & 0.03 & 0.06 & 0.03 \\
Phonemes + word net & 0.09 & 0.10 & 0.01 \\
Effect of WLT & 0.06 & 0.04 & -- \\
visual units + phoneme net & 0.20 & 0.22 & 0.02 \\
Phonemes + phoneme net & 0.26 & 0.24 & 0.01 \\
Effect of WLT & 0.02 & 0.02 & -- \\
\bottomrule
\end{tabular}
\label{tab:meanstats}
\end{table}
\unskip
\begin{figure}[H]
\centering
\includegraphics[width=0.9\textwidth]{wlt_agg.eps}
\caption{HTK Correctness $C$ for visual unit classifiers with either phoneme or word language models and phoneme classifiers with either phoneme or word language models averaged over all 12 speakers. The~correctness unit matches the paired network~unit. }
\label{fig:res_all}
\end{figure}
The $x$-axis of Figure~\ref{fig:res_all} is the size of the optimal visual unit sets from Figure~\ref{fig:correctness}, from~$11$ to $36$. This is the range of optimal visual unit set sizes over which phoneme-labeled classifiers do not improve classification. The~baseline of visual unit classification with a word network from~\cite{bear2015findingphonemes} is shown in blue and is not significantly different from conventionally learned phoneme \textls[-15]{classifiers. Based on our language network study in Section~\ref{sec:ln}, it is no surprise that simply using a phoneme network instead of a word network to support visual unit classification significantly improves our mean correctness score for all visual unit set sizes and all speakers (shown in pink). We have plotted weighted guessing in~orange.}
More interesting is that our new weakly trained phoneme HMMs are significantly better than the visual unit HMMs. In~the first part of our work, phoneme HMMs gave an all-speaker mean $C = 0.059$ and were not significantly different from the best visual units. Here, regardless of the size of the original visual unit set, $C$ is almost double. Weakly learned phoneme classifiers with a word network gain $0.031$ to $0.040$ in mean $C$, and~when these phoneme classifiers are supported with a phoneme network we see a correctness gain ranging from $0.17$ to $0.18$. These gains are supported by the all-speaker mean minima and maxima listed in Table~\ref{tab:meanstats}. These gain scores are computed over all the potential P2V mappings and show that it makes little difference which P2V map is used to choose the set of visual units with which to initialize our phoneme classifiers. All results are significantly better than~guessing.
In Figures~\ref{fig:rmav1wlt}--\ref{fig:rmav7wlt}, we have plotted for each of our $12$ speakers non-aggregated results showing $C \pm$ one standard error. While not monotonic, these graphs are much smoother than the speaker-dependent graphs shown in Appendix A. The~significant differences between visual unit set sizes (in Figure~\ref{fig:correctness}) have now disappeared because the learning of differences between visual units has been incorporated into the training of phoneme classifiers, which in turn are now better trained (plotted in red and green, which improve on blue and pink respectively).
\begin{figure}[H]
\centering
\includegraphics[width=0.68\textwidth]{wlt_sp01.eps} \\
\includegraphics[width=0.68\textwidth]{wlt_sp02.eps} \\
\includegraphics[width=0.68\textwidth]{wlt_sp03.eps}
\caption{\textls[-15]{Speaker 1 (\textbf{top}), Speaker 2 (\textbf{middle}), and~Speaker 3 (\textbf{bottom}) correctness with a word language model (blue) and the hierarchically trained phoneme classifiers with a phoneme or word~network.}}
\label{fig:rmav1wlt}
\end{figure}
\unskip
\begin{figure}[H]
\centering
\includegraphics[width=0.68\textwidth]{wlt_sp09.eps} \\
\includegraphics[width=0.68\textwidth]{wlt_sp10.eps} \\
\includegraphics[width=0.68\textwidth]{wlt_sp11.eps}
\caption{\textls[-15]{Speaker 4 (\textbf{top}), Speaker 5 (\textbf{middle}), and~Speaker 6 (\textbf{bottom}) correctness with a word language model (blue) and the hierarchically trained phoneme classifiers with a phoneme or word~network.}}
\label{fig:rmav3wlt}
\end{figure}
\unskip
\begin{figure}[H]
\centering
\includegraphics[width=0.68\textwidth]{wlt_sp15.eps} \\
\includegraphics[width=0.68\textwidth]{wlt_sp05.eps} \\
\includegraphics[width=0.68\textwidth]{wlt_sp06.eps}
\caption{\textls[-15]{Speaker 7 (\textbf{top}), Speaker 8 (\textbf{middle}), and~Speaker 9 (\textbf{bottom}) correctness with a word language model (blue) and the hierarchically trained phoneme classifiers with a phoneme or word~network.}}
\label{fig:rmav5wlt}
\end{figure}
\unskip
\begin{figure}[H]
\centering
\includegraphics[width=0.68\textwidth]{wlt_sp08.eps} \\
\includegraphics[width=0.68\textwidth]{wlt_sp13.eps} \\
\includegraphics[width=0.68\textwidth]{wlt_sp14.eps}
\caption{\textls[-15]{Speaker 10 (\textbf{top}), Speaker 11 (\textbf{middle}), and~Speaker 12 (\textbf{bottom}) correctness with a word language model (blue) and the hierarchically trained phoneme classifiers with a phoneme or word~network.}}
\label{fig:rmav7wlt}
\end{figure}
An intriguing observation comes from comparing the use of a phoneme network for visual units and for weakly taught phonemes. For~some speakers, the~weakly learned phonemes are not as important as having the right network unit. This is seen in Figure~\ref{fig:rmav1wlt} (top and bottom),~Figure \ref{fig:rmav3wlt} (middle),~Figure \ref{fig:rmav5wlt} (middle), and Figure \ref{fig:rmav7wlt} (bottom) for Speakers 1, 3, 5, 8, and~12. By~rewatching the original videos to estimate the age of our speakers, we categorize them by eye as either an `older' or `younger' speaker because the exact ages were not captured during filming. The~speakers with a less significant difference in the effect of hierarchical training from visual to audio units are younger. This implies that to lipread a younger person we need more support from the language model than~for an older speaker. We suggest this could be because young people show more co-articulation than older people, but~this requires further~investigation.
\section{Conclusions}
We have described a method that allows us to construct any number of visual units. The~presence of an optimum is a result of two competing effects on a lipreading system. In~the first, as~the number of visual units shrinks the number of homophenes rises and it becomes more difficult to recognize words (correctness drops). In~the second, as~the number of visual units rises we run out of training data to learn the subtle differences in lip-shapes (if they exist), so again, correctness drops. Thus, the~optimum number of visual units lies between one and $45$. In~practice we see this optimum is between the number of phonemes and eight (which is the size of one of the smaller visual unit sets).
The choice of visual units in lipreading has caused some debate. Some workers use visemes (for~example Fisher~\cite{fisher1968confusions}), in which visemes are a theoretical construct representing phonemes that should look identical on the lips~\cite{hazen2006visual}. Others, e.g.,~\cite{howell2013confusion}, have noted that lipreading using phonemes can give superior performance to visemes.
Here, we supply further evidence to the more nuanced hypothesis first presented in~\cite{bear2015findingphonemes}, that there are intermediate units, which for convenience we call visual units, that can provide superior performance provided they are derived by an analysis of the data. A~good number of visual units in a set is higher than previously~thought.
We have also presented a novel learning algorithm which shows improved performance for these new data-driven visual units by using them as an intermediate step in training phoneme classifiers. The~essence of our method is to retrain the visual unit models in a fashion similar to hierarchical training. This two-pass approach on the same training data has improved the training of phoneme-labeled classifiers and increased the classification~performance.
We have also investigated the relationship between the classifier unit choice and the unit choice for the supporting language network. We have shown that one can choose either phonemes or words without significantly different accuracy, but~we recommend a word network as this reduces the effect of homophene error and enables unbiased comparison of classifier~performance.
In future work we would seek to test whether this hierarchical training method achieves the same benefit with other classification techniques, for~example RBMs. This is inspired by the work in~\cite{mousas2017real,nam2012learning} and other recent hybrid HMM studies such as~\cite{thangthai2018computer}.
\vspace{6pt}
\authorcontributions{H.B. and R.H. conceived and designed the experiments; H.B. performed the experiments; H.B. and R.H. analyzed the data; H.B. and R.H. wrote the paper.}
\funding{This research was funded by EPSRC grant number 1161995. The APC was funded by Queen Mary University of London.}
\acknowledgments{We would like to thank our colleagues at the University of Surrey who were collaborators in building the RMAV dataset.}
\conflictsofinterest{The authors declare no conflict of interest.}
\section{Introduction}
In a parallel machine, the underlying topology is a graph whose vertices are processors and whose edges are physical links between processors. Such a graph is an interconnection network. One may want to cluster the processors
into groups, and the most basic clusters are clusters of two processors. Such a clustering may or may not be possible if there are failures in some of the links. So it may be desirable to consider interconnection networks that are
resilient to faulty links under such a clustering requirement.
All graphs considered in this paper are undirected, finite and
simple. We refer to the book \cite{Bondy} for graph theoretical
notation and terminology not described here. For a graph $G$, let $V(G)$, $E(G)$,
and $\overline{G}$ denote the set of vertices, the set
of edges, and the complement of $G$, respectively. The number of vertices in $G$ is the \emph{order} of $G$.
For any subset $X$ of $V(G)$, let $G[X]$ denote the subgraph induced
by $X$; similarly, for any subset
$F$ of $E(G)$, let $G[F]$ denote the subgraph induced by $F$. Let $X\subseteq V(G)\cup E(G)$. We use
$G-X$ to denote the subgraph of $G$ obtained by removing all the
vertices in $X$ together with the edges incident with them from $G$ as well as
removing all the edges in $X$ from $G$. If $X=\{x\}$, we may write $G-x$ instead of
$G-\{x\}$.
For two subsets $X$ and $Y$ of
$V(G)$ we denote by $E_G[X,Y]$ the set of edges of $G$ with one end
in $X$ and the other end in $Y$.
The {\it degree}\index{degree} of a vertex $v$ in a graph $G$,
denoted by $deg_G(v)$, is the number of edges of $G$ incident with
$v$. Let $\delta(G)$ and $\Delta(G)$ be the minimum degree and
maximum degree of the vertices of $G$, respectively. The set of
neighbors of a vertex $v$ in a graph $G$ is denoted by $N_G(v)$.
A graph is
\emph{Hamiltonian} if it contains a Hamiltonian cycle. A component of a
graph is \emph{odd} or \emph{even} according to whether it has an
odd or even number of vertices.
A \emph{matching} in a graph is a set of edges such that every vertex is incident with at most one edge in this set. If a set of edges form a matching in a graph, they are \emph{independent}.
A \emph{perfect matching} in a graph is a set of edges such that every vertex is incident with exactly one edge in this set. An
\emph{almost-perfect matching} in a graph is a set of edges such that every vertex, except one, is incident with exactly one edge in
this set, and the exceptional vertex is incident to none. So if a graph has a perfect matching, then it has an even number of
vertices; if a graph has an almost-perfect matching, then it has an odd number of vertices. The \emph{matching preclusion number}
of a graph $G$, denoted by $\mbox{mp}(G)$, is the minimum number of edges whose deletion leaves the resulting graph with neither
perfect matchings nor almost-perfect matchings. Such an optimal set is called an \emph{optimal matching preclusion set}. We define
$\mbox{mp}(G)=0$ if $G$ has neither perfect matchings nor almost-perfect matchings. This concept of matching preclusion was
introduced in \cite{BrighamHVY} and further studied in \cite{BrighamHVY, ChengLJ, ChengLJa, ChengCM, ChengL, ChengL2, ChengHJL, ChengLLL, Park, ParkS, ParkI, ParkI2, WangWLL, MWCM, WangFZ}. Originally this concept was introduced as a measure of robustness in the event
of edge failure in interconnection networks. It is worth noting that besides this application, it was also remarked in \cite{BrighamHVY} that this measure has a theoretical connection to other concepts in graph theory such as conditional connectivity and extremal graph theory.
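As a sanity check on the definition (not part of the paper's development), $\mbox{mp}(G)$ can be computed by brute force on very small graphs by exhausting over edge subsets; this is exponential and purely illustrative. The edge-list representation and function names below are our own choices.

```python
from itertools import combinations

def has_matching_of_size(edges, need):
    """True if the edge list contains `need` pairwise disjoint edges."""
    def search(rest, covered, size):
        if size == need:
            return True
        for i, (u, v) in enumerate(rest):
            if u not in covered and v not in covered:
                if search(rest[i + 1:], covered | {u, v}, size + 1):
                    return True
        return False
    return need == 0 or search(list(edges), frozenset(), 0)

def matching_preclusion(n, edges):
    """mp(G) by brute force (graphs with n >= 2): the fewest edge deletions
    after which G has neither a perfect nor an almost-perfect matching;
    0 if G already has neither."""
    need = n // 2      # matching size covering all but at most one vertex
    edges = list(edges)
    if not has_matching_of_size(edges, need):
        return 0
    for k in range(1, len(edges) + 1):
        for removed in map(set, combinations(range(len(edges)), k)):
            kept = [e for i, e in enumerate(edges) if i not in removed]
            if not has_matching_of_size(kept, need):
                return k
```

The tests below agree with the values quoted later in the paper: $\mbox{mp}(K_4)=3$ and $\mbox{mp}(K_6)=5$ (that is, $n-1$ for even complete graphs) and $\mbox{mp}(P_3)=2$.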
The following results are immediate.
\begin{obs}{\upshape \cite{ParkI}}\label{obs1-1}
$(1)$ If $H$ is a spanning subgraph of $G$, then
$\mbox{mp}(H)\leq \mbox{mp}(G)$.
$(2)$ For an even graph $G$, $\mbox{mp}(G)\leq \delta(G)$.
$(3)$ If $e$ is an edge of $G$, then $\mbox{mp}(G-e)\geq \mbox{mp}(G)-1$.
\end{obs}
As mentioned earlier, the topologies of the underlying architectures of distributed processor systems or parallel machines are important since good topologies offer the advantages of improved connectivity and reliability. We refer the readers to \cite{HsuL} for recent progress
in this area and the references in its extensive bibliography. For applications that require clustering, the most basic clusters are of size two. This is exactly the concept of perfect matchings. Moreover,
the matching preclusion number measures the robustness under this clustering requirement in the event of link
failures, as indicated in \cite{BrighamHVY}. Naturally, we want the matching preclusion number to be high.
Let $\mathcal {G}(n)$ denote the class of simple graphs of order $n$. Given a graph theoretic parameter $f(G)$ and a positive
integer $n$, the \emph{Nordhaus-Gaddum Problem} is to
determine sharp bounds for: $(1)$ $f(G)+f(\overline{G})$ and $(2)$
$f(G)\cdot f(\overline{G})$, as $G$ ranges over the class $\mathcal
{G}(n)$, and characterize the extremal graphs. We consider the following problems in which their solutions will give insights in designing interconnection networks with respect to the size of the networks and the targeted matching preclusion number.
\noindent {\bf Problem 1.} Given two positive integers $n$ and $k$,
compute the minimum integer $s(n,k)=\min\{|E(G)|:G\in
\mathscr{G}(n,k)\}$, where $\mathscr{G}(n,k)$ denotes the set of all graphs
of order $n$ (that is, with $n$ vertices) with matching preclusion number $k$.
\noindent {\bf Problem 2.} Given two positive integers $n$ and $k$,
compute the minimum integer $f(n,k)$ such that for every connected
graph $G$ of order $n$, if $|E(G)|\geq f(n,k)$ then $\mbox{mp}(G)\geq k$.
\noindent {\bf Problem 3.} Given two positive integers $n$ and $k$,
compute the maximum integer $g(n,k)$ such that for every
graph $G$ of order $n$, if $|E(G)|\leq g(n,k)$ then $\mbox{mp}(G)\leq k$.
We remark that while Problems 1 and 3 are over all graphs, Problem 2 is over all connected graphs. Problem 2 would not be a good problem over all graphs, as we want to find the smallest $f(n,k)$ that guarantees that the matching
preclusion number of a graph on $n$ vertices with $f(n,k)$ edges is at least $k$. Since the graph consisting of two singletons and a $K_{n-2}$ has matching preclusion number 0, the bound on the number of edges would be too big, so the question is not a good one over all graphs.
In Section 2, we give sharp upper bounds of $\mbox{mp}(G)$ for a graph $G$ and show that
$$
0\leq \mbox{mp}(G)\leq \begin{cases}
n-1 & \text{if $n$ is even,}\\
2n-3 & \text{if $n$ is odd.}
\end{cases}
$$
for a graph $G$ of order $n$. In Section 3, graphs with $\mbox{mp}(G)=0,1$, even graphs with $\mbox{mp}(G)=n-k \ (n\geq 4k+6)$, and odd graphs with $\mbox{mp}(G)=2n-3,2n-4,2n-5$ are characterized, respectively. In Section 4, we study the above extremal problems on matching preclusion number.
The Nordhaus-Gaddum-type results on matching preclusion number are given in Section 5. Many researchers found Nordhaus-Gaddum-type results aid their understanding of network vulnerability parameters such as edge-toughness, scattering number and bandwidth.
Indeed this type of results are of interest to both graph theorists and network analysts. See \cite{Aouchiche} for a survey.
\section{Sharp bounds on matching preclusion number}
From complete graphs, Brigham et al. \cite{BrighamHVY} derived the following
result.
\begin{thm}{\upshape \cite{BrighamHVY}}\label{lem2-1}
For a complete graph $K_n$,
$$
\mbox{mp}(K_n)=\begin{cases}
n-1 & \text{if $n$ is even and $n\geq 2$,}\\
2n-3 & \text{if $n$ is odd and $n\geq 9$.}
\end{cases}
$$
\end{thm}
The following result, due to Dirac, is well-known.
\begin{thm}{\upshape Dirac \cite{Bondy} \ (p.~485)}\label{thA}
Let $G$ be a simple graph of order $n \ (n\geq 3)$ and minimum
degree $\delta$. If $\delta\geq \frac{n}{2}$, then $G$ is
Hamiltonian.
\end{thm}
If $G$ is an even graph (that is, $G$ has an even number of vertices), then $\mbox{mp}(G)\leq \delta(G)$.
From Theorem~\ref{lem2-1}, this bound is not true for odd graphs. We now consider
upper bounds of $\mbox{mp}(G)$ for an odd graph $G$.
Given an edge $uv\in E(G)$, the
parameter $\xi_G(uv)=d_G(u)+d_G(v)-2$ is \emph{the degree of the
edge $uv$} and the parameter $\xi(G)=\min\{\xi_G(uv)\,|\, uv \in
E\}$ is the \emph{minimum edge-degree} of $G$.
\begin{pro}\label{pro2-1}
Let $G$ be an odd graph. Then
$$
\mbox{mp}(G)\leq \xi(G)+1.
$$
Moreover, the bound is sharp.
\end{pro}
\begin{proof}
From the definition of $\xi(G)$, there exist two vertices $u,v$
such that $\xi_G(uv)=\xi(G)$. Let $X=E_G[u,N_G(u)]\cup
E_G[v,N_G(v)]\cup \{uv\}$. Since $u$ and $v$ are two isolated vertices in
$G-X$, it follows that $G-X$ contains neither perfect matching nor
almost perfect matching, and hence $\mbox{mp}(G)\leq
|X|=d_G(u)+d_G(v)-1=\xi(G)+1$.
To show the sharpness of this bound, we consider the complete graph
$K_n$, where $n$ is odd. By Theorem~\ref{lem2-1}, $\mbox{mp}(K_n)=2n-3=\xi(K_n)+1$ for $n\geq 9$.
\end{proof}
We now give a slightly more sophisticated upper bound.
\begin{pro}\label{pro2-3}
Let $G$ be an odd graph, and let $v$ be a vertex of $G$ such that
$d_G(v)=\delta(G)$. Then
$$
\mbox{mp}(G)\leq \delta(G)+\delta(G-v).
$$
Moreover, the bound is sharp.
\end{pro}
\begin{proof}
Let $u$ be a vertex of degree $\delta(G-v)$ in $G-v$. Let
$X=E_G[v,N_G(v)]\cup E_G[u,N_G(u)]$. Clearly, $u,v$ are two isolated
vertices in $G-X$, and hence there is no almost-perfect matchings in
$G-X$. So $\mbox{mp}(G)\leq d_G(v)+d_{G-v}(u)=\delta(G)+\delta(G-v)$, as
desired.
To show the sharpness of this bound, we consider the path graph $P_n
\ (n\geq 3)$, where $n$ is odd. Clearly,
$\mbox{mp}(P_n)=2=\delta(G)+\delta(G-v)$. Another example is to start with a $K_{2n+1}$ (with $n\geq 4$) and attach two pendant vertices; the resulting graph has matching preclusion number 2. (This also shows that this bound is
better than the one in Proposition~\ref{pro2-1}.)
\end{proof}
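Both upper bounds above are straightforward to evaluate from an edge list. The sketch below (our own helper names, not part of the paper) computes them for the path $P_5$, where both bounds equal $2$ and coincide with $\mbox{mp}(P_5)=2$.

```python
from collections import Counter

def degree_map(edges):
    """Vertex degrees of a graph given as an edge list."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return deg

def bound_edge_degree(edges):
    """First upper bound for odd graphs: mp(G) <= xi(G) + 1,
    where xi(G) = min over edges uv of deg(u) + deg(v) - 2."""
    deg = degree_map(edges)
    return min(deg[u] + deg[v] - 2 for u, v in edges) + 1

def bound_min_degrees(vertices, edges):
    """Second upper bound for odd graphs: mp(G) <= delta(G) + delta(G - v),
    where v is a vertex of minimum degree."""
    deg = degree_map(edges)
    v = min(vertices, key=lambda x: deg[x])
    deg_rest = degree_map([e for e in edges if v not in e])
    return deg[v] + min(deg_rest[u] for u in vertices if u != v)
```

Note that `Counter` returns 0 for vertices isolated by the removal of $v$, matching the convention that an isolated vertex has degree 0.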
Note that each graph $G$ with $n$ vertices is a spanning subgraph of
$K_n$. The following bounds are immediate by Observation
\ref{obs1-1} and Theorem~\ref{lem2-1}.
\begin{pro}\label{pro2-2}
Let $G$ be a connected graph of order $n$. Then
$$
0\leq \mbox{mp}(G)\leq \begin{cases}
n-1 & \text{if $n$ is even and $n\geq 2$,}\\
2n-3 & \text{if $n$ is odd and $n\geq 9$.}
\end{cases}
$$
Moreover, the bounds are sharp.
\end{pro}
\section{Graphs with given matching preclusion numbers}
In this section, we characterize graphs with large and small matching preclusion numbers.
\subsection{Graphs with small matching preclusion numbers}
Let $o(G)$ be the number of odd components of $G$. If a graph $G$
has a perfect matching $M$, then $o(G-S)\leq |S|$ for all
$S\subseteq V(G)$. The following result due to Tutte shows that the converse is
true.
\begin{thm}{\upshape \cite{Tutte}}\label{thC}
A graph $G$ has a perfect matching if and only if every subset $S$
of vertices satisfies Tutte's condition, that is, $o(G-S)\leq |S|$.
\end{thm}
Berge \cite{Berge} obtained a similar result for almost-perfect matchings.
\begin{thm}{\upshape \cite{Berge}}\label{thD}
A graph $G$ has an almost-perfect matching if and only if every subset $S$ of vertices satisfies Berge's condition, that is, $o(G-S)\leq |S|+1$.
\end{thm}
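On small graphs, Tutte's and Berge's conditions can be checked mechanically by enumerating every vertex subset $S$. The sketch below (our own code, exponential in the number of vertices and purely illustrative) uses a small union-find to count odd components; slack $0$ gives Tutte's condition and slack $1$ gives Berge's.

```python
from collections import Counter
from itertools import combinations

def odd_components(vertices, edges):
    """Count connected components of odd order, via a small union-find."""
    parent = {v: v for v in vertices}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    sizes = Counter(find(v) for v in vertices)
    return sum(1 for s in sizes.values() if s % 2 == 1)

def check_condition(vertices, edges, slack):
    """o(G - S) <= |S| + slack for every S: slack 0 is Tutte's condition
    (perfect matching), slack 1 is Berge's (almost-perfect matching)."""
    vs = list(vertices)
    for k in range(len(vs) + 1):
        for S in combinations(vs, k):
            banned = set(S)
            rest = [v for v in vs if v not in banned]
            kept = [(u, w) for u, w in edges
                    if u not in banned and w not in banned]
            if odd_components(rest, kept) > k + slack:
                return False
    return True
```

For instance, $K_4$ satisfies Tutte's condition, the star $K_{1,3}$ fails it (delete the center), and $P_3$ satisfies Berge's condition.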
Clearly the above theorems can be used to characterize graphs with matching preclusion number at most $k$ in the following way for even graphs: $\mbox{mp}(G)\leq k$ if and only if there exist $T\subseteq E(G)$ with
$|T|\leq k$ and $S\subseteq V(G-T)$ such that $o((G-T)-S)>|S|$. Similarly for odd graphs. From an algorithmic point of view, the theorems of Tutte and Berge are important because such conditions formed a basis for polynomial time algorithms for determining whether a graph has a perfect matching or an almost-perfect matching. Thus if $k$ is fixed, then there is a polynomial time algorithm to determine whether $\mbox{mp}(G)\leq k$. If $k$ is not fixed, then the problem of determining whether
$\mbox{mp}(G)\leq k$ is {\cal NP}-complete; see \cite{LacroixMMP}.
The following simple result gives a bound on the smallest degree if $\mbox{mp}(G)=k$.
\begin{pro}\label{pro3-2}
Let $G$ be a graph of order $n$ and $k\geq 1$. If $\mbox{mp}(G)=k$, then $\delta(G)\leq \frac{n}{2}+k-2$.
\end{pro}
\begin{proof}
Suppose $\delta(G)\geq \frac{n}{2}+k-1$. Let $F$ be an optimal matching preclusion set and let $e\in F$. Then $\delta(G-(F\setminus\{e\}))\geq \frac{n}{2}$. From Theorem \ref{thA}, $G-(F\setminus\{e\})$
contains a Hamiltonian cycle. Thus $G-F$ contains a
perfect matching or an almost-perfect matching, which is a contradiction.
\end{proof}
\subsection{Graphs with large matching preclusion number}
We first characterize even graphs.
\begin{pro}\label{pro3-4}
Let $G$ be an even graph of order $n\geq 2$. Then $\mbox{mp}(G)=n-1$
if and only if $G$ is a complete graph.
\end{pro}
\begin{proof}
From Theorem~\ref{lem2-1}, if $G$ is a complete graph of order $n$, then $\mbox{mp}(G)=n-1$. Conversely, we suppose $\mbox{mp}(G)=n-1$. Then $\delta(G)\geq \mbox{mp}(G)=n-1$, and hence $G$ is a complete graph, as desired.
\end{proof}
\begin{thm}\label{th3-3}
Let $G$ be an even graph of order $n\geq 4$. Then $\mbox{mp}(G)=n-2$
if and only if $\delta(G)=n-2$.
\end{thm}
\begin{proof}
If $\mbox{mp}(G)=n-2$, then $\delta(G)\geq \mbox{mp}(G)=n-2$, and hence
$\delta(G)=n-2$ by Proposition \ref{pro3-4}. Conversely, if
$\delta(G)=n-2$, then $\mbox{mp}(G)\leq n-2$. We need to show $\mbox{mp}(G)\geq
n-2$. It suffices to prove that for any $X\subseteq E(G)$ and $|X|=n-3$,
$G-X$ has a perfect matching. Since $\delta(G)=n-2$, it follows that
$G$ is a graph obtained from $K_{n}$ by deleting a perfect matching. Note
that there are $n-2$ edge-disjoint perfect matchings in $G$. Since we
only delete $n-3$ edges from $G-X$, it follows that $G-X$ has a
perfect matching. So $\mbox{mp}(G)\geq n-2$. From the above argument, we
conclude that $\mbox{mp}(G)=n-2$.
\end{proof}
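The proof above uses the fact that $K_n$ minus a perfect matching contains $n-2$ edge-disjoint perfect matchings; this follows from the classical 1-factorization of $K_n$ ($n$ even) into $n-1$ perfect matchings. The standard round-robin (circle method) construction below exhibits such a factorization; it is a well-known construction, sketched here for illustration and not part of the paper's argument.

```python
def one_factorization(n):
    """Round-robin (circle method) 1-factorization of K_n for even n:
    returns n - 1 pairwise edge-disjoint perfect matchings."""
    assert n % 2 == 0
    rounds, circle = [], list(range(1, n))
    for _ in range(n - 1):
        pairs = [(0, circle[0])]                      # fixed vertex 0
        pairs += [(circle[i], circle[-i]) for i in range(1, n // 2)]
        rounds.append(pairs)
        circle = circle[1:] + circle[:1]              # rotate the rest
    return rounds

rounds = one_factorization(6)   # 5 edge-disjoint perfect matchings of K_6
```

Deleting any one of these matchings from $K_n$ leaves the remaining $n-2$ intact, which is exactly the resource the proof draws on.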
For $n\geq 4k+6$, we have the following general result.
\begin{thm}\label{th3-5a}
Let $n,k$ be two integers with $n\geq 4k+6$, and let $G$ be an even
graph of order $n$. Then $\mbox{mp}(G)=n-k$ if and only if $\delta(G)=n-k$.
\end{thm}
\begin{proof}
Suppose $\delta(G)=n-k$. Then $\mbox{mp}(G)\leq n-k$. We need to show
$\mbox{mp}(G)\geq n-k$. It suffices to prove that for every $X\subseteq E(G)$ and
$|X|=n-k-1$, $G-X$ has a perfect matching. We
first suppose that $deg_{G[X]}(v)\leq \frac{n-2k}{2}$ for every $v\in
V(G)$. Then
$$
deg_{G-X}(v)=deg_{G}(v)-deg_{G[X]}(v)\geq (n-k)-\frac{n-2k}{2}=\frac{n}{2},
$$
and hence $\delta(G-X)\geq \frac{n}{2}$. From Theorem \ref{thA},
$G-X$ contains a Hamiltonian cycle, and hence there is a perfect
matching in $G-X$. Next, we suppose that there exists a vertex $v\in
V(G)$ such that $deg_{G[X]}(v)\geq \frac{n-2k+2}{2}$. Since
$deg_{G-X}(v)\geq deg_{G}(v)-|X|\geq 1$, it follows that there
exists a vertex $u\in V(G)$ such that $vu\in E(G-X)$. Let
$G_1=G-\{u,v\}$. Clearly, $|V(G_1)|=n-2$ is even, and $|X\cap
E(G_1)|\leq n-k-1-\frac{n-2k+2}{2}=\frac{n-4}{2}$. Since $|X\cap
E(G_1)|\leq \frac{n-4}{2}$, it follows that for any vertex pair
$s,t\in V(G_1)$, $deg_{G_1-X}(s)+deg_{G_1-X}(t)\geq
2(n-k-2)-\frac{n-4}{2}-1\geq n$ since $n\geq 4k+6$. So $G_1-X$ contains a Hamiltonian
cycle, and hence there is a perfect matching in $G_1-X$, say $M'$.
Clearly, $M'\cup \{uv\}$ is a perfect matching of $G-X$. From the
above argument, we conclude that $\mbox{mp}(G)=n-k$.
Conversely, we suppose $\mbox{mp}(G)=n-k$. We want to show that
$\delta(G)=n-k$. Furthermore, by induction on $k$, we prove that
$\mbox{mp}(G)=n-k$ if and only if $\delta(G)=n-k$. From Proposition
\ref{pro3-4} and Theorem \ref{th3-3}, the result
follows for $k=1,2$. Suppose that the argument is true for every
integer $k' \ (k'<k)$, that is, $\mbox{mp}(H)=n-k'$ if and only if
$\delta(H)=n-k'$. For integer $k$, it follows from Observation
\ref{obs1-1} that $\delta(G)\geq \mbox{mp}(G)=n-k$. We need to show that
$\delta(G)=n-k$. Assume, on the contrary, that $\delta(G)>n-k$. Let
$\delta(G)=n-k+t=n-(k-t)$, where $t\geq 1$. Since $k-t<k$, it
follows from the induction hypothesis that $\mbox{mp}(G)=n-k+t>n-k$, which
contradicts $\mbox{mp}(G)=n-k$. So $\delta(G)=n-k$.
\end{proof}
Next, we characterize odd graphs with $\mbox{mp}(G)=2n-3,2n-4,2n-5$, respectively.
\begin{pro}\label{pro3-5}
Let $G$ be an odd graph of order $n\geq 9$. Then $\mbox{mp}(G)=2n-3$ if and only if $G$ is a complete graph.
\end{pro}
\begin{proof}
From Theorem~\ref{lem2-1}, if $G$ is an odd complete graph of order
$n$, then $\mbox{mp}(G)=2n-3$. Conversely, we suppose $\mbox{mp}(G)=2n-3$. If $G$
is not a complete graph, then there exists an edge $e=uv\notin
E(G)$. Let $X=E_G[u,N_G(u)]\cup E_G[v,N_G(v)]$. Since $u,v$ are two
isolated vertices in $G-X$, it follows that $G-X$ has no
almost-perfect matchings, and hence $\mbox{mp}(G)\leq |X|\leq 2n-4$, a
contradiction. So $G$ is a complete graph, as desired.
\end{proof}
\begin{thm}\label{th3-5}
Let $G$ be an odd graph of order $n\geq 9$. Then $\mbox{mp}(G)=2n-4$ if and only if $G=K_n-e$, where $e\in E(K_n)$.
\end{thm}
\begin{proof}
If $\mbox{mp}(G)=2n-4$, then it follows from Proposition \ref{pro3-5} that
$\delta(G)\leq n-2$. We claim that $G=K_n-e$. Assume, on the
contrary, that $G\neq K_n-e$. Then $\overline{G}$ contains $P_3=vuw$
or two independent edges $xy,uv$ as its subgraph. For the former
case, let $X=E_G[u,N_G(u)]\cup E_G[v,N_G(v)]$, and $|X|\leq
2(n-2)-1=2n-5$. Since $u,v$ are two isolated vertices in $G-X$, it
follows that $\mbox{mp}(G)\leq |X|\leq 2n-5$. For the latter case, let
$Y=E_G[u,N_G(u)]\cup E_G[x,N_G(x)]$. Similarly, $\mbox{mp}(G)\leq |Y|\leq
2(n-2)+1-2=2n-5$, a contradiction. So $G=K_n-e$.
Conversely, if $G=K_n-e$, then $\mbox{mp}(G)\leq 2n-4$. We need to show $\mbox{mp}(G)\geq 2n-4$. It suffices to prove that for every $X\subseteq E(G)$ with $|X|=2n-5$, $G-X$ has an almost-perfect matching. Since $G=K_n-e$, it follows that $G-X=K_n-(X\cup \{e\})$ is a graph obtained from $K_{n}$ by deleting at most $2n-4$ edges. Since $\mbox{mp}(K_n)=2n-3$ by Theorem~\ref{lem2-1}, $G-X$ has an almost-perfect matching. So $\mbox{mp}(G)\geq 2n-4$, and together with the above argument, $\mbox{mp}(G)=2n-4$.
\end{proof}
\begin{thm}\label{th3-6}
Let $G$ be an odd graph of order $n\geq 13$. Then $\mbox{mp}(G)=2n-5$
if and only if $G$ satisfies one of the following conditions.
$(1)$ $\delta(G)=n-2$ and $G\neq K_n-e$;
$(2)$ $G=K_n-E(P_3)$.
\end{thm}
\begin{proof}
Suppose $\mbox{mp}(G)=2n-5$. Then we have the following claims.
\textbf{Claim 1.} $\delta(G)\geq n-3$.
\noindent \textbf{Proof of Claim 1.} Assume, on the contrary, that
$\delta(G)\leq n-4$. Then there exists a vertex $u$ in $G$ such that
$d_G(u)\leq n-4$. Pick $v\in N_G(u)$. Then $uv\in E(G)$. Let
$X=E_G[u,N_G(u)]\cup E_G[v,N_G(v)]$. Then $|X|\leq
(n-4)+(n-2)=2n-6$. Clearly, $u,v$ are two isolated vertices in
$G-X$, and there are no almost-perfect matchings in $G-X$. So
$\mbox{mp}(G)\leq |X|\leq 2n-6$, which is a contradiction. $\Diamond$
By Proposition \ref{pro3-5}, Theorem \ref{th3-5} and Claim 1, we
have $\delta(G)=n-2$ and $G\neq K_n-e$, or $\delta(G)= n-3$. Thus we may assume that
$\delta(G)=n-3$. Furthermore, we have
the following claim.
\textbf{Claim 2.} $G=K_n-E(P_3)$.
\noindent \textbf{Proof of Claim 2.} Assume, on the contrary, that
$G\neq K_n-E(P_3)$. Since $\delta(G)=n-3$, it follows that $G$ is a
spanning subgraph of $K_n-E(P_3)$, where $P_3=uvw$ is centered at $v$ and $d_G(v)=n-3$. Moreover, at least one additional edge
$e=xy$ is deleted. Clearly, $xy\neq uv$ and $xy \neq vw$. Suppose
$u=x$. Then $d_G(u)\leq n-3$ and $d_G(v)\leq n-3$. Let $X=E_G[u,N_G(u)]\cup
E_G[v,N_G(v)]$. Then $|X|\leq (n-3)+(n-3)=2n-6$.
Since $G-X$ has two isolated vertices,
there are no almost-perfect matchings in $G-X$. So $\mbox{mp}(G)\leq |X|\leq 2n-6$,
which is a contradiction. Suppose $x,y\notin \{u,w\}$. Let
$X=E_G[v,N_G(v)]\cup E_G[x,N_G(x)]$. Thus $|X|\leq (n-3)+(n-2)-1=2n-6$ since the edge $xv$ is incident to both $x$ and $v$. As before, $G-X$ has no
almost-perfect matchings since $v$ and $x$ are isolated. Then $\mbox{mp}(G)\leq |X|\leq 2n-6$, which is a
contradiction. $\Diamond$
In summary, we have $G=K_n-E(P_3)$, or $\delta(G)=n-2$ and $G\neq
K_n-e$, as required.
Conversely, if $G=K_n-E(P_3)$, then it follows from Proposition
\ref{pro3-5} and Theorem \ref{th3-5} that $\mbox{mp}(G)\leq 2n-5$. We need
to show $\mbox{mp}(G)\geq 2n-5$. It suffices to prove that for every $X\subseteq
E(G)$ and $|X|=2n-6$, $G-X$ has an almost-perfect matching. Since
$G=K_n-E(P_3)$, it follows that $G-X=K_n-(X\cup E(P_3))$ is a graph
obtained from $K_{n}$ by deleting at most $2n-4$ edges. By
Proposition \ref{pro2-2}, $G-X$ has an almost-perfect matching, as
desired.
Suppose that $\delta(G)=n-2$ and $G\neq K_n-e$.
Since $\delta(G)=n-2$, $G=K_n-L$ where $L$ is a matching of $K_n$ of size at least 2.
By Proposition
\ref{pro3-5} and Theorem \ref{th3-5}, $\mbox{mp}(G)\leq 2n-5$. We
need to show $\mbox{mp}(G)\geq 2n-5$. It suffices to prove that for every
$X\subseteq E(G)$ and $|X|=2n-6$, $G-X$ has an almost-perfect
matching. If $G-X$ has an isolated vertex, say $v$, then $n-2\leq
|X\cap E_G[v,N_G(v)]|\leq n-1$, and hence $|X\cap E(G-v)|\leq n-4$.
We note that there are $n-2$ edge-disjoint perfect matchings in $K_{n-1}$.
In fact, we can say that given a perfect matching $N$ in $K_{n-1}$, the edges of $K_{n-1}-N$ can be decomposed into $n-3$ edge-disjoint
perfect matchings. We observed earlier that $G=K_n-L$ where $L$ is a matching of $K_n$ of size at least 2. Thus $G-v=K_{n-1}-L'$ where $L'$
is a matching of $K_{n-1}$. Extend $L'$ to $N$, a perfect matching of $K_{n-1}$. Now $G-v-N$ has
$n-3$ edge-disjoint
perfect matchings, and hence $G-v-N-X$ contains at least
$n-3-(n-4)=1$ perfect matching, say $M'$. Clearly, $M'$ is a perfect matching of $G-v-X$ and $M'$ is
an almost-perfect matching of $G-X$ missing $v$.
From now on, we may assume that $G-X$ has no isolated vertices. If
$deg_{G[X]}(v)\leq \frac{n-5}{2}$ for every $v\in V(G)$, then
$$
deg_{G-X}(v)= deg_{G}(v)-deg_{G[X]}(v)\geq (n-2)-\frac{n-5}{2}=\frac{n+1}{2}>\frac{n}{2},
$$
and hence $\delta(G-X)> \frac{n}{2}$. By Theorem \ref{thA}, $G-X$
contains a Hamiltonian cycle, and hence there is an almost-perfect
matching in $G-X$, as desired.
Suppose that there exists a vertex $v\in V(G)$ such that
$deg_{G[X]}(v)\geq \frac{n-3}{2}$. Since $G-X$ has no isolated
vertex, it follows that there exists a vertex $u\in V(G)$ such that
$uv\in E(G-X)$. Let $G_1=G-\{u,v\}$. Note that $|V(G_1)|=n-2$ is odd,
and $|X\cap E(G_1)|\leq 2n-6-\frac{n-3}{2}=\frac{3n-9}{2}$. If
$G_1-X$ has an isolated vertex, say $w$, then $n-4\leq |X\cap
E_{G_1}[w,N_{G_1}(w)]|\leq n-3$, and hence $|X\cap E(G_1-w)|\leq
\frac{3n-9}{2}-(n-4)=\frac{n-1}{2}$.
Recall that $G=K_n-L$ where $L$ is a matching of $K_n$ of size at least 2.
Let $G_2=G-\{u,v,w\}=K_n-L-\{u,v,w\}$. Let $T$ be the (possibly empty) matching of $K_n-\{u,v,w\}$ induced from $L$. Now $K_n-\{u,v,w\}$ has $n-4$ edge-disjoint perfect matchings, one of which contains $T$.
Thus $G_2=G-\{u,v,w\}=K_n-L-\{u,v,w\}$ has at least $n-5$ edge-disjoint perfect matchings. Since $n\geq 13$, $n-5-(n-1)/2 \geq 1$.
Thus
$G-X-\{u,v,w\}$ contains a perfect matching, say $M'$.
Clearly, $M'\cup \{uv\}$ is an almost-perfect matching of
$G-X$ missing $w$.
From now on, we may assume that $G_1-X$ has no isolated vertices. If
$deg_{G_1[X]}(x)\leq \frac{n-9}{2}$ for every $x\in V(G_1)$, then
$$
deg_{G_1-X}(x)\geq deg_{G_1}(x)-deg_{G_1[X]}(x)\geq (n-4)-\frac{n-9}{2}=\frac{n+1}{2}>\frac{n}{2},
$$
and hence $\delta(G_1-X)> \frac{n}{2}$. By Theorem \ref{thA}, $G_1-X$ contains a Hamiltonian cycle, and hence there is an almost-perfect matching in $G_1-X$, say $M'$. Clearly, $M'\cup \{uv\}$ is an almost-perfect matching of $G-X$.
Suppose that there exists a vertex $s\in V(G_1)$ such that
$deg_{G_1[X]}(s)\geq \frac{n-7}{2}$. Since $G_1$ has no isolated
vertices, it follows that there exists a vertex $t\in V(G_1)$ such
that $st\in E(G_1-X)$. Let $G_2=G_1-\{s,t\}$. Note that $|V(G_2)|=n-4$
is odd, and $|X\cap E(G_2)|\leq
2n-6-\frac{n-3}{2}-\frac{n-7}{2}=n-1$.
Observe that $G_2$ is a graph obtained
from $K_{n-4}$ by deleting at most $\frac{n-5}{2}$ edges. Then
$G_2-X$ is a graph obtained from $K_{n-4}$ by deleting at most
$\frac{n-5}{2}+n-1=\frac{3n-7}{2}< 2(n-4)-3$ edges. By Theorem~\ref{lem2-1}, there is an almost-perfect matching in $G_2-X$, say
$M'$. (Here we require $n-4\geq 9$, which is satisfied as $n\geq 13$.) Clearly, $M'\cup \{uv,st\}$ is an almost-perfect matching of
$G-X$.
We may now conclude that $\mbox{mp}(G)=2n-5$.
\end{proof}
We remark that in the above proof there is one place where we require $n\geq 13$, namely in the second-to-last paragraph when we apply Theorem~\ref{lem2-1}. Although $G_2$ is a graph obtained
from $K_{n-4}$ by deleting at most $\frac{n-5}{2}$ edges, these edges are independent. Thus it may be possible to exploit this structure to replace the $13$ in the $n\geq 13$ requirement by a smaller number. We feel that it is not
worthwhile to lengthen this discussion for this potentially marginal improvement.
\section{Extremal problems on matching preclusion number}
We now consider the three extremal problems that we stated in the Introduction. We first give the results for $s(n,k)$.
\begin{lem}\label{lem4-1}
Let $n,k$ be two nonnegative integers such that $n\geq 3$ is odd. Then
$(1)$ $s(n,0)=0$;
$(2)$ $s(n,1)=\frac{n-1}{2}$;
$(3)$ $s(n,2)=n-1$;
$(4)$ $s(n,3)=n$.
\end{lem}
\begin{proof}
$(1)$ Let $H_1$ be the graph of order $n$ with no edges. Clearly,
$\mbox{mp}(H_1)=0$. Then $s(n,0)\leq 0$, and so $s(n,0)=0$.
$(2)$ Let $H_2$ be a graph of order $n$ with $\frac{n-1}{2}$
independent edges. Clearly, $\mbox{mp}(H_2)=1$ and $H_2$ has $\frac{n-1}{2}$ edges.
Then $s(n,1)\leq \frac{n-1}{2}$. Conversely, let $G$ be an odd graph
of order $n$ such that $\mbox{mp}(G)=1$. Since $G$ contains an
almost-perfect matching, it follows that $G$ has at least $\frac{n-1}{2}$ edges,
and hence $s(n,1)\geq \frac{n-1}{2}$. So $s(n,1)=\frac{n-1}{2}$.
$(3)$ Let $H_3$ be a path of order $n$. Since $n$ is odd, it follows
that $\mbox{mp}(H_3)=2$ and $H_3$ has $n-1$ edges. Then $s(n,2)\leq n-1$.
We now prove that this inequality holds as equality.
Assume, on the contrary, that
$s(n,2)\leq n-2$. Then there exists an odd graph $G$ of order $n$
with at most $n-2$ edges
such that $\mbox{mp}(G)=2$. Clearly, $G$ is not
connected. Let $C_1,C_2,\ldots,C_r$ be the connected components in
$G$. If two of $C_1,C_2,\ldots,C_r$ are odd components, then
$\mbox{mp}(G)=0$, which is a contradiction. So there is at most one odd component
in $G$. Since $|V(G)|$ is odd, it follows that there is exactly one
odd component in $G$. Without loss of generality, we may assume that
$|V(C_1)|$ is odd, and $|V(C_i)|$ is even for $2\leq i\leq r$. We
claim that $\delta(C_i)\geq 2$ for $2\leq i\leq r$. Assume, on the
contrary, that there exists a component $C_j$ such that
$\delta(C_j)=1$. Then there exists a vertex $v$ such that
$d_G(v)=1$. Let $uv$ be the pendant edge of $C_j$ at $v$. Clearly, $v$ and
$C_1$ are two odd components of $G-uv$, which contradicts the fact
that $\mbox{mp}(G)=2$. So $\delta(C_i)\geq 2$ for $2\leq i\leq r$. Then
$|E(G)|=\sum_{i=1}^r|E(C_i)|\geq
(|V(C_1)|-1)+\sum_{i=2}^r|V(C_i)|=n-1$, which is a contradiction.
$(4)$ Let $H_4$ be a cycle of order $n$. Since $n$ is odd, it
follows that $\mbox{mp}(H_4)=3$ and $|E(H_4)|=n$. Then $s(n,3)\leq n$.
We now prove that this inequality holds as equality.
Assume, on the contrary, that
$s(n,3)\leq n-1$. Since $s(n,2)=n-1$, it follows that $s(n,3)\geq
s(n,2)=n-1$, and hence $s(n,3)=n-1$. Then there exists an odd graph
$G$ of order $n$ such that $\mbox{mp}(G)=3$ and $|E(G)|=s(n,3)=n-1$. If $G$
is connected, then $G$ is a tree; deleting the two pendant edges at two leaves of $G$ isolates two vertices, so $\mbox{mp}(G)\leq 2$, which is a
contradiction. We now assume that $G$ is not connected. Let
$C_1,C_2,\ldots,C_r$ be the connected components in $G$. If two of
$C_1,C_2,\ldots,C_r$ are odd components, then $\mbox{mp}(G)=0$, which is a
contradiction. So there is at most one odd component in $G$. Since
$|V(G)|$ is odd, it follows that there is exactly one odd component in
$G$. Without loss of generality, we may assume that $|V(C_1)|$ is odd,
and $|V(C_i)|$ is even for $2\leq i\leq r$. Furthermore, we have the
following claim.
\textbf{Claim 1.} $\delta(C_i)\geq 3$ for $2\leq i\leq r$.
\noindent \textbf{Proof of Claim 1.} Assume, on the contrary, that
there exists a component $C_j$ with $2\leq j\leq r$ such that
$\delta(C_j)\leq 2$. Then there exists a vertex $v$ in $C_j$ such that
$d_G(v)\leq 2$. Let $X=E_G[v,N_G(v)]$. Clearly, $v$ and $C_1$ are
two odd components of $G-X$, which contradicts the fact that
$\mbox{mp}(G)=3$ as $|X|\leq 2$. $\Diamond$
From Claim 1, we have $\delta(C_i)\geq 3$ for $2\leq i\leq r$. Then
$2|E(G)|=2\sum_{i=1}^r|E(C_i)|\geq 2(|V(C_1)|-1)+3\sum_{i=2}^r|V(C_i)|=2(|V(C_1)|-1)+3(n-|V(C_1)|)$, and hence $|E(G)|\geq n+\frac{n-|V(C_1)|-2}{2}\geq n$, which is a contradiction.
\end{proof}
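The four witness graphs in the proof (the empty graph, a maximum induced matching, a path, and a cycle) can be checked mechanically for a small case. The sketch below (our own illustration, with ad hoc helper names) verifies $\mbox{mp}=0,1,2,3$ together with the edge counts $0$, $\frac{n-1}{2}$, $n-1$, $n$ at $n=5$.

```python
from itertools import combinations

def max_matching(edges):
    """Brute-force maximum matching size of a tiny graph."""
    edges = list(edges)
    best = 0
    def grow(i, used, size):
        nonlocal best
        best = max(best, size)
        for j in range(i, len(edges)):
            u, v = edges[j]
            if u not in used and v not in used:
                grow(j + 1, used | {u, v}, size + 1)
    grow(0, set(), 0)
    return best

def mp(edges, n):
    """Smallest number of edges whose deletion leaves no matching
    of size n//2, i.e. no (almost-)perfect matching."""
    edges = list(edges)
    for k in range(len(edges) + 1):
        for deleted in combinations(range(len(edges)), k):
            rest = [e for i, e in enumerate(edges) if i not in set(deleted)]
            if max_matching(rest) < n // 2:
                return k

empty = []                                   # H_1: no edges
matching = [(0, 1), (2, 3)]                  # H_2: (n-1)/2 independent edges
path = [(0, 1), (1, 2), (2, 3), (3, 4)]     # H_3: P_5
cycle = path + [(4, 0)]                      # H_4: C_5
for g in (empty, matching, path, cycle):
    print(len(g), mp(g, 5))  # edge count, matching preclusion number
```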
\begin{thm}\label{pro4-3}
Let $n,k$ be two positive integers. Then
$(1)$ If $n\geq 2$ is even and $0\leq k\leq n-1$, then
$s(n,k)=\frac{nk}{2}$.
$(2)$ If $n\geq 5$ is odd and $4\leq k\leq 2n-6$, then
$$
\frac{n(n-1)k}{4n-6} \leq s(n,k)\leq
\min\left\{\left\lceil\frac{k}{3}\right\rceil,\frac{n-1}{2}\right\}n.
$$
Moreover, if, in addition, $n\geq 13$, then $s(n,2n-3)=\frac{n(n-1)}{2}$,
$s(n,2n-4)=\frac{n(n-1)}{2}-1$,
$s(n,2n-5)=\frac{n(n-1)}{2}-\frac{n-1}{2}$, $s(n,0)=0$,
$s(n,1)=\frac{n-1}{2}$, $s(n,2)=n-1$, and $s(n,3)=n$.
\end{thm}
\begin{proof}
$(1)$ Let $G$ be a spanning subgraph of $K_n$ obtained from $k$
edge-disjoint perfect matchings of $K_n$. Clearly,
$|E(G)|=\frac{nk}{2}$ and $\mbox{mp}(G)=k$, implying $s(n,k)\leq
\frac{nk}{2}$. Since $\mbox{mp}(G)=k$, it follows that $\delta(G)\geq
\mbox{mp}(G)\geq k$, and hence $s(n,k)\geq \frac{nk}{2}$. So $s(n,k)=\frac{nk}{2}$.
$(2)$ For odd $n$ and $4\leq k\leq 2n-6$, to show the upper bound,
we let $G$ be a spanning subgraph of $K_n$ derived from
$\min\{\lceil\frac{k}{3}\rceil,\frac{n-1}{2}\}$ edge-disjoint
spanning Hamiltonian cycles of $K_n$. Clearly, $\mbox{mp}(G)\geq k$ and
$|E(G)|=\min\{\lceil\frac{k}{3}\rceil,\frac{n-1}{2}\}n$, implying
$s(n,k)\leq \min\{\lceil\frac{k}{3}\rceil,\frac{n-1}{2}\}n$.
We now show the lower bound. Let $G$ be a graph of order $n$ with
$\mbox{mp}(G)=k$. Set $V(G)=\{v_1,v_2,\ldots,v_n\}$. For
$v_i,v_j\in V(G)$, we have the following facts.
\begin{itemize}
\item If $v_iv_j \notin E(G)$, then
$d_G(v_i)+d_G(v_j)\geq k$;
\item If $v_iv_j \in E(G)$, then $d_G(v_i)+d_G(v_j)\geq
k+1$.
\end{itemize}
Without loss of generality, let $v_1v_i\in E(G)$ for $2\leq i\leq
x$; $v_1v_i\notin E(G)$ for $x+1\leq i\leq n$. Clearly, $x-1=d_G(v_1)$.
Observe that $d_G(v_1)+d_G(v_i)\geq k+1$ for $2\leq i\leq x$;
$d_G(v_1)+d_G(v_j)\geq k$ for $x+1\leq j\leq n$. Then
$(n-1)d_G(v_1)+\sum_{1\leq i\leq n, \ i\neq 1}d_G(v_i)\geq
(n-1)k+x-1=(n-1)k+d_G(v_1)$. Similarly, we have
\begin{itemize}
\item[] $(n-1)d_G(v_2)+\sum_{1\leq i\leq n, \ i\neq 2}d_G(v_i)\geq (n-1)k+d_G(v_2)$;
\item[] \ \ \ \ $\vdots$
\item[] $(n-1)d_G(v_n)+\sum_{1\leq i\leq n, \ i\neq n}d_G(v_i)\geq (n-1)k+d_G(v_n)$.
\end{itemize}
Then
$$
(n-1)\sum_{1\leq i\leq n}d_G(v_i)+(n-1)\sum_{1\leq i\leq
n}d_G(v_i)\geq n(n-1)k+\sum_{1\leq i\leq n}d_G(v_i),
$$
that is, $2(n-1)\cdot 2|E(G)|\geq n(n-1)k+2|E(G)|$. So $|E(G)|\geq
\frac{n(n-1)k}{4n-6}$, and hence $s(n,k)\geq \frac{n(n-1)k}{4n-6}$,
as desired.
By Proposition \ref{pro3-5}, Theorems \ref{th3-5} and \ref{th3-6} (where we need $n\geq 13$),
we have $s(n,2n-3)=\frac{n(n-1)}{2}$,
$s(n,2n-4)=\frac{n(n-1)}{2}-1$, and
$s(n,2n-5)=\frac{n(n-1)}{2}-\frac{n-1}{2}$. By Lemma \ref{lem4-1},
$s(n,0)=0$, $s(n,1)=\frac{n-1}{2}$, $s(n,2)=n-1$, and $s(n,3)=n$.
\end{proof}
The following observation is immediate.
\begin{obs}
Let $n,k$ be two positive integers. Then $g(n,k)=s(n,k+1)-1$.
\end{obs}
By the above observation, we have the following result for
$g(n,k)$.
\begin{cor}\label{pro4-2}
Let $n,k$ be two positive integers. Then
$(1)$ If $n\geq 4$ is even and $0\leq k\leq n-2$, then
$g(n,k)=\frac{n(k+1)}{2}-1$.
$(2)$ If $n\geq 5$ is odd and $3\leq k\leq 2n-7$, then
$$
\frac{n(n-1)(k+1)}{4n-6}-1 \leq g(n,k)\leq
\min\left\{\left\lceil\frac{k+1}{3}\right\rceil,\frac{n-1}{2}\right\}n-1.
$$
Moreover, if, in addition, $n\geq 15$, then $g(n,2n-3)=\frac{n(n-1)}{2}$;
$g(n,2n-4)=\frac{n(n-1)}{2}-1$; $g(n,2n-5)=\frac{n(n-1)}{2}-2$;
$g(n,2n-6)=\frac{n(n-1)}{2}-\frac{n-1}{2}-1$;
$g(n,0)=\frac{n-1}{2}-1$; $g(n,1)=n-2$; $g(n,2)=n-1$.
\end{cor}
Next, we give the exact value of $f(n,k)$.
\begin{thm}\label{pro4-1}
Let $n,k$ be two positive integers. Then
$(1)$ If $n\geq 2$ is even and $1\leq k\leq n-1$, then
$f(n,k)=\binom{n-1}{2}+k$.
$(2)$ If $n\geq 3$ is odd and $2\leq k\leq 2n-3$, then $f(n,k)=\binom{n-2}{2}+k$.
\end{thm}
\begin{proof}
$(1)$ Let $G$ be a graph with $n$ vertices such that
$|E(G)|\geq \binom{n-1}{2}+k$. Clearly, $|E(\overline{G})|\leq
n-k-1$. Since there are $n-1$ edge-disjoint perfect matchings in
$K_n$, it follows that $G$ contains at least $(n-1)-(n-k-1)=k$
edge-disjoint perfect matchings, and hence $\mbox{mp}(G)\geq k$. So
$f(n,k)\leq \binom{n-1}{2}+k$. To show $f(n,k)\geq
\binom{n-1}{2}+k$, we construct $G_k$ as follows: Let $A_k$ be the graph with two components, $K_1$ and $K_{n-k}$, and $B_k$ be $K_{k-1}$; then $G_k$ is obtained by taking $A_k$ and $B_k$, and by adding all possible
edges between the vertices of $A_k$ and the vertices of $B_k$. (In other words, $G_k$ is the join of $A_k$ and $B_k$, that is,
$G_k=K_{k-1}\vee (K_{n-k}\cup K_1)$.)
Clearly, $G_k$ is a connected graph on $n$ vertices,
$|E(G_k)|=\binom{n-1}{2}+k-1$, and $\mbox{mp}(G_k)<k$. So
$f(n,k)=\binom{n-1}{2}+k$.
$(2)$ Let $G$ be a graph with $n$ vertices such that
$|E(G)|\geq \binom{n-2}{2}+k$. Clearly, $|E(\overline{G})|\leq
2n-k-3$. For any $X\subseteq E(G)$, $|X|=k-1$, we have
$|E(\overline{G-X})|\leq 2n-4$. Since $\mbox{mp}(K_n)=2n-3$, it follows
that $G-X$ has a perfect matching, and hence $\mbox{mp}(G)\geq k$. So
$f(n,k)\leq \binom{n-2}{2}+k$. To show $f(n,k)\geq
\binom{n-2}{2}+k$, we let $G_k$ be the graph obtained from $K_{n-2}$
by adding two vertices $u,v$ and the edges in
$E_{G_k}[u,K_{n-2}]\cup E_{G_k}[v,K_{n-2}]$ such that
$|E_{G_k}[u,K_{n-2}]|+|E_{G_k}[v,K_{n-2}]|=k-1$,
$|E_{G_k}[u,K_{n-2}]|\geq 1$, and $|E_{G_k}[v,K_{n-2}]|\geq 1$.
Clearly, $G_k$ is a connected graph on $n$ vertices,
$|E(G_k)|=\binom{n-2}{2}+k-1$, and $\mbox{mp}(G_k)<k$. So
$f(n,k)=\binom{n-2}{2}+k$.
\end{proof}
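The extremal construction in part $(1)$ can also be checked directly for a small case. The sketch below (our own illustration, with ad hoc helper names) builds $G_2=K_1\vee(K_4\cup K_1)$ for $n=6$, $k=2$ and confirms $|E(G_2)|=\binom{n-1}{2}+k-1=11$ and $\mbox{mp}(G_2)=1<k$.

```python
from itertools import combinations

def max_matching(edges):
    """Brute-force maximum matching size of a tiny graph."""
    edges = list(edges)
    best = 0
    def grow(i, used, size):
        nonlocal best
        best = max(best, size)
        for j in range(i, len(edges)):
            u, v = edges[j]
            if u not in used and v not in used:
                grow(j + 1, used | {u, v}, size + 1)
    grow(0, set(), 0)
    return best

def mp(edges, n):
    """Smallest number of edges whose deletion leaves no matching
    of size n//2, i.e. no (almost-)perfect matching."""
    edges = list(edges)
    for k in range(len(edges) + 1):
        for deleted in combinations(range(len(edges)), k):
            rest = [e for i, e in enumerate(edges) if i not in set(deleted)]
            if max_matching(rest) < n // 2:
                return k

# vertices: 0 = the join vertex (B_2 = K_1), 1..4 = K_4, 5 = the K_1 of A_2
G2 = list(combinations(range(1, 5), 2))   # K_4 on {1,2,3,4}
G2 += [(0, i) for i in range(1, 6)]       # join vertex 0 to everything
print(len(G2), mp(G2, 6))  # 11 1 -- deleting the edge (0,5) isolates 5
```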
We remark that $f(n,k)$ is relatively large, as one can create a Tutte--Berge set, namely a vertex cut separating two complete graphs. This is not unusual, as the corresponding result for Hamiltonicity has similar characteristics.
\section{Nordhaus-Gaddum-type results}
In this section, we give Nordhaus-Gaddum-type results for matching preclusion number.
\begin{thm}\label{th5-1}
Let $G\in \mathcal {G}(n)$ be a graph. For $n\geq 3$, we have
$(1)$ $$0\leq \mbox{mp}(G)+\mbox{mp}(\overline{G})\leq \begin{cases}
n-1, &\mbox {\rm if $n$ is even};\\[0.2cm]
2n-3, &\mbox {\rm if $n$ is odd}.
\end{cases}$$
$(2)$ $$0\leq \mbox{mp}(G)\cdot \mbox{mp}(\overline{G})\leq \begin{cases}
\lceil\frac{n-1}{2}\rceil \lfloor\frac{n-1}{2}\rfloor, &\mbox {\rm if $n$ is even};\\[0.2cm]
(n-2)^2, &\mbox {\rm if $n$ is odd and $n\geq 5$}.
\end{cases}$$
\end{thm}
\begin{proof}
The lower bounds are clear. So we concentrate on the upper bounds.
For $(1)$, if $n$ is even, then we let $\mbox{mp}(G)=\ell$. Then $\Delta(G)\geq \delta(G)\geq \ell$, and hence $\mbox{mp}(\overline{G})\leq \delta(\overline{G})=n-1-\Delta(G)\leq n-1-\ell$. So
$\mbox{mp}(G)+\mbox{mp}(\overline{G})\leq n-1$. Clearly, $\mbox{mp}(G)+\mbox{mp}(\overline{G})\geq 0$. For $(1)$, if $n$ is odd, then we let $u,v$ be two vertices in $G$. Without loss of generality, let $uv\in E(G)$ and $uv\notin E(\overline{G})$. Let $X=E_G[\{u,v\},V(G)-\{u,v\}]\cup \{uv\}$ and $Y=E_{\overline{G}}[\{u,v\},V(G)-\{u,v\}]$. Clearly $|X|+|Y|=2n-3$. Since $u,v$ are two isolated vertices in $G-X$, it follows that $G-X$ contains no
almost perfect matching, and hence $\mbox{mp}(G)\leq |X|$. Similarly, since $\overline{G}-Y$ contains no almost perfect matching, it follows that $\mbox{mp}(\overline{G})\leq |Y|$. So $\mbox{mp}(G)+\mbox{mp}(\overline{G})\leq |X|+|Y|=2n-3$.
For $(2)$, the upper bound follows from the upper bound on $\mbox{mp}(G)+\mbox{mp}(\overline{G})$ from $(1)$ if $n$ is even. (Maximizing $ab$ subject to $a+b=2c$, where $a$ and $b$ are variables and $c$ is a constant, gives an optimal solution at
$a=b=c$.)
If $n$ is odd, the upper bound can be improved to the one given in the statement.
We consider two cases. We first suppose neither $G$ nor $\overline{G}$ has isolated vertices. Let $u$ be a vertex of $G$ such that $d_G(u)=\delta(G)$. Let $x$ be a vertex of $\overline{G}$ such that $d_{\overline{G}}(x)=\delta(\overline{G})$.
Let $v$ be a neighbor of $u$ and $y$ be a neighbor of $x$. (These vertices exist because $G$ and $\overline{G}$ have no isolated vertices.) Now
\[ \mbox{mp}(G)+\mbox{mp}(\overline{G})\leq (d_G(u)+d_G(v)-1)+(d_{\overline{G}}(x)+d_{\overline{G}}(y)-1)=\delta(G)+\delta(\overline{G})+d_G(v)+d_{\overline{G}}(y)-2. \]
But $\delta(G)+d_{\overline{G}}(y)\leq d_{G}(y)+d_{\overline{G}}(y)=n-1$ and
$\delta(\overline{G})+d_G(v)\leq d_{\overline{G}}(v)+d_G(v)=n-1$. Thus
$\mbox{mp}(G)+\mbox{mp}(\overline{G})\leq 2n-4=2(n-2)$. Hence $\mbox{mp}(G)\cdot\mbox{mp}(\overline{G})\leq (n-2)^2$. Now suppose $G$ has an isolated vertex $w$. Then $\overline{G}$ has no isolated vertices. If $G$ has no edges, then $\mbox{mp}(G)=0$ and the result is clear.
Thus we may assume that $G$ has edges and hence $\overline{G}$ is not complete.
If $G-w$ is complete, then $\overline{G}$ is $K_{1,n-1}$. Thus
$\mbox{mp}(\overline{G})=0$ since $n\geq 5$. (If $n=3$, then $\mbox{mp}(\overline{G})=2$ and $\mbox{mp}(G)=1$.) So we may assume that $G-w$ is not complete.
Consider $H=G-w$. Then $\delta(H)\leq n-3$.
$H$ has $n-1$ vertices and $n-1$ is even. So $\mbox{mp}(G)=\mbox{mp}(H)\leq \delta(H)$.
Since $\overline{G}$ has no isolated vertices, let $x$ be a vertex of $\overline{G}$ such that $d_{\overline{G}}(x)=\delta(\overline{G})$, and let $y$ be a neighbor of $x$. Clearly
$x\neq w$ as $\overline{G}$ is not complete. Now $\delta(H)\leq d_H(x)=d_G(x)\leq n-2$ as $G$ has an isolated vertex. Clearly $d_{\overline{G}}(x)=\delta(\overline{G})\leq n-2$ as $\overline{G}$ is not complete.
If $d_{\overline{G}}(x)\geq 2$, then we may choose $y\neq w$.
Now $\mbox{mp}(\overline{G})\leq d_{\overline{G}}(x)+d_{\overline{G}}(y)-1$. Thus
\[ \mbox{mp}(G)+\mbox{mp}(\overline{G})\leq \delta(H)+(d_{\overline{G}}(x)+d_{\overline{G}}(y)-1)\leq d_G(y)+d_{\overline{G}}(x)+d_{\overline{G}}(y)-1\leq d_{\overline{G}}(x)+n-2. \]
Since $d_{\overline{G}}(x)\leq n-2$, $\mbox{mp}(G)+\mbox{mp}(\overline{G})\leq 2n-4$, and we are done, as before.
Thus $d_{\overline{G}}(x)=1$ and $x$ is adjacent to $y=w$. Then
\[ \mbox{mp}(G)+\mbox{mp}(\overline{G})\leq \delta(H)+d_{\overline{G}}(x)+d_{\overline{G}}(y)-1\leq (n-3)+1+(n-1)-1=2n-4, \]
and we are done, as before.
\end{proof}
We now consider the sharpness of the bounds in Theorem~\ref{th5-1}. We first consider the lower bound. If $n$ is even, we can say more with the following
result.
\begin{pro}\label{pro5-1}
Let $T$ be a tree of order $n\geq 9$. Then
$\mbox{mp}(T)=\mbox{mp}(\overline{T})=0$ if and only if $n$ is even and $T=K_{1,n-1}$.
\end{pro}
\begin{proof}
If $n$ is even and $T=K_{1,n-1}$, then $\mbox{mp}(T)=\mbox{mp}(\overline{T})=0$. Conversely, we suppose
$\mbox{mp}(T)=\mbox{mp}(\overline{T})=0$. We claim that $n$ is even.
Assume, on the contrary, that $n$ is odd.
Since $T$ is a tree, it follows that $\overline{T}$ is a subgraph of $K_n$
by deleting $n-1$ edges. Since $\mbox{mp}(K_n)=2n-3>(n-1)$ (by Theorem~\ref{lem2-1}), $\mbox{mp}(\overline{T})\geq 1$, which is a contradiction.
We may now assume that $n$ is even. Since a path with an even number of vertices has a perfect matching, and hence a positive matching preclusion number, $T$ is not a path. We now complete the proof by showing that
$T=K_{1,n-1}$. Assume, on the contrary, that
$T\neq K_{1,n-1}$. Then there exists a vertex $u$ such that $3\leq
d_T(u)\leq n-2$, and hence there exists a vertex $v$ such that $uv
\in E(\overline{T})$. Since $T$ is a tree, it follows that
$\overline{T}-\{u,v\}$ is a spanning subgraph of $K_{n-2}$ obtained by deleting
at most $n-4$ edges, and hence $\overline{T}-\{u,v\}$ has a perfect
matching, say $M$. Clearly, $M\cup
\{uv\}$ is a perfect matching of $\overline{T}$, and
so $\mbox{mp}(\overline{T})\geq 1$, which is a contradiction.
\end{proof}
Proposition~\ref{pro5-1} provides an example to show that the lower bounds in Theorem~\ref{th5-1} are tight for $n$ being even.
If we require both $G$ and $\overline{G}$ to be connected, then we can
consider the following example for the lower bounds. This example also works if $n$ is odd.
\noindent {\bf Example 5.1.} For $n\geq 12$, let $G$ be the graph
obtained from $K_{n-5}$ by adding five new vertices
$v_1,v_2,v_3,v_4,v_5$ and then adding the edges in $\{vv_i\,|\,1\leq
i\leq 4\}\cup \{v_5v_1\}$, where $v$ is a vertex of $K_{n-5}$.
Clearly, $G$ and $\overline{G}$ are both connected. Now,
$\mbox{mp}(G)=0$ since deleting $\{v_1,v\}$ from $G$ leaves the four singletons $v_2,v_3,v_4,v_5$ in the resulting graph, and $\mbox{mp}(\overline{G})=0$ since deleting $\{v_1,v_2,v_3,v_4,v_5\}$ from $\overline{G}$ leaves $n-5\geq 7$ singletons.
(So $G$ and $\overline{G}$ have neither perfect matchings nor almost perfect matchings by Theorem~\ref{thC} and Theorem~\ref{thD}.)
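For a concrete instance (our own sketch, with an ad hoc helper name), take $n=12$ in Example 5.1: the brute-force check below confirms that neither $G$ nor $\overline{G}$ has a matching of size $n/2=6$, so $\mbox{mp}(G)=\mbox{mp}(\overline{G})=0$.

```python
from itertools import combinations

def max_matching(edges):
    """Brute-force maximum matching size of a tiny graph."""
    edges = list(edges)
    best = 0
    def grow(i, used, size):
        nonlocal best
        best = max(best, size)
        for j in range(i, len(edges)):
            u, v = edges[j]
            if u not in used and v not in used:
                grow(j + 1, used | {u, v}, size + 1)
    grow(0, set(), 0)
    return best

n = 12
# vertices 0..6 form K_7 with v = 0; v_1,...,v_5 are 7..11
G = set(combinations(range(7), 2))       # K_{n-5} = K_7
G |= {(0, i) for i in (7, 8, 9, 10)}     # edges v v_i for 1 <= i <= 4
G.add((7, 11))                           # edge v_5 v_1
Gc = {(u, v) for u, v in combinations(range(n), 2) if (u, v) not in G}
print(max_matching(G), max_matching(Gc))  # both below n/2 = 6
```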
One can easily classify graphs that meet the lower bound for the product.
\begin{obs}\label{obs5-1}
Let $G$ be a graph of order $n$. Then
$\mbox{mp}(G)\cdot \mbox{mp}(\overline{G})=0$ if and only if $G$ or $\overline{G}$ has no perfect matching (when $n$ is even) or no almost-perfect matching (when $n$ is odd).
\end{obs}
To show the sharpness of the upper bound for the sum, we consider the following
results.
\begin{thm}\label{th5-2}
If $G$ is an odd graph of order
$n\geq 9$, then $\mbox{mp}(G)+\mbox{mp}(\overline{G})=2n-3$ if and only if $G=K_n$ or
$\overline{G}=K_n$.
\end{thm}
\begin{proof}
If $G=K_n$ or $\overline{G}=K_n$, then it follows from Proposition \ref{pro3-5} that $\mbox{mp}(G)+\mbox{mp}(\overline{G})=2n-3$.
Conversely, we suppose $\mbox{mp}(G)+\mbox{mp}(\overline{G})=2n-3$, $G\neq K_n$ and $\overline{G}\neq K_n$. Then there is a vertex in $G$, say $u$, with
$1\leq d_G(u)\leq n-2$. So we have $u,w,v\in V(G)$ such that $uw\in E(G)$ and $uv\notin E(G)$. We may assume that $d_G(w)\leq d_G(v)$;
otherwise, interchange the roles of $G$ and $\overline{G}$. Now $\mbox{mp}(G)+\mbox{mp}(\overline{G})\leq
d_G(u)+d_G(w)-1+d_{\overline{G}}(u)+d_{\overline{G}}(v)-1\leq d_G(u)+d_G(v)-1+d_{\overline{G}}(u)+d_{\overline{G}}(v)-1=2n-4<2n-3$, which
is a contradiction.
\end{proof}
If $G$ is even, $K_n$ is still an example to show sharpness of the upper bound for the sum. However, there are other examples such as a cycle of length $4$. Nevertheless, there are some restrictions as the next result shows.
\begin{thm}\label{th5-3}
Let $G$ be an even graph of
order $n\geq 2$. If $\mbox{mp}(G)+\mbox{mp}(\overline{G})=n-1$, then $G$ is regular.
\end{thm}
\begin{proof}
Suppose $\mbox{mp}(G)+\mbox{mp}(\overline{G})=n-1$. Assume, on the contrary, that
$G$ is not regular. Then $\Delta(G)>\delta(G)$, and hence there
exist two vertices $u,v$ in $G$ such that $d_G(u)=\Delta(G)$ and
$d_G(v)=\delta(G)$. Since $\mbox{mp}(G)\leq d_G(v)$ and
$\mbox{mp}(\overline{G})\leq d_{\overline{G}}(u)$, it follows that
$\mbox{mp}(G)+\mbox{mp}(\overline{G})\leq
d_G(v)+d_{\overline{G}}(u)<d_G(u)+d_{\overline{G}}(u)=n-1$, which is a
contradiction.
\end{proof}
One may wonder whether every value between the lower bound and the upper bound of $\mbox{mp}(G)+\mbox{mp}(\overline{G})$ is achievable, that is, whether an intermediate value theorem exists. We now show that it is true if
$n$ is even. We start with the following observation.
\begin{obs}\label{obs5-2}
If $m$ is odd, then edges of $K_m$ can be decomposed into $m$ edge-disjoint almost perfect matchings where each vertex is missed in exactly one of them.
\end{obs}
\begin{thm}\label{th5-6}
Let $n\geq 4$ be an even number. Then there exists $G$ such that $\mbox{mp}(G)+\mbox{mp}(\overline{G})=r$ for every $r$ in $[0,n-1]$. Moreover, it is realizable with $\mbox{mp}(G)=r$ and $\mbox{mp}(\overline{G})=0$ for every $r$ in $[0,n-1]$.
\end{thm}
\begin{proof}
Let $n$ be even. Let $r$ be an integer in $[1,n-1]$. Take $r$ edge-disjoint almost perfect matchings from $K_{n-1}$ and consider the graph $H$ induced by them. Add a vertex $v$ that is adjacent to all $n-1$ vertices of $H$. Call the resulting graph $G$. Then $v$ has degree $n-1$ in $G$. The other vertices have degree $r+1$ or $r$ in $G$ (they have degree $r$ or $r-1$ in $H$). By construction, $G$ contains $r$ edge-disjoint perfect matchings. So $\mbox{mp}(G)=r$. Note that $\overline{G}$ has an isolated vertex, so $\mbox{mp}(\overline{G})=0$. This together with the sharpness of the bounds gives the result.
\end{proof}
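The construction in the proof can be checked for a small case (our own sketch, with ad hoc helper names): for $n=6$ and $r=3$, take the three almost perfect matchings $M_0=\{14,23\}$, $M_1=\{02,34\}$, $M_2=\{13,04\}$ of $K_5$ (where $M_i$ misses vertex $i$), add a universal vertex $5$, and verify that $\mbox{mp}(G)=3$ while vertex $5$ is isolated in $\overline{G}$.

```python
from itertools import combinations

def max_matching(edges):
    """Brute-force maximum matching size of a tiny graph."""
    edges = list(edges)
    best = 0
    def grow(i, used, size):
        nonlocal best
        best = max(best, size)
        for j in range(i, len(edges)):
            u, v = edges[j]
            if u not in used and v not in used:
                grow(j + 1, used | {u, v}, size + 1)
    grow(0, set(), 0)
    return best

def mp(edges, n):
    """Smallest number of edges whose deletion leaves no matching
    of size n//2, i.e. no (almost-)perfect matching."""
    edges = list(edges)
    for k in range(len(edges) + 1):
        for deleted in combinations(range(len(edges)), k):
            rest = [e for i, e in enumerate(edges) if i not in set(deleted)]
            if max_matching(rest) < n // 2:
                return k

H = [(1, 4), (2, 3), (0, 2), (3, 4), (1, 3), (0, 4)]  # M_0, M_1, M_2 of K_5
G = H + [(i, 5) for i in range(5)]                    # universal vertex 5
comp = {frozenset(e) for e in combinations(range(6), 2)} - {frozenset(e) for e in G}
print(mp(G, 6))                      # 3
print(any(5 in e for e in comp))     # False: 5 is isolated in the complement
```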
For odd $n$, we were not able to obtain an intermediate value theorem, although we conjecture it to be true.
We now consider the sharpness of the upper bound for the product. Suppose $n$ is even. Now $K_n$ can be partitioned into $n-1$ edge-disjoint perfect matchings $M_1,M_2,\ldots,M_{n-1}$. Let $G_1$ be
the graph induced by the first $\lceil\frac{n-1}{2}\rceil$ of the $M_i$'s and $G_2$ be the graph induced by the other $\lfloor\frac{n-1}{2}\rfloor$ $M_i$'s. Then $G_2=\overline{G_1}$. Clearly
$\mbox{mp}(G_1)=\lceil\frac{n-1}{2}\rceil$ and $\mbox{mp}(G_2)=\lfloor\frac{n-1}{2}\rfloor$. The next result shows that such examples must be regular graphs.
\begin{thm}\label{th5-5}
Let $G$ be an even graph of
order $n\geq 2$. If $\mbox{mp}(G)\cdot \mbox{mp}(\overline{G})=\lceil\frac{n-1}{2}\rceil \lfloor\frac{n-1}{2}\rfloor$, then $G$ is $\lceil\frac{n-1}{2}\rceil$-regular or $\lfloor\frac{n-1}{2}\rfloor$-regular.
\end{thm}
\begin{proof}
Since $\mbox{mp}(G)+\mbox{mp}(\overline{G})\leq n-1$, if $\mbox{mp}(G)\cdot \mbox{mp}(\overline{G})=\lceil\frac{n-1}{2}\rceil \lfloor\frac{n-1}{2}\rfloor$, then $\mbox{mp}(G)=\lceil\frac{n-1}{2}\rceil$ and $\mbox{mp}(\overline{G})=\lfloor\frac{n-1}{2}\rfloor$, or $\mbox{mp}(\overline{G})=\lceil\frac{n-1}{2}\rceil$ and $\mbox{mp}(G)=\lfloor\frac{n-1}{2}\rfloor$. Without loss of generality, we consider only $\mbox{mp}(G)=\lceil\frac{n-1}{2}\rceil$ and $\mbox{mp}(\overline{G})=\lfloor\frac{n-1}{2}\rfloor$. As $\mbox{mp}(G)\leq \delta (G)$, we have $\delta (G) \geq \lceil\frac{n-1}{2}\rceil$ and $\delta (\overline{G}) \geq \lfloor\frac{n-1}{2}\rfloor$. Hence $\Delta(G)=n-1-\delta(\overline{G})\leq \lceil\frac{n-1}{2}\rceil\leq \delta(G)$, so $G$ is $\lceil\frac{n-1}{2}\rceil$-regular. The other case is symmetric.
\end{proof}
If $n$ is odd and we let $G=C_5$, the $5$-cycle, then $\overline{G}$ is also $C_5$. Thus $\mbox{mp}(G)\cdot \mbox{mp}(\overline{G})=3^2=(5-2)^2$. However, we do not know of examples for infinitely many $n$ showing that $(n-2)^2$
is tight.
\section{Conclusion}
In this paper, we studied some extremal-type problems for the matching preclusion number. Understanding these types of problems will help researchers in designing reliable and resilient interconnection networks.
\section{Introduction} \label{sec_in}
In order to derive duality assertions in set-valued optimization one has the possibilities to use an approach via conjugates, via Lagrangian technique or an axiomatic approach. Conjugate duality statements, based on different types of perturbation of the original problem, have been derived by Tanino and Sawaragi \cite{TanSaw80}, Sawaragi, Nakayama and Tanino \cite{SawNakTan85}, L\"ohne \cite{Loe05,Loe2012-book}, Bo\c{t}, Grad and Wanka \cite{BotGraWan09}, Hamel \cite{Hamel09} and others. Lagrange duality for set-valued problems has been studied, for instance, by Luc \cite{Luc88}, Ha \cite{Troung05}, Hern{\'a}ndez and Rodr{\'{\i}}guez-Mar{\'{\i}}n \cite{HerRod07,HerRod07-1}, Li, Chen and Wu \cite{LiCheWu09}, L\"ohne \cite{Loe2012-book} and Hamel and L\"ohne \cite{HamLoe12}. An axiomatic approach was given by Luc \cite{Luc88}. Furthermore, duality assertions can be developed for different solution concepts, this means for the vector approach (cf. Luc \cite{Luc88}, Bo\c{t}, Grad and Wanka \cite{BotGraWan09}, Li, Chen and Wu \cite{LiCheWu09}), for the set-approach (cf. Kuroiwa \cite{Kur99}, Hern{\'a}ndez and Rodr{\'{\i}}guez-Mar{\'{\i}}n \cite{HerRod07}) and by using supremal and infimal sets (see Nieuwenhuis \cite{Nieuwenhuis80}, Tanino \cite{Tanino88,Tanino92}) and/or infimum and supremum in a complete lattice for the lattice approach (cf. Tanino \cite{Tanino92}, Song \cite{Song97}, \cite{Song98}, L\"ohne \cite{Loe05,Loe2012-book}, Lalitha and Arora \cite{LalAro07}, Hamel \cite{Hamel09}).
Another important difference in constructing dual problems concerns the type of dual variables, which can be vectors like in the scalar case (cf. L\"ohne \cite{Loe05,Loe2012-book}, Bo\c{t}, Grad and Wanka \cite{BotGraWan09}, Section 7.1.3), extended vectors (cf. Hamel \cite{Hamel09}, Hamel and L\"ohne \cite{HamLoe12}) or operators (cf. Corley \cite{Corley87}, Luc \cite{Luc88}, Tanino \cite{Tanino92}, Bo\c{t}, Grad and Wanka \cite{BotGraWan09}, Section 7.1.2, Hern{\'a}ndez and Rodr{\'{\i}}guez-Mar{\'{\i}}n \cite{HerRod07}, Lalitha and Arora \cite{LalAro07} , Li, Chen and Wu \cite{LiCheWu09}).
In the mentioned papers one can observe that it is very easy to derive weak duality statements without additional assumptions. In order to get strong duality one needs additionally convexity and certain regularity assumptions. An important approach for the formulation of regularity assumptions is the stability of the primal set-valued problem. In the literature, stability of the primal set-valued problem is formulated using the subdifferential of the minimal value map (cf. Tanino and Sawaragi \cite{TanSaw80}, Bo\c{t}, Grad and Wanka \cite{BotGraWan09}). In this paper we will introduce a subdifferential notion based on infimal sets where subgradients are vectors. A corresponding stability notion is used to prove strong duality statements. Furthermore, we will discuss the space of dual variables and explain the relations between some other approaches from the literature. Subdifferential notions for vector and set-valued problems have been investigated by many authors, see e.g., Tanino \cite{Tanino92}, Jahn \cite{Jahn04}, Bo\c{t}, Grad and Wanka \cite{BotGraWan09}, Schrage \cite{Schrage09diss}, Hamel and Schrage \cite{HamSch12}.
Lagrange duality theory is an important tool in optimization and
there are many approaches to a corresponding theory for vector
optimization problems, see e.g. Corley \cite{Corley81,Corley87}, \cite{RubGas04}, Bo\c{t} and Wanka \cite{BotWan04}, Li and Chen \cite{LiChe97}, Jahn \cite{Jahn04}, G\"{o}pfert, Tammer, Riahi and Z\u{a}linescu \cite{GoeTamRiaZal03}, Bo\c{t}, Grad and Wanka \cite{BotGraWan09} and the references therein.
In the scalar case, the basic idea is to assign to a given constrained optimization problem (called the primal problem)
$$\mbox{(p)} \hspace{6cm} p:=\inf_{x \in S} f(x), \hspace{12cm} $$
where $f\colon X \to \overline{\mathbb{R}}$, a Lagrange function $L:X \times
\Lambda \to \overline{\mathbb{R}}$, where $\Lambda$ is the set of dual variables, via
$$\displaystyle{\sup_{ u^*}}\, L(x, u^*) = f(x)\quad \mbox{ for }\quad x \in S
\subseteq X.$$
One considers the closely related pair of mutually
dual unconstrained problems
$$ \inf_{x}\sup_{ u^*} L(x, u^*) \qquad \text{ and } \qquad \sup_{ u^*} \inf_{x} L(x, u^*).$$
The dual problem is usually written as
$$\mbox{(d)} \hspace{6cm} d:=\sup_{ u^*} \phi( u^*) \hspace{12cm} $$
where $ u^*$ is called the dual variable and $\phi:\Lambda \to
\overline{\mathbb{R}},\;\;\phi( u^*) := \displaystyle{\inf_{x} }\,L(x, u^*)$ is called
the dual objective function. If $p=d$ we say that (d) is an exact dual problem of (p). One goal in duality theory is to find a subset of $\Lambda$ as small as possible so that (d) is still an exact
dual problem of (p). The concepts of infimum and supremum play an important role in Lagrange duality. In all the vectorial approaches the problem was to find an appropriate replacement for these concepts because of the non-completeness of preference orders. One possibility to regain the lattice structure in vector optimization is to scalarize the Lagrange function and to use the usual infimum/supremum in the complete lattice $\overline{\mathbb{R}}:= \mathbb{R} \cup \cb{+\infty} \cup \cb{-\infty}$, see, for instance, \cite{Jahn86, Jahn04}. Another idea was to replace the infimum/supremum by minimality/maximality notions, see e.g. Bo\c{t}, Grad and Wanka \cite{BotGraWan09}. In \cite{LoeTam07}, a vector optimization problem has been extended to a set-valued problem in order to obtain a complete lattice. Conjugate duality statements have been proven in order to demonstrate that the ideas from scalar optimization can be formulated analogously in the vectorial framework. In \cite{Loe2012-book} corresponding Lagrange duality results have been established.
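Before turning to the set-valued setting, we illustrate these notions with an elementary scalar example (added here for illustration only). Let $X=\mathbb{R}$, $f(x)=x^2$ and $S=\cb{x\in\mathbb{R} \st 1-x\leq 0}$, and take $\Lambda=[0,\infty)$ with $L(x,u^*)=x^2+u^*(1-x)$; indeed, $\sup_{u^*\geq 0} L(x,u^*)=f(x)$ for $x\in S$. Then
$$ \phi(u^*)=\inf_{x\in\mathbb{R}}\left(x^2+u^*(1-x)\right)=u^*-\frac{(u^*)^2}{4}, $$
and hence
$$ d=\sup_{u^*\geq 0}\left(u^*-\frac{(u^*)^2}{4}\right)=1=\inf_{x\geq 1}x^2=p, $$
so (d) is an exact dual problem of (p). Moreover, the supremum is attained at $u^*=2$, so $\Lambda$ may even be shrunk to the single point $\cb{2}$.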
In this paper, we provide an alternative proof of the duality results given in \cite{Loe2012-book}. Our approach is mainly motivated by the paper \cite{Tanino92} by Tanino. Even though Tanino \cite{Tanino92} and related papers \cite{Song97,LiCheWu09,CheLi09} did not mention the complete lattice structure, these results are closely related to the lattice approach. In this paper we use a classical notation, similar to the one in \cite{Tanino92}, in order to emphasize these relationships. The results of this paper can easily be restated using the infimum and supremum in a complete lattice, see e.g. \cite{Loe2012-book} for more details. Another purpose of this paper is to simplify the set of dual variables. We use vectors rather than operators and point out that this leads to stronger duality statements. An operator variant of our duality result is obtained as a corollary.
This paper is organized as follows. In Section 2 we recall the concepts and results that we use later, in particular, Lagrange duality. Section 3 is devoted to a new proof of the strong duality theorem, which is based on new stability and subgradient notions that we introduce before. In Section 4 we formulate duality assertions with operators as dual variables and show that they easily follow from the results with vectors as dual variables.
\section{Preliminaries}
\subsection{Infimal sets and the complete lattice of self-infimal sets}
We recall in this section the concept of an {\em infimal set} (resp. {\em supremal set}), which is due to Nieuwenhuis \cite{Nieuwenhuis80}, was extended by Tanino \cite{Tanino88}, and was slightly modified with respect to the elements $\pm\infty$ in \cite{LoeTam07}. We briefly discuss the role of the space of self-infimal sets, which was shown in \cite{LoeTam07} to be a complete lattice.
Let $(Y,\leq)$ be a partially ordered linear topological space, where the order is induced by a convex cone $C$ satisfying
$\emptyset \neq \Int C \neq Y$. We write $y\leq y'$ if $y'-y\in C$ and $y<y'$ if $y'-y\in \Int C$.
We denote by $\overline{Y}:= Y \cup \cb{-\infty}\cup\cb{+\infty}$ the extended space, where the ordering is extended by the convention
\[ \forall y \in Y:\; -\infty \leq y \leq +\infty.\]
The {\em upper closure} (with respect to $C$) of $A \subseteq \overline{Y}$ is defined \cite{LoeTam07,Loe2012-book} to be the set
$$ \Cl_+ A := \left\{ \begin{array}{lll}
Y & \mbox{ if } & -\infty \in A \\
\emptyset & \mbox{ if } & A=\cb{+\infty} \\
\cb{y \in Y\st \cb{y} + \Int C \subseteq A\setminus\cb{+\infty} + \Int C}& \mbox{ otherwise. } &
\end{array} \right.
$$
We have \cite[Proposition 1.40]{Loe2012-book}
\begin{equation}\label{eq_clplus}
\Cl_+ A = \left\{ \begin{array}{lll}
Y & \mbox{ if } & -\infty \in A \\
\emptyset & \mbox{ if } & A=\cb{+\infty} \\
\cl(A\setminus\cb{+\infty}+C)& \mbox{ otherwise. } &
\end{array} \right.
\end{equation}
The set of {\em weakly
minimal elements} of a subset $A \subseteq Y$ (with respect to
$C$) is defined by
$$ \wMin A := \cb{y \in A \st (\cb{y} -\Int C) \cap A = \emptyset}.$$
We have \cite[Corollary 1.44]{Loe2012-book}
$$ \wMin \Cl_+ A \neq \emptyset \iff \emptyset \neq \Cl_+A \neq Y.$$
The {\em infimal set} of $A \subseteq \overline{Y}$ (with respect to C) is defined by
$$ \Inf A := \left\{ \begin{array}{lll}
\wMin \Cl_+ A & \mbox{ if } & \emptyset \neq \Cl_+ A \neq Y \\
\cb{-\infty} & \mbox{ if } & \Cl_+ A = Y \\
\cb{+\infty} & \mbox{ if } & \Cl_+ A = \emptyset.
\end{array} \right.
$$
We see that the infimal set of $A$ with respect to $C$ coincides essentially with the set of weakly minimal elements of the set $\cl(A+C)$. Note that if $A\subseteq Y$ then $\wMin A=A\cap \Inf A$.
By our conventions, $\Inf A$ is always a nonempty set. If $-\infty$ belongs to $A$, we have $\Inf A = \cb{-\infty}$, in particular, $\Inf \cb{-\infty} =\cb{-\infty}$. Furthermore, one has
$\Inf \emptyset = \Inf \cb{+\infty} = \cb{+\infty}$ and $\Cl_+ A = \Cl_+(A \cup \cb{+\infty})$. Hence $\Inf A = \Inf (A \cup \cb{+\infty})$ for all $A \subseteq \overline{Y}$.
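For orientation, we add a simple concrete example (not taken from the cited sources). Let $Y=\mathbb{R}^2$, $C=\mathbb{R}^2_+$ and $A=\cb{(1,0)}$. Then $\Cl_+ A = (1,0)+\mathbb{R}^2_+$ and
$$ \Inf A = \wMin \Cl_+ A = \left(\cb{1}\times [0,\infty)\right) \cup \left([1,\infty)\times\cb{0}\right), $$
i.e., the infimal set consists of the whole weakly minimal boundary of the upper set. In particular, $\Inf A \neq A$: singletons are in general not self-infimal, which indicates that sets with the same upper closure, represented by their infimal sets, are the appropriate objects to work with.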
The following properties of infimal sets are essentially due to Nieuwenhuis \cite{Nieuwenhuis80}. Extended variants (slightly different from the following ones) have been established in \cite{Tanino88}.
\begin{proposition}[\cite{Loe2012-book}, Corollary 1.48] \label{pr}
Let $A,B \subseteq Y$ with $\emptyset \neq \Cl_+A \neq Y$ and $\emptyset \neq \Cl_+B \neq Y$, then
\begin{enumerate}[(i)]
\item \label{pr_0_1} $ \Inf A =
\cb{y \in Y \st y \not\in A + \Int C,\;
\cb{y} + \Int C \subseteq A + \Int C}$,
\item \label{pr_0_2} $ A + \Int C = B + \Int C \iff \Inf A = \Inf B$,
\item \label{pr_0_3} $A + \Int C = \Inf A + \Int C$,
\item \label{pr_0_4} $\Cl_+ A = \Inf A \cup (\Inf A + \Int C)$,
\item \label{pr_0_5} $\Inf A$, $(\Inf A- \Int C)$ and $(\Inf A + \Int C)$ are disjoint,
\item \label{pr_0_6} $\Inf A \cup (\Inf A- \Int C) \cup (\Inf A + \Int C) = Y$,
\item \label{pr_0_7} $\Inf(\Inf A + \Inf B) = \Inf(A+B)$,
\item \label{pr_0_8} $\alpha \Inf A = \Inf (\alpha A)$ for $\alpha >0$.
\end{enumerate}
For $A \subseteq \overline{Y}$ one has
\begin{enumerate}[(i)]
\item[(ix)] $\Inf \Inf A = \Inf A$, $\Cl_+ \Cl_+ A = \Cl_+ A$, $\Inf \Cl_+ A = \Inf A$, $\Cl_+ \Inf A = \Cl_+ A$.
\end{enumerate}
\end{proposition}
\begin{proposition}\label{pr_3}
Let $A_i \subset \overline{Y}$ for $i \in I$, where $I$ is an arbitrary index set. Then
\begin{enumerate}[(i)]
\item $\displaystyle \Cl_+ \bigcup_{i\in I} A_i = \Cl_+ \bigcup_{i\in I} \Cl_+ A_i$,
\item $\displaystyle \Inf \bigcup_{i\in I} A_i = \Inf \bigcup_{i\in I} \Inf A_i$.
\end{enumerate}
\end{proposition}
Using the set $\wMax A:= -\wMin (-A)$ of weakly maximal elements of $A\subseteq Y$, one can define likewise
the lower closure $\Cl_-A$ and the set $\Sup A$ of supremal elements of $A \subseteq \overline{Y}$ and there are analogous statements.
We introduce the following ordering relation for sets $A,B \subseteq \overline{Y}$:
$$ A \preccurlyeq B \;:\iff\; \Cl_+ A \supseteq \Cl_+ B. $$
Given a partially ordered set $(Z,\le)$, we say that $\bar z \in Z$ is a {\em lower bound} of $A \subseteq Z$ if $\bar z \le a$ for all $a \in A$. The element $\bar z \in Z$ is called the infimum of $A\subseteq Z$ (written $\bar z = \inf A$) if $\bar z$ is a lower bound of $A$ and if $\hat z \le \bar z$ holds for every other lower bound $\hat z$ of $A$. As the ordering $\le$ is antisymmetric, the infimum, if it exists, is uniquely defined. The partially ordered set $(Z,\le)$ is called a {\em complete lattice} if every subset of $Z$ has an infimum and a supremum, where the supremum is defined analogously and is denoted by $\sup A$.
The following result has been established in \cite{LoeTam07}, a proof can be also found in \cite[Theorem 1.54]{Loe2012-book}. We denote by $\mathcal{I}$ the space of all self-infimal subsets of $\overline{Y}$, where $A\subseteq \overline{Y}$ is called {\em self-infimal} if $A=\Inf A$.
\begin{theorem} [\cite{LoeTam07}]\label{th_1}
The partially ordered set $(\mathcal{I}, \preccurlyeq)$ provides a complete lattice. For nonempty sets $\mathcal{A} \subseteq \mathcal{I}$ it holds
$$ \inf \mathcal{A} = \Inf \bigcup_{A \in \mathcal{A}} A , \qquad \sup \mathcal{A} = \Sup \bigcup_{A \in \mathcal{A}} A.$$
\end{theorem}
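To illustrate these formulas, consider again (our example) $Y=\mathbb{R}^2$, $C=\mathbb{R}^2_+$ and the two elements $A=\Inf\cb{(1,0)}$ and $B=\Inf\cb{(0,1)}$ of $\mathcal{I}$. By Proposition \ref{pr_3} (ii),
$$ \inf\cb{A,B} = \Inf (A\cup B) = \Inf\cb{(1,0),(0,1)} = \left(\cb{0}\times[1,\infty)\right) \cup \left([0,1]\times\cb{1}\right) \cup \left(\cb{1}\times[0,1]\right) \cup \left([1,\infty)\times\cb{0}\right), $$
a ``staircase'' strictly below both $A$ and $B$ with respect to $\preccurlyeq$. In contrast to $(Y,\leq)$ itself, the ordered set $(\mathcal{I},\preccurlyeq)$ always provides such an infimum.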
This theorem allows us to formulate duality statements analogous to their scalar counterparts by using the infimum and supremum in a complete lattice. An easy consequence, for instance, is that for any set-valued map $L: X \times U \ensuremath{\rightrightarrows} \overline{Y}$ one has
$$ \Sup\bigcup_{u \in U} \Inf\bigcup_{x\in X} L(x,u) \preccurlyeq \Inf\bigcup_{x\in X} \Sup\bigcup_{u \in U} L(x,u).$$
Another consequence is that, for an arbitrary set-valued map $f:X\ensuremath{\rightrightarrows} \overline{Y}$ and $A \subseteq B \subseteq X$, one has
$$ \Inf\bigcup_{x\in A} f(x) \succcurlyeq \Inf\bigcup_{x\in B} f(x)
\quad\text{ and }\quad
\Sup\bigcup_{x\in A} f(x) \preccurlyeq \Sup\bigcup_{x\in B} f(x).$$
A further useful consequence is the following result, which is a reformulation of \cite[Proposition 1.56]{Loe2012-book}.
\begin{proposition}\label{pr_2}
For nonempty families $\mathcal{A},\mathcal{B}$ of subsets of $\overline{Y}$ we have
\begin{enumerate}[(i)]
\item \label{pr_2_1}$\displaystyle \Inf\bigcup_{ A \in \mathcal{A},\; B \in \mathcal{B}} (A + B) = \Inf\of{\Inf \bigcup_{A\in \mathcal{A}} A + \Inf\bigcup_{B\in\mathcal{B}} B }$,
\item \label{pr_2_2}$\displaystyle \Sup\bigcup_{ A \in \mathcal{A},\; B \in \mathcal{B}} (A + B) \preccurlyeq \Sup \bigcup_{A\in \mathcal{A}} A + \Sup\bigcup_{B\in\mathcal{B}} B $.
\end{enumerate}
\end{proposition}
\subsection{Lagrange duality} \label{sec_lagr}
In this section we recall Lagrange duality results as given in \cite[Section 3.3.2]{Loe2012-book} for optimization problems with set-valued objective function and set-valued constraints using a notation adapted to \cite{Tanino92}.
Let $X$ be a linear space and let $U$ be a separated locally convex space. Moreover, let $\scp{U,U^*}$ be a dual pair.
Let $F : X \ensuremath{\rightrightarrows} \overline{Y}$ and $G : X \rightrightarrows U$ be set-valued maps. We set ${\rm dom\,} F:=\cb{x \in X \st F(x)\neq \cb{+\infty}}$ and ${\rm dom\,} G:=\cb{x \in X \st G(x)\neq \emptyset}$. Let $D \subseteq U$ be a proper closed convex cone with nonempty interior. We denote by $D^\circ$ the negative polar cone of $D$.
We consider the following primal problem \eqref{p}:
\begin{equation}\label{p}
\tag{P} \hspace{3cm} \bar p:=\Inf \bigcup_{x \in S} F(x),\qquad S:=\cb{x \in X \st G(x) \cap -D \neq \emptyset}.
\end{equation}
Constraints of this type have been investigated by many authors, such as Borwein \cite{Borwein77};
Corley \cite{Corley87}; Jahn \cite{Jahn83}; Luc \cite{Luc88}; G\"otz and Jahn \cite{GoeJah99}; Crespi, Ginchev and Rocca \cite{CreGinRoc06}; Bo\c{t} and Wanka \cite{BotWan04}.
We assume throughout that a fixed vector
\begin{equation}\label{eq_c}
c\in \Int C
\end{equation}
is given. Several concepts, for instance the Lagrangian, the dual objective function and subgradients, will depend on the choice of this vector $c$. Note that we do not mention this dependence explicitly.
The Lagrangian map of problem \eqref{p} is
defined by
\begin{equation}\label{eq_lagr}
L: X \times U^* \ensuremath{\rightrightarrows} \overline{Y},\qquad
L(x, u^*) = F(x) + \Inf \bigcup_{u \in G(x)+D}\scp{ u^*,u}\cb{c}.
\end{equation}
In the special case $Y = \mathbb{R}$, $C = \mathbb{R}_+$, $c = 1$, this Lagrangian
coincides with the well-known Lagrangian of the scalar problem. For every choice of $c \in \Int C$ we have a different Lagrangian map and a different corresponding dual problem, but we show that weak duality and strong duality hold for all these problems.
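As an elementary check (added for illustration), let $Y=\mathbb{R}$, $C=\mathbb{R}_+$, $c=1$, $U=\mathbb{R}$, $D=\mathbb{R}_+$, and let $F(x)=\cb{f(x)}$ and $G(x)=\cb{g(x)}$ be single-valued with $f(x)\in\mathbb{R}$. Then $S=\cb{x\in X \st g(x)\leq 0}$ and
$$ \Inf\bigcup_{u\in g(x)+\mathbb{R}_+}\scp{u^*,u}\cb{1} = \left\{\begin{array}{cl} \cb{u^* g(x)} & \text{ if } u^*\geq 0, \\ \cb{-\infty} & \text{ if } u^*<0, \end{array}\right. $$
so that $L(x,u^*)=\cb{f(x)+u^* g(x)}$ for $u^*\geq 0$ and $L(x,u^*)=\cb{-\infty}$ for $u^*<0$, in accordance with the classical scalar Lagrangian.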
The scalar counterpart of the following result is well known.
\begin{proposition}[\cite{Loe2012-book}, Proposition 3.23] \label{pr_r45}
Let $x \in S$, then
$$ \Sup \bigcup_{ u^* \in U^*} L(x, u^*) = F(x).$$
\end{proposition}
We next define the dual problem. The dual objective function is defined by
$$ \phi: U^* \ensuremath{\rightrightarrows} \overline{Y}, \qquad \phi( u^*):= \Inf \bigcup_{x \in X} L(x, u^*)$$
and the dual problem (with respect to $c \in \Int C$) associated to \eqref{p} is defined by
\begin{equation}\label{d}
\tag{D} \bar d:=\Sup \bigcup_{ u^* \in U^*} \phi(u^*).
\end{equation}
As shown in \cite[Theorem 3.25]{Loe2012-book}, we have weak duality. Taking into account Theorem \ref{th_1}, we get the following formulation.
\begin{theorem}[weak duality] \label{th_1w} The problems \eqref{p} and \eqref{d} satisfy the weak duality inequality
$$\Sup \bigcup_{ u^* \in U^*} \phi(u^*) \preccurlyeq \Inf \bigcup_{x \in S} F(x).$$
\end{theorem}
The following strong duality theorem has been proven in \cite[Theorem 3.26]{Loe2012-book} by using scalarization and a scalar Lagrange duality result. We present a reformulation based on Theorem \ref{th_1}.
\begin{theorem}[strong duality]\label{th_ld}
Let $F$ be $C$-convex, let $G$ be $D$-convex, and let
\begin{equation}\label{eq_sl5}
G({\rm dom\,} F) \cap - \Int D \neq \emptyset.
\end{equation}
Then, strong duality holds, that is,
$$\Sup \bigcup_{ u^* \in U^*} \phi(u^*) = \Inf \bigcup_{x \in S} F(x).$$
\end{theorem}
By strong duality and Theorem \ref{th_1} we get
\[ \Sup\bigcup_{u^*\in U^*} \Inf\bigcup_{x\in X} L(x,u^*) =\Inf\bigcup_{x\in X} \Sup\bigcup_{u^*\in U^*} L(x,u^*).\]
The next statement extends Proposition \ref{pr_r45}. Note that the additional assumption that $G(x)+D$ is a closed convex set originates from the set-valued constraints. It cannot be omitted even if the objective function were scalar-valued, see \cite[Example 3.21]{Loe2012-book}.
\begin{proposition}[\cite{Loe2012-book}, Proposition 3.24] \label{pr_u20}
Let $F:X\ensuremath{\rightrightarrows}\overline{Y}$ be a set-valued map with $F(x)=\Inf F(x)\neq \cb{-\infty}$ for all $x \in X$, ${\rm dom\,} F \neq \emptyset$, and let the set $G(x)+D$ be closed and convex for every $x \in X$, then
$$ \Sup\bigcup_{u^* \in U^*} L(x,u^*) = \left\{ \begin{array}{clc}
F(x) & \mbox{ if }& x \in S \\
\cb{+\infty} & \mbox{ else. } &
\end{array} \right. $$
\end{proposition}
\section{Stability, subgradients and another proof of duality}
\label{sec_ap}
We start this section by introducing the notion of subgradient and stability for our framework. These concepts are mainly motivated by Tanino \cite{Tanino92} (see also Bo\c{t}, Grad and Wanka \cite{BotGraWan09} for numerous related results). We consider problem \eqref{p} and the related notions as introduced in Section \ref{sec_lagr}.
Let $\varphi$ be the set-valued map from $X \times U$ to $\overline{Y}$ defined by
$$\varphi(x,u)=\left \{\begin{array}{cc}
F(x) & \mbox{ if }G(x)\cap (-D-u)\neq \emptyset \\
\emptyset & \mbox{ else. }
\end{array}\right.
$$
Denote by $W \colon U \ensuremath{\rightrightarrows} \overline{Y}$ the perturbation map defined by
$$W(u)=\Inf\bigcup_{x\in X}\varphi(x,u).$$
Clearly, we have
$$W(0)=\Inf\bigcup_{x \in S} F(x) = \bar p.$$
This leads to the following definition of a subgradient.
\begin{definition}\label{def_subg}
A point $u^* \in -D^\circ$ is called a {\em positive subgradient} of $W$ at
$(\bar{u},\bar{y})\in U\times Y$ with $\bar{y}\in W(\bar{u})$, written $u^* \in
\partial^+ W(\bar{u},\bar{y})$ for short, if
$$\bar y - \scp{u^*,\bar{u}} c \in \Inf\bigcup_{u \in U}(W(u) - \scp{u^*,u} c).$$
\end{definition}
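In the scalar case, this notion reduces to the classical one (a routine verification which we add for clarity). Let $Y=\mathbb{R}$, $C=\mathbb{R}_+$, $c=1$ and let $W\colon U\to\mathbb{R}$ be real-valued. Then the condition of Definition \ref{def_subg} at $(\bar u,\bar y)$ with $\bar y=W(\bar u)$ reads
$$ W(\bar u)-\scp{u^*,\bar u} = \inf_{u\in U}\left(W(u)-\scp{u^*,u}\right), $$
that is,
$$ \forall u\in U:\quad W(u) \geq W(\bar u)+\scp{u^*,u-\bar u}, $$
which is the usual subgradient inequality for $W$ at $\bar u$.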
\begin{remark} Note that the subgradient in \cite[Definition 2.5]{LiCheWu09} is a stronger notion than the one in Definition \ref{def_subg}. In particular, in \cite[Definition 2.5]{LiCheWu09}, subgradients are operators, whereas in Definition \ref{def_subg} subgradients are vectors. Furthermore, the definition of subgradients in Definition \ref{def_subg} is closely related to that given by Bo\c{t}, Grad and Wanka \cite[Definition 7.1.9 (c)]{BotGraWan09}, where the subgradients are defined using the Pareto maximum instead of the infimal set as in Definition \ref{def_subg}. However, in \cite{BotGraWan09} the so-called $k$-subgradients are vectors too.
\end{remark}
\begin{lemma}\label{lem_510}
Consider the set-valued maps $\partial^+W(0,\cdot): Y \ensuremath{\rightrightarrows} U^*$ and $\phi: U^* \ensuremath{\rightrightarrows} \overline{Y}$. For $u^*\in U^*$ with $\phi(u^*)\subseteq Y$, one has
$$u^* \in \partial^+W(0,y) \;\iff\; y \in \phi(u^*).$$
\end{lemma}
\begin{proof}
By definition, $u^* \in \partial^+W(0,y)$ means
$$y \in \Inf\bigcup_{u \in U}(W(u) - \scp{u^*,u} c).$$
We have
$$
\renewcommand{\arraystretch}{1.8}
\begin{array}{ll}
\displaystyle\Inf\bigcup_{u \in U}(W(u) - \scp{u^*,u} c) &=
\displaystyle\Inf\bigcup_{u \in U}\Inf\bigcup_{x \in X} (\varphi(x,u) - \scp{u^*,u} c) \\
&= \displaystyle\Inf\bigcup_{-u \in G(x)+D,\, x \in X} (F(x) - \scp{u^*,u} c) \\
&= \displaystyle\Inf\bigcup_{u \in G(x)+D,\,x \in X} (F(x) + \scp{u^*,u} c) \\
&= \displaystyle \Inf\bigcup_{x \in X} \ofgg{F(x) + \Inf\bigcup_{u \in G(x)+D} \scp{u^*,u} c}\\
&= \displaystyle \Inf\bigcup_{x \in X} L(x,u^*) \;=\; \phi(u^*),
\end{array}
$$
which proves the claim.
\end{proof}
\begin{definition}
Problem \eqref{p} is called {\em stable} if $W(0)\neq \cb{+\infty}$, $W(0)\neq \cb{-\infty}$ and
$\partial^+ W(0,y)\neq \emptyset$ for all $y\in W(0)$.
\end{definition}
\begin{remark}
In Bo\c{t}, Grad and Wanka \cite{BotGraWan09} the definition of $k$-subgradients is used in order to introduce the property that the primal set-valued optimization problem is $k$-stable: The problem $(P)$ is called $k$-stable with respect to a certain set-valued perturbation map if the corresponding minimal value map is $k$-subdifferentiable at $0$.
\end{remark}
In the proof of the next theorem we use the following lemma.
\begin{lemma}\label{lem_property}
Let $A,B \subseteq Y$ with $\emptyset \neq \Cl_+A \neq Y$ and $\emptyset \neq \Cl_+B \neq Y$, then
$$A \preccurlyeq B \Leftrightarrow (A-\Int C)\cap B=\emptyset.$$
\end{lemma}
\begin{proof}
$A \preccurlyeq B$ is equivalent to $B \subseteq \Cl_+ A$. By Proposition \ref{pr} \eqref{pr_0_5}, \eqref{pr_0_6}, the latter inclusion is equivalent to $(A-\Int C)\cap B=\emptyset$.
\end{proof}
\begin{theorem}\label{th_5_1_0} If \eqref{p} is stable, then strong duality holds for \eqref{p} and \eqref{d}, that is,
$$\Sup \bigcup_{ u^* \in U^*} \phi(u^*) = \Inf \bigcup_{x \in S} F(x).$$
\end{theorem}
\begin{proof} We set
$ \bar d=\Sup \bigcup_{ u^* \in U^*}\phi(u^*)$
and
$\bar p = W(0) = \Inf \bigcup_{x \in S} F(x)$.
By assumption, we have $W(0) \neq \cb{+\infty}$ and $W(0) \neq \cb{-\infty}$, which implies $\emptyset \subsetneq \Cl_+ W(0) \subsetneq Y$. Take some $y \in W(0)$. Since \eqref{p} is stable, there is some $u^*\in U^*$ with $y\in \phi(u^*)$ (by Lemma \ref{lem_510}). Using weak duality we get $\phi(u^*) \preccurlyeq \bar d \preccurlyeq \bar p$. Thus $\bar d \neq \cb{-\infty}$ and $\bar d \neq \cb{+\infty}$, which implies $\emptyset \subsetneq \Cl_+ \bar d \subsetneq Y$.
By weak duality, it remains to prove $\bar p \preccurlyeq \bar d$.
Taking into account Lemma \ref{lem_property}, we have to prove that $(\bar{p}-\Int C)\cap \bar d=\emptyset.$ Suppose, on the contrary, that there is $y \in Y$ with $y \in (\bar{p}-\Int C)\cap \bar d$. Then there exist $z \in \bar{p} = W(0)$ and $c' \in \Int C$ such that
$y=z-c'$. On the other hand, since \eqref{p} is stable, there exists $u^* \in \partial^+W(0,z)$. By Lemma \ref{lem_510}, this means $z \in \phi(u^*)$. Hence $y \in (\phi(u^*)-\Int C) \cap \bar d$. By Lemma \ref{lem_property}, this contradicts $\phi(u^*) \preccurlyeq \bar d$.
\end{proof}
\begin{theorem}\label{th_5_1} If $F$ is $C$-convex, $G$ is $D$-convex,
\begin{equation}\label{eq_cq3}
G({\rm dom\,} F) \cap (-\Int D) \neq \emptyset,
\end{equation}
and $W(0)\neq \cb{-\infty}$, then \eqref{p} is stable.
\end{theorem}
\begin{proof}
From \eqref{eq_cq3}, we get $W(0) \neq \cb{+\infty}$.
Let $\bar y \in W(0)$. By Lemma \ref{lem_510} we have to show that there exists $u^*\in U^*$ with $\bar y \in \phi(u^*)$.
The map $Q\colon X\rightrightarrows Y\times U$ defined by $Q(x)=(F(x),G(x))$ is $C\times D$-convex. Thus, $Q(X)+C\times D$ is a convex set. We next show that
\begin{equation}\label{eq_1_th_5_1}
\left(Q(S)+C\times D\right)\cap \Int (B\times (-D))=\emptyset
\end{equation}
where $B=\cb{\bar y}-C$. Indeed, if there exist $x'\in S$ and $(y,u)$ such that
$$(y,u)\in \left((F(x'),G(x'))+C\times D\right)\cap \Int (B\times (-D)),$$
then $y\in (F(x')+C)\cap (\cb{\bar y}-\Int C)$ and $u\in (G(x')+D)\cap -\Int D$. Thus, $y'=\bar{y}-c'$, where $y'\in F(x')$ and $c'\in \Int C$, and $(G(x')+D)\cap (-D)\neq \emptyset$ (that is, $x'\in S$), which contradicts $\bar y \in W(0) = \Inf \bigcup_{x \in S} F(x).$
By \eqref{eq_1_th_5_1}, applying a separation theorem, there exists a pair $(y^*,u^*)\in Y^*\times U^*\setminus \{(0,0)\}$ such that
\begin{equation}\label{eq_2_th_5_1}
\scp{y^*, y} +\scp{u^*, u} \leq \scp{y^*, b}+\scp{u^*,-d}
\end{equation}
for all $(y,u)\in Q(S)+C\times D$, $b\in B$ and $d\in D$.
We deduce that $(y^*,u^*)\in (C^\circ \times D^\circ) \setminus \{(0,0)\}.$
This implies
$$
\forall (y,u) \in Q(S)+C\times D:\; \scp{y^*, y} + \scp{u^*,u} \leq \scp{y^*, \bar y}.
$$
Since, by \eqref{eq_clplus}, $\Cl_+ W(0) = \Cl_+ F(S) = \cl(F(S)+C)$, we get
\begin{equation}\label{eq_3_th_5_1}
\forall y \in \Cl_+ F(S),\; \forall u \in G(S)+D:\; \scp{y^*, y} + \scp{u^*,u} \leq \scp{y^*, \bar y}.
\end{equation}
We show that $y^* \neq 0$. Assuming the contrary, we get $u^*\neq 0$ and, by \eqref{eq_3_th_5_1}, we have
$$\forall u \in G(S)+D:\; \scp{u^*, u}\leq 0.$$
On the other hand, by \eqref{eq_cq3}, there exist $x \in S$ and $u' \in G(x) \cap (-\Int D)$, i.e., $\scp{u^*, u'}>0$ since $u^* \in D^\circ\setminus \{0\}$. Since $u' \in G(x) \subseteq G(x) + D$, this is a contradiction.
Since $y^* \in C^\circ\setminus\cb{0}$, for the fixed vector $c \in \Int C$ according to \eqref{eq_c}, we have
$\scp{y^*,c}<0$. Without loss of generality we can assume $\scp{y^*,c}=-1$.
Since $\bar y \in W(0) \subseteq \Cl_+ F(S)$, choosing $y=\bar y$ in \eqref{eq_3_th_5_1}, we obtain $\scp{u^*, u}\leq 0$ for all $u\in G(x) \subseteq G(x)+D$ and all $x \in S$. Since $u^*\in D^\circ$, we have $\scp{u^*,u}=0$ for all $u \in G(x)\cap -D$, $x \in S$. Taking into account $\scp{y^*,c}=-1$, \eqref{eq_3_th_5_1} can be written as
\begin{equation*}
\forall y \in \Cl_+ F(S),\; \forall u \in G(S)+D:\; \scp{y^*, y - \scp{u^*,u} c} \leq \scp{y^*,\bar y}.
\end{equation*}
From weak duality, we know that $\bar y \in \Cl_+ \phi(u^*)$. Assuming that $\bar y \in \phi(u^*) + \Int C$, we obtain a contradiction to the latter inequality. Hence, by Proposition \ref{pr}, we have $\bar y \in \phi(u^*)$.
\end{proof}
\noindent
{\bf Another proof of Theorem \ref{th_ld}:} If $W(0)=\cb{-\infty}$ the statement follows from weak duality.
Otherwise it is obtained by combining Theorem \ref{th_5_1_0} with Theorem \ref{th_5_1}. \hfill $\Box$\medskip\noindent
\section{Lagrange duality with operators as dual variables}
We establish in this section another type of dual problem where the dual variables are operators rather than vectors as in problem $(\rm D)$. The use of operators is more common in the literature (see, for instance, \cite{Corley87,LiChe97,Kur99,Luc88,HerRod07}). We will see, however, that a duality theory based on operators as dual variables is an easy consequence of the above results.
Denote by $\mathcal{L}$ the set of all linear continuous operators from $U$ to $ Y$ and by $\mathcal{L}_{+}$ the subset of all positive operators, that is, $\mathcal{L}_+ :=\{T\in \mathcal{L}\colon T(D)\subseteq
C \}$. Given $T\in \mathcal{L}$ and $A\subseteq U$ we write
$T(A)=\cb{T(a) \st a \in A}$. Let $\mathcal{L}_c$ be a subset of $\mathcal{L}$ defined by $\mathcal{L}_c := \{T \in \mathcal{L} \st T=\scp{ u^*,\cdot} c \text{ for some } u^* \in -D^\circ \}. $
Obviously, we have
\begin{equation}\label{eq_op22}
\mathcal{L}_c \subseteq \mathcal{L}_+.
\end{equation}
Moreover, $\mathcal{L}_c$ is isomorphic to $-D^\circ \subseteq U^*$.
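For completeness, we verify \eqref{eq_op22} and make the isomorphism explicit: if $T=\scp{u^*,\cdot}\,c$ with $u^*\in -D^\circ$, then
$$ \forall d\in D:\quad T(d)=\scp{u^*,d}\,c \in C, $$
since $\scp{u^*,d}\geq 0$ and $C$ is a convex cone containing $c$. The isomorphism between $-D^\circ$ and $\mathcal{L}_c$ is simply $u^*\mapsto \scp{u^*,\cdot}\,c$, which is injective because $c\neq 0$ and the pairing $\scp{U,U^*}$ separates points.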
The Lagrangian map $L: X \times \mathcal{L} \ensuremath{\rightrightarrows} \overline{Y}$ is defined by
\begin{equation}\label{L}
L(x,T):= F(x) + T(G(x)).
\end{equation}
We define the dual objective function $\Phi: \mathcal{L} \ensuremath{\rightrightarrows} \overline{Y}$ by
\begin{equation*}\label{duality-f-2}
\Phi(T):= \Inf\bigcup_{x \in X} L(x,T).
\end{equation*}
The associated dual problem is
\begin{equation}\label{d_op}
\tag{$\mathcal{D}$} \tilde d:= \Sup\bigcup_{T \in \mathcal{L}_+} \Phi(T).
\end{equation}
Comparing the two dual problems \eqref{d} (with vectors as dual variables) and \eqref{d_op} (with operators as dual variables), we observe that the Lagrangian \eqref{eq_lagr} for problem \eqref{d} involves the cone $D$, whereas the Lagrangian \eqref{L} for problem \eqref{d_op} does not. On the other hand, the supremum in \eqref{d} is taken over the whole linear space $U^*$, whereas in \eqref{d_op} only the subset $\mathcal{L}_+$ of the linear space $\mathcal{L}$ is considered. A reformulation of problem \eqref{d} clarifies the connection. Consider, instead of \eqref{eq_lagr}, the Lagrangian
\begin{equation}\label{eq_lagr1}
\hat L: X \times U^* \ensuremath{\rightrightarrows} \overline{Y},\qquad
\hat L(x, u^*) = F(x) + \Inf\bigcup_{u \in G(x)}\scp{ u^*,u}\cb{c},
\end{equation}
and the corresponding dual objective function
\[ \hat\phi: U^* \ensuremath{\rightrightarrows} \overline{Y}, \qquad \hat\phi( u^*):= \Inf\bigcup_{x \in X} \hat L(x, u^*).\]
\begin{lemma} The dual objective function of problem \eqref{d} can be expressed as
\begin{equation}\label{phi_c}
\phi(u^*)= \left \{\begin{array}{cl}
\hat \phi(u^*) & \mbox{ if }\quad u^* \in -D^\circ \\
\cb{-\infty} & \mbox{ otherwise. }
\end{array}\right.
\end{equation}
\end{lemma}
\begin{proof}
Since $c \in \Int C$, we have
\[
\Inf\bigcup_{d \in D} \scp{d,u^*} c = \left\{
\begin{array}{cl}
\cb{0} & \text{ if } u^* \in -D^\circ \\
\cb{-\infty} & \text{ otherwise. }
\end{array} \right.
\]
It follows
\[
\renewcommand{\arraystretch}{2}
\begin{array}{ll}
\phi(u^*) &= \displaystyle\Inf\bigcup_{x \in X} L(x,u^*) \\
&= \displaystyle\Inf\bigcup_{x \in X} \ofgg{F(x) + \Inf\bigcup_{u \in G(x)+D} \scp{u,u^*}c }\\
&= \displaystyle\Inf\bigcup_{x \in X} \ofgg{F(x) + \Inf\bigcup_{u \in G(x), d \in D} \ofg{\scp{u,u^*}c + \scp{d,u^*}c}} \\
&= \displaystyle\Inf\bigcup_{x \in X} \ofgg{F(x) + \Inf\bigcup_{u \in G(x)} \scp{u,u^*}\cb{c} + \Inf\bigcup_{d \in D}\scp{d,u^*}\cb{c}} \\
&= \displaystyle \Inf\bigcup_{x \in X} \hat L(x,u^*) + \Inf\bigcup_{d \in D}\scp{d,u^*}\cb{c} \\
&= \displaystyle \hat \phi(u^*) + \Inf\bigcup_{d \in D}\scp{d,u^*}\cb{c}.
\end{array}
\]
Combining the two equations, we obtain the result.
\end{proof}
As a consequence, we can define a dual problem
\begin{equation}\label{d_hat}
\tag{$\hat D$} \hat d:=\Sup\bigcup_{ u^* \in -D^\circ} \hat\phi(u^*),
\end{equation}
where we obviously have
\begin{equation}\label{dd}
\bar d = \Sup\bigcup_{ u^* \in U^*} \phi(u^*) = \Sup\bigcup_{ u^* \in -D^\circ} \hat\phi(u^*) = \hat d.
\end{equation}
Since $-D^\circ$ is isomorphic to $\mathcal{L}_c$ and $\mathcal{L}_c \subseteq \mathcal{L}_+$, we get (using Theorem \ref{th_1})
\begin{equation}\label{ieq_dd}
\bar d = \Sup\bigcup_{ u^* \in U^*} \phi(u^*) \preccurlyeq \Sup\bigcup_{ T \in \mathcal{L}_+} \Phi(T) = \tilde d.
\end{equation}
We next prove weak duality.
\begin{theorem}[weak duality]\label{th_1wL} The problems \eqref{p} and \eqref{d_op} satisfy the weak duality inequality, i.e., $$\Sup\bigcup_{ T \in \mathcal{L}_+} \Phi(T) \preccurlyeq \Inf\bigcup_{x\in S} F(x).$$
\end{theorem}
\begin{proof}
By Theorem \ref{th_1}, we have
\[ \Sup\bigcup_{T \in \mathcal{L}_+} \Inf\bigcup_{x \in X} L(x,T) \preccurlyeq \Inf\bigcup_{x \in X} \Sup\bigcup_{T \in \mathcal{L}_+} L(x,T).\]
Since $\Phi(T)= \Inf\bigcup_{x \in X} L(x,T)$, it remains to show
\[\Inf\bigcup_{x \in X} \Sup\bigcup_{T \in \mathcal{L}_+} L(x,T) \preccurlyeq \Inf\bigcup_{x \in S} F(x).\]
But this follows from Proposition \ref{pr_r45} and
\[\Inf\bigcup_{x \in X} \Sup\bigcup_{T \in \mathcal{L}_+} L(x,T) \preccurlyeq \Inf\bigcup_{x \in S} \Sup\bigcup_{T \in \mathcal{L}_+} L(x,T),\]
which is a consequence of Theorem \ref{th_1}.
\end{proof}
Finally we obtain strong duality as a conclusion of the Lagrange duality theorem with vectors as variables.
\begin{theorem}[strong duality]
Let $F$ be $C$-convex, let $G$ be $D$-convex, and let
\[
G({\rm dom\,} F) \cap (- \Int D) \neq \emptyset.
\]
Then strong duality holds, that is,
$$\Sup\bigcup_{ T \in \mathcal{L}_+} \Phi(T) = \Inf\bigcup_{x \in S} F(x).$$
\end{theorem}
\begin{proof}
From Theorem \ref{th_ld}, inequality \eqref{ieq_dd}, and Theorem \ref{th_1wL}, we get
$$\Inf\bigcup_{x \in S} F(x) = \Sup\bigcup_{ u^* \in U^*} \phi(u^*) \preccurlyeq \Sup\bigcup_{ T \in \mathcal{L}_+} \Phi(T) \preccurlyeq \Inf\bigcup_{x \in S} F(x),$$
which yields the desired equation.
\end{proof}
\bibliographystyle{abbrv}
A large effort has been devoted to understand the QCD dynamics of rapidity gaps in jet events since such processes were observed in $p+\bar{p}$ collisions at the Tevatron more than 10 years ago \cite{d0,cdf}. While describing diffractive processes in QCD has been a challenge for many years, the presence of a hard scale in so-called {\it jet-gap-jet} events for instance, brings hope that one could be able to understand these with perturbative methods. However, after many theoretical investigations, there is still no consensus on what the relevant QCD mechanism really is.
In a hadron-hadron collision, a jet-gap-jet event features a large rapidity gap with a high-$p_T$ jet on each side ($p_T\!\gg\!\Lambda_{QCD}$). Across the gap, the object exchanged in the $t$-channel is a color singlet and carries a large momentum transfer, and when the rapidity gap is sufficiently large the natural candidate in perturbative QCD is the Balitsky-Fadin-Kuraev-Lipatov (BFKL) Pomeron \cite{bfkl}. Of course, the collision energy $\sqrt{s}$ should be large ($\sqrt{s}\gg E_T$) in order for jets to be produced along with a large rapidity gap. Such events are expected to be produced copiously in $p+p$ collisions at the LHC.
To compute the jet-gap-jet process in the BFKL framework, one first has to address the problem of coupling the BFKL Pomeron to partons, as opposed to colorless particles. Indeed, BFKL calculations usually use the fact that impact factors, which describe the coupling of incoming and outgoing particles to the BFKL Pomeron, vanish when attached to gluons with no transverse momentum. This is a property of colorless impact factors. For instance, this is what allows one to turn the Feynman-diagram calculation of the BFKL Pomeron into a conformal-invariant Green function \cite{lipatov}. Consequently, this BFKL Green function cannot be hooked to colored particles, and should first be modified accordingly. The Mueller-Tang (MT) prescription \cite{muellertang} is widely used in the literature to couple the BFKL Pomeron to quarks and gluons.
On the phenomenological side, the original parton-level MT calculation was not sufficient to describe the Tevatron data. A first attempt to improve it was proposed in \cite{cfl}, where parton showering and hadronization were taken into account using the HERWIG Monte Carlo program \cite{herwig}. Agreement with data could only be obtained if the leading-logarithmic (LL) BFKL calculation was done with a fixed value of the coupling constant $\alpha_S,$ which is not satisfactory, as next-to-leading logarithmic (NLL) BFKL corrections are known to be important. In addition, only the leading conformal spin ($p=0$) was taken into account. In
\cite{rikard}, it was shown that a good description of the data could be obtained when some NLL corrections were numerically taken into account in an effective way \cite{fakenll}, but the full NLL-BFKL kernel
\cite{nllbfkl} could still not be implemented. As a result, these tests of the relevance of the BFKL dynamics were not conclusive.
In the most recent phenomenological work on the subject \cite{us}, the full NLL-BFKL kernel was implemented including all conformal spins, along with the collinear improvements necessary to remove spurious singularities and obtain meaningful results \cite{salam,ccs}. However, the results of \cite{us} remained at parton level; therefore, the fact that the rapidity interval between the jets can be larger than the rapidity gap could not be implemented. The D0 measurement could nevertheless be reasonably well described, while the CDF data were not considered. The purpose of this letter is to improve the model by taking into account parton showering, hadronization effects and jet reconstruction, which is necessary to make more precise comparisons with data. We will interface the collinearly-improved NLL-BFKL parton-level results of \cite{us} with the HERWIG Monte Carlo, as was done in \cite{cfl} with the LL-BFKL calculation. We shall compare our results to both D0 and CDF data.
The plan of the letter is as follows. In Section II, we recall the phenomenological NLL-BFKL formulation of the
jet-gap-jet cross section, and in Section III, we explain how it is embedded into the HERWIG Monte Carlo program. In Section IV, we present successful comparisons with all Tevatron data, which allow us to fix the absolute normalization in our model. Predictions for the jet-gap-jet cross section at the LHC are presented in Section V. Section VI is devoted to conclusions and outlook.
\section{The jet-gap-jet cross section in the BFKL framework}
\begin{figure}[t]
\begin{center}
\epsfig{file=jetgapjet.eps,width=13cm}
\caption{Production of two jets surrounding a large rapidity gap in a hadron-hadron collision. $\sqrt{s}$ denotes the collision energy, $p_{T_1}$ ($\eta_1$) and $p_{T_2}$ ($\eta_2$) the transverse momenta (rapidities) of the jets and $x_1$ and $x_2$ are their longitudinal momentum fractions with respect to the incident hadrons. The rapidity interval between the jets $\Delta\eta_J$ is bigger than the rapidity gap
$\Delta\eta_g$.}
\label{jetgapjet}
\end{center}
\end{figure}
The production of a rapidity gap between two outgoing jets in a hadron-hadron collision is pictured in
Fig.~\ref{jetgapjet}, with the different kinematic variables. We denote $\sqrt{s}$ the collision energy,
$p_{T_1}$ and $p_{T_2}$ the transverse momenta of the two jets and $x_1$ and $x_2$ their longitudinal momentum fractions with respect to the incident hadrons. The rapidity interval between the two jets is $\Delta\eta_J=\ln(x_1x_2s/p_{T_1}p_{T_2})$. At the parton level (see Fig.1 in \cite{us}),
$p_{T_1}=-p_{T_2}=p_T$, and the rapidity gap coincides with the rapidity interval
$\Delta\eta\!=\!\ln(x_1x_2s/p_T^2)$ between the outgoing partons that will initiate the jets. The hadronization of the partons into jets reduces the size of the rapidity gap to $\Delta\eta_g$.
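As a numerical illustration of these kinematics, the rapidity interval between the jets follows directly from its definition above. The values below are hypothetical, chosen only to resemble Tevatron-like forward jets:

```python
import math

def rapidity_interval(x1, x2, sqrt_s, pt1, pt2):
    """Delta eta_J = ln(x1 x2 s / (pT1 pT2)) for a pair of jets."""
    return math.log(x1 * x2 * sqrt_s**2 / (pt1 * pt2))

# hypothetical kinematics: sqrt(s) = 1960 GeV, two 20-GeV jets, x1 = x2 = 0.1
deta = rapidity_interval(0.1, 0.1, 1960.0, 20.0, 20.0)
print(round(deta, 2))  # 4.56, comfortably above the D0 cut of 4
```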
In this section, we deal with the parton-level cross section
\begin{equation}
\frac{d\sigma^{pp\to XJJY}}{dx_1 dx_2 dp_T^2} = {\cal S}f_{eff}(x_1,p_T^2)f_{eff}(x_2,p_T^2)
\frac{d\sigma^{gg\rightarrow gg}}{dp_T^2},
\label{jgj}
\end{equation}
where the functions $f_{eff}(x,p_T^2)$ are effective parton distributions that resum the leading logarithms
$\log(p_T^2/\Lambda_{QCD}^2)$. They have the form
\begin{equation}
f_{eff}(x,\mu^2)=g(x,\mu^2)+\frac{C_F^2}{N_c^2}\lr{q(x,\mu^2)+\bar{q}(x,\mu^2)}\ ,
\label{pdfs}
\end{equation}
where $g$ (respectively $q$, $\bar{q}$) is the gluon (respectively quark, antiquark) distribution function in the incoming hadrons, and evolves according to DGLAP evolution \cite{dglap}. Even though the process we consider involves moderate values of $x_1$ and $x_2$ and the perturbative scale $p_T^2\gg\Lambda_{QCD}^2,$ which we have chosen as the factorization scale, the cross section \eqref{jgj} does not obey collinear factorization. This is due to possible secondary soft interactions between the colliding hadrons which can fill the rapidity gap. Therefore, in \eqref{jgj}, the collinear factorization of the parton distributions $f_{eff}$ is corrected with the so-called gap-survival probability ${\cal S}$, which we assume depends only on $\sqrt{s}$ as in standard diffractive calculations. Since the soft interactions happen on much longer time scales, the factor ${\cal S}$ is factorized from the hard part
$d\sigma^{gg\rightarrow gg}/dp_T^2$. This hard cross section is given by
\begin{equation}
\frac{d \sigma^{gg\rightarrow gg}}{dp_T^2}=\frac{1}{16\pi}\left|A(\Delta\eta,p_T^2)\right|^2
\label{hardpart}
\end{equation}
in terms of the $gg\to gg$ scattering amplitude $A(\Delta\eta,p_T^2).$ The two measured jets are initiated by the final-state gluons (or quarks); parton showering and hadronization effects will be discussed in the next section.
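Schematically, Eqs.~\eqref{jgj}--\eqref{hardpart} combine into a simple product. The sketch below only illustrates this structure: the PDF values and the squared amplitude are hypothetical placeholders, not the CTEQ distributions or the BFKL result (${\cal S}=0.1$ is the Tevatron survival probability quoted later in the text):

```python
import math

CF, NC = 4.0 / 3.0, 3.0
COLOR_WEIGHT = CF**2 / NC**2  # quark weight in f_eff, equal to 16/81

def f_eff(g, q, qbar):
    """Effective parton distribution: gluon plus color-suppressed (anti)quarks."""
    return g + COLOR_WEIGHT * (q + qbar)

def dsigma_jgj(S, feff1, feff2, amp_sq):
    """Survival probability times the two effective PDFs times |A|^2/(16 pi)."""
    return S * feff1 * feff2 * amp_sq / (16.0 * math.pi)

# hypothetical PDF values at (x, pT^2) and a hypothetical |A|^2
xs = dsigma_jgj(0.1, f_eff(5.0, 1.2, 0.4), f_eff(4.0, 1.0, 0.3), 2.5)
```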
In the following, we consider the high-energy limit in which the rapidity gap $\Delta\eta$ is assumed to be very large. The BFKL framework allows one to compute the $gg\to gg$ amplitude in this regime, and the result is known up to NLL accuracy. We note that there exist other QCD-based approaches to compute the jet-gap-jet cross section \cite{sll}. Let us first point out that in general collinear and $k_T$-factorization are two distinct schemes to factorize a hard process from a soft process (as is the case for the proton structure function
$F_2$), and should not be mixed. But the process we are investigating is different: collinear factorization is used to separate the hard part from the soft part, and $k_T$-factorization is only used within the hard part itself. It allows one to factorize the amplitude $A(\Delta\eta,p_T^2)$ into three hard pieces: two impact factors defined order-by-order with respect to $\alpha_S,$ and the BFKL Green function where a resummation of leading (and next-leading) logarithms is performed.
Since in our calculation the BFKL Pomeron is coupled to quarks or gluons, the BFKL Green function cannot be used as it is and should be modified. The transformation proposed in \cite{muellertang} is based on the fact that one should recover the analyticity of the Feynman diagrams. It was later argued that this prescription corresponds to a deformed representation of the BFKL kernel that indeed could be coupled to colored particles and for which the bootstrap relation is fulfilled \cite{barlip}. Applying the MT prescription at NLL leads to
\begin{equation}
A(\Delta\eta,p_T^2)=\frac{16N_c\pi\alpha_S^2(p_T^2)}{C_Fp_T^2}\sum_{p=-\infty}^\infty\intc{\gamma}
\frac{[p^2-(\gamma-1/2)^2]\exp\left\{\bar\alpha(p_T^2)\chi_{eff}[2p,\gamma,\bar\alpha(p_T^2)] \Delta \eta\right\}}
{[(\gamma-1/2)^2-(p-1/2)^2][(\gamma-1/2)^2-(p+1/2)^2]}
\label{jgjnll}
\end{equation}
with the complex integral running along the imaginary axis from $1/2\!-\!i\infty$
to $1/2\!+\!i\infty,$ and with only even conformal spins contributing to the sum \cite{leszek}.
The running coupling is given by
\begin{equation}
\bar\alpha(p_T^2)=\frac{\alpha_S(p_T^2)N_c}{\pi}=
\left[b\log\lr{p_T^2/\Lambda_{QCD}^2}\right]^{-1}\ ,\quad b=\frac{11N_c-2N_f}{12N_c}\ .
\end{equation}
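For illustration, this running coupling is easily evaluated; $\Lambda_{QCD}=0.2$~GeV and $N_f=4$ below are illustrative choices, not necessarily those used in our fits:

```python
import math

def alpha_bar(pt2, nf=4, lambda_qcd=0.2):
    """alpha_bar(pT^2) = [b ln(pT^2/Lambda^2)]^(-1), b = (11 Nc - 2 nf)/(12 Nc)."""
    nc = 3.0
    b = (11.0 * nc - 2.0 * nf) / (12.0 * nc)
    return 1.0 / (b * math.log(pt2 / lambda_qcd**2))

# the coupling decreases slowly as the jet pT grows
print(alpha_bar(20.0**2) > alpha_bar(50.0**2))  # True
```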
It is important to note that in formula \eqref{jgjnll}, we used the leading-order non-forward quark and gluon impact factors. We point out that the next-to-leading-order impact factors are known \cite{ifnlo}, and that in principle a full NLL analysis is feasible, but this goes beyond the scope of our study.
The NLL-BFKL effects are phenomenologically taken into account by the effective kernels
$\chi_{eff}(p,\gamma,\bar\alpha).$ For $p=0,$ the scheme-dependent NLL-BFKL kernels provided by the regularisation procedure $\chi_{NLL}\lr{\gamma,\omega}$ depend on $\omega,$ the Mellin variable conjugate to $\exp(\Delta\eta).$ In each case, the NLL kernels obey a {\it consistency condition} \cite{salam} which allows to reformulate the problem in terms of $\chi_{eff}(\gamma,\bar\alpha)$ (see also \cite{ccs,singnll} for different approaches). The effective kernel $\chi_{eff}(\gamma,\bar\alpha)$ is obtained from the NLL kernel $\chi_{NLL}\lr{\gamma,\omega}$ by solving the implicit equation $\chi_{eff}=\chi_{NLL}\lr{\gamma,\bar\alpha\ \chi_{eff}}$. In
\cite{nllmnjus,nllmnjthem}, the regularisation procedure has been extended to non-zero conformal spins and the kernel $\chi_{NLL}\lr{p,\gamma,\omega}$ was obtained from the results of \cite{kotlip}. The formulae needed to compute it can be found in the appendix of \cite{nllmnjus} (in the present study we shall use the S4 scheme in which $\chi_{NLL}$ is supplemented by an explicit $\bar\alpha$ dependence, the results in the case of the S3 scheme are similar). Then the effective kernels $\chi_{eff}(p,\gamma,\bar\alpha)$ are obtained from the NLL kernel by solving the implicit equation:
\begin{equation}
\chi_{eff}=\chi_{NLL}\lr{p,\gamma,\bar\alpha\ \chi_{eff}}\ .
\label{eff}
\end{equation}
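Numerically, the implicit equation \eqref{eff} can be solved by a simple fixed-point iteration. In the sketch below the kernel is a toy stand-in with made-up constants (the true $\chi_{NLL}$ of the S4 scheme is far more involved); only the iteration scheme itself is being illustrated:

```python
def solve_effective_kernel(chi_nll, alpha_bar, tol=1e-12, max_iter=200):
    """Solve chi_eff = chi_NLL(alpha_bar * chi_eff) by fixed-point iteration."""
    chi = chi_nll(0.0)  # start from the omega = 0 value
    for _ in range(max_iter):
        new = chi_nll(alpha_bar * chi)
        if abs(new - chi) < tol:
            return new
        chi = new
    raise RuntimeError("fixed point did not converge")

# toy kernel chi_NLL(omega) = chi0 / (1 + a*omega), with made-up chi0 and a
chi_eff = solve_effective_kernel(lambda w: 2.77 / (1.0 + 0.5 * w), alpha_bar=0.25)
```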
Similar NLL-BFKL phenomenological studies have been carried out with Mueller-Navelet jets in hadron-hadron collisions \cite{nllmnjus,nllmnjthem}, forward jet production in deep inelastic scattering
\cite{nllfjus,nllfjthem}, and the proton structure function \cite{nllf2}. While in the $F_2$ analysis the NLL corrections did not really improve the BFKL description, this was definitely the case in the forward-jet study. In the Mueller-Navelet jet case, NLL corrections dramatically change the predictions, even more so in the full calculation when NLO impact factors are also implemented \cite{nllmnjfull}. In fact, these results cast strong doubts on whether Mueller-Navelet jets are a good observable to unambiguously observe BFKL effects, leaving the jet-gap-jet measurement as perhaps the new candidate.
In the LL-BFKL case that we consider for comparisons, the formula for the jet-gap-jet cross section is formally the same as the NLL one, with the following substitutions in \eqref{jgjnll}:
\begin{equation}
\chi_{eff}(p,\gamma,\bar\alpha)\rightarrow\chi_{LL}(p,\gamma)
=2\psi(1)-\psi\lr{1-\gamma+\frac{|p|}2}-\psi\lr{\gamma+\frac{|p|}2}\ ,
\hspace{1cm}\bar\alpha(k^2)\rightarrow\bar\alpha=\mbox{const. parameter} ,
\label{chill}\end{equation}
where $\psi(\gamma)\!=\!d\log\Gamma(\gamma)/d\gamma$ is the logarithmic derivative of the Gamma function. In this case, the coupling $\bar\alpha$ is a priori a parameter. We choose to fix it to the value 0.16 obtained in
\cite{nllfjus} by fitting the forward-jet data from HERA. This unphysically small value of the coupling is indicative of the slower Bjorken-$x$ dependence of the forward-jet data compared to the LL-BFKL cross section, when used with a reasonable $\bar\alpha$ value. In fact, the value $\bar\alpha=0.16$ mimics the slower energy dependence of the NLL-BFKL cross section (in this case the average value of $\bar\alpha$ is about 0.25), which in the forward-jet case is consistent with data. Therefore in both the LL- and NLL-BFKL cases, one deals with one-parameter formulae, the parameter being the absolute normalization, which is not under control. In the NLL case, this is due to the fact that we do not use NLO impact factors.
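The LL kernel \eqref{chill}, by contrast, is elementary to evaluate. The sketch below uses a hand-rolled digamma routine (upward recurrence plus the standard asymptotic series, so that no special-function library is needed) and checks the well-known saddle-point value $\chi_{LL}(0,1/2)=4\ln 2$:

```python
import math

def digamma(x):
    """psi(x) for x > 0, via psi(x) = psi(x+1) - 1/x and the asymptotic series."""
    acc = 0.0
    while x < 10.0:
        acc -= 1.0 / x
        x += 1.0
    inv2 = 1.0 / (x * x)
    acc += math.log(x) - 0.5 / x \
        - inv2 * (1.0/12.0 - inv2 * (1.0/120.0 - inv2 / 252.0))
    return acc

def chi_ll(p, gamma):
    """Leading-log BFKL eigenvalue for conformal spin p, real gamma in (0, 1)."""
    return (2.0 * digamma(1.0)
            - digamma(1.0 - gamma + abs(p) / 2.0)
            - digamma(gamma + abs(p) / 2.0))

print(abs(chi_ll(0, 0.5) - 4.0 * math.log(2.0)) < 1e-6)  # True
```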
Finally, to compute the cross section \eqref{jgj}, we use CTEQ parton distribution functions \cite{cteq},
and we take ${\cal S}=0.1$ for the gap-survival probability at the Tevatron and ${\cal S}=0.03$ at the LHC. More details on the parton-level computations can be found in \cite{us}, such as the importance of the different conformal spins in \eqref{jgjnll}, or the uncertainty due to the choice of the renormalization scale. In this work, the goal is to obtain hadron-level results by interfacing \eqref{jgj} with the HERWIG event generator.
\section{Implementation of the NLL-BFKL formula in HERWIG}
The parton-level calculation presented in the previous section leads by definition to a gap size $\Delta\eta$ equal to the interval in rapidity between the partons that initiate the jets. At particle level, this is no longer true. Due to QCD radiation and hadronisation, the jets have a finite size, and the gap size
$\Delta\eta_g$ is smaller than the difference in rapidity between the two jets $\Delta\eta_J$ (see
Fig.~\ref{jetgapjet}). This has an important consequence: to be able to compare the NLL-BFKL jet-gap-jet cross sections with CDF and D0 measurements at the Tevatron, we need to embed our formulae in a Monte Carlo code. For instance, the D0 collaboration selects events with a gap devoid of any activity in the $[-1,1]$ region in rapidity while requiring the jets to be separated by at least 4 units in rapidity. To take these effects into account, we implemented the NLL-BFKL cross section in the HERWIG Monte Carlo, and performed our analysis at the particle level, after hadronisation. Since this procedure of going from parton level to hadron level is quite sensitive to the way jets are reconstructed, we use the same jet algorithm as used in the experiments.
Practically, in order to implement our formalism in HERWIG, we modified the HWHSNM function which implements the matrix element squared for color-singlet parton-parton scattering \cite{herwig}. Formula
\eqref{hardpart}, which gives the BFKL cross section $d\sigma/dp_T^2$, is too complicated to be implemented directly in HERWIG, since it involves an integration in the complex plane over $\gamma$, and it would take too much computing time to generate many events. To avoid this issue, we parametrized $d\sigma/dp_T^2$ as a function of the parton $p_T$ and of the rapidity interval $\Delta\eta$ between the two partons at generator level. Denoting
$z(p_T^2)=\bar\alpha(p_T^2)\Delta\eta/2$, the parametrization used is
\begin{equation}
\frac{d \sigma}{dp_T^2}=\frac{\alpha_S^4(p_T^2)}{4\pi p_T^4} \left[ a + b p_T + c \sqrt{p_T}
+ (d + e p_T + f \sqrt{p_T})\times z + (g + h p_T)\times z^2 +
(i + j \sqrt{p_T})\times z^3 + \exp(k + l z) \right]\ .
\label{formulafit}
\end{equation}
This formula is purely phenomenological, not motivated by theory; it was introduced only to obtain a very good $\chi^2$ when fitting \eqref{formulafit} to the full expression of $d\sigma/dp_T^2$. To perform the fit, 2330 points were used, for parton $p_T$ ranging from 10 to 120 GeV and $\Delta\eta$ up to 10. The values of the different parameters were implemented in the HERWIG Monte Carlo. To summarize, we input into HERWIG the NLL-BFKL parton-level cross section, which depends on $p_T$ and $\Delta\eta$, and the output depends on $\Delta\eta_g$, $\Delta\eta_J$, and the jets' transverse momenta $p_{T_1}$ and $p_{T_2}$. Further integrations over these kinematic variables are performed to obtain the different observables discussed in the next section, taking into account experimental cuts.
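Evaluating the parametrization \eqref{formulafit} inside the generator is then very fast. In the sketch below the twelve fit parameters are placeholders chosen for illustration only; the actual fitted values live in our modified HERWIG code and are not reproduced here:

```python
import math

def dsigma_param(pt, z, pars, alpha_s):
    """Evaluate the phenomenological parametrization of d sigma / d pT^2,
    with z = alpha_bar(pT^2) * Delta_eta / 2 and pars = (a, ..., l)."""
    a, b, c, d, e, f, g, h, i, j, k, l = pars
    sq = math.sqrt(pt)
    poly = (a + b * pt + c * sq
            + (d + e * pt + f * sq) * z
            + (g + h * pt) * z**2
            + (i + j * sq) * z**3
            + math.exp(k + l * z))
    return alpha_s**4 / (4.0 * math.pi * pt**4) * poly

# hypothetical parameter values, for illustration only
toy = (1.0, 0.01, 0.1, 0.5, 0.0, 0.0, 0.2, 0.0, 0.05, 0.0, -2.0, 1.0)
val = dsigma_param(pt=20.0, z=0.8, pars=toy, alpha_s=0.15)
```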
\section{Comparison with Tevatron data}
\begin{figure}[t]
\begin{center}
\epsfig{file=d0.eps,width=8.9cm}
\hfill
\epsfig{file=cdf.eps,width=8.9cm}
\end{center}
\caption{Comparisons between the D0 (left) and CDF (right) measurements of the jet-gap-jet event ratio with the NLL- and LL-BFKL calculations. The NLL calculation is in fair agreement with the data while the LL one leads to a worse description.}
\label{tevatron}
\end{figure}
The D0 collaboration has performed a measurement of the jet-gap-jet event ratio, defined as the ratio of the jet-gap-jet cross section to the inclusive di-jet cross section, as a function of the transverse energy of the second-leading jet, that we denote $E_T$, and also as a function of the rapidity difference
$\Delta\eta_J$ between the two leading jets \cite{d0}. At least two jets are reconstructed in the D0 calorimeter with $E_T> 15\ \mbox{GeV}$ for the second leading jet. In addition, the two jets are required to be in the forward regions and in opposite hemispheres, by requesting $1.9<|\eta_{1,2}|<4.1$ and
$\eta_1 \eta_2 <0$. The difference in rapidity between the two jets $\Delta\eta_J$ is required to be larger than 4, and a rapidity gap covering at least the region between $\eta=-1$ and $\eta=1$ is required. The data are presented as a function of the second-leading-jet $E_T$, or as a function of $\Delta\eta_J$, in which case low-$E_T$ and high-$E_T$ jet samples were used (low $E_T$ means $15<E_T<25\ \mbox{GeV}$ and high $E_T$ means $E_T>30\ \mbox{GeV},$ those cuts applying to both jets).
To compare directly the D0 measurement with the NLL-BFKL calculation implemented in HERWIG, we compute the following ratio
\begin{equation}
R=\frac{NLL~BFKL~Herwig}{Dijet~Herwig}\times\frac{LO~QCD}{NLO~QCD}\ ,
\end{equation}
where $NLL~BFKL~Herwig$ and $Dijet~Herwig$ are the jet-gap-jet and the inclusive di-jet cross sections obtained with HERWIG, respectively. To take into account NLO QCD effects, we also correct the ratio
$R$ by the LO/NLO QCD di-jet cross section ratio obtained with the NLOJet++ program \cite{nlojet}. The same method applies for the LL-BFKL cross section calculation. The comparison between our calculations and the D0 data is given in the left plot of Fig.~\ref{tevatron}, after the overall normalization was adjusted in the two cases (LL and NLL). We find that there is a good agreement between the NLL-BFKL calculation and the data, whereas the LL-BFKL calculation leads to an $E_T$ dependence which is too flat. Moreover, in the LL case it is difficult to accommodate all the data with a single overall normalization factor, while in the NLL case this is not a problem.
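Schematically, the ratio $R$ is assembled from the two Monte Carlo cross sections and the fixed-order correction factor; every number in the sketch below is hypothetical:

```python
def jgj_ratio(sigma_jgj, sigma_dijet, sigma_lo, sigma_nlo):
    """R = (BFKL jet-gap-jet / inclusive dijet) x (LO/NLO) dijet correction."""
    return (sigma_jgj / sigma_dijet) * (sigma_lo / sigma_nlo)

# hypothetical cross sections, in arbitrary units
R = jgj_ratio(sigma_jgj=1.2, sigma_dijet=150.0, sigma_lo=1.0, sigma_nlo=1.3)
```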
The comparison with CDF data is given in the right plot of Fig.~\ref{tevatron}; in this case the normalization of the different data sets is arbitrary. The CDF collaboration also measured the jet-gap-jet cross section requesting a gap between $-1$ and $1$ in rapidity, but used a higher jet $E_T$ threshold of 20 GeV and a lower acceptance in jet rapidity, between 1.8 and 3.5, compared to the D0 measurement. The CDF requirement on the minimum rapidity interval between the two jets is then $\Delta\eta_J>3.6$, compared to 4 in the case of D0.
CDF measured the jet-gap-jet event ratio as a function of the average jet transverse momentum (this is what $E_T$ denotes in the case of CDF), the average transverse momentum of the third jet $E_T^{(3)}$ when there is one in the event, and also as a function of $\Delta\eta_J$. The conclusion remains the same as in the case of D0 data, namely that the NLL-BFKL formalism leads to a better description than the LL one. However, it is worth noticing that we are not able to describe the full $\Delta\eta_J$ dependence and especially the decrease at high $\Delta\eta_J$, which is somewhat in disagreement with the D0 measurement. Further measurements in progress in the CDF collaboration will be very useful to understand these differences. Parton showering and hadronization effects are crucial in order to obtain this level of agreement with data.
\begin{figure}
\begin{center}
\epsfig{file=lhc1.eps,width=8.7cm}
\hfill
\epsfig{file=lhc2.eps,width=8.7cm}
\end{center}
\caption{Predictions of our model for the ratio of the jet-gap-jet to the inclusive-jet cross section at the LHC, as a function of the second-leading-jet transverse energy $E_T$ (left), and of the rapidity difference between the two leading jets $\Delta\eta_J$ (right).}
\label{lhc}
\end{figure}
\section{Predictions for the LHC}
Using the normalizations obtained from the fits to the D0 data, we are able to predict the
jet-gap-jet event ratio at the LHC. In doing so, we also take into account the fact that the gap-survival probability is smaller by a factor 10/3 (this is estimated for a collision energy of $\sqrt{s}=14$ TeV).
Requesting both jets to have $E_T>20$ GeV, and the jet rapidities to obey $2<|\eta_{1,2}|<5$ and
$\eta_1 \eta_2 <0$, the values of the jet-gap-jet event ratio are shown in Fig.~\ref{lhc}, as a function of $E_T$ for different $\Delta\eta_J$ ranges (left plot), and as a function of $\Delta\eta_J$ for different $E_T$ ranges (right plot). The main feature of our model's predictions is that the jet-gap-jet event ratio is about 0.002 and does not vary much with $E_T$ or $\Delta\eta_J$, except for the LL-BFKL predictions, which increase slightly with $\Delta\eta_J$. By contrast, the parton-level predictions obtained in \cite{us} featured an increase of the ratio with both $E_T$ and $\Delta\eta_J$.
Let us add a word of caution about these predictions. We have assumed a specific value for the gap-survival probability at the LHC (namely 0.03), but this value suffers from large theoretical uncertainties \cite{gapsurvival}. It will be measured by the LHC experiments and our cross section predictions will have to be modified once such measurements are performed. For instance, our cross section should be reduced by a factor 10 if the survival probability is found to be ten times smaller than 0.03 for a center-of-mass energy of 14 TeV. In addition, our prediction assumes that the inclusive jet cross section at the LHC can be correctly described by the HERWIG Monte Carlo, as is the case at the Tevatron \cite{Abazov:2004hm}. This can again be tested at the LHC using the first data. A first indication for a center-of-mass energy of 7 TeV was given by the measurement of the difference in azimuthal angle in dijet events \cite{lhcprelim}, which seems to be well described by the HERWIG Monte Carlo. However, the same measurement, as well as cross section comparisons between data and MC, needs to be done at the higher center-of-mass energy of 14 TeV to make sure that the HERWIG Monte Carlo can describe the inclusive jet measurement. If some discrepancy needs to be accounted for, it must be reflected as well in our prediction of the jet-gap-jet cross-section ratio.
\section{Conclusions}
We have embedded the parton-level NLL-BFKL calculation of \cite{us} into the HERWIG Monte Carlo program, in order to obtain hadron-level results for the jet-gap-jet cross-section in hadron-hadron collisions, corresponding to the production of two high-$p_T$ jets around a large rapidity gap. The NLL-BFKL effects are implemented through a renormalization-group improved kernel in the S4 scheme, while the Mueller-Tang prescription is used to couple the BFKL Pomeron to colored partons, described with only LO impact factors.
After adjusting one parameter, the overall normalization, the NLL-BFKL calculation is able to describe all Tevatron data, except the higher end of the $\Delta\eta_J$ dependence measured by CDF. This still provides an improvement compared to the LL-BFKL calculation (obtained with the fixed value of the coupling
$\bar\alpha=0.16$), which in addition features an $E_T$ dependence that is too flat compared to the D0 data. We presented predictions which could be tested at the LHC, for the same jet-gap-jet event ratio measured at
the Tevatron, but for larger rapidity gaps.
We noticed that going from parton level to hadron level is necessary in order to obtain a global description of the Tevatron data, hence our results supersede those of Ref.~\cite{us}. Concerning the LHC predictions, the
hadron-level calculations show almost no dependence of the jet-gap-jet event ratio on $E_T$ or $\Delta\eta_J$ (the ratio is about 0.002 in the kinematic range we considered, and for the experimental cuts we used), while the parton-level calculations showed an increase of this ratio with both $E_T$ and $\Delta\eta_J$. This should provide a strong test of the BFKL regime.
\section{Introduction}
Quantum walks (QWs) are considered to be a quantum analog of classical random walks.
The system and the dynamics of QWs have some similarities to those of random walks, but the behavior of QWs is different from that of random walks in terms of their probability distributions.
In general, the behavior of QWs cannot be predicted based on our intuition.
A 3-period time-dependent QW which we are going to consider in this paper leads to an interesting behavior.
We study this behavior after a large number of discrete time steps and describe it as a long-time limit theorem.
The theorem will be given as a convergence in distribution on a space rescaled by time. The fact that the relevant scale is time itself, and not its square root, has been observed since the very first papers on the subject \cite{AharonovDavidovichZagury1993}.
For a time-independent standard QW on the line, a limit distribution was obtained by Konno~\cite{Konno2002a,Konno2005} in 2002 for the first time and the limit density function has a representation similar to an arcsine law, in marked contrast to a Gauss distribution which appears for classical random walks under appropriate conditions.
Time-dependent QWs were numerically studied in some papers~\cite{MackayBartlettStephensonSanders2002,RibeiroMilmanMosseri2004,BanulsNavarretePerezRoldanSoriano2006,Romanelli2009} and some limit theorems were analytically derived \cite{MachidaKonno2010,Machida2011,Machida2013b,IdeKonnoMachidaSegawa2011}.
In particular, Machida and Konno~\cite{MachidaKonno2010} treated a 2-period discrete-time QW on the line whose time evolution is given by two unitary matrices which are used as coin-flip operators.
The long time behavior of the 2-period time-dependent walk can be completely determined by one of the two matrices according to the determinant of the product of both of them.
In this paper we define a 3-period time-dependent discrete-time QW on the line and we will see that this 3-period time-dependent walk also exhibits interesting behavior.
The motivation for the analytical study of the 3-period time-dependent walk done here comes from the numerical studies of Ribeiro et al.~\cite{RibeiroMilmanMosseri2004}.
Besides periodic time-dependent walks, they also looked at time-dependent QWs whose coin-flip operator was controlled by a quasiperiodic sequence or a random sequence.
According to their result, we can expect that the long time behavior of a walk with a long period is sub-ballistic or diffusive.
That means that as the length of the period increases, the behavior of the periodic time-dependent walks gets either less ballistic or more diffusive, departing from the behavior of a time-independent quantum walk.
So, we would see a different behavior for a periodic QW depending on the length of the period, and this would be important in order to discuss the relationship between QWs and random walks.
We will define a 3-period time-dependent QW on the line in the following section.
The walker starts from the origin on the lattice $\mathbb{Z}=\left\{0,\pm 1,\pm2,\ldots\right\}$ at time 0 and from its state at time $t\in\left\{0,1,2,\ldots\right\}$ one gets the state at time $t+1$ after operating with a coin-flip operator and a position-shift operator.
In our model the coin operator is 3-periodic as a function of time $t$, and we use just one and the same coin-flip operator in the evolution.
For the 3-period time-dependent walk, we give a limit theorem as $t\to\infty$ in Sec.~\ref{sec:limit_th}.
The proof of the theorem is based on Fourier analysis and is included in the same section.
In the final section, we give a summary and a discussion of our result.
There are two appendices: in the first one we show how the analytical proof can be made to work in the case of some unitary (as opposed to orthogonal) operators.
In the second one we look at a number of models not covered by our analytical results and give some interesting numerical evidence of their limiting behavior.
\section{Definition of a 3-period time-dependent QW on the line}
\label{sec:definition}
In this paper we deal with a discrete-time 2-state QW on the line and we give a 3-periodic time evolution rule for the walk.
The total system of a discrete-time 2-state QW on the line is defined on a tensor space $\mathcal{H}_p\otimes\mathcal{H}_c$, where $\mathcal{H}_p$ is called the position Hilbert space, spanned by the orthonormal basis $\left\{\ket{x}:\,x\in\mathbb{Z}\right\}$, and $\mathcal{H}_c$ is called the coin Hilbert space, spanned by the orthonormal basis $\left\{\ket{0},\ket{1}\right\}$.
Let $\ket{\psi_{t}(x)} \in \mathcal{H}_c$ be the state of the walker at position $x$ at time $t$.
The state of the 2-state QW on the line at time $t$ is expressed by $\ket{\Psi_t}=\sum_{x\in\mathbb{Z}}\ket{x}\otimes\ket{\psi_{t}(x)}\in\mathcal{H}_p\otimes\mathcal{H}_c$.
In particular, we focus on a 3-period time-dependent discrete-time QW whose coin-flip operator is given by
\begin{align}
C=&\cos\theta\ket{0}\bra{0}+\sin\theta\ket{0}\bra{1}+\sin\theta\ket{1}\bra{0}-\cos\theta\ket{1}\bra{1}\nonumber\\
=&c\ket{0}\bra{0}+s\ket{0}\bra{1}+s\ket{1}\bra{0}-c\ket{1}\bra{1},
\label{eq:coin-flip operator}
\end{align}
with $\theta\in [0,2\pi)$ and we have abbreviated $\cos\theta, \sin\theta$ to $c, s$ in Eq.~(\ref{eq:coin-flip operator}).
The total system at time $t$ evolves to the next state at time $t+1$ according to the time evolution rule
\begin{equation}
\ket{\Psi_{t+1}}=\left\{\begin{array}{ll}
\tilde{S}\tilde{C}\ket{\Psi_t}& (t=0,1 \mod 3)\\[1mm]
\tilde{S}\ket{\Psi_t}& (t=2 \mod 3)
\end{array}\right.,
\label{eq:time-evolution}
\end{equation}
where
\begin{align}
\tilde{C}=&\sum_{x\in\mathbb{Z}}\ket{x}\bra{x}\otimes C,\\
\tilde{S}=&\sum_{x\in\mathbb{Z}}\ket{x-1}\bra{x}\otimes\ket{0}\bra{0}+\ket{x+1}\bra{x}\otimes\ket{1}\bra{1}.
\end{align}
The time evolution of the state $\ket{\Psi_t}$ depends on the value $t \mod 3$.
Equation~(\ref{eq:time-evolution}) states that the position of the walker gets shifted after the coin-flip operation has been completed at time $t=0,1 \mod 3$, and it just gets shifted without any coin-flip operation at time $t=2 \mod 3$.
Here, we do not take $\theta=0,\frac{\pi}{2},\pi,\frac{3\pi}{2}$ because the behavior of the walker would then be trivial.
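In matrix form, the coin-flip operator \eqref{eq:coin-flip operator} reads $C=\begin{pmatrix}c & s\\ s & -c\end{pmatrix}$. A quick numerical check, for an arbitrary sample angle, confirms that this real symmetric matrix is orthogonal and squares to the identity:

```python
import math

def coin(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, s], [s, -c]]

def matmul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

C = coin(2 * math.pi / 5)
CC = matmul2(C, C)  # C is real and symmetric, so C*C = C^T C
assert all(abs(CC[i][j] - (1.0 if i == j else 0.0)) < 1e-12
           for i in range(2) for j in range(2))
```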
Under the condition $\braket{\Psi_0|\Psi_0}=1$, the quantum walker can be observed at position $x$ at time $t$ with probability
\begin{equation}
\mathbb{P}(X_t=x)=\bra{\Psi_t}\biggl\{\ket{x}\bra{x}\otimes (\ket{0}\bra{0}+\ket{1}\bra{1})\biggr\}\ket{\Psi_t},
\end{equation}
where $X_t$ is a random variable and denotes the position of the walker at time $t$, regardless of the spin orientation.
The probability distribution evolves as a function of time $t$, as shown numerically in Fig.~\ref{fig:time-probability}.
This linear spreading is reflected in the limit theorem derived in the next section.
We also show how the time evolution of the probability distribution depends on the parameter $\theta$ of the coin-flip operator $C$ in Fig.~\ref{fig:theta-probability}.
We will analyze the long time behavior of this probability distribution $\mathbb{P}(X_t=x)$ as $t\to\infty$ in the next section, concentrating on values of time of the form $3t$. Other values of time show an indistinguishable behavior (see also Appendix~\ref{app:3t+1_and_3t+2}).
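The time-evolution rule \eqref{eq:time-evolution} can be simulated directly in position space. A minimal sketch (for the initial state $\alpha=1/\sqrt{2}$, $\beta=i/\sqrt{2}$ and the sample angle $\theta=\pi/4$) propagates the two amplitude components and checks that unitarity preserves the total probability:

```python
import math

def step(state, apply_coin, c, s):
    """One time step: optional coin flip, then the position shift S-tilde."""
    new = {}
    for x, (a, b) in state.items():
        if apply_coin:
            a, b = c * a + s * b, s * a - c * b
        new.setdefault(x - 1, [0.0, 0.0])[0] += a  # |0> component moves left
        new.setdefault(x + 1, [0.0, 0.0])[1] += b  # |1> component moves right
    return new

def evolve(t_max, theta, alpha, beta):
    c, s = math.cos(theta), math.sin(theta)
    state = {0: [alpha, beta]}
    for t in range(t_max):
        state = step(state, t % 3 != 2, c, s)  # no coin flip when t = 2 mod 3
    return state

state = evolve(30, math.pi / 4, 1 / math.sqrt(2), 1j / math.sqrt(2))
total = sum(abs(a)**2 + abs(b)**2 for a, b in state.values())
print(abs(total - 1.0) < 1e-12)  # True
```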
\begin{figure}[h]
\begin{center}
\begin{minipage}{70mm}
\begin{center}
\includegraphics[scale=0.7]{fig1-a.eps}\\[2mm]
(a) $\theta=\frac{\pi}{4}$
\end{center}
\end{minipage}
\begin{minipage}{70mm}
\begin{center}
\includegraphics[scale=0.7]{fig1-b.eps}\\[2mm]
(b) $\theta=\frac{2\pi}{5}$
\end{center}
\end{minipage}
\vspace{5mm}
\fcaption{Time evolution of probability distributions in the case of $\alpha=1/\sqrt{2}, \beta=i/\sqrt{2}$}
\label{fig:time-probability}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{minipage}{70mm}
\begin{center}
\includegraphics[scale=0.7]{fig2-a.eps}\\[2mm]
(a) $\alpha=1/\sqrt{2},\,\beta=i/\sqrt{2}$
\end{center}
\end{minipage}
\begin{minipage}{70mm}
\begin{center}
\includegraphics[scale=0.7]{fig2-b.eps}\\[2mm]
(b) $\alpha=1,\,\beta=0$
\end{center}
\end{minipage}
\vspace{5mm}
\fcaption{The relationships between the probability distribution at time $t=150$ and the parameter $\theta$ which determines the coin-flip operator $C$}
\label{fig:theta-probability}
\end{center}
\end{figure}
\section{Long-time limit theorem and its proof}
\label{sec:limit_th}
In this section we present a long-time limit theorem for the probability distribution, together with its proof, under the assumption that the walker starts from the origin.
Let us take an initial state $\ket{\Psi_0}=\ket{0}\otimes\left(\alpha\ket{0}+\beta\ket{1}\right)$ with $|\alpha|^2+|\beta|^2=1$.
This initial condition means that the walker starts from the origin because of $\mathbb{P}(X_0=0)=1$.
Then we obtain a limit theorem for the 3-period time-dependent QW.
\begin{thm}
\begin{align}
\lim_{t\to\infty}\mathbb{P}\left(\frac{X_{3t}}{3t}\leq x\right)
=\int_{-\infty}^x \Biggl[&\left\{1-\nu(\alpha,\beta; y)\right\}f(y)I_{\left(\frac{1-4c^2}{3},\frac{\sqrt{1+8c^2}}{3}\right)}(y) \nonumber\\
&+\left\{1+\nu(\alpha,\beta; -y)\right\}f(-y)I_{\left(-\frac{\sqrt{1+8c^2}}{3},-\frac{1-4c^2}{3}\right)}(y)\Biggr]\,dy,\label{eq:cumulative}
\end{align}
where
\begin{align}
f(x)=&\frac{|s|\left(|s|x+\sqrt{D(x)}\right)^2}{\pi(1-x^2)\sqrt{W_{+}(x)}\sqrt{W_{-}(x)}\sqrt{D(x)}},\\[3mm]
\nu(\alpha,\beta; x)=&\frac{1}{c(1+8c^2)}\left\{9c^3(|\alpha|^2-|\beta|^2)+3s(1+6c^2)\Re(\alpha\overline{\beta})\right\}x\nonumber\\
&+\frac{s}{c|s|(1+8c^2)}\left\{cs(|\alpha|^2-|\beta|^2)-(1+2c^2)\Re(\alpha\overline{\beta})\right\}\sqrt{D(x)},\\[3mm]
D(x)=&1+8c^2-9c^2x^2,\\
W_{+}(x)=&-(1-4c^2)+3(1-2c^2)x^2+2|s|x\sqrt{D(x)},\\
W_{-}(x)=&1+8c^2-3(1+2c^2)x^2-2|s|x\sqrt{D(x)},\\[2mm]
I_A(x)=&\left\{\begin{array}{cl}
1&(x\in A)\\
0&(x\notin A)
\end{array}\right.,
\end{align}
and $\Re(z)$ denotes the real part of the complex number $z$.
\label{th:limit}
\end{thm}
The function $\nu(\alpha,\beta; x)$ is the part of the limit density function which encodes the effect of the initial condition $\alpha,\beta$ on the limit behavior. If the conditions $|\alpha|=|\beta|$ and $\Re(\alpha\overline{\beta})=0$ are satisfied simultaneously (e.g. $\alpha=1/\sqrt{2}, \beta=i/\sqrt{2}$\,), this term vanishes.
Note that $D(x), W_{+}(x), W_{-}(x) >0$ for $x\in \left(-\frac{\sqrt{1+8c^2}}{3},-\frac{1-4c^2}{3}\right) \cup \left(\frac{1-4c^2}{3},\frac{\sqrt{1+8c^2}}{3}\right)$ and $|1-4c^2|<\sqrt{1+8c^2}$.
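These positivity statements, together with the non-negativity of the density itself, are easy to probe numerically. The sketch below (our own check, simply transcribing the formulas of the theorem) samples the first indicator interval for the two values of $\theta$ used in the figures:

```python
# Numerical probe of the positivity of D, W+, W- and of the density f
# on the interval ((1-4c^2)/3, sqrt(1+8c^2)/3); endpoints are avoided
# because W+ and W- vanish there (where f diverges, arcsine-like).
import numpy as np

for theta in (np.pi / 4, 2 * np.pi / 5):
    c, s = np.cos(theta), np.sin(theta)
    lo, hi = (1 - 4 * c**2) / 3, np.sqrt(1 + 8 * c**2) / 3
    x = np.linspace(lo + 1e-6, hi - 1e-6, 10001)
    D = 1 + 8 * c**2 - 9 * c**2 * x**2
    Wp = -(1 - 4 * c**2) + 3 * (1 - 2 * c**2) * x**2 + 2 * abs(s) * x * np.sqrt(D)
    Wm = 1 + 8 * c**2 - 3 * (1 + 2 * c**2) * x**2 - 2 * abs(s) * x * np.sqrt(D)
    f = abs(s) * (abs(s) * x + np.sqrt(D))**2 / (np.pi * (1 - x**2) * np.sqrt(Wp * Wm * D))
    print(D.min() > 0, Wp.min() > 0, Wm.min() > 0, f.min() > 0)  # all True
```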
As examples, Fig.~\ref{fig:limit} shows probability distributions and the limit density functions when $\alpha=1/\sqrt{2},\, \beta=i/\sqrt{2}$.
\begin{figure}[h]
\begin{center}
\begin{minipage}{70mm}
\begin{center}
\includegraphics[scale=0.5]{fig3-a.eps}\\[2mm]
(a) $\theta=\frac{\pi}{4}$
\end{center}
\end{minipage}
\begin{minipage}{70mm}
\begin{center}
\includegraphics[scale=0.5]{fig3-b.eps}\\[2mm]
(b) $\theta=\frac{2\pi}{5}$
\end{center}
\end{minipage}
\vspace{5mm}
\fcaption{Probability distribution at time $999\,(=3\times 333)$ (blue line) and the limit density function (red line), in the case of $\alpha=1/\sqrt{2}, \beta=i/\sqrt{2}$}
\label{fig:limit}
\end{center}
\end{figure}
\vspace{5mm}
\begin{proof}{
To prove the limit theorem we use Fourier analysis in the way introduced in Grimmett et al.~\cite{GrimmettJansonScudo2004}, and derive the convergence of the $r$-th moments $\mathbb{E}\left[(X_{3t}/3t)^r\right]$ ($r=0,1,2,\ldots$), which, since the limit distribution has compact support, is equivalent to the convergence of the characteristic function $\mathbb{E}[e^{izX_{3t}/3t}]$.
First, we consider the following Fourier transform $\ket{\hat\Psi_t(k)}\, (k\in [-\pi,\pi))$ derived from the states of the walker
\begin{equation}
\ket{\hat\Psi_t(k)}=\sum_{x\in\mathbb{Z}}e^{-ikx}\ket{\psi_t(x)}.
\end{equation}
We should note that we can obtain the state $\ket{\psi_t(x)}$ by using the inverse Fourier transform
\begin{equation}
\ket{\psi_t(x)}=\int_{-\pi}^\pi e^{ikx}\ket{\hat\Psi_t(k)}\frac{dk}{2\pi}.
\end{equation}
Equation~(\ref{eq:time-evolution}) produces a time evolution of the Fourier transform
\begin{align}
\ket{\hat\Psi_{3t}(k)}=&\left(\hat S(k)\hat C(k)^2\right)^t\ket{\hat\Psi_0(k)},\nonumber\\
\ket{\hat\Psi_{3t+1}(k)}=&\hat C(k)\left(\hat S(k)\hat C(k)^2\right)^t\ket{\hat\Psi_0(k)},\\
\ket{\hat\Psi_{3t+2}(k)}=&\hat C(k)^2\left(\hat S(k)\hat C(k)^2\right)^t\ket{\hat\Psi_0(k)},\nonumber
\end{align}
where $\hat S(k)=e^{ik}\ket{0}\bra{0}+e^{-ik}\ket{1}\bra{1}$ and $\hat C(k)=\hat S(k)C$.
The operator $\hat S(k)$ corresponds to the position-shift operator $\tilde{S}$.
Before computing the $r$-th moments $\mathbb{E}\left[X_{3t}^r\right]$, we compute the eigenvalues and the normalized eigenvectors of the unitary matrix $\hat S(k)\hat C(k)^2$, so that we can rewrite the Fourier transform $\ket{\hat\Psi_{3t}(k)}$ on the appropriate eigenspace.
Let us take the standard basis as the orthonormal basis $\left\{\ket{0},\ket{1}\right\}$, with
\begin{equation}
\ket{0}=\left[\begin{array}{c}
1\\0
\end{array}\right],\quad
\ket{1}=\left[\begin{array}{c}
0\\1
\end{array}\right].
\end{equation}
Then the matrix $\hat S(k)\hat C(k)^2$ has two eigenvalues
\begin{equation}
\lambda_j(k)=c^2\cos 3k+s^2\cos k -(-1)^j i\sqrt{1-(c^2\cos 3k+s^2\cos k)^2} \quad (j=1,2),
\end{equation}
and they are distinct as long as $k\neq -\pi,0$.
Again, we should note that $1-(c^2\cos 3k+s^2\cos k)^2$ is non-negative, and it vanishes if and only if $k= -\pi,0$.
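The eigenvalue formula can be confirmed numerically at a generic wave number. In the sketch below (our own check) the coin matrix is taken to be $C=[[c,s],[s,-c]]$ — the coin is specified earlier in the paper, so this explicit form is an assumption here — and the spectrum of $\hat S(k)\hat C(k)^2$ is compared with the closed form above:

```python
# Spot check of the eigenvalues of S(k) C(k)^2 at a generic (theta, k).
import numpy as np

theta, k = 2 * np.pi / 5, 0.7                     # generic sample point
c, s = np.cos(theta), np.sin(theta)
C = np.array([[c, s], [s, -c]])                   # assumed coin form
S = np.diag([np.exp(1j * k), np.exp(-1j * k)])    # shift in Fourier space
U = S @ (S @ C) @ (S @ C)                         # S(k) C(k)^2, with C(k) = S(k) C

a = c**2 * np.cos(3 * k) + s**2 * np.cos(k)
predicted = [a - 1j * np.sqrt(1 - a**2), a + 1j * np.sqrt(1 - a**2)]
numeric = sorted(np.linalg.eigvals(U), key=lambda z: z.imag)
print(np.allclose(predicted, numeric))            # True
```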
As one of the possible expressions of the normalized eigenvector corresponding to each eigenvalue $\lambda_j(k)$, we have
\begin{equation}
\ket{v_j(k)}=\frac{1}{\sqrt{N_j(k)}}\left[\begin{array}{c}
-2cs\,e^{2ik}\sin k\\[2mm]
c^2\sin 3k+s^2\sin k+(-1)^j\sqrt{1-(c^2\cos 3k+s^2\cos k)^2}
\end{array}\right],
\end{equation}
where $N_{j}(k)$ are normalization factors given by
\begin{align}
N_{j}(k)=&2\biggl\{1-(c^2\cos 3k+s^2\cos k)^2 \nonumber\\
&+(-1)^j(c^2\sin 3k+s^2\sin k)\sqrt{1-(c^2\cos 3k+s^2\cos k)^2}\biggr\}.
\end{align}
Here, we treat the $r$-th moments at time $3t$ and express them in the Fourier space by using the eigenvalues $\lambda_j(k)$ and the eigenvectors $\ket{v_j(k)}$.
With the decomposition $\ket{\hat\Psi_{3t}(k)}=\sum_{j=1}^2\lambda_j^t(k)\braket{v_j(k)|\hat\Psi_0(k)}\ket{v_j(k)}$, we get
\begin{align}
\mathbb{E}(X_{3t}^r)=&\sum_{x\in\mathbb{Z}}x^r\mathbb{P}(X_{3t}=x)\nonumber\\
=&\int_{-\pi}^\pi \bra{\hat\Psi_{3t}(k)}\left(D^r\ket{\hat\Psi_{3t}(k)}\right)\frac{dk}{2\pi}\nonumber\\
=&(t)_r\int_{-\pi}^\pi \sum_{j=1}^2 \left(\frac{i\lambda'_j(k)}{\lambda_j(k)}\right)^r\left|\braket{v_j(k)|\hat\Psi_0(k)}\right|^2\frac{dk}{2\pi}+O(t^{r-1}),
\label{eq:r-th_moment}
\end{align}
where $D=i(d/dk)$ and $(t)_r=t(t-1)\times\cdots\times(t-r+1)$.
Equation~(\ref{eq:r-th_moment}) gives us a convergence as $t\to\infty$,
\begin{equation}
\lim_{t\to\infty}\mathbb{E}\left[\left(\frac{X_{3t}}{3t}\right)^r\right]=\int_{-\pi}^\pi \sum_{j=1}^2 \left(\frac{i\lambda'_j(k)}{3\lambda_j(k)}\right)^r\left|\braket{v_j(k)|\hat\Psi_0(k)}\right|^2\frac{dk}{2\pi},
\label{eq:r-th_moment2}
\end{equation}
where
\begin{equation}
\frac{i\lambda'_j(k)}{3\lambda_j(k)}=(-1)^j \frac{3c^2\sin 3k+s^2\sin k}{3\sqrt{1-(c^2\cos 3k+s^2\cos k)^2}}.
\end{equation}
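This closed form can be checked against a finite-difference derivative of the eigenvalue. The sketch below (our own check, assuming the coin form $C=[[c,s],[s,-c]]$, which is fixed earlier in the paper) tracks the branch $\lambda_1(k)$ with positive imaginary part:

```python
# Finite-difference check of i lambda_1'(k) / (3 lambda_1(k)) against the
# closed form for j = 1, i.e. -(3c^2 sin 3k + s^2 sin k) / (3 sqrt(1 - a^2)).
import numpy as np

theta, k, h = np.pi / 4, 0.9, 1e-6
c, s = np.cos(theta), np.sin(theta)
C = np.array([[c, s], [s, -c]])                   # assumed coin form

def lam1(k):
    """Eigenvalue of S(k) C(k)^2 with positive imaginary part: lambda_1(k)."""
    S = np.diag([np.exp(1j * k), np.exp(-1j * k)])
    return max(np.linalg.eigvals(S @ (S @ C) @ (S @ C)), key=lambda z: z.imag)

a = c**2 * np.cos(3 * k) + s**2 * np.cos(k)
closed = -(3 * c**2 * np.sin(3 * k) + s**2 * np.sin(k)) / (3 * np.sqrt(1 - a**2))
numeric = 1j * (lam1(k + h) - lam1(k - h)) / (2 * h) / (3 * lam1(k))
print(abs(numeric - closed))                      # close to zero
```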
Substituting $x=i\lambda'_j(k)/3\lambda_j(k)$ in Eq.~(\ref{eq:r-th_moment2}) takes us to our goal because we have
\begin{align}
\lim_{t\to\infty}\mathbb{E}\left[\left(\frac{X_{3t}}{3t}\right)^r\right]=&\int_{-\infty}^\infty x^r \Biggl[\left\{1-\nu(\alpha,\beta; x)\right\}f(x)I_{\left(\frac{1-4c^2}{3},\frac{\sqrt{1+8c^2}}{3}\right)}(x) \nonumber\\
&+\left\{1+\nu(\alpha,\beta; -x)\right\}f(-x)I_{\left(-\frac{\sqrt{1+8c^2}}{3},-\frac{1-4c^2}{3}\right)}(x)\Biggr]\,dx,
\end{align}
which means that the random variable $X_{3t}/3t$ converges in distribution to a random variable with a density function
\begin{equation}
\left\{1-\nu(\alpha,\beta; x)\right\}f(x)I_{\left(\frac{1-4c^2}{3},\frac{\sqrt{1+8c^2}}{3}\right)}(x)+\left\{1+\nu(\alpha,\beta; -x)\right\}f(-x)I_{\left(-\frac{\sqrt{1+8c^2}}{3},-\frac{1-4c^2}{3}\right)}(x).
\end{equation}
To obtain the cumulative distribution function on the left-hand side of Eq.~(\ref{eq:cumulative}), we need to integrate this density.
}
\end{proof}
\section{Summary and Discussion}
\label{sec:summary}
We have dealt with a 3-period time-dependent discrete-time 2-state QW on the line with the walker located at the origin at the initial time, and have given a limit theorem which describes the asymptotic behavior of the walker after a large number of steps.
On a space rescaled by time, the position of the walker converges in distribution to a random variable whose density function has compact support.
Its shape resembles that of a doubled arcsine distribution.
When we choose the parameter $\theta$, which determines the coin-flip operator $C$, in the open interval $(\pi/3, 2\pi/3) \cup (4\pi/3, 5\pi/3)$, we do not observe the walker at the starting point after a long time as shown in Fig.~\ref{fig:limit}-(b) because the compact support is the open interval $\left(-\frac{\sqrt{1+8\cos^2\theta}}{3},-\frac{1-4\cos^2\theta}{3}\right)\cup\left(\frac{1-4\cos^2\theta}{3},\frac{\sqrt{1+8\cos^2\theta}}{3}\right)$.
For a time-independent walk or a 2-period time-dependent walk starting from the origin, the initial condition at the origin produces a linear function in their limit density functions~\cite{Konno2002a,MachidaKonno2010}.
On the other hand, the 3-period time-dependent walk treated in this paper features a non-linear term reflecting the initial condition at the origin, which is expressed in the limit theorem by $\nu(\alpha,\beta; x)$ and the function $\sqrt{D(x)}=\sqrt{1+8c^2-9c^2x^2}$.
We showed that the limit distribution of the 3-period time-dependent walk is essentially different from that of the time-independent walk or the 2-period time-dependent walk.
We have treated a 3-period time-dependent walk whose coin-state is flipped by only one coin-flip operator $C$ at time
$t=0,1 \mod 3$, and is shifted without any coin-flip operation at time $t=2 \mod 3$.
Numerically, we can also see very interesting behavior for a 3-period time-dependent walk with three distinct coin-flip operators, as displayed in the appendix. We intend to analyze these results carefully in a future publication.
We have described a mathematical property of a 3-period time-dependent walk.
It would be worth discussing this phenomenon from the perspective of physics, for example it would be nice to explore a possible application to the design of selective pulses in~\cite{MorrisMcIntyreRourkeNgo1989}.
\nonumsection{Acknowledgements}
\noindent T. Machida is grateful to the Japan Society for the Promotion of Science for the support, and to the Math. Dept. UC Berkeley for hospitality.
F.A. Gr\"{u}nbaum acknowledges support from the Applied Math. Sciences subprogram of the Office of Energy Research, US Department of Energy, under
Contract DE-AC03-76SF00098, and from AFOSR grant FA95501210087 through a subcontract to Carnegie Mellon University.
\section{Introduction}
Non-commutative geometries are widely considered as plausible candidates for describing physics at the Planck scale \cite{noncom} and have natural connections with string theory \cite{SW}. Moreover some of these models can be related to the intuitions of doubly special relativity (DSR) \cite{AMS} where another invariant scale (apart from the speed of light) is introduced ab initio in the theory. Interest in DSR is also increased because such a framework can be regarded as a semi-classical limit of quantum gravity (see \cite{RovSmo} and references therein).
In this paper the Snyder proposal \cite{Sny} of a non-commutative space-time is analyzed from a physical point of view. This model can be understood by means of the projective geometry approach to the de Sitter space of momenta with two universal constants and is relevant since it can be related to some of DSR models \cite{Kov}. Furthermore, it has some motivations from loop quantum gravity \cite{LO} and two-time physics \cite{tt}.
The starting point of our analysis is the requirement that the only deformed commutator in the Euclidean Snyder framework is the one between the coordinates. This way, the translation group is not deformed and the rotational symmetry is preserved. We then show that infinitely many commutators between the non-commutative coordinates and momenta are possible, such that in all cases the algebra closes. This way, infinitely many different physical predictions of the Snyder space are allowed. These are summarized in the deformed symplectic geometry and in the generalized uncertainty principle at classical and quantum level, respectively. The physically interesting framework of a deformed quantum cosmology is also analyzed. Here we deal with a one-dimensional system and our picture is almost uniquely fixed. We show that this framework naturally leads to the non-singular (bouncing) Friedmann dynamics obtained in recent studies of loop quantum cosmology (LQC) \cite{bloop}.
The paper is organized as follows. In Section II the algebraic structure of the Euclidean Snyder space is analyzed. Section III is devoted to discussing the physical implications of this framework. Concluding remarks follow. Throughout the paper we adopt units such that $\hbar=c=1$.
\section{Realizations of Snyder space}
The algebraic structure of the non-commutative Snyder space is analyzed in this Section. All possible realizations of this space, the general form of the uncertainty principle and the required hermiticity conditions are shown. The known algebras are then recovered as particular cases of our construction.
{\it Realizations.} Let us start by considering an $n$-dimensional non-commutative (deformed) Euclidean space such that the commutator between the coordinates has the non-trivial structure ($\{i,j,...\}\in\{1,...,n\}$)
\be\label{snyalg}
[\tilde x_i,\tilde x_j]=\kappa M_{ij}\,,
\ee
where with $\tilde x_i$ we refer to the non-commutative coordinates and $\kappa\in\mathbb R$ is the deformation parameter with dimension of a squared length. We then demand that the rotation generators $M_{ij}=-M_{ji}=i(x_ip_j-x_jp_i)$ satisfy the ordinary $SO(n)$ algebra
\be
[M_{ij},M_{kl}]=\delta_{jk}M_{il}-\delta_{ik}M_{jl}-\delta_{jl}M_{ik}+\delta_{il}M_{jk}
\ee
and that the translation group is not deformed, i.e. $[p_i,p_j]=0$. In order to preserve the rotational symmetry the commutators between $M_{ij}$ and the coordinates $\tilde x_i$, as well as between $M_{ij}$ and $p_k$, have to be undeformed. Therefore, we assume that the relations
\bea\label{commx}
[M_{ij},\tilde x_k]&=&\tilde x_i\delta_{jk}-\tilde x_j\delta_{ik}, \\\nonumber
[M_{ij},p_k]&=&p_i\delta_{jk}-p_j\delta_{ik}
\eea
hold. This way we deal with the (Euclidean) Snyder space \cite{Sny}. The above relations, however, do not uniquely fix the commutators between $\tilde x_i$ and $p_j$. In particular, there are infinitely many such commutators which are all compatible (in the sense that the algebra closes in virtue of the Jacobi identities) with the above natural requirements.
This feature can be understood by analyzing the realizations \cite{Mel,Luk,Gosh} of such a non-commutative space. The concept of realization was developed in a series of papers \cite{Mel} (for a similar approach in the $\kappa$-deformed space-time see \cite{Luk}; a related analysis in the context of DSR can be found in \cite{Gosh}). A realization of the Snyder algebra (\ref{snyalg}) is defined as a rescaling of the non-commutative coordinates $\tilde x_i$ in terms of the ordinary phase space variables ($x_i,p_j$). The most general $SO(n)$ covariant realization for $\tilde x_i$ is given by
\be\label{real}
\tilde x_i=x_i\varphi_1(\mu,\nu)+\kappa(x_jp_j)p_i\varphi_2(\mu,\nu),
\ee
where the convention $a_ib_i=\sum_i a_ib_i$ is adopted and $\varphi_1$ and $\varphi_2$ are two arbitrary finite functions depending on the dimensionless quantities $\mu=\kappa p^2$ and $\nu=\kappa m^2$. In particular, the second quantity accounts for a mass-like term $m^2$ which can be positive, negative or zero. In order to recover the ordinary Heisenberg algebra, suitable boundary conditions on these functions have to be imposed. We have to demand that, in the $\kappa\rightarrow0$ ($\mu,\nu\rightarrow0$) limit, $\varphi_1(0,0)=1$.
The realization above is, of course, not completely arbitrary since it depends on the adopted algebraic structure. In particular, the two functions $\varphi_1$ and $\varphi_2$ are constrained by the relations (\ref{snyalg}) and (\ref{commx}). Inserting the formula (\ref{real}) into the non-commutative coordinate commutator (\ref{snyalg}), the first restriction we obtain reads
\be\label{con1}
2\left(\varphi_1'\varphi_1+\mu\varphi_1'\varphi_2\right)-\varphi_1\varphi_2+1=0,
\ee
where $\varphi_1'=\partial\varphi_1/\partial\mu$. The other condition on $\varphi_1$ and $\varphi_2$ arises after considering the realization (\ref{real}) into the commutator $[M_{ij},\tilde x_k]$ in (\ref{commx}). Such second constraint can be written as
\be\label{con2}
\left(x_l[M_{ij},p_l]p_k+[M_{ij},x_l]p_lp_k\right)\varphi_2=0
\ee
and it is immediate to verify that the argument in the brackets identically vanishes. Therefore, only one condition on $\varphi_1$ and $\varphi_2$ appears. As a matter of fact, given any function $\varphi_1(\mu,\nu)$ satisfying the boundary condition $\varphi_1(0,0)=1$, the function $\varphi_2(\mu,\nu)$ is uniquely determined by the equation (\ref{con1}) and reads $\varphi_2=(1+2\varphi_1'\varphi_1)/(\varphi_1-2\mu\varphi_1')$. In other words, there are infinitely many ways to express, via $\varphi_1$, the non-commutative coordinates (\ref{snyalg}) in terms of the ordinary ones without deforming either the rotation or the translation group.
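This determination of $\varphi_2$ can be verified symbolically. The short sketch below (symbol names are ours) substitutes the quoted expression for $\varphi_2$ into the constraint and checks that the residual vanishes identically, with the Snyder choice $\varphi_1=\varphi_2=1$ as a spot check:

```python
# Symbolic check of the constraint 2(phi1' phi1 + mu phi1' phi2) - phi1 phi2 + 1 = 0.
import sympy as sp

mu, nu = sp.symbols('mu nu')
phi1 = sp.Function('phi1')(mu, nu)
d1 = sp.diff(phi1, mu)
phi2 = (1 + 2 * d1 * phi1) / (phi1 - 2 * mu * d1)   # quoted general solution

def residual(p1, p2):
    """Left-hand side of the constraint, simplified."""
    d = sp.diff(p1, mu)
    return sp.simplify(2 * (d * p1 + mu * d * p2) - p1 * p2 + 1)

print(residual(phi1, phi2))                    # 0: the general solution works
print(residual(sp.Integer(1), sp.Integer(1)))  # 0: the Snyder choice phi1 = phi2 = 1
```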
It is worth noting that: (i) The realizations (\ref{real}) make sense if there exists the inverse transformation $x_i=\tilde x_j(\varphi^{-1})_{ji}$, and the necessary and sufficient condition is $\det|\delta_{ij}\varphi_1+\kappa p_ip_j\varphi_2|>0$. If we deal with an $n\geq2$ dimensional space, such a condition reads $\varphi^{n-1}_1(\varphi_1+\mu\varphi_2)>0$, i.e. $\varphi_1>0$ and $\varphi_1+\mu\varphi_2>0$. (ii) Our analysis can be straightforwardly generalized to a Snyder Minkowskian space-time. In this case, all the relations above hold as soon as the following replacements are taken into account ($\{\alpha,\beta\}\in\{0,...,n\}$): the $SO(n)$ generators are substituted by the Lorentz generators $L_{\alpha\beta}$, and $(\tilde x_\alpha,p_\alpha)$ now transform as four-vectors under the Lorentz algebra, whose indices are raised and lowered by the Minkowski metric $\eta_{\alpha\beta}$, i.e. $p^2=\eta^{\alpha\beta}p_\alpha p_\beta$ is Lorentz invariant.
{\it Uncertainty relations.} In order to complete the analysis of the deformed algebra we need to analyze the $(\tilde x-p)$ commutation relation. This way, the general form of the uncertainty principle, and thus the physical consequences of the model, can be discussed. The commutator between $\tilde x_i$ and $p_j$ arises from the realization (\ref{real}) and reads
\be\label{xpcom}
[\tilde x_i,p_j]=i\left(\delta_{ij}\varphi_1+\kappa p_ip_j\varphi_2\right).
\ee
Of course, the ordinary one is recovered in the $\kappa\rightarrow0$ limit. From the commutator above we can immediately obtain the generalized uncertainty principle underlying the Snyder non-commutative space, i.e.
\be\label{uncrel}
\Delta\tilde x_i\Delta p_j\geq\f12|\delta_{ij}\langle\varphi_1\rangle+\kappa\langle p_ip_j\varphi_2\rangle|.
\ee
Three remarks are in order. (i) The algebra we obtain can be regarded as a deformed Heisenberg algebra. More precisely, the deformation of the only commutator between the spatial coordinates as in (\ref{snyalg}) leads to infinitely many realizations of the algebra, and thus generalized uncertainty relations (\ref{uncrel}), all consistent with the assumptions underlying the model. (ii) Unless $\varphi_2=0$ no compatible observables exist. These are coupled with each other and an exactly simultaneously measurable pair $(\tilde x_i,p_j)$ is no longer allowed. A measure of the $i$-component of the (non-commutative) position will always affect a measure of the $j(\neq i)$-component of the momentum by an uncertainty $\Delta p_j\gtrsim|\kappa\langle p_ip_j\varphi_2\rangle|/\Delta\tilde x_i$. (iii) For any fixed $\varphi_1$ the non-commutative framework is unique, but we can realize the commutator (\ref{xpcom}) in terms of any commutative coordinates $x_i'$ and corresponding canonical momenta $p_i'$ satisfying $[x_i',p_j']=i\delta_{ij}$. Of course all these descriptions lead to the same physical consequences.
{\it Hermiticity conditions.} The non-commutative coordinates $\tilde x_i$ have to be hermitian operators in any given realization. All the commutators given above are invariant under the formal anti-linear involution ``$\dag$''
\be
\tilde x_i^\dag=\tilde x_i, \quad p_i^\dag=p_i, \quad M_{ij}^\dag=-M_{ij}\,,
\ee
where the order of elements is inverted under the involution. However, the realization (\ref{real}) in general is not hermitian. The hermiticity condition can be immediately implemented as soon as the expression
\be
\tilde x_i=\f12\left(x_i\varphi_1+\kappa(x_jp_j)p_i\varphi_2+\varphi_1^\dag x_i^\dag+\kappa\varphi_2^\dag p_i^\dag(x_jp_j)^\dag\right)
\ee
is taken into account. However, the physical results do not depend on the choice of the representation as long as there exists a smooth limit $\tilde x_i\rightarrow x_i$ as $\kappa\rightarrow0$. Therefore, we can restrict our attention to non-hermitian realizations only.
{\it Recovering the known realizations.} The non-com\-mu\-ta\-ti\-ve Snyder space has been analyzed in the literature from different points of view \cite{Kov,LO,tt} (see also \cite{GB}), but only two particular realizations of its algebra are known: the Snyder \cite{Sny} and the Maggiore \cite{Mag} ones. The original realization of Snyder, in particular, expressed through the commutator between $\tilde x$ and $p$, reads
\be\label{xpsny}
[\tilde x_i,p_j]=i\left(\delta_{ij}+\kappa p_ip_j\right).
\ee
It is not difficult to see that this is a particular case of our realization (\ref{real}) as soon as $\varphi_1=1$. From this condition, the function $\varphi_2$ is fixed by (\ref{con1}) as $\varphi_2=1$ and the above commutation relation is recovered. The condition on the inverse mapping implies that $p^2>-1/\kappa$. On the other hand, the Maggiore algebra
\be\label{magalg}
[\tilde x_i,p_j]=i\delta_{ij}\sqrt{1-\kappa(p^2+m^2)},
\ee
can be regarded as the particular case of (\ref{real}) when the condition $\varphi_2=0$ is taken into account. But this requirement alone is not enough. In fact, from the constraint (\ref{con1}), the function $\varphi_1$ is not uniquely fixed but reads $\varphi_1=\sqrt{1-\mu+f(\nu)}$, where $f(\nu)$ is a generic function of $\nu$ (the inverse mapping condition entails $p^2<(1+f)/\kappa$). Only in the specific case $f(\nu)=-\nu$ is the commutator (\ref{magalg}) recovered. Finally, we note that the deformed algebra proposed by Kempf et al. in \cite{Kem} can be regarded as a particular case of (\ref{magalg}) for $|\mu|\ll1$ and $m=0$, i.e. $[\tilde x_i,p_j]=i\delta_{ij}(1+\beta p^2)$ where $\beta=-\kappa/2$ with $\kappa<0$. In the one-dimensional framework (see below), this algebra is the same as the Snyder one (\ref{xpsny}).
\section{Physical implications}
As is clear from the above, the physical consequences of a non-commutative space geometry are deep, and completely new scenarios open up at both the classical and quantum levels. Two physically relevant frameworks are analyzed in this Section: a generic mechanical system and the so-called quantum cosmological arena.
{\it Mechanical system.} Let us start by considering a mechanical system, i.e. a model with a finite number of degrees of freedom described by a Hamiltonian $H=H(\tilde x,p)$. At classical level the deformations induced on the system appear as soon as the (classical) limit $-i[\cdot,\cdot]\rightarrow\{\cdot,\cdot\}$ is taken into account. In doing this the deformation parameter $\kappa$ is regarded as an independent constant with respect to $\hbar$. Modifications of a non-commutative framework on the classical dynamics are then summarized in the deformed Poisson brackets
\be
\{F,G\}=\left(\frac{\partial F}{\partial\tilde x_i}\frac{\partial G}{\partial p_j}-\frac{\partial F}{\partial p_i}\frac{\partial G}{\partial\tilde x_j}\right)\{\tilde x_i,p_j\}+\frac{\partial F}{\partial\tilde x_i}\frac{\partial G}{\partial\tilde x_j}\{\tilde x_i,\tilde x_j\}
\ee
between any phase space functions. This symplectic structure is not fixed but depends on the two functions $\varphi_1$ and $\varphi_2$ constrained by (\ref{con1}) and $\varphi_1(0,0)=1$. From the relation above, the equations of motion of a mechanical system are deformed as
\bea\label{eqmod}
\dot{\tilde x}_i&=&\{\tilde x_i,H\}=\frac{\partial H}{\partial p_j}\left(\delta_{ij}\varphi_1+\kappa p_ip_j\varphi_2\right)+\frac\kappa i\frac{\partial H}{\partial\tilde x_j}M_{ij}, \nonumber\\
\dot p_i&=&\{p_i,H\}=-\frac{\partial H}{\partial\tilde x_j}\left(\delta_{ij}\varphi_1+\kappa p_ip_j\varphi_2\right).
\eea
When the deformation parameter vanishes ($\kappa\rightarrow0$) the ordinary Hamilton equations are recovered. At quantum level our picture implies both modifications of the Ehrenfest theorem through (\ref{eqmod}) and deformations of the canonical quantization prescription via the commutator (\ref{xpcom}). As we said, this commutator is not fixed at all by the assumptions described above, and for different choices of the realization (\ref{real}) of the non-commutative coordinates the corresponding Hilbert spaces are unitarily inequivalent. Each quantum framework (Hilbert space) corresponds to a specific choice of the realization (\ref{real}). We also stress that, given an eigenvalue problem $\hat H(\tilde x,p)\psi(x)=E\psi(x)$, the wave function $\psi$ and the spectrum $E$ depend on $\varphi_1$.
This is not surprising since the deformation of the canonical commutation relations can be viewed, from the realization (\ref{real}), as an algebra homomorphism which is a non-canonical transformation. In particular, it cannot be implemented at quantum level as a unitary transformation. From this point of view, the set of predictions of any deformed Heisenberg algebra cannot be matched by the set of predictions arising from another one, in particular not by the set of predictions of the ordinary framework (the $\kappa\rightarrow0$ limit). New features are then introduced, for any fixed $\varphi_1$, at both the classical and quantum level. This way, a Snyder structure (\ref{snyalg}) in which the translation and rotation groups are undeformed leads to infinitely many different physical predictions through (\ref{real}).
A notable problem to be considered is the harmonic oscillator with non-commutative quadratic potential, i.e. $H=p^2/2m+m\omega^2\tilde x^2/2$. In the one-dimensional case the symmetry group is trivial ($SO(1)=\text{Id}$) and the most general realization is given by $\tilde x=x\sqrt{1-\mu+f(\nu)}$. Considering the representation of this algebra (we take $f=0$) in the momentum space, the deformed stationary Schr\"odinger equation for this model is given by the so-called Mathieu equation and the energy spectrum appears to be modified as $E_n=\omega(2n+1)/2-\omega\kappa(2n^2+2n+1)/8d^2+\mathcal O(\kappa^2/d^4)$ where $d=1/\sqrt{m\omega}$ is the characteristic length scale (for more details see \cite{Bat}). We note that, if $\kappa>0$ this is the spectrum obtained in polymer (loop) quantum mechanics \cite{pol}, while if $\kappa<0$ this result resembles the one recovered in \cite{DJM03}.
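A simple classical counterpart of this level structure can be checked numerically. The toy integration below is entirely our own construction: using the one-dimensional realization $\tilde x = x\sqrt{1-\kappa p^2}$ (with $f=0$), the deformed brackets give $d\tilde x/dt=(p/m)\sqrt{1-\kappa p^2}$ and $dp/dt=-m\omega^2\tilde x\sqrt{1-\kappa p^2}$, and a first-order estimate (ours) stretches the oscillation period to $T\approx(2\pi/\omega)(1+\kappa m E/2)$, consistent with the level spacing of the quoted spectrum.

```python
# Toy Euler integration of the deformed classical oscillator (our own check).
import numpy as np

m, w, kappa = 1.0, 1.0, 0.01          # toy parameters, kappa > 0
xt, p, dt = 1.0, 0.0, 2e-5            # start at a turning point
E = 0.5 * m * w**2 * xt**2            # conserved deformed energy
t, times = 0.0, []
while len(times) < 2:                 # two successive downward zero crossings
    g = np.sqrt(1.0 - kappa * p**2)
    xt_new = xt + (p / m) * g * dt    # deformed Hamilton equations
    p -= m * w**2 * xt * g * dt
    if xt > 0.0 >= xt_new:
        times.append(t)
    xt, t = xt_new, t + dt
period = times[1] - times[0]
print(period, 2 * np.pi / w * (1 + kappa * m * E / 2))  # nearly equal
```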
{\it Quantum cosmology.} An interesting quantum mechanical arena to test such a framework is the so-called minisuperspace reduction of the dynamics, i.e. quantum cosmology. As is well known \cite{Wald}, by imposing symmetries on the spatial Cauchy surfaces which fill the space-time manifold, a considerable simplification of the gravitational theory occurs. In particular, by requiring spatial homogeneity the phase space of general relativity reduces to six dimensions. The system is described by three degrees of freedom, i.e. the three scale factors of the Bianchi models. Furthermore, by imposing also spatial isotropy, we deal with one-dimensional mechanical systems. These are the Friedmann-Robertson-Walker (FRW) models which describe the observed Universe and on which the standard model of cosmology is based.
In order to discuss the implications of the Snyder algebra on the FRW Universes we consider the one-dimensional case of the scheme analyzed above. If we assume the minisuperspace to be Snyder-deformed, then the isotropic scale factor $a$ (namely the radius of the Universe) and its conjugate momentum $p$ satisfy the commutation relation $[a,p]=i\sqrt{1-\mu+f(\nu)}$. It is worth stressing that, when $\kappa>0$ (taking $f=0$), a natural cut-off on the momentum arises, i.e. $|p|<\sqrt{1/\kappa}$, while for $\kappa<0$ the uncertainty relation (\ref{uncrel}) predicts a minimal observable length $\Delta{\tilde x}_\text{min}=\sqrt{-\kappa}$. Moreover, at first order in $\kappa$, the string theory result \cite{String} $\Delta{\tilde x}\gtrsim(1/\Delta p+l_s^2\Delta p)$, in which the string length $l_s$ can be identified with $\sqrt{-\kappa/2}$, is recovered.
Following \cite{Bat}, it is possible to show that the effective Friedmann equation of the Snyder-deformed flat FRW cosmological model becomes
\be\label{modfri}
\left(\frac{\dot a}a\right)^2=\frac{8\pi G}3\rho\left(1-\frac\rho{\rho_c}+f(\nu)\right),
\ee
where $G$ is the gravitational constant, $\rho=\rho(a)$ denotes a generic matter energy density and $\rho_c=(2\pi G/3\kappa)\rho_P$ is the critical energy density ($\rho_P$ being the Planck one). When the limit $\kappa\rightarrow0$ is taken into account, the critical energy density diverges (the function $f(\nu)$ disappears), leading to the ordinary dynamics. It is worth noting that, if $f(\nu)=0$ and $\kappa>0$, equation (\ref{modfri}) exactly resembles the effective bouncing Friedmann equation of LQC \cite{bloop}. Such a dynamics is singularity-free since, when $\rho$ reaches the critical energy density, $\dot a$ vanishes and the Universe experiences a (big-)bounce instead of the classical big bang. On the other hand, if $f(\nu)=0$ and $\kappa<0$, the effective braneworld dynamics is recovered \cite{Roy}.
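The bounce implied by this modified Friedmann equation can be seen in a toy numerical integration (units, matter content and parameter values are our own choices, not the paper's). We take $f(\nu)=0$, $\kappa>0$ and a radiation-like density $\rho=\rho_c(a_b/a)^4$, so that the right-hand side vanishes exactly when $\rho$ reaches $\rho_c$, at $a=a_b$:

```python
# Toy contracting solution of the modified Friedmann equation (our own sketch):
# da/dt = -a * sqrt(H^2), with H^2 = (8 pi G / 3) rho (1 - rho / rho_c).
import numpy as np

G, rho_c, a_b = 1.0, 1.0, 1.0
H2 = lambda a: (8 * np.pi * G / 3) * rho_c * (a_b / a)**4 * (1 - (a_b / a)**4)

a, dt = 2.0, 1e-4                      # contracting branch, starting at a = 2
for _ in range(200000):
    a -= a * np.sqrt(max(H2(a), 0.0)) * dt
print(a)                               # the contraction stalls at a ~ a_b = 1
```

The Hubble rate goes to zero as $\rho\to\rho_c$, so the scale factor never reaches $a=0$: the classical singularity is replaced by the bounce.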
Summarizing, the non-commutative Snyder minisuperspace framework can clarify similarities and differences between different quantum gravity theories. Other comparisons between deformed and loop-polymer quantum cosmology, in view of discussing the fate of the cosmological singularity at quantum level, were performed considering the flat FRW model filled with a massless scalar field \cite{BM07a} and the Taub cosmological model \cite{BM07b}. Such investigations are of interest both in clarifying the role of loop quantization techniques in cosmology and in establishing a phenomenological contact with frameworks relevant in a flat space-time limit of quantum gravity.
\section{Concluding remarks}
In this paper we have shown that there are infinitely many realizations of the Snyder algebra, equations (\ref{snyalg}-\ref{commx}), implying different commutation relations between the non-commutative coordinates $\tilde x$ and momenta $p$, i.e. we deal with deformed Heisenberg algebras. These depend on an arbitrary function $\varphi_1(\mu,\nu)$ satisfying $\varphi_1(0,0)=1$, which ensures the correct commutative limit. Therefore, different non-commutative spaces, described by distinct commutation relations (\ref{xpcom}), imply different (unitarily inequivalent) physical consequences. On the other hand, in the one-dimensional case the commutator between $\tilde x$ and $p$ is fixed (up to a function of the mass-like term) and, when implemented in the minisuperspace dynamics, the loop as well as the braneworld cosmological evolutions are recovered.
{\it Acknowledgments.} We thank Daniel Meljanac for comments. M.V.B. thanks S.M. for the warm hospitality in Zagreb during which this paper was carried out. This work was supported in part by Ministry of Science and Technology of the Republic of Croatia under contract No. 098-0000000-2865.
\section{Introduction}\label{sec: intro}
Over the past several years, the standard cosmological model, $\Lambda$CDM, has come under increased scrutiny as measurements of the late-time expansion history of the Universe~\cite{Scolnic:2017caz}, the cosmic microwave background (CMB)~\cite{Planck:2018vyg}, and large-scale structure (LSS) -- such as the clustering of galaxies~\cite{Alam:2016hwk,Abbott:2017wau,Hildebrandt:2018yau,eBOSS_cosmo} -- have improved. Some of these observations have hinted at possible tensions within $\Lambda$CDM{}, related to the Hubble constant ${H_0 = 100 h}$~km/s/Mpc~\cite{Verde:2019ivm} and the parameter combination $S_8 \equiv \sigma_8(\Omega_{\rm m}/0.3)^{0.5}$~\cite{Joudaki:2019pmv} (where $\Omega_{\rm m}$ is the present-day matter density parameter and $\sigma_8$ is the root mean square of linear matter fluctuations in spheres of radius 8~Mpc/$h$ today), reaching the $4-5\sigma$ \cite{Riess:2020fzl,Soltis:2020gpl,Pesce:2020xfe, Blakeslee:2021rqi, Riess:2021jrx} and $2-3\sigma$ level \cite{Joudaki:2019pmv,Heymans:2020gsg,DES:2021wwk}, respectively. While both of these discrepancies may be the result of systematic uncertainties, and not all measurements lead to the same level of tension \cite{Freedman:2019jwv,Freedman:2020dne} (see also Refs. \cite{Freedman:2021ahq,Anand:2021sum}), numerous models have been suggested as a potential resolution (see e.g. Refs.~\cite{DiValentino:2021izs,Schoneberg:2021qvd} for recent reviews), though none is able to resolve both tensions simultaneously \cite{Jedamzik:2020zmd,Schoneberg:2021qvd}.
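As an illustration of how such tension levels are computed, the following is a minimal sketch of the standard two-measurement Gaussian metric (the function name and the illustrative inputs -- the \textit{Planck} 2018 $\Lambda$CDM{} value of $H_0$ and the S$H_0$ES{} measurement -- are our choices, not part of the analysis pipeline):

```python
from math import sqrt

def gaussian_tension(x1, sigma1, x2, sigma2):
    """Separation of two independent Gaussian measurements, in units of sigma."""
    return abs(x1 - x2) / sqrt(sigma1**2 + sigma2**2)

# Planck 2018 LCDM (67.4 +/- 0.5) vs. SH0ES (73.04 +/- 1.04), in km/s/Mpc:
h0_tension = gaussian_tension(67.4, 0.5, 73.04, 1.04)  # ~4.9 sigma
```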
In this work we focus on a scalar field model of `early dark energy' (EDE), originally proposed to resolve the `Hubble tension' (see e.g. Refs.~\cite{Karwal:2016vyq,Poulin:2018dzj, Poulin:2018cxd, Smith:2019ihp}). The EDE scenario assumes the presence of an ultra-light scalar field $\phi$ slow-rolling down an axion-like potential of the form $V(\phi)\propto [1-\cos(\phi/f)]^n$, where $f$ is the decay constant of the field. Due to Hubble friction the field is initially fixed at some value, $\theta_i = \phi_i/f$, and becomes dynamical when the Hubble parameter drops below the field's mass, which happens at a critical redshift $z_c$. Once that occurs, the field starts to evolve, eventually oscillates around the minimum of its potential, and its energy density dilutes at a rate faster than matter (for the potential we use here, with $n=3$, $\rho_{\rm EDE} \propto (1+z)^{4.5}$). The energy density of the scalar field around $z_c$ reduces the sound horizon at recombination leading to an increase in the inferred value of $H_0$ from CMB measurements (see e.g. Ref.~\cite{Knox:2019rjx}).
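The dilution rate quoted above follows from the standard time-averaging argument for an oscillating scalar field: near the minimum the axion-like potential behaves as $V(\phi)\propto\phi^{2n}$, for which the cycle-averaged equation of state is $\langle w\rangle=(n-1)/(n+1)$, so that
\[
\rho_{\rm EDE}\propto a^{-3(1+\langle w\rangle)}=a^{-6n/(n+1)}.
\]
For $n=3$ this gives $\rho_{\rm EDE}\propto a^{-4.5}=(1+z)^{4.5}$, i.e. a component that redshifts away even faster than radiation ($a^{-4}$).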
Up until recently, evidence for EDE came only from analyses which included a prior on the value of $H_0$ from the Supernovae and $H_0$ for the Equation of State of dark energy (S$H_0$ES) collaboration\footnote{The S$H_0$ES{} prior is actually a constraint on the absolute calibration of the SNe data. However, since the EDE is dynamical at pre-recombination times, this distinction is unimportant~\cite{Schoneberg:2021qvd}.} \cite{Poulin:2018cxd, Smith:2019ihp, Chudaykin:2020acu,Chudaykin:2020igl,Murgia:2020ryi}. Using this prior on $H_0$ and the full \textit{Planck} power spectra, within the EDE model one obtains a non-zero fraction of the total energy density in EDE at the critical redshift, ${f_{\rm EDE}(z_c) = 0.108^{+0.035}_{-0.028}}$, with a corresponding Hubble parameter ${H_0 = 71.5 \pm 1.2}$ km/s/Mpc~\cite{Murgia:2020ryi} (adding supernovae (SNe) and baryon acoustic oscillation `standard ruler' (BAO) data leads to insignificant shifts). Without the S$H_0$ES{} prior, one has instead an upper bound of the form ${f_{\rm EDE}(z_c) < 0.088}$ at 95\% confidence level (CL) and ${H_0 = 68.29^{+0.75}_{-1.3}}$~km/s/Mpc~\cite{Hill:2020osr,Murgia:2020ryi}.\footnote{Given the weak evidence for EDE, the marginalized constraints are strongly dependent on the choice of priors for the EDE parameters, making these constraints hard to interpret \cite{Murgia:2020ryi,Smith:2020rxx,Herold:2021ksg}.}
Recent analyses of EDE using data from the Atacama Cosmology Telescope's fourth data release (ACT DR4{})~\cite{ACT:2020frw} alone have shown a slight ($\sim 2.2\sigma$) preference for the presence of an EDE component with a fraction $f_{\rm EDE}(z_c)\sim 0.15$ and $H_0 \sim 74$ km/s/Mpc \cite{Hill:2021yec,Poulin:2021bjr}. Interestingly, the inclusion of large-scale CMB temperature measurements by the Wilkinson Microwave Anisotropy Probe (WMAP) \cite{WMAP:2012fli} or the \textit{Planck} satellite~\cite{Planck:2018vyg} restricted to the WMAP multipole range increases the preference to $\sim3\sigma$. A similar analysis using the third-generation South Pole Telescope 2018 (SPT-3G) data \cite{SPT-3G:2021eoc} was presented in Ref.~\cite{LaPosta:2021pgm} (see also Refs.~\cite{Chudaykin:2020acu,Chudaykin:2020igl} for previous studies using SPTpol). There is no evidence for EDE over $\Lambda$CDM{} using SPT-3G alone or when combined with the \textit{Planck} temperature power spectrum restricted to $\ell<650$, giving the marginalized constraint $f_{\rm EDE}(z_c)<0.2$ at 95\% CL in the latter case. Combining ACT DR4{} and/or SPT-3G with the full \textit{Planck} CMB power spectra returns an upper limit on $f_{\rm EDE}(z_c)$, albeit less restrictive than for \textit{Planck} alone.
In Refs.~\cite{Hill:2021yec,Poulin:2021bjr} it was argued that the ACT DR4{} preference for EDE is mainly driven by a feature in the ACT DR4\ EE power spectrum around $ \ell \sim 500$ when ACT DR4{} is considered alone, with an additional broadly-distributed contribution from the TE spectrum when in combination with restricted \textit{Planck} TT data ($\ell <650$ or $\ell <1060$). Ref.~\cite{Poulin:2021bjr} also considered the role of \textit{Planck} polarization data, finding that the evidence for a non-zero $f_{\rm EDE}(z_c)$ and an increased $H_0$ persists, as long as the {\em Planck} TT spectrum is restricted to $\ell < 1060$.
Building on these previous studies, the work presented here explores in more detail how the evidence for EDE using data from ACT DR4{}, SPT-3G or both data sets is impacted by the inclusion of the more precise intermediate-scale ($\ell = \mathcal{O}(100)$) polarization measurements by \textit{Planck}. We test the robustness of the results to changes in the \textit{Planck} TE polarization efficiency and the dust contamination amplitudes in \textit{Planck} EE. We also further investigate the role of \textit{Planck} high-$\ell$ TT data as well as that of several non-CMB probes.
This paper is organized as follows. In Section \ref{sec:ana} we briefly summarize the numerical setup and cosmological data sets used in our analysis. In Section~\ref{sec:results} we present our results, focusing on the role of \textit{Planck} polarization and temperature data as well as that of possible systematic uncertainties. We conclude in Section~\ref{sec:concl} with a summary and final remarks. The Appendices contain additional figures and tables.
\section{Analysis method and data sets}\label{sec:ana}
For the numerical evaluation of the cosmological constraints on the models considered within this work ($\Lambda$CDM{} and EDE) and their statistical comparison we perform a series of Markov-chain Monte Carlo (MCMC) runs using the public code {\sf MontePython-v3}\footnote{\url{https://github.com/brinckmann/montepython_public}} \citep{Audren:2012wb,Brinckmann:2018cvx}, interfaced with our modified version\footnote{\url{https://github.com/PoulinV/AxiCLASS}} of {\sf CLASS}\footnote{\url{https://lesgourg.github.io/class_public/class.html}} \cite{Lesgourgues:2011re,Blas:2011rf}. We make use of a Metropolis-Hastings algorithm assuming uninformative flat priors on $\{\omega_b,\omega_{\rm cdm},H_0,A_s,n_s,\tau_{\rm reio}\}$\footnote{Here $\omega_b$ and $\omega_{\rm cdm}$ are the physical baryon and cold DM energy densities, respectively, $A_s$ is the amplitude of the scalar perturbations, $n_s$ is the scalar spectral index, and $\tau_{\rm reio}$ is the reionization optical depth.}, while when considering the EDE model we also vary $\{\log_{10}(z_c),f_{\rm EDE}(z_c),\theta_i\}$ with priors\footnote{We focus on the range of $z_c$ for which EDE mostly affects the sound horizon, and therefore $H_0$. Broadening the $z_c$ range can affect the constraints on $f_{\rm EDE}(z_c)$ from SPT-3G alone or in combination with \textit{Planck} TT650 \cite{LaPosta:2021pgm}.} of the form $\{3 \leq \log_{10}(z_c)\leq 4, 0.001 \leq f_{\rm EDE}(z_c) \leq 0.5, 0.01 \leq \theta_i \leq 3.1\}$. We also include all nuisance parameters associated with each data set as given by the official collaborations and treat the corresponding sets of nuisance parameters independently.\footnote{In principle, one could use a common foreground model, which would reduce the overall number of free parameters and possibly reduce the uncertainties on the cosmological parameters.
However, the publicly available likelihoods do not easily allow this and therefore many (if not all) joint CMB analyses that have appeared in the literature employ a separate foreground modeling (see, e.g., Refs.~\cite{ACT:2020gnv,SPT-3G:2021eoc}). Furthermore, our analysis shows that the posterior distributions for the foreground parameters are identical in the $\Lambda$CDM and EDE cosmologies and that they are uncorrelated with the EDE parameters. Because of this, we do not expect a joint foreground model to have a significant impact on our results.} As described in Ref.~\cite{Smith:2019ihp}, we use a shooting method to map the set of phenomenological parameters $\{\log_{10}(z_c), f_{\rm EDE}(z_c)\}$ to the theory parameters $\{m,f\}$. We adopt the {\em Planck} collaboration convention in modeling free-streaming neutrinos as two massless species and one massive with $m_\nu=0.06$ eV \cite{Ade:2018sbj}, and use {\sf Halofit} to estimate the non-linear matter clustering \cite{Smith:2002dz}. We consider chains to be converged using the Gelman-Rubin \citep{Gelman:1992zz} criterion $|R -1|\lesssim0.05$.\footnote{This condition is chosen because of the non-Gaussian (and sometimes multi-modal) shape of the posteriors of the EDE parameters. For all $\Lambda$CDM{} runs we have $|R -1|<0.01$.} To post-process the chains and produce our figures we use {\sf GetDist} \cite{Lewis:2019xzd}.
\begin{figure}
\centering
\includegraphics[width=0.85\columnwidth]{fig1.pdf}
\caption{1D and 2D posterior distributions (68\% and 95\% CL) of $H_0, f_{\rm EDE}(z_c), \theta_i,$ and $\log_{10}(z_c)$ for different data set combinations. The vertical gray band shows $H_0=73.04 \pm 1.04$ km/s/Mpc, as reported by the S$H_0$ES{} collaboration \cite{Riess:2021jrx}. The dashed curve shows the posterior distribution for $H_0$ within $\Lambda$CDM{} with \textit{Planck} TT650TEEE+ACT DR4+SPT-3G. When all data sets are combined, EDE is preferred at the $\sim 3\,\sigma$ level and leads to a higher $H_0$ value, in good agreement with the S$H_0$ES{} result.}
\label{fig: MCMC_res}
\end{figure}
We make use of the various \textit{Planck} 2018 \cite{Planck:2018vyg} and ACT DR4\ \cite{ACT:2020frw} likelihoods distributed together with the public {\sf MontePython} code, while the SPT-3G polarization likelihood \cite{SPT-3G:2021eoc} has been adapted from the official {\sf clik} format\footnote{\url{https://pole.uchicago.edu/public/data/dutcher21} (v3.0)}. In addition to the full \textit{Planck} polarization power spectra (referred to as TEEE), we compare the use of the \textit{Planck} TT power spectrum with a multipole range restricted to $\ell<650$ (TT650), or the full multipole range (TT). The choice of \textit{Planck} TT650 is motivated by the fact that the \textit{Planck} and WMAP data are in excellent agreement in this multipole range \cite{huang2018}. In all the runs of this paper, we include the \textit{Planck} low multipole ($\ell<30$) EE likelihood to constrain the optical depth to reionization, as well as the low-$\ell$ TT likelihood \cite{Planck:2018vyg}. For any data combination that includes \textit{Planck} TT650 we do not restrict ACT DR4{} TT. In analyses that include \textit{Planck} TT at higher multipoles, we remove any overlap with ACT DR4{} TT up until $\ell = 1800$ to avoid introducing correlations between the two data sets \cite{Aiola:2020azj}.
Finally, we briefly explore joint constraints from the primary CMB anisotropy data in combination with CMB lensing potential measurements from \textit{Planck} \cite{Planck:2018vyg}, BAO data gathered from 6dFGS at $z = 0.106$ \cite{Beutler:2011hx}, SDSS DR7 at $z = 0.15$ \cite{Ross:2014qpa} and BOSS DR12 at ${z = 0.38, 0.51, 0.61}$ \cite{Alam:2016hwk} (both with and without information on redshift space distortions (RSD) $f\sigma_8$), data from the Pantheon catalog of uncalibrated luminosity distances of SNe in the range ${0.01<z<2.3}$~\cite{Scolnic:2017caz}, as well as the late-time measurement of the $H_0$ value reported by the S$H_0$ES{} collaboration, $H_0=73.04\pm1.04$ km/s/Mpc \cite{Riess:2021jrx} (which we account for as a Gaussian prior on $H_0$).
\begin{table}
\def1.2{1.2}
\scalebox{0.85}{
\begin{tabular}{|l|c|c|}
\hline
Model & $\Lambda$CDM & EDE \\
\hline
\hline
$f_{\rm EDE}(z_c)$ & $-$ & $0.163(0.179)_{-0.04}^{+0.047}$ \\
$\log_{10}(z_c)$&$-$ & $3.526(3.528)_{-0.024}^{+0.028}$\\
$\theta_i$ & $-$ & $2.784(2.806)_{-0.093}^{+0.098}$ \\
\hline
$m$ (eV) & $-$ & $(4.38 \pm 0.49) \times 10^{-28}$ \\
$f$ ($M_{\rm pl}$) & $-$ & $0.213 \pm 0.035$ \\
\hline
$H_0$ [km/s/Mpc]&$68.02(67.81)_{-0.6}^{+0.64}$ & $74.2(74.83)_{-2.1}^{+1.9}$ \\
$100~\omega_b$ & $2.253(2.249)_{-0.013}^{+0.014}$ & $2.279(2.278)_{-0.02}^{+0.018}$ \\
$\omega_{\rm cdm}$ &$0.1186(0.1191)_{-0.0015}^{+0.0014}$ & $0.1356(0.1372)_{-0.0059}^{+0.0053}$\\
$10^{9}A_s$ & $2.088(2.092)_{-0.033}^{+0.035}$ & $2.145(2.146)_{-0.04}^{+0.041}$ \\
$n_s$& $0.9764(0.9747)_{-0.0047}^{+0.0046}$ & $1.001(1.003)_{-0.0096}^{+0.0091}$ \\
$\tau_{\rm reio}$& $0.0510(0.0510)_{-0.0078}^{+0.0087}$ & $0.0527(0.052)_{-0.0084}^{+0.0086}$ \\
\hline
$S_8$ &$0.817(0.821)\pm0.017$ & $0.829(0.829)_{-0.019}^{+0.017}$\\
$\Omega_m$ &$0.307(0.309)_{-0.009}^{+0.008}$ & $0.289(0.287)\pm0.009$\\
Age [Gyrs] & $13.77(13.78)\pm0.023$&$12.84(12.75)\pm0.27$ \\
\hline
$\Delta \chi^2_{\rm min}$ (EDE$-\Lambda$CDM) & $-$ &-16.2 \\
Preference over $\Lambda$CDM &$-$ & 99.9\% ($3.3\sigma$) \\
\hline
\end{tabular} }
\caption{The mean (best-fit) $\pm 1\sigma$ errors of the cosmological parameters reconstructed in the $\Lambda$CDM and EDE models from the analysis of the ACT DR4{}+SPT-3G+\textit{Planck} TT650TEEE data set combination.}
\label{tab:full}
\end{table}
\begin{figure*}
\centering
\includegraphics[width=1.45\columnwidth]{fig2.pdf}
\caption{The difference between the EDE and $\Lambda$CDM{} best-fit models to the data combination ACT DR4{}+SPT-3G+\textit{Planck} TT650TEEE (solid black) and the residuals of the data points computed with respect to the $\Lambda$CDM{} best fit of the same data set combination (coloured data points). Although the EE power spectrum measurements around $\ell \sim 500$ of SPT-3G and \textit{Planck} do not follow the same fluctuations as the ACT DR4 data, we find a $3.3\,\sigma$ preference for EDE over $\Lambda$CDM{} when fitting ACT DR4{}+SPT-3G+\textit{Planck} TT650TEEE jointly.}
\label{fig: residuals}
\end{figure*}
\section{Results} \label{sec:results}
The resulting posterior distributions of the parameters most relevant for our discussion are shown in Fig.~\ref{fig: MCMC_res} for a variety of CMB data set combinations. The mean, best-fit, and 1$\sigma$ errors for the full CMB data set combination for both $\Lambda$CDM{} and EDE cosmologies are shown in Table~\ref{tab:full}. A complete list of CMB constraints can be found in Table \ref{tab:CMB} provided in Appendix~\ref{app: tables}.
We find that the combination of \textit{Planck} TT650TEEE+ACT DR4{}+SPT-3G leads to a $3.3\sigma$ preference\footnote{We compute the preference assuming that the $\Delta\chi^2$ follows a $\chi^2$ distribution with three degrees of freedom. Because the parameters $\{z_c,\theta_i\}$ are not defined once $f_{\rm EDE}=0$, this test statistic does not fully capture the true significance, since the conditions of Wilks' theorem \cite{Wilks:1938dza} are not satisfied. Still, we note that it gives results more conservative than {\em local significance} tests, which would consist of computing the preference at fixed $\{z_c,\theta_i\}$, and therefore with a single degree of freedom. We leave a more detailed estimate of the true significance for future work, for instance following Refs.~\cite{Gross:2010qma,Ranucci:2012ed,Bayer:2020pva} or dedicated mock data analyses.} for EDE over $\Lambda$CDM (${\Delta\chi^2 \equiv \chi^2({\rm EDE})-\chi^2(\Lambda{\rm CDM})= -16.2}$), with $f_{\rm EDE}(z_c)=0.163_{-0.04}^{+0.047}$ and $H_0=74.2^{+1.9}_{-2.1}$ km/s/Mpc (see Table~\ref{tab:full}). Although the $\chi^2$ preference is mainly driven by an improvement of the fit to {\it Planck} TT650TEEE and ACT DR4{} (the detailed breakdowns of the $\chi^2$ values are given in Appendix~\ref{app: chi2}), the addition of SPT-3G pulls the $f_{\rm EDE}(z_c)$ posterior up relative to {\it Planck} TT650TEEE alone (see Table \ref{tab:CMB}). It is remarkable that with the inclusion of the \textit{Planck} polarization power spectra, hints for the EDE cosmology are present when combined with \textit{Planck} TT650 (2.2$\sigma$), \textit{Planck} TT650+ACT DR4{} (3.3$\sigma$), \textit{Planck} TT650+SPT-3G (2.4$\sigma$), and when all three data sets are combined (3.3$\sigma$). Moreover, the resulting posterior distributions for $f_{\rm EDE}(z_c)$ visually agree with one another as shown in Figs.~\ref{fig: MCMC_res} and~\ref{fig: MCMC_full}, though quantifying this consistency is complicated by the partly shared data.
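For reference, the quoted significance can be reproduced with standard-library tools alone; the closed-form expression below is the exact survival function of a $\chi^2$ distribution with three degrees of freedom, converted to a two-sided Gaussian significance (function names are ours):

```python
from math import erfc, exp, pi, sqrt
from statistics import NormalDist

def chi2_sf_3dof(x):
    """P(chi^2_3 > x): closed-form survival function for 3 degrees of freedom."""
    return erfc(sqrt(x / 2)) + sqrt(2 * x / pi) * exp(-x / 2)

def preference_sigma(delta_chi2):
    """Two-sided Gaussian significance equivalent to a Delta-chi^2 with 3 dof."""
    p = chi2_sf_3dof(delta_chi2)
    return NormalDist().inv_cdf(1 - p / 2)

sigma = preference_sigma(16.2)  # ~3.3 sigma, p ~ 1e-3 (i.e. a 99.9% preference)
```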
\subsection{Impact of \textit{Planck} TEEE data}
In the context of the EDE scenario, it was argued in Refs.~\cite{Hill:2021yec,Poulin:2021bjr} that the preference for a non-zero $f_{\rm EDE}(z_c)$ using ACT DR4\ data alone or with additional \textit{Planck} low-$\ell$ temperature data is driven, in part, by features in the ACT DR4{} EE power spectrum around $\ell \sim 500$. The lack of such a feature in the SPT-3G data might explain why, in combination with ACT DR4{}, these data do not show evidence for a non-zero $f_{\rm EDE}(z_c)$ \cite{LaPosta:2021pgm}. The effect of adding the \textit{Planck} polarization power spectra is most apparent at the intermediate TE and EE multipoles, since it is at these scales that the \textit{Planck} measurements are more constraining than those of ACT DR4{} and SPT-3G.
We show the difference of the TT, TE and EE power spectra between the EDE and $\Lambda$CDM{} best-fit models extracted from the data set combination ACT DR4{}+SPT-3G+\textit{Planck} TT650TEEE in Fig.~\ref{fig: residuals}, while in Fig.~\ref{fig: residuals_full} of Appendix \ref{app:nopol} we focus on ACT DR4{} and SPT-3G data with and without {\em Planck} polarization data. The figures show that \textit{Planck} TEEE data drive tight constraints on the spectra at low multipoles, with a small deviation away from $\Lambda$CDM in TE between $\ell\sim200-800$ and in EE between $\ell\sim500-800$ that is coherent with the behavior of the data. Remarkably, after the inclusion of \textit{Planck} polarization data, the best-fit models for ACT DR4{} and SPT-3G come into better agreement. Additionally, due to the presence of EDE\footnote{For discussions about the impact of EDE on the CMB power spectra and the correlation with other cosmological parameters see Refs.~\cite{Poulin:2018cxd,Knox:2019rjx,Hill:2020osr,Vagnozzi:2021gjh}.} the TT spectrum exhibits a lower power than $\Lambda$CDM around $\ell\sim 500-1300$, which follows a trend clearly visible in ACT DR4{} data. In fact, in this combined analysis of ACT DR4 with {\em Planck} TT650TEEE and SPT-3G, the preference for EDE within ACT DR4{} data is driven almost equally by temperature ($\Delta\chi^2$ (ACT DR4{} TT)$=-3.3$) and polarization ($\Delta\chi^2$ (ACT DR4{} TEEE)$=-4.7$) data.
At the parameter level, the main impact of including \textit{Planck} TEEE in combination with \textit{Planck} TT650+ACT DR4{}+SPT-3G is on the value of $\omega_b$, $z_c$, and $\theta_i$ (for comparison, see Appendix \ref{app:nopol} for analyses without \textit{Planck} polarization data). For instance, ACT DR4{}+\textit{Planck} TT650 gives $10^{2}\omega_b = 2.154^{+0.04}_{-0.046}$, $\log_{10}(z_c)=3.21^{+0.11}_{-0.01}$ and no constraints on $\theta_i$ (see Table \ref{tab:TTonly} and Fig.~\ref{fig:PlanckTT650}). The inclusion of \textit{Planck} polarization shifts the baryon density to $10^{2}\omega_{b } = 2.273 ^{+0.02}_{-0.023}$, tightly constrains ${\theta_i=2.784^{+0.098}_{-0.093}}$, and leads to a value of the critical redshift $z_c$ in good agreement with that of earlier findings \cite{Poulin:2018cxd,Smith:2019ihp,Murgia:2020ryi}, namely $\log_{10}(z_c)=3.529^{+0.03}_{-0.049}$, i.e. a field that becomes dynamical around the time of matter-radiation equality ($\log_{10}(z_{\rm eq}) = 3.580^{+0.022}_{-0.016}$).
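As a quick consistency check (a sketch assuming the standard radiation content, $\omega_\gamma\simeq2.47\times10^{-5}$ for $T_{\rm CMB}=2.7255$~K with $N_{\rm eff}=3.046$ massless neutrinos, and neglecting the EDE contribution itself), the quoted equality redshift follows directly from the mean densities of Table~\ref{tab:full}:

```python
from math import log10

def z_equality(omega_b, omega_cdm, n_eff=3.046, omega_gamma=2.469e-5):
    """Matter-radiation equality redshift from 1 + z_eq = omega_m / omega_r."""
    omega_r = omega_gamma * (1 + 0.2271 * n_eff)  # photons + massless neutrinos
    return (omega_b + omega_cdm) / omega_r - 1

# Mean EDE-fit densities from Table 1: omega_b = 0.02279, omega_cdm = 0.1356
log10_zeq = log10(z_equality(0.02279, 0.1356))  # ~3.58
```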
Although there is an overall improvement in the $\chi^2$ when using EDE for all of the CMB data, the inclusion of \textit{Planck} polarization leads to a degradation of the fit to ACT DR4{} when compared to the EDE analysis with ACT DR4{}+\textit{Planck} TT650, ${\Delta \chi^2_{\rm ACT} =+11.8}$ (see Tables \ref{tab:chi2_ede} and \ref{tab:chi2_nopol}).\footnote{Even with this increase in the ACT DR4{} $\chi^2$, the overall goodness-of-fit as quantified by the probability-to-exceed goes from 0.17 to 0.07. Thus, in terms of the overall goodness-of-fit, both models are acceptable.} It is however remarkable that, regardless of the data combination, the improvement over $\Lambda$CDM is similar ($\Delta \chi^2\sim-8$). In the combined fit, we note that the $\chi^2$ of SPT-3G and \textit{Planck} TT650TEEE are also mildly degraded (in both the EDE and $\Lambda$CDM models), and exploring whether these shifts are compatible with pure statistical effects is left for future work (see Ref.~\cite{Handley:2019wlz} for a related discussion).
\subsection{Systematic uncertainty in \textit{Planck} TEEE data}
\label{sec:systs}
As explained in Ref.~\cite{Planck:2019nip} (see also Sec.~2.2.1 of Ref. \citep{Planck:2018vyg} and Ref. \citep{galli2021}), two different approaches for the modeling of the \textit{Planck} TE polarization efficiency (PE) calibration are possible\footnote{Polarization efficiencies are calibration factors multiplying polarization spectra. In principle, the polarization efficiencies found by fitting the TE spectra should be consistent with those obtained from EE. However, in \textit{Planck}, small differences (at the level of $2\sigma$) are found between the two estimates at 143 GHz. There are two possible choices: the `map-based' approach, which adopts the estimates from EE (which are about a factor of 2 more precise than TE) for both the TE and EE spectra; or the `spectrum-based' approach, which applies independent estimates from TE and EE. The baseline \textit{Planck} likelihood uses a `map-based' approach, but allows one to test the `spectrum-based' approach as well (see also Ref. \citep{camspec}), as we do in this paper.}. In principle, these techniques should give equivalent results for the TE PE parameters, but in practice estimates in \textit{Planck} are slightly discrepant, at the level of $\sim 2\sigma$ (see Eqs.~(45) -- used as baseline -- and (47) of Ref. \cite{Planck:2019nip}). Although these differences have a negligible impact on the parameter estimation within $\Lambda$CDM{}, it has been noted that constraints to several extensions of the $\Lambda$CDM{} model are affected by shifts in the TE PE parameters (see e.g.~Fig.~77 of Ref.~\cite{Planck:2019nip}).\footnote{We note that for \textit{Planck} there exist other likelihood codes which may be used. In this paper we used the Plik likelihood, which is the baseline \textit{Planck} likelihood for the final third data release (PR3) of the \textit{Planck} collaboration. 
Another \textit{Planck} likelihood based on PR3 is CamSpec \cite{Aghanim:2018eyx, camspec}, which gives 0.5$\sigma$ shifts relative to Plik in some extensions of $\Lambda$CDM for the TTTEEE data combination. These shifts are due to differences in the treatment of polarization data (Plik and CamSpec provide the same results in TT), which are mostly driven by different choices of polarization efficiencies (see Section 2.2.5 of \cite{Aghanim:2018eyx}). Thus, applying different efficiencies to the Plik likelihood (as done in this paper) provides an accurate proxy of the uncertainty introduced by the difference between the two likelihoods. Moreover, outside of the \textit{Planck} collaboration, new likelihoods (Camspec \cite{2022arXiv220510869R}, Hillipop -- \url{https://github.com/planck-npipe/hillipop}) have recently been proposed based on a new release of \textit{Planck} maps, NPIPE~\cite{NPIPE}. However, while the results are consistent with the ones from PR3, a detailed understanding of differences between data releases and likelihoods is outside of the scope of this paper.}
Another potential systematic effect in the \textit{Planck} data that has to be considered in beyond-$\Lambda$CDM{} models whose parameters are strongly correlated with the scalar spectral index, $n_s$, involves the choice made for the galactic dust contamination amplitudes \cite{Planck:2019nip}. For the latter, the standard analysis fixes the EE polarization dust amplitudes to values determined by analyzing the 353 GHz map, while the TE dust amplitudes are subject to Gaussian priors (see Fig.~40 of Ref.~\cite{Planck:2019nip}). Lifting these choices does not significantly affect parameter estimation within $\Lambda$CDM{} (see again Fig.~77 of Ref.~\cite{Planck:2019nip}); however, since $f_{\rm EDE}(z_c)$ is strongly correlated with $n_s$ (as shown in Fig.~\ref{fig: MCMC_full}), we test whether relaxing the dust priors has a significant impact on our constraints on EDE.
\begin{table*}
\def1.2{1.2}
\scalebox{1.0}{
\begin{tabular}{|l|c|c|c|}
\hline Parameter & \textit{Planck} TT650TEEE & \textit{Planck} TT650TEEE & \textit{Planck} TTTEEE\\
& +ACT DR4+SPT-3G& +ACT DR4+SPT-3G & +ACT DR4+SPT-3G\\
& +BAO+Pantheon&+$\phi\phi$+BAO/$f\sigma_8$+Pantheon&+BAO+Pantheon \\
\hline \hline
$f_{\rm EDE}(z_c)$ & $0.148(0.163)_{-0.035}^{+0.039}$ & $0.106(0.143)_{-0.044}^{+0.063}$ & $<0.128 (0.100)$\\
$\log_{10}(z_c)$& $3.524(3.529)_{-0.026}^{+0.028}$ & $3.494(3.515)_{-0.032}^{+0.083}$ & $3.511(3.534)_{-0.11}^{+0.12}$\\
$\theta_i$ & $2.75(2.757)_{-0.065}^{+0.071}$&$2.512(2.743)_{-0.066}^{+0.41}$ & $2.42(2.77)^{+0.62}_{-0.098}$ \\
\hline
$H_0$ [km/s/Mpc] & $73.03(73.51)_{-1.5}^{+1.4}$ & $71.45(72.53)_{-1.7}^{+2.1}$ & $69.72(70.78)^{+1.1}_{-1.8}$\\
$100~\omega_b$& $2.273(2.272)_{-0.018}^{+0.016}$ & $2.268(2.261)_{-0.02}^{+0.017}$ & $2.254(2.254)\pm 0.016$\\
$\omega_{\rm cdm}$& $0.1349(0.1368)\pm0.005$ &$0.1303(0.1345)_{-0.0058}^{+0.0068}$ & $0.1256(0.1299)_{-0.0056}^{+0.0038}$\\
$10^9A_s$ & $2.136(2.138)_{-0.038}^{+0.034}$& $2.129(2.155)_{-0.034}^{+0.033}$ & $2.130(2.135)\pm0.038$\\
$n_s$& $0.9965(0.9977)_{-0.0077}^{+0.0075}$ & $0.9899(0.9931)_{-0.0076}^{+0.0092}$ & $0.9804(0.9846)\pm 0.0075$ \\
$\tau_{\rm reio}$& $0.0505(0.0498)_{-0.0075}^{+0.0078}$ & $0.0516(0.0549)_{-0.0073}^{+0.0071}$ & $0.0546(0.0521)\pm0.0073$\\
\hline
$S_8$ & $0.838(0.841)\pm0.015$& $0.836(0.845)\pm0.014$ &$0.835(0.842)\pm0.014$\\
$\Omega_m$ &$0.297(0.297)_{-0.006}^{+0.007}$ & $0.301(0.299)_{-0.007}^{+0.006}$ &$0.306(0.306)\pm0.006$ \\
Age [Gyrs] &$12.95(12.86)_{-0.23}^{+0.22}$ & $13.18(12.99)_{-0.33}^{+0.26}$ &$13.45(13.24)_{-0.16}^{+0.31}$ \\
\hline
$\Delta \chi^2_{\rm min}({\rm EDE}-\Lambda{\rm CDM})$ & -14.4& -11.4 & -9.4\\
Preference over $\Lambda$CDM & 99.8\% ($3.0\sigma$) & 99.0\% ($2.6\sigma$)&97.6\% ($2.3\sigma$) \\
\hline
\end{tabular} }
\caption{The mean (best-fit) $\pm 1\sigma$ errors of the cosmological parameters reconstructed from analyses of various data sets (see column titles) in the EDE model when including data beyond \textit{Planck} TT650TEEE+ACT DR4{}+SPT-3G. For each data set combination, we also report the improvement $\Delta\chi^2_{\rm min}\equiv\chi^2({\rm EDE})-\chi^2(\Lambda{\rm CDM})$ and the corresponding preference over $\Lambda$CDM.}
\label{tab:non-CMB}
\end{table*}
\begin{figure*}
\centering
\includegraphics[width=1.35\columnwidth]{fig3.pdf}
\caption{1D and 2D posterior distributions (68\% and 95\% CL) for a subset of the cosmological parameters for different data set combinations fit to EDE. The vertical gray band represents the $H_0$ value reported by the S$H_0$ES{} collaboration \cite{Riess:2021jrx}, $H_0=73.04 \pm 1.04$ km/s/Mpc. The non-CMB data tend to prefer lower values of $n_s$ and $\omega_{\rm cdm}$ leading to lower values of $f_{\rm EDE}(z_c)$. The overall preference for EDE is relatively unchanged when including the BAO and SNe data. Including the full \textit{Planck} data leads to a value of $f_{\rm EDE}(z_c)$ consistent with zero at $\sim 1 \sigma$.}
\label{fig:external}
\end{figure*}
In order to test the robustness of our results against these possible known sources of systematics, we perform two additional fits of EDE to {\it Planck} TT650TEEE data: one in which we fix the PE calibration factors to the values reported in Eq.~(47) of Ref. \cite{Planck:2019nip}, and another where we place uniform priors on six additional nuisance parameters describing the dust contamination amplitudes in the EE power spectrum. The results of this analysis are shown in Fig.~\ref{fig:syst}, presented in Appendix \ref{app:syst}. We find that the \textit{Planck} preference for EDE vanishes when the TE PE parameters are fixed to the non-standard values ($\Delta\chi^2=-5.1$). Interestingly, the ACT collaboration also found that a potential systematic error in their TE spectra can reduce the preference for EDE within ACT DR4{} data~\cite{Hill:2021yec}, although not quite as drastically as we find here for {\it Planck}. On the other hand, allowing the dust contamination amplitudes in EE to vary freely has only a marginal effect on the preference for EDE ($\Delta\chi^2=-10.2$).
\subsection{Impact of non-CMB data}
In Fig.~\ref{fig:external} we show the 1D and 2D posteriors for a subset of the cosmological parameters when including: (i) probes of the late-time expansion history, namely BAO and the (uncalibrated) Pantheon SNe, and (ii) probes of the clustering of matter at late times, namely $f\sigma_8$ and \textit{Planck} lensing. A complete list of constraints is given in Table~\ref{tab:non-CMB}.
The inclusion of BAO and Pantheon SNe has a relatively small effect on the preference for EDE over $\Lambda$CDM, slightly reducing it to $3.0\sigma$ ($\Delta\chi^2 = -14.4$). On the other hand, when both $f\sigma_8$ and the \textit{Planck} lensing power spectrum are included, the preference for EDE over $\Lambda$CDM{} is reduced to $2.6 \sigma$ ($\Delta\chi^2 = -11.4$). It is well known that EDE cosmologies can be tested using measurements of the clustering of matter, since their preferred values of $\omega_{\rm cdm}$ and $n_s$ predict larger clustering at small scales than $\Lambda$CDM{} \cite{Hill:2020osr,DAmico:2020ods,Ivanov:2020ril,Murgia:2020ryi,Klypin:2020tud}. In fact, the value of $S_8 = 0.829_{-0.019}^{+0.017}$ reconstructed in the EDE cosmology from {\em Planck} TT650TEEE+SPT-3G+ACT DR4{} is in slight tension\footnote{The level of tension is in fact smaller than in the fiducial {\em Planck} $\Lambda$CDM cosmology \cite{Aghanim:2018eyx,Heymans:2020gsg,DES:2021wwk}, but it is slightly larger than in the $\Lambda$CDM cosmology extracted from {\em Planck} TT650TEEE+SPT-3G+ACT DR4, see Tab.~\ref{tab:full}.} with the $S_8$ measurements from KiDS-1000+BOSS+2dfLenS \cite{Heymans:2020gsg} ($2.3\sigma$), and DES-Y3 \cite{DES:2021wwk} ($2.1\sigma$). Therefore, it is not surprising that probes of the clustering of matter at late times have a more significant impact on the EDE fit. This is evident in Fig.~\ref{fig:external}: both $f\sigma_8$ and estimates of the lensing potential power spectrum prefer lower values of $n_s$ and $\omega_{\rm cdm}$, leading to a decrease in the marginalized values of $f_{\rm EDE}(z_c)$. However, it is interesting to note that the resulting posterior distribution for the Hubble constant shifts to $H_0 = 71.45 ^{+2.1}_{-1.7}$ km/s/Mpc, i.e. with a central value still significantly higher than in $\Lambda$CDM{}. 
Stronger constraints on EDE may be obtained from analyses making use of the full shape of BOSS DR12 data\footnote{ Although these constraints are debated \cite{Murgia:2020ryi,Klypin:2020tud,Niedermann:2020qbw,Smith:2020rxx} and a recently raised potential issue with the calibration of the window function may affect such constraints \cite{Beutler:2021eqq}.} \cite{DAmico:2020ods,Ivanov:2020ril} or from including additional surveys such as KiDS-1000~\cite{KiDS:2020suj}, DES-Y3 \cite{DES:2021wwk} and HSC \cite{HSC:2018mrq}. A fully satisfactory resolution of the `$S_8$ tension', if not due to systematic errors, e.g. from galaxy assembly bias and baryonic effects \cite{Amon:2022ycy}, may require a more complicated EDE dynamics \cite{Karwal:2021vpk,McDonough:2021pdg,Sabla:2022xzj} or an independent mechanism \cite{Jedamzik:2020zmd,Allali:2021azp,Clark:2021hlo}.
Finally, in Appendix~\ref{app:ext_full} we present results of combined analyses with a prior on $H_0$ as reported by S$H_0$ES{} \cite{Riess:2021jrx}. We find that when considering the combination of {\it Planck} TT650TEEE+ACT DR4+SPT-3G+ BAO+Pantheon+S$H_0$ES{} the EDE model is favored at 5.3$\sigma$ over $\Lambda$CDM, with $f_{\rm EDE}(z_c)=0.143_{-0.026}^{+0.023}$ and $H_0=72.81_{-0.98}^{+0.82}$ km/s/Mpc. The inclusion of the full \textit{Planck} TT power spectrum, lensing power spectrum and $f\sigma_8$ measurement reduces the preference to $4.3\sigma$, but the EDE model still provides an excellent fit to all data sets, and a potential resolution to the `Hubble tension'.
\subsection{Impact of \textit{Planck} high-$\ell$ TT data}
In Fig.~\ref{fig:external} we show the reconstructed parameter posteriors when including the full range of the \textit{Planck} TT power spectrum. In that case, we find that the EDE contribution is constrained to be at most $f_{\rm EDE}(z_c)<0.128$ (95\% CL) with a corresponding $H_0=69.7_{-1.8}^{+1.1}$ km/s/Mpc (see Table \ref{tab:non-CMB}), while the preference for EDE drops to the $2.3\sigma$ level (with a best fit value $f_{\rm EDE}(z_c)= 0.1$). We note that, although the posterior distribution of $f_{\rm EDE}(z_c)$ is compatible with zero at $1\sigma$, it is interesting that the preference, computed using the $\Delta \chi^2$ statistic with three degrees of freedom\footnote{This likely indicates that the true significance of the preference over $\Lambda$CDM is lower than the one reported here, similarly to the way in which {\em local} and {\em global} significance can differ.}, stays above the $2\sigma$ level. This is reminiscent of the difference between the results reported using an EDE model with only one parameter \cite{Murgia:2020ryi,Smith:2020rxx,Niedermann:2020dwg}, or using a frequentist approach through a profile likelihood analysis \cite{Herold:2021ksg}, which led to a $2.2\sigma$ preference for EDE from full {\em Planck} data, as opposed to MCMC analyses that only find upper limits on $f_{\rm EDE}(z_c)$ \cite{Hill:2020osr,Murgia:2020ryi}. In addition, the marginalized constraints on $f_{\rm EDE}(z_c)$ using {\em Planck} TTTEEE with ACT DR4{} and SPT-3G are roughly $50\%$ {\em weaker} than constraints from {\em Planck} only.
\begin{figure*}
\centering
\includegraphics[width=0.8\linewidth]{fig4.pdf}
\caption{Residual plot of the \textit{Planck} TT data with respect to the reference $\Lambda$CDM{} best-fit model for the \textit{Planck} TTTEEE+ACT DR4{}+SPT-3G data set combination. The orange line corresponds to the difference between the EDE best-fit model to the data combination \textit{Planck} TT650TEEE+ACT DR4{}+SPT-3G (`EDE TT650' in the legend) and the reference $\Lambda$CDM{} model. The blue line is the same for full \textit{Planck} TTTEEE+ACT DR4{}+SPT-3G (`EDE' in the legend). Coadded data residuals are computed with respect to the reference $\Lambda$CDM{} cosmological model but using the best-fit nuisance parameters for each of the two EDE cases (TT650 for the red points and full TT for the blue ones). Since in the TT650 case the high-$\ell$ data, shown in red transparent data points, do not enter the parameter determination, high-$\ell$ foreground parameters are not determined. Therefore, in this case they have been obtained by minimizing the \textit{Planck} TT likelihood when fixing the $C_\ell$ and low-$\ell$ nuisances to the \textit{Planck} TT650TEEE+ACT DR4{}+SPT-3G best-fit model. At $\ell>900$, the red transparent residual data points are very close to the blue ones, which indicates that the difference in nuisance parameters between the two cases is small. The high-$\ell$ orange best-fit line predicted by the EDE TT650 case is far from the residual data points, regardless of the nuisance model chosen. It is therefore the high-$\ell$ TT data which drives the best-fit closer to $\Lambda$CDM{}, from the orange line toward the blue one.}
\label{fig: EDE_plot_final}
\end{figure*}
Given that the posterior distribution of $f_{\rm EDE}(z_c)$ is compatible with zero at $1\sigma$, we conservatively interpret these results as an indication that the full \textit{Planck} TT power spectrum slightly disfavors the EDE cosmology preferred by the other data sets. We leave a more robust determination of this (in)consistency to future work.
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{fig5.pdf}
\caption{The posterior distribution for $f_{\rm EDE}(z_c)$ and $H_0$ as a function of the maximum TT multipole for \textit{Planck} TT($\ell_{\rm max}$)TEEE+ACT DR4. The yellow and purple bands in the bottom panel give the S$H_0$ES\ and the full \textit{Planck} values for $H_0$, respectively. Note that, following Ref.~\cite{Aiola:2020azj}, in the chains used to make this figure we restricted the ACT DR4{} temperature bins so as to remove any overlap with \textit{Planck} up until $\ell_{\rm max} = 1800$. As the \textit{Planck} TT $\ell_{\rm max}$ is increased the preference for a non-zero contribution of EDE is decreased, leading to a smaller inferred value of $H_0$.}
\label{fig:TT_ellmax}
\end{figure}
We show in Fig.~\ref{fig: EDE_plot_final} the difference between the temperature power spectra obtained in the EDE best-fit to {\it Planck} TT650TEEE+ACT DR4{}+SPT-3G or full {\it Planck} TTTEEE+ACT DR4{}+SPT-3G, and the $\Lambda$CDM fit to full {\it Planck} TTTEEE+ACT DR4{}+SPT-3G. We also show {\it Planck} TT data residuals with respect to the $\Lambda$CDM model. To gauge the role of foregrounds in affecting the preference for EDE, we compare the data residuals for the foreground models obtained from the restricted fit to those obtained in the fit to the full range of data. One can see that data residuals are fairly similar, indicating that high-$\ell$ foregrounds are not strongly correlated with EDE, and cannot be the reason for which \textit{Planck} high-$\ell$ TT data seems to disfavor EDE. Additionally, one can see that data points up to $\ell \sim 850$ are in good agreement with the EDE best-fit model, but start diverging around $\ell \sim 900$.
To better understand the impact of the \textit{Planck} TT power spectrum, in Fig.~\ref{fig:TT_ellmax} we show how the preference for EDE evolves as we increase the considered range of the \textit{Planck} TT power spectrum in steps of $\Delta \ell=100$.\footnote{Here we do not include the SPT-3G data for the sake of computational speed, but we have explicitly checked with a few dedicated runs that its addition does not impact our conclusions.} The evidence for EDE over $\Lambda$CDM{} (and the corresponding increased value of $H_0$) starts to drop off once the TT multipoles $\ell \gtrsim 1300$ are included. This is consistent with the fact that \textit{Planck} gains most of its statistical power between $\ell \sim 1300$ and $\ell \sim 2000$, which drives the model to be extremely close to $\Lambda$CDM. Given that the high-$\ell$ ACT DR4{} temperature power spectrum is partly driving the preference for EDE, as mentioned previously, this may hint at a small inconsistency between {\it Planck} and ACT DR4{} temperature data (see also Ref.~\cite{Handley:2019wlz}), although at the current level of significance a statistical fluctuation cannot be ruled out.
\section{Summary and conclusions}\label{sec:concl}
We have found that when analyzing EDE using ACT DR4, SPT-3G, and \textit{Planck} measurements of the CMB a consistent story emerges if we exclude the \textit{Planck} temperature power spectrum at high-$\ell$: an EDE component consisting of $\sim 10-15\%$ of the total energy density at a redshift $\log_{10}(z_c) \simeq 3.5$ with an initial field displacement of $\theta_i \simeq 2.7$ and a corresponding increase in the inferred value of the Hubble constant with $H_0 \simeq 73-74$ km/s/Mpc, in contrast to $\Lambda$CDM{} which gives $H_0 \simeq 68$ km/s/Mpc (see Table \ref{tab:CMB}).
Such hints for an EDE cosmology are present when combining \textit{Planck} polarization power spectra with \textit{Planck} TT excised at $\ell>650$ (2.2$\sigma$), and when adding ACT DR4 (3.3$\sigma$) or SPT-3G (2.4$\sigma$). Combining all three CMB data sets yields a 3.3$\sigma$ preference for EDE over $\Lambda$CDM{}. The inclusion of the \textit{Planck} polarization data effectively removes the differences between the best-fits of the measurements of the lowest polarization multipoles by ACT DR4{} and SPT-3G, and emphasizes the new information that these observations provide. Indeed, together with \textit{Planck} polarization data the EDE best-fits for both ACT DR4{} and SPT-3G visually come into closer agreement (although a more careful analysis of their consistency is left for future work). This preference remains at the 3$\sigma$ level when adding the Pantheon SNe and the BAO standard ruler, increases above 5$\sigma$ when including an $H_0$ prior from S$H_0$ES{}, and is mildly reduced when considering CMB lensing potential data or estimates of~$f\sigma_8$.
We find that these results remain unchanged when increasing the maximum \textit{Planck} TT multipole until ${\ell=1300}$. On the contrary, the inclusion of small angular scale data from the \textit{Planck} temperature power spectrum above that multipole decreases this preference to $2.3 \sigma$ (in the absence of a $H_0$ prior). This is consistent with the fact that \textit{Planck} high-$\ell$ TT data have most of their constraining power at those scales, and drive parameters very close to their $\Lambda$CDM values, limiting the ability to exploit degeneracies between $\Lambda$CDM and EDE parameters. There have been several previous studies looking into the consistency between the `low' ($\ell \lesssim 1000$) and `high' TT multipoles (see e.g. Refs.~\cite{Addison:2015wyg,Planck:2018vyg,Planck:2016tof}). The high-$\ell$ TT power spectrum has a slight ($\sim 2 \sigma$) preference for higher $\omega_{\rm cdm}$, higher amplitude ($A_s e^{-2\tau_{\rm reio}}$), and lower $H_0$. However, an exhaustive exploration of these shifts indicates that they are all consistent with expected statistical fluctuations~\cite{Planck:2016tof}. Although there may be localized features in the high-$\ell$ TT power spectrum which are due to improperly modeled foregrounds (see Sec.~6.1 in Ref.~\cite{Planck:2018vyg}), under the assumption of $\Lambda$CDM{} there is no evidence that these data are broadly biased. However, it is interesting to note that the ACT DR4{} TT data at these multipoles are consistent with the preference for EDE.
Moreover, it is well known that \textit{Planck} polarization data may suffer from some systematic uncertainties which may, in turn, impact our conclusions. The most significant potential source of systematics would imply a change in the TE polarization efficiencies. We explore this by re-analyzing the \textit{Planck} constraints on EDE using different TE polarization efficiencies and find that the \textit{Planck} TT650TEEE preference for EDE largely reduces. A similar analysis conducted in Ref.~\citep{Hill:2021yec} accounted for a possible unknown source of systematics in ACT DR4{} TE data and showed that it also reduces the ACT DR4's preference for EDE. When allowing the EE dust amplitudes to vary we found almost no change to the constraints on EDE.
It is thus clear that future, high-precision, CMB temperature and polarization data will be necessary to disentangle whether the reported preference for EDE over $\Lambda$CDM{} is driven by systematics or a hint of new physics (or, possibly, a statistical fluctuation). In particular, the precision expected from upcoming data releases from SPT and ACT (as mentioned in the conclusions of Refs.~\cite{SPT-3G:2021eoc, SPT:2021slg, ACT:2020frw, ACT:2020gnv}) with combined temperature, polarization, and lensing likelihoods will be capable of constraining the parameter space of the EDE model even more tightly\footnote{In the future, CMB spectral distortions will also be able to determine the value of $n_s$ with a high significance, thereby testing the EDE ability to address the $H_0$ tension independently of CMB anisotropy data~\cite{Lucca:2020fgp}.} as well as of clarifying how the small-scale CMB TT measurements impact the EDE constraints.
This will not only provide a valuable cross-check on the \textit{Planck} measurements, but also an opportunity to obtain tight and robust constraints through joint analyses, which can be of primary importance to test physics scenarios beyond $\Lambda$CDM with CMB data (as in the case of e.g. primordial magnetic fields \cite{Jedamzik:2020zmd,Galli:2021mxk}, sterile neutrino self-interactions \cite{Corona:2021qxl} and New EDE \cite{Niedermann:2019olb, Niedermann:2020dwg}) as our work demonstrates. In fact, based on the analyses previously conducted in \citep{Poulin:2021bjr, Schoneberg:2021qvd}, we also carried out preliminary tests to check whether the same data set combinations that lead to a preference for EDE would also display a similar behavior in other beyond-$\Lambda$CDM models (such as New EDE and varying electron mass), finding that this is not the case. The same conclusion was also recently reached in \cite{Schoneberg:2022grr} in the context of the Wess Zumino Dark Radiation model introduced in \cite{Aloni:2021eaq}. Although a more in-depth analysis is left for future work, this is already indicative of the very important role that future data might play in testing and distinguishing non-standard cosmological models.
\acknowledgments
The authors thank J. Colin Hill, Thibaut Louis and Adam G. Riess for useful comments and suggestions. This work used the Strelka Computing Cluster, which is run by Swarthmore College. TLS is supported by NSF Grant No.~2009377, NASA Grant No.~80NSSC18K0728, and the Research Corporation. ML is supported by an F.R.S.-FNRS fellowship, by the `Probing dark matter with neutrinos' ULB-ARC convention and by the IISN convention 4.4503.15. LB acknowledges support from the University of Melbourne and the Australian Research Council (DP210102386). SG is supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (grant agreement No 101001897). This work has been partly supported by the CNRS-IN2P3 grant Dark21. The authors acknowledge the use of computational resources from the Excellence Initiative of Aix-Marseille University (A*MIDEX) of the “Investissements d’Avenir” programme. This project has received support from the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No 860881-HIDDeN.
\newpage
\section{\label{sec:introduction}Introduction}
Random sequential adsorption (RSA) refers to a numerical procedure popularized by Feder and others to model monolayers obtained in irreversible adsorption processes~\cite{feder1980}. It is based upon sequential iterations of the following steps:
\begin{itemize}
\item[--] a virtual particle is created and its position and orientation on the surface are selected randomly.
\item[--] the virtual particle is tested for overlaps with any of the other, previously placed, particles. If no overlap is found, it is placed, holding its position and orientation until the end of the simulation. Otherwise, the virtual particle is removed and abandoned.
\end{itemize}
The procedure ends when there is no free space large enough for any other particle to adsorb. The set containing the placed particles is called a saturated random packing. Besides modeling adsorption layers, such packings have a large variety of applications including soft matter~\cite{evans1993,torquato2010}, telecommunication~\cite{hastings2005}, information theory~\cite{coffman1998} and mathematics~\cite{zong2014}.
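The steps above can be sketched directly. The following is a minimal, unoptimized illustration for disks on a periodic square; the `max_failures` cutoff is our assumption standing in for a true saturation criterion (the efficient saturated-packing algorithm actually used in this work is described in the Model section):

```python
import math
import random

def rsa_disks(L, d, max_failures=2000, seed=0):
    """Naive RSA of disks of diameter d on an L x L square with periodic
    boundary conditions. Stops after max_failures consecutive rejected
    trials -- a practical stand-in for true saturation."""
    rng = random.Random(seed)
    placed, failures, d2 = [], 0, d * d
    while failures < max_failures:
        x, y = rng.uniform(0, L), rng.uniform(0, L)
        overlap = False
        for px, py in placed:
            dx = abs(x - px); dx = min(dx, L - dx)  # minimum-image convention
            dy = abs(y - py); dy = min(dy, L - dy)
            if dx * dx + dy * dy < d2:
                overlap = True
                break
        if overlap:
            failures += 1          # virtual particle rejected and abandoned
        else:
            placed.append((x, y))  # particle holds its position permanently
            failures = 0
    return placed

# Disks of unit surface area, as considered below: d = sqrt(4/pi) ~ 1.13.
L, d = 10.0, math.sqrt(4 / math.pi)
disks = rsa_disks(L, d)
print(len(disks) / L**2)  # approaches ~0.547 as L and max_failures grow
```

Because the disks have unit area, the packing fraction is simply the number of placed disks divided by $L^2$.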
The scientific history of these packings begins in 1939, when Flory studied reactions between substituents along a long, linear polymer~\cite{flory1939}. In 1958 R{\'e}nyi calculated analytically the mean packing fraction for an infinitely large, one-dimensional, saturated random packing, the so-called car-parking problem~\cite{renyi1958}. For higher dimensional packings, the mean packing fraction is known only from numerical simulations. However, numerically generated random packings are always finite, so finite-size effects can influence their properties. This is especially important when results of adsorption experiments performed on macroscopic systems are interpreted in terms of numerical modeling. Therefore, it is important to know whether finite-size effects can be neglected. It is expected that these effects are less important for larger packings than for smaller ones, but, on the other hand, larger packings are much harder to obtain because RSA then requires a significantly larger number of iterations \cite{ciesla2017}.
The main goal of this study is to determine how different boundary conditions influence the mean packing fraction and what is the minimal size of a saturated random packing that produces the correct value of the mean packing density within a reasonable statistical error. Results of this study can be used to increase the effectiveness of RSA modeling, which can be important for finding packings with desired properties, in particular, for finding shapes that give the highest packing fraction~\cite{ciesla2015shapes,ciesla2016shapes}. The properties of finite random packings with open boundaries were studied previously, e.g.~\cite{Adamczyk2007}, but rather in the context of particle density fluctuations near boundaries. Periodic boundary conditions seem to be more popular, e.g. \cite{feder1980,vigil1989,zhang2013,ciesla2016}, but there are no hints about what packing size is large enough.
\section{\label{sec:model}Model}
We studied RSA of disks on squares with periodic, open, and wall boundary conditions. Each disk had a unit surface area, so its diameter was equal to $d = \sqrt{4/\pi} \approx 1.13$. The surface area of the square boundary $S = L^2$ varied from $6$ to $40000$. To determine the mean packing fraction $\theta$, for each packing size several independent packings were generated. According to \cite{ciesla2016}, the statistical error of the mean packing fraction for periodic boundary conditions is expected to behave as:
\begin{equation}
\sigma (\theta) = k \sqrt{\frac{\theta}{N_{tot}}}
\end{equation}
where $N_{tot}$ is the total number of particles in all packings used for the determination of $\theta$. The constant $k\approx 0.57$ for two-dimensional packings. To get similar precision of $\theta$, the number of independent packings generated for each packing size varied from $5\cdot 10^7$ for the smallest packing to $10^4$ for the largest one. This guaranteed that the absolute statistical error did not exceed $10^{-5}$ for periodic boundary conditions and was at a similar level for the other boundary conditions. To generate saturated random packings, the improved version of the RSA algorithm described in~\cite{zhang2013} was used. This method significantly reduces simulation time by excluding regions where there is no possibility of placing another particle. Therefore, the number of RSA trials that end without adding a particle to the packing is greatly reduced. The idea comes from earlier works concerning RSA on discrete lattices~\cite{nord1991} and deposition of oriented squares on a continuous surface~\cite{brosilow1991}. In~\cite{zhang2013}, the idea was improved by an increase in the precision used to describe the remaining areas where placing a particle is still possible. For spherically symmetric particles each such area will either be filled by a disk or disappear. When the last area vanishes, the packing is saturated. It is worth noting that this algorithm also works for packings of different dimensions.
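Inverting the error formula gives the total number of deposited particles needed to reach a target precision. This is a back-of-the-envelope sketch using the quoted $k\approx 0.57$ and $\theta\approx 0.547$:

```python
def particles_needed(target_sigma, theta=0.547, k=0.57):
    """Invert sigma(theta) = k * sqrt(theta / N_tot) for the total
    number of adsorbed particles N_tot over all generated packings."""
    return theta * (k / target_sigma) ** 2

# Total particle count required for a statistical error of 1e-5.
print(f"{particles_needed(1e-5):.2e}")  # on the order of 1e9 particles
```

The required number of independent packings then follows from dividing $N_{tot}$ by the expected particle count per packing, roughly $0.547\,S$.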
\section{\label{sec:results}Results}
Fig.\ \ref{fig:packings} presents saturated random packings using periodic (a), open (b), and wall (c) boundary conditions.
\begin{figure}[htb]
\centerline{%
\subfigure[]{\includegraphics[width=0.35\columnwidth]{packing_Sphere_40_periodic}}
\subfigure[]{\includegraphics[width=0.35\columnwidth]{packing_Sphere_40_open}}
\subfigure[]{\includegraphics[width=0.35\columnwidth]{packing_Sphere_40_wall.eps}}
}
\caption{Example finite saturated packing using periodic (a), open (b), and wall (c) boundary conditions. The gray square corresponds to the boundary of the packing. Packing sizes are $S=40$ for (a) and (b), and $S=55.546$ for (c).}
\label{fig:packings}
\end{figure}
Note that the disk configurations in packings (b) and (c) are the same. The only difference between them is the packing size. For (b) it is equal to $S=40$ while for (c) $S=55.546$. This difference is a result of the condition used to determine whether a particle belongs to the packing. In case (b) (open boundary conditions) the particle center has to be within the boundaries, while for (c) (wall boundary conditions) the entire particle has to fit within the given square. Thus, the difference between open and wall boundary conditions for the same packing corresponds to a difference of the packing side length equal to the particle diameter $d$.
\subsection{Periodic boundary conditions}
The mean packing fraction measured for the largest packings ($S>100$) with periodic boundary conditions is $\theta = 0.547067 \pm 0.000003$, which agrees with recent values ($0.547074\pm 0.000003$ \cite{zhang2013} and $0.547070$ \cite{ciesla2016}). The dependence of the mean packing fraction on packing size is shown in Fig.\ \ref{fig:q_s}.
\begin{figure}[htb]
\centerline{%
\includegraphics[width=0.6\columnwidth]{q_s}
}
\caption{Dependence of the mean packing fraction on packing size. Dots are data taken from simulations. Statistical errors are approximately $10^{-5}$ for all points and are much less than the size of the dots. The red dashed line corresponds to $\theta=0.547067$. The black solid line is to guide the eye. Inset shows the same data, but focuses on small packing.}
\label{fig:q_s}
\end{figure}
As expected, for very small packings, finite-size effects significantly affect the obtained results. Interestingly, this dependence is not monotonic. The mean packing fraction reaches its maximum around $S=14$. The side length of such a square is only $3.3$ times larger than the disk diameter. The next minimum is observed for $S=18$. With the growth of packing size the mean-packing-fraction oscillations dampen quickly, and for $S=40$ the obtained value of $\theta$ agrees within statistical error with the one measured for the largest packings. Note that $S=40$ corresponds to a square whose side length $L$ is only $5.6$ times larger than the disk diameter $d$.
The reason for the oscillations is presumably that the finite size of the periodic repeat cell allows for a more ordered system compared to an infinite one. Because the number of particles that can adsorb is discrete, this ordering evidently enhances the coverage for some sizes and depresses it for others. On the other hand, the density of particles at a given distance is given by the density autocorrelation function:
\begin{equation}
G(r) = \lim_{dr \to 0} \frac{ \left< N(r, r+dr) \right> }{\theta 2\pi r dr },
\end{equation}
where $\left< N(r, r+dr) \right>$ is the mean number of particles with centers at a distance between $r$ and $r+dr$ from a center of a given disk.
Thus, if the packing size corresponds to a maximum or a minimum of this function, the mean packing fraction should increase or decrease, respectively.
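The autocorrelation function above can be estimated from a set of disk centers by histogramming pair distances. A minimal sketch (periodic boundaries via the minimum-image convention; since the disks have unit area, the number density $n/L^2$ plays the role of $\theta$ in the normalization):

```python
import math
import random

def pair_correlation(points, L, dr, r_max):
    """Density autocorrelation G(r) for points on a periodic L x L square.
    counts[k] accumulates pairs with separation in [k*dr, (k+1)*dr)."""
    n_bins = int(r_max / dr)
    counts = [0] * n_bins
    n = len(points)
    for i in range(n):
        xi, yi = points[i]
        for j in range(i + 1, n):
            dx = abs(xi - points[j][0]); dx = min(dx, L - dx)
            dy = abs(yi - points[j][1]); dy = min(dy, L - dy)
            r = math.hypot(dx, dy)
            if r < r_max:
                counts[int(r / dr)] += 2  # each pair seen from both centers
    rho = n / (L * L)  # equals theta for unit-area disks
    return [counts[k] / (n * rho * 2 * math.pi * (k + 0.5) * dr * dr)
            for k in range(n_bins)]

# Sanity check on an uncorrelated (Poisson) configuration: G(r) ~ 1 everywhere.
rng = random.Random(1)
L = 20.0
pts = [(rng.uniform(0, L), rng.uniform(0, L)) for _ in range(400)]
print(pair_correlation(pts, L, dr=0.5, r_max=5.0))  # values scatter around 1
```

For a saturated RSA configuration, the same routine yields $G(r)=0$ below $r=d$ and damped oscillations above it, as in Fig.\ \ref{fig:qcorr_l}.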
\begin{figure}[htb]
\centerline{%
\includegraphics[width=0.6\columnwidth]{qcor_l}
}
\caption{Comparison between oscillations of the mean packing fraction (black dots and solid line) and density autocorrelation function (blue dashed line). The density autocorrelation function was determined using $100$ independent saturated random packing of a size $S=10^5$ with periodic boundary conditions.}
\label{fig:qcorr_l}
\end{figure}
Indeed, the correspondence is clearly visible in Fig.\ \ref{fig:qcorr_l}. It has been proved analytically that for one-dimensional packings the oscillations of the density autocorrelation function are damped super-exponentially \cite{bonnier1994}, which also explains why the systematic error originating from finite-size effects is negligible already for quite small systems.
A direct comparison between the systematic error introduced by finite-size effects and statistical error is shown in Fig.\ \ref{fig:dq_s}.
\begin{figure}[htb]
\centerline{%
\includegraphics[width=0.6\columnwidth]{dq_s}
}
\caption{Dependence of the difference between the mean packing fraction obtained from simulation and the asymptotic value of $0.547067$ on packing size (black dots). The red dashed line corresponds to the statistical error of each value.}
\label{fig:dq_s}
\end{figure}
The plot shows the difference between the best approximation of the mean packing fraction obtained for the largest packings and its value for each packing of a specific size. As the statistical error of these values is $10^{-5}$, it is clear that even for $S=30$ ($L \approx 4.85 \, d$) the mean packing fraction agrees with its true value within the statistical error.
\subsection{Open and wall boundary conditions}
The mean packing fraction dependence on packing size for open and wall boundary conditions is presented in Fig.\ \ref{fig:qfree_s}.
\begin{figure}[htb]
\centerline{%
\includegraphics[width=0.6\columnwidth]{qfree_s}
}
\caption{Dependence of the mean packing fraction on packing size. Black dots and blue squares correspond to free boundary conditions and wall boundary conditions, respectively. Statistical errors are approximately $10^{-5}$ for all points and are much less than the size of the dots. The red dashed line corresponds to $\theta=0.547067$. The blue and black solid lines are to guide the eye.}
\label{fig:qfree_s}
\end{figure}
As expected, the mean packing fraction is overestimated when using open boundaries and underestimated for wall boundaries, but even for the largest packings the difference between the obtained packing fractions and $\theta=0.547067$ is two orders of magnitude larger than the statistical error.
The typical approach is to estimate the true packing fraction by extrapolating results obtained for finite packings. In such a case, the dependence $\theta(1/L)$ can be studied in the limit of $1/L \to 0$. The plot presenting this dependence is shown in Fig.\ \ref{fig:qfree_scaling}.
\begin{figure}[htb]
\centerline{%
\includegraphics[width=0.6\columnwidth]{qfree_scaling}
}
\caption{Dependence of the mean packing fraction on inverse of packing side length $L$ for open and wall boundary conditions. Dots are data and solid lines are fits: $\theta = 0.54707 + 0.82232 \, L^{-1} + 0.23901 \, L^{-2}$ and $\theta = 0.54708 - 0.41285 \, L^{-1} + 0.012711 \, L^{-2}$ for open and wall boundaries, respectively. The red dashed line corresponds to $ \theta = 0.547067$.}
\label{fig:qfree_scaling}
\end{figure}
Although all packing fractions differ significantly from the true value, the limiting values obtained from quadratic fits for $L \ge 7$ are within the statistical error range for both open and wall boundaries. Because $\theta = N/L^2$, where $N$ is the number of disks in the packing, both these fits can be rewritten in the form:
\begin{equation}
N(L) =
\cases{
0.54707 \cdot (L + 0.7516)^2 - 0.070 & for open boundaries, \\
0.54708 \cdot (L - 0.3773)^2 - 0.065 & for wall boundaries. \\
}
\label{effective_packing}
\end{equation}
In both cases, we can define the effective packing size $(L_\mathrm{eff})^2$. It is $(L+0.7516)^2$ and $(L-0.3773)^2$ for open and wall boundaries, respectively. Because $0.7516 + 0.3773 \approx d$, the effective size of the packing is exactly the same in both cases. The negative constant correction, presumably connected with the influence of the packing corners, is also nearly the same. The knowledge of these corrections can help to estimate the mean packing fraction according to the following rule
\begin{equation}
\theta \approx \frac{N + 0.07}{(L_\mathrm{eff})^2}
\label{theta_eff}
\end{equation}
where $L_\mathrm{eff}$ is the effective size of the packing and depends on the type of boundary conditions.
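A small helper implementing this rule, assuming the fitted shifts above; the constant corner correction is taken as $0.07$, our rounding of the fitted constants $0.070$ and $0.065$:

```python
def theta_estimate(n_disks, L, boundary):
    """Estimate the infinite-system packing fraction from a finite packing,
    using the fitted effective side length and a constant corner correction.
    The shifts are the fitted values for open and wall boundaries."""
    shift = {"open": 0.7516, "wall": -0.3773}[boundary]
    return (n_disks + 0.07) / (L + shift) ** 2

# Consistency check against the open-boundary fit
# N(L) = 0.54707 (L + 0.7516)^2 - 0.070:
n = 0.54707 * (20 + 0.7516) ** 2 - 0.070
print(theta_estimate(n, 20, "open"))  # ~0.54707
```

By construction the helper recovers the fitted asymptotic packing fraction when fed a particle count generated by the corresponding fit.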
The dependence of estimation error on packing sizes for open and wall boundaries are presented in Fig.\ \ref{fig:dqfree_s}.
\begin{figure}[htb]
\centerline{%
\includegraphics[width=0.6\columnwidth]{dqfree_s}
}
\caption{Dependence of the difference between the mean packing fraction obtained from simulation and the value of $0.547067$ on packing size. Black dots correspond to free boundary conditions and blue squares to wall boundary conditions. Solid lines are power fits: $\Delta\theta = 0.907 \, S^{-0.511}$ and $\Delta\theta = 0.408 \, S^{-0.499}$. The green points are the packing fraction calculated using Eq.\ \ref{theta_eff}. The red dashed line corresponds to the statistical error of each value.}
\label{fig:dqfree_s}
\end{figure}
It shows that finite-size effects decrease with packing size according to a power law with the exponent $-1/2$. The value of the exponent is a direct consequence of the fact that the area near boundaries scales with packing size as $S^{1/2}$. Thus, its relative influence on the whole packing decreases as $S^{1/2} / S = S^{-1/2}$.
According to the power law, the systematic error connected with boundary conditions will reach the level of the statistical error for $S \approx 10^9$ or larger. On the other hand, packing fractions estimated using Eq.\ \ref{theta_eff} are almost as good as the ones obtained using periodic boundary conditions. Here, the level of the statistical error is reached near $S=80$, which corresponds to $L \approx 8d$. However, it should be noted that here we neglected the finite precision of $L_\mathrm{eff}$; thus, for an accurate estimation of the packing fraction from a packing with open or wall boundaries, precise values of the parameters in Eq.\ (\ref{effective_packing}) are needed.
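The crossover size can be read off directly from the power fits quoted in the figure caption. A one-line inversion, consistent with the $S \approx 10^9$ estimate above:

```python
def size_for_error(a, b, target=1e-5):
    """Packing size S at which a fitted finite-size error a * S**b
    drops to the target level (here the statistical error 1e-5)."""
    return (target / a) ** (1 / b)

# Fitted parameters from the open- and wall-boundary power fits.
print(f"open: S ~ {size_for_error(0.907, -0.511):.1e}")  # ~5.0e9
print(f"wall: S ~ {size_for_error(0.408, -0.499):.1e}")  # ~1.7e9
```

Both values land at a few times $10^9$, confirming that bare open or wall boundaries require enormous packings to match the precision achieved here.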
\section{\label{sec:summary}Summary}
A comparison of periodic, open, and wall boundary conditions shows that the best estimation of the mean packing fraction can be achieved using periodic boundary conditions. In this case, even quite small packings give reliable results. For a square packing with a side length $5$ times larger than the particle diameter, the systematic error originating from finite-size effects is not larger than the statistical error, which in this study was equal to $10^{-5}$. For smaller packings the mean packing fraction oscillates. It seems that the character of these oscillations is similar to the oscillations of the density autocorrelation function. This can help in designing numerical experiments for packings of different shapes as well as for packings in higher dimensions.
Open and wall boundary conditions give results which seem to be much worse, but there is a systematic way to account for finite-size effects in calculations by using the effective packing size and a constant correction term. It leads to results which are almost as good as the ones obtained using periodic boundary conditions. Here, the side length of a square packing should be approximately $8$ times larger than the particle diameter to reduce the systematic error below $10^{-5}$.
Finally, it is worth noting that in a typical adsorption experiment the packing fraction is determined with a precision of only a few percent, which is a much lower accuracy than that of the simulations presented here. Therefore, if in numerical modeling one is interested only in predicting the experimental number of adsorbed particles, RSA on quite small packings will give the proper value.
\section*{Acknowledgements}
This work was supported by Grant No. 2016/23/B/ST3/01145 of the National Science Centre, Poland.
\section*{References}
\bibliographystyle{iopart-num}
\section{Introduction}
The quantum anomalous Hall (QAH) insulator, also called the Chern insulator, has aroused ongoing interest in condensed matter physics, since dissipationless chiral currents can flow in it at zero magnetic field~\cite{D.J.Thouless, F.D.M.Haldane, X.L.Qi}. It was first observed in magnetic topological insulator (TI) thin films~\cite{C.Z.Chang2013}, where it arises from an intricate interplay between the ferromagnetism due to magnetic doping and the intrinsic spin-orbit coupling~\cite{R.Yu}. The magnetic doping breaks the time-reversal symmetry (TRS) of the system and can open a small Dirac mass gap in the topological surface states~\cite{Y.L.Chen}. In these systems, the unavoidable inhomogeneity of the magnetic doping leads to complex magnetic orders~\cite{J.Zhang, X.Kou2015}. As a consequence, the QAH effect can only be observed at extremely low temperatures, about several tens of millikelvin, one to two orders of magnitude below the Curie temperature of the magnetic dopants~\cite{C.Z.Chang2013, X.Kou2015, J.G.Checkelsky, Y.Feng, X.Kou2014, C.Z.Chang2015}. When the crystalline structure and the magnetic order are self-organized into a well-ordered topological superlattice, the QAH effect can be observed at much higher temperatures~\cite{Rienks, H.Deng}.
Besides the $C=1$ Chern insulator phase, a recent experiment by Zhao \textit{et al.} demonstrated the existence of high Chern number ($C>1$) phases in TI multilayer structures~\cite{Y.F.Zhao}. The authors explained the high-$C$ behavior through an interface Dirac state mechanism, in which a nontrivial interface state appears at the interface between the magnetic-doped and undoped TI layers and, when occupied, contributes $\frac{e^2}{2h}$ to the total Hall conductance $\sigma_{xy}$. In our previous work~\cite{Y.X.Wang2021}, we instead attributed the inverted bands in the high-$C$ phase to the two-dimensional (2D) subbands, which may not appear at the interface but are split by the thickness confinement of the total stack. In both theoretical analyses~\cite{Y.F.Zhao, Y.X.Wang2021}, the bulk $\boldsymbol k\cdot\boldsymbol p$ model of three-dimensional (3D) TIs~\cite{H.Zhang, C.X.Liu} was used to describe the TI multilayer structures, and the plane-wave expansion method was adopted to satisfy the open boundary conditions at both ends. The plane-wave expansion method correctly describes the top and bottom surface states, so it works well for a TI thin film~\cite{C.X.Liu,J.Wang,B.Zhou}. The TI multilayers, however, were fabricated by molecular beam epitaxy (MBE) in the experiment, so linear Dirac cones should be present on each surface of the TI layers. This fact may not be effectively captured by the plane-wave expansion method, as the electronic states were found to penetrate deep into the TI layer interior~\cite{Y.F.Zhao, Y.X.Wang2021}.
On the other hand, as the bulk states of TIs are gapped, a highly simplified Dirac cone model was initially proposed to explain the QAH behavior in magnetic TI thin films~\cite{R.Yu}. The model was later extended to describe magnetic-doped TI and ordinary insulator multilayers, which can host the 3D Weyl semimetal (WSM) phase~\cite{A.A.Burkov, A.A.Zyuzin}. In the Dirac cone model, only the Dirac cone degrees of freedom on each TI surface are retained, with the intralayer and interlayer Dirac cone hoppings included, while the bulk electronic dynamics are completely ignored. This simplification was demonstrated to be valid by comparing the model parameters with DFT band calculations~\cite{C.Lei}. Furthermore, the Dirac cone model was used to explore the dependence of the QAH behavior on the film thickness, the magnetic configuration, as well as the stacking sequence of the TI layers~\cite{C.Lei}.
Motivated by this progress, here we investigate the high-$C$ behavior in magnetic-doped TI multilayer structures from the point of view of the Dirac cone model. We focus on the Chern number phase diagram, where the Chern number is determined by tracking the evolution of the phase boundaries with the parameters~\cite{Y.X.Wang2021, J.Wang}. For comparison, we also consider another two TI multilayer structures as well as 3D TI superlattice structures. Our main findings are as follows: (i) Within the Dirac cone model, the high-$C$ behavior is attributed to the band inversion of the renormalized Dirac cones, driven by the exchange splitting, along with which the spin polarization at the $\Gamma$ point increases. In the highest-$C$ phase, the occupied states at the $\Gamma$ point are fully downspin polarized. (ii) To explain the observation that the Chern number decreases with decreasing thickness of the middle magnetic-doped layer~\cite{Y.F.Zhao}, we assume that the corresponding intralayer Dirac cone hopping changes from negative to positive. Based on this assumption, it is interesting to find that at certain magnetic dopings, the Chern number can instead increase when the layer thickness decreases. (iii) For the other two TI multilayer structures, the band inversion cannot be achieved for all negative-mass Dirac cones even when the magnetic exchange splittings are very strong. (iv) For the 3D TI superlattice structures, besides the high-$C$ phases, WSM phases appear and span broad parameter regions. Our work provides more insights into the high-$C$ behavior realized in TI multilayer structures, which may pave the way for future topological electronic devices.
\section{Model and method}
\begin{figure}
\includegraphics[width=6.2cm]{Fig1.pdf}
\caption{(Color online) Schematics of the TI multilayer structure, including the alternating magnetic-doped ($N_1=3$) and undoped ($N_2=2$) TI layers. The magnetization in the magnetic-doped TI layer is along the $z$ direction and the interlayer distance is denoted by $d$. The Dirac cone hoppings $\Delta_S$ and $\Delta_D$, and the exchange splittings $J_S$ and $J_D$ are indicated by the arrows. Without hoppings and exchange splittings, the Dirac cones are localized on the top and the bottom surface of each TI layer.}
\label{Fig1}
\end{figure}
The experimental configuration of the TI multilayers~\cite{Y.F.Zhao} is plotted in Fig.~\ref{Fig1}. The structure is stacked along the $z$ direction and includes alternating magnetic-doped and undoped TI layers, with the layer numbers in the plot being $N_1=3$ and $N_2=2$, respectively, so that each undoped TI layer is sandwiched between doped TI layers.
We use the Dirac cone model to describe the dynamics of the TI multilayer structures. In the model, the linear Dirac cones are assumed to be present on both the top and bottom surfaces of the magnetic-doped TI layer as well as the undoped TI layer. The Hamiltonian is written as $(\hbar=1)$~\cite{A.A.Burkov, A.A.Zyuzin, C.Lei}
\begin{align}
H(\boldsymbol k)&=\sum_i\Big[
(-1)^i v (-k_x\sigma_y+k_y\sigma_x)
+m_i \sigma_z \Big]
c_{\boldsymbol k i}^\dagger c_{\boldsymbol k i}
\nonumber\\
&+\sum_{i\in\text{odd}, j}(\Delta_S \delta_{i,j-1}
+\Delta_D \delta_{i+1,j} + \Delta_D \delta_{i-1,j})
c_{\boldsymbol k i}^\dagger c_{\boldsymbol k j}.
\label{Hk}
\end{align}
Here $c_{\boldsymbol k i}$ $(c_{\boldsymbol k i}^\dagger)$ annihilates (creates) a surface Dirac cone state with wavevector $\boldsymbol k=(k_x,k_y)$ and surface index $i$. For $i$, we use odd/even numbers to represent the top/bottom surface of each layer, respectively. $v$ is the velocity of the Dirac cone, $\sigma$ denotes the Pauli matrices acting on the spin space, $\Delta_S$ is the intralayer Dirac cone hopping, and $\Delta_D$ is the interlayer Dirac cone hopping across the van der Waals gap. When the mass term $m_i$ is absent, the system possesses TRS, ${\cal T}H(-\boldsymbol k){\cal T}^{-1}=H(\boldsymbol k)$, with the time-reversal operator ${\cal T}=i\sigma_y K$, where $K$ is the complex conjugation operator. The mass term $m_i$ of each Dirac cone, induced by the exchange interactions with the nearest local magnetic moments, breaks the TRS of the system,
\begin{align}
m_i=\sum_\alpha J_{i\alpha}M_\alpha,
\end{align}
where $\alpha$ is the layer index. We assume that the magnetization is along the $z$ direction, so that $M_\alpha=1$ in the magnetic-doped layers and $M_\alpha=0$ in the undoped layers. As shown in Fig.~\ref{Fig1}, each surface Dirac cone has one nearby magnetic-doped TI layer, which may be the same layer, with exchange splitting $J_{i\alpha}=J_S$, or the adjacent layer, with exchange splitting $J_{i\alpha}=J_D$. The Dirac cones localized on different surfaces are coupled by the Dirac cone hoppings and exchange splittings and thus get renormalized. Note that hoppings beyond the nearest-neighbor Dirac cones are neglected, because they are small~\cite{C.Lei} and do not change the main conclusions of this paper. In our calculations, we take $-\Delta_S=1$ as the unit of energy and define the ratio $\delta=\frac{J_D}{J_S}$.
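As a concrete illustration of Eq.~(\ref{Hk}), a minimal numerical sketch of the model Hamiltonian can be assembled as below. The function name and basis ordering are ours; the hopping pattern follows the description above (intralayer hopping $\Delta_S$ between the two surfaces of each layer, interlayer hopping $\Delta_D$ across each van der Waals gap, plus Hermitian conjugates), and the mass of each surface cone collects $J_S$ from its own layer and $J_D$ from the layer across the gap.

```python
import numpy as np

# Pauli matrices acting on the spin space
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)

def H_multilayer(kx, ky, doped, J_S, delta=0.6, v=1.0, D_S=-1.0, D_D=1.0):
    """Dirac-cone-model Hamiltonian; `doped` lists which TI layers are magnetic."""
    J_D = delta * J_S
    n = 2 * len(doped)                   # two surface Dirac cones per layer
    H = np.zeros((2 * n, 2 * n), complex)
    for i in range(n):                   # even i = top, odd i = bottom surface
        layer = i // 2
        sign = -1 if i % 2 == 0 else 1   # (-1)^i for a 1-based surface index
        m = J_S * doped[layer]           # exchange splitting from the same layer
        adj = layer - 1 if i % 2 == 0 else layer + 1  # layer across the vdW gap
        if 0 <= adj < len(doped):
            m += J_D * doped[adj]        # exchange splitting from the adjacent layer
        blk = sign * v * (-kx * sy + ky * sx) + m * sz
        H[2*i:2*i+2, 2*i:2*i+2] = blk
    for i in range(n - 1):               # nearest-neighbor Dirac-cone hoppings
        t = D_S if i % 2 == 0 else D_D   # intralayer, then interlayer, alternating
        H[2*i:2*i+2, 2*i+2:2*i+4] = t * np.eye(2)
        H[2*i+2:2*i+4, 2*i:2*i+2] = t * np.eye(2)
    return H
```

For a single doped layer (`doped=[True]`) this construction reduces to the thin-film Hamiltonian of the next section, with the gap at the $\Gamma$ point equal to $2|J_S-|\Delta_S||$.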
To calculate the Chern number in the TI multilayer structures, we follow the previous works~\cite{Y.X.Wang2021, J.Wang} and determine the Chern number by tracking the evolution of the phase boundaries with the parameters, since a change of the Chern number is closely connected to a gap closing. To anchor the Chern number, two limiting cases are considered: (i) $J_S=0$, the undoped case, where the TRS is preserved and the Chern number must vanish, with the renormalized Dirac cones always appearing in pairs with opposite masses; (ii) $\Delta_D=0$, the isolated-layer limit, where the mass $M_i$ of each renormalized Dirac cone can be calculated analytically and the Chern number then follows. As the Dirac cones on all surfaces have positive chirality $\chi_i=1$ and each Dirac cone contributes a component $C_i$ to the Chern number, we have
\begin{align}
C=\sum_i C_i, \quad \text{with} \quad
C_i=\frac{1}{2}\text{sgn}(M_i).
\end{align}
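In code, this counting rule is a one-liner; the helper below (our naming) sketches how the signs of the renormalized masses combine into $C$ when every cone has chirality $+1$.

```python
import numpy as np

def chern_from_masses(masses):
    # C = sum_i sgn(M_i) / 2, valid when every Dirac cone has chirality +1
    return 0.5 * float(np.sum(np.sign(masses)))

# Two cones with opposite masses cancel; two positive masses give C = 1.
print(chern_from_masses([0.3, -0.3]), chern_from_masses([0.3, 0.7]))
```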
Compared with the common Chern number calculations using the Kubo formula~\cite{D.J.Thouless} or Fukui's algorithm~\cite{T.Fukui}, where an exact diagonalization is needed to obtain the eigenenergies and eigenstates for each wavevector in the Brillouin zone (BZ), the above method requires much less computational effort and time, and is expected to extend to antiferromagnetic configurations as well as other topological systems.
\begin{figure}
\includegraphics[width=8.8cm]{Fig2.pdf}
\caption{(Color online) The energy bands $\varepsilon$ versus $v k_x$ at $k_y=0$, and the wavefunction distributions at the $\Gamma$ point on each Dirac cone in the magnetic-doped TI thin film. We choose $J_S=0.4$ in (a)$-$(b), and $J_S=1.6$ in (c)$-$(d). In the wavefunction distributions, the blue and orange bars denote the upspin and downspin contributions, respectively, and the bar heights are proportional to the weights of the distributions. For clarity, the neighboring distributions are shifted vertically by 0.8.}
\label{Fig2}
\end{figure}
\section{Magnetic-doped TI thin film}
First we consider the magnetic-doped TI thin film. Within the Dirac cone model, a pair of Dirac cones with the same chirality are localized on the top and bottom TI surfaces, with an intralayer hopping between them. In the four-orbital basis $\begin{pmatrix}
|1\uparrow\rangle& |1\downarrow\rangle& |2\uparrow\rangle& |2\downarrow\rangle
\end{pmatrix}^T$, where $1/2$ denotes the top/bottom surface state and $\uparrow/\downarrow$ the upspin/downspin state, respectively, the Hamiltonian reads~\cite{R.Yu}
\begin{align}
H(\boldsymbol k)=\begin{pmatrix}
J_S& -ivk_-& \Delta_S& 0
\\
ivk_+& -J_S& 0& \Delta_S
\\
\Delta_S& 0& J_S& ivk_-
\\
0& \Delta_S& -ivk_+& -J_S
\end{pmatrix},
\label{mTI}
\end{align}
where $k_\pm=k_x\pm ik_y$.
To see the Dirac cone renormalization, we use the unitary matrix~\cite{R.Yu, Y.X.Wang2018}
\begin{align}
U=\frac{1}{\sqrt2}\begin{pmatrix}
1& 0& 1& 0\\
0& 1& 0& -1\\
1& 0& -1& 0\\
0& -1& 0& -1
\end{pmatrix}.
\end{align}
Then the Hamiltonian is transformed as
\begin{align}
&H_T(\boldsymbol k)=UH(\boldsymbol k)U^{-1}
\nonumber\\
&=\begin{pmatrix}
J_S-|\Delta_S|& -ivk_-& 0& 0
\\
ivk_+& -J_S+|\Delta_S|& 0& 0
\\
0& 0& J_S+|\Delta_S|& ivk_-
\\
0& 0& -ivk_+& -J_S-|\Delta_S|
\end{pmatrix},
\end{align}
which is block diagonal. It shows that the chirality of the renormalized Dirac cones in the upper/lower block remains the same as in Eq.~(\ref{mTI}), while the Dirac cone mass in the upper/lower block is reduced/increased by the intralayer Dirac cone hopping. The Chern number of the system is then obtained as
\begin{align}
C=\frac{1}{2}\text{sgn}\big(J_S-|\Delta_S|\big)+\frac{1}{2}\text{sgn}\big(J_S+|\Delta_S|\big).
\label{s-layer}
\end{align}
We can see that (i) when $J_S<|\Delta_S|$, the Dirac cone mass is negative in the upper block and positive in the lower block, and thus the Chern number is $C=0$; (ii) when $J_S>|\Delta_S|$, the Dirac cone mass in the upper block becomes positive, so the band inversion occurs and $C=1$. Thus in a magnetic-doped TI thin film, the nontrivial $C=1$ phase can be observed only when the magnetic doping reaches a certain ratio, which is consistent with the experimental observations in Cr-doped or V-doped (Bi,Sb)$_2$Te$_3$ thin films~\cite{C.Z.Chang2013, X.Kou2014, Y.Feng, J.G.Checkelsky, C.Z.Chang2015}. Note that the Dirac cone model cannot support a high-$C$ phase in the TI thin film, which, although predicted in theory~\cite{H.Jiang, J.Wang, Y.X.Wang2021}, has to our knowledge not yet been reported in experiment.
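The block diagonalization above is straightforward to verify numerically. The sketch below (our construction, with $v=1$ and $\Delta_S=-1$) builds $H(\boldsymbol k)$ of Eq.~(\ref{mTI}), applies $U$, and checks that the off-diagonal $2\times2$ blocks vanish while the masses at the $\Gamma$ point become $J_S\mp|\Delta_S|$.

```python
import numpy as np

def H_film(kx, ky, J_S, D_S=-1.0, v=1.0):
    # Thin-film Hamiltonian in the (1up, 1dn, 2up, 2dn) basis
    km, kp = kx - 1j * ky, kx + 1j * ky
    return np.array([[J_S,      -1j*v*km,  D_S,       0.0],
                     [1j*v*kp,  -J_S,      0.0,       D_S],
                     [D_S,       0.0,      J_S,       1j*v*km],
                     [0.0,       D_S,     -1j*v*kp,  -J_S]])

# The unitary transformation mixing the two surface Dirac cones
U = np.array([[1, 0,  1,  0],
              [0, 1,  0, -1],
              [1, 0, -1,  0],
              [0, -1, 0, -1]]) / np.sqrt(2)

HT = U @ H_film(0.3, 0.2, 0.4) @ U.conj().T
# Off-diagonal 2x2 blocks vanish; the diagonal carries the shifted masses.
print(np.round(HT, 10))
```

At $J_S=0.4$ the upper-block mass is $J_S-|\Delta_S|=-0.6$ and the lower-block mass is $J_S+|\Delta_S|=1.4$, reproducing Eq.~(\ref{s-layer}).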
\begin{figure*}
\includegraphics[width=18.4cm]{Fig3.pdf}
\caption{(Color online) Phase diagrams of the TI multilayer structures in the parameter space $(J_S,\Delta_D)$, with the Chern number $C$ and the spin polarization $\eta$ at the $\Gamma$ point being labeled as $(C,\eta)$. The contour scale represents the magnitude of log$_{10}\Delta$, where $\Delta$ is the energy gap of the lowest bands, and the bright blue lines denote that the gap is closed. We choose the parameter $\delta=0.6$, the layer number $N_2=1-5$ in (a) to (e) and $N_1=N_2+1$. }
\label{Fig3}
\end{figure*}
In Fig.~\ref{Fig2}, we plot the energy bands and the wavefunction distributions at the $\Gamma$ point in the $C=0$ and $C=1$ phases. For the Hamiltonian including only the linear $k$ terms, the normal and inverted bands are identical [Figs.~\ref{Fig2}(a) and (c)], so there is no signature of the band inversion in the band structure itself. If a second-order $k$ term $(\propto k^2)$ is included, the band inversion shows up as an additional energy extremum at a nonzero wave vector, whereas the normal band retains a single extremum at the $\Gamma$ point~\cite{R.Yu}. On the other hand, the band inversion is manifested in the wavefunction distributions at the $\Gamma$ point. In the $C=0$ phase, the orbital contribution to the first valence band (VB) is $\frac{1}{\sqrt2}(-|1\uparrow\rangle-|2\uparrow\rangle)$ and that to the first conduction band (CB) is $\frac{1}{\sqrt2}(-|1\downarrow\rangle+|2\downarrow\rangle)$ [Fig.~\ref{Fig2}(b)], while in the $C=1$ phase, the contributions to the first VB and first CB are interchanged [Fig.~\ref{Fig2}(d)], meaning that all VBs at the $\Gamma$ point are downspin polarized.
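The spin-polarization signature of the band inversion can be checked directly at the $\Gamma$ point, where Eq.~(\ref{mTI}) reduces to a real $4\times4$ matrix. The short sketch below (our construction, $\Delta_S=-1$) reproduces $\eta_\Gamma=0$ in the $C=0$ phase at $J_S=0.4$ and $\eta_\Gamma=1$ in the $C=1$ phase at $J_S=1.6$.

```python
import numpy as np

def eta_gamma(J_S, D_S=-1.0):
    # 4x4 thin-film Hamiltonian at the Gamma point, basis (1up, 1dn, 2up, 2dn)
    H = np.array([[J_S, 0.0, D_S, 0.0],
                  [0.0, -J_S, 0.0, D_S],
                  [D_S, 0.0, J_S, 0.0],
                  [0.0, D_S, 0.0, -J_S]], complex)
    e, psi = np.linalg.eigh(H)
    occ = psi[:, e < 0]                   # occupied (valence) states
    up = np.sum(np.abs(occ[0::2, :])**2)  # up-spin weight (rows 0 and 2)
    dn = np.sum(np.abs(occ[1::2, :])**2)  # down-spin weight (rows 1 and 3)
    return (dn - up) / (dn + up)

print(eta_gamma(0.4), eta_gamma(1.6))
```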
\section{TI multilayer structures}
Next, for the TI multilayer structures, the Chern number phase diagrams are plotted in the parameter space $(J_S,\Delta_D)$ in Fig.~\ref{Fig3}, with the layer number $N_2$ increasing from 1 to 5 in (a) to (e) and $N_1=N_2+1$. Experimentally, tuning the parameters $J_S$ and $\Delta_D$ is quite feasible, as the exchange splitting $J_S$ is expected to increase with the magnetic doping or with decreasing temperature, and the interlayer hopping $\Delta_D$ is expected to increase when the van der Waals gap is narrowed by external pressure~\cite{C.Lei, W.T.Guo}. In the phase diagrams, the contour scale represents the magnitude of log$_{10}\Delta$, where $\Delta$ is the energy gap of the lowest bands, and the bright blue lines denote that the gap is closed.
Fig.~\ref{Fig3} shows that the Chern number $C$ can increase from zero up to its highest value $C=N_1+N_2$. In the undoped case $J_S=0$, the Chern number always vanishes due to the presence of TRS. On the other hand, in the isolated-layer limit $\Delta_D=0$, similar to Eq.~(\ref{s-layer}), the Chern number of the TI multilayer structures is obtained as
\begin{align}
C=&\frac{N_1}{2}\Big[\text{sgn}\big(J_S-|\Delta_S|\big)+\text{sgn}\big(J_S+|\Delta_S|\big)\Big]
\nonumber\\
&+\frac{N_2}{2}\Big[\text{sgn}\big(J_D-|\Delta_S|\big)+\text{sgn}\big(J_D+|\Delta_S|\big)\Big].
\label{m-layer}
\end{align}
We can see that when the exchange splitting $J_S$ increases, the negative mass $J_S-|\Delta_S|$ of the Dirac cones in the magnetic-doped TI layers becomes positive at the critical point $J_S=|\Delta_S|$, which is $N_1$-fold degenerate, whereas the negative mass $J_D-|\Delta_S|$ of the Dirac cones in the undoped TI layers becomes positive at the critical point $J_D=|\Delta_S|$, which is $N_2$-fold degenerate. The Chern number evolution with $J_S$ is thus as follows: (i) when $J_S<|\Delta_S|$, the Chern number vanishes, $C=0$; (ii) when $|\Delta_S|<J_S<\frac{|\Delta_S|}{\delta}$, $C=N_1$; (iii) when $J_S>\frac{|\Delta_S|}{\delta}$, the Chern number increases to its highest value $C=N_1+N_2$, where the band inversion has occurred for all negative-mass Dirac cones.
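In the isolated-layer limit, Eq.~(\ref{m-layer}) can be evaluated directly; the sketch below (our code) reproduces the three regimes for the experiment-like parameters $N_1=3$, $N_2=2$, $\delta=0.6$.

```python
import numpy as np

def chern_isolated(J_S, delta=0.6, N1=3, N2=2, D_S=-1.0):
    # Eq. (m-layer): Chern number of the multilayer at Delta_D = 0
    J_D = delta * J_S
    a = abs(D_S)
    return (N1 * (np.sign(J_S - a) + np.sign(J_S + a))
            + N2 * (np.sign(J_D - a) + np.sign(J_D + a))) / 2

# The three regimes C = 0, N1, N1 + N2 as J_S crosses |D_S| and |D_S|/delta.
print(chern_isolated(0.5), chern_isolated(1.2), chern_isolated(2.0))
```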
When the interlayer hopping $\Delta_D$ is nonvanishing, the Dirac cones in different TI layers get coupled and further renormalized, which breaks the Dirac cone degeneracy. The renormalized Dirac cones may not be localized in one TI layer but extend over all layers [see Figs.~\ref{Fig4}(a) and (b)]. As a result, with increasing $J_S$, the band inversions occur successively for the renormalized Dirac cones. Each time a phase boundary is crossed, the band inversion occurs for one negative-mass Dirac cone and the Chern number changes by one. When $\Delta_D$ increases, all phase boundaries originating from $J_S=\frac{|\Delta_S|}{\delta}$ move to larger $J_S$. For the phase boundaries originating from $J_S=|\Delta_S|$, only one moves toward $J_S=0$, meaning that the region spanned by the $C=0$ phase shrinks, so that a minor magnetic doping can drive the transition from the $C=0$ phase to $C=1$, while the remaining $N_1-1$ boundaries all move to larger $J_S$.
In the phase diagrams, if the magnetic doping is small, the value of the critical point $J_S=\frac{|\Delta_S|}{\delta}$ at $\Delta_D=0$ becomes larger; if instead a magnetic-ordered septuple layer is formed, such as in MnBi$_2$Te$_4$~\cite{M.M.Otrokov}, where the exchange splitting is greatly strengthened, $\delta$ is expected to be close to 1 and thus the critical point is close to $J_S=|\Delta_S|$. In the experiment~\cite{Y.F.Zhao}, the thicknesses of the magnetic-doped and undoped layers are $d=3$ nm and $d=4$ nm, respectively. The different TI layer thicknesses and properties can lead to unequal $\Delta_S$ in the different layers, which, however, does not change the basic physics or the structure of the phase diagrams (see Appendix).
Next, we study the wavefunction distribution and the spin polarization in the TI multilayer structures. The spin polarization $\eta_{\boldsymbol k}$ is defined as
\begin{align}
\eta_{\boldsymbol k}=
\frac{\sum_{n\in\text{occ}}\big(|\psi_{n,\downarrow}(\boldsymbol k)|^2
-|\psi_{n,\uparrow}(\boldsymbol k)|^2\big)}
{\sum_{n\in\text{occ}}\big(|\psi_{n,\downarrow}(\boldsymbol k)|^2
+|\psi_{n,\uparrow}(\boldsymbol k)|^2\big)},
\end{align}
where $\psi_{n,\uparrow/\downarrow}$ is the upspin/downspin component of the wavefunction with band index $n$, and the sum runs over all occupied states.
Because the band inversion is accompanied by a change of the orbital contribution at the $\Gamma$ point from upspin to downspin, $\eta_{\Gamma}$ is directly related to the Chern number,
\begin{align}
\eta_{\Gamma}=\frac{C}{N_1+N_2}.
\label{polarization}
\end{align}
The above relation is also confirmed by the numerical results in Fig.~\ref{Fig3}. For example, in Figs.~\ref{Fig4}(a) and (b), with the parameters chosen at the points labeled by the crosses in Fig.~\ref{Fig3}(b), we plot the wavefunction distributions of all VBs at the $\Gamma$ point on each Dirac cone. When the Chern number changes from $C=1$ to $C=2$, we observe that, besides some band rearrangements, the orbital contribution to the first VB changes from upspin to downspin, and thus the spin polarization $\eta_{\Gamma}$ increases by $\frac{1}{N_1+N_2}$. In the highest Chern number phase $C=N_1+N_2$, the spin polarization reaches its saturation value, $\eta_{\Gamma}=1$.
\begin{figure}
\includegraphics[width=8.8cm]{Fig4.pdf}
\caption{(Color online) (a)-(b) The wavefunction distributions of all VBs at the $\Gamma$ point on each Dirac cone in the TI multilayer structures. The blue and orange bars denote the upspin and downspin contributions, respectively, and the bar heights are proportional to the weights of the distributions. For clarity, the neighboring distributions are shifted vertically by 0.3. (c) The spin polarization versus $vk_x$, where $J_S$ increases from 0.12 to 3.6 and the corresponding Chern number is labeled on the right. We choose $N_2=2$ and $N_1=N_2+1$, $\Delta_D=1$, $J_S=0.84$ in (a), $J_S=1.08$ in (b), and $k_y=0$ in (c).}
\label{Fig4}
\end{figure}
We also investigate the spin polarization at nonzero wave vectors. In Fig.~\ref{Fig4}(c), setting $k_y=0$, $\eta$ is plotted as a function of $vk_x$, where the different lines correspond to $J_S$ increasing from $0.12$ to $3.6$. We can see that although $\eta_{\boldsymbol k}$ may increase with $|k_x|$ around the $\Gamma$ point, it decreases when $|k_x|$ is large enough. On the other hand, $\eta_{\boldsymbol k}$ always increases with $J_S$. So at asymptotically strong Zeeman splitting, the system may be driven into the ferromagnetic Chern insulator phase~\cite{W.Wang}. In experiment, the spin polarization can be detected by spin-polarized angle-resolved photoemission spectroscopy~\cite{J.A.Sobota}.
\begin{figure}
\includegraphics[width=9cm]{Fig5.pdf}
\caption{(Color online) Phase diagrams of the TI multilayer structures in the parameter space of $(J_S,\Delta_{Sm})$, with the Chern number $C$ being labeled. The contour scale represents the magnitude of log$_{10}\Delta$, where $\Delta$ is the energy gap of the lowest bands. The bright blue lines denote that the gap is closed. We choose the parameter $\delta=0.6$, the layer number $N_2=2$, $N_1=N_2+1$, and $\Delta_D=0.5$ in (a) and $\Delta_D=1.5$ in (b).}
\label{Fig5}
\end{figure}
The recent experiment on the TI multilayers [(Bi,Sb)$_{2-x}$Cr$_x$Te$_3-$(Bi,Sb)$_2$Te$_3$]$_{N_2}-$(Bi,Sb)$_{2-x}$Cr$_x$Te$_3$ reported the observation of the high-$C$ phase with $C=N_2$ at the magnetic-doping ratio $x=0.24$~\cite{Y.F.Zhao}. According to the above analysis, this suggests that at such a ratio, the band inversion occurs only in $N_2$ negative-mass Dirac cones. When $\Delta_D=0$, all these Dirac cones are localized in the magnetic-doped TI layers. In that work, the authors also reported that for the TI multilayers with $N_1=3$ and $N_2=2$, the Chern number changes from $C=2$ to $C=1$ when the middle magnetic-doped layer thickness is decreased from $d=4$ nm to zero~\cite{Y.F.Zhao}. They suggested that when the magnetic-doped layer thickness decreases, a pair of nontrivial interface states disappears, so the Chern number is reduced by one. Here, in the Dirac cone model, this observation can instead be explained by assuming that the intralayer hopping within the middle magnetic-doped TI layer, $\Delta_{Sm}$, changes from negative to positive when the layer thickness decreases. This means that when the layer is thick enough, the Dirac cones attract each other so that $\Delta_{Sm}$ is negative, while when the layer thickness becomes small, the Dirac cones become less attractive and even repel each other once a critical thickness is crossed, so $\Delta_{Sm}$ becomes positive. In our calculations, the chosen model parameters $\Delta_S=-1$ and $\Delta_D>0$ are consistent with these assumptions. Moreover, in Ref.~\cite{C.Lei}, by comparing the model parameters with DFT band calculations, the authors obtained opposite signs of $\Delta_S$ and $\Delta_D$, which also supports our assumptions.
To see the effect of $\Delta_{Sm}$ on the Chern number modulation, in Fig.~\ref{Fig5} the phase diagrams of the TI multilayer structures are plotted in the parameter space $(J_S,\Delta_{Sm})$, with $\Delta_D=0.5$ in (a) and $\Delta_D=1.5$ in (b). In Fig.~\ref{Fig5}(a), there is no $C=2$ to $C=1$ phase transition along the vertical direction. In Fig.~\ref{Fig5}(b), however, the $C=2$ to $C=1$ phase transition is captured along the left solid arrow, which agrees with the experiment~\cite{Y.F.Zhao}. More interestingly, in Fig.~\ref{Fig5}(b), when the magnetic doping increases and the exchange splitting lies within the range $1.51<J_S<1.84$, there can exist a $C=2$ to $C=3$ phase transition, \textit{e.g.}, along the right dashed arrow. Therefore, as long as the layer thickness is nonvanishing and the Dirac cones are still present on the surfaces, the Chern number may even increase. This conclusion is quite different from the interface Dirac state mechanism, but it is likewise rooted in the band inversion of the Dirac cones.
\section{Other TI multilayer structures}
To make comparisons, we study another two TI multilayer structures, with the Chern number phase diagrams plotted in Fig.~\ref{Fig6}. In Fig.~\ref{Fig6}(a), we choose the layer numbers $N_1=2$ and $N_2=3$, so that the magnetic-doped TI layers are sandwiched between the undoped ones, whereas in Fig.~\ref{Fig6}(b), we choose $N_1=N_2=2$, where the number of magnetic-doped TI layers equals the number of undoped layers.
\begin{figure}
\includegraphics[width=9cm]{Fig6.pdf}
\caption{(Color online) Phase diagrams of two alternative TI multilayer structures in the parameter space of $(J_S,\Delta_D)$, with the Chern number $C$ and the spin polarization $\eta$ at the $\Gamma$ point being labeled as $(C,\eta)$. The contour scale represents the magnitude of log$_{10}\Delta$, where $\Delta$ is the energy gap of the lowest bands and the bright blue lines denote that the gap is closed. We choose the parameter $\delta=0.6$, the layer number $N_1=2$, $N_2=N_1+1$ in (a) and $N_1=N_2=2$ in (b).}
\label{Fig6}
\end{figure}
In the isolated-layer limit $\Delta_D=0$, for the topmost undoped TI layer, the masses of the two renormalized Dirac cones are calculated as
\begin{align}
M_{t,\pm}^o=&\frac{1}{2}J_D\pm\frac{1}{2}\sqrt{J_D^2+4\Delta_S^2}.
\end{align}
Clearly, $M_{t,+}^o$ is always positive and $M_{t,-}^o$ is always negative, with no sign change. So the contributions of these two Dirac cones to the Chern number always cancel. The same holds for the bottommost undoped TI layer, if it exists. This means that the Chern number is contributed by the magnetic-doped TI layers as well as the sandwiched undoped TI layers, but not by the topmost or bottommost undoped TI layer. This conclusion also holds when $\Delta_D$ is nonvanishing. Therefore, as shown in Figs.~\ref{Fig6}(a) and (b), the highest Chern number can only reach $C=2N_1-1$.
For the spin polarization $\eta$ at the $\Gamma$ point, the results are also shown in Fig.~\ref{Fig6}. We can see that the relation in Eq.~(\ref{polarization}) between the spin polarization and the Chern number still holds. Comparing Figs.~\ref{Fig6}(a) and (b) with Fig.~\ref{Fig3}(a), we find that although the nontrivial Chern number phases span similar regions, the spin polarizations are quite different. As analyzed above, even when the Chern number takes its highest value, the spin polarization at the $\Gamma$ point is not saturated in these two TI multilayer structures.
We note that in Fig.~\ref{Fig6}, the gaps are well opened in the Chern insulator phases. Because the Fermi energy needs to be tuned into the gap, a large gap favors the experimental observation. This conclusion is quite different from our previous work~\cite{Y.X.Wang2021}, where the gaps of the Chern insulator phases were found to be so small that they would hinder the experimental observation.
\section{TI superlattice structure}
In addition, we study the TI superlattice structure with periodic boundary conditions in the $z$ direction. Note that in a unit cell, the topmost and bottommost Dirac cones may have two nearby magnetic-doped TI layers. As more and more magnetic-ordered TIs are successfully fabricated in experiments~\cite{M.M.Otrokov, J.Wu, R.C.Vidal, I.I.Klimovskikh}, the TI superlattice considered here may also be realized in the future.
\begin{figure}
\includegraphics[width=9.4cm]{Fig7.pdf}
\caption{(Color online) Phase diagrams of the TI superlattice structures in the parameter space of $(J_S,\Delta_D)$, and the corresponding $k_z$ value for the energy gap. In (a) and (b), the Chern number $C$ is labeled and the contour scale represents the magnitude of log$_{10}\Delta$, where $\Delta$ is the energy gap of the lowest bands. The bright blue regions denote that the gap is closed, and the system lies in the gapless WSM phase. We choose the parameter $\delta=0.6$, the layer number $N_2=1$ in (a) and (c), $N_2=2$ in (b) and (d), and $N_1=N_2+1$. }
\label{Fig7}
\end{figure}
The phase diagrams for the TI superlattice are plotted in Figs.~\ref{Fig7}(a) and (b), where the unit cell includes the same TI layer numbers as in Figs.~\ref{Fig3}(a) and (b), respectively. The bright blue regions denote that the energy gap is closed, so that the system lies in the gapless WSM phase. Figs.~\ref{Fig7}(a) and (b) show that the WSM phases span large parameter regions and separate the distinct Chern insulator phases. This is consistent with the previous studies~\cite{A.A.Burkov, C.Lei}, but is quite different from the TI multilayer structures, where no WSM phase exists in Fig.~\ref{Fig3}. The explanation is that the wave vector $k_z$ needs to take a specific value to close the gap so that the WSM phase can be accommodated in the TI superlattice~\cite{A.A.Burkov, A.A.Zyuzin}, whereas $k_z$ cannot be modulated in the TI multilayer structures. We find that the wave vector $k_z$ at which the gap occurs can also be used to distinguish the Chern insulator and WSM phases: in the former, $k_z$ takes the fixed value $0$ or $\frac{\pi}{(N_1+N_2)d}$, located at the BZ center or boundary, while in the latter, $k_z$ gradually changes from $0$ to $\frac{\pi}{(N_1+N_2)d}$. Note that in the superlattice structure, due to the enlarged unit cell, the BZ in the $z$ direction is folded into $[-\frac{\pi}{(N_1+N_2)d},\frac{\pi}{(N_1+N_2)d}]$.
In the undoped case $J_S=0$, the masses of the $2(N_1+N_2)$ renormalized Dirac cones can be calculated analytically as
\begin{align}
M_{\pm}^s=\pm\sqrt{\Delta_S^2+\Delta_D^2+2\Delta_S\Delta_D
\text{cos}\big(k_z d+\frac{2\pi\alpha}{N_1+N_2}\big)},
\end{align}
with the layer index $\alpha=1,2,\cdots,N_1+N_2$. The above equation shows that the masses always appear in pairs. One can check that the gap closes only when $M_{\pm}^s$ with $\alpha=N_1+N_2$ vanish, under the conditions $k_z=0$ and $\Delta_D=|\Delta_S|$, as shown in Fig.~\ref{Fig7}.
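The gap-closing condition can be checked by scanning the mass formula over the folded BZ. In the sketch below (our code, with $\Delta_S=-1$, $N_1+N_2=5$, and $d=1$), the minimal mass $|M_+^s|$ vanishes only at $\Delta_D=|\Delta_S|$, while away from that point it equals $\big||\Delta_S|-\Delta_D\big|$.

```python
import numpy as np

def min_mass(D_D, D_S=-1.0, N=5, d=1.0):
    # Scan M_+^s squared over k_z in the folded BZ and all layer indices alpha
    kz = np.linspace(-np.pi / (N * d), np.pi / (N * d), 2001)
    m2 = [D_S**2 + D_D**2 + 2 * D_S * D_D * np.cos(kz * d + 2 * np.pi * a / N)
          for a in range(1, N + 1)]
    return np.sqrt(np.maximum(np.min(m2), 0.0))

# Gapless only at D_D = |D_S|; at D_D = 0.8 the minimal mass is 0.2.
print(min_mass(1.0), min_mass(0.8))
```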
On the other hand, when $\Delta_D=0$, i.e., in the isolated-layer case, the masses of the two renormalized Dirac cones on the topmost magnetic-doped TI layer in a unit cell are obtained as
\begin{align}
M_{t,\pm}^s=J_S+\frac{1}{2}J_D\pm\frac{1}{2}\sqrt{J_D^2+4\Delta_S^2},
\label{msuper}
\end{align}
which are the same as for the bottommost layer in a unit cell. One can infer from Eq.~(\ref{msuper}) that the mass $M^s_{t,-}$ changes from negative to positive at the critical point $J_S=\frac{|\Delta_S|}{\sqrt{1+\delta}}$ and is twofold degenerate. The other critical points at $\Delta_D=0$ are the same as those in the TI multilayer structures, suggesting that the Chern number behaviors are similar.
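The location and sign change of this critical point can be verified directly from Eq.~(\ref{msuper}). The short sketch below (our own code) assumes the parametrisation $J_D=\delta J_S$, which is how we read the parameter $\delta$; squaring $J_S(1+\delta/2)=\frac{1}{2}\sqrt{\delta^2J_S^2+4\Delta_S^2}$ indeed gives $J_S^2(1+\delta)=\Delta_S^2$:

```python
import math

def mass_topmost_minus(J_S, delta, Delta_S):
    # M^s_{t,-} of Eq. (msuper), assuming J_D = delta * J_S
    J_D = delta * J_S
    return J_S + J_D / 2 - math.sqrt(J_D**2 + 4 * Delta_S**2) / 2

delta, Delta_S = 0.6, -1.0
# predicted critical point J_S = |Delta_S| / sqrt(1 + delta)
J_crit = abs(Delta_S) / math.sqrt(1 + delta)
```

With $\delta=0.6$ and $|\Delta_S|=1$ the mass crosses zero at $J_S=1/\sqrt{1.6}\approx 0.79$, negative below and positive above, as stated.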
\section{Discussions and Conclusions}
In summary, we have studied the Chern number phase diagrams of the TI multilayer structures by using the Dirac cone model, where the Chern number is calculated effectively by tracking the evolution of the phase boundaries with the parameters. We find that the magnetic doping can drive the band inversion of the Dirac cones and thus induce the high-$C$ phase transitions. In the highest Chern number phase, the occupied states are downspin polarized at the $\Gamma$ point.
Compared with the previous theoretical works that used the plane wave expansion method on the $\boldsymbol k\cdot \boldsymbol p$ model~\cite{Y.F.Zhao, Y.X.Wang2021}, our results show the following differences: (i) the inverted bands are related to the renormalized Dirac cones; (ii) the Chern number may get increased even when the thickness of the middle magnetic-doped TI layer decreases; (iii) the gaps are well opened in the Chern insulator phase in another two TI multilayer structures. The differences are attributed to the fact that the surface states of each TI layer in the multilayer structures are well captured by the Dirac cone model, but may not be correctly described by the plane-wave expansion method. We hope that these conclusions, especially (ii) and (iii), will be demonstrated in future TI multilayer experiments.
Besides the magnetic-doped TI multilayers, the magnetic-ordered ones, as well as factors that cannot be included in the DFT calculations, such as disorder and magnetic field, can also be described by the Dirac cone model. We expect the Dirac cone model to be widely used to explain more emergent phenomena in the TI multilayer structures, such as the axion insulator~\cite{M.Mogi, D.Xiao}.
\section{Acknowledgments}
This work was supported by the National Natural Science Foundation of China (Grant No. 11804122 and No. 11905054), the China Postdoctoral Science Foundation (Grant No. 2021M690970), and the Fundamental Research Funds for the Central Universities of China.
\section{Appendix: unequal intralayer Dirac cone hoppings}
\begin{figure}
\includegraphics[width=9cm]{FigApp.pdf}
\caption{(Color online) Phase diagrams of the TI multilayer structures in the parameter space $(J_S,\Delta_D)$ in (a) and $(J_S,|\Delta_{S2}|)$ in (b), with the Chern number $C$ and the spin polarization $\eta$ at the $\Gamma$ point being labeled as $(C,\eta)$. The contour scale represents the magnitude of log$_{10}\Delta$, where $\Delta$ is the energy gap of the lowest bands, and the bright blue lines denote that the gap is closed. We choose the parameter $\delta=0.6$, the layer number $N_2=2$, $N_1=N_2+1$, $\Delta_{S1}=-1$, and in (a) $\Delta_{S2}=-0.8$, in (b) $\Delta_D=0.9$.}
\label{Fig8}
\end{figure}
Here we discuss the effect of the unequal intralayer Dirac cone hoppings on the phase diagrams.
We use $\Delta_{S1/2}$ to represent the intralayer Dirac cone hopping in the magnetic-doped/undoped TI layer.
When $\Delta_D=0$, the Chern number in Eq.~(\ref{m-layer}) becomes
\begin{align}
C=&\frac{N_1}{2}\Big[\text{sgn}\big(J_S-|\Delta_{S1}|\big)+\text{sgn}\big(J_S+|\Delta_{S1}|\big)\Big]
\nonumber\\
&+\frac{N_2}{2}\Big[\text{sgn}\big(J_D-|\Delta_{S2}|\big)+\text{sgn}\big(J_D+|\Delta_{S2}|\big)\Big]. \end{align}
The above Chern number expression shows that the critical points are located at $J_S=|\Delta_{S1}|$ and $J_S=\frac{|\Delta_{S2}|}{\delta}$, as shown in Fig.~\ref{Fig8}(a). More importantly, the structure of the phase diagram in Fig.~\ref{Fig8}(a) is similar to that in Fig.~\ref{Fig3}(b). When setting $\Delta_D=0.9$, in Fig.~\ref{Fig8}(b), we plot the phase diagram in the parameter space $(J_S,|\Delta_{S2}|)$. We observe that as $|\Delta_{S2}|$ increases, the left three phase boundaries show minor changes, while the right two phase boundaries move to higher $J_S$. Thus, for unequal $\Delta_{S1}$ and $\Delta_{S2}$, the qualitative features of the phase diagram remain unchanged.
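The step structure of this Chern number expression can be tabulated directly. In the sketch below (our own code) we again assume $J_D=\delta J_S$, consistent with the critical point $J_S=|\Delta_{S2}|/\delta$, and use the parameters of Fig.~\ref{Fig8}(a):

```python
def sgn(x):
    # sign function: returns -1, 0, or 1
    return (x > 0) - (x < 0)

def chern(J_S, N1, N2, delta, dS1, dS2):
    # Chern number at Delta_D = 0, assuming J_D = delta * J_S
    J_D = delta * J_S
    return (N1 * (sgn(J_S - abs(dS1)) + sgn(J_S + abs(dS1)))
            + N2 * (sgn(J_D - abs(dS2)) + sgn(J_D + abs(dS2)))) // 2

# parameters of Fig. 8(a): N2 = 2, N1 = N2 + 1, delta = 0.6,
# Delta_S1 = -1, Delta_S2 = -0.8
params = (3, 2, 0.6, -1.0, -0.8)
```

One finds $C=0$, $3$, $5$ on the successive intervals $0<J_S<1$, $1<J_S<4/3$, $J_S>4/3$, matching the critical points $J_S=|\Delta_{S1}|=1$ and $J_S=|\Delta_{S2}|/\delta=4/3$.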
\section{Introduction}
For a given family of finite sets, typically arising from an arithmetic source, whose cardinalities are naturally called class numbers -- for example, the family of ideal class groups indexed by a class of number fields --
the Gauss problem asks for the determination of the subfamily in which every member has class number one. The most famous result is that there are exactly nine imaginary quadratic fields of class number one, due to Heegner and proved independently by Baker and Stark around 1966.
Masley and Montgomery \cite{masley-montgomery} determined the cyclotomic fields of class number one; there are 29 of them, cf.~Washington~\cite[Chapter 11] {Washington-cyclotomic} for an exposition and historical background.
The imaginary abelian fields of class number one were determined by K.~Yamamura~\cite{yamamura:h=1} in 1994.
Since then, solving the Gauss problem for normal CM fields has been a major undertaking by various authors. In 2006 the best bound for the degree $d$ of a normal CM field of class number one was given by Lee-Kwon~\cite{lee-kwon:2006}, who showed that $d$ satisfies $d\le 96$ under the Generalised Riemann Hypothesis (GRH). In 2007, Park-Kwon \cite{park-kwon:2007} and Park-Yang-Kwon \cite{park-yang-kwon:2007} determined all the remaining cases of $d\le 48$ among all possible values of $d$ in $\{2,4,\dots, 48\}\cup \{64,96\}$. The last two cases have been determined by Hofmann-Sircana \cite{hofmann-sircana}, who showed that there is no normal CM field of degree $d=64$ or $d=96$ of class number one.
Besides the 172 abelian CM fields determined by Yamamura, there are 55 non-abelian normal CM fields with class number one under GRH.
The present paper solves two related Gauss problems: the
first one comes from positive-definite quaternion Hermitian forms and
the second one from the reduction modulo some prime~$p$ of Siegel modular varieties.
For the first Gauss problem, let $B$ be
a definite quaternion $\mathbb{Q}$-algebra of discriminant $D$ and let
$O$ be a maximal order in $B$. Let $V$ be a left $B$-module of rank
$n$, and $f:V\times V\to B$ be a positive-definite quaternion Hermitian form
with respect to the canonical involution $x\mapsto \bar x$.
For each $O$-lattice $L$ in $V$, denote by $h(L,f)$
the class number, that is, the number of isomorphism classes, of the genus containing $L$.
As the main result of the first, arithmetic, part of this paper, in Theorem~\ref{thm:mainarith} we determine precisely when $h(L,f)=1$ for all maximal $O$-lattices $L$. For the rank one case, the list of definite quaternion $\mathbb{Z}$-orders of class number one has been determined by Brzezinski~\cite{brzezinski:h=1} in 1995.
For the second Gauss problem, let $p$ be a prime number and let $\calA_{g}$ denote the moduli space over
$\overline{\bbF}_p$ of $g$-dimensional principally polarised abelian
varieties. Let $k$ be an algebraically closed
field of characteristic\ $p$. For each point
$x=[(X_0,\lambda_0)]\in \calA_{g}(k)$, denote by
\[ \calC(x):=\{[(X,\lambda)]\in \calA_{g}(k) :
(X,\lambda)[p^\infty]\simeq (X_0,\lambda_0)[p^\infty] \} \]
the central leaf of $\calA_{g}$ passing through the point $x$. Our goal is to determine the set of all points~$x$ in $\calA_{g}(k)$ such that
$\calC(x)=\{x\}$.
Let $\calS_{g}\subseteq \calA_{g}$ denote the supersingular locus of
$\calA_{g}$, which is
the closed subvariety parametrising all supersingular abelian varieties in
$\calA_{g}$. For each abelian variety $X$ over $k$, the $a$-number
of $X$ is defined by
\begin{equation}
\label{eq:anumber}
a(X):=\dim \Hom(\alpha_p, X),
\end{equation}
where $\alpha_p$ is the kernel of the Frobenius morphism on the
additive group $\bbG_a$. The $a$-number of a point $x$ in
$\calA_{g}$ is denoted by $a(x)$.
The main result of the second, geometric, part of the paper is the following solution to the Gauss problem.
\begin{introtheorem}\label{thm:main} (Theorem~\ref{thm:main2})
Let $x=[X_0,\lambda_0]\in \calA_{g}(k)$ and $\calC(x)$ be the
central leaf of $\calA_{g}$ passing through the point $x$.
\begin{enumerate}
\item (Chai~\cite{chai}) The set $\calC(x)$ is finite if and only
if $X_0$ is supersingular, that is, $X_0$ is isogenous to a product of
supersingular elliptic curves.
\item Assume that $x\in \calS_{g}$. Then $\calC(x)$ has one element if and
only if one of the following three cases holds:
\begin{itemize}
\item [(i)] $g=1$ and $p\in \{2,3,5,7,13\}$;
\item [(ii)] $g=2$ and $p=2,3$;
\item [(iii)] $g=3$, $p=2$ and $a(x)\ge 2$.
\end{itemize}
\end{enumerate}
\end{introtheorem}
The result of Chai (cf.~Theorem~\ref{thm:main}.(1)) shows that the above two
Gauss problems are related. Theorem~\ref{thm:main}.(2)(i) is
well-known; Theorem~\ref{thm:main}.(2)(ii) is a result due to the
first author~\cite{ibukiyama}. We prove the remaining cases; namely,
we show that $\vert \calC(x) \vert >1$ for
$g\geq 4$, and that when $g=3$, (iii) lists the only cases
such that $|\calC(x)|=1$. When $g=3$ and $a(x)=3$ (the \emph{principal
genus} case), the class number
one result is known due to Hashimoto \cite{hashimoto:g=3}. Hashimoto
first computes an explicit class number formula in
the principal genus case and proves
the class number one result as a direct consequence.
Our method instead uses mass formulae and the
automorphism groups of certain abelian varieties, which is much simpler than proving explicit class number formulae.
Mass formulae for dimension $g=3$ were very recently provided by F.~Yobuko and the second and third-named authors~\cite{karemaker-yobuko-yu}. In addition, we perform a careful analysis of the Ekedahl-Oort strata in dimension $g=4$; in Proposition~\ref{prop:EO} we show precisely how the Ekedahl-Oort strata and Newton strata intersect.
It is worth mentioning that we do not use any computer calculations in this paper (unlike most papers that treat the Gauss problem); the only numerical data we use is the well-known table in Subsection~\ref{ssec:Gaussarith}.
In the course of our proof of Theorem~\ref{thm:main}, we define the notion of minimal $E$-isogenies (Definition~\ref{def:minE}). This generalises the notion of minimal isogenies for supersingular abelian varieties in the sense of Oort \cite[Section 1.8]{lioort}. This new construction of minimal isogenies even shows a new (and stronger) universal property for minimal isogenies since the test object is not required to be an isogeny; see Remark~\ref{rem:min_isog}. We also extend the results of Jordan et al.~\cite{JKPRST} on abelian varieties isogenous to a power of an elliptic curve to those with a polarisation; see Corollary~\ref{cor:JKPRST} and Theorem~\ref{thm:pol+JKPRST}.\\
We remark on some clear connections between the arithmetic and the geometric setting. By the Albert classification of division algebras, the endomorphism algebra $B = \End^0(A)$ of any simple abelian variety $A$ over any field $K$ is either a totally real field $F$, a quaternion algebra over $F$ (totally definite or totally indefinite), or a central division algebra over a CM field over~$F$.
The results in Subsection~\ref{ssec:RSarith} apply to all these classes of algebras, except for totally indefinite quaternion algebras and non-commutative central division algebras over a CM field. Notably, Theorem~\ref{orthogonal} provides a very general statement about unique orthogonal decomposition of lattices, which enables us to compute the automorphism groups of such lattices via Corollary~\ref{autodecomposition}.
On the geometric side (in Section~\ref{sec:aut}), we obtain general unique decomposition results for abelian varieties which are isomorphic to a power of an elliptic curve (therefore including superspecial abelian varieties), since only these abelian varieties admit a natural description in terms of lattices, as was investigated in \cite[7.12-7.14]{OortEO}.
On the other hand, on the geometric side we are mostly interested in supersingular abelian varieties, which are by definition isogenous to a power of a supersingular elliptic curve; hence, the most important algebras for us to study are the definite quaternion $\mathbb{Q}$-algebras $B = \End^0(E)$ for some supersingular elliptic curve $E$ over an algebraically closed field. We specialise to these algebras in the remaining arithmetic parts (Subsections~\ref{ssec:massarith} and~\ref{ssec:Gaussarith}) and solve the arithmetic Gauss problem for these in Theorem~\ref{thm:mainarith}. And indeed, we solve the geometric Gauss problem for all supersingular abelian varieties, including those which are not directly governed by lattices.
Finally, for solving the arithmetic Gauss problem, we restrict to maximal lattices for the maximal order $O$ in a definite quaternion $\mathbb{Q}$-algebra $B$. When $B=\End^0(E)$ and $O=\End(E)$ as above, on the geometric side these lattices correspond to polarised superspecial abelian varieties in either the principal genus or the non-principal genus; see Corollary~\ref{cor:Autsp}. This is how we can reduce our geometric Gauss problem immediately to the lower-dimensional cases ($g\le 4$). On the other hand, when $B$ is a more general definite $\mathbb{Q}$-algebra, this provides an extension of our geometric Gauss problem from Siegel modular varieties to fake Siegel modular varieties, which are direct generalisations of fake modular curves (that is, Shimura curves).\\
The structure of the paper is as follows. The arithmetic theory is treated in Section~2, building up to the main result in Theorem~\ref{thm:mainarith}. Theorem~\ref{orthogonal} is the unique orthogonal decomposition result for lattices, and Corollary~\ref{autodecomposition} gives its consequence for automorphism groups of such lattices. The geometric theory starts in Section~\ref{sec:GMF}, which recalls mass formulae due to the second and third authors as well as other authors. Section~\ref{sec:aut} treats automorphism groups and provides the geometric analogue of the unique decomposition results in Subsection~\ref{ssec:RSarith}, including their consequence for the determination of automorphism groups in Corollary~\ref{cor:Aut}. Finally, Section~\ref{sec:proof} solves the geometric Gauss problem for central leaves (Theorem~\ref{thm:main2}), using mass formulae for the case $g=3$ (Subsection~\ref{ssec:g3}) and explicit computations on Ekedahl-Oort strata for the hardest case $g = 4$ (Subsection~\ref{ssec:g4}).
\subsection{Acknowledgements}
The first author is supported by JSPS Kakenhi Grant Number JP19K-03424. The second author is supported by the Dutch Research Council (NWO) through grant VI.Veni.192.038.
The third author is partially supported by
the MoST grant 109-2115-M-001-002-MY3.
\section{The arithmetic theory}\label{sec:Arith}
\subsection{Uniqueness of orthogonal decomposition}\label{ssec:RSarith}\
Let $F$ be a totally real algebraic number field,
and let $B$ be either $F$ itself, a
CM field over~$F$ (i.e., a totally imaginary
quadratic extension of $F$), or a totally definite quaternion algebra over~$F$ (i.e., such that any simple component of $B\otimes \mathbb{R}$ is a
division algebra).
We may regard~$B^n$ as a left $B$-vector space. As a vector
space over $F$, we see that $B^n$ can be identified with~$F^{en}$, where $e=1$, $2$, or $4$ according to the choice of $B$ made above.
Let $O_F$ be the ring of integers of $F$.
A lattice in $B^n$ is a finitely generated $\mathbb{Z}$-submodule $L
\subseteq B^n$ such that $\mathbb{Q} L=B^n$ (i.e., $L$ contains a basis of $B^n$ over $\mathbb{Q}$); it is called an $O_F$-lattice if $O_F L \subseteq L$.
A unitary subring $\mathcal{O}$ of~$B$ is called an order of $B$ if it is a lattice in $B$; $\mathcal{O}$ is called an $O_F$-order if $\mathcal{O}$ also contains~$O_F$.
Any element of $\mathcal{O}$ is integral over $O_F$.
We fix an order $\mathcal{O}$ of $B$.
Put $V=B^n$ and let $f:V\times V\rightarrow B$ be a
quadratic form, a Hermitian form, or a quaternion Hermitian form according to whether $B=F$, $B$ is CM, or $B$ is quaternionic.
This means that $f$ satisfies
\begin{equation}\label{eq:hermitian}
\begin{split}
f(ax,y) & =af(x,y) \qquad \text{ for any $x$, $y\in V$, $a\in B$}, \\
f(x_1+x_2,y)& =f(x_1,y)+f(x_2,y) \quad \text{ for any $x_i$, $y \in V$},\\
f(y,x) & = f(x,y)^* \qquad \text{ for any $x$, $y \in V$},
\end{split}
\end{equation}
where $*$ is the main involution of $B$ over $F$, that is,
the trivial map for $F$, the complex conjugation for a fixed embedding $B \subseteq \mathbb{C}$
if $B$ is a CM field,
or the anti-automorphism of $B$ of order~$2$ such that
$x+x^*=Tr_{B/F}(x)$ for the reduced trace $\mathrm{Tr}_{B/F}$.
By the above properties, we have $f(x,x)\in F$ for any $x\in V$.
We assume that $f$ is totally positive, that is, for any $x\in V$ and
for any embedding $\sigma:F\rightarrow \mathbb{R}$, we have $f(x,x)^{\sigma}>0$ unless $x=0$.
A lattice $L\subseteq V$ is said to be a left $\mathcal{O}$-lattice if $\mathcal{O} L\subseteq L$. An $\mathcal{O}$-submodule~$M$ of an $\mathcal{O}$-lattice $L$ is called an $\mathcal{O}$-sublattice of $L$; then $M$ is an $\mathcal{O}$-lattice in the $B$-module $B M$ of possibly smaller rank.
We say that a left $\mathcal{O}$-lattice $L\neq 0$ is indecomposable if whenever
$L=L_1+L_2$ and $f(L_1,L_2)=0$ for some left $\mathcal{O}$-lattices $L_1$ and $L_2$, then
$L_1=0$ or $L_2=0$.
For quadratic forms over $\mathbb{Q}$,
the following theorem is in \cite[p. 169 Theorem 6.7.1]{kitaoka} and
\cite[Satz 27.2]{kneser}. The proof for the general case is almost the same.
\begin{theorem}\label{orthogonal}
Assumptions and notation being as above,
any left $\mathcal{O}$-lattice $L\subseteq B^n$ has a finite orthogonal decomposition
\[
L=L_1\perp \cdots \perp L_r
\]
for some indecomposable left $\mathcal{O}$-sublattices $L_i$.
The set of lattices $\{L_i\}_{1\leq i\leq r}$ is uniquely determined by $L$.
\end{theorem}
\begin{proof}[Proof of Theorem \ref{orthogonal}]
Any non-zero $x \in L$ is called primitive if there are no $y$,$z\in L$ such that
$y\neq 0$, $z\neq 0$, and $x=y+z$ with $f(y,z)=0$.
First we see that any $0\neq x\in L$ is a finite sum of primitive elements of $L$.
If $x$ is not primitive, then we have $x=y+z$ with $0\neq y$, $z\in L$ with
$f(y,z)=0$.
So we have $f(x,x)=f(y,y)+f(z,z)$ and hence
\[
\mathrm{Tr}_{F/\mathbb{Q}}(f(x,x))=\mathrm{Tr}_{F/\mathbb{Q}}(f(y,y))+\mathrm{Tr}_{F/\mathbb{Q}}(f(z,z)).
\]
Since $f$ is totally positive, we have
$\mathrm{Tr}_{F/\mathbb{Q}}(f(x,x))=\sum_{\sigma:F\rightarrow \mathbb{R}}f(x,x)^{\sigma}=0$ if and only if
$x=0$. So we have $\mathrm{Tr}_{F/\mathbb{Q}}(f(y,y))<\mathrm{Tr}_{F/\mathbb{Q}}(f(x,x))$.
If $y$ is not primitive, we continue the same process.
We claim that this process terminates after finitely many steps.
Since $L\neq 0$ is a finitely generated $\mathbb{Z}$-module,
$f(L,L)$ is a non-zero finitely generated $\mathbb{Z}$-module.
So the module $\mathrm{Tr}_{F/\mathbb{Q}}(f(L,L))$ is
a fractional ideal of $\mathbb{Z}$ and we have
$\mathrm{Tr}_{F/\mathbb{Q}}(f(L,L))=e\mathbb{Z}$ for some $0<e\in \mathbb{Q}$. This means that
$\mathrm{Tr}_{F/\mathbb{Q}}(f(x,x))\in e\mathbb{Z}_{>0}$ for any $x \in L$.
So after finitely many iterations, $\mathrm{Tr}_{F/\mathbb{Q}}(f(y,y))$ becomes $0$ and the claim is proved.
We say that primitive elements $x$, $y\in L$ are connected if there are primitive elements $z_0$, $z_1$, \ldots, $z_r \in L$ such that
$x=z_0$, $y=z_r$, and $f(z_{i-1},z_{i})\neq 0$ for $i=1$,\ldots, $r$.
This is an equivalence relation.
We denote by $K_{\lambda}$ ($\lambda \in \Lambda$) the set of equivalence
classes of primitive elements in $L$. By definition, elements of
$K_{\lambda}$ and $K_{\kappa}$ for $\lambda\neq \kappa$ are orthogonal.
We denote by $L_{\lambda}$ the left $\mathcal{O}$-module spanned by elements of
$K_{\lambda}$. Then we have
\[
L=\perp_{\lambda\in \Lambda}L_{\lambda}.
\]
Since $F\mathcal{O}=B$, we see that $V_{\lambda}=FL_{\lambda}$ is a left $B$-vector space and
$L_{\lambda}$ is an $\mathcal{O}$-lattice in $V_{\lambda}$.
Since $\dim_B \sum_{\lambda\in \Lambda}V_{\lambda}=n$, we see that
$\Lambda$ is a finite set. Hence any primitive element in $L_{\lambda}$ belongs
to $K_{\lambda}$. Indeed, if $y\in L_{\lambda}\subseteq L$ is primitive,
then $y\in K_{\mu}$ for some $\mu\in \Lambda$, but if $\lambda\neq \mu$, then
$y\in K_{\mu}\subseteq L_{\mu}$, so $y=0$, a contradiction.
Now if $L_{\lambda}=N_1\perp N_2$ for some left $\mathcal{O}$-modules $N_1\neq 0$, $N_2\neq 0$,
then whenever $x+y$ with $x\in N_1$, $y\in N_2$ is primitive, we have $x=0$ or $y=0$.
So if $0\neq x \in N_1$ is primitive and if $f(x,z_1)\neq 0$ for
some primitive element $z_1\in L_{\lambda}$, then $z_1 \in N_1$.
Repeating the process, any $y\in K_{\lambda}$ belongs to $N_1$, so
$N_1=L_{\lambda}$, so $L_{\lambda}$ is indecomposable.
Now if $L=\perp_{\kappa \in K}M_{\kappa}$ for other indecomposable
lattices, then any primitive element $x$ of $L$ is contained
in some $M_{\kappa}$ by definition of primitivity.
By the same reasoning as before, if $x \in M_{\kappa}$ is primitive, then
any primitive $y\in L$ connected to $x$ belongs to $M_{\kappa}$.
This means that there is an injection $\iota:\Lambda\rightarrow K$
such that $L_{\lambda}\subseteq M_{\iota(\lambda)}$.
Since
\[
L=\perp_{\lambda\in \Lambda}L_{\lambda}\subseteq \perp_{\lambda\in \Lambda}
M_{\iota(\lambda)}\subseteq L
\]
we have $L_{\lambda}=M_{\iota(\lambda)}$ and $\iota$ is a bijection.
\end{proof}
\begin{corollary}\label{autodecomposition}
Assumptions and notation being as before, suppose that
$L$ has an orthogonal decomposition
\[
L=\perp_{i=1}^{r}M_i
\]
where $M_i=\perp_{j=1}^{e_i}L_{ij}$ for some indecomposable left $\mathcal{O}$-lattices
$L_{ij}$ such that $L_{ij}$ and $L_{ij'}$ are isometric for any $j$, $j'$, but
$L_{ij}$ and $L_{i'j'}$ are not isometric for $i\neq i'$.
Then we have
\[
\Aut(L)\cong \prod_{i=1}^{r}\Aut(L_{i1})^{e_i}\cdot S_{e_i}
\]
where $S_{e_i}$ is the symmetric group on $e_i$ letters and
$\Aut(L_{i1})^{e_i}\cdot S_{e_i}$ is a semi-direct product where $S_{e_i}$ normalises
$\Aut(L_{i1})^{e_i}$.
\end{corollary}
\begin{proof}
By Theorem \ref{orthogonal}, we see that for any element $\epsilon \in \Aut(L)$,
there exists $\tau\in S_{e_i}$ such that
$\epsilon(L_{i1})=L_{i\tau(1)}$, so the result follows.
\end{proof}
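As a toy illustration of Corollary~\ref{autodecomposition} in the simplest case $B=F=\mathbb{Q}$, $\mathcal{O}=\mathbb{Z}$: the standard lattice $(\mathbb{Z}^n,\sum x_iy_i)$ decomposes into $n$ mutually isometric indecomposable rank-one lattices $(\mathbb{Z},x^2)$ with automorphism group $\{\pm1\}$, so the corollary predicts $|\Aut(\mathbb{Z}^n)|=2^n\, n!$. The brute-force check below (our own code, not part of the proofs) enumerates all integral isometries for small $n$; any such isometry necessarily has entries in $\{-1,0,1\}$:

```python
from itertools import product

def aut_order_standard_lattice(n):
    # count integer matrices M with M * M^T = I_n,
    # i.e. automorphisms of the quadratic lattice (Z^n, identity form)
    count = 0
    for entries in product((-1, 0, 1), repeat=n * n):
        M = [entries[i * n:(i + 1) * n] for i in range(n)]
        if all(sum(M[i][k] * M[j][k] for k in range(n)) == (1 if i == j else 0)
               for i in range(n) for j in range(n)):
            count += 1
    return count
```

This returns $8=2^2\cdot 2!$ for $n=2$ and $48=2^3\cdot 3!$ for $n=3$, the orders of the hyperoctahedral groups $\{\pm1\}^n\rtimes S_n$, in agreement with the corollary applied with $r=1$ and $e_1=n$.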
\begin{remark}\label{rem:product}
The proof of Theorem~\ref{orthogonal} also works in the following more general setting: $B=\prod_i B_i$ is a finite product of $\mathbb{Q}$-algebras $B_i$, where $B_i$ is either a totally real field $F_i$, a CM field over $F_i$, or a totally definite quaternion algebra over $F_i$. Denote by $*$ the main involution on~$B$ and $F=\prod_i F_i$ the subalgebra fixed by $*$. Let $\calO$ be any order in $B$, and let $V$ be a faithful left $B$-module equipped with a totally positive Hermitian form $f$, which satisfies the conditions in~\eqref{eq:hermitian} and is totally positive on each factor in $V=\oplus V_i$ with respect to $F=\prod_i F_i$.
\end{remark}
\subsection{Quaternionic unitary groups and mass formulae}\label{ssec:massarith}\
For the rest of this section, we let $B$ be a definite quaternion $\mathbb{Q}$-algebra central over $\mathbb{Q}$ with discriminant $D$
and let $O$ be a maximal order in $B$. Then $D=q_1\cdots q_t$ is a
product of an odd number $t$ of primes.
The canonical involution on $B$ is denoted by $x\mapsto \bar x$.
Let $(V,f)$ be a positive-definite quaternion Hermitian space over $B$ of
rank $n$. By definition, $f:V\times V\to B$ is a $\mathbb{Q}$-bilinear form
such that
\begin{itemize}
\item [(i)] $f(ax,y)=af(x,y)$ and $f(x,ay)=f(x,y)\bar a$,
\item [(ii)] $f(y,x)=\overline{f(x,y)}$, and
\item [(iii)] $f(x,x)\ge 0$ and $f(x,x)=0$ only when $x=0$,
\end{itemize}
for all $a\in B$ and $x,y\in V$.
The isomorphism class of $(V,f)$ over $B$ is uniquely
determined by $\dim_B V$. We denote by $G=G(V,f)$ the group of all
similitudes on $(V,f)$; namely,
\[
G=\{\alpha\in \GL_B(V): f(x \alpha,y \alpha)=n(\alpha)f(x,y) \quad \forall\, x,y\in V\ \},
\]
where $n(\alpha)\in \mathbb{Q}^\times$ is a scalar depending only on $\alpha$.
For each prime $p$, we write $O_p:=O\otimes_\mathbb{Z} {\bbZ}_p$,
$B_p:=B\otimes_\mathbb{Q} {\bbQ}_p$ and $V_p:=V\otimes_\mathbb{Q} {\bbQ}_p$, and let
$G_p=G(V_p,f_p)$ be the group of all similitudes on the local quaternion
Hermitian space $(V_p,f_p)$.
Two $O$-lattices $L_1$ and $L_2$ are said to be equivalent,
denoted $L_1\sim L_2$, if there exists an element
$\alpha\in G$ such that $L_2=L_1 \alpha$; the equivalence of two $O_p$-lattices
is defined analogously. Two $O$-lattices $L_1$ and $L_2$
are said to be in the same genus if $(L_1)_p\sim (L_2)_p$ for all primes~$p$.
The norm $N(L)$ of an $O$-lattice $L$ is defined to be
the two-sided fractional $O$-ideal generated by
$f(x,y)$ for all $x,y\in L$. If $L$ is maximal among the $O$-lattices
having the same norm $N(L)$, then it is called a maximal $O$-lattice.
The notion of maximal $O_p$-lattices in~$V_p$ is defined analogously. Then an $O$-lattice $L$ is maximal if and only if
the $O_p$-lattice $L_p:=L\otimes_\mathbb{Z} {\bbZ}_p$ is maximal for all prime
numbers $p$.
For each prime $p$, if $p\nmid D$, then there is only one equivalence
class of maximal $O_p$-lattices in $V_p$, represented by the
standard unimodular lattice $(O_p^n, f=\bbI_n)$.
If $p|D$, then there are two equivalence classes of maximal
$O_p$-lattices in $V_p$, represented by the principal lattice
$(O_p^n,f=\bbI_n)$ and a non-principal lattice
$((\Pi_p O_p)^{\oplus (n-c)}\oplus O_p^{\oplus c},\bbJ_n)$, respectively, where
$c=\lfloor n/2\rfloor$, and $\Pi_p$ is a uniformising element in $O_p$ with
$\Pi_p \overline \Pi_p=p$, and $\bbJ_n=\text{anti-diag}(1,\dots, 1)$ is the anti-diagonal
matrix of size $n$.
Thus, there are $2^t$ genera of maximal $O$-lattices in $V$ when $n\geq 2$.
For each positive integer $n$ and a pair $(D_1,D_2)$ of positive
integers with $D=D_1D_2$, denote by $\calL_n(D_1,D_2)$ the genus
consisting of maximal $O$-lattices in $(V,f)$ of rank $n$ such
that for all primes $p|D_1$ (resp.~$p|D_2$) the $O_p$-lattice
$(L_p,f)$ belongs to the principal class (resp.~ the
non-principal class). We denote by $[\calL_n(D_1,D_2)]$ the set of
equivalence classes of lattices in $\calL_n(D_1,D_2)$ and by
$H_n(D_1,D_2):=\# [\calL_n(D_1,D_2)]$ the class number of the genus
$\calL_n(D_1,D_2)$. The mass $M_n(D_1,D_2)$ of $[\calL_n(D_1,D_2)]$ is defined by
\begin{equation}
\label{eq:Mass}
M_n(D_1,D_2)=\Mass([\calL_n(D_1,D_2)]):=\sum_{L\in
[\calL_n(D_1,D_2)]} \frac{1}{|\Aut(L)|},
\end{equation}
where $\Aut(L):=\{\alpha\in G: L\alpha=L\}$. Note that if $\alpha\in \Aut(L)$ then
$n(\alpha)=1$, because $n(\alpha)>0$ and $n(\alpha)\in \mathbb{Z}^\times=\{\pm 1 \}$.
Let $G^1:=\{\alpha\in G: n(\alpha)=1\}$. The class number and mass for a $G^1$-genus
of $O$-lattices are defined analogously to the case of $G$:
two $O$-lattices $L_1$ and $L_2$ are said to be isomorphic,
denoted $L_1\simeq L_2$, if there exists an element
$\alpha\in G^1$ such that $L_2=L_1 \alpha$; similarly,
two $O_p$-lattices $L_{1,p}$ and $L_{2,p}$ are said to be isomorphic,
denoted $L_{1,p}\simeq L_{2,p}$ if there exists an element $\alpha_p\in G^1_p$
such that $L_{2,p}=L_{1,p} \alpha_p$.
Two $O$-lattices $L_1$ and $L_2$
are said to be in the same $G^1$-genus if $(L_1)_p\simeq (L_2)_p$
for all primes $p$.
We denote by $\calL_n^1(D_1,D_2)$ the $G^1$-genus
which consists of maximal $O$-lattices in $(V,f)$ of rank $n$ satisfying
\[ (L_p,f_p)\simeq
\begin{cases}
(O_p^n,\bbI_n) & \text{for $p\nmid D_2$}; \\
((\Pi_p O_p)^{n-c}\oplus O_p^c,\bbJ_n) & \text{for $p\mid D_2$}, \\
\end{cases}
\]
where $c:=\lfloor n/2\rfloor$.
We denote by $[\calL_n^1(D_1,D_2)]$ the set of isomorphism classes of $O$-lattices in $\calL_n^1(D_1,D_2)$ and by
$H^1_n(D_1,D_2):=\# [\calL^1_n(D_1,D_2)]$ the class number of
the $G^1$-genus $\calL_n^1(D_1,D_2)$. Similarly, the mass $M^1_n(D_1,D_2)$ of $[\calL^1_n(D_1,D_2)]$ is defined by
\begin{equation}
\label{eq:Mass1}
M^1_n(D_1,D_2)=\Mass([\calL^1_n(D_1,D_2)]):=\sum_{L\in
[\calL^1_n(D_1,D_2)]} \frac{1}{|\Aut_{G^1}(L)|},
\end{equation}
where $\Aut_{G^1}(L):=\{\alpha\in G^1: L\alpha=L\}$, which is also equal to $\Aut(L)$.
\begin{lemma}\label{lm:GvsG1}
The natural map $\iota:[\calL^1_n(D_1,D_2)]\to [\calL_n(D_1,D_2)]$ is a bijection. In particular, we have the equalities
\begin{equation}
\label{eq:GvsG1}
M^1_n(D_1,D_2)=M_n(D_1,D_2) \quad \text{and}\quad H^1_n(D_1,D_2)=H_n(D_1,D_2).
\end{equation}
\end{lemma}
\begin{proof}
Fix an $O$-lattice $L_0$ in $\calL_n(D_1,D_2)$ and
regard $G$ and $G^1$ as algebraic groups over $\mathbb{Q}$.
Denote by $\widehat \mathbb{Z}=\prod_{\ell} \mathbb{Z}_\ell$ the profinite completion of
$\mathbb{Z}$ and by $\A_f=\widehat \mathbb{Z}\otimes_{\mathbb{Z}} \mathbb{Q}$ the finite adele ring of $\mathbb{Q}$.
We have natural isomorphisms of pointed sets
\[ [\calL_n(D_1,D_2)]\simeq U \backslash G(\A_f)/G(\mathbb{Q}), \quad
[\calL^1_n(D_1,D_2)]\simeq
U^1 \backslash G^1(\A_f)/G^1(\mathbb{Q}), \]
where $U$ is the stabiliser of $L_0\otimes \widehat \mathbb{Z}$ in $G(\A_f)$ and $U^1:=U\cap G^1(\A_f)$. Via these isomorphisms, the natural map $\iota:[\calL^1_n(D_1,D_2)]\to [\calL_n(D_1,D_2)]$ is nothing but the map induced by the identity map
\[
\iota: U^1 \backslash G^1(\A_f)/G^1(\mathbb{Q}) \to U \backslash G(\A_f)/G(\mathbb{Q}).
\]
The map $n$ induces a surjective map $U \backslash G(\A_f)/G(\mathbb{Q})\to n(U)\backslash \A_f^\times/\mathbb{Q}^\times_+$. One shows that $n(U)=\widehat \mathbb{Z}^\times$, so the latter term is trivial. Then every double coset in $U \backslash G(\A_f)/G(\mathbb{Q})$ is represented by an element of norm one. Therefore, $\iota$ is surjective. Let $g_1,g_2\in G^1(\A_f)$ be such that $\iota [g_1]=\iota[g_2]$ in the $G$-double coset space. Then $g_1=u g_2 \gamma $ for some $u\in U$ and $\gamma\in G(\mathbb{Q})$. Applying $n$, one obtains $n(\gamma)=1$ and hence $n(u)=1$. This proves the injectivity of $\iota$.
\end{proof}
For each $n\geq 1$, define
\begin{equation}
\label{eq:vn}
v_n:=\prod_{i=1}^n \frac{|\zeta(1-2i)|}{2},
\end{equation}
where $\zeta(s)$ is the Riemann zeta function. For each prime $p$ and
$n\ge 1$, define
\begin{equation}
\label{eq:Lnp}
L_n(p,1):=\prod_{i=1}^n (p^i+(-1)^i)
\end{equation}
and
\begin{equation}
\label{eq:L*np}
L_n(1,p):=
\begin{cases}
\prod_{i=1}^c (p^{4i-2}-1) & \text{if $n=2c$ is even;} \\
\frac{(p-1) (p^{4c+2}-1)}{p^2-1} \cdot \prod_{i=1}^c (p^{4i-2}-1) & \text{if $n=2c+1$ is odd.}
\end{cases}
\end{equation}
\begin{proposition}\label{prop:max_lattice} We have
\begin{equation}
\label{eq:Massformula}
M_n(D_1,D_2)=v_n \cdot \prod_{p|D_1} L_n(p,1) \cdot \prod_{p|D_2}
L_n(1,p).
\end{equation}
\end{proposition}
\begin{proof}
When $(D_1,D_2)=(D,1)$, the formula \eqref{eq:Massformula} is proved in
\cite[Proposition~9]{hashimoto-ibukiyama:1}. By Lemma~\ref{lm:GvsG1}, we may replace
$M_n(D_1,D_2)$ by $M^1_n(D_1,D_2)$ in \eqref{eq:Massformula}. We have
\begin{equation}
\label{eq:massquot}
\frac{M^1_n(D_1,D_2)}{M^1_n(D,1)}=\prod_{p|D_2} \frac{\vol(\Aut_{G^1_p}(O_p^n,\bbI_n))}{\vol(\Aut_{G^1_p}((\Pi_pO_p)^{n-c}\oplus O_p^c,\bbJ_n))},
\end{equation}
where $c=\lfloor n/2\rfloor$ and where $\vol(U_p)$ denotes the volume of
an open compact subgroup $U_p\subseteq G^1_p$ for a Haar measure on
$G^1_p$. The right hand side of \eqref{eq:massquot} does not depend
on the choice of the Haar measure. It is easy to see that the dual
lattice $((\Pi_pO_p)^{n-c}\oplus O_p^c)^\vee$ of
$(\Pi_pO_p)^{n-c}\oplus O_p^c$ with respect to $\bbJ_n$
is equal to $O_p^{c}\oplus (\Pi_p^{-1} O_p)^{n-c}$. Therefore,
\[
\Aut_{G^1_p}((\Pi_pO_p)^{n-c}\oplus O_p^c,\bbJ_n)=
\Aut_{G^1_p}((\Pi_pO_p)^{c}\oplus O_p^{n-c},\bbJ_n).
\]
On the other hand, in the notation of Theorem~\ref{thm:sspmass}, we have
\begin{equation}
\label{eq:localquot}
\frac{\vol(\Aut_{G^1_p}(O_p^n,\bbI_n))}{\vol(\Aut_{G^1_p}((\Pi_pO_p)^{c}\oplus O_p^{n-c},\bbJ_n))}=\frac{\Mass(\Lambda_{n,p^c})}{\Mass(\Lambda_{n,p^0})}
=\frac{L_{n,p^c}}{L_{n,p^0}}=\frac{L_n(1,p)}{L_n(p,1)}
\end{equation}
by \eqref{eq:npgc}. Then the formula \eqref{eq:Massformula} follows from \eqref{eq:massquot}, \eqref{eq:localquot} and \eqref{eq:Massformula} for $(D_1,D_2)=(D,1)$.
\end{proof}
\subsection{The Gauss problem for definite quaternion Hermitian maximal lattices}\label{ssec:Gaussarith}\
In this subsection we determine for which $n$ and $(D_1,D_2)$ the class
number $H_n(D_1,D_2)$ is equal to one.
The Bernoulli numbers $B_n$ are defined by (cf. \cite[p.~91]{serre:arith})
\begin{equation}
\label{eq:Bernoulli}
\frac{t}{e^t-1}=1-\frac{t}{2} +\sum_{n=1}^\infty B_{2n}
\frac{t^{2n}}{(2n)!}.
\end{equation}
For each $n\ge 1$, we have
\begin{equation}
\label{eq:zeta2n}
B_{2n}=(-1)^{n+1} \frac{2 (2n)!}{(2\pi)^{2n}} \zeta(2n)
\end{equation}
and
\begin{equation}
\label{eq:zeta1-2n}
\frac{|\zeta(1-2n)|}{2} =
\frac{|B_{2n}|}{4n}=\frac{(2n-1)!\zeta(2n)}{(2\pi)^{2n}} .
\end{equation}
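For instance, for $n=1$ this recovers
\[
\frac{|\zeta(-1)|}{2}=\frac{|B_2|}{4}=\frac{1}{24},
\]
since $\zeta(-1)=-\frac{1}{12}$ and $B_2=\frac{1}{6}$.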
Below is a table of values of $|B_{2n}|$ and $|\zeta(1-2n)|/2$:
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
$n$ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 \\ \hline
$|B_{2n}|$ & $\frac{1}{6}$ & $\frac{1}{30}$ & $\frac{1}{42}$
& $\frac{1}{30}$ & $\frac{5}{66}$ & $\frac{691}{2730}$
& $\frac{7}{6}$ & $\frac{3617}{510}$ & $\frac{43867}{798}$
& $\frac{174611}{330}$ & $\frac{854513}{138}$ &
$\frac{236364091}{2730}$
\\ \hline
$\frac{|\zeta(1-2n)|}{2}$ & $\frac{1}{24}$ & $\frac{1}{240}$
& $\frac{1}{504}$
& $\frac{1}{480}$ & $\frac{1}{264}$ & $\frac{691}{2730\cdot 24}$
& $\frac{1}{24}$ & $\frac{3617}{510\cdot 32}$ & $\frac{43867}{798\cdot 36
}$
& $\frac{174611}{330\cdot 40}$ & $\frac{854513}{138\cdot 44}$ &
$\frac{236364091}{2730\cdot 48}$
\\ \hline
$\frac{|\zeta(1-2n)|}{2}$ & $\frac{1}{24}$ & $\frac{1}{240}$
& $\frac{1}{504}$
& $\frac{1}{480}$ & $\frac{1}{264}$
& $\frac{691}{2730\cdot 24}$
& $\frac{1}{24}$ & $0.222$ & $1.527$
& $13.228$ & $140.73$ & $1803.8$
\\ \hline
\end{tabular}
\end{center}
We have (cf.~\eqref{eq:vn})
\begin{equation}
\label{eq:valuevn}
\begin{split}
&v_1=\frac{1}{2^3\cdot 3}, \quad v_2=\frac{1}{2^7\cdot 3^2\cdot
5}, \quad v_3=\frac{1}{2^{10}\cdot 3^4 \cdot
5\cdot 7}, \\
&v_4=\frac{1}{2^{15}\cdot 3^5 \cdot
5^2\cdot 7}, \quad v_5=\frac{1}{2^{18}\cdot 3^6 \cdot
5^2\cdot 7\cdot 11}.
\end{split}
\end{equation}
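For example, writing $A_i=|\zeta(1-2i)|/2$, the first two values follow directly from the table above:
\[
v_1=A_1=\frac{1}{24}=\frac{1}{2^3\cdot 3}, \qquad
v_2=A_1A_2=\frac{1}{24}\cdot\frac{1}{240}=\frac{1}{5760}=\frac{1}{2^7\cdot 3^2\cdot 5}.
\]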
\begin{lemma}\label{lem:vn}
If $n\geq 6$, then either the numerator of $v_n$ is not one or $v_n>1$.
\end{lemma}
\begin{proof}
Put $A_n=|\zeta(1-2n)|/2$. First, by
\[ \zeta(2n)<1+\frac{1}{2^{2n}}+\int_{2}^\infty
\frac{1}{x^{2n}}dx<1+\frac{1}{2^{2n-2}}, \]
we have
\[ \frac{A_{n+1}}{A_n}> \frac{(2n+1)(2n)}{(2\pi)^2\cdot
\zeta(2n)}> \left (\frac{2n}{2\pi}\right )^2
\cdot \frac{1+\frac{1}{2n}}{1+\frac{1}{2^{2n-2}}}>1 \quad
\text{for $n\ge 3$}. \]
From the table and the fact
that $A_n$ is increasing for $n\ge 4$ which we have just proved,
we have
\[ v_n=\prod_{i=1}^6 A_i \cdot \prod_{i=7}^{11} A_i \cdot
\prod_{i=12}^n A_i
> \frac{1}{504^6}\cdot 1 \cdot (1803)^{n-11} \quad \text{for $n\ge 12$.} \]
Thus, $v_n>1$ for $n\geq 17$.
By a classical result of Clausen and von Staudt (see \cite[Theorem 3.1, p.~41]{AIK14}), we have $B_{2n}\equiv -\sum_{(p-1)|2n} \frac{1}{p} \mod 1$, where $p$ runs through the primes with $(p-1)\mid 2n$. Hence for $n\le 344$ the denominator of $B_{2n}$ is divisible only by primes $p$ with $p-1\le 688$, which excludes the prime $691$. Since $691$ divides the numerator of $A_6$ and, by \eqref{eq:zeta1-2n}, can enter a denominator $A_i$ only for $i\ge 345$, the prime $691$ divides the numerator of $v_n$ for all $6\le n\le 344$. In particular, the numerator of $v_n$ is not one for $6\le n\le 16$, which together with $v_n>1$ for $n\ge 17$ proves the lemma.
\end{proof}
\begin{corollary}\label{cor:ge6}
For $n\geq 6$, we have $H_n(D_1,D_2)>1$.
\end{corollary}
\begin{proof}
By Lemma~\ref{lem:vn}, either $v_n>1$ or
the numerator of $v_n$ is not one. From the mass formula \eqref{eq:Mass},
either $M_n(D_1,D_2)>1$ or the numerator of $M_n(D_1,D_2)$
is not one. Therefore, $H_n(D_1,D_2)>1$.
\end{proof}
\begin{proposition}\label{prop:np2}
We have $H_3(2,1)=1$, $H_3(1,2)=1$, and $H_4(1,2)=1$.
\end{proposition}
\begin{proof}
It follows from Proposition~\ref{prop:max_lattice} and Equations~\eqref{eq:L*np} and~\eqref{eq:valuevn} that
\[
M_3(1,2) = \frac{1}{2^{10} \cdot 3^2 \cdot 5} \qquad \text{ and } \qquad M_4(1,2) = \frac{1}{2^{15}\cdot 3^2 \cdot 5^2}.
\]
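Explicitly, by \eqref{eq:valuevn} and \eqref{eq:L*np},
\[
M_3(1,2)=v_3\,L_3(1,2)=\frac{(2-1)(2^6-1)}{2^{10}\cdot 3^4\cdot 5\cdot 7}=\frac{3^2\cdot 7}{2^{10}\cdot 3^4\cdot 5\cdot 7}
\quad\text{and}\quad
M_4(1,2)=v_4\,L_4(1,2)=\frac{(2^2-1)(2^6-1)}{2^{15}\cdot 3^5\cdot 5^2\cdot 7}=\frac{3^3\cdot 7}{2^{15}\cdot 3^5\cdot 5^2\cdot 7}.
\]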
It follows from \cite[Section 5]{ibukiyama} that the unique lattice $(L,h)$ in the non-principal genus $H_2(1,2)$ has an automorphism group of cardinality $1920 = 2^7 \cdot 3 \cdot 5$.
Consider the lattice $(O,p\mathbb{I}_1) \oplus (L, h)$ contained in $\calL_3(1,2)$. By Corollary~\ref{autodecomposition} we see that
\[
\Aut((O,p\mathbb{I}_1) \oplus (L, h)) \simeq \Aut((O,p\mathbb{I}_1)) \cdot \Aut((L, h)) = O^{\times} \cdot \Aut((L,h)).
\]
Since $O^{\times} = E_{24} \simeq \SL_2(\mathbb{F}_3)$ has cardinality $24$ (cf.~\cite[Equation~(57)]{karemaker-yobuko-yu}), it follows that
\[
\vert \Aut((O,p\mathbb{I}_1) \oplus (L, h)) \vert = 24 \cdot 1920 = 2^{10} \cdot 3^2 \cdot 5 = \frac{1}{M_3(1,2)},
\]
showing that the lattice $(O,p\mathbb{I}_1) \oplus (L, h)$ is unique and hence that $H_3(1,2) = 1$.
Next, consider the lattice $(L, h)^{\oplus 2}$ contained in $\calL_4(1,2)$. Again by Corollary~\ref{autodecomposition} we see that
\[
\Aut((L, h)^{\oplus 2}) \simeq \Aut((L, h))^2 \cdot C_2
\]
which has cardinality
\[
1920^2 \cdot 2 = 2^{15} \cdot 3^2 \cdot 5^2 = \frac{1}{M_4(1,2)},
\]
showing that also $(L, h)^{\oplus 2}$ is unique and therefore $H_4(1,2) = 1$.
Finally, we compute that
\[
M_3(2,1)=\frac{1}{2^{10}\cdot 3^4}=\frac{1}{24^3 \cdot 3!}=\frac{1}{|\Aut(O^3,\bbI_3)|},
\ \text{and therefore}\ H_3(2,1)=1.
\]
\end{proof}
\begin{theorem}\label{thm:mainarith}
The class number $H_n(D_1,D_2)$ is equal to one if and only if $D=p$
is a prime number and one of the following holds:
\begin{enumerate}
\item $n=1$, $(D_1,D_2)=(p,1)$ and $p\in \{2,3,5,7,13\}$;
\item $n=2$, and either $(D_1,D_2)=(p,1)$ with $p=2,3$ or
$(D_1,D_2)=(1,p)$ with $p \in \{2,3,5,7,11\}$;
\item $n=3$, and either $(D_1,D_2)=(2,1)$ or $(D_1,D_2)=(1,2)$;
\item $n=4$ and $(D_1,D_2)=(1,2)$.
\end{enumerate}
\end{theorem}
\begin{proof}
\begin{enumerate}
\item When $n=1$ we only have the principal genus class number and $H_1(D,1)$ is the class number $h(B)$ of $B$. The corresponding Gauss problem is a classical result:
$h(B)=1$ if and only if $D\in \{2,3,5,7,13\}$; see the list in \cite[p.~155]{vigneras}. We give an alternative proof of this fact for the reader's convenience.
Suppose that $H_1(D,1)=1$. Then
\begin{equation}
\label{eq:M1}
M_1(D,1)=\frac{\prod_{p|D} (p-1)}{24} =\frac{1}{m}, \quad \text{where $m\in 2\bbN $.}
\end{equation}
The discriminant $D$ has an odd number of prime divisors, since $B$ is a definite quaternion algebra. That the numerator of $M_1(D,1)$ is $1$
implies that
every prime factor $p$ of~$D$ must satisfy
$(p-1)|24$ and hence $p\in\{2,3,5,7,13\}$.
Suppose that $D$ has more than one prime divisor. Since the product of the five values $p-1$ for $p\in\{2,3,5,7,13\}$ is $576$, which does not divide $24$, the discriminant $D$ has exactly three prime divisors. Among the triples, only $D=30$, $42$, $70$ and $78$ satisfy $\prod_{p|D}(p-1)\mid 24$, and of these only $D=2\cdot 3\cdot 7=42$ yields an even quotient $m$ in \eqref{eq:M1}. Using the class number formula
(see \cite{eichler-CNF-1938, vigneras}, cf. Pizer~\cite[Theorem 16, p.~68]{pizer:arith})
\[
H_1(D,1)=\frac{\prod_{p|D} (p-1)}{12} +\frac{1}{4} \prod_{p|D}
\left ( 1-\left (\frac{-4}{p} \right ) \right )+\frac{1}{3} \prod_{p|D}
\left ( 1-\left (\frac{-3}{p} \right ) \right ),
\]
we calculate that $H_1(42,1)=2$. Hence, $D$ must be a prime $p$, which is in $\{2,3,5,7,13\}$. Conversely, we check that $H_1(p,1)=1$ for these primes.
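Explicitly, the value $H_1(42,1)=2$ arises as follows: the three terms of the class number formula are
\[
\frac{1\cdot 2\cdot 6}{12}=1, \qquad
\frac{1}{4}\,(1-0)(1-(-1))(1-(-1))=1, \qquad
\frac{1}{3}\,(1-(-1))(1-0)(1-1)=0,
\]
where the Kronecker symbols are evaluated at $p=2,3,7$ in order.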
\item See Hashimoto-Ibukiyama
\cite[p.~595]{hashimoto-ibukiyama:1},
\cite[p.~696]{hashimoto-ibukiyama:2}. It remains to verify that $H_2(D_1,D_2)>1$ for the pairs $(D_1,D_2)$ not covered by the data there. Using the class number formula in \cite{hashimoto-ibukiyama:2} we compute that $M_2(1,2\cdot 3\cdot 11)=1/2$ and $H_2(1,2\cdot 3 \cdot 11)=9$. For the remaining cases, one can show that either the numerator of $M_2(D_1,D_2)$ is not equal to $1$ or $M_2(D_1,D_2)>1$, by the same argument as that used below for $n \geq 3$.
\item[(3)+(4)]
The principal genus part for $n=3$ with $D=p$ a prime is due to Hashimoto \cite{hashimoto:g=3}, based
on an explicit class number formula.
We shall prove directly that for $n\geq 3$, (3)
and (4) are the only cases for which $H_n(D_1,D_2)=1$. In particular, our proof of the principal genus part of
(3) is independent of Hashimoto's result.
By
Corollary~\ref{cor:ge6}, it is enough to treat the cases
$n=3,4,5$, so we assume this.
We have $L_{n+1}(p,1)=L_n(p,1)(p^{n+1}+(-1)^{n+1})$,
and
\[ L_2(1,p)=(p^2-1), \quad L_3(1,p)=(p-1)(p^6-1), \]
\[ L_4(1,p)=(p^2-1)(p^6-1), \quad L_5(1,p)=(p-1)(p^6-1)(p^{10}-1). \]
In particular, $(p^3-1)$ divides both $L_n(p,1)$ and $L_n(1,p)$ for
$n=3,4,5$.
Observe that if $L_n(p,1)$ or $L_n(1,p)$ has a prime factor greater than $11$,
then $H_n(D_1,D_2)>1$ for all $(D_1,D_2)$ with $p|D_1 D_2$; this follows from Proposition~\ref{prop:max_lattice} and \eqref{eq:valuevn}.
For each prime $3\le p\le 13$, we list a prime factor $d$ of $p^3-1$ which is greater than $11$:
\begin{center}
\begin{tabular}{ |c|c|c|c|c|c| }
\hline
$p$ & 3 & 5 & 7 & 11 & 13 \\ \hline
$d|p^3-1$ & 13 & 31 & 19 & 19 & 61 \\ \hline
\end{tabular}
\end{center}
Thus, $H_n(D_1,D_2)>1$ for $n=3,4,5$ and $p|D$ for some prime $p$ with $3\le p \le 13$. It remains to treat the cases $p\ge 17$ and $p=2$.
We compute that $M_3(17,1) \doteq 7.85$ and $M_4(1,17) \doteq 4.99$. One sees
that $M_3(1,17)>M_3(17,1)$, $M_5(17,1)>M_3(17,1)$ and
$M_4(17,1)>M_4(1,17)$. Therefore $M_n(p,1)>1$ and $M_n(1,p)>1$ for
$p\ge 17$. Thus, $H_n(D_1,D_2)=1$ implies that $D=2$. One
checks that $31|L_5(2,1)$, $31|L_5(1,2)$ and $17|L_4(2,1)$. Thus
\[ H_5(2,1)>1, \quad H_5(1,2)>1, \quad \text{and} \quad H_4(2,1)>1. \]
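Indeed,
\[
L_5(2,1)=1\cdot 5\cdot 7\cdot 17\cdot 31, \qquad
L_4(2,1)=1\cdot 5\cdot 7\cdot 17, \qquad
L_5(1,2)=(2-1)(2^6-1)(2^{10}-1)=3^3\cdot 7\cdot 11\cdot 31,
\]
and the prime factors $17$ and $31$ exceed $11$.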
It remains to show that $H_3(2,1)=1$, $H_3(1,2)=1$ and $H_4(1,2)=1$, which is done in Proposition~\ref{prop:np2}.
\end{enumerate}
\end{proof}
\section{The geometric theory: mass formulae and class numbers}\label{sec:GMF}
\subsection{Set-up and definition of masses}\label{ssec:not}\
For the remainder of this paper,
let $p$ be a prime number, let $g$ be a positive integer, and let $k$ be an
algebraically closed field of characteristic $p$. Unless
stated otherwise, $k$ will be the field of definition of abelian varieties.
The cardinality of a finite set $S$ will be denoted by $\vert S\vert $. Let $\alpha_p$ be the unique local-local finite group scheme of order $p$ over ${\bbF}_p$;
it is defined to be the kernel of the Frobenius morphism on the additive group $\mathbb G_a$
over ${\bbF}_p$. As before, denote by $\widehat \mathbb{Z}=\prod_{\ell} \mathbb{Z}_\ell$ the profinite completion of
$\mathbb{Z}$ and by $\A_f=\widehat \mathbb{Z}\otimes_{\mathbb{Z}} \mathbb{Q}$ the finite adele ring of $\mathbb{Q}$. Let $B_{p,\infty}$ denote the definite quaternion $\mathbb{Q}$-algebra of discriminant $p$.
Fix a quaternion Hermitian $B_{p,\infty}$-space $(V,f)$ of rank $g$, let $G=G(V,f)$ be the quaternion Hermitian group associated to $(V,f)$ and $G^1\subseteq G$ the subgroup consisting of elements $g \in G$ of norm $n(g)=1$. We regard $G^1$ and $G$ as algebraic groups over $\mathbb{Q}$.
For any integer $d\ge 1$, let $\calA_{g,d}$ denote the (coarse) moduli
space over $\overline{\bbF}_p$ of $g$-dimensional polarised abelian varieties
$(X,\lambda)$ with polarisation degree $\deg(\lambda)=d^2$.
An abelian variety over~$k$ is said to be \emph{supersingular} if
it is isogenous to a product of supersingular elliptic curves; it is said to be \emph{superspecial} if it is isomorphic to a product of supersingular elliptic curves.
For any $m \geq 1$, let $\calS_{g,p^m}$ be the
supersingular locus of $\calA_{g,p^m}$, which consists of all
polarised supersingular abelian varieties in $\calA_{g,p^m}$.
Then $\calS_g:=\mathcal{S}_{g,1}$ is the moduli space of
$g$-dimensional principally polarised
supersingular abelian varieties.
If $S$ is a finite set of objects with finite
automorphism groups in a specified category,
the \emph{mass} of $S$
is defined to be the weighted
sum
\[
\Mass(S):=\sum_{s\in S} \frac{1}{\vert \Aut(s)\vert }.
\]
For any $x = (X_0, \lambda_0) \in \mathcal{S}_{g,p^m}(k)$, we define
\begin{equation}\label{eq:Lambdax}
\Lambda_{x} = \{ (X,\lambda) \in \mathcal{S}_{g,p^m}(k) : (X,\lambda)[p^{\infty}] \simeq (X_0, \lambda_0)[p^{\infty}] \},
\end{equation}
where $(X,\lambda)[p^{\infty}]$ denotes the polarised $p$-divisible
group associated to $(X,\lambda)$. We define a group scheme $G_x$ over $\mathbb{Z}$ as follows. For
any commutative ring $R$, the group of its $R$-valued points is
defined by
\begin{equation}\label{eq:aut}
G_{x}(R) = \{ \alpha \in (\text{End}(X_0)\otimes _{\mathbb{Z}}R)^{\times} : \alpha^t \lambda_0 \alpha = \lambda_0\}.
\end{equation}
Since any two polarised supersingular abelian varieties are isogenous,
i.e., there exists a quasi-isogeny $\varphi: X_1\to X_2$ such that $\varphi^* \lambda_2=\lambda_1$,
the algebraic group $G_x\otimes \mathbb{Q}$ is independent of~$x$ (up to isomorphism) and it is known to be isomorphic to $G^1$. We shall fix an isomorphism $G_x\otimes \mathbb{Q} \simeq G^1$ over $\mathbb{Q}$ and regard $U_x:=G_x(\widehat \mathbb{Z})$ as an open compact subgroup of $G^1(\A_f)$.
By \cite[Theorem 2.1]{yu:2005}, there is a natural bijection between the following pointed sets:
\begin{equation}
\label{eq:smf:1}
\Lambda_x \simeq G^1(\mathbb{Q})\backslash G^1(\A_f)/U_x.
\end{equation}
In particular, $\Lambda_x$ is a finite set. The mass of $\Lambda_x$
is then defined as
\begin{equation}
\label{eq:Massx}
\mathrm{Mass}(\Lambda_{x}) = \sum_{(X,\lambda) \in \Lambda_{x}} \frac{1}{\vert
\mathrm{Aut}(X,\lambda)\vert}.
\end{equation}
If $U$ is an open compact subgroup of $G^1(\A_f)$, the \emph{arithmetic mass} for $(G^1,U)$ is defined by
\begin{equation}
\label{eq:arithmass}
\Mass(G^1,U):=\sum_{i=1}^h \frac{1}{|\Gamma_i|}, \quad \Gamma_i:=G^1(\mathbb{Q})\cap c_i U c_i^{-1},
\end{equation}
where $\{c_i\}_{i=1,\ldots, h}$ is a complete set of representatives of the double coset space
$ G^1(\mathbb{Q})\backslash G^1(\A_f)/U$.
The definition of $\Mass(G^1,U)$ is independent of the choices of representatives $\{c_i\}_i$.
Then we have the equality (cf.~ \cite[Corollary 2.5]{yu:2005})
\begin{equation}
\label{eq:smf:2}
\Mass(\Lambda_x)=\Mass(G^1,U).
\end{equation}
\subsection{Superspecial mass formulae}\label{ssec:sspmass}\
For each integer $c$ with $0 \leq c \leq \lfloor g/2 \rfloor$,
let $\Lambda_{g,p^c}$ denote the set of isomorphism classes of
$g$-dimensional polarised superspecial abelian varieties $(X,
\lambda)$ whose polarisation $\lambda$ satisfies $\ker(\lambda) \simeq
\alpha_p^{2c}$. The mass of $\Lambda_{g,p^c}$ is
\[
\mathrm{Mass}(\Lambda_{g,p^c}) = \sum_{(X,\lambda)\in \Lambda_{g,p^c}}
\frac{1}{\vert \mathrm{Aut}(X,\lambda) \vert}.
\]
Note that the $p$-divisible group
of a superspecial abelian variety of given dimension is unique up to
isomorphism. Furthermore, the polarised $p$-divisible group associated
to any member in~$\Lambda_{g,p^c}$ is unique up to isomorphism,
cf.~\cite[Proposition 6.1]{lioort}. Therefore,
if $x = (X_0, \lambda_0)$ is any member in $\Lambda_{g,p^c}$, then we have
$\Lambda_x = \Lambda_{g,p^c}$ (cf.~\eqref{eq:Lambdax}). In
particular, the mass $\Mass(\Lambda_{g,p^c})$ of the superspecial locus $\Lambda_{g,p^c}$ is a special case of $\Mass(\Lambda_x)$.
We fix a supersingular elliptic curve $E$ over
$\mathbb{F}_{p^2}$ such that its Frobenius endomorphism $\pi_E$ satisfies $\pi_E=-p$, and let ${E_k}=E\otimes_{\mathbb{F}_{p^2}} k$.
It is known that every polarisation on ${E^g_k}$ is
defined over $\mathbb{F}_{p^2}$, that is, it descends
uniquely to a polarisation on $E^g$ over $\mathbb{F}_{p^2}$.
For each integer~$c$ with $0\leq c \leq \lfloor g/2 \rfloor$, we denote by $P_{p^c}(E^g)$ the set of isomorphism classes of polarisations $\mu$ on $E^g$ such that $\mathrm{ker}(\mu) \simeq \alpha_p^{2c}$; we define $P_{p^c}({E^g_k})$ similarly, and have the identification $P_{p^c}({E^g_k})=P_{p^c}(E^g)$.
As superspecial abelian varieties of dimension $g>1$ are unique up to isomorphism, there is a bijection $P_{p^c}(E^g) \simeq \Lambda_{g,p^c}$ when $g>1$.
For brevity, we shall also write $P(E^g)$ for $P_1(E^g)$.
\begin{theorem}\label{thm:sspmass}
For any $g \ge 1$ and $0 \leq c \leq \lfloor g/2 \rfloor$, we have
\[ \mathrm{Mass}(\Lambda_{g,p^c})=v_g \cdot L_{g,p^c},\]
where $v_g$ is defined in \eqref{eq:vn} and where
\begin{equation}
\label{eq:Lgpc}
L_{g,p^c} =\prod_{i=1}^{g-2c} (p^i + (-1)^i)\cdot \prod_{i=1}^c
(p^{4i-2}-1)
\cdot \frac{\prod_{i=1}^g
(p^{2i}-1)}{\prod_{i=1}^{2c}(p^{2i}-1)\prod_{i=1}^{g-2c} (p^{2i}-1)}.
\end{equation}
\end{theorem}
\begin{proof}
This follows from \cite[Proposition
3.5.2]{harashita} by the functional equation for $\zeta(s)$. See \cite[p.~159]{ekedahl} and \cite[Proposition
9]{hashimoto-ibukiyama:1} for the case where $c=0$ (the principal genus case). See
also
\cite{yu2} for a geometric proof in the case where $g=2c$ (the non-principal genus case).
\end{proof}
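For example, for $g=2$ and $c=1$ the first product in \eqref{eq:Lgpc} is empty and the third factor equals $1$, so that
\[
L_{2,p}=p^2-1 \qquad\text{and}\qquad \Mass(\Lambda_{2,p})=v_2\cdot(p^2-1)=\frac{p^2-1}{2^7\cdot 3^2\cdot 5}.
\]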
Clearly, $L_{g,p^0}=L_g(p,1)$. One can also see from \eqref{eq:Lgpc} that
for $c= \lfloor g/2 \rfloor$,
\begin{equation}
\label{eq:npgc}
L_{g,p^c}= \begin{cases}
\prod_{i=1}^c (p^{4i-2}-1) & \text{if $g=2c$ is even;} \\
\frac{(p-1) (p^{4c+2}-1)}{p^2-1} \cdot \prod_{i=1}^c (p^{4i-2}-1) & \text{if $g=2c+1$ is odd,}
\end{cases}
\end{equation}
and therefore $L_{g,p^c}=L_g(1,p)$, cf.~\eqref{eq:L*np}.
For $g=5$ and $c=1$, one has
\begin{equation}
\label{eq:Lambda5p}
\Mass(\Lambda_{5,p})=v_5 \cdot (p-1)(p^2+1)(p^3-1)(p^4+1)(p^{10}-1),
\end{equation}
noting that this case is different from either the principal genus or the non-principal genus case.
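Indeed, taking $g=5$ and $c=1$ in \eqref{eq:Lgpc} gives
\[
L_{5,p}=(p-1)(p^2+1)(p^3-1)\cdot(p^2-1)\cdot\frac{(p^8-1)(p^{10}-1)}{(p^2-1)(p^4-1)}
=(p-1)(p^2+1)(p^3-1)(p^4+1)(p^{10}-1),
\]
using $(p^8-1)/(p^4-1)=p^4+1$.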
\begin{lemma}\label{lem:poly}
For any $g \ge 1$ and $0 \leq c \leq \lfloor g/2 \rfloor$,
the local component $L_{g,p^c}$ in \eqref{eq:Lgpc}
is a polynomial in $p$ over $\mathbb{Z}$ of degree
$(g^2+4gc-8c^2+g-2c)/2$. Furthermore, the minimal degree occurs precisely when $c=0$ if $g$ is odd and when $c= g/2$ if $g$ is even.
\end{lemma}
\begin{proof}
It suffices to show that the term
\[ A:=\frac{\prod_{i=1}^g
(p^{2i}-1)}{\prod_{i=1}^{2c}(p^{2i}-1)\prod_{i=1}^{g-2c} (p^{2i}-1)} \]
is a polynomial in $p$ with coefficients in $\mathbb{Z}$. Notice that
$A=[g;2c]_{p^2}$, where
\[ [n;k]_q:=\frac{\prod_{i=1}^n (q^i-1)}{\prod_{i=1}^k(q^i-1)\cdot \prod_{i=1}^{n-k}(q^i-1)}, \quad n\in \bbN, \ k=0,\dots, n. \]
It is known that $[n;k]_q\in \mathbb{Z}[q]$; cf.~\cite{exton}. Alternatively, one considers the recursive relation $[n~+~1~;~k]_q=[n;k]_q+q^{n-k+1} [n;k-1]_q$ and concludes that $[n,k]_q\in \mathbb{Z}[q]$ by induction.
The degree of $L_{g, p^c}$ is
\begin{equation}
\label{eq:degree}
\begin{split}
&\sum_{i=1}^{g-2c} i + \sum_{i=1}^c (4i-2) + \sum_{i=g-2c+1} ^g 2i - \sum_{i=1}^{2c} 2i \\
&= \frac{1}{2}\left [(g-2c)(g-2c+1)+c\cdot 4c+2c\cdot(4g-4c+2)-2c(4c+2) \right ] \\
&= \frac{1}{2}\left [ g^2+4gc-8c^2+g-2c \right ].
\end{split}
\end{equation}
The degree is a polynomial function of degree 2 in $c$ with negative leading coefficient. So the minimum occurs either at $c=0$ or at $c=\lfloor g/2 \rfloor$; the former happens if $g$ is odd and the latter happens
if $g$ is even.
\end{proof}
If $g=2m$ is even, then the polynomial $L_{g,1}$ has degree $g(g+1)/2=2m^2+m$ and $L_{g,p^m}$ has degree $2m^2$.
\subsection{Mass formulae and class number formulae for
supersingular abelian surfaces and threefolds}
\subsubsection{Non-superspecial supersingular abelian surfaces}\label{ssec:cng2}\
Let $x=(X_0,\lambda_0)$ be a principally polarised supersingular abelian surface over $k$. If $X_0$ is superspecial, then $\Lambda_x=\Lambda_{2,p^0}$ and the class number formula for $|\Lambda_{2,p^0}|$ is obtained in \cite{hashimoto-ibukiyama:1}. We assume that $X_0$ is not superspecial, that is, $a(X_0)=1$. In this case there is a unique (up to isomorphism) polarised superspecial abelian surface $(Y_1,\lambda_1)$ such that $\ker(\lambda_1) \simeq \alpha_p^2$ and an isogeny $\phi:(Y_1,\lambda_1)\to (X_0,\lambda_0)$ of degree $p$ which is compatible with polarisations. Furthermore, there is a unique polarisation $\mu_1$ on $E^2$ with $\ker(\mu_1) \simeq \alpha_p^2$ such that $(Y_1,\lambda_1)\simeq (E^2,\mu_1)\otimes_{\mathbb{F}_{p^2}} k$.
Then $x$ corresponds to a point $t$ in $\bbP^1(k)=\bbP^1_{\mu_1}(k):=\{\text{isogenies } \phi_1:(E^2,\mu_1)\otimes k \to (X,\lambda) \text{ of degree } p \}$, called the Moret-Bailly parameter for $(X_0,\lambda_0)$.
The condition $a(X_0)=1$ implies that $t\in \bbP^1(k)\setminus \bbP^1(\mathbb{F}_{p^2})=k \setminus \mathbb{F}_{p^2}$.
We consider two different cases, corresponding to the structures of $\End(X_0)$: the case $t\in k\setminus \mathbb{F}_{p^4}$, which we call the first case (I), and the case $t\in \mathbb{F}_{p^4} \setminus \mathbb{F}_{p^2}$, called the second case (II). The following explicit formula for the class number of a non-superspecial supersingular ``genus'' $\Lambda_x$ is due to the first-named author \cite{ibukiyama}.
\begin{theorem}\label{thm:nsspg2}
Let $x=(X_0,\lambda_0)$ be a principally polarised supersingular abelian surface over~$k$ with $a(X_0)=1$ and let $h$ be the cardinality of $\Lambda_x$.
\begin{enumerate}
\item In case (I), i.e., when $t\in k \setminus \mathbb{F}_{p^4}$, we have
\[ h=
\begin{cases}
1 & \text{if $p=2$}; \\
\frac{p^2(p^4-1)(p^2-1)}{5760} & \text{if $p\ge 3$}.
\end{cases}
\]
\item In case (II), i.e., when $t\in \mathbb{F}_{p^4} \setminus \mathbb{F}_{p^2}$, we have
\[ h=
\begin{cases}
1 & \text{if $p=2$}; \\
\frac{p^2(p^2-1)^2}{2880} & \text{if } p\equiv \pm 1 \bmod 5 \text{ or } p=5; \\
1+\frac{(p-3)(p+3)(p^2-3p+8)(p^2+3p+8)}{2880} & \text{if } p\equiv \pm 2 \bmod 5. \\
\end{cases}
\]
\item For each case, we have $h=1$ if and only if $p=2,3$.
\end{enumerate}
\end{theorem}
\begin{proof}
Parts (1) and (2) follow from Theorems 1.1 and 3.6 of \cite{ibukiyama}.
Part (3) follows from the table in Section 1 of \cite{ibukiyama}.
\end{proof}
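For instance, part (3) may be checked directly against the formulae in parts (1) and (2): for $p=3$, case (I) gives $h=\frac{3^2(3^4-1)(3^2-1)}{5760}=\frac{9\cdot 80\cdot 8}{5760}=1$ and, since $3\equiv -2 \bmod 5$, case (II) gives $h=1+0=1$, whereas for $p=5$, case (II) gives $h=\frac{5^2(5^2-1)^2}{2880}=5>1$.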
\begin{theorem}\label{thm:massg2}
Let $x=(X_0,\lambda_0)$ and $t\in \bbP^1(k)$ be as in
Theorem~\ref{thm:nsspg2}. Then
\begin{equation}
\label{eq:massg2}
\Mass(\Lambda_{x})=\frac{L_p}{5760} ,
\end{equation}
with
\[ L_p=
\begin{cases}
(p^2-1)(p^4-p^2), & \text{ if } t\in \bbP^1(\mathbb{F}_{p^4}) \setminus \bbP^1(\mathbb{F}_{p^2});\\
2^{-e(p)}(p^4-1)(p^4-p^2) & \text{ if }t\in \bbP^1(k) \setminus \bbP^1(\mathbb{F}_{p^4}),\\
\end{cases}
\]
where $e(p)=0$ if $p=2$ and $e(p)=1$ if $p>2$.
\end{theorem}
\begin{proof}
See \cite[Theorem 1.1]{yuyu}; also cf.~\cite[Proposition 3.3]{ibukiyama}.
\end{proof}
\begin{corollary}\label{cor:p2g2aut}
Let $p=2$, and let $x'=(X',\lambda')$ be a principally polarised supersingular abelian surface over $k$ with $a(X')=1$. Let $\varphi: (E^2_k, \mu_1)\to (X',\lambda')$ be the minimal isogeny with Moret-Bailly parameter $t\in \bbP^1(k)\setminus\bbP^1(\mathbb{F}_{p^2})$,
where $\mu_1$ is a polarisation on $E^2$ such that $\ker (\mu_1) \simeq \alpha_p^2$. Then
\begin{equation}
\label{eq:autg2}
\vert \Aut(X',\lambda') \vert=
\begin{cases}
160, & \text{ if } t\in \bbP^1(\mathbb{F}_{p^4}) \setminus \bbP^1(\mathbb{F}_{p^2});\\
32 & \text{ if } t \in \bbP^1(k) \setminus \bbP^1(\mathbb{F}_{p^4}).
\end{cases}
\end{equation}
\end{corollary}
\begin{proof}
By Theorem~\ref{thm:nsspg2}, we have $|\Lambda_{x'}|=1$ in both cases. The mass formula (cf.~Theorem~\ref{thm:massg2}) for $p=2$ yields
\[
\Mass(\Lambda_{x'})=
\begin{cases}
1/160, & \text{ if } t\in \bbP^1(\mathbb{F}_{p^4}) \setminus \bbP^1(\mathbb{F}_{p^2});\\
1/32 & \text{ if } t\in \bbP^1(k) \setminus \bbP^1(\mathbb{F}_{p^4}).\\
\end{cases}
\]
This proves \eqref{eq:autg2}.
\end{proof}
\subsubsection{Supersingular abelian threefolds}\label{ssec:mfg3}\
We briefly describe the framework of polarised
flag type quotients as developed in
\cite{lioort}.
Let $E/\mathbb{F}_{p^2}$ be the elliptic curve fixed in Section~\ref{ssec:sspmass}.
An $\alpha$-group of rank $r$ over an ${\bbF}_p$-scheme~$S$
is a finite flat group scheme which is Zariski-locally
isomorphic to $\alpha_p^r$. For an abelian scheme $X$ over $S$, put
$X^{(p)}:=X\times_{S,F_S} S$, where $F_S:S\to S$ denotes the absolute Frobenius morphism on $S$. Denote by $F_{X/S}:X\to X^{(p)}$ and $V_{X/S}: X^{(p)}\to X$ the relative Frobenius and Verschiebung morphisms, respectively. If $f:X\to Y$ is a morphism of abelian varieties, we also write $X[f]$ for $\ker(f)$.
\begin{definition}\label{def:PFTQ}(cf.~\cite[Section 3]{lioort}) Let $g$ be a positive integer.
\begin{enumerate}
\item For any polarisation $\mu$ on $E^g$ such that $\ker(\mu)=E^g[F]$ if $g$ is even and $\ker(\mu) = 0$ otherwise,
a $g$-dimensional \emph{polarised flag
type quotient (PFTQ)} with respect to $\mu$ is a chain of polarised
abelian varieties over a base $\mathbb{F}_{p^2}$-scheme $S$
\[ (Y_\bullet,\rho_\bullet):(Y_{g-1},\lambda_{g-1}) \xrightarrow{\rho_{g-1}} (Y_{g-2},\lambda_{g-2})\cdots \xrightarrow{\rho_{2}}(Y_1,\lambda_1) \xrightarrow{\rho_1} (Y_0, \lambda_0),\]
such that:
\begin{itemize}
\item [(i)] $(Y_{g-1},\lambda_{g-1}) = ({E^g}, p^{\lfloor (g-1)/2 \rfloor}\mu)\times_{{\rm Spec}\, \mathbb{F}_{p^2}} S$;
\item [(ii)] $\ker(\rho_i)$ is an $\alpha$-group of rank $i$ for $1\le i\le g-1$;
\item [(iii)] $\ker(\lambda_i) \subseteq Y_i [\mathsf{V}^j \circ \mathsf{F}^{i-j}]$
for $0\le i\le g-1$ and $0\le j\le \lfloor i/2 \rfloor$, where
$\mathsf{F}=F_{Y_i/S}$ and $\mathsf{V}=V_{Y_i/S}$.
\end{itemize}
An isomorphism of $g$-dimensional polarised flag type quotients is a
chain of isomorphisms $(\alpha_i)_{0\le i \le g-1}$ of polarised abelian
varieties such that $\alpha_{g-1}={\rm id}_{Y_{g-1}}$.
\item A $g$-dimensional polarised flag
type quotient $(Y_\bullet,\rho_\bullet)$ is said to be
\emph{rigid} if
\[ \ker(Y_{g-1}\to Y_i)=\ker (Y_{g-1}\to Y_0)\cap
Y_{g-1}[\mathsf{F}^{g-1-i}], \quad \text{for $1\le i \le g-1$}. \]
\item Let $\mathcal{P}_{\mu}$ (resp.~$\calP'_\mu$) denote the moduli
space over $\mathbb{F}_{p^2}$ of
$g$-dimensional (resp.~rigid) polarised flag
type quotients with respect to $\mu$.
\end{enumerate}
\end{definition}
Now let $g=3$. According to \cite[Section 9.4]{lioort}, $\mathcal{P}_{\mu}$ is
a two-dimensional geometrically irreducible scheme over $\mathbb{F}_{p^2}$.
The projection to the last member gives
a proper $\overline{\mathbb{F}}_p$-morphism
\begin{align*}
\mathrm{pr}_0 : \mathcal{P}_{\mu} & \to \mathcal{S}_{3,1}, \\
(Y_\bullet, \rho_\bullet) & \mapsto (Y_0, \lambda_0).
\end{align*}
Moreover, for each principally polarised supersingular abelian
threefold $(X,\lambda)$ there exist a $\mu \in P(E^3)$ and a
polarised flag type quotient $y \in \mathcal{P}_{\mu}$ such that
$\mathrm{pr}_0(y) = [(X, \lambda)] \in \mathcal{S}_{3,1}$, cf.~\cite[Proposition 5.4]{katsuraoort}. Put differently, the morphism
\begin{equation}\label{eq:moduli}
\mathrm{pr}_0: \coprod _{\mu \in P(E^3)}\mathcal{P}_{\mu} \rightarrow \mathcal{S}_{3,1}
\end{equation}
is surjective and generically finite.
We define the mass function on $\calP_\mu(k)$ as follows:
\begin{equation}
\label{eq:massfcn}
\Mass: \calP_\mu(k) \to \mathbb{Q}, \quad \Mass(y):=\Mass(\Lambda_x), \ x=\mathrm{pr}_0(y).
\end{equation}
We now describe the structure of $\calP_{\mu}$. First of all, this structure is independent of the choice of $\mu$;
see \cite[Section 3.10]{lioort}.
The map
\[ \pi: ((Y_2,\lambda_2) \to (Y_1,\lambda_1) \to (Y_0, \lambda_0)) \mapsto
((Y_2,\lambda_2) \to (Y_1,\lambda_1)) \]
induces a morphism $\pi: \calP_\mu \to \bbP^2$ whose image is isomorphic to the Fermat curve $C$ defined by the equation $X_{1}^{p+1}+X_{2}^{p+1}+X_{3}^{p+1} = 0$.
Moreover, as a fibre space over $C$, $\calP_\mu$ is isomorphic to $\mathbb{P}_{C}(\mathcal{O}(-1)\oplus \mathcal{O}(1))$;
see \cite[Sections 9.3-9.4]{lioort} and~\cite[Proposition 3.5]{karemaker-yobuko-yu}. According to \cite[Section 9.4]{lioort} (cf.~\cite[Definition 3.14]{karemaker-yobuko-yu}),
there is a section $s:C\to \calP_\mu$ of $\pi$ with image $T:=s(C)\subseteq \calP_\mu$. Furthermore, one has $\calP'_\mu=\calP_\mu\setminus T$.
We pull back the $a$-number function on $\calS_{3}$ to $\calP_\mu$ by setting $a(y):=a(\pr_0(y))$ for $y\in \calP_\mu(k)$.
We shall write a point $y\in \calP_\mu(k)$ as $(t,u)$, where $t=\pi(y)$ and $u\in \pi^{-1}(t)=: \bbP^1_t(k)$.
\begin{lemma}\label{lm:a_strata}
Let $y=(t,u) \in \calP_\mu(k)$ be a point corresponding to a PFTQ.
\begin{itemize}
\item [(i)] If $y\in T$ then $a(y)=3$.
\item [(ii)] If $t\in C(\mathbb{F}_{p^2})$, then $a(y)\ge 2$. Moreover, $a(y)=3$ if and only if $u\in \bbP^1_t(\mathbb{F}_{p^2})$.
\item [(iii)] We have $a(y)=1$ if and only if $y\notin T$ and $t\not\in C(\mathbb{F}_{p^2})$.
\end{itemize}
\end{lemma}
\begin{proof}
See \cite[Sections 9.3-9.4]{lioort}.
\end{proof}
\begin{theorem}\label{introthm:a2}
Let $y = (t,u) \in \mathcal{P}_{\mu}(k)$ be a point such that $t\in C(\mathbb{F}_{p^2})$.
Then
\[
\mathrm{Mass}(y)=\frac{L_p}{2^{10}\cdot 3^4\cdot 5\cdot 7},
\]
where
\[
L_p=
\begin{cases}
(p-1)(p^2+1)(p^3-1) & \text{if } u\in
\mathbb{P}_t^1(\mathbb{F}_{p^2}); \\
(p-1)(p^3+1)(p^3-1)(p^4-p^2) & \text{if }
u\in\mathbb{P}_t^1(\mathbb{F}_{p^4})\setminus
\mathbb{P}_t^1(\mathbb{F}_{p^2}); \\
2^{-e(p)}(p-1)(p^3+1)(p^3-1) p^2(p^4-1) & \text{ if
} u \not\in
\mathbb{P}_t^1(\mathbb{F}_{p^4}),
\end{cases}
\]
where $e(p)=0$ if $p=2$ and $e(p)=1$ if $p>2$.
\end{theorem}
\begin{proof}
See \cite[Theorem A]{karemaker-yobuko-yu}.
\end{proof}
Theorem~\ref{introthm:a2} gives the mass formula for points with $a$-number greater than or equal to $2$.
To describe the mass formula for points with $a$-number $1$, we need the construction of an auxiliary divisor $\calD\subseteq \calP'_\mu$, cf.~\cite[Definition 5.16]{karemaker-yobuko-yu}, and a function $d:C(k) \setminus C(\mathbb{F}_{p^2})\to \{3,4,5,6\}$, cf.~\cite[Definition 5.12]{karemaker-yobuko-yu}, which is proven in \cite[Proposition 5.13]{karemaker-yobuko-yu} to be related to the field of definition of the parameter $t$. The function $d$ is surjective when $p\neq 2$, and it only takes the value $3$ when $p=2$.
\begin{theorem}\label{introthm:a1}
Let $y = (t,u) \in \mathcal{P}'_{\mu}(k)$ be a point such that $t\not\in C(\mathbb{F}_{p^2})$.
Then
\[
\mathrm{Mass}(y)=\frac{p^3 L_p}{2^{10}\cdot 3^4\cdot 5\cdot 7},
\]
where
\[
\begin{split}
L_p = \begin{cases}
2^{-e(p)}p^{2d(t)}(p^2-1)(p^4-1)(p^6-1) & \text{ if } y \notin \calD; \\
p^{2d(t)}(p-1)(p^4-1)(p^6-1) & \text{ if } t \notin C(\mathbb{F}_{p^6}) \text{ and } y \in \calD; \\
p^6(p^2-1)(p^3-1)(p^4-1) & \text{ if } t \in C(\mathbb{F}_{p^6}) \text{ and } y \in \calD.
\end{cases}
\end{split}
\]
\end{theorem}
\begin{proof}
See \cite[Theorem B]{karemaker-yobuko-yu}.
\end{proof}
\begin{remark}
In \cite{karemaker-yobuko-yu} the authors define a stratification on $\calP_\mu$ and $\calS_{3}$ which is the coarsest one such that the mass function is constant on each stratum.
Using Theorem~\ref{introthm:a2}, the locus of $\calS_{3}$ with $a$-number $\ge 2$ decomposes into three strata: one stratum with $a$-number $3$ and two strata with $a$-number~$2$.
In the locus with $a$-number $1$, the stratification depends on $p$. When $p=2$, the $d$-value is always $3$ and Theorem~\ref{introthm:a1} gives three strata, which are of dimension $0$, $1$, $2$, respectively. When $p\neq 2$, the $d$-value $d(t)=3$ if and only if $t\in C(\mathbb{F}_{p^6})$, cf. \cite[Proposition 5.13]{karemaker-yobuko-yu}. In this case, Theorem~\ref{introthm:a1} says that the mass function depends only on the $d$-value of $t$ and on whether or not $y\in \calD$, and hence it gives eight strata. The largest stratum is the open subset whose preimage consists of points $y=(t,u)$ with $d(t)=6$ and $y\not\in \calD$, and the smallest mass-value stratum is the zero-dimensional locus whose preimage consists of points $y=(t,u)$ with $d(t)=3$ and $y\in \calD$. Note that the mass-value strata for which the points $y=(t,u)$ have $d$-value less than~6 and are in the divisor $\calD$ are also zero-dimensional.
Besides the superspecial locus, in which points have $a$-number three,
the smaller mass-value stratum with $a$-number $2$ also has dimension $0$.
For every point $x$ in the largest stratum, one has
\begin{equation}
\label{eq:asympopen}
\Mass(\Lambda_x)\sim \frac{p^{27}}{2^{11}\cdot 3^4\cdot 5\cdot 7} \quad \text{as $p\to \infty$.}
\end{equation}
On the other hand, for every point $x$ in the superspecial locus, one has
\begin{equation}
\label{eq:asympsp}
\Mass(\Lambda_x)\sim \frac{p^{6}}{2^{10}\cdot 3^4\cdot 5\cdot 7} \quad \text{as $p\to \infty$.}
\end{equation}
\end{remark}
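The leading behaviour in \eqref{eq:asympopen} can be checked numerically against Theorem~\ref{introthm:a1}. The following sketch (in Python; the normalisation $e(p)=1$ for odd $p$ is an assumption made here for illustration, consistent with the extra factor of $2$ in the denominator of \eqref{eq:asympopen}) evaluates the mass in the generic case $d(t)=6$, $y\notin\calD$, and compares it with the stated asymptotic.

```python
from fractions import Fraction

def mass_generic(p, d=6, e=1):
    # Mass(y) = p^3 · L_p / (2^10 · 3^4 · 5 · 7), where in the generic case
    # y ∉ D we have  L_p = 2^{-e(p)} p^{2 d(t)} (p^2-1)(p^4-1)(p^6-1).
    # Taking e(p) = 1 is an assumption for odd p, consistent with the
    # factor 2^11 in the denominator of the asymptotic p^27/(2^11·3^4·5·7).
    Lp = Fraction(p**(2 * d) * (p**2 - 1) * (p**4 - 1) * (p**6 - 1), 2**e)
    return Fraction(p**3, 2**10 * 3**4 * 5 * 7) * Lp

p = 10**4
asymptotic = Fraction(p**27, 2**11 * 3**4 * 5 * 7)
print(float(mass_generic(p) / asymptotic))  # tends to 1 as p grows
```

The exact ratio is $(1-p^{-2})(1-p^{-4})(1-p^{-6})$, which indeed tends to $1$ as $p\to\infty$.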
From all known examples, we observe that the mass $\Mass(\Lambda_x)$ is a polynomial function in $p$ with $\mathbb{Q}$-coefficients. It is plausible to expect that this holds true for any $x$ in $\calS_g$ and for arbitrary~$g$.
Under this assumption, it is of interest to determine the largest degree of $\Mass(\Lambda_x)$, viewed as a polynomial in $p$. In all known examples, the smallest degree occurs exactly when $x$ is superspecial, in which case it equals $g(g+1)/2$. This is indeed the case for any $g$; we will give a proof of this elsewhere.
\newpage
\section{The geometric theory: Automorphism groups of polarised abelian varieties}\label{sec:aut}
\subsection{Powers of an elliptic curve}\label{ssec:powers}\
Let $E$ be an elliptic curve over a field $K$ with canonical polarisation $\lambda_E$ and let $(X_0, \lambda_0) = (E^n, \lambda_{\mathrm{can}})$, where $\lambda_{\mathrm{can}} = \lambda_E^n$ equals the product polarisation on $E^n$. Denote by $R:=\End(E)$ the endomorphism ring of $E$ over $K$ and by $B=\End^0(E)$ its endomorphism algebra; $B$ carries the canonical involution $a \mapsto \bar a$.
Then $B$ is either $\mathbb{Q}$, an imaginary quadratic field, or the definite quaternion $\mathbb{Q}$-algebra $B_{p,\infty}$ of discriminant $p$.
We identify the endomorphism ring $\End(E^t)=\{a^t: a\in \End(E)\}$ with $\End(E)^{\rm opp}$. Via the isomorphism $\lambda_E$, the (anti-)isomorphism $\End(E^t)=\End(E)^{\rm opp} \stackrel{\sim}{\longrightarrow} \End(E)$ maps $a^t$ to $\lambda_E^{-1} a^t \lambda_E=\bar a$. In other words, using the polarisation $\lambda_E$ we identify $\End(E)^{\rm opp}=\End(E^t)$ with
$\{\bar a: a \in \End(E)\}$.
The set $\Hom(E,X_0)=R^n$ is a free right $R$-module whose elements we view as column vectors. It carries a left $\End(X_0)$-module structure and it follows that
$\End(X_0)=\Mat_n(R)=\End_{R}(R^n)$ and $\End^0(X_0)=\Mat_n(B)=\End_{B}(B^n)$, where $B^n$ naturally identifies with $\Hom(E,X_0)\otimes \mathbb{Q}$.
The map $\End(X_0) \to \End(X_0^t)$, sending $a$ to its dual $a^t$, induces an isomorphism of rings $\End(X_0)^{\rm opp}\simeq \End(X_0)$.
The Rosati involution on $\End^0(X_0)=\Mat_n(B)$ induced by $\lambda_0$ is given by $A \mapsto A^* = \bar{A}^T$. Let $\calH_n(B)$ be the set of positive-definite Hermitian\footnote{Strictly speaking, one should call such a matrix $H$ symmetric, Hermitian or quaternion Hermitian according to whether
$B$ is $\mathbb{Q}$, an imaginary quadratic field, or $B_{p,\infty}$.} matrices $H$ in $\Mat_n(B)$, satisfying $H=H^*$ and $v^* H v>0$ for every non-zero vector $v\in B^n$. Let $\calP(X_0)_{\mathbb{Q}}$ denote the set of fractional polarisations on $X_0$.
\begin{lemma}\label{lm:PH}
The map $\lambda \mapsto \lambda_0^{-1} \lambda$ gives a bijection
$\calP(X_0)_\mathbb{Q} \stackrel{\sim}{\longrightarrow} \calH_n(B)$, under which $\lambda_0$ corresponds to the identity~${\mathbb I}_n$.
\end{lemma}
\begin{proof}
This is shown in \cite[7.12-7.14]{OortEO} for the case where $X_0$ is a superspecial abelian variety over an algebraically closed field of characteristic $p$ and the same argument holds for the present situation.
\end{proof}
For each $H\in \calH_n(B)$, we define a Hermitian form on $B^n$ by
\begin{equation}
\label{eq:hBn}
h: B^n\times B^n \to B, \quad h(v_1, v_2):= v_1 ^* \cdot H \cdot v_2, \quad \text{$v_1, v_2\in B^n$}.
\end{equation}
If $H=\lambda_0^{-1} \lambda$ is the corresponding Hermitian form for $\lambda$, then the Rosati involution induced by~$\lambda$ is the adjoint of $h$: $A\mapsto H^{-1} \cdot A^* \cdot H$.
The correspondence mentioned above induces an identification of automorphism groups
\begin{equation}\label{eq:AutAut0}
\mathrm{Aut}(X_0, \lambda) = \mathrm{Aut}(R^n, h):=
\{ A \in \mathrm{GL}_n(B) \text{ s.t. } A(R^n) = R^n \text{ and } A^* \cdot H \cdot A = H \}.
\end{equation}
In particular, the identification \eqref{eq:AutAut0} induces an identification of automorphism groups
\begin{equation}\label{eq:AutAut1}
\mathrm{Aut}(X_0, \lambda_0) = \mathrm{Aut}(R^n, \mathbb{I}_n),
\end{equation}
and we know that
\begin{equation}\label{eq:AutAut2}
\begin{split}
\mathrm{Aut}(R^n, \mathbb{I}_n)
&= \{ A \in \mathrm{GL}_n(R) \text{ s.t. } A^* \cdot A = \mathbb{I}_n \} \\
&\simeq (R^\times)^n \cdot S_n,
\end{split}
\end{equation}
where the last equality follows from the analogous result in
\cite[Theorem 6.1]{karemaker-yobuko-yu}. \\
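Identity \eqref{eq:AutAut2} can be verified by brute force in a small illustrative case. As an assumption for illustration only, take $R=\mathbb{Z}[i]$, so $R^\times=\{\pm 1,\pm i\}$, and $n=2$; the prediction is $|R^\times|^n\, n!=4^2\cdot 2!=32$. Any matrix $A$ over $\mathbb{Z}[i]$ with $A^*A=\mathbb{I}_2$ has entries of absolute value at most $1$, so the following Python search is exhaustive.

```python
from itertools import product

# Any A in Mat_2(Z[i]) with A^* A = I has entries of absolute value <= 1,
# so the entry set {0, ±1, ±i} makes the brute-force search exhaustive.
entries = [0, 1, -1, 1j, -1j]

def is_unitary(a, b, c, d):
    # A = [[a, b], [c, d]]; check A^* A = I over the Gaussian integers
    return (
        abs(a) ** 2 + abs(c) ** 2 == 1
        and abs(b) ** 2 + abs(d) ** 2 == 1
        and a.conjugate() * b + c.conjugate() * d == 0
    )

count = sum(is_unitary(*m) for m in product(entries, repeat=4))
print(count)  # |R^x|^2 · 2! = 4^2 · 2 = 32
```

All solutions are monomial matrices with unit entries, as predicted by \eqref{eq:AutAut2}.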
If $E$ is the unique supersingular elliptic curve over $\overline \mathbb{F}_2$ up to isomorphism, then $R^\times=E_{24}\simeq \SL_2(\mathbb{F}_3)$ (cf. \cite[(57)]{karemaker-yobuko-yu}) and the automorphism group $\Aut(X_0,\lambda_0)$ has $(24)^n n!$ elements by \eqref{eq:AutAut1} and \eqref{eq:AutAut2}. We expect that this is the maximal size of $\Aut(X,\lambda)$
for any $n$-dimensional principally polarised abelian variety $(X,\lambda)$ over any field $K$. We show a partial result towards confirming this expectation.
\begin{proposition}\label{prop:maxsizeaut}
For $n\le 3$, the number $(24)^n n!$ is the maximal order of the automorphism group of an $n$-dimensional principally polarised abelian variety $(X,\lambda)$ over any field $K$.
\end{proposition}
\begin{proof}
Since any principally polarised abelian variety $(X,\lambda)$ is of finite type over the prime field of $K$, it admits a model $(X_1,\lambda_1)$ over a finitely generated $\mathbb{Z}$-algebra $S$ such that $\Aut_{K}(X,\lambda)=\Aut_S(X_1,\lambda_1)$. Taking any $\overline{\bbF}_p$-point $s$ of $S$ with residue field $k(s)$, one has $\Aut_{S}(X_1,\lambda_1)\subseteq \Aut_{\overline{\bbF}_p}((X_1, \lambda_1) \otimes_S k(s))$.
Thus, without loss of generality we may assume that the ground field~$K$ is the algebraically closed field $\overline{\bbF}_p$ for some prime $p$. Further we can assume that $(X,\lambda)$ is defined over a finite field ${\bbF}_q$ with $\End(X)=\End(X\otimes \overline{\bbF}_p)$.
Note that $\Aut(X,\lambda)$ is a finite subgroup of $\Aut(X)$ and hence a finite subgroup of $\End^0(X)^\times$. We will bound the size of $\Aut(X,\lambda)$ by a maximal finite subgroup $G$ of $\End^0(X)^\times$.
When $n=1$, it is well known that $24$ is the maximal cardinality of $\Aut(E)$ for an elliptic curve $E$ over $\overline{\bbF}_p$ for some prime $p$, and it is realised by the supersingular elliptic curve over $\overline \mathbb{F}_2$, cf.~\cite[V. Proposition 3.1, p. 145]{vigneras}.
Suppose $n=2$. If $X$ is simple, then $X$ is either ordinary or almost ordinary. By Tate's Theorem, the endomorphism algebra $\End^0(X)$ is a CM field and $G$ consists of its roots of unity, so $|G|\le 12$.
If $X$ is isogenous to $E_1\times E_2$ where $E_1$ is not isogenous to $E_2$, then $\End^0(E_1\times E_2)=\End^0(E_1)\times \End^0(E_2)$ and any maximal finite subgroup $G$ of $\End^0(E_1)^\times \times \End^0(E_2)^\times$ is of the form
$\Aut(E_1')\times \Aut(E_2')$ for elliptic curves $E_i'$ isogenous to $E_i$. This reduces to the case $n=1$ and hence $|G|\le 24^2$.
Suppose now that $X\sim E^2$. If $L=\End^0(E)$ is imaginary quadratic, then $\End^0(X)\simeq \Mat_2(L)\simeq \End^0(\widetilde E^2)$ for a complex elliptic curve $\widetilde E$ with CM by $L$. By \cite{birkenhake-lange}, $G$ has order $\le 96$.
Thus, we may assume that $E$ is supersingular so~$L$ is a quaternion algebra.
If $X$ is superspecial, then the classification of $\Aut(X,\lambda)$ has been studied by Katsura and Oort; we have $|\Aut(X,\lambda)| \le 1152$ by \cite[Table 1, p. 137]{katsuraoort:compos87}. If $X$ is non-superspecial, then $\Aut(X,\lambda) < \Aut(\widetilde X,\widetilde \lambda)$, where $(\widetilde X,\widetilde \lambda)$ is the superspecial variety determined by the minimal isogeny of $(X,\lambda)$. The classification of $\Aut(\widetilde X,\widetilde \lambda)$ has been studied by the first author~\cite{ibukiyama:autgp1989}. By \cite[Lemma 2.1 p. 132 and Remark 1, p. 343]{ibukiyama:autgp1989} $\Aut(\widetilde X,\widetilde \lambda)$ has order $\le 720$ if $p>2$ and has order $1920$ if $p=2$. For the case $p=2$ and $a(X)=1$, the automorphism group $\Aut(X,\lambda)$ has order either $32$ or $160$ by Corollary~\ref{cor:p2g2aut}.
This proves the case $n=2$.
Now let $n=3$.
Write $X\sim \prod_{i=1}^r X_i^{n_i}$ as the product of isotypic components up to isogeny, where the $X_i$'s are mutually non-isogenous simple factors. By induction and by the same argument as for $n=2$, we reduce to the case where $r=1$, that is, $X$ is elementary. Thus, we need to bound the size of maximal finite subgroups $G$ in the simple $\mathbb{Q}$-algebra $\End^0(X)$. Finite subgroups in a division ring or in a certain simple $\mathbb{Q}$-algebra have been studied by Amitsur~\cite{amitsur:55} and Nebe~\cite{nebe:98}.
A convenient list for our case is given by Hwang-Im-Kim~\cite[Section 5]{hwang-im-kim:g3}. From this list we see that $|G|\le 24^3\cdot 6$ and that equality occurs exactly when $\End^0(X)\simeq\Mat_3(B_{2,\infty})$; see Theorem 5.13 of \emph{loc.~cit.} This proves the proposition.
\end{proof}
\begin{remark}\label{rem:Autchar0}
Similarly, if $E$ is the unique elliptic curve with CM by $\mathbb{Z}[\zeta_3]$ over $\mathbb{C}$ up to isomorphism, then $R^\times=\mathbb{Z}[\zeta_3]^\times=\mu_6$ and the automorphism group $\Aut(X_0,\lambda_0)$ has $6^n n!$ elements by \eqref{eq:AutAut1} and \eqref{eq:AutAut2}. We expect that this is the maximal size of $\Aut(X,\lambda)$
for any $n$-dimensional principally polarised abelian variety $(X,\lambda)$ over any field $K$ of characteristic \emph{zero}.
\end{remark}
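For comparison, the two conjectural maximal orders $24^n\, n!$ (positive characteristic) and $6^n\, n!$ (characteristic zero) can be tabulated for small $n$. Note that $24^2\cdot 2!=1152$ matches the superspecial bound of Katsura--Oort quoted in the proof of Proposition~\ref{prop:maxsizeaut}, and $24^3\cdot 3!=24^3\cdot 6$ matches the bound from the list of Hwang--Im--Kim. A Python sketch of the count from \eqref{eq:AutAut1} and \eqref{eq:AutAut2}:

```python
from math import factorial

def max_aut(n, unit_order):
    # |Aut(E^n, λ_can)| = |R^x|^n · n!, with |R^x| = 24 for the supersingular
    # curve over F̄_2 and |R^x| = 6 for the curve with CM by Z[ζ_3] over C
    return unit_order**n * factorial(n)

for n in (1, 2, 3):
    print(n, max_aut(n, 24), max_aut(n, 6))
```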
\subsection{Varieties isogenous to a power of an elliptic curve}\
Let $E/K$ and $R = \End(E)$ be as in the previous subsection. Let $\calA$ denote the category of abelian varieties over $K$ and $\calA^{\rm pol}$ denote that of abelian varieties $(X,\lambda)$ together with a fractional polarisation over $K$; we call $(X,\lambda)$ a $\mathbb{Q}$-polarised abelian variety. Let $\calA_E$ (resp.~$\calA^{\rm pol}_E$) be the full subcategory of $\calA$ (resp.~of $\calA^{\rm pol}$) consisting of abelian varieties that are isogenous to a power of $E$ over $K$. By an $R$-lattice we mean a finitely presented torsion-free $R$-module. Denote by ${\rm Lat}_R$ and ${}_{R}{\rm Lat}$ the categories of right $R$-lattices and left $R$-lattices, respectively. We may write $R^{\rm opp}=\{a^T: a\in R\}$ with multiplication $a^T b^T:=(ba)^T$.
For a right $R$-module $M$, we write $M^{\rm opp}:=\{m^T: m\in M\}$ for the left $R^{\rm opp}$-module defined by $a^T m^T=(ma)^T$ for $a\in R$ and $m\in M$. The functor $I:M\mapsto M^{\rm opp}$
induces an equivalence of categories from ${\rm Lat}_R$ to ${}_{R^{\rm opp}}{\rm Lat}$.
A Hermitian form on $M$ here will mean a non-degenerate Hermitian form $h: M_\mathbb{Q} \times M_\mathbb{Q} \to B$ in the usual sense, where $M_\mathbb{Q}:=M\otimes \mathbb{Q}$. If $h$ takes $R$-values on $M$, we say $h$ is integral.
Let ${\rm Lat}_R^{\rm H}$ (resp. ${}_{R^{\rm opp}}{\rm Lat}^{\rm H}$) denote the category of positive-definite Hermitian right $R$-lattices (resp.~left $R^{\rm opp}$-lattices). The functor
\[
I:{\rm Lat}_R^{\rm H}\to {}_{R^{\rm opp}}{\rm Lat}^{\rm H}
\]
induces an equivalence of categories.
To each $\mathbb{Q}$-polarised abelian variety $(X,\lambda)$ in $\calA^{\rm pol}_E$, we associate a pair $(M,h)$, where
\begin{equation}\label{eq:M}
M:=\Hom(E,X)
\end{equation}
is a right $R$-lattice, and where
\begin{equation}\label{eq:h}
h=h_\lambda: M_\mathbb{Q} \times M_\mathbb{Q} \to B, \quad h_\lambda(f_1,f_2):=\lambda_E^{-1} f_1^t \lambda f_2
\end{equation}
is a pairing on $M_\mathbb{Q}$.
\begin{lemma}\label{lm:Mh}
\begin{enumerate}
\item The pair $(M,h)$ constructed above is a positive-definite Hermitian $R$-lattice. The Hermitian form $h$ is integral on $M$ if and only if $\lambda$ is a polarisation, and it is perfect if and only if $\lambda$ is a principal polarisation.
\item Let $\lambda$ be a fractional polarisation on $X_0=E^n$. Then the associated Hermitian form $h_\lambda$ on $M:=\Hom(E,X_0)=R^n$ defined in \eqref{eq:h} is the Hermitian form defined in \eqref{eq:hBn}.
\end{enumerate}
\end{lemma}
\begin{proof}
\begin{enumerate}
\item One checks that
\begin{equation}\label{eq:h1}
h(f_1 a,f_2)=\lambda_E^{-1} a^t f_1^t \lambda f_2 =(\lambda_E^{-1} a^t \lambda_E) \lambda_E^{-1} f_1^t \lambda f_2=\bar a h(f_1,f_2)
\end{equation}
and $h(f_1, f_2 a)=h(f_1,f_2)a$. Moreover,
\begin{equation}\label{eq:h2}
\overline{h(f_1,f_2)}=\lambda_E^{-1} ( \lambda_E^{-1} f_1^t \lambda f_2 )^t \lambda_E= \lambda_E^{-1} f_2^t \lambda f_1 \lambda_E^{-1} \lambda_E=h(f_2,f_1),
\end{equation}
so the $R$-lattice is indeed Hermitian.
For $f\neq 0\in M$, we have $h(f,f)=\lambda_E^{-1} f^*\lambda$. Since $f^*\lambda$ is a fractional polarisation on $E$, the composition $\lambda_E^{-1} f^*\lambda$ is a positive element in $B$, which is a positive rational number in our case. This shows that $h$ is positive-definite. The last two statements are clear as the polarisation $\lambda_E$ is principal.
\item For $f_1, f_2 \in \Hom(E,X)_\mathbb{Q}=B^n$, we have $h(f_1,f_2)=\lambda_E^{-1} f_1^t \lambda_0 \lambda_0^{-1} \lambda f_2$. If we write $f_1=(a_1, \dots, a_n)^T\in B^n$ and $\lambda_0^{-1} \lambda f_2 =(b_1, \dots, b_n)^T=:\underline b$, then $\lambda_E^{-1} f_1^t \lambda_0=(\bar a_1, \dots, \bar a_n)^T$ and $h(f_1,f_2)=\sum_{i=1}^n \bar a_i b_i= f_1^* \cdot\underline b= f_1^*\cdot H\cdot f_2$ for $H = \lambda_0^{-1} \lambda$.
\end{enumerate}
\end{proof}
The sheaf Hom functor ${\mathcal Hom}_R(-, E):{}_{R}{\rm Lat}^{\rm opp} \to \calA_E$ is fully faithful; its essential image will be denoted by $\calA_{E,\mathrm{ess}}$. We refer to \cite{JKPRST} for the construction and properties of ${\mathcal Hom}_R(-, E)$. The functor $\Hom(-, E): \calA_E \to {}_{R}{\rm Lat}^{\rm opp}$ provides the inverse on $\calA_{E,\mathrm{ess}}$. The following result can be regarded as a polarised version of the construction in \cite{JKPRST}.
\begin{proposition}\label{prop:equiv}
The functor $(X,\lambda)\mapsto (M,h)$ introduced in Equations~\eqref{eq:M} and~\eqref{eq:h} induces an equivalence of categories
\[
\calA_{E,\mathrm{ess}}^{\rm pol} \longrightarrow {\rm Lat}_R^{\rm H}.
\]
Moreover, $\lambda$ is a polarisation if and only if $h$ is integral, and it is a principal polarisation if and only if $h$ is a perfect pairing on $M$.
\end{proposition}
\begin{proof}
Let $T: \calA_E \to \calA_{E^t}$ be the functor sending $X$ to $X^t$; it induces an anti-equivalence of categories. The composition $\Hom(-, E^t)\circ T$ sends $X$ to $\Hom(X^t,E^t)$ and $I\circ \Hom(E,-)$ sends $X$ to $\Hom(E,X)^{\rm opp}$. The map that sends $f\in \Hom(E,X)$ to $f^t\in \Hom(X^t,E^t)$ gives a natural isomorphism $I\circ \Hom(E,-)\to \Hom(-, E^t)\circ T$. Restricted to $\calA_{E,\mathrm{ess}}$, the functor $\Hom(-, E^t)\circ T$ is an equivalence of categories. Therefore, $\Hom(E,-)$ induces an equivalence of categories from $\calA_{E,\mathrm{ess}}$ to ${\rm Lat}_R$.
The dual $M^t:=\Hom_R(M,R)$ of a right $R$-lattice $M$, which a priori is a left $R$-lattice, may be regarded as a right $R$-lattice via $f\cdot a:=\bar a f$.
This is simply the right $R^{\rm opp}$-module $(M^t)^{\rm opp}$ with the identification
$R^{\rm opp}=\{\bar a: a\in R\}$.
Suppose that $M = \Hom(E,X)$ is in the essential image of the equivalence, coming from some $(X,\lambda) \in \calA_{E,\mathrm{ess}}$.
We claim that the map
\begin{equation}
\label{eq:Mt}
\begin{split}
\varphi: \Hom(E,X^t) &\to M^t \\
\alpha & \mapsto (\varphi_{\alpha}: m \mapsto \lambda_E^{-1} \alpha^t m)
\end{split}
\end{equation}
is an isomorphism of right $R$-lattices. Indeed, it is injective by construction.
For surjectivity, pick any $\psi \in M^t$. Since $\psi \in \Hom(\Hom(E,X),\Hom(E,E))$ and the functor $\Hom(E,-)$ is fully faithful, there exists a unique map $\tilde{\psi}\in \Hom(X,E)$ such that $\psi(f)=\tilde{\psi} \circ f$ for all maps $f\in \Hom(E, X)=M$.
\begin{figure}[H]\label{fig:psitilde}
\begin{center}
\begin{tikzcd}
E \arrow["\psi(f)"]{r} \arrow["f"]{d} & E \\
X \arrow["\tilde{\psi}"]{ur} &
\end{tikzcd}
\end{center}
\end{figure}
Then $\psi(m) = \tilde{\psi} m$ and we have $\tilde{\psi}^t \in \Hom(E^t, X^t)$. Considering $\alpha = \tilde{\psi}^t \lambda_E \in \Hom(E,X^t)$, it follows from the construction that
\[
\varphi_{\alpha}(m) = \lambda_E^{-1} \alpha^t m = \lambda_E^{-1} \lambda_E \tilde{\psi} m = \psi(m)
\]
for all $m \in M$, hence $\psi = \varphi_{\alpha}$, which proves the claim.
To prove the proposition, it remains to show that for any $X \in \calA_{E,\mathrm{ess}}$
we have a bijection between fractional polarisations on $X$ in $\Hom(X,X^t)\otimes \mathbb{Q}$ and positive-definite Hermitian forms on $M_{\mathbb{Q}}$ in $\Hom(M,M^t) \otimes \mathbb{Q}$. By the definition $\Hom(E,X) =M$, the isomorphism $\Hom(E,X^t)\simeq M^t$, and the fact that the functor $\Hom(E,-)$ is fully faithful, the natural map $\Hom(X, X^t) \to \Hom(M, M^t)$ is an isomorphism. Note that the induced isomorphism $\Hom(X, X^t) \otimes \mathbb{Q} \to \Hom(M, M^t)\otimes\mathbb{Q}$ is the same as the construction in Equation~\eqref{eq:h}.
Hence, for every positive-definite Hermitian form $h$ on $M_{\mathbb{Q}}$, there exists a unique symmetric element $\lambda_1\in \Hom(X,X^t)_\mathbb{Q}$ such that $h_{\lambda_1} = h$ and it suffices to show that $\lambda_1$ is a fractional polarisation on $X$.
Any quasi-isogeny $\beta: X \to E^n$ induces an isomorphism $\beta_* :M \otimes \mathbb{Q} \to \Hom(E,E^n) \otimes \mathbb{Q}=B^n$ of $B$-modules.
Let $\lambda := \beta_* \lambda_1$ be the pushforward map in $\Hom(E^n,(E^n)^t) \otimes \mathbb{Q}$, and let $h_\lambda: B^n \times B^n \to B$ be the Hermitian form defined by \eqref{eq:h}. Then $\beta_*: (M_\mathbb{Q},h) \to (B^n, h_\lambda)$ is an isomorphism of $B$-modules with pairings. Since $h$ is a positive-definite Hermitian form by assumption, so is the pairing $h_\lambda$. Let $H\in \calH_n(B)$ be the positive-definite Hermitian matrix corresponding to $h_\lambda$ with respect to the standard basis. By Lemma~\ref{lm:Mh}(2), $H$ is equal to $\lambda_{\rm can}^{-1} \lambda$. Since $H\in \calH_n(B)$, by Lemma~\ref{lm:PH} the map $\lambda$ is a fractional polarisation and therefore $\lambda_1$ is a fractional polarisation, as required.
\end{proof}
By \cite[Theorem 1.1]{JKPRST} we obtain the following consequence.
\begin{corollary}\label{cor:JKPRST}
Let $E$ be an elliptic curve over a finite field $K={\bbF}_q$ with Frobenius endomorphism $\pi$ and endomorphism ring $R = \mathrm{End}(E)$. The functor $\Hom(E,-): \calA_E^{\rm pol} \to {\rm Lat}_R^{\rm H}$ induces an equivalence of categories if and only if one of the following holds:
\begin{itemize}
\item $E$ is ordinary and $\mathbb{Z}[\pi]=R$;
\item $E$ is supersingular, $K={\bbF}_p$ and $\mathbb{Z}[\pi]=R$; or
\item $E$ is supersingular, $K=\mathbb{F}_{p^2}$ and $R$ has rank $4$ over $\mathbb{Z}$.
\end{itemize}
\end{corollary}
\begin{remark}
A few results similar to Proposition~\ref{prop:equiv} exist in the literature. The first case of Corollary~\ref{cor:JKPRST} is proven in \cite{KNRR}. More precisely, when $E$ is ordinary and $R = \mathbb{Z}[\pi]$, in \cite[Theorem 3.3]{KNRR} the constructions of \cite{JKPRST} are used to derive an equivalence of categories between $\mathcal{A}^{\mathrm{pol}}_E$ and $\mathrm{Lat}_R^H$.\\
When $R = \mathbb{Z}[\pi]$, Serre's tensor construction (cf.~\cite{Lauter}) gives an analogue of Corollary~\ref{cor:JKPRST} in some cases, when replacing ${\mathcal Hom}$ with $\otimes_R E$. The tensor construction is used in \cite[Theorem A]{Amir} for a ring $R$ with positive involution, a projective finitely presented right $R$-module $M$ with an $R$-linear map $h: M \to M^t$, and an abelian scheme $A$ over a base $S$ with $R$-action via $\iota: R \hookrightarrow \End_S(A)$ and an $R$-linear polarisation $\lambda: A \to A^t$, to prove that $h \otimes \lambda: M \otimes_R A \to M^t \otimes_R A^t$ is a polarisation if and only if $h$ is a positive-definite $R$-valued Hermitian form. Also, for a superspecial abelian variety $X$ over an algebraically closed field $k$ of characteristic $p$ it is shown in \cite[7.12-7.14]{OortEO} that the functors $X \mapsto M = \Hom(E,X)$ and $M \mapsto M \otimes_R E = X$ yield bijections between principal polarisations on $X$ and positive-definite perfect Hermitian forms on $M$.
\end{remark}
For any elliptic curve $E$ over a field $K$, we know that $B = \End^0(E)$ satisfies the conditions in Section~\ref{sec:Arith} and in particular those of Corollary~\ref{autodecomposition}.
This means that when $E$ is defined over a finite field and is in one of the cases of Corollary~\ref{cor:JKPRST}, then we may apply the categorical constructions above to automorphism groups, in order to obtain the following result.
\begin{corollary}\label{cor:Aut}
\begin{enumerate}
\item For any $(X,\lambda) \in \calA^{\mathrm{pol}}_{E,\mathrm{ess}}$, the lattice $(M,h)$ associated to $(X,\lambda)$ admits a unique orthogonal decomposition
\[
(M,h) = \perp_{i=1}^r \perp_{j=1}^{e_i} (M_{ij},h_i).
\]
Hence, we have that
\begin{equation}
\label{eq:AutXl}
\Aut(X,\lambda) \simeq \Aut(M,h) \simeq \prod_{i=1}^r \Aut(M_{i1},h_i)^{e_i} \cdot S_{e_i}.
\end{equation}
\item Let $E$ be an elliptic curve over a finite field $K=\mathbb{F}_q$ such that Corollary~\ref{cor:JKPRST} applies. Then for any $(X,\lambda) \in \calA^{\mathrm{pol}}_{E}$, the automorphism group $\Aut(X,\lambda)$ can be computed as in Equation~\eqref{eq:AutXl}.
\end{enumerate}
\end{corollary}
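Equation~\eqref{eq:AutXl} reduces the computation of $|\Aut(X,\lambda)|$ to the orthogonal decomposition of $(M,h)$: if the $i$-th indecomposable summand occurs with multiplicity $e_i$ and its automorphism group has order $a_i$, then $|\Aut(X,\lambda)|=\prod_i a_i^{e_i}\, e_i!$. A minimal Python sketch (the helper \texttt{aut\_order} is hypothetical, not from the paper):

```python
from math import factorial, prod

def aut_order(components):
    """Order of Aut(X, λ) from the orthogonal decomposition of (M, h):
    components is a list of pairs (a_i, e_i), where a_i = |Aut(M_i1, h_i)|
    and e_i is the multiplicity of the i-th indecomposable summand.
    Illustrative helper following Eq. (AutXl); not from the paper."""
    return prod(a**e * factorial(e) for a, e in components)

# A single indecomposable summand of multiplicity 3 with a_1 = 24 recovers
# the order 24^3 · 3! of Aut(E^3, λ_can) over F̄_2:
print(aut_order([(24, 3)]))  # 82944
```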
\begin{corollary}\label{cor:Autsp}
Let $R$ be a maximal order in the definite quaternion $\mathbb{Q}$-algebra $B_{p,\infty}$. Let ${\rm Sp}^{\rm pol}$ be the category of fractionally polarised superspecial abelian varieties over an algebraically closed field $k$ of characteristic $p$.
Then there is an equivalence of categories between ${\rm Sp}^{\rm pol}$ and ${\rm Lat}_R^{\rm H}$. Moreover, for any object $(X,\lambda)$ in ${\rm Sp}^{\rm pol}$, the automorphism group $\Aut(X,\lambda)$ can be computed as in Equation~\eqref{eq:AutXl}.
\end{corollary}
\begin{proof}
Choose an elliptic curve $E$ over $\mathbb{F}_{p^2}$ with Frobenius endomorphism $\pi=-p$ and endomorphism ring $\End(E)\simeq R$.
Then the category $\calA_{E}$ is the same as that of superspecial abelian varieties over $\mathbb{F}_{p^2}$ with Frobenius endomorphism $-p$, because every supersingular abelian variety with Frobenius endomorphism $-p$ is superspecial.
The functor sending each object $X$ in $\calA_E$ to $X\otimes_{\mathbb{F}_{p^2}} k$ induces an equivalence of categories between $\calA_E$ and the category of superspecial abelian varieties over $k$ (cf.~\cite[Proposition 5.1]{yu:iumj18}). Thus, it induces an equivalence of categories between $\calA_E^{\rm pol}$ and $\Sp^{\rm pol}$.
By Corollary~\ref{cor:JKPRST}, there is an equivalence of categories between $\Sp^{\rm pol}$ and ${\rm Lat}_R^{\mathrm{H}}$.
The last statement of the corollary follows from Corollary~\ref{cor:Aut}.
\end{proof}
\subsection{Unique decomposition property}
\begin{definition}
Let $(X,\lambda)$ be a $\mathbb{Q}$-polarised abelian variety over $K$. We say that $(X,\lambda)$ is \emph{indecomposable} if whenever we have an isomorphism $(X_1,\lambda_1)\times (X_2,\lambda_2)=(X_1\times
X_2, \lambda_1\times \lambda_2) \simeq (X,\lambda)$, either $\dim X_1=0$ or $\dim X_2=0$.
\end{definition}
By induction on the dimension of $X$, every object $(X,\lambda)$ in $\calA^{\rm pol}$ decomposes into a product of indecomposable objects.
\begin{definition}
\begin{enumerate}
\item An object $(X,\lambda)$ in $\calA^{\rm pol}$ is said to have \emph{the Remak-Schmidt property} if for any two decompositions into indecomposable objects $(X,\lambda)\simeq \prod_{i=1}^r(X_i,\lambda_i)$ and $(X,\lambda)\simeq \prod_{j=1}^s(X_j',\lambda_j')$, we have $r=s$ and there exist a permutation $\sigma\in S_r$ and an isomorphism $(X_i,\lambda_i)\simeq (X_{\sigma(i)}', \lambda_{\sigma(i)}')$ for every $1 \le i \le r$.
\item An object $(X,\lambda)$ in $\calA^{\rm pol}$ is said to have \emph{the strong Remak-Schmidt property} if for any two decompositions into indecomposable objects $\phi=(\phi_i)_i: \prod_{i=1}^r (X_i,\lambda_i)\stackrel{\sim}{\longrightarrow} (X,\lambda)$ and $\phi'=(\phi'_j)_j: \prod_{j=1}^s(X_j',\lambda_j')\stackrel{\sim}{\longrightarrow}(X,\lambda)$, we have $r=s$ and there exist a permutation $\sigma\in S_r$ and an isomorphism $\alpha_i: (X_i,\lambda_i)\stackrel{\sim}{\longrightarrow} (X_{\sigma(i)}', \lambda_{\sigma(i)}')$ such that $\phi_i=\phi_{\sigma(i)}' \circ \alpha_i$ for every $1 \le i \le r$.
\end{enumerate}
\end{definition}
\begin{lemma}\label{lm:minE}
Let $X$ be an object in $\calA_E$. Then there exist an object $\widetilde X$ in $\calA_{E, {\rm ess}}$ and an isogeny $\gamma: X\to \widetilde X$ such that for any morphism $\phi: X \to Y$ with $Y$ an object in $\calA_{E, {\rm ess}}$, there exists a unique morphism $\alpha: \widetilde X \to Y$ such that $\alpha\circ \gamma=\phi$. Dually, there exist an object $\widetilde X$ in $\calA_{E, {\rm ess}}$ and an isogeny $\varphi: \widetilde X\to X$ satisfying the analogous universal property.
\end{lemma}
\begin{proof}
We first construct a morphism $\gamma: X \to \widetilde X$, where $\widetilde X$ is an object in $\calA_{E,{\rm ess}}$.
It will be more convenient to adopt the contravariant functors.
Let $M:=\Hom(X,E)$ and let $\widetilde X:= {\mathcal Hom}_R(M,E)$. The abelian variety $\widetilde X$ represents the functor
\[ S \mapsto \Hom_R(M,E(S)), \quad M=\Hom(X,E) \]
for any $K$-scheme $S$. Define a morphism $\gamma:X\to \widetilde X$ by
\begin{equation}\label{eq:minEisog}
\gamma: X(S)\to \widetilde X(S)=\Hom_R(M,E(S))\quad \text{ mapping } \quad x \mapsto \left (\gamma_x: f \mapsto f(x)\in E(S) \right ),
\end{equation}
for all $f\in M=\Hom(X,E)$.
Now let $Y$ be an object in $\calA_{E, {\rm ess}}$ and let $\phi: X \to Y$ be a morphism. Using \eqref{eq:minEisog}, we also have a morphism $\gamma_Y: Y\to \widetilde Y$, which is an isomorphism as
the functor $\Hom(-,E)$ induces an equivalence on $\calA_{E, {\rm ess}}$.
The morphism $\phi: X \to Y$ induces a map $M_Y:=\Hom(Y,E) \to M=\Hom(X,E)$ by precomposition with $\phi$.
This map also induces, after applying the functor ${\mathcal Hom}_R(-,E)$, a morphism $\beta: \widetilde X \to \widetilde Y$. We claim that the diagram
\begin{equation}\label{eq:min_cd}
\begin{CD}
X @>{\gamma}>> \widetilde X \\
@VV{\phi}V @VV{\beta}V \\
Y @>{\gamma_Y}>{\sim}> \widetilde Y
\end{CD}
\end{equation}
commutes; we will show this by proving it on $S$-points for any $K$-scheme $S$.
Let $x\in X(S)$ and $g:Y\to E$. We have $\beta (\gamma_x)(g)=\gamma_x(g \circ \phi)=g(\phi(x))$. On the other hand $\gamma_Y(\phi(x))(g)=g(\phi(x))$. This shows the claim.
Let $\alpha:=\gamma_Y^{-1} \circ \beta: \widetilde X \to Y$, so we have
$\alpha\circ \gamma=\phi$ by commutativity.
Finally, take $Y = E^n$ and any isogeny $\phi: X \to E^n$ and let $\alpha:\widetilde X \to E^n$ be the unique morphism satisfying $\alpha\circ \gamma=\phi$. Since $\dim X=\dim \widetilde X=\dim E^n = n$, it follows that $\gamma$ is an isogeny. The dual construction is entirely analogous.
\end{proof}
\begin{definition}\label{def:minE}
Let $X$ be an object in $\calA_E$. We call the isogeny $\gamma: X \to \widetilde X$ (resp. $\varphi:\widetilde X \to X$) constructed in Lemma~\ref{lm:minE} the \emph{minimal $E$-isogeny of $X$} and the abelian variety $\widetilde X$ the \emph{$E$-hull} of $X$.
\end{definition}
\begin{remark} \label{rem:min_isog}
If $E/K$ is a supersingular elliptic curve over an algebraically closed field $K=k$ of characteristic $p$, then $\calA_E$ is the category of supersingular abelian varieties over $k$ and $\calA_{E,{\rm ess}}$ is the category of superspecial abelian varieties over $k$. In this case, a minimal $E$-isogeny $\gamma:X\to \widetilde X$ or $\varphi:\widetilde X \to X$ of a supersingular abelian variety $X$ is precisely the minimal isogeny of $X$ in the sense of Oort, cf.~\cite[Definition~2.11]{karemaker-yobuko-yu}. By Lemma~\ref{lm:minE}, the minimal isogeny $(X, \gamma: X\to \widetilde X)$ satisfies the stronger universal property where the test object $\phi:X\to Y$ does not have to be an isogeny.
\end{remark}
\begin{lemma}\label{lm:product_compatibility}
Let $X$ be an object in $\calA_{E,{\rm ess}}$. Suppose there are abelian varieties $X_1, \dots, X_r$ in~$\calA_E$ and an isomorphism $\phi:X_1\times \dots \times X_r \stackrel{\sim}{\longrightarrow} X$. Then each abelian variety $X_i$ lies in $\calA_{E,{\rm ess}}$.
\end{lemma}
\begin{proof}
According to the construction of minimal $E$-isogenies, let
\[ M:=\Hom(X,E)\simeq \prod_{i=1}^r M_i \quad \text{and} \quad \widetilde X:={\mathcal Hom}_R(M,E)\simeq \prod_{i=1}^r \widetilde X_i, \]
where
\[ M_i:=\Hom(X_i,E)\quad \text{and} \quad \widetilde X_i:={\mathcal Hom}_R(M_i,E). \]
By the definition of $\gamma$ in Equation~\eqref{eq:minEisog}, we have
\[ \gamma=(\gamma_i)_i: X \simeq X_1\times \dots \times X_r \longrightarrow \widetilde X\simeq \widetilde X_1 \times \dots \times \widetilde X_r, \quad \text{where} \quad \gamma_i:X_i \to \widetilde X_i. \]
By applying the universal property of the minimal $E$-isogeny with $Y = X$ and $\phi = \mathrm{id}$, there is a unique isogeny $\alpha: \widetilde X \to X$ such that $\alpha\circ \gamma=\mathrm{id}$. This shows that $\gamma$ is an isomorphism, which means that each $\gamma_i:X_i \to \widetilde X_i$ is an isomorphism. In particular, every abelian variety $X_i$ lies in $\calA_{E,{\rm ess}}$.
\end{proof}
\begin{lemma}\label{lm:prod_compt2}
Let $X_1,\dots, X_r$ be objects in $\calA_{E}$. Then $(\gamma_i)_i: X=\prod_{i=1}^r X_i \to \prod_{i=1}^r \widetilde X_i$ is the minimal $E$-isogeny of $X$.
\end{lemma}
\begin{proof}
For any $Y\in \calA_{E,\mathrm{ess}}$, as in Equation~\eqref{eq:min_cd}, we obtain the following commutative diagram:
\begin{equation}\label{eq:min_cd2}
\begin{CD}
\prod_{i=1}^r X_i @>{(\gamma_i)_i}>> \prod_{i=1}^r \widetilde X_i \\
@VV{\sum_i \phi_i}V @VV{\beta=\sum_i\beta_i}V \\
Y @>{\gamma_Y}>{\sim}> \widetilde Y.
\end{CD}
\end{equation}
Then the unique morphism $\alpha = \gamma_Y^{-1} \circ \beta$ satisfies the desired property $\alpha\circ {(\gamma_i)_i}=\sum_i \phi_i$.
\end{proof}
\begin{theorem}\label{thm:RS}
Every object $(X,\lambda)$ in $\calA_{E,{\rm ess}}^{\rm pol}$ has the strong Remak-Schmidt property.
\end{theorem}
\begin{proof}
Let $\phi=(\phi_i)_i: \prod_{i=1}^r(X_i,\lambda_i) \stackrel{\sim}{\longrightarrow} (X,\lambda)$ and $\phi'=(\phi'_j)_j: \prod_{j=1}^s(X_j',\lambda_j') \stackrel{\sim}{\longrightarrow} (X,\lambda)$ be two decompositions of $(X,\lambda)$ into indecomposable polarised abelian varieties. Let $(M,h) = (\Hom(E,X),h_\lambda)$ be the positive-definite Hermitian $R$-lattice associated to $(X,\lambda)$. By Lemma~\ref{lm:product_compatibility}, every $(X_i,\lambda_i)$ and $(X_j', \lambda_j')$ is an object in $\calA_{E,{\rm ess}}^{\rm pol}$, and we let $(M_i,h_i)$ and $(M_j',h_j')$ be the associated positive-definite Hermitian $R$-lattices, respectively. Applying the functor $\Hom(E,-)$, we obtain two decompositions of $(M,h)$, namely $\phi_*: \prod_{i=1}^r (M_i,h_i) \stackrel{\sim}{\longrightarrow} (M,h)$ and $\phi'_*: \prod_{j=1}^s (M_j',h_j') \stackrel{\sim}{\longrightarrow} (M,h)$.
Since the functor $(X,\lambda)\mapsto (M,h)$ from $\calA_{E,{\rm ess}}^{\rm pol}$ to ${\rm Lat}_R^{\rm H}$ induces an equivalence of categories, every $(M_i,h_i)$ and $(M_j', h_j')$ is an indecomposable sublattice of $(M,h)$.
Therefore, there are two orthogonal decompositions of $(M,h)$ into indecomposable sublattices:
\[ M=\perp_{i=1}^r \phi_*(M_i) \quad \text{and} \quad M=\perp_{j=1}^s \phi'_*(M_j').\]
It follows from Theorem~\ref{orthogonal} that $r=s$ and that there is a permutation $\sigma\in S_r$ such that $\phi_*(M_i)=\phi'_*(M_{\sigma(i)}')$ for all $i$. For any $i$, let $\bar \alpha_i: (M_i,h_i) \stackrel{\sim}{\longrightarrow} (M_{\sigma(i)}',h_{\sigma(i)}')$ be the unique isomorphism such that $\phi'_{\sigma(i),*} \circ \bar \alpha_i =\phi_{i,*}$. The unique lifted isomorphism of $\bar \alpha_i$, say
$\alpha_i:(X_i,\lambda_i)\stackrel{\sim}{\longrightarrow} (X_{\sigma(i)}', \lambda_{\sigma(i)}')$, then satisfies $\phi_i=\phi_{\sigma(i)}' \circ \alpha_i$.
\end{proof}
\begin{corollary}\label{cor:RS_JKPRST}
Let $E$ be an elliptic curve over a finite field $K=\mathbb{F}_q$ such that Corollary~\ref{cor:JKPRST} applies.
Then every object $(X,\lambda)\in \calA^{\mathrm{pol}}_{E}$ has the strong Remak-Schmidt property.
\end{corollary}
\begin{proof}
This follows from Corollary~\ref{cor:JKPRST} and Theorem~\ref{thm:RS}.
\end{proof}
\begin{remark}\label{rem:geom-RS}
For the application of computing the automorphism groups of polarised abelian varieties, Theorem~\ref{thm:RS} and Corollary~\ref{cor:RS_JKPRST} actually do not provide any new useful information compared to Corollary~\ref{cor:JKPRST}.
However, the significance of Theorem~\ref{thm:RS} lies in providing the possibility that there may be more polarised abelian varieties having the strong Remak-Schmidt property than those abelian varieties which can be described directly in terms of (skew-)Hermitian lattices. We have not yet seen this phenomenon explored in the literature.
\end{remark}
\begin{lemma}\label{lm:AutX2}
Let $(X,\lambda)\in \calA_{E}^{\mathrm{pol}}$ and let $\varphi:(\widetilde X, \widetilde \lambda)\to (X,\lambda)$ be the minimal $E$-isogeny of~$(X,\lambda)$, where $\widetilde \lambda$ is chosen to be $\varphi^* \lambda$.
Then
\begin{equation}\label{eq:AutX2}
\Aut(X,\lambda)=\{\alpha\in \Aut(\widetilde X, \widetilde \lambda): \alpha(H)=H\},
\end{equation}
where $H:=\ker(\varphi)$ is the kernel of the morphism $\varphi$.
\end{lemma}
\begin{proof}
By the universal property of minimal $E$-isogenies, every $\sigma_0\in\Aut(X,\lambda)$ uniquely lifts to an automorphism $\sigma\in \Aut(\widetilde X, \widetilde \lambda)$. Since $X=\widetilde X/H$, an element $\sigma\in \Aut(\widetilde X, \widetilde \lambda)$ descends to an element $\sigma_0 \in \Aut(X,\lambda)$ if and only if $\sigma(H)=H$.
\end{proof}
\subsection{Abelian varieties that are quotients of a power of an abelian variety over ${\bbF}_p$.} \
In this subsection, we let $E$ denote an abelian variety over $K={\bbF}_p$ such that its endomorphism algebra $B=\End^0(E)$ is commutative.
This means that $E$ does not have a repeated simple factor (i.e., it is squarefree) nor a factor that is a supersingular abelian surface with Frobenius endomorphism $\sqrt{p}$. Since every abelian variety over a finite field is of CM type, the algebra $B$ is a product of CM fields. Denote again by $a\mapsto \bar a$ the canonical involution of $B$. Let $R=\End(E)$ and fix a polarisation~$\lambda_E$ on~$E$. We will use the same notation and terminology as in previous subsections, except that we let $\calA_E$ (resp.~$\calA_E^{\rm pol}$) be the full subcategory of $\calA$ (resp.~$\calA^{{\rm pol}}$) consisting of abelian varieties which are quotients of a power of $E$ over ${\bbF}_p$.
Recall that an $R$-module $M$ is called \emph{reflexive} if the canonical map $M\to (M^{t})^{t}$ is an isomorphism, where $M^t:=\Hom_R(M,R)$.
If $\mathbb{Z}[\pi_E,\bar \pi_E]=R$, where $\pi_E$ denotes the Frobenius endomorphism of $E$, then $R$ is Gorenstein and every $R$-lattice is automatically reflexive \cite[Theorem~11 and Lemma~13]{CS15}.
\begin{theorem}\label{thm:CS+JKPRST}
{\rm (\!\cite[Theorem 8.1]{JKPRST}, \cite[Theorem 25]{CS15}) }
Let $E$ be an abelian variety over ${\bbF}_p$ as above and assume that $\mathbb{Z}[\pi_E,\bar \pi_E]=R$. Then the functor ${\mathcal Hom}_R(-,E)$ induces an anti-equivalence of categories
\begin{equation}
\label{eq:avqEn}
{}_{R}{\rm Lat} \longrightarrow \calA_{E}
\end{equation}
and $\Hom(-,E)$ is its inverse functor. Moreover, the functor ${\mathcal Hom}_R(-,E)$ is exact, and it is isomorphic to the Serre tensor functor $M \mapsto M^t\otimes_R E$.
\end{theorem}
Also see \cite[Theorem 3.1]{yu:jpaa2012} for a construction of a bijection from the set of isomorphism classes in ${}_{R}{\rm Lat}$ to that in $\calA_E$. The category $\calA_E$ contains more objects than those which are isogenous to a power of $E$ in the case where $E$ is not simple. Note that an abelian variety $X/{\bbF}_p$ lies in $\calA_E$ if and only if there is a $\mathbb{Q}$-algebra homomorphism $\mathbb{Q}[\pi_E]\to \mathbb{Q}[\pi_X]$ mapping $\pi_E$ to the Frobenius endomorphism $\pi_X$ of $X$.
Let ${}_{R}{\rm Lat}^{\rm f}$ (resp.~${\rm Lat}_R^{\rm f}$) denote the full subcategory consisting of left (resp.~right) $R$-lattices $M$ such that $M\otimes \mathbb{Q}$ is a free $B$-module of finite rank.
Similarly, let ${}_{R}{\rm Lat}^{\mathrm{f},H}\subseteq {}_{R}{\rm Lat}^{H}$ (resp.~${\rm Lat}_R^{\mathrm{f},H}\subseteq {\rm Lat}_R^{H}$) be the full subcategory of positive-definite Hermitian left (resp.~right) $R$-lattices $(M,h)$ with free $B$-module $M\otimes \mathbb{Q}$. The functor ${\mathcal Hom}_R(-,E)$ induces an anti-equivalence from ${}_{R}{\rm Lat}^{\mathrm{f}}$ to the subcategory $\calA_E^{\mathrm{f}}$ consisting of abelian varieties isogenous to a power of $E$. Moreover, we prove the following result about polarised varieties.
\begin{theorem}\label{thm:pol+JKPRST}
Let $(E,\lambda_E)$ be a principally polarised abelian variety over ${\bbF}_p$ with the assumptions as in Theorem~\ref{thm:CS+JKPRST}. Then the following hold.
\begin{enumerate}
\item The functor $(X,\lambda)\mapsto (M,h)$ introduced in Equations~\eqref{eq:M} and~\eqref{eq:h} induces an equivalence of categories
\[
\calA_{E}^{\rm pol} \longrightarrow {\rm Lat}_R^H.
\]
\item For any $\mathbb{Q}$-polarised abelian variety $(X,\lambda)$ over ${\bbF}_p$ in $\calA_E^{\rm pol}$, the automorphism group $\Aut(X,\lambda)$ can be computed as in Equation~\eqref{eq:AutXl}.
\item Every $\mathbb{Q}$-polarised abelian variety $(X,\lambda)$ over ${\bbF}_p$ in $\calA_E^{\rm pol}$ has the strong Remak-Schmidt property.
\end{enumerate}
\end{theorem}
\begin{proof}
\begin{itemize}
\item[(1)] We first show that $(M,h)$ is a positive-definite Hermitian $R$-lattice. By Equations \eqref{eq:h1} and \eqref{eq:h2} in Lemma~\ref{lm:Mh}, $h$ is Hermitian and it remains to show that $h$ is positive-definite. Let $E_i$ ($1\le i\le r$) be the simple abelian subvarieties of $E$ and let $\varphi=\sum_{i} \iota_i: \prod_{i=1}^r E_i \to E$ be the canonical isogeny with inclusions $\iota_i:E_i \subseteq E$. Then we have an inclusion $M\subseteq \bigoplus_{i=1}^r M_i$, where $M_i=\Hom(E_i, X)$.
Let $\lambda_{E_i}$ be the restriction of the polarisation $\lambda_E$ to~$E_i$. The isogeny~$\varphi$ induces an isomorphism from $B\simeq \prod_{i} B_i$ onto a product of CM fields $B_i=\End^0(E_i)$, and the decomposition $M_\mathbb{Q}=\bigoplus_{i=1}^r M_{i,\mathbb{Q}}$ respects the decomposition $B\simeq \prod_{i} B_i$. Moreover, we have $(M_\mathbb{Q},h)= \perp_{i=1}^r (M_{i,\mathbb{Q}}, h_i)$, where $h_i$ is the restriction of $h$, which is also induced from the polarisation $\lambda_{E_i}$. Let $f=(f_i)\in M_\mathbb{Q}$ be a non-zero vector. Then $h(f,f)=(h_i(f_i,f_i))_i=(\lambda_{E_i}^{-1} f_i^*\lambda )_i$ and $\lambda_{E_i}^{-1} f_i^*\lambda$ is a totally positive element whenever $f_i\neq 0$. This shows that $h$ is positive-definite. Then the same argument as in
Proposition~\ref{prop:equiv} proves the equivalence. Note that the principal polarisation $\lambda_E$ ensures there is a natural isomorphism $\Hom(E,X^t)\simeq M^t$.
\item[(2)+(3)] These follow from Theorem~\ref{orthogonal} and Corollary~\ref{autodecomposition} in the extended setting where $B$ is a product of CM fields; see Remark~\ref{rem:product}.
\end{itemize}
\end{proof}
\begin{question}
Is it true that any $\mathbb{Q}$-polarised abelian variety admits the strong Remak-Schmidt property?
\end{question}
\section{The geometric theory: the Gauss problem for central leaves}\label{sec:proof}
\subsection{First results and reductions}\label{ssec:4first}\
Let $x=[(X_0,\lambda_0)]\in \calA_g(k)$ be a point and let $\calC(x)$ be the central leaf passing through $x$.
\begin{proposition}[Chai]\label{prop:chai}
The central leaf $\calC(x)$ is finite if and only if $X_0$ is supersingular. In particular, a necessary condition for $|\calC(x)|=1$ is that $x\in \calS_{g}(k)$.
\end{proposition}
\begin{proof}
It is proved in \cite[Proposition 1]{chai} that the prime-to-$p$ Hecke orbit $\calH^{(p)}(X_0,\lambda_0)$ (i.e., the points obtained from $(X_0,\lambda_0)$ by polarised prime-to-$p$ isogenies) is finite if and only if $X_0$ is supersingular. Since $\calH^{(p)}(X_0,\lambda_0)\subseteq \calC(x)$, the central leaf $\calC(x)$ is finite only if $X_0$ is supersingular. When $X_0$ is supersingular, we have $\calC(x)=\Lambda_x$ by definition and hence $\calC(x)$ is finite, cf.~\eqref{eq:smf:1}.
\end{proof}
From now on we assume that $x\in \calS_g(k)$. In this case
\[
\calC(x)=\Lambda_x\simeq G^1(\mathbb{Q})\backslash G^1(\A_f)/U_x,
\]
where $U_x=G_x(\widehat \mathbb{Z})$ is an open compact subgroup. Similarly, for
$0\leq c\leq [g/2]$ we have
\[ \Lambda_{g,p^c}\simeq G^1(\mathbb{Q})\backslash G^1(\A_f)/U_{g,p^c}, \]
where $U_{g,p^c}=G_{x_c}(\widehat \mathbb{Z})$ for a base point $x_c\in
\Lambda_{g,p^c}$.
\begin{lemma}\label{lem:Lgpc}
For every point $x\in \calS_g(k)$, there exists a (non-canonical)
surjective morphism $\pi:\Lambda_x \twoheadrightarrow \Lambda_{g,p^c}$ for some integer
$c$ with $0\le c\le \lfloor g/2 \rfloor$. Moreover, one can select a base point~$x_c'$ in $\Lambda_{g,p^c}$ so that $G_x({\bbZ}_p)$ is contained in $G_{x_c'}({\bbZ}_p)$
and $\pi$
is induced from the identity map
\begin{equation}
\label{eq:Gxc}
G^1(\mathbb{Q})\backslash G^1(\A_f)/U_x \longrightarrow
G^1(\mathbb{Q})\backslash G^1(\A_f)/U_{x'_c}.
\end{equation}
\end{lemma}
\begin{proof}
We have
\[ G_{x_c}({\bbZ}_p)\simeq \Aut_{G^1({\bbQ}_p)} \big( (\Pi_p O_p)^{g-c}\oplus O_p^c,
\bbJ_g \big)=:P_c. \]
By \cite[Theorem~3.13, p.~150]{platonov-rapinchuk}, the subgroups $P_c$ for
$c=0,\dots, \lfloor g/2 \rfloor$ form a complete set of representatives of the maximal parahoric subgroups of
$G^1(\mathbb{Q}_p)$ up to conjugacy.
So $G_x({\bbZ}_p)$ is contained in $g_p^{-1} G_{x_c}(\mathbb{Z}_p) g_p$ for
some integer $c$ with $0\le c\leq \lfloor g/2 \rfloor$ and some element $g_p\in
G^1({\bbQ}_p)$. Thus, we have a surjective map
\begin{equation}
\label{eq:Gx}
G^1(\mathbb{Q})\backslash G^1(\A_f)/U_x \twoheadrightarrow
G^1(\mathbb{Q})\backslash G^1(\A_f)/g_p^{-1}U_{g,p^c}g_p
\xrightarrow{\cdot g_p} G^1(\mathbb{Q})\backslash G^1(\A_f)/U_{g,p^c}.
\end{equation}
This gives a surjective map $\Lambda_x\twoheadrightarrow \Lambda_{g,p^c}$.
\end{proof}
Let $\varphi: \widetilde x=(\widetilde X_0,\widetilde \lambda_0)\to x=(X_0,\lambda_0)$ be the minimal isogeny for $x$, cf.~\cite[Definition 2.11]{karemaker-yobuko-yu} and \cite[Lemma 1.8]{lioort}. Then $U_x\subseteq U_{\widetilde x}$ and we have a surjective map $\Lambda_x\twoheadrightarrow \Lambda_{\widetilde x}$ which is induced from the natural map
\begin{equation}
\label{eq:minisog}
G^1(\mathbb{Q})\backslash G^1(\A_f)/U_x \longrightarrow
G^1(\mathbb{Q})\backslash G^1(\A_f)/U_{\widetilde x}.
\end{equation}
If the open compact subgroup $U_{\widetilde x}$ is maximal, then $U_{\widetilde x}$ is conjugate to $U_{g,p^c}$ for some $0\le c\le \lfloor g/2\rfloor$ and the map $\pi: \Lambda_x \twoheadrightarrow \Lambda_{g,p^c}$ in Lemma~\ref{lem:Lgpc} is realised by the minimal isogeny $\varphi$.
\begin{lemma}\label{lem:g1}
Let $x$ be a point in $\calS_{g}(k)$. If $g=1$, then $|\Lambda_x|=1$ if and only if $p\in \{2,3,5,7,13\}$.
\end{lemma}
\begin{proof}
In this case, the orbit $\Lambda_x$ is the supersingular locus
$\Lambda_{1,1}$. The assertion is well-known and also follows from
Theorem~\ref{thm:mainarith}.(1).
\end{proof}
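The statement for $p\ge 5$ can also be checked computationally: $\Lambda_{1,1}$ is in bijection with the set of supersingular $j$-invariants in $\mathbb{F}_p$, and for $p\ge 5$ an elliptic curve over $\mathbb{F}_p$ is supersingular exactly when its trace of Frobenius vanishes. The following Python sketch (an illustration, not part of the proof; the helper names are ours) counts supersingular $j$-invariants by brute force:

```python
def legendre(u, p):
    # Legendre symbol (u/p) for an odd prime p
    u %= p
    if u == 0:
        return 0
    return 1 if pow(u, (p - 1) // 2, p) == 1 else -1

def trace_is_zero(a, b, p):
    # E: y^2 = x^3 + a*x + b over F_p has #E(F_p) = p + 1 + sum_x (f(x)/p),
    # so the trace of Frobenius vanishes iff the character sum is zero
    return sum(legendre(x**3 + a * x + b, p) for x in range(p)) == 0

def curve_from_j(j, p):
    # one short Weierstrass model with j-invariant j (p >= 5); supersingularity
    # is twist-invariant, so the choice of model does not matter
    if j % p == 0:
        return (0, 1)                           # y^2 = x^3 + 1 has j = 0
    if (j - 1728) % p == 0:
        return (1, 0)                           # y^2 = x^3 + x has j = 1728
    k = j * pow((1728 - j) % p, -1, p) % p      # k = j/(1728 - j) in F_p
    return (3 * k % p, 2 * k % p)

def supersingular_count(p):
    # number of supersingular j-invariants in F_p; for p >= 5, supersingular
    # is equivalent to trace zero by the Hasse bound
    return sum(1 for j in range(p) if trace_is_zero(*curve_from_j(j, p), p))

counts = {p: supersingular_count(p) for p in (5, 7, 11, 13, 17, 19)}
```

For the sampled primes one finds a single class exactly for $p\in\{5,7,13\}$; the primes $p=2,3$ require separate (but classical) treatment.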
\begin{lemma}\label{lem:g2}
Let $x$ be a point in $\calS_{g}(k)$. If $g=2$, then $|\Lambda_x|=1$ if and only if $p\in \{2,3\}$.
\end{lemma}
\begin{proof}
For the superspecial case, by
the first part of Theorem~\ref{thm:mainarith}.(2)
we have $H_2(p,1)=1$ if and only if $p=2,3$. For the non-superspecial
case, it follows from Theorem~\ref{thm:nsspg2}.(3) that $|\Lambda_x|=1$ for every non-superspecial point $x\in \calS_{2}(k)$ if and only if $p=2, 3$.
\end{proof}
\begin{lemma}\label{lem:g5+}
Let $x$ be a point in $\calS_{g}(k)$. If $g\ge 5$, then $|\Lambda_x|>1$.
\end{lemma}
\begin{proof}
We first show that $|\Lambda_{g,p^c}|>1$ for all primes $p$ and all integers $c$ with $0\le c \le \lfloor g/2 \rfloor$. From Theorem~\ref{thm:sspmass} we have $\Mass(\Lambda_{g,p^c})=v_g \cdot L_{g,p^c}$. Using Lemma~\ref{lem:vn} and the proof of Corollary~\ref{cor:ge6}, we show that $|\Lambda_{g,p^c}|>1$ for all $g\ge 6$, all primes $p$ and all $c$.
By Theorem~\ref{thm:mainarith}, we have $|\Lambda_{5,p^0}|=H_{5}(p,1)>1$ and $|\Lambda_{5,p^2}|=H_{5}(1,p)>1$ for all primes $p$. Using Theorem~\ref{thm:sspmass} and \eqref{eq:Lambda5p}, we have $\Mass(\Lambda_{5,p})=v_5\cdot L_{5,p^1}=\Mass(\Lambda_{5,p^0})(p^5+1)$ and $(p^3-1)$ divides $L_{5,p}$. From this the same proof of Theorem~\ref{thm:mainarith} shows that $|\Lambda_{5,p}|>1$ for all primes $p$.
By Lemma~\ref{lem:Lgpc}, for every point $x\in \calS_{g}(k)$ we have $|\Lambda_x|\ge |\Lambda_{g,p^c}|$ for some $0\le c \le \lfloor g/2 \rfloor$. Therefore, $|\Lambda_x|>1$.
\end{proof}
For any matrix $A=(a_{ij})\in \Mat_g(\mathbb{F}_{p^2})$, write $A^*=\overline A^T=(a_{ji}^p)$, where $\overline A=(a_{ij}^p)$ and $T$ denotes the transpose. Let
\[ U_g({\bbF}_p):=\{A\in \Mat_g(\mathbb{F}_{p^2}) : A\cdot A^*= {\bbI}_g \} \]
denote the unitary group of rank $g$ associated to the quadratic extension $\mathbb{F}_{p^2}/{\bbF}_p$. Let ${\rm Sym}_g(\mathbb{F}_{p^2})\subseteq \Mat_g(\mathbb{F}_{p^2})$ be the subspace consisting of all symmetric matrices and $\Sym_g(\mathbb{F}_{p^2})^0\subseteq \Sym_g(\mathbb{F}_{p^2})$ be the subspace consisting of matrices $B=(b_{ij})$ with $b_{ii}=0$ for all $i$.
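For small parameters, this definition can be compared with the standard order formula $|U_g({\bbF}_p)|=p^{g(g-1)/2}\prod_{i=1}^g\big(p^i-(-1)^i\big)$ by exhaustive enumeration. A minimal Python sketch for $g=p=2$ (the bit-pair encoding of $\mathbb{F}_4$ and the helper names are ours):

```python
from itertools import product

# F_4 encoded as 0..3: n represents (n & 1) + (n >> 1)*zeta with
# zeta^2 = zeta + 1; addition in F_4 is XOR on this encoding
def mul(a, b):
    a0, a1, b0, b1 = a & 1, a >> 1, b & 1, b >> 1
    c2 = a1 & b1                       # coefficient of zeta^2, reduced below
    return ((a0 & b0) ^ c2) | ((((a0 & b1) ^ (a1 & b0)) ^ c2) << 1)

def frob(a):
    # x -> x^p = x^2, the nontrivial automorphism of F_4 over F_2
    return mul(a, a)

def is_unitary(A):
    # check A * A^star = identity for a 2x2 matrix A over F_4
    for i in range(2):
        for j in range(2):
            s = mul(A[i][0], frob(A[j][0])) ^ mul(A[i][1], frob(A[j][1]))
            if s != (1 if i == j else 0):
                return False
    return True

# brute force over all of Mat_2(F_4); the formula predicts 2 * 3 * 3 = 18
count = sum(is_unitary((v[:2], v[2:])) for v in product(range(4), repeat=4))
```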
\begin{definition}\label{def:EGH}
Let $\calE\subseteq \Mat_g(\mathbb{F}_{p^2})$ be a maximal subfield of degree $g$ over $\mathbb{F}_{p^2}$ stable under the involution $*$. Let
\begin{align}
\label{eq:group_dc1}
G&:=\left \{
\begin{pmatrix}
\bbI_g & 0 \\ B & \bbI_g
\end{pmatrix}
\begin{pmatrix}
A & 0 \\ 0 & \overline A
\end{pmatrix}\in \GL_{2g}(\mathbb{F}_{p^2}): A\in U_g({\bbF}_p),
B\in {\rm Sym}_g(\mathbb{F}_{p^2})\, \right \}; \\
\calE^1&:=\{ A\in \calE^\times: A^* A=\bbI_g\}= \calE^\times \cap U_g({\bbF}_p); \\
H&:=\left \{ \begin{pmatrix}
\bbI_g & 0 \\ B & \bbI_g
\end{pmatrix}
\begin{pmatrix}
A & 0 \\ 0 & \overline A
\end{pmatrix}: A\in \calE^1, \quad B\in {\rm Sym}_g(\mathbb{F}_{p^2})^0 \right \}; \\
\Gamma &:= \left \{ \begin{pmatrix}
\bbI_g & 0 \\ B & \bbI_g
\end{pmatrix}
\begin{pmatrix}
A & 0 \\ 0 & \overline A
\end{pmatrix}: A\in \diag(\mathbb{F}_{p^2}^1, \dots, \mathbb{F}_{p^2}^1) \cdot S_g,\ B\in \diag(\mathbb{F}_{p^2}, \dots, \mathbb{F}_{p^2}) \, \right \},
\end{align}
where $\mathbb{F}_{p^2}^1\subseteq \mathbb{F}_{p^2}^\times$ denotes the subgroup of norm one elements and $S_g$ denotes the symmetric group of $\{1,\dots, g\}$.
\end{definition}
\begin{lemma}\label{lm:group_dcoset}
Using the notation introduced in Definition~\ref{def:EGH}, the following statements hold.
\begin{enumerate}
\item Up to isomorphism, the double coset space $(\diag(\mathbb{F}_{p^2}^1, \dots, \mathbb{F}_{p^2}^1) \cdot S_g) \backslash U_g({\bbF}_p)/\calE^1$ is independent of the choice of $\calE$.
\item For $p=2$, up to isomorphism, the double coset space $\Gamma \backslash G/H$ is independent of the choice of $\calE$.
\end{enumerate}
\end{lemma}
\begin{proof}
\begin{enumerate}
\item We know that $\calE^1$ is a cyclic group of order $p^g+1$ and choose a generator $\eta$ of~$\calE^1$. One has $\eta ^* \eta =1$ and $\calE=\mathbb{F}_{p^2}[\eta]$. Suppose that $\calE_1$ is another maximal subfield stable under~$*$. We will first show that $\calE_1$ is conjugate to $\calE$ under $U_g({\bbF}_p)$. By the Noether-Skolem theorem, there is an element $\gamma\in \GL_g(\mathbb{F}_{p^2})$ such that $\calE_1=\gamma \calE \gamma^{-1}$. Clearly, $\calE_1=\mathbb{F}_{p^2}[\eta_1]$ is generated by $\eta_1:=\gamma \eta \gamma^{-1}$ and $\eta_1$ has order $p^g+1$. We also have $\eta_1^* \eta_1=1$; this follows from the fact that the norm-one subgroup $\calE^1_1\subseteq \calE_1^\times$ is the unique subgroup of order $p^g+1$ and that $\eta_1$ has order $p^g+1$. It follows from $\eta^* \eta =1$ that $\gamma^* \eta ^* \gamma^* \gamma \eta \gamma^{-1}=1$. Putting $\alpha=\gamma^* \gamma$, we find that
\[ \eta^* \alpha \eta = \alpha\quad \text{and} \quad \alpha \eta \alpha^{-1}=\eta. \]
That is, $\alpha$ commutes with $\calE$, and $\alpha\in \calE^\times$ because $\calE$ is a maximal subfield. As $\alpha=\gamma^* \gamma$, $\alpha$ lies in the subfield $F\subseteq \calE$ fixed by the automorphism $*$ of order $2$. Since the norm map $N:\calE^\times \to F^\times, x \mapsto x^* x$ is surjective, we have $\alpha=\beta^* \beta$ for some $\beta\in \calE^\times$. Let $\gamma_1:=\gamma \beta^{-1}$. Then
\[ \gamma_1^* \gamma_1 =(\gamma \beta^{-1})^* (\gamma \beta^{-1})=(\beta^{-1})^* \gamma^* \gamma \beta^{-1}=(\beta^{-1})^* \alpha \beta^{-1}=(\beta^{-1})^* \beta^* \beta \beta^{-1}=1. \]
Therefore, $\calE_1=\gamma_1 \calE \gamma_1^{-1}$ and $\gamma_1\in U_g({\bbF}_p)$.
The right translation by $\gamma_1^{-1}$ gives an isomorphism $(\diag(\mathbb{F}_{p^2}^1, \dots, \mathbb{F}_{p^2}^1) \cdot S_g) \backslash U_g({\bbF}_p)/\calE^1\simeq (\diag(\mathbb{F}_{p^2}^1, \dots, \mathbb{F}_{p^2}^1) \cdot S_g) \backslash U_g({\bbF}_p)/\calE^1_1$. This proves (1).
\item We may regard $U_g({\bbF}_p)$ as a subgroup of $G$ via the map $A\mapsto
\begin{pmatrix}
A & 0 \\
0 & \overline A
\end{pmatrix}$.
Conjugation in $G$ gives an action of $U_g(\mathbb{F}_p)$ on $\Sym_g(\mathbb{F}_{p^2})$ by $A\cdot B= \overline A B \overline A^T$, where $A\in U_g(\mathbb{F}_{p})$ and $B\in \Sym_g(\mathbb{F}_{p^2})$. Suppose that $\calE_1$ is another maximal subfield stable under $*$ and
$H_1\subseteq G$ is the extension of $\Sym_g
(\mathbb{F}_{p^2})^0$ by $\calE_1^1$. To show $H_1$ is conjugate to $H$ under~$G$, it suffices to show they are conjugate under $U_g({\bbF}_p)$, since this is a subgroup of $G$. By~(1), it then suffices to show that ${\rm Sym}_g(\mathbb{F}_{p^2})^0$ is stable under the action of $U_g(\mathbb{F}_{p})$. When $p=2$, one checks directly that the diagonal entries of the matrix $A (I_{ij}+I_{ji}) \overline A^T$ are all zero, where $I_{ij}$ is the matrix whose entries are $1$ at $(i,j)$ and zero elsewhere. Since the $I_{ij}+I_{ji}$ generate ${\rm Sym}_g(\mathbb{F}_{p^2})^0$, we find that it is indeed stable under the action of $U_g(\mathbb{F}_{p})$.
This proves (2).
\end{enumerate}
\end{proof}
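The proof above uses the surjectivity of the norm map $N:\calE^\times\to F^\times$ for an extension of finite fields. For a quadratic extension $\mathbb{F}_{p^2}/{\bbF}_p$ with $p$ odd, writing $\mathbb{F}_{p^2}={\bbF}_p[t]/(t^2-n)$ with $n$ a non-residue, the norm is $N(a+bt)=a^2-nb^2$, and surjectivity can be confirmed exhaustively for small $p$ (a Python sketch; the helper names are ours):

```python
def is_nonresidue(n, p):
    # n is a quadratic non-residue modulo the odd prime p
    return pow(n, (p - 1) // 2, p) == p - 1

for p in (3, 5, 7, 11, 13):
    n = next(m for m in range(2, p) if is_nonresidue(m, p))
    # F_{p^2} = F_p[t]/(t^2 - n); the norm of a + b*t down to F_p is
    # (a + b*t)(a - b*t) = a^2 - n*b^2
    norms = {(a * a - n * b * b) % p for a in range(p) for b in range(p)}
    assert norms - {0} == set(range(1, p))  # the norm map is onto F_p^x
```

(In the even case $\mathbb{F}_4/\mathbb{F}_2$ the norm is $x\mapsto x^3=1$ on $\mathbb{F}_4^\times$, and surjectivity is trivial.)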
Let $\mathbb{F}_q$ be a finite field of characteristic $p$. Let $(V_0,\psi_0)$ be a non-degenerate symplectic space over $\mathbb{F}_q$ of dimension $2c$ and denote by $A\mapsto A^\dagger$ the symplectic involution on $\End(V_0)$ with respect to $\psi_0$. For any $k$-subspace $W$ of $V_0\otimes_{{\bbF}_q} k$, the endomorphism algebra of~$W$ over ${\bbF}_q$ is defined as
\begin{equation}\label{eq:endW}
\End(V_0,W):=\{ A\in \End(V_0): A(W)\subseteq W \},
\end{equation}
and the automorphism group of $W$ in the symplectic group $\Sp(V_0)$ is defined as
\begin{equation}
\label{eq:SpW}
\Sp(V_0,W):=\Sp(V_0)\cap \End(V_0,W).
\end{equation}
\begin{proposition}\label{prop:Cq^2+1}
If $W$ is a non-zero isotropic $k$-subspace of $V_0\otimes_{{\bbF}_q} k$ such that $\Sp(V_0,W)\supseteq C_{q^{c}+1}$, then $\End(V_0,W)\simeq \Mat_{{2c}/d}(\mathbb{F}_{q^d})$ for some positive integer $d|{2c}$ such that $\ord_2(d)=\ord_2({2c})$. Moreover, if ${2c}$ is a power of $2$, then $\End(V_0,W)\simeq \mathbb{F}_{q^{2c}}$ and
$\Sp(V_0,W)=C_{q^{c}+1}$.
\end{proposition}
\begin{proof}
Let $\eta$ be a generator of $C_{q^{c}+1}$ and let $\calE={\bbF}_q[\eta]$ be the ${\bbF}_q$-subalgebra of $\End(V_0)$ generated by $\eta$. Since $\vert C_{q^{c}+1}\vert$ is prime to $q$, the group algebra ${\bbF}_q[C_{q^{c}+1}]$ is semi-simple and it maps onto $\calE$. On the other hand, the finite field $\mathbb{F}_{q^{2c}}$ is the smallest field extension of~${\bbF}_q$ which contains an element of order $q^{c}+1$, so $\calE$ contains a copy $F$ of $\mathbb{F}_{q^{2c}}$ in $\End(V_0)$. Since $\dim V_0 = {2c} = [\mathbb{F}_{q^{2c}}:\mathbb{F}_q]$, we see that $F$ is a maximal subfield of $\End(V_0)$ and hence $\calE=F$.
Since $C_{q^{c}+1}\subseteq \Sp(V_0)$ and $\ord(\eta)=q^{c}+1$, we have that $\eta^\dagger=\eta^{-1}\in C_{q^{c}+1}$ and $\eta^{\dagger}\neq \eta$. So $\calE$ is stable under $\dagger$, and $\dagger$ is an automorphism of $\calE$ of order $2$. Moreover, $C_{q^{c}+1}$ is equal to the subgroup $\calE^1=\{a\in \calE^\times: N_{\calE/\calE_0}(a)=1\}$ of norm one elements in $\calE^\times$, where $\calE_0$ is the subfield of $\calE$ fixed by $\dagger$.
Let $\Sigma_\calE:=\Hom_{{\bbF}_q}(\calE, \overline{\bbF}_p)$ denote the set of embeddings of $\calE$ into $\overline{\bbF}_p$; it is equipped with a left action by $\Gal(\mathbb{F}_{q^{2c}}/{\bbF}_q)=\Gal(\calE/{\bbF}_q)=\< \sigma \>\simeq \mathbb{Z}/{2c}\mathbb{Z}$, which acts simply transitively. Arrange $\Sigma_{\calE}=\{\sigma_i: i\in \mathbb{Z}/{2c}\mathbb{Z}\}$ in such a way that $\sigma\cdot \sigma_{i}=\sigma_{i+1}$ for all $i\in \mathbb{Z}/{2c}\mathbb{Z}$ and denote by $V^i$ the $\sigma_i$-isotypic eigenspace of $V_0\otimes k$. Then $V_0\otimes_{{\bbF}_q} k=\oplus_{i\in \mathbb{Z}/{2c}\mathbb{Z}} V^i$ is a decomposition into simple $(\calE\otimes_{{\bbF}_q} k)$-submodules. Since $W\subseteq V_0\otimes_{{\bbF}_q} k$ is an $(\calE\otimes_{{\bbF}_q}k)$-submodule, there is a unique and non-empty subset $J\subseteq \mathbb{Z}/{2c}\mathbb{Z}$ such that $W=\oplus_{i\in J} V^i$. Note that the involution
$\dagger$ acts on $\Sigma_{\calE}$ from the right and one has $\sigma_i^\dagger=\sigma_{i+c}$.
We claim that $J\cap J^\dagger=\emptyset$. Write $\psi$ for the $k$-bilinear extension of $\psi_0$ to $V_0\otimes_{{\bbF}_q}k$. For $i,j\in \mathbb{Z}/{2c}\mathbb{Z}$, one computes that
\[ \sigma_i(a)\psi(v_1,v_2)=\psi(a\cdot v_1, v_2)=\psi(v_1, a^\dagger \cdot v_2)=\sigma_{j+c}(a) \psi(v_1,v_2) \]
for any $a\in \calE$, $v_1\in V^i$ and $v_2\in V^j$. It follows that $\psi(V^i, V^j)=0$ if $i-j\neq c$ in $\mathbb{Z}/{2c}\mathbb{Z}$. Since $\psi$ is non-degenerate, conversely $\psi(V^i, V^j)\neq 0$ whenever $i-j=c$. Since $W$ is isotropic, $J$ does not contain $\{i,i+c\}$ for any $i$ and therefore $J\cap J^\dagger=\emptyset$, as claimed.
We represent the matrix algebra $\End(V_0)$ over ${\bbF}_q$ as a cyclic algebra, cf.~\cite[Theorem 30.4]{reiner:mo}:
\[ \End(V_0)=\calE[z], \qquad z^{2c}=1, za z^{-1}=\sigma(a) \ \text{ for all } a\in \calE. \]
Multiplication by $z$ maps $V^i$ onto $V^{i-1}$:
\[ a\cdot zv= z(\sigma^{-1}(a)\cdot v)=z \sigma_{i-1}(a) v= \sigma_{i-1}(a) z v, \quad \forall \, a\in \calE, v\in V^i.\]
Consider an element $x=\sum_{i\in \mathbb{Z}/{2c}\mathbb{Z}} a_i z^i\in \End(V_0,W)$; if $a_i\neq 0$, then $J$ is stable under the shift by $-i$. Let $d\ge 1$ be the smallest integer with $d|{2c}$ such that
$J$ is stable under the shift by $-d$. Then $\End(V_0,W)=\calE[z^d]\simeq \Mat_{{2c}/d}(\mathbb{F}_{q^d})$. Since $J\cap J^\dagger=\emptyset$, we have $d\nmid c$: if $d$ divided $c$, then $J$ would be stable under the shift by $c$ and hence $J=J^\dagger$. Therefore, $d$ is a positive divisor of ${2c}$ such that $\ord_2(d)=\ord_2({2c})$. This proves the first statement.
When ${2c}$ is a power of $2$, the condition on $d$ implies $d={2c}$ and therefore $\End(V_0,W)=\calE$. This implies that $\Sp(V_0,W)=\calE^1=C_{q^{c}+1}$ and hence proves the second statement.
\end{proof}
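The elementary arithmetic fact used at the end of the proof, namely that a positive divisor $d$ of $2c$ satisfies $d\nmid c$ if and only if $\ord_2(d)=\ord_2(2c)$, can be confirmed by a quick exhaustive check (a Python sketch; the helper name is ours):

```python
def ord2(n):
    # 2-adic valuation of the positive integer n
    v = 0
    while n % 2 == 0:
        n //= 2
        v += 1
    return v

for c in range(1, 200):
    divisors = [d for d in range(1, 2 * c + 1) if (2 * c) % d == 0]
    not_dividing_c = {d for d in divisors if c % d != 0}
    full_two_part = {d for d in divisors if ord2(d) == ord2(2 * c)}
    assert not_dividing_c == full_two_part
```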
\subsection{The case $\boldsymbol{g=3}$}\label{ssec:g3}\
\begin{lemma}\label{lem:p2coset}
We use the notation for $G, \Gamma, H$ defined in Definition~\ref{def:EGH}. For $g=3$ and $p=2$, we have $|\Gamma \backslash G/H|=2$.
\end{lemma}
\begin{proof}
Put $U:={\rm Sym}_g(\mathbb{F}_{p^2})$, embedded into $\GL_{2g}(\mathbb{F}_{p^2})$ via $B \mapsto \left( \begin{smallmatrix}
\bbI_g & 0 \\ B & \bbI_g
\end{smallmatrix}\right)$. Then $U_\Gamma:=U\cap \Gamma \simeq \diag(\mathbb{F}_{p^2}, \dots, \mathbb{F}_{p^2})$ and $U_H:=U\cap H \simeq \Sym_g(\mathbb{F}_{p^2})^0$.
Consider the surjective map induced by the natural projection
\[ \pr: \Gamma\backslash G/H \to (\diag(\mathbb{F}_{p^2}^1, \dots, \mathbb{F}_{p^2}^1) \cdot S_g) \backslash U_g({\bbF}_p)/\calE^1. \]
One shows directly that the fibre over the double coset $(\diag(\mathbb{F}_{p^2}^1, \dots, \mathbb{F}_{p^2}^1) \cdot S_g)\cdot A\cdot \calE^1$ of an element $A\in U_g({\bbF}_p)$ is in bijection with $U/(U_\Gamma+ \overline A U_H \overline A^{T})$. Since $\overline A U_H \overline A^{T}=U_H$ for $p=2$ by Lemma~\ref{lm:group_dcoset}.(2), we have $U_\Gamma+ \overline A U_H \overline A^{T}=U_\Gamma+U_H=U$ and hence $\pr$ is an isomorphism.
Now let $g=3$ and $p=2$; we need to show that the target of $\pr$ has two double cosets.
Put $\mathbb{F}_4=\mathbb{F}_2[\zeta]$ with $\zeta^2+\zeta+1=0$ and
\begin{equation}\label{eq:etaA}
\eta:=
\begin{pmatrix}
0 & 0 & \zeta \\
1 & 0 & 0 \\
0 & 1 & 0 \\
\end{pmatrix}, \quad A: = \begin{pmatrix}
1 & \zeta & \zeta \\
\zeta & 1 & \zeta \\
\zeta & \zeta & 1 \\
\end{pmatrix}.
\end{equation}
We choose $\calE^1=\< \eta\>$ and verify directly that
\[ U_3(\mathbb{F}_2)= \Big( \diag(\mathbb{F}_4^\times, \mathbb{F}_4^\times, \mathbb{F}_4^\times)S_3\cdot 1 \cdot \calE^1 \Big) \, \coprod\, \Big( \diag(\mathbb{F}_4^\times, \mathbb{F}_4^\times, \mathbb{F}_4^\times)S_3\cdot A
\cdot \calE^1 \Big). \]
This shows that $|\Gamma\backslash G/H| = 2$; recall from Lemma~\ref{lm:group_dcoset}.(2) that the double coset space is independent of the choices made.
\end{proof}
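The decomposition of $U_3(\mathbb{F}_2)$ verified directly in the proof lends itself to a machine check. The following Python sketch (our own encoding of $\mathbb{F}_4$ as bit pairs; all helper names are ours) rebuilds $\eta$ and $A$ from Equation~\eqref{eq:etaA}, generates $\diag(\mathbb{F}_4^\times,\mathbb{F}_4^\times,\mathbb{F}_4^\times)\cdot S_3$, $\calE^1=\<\eta\>$ and $U_3(\mathbb{F}_2)$ (of order $2^3\cdot 3\cdot 3\cdot 9=648$), and partitions the latter into double cosets:

```python
# F_4 encoded as 0..3: n represents (n & 1) + (n >> 1)*zeta, zeta^2 = zeta + 1;
# addition is XOR on this encoding
def mul(a, b):
    a0, a1, b0, b1 = a & 1, a >> 1, b & 1, b >> 1
    c2 = a1 & b1
    return ((a0 & b0) ^ c2) | ((((a0 & b1) ^ (a1 & b0)) ^ c2) << 1)

def frob(a):  # x -> x^2, the nontrivial automorphism of F_4/F_2
    return mul(a, a)

def mat_mul(A, B):
    return tuple(tuple(mul(A[i][0], B[0][j]) ^ mul(A[i][1], B[1][j])
                       ^ mul(A[i][2], B[2][j]) for j in range(3))
                 for i in range(3))

def star(A):  # A^* = conjugate transpose
    return tuple(tuple(frob(A[j][i]) for j in range(3)) for i in range(3))

def closure(gens):
    # subgroup generated by gens (finite order elements, so the multiplicative
    # closure containing the identity is already a group)
    elems, frontier = {I}, [I]
    while frontier:
        new = []
        for g in frontier:
            for s in gens:
                h = mat_mul(g, s)
                if h not in elems:
                    elems.add(h)
                    new.append(h)
        frontier = new
    return elems

I = ((1, 0, 0), (0, 1, 0), (0, 0, 1))
Z = 2                                      # the element zeta of F_4
eta = ((0, 0, Z), (1, 0, 0), (0, 1, 0))    # eta of Equation (eq:etaA)
A = ((1, Z, Z), (Z, 1, Z), (Z, Z, 1))      # A of Equation (eq:etaA)
P12 = ((0, 1, 0), (1, 0, 0), (0, 0, 1))    # transposition matrix
P123 = ((0, 1, 0), (0, 0, 1), (1, 0, 0))   # 3-cycle matrix
D = ((Z, 0, 0), (0, 1, 0), (0, 0, 1))      # diag(zeta, 1, 1)

Gamma = closure([D, P12, P123])            # diag(F_4^x, F_4^x, F_4^x) . S_3
E1 = closure([eta])                        # the cyclic group <eta>
U3 = closure([D, P12, P123, eta, A])       # should exhaust U_3(F_2)

cosets, remaining = [], set(U3)
while remaining:                           # partition U3 into double cosets
    g = next(iter(remaining))
    dc = {mat_mul(mat_mul(x, g), e) for x in Gamma for e in E1}
    cosets.append(dc)
    remaining -= dc
```

The check confirms that the generated group is unitary of the expected order $648$ and splits into exactly two double cosets, matching the display in the proof.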
\begin{proposition} \label{lm:a1DCFp6}
Let $p=2$, let $x=(X,\lambda)\in \calS_{3}(k)$ with $a(x)=1$ and let $y=(t,u)\in \calP'_\mu(k)$ be a point over $x$ for the unique element $\mu=\mu_{\rm can}$ in $P(E^3)$.
Assume that $y\in \calD$ and $t\in C(\mathbb{F}_{p^6})$. Then $|\Lambda_x|=2$. Moreover, the two members $(X',\lambda')$ and $(X'',\lambda'')$ of $\Lambda_x$ have automorphism groups
\[ \Aut(X',\lambda')\simeq C_2^3\rtimes C_9, \quad \Aut(X'',\lambda'')\simeq C_2^3 \times C_3, \]
where $C_9$ acts on $C_2^3$ by a cyclic shift.
\end{proposition}
\begin{proof}
Let $x_2=(X_2,\lambda_2)\to x=(X,\lambda)$ be the minimal isogeny for $(X,\lambda)$.
As $a(X)=1$ and the class number $H_3(2,1)=1$, we have $(X_2,\lambda_2)\simeq (E^3, p\mu_{\rm can})$. Again using $H_3(2,1)=1$, one has $|G^1(\mathbb{Q})\backslash G^1(\A_f)/U_{x_2}|=1$ so $G^1(\A_f)=G^1(\mathbb{Q})U_{x_2}$; recall that $U_x = G_x(\widehat \mathbb{Z})$ for any $x$. Hence,
\[ \Lambda_x\simeq G^1(\mathbb{Q})\backslash G^1(\A_f)/U_x= G^1(\mathbb{Q})\backslash
G^1(\mathbb{Q})U_{x_2} /U_{x}=G^1(\mathbb{Z})\backslash G_{x_2}({\bbZ}_p) /G_{x}({\bbZ}_p), \]
where $G^1(\mathbb{Z})=G^1(\mathbb{Q})\cap U_{x_2}=\Aut(X_2,\lambda_2)$, as $G_{x_2}(\mathbb{Z}_\ell)=G_x(\mathbb{Z}_\ell)$ for all primes $\ell\neq p$.
Let $\underline M_2=(M_2,\<\, ,\>_2)$ and $\underline M=(M,\<\, ,\>)$ be the polarised Dieudonn\'{e} modules associated to $(X_2,\lambda_2)$ and $(X,\lambda)$, respectively.
Regarding $G_{x_2}({\bbZ}_p)=\Aut_{\rm DM}(\underline M_2)$, we have a reduction-modulo-$p$ map:
\[ m_p: G_{x_2}({\bbZ}_p)=\Aut_{\rm DM}(\underline M_2) \to \Aut(M_2/pM_2); \]
we write $G_{\underline M_2}$ for its image. For $G_{x}({\bbZ}_p)=\Aut_{\rm DM}(\underline M)$ it then follows from the construction that
\[ G_x({\bbZ}_p)=\{h\in G_{x_2}({\bbZ}_p): m_p(h)(M/pM_2)=M/pM_2\}. \]
Therefore, $G_{x}({\bbZ}_p)$ contains the kernel $\ker( m_p) \subseteq G_{x_2}({\bbZ}_p)$ and we obtain
\begin{equation}\label{eq:p2}
\Lambda_x\simeq \Gamma \backslash G_{\underline M_2} /G_{\underline M},
\end{equation}
where $G_{\underline M}$ is the image $m_p(G_{x}({\bbZ}_p))$ and $\Gamma:=m_p(G^1(\mathbb{Z}))$. It follows from \cite[Lemma 6.1]{karemaker-yobuko-yu} that reduction modulo $p$ gives an exact sequence
\[
\begin{CD}
1 @>>> C_2^3 @>>> \Aut(X_2, \lambda_2) @>{m_p}>> \Gamma @>>> 1.
\end{CD}
\]
Let $O=\End(E)$ be a maximal order of $\End^0(E)\simeq B_{p,\infty}$ and let $\Pi\in O$ be the Frobenius endomorphism.
Clearly, $G_{\underline M_2} = m_p(\Aut_{\rm DM}(\underline M_2))$ is a subgroup of $\GL_3(O/pO)=\GL_3(\mathbb{F}_{p^2}[\Pi])$. In fact, the group $G_{\underline M_2}$ is isomorphic to the group $G$ of Definition~\ref{def:EGH}; cf.~\cite[Definition 5.3]{karemaker-yobuko-yu}.
By further reduction modulo $\Pi$, we obtain an exact sequence
\[
\begin{CD}
1 @>>> U:=\Sym_3(\mathbb{F}_4) @>>> G_{\underline M_2} @>{m_\Pi}>> U_3(\mathbb{F}_2) @>>> 1.
\end{CD}
\]
Let $\calE$ be the image of $\End_{\rm DM}(\underline M)$ in $m_\Pi(\End_{\rm DM}(\underline M_2))$. Since $p=2$ and $t\in C(\mathbb{F}_{2^6})$, $\calE\simeq \mathbb{F}_{4^3}$ is a subalgebra of $\Mat_3(\mathbb{F}_4)$ of degree $3$ which is stable under the induced involution $*$, and $U \cap G_{\underline M}=\Sym_3(\mathbb{F}_4)^0$. Therefore, $G_{\underline M}$ is isomorphic to the group $H$ in Definition~\ref{def:EGH}.
As
\[
G^1(\mathbb{Z}) = \Aut(X_2, \lambda_2) \simeq \Aut(E^3,\mu_{\rm can}) \simeq (O^\times)^3\cdot S_3,
\]
we further see that $\Gamma$ is the same as in Definition~\ref{def:EGH}. So by Lemma~\ref{lem:p2coset}, for $x = (X, \lambda)$, the set $\Lambda_x \simeq \Gamma \backslash G / H$ has two elements,
represented by
\begin{equation}\label{eq:Xlreps}
(X',\lambda') \leftrightarrow G^1(\mathbb{Z}) \cdot 1 \cdot G_x(\mathbb{Z}_p) \text{ and } (X'',\lambda'') \leftrightarrow G^1(\mathbb{Z}) \cdot \tilde{A} \cdot G_x(\mathbb{Z}_p),
\end{equation}
where $\tilde{A}$ is a lift of $A$ as in Equation~\eqref{eq:etaA}. That is, we may take
\[
\tilde{A}: = \frac{1}{a} \begin{pmatrix}
1 & \zeta & \zeta \\
\zeta & 1 & \zeta \\
\zeta & \zeta & 1 \\
\end{pmatrix}, \text{ for } 1 \neq \zeta \in O^{\times} \text{ such that $\zeta^3 = 1$ and } a = 2+\zeta \in O.
\]
The coset representation in \eqref{eq:Xlreps} also immediately implies that
\[
\Aut(X', \lambda') \simeq G^1(\mathbb{Z}) \cap G_x(\mathbb{Z}_p) \text{ and } \Aut(X'', \lambda'') \simeq G^1(\mathbb{Z}) \cap \tilde{A} G_x(\mathbb{Z}_p) \tilde{A}^{-1}.
\]
The group $G^1(\mathbb{Z}) \cap G_x(\mathbb{Z}_p)$ sits in the short exact sequence
\[ \begin{tikzcd}
1 \arrow[r] & C_2^3 \arrow[r] & G^1(\mathbb{Z}) \cap G_x(\mathbb{Z}_p) \arrow[r,"m_\Pi"] & \calE^1 \arrow[r] & 1
\end{tikzcd}\]
and one has $|\Aut(X',\lambda')|=8\cdot 9$.
From the mass $\Mass(\Lambda_x)=1/(2\cdot 3^2)$ and the fact that $\vert \Lambda_x \vert = 2$, we immediately see that $|\Aut(X'',\lambda'')|=8\cdot 3$.
To determine the automorphism groups precisely, we argue as follows. We have that $x = (X, \lambda)$ either equals $(X',\lambda')$ or equals $(X'',\lambda'')$. In either case, the group $\Aut(X,\lambda)$ is the subgroup of $\Aut(X_2,\lambda_2)$ consisting of elements $h$ such that $m_p(h)\in H$. Since $U_\Gamma\cap U_H=0$, its image $m_p(\Aut(X,\lambda))$ is the same as its image $m_\Pi(\Aut(X,\lambda))\subseteq \calE^1\simeq C_9$.
Moreover, we know that $G^1(\mathbb{Z}) = (O^\times)^3\cdot S_3$ and that
\[
G_x(\mathbb{Z}_p) = m_p^{-1}(H) = m_p^{-1}( \Sym_3(\mathbb{F}_4)^0 \calE^1) = m_{\Pi}^{-1} (\calE^1),
\]
where
\[
C_2^3 \simeq \diag(\pm 1, \pm 1, \pm 1) = \ker(m_p)\cap (O^\times)^3\cdot S_3 \subseteq \ker(m_{\Pi})\cap (O^\times)^3\cdot S_3
\]
and $C_9 \simeq \calE^1 = \langle \eta \rangle \subseteq (O^\times)^3\cdot S_3$ by construction.
For $(X', \lambda')$ we therefore must have
\[
\Aut(X', \lambda') \simeq G^1(\mathbb{Z}) \cap G_x(\mathbb{Z}_p) = C_2^3 \rtimes C_9
\]
of cardinality $8\cdot 9$, since the conjugation action by $\eta$ on $\diag(\pm 1, \pm 1, \pm 1)$ is non-trivial.
For $(X'',\lambda'')$, we note that $\tilde{A} \in G_{x_2}(\mathbb{Z}_p)$ normalises $\ker(m_{\Pi}) = m_p^{-1}( \Sym_3(\mathbb{F}_4)^0)$ by construction and compute that
\[
\tilde{A} \eta \tilde{A}^{-1} = \frac{1}{2+\overline{\zeta}} \begin{pmatrix}
1 & 1 & 1 \\
1 & \zeta & \overline{\zeta} \\
\zeta & 1 & \overline{\zeta} \\
\end{pmatrix} =: B,
\]
where $\overline{\zeta} = \zeta^{-1}$. Hence, we get
\[
\Aut(X'', \lambda'') \simeq G^1(\mathbb{Z}) \cap \tilde{A} G_x(\mathbb{Z}_p) \tilde{A}^{-1} = \diag(\pm 1, \pm 1, \pm 1) \cdot \{ B^3, B^6, B^9 = 1 \} \simeq C_2^3 \times C_3
\]
of cardinality $8 \cdot 3$, since the conjugation action by $\{1, B^3, B^6\}$ is trivial.
\end{proof}
\begin{proposition}\label{prop:autg3}
Let $p=2$, choose $x=(X,\lambda)\in \calS_{3}(k)$ and let $y=(t,u)\in \calP'_\mu(k)$ be a point over $x$ for the unique element $\mu=\mu_{\rm can}$ in $P(E^3)$.
\begin{enumerate}
\item Suppose that $t\in C(\mathbb{F}_{p^2})$, that is, $a(x)\ge 2$. Then $\vert \Lambda_x \vert=1$ and we have that
\begin{equation}
\label{eq:auta23}
\vert \Aut(X,\lambda)\vert=
\begin{cases}
24^3\cdot 6=2^{10}\cdot 3^4 & \text{if } u\in
\mathbb{P}_t^1(\mathbb{F}_{p^2}); \\
24\cdot 160=2^8\cdot 3\cdot 5 & \text{if }
u\in\mathbb{P}_t^1(\mathbb{F}_{p^4})\setminus
\mathbb{P}_t^1(\mathbb{F}_{p^2}); \\
24\cdot 32=2^8\cdot 3 & \text{ if
} u \not\in
\mathbb{P}_t^1(\mathbb{F}_{p^4}).
\end{cases}
\end{equation}
\item Suppose that $t\not\in C(\mathbb{F}_{p^2})$, that is, $a(x)=1$. Then
\begin{equation}
\label{eq:cna1}
\vert \Lambda_x \vert=
\begin{cases}
4 & \text{ if } y \notin \calD; \\
4 & \text{ if } t \notin C(\mathbb{F}_{p^6}) \text{ and } y \in \calD; \\
2 & \text{ if } t \in C(\mathbb{F}_{p^6}) \text{ and } y \in \calD.
\end{cases}
\end{equation}
\end{enumerate}
\end{proposition}
\begin{proof}
\begin{enumerate}
\item If $u \in \bbP^1_t(\mathbb{F}_{p^2})$, then $a(x)=3$ and $\vert \Lambda_x \vert =H_3(2,1)=1$, and one computes that $\Mass(\Lambda_x)=1/(2^{10}\cdot 3^4)$. Therefore, $\vert \Aut(X,\lambda) \vert=24^3\cdot 6$. Alternatively, this also follows from \cite[Lemma 7.1]{karemaker-yobuko-yu}. Now we assume that $a(x)=2$. Using the mass formula (cf.~Theorem~\ref{introthm:a2}), we compute that
\begin{equation}
\label{eq:massa2}
\Mass(\Lambda_x)=
\begin{cases}
1/(2^8\cdot 3\cdot 5) & \text{if }
u\in\mathbb{P}_t^1(\mathbb{F}_{p^4})\setminus
\mathbb{P}_t^1(\mathbb{F}_{p^2}); \\
1/(2^8\cdot 3) & \text{ if
} u \not\in
\mathbb{P}_t^1(\mathbb{F}_{p^4}).
\end{cases}
\end{equation}
Let $(E_k^3,p\mu)\xrightarrow{\rho_2} (Y_1,\lambda_1)\xrightarrow{\rho_1} (Y_0,\lambda_0)\simeq (X,\lambda)$ be the PFTQ corresponding to the point $y=(t,u)$. Since $t\in C(\mathbb{F}_{p^2})$, $Y_1$ is superspecial and $(Y_1,\lambda_1)\simeq (E_k,\lambda_E)\times (E_k^2, \mu_1)$, where $\lambda_E$ is the canonical principal polarisation of $E$ and $\mu_1\in P_1(E^2)$. Since $p=2$, we have
$\vert \Aut(E,\lambda_E)\vert=\vert \Aut(E)\vert=24$ and $\vert \Aut(E^2,\mu_1)\vert=1920$, cf.~\cite{ibukiyama:autgp1989}. By Corollary~\ref{cor:Autsp} and Equation~\eqref{eq:AutXl},
we have $|\Aut( (E,\lambda_E)\times (E^2, \mu_1))|=|\Aut(E,\lambda_E)|\times |\Aut(E^2,\mu_1)|=24\cdot 1920$. Notice that $\ker(\rho_1)$ is contained in $\ker(\mu_1)$ since $\ker(\lambda_E)$ is trivial. Therefore, $(X,\lambda)$ is isomorphic to $(E,\lambda_E)\times (X', \lambda')$, where $X'=E_k^2/\ker(\rho_1)$. The computation of $\Aut (X,\lambda)$ is now reduced to computing $\Aut(X',\lambda')$. By Corollary~\ref{cor:p2g2aut}, we have
\[
\vert \Aut(X',\lambda') \vert=
\begin{cases}
160, & \text{if $u\in \bbP^1(\mathbb{F}_{p^4})-\bbP^1(\mathbb{F}_{p^2})$};\\
32 & \text{if $u\in \bbP^1(k)-\bbP^1(\mathbb{F}_{p^4})$}.
\end{cases}
\]
Therefore,
\[
\vert \Aut(X,\lambda) \vert=
\begin{cases}
24\cdot 160 = 2^8 \cdot 3 \cdot 5 & \text{if $u\in \bbP^1(\mathbb{F}_{p^4})-\bbP^1(\mathbb{F}_{p^2})$};\\
24\cdot 32 = 2^8 \cdot 3 & \text{if $u\in \bbP^1(k)-\bbP^1(\mathbb{F}_{p^4})$}.
\end{cases}
\]
Comparing this result with the values of $\Mass(\Lambda_x)$ in \eqref{eq:massa2}, we conclude that $\vert \Lambda_x \vert=1$ in both cases.
\item If $y\notin \calD$, by \cite[Corollary 7.5.(1)]{karemaker-yobuko-yu} we have that $\vert \Lambda_x \vert=4$.
Suppose then that $y\in \calD$ and $t\notin C(\mathbb{F}_{p^6})$.
For every point $x'$ in $\Lambda_x$, consider
the corresponding polarised abelian variety $(X',\lambda')$ satisfying
$(X',\lambda')[p^\infty]\simeq (X,\lambda)[p^\infty]$. If $y'\in \calP_\mu'(k)$ is a point over~$x'$, then again $y'\in \calD$ and $t'\notin C(\mathbb{F}_{p^6})$.
Thus, by \cite[Theorem 7.9.(1)]{karemaker-yobuko-yu}, we have that $\Aut(X',\lambda')\simeq C_2^3 \times C_3$. Using the mass formula (cf.~Theorem~\ref{introthm:a1}), noting that $d(t)=3$ when $p=2$, we compute that
\[
\Mass(\Lambda_x)=\frac{1}{6}.
\]
Therefore, $\vert \Lambda_x \vert =\vert C_2^3\times C_3\vert \cdot \Mass(\Lambda_x)=4$.
For the last case, where $y\in \calD$ and $t\in C(\mathbb{F}_{p^6})$, the assertion $\vert \Lambda_x \vert=2$ follows directly from Proposition~\ref{lm:a1DCFp6}.
\end{enumerate}
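As a sanity check on the arithmetic in case (1), the products $24\cdot 160$ and $24\cdot 32$ match the masses in \eqref{eq:massa2} exactly, which is what forces $\vert\Lambda_x\vert = 1$:

```python
from fractions import Fraction

# |Aut(E, lambda_E)| = 24 at p = 2, combined with the g = 2 counts
case_F16 = 24 * 160   # u in P^1(F_16) \ P^1(F_4)
case_gen = 24 * 32    # u not in P^1(F_16)

assert case_F16 == 2**8 * 3 * 5
assert case_gen == 2**8 * 3

# Mass(Lambda_x) * |Aut(X, lambda)| = 1 in each case, so |Lambda_x| = 1
assert Fraction(1, 2**8 * 3 * 5) * case_F16 == 1
assert Fraction(1, 2**8 * 3) * case_gen == 1
```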
\end{proof}
\subsection{The case $\boldsymbol{g=4}$}\label{ssec:g4}\
\begin{definition}
\begin{enumerate}
\item An \emph{elementary sequence} is a map $\varphi: \{ 1, \ldots, g \} \to \mathbb{Z}_{\geq 0}$ such that, with the convention $\varphi(0) = 0$, we have $\varphi(i) \leq \varphi(i+1) \leq \varphi(i) + 1$ for all $0 \leq i < g$, cf.~\cite[Definition 5.6]{OortEO}. With each elementary sequence we associate an \emph{Ekedahl-Oort stratum}~$\mathcal{S}_{\varphi}$, which is a locally closed subset of the moduli space $\mathcal{A}_{g,1,n} \otimes \overline{\mathbb{F}}_p$ of principally polarised abelian varieties with level-$n$ structure. Roughly speaking, it consists of those varieties whose $p$-torsion has a canonical filtration described by $\varphi$. On $\mathcal{S}_g$ we consider the stratification induced by $\mathcal{S}_{\varphi} \cap \mathcal{S}_g$.
\item The $p$-divisible group of an abelian variety of dimension~$g$ is determined up to isogeny by its Newton polygon, which can be described as a set of slopes $(\lambda_1, \ldots, \lambda_{2g})$ with $0 \leq \lambda_i \leq 1$ for all $1 \leq i \leq 2g$ and $\sum_i \lambda_i = g$, cf.~\cite{manin}. These slopes moreover satisfy that $\lambda_i + \lambda_{2g+1-i} = 1$ for all $1 \leq i \leq 2g$ and that the denominator of each $\lambda_i$ divides its multiplicity. All abelian varieties with the same Newton polygon form a \emph{Newton stratum} of $\mathcal{A}_g$.
\item For $1 \leq a \leq g$, we will denote the \emph{$a$-number locus} of $\mathcal{S}_g$ by $\mathcal{S}_g(a) := \{ x \in \mathcal{S}_g(k) : a(x) = a \}$.
\end{enumerate}
\end{definition}
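Since the $p$-rank of $\mathcal{S}_\varphi$ equals $\max\{i : \varphi(i)=i\}$ and its $a$-number equals $g-\varphi(g)$, the elementary sequences of $p$-rank zero for $g=4$ can be enumerated mechanically; the following sketch recovers the eight strata shown in Figure~1:

```python
from itertools import product

g = 4

# Elementary sequences: phi(0) = 0 and phi(i) <= phi(i+1) <= phi(i) + 1,
# encoded by their increments in {0, 1}.
sequences = []
for steps in product((0, 1), repeat=g):
    phi, val = [], 0
    for s in steps:
        val += s
        phi.append(val)
    sequences.append(tuple(phi))

def p_rank(phi):
    # indices are 1-based in the paper, with phi(0) = 0
    return max([i for i in range(1, g + 1) if phi[i - 1] == i], default=0)

rank_zero = sorted(phi for phi in sequences if p_rank(phi) == 0)
a_numbers = {phi: g - phi[-1] for phi in rank_zero}

assert len(rank_zero) == 8            # the eight strata of Figure 1
assert a_numbers[(0, 1, 2, 3)] == 1
assert a_numbers[(0, 0, 0, 0)] == 4
assert sum(1 for a in a_numbers.values() if a == 2) == 3
assert sum(1 for a in a_numbers.values() if a == 3) == 3
```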
\begin{figure}[h!]\label{fig:EO}
\begin{center}
\begin{tikzcd}
& & & |[blue]| (0,1,2,3) \arrow[dl, dash] \\
& & |[orange]| (0, 1, 2, 2) \arrow[dl, dash] & \\
& |[orange]| (0, 1, 1, 2) \arrow[dl, dash] \arrow[dr, dash] & & \\
|[purple]| (0,1,1,1) \arrow[dr, dash] & & |[orange]| (0,0,1,2) \arrow[dl, dash] & \\
& |[purple]| (0,0,1,1) \arrow[d, dash] & & \\
& |[purple]| (0,0,0,1) \arrow[d, dash] & & \\
& |[teal]| (0,0,0,0) & & \\
\end{tikzcd}
\end{center}
\caption{Ekedahl-Oort strata of $p$-rank zero in dimension $g=4$. The blue stratum has $a$-number $1$, the orange strata have $a$-number $2$, the purple strata have $a$-number $3$ and the teal stratum has $a$-number $4$. Strata are connected by a line if the lower one is contained in the Zariski closure of the upper one.}
\end{figure}
\begin{proposition}\label{prop:EO}
\begin{enumerate}
\item The Ekedahl-Oort strata in dimension $g=4$ of $p$-rank zero are precisely the $\calS_{\varphi}$ for those $\varphi$ appearing in Figure~1.
\item The stratum $\calS_{\varphi}$ for $\varphi = (0,1,2,3)$ has $a$-number $1$, those for $\varphi = (0,1,2,2)$, $(0,1,1,2)$, and $(0,0,1,2)$ have $a$-number $2$, those for $\varphi = (0,1,1,1)$, $(0,0,1,1)$, and $(0,0,0,1)$ have $a$-number $3$ and that for $\varphi = (0,0,0,0)$ has $a$-number $4$.
\item The strata fully contained in the supersingular locus $\mathcal{S}_4$ are precisely the $\calS_{\varphi}$ for\\ $\varphi~=~(0,0,0,0), (0,0,0,1), (0,0,1,1)$, and $(0,0,1,2)$.
\item The Newton strata of $p$-rank zero are those corresponding to the slope sequences
\[
\left(\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2}\right), \left(\frac{1}{3},\frac{1}{3},\frac{1}{3},\frac{1}{2},\frac{1}{2},\frac{2}{3},\frac{2}{3},\frac{2}{3}\right), \text{ and } \left(\frac{1}{4},\frac{1}{4},\frac{1}{4},\frac{1}{4},\frac{3}{4},\frac{3}{4},\frac{3}{4},\frac{3}{4}\right),
\]
which we denote respectively by $\mathcal{N}_{\frac{1}{2}}$, $\mathcal{N}_{\frac{1}{3}}$, and $\mathcal{N}_{\frac{1}{4}}$.
\item We have
\[
\begin{split}
\mathcal{S}_4 = \mathcal{N}_{\frac{1}{2}} & = \left( \mathcal{S}_{(0,1,2,3)} \cap \mathcal{S}_4 \right) \sqcup \mathcal{S}_{(0,0,0,0)}
\sqcup \mathcal{S}_{(0,0,0,1)} \\
& \sqcup \mathcal{S}_{(0,0,1,1)} \sqcup \mathcal{S}_{(0,0,1,2)} \sqcup \left( \mathcal{S}_{(0,1,1,2)} \cap \mathcal{S}_4 \right),
\end{split}
\]
and $\mathcal{S}_{(0,1,2,3)} \cap \mathcal{S}_4$ is dense. In particular, we have
\[
\begin{split}
\mathcal{S}_4(4) & = \mathcal{S}_{(0,0,0,0)}, \\
\mathcal{S}_4(3) & = \mathcal{S}_{(0,0,0,1)} \sqcup \mathcal{S}_{(0,0,1,1)}, \\
\mathcal{S}_4(2) & = \mathcal{S}_{(0,0,1,2)} \sqcup \left( \mathcal{S}_{(0,1,1,2)} \cap \mathcal{S}_4 \right).
\end{split}
\]
\item We have
\[
\mathcal{N}_{\frac{1}{3}} = \left( \mathcal{S}_{(0,1,2,3)} \cap \mathcal{N}_{\frac{1}{3}} \right) \sqcup \mathcal{S}_{(0,1,1,1)}
\sqcup \left( \mathcal{S}_{(0,1,1,2)} \cap \mathcal{N}_{\frac{1}{3}} \right),\]
and $\mathcal{S}_{(0,1,2,3)} \cap \mathcal{N}_{\frac{1}{3}}$ is dense.
\item We have
\[
\mathcal{N}_{\frac{1}{4}} = \left( \mathcal{S}_{(0,1,2,3)} \cap \mathcal{N}_{\frac{1}{4}} \right) \sqcup \mathcal{S}_{(0,1,2,2)},
\]
and $\mathcal{S}_{(0,1,2,3)} \cap \mathcal{N}_{\frac{1}{4}}$ is dense.
\end{enumerate}
All intersections appearing in (5)--(7) are non-empty.
\end{proposition}
\begin{proof}
The $p$-rank of an Ekedahl-Oort stratum $\mathcal{S}_{\varphi}$ is $\max\{ i : \varphi(i) = i \}$ and its $a$-number is $g - \varphi(g)$, proving (1) and (2). By \cite[Step 2, p.\ 1379]{COirr} we have $\mathcal{S}_{\varphi} \subseteq \mathcal{S}_4$ if and only if $\varphi(2) = 0$, proving (3).
The $p$-rank of a Newton stratum is the number of non-zero slopes, which implies~(4).
We read off from Figure~1 that $\mathcal{S}_{(0,1,2,3)} \cap \mathcal{S}_4$, $\mathcal{S}_{(0,1,2,3)} \cap \mathcal{N}_{\frac{1}{3}}$, and $\mathcal{S}_{(0,1,2,3)} \cap \mathcal{N}_{\frac{1}{4}}$ are the respective $a$-number $1$ loci of $\mathcal{S}_4$, $\mathcal{N}_{\frac{1}{3}}$, and $\mathcal{N}_{\frac{1}{4}}$. Hence, density of these intersections follows from \cite[Theorem 4.9(iii)]{lioort} for $\mathcal{S}_4$, and from combining \cite[Remark 5.4]{OortNP} with \cite[Theorem 3.1]{COirr} for $\mathcal{N}_{\frac{1}{3}}$ and $\mathcal{N}_{\frac{1}{4}}$.
By \cite[Corollary 4.2 and Lemma 5.12]{harafirst} we see that $\mathcal{S}_{(0,1,2,2)} \subseteq \mathcal{N}_{\frac{1}{4}}$ by minimality of the associated $p$-divisible group, concluding the proof of~(7).
Similarly, from \cite[Corollary 4.2 and Proposition 7.1]{harafirst}, we obtain that $\mathcal{S}_{(0,1,1,1)} \subseteq \mathcal{N}_{\frac{1}{3}}$, again by minimality.
Finally, we read off from Figure~1 that
\[
\mathcal{S}_{(0,1,1,2)} = \left( \mathcal{S}_{(0,1,1,2)} \cap \mathcal{N}_{\frac{1}{3}} \right) \sqcup \left( \mathcal{S}_{(0,1,1,2)} \cap \mathcal{S}_{4} \right).
\]
Now \cite[Theorem 4.17]{haraanumber} implies that $\mathcal{S}_4(2)$ has $H_4(1,p)+H_4(p,1)$ many irreducible components of two types, of which those of the type corresponding to $\mathcal{S}_{(0,0,1,2)}$ yield $H_4(1,p)$ many; see also~\cite[\S 9.9]{lioort}. Hence, the intersection $\mathcal{S}_{(0,1,1,2)} \cap \mathcal{S}_{4}$ must yield the other $H_4(p,1)$ components and thus be non-empty. On the other hand, since $\mathcal{S}_{(0,1,1,2)} \not\subseteq \mathcal{S}_4$ by (2), the intersection $\mathcal{S}_{(0,1,1,2)} \cap \mathcal{N}_{\frac{1}{3}}$ is also non-empty. This finishes the proof of (5) and (6) and hence of the proposition.
\end{proof}
By Lemma~\ref{lem:Lgpc}, for every point $x\in \mathcal{S}_4(k)$, there exists an integer $0\leq c\leq 2$ such that there exists a surjective morphism $\pi:\Lambda_x \twoheadrightarrow \Lambda_{g,p^c}$. For Ekedahl-Oort strata with $g=4$ we have the following result:
\begin{lemma}\label{lem:c}
We have $c = 0$ for $x \in \mathcal{S}_{(0,0,0,0)}$, and $c =1$ for $x \in \mathcal{S}_{(0,0,0,1)}$, and $c = 2$ for $x \in \mathcal{S}_{(0,0,1,1)} \cup \mathcal{S}_{(0,0,1,2)}$.
\end{lemma}
\begin{proof}
This follows from \cite[Proposition 3.3.2]{harashita}; the Deligne-Lusztig varieties $X(w')$ in \emph{loc. cit.} are given by $w' = \mathrm{id}$ when $c=0$, by $w' = (12)$ when $c=1$ and by $w' = (1342)$ or $(13)(24)$ when $c=2$.
\end{proof}
\begin{remark}
For any $x \in \mathcal{S}_4(k)$, the surjection $\Lambda_x \twoheadrightarrow \Lambda_{4,p^c}$ is realised through the minimal isogeny $\tilde{x} \twoheadrightarrow x$ for $x$, i.e., $\Lambda_{\tilde{x}} = \Lambda_{4,p^c}$ for the appropriate value of $c$. This is not necessarily true in $\mathcal{S}_g(k)$ with $g \neq 4$.
If $x$ is contained in a supersingular Ekedahl-Oort stratum (i.e., one of the strata in Proposition~\ref{prop:EO}.(3)) this follows directly, cf.~\cite{harashita}. Otherwise, we have either $x \in \mathcal{S}_{(0,1,2,3)} \cap \mathcal{S}_4$ or $x \in \mathcal{S}_{(0,1,1,2)} \cap \mathcal{S}_4$. In the former case, we have $a(x) = 1$, so
$\Lambda_{\tilde{x}} \simeq \Lambda_{4,p^2}$, as we will see in the proof of Theorem~\ref{thm:maing4}. In the latter case, it follows from \cite[Proposition 7.1]{haraanumber} that $\Lambda_{\tilde{x}} \simeq \Lambda_{4,1}$.
\end{remark}
\begin{lemma}\label{lem:a4}
Let $x \in \mathcal{S}_4(k)$. When $a(x) = 4$, we have $\vert \mathcal{C}(x) \vert > 1$.
\end{lemma}
\begin{proof}
By Proposition~\ref{prop:EO}.(5), we have $x \in \mathcal{S}_{(0,0,0,0)}$, so by Lemma~\ref{lem:c}, there exists a surjection $\Lambda_x \twoheadrightarrow \Lambda_{4,p^0}$. As observed in Subsection~\ref{ssec:sspmass}, it holds that $\lvert \Lambda_{4,p^0} \rvert= H_4(p,1)$, so it follows from Theorem~\ref{thm:mainarith}.(4) that $H_4(p, 1) > 1$. This implies the result.
\end{proof}
\begin{lemma}\label{lem:a2S0001}
Let $x \in \mathcal{S}_4(k)$. When $a(x) = 3$ and $x \in \mathcal{S}_{(0,0,0,1)}$, we have $\vert \mathcal{C}(x) \vert > 1$.
\end{lemma}
\begin{proof}
By Lemma~\ref{lem:c}, there exists a surjection $\Lambda_x \twoheadrightarrow \Lambda_{4,p}$. By Theorem~\ref{thm:sspmass} we get that
\[
\mathrm{Mass}(\Lambda_{4,p}) = \frac{(p-1)(p^2+1)(p^4+1)(p^6-1)^2}{2^{15}\cdot 3^5 \cdot 5^2 \cdot 7}.
\]
Since $(p^3-1)$ divides the numerator of $\mathrm{Mass}(\Lambda_{4,p})$, we may argue as in the proof of Theorem~\ref{thm:mainarith}.(3)+(4) to conclude that this numerator is always larger than $1$. This implies that $\vert \Lambda_{4,p} \vert > 1$, so the result follows.
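Concretely, writing $\mathrm{Mass}(\Lambda_{4,p})$ in lowest terms, the numerator remains larger than $1$ for small primes (for $p=2$ it equals $7\cdot 17 = 119$), which is incompatible with $\Lambda_{4,p}$ consisting of a single class; a quick check with exact rational arithmetic:

```python
from fractions import Fraction

def mass_4p(p):
    # Mass(Lambda_{4,p}) as in the displayed formula (the text needs p = 2;
    # other primes are included only as a plausibility check)
    num = (p - 1) * (p**2 + 1) * (p**4 + 1) * (p**6 - 1)**2
    return Fraction(num, 2**15 * 3**5 * 5**2 * 7)

# (p^3 - 1) divides the unreduced numerator, since p^6 - 1 = (p^3-1)(p^3+1)
for p in (2, 3, 5, 7, 11, 13):
    num = (p - 1) * (p**2 + 1) * (p**4 + 1) * (p**6 - 1)**2
    assert num % (p**3 - 1) == 0
    assert mass_4p(p).numerator > 1   # so |Lambda_{4,p}| > 1

assert mass_4p(2).numerator == 7 * 17
```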
\end{proof}
\begin{lemma}\label{lem:a2Sn0012}
Let $x \in \mathcal{S}_4(k)$. When $a(x) = 2$ and $x \not\in \mathcal{S}_{(0,0,1,2)}$, we have $\vert \mathcal{C}(x) \vert > 1$.
\end{lemma}
\begin{proof}
By Proposition~\ref{prop:EO}.(5), we know that $x \in \mathcal{S}_{(0,1,1,2)} \cap \mathcal{S}_4$.
By \cite[Main results, p.~164]{haraanumber} every generic point of $\mathcal{S}_{(0,1,1,2)} \cap \mathcal{S}_4$ has a minimal isogeny from $(E^g,p\mu)$ with a principal polarisation $\mu$. It follows that the minimal isogeny $\widetilde x=(\widetilde X ,\widetilde \lambda)$ for $x=(X,\lambda)$ is either isomorphic to $(E^4, p\mu)$ or there exists an isogeny $(E^4, p\mu)\to (\widetilde X, \widetilde \lambda)$ for some principal polarisation~$\mu$ on $E^4$. Therefore, $|\Lambda_{\widetilde x}|=|\Lambda_{4,1}|$ or $|\Lambda_{\widetilde x}|=|\Lambda_{4,p}|$. These two numbers are both greater than one as shown in Theorem~\ref{thm:mainarith}.(4) and in Lemma~\ref{lem:a2S0001}. Thus, $\lvert \calC(x) \rvert \ge \lvert \Lambda_{\widetilde x} \rvert >1$.
\end{proof}
\begin{theorem}\label{thm:maing4}
For every $x \in \mathcal{S}_4(k)$, we have $\vert \mathcal{C}(x) \vert > 1$.
\end{theorem}
\begin{proof}
It follows from Proposition~\ref{prop:EO}.(5) and Lemmas~\ref{lem:a4}--\ref{lem:a2Sn0012} that it suffices to consider $x \in \mathcal{S}_4(k)$ such that one of the following holds:
\begin{itemize}
\item[(i)] $x \in \mathcal{S}_{(0,0,1,2)} \sqcup \mathcal{S}_{(0,0,1,1)}$, or
\item[(ii)] $a(x)=1$.
\end{itemize}
In Case (i), by Lemma~\ref{lem:c}, there exists a surjection $\Lambda_x \twoheadrightarrow \Lambda_{4,p^2}$, i.e., $c=2$. In Case (ii), by \cite[Theorem 2.2]{oda-oort} (also see \cite[Lemma 4.4]{katsuraoort}),
there exists a unique four-dimensional rigid PFTQ
\[ (Y_\bullet, \rho_\bullet): (Y_3, \lambda_3) \to (Y_2,\lambda_2) \to (Y_1,\lambda_1)\to (X_0,\lambda_0)=x \]
extending $(X_0,\lambda_0)$. The construction in \emph{loc.~cit.} also shows that the composition
\[
y_3=(Y_3,\lambda_3)\to (X_0,\lambda_0) = x
\]
is the minimal isogeny for $x$, and hence so is $y_3=(Y_3,\lambda_3)\to y_2=(Y_2,\lambda_2)$ for $y_2$. By Definition~\ref{def:PFTQ}, the polarisation $\lambda_3$ is $p$ times a polarisation $\mu$ on $E^4$.
Dividing the
polarisation by $p$ therefore gives an isomorphism $\Lambda_{y_3}\simeq \Lambda_{4,p^2}$.
Thus, the minimal isogeny gives rise to surjective maps $\Lambda_x\twoheadrightarrow \Lambda_{y_2} \twoheadrightarrow \Lambda_{4,p^2}$. Hence, to show that $|\calC(x)|>1$, it suffices to show that $|\Lambda_{y_2}|>1$. Replacing $x$ with $y_2$, we now also have a surjection $\Lambda_x \twoheadrightarrow \Lambda_{4,p^2}$ in Case (ii).
Since we have $L_{4,p^c} = L_4(1,p)$ from Equation~\eqref{eq:npgc}, it follows immediately from Theorem~\ref{thm:mainarith}.(4) that $\vert \mathcal{C}(x) \vert > 1$ when $p>2$. So from now on, we assume that $p=2$.
We use the same notation as in Subsection~\ref{ssec:4first}. Since $\Lambda_{4,4} \simeq G^1(\mathbb{Q})\backslash G^1(\A_f)/U_{x_2}$ where the base point $x_2 \in \Lambda_{4,4}$ is taken from the minimal isogeny for $x$, and $\vert \Lambda_{4,4} \vert = 1$,
we get that $G^1(\A_f) = G^1(\mathbb{Q}) U_{x_2}$. Hence,
\[
\Lambda_x \simeq G^1(\mathbb{Q}) \backslash G^1(\mathbb{Q}) U_{x_2} / U_x \simeq G^1(\mathbb{Z}) \backslash G_{x_2}(\mathbb{Z}_p) / G_x(\mathbb{Z}_p),
\]
where $G_x(\mathbb{Z}_p)$ is the automorphism group of the polarised Dieudonn{\'e} module associated to $x$. Applying the reduction-modulo-$\Pi$ map $m_{\Pi}$, we obtain $m_{\Pi}(G_{x_2}(\mathbb{Z}_p)) = \mathrm{Sp}_4(\mathbb{F}_4)$.
Further, let $(X_2,\lambda_2)$ be the superspecial abelian variety corresponding to the unique element $x_2 \in \Lambda_{4,4}$. Then by Proposition~\ref{prop:np2}, and using the same notation, we know that $G^1(\mathbb{Z}) = \Aut(X_2, \lambda_2) \simeq \Aut((L,h)^{\oplus 2})\simeq \Aut(L,h)^2 \cdot C_2$ and $\Aut(L,h)$ is the group of cardinality $1920$ described in \cite[Section 5]{ibukiyama}. By \cite[Section 5, p.~1178]{ibukiyama} the reduction modulo $\Pi$ induces a surjective homomorphism $\phi_0:\Aut(L,h)\twoheadrightarrow \SL_2(\mathbb{F}_4)$ whose kernel $\ker(\phi_0)$ has order 32 (also see \emph{loc. cit.} for the description of $\ker(\phi_0)$). Then it follows that $m_{\Pi}(G^1(\mathbb{Z})) = m_{\Pi}(\Aut(X_2,\lambda_2)) \simeq \mathrm{SL}_2(\mathbb{F}_4)^2 \cdot C_2$.
Writing $\overline{G} := m_\Pi(G_x(\mathbb{Z}_p))$, we obtain
\begin{equation}\label{eq:Lambdag4}
\Lambda_x \simeq (\mathrm{SL}_2(\mathbb{F}_4)^2 \cdot C_2)\backslash \mathrm{Sp}_4(\mathbb{F}_4) / \overline{G},
\end{equation}
since $\ker(m_\Pi) \subseteq G_x(\mathbb{Z}_p)$ (cf.~the proof of Proposition~\ref{lm:a1DCFp6}). Thus,
\begin{equation}\label{eq:Massg4p2}
\mathrm{Mass}(\Lambda_x) = \mathrm{Mass}(\Lambda_{4,4}) \cdot [\mathrm{Sp}_4(\mathbb{F}_4) : \overline{G}].
\end{equation}
We compute that
\begin{equation}\label{eq:massL44}
\mathrm{Mass}(\Lambda_{4,4}) = \frac{1}{2^{15}\cdot3^2\cdot5^2}
\end{equation}
from Theorem~\ref{thm:sspmass}, using Equation~\eqref{eq:valuevn}. Standard computations also show that
\begin{equation}\label{eq:Sp4F4}
\vert \mathrm{Sp}_4(\mathbb{F}_4) \vert = 2^8 \cdot 3^2 \cdot 5^2 \cdot 17
\end{equation}
and that
\begin{equation}\label{eq:SL2S2}
\vert \mathrm{SL}_2(\mathbb{F}_4)^2 \cdot C_2 \vert = 2^5 \cdot 3^2 \cdot 5^2.
\end{equation}
By~\eqref{eq:massL44} and~\eqref{eq:Sp4F4}, Equation~\eqref{eq:Massg4p2} reduces to
\begin{equation}\label{eq:mass17}
\mathrm{Mass}(\Lambda_{x}) = \frac{17}{2^7 \cdot \vert \overline{G} \vert}.
\end{equation}
We deduce that $\vert \Lambda_x \vert > 1$ whenever $17 \nmid \vert \overline{G} \vert$.
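The order computations in \eqref{eq:Sp4F4} and \eqref{eq:SL2S2} and the reduction of \eqref{eq:Massg4p2} to \eqref{eq:mass17} are elementary arithmetic, which can be double-checked numerically (using the standard order formulas $\vert\mathrm{Sp}_4(q)\vert = q^4(q^2-1)(q^4-1)$ and $\vert\mathrm{SL}_2(q)\vert = q(q^2-1)$):

```python
from fractions import Fraction

q = 4
sp4 = q**4 * (q**2 - 1) * (q**4 - 1)        # |Sp_4(F_4)|
sl2 = q * (q**2 - 1)                        # |SL_2(F_4)| = 60
assert sp4 == 2**8 * 3**2 * 5**2 * 17       # eq. (Sp4F4)
assert sl2**2 * 2 == 2**5 * 3**2 * 5**2     # eq. (SL2S2)

mass_44 = Fraction(1, 2**15 * 3**2 * 5**2)  # Mass(Lambda_{4,4}), eq. (massL44)
# Mass(Lambda_x) * |Gbar| = Mass(Lambda_{4,4}) * |Sp_4(F_4)| = 17 / 2^7
assert mass_44 * sp4 == Fraction(17, 2**7)
```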
Suppose therefore that $17 \mid \vert \overline{G} \vert$, so that $\overline{G}$ contains a cyclic group $C_{17}$ of order $17$. We claim that then $\overline{G} = C_{17}$. This finishes the proof: if $\vert \Lambda_x \vert = 1$, then Equation~\eqref{eq:Lambdag4} would imply that $\mathrm{Sp}_4(\mathbb{F}_4) = (\mathrm{SL}_2(\mathbb{F}_4)^2 \cdot C_2)\,\overline{G} = (\mathrm{SL}_2(\mathbb{F}_4)^2 \cdot C_2)\, C_{17}$, whereas by~\eqref{eq:Sp4F4} and~\eqref{eq:SL2S2} the right-hand side has at most $2^5 \cdot 3^2 \cdot 5^2 \cdot 17 < \vert \mathrm{Sp}_4(\mathbb{F}_4) \vert$ elements, a contradiction.
Finally, we prove the claim. Let $(M_2, \<\, , \, \>_2)$
be the polarised Dieudonn\'{e} module attached to~$x_2$, and fix $V=M_2/\mathsf{V} M_2$ together with a non-degenerate symplectic form $\psi$ induced by $p\<\, , \, \>_2$. The four-dimensional symplectic space $(V, \psi)$ over $k$ admits an $\mathbb{F}_{4}$-structure $V_0$ induced by the skeleton of $M_2$. Inside $V$ we have an isotropic $k$-subspace $W=M/\mathsf{V} M_2$, where $M\subseteq M_2$ is the Dieudonn\'{e} module associated to $x$ and the inclusion is induced from the minimal isogeny. Note that $\dim W=2$ in Case (i) and $\dim W=1$ in Case (ii), respectively. According to our definition,
\[ \overline G:=\{A\in \Sp_4(\mathbb{F}_{4}): A(W)=W\, \}=\Sp(V_0,W). \]
Thus, it follows from Proposition~\ref{prop:Cq^2+1} for $g=4$
that $\overline G=C_{17}$. This completes the proof of the claim and hence of the theorem.
\end{proof}
\subsection{Proof of the main result}\
\begin{theorem}\label{thm:main2}
Let $x=[X_0,\lambda_0]\in \calS_{g}(k)$ and $\calC(x)$ be the
central leaf of $\calA_{g}$ passing through the point $x$.
Then $\calC(x)$ has one element if and
only if one of the following three cases holds:
\begin{itemize}
\item [(i)] $g=1$ and $p\in \{2,3,5,7,13\}.$
\item [(ii)] $g=2$ and $p\in \{2,3\}$.
\item [(iii)] $g=3$, $p=2$, and $a(x)\ge 2$.
\end{itemize}
\end{theorem}
\begin{proof}
The cases where $g=1,2, 4$ or $g \geq 5$ follow from Lemma~\ref{lem:g1}, Lemma~\ref{lem:g2}, Theorem~\ref{thm:maing4} and Lemma~\ref{lem:g5+}, respectively.
Suppose then that $g=3$. By Lemma~\ref{lem:Lgpc}, either $\vert \Lambda_x\vert \geq \vert \Lambda_{3,1} \vert=H_3(p,1)$ or $\vert \Lambda_x \vert \geq \vert \Lambda_{3,p}\vert=H_3(1,p)$. Thus, by Theorem~\ref{thm:mainarith}, $\vert \Lambda_x \vert=1$ occurs only when $p=2$. Further assuming $p=2$, by Proposition~\ref{prop:autg3}, $\calC(x)$ has one element if and only if $a(x)\ge 2$.
\end{proof}
\section{Introduction}
\noindent
The Krein-Rutman theorem is a fundamental theorem in the theory of positive compact linear operators. It has been widely applied in partial differential equations, dynamical systems, Markov processes, fixed point theory, and functional analysis. For instance, the Krein-Rutman theorem is a basic tool for deriving the existence of the principal eigenvalue of a second-order elliptic equation, which can then be used to analyse the dynamical behaviour of the corresponding system.
In the pioneering works of Perron \cite{perron1907theorie} and Frobenius \cite{frobenius1908ueber,frobenius1912matrizen}, it was proved that the spectral radius of a nonnegative square matrix is an eigenvalue with a nonnegative eigenvector. Krein and Rutman extended the Perron-Frobenius theory to positive compact linear operators, which yields the celebrated Krein-Rutman theorem. Nussbaum \cite{nussbaum1981eigenvectors} proved a more general Krein-Rutman theorem for a positive bounded linear operator whose spectral radius is larger than its essential spectral radius.
Furthermore, the Perron-Frobenius theorem also shows that if the square matrix is nonnegative and irreducible, then the spectral radius is an algebraically simple eigenvalue with a strongly positive eigenvector, and all other eigenvalues have modulus less than the spectral radius. The Krein-Rutman theorem presents a similar conclusion for a strongly positive compact linear operator (see, e.g., \cite[Theorem 1.2]{du2006order} or \cite[Theorem 19.3]{deimling1985nonlinear}). However, for a strongly positive bounded linear operator whose spectral radius is larger than its essential spectral radius, Nussbaum did not establish the analogous conclusion in \cite{nussbaum1981eigenvectors} or in other works. In this paper, we focus on proving it, using the line of argument of the Krein-Rutman theorem (\cite[Theorem 1.2]{du2006order}) together with an observation about the relationship between the essential spectral radii in different spaces.
To describe this more precisely, we now recall some basic notation. Let $Y$ be a Banach space. A subset $P\subset Y$ is called a cone if $P$ is a closed convex set, $\lambda P\subset P$ for all $\lambda \geq 0$, and $P\cap (-P)=\{0\}$. Let $\mathring{P}$ be the interior of $P$, $\partial P=P\setminus \mathring{P}$ the boundary of $P$, and $ \dot{P}=P\setminus \{0\}$. Recall that $a \geq b$ if $a-b \in P$, $a>b$ if $a-b \in \dot{P}$, and $a\gg b$ if $a-b \in \mathring{P}$. The cone $P$ is called total if $Y=\overline{P-P}$. Obviously, if $\mathring{P}\neq \emptyset$, then $P$ is total. Let $\mathcal{L}(Y)$ be the collection of all bounded linear operators from $Y$ to $Y$. We say that $T\in \mathcal{L}(Y)$ is a positive operator if $T:P \rightarrow P$, and that $T\in \mathcal{L}(Y)$ is a strongly positive operator if $T:\dot{P} \rightarrow \mathring{P}$. Let $\sigma(T)$ and $\sigma_e(T)$ denote the spectrum and the essential spectrum of $T \in \mathcal{L}(Y)$, respectively, whose radii are denoted by $r(T)$ and $r_e(T)$. The following is the Krein-Rutman theorem (see, e.g., \cite[Theorem 1.2]{du2006order} or
\cite[Theorem 19.3]{deimling1985nonlinear}).
\begin{theorem}[Krein-Rutman Theorem]\label{lem:KR}
Let $X$ be a Banach space, $K \subset X$ a total cone, and $L\in \mathcal{L}(X)$ a compact positive operator with $r(L)>0$. Then $r(L)$ is an eigenvalue with an eigenvector $x \in \dot{K}$. If, furthermore, the interior $\mathring{K}$ is non-empty and $L$ is strongly positive, then $x \in \mathring{K}$, $r(L)$ is an algebraically simple eigenvalue of $L$, and $\vert\lambda \vert <r(L)$ for any other eigenvalue $\lambda$ of $L$.
\end{theorem}
The following generalized Krein-Rutman theorem (weak version) is from \cite[Corollary 2.2]{nussbaum1981eigenvectors}.
\begin{theorem}[Generalized Krein-Rutman Theorem, a weak version]\label{thm:GKR:weak} Let $X$ be a Banach space, $K \subset X$ a total cone, and $L\in\mathcal{L}(X)$ a positive operator with $r(L)>r_e(L)$. Then there exists an $x \in \dot{K}$ such that $Lx=r(L)x$.
\end{theorem}
In this paper, we focus on proving the following generalized Krein-Rutman theorem (strong version).
\begin{theorem}[Generalized Krein-Rutman Theorem, a strong version]\label{thm:GKR:strong} Let $X$ be a Banach space, $K \subset X$ a cone with $\mathring{K}\neq \emptyset$, and $L\in \mathcal{L}(X)$ a strongly positive operator with $r(L)>r_e(L)$. Then $r(L)$ is an algebraically simple eigenvalue of $L$ with an eigenvector $x \in \mathring{K}$, and $\vert\lambda \vert <r(L)$ for any other eigenvalue $\lambda$ of $L$.
\end{theorem}
\section{Proof of a generalized Krein-Rutman theorem}
First, we recall a formula for computing the essential spectral radius (see, e.g., \cite[Theorem 9.9]{deimling1985nonlinear}).
\begin{lemma}\label{lem:for:ess}
Let $X$ be a Banach space and $L\in \mathcal{L}(X)$, then
$r_e(L)=\lim_{n\rightarrow +\infty}(\gamma(L^n))^\frac{1}{n}$, where
\begin{equation}\label{equ:gamma:L}
\gamma(L):=\inf \{k>0: \alpha(L B) \leq k \alpha(B)\text{ for any bounded }B \subset X \},
\end{equation}
where $\alpha$ denotes the Kuratowski measure of non-compactness.
\end{lemma}
\begin{lemma}\label{lem:ess:L1<L}
Let $X$ be a Banach space, $X_1$ a closed subspace of $X$, and $L\in \mathcal{L}(X)$ with $LX_1\subset X_1$. Then $r_e(L_1)\leq r_e(L)$, where $L_1=L\vert_{X_1}$.
\end{lemma}
\begin{proof}
We set
$$
A:=
\{k>0: \alpha(L B) \leq k \alpha(B)\text{ for any bounded }B \subset X \},
$$
$$
A_1:=
\{k>0: \alpha(L_1 B_1) \leq k \alpha(B_1)\text{ for any bounded }B_1 \subset X_1 \}.
$$
Note that for a bounded set $B_1\subset X_1$, the Kuratowski measure $\alpha(B_1)$ is the same whether computed in $X_1$ or in $X$, since the covering sets may be taken to be subsets of $B_1$. Since $L_1 B_1=L B_1$ for any bounded $B_1 \subset X_1 \subset X$, every $k \in A$ also belongs to $A_1$, that is, $A \subseteq A_1$. We deduce from \eqref{equ:gamma:L} that $\gamma(L_1)\leq \gamma(L)$. By the same argument, we have $\gamma(L_1^n)\leq \gamma(L^n)$ for every integer $n\geq 1$. Applying Lemma \ref{lem:for:ess}, it follows that $r_e(L_1) \leq r_e(L)$.
\end{proof}
\begin{lemma}\label{lem:pos:any_t}
Let $X$ be a Banach space, $K \subset X$ a cone, and $x \in K$. If $x +t v \in K$ for all $t > 0$, then $v \in K$.
\end{lemma}
\begin{proof}
Since $t^{-1} x+v \in K$ for every $t> 0$ and $K$ is closed, letting $t\rightarrow +\infty$ yields $v \in K$.
\end{proof}
\begin{lemma} \label{lem:str_pos:thre}
Let $X$ be a Banach space and $K \subset X$ a cone with $\mathring{K}\neq \emptyset$. For $x \in \mathring{K}$ and $v \notin K$, there is a finite number $t_0>0$ such that $x+t_0 v \in \partial K$, $x+t v \in \mathring{K}$ for $ 0\leq t < t_0$, and $x+t v \notin K$ for $ t> t_0$.
\end{lemma}
\begin{proof}
Let $$t_0:=\sup \{t\geq 0: x+tv \in K\}.$$
It follows from $x \in \mathring{K}$ and Lemma \ref{lem:pos:any_t} that $t_0>0$ and $t_0<+\infty$. By the definition of $t_0$, we have $x+tv \notin K$ for any $t >t_0$; since $K$ is closed, $x+t_0v \in K$.
For any $t\in [0,t_0)$, we then have $x+t v=\frac{t}{t_0}(x+t_0v) +\frac{t_0-t}{t_0}x \in \mathring{K}$, because a convex combination with positive weight on the interior point $x$ lies in $\mathring{K}$. Finally, if $x+t_0v$ were in $\mathring{K}$, then $x+(t_0+\varepsilon)v \in K$ for all sufficiently small $\varepsilon>0$, contradicting the definition of $t_0$; hence $x+t_0 v \in \partial K$.
\end{proof}
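To fix ideas, consider $K = \mathbb{R}^2_{\geq 0}$, $x=(1,1)\in\mathring{K}$, and $v=(1,-2)\notin K$; the threshold is $t_0 = \sup\{t\geq 0 : 1-2t \geq 0\} = 1/2$, with $x+t_0v = (3/2, 0) \in \partial K$. A small numerical illustration:

```python
# Threshold t_0 for K = the nonnegative orthant in R^2 (illustrative example)
x = (1.0, 1.0)        # interior point of K
v = (1.0, -2.0)       # v is not in K

def in_K(p):
    return all(c >= 0 for c in p)

def point(t):
    return tuple(xi + t * vi for xi, vi in zip(x, v))

# t_0 = sup{t >= 0 : x + t v in K}; here only the second coordinate binds
t0 = min(-xi / vi for xi, vi in zip(x, v) if vi < 0)
assert t0 == 0.5
assert in_K(point(t0)) and point(t0)[1] == 0.0                # on the boundary
assert in_K(point(0.25)) and all(c > 0 for c in point(0.25))  # interior for t < t0
assert not in_K(point(0.75))                                  # outside K for t > t0
```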
\begin{lemma}\label{lem:eig:no_other>0}
Let $X$ be a Banach space, $K \subset X$ a cone with $\mathring{K}\neq \emptyset$, and $L\in \mathcal{L}(X)$ a positive operator. If $r$ is an eigenvalue of $L$ with an eigenvector $x \in \mathring{K}$, then $L$ cannot have an eigenvalue $s>r$.
\end{lemma}
\begin{proof}
We argue by contradiction. Suppose that $L$ has an eigenvalue $s>r$ with an eigenvector $v$; replacing $v$ by $-v$ if necessary, we may assume that $v \notin K$. Since $x \in \mathring{K}$, applying Lemma \ref{lem:str_pos:thre} we find $t_0>0$ such that $x+t_0 v \in \partial K$ and $x+t v \notin K$ for $t > t_0$. Note that $r\geq 0$, since $rx=Lx\in K$ and $x\in\mathring{K}$; if $r=0$, then $L(x+t_0v)=t_0 s v \in K$ gives $v \in K$ directly, a contradiction, so we may assume $r>0$. Since $L$ is positive and $x+t_0 v \in K$, we get $$L(x+t_0 v)=r x +t_0 s v=r\left(x+t_0 \tfrac{s}{r}v\right) \in K,$$
so $x+t_0 \frac{s}{r}v \in K$. But $t_0 \frac{s}{r} > t_0$, so $x+t_0 \frac{s}{r}v \notin K$ by the choice of $t_0$, a contradiction.
\end{proof}
\begin{remark}
If $x\in K$ but $x \notin \mathring{K}$, the conclusion of Lemma \ref{lem:eig:no_other>0} may fail. For example, with $K=\mathbb{R}^2_{\geq 0}$, $1$ is an eigenvalue of the matrix
$\left(\begin{array}{cc}
2 & 0 \\
0 & 1
\end{array}\right)$ corresponding to the nonnegative eigenvector
$\left(\begin{array}{c}
0 \\
1
\end{array}\right)$, while $2>1$ is also an eigenvalue of this matrix.
\end{remark}
\begin{remark}
There exist operators that are positive but not strongly positive and still satisfy the hypothesis of Lemma \ref{lem:eig:no_other>0}, for instance,
$\left(\begin{array}{cc}
2 & 0 \\
1 & 1
\end{array}\right) $.
\end{remark}
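Indeed, this matrix maps the positive orthant $K=\mathbb{R}^2_{\geq 0}$ into itself but fixes the boundary vector $(0,1)^T$, so it is positive yet not strongly positive, while its spectral radius $2$ has the interior eigenvector $(1,1)^T$. A short verification:

```python
L = [[2, 0],
     [1, 1]]

def apply(M, u):
    return tuple(sum(M[i][j] * u[j] for j in range(2)) for i in range(2))

# positive: nonnegative entries map the orthant K into itself,
# but the boundary vector (0,1) stays on the boundary -> not strongly positive
assert apply(L, (0, 1)) == (0, 1)

# eigenvalues of a triangular matrix are its diagonal entries: 2 and 1
assert apply(L, (1, 1)) == (2, 2)   # eigenvector (1,1) in the interior of K
assert apply(L, (0, 1)) == (0, 1)   # eigenvector (0,1) for the eigenvalue 1
```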
\begin{lemma} \label{lem:simple}
Let $X$ be a Banach space, $K \subset X$ a cone with $\mathring{K}\neq \emptyset$, and $L\in \mathcal{L}(X)$ a strongly positive operator.
If $r>0$ is an eigenvalue of $L$ with an eigenvector $x \in \dot{K}$, then $x \in \mathring{K}$ and $r$ is an algebraically simple eigenvalue.
\end{lemma}
\begin{proof}
Since $L$ is strongly positive, $x=r^{-1}Lx \in \mathring{K}$.
We first show that $r$ is geometrically simple. Assume there is $v\neq 0$ with $L v = r v$; without loss of generality $v\notin K$ (otherwise replace $v$ by $-v$).
We claim that $x + t v \notin \partial K$ unless $x+ tv =0$.
If the claim fails, then there is $t_0\neq 0$ such that $x + t_0 v \in \partial K$ and $x + t_0 v \neq 0$. Since $L$ is strongly positive and $L(x+ t v)=r(x+ t v)$ for all $t \in \mathbb{R}$, we obtain $x+ t_0 v=r^{-1}L(x+ t_0 v) \in \mathring{K}$. This contradiction proves the claim.
By Lemma \ref{lem:str_pos:thre}, there exists $t_1> 0$ such that $x+t_1 v \in \partial K$, since $v \notin K$. The claim then gives $x+t_1 v=0$, so $v$ is a scalar multiple of $x$. This proves that $r$ is a geometrically simple eigenvalue of $L$.
Next, we show that $r$ is algebraically simple.
Let $(rI -L)^2 v=0$; since $r$ is geometrically simple, it follows that $rv-Lv=t_0 x$ for some $t_0 \in \mathbb{R}$. We must show that $t_0 =0$. Assume $t_0> 0$ (otherwise replace $v$ by $-v$). We first show that $v \in K$. Otherwise, there is an $s_0>0$ such that $x+s_0v\in \partial K$ by Lemma \ref{lem:str_pos:thre}. But from $L(x+s_0v)=r(x+s_0v)-s_0t_0x \ll r(x+s_0v)$, we conclude $x+s_0v \in \mathring{K}$. This contradiction shows that $v \in K$. Therefore $v=r^{-1}(Lv+t_0 x) \in \mathring{K}$.
Lemma \ref{lem:str_pos:thre} (applied to $v \in \mathring{K}$, with $-x \notin K$) implies that there exists $t_1>0$ such that $v-t_1x \in \partial K$. But from
$rv-t_0x-t_1rx=L(v-t_1x)\geq 0$, we deduce $r(v-t_1x)\geq t_0x\gg0$, so $v-t_1x \in \mathring{K}$, contradicting $v-t_1x \in \partial K$. Hence $t_0=0$ and $rv-Lv=0$. This proves that $r$ is an algebraically simple eigenvalue of $L$.
\end{proof}
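As a concrete numerical illustration of Lemma \ref{lem:simple}, the sketch below uses the strongly positive matrix $\left(\begin{smallmatrix}2&1\\1&2\end{smallmatrix}\right)$ (an illustrative choice, not taken from the text): its spectral radius $r=3$ has the interior eigenvector $(1,1)^T$, and the null spaces of $3I-L$ and $(3I-L)^2$ coincide and are one-dimensional, so $r$ is algebraically simple.

```python
import numpy as np

# L is strongly positive on K = R_+^2 (all entries strictly positive).
L = np.array([[2.0, 1.0], [1.0, 2.0]])
M = 3.0 * np.eye(2) - L

print(np.allclose(L @ [1.0, 1.0], [3.0, 3.0]))  # (1,1) is an eigenvector for r = 3
print(np.linalg.matrix_rank(M))                  # 1 -> geometric multiplicity 1
print(np.linalg.matrix_rank(M @ M))              # still 1 -> algebraic multiplicity 1
```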
\begin{proof}[Proof of Theorem \ref{thm:GKR:strong}]
Theorem \ref{thm:GKR:weak} and Lemma \ref{lem:simple} imply that $r(L)$ is an algebraically simple eigenvalue of $L$ with an eigenvector $ x \in \mathring{K}$. Let $\lambda \neq r(L)$ be an eigenvalue of $L$ with eigenvector $w$; we want to prove that $\vert \lambda \vert <r(L)$.
If $\lambda>0$, this is a straightforward consequence of Lemma \ref{lem:eig:no_other>0}.
If $\lambda<0$, then from $L^2 x=r(L)^2x$, $L^2 w= \lambda^2 w$ and the above argument (applied to $L^2$), we deduce $\vert \lambda\vert ^2 <r(L)^2$ and hence $\vert\lambda \vert < r(L) $.
Now we consider $\lambda= \sigma +i \tau$ with $\tau \neq 0$ and suppose $\vert \lambda \vert =r(L)$. Then necessarily $w=u +i v$ with $u,v \in X$, and
\begin{equation}\label{equ:uv}
Lu = \sigma u - \tau v,\qquad
Lv=\tau u+\sigma v.
\end{equation}
We observe that $u$ and $v$ are linearly independent, for otherwise $\tau=0$. Let $X_1= \mathrm{span}\{u,v\}$. Then \eqref{equ:uv} implies that $X_1$ is an invariant subspace of $L$.
\\
Claim 1: $K_1:=K \cap X_1 =\{0\}.$
Suppose not; then $K_1$ is a positive cone in $X_1$ with nonempty interior, since for any $w \in \dot{K_1} $ we have $Lw \in X_1 \cap \mathring{K}=\mathring{K_1}$. Denote $L_1:=L\vert_{X_1}$. Since $\vert\lambda\vert =r(L)$ and $\lambda$ is an eigenvalue of $L_1$, it follows that $r(L_1)\geq r(L)$. In view of Lemma \ref{lem:ess:L1<L}, we conclude $r_e(L_1)\leq r_e(L)$. Hence $r(L_1)\geq r(L)>r_e(L)\geq r_e(L_1)$. Applying Theorem \ref{thm:GKR:weak} to $L_1$ on $X_1$, there exists $w_0 \in \dot{K_1}$ such that $L_1w_0=r(L_1)w_0$, which implies that $r(L_1)=r(L)$ by Lemma \ref{lem:eig:no_other>0}. Since $r(L)$ is an algebraically simple eigenvalue of $L$, we have $w_0 \in \mathrm{span} \{x\}$, and hence $x \in X_1$; in other words, $x=\alpha u+ \beta v$ for some real numbers $\alpha$ and $\beta$. But one can use \eqref{equ:uv} and $Lx=r(L)x$ to derive $\alpha=\beta=0$, a contradiction. Therefore $K_1=\{0\}$.
Claim 2: the set
$\Sigma:= \{(\xi,\eta)\in \mathbb{R}^2:x+\xi u +\eta v \in K\} $
is bounded and closed.
For any $(\xi,\eta)\in \mathbb{R}^2$ with $\xi^2+\eta^2\neq 0$, there are a unique $R>0$ and $\theta \in [0,2\pi)$ such that $(\xi,\eta)=R(\cos\theta,\sin\theta)$. We denote $w(\theta)=u\cos\theta +v\sin \theta$ for any $\theta\in \mathbb{R}$. Since $x \in \mathring{K}$, we have $\Sigma= \{R(\cos\theta,\sin \theta)\in \mathbb{R}^2:x+ R w(\theta) \in K,R\geq 0, \theta\in\mathbb{R}\}$. By Claim 1, $w(\theta)\notin K$ for any $\theta\in\mathbb{R}$.
By Lemma \ref{lem:str_pos:thre}, for any $\theta \in \mathbb{R}$ there is $R_0(\theta)>0$ such that $x+R_0(\theta)w(\theta)\in \partial K$, $x+Rw(\theta) \in \mathring{K}$ for any $R\in [0,R_0(\theta))$, and $x+Rw(\theta) \notin K$ for any $R>R_0(\theta)$.
Hence, to prove that $\Sigma$ is bounded,
it suffices to prove that $R_0(\theta)$ is bounded on $\mathbb{R}$.
Now we show that $R_0(\theta)$ is continuous on $\mathbb{R}$. Suppose $R_0(\theta)$ is not upper semi-continuous at some point $\overline{\theta} \in \mathbb{R}$; that is, there are a sequence $\theta_n \rightarrow \overline{\theta}$ as $n \rightarrow +\infty$ and an $\epsilon_0>0$ such that $R_0(\theta_n)\geq R_0(\overline{\theta})+\epsilon_0$ for $n$ large enough. Since $w(\theta)$ is continuous on $\mathbb{R}$, $K$ is closed, and $ x+R w(\theta_n) \in K$ for any $R\in [0,R_0(\theta_n)]$ and $n\geq 1$, it follows that $x+(R_0(\overline{\theta})+\epsilon_0) w(\overline{\theta}) \in K$, which contradicts $x+Rw(\overline{\theta}) \notin K$ for any $R>R_0(\overline{\theta})$. Therefore $R_0(\theta)$ is upper semi-continuous on $\mathbb{R}$. By similar arguments, $R_0(\theta)$ is lower semi-continuous on $\mathbb{R}$. Hence $R_0(\theta)$ is continuous.
Since $w(\theta)=w(\theta+2\pi)$ for any $\theta \in \mathbb{R}$, we derive $R_0(\theta)=R_0(\theta+2\pi)$ for any $\theta \in \mathbb{R}$. Therefore $R_0(\theta)$ is bounded on $\mathbb{R}$ and $\Sigma$ is bounded.
Since $K$ is closed, $\Sigma$ is closed. The claim is proved.
Claim 2 implies that $M:=\sup \{\xi^2+\eta^2:(\xi,\eta)\in \Sigma\}$ is finite, positive, and achieved at some point $(\xi_0,\eta_0)\in \Sigma$. Let $z_0=x+\xi_0 u+ \eta_0 v$; then $z_0 \in \dot{K}$ since $K_1=\{0\}$. Hence $Lz_0 \in \mathring{K}$, and there is an $\alpha >0$ small enough such that $L z_0 \geq \alpha x$, that is,
\begin{equation}\label{equ:xi1&eta1}
(r(L)-\alpha)x + \xi_1 u+ \eta_1 v \geq 0,
\end{equation}
where $\xi_1=\xi_0\sigma+ \eta_0 \tau$, $\eta_1=\eta_0 \sigma-\xi_0 \tau$. Clearly $\xi_1^2 +\eta_1^2=(\sigma^2+\tau^2)(\xi_0^2+\eta_0^2)=M\vert\lambda\vert^2$.
By \eqref{equ:xi1&eta1}, we find that $(r(L)-\alpha)^{-1}(\xi_1,\eta_1)\in \Sigma$. Hence $\xi_1^2+\eta_1^2\leq M(r(L)-\alpha)^2$, and we deduce that $\vert \lambda \vert ^2 \leq (r(L)-\alpha)^2$, contradicting $\vert \lambda \vert =r(L)$. Hence $\vert \lambda \vert <r(L)$.
\end{proof}
\begin{proposition}
Let $L$ be strongly positive, and let $r(L)$ be an eigenvalue of $L$ with eigenvector $x \in \mathring{K}$. For $y \in \dot{K}$, we consider the following equation:
\begin{equation}\label{equ:thre}
(\lambda I-L)z=y.
\end{equation}
(i) ~~If $\lambda>r(L)$, then equation \eqref{equ:thre} has a unique solution $z\in \mathring{K}$. \\
(ii) ~If $\lambda \leq r(L)$, then equation \eqref{equ:thre} has no solution in $K$.\\
(iii) If $\lambda = r(L)$, then equation \eqref{equ:thre} has no solution.
\end{proposition}
\begin{proof}
It follows from $(\lambda I- L)z=y$ and $y\neq 0$ that any solution satisfies $z\neq 0$.
$(i)$
If $\lambda>r(L)$, then the Neumann series gives $z=(\lambda I- L)^{-1}y=\sum_{k\geq 0}\lambda^{-k-1}L^{k}y$, which lies in $\mathring{K}$ since every term lies in $K$ and $\lambda^{-2}Ly \in \mathring{K}$.
$(ii)$ Suppose $z \in K$. If $\lambda\leq 0$, then $y=(\lambda I-L)z=\lambda z - Lz\leq 0$, which contradicts $y \in \dot{K}$.
Now assume $0 <\lambda \leq r(L)$. Then $z=\lambda^{-1}(y +Lz) \in \mathring{K}$. Lemma \ref{lem:str_pos:thre} implies that there exists $t_0>0$ such that $z-t_0 x \in \partial K$, since $-x\notin K$. From $L(z-t_0x)=\lambda z -y -t_0 r(L)x$, we deduce $z-t_0x=r(L)^{-1}(L(z-t_0 x)+y+(r(L)-\lambda) z) \in \mathring{K}$, which contradicts $z-t_0 x \in \partial K$.
$(iii)$
If $ (r(L)I- L)z=y$, then since $x\in \mathring{K}$ there exists $t_1>0$ small enough such that $x +t_1 z\in \mathring{K}$. Since $(r(L)I- L)x=0$, we get $ (r(L)I- L)(x+t_1 z)=t_1 y \in K$, which contradicts $(ii)$.
\end{proof}
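The trichotomy of the proposition can be illustrated numerically; the sketch below uses the strongly positive matrix $\left(\begin{smallmatrix}2&1\\1&2\end{smallmatrix}\right)$ on $\mathbb{R}^2$ with $K=\mathbb{R}_+^2$ (an illustrative example, not from the text), for which $r(L)=3$ with interior eigenvector $(1,1)^T$:

```python
import numpy as np

L = np.array([[2.0, 1.0], [1.0, 2.0]])
y = np.array([1.0, 0.0])                  # y in K \ {0}
r = max(np.linalg.eigvals(L).real)        # spectral radius r(L) = 3

# (i) lambda > r(L): the unique solution lies in the interior of K.
z1 = np.linalg.solve(4.0 * np.eye(2) - L, y)
print(z1)                                 # both components strictly positive

# (ii) lambda < r(L): the linear system is still solvable here,
# but the solution leaves the cone, consistent with case (ii).
z2 = np.linalg.solve(2.5 * np.eye(2) - L, y)
print(z2)                                 # some component is negative
```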
\section*{Acknowledgements}We would like to thank Prof. Xiao-Qiang Zhao and Prof. Xing Liang for their helpful discussions and advice during the preparation of this work. We would also like to thank Prof. Xue-Feng Wang for his short courses about the Krein-Rutman theorem.
\bibliographystyle{siamplain}
\section{INTRODUCTION}
This paper has two main goals. The first is to demonstrate that
for any ensemble, whether classical, quantum, discrete or continuous,
there is essentially only {\it one} measure of the ``volume''
occupied by the ensemble which is compatible with basic geometric
notions. This {\it ensemble volume} is thus a preferred and
universal choice for characterising what is variously referred to as
the spread, dispersion, uncertainty, or localisation of an ensemble.
Remarkably, the derived ``ensemble volume'' turns
out to be proportional to the
exponential of the entropy of the ensemble. A by-product of the first
goal is thus a new universal characterisation of ensemble entropy, based
on geometric notions. Indeed, a number of
properties of ensemble entropy turn out to have simple geometric
interpretations. The universal nature of the characterisation is of
particular interest: the only previous {\it context-independent}
interpretation
of ensemble entropy to date (and hence
applicable in particular to ensembles
described by continuous probability distributions)
appears to be as a somewhat vague measure of
uncertainty or randomness.
The second goal is to apply ``ensemble volume'' to a wide range of
contexts in which ensembles appear. The applications demonstrate
not only the advantages of ensemble volume over other measures
of spread, but also to some extent why it is that
ensemble entropy makes a natural appearance in
contexts as diverse as statistical mechanics, information theory, chaos,
and quantum uncertainty relations. Some results have been briefly
reported elsewhere \cite{eprint}. Here important details and
extensions are given, as well as a number of new results.
The work reported here was originally motivated by several connections
between volume and information. Shannon proved an
upper bound on information transfer, via classical signals subject to
quadratic energy and noise constraints, by considering ratios of
spherical volumes in high-dimensional spaces \cite{shannon}.
One can similarly obtain
approximate upper bounds on information
for {\it quantum} signals, via semi-classical arguments
involving ratios of phase space volumes \cite{cd,hallpra},
which in some cases turn out to be exact.
This raises the question
of whether there is some general measure of volume which
can be used to derive rigorous information bounds for the general
case. This question is answered affirmatively here, and a new unified
derivation of the classical Shannon
and the quantum Holevo information bounds is given,
based on simple volume properties.
There are also a number of connections which have been made
previously between volume and entropy. For example, derivations in
statistical mechanics
typically obtain heuristic expressions for thermodynamic entropy by
counting ``microstates'' in a phase-space
volume of ``small'' thickness containing a
constant-energy surface \cite{statmech}.
In an interesting approach, Ma attempted to
{\it define} the thermodynamic entropy of a system in classical
statistical mechanics as proportional to
the logarithm of a phase-space volume
corresponding to the ``region of motion'' of the system \cite{ma},
although
he could not rigorously define the latter region. A {\it precise
geometric} interpretation of thermodynamic entropy for both classical
and quantum equilibrium ensembles will be given here.
Further, Leipnik introduced the exponential of the position entropy of
a quantum system as a measure of its ``volume'', and favourably
compared the
associated uncertainty relations for position and momentum with the
usual Heisenberg uncertainty relations \cite{leipnik} (see also the
review in
\cite{manko} and Sec. II.C below). Generalisations to other measures
of ``volume'' were given by Zakai \cite{manko,zakai}. It is demonstrated
here that the former measure has a unique geometrical
significance, and a geometrical derivation of quantum
uncertainty relations is given based on the property that quantum
states have a minimum ensemble volume.
Zyczkowski \cite{zed} and more recently Mirbach and Korsch \cite{mk1,mk2}
have used entropy as a
measure of ``localisation'' for
chaotic quantum and classical systems
for various initial states.
The results of the present paper show that this measure can be simply
related to the spread of ensemble volume for arbitrary evolution
processes, and provide support for the use of
this measure over all other localisation
measures.
Rather than going immediately to general postulates for volume,
and formal proofs of uniqueness,
the following section first explores
ensemble volume
for a familiar class of ensembles: those described by
one-dimensional probability distributions. In this case
the ensemble volume reduces to a ``length'',
which is calculated for a number of concrete
examples and compared with other measures of uncertainty such as
root-mean-square deviation. Geometric
properties of this ``length'' and an associated quantum
uncertainty relation are discussed.
Two-dimensional joint probability distributions are also briefly
discussed, where the ensemble volume becomes an
``area'' that is geometrically related
to the ``lengths'' of the marginal distributions. This ``area''
motivates a new definition for the spot size of an optical beam.
In Section III and an accompanying appendix, the derivation
of the ensemble volume from universal geometric postulates is given.
These postulates depend on theory-independent notions of invariance,
projection onto orthogonal axes, and additivity, and in particular are
independent of whether the ensemble is classical or quantum.
The bonus of a new geometrical characteristion of ensemble entropy is
discussed, and a geometrical interpretation of relative entropy is
given.
Applications to statistical mechanics, semi-classical quantum mechanics,
information theory, chaos and other types of dynamical evolution
are given in Section IV. Conclusions are presented in Section V.
\section{1- AND 2-DIMENSIONAL EXAMPLES}
Before deriving the unique form of ensemble volume in
Sec.III, it is useful to first consider some of its properties and
connections to other measures of uncertainty in two
familiar settings: continuous distributions on the line and on
the plane,
for which ``volume'' reduces to the special cases of
``length'' and to ``area'' respectively.
These special cases are already
sufficient to exemplify a number of general features of
ensemble volume, and its advantages as a measure of spread.
\subsection{Length}
Consider a 1-dimensional probability distribution $p(x)$, corresponding
to some random variable $X$ (e.g., position, momentum, or phase).
There are then a number of candidates for
a direct measure of
the ``uncertainty'' or ``spread'' of $X$, the
most well known being the root-mean-square (RMS) deviation
\begin{equation} \label{rms}
\Delta X = [\int dx \, x^2 p(x) - (\int dx \, x p(x))^2 ]^{1/2} .
\end{equation}
This quantity is a ``direct'' measure in the sense of
having the same units as $X$, and has the virtues of being
invariant under translations and reflections, scaling linearly with
$X$ ($\Delta Y =$ $\lambda \Delta X$ for $Y=\lambda X$),
and vanishing in the
limit that $X$ has some definite value $x'$.
A second candidate is the inverse participation ratio \cite{zed,
mk2,hell}
\begin{equation} \label{part}
\xi_{X} = [\int dx \, p(x)^2 ]^{-1} ,
\end{equation}
(which may also be recognised as a monotonic function of the so-called
``linear entropy'' $-\int dx\, p(x)^2$ \cite{zurek}).
This quantity shares all of the above-noted virtues
of $\Delta X$. However, it is in fact only
a special case of what may be called the ``Renyi length''
\begin{equation} \label{renyi}
L_{X,\alpha} = [\int dx \, p(x)^{1+\alpha} ]^{-1/\alpha} \hspace{2cm}
(\alpha\geq -1)
\end{equation}
(named for its logarithm - a generalised entropy defined by Renyi
\cite{renyi}). Renyi lengths are directly related to measures of
uncertainty considered by Zakai for quantum systems \cite{manko,zakai},
and use of their reciprocals
as (indirect) measures of uncertainty
has been extensively investigated in \cite{thesis} (see also
\cite{maas}). The inverse participation ratio corresponds to $\alpha=1$
in Eq.~(\ref{renyi}).
The Renyi length $L_{X,\alpha}$ in Eq.~(\ref{renyi})
satisfies all of the above-noted properties
of $\Delta X$ (same units as $X$, translation/reflection invariance,
scaling linearly with $X$, and vanishing as $p(x)$ approaches a delta
function).
Eq.~(\ref{renyi}) thus introduces an uncountable infinity of possible
candidates for a direct measure of uncertainty! Fortunately, as will be
seen in Sec.III, just one of these Renyi lengths may be singled out
uniquely over all other possible measures on geometric grounds.
In particular, in this paper special attention will be paid to the case
$\alpha \rightarrow 0$ in Eq.~(\ref{renyi}).
The corresponding length
will simply be denoted by $L_{X}$, and is just the exponential
of the usual ensemble entropy \cite{foot1}:
\begin{equation} \label{length}
L_{X} = L_{X,0} = \exp [-\int dx \, p(x) \ln p(x)] .
\end{equation}
It is a special case of the ``ensemble volume'' to be derived
in Sec.III, and will therefore be referred to as the
{\it ensemble length}.
\subsection{Comparisons}
In Table I the RMS deviation and ensemble length are calculated for
several types of 1-dimensional distributions. As noted following
Eqs.~(\ref{rms}) and (\ref{renyi}) both quantities are invariant under
translations and scale linearly with $X$. Hence they can be trivially
calculated for distributions of the form $p(x/a - x')/a$ once they
have been found for $p(x)$ (by simply multiplying the result for the
latter case by $a$). The Table will be used to highlight
a number of differences between $\Delta{X}$ and $L_{X}$.
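As a numerical sketch of two of the Table I entries (computed here by discretising the densities on a grid), the ensemble length of a uniform distribution reproduces the length of its supporting interval, and that of a Gaussian of standard deviation $\sigma$ gives $(2\pi e)^{1/2}\sigma$:

```python
import numpy as np

def ensemble_length(p, x):
    """exp of the differential entropy of density p sampled on grid x."""
    dx = x[1] - x[0]
    mask = p > 0
    return np.exp(-np.sum(p[mask] * np.log(p[mask])) * dx)

x = np.linspace(-20, 20, 400001)
# uniform on [-1.5, 1.5]: ensemble length should be the interval length 3
uniform = np.where(np.abs(x) <= 1.5, 1.0 / 3.0, 0.0)
# unit-variance Gaussian: ensemble length should be sqrt(2*pi*e) ~ 4.1327
sigma = 1.0
gauss = np.exp(-x**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

print(ensemble_length(uniform, x))                                   # ~3.0
print(ensemble_length(gauss, x), np.sqrt(2 * np.pi * np.e) * sigma)  # ~4.13 each
```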
First, it is seen from Table I that the ensemble length exists in
cases when the RMS deviation does not (for Cauchy-Lorentz and
sinc-squared distributions in particular). It may further be shown that
$L_{X}$ is finite whenever $\Delta{X}$ is:
the well known variational property that ensemble entropy is maximised
for a fixed value of $\Delta{X}$ by a Gaussian distribution
\cite{ash} immediately implies from the scaling property and Table I
that
\begin{equation} \label{rmsineq}
L_{X} \leq (2\pi e)^{1/2} \Delta{X} .
\end{equation}
Thus the use of ensemble length as a measure of
uncertainty allows a wider quantitative range of applicability than
does RMS deviation.
This permits, for example, the quantitative discussion
of quantum uncertainty relations, expressed in terms of ensemble
length, for cases in which the usual Heisenberg
uncertainty relations have nothing to say (see following subsection).
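A quick numerical check of inequality (\ref{rmsineq}) for the exponential density $p(x)=e^{-x}$ on $x\geq 0$, for which $L_{X}=e$ and $\Delta X=1$ (a discretised sketch):

```python
import numpy as np

x = np.linspace(0, 50, 500001)
dx = x[1] - x[0]
p = np.exp(-x)
p /= np.sum(p) * dx                    # renormalise on the grid

mean = np.sum(x * p) * dx
dX = np.sqrt(np.sum((x - mean)**2 * p) * dx)     # RMS deviation, ~1
LX = np.exp(-np.sum(p * np.log(p)) * dx)         # ensemble length, ~e

print(LX, dX)
print(LX <= np.sqrt(2 * np.pi * np.e) * dX)      # inequality (5) holds
```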
Second, the calculations for the uniform and circular
distributions, $p_{U}$ and $p_{C}$ in Table I respectively,
exemplify a maximality property of ensemble length: it
is maximised on a given interval by a uniform
distribution on the interval, with a
maximum value equal to the length of
the interval. Thus one may write
\begin{equation}
L_{X} \leq L
\end{equation}
for a distribution confined to an interval of length $L$ \cite{foot2}.
This property
reflects the intuitive notion that $p(x)$ is most spread out or
least localised when it is {\it flat}, having no peaks where
probability is concentrated. The RMS deviation does not conform to this
notion, achieving its maximum possible value in the limit of two
maximally-separated peaks (a distribution equally concentrated on the
endpoints of the interval).
Third, the calculation in Table I for the uniform
and double-uniform
distributions $p_{U}$ and $p_{DU}$ illustrates an additivity property
of ensemble length: the ensemble length of
$p_{DU}$ is twice that of the two non-overlapping
uniform distributions $p_{U}(x-a)$ and
$p_{U}(-x-a)$ which it comprises in equal mixture. More generally,
if $p(x)$ and $q(x)$ denote two non-overlapping distributions of
equal ensemble length $L$, then any mixture $\lambda p(x)$ $+$
$(1-\lambda)q(x)$ of these distributions satisfies
\begin{equation}
L_{X} \leq 2L ,
\end{equation}
with the upper bound achieved for $\lambda =1/2$ \cite{foot3}.
This property
reflects the intuitive notions that such a mixture is least localised
(most spread out) when it is not more concentrated in one of the
non-overlapping regions than in the other, and that
for this equally-weighted case the non-overlapping lengths simply add.
In contrast, the RMS deviation of
$p_{DU}$ depends strongly on the separation of the peaks, and
indeed becomes infinite as this separation increases. This
example and the one above emphasise what
can be directly seen from Eq.~(\ref{rms}):
the RMS deviation is a measure of {\it separation} of the
region(s) of concentration from a particular point of the distribution
(the mean value), rather
than a measure of the extent to which the distribution is in
fact concentrated.
Fourth, except in cases where
the second moment of $p(x)$ has some particular
physical meaning, it is difficult to assess the significance
of a given value of $\Delta X$ without some further information
about the distribution. For example, even for single-peaked
distributions, the probability that $X$ lies within
$\pm \Delta X$ of the mean is highly dependent upon the nature
of $p(x)$ \cite{foot4}.
In contrast, as will be
seen in Sec. III, the ensemble length $L_{X}$ has a unique
geometrical significance.
Finally, it is of interest to make a
quantitative comparison between the
degrees to which a given distribution $p(x)$ is concentrated in a
region of length $L_{X}$ on the one hand, and of length $2\Delta X$
on the other. To do so, it is natural to define
the {\it maximum confidence} corresponding to a given length $L$ as
\begin{equation}
C(L) = \sup_{\{A:\mid A\mid=L\}} \{ \int_A dx\, p(x) \} ,
\end{equation}
where the supremum is over all measurable sets $A$ of total length $L$.
In the case of a distribution symmetric about a single peak this is
achieved by choosing $A$ to be the interval of length $L$ centred
on the mean value of the distribution.
From Table I one can calculate the values of $C(L_{X})$ to be
approximately 100\%, 99\%, 96\%, 93\%, 91\% and 90\% for the
uniform, circular, Gaussian, exponential, sinc-squared and
Cauchy-Lorentz distributions respectively. The corresponding values of
$C(2\Delta X)$ are 58\%, 61\%, 68\%, 86\% for the first four of the
above distributions, with the value being undefined for the last two.
It is seen that for these examples $C(L_{X})$ varies over a much
narrower range than $C(2\Delta X)$, and that $L_{X}$ typically
corresponds to a larger confidence value than $2\Delta X$.
\subsection{Uncertainty relations}
The relationship between ensemble length and ensemble entropy in
Eq.~(\ref{length}) allows the usual entropic uncertainty relation
for the position and momentum of a quantum particle \cite{bbm}
to be equivalently written in the geometric form
\begin{equation} \label{uncert}
L_{X} L_{P} \geq \pi e\hbar ,
\end{equation}
relating the product of the ensemble lengths to a minimum area in
phase space.
Bounding $L_{X}$ and $L_{P}$ from above via
Eq.~(\ref{rmsineq}) then immediately yields the well known
Heisenberg uncertainty relation
\begin{equation} \label{heis}
\Delta X \Delta P \geq \hbar /2 .
\end{equation}
The above two inequalities are similar in form, and have the
same broad physical significance: the particle cannot
be prepared in a state for which both the position and
momentum distributions have arbitrarily small spreads. However, it
is seen that the latter inequality is mathematically weaker,
as it follows from the former. For example, it follows
from Eq.~(\ref{uncert}) that $L_{P}$ (and hence, via
Eq.~(\ref{rmsineq}), $\Delta P$) becomes
infinite as $p(x)$ approaches a weighted sum of delta functions. This
cannot be concluded from Eq.~(\ref{heis}).
Inequality (\ref{uncert}) may be used to make quantitative
evaluations regarding the relative spreads of position and momentum
in cases where the Heisenberg inequality (\ref{heis}) yields {\it no}
information. For example, consider a quantum particle confined to
an interval of length $L$, such that the position amplitude is constant
over the interval. It follows that the momentum statistics are
described by the sinc-squared distribution
\begin{equation} \label{sink}
\pi^{-1} (2\hbar /L) (\sin [pL/(2\hbar)]/p)^2 .
\end{equation}
As noted in Table I the RMS deviation
$\Delta P$ is not defined in this case, and hence the Heisenberg
inequality cannot be used
to assess the degree to which position and momentum are jointly
localised. In contrast, using Eq.~(\ref{sink}), Table I and the
scaling property of ensemble length, one finds
\begin{equation} \label{sinkuncert}
L_{X} L_{P} = 2\pi\exp[2(1-C)]\hbar \approx 15\hbar ,
\end{equation}
where $C\approx 0.577 215 66$ denotes Euler's constant.
Hence the particle has an associated phase space area close to the
lower bound of $\pi e\hbar\approx 9\hbar$ in Eq.~(\ref{uncert}); i.e.,
the particle is in fact in an approximate minimum
uncertainty state of position and momentum.
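The quoted figure of $\approx 15\hbar$ can be checked directly from Eq.~(\ref{sinkuncert}) (a sketch, working in units of $\hbar$):

```python
from math import pi, e, exp

# Phase-space area 2*pi*exp(2*(1 - C)) from Eq. (12), with Euler's
# constant C; it should be ~14.6, close to (less than twice) the
# lower bound pi*e ~ 8.54 of Eq. (7).
C = 0.57721566490153286
area = 2 * pi * exp(2 * (1 - C))

print(area)              # ~14.64
print(area > pi * e)     # True: the bound of Eq. (7) is respected
```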
A similar example
is the case of a particle confined to the positive $x$-axis, with
a position amplitude that decays exponentially with $x$.
The position and momentum distributions are then given by exponential
and Cauchy-Lorentz distributions of the forms $p_{E}(x/a)/a$ and
$2ap_{CL}(2ap/\hbar)/\hbar$ respectively, implying via Table I and
the scaling property that
\begin{equation}
L_{X} L_{P} = 2\pi e \hbar .
\end{equation}
Hence the state is relatively
well-localised in position and momentum, with an associated
phase-space area only twice that of the minimum in Eq.~(\ref{uncert}).
Again, the Heisenberg
uncertainty relation Eq.~(\ref{heis}) gives no information about the
joint localisation in this case.
Finally, it may be mentioned that
there is an uncertainty relation relating the Renyi lengths of position
and momentum for general $\alpha$: it follows from Eq.~(131) of
\cite{manko} that
\begin{equation} \label{renineq}
L_{X,\alpha} L_{P,\beta} \geq \pi\hbar [1+2\alpha]^{1+1/(2\alpha)}
/(1+\alpha)
\end{equation}
for $\alpha\geq -1/2$, where $\beta =-\alpha /(1+2\alpha )$.
For $\alpha=\beta=0$ the lower bound is maximum, and
the inequality reduces to Eq.~(\ref{uncert}) above.
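One can check numerically (a sketch, in units of $\hbar$) that the lower bound in Eq.~(\ref{renineq}) approaches $\pi e$ as $\alpha\rightarrow 0$ and is largest there:

```python
from math import pi, e

def bound(a):
    """Lower bound pi*(1+2a)^(1+1/(2a))/(1+a) of Eq. (15), in units of hbar."""
    return pi * (1 + 2 * a) ** (1 + 1 / (2 * a)) / (1 + a)

print(bound(1e-6))                                            # ~ pi*e ~ 8.5397
print(all(bound(a) < pi * e for a in [0.1, 0.5, 1.0, 5.0]))   # True: maximum at a = 0
```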
\subsection{Area and spot size}
This section will be concluded by briefly looking at measures of spread
for {\it two}-dimensional distributions, to highlight a further
geometric property of ensemble length of importance
in later sections. This property also holds for RMS deviation,
but not for Renyi lengths in general. A related measure of
spot size for optical beams is defined and briefly discussed.
Each of the ``length'' measures in Eqs.~(\ref{rms}), (\ref{renyi}) and
(\ref{length}) has a natural generalisation to a measure of ``area'',
corresponding to the spread or uncertainty
of a 2-dimensional probability distribution
$p(x,y)$ of two random variables $X$ and $Y$:
\begin{eqnarray} \label{rmsarea}
\Delta A & = & [\det (\langle {\bf xx^{T}}\rangle -
\langle{\bf x}\rangle \langle{\bf x^{T}}\rangle)]^{1/2} ,
\\ \label{renarea}
A_{XY,\alpha} & = & \langle p^\alpha \rangle^{-1/\alpha} ,
\\ \label{area}
A_{XY} & = & \exp[\langle -\ln p \rangle ]
\end{eqnarray}
respectively, where ${\bf x}$ denotes the column vector $(x,y)$,
${\bf x^T}$ its transpose, and $\langle\cdot\rangle$ the average
with respect to $p$. These areas satisfy properties analogous to
to their 1-dimensional counterparts, and will be referred to as the
RMS area, Renyi area, and ensemble area respectively.
The RMS area in Eq.~(\ref{rmsarea}) may be recognised as the product of
the RMS deviations along the principal axes of the distribution in the
$xy$-plane, and in general
satisfies the inequality (Eq.~(2.13.7) of \cite{hardy})
\begin{equation} \label{rmsin}
\Delta A \leq \Delta X \Delta Y ,
\end{equation}
with equality for the case that $p(x,y)$ factorises into two
uncorrelated distributions for $X$ and $Y$.
This inequality for ``area'' and ``length'' has a simple geometric
interpretation, to be generalised in the following Section.
In particular,
the marginal distributions $p_{1}(x)$ and $p_{2}(y)$ for $X$ and $Y$
are obtained by ``projecting'' the joint distribution $p(x,y)$
onto the two orthogonal $x$ and $y$ axes. The associated RMS lengths
$\Delta X$ and $\Delta Y$ may be similarly thought of as obtained by
``projecting'' the RMS area $\Delta A$ onto these axes. However, this
is only consistent with Euclidean geometry if inequality
(\ref{rmsin}) holds: the product of the two lengths obtained by
projection of an area onto two orthogonal axes can never be less than
the original area.
Ensemble area and ensemble length are also consistent with this
``projection'' interpretation: the well known
subadditivity of entropy \cite{ash} can be equivalently
written via Eqs.~(\ref{length}) and (\ref{area}) as
\begin{equation} \label{areaproj}
A_{XY} \leq L_{X} L_{Y} ,
\end{equation}
in analogy to Eq.~(\ref{rmsin}). The subadditivity of entropy is thus
seen to correspond to a projection property of Euclidean geometry.
One has the further related property that
if $p(x,y)$ is uniform on a rectangular region oriented parallel to
the $x$ and $y$ axes, and vanishes outside this region, then equality
holds in Eq.~(\ref{areaproj}) with $L_{X}$ and $L_{Y}$ corresponding
to the lengths of the sides of the rectangle.
Thus Eq.~(\ref{areaproj}) reduces in this case to
the Euclidean property {\it area $=$ length $\times$ breadth}.
In general, the Renyi areas in Eq.~(\ref{renarea}) are not consistent
with the projection property, as will be seen in Sec.~III.
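A numerical sketch of the projection inequality (\ref{areaproj}) for a correlated bivariate Gaussian (an illustrative example: its ensemble area is $2\pi e\,(\det\Sigma)^{1/2}$, while the marginal ensemble lengths are $(2\pi e)^{1/2}\sigma_x$ and $(2\pi e)^{1/2}\sigma_y$):

```python
import numpy as np

sx, sy, rho = 1.0, 2.0, 0.8                   # illustrative parameters
Sigma = np.array([[sx**2, rho * sx * sy],
                  [rho * sx * sy, sy**2]])

A_XY = 2 * np.pi * np.e * np.sqrt(np.linalg.det(Sigma))   # ensemble area
LX_LY = 2 * np.pi * np.e * sx * sy                        # product of marginal lengths

print(A_XY, LX_LY)     # correlations shrink the area below L_X * L_Y
print(A_XY <= LX_LY)   # True, with equality only for rho = 0
```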
Finally, it may be noted that Eq.~(\ref{area}) may be applied to
physical distributions other than probability distributions,
with corresponding geometrical advantages.
For example, let $P(x,y)$ denote the time-averaged power distribution
in some plane orthogonal to the direction of propagation of an optical
beam. One may then define the ``geometric'' spot size of the beam as
the ensemble area of the normalised power distribution
$P(x,y)/P_{T}$, where $P_{T}$ is the integrated power over the plane:
\begin{equation}
A_{geom} = P_{T} \exp [-(P_{T})^{-1} \int dxdy \,
P(x,y) \ln P(x,y)] .
\end{equation}
This satisfies desirable properties such as being additive for
non-overlapping identical beams, being invariant with respect to
scaling the power up or down, scaling linearly with
beam magnification, having a maximum value of $A$ for a beam
confined to an area $A$ (attained for a uniform power distribution
over that area), and satisfying a ``projection property''
analogous to Eq.~(\ref{areaproj}). It is also invariant under any
transformation of coordinates which preserves area in the usual sense
(i.e., with
unit Jacobian), and so to this extent is independent of the
coordinatisation of the plane.
Alternative definitions based on, for example,
Eqs.~(\ref{rmsarea}) or (\ref{renarea}) are geometrically less
satisfying.
\section{ENSEMBLE VOLUME}
The previous section indicates the wide range of possible measures for
the spread of one- and two-dimensional probability distributions, and
draws attention to a number of geometric and other advantages
enjoyed by the ``length'' and ``area'' defined in
Eqs.~(\ref{length}) and (\ref{area}) respectively.
As noted in the Introduction, it has
often proved useful to employ various notions of ``volume''
for statistical ensembles across a wide variety of contexts,
such as information theory, statistical mechanics, uncertainty
relations, and chaotic evolution. Other contexts
include Ornstein-Uhlenbeck diffusion and
semi-classical quantum mechanics (see \cite{eprint} and Secs. IV.B and
IV.D below).
This raises the question of whether there is in
fact some {\it universal}
measure of ``volume'' for classical and quantum
ensembles, which may be usefully employed in all of the above contexts
and which is not restricted in application or interpretation to
various special cases.
Here it will be shown that indeed such a measure exists,
which may be uniquely derived from a small number of theory-independent
postulates fundamental to the concept of ``volume''. It generalises
the ensemble length and ensemble area of the previous section, and will
be referred to as the {\it ensemble volume}. It also leads to new
geometric characterisations of entropy and relative entropy.
\subsection{Notation}
Three generic types of ensemble will be considered here.
The first is a classical
ensemble described by a continuous probability distribution
$p({\bf x})$ on some $n$-dimensional space $X$; the
second is a classical ensemble described by a discrete probability
distribution $\{ p_{i}\}$ where $i$ ranges over some discrete set
$I$; and the third is a quantum ensemble described by a
density operator $W$ on some Hilbert space $H$.
Each of the above types of ensemble shares some universal features.
It is essential to abstract a number of these features via a common
notation if ``volume'' is to be discussed in a theory-independent
manner.
For example, consider the three identities
\begin{equation} \nonumber
\int_{X} d^n{\bf x}\, p({\bf x})=1,\,\sum_{i\in I} p_i =1,\,
{\rm tr}_{H}[W]=1 .
\end{equation}
Defining $\Gamma$ to correspond respectively to the spaces/sets
$X$, $I$ and $H$;
${\rm Tr}_{\Gamma}[\cdot]$ to correspond
respectively to integration over $X$,
summation over $I$, and the trace over $H$; and $\rho$ to correspond
respectively to the ensembles $p({\bf x})$, $\{ p_{i}\}$, and
$W$; these identities
can be subsumed into the generic identity
\begin{equation} \label{norm}
{\rm Tr}_{\Gamma}[\rho] = 1 .
\end{equation}
Another universal feature is the notion of {\it composite} or
{\it joint} ensembles: for a given pair of spaces/sets $\Gamma_1$,
$\Gamma_2$ of a given type one can define a composite set/space
$\Gamma_{12}$, where for classical and quantum ensembles
$\Gamma_{12}$ corresponds to the set product and the tensor
product respectively of $\Gamma_1$ and $\Gamma_2$.
Further, if
$\rho$ describes a composite ensemble on $\Gamma_{12}$ one may
define two {\it projected} ensembles $\rho_1$, $\rho_2$ on $\Gamma_1$
and $\Gamma_2$ respectively, via
\begin{equation} \label{marg}
\rho_1 = {\rm Tr}_{\Gamma_2}[\rho],\hspace{1cm}
\rho_2= {\rm Tr}_{\Gamma_1}[\rho] .
\end{equation}
These projected ensembles correspond to {\it marginal}
distributions and {\it reduced} density operators for the cases of
classical and quantum ensembles respectively.
Finally, one may define any two ensembles $\rho$, $\rho'$ of
the same type to be {\it non-overlapping} if and only if
\begin{equation}
{\rm Tr}_{\Gamma}[\rho\rho'] = 0 .
\end{equation}
Note that in general two ensembles are non-overlapping if and only if
they can be distinguished by measurement without error.
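The correspondence between the generic identities and their concrete realisations can be illustrated with a short numpy sketch (the array layout and variable names are my own, not part of the formalism):

```python
import numpy as np

# Discrete classical case: a joint distribution on I_1 x I_2
p = np.array([[0.1, 0.2],
              [0.3, 0.4]])
assert np.isclose(p.sum(), 1.0)   # the generic identity Tr_Gamma[rho] = 1
p1 = p.sum(axis=1)                # projected ensemble on Gamma_1 (marginal)
p2 = p.sum(axis=0)                # projected ensemble on Gamma_2

# Quantum case: a pure state on a 2x2-dimensional tensor-product space
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
W = np.outer(psi, psi)            # density operator
assert np.isclose(np.trace(W), 1.0)
W1 = np.einsum('ijkj->ik', W.reshape(2, 2, 2, 2))  # reduced operator Tr_H2[W]
```

For this maximally correlated pure state the projected ensemble `W1` is the maximally mixed reduced density operator.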
\subsection{Postulates for volume}
For the three types of ensemble discussed in the previous subsection
it is useful to think of ``volume'' in the following ways. First, for
a continuous distribution $p({\bf x})$
on a space $X$, the volume corresponds
to a direct measure of the region of ``spread'' of $p({\bf x})$ in $X$.
Second, for a classical discrete distribution $\{ p_i\}$,
one may imagine
the indices as labelling a set of boxes or bins.
In this case ``volume'' corresponds to the spread of
the distribution over these bins, i.e., it is a continuous
measure of the effective number of bins occupied by the distribution.
Third, for a quantum ensemble, the volume may be considered as a
continuous generalisation of Hilbert space dimension, corresponding to
a measure of the spread of the ensemble in Hilbert space.
Consider now a measure of volume, $V(\rho)$, which satisfies the
following properties:
{\it (i) Invariance Property:} $V(\rho)$ is invariant under all
transformations on $\Gamma$ which preserve ${\rm Tr}_{\Gamma}[\cdot]$
(these are represented by measure-preserving transformations on $X$ for
continuous classical ensembles, permutations on $I$ for discrete
classical ensembles, and unitary transformations on $H$
for quantum ensembles).
{\it (ii) Cartesian Property:} If $\rho$ describes two {\it
uncorrelated}
ensembles $\rho_{1}$ and $\rho_{2}$ on $\Gamma_{1}$ and $\Gamma_{2}$
respectively, then
\begin{equation} \label{cart}
V(\rho) = V(\rho_{1}) V(\rho_{2})
\end{equation}
(note $\rho$ is the product $\rho_{1}\rho_{2}$ for classical ensembles,
and
the tensor product $\rho_{1}$$\otimes$$\rho_{2}$ for quantum ensembles).
{\it (iii) Projection Property:} If $\rho$ describes an ensemble of
composite systems on $\Gamma_{12}$ then
\begin{equation} \label{proj}
V(\rho) \leq V(\rho_{1}) V(\rho_{2}) ,
\end{equation}
where $\rho_{1}$, $\rho_{2}$ are the projections of $\rho$
defined in Eq.~(\ref{marg}).
{\it (iv) Additivity Property:} An equally-weighted mixture of $m$
{\it non-overlapping} ensembles $\rho_a$, $\rho_b$, $\dots$,
each of equal volume $V$, has a total volume of $mV$, i.e.,
\begin{equation} \label{add}
V(m^{-1}[\rho_a + \rho_b + \dots]) = m V .
\end{equation}
{\it (v) Uniformity Property:} If $\rho$ is any mixture of $m$
non-overlapping ensembles of equal volumes $V$, then
\begin{equation} \label{unif}
V(\rho) \leq m V .
\end{equation}
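The Cartesian and additivity properties can be verified numerically for discrete classical ensembles, anticipating the measure $V=\exp S$ derived in Sec.~III.C below (a sketch; the distributions are arbitrary):

```python
import numpy as np

def volume(p):
    """Ensemble volume V(rho) = exp(S) for a discrete distribution (K = 1)."""
    nz = np.asarray(p, dtype=float)
    nz = nz[nz > 0]
    return np.exp(-np.sum(nz * np.log(nz)))

p1 = np.array([0.5, 0.5])
p2 = np.array([0.25, 0.25, 0.5])

# (ii) Cartesian property: volumes of uncorrelated ensembles multiply
cart_ok = np.isclose(volume(np.outer(p1, p2).ravel()),
                     volume(p1) * volume(p2))

# (iv) Additivity: an equal mixture of two non-overlapping copies of p1
# (disjoint supports) has twice the volume of p1
mix = 0.5 * np.array([0.5, 0.5, 0.0, 0.0]) \
    + 0.5 * np.array([0.0, 0.0, 0.5, 0.5])
add_ok = np.isclose(volume(mix), 2 * volume(p1))
```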
The above properties are essentially the same as those defined in
\cite{eprint}, where the additivity and uniformity properties were
combined into a single property. Their geometrical significance
is as follows.
First, the invariance property (i)
ensures that the volume $V(\rho)$ is a
function of the ensemble alone, independently of a particular
co-ordinatisation, labelling, or measurement basis for $\Gamma$.
Indeed, the transformations which preserve ${\rm Tr}_{\Gamma}[\cdot]$
are exactly those which preserve volume, or measure, on $\Gamma$
in the usual sense.
For example, for a classical distribution $p({\bf x})$
on $X$ the measure of a subset $S\subseteq X$ is given by
\begin{equation}
\mid S\mid = \int_S d^n{\bf x} = {\rm Tr}_{S}[1] .
\end{equation}
The invariance property then requires that the ensemble volume is
invariant under all transformations which preserve the measure of
all subsets, i.e.,
those transformations with unit Jacobian. For the case of a
classical phase space such transformations include all
canonical transformations, and hence $V(\rho)$ will be
invariant under Hamiltonian evolution. One may similarly consider
the measure $\mid S\mid = {\rm Tr}_{S}[1]$ of
subsets $S\subseteq I$ and subspaces
$S\subseteq H$; in these cases the invariance property again requires
that $V(\rho)$ is invariant under measure-preserving transformations,
corresponding to permutations and unitary transformations respectively.
Second, the Cartesian property (ii) is exactly analogous to the
geometric property that area equals length times breadth, and more
generally that the volume of the Cartesian product of two sets is
equal to the product of the volume of the sets. This is illustrated
in Figure 1.
Third, the projection property (iii) is exactly analogous to the
geometric property that a volume is less than or equal to the product
of the lengths obtained by its projection onto orthogonal axes, and is
illustrated in Figure 2. It is a generalisation of the projection
property discussed for RMS area and ensemble area in Sec. II.D.
Fourth, the additivity property (iv) requires the ensemble volume to
be additive for a uniform mixture of
non-overlapping ensembles of equal volume. The geometric interpretation
of this is self-evident: the total volume of
$m$ equal non-overlapping volumes is the sum of the individual volumes.
Finally, the uniformity property (v) states that the maximum volume, of
a mixture of non-overlapping ensembles of
equal volume, is bounded by the
sum of the component volumes. Thus, noting the additivity
property, this maximum is achieved for a {\it uniform}
mixture, i.e., one which is not more concentrated
on one of the component ensembles than on any other.
\subsection{Derivation}
Here the unique, universal measure of volume for ensembles is obtained.
It may more generally be applied as a measure of spread for
any positive classical or
quantum density, such as beam intensity or mass density, by
calculating the ``volume'' of the corresponding normalised density.
In such cases, where no ensemble is involved, one could alternatively
label this quantity as the ``geometric dispersion''.
In particular, one has the following result, first stated in
\cite{eprint}, and proved in the Appendix:
{\it Theorem:} Any (continuous) measure of volume
satisfying properties (i)-(v)
above has the form
\begin{equation} \label{theo}
V(\rho) = K(\Gamma) e^{S(\rho)},
\end{equation}
where $S(\rho)$ denotes the ensemble entropy
\begin{equation} \label{ent}
S(\rho) = - {\rm Tr}_{\Gamma}[\rho \ln \rho] ,
\end{equation}
and $K({\Gamma})$ is a constant which may depend on $\Gamma$, and
satisfies
\begin{equation} \label{kgam}
K(\Gamma_{12}) = K(\Gamma_{1}) K(\Gamma_{2}) .
\end{equation}
The proof in the Appendix primarily relies on
applying properties (i)-(v) to an arbitrarily large number
of independent copies of a given ensemble $\rho$. I believe it may
be possible to prove the theorem without the uniformity
property (v), but have not been able to do so.
The constant $K(\Gamma)$ in Eq.~(\ref{theo}) is a
normalisation constant, reflecting the notion that only
relative volumes are of real interest in comparing different ensembles.
For continuous classical ensembles a natural choice is $K(\Gamma)=1$,
so that a distribution which is uniform over a set $S$ of measure $V$,
and vanishes outside $S$, has ensemble volume equal to $V$.
For discrete classical ensembles the choice $K(\Gamma) = 1$
corresponds to measuring the
ensemble volume in terms of the number of ``bins'' occupied by the
ensemble, with the minimum volume of 1 bin corresponding to a
distribution with $p_i = 1$ for some index $i$.
However, if the distribution arises from the
discretisation of a continuous observable such as position (due to
measurement limitations for example), then it would be natural to
choose $K(\Gamma)$ to correspond to the discretisation volume.
If the index set is finite, with $M$ labels, another
possible choice for $K(\Gamma)$ is $1/M$.
The ensemble volume then
measures the fraction of the total volume
occupied by the ensemble.
For quantum ensembles the choice $K(\Gamma) = 1$
corresponds to measuring the
ensemble volume in terms of the number of Hilbert space dimensions
occupied by the ensemble, with pure states occupying the minimum
possible of 1 dimension. However, if the Hilbert space $H$ has finite
dimension $M$ then one could alternatively take $K(\Gamma) = 1/M$,
corresponding to a fractional measure of volume in analogy to the
classical case. Finally,
for quantum systems with classical counterparts,
such as spin-zero particles, one may choose $K(\Gamma)$ so that
in the classical limit the quantum ensemble volume reduces to the
classical ensemble volume. This is explored further in Sec. IV.B, and
used to obtain semi-classical uncertainty relations.
It should be noted that the assumption of continuity in the statement
of the theorem is necessary. For example, one may for a discrete
classical ensemble $\{ p_i\}$ define the ``support volume'' as the
number of non-zero $p_i$ values. This satisfies all of properties
(i)-(v), but
is not continuous. The simplest counterexample is the
discrete probability distribution $\{1-\epsilon,
\epsilon\}$ for $\epsilon >0$. As $\epsilon
\rightarrow 0$ this distribution continuously approaches the
distribution $\{ 1, 0\}$, with a support volume of 1;
however for all $\epsilon >0$ the support volume is 2.
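The discontinuity is easy to exhibit numerically (a sketch; the value of $\epsilon$ is arbitrary):

```python
import numpy as np

def support_volume(p):
    """Number of non-zero probabilities: satisfies (i)-(v), discontinuously."""
    return int(np.count_nonzero(p))

def ensemble_volume(p):
    """The continuous measure exp(S), with K = 1."""
    nz = np.asarray(p, dtype=float)
    nz = nz[nz > 0]
    return np.exp(-np.sum(nz * np.log(nz)))

eps = 1e-8
p = np.array([1 - eps, eps])
sv = support_volume(p)    # stays at 2 for every eps > 0
ev = ensemble_volume(p)   # approaches 1 continuously as eps -> 0
```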
If one defines the ``RMS'' volume for an $n$-dimensional observable
${\bf x}$ by generalising Eq.~(\ref{rmsarea}) to arbitrary dimensions
\cite{footrms}, it is not difficult to show
that the invariance property
(restricted to {\it linear} transformations),
the Cartesian property, and the
projection property are satisfied. However,
it does not satisfy the additivity and
uniformity properties.
Further, the ``Renyi'' volumes
\begin{equation} \label{renvol}
V_{\alpha}(\rho ) = ({\rm Tr}_{\Gamma}[\rho^{1+\alpha}])^{-1/\alpha} ,
\end{equation}
defined in analogy with the Renyi length and Renyi area in Eqs.
(\ref{renyi}) and (\ref{renarea}) respectively, satisfy properties (i),
(ii), (iv) and (v) for all $\alpha\geq -1$. However, a counterexample
given by Renyi (Theorem 4 of Sec. IX.6 in \cite{renyi}) shows that
the projection property is {\it not} satisfied, except for the cases
$\alpha =0$ (corresponding to Eq.~(\ref{theo}) with $K(\Gamma)=1$),
and $\alpha =-1$ (corresponding to the discontinuous case of ``support
volume'' discussed above).
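The limiting behaviour of the Renyi volumes at the two distinguished values of $\alpha$ can be checked numerically (a sketch, taking $K(\Gamma)=1$ and an arbitrary distribution):

```python
import numpy as np

def renyi_volume(p, alpha):
    """V_alpha(rho) = (sum_i p_i^{1+alpha})^(-1/alpha), Eq. (renvol)."""
    p = np.asarray(p, dtype=float)
    return np.sum(p ** (1 + alpha)) ** (-1.0 / alpha)

p = np.array([0.5, 0.25, 0.25])
S = -np.sum(p * np.log(p))

V_near_0 = renyi_volume(p, 1e-6)   # alpha -> 0 recovers exp(S)
V_support = renyi_volume(p, -1.0)  # alpha = -1 recovers the support volume
```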
\subsection{Geometric characterisation of entropy}
The appearance of the ensemble entropy in Eq.~(\ref{theo}) as a result
of geometric postulates (i)-(v) provides a new approach to this
quantity, which is moreover independent of whether the ensemble is
classical or quantum, discrete or continuous.
In particular, {\it ensemble entropy may be defined} (up to an
additive constant) {\it as the logarithm of the ensemble volume},
where the
latter is taken to be the primary quantity. The properties of ensemble
entropy may thus be regarded as being geometric in origin.
Indeed, it will be
seen that its natural appearance in a number of physical contexts
can be interpreted as following from its relationship to a ``volume''.
The geometric interpretation of ensemble entropy contrasts markedly
with its only other
context-independent interpretation as an (indirect) measure of
``uncertainty'' or ``randomness'' \cite{renyi,thesis,maas,ash,
shan,foot5}. Indeed ensemble volume provides a {\it direct} measure of
uncertainty, which is advantageous when one wishes to compare the
spreads of two
ensembles of a given type (i.e., with the same $\Gamma$). For example,
if two ensembles have
entropies of 0.5 bits and 1.5 bits respectively \cite{footbit}, should
one compare
their ratio or their difference in assessing the degree to which
the uncertainty of the second exceeds that of the first? Since entropies
are typically only defined up to a multiplicative constant
(see below), one might consider the ratio to be the more
significant means of comparison. However, the ensemble
volume gives an unequivocal answer: the volume of the second ensemble
is twice that of the first in this case, and hence the second has twice
the spread.
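The comparison can be spelled out in two lines (base-2 units and $K=1$ are assumed for illustration):

```python
import math

# Entropies of 0.5 and 1.5 bits correspond to ensemble volumes 2^0.5, 2^1.5
V1 = 2 ** 0.5
V2 = 2 ** 1.5
ratio = V2 / V1   # the unambiguous comparison: twice the spread
```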
It is interesting to briefly compare the derivation of ensemble volume
from properties (i)-(v) with existing axiomatic derivations of ensemble
entropy. Such axiomatic derivations are reviewed in \cite{behara}, and
are all related to the original derivation given by Shannon
\cite{ash,shan}. Unlike the theorem of the previous section
they are limited to {\it discrete} classical ensembles. Moreover,
they lead to an arbitrary multiplicative constant for entropy,
whereas the geometric approach leads to an arbitrary
{\it additive} constant for entropy.
To see that the axioms used by Shannon and others
are markedly different from
properties (i)-(v) used to derive ensemble volume, consider the
``grouping axiom'' of Shannon \cite{shan}
(see also Sec. 1.2 of \cite{ash}), which may
be written in the notation of this paper as:
\begin{equation} \label{shanax}
S(\lambda \rho + (1-\lambda)\rho') = S(\{\lambda, 1-\lambda\})
+ \lambda S(\rho) + (1-\lambda) S(\rho ')
\end{equation}
for any two non-overlapping discrete classical ensembles $\rho$,
$\rho '$. Thus it is assumed that the ``randomness'' $S(\cdot )$ of a
mixture of non-overlapping distributions is equal to that of
the mixing distribution plus the average randomness of the
individual ensembles. This axiom, together with a continuity assumption
and a symmetry assumption equivalent to the invariance property (i),
is sufficient to derive the form $S(\rho ) = -C \sum_i p_i \ln p_i$ for
the entropy of discrete classical ensembles, where $C$ is an arbitrary
constant \cite{behara}.
Eq.~(\ref{shanax}) does {\it not} translate
into a natural axiom for ensemble
volume: replacing $S$ by $\ln V$ gives the equivalent
constraint
\begin{equation}
V(\lambda \rho + (1-\lambda)\rho') = V(\{\lambda, 1-\lambda\})
[V(\rho)]^\lambda [V(\rho')]^{1-\lambda} ,
\end{equation}
which has no simple geometric interpretation. Conversely, the
additivity property Eq.~(\ref{add}), that non-overlapping equal
volumes add, translates under $V\rightarrow \exp S$
into the ``randomness'' constraint
\begin{equation}
S(\rho /2 + \rho '/2) = \ln 2 + S ,
\end{equation}
where $S$ denotes the common entropy of the two non-overlapping
ensembles $\rho$ and $\rho '$; this is not a natural property to
postulate for a measure of ``randomness''.
The geometric approach to ensemble entropy given here thus differs
significantly from former approaches (as is also apparent from comparing
the proof in the Appendix with those in \cite{ash,shan,behara}).
Finally, it is of interest to note that the concavity property of
ensemble entropy, $S(\sum_i \lambda_i \rho_i) \geq \sum_i \lambda_i
S(\rho_i)$ \cite{ash,shan}, is equivalent to an inequality relating
the volume of a mixture to the weighted geometric mean of
the volumes of its components:
\begin{equation} \label{mixture}
V(\sum_i \lambda_i \rho_i) \geq \prod_i [V(\rho_i)]^{\lambda_i} .
\end{equation}
This may be regarded as a generalisation of the uniformity property
Eq.~(\ref{unif}), as it implies that uniform mixtures have the
greatest volumes.
Note that the ensemble volume may
itself be regarded as a weighted geometric mean (e.g., of the function
$p({\bf x})^{-1}$ with respect to $p({\bf x})$ for continuous classical
ensembles; see sections 2.2 and 6.7 of \cite{hardy}).
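Inequality (\ref{mixture}) can be probed by randomised testing for discrete classical ensembles (a sketch; the sample size, dimension and seed are arbitrary):

```python
import numpy as np

def volume(p):
    """Ensemble volume exp(S) for a discrete distribution (K = 1)."""
    nz = np.asarray(p, dtype=float)
    nz = nz[nz > 0]
    return np.exp(-np.sum(nz * np.log(nz)))

rng = np.random.default_rng(0)
ok = True
for _ in range(200):
    pa = rng.dirichlet(np.ones(4))    # random distribution on 4 outcomes
    pb = rng.dirichlet(np.ones(4))
    lam = rng.uniform()
    mix = lam * pa + (1 - lam) * pb
    # volume of a mixture bounds the weighted geometric mean of volumes
    gm = volume(pa) ** lam * volume(pb) ** (1 - lam)
    ok = ok and (volume(mix) >= gm - 1e-12)
```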
\subsection{Relative entropy}
The relative entropy of two ensembles $\rho$ and $\sigma$ may be
defined in a context-independent manner by \cite{relent}
\begin{equation} \label{relent}
S(\rho\mid\sigma) = {\rm Tr}_\Gamma [\rho (\ln \rho - \ln \sigma)] .
\end{equation}
It is asymptotically related to the probability
of mistaking ensemble $\rho$ for
ensemble $\sigma$, as is reviewed in \cite{plenio}.
Here it will briefly be indicated how a geometric interpretation of this
quantity can be given.
Consider a compact $n$-dimensional space $X$ which is divided up
into a set of non-overlapping bins $\{ B_i\}$ (e.g., for measurement
purposes). A discrete probability distribution $\{ p_i\}$ over the bins
(e.g., corresponding to measurement results), may then also be
modelled by the {\it continuous} distribution $p({\bf x})$ on $X$
defined by
\begin{equation} \label{cont}
p({\bf x}) = p_i /V_i , \hspace{1cm} {\bf x}\in B_i ,
\end{equation}
where $V_i = \int_{B_i} d^n{\bf x}$ denotes the measure of bin $B_i$.
Thus $p({\bf x})$ is uniform over each bin,
and its integral over bin $B_i$ is
equal to $p_i$. Let $\rho_D$ and $\rho_C$ denote the discrete and
continuous ensembles
corresponding to $\{ p_i\}$ and $p({\bf x})$ respectively.
Now, as discussed earlier, the ensemble volume $V(\rho_D )$ is
proportional to the effective number of bins occupied by $\rho_D$.
However, this does not indicate the effective volume or spread of
the ensemble relative to $X$, particularly in the case of varying
bin-sizes $V_i$. The latter is given by $V(\rho_C)$, which, making the
choice $K(\Gamma)=1$, follows from Eq.~(\ref{cont}) as
\begin{equation} \label{rhoc}
V(\rho_C ) = \exp [-\sum_i p_i \ln (p_i /V_i ) ] .
\end{equation}
Note that in the case of {\it equal}
bin-sizes $V_i \equiv V$ this reduces to
the bin-size $V$ multiplied by the effective number of bins occupied,
$\exp S(\rho_D)$.
Finally, if $X$ has total measure $\sum_i V_i = V_X$, one may define
the ``weighting'' ensemble $\sigma_D$ as corresponding to the
discrete probability distribution $\{ V_i /V_X\}$. Thus $\sigma_D$
describes the relative sizes or weightings of the bins.
It then follows via
Eqs.~(\ref{relent}) and (\ref{rhoc}) that
\begin{equation} \label{ratio}
V(\rho_C)/V_X = e^{-S(\rho_D\mid\sigma_D)} .
\end{equation}
Hence {\it the relative entropy} $S(\rho\mid\sigma)$ {\it is directly
related to
the volume of a discrete ensemble}
$\rho$ {\it embedded in a continuous space,
where} $\sigma$ {\it characterises the distribution of bin sizes of the
embedding}.
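Equation (\ref{ratio}) is an exact identity, as a short numerical sketch confirms (the bin probabilities and measures below are arbitrary):

```python
import numpy as np

p = np.array([0.7, 0.2, 0.1])       # discrete distribution over bins
V_bins = np.array([1.0, 2.0, 1.0])  # bin measures V_i (unequal sizes)
V_X = V_bins.sum()

# Volume of the embedded continuous ensemble, Eq. (rhoc), with K = 1
V_C = np.exp(-np.sum(p * np.log(p / V_bins)))

# Relative entropy of p with respect to the weighting ensemble {V_i / V_X}
sigma = V_bins / V_X
S_rel = np.sum(p * np.log(p / sigma))
```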
Note that this geometric interpretation of relative entropy allows its
properties to be understood as corresponding to ratios of volumes. For
example, the volume of an ensemble on $X$ can never be greater than
$V_X$ (corresponding to a uniform distribution on $X$). Hence
the left-hand-side of Eq.~(\ref{ratio}) is never greater
than unity, implying that
\begin{equation}
S(\rho\mid\sigma) \geq 0 .
\end{equation}
\section{APPLICATIONS}
The results of Sec. II for ensemble length and ensemble area indicate
the usefulness of ensemble volume as a direct measure of the spread of
an ensemble (and of other positive densities such as optical beam
power). Here other applications will be
examined, in the contexts of statistical mechanics, semi-classical
quantum mechanics, information theory, and quantum chaos.
A particular
result of note is a new unified proof of the classical Shannon
information bound and the quantum Holevo information bound based on
ratios of ensemble volumes. For the quantum case this proof is
conceptually and technically far simpler than previous proofs.
\subsection{Statistical mechanics}
First, in the statistical mechanics context,
the Gibbs relation $S_{th}=k S(\rho)$ between thermodynamic entropy and
ensemble entropy for equilibrium ensembles can be rewritten via
Eq. (\ref{theo}) as
\begin{equation} \label{gibb}
S_{th} = k \ln [V(\rho)/K(\Gamma)] .
\end{equation}
Thus, the thermodynamic entropy is
(up to an additive constant)
proportional to the logarithm of the ensemble volume.
From Eq.~(\ref{gibb}) and the third law of thermodynamics
(that thermodynamic entropy vanishes at absolute zero), it follows that
one should choose $K(\Gamma)$ to correspond to
a minimum ``zero-temperature'' ensemble volume. For quantum
ensembles one has from Eqs. (\ref{theo}) and (\ref{ent}) that
$V(\rho)=K(\Gamma)$ for pure states, i.e., the
{\it quantum} zero-temperature volume is just that of a
{\it pure} state on $\Gamma$. Similarly, for discrete classical
ensembles, $K(\Gamma)$ is the volume of the ``pure'' ensemble
described by $\{ 1, 0, 0, \dots\}$.
However, continuous classical ensembles violate the
third law \cite{statmech} and $K(\Gamma)$ remains arbitrary in this
case (but see Sec. IV.B below).
The geometric expression (\ref{gibb}) is very similar
to the original Boltzmann relation
\begin{equation} \label{bolt}
S_{th} = k \ln W ,
\end{equation}
where $W$ is the number of distinct
microstates or ``elementary complexions'' consistent with the
thermodynamic description. Indeed,
from the above discussion it follows that
Eq.~(\ref{gibb}) provides a {\it precise}
{\it geometric} interpretation
of the Boltzmann relation for discrete classical and quantum
equilibrium ensembles: {\it thermodynamic
entropy is proportional to the logarithm of
the number of non-overlapping zero-temperature
volumes contained
within the total volume of the ensemble}. Thus the Boltzmann
relation and the Gibbs formula for thermodynamic entropy become
{\it directly}
unified in the ensemble volume approach, without appeals to reservoirs,
microcanonical ensembles, etc.
Properties of thermodynamic entropy can be reinterpreted in terms
of geometric volume. For example, the additivity of thermodynamic
entropy for uncorrelated ensembles in thermal equilibrium follows from
Eq.~(\ref{gibb}) and the Cartesian property Eq.~(\ref{cart}) for
uncorrelated ensemble volumes. Note also
that irreversible processes correspond geometrically to those
which increase the volume of the ensemble.
\subsection{Semi-classical quantum mechanics}
Consider now a classical ensemble $\rho_{C}$ which is the
``classical limit'' of some quantum ensemble $\rho_{Q}$, i.e., the
physical properties of $\rho_{C}$ approximate those of
$\rho_{Q}$. Such ensembles exist, for example, for
equilibrium ensembles in the high-temperature limit and for the
coherent states of a harmonic oscillator.
For the case of a spinless particle associated with a
$2n$-dimensional phase space one can obtain a relationship between the
constants $K(\Gamma_{C})$ and $K(\Gamma_{Q})$ in Eq.~(\ref{theo}) by
requiring that the ensemble volumes $V(\rho_{C})$ and $V(\rho_{Q})$
are approximately equal for such ensembles. Since these constants are
independent of the dynamics of the ensemble it suffices to choose
an equilibrium ensemble of
isotropic oscillators. Equating the calculated values of
$V(\rho_{C})$ and $V(\rho_{Q})$ in the high-temperature
limit then yields
\begin{equation} \label{corr}
K(\Gamma_{Q}) = h^{n} K(\Gamma_{C}) ,
\end{equation}
for the volume of a pure state, where $h$ is Planck's constant.
Thus the Bohr-Sommerfeld quantization rule that a pure quantum
state occupies a phase-space volume of $h^n$ is recovered
\cite{semiclass}.
Eq.~(\ref{corr}) can be used to derive semi-classical uncertainty
relations from geometric considerations. For two
corresponding ensembles $\rho_{Q}$ and $\rho_{C}$ as above,
the position and momentum entropies $S_X$ and $S_P$ respectively
must be approximately equal for the two ensembles. Further,
\begin{equation} \label{projxp}
\exp(S_X) \exp(S_P) \geq \exp(S(\rho_{C}))
\end{equation}
holds for the classical ensemble from the projection property
Eq.~(\ref{proj})
applied to projections onto the position and momentum axes.
Eqs. (\ref{theo}), (\ref{corr}) and
(\ref{projxp}) then yield the approximate inequality
\begin{equation} \label{entrop}
S_X + S_P - S(\rho_Q) \stackrel{>}{\sim} n \ln h
\end{equation}
for quantum ensembles which have classical limits.
I conjecture that {\it exact} inequality in fact
holds for {\it all} quantum
ensembles.
Since the entropy of a quantum ensemble has a minimum value of 0
(corresponding to the existence of a minimum volume for quantum
ensembles), it follows from Eq.~(\ref{entrop}) that one has the
semi-classical entropic uncertainty relation
\begin{equation} \label{enta}
S_X + S_P \stackrel{>}{\sim} n \ln h ,
\end{equation}
for quantum ensembles with classical limits.
As per the derivation of Eq.~(\ref{heis}) from
Eq.~(\ref{uncert}), the
corresponding semi-classical Heisenberg uncertainty relation
\begin{equation} \label{heisa}
\Delta X \Delta P \stackrel{>}{\sim} \hbar/e
\end{equation}
then follows for the $n=1$ case. Eqs. (\ref{enta}) and (\ref{heisa})
are close to the exact results for
general quantum ensembles \cite{manko,bbm} (see Eqs.~(\ref{uncert})
and (\ref{heis})).
It is seen that geometrically
they correspond to application of the projection property
Eq.~(\ref{proj}) to the
projections of a pure state of volume $h^n$ onto the position and
momentum axes (i.e., replacing $\Gamma_1$ and $\Gamma_2$ by
$X$ and $P$ in Figure 2).
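For a concrete semi-classical check, consider a minimum-uncertainty Gaussian pure state in one dimension (here $\hbar=1$ for illustration; the standard differential entropy of a Gaussian, $S=\frac{1}{2}\ln(2\pi e\sigma^2)$, is assumed):

```python
import numpy as np

hbar = 1.0
h = 2 * np.pi * hbar

# Minimum-uncertainty (coherent) state: Gaussian marginals with dX*dP = hbar/2
dX = 1.0
dP = hbar / (2 * dX)

def gaussian_entropy(sigma):
    """Differential entropy of a Gaussian with standard deviation sigma."""
    return 0.5 * np.log(2 * np.pi * np.e * sigma ** 2)

S_X = gaussian_entropy(dX)
S_P = gaussian_entropy(dP)
# For a pure state S(rho_Q) = 0, so Eq. (entrop) reads S_X + S_P >~ ln h;
# the exact value here is ln(pi e hbar), slightly above ln(2 pi hbar)
```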
\subsection{Information bounds}
Consider a communication channel
where signals represented by ensembles $\rho_{1}$, $\rho_{2}$, $\dots$
are transmitted with
prior probabilities $p_{1}$, $p_{2}$, $\dots$ respectively
\cite{footsig}.
The ensemble of signal
states itself corresponds to the mixture
\begin{equation} \label{ensem}
\rho = \sum_{i} p_{i} \rho_{i} .
\end{equation}
For classical ensembles it was shown by Shannon \cite{ash,shan}
that the average amount of error-free data $I$ which can be obtained per
transmitted signal, measured in terms of the number of binary digits
required to represent the data, is bounded above by
\begin{equation} \label{shanhol}
I \leq [S(\rho) - \sum_i p_i S(\rho_i)]/\ln 2 .
\end{equation}
The formally equivalent bound for quantum ensembles was proved by Holevo
\cite{holevo}, and hence Eq.~(\ref{shanhol}) may be referred to as
the Shannon-Holevo information bound.
Proofs given in the literature of Eq.~(\ref{shanhol}) for the quantum
case are mathematically rather technical in nature, and quite different
in character from proofs for the classical case
\cite{holevo,others}. However, the formal equivalence of the
quantum and classical bounds suggests that a unified proof exploiting
universal features of statistical ensembles may be possible. Indeed the
construction of such a proof, based on simple volume arguments,
was recently outlined in \cite{eprint}, and will be elaborated on here.
A second such proof, which reduces the general classical/quantum case
to that
of discrete classical noiseless channels, will also be pointed out.
First, consider a message consisting of $L$ signals chosen from the
set $\{\rho_i\}$. Such a message may be denoted by $\rho_{\alpha}$,
where $\alpha = (i_1 , i_2 , \dots , i_L )$ denotes the labels of the
signals comprising the message. In the limit that $L\rightarrow\infty$
the strong law of large numbers implies that the relative frequency of
signal $\rho_i$ appearing in the message approaches $p_i$ with
probability 1. It follows from the Cartesian property Eq.~(\ref{cart})
that the volume of the message satisfies
\begin{equation}
V(\rho_\alpha ) \rightarrow V_{mess} = \prod_i [V(\rho_i )]^{p_i L} ,
\end{equation}
as $L\rightarrow\infty$. Moreover, as will be shown below in
Eq.~(\ref{rhol}), the volume
of any ensemble of such messages is bounded above by $[V(\rho)]^L$.
Hence, using the additivity property Eq.~(\ref{add}), the maximum
possible number of non-overlapping messages of length $L$, $N_L$,
satisfies
\begin{equation} \label{rat}
N_L \leq [V(\rho)]^L /V_{mess}
\end{equation}
as $L\rightarrow\infty$. Noting that {\it error-free} data can
only be obtained from distinguishing
among a set of {\it non-overlapping} messages, and that $N_L$
such messages require at most $1 + \log_2 N_L$ binary digits to record,
it follows in the limit of infinitely long messages that the
average information gained per signal, $I$, is bounded by
\begin{equation} \label{made}
I \leq \lim_{L\rightarrow\infty} L^{-1} (1 + \log_2 N_L) \leq
\log_2 V(\rho)/\prod_i [V(\rho_i )]^{p_i} .
\end{equation}
Finally, since communication based on finite message lengths cannot
transmit more data per signal than communication based on infinite
lengths, the bound holds for all signalling schemes, and
Eq.~(\ref{shanhol}) follows from Eqs.~(\ref{theo}) and (\ref{made}).
The above proof of the Shannon-Holevo bound is geometrically simple,
being based on the ratio of the maximum available volume for an
ensemble of messages to the message volume (Eq.~(\ref{rat})).
Note that the argument cannot be used to derive similar bounds based on
other invariant volume measures, as all of the defining properties of
ensemble volume are required. However, heuristic arguments of the
same type for other volume measures can sometimes give excellent results
\cite{cd,hallpra}. Note that the
Shannon-Holevo bound is in fact {\it tight} for both classical and
quantum ensembles \cite{ash,shan,tight}, corresponding geometrically to
being able to choose a number $N_L$ of messages
arbitrarily close to the upper bound in Eq.~(\ref{rat}) which can be
distinguished with a vanishingly small average error probability as
$L\rightarrow\infty$.
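For classical ensembles the right-hand side of Eq.~(\ref{shanhol}) coincides with the mutual information between signal label and channel output, which the following sketch verifies for an arbitrary binary example (probabilities are illustrative):

```python
import numpy as np

def H(p):
    """Shannon entropy in bits."""
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]
    return -np.sum(nz * np.log2(nz))

# Two classical signal states (distributions over channel outputs)
priors = np.array([0.5, 0.5])
rho_i = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
rho = priors @ rho_i                   # the mixture ensemble, Eq. (ensem)

# The bound S(rho) - sum_i p_i S(rho_i), in bits
chi = H(rho) - priors @ np.array([H(r) for r in rho_i])

# Classical mutual information between signal label and output
joint = priors[:, None] * rho_i
MI = sum(joint[i, j] * np.log2(joint[i, j] / (priors[i] * rho[j]))
         for i in range(2) for j in range(2))
```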
To conclude this subsection it will be shown that the Shannon-Holevo
bound may also be proved by considering only messages of finite length,
and applying the classical noiseless coding theorem
\cite{ash, shan}. With notation as above, suppose that one chooses a
set of codewords $C$ from the set of messages of length $L$, and that
codeword $\rho_\alpha \in C$ is transmitted with probability
$q(\alpha)$. Defining $N_{i}(\alpha)$ as the number of times signal
$\rho_i$ appears in codeword $\rho_\alpha$, and $\overline{\rho}_l =
\sum_{\alpha\in C} q(\alpha) \rho_{i_l}$ as the average $l$-th
component of the transmitted codewords, consistency requires that
\begin{eqnarray} \label{consist}
p_i & = & L^{-1} \sum_{\alpha\in C} q(\alpha) N_i (\alpha) ,\nonumber\\
\rho & = & L^{-1} \sum_{l=1}^{L} \overline{\rho}_l .
\end{eqnarray}
Using the projection property Eq.~(\ref{proj}) and Eq.~(\ref{mixture})
one then has the inequality chain
\begin{equation} \label{rhol}
V(\sum_\alpha q(\alpha)\rho_\alpha ) \leq V(\overline{\rho}_{1})\dots
V(\overline{\rho}_{L})
\leq [V(\sum_l L^{-1}\overline{\rho}_l)]^{L} = [V(\rho )]^L .
\end{equation}
To obtain a bound for error-free data, it must be assumed that the
codewords are non-overlapping, so that they can be distinguished without
error by measurement. From Eq.~(\ref{theo}) and the Cartesian property
Eq.~(\ref{cart}) one may then calculate
\begin{equation}
V(\sum_\alpha q(\alpha)\rho_\alpha ) = e^{S[q]} \prod_{\alpha\in C}
[V(\rho_\alpha)]^{q(\alpha)} = e^{S[q]} \prod_{\alpha\in C}
\prod_l [V(\rho_{i_l})]^{q(\alpha)} ,\nonumber
\end{equation}
where $S[q]$ denotes the entropy of the discrete distribution
$\{q(\alpha)\}$. Combining this with Eqs.~(\ref{consist}) and
(\ref{rhol}) then gives
\begin{eqnarray}
S[q] & \leq & L S(\rho) - \sum_{\alpha\in C} \sum_l
q(\alpha) S(\rho_{i_l}) \nonumber \\
& = & L S(\rho) - \sum_{\alpha\in C} \sum_i q(\alpha) N_i (\alpha)
S(\rho_i ) \nonumber \\
& = & L [ S(\rho) - \sum_i p_i S(\rho_i ) ] .
\end{eqnarray}
Finally, from Shannon's classical noiseless coding theorem \cite{ash,
shan} $S[q]/\ln 2$ is the maximum information (measured in binary
digits) which can be transmitted on average per codeword, and hence
Eq.~(\ref{shanhol}) follows for the average information transmitted per
signal.
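As a numerical sanity check (an illustration added here, not part of the original derivation): for classical ensembles the right-hand side of the bound, $S(\rho) - \sum_i p_i S(\rho_i )$, coincides with the mutual information between the signal index and the received symbol, so the bound can be verified directly on a toy ensemble.

```python
import math

def entropy(p):
    """Shannon entropy (in nats) of a discrete distribution."""
    return -sum(x * math.log(x) for x in p if x > 0)

# Toy classical ensemble: two signal distributions over a 3-letter alphabet,
# transmitted with prior probabilities p = (0.4, 0.6).
priors = [0.4, 0.6]
signals = [[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]]

# Average ensemble rho = sum_i p_i rho_i.
mix = [sum(p * s[k] for p, s in zip(priors, signals)) for k in range(3)]

# Right-hand side of the Shannon bound: S(rho) - sum_i p_i S(rho_i).
chi = entropy(mix) - sum(p * entropy(s) for p, s in zip(priors, signals))

# Mutual information I(index; symbol) computed from the joint distribution.
mi = sum(p * s[k] * math.log(s[k] / mix[k])
         for p, s in zip(priors, signals) for k in range(3) if s[k] > 0)

assert abs(chi - mi) < 1e-12 and chi >= 0.0
```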
\subsection{Chaotic and other diffusion processes}
Zyczkowski \cite{zed} and Mirbach and Korsch \cite{mk1,mk2} have
studied connections between quantum and classical chaos via
entropies associated with the evolution of coherent states. Here it
will be shown that this approach may be simply interpreted in terms of
ensemble volume, and considerably generalised.
Consider an ensemble $\rho_0$, classical or quantum, which evolves in
time under some dynamical process $D$ (not necessarily reversible). The
ensemble will explore some region of $\Gamma$, which may
be large for standard diffusion processes, or relatively small for
integrable and dissipative systems. The localisation of the ensemble in
$\Gamma$ over time is characterised by the time-averaged mixture
\begin{equation} \label{time}
\overline{\rho} = \lim_{T\rightarrow\infty} T^{-1} \int_{0}^{T} dt\,
\rho_t .
\end{equation}
This mixture gives greatest weight to regions of $\Gamma$ where the
ensemble spends the most time. Hence its ensemble volume,
$V(\overline{\rho})$, is a
measure of the spread of the region explored by the ensemble as it
evolves.
The {\it localisation ratio} for a given initial state and dynamical
process
may now be defined as the ratio of the volumes of $\overline{\rho}$
and $\rho_0$, i.e.,
\begin{equation} \label{rati}
r = V(\overline{\rho})/V(\rho_0) = \exp [S(\overline{\rho}) -
S(\rho_0) ] .
\end{equation}
It thus measures the localisation of the ensemble under the
evolution process, relative to
its initial spread. This ratio will be less than or equal to one if
the ensemble evolves to a fixed point, and greater than or equal to
one if it diffuses over the whole of $\Gamma$. For chaotic systems with
integrable regions it will depend strongly on the initial ensemble.
The above definition is clearly
natural on geometric grounds, and the ensemble entropy
appears as a consequence of the uniqueness theorem in Eq.~(\ref{theo}).
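To make the definition concrete, here is a minimal numerical sketch (added here as an illustration, with an arbitrarily chosen doubly stochastic map): a classical ensemble initially concentrated on a single one of $N$ states, evolving under a mixing doubly stochastic transition matrix, spreads over all of $\Gamma$, so its localisation ratio approaches $N$.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (nats) of a probability vector."""
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

N = 8
rng = np.random.default_rng(0)
# Build a doubly stochastic matrix by alternately normalising rows/columns.
M = rng.random((N, N)) + 0.1
for _ in range(200):
    M /= M.sum(axis=1, keepdims=True)
    M /= M.sum(axis=0, keepdims=True)

rho0 = np.zeros(N); rho0[0] = 1.0      # "pure" initial ensemble, S(rho0) = 0
avg = np.zeros(N); rho = rho0.copy()
T = 500
for _ in range(T):                     # time-averaged mixture, as in Eq. (time)
    avg += rho / T
    rho = M @ rho

# Localisation ratio r = exp[S(avg) - S(rho0)], as in Eq. (rati).
r = np.exp(entropy(avg) - entropy(rho0))
assert abs(r - N) < 0.5                # diffusion over all N states: r ~ N
```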
For classical and quantum systems corresponding to the same
evolution process, it is of interest to compare localisation properties.
This is easily done for the case of initial quantum ensembles
$\rho_Q$ which have corresponding classical counterparts $\rho_C$ (such
as coherent states). In this case the quantum and classical
localisation ratios $r_Q$ and $r_C$ can be calculated and compared.
Zyczkowski partially carries through this procedure in \cite{zed},
where he plots $S(\overline{\rho})$
for the quantum counterpart of a classically chaotic process,
where $\rho_Q$ is chosen to range over a set of coherent
states indexed by their corresponding phase-space points. In this case
$S(\overline{\rho})$ is just the entropy of the
energy distribution of $\rho_Q$. Noting
$S(\rho_Q)=0$ for pure states, it
follows from Eq.~(\ref{rati}) that this is equivalent to plotting the
logarithm of the localisation ratio, $\ln r$. However, he compares
quantum localisation features qualitatively with the classical phase
space portrait,
rather than quantitatively with analogously calculated classical
localisation ratios.
Mirbach and Korsch extended the approach of Zyczkowski by also
calculating
$S(\overline{\rho})$ for the classical ensembles $\rho_C$ corresponding
to the coherent states $\rho_Q$. For a complete family of such states
they then compared the corresponding classical and quantum values of
$S(\overline{\rho})$ (Figures 1 and 3 of \cite{mk2}).
Since for this case $S(\rho_Q)$ and $S(\rho_C)$ are constants, this
amounts to comparing the logarithms of the classical and quantum
localisation ratios (up to an additive constant).
However, Mirbach and Korsch argue that one should in fact compare
{\it measurement} entropies rather than the direct ensemble entropies,
to smear out quantum fluctuations in the latter case \cite{mk1,mk2}.
This is also easily interpreted in terms of localisation ratios.
In particular, for a measurement observable $A$ on a classical or
quantum ensemble $\rho$, let
$V_{A}(\rho)$ denote the volume of the measurement distribution of $A$.
The localisation ratio of an evolution process
with respect to $A$, for an initial ensemble $\rho_0$, is then defined
in analogy to Eq.~(\ref{rati}) as
\begin{equation}
r_A = V_A (\overline{\rho})/V_A (\rho_0 ) .
\end{equation}
Again one may compare localisation ratios for classical and quantum
ensembles, where one chooses corresponding observables $A_Q$ and $A_C$.
The logarithm of this quantity (up to an additive constant) is plotted
in Figures 2 and 3 of \cite{mk1} for quantum and classical systems
respectively for a complete set of coherent states,
where $A_C$ is chosen to be a phase-space measurement
(so that $r_{A_C} = r_C$), and $A_Q$ to be
a ``Husimi'' phase-space measurement corresponding to the
complete set of coherent states \cite{husimi}.
\section{CONCLUSIONS}
In conclusion, an essentially unique measure of volume for classical
and quantum ensembles has been found, related to ensemble entropy,
which provides a
geometric tool for any context in which ensembles appear.
This measure is universal in the sense that it may
be defined by theory-independent concepts of invariance,
uncorrelated ensembles, projection, and non-overlapping ensembles
(properties (i)-(v)).
Its properties as a direct measure of ``spread''
have been investigated in Sec. II for continuous distributions, and
favourably compared with measures based on root-mean-square deviation.
New geometric characterisations of ensemble entropy and relative entropy
have been discussed in Secs. III.D and III.E.
Applications include a new definition of spot size for optical beams;
a precise geometric
interpretation of the Boltzmann relation in statistical mechanics;
a derivation of semi-classical uncertainty relations based on
the existence of a minimum volume for quantum states and a
projection property of volumes; a unified
derivation of results in classical and quantum information theory
based on simple volume ratios;
and a new and universal definition of a localisation ratio which
measures the time-averaged spreading of an ensemble and underlies
entropic measures previously
investigated in the context of quantum chaos.
Work is in progress on further applications, particularly
to quantum information theory \cite{tight}, measures of quantum
entanglement \cite{plenio}, and information exclusion relations
\cite{hallpra, hallprl}. The conjecture suggested following
Eq.~(\ref{entrop}) is also under active investigation, and the (mostly
weaker) bound
\begin{equation}
S_X + S_P - S(\rho) \geq \ln 2\pi e\hbar -
\ln [1 + \Delta X \Delta P/(\hbar/2)]
\end{equation}
has thus far been found for the $n=1$ case.
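As a quick consistency check (an illustration added here, with $\hbar = 1$): a pure minimum-uncertainty Gaussian state has $S_X = S_P = \frac{1}{2}\ln (\pi e\hbar)$, $S(\rho) = 0$ and $\Delta X\Delta P = \hbar /2$, and saturates this bound.

```python
import math

hbar = 1.0
# Minimum-uncertainty Gaussian (e.g. a coherent state): Var X = Var P = hbar/2.
var_x = var_p = hbar / 2
S_x = 0.5 * math.log(2 * math.pi * math.e * var_x)   # differential entropy of X
S_p = 0.5 * math.log(2 * math.pi * math.e * var_p)
S_rho = 0.0                                          # pure state
dx_dp = math.sqrt(var_x * var_p)                     # = hbar/2

lhs = S_x + S_p - S_rho
rhs = math.log(2 * math.pi * math.e * hbar) - math.log(1 + dx_dp / (hbar / 2))
assert lhs >= rhs - 1e-12
assert abs(lhs - rhs) < 1e-12      # the bound is saturated in this case
```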
{\bf Acknowledgments}
I am grateful to Prof. Wolfgang Schleich for drawing my attention
to the inverse participation ratio
(thus stimulating the search for the
``best'' measure of volume), and to Prof. Hajo Leschke and
Drs. Gernot Alber and Bruno Mirbach for useful discussions.
This work was supported by the Alexander von Humboldt Foundation.
{\bf APPENDIX}
Here the fundamental
theorem stated in Sec. III.C is proved, showing essentially
that the exponential of the ensemble entropy is the unique measure
of the volume of a statistical ensemble. It is convenient to first
prove the theorem for discrete classical ensembles, and then extend
the arguments to quantum ensembles and to continuous classical
ensembles. Notation will be as defined in Sec. III.A, and reference
will be made to the five assumed properties of the volume measure
$V(\rho)$ stated in Sec. III.B.
Let $\rho$ denote a classical discrete ensemble $\{ p_i\}$, with finite
index set $I = \{ 1,2, \dots ,M\}$. Defining the ``pure'' ensemble
$\rho_j$ ($j\in I$) as corresponding to the distribution
$\{p^{(j)}_{i}\}$ with $p^{(j)}_{i} = \delta_{ij}$, one can write
$\rho$ as the mixture
\begin{equation} \label{rhomix}
\rho = \sum_{i\in I} p_i \rho_i .
\end{equation}
Note that one has the two basic properties
\begin{equation} \label{pure}
{\rm Tr}_{\Gamma}[\rho_j \rho_k] = 0 \,\, (j\not= k),\hspace{1cm}
V(\rho_j ) = {\rm constant} = V_I .
\end{equation}
The first states that these pure ensembles are non-overlapping, and
the second that they have equal ensemble volumes (this follows from the
invariance property, noting that the $\rho_j$
map to each other under permutations).
Now consider the ensemble $\rho^L\in\Gamma^L$ corresponding to $L$
uncorrelated copies of $\rho$. For
each $\alpha = (i_1, i_2, \dots , i_L)$ in $I^L$ define
\begin{equation} \label{rhoalpha}
\rho_{\alpha} = \rho_{i_1}\rho_{i_2}\dots\rho_{i_L} ,
\hspace{1cm} p(\alpha) = p_{i_1} p_{i_2} \dots p_{i_L} .
\end{equation}
Thus $\rho_{\alpha}$ corresponds to the uncorrelated composite ensemble
formed by $\rho_{i_1}, \rho_{i_2}, \dots , \rho_{i_L}$ (in that order).
One can then decompose $\rho^L$ into the mixture
\begin{equation} \label{mix}
\rho^L = \sum_{\alpha\in I^L} p(\alpha) \rho_{\alpha} .
\end{equation}
The proof of the theorem proceeds by finding a suitable set of
so-called ``typical sequences'' $T\subseteq I^L$
\cite{ash,shan}, which allows $\rho^L$
in Eq.~(\ref{mix}) to be approximated by certain mixtures
of the ensembles $\{\rho_{\alpha}\}$ where $\alpha$ is restricted to
range over $T$.
For a given $\alpha\in I^L$ let $N_i(\alpha)$ denote the number of
times the index $i$ appears as a
component of $\alpha$, and let $P(\alpha)\in I^L$ correspond to a
permutation of the components of $\alpha$. If $S(\rho)$
denotes the entropy of $\rho$ defined in Eq.~(\ref{ent}) of the
text, then for any
$\epsilon > 0$ and $L$ sufficiently large
one may choose a set $T$, with $\mid T\mid$ elements, which satisfies:
\begin{eqnarray}
& (T1) & C_T = \sum_{\alpha\in T} p(\alpha) > 1-\epsilon , \nonumber\\
& (T2) & \mid T\mid = e^{L[S(\rho)+\delta_L]} , \nonumber\\
& (T3) & \sum_{i\in I} \mid L^{-1} N_{i}(\alpha) - p_i\mid < \delta_{L}'
\,\, {\rm for \,\, all}\,\, \alpha \in T , \nonumber \\
& (T4) & \alpha\in T \,\,{\rm implies}\,\, P(\alpha)\in T \,\,
{\rm for\,\, all}\,\, P ,
\nonumber
\end{eqnarray}
where both $\delta_L$ and $\delta_{L}' \rightarrow 0$ as
$L\rightarrow\infty$.
A particular example of such a set is
\begin{equation} \label{Tdef}
T = \{ \alpha :\, \mid L^{-1}N_{i}(\alpha) - p_i\mid <
[Mp_i (1-p_i )/(L\epsilon )]^{1/2} \} .
\end{equation}
Properties (T1) and (T2) for this set
are proved in Theorem 1.3.1 of \cite{ash};
property (T3) follows noting that $\sum_i [p_i (1-p_i )]^{1/2}$ is
bounded by $(M-1)^{1/2}$ and hence that one can choose
$\delta_{L}' = M(L\epsilon)^{-1/2}$; and
property (T4) is an immediate consequence of
$N_{i}(\alpha)$ being invariant under permutations.
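Properties (T1) and (T2) are easy to check numerically for the set of Eq.~(\ref{Tdef}). A sketch (added here as an illustration) for a binary alphabet, with the arbitrary choices $M=2$, $p = (0.3, 0.7)$, $L = 200$, $\epsilon = 0.1$; for $M = 2$ both frequency constraints reduce to one window on $k = N_1 (\alpha)$.

```python
import math

p, L, M, eps = 0.3, 200, 2, 0.1
S = -p * math.log(p) - (1 - p) * math.log(1 - p)     # entropy of {p, 1-p}

# T from Eq. (Tdef): sequences whose letter frequency k/L lies within
# [M p (1-p) / (L eps)]^{1/2} of p (same window for both letters).
half_width = math.sqrt(M * p * (1 - p) / (L * eps))
ks = [k for k in range(L + 1) if abs(k / L - p) < half_width]

# (T1): the total probability of T exceeds 1 - eps.
C_T = sum(math.comb(L, k) * p**k * (1 - p)**(L - k) for k in ks)
assert C_T > 1 - eps

# (T2): |T| = e^{L [S + delta_L]} with delta_L small.
size_T = sum(math.comb(L, k) for k in ks)
delta_L = math.log(size_T) / L - S
assert abs(delta_L) < 0.2
```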
To obtain an upper bound for the volume $V(\rho)$ of $\rho$,
consider now the ensembles defined by the mixtures
\begin{equation} \label{tmix}
\rho_L (T) = C_T^{-1}
\sum_{\alpha\in T} p(\alpha) \rho_\alpha , \hspace{1cm}
\rho_{L}^{*}(T) = \mid T\mid^{-1} \sum_{\alpha\in T} \rho_\alpha ,
\end{equation}
where $C_T = \sum_{\alpha\in T} p(\alpha)$.
From the Cartesian property and Eqs.~(\ref{pure}) and (\ref{rhoalpha})
it follows that $V(\rho_\alpha )=[V_I]^L$ is constant, and further that
the $\rho_\alpha$ are non-overlapping. Hence, from the uniformity
and additivity properties, $V(\rho_L (T)) \leq V(\rho_{L}^{*}(T)) =
\mid T\mid [V_I]^L$. Property (T2) then gives
\begin{equation} \label{ding}
V(\rho_L (T)) \leq [V_I]^L e^{L[S(\rho)+\delta_L]} .
\end{equation}
Further, from property (T1) and Eqs.~(\ref{mix}) and (\ref{tmix}),
\begin{eqnarray}
{\rm Tr}_{\Gamma^L}[\mid\rho^L - \rho_L (T)\mid ] & = &
\sum_{\alpha\in T}
\mid p(\alpha) - p(\alpha)/C_T \mid + \sum_{\alpha\notin T} p(\alpha)
\nonumber\\ & = & (1/C_T - 1) C_T + (1 - C_T ) \leq 2\epsilon\nonumber .
\end{eqnarray}
Hence $\rho^L$ can be made arbitrarily close to $\rho_L (T)$ for $L$
sufficiently large, and so from the assumed continuity of $V(\cdot)$,
and noting from the Cartesian property that $V(\rho^L) = [V(\rho)]^L$,
one has from Eq.~(\ref{ding}) that
\begin{equation} \label{less}
V(\rho) = \lim_{L\rightarrow\infty} [V(\rho_L (T))]^{1/L} \leq
V_I e^{S(\rho)} .
\end{equation}
Thus the exponential of the entropy is an upper bound for the ratio of
the volume of $\rho$ to the volume of a ``pure'' state.
Note that only properties (T1) and (T2) of $T$ were needed to obtain
this result, and that the projection property has not been used.
To obtain the converse of inequality Eq.~(\ref{less}), note from the
projection property that
\begin{equation} \label{wow}
V(\rho_{L}^* (T)) \leq \prod_{l=1}^{L} V(\overline{\rho}_l (T)) ,
\end{equation}
where $\overline{\rho}_l (T)$ is the projection of $\rho_{L}^* (T)$ onto
its $l$-th component, i.e.,
\begin{equation} \label{sevtwo}
\overline{\rho}_l (T) = \sum_{\alpha = (i_1 , \dots , i_L ) \in T}
|T|^{-1} \rho_{i_l} .
\end{equation}
From property (T4) of $T$, $\overline{\rho}_l (T)$ is
independent of $l$ and hence may be denoted by $\overline{\rho}$.
Eq.~(\ref{wow}) then becomes
$V(\rho_{L}^* (T)) \leq [V(\overline{\rho})]^L$.
But as noted earlier,
the volume $V(\rho_{L}^* (T))$ follows from the additivity property
as $\mid T\mid [V_I]^L$, and hence via property (T2) of $T$
Eq.~(\ref{wow}) reduces to
\begin{equation} \label{sing}
V_I e^{S(\rho) + \delta_L} \leq V(\overline{\rho}) .
\end{equation}
Further, from Eqs.~(\ref{tmix}) and (\ref{sevtwo})
\begin{equation}
\overline{\rho}=L^{-1}\sum_l \overline{\rho}_l (T) = \mid T\mid^{-1}
\sum_{\alpha\in T} \sum_{i\in I} L^{-1} N_{i}(\alpha) \rho_i ,
\end{equation}
and hence from Eq.~(\ref{rhomix}) and property (T3) of $T$
\begin{eqnarray}
{\rm Tr}_{\Gamma}[\mid\rho - \overline{\rho}\mid] & = & \mid T\mid^{-1}
{\rm Tr}_{\Gamma}[\mid \sum_{\alpha\in T} \sum_{i\in I} (p_i -
L^{-1} N_{i}(\alpha)) \rho_i \mid ] \nonumber\\
& \leq & \mid T\mid^{-1} \sum_{\alpha\in T} \sum_{i\in I}
\mid p_i -L^{-1} N_{i}(\alpha) \mid \leq \delta_{L}' \nonumber .
\end{eqnarray}
Hence $\overline{\rho}$ can be
made arbitrarily close to $\rho$ for $L$ sufficiently
large, and so, taking the limit $L\rightarrow\infty$ in
Eq.~(\ref{sing}), the assumed continuity of $V(\cdot)$ gives
\begin{equation} \label{more}
V_I e^{S(\rho)} \leq V(\rho) .
\end{equation}
Eqs.~(\ref{less}) and (\ref{more}) yield the theorem of Sec. III.B
for classical discrete ensembles with finite index sets (where
$K(\Gamma)$ in Eq.~(\ref{theo}) is identified with the volume $V_I$ of
a pure ensemble $\{ p_i = \delta_{ij}\}$ on $I$, and Eq.~(\ref{kgam})
for $K(\Gamma)$ follows immediately from the Cartesian property).
The extension to ensembles with
infinite index sets is trivial by continuity. The distribution
$\{ p_i\}$ of such an ensemble $\rho$ can
be arbitrarily closely approximated by its (renormalised) first $M$
terms, corresponding to a discrete ensemble $\rho_M$ with a finite
index set. Hence, from the assumed continuity of ensemble
volume and Eqs.~(\ref{less}) and (\ref{more}), $V(\rho) = V_I
\lim_{M\rightarrow\infty} \exp [S(\rho_M )]$ where $V_I$ is the
volume of a ``pure'' ensemble with respect to the infinite index set
$I$. Thus $V(\rho)$ is as per the theorem (but becomes infinite in the
case that the limit of $S(\rho_M )$ as $M\rightarrow\infty$
does not exist).
The extension to quantum ensembles is straightforward. Indeed, for
quantum ensembles the above analysis goes through formally unchanged,
where the expansion in
Eq.~(\ref{rhomix}) is now identified with an orthogonal decomposition
into pure states, and the first product in Eq.~(\ref{rhoalpha})
is a tensor product. Thus
the $\rho_i$ and $p_i$ represent (non-overlapping) eigenstates and
eigenvalues of $\rho$. The only additional consideration
is that $V_I$, the volume of an eigenstate of $\rho$, might
conceivably depend on the eigenstate basis. However this is ruled
out by the invariance property (i): {\it all} pure states on a given
Hilbert space can be
connected by unitary transformations, and hence have the same volume.
Finally, the theorem may be extended to continuous classical ensembles
as follows. Consider a classical ensemble $\rho$ described by a
probability distribution $p({\bf x})$ on an
$n$-dimensional space $X$. This
space may be partitioned into a set $\{ S_i\}$ of non-overlapping sets
of equal volume $V$ (i.e., $\int_{S_i} d^n{\bf x} = V$
for all $i$). Define
the corresponding ``pure'' ensembles $\rho_i$ by the associated
probability distributions $p^{(i)}({\bf x}) = 1/V$
for ${\bf x}\in S_i$ and $= 0$ for ${\bf x}\notin S_i$.
These pure ensembles can be mapped to each other
by measure-preserving transformations, and hence from the invariance
property have equal ensemble volumes, $V_0 (V)$ say. The
formal analogues of the properties in Eq.~(\ref{pure}) then hold, and
again the above analysis for classical discrete ensembles goes
through formally unchanged for mixtures of these pure ensembles, i.e.,
\begin{equation} \label{lem}
V(\sum_i p_i \rho_i ) = V_0(V) \exp (-\sum_i p_i \log p_i ) .
\end{equation}
Now consider the particular mixture defined by
\begin{equation}
\rho_V = \sum_i p_i (V) \rho_i , \hspace{1cm}
p_i (V) = \int_{S_i} d^n{\bf x} \, p({\bf x}) .
\end{equation}
Thus $\rho_V$ is a discrete approximation to $\rho$, and hence, noting
that $\int_X d^n{\bf x} \equiv \sum_i \int_{S_i} d^n{\bf x}$,
one has from the Mean Value Theorem that
\begin{equation} \label{mex}
{\rm Tr}_\Gamma [\mid \rho -\rho_V \mid ] = \sum_i \int_{S_i} d^n{\bf x}
\, \mid p({\bf x}) - p_i (V) /V \mid \rightarrow 0
\end{equation}
in the continuum limit $V\rightarrow 0$. Hence, from Eq.~(\ref{lem}) and
the assumed continuity of ensemble volume,
\begin{equation} \label{tex}
V(\rho ) = \lim_{V\rightarrow 0} V_{0}(V) \exp (S_V )
\end{equation}
where $S_V$ denotes the entropy of $\{ p_i (V)\}$.
But again approximating an integral by a summation,
\begin{equation}
S(\rho) = \lim_{V\rightarrow 0}
- V \sum_i [p_i(V)/V] \ln [p_i(V)/V]
=\lim_{V\rightarrow 0} (S_V + \ln V ) .
\end{equation}
Hence Eq.~(\ref{tex}) can be rewritten as
\begin{equation} \label{texx}
V(\rho ) = e^{S(\rho)} \,\lim_{V\rightarrow 0} V_{0}(V)/V .
\end{equation}
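The limit relation $S(\rho) = \lim_{V\rightarrow 0}(S_V + \ln V)$ used above is easy to verify numerically; a sketch (added illustration) for a standard Gaussian, whose differential entropy is $\frac{1}{2}\ln (2\pi e)$:

```python
import math

def discretised_entropy(cell):
    """Entropy S_V of {p_i(V)} for a standard Gaussian, cell volume V = cell."""
    n_cells = round(16 / cell)            # grid on [-8, 8], negligible tail mass
    xs = [-8 + cell * i + cell / 2 for i in range(n_cells)]
    ps = [math.exp(-x * x / 2) / math.sqrt(2 * math.pi) * cell for x in xs]
    total = sum(ps)
    ps = [q / total for q in ps]          # renormalise
    return -sum(q * math.log(q) for q in ps if q > 0)

S_exact = 0.5 * math.log(2 * math.pi * math.e)   # differential entropy
for V in (0.1, 0.01):
    # S_V + ln V converges to S(rho) as the cell volume V shrinks.
    assert abs((discretised_entropy(V) + math.log(V)) - S_exact) < 1e-2
```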
Finally, to show that the limit exists in Eq.~(\ref{texx}), note that
any set $S\subseteq X$ of measure $\int_S d^n{\bf x} = V$
can be partitioned into
$m$ non-overlapping sets of equal measure $V/m$ for any integer $m$.
Moreover, a ``pure'' ensemble on $S$,
corresponding to a distribution which is uniform over $S$
and vanishing outside $S$, can trivially be written as an
equally-weighted mixture of analogously defined ensembles for the
members of the partition. Hence from the additivity property one has
the relation $V_{0}(V) = m V_{0}(V/m)$ for the ensemble volumes of
``pure'' ensembles. Further, replacing $V$ by $nV$ in this relation for
any integer $n$ implies that $V_{0}(rV) = r V_{0}(V)$ for any rational
number $r=n/m$. This can be extended to all real $r$ from the assumed
continuity of ensemble volume, so that $V_{0}(V)/V =$ constant $=
K(\Gamma)$ say, and the theorem follows via Eq.~(\ref{texx}).
\newpage
\section{Introduction}
\noindent Let $K$ be a field and let $\matrices$ denote the space of $m\times n$ matrices over $K$, where we assume that $m\leq n$. We say that a non-zero subspace $\M$ of $\matrices$ is a constant rank $r$ subspace if each non-zero element of
$\M$ has rank $r$, where $1\leq r\leq m$.
Let $q$ be a power of a prime, and let $\mathbb{F}_q$ denote the finite field of order $q$. We showed in a previous paper that a constant
rank $r$ subspace of $M_{m\times n}(\mathbb{F}_q)$ has dimension at most $n$ provided $q\geq r+1$, \cite{Gow}. This bound is optimal, subject
to the restriction on $q$, since for all $q$, $M_{m\times n}(\mathbb{F}_q)$ contains an $n$-dimensional constant rank $r$ subspace.
For suitable values of $r$, $m$ and $n$, there are a few
constant rank $r$ subspaces of $M_{m\times n}(\mathbb{F}_2)$ of dimension greater than $n$, but we know of no other counterexamples when $q>2$.
(The absence of any other counterexamples may be partly explained by the difficulty of performing computer searches when $q>2$.)
We have the bound $\dim \M\leq m+n-r$ for any constant rank $r$ subspace of $M_{m\times n}(\mathbb{F}_q)$, valid for all $q$, but this bound is usually too large.
It follows that if $q\geq r+1$, a constant rank $r$ subspace of symmetric $n\times n$ matrices over $\mathbb{F}_q$ has dimension at most $n$, but there are reasons to expect that this upper bound can be improved when we exploit the symmetry of the matrices. This paper is devoted to finding
bounds for such constant rank subspaces of symmetric matrices. It builds on a previous paper, \cite{DGS}, and its key ingredient is an application of Lemma 1 of \cite{Fill}, which we exploited in \cite{Gow} to investigate constant rank subspaces.
For ease of exposition, we have stated our findings in terms of subspaces of symmetric bilinear forms defined on $V\times V$, where $V$ is a vector space of dimension $n$ over the field $K$ (and $K$ is usually $\mathbb{F}_q$). We have done this mainly to exploit the idea of the set
of common isotropic points of a subspace of symmetric bilinear forms, which arises naturally in the context of Theorem \ref{radical_totally_isotropic}, proved below. Our paper \cite{DGS} gave a formula for the number of common isotropic points in the finite field case, and we make good use of this formula
here. We are sure that the reader will have no trouble translating our results on constant rank subspaces of symmetric bilinear forms into identical results about constant rank subspaces of symmetric matrices.
Throughout this paper, $\Symm(V)$ denotes the $K$-vector space of symmetric bilinear forms defined on $V\times V$.
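As a concrete illustration (an example we add here, not taken from the papers cited above): over $\mathbb{F}_3$, the span of $I$ and $A = \left(\begin{smallmatrix}0&1\\1&1\end{smallmatrix}\right)$ is a $2$-dimensional constant rank $2$ subspace of $2\times 2$ symmetric matrices, since $\det (xI+yA) = x^2+xy-y^2$ has no non-trivial zeros over $\mathbb{F}_3$. A brute-force check:

```python
import itertools

q = 3
I = [[1, 0], [0, 1]]
A = [[0, 1], [1, 1]]    # symmetric; det(xI + yA) = x^2 + xy - y^2, no roots mod 3

def rank2x2_mod(M, q):
    """Rank of a 2x2 integer matrix over F_q."""
    det = (M[0][0] * M[1][1] - M[0][1] * M[1][0]) % q
    if det != 0:
        return 2
    return 1 if any(e % q for row in M for e in row) else 0

checked = 0
for x, y in itertools.product(range(q), repeat=2):
    if (x, y) == (0, 0):
        continue
    M = [[(x * I[i][j] + y * A[i][j]) % q for j in range(2)] for i in range(2)]
    assert rank2x2_mod(M, q) == 2    # every non-zero combination has full rank
    checked += 1
```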
\section{Common isotropic subspaces}
\noindent The following result is important for all our work in this paper. It is a straightforward application of the proof of Lemma 1
of \cite{Fill} to the context of symmetric matrices and symmetric bilinear forms.
\begin{theorem} \label{radical_totally_isotropic} Let $V$ be a vector space of dimension $n$ over the field $K$ and let $f$ and $g$ be non-zero elements of $\Symm(V)$. Suppose that $f$ has rank $r$ and that all $K$-linear combinations of $f$ and $g$ have rank
at most $r$. Then provided $|K|\geq r+1$, we have $g(u,w)=0$ for all elements $u$ and $w$ of the radical of $f$. Thus, the radical of $f$
is totally isotropic for all linear combinations of $f$ and $g$.
\end{theorem}
\begin{proof}
The result is trivial if $r=n$, so we may assume that $r<n$. We identify $V$ with $K^n$, and we choose a basis of $V$ with respect to which the
matrix of $f$ is
\[
C=\left(
\begin{array}
{cc}
0&0\\
0&A
\end{array}
\right),
\]
where $A$ is an invertible $r\times r$ symmetric matrix over $K$. The radical
of $f$ is then spanned by the $n-r$ standard basis vectors $e_1$, \dots, $e_{n-r}$ of $K^n$.
Let the matrix of $g$ with respect to the same basis be
\[
D=\left(
\begin{array}
{cc}
A_1&A_2\\
A_2^T&A_3
\end{array}
\right),
\]
where $A_1$ is an $(n-r)\times (n-r)$ symmetric matrix, $A_3$ is an $r\times r$ symmetric matrix,
and $A_2$ is an $(n-r)\times r$ matrix.
For any element $x\in K$, the matrix of $g+xf$ with respect to the basis is
\[
D+xC=
\left(
\begin{array}
{cc}
A_1&A_2\\
A_2^T&A_3+xA
\end{array}
\right).
\]
We note that
\[
\det(A_3+xA)=\det A\det (A^{-1}A_3+xI_r)
\]
is a polynomial in $x$ of degree $r$ with leading coefficient $\pm \det A$.
Let $A_1=(a_{ij})$, $1\leq i,j\leq n-r$ and let $x_i$, $x_j$ denote the $i$-th and $j$-th rows of $A_2$ (we consider $x_i$ and $x_j$ as
row vectors of size $r$). Then
\[
E=
\left(
\begin{array}
{cc}
a_{ij}&x_i\\
x_j^T&A_3+xA
\end{array}
\right)
\]
is an $(r+1)\times (r+1)$ submatrix of $D+xC$, whose determinant must be $0$, since $E$ has rank at most $r$. As $\det E$ is a polynomial in
$x$ of degree at most $r$ whose coefficient of $x^r$ is $\pm a_{ij}\det A$, the supposition that $|K|\geq r+1$ implies that this polynomial is identically zero, and thus $a_{ij}=0$. This shows that $A_1$ is the zero matrix. It follows that $g$ is totally isotropic on the subspace spanned by
$e_1$, \dots, $e_{n-r}$, as required.
\end{proof}
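The theorem can be spot-checked by brute force (a verification sketch added here, with our own choice of parameters) for $n=3$, $r=2$, $K=\mathbb{F}_3$, so that $|K| = r+1$: fix $f$ with matrix $\mathrm{diag}(0,1,1)$, whose radical is spanned by $e_1$, and test every symmetric $g$ for which all non-zero combinations of $f$ and $g$ have rank at most $2$; each such $g$ must satisfy $g(e_1 ,e_1 )=0$.

```python
import itertools

q, n = 3, 3

def rank_mod(M, q):
    """Rank of a square integer matrix over the prime field F_q."""
    M = [[e % q for e in row] for row in M]
    rank = 0
    for col in range(len(M)):
        piv = next((r for r in range(rank, len(M)) if M[r][col]), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][col], q - 2, q)          # inverse modulo prime q
        M[rank] = [(e * inv) % q for e in M[rank]]
        for r in range(len(M)):
            if r != rank and M[r][col]:
                c = M[r][col]
                M[r] = [(a - c * b) % q for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

C = [[0, 0, 0], [0, 1, 0], [0, 0, 1]]              # f: rank 2, radical = <e_1>

def sym_matrices(q, n):
    """All symmetric n x n matrices over F_q."""
    idx = [(i, j) for i in range(n) for j in range(i, n)]
    for vals in itertools.product(range(q), repeat=len(idx)):
        D = [[0] * n for _ in range(n)]
        for (i, j), v in zip(idx, vals):
            D[i][j] = D[j][i] = v
        yield D

constant_rank_gs = 0
for D in sym_matrices(q, n):
    combos_ok = all(
        rank_mod([[x * C[i][j] + y * D[i][j] for j in range(n)]
                  for i in range(n)], q) <= 2
        for x, y in itertools.product(range(q), repeat=2) if (x, y) != (0, 0))
    if combos_ok:
        assert D[0][0] % q == 0    # g(e_1, e_1) = 0: the radical is isotropic
        constant_rank_gs += 1
```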
\section{Spaces of symmetric bilinear forms of odd constant rank}
\noindent For most of the rest of this paper, $V$ will denote a vector space
of dimension $n$ over $\mathbb{F}_q$. In \cite{DGS}, Theorem 2 and Corollary 3, we showed the following. Let $\M$ be a
constant rank $r$ subspace of $\Symm(V)$. Then if $r$ is odd, we have $\dim \M\leq r$.
Furthermore,
$r$-dimensional constant rank $r$ subspaces of symmetric bilinear forms certainly exist. They may be constructed as follows. Let $U$ be a vector space of dimension $r$ over $\mathbb{F}_q$. Then there exists an $r$-dimensional constant rank $r$ subspace, $\N$, say, of symmetric bilinear
forms defined on $U\times U$. We can extend $\N$ to a subspace of $\Symm(V)$ in an obvious way, where the extended forms all have the same radical of dimension $n-r$.
Theorem \ref{common_radicals} below shows that when $q$ and $r$ are odd, and $q\geq r+1$, this is the only way in which such constant rank subspaces can be constructed.
We recall that if $\M$ is a subspace of $\Symm(V)$, a vector $v\in V$ that satisfies $f(v,v)=0$
for all $f\in\M$ is called a \emph{common isotropic point} for the forms in $\M$.
\begin{theorem} \label{common_radicals}
Let $\M$ be a constant rank $r$ subspace of $\Symm(V)$, where $V$ has
dimension $n$ over $\mathbb{F}_q$. Suppose that $q$ is odd and greater than $r$. Then if $r$ is odd, and $\dim \M=r$, all the non-zero
forms in $\M$ have the same radical. Thus, in this case, $\M$ is essentially defined on a space of dimension $r$.
\end{theorem}
\begin{proof} Let $f$ be a non-zero element of $\M$ and let $R$ denote the radical of $f$. Then $\dim R=n-r$. Moreover, as $q>r$, Theorem \ref{radical_totally_isotropic} implies that $R$ is totally isotropic for all the forms in $\M$. Thus, there are at least
$q^{n-r}$ common isotropic points for the elements of $\M$. Now, since all non-zero elements of $\M$ have odd rank, $q$ is odd, and
$\dim \M=r$, the total
number of common isotropic points for $\M$ is $q^{n-r}$, by Theorem 5 of \cite{DGS}. It follows that $R$ is precisely the set of common isotropic points. However,
this applies to the radical of every non-zero element of $\M$. Thus, all radicals are identical, and $\M$ is essentially defined on
$V/R \times V/R$, where $V/R$ is $r$-dimensional.
\end{proof}
We can be more precise about the radicals of the bilinear forms in a constant rank space if we examine the common isotropic points
more carefully. We recall that a \emph{subspace partition} of a finite-dimensional vector space $X$ over $\mathbb{F}_q$ is a collection of non-zero subspaces of $X$ such that each non-zero element of $X$ is in exactly one subspace of the partition. Thus, we may write
\[
X=X_1 \cup \cdots \cup X_t,
\]
where $X_i\cap X_j=0$ if $i\neq j$. We say that the partition is non-trivial if $t>1$.
Subspace partitions have been studied in considerable detail because of their applications in finite geometry. We require an estimate for the smallest value of $t$ arising in a subspace partition. Our estimate must be common knowledge, and more refined estimates are available,
but we provide a proof of what we need, as it is not very complicated.
\begin{lemma} \label{partition_number}
Let $X$ be a vector space of dimension $n\geq 2$ over $\mathbb{F}_q$ and let
\[
X=X_1 \cup \cdots \cup X_t,
\]
where $\dim X_1\geq \dim X_2\geq \ldots \geq \dim X_t$,
describe a non-trivial subspace partition of $X$.
Then, if $n=2m$ is even, $t\geq q^m+1$, and if $n=2m+1$ is odd, we have $t\geq q^{m+1}+1$.
\end{lemma}
\begin{proof}
Suppose first that $n=2m$. We begin by considering the case that $\dim X_1>m$. Then we have $\dim X_1=m+s$, where $1\leq s\leq m-1$. Since
$X_1\cap X_2=0$, we have
\[
2m\geq \dim X_1+\dim X_2
\]
and thus $\dim X_2\leq m-s$. It follows from our labelling of indices that $\dim X_i\leq m-s$ for $i\geq 2$.
Suppose that there are exactly $a_i$ $i$-dimensional subspaces in the partition, where $1\leq i\leq m-s$. Then we have
\[
q^{2m}-q^{m+s}=\sum_{i=1}^{m-s} a_i(q^i-1).
\]
Now
\[
\sum_{i=1}^{m-s} a_i(q^i-1)\leq (a_1+\cdots +a_{m-s})(q^{m-s}-1)=(t-1)(q^{m-s}-1).
\]
We deduce that
\[
q^{m+s}(q^{m-s}-1)\leq (t-1)(q^{m-s}-1)
\]
and hence $t\geq q^{m+s}+1\geq q^{m+1}+1$.
Next, we suppose that $\dim X_1\leq m$. The same argument as above then yields that
\[
q^{2m}-1\leq t(q^m-1),
\]
and thus $t\geq q^m+1$. Hence in all cases $t\geq q^m+1$.
Suppose now that $n=2m+1$ is odd. Again, we begin by considering the case that $\dim X_1=m+s+1$, where $0\leq s\leq m-1$. Then we must have
$\dim X_2\leq m-s$ and we obtain as before
\[
q^{2m+1}-q^{m+s+1}\leq (t-1)(q^{m-s}-1).
\]
It follows that $t-1\geq q^{m+s+1}$ and hence $t\geq q^{m+s+1}+1\geq q^{m+1}+1$.
Finally, suppose that $\dim X_1\leq m$. This time we obtain
\[
q^{2m+1}-1\leq t(q^m-1)
\]
and thus
\[
t\geq \frac{q^{2m+1}-1}{q^m-1}.
\]
Since the inequality
\[
\frac{q^{2m+1}-1}{q^m-1}>q^{m+1}+1
\]
holds, we see once more that in all cases for odd $n$, $t\geq q^{m+1}+1$.
\end{proof}
The two bounds for $t$ are optimal, as is well known. When $n=2m$ is even, $X$ may be covered by a spread of $q^m+1$ subspaces of dimension
$m$. Suppose now that $n=2m+1$ is odd. Let $Y$ be a vector space of dimension $2m+2$ over $\mathbb{F}_q$ that is covered by
a spread of $q^{m+1}+1$ subspaces of dimension $m+1$, say
\[
Y=Y_1 \cup \cdots \cup Y_k,
\]
where $k=q^{m+1}+1$. We may choose $X$ to be a subspace of codimension 1 in $Y$ that contains $Y_1$. Then
\[
X=(Y_1\cap X) \cup \cdots \cup (Y_k\cap X)=Y_1\cup \cdots \cup (Y_k\cap X)
\]
is a partition of $X$ into one subspace $Y_1$ of dimension $m+1$ and $q^{m+1}$ subspaces
$Y_2\cap X$, \dots, $Y_k\cap X$ of dimension $m$. Thus we have a partition of $X$ into $q^{m+1}+1$ subspaces.
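The counting behind both constructions, and the final inequality in the proof of Lemma \ref{partition_number}, reduce to one-line identities; a quick numerical check (added illustration):

```python
# Even n = 2m: a spread of q^m + 1 subspaces of dimension m covers the
# q^{2m} - 1 non-zero vectors; odd n = 2m+1: one (m+1)-dimensional subspace
# plus q^{m+1} m-dimensional ones cover the q^{2m+1} - 1 non-zero vectors.
cases = 0
for q in (2, 3, 4, 5, 7, 9):
    for m in range(1, 6):
        assert (q**m + 1) * (q**m - 1) == q**(2 * m) - 1
        assert (q**(m + 1) - 1) + q**(m + 1) * (q**m - 1) == q**(2 * m + 1) - 1
        # Lemma: (q^{2m+1} - 1)/(q^m - 1) exceeds q^{m+1} + 1.
        assert (q**(2 * m + 1) - 1) // (q**m - 1) >= q**(m + 1) + 1
        cases += 1
```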
Let $\langle u\rangle$ denote the one-dimensional subspace of $V$ spanned by a non-zero vector $u$ in $V$ and let $\M$ denote a subspace
of $\Symm(V)$. We set
\[
\M_{\langle u\rangle}=\{ f\in \M: u\in \rad f\}.
\]
It is clear that $\M_{\langle u\rangle}$ is a subspace of $\M$.
Suppose now that $\M$ is a constant rank $n-1$ subspace. Let $\langle u\rangle$ and $\langle w\rangle$ be different one-dimensional subspaces of $V$ for which $\M_{\langle u\rangle}$ and $\M_{\langle w\rangle}$ are both non-zero. Then we claim that $\M_{\langle u\rangle}\cap \M_{\langle w\rangle}=0$. For, if $f$ is a non-zero element of $\M_{\langle u\rangle}\cap \M_{\langle w\rangle}$, the two-dimensional subspace spanned by
$u$ and $w$ is in $\rad f$, which contradicts the fact that $f$ has rank $n-1$.
Continuing with the hypothesis that $\M$ is a constant rank $n-1$ subspace, let $\langle u_1\rangle$, \dots, $\langle u_t\rangle$ be all the different one-dimensional subspaces of $V$ such that for $1\leq i\leq t$, $\langle u_i\rangle$ is the radical of a non-zero element of $\M$. We see then that the $\M_{\langle u_i\rangle}$, $1\leq i\leq t$, form a subspace partition of $\M$. Furthermore, Theorem \ref{radical_totally_isotropic}
implies that $g(u_i,u_i)=0$ for all $g\in \M$ and $1\leq i\leq t$. Thus $\M$ has at least $t(q-1)$ non-zero common isotropic points. This fact will enable us to obtain more information about common radicals.
\begin{theorem} \label{improved_common_radicals}
Suppose that $n$ is even and $\M$ is a $d$-dimensional constant rank $n-1$ subspace of $\Symm(V)$. Suppose also that $q\geq n$. Then all the non-zero elements of $\M$ have the same radical under any of the following conditions:
\begin{enumerate}
\item $n=6m$ and $4m\leq d\leq 6m-1$;
\item $n=6m+2$ and $4m+1\leq d\leq 6m+1$;
\item $n=6m+4$ and $4m+3\leq d\leq 6m+3$.
\end{enumerate}
\end{theorem}
\begin{proof}
Let $\langle u_1\rangle$, \dots, $\langle u_t\rangle$ be all the different one-dimensional subspaces of $V$ that occur as the radical of a non-zero element of $\M$. Clearly, all the non-zero elements of $\M$ have the same radical precisely when $t=1$. Thus, we want to show that
$t=1$ under any of the stated conditions. We note that it suffices to prove the theorem whenever $d$ assumes the lower bound in each case.
We also note that $\M$ has $q^{n-d}-1$ non-zero common isotropic points by Theorem 5 of \cite{DGS}.
We consider the three possibilities in turn. Suppose that $n=6m$ and $d=4m$. Then $\M$ has exactly $q^{2m}-1$ non-zero common isotropic points. Suppose
if possible that $t>1$. Then, since we have a non-trivial partition of $\M$ into
$t$ subspaces, Lemma \ref{partition_number} implies that $t\geq q^{2m}+1$. Furthermore, we must have $t(q-1)\leq q^{2m}-1$. However,
\[
t(q-1)\geq (q^{2m}+1)(q-1)>q^{2m}-1,
\]
and we have a contradiction. Thus, $t=1$ here.
Suppose next that $n=6m+2$ and $d=4m+1$. Then $\M$ has exactly $q^{2m+1}-1$ non-zero common isotropic points. Suppose
if possible that $t>1$. Then
Lemma \ref{partition_number} implies that $t\geq q^{2m+1}+1$ and we must also have $t(q-1)\leq q^{2m+1}-1$. However,
\[
t(q-1)\geq (q^{2m+1}+1)(q-1)>q^{2m+1}-1,
\]
and we have a contradiction. Thus, $t=1$ holds here as well.
Suppose finally that $n=6m+4$ and $d=4m+3$. Accordingly, $\M$ has exactly $q^{2m+1}-1$ non-zero common isotropic points. Suppose
that $t>1$. Then
Lemma \ref{partition_number} implies that $t\geq q^{2m+2}+1$ and we must also have $t(q-1)\leq q^{2m+1}-1$. However,
\[
t(q-1)\geq (q^{2m+2}+1)(q-1)>q^{2m+1}-1,
\]
and we have another contradiction. Thus, $t=1$ holds in all cases.
\end{proof}
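All three contradictions in the proof rest on the elementary inequality $(q^k+1)(q-1)>q^k-1$, valid for $q\geq 2$ (the third case follows a fortiori, since there $t\geq q^{2m+2}+1$ while the isotropic count is only $q^{2m+1}-1$). A quick check of ours:

```python
# (q^k + 1)(q - 1) > q^k - 1 for all q >= 2 and k >= 1, covering the three
# contradictions in the proof above (the third case a fortiori)
for q in (2, 3, 5, 7, 9):
    for k in range(1, 20):
        assert (q**k + 1) * (q - 1) > q**k - 1
```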
\section{Constant rank $n-1$ subspaces when $n$ is odd}
\noindent Dimension bounds and analysis of structure for even constant rank subspaces are complicated by the fact that the formula for the number of common isotropic points is not so easy to use effectively, compared with the corresponding formula in the odd rank case. The difficulty arises from the fact that there are two different types of symmetric bilinear forms of even rank, namely, those of positive type and those of negative type.
We can define the type in various ways, and our nomenclature may not be universally used, but the two types are distinguished by the number of isotropic points for a given form. A symmetric bilinear form of even rank and positive type has more isotropic points than a symmetric bilinear form of the same rank
and negative type. See, for example, Lemma 1 of \cite{DGS}.
At present, we do not have any good idea of the number of elements of positive type compared with the number of elements of negative type
in an even constant rank space. This makes it more difficult to obtain good bounds for the dimension of an even constant rank
space of symmetric bilinear forms. We confine ourselves to examining one special case in this section.
\begin{theorem} \label{even_constant_rank_dimension}
Let $n\geq 3$ be an odd integer and let $\M$ be a constant rank $n-1$ subspace of $\Symm(V)$. Then if $q$ is odd and at least $n$, we have $\dim \M\leq n-1$.
\end{theorem}
\begin{proof}
We set $n=2m+1$ and follow the proof of Theorem \ref{improved_common_radicals} fairly closely.
Suppose if possible that $\dim \M\geq n$; by passing to a subspace if necessary, we may assume that $\dim \M=n$. Let $A$ be the number of non-zero elements in $\M$ of positive type and let
$B$ be the number of non-zero elements in $\M$ of negative type. Then of course $A+B=q^n-1$ and the number of non-zero
common isotropic points of $\M$ is
\[
(A-B)q^{-m},
\]
by Theorem 5 of \cite{DGS}.
Let $\langle u_1\rangle$, \dots, $\langle u_t\rangle$ be all the different one-dimensional subspaces of $V$ that occur as radicals of non-zero elements of $\M$. We claim that we cannot have $t=1$. For if $t=1$, all the non-zero elements of $\M$ have the same radical,
$\langle u_1\rangle$. In this case, we set $\overline{V}=V/\langle u_1\rangle$. Then we can identify $\M$ with an $n$-dimensional constant rank
$n-1$ subspace of symmetric bilinear forms defined on $\overline{V}\times \overline{V}$. However, since $\dim \overline{V}=n-1$, the largest
dimension of a constant rank $n-1$ subspace of symmetric bilinear forms defined on $\overline{V}\times \overline{V}$ is $n-1$. This contradiction implies that $t>1$.
We deduce that there is a non-trivial subspace partition of $\M$ by $t$ subspaces and since $\dim \M=2m+1$, we obtain the estimate
\[
t\geq q^{m+1}+1
\]
from Lemma \ref{partition_number}. Now, as we are assuming that $q\geq n$, the radical of each non-zero element is totally isotropic
with respect to all elements of $\M$ and hence there are at least
\[
(q^{m+1}+1)(q-1)
\]
non-zero common isotropic points of $\M$. It follows that
\[
(q^{m+1}+1)(q-1)\leq (A-B)q^{-m}\leq (A+B)q^{-m}=(q^{2m+1}-1)q^{-m}
\]
and hence
\[
(q^{m+1}+1)(q^{m+1}-q^m)\leq q^{2m+1}-1.
\]
This latter inequality is impossible, and we have a contradiction. We deduce that $\dim \M\leq n-1$ when $q$ is odd and at least $n$.
\end{proof}
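That the latter inequality is impossible can also be checked mechanically: expanding gives $(q^{m+1}+1)(q^{m+1}-q^m)=q^{2m+2}-q^{2m+1}+q^{m+1}-q^m$, which already exceeds $q^{2m+1}-1$ for every $q\geq 2$. A small verification sketch of ours:

```python
# (q^(m+1) + 1)(q^(m+1) - q^m) > q^(2m+1) - 1, so the displayed inequality
# has no solutions; checked here for small odd q and a range of m
for q in (3, 5, 7, 9, 11, 13):
    for m in range(1, 10):
        assert (q**(m + 1) + 1) * (q**(m + 1) - q**m) > q**(2*m + 1) - 1
```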
We note that the hypothesis that $q$ is odd is essential in Theorem \ref{even_constant_rank_dimension}. For, if $q$ is a power of 2, and $n=2m+1$ is odd, there is a $(2m+1)$-dimensional constant rank $2m$ subspace of alternating bilinear forms defined on $V\times V$. Since alternating bilinear forms are symmetric in characteristic 2, this furnishes a counterexample to the dimension bound when $q$ is even.
\section{Examples of constant rank $n-1$ subspaces}
\noindent Let $n\geq 3$ be an integer and suppose that $n+1$ is not a prime. Write $n+1=mr$, where $m$ and $r$ are integers, and $1<m<n+1$. Suppose that
$K$ is an arbitrary field that has a separable field extension $L$ of degree $n+1$ and consider $L$ as a vector space over $K$. We
let $\Tr:L\to K$ denote the usual trace form.
For each element $z$ of $L$, we define an element $f_z$, say, of $\Symm(L)$ by setting
\[
f_z(x,y)=\Tr(z(xy))
\]
for all $x$ and $y$ in $L$. Each form $f_z$ is non-degenerate if $z\neq 0$, since the trace form is non-zero under the hypothesis of separability.
Suppose now that $L$ contains a subfield $M$ such that $K<M<L$ and $M$ has degree $m$ over $K$. (The hypothesis is obviously met
if $K= \mathbb{F}_q$.)
Let $V$ be any $K$-subspace of codimension 1 in $L$ that contains $M$ and let $f_z'$ denote the restriction of $f_z$ to
$V\times V$ for each $z\in L$.
Let $\mathcal{M}$ denote the subspace of all bilinear forms
$f_z'$.
Let us assume that $z\neq 0$. Then since $f_z$ is non-degenerate, $f'_z$ has rank $n$ or $n-1$. Now,
$\mathcal{M}$ is an $(n+1)$-dimensional subspace of $\Symm(V)$ and hence its non-zero elements cannot all have rank $n$. We deduce that there exists some $z\in L$ such that $f_z'$ has rank $n-1$. Let $\langle u\rangle$ be the radical of $f_z'$. It is then straightforward
to see that the radical of $f'_{zu}\in \M$ is spanned by the unity element $1$.
Consider now the set $\K$, say, of all symmetric bilinear forms $f'_{zuw}$, as $w$ runs over the subfield $M$. It is again easy to see that if $w\neq 0$,
$f'_{zuw}$ has rank $n-1$, and its radical is $\langle w^{-1}\rangle$. Thus the radicals are different subspaces provided we use elements
of $M$ that are not $K$-multiples of each other. Furthermore, $\K$ is a $K$-subspace of dimension $m$.
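The discussion above can be checked computationally in a small case. The sketch below is our own: we take $K=\mathbb{F}_2$, $L=\mathbb{F}_{16}$, $M=\mathbb{F}_4$ (so $n+1=4$, $n=3$, $m=2$; characteristic $2$ is chosen purely for ease of arithmetic, and the field model, basis, and choice of $V$ are ours). It confirms that every $f_z$ with $z\neq 0$ is non-degenerate, that every restriction $f'_z$ has rank $2$ or $3$, and that at least $3$ values of $z$ (the forms $f'_{zuw}$, $w\in M^*$) give rank $n-1=2$.

```python
MOD = 0b10011                                    # t^4 + t + 1, irreducible over F_2

def mul(a, b):                                   # multiply in F_16 = F_2[t]/(MOD)
    r = 0
    while b:
        if b & 1:
            r ^= a
        a, b = a << 1, b >> 1
    for s in range(7, 3, -1):                    # reduce any bits of degree >= 4
        if r >> s & 1:
            r ^= MOD << (s - 4)
    return r

def tr(a):                                       # trace F_16 -> F_2: a + a^2 + a^4 + a^8
    s, x = 0, a
    for _ in range(4):
        s ^= x
        x = mul(x, x)
    return s

def rank2(rows):                                 # rank of an F_2 matrix, rows as bitmasks
    rows, r = list(rows), 0
    for col in range(4):
        piv = next((i for i in range(r, len(rows)) if rows[i] >> col & 1), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i] >> col & 1:
                rows[i] ^= rows[r]
        r += 1
    return r

def gram(z, basis):                              # Gram matrix of f_z on the given basis
    return [sum(tr(mul(z, mul(bi, bj))) << j for j, bj in enumerate(basis))
            for bi in basis]

L_basis = [1, 2, 4, 8]                           # 1, t, t^2, t^3: a basis of L over K
assert all(rank2(gram(z, L_basis)) == 4 for z in range(1, 16))  # each f_z non-degenerate

w = 0b110                                        # w = t^2 + t generates M = F_4
V_basis = [1, w, 2]                              # V = <1, w, t>: codimension 1, contains M
ranks = {z: rank2(gram(z, V_basis)) for z in range(1, 16)}
assert set(ranks.values()) <= {2, 3}             # restriction drops the rank by at most 2
assert sum(1 for r in ranks.values() if r == 2) >= 3   # rank n-1 = 2 occurs, as claimed
```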
This discussion provides the justification for the following result.
\begin{theorem} \label{different_radicals}
Let $V$ be a vector space of dimension $n$ over $\mathbb{F}_q$, where $n\geq 3$. If $n=2m-1$ is odd, there exists an $m$-dimensional constant rank
$n-1$ subspace of $\Symm(V)$, in which all linearly independent forms have different radicals.
If $n$ is even and $n+1=3m$ is divisible by $3$, there also exists an $m$-dimensional constant rank
$n-1$ subspace of $\Symm(V)$, in which all linearly independent forms have different radicals.
\end{theorem}
The subspace described in the first part of Theorem \ref{different_radicals} has maximal dimension subject to the radicals being different
and $q$ sufficiently large, as we now show.
\begin{theorem} \label{maximum_dimension_different_radicals}
Let $V$ be a vector space of dimension $n$ over $\mathbb{F}_q$, where $n=2m-1$ is odd and at least $3$.
Let $\M$ be a $d$-dimensional constant rank
$n-1$ subspace of $\Symm(V)$, in which all linearly independent forms have different radicals. Then if
$q\geq n$, we have $d\leq m$.
\end{theorem}
\begin{proof}
Suppose, if possible, that $d\geq m+1$; by passing to a subspace, we may assume that $d=m+1$. Then there are $(q^{m+1}-1)/(q-1)$ different one-dimensional subspaces of $V$ that each occur as
the radical of a non-zero element of $\M$. Since we are assuming that $\M$ is a constant rank space and $q\geq n$, Theorem \ref{radical_totally_isotropic} implies that we obtain thereby $q^{m+1}-1$ non-zero common isotropic points for $\M$.
Let $A$ be the number of non-zero elements in $\M$ of positive type and let
$B$ be the number of non-zero elements in $\M$ of negative type. Then the number of
common isotropic points of $\M$ is
\[
q^{2m-1-(m+1)}+ (A-B)q^{2m-1-(m+1)-(m-1)}=q^{m-2}+(A-B)q^{-1},
\]
by Theorem 5 of \cite{DGS}.
Comparing the estimate for the number of common isotropic points with the formula above, we deduce that
\[
q^{m+1}\leq q^{m-2}+(A-B)q^{-1}\leq q^{m-2}+q^m.
\]
This inequality is clearly impossible, and hence $d\leq m$.
\end{proof}
\section{An unusual constant rank $4$ subspace}
\noindent Let $V$ denote the field $\mathbb{F}_{3^5}$ considered as a 5-dimensional vector space over $\mathbb{F}_{3}$. Let $\phi: V\times V\times V\to \mathbb{F}_{3}$ be the symmetric trilinear form given by
\[
\phi(x,y,z)=\Tr (x^9yz+xy^9z+xyz^9),
\]
where $\Tr$ is the trace form from $\mathbb{F}_{3^5}$ to $\mathbb{F}_{3}$.
For $x\in \mathbb{F}_{3^5}$, let $\phi_x:V\times V\to \mathbb{F}_{3}$ be defined by
\[
\phi_x(y,z)=\phi(x,y,z).
\]
It is clear that $\phi_x$ is an element of $\Symm(V)$ and the set of all such $\phi_x$, as $x$ ranges over $\mathbb{F}_{3^5}$, is a vector space, $\M$, say, over $\mathbb{F}_{3}$.
It turns out that $\M$ is a 5-dimensional constant rank 4 subspace. We note that Theorem \ref{even_constant_rank_dimension} shows
that such a subspace does not exist over $\mathbb{F}_{q}$ when $q$ is odd and at least 5, and this fact shows that, in at least one non-trivial case,
the theorem is optimal.
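The constant rank property just stated can be verified by brute force. The sketch below is our own computation (the field model $\mathbb{F}_{243}=\mathbb{F}_3[t]/(t^5+2t+1)$ and the basis are our choices; the code certifies irreducibility of the modulus itself, and the structural claims are Ward's). It checks that $\phi_x$ has rank exactly $4$ for each of the $242$ non-zero $x$.

```python
from itertools import product

P, Q = 3, (1, 2, 0, 0, 0)        # F_243 = F_3[t]/(t^5 + 2t + 1); Q holds q0..q4

def red(r):                      # reduce a coefficient list (low degree first)
    r = list(r) + [0] * (9 - len(r))
    for d in range(8, 4, -1):
        c, r[d] = r[d], 0
        if c:
            for k in range(5):
                r[d - 5 + k] = (r[d - 5 + k] - c * Q[k]) % P
    return tuple(r[:5])

def mul(a, b):
    r = [0] * 9
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                r[i + j] = (r[i + j] + ai * bj) % P
    return red(r)

def add(a, b):
    return tuple((x + y) % P for x, y in zip(a, b))

def powf(a, e):                  # square-and-multiply in the quotient ring
    r, b = (1, 0, 0, 0, 0), a
    while e:
        if e & 1:
            r = mul(r, b)
        b, e = mul(b, b), e >> 1
    return r

T = (0, 1, 0, 0, 0)
# irreducibility certificate: no roots in F_3, and t^(3^5) = t in the quotient
assert all((a**5 - a + 1) % P for a in range(P)) and powf(T, P**5) == T

def tr(a):                       # trace F_243 -> F_3: a + a^3 + a^9 + a^27 + a^81
    s, x = (0,) * 5, a
    for _ in range(5):
        s, x = add(s, x), powf(x, P)
    return s[0]

def rank3(G):                    # rank of a square matrix over F_3
    G, r, n = [row[:] for row in G], 0, len(G)
    for c in range(n):
        piv = next((i for i in range(r, n) if G[i][c]), None)
        if piv is None:
            continue
        G[r], G[piv] = G[piv], G[r]
        G[r] = [(v * G[r][c]) % P for v in G[r]]   # 1 and 2 are self-inverse mod 3
        for i in range(n):
            if i != r and G[i][c]:
                G[i] = [(u - G[i][c] * v) % P for u, v in zip(G[i], G[r])]
        r += 1
    return r

basis = [powf(T, i) for i in range(5)]
basis9 = [powf(b, 9) for b in basis]
ranks_ward = set()
for coeffs in product(range(P), repeat=5):
    x = tuple(coeffs)
    if not any(x):
        continue
    x9 = powf(x, 9)
    G = [[tr(add(add(mul(mul(x9, bi), bj), mul(mul(x, b9i), bj)),
                 mul(mul(x, bi), b9j)))
          for bj, b9j in zip(basis, basis9)]
         for bi, b9i in zip(basis, basis9)]
    ranks_ward.add(rank3(G))
assert ranks_ward == {4}         # phi_x has rank exactly 4 for every non-zero x
```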
An explanation of why $\M$ has this constant rank property is by no means straightforward and it is embedded in a more general exploration
of the trilinear form $\phi$ carried out by Ward in a paper published in 1975, \cite{Ward}. He showed that $\phi$ is invariant under a linear action of the Mathieu group $M_{11}$, of order 7920. This means that $\M$ is also invariant under $M_{11}$, and this accounts for some of the special aspects of this constant rank space.
We can give a short description of the simpler symmetries of $\phi$ and $\M$, but refer to Ward's paper for a complete exposition of this fascinating subject. Let $\epsilon$ denote an element of order 11 in $\mathbb{F}_{3^5}$ and let $\sigma$ denote the Frobenius automorphism
$x\to x^3$ of $\mathbb{F}_{3^5}$. It is easy to see that multiplication by $\epsilon$ defines a linear transformation $T$, say, of order 11 of $V$, which acts irreducibly on the vector space. It is clear that
\[
\phi(Tx,Ty,Tz)=\phi(x,y,z),
\]
which shows that $T$ fixes $\phi$ and hence leaves $\M$ invariant.
It is also clear that $\sigma$ defines a linear transformation, $S$, say, of $V$ of order 5, which satisfies
$S^{-1}TS=T^4$. The invariance of the trace form with respect to Galois automorphisms implies that
\[
\phi(Sx,Sy,Sz)=\phi(x,y,z)
\]
and thus $\M$ is invariant under the action of $S$. Together, $S$ and $T$ generate a metacyclic group, $G$, say, of order 55, which fixes
$\phi$ and leaves $\M$ invariant.
$G$ acts irreducibly on $V$, since $T$ does. $V$ is the direct sum of $\langle 1\rangle$ and the trace zero hyperplane, both of which are
$S$-invariant. These are the only non-trivial subspaces of $V$ that are $S$-invariant. Now the set of all elements $x$ such that $\phi_x=0$ is a
$G$-invariant subspace of $V$. Since $G$ acts irreducibly on $V$, this subspace is trivial and hence $\phi_x=0$ only when $x=0$. This explains
why $\dim \M=5$.
We may easily verify that $S$ fixes $\phi_1$ and $\langle 1\rangle$ is contained in $\rad \phi_1$. Now $\rad \phi_1$ is invariant under $S$, and since the only proper subspaces of $V$ that are $S$-invariant are $\langle 1\rangle$ and the trace zero hyperplane, as we remarked above, it follows that $\rad \phi_1=\langle 1\rangle$ and thus $\phi_1$ has rank 4. The orbit of $\phi_1$ under $G$ consists of the eleven
elements $\phi_{\epsilon^j}$, $1\leq j\leq 11$. Since a symmetric bilinear form of rank 4 and positive type is not invariant
under an element of order 5, $\phi_1$ has negative type. Thus $\M$ has at least 22 elements of rank 4 and negative type: the eleven forms
$\phi_{\epsilon^j}$ and their negatives.
In its action on $V\setminus 0$, $G$ has six orbits, two orbits being of size 11, the others of size 55. The same holds for the action of
$G$ on $\M$. Furthermore, since $G$ has odd order, none of its elements can map a non-zero element of $V$ or $\M$ into its negative and
we deduce that each $G$-orbit is paired with the orbit consisting of the negative of each element in the orbit.
The 22 elements $\pm \epsilon^j$, $1\leq j\leq 11$ are non-zero common isotropic points of $\M$. Since the common isotropic points are invariant under $G$, if there are more than 22 of them, there are at least $22+110=132$ of them. This number, however, is greater than the number of non-zero
isotropic points of
$\phi_1$ alone, which is 62. Thus there are precisely 22 non-zero common isotropic points for $\M$.
From our formula for the number of common isotropic points of a subspace of symmetric bilinear forms (Theorem 5 of \cite{DGS}), we can then deduce
that $\M$ has two possible structures. Either it contains 22 elements of rank 4 and negative type, and 220 elements of rank 4 and positive type, or else it contains 132 elements of rank 4 and negative type, and 110 elements of rank 2 and positive type. It seems that to decide which of these two possibilities actually occurs requires more detail than we have provided above and we must defer to Ward's paper for resolution of this issue.
We would like to point out further noteworthy features of the subspace $\M$. Let $v$ be a non-zero vector different from the 22 common isotropic points of $\M$. We map $\M$ linearly into $\mathbb{F}_{3}$ by associating $f\in\M$ with $f(v,v)$. There is thus a 4-dimensional
subspace $\N$, say, of $\M$, consisting of those elements $f$ that satisfy $f(v,v)=0$.
Let $A$ denote the number of non-zero elements of positive type in $\N$, and $B$ denote the number of non-zero elements of negative type in $\N$.
Now $\N$ has at least 24 non-zero common isotropic points, namely the 22 of $\M$, plus $\pm v$. Theorem 5 of \cite{DGS} implies that
\[
25\leq 3+(A-B)3^{-1},
\]
and also that 3 divides $A-B$.
We see that $A-B\geq 66$ and since $A+B=80$, we obtain that $A\geq 73$. Now $A$ is even, since if $f$ is an element of positive type in
a subspace, so also is $-f$. Thus $A\geq 74$. However, we cannot have $A=74$, $B=6$, since 3 does not divide $A-B$ in this case. Likewise,
$A=78$, $B=2$ is impossible. It follows that $A=76$, $B=4$, and $\N$ has exactly 26 non-zero common isotropic points, the 22 of $\M$, $\pm v$, and
$\pm w$, where $w$ is some other vector.
On the other hand, if we take an arbitrary 4-dimensional subspace of $\M$, it is easy to see that it either contains exactly 23 common isotropic
points (including the zero vector), and hence 70 elements of positive type, and 10 of negative type, or it contains exactly 27 common isotropic points, and hence is of the type described above. By counting, we find that there are 66 subspaces of the former type, and 55 of the latter type (note that there are
$121=66+55$ 4-dimensional subspaces in $\M$).
It seems probable that the two types of 4-dimensional subspaces of $\M$ are single orbits under the action of $M_{11}$.
Finally, we mention that the author and his collaborators gave an example of a 5-dimensional constant rank 4 subspace of $5\times 5$ symmetric matrices with entries in $\mathbb{F}_{3}$, \cite{DGS}, Example 1. This subspace contains 220 elements of rank 4 and positive type, and 22 of rank 4 and negative type. At the time of writing of \cite{DGS}, we were unaware of Ward's paper and its relevance to the constant rank theme. The example was found by a computer search, and it is difficult to verify by hand that the five given matrices do indeed span a 5-dimensional constant rank space.
\section{Constant rank $2$ subspaces}
\noindent In this section, we initially let $K$ be an arbitrary field of characteristic different from 2. Let $V$ be a vector space of finite dimension $n$
over $K$. We would like to investigate constant rank 2 spaces of $\Symm(V)$. We begin by making a definition.
\begin{definition} \label{hyperbolic}
Suppose that $n\ge 2$.
Let $f$ be an element of $\Symm(V)$ of rank 2. We say that $f$ is of \emph{hyperbolic type} if there is a vector $w$ not in the
radical of $f$ that satisfies $f(w,w)=0$.
\end{definition}
We note that when $K$ is a finite field of odd characteristic, a symmetric bilinear form of rank 2 and hyperbolic type is the same as one
of positive type.
\begin{lemma} \label{same_radical}
Let $K$ be a field of characteristic different from $2$ and let $V$ be a finite dimensional vector space over $K$, with $\dim V\geq 2$.
Let $f$ and $g$ be elements of $\Symm(V)$ of rank $2$, and suppose that $g$ is not of hyperbolic type. Suppose also that all non-trivial
$K$-linear combinations of $f$ and $g$ have rank $2$. Then $f$ and $g$ have the same radical, of dimension $n-2$.
\end{lemma}
\begin{proof}
We note that as $K$ has characteristic different from $2$, we have $|K|\geq 3$, and hence Theorem \ref{radical_totally_isotropic} applies. Thus, let $R$ be the radical of $f$.
Theorem \ref{radical_totally_isotropic} implies that $g$ is totally isotropic on $R$. It follows that if $R$ is not the radical of $g$, $g$ is of hyperbolic type, a contradiction. Thus $f$ and $g$ have the same radical.
\end{proof}
\begin{corollary} \label{dimension_at_most_2}
Let $K$ be a field of characteristic different from $2$ and let $V$ be a finite dimensional vector space over $K$, with $\dim V\geq 2$.
Let $\M$ be a constant rank $2$ subspace of $\Symm(V)$ that contains an element that is not of hyperbolic type.
Then $\dim \M\leq 2$.
\end{corollary}
\begin{proof}
Let $g$ be any element of $\M$ that is not of hyperbolic type and let $R$ be the radical of $g$. Let $f$ be any other non-zero element of
$\M$. By Lemma \ref{same_radical}, $R$ is also the radical of $f$. Thus $R$ is the common radical of all non-zero elements
of $\M$.
Let $\overline{V}=V/R$. It is straightforward to see that $\M$ is isomorphic to a constant rank 2 subspace of $\Symm(\overline{V})$, since
$R$ is the radical of each non-zero element of $\M$. Since $\dim \overline{V}=2$, it follows that $\dim \M\leq 2$.
\end{proof}
\begin{theorem} \label{dimension_at_most_n-1}
Let $V$ be a vector space of dimension $n\geq 2$ over $\mathbb{F}_{q}$, where $q$ is a power of an odd prime,
and
let $\M$ be a constant rank $2$ subspace of $\Symm(V)$. Suppose that each non-zero element
of $\M$ is of hyperbolic type. Then $\dim \M\le n-1$.
\end{theorem}
\begin{proof}
Suppose, if possible, that $\dim \M\geq n$. Then, taking if necessary a subspace of dimension $n$ in $\M$, we may assume that $\dim \M=n$.
We know then that in this case the number of common isotropic points is
\[
q^{n-n}-1+(q^n-1)q^{n-n-1}=q^{n-1}-q^{-1}.
\]
However, since this number is not an integer, we have a contradiction. Thus, $\dim \M\leq n-1$, as required.
\end{proof}
Theorem \ref{dimension_at_most_n-1} is optimal, since $(n-1)$-dimensional constant rank 2
subspaces of $\Symm(V)$ can be constructed, in which all non-zero elements have hyperbolic type. See, for example, Theorem 7 of \cite{DGS}. We also note that Theorem \ref{dimension_at_most_n-1} does not hold
necessarily for all fields of characteristic different from $2$. For example, consider the real $2\times 2$ matrices
\[
\left(
\begin{array}
{rr}
a&b\\
b&-a
\end{array}
\right),
\]
where $a$ and $b$ run over the real numbers. It is easy to verify that if $a$ and $b$ are not both zero, the matrix defines a real symmetric bilinear
form of rank 2 and hyperbolic type. Thus we have a two-dimensional real subspace of symmetric bilinear forms of rank 2 and hyperbolic type, defined
on a two-dimensional real vector space.
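A quick numeric check of ours confirms both claims for this family: the determinant is $-(a^2+b^2)<0$, so the rank is $2$, and $w=(\sqrt{a^2+b^2}-b,\,a)$ is an isotropic vector for the quadratic form $Q(x,y)=ax^2+2bxy-ay^2$ (since $Q(w)=a(r^2-b^2-a^2)=0$ with $r=\sqrt{a^2+b^2}$).

```python
import math, random

random.seed(1)
for _ in range(200):
    a, b = random.uniform(-5, 5), random.uniform(-5, 5)
    assert a * a + b * b > 0                 # det = -(a^2 + b^2) < 0: rank 2
    r = math.hypot(a, b)
    # w = (r - b, a) satisfies Q(w) = a(r^2 - b^2 - a^2) = 0; fall back to
    # (1, 0) in the degenerate event a = 0, where Q(1, 0) = a = 0 as well
    x, y = (r - b, a) if a != 0.0 else (1.0, 0.0)
    assert (x, y) != (0.0, 0.0)              # a genuine non-zero isotropic vector
    assert abs(a * x * x + 2 * b * x * y - a * y * y) < 1e-9
```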
\section{Constant rank $4$ subspaces}
\noindent We would like to extend the analysis of the previous section to investigate general subspaces of symmetric bilinear forms of even constant rank. To this end, we return to the hypothesis that $V$ has dimension $n$ over $\mathbb{F}_{q}$.
We wish to draw attention to the following relevant facts. Suppose that $t$ is a positive integer with $2t\leq n$. Then the largest dimension
of a constant rank $2t$ subspace of $\Symm(V)$, in which each non-zero element has positive type, is $n-t$. Furthermore, constant rank $2t$ subspaces of $\Symm(V)$ of dimension $n-t$ and positive type exist for all $q$. See, for example, Theorems 6 and 7 of \cite{DGS}.
We suspect that if $n>3t$, the largest dimension of a constant rank $2t$ subspace of $\Symm(V)$ is $n-t$ and if this dimension is realized, then all non-zero elements in the subspace have positive type. We may need some requirement that $q\geq 2t+1$ for this to be true. Theorem \ref{dimension_at_most_n-1} has confirmed this bound when $t=1$, the simplest case.
At present, a proof of this proposed upper bound for the dimension is missing, even in what is probably the simplest remaining case, $t=2$. We will nonetheless present a proof of a weaker statement, namely, if $q$ is odd and at least 5, and $\M$ is a constant rank 4 subspace of $\Symm(V)$, then
$\dim \M\leq n-1$ if $n\geq 5$. We would expect the bound $\dim \M\leq n-2$ to hold, but we have been unable to achieve this. As will be seen, our proof involves some careful analysis of the way that the radicals of the non-zero elements of $\M$ intersect, and may not be the best
approach to the more general dimension problem.
Let $\M$ be a subspace of $\Symm(V)$ and let $U$ be a subspace of $V$. Extending the notation of Section 3, we set
\[
\M_U=\{ f\in \M: U\leq \rad f\}.
\]
It is easy to verify that $\M_U$ is a subspace of $\M$.
\begin{lemma} \label{radical_intersection}
Let $\M$ be a constant rank $4$ subspace of $\Symm(V)$ of dimension at least $2$ and suppose that $q\geq 5$. Suppose that $\M$ contains an element,
$f$, say, of negative type and let $R=\rad f$. Let $g$ be any other non-zero element of $\M$ and let $S=\rad g$. Then $R\cap S$ has codimension at most $1$ in both $R$ and $S$.
\end{lemma}
\begin{proof}
As $q\geq 5$, Theorem \ref{radical_totally_isotropic} implies that $S$ is totally isotropic for $f$. Let $\overline{f}$ be the element of
$\Symm(V/R)$ induced by $f$. Then $\overline{f}$ has rank 4 and negative type. We deduce that any non-zero subspace of $V/R$ that is totally isotropic for
$\overline{f}$ is one-dimensional. Now
\[
(R+S)/R\cong S/R\cap S
\]
is totally isotropic for $\overline{f}$, by our remarks above. Hence $\dim (S/R\cap S)\leq 1$. It follows that $R\cap S$ has codimension at most 1 in $S$, and hence also in $R$, since $\dim R=\dim S$.
\end{proof}
We continue with the hypothesis of Lemma \ref{radical_intersection}, but assume additionally that $n\geq 5$. We have $\dim R=n-4$. Let $U_1$,
\dots, $U_t$ be all the subspaces of $R$ of codimension 1 in $R$, where
\[
t=\frac{q^{n-4}-1}{q-1}.
\]
We set
\[
\M_i=\M_{U_i}
\]
for $1\leq i\leq t$. We note that $\M_R\leq \M_i$ for all $i$, since $U_i\leq R$.
\begin{lemma} \label{M_i_intersections}
Adhering to the notation introduced above, we have $\M_i\cap \M_j=\M_R$ if $i\neq j$. We also have $\dim \M_i\leq 4$ for all $i$ and
$\dim \M_R\leq 4$.
\end{lemma}
\begin{proof}
Suppose that $i\neq j$ and let $h$ be an element of $\M_i\cap \M_j$. Then $U_i\leq \rad h$ and $U_j\leq \rad h$, and hence $U_i+U_j\leq \rad h$.
Since $U_i\neq U_j$, and each subspace has codimension 1 in $R$, we obtain that $R\leq \rad h$. This implies that $h\in \M_R$. On the other hand,
since $\M_R\leq \M_i$ and $\M_R\leq \M_j$, it is clear that $\M_R\leq \M_i\cap \M_j$ and we therefore have the equality $\M_i\cap \M_j=\M_R$.
Suppose that $\theta$ is an element of $\M_i$. Then $U_i\leq \rad \,\theta$ and thus $\theta$ determines an element $\overline{\theta}$, say,
of $\Symm(V/U_i)$, where $\overline{\theta}$ has rank 4. Let $\overline{\M_i}$ denote the image of $\M_i$ under the mapping sending
$\theta$ to $\overline{\theta}$. Since $\theta\to \overline{\theta}$ is linear, and all elements of $\M_i$ vanish on $U_i$, $\overline{\M_i}$ is a subspace of $\Symm(V/U_i)$ and $\dim \overline{\M_i}=\dim \M_i$.
Now $\dim (V/U_i)=5$ and $\overline{\M_i}$ is a constant rank 4 subspace of $\Symm(V/U_i)$. Since we are assuming that $q\geq 5$, $\dim \overline{\M_i}\leq 4$ by Theorem \ref{even_constant_rank_dimension}. Thus $\dim \M_i\leq 4$ for all $i$. Finally, since $\M_R\leq \M_i$, $\dim \M_R\leq 4$. However, we can equally well regard $\M_R$ as a constant rank 4 subspace of $\Symm(V/R)$. Since $\dim (V/R)=4$, it follows that
$\dim \M_R\leq 4$ (and this inequality holds for all $q$).
\end{proof}
\begin{theorem} \label{constant_rank_4_dimension}
Suppose that $q$ is odd and at least $5$. Let $\M$ be a constant rank $4$ subspace of $\Symm(V)$. Then if $n=\dim V\geq 5$, we have $\dim \M\leq n-1$.
\end{theorem}
\begin{proof}
If $n=5$, the result follows from Theorem \ref{even_constant_rank_dimension}. Thus we may assume that $n\geq 6$.
Suppose now that all non-zero elements of $\M$ have positive type. Then $\dim \M\leq n-2$ by Theorem 6 of \cite{DGS}, and we are finished in this case.
We may therefore assume that $\M$ contains a non-zero element, $f$, say, of negative type, and we set $R=\rad f$. In accordance with our previous discussion, we define the subspaces $\M_1$, \dots, $\M_t$ of $\M$, where $t=(q^{n-4}-1)/(q-1)$. Lemma \ref{radical_intersection} implies that
each element of $\M$ is contained in some subspace $\M_i$.
Consider now the vector space $\M/\M_R$. If $i\neq j$, the subspaces $\M_i/\M_R$ and $\M_j/\M_R$ intersect trivially, by Lemma \ref{M_i_intersections}. Thus $\M/\M_R$ is the union of the subspaces $\M_i/\M_R$, but this may not be a subspace partition as some of the subspaces may be the zero subspace.
Suppose if possible that $\dim \M\geq n$; by passing to a subspace of dimension $n$ containing $f$, we may assume that $\dim \M=n$. We set $\dim \M_R=e$, where $1\leq e\leq 4$, since $f\in \M_R$. Then $\M/\M_R$ is the union of at most
$t$ subspaces each of dimension at most $4-e$, intersecting trivially. Since $\dim (\M/\M_R)=n-e$, we obtain
\[
q^{n-e}-1\leq t(q^{4-e}-1)=\frac{(q^{n-4}-1)(q^{4-e}-1)}{q-1}.
\]
This yields the inequality
\[
(q^{n-e}-1)(q-1)\leq (q^{n-4}-1)(q^{4-e}-1),
\]
which in turn yields
\[
q^{n-e+1}\leq 2q^{n-e}-q^{n-4}-q^{4-e}+q,
\]
which is clearly impossible. Thus we have $\dim \M\leq n-1$, as required.
\end{proof}
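The impossibility of the final display can be checked mechanically over the whole admissible range of parameters; the following sketch (ours) verifies the cross-multiplied inequality for small $q$, $n$, and all $1\leq e\leq 4$.

```python
# (q^(n-e) - 1)(q - 1) > (q^(n-4) - 1)(q^(4-e) - 1) for q >= 5, n >= 6,
# 1 <= e <= 4, so the displayed inequality admits no solutions
for q in (5, 7, 9, 11):
    for n in range(6, 16):
        for e in range(1, 5):
            assert (q**(n - e) - 1) * (q - 1) > (q**(n - 4) - 1) * (q**(4 - e) - 1)
```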
\section{Conclusions}
\noindent We have shown that in some cases, the dimension of a constant rank subspace of $\Symm(V)$ is less than $n$. Furthermore, we have shown, when the rank is odd, that the radicals of the non-zero elements of a constant rank space of sufficiently large dimension are all the same. On the other hand, we have provided examples that show that the radicals in certain constant rank subspaces of reasonably large dimension can all be different.
It is reasonable to assert that improved dimension bounds for constant rank subspaces of $\Symm(V)$ depend on answers to two questions.
The first question is to determine the relative numbers of elements of positive type and of negative type in a constant rank subspace.
Another desideratum is a better understanding of how radicals are distributed throughout $V$. At two extremes, all radicals are the same or all
radicals are different.
Finally, it would be interesting to find further counterexamples to our dimension bounds when $q$ is small, and to look for sporadic constant rank subspaces with as rich a structure as that displayed in Section 6.
In this section we prove the following hardness results. First, we
prove standard \NP-hardness for the more general problem of \textsc{Min-Unique-Games$(q)$-Full}\xspace. The
proof for this theorem is similar in spirit to the hardness reductions
of \textsc{MinDisAgree}[$k$] by Giotis and
Guruswami~\cite{giotis2006correlation}.
\begin{theorem}\label{thm:NP-hardness}
\textsc{Min-Lin-Eq$(q)$-Full}\xspace is \NP-hard for $q=2$, and
\textsc{Min-Unique-Games$(q)$-Full}\xspace is \NP-hard for any value of $q$.
\end{theorem}
While we do expect \textsc{Min-Lin-Eq$(q)$-Full}\xspace to be \NP-hard for values of $q \geq
3$, this does not seem to follow from these proof techniques, which
leverage the use of non-cyclic permutations. In Theorem
\ref{thm:random-hardness}, we give a hardness proof for \textsc{Min-Lin-Eq$(q)$-Full}\xspace
using randomized reductions. Notice that Theorems
\ref{thm:NP-hardness} and \ref{thm:random-hardness} are incomparable.
\begin{proof}[Proof of Theorem \ref{thm:NP-hardness}]
We start with the \NP-hardness of \textsc{Min-Lin-Eq$(q)$}\xspace for $q=2$. In that case,
we observe that the problem directly reduces from
\textsc{Correlation-Clustering} with the number of clusters fixed to be
$2$, which was studied by Giotis and
Guruswami~\cite{giotis2006correlation}. Precisely, Giotis and
Guruswami study the problem \textsc{MinDisAgree}[$k$], where one is
given a complete graph on $n$ nodes with each edge labelled by either
$+$ or $-$. The task is to partition the vertices into exactly $k$
clusters so as to minimize the number of $+$ edges between vertices in
different clusters, plus the number of $-$ edges between vertices in
the same cluster. For the special case $k=2$, this can be easily
encoded as a \textsc{Min-Lin-Eq$(q)$}\xspace instance in the following way. Following the
notation in the introduction, edges labelled $+$ get assigned an
integer $c_{uv}=0$, while edges labelled $-$ get assigned an integer
$c_{uv}=1$. Then, $+$ edges in different clusters and $-$ edges in the
same cluster directly translate into linear equations being violated,
which concludes the proof.
For the \textsc{Min-Unique-Games$(q)$}\xspace problem on complete graphs, we start with the same
reduction, and pad it using additional quite trivial groups of
nodes. More precisely, let $H$ be an instance of
\textsc{MinDisAgree}[$2$] on $n$ vertices, to which we add $q-2$
collections $G_3, \ldots, G_{q}$ of $M$ vertices each, where $M$ is to
be determined later. We denote by $\tau_q^i$ the cyclic shift mapping
$j$ to $j+i$ modulo $q$, and by $\sigma$ a fixed
permutation on $q-1$ letters without fixed points. The edges and their
constraints are as follows, where the vertices of $H$ and $G_i$ are
numbered arbitrarily:
\begin{itemize}
\item Between two vertices $u$ and $v$ of $H$, we choose $\pi_{u,v}$ to permute the first two coordinates if the edge is a $-$, or to be the identity on these two coordinates if the edge is a $+$. The rest of the permutation is the identity.
\item Between two vertices $u$ and $v$ of the same collection $G_i$, we choose $\pi_{u,v}$ so that $\pi_{u,v}(i)=i$ and so that it acts as $\sigma$ on the remaining $q-1$ values (with the $i$th value skipped).
\item Between two vertices $u$ and $v$ of different collections $G_i$ and $G_j$, we choose $\pi_{u,v}$ to be $\tau_q^{j-i}$.
\item Between two vertices $u$ and $v$, where $u$ is in $H$ and $v$ is in $G_i$, we choose $\pi_{u,v}$ to be $\tau_q^{i}$ for half of the $v$ in $G_i$, and $\tau_q^{i-1}$ for the other half.
\end{itemize}
We claim that the optimal solution\footnote{There are actually two
different solutions here, depending on which cluster gets labelled
$0$ and $1$. They have the same cost and by a slight abuse, we
consider them to be the same.} to this \textsc{Min-Unique-Games$(q)$}\xspace instance is assigning $x_u=i$ for each vertex in $G_i$, and assigning $x_u=0$ for the vertices in one cluster of the \textsc{MinDisAgree}[$2$] instance in $H$, and $x_u=1$ for the other cluster. Denoting by $c$ the cost of the \textsc{MinDisAgree}[$2$] instance, the cost of this solution is exactly $OPT:=c+nM(q-2)/2$, with $c$ bounded by $\binom{n}{2}$.
We now prove that any minimal solution has this structure. Let $\ell$ be the labeling of a minimal solution. We first claim that for any collection $G_i$, all the vertices in $G_i$ have the same label. For each $j \in \{2, \ldots, q-1\}$, let $S_j$ denote the biggest set of vertices of $G_j$ having the same label. By the pigeonhole principle, any $S_j$ has size at least $M/q$, and all the vertices in $S_j$ must be labeled by $j$: otherwise, since $\sigma$ has no fixed points, every constraint between two vertices of $S_j$ is violated, yielding at least $\binom{M/q}{2}$ violated constraints, which is bigger than $OPT$ for $M=\Omega(q^2n^2)$. Similarly, the second biggest label class in a $G_i$ has size at most $M/100q$. If $u$ is a vertex in $G_i$ that is not labelled $i$, all the constraints between $u$ and all the $S_j$ are violated, and changing the label of $u$ so that it matches that of $S_i$ fixes at least these $(q-3) M/q$ constraints, breaks at most $(q-3) M/100q$ constraints between the $G_i$, and breaks at most $n$ constraints with vertices in $H$. So the number of violated constraints is reduced whenever $99(q-3)M/(100q) > n$, contradicting the minimality of $\ell$.
We now claim that the vertices in $H$ are labeled $0$ or $1$. Let $u$ be a vertex in $H$ that is not labeled $0$ or $1$. Then all of its constraints with all the $G_i$ are violated. Replacing its label by a $0$ or $1$ label might break up to $n-1$ constraints (within $H$) but fixes exactly half of the constraints with all the $G_i$, which gives a better solution for $M(q-2)/2>n-1$.
Since all the vertices in $H$ are labelled $0$ or $1$, the optimal solution corresponds directly to an optimal solution of the \textsc{MinDisAgree}[$2$] instance on $H$, which concludes the proof.
\end{proof}
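To sanity-check the padded construction, the following small Python sketch (ours; the toy \textsc{MinDisAgree}[$2$] instance, the vertex encoding, and all identifiers are our own assumptions) builds the instance for a tiny $q$ and $M$, with the collections indexed as $G_2, \ldots, G_{q-1}$ so that their labels lie in $[q]$, and confirms that the intended labeling violates exactly $c + nM(q-2)/2$ constraints, where $c$ is the \textsc{MinDisAgree}[$2$] cost of the chosen clustering:

```python
from itertools import combinations

def check(q=5, M=8):
    # H: a 4-vertex MinDisAgree[2] instance with clusters {0,1} and {2,3};
    # '-' on all cross pairs plus one noisy '-' edge (0,1) inside a cluster.
    H = [0, 1, 2, 3]
    cluster = {0: 0, 1: 0, 2: 1, 3: 1}
    minus = {(0, 2), (0, 3), (1, 2), (1, 3), (0, 1)}
    # c = MinDisAgree cost of `cluster`: '+' across clusters, '-' inside one.
    c = sum(((u, v) in minus) != (cluster[u] != cluster[v])
            for u, v in combinations(H, 2))

    def tau(i):                                    # cyclic shift by i mod q
        return lambda x: (x + i) % q

    # Collections G_2, ..., G_{q-1}; vertex (i, k) is the k-th vertex of G_i.
    groups = {i: [(i, k) for k in range(M)] for i in range(2, q)}
    label = dict(cluster)                          # the intended labeling
    for i, vs in groups.items():
        for v in vs:
            label[v] = i

    def pi(u, v):
        # Permutation for the ordered pair (u, v); u precedes v in vertex order.
        if isinstance(u, int) and isinstance(v, int):       # H-H edge
            if (u, v) in minus:
                return lambda x: {0: 1, 1: 0}.get(x, x)     # '-' swaps 0 and 1
            return lambda x: x                              # '+' is the identity
        if isinstance(u, int):                              # H-to-G_i edge
            i, k = v
            return tau(i) if k < M // 2 else tau(i - 1)
        i, j = u[0], v[0]
        if i != j:                                          # G_i-to-G_j edge
            return tau(j - i)
        others = [x for x in range(q) if x != i]            # inside G_i: fix i,
        nxt = {a: others[(t + 1) % (q - 1)] for t, a in enumerate(others)}
        nxt[i] = i                                          # shift the rest
        return lambda x: nxt[x]

    V = H + [v for vs in groups.values() for v in vs]
    bad = sum(pi(u, v)(label[u]) != label[v] for u, v in combinations(V, 2))
    n = len(H)
    return bad, c + n * M * (q - 2) // 2
```

The within-group permutation above is one concrete choice of a fixed-point-free $\sigma$ (a cyclic shift of the $q-1$ remaining values); any other choice would do.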
\section{PTAS}\label{sec:ptas}
The Voting Algorithm from Section \ref{sec:voting} provides a good
approximation to \textsc{Min-Unique-Games$(q)$-Full}\xspace when the value of the optimal solution is small. In the
opposite regime, when this value is large, we can get a
good approximation by approximately solving the complementary problem \textsc{Max-Unique-Games$(q)$-Full}\xspace, i.e., maximizing the number of satisfied constraints. This complementary problem is the maximization version of a constraint satisfaction problem and, when the alphabet size is constant, such problems admit very efficient sampling-based approximation algorithms on dense graphs, and hence also on complete graphs.
In order to obtain a (randomized) polynomial-time approximation scheme (PTAS) for \textsc{Min-Unique-Games$(q)$-Full}\xspace we rely on the following theorem, where we emphasize that $q$ is considered a constant (i.e., the $O(\cdot)$ notation hides an unspecified dependency on $q$).
\begin{theorem}[{\cite[Theorem 7]{karpinski2009linear}}]\label{T:Maxversion}
For any Max-$2$-CSP and any $\varepsilon > 0$ there is a randomized
algorithm which returns an assignment of cost at least $OPT -
\varepsilon n^2$ in runtime $O(n^2) + 2^{O(1/\varepsilon^2)}$.
\end{theorem}
A Max-$2$-CSP is a CSP in which each constraint involves exactly two variables.
When the alphabet size is not constant, it is known that a general-purpose PTAS for Max-CSPs on complete graphs cannot exist assuming Gap-ETH; see Romero, Wrochna and \v{Z}ivn\'y~\cite[Corollary~E.5]{rwz}. Whether a PTAS exists for \textsc{Max-Unique-Games$(q)$-Full}\xspace when the alphabet size is not constant seems to be open.
Our PTAS is then as follows.
\begin{theorem}\label{T:ptas}
When the alphabet size $q$ is constant, for any $\varepsilon>0$, we
can compute a $(1+\varepsilon)$-approximation for the problem \textsc{Min-Unique-Games$(q)$-Full}\xspace in
time $O(n^3)+ 2^{O(1/\varepsilon^4)}$.
\end{theorem}
Note that the runtime in Theorem \ref{T:ptas} is $O(n^2) +
2^{O(1/\varepsilon^4)}$ if we use the Randomized Voting Algorithm.
This is similar to a result of Karpinski and Schudy~\cite{karpinski2009linear}, with a simpler algorithm.
\begin{proof}[Proof of Theorem~\ref{T:ptas}]
Let $OPT$ denote the optimal value of the problem, which we can guess to arbitrary precision using binary search. If $2\nu (2+\nu)(OPT/m)<\varepsilon$, where $\nu=2/(1-2OPT/m)$, then by Lemma~\ref{L:mainlemma} we get the needed approximation. Otherwise, since $\nu\geq 2$, we have $OPT \geq \varepsilon m/16$, and thus $m\leq 16OPT/\varepsilon$.
In that case, we compute an additive $\varepsilon' n^2$-approximation to the complementary problem using Theorem~\ref{T:Maxversion}, with $\varepsilon'=\varepsilon^2/32$. This provides us with a solution in which the number of satisfied edges is at least $(m-OPT)-\varepsilon'n^2$, and thus the number of unsatisfied edges is at most $OPT+\varepsilon'n^2\leq OPT+32\varepsilon' OPT/\varepsilon \leq OPT (1+\varepsilon)$.
\end{proof}
This argument can be generalized to the following observation:
\begin{observation}
Let \textsc{Min-CSP} denote a constraint satisfaction problem where the objective is to minimize the number of violated constraints, while \textsc{Max-Comp-CSP} denotes the complementary problem of maximizing the number of satisfied constraints. Then if there exists a PTAS for \textsc{Max-Comp-CSP} and a super robust algorithm for \textsc{Min-CSP}, there exists a PTAS for \textsc{Min-CSP}.
\end{observation}
As a corollary of this observation, since the special case of
\textsc{Correlation Clustering} known as {\sc MinDisagree} on complete graphs
is APX-hard while its complementary max version admits a PTAS~\cite{bansal2004correlation}, {\sc MinDisagree} is very unlikely to admit a super robust algorithm.
\section{Analysis of Pivot Algorithm}\label{app:first}
In a given instance of \textsc{Min-Lin-Eq$(q)$-Full}\xspace on a graph $G$,
each cycle in $G$ is either {\em consistent} or {\em inconsistent}.
Let ${\mathcal{I}}$ denote the set of inconsistent cycles and let ${\mathcal{T}} \subseteq
{\mathcal{I}}$ denote the set of inconsistent triangles in $G$. Observe that a
feasible solution to Problem \ref{linEqModq} is a hitting set for the
set of inconsistent cycles. Consider the following linear programming
relaxation of Problem \ref{linEqModq} and its dual.
\begin{align*}
\min \sum_{e \in E} & x_e\\
\sum_{e \in C} x_e & \geq 1 \text{ for all cycles } C \in {\mathcal{I}},\\
x_e & \geq 0. \tag{$P_{UG}$}\label{ug}
\end{align*}
\begin{align*}
\max \sum_{C \in {\mathcal{I}}} & y_C\\
\sum_{C \in {\mathcal{I}}: e \in C} y_C & \leq 1 \text{ for all } e \in E,\\
y_C & \geq 0. \tag{$D_{UG}$}\label{ug_dual}
\end{align*}
\begin{claim}
Any fractional packing of inconsistent triangles in $G$ is a lower
bound on the optimal value of \ref{ug}.
\end{claim}
\begin{cproof}
The optimal value of \ref{ug} is lower bounded by a fractional packing
of inconsistent cycles (i.e., a feasible solution for \ref{ug_dual}).
A fractional packing of inconsistent triangles is a lower bound on a
fractional packing of inconsistent cycles.
\end{cproof}
\SimpleDualProof*
\begin{proof}
The Pivot Algorithm assigns a label $\ell(v) \in [q]$ to each $v \in
V$. Each edge $uv \in E$ whose constraint is unsatisfied by the
labels $\ell(u)$ and $\ell(v)$ is added to the ``deletion set'' $F
\subset E$. Let $G'$ be the graph consisting of the remaining edges
(i.e., $G' = (V, E\setminus{F})$). The following claim follows directly from the definition of $G'$.
\begin{claim}
$G'$ contains no inconsistent cycles.
\end{claim}
Let $t = \{i,j,k\}$ be an inconsistent triangle in $G$ and let $A_t$ denote
the event that the randomly chosen pivot $p$ satisfies $p \in \{i,j,k\}$. Let $p_t$ be the probability of
event $A_t$. Then,
\begin{eqnarray}
\mathbbm{E}[\text{Number of deleted edges}] & = & \sum_{t \in {\mathcal{T}}} p_t.\label{output-alg}
\end{eqnarray}
\begin{claim}\label{dual-LB}
Setting $y'_C = y'_t = \frac{p_t}{3}$ if $C = t \in {\mathcal{T}}$ and $y'_C = 0$
otherwise is dual feasible.
\end{claim}
\begin{cproof}
Let $B_e$ be the event that edge $e$ was deleted by the algorithm.
Let $B_e \wedge A_t$ be the event that edge $e$ was deleted due to
$A_t$. Given event $A_t$, each edge in $t$ is equally likely to be
deleted. So we have
\begin{align*}
Pr(B_{e} \wedge A_{t}) &= Pr(B_{e} | A_{t})Pr(A_{t}) \\
&=\frac{1}{3} \times p_{t} \\
&= \frac{p_{t}}{3}.
\end{align*}
Note that for any $t \neq t' \in {\mathcal{T}}$ such that $e \in t$ and $e \in
t'$, $B_{e} \wedge A_{t}$ and $B_{e} \wedge A_{t'}$ are disjoint
events. Hence, $\underset{t: e \in t }{\sum} \Pr(B_{e} \wedge A_{t})
\le 1$. This implies that, for all $e \in E$:
\begin{eqnarray*}
\sum_{C:e \in C} y'_C = \sum_{t:e\in t} \frac{p_t}{3} \leq 1.
\end{eqnarray*}
We can therefore conclude that $\{y'_C\}$ is a dual-feasible solution.
\end{cproof}
From Claim \ref{dual-LB} and \eqref{output-alg}, we can conclude that the
Pivot Algorithm has an approximation ratio of $3$.
\end{proof}
To derandomize the pivot algorithm, observe that we can run the
algorithm $n$ times, each time choosing a different vertex as pivot.
Consider some fixed optimal solution $\ensuremath{\operatorname{OPT}}\xspace$ that violates exactly
$\varepsilon {n \choose 2} = \varepsilon m = \ensuremath{\operatorname{OPT_{val}}}\xspace$ constraints. By averaging, some
vertex is incident to at most $2\varepsilon m/n = \varepsilon(n-1)$ violated
constraints; for this choice of pivot, the number of labels the
algorithm assigns differently from $\ensuremath{\operatorname{OPT}}\xspace$ is at most
$\varepsilon(n-1)$. Since each of these vertices is incident to at most
$(n-1)$ edges, the total number of incorrect edges is at most
\[\varepsilon m + \varepsilon(n-1)^2 \leq 3 \varepsilon m = 3\cdot \ensuremath{\operatorname{OPT_{val}}}\xspace.\]
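The derandomized procedure can be sketched as follows (our own Python sketch; the encoding of an instance as a dictionary `c` of ordered-pair constraints is an assumption, not notation from this paper):

```python
from itertools import combinations

def pivot_algorithm(n, q, c):
    """Derandomized pivot for Min-Lin-Eq(q)-Full: try all pivots, keep the best.

    c[(u, v)] encodes the constraint x_u - x_v = c[(u, v)] (mod q),
    with c[(v, u)] == (q - c[(u, v)]) % q.
    """
    def cost(x):
        return sum((x[u] - x[v]) % q != c[(u, v)]
                   for u, v in combinations(range(n), 2))

    best, best_cost = None, None
    for p in range(n):
        # x_p = 0 and x_p - x_v = c[(p, v)] force x_v = -c[(p, v)] mod q.
        x = [(-c[(p, v)]) % q if v != p else 0 for v in range(n)]
        if best is None or cost(x) < best_cost:
            best, best_cost = x, cost(x)
    return best, best_cost
```

This naive implementation evaluates the cost of each of the $n$ pivot labelings in $O(n^2)$ time, for a total of $O(n^3)$ arithmetic operations.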
\subsection{Tight example}
We can show that the analysis yielding a 3-approximation ratio is
tight. Imagine that we have a complete graph such that all edges
except those in a Hamilton cycle are
associated with the constraint
$x_u - x_v \equiv 0 \bmod q$. The edges in the Hamilton cycle are
associated with the constraint $x_u - x_v \equiv 1 \bmod q$. Notice
that, by symmetry, all
pivots lead to the same number of constraints being (un)satisfied. An
optimal solution can satisfy ${n \choose 2}-n$ constraints and leaves
$n$ constraints unsatisfied. Let $p$ be the pivot and let $p-1$ and
$p+1$ be its two neighbors on the Hamilton cycle. Then the following
edges are unsatisfied:
\begin{enumerate}
\item The $n-4$ edges in the Hamilton cycle with neither endpoint in
$\{p-1, p, p+1\}$.
\item The $n-4$ edges not in the Hamilton cycle with endpoint $p-1$.
\item The $n-4$ edges not in the Hamilton cycle with endpoint $p+1$.
\end{enumerate}
So, asymptotically, we have $3n$ unsatisfied edges, while an optimal
solution leaves only $n$ edges unsatisfied.
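The counting above is easy to reproduce computationally. The sketch below (ours) builds the instance for $q=2$, for which the per-pivot count is exactly $3(n-4)$, matching the three groups of $n-4$ edges listed above, and compares it against the all-zeros labeling, which violates only the $n$ cycle edges:

```python
from itertools import combinations

def tight_example(n, q=2):
    """Hamilton-cycle instance: constraint 1 on cycle edges, 0 elsewhere."""
    c = {(u, v): 0 for u in range(n) for v in range(n) if u != v}
    for i in range(n):
        j = (i + 1) % n
        c[(i, j)] = 1
        c[(j, i)] = (q - 1) % q

    def cost(x):
        return sum((x[u] - x[v]) % q != c[(u, v)]
                   for u, v in combinations(range(n), 2))

    pivot_costs = []
    for p in range(n):
        # Pivot labeling: x_p = 0, x_v = -c[(p, v)] mod q.
        pivot_costs.append(cost([(-c[(p, v)]) % q if v != p else 0
                                 for v in range(n)]))
    return pivot_costs, cost([0] * n)
```

For $n=10$ every pivot leaves $3(n-4)=18$ constraints unsatisfied, while the all-zeros labeling leaves only the $n=10$ cycle edges unsatisfied, illustrating the asymptotic ratio of $3$.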
\section{Analysis of Voting Algorithm in Everywhere-Dense Case}\label{app:B}
In this section, we prove the following theorem.
\mainDenseCase*
For convenience, we restate the algorithm. For simplicity, it is
stated for \textsc{Min-Lin-Eq$(q)$}\xspace. It can be extended to \textsc{Min-Unique-Games$(q)$}\xspace on everywhere-dense
graphs by trying all labels in the first step (see Section
\ref{sec:ext-ugc}).
\vspace{5mm}
\noindent
\fbox{\parbox{15cm}{
{\sc Voting Algorithm for Dense Case}
\vspace{1mm}
{\it Input:} An instance of \textsc{Min-Lin-Eq$(q)$}\xspace on a $(1-\delta)$-everywhere
dense graph $G=(V,E)$.
\begin{itemize}
\item[1.] Pick a pivot $p \in V$. Label $p$ with $0$, and label each
vertex $v \in V$ adjacent to $p$ with temporary label $\ensuremath{\operatorname{TEMP}}\xspace(v)$, which is chosen according to the constraint on edge $(p, v)$. (Specifically, $\ensuremath{\operatorname{TEMP}}\xspace(v)
= c_{vp}$.)
\item[2.] For each vertex $v$, each neighboring vertex $u$ with
a $\ensuremath{\operatorname{TEMP}}\xspace$ label votes for a label for $v$, where $u$'s vote
is based on its temporary label $\ensuremath{\operatorname{TEMP}}\xspace(u)$. (Specifically, the vote
of $u$ for $v$ is $(c_{vu} + \ensuremath{\operatorname{TEMP}}\xspace(u)) \bmod q$.)
\item[3.] Then each $v$ is assigned a final label $\ensuremath{\operatorname{FINAL}}\xspace(v)$ according
to the outcome of the votes it received (with a plurality rule). Ties are
resolved arbitrarily.
\item[4.] Output the best \ensuremath{\operatorname{FINAL}}\xspace solution over all choices of $p$ in Step 1.
\end{itemize}}}
\vspace{3mm}
Note that $p$ also votes in Step 2. As mentioned earlier, the analysis turns out to be cleaner if $p$ votes (i.e., does not abstain).
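As a concrete companion to the boxed description, the four steps fit in a few lines of Python (our own sketch, stated for complete graphs; the dictionary encoding of the constraints $c_{uv}$ is an assumption):

```python
from collections import Counter
from itertools import combinations

def voting_algorithm(n, q, c):
    """Voting Algorithm for Min-Lin-Eq(q) on a complete graph.

    c[(u, v)] encodes the constraint x_u - x_v = c[(u, v)] (mod q).
    """
    def cost(x):
        return sum((x[u] - x[v]) % q != c[(u, v)]
                   for u, v in combinations(range(n), 2))

    best, best_cost = None, None
    for p in range(n):
        # Step 1: TEMP(p) = 0 and TEMP(v) = c_{vp} for v != p.
        temp = [c[(v, p)] if v != p else 0 for v in range(n)]
        final = []
        for v in range(n):
            # Step 2: u's vote for v is (c_{vu} + TEMP(u)) mod q; p votes too.
            votes = Counter((c[(v, u)] + temp[u]) % q
                            for u in range(n) if u != v)
            # Step 3: plurality rule, ties resolved arbitrarily.
            final.append(votes.most_common(1)[0][0])
        if best is None or cost(final) < best_cost:   # Step 4: best pivot wins
            best, best_cost = final, cost(final)
    return best, best_cost
```

On a fully satisfiable instance every pivot recovers a shift of the planted labeling (cost $0$); with a few corrupted constraints, any pivot away from the corruptions recovers the planted labeling up to a global shift.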
Let $\ensuremath{\operatorname{OPT_{val}}}\xspace$ denote the value of an optimal solution
(i.e., the minimum number of unsatisfied constraints) and let $\ensuremath{\operatorname{OPT_{val}}}\xspace =
\varepsilon m$ (i.e., $\varepsilon = \ensuremath{\operatorname{OPT_{val}}}\xspace/m$).
\begin{lemma}\label{L:mainlemmadense}
The Voting Algorithm on an everywhere $(1-\delta)$-dense graph gives a
$(1+ 2 \nu (2+\nu)\varepsilon/(1-\delta))$-approximation of the
optimal solution, where $\nu
= \frac{2}{1-2\varepsilon-2\delta}$.
\end{lemma}
Fix an optimal solution $\ensuremath{\operatorname{OPT}}\xspace$, and denote by $\ensuremath{\operatorname{OPT}}\xspace(v)$ the label it gives to
a vertex $v$. In this optimal solution, there are satisfied edges,
which we call \emph{green} edges, and unsatisfied edges, which we call
\emph{red} edges.
Since $\varepsilon=\ensuremath{\operatorname{OPT_{val}}}\xspace/m$, an averaging argument shows that the number
of red edges incident to $p$ is at most $\varepsilon \cdot d(p)$ for
some choice of $p$. We analyze the Voting Algorithm for this choice of
$p$. Without loss of generality, we assume that $\ensuremath{\operatorname{OPT}}\xspace(p)=0$. This means that at least
$(1-\varepsilon)d(p)$ vertices have $\ensuremath{\operatorname{TEMP}}\xspace(u)=\ensuremath{\operatorname{OPT}}\xspace(u)$; we call
these \emph{nice vertices}. The ones with $\ensuremath{\operatorname{TEMP}}\xspace(u) \neq \ensuremath{\operatorname{OPT}}\xspace(u)$
are \emph{rogue vertices}. The remaining vertices with no $\ensuremath{\operatorname{TEMP}}\xspace$
label (because the edge is missing) are \emph{abstaining vertices}.
Observe that there are at most $\varepsilon \cdot d(p)$ rogue vertices and $n
- d(p) - 1 \leq \delta(n-1)$ abstaining vertices. By convention, we
say that $p$ itself is a nice vertex.
Let $r$ denote the number of rogue vertices (so $r \leq \varepsilon \cdot
d(p)$). Let $\Delta(v) \subset E$ denote the edges incident to vertex
$v$. Let $\bd(v)$ denote the number of neighbors of vertex $v$ that
are non-abstaining. Notice that $\bd(v)$ is the number of votes that
vertex $v \neq p$ receives.
The plan is to analyze how much the outcome of the voting algorithm
differs from $\ensuremath{\operatorname{OPT}}\xspace$. A vertex is \emph{flipped} if $\ensuremath{\operatorname{FINAL}}\xspace(v) \neq
\ensuremath{\operatorname{OPT}}\xspace(v)$. For a vertex to be flipped, it must be badly influenced by
its neighbors. Observe that all nice vertices adjacent to $v$ via a
green edge in $\Delta(v)$ vote correctly with respect to vertex $v$
(i.e., they vote for label $\ensuremath{\operatorname{OPT}}\xspace(v)$).
The two types of vertices that can vote incorrectly for $v$'s label
(i.e., they might not vote for label $\ensuremath{\operatorname{OPT}}\xspace(v)$) are (i) rogue vertices
incident to green edges in $\Delta(v)$, and (ii) vertices incident to
red edges in $\Delta(v)$. The number of vertices falling into the
first category is at most the number of rogue vertices (i.e., at most
$r$). The number of vertices falling into the second
category is at most the number of red edges incident to $v$. Hence we
say that a vertex $v$ is \emph{flippable} if the number of red edges
incident to $v$ is at least $\bd(v)/2 - r$.
\begin{claim}\label{L:flipsdense}
If a vertex $v$ is not flippable, it is not flipped (i.e., $\ensuremath{\operatorname{FINAL}}\xspace(v) = \ensuremath{\operatorname{OPT}}\xspace(v)$).
\end{claim}
\begin{cproof}
If $v$ is not flippable, it has at least $\bd(v)/2 + r + 1$ incident
green edges (since by definition the number of incident red edges is
at most $\bd(v)/2 - r -1$). At least $\bd(v)/2 + 1$ of these green
edges are incident to nice vertices. (Recall a vertex $u$ is nice if
$\ensuremath{\operatorname{TEMP}}\xspace(u)=\ensuremath{\operatorname{OPT}}\xspace(u)$.) Thus all of these nice vertices vote for $v$ to be
labeled $\ensuremath{\operatorname{OPT}}\xspace(v)$, and they win the vote because they form an
absolute majority: the total number of votes cast is at most
$\bd(v)$.
\end{cproof}
\begin{claim}\label{L:countdense}
There are $f \leq \varepsilon \nu n$ flippable vertices.
\end{claim}
\begin{cproof}
By definition, there are $\ensuremath{\operatorname{OPT_{val}}}\xspace=\varepsilon m$ red edges. Denote by $f$
the number of flippable vertices. For a flippable vertex $v$, we need
at least $\bd(v)/2 - r \geq (1-2\delta)(n-1)/2 - r$ red edges in
$\Delta(v)$. Since $m \leq n(n-1)/2$, we have
$$ f \cdot ((1-2\delta) (n-1)/2- r) \leq
2\varepsilon m \leq \varepsilon n (n-1).$$
Recall $r \leq \varepsilon \cdot d(p) \leq \varepsilon (n-1)$
which implies
\begin{eqnarray*}
f
& \leq &
\frac{2\varepsilon n (n-1)}{(1-2\delta) (n-1)- 2r}\\
& \leq &
\frac{2\varepsilon n (n-1)}{(1-2\delta) (n-1)- 2 \varepsilon (n-1)}\\
& \leq &
\frac{2\varepsilon n}{1-2\delta - 2 \varepsilon} \;=\; \varepsilon \nu n,
\end{eqnarray*}
implying the claim.
\end{cproof}
At the end of the algorithm (i.e., according to the labels
$\{\ensuremath{\operatorname{FINAL}}\xspace(v)\}$), if an edge is unsatisfied, then either it is red, or
it is green and at least one of its endpoints got flipped. In the
latter case, we charge that edge positively to (one of) the
endpoint(s) that got flipped. Similarly, if an edge is satisfied, then
either it is green, or it is red and at least one of its endpoints got
flipped. In the latter case, we charge that edge negatively to (one
of) the endpoint(s) that got flipped.
\begin{claim}\label{L:chargesdense}
The charges on a flipped vertex $v$ at the end of the algorithm are at
most $2r + f \leq 2\varepsilon(n-1) + \varepsilon\nu n$.
\end{claim}
\begin{cproof}
For a given vertex $v$, each non-abstaining neighbor $u$ votes for
vertex $v$ to have the label $vote(u \rightarrow v)$, where
$vote(u \rightarrow v)$ is equal to $\ensuremath{\operatorname{TEMP}}\xspace(u)$ modified according to
the constraint on the edge $uv$.
A \emph{coalition} is a maximal set of neighboring vertices $C$
adjacent to $v$ that vote unanimously: for all $u \in C$, $vote(u
\rightarrow v)$ has the same value.
All the non-abstaining vertices adjacent to $v$ get partitioned into
coalitions, and the
\emph{winning coalition} is one with the largest cardinality.
A flippable vertex $v$ gets flipped if the winning coalition $C_{WIN}$
is not the coalition $C_{OPT}$ (where $C_{OPT}$ is the coalition that
votes for $\ensuremath{\operatorname{OPT}}\xspace(v)$). Observe that $C_{OPT}$ contains the subset of
nice vertices that are adjacent to $v$ via green edges. Call this
subset $W_{GG}$. The winning coalition $C_{WIN}$ is formed by nice
vertices adjacent to $v$ via red edges (call this subset $W_{GR}$),
rogue vertices adjacent to $v$ via green edges (call this subset
$W_{RG}$), and rogue vertices adjacent to $v$ via red edges (call this
subset $W_{RR}$). (Note that there might be some vertices in $V
\setminus \{v\}$ that belong to neither $C_{OPT}$ nor $C_{WIN}$, nor
to any coalition at all if they are abstaining vertices.)
Since the winning coalition wins the vote, $|C_{OPT}| \leq |C_{WIN}|$. Thus,
$$|W_{GG}| \leq |W_{GR}| + |W_{RG}| + |W_{RR}| \leq |W_{GR}| + r.$$
The positive charges are upper bounded by $|W_{GG}|+|W_{RG}|$. (This
is not an equality as these edges might end up satisfied if their
other endpoint is flipped as well.) The negative charges are at
least $|W_{GR}| + |W_{RR}|$ minus those whose other endpoint has been
flipped as well and those incident to rogue neighbors (i.e.,
$W_{RR}$). For the other endpoint to be flipped, it needs to be
flippable, so the total number of negative charges is at least
$|W_{GR}|- f$.
So the total charge is at most:
\begin{eqnarray*}
|W_{GG}|+|W_{RG}|-(|W_{GR}|- f) & \leq & |W_{GR}| + r
+ |W_{RG}| - |W_{GR}| + f \\
& = &
r + |W_{RG}| + f \\
& \leq &
2 r + f,
\end{eqnarray*}
where we used the fact that $W_{RG}$ is a subset of the
rogue vertices and therefore has cardinality at most $r$.
\end{cproof}
Denote by $\ensuremath{\operatorname{VAL}}\xspace$ the number of unsatisfied edges at the end of the algorithm.
\begin{claim}
$\ensuremath{\operatorname{VAL}}\xspace-\ensuremath{\operatorname{OPT_{val}}}\xspace \leq \ensuremath{\operatorname{OPT_{val}}}\xspace \cdot 2\varepsilon \nu (2+\nu)/(1-\delta)
+ \varepsilon^2 \nu^2 n$.
\end{claim}
\begin{cproof}
This difference is exactly the number of green edges (i.e.,
satisfied in $\ensuremath{\operatorname{OPT}}\xspace$) that become unsatisfied in the final solution
minus the number of red edges (i.e., unsatisfied in $\ensuremath{\operatorname{OPT}}\xspace$) that become
satisfied in the final solution. This difference is exactly controlled by the charging
scheme. Combining with Claims~\ref{L:countdense} and~\ref{L:chargesdense},
the sum of all charges is at most
\begin{eqnarray}
(2r +f)f &\leq &
(2 \varepsilon(n-1) + \varepsilon \nu n) \varepsilon \nu n\\ & = & (2 \varepsilon(n-1) + \varepsilon \nu (n
- 1) + \varepsilon \nu) \varepsilon \nu n \\
& = & \varepsilon^2 \nu n (n-1) (2 + \nu) + \varepsilon^2 \nu^2 n \\
& \leq & \ensuremath{\operatorname{OPT_{val}}}\xspace \cdot \frac{2\varepsilon \nu (2 + \nu)}{1-\delta} + \varepsilon^2 \nu^2 n.
\end{eqnarray}
Above we use the fact that
$$\ensuremath{\operatorname{OPT_{val}}}\xspace = \varepsilon m \geq \varepsilon (1-\delta) n(n-1)/2.$$
\end{cproof}
\section{Introduction}
Let $G=(V,E)$ be a complete graph with an arbitrary linear order on
the vertices, let $q$ be a positive integer (where $q \leq \poly(n)$)
and let $[q] = \{0, \ldots, q-1\}$. (Note that $G$ is simple and
does not contain any multi-edges or self-loops.) Let $n$ denote the
number of vertices and $m$ the number of edges in $G$ (i.e., $n=|V|$
and $m = {n \choose 2}$). We use $uv=vu$ to refer to an edge in $E$
and $(u,v)$ to refer to an ordered pair or arc. For each ordered pair
of vertices, $(u,v)$, there is a permutation $\pi_{uv}: [q]
\rightarrow [q]$ which we interpret as the constraint
$x_v=\pi_{uv}(x_u)$. This is equivalent to the constraint
$x_u=\pi_{vu}(x_v)$ since we require $\pi_{vu} = \pi^{-1}_{uv}$. Then
the \textsc{Min-Unique-Games$(q)$-Full}\xspace problem is the following.
\begin{problem}[\textsc{Min-Unique-Games$(q)$-Full}\xspace]\label{UG}
Given a complete graph $G$, a positive integer $q$ and a permutation
$\pi_{uv}:[q] \rightarrow [q]$ for each ordered pair of vertices
$(u,v)$ with $u < v$ (such that $\pi_{vu} = \pi^{-1}_{uv}$), find a
minimum cardinality subset of edges of $G$ whose deletion results in a
satisfiable set of constraints.
\end{problem}
In a special case of this problem, each permutation is {\em cyclic}.
Specifically, for each ordered pair of vertices $(u,v)$, there is a
given integer $c_{uv} \in [q]$ (symmetrically, $c_{vu} = (q - c_{uv})
\bmod q$). For each edge $uv \in E$ with $u < v$, there is a
constraint $x_u - x_v \equiv c_{uv} \bmod q$. (Observe that $x_v -
x_u \equiv c_{vu} \bmod q$ is an equivalent constraint.)
\begin{problem}[\textsc{Min-Lin-Eq$(q)$-Full}\xspace]\label{linEqModq}
Given a complete graph $G$, a positive integer $q$ and a constraint $x_u - x_v
\equiv c_{uv} \bmod q$
for each ordered pair of vertices $(u,v)$ with $u < v$ (such that
$c_{vu} = (q - c_{uv}) \bmod q$), find a minimum cardinality subset of edges of
$G$ whose deletion results in a satisfiable set of constraints.
\end{problem}
We refer to the general versions of Problems \ref{UG} and \ref{linEqModq} (i.e., when $G$
is not necessarily a complete graph) as \textsc{Min-Unique-Games$(q)$}\xspace and \textsc{Min-Lin-Eq$(q)$}\xspace,
respectively. \textsc{Min-Unique-Games$(q)$}\xspace is a constraint satisfaction problem (CSP). As
defined by Zwick~\cite{zwick1998finding}, an approximation algorithm
for a CSP is called {\em robust} if it outputs an assignment
satisfying a $(1 - f(\varepsilon))$-fraction of the constraints on any
$(1-\varepsilon)$-satisfiable instance, where the loss function $f$ is such
that $f(\varepsilon) \rightarrow 0$ as $\varepsilon \rightarrow 0$. Moreover, the
runtime of the algorithm should not depend in any way on $\varepsilon$.
Let us call an approximation algorithm {\em super robust}
if the loss function has the form $\varepsilon + O(\varepsilon^2)$. Such super
robust algorithms are relevant in the design of approximation
algorithms because if one has a super robust algorithm for
the min version of a problem and a polynomial-time approximation
scheme (PTAS) for the complementary max version, then one can derive
a PTAS for the min version as well (see
Section~\ref{sec:ptas}). However, while there is a wide range of
techniques for obtaining a PTAS for the max version of constraint
satisfaction problems on dense graphs (see,
e.g.,~\cite{arora1999polynomial}), far fewer tools are known for the
design of super robust algorithms for the min versions, even on dense
graphs.
In this paper, we present such a super robust algorithm for {\sc
Unique-Games} on complete graphs. Specifically, the runtime of our
algorithm is $O(q n^3)$ in the RAM model (with no dependence on
$\varepsilon$) and the loss function is $f(\varepsilon) = (\varepsilon + c_{\varepsilon} \varepsilon^2)$,
where $c_{\varepsilon}$ is a constant depending on $\varepsilon$ such that
$\lim_{\varepsilon \rightarrow 0} c_{\varepsilon} = 16$. A randomized
implementation with a slightly larger constant $c_{\varepsilon}$ in the loss
function runs in time $O(q n^2)$. We show that our algorithm can be
extended to the so-called {\em everywhere dense} case, which is where
every vertex has degree at least $(1-\delta)(n-1)$ for some constant
density parameter $\delta \in (0,1)$~\cite{arora1999polynomial}.
Our algorithm is combinatorial and uses voting to find an assignment.
First, we find an initial assignment using a pivot algorithm in the
spirit of \cite{ailon2008aggregating}, which is a 3-approximation in
the case of \textsc{Min-Lin-Eq$(q)$-Full}\xspace (Section \ref{sec:pivot}). Then, we
improve the solution according to ``votes'' of the other vertices
based on their initial assignments (Section \ref{sec:voting}). We
discuss the extension to the dense case, whose details can be found in
Appendix \ref{app:B}. When the alphabet size is constant, we can couple our robust algorithm with classical approximation algorithms for the complementary problem to obtain a PTAS for \textsc{Min-Unique-Games$(q)$-Full}\xspace (and thus \textsc{Min-Lin-Eq$(q)$-Full}\xspace). This is explained in Section~\ref{sec:ptas}, and recovers a result of Karpinski and Schudy~\cite{karpinski2009linear}, with a simpler proof.
We also consider the hardness of \textsc{Min-Unique-Games$(q)$-Full}\xspace (Section \ref{sec:nphard}). In
the case of $q=2$, the \NP-hardness for \textsc{Min-Lin-Eq$(q)$-Full}\xspace follows from
the \NP-hardness of \textsc{Correlation-Clustering}\xspace with two clusters
(i.e., {\sc MinDisAgree[2]}) due to Giotis and
Guruswami~\cite{giotis2006correlation}. For larger $q \geq 3$, the
hardness of \textsc{Min-Unique-Games$(q)$-Full}\xspace does not appear to be explicitly considered anywhere
in the literature. Therefore, we prove \NP-hardness for \textsc{Min-Unique-Games$(q)$-Full}\xspace for $q
\geq 3$. For \textsc{Min-Lin-Eq$(q)$-Full}\xspace, we prove hardness under the
assumption that $\NP \not\subseteq \BPP$. Our reduction is similar to the
hardness reductions for \textsc{Feedback-Arc-Set-Tournaments}\xspace~\cite{ailon2008aggregating,alon2006ranking,charbit2007minimum}
and for fully-dense problems~\cite{ailon2007hardness} but is not
directly implied by them since, for example, the latter result only
holds for fully-dense CSPs on a binary domain.
\subsection{Related Work}
{\sc Unique-Games} on general graphs is one of the most important
problems in approximation algorithms due to its direct connection with
the famous Unique Games Conjecture of Khot~\cite{khot2002power}.
Roughly speaking, the conjecture states that there is no
constant-factor approximation algorithm for \textsc{Max-Unique-Games$(q)$}\xspace. It is not hard to
see that there is an algorithm with approximation factor $1/q$. Many
approximation algorithms beating this factor have been developed,
although none achieves a constant-factor approximation. Some of these use
semidefinite programming
(SDP)~\cite{khot2002power,trevisan2005approximation,charikar2006near,raghavendra2008optimal},
and some use linear programming (LP)~\cite{gupta2006approximating}.
It is known that one can find a constant factor approximation for
\textsc{Max-Unique-Games$(q)$}\xspace in subexponential
time~\cite{arora2015subexponential,barak2011rounding,bafna2021playing}.
See \cite{khot2010inapproximability,steurer2014sum} for surveys on the
Unique Games Conjecture.
In terms of a robust algorithm for \textsc{Min-Unique-Games$(q)$}\xspace, there is an algorithm based on
semidefinite programming with loss function $f(\varepsilon) = \sqrt{\varepsilon
\log{q}}$~\cite{charikar2006near}. This is not really a robust algorithm
for \textsc{Min-Unique-Games$(q)$}\xspace since the loss function depends on $q$ and not solely on
$\varepsilon$, and $q$ could be a function of $n$. Robust algorithms for Constraint Satisfaction Problems have
been studied in
depth~\cite{guruswami2011tight,kun2012linear,barto2016robustly,dalmau2019robust}.
\textsc{Min-Unique-Games$(q)$}\xspace has also been studied on expanders \cite{arora2008unique}, and
this work gives an algorithm with loss function
$O(\frac{\varepsilon}{\lambda} \log{\frac{\lambda}{\varepsilon}})$, where $\lambda$
is the second smallest eigenvalue of the normalized Laplacian of the
input graph $G$. This algorithm is robust in the case of complete
graphs, since $\lambda = 1$ for a complete graph. In
\cite{guruswami2011lasserre}, the stated loss function for a graph
with $\lambda = 1$ is $f(\varepsilon) = (3 + \eta)\varepsilon$, which is achieved in
time $n^{O(2/\eta)}$. Perhaps a more careful analysis of these
algorithms can yield a slightly better loss function in the case of
complete graphs. In any case, these loss functions correspond to
constant-factor approximations and are therefore qualitatively worse
than the one we present in this paper; for example, they
cannot be leveraged to obtain a PTAS as in Section \ref{sec:ptas}.
Moreover, it is somewhat interesting that our loss function can be
achieved using combinatorial methods rather than relying on tools from
semidefinite programming as is the case in~\cite{arora2008unique} and
on semidefinite hierarchies as in~\cite{guruswami2011lasserre}. We
remark that the Unique Games Conjecture is equivalent to the
conjecture that a basic assignment-based semidefinite program is the
best tool for solving an instance of
\textsc{Max-Unique-Games$(q)$}\xspace~\cite{raghavendra2008optimal}. Thus, it is interesting to
consider different algorithmic tools. Finally, we note that the
algorithm of \cite{arora2008unique} can be interpreted as a pivot
algorithm and we discuss this connection in Section \ref{sec:pivot}.
\textsc{Min-Unique-Games$(q)$}\xspace has also been studied on dense graphs and there is a PTAS with stated
runtime $O(n^2) + 2^{O(\frac{1}{\varepsilon})}$~\cite{karpinski2009linear}.
This algorithm, based on a combination of random sampling and voting,
is not robust as the runtime is not independent of $\varepsilon$. Notice
that this runtime assumes that both $q$ and the density parameter
$\delta$ are fixed (i.e., the dependence on $q$ and $\delta$ occurs in
the exponent but is not stated explicitly in the runtime).
Finally, we remark that many combinatorial optimization problems have
been specifically studied on complete graphs or tournaments. For
example, \textsc{Feedback-Arc-Set-Tournaments}\xspace has a much better approximation guarantee than is
currently known for the general case, but is still
\NP-hard~\cite{ailon2008aggregating}. Another well-studied example is
the special case of \textsc{Correlation-Clustering}\xspace known as {\sc MinDisagree} on complete
graphs~\cite{bansal2004correlation,charikar2005clustering,giotis2006correlation,ailon2008aggregating,chawla2015near}.
The latter problem is APX-hard~\cite{charikar2005clustering}, so it is
unlikely to have a super robust approximation (see Section~\ref{sec:ptas}). Although \textsc{Feedback-Arc-Set-Tournaments}\xspace has a
PTAS~\cite{kenyon2007rank},
it also does not seem to have a known super
robust approximation algorithm.
\section{Pivot Algorithm for \textsc{Min-Lin-Eq$(q)$-Full}\xspace}\label{sec:pivot}
In a given instance of \textsc{Min-Lin-Eq$(q)$-Full}\xspace on a graph $G$, each cycle
in $G$ is either {\em consistent} or {\em inconsistent}. A cycle is
consistent (inconsistent) if it is satisfiable (unsatisfiable,
respectively). Observe that a feasible solution to Problem
\ref{linEqModq} is a hitting set for the set of inconsistent cycles.
The following algorithm outputs a vertex labeling such that the
unsatisfied edges form a hitting set for the inconsistent cycles.
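For \textsc{Min-Lin-Eq$(q)$}\xspace, consistency of a cycle is a telescoping check: summing the constraints $x_u - x_v \equiv c_{uv} \bmod q$ around the cycle, the variables cancel, so the cycle is satisfiable if and only if the constants sum to $0 \bmod q$. The following minimal Python sketch illustrates this for a triangle (the function name is ours, for illustration only):

```python
def triangle_consistent(q, c_uv, c_vw, c_wu):
    """A triangle with constraints x_u - x_v = c_uv, x_v - x_w = c_vw,
    and x_w - x_u = c_wu (all mod q) is satisfiable iff the constants
    telescope to zero around the cycle."""
    return (c_uv + c_vw + c_wu) % q == 0
```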
\vspace{5mm}
\noindent
\fbox{\parbox{15cm}{
{\bf Pivot Algorithm}
\vspace{1mm}
{\it Input:} An instance of \textsc{Min-Lin-Eq$(q)$-Full}\xspace on a graph $G=(V,E)$.
\begin{itemize}
\item[1.] Pick a pivot $p \in V$ uniformly at random and label $p$
with 0.
\item[2.] For each vertex $v \in V\setminus\{p\}$, assign $v$ the label
  corresponding to the constraint on edge $pv$.  (Specifically,
  $\ell(v) = c_{vp}$.)
\end{itemize}
}}
\vspace{3mm}
On an input for \textsc{Min-Unique-Games$(q)$-Full}\xspace, the algorithm can be modified to test each
possible label in $[q]$ for the pivot chosen in Step 1.
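The two steps above can be sketched in a few lines of Python. This is a minimal illustration (not the paper's implementation), assuming the constants $c_{uv}$ are stored in a dictionary over ordered pairs with $c_{vu} = -c_{uv} \bmod q$; the helper \texttt{unsatisfied} is ours:

```python
import random

def pivot_algorithm(n, q, c, seed=0):
    """Sketch of the Pivot Algorithm for Min-Lin-Eq(q)-Full.

    Vertices are 0..n-1 and c[(u, v)] is the constant of the
    constraint x_u - x_v = c[(u, v)] (mod q), given for every
    ordered pair with c[(v, u)] = -c[(u, v)] mod q."""
    rng = random.Random(seed)
    p = rng.randrange(n)            # Step 1: uniform random pivot
    labels = {p: 0}                 # the pivot is labeled 0
    for v in range(n):              # Step 2: ell(v) = c_{vp}
        if v != p:
            labels[v] = c[(v, p)] % q
    return labels

def unsatisfied(n, q, c, labels):
    """Number of pairs u < v violating x_u - x_v = c_{uv} (mod q)."""
    return sum(1 for u in range(n) for v in range(u + 1, n)
               if (labels[u] - labels[v]) % q != c[(u, v)] % q)
```

On a fully satisfiable instance, any choice of pivot recovers a labeling that satisfies every constraint.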
\begin{restatable}{theorem}{SimpleDualProof}\label{thm:3approx}
The {Pivot Algorithm} is a $3$-approximation algorithm for
Problem \ref{linEqModq}.
\end{restatable}
The proof of Theorem \ref{thm:3approx} follows almost directly from
the analysis of the {\bf KwikSort Algorithm} for
\textsc{Feedback-Arc-Set-Tournaments}\xspace~\cite{ailon2008aggregating}. For completeness, the proof can be
found in Appendix \ref{app:first}. We also give an example showing
that this analysis is tight.
\subsection{Pivot Algorithm and SDP Rounding}
In \cite{arora2008unique}, a semidefinite program is first solved, and
its solution is then used to produce a new permutation
$\sigma_{uv}$ for each edge $uv \in E$. Suppose that the initial
instance (on which the SDP is solved) is on a complete graph and is
$(1-\varepsilon)$-satisfiable. If the new instance (using the
$\sigma$-permutations) is also $(1-\varepsilon)$-satisfiable (e.g., if
$\sigma_{uv} = \pi_{uv}$ for every edge), then their algorithm
produces the same output as the Pivot Algorithm
and the loss function $f(\varepsilon) = 3 \varepsilon$.
The analysis used in
\cite{arora2008unique} does not seem sufficient to show that the new
instance on the $\sigma$-permutations is actually a
$(1-\varepsilon')$-satisfiable instance for some $\varepsilon' < \varepsilon$. Thus, it
seems that a new analysis or modifications of the algorithm are necessary
to obtain an improved loss function.
\section{Hardness for \textsc{Min-Lin-Eq$(q)$-Full}\xspace}
\begin{theorem}\label{thm:random-hardness}
Unless $\NP \subseteq \BPP$, \textsc{Min-Lin-Eq$(q)$-Full}\xspace has no polynomial-time algorithm.
\end{theorem}
To prove Theorem \ref{thm:random-hardness}, we follow the general
approach used for {\sc Feedback-Arc-Set-Tournament}
in \cite{ailon2008aggregating,alon2006ranking,charbit2007minimum} and
for fully-dense CSPs on a binary domain~\cite{ailon2007hardness}. We
``blow up'' an instance by replacing each vertex with $k$ copies. For
each non-edge, we require a particular bipartite gadget described in
the following lemma.
\begin{lemma}\label{lem:double-cloud}
For any positive integers $q$ and $\ell$, where $\ell > q$ and $\ell$
is a multiple of $q$, there exists an instance of \textsc{Min-Lin-Eq$(q)$}\xspace on the
complete bipartite graph $K_{\ell,\ell}$ such that for any vertex
labeling of $K_{\ell,\ell}$, the total number of satisfied equations
is at least $\ell^2/q - \Theta(\ell^{\frac{3}{2}})$ and at most
$\ell^2/q +
\Theta(\ell^{\frac{3}{2}})$.
\end{lemma}
\begin{proof}
We orient all edges from one side of $K_{\ell,\ell}$ to the other
side. For each of the $\ell^2$ arcs, we choose a label from
$[q]$ uniformly at random. Notice that there are
$q^{2\ell}$ possible vertex labelings.
For any fixed labeling, the expected number of satisfied constraints
is $\mu = \ell^2/q$. For a fixed labeling, let $X_{uv}$ denote the
random variable which is 1 if arc $(u,v)$ is satisfied by the randomly
chosen arc label (w.r.t.~the fixed vertex labeling) and 0 otherwise
and let $X = \sum_{uv \in E(K_{\ell,\ell})} X_{uv}$.
Recall some standard Chernoff bounds:
\begin{eqnarray*}
\Pr[X \geq (1 + \delta)\mu] & \leq & e^{-\frac{\delta^2 \mu}{3}} \text{ and } \\
\Pr[X \leq (1 - \delta)\mu] & \leq & e^{-\frac{\delta^2 \mu}{2}}.
\end{eqnarray*}
Let $B_1$
be the (bad) event that there is some vertex labeling for which the
number of satisfied constraints exceeds $(1+ \delta)\mu$, and let
$B_2$
be the (bad) event that there is some vertex labeling for which the
number of satisfied constraints is less than $(1- \delta)\mu$.
We have $\mu = \frac{\ell^2}{q}$. Setting $\delta = \sqrt{{\frac{c \cdot q \log{q}}{\ell}}}$,
where $c =60$, we have:
\begin{eqnarray*}
\Pr\left[X \geq \left(1 + \sqrt{\frac{c \cdot q \log{q}}{\ell}}\right)\frac{\ell^2}{q}\right]
& \leq & e^{-\left(\frac{\frac{c \cdot q \log{q}}{\ell} \ell^2}{3q}\right)}, \\
\Pr\left[X \geq \mu + \sqrt{c \cdot \log{q}}\,\frac{\ell^{3/2}}{\sqrt{q}}\right]
& \leq & e^{-20 \ell \cdot \log{q}}.
\end{eqnarray*}
\begin{eqnarray*}
\Pr\left[X \leq \left(1 - \sqrt{\frac{c \cdot q \log{q}}{\ell}}\right)\frac{\ell^2}{q}\right]
& \leq & e^{-\left(\frac{\frac{c \cdot q \log{q}}{\ell} \ell^2}{2q}\right)}, \\
\Pr\left[X \leq \mu - \sqrt{c \cdot \log{q}}\,\frac{\ell^{3/2}}{\sqrt{q}}\right]
& \leq & e^{-30 \ell \cdot \log{q}}.
\end{eqnarray*}
Now we take a union bound over all $q^{2\ell}$ vertex labelings.
We have
\begin{eqnarray*}
\Pr[B_1] + \Pr[B_2] \leq
\frac{e^{2\ell \log q}}{e^{20 \cdot \ell \log{q}}} +
\frac{e^{2\ell \log q}}{e^{30 \cdot \ell \log{q}}}
= \frac{1}{e^{18 \ell \log{q}}} + \frac{1}{e^{28 \ell \log{q}}} < 1.
\end{eqnarray*}
Thus, we can conclude that there is a positive probability that the
number of satisfied constraints is within the desired range and
therefore the necessary gadget exists.
\end{proof}
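The random construction in the proof is easy to simulate. The following Python sketch (illustration only; the helper names are ours) draws the random gadget and counts the arcs satisfied by a fixed vertex labeling; for any single labeling the count concentrates around $\ell^2/q$, as the Chernoff bounds above predict:

```python
import random

def random_bipartite_gadget(ell, q, seed=0):
    """Draw a uniformly random constant in [q] for each of the ell^2
    arcs oriented from the left side {0..ell-1} to the right side
    {0..ell-1} of K_{ell,ell}."""
    rng = random.Random(seed)
    return {(u, w): rng.randrange(q)
            for u in range(ell) for w in range(ell)}

def satisfied(ell, q, c, left, right):
    """Count arcs (u, w) whose constraint
    left[u] - right[w] = c[(u, w)] (mod q) is satisfied."""
    return sum(1 for u in range(ell) for w in range(ell)
               if (left[u] - right[w]) % q == c[(u, w)])
```

For instance, with $\ell = 40$ and $q = 4$, the all-zeros labeling satisfies close to $\ell^2/q = 400$ of the $1600$ constraints, well within the $\Theta(\ell^{3/2})$ window of the lemma.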
\begin{proof}[Proof of Theorem \ref{thm:random-hardness}]
We begin with an arbitrary instance of \textsc{Min-Lin-Eq$(q)$}\xspace on the graph
$G=(V,A)$. (We can think of $G$ as an oriented graph.) For each
arc $(u,v) \in A$, we have a constraint $x_u - x_v \equiv c_{uv}
\bmod q$. We pick an integer $k =\poly(n)$ whose exact value is
determined later and where $n = |V|$ and $k$ is a multiple of $q$.
We construct a new ``blown-up'' graph $G^k = (V^k, A^k \cup B^k
\cup C^k)$ as follows: \begin{eqnarray*} V^k & = & \{v_i ~|~ v \in
V, ~i \in \{1, \ldots, k\}\},\\ A^k & = & \{(u_i,v_j) ~|~(u,v)
\in A, ~i,j \in \{1, \ldots, k\}\},\\ B^k & = & \{(u_i,v_j)
~|~(u,v) \notin A, ~i,j \in \{1, \ldots, k\}\},\\ C^k & = &
\{(u_i,u_j) ~|~ u \in V, ~i \neq j \in \{1, \ldots,
k\}\}. \end{eqnarray*}
For a vertex $u \in V$, we refer to the corresponding $k$ copies
$\{u_1, \ldots, u_k\}$ in $V^k$ as a ``cloud''. For an arc
$(u_i,v_j) \in A^k$, we use the same constraint as $(u,v)$. For an arc
$(u_i,u_j) \in C^k$ (i.e., an arc in a cloud), we can use the
constraint $c_{u_i u_j} = 0$. For an arc $(u_i,v_j) \in B^k$, we use
the bipartite gadget constructed in Lemma \ref{lem:double-cloud}.
Let $B$
denote the set of non-arcs in $G$ (i.e., $|B| = {n \choose 2} - |A|$).
Let $val(H)$ denote the minimum number of unsatisfied constraints in
$H$ over all assignments $V(H) \rightarrow \{1, \ldots, q\}$.
We now relate the values $val(G)$ and $val(G^k)$. We set $k
= \Omega(n^6)$. Notice that in this case, $k^{\frac{3}{2}} \cdot |B|
= o(k^2)$.
We define ${G^k_{\star}}$ to be the ``blow-up'' of $G$, which is a subgraph
of $G^k$. Specifically, ${G^k_{\star}} = (V^k, A^k \cup C^k)$. We can use
$val(G^k)$ to estimate $val({G^k_{\star}})$ via the following claim, which
follows from Lemma
\ref{lem:double-cloud}.
\begin{claim}
$$\left|val(G^k) - val({G^k_{\star}}) -k^2 \cdot
|B| \cdot \frac{q-1}{q}\right| = O(k^{\frac{3}{2}} \cdot |B|).$$
\end{claim}
Now we need to use $val({G^k_{\star}})$ to compute $val(G)$.
\begin{claim}
$$val({G^k_{\star}}) \leq k^2 \cdot val(G).$$
\end{claim}
\begin{cproof}
Consider an optimal vertex labeling for $G$ that leaves $val(G)$
constraints unsatisfied. We can construct a solution for ${G^k_{\star}}$
with the claimed upper bound. For each vertex in $V$, assign the same
label to each vertex in the corresponding cloud in $V^k$. Each
satisfied constraint in $G$ corresponds to $k^2$ satisfied constraints
in ${G^k_{\star}}$. Each unsatisfied constraint in $G$ corresponds to $k^2$
unsatisfied constraints in ${G^k_{\star}}$. Moreover, under this labeling
each cloud has only satisfied constraints, so the clouds contribute no
unsatisfied constraints in ${G^k_{\star}}$.
\end{cproof}
\begin{claim}
$$k^2 \cdot val(G) \leq val({G^k_{\star}}).$$
\end{claim}
\begin{cproof}
Consider an optimal vertex labeling for ${G^k_{\star}}$ that leaves
$val({G^k_{\star}})$ constraints unsatisfied. We can construct a vertex
labeling for $G$ with the claimed upper bound. To do this, for each
vertex $v \in V$, we sample a label uniformly at random from the $k$
vertices in $v$'s cloud. Call this labeling $r:V \rightarrow \{1,
\ldots, q\}$. Then $\mathbbm{E}[val_r(G)] \leq val({G^k_{\star}})/k^2$. (In fact,
$\mathbbm{E}[val_r(G)] = val(A^k)/k^2$.) We can conclude that $val(G) \leq
val({G^k_{\star}})/k^2$.
\end{cproof}
In conclusion, we can use $val(G^k)$ to determine $val(G)$.
\end{proof}
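The bookkeeping in the blow-up can be sketched as follows. This is a Python illustration only, under the assumption that $G$ is given as a set of ordered arcs; here $B^k$ keeps one orientation per non-edge, matching $|B| = \binom{n}{2} - |A|$:

```python
def blow_up(n, arcs, k):
    """Sketch of the blow-up construction: vertex v becomes the cloud
    {(v, 0), ..., (v, k-1)}.  Returns the arc sets A^k (copies of the
    original arcs), B^k (one orientation per non-edge, to be filled
    with bipartite gadgets) and C^k (arcs inside clouds, whose
    constraint constant is 0)."""
    arcs = set(arcs)
    Ak = {((u, i), (v, j)) for (u, v) in arcs
          for i in range(k) for j in range(k)}
    Bk = {((u, i), (v, j)) for u in range(n) for v in range(n)
          if u < v and (u, v) not in arcs and (v, u) not in arcs
          for i in range(k) for j in range(k)}
    Ck = {((u, i), (u, j)) for u in range(n)
          for i in range(k) for j in range(k) if i != j}
    return Ak, Bk, Ck
```

As a sanity check, $|A^k| = k^2 |A|$, $|C^k| = n \cdot k(k-1)$, and $|B^k| = k^2 (\binom{n}{2} - |A|)$.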
\section{The Voting Algorithm}
\label{sec:voting}
In this section we present and analyze the voting algorithm for
\textsc{Min-Lin-Eq$(q)$-Full}\xspace. We show that this algorithm is a robust approximation
algorithm for \textsc{Min-Lin-Eq$(q)$-Full}\xspace. The idea is to begin with the pivot
algorithm from the previous section and use the resulting labels as
``temporary'' labels. Then, we ``correct'' this labeling: each vertex
(except the pivot) casts a ``vote'' for the label of every other
vertex according to the relevant constraint. The votes are tallied
for each vertex by a plurality rule: the final label of a vertex is
one that occurs most often in the list of its votes. The algorithm,
which we call the {\bf{Voting Algorithm}} is presented formally
in Section \ref{sec:n3-alg}. The runtime of the Voting Algorithm
is $O(n^3)$. In Section \ref{sec:n2-alg}, we present an algorithm
that is equivalent to the Voting Algorithm in that it produces the
same output assignment. In Section \ref{sec:randomized}, we present a
randomized version of the Voting Algorithm with running time $O(n^2)$
and a slightly worse approximation guarantee.
\subsection{The Voting Algorithm for \textsc{Min-Lin-Eq$(q)$-Full}\xspace}\label{sec:n3-alg}
\vspace{5mm}
\noindent
\fbox{\parbox{15cm}{
{\bf Voting Algorithm}
\vspace{1mm}
{\it Input:} An instance of \textsc{Min-Lin-Eq$(q)$-Full}\xspace.
\begin{itemize}
\item[1.] Pick a pivot $p \in V$.  Label $p$ with $0$ (so $\ensuremath{\operatorname{TEMP}}\xspace(p) = 0$) and label each
  vertex $v \in V \setminus \{p\}$ with temporary label $\ensuremath{\operatorname{TEMP}}\xspace(v)$, which is chosen
  according to the constraint on edge $(p, v)$. (Specifically, $\ensuremath{\operatorname{TEMP}}\xspace(v)
  = c_{vp}$.)
\item[2.] For each vertex $v$, each neighboring vertex $u \neq p$
votes for a label for $v$, where $u$'s vote is based on its
temporary label $\ensuremath{\operatorname{TEMP}}\xspace(u)$. (Specifically, the vote of $u$ for $v$
is $(c_{vu} + \ensuremath{\operatorname{TEMP}}\xspace(u)) \bmod q$.)
\item[3.] Then each $v$ is assigned a final label $\ensuremath{\operatorname{FINAL}}\xspace(v)$ according
to the outcome of its $n-2$ votes (with a plurality rule). Ties are
resolved arbitrarily.
\item[4.] Output the best $\ensuremath{\operatorname{FINAL}}\xspace$ solution over all choices of $p$ in Step 1.
\end{itemize}}}
Notice that for technical reasons, we do not let the pivot
$p$ vote in Step 2.
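The four steps above can be sketched directly in Python. This is a minimal $O(n^3)$ illustration, not an optimized implementation; the convention that the pivot keeps its label $0$ when final labels are assigned is our assumption:

```python
from collections import Counter

def voting_algorithm(n, q, c):
    """Sketch of the Voting Algorithm for Min-Lin-Eq(q)-Full.

    c[(u, v)] is the constant of x_u - x_v = c[(u, v)] (mod q),
    given for every ordered pair.  Tries every pivot in Step 1 and
    returns the best FINAL labeling together with its cost."""
    best, best_cost = None, None
    for p in range(n):
        # Step 1: temporary labels read off the pivot's edges
        temp = {v: (0 if v == p else c[(v, p)] % q) for v in range(n)}
        final = {p: 0}          # assumption: the pivot keeps label 0
        for v in range(n):
            if v == p:
                continue
            # Step 2: every u outside {p, v} votes (c_{vu} + TEMP(u)) mod q
            votes = Counter((c[(v, u)] + temp[u]) % q
                            for u in range(n) if u not in (p, v))
            # Step 3: plurality rule, ties broken arbitrarily
            final[v] = votes.most_common(1)[0][0]
        cost = sum(1 for u in range(n) for v in range(u + 1, n)
                   if (final[u] - final[v]) % q != c[(u, v)] % q)
        if best_cost is None or cost < best_cost:   # Step 4: keep the best
            best, best_cost = final, cost
    return best, best_cost
```

On a fully satisfiable instance the votes are unanimous and the cost is $0$; corrupting a single constraint leaves exactly one unsatisfied edge, since the correct voters form a plurality.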
\begin{theorem}\label{T:main}
On a $(1-\varepsilon)$-satisfiable instance of \textsc{Min-Lin-Eq$(q)$-Full}\xspace, for $0 \leq
\varepsilon < \frac{1}{2}$, the Voting Algorithm returns a solution
with at most $(\varepsilon+ c_{\varepsilon}\varepsilon^2) m$ unsatisfied
constraints where $\lim_{\varepsilon \to 0 } c_{\varepsilon}^{} = 16$.
\end{theorem}
We prove Theorem \ref{T:main} via the following lemma.
\begin{lemma}\label{L:mainlemma}
The Voting Algorithm returns a solution with at most $(\varepsilon + 2
\varepsilon^2 \nu (2+\nu) + o(1)) m$ unsatisfied constraints, where
$\nu=2/(1-2\varepsilon)$.
\end{lemma}
Fix an optimal solution $\ensuremath{\operatorname{OPT}}\xspace$ and denote by $\ensuremath{\operatorname{OPT}}\xspace(v)$ the label it gives to a
vertex $v$. In this fixed optimal solution, there are satisfied edges,
which we call \emph{green} edges and unsatisfied edges, which we call
\emph{red} edges.
Since $\varepsilon=\ensuremath{\operatorname{OPT_{val}}}\xspace/m$, the number of red edges incident to $p$ is
at most $\varepsilon (n-1)$ for some choice of $p$. We analyze the
voting algorithm for this choice of $p$. Without loss of generality,
we assume that $\ensuremath{\operatorname{OPT}}\xspace(p)=0$. This means that at least
$(1-\varepsilon)(n-1)$ vertices have $\ensuremath{\operatorname{TEMP}}\xspace(u)=\ensuremath{\operatorname{OPT}}\xspace(u)$; we call these
\emph{good vertices} (i.e., incident to green edges), while the other
ones are \emph{rogue vertices} (i.e., incident to red edges).
The plan is to analyze how much the outcome of the voting algorithm
differs from \ensuremath{\operatorname{OPT}}\xspace. A vertex is \emph{flipped} if $\ensuremath{\operatorname{FINAL}}\xspace(v) \neq
\ensuremath{\operatorname{OPT}}\xspace(v)$. For a vertex to be flipped, it must be badly influenced by
its neighbors. Let $\delta(v) \subset E$ denote the edges incident to
vertex $v$. Observe that all good vertices adjacent to $v$ via a
green edge in $\delta(v)$ vote correctly with respect to vertex $v$
(i.e., they vote for label $\ensuremath{\operatorname{OPT}}\xspace(v)$).
The two types of vertices that can vote incorrectly for $v$'s label
(i.e., they might not vote for label $\ensuremath{\operatorname{OPT}}\xspace(v)$) are (i) rogue vertices
incident to green edges in $\delta(v)$, and (ii) vertices incident to
red edges in $\delta(v)$. The number of vertices falling into the
first category is at most the number of rogue vertices (i.e., at most
$\varepsilon (n-1)$). The number of vertices falling into the second
category is at most the number of red edges incident to $v$. Hence we
say that a vertex $v$ is \emph{flippable} if the number of red edges
incident to $v$ is at least $(n-1)/2-\varepsilon (n-1)$.
\begin{lemma}\label{L:flips}
If a vertex $v$ is not flippable, it is not flipped (i.e., $\ensuremath{\operatorname{FINAL}}\xspace(v) = \ensuremath{\operatorname{OPT}}\xspace(v)$).
\end{lemma}
\begin{proof}
A non-flippable vertex $v$ has at least $(n-1)/2+\varepsilon (n-1)+1$
incident green edges (since by definition the number of incident red
edges is at most $(n-1)/2 - \varepsilon (n-1) -1$). At least
$(n-1)/2+1$ of these edges are incident to good vertices. (Recall a
vertex $u$ is good if $\ensuremath{\operatorname{TEMP}}\xspace(u)=\ensuremath{\operatorname{OPT}}\xspace(u)$.) Thus all of these good
vertices vote for $v$ to be labeled $\ensuremath{\operatorname{OPT}}\xspace(v)$, and they will win the
vote since they form an absolute majority.
\end{proof}
\begin{lemma}\label{L:count}
There are at most $\varepsilon \nu n$ flippable vertices.
\end{lemma}
\begin{proof}
By definition, there are $\ensuremath{\operatorname{OPT_{val}}}\xspace=\varepsilon m$ red edges. Denote by $f$
the number of flippable vertices. Summing the red degree around each
flippable vertex gives $f \cdot ((n-1)/2-\varepsilon (n-1)) \leq
2\varepsilon m$ implying the lemma.
\end{proof}
At the end of the algorithm (i.e., according to the labels
$\{\ensuremath{\operatorname{FINAL}}\xspace(v)\}$), if an edge is unsatisfied, then either it is red, or
it is green and at least one of its endpoints got flipped. In the
latter case, we charge that edge positively to (one of) the
endpoint(s) that got flipped. Similarly, if an edge is satisfied, then
either it is green, or it is red and at least one of its endpoints got
flipped. In the latter case, we charge that edge negatively to (one
of) the endpoint(s) that got flipped.
\begin{lemma}\label{L:charges}
The charges on a flipped vertex $v$ at the end of the algorithm are at
most $2\varepsilon(n-1) + \varepsilon\nu n$.
\end{lemma}
\begin{proof}
For a given vertex $v$, each neighbor $u$ votes for vertex $v$ to have the
label $vote(u \rightarrow v)$, where $vote(u \rightarrow v)$ is equal
to $\ensuremath{\operatorname{TEMP}}\xspace(u)$ modified according to the constraint on the edge $uv$.
A \emph{coalition} is a maximal set of neighboring vertices $C$
adjacent to $v$ that vote unanimously: for all $u \in C$, $vote(u
\rightarrow v)$ has the same value.
All the vertices adjacent to $v$ get partitioned into coalitions, and the
\emph{winning coalition} is one with the largest cardinality.
A flippable vertex $v$ gets flipped if the winning coalition $C_{WIN}$
is not the coalition $C_{OPT}$ (where $C_{OPT}$ is the coalition that
votes for $\ensuremath{\operatorname{OPT}}\xspace(v)$). Observe that $C_{OPT}$ contains the subset of
good vertices that are adjacent to $v$ via green edges. Call this
subset $GG$. The winning coalition $C_{WIN}$ is formed of good
vertices adjacent to $v$ via red edges (call this subset $W_{GR}$),
rogue vertices adjacent to $v$ via green edges (call this subset
$W_{RG}$), and rogue vertices adjacent to $v$ via red edges (call this
subset $W_{RR}$). (Note that there might be some vertices in $V
\setminus{v}$ that belong to neither $C_{OPT}$ nor to $C_{WIN}$.)
Since the winning coalition wins the vote, $|C_{OPT}| \leq |C_{WIN}|$. Thus,
$$|GG| \leq |W_{GR}| + |W_{RG}| + |W_{RR}| \leq |W_{GR}| + \varepsilon(n-1).$$
The positive charges are upper bounded by $|GG|+|W_{RG}|$.
(This is not an equality, as these edges might end up satisfied if their
other endpoint is flipped as well.) The negative charges are at least
$|W_{GR}| + |W_{RR}|$ minus the edges whose other endpoint has been
flipped as well and minus the edges incident to rogue neighbors (i.e.,
the $|W_{RR}|$ edges). Since an endpoint can only be flipped if it is
flippable, the total number of negative charges is at least
$|W_{GR}| - \varepsilon \nu n$.
So the total charge is at most:
\begin{eqnarray*}
|GG|+|W_{RG}|-(|W_{GR}|- \varepsilon \nu n) & \leq & |W_{GR}| + \varepsilon(n-1)
+ |W_{RG}| - |W_{GR}| + \varepsilon \nu n \\
& = &
\varepsilon(n-1) + |W_{RG}| + \varepsilon \nu n \\
& \leq &
2 \varepsilon (n-1) + \varepsilon \nu n,
\end{eqnarray*}
where we used the fact that $W_{RG}$ is a subset of the
rogue vertices and therefore has cardinality at most $\varepsilon(n-1)$.
\end{proof}
Denote by $\ensuremath{\operatorname{VAL}}\xspace$ the number of unsatisfied edges at the end of the algorithm.
\begin{lemma}\label{lem:last}
$\ensuremath{\operatorname{VAL}}\xspace-\ensuremath{\operatorname{OPT_{val}}}\xspace \leq (1 + o(1))\cdot\ensuremath{\operatorname{OPT_{val}}}\xspace \cdot 2\varepsilon \nu (2+\nu)$.
\end{lemma}
\begin{proof}
This difference is exactly the number of green edges (i.e.,
satisfied in $\ensuremath{\operatorname{OPT}}\xspace$) which become unsatisfied in $\ensuremath{\operatorname{VAL}}\xspace$ minus the number
of red edges (i.e., unsatisfied in $\ensuremath{\operatorname{OPT}}\xspace$) which become satisfied in
$\ensuremath{\operatorname{VAL}}\xspace$. This difference is exactly controlled by the charging
scheme. Combining with Lemmas~\ref{L:count} and~\ref{L:charges},
the sum of all charges is at most
\begin{align*}
\varepsilon\nu n(2\varepsilon(n-1) + \varepsilon\nu n) &= 2\varepsilon^2\nu n(n-1) + \varepsilon^2\nu^2 n^2 \\
&= 2\nu \varepsilon^2 \frac{n(n-1)}{2} (2+ \nu
\frac{n}{n-1}) \\
&= \varepsilon \frac{n(n-1)}{2} (4\nu \varepsilon +
2 \nu^2\varepsilon \frac{n}{n-1}) \\
& = \ensuremath{\operatorname{OPT_{val}}}\xspace \cdot 2\varepsilon \nu(2 + \nu (1 +
\frac{1}{n-1}))\\
& \leq \ensuremath{\operatorname{OPT_{val}}}\xspace \cdot 2\varepsilon \nu (2 + \nu) \cdot (1 +
\frac{1}{n-1}).
\end{align*}
\end{proof}
We note that Lemma \ref{L:mainlemma} is implied by Lemma \ref{lem:last}.
\subsection{Equivalent Implementation of Voting Algorithm}\label{sec:n2-alg}
We now give an equivalent interpretation of the Voting Algorithm.
Recall that $G=(V,E)$ is a simple, complete graph. We define the
multigraph ${G^2_{mult}}$ to be a graph that contains $n-2$ edges connecting
$u$ and $v$, each edge corresponding to a path $uwv$ for $w \in
V\setminus{\{u,v\}}$. Each new edge corresponding to a path $uwv$
inherits a $c_{uv}$ value from this path (i.e., $c_{uv} = (c_{uw} +
c_{wv}) \bmod q$). Now we create a simple, complete graph ${G^2_{\star}}$ on
the vertex set $V$ in which the edge label $c_{uv}$ for edge $uv$ is
determined by taking the most popular $c_{uv}$ value from the $n-2$
values in ${G^2_{mult}}$ (ties broken arbitrarily). Notice that it takes
$O(n^3)$ time to construct the instance ${G^2_{\star}}$ of $\textsc{Min-Lin-Eq$(q)$-Full}\xspace$,
since it takes $O(n)$ time to compute the constraint value on an edge.
Now we can run the Pivot Algorithm from Section \ref{sec:pivot} on
the input instance ${G^2_{\star}}$, which takes $O(n)$ time to output an
assignment and $O(n^2)$ time if we try every vertex as a pivot.
Notice that the best output of the Pivot Algorithm on ${G^2_{\star}}$ (over all
pivots) is
the same as the output of the Voting Algorithm on $G$.
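The construction of ${G^2_{\star}}$ can be sketched as follows (Python, illustration only, using the same dictionary representation of the constants as before). On a fully consistent instance every two-edge path votes for the original constant, so the derived constants coincide with the original ones:

```python
from collections import Counter

def g2_star(n, q, c):
    """Build the constants of the derived instance G^2_star: for each
    ordered pair (u, v), take the most popular value of
    (c_{uw} + c_{wv}) mod q over the n - 2 two-edge paths u-w-v
    (ties broken arbitrarily)."""
    c_star = {}
    for u in range(n):
        for v in range(n):
            if u != v:
                paths = Counter((c[(u, w)] + c[(w, v)]) % q
                                for w in range(n) if w not in (u, v))
                c_star[(u, v)] = paths.most_common(1)[0][0]
    return c_star
```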
\subsection{A Faster Randomized Voting Algorithm}\label{sec:randomized}
Instead of trying every vertex as the pivot in Step 1 of the
Voting Algorithm, we simply choose a single pivot uniformly at
random. We refer to this as the {\bf Randomized Voting Algorithm}.
If we choose a vertex $v$ at random, by Markov's Inequality, it
has probability at least $1/2$ of being incident to at most $2\varepsilon n$ red
edges in a fixed optimal solution. Thus, we execute the analysis used
in Section \ref{sec:voting} replacing $\varepsilon$ with $2\varepsilon$ which leads
to the following theorem.
\begin{theorem}\label{T:main-rand}
On a $(1-\varepsilon)$-satisfiable instance of \textsc{Min-Lin-Eq$(q)$-Full}\xspace, for $0 \leq
\varepsilon < \frac{1}{2}$, with probability at least $1/2$, the Randomized Voting Algorithm returns a solution
with at most $(\varepsilon+ c_{\varepsilon}\varepsilon^2) m$ unsatisfied
constraints where $\lim_{\varepsilon \to 0 } c_{\varepsilon}^{} = 32$.
\end{theorem}
\subsection{Extension to \textsc{Min-Unique-Games$(q)$-Full}\xspace}\label{sec:ext-ugc}
In the more general setting of \textsc{Min-Unique-Games$(q)$-Full}\xspace, we cannot assume that for any vertex $v$
there is an optimal solution that assigns the label $0$ to $v$. We
modify the Voting Algorithm from Section~\ref{sec:voting} slightly to
take this into account and obtain the following result, which differs
from the \textsc{Min-Lin-Eq$(q)$-Full}\xspace case (i.e., Theorem \ref{T:main}) only in the runtime.
\begin{theorem}\label{T:main:ugc}
On a $(1-\varepsilon)$-satisfiable instance of \textsc{Min-Unique-Games$(q)$-Full}\xspace, for $0 \leq
\varepsilon < \frac{1}{2}$, the Voting Algorithm returns a solution
with at most $(\varepsilon+ c_{\varepsilon}\varepsilon^2) m$ unsatisfied
constraints where $\lim_{\varepsilon \to 0 } c_{\varepsilon}^{} = 16$.
The runtime of the algorithm is $O(qn^3)$.
\end{theorem}
The only necessary modification of the Voting Algorithm is in Step 1.
For each label $\ell \in [q]$ and each pivot choice $p$, the algorithm
assigns label $\ell$ to $p$ and then computes the \ensuremath{\operatorname{TEMP}}\xspace and \ensuremath{\operatorname{FINAL}}\xspace labels
as before (see Steps 1.--3.~of the Voting Algorithm). The algorithm
returns the \ensuremath{\operatorname{FINAL}}\xspace labels with the fewest violated constraints. Thus,
the runtime is multiplied by a factor of $q$. The analysis of the
modified voting algorithm is identical to the analysis presented in
Section~\ref{sec:voting}, once we fix a pivot $p$ with label $\ell$,
such that the number of red edges incident to $p$ is at most
$\varepsilon(n-1)$.
\subsection{Extension to Everywhere-Dense Case}\label{sec:dense}
For a graph $G=(V,E)$, let $d(v)$ denote the degree of a vertex $v$.
Following \cite{arora1999polynomial}, we define an {\em everywhere
$(1-\delta)$-dense} graph $G=(V,E)$ to be a graph in which
$d(v) \geq (1-\delta)(n-1)$ for each vertex $v \in V$. We can extend the Voting Algorithm to this case with a slight modification.
\vspace{5mm}
\noindent
\fbox{\parbox{15cm}{
{\sc Voting Algorithm for Everywhere-Dense Graph}
\vspace{1mm}
{\it Input:} An instance of \textsc{Min-Lin-Eq$(q)$}\xspace on an everywhere
$(1-\delta)$-dense graph $G=(V,E)$.
\begin{itemize}
\item[1.] Pick a pivot $p \in V$. Label $p$ with $0$, and label each
vertex $v \in V$ adjacent to $p$ with temporary label $\ensuremath{\operatorname{TEMP}}\xspace(v)$, which is chosen according to the constraint on edge $(p, v)$. (Specifically, $\ensuremath{\operatorname{TEMP}}\xspace(v)
= c_{vp}$.)
\item[2.] For each vertex $v$, each neighboring vertex $u$ with
a $\ensuremath{\operatorname{TEMP}}\xspace$ label votes for a label for $v$, where $u$'s vote
is based on its temporary label $\ensuremath{\operatorname{TEMP}}\xspace(u)$. (Specifically, the vote
of $u$ for $v$ is $(c_{vu} + \ensuremath{\operatorname{TEMP}}\xspace(u)) \bmod q$.)
\item[3.] Then each $v$ is assigned a final label $\ensuremath{\operatorname{FINAL}}\xspace(v)$ according
to the outcome of the votes it received (with a plurality rule). Ties are
resolved arbitrarily.
\item[4.] Output the best solution over all choices of $p$ in Step 1.
\end{itemize}}}
\vspace{3mm}
Notice that in contrast to the Voting Algorithm on a complete graph,
$p$ also votes in Step 2. The algorithm would also work if $p$ does
not vote, but the analysis turns out to be cleaner if $p$ votes.
Let $\ensuremath{\operatorname{OPT_{val}}}\xspace$ denote the value of an optimal solution (i.e., the
minimum number of unsatisfied constraints) and let $\ensuremath{\operatorname{OPT_{val}}}\xspace = \varepsilon m$
(i.e., $\varepsilon = \ensuremath{\operatorname{OPT_{val}}}\xspace/m$). The proof of Theorem
\ref{T:maindense} is very similar to that of Theorem \ref{T:main} and
the details can be found in Appendix \ref{app:B}.
\begin{restatable}{theorem}{mainDenseCase}\label{T:maindense}
On a $(1-\varepsilon)$-satisfiable, everywhere $(1-\delta)$-dense instance of \textsc{Min-Unique-Games$(q)$}\xspace, for $0 \leq
\varepsilon < \frac{1}{2}$, the Voting Algorithm returns a solution
with at most $m(\varepsilon+ c_{\varepsilon}\varepsilon^2)/(1-\delta)$ unsatisfied
constraints where $\lim_{\varepsilon \to 0 } c_{\varepsilon}^{} = 16$.
The runtime of the algorithm is $O(qn^3)$.
\end{restatable}
\section{Introduction}
Commercial-off-the-shelf (COTS), government-off-the-shelf (GOTS), and open-source face recognition software is widely used for automatically searching large galleries of subjects, given a face image of an unknown subject, to identify a smaller subset of potential candidates.
These candidate matches are commonly presented to expert analysts for human adjudication. Almost all deployed facial recognition software, whether for commercial, law enforcement, or defense related applications, operates in the visible bands of the electromagnetic spectrum due to the ubiquity of low cost visible cameras.
The ability to synthesize a visible spectrum face from facial imagery acquired in the infrared spectrum provides a unique way to expand conventional face recognition capabilities beyond the visible spectrum without the need to develop highly customized software. Therefore, we propose a new synthesis method that leverages both global and local regions to produce a visible-like face image from a thermal infrared face image, so that existing face detection, landmark detection, and face recognition algorithms may be directly applied.
\begin{figure}[tbp]
\centering
\begin{subfigure}[b]{0.65\linewidth}
\includegraphics[width=\linewidth]{difficult_match.png}\\
\caption{}
\end{subfigure}\\
\begin{subfigure}[b]{0.65\linewidth}
\includegraphics[width=\linewidth]{easy_match.png}\\
\caption{}
\end{subfigure}
\caption{(a) Thermal infrared and visible images of a subject demonstrate the large modality gap, which makes matching and adjudication much more challenging. In contrast, (b) synthesized visible (from a thermal infrared) and visible images allow for more effective cross-spectrum matching and adjudication.}
\label{fig:challenge}
\end{figure}
Due to the abundance of visible spectrum face imagery (via social media, CCTV, etc.), improvements in processing hardware (CPU, GPU, and memory speeds), and advancement of machine learning algorithms, deep neural networks \cite{ParkhiVedaldi2015} have attained significant improvement in face recognition performance. Deep neural networks, such as convolutional neural networks (CNNs), have demonstrated near human-level recognition in the visible spectrum \cite{TaigmanYang2014}. CNNs are able to learn deep hierarchical feature representations that are robust to natural variations, including but not limited to pose, illumination, and expression (PIE) conditions.
Attention to infrared spectrum face recognition has increased over the last several years \cite{BourlaiKalka2010, JuefeiXuPal2015, KlareJain2010, NicoloSchmid2012}. Thermal infrared imaging is ideal for covert nighttime face recognition. However, thermal face signatures acquired at night need to be compared with visible face signatures from existing face databases and watchlists. Thermal-to-visible face recognition is challenged by a substantial modality gap between signatures in each domain. Recent approaches on cross-spectrum face recognition have partially addressed this issue, and it has been shown that incorporating polarization state information reduces the modality gap even further \cite{ShortHu2015}. In this paper, thermal imaging may refer to either conventional thermal or polarimetric thermal imaging.
Although state-of-the-art cross-spectrum face recognition algorithms have demonstrated significant promise \cite{GurtonYuffa2014,ShortHu2015b,ShortHu2015}, these customized algorithms are difficult to integrate into existing infrastructure and common practices. The top matches from recent cross-spectrum face recognition methods are not easily verifiable by human analysts due to the large differences in visual appearance of facial imagery collected in the thermal and visible bands of the electromagnetic spectrum. Moreover, a significant amount of time and resources have been invested in developing existing visible face recognition technology, which is less effective on cross-spectrum face recognition tasks. Thus, there is a need to alleviate the difficulty of matching thermal and visible spectrum imagery.
An example that demonstrates how thermal-to-visible synthesis can be used to facilitate cross-spectrum matching is shown in Fig.~\ref{fig:challenge}.
Therefore, the primary objective of this work is to present a new method for synthesizing visible spectrum face imagery from conventional thermal or polarimetric thermal face images. This method exploits the complementary representations and effects from various regularizations that are induced by multiple regions on the resulting synthesized image. We compare recognition performance of our approach with existing synthesis methods, and we also explore using the synthesized imagery for landmark detection.
\begin{figure*}[htbp]
\centering
\includegraphics[width=0.9\textwidth]{Figure1-overview-updated.png}
\caption{Given a thermal image, features are first extracted from a global region of interest (red) and local fiducial regions of interest (blue, yellow, and green) using a fully convolutional neural net, $g(\cdot)$. Then, region specific cross-spectrum mappings are used to estimate corresponding visible representations from the extracted thermal features. Lastly, by backpropagating the error between $g(\mathbf{x})$ and the estimated visible features from each region, we determine the gradient updates from both global and local regions, which are combined in order to produce a synthesized visible image.}
\label{fig:overview}
\end{figure*}
\section{Cross-Spectrum Face Recognition \& Synthesis}
The recent interest in cross-spectrum face recognition has led to two types of approaches: (1) custom cross-spectrum face recognition, and (2) cross-spectrum synthesis. The first approach is similar to traditional visible-to-visible face recognition. In general, traditional face recognition algorithms leverage a mapping that transforms images (commonly from a single spectral band) to a corresponding latent feature representation, which can then be matched with latent representations of gallery images either directly using similarity/dissimilarity metrics (e.g., cosine similarity, Euclidean distance) or indirectly using classifiers, such as partial least squares, support vector machines, or softmax classifiers. However, cross-spectrum face recognition aims to map corresponding images from different domains (e.g., visible and thermal) to a common latent subspace, so that matching can be performed.
In \cite{HuChoi2015}, a one-vs-all framework using binary classifiers, such as partial least squares (PLS), was used for thermal-to-visible face recognition, where thermal counter examples were used to augment the negative sample set during training. This approach demonstrated improved cross-spectrum recognition performance. However, there is a relatively fast rate of diminishing returns as the number of counter examples increases.
Sarfraz et al. \cite{SarfrazStiefelhagen2015} trained a model to estimate local thermal features (e.g., HOG or SIFT) from visible features. The reverse, namely estimating visible features from thermal features, was also considered, but a slight performance drop was reported. Riggan et al. \cite{RigganReale2015} proposed an auto-associative neural network model that learns a common latent subspace in which the visible and thermal features are highly correlated, so that a classifier trained using visible imagery generalizes to thermal imagery.
More recently, \cite{RigganShort2016} combined feature regression methods and PLS for polarimetric thermal-to-visible face recognition. This approach proposed a framework that combined local regression neural nets (e.g., \cite{SarfrazStiefelhagen2015, RigganReale2015}) with a discriminative classifier, outperforming state-of-the-art methods for both conventional and polarimetric thermal-to-visible face recognition.
The second class of methods, namely synthesis-based methods, perform image generation or image visualization. Here, the objective is to generate an image based on some conditional information (e.g., a latent representation, an image, or a class label). Synthesis-based methods were first studied for visible images. For example, Hoggles \cite{VondrickKhosla2013} generated images based on the Histogram of Oriented Gradients (HOG) representation in order to better understand detection failures, and \cite{MahendranVedaldi2015} demonstrated improved synthesis results using HOG, dense scale invariant feature transform (DSIFT), and CNN features. Mahendran et al.~\cite{MahendranVedaldi2015} pose the problem as the following optimization problem:
\begin{equation}
\mathbf{x}^* = \arg\min_{\mathbf{x}} \ell(f(\mathbf{x}),\mathbf{y}) + \lambda R(\mathbf{x})
\end{equation}
where $\mathbf{y}$ denotes the given feature vector, $f(\mathbf{x})$ is a function that extracts features from a given image $\mathbf{x}$, $\ell(\cdot)=\|f(\mathbf{x})-\mathbf{y}\|^2$, and $R(\mathbf{x})$ denotes the regularization term. The regularization term encourages (1) the pixel intensities to lie in a specific range using $\|\mathbf{x}\|^\alpha_\alpha$, and (2) a piece-wise constant reconstruction using
a total variation constraint $\sum_{i,j} \left ( \left ( x_{i,j+1} - x_{i,j} \right )^2 + \left ( x_{i+1,j} - x_{i,j} \right )^2\right )^{\beta / 2}$.
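As a concrete sketch, both regularization terms can be computed in a few lines of NumPy; the exponents and lambda weights below are illustrative defaults, not values prescribed by the text.

```python
import numpy as np

def regularizer(x, alpha=6.0, beta=2.0, lam_alpha=1.0, lam_tv=1.0):
    """Sketch of R(x): alpha-norm penalty plus total variation.

    x is a 2-D grayscale image; the exponents and lambda weights are
    illustrative choices, not values fixed by the text.
    """
    # (1) alpha-norm term: keeps pixel intensities in a bounded range
    alpha_norm = np.sum(np.abs(x) ** alpha)
    # (2) total variation term: finite differences across columns and rows
    dx = x[:, 1:] - x[:, :-1]   # x_{i,j+1} - x_{i,j}
    dy = x[1:, :] - x[:-1, :]   # x_{i+1,j} - x_{i,j}
    tv = np.sum((dx[:-1, :] ** 2 + dy[:, :-1] ** 2) ** (beta / 2.0))
    return lam_alpha * alpha_norm + lam_tv * tv
```

For a constant image the total variation term vanishes and only the norm penalty remains, which is why larger flat regions are favored by this prior.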
Specifically for cross-spectrum synthesis, the goal is to synthesize a corresponding visible image from a given thermal image (or polarimetric thermal image stack). The main reasons for performing cross-spectrum synthesis are: (1) to enable easy verification by a human-in-the-loop and (2)
to provide direct integration with state-of-the-art commercial and academic face recognition algorithms aimed at matching visible faces.
The first method to perform thermal-to-visible and polarimetric thermal-to-visible synthesis was that of Riggan et al.~\cite{RigganShort2016b}. This study demonstrated initial success in synthesizing a visible image from a thermal face image, reporting approximately a 20\% improvement in structural similarity on average. For this approach, the following optimization is performed:
\begin{equation}
\label{eq:opt2}
\mathbf{x}^* = \arg\min_{\mathbf{x}} \ell(g(\mathbf{x}), h \circ g(\mathbf{t})) + \lambda R(\mathbf{x})
\end{equation}
where, like \cite{MahendranVedaldi2015}, the objective is to determine the optimal image $\mathbf{x}$ that produces similar features using a fixed mapping $g(\cdot)$. However, in \cite{RigganShort2016b} the given features are produced by $g(\cdot)$ followed by another function $h(\cdot)$, which maps thermal (or polarimetric thermal) features to corresponding visible features. Since the mapping $g$ is fixed and $h$ is pretrained, the optimal visible image representation is given by solving Eq.~\ref{eq:opt2} when given a thermal image.
Also, \cite{ZhangPatel2017} applied a conditional generative adversarial network (CGAN) to perform cross-spectrum synthesis, which extends upon GAN-based methods \cite{GoodfellowPouget2014,RadfordMetz2015}. This state-of-the-art method was able to produce images with photo-realistic texture and reported face verification results in the form of true positive rate versus false positive rate. The discriminability of the synthesized imagery in \cite{ZhangPatel2017}, though surpassing the state-of-the-art at that time, can still be further improved.
The synthesis problem is fundamentally related to the recognition problem. In principle, if one can generate an image that is sufficiently discriminative, then one can inherently perform recognition.
\section{Cross-Spectrum Synthesis using Multiple Regions}
The primary objective is to synthesize a visible-like image from a given polarimetric thermal or conventional thermal face image. In practice, a polarimetric thermal image is composed of three Stokes vectors (or images): $S_0$, $S_1$, and $S_2$, where $S_0$ is the total intensity image (i.e., conventional thermal image), $S_1$ is the difference between horizontal and vertical polarization states, and $S_2$ is the difference between diagonal (45 degree and 135 degree) polarization states. Additionally, the degree of linear polarization (DoLP) representation can be derived from these Stokes vectors: $I_{DoLP}=(S_1^2 + S_2^2)^{1/2} / S_0$ \cite{YuffaGurton2014}. The polarization state information is useful in providing additional information about the structure and geometry of faces that is not provided by conventional thermal imagery.
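The DoLP image is straightforward to compute from the Stokes images. In the sketch below, the small `eps` guard against division by zero in low-intensity pixels is an added implementation detail, not part of the stated formula.

```python
import numpy as np

def degree_of_linear_polarization(s0, s1, s2, eps=1e-8):
    """I_DoLP = (S1^2 + S2^2)^(1/2) / S0, computed element-wise.

    s0, s1, s2 are the Stokes images; eps is an illustrative guard
    against division by zero, not part of the stated formula.
    """
    return np.sqrt(s1 ** 2 + s2 ** 2) / (s0 + eps)
```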
Here, we present a cross-spectrum synthesis method that enhances facial details (compared to recent work \cite{RigganShort2016b,ZhangPatel2017}) by optimizing an objective function that comprises complementary representations and regularizations induced from multiple fiducial regions. This multi-region objective function jointly leverages both global and local evidence to generate images that preserve overall facial structure and local fiducial details. An overview of our approach is shown in Fig.~\ref{fig:overview}.
\subsection{Problem Formulation}
Consider a given thermal face image, $\mathbf{t} \in \mathbb{R}^{h \times w \times c}$, where the goal is to produce an image $\mathbf{x} \in \mathbb{R}^{h \times w}$ that is sufficiently similar to the corresponding visible image representation of $\mathbf{t}$. Let the images be indexed by $0 \le u \le h-1$ and $0 \le v \le w-1$, where $\mathbf{t}_{u,v}$ and $x_{u,v}$ denote the intensity value (or vector) at the pixel location $(u,v)$. Note that $h$, $w$, and $c$ denote the height, width, and number of channels for the images, respectively. In general, $\mathbf{t}$ may be either a conventional thermal image (i.e., $c=1$) or a polarimetric thermal image (i.e., $c>1$) that may use any combination of $S_0$, $S_1$, $S_2$ (including DoLP).
In this work, we consider the impact of multiple loss and regularization functions that are induced by predefined regions of interest (ROIs). Although these ROIs are arbitrary, we consider ones corresponding to local discriminative features (e.g., eyes, nose, and mouth) and one global region that contains the entire face.
Consider a set of ROIs, $\left \{\beta_1, \dots, \beta_K \right \}$, where $\beta_i$ represents the $i^{th}$ ROI within a given thermal image $\mathbf{t}$. For each region, we want to minimize the following objective function
\begin{equation}
J_i(\mathbf{x}) =\left\{\begin{matrix}
\mathcal{L}(\tilde{\mathbf{x}}_i) &,\; \tilde{\mathbf{x}}_i = \{ x_{u,v} : (u,v) \in \beta_i \}\\
0 &,\; otherwise
\end{matrix}\right. ,
\end{equation}
where $\mathcal{L}(\tilde{\mathbf{x}}_i)=\ell(g(\tilde{\mathbf{x}}_i), h_i \circ g(\tilde{\mathbf{t}}_i)) + \lambda R(\tilde{\mathbf{x}}_i)$ represents the objective function similar to \cite{RigganShort2016b}, except it is computed using $\tilde{\mathbf{x}}_i$ and $\tilde{\mathbf{t}}_i$, which are defined over the region $\beta_i$. Here, $\ell(\mathbf{a},\mathbf{b})=\left \| \mathbf{a} - \mathbf{b} \right \|^2$ is the loss function, $R(\mathbf{x})$ enforces the $\alpha$-norm and total variation penalties, $g(\cdot)$ represents a generic mapping from an input image to some representative features, and $h_i(\cdot)$ is a region specific cross-spectrum mapping that corresponds to the region $\beta_i$. In practice, we implement $g$ as a fully convolutional neural network so that the size of the input image does not have to be defined, and $h_i$ is composed of $1\times1$ convolutional layers.
A synthesized image is generated by solving the following optimization problem:
\begin{equation}
\mathbf{x}^* = \arg\min_{\mathbf{x}} J(\mathbf{x}),
\label{eq:opt}
\end{equation}
where
\begin{equation}
J(\mathbf{x}) = \sum_i \omega_i J_i(\mathbf{x}).
\label{eq:total_obj}
\end{equation}
The combined objective function in Eq.~\ref{eq:total_obj} tries to balance global structure and local details using the weights, $\omega_i$, corresponding to each region. Note that $\sum_i \omega_i = 1$, which ensures that at least one of the $K$ regional objectives is enforced.
The multi-region objective generalizes the approach in \cite{RigganShort2016b}. When $K=1$, Eq.~\ref{eq:total_obj} reduces to the objective in Eq.~\ref{eq:opt2}. However, when $K > 1$, the generalized multi-region objective function leverages multiple overlapping regional objectives that can be combined to synthesize a face image with enhanced discriminative facial features.
Similar to \cite{MahendranVedaldi2015}, we use gradient descent with momentum to solve Eq.~\ref{eq:opt}. Here, the update equation is given by
\begin{equation}
\mathbf{x}^{j+1} = \mathbf{x}^{j} +\mathbf{v}^j,
\end{equation}
where the velocity term is
\begin{equation}
\mathbf{v}^{j+1}=\mu \mathbf{v}^j - \eta \nabla_{\mathbf{x}} J(\mathbf{x}^j).
\label{eq:momentum}
\end{equation}
The parameters $\mu$ and $\eta$ denote the momentum and learning rates, respectively. We fixed these parameters as $\mu=0.9$ and $\eta=0.004$. The instantaneous gradient term in Eq.~\ref{eq:momentum}, which guides the solution towards better minima, is
\begin{align}
\begin{split}
\nabla_{\mathbf{x}} & J(\mathbf{x}) = \sum_i \omega_i \nabla_{\mathbf{x}} J_i(\mathbf{x}) \\
= & \left\{\begin{matrix}
\sum_i \omega_i \nabla_{\mathbf{x}} \mathcal{L}(\tilde{\mathbf{x}}_i) &,\; \tilde{\mathbf{x}}_i = \{ x_{u,v} : (u,v) \in \beta_i \}\\
0 &,\; otherwise
\end{matrix}\right. .\\
\end{split}
\label{eq:gradient}
\end{align}
From Eq.~\ref{eq:gradient}, we see that the overall gradient is given by a linear combination of gradients of the regional objective functions: $\nabla_{\mathbf{x}} \mathcal{L}(\tilde{\mathbf{x}}_i)=\nabla_{\mathbf{x}} \ell(g(\tilde{\mathbf{x}}_i), h_i \circ g(\tilde{\mathbf{t}}_i)) + \lambda \nabla_{\mathbf{x}} R(\tilde{\mathbf{x}}_i)$.
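A minimal sketch of the update rule, assuming `grad_fn` computes the combined gradient $\nabla_{\mathbf{x}} J$ and using the momentum and learning rates fixed above:

```python
def momentum_step(x, v, grad_fn, mu=0.9, eta=0.004):
    """One gradient-descent-with-momentum update:
    x^{j+1} = x^j + v^j;  v^{j+1} = mu * v^j - eta * grad J(x^j).

    grad_fn stands in for the combined multi-region gradient.
    """
    x_next = x + v                    # position update uses the old velocity
    v_next = mu * v - eta * grad_fn(x)  # velocity update uses the gradient at x^j
    return x_next, v_next
```

Iterating this step from an initial guess (e.g., a mean face or random noise) drives $\mathbf{x}$ toward the synthesized visible image.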
Fundamentally, there are two ways the different regions may affect the synthesis results. The first is through the distinctive thermal-to-visible mapping $h_i$, which is trained using only features from the corresponding region. In other words, the mapping for the left eye is trained using only exemplars from the left eye region; the mapping for the entire face uses exemplars from across the face; and so on. The second way is through the regularization penalties, especially the total variation component. Given that the total variation is defined as the sum of gradient magnitudes over a given set of pixels, there is a direct dependence on the size of the image (or region). Thus, a larger region (e.g., the entire face region) is penalized more than a smaller region, yielding an image that blurs local details. This concept is illustrated in Fig.~\ref{fig:regions}.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.25\textwidth]{figure2-regions-updated}
\caption{Example synthesized imagery using smaller, local regions (left column) versus a larger, global region (right column).}
\label{fig:regions}
\end{figure}
\subsection{Implementation Details}
Until now, we have ignored the specific architectures used for the mappings $g(\cdot)$ and $h_i(\cdot)$ for $i=1\dots K$.
The input images used in our implementation are the $S_0$ images for the conventional thermal case and the $S_0$ and DoLP images for the polarimetric case. Moreover, when training, we augment the input images by applying a Non-Local Means (NLM) photometric normalization filter \cite{Struc2010}. The output targets are the features extracted by $g(\cdot)$ from the corresponding visible images. Note that no NLM normalization is applied to the output. We augment the training data in order to alleviate potential issues with over-fitting.
The generic mapping, $g(\cdot)$, is a CNN that produces a set of feature maps. Here, we assume that the types of features (e.g., HOG, SIFT, or some arbitrary deep feature representation) being extracted are identical between the visible and thermal images. In \cite{MahendranVedaldi2015}, separate CNN architectures are defined to produce HOG and DSIFT feature maps for purposes of image understanding. Here, we repurpose the DSIFT architecture for cross-spectrum synthesis.
Next, the cross-spectrum mappings, $h_i(\cdot)$ for $i=1\dots K$, are all implemented as $1\times1$ convolutional networks and are trained to map features extracted from thermal images to the features from corresponding visible images. Each of these networks has two hidden layers with 200 hidden units and uses the hyperbolic tangent activation function: $\sigma(u)=(1-\exp(-2u))/ (1+\exp(-2u))$.
Thus, given corresponding pairs of visible and thermal features, $(y^{(k)}, z^{(k)})$ for $k=1\dots N$, which are computed by mapping corresponding images with $g(\cdot)$, we train the cross-spectrum mapping to minimize
\begin{equation}
\sum_k \left \| y^{(k)} - h_i(z^{(k)})\right \|^2
\end{equation}
for each region.
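Because a $1\times1$ convolution applies the same small network independently at every spatial location, $h_i$ can be sketched as a per-pixel MLP. The 128-dimensional features and random weights below are illustrative placeholders, not trained parameters; only the two hidden layers of 200 tanh units follow the text.

```python
import numpy as np

def cross_spectrum_map(features, layers):
    """Apply a 1x1-convolutional mapping to a (h, w, d_in) feature map.

    layers is a list of (W, b) pairs; hidden layers use tanh and the
    output layer is linear.
    """
    a = features
    for k, (W, b) in enumerate(layers):
        a = a @ W + b                 # per-pixel affine transform
        if k < len(layers) - 1:
            a = np.tanh(a)            # hidden-layer activation
    return a

# Illustrative setup: two hidden layers of 200 units, as in the text;
# the 128-dimensional features and random weights are assumptions.
rng = np.random.default_rng(0)
sizes = [128, 200, 200, 128]
layers = [(0.01 * rng.standard_normal((m, n)), np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]
visible_feats = cross_spectrum_map(rng.standard_normal((8, 8, 128)), layers)
```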
After training $h_i(\cdot)$ for $i=1\dots K$, the multi-region optimization in Eq.~\ref{eq:opt} can be performed. We adapted the source code from \cite{MahendranVedaldi2015} to incorporate multiple predefined regions for synthesizing a visible image from a given thermal image. This is done by tracking the gradient updates for each region and weighting them accordingly. Specifically, within the left and right eye regions, the locally computed gradients are weighted by a factor of 0.95 and the globally computed gradients are weighted by 0.05. Within the region covering the nose and mouth, the locally computed gradients are weighted by a factor of 0.75 and the globally computed gradients are weighted by 0.25. The remaining parts of the image only consider the globally computed gradients.
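This weighting scheme can be realized as per-pixel weight maps multiplying the local and global gradients. The bounding boxes used in the demo below are the ROIs defined for the experiments (Section~\ref{sec:exp}).

```python
import numpy as np

def region_weight_maps(h, w, eye_boxes, nose_mouth_box):
    """Per-pixel weights for local vs. global gradient updates.

    Boxes are (x, y, width, height). Eye regions: 0.95 local / 0.05
    global; nose-mouth region: 0.75 / 0.25; elsewhere only the global
    gradient (weight 1.0) is used.
    """
    w_local = np.zeros((h, w))
    w_global = np.ones((h, w))
    weighted = [(b, 0.95) for b in eye_boxes] + [(nose_mouth_box, 0.75)]
    for (x, y, bw, bh), wl in weighted:
        w_local[y:y + bh, x:x + bw] = wl
        w_global[y:y + bh, x:x + bw] = 1.0 - wl
    return w_local, w_global

# ROIs from the experimental setup, on 200 x 250 registered images
w_local, w_global = region_weight_maps(
    250, 200,
    eye_boxes=[(30, 89, 64, 34), (106, 89, 64, 34)],
    nose_mouth_box=(70, 125, 65, 85))
```

The total gradient at each pixel is then `w_local * local_grad + w_global * global_grad`.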
The biggest difference between this work and \cite{RigganShort2016b} is the generalization to multi-region cross-spectrum synthesis, which is shown to improve the quality of the synthesized imagery for matching against visible face imagery using existing face recognition algorithms (Section~\ref{sec:exp}).
\subsection{Relevance to GANs}
Unlike the original GAN framework introduced by Goodfellow et al.~\cite{GoodfellowPouget2014}, which fundamentally learns how to produce a random, but photo-realistic image from some underlying distribution, our multi-region cross-spectrum synthesis method aims to produce, given a thermal image, a visible image that is sufficiently discriminative---meaning that the synthesized image can be matched against gallery images with a high degree of accuracy. Conceptually, this is not all that different from conditional GAN-based methods, like \cite{IsolaZhu2017}, where an image in one domain is generated based on an input from another domain. However, the key objective for GAN-based methods is to produce imagery with similar photometric properties (e.g., dynamic range) as the underlying distribution (or conditional distribution) provided by the training set. Thus, it is possible that GANs may not emphasize discriminability as much as photo-realism. Zhang et al.~\cite{ZhangPatel2017} alleviate the lack of discriminability, to some degree, by incorporating an identity loss as part of their GAN-based approach for cross-spectrum synthesis. In contrast, our approach, which is based on synthesizing images from edge-based features (both locally and globally), preserves much of the discriminative facial structure that is useful for matching faces. In the following section, we demonstrate that our multi-region cross-spectrum synthesis method produces more discriminative imagery than current state-of-the-art approaches.
\section{Experiments and Results}
\label{sec:exp}
We evaluate our multi-region cross-spectrum synthesis model (Section 3) using the database\footnote{The dataset is available to the research community upon request with a signed database release agreement. Requests for the database can be made by contacting Shuowen (Sean) Hu at {\it shuowen.hu.civ@mail.mil}.} from \cite{HuShort2016}, which contains corresponding polarimetric thermal and visible faces from 60 distinct individuals. The imagery contains distance and expression variations. For complete information regarding the data collection, please refer to \cite{HuShort2016}.
For the experiments described in this paper, we use 30 subjects (baseline and expression) for training, and the remaining 30 subjects for evaluation. The imagery is aligned using the eye coordinates, as in \cite{RigganShort2016b}.
For this dataset, we define the bounding boxes for the local regions to be: $[30\; 89\; 64\; 34]$ for the right eye region, $[106\; 89\; 64\; 34]$ for the left eye region, and $[70\; 125\; 65\; 85]$ for the region covering the nose and mouth. Note the bounding box format is {\it upper left x coordinate}, {\it upper left y coordinate}, {\it width}, and {\it height}. These defined regions are used with registered images from \cite{HuShort2016}, which are $200 \times 250$ pixels.
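One indexing pitfall worth noting: the bounding-box format lists $x$ before $y$, while image arrays are typically indexed row ($y$) first. A minimal cropping helper:

```python
import numpy as np

def crop(img, box):
    """Extract an ROI given a box in (x, y, width, height) format.

    NumPy indexes rows (y) first, so the slice order is reversed
    relative to the box format.
    """
    x, y, w, h = box
    return img[y:y + h, x:x + w]

face = np.zeros((250, 200))              # registered image: 250 rows, 200 cols
right_eye = crop(face, (30, 89, 64, 34))
nose_mouth = crop(face, (70, 125, 65, 85))
```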
We consider two use cases for our synthesis method: (1) adjudication of matches by an analyst and (2) utilization of existing face recognition components or systems (e.g., landmark detection and matching). Therefore, we compare the resulting synthesized imagery from our approach with the state-of-the-art cross-spectrum synthesized imagery using:
\begin{enumerate}
\item qualitative analysis,
\item verification performance,
\item landmark detection accuracy.
\end{enumerate}
This analysis highlights the key differences in visual appearance, discriminability, and interoperability of synthesized imagery.
\begin{figure*}[htbp]
\centering
\begin{subfigure}[b]{0.02\linewidth}
\rotatebox{90}{\hspace{0.08cm} Polar-to-Vis \hspace{0.78cm} $S_0$-to-Vis} \\ \\
\end{subfigure}
~
\begin{subfigure}[b]{0.14\linewidth}
\includegraphics[width=\linewidth]{s0.png}\\
\includegraphics[width=\linewidth]{polar.png}
\caption{Raw}
\end{subfigure}
~
\begin{subfigure}[b]{0.14\linewidth}
\includegraphics[width=\linewidth]{visible_gt.png}\\
\includegraphics[width=\linewidth]{visible_gt.png}
\caption{Ground Truth}
\end{subfigure}
~
\begin{subfigure}[b]{0.14\linewidth}
\includegraphics[width=\linewidth]{Manhendran_s0.png}\\
\includegraphics[width=\linewidth]{Manhendran_polar.png}
\caption{\cite{MahendranVedaldi2015}}
\end{subfigure}
~
\begin{subfigure}[b]{0.14\linewidth}
\includegraphics[width=\linewidth]{Riggan_s0.png}\\
\includegraphics[width=\linewidth]{Riggan_polar.png}
\caption{\cite{RigganShort2016b}}
\end{subfigure}
~
\begin{subfigure}[b]{0.14\linewidth}
\includegraphics[width=\linewidth]{Zhang_s0.png}\\
\includegraphics[width=\linewidth]{Zhang_polar.png}
\caption{\cite{ZhangPatel2017}}
\end{subfigure}
~
\begin{subfigure}[b]{0.14\linewidth}
\includegraphics[width=\linewidth]{multiregion_s0.png} \\
\includegraphics[width=\linewidth]{multiregion_polar.png}
\caption{{\bf ours}}
\end{subfigure}
\caption{Comparison of synthesis methods.}
\label{fig:results1}
\end{figure*}
\subsection{Qualitative Analysis}
In Fig.~\ref{fig:results1}, we compare multiple thermal-to-visible synthesis approaches. The baseline approach \cite{MahendranVedaldi2015}, which does not explicitly perform cross-spectrum synthesis, produces imagery akin to thermal imagery. By comparing the results of \cite{RigganShort2016b} with our results using multiple local regions, the benefits of enhanced geometric and fiducial details are observed. Also, when comparing our results with the photo-realistic results from \cite{ZhangPatel2017}, it is not clear whether photo-realistic texture is a critical requirement for making visual comparisons between a visible image and a synthesized one. Although the texture produced by our approach is not photo-realistic, our approach does produce images that are more structurally similar to the ground truth visible images, which is important for discriminability (both for humans and for recognition algorithms).
Additionally, we compare the synthesized imagery of multiple subjects side-by-side (Fig.~\ref{fig:results2}) to qualitatively assess the discriminative nature of the synthesis methods. Although the GAN-based approach \cite{ZhangPatel2017} does a better job of synthesizing the appearance of visible spectrum imagery, our approach seems to capture more of the structural information, which is often useful for face recognition.
The apparent synthesis artifacts in our approach (as well as in \cite{RigganShort2016b}) are partially due to known issues related to using a total variation based reconstruction error term, as pointed out in \cite{MahendranVedaldi2015}. These artifacts may be enhanced even further by errors introduced from the cross-spectrum mapping. However, despite such artifacts and lack of photo-realism, we demonstrate in Section~\ref{sec:verfication} that our multi-region synthesis achieves better verification performance than state-of-the-art methods. This may not be so surprising, since deep neural networks are often trained to tolerate specific natural variations in facial appearance, such as changes in pose, illumination, and expression.
\begin{figure*}[htb]
\centering
\includegraphics[width=0.60\linewidth]{discriminative002.png}
\caption{Discriminability of ground truth images (top row), $S_0$-to-Vis synthesized imagery (middle row), and Polar-to-Vis synthesized imagery (bottom row).}
\label{fig:results2}
\end{figure*}
\subsection{Verification Performance}
\label{sec:verfication}
Next, we compare the Area Under the Curve (AUC) and Equal Error Rate (EER) measures from the Receiver Operating Characteristic (ROC) curves when matching various types of synthesized faces with visible gallery images. For fair comparison, we perform matching using the same tightly cropped regions as in \cite{RigganShort2016b,ZhangPatel2017}. Table~\ref{tab:verification} reports the AUC and EER for baseline methods, state-of-the-art approaches, and our new multi-region thermal-to-visible synthesis method. Note that all methods use the vgg-face architecture \cite{ParkhiVedaldi2015}, where the last fully connected layer, prior to the softmax classifier, is used to compute a latent representation for gallery and probe images. Then, we use the cosine similarity measure to produce similarity scores between latent representations from gallery and probe images.
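A sketch of the scoring pipeline: cosine similarity between latent representations, plus a simple threshold-sweep approximation of the EER (an evaluation toolkit would interpolate the full ROC curve instead).

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two latent feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def equal_error_rate(genuine, impostor):
    """Approximate EER: sweep a threshold over all observed scores and
    return the midpoint of FAR and FRR where they are closest.

    genuine and impostor are arrays of similarity scores for matched
    and mismatched pairs, respectively.
    """
    best_gap, eer = np.inf, 1.0
    for t in np.concatenate([genuine, impostor]):
        far = float(np.mean(impostor >= t))   # impostor pairs accepted
        frr = float(np.mean(genuine < t))     # genuine pairs rejected
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2.0
    return eer
```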
In most cases, we use the pre-trained vgg-face model, which is trained using only visible faces, to demonstrate the discriminability of our synthesized imagery. The only exception is that the ``Raw'' method in Table~\ref{tab:verification} is fine-tuned with thermal facial imagery, which demonstrates that transfer learning may not be able to overcome such a large modality gap for thermal-to-visible face recognition.
The baseline methods, ``Raw'' and Mahendran et al.~\cite{MahendranVedaldi2015}, serve to demonstrate the need for a synthesis-based approach and a cross-spectrum component, respectively.
We also compare our multi-region synthesis approach with \cite{RigganShort2016b}, which is a special case of this work (i.e., a single region), to demonstrate how using multiple local regions to synthesize a more visible-like image impacts verification performance. Lastly, we compare our multi-region synthesis method with the state-of-the-art performance in Zhang et al.~\cite{ZhangPatel2017}.
From the table, we see that our multi-region synthesis method achieves about a 5\% improvement in AUC over the state-of-the-art when using polarization state information, and about a 3\% improvement in AUC for conventional thermal-to-visible synthesis. Similarly, an improvement of about 4\% in EER over the state-of-the-art is observed for the polarimetric case, and about 1\% for the conventional thermal case.
\begin{table*}[htpb]
\caption{Verification performance comparison between baseline methods, state-of-the-art methods, and our multi-region synthesis method for both polarimetric thermal (polar) and conventional thermal ($S_0$) cases.}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
Method & AUC (polar) & AUC ($S_0$) & EER (polar) & EER ($S_0$) \\ \hline
Raw & 50.35\% & 58.64\% & 48.96\% & 43.96\% \\
Mahendran et al.~\cite{MahendranVedaldi2015} & 58.38\% & 59.25\% & 44.56\% & 43.56\% \\ \hline
Riggan et al.~ \cite{RigganShort2016b} & 75.83\% & 68.52\% & 33.20\% & 34.36\% \\
Zhang et al.~\cite{ZhangPatel2017} & 79.90\% & 79.30\% & 25.17\% & 27.34\% \\ \hline
Multi-Region Synthesis ({\bf ours}) & {\bf 85.43\%} & {\bf 82.49\%} & {\bf 21.46\%} & {\bf 26.25\%} \\ \hline
\end{tabular}
\end{center}
\label{tab:verification}
\end{table*}%
\subsection{Landmark Detection}
Lastly, we evaluate the accuracy of using our synthesized imagery for landmark detection on thermal imagery. Face recognition performance often depends on the quality of landmark detection. For thermal-to-visible face recognition, this is a challenging task, since few works (e.g., \cite{WangLiu2013}) have focused on landmark detection for thermal images. Therefore, we address this problem by evaluating the accuracy of the landmarks detected in our synthesized imagery. We perform this experiment using pre-aligned visible and thermal imagery. We use 68 landmarks from the visible imagery as ``ground truth,'' obtained using DLIB's \cite{dlib09} landmark detector. Then, we extract the same 68 landmarks from the synthesized imagery and report the average Euclidean distance between the two sets of landmarks. We found the landmark errors to be {\bf 4.95} pixels and {\bf 4.83} pixels on average for images synthesized from conventional and polarimetric thermal images, respectively. Without synthesis, applying existing face and landmark detection software (designed for visible imagery) directly to thermal facial imagery is ineffective, since few (if any) faces and landmarks are detected. Therefore, cross-spectrum synthesis methods, such as ours or \cite{ZhangPatel2017}, can be used to enable cross-spectrum landmark detection without having to design custom detectors for thermal imagery.
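The reported landmark error is simply the mean Euclidean distance over the corresponding points:

```python
import numpy as np

def mean_landmark_error(gt, pred):
    """Mean Euclidean distance between corresponding landmark sets.

    gt and pred are (68, 2) arrays of (x, y) landmark coordinates.
    """
    return float(np.mean(np.linalg.norm(gt - pred, axis=1)))
```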
Fig.~\ref{fig:landmarks} shows several examples of aligned visible and thermal imagery with the detected landmarks. This figure shows that we can detect the fiducial landmarks with a certain degree of accuracy (i.e., within a few pixels) when using the multi-region synthesized imagery, which can be helpful when matching thermal and visible faces.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.22\linewidth]{lm_visible_1}
\includegraphics[width=0.22\linewidth]{lm_visible_2}
\includegraphics[width=0.22\linewidth]{lm_visible_3}
\includegraphics[width=0.22\linewidth]{lm_visible_4}\\
\includegraphics[width=0.22\linewidth]{lm_s0_1}
\includegraphics[width=0.22\linewidth]{lm_s0_2}
\includegraphics[width=0.22\linewidth]{lm_s0_3}
\includegraphics[width=0.22\linewidth]{lm_s0_4}\\
\includegraphics[width=0.22\linewidth]{lm_polar_1}
\includegraphics[width=0.22\linewidth]{lm_polar_2}
\includegraphics[width=0.22\linewidth]{lm_polar_3}
\includegraphics[width=0.22\linewidth]{lm_polar_4}
\caption{Landmark detections for ground truth images (top row), $S_0$-to-Vis synthesized imagery (middle row), and Polar-to-Vis synthesized imagery (bottom row).}
\label{fig:landmarks}
\end{figure}
\section{Conclusions}
In this paper, we presented a new multi-region synthesis approach that produces discriminative visible-like faces from either conventional thermal or polarimetric thermal facial imagery. We also demonstrated that our synthesized imagery provides new state-of-the-art verification performance, which better facilitates interoperability with existing visible face recognition frameworks, such as the open source vgg-face model. Furthermore, our synthesized imagery was shown to be effective for accurately detecting landmarks in thermal imagery, which is a critical component of face recognition systems.
We showed that by jointly optimizing over both global and local facial regions, which provided complementary representations and diverse regularization penalties, we were able to produce highly discriminative representations. Despite the fact that the synthesized imagery does not produce a photo-realistic texture, the verification performance achieved was better than both baseline and recent approaches when matching the synthesized faces with visible faces.
Secondly, we demonstrated that the synthesized imagery provided by our multi-region optimization method can be used to enable landmark detection capabilities for conventional thermal or polarimetric thermal imagery, where few landmark detection methods currently exist. Since many facial recognition pipelines typically depend on accurate landmark detection, this proves to be beneficial for thermal-to-visible face recognition. We showed that when using DLIB's \cite{dlib09} open-source landmark detection software on our synthesized imagery, the landmarks were detected accurately (less than 5 pixels in distance).
While recognition performance using custom cross-spectrum methodologies may be currently superior to those reported in this paper, the fundamental benefits of our framework include: the ability to adjudicate potential matches and the ability to leverage state-of-the-art visible spectrum face recognition technology for thermal-to-visible matching.
{\small
\bibliographystyle{ieee}
\section{Introduction}
The field of Numerical Relativity (NR) has progressed at a remarkable
pace since the breakthroughs of 2005~\cite{Pretorius:2005gq,
Campanelli:2005dd, Baker:2005vv} with the first successful fully
non-linear dynamical numerical simulation of the inspiral, merger, and
ringdown of an orbiting black-hole binary (BHB) system.
BHB physics has rapidly matured into a critical tool for
gravitational wave (GW) data analysis and astrophysics. Recent
developments include: studies of the orbital dynamics of spinning
BHBs~\cite{Campanelli:2006uy, Campanelli:2006fg, Campanelli:2006fy,
Herrmann:2007ac,
Herrmann:2007ex, Marronetti:2007ya, Marronetti:2007wz, Berti:2007fi},
calculations of recoil velocities from the merger of unequal mass
BHBs~\cite{Herrmann:2006ks, Baker:2006vn, Gonzalez:2006md}, and
very large recoils acquired by the remnant of the merger of two spinning BHs
~\cite{Campanelli:2007ew, Campanelli:2007cga, Lousto:2008dn, Pollney:2007ss,
Gonzalez:2007hi, Brugmann:2007zj, Choi:2007eu, Baker:2007gi,
Schnittman:2007ij, Baker:2008md, Healy:2008js, Herrmann:2007zz,
Herrmann:2007ex, Tichy:2007hk, Koppitz:2007ev, Miller:2008en},
empirical models relating the final mass and spin of
the remnant with the spins of the individual BHs
~\cite{Boyle:2007sz, Boyle:2007ru, Buonanno:2007sv, Tichy:2008du,
Kesden:2008ga, Barausse:2009uz, Rezzolla:2008sd, Lousto:2009mf}, and
comparisons of waveforms and orbital dynamics of
BHB inspirals with post-Newtonian (PN)
predictions~\cite{Buonanno:2006ui, Baker:2006ha, Pan:2007nw,
Buonanno:2007pf, Hannam:2007ik, Hannam:2007wf, Gopakumar:2007vh,
Hinder:2008kv}.
The surprising discovery~\cite{Campanelli:2007ew, Campanelli:2007cga}
that the merger of binary black holes can produce recoil
velocities up to $4000\ \rm km\,s^{-1}$, and hence allow the remnant to escape
from major galaxies,
led to numerous theoretical and observational
efforts to find traces of this phenomenon. Several studies made
predictions of specific observational features of recoiling
supermassive black holes in the cores of galaxies in the
electromagnetic spectrum \citep{Haiman:2008zy, Shields:2008va,
Lippai:2008fx, Shields:2007ca, Komossa:2008ye, Bonning:2007vt,
Loeb:2007wz} from infrared \citep{Schnittman:2008ez} to X-rays
\citep{Devecchi:2008qy, Fujita:2008ka, Fujita:2008yi} and
morphological aspects of the galaxy cores \citep{Komossa:2008as,
Merritt:2008kg, Volonteri:2008gj}. Notably, there began to appear
observations indicating the possibility of detection of such effects
\citep{Komossa:2008qd, Strateva:2008wt,Shields:2009jf}, and although
alternative explanations are possible \citep{Heckman:2008en,
Shields:2008kn, Bogdanovic:2008uz, Dotti:2008yb}, there is still the
exciting possibility that these observations can lead to the first
confirmation of a prediction of General Relativity in the
highly-dynamical, strong-field regime.
Numerical simulations of the BHB problem have sampled the parameter
space of the binary for different values of the binary's mass ratio
$q$ and arbitrary orientations of the individual spins of the holes.
Two astrophysically important regions of this parameter space remain
challenging to describe accurately by numerical simulations: the small
$q$ limit, although recent developments of the numerical techniques have
produced a successful simulation of the last few orbits before the merger of
a $q=1/100$ binary~\cite{Lousto:2010ut}, and the near-maximal
spin limit.
The most recent simulations of highly-spinning BHBs were
of non-precessing binaries with intrinsic
spins $\alpha=0.95$ \cite{Lovelace:2010ne}.
Since BHBs with $\alpha=1$ are still elusive to full numerical
simulations, and the configuration that maximizes the gravitational
recoil is one that starts with maximally spinning BHs, with opposite
spins lying on the orbital plane
\cite{Campanelli:2007ew,Campanelli:2007cga}, we will model these
configurations for different values of the intrinsic spin parameter up
to $\alpha=0.92$ (which is achievable with current techniques to solve
initial ``puncture'' data) and then extrapolate to
$\alpha=1$ using an improved version of our original
empirical formula~\cite{Campanelli:2007ew, Campanelli:2007cga,
Lousto:2009mf}.
In Ref.~\cite{Lousto:2009mf} we extended our original empirical formula
for the recoil velocity imparted to the remnant of a
BHB merger~\cite{Campanelli:2007ew, Campanelli:2007cga} to include
next-to-leading-order corrections, still linear in the spins.
The extended formula has the form:
\begin{eqnarray}\label{eq:Pempirical}
\vec{V}_{\rm recoil}(q,\vec\alpha)&=&v_m\,\hat{e}_1+
v_\perp(\cos\xi\,\hat{e}_1+\sin\xi\,\hat{e}_2)+v_\|\,\hat{n}_\|,\nonumber\\
v_m&=&A\frac{\eta^2(1-q)}{(1+q)}\left[1+B\,\eta\right],\nonumber\\
v_\perp&=&H\frac{\eta^2}{(1+q)}\left[
(1+B_H\,\eta)\,(\alpha_2^\|-q\alpha_1^\|)\right.\nonumber\\
&&\left.+\,H_S\,\frac{(1-q)}{(1+q)^2}\,(\alpha_2^\|+q^2\alpha_1^\|)\right],\nonumber\\
v_\|&=&K\frac{\eta^2}{(1+q)}\Bigg[
(1+B_K\,\eta)
\left|\alpha_2^\perp-q\alpha_1^\perp\right|
\nonumber \\ && \quad \times
\cos(\Theta_\Delta-\Theta_0)\nonumber\\
&&+\,K_S\,\frac{(1-q)}{(1+q)^2}\,\left|\alpha_2^\perp+q^2\alpha_1^\perp\right|
\nonumber \\ && \quad \times
\cos(\Theta_S-\Theta_1)\Bigg],
\end{eqnarray}
where $\eta=q/(1+q)^2$, with $q=m_1/m_2$
the mass ratio of the smaller to larger mass hole,
$\vec{\alpha}_i=\vec{S}_i/m_i^2$, the indices $\perp$ and $\|$ refer to
directions perpendicular and parallel to the orbital angular momentum, respectively,
$\hat{e}_1,\hat{e}_2$ are
orthogonal unit vectors in the orbital plane, and $\xi$ measures the
angle between the unequal-mass and spin contributions to the recoil
velocity in the orbital plane. The fitting constants were determined
from newly available runs.
The angle $\Theta$ is defined as the angle
between the in-plane component of $\vec \Delta = M (\vec S_2/m_2 - \vec
S_1/m_1)$ or $\vec S=\vec S_1+\vec S_2$
and a fiducial direction at merger (see the technique of Ref.~\cite{Lousto:2008dn}).
Phases $\Theta_0$ and $\Theta_1$ depend
on the initial separation of the holes for quasicircular orbits
(astrophysically realistic evolutions of comparable-mass black holes
lead to nearly zero-eccentricity mergers).
The empirical formula (\ref{eq:Pempirical}) above was obtained by
assuming the post-Newtonian dependence on the spin and mass ratio of
instantaneous radiated linear momenta \cite{Kidder:1995zr} where the
coefficients are to be fitted by full numerical simulations.
Second order corrections in the spin have been obtained recently
\cite{Racine:2008kj} and could be added to the empirical formula.
Here, in this paper, we will consider instead the particular family of
configurations that lead to the maximum
recoil~\cite{Campanelli:2007ew, Campanelli:2007cga, Gonzalez:2007hi,
Dain:2008ck}, where $q=1$ and the two spins are in the orbital plane,
equal in magnitude, and opposite in direction. These configurations
are $\pi$-symmetric, i.e.\ rotating the system by 180 degrees
around the symmetry axis leads to the same configuration. This implies,
in particular, that only odd powers of the spin and of $\cos\Theta$
are involved. We will then perform a series of simulations that vary
both the magnitude of the (intrinsic) spin in the range
$\alpha=0.2-0.92$ and the initial angle of the individual black-hole
spin and orbital linear momentum.
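The statement that this family maximizes the recoil can be checked against the leading term of the empirical formula: for equal-and-opposite in-plane spins of magnitude $\alpha$, $|\alpha_2^\perp - q\alpha_1^\perp| = (1+q)\alpha$, so $v_\| \propto K\eta^2\alpha$, which peaks at $q=1$. A quick numerical scan (a sketch; we take the previously fitted $K\approx6.0\times10^4$ quoted later in the text, although the value of $K$ does not affect the location of the maximum):

```python
def vpar_leading(q, alpha=1.0, K=6.0e4):
    """Leading-order out-of-plane kick from Eq. (1): v ~ K eta^2/(1+q) *
    |alpha2 - q*alpha1|.  For equal-and-opposite in-plane spins of magnitude
    alpha, |alpha2 - q*alpha1| = (1 + q)*alpha, so v ~ K * eta^2 * alpha."""
    eta = q / (1.0 + q) ** 2
    return K * eta ** 2 * alpha

qs = [i / 1000.0 for i in range(1, 1001)]   # mass ratios 0.001 .. 1
q_best = max(qs, key=vpar_leading)
print(q_best, vpar_leading(q_best))  # -> 1.0 3750.0
```

At $q=1$, $\alpha=1$ this leading term gives $K/16 \approx 3750$ km s$^{-1}$, the right ballpark for the maximum recoil discussed in this paper.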
For a first exploration of the extended spin dependence we consider
cubic and possible fifth-order corrections~\cite{Boyle:2007ru}
to the empirical formula (\ref{eq:Pempirical}) of the form
\begin{eqnarray}
v_\| &=& \left(V_{1,1} \alpha + V_{1,3} \alpha^3\right)
\cos(\Theta_\Delta-\Theta_0) \nonumber \\
&+&
\left(V_{3,1} \alpha + V_{3,3} \alpha^3 + V_{3,5} \alpha^5 \right)
\cos(3 \Theta_\Delta-3 \Theta_3),
\label{eq:emp}
\end{eqnarray}
where $V_{1,1} = 2 K(1+\eta B_K) \frac{\eta^2}{(1+q)}$,
and the remaining terms are higher-order corrections to
Eq.~(\ref{eq:Pempirical}).
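For concreteness, the angular dependence of Eq.~(\ref{eq:emp}) can be evaluated directly once the $V_{i,j}$ are known. The sketch below uses the fitted values reported in Table~\ref{tab:fits} further on, sets $V_{3,5}=0$, and absorbs the phases $\Theta_0$, $\Theta_3$ (which depend on the initial separation) into the angle:

```python
import math

# Fit coefficients (km/s) from Table III of this paper; phases set to zero
# for illustration only (they depend on the initial separation).
V11, V13 = 3681.77, -15.46
V31, V33 = 15.65, 105.90

def v_parallel(alpha, theta_deg):
    """Out-of-plane recoil of Eq. (2) for the q = 1, anti-aligned
    in-plane-spin family, with V_{3,5} = 0 and phases absorbed into theta."""
    th = math.radians(theta_deg)
    return ((V11 * alpha + V13 * alpha ** 3) * math.cos(th)
            + (V31 * alpha + V33 * alpha ** 3) * math.cos(3.0 * th))

print(v_parallel(0.92, 0.0))  # maximal-orientation kick at alpha = 0.92, km/s
```

Note the $\pi$-symmetry discussed above is manifest: $v_\|(\alpha, \Theta+180^\circ) = -v_\|(\alpha, \Theta)$.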
\section{Techniques}
\label{sec:techniques}
To compute the numerical initial data, we use the puncture
approach~\cite{Brandt97b} along with the {\sc
TwoPunctures}~\cite{Ansorg:2004ds} thorn. In this approach the
3-metric on the initial slice has the form $\gamma_{a b} = (\psi_{BL}
+ u)^4 \delta_{a b}$, where $\psi_{BL}$ is the Brill-Lindquist
conformal factor, $\delta_{ab}$ is the Euclidean metric, and $u$ is
(at least) $C^2$ on the punctures. The Brill-Lindquist conformal
factor is given by $ \psi_{BL} = 1 + \sum_{i=1}^n m_{i}^p / (2 |\vec r
- \vec r_i|), $ where $n$ is the total number of `punctures',
$m_{i}^p$ is the mass parameter of puncture $i$ ($m_{i}^p$ is {\em
not} the horizon mass associated with puncture $i$), and $\vec r_i$ is
the coordinate location of puncture $i$. For the initial (conformal) extrinsic
curvature we take the analytic form $\hat{K}_{ij}^{BY}$ given by
Bowen and York~\cite{Bowen80}. We evolve these
black-hole-binary data-sets using the {\sc
LazEv}~\cite{Zlochower:2005bj} implementation of the moving puncture
formalism~\cite{Campanelli:2005dd,Baker:2005vv} with the conformal
factor $W=\sqrt{\chi}=\exp(-2\phi)$ suggested by~\cite{Marronetti:2007wz}
as a dynamical variable.
For the runs presented here
we use centered, eighth-order finite differencing in
space~\cite{Lousto:2007rj} and an RK4 time integrator (note that we do
not upwind the advection terms).
We use the Carpet~\cite{Schnetter-etal-03b} mesh refinement driver to
provide a `moving boxes' style mesh refinement. In this approach
refined grids of fixed size are arranged about the coordinate centers
of both holes. The Carpet code then moves these fine grids about the
computational domain by following the trajectories of the two black
holes.
We use {\sc AHFinderDirect}~\cite{Thornburg2003:AH-finding} to locate
apparent horizons. We measure the magnitude of the horizon spin using
the Isolated Horizon algorithm detailed in~\cite{Dreyer02a}. This
algorithm is based on finding an approximate rotational Killing vector
(i.e.\ an approximate rotational symmetry) $\varphi^a$ on the horizon. Given
this approximate Killing vector $\varphi^a$, the spin magnitude is
\begin{equation}
\label{isolatedspin} S_{[\varphi]} =
\frac{1}{8\pi}\int_{AH}(\varphi^aR^bK_{ab})d^2V,
\end{equation}
where $K_{ab}$ is the extrinsic curvature of the 3D-slice, $d^2V$ is
the natural volume element intrinsic to the horizon, and $R^a$ is the
outward pointing unit vector normal to the horizon on the 3D-slice.
We measure the direction of the spin by finding the coordinate line
joining the poles of this Killing vector field using the technique
introduced in~\cite{Campanelli:2006fy}. Our algorithm for finding the
poles of the Killing vector field has an accuracy of $\sim 2^\circ$
(see~\cite{Campanelli:2006fy} for details). Note that once we have the
horizon spin, we can calculate the horizon mass via the Christodoulou
formula
\begin{equation}
{m^H} = \sqrt{m_{\rm irr}^2 +
S^2/(4 m_{\rm irr}^2)},
\end{equation}
where $m_{\rm irr} = \sqrt{A/(16 \pi)}$ and $A$ is the surface area of
the horizon.
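As a numerical check of the Christodoulou formula, note that $S=0$ gives $m^H = m_{\rm irr}$, while an extremal spin $S=(m^H)^2$ requires $m_{\rm irr} = m^H/\sqrt{2}$ (a sketch, not production horizon-finder code):

```python
import math

def christodoulou_mass(area, spin_mag):
    """Horizon mass from horizon area A and spin magnitude S:
    m_irr = sqrt(A/(16 pi)),  m_H = sqrt(m_irr^2 + S^2/(4 m_irr^2))."""
    m_irr = math.sqrt(area / (16.0 * math.pi))
    return math.sqrt(m_irr ** 2 + spin_mag ** 2 / (4.0 * m_irr ** 2))

# S = 0: the horizon mass reduces to the irreducible mass.
print(christodoulou_mass(16.0 * math.pi, 0.0))  # -> 1.0

# Extremal case: A = 8 pi and S = 1 correspond to m_H = 1, alpha = 1.
print(christodoulou_mass(8.0 * math.pi, 1.0))  # extremal check: ~ 1.0
```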
We measure radiated energy, linear momentum, and angular momentum, in
terms of $\psi_4$, using the formulae provided in
Refs.~\cite{Campanelli99,Lousto:2007mh}. However, rather than using
the full $\psi_4$, we decompose it into $\ell$ and $m$ modes and solve
for the radiated linear momentum, dropping terms with $\ell \geq 5$.
The formulae in Refs.~\cite{Campanelli99,Lousto:2007mh} are valid at
$r=\infty$.
We obtain highly accurate values for these quantities by
solving for them on spheres of finite radius (typically $r/M=50, 60,
\cdots, 100$), fitting the results to a polynomial dependence in
$l=1/r$, and extrapolating to
$l=0$~\cite{Baker:2005vv,Campanelli:2006gf,Hannam:2007ik,Boyle:2007ft}. Each quantity $Q$ has the radial
dependence $Q=Q_0 + l Q_1 + {\cal O}(l^2)$, where $Q_0$ is the
asymptotic value (the ${\cal O}(l)$ error arises from the ${\cal
O}(l)$ error in $r\, \psi_4$). We perform both linear and quadratic
fits of $Q$ versus $l$, and take $Q_0$ from the quadratic fit as the
final value with the differences between the linear and extrapolated
$Q_0$ as a measure of the error in the extrapolations.
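The extrapolation procedure can be sketched as follows: fit the finite-radius measurements to polynomials of degree one and two in $l=1/r$, take the quadratic intercept as the asymptotic value, and use the difference between the linear and quadratic intercepts as the error estimate. The code below is a stdlib-only illustration (the "measured" values are synthetic, built with an exact quadratic falloff so the quadratic fit recovers the asymptotic value to rounding error):

```python
def polyfit_intercept(ls, qs, degree):
    """Least-squares fit of qs = sum_k c_k * l^k (k = 0..degree); returns
    c_0, the extrapolated l -> 0 (r -> infinity) value.  Normal equations
    solved by Gaussian elimination with partial pivoting."""
    n = degree + 1
    M = [[sum(l ** (i + j) for l in ls) for j in range(n)] for i in range(n)]
    b = [sum(q * l ** i for l, q in zip(ls, qs)) for i in range(n)]
    for col in range(n):                      # forward elimination
        piv = max(range(col, n), key=lambda row: abs(M[row][col]))
        M[col], M[piv] = M[piv], M[col]
        b[col], b[piv] = b[piv], b[col]
        for row in range(col + 1, n):
            f = M[row][col] / M[col][col]
            M[row] = [x - f * y for x, y in zip(M[row], M[col])]
            b[row] -= f * b[col]
    c = [0.0] * n
    for row in range(n - 1, -1, -1):          # back substitution
        c[row] = (b[row] - sum(M[row][k] * c[k]
                               for k in range(row + 1, n))) / M[row][row]
    return c[0]

# Synthetic radiated quantity with the stated O(l) falloff plus curvature.
radii = [50.0, 60.0, 70.0, 80.0, 90.0, 100.0]
ls = [1.0 / r for r in radii]
qs = [0.0400 + 0.12 * l - 3.0 * l ** 2 for l in ls]    # "measured" values

q_lin = polyfit_intercept(ls, qs, 1)
q_quad = polyfit_intercept(ls, qs, 2)
print(q_quad, abs(q_quad - q_lin))  # extrapolated value and error estimate
```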
We obtain accurate, convergent waveforms and horizon parameters by
evolving this system in conjunction with a modified 1+log lapse and a
modified Gamma-driver shift
condition~\cite{Alcubierre02a,Campanelli:2005dd}, and an initial lapse
$\alpha(t=0) = 2/(1+\psi_{BL}^{4})$. The lapse and shift are evolved
with
\begin{subequations}
\label{eq:gauge}
\begin{eqnarray}
(\partial_t - \beta^i \partial_i) \alpha &=& - 2 \alpha K,\\
\partial_t \beta^a &=& (3/4) \tilde \Gamma^a - \eta(x^a,t) \beta^a,
\label{eq:Bdot}
\end{eqnarray}
\end{subequations}
where different functional dependences for $\eta(x^a,t)$ have been
proposed in
\cite{Alcubierre:2004bm, Zlochower:2005bj, Mueller:2009jx, Mueller:2010bu, Schnetter:2010cz,Alic:2010wu}.
For the low-spin simulations we used a constant $\eta=2$, while for
the $\alpha=0.92$ simulation we used
a modification of the form proposed
in~\cite{Lousto:2010qx},
\begin{equation}
\eta(x^a,t) = R_0 \frac{\sqrt{\tilde
\gamma^{ij}\partial_i W \partial_j W }}{ \left(1 - W^a\right)^b},
\end{equation}
where we chose $R_0=1.31$~\cite{Mueller:2009jx}.
In practice we used
$a=2$ and $b=2$, which reduces $\eta$ by a factor of $4$ at infinity
when compared to the gauge proposed
by~\cite{Mueller:2009jx}, improving its stability at larger radii. Other values
of $(a,b)$ lead to an increase of the numerical noise.
Note that this gauge was originally proposed and used for non-spinning, intermediate-mass-ratio binaries. Here we find that the
gauge is also well adapted to the highly-spinning, equal-mass case, where,
after the initial burst of radiation passes,
the measured spin never drops below $\alpha=0.905$.
Due to the differences in the spurious initial radiation content,
as well as spin-orbit effects on the
total mass, $\alpha$ near merger varied between $0.90$ and $0.93$
for the different A09Tyyy configurations
(see Tables~\ref{tab:ID} and~\ref{tab:rad} below).
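The quoted factor-of-4 reduction of $\eta$ at infinity can be verified with the crude far-field approximation $W \approx 1 - M/r$, so $|\partial W| \approx M/r^2$ (and assuming, on our part, that the reference gauge of~\cite{Mueller:2009jx} corresponds to $a=1$, $b=2$):

```python
def eta_gauge(r, a, b, R0=1.31, M=1.0):
    """eta = R0 * |dW| / (1 - W^a)^b with the crude far-field approximation
    W = 1 - M/r and |dW| = M/r^2 (conformally flat far zone)."""
    W = 1.0 - M / r
    dW = M / r ** 2
    return R0 * dW / (1.0 - W ** a) ** b

# Ratio of the (a=1, b=2) form to the (a=2, b=2) form used here reduces
# analytically to (1 + W)^2 = (2 - M/r)^2 -> 4 as r -> infinity.
r = 1.0e6
print(eta_gauge(r, a=1, b=2) / eta_gauge(r, a=2, b=2))  # -> ~4
```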
\subsection{Initial Data}
\label{sec:ID}
We used 3PN parameters for quasicircular orbits with BH spins
(equal in magnitude and opposite in direction) aligned
with the linear momentum of each BH (i.e. in-plane spins) to obtain the
momenta and spin parameters for the Bowen-York extrinsic curvature.
We then chose puncture mass parameters such that the total ADM mass was
1M. We then rotated the spins by $30^\circ$, $90^\circ$, $130^\circ$,
$210^\circ$ and $315^\circ$, to obtain a total of 6 configurations
for each value of the intrinsic spin $\alpha$. We label the configurations
AxxTyyy, where xx corresponds to the spin of each BH and yyy is the
initial rotation of the spin directions. We summarize the
initial data in Table~\ref{tab:ID}.
\begin{table}
\caption {Initial data parameters for the non-rotated configurations.
The initial puncture positions are $\pm (x,0,0)$, momenta are $\pm(0,p,0)$,
and spin $\pm(0,S,0)$. The remaining configurations are obtained by
rotating the spins, keeping all other parameters the same.}
\label{tab:ID}
\begin{ruledtabular}
\begin{tabular}{lcccc}
Config & $x$ & $p$ & $S$ & $m_p$\\
\hline
A02T000 & 3.878113 & 0.117404 & 0.051314 & 0.479782 \\
A04T000 & 3.879566 & 0.117405 & 0.102627 & 0.454076\\
A06T000 & 3.881979 & 0.117407 & 0.153936 & 0.403550\\
A08T000 & 3.885342 & 0.117409 & 0.205241 & 0.301026\\
A09T000 & 3.887375 & 0.117411 & 0.230891 & 0.172120
\end{tabular}
\end{ruledtabular}
\end{table}
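The rotated configurations can be generated mechanically: the spins of Table~\ref{tab:ID} lie along $\pm y$ in the orbital ($xy$) plane, and each AxxTyyy case rotates them by yyy degrees about the $z$ (orbital angular momentum) axis, keeping the two spins equal and opposite. A sketch (the rotation sense is our assumption; only relative angles matter for the fits below):

```python
import math

def rotated_spins(S, yyy_deg):
    """In-plane spins of an AxxTyyy configuration: the fiducial spins
    +/-(0, S, 0) rotated by yyy degrees about the z (orbital angular
    momentum) axis; the two spins stay equal and opposite."""
    t = math.radians(yyy_deg)
    s1 = (-S * math.sin(t), S * math.cos(t), 0.0)   # rotate (0, S, 0)
    s2 = tuple(-c for c in s1)
    return s1, s2

# A02T090: the +y spin rotates into the -x direction.
s1, s2 = rotated_spins(0.051314, 90.0)
print(s1, s2)
```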
This family of configurations has larger initial separations than the
configurations in our original studies in \cite{Campanelli:2007cga}.
In these configurations, the BHs orbit $\sim 3.5$ times prior to
merger, which allows for most of the eccentricity to be radiated away
before the plunge phase (where most of the recoil velocity is
generated). This provides for an accurate description of the plausible
astrophysical maximal recoil scenario.
\section{Results and Analysis}
\label{sec:results}
In order to analyze our results for different initial orientations of the
spin that span the $\Theta-$dependence, we use the techniques detailed
in~\cite{Lousto:2008dn}. For each $\alpha$ we fit the results of
the recoil as a function of angle to the form
$V_{\rm recoil} = V_1 \cos(\theta - \theta_1) +
V_3 \cos[3(\theta - \theta_3)],$ where $\theta$ is defined to be the
angle between the spin direction (of the first BH) near merger (at a
fiducial radial separation of $r=1.2$) and
the spin direction of the corresponding AxxT000 configuration
(we cannot simply use the initial spin-direction differences
because spin-orbit
effects for larger spins make this approximation inaccurate).
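The six measured orientations are fit here with a nonlinear least-squares routine. As an illustrative stand-in, if the kick were sampled at $N\ge 8$ equally spaced angles, the amplitudes and phases of the two harmonics would follow from simple discrete Fourier sums. The synthetic samples below reuse the $\alpha=0.6$ row of Table~\ref{tab:fits}; note that $\theta_3$ is only determined modulo $120^\circ$:

```python
import math

def fit_kick_harmonics(thetas_deg, vs):
    """Recover V1, theta1, V3, theta3 from kicks sampled at N >= 8 equally
    spaced angles: the model V1*cos(t - t1) + V3*cos(3*(t - t3)) is linear
    in {cos t, sin t, cos 3t, sin 3t}, whose coefficients are Fourier sums."""
    N = len(vs)
    ts = [math.radians(t) for t in thetas_deg]
    a = 2.0 / N * sum(v * math.cos(t) for t, v in zip(ts, vs))
    b = 2.0 / N * sum(v * math.sin(t) for t, v in zip(ts, vs))
    c = 2.0 / N * sum(v * math.cos(3.0 * t) for t, v in zip(ts, vs))
    d = 2.0 / N * sum(v * math.sin(3.0 * t) for t, v in zip(ts, vs))
    V1, t1 = math.hypot(a, b), math.degrees(math.atan2(b, a)) % 360.0
    V3, t3 = math.hypot(c, d), (math.degrees(math.atan2(d, c)) / 3.0) % 360.0
    return V1, t1, V3, t3

# Synthetic samples built from the alpha = 0.6 row of Table III.
V1_in, t1_in, V3_in, t3_in = 2204.98, 205.117, 31.63, 152.72
thetas = [45.0 * k for k in range(8)]
vs = [V1_in * math.cos(math.radians(t - t1_in))
      + V3_in * math.cos(3.0 * math.radians(t - t3_in)) for t in thetas]
print(fit_kick_harmonics(thetas, vs))
```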
The radiated energy and recoil from each simulation are given in
Table~\ref{tab:rad}.
\begin{widetext}
\begin{table}
\caption{The radiated energy, recoil velocity, and angle between
the spins for the AxxTyyy configuration at merger and the
corresponding AxxT000 configuration, $\Delta\Theta$. Note the substantial
rotations apparent in the A09Tyyy configurations due to
spin orbit interactions.}
\label{tab:rad}
\begin{ruledtabular}
\begin{tabular}{lcccc}
Config & $\delta E$ & $V_{\rm recoil}$ & $\Delta\Theta$ & $\delta J_z$ \\
\hline
A02T000 & $ 0.03583 \pm 0.00020 $ & $ 551.95 \pm 0.83 $ & 0 &
$0.2727 \pm 0.0032 $\\
A02T030 & $ 0.03575 \pm 0.00020 $ & $ 225.49 \pm 0.86 $ & 30.7 &
$0.2724\pm0.0031$ \\
A02T090 & $ 0.03562 \pm 0.00019 $ & $ -482.10 \pm 0.13 $ & 88.6 &
$0.2721\pm0.0031$ \\
A02T130 & $ 0.03574 \pm 0.00019 $ & $ -721.03 \pm 0.34 $ & 127.3 &
$0.2727\pm0.0031$ \\
A02T210 & $ 0.03573 \pm 0.00020 $ & $ -234.14 \pm 0.76 $ & 210.0 &
$0.2723\pm0.0032$ \\
A02T315 & $ 0.03577 \pm 0.00019 $ & $ 730.10 \pm 0.32 $ & 312.4 &
$0.2729\pm0.0030$ \\
\hline
A04T000 & $ 0.03665 \pm 0.00022 $ & $ 1200.79 \pm 2.21 $ & 0 &
$0.2768\pm0.0031$ \\
A04T030 & $ 0.03625 \pm 0.00022 $ & $ 529.09 \pm 2.12 $ & 33.7 &
$0.2747\pm0.0031$ \\
A04T090 & $ 0.03574 \pm 0.00020 $ & $ -764.08 \pm 0.39 $ & 85.0 &
$0.2740\pm0.0028$ \\
A04T130 & $ 0.03620 \pm 0.00021 $ & $ -1390.22 \pm 0.89 $ & 126.0 &
$0.2759\pm0.0029$ \\
A04T210 & $ 0.03633 \pm 0.00021 $ & $ -637.049 \pm 1.97 $ & 209.2 &
$0.2755\pm0.0030$ \\
A04T315 & $ 0.03628 \pm 0.00021 $ & $ 1424.17 \pm 1.02 $ & 311.2 &
$0.2762\pm0.0029$ \\
\hline
A06T000 & $ 0.03803 \pm 0.00027 $ & $ 2001.06 \pm 3.5 $ & 0 &
$0.2834\pm0.0032$ \\
A06T030 & $ 0.03740 \pm 0.00026 $ & $ 1087.24 \pm 4.8 $ & 36.2&
$0.2798\pm0.0032$ \\
A06T090 & $ 0.03595 \pm 0.00025 $ & $ -870.93 \pm 2.4 $ & 87.3&
$0.2758\pm0.0026$ \\
A06T130 & $ 0.03669 \pm 0.00026 $ & $ -1944.04 \pm 1.5 $ & 127.7&
$0.2791\pm0.0029$ \\
A06T210 & $ 0.03751 \pm 0.00026 $ & $ -1212.62 \pm 4.4 $ & 212.4&
$0.2807\pm0.0031$ \\
A06T315 & $ 0.03679 \pm 0.00026 $ & $ 1984.74 \pm 1.3 $ & 310.3&
$0.2797\pm0.0029$ \\
\hline
A08T000 & $ 0.03996 \pm 0.00039 $ & $ 2651.75 \pm 7.54 $ & 0 &
$0.2912\pm0.0027$ \\
A08T030 & $ 0.03941 \pm 0.00037 $ & $ 1917.03 \pm 8.29 $ & 24.3 &
$0.2888\pm0.0027$ \\
A08T090 & $ 0.03677 \pm 0.00034 $ & $ -445.83 \pm 1.10 $ & 70.9&
$0.2791\pm0.0028$ \\
A08T130 & $ 0.03733 \pm 0.00038 $ & $ -2412.2 \pm 4.03 $ & 119.5&
$0.2823\pm0.0027$ \\
A08T210 & $ 0.03941 \pm 0.00036 $ & $ -1919.86 \pm 8.26 $ & 204.3&
$0.2887\pm0.0027$ \\
A08T315 & $ 0.03771 \pm 0.00038 $ & $ 2568.87 \pm 4.49 $ & 306.3&
$0.2838\pm0.0027$ \\
\hline
A09T000 & $ 0.04026 \pm 0.00057 $ & $ 89.74 \pm 1.00 $ & 0 &
$0.3063\pm0.0042$ \\
A09T030 & $ 0.04143 \pm 0.00055 $ & $ 3240.96 \pm 17.34 $ & 101.5&
$0.3011\pm0.0051$ \\
A09T090 & $ 0.04062 \pm 0.00057 $ & $ 1859.42 \pm 15.47 $ & 147.4&
$0.2951\pm0.0070$ \\
A09T130 & $ 0.03784 \pm 0.00054 $ & $ -759.16 \pm 0.85 $ & 190.6&
$0.2846\pm0.0058$ \\
A09T210 & $ 0.04144 \pm 0.00055 $ & $ -3239.25 \pm 17.24 $ & 281.7&
$0.3012\pm0.0050$ \\
A09T315 & $ 0.03917 \pm 0.00056 $ & $ -205.59 \pm 2.92 $ & 355.3&
$0.2919\pm0.0067$ \\
\end{tabular}
\end{ruledtabular}
\end{table}
\end{widetext}
We then fit $V_1$ and $V_3$ to the functional forms $V_1 = V_{1, 1}
\alpha + V_{1, 3} \alpha^3$ and $V_3 = V_{3, 1} \alpha + V_{3, 3} \alpha^3$.
A summary of the fits is given in Table~\ref{tab:fits}. Note that
$V_{1,1}$ is related to the parameter $K$ in our empirical formula~(\ref{eq:emp}) by $K=16 V_{1,1}$. Here we find $K=58912\pm43$, where the error is
obtained from the fit and likely underestimates the true error in this
quantity. Previously we found $K=(6.0\pm0.1)\times10^4$, which agrees
reasonably well with the new value
\cite{Campanelli:2007cga, Lousto:2008dn}. We also include fits
where the linear term in $V_3$ and the cubic term in $V_1$ are
set to zero, as well as a fit of $V_3$ to $V_{3,3}\alpha^3 + V_{3,5}
\alpha^5$. We note that a cubic term in $V_1$ is expected
since $\cos^3 \theta = \frac{3}{4} \cos \theta + \frac{1}{4} \cos 3 \theta$, and
hence cubic corrections of the form $\alpha^3 \cos^3\theta$ will
contribute to the $\cos\theta$ dependence. On the other hand,
a linear dependence in $\cos 3\theta$ is not expected.
The form of the fitting above was first proposed in \cite{Boyle:2007ru} as a
generic expansion, where it was applied to data sets with constant
$\alpha$. Here we compare results from five different values of the
intrinsic
spin in the range $\alpha=0.2-0.92$, to obtain an accurate model of
the $\alpha$ dependence.
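Since the model is linear in the coefficients $V_{1,1}$ and $V_{1,3}$, the fit reduces to $2\times2$ normal equations. The sketch below reproduces the quoted values from the (unweighted) $V_1$ column of Table~\ref{tab:fits}, taking $\alpha=0.92$ for the A09Tyyy row:

```python
# V1 data from Table III (alpha, V1 in km/s), taking alpha = 0.92 for the
# A09Tyyy row, as discussed in the text.
alphas = [0.2, 0.4, 0.6, 0.8, 0.92]
V1s = [737.70, 1472.59, 2204.98, 2935.93, 3376.3]

# V1 = V11*a + V13*a^3 is linear in (V11, V13): solve the 2x2 normal
# equations of the (unweighted) least-squares problem by hand.
s22 = sum(a ** 2 for a in alphas)                    # sum a * a
s44 = sum(a ** 4 for a in alphas)                    # sum a * a^3
s66 = sum(a ** 6 for a in alphas)                    # sum a^3 * a^3
r1 = sum(a * v for a, v in zip(alphas, V1s))
r3 = sum(a ** 3 * v for a, v in zip(alphas, V1s))
det = s22 * s66 - s44 ** 2
V11 = (r1 * s66 - r3 * s44) / det
V13 = (s22 * r3 - s44 * r1) / det
print(V11, V13)  # compare with Table III: 3681.77 and -15.46
```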
In Figs.~\ref{fig:fit_a2_ang}-\ref{fig:fit_a9_ang}
we show the angular fits for each set of
configurations. Note that the spin-orbit coupling effects are
strongest for the A09Tyyy configurations, as is apparent from the
relative translation of two configurations towards the same final
angle.
\begin{table}
\caption {Fits of the recoil to the functional form
$v_{\rm kick} = V_1 \cos(\theta - \theta_1) +
V_3 \cos[3(\theta - \theta_3)].$ Note that angles are
measured in degrees. The reported errors come from the nonlinear
least-squares fit of the data and are underestimates of the
actual errors. Note the average value of $\alpha$ for the
A09Tyyy configuration near merger was $\alpha\sim0.92$, which
was the value used to obtain the fits to
$V_{i,j}$.}
\label{tab:fits}
\begin{ruledtabular}
\begin{tabular}{lcc}
$\alpha$ & $V_1 $ & $\theta_1$ \\
\hline
0 & 0 & **** \\
0.2 & $737.70 \pm 0.12$ & $221.8002\pm0.0010$ \\
0.4 & $1472.59\pm0.06$ & $215.6909\pm0.0011$ \\
0.6 & $2204.98\pm0.56$ & $205.117\pm0.015$ \\
0.8 & $2935.93\pm0.65$ & $206.658\pm0.013$ \\
0.9 & $3376.3\pm7.5$ & $91.02\pm0.11$ \\
\hline
\hline
$\alpha$ & $V_3$ & $\theta_3$ \\
\hline
0 & 0 & **** \\
0.2 & $4.23\pm 0.12$ & $279.62\pm 0.65$\\
0.4 & $12.0838\pm0.024$ & $37.790\pm0.049$\\
0.6 & $31.63\pm0.55$ & $152.72\pm0.38$\\
0.8 & $69.21\pm0.74$ & $38.01\pm0.22$ \\
0.9 & $95.5\pm2.4$ & $36.7\pm1.5$\\
\hline
$V_{1,1}$ & $3681.77\pm2.66$ \\
$V_{1,3}$ & $-15.46\pm3.97$ \\
$V_{3,1}$ & $15.65\pm3.01$ \\
$V_{3,3}$ & $105.90\pm4.50$
\end{tabular}
\end{ruledtabular}
\end{table}
In Table~\ref{tab:fits} we provide the fitting constants $V_{i,j}$
assuming the spin of the A09Tyyy configurations was $0.92$. In actuality, the spin
varied between configurations. In Table~\ref{tab:varya} we provide
fitting parameters for $V_{i,j}$ if we take the value of $\alpha$
for these configurations to be $\alpha=0.9$ (the expected value when
neglecting effects due to the initial radiation content),
$\alpha=0.91$, and $\alpha=0.92$ (which approximates the average value
of $\alpha$ over all configurations). We find that setting
$\alpha=0.92$ gives the best fit for the dominant $V_{1,1}$ term.
However, we note that these fits do indicate that the nonleading
$V_{1,3}$ term and $V_{3,1}$ term may be zero. We therefore also
provide fits assuming these two terms vanish. Fits to $V_1$ strongly
prefer $\alpha=0.92$ over the smaller values. We note that
the sign of $V_{1,3}$ changes if we assume smaller values of $\alpha$
for the A09Tyyy configurations.
\begin{table}
\caption {Fits $V_1$ and $V_3$ to the form
$V_1=V_{1,1} \alpha + V_{1,3} \alpha^3$ and
$V_3 = V_{3,1} \alpha + V_{3,3} \alpha^3$,
as well as $V_3 = V_{3,3} \alpha^3 + V_{3,5} \alpha^5$. For the
A09Tyyy configurations we take $\alpha=0.9$, $0.91$, and $0.92$, which
accounts for the expected value of $\alpha$ for these configuration,
the actual average value observed, and a spin between these two
values, as
explained in the text. $\delta^2$ is the average of the square of the
error in the fit. Note that fits to the dominant $V_1$ term strongly
prefer $\alpha=0.92$ over smaller values, while fits to the subleading
$V_3$ prefer $\alpha=0.9$.}
\label{tab:varya}
\begin{ruledtabular}
\begin{tabular}{lccc}
$\alpha$ (A09Tyyy) & $V_{1,1}$ & $V_{1,3}$ & $\delta^2$ \\
\hline
0.92 & $3681.77\pm2.66$ & $-15.46\pm3.966$ & 1.21 \\
0.91 & $3658.21\pm 20.74$ & $49.16\pm 31.47$ & 71.13 \\
0.90 & $3634.85\pm41.09$ & $115.31\pm 63.40$ & 270.25 \\
\hline
\hline
$\alpha$ (A09Tyyy) & $V_{3,1}$ & $V_{3,3}$ & $\delta^2$ \\
\hline
0.92 & $15.65\pm 3.01$ & $105.90\pm 4.50$ & 1.55 \\
0.91 & $13.68\pm 1.82$ & $111.15\pm 2.77$ & 0.55 \\
0.90 & $11.75\pm 1.14$ & $116.45\pm 1.77$ & 0.21 \\
\hline\hline
$\alpha$ (A09Tyyy) & $V_{1,1}$ & & $\delta^2$\\
\hline
0.92 & $3672.08\pm 1.84$ & 0& 5.80 \\
0.91 & $3688.56\pm 8.23$ & 0& 114.53 \\
0.90 & $3704.98\pm 17.17$ &0 & 493.78 \\
\hline\hline
$\alpha$ (A09Tyyy) & & $V_{3,3}$ & $\delta^2$\\
\hline
0.92 & 0& $127.74\pm 3.96$ & 12.02\\
0.91 & 0& $130.60\pm 3.36$ & 8.29\\
0.90 & 0& $133.45\pm 2.90$ & 5.73 \\
\hline
$\alpha$ (A09Tyyy) & $V_{3,3}$ & $V_{3,5} $& $\delta^2$\\
\hline
0.92 & $172.55\pm10.20$ & $-58.98\pm13.21$ & $2.01$ \\
0.91 & $167.45\pm10.87$ & $-49.54\pm14.38$ & $2.09$ \\
0.90 & $161.80\pm12.15$ & $-38.88\pm16.43$ & $2.389 $
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{figure}
\caption{Fit of the recoil versus angle for the $\alpha=0.2$
configurations.}
\label{fig:fit_a2_ang}
\includegraphics[width=3in]{a2_v_angle_fit3.pdf}
\end{figure}
\begin{figure}
\caption{Fit of the recoil versus angle for the $\alpha=0.4$
configurations.}
\label{fig:fit_a4_ang}
\includegraphics[width=3in]{a4_v_angle_fit3.pdf}
\end{figure}
\begin{figure}
\caption{Fit of the recoil versus angle for the $\alpha=0.6$
configurations.}
\label{fig:fit_a6_ang}
\includegraphics[width=3in]{a6_v_angle_fit3.pdf}
\end{figure}
\begin{figure}
\caption{Fit of the recoil versus angle for the $\alpha=0.8$
configurations.}
\label{fig:fit_a8_ang}
\includegraphics[width=3in]{a8_v_angle_fit3.pdf}
\end{figure}
\begin{figure}
\caption{Fit of the recoil versus angle for the $\alpha=0.92$
configurations.}
\label{fig:fit_a9_ang}
\includegraphics[width=3in]{a9_v_angle_fit3.pdf}
\end{figure}
While arguments based on post-Newtonian scaling do not seem
to indicate the presence of an $\alpha \cos 3\Theta$ term, our results
indicate that this term is present. This may indicate an
error in $V_3$ for $\alpha=0.2$. If we exclude this data point,
then we can fit reasonably well to either $V_{3,1} \alpha + V_{3,3}
\alpha^3$ or $V_{3,3} \alpha^3 + V_{3,5} \alpha^5$. Further
exploration in the small $\alpha$ regime is required.
In Fig.~\ref{fig:v3_fit_comp} we
compare fits of $V_3$ and find that the best fit is
to $V_3 = \alpha V_{3,1} + \alpha^3 V_{3,3}$. On the other hand,
as seen in Fig.~\ref{fig:v1_fit_comp}, there is no
significant difference apparent in the fits of
$V_1$ to $V_{1,1} \alpha + V_{1,3} \alpha^3$ and
$V_1 = V_{1,1} \alpha$.
\begin{figure}
\caption{A comparison of fits of $V_3$ to
$V_3 = \alpha V_{3,1} + \alpha^3 V_{3,3}$ (solid),
$V_3 = \alpha^3 V_{3,3} + \alpha^5 V_{3,5}$ (dotted),
$V_3 = \alpha^3 V_{3,3}$ (dot-dashed). The first fit
is the best.
In all cases the spins for the A09Tyyy configurations
were assumed to be $\alpha=0.92$.}
\label{fig:v3_fit_comp}
\includegraphics[width=3in]{V3_fit_comp.pdf}
\end{figure}
\begin{figure}
\caption{A comparison of fits of $V_1$ to
$V_1 = \alpha V_{1,1} + \alpha^3 V_{1,3}$ (solid) and
$V_1 = \alpha V_{1,1}$ (dotted).
There are no significant differences between the fits.
In all cases the spins for the A09Tyyy configurations
were assumed to be $\alpha=0.92$.}
\label{fig:v1_fit_comp}
\includegraphics[width=3in]{V1_fit_comp.pdf}
\end{figure}
\section{Conclusion}
\label{sec:discussion}
Using the enhanced recoil formula for the ``maximum kick'' configurations,
we predict that the maximum recoil will be $3680\pm130\ \rm km\,s^{-1}$, where
the error in the prediction is due to the possibility of the
higher-order effects producing recoils in the same direction as,
or the opposite direction to, the dominant linear contribution.
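One way to see where these numbers come from (our reconstruction from the fitted coefficients of Table~\ref{tab:fits}, not a quote from the fits themselves): extrapolating the two harmonics to $\alpha=1$, the maximum over orientations lies between $V_1-V_3$ and $V_1+V_3$, depending on whether the harmonics oppose or reinforce each other:

```python
# Fitted harmonic coefficients (km/s) from Table III of this paper.
V11, V13 = 3681.77, -15.46
V31, V33 = 15.65, 105.90

V1 = V11 + V13             # V1 extrapolated to alpha = 1
V3 = V31 + V33             # V3 extrapolated to alpha = 1
lo, hi = V1 - V3, V1 + V3  # harmonics opposing vs. reinforcing
print(V1, "+/-", V3, "->", lo, "to", hi)  # km/s
```

This gives roughly $3666 \pm 122$ km s$^{-1}$, consistent with the quoted $3680\pm130$ km s$^{-1}$.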
We also established a model for higher-order dependences on the
spin in the recoil formula. These results are particularly relevant
for the interpretation of observations of emission lines in AGNs
displaying
displacements between narrow and broad emission lines of the order of
thousands of kilometers per second. In particular, in Ref.~\cite{Civano:2010es}
a 1200 km/s offset velocity was measured (CXOCJ100043.1+020637).
A 2650 km/s recoiling supermassive black hole could explain the
observations (SDSS J092712+294344)
of Ref.~\cite{Komossa:2008qd}.
While in Ref.~\cite{Shields:2009jf} (SDSS J105041+345631)
and in Ref.~\cite{Boroson:2009va} (SDSS J153636+044127)
there is speculation that 3500 km/s recoiling black holes
are responsible for these features in the spectra.
While none of those cases effectively surpasses the maximum recoil
velocity determined here, they come close enough that the probability
of actually observing such an event is very low \cite{Lousto:2009ka},
thus raising
the question of which astrophysical mechanisms are responsible for
generating such large differential velocities
\cite{Vivek:2009mm,Lauer:2009us}.
\acknowledgments
We gratefully acknowledge the NSF for financial support from Grants
No. PHY-0722315, No. PHY-0653303, No. PHY-0714388, No. PHY-0722703,
No. DMS-0820923, No. PHY-0929114, No. PHY-0969855, No. PHY-0903782,
No. CDI-1028087; and NASA for financial support from NASA Grants
No. 07-ATFP07-0158 and No. HST-AR-11763. Computational resources were
provided by the Ranger cluster at TACC (Teragrid allocation TG-PHY060027N)
and by NewHorizons at RIT.
\bibliographystyle{apsrev}
\section{Introduction}
Over the last decade methods from statistical physics have contributed greatly to the theory of complex networks~\cite{ba:rev,mejn:rev,calda:book}. One of the major contributions is the development of methods to characterize and categorize the vertices (nodes) of real-world networks. Numerous networked systems are heterogeneous in the sense that a majority of vertices have a degree lower than the average, whereas a small number of vertices have a much higher degree than the average. For many such systems, one can relate the degree of a vertex to its function. In the network of air flights~\cite{gui:air}, for example, the central vertices are the largest airports. These are the hubs that international travellers can hardly avoid and arguably the most important facilities for the function of global air transportation. Degree, and other centrality measures~\cite{harary,wf}, are therefore static measures of the importance of airports to the dynamic function of the system. However, there are other networked systems with broad degree distributions where this description is incomplete. Metabolism is the set of chemical reactions occurring in a normally functioning organism. From such a reaction system, one can construct networks of chemical substances~\cite{our:curr}. Such networks have heterogeneous degree distributions. The hubs of metabolic networks are the most abundant molecules, such as CO${}_2$ and H${}_2$O. These metabolites have very different functions compared to the low-degree vertices --- they are present throughout the cell and participate in reactions of all kinds of complexity. By analogy to money, frequently changing hands, the hubs of metabolic networks are called \textit{currency metabolites}. For the overall function of the system --- to develop and maintain high-level biological functionality, and ultimately life --- low-degree vertices are also essential. 
Although the hubs may affect the organism's health, on average, more than the peripheral vertices, most authors agree that using degree as a proxy of functional importance is misleading~\cite{our:curr,our:bio,zhao:meta,jing:baotai,ma:meta,wagner:sw,arita:not,gui:meta}. Instead, the picture often painted is that the higher functionality, and thus the most interesting information for questions of current scientific interest (related to evolution and metabolic diseases), is contained in the organization of the non-currency metabolites. For this reason, to achieve a network that is more informative, currency metabolites are often deleted~\cite{our:curr,our:bio,zhao:meta,jing:baotai,ma:meta,wagner:sw,arita:not,gui:meta}. Another characteristic property of metabolic networks is that the non-currency metabolites form network clusters that are more connected within than between each other. This modular structure is believed to be related to the function of the network --- a network cluster (network module) is responsible for one relatively well-defined task in the metabolic system. The currency metabolites, on the other hand, are involved in the production of a wide variety of molecules, from many different modules. Thus the currency metabolites hide the modular network structure, something that can be used for a graph-based definition of currency metabolites~\cite{our:bio}. \emph{If vertices are deleted from the network in order of decreasing degree, then the set of currency metabolites is the set of vertices that, if deleted, gives the highest relative modularity.} (Where ``relative modularity'' is a measure quantifying the tendency of the network to be organized in network modules, and is defined mathematically below.)
In this paper, we pursue the idea that the description of metabolic networks above --- that the bulk of the dynamics is performed by currency metabolites, and the higher-order function is produced in the network modules by the low-degree vertices --- is also relevant for some other networked systems. Consider the network of people present at the venue of a large scientific meeting, where two persons are linked with each other if they have engaged in a conversation. Probably most scientists have links to the people at the reception desk, and links to their collaborators and other scientists working on similar problems. The functional output of the conference --- the advancement of science --- would then be performed in the network clusters of people with similar interests. The receptionists, the currency vertices, are nevertheless important for the meeting to be successful, but in a different way than the other vertices. The modular structure of the scientists would be more visible if the receptionists were not included in the network. (Similar descriptions of social networks can be found in Refs.~\cite{ada:ifnw,gui:chart}.)
Whether or not a network is well described by a dichotomy of the vertices into currency and non-currency vertices is ultimately a question about the whole system, including dynamic processes on the network. Nevertheless, as mentioned above, one can define currency vertices for any network. Since there is no general, functional definition of currency vertices, one cannot evaluate the definition directly. We will perform an indirect validation by creating a model producing networks where the network characteristics of currency metabolites can be tuned continuously. Using this model, we investigate the parameter values where the designated currency vertices of the model match the identified currency vertices. By mapping out the network structure of the region in parameter space where the matching is good, one can get an indication of whether a network fits the currency-vertex picture. We will also use a more direct validation for nine different types of empirical network --- we derive model parameter values from the networks and calculate the matching scores as for the model networks; a high matching score will be interpreted as support for the currency-vertex picture.
The rest of the paper is organized as follows. First, we define network modularity and currency vertices mathematically. Then, we define the network model and, finally, evaluate the currency vertices of the model and empirical networks.
\begin{figure}
\includegraphics[width=0.8\linewidth]{63537Fig1.eps}
\caption{Example output of the network model. Model parameters are $g=4$, $n_g=10$, $n_c=4$, $p_g=0.4$, $p_o=0.04$, and $p_c=0.4$. In (a) the clear modular structure of the network without the model currency vertices (MCV) is shown. In (b), we also display the MCVs obscuring the modular structure.
}
\label{fig:ill}
\end{figure}
\section{Preliminaries}
In this paper, we consider networks modelled as graphs $G=(V,E)$ where $V$ is the set of $N$ vertices and $E$ is the set of $M$ edges (unordered pairs of vertices). We assume the graphs to be \emph{simple}, i.e.\ that they do not have multiple edges or self-edges. (Graphs that are not simple are called multigraphs.)
\subsection{Network modularity and currency vertices}
In this section, we will discuss how to calculate network modularity. For a more detailed account, see Ref.~\cite{mejn:spectrum}. Consider a partition of the vertex set into groups, and let $e_{ij}$ denote the fraction of edges between groups $i$ and $j$. The network modularity of this partition is defined as~\cite{mejn:commu}
\begin{equation}\label{eq:q}
Q=\sum_i\left[e_{ii}-\left(\sum_je_{ij}\right)^2\right],
\end{equation}
where the sum is over all groups of vertices. The term $\left(\sum_je_{ij}\right)^2$ is the expectation value of $e_{ii}$ in a random multigraph. A prototype measure for the modularity of a graph is $Q$ maximized over all partitions, $\hat{Q}$. For many networks with broad degree distributions, it is common to measure network structure relative to a null model: the ensemble $\mathcal{G}(G)$ of random graphs constrained to have the same degree sequence as $G$. In principle, this means that one separates degree from other network structures, which is appropriate in our case --- in fact, this idea is implicit in the definition of currency vertices. With this null model, we subtract the average $\hat{Q}$-value for graphs in $\mathcal{G}(G)$ from $\hat{Q}(G)$:
\begin{equation}\label{eq:delta_q}
\Delta(G) = \hat{Q}(G) - \langle \hat{Q}(G') \rangle_{G'\in
\mathcal{G}(G)} ,
\end{equation}
where angular brackets denote the average over $\mathcal{G}(G)$~\cite{our:bio}. We use random rewiring of the original graph to sample $\mathcal{G}(G)$~\cite{maslov:pro}, and the heuristics proposed in Ref.~\cite{mejn:spectrum} to maximize $Q$.
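To make Eqs.~(\ref{eq:q}) and (\ref{eq:delta_q}) concrete, the following sketch (ours, not part of the original work) computes $Q$ for a given partition and performs degree-preserving double-edge swaps to sample $\mathcal{G}(G)$; the toy graph, function names, and swap count are illustrative choices only.

```python
import random

def modularity(edges, partition):
    """Q of Eq. (1): e_ii minus the squared row sums of the e-matrix."""
    m = len(edges)
    group = {v: k for k, grp in enumerate(partition) for v in grp}
    n = len(partition)
    e = [[0.0] * n for _ in range(n)]
    for u, v in edges:
        i, j = group[u], group[v]
        if i == j:
            e[i][i] += 1.0 / m
        else:                      # split a between-group edge over both entries
            e[i][j] += 0.5 / m
            e[j][i] += 0.5 / m
    return sum(e[i][i] - sum(e[i]) ** 2 for i in range(n))

def rewire(edges, swaps, rng):
    """Degree-preserving double-edge swaps, sampling G(G) as in Maslov-Sneppen."""
    edges = [tuple(e) for e in edges]
    present = {frozenset(e) for e in edges}
    for _ in range(swaps):
        (a, b), (c, d) = rng.sample(edges, 2)
        if len({a, b, c, d}) < 4:
            continue               # would create a self-edge
        if frozenset((a, d)) in present or frozenset((c, b)) in present:
            continue               # would create a multi-edge
        i, j = edges.index((a, b)), edges.index((c, d))
        present -= {frozenset((a, b)), frozenset((c, d))}
        present |= {frozenset((a, d)), frozenset((c, b))}
        edges[i], edges[j] = (a, d), (c, b)
    return edges
```

For two triangles joined by a bridge, the two-triangle partition gives $Q = 5/14$, and rewiring leaves the degree sequence unchanged by construction.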
To extract the currency vertices we start with the original graph $G_0$ and perform the following scheme
\begin{enumerate}
\item \label{step:q} Measure $\hat{Q}(G_i)$, where $i$ is the number of times this step has been executed previously.
\item \label{step:del} Delete the vertex with highest degree from $G_i$ and call this graph $G_{i+1}$.
\item \label{step:cpy} Make a copy, $G_{i+1}'$, of $G_{i+1}$.
\item \label{step:swap} Rewire the edges of $G_{i+1}'$ and measure $\hat{Q}(G_{i+1}')$. Repeat this $n_{\mathrm{iter}}$ times and calculate $\langle \hat{Q}(G') \rangle_{G'\in\mathcal{G}(G)}$.
\item \label{step:condition} If $\Delta(G_i)$ is lower than $\Delta(G_0)$, or if $i=N-1$, then stop the iteration.
\end{enumerate}
The vertices deleted at step~\ref{step:del} up to the iteration maximizing $\Delta(G_i)$ constitute the set of currency vertices. In this paper, we use $n_{\mathrm{iter}}=25$. A C implementation of this algorithm can be downloaded at \url{www.csc.kth.se/~pholme/curr/}.
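The scheme can be sketched as follows. Purely for illustration we substitute $\hat{Q}$ itself for $\Delta$ (omitting the rewiring average) and use a brute-force search over set partitions instead of the heuristic of Ref.~\cite{mejn:spectrum}; this is feasible only for toy graphs, and all names and the example network are ours.

```python
def set_partitions(elems):
    """All partitions of a list into non-empty groups (Bell-number many)."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for part in set_partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [part[i] | {first}] + part[i + 1:]
        yield part + [{first}]

def modularity(edges, part):
    """Q via within-group edge counts and group degree sums."""
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    q = 0.0
    for grp in part:
        l = sum(1 for u, v in edges if u in grp and v in grp)
        d = sum(deg[u] for u in grp)
        q += l / m - (d / (2 * m)) ** 2
    return q

def best_q(edges):
    verts = sorted({x for e in edges for x in e})
    return max(modularity(edges, p) for p in set_partitions(verts))

def currency_vertices(edges):
    """Steps 1-5, with hat-Q standing in for Delta (no rewiring average)."""
    q0 = best_q(edges)
    best, best_score = [], q0
    deleted, cur = [], list(edges)
    while cur:
        deg = {}
        for u, v in cur:
            deg[u] = deg.get(u, 0) + 1
            deg[v] = deg.get(v, 0) + 1
        hub = max(sorted(deg), key=deg.get)    # highest degree, ties by name
        cur = [e for e in cur if hub not in e]
        deleted.append(hub)
        if not cur:
            break
        q = best_q(cur)
        if q > best_score:
            best_score, best = q, list(deleted)
        if q < q0:
            break
    return best
```

On a toy graph of two triangles all attached to one hub, the procedure deletes the hub, which uncovers the two clusters, and returns it as the single currency vertex; if no deletion improves the score, the empty set is returned.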
\subsection{Artificial networks}
To investigate the definition of currency vertices, as sketched in the Introduction, we use model networks where one can tune the strength of modularity, number of currency vertices and average degrees.
Let there be $g$ groups (corresponding to network modules), $n_g$ vertices within each group, and $n_c$ model currency vertices (MCV). Then go through all pairs of distinct non-MCVs and connect these with probability $p_g$ if they belong to the same group, and $p_o$ otherwise. Finally, go through all pairs of vertices containing at least one currency vertex and connect the pair with probability $p_c$.
The expected number of vertices is
\begin{equation}\label{eq:n}
N = n_c+gn_g
\end{equation}
and the expected number of edges
\begin{equation}\label{eq:m}
M = \frac{p_ggn_g(n_g-1) + p_ogn_g^2(g-1) + p_cn_c(2N-n_c-1)}{2}.
\end{equation}
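The expected counts can be checked by direct enumeration of vertex pairs (a sketch of ours; note that the number of pairs containing at least one MCV is $n_c(2N-n_c-1)/2$, counting each MCV--MCV pair once):

```python
from itertools import combinations

def expected_counts(g, n_g, n_c, p_g, p_o, p_c):
    """Expected N and M of the model, by classifying every vertex pair."""
    verts = [('v', k, i) for k in range(g) for i in range(n_g)]
    verts += [('c', j, 0) for j in range(n_c)]
    N = len(verts)
    M = 0.0
    for u, v in combinations(verts, 2):
        if u[0] == 'c' or v[0] == 'c':
            M += p_c          # pair contains at least one MCV
        elif u[1] == v[1]:
            M += p_g          # same group, both non-MCV
        else:
            M += p_o          # different groups, both non-MCV
    return N, M
```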
The modularity $Q$ for the model with $n_c=0$ (or all MCVs removed), partitioned according to the groups, is
\begin{equation}\label{eq:q_mod}
Q = g\frac{p_gn_g(n_g-1)}{2M}-g\left(\frac{p_gn_g(n_g-1)}{2M}+(g-1)\frac{p_on_g^2}{2M}\right)^2.
\end{equation}
In the limit $g,n_g\gg 1$, Eq.~\ref{eq:q_mod} reduces to
\begin{equation}\label{eq:q_mod_app}
Q=\frac{1}{1+g/\gamma}-\frac{1}{g}\mbox{~~where~~} \gamma=\frac{p_g}{p_o}.
\end{equation}
Since our model produces simple graphs (and not multigraphs, as assumed by the theory behind the definition of $Q$), setting $p_g=p_o$ in Eq.~\ref{eq:q_mod} only approximately gives $Q=0$. The error in this approximation is $\mathrm{O}(1/n_g+1/g)$. The model can easily be modified to produce multigraphs (by simply dropping the requirement of no self-edges or multiple edges), in which case $p_g=p_o$ would indeed give zero modularity.
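A quick numerical check (ours) of the large-$g$, large-$n_g$ approximation, and of the near-vanishing of $Q$ at $p_g=p_o$:

```python
def q_model(g, n_g, p_g, p_o):
    """Eq. (4): modularity of the model with n_c = 0, partitioned by groups."""
    M = (p_g * g * n_g * (n_g - 1) + p_o * g * n_g**2 * (g - 1)) / 2
    a = p_g * n_g * (n_g - 1) / (2 * M)      # within-group term
    b = p_o * n_g**2 / (2 * M)               # per pair of distinct groups
    return g * a - g * (a + (g - 1) * b) ** 2

def q_limit(g, gamma):
    """Eq. (5): the g, n_g >> 1 approximation, gamma = p_g / p_o."""
    return 1 / (1 + g / gamma) - 1 / g
```

For $g=150$, $n_g=300$, $\gamma=20$ the two expressions differ by a few times $10^{-4}$, and at $p_g=p_o$ the exact $Q$ is of order $-1/(gn_g)$ rather than exactly zero, as stated above.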
\subsection{Matching score}
As mentioned in the Introduction, we will investigate how well the original structure of the network matches the output of the currency-vertex detection algorithm as a function of model parameter values. The quantity for measuring the overlap of model groups and identified network clusters is the fraction of overlapping group identities in the best matching between the two classifications. In other words, let $x_i$ be vertex $i$'s group in the original network ($x_i\in [1,\cdots,g]$, currency vertices are not counted as members of any group) and let $y_i$ be vertex $i$'s identity obtained from the currency-vertex detection ($y_i\in [1,\cdots,N_g]$, $N_g$ is the number of detected groups). Then find the labeling of the graph-clustering groups such that each group has a unique number in the interval $[1,\cdots,N_g]$, and that the number $n_{\mathrm{match}}$ of vertices $i$ with $x_i=y_i$ is maximized. Then we define the \textit{matching score} $\mu_g = n_{\mathrm{match}} / gn_g$. We calculate $n_{\mathrm{match}}$ by a simple heuristic:
\begin{enumerate}
\item \label{step:start} Start with a random labeling of the groups.
\item \label{step:chose} Select a pair of group labels.
\item \label{step:shift} If $n_{\mathrm{match}}$ does not decrease when these labels are swapped, then swap them.
\item \label{step:rep1} If an improvement has been made during the last $n_{\mathrm{rep}}$ steps, go to step~\ref{step:chose}.
\item \label{step:rep2} Start over from step~\ref{step:start} with a new random seed; stop when no new highest $n_{\mathrm{match}}$ has been found in step~\ref{step:rep1} during the last $N_{\mathrm{rep}}$ restarts.
\end{enumerate}
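For a small number of groups, $n_{\mathrm{match}}$ can also be computed exactly by brute force over relabelings, which is useful for checking the heuristic; this sketch (ours) assumes no more detected groups than original groups:

```python
from itertools import permutations

def n_match(x, y):
    """Exact maximum number of agreements x_i == sigma(y_i) over injective
    relabelings sigma of the y-groups. Brute force: feasible only for a
    small number of groups, and assumes len(set(y)) <= len(set(x))."""
    x_labels = sorted(set(x))
    y_labels = sorted(set(y))
    best = 0
    for perm in permutations(x_labels, len(y_labels)):
        relabel = dict(zip(y_labels, perm))
        best = max(best, sum(a == relabel[b] for a, b in zip(x, y)))
    return best
```

The matching score is then $\mu_g = $ \texttt{n\_match(x, y)} $/\,gn_g$.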
In addition to measuring the matching of model groups and network clusters, we look at the matching between actual currency vertices (identified by the algorithm), and the MCVs assigned in the model during the generation of the graph. In this case, we use the Jaccard index of the two sets of vertices:
\begin{equation}\label{eq:jaccard}
\mu_c = \frac{|V_c\cap V_C|}{|V_c\cup V_C|},
\end{equation}
where $V_c$ is the set of detected currency vertices, $V_C$ is the set of MCVs, and $|\;\cdot\;|$ denotes the number of elements of a set.
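Eq.~(\ref{eq:jaccard}) in code (ours; the convention for two empty sets is our assumption, not from the text):

```python
def mu_c(detected, model_mcvs):
    """Jaccard index of the detected currency vertices and the MCVs."""
    detected, model_mcvs = set(detected), set(model_mcvs)
    union = detected | model_mcvs
    if not union:
        return 1.0   # two empty sets: our convention
    return len(detected & model_mcvs) / len(union)
```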
\section{Numerical results}
\begin{figure}
\includegraphics[width=0.8\linewidth]{63537Fig2.eps}
\caption{ The maximal relative modularity as a function of the ratio $\gamma=p_g/p_o$ of the attachment probabilities within and between groups. We chose $n_c=10$, $g=n_g$ and other parameter values such that the average degree is $181.1$ for model currency vertices, and $14.5$ for the others. The points are averages of $10$ to $20$ network realizations.
}
\label{fig:mod1}
\end{figure}
\begin{figure}
\includegraphics[width=0.8\linewidth]{63537Fig3.eps}
\caption{Matching scores for networks of different sizes. (a) shows the group matching scores $\mu_g$. (b) displays the currency-vertex matching scores. The symbols and parameter values are the same as in Fig.~\protect\ref{fig:mod1}.
}
\label{fig:mod2}
\end{figure}
\begin{figure}
\includegraphics[width=0.8\linewidth]{63537Fig4.eps}
\caption{The number of network clusters $N_g$ as a function of the number of groups $g$ in the model (a), and the group matching score $\mu_g$ as a function of $g$ (b). The other parameter values are $p_g=0.2$, $p_o=0.01$, $p_c=0.25$ and $n_c=10$. Averages are over $20$ network realizations.
}
\label{fig:ng}
\end{figure}
\begin{figure}
\includegraphics[width=0.8\linewidth]{63537Fig5.eps}
\caption{ The number of currency vertices $N_c$ as a function of the number of model currency vertices $n_c$ (a), and the currency-vertex matching score $\mu_c$ as a function of $n_c$ (b). The other parameter values are $p_g=0.2$, $p_o=0.01$, $p_c=0.25$ and $g=6$. Averages are over $20$ network realizations.
}
\label{fig:nc}
\end{figure}
\subsection{Artificial networks}
We start our numerical investigation by measuring the matching scores for networks of different modularity. As hinted by Eq.~\ref{eq:q_mod}, the modularity can be controlled by the ratio $\gamma$ of the attachment probabilities within and between groups. The measurable modularity (i.e.\ the one that does not need the partition information from the network construction) is (for fixed network sizes) monotonically increasing with $\gamma$, see Fig.~\ref{fig:mod1}. This confirms the indication from Eq.~\ref{eq:q_mod_app} that $\gamma$ works as a control parameter for the relative modularity. We also see that the maximal value of the relative modularity $\hat\Delta$ depends on both the network size and $\gamma$. This effect is smaller if one lets the degree increase with the number of vertices (which has been observed in some classes of networks~\cite{doro:acc,pok}), instead of keeping the degree fixed as in Fig.~\ref{fig:mod1}. This also suggests that comparing the $\Delta$-values of different networks should be done carefully. The comparison built into the currency-vertex definition algorithm concerns a sequence of monotonically shrinking networks from the same original. Since the size of the network does not change much during an iteration, and due to the smooth monotonic increase in Fig.~\ref{fig:mod1}, the shrinking size during the currency-vertex definition scheme is not a technical problem.
For real-world networks, $\hat\Delta$ (and not $\gamma$) is a measurable quantity. In Fig.~\ref{fig:mod2}, we show the matching scores as a function of the maximal relative modularity $\hat\Delta$. The matching scores (both $\mu_g$ and $\mu_c$) increase monotonically with $\hat\Delta$, meaning that the picture of regular vertices grouped into clusters (instantiated by the model) holds better the larger the relative modularity is. For the parameter values in question, $\hat\Delta$-values of $\sim 0.2$ are needed for matching-score values over $0.5$. For example, if one deems values of $\mu_g$ and $\mu_c$ less than $0.5$ too small, then one can conclude that networks with $\hat\Delta<0.2$ probably do not fit the currency-vertex description. We note that in Fig.~\ref{fig:mod2}, the matching scores for a given $\hat\Delta$-value seem to converge from above. If the $p$-parameters ($p_g$, $p_o$ and $p_c$) are fixed as $N$ is changed, then this convergence goes in the opposite direction (the $\mu$-values grow with the system size).
In Fig.~\ref{fig:ng}, we investigate how the number of network clusters $N_g$ depends on the number of groups $g$ in the model. For a small number of groups, $g\approx N_g$ (as seen in Fig.~\ref{fig:ng}(a)). Indeed, the identified clusters are almost the same as the original groups ($\mu_g\approx 1$ in Fig.~\ref{fig:ng}(b)). For larger $g$, $N_g$ starts to deviate from $g$. This deviation appears later for larger network sizes, indicating that this is a finite-size effect. The number of vertices sets a (trivial) upper bound on this matching. Fig.~\ref{fig:ng}(a) shows that the bound increases more slowly than linearly (possibly logarithmically). In light of this observation, if $N_g$ is too large (considering the network sizes), then the currency-vertex picture seems less appropriate.
Fig.~\ref{fig:nc} illustrates the model's dependence on the number of MCVs, $n_c$. Just like for the number of network clusters, the matching with the corresponding model parameters is largest for small values. For larger values of $n_c$, the number of currency vertices starts to deviate (becoming lower than $n_c$). From both the $N_c$- and $\mu_c$-curves, we note that the matching score is larger for larger networks. The mismatch between the currency vertices of the model network construction and the currency vertices identified by the definition is thus a finite-size effect.
\begin{table*}\label{tab:empir}
\begin{ruledtabular}
\begin{tabular}{r|c|rrrrrrrrr}
network & Ref. & $N$ & $M$ & $N_c$ & $N_g$ & $\hat{n}_g/N$ & $\Delta_0$ & $\hat\Delta$ & $\mu_g$ & $\mu_c$ \\\hline
music collaborations & \cite{jazz} & 198 & 2256 & 16 & 5 & 0.273 & 0.261 & 0.318 & 0.98(1) & 0.81(7) \\
metabolic & \cite{our:curr} & 473 & 1694 & 4 & 13 & 0.214 & 0.303 & 0.349 & 0.80(2) & 0.68(9) \\
e-mail & \cite{bornholdt:email} & 1133 & 5161 & 10 & 15 & 0.286 & 0.189 & 0.247 & 0.44(6) & 0.45(9) \\
protein interaction & \cite{hh:pfp} & 4168 & 7434 & 13 & 41 & 0.108 & 0.080 & 0.099 & 0.05(5) & 0.07(4) \\
airport network & \cite{my:cps} & 456 & 2799 & 29 & 24 & 0.283 & 0.128 & 0.184 & 0.05(3) & 0.065(2) \\
neural network & \cite{cenn:brenner} & 280 & 1973 & 32 & 6 & 0.257 & 0.186 & 0.232 & 0.29(4) & 0.02(1) \\
dolphin social network & \cite{dolph} & 62 & 159 & 0 & 4 & 0.339 & 0.166 & 0.166 & 0.85(2) & -- \\
atmospheric & \cite{sole:astro} & 249 & 1197 & 0 & 4 & 0.518 & 0.122 & 0.122 & 0.33(1) & -- \\
software dependence & \cite{mejn:mix} & 1033 & 1718 & 0 & 29 & 0.181 & 0.148 & 0.148 & 0.19(1) & -- \\
\end{tabular}
\end{ruledtabular}
\caption{Values (network sizes, number of currency vertices $N_c$, number of network clusters $N_g$, relative size of the largest cluster $\hat{n}_g/N$, relative modularity $\Delta_0$ of the original network, maximal relative modularity $\hat\Delta$, group matching score $\mu_g$, currency-vertex matching score $\mu_c$) for empirical networks. In the music collaboration network, vertices are jazz musicians, connected if they have appeared on the same recording. In the metabolic and atmospheric networks the vertices are chemical substances and edges represent pairs of substances participating in the same reaction. The metabolic data comes from reactions in the bacterium \textit{Mycoplasma genitalium} and the atmospheric data regards Earth. In the e-mail network, vertices are e-mail addresses and edges mean that at least one e-mail within the three-month sampling period has been sent from one address to the other. The protein interaction network consists of proteins connected if they can bind physically to one another. Vertices in the neural network are neuronal cells of the nematode \textit{Caenorhabditis elegans}, and edges indicate how these are connected. In the airport network, vertices are North American airports and edges are pairs of airports with a regular nonstop flight. The dolphin social network is based on observed interactions between bottlenose dolphins in Doubtful Sound, New Zealand. In the software dependence data, a vertex is a software package and a link indicates that one package requires another package to be installed to function. Some of these network datasets are originally directed (the neuronal and e-mail networks are also weighted). These are transformed into simple graphs by treating directed edges as undirected and any non-zero weighted edge as an unweighted edge. The table is ordered primarily according to the $\mu_c$-values, secondarily according to the $\mu_g$-values. The numbers in parentheses are standard errors in units of the last decimal.}
\end{table*}
\subsection{Empirical networks}
Now we turn to evaluating real-world networks. We perform the identification of currency vertices as outlined above, and obtain a decomposition into network clusters of the non-currency vertices. From this we obtain the values of $N_c$, $N_g$, $\Delta_0$ and $\hat\Delta$ displayed in Table~\ref{tab:empir}. Furthermore, we calculate $\mu_c$ and $\mu_g$ for our model with parameter values derived from the network --- we let $g$ be the measured $N_g$, set $n_c$ equal to $N_c$, $n_g=(N-N_c)/g$ (rounded down to the nearest integer) and, for $p_g$, $p_o$ and $p_c$, use the fraction of edges between the respective types of vertices in the empirical network. By this procedure, we obtain matching scores giving some indication of how appropriate the currency-vertex picture is. One difference between the model and the empirical network is that the clusters of the model have the same sizes, whereas the cluster sizes of the real-world network vary. This is a feature that could affect the results quantitatively, especially if there is a wide distribution of cluster sizes. This is (fortunately for the analysis method) not the case. Even if the degree distributions are broad, the cluster size distribution is rather narrow --- the $\hat{n}_g/N$ values of Table~\ref{tab:empir} are low, with the atmospheric network as an exception (the results for this network should thus be taken with a grain of salt).
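The edge-fraction part of this parameter-derivation recipe can be sketched as follows (names and the toy example are ours; the derivation of $g$ and $n_g$ from $N_g$ and $N_c$ is omitted):

```python
from itertools import combinations

def derive_parameters(edges, clusters, currency):
    """Estimate p_g, p_o, p_c from a network with detected clusters and
    currency vertices, as the fraction of realized pairs of each type."""
    group = {v: k for k, cl in enumerate(clusters) for v in cl}
    verts = sorted(set(group) | set(currency))
    eset = {frozenset(e) for e in edges}
    pairs = {'p_g': 0, 'p_o': 0, 'p_c': 0}
    links = {'p_g': 0, 'p_o': 0, 'p_c': 0}
    for u, v in combinations(verts, 2):
        if u in currency or v in currency:
            t = 'p_c'                      # pair involves a currency vertex
        elif group[u] == group[v]:
            t = 'p_g'                      # same detected cluster
        else:
            t = 'p_o'                      # different detected clusters
        pairs[t] += 1
        links[t] += frozenset((u, v)) in eset
    return {t: links[t] / pairs[t] if pairs[t] else 0.0 for t in pairs}
```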
Of the nine empirical networks, three do not have any currency vertices at all. These three clearly do not fit our currency-vertex picture. Of the six networks with $N_c>0$, three networks --- a social network of music collaborations, a metabolic network and a network of e-mails --- have larger $\mu_c$- and $\mu_g$-values than the other networks. These networks fulfil the structural prerequisites for a currency-vertex picture. In the music collaboration network, we can assume the currency vertices are studio musicians that are not strongly affiliated with one group, or orchestra, but participate on many artists' recordings. The e-mail network does not include spam mails~\cite{bornholdt:email}, so we assume the hubs are addresses that send, or receive, information of a more general nature (cf.\ the example of the social interactions at a scientific meeting in the Introduction). We also note that this classification seems independent of the network sizes --- of the three networks with large matching scores (and $n_c>0$), the collaboration network is comparatively small and dense, whereas the e-mail network is larger and sparser; also among the networks with low $\mu$-values, this observation holds (the protein interaction network is large and sparse, the neural network is denser and smaller). Furthermore, we note that the region of the network-structure space (for $\hat\Delta$, $N_g$ and $N_c$) giving large matching scores (as found in the previous section) is consistent with the observations in Table~\ref{tab:empir}. Examples of networks falling outside of these ranges are the airport network (with a too large $N_g$-value considering its size), and the neural network (having too many currency vertices for its size to have a good matching).
\section{Conclusions}
In this paper, we have extended an organizational principle, known from metabolic networks, to networks in general. In this picture, most vertices are of relatively low degree, grouped into relatively distinct network clusters. A small minority of the vertices, however, have much larger degree than the average, are linked to vertices of all clusters, and thereby obscure the modular organization of the low-degree vertices. We call these currency vertices. In a functional interpretation of this picture, the currency vertices perform the bulk of the dynamics, whereas the more specialized (and not necessarily less important) features of the system occur in the modules.
By just measuring the modular structure of a network, one cannot validate the currency-vertex definition. Instead of a direct validation, one can assume the network itself is an encoding of the functions of the vertices, and the currency-vertex definition is a decoding of this information~\cite{rosvall:maps,mejn:spectrum,mejn:mix}. Following this philosophy, we create a model with a tunable number of currency vertices, number of network clusters and strength of these features. The match between the encoded and decoded sets of currency vertices and network clusters is closest if the modularity is large, and the numbers of currency vertices and network clusters are low. Using this procedure, we also evaluate empirical networks. We conclude that three of the nine investigated networks fit rather well to the currency-vertex picture. The first of these networks is a network of collaborations between music artists, where we assume the currency vertices are studio musicians and the other vertices are group, or band, members (and the network clusters are the music groups). Our second example of a network with currency-vertex structure is a metabolic network --- appropriate, since this class of networks is the inspiration of the concept. The third network potentially fitting our picture is an e-mail network, where we interpret the currency vertices as senders, or receivers, of general-content e-mails (since the e-mails are sampled from a group of university e-mail accounts, such e-mails could be information to and from the university administration). The dialogues between colleagues and classmates presumably take place within the network clusters. These dialogues correspond to a different type of information process than the e-mails to the hubs, just as the function of currency metabolites is different from other substances in metabolic networks and the hubs of the music collaboration network have different roles than the majority of musicians.
Among the networks not fitting the picture of currency vertices are a social network of dolphins (with a clear modular structure, but no currency vertices), a network of airports and a network derived from chemical reactions in the Earth's atmosphere.
We have described the currency-vertex picture as a dichotomous property --- networks either fit it, or not. This is just a simplification, and one may argue that the hubs of e.g.\ the airport network (if we for a moment ignore that our airport network did not pass our tests) share some of the characteristics of currency vertices in other networks. At least, larger airports have a larger fraction of transfer passengers, and thus a somewhat different function in the entire dynamic system of air travel. This also illustrates that, to determine how well characterized a network is by a division of the vertices into currency vertices and others, one needs to (in addition to the analysis presented in this paper) consider the dynamics of the subject system.
\section*{Acknowledgment}
P.H. acknowledges economic support from the Swedish Foundation for Strategic Research and thanks Holger Ebel, Michael Gastner, Mikael Huss, Andreea Munteanu and Mark Newman for data.
\section{The Euler Characteristic}\label{sec:euler}
In this section we prove Cor. \ref{cor:euler}, which computes
the Euler characteristic of $I^\#(Y;\lambda)$
where $Y$ is any closed, oriented 3-manifold and $\lambda$ is any
unoriented closed 1-manifold in $Y$.
The claim is that $\chi(I^\#(Y;\lambda)) = |H_1(Y;\mathbb{Z})|$,
where the expression on the right side means the cardinality of $H_1(Y;\mathbb{Z})$ if it is finite,
and is zero otherwise.
\begin{proof}[Proof of Cor.~\ref{cor:euler}]
We make the abbreviations
\[
i(Y;\lambda)=\chi(I^\#(Y;\lambda)), \quad |Y|=|H_1(Y;\mathbb{Z})|.
\]
Note that \S \ref{sec:prod} implies the multiplicativity
\begin{equation}\label{eq:multip}
i(Y;\lambda) i(Y';\lambda')=i(Y\#Y';\lambda\cup\lambda').
\end{equation}
We also note that $i(Y)=1$ when $|Y|=1$ by
Theorem \ref{thm:integerhom}.
Next, we claim the result is true for rational homology
3-spheres $Y$ that are obtained by integral surgery on an algebraically split link.
That is, $Y$ is the result of $(p_1,\ldots,p_k)$-surgery on a framed
link $L=L_1\cup\cdots\cup L_k$ in $S^3$ whose pairwise linking numbers vanish.
Thus $|Y|=|p_1\cdots p_k|$. Assume the result is true
for $|Y|<n$, and suppose $|Y|=n$. Since the case $|Y|=1$ has already been established,
we may assume that $Y$ is not an integral homology 3-sphere, and (by reordering) that $|p_1| > 1$.
Let $Z_p$ be $(p,p_2,\ldots,p_k)$-surgery on $L$. We have
an exact sequence
\[
\cdots I^\#(Z_{\infty};{\lambda})\to I^\#(Z_{p_1-1};{\lambda\cup\mu})\to I^\#(Z_{p_1};{\lambda})
\to I^\#(Z_{\infty};{\lambda})\cdots
\]
The degree of the first map is odd, while the other two are even, cf. \cite[\S 42.3]{kmm}.
Observing that $Z_{p_1}=Y$, we obtain
\[
i(Y;\lambda) = i(Z_{p_1-1};{\lambda\cup\mu}) + i(Z_\infty;\lambda).
\]
By the induction hypothesis, the right side is
\[
|(p_1-1)p_2\cdots p_k| + |p_2\cdots p_k| = n,
\]
establishing the result for all rational homology 3-spheres which are obtained by
integral surgeries on algebraically split links.
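The arithmetic of this induction can be illustrated by a short recursion (ours, for illustration only; we restrict to non-negative framings, whereas the text treats $|p_1|>1$ in general):

```python
def euler_char(framings):
    """chi(I#) for surgery on an algebraically split link, illustrating the
    induction of the proof. Restricted to non-negative framings."""
    if not framings:
        return 1                       # empty surgery: S^3 has |H_1| = 1
    p, rest = framings[0], list(framings[1:])
    if p == 0:
        return 0                       # 0-surgery gives b_1 > 0, so chi = 0
    if p == 1:
        return euler_char(rest)        # |1 * p_2 ... p_k| = |p_2 ... p_k|
    # the surgery exact sequence: chi(Z_p) = chi(Z_{p-1}) + chi(Z_infty)
    return euler_char([p - 1] + rest) + euler_char(rest)
```

The recursion reproduces $|p_1\cdots p_k|$, matching the induction step $|(p_1-1)p_2\cdots p_k| + |p_2\cdots p_k| = |p_1p_2\cdots p_k|$ for $p_1>1$.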
We now prove the result for all rational homology 3-spheres $Y$.
We use the fact that for any such 3-manifold, there is a framed algebraically
split link $L\subset S^3$ such that some integral surgery on $L$ yields
$Z=Y\# Y'$, where $Y'$ is a connected sum of lens spaces
of type $L(p,1)$, cf. \cite[Cor. 2.5]{ohtsuki}.
Since $Y'$ is integral surgery on an algebraically split link,
$i(Y')=|Y'|$. Then (\ref{eq:multip}) yields
\[
i(Y;\lambda) = i(Z;\lambda)/i(Y') = |Z|/|Y'| = |Y|,
\]
establishing the result for all rational homology 3-spheres.
Finally, we consider the case in which $b_1(Y)>0$.
We can always find $Z$
and a framed knot $K\subset Z$ such that $Y$ is 0-surgery on $K$
and $b_1(Z)+1=b_1(Y)$. We have an exact sequence
\[
\cdots I^\#(Y;{\lambda}) \to I^\#(Z_{1};{\lambda\cup\mu})\to I^\#(Z;{\lambda})\to I^\#(Y;{\lambda})\cdots
\]
where $Z_1$ is the result of 1-surgery on $K$.
The degrees of the first two maps are even, while the third is odd, again cf. \cite[\S 42.3]{kmm}. Thus
\[
i(Y;\lambda)=i(Z_{1};{\lambda\cup\mu})-i(Z;{\lambda}).
\]
The proof is again by induction. If $b_1(Y)=1$, then the right side is known,
because $Z_{1}$ and $Z$ are rational homology spheres; we have $|Z_1|=|Z|$, so the right side vanishes.
Now suppose the result has been proven for $0<b_1<n$. If $b_1(Y)=n$, both terms on the right side vanish
by the induction hypothesis, and the proof is complete.
\end{proof}
\section{Framed Instanton Homology}
In this section we discuss the basic constructions and properties of the
groups $I^\#(Y)$. These are a special case of the groups
$I^\#(Y,K)$ introduced by Kronheimer and Mrowka in \cite{kmu}.
Here $Y$ is a 3-manifold and $K$ is a knot or link in $Y$,
and we have $I^\#(Y)=I^\#(Y,\emptyset)$. The name
{\em framed instanton homology} comes from \cite{kmki}. The group $I^\#(Y)$
is isomorphic to the sutured instanton group $\text{SHI}(M,\gamma)$
from \cite{kms}, where $M$
is the complement of an open 3-ball in $Y$ and
$\gamma$ is a suture on the 2-sphere boundary.
\subsection{Framed Instanton Groups}\label{sec:framed}
Let $Y$ be a connected, oriented, closed 3-manifold.
Consider an $\text{SO}(3)$-bundle $\mathbb{Y}^\#$ over $Y\# T^3$
with $\mathbb{Y}^\#$ trivial over $Y$ and non-trivial over $T^3$.
To make the construction of $\mathbb{Y}^\#$ from $Y$ more precise, we can
once and for all pick a point $x\in T^3$, a bundle $\mathbb{T}^3$ over $ T^3$
geometrically represented by an $S^1$-factor, and an isomorphism $\mathbb{T}^3_x\simeq\text{SO}(3)$.
Then, up to inessential choices, $\mathbb{Y}^\#$ can be constructed from $Y$ and a basepoint
$y\in Y$. Indeed, we can perform the connected sum $Y\# T^3$ between
3-balls around $y$ and $x$, and glue the bundles $Y\times\text{SO}(3)$ and $\mathbb{T}^3$
by expanding the isomorphism $\text{SO}(3)\simeq \mathbb{T}_x^3$ near $x$.
We describe a useful operation for cobordisms in this context.
Let $X:Y_1\to Y_2$ be a cobordism and let $\gamma:[0,1]\to X$ be a properly
embedded path with $\gamma(0)$ and $\gamma(1)$ being the chosen basepoints in
$Y_1$ and $Y_2$, respectively. Given another such pair $X',\gamma'$ where
$X':Y'_1\to Y'_2$, we form a cobordism
\[
X\Join X':Y_1\# Y_1'\to Y_2\# Y_2'
\]
as follows: let $\Gamma$ be a neighborhood of $\gamma$ diffeomorphic to
$\text{int}(D^3)\times [0,1]$, and write
\[
\partial(X\setminus \Gamma) \setminus (Y_1\cup Y_2\setminus \partial\Gamma) = S^2\times [0,1];
\]
do the same for $X'$, and identify the copies of $S^2\times [0,1]$ by
an orientation reversing homeomorphism. See
Figure \ref{fig:join}. We omit the paths from the notation $X\Join X'$
because for all of our cobordisms there will be a natural choice of path
up to isotopy relative to the boundaries. The operation $\Join$ extends to
glue together cobordisms of bundles $\mathbb{X}$ and $\mathbb{X}'$ if a path of
isomorphisms $\mathbb{X}_{\gamma(t)}\simeq \mathbb{X}'_{\gamma'(t)}$ is chosen.
Let $g$ be a gauge transformation of $\mathbb{Y}^\#$ with $\eta(g)\in H^1(Y\# T^3;\mathbb{F}_2)$
Poincar\'{e} dual to a 2-torus $\Sigma\subset T^3$ over which $\mathbb{Y}^\#$ is non-trivial.
Here $\eta:\mathscr{G}(\mathbb{X})\to H^1(X;\mathbb{F}_2)$ is from the exact sequence (\ref{eta}).
Such a transformation may be constructed explicitly as in
\cite[Lemma A.2]{ds}.
Define the framed gauge transformations $\mathscr{G}^\#$
to be the subgroup of $\mathscr{G}(\mathbb{Y}^\#)$ generated by $\mathscr{G}_\text{ev}(\mathbb{Y}^\#)$
and $g$. We let $\mathfrak{C}^\#$ denote the critical set of a perturbed
Chern-Simons functional $\textbf{cs}_\pi$ on $\mathscr{C}/\mathscr{G}^\#$.
Note that $\mathfrak{C}^\#$ is obtained from $\mathfrak{C}(\mathbb{Y}^\#)$
by modding out by the $\mathbb{Z}/2$-action of degree 4
induced by the gauge transformation $g$.
We define the chain complex $\text{C}^\#(Y)$ for $I^\#(Y)$
following ideas from \cite[\S 4.4]{kmu}.
This definition transparently replaces the notion of an I-orientation
with that of a homology orientation.
Fix once and for all a bundle
$\mathbb{W}:S^3\times\text{SO}(3)\to \mathbb{T}^3$ over $T^2\times D^2\setminus \text{int}(D^4):S^3\to T^3$
extending $\mathbb{T}^3$. Fix a path $\gamma$ in $W$ beginning in $S^3$,
ending at $x\in T^3$, and
a path of isomorphisms $\mathbb{W}_{\gamma(t)}\simeq\text{SO}(3)$, the isomorphisms at the ends being the
natural choices.
We define
\[
\text{C}^\#(Y) = \bigoplus_{\mathfrak{a}\in\mathfrak{C}^\#} \mathbb{Z}\Lambda^\#(\mathfrak{a})
\]
where $\Lambda^\#(\mathfrak{a})$ is the 2-element set of orientations of the
line $\text{det}(D_A)$; here $A$ is a connection on $[0,1]\times\mathbb{Y}\Join\mathbb{W}$
(with cylindrical ends attached) where the
limit of $A$ over the $\mathbb{R}\times \mathbb{Y}$ cylindrical end is equivalent to
the trivial connection, and the limit of $A$ over the
$\mathbb{R}\times \mathbb{Y}^\#$ end is in the class $\mathfrak{a}$. The operator $D_A$
is as in \S \ref{sec:index}.
The differential for $\text{C}^\#(Y)$ is straightforward to define,
following the construction of the differential for $I(\mathbb{Y})$ in \S \ref{sec:instantongroups}, which followed
\cite[\S 3.6]{kmu}. Note that a base connection as in the definition for $\text{C}(\mathbb{Y})$ is no longer needed.
In summary, given $Y$ with a basepoint, with suitable metric and
perturbation, the complex $\text{C}^\#(Y)$ and hence the group $I^\#(Y)$ are determined.
The isomorphism class of $I^\#(Y)$ depends only on $Y$.\\
\begin{figure}[t]
\includegraphics[scale=.38]{join2.pdf}
\caption{A schematic depiction of the $\Join$ operation. The thicker
lines represent actual boundary components.}
\label{fig:join}
\end{figure}
\subsection{Maps from Cobordisms}\label{sec:cobs}
We describe how a cobordism $X:Y_1\to Y_2$ with a path $\gamma$ as above
gives rise to a map $I^\#(X):I^\#(Y_1)\to I^\#(Y_2)$. Again,
we omit $\gamma$ from the notation because there will always be a natural choice for us.
We always assume $X$ and $Y_1,Y_2$ are connected.
Take the path in $ T^3\times [0,1]$ given by $t\mapsto (x,t)$.
Using this we form a cobordism
\[
X^\#=X\Join( T^3\times [0,1]):Y_1\# T^3\to Y_2\# T^3.
\]
Further, there is a natural choice for bundle $\mathbb{X}^\#$ over $X^\#$ by
performing the $\Join$ operation between $X\times\text{SO}(3)$ and $\mathbb{T}^3\times [0,1]$
using the constant path of isomorphisms $\text{SO}(3)\simeq \mathbb{T}_x^3$.
We enlarge the even gauge transformation
group used for $\mathbb{X}^\#$ to include gauge transformations
whose restriction to each $T^3$ is of the form $g$ from \S \ref{sec:framed} above.
See \cite[\S 5.1]{kmu} for a general discussion. Then, in the usual way,
we obtain a chain map $m^\#(X):\text{C}^\#(Y_1)\to\text{C}^\#(Y_2)$ and an induced map
$I^\#(X)$ on homology.
The data of an I-orientation
may be replaced by an orientation of the line $\text{det}(D_A)$ where $A$ is the
trivial connection on $X\times\text{SO}(3)$. Following \cite[\S 3.8]{kmki}, but using
homology instead of cohomology, this amounts to an orientation of the vector space
\[
\mathcal{L}(X):=H_1(Y_1;\mathbb{R})\oplus H_1(X;\mathbb{R})\oplus H_2^+(X;\mathbb{R}),
\]
where $H_2^+(X;\mathbb{R})$ is a maximal positive definite subspace for the
intersection form on $H_2(X;\mathbb{R})$.
A choice of such an orientation is called a \textit{homology orientation}
for the cobordism $X$, and is typically denoted $\mu_X$. In summary, given
$X:Y_1\to Y_2$, a path $\gamma$ from the basepoint of $Y_1$ to the basepoint of $Y_2$,
a suitable perturbation and metric, and a homology orientation
$\mu_X$, the chain map $m^\#(X)$ is determined. The induced map $I^\#(X)$ depends on $X$, $\mu_X$, and presumably $\gamma$.
We define $I^\#(\emptyset)=I^\#(S^3)$, and when $X:\emptyset\to \partial X$,
we define the map $I^\#(X)$ by deleting a 4-ball in $X$. In particular, when
$X$ is a compact, connected, oriented 4-manifold with connected boundary, and an orientation of
$H_1(X;\mathbb{R})\oplus H_2^+(X;\mathbb{R})$ is chosen, we obtain an element
\[
[X]^\# \in I^\#(\partial X).
\]
We also obtain a map $[X]_\#:I^\#(\overline{\partial X})\to\mathbb{Z}$
by viewing $X:\overline{\partial X}\to\emptyset$ and orienting
$H_1(\partial X;\mathbb{R})\oplus H_1(X;\mathbb{R})\oplus H_2^+(X;\mathbb{R})$.
If $X$ is a closed, connected, oriented 4-manifold and $H_1(X;\mathbb{R})\oplus H_2^+(X;\mathbb{R})$
is oriented, then we have a number $[X]^\#\in\mathbb{Z}$.
Finally, we mention another topological operation that arises naturally in
this setting. This is the boundary sum $W\natural W'$ of two 4-manifolds
with boundary, as used in \cite{gs}; one deletes a model half-4-ball along the boundaries of
$W$ and $W'$ and glues them together with an
orientation-reversing homeomorphism, so that $\partial(W\natural W')=\partial W\#\partial W'$.
We have
\[
\left(X\Join X'\right)\circ \left(W\natural W'\right) \simeq
\left(X\circ W\right)\natural\left(X'\circ W'\right)
\]
where compositions involved are of course assumed to make sense, and the same relation holds
with the compositions reversed. See Figure \ref{fig:natural}.\\
\begin{figure}[t]
\includegraphics[scale=.33]{natural.pdf}
\caption{On the left, a schematic depiction of the boundary sum $\natural$ operation. On the right,
we compose the $\Join$ operation against $\natural$, and the result may be interpreted as involving only $\natural$. The thick lines represent actual boundary components.}
\label{fig:natural}
\end{figure}
\subsection{Grading} \label{sec:framedgr}
We now define the absolute $\mathbb{Z}/4$-grading on $I^\#(Y)$.
Let $\mathbb{W}'$ be a completion of $\mathbb{W}$ from \S \ref{sec:framed}
with the 4-ball filled in, so that it is
a non-trivial bundle over $T^2\times D^2$, and we may write $\mathbb{W}':\emptyset \to \mathbb{T}^3$. Fix an integer $k$.
For $\mathfrak{a}\in\mathfrak{C}^\#(Y)$ we define
\[
\text{gr}(\mathfrak{a}) := -\mu(\mathbb{E} \;\natural\; \mathbb{W}',\mathfrak{a}) -b_1(E)+b_+(E)-b_1(Y) + k \mod 4
\]
where $E:\emptyset\to Y$ is a 4-manifold with boundary $Y$ and $\mathbb{E}=E\times\text{SO}(3)$.
We choose $k$ such that $I^\#(S^3)$
is supported in grading $0$. The proof that this grading is
well-defined is the same as the case of the absolute mod 2 grading for $I(\mathbb{Y})$ as for example in
\cite{d}; we get $\mathbb{Z}/4$ instead of $\mathbb{Z}/2$ because the characteristic classes
of the bundles are uniformly controlled in this case. We
give the argument for completeness, and compute the degrees of cobordism maps.
We have chosen our conventions so that the degree formula
aligns with that of \cite[Prop. 4.4]{kmu}.
\begin{prop}
The assignment \emph{$\mathfrak{a}\mapsto \text{gr}(\mathfrak{a})$} gives a well-defined
$\mathbb{Z}/4$-grading on $\text{C}^\#(Y)$ for which the differential has
degree $-1$ and thus descends to a $\mathbb{Z}/4$-grading on $I^\#(Y)$.
Given a cobordism $X:Y_1\to Y_2$ equipped with the data to form $X^\#$ as in \S \ref{sec:cobs}, the
degree of the induced map
$I^\#(X):I^\#(Y_1)\to I^\#(Y_2)$ is given by the expression for deg$(X)$ in (\ref{degofmap}) taken
modulo 4. More generally, if $\mathbb{Y}_i=Y_i\times\text{SO}(3)$ and $\mathbb{X}:\mathbb{Y}_1\to\mathbb{Y}_2$ is possibly non-trivial and comes equipped with the
data to form $\mathbb{X}^\#$, then the degree of the induced map $I^\#(\mathbb{X}):I^\#(Y_1)\to I^\#(Y_2)$ is given by
\begin{equation}
-\frac{3}{2}(\chi(X)+\sigma(X)) + \frac{1}{2}(b_1(Y_2)-b_1(Y_1)) + 2\mathscr{P}(\mathbb{X}) \mod 4\label{eq:degmaster}
\end{equation}
where the invariant $\mathscr{P}(\mathbb{X})\in\mathbb{Z}/2$ is defined by
\[
\mathscr{P}(\mathbb{X}) \equiv [S]\cdot [S] \mod 2.
\]
Here $S\subset X$ is a surface in the interior of $X$, $[S]\in H_2(X;\mathbb{F}_2)$, and
the image of $[S]$ in $H_2(X,\partial X;\mathbb{F}_2)$ is Poincar\'{e} dual to $w_2(\mathbb{X})$.\label{gradingprop}
\end{prop}
\begin{proof}
Let $E':Y\to\emptyset$ and $\mathbb{E}'=E'\times\text{SO}(3)$,
and let $\mathbb{W}''$ be the reverse of $\mathbb{W}'$. In particular, we may write $\mathbb{W}'':\mathbb{T}^3\to\emptyset$.
Then by (\ref{glue}) we have
\begin{equation}
\mu(\mathbb{E}\;\natural\;\mathbb{W}',\mathfrak{a}) + \mu(\mathfrak{a},\mathbb{E}'\;\natural\;\mathbb{W}'') = \mu((\mathbb{E}'\circ\mathbb{E})\#(\mathbb{W}''\circ\mathbb{W}')).\label{eq:gradeglue}
\end{equation}
By (\ref{glue}) we may write the right hand side as
\[
\mu(\mathbb{E}'\circ\mathbb{E}) + 3 + \mu(\mathbb{W}''\circ\mathbb{W}').
\]
Note $\mathbb{W}''\circ\mathbb{W}'$ is a bundle over $T^2\times S^2$, which necessarily has $p_1$ congruent to
$0$ mod $4$. Also, $(1-b_1+b_+)(T^2\times S^2)=0$. By (\ref{closed}) we conclude that $\mu(\mathbb{W}''\circ\mathbb{W}')$ is
congruent to $0$ mod $4$. Noting that $\mathbb{E}'\circ\mathbb{E}$ is a trivial bundle,
(\ref{eq:gradeglue}) is mod 4 congruent to
\[
\mu(\mathbb{E}'\circ\mathbb{E}) + 3 = 3(b_1-b_+)(E'\circ E)
\]
which by a Mayer-Vietoris argument (see \S \ref{sec:homor}) is mod 4 congruent to
\[
-b_1(E)-b_1(E') + b_+(E) + b_+(E')+ b_1(Y).
\]
It follows that the expression
\[
\text{gr}(\mathfrak{a}) - \mu(\mathfrak{a},\mathbb{E}'\;\natural\;\mathbb{W}'') = b_1(E') - b_+(E') - 2b_1(Y)+k \mod 4
\]
is independent of $\mathbb{E}$, and thus so is $\text{gr}(\mathfrak{a})$.
In other words, $\text{gr}(\mathfrak{a})$ is a well-defined $\mathbb{Z}/4$-grading on $\text{C}^\#(Y)$.
Suppose $\mathfrak{a},\mathfrak{b}\in\mathfrak{C}^\#(Y)$ with $\mu(\mathfrak{a},\mathbb{R}\times\mathbb{Y}^\#,\mathfrak{b})=1$.
Then
\[
\mu(\mathbb{E}\;\natural\;\mathbb{W}',\mathfrak{a})+\mu(\mathfrak{a},\mathbb{R}\times\mathbb{Y}^\#,\mathfrak{b}) = \mu(\mathbb{E}\;\natural\;\mathbb{W}',\mathfrak{b})
\]
yields $\text{gr}(\mathfrak{b})-\text{gr}(\mathfrak{a}) = -1$. It follows that the differential
lowers the grading by 1. Now we compute the degree of a map $I^\#(X)$ induced by a cobordism
$X:Y_1\to Y_2$.
Let $\mathbb{X}=X\times\text{SO}(3)$ and form $\mathbb{V} = \mathbb{X}\Join(\mathbb{T}^3\times [0,1])$.
Let $\mathfrak{a}\in \mathfrak{C}^\#(Y_1)$ and $\mathfrak{b}\in \mathfrak{C}^\#(Y_2)$
with $\mu(\mathfrak{a},\mathbb{V},\mathfrak{b})=0$.
Let $E:\emptyset\to Y_1$ and $\mathbb{E}=E\times\text{SO}(3)$. Then (\ref{glue}) and $\mu(\mathfrak{a},\mathbb{V},\mathfrak{b})=0$ yield
$\mu(\mathbb{V}\circ(\mathbb{E}\;\natural\;\mathbb{W}'),\mathfrak{b}) = \mu(\mathbb{E}\;\natural\;\mathbb{W}',\mathfrak{a})$.
Thus $\text{deg}(X) \equiv \text{gr}(\mathfrak{b})-\text{gr}(\mathfrak{a})$ is given by
\[
-b_1(X\circ E) + b_+(X\circ E) - b_1(Y_2) + b_1(E) - b_+(E) + b_1(Y_1).
\]
From the discussion in \S \ref{sec:homor}, $-b_1(X\circ E) + b_+(X\circ E)$ is equal to
\[
-b_1(E)-b_1(X) + b_+(E) + b_+(X) + b_1(Y_1).
\]
We obtain the simplified expression
\begin{equation}
\text{deg}(X) \equiv -b_1(X) + b_+(X) + 2b_1(Y_1) - b_1(Y_2) \mod 4.\label{eq:degprop}
\end{equation}
Using the assumption that $X$, $Y_1$ and $Y_2$
are connected and non-empty, we have $\chi(X) = 1-b_1(X)+b_2(X)-b_3(X)$.
Poincar\'{e}-Lefschetz duality tells us $b_3(X)=b_1(X,\partial X)$, and by the long
exact sequence for the pair $(X,\partial X)$ with real coefficients we obtain
\[
d-b_2(X)+b_1(\partial X)-b_1(X)+b_1(X,\partial X) -b_0(\partial X) + b_0(X) = 0,
\]
where $d$ is the dimension of the image of the map $H_2(X)\to H_2(X,\partial X)$.
Note $b_0(\partial X)=2$ and $b_0(X)=1$. On the other hand, $d=b_+(X)+b_-(X)$
and $\sigma(X) = b_+(X)-b_-(X)$. We obtain
\[
\chi(X) = -2b_1(X) + b_1(Y_1) + b_1(Y_2) + d,\quad \sigma(X) = 2b_+(X) - d.
\]
Plugging this data into expression (\ref{degofmap}), rewritten here as
\[
-\frac{3}{2}(\chi(X)+\sigma(X)) + \frac{1}{2}(b_1(Y_2)-b_1(Y_1)),
\]
yields, modulo 4, the expression for deg$(X)$ in (\ref{eq:degprop}). Now we approach the more general statement, supposing that $\mathbb{X}:\mathbb{Y}_1\to\mathbb{Y}_2$ is possibly non-trivial.
We write
\begin{equation*}
\text{deg}(\mathbb{X}) \equiv \text{deg}(X) + 2\mathscr{P}(\mathbb{X}) \mod 4,\label{eq:grp1}
\end{equation*}
where $\mathscr{P}(\mathbb{X})$ is to be determined.
Let $E_1:\emptyset\to Y_1$ and $\mathbb{E}_1=E_1\times\text{SO}(3)$. Given $\mathfrak{a}\in\mathfrak{C}^\#(Y_1)$, choose $\mathfrak{b}\in\mathfrak{C}^\#(Y_2)$ such that
$\mu(\mathfrak{a},\mathbb{X}^\#,\mathfrak{b})\equiv 0$. Write $\mathbb{X}_\text{tr}=X\times\text{SO}(3)$. Then
\begin{align*}
\text{deg}(\mathbb{X})-\text{deg}(X) &\equiv \mu(\mathbb{X}\circ\mathbb{E}_1\;\natural\;\mathbb{W}',\mathfrak{b})- \mu(\mathbb{X}_\text{tr}\circ\mathbb{E}_1\;\natural\;\mathbb{W}',\mathfrak{b}).
\end{align*}
After closing up bundles using some $E_2:Y_2\to\emptyset$ with $\mathbb{E}_2=E_2\times\text{SO}(3)$
and cancelling out the contribution from the bundle over $T^2\times S^2$ as above, this difference is seen from (\ref{closed}) to be
\[
-2p_1(\mathbb{E}_2\circ\mathbb{X}\circ\mathbb{E}_1) = \frac{1}{4\pi^2}\int_{E_2\circ X\circ E_1}\text{tr}(F_A^2),
\]
where $A$ is any connection. We can choose $A$ to be trivial away from the interior of $X$, thus
\[
\mathscr{P}(\mathbb{X}) \equiv \frac{1}{8\pi^2}\int_X \text{tr}(F_A^2) \mod 2
\]
where $A$ is any connection on $\mathbb{X}$ that restricts to trivial connections on each $\mathbb{Y}_i$.
In other words, $\mathscr{P}(\mathbb{X})\equiv p_1(\mathbb{X}')$ mod $2$, where $\mathbb{X}'$
is any trivial extension of $\mathbb{X}$ over a closed 4-manifold.
Thus
\begin{equation*}
\mathscr{P}(\mathbb{X})\equiv \widetilde{w}_2(\mathbb{X})^2 \mod 2,
\end{equation*}
where $\widetilde{w}_2(\mathbb{X})$ is a lift of $w_2(\mathbb{X})$ to $H^2(X,\partial X;\mathbb{F}_2)$.
The result follows.
\end{proof}
\vspace{10px}
\subsection{Duality}\label{sec:dual}
The chain group $\text{C}^\#(\overline{Y})$ is the same as $\text{C}^\#(Y)$ but
with the differential maps transposed. It follows that $I^\#(Y)$ and $I^\#(\overline{Y})$
are isomorphic over $\mathbb{Q}$. More precisely, given a homology orientation of $Y$,
i.e. an orientation of $H_1(Y;\mathbb{R})$, we get an isomorphism
\begin{equation}
I^\#(\overline{Y};\mathbb{Q})_i \simeq I^\#(Y;\mathbb{Q})_{b_1(Y)-i}^\ast.\label{eq:dual}
\end{equation}
The homology orientation is required to identify the chain groups. The
grading shift in (\ref{eq:dual}) is explained as follows. Let $E_1:\emptyset \to Y$ and
$E_2:Y\to\emptyset$, and $\mathfrak{a}\in\mathfrak{C}^\#(Y)$. Write $\overline{\mathfrak{a}}$
for the corresponding class in $\mathfrak{C}^\#(\overline{Y})$.
From (\ref{glue}) and (\ref{closed}) we obtain
\[
\mu(\mathbb{E}_1\;
\natural\; \mathbb{W}',\mathfrak{a}) + \mu(\overline{\mathfrak{a}},\mathbb{E}_2\;
\natural\;\mathbb{W}'') = 3(b_1(E)-b_+(E))
\]
where $\mathbb{E}_i=E_i\times\text{SO}(3)$, $E=E_2\circ E_1$, and $\mathbb{W}''$ is the reverse of $\mathbb{W}'$. The bundle $\mathbb{W}''\circ\mathbb{W}'$ over $T^2\times S^2$ has been removed from
the expression just as in \S \ref{sec:framedgr}. Using that
$b_1(E)-b_+(E)$ is equal to
\[
b_1(E_1)+b_1(E_2) - b_+(E_1) - b_+(E_2)-b_1(Y),
\]
see \S \ref{sec:homor}, we obtain $\text{gr}(\mathfrak{a}) + \text{gr}(\overline{\mathfrak{a}})\equiv b_1(Y) + 2k$. We claim $k$ is even. Let $\mathfrak{a}$ be the generator of $I^\#(S^3)$, represented by a flat connection on $T^3\simeq S^3\# T^3$. Recall that $k$ is chosen so that $I^\#(S^3)$ is supported in grading $0$, so we have $\text{gr}(\mathfrak{a}) =0$ (also see \S \ref{sec:examples}). In the definition of $\text{gr}(\mathfrak{a})$, choose $\mathbb{E}:\emptyset\to S^3$ to be a trivial bundle over a 4-ball. Then
\[
0 \equiv \text{gr}(\mathfrak{a}) \equiv -\mu(\mathbb{W}',\mathfrak{a}) + k \mod 4.
\]
Recall from the proof of Prop. \ref{gradingprop} that $\mu(\mathbb{W}''\circ\mathbb{W}')\equiv 0$ mod $4$, where $\mathbb{W}'':\mathbb{T}^3\to\emptyset$ is the reverse bundle-cobordism of $\mathbb{W}'$. By the index gluing formula (\ref{glue}) we then have $-\mu(\mathbb{W}',\mathfrak{a}) \equiv \mu(\mathfrak{a},\mathbb{W}'')$ mod $4$. Since $W'$ is diffeomorphic to its orientation reversal, which is $W''$, we also have $\mu(\mathbb{W}',\mathfrak{a}) \equiv \mu(\mathfrak{a},\mathbb{W}'')$ mod $4$, as follows from the Atiyah-Patodi-Singer index formula \cite[Thm. 3.10]{aps}. Thus $k \equiv \mu(\mathbb{W}',\mathfrak{a}) \equiv 0$ mod $2$. It follows that
\[
\text{gr}(\mathfrak{a}) + \text{gr}(\overline{\mathfrak{a}})\equiv b_1(Y) \mod 4,
\]
establishing the grading shift in (\ref{eq:dual}).\\
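As a consistency check of the grading shift, take $Y=S^1\times S^2$, so $b_1(Y)=1$; identifying $S^1\times S^2$ with its orientation-reversal in a standard way as in \S \ref{sec:examples} below, (\ref{eq:dual}) reads
\[
I^\#(S^1\times S^2;\mathbb{Q})_i \simeq I^\#(S^1\times S^2;\mathbb{Q})^\ast_{1-i}.
\]
The two generators of $I^\#(S^1\times S^2)\simeq\mathbb{Z}_2\oplus\mathbb{Z}_3$ sit in gradings $2$ and $3$, and indeed $1-2\equiv 3$ and $1-3\equiv 2$ mod $4$, so the pairing matches the grading $2$ generator with the grading $3$ generator.\\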
\subsection{Exact Triangles}
In this section we state a few exact triangles for framed instanton homology.
For these it is necessary to allow non-trivial bundles.
In the above constructions, take $\mathbb{Y}^\#$ to be geometrically
represented by $\lambda\cup\omega$ where $\lambda\subset Y$ and $\omega$ is an
$S^1$-factor of $T^3$. We obtain a group $I^\#(Y;\lambda)$ that is
now only relatively $\mathbb{Z}/4$-graded. It is isomorphic to four consecutive
gradings of the relatively $\mathbb{Z}/8$-graded group $I(\mathbb{Y}^\#)$.
The isomorphism class of $I^\#(Y;\lambda)$ depends only on the oriented homeomorphism
type of $Y$ and the class $[\lambda]\in H_1(Y;\mathbb{F}_2)$.
Let $Y$ be a closed, oriented 3-manifold and $\lambda\subset Y$ a closed, unoriented
1-manifold as above. Let $K$ be a framed knot in $Y$ disjoint from $\lambda$.
Denote by $Y_i$ the result of $i$-surgery on $K$.
Let $\mu$ be the core of the knot $K$ as viewed in $Y_0$. Then we have an
exact triangle
\begin{equation*}
\cdots I^\#(Y;\lambda)\to I^\#(Y_0;\lambda\cup\mu)\to I^\#(Y_{1};\lambda) \to I^\#(Y;\lambda)\cdots
\end{equation*}
There are two other exact triangles corresponding to the two other rows in Figure \ref{fig:ses}.
For example, if we view $\mu$ as the core of the knot inside $Y_i$ where $i=\infty$ or $i=1$,
the exact sequence has $\mu$ appearing in the twisting for the group of $Y_i$, and not the other two.
Each of these is an application of Floer's original exact triangle, Theorem \ref{thm:floer},
obtained by connected summing each 3-manifold with $T^3$
and performing the surgeries away from $T^3$, with the appropriate overlying bundles.
By changing the framing of $K$, we obtain variants of the above triangles
that are computationally handy. Let $l$ and $m$ be the longitude and meridian
of $K$, respectively. Suppose the meridian is unchanged but the longitude
is changed to $-pm+l$. Then we have
\begin{equation*}
\cdots I^\#(Y;\lambda)\to I^\#(Y_{p};\lambda\cup\mu)\to I^\#(Y_{p+1};\lambda)\to I^\#(Y;\lambda) \cdots
\end{equation*}
where again the core $\mu$ can be arranged in two other ways.
Alternatively, keep the longitude the same but change the meridian to $m-ql$. Then
we have
\begin{equation*}
\cdots I^\#(Y_0;\lambda)\to I^\#(Y_{1/(q+1)};\lambda\cup\mu)\to I^\#(Y_{1/q};\lambda)\to I^\#(Y_0;\lambda) \cdots
\end{equation*}
where the same freedom with the placement of $\mu$ is understood. For other
variants, we refer the reader to \cite[\S 42.1]{kmm}.
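To illustrate the last triangle above, take $q=0$, with the convention $Y_{1/0}=Y_\infty = Y$; up to rotation this recovers the first triangle of this section, but with the twisting $\mu$ now placed at $Y_1$:
\[
\cdots I^\#(Y_0;\lambda)\to I^\#(Y_{1};\lambda\cup\mu)\to I^\#(Y;\lambda)\to I^\#(Y_0;\lambda) \cdots
\]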
For an alternative perspective, one can begin with a 3-manifold $Z$ with torus boundary and consider
the possible ordered triplets of Dehn fillings of $Z$ that are compatible with a surgery
triangle description.
This is the viewpoint taken in \cite[\S 42.1]{kmm} and \cite{os}.
We mention that the mod 2 degrees of the cobordism maps
in these exact triangles are the same as in
the monopole case, as explained in \cite[\S 42.3]{kmm}.
There are always non-trivial bundles amongst the 3 cobordism maps,
even if the three framed groups are untwisted.
For in this case the composite of three
consecutive cobordism bundles, call it $\mathbb{X}_{03}$ as in \S \ref{sec:x},
has $\mathscr{P}(\mathbb{X}_{03})\equiv 1\mod 2$. This is because $\mathbb{X}_{03}$ is
trivial away from a copy of $-\mathbb{C}\mathbb{P}^2$ minus a thickened $S^1$; over this region it restricts
to a non-trivial bundle $\mathbb{E}$ which is easily seen to have $\mathscr{P}(\mathbb{E})\equiv 1$.
Then, by the additivity of $\mathscr{P}(\mathbb{X})$, at least one of $\mathbb{X}_{i,i+1}$ has
$\mathscr{P}(\mathbb{X}_{i,i+1})\equiv 1$.
Note that, after computing $\text{deg}(X_{03})=1$, we see $\text{deg}(\mathbb{X}_{03}) \equiv -1\mod 4$.\\
\subsection{Examples}\label{sec:examples} In this section we consider the
framed instanton homology of $S^3$ and $S^1\times S^2$.
To compute $I^\#(S^3)$ it
suffices to compute $I(\mathbb{T}^3)$. This is well-known and elementary, see \cite{bd}.
Let $N$ be a regular neighborhood of the geometric representative for $\mathbb{T}^3$.
The flat connections modulo even gauge on $\mathbb{T}^3$ are in
correspondence with the set
\[
\{\rho\in\text{Hom}(\pi_1( T^3\setminus N),\text{SU}(2))\; | \; \rho(\nu)=-1\}/\text{SU}(2),
\]
where $\nu$ is a small meridian around $N$, and the $\text{SU}(2)$-action is by conjugation.
A computation shows that this set consists of two elements;
these two elements are non-degenerate and irreducible. The two classes, as
generators for $\text{C}(\mathbb{T}^3)$, differ in degree by $4$.
It follows that $\text{C}^\#(S^3)$ has one generator, and we obtain
\[
I^\#(S^3)\simeq\mathbb{Z}_{0},
\]
where, as usual, the subscript indicates the grading.
We usually assume a
distinguished generator for $I^\#(S^3)$ has been fixed.
Next, we compute $I^\#(S^1\times S^2)$. For this we adapt \cite[Lemma 8.3]{kmu}.
By placing the twisting $\mu$ at an $S^3$, we have an exact sequence
\[
\cdots I^\#(S^3) \xrightarrow{\alpha} I^\#(S^1\times S^2)
\xrightarrow{\beta}I^\#(S^3)\xrightarrow{\gamma} I^\#(S^3)\cdots
\]
We apply the grading formula (\ref{degofmap}).
The map $\alpha$ comes from the cobordism $D^2\times S^2\setminus \text{int}(D^4)$
from $S^3$ to $S^1\times S^2$. The overlying bundle is necessarily trivial.
We compute $\text{deg}(\alpha) \equiv -1$.
The map $\beta$ is the same cobordism, but reversed, and $\text{deg}(\beta)\equiv -2$.
By the previous section, we know the sum of the degrees of the three maps is $-1$ mod 4,
so $\text{deg}(\gamma)\equiv 2$. This can be computed directly by observing that
$\gamma$ comes from the cobordism $-\mathbb{C}\mathbb{P}^2$ minus two 4-balls, from
$S^3$ to $S^3$, with a non-trivial bundle.
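Indeed, a direct check using (\ref{eq:degmaster}): for this cobordism $X$ we have $\chi(X)=\chi(-\mathbb{C}\mathbb{P}^2)-2=1$, $\sigma(X)=-1$, $b_1(Y_i)=0$, and $\mathscr{P}(\mathbb{X})\equiv 1$ as in the previous subsection, giving
\[
\text{deg}(\gamma) \equiv -\tfrac{3}{2}(1-1) + 0 + 2\cdot 1 \equiv 2 \mod 4.
\]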
Because $\gamma:\mathbb{Z}_0\to\mathbb{Z}_0$ has degree $2$,
it must be $0$. By exactness, we conclude
\[
I^\#(S^1\times S^2)\simeq \mathbb{Z}_{2}\oplus\mathbb{Z}_{3},
\]
where, as usual, the subscripts indicate gradings.
As is evident by the above computation,
a canonical generator in grading $3$ for $I^\#(S^1\times S^2)$ is
given by $[D^2\times S^2]^\#$. Recall from \S \ref{sec:cobs} that $[D^2\times S^2]^\#$ is the notation for the relative invariant induced by the cobordism $D^2\times S^2:\emptyset\to S^1\times S^2$. A canonical homology orientation is used here.
The element $[S^1\times D^3]^\#$ generates the summand in grading 2. This
is seen by identifying $S^1\times S^2$ with its orientation-opposite in
a standard way, and viewing
$[D^2\times S^2]_\#$ as a map $I^\#(S^1\times S^2)\to \mathbb{Z}$.
For then we have
\[
[D^2\times S^2]_\#[S^1\times D^3]^\# = \pm 1,
\]
since $S^1\times D^3$ and $D^2\times S^2$ glue along $S^1\times S^2$ to give $S^4$.
However, the element $[S^1\times D^3]^\#$ is not canonically
homology oriented; it requires an orientation of
$H_1(S^1\times S^2;\mathbb{R})$. Thus a generator for
$\mathbb{Z}_2\subset I^\#(S^1\times S^2)$ is distinguished by
orienting $H_1(S^1\times S^2;\mathbb{R})$.\\
\subsection{The K{\"u}nneth Formula}\label{sec:prod}
Let $Y$ and $Y'$ be closed, oriented and connected 3-manifolds. If either
one of $I^\#(Y)$ or $I^\#(Y')$ is torsion-free, there is a graded isomorphism
\[
I^\#(Y\#Y')\simeq I^\#(Y)\otimes I^\#(Y').
\]
This is a special case of \cite[Cor. 5.9]{kmu}, and follows from Floer's original
excision theorem. Further, this isomorphism is natural for split cobordisms,
in the following sense. Let $X:Y_1\to Y_2$ and $X':Y_1'\to Y_2'$ be cobordisms
with paths chosen so that $X\Join X'$ is defined. Suppose
the above product isomorphism holds for $Y_1\#Y_1'$ and $Y_2\#Y_2'$; then we
have a commutative diagram
\[
\begin{CD}
I^\#(Y_1\#Y_1')@>{\simeq}>> I^\#(Y_1)\otimes I^\#(Y_1')\\
@V I^\#(X\Join X') VV @VV I^\#(X)\otimes I^\#(X') V \\
I^\#(Y_2\#Y_2') @>{\simeq}>> I^\#(Y_2)\otimes I^\#(Y_2') \\
\end{CD}
\]
We do not address the arrangement of homology orientations here, as we will not require it.\\
\subsection{A Connected Sum of $S^1\times S^2$'s}\label{sec:example} Let $Y$
be a 3-manifold with $Y\simeq \#^k S^1\times S^2$. From the K{\"u}nneth
formula it is clear that $I^\#(Y) \simeq \otimes^k (\mathbb{Z}_2\oplus\mathbb{Z}_3)$.
The subscripts here indicate gradings. Let $\mu_Y$ be an orientation of $H_1(Y;\mathbb{R})$.
In this section we construct an isomorphism
\[
\phi:{\textstyle{\bigwedge}}^\ast(H_1(Y;\mathbb{Z})) \to I^\#(Y)
\]
which only depends on $Y$ and $\mu_Y$, not the decomposition $Y\simeq \#^k S^1\times S^2$. The choice of $\mu_Y$ only affects the overall sign of $\phi$. The exterior power here, and for most of the paper, is over the ring $\mathbb{Z}$.
Choose oriented, closed, embedded curves $c_1,\ldots,c_k$ in $Y$ such that
there exists a diffeomorphism $Y\simeq\#^k_{i=1} S^1\times S^2$ sending
$c_i$ to $S^1\times\text{pt}$ in the $i^\text{th}$ copy of $S^1\times S^2$.
Given $J=\{i_1,\ldots,i_l\}\subset\{1,\ldots,k\}$ we define a
cobordism $X_J:\emptyset\to Y$ by starting with $Y\times[0,1]$
and attaching a 2-handle to each $c_i\times\{0\}$ if $i\in J$, and on top of this, attaching 3-handles and a 4-handle in a way such that $\partial X_J = Y\times \{1\}$ and
\begin{equation}\label{handleattach}
X_J \simeq X_1 \natural \cdots \natural X_k, \qquad X_i =
\begin{cases} S^1\times D^3 &\mbox{if $i\not\in J$}\\
D^2\times S^2 & \mbox{if $i\in J$} \end{cases}
\end{equation}
with $\partial X_i$ the $i^\text{th}$ copy of $S^1\times S^2$ in the decomposition
$Y\simeq\#^k_{i=1} S^1\times S^2$. Let $\{1,\ldots,k\}\setminus J = \{i_{l+1},\ldots,i_{k}\}$
be such that $\mu_Y = [c_{i_1}\wedge \cdots \wedge c_{i_k}]$. To homology
orient $X_J$ we orient $\mathcal{L}(X_J)= H_1(X_J;\mathbb{R})$
by $[c_{i_{l+1}}\wedge\cdots\wedge c_{i_k}]$. Define
$\psi:{\textstyle{\bigwedge}}^\ast(c_1,\ldots,c_k)\to I^\#(Y)$ by
\[
\psi(c_{i_1}\wedge\cdots\wedge c_{i_l})=[X_J]^\#.
\]
This map is an isomorphism by the case $k=1$ and the K{\"u}nneth formula.
With the help of the orientation $\mu_Y$ of $H_1(Y;\mathbb{R})$, we can define a
bilinear form
\[
\langle \cdot,\cdot \rangle: I^\#(Y)\otimes I^\#(Y)\to \mathbb{Z},
\]
see also (\ref{eq:dual}).
The elements $[X_J]^\#$ as $J$ runs over subsets of $\{1,\ldots,k\}$
form a basis for $I^\#(Y)$, so it suffices to define the form on these.
Given $J,K\subset \{1,\ldots,k\}$, let $X_J$ and $X_K$ be as above with homology orientations
$\mu_J$ and $\mu_K$, respectively. Then we have elements $[X_J]^\#,[X_K]^\#\in I^\#(Y)$.
Consider $\overline{X}_K:Y\to\emptyset$ and homology orient it by $\mu_{Y}\wedge\mu_{K}$.
This yields $[\overline{X}_K]_\#:I^\#(Y)\to\mathbb{Z}$.
Then the bilinear form $\langle\cdot,\cdot\rangle$ is given by
\[
\langle [X_K]^\#,[X_J]^\#\rangle = [\overline{X}_K]_\#[X_J]^\#=[\overline{X}_K\circ X_J]_\# = A_{JK}\in\mathbb{Z}.
\]
Now observe that
\begin{equation}\label{splits}
\overline{X}_{K} \circ X_{J} \simeq X_1 \# \cdots \# X_k, \qquad X_i \simeq
\begin{cases} S^1\times S^3 &\mbox{if $i\not\in J\cup K$}\\
S^2\times S^2 &\mbox{if $i\in J\cap K$}\\
S^4 & \mbox{otherwise} \end{cases}
\end{equation}
Note $[S^2\times S^2]^\#=0$, because the degree of the
cobordism $S^3\to S^3$ given by $S^2\times S^2$ minus two $4$-balls is odd,
and similarly for $[S^1\times S^3]^\#$.
Using the naturality with respect to split cobordisms of the K{\"u}nneth formula,
we conclude that $A_{JK}\neq 0$ if and only
if $J$ and $K$ are complementary, and in this case
$A_{JK}=\pm 1$.
This sign may be determined by using Definition \ref{def:homcom}, but we will not need it.
It is clear that this bilinear form is non-degenerate. Note that $\langle \cdot,\cdot \rangle$ depends on $c_1,\ldots,c_k$ (which determine an identification of $Y$ with $\overline{Y}$).
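For instance, in the smallest case $k=1$, the basis is $([X_\emptyset]^\#,[X_{\{1\}}]^\#)$ and the computation above gives
\[
\left(A_{JK}\right) = \begin{pmatrix} 0 & \pm 1 \\ \pm 1 & 0 \end{pmatrix},
\]
since $J$ and $K$ pair non-trivially exactly when they are complementary in $\{1\}$.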
We argue that $\psi$ is independent of the 2-handle framings
chosen to construct the $X_J$. First construct
cobordisms $X_J$ for each subset $J\subset\{1,\ldots,k\}$ as above.
Choose some $J$, and construct a cobordism $X'_J$ by attaching
the 2-handles using possibly different framings as was done for $X_J$,
subject to the constraint that $X'_J$
is of the form (\ref{handleattach}). Then
\[
\overline{X}_{K} \circ X'_{J} \simeq X_1 \# \cdots \# X_k
\]
just as in (\ref{splits}), except now if $i\in J\cap K$ then
$X_i$ is a possibly non-trivial $S^2$-bundle over $S^2$, in which case $[X_i]^\#=0$.
We homology orient $X_J'$ in the
same way as $X_J$.
It is easily seen that $[X_J']^\#$ has
all the same values $A_{JK}$ as $[X_J]^\#$ under the bilinear pairing, and thus
$[X_J']^\# = [X_J]^\#$.
Now we see how $\psi$ changes when we change the loops $c_i$.
Consider replacing the oriented loop $c_1$ by an oriented connected sum
$c_1\#c_2$. There are many ways of forming this connected sum.
Let $X_{c_1\# c_2}$ be the cobordism $\emptyset\to Y$ obtained
by attaching to $Y\times [0,1]$ a 2-handle along $c_1\# c_2\times\{0\}$ and 3-handles and a 4-handle as above. Supposing $\mu_Y = [c_1\wedge \cdots \wedge c_k]$, we homology orient $X_{c_1\# c_2}$ by $[c_2\wedge\cdots \wedge c_k]$, just as we homology orient $X_{\{1\}}$ and $X_{\{2\}}$. Then
\[
[X_{c_1\#c_2}]^\# = [X_{\{1\}}]^\# + [X_{\{2\}}]^\#.
\]
Viewing $X_{c_1\# c_2}:\emptyset\to Y$ and $\overline{X}_J:Y\to\emptyset$, this follows from computing
\[
\overline{X}_{J}\circ X_{c_1\# c_2} \simeq X \# X_3 \# \cdots \# X_k,
\qquad \begin{cases} X\simeq S^4 &\mbox{if $|\{1,2\}\cap J| =1$}\\
\text{deg}(X)\text{ is odd} & \mbox{otherwise} \end{cases}
\]
where each $X_i\simeq S^4$ if $i\in J$ and $\text{deg}(X_i)$ is odd otherwise, and then appealing to the non-degeneracy of our bilinear form.
A similar argument shows $[X_{c_1\#c_2\cup J}]^\# =
[X_{\{1\}\cup J}]^\# + [X_{\{2\}\cup J}]^\#$
where $J$ is any subset of $\{3,\ldots,k\}$.
As a consequence, $\psi$ induces a well-defined isomorphism
\[
\phi:{\textstyle{\bigwedge}}^\ast (H_1(Y;\mathbb{Z})) \to I^\#(Y).
\]
This is because any two sets of loops $c_1,\ldots,c_k$ in $Y$ as above (having the
property that there exists a diffeomorphism $Y\simeq \#^k S^1\times S^2$ sending
each $c_i$ to a factor $S^1\times \text{pt}$) are related by sequences of connected sums (and the reverse operation) as in the previous paragraph. Indeed, these are just handle-slides, and a result of Laudenbach and
Po\'{e}naru \cite{lp}, as cited in \cite[Rmk. 4.4.1]{gs}, says that any self-diffeomorphism
of $\#^k S^1\times S^2$ extends to a diffeomorphism of $\natural^k S^1\times D^3$, a
bounding 1-handlebody, which can be written as a composite of 1-handle slides. In fact, this result also says that the way in which the 3-handles and 4-handle are attached to construct $X_J$ above is essentially unique.
In summary, $\phi$ is defined by choosing an orientation $\mu_Y$ of $H_1(Y;\mathbb{R})$, a diffeomorphism
$Y\simeq \#^k S^1\times S^2$, oriented loops $c_1,\ldots,c_k$ corresponding to the
$S^1\times\text{pt}$ factors, and setting
\[
\phi([c_{i_1}]\wedge\cdots\wedge[c_{i_l}])=[X_J]^\#
\]
where the element $[X_J]^\#$ is
defined as above. The content of the above discussion is that this map is
well-defined and is an isomorphism. We mention that for
$x\in{\textstyle{\bigwedge}}^i (H_1(Y;\mathbb{Z}))$ with $b_1(Y)=k$, the grading
of $\phi(x)$ in $I^\#(Y)$ is given by $2k + i$ mod $4$.\\
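For instance (a quick illustration of the formula, not an additional computation), when $Y=S^1\times S^2$ we have $k=1$, and the two generators have gradings
\[
\text{gr}(\phi(1)) = 2\cdot 1 + 0 = 2, \qquad \text{gr}(\phi([c_1])) = 2\cdot 1 + 1 = 3 \mod 4,
\]
while for $Y=S^3$ (the case $k=0$) the generator $\phi(1)$ has grading $0$.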
\subsection{A Spectral Sequence}
The spectral sequence of Theorem \ref{ss1} leads to one for the groups
$I^\#(Y)$. The setup is as follows. Again we have an $m$-component framed link
$L$ in $Y$. We view $L$ as a link in $Y\# T^3$, and we choose a
family of bundles over the surgered manifolds $Y_v\# T^3$ which for
$v\in \{0,1\}^m$ are of the form $(Y_v\times\text{SO}(3))\#\mathbb{T}^3$, at the expense of
having possibly non-trivial bundles (so twisted framed groups)
for the indices $v\in\{0,1,\infty\}^m\setminus\{0,1\}^m$. We are using
the third row of Figure \ref{fig:ses} to achieve this setup.
This forces the bundle over $Y\# T^3$ to be geometrically represented
by the link $L$ together with an $S^1$-factor of $T^3$.
More general spectral sequences may be obtained by allowing twisting
in the $E^1$-page.
Before stating the resulting theorem,
we discuss how to lift the previous $\mathbb{Z}/2$-grading $\text{gr}[\textbf{C}]$
for the $E^1$-page of the link surgeries spectral sequence to a $\mathbb{Z}/4$-grading,
in the special case
where $[L]=0\in H_1(Y;\mathbb{F}_2)$.
Write $\text{gr}[Y]$ for the $\mathbb{Z}/4$-grading on $I^\#(Y)$
and $\mathbb{Y}_v^\#=\mathbb{Y}_v\#\mathbb{T}^3$, where for $v\in\{0,1\}^m\cup\{\boldsymbol{\infty}\}$
we have $\mathbb{Y}_v=Y_v\times\text{SO}(3)$. Recall that we
conflate $\boldsymbol{\infty}$ and $-\mathbf{1}$.
Also write $\mathbb{X}_{vw}^\#=\mathbb{X}_{vw}\Join(\mathbb{T}^3\times [0,1])$ for
the surgery cobordism bundles.
For $v\in\{0,1\}^m\cup\{\boldsymbol{\infty}\}$
we may view each $\text{C}(\mathbb{Y}_v^\#)$ as two copies of $\text{C}^\#(Y_v)$,
$\mathbb{Z}/4$-graded by $\text{gr}[Y_v]$. For $v\in\{0,1\}^m$ and
$x\in\text{C}(\mathbb{Y}_v^\#)\subset\textbf{C}$
of homogeneous $\text{gr}[Y_v]$ grading, we define
\begin{equation}
\text{gr}[\textbf{C}](x) = \text{gr}[Y_v](x) - \text{deg}(\mathbb{X}_{\boldsymbol{\infty}v}) - |v|_1 \mod 4.\label{eq:grt}
\end{equation}
The verification that $\boldsymbol{\partial}$ lowers this grading by 1,
and that the quasi-isomorphism $\mathbf{Q}:\text{C}(\mathbb{Y}^\#)\to \textbf{C}$
preserves the relevant $\mathbb{Z}/4$-gradings, is the same as in \S \ref{sec:linksgr}.
\begin{theorem}\label{thm:framed}
Let $L$ be an oriented framed link with $m$ components in $Y$ and for
each $v\in \{\infty,0,1\}^m$ denote by $Y_v$ the result of $v$-surgery
on $L$ in $Y$. There are surgery cobordisms $X_{vw}$ for $v<w$
from $Y_v$ to $Y_w$ with homology orientations $\mu_{vw}$ satisfying
$\mu_{uw}\circ\mu_{vu}=\mu_{vw}$ whenever $v<u<w$, and an appropriate
bundle $\mathbb{X}_{vw}$ over each $X_{vw}$, such that there is a spectral sequence $(E^r,d^r)$
with
\[
E^1 = \bigoplus_{v\in\{0,1\}^m} I^\#(Y_v), \qquad d^1 =
\sum_{\substack{v < w\\ |w-v|_1 =1}} (-1)^{\delta(v,w)}I^\#(\mathbb{X}_{vw})
\]
where $\delta(v,w)$ is as in Theorem \ref{ss1}.
The spectral sequence is graded
by $\mathbb{Z}/2\times\mathbb{Z}$, where $d^r$ has bi-degree $(1,r)$,
and it converges
by the $E^{m+1}$-page to the possibly twisted group
\[
I^\#(Y;L).
\]
The $\mathbb{Z}/2$-grading induced by the spectral
sequence agrees with the
$\mathbb{Z}/2$-grading of $I^\#(Y;L)$. If $[L]=0\in H_1(Y;\mathbb{F}_2)$,
then we can lift the $\mathbb{Z}/2$-grading
of the $E^1$-page to a $\mathbb{Z}/4$-grading by (\ref{eq:grt}), such that the induced
$\mathbb{Z}/4$-grading agrees with the one on $I^\#(Y)$. The differential
for the $\mathbb{Z}/4\times\mathbb{Z}$-grading has bi-degree $(-1,r)$.
\end{theorem}
\section{Connected Sum Formulas}\label{sec:connect}
A closed, connected, oriented 3-manifold $Y$ is called an integral homology 3-sphere
if $H_1(Y;\mathbb{Z})=0$, or equivalently, if $Y$ has
the same integral homology as the 3-sphere. In this section we study
$I^\#(Y)$ when $Y$ is an integral homology 3-sphere. We relate the
$\mathbb{Z}/4$-graded group $I^\#(Y)$ to Floer's original $\mathbb{Z}/8$-graded
instanton homology $I(Y)$ using the trivial connection and $u$-maps studied
by Donaldson \cite{d} and Fr{\o}yshov \cite{froy}.\\
\subsection{Gradings} Let $Y$ be an integral homology 3-sphere.
Let $\theta$ be the distinguished trivial connection on $Y\times\text{SO}(3)$.
We can use $\theta$ to fix an absolute $\mathbb{Z}/8$-grading on
$I(Y)$ as was done in Floer's original
construction \cite{f1}. For a connection $a$ on $Y\times\text{SO}(3)$ we set
\[
\text{gr}(a) = -3-\mu(\theta,a)
\]
and on $\mathscr{G}$-classes $\mathfrak{a}$ this descends to a function with
$\text{gr}(\mathfrak{a})\in\mathbb{Z}/8$. Note that $\mathscr{G}_\text{ev}=\mathscr{G}$
in this setting. When $a$ is irreducible, $\text{gr}(a)=\mu(a,\theta)$.
The trivial connection has $\text{gr}(\theta)=0$.
The differential shifts this grading by $-1$
and the grading descends
to the $\mathbb{Z}/2$-grading defined in \S \ref{sec:instgrading}.
We write $\mathfrak{t}$ for the $\mathscr{G}$-class of $\theta$.
Our $I(Y)_i$ agrees with Donaldson's $\text{HF}(Y)_i$ in \cite{d}.
Note that $I(\overline{Y})_i$ is the same as the cohomology group
$I(Y)^{5-i}$. In particular,
by the universal coefficients theorem, the vector spaces
$I(\overline{Y})_i\otimes\mathbb{Q}$ and $I(Y)_{5-i}\otimes\mathbb{Q}$
are isomorphic. Our $I(Y)_i$ is the same as Fr{\o}yshov's $\text{HF}(Y)^{5-i} =
\text{HF}(\overline{Y})_i$ from \cite{froy}.\\
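With field coefficients, a concrete instance of this duality is provided by the Poincar\'{e} homology 3-sphere $Y=\Sigma(2,3,5)$, for which $I(Y)=F_1\oplus F_5$ (subscripts indicate gradings; this computation is used in the examples of \S \ref{sec:connect}):
\[
I(\overline{Y})_i \simeq I(Y)^{5-i}, \qquad \text{so} \qquad I(\overline{Y})=F_0\oplus F_4 .
\]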
\subsection{Other Boundary Maps}
From here on we fix a field $F$ which has char$(F)\neq 2$
and take all homology with $F$-coefficients.
With an integral homology 3-sphere $Y$ fixed, we write $\text{C}_i=\text{C}(Y)_i$ and
$I_i=I(Y)_i$. Following \cite{d,froy} we have maps
\[
\delta:\text{C}_1\to F,\qquad \delta':F \to\text{C}_4
\]
defined using the trivial connection.
For $\mathfrak{a}\in\mathfrak{C}^\text{irr}(Y)$ with $\text{gr}(\mathfrak{a})\equiv 1$ we define
$\delta(\mathfrak{a})=\#\check{M}(\mathfrak{a},\mathfrak{t})_0$, and for $\mathfrak{b}$
with $\text{gr}(\mathfrak{b})\equiv -4$,
we define $\langle\delta'(1),\mathfrak{b}\rangle=\#\check{M}(\mathfrak{t},\mathfrak{b})_0$.
More precisely, one writes $F=F\Lambda(\mathfrak{t})$ and
$\epsilon[A]:\Lambda(\mathfrak{t})\to\Lambda(\mathfrak{a})$, and $\delta=\sum\epsilon[A]$ for each
$[A]\in\check{M}(\mathfrak{a},\mathfrak{t})_0$, and so on, as in \S \ref{sec:instantongroups}.
We will often conflate $\delta'$ with $\delta'(1)\in\text{C}_4$. These are chain maps,
in the sense that $\delta\partial=\partial\delta'=0$, and we write
\[
\boldsymbol{\delta}:I_1\to F, \qquad \boldsymbol{\delta}':F \to I_4
\]
for the induced maps on homology.
We also have maps that record data from the 3-dimensional
moduli spaces $\check{M}(\mathfrak{a},\mathfrak{b})_3$,
\[
v:\text{C}_i\to\text{C}_{i+4}.
\]
Our $v$ is $1/2$ times the $v$ of Fr{\o}yshov, and $4$ times the $U$ of Donaldson.
That is, it is defined, roughly, by evaluating the 4-dimensional class $2\mu(\text{pt})$
over 4-dimensional moduli spaces $M(\mathfrak{a},\mathfrak{b})_4$. We refer to \cite[\S 3.1]{froy} and
\cite[\S 7.3.1]{d} for precise definitions of $v$. We have in mind the following interpretation. First suppose $\check{M}(\mathfrak{a},\mathfrak{b})_3$ is connected.
We obtain a map $h:\check{M}(\mathfrak{a},\mathfrak{b})_3\to\text{SO}(3)$ by
evaluating the holonomy of a connection along the path from $(-\infty,y)$ to $(\infty,y)$ on the cylinder
$\mathbb{R}\times Y$. With some modifications, see \cite[\S 7.3.2]{d},
$\langle v(\mathfrak{a}),\mathfrak{b}\rangle = \deg(h)$. If $\check{M}(\mathfrak{a},\mathfrak{b})_3$ has more than one component, the evaluation is done on each component, and then added together.
The map $v$ is not quite a chain map. As explained in \cite[\S 7.3.3]{d},
when $\text{gr}(\mathfrak{a})\equiv 1$ and $\text{gr}(\mathfrak{b})\equiv -4$,
there are ends of $\check{M}(\mathfrak{a},\mathfrak{b})_4$
modelled on $\text{SO}(3)$, i.e. cylinders $\mathbb{R}\times\text{SO}(3)$,
one for each pair of instantons in $\check{M}(\mathfrak{a},\mathfrak{t})_0\times\check{M}(\mathfrak{t},\mathfrak{b})_0$.
Each copy of $\text{SO}(3)$ records the choices for gluing parameters. The holonomy
at a cross-section is captured by the gluing parameter and has degree 1. Accounting for the
other usual ends, modelled on
$\mathbb{R}\times\check{M}(\mathfrak{a},\mathfrak{c})_i\times\check{M}(\mathfrak{c},\mathfrak{b})_j$,
where $i=0$ and $j=3$, or vice versa, one is led to the relation
\begin{equation}
\partial v - v \partial + \delta'\delta = 0, \label{eq:umap}
\end{equation}
see \cite[Thm. 4]{froy} and \cite[Prop. 7.8]{d}. Here $\delta = 0$ in
gradings different from $1\in\mathbb{Z}/8$. In particular we obtain the maps
\begin{align*}
\boldsymbol{v}:I_i\to I_{i+4},\qquad i\neq 0,1 \mod 8\\
\boldsymbol{v}:I_0\to\text{coker}(\boldsymbol{\delta}'), \qquad \boldsymbol{v}:\text{ker}(\boldsymbol{\delta})\to I_5.
\end{align*}
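
For instance, the map $\boldsymbol{v}:\text{ker}(\boldsymbol{\delta})\to I_5$ exists for the following reason, which we spell out using (\ref{eq:umap}) and the fact that $\delta$ vanishes off grading $1$. If $x\in\text{C}_1$ satisfies $\partial x = 0$ and $\delta x = 0$, then
\[
\partial(vx) = v(\partial x) - \delta'\delta(x) = 0,
\]
so $vx$ is a cycle; and if $x=\partial y$ with $y\in\text{C}_2$, then $v(\partial y)=\partial(vy)$ since $\delta y=0$, so $vx$ is a boundary. The other two maps are obtained similarly.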
\vspace{10pt}
\subsection{Reduced Instanton Groups}
Fr{\o}yshov defined a $\mathbb{Z}/8$-graded group $\widehat{I}=\widehat{I}(Y)$ by
cutting down $I(Y)$ using the maps introduced above. Precisely,
\begin{align*}
&\widehat{I}_i = I_i, \quad i \not\equiv 0,1,4,5 \mod 8\\
&\widehat{I}_0 = I_0/\left(\text{$\sum$}\text{im}(\boldsymbol{v}^{2k+1}\boldsymbol{\delta}')\right),
&\widehat{I}_4 = I_4/\left(\text{$\sum$}\text{im}(\boldsymbol{v}^{2k}\boldsymbol{\delta}')\right),\\
&\widehat{I}_1 = \bigcap \text{ker}(\boldsymbol{\delta}\boldsymbol{v}^{2k})\subset I_1,
&\widehat{I}_5 = \bigcap \text{ker}(\boldsymbol{\delta}\boldsymbol{v}^{2k+1})\subset I_5.
\end{align*}
Using these groups Fr{\o}yshov defined his $h$-invariant by
\[
h(Y) = -\frac{1}{2}\left(\chi(I(Y))-\chi(\widehat{I}(Y))\right).
\]
This has several nice properties, among them
\[
h(\overline{Y})=-h(Y),\qquad h(Y\#Y')=h(Y)+h(Y').
\]
It also descends to a homomorphism $h:\Theta_H^3\to\mathbb{Z}$,
where $\Theta_H^3$ is the integral homology cobordism group.
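For instance, $I(S^3)=0$, since $S^3$ admits no irreducible flat connections; thus $\widehat{I}(S^3)=0$ as well, and
\[
h(S^3) = -\frac{1}{2}\left(\chi(I(S^3))-\chi(\widehat{I}(S^3))\right) = 0,
\]
consistent with $S^3$ representing the identity of $\Theta_H^3$.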
Fr{\o}yshov showed that both $I$ and $\widehat{I}$ are 4-periodic
(recall that we are working with $F$-coefficients). By the chain level
relation (\ref{eq:umap}) either $\boldsymbol{\delta}$ or $\boldsymbol{\delta}'$ is
zero. It follows that, over $\mathbb{Q}$,
we can go between $I$ and $\widehat{I}$ using only $h$.
For example, if $h(Y)=0$, then $\widehat{I}=I$, whereas if $h(Y)>0$ then
$\widehat{I}_i=I_i$ for $i\neq 0,4$ and $\text{rk}(\widehat{I}_i)=\text{rk}(I_i)-h(Y)$
for $i=0,4$.
The maps $\boldsymbol{v}$ above induce maps $\widehat{v}:\widehat{I}(Y)_i\to\widehat{I}(Y)_{4+i}$ for
each grading $i\in\mathbb{Z}/8$. As mentioned,
this is half of Fr{\o}yshov's $u$ mentioned in
the introduction:
\[
\widehat{v} = u/2.
\]
We have chosen this normalization to avoid writing certain factors of $2$.
Fr{\o}yshov showed that each $\widehat{v}$ is an isomorphism, and that
$\widehat{v}^2-16$ is nilpotent, i.e.
\[
(\widehat{v}^2-16)^n = 0
\]
for some $n>0$.
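Equivalently, since char$(F)\neq 2$ (so $4\neq -4$ in $F$), the nilpotency says that the minimal polynomial of $\widehat{v}$ divides
\[
(t^2-16)^n = (t-4)^n(t+4)^n,
\]
so over an algebraic closure of $F$ every eigenvalue of $\widehat{v}$ is $\pm 4$.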
If $\mathbb{Y}$ is admissible and $b_1(Y)>0$, there is no trivial connection to work with,
and the maps $v:\text{C}_i\to\text{C}_{i+4}$ are indeed chain maps, inducing maps
$\widehat{v}:I(\mathbb{Y})_i\to I(\mathbb{Y})_{i+4}$
for each grading $i$ (here we arbitrarily fix an absolute grading).
Again, each $\widehat{v}$ is half of Fr{\o}yshov's $u$, is an isomorphism,
and $\widehat{v}^2-16$ is nilpotent. The hat notation in this case is
used only for uniformity.\\
\subsection{Connected Sums}
In this section we recall the connected sum theorem of Fukaya \cite{fukaya}, reviewing the proof
exposited by Donaldson in \cite[\S 7.4]{d}. This problem was also considered
in \cite{li}.
In the following sections we will
adapt the proof to the settings of interest to us. Let $Y_1$ and $Y_2$ be
integral homology 3-spheres. For $i=1,2$ write $\text{C}_{(i)}=\text{C}(Y_i)$ and
$\partial_{(i)}$ for the corresponding differentials, and
$\delta_{(i)},\delta_{(i)}',v_{(i)}$ for the relevant boundary maps. For a graded $F$-module
$A$ define the shifted module $A[n]$ by $A[n]_i = A_{i-n}$.
We define a chain complex
\[
\text{C} = \left( \text{C}_{(1)}\otimes\text{C}_{(2)}\right) \oplus \left( \text{C}_{(1)}[3]\otimes\text{C}_{(2)}\right) \oplus \left(\text{C}_{(1)}\otimes F \right) \oplus \left( F \otimes\text{C}_{(2)}\right)
\]
\[
\partial = \left(
\begin{array}{cccc}
\partial_{(12)} & 0 & 0 & 0\\
v_{(12)} & -\partial_{(12)} &
1\otimes\delta'_{(2)} & \delta_{(1)}'\otimes 1 \\
-1\otimes\delta_{(2)} & 0 & \partial_{(1)}\otimes 1 & 0 \\
\delta_{(1)}\otimes 1 & 0 & 0 & \epsilon \otimes \partial_{(2)}\\
\end{array}\right)
\]
where $\partial_{(12)}=\partial_{(1)}\otimes 1 + \epsilon \otimes \partial_{(2)}$,
$v_{(12)}=v_{(1)}\otimes 1 + 1 \otimes v_{(2)}$, and $\epsilon$ is equal, in grading $k$, to $(-1)^k$ times the identity map on $\text{C}_{(1)}$.
\begin{theorem}[Fukaya] As $\mathbb{Z}/8$-graded $F$-modules, {\em $I(Y_1\#Y_2)\simeq H_\ast(\text{C},\partial)$}.
\end{theorem}
\noindent For example, let $Y$ be the Poincar\'{e} homology 3-sphere $\Sigma(2,3,5)$.
The reader can verify that
\[
I(Y\#Y)\simeq F_1^2\oplus F_5^2, \qquad I(Y\#\overline{Y})=0
\]
using that $\text{C}(Y)=F_1\oplus F_5$ and $\delta,v$ are isomorphisms.
Recall that subscripts indicate gradings.
These examples appear in \cite{fukaya}.
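The first of these computations can be checked by a routine rank count. The following is a small numerical sketch (our own check, not from the text): we take $F=\mathbb{Q}$, label the generators of $\text{C}(Y)$ as `x1`, `x5`, normalize $\delta(x_1)=1$, let $v$ swap $x_1$ and $x_5$, use $\delta'=0$ (forced, since $\text{C}_4=0$), and drop all signs, which do not affect ranks over a field.

```python
import numpy as np

# Z/8-gradings of a basis for the Fukaya complex
# C x C  +  C[3] x C  +  C x F  +  F x C:
grading = [2, 6, 6, 2,   # C x C:    x1*x1, x1*x5, x5*x1, x5*x5
           5, 1, 1, 5,   # C[3] x C: x1*x1, x1*x5, x5*x1, x5*x5
           1, 5,         # C x F:    x1, x5
           1, 5]         # F x C:    x1, x5
n = len(grading)
D = np.zeros((n, n), dtype=int)  # columns = sources; only C x C maps out
# v(12) = v x 1 + 1 x v into C[3] x C (rows 4-7),
# 1 x delta into C x F (rows 8-9), delta x 1 into F x C (rows 10-11):
D[[5, 6, 8, 10], 0] = 1   # d(x1*x1) = x1*x5 + x5*x1 + x1 + x1
D[[4, 7, 11], 1] = 1      # d(x1*x5) = x1*x1 + x5*x5 + x5
D[[4, 7, 9], 2] = 1       # d(x5*x1) = x1*x1 + x5*x5 + x5
D[[5, 6], 3] = 1          # d(x5*x5) = x1*x5 + x5*x1

assert np.count_nonzero(D @ D) == 0  # D^2 = 0

def homology_dim(g):
    # dim H_g = dim ker(D on grading-g columns) - rank(D on grading-(g+1) columns)
    cols = [i for i in range(n) if grading[i] == g % 8]
    cols_above = [i for i in range(n) if grading[i] == (g + 1) % 8]
    rank_out = np.linalg.matrix_rank(D[:, cols]) if cols else 0
    rank_in = np.linalg.matrix_rank(D[:, cols_above]) if cols_above else 0
    return len(cols) - rank_out - rank_in

dims = [int(homology_dim(g)) for g in range(8)]
print(dims)  # [0, 2, 0, 0, 0, 2, 0, 0], i.e. I(Y # Y) = F_1^2 + F_5^2
```

A similar matrix, built instead from the dual maps for $\overline{Y}$, can be used to check the second computation.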
Note that, generally, the $\delta,\delta',v$ maps for $\overline{Y}$ are the duals
of the maps $\delta',\delta,v$ for $Y$, respectively.
\begin{figure}[t]
\includegraphics[scale=.45]{zhs_cobs.pdf}
\caption{The cobordism $W:Y_1\#Y_2\to Y_1\sqcup Y_2$ and its reverse, $X$.}
\label{fig:cobs}
\end{figure}
We now review the proof that appears in \cite[\S 7.4]{d}. We mention at the outset
that to avoid certain factors of $2$ that appear in the composition law (since
we will glue along a disconnected 3-manifold), we enlarge the gauge transformation
group when necessary, as in \cite[\S 5.1]{kmu}.
Let $\text{C}' = \text{C}(Y_1\#Y_2)$ and $\partial'$
be its differential. Let $X:Y_1\sqcup Y_2\to Y_1\#Y_2$ be the cobordism which is $([0,1]\times Y_1)\natural ([0,1]\times Y_2)$, where the boundary sum is taken near $1$, and let $W:Y_1\#Y_2\to Y_1\sqcup Y_2$ be the corresponding cobordism when the boundary sum is taken near $0$. See Figure \ref{fig:cobs}. We define chain maps
\[
m_X:\text{C} \to \text{C}', \quad m_W:\text{C}'\to\text{C}
\]
as follows. The map $m_X$ is given by four components:
\begin{align*}
v_X &:\text{C}_{(1)}\otimes\text{C}_{(2)} \to \text{C}',\\
i_X &:\text{C}_{(1)}[3]\otimes\text{C}_{(2)}\to\text{C}',\\
{\delta'_X}_{(2)} &:\text{C}_{(1)}\otimes F\to\text{C}',\\
{\delta'_X}_{(1)} &:F\otimes\text{C}_{(2)}\to\text{C}'.
\end{align*}
In the following, $\mathfrak{a}\in\mathfrak{C}^\text{irr}(Y_1)$, $\mathfrak{b}\in\mathfrak{C}^\text{irr}(Y_2)$,
and $\mathfrak{c}\in\mathfrak{C}^\text{irr}(Y_1\#Y_2)$.
The map $i_X$ counts 0-dimensional moduli spaces $M(\mathfrak{a},\mathfrak{b},X,\mathfrak{c})_0$.
The map $v_X$ evaluates the holonomy of 3-dimensional
moduli spaces $M(\mathfrak{a},\mathfrak{b},X,\mathfrak{c})_3$
along a curve $\gamma_X$ running from $Y_1$ to $Y_2$ on the incoming end of $X$. The map
${\delta'_X}_{(2)}$ counts 0-dimensional moduli spaces $M(\mathfrak{a},\mathfrak{t},X,\mathfrak{c})_0$
where $\mathfrak{t}$ is a trivial connection class on $Y_2$,
and ${\delta'_X}_{(1)}$ is defined similarly, with $\mathfrak{t}$ on $Y_1$.
Now, $m_X$ is a chain map because of the following relations. First,
\[
i_X\partial_{(12)}=\partial'i_X
\]
is the usual relation for the map involving only irreducibles. Second,
\begin{align}
& i_X v_{(12)} + v_X\partial_{(12)} + {\delta'_X}_{(1)}(\delta_{(1)}\otimes 1) - {\delta'_X}_{(2)}(1\otimes \delta_{(2)}) = \partial' v_X \label{eq:pieces}
\end{align}
records how the holonomy interacts with the ends of a 4-dimensional moduli space $M(\mathfrak{a},\mathfrak{b},X,\mathfrak{c})_4$. This is essentially \cite[Thm. 6]{froy}. See Figure \ref{fig:pieces}.
Third, the relation
\[
i_X(\delta_{(1)}'\otimes 1) + {\delta'_X}_{(1)}(\epsilon\otimes\partial_{(2)}) = \partial'{\delta'_X}_{(1)}
\]
and its analogue with indices swapped, records the ends of a 1-dimensional moduli
space $M(\mathfrak{t},\mathfrak{b},X,\mathfrak{c})_1$ where $\mathfrak{t}$ is the trivial connection class on $Y_1$.
This is a variation of \cite[Lemma 1]{froy}.
\begin{figure}[t]
\includegraphics[scale=.4]{zhs_pieces.pdf}
\caption{A representation of the terms appearing in (\ref{eq:pieces}).
The pieces represent counts of isolated instantons,
unless there is a curve present in the interior,
indicating a contribution from a $v$-map.
All limiting connections are irreducible, except in the last two
diagrams, where trivial limits $\theta$ are present.
The first two diagrams make up $i_X v_{(12)}$ and the second two
make up $v_X \partial_{(12)}$.}
\label{fig:pieces}
\end{figure}
The map $m_W$ is defined similarly, this time with components
\begin{align*}
i_W &:\text{C}'\to\text{C}_{(1)}\otimes\text{C}_{(2)}, \\
v_W &:\text{C}'\to\text{C}_{(1)}[3]\otimes\text{C}_{(2)},\\
{\delta_W}_{(2)} &:\text{C}'\to\text{C}_{(1)}\otimes F,\\
{\delta_W}_{(1)} &:\text{C}'\to F\otimes\text{C}_{(2)}.
\end{align*}
Now, we argue that $m_X$ and $m_W$ are chain homotopy inverse to one another.
First consider $m_X m_W$. We have
\[
m_X m_W = v_X i_W + i_X v_W +{\delta'_X}_{(2)}{\delta_W}_{(2)} +{\delta'_X}_{(1)}{\delta_W}_{(1)}.
\]
We claim that $m_X m_W$
is chain homotopic to the map $m(Z,\gamma):\text{C}'\to\text{C}'$ obtained by evaluating $2\mu(\gamma)$ on the
composite $Z=X\circ W$ where $\gamma=\gamma_X\cup\gamma_{W}$. This
is the same as the map defined by taking the degrees of modified holonomy maps
$M(\mathfrak{a},Z,\mathfrak{d})_3\to\text{SO}(3)$
along $\gamma$, see \cite[\S 5.1.2]{dk}. The chain homotopy is obtained by
stretching the middle copies of $Y_1$ and $Y_2$. The 3-dimensional space $M(\mathfrak{a},Z,\mathfrak{d})_3$
where $\mathfrak{a},\mathfrak{d}$ are irreducible has four components after stretching:
\begin{align*}
&M(\mathfrak{a},W,\mathfrak{b},\mathfrak{c})_0\times M(\mathfrak{b},\mathfrak{c},X,\mathfrak{d})_3\\
&M(\mathfrak{a},W,\mathfrak{b},\mathfrak{c})_3\times M(\mathfrak{b},\mathfrak{c},X,\mathfrak{d})_0\\
&M(\mathfrak{a},W,\mathfrak{b},\mathfrak{t})_0\times\text{SO}(3)\times M(\mathfrak{b},\mathfrak{t},X,\mathfrak{d})_0\\
&M(\mathfrak{a},W,\mathfrak{t},\mathfrak{c})_0\times\text{SO}(3)\times M(\mathfrak{t},\mathfrak{c},X,\mathfrak{d})_0
\end{align*}
As in (\ref{eq:umap}), in the last two cases the holonomy is
captured by the gluing space $\text{SO}(3)$. The four components correspond, in order,
to the four components of $m_X m_W$ above. In this way, the chain homotopy
from $m_X m_W$ to $m(Z,\gamma)$ may be defined as a map using the 1-parameter
metric family that simultaneously stretches along $Y_1,Y_2$.
The next step is to use a surgery property, interesting in its own right,
due to Donaldson.
We state it in a form convenient for our purposes. Let $\mathbb{X}:\mathbb{Y}_1\to\mathbb{Y}_2$ be an $\text{SO}(3)$-bundle
over a cobordism which restricts to admissible bundles over its boundary components. Let
$\gamma$ be a loop in the interior of the base of $\mathbb{X}$. Let $\mathbb{X}_\gamma$ be the bundle obtained
by cutting out a neighborhood $S^1\times D^3\times\text{SO}(3)$ lying over $\gamma$ and gluing back in
a copy of $D^2\times S^2\times\text{SO}(3)$. Denote by $m(\mathbb{X},\gamma):\text{C}(\mathbb{Y}_1)\to\text{C}(\mathbb{Y}_2)$
the map obtained by evaluating $\mu(\gamma)$ on 3-dimensional moduli spaces $M(\mathfrak{a},\mathbb{X},\mathfrak{b})_3$.
\begin{theorem}[see \cite{d} Thm. 7.16]
$m(\mathbb{X},\gamma)$ is chain homotopic to $m(\mathbb{X}_\gamma)$.\label{thm:surgery}
\end{theorem}
\noindent In our situation, observe that the surgered manifold $Z_\gamma$ is the
product $[0,1]\times(Y_1\#Y_2)$.
It follows that $m_X m_W$ is chain homotopic to the identity.
Now consider $m_W m_X$. This has 16 components
\[
i_Wv_X,\quad v_Wi_X,\quad {\delta_W}_{(1)}{\delta'_X}_{(1)},\quad \ldots
\]
It is chain homotopic to a map $f$ that counts similar data on the cobordism $W\circ X$
with metric stretched very long along the internal connected sum portion between $[0,1]\timesY_1$
and $[0,1]\timesY_2$. The map $f$ has components corresponding to the components of $m_W m_X$,
but most of them vanish. For instance, the 7 components of $f$ corresponding to
\[
i_Wi_X, \quad {\delta_W}_{(i)}i_X, \quad i_W{\delta'_X}_{(i)},\quad {\delta_W}_{(i)}{\delta'_X}_{(j)}\quad (i\neq j)
\]
all vanish by index arguments. Each counts instantons $A$ with $\mu(A)=0$ obtained by gluing an instanton $A_1$ on $\mathbb{R}\times Y_1$ to an instanton $A_2$ over $\mathbb{R}\times Y_2$ along a 3-sphere. For $i=1,2$, at least one of the limits on $\mathbb{R}\times Y_i$ is irreducible. Thus both $A_1,A_2$ are irreducible. It follows from $0=\mu(A)=\mu(A_1)+\mu(A_2)+3$ and $\mu(A_i)\geq 0$ that no such $A$ exist. Similarly, the 4 components of $f$ corresponding to
\[
{\delta_W}_{(i)}v_X,\quad v_W{\delta'_X}_{(i)}
\]
are zero. These components require 3-dimensional moduli spaces. However,
with the neck stretched, the relevant 3-dimensional moduli spaces are $M(\mathfrak{a},\mathfrak{b})_0\times\text{SO}(3)\times M(\mathfrak{c},\mathfrak{d})_0$
where one of $\mathfrak{a},\mathfrak{b},\mathfrak{c},\mathfrak{d}$ is the trivial class $\mathfrak{t}$ and $\mathfrak{a},\mathfrak{b}$ are connection classes on $Y_1$ and $\mathfrak{c},\mathfrak{d}$ on $Y_2$. But $M(\mathfrak{a},\mathfrak{t})_0$ is empty for any irreducible $\mathfrak{a}$. Next, the 4 components of $f$ corresponding to
\[
i_W v_X,\quad v_W i_X, \quad {\delta_W}_{(i)}{\delta'_X}_{(i)}
\]
are identity maps. For instance, the first one uses 3-dimensional spaces modelled on
$M(\mathfrak{a},\mathfrak{b})_0\times\text{SO}(3)\times M(\mathfrak{c},\mathfrak{d})_0$ from gluing; the holonomy map $v_X$ captures
the gluing parameter just as in (\ref{eq:umap}), leaving us to count
$M(\mathfrak{a},\mathfrak{b})_0\times M(\mathfrak{c},\mathfrak{d})_0$.
Of course, $M(\mathfrak{a},\mathfrak{b})_0$ is non-empty only when $\mathfrak{a}=\mathfrak{b}$, in which case it consists of a single translation-invariant irreducible flat connection.
Finally, we are left with 1 component of $f$ corresponding to
\[
v_Wv_X
\]
which may be nonzero. However, we know that $f$ is the identity plus this off-diagonal term,
and thus induces an isomorphism on homology. So $m_W m_X$ also induces an isomorphism on homology.
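Explicitly, write $n$ for the component of $f$ corresponding to $v_W v_X$; since $n$ maps $\text{C}_{(1)}\otimes\text{C}_{(2)}$ into $\text{C}_{(1)}[3]\otimes\text{C}_{(2)}$ and vanishes on the other summands, $n^2=0$, and
\[
f = \text{id} + n, \qquad f^{-1} = \text{id} - n ,
\]
so $f$ is in fact invertible at the chain level.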
Because $m_X m_W$ induces the identity on $I(Y_1\#Y_2)$, so does $m_W m_X$.
This completes the proof.\\
\subsection{Connected sum with non-trivial bundles}
In this section we state two variants of the connected sum theorem,
when one or both of $Y_1$ and $Y_2$ are replaced by non-trivial admissible bundles.
We then explain how the proof above adapts to these cases. These are
simpler than the above, having fewer trivial connections to deal with.
We first consider the case where $\mathbb{Y}_1$ is trivial
and $Y_1$ is an integral homology 3-sphere,
but $\mathbb{Y}_2$ is non-trivial and admissible. Let
$\text{C}_{(1)}=\text{C}(\mathbb{Y}_1)$ with maps $\partial_{(1)},\delta,\delta',v_{(1)}$.
Let $\text{C}_{(2)}=\text{C}(\mathbb{Y}_2)$ with maps $\partial_{(2)},v_{(2)}$. Define
\[
\text{C} = \left( \text{C}_{(1)}\otimes\text{C}_{(2)}\right) \oplus \left( \text{C}_{(1)}[3]\otimes\text{C}_{(2)}\right)\oplus \left( F\otimes\text{C}_{(2)}\right)
\]
\[
\partial = \left(
\begin{array}{ccc}
\partial_{(12)} & 0 & 0\\
v_{(12)} & -\partial_{(12)} & \delta'\otimes 1 \\
\delta\otimes 1 & 0 & \epsilon \otimes \partial_{(2)}\\
\end{array}\right)
\]
with notation as before.
\begin{theorem}
Let $\mathbb{Y}_1$ and $\mathbb{Y}_2$ be admissible bundles, with $\mathbb{Y}_1$ trivial and $\mathbb{Y}_2$ non-trivial. As $\mathbb{Z}/8$-graded $F$-modules, {\em $I(\mathbb{Y}_1\#\mathbb{Y}_2)\simeq H_\ast(\text{C},\partial)$}.\label{thm:connect2}
\end{theorem}
\noindent As before, we let $\text{C}'=\text{C}(\mathbb{Y}_1\#\mathbb{Y}_2)$ and let $\partial'$ be its differential.
Let
\[
\mathbb{X}:\mathbb{Y}_1\sqcup\mathbb{Y}_2 \to \mathbb{Y}_1\#\mathbb{Y}_2
\]
be the cobordism bundle obtained
from a boundary sum between $[0,1]\times\mathbb{Y}_1$ and $[0,1]\times\mathbb{Y}_2$ near $1$,
making some inessential choices in gluing the bundles.
Let $\mathbb{W}$ be the cobordism in the reverse direction obtained
from the boundary sum near $0$. We define chain maps
\[
m_\mathbb{X}:\text{C}\to\text{C}', \quad m_\mathbb{W}:\text{C}'\to\text{C}.
\]
The map $m_\mathbb{X}$ is given by three components:
\begin{align*}
v_\mathbb{X} &:\text{C}_{(1)}\otimes\text{C}_{(2)} \to \text{C}',\\
i_\mathbb{X} &:\text{C}_{(1)}[3]\otimes\text{C}_{(2)}\to\text{C}',\\
{\delta'_\mathbb{X}} &:F \otimes\text{C}_{(2)}\to\text{C}'.
\end{align*}
The map $i_\mathbb{X}$ counts instantons in 0-dimensional moduli spaces on $\mathbb{X}$ with all limits irreducible;
the map $v_\mathbb{X}$ evaluates holonomy along a path $\gamma_\mathbb{X}$ from $Y_1$ to $Y_2$ on
3-dimensional moduli spaces with irreducible limits on $\mathbb{X}$; the map $\delta'_\mathbb{X}$ counts 0-dimensional moduli
spaces over $\mathbb{X}$ where the limit over $Y_1$ is trivial. The map $m_\mathbb{X}$ is a chain map because of the following.
First, we have the usual relation for the map involving only irreducibles, $i_\mathbb{X}\partial_{(12)}=\partial'i_\mathbb{X}$. Second,
\begin{align*}
& i_\mathbb{X} v_{(12)} + v_\mathbb{X}\partial_{(12)} + {\delta'_\mathbb{X}}(\delta\otimes 1) = \partial' v_\mathbb{X}.
\end{align*}
These relations are the same as before, except that all terms involving a trivial connection
on $Y_2$ do not arise. In particular, all diagrams in Figure \ref{fig:pieces} are
relevant except the last. Third, we have the relation
\[
i_\mathbb{X}(\delta'\otimes 1) = \partial'{\delta'_\mathbb{X}}
\]
which again is the same as before but with the term involving a trivial connection
on $Y_2$ absent. The map $m_\mathbb{W}$ is
defined similarly to $m_W$, with the component involving the trivial connection on $Y_2$
thrown out.
We proceed as before. The first composite is $m_\mathbb{X} m_\mathbb{W}= v_\mathbb{X} i_\mathbb{W} + i_\mathbb{X} v_\mathbb{W} + {\delta'_\mathbb{X}}{\delta_\mathbb{W}}$, and this is
chain homotopic to $m(\mathbb{Z},\gamma)$ where $\mathbb{Z}=\mathbb{X}\circ\mathbb{W}$ and
$\gamma = \gamma_\mathbb{W}\cup\gamma_\mathbb{X}$ by stretching along $Y_1,Y_2$. The surgery theorem
\ref{thm:surgery} applies, so $m_\mathbb{X} m_\mathbb{W}$ is chain homotopic to $m(\mathbb{Z}_\gamma)$,
which is the identity. The other composite $m_\mathbb{W} m_\mathbb{X}$ now has only 9 components. We stretch the neck
as before, so that $m_\mathbb{W} m_\mathbb{X}$ is chain homotopic to a map $f$;
the terms of $f$ corresponding to the 9 components all vanish except the diagonal ones,
which are the identity, and possibly $v_\mathbb{W} v_\mathbb{X}$. As before, $m_\mathbb{W}$ and $m_\mathbb{X}$
are chain homotopy inverses, and the proof follows through.
Next, we consider the case where both $\mathbb{Y}_1$ and $\mathbb{Y}_2$ are non-trivial.
For $i=1,2$ let
$\text{C}_{(i)}=\text{C}(\mathbb{Y}_i)$ with maps $\partial_{(i)},v_{(i)}$. Define
\[
\text{C} = \left( \text{C}_{(1)}\otimes\text{C}_{(2)}\right) \oplus \left( \text{C}_{(1)}[3]\otimes\text{C}_{(2)}\right)
\]
\[
\partial = \left(
\begin{array}{cc}
\partial_{(12)} & 0 \\
v_{(12)} & -\partial_{(12)} \\
\end{array}\right)
\]
with notation as before.
\begin{theorem}
Let $\mathbb{Y}_1$ and $\mathbb{Y}_2$ be non-trivial admissible bundles. As $\mathbb{Z}/8$-graded $F$-modules, {\em $I(\mathbb{Y}_1\#\mathbb{Y}_2)\simeq H_\ast(\text{C},\partial)$}.\label{thm:connect3}
\end{theorem}
\noindent This is the simplest case of all. Let $\text{C}'=\text{C}(\mathbb{Y}_1\#\mathbb{Y}_2)$ with differential $\partial'$.
Let
\[
\mathbb{X}:\mathbb{Y}_1\sqcup\mathbb{Y}_2 \to \mathbb{Y}_1\#\mathbb{Y}_2
\]
be the cobordism bundle obtained
from a boundary sum between $[0,1]\times\mathbb{Y}_1$ and $[0,1]\times\mathbb{Y}_2$ near $1$,
making some inessential gluing choices. Let $\mathbb{W}$ be the cobordism in the reverse direction obtained
from the boundary sum near $0$. As before, we can define chain maps
$m_\mathbb{X}$ and $m_\mathbb{W}$. Here $m_\mathbb{X}$ is given by two components,
$v_\mathbb{X} :\text{C}_{(1)}\otimes\text{C}_{(2)} \to \text{C}'$ and
$i_\mathbb{X} :\text{C}_{(1)}[3]\otimes\text{C}_{(2)}\to\text{C}'$.
As usual, $i_\mathbb{X}$ counts 0-dimensional moduli spaces on $\mathbb{X}$ with all limits irreducible,
and $v_\mathbb{X}$ takes holonomy along a path $\gamma_\mathbb{X}$ from $Y_1$ to $Y_2$ on
3-dimensional moduli spaces with irreducible limits on $\mathbb{X}$. The relations that
make $m_\mathbb{X}$ a chain map are just $i_\mathbb{X}\partial_{(12)}=\partial'i_\mathbb{X}$ and $i_\mathbb{X} v_{(12)}
+ v_\mathbb{X}\partial_{(12)} = \partial' v_\mathbb{X}$, and follow from the previous cases,
with the terms involving trivial connections thrown out. This latter relation
is represented by Figure \ref{fig:pieces} with the last two diagrams omitted.
The rest of the argument is the same
as before.\\
\subsection{Framed homology for integral homology 3-spheres}\label{sec:inthomproof}
Now we apply the above results to compute $I^\#(Y)$ with $F$-coefficients
for an integral homology 3-sphere $Y$, proving Theorem \ref{thm:integerhom}.
Recall that $F$ is a field with char$(F)\neq 2$. Let
$\mathbb{T}^3$ be a non-trivial bundle over $T^3$ geometrically represented
by an $S^1$-factor of $T^3$. Let $V=F_0\oplus F_4$
be the chain complex that computes $I(\mathbb{T}^3)$.
Write $\tau:V\to V$ for the $v$-map on $\mathbb{T}^3$, which with our normalization
is the degree 4 involution that multiplies by 4. Write $\text{C}=\text{C}(Y)$
and $\partial,v,\delta,\delta'$ for its relevant maps. Now let
\[
\textbf{C} = \left(\text{C}\otimes V \right)\oplus \left(\text{C}[3]\otimes V \right)\oplus \left(F\otimes V \right)
\]
\[
\boldsymbol{\partial} = \left(
\begin{array}{ccc}
\partial\otimes 1 & 0 & 0\\
v\otimes 1 + 1\otimes\tau& -\partial\otimes 1 & \delta'\otimes 1 \\
\delta\otimes 1 & 0 & 0\\
\end{array}\right)
\]
Theorem \ref{thm:connect2} tells us that
\[
H_\ast(\textbf{C},\boldsymbol{\partial})\simeq I(\mathbb{Y}\#\mathbb{T}^3)=I^\#(Y)[4]\oplus I^\#(Y)
\]
where $\mathbb{Y}$ is the trivial bundle over $Y$.
Consider the filtration on $(\textbf{C},\boldsymbol{\partial})$ given by
\[
0 \subset \text{C}[3]\otimes V \subset \left(\text{C}[3]\otimes V\right)\oplus \left(F \otimes V\right)\subset \textbf{C}.
\]
This induces a spectral sequence with $E^2$-page
\[
\left(\text{ker}(\boldsymbol{\delta})\otimes V\right)
\oplus \left(\text{ker}(\boldsymbol{\delta}')/\text{im}(\boldsymbol{\delta})\otimes V\right)
\oplus \left(\text{coker}(\boldsymbol{\delta}')[3]\otimes V\right)
\]
with the only non-zero component of the differential coming from
\[
\phi:=\boldsymbol{v}\otimes 1 + 1\otimes\tau:\text{ker}(\boldsymbol{\delta})\otimes V\to\text{coker}(\boldsymbol{\delta}')[3]\otimes V.
\]
We are writing all modules as $\mathbb{Z}/8$-graded modules; for example,
$\text{ker}(\boldsymbol{\delta})_i=I(Y)_i$ when $i\neq 1$, and similarly $\text{coker}(\boldsymbol{\delta}')_i=I(Y)_i$ when $i\neq 4$. Also note that the component $\text{ker}(\boldsymbol{\delta}')/\text{im}(\boldsymbol{\delta})$ is
supported in grading $0$ and is either $F$ or $0$. Write
\[
\phi_i = (\boldsymbol{v}\oplus\boldsymbol{v}+\sigma)_i:\text{ker}(\boldsymbol{\delta})_i\oplus\text{ker}(\boldsymbol{\delta})_{i+4}\to\text{coker}(\boldsymbol{\delta}')_{i+4}\oplus\text{coker}(\boldsymbol{\delta}')_{i}
\]
where $\sigma(x,y)=(4y,4x)$ is the degree 4 involution induced by $\tau$. Then
\begin{align*}
&I^\#(Y)_0 \simeq \text{ker}(\phi_0)\oplus\text{coker}(\phi_1)\oplus\text{ker}(\boldsymbol{\delta}')/\text{im}(\boldsymbol{\delta}),\\
&I^\#(Y)_i \simeq \text{ker}(\phi_i)\oplus\text{coker}(\phi_{i+1}), \quad i=1,2,3.
\end{align*}
Recall that $\widehat{v}$ is a degree 4 automorphism of $\widehat{I}(Y)$. We claim that
\begin{align*}
&\text{ker}(\phi_0)\simeq\text{ker}(\widehat{v}^2-16)_0\oplus\text{im}(\boldsymbol{\delta}')_4,\\
&\text{ker}(\phi_i)\simeq\text{ker}(\widehat{v}^2-16)_i, \quad i=1,2,3.
\end{align*}
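To see where the factor of $16$ comes from, suppose for the moment that $\boldsymbol{\delta}'=0$ (as holds in the case treated below). Then $(x,y)\in\text{ker}(\phi_i)$ if and only if
\[
\boldsymbol{v}x+4y=0, \qquad \boldsymbol{v}y+4x=0,
\]
and eliminating $y=-\boldsymbol{v}x/4$ from the second equation gives $\boldsymbol{v}^2x=16x$.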
To prove Theorem \ref{thm:integerhom} it
suffices to consider the case in which $h(Y)\leq 0$, so that $\boldsymbol{\delta}'=0$ and
$\widehat{I}_i=I_i$ for $i\neq 1,5$. For if $h(Y)>0$, then the theorem applies for $\overline{Y}$,
which has $h(\overline{Y})=-h(Y)<0$, and the $F[\widehat{v}]$-module $\widehat{I}$ dualizes upon
orientation reversal. Thus for $i=0,2,3$ we have
\[
\phi_i:I_i\oplus I_{i+4}\to I_{i+4}\oplus I_i
\]
and each $\widehat{I}_i=I_i$ with $\boldsymbol{v}=\widehat{v}$ an isomorphism. The isomorphisms
$\text{ker}(\phi_i)\simeq\text{ker}(\widehat{v}^2-16)_i$ for $i=0,2,3$ are given by $(x,y)\mapsto x$ with inverse
$x\mapsto (x,-\widehat{v}x/4)$. Next, consider
\[
\phi_1:\text{ker}(\boldsymbol{\delta})_1\oplus I_5\to I_1\oplus I_5.
\]
We have an isomorphism $\text{ker}(\phi_1)\simeq\text{ker}(\boldsymbol{v}^2-16)_1\subset\text{ker}(\boldsymbol{\delta})_1$ given by $(x,y)\mapsto x$ with inverse $x\mapsto (x,-4\boldsymbol{v}^{-1}x)$,
using the isomorphism $\boldsymbol{v}:I_5\to I_1$. The natural inclusion $\widehat{I}_1\to I_1$
induces an injection $\text{ker}(\widehat{v}^2-16)_1\to\text{ker}(\boldsymbol{v}^2-16)_1$. Note here that $\widehat{v}$ is just the restriction of $\boldsymbol{v}$ to $\widehat{I}$. We show
that this is surjective and hence an isomorphism. Given $x\in I_1$ with $\boldsymbol{\delta}x=0$
and $\boldsymbol{v}^2x=16x$ we must show $x\in\text{ker}(\boldsymbol{\delta}\boldsymbol{v}^{2k})$
for all $k>0$. But $\boldsymbol{v}^2x=16x$ implies $\boldsymbol{\delta}\boldsymbol{v}^{2k}x=16^k\boldsymbol{\delta}x=0$. Having computed $\text{ker}(\phi_i)$, dimension counting then yields
\begin{align*}
&\text{coker}(\phi_1)\simeq\text{ker}(\widehat{v}^2-16)_1\oplus\text{im}(\boldsymbol{\delta}),\\
&\text{coker}(\phi_i)\simeq\text{ker}(\widehat{v}^2-16)_i, \quad i=0,2,3.
\end{align*}
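In more detail, this is rank--nullity: with $\boldsymbol{\delta}'=0$,
\[
\dim\text{coker}(\phi_i)-\dim\text{ker}(\phi_i)=\dim(\text{codomain of }\phi_i)-\dim(\text{domain of }\phi_i),
\]
which vanishes for $i=0,2,3$ and equals $\dim I_1-\dim\text{ker}(\boldsymbol{\delta})_1=\dim\text{im}(\boldsymbol{\delta})$ for $i=1$.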
Using that, in our case, $\dim(\text{im}(\boldsymbol{\delta}))+\dim(\text{ker}(\boldsymbol{\delta}')/\text{im}(\boldsymbol{\delta}))=1$,
we deduce that
\[
I^\#(Y) \simeq \text{ker}(\widehat{v}^2-16)\otimes H_\ast(S^3)\oplus H_\ast(\text{pt.})
\]
where it is understood that $\widehat{v}^2-16$ is acting on $\bigoplus_{i=0}^3\widehat{I}_i$. This
proves the first part of Thm. \ref{thm:integerhom}.\\
\subsection{Framed homology for non-trivial bundles}
Let $\mathbb{Y}$ be a non-trivial admissible bundle over $Y$
geometrically represented by $\lambda \subset Y$.
We now write $I^\#(Y;\lambda)$ in terms of $I(\mathbb{Y})$.
Let $V=F_0\oplus F_4$
and $\tau: V\to V$ be as before. Write $\text{C}=\text{C}(\mathbb{Y})$
and $\partial,v$ for its maps, and set
\[
\textbf{C} = \left(\text{C}\otimes V \right)\oplus \left(\text{C}[3]\otimes V \right)
\]
\[
\boldsymbol{\partial} = \left(
\begin{array}{cc}
\partial\otimes 1 & 0\\
v\otimes 1 + 1\otimes\tau& -\partial\otimes 1
\end{array}\right)
\]
Theorem \ref{thm:connect3} tells us that
\[
H_\ast(\textbf{C},\boldsymbol{\partial}) \simeq I(\mathbb{Y}\#\mathbb{T}^3)=I^\#(Y;\lambda)[4]\oplus I^\#(Y;\lambda).
\]
This is a degeneration of the computation in \S \ref{sec:inthomproof}. We want the kernel and cokernel of
\[
\widehat{v}\oplus \widehat{v} + \sigma: I[4]\oplus I\to I\oplus I[4].
\]
The kernel is isomorphic to $\text{ker}(\widehat{v}^2-16)$ by the assignment $(x,y)\mapsto x$, inverse to $x\mapsto (x,-\widehat{v}x/4)$.
The cokernel is of course the same, and we obtain
\begin{equation}
I^\#(Y;\lambda) \simeq \text{ker}(\widehat{v}^2-16)\otimes H_\ast(S^3)\label{eq:nontriv}
\end{equation}
as relatively $\mathbb{Z}/4$-graded $F$-modules, where it is understood
that $\widehat{v}^2-16$ is acting on four consecutively
graded summands of $I(\mathbb{Y})$. This proves the second part of Thm. \ref{thm:integerhom}.\\
\subsection{Degenerations}
In this section we briefly consider a few cases in which the
isomorphisms obtained degenerate into stricter relations between
framed instanton homology and Floer's instanton homology,
and in particular, we prove Corollaries \ref{cor:3}, \ref{cor:4} and \ref{cor:5}.
First, suppose $\mathbb{Y}$ is a non-trivial admissible bundle over $Y$
geometrically represented by $\lambda$. As mentioned in \cite[\S 6]{froy},
when there exists a surface $\Sigma\subset Y$ of genus $\leq 2$ with
$\mathbb{Y}|_\Sigma$ non-trivial, then $u^2=64$ on $I(\mathbb{Y})$. In this case
(\ref{eq:nontriv}) yields
\[
I^\#(Y;\lambda)\otimes H_\ast(S^4) \simeq I(\mathbb{Y})\otimes H_\ast(S^3)
\]
as relatively $\mathbb{Z}/4$-graded $F$-modules. The term $H_\ast(S^4)$ appears because we take the full $\mathbb{Z}/8$-graded group $I(\mathbb{Y})$ on the right, instead of four consecutive summands as before. Now suppose
$K$ is a knot in $S^3$ of genus $\leq 2$. Denote the result of $r$-surgery
on $K$ by $Y_r$. For $r=1$, the exact triangle, combined with passing to the reduced groups $\widehat{I}$, yields a map
\begin{equation}
I(\mathbb{Y}_0)\to \widehat{I}(Y_{1})\label{eq:0to1}
\end{equation}
where $\mathbb{Y}_0$ is a non-trivial bundle over $Y_0$. This map is an \textit{injection}. This follows from Fr{\o}yshov's observation after Thm. 10 in \cite{froy} that, in this situation, when passing to the reduced groups $\widehat{I}$, the surgery triangle retains exactness at the homology 3-spheres (but not at $I(\mathbb{Y}_0)$). If $r=-1$, we similarly obtain a \textit{surjection} $\widehat{I}(Y_{-1})\to I(\mathbb{Y}_0)$. In either case, we
can form a surface $\Sigma$ in $Y_0$ by capping off a Seifert surface for $K$ of genus $\leq 2$ by a meridional
disk for the new framed knot in $Y_0$. The bundle $\mathbb{Y}_0|_{\Sigma}$ is non-trivial, and so $u^2=64$ on $I(\mathbb{Y}_0)$. Thus $u^2=64$ on $\widehat{I}(Y_{\pm1})$. With Theorem \ref{thm:integerhom}, this implies Corollary \ref{cor:3}.
Now we consider the proofs of Corollaries \ref{cor:4} and \ref{cor:5}. By the remarks in the introduction of \cite{froy}, we have
\[
h(\Sigma(2,3,6k+1))=0, \quad h(\Sigma(2,3,6k-1))>0.
\]
Fintushel and Stern \cite{fs} compute
\[
I(\Sigma(2,3,6k+1)) = F_1^{\lfloor k/2 \rfloor} \oplus F_3^{\lceil k/2 \rceil}\oplus F_5^{\lfloor k/2 \rfloor} \oplus F_7^{\lceil k/2 \rceil},
\]
from which Corollary \ref{cor:4} follows, as $\Sigma(2,3,6k+1)$ is $+1$-surgery on a twist knot with $k$ full twists, a knot of genus 1. On the other hand, $\Sigma(2,3,6k-1)$ is $-1$-surgery on a twist knot $K$ with $2k-1$ half twists. Since $K$ is also genus $1$, the inequality of Corollary 1 of \cite{froyh} yields $h(\Sigma(2,3,6k-1))=1$. Combined with Fintushel and Stern's computation from \cite{fs}, we obtain
\[
\widehat{I}(\Sigma(2,3,6k-1))=F_1^{\lceil k/2 \rceil -1}\oplus F_3^{\lfloor k/2 \rfloor}\oplus F_5^{\lceil k/2 \rceil -1}\oplus F_7^{\lfloor k/2 \rfloor}.
\]
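Evaluating this formula in the first two cases: $k=1$ gives $\widehat{I}(\Sigma(2,3,5))=0$, while $k=2$ gives $\widehat{I}(\Sigma(2,3,11))=F_3\oplus F_7$.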
Now Corollary \ref{cor:5} follows from Corollary \ref{cor:3}.\\
\section{Instanton Homology}\label{sec:instantons}
In this section we review the relevant aspects of instanton homology
for admissible bundles.
Our main technical references are \cite{d,kmu}. Other useful references
include \cite{f1,f2,bd,froy,s}. In \S \ref{sec:index}
we define the index $\mu$ that encodes
the expected dimension of instanton moduli spaces. Section \ref{sec:met}
is an adaptation of some results from \cite[\S 3.9]{kmu} regarding
maps obtained from families of metrics on cobordisms.
In \S \ref{sec:indexbounds} we discuss how to use the index to put constraints
on the existence of instantons, and in \S \ref{sec:instgrading} we discuss
the $\mathbb{Z}/2$-grading on $I(\mathbb{Y})$.\\
\subsection{Instanton Groups}\label{sec:instantongroups}
Let $\mathbb{Y}$ be an $\text{SO}(3)$-bundle over a closed, connected, oriented Riemannian
3-manifold $Y$. The group $I(\mathbb{Y})$ is heuristically a Morse homology group
computed using a suitably perturbed Chern-Simons functional $\textbf{cs}_\pi:
\mathscr{C}(\mathbb{Y})\to\mathbb{R}$ modulo a group of gauge transformations:
\[
\textbf{cs}_\pi(a) = -\frac{1}{8\pi^2}\int_{[0,1]\times Y} \text{tr}(F_A^2) + f_\pi(a).
\]
Here $A$ is a connection on $[0,1]\times \mathbb{Y}$ which restricts to a base connection $a_0$ on $\{0\}\times\mathbb{Y}$ and the connection $a$ on $\{1\}\times \mathbb{Y}$, and $f_\pi$ is a small perturbation; see \cite[\S 3.4]{kmu}.
We have written $\mathscr{C}(\mathbb{Y})$ for the space of smooth connections on $\mathbb{Y}$, an
affine space modelled on $\Omega^1(\mathbb{Y}_\text{ad})$, where
$\mathbb{Y}_\text{ad}=\mathbb{Y}\times_\text{ad}\mathfrak{so}(3)$ is the adjoint bundle
of $\mathbb{Y}$.
Let $\mathbb{X}$ be an $\text{SO}(3)$-bundle over an $n$-dimensional manifold $X$.
In our constructions we do not use the full automorphism group
$\mathscr{G}(\mathbb{X})$ of $\mathbb{X}$, but rather,
following the terminology of \cite{froy}, we use the subgroup
$\mathscr{G}_\text{ev}=\mathscr{G}_\text{ev}(\mathbb{X})$ of
\emph{even} gauge transformations. Elements
of $\mathscr{G}_\text{ev}$ are called determinant-1 gauge transformations
in \cite{kmu} and restricted gauge transformations in \cite{bd}. Viewing
gauge transformations as sections of the bundle $\mathbb{X}\times_{\text{Ad}}\text{SO}(3)$,
the even transformations are the ones that lift to sections of
$\mathbb{X}\times_{\text{Ad}}\text{SU}(2)$. There is an exact sequence
\begin{equation}
1 \longrightarrow \mathscr{G}_\text{ev}(\mathbb{X}) \longrightarrow \mathscr{G}(\mathbb{X})
\overset{\eta}{\longrightarrow} H^1(X;\mathbb{F}_2)\longrightarrow 1, \label{eta}
\end{equation}
where $\eta$ measures the obstruction to deforming a gauge transformation over
the 1-skeleton of $X$. For a connection $A$ on $\mathbb{X}$ we write $h^0(A)$
for the dimension of its $\mathscr{G}_\text{ev}$-stabilizer. The possible values
of $h^0(A)$ are $0,1,3$. For us, the only stabilizers that will appear will be
$1,S^1,\text{SO}(3)$.
We call $A$ {\em irreducible} if $h^0(A)=0$, and {\em reducible} otherwise.
We will write $a,b,c,\ldots$ for typical connections on bundles over 3-manifolds and
$\mathfrak{a},\mathfrak{b},\mathfrak{c},\ldots$ for their respective $\mathscr{G}_\text{ev}$-classes.
A typical connection on a bundle over a 4-manifold $X$ is written as $A$,
and simply $[A]$ for its $\mathscr{G}_\text{ev}$-class.
Let $\mathscr{B}(\mathbb{Y})$ denote the quotient $\mathscr{C}(\mathbb{Y})/\mathscr{G}_\text{ev}$.
The functional $\textbf{cs}_\pi$ induces a map
$\textbf{cs}'_\pi:\mathscr{B}(\mathbb{Y})\to \mathbb{R}/\mathbb{Z}$.
The set of critical points of $\textbf{cs}'_\pi$
is denoted $\mathfrak{C}$ or $\mathfrak{C}(\mathbb{Y})$;
when the perturbation $\pi$ is zero this is the set of flat connection classes on $\mathbb{Y}$.
We write $h^1(a)$ for the dimension of the Zariski tangent space of $\mathfrak{a}$ in
$\mathfrak{C}$. Following \cite{d}, when $h^0(a)=h^1(a)=0$,
the connection $a$ is called \textit{acyclic}.
Let $\mathfrak{C}^{\text{irr}}$ denote the subset of irreducibles in $\mathfrak{C}$.
When $\mathbb{Y}$ is admissible and a suitable perturbation is chosen, $\mathfrak{C}^{\text{irr}}$ is a
finite set of acyclic classes,
and it is in fact all of $\mathfrak{C}$ or is missing only the trivial class,
according to whether $b_1(Y)\neq 0$ or not. Assume such a perturbation is chosen.
Fix a base connection $a_0$ on $\mathbb{Y}$. We define the chain group
\[
\text{C}(\mathbb{Y}) = \bigoplus_{\mathfrak{a}\in\mathfrak{C}^\text{irr}} \mathbb{Z}\Lambda(\mathfrak{a})
\]
where $\Lambda(\mathfrak{a})$ is the 2-element set of orientations of the real line $\text{det}(D_{A})$,
where $A$ is a connection on $\mathbb{R}\times\mathbb{Y}$ with $A|_{\mathbb{Y}\times \{t\}}$ equivalent to
$a_0$ for $t \ll 0$ and in the class $\mathfrak{a}$ for $t\gg 0$, and $D_{A}$ is the Fredholm
operator $-d_A^\ast\oplus d_A^+$ defined on suitable Sobolev spaces in \S \ref{sec:index}; see also \cite[\S 3.6]{kmu}. Here $\mathbb{Z}\Lambda(\mathfrak{a})$ means the
infinite cyclic group with generators the elements of $\Lambda(\mathfrak{a})$. We often think
of $\text{C}(\mathbb{Y})$ as generated by $\mathfrak{C}^{\text{irr}}$; when doing this it is
understood that we have chosen distinguished elements from each set $\Lambda(\mathfrak{a})$.
A connection $A$ on an $\text{SO}(3)$-bundle over a Riemannian 4-manifold is an \emph{instanton} or is
\emph{anti-self-dual} (ASD) if its curvature $F_{A}$ satisfies
\[
\star F_{A}=-F_{A}
\]
where $\star$ is the Hodge star.
The \textit{energy} of a connection $A$ is given by $\| F_{A} \|_{L^2}^2=-\int\text{tr}(F_{A}\wedge\star F_{A})$.
Instantons on $\mathbb{X}=\mathbb{R}\times\mathbb{Y}$ may be interpreted as gradient flow-lines
for the Chern-Simons functional. In actuality we consider a perturbed instanton equation
involving $\pi$, and call the solutions instantons as well.
Given acyclic $a,b\in\mathscr{C}(\mathbb{Y})$ we let
$M(a,b)$ be the space of $\mathscr{G}_\text{ev}$-classes of finite-energy
instantons on $\mathbb{X}$
asymptotic at $-\infty$ to $a$ and at $+\infty$ to $b$.
When the perturbation is zero, elements $[A] \in M(a,b)$
are distinguished by the property $\frac{1}{8\pi^2}\|F_{A}\|_{L^2}^2 = \textbf{cs}(b)-\textbf{cs}(a)$.
For a small, generic perturbation $M(a,b)$ is a smooth manifold,
and we write
\[
\mu(a,b)=\dim M(a,b).
\]
Passing to $\mathscr{G}_\text{ev}$-classes, the number $\mu(\mathfrak{a},\mathfrak{b})$ is well-defined modulo $8$,
and equips $\text{C}(\mathbb{Y})$ with a relative $\mathbb{Z}/8$-grading given by
$\text{gr}(\mathfrak{a})-\text{gr}(\mathfrak{b})\equiv \mu(\mathfrak{a},\mathfrak{b})$. The space
$M(a,b)$ has an $\mathbb{R}$-action by translation along the $\mathbb{R}$-factor
of $\mathbb{R}\times\mathbb{Y}$, and we write
\begin{align*}
& \check{M}(a,b) = M(a,b)/\mathbb{R}.
\end{align*}
The data of $\mathfrak{a},\mathfrak{b}$ and
the lift of $\mu(\mathfrak{a},\mathfrak{b})\in\mathbb{Z}/8$ to $d\in\mathbb{Z}$ are sufficient
to describe $M(a,b)$; viewing $[A]\in M(a,b)$ as a path in $\mathscr{B}(\mathbb{Y})$, the index $d$ faithfully records the homotopy class of $[A]$ relative to the endpoints $\mathfrak{a},\mathfrak{b}$.
With this in mind, if $d=\mu(a,b)$, we also write $M(\mathfrak{a},\mathfrak{b})_d$ for the space $M(a,b)$, and similarly $\check{M}(\mathfrak{a},\mathfrak{b})_{d-1}$ for $\check{M}(a,b)$.
Thus $M(\mathfrak{a},\mathfrak{b})_d$ is a $d$-dimensional component of instanton classes
whose limits are in the classes $\mathfrak{a}$ and $\mathfrak{b}$.
Suppose $\mathfrak{a},\mathfrak{b} \in\mathfrak{C}^\text{irr}$ with $\mu(\mathfrak{a},\mathfrak{b})\equiv 1$.
With suitable perturbation, $\check{M}(\mathfrak{a},\mathfrak{b})_0$ is a
finite set, and as explained in \cite[\S 3.6]{kmu}, each of its elements determines
an isomorphism $\Lambda(\mathfrak{a})\to\Lambda(\mathfrak{b})$. Denoting the induced isomorphism
$\mathbb{Z}\Lambda(\mathfrak{a})\to\mathbb{Z}\Lambda(\mathfrak{b})$ corresponding to
$[A]\in \check{M}(\mathfrak{a},\mathfrak{b})_0$ by the symbol $\epsilon[A]$, the differential $\partial$
for $\text{C}(\mathbb{Y})$ is defined in pieces by
\[
\partial|_{\mathbb{Z}\Lambda(\mathfrak{a})\to\mathbb{Z}\Lambda(\mathfrak{b})} =
\sum_{[A]\in\check{M}(\mathfrak{a},\mathfrak{b})_0}\epsilon[A].
\]
If we choose an element from each $\Lambda(\mathfrak{a})$, then we may view $\partial$ as a map
on $\mathfrak{C}^\text{irr}$ and write
$\langle \partial \mathfrak{a},\mathfrak{b}\rangle = \#\check{M}(\mathfrak{a},\mathfrak{b})_0$, where $\#$ indicates a
signed count. The differential lowers the relative $\mathbb{Z}/8$-grading by $1$.
The identity $\partial^2=0$ is obtained by interpreting the boundary of
a 1-dimensional moduli space $\check{M}(\mathfrak{a},\mathfrak{b})_1$ as a disjoint union
of broken trajectories $\check{M}(\mathfrak{a},\mathfrak{c})_0\times\check{M}(\mathfrak{c},\mathfrak{b})_0$.
The relatively $\mathbb{Z}/8$-graded abelian group $I(\mathbb{Y})$ is defined to
be $H_\ast(\text{C}(\mathbb{Y}),\partial)$.
In defining the complex $\text{C}(\mathbb{Y})$ we have chosen a Riemannian 3-manifold $Y$, an
admissible $\text{SO}(3)$-bundle $\mathbb{Y}$ over $Y$, a perturbation $\pi$, and a base connection $a_0$ on $\mathbb{Y}$.
When working with the chain group we always assume such data is chosen.
The isomorphism class of the relatively $\mathbb{Z}/8$-graded group $I(\mathbb{Y})$ depends only
on the oriented homeomorphism type of $Y$ and $w_2(\mathbb{Y})$.\\
\subsection{Maps from Cobordisms}\label{sec:cob}
Let $X:Y_1\to Y_2$ be a cobordism from $Y_1$ to $Y_2$. That is, $X$ is a compact,
connected, oriented 4-manifold
with an orientation preserving diffeomorphism $\partial X \simeq Y_2\sqcup \overline{Y}_1$.
As before, each $Y_i$ is connected.
Assume $X$ is equipped with a metric that is product-like near its boundary. Suppose further
that $\mathbb{X}$ is an $\text{SO}(3)$-bundle over $X$ with $\mathbb{X}|_{Y_i} = \mathbb{Y}_i$ where each
$\mathbb{Y}_i$ is admissible. We abbreviate this setup as $\mathbb{X}:\mathbb{Y}_1\to\mathbb{Y}_2$. To obtain a chain map
\[
m(\mathbb{X}):\text{C}(\mathbb{Y}_1)\to \text{C}(\mathbb{Y}_2),
\]
first form the bundle $(\mathbb{R}_{\leq 0}\times \mathbb{Y}_1) \cup \mathbb{X} \cup (\mathbb{R}_{\geq0}\times \mathbb{Y}_2)$
over the Riemannian 4-manifold obtained from $X$ by attaching cylindrical
ends to the boundary. We define $M(a,\mathbb{X},b)$ to be the space of $\mathscr{G}_\text{ev}$-classes of
finite-energy instantons on this bundle. With suitable perturbations chosen, $M(a,\mathbb{X},b)$ is a smooth
manifold, and we write $\mu(a,\mathbb{X},b)=\dim M(a,\mathbb{X},b)$. As before, $\mu(\mathfrak{a},\mathbb{X},\mathfrak{b})$
is well-defined modulo $8$, and we write
$M(\mathfrak{a},\mathbb{X},\mathfrak{b})_d=M(a,\mathbb{X},b)$ for $d=\mu(a,\mathbb{X},b)$.
Now suppose
$\mathfrak{a} \in\mathfrak{C}^\text{irr}(\mathbb{Y}_1)$ and $\mathfrak{b}\in\mathfrak{C}^\text{irr}(\mathbb{Y}_2)$ with
$\mu(\mathfrak{a},\mathbb{X},\mathfrak{b}) \equiv 0$. With suitable perturbations, $M(\mathfrak{a},\mathbb{X},\mathfrak{b})_0$ is
a finite set of points. In defining $\text{C}(\mathbb{Y}_i)$, basepoint connections $a_{i,0}$ are chosen.
Let $A$ be a connection on $\mathbb{X}$ (with cylindrical ends attached) with limits at the ends
equivalent to the $a_{i,0}$. An orientation of the line $\text{det}(D_A)$ will be called
an {\em I-orientation} of $\mathbb{X}$, following
\cite[Def. 3.9]{kmu}. With an I-orientation of $\mathbb{X}$, an element $[A]\in M(\mathfrak{a},\mathbb{X},\mathfrak{b})_0$
determines an isomorphism $\epsilon[A]:\mathbb{Z}\Lambda(\mathfrak{a})\to\mathbb{Z}\Lambda(\mathfrak{b})$,
and $m(\mathbb{X})$ is defined in pieces by
\[
m(\mathbb{X})|_{\mathbb{Z}\Lambda(\mathfrak{a})\to\mathbb{Z}\Lambda(\mathfrak{b})} = \sum_{[A]\in M(\mathfrak{a},\mathbb{X},\mathfrak{b})_0}\epsilon[A].
\]
In shorthand, $\langle m(\mathbb{X})\mathfrak{a},\mathfrak{b} \rangle = \#M(\mathfrak{a},\mathbb{X},\mathfrak{b})_0$.
When $\mu(\mathfrak{a},\mathbb{X},\mathfrak{b})\not\equiv 0$ this
part of the differential is zero. Different choices
of I-orientations only affect the overall sign of the map $m(\mathbb{X})$. The notation we use for
composing bundle cobordisms is given by
\[
\mathbb{X}_2\circ\mathbb{X}_1 = \mathbb{X}_1\cup_{\mathbb{Y}_2}\mathbb{X}_2:\mathbb{Y}_1\to \mathbb{Y}_3
\]
where $\mathbb{X}_i:\mathbb{Y}_i\to\mathbb{Y}_{i+1}$ for $i=1,2$. We write $I(\mathbb{X}_1):I(\mathbb{Y}_1)\to I(\mathbb{Y}_2)$ for the map on homology induced by $m(\mathbb{X}_1)$.
Having assumed $Y_i$ is connected for $i=1,2$, we have the composition law
\[
I(\mathbb{X}_2\circ\mathbb{X}_1) = I(\mathbb{X}_2)\circ I(\mathbb{X}_1).
\]
There is a well-defined
notion of composing I-orientations using (\ref{detiso}) below,
and this is needed to make sense of this expression. For a general discussion of the composition law involving disconnected
3-manifolds see \cite[\S 5.2]{kmu}. We mention that the composition law follows from the homotopy formula (\ref{met1})
below, using a 1-dimensional family of metrics that stretches along $Y_2$.\\
\subsection{Index Formulas}\label{sec:index}
The numbers $\mu(a,b)$ and $\mu(a,\mathbb{X},b)$ above are more properly described as the indices of
certain Fredholm operators. Let $\mathbb{X}:\mathbb{Y}_1\to\mathbb{Y}_2$ as above. The $\mathbb{Y}_i$
are not assumed to be admissible. Let $a$ and $b$ be connections on $\mathbb{Y}_1$ and $\mathbb{Y}_2$,
respectively. Attach cylindrical ends to $\mathbb{X}$ as above and call the result $\mathbb{X}$ as well.
Choose a connection $A$ on $\mathbb{X}$ with $A|_{\mathbb{Y}_1\times \{t\}}$ equal to $a$ for
$t \ll0$ and $A|_{\mathbb{Y}_2\times \{t\}}$ equal
to $b$ for $t \gg 0$, and consider the operator
\[
D_A=-d_A^\ast\oplus d_A^+: L^p_{s,\phi}(\Lambda^1\otimes\mathbb{X}_\text{ad})\to
L^p_{s-1,\phi}((\Lambda^0\oplus\Lambda^+)\otimes\mathbb{X}_\text{ad})
\]
where $L^p_{s,\phi}=\phi L^p_s$ are Sobolev spaces weighted by the real function $\phi$, equal
to $e^{-\epsilon t}$ for some sufficiently small $\epsilon > 0$ on the ends
$\mathbb{R}_{\leq 0}\times Y_1$ and $\mathbb{R}_{\geq 0}\times Y_2$, and equal to $1$ otherwise.
This operator arises from linearizing the instanton equation and using a Coulomb gauge condition.
If $\mathbb{X}':\mathbb{Y}_2\to\mathbb{Y}_3$ and $A'$ is a connection on $\mathbb{X}'$ with limit $b$ over $\mathbb{Y}_2$,
there is a natural isomorphism
\begin{equation}
\text{det}(D_{A})\otimes\text{det}(D_{A'}) \simeq \text{det}(D_{A\cup A'})\label{detiso}
\end{equation}
and the index relation $\text{ind}(D_A)+\text{ind}(D_{A'})=\text{ind}(D_{A\cup A'})$
holds, see for example \cite[Prop. 5.11]{d}. In the definition of $\text{C}(\mathbb{Y})$ in \S \ref{sec:instantongroups} we take $\mathbb{X}=[0,1]\times\mathbb{Y}$ to define $D_A$.
Note that the two ends $\mathbb{Y}_1$ and $\mathbb{Y}_2$ of the cobordism $\mathbb{X}$ have opposite Sobolev weights
in the description of $D_A$. If we instead view $\mathbb{X}:\emptyset\to \mathbb{Y}_2\sqcup \overline{\mathbb{Y}}_1$ then the
construction yields a different operator $D_A'$. That is,
$D_A'$ differs from $D_A$ by using the weight function $\phi'$ in place of $\phi$,
where $\phi'$ is obtained by altering $\phi$ over $\mathbb{R}_{\leq 0}\times Y_1$
from $e^{-\epsilon t}$ to $e^{+\epsilon t}$. We have the relation
\[
\text{ind}(D_A')-\text{ind}(D_A) = h^0(a) + h^1(a),
\]
cf. \cite[Prop. 3.10]{d}. When there is one cylindrical end,
the number $\text{ind}(D_A')$ is the same as
$\text{ind}^-(A)$ in the notation of \cite{bd} and $\text{ind}^+(A)$ in the notation of \cite{d}.
The index $\text{ind}(D_A')$ is the expected dimension of the moduli space $M(a,\mathbb{X},b)^\text{irr}$
of irreducible instanton classes. It is this number that we refer to in computations, so we define
\[
\mu(a,\mathbb{X},b) =\mu(A) = \text{ind}(D_A'), \label{index}
\]
and this agrees with our earlier usage of $\mu(a,\mathbb{X},b)$. Note that the order of the symbols
$a,\mathbb{X},b$ does not matter, and is only suggestive of the situation in mind.
If $\mathbb{X}_1$ and $\mathbb{X}_2$ are bundles over
cobordisms and are composable, we have the gluing formula
\begin{equation}
\mu(a,\mathbb{X}_2\circ\mathbb{X}_1,c) = \mu(a,\mathbb{X}_1,b) + \mu(b,\mathbb{X}_2,c) + h^0(b) + h^1(b).\label{glue}
\end{equation}
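When $b$ is acyclic we have $h^0(b)=h^1(b)=0$, and (\ref{glue}) reduces to additivity of the index:
\[
\mu(a,\mathbb{X}_2\circ\mathbb{X}_1,c) = \mu(a,\mathbb{X}_1,b) + \mu(b,\mathbb{X}_2,c).
\]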
If $\mathbb{X}$ is over a closed 4-manifold $X$ then we also have
\begin{equation}
\mu(\mathbb{X}) = -2p_1(\mathbb{X}) -3(1-b_1(X)+b_+(X)).\label{closed}
\end{equation}
Here $b_+(X)$ is the dimension of a maximal positive definite subspace for the intersection form on
$H_2(X;\mathbb{R})$. The term $1-b_1(X)+b_+(X)$ may also be written as $(\chi(X)+\sigma(X))/2$,
where $\chi$ is the Euler characteristic and $\sigma$ the signature.\\
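As a check on (\ref{closed}), consider the $\text{SO}(3)$-bundle over $S^4$ with $p_1=-4$, the adjoint bundle of the charge-1 $\text{SU}(2)$-bundle. Since $b_1(S^4)=b_+(S^4)=0$,
\[
\mu(\mathbb{X}) = -2(-4)-3(1-0+0) = 5,
\]
recovering the dimension of the classical moduli space of charge-1 instantons on $S^4$.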
\subsection{Maps from Families of Metrics on Cobordisms}\label{sec:met}
This section extracts formulae due to Kronheimer and Mrowka from \cite[\S 3.9]{kmu}.
We first consider families of metrics in a general context. Let $X$ be any smooth manifold and
$S$ a hypersurface in the interior of $X$. We assume $S$ has a neighborhood
$N\subset X$ diffeomorphic to $(-1,1)\times S$. A \emph{metric on}
$X$ \emph{cut along} $S$ is a Riemannian metric $g$ on $X\setminus S$ that on the
neighborhood $N$ is of the form
\[
dr^2/r^2 + g_0,
\]
where $g_0$ is a metric
on $S$ and $r$ is the parameter of $(-1,1)$. We also call $g$ simply a \emph{cut metric}.
We may regard a Riemannian manifold with a cut metric as one with
two opposing cylindrical ends that along the cut hypersurface $S$ meet only at infinity.
Given a collection of hypersurfaces $\mathcal{H}=\{S_i\}$ in the interior of $X$ with
similar neighborhoods we construct a set of metrics $G=G(\mathcal{H})$ on $X$ that are
cut along various subsets of $\mathcal{H}$. The construction is intuitively
simple: stretch an initial metric in all possible ways along each hypersurface.
First, suppose that $\mathcal{H}$ has no intersecting hypersurfaces. We will parameterize
the family $G$ by $[0,1]^d$ where $d=|\mathcal{H}|$. Let $b_t$ be a family of positive
smooth functions on $[-1,1]$ parameterized smoothly by $t\in[0,1)$ such that $b_t(r)$
approaches $1/r^2$ as $t$ goes to $1$. For some fixed $\varepsilon$, $0 < \varepsilon < 1$, we require that $b_t(r)=1$ for $|r|>\varepsilon$. We also require $b_t \neq b_{s}$ when $t\neq s$. We choose the
initial metric $G(0)$ on $X$ so that it is of the form $dr^2 + g_i$ in the neighborhood
of $S_i\subset X$ diffeomorphic to $(-1,1)\times S_i$. Here $S_i\in\mathcal{H}$ and
$g_i$ is a metric on $S_i$. For $t\in[0,1]^{d}$ we define $G(t)$ on $X$ by changing
$G(0)$ in the neighborhood of $S_i$ to $b_{t_i}(r)dr^2+g_i$.
Now consider an arbitrary set of hypersurfaces $\mathcal{H}$. Let $\mathcal{H}_0$ be
a subset of $\mathcal{H}$ with no intersecting hypersurfaces. We have constructed a family
$G(\mathcal{H}_0)$ for each such $\mathcal{H}_0$. We glue the hypercubes $[0,1]^{d_0}$ where
$d_0=|\mathcal{H}_0|$ together to form a space in the obvious way: when two points
correspond to the same metric, identify them. This defines the family $G(\mathcal{H})$.
Now suppose $\mathbb{X}:\mathbb{Y}_1\to \mathbb{Y}_2$ as in \S \ref{sec:cob}. Let $G=G(\mathcal{H})$ be
a family of metrics on $X$ constructed as above. We extend $G$ to a family of metrics on
$X$ with cylindrical ends attached,
product-like on the ends, which we also call $G$. Let $M_G(a,\mathbb{X},b)$ be the moduli
space of pairs $([A],g)$ where $g\in G$ and $A$ is a finite-energy instanton with respect to $g$.
The meaning of this is straightforward if $g$ is an uncut, smooth metric. An instanton with a
metric cut along $S\subset X$ is an instanton on the complement of $S$, with
its limits on the two cylindrical ends $[0,\infty)\times S$ agreeing.
More details can be found in \cite[\S 3.9]{kmu}.
Let $G=G(\mathcal{H})$ be a family of metrics on $X$ as constructed above.
In the cases in which we are interested, $G$ will
have the structure of a convex polytope. The metrics parameterized by a face of $G$
consist of cut metrics, cut along a hypersurface in
$\mathcal{H}$.
The expected dimension of $M_G(a,\mathbb{X},b)$ is
$\mu(a,\mathbb{X},b) + \dim G$. A map
\[
m_G(\mathbb{X}):\text{C}(\mathbb{Y}_1)\to \text{C}(\mathbb{Y}_2)
\]
is defined just as for cobordisms. To fix the sign of $m_G(\mathbb{X})$,
in addition to an I-orientation of $\mathbb{X}$, we must orient the metric family $G$.
The following three formulae are due to Kronheimer and Mrowka, \cite[\S 3.9]{kmu}, and
arise from understanding the compactification and gluing of certain moduli spaces. First,
\begin{equation}
(-1)^{\dim G}m_G(\mathbb{X})\partial - \partial m_G(\mathbb{X}) =m_{\partial G}(\mathbb{X}). \label{met1}
\end{equation}
In writing this we have inherited the orientation conventions of \cite{kmu},
with the exception that the quotients $\check{M}(a,b)$ are oriented oppositely,
changing the signs of the maps $\partial$.
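Two degenerate cases of (\ref{met1}) are worth noting: if $G$ is a single metric then $\partial G=\emptyset$ and (\ref{met1}) asserts that $m_G(\mathbb{X})$ is a chain map, while if $G$ is an interval then (\ref{met1}) exhibits, up to sign, a chain homotopy between the maps determined by the two ends of the family.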
For the polytopes $G$ that we will consider, $\partial G$ decomposes
into a union of faces $G(S)$, one for each hypersurface $S\in\mathcal{H}$. In
this case
\begin{equation}
m_{\partial G}(\mathbb{X})=\sum_{S\in\mathcal{H}} m_{G(S)}(\mathbb{X}). \label{met2}
\end{equation}
Finally, suppose $\mathbb{X}$ is the composite of two bundle cobordisms: $\mathbb{X} = \mathbb{X}_2\circ\mathbb{X}_1$.
Also suppose that $G=G_1\times G_2$ where $G_1$ is a family of metrics that only varies on
$X_1$ and $G_2$ on $X_2$, and all metrics are cut along $X_1\cap X_2$. Then
\begin{equation}
m_G(\mathbb{X}) =(-1)^{\dim G_1\dim G_2}m_{G_2}(\mathbb{X}_2)m_{G_1}(\mathbb{X}_1) \label{met3}
\end{equation}
where we interpret $G_1$ as a family of metrics on $X_1$ and $G_2$ as a family on $X_2$. Here
the metric families are oriented, and $G=G_1\times G_2$ is an orientation preserving identification.\\
\subsection{Index Bounds} \label{sec:indexbounds}
The following discussion is based on \cite[\S 3.4]{bd} and \cite[\S 4]{d}, with the
material of \cite[\S 3.9]{kmu} in mind.
So far we have only mentioned moduli spaces for which the limiting connections are
acyclic. This guarantees, in particular, that all instantons are irreducible.
For simplicity, suppose $\mathbb{X}$ has one cylindrical end.
We consider moduli spaces $M(\mathbb{X},a)$ where $a$ is any almost flat connection
(i.e., an element of $\mathfrak{C}$),
where the finite-energy instantons exponentially approach $a$ over the cylindrical end. Then,
with suitable perturbation,
the subset of irreducibles $M(\mathbb{X},a)^\text{irr}$ is a smooth manifold of
dimension $\mu(\mathbb{X},a)$.
In this case, the existence of $[A]\in M(\mathbb{X},a)^\text{irr}$
implies $\mu(\mathbb{X},a)=\mu(A)\geq 0$.
On the other hand, if all the instantons are reducible with common isotropy group $\Gamma$,
the space $M(\mathbb{X},a)$ has dimension $\mu(\mathbb{X},a)+ \dim \Gamma$. Recall
$h^0(A)=\dim\Gamma$.
In this case, after perturbation,
the existence of an instanton $[A]$ in the moduli space implies the bound
\begin{equation}
\mu(A) + h^0(A)\geq 0. \label{boundred}
\end{equation}
More generally, suppose
$([A],g)\in M_G(\mathbb{X},a)$ for a family of metrics $G$. Then we obtain
\begin{equation}
\mu(A) + h^0(A) + \dim G \geq 0. \label{bound1}
\end{equation}
We also consider the case in which some of the limiting connections are allowed to vary.
Suppose $[0,\infty)\times \mathbb{Y}$ is the cylindrical end of $\mathbb{X}$,
and consider a smooth manifold $\mathfrak{F}\subset \mathfrak{C}(\mathbb{Y})$ of critical points
along which the Chern-Simons functional is non-degenerate in the transverse directions.
We consider $M(\mathbb{X},\mathfrak{F})$,
the instanton classes that exponentially approach the set $\mathfrak{F}$.
The irreducibles within typically form a smooth manifold whose components have dimensions
mod $8$ congruent to $\mu(\mathbb{X},\mathfrak{a})+\dim\mathfrak{F}$, where $\mathfrak{a}\in \mathfrak{F}$.
We write $M(\mathbb{X},\mathfrak{F})_d^{\text{irr}}$ for the $d$-dimensional component.
We can introduce metrics into all of these situations.
The most general situation we consider is the following.
Suppose $\mathfrak{F}$ is as above,
and consider the moduli space $M_G(\mathbb{X},\mathfrak{F})$.
If $([A],g)$ is a member, in the generic case we obtain a bound
\begin{equation}
\mu(A) + h^0(A) + \dim G + \dim\mathfrak{F} \geq 0. \label{bound2}
\end{equation}
We write $M_G(\mathbb{X},\mathfrak{F})^{\circ}_d$
for the $d$-dimensional moduli space of instantons $([A],g)$
with $d$ equal to the left side of (\ref{bound2})
and where $\circ=\text{irr},\text{red},\text{flat}$ describes the respective stabilizer-types
$h^0(A)=0,1,3$. One can drop the assumption that $\mathfrak{F}$ is smooth and obtain moduli spaces that
are stratified according to the structure of $\mathfrak{F}$. Such spaces have been studied
in \cite{t,mmr}.\\
\subsection{Gradings}\label{sec:instgrading}
In addition to the relative $\mathbb{Z}/8$-grading on $I(\mathbb{Y})$,
we can define an absolute $\mathbb{Z}/2$-grading
following \cite[\S 2.1]{froy} and \cite[\S 5.6]{d}. It is more
generally defined on the critical sets $\mathfrak{C}$.
If $\mathfrak{a}\in\mathfrak{C}$, its grading is given by
\[
\text{gr}(\mathfrak{a}) = b_1(E)+b_+(E)+\mu(\mathbb{E},\mathfrak{a}) \mod 2,
\]
where $\mathbb{E}:\emptyset\to\mathbb{Y}$ is an $\text{SO}(3)$-bundle
over a connected 4-manifold $E$ with $\partial E=Y$ that restricts to $\mathbb{Y}$ over $Y$.
The differential of $\text{C}(\mathbb{Y})$ shifts this grading by $1$.
A map $m(\mathbb{X}):\text{C}(\mathbb{Y}_1)\to\text{C}(\mathbb{Y}_2)$ shifts the grading by the parity of
\begin{equation}
\text{deg}(X) = -\frac{3}{2}(\chi(X)+\sigma(X))+\frac{1}{2}(b_1(Y_2)-b_1(Y_1)) ,\label{degofmap}
\end{equation}
cf. \cite[\S 4.5]{kmu}. More generally, a map $m_G(\mathbb{X})$ shifts the grading by
$\text{deg}(X)+\dim G$. As an example, suppose $\mathbb{T}^3$ is the bundle over
$T^3$ with $w_2(\mathbb{T}^3)$ Poincar\'{e} dual to an $S^1$-factor. Then
$I(\mathbb{T}^3)$ is two copies of $\mathbb{Z}$ supported in the even grading.
Note that the trivial connection
$\theta$ on $S^3$ has $\text{gr}(\theta)\equiv 1$.
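As a check on (\ref{degofmap}), consider the product cobordism $X=Y\times[0,1]$, for which $\chi(X)=\sigma(X)=0$ and $Y_1=Y_2=Y$:
\[
\text{deg}(X) = -\tfrac{3}{2}(0+0)+\tfrac{1}{2}\left(b_1(Y)-b_1(Y)\right) = 0,
\]
consistent with the fact that the induced map preserves the grading.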
We note that $I(\overline{\mathbb{Y}})_i$ is the same as
the cohomology group $I(\mathbb{Y})^{b_1(Y)+1+i}$, where $\overline{\mathbb{Y}}$ means the orientation of
the base space $Y$ is reversed.
For our conventions regarding the absolute $\mathbb{Z}/8$-grading in the
case that $Y$ is a homology 3-sphere, see \S \ref{sec:connect}.
\section{Introduction}
Given a closed, connected, oriented 3-manifold $Y$, we study the framed instanton
homology $I^\#(Y)$, an absolutely $\mathbb{Z}/4$-graded abelian group
which is an invariant of the oriented homeomorphism type of $Y$.
This group is obtained by counting, in a suitable sense,
$\text{SO}(3)$-instantons on a bundle over $\mathbb{R}\times(Y\# T^3)$
which is non-trivial when restricted to the 3-tori. These groups are a
special case of Floer's instanton homology for admissible bundles from \cite[\S 1]{f2}
and are considered by Kronheimer and Mrowka in \cite[\S 4.1]{kmki} and \cite[\S 4.3]{kmu}.
The terminology {\em framed} is from \cite{kmki}.
Given an oriented link $L$ in $S^3$, we relate the framed instanton homology
of $\Sigma(L)$, the double cover of $S^3$ branched over $L$,
to the reduced odd Khovanov homology of $L$. This latter invariant,
written $\overline {\text{Kh}'}(L)$, was defined by Ozsv\'ath, Rasmussen and Szab\'o in \cite{ors}.
It is an abelian group bigraded by a quantum grading, $q$, and a homological
grading, $t$. It is a variant of Khovanov homology, defined originally by Khovanov in \cite{kh}.\\
\begin{theorem}\label{thm:1}
Given an oriented link $L$ in $S^3$, there is a spectral sequence whose second page
is $\overline {\text{Kh}'}(L)$ and which converges to $I^\#(\overline{\Sigma(L)})$.
Each page of the spectral sequence comes equipped with a $\mathbb{Z}/4$-grading,
which on $\overline {\text{Kh}'}(L)$ is given by
\begin{equation}
\frac{3}{2}q -t +\frac{1}{2}\left(\sigma+\nu\right) \mod 4,\label{eq:oddkhgr}
\end{equation}
where $\sigma$ and $\nu$ are the signature and nullity of $L$, respectively,
and the induced $\mathbb{Z}/4$-grading on $I^\#(\overline{\Sigma(L)})$ is the usual one.\\
\end{theorem}
\noindent Our convention is that the signature of the right-handed trefoil is $+2$.
The theorem immediately implies the four rank inequalities
\[
\text{rk}_\mathbb{Z}\overline {\text{Kh}'}(L)_j \geq \text{rk}_\mathbb{Z}I^\#(\overline{\Sigma(L)})_j
\]
where $j\in\mathbb{Z}/4$ and the gradings are as in the Theorem. Non-split alternating links, and
more generally quasi-alternating links as introduced in \cite{os}, have the
property that $\overline {\text{Kh}'}(L)$ is supported in the even gradings of (\ref{eq:oddkhgr}) and is a free
abelian group of rank $\text{det}(L)$; see \cite[Thm. 1]{man}, \cite[\S 5]{ors} and the remarks
in \cite[\S 9.3]{ls}. In these cases the spectral sequence collapses.
\begin{corollary}
If $L$ is a quasi-alternating link, then $I^\#(\Sigma(L))$ is free
abelian of rank $\text{det}(L)$ and is supported in even gradings.
The rank in grading $j\in\{0,2\}\subset\mathbb{Z}/4$ is given by
\[
\frac{1}{2}\left[\text{det}(L) + (-1)^{j/2}2^{\# L - 1}\right]
\]
where $\# L$ is the number of components of $L$. \label{cor:2}
\end{corollary}
\noindent
With rational coefficients, only 7 of the 250 prime knots with at most 10 crossings
have potentially non-trivial differentials after the $E^2$-page of Theorem \ref{thm:1}.
This follows from the computations of odd Khovanov homology in \cite{ors}.
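To illustrate Corollary \ref{cor:2}, take $L$ to be the figure-eight knot, an alternating knot with $\det(L)=5$ and $\# L=1$. The formula gives
\[
\text{rk}\,I^\#(\Sigma(L))_0 = \tfrac{1}{2}(5+1)=3, \qquad \text{rk}\,I^\#(\Sigma(L))_2=\tfrac{1}{2}(5-1)=2,
\]
so that $I^\#(\Sigma(L))$ is free abelian of total rank $5=\det(L)$, supported in even gradings.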
To put Theorem \ref{thm:1} into context, we relate framed instanton
homology to previously studied instanton invariants. To start,
framed instanton homology is a special case of
Floer's instanton homology for admissible bundles.
An $\text{SO}(3)$-bundle $\mathbb{Y}$ over a connected 3-manifold $Y$
is {\em admissible} if either $Y$ is a homology 3-sphere,
in which case $\mathbb{Y}$ is trivial, or there is an oriented surface
$\Sigma\subset Y$ with $\mathbb{Y}|_\Sigma$ non-trivial.
This latter condition guarantees that $\mathbb{Y}$ does not
support any reducible flat connections.
A \textit{geometric representative} for $\mathbb{Y}$
is an unoriented, closed
1-manifold $\omega\subset Y$ with $[\omega]\in H_1(Y;\mathbb{F}_2)$
Poincar\'{e} dual to $w_2(\mathbb{Y})$.
The non-trivial admissibility condition
for $\mathbb{Y}$ amounts to the existence of an oriented surface $\Sigma\subset Y$
that intersects $\omega$ in an odd number of points, or,
equivalently, to the conditions that $[\omega]$ is non-zero and lifts to a non-torsion
class in $H_1(Y;\mathbb{Z})$.
For admissible $\mathbb{Y}$, Floer defined in \cite{f1,f2}
a relatively $\mathbb{Z}/8$-graded abelian group $I(\mathbb{Y})$.
When $Y$ is a homology 3-sphere and $\mathbb{Y}$ is trivial,
we write $I(Y)$ for this group,
and it comes equipped with an absolute $\mathbb{Z}/8$-grading.
The isomorphism class of $I(\mathbb{Y})$ depends only on the
oriented homeomorphism type of $Y$ and $w_2(\mathbb{Y})$.
Now let $\mathbb{Y}$ be any $\text{SO}(3)$-bundle over a closed, connected, oriented 3-manifold $Y$
geometrically represented by $\lambda$.
Making some inessential choices, we can construct a bundle $\mathbb{Y}^\#$ over $Y\# T^3$ by gluing
together $\mathbb{Y}$ and a non-trivial bundle over $T^3$. The bundle $\mathbb{Y}^\#$
is always admissible, and the group $I(\mathbb{Y}^\#)$ is always 4-periodic.
The {\em framed instanton homology twisted by $\lambda$},
written $I^\#(Y;\lambda)$,
is relatively $\mathbb{Z}/4$-graded
and isomorphic to four consecutive gradings
of $I(\mathbb{Y}^\#)$. When $\lambda$ is mod 2 null-homologous,
we recover the framed instanton homology $I^\#(Y)$.
When $Y$ is a homology 3-sphere, we relate $I^\#(Y)$ to
Floer's $\mathbb{Z}/8$-graded $I(Y)$.
It is convenient to employ
Fr{\o}yshov's reduced groups $\widehat{I}(Y)$ from \cite{froy},
which are obtained from $I(Y)$ by considering interactions
with the trivial connection.
They come equipped with an absolute $\mathbb{Z}/8$-grading and
a degree 4 endomorphism $u$. Fr{\o}yshov's Theorem
10 of \cite{froy} says that $(u^2-64)^n=0$ for some $n>0$
whenever $2$ is invertible in the coefficient ring.
If $\mathbb{Y}$ is non-trivial and admissible,
the situation is simpler, as there is no trivial connection
to worry about. In this case $u$ is a degree $4$ endomorphism
defined on $I(\mathbb{Y})$, and Fr{\o}yshov's Theorem 9 of \cite{froy} says
$(u^2-64)^n=0$ for some $n>0$.
The proof of the following is essentially
an application of Fukaya's connected sum theorem of \cite{fukaya}.
\begin{theorem}\label{thm:integerhom}
Let $F$ be a field with char$(F)\neq 2$, and
suppose all homology groups are taken with $F$-coefficients,
unless indicated otherwise.
If $H_1(Y;\mathbb{Z})=0$, then
\[
I^\#(Y) \simeq \text{{\em ker}}(u^2-64) \otimes H_\ast(S^3) \oplus H_\ast(\text{pt.})
\]
as $\mathbb{Z}/4$-graded $F$-modules, where $u^2-64$
is acting on $\bigoplus_{j=0}^3\widehat{I}(Y)_j$.
If $\mathbb{Y}$ is non-trivial and admissible
with geometric representative $\lambda$, then
\[
I^\#(Y;\lambda) \simeq \text{{\em ker}}(u^2-64)\otimes H_\ast(S^3)
\]
as relatively $\mathbb{Z}/4$-graded $F$-modules, where
$u^2-64$ is acting on four consecutive gradings of the relatively $\mathbb{Z}/4$-graded $F$-module $I(\mathbb{Y})$.
\end{theorem}
\noindent When $L$ is the $(3,5)$ torus knot, the double cover of $S^3$ branched over $L$
is the Poincar\'{e} homology sphere $\Sigma(2,3,5)$.
By the results in \cite{froy}, Fr{\o}yshov's reduced group
for $\Sigma(2,3,5)$ is trivial. Theorem \ref{thm:integerhom}
implies that $I^\#(\Sigma(2,3,5))$ has rank 1, supported in grading 0.
This provides an example where the
spectral sequence of Theorem \ref{thm:1} does not collapse,
as the reduced odd Khovanov homology of $L$ has rank 3,
as computed in \cite{ors}.
As another application of Theorem \ref{thm:integerhom}, a simple
inductive argument involving the exact triangle,
which we present in \S \ref{sec:euler}, yields
the following.
\begin{corollary} For any $Y$ and $\lambda$, we have $\chi(I^\#(Y;\lambda)) = |H_1(Y;\mathbb{Z})|$.
\label{cor:euler}
\end{corollary}
\noindent For a set $S$, the notation $|S|$ is to be interpreted as the cardinality
of $S$ if it is finite, and $0$ otherwise.
Theorem \ref{thm:integerhom} suggests that
knowledge of the smallest positive integer $n$ such that $(u^2-64)^n=0$
is useful in understanding the
relationships between the various instanton groups. It is known,
cf. \cite[\S 6]{froy}, that if $\mathbb{Y}$ is non-trivial
admissible and restricts non-trivially
to a surface of genus $\leq 2$, then one can take $n=1$.
For the following, let $h:\Theta_\mathbb{Z}^3\to\mathbb{Z}$
be Fr{\o}yshov's homomorphism from \cite{froy}, where $\Theta_\mathbb{Z}^3$ is
the integral homology cobordism group.
\begin{corollary}\label{cor:3}
Let $Y$ be the result of $\pm 1$-surgery on a knot $K\subset S^3$ with genus $\leq 2$.
Let $F$ be a field with char$(F)\neq 2$. Then, with all homology
taken with $F$-coefficients, we have an isomorphism
\[
I^\#(Y) \simeq H_\ast(\text{pt.})\oplus H_\ast(S^3)\otimes \bigoplus_{j=0}^3\widehat{I}(Y)_j
\]
as $\mathbb{Z}/4$-graded $F$-modules. In particular, if in addition $h(Y)=0$, then the groups $\widehat{I}(Y)_j$ on the right can be replaced by $I(Y)_j$.
\end{corollary}
\noindent We apply this result using computations from the literature.
Fr{\o}yshov proves in \cite{froy} that for the family of
manifolds $\Sigma(2,3,6k+1)$ with $k$ a positive integer
we have $h=0$.
Furthermore, $\Sigma(2,3,6k+1)$ can be realized as $1$-surgery on
the twist knot $(2k+2)_1$ with $k$ full twists, which is a knot of genus 1. Using
the computation of $I(\Sigma(2,3,6k+1))$ from \cite{fs} we obtain
\begin{corollary}\label{cor:4} With coefficients in a field $F$ with char$(F)\neq 2$,
we have isomorphisms
\[
I^\#(\Sigma(2,3,6k+1)) \simeq F_0^{\lfloor k/2\rfloor+1}\oplus
F_1^{\lfloor k/2\rfloor}\oplus F_2^{\lceil k/2\rceil}\oplus F_3^{\lceil k/2\rceil}
\]
where $k$ is a positive integer and the subscripts indicate the gradings.
\end{corollary}
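For example, when $k=1$ the formula reduces to
\[
I^\#(\Sigma(2,3,7)) \simeq F_0\oplus F_2\oplus F_3,
\]
of total rank $3$, whose Euler characteristic $1-0+1-1=1$ agrees with Corollary \ref{cor:euler}.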
\noindent We also consider the manifolds $\Sigma(2,3,6k-1)$ with $k$ a positive integer,
which are obtained from $-1$-surgery on twist knots with $2k-1$ half twists.
In this case $h(\Sigma(2,3,6k-1))=1$, and we can again use the computations of \cite{fs} to obtain
\begin{corollary}\label{cor:5} With coefficients in a field $F$ with char$(F)\neq 2$,
we have isomorphisms
\[
I^\#(\Sigma(2,3,6k-1)) \simeq F_0^{\lceil k/2\rceil}\oplus
F_1^{\lceil k/2\rceil-1}\oplus F_2^{\lfloor k/2\rfloor}\oplus F_3^{\lfloor k/2\rfloor}
\]
where $k$ is a positive integer and the subscripts indicate the gradings.
\end{corollary}
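When $k=1$ the formula reduces to
\[
I^\#(\Sigma(2,3,5)) \simeq F_0,
\]
recovering the rank 1 computation for the Poincar\'e homology sphere given after Theorem \ref{thm:integerhom}.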
\noindent As $\Sigma(2,3,6k\pm 1)$ is the branched double cover over the $(3,6k\pm 1)$ torus knot, Corollaries \ref{cor:4} and \ref{cor:5} provide more examples in which the spectral sequence of Theorem \ref{thm:1} does not collapse. In \cite{bloom}, Bloom considers a spectral sequence
in monopole Floer homology with $\mathbb{F}_2$-coefficients
analogous to that of Theorem \ref{thm:1}.
For the $(3,6k\pm 1)$ torus knots, he conjectures that the spectral sequence collapses at the fourth page,
and proposes candidates for the higher differentials.
We note that if his speculation were to hold in our setting (also with $\mathbb{F}_2$-coefficients),
then we would recover, using Bloom's computations and our grading (\ref{eq:oddkhgr}),
the formulae of Corollaries \ref{cor:4} and \ref{cor:5}, but
with $F=\mathbb{F}_2$.\\
\subsection{Background \& Motivation}
In the setting of Heegaard Floer homology, Ozsv\'ath and Szab\'o in \cite{os}
constructed a spectral sequence with $\mathbb{F}_2$-coefficients
from the reduced Khovanov homology of a link $L$ to the Heegaard Floer hat homology
of $\Sigma(L)$, with orientation reversed. This was the first instance of a
structural relation between a Floer homology and a combinatorial link homology.
For this result we write
\[
\overline{\text{Kh}}(L;\mathbb{F}_2)\;\; \leadsto \;\;\widehat{\text{HF}}(\overline{\Sigma(L)};\mathbb{F}_2).
\]
The notation $A\leadsto B$ is an abbreviation for the existence of a spectral
sequence with some starting page $A$, converging to $B$.
An essential ingredient for their construction
was a link surgeries spectral sequence.
See \S \ref{sec:main} for the instanton analogue.
Subsequently, Bloom \cite{bloom}
constructed a link surgeries spectral sequence in the setting
of monopole Floer homology, and from this obtained a spectral sequence
\[
\overline{\text{Kh}}(L;\mathbb{F}_2)\;\; \leadsto \;\;\widetilde{\text{HM}}(\overline{\Sigma(L)};\mathbb{F}_2).
\]
It has since
been shown, after much work, that the monopole Floer group $\widetilde{\text{HM}}(Y)$ is isomorphic to $\widehat{\text{HF}}(Y)$, cf. \cite{klt,hgn,tech}.
Ozsv\'ath and Szab\'o speculated in \cite{os}
that their spectral sequence, if lifted to $\mathbb{Z}$-coefficients,
would not have reduced Khovanov homology as the $E^2$-page,
but would have some other link homology with altered signs in the differentials.
With this in mind, Ozsv\'ath, Rasmussen and Szab\'o \cite{ors} defined an abelian
group $\text{Kh}'(L)$ called the {\em odd Khovanov homology of} $L$. With
$\mathbb{F}_2$-coefficients, it coincides with Khovanov homology; but they
are very different with $\mathbb{Z}$-coefficients.
Odd Khovanov homology is bigraded
by a homological grading, called $t$, and a quantum grading,
called $q$.
There is a splitting
\[
\text{Kh}'(L)_{t,q}\simeq \overline {\text{Kh}'}(L)_{t,q-1}\oplus \overline {\text{Kh}'}(L)_{t,q+1},
\]
where $\overline {\text{Kh}'}(L)$ is called the {\em reduced} odd Khovanov homology.
The bigraded group $\overline {\text{Kh}'}(L)_{t,q}$ categorifies the Jones
polynomial $J_L$ in the sense that
\[
J_{L}(x) = \sum_{t,q} (-1)^t\text{rk}_\mathbb{Z}(\overline {\text{Kh}'}(L)_{t,q})x^q.
\]
Here, $J_\text{unknot}(x)=1$. It was conjectured in \cite{ors}
that there is a spectral sequence
\begin{equation*}
\overline {\text{Kh}'}(L) \leadsto \widehat{\text{HF}}(\overline{\Sigma(L)})
\end{equation*}
with $\mathbb{Z}$-coefficients. Our Theorem \ref{thm:1} provides such a spectral sequence with instanton homology in place of Heegaard Floer homology.
The correct version of instanton homology for the task
was suggested in the work of Kronheimer and Mrowka.
The framed instanton homology $I^\#(Y)$ is
isomorphic to the sutured instanton group $\text{SHI}(M,\gamma)$
introduced in \cite{kms}, where $M$ is the complement of an open 3-ball in $Y$ and
$\gamma$ is a suture on the 2-sphere boundary.
Restating a conjecture of Kronheimer and Mrowka, transferred from the sutured setting,
cf. \cite[Conj. 7.24]{kms}, it is expected that
\[
\widehat{\text{HF}}(Y;\mathbb{C})\simeq I^\#(Y;\mathbb{C})\simeq \widetilde{\text{HM}}(Y;\mathbb{C}).
\]
Theorem \ref{thm:1} was inspired by this conjecture,
and offers some validation for it, as does Corollary \ref{cor:2}.
Note that Corollary \ref{cor:euler} confirms that
the Euler characteristics of the three Floer groups
under consideration agree.
Recently, Kronheimer and Mrowka \cite{kmu} introduced singular
instanton homology groups. We only mention the groups $I^\#(Y,L)$,
where $L$ is a link in $Y$, and from which $I^\#(Y)$ is obtained
by taking $L$ to be empty. The construction of this group involves counting instantons
on $\mathbb{R}\times\mathbb{Y}^\#$ singular along $\mathbb{R}\times L$. Writing $I^\#(L)=I^\#(S^3,L)$,
Kronheimer and Mrowka produced a spectral sequence
\[
\text{Kh}(L) \;\;\leadsto \;\; I^\#(L),
\]
where the left side is unreduced Khovanov homology.
Their spectral sequence respects $\mathbb{Z}/4$-gradings.
Using related singular instanton groups,
they proved in \cite{kmu} that Khovanov homology detects the unknot.
The present paper adapts many details from their work.\\
\subsection{Outline \& Acknowledgments}
Many of the proofs in this paper are partially adapted from
the papers so far mentioned, especially \cite{kmu}, and
are credited throughout. For example, the proof of the exact triangle
in \S 5 is inspired mostly by \cite{kmu}, although
the analysis of instanton counts is somewhat different.
To work out the signs in the differential
of the spectral sequence in Theorem \ref{thm:1},
we introduce an algebro-topological way of composing homology orientations
in \S \ref{sec:homor}.
The composition rule is associative and has distinguished units.
Indeed, homology orientations are part of the morphism data in
an appropriate category on which framed instanton homology is a functor.
The proof of Theorem \ref{thm:integerhom} follows from versions
of Fukaya's connected sum theorem of \cite{fukaya} where non-trivial admissible
bundles are involved, which to the author's knowledge have not appeared
in the literature. These versions are simpler than Fukaya's original theorem,
as there are fewer trivial connections with which to deal.
We outline Donaldson's proof of Fukaya's theorem from \cite[\S 7.4]{d},
more or less verbatim (except for notation), and show how the versions
of interest to us follow, making the necessary modifications.
The structure of this paper is as follows. In \S 2,
we outline the proof of Theorem \ref{thm:1},
introducing the surgery exact triangle and the link
surgeries spectral sequence. In \S 3,
we construct the bundles that are relevant to
the surgery exact triangle. In \S 4,
we review the construction of instanton homology
for admissible bundles.
In \S 5, we prove Floer's exact triangle, and
in \S 6, we prove the link surgeries spectral sequence.
In \S 7, we define framed instanton homology and
discuss its basic properties.
In \S 8, we define reduced odd Khovanov homology and
complete the proof of Theorem \ref{thm:1} and Corollary \ref{cor:2}.
In \S 9, we prove Theorem \ref{thm:integerhom} and Corollaries \ref{cor:3}, \ref{cor:4} and \ref{cor:5}.
In \S 10, we prove Corollary \ref{cor:euler}.
The author would like to thank Ciprian Manolescu
for suggesting the problem that led to the results in this paper,
and for his encouragement and enthusiasm. The author thanks Tye Lidman for helpful conversations,
and Jianfeng Lin for providing an argument given in \S \ref{sec:main} and for helpful discussions regarding \S \ref{sec:homor}. The author has learned that some of the results in this paper
have been obtained independently by Aliakbar Daemi.
\section{A Link Surgeries Spectral Sequence}
In this section we prove Theorem \ref{thm:2}. We follow
\cite{kmu} and \cite{bloom}. In \cite{kmu}, Kronheimer and Mrowka work over
$\mathbb{Z}$, taking care with signs, and we adapt many of the details from their setup.
Bloom's paper \cite{bloom} is especially
descriptive of the combinatorics involved here, and provides many illustrations.
As mentioned in the introduction, the idea for this spectral sequence
originates from Ozsv\'ath and Szab\'o's paper \cite{os}.\\
\subsection{The Cobordisms \& Metric Families}
Let $\mathbb{Y}$ be an admissible bundle over $Y$ and $L\subset Y$ a framed link
with $m$ components $L_1,\ldots,L_m$. Suppose we have admissible bundles
$\mathbb{Y}_v$ for $v\in\{\infty,0,1 \}^m$ that form a surgery cube as in
\S \ref{sec:main}. We conflate the
subscript $\infty$ with $-1$ and write $\mathbb{Y}_v$ for $v\in\{-1,0,1\}^m$. Further,
we write $\mathbb{Y}_v$ for $v\in\mathbb{Z}^m$ by taking the modulo $3$ reduction of $v$.
Define the norms
\[
|v|_1=\sum_{i=1}^m |v_i|, \qquad |v|_\infty = \max_{1\leq i \leq m}\{|v_i|\}.
\]
We use the partial order on $\mathbb{Z}^m$ in which $v \leq w$ if and only if $v_i \leq w_i$
for each $i=1,\ldots,m$.
Since the $\mathbb{Y}_v$ form a surgery cube, they can be generated by the data of
$\mathbb{Y}$ and a framed link $\mathbb{L}=\mathbb{L}_1\cup\cdots\cup\mathbb{L}_m$ in $\mathbb{Y}$ as in
\S \ref{sec:dehn}, where each $\mathbb{L}_i$ is an equivariant embedding of
$S^1\times D^2\times\text{SO}(3)$ into $\mathbb{Y}$. For $v < w$ we have surgery bundle cobordisms
$\mathbb{X}_{vw}:\mathbb{Y}_v\to\mathbb{Y}_w$ constructed
by iterating the construction for $\mathbb{X}_{ij}$ from \S \ref{sec:x} for each $\mathbb{L}_i$.
To give a definition, first set $k=|w-v|_1$.
We choose a maximal chain $v=v(0)<v(1)<\cdots < v(k) = w$. Each $\mathbb{X}_{v(i)v(i+1)}$
may be viewed as a surgery bundle as defined in \S \ref{sec:x}, and we may set
\[
\mathbb{X}_{vw} = \mathbb{X}_{v(k-1)v(k)}\circ\cdots\circ\mathbb{X}_{v(0)v(1)}.
\]
The choice of maximal chain does not affect the isomorphism type of $\mathbb{X}_{vw}$.
In fact, the identification of (\ref{iso}) yields a more invariant
interpretation: we may view $\mathbb{X}_{vw}$ as $\mathbb{Y}_v\times [0,1]$
with, for each $i=1,\ldots,m$, a copy of $\mathbb{H} \cup_\psi \cdots \cup_\psi \mathbb{H}$
($w_i-v_i$ copies of $\mathbb{H}$)
attached to $\mathbb{Y}_v\times \{1\}$ via the framed knot $\Lambda^{v_i+1}(\mathbb{L}_i)$.
We have the isomorphism
\[
\mathbb{X}_{vw}\simeq\mathbb{X}_{uw}\circ\mathbb{X}_{vu}
\]
whenever $v<u<w$. We write $\bf{0}$ for the element of $\mathbb{Z}^m$ with all zeros, and similarly
$\bf{n}$ for the element with all elements equal to $n\in\mathbb{Z}$. Note
that $\mathbb{X}_{\bf{0}\bf{3}}$ is not $\mathbb{X}_{\bf{0}\bf{0}}=\mathbb{Y}_{\bf{0}}\times [0,1]$,
but for instance $\mathbb{X}_{\bf{0}\bf{1}}\simeq \mathbb{X}_{\bf{3}\bf{4}}$. The base space of
$\mathbb{X}_{vw}$ is written $X_{vw}$. In the sequel we will only consider $\mathbb{X}_{vw}$ with
$|w-v|_\infty \leq 3$.
As in the case when $L$ had one component, we have distinguished hypersurfaces in
the interior of $X_{vw}$. Of course, the 3-manifolds $Y_{u}\subset X_{vw}$ for
$v < u < w$ are the first examples. Note that $Y_{u}$ and $Y_{u'}$ are disjoint if
and only if $u < u'$ or $u' < u$. For each $i\in\{1,\ldots,m\}$ and $k$ with
$v_i<k<w_i$ we have a 3-sphere $S_k^i$ in $X_{vw}$ which generalizes $S_{1}\subset X_{02}$
from \S \ref{sec:decomp1}. The spheres $S_k^i$ and $S_l^j$ intersect if and only if $i=j$ and
$|k-l|\leq 1$, and $S_k^i$ intersects $Y_u$ if and only if $u_i=k$. For $v,w\in\mathbb{Z}^m$
with $v< w$ and $|w-v|_\infty \leq 2$ we define a set of hypersurfaces in $X_{vw}$:
\[
\mathcal{H}_{vw} = \{ Y_u : v < u < w \} \cup \{ S^i_{k} : 1\leq i\leq m,\; v_i < k < w_i\}.
\]
Note that the second set is empty if $|w-v|_\infty<2$.
We obtain a family of metrics $G_{vw}=G(\mathcal{H}_{vw})$ on $X_{vw}$ as constructed in \S \ref{sec:met}.
The space of metrics $G_{vw}$ is a convex polytope called a graph-associahedron, and
\[
\dim G_{vw} = |w-v|_1-1,
\]
as Bloom explains in \cite[Thm. 5.3]{bloom}. In fact,
when $|w-v|_\infty<2$, $G_{vw}$ is the permutahedron $P_N$, the convex polytope
defined as the convex hull in $\mathbb{R}^N$ of all permutations of
$(1,2,\ldots,N)\in\mathbb{R}^N$ where $N= |w-v|_1$. For example, $P_3$ is a hexagon,
and the polytope $P_4$ is shown (hollowed out) in Figure \ref{fig:perm}.
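To illustrate the smallest non-trivial case, take $m=2$, $v=(0,0)$ and $w=(1,1)$. Then
\[
\dim G_{vw} = |w-v|_1 - 1 = 1,
\]
and $G_{vw}=P_2$ is an interval; its two endpoints correspond to the broken metrics obtained by stretching along the hypersurfaces $Y_{(1,0)}$ and $Y_{(0,1)}$ in $X_{vw}$.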
Write $m_{vw}=m_{G_{vw}}(\mathbb{X}_{vw})$ and $\partial_v$ for the differential of
$\text{C}(\mathbb{Y}_v)$. From the formulae in \S \ref{sec:met} we obtain
\begin{equation}
(-1)^{|w-v|_1-1}m_{vw}\partial_{v}-\partial_w m_{vw} = \sum_{v < u < w} m_{G(Y_u)}(\mathbb{X}_{vw})
+ \sum_{\substack{1\leq i \leq m\\v_i <k < w_i}} m_{G(S_k^i)}(\mathbb{X}_{vw}).\label{mets}
\end{equation}
As in \S \ref{sec:firstmaps}, each $m_{G(S_k^i)}(\mathbb{X}_{vw})=0$. Also, the family $G(Y_u)$
can be identified with the product $G_{vu}\times G_{uw}$. Before we apply equation (\ref{met3}),
we discuss the arrangement of signs.
\begin{figure}[t]
\includegraphics[scale=.5]{perm.pdf}
\caption{The permutahedron $P_4$.}
\label{fig:perm}
\end{figure}
It is possible to choose I-orientations $\mu_{vw}$ for $\mathbb{X}_{vw}$ such that
$\mu_{vw}=\mu_{uw}\circ\mu_{vu}$ whenever $v < u < w$, and we do so.
For a proof, see \cite[Lemma 6.1]{kmu}. We can orient each $G_{vw}$ such that
the identification of $G_{vu}\times G_{uw}$ with $G(Y_u)\subset \partial G_{vw}$
has orientation deficiency $(-1)^{\dim G_{vu}}$. That is, the product orientation for
$G_{vu}\times G_{uw}$ using our chosen orientations differs from the
boundary orientation as induced from $G_{vw}$ by the sign $(-1)^{\dim G_{vu}}$. This essentially
follows from the discussion in \cite{kmu} following Prop. 6.4. With this understood, equation
(\ref{met3}) yields
\[
m_{G(Y_u)}(\mathbb{X}_{vw}) = (-1)^{(\dim G_{uw}+1)\dim G_{vu}}m_{uw}m_{vu}.
\]
Writing $m_{vv}=\partial_v$, equation (\ref{mets}) becomes
\[
\sum_{v\leq u \leq w} (-1)^{|w-u|_1(|u-v|_1-1)}m_{uw}m_{vu}=0.
\]
We remind the reader that this holds under the assumptions
that $v < w$ and $|w-v|_\infty \leq 2$. The case $v=w$ also holds, encoding the relation $\partial_v^2=0$.\\
\subsection{Constructing the Spectral Sequence}
We now construct the spectral sequence of Theorem \ref{thm:2}. We define a chain complex
$(\textbf{C},\boldsymbol{\partial})$ with a filtration $F^i \textbf{C}$.
The filtration will induce the spectral sequence we desire. To begin, set
\begin{equation}
\textbf{C} = \bigoplus_{v\in\{0,1\}^m} \text{C}(\mathbb{Y}_v), \quad \boldsymbol{\partial} =
\sum_{v \leq w} \partial_{vw}\label{complex}
\end{equation}
where $\partial_{vw}= (-1)^{s(v,w)}m_{vw}$. The sign here is given by
\[
s(v,w) = (|w-v|_1^2- |w-v|_1)/2 + |v|_1,
\]
as lifted from \cite[eq. 38]{kmu}. We compute the $\text{C}(\mathbb{Y}_v)\to\text{C}(\mathbb{Y}_w)$ component of $\boldsymbol{\partial}^2$ to be
\[
(-1)^{s(v,w)+|w|_1}
\sum_{v\leq u\leq w} (-1)^{|w-u|_1(|u-v|_1-1)}m_{uw}m_{vu}=0.
\]
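The vanishing of this expression follows from the relation established at the end of the previous subsection, once the sign prefactor is accounted for by the identity
\[
s(v,u)+s(u,w) \equiv s(v,w) + |w|_1 + |w-u|_1\left(|u-v|_1-1\right) \mod 2,
\]
which one checks by writing $a=|u-v|_1$ and $b=|w-u|_1$, using $|u|_1=|v|_1+a$, $|w|_1=|v|_1+a+b$, and $(a+b)(a+b-1)/2 = a(a-1)/2+b(b-1)/2+ab$.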
We call $(\textbf{C},\boldsymbol{\partial})$ the \textit{link surgeries complex associated to}
$(\mathbb{Y},\mathbb{L})$, with the understanding that
the necessary auxiliary choices we've made have been fixed.
We define the filtration on $(\textbf{C},\boldsymbol{\partial})$ by setting
\begin{equation}
F^i{\textbf{C}} = \bigoplus_{|v|\geq i} \text{C}(\mathbb{Y}_v) \subseteq \textbf{C}.\label{eq:filt}
\end{equation}
Since $\boldsymbol{\partial}$ involves only terms with $v\leq w$, it is immediate that
$\boldsymbol{\partial} F^i{\textbf{C}} \subseteq F^i{\textbf{C}}$. This filtered complex
induces a spectral sequence whose $E^1$-page and $E^1$-differential $d^1$ are given by
\[
E^1 = \bigoplus_{v\in\{0,1\}^m}I(\mathbb{Y}_v), \quad d^1 =
\sum_{\substack{v < w\\|w-v|_1=1}} (-1)^{\delta(v,w)}m(\mathbb{X}_{vw}),
\]
where $\delta(v,w)\equiv \sum_{1\leq i \leq j}v_i$, in which
$j$ is the unique index where $v$ and $w$ differ. This carries over from the discussion
following \cite[Cor. 6.9]{kmu}. To prove Theorem \ref{thm:2} it remains
to identify the $E^\infty$-page: we must show that the homology of
$(\textbf{C},\boldsymbol{\partial})$ is the instanton homology $I(\mathbb{Y})$.\\
\subsection{Convergence}
Let $(\textbf{C},\boldsymbol{\partial})$ be the link surgeries complex associated to $(\mathbb{Y},\mathbb{L})$.
For $i\in\mathbb{Z}$ define the chain complex $(\textbf{C}_i,\boldsymbol{\partial}_i)$
to be the link surgeries complex associated
to $(\mathbb{Y}_{\Lambda^{i+1}}(\mathbb{L}_1),\mathbb{L}\setminus \mathbb{L}_1)$.
Recall that the notation $\mathbb{Y}_{\Lambda^{i+1}}(\mathbb{L}_1)$ is from \S 3.2, and stands for $\Lambda^{i+1}$-surgery on $\mathbb{L}_1$ in $\mathbb{Y}$.
We conflate $\infty$ and $-1$ in the following.
Note that for $i=\infty,0,1$ and $a,b\in\mathbb{Z}^{m-1}$
we have $(\boldsymbol{\partial}_i)_{ab} = \partial_{vw}$
where $v=(i,a)$ and $w=(i,b)$. Thus we can work exclusively with the maps $\partial_{vw}$
with $v,w\in\mathbb{Z}^m$. Consider the map $\textbf{f}_0:\textbf{C}_0\to \textbf{C}_1$ given by
\[
\textbf{f}_0= \sum_{\substack{v,w\in\{0,1\}^{m}\\ v_1=0, w_1 = 1}} \partial_{vw}.
\]
It should be understood that $\partial_{vw}=0$ if $v\not\leq w$. In words, $\textbf{f}_0$
is the sum of the components in the differential $\boldsymbol{\partial}$ that correspond to
surgery-cobordisms that include surgery on $L_1$. This is an anti-chain map,
and the larger complex $(\textbf{C},\boldsymbol{\partial})$ is the cone-complex of $\textbf{f}_0$. That is,
\[
\textbf{C} = \textbf{C}_0\oplus\textbf{C}_1, \quad \boldsymbol{\partial} =
\left( \begin{array}{cc} \boldsymbol{\partial}_0 & 0 \\ \mathbf{f}_0 &
\boldsymbol{\partial}_1 \end{array} \right).
\]
Define a map $\textbf{F}:\textbf{C}_{\boldsymbol{\infty}}\to\textbf{C}$ by
\[
\textbf{F} = \sum_{\substack{v_1=-1\\w_1\in\{0,1\}}} \partial_{vw}.
\]
This is an anti-chain map: the relation $\textbf{F}\boldsymbol{\partial}_{\boldsymbol{\infty}}+ \boldsymbol{\partial}\textbf{F}=0$
is an encoding of (\ref{mets}) via
\[
\sum_{\substack{v_1=u_1=-1\\w_1\in\{0,1\}}} \partial_{uw}\partial_{vu} +
\sum_{\substack{v_1=-1\\u_1,w_1\in\{0,1\}}} \partial_{uw}\partial_{vu}=0.
\]
Equip $\textbf{C}$ and $\textbf{C}_\infty$ with filtrations as in (\ref{eq:filt}) but using
the sum $\sum_{i=2}^m v_i$ instead of $|v|_1$. Then $\textbf{F}$ respects these
filtrations, and on the $E^0_p$-components of the induced spectral sequences,
the map induced by $\textbf{F}$ takes the form
\[
\textbf{F}^0_p:\bigoplus_{\substack{v_1=-1\\ \sum_{i \geq 2} v_i = p}}\text{C}(\mathbb{Y}_v) \to
\bigoplus_{\substack{v_1\in\{0,1\} \\ \sum_{i \geq 2} v_i = p}}\text{C}(\mathbb{Y}_v)
\]
and for $v$ with $v_1=-1$ is given by
\[
\textbf{F}^0_p|_{\text{C}(\mathbb{Y}_v)} = \partial_{vv'}\oplus \partial_{vv''}
\]
where $v'$, $v''$ have $v'_1=0$ and $v''_1=1$, and otherwise agree with $v$.
But $\partial_{vv'}$ is the map $f_{-1}$ in \S \ref{sec:firstmaps} for the surgery triangle
involving $\mathbb{Y}_v$ and $L_1$; and likewise $\partial_{vv''}$ is the map $h_{-1}$.
It follows from Lemma \ref{alg} that $\textbf{F}^0$ is a quasi-isomorphism, and
hence so is $\mathbf{F}$. By
removing each link component as we have just done for $L_1$, and composing the $m$
maps $\textbf{F}$ associated to each removal, we get a
quasi-isomorphism $\mathbf{Q}$ from $(\text{C}(\mathbb{Y}),\partial)$ to $(\textbf{C},\boldsymbol{\partial})$,
completing the proof of Theorem \ref{thm:2}.\\
\subsection{Gradings}\label{sec:linksgr}
We follow Bloom's \cite{bloom} treatment of gradings for the spectral sequence.
We refer to the mod 2 grading on the complex $\text{C}(\mathbb{Y})$ defined
in \S \ref{sec:instgrading} as $\text{gr}[\mathbb{Y}]$.
We define a grading $\text{gr}[{\textbf{C}}]$ on the
complex $\textbf{C}$ in (\ref{complex}). For $x\in \text{C}(\mathbb{Y}_v)\subset\mathbf{C}$
with homogeneous $\text{gr}[{\mathbb{Y}_v}]$ grading, we define
\begin{equation}
\text{gr}[\textbf{C}](x) \equiv \text{gr}[{\mathbb{Y}_v}](x) + \text{deg}(X_{\boldsymbol{\infty}v}) + |v|_1 \mod 2.\label{eq:gr1}
\end{equation}
Recall that we conflate $\boldsymbol{\infty}$ with $-\mathbf{1}\in\mathbb{Z}^m$. Let $\pi_w:\textbf{C}\to\text{C}(\mathbb{Y}_w)$ be the projection. Note that
\begin{equation}
\text{gr}[\textbf{C}](\pi_w(\boldsymbol{\partial}(x))) \equiv \text{gr}[\mathbb{Y}_w](m_{vw}(x)) + \text{deg}(X_{\boldsymbol{\infty}w})+|w|_1 \mod 2.\label{eq:gr2}
\end{equation}
We have the additivity relation $\text{deg}(X_{\boldsymbol{\infty}w}) \equiv \text{deg}(X_{\boldsymbol{\infty}v}) + \text{deg}(X_{vw})$, and also
\[
\text{gr}[\mathbb{Y}_w](m_{vw}(x)) = \text{gr}[\mathbb{Y}_v](x)+ \dim(G_{vw}) + \text{deg}(X_{vw}).
\]
Knowing $\dim(G_{vw})=|w-v|_1-1$ shows that the expressions (\ref{eq:gr1}) and
(\ref{eq:gr2}) differ by $1$ mod $2$, and
thus the differential $\boldsymbol{\partial}$ alters $\text{gr}[\textbf{C}]$ by $1$.
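Spelled out, for homogeneous $x\in\text{C}(\mathbb{Y}_v)$ and $v<w$, these relations combine to give
\begin{align*}
\text{gr}[\textbf{C}](\pi_w(\boldsymbol{\partial}(x))) - \text{gr}[\textbf{C}](x) &\equiv \dim(G_{vw}) + 2\,\text{deg}(X_{vw}) + |w|_1 - |v|_1\\
&\equiv \left(|w-v|_1-1\right) + |w-v|_1 \equiv 1 \mod 2,
\end{align*}
using $|w|_1-|v|_1 = |w-v|_1$, which holds since $v< w$.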
The quasi-isomorphism
$\mathbf{Q}:\text{C}(\mathbb{Y})\to \textbf{C}$ is a composition of $m$ maps $\mathbf{F}$ as in the
previous section. Thus it is a sum of maps of the form $m_G(\mathbb{X}_{\boldsymbol{\infty}v})$,
where $v\in \{0,1\}^m$ and $G = G_1\times\cdots\times G_m$. Here $G_i=G_{v(i)v(i+1)}$
only varies on $X_{v(i)v(i+1)}\subset X_{\boldsymbol{\infty}v}$ and
$\boldsymbol{\infty}=-\mathbf{1} = v(1) < v(2) < \cdots < v(m+1)=v$.
Using $\text{dim}(G_{vw})=|w-v|_1-1$
for $v<w$, we find $\dim(G)=|v|_1$. Since the degree of $m_G(\mathbb{X}_{\boldsymbol{\infty}v})$ from $\text{gr}[\mathbb{Y}]$ to $\text{gr}[\mathbb{Y}_v]$ is $\dim(G)+\text{deg}(X_{\boldsymbol{\infty}v})$, it follows that $\mathbf{Q}$ preserves the $\mathbb{Z}/2$-gradings.
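For the dimension count, the factors of $G$ telescope:
\[
\dim(G) = \sum_{i=1}^m \left(|v(i+1)-v(i)|_1 - 1\right) = |v-(-\mathbf{1})|_1 - m = (|v|_1+m)-m = |v|_1.
\]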
There is also a $\mathbb{Z}$-grading on $\textbf{C}$ given by the vertex weight $|v|_1$ for a homogeneous
element in $\text{C}(\mathbb{Y}_v)\subset\textbf{C}$, and by construction $\boldsymbol{\partial}$
increases this by $1$. We now give a more detailed statement of Theorem \ref{thm:2}; compare
\cite[Cors. 6.9, 6.10]{kmu} and \cite[Thm. 1.1]{bloom}.
\begin{theorem}
Let $L$ be an oriented, framed link with $m$ components in $Y$. For each
$v\in \{\infty,0,1\}^m$ denote by $Y_v$ the result of $v$-surgery on $L$
and let $\mathbb{Y}_v$ be an admissible bundle over $Y_v$ such that the total
collection of $\mathbb{Y}_v$ forms a surgery cube.
For $v<w$ there are surgery cobordism bundles $\mathbb{X}_{vw}$ from $\mathbb{Y}_v$ to $\mathbb{Y}_w$ with
I-orientations $\mu_{vw}$ satisfying $\mu_{uw}\circ\mu_{vu}=\mu_{vw}$ whenever $v<u<w$,
such that there is a spectral sequence $(E^r,d^r)$ with
\[
\text{{\em $E^1 = \bigoplus_{v\in\{0,1\}^m} I(\mathbb{Y}_v), \qquad d^1 = \sum_{\substack{v < w\\ |w-v|_1 = 1}} (-1)^{\delta(v,w)}I(\mathbb{X}_{vw})$}}
\]
where $\delta(v,w)=\sum_{1\leq i \leq j} v_i$, in which $j$ is the unique index where
$v$ and $w$ differ.
The spectral sequence is graded by $\mathbb{Z}/2\times\mathbb{Z}$, where
$d^r$ has bi-degree $(1,r)$.
The $\mathbb{Z}/2$-grading is given by (\ref{eq:gr1}) while the $\mathbb{Z}$-grading
is by vertex weight.
The spectral sequence converges by the $E^{m+1}$-page to $I(\mathbb{Y})$, and it induces the
usual $\mathbb{Z}/2$-grading on $I(\mathbb{Y})$. \label{ss1}
\end{theorem}
\section{The Surgery Story}\label{sec:main}
In this section we outline the proof of Theorem \ref{thm:1}.
To begin, we recall the instanton surgery exact sequence, or exact triangle,
introduced by Floer \cite{f2}.
Let $K$ be a framed knot in $Y$. That is, $K$ has a preferred meridian
and longitude. Let $\omega$ be a geometric representative for $\mathbb{Y}$ disjoint from
$K$. Denote by $Y_0$ and $Y_1$ the results of performing $0$- and
$1$-surgery on $K$, respectively. We can view $\omega$ inside $Y_0$ and
$Y_1$ by keeping it away from the surgery neighborhood. Let $\omega_0=\omega\cup\mu\subset Y_0$
where $\mu$ is a core for the induced framed knot in $Y_0$,
and let $\omega_1=\omega\subset Y_1$. Finally, for $i=0,1$ choose a
bundle $\mathbb{Y}_i$ over $Y_i$ geometrically represented by $\omega_i$. If the
ordered triple of bundles
$\mathbb{Y},\mathbb{Y}_0,\mathbb{Y}_1$ can be geometrically represented in this way,
we say they form a \textit{surgery triad}.
\begin{theorem}[Floer]\label{thm:floer} There is an exact sequence
\[
\cdots I(\mathbb{Y}) \to I(\mathbb{Y}_0)\to I(\mathbb{Y}_1)\to I(\mathbb{Y})\cdots
\]
provided all three bundles are admissible and form a surgery triad.
\end{theorem}
\begin{figure}[t]
\includegraphics[scale=.60]{ses_1.pdf}
\caption{Local surgery diagrams. The slanted line in each case is the knot $K$.
Each row represents a possible construction for a surgery triad.}
\label{fig:ses}
\end{figure}
\noindent The loop $\mu$ in $Y_0$, pushed out of the surgery solid torus, becomes
a small meridional loop around the surgered neighborhood of $K$ in $Y_0$. This is depicted
in the top row of Figure \ref{fig:ses} in a local surgery diagram for $Y_0$.
One can view $Y_1$ (resp. $Y$) as obtained from $0$-surgery on the induced framed
knot in $Y_0$ (resp. $Y_1$), see \S \ref{sec:topology}.
Thus we obtain two more local surgery diagrams depicting where $\mu$ may be placed,
shown in the bottom two rows of Figure \ref{fig:ses}. See also \S \ref{sec:geomrep}.
Floer's exact triangle was studied by Braam and Donaldson in \cite{bd}, where a
detailed proof following Floer's ideas can be found. In this paper we provide
an alternative proof.
The proof relies on an
algebraic lemma which was first used by Ozsv\'ath and Szab\'o \cite{os}.
The lemma requires the input of maps between the three
relevant chain complexes satisfying certain properties. The maps we choose count
instantons on families of metrics that are parameterized by convex polytopes.
This approach was used by Kronheimer, Mrowka, Ozsv\'ath and Szab\'o to prove a
surgery exact sequence in the monopole case \cite{kmos}.
Our proof is largely an adaptation of Kronheimer and Mrowka's proof in \cite{kmu}
of an analogous exact triangle in singular instanton knot homology.
This method of proof leads to a generalization of Floer's theorem to a so-called
link surgeries spectral sequence, as was first done by Ozsv\'ath and Szab\'o in Heegaard
Floer homology \cite{os}. Let $L$ be a framed link in $Y$ with components
$L_1,\ldots,L_m$. For each $v\in\{0,1,\infty\}^m$ let $Y_v$ be the result of
$v_i$-surgery on $L_i$ for $1\leq i \leq m$. Briefly, we say $Y_v$ is the result
of $v$-surgery on $L$. Choose a geometric representative $\omega$ for $\mathbb{Y}$ disjoint
from $L$. Let $\omega_v\subset Y_v$ be $\omega$ together
with a core for the knot in $Y_v$ induced by $L_i$ for each $i$ with $v_i=0$. Let $\mathbb{Y}_v$ be
bundles over $Y_v$ geometrically represented by the $\omega_v$. If the bundles $\mathbb{Y}_v$
can be geometrically represented according to these rules we say that they form a
\textit{surgery cube}.
\begin{theorem}\label{thm:2}
Suppose the bundles {\em $\mathbb{Y}_v$} for $v\in\{0,1,\infty\}^m$ are admissible and that they form a surgery cube.
Then there is a spectral sequence
\[
\bigoplus_{v\in\{0,1\}^m}I(\mathbb{Y}_v) \quad \leadsto \quad I(\mathbb{Y}).
\]
That is, the left side is the {$E^1$}-page and the sequence converges to the right side.
\end{theorem}
\noindent A more detailed statement is provided in Theorem \ref{ss1}.
An analogous result in monopole Floer homology was proved by Bloom \cite{bloom} with $\mathbb{F}_2$-coefficients,
and in singular instanton knot homology by Kronheimer and Mrowka \cite{kmu}.
From this we obtain a surgery spectral sequence for the groups $I^\#(Y)$,
which generally must involve the twisted groups $I^\#(Y;\lambda)$.
The group $I^\#(Y;\lambda)$ consists of four consecutive gradings of $I(\mathbb{Y}^\#)$,
where $\mathbb{Y}^\#$ is a bundle over $Y\# T^3$ geometrically
represented by an $S^1$-factor of $T^3$ together with $\lambda\subset Y$.
In this setting, the surgeries on the link $L$ are performed
away from the 3-tori, and every bundle is automatically admissible.
To minimize the number of non-trivial bundles involved,
we refer to the bottom row of Figure \ref{fig:ses}.
Using this, we can ensure that
the bundles for $v$-surgeries with $v\in\{0,1\}^m$ are
geometrically represented only by
the $S^1$-factor of $T^3$. The trade-off is that the geometric representative
for the bundle over $Y$ is the $S^1$-factor together with the link $L$.
We obtain
\begin{theorem}\label{thm:4}
Let $L$ be a framed $m$-component link in $Y$. There is a spectral sequence
\[
\bigoplus_{v\in\{0,1\}^m}I^\#(Y_v) \quad \leadsto \quad I^{\#}(Y;L).
\]
That is, the left side is the {$E^1$}-page and the sequence converges to the right side.
\end{theorem}
\noindent A more detailed statement is given in Theorem \ref{thm:framed}.
Now we introduce branched double covers, following Ozsv\'ath and Szab\'o \cite{os}.
Let $L$ be a link in $S^3$ and
$\Sigma(L)$ the double cover of $S^3$ branched over $L$. Let $D$
be a planar diagram for $L$ with $m$ crossings. For each $v\in\{0,1\}^m$ there
is a resolution diagram $D_v$ which is a disjoint union of circles, obtained by
performing $0$- and $1$-resolutions according to Figure \ref{fig:crossing_res}.
Each branched cover $\Sigma(D_v)$ is diffeomorphic to $\#^k S^1\times S^2$
where $D_v$ has $k+1$ circles. Further, there is a link
$L'\subset\overline{\Sigma(L)}$ and a framing on $L'$ such that
$\Sigma(D_v)$ is the result of $v$-surgery on $L'$. If we draw a small arc
between each crossing in $D$, the preimages in
the branched cover $\Sigma(L)$ are loops, and the link $L'$ is the union
of these preimages.
With this setup, from Theorem \ref{thm:4} we have a spectral sequence
\begin{equation}
\bigoplus_{v\in\{0,1\}^m}I^\#(\Sigma(\text{D}_v))
\quad \leadsto \quad I^\#(\overline{\Sigma(L)};L').\label{sps}
\end{equation}
We claim that
$[L']\in H_1(\Sigma(L);\mathbb{F}_2)$ is zero, so that
the target of this spectral sequence is in fact $I^\#(\overline{\Sigma(L)})$.
The diagram $D$ divides the plane into regions. To show $[L']=0$, it suffices
to color the regions black and white in a way such that each crossing touches exactly
one black region. See Figure \ref{fig:orientedres}.
For then the black regions can be lifted to a surface in $\Sigma(L)$
whose boundary is $L'$, implying $[L']=0$.
To color the regions, we follow an argument communicated to the author by Jianfeng Lin.
We proceed as if performing the algorithm to construct a Seifert surface, as in
\cite[\S 5.4]{rolfsen}. First, we orient $L$. Then we resolve each crossing as in
Figure \ref{fig:orientedres}.
We assign to each circle $z$ in the resolved diagram two signs, $a_z$ and $b_z$. The first
sign $a_z$ is $+1$ if $z$ is oriented counter-clockwise in the plane, and $-1$ otherwise.
The second sign $b_z$ is given by $(-1)^N$ where $N$ is the number of circles that
surround $z$. Now color, with black,
the regions that are directly interior to each circle $z$ with $a_zb_z=+1$. Transferring the coloring
back to the unresolved diagram, each crossing touches exactly one such region.
\begin{figure}[t]
\includegraphics[scale=.75]{crossing_res.pdf}
\caption{From the diagram $\text{D}$ to a resolution diagram $\text{D}_v$.}
\label{fig:crossing_res}
\end{figure}
\begin{figure}[t]
\includegraphics[scale=.70]{orientedres.pdf}
\caption{On the left, we want to color the regions of the diagram so that at each crossing exactly one of the
four regions is colored. On the right, we go from an oriented diagram to a disjoint union of oriented circles.}
\label{fig:orientedres}
\end{figure}
This reduces the proof of Theorem \ref{thm:1} to identifying the $E^1$-page of (\ref{sps})
and then understanding the gradings.
We can compute the groups $I^\#(\Sigma(D_v))$ because each $\Sigma(D_v)$
is of the form $\#^k S^1\times S^2$, and we can compute the $E^1$-differential
because the cobordism maps involved are topologically simple.
This is carried out in \S \ref{sec:branched}, where we identify the $E^1$-page
as the chain complex used to compute $\overline {\text{Kh}'}(L)$ from the diagram $D$.
We then check in \S \ref{sec:finalgr} that the relevant gradings are preserved,
completing the proof of Theorem \ref{thm:1}.
\section{Branched Double Covers}\label{sec:oddkh}
In this section we complete the proof of Theorem \ref{thm:1}.
First, we define reduced odd Khovanov homology in \S \ref{sec:oddkh},
following Bloom's description from \cite{bloom2}. We give an alternative
description of the differential that will suit our goals.
We then discuss how to compose homology orientations in \S \ref{sec:homor}.
This is the framework we use to understand the signs
in our spectral sequence.
In \S \ref{sec:branched}, we identify the $E^1$-page of
the spectral sequence (\ref{sps}) with the reduced odd Khovanov chain complex.
Finally, in \S \ref{sec:finalgr}, we discuss the $\mathbb{Z}/4$-grading of the spectral sequence
and the conclusion of Corollary \ref{cor:3}.
\begin{figure}[t]
\includegraphics[scale=.8]{oddkh.pdf}
\caption{Resolution conventions for the arc-decorated
diagrams in reduced odd Khovanov homology. There are two choices for the placement of an arc at a given crossing; in the left-most picture, the arc can point up (as depicted) or down. In the latter case, the arcs in the resolution pictures are correspondingly reversed.}
\label{fig:oddkh}
\end{figure}
\subsection{Odd Khovanov Homology}{\label{sec:oddkh}}
Let $L$ be an oriented link and $D$ a planar diagram for $L$. Suppose $D$
has $m$ crossings. We assume that each crossing has an arrow drawn over it,
as in Figure \ref{fig:oddkh}.
Then for each $v\in\{0,1\}^m$ we can define a resolution diagram $D_v$
according to the rules of Figure \ref{fig:oddkh}.
Each $D_v$ is a disjoint union of planar-embedded unoriented circles
together with a disjoint union of planar-embedded oriented arcs,
each arc beginning and ending at a circle.
Suppose $D_v$ has $k+1$ circles. Then we have a rank $k$
abelian group $V_v$ defined by
\[
V_v=\mathbb{Z}\{\text{arcs}\}/\text{ker}\left(\mathbb{Z}\{\text{arcs}\}\to\mathbb{Z}\{\text{circles}\}\right)
\]
where the map involved sends an arc to the circle at which it begins minus
the circle at which it ends. A basis for $V_v$ is given by any $k$
arcs that touch all $k+1$ circles in $D_v$. Put differently, a basis is given by the edges of any spanning tree of the graph whose vertices are the circles of $D_v$ and whose edges are the arcs. We define
\[
\text{C}_v = {\textstyle{\bigwedge}}^\ast (V_v), \qquad \text{C} = \bigoplus_{v\in\{0,1\}^m}\text{C}_v.
\]
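The rank count for $V_v$ can be seen directly, assuming (as the spanning-tree description implies) that the circles and arcs of $D_v$ form a connected graph. By the first isomorphism theorem,
\[
V_v \cong \text{im}\left(\mathbb{Z}\{\text{arcs}\}\to\mathbb{Z}\{\text{circles}\}\right)
= \text{ker}\left(\mathbb{Z}\{\text{circles}\}\to\mathbb{Z}\right),
\]
where the second map is the sum of coefficients; this kernel is free of rank $k$ when $D_v$ has $k+1$ circles.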
For each $v,w\in\{0,1\}^m$ with $v<w$ and $|w-v|_1=1$
we introduce a map $\partial_{vw}':\text{C}_v\to \text{C}_w$. There is a single
arc $x_{vw}$ in each of $D_v$ and $D_w$ that
changes from a $0$-resolution position to a $1$-resolution position. There
are two cases to consider, corresponding to two circles merging or splitting:
\[
\partial'_{vw}(x) := \begin{cases} x_{vw} \wedge x &\mbox{if $0=x_{vw}\in \text{C}_v$ (split)}\\
x & \mbox{if $0\neq x_{vw}\in \text{C}_v$ (merge)} \end{cases}
\]
In these expressions we use the symbol $x_{vw}$ to stand both for
an arc and its equivalence class in $V_v$.
We call the collection of $\partial'_{vw}$
the \textit{pre-differential}.
The differential for $\text{C}$ is defined by
\[
\partial = \sum \partial_{vw} = \sum \varepsilon_{vw}\partial'_{vw}
\]
where each $\varepsilon_{vw}$ is $+1$ or $-1$, and the sums are over $v,w$
with $v<w$ and $|w-v|_1=1$. The signs $\varepsilon_{vw}$ are chosen to satisfy two conditions.
The first condition is that $\partial^2=0$. The second condition is as follows.
Let $v<t,u$ with $|t-v|_1=|u-v|_1=1$ be three vertices where the
arcs $x_{vu}$ and $x_{vt}$ are arranged
in $D_v$ as in the left of Figure \ref{fig:typexy}. Let $w$ be the vertex with $w >t,u$ and
$|w-t|_1=|w-u|_1=1$. Any four such vertices $v,u,t,w$
will be called a {\em type X} face. A {\em type Y} face
is obtained by reversing one of either $x_{vu}$ or $x_{vt}$. The second condition
is that for a type X face, the sign
\[
\varepsilon_{vu}\varepsilon_{vt}\varepsilon_{tw}\varepsilon_{uw}
\]
is always $+1$ or always $-1$; and the same product for a type Y face is also
always $+1$ or always $-1$, and is minus the type X sign.
We call the collection of $\varepsilon_{vw}$ a {\em valid edge assignment}
if it satisfies these two conditions. The reduced odd Khovanov homology
of $L$ is then defined to be $\overline {\text{Kh}'}(L) = H_\ast(\text{C}, \partial)$.
Well-definedness and invariance are proved in \cite{ors}.
The group $\overline {\text{Kh}'}(L)$ is bigraded by a homological grading $t$ and
a quantum grading $q$. For an element $x\in{\textstyle{\bigwedge}}^{|x|}(V_v)$
where $k=\dim(V_v)$, these are defined by
\[
t(x) = |v|_1 - n_-,
\]
\[
q(x) = k -2|x|+n_+ -2n_- + |v|_1.
\]
Here $n_\pm$ is the number of $\pm$ crossings in $D$.
We are interested in a $\mathbb{Z}/4$-grading defined by
\begin{equation}
\frac{3}{2}q -t +\frac{1}{2}\left(\sigma+\nu\right) \mod 4\label{eq:oddgr}
\end{equation}
where $\sigma$ is the signature of $L$ and $\nu$ the nullity.
\begin{figure}[t]
\includegraphics[scale=.9]{typexy.pdf}
\caption{The configurations of the two relevant arcs in
the initial vertex of a type X and type Y face.}
\label{fig:typexy}
\end{figure}
\vspace{10pt}
\subsection{Composing Homology Orientations}\label{sec:homor}
Suppose we are given $X_1:Y_1\to Y_2$ and $X_2:Y_2\to Y_3$. Let $X_{12} = X_2\circ X_1$.
In this section we describe the rule we use to orient $\mathcal{L}(X_{12})$ given
orientations of $\mathcal{L}(X_1)$ and $\mathcal{L}(X_2)$. We recall
\[
\mathcal{L}(X_1) =H_1(Y_1;\mathbb{R})\oplus H_1(X_1;\mathbb{R})\oplus H_2^+(X_1;\mathbb{R}).
\]
Typically,
an orientation of $\mathcal{L}(X_i)$ will be denoted $\mu_i$. Although
the composition of homology orientations originates from the determinants of
the relevant Fredholm operators, in our applications we prefer to have a concrete, algebro-topological description of such a rule.
Perhaps the two most important formal properties of a composition rule compatible with a construction of framed instanton homology are associativity and the existence of units. In other words,
\[
(\mu_3\circ\mu_2)\circ \mu_1 = \mu_3\circ(\mu_2\circ\mu_1)
\]
whenever $\mu_i$ is a homology orientation of $X_i:Y_i\to Y_{i+1}$ for $i=1,2,3$,
and for $Y\times [0,1]$ there exists a distinguished homology orientation $\mu_{Y}^{\text{id}}$
such that
\[
\mu_{Y}^{\text{id}}\circ \mu = \mu, \qquad \mu\circ\mu_{Y}^{\text{id}}=\mu
\]
whenever $\mu$ is a homology orientation and these compositions make sense. We will first define a composition rule in an algebro-topological fashion and then show it has these two properties. At the end of this section, we will describe how the rule we have defined can be described using Fredholm determinant line bundles, using the setup of Kronheimer and Mrowka \cite[\S 20.2]{kmm}, ensuring that our rule is compatible with a construction of framed instanton homology. In this section all homology groups are assumed to have real coefficients.
We proceed to construct the composition rule.
As in the previous sections, we maintain the assumption
that the $X_i$ and $Y_i$ are connected.
For background on the following setup, see \cite[Thm. 27.5]{dfn} and \cite[\S 7]{as}.
Let $f_{12}:H_1(Y_2)\to H_1(X_1)\oplus H_1(X_2)$
be the map in the Mayer-Vietoris sequence.
Consider the following exact sequences:
\begin{equation}
0 \to\text{im}(f_{12})\to H_1(X_1)\oplus H_1(X_2)\to H_1(X_{12})\to 0\label{eq:exseq1}
\end{equation}
\begin{equation}
0 \to \text{ker}(f_{12}) \to H_1(Y_2)\to \text{im}(f_{12}) \to 0\label{eq:exseq2}
\end{equation}
\begin{equation}
0 \to H_2^+(X_1)\oplus H_2^+(X_2) \to H_2^+(X_{12}) \to \text{ker}(f_{12})\to 0\label{eq:exseq3}
\end{equation}
The first exact sequence is extracted from the Mayer-Vietoris sequence, and the
second is naturally associated to the map $f_{12}$. Our convention is
that $f_{12}(x)=(x,-x)$ on the chain level. For the third sequence, we
choose the positive definite subspace $H_2^+(X_{12})$ so that
it contains the image of $H_2^+(X_1)\oplus H_2^+(X_2)$ under
the map $H_2(X_1)\oplus H_2(X_2)\to H_2(X_{12})$.
The map $H_2^+(X_{12}) \to \text{ker}(f_{12})$ is a restriction of the
Mayer-Vietoris boundary map $H_2(X_{12})\to H_1(Y_2)$.
There is a concrete interpretation of (\ref{eq:exseq3}).
Upon splitting the sequence it says we can write
\[
H_2^+(X_{12})=H_2^+(X_1)\oplus H_2^+(X_2)\oplus \text{ker}(f_{12}).
\]
To interpret the summand $\text{ker}(f_{12})$, we write down a section $s$ for the map
$H_2^+(X_{12}) \to \text{ker}(f_{12})$. We define
$s:\text{ker}(f_{12})\to H_2^+(X_{12})$ on a basis of 1-cycle classes $[\gamma]$
in $\text{ker}(f_{12})\subset H_1(Y_2)$ as follows.
For each such 1-cycle $\gamma$
in $Y_2$, choose a 2-cycle $\Sigma$ in $Y_2$ such that $\#(\gamma\cap\Sigma)=1$, and extend
$\gamma$ to a 2-cycle $\Gamma$ in $X_{12}$. Then $s[\gamma] = [\Gamma]+[\Sigma]$.
Choosing splittings of the above three exact sequences, summing, cancelling a copy of $\text{ker}(f_{12})$ on both sides, and then moving summands around
yields an identification
\begin{equation}
\mathcal{L}(X_{12}) \oplus \text{im}(f_{12})^{\oplus 2} =
\mathcal{L}(X_1) \oplus \mathcal{L}(X_2). \label{eq:iso}
\end{equation}
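As a consistency check, the dimensions in (\ref{eq:iso}) match. The three exact sequences give
\[
b_1(X_{12}) = b_1(X_1)+b_1(X_2)-d_{12}, \qquad
b_2^+(X_{12}) = b_2^+(X_1)+b_2^+(X_2)+b_1(Y_2)-d_{12},
\]
where $d_{12}=\dim\left[\text{im}(f_{12})\right]$, so both sides of (\ref{eq:iso}) have dimension
$b_1(Y_1)+b_1(Y_2)+b_1(X_1)+b_1(X_2)+b_2^+(X_1)+b_2^+(X_2)$.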
Thus we can orient $\mathcal{L}(X_{12})$ by using given orientations of $\mathcal{L}(X_{1})$
and $\mathcal{L}(X_{2})$ and equipping the two copies of $\text{im}(f_{12})$ with the
same orientation. We will give an explicit rule for doing this, designed so as to
be associative. We choose splittings of the above exact sequences,
in their respective order:
\begin{equation}
F_{12}:\text{im}(f_{12})\oplus H_1(X_{12})\xrightarrow{\sim}H_1(X_1)\oplus H_1(X_2),\label{eq:split1}
\end{equation}
\begin{equation}
G_{12}:\text{ker}(f_{12})\oplus\text{im}(f_{12})\xrightarrow{\sim} H_1(Y_2),\label{eq:split2}
\end{equation}
\begin{equation}
H_{12}:H_2^+(X_1)\oplus H_2^+(X_2)\oplus\text{ker}(f_{12})\xrightarrow{\sim} H_2^+(X_{12}). \label{eq:split3}
\end{equation}
The space of such splittings is contractible, so these particular choices
do not matter for the following definition.
\begin{definition}\label{def:homcom}
For $i=1,2$ let $X_i:Y_i\to Y_{i+1}$
be two connected cobordisms between connected, non-empty
3-manifolds. Write $X_{12}=X_2\circ X_1$.
Suppose $\mu_i$ is a homology orientation of $X_i$, i.e. an orientation of $\mathcal{L}(X_i)$,
for $i=1,2$. Write $\mu_i = \beta_i\wedge\alpha_i\wedge\gamma_i$ where $\alpha_i$ is
an orientation for $H_1(Y_i)$, $\beta_i$ for $H_1(X_i)$, and $\gamma_i$ for $H_2^+(X_i)$.
Choose any orientation $\delta_{12}$ of $\text{im}(f_{12})$.
Choose splittings of the exact sequences (\ref{eq:exseq1})-(\ref{eq:exseq3}) written
as in (\ref{eq:split1})-(\ref{eq:split3}).
Equip $H_1(X_{12})$ with an orientation
$\beta_{12}$ given by the condition
\[
F_{12}(\delta_{12}\wedge\beta_{12}) = \beta_1\wedge\beta_2.
\]
Similarly, equip $\text{ker}(f_{12})$ with an orientation $\zeta_{12}$ which satisfies
\[
G_{12}(\zeta_{12} \wedge \delta_{12}) = \alpha_2.
\]
Then define the composition of $\mu_1$ with $\mu_2$, which
is an orientation of $\mathcal{L}(X_{12})$, by
\[
\mu_{2}\circ\mu_{1} = (-1)^{s}\beta_{12}\wedge\alpha_1\wedge H_{12}(\gamma_1\wedge\gamma_2\wedge\zeta_{12}),
\]
\[
s = \frac{1}{2}\left(d_{12}^2-d_{12}\right) + b_1(X_1)b_1(Y_2) + b_1(X_1)b_2^+(X_2)+ b_1(Y_2)b_2^+(X_2).
\]
Here $d_{12}=\dim\left[\text{im}(f_{12})\right]$.
\end{definition}
\begin{prop}
This composition rule for homology orientations is associative.
\end{prop}
\begin{proof}
We first rephrase the problem in terms of linear algebra.
For $i=1,2$ consider quadruples $\mathscr{A}_i=(A_i,B_i,C_i,\mu_i)$ where $A_i,B_i,C_i$
are vector spaces and $\mu_i$ is an orientation of $A_i\oplus B_i\oplus C_i$.
In our application we have $A_i=H_1(Y_i)$, $B_i=H_1(X_i)$, and $C_i=H_2^+(X_i)$.
Given a linear map
\[
f_{12}:A_2\to B_1\oplus B_2,
\]
we can compose $\mathscr{A}_1$ and $\mathscr{A}_2$ along $f_{12}$ to form
\[
\mathscr{A}_{2}\circ_{f_{12}}\mathscr{A}_1 =(A_1,\text{coker}(f_{12}),C_1\oplus C_2\oplus \text{ker}(f_{12}),\mu_{2}\circ\mu_1).
\]
The orientation $\mu_{12}=\mu_{2}\circ\mu_{1}$ is adapted from Definition \ref{def:homcom} as follows. Write
$\mu_i = \beta_i\wedge \alpha_i\wedge\gamma_i$ where $\alpha_i,\beta_i,\gamma_i$
are respective orientations of $A_i,B_i,C_i$. Choose an orientation $\delta_{12}$
of $\text{im}(f_{12})$. Choose isomorphisms
\begin{equation}
F_{12}:\text{im}(f_{12})\oplus \text{coker}(f_{12})\xrightarrow{\sim} B_1\oplus B_2,\label{eq:iso1}
\end{equation}
\begin{equation}
G_{12}:\text{ker}(f_{12})\oplus \text{im}(f_{12})\xrightarrow{\sim} A_2\label{eq:iso2}
\end{equation}
that are splittings of the naturally associated exact sequences.
Orient $\text{coker}(f_{12})$ by $\beta_{12}$ and
$\text{ker}(f_{12})$ by $\zeta_{12}$ using
the conditions
\begin{equation*}
F_{12}(\delta_{12}\wedge\beta_{12})=\beta_1\wedge\beta_2, \quad G_{12}(\zeta_{12}\wedge\delta_{12})=\alpha_2.
\end{equation*}
Then the composition $\mu_{12}$ is given by
\[
\mu_{12} = (-1)^{s_{12}}\beta_{12}\wedge\alpha_1\wedge\gamma_1\wedge\gamma_2\wedge\zeta_{12},
\]
\[
s_{12}= b_1a_2+b_1c_2+a_2c_2 + (d_{12}^2-d_{12})/2,
\]
where $a_i=\dim A_i$, $b_i=\dim B_i$, $c_i =\dim C_i$, and $d_{12} = \dim \left[\text{im}(f_{12})\right]$.
Now suppose we have a third quadruple $\mathscr{A}_3=(A_3,B_3,C_3,\mu_3)$ and a linear map $f_{23}:A_3\to B_2\oplus B_3$. Consider
\[
f=f_{12}+f_{23}:A_2\oplus A_3\to B_1\oplus B_2\oplus B_3.
\]
The map $f$ induces further maps
\[
f_{1,23}:A_2\to B_1\oplus\text{coker}(f_{23}), \quad f_{12,3}:A_3\to \text{coker}(f_{12})\oplus B_3.
\]
We write $F_{23}, G_{23}$ for the isomorphisms associated to $f_{23}$ as in (\ref{eq:iso1}), (\ref{eq:iso2});
$F_{12,3},G_{12,3}$ associated to $f_{12,3}$; and $F_{1,23}, G_{1,23}$ to $f_{1,23}$.
We have identifications
\begin{equation}
\text{coker}(f_{1,23})=\text{coker}(f) = \text{coker}(f_{12,3}),\label{eq:cokerid}
\end{equation}
\begin{equation}
\text{ker}(f_{23})\oplus \text{ker}(f_{1,23}) = \text{ker}(f) =\text{ker}(f_{12})\oplus\text{ker}(f_{12,3}).\label{eq:kerid}
\end{equation}
The cokernel identifications are natural. The kernel identifications depend on some choices.
For instance, $\text{ker}(f_{12})\oplus \text{ker}(f_{12,3}) = \text{ker}(f)$ is established as follows.
Clearly $\text{ker}(f_{12})\subset \text{ker}(f)$.
Now suppose $a\in\text{ker}(f_{12,3})\subset A_3$. Then $\pi_{12}(f(a))\in\text{im}(f_{12})$ where
$\pi_{12}$ projects onto $B_1\oplus B_2$. Thus $\pi_{12}(f(a))=f_{12}(b)$ for some $b\in A_2$. Let $\sigma_{12}:\text{im}(f_{12})\to A_2$ be such that $f_{12}\sigma_{12}=\text{id}_{\text{im}(f_{12})}$.
Then we may take $b=\sigma_{12}(\pi_{12}(f(a)))$, and
the assignment $a\mapsto(-b,a)$ injects $\text{ker}(f_{12,3})$ into $\text{ker}(f)$.
In this way we obtain a map from $\text{ker}(f_{12})\oplus\text{ker}(f_{12,3})$ to
$\text{ker}(f)$ which is easily seen to be an isomorphism.
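Concretely, that $(-b,a)$ lies in $\text{ker}(f)$ can be checked directly:
\[
f(-b,a) = f_{23}(a) - f_{12}(b) = f(a) - \pi_{12}(f(a)) = \pi_3(f(a)) = 0,
\]
where $\pi_3$ projects onto $B_3$, and the last equality holds because $f_{12,3}(a)=0$ forces the $B_3$-component of $f(a)$ to vanish.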
With these identifications, the associativity of our rule in Definition \ref{def:homcom} is nearly equivalent to
\begin{equation}
\mathscr{A}_3\circ_{f_{12,3}}(\mathscr{A}_2\circ_{f_{12}}\mathscr{A}_1) = (\mathscr{A}_3\circ_{f_{23}}\mathscr{A}_2)\circ_{f_{1,23}}\mathscr{A}_1.\label{eq:linalgass}
\end{equation}
We have only left out the roles of the $H_{12}$ maps; these are not essential and
we remark on their absence at the end of the proof. We proceed to establish (\ref{eq:linalgass}).
Let us write out $\mu_{12,3} = \mu_3\circ\mu_{12}$, the orientation associated to the left side of (\ref{eq:linalgass}).
Let $\mu_3=\beta_3\wedge\alpha_3\wedge\gamma_3$ where $\alpha_3,\beta_3,\gamma_3$ are orientations
of $A_3,B_3,C_3$, respectively. Let $\delta_{12,3}$ orient $\text{im}(f_{12,3})$.
Orient $\text{coker}(f_{12,3})$ by $\beta_{12,3}$ and $\text{ker}(f_{12,3})$
by $\zeta_{12,3}$, where
\begin{equation*}
F_{12,3}(\delta_{12,3}\wedge\beta_{12,3}) = \beta_{12}\wedge \beta_3, \quad G_{12,3}(\zeta_{12,3}\wedge\delta_{12,3}) = \alpha_3.
\end{equation*}
Then we use our composition rule to obtain
\[
\mu_{12,3} = (-1)^{s_{12,3}}\beta_{12,3}\wedge\alpha_1\wedge(\gamma_1\wedge\gamma_2\wedge\zeta_{12})\wedge\gamma_3\wedge\zeta_{12,3},
\]
\[
s_{12,3} = s_{12} + b_{12}a_3 +b_{12}c_3+ a_3c_3 + (d_{12,3}^2-d_{12,3})/2.
\]
Here $d_{12,3}=\dim\left[\text{im}(f_{12,3})\right]$ and $b_{12}=\dim\left[\text{coker}(f_{12})\right]$,
so in particular
\[
b_{12} = b_1+b_2 -d_{12}.
\]
Now we write out the orientation associated to the right side
of (\ref{eq:linalgass}). We first write
\[
\mu_{23} = \mu_3\circ\mu_2 = (-1)^{s_{23}}\beta_{23}\wedge\alpha_2\wedge\gamma_2\wedge\gamma_3\wedge\zeta_{23},
\]
\[
s_{23} = b_2a_3 + b_2c_3+ a_3c_3 +(d_{23}^2-d_{23})/2,
\]
where, given an orientation $\delta_{23}$ of $\text{im}(f_{23})$, we have imposed
\begin{equation*}
F_{23}(\delta_{23}\wedge \beta_{23}) = \beta_{2}\wedge\beta_3, \quad G_{23}(\zeta_{23}\wedge\delta_{23}) = \alpha_3.
\end{equation*}
Now we can also write
\[
\mu_{1,23} = (-1)^{s_{1,23}}\beta_{1,23}\wedge\alpha_1\wedge\gamma_1\wedge(\gamma_2\wedge\gamma_3\wedge\zeta_{23})\wedge\zeta_{1,23},
\]
\[
s_{1,23} = s_{23} + b_1a_2+b_1c_{23}+a_2c_{23} + (d_{1,23}^2-d_{1,23})/2,
\]
where $c_{23}=\dim\left[C_2\oplus C_3 \oplus \text{ker}(f_{23})\right]$, so that
\[
c_{23} = c_2+c_3+a_3-d_{23},
\]
and, given an orientation $\delta_{1,23}$ of $\text{im}(f_{1,23})$, we have the conditions
\begin{equation*}
F_{1,23}(\delta_{1,23}\wedge\beta_{1,23}) = \beta_1\wedge\beta_{23}, \quad G_{1,23}(\zeta_{1,23}\wedge\delta_{1,23})=\alpha_2.
\end{equation*}
We will now show that $\mu_{12,3} = \mu_{1,23}$. We choose identifications
\[
\text{im}(f_{12})\oplus\text{im}(f_{12,3})=\text{im}(f)=\text{im}(f_{23})\oplus\text{im}(f_{1,23}).
\]
These depend on $F_{12}$ and $F_{23}$. For instance,
let $\tau_{12}:\text{coker}(f_{12})\to B_1\oplus B_2$ be the map
extracted from $F_{12}$ (and conversely it may define $F_{12}$).
Then $\text{im}(f_{12,3})$ maps into $\text{im}(f)$ by
$a\mapsto (\tau_{12}(\pi(a)),\pi_3(a))$ where $\pi$ projects onto
$\text{coker}(f_{12})$ and $\pi_3$ onto $B_3$. Since $\text{im}(f_{12})$ is
naturally a subset of $\text{im}(f)$, we then obtain a map from $\text{im}(f_{12})\oplus\text{im}(f_{12,3})$ into $\text{im}(f)$ which yields the above identification.
We can thus orient $\text{im}(f)$ by $\delta_{12}\wedge\delta_{12,3}$ or
by $\delta_{23}\wedge\delta_{1,23}$. It suffices to show
\begin{equation}
\delta_{12}\wedge \delta_{12,3}\wedge\mu_{12,3}\wedge \delta_{12}\wedge \delta_{12,3} = \delta_{23}\wedge \delta_{1,23}\wedge\mu_{1,23}\wedge \delta_{23}\wedge \delta_{1,23}\label{eq:suffices}
\end{equation}
as orientations of $\text{im}(f)\oplus V\oplus\text{im}(f)$,
where $V$ is the total space of either side of (\ref{eq:linalgass})
for which $\mu_{1,23}$ and $\mu_{12,3}$ are orientations. We compute the left side of (\ref{eq:suffices}):
\begin{align*}
(-1)&^{s_{12,3}+d_{12}d_{12,3}}\delta_{12}\wedge \delta_{12,3} \wedge \beta_{12,3}\wedge\alpha_1\wedge\gamma_1\wedge\gamma_2\wedge\zeta_{12}\wedge\gamma_3\wedge\zeta_{12,3}\wedge\delta_{12,3}\wedge \delta_{12} \\
&= (-1)^{s_{12,3}+d_{12}d_{12,3}+d_{12}(a_{3}+c_3)+a_2c_3} \left[( \text{id}_{\text{im}(f_{12})}\oplus F_{12,3}^{-1})(F_{12}^{-1}\oplus\text{id}_{B_3})\right](\beta_1\wedge\beta_2\wedge\beta_3)\\
& \hfill\hfill\qquad \wedge\alpha_1\wedge\gamma_1\wedge\gamma_2\wedge\gamma_3\wedge\left[G_{12}^{-1}\oplus G_{12,3}^{-1}\right](\alpha_2\wedge\alpha_3).
\end{align*}
Now, choose splitting isomorphisms
\[
F:\text{im}(f)\oplus\text{coker}(f)\xrightarrow{\sim} B_1\oplus B_2\oplus B_3,
\]
\[
G:\text{ker}(f)\oplus \text{im}(f)\xrightarrow{\sim} A_2\oplus A_3
\]
for the naturally associated short exact sequences. We claim we have
\begin{equation}
\left[(\text{id}_{\text{im}(f_{12})}\oplus F_{12,3}^{-1})(F_{12}^{-1}\oplus\text{id}_{B_3})\right](\beta_1\wedge\beta_2\wedge\beta_3) = F^{-1}(\beta_1\wedge\beta_2\wedge\beta_3),\label{eq:F}
\end{equation}
\begin{equation}
\left[G_{12}^{-1}\oplus G_{12,3}^{-1}\right](\alpha_2\wedge\alpha_3) = G^{-1}(\alpha_2\wedge\alpha_3).\label{eq:G}
\end{equation}
We consider (\ref{eq:F}). To abstract the underlying problem, consider
a linear map $\phi:V\to W$ and distinguished subspaces $V'\subset V$ and $W'\subset W$ such
that $\phi(V')\subset W'$. In other words, we have a relative linear map $\phi:(V,V')\to (W,W')$. Choose an isomorphism
\[
\Phi:\text{im}(\phi)\oplus\text{coker}(\phi)\xrightarrow{\sim} W
\]
associated to the natural short exact sequence. Similarly, choose
\[
\Phi':\text{im}(\phi')\oplus\text{coker}(\phi')\xrightarrow{\sim} W'
\]
where $\phi':V'\to W'$ is the restriction of $\phi$. Also, writing $\phi'':V/V'\to \text{coker}(\phi')\oplus W/W'$ for the induced map, choose
\[
\Phi'':\text{im}(\phi'')\oplus\text{coker}(\phi'')\xrightarrow{\sim}\text{coker}(\phi')\oplus W/W'.
\]
We can identify $\text{coker}(\phi'')=\text{coker}(\phi)$ and $\text{im}(\phi)=\text{im}(\phi')\oplus\text{im}(\phi'')$ just as we have done in our setting above. We also choose an identification $W/W'\oplus W' = W$. Then (\ref{eq:F}) is equivalent to
\[
\text{det}\Big[\Phi^{-1}(\Phi'\oplus\text{id}_{W/W'} )(\Phi''\oplus\text{id}_{\text{im}(\phi')})\Big] > 0,
\]
by setting $\phi=f$, $V= A_2\oplus A_3$, $V'=A_2$, $W=B_1\oplus B_2\oplus B_3$, and $W'=B_1\oplus B_2$. In fact, we can choose the data so that, under these identifications,
\begin{equation}
\Phi = (\Phi'\oplus\text{id}_{W/W'} )(\Phi''\oplus\text{id}_{\text{im}(\phi')}).\label{eq:thedecomp}
\end{equation}
This can be seen as follows. We may equip $V$ and $W$ with inner products so that we may freely take complements.
In the following, we use the notation $V_1^{\perp}\subset V_2$ to mean that the complement $V_1^\perp$ (with $V_2$ possibly inside a larger space) was taken inside $V_2$.
We may then identify $\text{coker}(\phi) = \text{im}(\phi)^{\perp} \subset W$,
$\text{coker}(\phi')=\text{im}(\phi')^{\perp} \subset W'$ and
$\text{im}(\phi'')= \text{im}(\phi')^{\perp} \subset \text{im}(\phi)$. We also identify $W/W'$ with $W'^\perp \subset W$. We use these identifications
to define $\Phi,\Phi',\Phi''$ in the natural way.
Then $\Phi$ is just the identification $\text{im}(\phi)\oplus \text{im}(\phi)^\perp = W$.
On the other hand, we view $\Phi''\oplus\text{id}_{\text{im}(\phi')}$ as a map
\[
\text{im}(\phi)\oplus \text{im}(\phi)^\perp \to \text{im}(\phi')\oplus \text{im}(\phi')^\perp\oplus W'^\perp
\]
where $\text{im}(\phi')^\perp\subset W'$. This last expression uses the identification
$\text{im}(\phi) = \text{im}(\phi') \oplus \text{im}(\phi')^\perp$
where $\text{im}(\phi')^\perp\subset \text{im}(\phi)$, followed by
the identification $\text{im}(\phi')^\perp \oplus \text{im}(\phi)^\perp = \text{im}(\phi')^{\perp}\oplus W'^\perp$, where on the left $\text{im}(\phi')^\perp \subset \text{im}(\phi)$ but on the right we have the larger complement $\text{im}(\phi')^\perp \subset W'$. These are just two different decompositions of $\text{im}(\phi')^\perp \subset W$. Then, $\Phi'\oplus\text{id}_{W/W'} $, viewed as a map
\[
\text{im}(\phi')\oplus \text{im}(\phi')^\perp\oplus W'^\perp\to W,
\]
where again $\text{im}(\phi')^\perp\subset W'$, first uses the identification
$\text{im}(\phi')\oplus \text{im}(\phi')^\perp = W'$, and then the identification
$W'\oplus W'^\perp=W$. From this perspective, from which everything happens inside $W$
and uses its various orthogonal decompositions, (\ref{eq:thedecomp}) is clear,
and thus (\ref{eq:F}) is established; (\ref{eq:G}) is similar. We return to establishing (\ref{eq:suffices}).
We now know the left hand side is
\begin{align*}
& (-1)^{s_{12,3}+d_{12}d_{12,3}+d_{12}(a_{3}+c_3)+a_2c_3} F^{-1}(\beta_1\wedge\beta_2\wedge\beta_3)\\
& \qquad\qquad \wedge\alpha_1\wedge\gamma_1\wedge\gamma_2\wedge\gamma_3\wedge G^{-1}(\alpha_2\wedge\alpha_3).
\end{align*}
We can also compute the right side of (\ref{eq:suffices}):
\begin{align*}
(-1)&^{s_{1,23}+d_{23}d_{1,23}}\delta_{23}\wedge \delta_{1,23} \wedge \beta_{1,23}\wedge\alpha_1\wedge\gamma_1\wedge\gamma_2\wedge\gamma_3\wedge\zeta_{23}\wedge\zeta_{1,23}\wedge\delta_{1,23}\wedge \delta_{23} \\
&= (-1)^{s_{1,23}+d_{23}d_{1,23}+d_{23}(b_1+a_2)+a_2a_3}F^{-1}(\beta_1\wedge\beta_2\wedge\beta_3)\\
& \qquad \qquad\wedge\alpha_1\wedge\gamma_1\wedge\gamma_2\wedge\gamma_3\wedge G^{-1}(\alpha_2\wedge\alpha_3).
\end{align*}
We have used the necessary analogues of (\ref{eq:F}) and (\ref{eq:G}).
Thus (\ref{eq:suffices}) holds if the quantity
\begin{eqnarray*}
s_{1,23}+d_{23}(d_{1,23}+b_1+a_2)+ a_2(a_3+c_3)+s_{12,3}+d_{12}(d_{12,3}+a_{3}+c_3)
\end{eqnarray*}
is even. Using $d_{23}+d_{1,23}=d_{12}+d_{12,3}$, this is easily verified. This
establishes (\ref{eq:linalgass}).
Finally, we remark on the absence of the $H_{12}$ maps in our setup. In our application, we can choose the relevant maps $H_{12}$ and $H_{12,3}$ so that
\[
(H_{12,3})(H_{12}\oplus \text{id}_{H_2^+(X_3)\oplus \text{ker}(f_{12,3})}) = H
\]
where we are using some chosen map
\[
H: H_{2}^+(X_1)\oplus H_2^+(X_2)\oplus H_2^+(X_3)\oplus\text{ker}(f)\xrightarrow{\sim} H_{2}^+(X_{123})
\]
associated to the natural short exact sequence, and the identifications (\ref{eq:kerid}).
This is established just as was (\ref{eq:F}).
$H_{23}$ and $H_{1,23}$ can be chosen similarly, and this compatibility allows the above argument
to carry through.
\end{proof}
\vspace{10px}
Now we define distinguished identity homology orientations. If $X=Y\times [0,1]$,
then $\mathcal{L}(X)=H_1(Y)\oplus H_1(X)$. Let $\alpha$ be
any orientation of $H_1(Y)$, and
choose an orientation $\beta$ of $H_1(X)$ such that $\alpha=\beta$ under the natural
identification of $H_1(X)$ with $H_1(Y)$. Then define
\[
\mu_{Y}^{\text{id}} := (-1)^{\frac{1}{2}(b_1(Y)^2+b_1(Y))}\beta\wedge\alpha
\]
to be the distinguished identity homology orientation of $Y\times[0,1]$.
\begin{prop}
Whenever $\mu$ is a homology orientation of a cobordism $X$ with incoming boundary $Y$, we
have $\mu\circ\mu_{Y}^\text{id}=\mu$. Similarly, if $X$ has outgoing boundary $Y$, then
$\mu_{Y}^\text{id}\circ\mu=\mu$.
\end{prop}
\begin{proof}
Suppose $X$ has incoming boundary $Y$, i.e. $X:Y\to Y'$. We let $X_1=Y\times [0,1]$
and $X_2=X$ and use the notation of Definition \ref{def:homcom}. We have $\text{im}(f_{12})=H_1(Y)$
and thus $d_{12}=b_1(Y)$. We identify $X_{12}$ with $X_2=X$. Choose the section of the exact sequence (\ref{eq:exseq1}), which is
a map $H_1(X)\to H_1(Y\times [0,1])\oplus H_1(X)$, to be of the form $y\mapsto (0,y)$. The induced isomorphism
$F_{12}:H_1(Y)\oplus H_1(X)\to H_1(Y\times [0,1])\oplus H_1(X)$ is of the form $(x,y)\mapsto (x,y-\pi(x))$
where $\pi:H_1(Y)\to H_1(X)$ is induced by inclusion. Let $\mu = \mu_2 = \beta_2\wedge\alpha_2\wedge\gamma_2$
where $\beta_2,\alpha_2,\gamma_2$ are respective orientations of $H_1(X),H_1(Y),H_2^+(X)$.
Write $\mu_1 = \mu_{Y}^{\text{id}} = (-1)^{\frac{1}{2}(b_1(Y)^2+b_1(Y))}\beta_1\wedge\alpha_1$ as above,
where $\alpha_1=\alpha$ and $\beta_1=\beta$.
Choose $\delta_{12} = \alpha_1$. Then
\[
F_{12}^{-1}(\beta_1\wedge\beta_2) = \delta_{12}\wedge\beta_{12}
\]
where $\beta_{12}=\beta_2$. We can choose $\alpha_2=\alpha_1$ so that the condition $\zeta_{12}\wedge\delta_{12} = \alpha_2$ ($G_{12}$ implicit) forces $\zeta_{12}$ to be the canonical $+1$ orientation
of the $0$-vector space. Similarly, $\gamma_1$ is taken to be $+1$, and the expression $H_{12}(\gamma_1\wedge\gamma_2\wedge \zeta_{12})$ may be regarded as equal to $\gamma_2$. The sign $s$ in Definition \ref{def:homcom} is
equal to $\frac{1}{2}(b_1(Y)^2+b_1(Y))$, and so cancels with the sign in $\mu_Y^\text{id}$.
All together, Definition \ref{def:homcom} yields
\[
\mu\circ\mu_{Y}^\text{id}=\beta_2\wedge\alpha_2\wedge\gamma_2 = \mu.
\]
Next, suppose $X$ has outgoing boundary $Y$, i.e. $X:Y'\to Y$.
Now we write $X=X_1=X_{12}$ and $Y\times[0,1]=X_2$ and, correspondingly, we swap the indices
for the above orientations and write
$\mu = \mu_1 = \beta_1\wedge\alpha_1\wedge\gamma_1$
and $\mu_Y^\text{id} = (-1)^{\frac{1}{2}(b_1(Y)^2+b_1(Y))}\beta_2\wedge\alpha_2 = \mu_2$.
Choose the section of the exact sequence (\ref{eq:exseq1}), which is
a map $H_1(X)\to H_1(X)\oplus H_1(Y\times [0,1])$, to be of the form $y\mapsto (y,0)$.
Now the induced map $F_{12}:H_1(Y)\oplus H_1(X)\to H_1(X)\oplus H_1(Y\times [0,1])$ is of the form $(x,y)\mapsto (y+\pi(x),-x)$. Choose $\delta_{12}=\alpha_2=\beta_2$ and so on, just as above. Then
\[
F_{12}^{-1}(\beta_1\wedge\beta_2)=\delta_{12}\wedge\beta_{12}
\]
where $\beta_{12}=(-1)^{b_1(Y)b_1(X)+b_1(Y)}\beta_1=(-1)^t \beta_1$. The exponent $s$ in
Definition \ref{def:homcom} is given by
\[
\frac{1}{2}\left(b_1(Y)^2-b_1(Y)\right) + b_1(Y)b_1(X) \mod 2.
\]
We see that $s+t\equiv \frac{1}{2}(b_1(Y)^2+b_1(Y)) \mod 2$. This cancels with the sign
put in front of $\mu_Y^\text{id}$, and we obtain from Definition \ref{def:homcom}
the identity $\mu_{Y}^\text{id}\circ\mu=\mu$, just as before.
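Explicitly, writing $b=b_1(Y)$ and $c=b_1(X)$, the cancellation is the parity computation
\begin{align*}
s + t &\equiv \tfrac{1}{2}(b^2 - b) + bc + \left(bc + b\right) \\
&= \tfrac{1}{2}(b^2 - b) + 2bc + b \\
&\equiv \tfrac{1}{2}(b^2 + b) \pmod{2},
\end{align*}
which matches the exponent in the definition of $\mu_Y^{\text{id}}$.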
\end{proof}
\vspace{10px}
In the remainder of this section, we describe how our composition rule can be described in the setting of Fredholm determinant line bundles, as in \cite[\S 20.2]{kmm}, the purpose of which is to show that our rule is compatible with a construction of instanton homology. As such, the following details are not needed to understand the rest of the paper.
In the Fredholm setting, a homology orientation of $X$ is an orientation of
$\text{det}(D)$, where $D$ is the operator $-d^\ast \oplus d^+$ acting on suitably weighted Sobolev spaces over $X$ with cylindrical ends attached. Recall that
\[
\text{det}(D) = {\textstyle{\bigwedge}}^{\text{max}}(\text{ker}(D))\otimes {\textstyle{\bigwedge}}^{\text{max}}(\text{coker}(D)^\ast).
\]
The Sobolev weights are chosen such that we have natural identifications
\[
\text{ker}(D) = H^1(X), \quad \text{coker}(D) = H^1(Y)\oplus H_+^2(X),
\]
where $Y$ is the incoming end of $X$, cf. \cite[Prop. 3.15]{d}. Note that an orientation of a vector space induces, in a natural way, an orientation of its dual space. Since we are working with real coefficients, homology and cohomology groups are dual to one another, so an orientation of $\text{det}(D)$ is the same as an orientation of $\mathcal{L}(X)$.
Let us now suppose we are in the situation of Definition \ref{def:homcom}, so that $\mu_i$ is an orientation of $\mathcal{L}(X_i)$, or equivalently $\text{det}(D_i)$, for $i=1,2$. We again write $\mu_i = \beta_i\wedge\alpha_i\wedge\gamma_i$ where now we view $\beta_i$ as orienting $\text{ker}(D_i)$ and $\alpha_i\wedge\gamma_i$ as orienting $\text{coker}(D_i)$ (or its dual). We will denote the composition of $\mu_1$ and $\mu_2$ as given in this setting by
\[
\mu_2 \;\overline{\circ}\; \mu_1
\]
to distinguish it from our previous rule. The composition $\mu_2 \;\overline{\circ}\; \mu_1$ goes in two steps. First, we use the $\mu_i$ to orient $\text{det}(D_1\oplus D_2)$, which is identified with
\[
{\textstyle{\bigwedge}}^\text{max}(\text{ker}(D_1)\oplus \text{ker}(D_2)) \otimes {\textstyle{\bigwedge}}^\text{max}(\text{coker}(D_1)\oplus \text{coker}(D_2))^\ast.
\]
We use the following general rule for doing this: if $K_i\wedge C_i$ is an orientation for $\text{det}(D_i)$ where $K_i$ orients $\text{ker}(D_i)$ and $C_i$ orients $\text{coker}(D_i)$ (or its dual), then we orient $\text{det}(D_1\oplus D_2)$ by
\[
(-1)^{\dim\text{coker}(D_2)\text{index}(D_1)}(K_2\wedge K_1)\wedge (C_1\wedge C_2).
\]
This is a slight modification of the rule in \cite[Lemma 20.2.1]{kmm} but is easily seen to be associative; the difference between the two rules is the sign $(-1)^s$ where
\[
s= \dim\text{coker}(D_1)\dim\text{ker}(D_2) + \text{index}(D_2)\dim\text{ker}(D_1).
\]
Applying this procedure to $\mu_1$ and $\mu_2$, we obtain the orientation
\[
\mu' := (-1)^{(a_2 + c_2)(a_1+b_1+c_1)}(\beta_2\wedge\beta_1)\wedge(\alpha_1\wedge\gamma_1\wedge \alpha_2\wedge\gamma_2)
\]
of $\text{det}(D_1\oplus D_2)$, where $a_i=\dim H^1(Y_i)$, $b_i=\dim H^1(X_i)$ and $c_i=\dim H_+^2(X_i)$.
The second step in describing the composition rule in this setting involves relating $\text{det}(D_1\oplus D_2)$ to $\text{det}(D_{12})$ by means of a (Fredholm) homotopy from the operator $D_1\oplus D_2$ to $D_{12}$, where $D_{12}$ is the operator associated to $X_{12}$. We will use the notation of \cite[\S 20.2]{kmm}. Let $P_s$ for $s\in [0,1]$ be such a homotopy, so that $P_0=D_1\oplus D_2$ and $P_1=D_{12}$. To be precise, we should understand these two operators as having the same domain and codomain; this may be achieved using the finite cylinder setup as in \cite{kmm}. Denoting our codomain by $B$, choose $J\subset B$ so that $\text{im}(P_s) + J = B$ for all $s$. We have for each $s$ an exact sequence
\begin{equation}
0 \to \text{ker}(P_s) \xrightarrow[]{j} P_s^{-1}J \xrightarrow[]{k} J \xrightarrow[]{l} \text{coker}(P_s) \to 0.\label{eq:detexseq}
\end{equation}
We use the following general rule for orienting $\text{det}(P_s)$ given an orientation $\mu''$ of the line ${\textstyle{\bigwedge}}^\text{max}P_s^{-1}J \otimes {\textstyle{\bigwedge}}^\text{max} J^\ast$ using the exact sequence (\ref{eq:detexseq}): write
\begin{equation}
\mu'' = (K \wedge D) \wedge (k(D) \wedge C)\label{eq:mupp}
\end{equation}
where $K$ is an orientation of $\text{im}(j)$, $D$ of $\text{im}(j)^\perp$, and $C$ of $k(\text{im}(j)^\perp)^\perp$; then orient $\text{det}(P_s)$ by
\begin{equation}
(-1)^{\phi(d)}j^{-1}(K)\wedge l(C)\label{eq:genrule}
\end{equation}
where $\phi(x) := (x^2-x)/2$ and $d:=\dim(\text{im}(j)^\perp)$. In our situation, we choose $J$ to be a complement of $\text{im}(P_0) = \text{im}(D_1\oplus D_2)$, and we make the identification
\[
J = H^1(Y_1)\oplus H_+^2(X_1) \oplus H^1(Y_2) \oplus H^2_+(X_2).
\]
We choose the homotopy such that $P_s^{-1}J = \text{ker}(P_0)=\text{ker}(D_1\oplus D_2)$ for all $s$, so that
\[
P_s^{-1}J = H^1(X_1)\oplus H^1(X_2).
\]
In particular, we have an identification of ${\textstyle{\bigwedge}}^\text{max}P_1^{-1}J \otimes {\textstyle{\bigwedge}}^\text{max} J^\ast$ with $\text{det}(D_1\oplus D_2)$, which is oriented by $\mu'$. Noting that the maps in (\ref{eq:detexseq}) for $s=1$ come from the Mayer-Vietoris maps as in Definition \ref{def:homcom}, we can obtain $\mu''$ from $\mu'$, written as in (\ref{eq:mupp}):
\[
\mu'' = (-1)^t(\beta_{12}\wedge \delta_{12}) \wedge (\delta_{12}\wedge \gamma_{12}).
\]
In this expression, and in all that follow, the maps $F_{12}$, $G_{12}$ and $H_{12}$ from Definition \ref{def:homcom}, as well as the maps in (\ref{eq:detexseq}), will be implicitly understood; e.g. $F_{12}(\delta_{12}\wedge\beta_{12})$ is the same as $\delta_{12}\wedge\beta_{12}$. The orientation $\beta_{12}$ plays the role of $K$ above, $\gamma_{12}$ that of $C$, and $\delta_{12}$ that of $D$. The sign $(-1)^t$ is given by
\[
t = (a_2 + c_2)(a_1+b_1+c_1) + d_{12}(b_1+b_2 + d_{12}) + b_1b_2,
\]
where $d_{12}$ is as in Definition \ref{def:homcom}. The first term in $t$ is from $\mu'$ and the rest are added to ensure that $\beta_{12}$ is defined by the condition $\delta_{12}\wedge \beta_{12} = \beta_1\wedge \beta_2$, to match Definition \ref{def:homcom}. The orientation $\gamma_{12}$ is defined by the condition $\delta_{12} \wedge\gamma_{12} = \alpha_1 \wedge \gamma_1\wedge\alpha_2 \wedge \gamma_2$. The general rule that takes $\mu''$ to (\ref{eq:genrule}), applied to our $\mu''$, tells us the final orientation of $\text{det}(D_{12})$:
\[
\mu_2 \;\overline{\circ}\; \mu_1 = (-1)^{\phi(d_{12})+t}\beta_{12}\wedge\gamma_{12}.
\]
Now write $\alpha_2=\zeta_{12}\wedge\delta_{12}$ as in Definition \ref{def:homcom}. We compute
\[
\mu_2 \;\overline{\circ}\; \mu_1 = (-1)^{r}\beta_{12}\wedge\alpha_1\wedge\gamma_1\wedge\gamma_2\wedge\zeta_{12},
\]
\[
r = \phi(d_{12}) + t + d_{12}(a_2 + d_{12} + c_1 + a_1) + c_2(d_{12}+a_2).
\]
The sign given by $r$ does not match the sign given by $s$ in Definition \ref{def:homcom}, and so this composition rule is not the same as the one previously defined. However, there is an automorphism $\mu\mapsto \overline{\mu}$ on the class of all homology orientations that intertwines the two rules. Given a homology orientation $\mu$ of a cobordism $X$, we set
\[
\overline{\mu} = (-1)^{\phi(b_1(X)) + \phi(b_1(Y)+b_2^+(X))}\mu
\]
where $Y$ is the incoming end of $X$. Then we have
\[
\overline{(\overline{\mu_1}\;\overline{\circ}\;\overline{\mu_2})} = \mu_1\circ\mu_2.
\]
The verification is a straightforward computation that we omit. It follows that the composition rule $\mu_1\circ\mu_2$ of Definition \ref{def:homcom} is compatible with a construction of Floer homology, and it is this rule that we will use in our computations below.
\vspace{10px}
\subsection{The $E^1$-page}\label{sec:branched}
In this section we identify the $E^1$-page of (\ref{sps})
with the chain complex that computes reduced odd Khovanov homology.
We fix as before a diagram $D$ for the $m$-component link $L$ with crossings decorated by
arcs as in \S \ref{sec:oddkh}. We let $Y_v=\Sigma(D_v)$
for each $v\in\{0,1\}^m$ so that $Y_v$ is homeomorphic to $\#^k S^1\times S^2$
when $D_v$ has $k+1$ circles. The $E^1$-page and differential of
the spectral sequence we are considering are given by
\[
E^1=\bigoplus_{v\in\{0,1\}^m} I^\#(Y_v), \quad d^1=\sum (-1)^{\delta(v,w)}I^\#(X_{vw}),
\]
where the sum runs over $v<w$ with $|w-v|_1=1$ and $v,w\in\{0,1\}^m$.
In writing $d^1$,
we have chosen homology orientations $\mu_{vw}$ of the $X_{vw}$ so that $\mu_{uw}\circ\mu_{vu}=
\mu_{tw}\circ\mu_{vt}$ always holds.
We are also using that the relevant bundles $\mathbb{X}_{vw}$
are trivial. This is because each such bundle lies over a cobordism which is
a copy of $D^2\times S^2\setminus \text{int}(D^4)$ attached along a product cobordism, see (\ref{eq:easycob});
since we have arranged that the restriction of each such bundle over the boundary is trivial,
for topological reasons the bundle must be trivial.
Let $\text{C} = \bigoplus \text{C}_v$ be the reduced odd Khovanov chain group for the diagram
$D$ and $\partial'=\sum \partial_{vw}'$ its pre-differential. For each $v\in\{0,1\}^m$ we define
an isomorphism
\[
\Phi_v:\text{C}_v\to I^\#(Y_v)
\]
as the composition $\Phi_v = \phi_v\circ\rho_v$,
where $\phi_v:{\textstyle{\bigwedge}}^\ast(H_1(Y_v;\mathbb{Z}))
\to I^\#(Y_v)$ is from
\S \ref{sec:example} and $\rho_v:\text{C}_v \to{\textstyle{\bigwedge}}^\ast(H_1(Y_v;\mathbb{Z}))$ is defined by lifting arcs in $D_v$ to loops in $Y_v$, as explained in the following paragraph. For the $\phi_v$ maps, we of course fix orientations $\mu_v$ for each $H_1(Y_v;\mathbb{R})$. We write $\Phi:\text{C}\to E^1$ for the sum of the $\Phi_v$ maps.
Recall $\text{C}_v={\textstyle{\bigwedge}}^\ast(V_v)$,
and that $Y_v$ is branched over $D_v\subset S^3$. Let
$S$ be the union of disks in the plane enclosed by the circles in $D_v$.
These disks can be pushed off the plane so that they are disjoint and form a Seifert surface for the union
of circles. Let $N$ be a neighborhood of the circles, a union of solid tori. Then
$Y_v$ can be written as a gluing
\[
Y_v = Y_- \cup N \cup Y_+
\]
where $Y_{\pm}=S^3\setminus(S\cup N)$. Distinguishing one of the copies of $S^3\setminus(S\cup N)$,
say $Y_+$, allows us to lift an arc $x$ in $D_v$ to an
\textit{oriented} loop $\widetilde{x}$ in $Y_v$: the orientation is obtained by
locally lifting the orientation of $x$ to the part of $\widetilde{x}$ in $Y_+$.
Then $x \mapsto [\widetilde{x}]$ is an
isomorphism from $V_v$ to $H_1(Y_v;\mathbb{Z})$, and $\rho_v$ is taken to
be the extension of this map to exterior algebras.
We can construct the maps $\rho_v$ in this way so that they are uniform in $v$, in the
sense that there are natural ways of identifying $Y_v$ with $Y_w$ away from the surgery
(or resolution) regions, and in these regions we lift arcs in the same way.
In summary, the map
$\Phi_v$ is described as follows. Let $x=x_1\wedge\cdots\wedge x_i$ be a wedge
of arcs in $\text{C}_v$. Lift the arcs to embedded loops $\widetilde{x}_{j}$ in the branched double cover $Y_v$ as above. Choose $x_{i+1},\ldots,x_k$ and their lifts such that $\mu_v=[\widetilde{x}_1\wedge \cdots \wedge \widetilde{x}_k]$. Attach 2-handles along $\widetilde{x}_{1},\ldots,\widetilde{x}_i$, together with 3-handles and a 4-handle as in \S \ref{sec:example}, to
obtain a cobordism $X:\emptyset\to Y_v$ homology oriented by
$[\widetilde{x}_{i+1}\wedge\cdots\wedge\widetilde{x}_k]$. Then $\Phi_v(x)=[X]^\#$.
The following completes the proof of
Theorem \ref{thm:1} up to gradings, which are dealt with in the next section.
\begin{lemma}
$\Phi^{-1}d^1\Phi=\sum \varepsilon_{vw}\partial_{vw}'$ where $\varepsilon_{vw}$ is a valid edge assignment.
\end{lemma}
\begin{proof}
Let $v,w\in\{0,1\}^m$ with $v < w$ and $|w-v|_1=1$.
There are two cases to consider,
depending on whether $D_{vw}$ is a split or a merge diagram.
We retain the convention from \S \ref{sec:homor} that singular homology $H_\ast(X)$
is taken with real coefficients.
For most of the proof, we conflate the symbols $x$ and $\widetilde{x}$,
where $x$ is an arc (usually viewed as a class in $V_v$) and $\widetilde{x}$ is its lift to $Y_v$
(usually viewed as a class in $H_1(Y_v)$). That is, the maps $\rho_v$ from above are implicit.
Suppose first we are in the split case.
Let $k=b_1(X_{vw})$. Note that
$b_2^+(X_{vw})=0$, and $X_{vw}:Y_v\to Y_w$ is homeomorphic to
\begin{equation}
\left(Y_v\times [0,1]\right)\Join\left(D^2\times S^2\setminus \text{int}(D^4)\right).\label{eq:easycob}
\end{equation}
We note that we may also view $X_{vw}$ as the branched double cover of a pair of pants properly
embedded in $S^3\times [0,1]$. We have $\mathcal{L}(X_{vw})=H_1(Y_v)\oplus H_1(X_{vw})$.
We will follow the notation of Definition \ref{def:homcom},
setting $X_1=X$ and $X_2=X_{vw}$.
Choose orientations $\alpha_2$ and $\beta_2$ of $H_1(Y_v)$ and $H_1(X_{vw})$,
respectively. We can identify $H_1(Y_v)=H_1(X_{vw})$ using the map induced by inclusion,
and we choose to impose the condition $\alpha_2=\beta_2$. Define $\varepsilon_{vw}'=\pm 1$ by
\[
\mu_{vw} = \varepsilon_{vw}'\beta_2\wedge\alpha_2.
\]
Let $x=x_1\wedge\cdots\wedge x_i\in \text{C}_v$.
Recall that $\Phi_v(x)=[X]^\#$ where $X$ is obtained by attaching 2-handles to
$x_1,\ldots,x_i$ along with some 3-handles and a 4-handle. Choose $x_{i+1},\ldots,x_k$ so that
$\mu_v=[x_1\wedge\cdots\wedge x_k]$. Then $\mathcal{L}(X)=H_1(X)$ is generated by
$x_{i+1},\ldots,x_k$ and $X$ is homology
oriented by $\beta_1 := [x_{i+1}\wedge\cdots\wedge x_k]$.
We can identify $\mathcal{L}(X_{vw}\circ X)=H_1(X_{vw}\circ X)$.
Note that $\text{im}(f_{12})=H_1(Y_v)$, so $d_{12}=k$.
Choose the section in the exact sequence (\ref{eq:exseq1}),
which in this case is a map $H_1(X_{vw}\circX)\to H_1(X)\oplus H_1(X_{vw})$,
to be of the form $y \mapsto (y,0)$. The induced
isomorphism $F_{12}:H_1(Y_v)\oplus H_1(X_{vw}\circX)\to H_1(X)\oplus H_1(X_{vw})$,
written as in (\ref{eq:split1}), is given by
\[
F_{12}:\mathbb{R}\{x_1,\ldots,x_{k}\}\oplus\mathbb{R}\{x_{i+1},\ldots,x_k\}\to\mathbb{R}\{x_{i+1},\ldots,x_k\}\oplus\mathbb{R}\{x_1,\ldots,x_{k}\},
\]
\[
F_{12}(x_p,x_q)=(x_q + \pi(x_p),-x_p),
\]
where $\pi:H_1(Y_v)\to H_1(X)$ is a projection induced by inclusion.
Writing $\beta_2=\delta_{12}$, we have
\[
F_{12}^{-1}(\beta_1\wedge\beta_2) =(-1)^{k}\beta_1\wedge\beta_2=\delta_{12}\wedge\beta_{12}
\]
where $\beta_{12}=(-1)^{(k-i)k + k}\beta_1=(-1)^{ki}\beta_1$. Using Definition \ref{def:homcom}, we obtain
\[
I^\#(X_{vw})\Phi_v(x) = (-1)^{(k^2+k)/2}\varepsilon_{vw}' [X_{vw}\circ X]^\#
\]
where $X_{vw}\circ X$ is homology oriented by $\beta_1$.
The sign $(-1)^{(k^2+k)/2}$ is obtained by computing
\[
ki + \left((k^2-k)/2 + (k-i)k\right),
\]
where the term $ki$ is from $\beta_{12}$, and the expression
inside the parentheses is from Definition \ref{def:homcom}.
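In detail, modulo 2 and using $k^2\equiv k$:
\begin{align*}
ki + \tfrac{1}{2}(k^2-k) + (k-i)k &= \tfrac{1}{2}(k^2-k) + k^2 \\
&\equiv \tfrac{1}{2}(k^2-k) + k \;=\; \tfrac{1}{2}(k^2+k) \pmod{2}.
\end{align*}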
We mention that the condition $G_{12}(\zeta_{12}\wedge\delta_{12})=\alpha_2$
holds by $\alpha_{2}=\delta_{12}=\beta_2$ and setting $\zeta_{12}$ to
be the canonical $+1$ orientation of the $0$-vector space.
Note that $[X_{vw}\circ X]^\# = \Phi_w(x_{vw}\wedge x)$ if and only if $\mu_w = [x_{vw}\wedge x_1 \wedge\cdots \wedge x_k]=x_{vw}\wedge \mu_v$; otherwise they differ in sign.
We record a sign $\varepsilon''_{vw}=\pm 1$ measuring this possible discrepancy between $\mu_v$ and $\mu_w$:
\[
\mu_w = \varepsilon_{vw}'' x_{vw}\wedge\mu_v.
\]
Recalling that $d_{vw}^1=(-1)^{\delta(v,w)}I^\#(X_{vw})$ and $\partial_{vw}'(x)=x_{vw}\wedge x$, we conclude
\[
\Phi_{w}(\partial_{vw}'(x))=\varepsilon_{vw}d^1_{vw}(\Phi_v(x))
\]
where $\varepsilon_{vw}=\pm 1$ is given by
\[
\varepsilon_{vw}=(-1)^{(k^2+k)/2+\delta(v,w)}\varepsilon'_{vw}\varepsilon''_{vw}.
\]
Now suppose we are in the merge case. Again, let $k=b_1(X_{vw})$.
As before, $b_2^+(X_{vw})=0$ and
the cobordism $X_{vw}:Y_v\to Y_w$ is now homeomorphic to
\[
\left(Y_w\times [0,1]\right)\Join\left(D^2\times S^2\setminus \text{int}(D^4)\right).
\]
We identify $H_1(X_{vw})=H_1(Y_w)$, and
write $\mathcal{L}(X_{vw})=H_1(Y_v)\oplus H_1(Y_w)$.
Note the natural codimension 1 inclusion $H_1(Y_w)\subset H_1(Y_v)$.
A complement for $H_1(Y_w)$ in $H_1(Y_v)$ is generated by $x_{vw}$.
Let $\alpha_2$ be an orientation for $H_1(Y_v)$.
Define $\varepsilon_{vw}'=\pm 1$ by
\[
\mu_{vw} = \varepsilon_{vw}' \beta_2 \wedge \alpha_2, \quad \beta_2 = \alpha_2 \;\llcorner\; x_{vw}.
\]
The condition $\beta_2 = \alpha_2 \;\llcorner\; x_{vw}$ is equivalent to (and may be taken as shorthand for) $\beta_2\wedge x_{vw} = \alpha_2$. Let $x=x_1\wedge\cdots\wedge x_i\in{\textstyle{\bigwedge}}^i(V_v)$.
If $x_{vw}$ is among
$x_1,\ldots,x_i$ (or linearly dependent on them), the 4-manifold $X$ constructed by attaching
2-handles to $x_1,\ldots,x_i$ and some 3-handles and a 4-handle,
once paired with $X_{vw}$ to form $X_{vw}\circ X$, contains
a non-trivial $S^2$-bundle over $S^2$ as in \S \ref{sec:example}, so $[X_{vw}\circ X]^\#=0$.
Choose $x_{i+1},\ldots,x_{k+1}$ so that $\mu_v=[x_1\wedge\cdots \wedge x_{k+1}]$; we may assume that $x_{vw}=x_{k+1}$.
We may also set $\alpha_2 = \mu_v$, so that $\beta_2 = [x_1\wedge\cdots\wedge x_k]$.
Recall $\Phi_v(x)=[X]^\#$ where $X$ is homology oriented by
$\beta_1=[x_{i+1}\wedge\cdots\wedge x_{k+1}]$.
There is a codimension 1
inclusion $H_1(X_{vw}\circ X)\subset H_1(X)$. The vector space
$H_1(X_{vw}\circ X)$ is generated by $x_{i+1},\ldots,x_{k}$ and a complement for
$H_1(X_{vw}\circ X)$ in $H_1(X)$ is generated by $x_{vw}=x_{k+1}$.
Choose the section in the exact sequence (\ref{eq:exseq1}),
which is a map $H_1(X_{vw}\circ X)\to H_1(X)\oplus H_1(X_{vw})$,
to be of the form $y\mapsto (y,0)$.
As in the split case, $\text{im}(f_{12})=H_1(Y_v)$. We obtain an isomorphism
$F_{12}:H_1(Y_v)\oplus H_1(X_{vw}\circ X) \to H_1(X)\oplus H_1(X_{vw})$ that
takes the form
\[
F_{12}:\mathbb{R}\{x_1,\ldots,x_{k+1}\}\oplus\mathbb{R}\{x_{i+1},\ldots,x_{k}\}\to\mathbb{R}\{x_{i+1},\ldots,x_{k+1}\}\oplus\mathbb{R}\{x_1,\ldots,x_{k}\},
\]
\[
F_{12}(x_p,x_q)=(x_q + \pi_1(x_p),-\pi_2(x_p)),
\]
where $\pi_1:H_1(Y_v)\to H_1(X)$ and $\pi_2:H_1(Y_v)\to H_1(X_{vw})$ are projections
induced by inclusion maps.
\begin{figure}[t]
\includegraphics[scale=1]{face.pdf}
\caption{Local pictures for four diagrams
appearing in a type X face, starting at the diagram $D_v$ and ending at $D_w$.
The circles in each diagram are colored so as to distinguish their roles in
Figure \ref{fig:thesign}.}
\label{fig:face}
\end{figure}
In particular, $\pi_1(x_p)=x_p$ if $p\geq i+1$ and is otherwise $0$, and
$\pi_2(x_p)=x_p$ if $p\neq k+1$ and $\pi_2(x_{k+1})=0$. Recalling that
$\beta_2=\alpha_2\;\llcorner\; x_{vw}$ and choosing $\delta_{12}=\alpha_2$, we have
\[
F_{12}^{-1}(\beta_1\wedge\beta_2) = (\beta_1\;\llcorner\; x_{vw})\wedge\alpha_2 = \delta_{12}\wedge\beta_{12}
\]
where $\beta_{12}=(-1)^{ki+i}(\beta_1\;\llcorner\; x_{vw})$. Note $\beta_1\;\llcorner\; x_{vw} =[x_{i+1}\wedge\cdots\wedge x_k]$. From Definition \ref{def:homcom} we obtain
\[
I^\#(X_{vw})[X]^\# = (-1)^{(k^2-k)/2+1}\varepsilon_{vw}'[X_{vw}\circ X]^\#
\]
where $X_{vw}\circ X$ is homology oriented by $\beta_1\;\llcorner\; x_{vw}$.
We have computed the sign from
\[
ki+i+\left(((k+1)^2-(k+1))/2 +(k+1-i)(k+1) \right),
\]
where $ki+i$ is from $\beta_{12}$, and
the expression inside the parentheses is from the sign in Definition \ref{def:homcom}.
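For completeness, the reduction modulo 2 runs as follows:
\begin{align*}
ki + i &+ \tfrac{1}{2}\big((k+1)^2-(k+1)\big) + (k+1-i)(k+1) \\
&= ki + i + \tfrac{1}{2}(k^2+k) + k^2+2k+1 - i(k+1) \\
&= \tfrac{1}{2}(k^2+k) + k^2 + 2k + 1 \\
&\equiv \tfrac{1}{2}(k^2+k) + k + 1 \;\equiv\; \tfrac{1}{2}(k^2-k) + 1 \pmod{2},
\end{align*}
the last step using $\tfrac{1}{2}(k^2+k)+k = \tfrac{1}{2}(k^2-k)+2k$.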
On the other hand, $[X_{vw}\circ X]^\# = \Phi_w(x)$ exactly when $\mu_w = [x_1\wedge \cdots \wedge x_k] = \mu_v\;\llcorner\; x_{vw}$. Accounting for this, we define $\varepsilon''_{vw}=\pm 1$ by
\[
\mu_v = \varepsilon_{vw}''\mu_w\wedge x_{vw}.
\]
Recalling that $\partial'_{vw}(x)=x$, we conclude
\[
\Phi_{w}(\partial_{vw}'(x))=\varepsilon_{vw}d^1_{vw}(\Phi_v(x))
\]
where $\varepsilon_{vw}=\pm 1$ is given by
\[
\varepsilon_{vw}=(-1)^{(k^2-k)/2+1+\delta(v,w)}\varepsilon'_{vw}\varepsilon''_{vw}.
\]
In summary, we have shown that
\[
\Phi^{-1}d^1\Phi=\sum_{\substack{v<w\\|w-v|_1=1}} \varepsilon_{vw}\partial'_{vw}
\]
where we have determined $\varepsilon_{vw}$ in the split and merge cases separately.
It remains to show that $\varepsilon_{vw}$ is a valid edge assignment.
The first condition, that the total differential squares to zero, follows
immediately from the spectral sequence. We now show that the $\varepsilon_{vw}$ satisfy the second
condition, that is, if $v,u,t,w$ form a type X face, then the product
\begin{equation}
\varepsilon_{vu}\varepsilon_{vt}\varepsilon_{uw}\varepsilon_{tw}\label{eq:eps}
\end{equation}
takes the same value, $+1$ or $-1$,
independently of the particular face chosen; and if they form a type Y face, the same is true,
with the sign opposite to that of the type X case.
We fix such a type X face. Note
\[
\delta(v,u) + \delta(v,t) + \delta(u,w) + \delta(t,w) \equiv 0 \mod 2.
\]
Next we consider the $\varepsilon_{vw}''$ terms. We compute
\[
x_{vu}\wedge\mu_v = \varepsilon_{vu}''\mu_u = \varepsilon_{vu}''\varepsilon_{uw}''\mu_w\wedge x_{uw}.
\]
Since $x_{vu}=-x_{uw}$ in $D_u$, the above can be abbreviated to $\mu_v = (-1)^{k+1}\varepsilon_{vu}''\varepsilon_{uw}''\mu_w$. Similarly, we obtain $\mu_v = (-1)^{k+1}\varepsilon_{vt}''\varepsilon_{tw}''\mu_w$, implying $\varepsilon_{vu}''\varepsilon_{vt}''\varepsilon_{uw}''\varepsilon_{tw}''=1$. A similar argument in the type Y case yields $\varepsilon_{vu}''\varepsilon_{vt}''\varepsilon_{uw}''\varepsilon_{tw}''=1$ as well.
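In more detail: $\mu_w$ is a top form on $H_1(Y_w)$, which here has dimension $k$ (the face composes a split with a merge, so $b_1(Y_w)=b_1(Y_v)=k$). Using $x_{vu}=-x_{uw}$, we compute
\begin{align*}
x_{vu}\wedge\mu_v = \varepsilon_{vu}''\varepsilon_{uw}''\,\mu_w\wedge x_{uw}
&= -\varepsilon_{vu}''\varepsilon_{uw}''\,\mu_w\wedge x_{vu} \\
&= (-1)^{k+1}\varepsilon_{vu}''\varepsilon_{uw}''\,x_{vu}\wedge\mu_w,
\end{align*}
and cancelling $x_{vu}$ gives the stated abbreviation $\mu_v = (-1)^{k+1}\varepsilon_{vu}''\varepsilon_{uw}''\mu_w$.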
To summarize, we may reconsider the problem with (\ref{eq:eps}) replaced by
the expression $\varepsilon_{vu}'\varepsilon_{vt}'\varepsilon_{uw}'\varepsilon_{tw}'$.
Note $\mathcal{L}(X_{vu}) = H_1(Y_v)\oplus H_1(X_{vu})$,
and, as this is a split cobordism, we have a natural identification
$H_1(Y_v)= H_1(X_{vu})$. Choose respective orientations $\alpha_1$ and $\beta_1$ of
$H_1(Y_v)$ and $H_1(X_{vu})$ that agree under this identification. Recall that
$\varepsilon_{vu}'$ has been defined by
\[
\mu_{vu} = \varepsilon_{vu}'\beta_1\wedge\alpha_1.
\]
On the other hand, $\mathcal{L}(X_{uw})=H_1(Y_u)\oplus H_1(X_{uw})$,
and, as this is a merge cobordism, there is a codimension 1 inclusion
$H_1(X_{uw})\subset H_1(Y_u)$ with a complement generated by $x_{uw}$.
Let $\alpha_2$ be an orientation of $H_1(Y_u)$ and set $\beta_2 =\alpha_2\;\llcorner \; x_{uw}$.
Then $\varepsilon_{uw}'$ has been defined by
\[
\mu_{uw} = \varepsilon_{uw}'\beta_2\wedge\alpha_2.
\]
In this situation, the map $f_{12}$ of (\ref{eq:exseq1}) has a 1-dimensional
kernel spanned by $x_{uw}$.
Thus $\text{im}(f_{12})$ can be identified with both $H_1(Y_v)$ and $H_1(X_{vu})$.
Let a section for the exact sequence (\ref{eq:exseq1}), here a map $H_1(X_{vw})\to H_1(X_{vu})\oplus H_1(X_{uw})$,
be given by $y\mapsto (y,0)$. The map $F_{12}:\text{im}(f_{12})\oplus H_1(X_{vw}) \to
H_1(X_{vu})\oplus H_1(X_{uw})$ of (\ref{eq:split1}) can be written
\[
F_{12}:\mathbb{R}\{x_1,\ldots,x_{k}\}\oplus\mathbb{R}\{x_1,\ldots,x_k\}\to\mathbb{R}\{x_1,\ldots,x_k\}\oplus\mathbb{R}\{x_1,\ldots,x_{k}\},
\]
\[
F_{12}(x_p,x_q)=(x_q + x_p,-x_p).
\]
Proceeding with the conditions of Definition \ref{def:homcom}, we find
\[
F_{12}^{-1}(\beta_1\wedge\beta_2)= \delta_{12}\wedge\beta_{12}
\]
where $\delta_{12}=\alpha_1=\beta_2$ and $\beta_{12}=\beta_1$.
We can arrange that $\beta_2=\beta_1$ under the appropriate identification.
The condition $G_{12}(\zeta_{12}\wedge\delta_{12})=\alpha_2$,
having that $\alpha_2 = \beta_2\wedge x_{uw}$, yields $\zeta_{12}=(-1)^k x_{uw}$.
Using Definition \ref{def:homcom} we obtain
\[
\mu_{uw}\circ\mu_{vu} = (-1)^{(k^2-k)/2+k(k+1)+k}\varepsilon_{vu}'\varepsilon_{uw}'\beta_1\wedge\alpha_1\wedge H^u_{12}(x_{uw})
\]
where we have used $k=b_1(X_{vu})=d_{12}$. The superscript $u$ in $H^u_{12}$ distinguishes this map from the map $H^t_{12}$ which appears when $u$ is replaced by $t$. We obtain a similar equation
for $\mu_{tw}\circ\mu_{vt}$ with $x_{uw}$ replaced by $x_{tw}$ and
$\varepsilon_{vu}'\varepsilon_{uw}'$ replaced by $\varepsilon_{vt}'\varepsilon_{tw}'$.
Because our setup includes the compatibility condition $\mu_{tw}\circ\mu_{vt}=\mu_{uw}\circ\mu_{vu}$,
we conclude
\[
\varepsilon_{vu}'\varepsilon_{vt}'\varepsilon_{tw}'\varepsilon_{uw}' = H^u_{12}(x_{uw})/H^t_{12}(x_{tw})=:\varepsilon.
\]
\begin{figure}[t]
\includegraphics[scale=1.20]{thesign.pdf}
\caption{This is an illustration (missing a dimension) of $S^4$ minus two 4-balls,
with a properly embedded surface $F$, a torus with two disks removed, with
the local portions of the diagrams of the type X face from Figure \ref{fig:face} embedded;
the circles of the diagrams lie on $F$, while only the endpoints of the arcs lie on $F$.
The cobordism $X_{vw}$ is the double cover over $S^4$ minus two 4-balls
branched over $F$. The disk $S_u'$ lifts to a 2-sphere $S_u\subset X_{vw}$
intersecting $\widetilde{x}_{uw}$ (the lift of $x_{uw}$) in one point.
The disk $T_u'$ lifts to a 2-sphere $T_u\subset X_{vw}$ intersecting
$Y_u$ in $\widetilde{x}_{uw}$.}
\label{fig:thesign}
\end{figure}
In summary,
we see that $\varepsilon$ is the sign determined by comparing the result of orienting
$H_2^+(X_{vw})$ by $x_{uw}$ with the result of using $x_{tw}$.
Using the interpretation of the splitting map (\ref{eq:exseq3})
from \S \ref{sec:homor}, we obtain the following interpretation of $\varepsilon$.
This is a suitable moment to reintroduce the distinction between each arc $x$
and its lift $\widetilde{x}$.
Choose an oriented surface $S_u\subset Y_u$ transverse to
$\widetilde{x}_{uw}$ with intersection product $[S_u]\cdot[\widetilde{x}_{uw}]=1$.
Choose an oriented surface $T_u$ with $T_u\cap Y_u =\widetilde{x}_{uw}$.
Then
\[
[S_u] + [T_u] = H^u_{12}(\widetilde{x}_{uw}).
\]
To illustrate this, we supply Figure \ref{fig:thesign}, where we use that
$X_{vw}$ is a double cover of $S^4$ minus two 4-balls branched over
a properly embedded torus with two disks removed.
Similarly, we can write $[S_t]+[T_t]=H^t_{12}(\widetilde{x}_{tw})$.
The sign $\varepsilon$ is then the intersection product of these classes:
\[
\varepsilon = ([S_u] + [T_u])\cdot([S_t]+[T_t]).
\]
In fact, $[T_t]=\varepsilon[S_u]$.
From this it is clear that $\varepsilon$ only depends
on the topology of the type X configuration. A type Y face is obtained from a
type X face by reversing the direction of either $\widetilde{x}_{uw}$
or $\widetilde{x}_{tw}$, and $\varepsilon$ correspondingly changes sign.
\end{proof}
\vspace{10pt}
\subsection{Gradings}\label{sec:finalgr}
In this section we prove that the spectral sequence preserves
the relevant $\mathbb{Z}/4$-gradings, completing the proof of Theorem \ref{thm:1}.
We then deduce Corollary \ref{cor:2}.
As usual, let $k=\dim(V_v)$. For $x\in{\textstyle{\bigwedge}}^i(V_v)\subset\text{C}$, the grading of $\Phi(x)$ in $E^1$ is given in (\ref{eq:grt}) by
\begin{equation}
\text{gr}[E^1](\Phi(x))\equiv \text{gr}[Y_v](\Phi(x)) -\text{deg}(\mathbb{X}_{\boldsymbol{\infty}v})-|v|_1 \mod 4.\label{eq:gr4}
\end{equation}
We know, by the remark at the end of \S \ref{sec:example},
that $\text{gr}[Y_v](\Phi(x))\equiv 2k+i$.
We have $\text{deg}(\mathbb{X}_{\boldsymbol{\infty}v})=\text{deg}(\mathbb{X}_{\boldsymbol{\infty}\mathbf{1}})-\text{deg}(X_{v\mathbf{1}})$, since $\mathbb{X}_{v\mathbf{1}}$ is trivial. From (\ref{degofmap}) we compute
\[
\text{deg}(X_{v\mathbf{1}}) = -\frac{3}{2}(m-|v|_1)+\frac{1}{2}(b_1(Y_\mathbf{1})-k)
\]
using $\chi(X_{vw})=|w-v|_1$, $\sigma(X_{v\mathbf{1}})=0$ and $b_1(Y_v)=k$. We also compute
\[
\text{deg}(X_{\boldsymbol{\infty}\mathbf{1}}) = -\frac{3}{2}(2m + \sigma(X_{\boldsymbol{\infty}\mathbf{1}}))+\frac{1}{2}(b_1(Y_\mathbf{1})-b_1(\Sigma(L)))
\]
knowing $\Sigma(L)=\overline{Y}_{\boldsymbol{\infty}}$. Recall from (\ref{eq:degmaster}) that $\text{deg}(\mathbb{X}_{\boldsymbol{\infty}\mathbf{1}}) \equiv \text{deg}(X_{\boldsymbol{\infty}\mathbf{1}})+2\mathscr{P}(\mathbb{X}_{\boldsymbol{\infty}\mathbf{1}})$.
\begin{lemma}\label{lem:gr}
$\mathscr{P}(\mathbb{X}_{\boldsymbol{\infty}\mathbf{1}})\equiv \sigma(X_{\boldsymbol{\infty}\mathbf{1}}) \mod 2$.
\end{lemma}
\noindent Before proving this lemma, we draw our conclusion. In \cite{bloom}, Bloom computes
$\sigma(X_{\mathbf{0}\boldsymbol{\infty}})=\sigma-n_+$ and $b_1(\Sigma(L))=\nu$, where
$\sigma$ and $\nu$ are the signature and nullity of $L$, respectively,
and $n_{\pm}$ is the number of $\pm$ crossings of the diagram $D$. Note that $X_{\boldsymbol{\infty}\mathbf{1}}$ and $X_{\mathbf{1}\boldsymbol{\infty}}$ compose along $Y_\mathbf{1}$ to give a cobordism which, away from a manifold of signature $0$, has $m$ copies of $-\mathbb{C}\mathbb{P}^2$ connected summed to it (cf. $E$ from \S \ref{sec:decomp1}). In addition, since $\sigma(X_{\mathbf{0}\mathbf{1}})=0$, we have $\sigma(X_{\boldsymbol{\infty}\mathbf{1}})=\sigma(X_{\boldsymbol{\infty}\mathbf{0}})$. Additivity of the signature again implies that $\sigma(X_{\boldsymbol{\infty}\mathbf{1}}) = -m-\sigma+n_+$. Note $m=n_+ +n_-$. All together, (\ref{eq:gr4}) computes to
\[
i + 2n_- +\frac{3}{2}\left(n_+ + k\right)+\frac{1}{2}\left(|v|_1+\nu+\sigma\right) \mod 4,
\]
which is congruent to (\ref{eq:oddgr}). This completes the proof of Theorem \ref{thm:1}.
\begin{figure}[t]
\includegraphics[scale=.37]{figureeight.pdf}
\caption{To obtain a relative Kirby diagram for $(X_{\boldsymbol{\infty}\mathbf{0}},Y_{\boldsymbol{\infty}})$
where $Y_{\boldsymbol{\infty}}=\overline{\Sigma(L)}$, we borrow some constructions from Bloom \protect\cite{bloom}.
With a diagram of the figure eight knot in (i) as an example, we first choose a resolution
that yields one connected circle as in (ii), drawing small arcs where crossings used to be.
We then cut the connected circle at the dot, and straighten it out, as in (iii).
Reflecting this picture across the line, we obtain a surgery diagram (iv) for $Y_{\boldsymbol{\infty}}$
by choosing a $+1$ framing for each circle corresponding to a $0$-resolution,
and a $-1$ framing for $1$-resolution circles. Finally,
the relative Kirby diagram (v) is obtained by placing a small meridional circle on
each circle in (iv) framed by $0$ or $-1$, depending on whether the circle
corresponds to a $0$- or $1$-resolution, respectively.}
\label{fig:figureeight}
\end{figure}
\begin{proof}[Proof of Lemma \ref{lem:gr}]
By additivity and the fact that $\mathscr{P}(\mathbb{X}_{\mathbf{0}\mathbf{1}})\equiv \sigma(X_{\mathbf{0}\mathbf{1}})\equiv 0$ mod 2, it suffices to show that $\mathscr{P}(\mathbb{X}_{\boldsymbol{\infty}\mathbf{0}})\equiv \sigma(X_{\boldsymbol{\infty}\mathbf{0}})$. Write $\mathbb{X}=\mathbb{X}_{\boldsymbol{\infty}\mathbf{0}}$ and $X$ for its base space. We have
\[
\mathbb{X} = ([0,1]\times\mathbb{Y})\cup_{\mathbb{L}}(\cup^m_{i=1}\mathbb{H})
\]
where $\mathbb{L}=\mathbb{L}_1\cup\cdots\cup\mathbb{L}_m$
is an $\text{SO}(3)$-thickening of $L=L_1\cup\cdots\cup L_m$, and each $\mathbb{L}_i:\mathbb{H}_1\to\mathbb{Y}\times\{1\}$ is as in \S \ref{sec:x}.
Here we are viewing
\[
\mathbb{Y} = (Y\times\text{SO}(3))_\Psi(\mathbb{L})
\]
as a bundle over $Y=Y_{\boldsymbol{\infty}}=\overline{\Sigma(L)}$
built from the geometric representative $L$ as in \S \ref{sec:geomrep}.
In \S \ref{sec:main}
we saw that $L$ is the boundary of a surface $S\subset Y$,
so in fact $\mathbb{Y}$ is a trivial bundle.
Note $\mathbb{X}$ is reducible to an $S^1$-bundle by its very construction.
Let $\mathscr{L}$ be the associated complex line bundle.
The Poincar\'{e} dual of a pre-image of $c_1(\mathscr{L})\in H^2(X;\mathbb{Z})$ in $H^2(X,\partial X;\mathbb{Z})$
is represented by the closed surface $S'\subset \text{int}(X)$
which is the union of the cores of the 2-handles together with $S\subset Y\times\{1\}$.
Indeed, it is straightforward to define a section of $\mathscr{L}$ with zero set $S'$.
By the definition of $\mathscr{P}(\mathbb{X})$, it suffices to show that
\[
[S']\cdot [S']\equiv \sigma(X)\mod 2
\]
where $[S']\cdot [S']$ is the intersection product.
To do this we write down a relative Kirby diagram for $(X,Y)$.
We start by writing a surgery diagram for $Y=\overline{\Sigma(L)}$
using the chosen diagram $D$.
For this we follow Bloom \cite{bloom}.
First, choose $v\in\{0,1\}^m$ for which the resolution $D_v$ has 1 circle.
We can always choose $D$ so that there is such a resolution.
Then, in $D_v$, having placed arcs where crossings once were,
cut the lone circle at an isolated point $p$ and
unravel it, with the arcs attached, into a horizontal segment;
then double it as in Figure \ref{fig:figureeight} (iv). Place a $+1$ framing on a circle
in the resulting picture if that circle came from a $0$-resolution,
and a $-1$ framing otherwise. This gives a surgery diagram
for $Y=\overline{\Sigma(L)}$.
To turn this into a relative Kirby diagram for $(X,Y)$,
we simply add small meridians around each circle,
framed with a $0$ if the circle is +1 framed and a $-1$ if
the circle is $-1$ framed. The intersection number $[S']\cdot [S']$
is concentrated at the attaching locations of the 2-handles,
represented by the meridional circles in the relative Kirby diagram.
Thus there is a $-1$ contribution to $[S']\cdot [S']$ from each $1$-resolution in $D_v$.
We conclude $[S']\cdot [S']=-|v|_1$.
According to \cite{bloom} Prop. 1.7 and Lemma 9.4, the signature $\sigma(X)$ is
congruent mod 2 to the vertex weight $|v|_1$ of a 1-circle resolution, which completes the proof.
\end{proof}
We now discuss Corollary \ref{cor:2}. As alluded to in the introduction,
\cite[Thm. 1]{man}, \cite[\S 5]{ors} and the remarks in \cite[\S 9.3]{ls}
tell us that, for a quasi-alternating link $L$, the gradings $q$ and $t$ of $\overline {\text{Kh}'}(L)$
satisfy $q/2-t-\sigma/2=0$. Let us write $\delta=q/2-t-\sigma/2$; then
we may say that $\overline {\text{Kh}'}(L)$ is supported in $\delta$-grading $0$.
Note that $\nu=0$ when $L$ is quasi-alternating.
Further, as is described in \cite{man},
the rank of $\overline {\text{Kh}'}(L)_{q,t}$ is given by $|a_q|$, where
\[
J_{L}(x)=\sum a_q x^q
\]
is the Jones polynomial, with conventions as given in \cite{ors}.
The grading (\ref{eq:oddkhgr}) of Theorem \ref{thm:1},
which we shall call $\delta^\#$, is given by $\delta^\# = \delta + q + \sigma$
for quasi-alternating links. Note that $\delta$ and $\delta^\#$ agree modulo 2,
implying that the spectral sequence collapses at the $E^2$-page.
Write $\overline {\text{Kh}'}(L)_j$ with $j\in\{0,2\}\subset\mathbb{Z}/4\mathbb{Z}$ for the $\delta^\#$-grading.
Then
\[
\text{rk}_\mathbb{Z}\overline {\text{Kh}'}(L)_j = \sum_{q + \sigma \equiv j} |a_q|
\]
where the congruence is modulo 4. We remark that $\sum |a_q| = \text{det}(L)$, and
the sign of $a_q$ is $(-1)^{q/2+\sigma/2}$,
cf. \cite[\S 9.1]{bloom}. It follows that
\[
\text{rk}_\mathbb{Z}\overline {\text{Kh}'}(L)_j = \frac{1}{2}\left[\sum |a_q|+(-1)^{j/2}\sum a_q\right].
\]
Now we obtain the result of Corollary \ref{cor:2} using the fact that
$J_L(1)=\sum a_q=2^{m-1}$, where $m$ is the number of components of $L$.
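Explicitly, substituting $\sum |a_q|=\text{det}(L)$ and $\sum a_q = 2^{m-1}$ into the rank formula above gives
\[
\text{rk}_\mathbb{Z}\overline {\text{Kh}'}(L)_j = \frac{1}{2}\left(\text{det}(L)+(-1)^{j/2}\,2^{m-1}\right), \qquad j\in\{0,2\}.
\]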
\section{Bundles in the Exact Triangle}\label{sec:topology}
In this section we introduce the manifolds and bundles that feature
in the proof of Theorem \ref{thm:floer}. We take a systematic approach to the
bundles $\mathbb{Y}_i$ that appear in Floer's exact triangle by
extending Dehn surgery to $\text{SO}(3)$-bundles. This viewpoint was Floer's \cite{f2},
and is expanded upon in \cite{bd}. The construction of surgery cobordism bundles
$\mathbb{X}_{ij}$ in \S \ref{sec:x} is straightforward in this setting.
These bundles induce the maps in the exact triangle. We then introduce some
hypersurfaces in $X_{ij}$ that yield useful metric families;
these were used in \cite{kmos,bloom,kmu}. In \S \ref{sec:geomrep}
we relate our new setup to that of the statement of Theorem \ref{thm:floer} in \S \ref{sec:main}.
In this section, we write $A\cup_f B$ for the space obtained from the disjoint union
of $A$ and $B$, with points identified using the map $f$. Our convention is that the
gluing map $f$ is always from a subset of $B$ to a subset of $A$. We freely
use isomorphisms of the form $A\cup_f B\simeq A\cup_{fg} C$, where
$g$ is an isomorphism from a subset of $C$ to a subset of $B$. All constructions
that are not smooth have a canonical smoothing, as mentioned in \cite[Rmk. 1.3.3]{gs}.
All (principal) $\text{SO}(3)$-bundles have right actions. Thus our bundle gluing maps,
in order to be equivariant, always involve left multiplication on trivialized fibers.\\
\subsection{Dehn Surgery with Bundles}\label{sec:dehn}
Let $\mathbb{Y}$ be an $\text{SO}(3)$-bundle over a closed, oriented 3-manifold $Y$. Let
$K:S^1\times D^2\to Y$ be an embedding. We refer to $K$ as a framed
knot in $Y$. We consider equivariant embeddings $\mathbb{K}:S^1\times D^2\times \text{SO}(3)\to\mathbb{Y}$
that lie above $K$, i.e. $\mathbb{K}/\text{SO}(3)=K$. We refer to $\mathbb{K}$ as a framed knot
in $\mathbb{Y}$.
The space of bundle automorphisms of $S^1\times D^2\times\text{SO}(3)$ fixing the base space
has two connected components.
An automorphism $\tau$ not isotopic to the identity is
\[
\tau(w,z,a)=(w,z,c(w)a)
\]
where $(w,z)\in S^1\times D^2$, $a\in\text{SO}(3)$, and
$c$ is a standard inclusion $S^1\to \text{SO}(3)$ of a maximal torus. In particular, $c$
is a homomorphism and generates $\pi_1(\text{SO}(3))\simeq \mathbb{Z}/2$.
If $\mathbb{K}$ is one embedding, another embedding lying above $K$ is given by
$\mathbb{K}\tau$.
We generalize Dehn surgery to surgery on the framed knots $\mathbb{K}$. For
$\Omega = (A,b)\in \text{SL}(2,\mathbb{Z})\ltimes (\mathbb{Z}/2)^2$
we define an automorphism $\psi_\Omega$ of $S^1\times\partial D^2\times \text{SO}(3)$ by
\[
\psi_\Omega(w,z,a) = (w^{A_{11}} z^{A_{12}},w^{A_{21}} z^{A_{22}}, c(w)^{b_1}c(z)^{b_2}a).
\]
Let $\mathbb{K}'$ be the interior of the image of $\mathbb{K}$. The result of $\Omega$-surgery on $\mathbb{K}$
is then defined to be the identification space
\[
\mathbb{Y}_\Omega(\mathbb{K}) = (\mathbb{Y}\setminus \mathbb{K}')\cup_{\mathbb{K}\psi_\Omega} (S^1\times D^2\times\text{SO}(3)).
\]
There is an induced framed knot $\Omega(\mathbb{K})$ in $\mathbb{Y}_\Omega(\mathbb{K})$ given by the inclusion
of $S^1\times D^2\times\text{SO}(3)$ into the above expression for $\mathbb{Y}_\Omega(\mathbb{K})$.
The product of elements in $G=\text{SL}(2,\mathbb{Z})\ltimes (\mathbb{Z}/2)^2$ is given by
$(A',b')(A,b) = (A'A, b' A + b )$. The assignment $\Omega\mapsto \psi_\Omega$
induces an isomorphism from $G$ to the group of isotopy classes of orientation preserving
equivariant automorphisms of $S^1\times \partial D^2 \times \text{SO}(3)$.
We have an associativity rule
\[
\mathbb{Y}_{\Omega'\Omega}(\mathbb{K}) \simeq (\mathbb{Y}_{\Omega'}(\mathbb{K}))_{\Omega}(\Omega'(\mathbb{K})).
\]
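As a consistency check on the product rule, the composite $\psi_{\Omega'}\circ\psi_{\Omega}$ can be computed directly. Since $c$ is a homomorphism into a fixed maximal torus, $c(w)$ and $c(z)$ commute, and one finds
\[
\psi_{\Omega'}\circ\psi_{\Omega}(w,z,a) = \left(w^{(A'A)_{11}}z^{(A'A)_{12}},\; w^{(A'A)_{21}}z^{(A'A)_{22}},\; c(w)^{(b'A)_1+b_1}c(z)^{(b'A)_2+b_2}a\right),
\]
which is $\psi_{\Omega'\Omega}(w,z,a)$ for the product $(A',b')(A,b)=(A'A,b'A+b)$ given above.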
The space $\mathbb{Y}_\Omega(\mathbb{K})$ is naturally a bundle over $Y_{p/q}(K)$, the result of
$p/q$ Dehn surgery on the framed knot $K$ in $Y$, where $\Omega=(A,b)$, $p=A_{22}$,
$q=A_{12}$ and of course $K=\mathbb{K}/\text{SO}(3)$. Note that
the automorphism $\tau$ above restricts to $\psi_\Theta$ where
$\Theta=(1_{2\times 2},(1,0))\in G$. We have the transformation rule
$\mathbb{Y}_\Omega(\mathbb{K}\tau)\simeq \mathbb{Y}_{\Theta\Omega}(\mathbb{K})$.\\
\subsection{The Surgery Bundle $\mathbb{Y}_i$} There is a particular choice of surgery parameter
$\Omega$ that Floer used in the setting of his exact triangle:
\begin{equation}\label{lambda}
\Lambda = \left(\left[\begin{array}{cc} -1 & 1 \\ -1& 0 \end{array}\right],(1,0) \right).
\end{equation}
To understand this, write $\Lambda = \Psi\Lambda'$, where
\begin{equation}
\Psi = (1_{2\times 2},(0,1)), \quad \Lambda'=\left(\left[\begin{array}{cc} -1 & 1 \\ -1& 0 \end{array}\right],(0,0) \right).\label{eq:psi}
\end{equation}
First, $\Psi$ twists the trivialization around $\partial D^2$. Then,
$\Lambda'$ performs $0$-surgery on $K$, leaving bundles alone.
Note that $\Lambda^3=1$. With $\mathbb{Y}$ and $\mathbb{K}$
fixed, we define for $i\in\mathbb{Z}$ the surgery bundles $\mathbb{Y}_i = \mathbb{Y}_{\Lambda^{i+1}}(\mathbb{K})$,
the surgery base manifolds $Y_i =\mathbb{Y}_i/\text{SO}(3)$, and the induced embeddings
$\mathbb{K}_{i} = \Lambda^{i+1}(\mathbb{K})$. The index offset is chosen so that $Y_0$ and $Y_1$ are
simply $0$- and $1$-surgery on $K\subset Y$, respectively. Because $\Lambda^3=1$,
there are isomorphisms $\mathbb{Y}_i \simeq \mathbb{Y}_{i+3}$.\\
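The relation $\Lambda^3=1$ can be verified directly from the product rule: the matrix part satisfies
\[
\left[\begin{array}{cc} -1 & 1 \\ -1& 0 \end{array}\right]^2 = \left[\begin{array}{cc} 0 & -1 \\ 1& -1 \end{array}\right],
\qquad
\left[\begin{array}{cc} -1 & 1 \\ -1& 0 \end{array}\right]^3 = 1_{2\times 2},
\]
while for the $(\mathbb{Z}/2)^2$ part, writing $\Lambda=(A,(1,0))$, we compute $\Lambda^2=(A^2,(1,0)A+(1,0))=(A^2,(0,1))$ and then $\Lambda^3=(A^3,(0,1)A+(1,0))=(1_{2\times 2},(0,0))$, using $(1,0)A\equiv (1,1)$ and $(0,1)A\equiv(1,0)$ modulo 2.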
\subsection{The Surgery Cobordism $\mathbb{X}_{ij}$}\label{sec:x} Our goal is to construct
cobordism bundles $\mathbb{X}_{ij}:\mathbb{Y}_i\to \mathbb{Y}_j$ for $i< j$. Each $\mathbb{X}_{ij}$ will be
an $\text{SO}(3)$-bundle over a standard surgery cobordism $X_{ij}:Y_i\to Y_j$. We first
construct $\mathbb{X}_{ij}$ when $j-i=1$ and use these as building blocks for the general
construction. Write $\mathbb{H}=D^2\times D^2\times \text{SO}(3)$. We view $\mathbb{H}$ as a 2-handle thickened
by $\text{SO}(3)$. Also write
\begin{figure}[t]
\includegraphics[scale=.90]{decomp1.pdf}
\caption{The two hypersurfaces $Y_1$ and $S_1$ in the interior of $X_{02}$. The
3-sphere $S_1$ separates off a copy of $-\mathbb{C}\mathbb{P}^2$ minus a 4-ball.}
\label{fig:met13}
\end{figure}
\begin{center}
$\partial \mathbb{H} = \mathbb{H}_1 \cup \mathbb{H}_2$,\\
$\mathbb{H}_1 = \partial D^2\times D^2\times \text{SO}(3)$,\\
$\mathbb{H}_2 = D^2\times \partial D^2\times \text{SO}(3)$.\\
\end{center}
Viewing $\mathbb{K}_0$ as a map $\mathbb{H}_1 \to \{1\}\times \mathbb{Y}_0$, we define $\mathbb{X}_{01}$ by setting
\[
\mathbb{X}_{01} =([0,1] \times \mathbb{Y}_0) \cup_{\mathbb{K}_0} \mathbb{H}.
\]
The definition of $\mathbb{X}_{ij}$ for the remaining cases with $j-i=1$ is similar. We want to define $\mathbb{X}_{02}$ as
$\mathbb{X}_{01}\cup_{\mathbb{Y}_1}\mathbb{X}_{12}$. To make sense of this expression we give an explicit
identification of $\partial \mathbb{X}_{01}\setminus \mathbb{Y}_0$ with $\mathbb{Y}_1$. Let the interior of
the image of $\mathbb{K}_0$ in $\mathbb{Y}_0$ be denoted $\mathbb{K}'_0$. Note that
\[
\partial\mathbb{H}_1=\mathbb{H}_1\cap\mathbb{H}_2=\partial\mathbb{H}_2
\]
is a trivial bundle over a 2-torus. Now we write
\[
\partial \mathbb{X}_{01}\setminus \mathbb{Y}_0 = (\mathbb{Y}_0 \setminus \mathbb{K}_0') \cup_{\mathbb{K}_0|_{\mathbb{H}_1\cap\mathbb{H}_2}} \mathbb{H}_2.
\]
Let $\psi:\mathbb{H}_1\to \mathbb{H}_2$ be an isomorphism. Then
\[
\partial \mathbb{X}_{01}\setminus \mathbb{Y}_0 \simeq (\mathbb{Y}_0 \setminus \mathbb{K}'_0)
\cup_{\mathbb{K}_0\psi|_{\mathbb{H}_1\cap\mathbb{H}_2}} \mathbb{H}_1 = (\mathbb{Y}_0)_{\psi|_{\mathbb{H}_1\cap\mathbb{H}_2}}(\mathbb{K}_0).
\]
To identify this bundle with $\mathbb{Y}_1=(\mathbb{Y}_0)_\Lambda(\mathbb{K}_0)$ we need $\psi$ such that
$\psi|_{\mathbb{H}_1\cap\mathbb{H}_2}=\psi_\Lambda$; we choose
\[
\psi:\mathbb{H}_1\to\mathbb{H}_2, \quad \psi(w,z,a) := (\overline{w}z,\overline{w},c(w)a).
\]
Making this choice, we have identified $\partial \mathbb{X}_{01}\setminus \mathbb{Y}_0$
with $\mathbb{Y}_1$. Finally, to construct $\mathbb{X}_{ij}$ for $j-i>1$,
we inductively define $\mathbb{X}_{ij}=\mathbb{X}_{i,j-1}\cup_{\mathbb{Y}_{j-1}}\mathbb{X}_{j-1,j}$,
where the gluing is done according to the same identification process.\\
\subsection{The Bundle $\mathbb{S}_i$}\label{sec:decomp1} We construct a subset
$\mathbb{S}_1\subset \mathbb{X}_{02}$ which is a bundle over a 3-sphere $S_1\subset X_{02}$. One
gets $\mathbb{S}_i$ inside $\mathbb{X}_{i-1,i+1}$ for each $i$ in a similar fashion. Write
\begin{equation}
\mathbb{X}_{02}= ([0,1]\times \mathbb{Y}_0 \cup_{\mathbb{K}_0} \mathbb{H}) \cup_{\mathbb{Y}_{1}} ([0,1]\times \mathbb{Y}_1\cup_{\mathbb{K}_1} \mathbb{H}) \simeq ([0,1]\times \mathbb{Y}_0) \cup_{\mathbb{K}_0} \mathbb{H} \cup_{\psi} \mathbb{H}\label{iso}
\end{equation}
with notation as in the construction of $\mathbb{X}_{01}$. Introduce the subset
\[
\mathbb{H}(r,s) = D^2(r)\times D^2(s)\times\text{SO}(3) \subset \mathbb{H}
\]
where $D^2(r)$ is the disk of radius $r$, $0<r\leq 1$, and consider the following restriction bundles of $\mathbb{X}_{02}$:
\[
\mathbb{U}=\mathbb{H}(1/2,1) \cup \mathbb{H}(1,1/2)\subset\mathbb{H}\cup_\psi\mathbb{H}, \quad \mathbb{S}_1 = \partial\mathbb{U}.
\]
It is well-known that the base space $U$ of $\mathbb{U}$
is diffeomorphic to $-\mathbb{C}\mathbb{P}^2$ minus
an embedded 4-ball, cf. \cite[Ex. 4.2.4]{gs}. It follows that $\mathbb{S}_1$
is a trivial bundle over a 3-sphere $S_1$. We see that we can decompose $X_{02}$ along $S_1$ into a
connected sum of $-\mathbb{C}\mathbb{P}^2$ with a manifold whose boundary is $Y_2 \sqcup \overline{Y}_0$.
The intersection $S_1\cap Y_1$ is a 2-torus. This decomposition is depicted in Figure \ref{fig:met13}.
We claim that $\mathbb{U}$ is a non-trivial bundle.
We check that the restriction of $\mathbb{U}$ to an essential
sphere is non-trivial. Define $\mathbb{D}_1 = D^2\times \{0\} \times \text{SO}(3)$ and
$\mathbb{D}_2 = \{0\}\times D^2\times \text{SO}(3)$ as subsets of $\mathbb{H}$. Consider
\[
\mathbb{D}_2 \cup_{\psi|_{\partial \mathbb{D}_1}} \mathbb{D}_1\subset\mathbb{U}.
\]
This is
isomorphic to $D^2\times \text{SO}(3)\cup_{f} D^2\times \text{SO}(3)$ where $f$ is the automorphism
of $\partial D^2\times\text{SO}(3)$ given by $f(z,a)=(\overline{z},c(z)a)$. Since the clutching function $z\mapsto c(z)$ generates $\pi_1(\text{SO}(3))\simeq\mathbb{Z}/2$, this
is a nontrivial bundle over a 2-sphere.\\
\begin{figure}[t]
\includegraphics[scale=.75]{decomp2.pdf}
\caption{The intersections of the five hypersurfaces in the interior of $X_{03}$.
The $S^1\times S^2$ hypersurface $T$ divides $X_{03}$ into two pieces, $V$ and $E$. This picture first appeared in \protect\cite{kmos}.}
\label{fig:met3}
\end{figure}
\subsection{The Bundle $\mathbb{T}$}\label{sec:decomp2} We construct a subset
$\mathbb{T}\subset\mathbb{X}_{03}$ which is a trivial bundle over $T\subset X_{03}$ where
$T$ is diffeomorphic to $S^1\times S^2$. By iterating (\ref{iso}) and
stretching the ends we write $\mathbb{X}_{03}$ as
\[
\mathbb{X}_{03} \simeq
([0,1]\times \mathbb{Y}_0)\cup_{\mathbb{K}_0} \mathbb{H} \cup_{\psi} \mathbb{H}
\cup_{\psi} \mathbb{H}\cup_{\mathbb{Y}_0} ([0,1] \times \mathbb{Y}_0).
\]
Identifying $\mathbb{X}_{03}$ with the expression on the right, we define the restriction bundles
\[
\mathbb{E} = \mathbb{H} \cup_{\psi} \mathbb{H} \cup_{\psi} \mathbb{H}, \quad \mathbb{T} = \partial \mathbb{E},
\]
and their respective base spaces $E$ and $T$. We have an isomorphism
\begin{equation}
f:\mathbb{T} = \mathbb{H}_1\cup_{(\psi|_{\mathbb{H}_1\cap\mathbb{H}_2})^2}\mathbb{H}_2 \to S^1\times S^2\times \text{SO}(3)\label{triv}
\end{equation}
where, viewing $S^2\subset S^1\times S^2$ as $\mathbb{C}\cup\infty$, we set
\begin{equation*}
f|_{\mathbb{H}_1}=\text{id}, \quad f|_{\mathbb{H}_2}(z,w,a) = (\overline{w},\overline{w}/\overline{z},c(w)a).
\end{equation*}
The triviality of the bundle $\mathbb{T}$ is also seen
from the observation that it is the restriction of a
bundle on a space in which $T$ is contractible. We note that
we could have also trivialized $\mathbb{T}$
by using a similar isomorphism in which $f|_{\mathbb{H}_2}=\text{id}$.
These two isomorphisms determine trivializations
that differ, in the terminology of \S \ref{sec:instantons},
by a non-even gauge transformation.
We remark that the
intersections $T\cap Y_1$ and $T\cap Y_2$ are 2-tori. We illustrate the
arrangement of intersections in Figure \ref{fig:met3}. We note that $T$ may
be described as the boundary of a regular neighborhood of the union of the two
essential spheres inside the copies of $-\mathbb{C}\mathbb{P}^2$ divided off by
$S_1$ and $S_2$. The hypersurface $T$ separates $X_{03}$ into two 4-manifolds,
$E$ and $V$, where $E$ is diffeomorphic to $-\mathbb{C}\mathbb{P}^2$
minus a neighborhood of an unknotted circle, and $V$ is diffeomorphic to $[0,1] \times Y_0$ minus a
neighborhood of $\{1/2\}\times K$.\\
\subsection{An Involution of $\mathbb{E}$}\label{sec:involution}
We construct an involution $\sigma:\mathbb{E}\to\mathbb{E}$.
We write
\[
\mathbb{E} = \mathbb{H}^{-1}\cup_\psi \mathbb{H}^0\cup_\psi \mathbb{H}^{+1}
\]
where the superscripts have been added to distinguish the copies of $\mathbb{H}$.
We write $[w,z,a,i]\in\mathbb{E}$ for the point represented by $(w,z,a)\in\mathbb{H}^i$.
We define our involution by
\[
\sigma[w,z,a,i] = [\overline{z},\overline{w},c(w)^{i(i+1)}c(z)^{i(i-1)}a,-i].
\]
Here we have extended $c^2:S^1\to\text{SO}(3)$ to a map
$c^2:D^2\to\text{SO}(3)$ such that $c^2(\overline{w})=(c^2(w))^{-1}$. Note that $\sigma$ interchanges the outer copies of $\mathbb{H}$ and
fixes the middle copy of $\mathbb{H}$.
It is straightforward to check that $\sigma$ is well-defined: writing $\sigma$
as three maps $\sigma_i:\mathbb{H}^{+i}\to\mathbb{H}^{-i}$, one uses the relations
\[
\sigma_0^2=\text{id}, \quad \sigma_{\pm 1} = \psi^{\pm 1}\sigma_0\psi^{\pm 1}, \quad \psi^3 = \text{id},
\]
whenever these compositions are defined.
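For instance, the involutive property on the outer copies of $\mathbb{H}$ can be checked directly:
\[
\sigma^2[w,z,a,1] = \sigma[\overline{z},\overline{w},c^2(w)a,-1] = [w,z,c^2(\overline{w})c^2(w)a,1]=[w,z,a,1],
\]
using $c^2(\overline{w})=(c^2(w))^{-1}$.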
The involution $\sigma$ is a bundle automorphism that restricts to an orientation-preserving
diffeomorphism of $E$. It fixes $\mathbb{T}$ and swaps $\mathbb{S}_1$ with $\mathbb{S}_2$.
\begin{figure}[t]
\includegraphics[scale=.65]{involution.pdf}
\caption{The involution $\sigma$.}
\label{fig:involution}
\end{figure}
Let us look at how the involution affects $\mathbb{T}$. Recall the isomorphism (\ref{triv}).
We have
\begin{equation}
f\sigma f^{-1}(w,z,a) = (w,w/z,c(\overline{w})d(z)a)\label{eq:invdiff}
\end{equation}
where $w\in S^1, z\in\mathbb{C}\cup\infty$, $a\in\text{SO}(3)$
and $d:S^2\to\text{SO}(3)$ is the double of $c^2:D^2\to\text{SO}(3)$.
It is easily seen that $f\sigma f^{-1}$ is isotopic to a composition, $\theta\circ\upsilon$, where
\[
\theta(w,z,a)= (w,w/z,a), \quad \upsilon(w,z,a) = (w,z,c(w)a).
\]
The map $\theta$ is a diffeomorphism of $S^1\times S^2$ that, with respect to our trivialization, is extended in the trivial way to the overlying bundle.
In the terminology of \S \ref{sec:instantons}, $\upsilon$ is a non-even gauge transformation of the trivial bundle over $S^1\times S^2$. The involution $\sigma$ will be useful in the proof of the exact triangle.\\
\subsection{Geometric Representatives}\label{sec:geomrep}
Let $\omega$ be an embedded loop in $Y$. Extend this to an embedding
$\mathbb{K}_\omega:S^1\times D^2\times\text{SO}(3)\to Y\times\text{SO}(3)$. Let $\Psi=(1_{2\times 2},(0,1))$
as in (\ref{eq:psi}). Then the result of
$\Psi$-surgery on $\mathbb{K}_\omega$ as a framed knot in $Y\times\text{SO}(3)$ is a bundle geometrically
represented by $\omega$. More generally, $\omega$ can be a collection of embedded loops,
and $\Psi$-surgery for each component gives a bundle geometrically represented by $\omega$.
This relates our current framework to the statement of Theorem \ref{thm:floer} in \S \ref{sec:main}.
Let $\omega$ be a closed, unoriented 1-manifold in $Y$, and $K$ a framed knot in $Y$
disjoint from $\omega$. We set
\begin{equation}\label{geomrep}
\mathbb{Y} = (Y\times\text{SO}(3))_\Psi(\mathbb{K}_\omega),
\end{equation}
where it is understood that if $\omega$ has multiple components, we do $\Psi$-surgery for
each component. This description of $\mathbb{Y}$ gives a preferred trivialization away from a
neighborhood of $\omega$. We let $\mathbb{K}$ be the $\text{SO}(3)$-thickening of $K$ using this
preferred data, precomposed with $\tau$. That is,
\[
\mathbb{K} = (K\times\text{id}_{\text{SO}(3)})\tau.
\]
Recall that $\tau$ restricts to $\psi_\Theta$ where
$\Theta=(1_{2\times 2},(1,0))$, and that $\mathbb{Y}_0$ is defined as
$\mathbb{Y}_\Lambda(\mathbb{K})$. Using $\Theta\Lambda = \Lambda'\Psi$
with notation as in (\ref{eq:psi}), we have
\[
\mathbb{Y}_0 \simeq \mathbb{Y}_{\Lambda'\Psi}(K\times\text{id}_{\text{SO}(3)}).
\]
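The identity $\Theta\Lambda=\Lambda'\Psi$ used here is a quick computation with the product rule of \S \ref{sec:dehn}: writing $A$ for the matrix in (\ref{lambda}),
\[
\Theta\Lambda = (1_{2\times 2},(1,0))(A,(1,0)) = (A,(1,0)A+(1,0)) = (A,(0,1)) = (A,(0,0))(1_{2\times 2},(0,1)) = \Lambda'\Psi,
\]
since $(1,0)A\equiv(1,1)$ modulo 2.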
Because $\Lambda'$ is $0$-surgery without bundle-twisting,
we see that $\mathbb{Y}_0$ is of the form (\ref{geomrep}),
where $Y$ is replaced by $Y_0$ and $\omega$ replaced by
$\omega\cup K_0$, where $K_0$ is the induced knot in $Y_0$.
Thus $\mathbb{Y}_0$ is geometrically represented by $\omega\cup K_0$.
Pushing $K_0$ away from the surgered
neighborhood makes it a small meridional loop $\mu$ as in Figure \ref{fig:ses}, by the
nature of $0$-surgery.
We may deduce that $\mathbb{Y}_1$ is geometrically represented by $\omega\subset Y_1$ in
either of two ways. First, we may interpret $Y_1$ as $0$-surgery on the induced knot
$K_0\subset Y_0$ and iterate the rule already established, forgetting about bundles
altogether. Alternatively, we can
repeat the above argument for $\Lambda^2$ in place of $\Lambda$. The difference in
this case is that $\Theta\Lambda^2 = (\Lambda')^2$. This is $1$-surgery on $K$ without
bundle-twisting.
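This last identity is again a quick check with the product rule: with $A$ as in (\ref{lambda}) we have $\Lambda^2=(A^2,(1,0)A+(1,0))=(A^2,(0,1))$, so
\[
\Theta\Lambda^2 = (1_{2\times 2},(1,0))(A^2,(0,1)) = (A^2,(1,0)A^2+(0,1)) = (A^2,(0,0)) = (\Lambda')^2,
\]
using $(1,0)A^2\equiv(0,1)$ modulo 2; and indeed the matrix $A^2$ has surgery coefficient $p/q = (A^2)_{22}/(A^2)_{12} = -1/-1 = 1$.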
\section{Proving the Exact Triangle}
In this section we prove Theorem \ref{thm:floer}, Floer's exact triangle.
We use an algebraic lemma first
used in \cite{os} by Ozsv\'ath and Szab\'o
to prove an exact sequence in
Heegaard Floer homology.
The use of metric stretching maps in this context was
applied in \cite{kmos} by Kronheimer, Mrowka, and the previous two authors
to prove an exact triangle in monopole Floer homology.
Bloom \cite{bloom} also treats
the monopole case.
Our proof is largely an adaptation of Kronheimer and Mrowka's proof \cite{kmu}
in the singular instanton knot homology setting.
In particular, while \S \ref{sec:firstmaps} is
essentially part of Floer's original proof, see \cite[\S 4]{bd}, the contents of
\S \ref{sec:secondmaps}, notably the idea for Lemma \ref{lem:fiber} and its proof,
are based on ideas from \cite[\S 7.1]{kmu}.\\
\subsection{The Triangle Detection Lemma}
The following statement is adapted from \cite[\S 7.1]{kmu}
and first appeared in \cite{os}.
\begin{lemma}
Let $(\emph{\text{C}}_i,\partial_i)$ be a sequence of complexes,
$i\in\mathbb{Z}$. Suppose that there are chain maps $f_i:\emph{\text{C}}_i\to
\emph{\text{C}}_{i+1}$ and maps $h_i:\emph{\text{C}}_i\to \emph{\text{C}}_{i+2}$ satisfying
\[
f_{i+1}f_i + \partial_{i+2}h_i + h_{i}\partial_i=0.
\]
Suppose further that each sum
\[
f_{i+2}h_{i} + h_{i+1}f_{i}
\]
induces an isomorphism $H(\emph{\text{C}}_i)\to H(\emph{\text{C}}_{i+3})$. Then
\[
\cdots \to H(\emph{\text{C}}_i) \xrightarrow[]{{H}(f_i)}
H(\emph{\text{C}}_{i+1}) \xrightarrow[]{{H}(f_{i+1})}
H(\emph{\text{C}}_{i+2}) \to \cdots
\]
is an exact sequence. Furthermore, the anti-chain map
$f_i\oplus h_i:\emph{\text{C}}_i\to \text{\emph{Cone}}(f_{i+1})$ is a
quasi-isomorphism for each $i\in\mathbb{Z}$.\label{alg}
\end{lemma}
\noindent To apply this lemma, we use the notation of \S \ref{sec:topology},
so that we have a 3-periodic sequence of surgery bundles $\mathbb{Y}_i$, $i\in\mathbb{Z}$,
and surgery cobordism bundles $\mathbb{X}_{ij}:\mathbb{Y}_i\to\mathbb{Y}_j$ whenever $j> i$.
We let $(\text{C}_i,\partial_i)$ be the instanton
chain complex $\text{C}(\mathbb{Y}_i)$ with its differential. We take $f_i$ to be $m(\mathbb{X}_{i,i+1}):\text{C}(\mathbb{Y}_i)\to \text{C}(\mathbb{Y}_{i+1})$.
The map $h_i$ is defined in \S \ref{sec:firstmaps}, and in \S \ref{sec:secondmaps} we define a chain homotopy $k_i$ from
$f_{i+2}h_{i} + h_{i+1}f_{i}$ to an intermediate map, and then show that this
intermediate map is chain homotopic to the identity map of $\text{C}_i$ up to sign.
All maps are of the form $m_G(\mathbb{X})$.\\
\subsection{The $h_i$ maps}
\label{sec:firstmaps}
We define $h_0:\text{C}_0\to \text{C}_{2}$ in this section. Recall from \S \ref{sec:decomp1}
that we can write
\[
X_{02}=W\cup_{S_1} U
\]
where $U$ is diffeomorphic to $-\mathbb{C}\mathbb{P}^2$ minus a 4-ball, and $W$
has boundary $Y_2 \sqcup \overline{Y}_0\sqcup S_1$. The map $h_0$
is taken to be $m_G(\mathbb{X}_{02})$ where $G$ is a family of metrics on $X_{02}$ induced
by the set of two intersecting hypersurfaces $\mathcal{H}=\{S_1,Y_{1}\}$. Thus $G$ is
parameterized by an interval, with endpoint metrics $G(S_1)$ and $G(Y_1)$, cut
along $S_1$ and $Y_1$, respectively, as depicted in Figure \ref{fig:met2}. Equations
(\ref{met1}) and (\ref{met2}) yield
\[
-h_0\partial_0 -\partial_{2}h_0 = m_{G(S_1)}(\mathbb{X}_{02}) + m_{G(Y_{1})}(\mathbb{X}_{02}).
\]
By equation (\ref{met3}), we also have $m_{G(Y_{1})}(\mathbb{X}_{02}) = m(\mathbb{X}_{12})m(\mathbb{X}_{01}) = f_{1}f_0$.
It remains to show that $m_{G(S_1)}(\mathbb{X}_{02})=0$.
\begin{figure}[t]
\includegraphics[scale=.85]{met2.pdf}
\caption{The family of metrics on $X_{02}$ used to define the $h_0$ map.}
\label{fig:met2}
\end{figure}
\begin{figure}[t]
\includegraphics[scale=.85]{met7.pdf}
\caption{The family of metrics $G_T$ on $E\subset X_{03}$.}
\label{fig:met7}
\end{figure}
Let $a$ and $b$ be given with $\mu(a,\mathbb{X}_{02},b)=0$. To show that $m_{G(S_1)}(\mathbb{X}_{02})=0$,
it suffices to show that $M_{G(S_1)}(a,\mathbb{X}_{02},b)$ is empty for any such $a,b$.
We prove this by contradiction. Suppose $[A]\in M_{G(S_1)}(a,\mathbb{X}_{02},b)$.
Write
$\mathbb{U}$ and $\mathbb{W}$ for the restriction of $\mathbb{X}_{02}$ to $U$ and
$W$, respectively. Because
$G(S_1)$ is cut along $S_1$, $[A]$ is a pair $[A_{W}],[A_{U}]$ in
$M(a,\mathbb{W},b,c)\times M(c,\mathbb{U})$ for
some flat connection $c$ on $\mathbb{S}_1$. We arrange that the perturbation data near $S_1$ is $0$.
The gluing formula (\ref{glue}) says
\[
\mu(A) = \mu(A_{W}) + \mu(A_{U}) + h^0(c) + h^1(c).
\]
The flat connection $c$ is on a 3-sphere, so $h^1(c)=0$ and $h^0(c)=3$. Since $a$ and $b$ are
irreducible, so is $A_{W}$. It follows that $\mu(A_W)\geq 0$, see inequality
(\ref{bound1}). The connection $A_{U}$ may be reducible to $S^1$,
but no further, because $\mathbb{U}$ is non-trivial, so $h^0(A_{U})\leq 1$.
It follows from (\ref{boundred}) that $\mu(A_{U})\geq -1$, implying $\mu(A)=\mu(a,\mathbb{X}_{02},b)\geq 2$, a contradiction. \\
\subsection{The $k_i$ maps}\label{sec:secondmaps}
We define $k_0:\text{C}_0\to \text{C}_0$ in this section. Recall from \S \ref{sec:decomp2} that we
have five hypersurfaces $Y_1,Y_2,S_1,S_2,T$ in $X_{03}$
that intersect one another as in Figure \ref{fig:met3}. We define $k_0$ to be
$m_G(\mathbb{X}_{03})$ where $G$ is the family of metrics on $X_{03}$ induced by the set of
hypersurfaces $\mathcal{H}=\{Y_1,Y_2,S_1,S_2,T\}$. The family $G$
is parameterized by a pentagon and has faces $G(Y_{1}), G(Y_{2}), G(S_1), G(S_2), G(T)$,
each of which is an interval of metrics broken along the indicated
hypersurface. See Figure \ref{fig:met4}. Equations (\ref{met1}) and (\ref{met2}) yield
\[
k_0\partial_0-\partial_{0}k_0 = \sum_{S\in\mathcal{H}}m_{G(S)}(\mathbb{X}_{03})
\]
and the argument from \S \ref{sec:firstmaps} shows that
$m_{G(S_1)}(\mathbb{X}_{03}) = m_{G(S_2)}(\mathbb{X}_{03}) = 0$.
We also have $m_{G(Y_{1})}(\mathbb{X}_{03}) = h_{1}f_0$ and $m_{G(Y_{2})}(\mathbb{X}_{03}) = f_{2}h_0$ by
(\ref{met3}). Thus
\[
k_0\partial_0 -\partial_0k_0= m_{G( T )}(\mathbb{X}_{03}) + f_2h_0 + h_1f_0,
\]
or in other words, $k_0$ is a chain homotopy from $-m_{G(T)}(\mathbb{X}_{03})$ to $f_{2}h_{0} + h_{1}f_{0}$.
The proof is thus complete if we establish
\begin{figure}[t]
\includegraphics[scale=.8]{met4.pdf}
\caption{The family of metrics on $X_{03}$ used to define the $k_0$ map. This picture
is modelled on Bloom's from \protect\cite{bloom}.}
\label{fig:met4}
\end{figure}
\begin{lemma}
\emph{$m_{G(T)}(\mathbb{X}_{03})$} is chain homotopic to \emph{$\pm\text{id}:\text{C}_0\to \text{C}_0$}. \label{chainhom}
\end{lemma}
\noindent The remainder of this section goes towards proving this lemma.
From \S \ref{sec:decomp2}, we know the hypersurface $T$ induces a decomposition
\[
X_{03}=V \cup_{T} E
\]
where $E$ is diffeomorphic to
$-\mathbb{C}\mathbb{P}^2$ minus a regular neighborhood of an unknotted $S^1$. Let
$\mathbb{V}, \mathbb{E}$ be the restrictions of $\mathbb{X}_{03}$ to $V, E$, respectively.
The restriction of $G(T)$ to $V$ is a single metric. On the other hand, the
restriction of $G(T)$ to $E$ is an interval of metrics, and we denote this
family by $G_T$, see Figure \ref{fig:met7}. We arrange that the perturbations
used near $T$ are zero, so that the relevant limiting connections are flat.
The map $m_{G(T)}(\mathbb{X}_{03})$ is defined by counting isolated points
$[A]\in M_{G(T)}(\mathfrak{a},\mathbb{X}_{03},\mathfrak{b})_0$. That is,
\[
\langle m_{G( T )}(\mathbb{X}_{03}) \mathfrak{a},\mathfrak{b}\rangle = \#M_{G_T}(\mathfrak{a},\mathbb{X}_{03},\mathfrak{b})_0
\]
where $\#$ means a signed count determined by orienting moduli spaces.
Note that $\mu(A)=-1$ since $\dim G(T) = 1$.
Let $a$ and $b$ be the limiting connections of $A$ on $\mathbb{Y}_0$ and $\mathbb{Y}_3$, respectively,
so $[a]=\mathfrak{a}$ and $[b]=\mathfrak{b}$.
Each such $A$ can be written as a pair
\begin{equation}
A_V,(A_E,g)\label{pair}
\end{equation}
where $A_V$ is an instanton on $\mathbb{V}$
with limit $a$ over $\mathbb{Y}_0$, $b$ over $\mathbb{Y}_3$,
and some flat limit $c$ over $\mathbb{T}$; and $A_E$ is a $g$-instanton on $\mathbb{E}$
where $g\in G_T$, and $A_E$ has the same flat limit $c$ over $\mathbb{T}$.
First, let us understand $\mathfrak{T}=\mathfrak{C}(\mathbb{T})$, the
space of $\mathscr{G}_\text{ev}$-classes of flat connections on $\mathbb{T}$.
Recall that $\mathbb{T}$ is a trivial $\text{SO}(3)$-bundle over an $S^1\times S^2$.
Choose a spin structure for $\mathbb{T}$, i.e. a lift to an $\text{SU}(2)$-bundle. Lifting connections
sets up a bijection between flat $\text{SO}(3)$-connections modulo $\mathscr{G}_\text{ev}$ on $\mathbb{T}$
with flat $\text{SU}(2)$-connections modulo $\text{SU}(2)$ gauge transformations. It is well-known that this
latter set is in correspondence with $\text{Hom}(\pi_1(T),\text{SU}(2))$ modulo conjugation, which is
essentially the set of conjugacy classes of $\text{SU}(2)$.
The space of conjugacy classes of $\text{SU}(2)$ is $[-1,1]$, given
by the trace map divided by $2$.
The isomorphism $\mathfrak{T} \simeq [-1,1]$ depends on the spin structure of $\mathbb{T}$
chosen. There are two such choices, and they
are related by any \textit{non}-even gauge transformation of $\mathbb{T}$;
using such a transformation the
isomorphisms $\mathfrak{T}\simeq [-1,1]$ are related by reflecting $[-1,1]$ about $0$. The choice of
isomorphism can also be determined by choosing a trivial holonomy flat connection
on $\mathbb{T}$; this choice corresponds to $1\in[-1,1]$.
We record the following.
\begin{lemma}
A choice of spin structure for $\mathbb{T}$ determines an isomorphism $\mathfrak{T}\simeq [-1,1]$.
The action on $\mathfrak{T}$ by $\mathscr{G}/\mathscr{G}_\text{ev}\simeq\mathbb{Z}/2$ under
this isomorphism is reflection about $0$. \label{lem:flat}
\end{lemma}
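The parameterization of conjugacy classes by half the trace is easy to see numerically. The following Python sketch (ours, purely illustrative and not part of the argument) samples elements of $\text{SU}(2)$ and checks that $\operatorname{tr}/2$ is a real, conjugation-invariant quantity with values in $[-1,1]$:

```python
import cmath, math, random

def su2(theta, phi, psi):
    # a generic SU(2) matrix [[a, b], [-conj(b), conj(a)]] with |a|^2 + |b|^2 = 1
    a = cmath.exp(1j * phi) * math.cos(theta)
    b = cmath.exp(1j * psi) * math.sin(theta)
    return [[a, b], [-b.conjugate(), a.conjugate()]]

def mat_mul(m, n):
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def su2_inv(m):
    # the inverse of a special unitary matrix is its conjugate transpose
    return [[m[0][0].conjugate(), m[1][0].conjugate()],
            [m[0][1].conjugate(), m[1][1].conjugate()]]

def half_trace(m):
    t = (m[0][0] + m[1][1]) / 2
    assert abs(t.imag) < 1e-12          # the trace of an SU(2) matrix is real
    return t.real

random.seed(0)
for _ in range(50):
    u = su2(*(random.uniform(0, math.tau) for _ in range(3)))
    g = su2(*(random.uniform(0, math.tau) for _ in range(3)))
    t = half_trace(u)
    assert -1 - 1e-12 <= t <= 1 + 1e-12                 # values in [-1, 1]
    conj = mat_mul(mat_mul(g, u), su2_inv(g))
    assert abs(half_trace(conj) - t) < 1e-9             # conjugation invariant
```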
\noindent We can now understand the structure of the relevant moduli space
following basic index computations.
Write $\mathfrak{T}^0$ for the interior of
$\mathfrak{T}$, and $G_T^0$ for the interior of $G_T$.
\begin{lemma}\label{lem:fiber}
The moduli space $M(\mathfrak{a},\mathbb{X}_{03},\mathfrak{b})_0$ can be identified
with the fiber product
\[
M(\mathfrak{a},\mathbb{V},\mathfrak{b},\mathfrak{T}^0)_0\times_{\mathfrak{T}^0}M_{G_T^0}(\mathfrak{T}^0,\mathbb{E})^\text{red}_1
\]
after a suitable perturbation.\label{lemma:fiber}
\end{lemma}
\noindent The moduli space on the right is the space of pairs $([A_E],g)$ where
$g\in G_T^0$ and $A_E$ is a $g$-instanton on $\mathbb{E}$ (exponentially decaying over the ends),
such that the flat limit class of $A_E$ over $T$ lies in the interior of $\mathfrak{T}$;
$h^0(A_E)=1$, i.e. $A_E$ has gauge-stabilizer $S^1$; and
$\mu(A_E)=1-h^0(A_E)-\dim G_T^0-\dim\mathfrak{T}^0=-2$.
In other words, the lemma says that in the pair (\ref{pair}) representing
$[A]\in M(\mathfrak{a},\mathbb{X}_{03},\mathfrak{b})_0$,
we have the constraints
\begin{align}
\mathfrak{c} = [c] \in\mathfrak{T}^0, \quad g\in G_T^0,\quad \mu(A_V)=-1,\quad \mu(A_E)=-2.\label{eq:constraints}
\end{align}
The fiber product is taken with respect to limit maps $\lambda:M\to\mathfrak{T}^0$
that send an instanton class to its flat limit class over $\mathbb{T}$, where
$M$ is one of the two moduli spaces appearing in the lemma.
This fiber product description is an application of the Morse-Bott gluing
theory as discussed in \cite[\S 4.5.2]{d} and \cite{mrowka,mmr,t}. Our situation,
that of instantons broken along $S^1\times S^2$ with flat limits in
$\mathfrak{T}\simeq [-1,1]$, is similar to that of Fintushel and Stern's in \cite{fs2tor},
where results of Mrowka's thesis \cite{mrowka} are used, and we will refer the reader
to these sources for more details.
We mention that for the above fiber product it is important that
the stabilizers of $c$ and $A_E$, each a circle, can be identified.
In general, one must record a gluing parameter in $\Gamma_c/\Gamma_{A_V}\times\Gamma_{A_E}$ where
$\Gamma_A$ is the stabilizer of $A$.
For instance, if both $[A_V]$ and $[A_E]$ were irreducible, there would be more than one choice of such a parameter.
We proceed to prove that the constraints (\ref{eq:constraints}) characterize
the possible gluing data.
\begin{proof}[Proof of Lemma \ref{lemma:fiber}]
We first show $\mathfrak{c}\in\mathfrak{T}^0$.
For convenience we set
\[
h(c)=(h^0(c)+h^1(c))/2.
\]
We note that $h(c)=1$ or $3$, depending on whether $\mathfrak{c}$ is in the
interior or boundary of $\mathfrak{T}$, respectively, cf. \cite[\S 3]{fs2tor}.
By assumption $\mu(A)=-1$, so (\ref{glue}) yields
\[
-1 = \mu(A) = \mu(A_V) +\mu(A_E) + 2h(c).
\]
Let $A_{S^1\times D^3}$ be a connection on the trivial bundle over $S^1\times D^3$
with one cylindrical end attached.
We identify the bundle over cross-sections of the end with $\mathbb{T}$,
with the base having the opposite orientation of $T$. Suppose
$A_{S^1\times D^3}$ has flat limit $c$. We glue $A_{S^1\times D^3}$ to $A_E$ to obtain a
connection $A_{-\mathbb{C}\mathbb{P}^2}$ on a non-trivial bundle $\mathbb{E}'$ over
$-\mathbb{C}\mathbb{P}^2$. The isomorphism class of $\mathbb{E}'$ depends on $c$, but we know
$p_1(\mathbb{E}')=4k-1$ for some $k\in\mathbb{Z}$, cf. \cite[\S 4.1.4]{dk}. We have
\[
\mu(A_E) + \mu(A_{S^1\times D^3}) + 2h(c)=\mu(A_{-\mathbb{C}\mathbb{P}^2}).
\]
We compute $\mu(A_{S^1\times D^3})$.
Two copies of $S^1\times D^3\times\text{SO}(3)$, each with a cylindrical end,
glue, overlapping the ends, to give $S^1\times S^3\times\text{SO}(3)$. Index additivity yields
\[
2\mu(A_{S^1\times D^3})+ 2h(c) = \mu(S^1\times S^3\times\text{SO}(3)).
\]
On the other hand, (\ref{closed}) says the right hand side is
\[
-3(1-b_1+b_2^+)(S^1\times S^3) = 0.
\]
Thus $\mu(A_{S^1\times D^3}) = -h(c)$. This can also
be deduced from the Atiyah-Patodi-Singer index theorem,
cf. \cite[Thm. 3.10]{aps}. From (\ref{closed}) we obtain
$\mu(A_{-\mathbb{C}\mathbb{P}^2})=-8k-1$, and then
\[
\mu(A_V) = 8k - h(c), \qquad \mu(A_E) = -8k - 1 - h(c).
\]
Suppose for contradiction that $\mathfrak{c}$ is on the boundary of $\mathfrak{T}$, so that $h(c)=3$.
Since $A_V$ is irreducible and the boundary of $\mathfrak{T}$ has dimension $0$, we have
\[
8k-3=\mu(A_V) \geq 0
\]
in the generic case, so $k > 0$. Since $\mathbb{E}'$ is nontrivial, $h^0(A_E)\in\{0,1\}$.
Using (\ref{bound1}), we find
\[
-8k-4 = \mu(A_E) \geq -\dim G_T -\dim \partial \mathfrak{T} - h^0(A_E) \geq -2.
\]
Then $k < 0$, a contradiction. Thus $h(c)=1$ and $\mathfrak{c}\in\mathfrak{T}^0$.
It follows that $\mu(A_V)=8k-1$ and $\mu(A_E)=-8k-2$.
Applying (\ref{bound1}) in this case,
\[
\mu(A_E) \geq -\dim\mathfrak{T}^0 -\dim G_T -h^0(A_E) \geq -3,
\]
so $k \leq 0$. Similarly, $\mu(A_V) \geq -\dim\mathfrak{T}^0 = -1$
gives $k\geq 0$. Thus $k=0$, yielding $\mu(A_V) = -1$ and $\mu(A_E)=-2$,
as claimed.
Next, we rule out the possibility that $h^0(A_E)=0$,
that is, that $[A]\in M_{G(T)}(\mathfrak{a},\mathbb{X}_{03},\mathfrak{b})_0$
can be written as a gluing of $[A_V]$ and $([A_E],g)$ where
$A_E$ is {\em irreducible}, i.e.
\[
([A_E],g)\in M_{G_T}(\mathfrak{T}^0,\mathbb{E})^\text{irr}_0.
\]
Note that if there were such a gluing, we would have to keep track of a gluing
parameter, as mentioned earlier.
However, this moduli space of irreducibles and $M(\mathfrak{a},\mathbb{V},\mathfrak{b},\mathfrak{T}^0)_0$
are both finite sets after perturbation, by standard compactness
results, cf. \cite[\S 5]{fs2tor}. Further, the intersection of their flat limits in $\mathfrak{T}^0$
can be made transverse, in which case they have empty intersection.
Thus, after a suitable perturbation, $h^0(A_E)=1$.
Finally, we show $g\in G_T^0$. Suppose for contradiction that $g\in\partial G_T$. Then
$g$ is one of two metrics on $E$, $G_T(S_1)$ or $G_T(S_2)$, cut along $S_1$ or
$S_2$, respectively. See Figure \ref{fig:met7}. Suppose $g=G_T(S_1)$; the other case is similar.
Write
\[
E = X\cup_{S_1} U
\]
where $U\simeq -\mathbb{C}\mathbb{P}^2\setminus\text{int}(D^4)$
and $X\simeq D^2\times S^2\setminus \text{int}(D^4)$. Note that the restriction of $\mathbb{E}$
over $X$ is trivial, while the restriction over $U$, as in \S \ref{sec:firstmaps},
is non-trivial; write $A_X$ and $A_{U}$
for the restriction of $A_E$
over these respective bundles. They have a common
flat limit $d$ on $\mathbb{S}_{1}$. In particular, $h^0(d)=3$ and $h^1(d)=0$.
The connection $A_X$ has the limit $c$ over $\mathbb{T}$ from before.
We compute $\mu(A_X)$ and $\mu(A_{U})$. There
is only one instanton class on $X$:
the trivial class, cf. \cite[\S 7.4.1]{d}. Thus $A_X$ is trivial,
so $h(c)=3$. Let $A_{S^1\times D^3}$ be a connection on the trivial bundle over
$S^1\times D^3$ with one cylindrical end attached
whose flat limit is $c$. Then $A_X$ and $A_{S^1\times D^3}$ glue,
overlapping ends, to give a connection $A_{D^4}$ over $D^4$ with one cylindrical end attached.
Then (\ref{glue}) and (\ref{closed}) yield
\[
\mu(A_X) + \mu(A_{S^1\times D^3}) + 2h(c) = \mu(A_{D^4}) = -3.
\]
From above, $\mu(A_{S^1\times D^3})=-h(c)=-3$. Thus $\mu(A_X) = -6$.
With $\mu(A_{U})=8k-1$ for some $k\in\mathbb{Z}$, we apply (\ref{glue}) once more to get
\[
\mu(A_E) = \mu(A_X) + \mu(A_{U}) + 2h(d) = 8k-4.
\]
It follows that $\mu(A_E)\neq-2$, a contradiction.
\end{proof}
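The case analysis in the proof above is elementary bookkeeping in $k$ and $h(c)$. As a sanity check, the following Python sketch (ours, illustrative only) enumerates the constraints and confirms that $(h(c),k)=(1,0)$ is the only consistent choice:

```python
# Index bookkeeping from the proof: with h = h(c) and p_1(E') = 4k - 1,
# the gluing formulas give mu(A_V) = 8k - h and mu(A_E) = -8k - 1 - h,
# and genericity imposes the lower bounds used in the text.

def consistent(h, k):
    mu_V = 8 * k - h
    mu_E = -8 * k - 1 - h
    assert mu_V + mu_E + 2 * h == -1      # total index mu(A) = -1
    if h == 3:                            # flat limit on the boundary of T
        return mu_V >= 0 and mu_E >= -2
    return mu_V >= -1 and mu_E >= -3      # h == 1, limit in the interior

solutions = [(h, k) for h in (1, 3) for k in range(-20, 21) if consistent(h, k)]
assert solutions == [(1, 0)]              # forces mu(A_V) = -1, mu(A_E) = -2
```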
\begin{lemma}
The projection $M_{G_T^0}(\mathfrak{T}^0,\mathbb{E})_1^\text{red}\to G_T^0$ is a smooth homeomorphism.\label{lem:met}
\end{lemma}
\begin{proof}
The moduli space here is topologized as a subset of $\mathscr{B}\times G_T^0$,
so the projection map is a continuous, open map. It is also smooth,
in the transverse case, by general theory. It suffices to show
bijectivity. The argument is a standard account of counting reducible instantons.
Let $([A_E],g)$ be such that $\mu(A_E)=-2,h^0(A_E)=1$ and $g\in G_T^0$.
Because $H^1(E;\mathbb{R})=0$, $E$ admits no non-trivial real line bundles.
Thus $h^0(A_E)=1$ implies $A_E$ is compatible
with a splitting $\mathbb{L} \oplus \underline{\mathbb{R}}$ of the associated vector bundle of $\mathbb{E}$,
where $\mathbb{L}$ is a complex line bundle and $\underline{\mathbb{R}}$
is a trivial real line bundle.
Gluing $A_E$ to a connection $A_{S^1\times D^3}$ on a trivial
bundle over $S^1\times D^3$ with one cylindrical end attached
gives an instanton $A_{-\mathbb{C}\mathbb{P}^2}$ on a bundle
$\underline{\mathbb{R}}'\oplus \mathbb{L}'$ over $-\mathbb{C}\mathbb{P}^2$ where $\underline{\mathbb{R}}'$ and
$\mathbb{L}'$ are extensions of $\underline{\mathbb{R}}$ and $\mathbb{L}$.
The gluing formula says
\begin{equation}\label{p1}
\mu(A_E) + \mu(A_{S^1\times D^3}) + h^0(c) + h^1(c) = \mu(A_{-\mathbb{C}\mathbb{P}^2}) =
-2p_1(\underline{\mathbb{R}}'\oplus \mathbb{L}')-3.
\end{equation}
Using that $p_1(\underline{\mathbb{R}}'\oplus \mathbb{L}')=c_1(\mathbb{L}')^2$
we have $\mu(A_E)= -2c_1(\mathbb{L}')^2-4$. Since $\mu(A_E)=-2$, we conclude that $c_1(\mathbb{L}')^2=-1$.
Let $P(E)$ denote the image of the map $H^2(E,\partial E;\mathbb{Z})\to H^2(E;\mathbb{Z})$.
Note that the inclusion $E\to-\mathbb{C}\mathbb{P}^2$ induces an isomorphism of intersection forms from $H^2(-\mathbb{C}\mathbb{P}^2;\mathbb{Z})$ to $P(E)$,
both negative definite of rank 1, under which $c_1(\mathbb{L}')$ is sent to $c_1(\mathbb{L})$. It follows that $c_1(\mathbb{L})$ is a generator of $H^2(E;\mathbb{Z})$.
There are thus two choices of $\mathbb{L}$ corresponding
to the choices of generator for $H^2(E;\mathbb{Z})$. To get one from the other take the
conjugate $\mathbb{L}^*$. The choice we make does not matter in the end, as we can relate
the two by an even gauge transformation, by combining the conjugation map $\mathbb{L}\to\mathbb{L}^\ast$
with the involution of $\underline{\mathbb{R}}$ that
reflects each fiber. Note
that $\mathscr{G} = \mathscr{G}_\text{ev}$ for $\mathbb{E}$.
We are left with the problem of finding $g$-instantons on $\mathbb{L}$.
According to \cite[Prop. 4.9]{aps}, the space of $L^2$
harmonic 2-forms on $E$
is isomorphic to the image of $H^2(E,\partial E;\mathbb{R})\to H^2(E;\mathbb{R})$,
and under this isomorphism a harmonic form $x$ corresponds to its de Rham
class $[x]$. In our case this map is an isomorphism $\mathbb{R}\to\mathbb{R}$. Further,
any such harmonic $x$ satisfies $\star x = -x$, as follows:
$\star x$ is $L^2$ harmonic, so $\star x =cx$ for some $c\in\mathbb{R}$; then
$\star^2=1$, $\int x\wedge x<0$, and
$0\leq \left\| x \right\|_{L^2}^2 = \int x\wedge \star x = c\int x\wedge x$ imply that $c=-1$.
Conversely, a closed $L^2$ 2-form $x$ satisfying $\star x = -x$ is easily seen to be
$L^2$ harmonic.
The arguments from \cite[\S 2.2.1]{dk} easily
adapt here, since $H^1(E;\mathbb{R})=0$, to show
that given a closed $L^2$ 2-form $x$ on $E$, there is a
connection $A$ on $\mathbb{L}$ with curvature $ix$ which is unique
up to gauge equivalence. In this way, the unique $L^2$ harmonic
2-form representing $-2\pi c_1(\mathbb{L})$ specifies
a unique $g$-instanton class on $\mathbb{L}$.
\end{proof}
\begin{lemma}
The moduli space $M_{\partial G_T}(\partial \mathfrak{T},\mathbb{E})_0^\text{red}$ consists of two points,
and is the natural boundary of the open interval $M_{G_T^0}(\mathfrak{T}^0,\mathbb{E})_1^\text{red}$.
\end{lemma}
\begin{proof}
The previous lemma tells us that the ends of the latter moduli space
are essentially the ends of $G_T$.
There are two endpoint metrics of $G_T$, labelled
$G_T(S_1)$ and $G_T(S_2)$, each broken along the indicated 3-sphere.
Any instanton $A$ on $\mathbb{E}$
compatible with $G_T(S_1)$ is a gluing of
the trivial instanton on the trivial bundle over $X\simeq D^2\times S^2\setminus \text{int}(D^4)$
with two cylindrical ends attached
and an instanton $A_{U}$ on
$U\simeq-\mathbb{C}\mathbb{P}^2\setminus \text{int}(D^4)$
with one cylindrical end attached.
By the removable singularities theorem of Uhlenbeck,
cf. \cite[Thm. 4.4.12]{dk}, the instanton $A_{U}$
uniquely extends to an instanton $A$ on a bundle $\mathbb{W}$
over $-\mathbb{C}\mathbb{P}^2$. If $A$ is to be a
limit of elements in $M_{G_T^0}(\mathfrak{T}^0,\mathbb{E})_1^\text{red}$, then $p_1(\mathbb{W})=-1$.
There is only one such instanton class
on $\mathbb{W}$, cf. \cite[\S 2.7]{kmu}.
Thus $[A]$ is uniquely determined. Similarly, there is one instanton
class to add for $G_T(S_2)$. That $A$ is trivial over $X$
implies the flat limits over $\mathbb{T}$ of these two instanton classes lie in $\partial\mathfrak{T}$.
\end{proof}
Note that the map in Lemma \ref{lem:met} extends
to a homeomorphism of closed intervals. We write
$M_{G_T}(\mathfrak{T},\mathbb{E})_1^\text{red}$
for the completed closed interval moduli space. We
call a map between closed intervals \textit{proper}
if it sends boundary to boundary. A proper map between
oriented, closed intervals has
a well-defined degree, which is $0$ or $\pm 1$. Indeed,
one can define the degree by looking at the induced map
$S^1\to S^1$ obtained by identifying boundary points.
\begin{lemma}
The map $\lambda:M_{G_T}(\mathfrak{T},\mathbb{E})_1^\text{red}\to\mathfrak{T}$ defined
by sending an instanton class to its flat limit class over $\mathbb{T}$ has degree $\pm 1$.
\end{lemma}
\begin{proof}
We use the involution $\sigma:\mathbb{E}\to\mathbb{E}$ of \S \ref{sec:involution}.
Write $M$ for the moduli space in the lemma.
We see that $\sigma$ induces an action on $M$, and
because $\sigma(\mathbb{T})=\mathbb{T}$, an action on $\mathfrak{T}$.
We can arrange the family of metrics $G_T$ so that
$\sigma$ restricts to an isometry of the base space and reflects $G_T$,
in turn swapping the endpoints of the interval $M$.
If we establish that $\sigma$ also swaps the endpoints of the interval $\mathfrak{T}$,
we are done, because the limit map $\lambda$ respects the
action of $\sigma$. From \S \ref{sec:involution} we know that with respect to a fixed trivialization $\mathbb{T}\simeq S^1\times S^2\times\text{SO}(3)$, $\sigma$
is isotopic to a composition $\theta\circ\upsilon$, where $\theta$ is a diffeomorphism of $S^1\times S^2$ lifted in a trivial way to $S^1\times S^2\times\text{SO}(3)$. The diffeomorphism under consideration acts trivially on $\pi_1(T)$, and hence $\theta$ acts trivially on $\mathfrak{T}$. The map $\upsilon$ is a non-even gauge transformation, so by Lemma \ref{lem:flat}, it reflects the interval $\mathfrak{T}$. It follows that $\sigma$ reflects $\mathfrak{T}$.
\end{proof}
\begin{proof}[Proof of Lemma \ref{chainhom}]
By our fiber product description of $M_{G_T}(\mathfrak{a},\mathbb{X}_{03},\mathfrak{b})_0$ we can write
\[
\#M_{G_T}(\mathfrak{a},\mathbb{X}_{03},\mathfrak{b})_0 =\pm\sum\varepsilon(x)\varepsilon(y)
\]
where the sum is over pairs
\[
(x,y)\in M(\mathfrak{a},\mathbb{V},\mathfrak{b},\mathfrak{T}^0)_0\times M_{G_T^0}(\mathfrak{T}^0,\mathbb{E})_1^\text{red}
\]
having equal flat limit class $\lambda(x)=\lambda(y)\in\mathfrak{T}^0$.
Each $x$ and $y$ has a sign, $\varepsilon(x)$ and $\varepsilon(y)$ respectively, prescribed by orienting
moduli spaces. In the generic case, the sum of the
$\varepsilon(y)$ for a fixed value $\lambda(y)$ equals $\pm\text{deg}(\lambda)=\pm 1$. In this way
we obtain
\[
\#M_{G_T}(\mathfrak{a},\mathbb{X}_{03},\mathfrak{b})_0=\pm\#M(\mathfrak{a},\mathbb{V},\mathfrak{b},\mathfrak{T}^0)_0
\]
where the sign does not depend on the pair $(\mathfrak{a},\mathfrak{b})$.
Thinking of cobordisms as morphisms, we abbreviate $[0,1]\times \mathbb{Y}_0$ to $1_{\mathbb{Y}_0}$. Write
$1_{\mathbb{Y}_0} = \mathbb{V} \cup_{\mathbb{T}} \mathbb{W}$ where
$\mathbb{W}$ is a trivial bundle over $W=S^1\times D^3$. We choose
the perturbation data for $\mathbb{W}$ to be $0$. Let $Q$ be the family of metrics
on $[0,1]\times Y_0$ induced by $\mathcal{H}=\{T\}$. The boundary of $Q$ consists of an initial
product metric on $[0,1]\times Y_0$ and a metric $Q(T)$ cut along $T$. Thus (\ref{met1}) and
(\ref{met2}) yield
\[
-m_{Q}(1_{\mathbb{Y}_0}) \partial_0-\partial_0 m_{Q}(1_{\mathbb{Y}_0}) = m_{Q(T)}(1_{\mathbb{Y}_0}) + m(1_{\mathbb{Y}_0}).
\]
Of course, $m(1_{\mathbb{Y}_0})$ is the identity. It remains to show
$m_{Q(T)}(1_{\mathbb{Y}_0})=\pm m_{G(T)}(\mathbb{X}_{03})$, or
\begin{equation}
\#M_{Q(T)}(\mathfrak{a},\mathfrak{b})_0=\pm\#M(\mathfrak{a},\mathbb{V},\mathfrak{b},\mathfrak{T}^0)_0 \label{eq:fib2}
\end{equation}
where again the sign does not depend on the pair $(\mathfrak{a},\mathfrak{b})$.
In the spirit of our previous arguments, we establish this by arguing that
$M_{Q(T)}(\mathfrak{a},\mathfrak{b})_0$
can be written as a fiber product
\[
M(\mathfrak{a},\mathbb{V},\mathfrak{b},\mathfrak{T}^0)_0\times_{\mathfrak{T}^0}M(\mathfrak{T}^0,\mathbb{W})_1^\text{flat}.
\]
Here $M(\mathfrak{T}^0,\mathbb{W})_1^\text{flat}$ is the 1-dimensional family of flat connection
classes on $\mathbb{W}$ with arbitrary flat limit class in $\mathfrak{T}^0$.
Indeed, any flat connection class on $\mathbb{T}$ uniquely extends to a flat connection class
on $\mathbb{W}$ over $S^1\times D^3$.
We conclude that all instantons on $\mathbb{W}$ are flat, cf. \cite[\S 7.4]{d}.
In particular, the limit map $\lambda:M(\mathfrak{T}^0,\mathbb{W})_1^\text{flat}\to\mathfrak{T}^0$
is a smooth homeomorphism. Now suppose $[A]\in M_{Q(T)}(\mathfrak{a},\mathfrak{b})_0$
restricts to a pair $[A_V],[A_\text{W}]$ of instantons on $\mathbb{V}$ and $\mathbb{W}$, respectively,
with equal limit $\mathfrak{c}$ over $\mathbb{T}$. Then
\[
0 = \mu(A) = \mu(A_V)+\mu(A_\text{W}) + 2h(c).
\]
We saw in Lemma \ref{lemma:fiber} that $\mu(A_W)=-h(c)$, so $\mu(A_V)= -h(c)$. The space of $[A_V]$ with
$\mu(A_V)=-2$ is generically empty, so we conclude that $\mu(A_V)=-1$. It follows that
$\mathfrak{c}\in\mathfrak{T}^0$. Because the stabilizer of each $A_W$ is $\text{SU}(2)$,
the gluing parameter space is trivial, and our fiber product description is verified, cf. \cite[\S 4]{fs2tor}.
Because the limit map $\lambda:M(\mathfrak{T}^0,\mathbb{W})_1^\text{flat}\to\mathfrak{T}^0$
is a homeomorphism, our fiber product yields (\ref{eq:fib2}). This completes the proof of Lemma \ref{chainhom},
and consequently the proof of Theorem \ref{thm:floer}.
\end{proof}
\section{Introduction}
Let $X$ be a set and $Bin(X)$ the set of all binary operations on $X$.
We say that $S\subset Bin(X)$ is a distributive set of operations if all
pairs of elements $*_{\alpha},*_{\beta} \in S$ are right distributive,
that is, $ (a*_{\alpha}b)*_{\beta}c= (a*_{\beta}c)*_{\alpha}(b*_{\beta}c)$ (we allow $*_{\alpha}=*_{\beta}$).
$(X;S)$ is called a multi-shelf\footnote{If $(X;*)$ is a magma and $*$ is a right self-distributive operation,
then $(X;*)$ is called a shelf - the term coined by Alissa Crans in her PhD thesis \cite{Cr}.} in this case.
It was observed in \cite{Prz-1} (compare also \cite{Ro-S}) that $Bin(X)$ is a monoid with composition
$*_1*_2$ given by $a*_1*_2b= (a*_1b)*_2b$, with the identity $*_0$ being the right trivial operation, that is,
$a*_0b=a$ for any $a,b\in X$.
The submonoid of $Bin(X)$ of all invertible elements in $Bin(X)$ is a group denoted by $Bin_{inv}(X)$.
If $* \in Bin_{inv}(X)$ then $*^{-1}$ is usually denoted by $\bar *$.
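As a concrete illustration (ours, not from \cite{Prz-1}), the monoid structure of $Bin(X)$ is easy to experiment with on a small set, encoding each operation as a multiplication table:

```python
from itertools import product

X = range(3)

# A binary operation on X is stored as a table: op[a][b] = a * b.
def compose(op1, op2):
    # composition in Bin(X): a (*_1 *_2) b = (a *_1 b) *_2 b
    return [[op2[op1[a][b]][b] for b in X] for a in X]

# The right trivial operation *_0 (a *_0 b = a) is the identity of Bin(X).
identity = [[a for b in X] for a in X]

example = [[(a + b) % 3 for b in X] for a in X]
assert compose(example, identity) == example
assert compose(identity, example) == example

# Composition in Bin(X) is associative; spot-check it on a few operations.
ops = [identity, example, [[(a - b) % 3 for b in X] for a in X]]
for o1, o2, o3 in product(ops, repeat=3):
    assert compose(compose(o1, o2), o3) == compose(o1, compose(o2, o3))
```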
The following important basic lemma was proven in \cite{Prz-1}:
\begin{lemma}\label{Lemma 1}
\begin{enumerate}
\item[(i)] If $S$ is a distributive set and $*\in S$ is invertible, then $S\cup \{\bar *\}$ is also a
distributive set.
\item[(ii)] If $S$ is a distributive set and $M(S)$ is the monoid generated by $S$, then $M(S)$ is a
distributive monoid.
\item[(iii)] If $S$ is a distributive set of invertible operations and $G(S)$ is the group generated by $S$, then
$G(S)$ is a distributive group.
\end{enumerate}
\end{lemma}
The question was asked by J.\ Przytycki which groups can be realized as distributive sets.
Soon after the definition of a distributive submonoid of $Bin(X)$ was given in \cite{Prz-1},
Michal Jablonowski, a graduate student
at Gda\'nsk University, noticed that any distributive monoid whose elements are idempotent operations
is commutative. We have:
\begin{proposition} (\cite{Prz-1})\label{prop:idempotent}
\begin{enumerate}
\item[(i)]
Consider $*_{\alpha},*_{\beta}\in Bin(X)$ such that $*_{\beta}$ is idempotent ($a*_{\beta}a=a$) and
distributive with respect to $*_{\alpha}$. Then $*_{\alpha}$ and $*_{\beta}$ commute. In particular:
\item[(ii)] If $M$ is a distributive monoid and $*_{\beta}\in M$ is an idempotent operation, then $*_{\beta}$
is in the center of $M$.
\item[(iii)] A distributive monoid whose elements are idempotent operations is commutative.
\end{enumerate}
\end{proposition}
\begin{proof} We have: $(a*_{\alpha}b)*_{\beta}b \stackrel{distrib}{=} (a*_{\beta}b)*_{\alpha}(b*_{\beta}b)
\stackrel{idemp}{=}
(a*_{\beta}b)*_{\alpha}b$.
\end{proof}
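A concrete family illustrating part (iii) (our example, not taken from the paper): the Alexander-type operations $a*_tb=ta+(1-t)b$ on $\mathbb{Z}_5$ are idempotent and pairwise right distributive, and a machine check confirms that they commute in $Bin(X)$:

```python
from itertools import product

N = 5
X = range(N)

def alexander(t):
    # the table of a *_t b = t*a + (1 - t)*b (mod N); note t = 1 gives *_0
    return [[(t * a + (1 - t) * b) % N for b in X] for a in X]

def compose(op1, op2):
    # composition in Bin(X): a (*_1 *_2) b = (a *_1 b) *_2 b
    return [[op2[op1[a][b]][b] for b in X] for a in X]

ops = [alexander(t) for t in range(N)]

for op in ops:
    assert all(op[a][a] == a for a in X)           # idempotent
for p, q in product(ops, repeat=2):
    # mutual right distributivity: (a *_p b) *_q c == (a *_q c) *_p (b *_q c)
    assert all(q[p[a][b]][c] == p[q[a][c]][q[b][c]]
               for a, b, c in product(X, repeat=3))
    # and, as the proposition predicts, commutation in Bin(X)
    assert compose(p, q) == compose(q, p)
```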
A few months later Agata Jastrz{\c e}bska (also a graduate student at Gda\'nsk University)
checked that any distributive group in $Bin_{inv}(X)$ for $|X|\leq 5$ is
commutative.
The first
noncommutative subgroup of $Bin(X)$ (the group $S_3$) was found in October of 2011 by Yosef Berman.
Soon after, Berman (with the help of Carl Hammarsten) constructed an embedding of a general dihedral group $D_{2\cdot n}$
in $Bin(X)$ where $X$ has $2n$ elements. Berman's embedding $\phi: D_{2\cdot 3}\to Bin(X)$ is
given as follows:
if $X=\{0,1,2,3,4,5\}$ then the subgroup $D_{2\cdot 3} \subset Bin(X)$ is generated by the binary operations
$*_{\tau}$ (reflection) and $*_{\sigma}$ (a $3$-cycle):
$$*_{\tau}= \left( \begin{array}{cccccc}
1 & 1 & 3 & 5 & 5 & 3 \\
0 & 0 & 4 & 2 & 2 & 4 \\
3 & 3 & 5 & 1 & 1 & 5 \\
2 & 2 & 0 & 4 & 4 & 0 \\
5 & 5 & 1 & 3 & 3 & 1 \\
4 & 4 & 2 & 0 & 0 & 2
\end{array} \right)
\text{ and }
*_{\sigma} = \left( \begin{array}{cccccc}
2 & 4 & 2 & 4 & 2 & 4 \\
5 & 3 & 5 & 3 & 5 & 3 \\
4 & 0 & 4 & 0 & 4 & 0 \\
1 & 5 & 1 & 5 & 1 & 5 \\
0 & 2 & 0 & 2 & 0 & 2 \\
3 & 1 & 3 & 1 & 3 & 1
\end{array} \right). $$
where $i*j$ is placed in the $i$th row and $j$th column,
and $D_{2\cdot 3}=\langle \tau, \sigma \mid \tau^2=\sigma^3=1,\ \tau \sigma \tau = \sigma^{-1} \rangle$.
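Berman's tables can be verified by brute force. The following Python sketch (ours, illustrative) encodes the two tables above, with $i*j$ in row $i$ and column $j$ as in the paper, and checks both the mutual right distributivity and the dihedral relations under the composition of $Bin(X)$:

```python
from itertools import product

# Berman's operation tables on X = {0,...,5}: TAU[i][j] = i *_tau j, etc.
TAU = [[1,1,3,5,5,3],
       [0,0,4,2,2,4],
       [3,3,5,1,1,5],
       [2,2,0,4,4,0],
       [5,5,1,3,3,1],
       [4,4,2,0,0,2]]
SIG = [[2,4,2,4,2,4],
       [5,3,5,3,5,3],
       [4,0,4,0,4,0],
       [1,5,1,5,1,5],
       [0,2,0,2,0,2],
       [3,1,3,1,3,1]]

X = range(6)

def distributive_pair(op1, op2):
    # (a *_1 b) *_2 c == (a *_2 c) *_1 (b *_2 c) for all a, b, c
    return all(op2[op1[a][b]][c] == op1[op2[a][c]][op2[b][c]]
               for a, b, c in product(X, repeat=3))

assert all(distributive_pair(p, q) for p, q in product([TAU, SIG], repeat=2))

def compose(op1, op2):
    # composition in Bin(X): a (*_1 *_2) b = (a *_1 b) *_2 b
    return [[op2[op1[a][b]][b] for b in X] for a in X]

ID = [[a for b in X] for a in X]       # the right trivial identity *_0

# Dihedral relations: tau^2 = id, sigma^3 = id, tau sigma tau sigma = id.
assert compose(TAU, TAU) == ID
assert compose(compose(SIG, SIG), SIG) == ID
assert compose(compose(compose(TAU, SIG), TAU), SIG) == ID
```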
\section{Regular distributive embedding}
We now show that any group $G$ can be embedded in $Bin(X)$ for some $X$.
\begin{theorem}(Regular embedding)\label{Theorem 1}\\
Every group $G$ embeds in $Bin(G)$.
This embedding (monomorphism) $\phi^{reg}: G \rightarrow Bin(G)$ sends $g$ to $*_g$, where $a*_gb=ab^{-1}gb$.
\end{theorem}
\begin{proof}
(i)
We check that the set $\{*_g\}_{g\in G}$ is a distributive set.
We have: \\
$(a*_{g_1}b)*_{g_2}c= (ab^{-1}g_1b)*_{g_2}c= ab^{-1}g_1bc^{-1}g_2c$,
and
$$(a*_{g_2}c)*_{g_1}(b*_{g_2}c)=(ac^{-1}g_2c)*_{g_1}(bc^{-1}g_2c)=
ab^{-1}g_1bc^{-1}g_2c, \mbox{ as needed.}$$
(ii) Now we check that the map $\phi^{reg}$ is a monomorphism.
Of course the image of the identity of $G$ is the identity $*_0$ in $Bin(G)$. Furthermore:
$a*_{g_1g_2}b=ab^{-1}g_1g_2b$, and \\
$a*_{g_1}*_{g_2}b= (a*_{g_1}b)*_{g_2}b=ab^{-1}g_1bb^{-1}g_2b=ab^{-1}g_1g_2b$, as needed.
We have proven that $\phi^{reg}$ is a homomorphism. To show that $\phi^{reg}$ is a monomorphism
we substitute $b=1$ in the formula for $a*_gb$, to get
$a*_{g} 1=ag$, so different $g$'s give different binary operations in $Bin(G)$.
Notice that $\phi^{reg}(g^{-1}) = \bar *_g$.
\end{proof}
We call our embedding {\it regular} by analogy to the regular representation of a group.
We do not claim that the regular embedding is minimal. In fact, finding minimal distributive embeddings
is a very interesting problem in itself.
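For a small noncommutative group the theorem is easy to confirm by machine. The following Python sketch (ours, illustrative) verifies parts (i) and (ii) of the proof, and injectivity, for $G=S_3$ with elements encoded as permutation tuples:

```python
from itertools import product, permutations

G = list(permutations(range(3)))       # S_3 as permutation tuples of (0,1,2)

def mul(p, q):
    # group multiplication: (pq)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    out = [0] * 3
    for i, pi in enumerate(p):
        out[pi] = i
    return tuple(out)

def star(g):
    # the operation *_g of the regular embedding: a *_g b = a b^{-1} g b
    return lambda a, b: mul(mul(mul(a, inv(b)), g), b)

# (i) {*_g : g in G} is a distributive set.
for g1, g2 in product(G, repeat=2):
    s1, s2 = star(g1), star(g2)
    assert all(s2(s1(a, b), c) == s1(s2(a, c), s2(b, c))
               for a, b, c in product(G, repeat=3))

# (ii) phi^reg is a homomorphism: *_{g1} *_{g2} = *_{g1 g2} in Bin(G),
# where composition is a (*_1 *_2) b = (a *_1 b) *_2 b.
for g1, g2 in product(G, repeat=2):
    s1, s2, s12 = star(g1), star(g2), star(mul(g1, g2))
    assert all(s2(s1(a, b), b) == s12(a, b) for a, b in product(G, repeat=2))

# Injectivity: a *_g 1 = a g, so distinct g give distinct operations.
e = (0, 1, 2)
assert len({star(g)(e, e) for g in G}) == len(G)
```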
\section{General conditions for a distributive embedding}
We now discuss a method that can be used to embed groups into subsets of $Bin_{inv}(X)$
satisfying a given condition. We then use this method when the condition is right distributivity,
which led us to discover the regular distributive embedding of $G$ in $Bin(G)$,
and also should be a natural tool to look for minimal embeddings. For the group $S_3$
we know, by Jastrz{\c e}bska's calculations, that $X$ with six elements is the minimal set such that
$S_3$ embeds in $Bin(X)$.
We start from the following basic observation:
\begin{lemma}
There is an isomorphism between $Bin_{inv}(X)$ and $S_{X}^{|X|}$, where $|X|$ is the cardinality of $X$
and $S_{X}$ is the group of permutations of the set $X$ (i.e., bijections of $X$). The isomorphism
$\alpha: Bin_{inv}(X) \to S_{X}^{|X|}=\Pi_{y\in X}{S}_X^y$ is described as follows:
$\alpha (*)(y): X\to X$ is the bijection where
$(\alpha (*)(y))(x)=x*y$. In other words $\alpha (*)(y)$ is the bijection corresponding to the $y$ coordinate of
$S_{X}^{|X|}$.
\end{lemma}
Using the map $\alpha $, we can translate conditions on a set of binary
operations in $Bin(X)$ into a group-theoretic condition on (coordinates of)
elements of $S_{X}^{|X|}.$ With some work, we can use this to find an embedding
of a group into $Bin(X).$ This is possible since the group axioms
require that such an embedding sits inside $Bin_{inv}(X).$
Let us consider distributive, invertible sets $\mathcal{S}$ of
binary operations in $Bin_{inv}(X)$. These are subsets $\mathcal{S}\subseteq
Bin_{inv}(X)$ that satisfy:
\[
(x\ast _{i}y)\ast _{j}z=(x\ast _{j}z) \ast _{i}(y\ast _{j}z),\quad\text{for all }\ast _{i},\ast _{j}\in \mathcal{S}\text{ and }x,y,z\in X.
\]
Let $\sigma _{i,y}=p_{y}\alpha (*_i)$, where $p_{y}:S_{X}^{|X|}\rightarrow
S_{X}$ is the projection onto the $y^{th}$ coordinate. Then, translating the
distributivity condition via $\alpha$, we obtain:
\[
\sigma _{j,z}(x\ast _{i}y)=\sigma _{i,(y\ast _{j}z)}(x*_jz),
\]%
or%
\[
\sigma _{j,z}(\sigma _{i,y}(x))=\sigma _{i,\sigma _{j,z}(y)}(\sigma_{j,z}(x)),
\]%
which leads to%
\[
\sigma _{i,\sigma _{j,z}(y)}=\sigma _{j,z}\sigma _{i,y}\sigma _{j,z}^{-1}.
\]
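As a sanity check, this conjugation condition can be verified directly for the regular operations of the previous section. The sketch below assumes the regular form $x *_g y = x y^{-1} g y$, so that $\sigma_{g,y}(x) = x y^{-1} g y$, and confirms the identity on $S_3$.

```python
from itertools import permutations

def mul(p, q):
    """Group product in S_3: composition of permutation tuples."""
    return tuple(p[q[i]] for i in range(len(q)))

def inv(p):
    """Inverse permutation."""
    out = [0] * len(p)
    for i, x in enumerate(p):
        out[x] = i
    return tuple(out)

G = list(permutations(range(3)))

def sigma(g, y):
    # sigma_{g,y}(x) = x *_g y, with the assumed regular form x y^{-1} g y.
    return {x: mul(mul(mul(x, inv(y)), g), y) for x in G}

for i in G:
    for j in G:
        for y in G:
            for z in G:
                s_jz = sigma(j, z)
                s_iy = sigma(i, y)
                s_jz_inv = {v: k for k, v in s_jz.items()}
                lhs = sigma(i, s_jz[y])
                rhs = {x: s_jz[s_iy[s_jz_inv[x]]] for x in G}
                assert lhs == rhs  # sigma_{i,s(y)} = s sigma_{i,y} s^{-1}

print("conjugation condition verified on S_3")
```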
Now the problem of embedding a group into $Bin_{inv}(X)$ is reduced to finding subsets of $S_{X}^{|X|}$
isomorphic to the group that satisfy the condition above.
We can then use tools of group theory
(e.g., representation theory) to solve the problem.
This process can be attempted for subsets of $Bin_{inv}(X)$ satisfying any condition; it led
to the embedding defined in the previous section for distributive subsets.
\section{Future directions; multi-term homology}
Przytycki defined multi-term homology for any distributive
set in \cite{Prz-1}. This provides motivation for having examples of
distributive sets. The regular embedding of a group (Theorem \ref{Theorem 1}) provides an
interesting family of distributive sets ripe for studying this homology (compare \cite{CPP,Prz-1,Prz-2,Pr-Pu,P-S}).
As a nontrivial example, we propose computing the $n$-term distributive homology related to
the regular embedding of the cyclic group $Z_n$.
Another problem related to Theorem \ref{Theorem 1} is to determine which
monoids are distributive submonoids of $Bin(X)$.
A key motivation is to use multi-term distributive homology in knot theory. This possibility arises from
the relation of the third Reidemeister move with right distributivity (and eventually the Yang-Baxter
operator), and the important
work of Carter, Kamada, Saito, and other researchers on applications of quandle homology
to knot theory (see \cite{CKS}).
\section{Acknowledgements}
I was partially supported by the GWU Presidential Merit Fellowship.\\
I would like to thank Prof. Jozef Przytycki, Carl Hammarsten, and Krzysztof Putyra for helpful discussions.
\section{Introduction}
The fundamental building blocks of the strong interaction have been confirmed to be
quarks and gluons. Mesons, baryons, nuclei, and neutron stars are the observed
forms of strongly interacting matter. Theoretically, other forms of strongly
interacting matter are expected, such as exotic quark-gluon systems, strangelets,
strange stars, and the quark-gluon plasma. Experimental searches for these new
forms of matter have been under way for two decades; although there are some
candidates, nothing new has been firmly established. New facilities are in
operation or under construction for further experimental searches. To help these
search projects, new theoretical input is needed. An estimate of the $d^*$ production cross
section through $\pi d\rightarrow\pi d^*$, aimed at the LAMPF
$\pi$ beam, was reported in 1989\cite{gold1}. A strong production mechanism for $d^*$,
aimed at existing proton machines, has been reported by Wong
\cite{wong1}. Intense electron beam facilities in the
GeV energy range are available. This paper reports an electro-production
calculation of $d^*\left(IJ^p=03^+\right)$\cite{gold1,wang1}.
There are experimental indications of dibaryon states. A high-mass
($\sim2.7~GeV$) dibaryon was reported by the Saclay group\cite{le} and a low-mass
($2.06~GeV$) one was reported by the Moscow-Tuebingen group\cite{bi}, in
addition to other more tentative ones. The H particle\cite{ja} has been
hunted for more than 20 years. Why is $d^*$ interesting?
Dibaryon states are closely related to the hadronic interaction, which
at present is too complicated to be studied directly from the fundamental theory
of the strong interaction, QCD. Many QCD-inspired models have
been developed to describe the hadronic interaction.
The meson exchange
model, based on meson-baryon coupling, was developed long before
QCD\cite{yu}, and it is still the best at fitting the experimental
data quantitatively\cite{dema}. However, its validity within QCD is not clear
at the moment. Moreover, many phenomenological parameters are
fixed in the fitting process, which in turn makes it hard to make definite
predictions for new physics such as the dibaryon states\cite{kf}.
Chiral
perturbation effective field theory\cite{wein} employs Goldstone bosons,
resulting from spontaneous chiral symmetry breaking, as the effective
degrees of freedom in the low energy region, and its extension to the $N$-$N$
interaction is encouraging\cite{ka}. Dibaryons have not yet been studied in
this approach.
L.Ya.~Glozman, D.O.~Riska and G.E.~Brown\cite{gl} proposed that the
Goldstone boson is not only an effective degree of freedom for describing
hadronic interactions but also good for analyzing baryon internal
structure. In their model, constituent quarks and Goldstone bosons are
used as the effective degrees of freedom. Up to now, applications have been
mainly restricted to baryon spectroscopy, except for a study of the
origin of the N-N repulsive core.
A.~Manohar and
H.~Georgi\cite{mg}, however, have argued that between the chiral symmetry
breaking scale ($\sim1~GeV$) and the confinement scale ($\sim0.2~GeV$), the
effective degrees of freedom are Goldstone bosons, constituent quarks
and gluons. Such a hybrid quark-gluon-meson exchange model has been
developed to describe nucleon-baryon interactions and a
semi-quantitative fit has been obtained\cite{fu}. Some dibaryon states have
been studied with this model\cite{fae}.
A constituent quark
and effective one gluon exchange model\cite{rgg} describes hadron
spectroscopy quite well\cite{is1}, but only the repulsive core of the N-N
interaction was obtained when this model was extended to study hadron
interactions\cite{wong}.
The MIT bag model uses current quarks and
gluons to describe hadron internal structure\cite{cjj}. It was used
extensively in the study of dibaryon states in the early 1980's
and resulted in an explosion of predicted dibaryon states
\cite{dsw}. It was realized later that the unphysical boundary condition
should be modified\cite{is2}. One modified version is the R-matrix method
\cite{lo} and the other is the compound quark model approach\cite{sim}.
The Skyrme model\cite{sw} has also been used to study hadron interactions
\cite{wa} and dibaryons\cite{jaf}. Other models could be added to
this list.
It seems hard to discriminate between these models using only hadron spectroscopy
and the existing scattering data on hadron interactions.
Theoretically, it is also hard to justify which effective degrees of freedom
are the proper ones. On the other hand, the well-known phenomena that the
nucleus is a collection of nucleons rather than quarks, and that the
nuclear force is similar to the molecular force except for energy
and length scale differences, have not been explained by any of these
model approaches.
A pure quark-gluon model description of the $N$-$N$ interaction has
been developed\cite{wang1,wang2}. It starts from a multiquark system
and demonstrates that in the $N$-$N$ channels, it is energetically
favorable for the system to cluster into two nucleons and that the
nuclear intermediate range attraction is caused by quark delocalization
similar to the electron delocalization which induces intermediate range
molecular attraction. In the 9- and 12-quark systems with the quantum numbers
of $^3H$, $^3He$ and $^4He$, nucleon clustering has been verified,
as it has been for the two-nucleon system\cite{gold2}.
This model has been extended to $N$-$\Lambda$ and
$N$-$\Sigma$ interactions\cite{wu} and the results show that the quarks
delocalize properly in different channels to induce qualitatively
correct $N$-$N$ (JI = 10, 01, 11, 00), $N$-$\Lambda$
(JI = 1$\frac{1}{2}$, 0$\frac{1}{2}$), and $N$-$\Sigma$
(JI = 1$\frac{1}{2}$, 0$\frac{1}{2}$, 1$\frac{3}{2}$, 0$\frac{3}{2}$)
interactions. For other channels\cite{wang1}, such as $\Delta\Delta$
(JI = 30), $\Delta\Omega$(3$\frac{3}{2}$), $\Delta\Sigma^*$(3$\frac{1}{2}$)
and $\Delta\Xi^*$(31), it is energetically favorable
for the quarks to merge into quark matter instead of two baryons, and
there are often strong effective attractions in these channels. In the
$\Delta\Delta$ (JI = 30), the $d^*$ channel, the effective attraction is so
strong that the total energy of the system($\sim 2.1~GeV$) is near the $NN\pi$
threshold; therefore the $d^*$(JI = 30) might be a narrow resonance state
\cite{wong1}.
Different model approaches give quite different masses for $d^*$.
The meson-baryon coupling model\cite{kf} gave a $\Delta$-$\Delta$
binding of $11-340~MeV$, depending on the coupling constants of $\rho$ and $\omega$
and the hard core radii ($0.2-0.3~fm$). If the hard core radii are larger than
$0.38-0.40~fm$, the binding energy will be less than $10~MeV$. The
quark-meson-gluon hybrid model obtained a binding between $37~MeV$ and $80~MeV$
\cite{yz}. The pure gluon exchange model obtained an $18~MeV$ binding, even though there is a
repulsive core in many other baryon-baryon channels\cite{okay}. The MIT bag
model gives a $d^*$ mass of $2340~MeV$ if the bag radius is adjusted to minimize
the mass. On the other hand, the R-matrix version of the modified bag
model gives a mass as high as $2840~MeV$\cite{lo}. The skyrmion model obtained
a very weak binding of $\sim10~MeV$\cite{wal}. Therefore a $d^*$ study will provide
a critical test of hadron interaction models. (This was already pointed
out in the US Long Range Plan for Nuclear Science in 1996\cite{nsf}.)
Quark delocalization plays a vital role in lowering the $d^*$ mass. In a
variational calculation this is nothing but a method to enlarge
the variational Hilbert space; therefore it might be a general property of
quantum mechanics. If it is really realized in the nucleus (the nucleon-swelling explanation of the EMC effect
\cite{cj} might be taken as evidence of quark delocalization),
it has far-reaching implications.
The phase transition from nuclear matter to the quark-gluon plasma would be at
best a second-order one, or just a crossover as in the transition from atomic gases to plasma, and this would make the already hard identification of
this phase transition even harder\cite{wong1}. A $d^*$ search is a critical test
of the quark delocalization mechanism.
\section{Electro-production mechanism of $d^*$}
$d^*$ is a spin-3 state. Its dominant hadronic component is $\Delta\Delta$.
To produce a $d^*$ from a nuclear target, one has to change two nucleons into
two $\Delta$'s. A virtual photon exchange can only excite one nucleon into
a $\Delta$. Double photon exchange is a fourth-order QED process; its
contribution to the cross section is proportional to $\alpha^4$, too
small an effect. Here $\alpha=e^2/\left(\hbar c\right)$. Therefore the
electro-production of $d^*$ depends critically on the gluon exchange current
in the quark-gluon description and on the meson exchange current in the
hadronic description. The gluon exchange current is unique, as depicted in
Fig.1.
\begin{figure}
\begin{center}
\rotatebox{180}{\psfig{figure=fig1.ps,height=6cm}}
\caption{Quark gluon exchange current}
\rotatebox{180}{\psfig{figure=fig2.ps,height=6cm}}
\caption{Quark meson exchange current}
\end{center}
\end{figure}
However, as mentioned before, some models emphasize the Goldstone boson-quark
coupling. Then there will be a meson exchange current, as depicted in
Fig.2, even within a baryon. This makes the exchange current in the
quark-gluon description rather model dependent. J.A. Gomez Tejedor and E. Oset (GO)
calculated the $ed\rightarrow e^\prime p\Delta$ and $\gamma d\rightarrow$
$\Delta\Delta\rightarrow pn \pi^+\pi^-$ cross sections with hadronic
degrees of freedom\cite{go}. This approach is complicated by the
many effective meson-baryon couplings; however, those coupling constants
have been fixed by the experimental data. We adopt the GO approach for
the $d^*$ electro-production cross section calculation. Based on
isospin and angular momentum conservation, and taking into account the results of GO, the
following Feynman diagrams are included in our calculation:
\begin{figure}
\begin{center}
\rotatebox{180}{\psfig{figure=fig3.ps,height=6cm}}
\caption{Kroll-Ruderman process}
\rotatebox{180}{\psfig{figure=fig4.ps,height=6cm}}
\caption{Meson exchange current}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\rotatebox{180}{\psfig{figure=fig5.ps,height=6cm}}
\caption{$NN$ intermediate state}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\rotatebox{180}{\psfig{figure=fig6.ps,height=6cm}}
\caption{$NN^*$ intermediate state}
\end{center}
\end{figure}
The $d^*$ is an isospin I=0 state,
\begin{eqnarray}
\mid d^*\rangle&=&\frac{1}{2}(\Delta^{++}\Delta^--\Delta^+\Delta^\circ
+\Delta^\circ\Delta^+-\Delta^-\Delta^{++}).
\end{eqnarray}
The Kroll-Ruderman term (Fig.3) does not contribute, due to the cancellation
of the $\pi^+$ and $\pi^-$ exchanges between different components of $d^*$
and $d$.
The meson exchange term (Fig.4) does not contribute for the same
reason. Therefore only Figs.5 and 6 contribute to the $d^*$ production.
For the $NN^*$ intermediate state, only the
$N^*(1520,~IJ^p=\frac{1}{2}\frac{3}{2}^-)$
has been included, because the results of GO\cite{go} show that the
contribution of $NN^*(1520)$ might be more important; the
$NN^*(1440)$ term is left for further refinement. The $\Delta\Delta$ and
the D-wave components of the deuteron are neglected temporarily.
\section{Meson exchange current and cross section}
The general electron scattering cross section formula of Donnelly and
Raskin\cite{dr} will be used to calculate the inelastic $d^*$ production.
\begin{eqnarray}
\frac{d\sigma}{d\Omega}&=&\sigma_M(\nu_LR^L_{fi}+\nu_TR^T_{fi}+
\nu_{TT}R^{TT}_{fi}+\nu_{TL}R^{TL}_{fi})f^{-1}_{rec},
\end{eqnarray}
where $\sigma_M$ is the Mott scattering cross section,
\begin{eqnarray}
\sigma_M&=&\left(\frac{\alpha\cos\frac{\theta_e}{2}}{2\epsilon\sin^2\frac
{\theta_e}{2}}\right)^2,\\
\nu_L&=&\left(\frac{q^2}{\vec{q}^2}\right)^2,\\
\nu_T&=&-\frac{1}{2}\left(\frac{q^2}{\vec{q}^2}\right)+\tan^2\frac{\theta_e}{2},\\
\nu_{TT}&=&-\frac{1}{2}\left(\frac{q^2}{\vec{q}^2}\right),\\
\nu_{TL}&=&\frac{1}{\sqrt{2}}\left(\frac{q^2}{\vec{q}^2}\right)\sqrt{-\left(\frac{q^2}{\vec{q}^2}\right)+\tan^2\frac{\theta_e}{2}},\\
R^L_{fi}&=&\arrowvert\rho(\vec{q})_{fi}\arrowvert^2,\\
R^T_{fi}&=&\arrowvert J(\vec{q},+1)_{fi}\arrowvert^2+\arrowvert J(\vec{q},-1)_{fi}\arrowvert^2,\\
R^{TT}_{fi}&=&2Re\{J^*(\vec{q},+1)_{fi}J(\vec{q},-1)_{fi}\},\\
R^{TL}_{fi}&=&-2Re\{\rho^*(\vec{q})_{fi}(J(\vec{q},+1)_{fi}-J(\vec{q},-1)_{fi})\},\\
f_{rec}&=&1+\frac{2\epsilon}{M_T}\sin^2\frac{\theta_e}{2}.
\end{eqnarray}
$f_{rec}$ is the recoil correction. The four vector current $J^{\mu}(\vec{q})_{fi}$ is the Fourier transformed four vector
transition current of the target,
\begin{eqnarray}
J^{\mu}(\vec{q})_{fi}&=&\int d^3\vec{r}e^{i\vec{q}\cdot\vec{r}}\langle f\arrowvert J^{\mu}(\vec{r})\arrowvert i\rangle,\\
\vec{J}(\vec{q})_{fi}&=&\sum_{m=0,\pm 1} J(\vec{q},m)\vec{e}(\vec{q};1,m),\\
\vec{e}(\vec{q};1,0)&=&\vec{u}_z,~
\vec{e}(\vec{q};1,\pm 1)=\mp\frac{1}{\sqrt{2}}(\vec{u}_x\pm i\vec{u}_y),\\
\vec{u}_x&=&-\frac{\vec{q}\times(\vec{k}\times\vec{k'})}{\arrowvert\vec{q}
\times(\vec{k}\times\vec{k'})\arrowvert},~
\vec{u}_y=\frac{\vec{k}\times\vec{k'}}{\arrowvert\vec{k}
\times\vec{k'}\arrowvert},~
\vec{u}_z=\frac{\vec{q}}{\arrowvert\vec{q}\arrowvert},\\
\rho(\vec{q})_{fi}&=&\frac{\arrowvert\vec{q}\arrowvert}{\omega}J(\vec{q},0)_{fi}.
\end{eqnarray}
The last equation follows from four-vector current conservation. $q$, $k$,
and $k^\prime$ are the four-momentum transfer and the initial and final
momenta of the scattered electron, as depicted in Fig.7; $q=k-k^\prime$.
$\theta_e$ is the electron scattering angle in the lab system.
$M_T$ is the mass of the target, the deuteron mass in our case.
$\epsilon$ ($\epsilon^\prime$) is
the initial (final) energy of
the scattered electron, and $\omega=\epsilon-\epsilon^\prime$.
\begin{figure}
\begin{center}
\rotatebox{180}{\psfig{figure=fig7.ps,height=6cm}}
\caption{The electron-deuteron inelastic scattering}
\end{center}
\end{figure}
For unpolarized electron scattering, the cross terms $R^{TT}_{fi}$ and $R^{TL}_{fi}$ do not contribute. Angular momentum and parity conservation
($IJ^p=01^+$ for $d$ and $03^+$ for $d^*$) restrict the multipole moments
further. Only the following five multipole moments contribute to the $ed\rightarrow ed^*$
process:
\begin{eqnarray}
F^C_2(\arrowvert\vec{q}\arrowvert)&=&\sqrt{\frac{4\pi}{3}}\langle 3\Arrowvert
M_2(\arrowvert\vec{q}\arrowvert)\Arrowvert 1\rangle,\\
F^E_2(\arrowvert\vec{q}\arrowvert)&=&\sqrt{\frac{4\pi}{3}}\langle 3\Arrowvert
T^E_2(\arrowvert\vec{q}\arrowvert)\Arrowvert 1\rangle,\\
F^M_3(\arrowvert\vec{q}\arrowvert)&=&\sqrt{\frac{4\pi}{3}}\langle 3\Arrowvert
iT^M_3(\arrowvert\vec{q}\arrowvert)\Arrowvert 1\rangle,\\
F^C_4(\arrowvert\vec{q}\arrowvert)&=&\sqrt{\frac{4\pi}{3}}\langle 3\Arrowvert
M_4(\arrowvert\vec{q}\arrowvert)\Arrowvert 1\rangle,\\
F^E_4(\arrowvert\vec{q}\arrowvert)&=&\sqrt{\frac{4\pi}{3}}\langle 3\Arrowvert
T^E_4(\arrowvert\vec{q}\arrowvert)\Arrowvert 1\rangle.
\end{eqnarray}
The multipole moments are defined through the following multipole expansion,
\begin{eqnarray}
J(\vec{q},0)_{fi}&=&\frac{\omega}{\arrowvert\vec{q}\arrowvert}\sqrt{4\pi}
\sum_{J\geq 0}\sqrt{2J+1}i^J\langle f\arrowvert M_{J0}(\arrowvert\vec{q}
\arrowvert)\arrowvert i\rangle,\\
J(\vec{q},\pm 1)_{fi}&=&-\sqrt{2\pi}\sum_{J\geq 0}\sqrt{2J+1}i^J\left(\langle f
\arrowvert T^e_{J,\pm 1}(\arrowvert\vec{q}\arrowvert)\arrowvert i\rangle\pm
\langle f\arrowvert T^m_{J,\pm 1}(\arrowvert\vec{q}\arrowvert)\arrowvert i\rangle\right),\\
M_{Jm}(\arrowvert\vec{q}\arrowvert)&=&\int d^3\vec{r}M_{Jm}(\arrowvert\vec{q}
\arrowvert\vec{r})\rho(\vec{r}),\\
T^E_{Jm}(\arrowvert\vec{q}\arrowvert)&=&\int d^3\vec{r}\frac{1}{\arrowvert
\vec{q}\arrowvert}(\bigtriangledown\times M_{JLm}(\arrowvert\vec{q}\arrowvert
\vec{r}))\cdot\vec{J}(\vec{r}),\\
T^M_{Jm}(\arrowvert\vec{q}\arrowvert)&=&\int d^3\vec{r}M_{JLm}(\arrowvert\vec{q}
\arrowvert\vec{r})\cdot\vec{J}(\vec{r}),\\
M_{Jm}(\arrowvert\vec{q}\arrowvert\vec{r})&=&j_J(\arrowvert\vec{q}\arrowvert r)
Y_{Jm}(\hat{r}),\\
M_{JLm}(\arrowvert\vec{q}\arrowvert\vec{r})&=&j_J(\arrowvert\vec{q}\arrowvert r)
Y_{JLm}(\hat{r}),\\
Y_{JLm}(\vec{r})&=&\left[Y_L(\hat{r})\otimes e(\vec{q};1)\right]_{JM}.
\end{eqnarray}
The transition current resulting from Fig.5(a) is
\begin{eqnarray}
\rho\left(\vec{q}\right)&=&eF^N\left(q^2\right)\frac{M_N}{E\left(p\right)}
\frac{1}{p^0-E\left(p\right)+i\epsilon}
\left(\frac{f^*}{\mu}\right)^2\vec{S}^{\dagger}_1\cdot
\left(\vec{q}+\vec{q^\prime}
\right)
\vec{S}^{\dagger}_2\cdot\left(\vec{q}+\vec{q^\prime}\right)\\ \nonumber
&& F_{\pi}^2\left(\left(q+q^\prime\right)^2\right)
\frac{1}{\left(q+q^\prime\right)^2
-\mu^2+i\epsilon}
T^{\dagger}_1\cdot T^{\dagger}_2{\delta}^3\left(\vec{q}+\vec{p}_d
-\vec{p}_{d^*}\right),
\end{eqnarray}
\begin{eqnarray}
\vec{J}\left(\vec{q}\right)&=&e\left\{F^N\left(q^2\right)
\frac{\vec{p}_1+\vec{p}}{2M_N}+\frac{i}{2M_N}
\vec{\sigma}\times\vec{q}G^N_m\left(q^2\right)\right\}
\frac{M_N}{E\left(p\right)}\frac{1}{p^0-E\left(p\right)
+i\epsilon}\left(\frac{f^*}{\mu}\right)^2 \\ \nonumber
&&\vec{S}^{\dagger}_1\cdot\left(\vec{q}+\vec{q^\prime}\right)\vec{S}^{\dagger}_2\cdot
\left(\vec{q}+\vec{q^\prime}\right)F^2_{\pi}\left(\left(q+
q^\prime\right)^2\right) \frac{1}{\left(q+q^\prime\right)^2-\mu^2+i\epsilon}
T^{\dagger}_1\cdot T^{\dagger}_2{\delta}^3(\vec{q}+\vec{p}_d-\vec{p}_{d^*}).
\end{eqnarray}
A similar transition current resulting from Fig.6(a) is
\begin{eqnarray}
\rho(\vec{q})&=&\left(\tilde{g_{\gamma}}-\tilde{g_{\sigma}}\right)
\frac{\vec{S}^{\dagger}_1\cdot\vec{q}}{M_{N^*}-M_N}
\tilde{f}_{N^*\Delta\pi}
\frac{1}{\sqrt{S}-M_{N^*}
+i\frac{\Gamma\left(\sqrt{S}\right)}{2}}\left(-\frac{f^*}{\mu}\right)\\ \nonumber
&&F_{\pi}^2\left(\left(q+q^\prime\right)^2\right)
\frac{\vec{S}^{\dagger}_2\cdot
\left(\vec{q}+\vec{q^\prime}\right)}{\left(q+q^\prime\right)^2-\mu^2
+i\epsilon} T^{\dagger}_1\cdot T^{\dagger}_2{\delta}^3\left(\vec{q}+\vec{p}_d
-\vec{p}_{d^*}\right),
\end{eqnarray}
\begin{eqnarray}
\vec{J}\left(\vec{q}\right)&=&\left(\tilde{g}_{\gamma}-
\tilde{g}_{\sigma}\right)\vec{S}^{\dagger}_1
\tilde{f}_{N^*\Delta\pi}
\frac{1}{\sqrt{S}-
M_{N^*}+i\frac{\Gamma\left(\sqrt{S}\right)}{2}}\left(-
\frac{f^*}{\mu}\right)\\ \nonumber
&&F_{\pi}^2\left(\left(q+q^\prime\right)^2\right)\frac{\vec{S}^{\dagger}_2
\cdot\left(\vec{q}+\vec{q'}\right)}{\left(q+q^\prime\right)^2-\mu^2
+i\epsilon} T^{\dagger}_1\cdot T^{\dagger}_2{\delta}^3
\left(\vec{q}+\vec{p}_d
-\vec{p}_{d^*}\right).
\end{eqnarray}
Fig.5(b) and Fig.6(b) give similar transition currents.
In obtaining these transition currents, the following effective Lagrangians have been used,
\begin{eqnarray}
L_{NN\gamma}&=& - e\overline{\Psi}_N\left(\gamma^\mu A_\mu -\frac{\chi_N}{2M_N}
\sigma^{\mu\nu}\partial_\nu A_\mu\right)\Psi_N \\
L_{N\Delta\pi}&=& - \frac{f^\ast}{\mu}\Psi^\dagger_\Delta S^\dagger_i\left(
\partial_i\phi^\lambda\right)T^{\lambda\dagger}\Psi_N + h.c. \\
L_{NN^*\gamma}&=& \overline{\Psi}_{N^{\ast}}\left[
\tilde{g_\gamma}\vec{S}^\dagger\vec{A}
-i\tilde{g_\sigma}(\vec{S}^\dagger\times\vec{\sigma})
\vec{A}\right]\Psi_N + h.c.\\
L_{N^*\Delta\pi}&=& i \overline{\Psi}_{N^{\ast}}
\tilde{f}_{N^{\ast} \Delta \pi}
\phi^\lambda T^{\lambda\dagger}
\Psi_\Delta + h.c.
\end{eqnarray}
In these expressions $\Phi$, $\Psi_N$, $\Psi_{\Delta}$, $\Psi_{N^*}$, and
$A_{\mu}$ stand for the pion, nucleon, $\Delta$, $N^*(1520)$, and photon fields
respectively; $M_N$ and $\mu$ are the nucleon and pion masses; $\vec{\sigma}$
and $\vec{\tau}$ are the usual $1\over{2}$ spin and isospin Pauli operators;
$\vec{S}^{\dagger}$ and $\vec{T}^{\dagger}$ are the transition spin and
isospin operators from $1\over{2}$ to $3\over{2}$ with the normalization,
\begin{eqnarray}
\langle \frac{3}{2}, M \left| S^{\dagger}_{\nu} \right|
\frac{1}{2}, m \rangle
= C \left( \frac{1}{2}, 1 ,\frac{3}{2} ; m, \nu, M \right)
\end{eqnarray}
All the effective coupling constants, form factors, and propagators
are taken from GO\cite{go}. They are copied here to make this report
self-contained, so that it can be read without checking many references.
$\chi_N = \left\{\begin{array}{c}
1.79\hspace{0.25cm}\text{proton} \\ -1.91 \hspace{0.25cm} \text{neutron}
\end{array} \right\}$ \\
$f^{\ast} = 2.13$\hspace{2cm}
$\tilde{f}_{N^{\ast}\Delta\pi} = 0.677$ \\
$\tilde{g}_\gamma = \left\{\begin{array}{c}
0.108\hspace{0.25cm} \text{proton} \\ -0.129\hspace{0.25cm}\text{neutron}
\end{array} \right\}$ \hspace{1.30cm}
$\tilde{g}_\sigma = \left\{\begin{array}{c}
-0.049 \hspace{0.25cm}\text{proton} \\ -0.0073\hspace{0.25cm} \text{neutron}
\end{array} \right\}$
For the nucleon propagator,
only the positive energy part is retained where $E(p)=\sqrt{M_N^2+\vec{p}^2}$;
for the $N^*$, the finite width $\Gamma$ has been included where
\begin{eqnarray}
S=\left(p^{0}\right)^2-\vec{p}^2,\hspace{1cm}
\Gamma(\sqrt{S})=\Gamma(M_{N^*})q_{cm}^5(\sqrt{S})/q_{cm}^5(M_{N^*}).
\end{eqnarray}
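For orientation, this energy-dependent width is easy to evaluate numerically. The channel entering $q_{cm}$ is not spelled out above; the sketch below assumes it is the pion momentum in the rest-frame $N\pi$ decay of the $N^*(1520)$ and takes an on-shell width of $115~MeV$, both illustrative assumptions rather than values quoted from the text.

```python
import math

M_N, M_PI = 0.938, 0.13957           # nucleon and pion masses (GeV)
M_NSTAR, GAMMA0 = 1.520, 0.115       # assumed N*(1520) mass and width (GeV)

def q_cm(W, m1=M_N, m2=M_PI):
    """Decay momentum in the rest frame of a state of invariant mass W,
    here assumed to be the N pi channel."""
    return math.sqrt((W**2 - (m1 + m2)**2) * (W**2 - (m1 - m2)**2)) / (2.0 * W)

def width(W):
    """Gamma(sqrt(S)) = Gamma(M_N*) q_cm^5(sqrt(S)) / q_cm^5(M_N*),
    the q_cm^5 scaling of the equation above."""
    return GAMMA0 * (q_cm(W) / q_cm(M_NSTAR)) ** 5

print(width(M_NSTAR))   # recovers the on-shell width, 0.115
```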
The form factors are taken from GO\cite{go} to keep the model consistent.
\begin{eqnarray}
F_\pi(q^2) = \frac{\Lambda_\pi^2 - m_\pi^2}{\Lambda_\pi^2-q^2};\hspace{1.5cm}
\, \Lambda_\pi\sim 1250\, MeV
\end{eqnarray}\\
Sachs's form factors are given by
\begin{equation}
G_M^N(q^2) = \frac{\mu_N}{(1 - \frac{q^2}{\Lambda^2})^2};\hspace{1.5cm}
\, G_E^N(q^2) = \frac{1}{(1 - \frac{q^2}{\Lambda^2})^2}
\end{equation}\\
with $\Lambda^2 = 0.71$ $GeV^2$; $\mu_p = 2.793$; $\mu_n = -1.913$.
\\
The relation
between $F_1^p(q^2)$ (Dirac's form factor) and $G_E^p(q^2)$ is :
\begin{equation}
F_1^p(q^2) = G_E^p(q^2)\frac{(1 -\frac{q^2\mu_p}{4m_N^2})}{(1 -
\frac{q^2}{4m_N^2})}
\end{equation}\\
and $F_1^n = 0.$
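The form factors above are simple closed-form expressions; for completeness, a short numerical sketch (GeV units throughout; the function and variable names are ours, not GO's):

```python
def dipole_form_factors(q2, lam2=0.71, mu_p=2.793, mu_n=-1.913, m_N=0.938):
    """Sachs form factors G_E^p, G_M^p, G_M^n and Dirac form factors
    F_1^p, F_1^n at (negative) four-momentum transfer squared q2 (GeV^2)."""
    dipole = 1.0 / (1.0 - q2 / lam2) ** 2
    G_E_p = dipole
    G_M_p = mu_p * dipole
    G_M_n = mu_n * dipole
    F1_p = G_E_p * (1.0 - q2 * mu_p / (4.0 * m_N**2)) / (1.0 - q2 / (4.0 * m_N**2))
    F1_n = 0.0
    return G_E_p, G_M_p, G_M_n, F1_p, F1_n

def pion_form_factor(q2, lam_pi=1.25, m_pi=0.13957):
    return (lam_pi**2 - m_pi**2) / (lam_pi**2 - q2)

print(dipole_form_factors(0.0))        # (1.0, 2.793, -1.913, 1.0, 0.0)
print(pion_form_factor(0.13957**2))    # 1.0: on-shell pion normalization
```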
To calculate the hadronic $d\rightarrow d^\ast$ transition current, a single
Gaussian wave function with a size parameter $b^\ast=0.7~fm$ is used to
approximate the $d^\ast$ internal motion. For the deuteron $d$, a three-Gaussian
wave function fitted to the ground state properties has been used; the three size parameters
are $b=\left(4.0975,~1.8252,~0.8837\right)fm$ with normalization coefficients
$c=\left(0.31491,~0.49716,~0.36926\right)$ \cite{wong1}. The $d^\ast$ mass is
taken to be $2.1~GeV$. Due to the particular spin (1 for $d$ and 3 for $d^*$) and
orbital angular momentum (both 0) internal structure and the spin properties
of the meson exchange current, only the three
transition form factors $F_2^C$, $F_2^E$ and $F_3^M$ remain after the integration over the internal
spin and orbital variables of $d$ and $d^*$. Fig.5 contributes one $F_2^C$,
one $F_3^M$ and two $F_2^E$ terms (from the convective and magnetic currents,
respectively); Fig.6 contributes one $F_2^C$ and one $F_2^E$ term.
\section{Results and discussion}
The calculated $d^\ast$ electro-production cross sections are shown in Fig.8.
The four curves correspond to four electron energies: $0.8$, $1.0$, $1.2$
and $1.5~GeV$. The shape
of the differential cross section is dominated by the Mott cross section
but modulated by the inelastic transition form factors. The value around
30 degrees in the laboratory frame is about 10 nb for a 1 GeV electron.
In Figs.9, 10 and 11, the transition form factors $F_2^C$, $F_2^E$
and $F_3^M$ are
shown. The dashed curves correspond to the
contribution of the $NN$ intermediate state, while the full curves correspond to
the sum of the contributions of the two intermediate states. The $NN$ intermediate
state is the dominant one, except
at large angles, where the $NN^\ast$ contribution is more important. Fig.12
shows the differential cross section due to the $NN$ intermediate state only,
as well as a comparison to the full one.
The produced $d^\ast$ will decay into $NN$ and $NN\pi$. The cross
section, 10 nb around 30 degrees for a 1 GeV electron, is about two orders of magnitude smaller than those of the quasi-elastic
$ed\rightarrow e^\prime NN$ and $\Delta$ resonance production
$ed\rightarrow e^\prime N\Delta
\rightarrow e^\prime NN\pi$ processes. Therefore a normal measurement of
inelastic electron scattering cannot find the signal of $d^\ast$ even if it
exists, i.e., the $d^\ast$ signal will be buried in the quasi-elastic or
$\Delta$ resonance background. Special kinematics and detector systems
must be studied further, both theoretically and experimentally, in order to
pin down the weak signal of the $d^\ast$ resonance against the strong background.
\begin{figure}
\begin{center}
\psfig{figure=sig.eps,height=8cm}
\caption{$ed\rightarrow ed^\ast$ differential cross sections}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\psfig{figure=cha215.eps,height=8cm}
\caption{$d\rightarrow d^\ast$ transition form factors: $\left|F^C_2\right|^2$,
$k_0= 1.5~GeV$, the dashed curve corresponds to
the $NN$ intermediate state contribution, the full curve corresponds to the
sum of $NN$ and $NN^\ast$ intermediate state contributions.}
\psfig{figure=el215.eps,height=8cm}
\caption{Same as FIG. 9 for $\left|F^E_2\right|^2$}
\psfig{figure=mag315.eps,height=8cm}
\caption{Same as FIG. 9 for $\left|F^M_3\right|^2$}
\psfig{figure=sig15.eps,height=8cm}
\caption{Same as FIG. 9 for differential cross sections $d\sigma/d\Omega$}
\end{center}
\end{figure}
In our model calculation, some simplifications have been made. The
D-wave component of the deuteron has been neglected. Even though it is a small
component, its contribution to the $d^\ast$ production might not be small
enough to be neglected, because only a recoupling of spin and orbital
angular momentum is needed to transform the deuteron $^3D_1$ into a $^3D_3$
$NN$ component of $d^\ast$. The initial state correlation, i.e., the
$\Delta$-$\Delta$ component of the deuteron, is an even smaller component, but only a spin
recoupling is needed to transform it into $d^\ast$; therefore its contribution
should be checked as well. $d^\ast$ is a six-quark state\cite{wang1} rather
than a $\Delta$-$\Delta$ bound state. Modeling it as a pure two-$\Delta$
bound state and using a single Gaussian wave function to describe its
internal structure will overestimate the calculated cross section.
The contribution of the $NN^*(1440)$ intermediate state should be added, especially
in the large angle region. Due to these approximations, the calculated cross
section is an estimate of the $d^*$ electro-production. On the other hand, all of the above mentioned corrections
are minor effects, so the order of magnitude (10 nb around 30 degrees for a 1 GeV electron)
obtained from this calculation should be stable against these refinements.
Further calculations are in progress; in particular, a quark model calculation is
being carried out to check whether the single Gaussian approximation of the $d^\ast$ internal
wave function has overestimated the cross section. The results will be reported later.
The electro-production of the d' dibaryon is being measured. This calculation
can be extended to that process and will be a good check not only of the
electro-production model used here but also of the d' dibaryon analysis itself.
Very helpful discussions with C.W. Wong, T. Goldman and Stan Yen are
acknowledged. We are also greatly indebted to J.A. G\'omez Tejedor and E.
Oset for their helpful private communications.
This research is supported by the NSF, the SSTD, and the postdoctoral foundation of the SED of China. Part of the numerical calculation was done on the SGI Origin 2000
in the laboratory of computational condensed matter physics.
\section{Introduction}
Biological systems use specialized motor proteins to convert energy from one form to another \cite{Howard2001, Chowdhury2013}. These nanoscale devices operate in an environment dominated by thermal fluctuations and it has been suggested that they harness thermal fluctuations to perform their tasks effectively \cite{Astumian2007, Ishii2008, Beausang2013}. Single-molecule experiments can probe molecular motor fluctuation statistics \cite{Nishiyama2002, Yildiz2003, Shimabukuro2003, Toba2006} and this provides new opportunities to elucidate molecular motor mechanisms \cite{Svoboda1994, Shaevitz2005}.
The intrinsic stochasticity of molecular motor operation has been formalized in a range of Brownian motion-based theories \cite{Magnasco1994, Fisher1999, Wang2008, Lipowsky2009, Toyabe2010, Chowdhury2013}. One approach has been to describe molecular motors in terms of Brownian motion on a tilted periodic potential, either via continuous Fokker-Planck equations \cite{Magnasco1994, Keller2000, Golubeva2012, VandenBroeck2012, ChallisarXiv} or discrete hopping models \cite{Fisher2005, Lau2007, Schmiedl2008, Zhang2011, Bameta2013}. In the long-time limit, the system can be described as a Gaussian process with an effective drift and diffusion \cite{Lindner2001, Reimann2002, Pavliotis2005}. The drift quantifies the average rate of transport in the system and the diffusion quantifies the thermal fluctuations.
In this paper, we present a systematic theoretical treatment of the thermal fluctuation statistics of a molecular motor. Our approach is based on a multidimensional discrete master equation with nearest-neighbor hopping. This enables us to clarify fundamental aspects of the thermal fluctuation statistics for coupled processes. The particular advantages of our approach are (i) it describes more than one degree of freedom making coupling between degrees of freedom explicit, (ii) it includes loss processes providing access to both the strong and weak coupling regimes, and (iii) it is tractable analytically yielding formal results that build our intuition for these systems. The key result is that energy transfer leads to statistical correlations between thermal fluctuations in different degrees of freedom. These correlations depend on the operating regime of the system and are accessible via the diffusion matrix, discrete hopping statistics, and single trajectories.
This paper is organized as follows. In Sec.\ \ref{sec:background} we present our theoretical description of energy transfer in a molecular motor. In Sec.\ \ref{sec:long} we consider the long-time Gaussian limit of the system and describe signatures of energy transfer in the drift vector and diffusion matrix. In Sec.\ \ref{sec:hop} we derive the waiting time and formal expressions for the probability distribution in the strong and weak coupling regimes both near and far from equilibrium. In Sec.\ \ref{sec:single} we simulate single trajectories of the system. We conclude in Sec.\ \ref{sec:conc}.
\section{Multidimensional Master Equation \label{sec:background}}
Energy transfer in a molecular motor has been described in terms of Brownian motion on a multidimensional free-energy landscape \cite{Magnasco1994, Keller2000, Fisher2005}. In this theory, each degree of freedom is a generalized coordinate capturing the main conformational motions of the motor and representing displacements either in real space or along reaction coordinates \cite{Kramers1940}. In the limit of deep potential wells, the system is confined to (meta-)stable states of the potential and it is intuitive that the continuous theory can be approximated by a discrete equation describing thermally activated hopping between potential wells. We consider the model potential \cite{Magnasco1994, Keller2000}
\begin{equation}
V(\bm r) = V_{\bm 0}(\bm r) - \bm f \cdot \bm r,
\end{equation}
where $\bm r$ is the position, the potential $V_{\bm 0}(\bm r)= V_{\bm 0}(\bm r +a_j \hat{\bm r}_j)= V_{\bm 0}(\bm r +\bm a)$ is periodic with period $\bm a$, and $\bm f$ represents macroscopic thermodynamic forces that drive the system out of thermal equilibrium. For long times, we write the master equation \cite{Challis2013}
\begin{equation}
\frac{d p_{\bm n}(t)}{dt}=\sum_{{\bm n}'} \left[\kappa^{\bm f}_{{\bm n}-{\bm n}'}p_{\bm n'}(t)-\kappa^{\bm f}_{{\bm n}'-{\bm n}} p_{\bm n}(t)\right] \label{master_eqn},
\end{equation}
where $p_{\bm n}(t)$ is the probability of state $\bm n$ being occupied, and $\bm n$ and $\bm n'$ are vectors of integers. The transition rates $\kappa_{\bm n}^{\bm f}$ satisfy $\sum_{{\bm n}}\kappa^{\bm f}_{{\bm n}}=0$, and we consider the summation to be over nearest neighbor states only. To lowest order, the dependence on the thermodynamic force $\bm f$ is assumed to take the form \cite{Fisher2005, VandenBroeck2012, Challis2013}
\begin{eqnarray}
\kappa^{\bm f}_{{\bm n}}& = &e^{\alpha_{\bm n}{\bm f}\cdot {\bm A}{\bm n}/k_{B}T}\kappa^{\bm 0}_{{\bm n}},\label{rate}
\end{eqnarray}
where $\kappa_{\bm n}^{\bm 0}=\kappa_{-\bm n}^{\bm 0}$ are the transition rates at equilibrium ($\bm f = \bm 0$), $k_B$ is the Boltzmann constant, and $T$ is the temperature. The matrix $\bm A$ is diagonal with $A_{jj}=a_j$. The loading coefficients $\alpha_{\bm n}$ satisfy generalized detailed balance, i.e., $\alpha_{-\bm n}=1-\alpha_{\bm n}$ and $0 \leq \alpha_{\bm n}\leq 1$ \cite{Fisher1999, Seifert2010}. Energy transfer between degrees of freedom is possible when the transition rates $\kappa_{\bm n}^{\bm 0}$ are non-vanishing for $n_j\neq0$ in more than one coordinate and transport occurs along coupled coordinates \cite{ChallisarXiv}.
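As a concrete numerical sketch, the forced rates of Eq.\ (\ref{rate}) can be tabulated for a hypothetical two-dimensional model; the rate values, loading coefficients, and the unitless force used here are illustrative assumptions, not parameters from the text:

```python
import math

# Hypothetical 2D model: equilibrium rates kappa^0_n for the forward
# transitions (kappa^0_{-n} = kappa^0_n by symmetry) and loading
# coefficients alpha_n (generalized detailed balance: alpha_{-n} = 1 - alpha_n).
KAPPA0 = {(1, 0): 1.0, (0, 1): 1.0, (1, 1): 10.0}
ALPHA = {(1, 0): 0.5, (0, 1): 0.5, (1, 1): 0.5}

def kappa_f(n, X):
    """Forced rate kappa^f_n = exp(alpha_n X.n) kappa^0_n, written in the
    unitless force X = A f / k_B T, so that f.(A n) / k_B T = X.n."""
    if n in KAPPA0:
        k0, alpha = KAPPA0[n], ALPHA[n]
    else:  # backward hop: kappa^0 is symmetric and alpha_{-m} = 1 - alpha_m
        m = tuple(-ni for ni in n)
        k0, alpha = KAPPA0[m], 1.0 - ALPHA[m]
    x_dot_n = sum(Xi * ni for Xi, ni in zip(X, n))
    return k0 * math.exp(alpha * x_dot_n)
```

With $\alpha_{-\bm n}=1-\alpha_{\bm n}$ built in, the ratio $\kappa^{\bm f}_{\bm n}/\kappa^{\bm f}_{-\bm n}=e^{\bm X\cdot\bm n}$ holds by construction.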
The master equation (\ref{master_eqn}) can be solved analytically by transforming to the diagonal form
\begin{equation}
\frac{dc_{\bm k}(t)}{dt}=-\lambda_{\bm k} c_{\bm k}(t) \label{diagonal},
\end{equation}
where the eigenstates are
\begin{equation}
c_{\bm k}(t)=\sum_{\bm n} p_{\bm n}(t) e^{-i{\bm k}\cdot \bm A \bm n},
\label{characteristic_function_discrete}
\end{equation}
and the eigenvalues are
\begin{eqnarray}
\lambda_{\bm k} & = & \sum_{{\bm n} \in {\rm for}}4\kappa^{\bm 0}_{\bm n} G_{\bm n}\left(\frac{\bm X\cdot \bm n}{2}\right) \sin\left(\frac{{\bm k}\cdot \bm A\bm n}{2}\right) \nonumber \\
& & \times \sin \left( \frac{\bm k \cdot \bm A \bm n}{2} +\frac{i \bm X \cdot \bm n}{2}\right).
\label{eigenvalues_general}
\end{eqnarray}
In Eq.\ (\ref{eigenvalues_general}), we have identified the unitless generalized thermodynamic force $\bm X = \bm A\bm f /k_BT$ \cite{ChallisarXiv} and defined the loading function
\begin{equation}
G_{\bm n}(x) = e^{(2\alpha_{\bm n}-1)x }.
\end{equation}
The summation over $\bm n$ in Eq.\ (\ref{eigenvalues_general}) is taken over forward rates only, i.e., over one of $\bm n$ or $-\bm n$, not both. The analytic solution to Eq.\ (\ref{diagonal}) is
\begin{equation}
c_{\bm k}(t)=c_{\bm k}(0)e^{-\lambda_{\bm k}t}.
\label{characteristic_function_solution}
\end{equation}
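For intuition, the eigenvalues (\ref{eigenvalues_general}) can be evaluated numerically. The Python sketch below (a hypothetical two-dimensional model with unit periods $a_x=a_y=1$ and a common loading coefficient) checks that $\lambda_{\bm 0}=0$, reflecting probability conservation, and that ${\rm Re}\,\lambda_{\bm k}\geq 0$, so every mode in Eq.\ (\ref{characteristic_function_solution}) decays:

```python
import cmath
import math

FWD = {(1, 0): 1.0, (0, 1): 1.0, (1, 1): 10.0}  # forward rates kappa^0_n
ALPHA = 0.5                                      # common loading coefficient

def eigenvalue(kx, ky, X):
    """lambda_k = sum_n 4 kappa^0_n G_n(X.n/2) sin(k.n/2) sin(k.n/2 + i X.n/2)
    for unit periods a_x = a_y = 1 (sum over forward transitions only)."""
    lam = 0j
    for n, k0 in FWD.items():
        kn = 0.5 * (kx * n[0] + ky * n[1])      # k . A n / 2
        xn = 0.5 * (X[0] * n[0] + X[1] * n[1])  # X . n / 2
        G = math.exp((2.0 * ALPHA - 1.0) * xn)  # loading function G_n
        lam += 4.0 * k0 * G * cmath.sin(kn) * cmath.sin(kn + 1j * xn)
    return lam
```

The real part of each term is $4\kappa^{\bm 0}_{\bm n}G_{\bm n}\sin^2(\bm k\cdot\bm A\bm n/2)\cosh(\bm X\cdot\bm n/2)\geq0$, so the decay is manifest.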
\section{Long-time Limit \label{sec:long}}
In the long-time limit, Brownian motion on a tilted periodic potential can be described by a Gaussian process with an effective drift and diffusion \cite{Lindner2001, Reimann2002, Pavliotis2005}. This makes drift and diffusion key predictors of Brownian motion-based theories of molecular motors. In the multidimensional case, the drift and diffusion contain signatures of the energy transfer.
\subsection{Drift and Diffusion}
The effective drift vector $\bm v$ in the long-time limit is given by
\begin{equation}
v_j= \lim_{t\rightarrow\infty} \frac{\langle n_j\rangle}{t},
\end{equation}
and the effective diffusion matrix $D$ is
\begin{equation}
D_{jj'} = \lim_{t\rightarrow \infty} \frac{\langle n_j n_{j'}\rangle -\langle n_j\rangle \langle n_{j'}\rangle}{2t}.
\label{diff_matrix}
\end{equation}
For the master equation (\ref{master_eqn}), the drift is equal to the average rate of transport and is given by
\begin{eqnarray}
v_j & = & \frac{d\langle n_j \rangle}{dt}\\
& = & \sum_{\bm n \in for} 2\kappa_{\bm n}^{\bm 0} n_j G_{\bm n}\left(\frac{\bm X\cdot \bm n}{2}\right)\sinh \left( \frac{\bm X \cdot \bm n}{2}\right). \label{velocity_multi_dim}
\end{eqnarray}
Equation (\ref{velocity_multi_dim}) shows that when transitions occur simultaneously in more than one degree of freedom (i.e., when $\kappa_{\bm n}^{\bm 0}$ is non-vanishing for $n_j\neq0$ in more than one coordinate), the force in one degree of freedom can drive drift in another. This drift can occur even against an opposing force, which represents energy transfer between degrees of freedom. Near equilibrium, where $|X_{j}|\ll 1$, the drift becomes
\begin{eqnarray}
v_j \approx \sum_{\bm n\in for} \sum_{j'} \kappa_{\bm n}^{\bm 0} n_j n_{j'} X_{j'}.
\label{drift_eq}
\end{eqnarray}
Equation (\ref{drift_eq}) is a linear force-flux relation satisfying the Onsager relations \cite{ChallisarXiv}.
The diffusion matrix for the master equation (\ref{master_eqn}) is related to the covariance matrix $\sigma$ \cite{Gardiner2009} and is given by
\begin{eqnarray}
D_{jj'} & = & \frac{d(\langle n_j n_{j'}\rangle - \langle n_j\rangle \langle n_{j'}\rangle)}{2dt} = \frac{1}{2}\frac{d\sigma_{jj'}}{dt} \\
& = & \sum_{\bm n \in for} \kappa_{\bm n}^{\bm 0} n_j n_{j'} G_{\bm n}\left(\frac{\bm X\cdot \bm n}{2}\right) \cosh \left( \frac{\bm X \cdot \bm n}{2}\right).
\label{gamma_multi_dim}
\end{eqnarray}
The diffusion matrix provides insights into the thermal fluctuations of the system. For coupled transitions occurring simultaneously in more than one coordinate, $D_{jj'}$ can be non-zero for $j\neq j'$. This describes statistical correlations between fluctuations in different degrees of freedom. Near equilibrium, thermal diffusion still occurs and the diffusion matrix becomes
\begin{eqnarray}
D_{jj'}&\approx&\sum_{{\bm n} \in {\rm for}} \kappa^{\bm 0}_{\bm n}n_{j}n_{j'}.
\label{sigma_eq}
\end{eqnarray}
Using the drift (\ref{velocity_multi_dim}) and diffusion (\ref{gamma_multi_dim}), we find that
\begin{eqnarray}
\frac{\partial v_j}{\partial X_{j'}} & = & D_{jj'} \label{Einstein_rel} \\
& & +\sum_{\bm n \in {\rm for}} 2\kappa_{\bm n}^{\bm 0} n_j n_{j'} (\alpha_{\bm n}-1/2) G_{\bm n} \left(\frac{\bm X \cdot \bm n}{2}\right) \sinh \left(\frac{\bm X \cdot \bm n}{2}\right). \nonumber
\end{eqnarray}
Equation (\ref{Einstein_rel}) takes the form of a generalized Einstein relation and, as expected, the second term on the right-hand side vanishes when $\alpha_{\bm n}=1/2$, recovering the equilibrium result \cite{Seifert2010, Speck2006}.
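These expressions are straightforward to evaluate. The Python sketch below uses a hypothetical two-dimensional model with $\alpha_{\bm n}=1/2$ for every transition (so $G_{\bm n}=1$ and the correction term in the Einstein relation vanishes), and computes the drift vector and diffusion matrix:

```python
import math

FWD = {(1, 0): 1.0, (0, 1): 1.0, (1, 1): 10.0}  # forward rates kappa^0_n
# alpha_n = 1/2 for every transition, so the loading function G_n(x) = 1.

def drift(X):
    """v_j = sum_n 2 kappa^0_n n_j sinh(X.n / 2) over forward transitions."""
    v = [0.0, 0.0]
    for n, k0 in FWD.items():
        half = 0.5 * sum(Xi * ni for Xi, ni in zip(X, n))
        for j in range(2):
            v[j] += 2.0 * k0 * n[j] * math.sinh(half)
    return v

def diffusion(X):
    """D_jj' = sum_n kappa^0_n n_j n_j' cosh(X.n / 2) over forward transitions."""
    D = [[0.0, 0.0], [0.0, 0.0]]
    for n, k0 in FWD.items():
        half = 0.5 * sum(Xi * ni for Xi, ni in zip(X, n))
        for j in range(2):
            for jp in range(2):
                D[j][jp] += k0 * n[j] * n[jp] * math.cosh(half)
    return D
```

For $\alpha_{\bm n}=1/2$ the generalized Einstein relation reduces to $\partial v_j/\partial X_{j'}=D_{jj'}$, which can be confirmed by differentiating \texttt{drift} numerically.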
\subsection{Gaussian Approximation \label{sec:diffusion}}
The master equation (\ref{master_eqn}) can be approximated for long times by a continuous diffusion equation with a constant effective drift vector and constant effective diffusion matrix, as follows. The eigenstates $c_{\bm k}(t)$ decay according to the evolution equation (\ref{diagonal}) and, for long times, only the lowest eigenstates remain populated and contribute to the system dynamics. As described in Appendix \ref{sec:cumulant}, we approximate the evolution equation by replacing $\lambda_{\bm k}$ by its second-order Taylor series around the origin, i.e.,
\begin{equation}
\lambda_{\bm k}\approx i\bm A{\bm k}\cdot{\bm v} + (\bm A{\bm k})^{T}D {\bm A\bm k},
\label{lambda_approx}
\end{equation}
where $\bm k$ is taken as a column vector. The probability coefficients $p_{\bm n}(t)$ can be determined from the eigenstates $c_{\bm k}(t)$ by inverting Eq.\ (\ref{characteristic_function_discrete}), i.e., we define the continuous probability density
\begin{equation}
{\mathcal P}({\bm s},t) = \frac{1}{(2\pi)^d}\int d{\bm k}\ c_{\bm k}(t) e^{i{\bm k}\cdot \bm A{\bm s}},
\label{characteristic_function_continuous}
\end{equation}
where $\bm s$ is a unitless continuous variable. The equation of motion for ${\mathcal P}({\bm s},t)$ is
\begin{equation}
\frac{\partial {\mathcal P}({\bm s},t)}{\partial t}=\left(-{\bm v}\cdot\nabla_{\bm s} +\sum_{jj'} D_{jj'} \frac{\partial^{2}}{\partial s_{j}\partial s_{j'}}\right) {\mathcal P}({\bm s},t),
\label{diffusion}
\end{equation}
which describes a multivariate diffusion process where the drift vector and diffusion matrix depend on the periodic potential, via $\kappa_{\bm n}^{\bm 0}$ and $\alpha_{\bm n}$, and the force $\bm f$ according to Eqs.\ (\ref{velocity_multi_dim}) and (\ref{gamma_multi_dim}).
If the system is initially described by a Dirac delta function at the origin, the diffusion equation (\ref{diffusion}) can be solved analytically to yield
\begin{equation}
{\cal P}_{ss} (\bm s,t) = \frac{1}{\sqrt{(4\pi t)^d |D|}} \exp \left( -\frac{1}{4t} (\bm s-\bm v t)^{T} D^{-1} (\bm s-\bm v t)\right). \label{Gaussian}
\end{equation}
In the long-time limit, the steady state is independent of the initial condition so the Gaussian (\ref{Gaussian}) provides a good approximate description of the system. This means that energy transfer in the steady state can be interpreted as a Gaussian process where, in general, the principal axis of the diffusion matrix is not aligned with the drift vector.
It is straightforward to write down the It\^{o} stochastic differential equation for Eq.\ (\ref{diffusion}) and derive the two-time correlation function in the long-time limit \cite{Gardiner2009}:
\begin{eqnarray}
\langle s_j(t) , s_{j'} (t')\rangle & = & \langle s_j(t) s_{j'} (t')\rangle-\langle s_j(t)\rangle \langle s_{j'} (t')\rangle \\
& = & 2D_{jj'} \min (t,t').
\end{eqnarray}
\subsection{Two Dimensions \label{sec:2D}}
To interpret the transfer of energy between degrees of freedom explicitly, we consider the case of just two dimensions. Labeling the two coordinates $x$ and $y$, the eigenvalues (\ref{eigenvalues_general}) are
\begin{eqnarray}
\lambda_{(k_{x},k_{y})}&=&4\kappa^{\bm 0}_{(1,0)}G_{(1,0)}(X_x/2)\sin(k_x a_x/2 )\sin(k_x a_x/2+i X_x/2) \nonumber \\
& & +4\kappa^{\bm 0}_{(0,1)}G_{(0,1)}(X_y/2)\sin(k_y a_y/2)\sin ( k_y a_y/2+iX_y/2) \nonumber \\
&&+4\kappa^{\bm 0}_{(1,1)}G_{(1,1)}(X_z/2)\sin(k_{x}a_{x}/2+k_{y}a_{y}/2) \nonumber \\
& & \times \sin( k_{x}a_{x}/2+k_{y}a_{y}/2+iX_z/2),
\label{eigenvalues_2D}
\end{eqnarray}
where we have identified $X_z = X_x+X_y$ as the thermodynamic force along the coupled coordinate, and assumed that coupling between degrees of freedom is preferential in the $(1,1)$ direction so that $\kappa^{\bm 0}_{(1,-1)}$ is negligible.\footnote{The effect of the competing orthogonal coupling transition, described in our formalism by $\kappa_{(1,-1)}^{\bm 0}$, has been considered by other authors \cite{Golubeva2012}.} The uncoupled transition rates $\kappa^{\bm 0}_{(0,1)}$ and $\kappa^{\bm 0}_{(1,0)}$ represent {\em leak processes} that bypass the coupling mechanism and weaken the coupling between the $x$ and $y$ degrees of freedom \cite{Lems2003}. The drift (\ref{velocity_multi_dim}) becomes
\begin{eqnarray}
v_{x}
&=&2\kappa^{\bm 0}_{(1,0)} G_{(1,0)}(X_x/2)\sinh(X_{x}/2) \nonumber \\
& & +2\kappa^{\bm 0}_{(1,1)} G_{(1,1)}(X_z/2)\sinh(X_z/2)\label{drift1}\\
v_{y}
&=&2\kappa^{\bm 0}_{(0,1)}G_{(0,1)}(X_y/2)\sinh(X_y/2) \nonumber \\
& & +2\kappa^{\bm 0}_{(1,1)}G_{(1,1)}(X_z/2)\sinh(X_z/2)\label{drift2}
\end{eqnarray}
and the diffusion (\ref{diff_matrix}) becomes
\begin{eqnarray}
D_{xx}&=&\kappa^{\bm 0}_{(1,0)}G_{(1,0)}(X_x/2)\cosh(X_x/2) \nonumber \\
&&+\kappa^{\bm 0}_{(1,1)}G_{(1,1)}(X_z/2)\cosh(X_z/2) \\
D_{yy}&=&\kappa^{\bm 0}_{(0,1)}G_{(0,1)}(X_y/2)\cosh(X_y/2) \nonumber \\
& &+\kappa^{\bm 0}_{(1,1)}G_{(1,1)}(X_z/2)\cosh(X_z/2) \\
D_{xy}&=&D_{yx}=\kappa^{\bm 0}_{(1,1)}G_{(1,1)}(X_z/2)\cosh(X_z/2). \label{diff_offdiag}
\end{eqnarray}
For $\kappa_{(1,1)}^{\bm 0}= 0$, coupling between degrees of freedom cannot occur. In this case, the coordinates decouple so that $v_x$ and $D_{xx}$ depend only on $X_x$, $v_y$ and $D_{yy}$ depend only on $X_y$, and the correlation term $D_{xy}$ vanishes. In contrast, for $\kappa_{(1,1)}^{\bm 0}\neq 0$, the thermodynamic force in one degree of freedom can drive drift in the other. For example, $v_x$ depends on $X_y$ via the coupled force $X_z=X_x+X_y$. In addition, coupling between the degrees of freedom induces correlations between the thermal fluctuations. This is illustrated in two ways: (i) the force in one degree of freedom can drive fluctuations in the other (e.g., $D_{xx}$ depends on $X_y$), and (ii) the correlation term $D_{xy}$ can be non-vanishing. The key result is that energy transfer between degrees of freedom leads to statistical correlations between thermal fluctuations in those degrees of freedom.
The energy transfer can be visualized in the $x$-$y$ plane as a drift vector and an elliptical contour of the diffusing Gaussian probability density (\ref{Gaussian}) (centered at the origin for simplicity). This is shown in Fig.\ \ref{fig:velocity_covariance} for three different values of the coupling strength $\kappa^{\bm 0}_{(1,1)}/\kappa^{\bm 0}_{(0,1)}$. We consider $X_x>0$ and $X_y<0$ so that transitions in the $x$ coordinate are thermodynamically favourable (spontaneous) and transitions in the $y$ coordinate are thermodynamically unfavourable (nonspontaneous). We also choose $X_z=X_x+X_y>0$, so that transitions in the coupled coordinate $z=x+y$ are thermodynamically favourable (spontaneous). Figure \ref{fig:velocity_covariance}(a) shows the weak-coupling limit where the coupling strength $\kappa^{\bm 0}_{(1,1)}/\kappa^{\bm 0}_{(0,1)}\lesssim 1$. In this case, the drift vector in the $y$ coordinate is negative, in the same direction as the force in that coordinate $X_{y}<0$, and the diffusion ellipse is roughly symmetric because the fluctuations are weakly correlated. Figure \ref{fig:velocity_covariance}(b) shows moderate coupling where $\kappa^{\bm 0}_{(1,1)}/\kappa^{\bm 0}_{(0,1)}>1$. In this case, the drift vector points in the positive $y$ direction, opposing the $X_y<0$ force, and the diffusion ellipse becomes elongated as the coupling leads to fluctuations along the coupled $z$ coordinate. Figure \ref{fig:velocity_covariance}(c) shows strong coupling where $\kappa^{\bm 0}_{(1,1)}/\kappa^{\bm 0}_{(0,1)}\gg1$. In this case, the drift amplitude is much larger than for weak or moderate coupling, the diffusion ellipse is strongly elongated along the coupled $z$ coordinate, and the drift vector and the principal axis of the diffusion ellipse align. For strong coupling, a one-dimensional description in the coupled coordinate is valid, as considered in Sec.\ \ref{sec:strong}.
\begin{figure}
\centering
\includegraphics{fig_one.pdf}
\vspace{-6.3cm}
\caption{Drift vector for (a) $\kappa^{\bm 0}_{(1,1)}= \kappa^{\bm 0}_{(0,1)}$, (b) $\kappa^{\bm 0}_{(1,1)}=10\kappa^{\bm 0}_{(0,1)}$, and (c) $\kappa^{\bm 0}_{(1,1)}=100 \kappa^{\bm 0}_{(0,1)}$. Other parameters are $X_{x}=3 $, $X_{y}=-2$, $\kappa^{\bm 0}_{(1,0)}= \kappa^{\bm 0}_{(0,1)}$, and $\alpha_{(0,1)}=\alpha_{(1,0)}=\alpha_{(1,1)}=1/2$. The ellipse is a shaded contour of the Gaussian probability density (\ref{Gaussian}) centered at the origin.}
\label{fig:velocity_covariance}
\end{figure}
\section{Hopping Statistics \label{sec:hop}}
The long-time Gaussian approximation described in the previous section provides a physical interpretation of the signatures of energy transfer. However, the master equation (\ref{master_eqn}) describes more detail about the evolution of the system. In this section, we consider how the signatures of energy transfer manifest in the discrete hopping statistics of the system.
\subsection{Waiting Time \label{sec:waiting}}
An important measurable quantity is the time delay between hopping events. This is usually referred to as the waiting time or the dwell time \cite{Fisher1999}. Assuming the system initially occupies state $\bm n_{0}$, the probability $p_{\bm n_0}(t)$ of occupying the state $\bm n_0$ at time $t$ evolves according to the master equation (\ref{master_eqn}). For an infinitesimal $t$, $p_{\bm n}(t)$ is negligible unless $\bm n = \bm n_0$, so Eq.\ (\ref{master_eqn}) can be approximated by
\begin{equation}
\frac{dp_{\bm n_0}(t)}{dt} \approx \kappa_{\bm 0}^{\bm f} p_{\bm n_0}(t).
\label{approx_me}
\end{equation}
Integrating Eq.\ (\ref{approx_me}) with the initial condition $p_{\bm n_0}(0)=1$ gives
\begin{equation}
p_{\bm n_0}(t) = p_{\bm n_0}(0) e^{\kappa_{\bm 0}^{\bm f} t}= e^{- t/\tau}=e^{-\Gamma t}.
\label{pni}
\end{equation}
Equation (\ref{pni}) shows that the waiting times are exponentially distributed with mean $\tau=1/\Gamma$. This is a general characteristic of master-equation models \cite{Gardiner2009} and has been observed in single-molecule experiments on molecular motors \cite{Yildiz2003, Shimabukuro2003, Sakamoto2008}. In Eq.\ (\ref{pni}), the decay rate $\Gamma$ depends on the thermodynamic force $\bm X$ according to
\begin{eqnarray}
\Gamma & = & -\kappa_{\bm 0}^{\bm f} =\sum_{\bm n \neq\bm 0} \kappa_{\bm n}^{\bm f} \\
& = & \sum_{\bm n \neq\bm 0, \in for} 2\kappa_{\bm n}^{\bm 0} G_{\bm n}\left( \frac{\bm X\cdot \bm n}{2} \right)\cosh\left(\frac{\bm X \cdot \bm n}{2}\right).
\end{eqnarray}
For the two-dimensional case considered in Sec.\ \ref{sec:2D},
\begin{eqnarray}
\Gamma&=&2\kappa^{\bm 0}_{(1,0)} G_{(1,0)}(X_x/2)\cosh( X_x/2)\nonumber \\
&&+2\kappa^{\bm 0}_{(0,1)}G_{(0,1)}(X_y/2)\cosh(X_y/2)\nonumber \\
& & +2\kappa^{\bm 0}_{(1,1)}G_{(1,1)}(X_z/2)\cosh(X_z/2).
\end{eqnarray}
Near equilibrium, the decay rate $\Gamma$ becomes independent of the force $\bm X$.
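In a simulation, exponentially distributed waiting times are generated by inverse-transform sampling. A minimal Python sketch, where the decay rate $\Gamma$ is supplied as a number:

```python
import math
import random

def sample_waiting_time(Gamma, rng):
    """Inverse-transform sampling of p(t) = Gamma exp(-Gamma t):
    t = -ln(1 - u) / Gamma with u uniform on [0, 1)."""
    return -math.log(1.0 - rng.random()) / Gamma
```

The sample mean converges to $1/\Gamma$, the mean waiting time.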
\subsection{Hopping Rates}
The transition rates $\kappa_{\bm n}^{\bm f}$ depend on the thermodynamic force according to Eq.\ (\ref{rate}), and the average ratio of forward to backward hops is given by the generalized detailed balance condition
\begin{equation}
\frac{\kappa_{\bm n}^{\bm f}}{\kappa_{-\bm n}^{\bm f}}=e^{(\alpha_{\bm n}+\alpha_{-\bm n}){\bm f}\cdot{\bm A \bm n}/k_{B}T} = e^{{\bm X}\cdot \bm n}.
\label{ratio}
\end{equation}
The exponential form of Eq.\ (\ref{ratio}) has been observed experimentally for kinesin \cite{Nishiyama2002}. Equation (\ref{ratio}) shows that the rates of forward and backward hopping are approximately equal near equilibrium, when $|X_j|\ll 1$. The situation is quite different when the system operates far from equilibrium. When $X_j \gg 1$, the forward hopping rates are exponentially larger than the backward rates, and when $X_j \ll -1$, the backward hopping rates are exponentially larger than the forward rates. This dependence on the thermodynamic force leads to different behavior in the near and far from equilibrium regimes, as demonstrated in the following sections.
\subsection{Strong Coupling \label{sec:strong}}
The strong-coupling regime has been considered theoretically by other authors \cite{Magnasco1994, Golubeva2012, VandenBroeck2012}, and results from single-molecule experiments indicate that certain molecular motors operate in this regime \cite{Rondelez2005, Sakamoto2008}. We consider strong coupling in detail here for completeness, and because it clearly demonstrates the difference between the near and far from equilibrium operating regimes.
In the strong-coupling regime, the leak transition rates $\kappa_{(1,0)}^{\bm 0}$ and $\kappa_{(0,1)}^{\bm 0}$ are negligible compared to the coupled rate $\kappa_{(1,1)}^{\bm 0}$, and all transitions occur simultaneously in $x$ and $y$. In this case, the system is well described by the one-dimensional master equation in the coupled coordinate $z=x+y$:
\begin{eqnarray}
\frac{d p_{n_z}(t)}{dt}& = &\kappa_{(1,1)}^{\bm 0}\left[e^{\alpha_{(1,1)}X_z}p_{n_{z}-1}(t)+ e^{(\alpha_{(1,1)}-1)X_z}p_{n_z+1}(t)\right] \nonumber \\
& & -\Gamma_z p_{n_z}(t),
\label{master_eqn_z}
\end{eqnarray}
where $\Gamma_{z}= 2\kappa_{(1,1)}^{\bm 0}G_{(1,1)}(X_z/2) \cosh(X_z/2)$. Assuming an initial state $p_{n_z}(0)=\delta_{n_z 0}$, the general analytic solution to Eq.\ (\ref{master_eqn_z}) is
\begin{equation}
p_{n_z}(t)=e^{X_{z}n_z/2} e^{-\Gamma_zt } I_{n_z}\left(2t\kappa_{(1,1)}^{\bm 0} G_{(1,1)}(X_z/2)\right),
\label{soln}
\end{equation}
where $I_n(x)$ is the modified Bessel function finite at the origin \cite{Abramowitz1972}.
Near equilibrium, i.e., $|X_z| \ll 1$, the first exponential term in Eq.\ (\ref{soln}) is of order unity and the probability distribution becomes approximately symmetric in $n_z$. In the long-time limit, the probability distribution takes the Gaussian form
\begin{equation}
p_{n_z}(t) \sim \frac{e^{-n_z^2/2 t\Gamma_z}}{\sqrt{2\pi t\Gamma_z}} ,
\label{soln_long_time}
\end{equation}
with $\Gamma_z \approx 2\kappa_{(1,1)}^{\bm 0}$. Equation (\ref{soln_long_time}) shows that the probability $p_{n_z=0}(t)$ of the system initially at $n_z=0$ remaining at $n_z=0$ at time $t$ decays as $1/\sqrt{t}$, characteristic of a diffusion process.
Far from equilibrium, i.e., $X_z \gg1$, the second term on the right-hand side of the master equation (\ref{master_eqn_z}) is negligible. In that case, $p_{n_z}(t)$ is well described by the Poisson distribution
\begin{equation}
p_{n_z}(t) = e^{-\Gamma_z t} \frac{(t\Gamma_z)^{n_z}}{n_z !},
\label{poissonz}
\end{equation}
with $\Gamma_z=\kappa_{(1,1)}^{\bm 0}\exp(\alpha_{(1,1)}X_{z})$. Equation (\ref{poissonz}) shows that far from equilibrium $p_{n_z=0}(t)$ decays as $\exp(-t \Gamma_z)$. For long times, the Poisson distribution (\ref{poissonz}) is well approximated by a Gaussian, consistent with Sec.\ \ref{sec:diffusion}.
In general for strong coupling, rotating the drift vector and diffusion matrix to the coupled coordinate $z$, and rescaling to $a_z$, the drift is
\begin{equation}
v_z = 2 \kappa_{(1,1)}^{\bm 0} G_{(1,1)}(X_z/2)\sinh(X_z/2),
\end{equation}
and the diffusion is
\begin{equation}
D_{zz}= \kappa_{(1,1)}^{\bm 0} G_{(1,1)}(X_z/2)\cosh(X_z/2).
\end{equation}
The relative timescales of the drift and diffusion processes can be compared using either the randomness $r$ or the P\'{e}clet number $Pe$ \cite{Lindner2001, Svoboda1994}. In the coupled coordinate these quantities satisfy the known result \cite{Lindner2001}
\begin{equation}
r = \frac{2 D_{zz}}{v_z}= \frac{2}{Pe}=\coth(X_z/2).
\label{peclet}
\end{equation}
Near equilibrium, $r$ is large and $Pe\ll 1$ indicating that diffusion occurs much faster than drift, characteristic of a diffusion process. Far from equilibrium, $r$ tends to 1 and $Pe$ tends to 2 indicating that drift and diffusion occur on similar timescales, characteristic of a Poisson process.
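A minimal sketch of the randomness, assuming $\alpha_{(1,1)}=1/2$ and measuring rates in units of $\kappa^{\bm 0}_{(1,1)}$:

```python
import math

def randomness(Xz):
    """r = 2 D_zz / v_z in the coupled coordinate, with alpha_{(1,1)} = 1/2
    and rates measured in units of kappa^0_(1,1)."""
    v_z = 2.0 * math.sinh(Xz / 2.0)   # drift along z
    D_zz = math.cosh(Xz / 2.0)        # diffusion along z
    return 2.0 * D_zz / v_z           # equals coth(Xz / 2)
```

Near equilibrium $r$ diverges as $2/X_z$; far from equilibrium $r\to1$ and $Pe\to2$.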
\subsection{Weak Coupling \label{sec:weak}}
The extent to which molecular motors operate outside the strong-coupling regime remains an open question \cite{Chowdhury2013, Ishijima1998, Oosawa2000, Ikezaki2013}. In this section, we consider the weak-coupling regime where the leak transition rates $\kappa_{(1,0)}^{\bm 0}$ and $\kappa_{(0,1)}^{\bm 0}$ are not negligible compared to the coupled rate $\kappa_{(1,1)}^{\bm 0}$, and a one-dimensional description along the coupled coordinate $z$ is insufficient. We consider the near and far from equilibrium regimes for the case where each degree of freedom is observed independently. This is relevant because, in practice, it is often not straightforward to access all degrees of freedom simultaneously.
A description of the system along a single degree of freedom can be determined by tracing over the inaccessible or unobserved coordinates. For example, if the multidimensional system is observed only along the $x$ coordinate, the effective probability density is
\begin{equation}
p_{n_x}(t)=\sum_{\{n_j :\, j\neq x\}}p_{\bm n}(t),
\end{equation}
which evolves according to the effective one-dimensional master equation
\begin{equation}
\frac{dp_{n_x}(t)}{dt} =\sum_{n'_x}\tilde{\kappa}^{\bm f}_{n_x-n'_x}\, p_{n'_x}(t),
\qquad \tilde{\kappa}^{\bm f}_{m}=\sum_{\{\bm n :\, n_x = m\}}\kappa^{\bm f}_{\bm n},
\label{master_eqn_nx}
\end{equation}
For the two-dimensional case considered in Sec.\ \ref{sec:2D}, the effective one-dimensional master equations, from the point of view of an observer with access to only a single degree of freedom, either $x$ or $y$, respectively, are
\begin{eqnarray}
\frac{dp_{n_x}(t)}{dt} &= & \left[ \kappa_{(1,0)}^{\bm 0} e^{\alpha_{(1,0)} X_x} + \kappa_{(1,1)}^{\bm 0} e^{\alpha_{(1,1)}X_z}\right] p_{n_x-1}(t) -\Gamma_x p_{n_x}(t) \nonumber \\
& & +\left[ \kappa_{(1,0)}^{\bm 0} e^{(\alpha_{(1,0)}-1)X_x} + \kappa_{(1,1)}^{\bm 0} e^{(\alpha_{(1,1)}-1)X_z}\right] p_{n_x+1}(t) \label{mex}\\
\frac{dp_{n_y}(t)}{dt} & = & \left[ \kappa_{(0,1)}^{\bm 0} e^{\alpha_{(0,1)}X_y} + \kappa_{(1,1)}^{\bm 0} e^{\alpha_{(1,1)}X_z}\right] p_{n_y-1}(t)-\Gamma_y p_{n_y}(t) \nonumber \\
& & +\left[ \kappa_{(0,1)}^{\bm 0} e^{(\alpha_{(0,1)}-1)X_y} + \kappa_{(1,1)}^{\bm 0} e^{(\alpha_{(1,1)}-1)X_z}\right] p_{n_y+1}(t) ,\label{mey}
\end{eqnarray}
where
\begin{equation}
\Gamma_j = 2D_{jj}.
\label{D_jj}
\end{equation}
Coupled transitions $\kappa_{(1,1)}^{\bm 0}$ occur in both $x$ and $y$ simultaneously and are observed in both $x$ and $y$, whereas leak transitions $\kappa_{(1,0)}^{\bm 0}$ are observed in $x$ but not in $y$, and leak transitions $\kappa_{(0,1)}^{\bm 0}$ are observed in $y$ but not in $x$.
The probability distribution for each coordinate can be determined analytically in the near and far from equilibrium limits. Near equilibrium, i.e., when $|X_j|\ll 1$, the force dependence of the transition rates drops out and the solution to the master equations (\ref{mex}) and (\ref{mey}) with the initial state $p_{n_j}(0)=\delta_{n_j 0}$ is
\begin{equation}
p_{n_j}(t) = e^{-\Gamma_j t} I_{n_j} ( \Gamma_j t) ,
\end{equation}
where $\Gamma_j \approx \sum_{\bm n\in{\rm for}, n_j \neq 0}2\kappa_{\bm n}^{\bm 0}$. In the long-time limit, the probability distribution for each coordinate takes the Gaussian form
\begin{eqnarray}
p_{n_j}(t) &\sim&\frac{e^{-n_j^2/2\Gamma_j t}}{\sqrt{2\pi\Gamma_j t}},
\label{p_gauss}
\end{eqnarray}
characteristic of a diffusion process.
To consider the far-from-equilibrium limit, we take $X_x\gg1$, $X_y \ll -1$, and $X_z=X_x+X_y \gg1$. In this case, the following transition rates in the one-dimensional master equations (\ref{mex}) and (\ref{mey}) can be neglected: $\kappa_{(1,0)}^{\bm 0}\exp((\alpha_{(1,0)}-1)X_x)$, $\kappa_{(0,1)}^{\bm 0}\exp(\alpha_{(0,1)}X_y)$, and $\kappa_{(1,1)}^{\bm 0}\exp((\alpha_{(1,1)}-1)X_z)$. The master equation for the driving $x$ coordinate can then be solved for the initial state $p_{n_x}(0)=\delta_{n_x 0}$ to yield the Poisson distribution
\begin{equation}
p_{n_x}(t) = e^{-\Gamma_x t} \frac{(t\Gamma_x)^{n_x}}{n_x !},
\label{poisson}
\end{equation}
with $\Gamma_x \approx \kappa_{(1,0)}^{\bm 0}\exp(\alpha_{(1,0)}X_x)+\kappa_{(1,1)}^{\bm 0}\exp(\alpha_{(1,1)}X_z)$. The master equation for the driven coordinate $y$ can be solved in the long-time limit, following the method presented in Sec.\ \ref{sec:diffusion}, to yield the Gaussian form
\begin{equation}
p_{n_y}(t) \sim \frac{e^{-(n_y-v_yt)^2/2\Gamma_y t}}{\sqrt{2\pi\Gamma_y t}},
\label{gauss_shift}
\end{equation}
with effective drift
\begin{equation}
v_y \approx -\kappa_{(0,1)}^{\bm 0}e^{(\alpha_{(0,1)}-1)X_y}+\kappa_{(1,1)}^{\bm 0}e^{\alpha_{(1,1)}X_z},
\end{equation}
and effective diffusion equal to half the decay rate, $D_{yy}=\Gamma_y/2$, with
\begin{equation}
\Gamma_y \approx \kappa_{(0,1)}^{\bm 0}e^{(\alpha_{(0,1)}-1)X_y}+\kappa_{(1,1)}^{\bm 0}e^{\alpha_{(1,1)}X_z}.
\end{equation}
The interpretation of Eqs.\ (\ref{p_gauss}), (\ref{poisson}), and (\ref{gauss_shift}) will be discussed further in the following section.
\section{Single Trajectories \label{sec:single}}
Single trajectories of the system can be simulated using the decay rate $\Gamma$ and the hopping rates $\kappa_{\bm n}^{\bm f}$ \cite{Gardiner2009}. We consider only the two-dimensional case. In the strong-coupling regime, transitions occur along the coupled coordinate $z$ and single trajectories can be determined from the master equation (\ref{master_eqn_z}). Figure \ref{fig:trajectory} shows single trajectories for (a) near and (b) far from equilibrium. The probability distribution is determined numerically from an ensemble of single trajectories and compared with the analytic results from Sec.\ \ref{sec:strong}. Near equilibrium, the rates of forward and backward hopping are comparable and the single trajectory is a random walk roughly balanced in the forward and backward directions. The probability distribution is approximately Gaussian. Far from equilibrium, the decay rate $\Gamma_z$ is larger than in the near-equilibrium regime, reducing the average waiting time. Forward hops dominate and the single trajectory is a one-sided random walk. In the far-from-equilibrium limit, there are no backward hops at all and the system evolves as a pure birth process with a Poisson probability distribution. In the biochemical literature this is referred to as a Poisson enzyme \cite{Svoboda1994}.
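Trajectories of this kind can be generated with the standard Gillespie algorithm: draw an exponential waiting time from the total rate, then pick a hop direction with probability proportional to its rate. A Python sketch for the two-dimensional model, with illustrative parameters $X_x=3$, $X_y=-2$, $\alpha_{\bm n}=1/2$, and all $\kappa^{\bm 0}_{\bm n}=1$:

```python
import math
import random

# Forced rates kappa^f_n = kappa^0_n exp(X.n / 2) for alpha_n = 1/2,
# with all kappa^0_n = 1 and X = (3, -2), so X_z = X_x + X_y = 1.
X = (3.0, -2.0)
HOPS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (-1, -1)]
RATES = {n: math.exp(0.5 * (X[0] * n[0] + X[1] * n[1])) for n in HOPS}

def trajectory(t_end, rng):
    """Gillespie simulation: exponential waiting times at the total rate,
    hop directions chosen with probability kappa^f_n / Gamma. Returns the
    final (n_x, n_y) at time t_end."""
    total = sum(RATES.values())
    t, nx, ny = 0.0, 0, 0
    while True:
        t += -math.log(1.0 - rng.random()) / total
        if t > t_end:
            return nx, ny
        r = rng.random() * total
        for n, k in RATES.items():
            r -= k
            if r < 0.0:
                nx, ny = nx + n[0], ny + n[1]
                break
```

Averaging the final positions over many trajectories reproduces the drift (\ref{drift1})--(\ref{drift2}).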
\begin{figure}
\centering
\includegraphics{fig_two.pdf}
\vspace{-2.9cm}
\caption{Single trajectories $(a_s)$ and $(b_s)$ and probability distributions $(a_p)$ and $(b_p)$ constructed from 1000 trajectories at $t\kappa_{(1,1)}^{\bm 0}=1$ with (a) $X_z = 0.1$ and (b) $X_z = 3$. Other parameters are $\alpha_{(1,1)}=1/2$. The curves are $(a_p)$ Eq.\ (\ref{soln_long_time}) and $(b_p)$ Eq.\ (\ref{poissonz}).}
\label{fig:trajectory}
\end{figure}
In the weak-coupling regime, leak transitions are important and more than one degree of freedom must be described. Figure \ref{fig:traj_2d} shows single trajectories for (a) near and (b) far from equilibrium. The probability distribution is determined numerically and compared with analytic results from Sec.\ \ref{sec:weak}. Near equilibrium, the rates of forward and backward hops are comparable and a balanced random walk is observed in both the $x$ and $y$ coordinates. The probability distribution is approximately Gaussian in both $x$ and $y$. Far from equilibrium, the average waiting time is reduced and the ratio of forward to backward hopping rates biases the random walks. In particular, the leak processes have a different effect on the driving and driven coordinates. In the $x$ coordinate, forward hops dominate both for coupled and leak transitions and the probability distribution is approximately Poissonian. In the $y$ coordinate, coupled transitions yield predominantly forward hops and leak transitions yield predominantly backward hops resulting in a Gaussian probability distribution shifted from the origin. The driven coordinate only displays Poisson statistics in the limit of strong coupling.
\begin{figure}
\centering
\includegraphics{fig_three.pdf}
\vspace{-3.6cm}
\caption{Single trajectories $(a_s)$ and $(b_s)$ in (black) $x$ and (gray) $y$, and probability distributions $(a_{px})$, $(a_{py})$, $(b_{px})$, and $(b_{py})$ constructed from 1000 trajectories at $t\kappa_{(1,1)}^{\bm 0}=1$. Parameters are $(a)$ $X_x = 0.3$ and $X_y=-0.2$, and (b) $X_x = 5$ and $X_y = -2$. Other parameters are $\kappa_{(1,0)}^{\bm 0}=\kappa_{(0,1)}^{\bm 0}=\kappa_{1,1}^{\bm 0}$ and $\alpha_{(1,0)}=\alpha_{(0,1)}=\alpha_{(1,1)}=1/2$. The curves are $(a_{px})$ and $(a_{py})$ Eq.\ (\ref{p_gauss}), $(b_{px})$ Eq.\ (\ref{poisson}), and $(b_{py})$ Eq.\ (\ref{gauss_shift}).}
\label{fig:traj_2d}
\end{figure}
Molecular motor experiments observe few, or no, backward steps in the driven mechanical coordinate \cite{Nishiyama2002, Yildiz2003, Shimabukuro2003, Toba2006, Ikezaki2013}. The theoretical results presented here support the view that these motors operate, at least to a reasonable approximation, in the strongly-coupled, far-from-equilibrium regime.
\subsection{Drift and Diffusion}
Drift and diffusion can be determined from single trajectories by counting the number of forward and backward hops and determining the waiting time. For example, if $N_j^f$ and $N_j^b$ are the number of forward and backward hops observed in coordinate $j$, then the drift in coordinate $j$ is
\begin{equation}
v_j = \Gamma_j \frac{N_j^f-N_j^b}{N_j^f+N_j^b}.
\end{equation}
For the diffusion matrix, the diagonal elements are given by the decay rates observed along the appropriate coordinate, as demonstrated by Eq.\ (\ref{D_jj}). The off-diagonal elements represent correlations between degrees of freedom and, to determine these from single trajectories, the relevant degrees of freedom must be observed simultaneously to identify hops occurring in coupled coordinates. The simultaneous observation of mechanical and chemical coordinates has been demonstrated for myosin \cite{Yanagida2008}. In the two-dimensional case, if $N_z^f$ and $N_z^b$ are the number of forward and backward hops observed along the coupled $z$ coordinate, i.e., they are observed simultaneously in the $x$ and $y$ directions, and $N_t$ is the total number of hops observed in all coordinates, then the off-diagonal diffusion matrix elements are given by
\begin{equation}
D_{xy}= D_{yx} = \Gamma \frac{N_z^f+N_z^{b}}{N_t}.
\label{Dxy_ob}
\end{equation}
An alternative approach is to determine the decay rate for hops occurring only in the coupled coordinate. For either method, if the orthogonal coupling transition $\kappa_{(1,-1)}^{\bm 0}$ is non-negligible, care is needed to distinguish positive-$x$, positive-$y$ transitions from orthogonal positive-$x$, negative-$y$ transitions.
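As a concrete illustration of these counting estimators, the following Python sketch evaluates them on invented hop counts; $\Gamma$ stands for the rate prefactor appearing in the expressions above.

```python
def drift(gamma_j, n_f, n_b):
    """Drift estimator from forward/backward hop counts in coordinate j."""
    return gamma_j * (n_f - n_b) / (n_f + n_b)

def d_offdiag(gamma, nz_f, nz_b, n_total):
    """Off-diagonal diffusion estimator from coupled-coordinate hops."""
    return gamma * (nz_f + nz_b) / n_total

# Invented hop counts, for illustration only.
v_x = drift(1.0, n_f=900, n_b=100)                      # forward-biased walk
d_xy = d_offdiag(1.0, nz_f=700, nz_b=50, n_total=1500)  # coupled-hop fraction
print(v_x, d_xy)   # -> 0.8 0.5
```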
\section{Conclusion \label{sec:conc}}
We have used a Brownian theory for energy transfer in a molecular motor to derive formal expressions for the thermal fluctuation statistics. Energy transfer between different degrees of freedom arises due to hopping transitions along coupled coordinates and leads to statistical correlations between thermal fluctuations in those degrees of freedom. In the long-time limit, energy transfer can be described by a continuous diffusion process with a constant drift vector and diffusion matrix. The diffusion matrix quantifies the thermal fluctuation statistics in the steady state.
We have considered the discrete hopping statistics of the molecular motor and simulated single trajectories of the system. Near equilibrium, single trajectories show a similar number of forward and backward hops and can be described by a random walk with negligible drift, leading to Gaussian statistics. Far from equilibrium, single trajectories are a one-sided random walk in the driving coordinate, leading to Poisson statistics. The driven coordinate undergoes a biased random walk, tending to a one-sided random walk only in the strong-coupling limit.
\section{Introduction}
\vskip .1cm
Neutrinos are the only fermions which the Standard Model (SM) predicts
to be massless. This ansatz was justified by the apparent
masslessness of neutrinos in most experiments. However, the situation
has changed due to the important impact of underground experiments,
since the pioneering radiochemical experiments of Davis and collaborators,
to the more recent Gallex, Sage, Kamiokande and SuperKamiokande
experiments \cite{solarexp,atmexp,superkatm98,sk300,sk504}. Altogether
they provide solid evidence for the solar and the atmospheric neutrino
problems, two milestones in the search for physics beyond the SM. Of
particular importance has been the recent confirmation by the
SuperKamiokande collaboration \cite{superkatm98} of the atmospheric
neutrino zenith-angle-dependent deficit, which has marked a turning
point in our understanding of neutrinos, providing a strong evidence
for \hbox{$\nu_\mu$ } conversions. In addition to the neutrino data from
underground experiments there is also some possible indication for
neutrino oscillations from the LSND experiment~\cite{LSND}. To this we
may add the possible r\^ole of neutrinos in the dark matter problem
and structure formation \cite{cobe,cobe2,iras}. If one boldly insists
on including also the last two requirements, together with the data on
solar and atmospheric neutrinos, then we have {\sl three mass scales}
involved in neutrino oscillations. The simplest way to reconcile these
requirements invokes the existence of a light sterile neutrino
\cite{ptv92,pv93,cm93}. The prototype models proposed in
\cite{ptv92,pv93} enlarge the $SU(2) \otimes U(1) $ Higgs sector in such a way that
neutrinos acquire mass radiatively, without unification or
seesaw. Out of the four neutrinos, two of them lie at the MSW scale
and the other two maximally-mixed neutrinos are at the HDM/LSND
scale. The latter scale arises at one-loop, while the solar and
atmospheric scales come in at the two-loop level. The lightness of the
sterile neutrino, the nearly maximal atmospheric neutrino mixing, and
the generation of the solar and atmospheric neutrino scales all result
naturally from the assumed lepton-number symmetry and its breaking.
Either \hbox{$\nu_e$ }- \hbox{$\nu_\tau$ } conversions explain the solar data with
\hbox{$\nu_\mu$ }- \hbox{$\nu_{s}$ } oscillations accounting for the atmospheric deficit
\cite{ptv92}, or else the r\^oles of \hbox{$\nu_\tau$ } and \hbox{$\nu_{s}$ } are reversed
~\cite{pv93}. These two basic schemes have distinct implications at
future solar \& atmospheric neutrino experiments, as well as
cosmology.
\section{Theories of Neutrino Mass}
\vskip .1cm
One of the most unpleasant features of the SM is that the masslessness
of neutrinos is not dictated by an underlying {\sl principle}, such as
that of gauge invariance in the case of the photon: the SM simply
postulates that neutrinos are massless by choosing a restricted
multiplet content. {\sl Why are neutrinos so special when compared
with the other fundamental fermions}? If massive, neutrinos would
present another puzzle: {\sl Why are their masses so small compared to
those of the charged fermions}? The fact that neutrinos are the only
electrically neutral elementary fermions may hold the key to the
answer, namely neutrinos could be Majorana fermions, the most
fundamental kind of fermion. In this case the suppression of their
mass could be associated to the breaking of lepton number symmetry at
a very large energy scale within a {\sl unification approach}, which
can be implemented in many extensions of the SM. Alternatively,
neutrino masses could arise from garden-variety {\sl weak-scale
physics} characterized by a scale $\vev{\sigma} = \hbox{$\cal O$ }(m_Z)$ where
$\vev{\sigma}$ denotes a $SU(2) \otimes U(1) $ singlet vacuum expectation value which
owes its smallness to the symmetry enhancement which would result if
$\vev{\sigma}$ and $m_\nu \to 0$.
One should realize however that, although the physics of neutrinos can
be rather different in various gauge theories of neutrino mass, there
is hardly any predictive power on masses and mixings; this is one
aspect of the so-called flavour problem, which is probably the
toughest open problem in physics.
\subsection{Unification Approach}
\vskip .1cm
An attractive possibility is to ascribe the origin of parity violation
in the weak interaction to the spontaneous breaking of B-L symmetry
in the context of left-right symmetric extensions such as the $SU(2)_L \otimes SU(2)_R \otimes U(1)$
\cite{LR}, $SU(4) \otimes SU(2) \otimes SU(2)$ \cite{PS} or $SO(10)$ gauge groups \cite{GRS}. In this case
the masses of the light neutrinos are obtained by diagonalizing the
following mass matrix in the basis $\nu,\nu^c$
\begin{equation}
\left[\matrix{
M_L & D \cr
D^T & M_R }\right]
\label{SS}
\end{equation}
where $D$ is the standard $SU(2) \otimes U(1) $ breaking Dirac mass term and $M_R =
M_R^T$ is the isosinglet Majorana mass that may arise from a 126
vacuum expectation value (vev) in $SO(10)$. The magnitude of the $M_L
\nu\nu$ term \cite{2227} is also suppressed by the left-right breaking
scale, $M_L \propto 1/M_R$ \cite{LR}.
In the seesaw approximation, one finds
\begin{equation}
M_{\nu \: eff} = M_L - D M_R^{-1} D^T
\:.
\label{SEESAW}
\end{equation}
As a result one is able to explain naturally the relative smallness of
\hbox{neutrino } masses since $m_\nu \propto 1/M_R$. Although $M_R$ is expected
to be large, its magnitude heavily depends on the model and it may
have different possible structures in flavour space (so-called
textures) \cite{Smirnov}. As a result it is hard to make firm
predictions for the corresponding light neutrino masses and mixings
that are generated through the seesaw mechanism. In fact this freedom
has been exploited in model building in order to account for an almost
degenerate seesaw-induced neutrino mass spectrum \cite{DEG}.
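As a one-generation numerical check (toy values in arbitrary units, purely illustrative), one can compare the exact light eigenvalue of the matrix in \eq{SS} with the seesaw approximation \eq{SEESAW}:

```python
import math

# Toy one-generation seesaw, arbitrary units: Dirac mass D much
# smaller than the heavy Majorana mass M_R, with M_L = 0.
D, M_R = 1.0, 1.0e6

# Exact eigenvalues of the 2x2 matrix [[0, D], [D, M_R]].
tr, det = M_R, -D**2
disc = math.sqrt(tr**2 - 4.0 * det)
m_heavy = 0.5 * (tr + disc)
m_light = abs(det) / m_heavy   # numerically stable small eigenvalue

m_seesaw = D**2 / M_R          # seesaw approximation
print(m_light, m_seesaw)       # both ~1e-6: m_nu is suppressed by 1/M_R
```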
One virtue of the unification approach is that it may allow one to
gain a deeper insight into the flavour problem. There have been
interesting attempts at formulating supersymmetric unified schemes
with flavour symmetries and texture zeros in the Yukawa couplings. In
this context a challenge is to obtain the large lepton mixing now
indicated by the atmospheric neutrino data.
\subsection{Weak-Scale Approach}
\vskip .1cm
Although very attractive, the unification approach is by no means the
only way to generate neutrino masses. There are many schemes which do
not require any large mass scale. The extra particles employed to
generate the neutrino masses have masses $\hbox{$\cal O$ }(m_Z)$ accessible to
present experiments. There is a variety of such mechanisms, in which
neutrinos acquire mass either at the tree level or radiatively. Let
us look at some.
\subsubsection{Tree Level}
\vskip .1cm
For example, it is possible to extend the lepton sector of the $SU(2) \otimes U(1) $
theory by adding a set of {\sl two} 2-component isosinglet neutral
fermions, denoted ${\nu^c}_i$ and $S_i$, $i=e,~\mu$ or $\tau$ in each
generation. In this case one can consider the mass matrix (in the
basis $\nu, \nu^c, S$) \cite{CON}
\begin{equation}
\left[\matrix{
0 & D & 0 \cr
D^T & 0 & M \cr
0 & M^T & \mu }\right]
\label{MATmu}
\end{equation}
The Majorana masses for the neutrinos are determined from
\begin{equation}
M_L = D M^{-1} \mu {M^T}^{-1} D^T
\label{33}
\end{equation}
In the limit $\mu \to 0$ exact lepton number symmetry is recovered,
keeping neutrinos strictly massless to all orders in
perturbation theory, as in the SM. The corresponding texture of the
mass matrix has been suggested in various theoretical models
\cite{WYLER}, such as superstring inspired models~\cite{SST}. In the
latter the zeros arise due to the lack of Higgs fields to provide the
usual Majorana mass terms.
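A quick one-generation check of \eq{33} (toy values in arbitrary units, for illustration only): the small root of the characteristic polynomial of the matrix in \eq{MATmu} indeed approaches $D^2\mu/M^2$, vanishing linearly with $\mu$.

```python
# One-generation toy version of the (nu, nu^c, S) mass matrix,
# arbitrary units, with D << M and mu << M.
D, M, mu = 1.0, 1.0e3, 1.0e-3

def charpoly(lam):
    """det(A - lam*I) for A = [[0, D, 0], [D, 0, M], [0, M, mu]]."""
    return -lam**3 + mu * lam**2 + (M**2 + D**2) * lam - D**2 * mu

# Bisect for the small positive root near the origin.
lo, hi = 0.0, 1.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if charpoly(lo) * charpoly(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
m_light = 0.5 * (lo + hi)

m_approx = D**2 * mu / M**2   # one-generation limit of the formula above
print(m_light, m_approx)      # both ~1e-9, vanishing linearly with mu
```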
The smallness of neutrino mass then follows from the smallness of
$\mu$. The scale characterizing $M$, unlike $M_R$ in the seesaw
scheme, can be low. As a result, in contrast to the heavy neutral
leptons of the seesaw scheme, those of the present model can be light
enough as to be produced at high energy colliders such as LEP
\cite{CERN} or at a future Linear Collider. The smallness of $\mu$ is
in turn natural, in 't Hooft's sense, as the symmetry increases when
$\mu \to 0$, i.e. total lepton number is restored. This scheme offers a
good alternative explanation for the smallness of neutrino mass, as it
bypasses the need for the large mass scale present in the seesaw unification
approach. One can show that, since the matrices $D$ and $M$ are not
simultaneously diagonal, the leptonic charged current exhibits a
non-trivial structure that cannot be rotated away, even if we set $\mu
\equiv 0$. The phenomenological implication of this, otherwise
innocuous twist on the SM, is that there is neutrino mixing despite
the fact that light neutrinos are strictly massless. It follows that
flavour and CP are violated in the leptonic currents, despite the
masslessness of neutrinos. The loop-induced lepton flavour and CP
non-conservation effects, such as $\mu \to e + \gamma$~\cite{BER,3E},
or CP asymmetries in lepton-flavour-violating processes such as $Z \to
e \bar{\tau}$ or $Z \to \tau \bar{e}$~\cite{CP} are precisely
calculable. The resulting rates may be of experimental interest
\cite{ETAU,TTTAU,cernlfv}, since they are not constrained by the
bounds on neutrino mass, only by those on universality, which are
relatively poor. In short, this is a conceptually simple and
phenomenologically rich scheme.
Another remarkable implication of this model is a new type of resonant
neutrino conversion mechanism \cite{massless0}, which was the first
resonant mechanism to be proposed after the MSW effect \cite{MSW}, in
an unsuccessful attempt to bypass the need for neutrino mass in the
resolution of the solar neutrino problem. According to the mechanism,
massless neutrinos and anti-neutrinos may undergo resonant flavour
conversion, under certain conditions. Though these do not occur in the
Sun, they can be realized in the chemical environment of supernovae
\cite{massless}. Recently it has been pointed out how they may provide
an elegant approach for explaining the observed velocity of pulsars
\cite{pulsars}.
\subsubsection{Radiative Level}
\vskip .1cm
There is also a large variety of {\sl radiative} models, where the $SU(2) \otimes U(1) $
multiplet content is extended in order to generate neutrino
masses. The prototype one-loop scheme is the one proposed by Zee
\cite{zee}. Supersymmetry with explicitly broken R-parity also
provides an alternative one-loop mechanism to generate neutrino mass.
These arise, for example, from scalar quark or scalar lepton
contributions, as shown in \fig{mnrad}
\begin{figure}[t]
\centerline{\protect\hbox{\psfig{file=mnrad.ps,height=4cm,width=7cm}}}
\vglue -0.6cm
\caption{One-loop-induced Neutrino Mass. }
\label{mnrad}
\end{figure}
A two-loop scheme to induce neutrino mass was suggested by Babu
\cite{Babu88}. The relevant diagram is shown in \fig{2loop}
\begin{figure}
\centerline{\protect\hbox{\psfig{file=neumass.ps,height=4.5cm,width=7cm}}}
\caption{Two-loop-induced Neutrino Mass }
\label{2loop}
\end{figure}
\footnote{Note here that I have used the slight variant of the Babu
model suggested in ref. \cite{ewbaryo}, which incorporates the idea of
spontaneous, rather than explicit lepton number violation}.
In the above examples active neutrinos acquire radiative mass. One can
also employ the radiative approach to construct models including
sterile neutrinos, such as those in ref.~\cite{ptv92,pv93}. In this
case some new Feynman diagram topologies are encountered.
\subsection{A Hybrid Approach}
\vskip .1cm
I now describe an interesting mechanism of neutrino mass generation
that combines seesaw and radiative mechanisms. It invokes
supersymmetry with broken R-parity, as the origin of neutrino mass and
mixings \cite{epsrad}. The simplest model is a unified minimal
supergravity model with universal soft breaking parameters (MSUGRA)
and bilinear breaking of R--parity~\cite{epsrad,RPothers}. Contrary to
a popular misconception, the bilinear violation of R--parity implied
by the $\epsilon_3$ term in the superpotential is physical and
cannot be rotated away~\cite{BRpVtalks}. It also leads, through a
minimization condition, to a non-zero sneutrino vev, $v_3$. It is
well-known~\cite{rossarca} that in such models of broken R--parity the
tau neutrino $\nu_{\tau}$ acquires a mass, due to the mixing between
neutrinos and neutralinos. It comes from the matrix
\begin{equation}
\left[\matrix{
M_1 & 0 & -{1\over 2} g'v_d & {1\over 2} g'v_u & -{1\over 2} g'v_3 \cr
0 & M_2 & {1\over 2} g v_d & -{1\over 2} g v_u & {1\over 2} g v_3 \cr
-{1\over 2} g'v_d & {1\over 2} g v_d & 0 & -\mu & 0 \cr
{1\over 2} g'v_u & -{1\over 2} g v_u & -\mu & 0 & \epsilon_3 \cr
-{1\over 2} g'v_3 & {1\over 2} g v_3 & 0 & \epsilon_3 & 0
}\right]
\label{eq:NeutMassMat}
\end{equation}
where the first two rows are gauginos, the next two Higgsinos, and the
last one denotes the tau neutrino. The $v_u$ and $v_d$ are the
standard vevs, $g's$ are gauge couplings and $M_{1,2}$ are the gaugino
mass parameters. Since the $\epsilon_3$ and the $v_3$ are related, the
simplest (one-generation) version of this model contains only one
extra free parameter in addition to those of the MSUGRA model. The
universal soft supersymmetry-breaking parameters at the unification
scale $m_X$ are evolved via renormalization group equations down to
the weak scale $\hbox{$\cal O$ }(m_Z)$. This induces an effective non-universality
of the soft terms {\sl at the weak scale} which in turn implies a
non-zero sneutrino vev $v'_3$ given as
\begin{equation}
v'_3 \approx \frac{\epsilon_3 \mu} {{m_Z}^4}
\left(v'_d \Delta M^2 + \mu'v_u \Delta B \right)
\label{App_v3p}
\end{equation}
where the primed quantities refer to a basis in which we eliminate the
$\epsilon_3$ term from the superpotential (but reintroduce it, of
course, in other sectors of the theory).
The scalar soft masses and bilinear mass parameters obey $\Delta
M^2=0$ and $\Delta B=0$ at $m_X$. However at the weak scale they are
calculable from radiative corrections as
\begin{eqnarray}
\Delta M^2 & \approx & {{3h_b^2} \over{8\pi^2}} m_{Z}^2
\ln{{M_{GUT}}\over{m_Z}}
\end{eqnarray}
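Numerically the induced non-universality is small. In the sketch below the inputs (a bottom Yukawa $h_b \simeq 0.02$, as appropriate at low $\tan\beta$, and $M_{GUT} \simeq 2\times 10^{16}$ GeV) are illustrative assumptions, not fitted values:

```python
import math

# Illustrative inputs (assumptions of this sketch, in GeV):
h_b, m_Z, M_GUT = 0.02, 91.19, 2.0e16

dM2 = 3.0 * h_b**2 / (8.0 * math.pi**2) * m_Z**2 * math.log(M_GUT / m_Z)
print(dM2)   # a few GeV^2: a small but calculable non-universality
```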
Note that \eq{App_v3p} implies that the R--parity-violating effects
induced by $v'_3$ are {\sl calculable} in terms of the primordial
R--parity-violating parameter $\epsilon_3$. It is clear that the
universality of the soft terms plays a crucial r\^ole in the
calculability of the $v'_3$ and hence of the resulting neutrino mass
\cite{epsrad}. Thus \eq{eq:NeutMassMat} represents a new kind of
see-saw scheme in which the $M_R$ of \eq{SS} is the neutralino mass,
while the r\^ole of the Dirac entry $D$ is played by the $v'_3$, which
is induced radiatively as the parameters evolve from $m_X$ to the weak
scale. Thus we have a {\sl hybrid} see-saw mechanism, with naturally
suppressed Majorana $\nu_{\tau}$ mass induced by the mixing between
the weak eigenstate tau neutrino and the {\sl zino}.
Let me now turn to estimate the expected \hbox{$\nu_\tau$ } mass. For this purpose
let me first determine the tau neutrino mass in the most general
supersymmetric model with bilinear breaking of R-parity, {\sl without
imposing soft universality}. The \hbox{$\nu_\tau$ } mass depends quadratically on an
effective parameter $\xi$ defined as $\xi \equiv (\epsilon_3 v_d + \mu
v_3)^2 \propto {v'_3}^2$ characterizing the violation of R--parity.
The expected \hbox{$m_{\nu_\tau}$ } values are illustrated in \fig{mnt_xi_ev}. The band
shown in the figure is obtained through a scan over the parameter
space requiring that the supersymmetric particles are not too light.
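The quadratic dependence of \hbox{$m_{\nu_\tau}$ } on $\xi$ can be exhibited numerically by projecting the neutrino row of \eq{eq:NeutMassMat} onto the $4\times 4$ neutralino block, $m_{\nu_\tau} \simeq - m^T M_\chi^{-1} m$. All parameter values in the following Python sketch are toy numbers in arbitrary units, chosen only to display the scaling:

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def mnu(eps3, v3, M1=120.0, M2=240.0, mu=200.0,
        g=0.65, gp=0.36, vd=50.0, vu=200.0):
    """Seesaw projection m_nu = -m^T Mchi^{-1} m of the 5x5 matrix.

    Mchi is the 4x4 gaugino-Higgsino block and m the neutrino row;
    all default values are toy numbers in arbitrary units.
    """
    Mchi = [[M1, 0.0, -0.5 * gp * vd, 0.5 * gp * vu],
            [0.0, M2, 0.5 * g * vd, -0.5 * g * vu],
            [-0.5 * gp * vd, 0.5 * g * vd, 0.0, -mu],
            [0.5 * gp * vu, -0.5 * g * vu, -mu, 0.0]]
    m = [-0.5 * gp * v3, 0.5 * g * v3, 0.0, eps3]
    x = solve(Mchi, m)
    return -sum(mi * xi for mi, xi in zip(m, x))

m1 = mnu(eps3=1.0, v3=0.1)
m2 = mnu(eps3=2.0, v3=0.2)   # double the R-parity-violating parameters
print(m2 / m1)               # -> 4.0: the induced mass scales with xi
```

Doubling both $\epsilon_3$ and $v_3$ doubles the mixing vector $m$ and hence quadruples the induced mass, as expected from $m_{\nu_\tau} \propto \xi$.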
\begin{figure}[t]
\centerline{\protect\hbox{\psfig{file=mnt_xi_ev.ps,height=6cm,width=7cm}}}
\caption{Tau neutrino mass versus
$\xi \equiv(\epsilon_3v_d+\mu v_3)^2$, from ref.~\protect\cite{epsrad} }
\label{mnt_xi_ev}
\end{figure}
Let us now compare this with the cosmologically allowed values of the
tau neutrino mass. The cosmological critical density bound \hbox{$m_{\nu_\tau}$ } $\raise0.3ex\hbox{$\;<$\kern-0.75em\raise-1.1ex\hbox{$\sim\;$}}
92 \Omega h^2$ eV only holds if neutrinos are stable. In the present
model (with 3-generations) the \hbox{$\nu_\tau$ } can decay into 3 neutrinos, via the
neutral current \cite{2227,774}, or by slepton exchanges. This decay
will reduce the relic \hbox{$\nu_\tau$ } abundance to the required level, as long as
\hbox{$\nu_\tau$ } is heavier than about 100 KeV or so. On the other hand primordial
Big-Bang nucleosynthesis implies that \hbox{$\nu_\tau$ } is lighter than about an MeV
or so \cite{bbnutaustable}.
However, {\sl if one adopts a SUGRA scheme where universality of the
soft supersymmetry breaking terms at $m_X$ is assumed}, then the \hbox{$\nu_\tau$ }
mass is theoretically {\sl predicted} in terms of $h_b$ and can be
small in this case due to a natural cancellation between the two terms
in the parameter $\xi$, which follows from the assumed universality of
the softs at $m_X$. One can verify that \hbox{$m_{\nu_\tau}$ } may easily lie in the
electron-volt range, in which case \hbox{$\nu_\tau$ } could be a component of the hot
dark matter of the Universe.
Notice that \hbox{$\nu_e$ } and \hbox{$\nu_\mu$ } remain massless in this approximation. They
get masses either from scalar loop contributions in \fig{mnrad} or by
mixing with singlets in models with spontaneous breaking of R-parity
\cite{Romao92}. It is important to notice that even when \hbox{$m_{\nu_\tau}$ } is
small, many of the corresponding R-parity violating effects can be
sizeable. An obvious example is the fact that the lightest neutralino
will typically decay inside the detector, unlike in standard
R-parity-conserving supersymmetry. This leads to a vastly unexplored
plethora of phenomenological possibilities in supersymmetric physics
\cite{desert}.
\vskip .1cm
In conclusion I can say that, other than the seesaw scheme, none of
the above models requires a large mass scale. As a result they lead to
a potentially rich phenomenology, since the extra particles required
have masses at scales that could be accessible to present experiments.
In the simplest versions of these models the neutrino mass arises from
the explicit violation of lepton number. Their phenomenological
potential gets richer if one generalizes the models so as to implement
a spontaneous violation scheme. This brings me to the next section.
\subsection{Weak-scale majoron}
\vskip .1cm
If lepton number (or B-L) is an ungauged symmetry and if it is
arranged to break spontaneously, the generation of neutrino masses
will be accompanied by the existence of a physical Goldstone boson
that we generically call majoron.
Except for the left-right symmetric unification approach, in which B-L
is a gauge symmetry, in all of the above schemes one can implement the
spontaneous violation of lepton number.
One can also introduce it in an $SU(2) \otimes U(1) $ seesaw framework \cite{CMP}, as
originally proposed, but I do not consider this case here, see
ref.~\cite{fae} for a review. Here I will mainly concentrate on
weak-scale physics. In all the models I consider, lepton number breaks
at a scale given by a vacuum expectation value $\vev{\sigma} \sim
m_{weak}$.
Such a scale arises as the most natural one since in all of these
models, as already mentioned, we have that the neutrino masses vanish
as the lepton-breaking scale $\vev{\sigma} \to 0$
\cite{JoshipuraValle92}.
It is also clear that in any acceptable model one must arrange for the
majoron to be mainly an $SU(2) \otimes U(1) $ singlet, ensuring that it does not affect
the invisible Z decay width, well-measured at LEP. In models where the
majoron has L=2 the neutrino mass is proportional to an insertion of
$\vev{\sigma}$, as indicated in \fig{2loop}. In the supersymmetric
model with broken R-parity the majoron is mainly a singlet sneutrino,
which has lepton number L=1, so that $m_\nu \propto \vev{\sigma}^2$,
where $\vev{\sigma} \equiv \vev{\widetilde{\nu^c}}$, with
$\widetilde{\nu^c}$ denoting the singlet sneutrino. The presence of
the square, just as in the parameter $\xi$ ~in \fig{mnt_xi_ev},
reflects the fact that the neutrino gets a Majorana mass which has
lepton number L=2. The sneutrino gets a vev at the effective
supersymmetry breaking scale $ m_{susy} = m_{weak}$.
The weak-scale majorons may have remarkable phenomenological
implications, such as the possibility of invisibly decaying Higgs bosons
\cite{JoshipuraValle92}. Unfortunately I have no time to discuss it
here (see, for instance \cite{desert}).
If the majoron acquires a KeV mass (natural in weak-scale models) from
gravitational effects at the Planck scale \cite{ellis} it may play a
r\^ole in cosmology as dark matter~\cite{KEV}.
In what follows I will just focus on two examples of how the
underlying physics of weak-scale majoron models can affect neutrino
cosmology in an important way.
\subsubsection{Heavy neutrinos and the Universe Mass}
\vskip .1cm
Neutrinos of mass less than \hbox{$\cal O$ }(100 KeV) or so are cosmologically
stable if they have only SM interactions. Their contribution to the
present density of the universe implies \cite{KT}
\begin{equation}
\label{RHO1}
\sum m_{\nu_i} \raise0.3ex\hbox{$\;<$\kern-0.75em\raise-1.1ex\hbox{$\sim\;$}} 92 \: \Omega_{\nu} h^2 \: eV\:,
\end{equation}
where the sum is over all isodoublet neutrino species with mass less
than \hbox{$\cal O$ }(1 MeV). The parameter $\Omega_{\nu} h^2 \leq 1$, where $h$
parametrizes the uncertainty in the present value of the Hubble parameter,
$0.4 \raise0.3ex\hbox{$\;<$\kern-0.75em\raise-1.1ex\hbox{$\sim\;$}} h \raise0.3ex\hbox{$\;<$\kern-0.75em\raise-1.1ex\hbox{$\sim\;$}} 1$, while $\Omega_{\nu} = \rho_{\nu}/\rho_c$,
measures the fraction of the critical density $\rho_c$ in neutrinos.
For the $\nu_{\mu}$ and $\nu_{\tau}$ this bound is much more stringent
than the laboratory limits.
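For orientation, \eq{RHO1} is trivial to evaluate; the values below are a back-of-the-envelope sketch:

```python
def mass_bound_eV(omega_nu, h):
    """Summed light-neutrino mass bound: 92 * Omega_nu * h^2, in eV."""
    return 92.0 * omega_nu * h**2

# For neutrinos closing the universe (Omega_nu = 1):
print(mass_bound_eV(1.0, 0.4))   # ~14.7 eV at the low end of h
print(mass_bound_eV(1.0, 1.0))   # 92.0 eV at h = 1
```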
In weak-scale majoron models the generation of neutrino mass is
accompanied by the existence of a physical majoron, with potentially
fast majoron-emitting decay channels such as \cite{fae,V}
\begin{equation}
\label{NUJ}
\nu^\prime \to \nu + J \:\: ,
\end{equation}
as well as new annihilations to majorons,
\begin{equation}
\label{nunuJJ}
\nu^\prime + \nu^\prime \to J + J \:\: .
\end{equation}
These could eliminate relic neutrinos and therefore allow neutrinos of
higher mass, as long as the rates are large enough to allow for an
adequate red-shift of the heavy neutrino decay and/or annihilation
products. While the annihilation involves a diagonal majoron-neutrino
coupling $g$, the decays proceed only via the non-diagonal part of the
coupling, in the physical mass basis. A careful diagonalization of
both mass matrix and coupling matrix is essential in order to avoid
wild over-estimates of the heavy neutrino decay rates, such as that in
ref.~\cite{CMP}.
The point is that, once the neutrino mass matrix is diagonalized,
there is a danger of simultaneously diagonalizing the majoron
couplings to neutrinos. That would be analogous to the GIM mechanism
present in the SM for the couplings of the Higgs to fermions. Models
that avoid this GIM mechanism in the majoron-neutrino couplings have
been proposed, e.g. in ref.~\cite{V}. Many of them are weak-scale
majoron models \cite{CON,JoshipuraValle92,Romao92}. A general method
to determine the majoron couplings to neutrinos and hence the neutrino
decay rates in any majoron model was first given in ref. \cite{774}.
For an estimate in the model with spontaneously broken R-parity
\cite{MASIpot3} see ref. \cite{Romao92}.
In short one may say that neutrino lifetimes can be shorter than
required by the cosmological mass bound, for all values of the masses
which are presently allowed by laboratory experiments.
\subsubsection{Heavy neutrinos and Cosmological Nucleosynthesis}
\vskip .1cm
Similarly, the number of light neutrino species is also restricted by
cosmological Big Bang Nucleosynthesis (BBN). Due to its large mass, an
MeV stable (lifetime longer than $\sim 100$ sec) tau neutrino would be
equivalent to several SM massless neutrino species and would therefore
substantially increase the abundance of primordially produced
elements, such as $^{4}He$ and deuterium
\cite{dmeas,cris.ncris,sarkar}. This can be converted into
restrictions on the \hbox{$\nu_\tau$ } mass. If the bound on the effective number of
massless neutrino species is taken as $N_\nu < 3.4-3.6$, one can rule
out $\nu_\tau$ masses above 0.5 MeV~\cite{bbnutaustable}. If we take
$N_\nu < 4.5$ \cite{sarkar} the \hbox{$m_{\nu_\tau}$ } limit loosens accordingly, as
seen from \fig{bbneq}, and allows a \hbox{$\nu_\tau$ } of about an MeV or so.
In the presence of \hbox{$\nu_\tau$ } annihilations the BBN \hbox{$m_{\nu_\tau}$ } bound is
substantially weakened or eliminated \cite{DPRV}. In \fig{bbneq} we
also give the expected $N_\nu$ value for different values of the
coupling $g$ between $\nu_\tau$'s and $J$'s, expressed in units of
$10^{-5}$.
\begin{figure}
\centerline{\protect\hbox{\psfig{file=bbneq.ps,height=7cm,width=8cm}}}
\caption{The dashed line shows the effective number of massless SM
neutrinos equivalent to the heavy \hbox{$\nu_\tau$ } ($g=0$). Depending on the value
of $g$ (in units of $10^{-5}$) one can lower $N_\nu$ below the
canonical SM value $N_\nu = 3$ due to the effect of \hbox{$\nu_\tau$ }
annihilations. From ref. \protect\cite{DPRV} }
\label{bbneq}
\end{figure}
Comparing with the SM $g=0$ case one sees that for a fixed
$N_\nu^{max}$, a wide range of tau neutrino masses is allowed for
large enough values of $g$. No \hbox{$\nu_\tau$ } masses below the LEP limit can be
ruled out, as long as $g$ exceeds a few times $10^{-4}$.
One can also see from the figure that {\sl $N_\nu$ can also be lowered
below the canonical SM value $N_\nu = 3$} due to the effect of the
heavy \hbox{$\nu_\tau$ } annihilations to majorons.
These results may be re-expressed in the $m_{\nu_\tau}-g$ plane, as
shown in figure \ref{neffmg}.
\begin{figure}
\centerline{
\psfig{file=bbnregion.ps,height=7cm,width=7cm}}
\caption{The region above each curve is allowed for the corresponding
$N_\nu^{max}$. From ref. \protect\cite{DPRV} } \vglue -.5cm
\label{neffmg}
\end{figure}
We note that the required values of $g(m_{\nu_\tau})$ fit well with
the theoretical expectations of many weak-scale majoron models.
The above discussion has been on the effect of \hbox{$\nu_\tau$ } annihilations to
majorons in BBN. In some weak-scale majoron models decays in \eq{NUJ}
may lead to short enough \hbox{$\nu_\tau$ } lifetimes that they may also play an
important r\^ole in BBN \cite{bbnunstable}.
Before concluding the discussion on majorons, let me comment that the
majoron may be realized even in the context of models where B-L is a
gauge symmetry, such as left-right-symmetric models, by suitably
implementing a spontaneously broken global U(1) symmetry similar to
lepton number. It plays an interesting r\^ole in such models as it
allows the left-right scale to be relatively low~\cite{LRmajoron}.
\newpage
\section{Indications for Neutrino Mass}
\vskip .1cm
The most solid indications in favour of nonzero neutrino masses come
from underground experiments on solar and atmospheric neutrinos. I
will provide a theorist's sketch of the present experimental
situation.
\subsection{Solar Neutrinos}
\vskip .1cm
The puzzle posed by the data collected by the Homestake, Kamiokande,
and the radiochemical Gallex and Sage experiments still defies an
explanation in terms of the Standard Model. The most recent data on
rates are summarized as: $2.56 \pm 0.23$ SNU (chlorine), $72.2 \pm
5.6$ SNU (Gallex and Sage gallium experiments sensitive to the $pp$
neutrinos), and $(2.44 \pm 0.10) \times 10^6 {\rm cm^{-2} s^{-1} }$
($^8$B flux from SuperKamiokande)~\cite{solarexp}. This has been
re-confirmed by the 504 days data sample now collected by the
SuperKamiokande (SK) collaboration and reported at Neutrino 98~
\cite{sk504}.
\begin{figure}[t]
\centerline{\protect\hbox{
\psfig{file=independent.ps,height=7cm,width=8cm}}}
\caption{Recent SSM predictions, from ref.~\protect\cite{Bahcall98}}
\label{78}
\end{figure}
In \fig{78} one can see the predictions of various standard solar
models in the plane defined by the $^7$Be and $^8$B neutrino fluxes,
normalized to the predictions of the BP98 solar model~\cite{BP98}.
Abbreviations such as BP95 identify different solar models, as given
in ref.~\cite{models}. The rectangular error box gives the $3\sigma$
error range of the BP98 fluxes. The values of these fluxes indicated
by present data on neutrino event rates are also shown by the contours
in the figure. The best-fit $^7$Be neutrino flux is negative!
Possible non-standard astrophysical solutions are strongly constrained
by helioseismology studies \cite{Bahcall98} \cite{helio97}. Within the
standard solar model approach, the theoretical predictions clearly lie
far from the best-fit solution, and even far from the $3\sigma$
contour, leading us to conclude that new particle physics is the only
way to account for the data \cite{CF}.
The most likely possibility is to assume the existence of neutrino
conversions involving very small neutrino masses. The most attractive
theoretical schemes are the MSW effect \cite{MSW}, vacuum neutrino
oscillations (the {\sl just-so} solution) and, possibly, the Spin-Flavour
Precession mechanism proposed in ref.~\cite{SFP}, aided by the
Resonant enhancement due to matter effects in the Sun found in
ref.~\cite{RSFP}. The resulting RSFP mechanism still provides a
viable solution to the solar neutrino problem \cite{akhmedov97}.
The recent SK data updates the 300-day situation we had before
Neutrino 98~\cite{sk300} without major surprises, except that the SK
collaboration has now given the first detailed report of the recoil
energy spectrum produced by solar neutrino
interactions~\cite{sk504}. The measured spectrum they reported at
Neutrino~98 shows more events at the highest bins than would have been
expected from the most popular neutrino oscillation parameters
discussed previously. At first sight this might seem bad news for the
oscillation scenarios. However, Bahcall and Krastev have noted that if
the low energy cross section for ${\rm ^3He} ~+~ p ~\to ~{\rm ^4He}
~+~e^+ ~+~\nu_e $, the so-called $hep$ reaction, is $\raise0.3ex\hbox{$\;>$\kern-0.75em\raise-1.1ex\hbox{$\sim\;$}} 20$ times
larger than the best (but uncertain) theoretical estimates, then this
reaction could significantly influence the electron energy spectrum
produced by solar neutrino interactions in the high recoil region.
This would hardly have any effect at lower energies. They compare the
predicted energy spectra for different assumed $hep$ fluxes and
different neutrino oscillation scenarios with the one measured at
SuperKamiokande.
\begin{figure}
\centerline{\protect\hbox{
\psfig{file=hep1.ps,height=7cm,width=8cm}}}
\vglue -.5cm
\caption{Combined $^8$B plus $hep$ energy spectrum from
ref.~\protect\cite{bkhep}. The total flux of $hep$ neutrinos was
varied so as to obtain the best-fit for each scenario.}
\label{spec500}
\end{figure}
Fig. \ref{spec500} shows the ratio of the measured \cite{sk504} to the
calculated number of events with electron recoil energy $E$. The
crosses are the recent SK measurements~\cite{sk504}, while the
calculated curves are global fits to all of the data. The horizontal
line at ${\rm Ratio} = 0.37$ represents the ratio of the total event
rate measured by SuperKamiokande to the predicted event
rate~\cite{BP98} with no oscillations and only $^8$B neutrinos. One
sees how the spectra with enhanced $hep$ contributions provide better
fits to the SK data, suggesting that these neutrinos may be playing a
r\^ole.
One can determine the required solar neutrino parameters $\Delta m^2$
and $\sin^2 2\theta$ through a $\chi^2$ fit of the experimental data.
In \fig{msw} we show the allowed two-flavour regions obtained in an
updated MSW global fit analysis of the solar neutrino data for the
case of active neutrino conversions. The data include the chlorine,
Gallex, Sage~\cite{solarexp} and SK total event rates~\cite{sk504},
the SK energy spectrum~\cite{sk504}, as well as the SK day-night
asymmetry~\cite{sk504}, which would be expected in the MSW scheme due
to regeneration effects at the Earth. The data also includes the
recent SK 504-day sample. The analysis uses the BP98 model but with
an arbitrary $hep$ neutrino flux~\cite{bks98}.
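A quick numerical sketch makes the MSW mechanism concrete: resonance occurs where $\sqrt{2}\,G_F N_e = \Delta m^2 \cos 2\theta/2E$. The snippet below is illustrative only; the parameter values are assumed small-angle-type numbers, not the fit results. It checks that for a solar-scale $\Delta m^2$ the resonance density indeed falls inside the Sun:

```python
import math

G_F = 1.166e-5        # Fermi constant [GeV^-2]
HBARC = 1.9733e-14    # hbar*c [GeV*cm]

def resonance_density(dm2_eV2, sin2_2theta, E_MeV):
    """Electron density [cm^-3] where the MSW resonance condition
    sqrt(2)*G_F*N_e = dm^2*cos(2theta)/(2E) is satisfied."""
    dm2 = dm2_eV2 * 1e-18                 # eV^2 -> GeV^2
    E = E_MeV * 1e-3                      # MeV -> GeV
    cos2t = math.sqrt(1.0 - sin2_2theta)
    n_gev3 = dm2 * cos2t / (2.0 * math.sqrt(2.0) * G_F * E)
    return n_gev3 / HBARC**3              # GeV^3 -> cm^-3

# Assumed parameters: dm^2 ~ 5e-6 eV^2, sin^2(2theta) ~ 6e-3, E ~ 1 MeV
n_res = resonance_density(5e-6, 6e-3, 1.0)
print(f"resonance N_e ~ {n_res:.1e} cm^-3")  # ~2e25, below the solar core
```

Since the central electron density of the Sun is of order $10^{26}~{\rm cm^{-3}}$, MeV neutrinos with such parameters cross the resonance layer on their way out.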
\begin{figure}
\centerline{
\protect\hbox{\psfig{file=globalmswhep.ps,width=8cm,height=7cm}}}
\vglue -.5cm
\caption{Presently allowed MSW solar neutrino parameters for 2-flavour
active neutrino conversions with an enhanced $hep$ flux, from
ref.~\protect\cite{bkhep}}
\label{msw}
\end{figure}
One notices from the analysis that rate-independent observables, such
as the electron recoil energy spectrum and the day-night asymmetry
(zenith angle distribution), play an important r\^ole in ruling out
large regions of MSW parameters.
A theoretical issue which has raised some interest recently is the
study of the possible effect of random fluctuations in the solar
matter density \cite{BalantekinLoreti,noise,noise2}. The possible
existence of noise fluctuations at a few percent level is not excluded
by present helioseismology studies.
\begin{figure}
\centerline{\protect\hbox{\psfig{file=m1a.ps,width=6.5cm,height=8cm,angle=90}}}
\vglue -.5cm
\caption{ Solar neutrino survival probability in the presence
of random density fluctuations, ref.~\protect\cite{noise}}
\label{Pnoise}
\end{figure}
In \fig{Pnoise} we show averaged solar neutrino survival probability
as a function of $E/\Delta m^2$, for $\sin^2 2\theta = 0.01$. This
figure was obtained via a numerical integration of the MSW evolution
equation in the presence of noise, using the density profile in the
Sun from BP95 in ref.~\cite{models}, and assuming that the correlation
length $L_0$ (which corresponds to the scale of the fluctuation) is
$L_0 = 0.1 \lambda_m$, where $\lambda_m$ is the neutrino oscillation
length in matter. An important assumption in the analysis is that $
l_{free} \ll L_0 \ll \lambda_m$, where $l_{free} \sim 10 $ cm is the
mean free path of the electrons in the solar medium. The fluctuations
may strongly affect the $^7$Be neutrino component of the solar
neutrino spectrum so that the Borexino experiment should provide an
ideal test, if sufficiently small errors can be achieved. The
potential of Borexino in probing the level of solar matter density
fluctuations provides an additional motivation for the experiment
\cite{borexino}. This is discussed in more detail in
ref. \cite{noise}.
\begin{figure}
\centerline{\protect\hbox{\psfig{file=figure17.eps,width=8cm,height=6.6cm}}}
\vglue -.5cm
\caption{Presently allowed vacuum oscillation
parameters, from ref.~\protect\cite{bks98}}
\label{vac98}
\end{figure}
The most popular alternative solution to the solar neutrino problem is
the {\sl vacuum oscillation solution} which clearly requires large
neutrino mixing and {\sl just-so} adjustment of the oscillation length
so as to coincide roughly with the Earth-Sun distance. This solution
fits with simplistic see-saw inspired-numerology and has attractive
features, as recently advocated in ref. \cite{glashow98}.
Fig. \ref{vac98} shows the regions of just-so oscillation parameters
obtained in a recent global fit of the data including the
504~days~SK~data sample, both the rates and the recoil energy
spectrum. Seasonal effects are expected in this scenario and could
potentially be used to further constrain the parameters, as described
in ref.~\cite{lisi}, and also to help discriminate it from the MSW
scenario.
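The {\sl just-so} adjustment can be checked on the back of an envelope: the vacuum oscillation length is $L_{\rm osc} = 4\pi E/\Delta m^2$, and it should come out comparable to the Earth-Sun distance. The values below are assumed, representative numbers (a 10 MeV $^8$B neutrino and $\Delta m^2 \sim 10^{-10}~{\rm eV}^2$), not fit results:

```python
import math

HBARC_CM = 1.9733e-14   # hbar*c [GeV*cm]
AU_KM = 1.496e8         # Earth-Sun distance [km]

def osc_length_km(E_GeV, dm2_eV2):
    """Vacuum oscillation length L = 4*pi*E/dm^2 (hbar = c = 1), in km."""
    dm2_GeV2 = dm2_eV2 * 1e-18
    return 4.0 * math.pi * E_GeV * HBARC_CM / dm2_GeV2 / 1e5

L = osc_length_km(1e-2, 1e-10)   # 10 MeV neutrino, dm^2 = 1e-10 eV^2
print(f"L_osc ~ {L:.1e} km ~ {L / AU_KM:.1f} AU")  # order of the Earth-Sun distance
```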
\subsection{Atmospheric Neutrinos}
\vskip .1cm
Showers initiated when primary cosmic rays hit the Earth's atmosphere
originate secondary mesons, mostly pions and kaons, which decay
producing \hbox{$\nu_e$ }'s, \hbox{$\nu_\mu$ }'s as well as \bne's and \bnm's \cite{atmreview}.
There has been a long-standing discrepancy between the predicted and
measured \hbox{$\nu_\mu$ }/\hbox{$\nu_e$ } ratio of the atmospheric neutrino fluxes
\cite{atmexp}. The anomaly was found both in water Cerenkov
experiments, such as Kamiokande, SuperKamiokande and IMB \cite{sk300},
as well as in the iron calorimeter Soudan2 experiment. Negative
experiments, such as Frejus and Nusex, have much larger errors.
Although individual $\nu_\mu$ or $\nu_e$ fluxes are only known to
within $30\%$ accuracy, the $\nu_\mu/\nu_e$ ratio is known to
$5\%$. The most important feature of the atmospheric neutrino
535-day data sample reported by the SK~collaboration at
Neutrino~98~\cite{superkatm98} is that it exhibits a {\sl
zenith-angle-dependent} deficit of muon neutrinos which is
inconsistent with expectations based on calculations of the
atmospheric neutrino fluxes. For recent analyses see
ref.~\cite{atm98,atmconcha,atmo98}. Experimental biases and
uncertainties in the prediction of neutrino fluxes and cross sections
are unable to explain the data.
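The benchmark value $\nu_\mu/\nu_e \approx 2$ follows from simple counting in the dominant chain $\pi \to \mu\nu_\mu$ followed by $\mu \to e\nu_e\nu_\mu$; the toy count below is deliberately naive (it ignores the energy dependence and the muons that reach the ground before decaying):

```python
def naive_flux_ratio(n_chains):
    """Count neutrino types from n_chains of pi -> mu nu_mu, mu -> e nu_e nu_mu.
    Each chain yields two muon-type and one electron-type neutrino."""
    nu_mu_like = 2 * n_chains   # nu_mu from the pion + nubar_mu from the muon
    nu_e_like = 1 * n_chains    # nu_e from the muon
    return nu_mu_like / nu_e_like

print(naive_flux_ratio(1000))   # -> 2.0
```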
In \fig{ang_mu} I show the measured zenith angle distribution of
electron-like and muon-like sub-GeV and multi-GeV events, as well as
the one predicted in the absence of oscillation. I also give the
expected distribution in various neutrino oscillation schemes.
\begin{figure}
\centerline{\protect\hbox{\epsfig{file=ang_mu.ps,width=10cm,height=8.45cm}}}
\caption{Theoretically expected zenith angle distributions for SK
electron and muon-like sub-GeV and multi-GeV events in the SM
(no-oscillation) and for the best-fit points of the various
oscillation channels, from ref.~\protect\cite{atm98,atmconcha}. The
crosses correspond to the SK observations reported at Neutrino~98.}
\label{ang_mu}
\end{figure}
The thick-solid histogram is the theoretically expected distribution
in the absence of oscillation, while the predictions for the best-fit
points of the various oscillation channels are indicated as follows:
for $\nu_\mu \to \nu_s$ (solid line), $\nu_\mu \to \nu_e$ (dashed
line) and $\nu_\mu \to \nu_\tau$ (dotted line). The errors displayed
on the experimental points are statistical only.
In the theoretical analysis we have used the latest improved
calculations of the atmospheric neutrino fluxes as a function of
zenith angle, including the muon polarization effect and took into
account a variable neutrino production point \cite{flux}. Clearly the
data are not reproduced by the no-oscillation hypothesis, adding
substantially to our confidence that the atmospheric neutrino anomaly
is real.
In \fig{mutausk4} I show the allowed neutrino oscillation parameters
obtained in a recent global fit of the sub-GeV and multi-GeV
(vertex-contained) atmospheric neutrino data~\cite{atm98,atmconcha}
including the recent data reported at Neutrino~98, as well as all
other experiments combined at 90 (thick solid line) and 99 \% CL (thin
solid line) for each oscillation channel considered.
\begin{figure}
\centerline{\protect\hbox{\epsfig{file=all500.ps,width=12.5cm,height=0.55\textheight}}}
\fcaption{Allowed atmospheric oscillation parameters for all
experiments including the SK data reported at Neutrino~98, combined at
90 (thick solid line) and 99 \% CL (thin solid line) for all possible
oscillation channels, from ref.~\protect\cite{atm98,atmconcha}. In
each case the best-fit point is denoted by a star and always
corresponds to maximal mixing, a feature which is well-reproduced by
the theoretical predictions of the models proposed in
ref.~\protect\cite{ptv92,pv93}. The sensitivity of the present
accelerator and reactor experiments as well as the expectations of
upcoming long-baseline experiments is also displayed.}
\label{mutausk4}
\end{figure}
The two lower panels of \fig{mutausk4} differ in the sign of the $\Delta
m^2$ which was assumed in the analysis of the matter effects in the
Earth for the $\nu_\mu \to \nu_s$ oscillations. We found that $\nu_\mu
\to \nu_\tau$ oscillations give a slightly better fit than $\nu_\mu
\to \nu_s$ oscillations. At present the atmospheric neutrino data
cannot distinguish between the \hbox{$\nu_\mu$ } to \hbox{$\nu_\tau$ } and \hbox{$\nu_\mu$ } to \hbox{$\nu_{s}$ } channels.
It is well-known that the neutral-to-charged current ratios are
important observables in neutrino oscillation phenomenology, which are
especially sensitive to the existence of singlet neutrinos, light or
heavy \cite{2227}.
The atmospheric neutrinos produce isolated neutral pions
($\pi^0$-events) mainly in neutral current interactions.
One may therefore study the ratios of $\pi^0$-events and the events
induced mainly by the charged currents, as recently advocated in
ref.~\cite{vissani}. This has the virtue of minimizing uncertainties
related to the original atmospheric neutrino fluxes.
In fact the SK~collaboration has already tried to do this by
estimating the double ratio of $\pi^0$ over e-like events in their
sample~\cite{superkatm98} and found $R = 0.93 \pm 0.07 \pm 0.19$.
This is consistent both with \hbox{$\nu_\mu$ } to \hbox{$\nu_\tau$ } or \hbox{$\nu_\mu$ } to \hbox{$\nu_{s}$ } channels, with a
slight preference for the former. The situation should improve in the
future.
We also display in \fig{mutausk4} the sensitivity of present
accelerator and reactor experiments, as well as that expected at
future long-baseline (LBL) experiments in each channel. The first
point to note is that the Chooz reactor \cite{Chooz} data already
excludes the region indicated for the $\nu_{\mu} \to \nu_e$ channel
when all experiments are combined at 90\% CL.
From the upper-left panel in \fig{mutausk4} one sees that the regions
of $\nu_\mu \to \nu_\tau$ oscillation parameters obtained from the
atmospheric neutrino data analysis cannot be fully tested by the LBL
experiments, as presently designed. One might expect that, due to the
upward shift of the $\Delta m^2$ indicated by the fit for the sterile
case (due to the effects of matter in the Earth) it would be possible
to completely cover the corresponding region of oscillation
parameters. Although this is the case for the MINOS disappearance
test, in general most of the LBL experiments cannot completely probe
the region of oscillation parameters allowed by the $\nu_\mu \to
\nu_s$ atmospheric neutrino analysis. This is so irrespective of the
sign of $\Delta m^2$ assumed. For a discussion of the various
potential tests that can be performed at the future LBL experiments in
order to unravel the presence of oscillations into sterile channels
see ref.~\cite{atmconcha}.
\subsection{LSND, Dark Matter \& Pulsars}
\vskip .1cm
{\sl LSND}
\vskip .1cm
A search for $\bar\nu_{\mu}\to \bar\nu_{e}$ oscillations has been
conducted at the Los Alamos Meson Physics Facility by using
$\bar\nu_\mu$ from $\mu^+$ decay at rest \cite{LSND}. The
$\bar\nu_e$'s are detected via the reaction $\bar\nu_e\,p \to
e^{+}\,n$, correlated with a $\gamma$ from $np \to d \gamma$
($2.2\,{\rm MeV}$). The use of tight cuts to identify $e^+$ events
with correlated $\gamma$ rays yields 22 events with $e^+$ energy
between 36 and $60\,{\rm MeV}$ and only $4.6 \pm 0.6$ background
events. A fit to the $e^+$ events between 20 and $60\,{\rm MeV}$
yields a total excess of $51.8^{+18.7}_{-16.9} \pm 8.0$ events. If
attributed to $\bar \nu_\mu \to \bar \nu_e$ oscillations, this
corresponds to an oscillation probability of ($0.31^{+0.11}_{-0.10}
\pm 0.05$)\% and leads to the oscillation parameters shown in
\fig{darlsnd}. The shaded regions are the favoured likelihood regions
given in ref.~\cite{LSND}. The curves show the 90~\% and 99~\%
likelihood allowed ranges from LSND, and compares them to limits from
BNL776, KARMEN1, Bugey, CCFR, and NOMAD.
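To see why this short-baseline signal points at the ${\rm eV}^2$ scale, one can evaluate the standard two-flavour appearance probability $P = \sin^2 2\theta \, \sin^2(1.27\,\Delta m^2 L/E)$ at an assumed point of the favoured region (the numbers below are illustrative choices, not the experiment's best fit):

```python
import math

def appearance_prob(sin2_2theta, dm2_eV2, L_km, E_GeV):
    """Two-flavour appearance probability with dm^2 in eV^2, L in km, E in GeV:
    P = sin^2(2theta) * sin^2(1.27 * dm2 * L / E)."""
    return sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# Assumed values: dm^2 ~ 1 eV^2, sin^2(2theta) ~ 5e-3, L ~ 30 m, E ~ 40 MeV
P = appearance_prob(5e-3, 1.0, 0.030, 0.040)
print(f"P ~ {100 * P:.2f} %")   # a few times 0.1%, the order of the quoted probability
```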
\begin{figure}
\centerline{\protect\hbox{\epsfig{file=lsnd1.ps,width=8cm,height=8cm}}}
\fcaption{Allowed LSND oscillation parameters versus competing
experiments~\protect\cite{louis} }
\label{darlsnd}
\vglue -.5cm
\end{figure}
A search for \hbox{$\nu_\mu$ } $\to$ \hbox{$\nu_e$ } oscillations has also been conducted by the
LSND collaboration. Using \hbox{$\nu_\mu$ } from $\pi^+$ decay in flight, the \hbox{$\nu_e$ }
appearance is detected via the charged-current reaction
$C(\hbox{$\nu_e$ },e^-)X$. Two independent analyses are consistent with the above
signature, after taking into account the events expected from the \hbox{$\nu_e$ }
contamination in the beam and the beam-off background. If interpreted
as an oscillation signal, the observed oscillation probability of $(2.6
\pm 1.0 \pm 0.5) \times 10^{-3}$ is consistent with the \bnm $\to$ \bne
oscillation evidence described above. Fig.~\ref{miniboone} compares the
LSND region with the expected sensitivity from MiniBooNE, which was
recently approved to run at Fermilab~\cite{louis}.
\begin{figure}
\centerline{\protect\hbox{\epsfig{file=lsnd2.ps,width=8cm,height=10cm}}}
\fcaption{Expected sensitivity of the proposed MiniBooNE
experiment~\protect\cite{louis} }
\label{miniboone}
\end{figure}
A possible confirmation of the LSND anomaly would be a discovery with
far-reaching implications.
\vskip .2cm
{\sl Dark Matter}
\vskip .1cm
Research on the nature of the cosmological dark matter and on the
origin of galaxies and large-scale structure in the Universe, within
the standard theoretical framework in which structure grows by
gravitational collapse of fluctuations in the expanding universe, has
undergone tremendous progress recently. Indeed the observations of
cosmic background temperature anisotropies on large scales performed
by the COBE satellite \cite{cobe}, combined with cluster-cluster
correlation data, e.g. from IRAS~\cite{iras}, cannot be reconciled
with the simplest cold dark matter (CDM) model.
Barring a non-zero cosmological constant and a high value of the Hubble
parameter ($h \raise0.3ex\hbox{$\;>$\kern-0.75em\raise-1.1ex\hbox{$\sim\;$}} 0.7$), the simplest model that has a chance to
work is Cold + Hot Dark Matter (MDM, for mixed dark matter), if the
Hubble parameter and age parameter allow for an $\Omega=1$ cosmology
\cite{cobe2}, suggested by inflation. Electron-volt mass neutrinos are
the most well-motivated HDM candidate. This mass scale is similar to
that indicated by the hints reported by the LSND experiment
\cite{LSND}.
However it is too early to be confident in the MDM scenario, and one
should for the moment keep an open mind. For example, I note that an
MeV range (unstable) tau neutrino is an interesting possibility to
consider from the point of view of dark matter. If such neutrino
decays before the matter dominance epoch, its decay products would add
energy to the radiation, thereby delaying the time at which the matter
and radiation contributions to the energy density of the universe
become equal. Such a delay would allow one to reduce the density
fluctuations on the smaller scales purely within the standard cold
dark matter scenario \cite{ma1}.
Future sky maps of the cosmic microwave background radiation (CMBR)
with high precision at the upcoming MAP and PLANCK missions should
bring more light into the nature of the dark matter and the possible
r\^ole of neutrinos \cite{Raffelt}.
\vskip .2cm
{\sl Pulsars}
\vskip .1cm
One of the most challenging problems in modern astrophysics is to find
a consistent explanation for the high velocity of pulsars.
Observations \cite{veloc} show that these velocities range from zero
up to 900 km/s with a mean value of $450 \pm 50$ km/s. An attractive
possibility is that pulsar motion arises from an asymmetric neutrino
emission during the supernova explosion. In fact, neutrinos carry more
than $99 \%$ of the new-born proto-neutron star's gravitational
binding energy so that even a $1 \%$ asymmetry in the neutrino
emission could generate the observed pulsar velocities. One possible
explanation to this puzzle may reside in the interplay between the
parity non-conservation present in weak interactions and the strong
magnetic fields which are expected during a SN explosion.
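The quoted $1\%$ figure is easy to verify at the order-of-magnitude level. Taking canonical assumed values ($3\times 10^{53}$ erg radiated in neutrinos, a $1.4\,M_\odot$ remnant), the kick velocity is simply the asymmetric momentum divided by the star's mass:

```python
E_NU = 3e53             # energy radiated in neutrinos [erg] (assumed canonical value)
C = 3e10                # speed of light [cm/s]
M_NS = 1.4 * 1.989e33   # neutron-star mass [g]

asymmetry = 0.01                     # 1% of the neutrino momentum is asymmetric
v = asymmetry * E_NU / C / M_NS      # kick velocity [cm/s]
print(f"kick velocity ~ {v / 1e5:.0f} km/s")   # -> a few hundred km/s
```

This lands on the observed mean pulsar velocity scale of a few hundred km/s.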
Possible realizations of this idea in the framework of the Standard
Model (SM) have been proposed \cite{Chugai,others}. However, it has
recently been noted~\cite{vilenkin98} that no asymmetry in neutrino
emission can be generated in thermal equilibrium, even in the presence
of parity violation. This suggests that an alternative mechanism is at
work.
Several neutrino conversion mechanisms in matter have been invoked as
a possible engine for powering pulsar motion. They all share in
common the feature that neutrino propagation properties are affected
by the {\sl polarization} \cite{NSSV} of the SN medium which is
provided by the strong magnetic fields of $\sim 10^{15}$ Gauss present during
a SN explosion. This would give rise to some angular dependence of the
matter-induced neutrino potentials leading to a deformation of the
``neutrino-sphere'' for, say, tau neutrinos and hence to an anisotropic
neutrino emission. As a consequence, in the presence of non-vanishing
$\nu_\tau$ mass and mixing the resonance sphere for the
$\nu_e-\nu_\tau$ conversions is distorted. If the resonance surface
lies between the $\nu_\tau$ and $\nu_e$ neutrino spheres, such a
distortion would induce a temperature anisotropy in the flux of the
escaping tau-neutrinos produced by the conversions, hence a recoil
kick of the proto-neutron star.
This mechanism was realized in ref.~\cite{KusSeg96} invoking MSW
conversions \cite{MSW} with \hbox{$m_{\nu_\tau}$ } $\raise0.3ex\hbox{$\;>$\kern-0.75em\raise-1.1ex\hbox{$\sim\;$}}$ 100 eV or so, assuming a
negligible $\nu_e$ mass. This is necessary in order for the resonance
surface to be located between the two neutrino-spheres. It should be
noted, however, that such a requirement is at odds with cosmological
bounds on neutrino masses unless the $\tau$-neutrino is unstable.
On the other hand in ref.~\cite{ALS} a realization was proposed in the
resonant spin-flavour precession scheme (RSFP) \cite{RSFP}. Here the
magnetic field not only affects the medium properties, but also
induces the spin-flavour precession through its coupling to the
neutrino transition magnetic moment \cite{SFP}.
Perhaps the simplest suggestion was proposed in ref.~\cite{pulsars}
where the required pulsar velocities would arise from anisotropic
neutrino emission induced by resonant conversions of massless
neutrinos (hence no magnetic moment) \cite{massless0}. This mechanism
arises in the model described in \eq{MATmu} and has been shown to be
of potential relevance for SN physics \cite{massless}.
Very recently, however, Raffelt and Janka~\cite{pulsars2} have claimed
that the asymmetric neutrino emission effect was vastly
overestimated, because the variation of the temperature over the
deformed neutrino-sphere is not an adequate measure for the anisotropy
of the neutrino emission. This would invalidate the oscillation
mechanisms, leaving the pulsar velocity problem without any known
viable solution. The only potential way out of their criticism would
invoke conversions into sterile neutrinos, since the conversions would
take place deeper in the star. However, it is too early to tell
whether or not it works \cite{nuno98}.
\section{Reconciling the neutrino puzzles}
\vskip .1cm
It is easy to accommodate the solar and atmospheric neutrino data by
themselves in a general gauge theory of neutrino mass, since it lacks
predictivity. One could even have a situation where three-neutrino
mixing could be {\sl bi-maximal}, i.e. maximal in both the atmospheric
as well as solar neutrino transitions, if the solution chosen by
nature is {\sl just-so} \cite{glashow98}.
The challenge of reconciling these two requirements arises mainly if one
wishes to do so in a predictive quark-lepton {\sl unification}
scheme that relates lepton and quark mixing angles. This is especially so
since the latter are small, in contrast to the lepton mixing indicated
by the SK~atmospheric data.
The story gets more complicated if one wishes to account also for the
LSND anomaly and for the hot dark matter. There has been a lot of
effort to solve the bigger puzzle posed by the inclusion of any of
these additional hints~\cite{ptv92,pv93,cm93}. As we have seen the
atmospheric neutrino data requires $\Delta m^2_{atm}$ which is much
larger than the scale $\Delta {m^2}_\odot$ which is indicated by the
solar neutrino data, either in the context of the MSW mechanism or the
just-so solution. These two experiments fix two different scales for
neutrino mass differences, so that with just the three known neutrinos
and without discarding any experimental data, there is no room to
include the LSND scale indicated in \fig{darlsnd}, nor the HDM scale
which is roughly similar
\footnote{I will ignore the pulsar velocity problem since there is
no clear working-model at the moment.}.
Reconciling the neutrino puzzles may be attempted within the {\sl
unification approach} or the {\sl weak-scale approach} to the theory
of neutrino mass. I will concentrate mostly on the latter, because it
is an interesting and simpler alternative to the former.
\subsection{Almost Degenerate Neutrinos}
\vskip .1cm
The only possibility to fit solar, atmospheric and HDM scales in a
world with just the three known neutrinos is if all of them have
nearly the same mass \cite{cm93}, of $\sim 1.5$ eV, in
order to provide the right amount of HDM \cite{cobe2} (all three
active neutrinos contribute to HDM). There is no room in this case to
accommodate the LSND anomaly. This can be arranged in the unification
approach discussed in sec. 2 using the $M_L$ term present in general
in seesaw models. With this in mind one can construct, e.g. unified
$SO(10)$ seesaw models where all neutrinos lie at the above HDM mass scale
($\sim$ 1.5 eV), due to a suitable horizontal symmetry, while the
parameters $\Delta {m^2}_\odot$ \& $\Delta {m^2}_{atm}$ appear as
symmetry breaking effects. An interesting fact is that the ratio
$\Delta {m^2}_\odot \:/\:\Delta {m^2}_{atm}$ appears as
${m_c}^2/{m_t}^2$~\cite{DEG}.
\subsection{Four-Neutrino Models}
\vskip .1cm
The simplest way to open the possibility of incorporating the LSND
scale is to invoke a sterile neutrino, i.e. one whose interaction with
standard model particles (such as the $W$ and the $Z$) is much weaker
than the SM weak interaction. It must come in as an $SU(2) \otimes U(1) $ singlet
ensuring that it does not affect the invisible Z decay width,
well-measured at LEP. The sterile neutrino \hbox{$\nu_{s}$ } must also be light
enough in order to participate in the oscillations involving the three
active neutrinos. The theoretical challenges we have are:
\begin{itemize}
\item
to understand why the sterile neutrino is so light (it is clear that
if a sterile neutrino is introduced into the SM, the $SU(2) \otimes U(1) $ gauge
symmetry allows it to have a bare mass, which could be large)
\item
to account for the maximal neutrino mixing indicated by the
atmospheric data
\item
to account for the three scales $\Delta m^2_{atm}$, $\Delta
{m^2}_\odot$ and $\Delta m^2_{LSND/HDM}$ from first principles
\end{itemize}
With this in mind we have formulated the simplest and first
schemes~\cite{ptv92,pv93} which provide an answer to the above points.
I will denote them $(e\tau)(\mu~s)$~\cite{ptv92} and
$(es)(\mu\tau)$~\cite{pv93}, respectively. One should realize that a given
phenomenological scheme (mainly determined by the structure of the
leptonic charged current) may be realized in more than one theoretical
model. For example, an alternative to the model in ~\cite{pv93} was
suggested in ref.~\cite{cm93}. There have been many attempts to
reproduce the above phenomenological scenarios from different
theoretical assumptions, as has been discussed here
\cite{ptvlate,smir,4nutalks}.
These two basic schemes are characterized by a very symmetric mass
spectrum in which there are two ultra-light neutrinos at the solar
neutrino scale and two maximally mixed almost degenerate eV-mass
neutrinos (LSND/HDM scale), split by the atmospheric neutrino
scale~\cite{ptv92,pv93}. The HDM problem requires
the heaviest neutrinos at about 2 eV mass \cite{pvhdm}.
These scales are generated radiatively due to the additional Higgs
bosons which are postulated, as follows: $\Delta m^2_{LSND/HDM}$
arises at one-loop, while $\Delta m^2_{atm}$ and $\Delta {m^2}_\odot$
are two-loop effects. Since this proposal pre-dated the LSND results,
it naturally focussed on accounting for the HDM problem, rather than
LSND. However, it has been realized that the LSND oscillation effects
may be accounted for in its framework. These are the simplest theories
based only on weak-scale physics, in which one {\sl explains} the
lightness of the sterile neutrino, the large lepton mixing required by
the atmospheric neutrino data, as well as the generation of the mass
splittings responsible for solar and atmospheric neutrino
conversions. These follow naturally from the underlying
lepton-number-like symmetry and its breaking~\cite{ptv92,pv93}.
These models are minimal in the sense that they add a single $SU(2) \otimes U(1) $
singlet lepton to the SM. Before breaking the symmetry the heaviest
neutrinos are exactly degenerate, while the other two which will be
responsible for the explanation of the solar neutrino problem are
still massless \cite{OLDsterilemodel}. After the global U(1) lepton
symmetry breaks, the massive pair splits and the light ones acquire mass.
The models differ according to whether the \hbox{$\nu_{s}$ } lies at the dark matter
scale or at the solar neutrino scale. In the $(e\tau)(\mu~s)$ scheme
the \hbox{$\nu_{s}$ } lies at the LSND/HDM scale, as illustrated in \fig{ptv},
\begin{figure}[t]
\centerline{\protect\hbox{\psfig{file=ptv.ps,width=6cm,height=4cm}}}
\caption{$(e\tau)(\mu~s)$ scheme: \hbox{$\nu_e$ }- \hbox{$\nu_\tau$ } conversions explain the
solar neutrino data and \hbox{$\nu_\mu$ }- \hbox{$\nu_{s}$ } oscillations account for the
atmospheric deficit, ref.~\protect\cite{ptv92}.}
\label{ptv}
\vglue -.2cm
\end{figure}
while in the alternative $(es)(\mu\tau)$ model, \hbox{$\nu_{s}$ } is at the solar
\hbox{neutrino } scale as shown in \fig{pv} \cite{pv93} \cite{ptvlate}.
\begin{figure}
\centerline{\protect\hbox{\psfig{file=pv.ps,width=6cm,height=4cm}}}
\caption{$(es)(\mu\tau)$ scheme: \hbox{$\nu_e$ }- \hbox{$\nu_{s}$ } conversions explain the
solar neutrino data and \hbox{$\nu_\mu$ }- \hbox{$\nu_\tau$ } oscillations account for the
atmospheric deficit, ref.~\protect\cite{pv93}.}
\label{pv}
\vglue -.2cm
\end{figure}
In the $(e\tau)(\mu~s)$ case the atmospheric \hbox{neutrino } puzzle is explained
by \hbox{$\nu_\mu$ } to \hbox{$\nu_{s}$ } oscillations, while in $(es)(\mu\tau)$ it is explained
by \hbox{$\nu_\mu$ } to \hbox{$\nu_\tau$ } oscillations. Correspondingly, the deficit of solar
\hbox{neutrinos } is explained in the first case by \hbox{$\nu_e$ } to \hbox{$\nu_\tau$ } conversions, while
in the second the relevant channel is \hbox{$\nu_e$ } to $\nu_s$. The two models
are therefore clearly inequivalent. In both cases it is possible to
fit all present observations together.
I now turn to the consistency of the models with BBN. The presence of
additional weakly interacting light particles, such as our light
sterile neutrino \hbox{$\nu_{s}$ }, is constrained by BBN since the \hbox{$\nu_{s}$ } would enter
into equilibrium with the active neutrinos in the early Universe (and
therefore would contribute to $N_\nu$) via neutrino oscillations
\cite{bbnsterile}, unless
$\Delta m^2 \sin^4 2\theta \raise0.3ex\hbox{$\;<$\kern-0.75em\raise-1.1ex\hbox{$\sim\;$}} 3\times 10^{-6}~{\rm eV}^2$,
where $\Delta m^2$ denotes the mass-square difference of the active
and sterile species and $\theta$ is the vacuum mixing angle. However,
systematic uncertainties in the derivation of BBN bounds still
caution us not to take them too literally. For example, it has been
argued in~\cite{sarkar} that present observations of primordial Helium
and deuterium abundances can allow up to $N_\nu = 4.5$ neutrino
species if the baryon to photon ratio is small. Adopting this as a
limit, clearly both models described above are consistent. Should the
BBN constraints get tighter, e.g. $N_\nu^{max} < 3.5$, they could rule
out the $(e\tau)(\mu~s)$ model, leaving only the competing
scheme as a viable alternative. For recent work on this see
ref.~\cite{volkasterilebbn}.
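Taken at face value, the condition quoted above can be checked against representative parameter choices. The sketch below uses assumed numbers purely to illustrate the two regimes (a small-angle solar-scale $\nu_s$, versus near-maximal atmospheric-scale $\nu_\mu \to \nu_s$ mixing):

```python
def bbn_allowed(dm2_eV2, sin2_2theta, bound=3e-6):
    """Check the quoted BBN condition dm^2 * sin^4(2theta) < ~3e-6 eV^2
    for active-sterile mixing (taken here at face value)."""
    return dm2_eV2 * sin2_2theta ** 2 < bound

print(bbn_allowed(5e-6, 7e-3))   # -> True: a solar-scale nu_s easily passes
print(bbn_allowed(3e-3, 1.0))    # -> False: maximal atmospheric nu_mu -> nu_s
                                 #    would equilibrate nu_s before BBN
```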
The two models would be distinguishable both from the analysis of
future solar as well as atmospheric neutrino data. For example they
may be tested in the SNO experiment~\cite{SNO} once they measure the
solar neutrino flux ($\Phi^{NC}_{\nu}$) in their neutral current data
and compare it with the corresponding charged current value
($\Phi^{CC}_{\nu}$). If the solar neutrinos convert to active
neutrinos, as in the $(e\tau)(\mu~s)$ model, then one expects
$\Phi^{CC}_{\nu}/\Phi^{NC}_{\nu} \simeq 0.5$, whereas in the
$(es)(\mu\tau)$ scheme (\hbox{$\nu_e$ } conversion to \hbox{$\nu_{s}$ }), the above ratio would
be nearly $ \simeq 1$. Looking at pion production via the neutral
current reaction $\nu_{\tau} + N \to \nu_{\tau} +\pi^0 +N$ in
atmospheric data might also help in distinguishing between these two
possibilities~\cite{vissani}, since this reaction is absent in the
case of sterile neutrinos, but would exist in the $(es)(\mu\tau)$
scheme.
If light sterile neutrinos indeed exist, as suggested by the current
solar and atmospheric neutrino data, together with the LSND
experiment, one can show that in some four-neutrino scenarios,
neutrinos would contribute to a cosmic hot dark matter component and
to an increased radiation content at the epoch of matter-radiation
equality. These effects leave their imprint in sky maps of the cosmic
microwave background radiation (CMBR) and may thus be detectable with
the precision measurements of the upcoming MAP and PLANCK missions
as noted recently in ref.~\cite{Raffelt}.
\subsection{MeV Tau Neutrino}
\vskip .1cm
In ref.~\cite{JV95} a model was presented where an unstable MeV
Majorana tau neutrino naturally reconciles the cosmological
observations of large and small-scale density fluctuations with the
cold dark matter picture (CDM). The model assumes the spontaneous
violation of a global lepton number symmetry at the weak scale. The
breaking of this symmetry generates the cosmologically required decay
of the \hbox{$\nu_\tau$ } with lifetime $\tau_{\nu_\tau} \sim 10^2 - 10^4$ sec, as
well as the masses and oscillations of the three light \hbox{neutrinos } \hbox{$\nu_e$ }, \hbox{$\nu_\mu$ }
and $\nu_s$. One can also verify that the BBN constraints can be
satisfied. The cosmological attractiveness of this scheme should
encourage one to check whether one can indeed account for the present
solar and atmospheric data through oscillations among the three light
neutrinos, after taking into account the recent SK data.
\section{In conclusion}
\vskip .1cm
A major piece of news has been the re-confirmation of an angle-dependent
atmospheric neutrino deficit by the SK collaboration, providing
strong evidence for neutrino masses, similar to that offered by the
solar neutrino data. Unfortunately future LBL experiments do not all
probe the full region indicated by the atmospheric data.
If the LSND result stands the test of time, this would be a puzzling
indication for the existence of a light sterile neutrino. {\sl Who
ordered it?}
The two most attractive schemes to reconcile these observations invoke
either \hbox{$\nu_e$ }- \hbox{$\nu_\tau$ } conversions to explain the solar data, with \hbox{$\nu_\mu$ }- \hbox{$\nu_{s}$ }
oscillations accounting for the atmospheric deficit, or the other way
around. These two basic schemes have distinct implications at future
solar \& atmospheric neutrino experiments. SNO and SuperKamiokande
have the potential to distinguish them due to their neutral current
sensitivity.
{\sl How about heavy neutrinos?} Although cosmological bounds are a
fundamental tool to restrict neutrino masses, in many theories heavy
neutrinos will either decay or annihilate very fast, thereby loosening
or evading the cosmological bounds. From this point of view, {\sl
neutrinos can have any mass presently allowed by laboratory
experiments}, and it is therefore important to search for
manifestations of heavy neutrinos at the laboratory.
Last but not least, though most of the recent excitement comes from
underground experiments, one should note that models of neutrino mass
may lead to a plethora of new signatures which may be accessible also
at accelerators, thus illustrating the complementarity between the two
approaches in unravelling the properties of neutrinos and probing for
signals beyond the SM.
\vskip .1cm
I am grateful to Bernd Kniehl and Georg Raffelt for the kind
hospitality at the Ringberg castle. My thanks to John Bahcall, Plamen
Krastev and Bill Louis, for making their postscript figures available
to me, and to Thomas Janka and Eligio Lisi for correspondence. I thank
all my collaborators, especially Hiroshi Nunokawa for going over the
first draft of this manuscript critically. This work was supported by
DGICYT grant PB95-1077 and by the EEC under the TMR contract
ERBFMRX-CT96-0090.
\small
\section{Introduction}
Metal hydrides are considered as a potential hydrogen storage material because they have
a high storage capacity by weight. Magnesium hydride in particular reaches the theoretical maximum of 7.66 wt\%, but its main drawbacks are the high sorption temperature (573-673K) and the sluggish sorption kinetics common to metal hydrides (of light metals in the first instance). We restrict the following discussion to this material only, bearing in mind that the modelling outlined below is nonspecific and applicable to any metal hydride.
In recent years, significant progress has been made using nanocrystalline Mg hydride
produced by high energy milling and adding suitable catalysts in order to improve the
sorption kinetics. Without catalysts the desorption temperature of high energy milled
Mg hydride is still higher than 573K.
As reported in recent research \cite{meng}, a wet ball milling method, different from conventional high energy ball milling, was used to produce nanocrystalline Mg hydride. As we know, during high energy milling, particle size is decreased
significantly, which influences the sorption behavior.
Additionally, it has been found that
the wet ball milled powder demonstrates better desorption kinetics than dry ball milled powder with the same particle size. The reason is probably the higher specific surface area of the former. Hence a model operating with the specific surface area instead of particle size would be more suitable for evaluating the sorption capability of powders.
\section{Model reformulation in terms of morphofactor and specific surface area}
\subsection{ The morphological factor of particles }
Transport of hydrogen into and out of a metal particle with characteristic size $d$
occurs through its surface. The rate of total hydrogen permeation is therefore proportional
to the surface area $S$, i.e. $\sim d^2$, whereas the hydrogen content of the
particle is proportional to its volume $V$, $\sim d^3$. From these considerations one expects the characteristic
sorption time $\tau$ (especially for short times) to depend linearly on the characteristic particle
size $d$. The dimensionless proportionality factor $1/\xi:=\sqrt{S}/\sqrt[3]{V}$ depends on the geometry/morphology of the particle.
For strongly convex particles it lies typically between 2.684 (tetrahedron) and 2.199 (ball). For
concave poly-particles with cavities (e.g. wet ball milled) or plain cakes (like original $MgH_2$ particles),
this factor can be expected to exceed twice these values.
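The two quoted shape-factor values can be checked directly. The sketch below uses the convention $\xi=\sqrt[3]{V}/\sqrt{S}$ implied by the later relation $s=v^{2/3}/\xi^2$, so the quoted numbers 2.199 and 2.684 are values of $1/\xi=\sqrt{S}/\sqrt[3]{V}$:

```python
import math

def inv_xi(volume, surface):
    """1/xi = sqrt(S)/V^(1/3), with xi = V^(1/3)/sqrt(S) as in s = v^(2/3)/xi^2."""
    return math.sqrt(surface) / volume ** (1.0 / 3.0)

# Ball of radius 1: V = 4*pi/3, S = 4*pi
ball = inv_xi(4.0 * math.pi / 3.0, 4.0 * math.pi)             # ~2.199

# Regular tetrahedron of edge 1: V = 1/(6*sqrt(2)), S = sqrt(3)
tetra = inv_xi(1.0 / (6.0 * math.sqrt(2.0)), math.sqrt(3.0))  # ~2.684
```

The factor is scale-invariant, so any radius or edge length gives the same result; more surface per volume (concave, cavity-rich particles) pushes $1/\xi$ up.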
It means that at the same particle size, the specific surface of the compound can differ by up to several times, depending
on the surface morphology, which can influence the characteristic sorption time.
The dependence outlined above should hold in the leading order if we compare powders with different particle sizes.
The characteristic sorption times of particles 1 and 2 are expected to relate to each other like their
characteristic sizes,
\be \tau_1/\tau_2 \sim d_1/d_2. \label{dimension}\ee
This should be the main effect providing an advantage
of the wet ball milled compound over the dry ball milled and other conventional ones.
This hypothesis, proposed in our recent publication \cite{1stpaper} devoted to simple sorption modelling,
could be subjected to experimental validation if we had samples consisting of equally sized particles with the same
morphology, e.g. spherical. In fact an attempt to verify the relation (\ref{dimension}) for different kinds of materials,
e.g. as-received and dry-ball milled, runs into deviations of up to several times. The following obvious
reasons can be considered:
1. all particles in a sample are of different sizes, described by a certain size distribution;
2. samples of different materials consist of particles with different morphology; as noted above, the ratio of morphofactors $\xi_1/\xi_2$ can differ by a factor of several.
The exact measurement of the size distribution is laborious and the evaluation of the results is not unique, while the measurement of the specific surface via BET is readily available and exact enough. Instead of the size distribution together with the morphofactor $\xi$, we can take this specific surface area as the governing parameter.
The idea of the improvement of the model proposed below is
to take the specific surface into account instead of the characteristic particle size as it was considered before \cite{1stpaper}.
Additionally, the shrinkage of the outer surface caused by the volume shrinkage is considered, and the influence of this effect
is accounted for and estimated in the model. The analysis of the results is simplified by the fact that the model still allows an analytical approach.
\subsection{ Improvement and generalization of the linear model }
\begin{minipage}[b]{8.5cm}
The spherical symmetry of particles considered in the previous formulation \cite{1stpaper} of the model
was assumed there, following a number of similar models \cite{castro, gabis},
only for the sake of transparency.
In fact, the confluent model \cite{gabis} under consideration does not demand any symmetry: the desorption
rate depends only on the total surface of the particle and 'does not see' the surface of the inner $\beta$-core,
since the $\al$-concentration between these two surfaces remains constant and homogeneous due
to the fast $\al$-diffusion, as assumed.
So, the results hold for a particle of arbitrary form, without further requirements.
\end{minipage}
\vspace*{1cm} \includegraphics[scale=0.22]{gray.PNG}
We briefly recall the formulation of the model \cite{1stpaper}:
The inner core consists of the stoichiometric $MgH_2$ in the $\beta$-phase ($\beta$-core), having
a total volume $v_{\tiny\beta}$ shrinking during the desorption.
In fact, the geometry and deformation of the $\beta$-core during the desorption, as well as the number of such $\beta$-cores in a single particle, are unimportant
for the results of the model. The remaining space of the particle is the $\al$-phase, consisting of dissolved $H^+$ ions in
the metallic magnesium with the constant (for the given temperature) molar concentration $X, \ [mol/m^3]$.
The molar concentration $Y$ of hydrogen atoms in the $\beta$-phase is always constant, $Y=110119 \ [mol/m^3]$.
Then we have, for the balance of desorbed hydrogen atoms $\dot{\nu},\ [mol/s]$
\be
\dot{\nu}= -(Y-\eta X) \fr{d v_\beta}{d t},
\label{nu_rate}
\ee
with initial and final conditions:
\be v_\beta (t=0)=v_0;\ \ \ v_\beta (t=\tau)=\eta v_0, \ee
$v_0$- initial volume of the single particle, $\tau$- the life time of complete decay of the $\beta$-phase.
The volume shrinkage coefficient $\eta$ is taken into account, because of different densities of magnesium
hydride and metallic magnesium \cite{1stpaper}.
Then, for the current volume of the particle we have
\be
v(t)= (1-\eta)v_\beta(t)+\eta v_0
\ee
The surface $s$ of the particle is in any time $t$ related to the volume by
\be
s(t)=\fr{1}{\xi^2} v(t)^{2/3}= \fr{1}{\xi^2} \left[ (1-\eta) v_\beta(t)+\eta v_0 \right]^{2/3}.
\ee
Finally, the desorption kinetics is controlled by two surface parameters of the reaction $2 H \rightleftharpoons H_2$,
the desorption constant $b$ and the re-adsorption $k$, whereat the desorption rate is proportional to the
particle surface $s$:
\be
\dot{\nu}_i=s(b X^2- k p),
\ee
$p$-total outer pressure of molecular hydrogen.
With $p_i$ -the partial pressure produced by desorption from i-th particle in the volume $V$ we have
\be
\nu_i(t)=(v_0-v_\beta(t))[ Y-\eta X ]=\fr{2}{R} \{ V/T \} p_i(t)
\ee
and a differentiation of this relation combined with \ref{nu_rate} provides
\be
\fr{2}{R} \{ V/T \}\dot{p}_i=-\dot{v}_{\beta,i} [ Y-\eta X ] = \fr{b X^2 -k p}{\xi^2}
\left[ (1-\eta) v_\beta +\eta v_0 \right]^{2/3},
\ee
here
\be
p=\sum\limits_i p_i
\ee
Now, introducing the notations
\be A(p)= \fr{R}{\xi^2}\cdot \fr{b X^2-k p}{ 2\{ V/T \}};\ \ \ B=\fr{2}{R} \fr{(\eta-1)\{ V/T \} }{Y-\eta X}\ee
we express the evolution of pressure $p$ as measured:
\be
\dot{p}_i= A [B p_i + v_{0i} ]^{2/3}
\ee
The equation cannot simply be summed over $p_i$ to obtain the total pressure
$p$, because of the power 2/3.
We can verify this formula first for the special case of equal particles. To this end we assume the powder sample
to contain $N$ particles of equal size and equal form (morphology). It means then
\be p=N p_i,\ \ \ \bar{s}=N s_i,\ \ \ \bar{v}=N v_{0 i}\ee
for the total pressure, total surface of desorption and the total initial volume of powder in the sample,
whereat $\bar{v}$ and $\bar{s}$ are related by
\be
(\bar{v}/N)^{1/3} = \xi ( \bar{s}/N)^{1/2}.
\ee
The factor $\xi$ can be generally established using indirect measurements.
In this special case, instead of $\xi$, we prefer other, directly measured parameters to perform the calculation and
compare with experimental results.
The knowledge of the sample mass $m$, mass density $\varrho$ and the specific surface $\si$ (BET) allows the
elimination of the morphological factor $\xi$ by
\be
\fr{m}{\varrho N} = \xi^3 \left( \fr{\si m}{N} \right)^{3/2}
\ee
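As a consistency sketch of this elimination of $\xi$: for monodisperse spheres $\xi$ is known in closed form, the specific surface is $\si=6/(\varrho d)$ for diameter $d$, and inverting $m/(\varrho N)=\xi^3(\si m/N)^{3/2}$ gives $N=(\varrho\,\xi^3)^2\si^3 m$. The 1 micron diameter and 100 mg sample mass below are assumed, illustrative inputs.

```python
import math

# Shape factor xi = V^(1/3)/sqrt(S) for a ball (size-independent):
XI_BALL = (math.pi / 6.0) ** (1.0 / 3.0) / math.sqrt(math.pi)   # ~0.455

def sigma_spheres(d, rho):
    """Specific surface [m^2/kg] of monodisperse spheres of diameter d."""
    return 6.0 / (rho * d)

def particle_number(m, rho, sigma, xi=XI_BALL):
    """Invert m/(rho N) = xi^3 (sigma m / N)^(3/2) for N."""
    return (rho * xi ** 3) ** 2 * sigma ** 3 * m

rho, d, m = 1450.0, 1e-6, 1e-4          # assumed: kg/m^3, 1 micron, 100 mg
sigma = sigma_spheres(d, rho)           # ~4.1e3 m^2/kg
N = particle_number(m, rho, sigma)      # ~1.3e11 particles
```

For spheres the inversion reproduces the direct count $m/(\varrho \cdot \frac{\pi}{6}d^3)$ exactly, which is a useful sanity check before applying the relation to irregular powders.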
The resulting evolution of the total pressure is described by
\be
\dot{p}=\fr{\si m R}{ 2\{V/T\} } (b X^2- k p)
\left[ \fr{2\varrho }{m R} \cdot \fr{ (\eta-1)\{ V/T\} }{Y-\eta X} p
+ 1 \right]^{2/3}
\label{kinetic}
\ee
where the two terms in brackets are of comparable magnitude.
The desorption measurements have been carried out for several samples with parameters of the following orders of magnitude:\\
$1-\eta=0.23, \ \ \varrho=1450\ kg/m^3,\ \ \{V/T\}\sim 10^{-7} m^3/K, \ \ m\sim 100 mg,\ \
\ Y-\eta X\sim 10^5\ mol\ H/m^3$
This makes the first term in brackets about 0.08 compared to unity. Therefore, at the beginning of desorption
(for small $p$), the desorption kinetics is quite well described by the simplified equation:
\be
\dot{p}=\fr{\si m R}{2 \{ V/T \}} (b X^2- k p)\equiv \vep_{\si,m} (b X^2 - k p)
\label{kinetic_simpl}
\ee
In particular, if all particles are initially of spherical form, $ \si =3/(L\varrho) $,
and we obtain the kinetic formula of \cite{1stpaper}.
However, if the pressure e.g. doubles (trebles), the first term in brackets of (\ref{kinetic})
becomes 0.16 (0.24) respectively, and in principle can no longer be neglected.
Finally, the description in terms of the specific surface can be subjected to verification, under the
assumption that the kinetic desorption and re-adsorption constants $b$ and $k$ as well as the critical
$\al$-concentration $X$ are inherent properties of the material, independent of geometry/morphology,
and therefore remain the same for all kinds of compound (original, dry- and wet-ball-milled).
We take the solution of (\ref{kinetic_simpl}) together with its small-$t$ linear approximation,
\be p(t)=\fr{b X^2}{k} \left( 1- e^{-\vep_{\si,m}k t } \right)\sim {b X^2} \vep_{\si,m} t \ee
The proportionality expected to hold for two different kinds 1 and 2 of compound is then (${\cal P}$ denoting the sample purity):
\be
\fr{p_1}{p_2} = \fr{ m_1 {\cal P}_1 t_1 }{ m_2 {\cal P}_2 t_2 }\cdot \fr{\si_1}{\si_2}
\ee
\begin{minipage}[b]{5cm}
As an example, consider two desorption curves, for as-received $MgH_2$ ($m=140$\ mg, purity ${\cal P}=0.85$)
and dry ball milled material ($m=149$\ mg, ${\cal P}=0.91$). The approximately linear increase of pressure for both samples
between 50 and 200\ kPa ($p_1=p_2$) lasts 35 and 73 sec respectively.
\end{minipage}
\vspace*{0.5cm} \includegraphics[scale=0.28]{twocurves.PNG}
This gives:
\be
\fr{ m_2 {\cal P}_2 t_2 }{ m_1 {\cal P}_1 t_1 }=2.376
\ee
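The quoted value 2.376 follows directly from the numbers above; since $p_1=p_2$ over the linear stretch, the linearized law implies that this combination equals the specific-surface ratio $\si_1/\si_2$. A quick check:

```python
# As-received MgH2 (sample 1) vs dry ball milled (sample 2):
m1, P1, t1 = 140.0, 0.85, 35.0   # mass [mg], purity, time [s]
m2, P2, t2 = 149.0, 0.91, 73.0

ratio = (m2 * P2 * t2) / (m1 * P1 * t1)   # ~2.376
```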
\subsection{ Analytical Solution }
Introducing the notations:
\bea
\pb &=& \fr{b X^2}{k}\ \ \ \mbox{ for the 'threshold' pressure }\nn\\
\Ybar &=& \fr{Y-\eta X}{1-\eta}\ \ \ \mbox{ for the 'effective desorbable' molar concentration of hydrogen in compound }\nn\\
\ep &=& \fr{m R}{2 \{ V/T \}}\ \ \ \mbox{ for the experimental equipment factor, like $\vep$ \cite{1stpaper} }
\eea
in the (\ref{kinetic}), we rewrite it in the form
\be
\fr{d p}{(\pb-p ) \left[ 1 - \fr{\varrho}{\ep\Ybar}p \right]^{2/3} } =\si \ep k\ d t
\ee
with the further notations
\be
\Pi:=\Ybar \fr{\ep}{\varrho};\ \
\Pb:=(\Pi-\pb)^{1/3};\ \ \ P_0:=(\Pi-p_0)^{1/3},
\ee
the solution obeying the initial condition $p(t=0)=p_0$ reads \cite{gradshteyn}
\bea
-\si \ep k \fr{\Pb^2}{\Pi^{2/3}} t &=&
\fr{3}{2} \ln \fr{(\Pi-p)^{1/3}-\Pb }{P_0-\Pb} \left[\fr{\pb-p_0}{\pb-p}\right]^{1/3}-\\
&-& \sqrt{3} \arctan \sqrt{3} \fr{ (\Pi-p)^{1/3} -P_0 }{2\left( \fr{(\Pi-p)^{1/3} P_0}{\Pb} +\Pb
\right)+(\Pi-p)^{1/3}+P_0 }\nn
\label{solution}
\eea
that is now suitable for graphical evaluation.
In Fig.~5 below,
the desorption kinetics described by the present pressure-time law (\ref{solution}),
depicted by the red line, is compared with the simplified law
\be
t(p)=\fr{1}{\kappa}\ln\fr{\pb-p_0}{\pb-p},
\label{simple}
\ee
obtained in \cite{1stpaper}, where the effective shrinking of the specific surface due to desorption is not
taken into account (green line). As expected, this feature leads to the slowdown of desorption at higher
pressures. This effect is the more appreciable, the smaller the sample mass in the volumetric setup. \\
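The slowdown can also be checked numerically by integrating the separated form of the kinetics, $dt \propto dp/[(\pb-p)(1-p/\Pi)^{2/3}]$, against the simplified log law. The sketch below uses dimensionless, assumed values $\si\ep k=1$, $\pb=1$, $\Pi=1.5$, $p_0=0$, chosen only for illustration.

```python
import math

pbar, Pi, p0, p_end = 1.0, 1.5, 0.0, 0.8   # assumed dimensionless values

def t_full(p_end, n=100000):
    """Midpoint-rule quadrature of dp / [(pbar-p) (1-p/Pi)^(2/3)], kappa = 1."""
    h = (p_end - p0) / n
    return sum(h / ((pbar - (p0 + (i + 0.5) * h))
                    * (1.0 - (p0 + (i + 0.5) * h) / Pi) ** (2.0 / 3.0))
               for i in range(n))

t_simple = math.log((pbar - p0) / (pbar - p_end))  # log law, no shrinkage
t_shrink = t_full(p_end)                           # larger: desorption slows down
```

Because the surface-shrinkage factor $(1-p/\Pi)^{2/3}$ is below unity, the integrand is everywhere larger and the full kinetics always takes longer to reach a given pressure than the log law predicts, in line with the red/green comparison in the figures.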
\vspace*{0.5cm}\hspace*{-1cm} \includegraphics[scale=0.46]{surface15mg.PNG}
\vspace*{0.5cm} \includegraphics[scale=0.45]{surface15mg_pab.PNG}\\
{\bf \small
Fig.~5: The deviation of the kinetics (\ref{solution}) and (\ref{simple}) from each other towards
the end of desorption (left); in fact these processes are only valid up to the pressure $p_{\al\ra\beta}$;
for 15 mg $MgH_2$ of 0.8 purity (case II of \cite{1stpaper} -- complete desorption).
}\\
\vspace*{0.1cm} \includegraphics[scale=0.58]{surface200mg.PNG} \\
{\bf \small Fig.~6: The two kinetic laws also differ for case I -- reaching the threshold pressure $\pb$ (200 mg).
}
\section{Introduction}
The unknown top quark mass $(m_t)$ has long been one of the biggest
disadvantages in studying the Standard Model (SM) and its extensions.
With the very recent announcement of the CDF collaboration at the Tevatron on their
evidence for top quark production\cite{CDF-top} with $m_t=174\pm
10^{+13}_{-12}\,{\rm GeV}$, there are intriguing possibilities that one can further constrain
possible new physics beyond the SM from the precision LEP data.
Precision measurements at LEP have been remarkably successful in confirming
the validity of the SM\cite{1inUV}. Indeed, in order to have agreements between
theory and experiments, one has to go beyond the tree-level calculations and
include known electroweak(EW) radiative corrections. However, from the
theoretical point of view, there is a consensus that the SM can only be a low
energy limit of a more complete theory. It is thus of the utmost importance to
try and push to the limit in finding possible deviations from the SM. In fact,
there are systematic programs for such precision tests. Possible deviations
from the SM can all be summarized into a few parameters which then serve to
measure the effects of new physics beyond the SM. A lot of efforts have gone
into this type of investigation trying to develop a scheme to minimize the
disadvantage of having unknown $m_t$ but to optimize sensitivity to new physics.
To date significant constraints have been placed on a number of models, such as
the two Higgs doublet model,
the technicolor model\cite{RCTC}, and some extended gauge
models\cite{Altetal}.
In this work we wish to apply the analysis to another extension of the SM, the
$Sp(6)_L \times U(1)_Y$ family model. Amongst several of the available parametrization schemes
in the literature, the most appropriate one for our purposes is that of
Altarelli et.~al\cite{ABJ}. This is because their $\epsilon$-parametrization
can be used for new physics which might appear at energy scales not far from
those of the SM. This is the case for the $Sp(6)_L \times U(1)_Y$ model.
In fact, in an earlier analysis\cite{Kuo-park-eps}
we have performed precision EW tests in this model in a scheme using
$\epsilon_{1,2,3}$
introduced in Ref.~\cite{ABJ}. We found that the parameters in this model were
severely constrained.
Recently, it was re-emphasized that there are important vertex corrections to
the decay mode $Z\rightarrow b\overline b$\cite{RC2HDM,ABCI,ABCII}. This mode has also been measured
at LEP and has proven to provide a strong constraint to model building.
Considering the high central value for the $m_t$ from CDF, $Z\rightarrow b\overline b$ vertex
corrections, which grow as $m_t^2$, can be quite significant.
Therefore, we have now incorporated $Z\rightarrow b\overline b$ vertex corrections in our new
analysis of the $Sp(6)_L \times U(1)_Y$ model.
In this paper we extend our previous analysis in two aspects: (i) we include a
new parameter $\epsilon_b$ to encode $Z\rightarrow b\overline b$ vertex corrections, (ii) we calculate
$\epsilon_1$ in a new scheme introduced in Ref~\cite{ABCI} in order to take
advantage of all LEP data.
We find that inclusion of $Z\rightarrow b\overline b$ vertex corrections reinforces strongly the
previous constraint from $\epsilon_1$ only so that the allowed parameter regions are
reduced considerably.
Thus, the precision EW tests have demonstrated clearly that they are powerful
tools in shaping our searches for
extensions of the SM.
In Sec.~II, we will describe the $Sp(6)_L \times U(1)_Y$ model, spelling out in detail the parts
that are relevant to precision tests. In Sec.~III, we summarize properties of
the $\epsilon$-parameters which will be used in our analysis. Sec.~IV contains
our detailed numerical results. Finally, some concluding remarks are given in
Sec.~V.
\section{$Sp(6)_L \times U(1)_Y$ Model}
The $SP(6)_L\otimes U(1)_Y$ model, proposed some time ago\cite{kuo-nakagawa},
is the simplest extension of the standard model of three generations
that unifies the standard $SU(2)_L$ with the horizontal gauge group
$G_H(=SU(3)_H)$ into an anomaly free, simple, Lie group. In this model,
the six left-handed quarks (or leptons) belong to a {\bf 6} of
$SP(6)_L$, while the right-handed fermions are all singlets. It is thus
a straightforward generalization of $SU(2)_L$ into $SP(6)_L$, with the three
doublets of $SU(2)_L$ coalescing into a sextet of $SP(6)_L$. Most of the
new gauge bosons are arranged to be heavy $(\geq 10^2$--$10^3\rm\,TeV)$ so as
to avoid sizable FCNC. $SP(6)_L$ can be naturally broken into $SU(2)_L$
through a chain of symmetry breakings. The breakdown
$SP(6)_L \rightarrow [SU(2)]^3 \rightarrow SU(2)_L$ can be induced by two
antisymmetric Higgs which transform as $({\bf 1}, {\bf 14}, 0)$ under
$SU(3)_C\otimes SP(6)_L\otimes U(1)_Y$. The standard $SU(2)_L$ is to be
identified with the diagonal $SU(2)$ subgroup of
$[SU(2)]^3=SU(2)_1\otimes SU(2)_2\otimes SU(2)_3$, where $SU(2)_i$ operates
on the $i$th generation exclusively. In terms of the $SU(2)_i$ gauge boson
$\vec{A}_i$, the $SU(2)_L$ gauge bosons are given by $\vec{A}={1\over\sqrt 3}
(\vec{A}_1+\vec{A}_2+\vec{A}_3)$. Of the other orthogonal combinations of
$\vec{A}_i$,
$\vec{A}^\prime={1\over\sqrt 6}(\vec{A}_1+\vec{A}_2-2\vec{A}_3)$, which
exhibits universality only
among the first two generations, can have a mass scale in the TeV range
\cite{1TeVZ}. The three gauge bosons $A^\prime$ will be denoted as $Z^\prime$
and $W^{\prime\pm}$.
Given these extra gauge bosons with mass in the TeV range, we can expect small
deviations from the SM. Some of these effects were already analyzed elsewhere.
For EW precision tests,
the dominant effects of new heavier gauge boson $Z^\prime (W^{\prime\pm})$ show
up
in its mixing with the standard $Z(W^\pm)$ to form the mass eigenstates
$Z_{1,2} (W_{1,2})$:
\[ \hbox to \hsize{$ \hfill
\begin{array}{rcl}
Z_1&=&Z\cos\phi_Z+Z^\prime\sin\phi_Z \;, \\
W_1&=&W\cos\phi_W+W^\prime\sin\phi_W \;,
\end{array} \quad
\begin{array}{rcl}
Z_2 &=& -Z\sin\phi_Z+Z^\prime\cos\phi_Z \;, \\
W_2 &=& -W\sin\phi_W+W^\prime\cos\phi_W \;,
\end{array} \hfill
\begin{array}{r}
\stepcounter{equation}(\theequation)\\
\stepcounter{equation}(\theequation)
\end{array}
$} \]
where $Z_1 (W_1)$ is identified with the physical $Z(W)$.
Here, the mixing angles $\phi_Z$ and $\phi_W$ are expected to be small
$(\lsim0.01)$, assuming that they scale as some powers of mass ratios.
With the additional gauge boson $Z^\prime$, the neutral-current Lagrangian
is generalized to contain an additional term
\begin{equation}
L_{NC}=g_Z J_Z^\mu Z_\mu +g_{Z^\prime} J_{Z^\prime}^\mu Z_\mu^\prime \;,
\end{equation}
where $g_{Z^\prime}=\sqrt{1-x_W\over 2} g_Z={g\over\sqrt{2}}$,
$x_W=\sin^2\theta _W$, and $g={e\over {\sin\theta _W}}$. The neutral currents
$J_Z$ and $J_{Z^\prime}$ are given by
\begin{eqnarray}
J_Z^\mu &=&\sum_{f} \bar{\psi}_f\gamma^\mu\left( g^f_V+g^f_A\gamma _5\right)
\psi_f \;, \\
J_{Z^\prime}^\mu &=&\sum_{f} \bar{\psi}_f\gamma^\mu\left( g^{\prime
f}_V+g^{\prime f}_A\gamma _5\right)
\psi_f \;,
\end{eqnarray}
where $g^f_V={1\over 2}\left( I_{3L}-2x_Wq\right)_f$, $g^f_A={1\over
2}\left( I_{3L}\right)_f$ as in SM, $g^{\prime f}_V=g^{\prime
f}_A={1\over 2}\left( I_{3L}\right)_f$
for the first two generations and $g^{\prime f}_V=g^{\prime
f}_A=-\left( I_{3L}\right)_f$ for the third. Here $\left(I_{3L}\right)_f$ and
$q_f$
are the third component of weak isospin and electric charge of fermion $f$,
respectively. And the neutral-current Lagrangian reads in terms of $Z_{1,2}$
\begin{equation}
L_{NC}=g_Z\sum_{i=1}^2\sum_{f} \bar{\psi}_f\gamma_\mu\left(
g^f_{Vi}+g^f_{Ai}\gamma _5\right)
\psi_f Z^\mu_i \;,
\end{equation}
where $g^f_{Vi}$ and $g^f_{Ai}$ are the vector and axial-vector
couplings of fermion $f$ to physical gauge boson $Z_i$, respectively.
They are given by
\begin{eqnarray}
g^f_{V1, A1}&=&g^f_{V, A}\cos\phi_Z+{g_{Z^\prime}\over g_Z} g^{\prime
f}_{V, A}\sin\phi_Z \;, \\
g^f_{V2, A2}&=&-g^f_{V, A}\sin\phi_Z+{g_{Z^\prime}\over g_Z} g^{\prime
f}_{V, A}\cos\phi_Z \;.
\end{eqnarray}
Similar analysis can be carried out in the charged sector.
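As a numerical sketch of how the mixing shifts the physical $Z_1$ couplings: with $g_{Z^\prime}/g_Z=\sqrt{(1-x_W)/2}$ and the generation-dependent $g^\prime$ couplings above, a mixing angle of the allowed size (taken here as 0.005, within the quoted bound of about 0.01) shifts the third-generation axial coupling at the per-cent level. The numerical values of $x_W$ and $\phi_Z$ are assumed, illustrative inputs.

```python
import math

x_W = 0.2312                       # sin^2(theta_W), assumed value
r = math.sqrt((1.0 - x_W) / 2.0)   # g_Z' / g_Z

def gA1(I3, generation, phi_Z):
    """Axial coupling of a fermion to the physical Z_1 after Z-Z' mixing."""
    gA = 0.5 * I3                                    # standard Z coupling
    gAp = 0.5 * I3 if generation in (1, 2) else -I3  # Z' coupling (3rd gen flips)
    return gA * math.cos(phi_Z) + r * gAp * math.sin(phi_Z)

# b quark (3rd generation, I3 = -1/2), phi_Z = 0.005:
shift = gA1(-0.5, 3, 0.005) - gA1(-0.5, 3, 0.0)   # ~ +0.0016 on -0.25
```

Note the opposite sign of $g^\prime$ for the third generation, so the $b$ coupling moves in the opposite direction to that of the first two generations; this generation dependence is what the LEP observables can constrain.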
\section{One-loop EW radiative corrections and the $\epsilon$-
parameters}
There are several different schemes to parametrize
the EW vacuum polarization corrections \cite{Kennedy,PT,efflagr,AB}. It
can be easily shown that by expanding the vacuum polarization tensors to order
$q^2$, one obtains three independent physical parameters. Alternatively, one
can
show that upon symmetry breaking there are three
additional terms in the effective lagrangian \cite{efflagr}.
In the $(S,T,U)$ scheme \cite{PT},
the deviations of the model predictions from those of the SM (with fixed
values of $m_t,m_H$) are considered to be as the effects from ``new physics".
This scheme is
only valid to the lowest order in $q^2$, and is therefore not viable for a
theory with new, light $(\sim M_Z)$ particles. In the $\epsilon$-scheme, on the
other hand, the model predictions are absolute and are valid up to higher
orders in $q^2$, and therefore this scheme is better suited to
the EW precision tests of the MSSM\cite{BFC} and a class of supergravity models
\cite{PARKeps}.
Here we choose to use the $\epsilon$-scheme because the new particles in the
model to be considered here can be relatively light $({\cal O}(1\,{\rm TeV}))$.
There are two different $\epsilon$-schemes. The original scheme\cite{ABJ} was
considered in our previous analysis \cite{Kuo-park-eps}, where
$\epsilon_{1,2,3}$ are defined from a basic set of observables $\Gamma_{l},
A^{l}_{FB}$ and $M_W/M_Z$.
Due to the large $m_t$-dependent vertex corrections to $\Gamma_b$, the
$\epsilon_{1,2,3}$ parameters and $\Gamma_b$ can be correlated only for a
fixed value of $m_t$. Therefore, $\Gamma_{tot}$, $\Gamma_{hadron}$ and
$\Gamma_b$ were not included in Ref.~\cite{ABJ}. However, in the new
$\epsilon$-scheme, introduced recently in Ref.~\cite{ABCI}, the above
difficulties are overcome by introducing a new parameter $\epsilon_b$ to encode
the $Z\rightarrow b\overline b$ vertex corrections. The four $\epsilon$'s are now defined from an
enlarged set of $\Gamma_{l}$, $\Gamma_{b}$, $A^{l}_{FB}$ and $M_W/M_Z$ without
even specifying $m_t$.
In this work we use this new $\epsilon$-scheme.
Experimental values for $\epsilon_1$ and $\epsilon_b$ are determined by
including all the latest LEP data (complete '92 LEP data + preliminary '93 LEP
data) to be \cite{Altarelli}
\begin{equation}
\epsilon^{exp}_1=(-0.3\pm3.2)\times10^{-3},\qquad
\epsilon^{exp}_b=(3.1\pm5.5)\times10^{-3}\ .
\end{equation}
The expression for $\epsilon_1$ is given as
\cite{BFC}
\begin{equation}
\epsilon_1=e_1-e_5-{\delta G_{V,B}\over G}-4\delta g_A,\label{eps1}
\end{equation}
where $e_{1,5}$ are the following combinations of vacuum polarization
amplitudes
\begin{eqnarray}
e_1&=&{\alpha\over 4\pi \sin^2\theta_W M^2_W}[\Pi^{33}_T(0)-\Pi^{11}_T(0)],
\label{e1}\\
e_5&=& M_Z^2F^\prime_{ZZ}(M_Z^2),\label{e5}
\end{eqnarray}
and the $q^2\not=0$ contributions $F_{ij}(q^2)$ are defined by
\begin{equation}
\Pi^{ij}_T(q^2)=\Pi^{ij}_T(0)+q^2F_{ij}(q^2).
\end{equation}
The quantities $\delta g_{V,A}$ are the contributions to the vector and
axial-vector form factors at $q^2=M^2_Z$ in the $Z\rightarrow l^+l^-$ vertex from
proper vertex diagrams and fermion self-energies, and $\delta G_{V,B}$ comes
from the one-loop box, vertex and fermion self-energy corrections
to the $\mu$-decay amplitude at zero external momentum. It is important to note
that these non-oblique corrections are non-negligible.
Also, they must be included since in general only the physical observables
$\epsilon_i$, but not the individual terms in them, are
gauge-invariant\cite{gaugeinv}.
However, we have included the Standard non-oblique corrections only.
The contributions from the new physics are small, at least in the gauge that we
choose, and will be neglected here.
Following Ref.~\cite{ABCI}, $Z\rightarrow b\overline b$ vertex corrections are encoded in a new
variable $\epsilon_b$ defined from $\Gamma_b$, the inclusive
partial width for $Z\rightarrow b\overline b$, as follows
\begin{equation}
\Gamma_b=3 R_{QCD} {G_FM^3_Z\over 6\pi\sqrt 2}\left(
1+{\alpha\over 12\pi}\right)\left[ \beta _b{\left( 3-\beta
^2_b\right)\over 2}(g^b_V)^2+\beta^3_b (g^b_A)^2\right] \;,
\end{equation}
with
\begin{eqnarray}
R_{QCD} &\cong&\left[1+{\alpha_S\left(
M_Z\right)\over\pi}-1.1{\left(\alpha_S\left(
M_Z\right)\over\pi\right)}^2-12.8{\left(\alpha_S\left(
M_Z\right)\over\pi\right)}^3\right] \;,\\
\beta_b&=&\sqrt {1-{4m_b^2\over M_Z^2}} \;, \\
g^b_A&=&-{1\over2}\left(1+{\epsilon_1\over2}\right)\left(
1+{\epsilon_b}\right)\;,\\
{g^b_V\over{g^b_A}}&=&{{1-{4\over3}{\overline s}^2_W+\epsilon_b}\over{1+\epsilon_b}}\;.
\end{eqnarray}
Here ${\overline s}^2_W$ is an effective $\sin^2\theta_W$ for on-shell $Z$, and
$\epsilon_b$ is closely related to the real part of the vertex correction to $Z\rightarrow b\overline b$,
denoted by $\delta_{b-Vertex}$ and defined explicitly in
Ref.~\cite{Bernabeuetal}. In the SM, the diagrams for $\delta_{b-Vertex}$
involve top quarks and
$W^\pm$ bosons, and the contribution to $\epsilon_b$ depends
quadratically on $m_t$, which is due to the EW symmetry breaking and can be a
decisive test of the model. In supersymmetric models there are additional
diagrams
involving Higgs bosons and supersymmetric particles\cite{BF,Djouadietal}.
In fact, $\epsilon_b$ has been calculated by one of us (G.P.) in the context of the
non-supersymmetric two Higgs doublet model\cite{epsb2HD}.
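The quadratic $m_t$ growth mentioned above can be made concrete with the commonly quoted leading large-$m_t$ behaviour $\epsilon_b\simeq -G_F m_t^2/(4\sqrt{2}\pi^2)$; note this closed form is a standard approximation, not an expression taken from the present text.

```python
import math

G_F = 1.16637e-5   # Fermi constant, GeV^-2

def eps_b_leading(m_t):
    """Leading large-m_t estimate: eps_b ~ -G_F m_t^2 / (4 sqrt(2) pi^2)."""
    return -G_F * m_t ** 2 / (4.0 * math.sqrt(2.0) * math.pi ** 2)

eb = eps_b_leading(174.0)   # ~ -6.3e-3 for the CDF central value
```

For the CDF central value this is comparable in size to the experimental error on $\epsilon_b^{exp}$ quoted above, which is why a heavy top makes $\Gamma_b$ such a sensitive constraint.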
In the following section we
calculate $\epsilon_{1}$ and $\epsilon_{b}$ in the
$Sp(6)_L \times U(1)_Y$ model. We do not, however, include $\epsilon_{2,3}$ in our analysis
simply because these parameters can not provide any constraints at the current
level
of experimental accuracy\cite{PARKeps}.
Although the oblique corrections due to extra gauge bosons could be neglected
completely as in Ref\cite{Altetal}, we have improved the model predictions for
the oblique corrections by implementing the new vertices from Eq.~(6)
for the fermion loops only.
In this way we have accounted for a significant deviation of the model
prediction from the SM value for not-so-small $|\phi_{Z,W}|$.
Furthermore, in models with extra gauge bosons such as the model to be
considered here, the contribution from the mixings of these extra bosons with
the SM ones $(\Delta\rho_M)$ should also be added to
$\epsilon_1$\cite{Altetal,Altarelli90,parkkuo93}.
\section{Results and Discussion}
In order to calculate the model prediction for the $Z$ width, it is sufficient
for our purposes to resort to the improved Born approximation (IBA)\cite{IBA},
neglecting small additional effects from the new physics.
Weak corrections can be effectively included
within the IBA, wherein the
vector couplings of all the fermions are determined by an effective
weak mixing angle.
In the case $f\not= b$, vertex corrections are negligible, and one
obtains the standard partial $Z$ width
\begin{equation}
\Gamma(Z\longrightarrow f\bar{f})=N^f_C\rho {G_FM^3_Z\over 6\pi\sqrt 2}\left(
1+{3\alpha\over 4\pi}q^2_f\right)\left[ \beta _f{\left( 3-\beta
^2_f\right)\over 2}{g^f_{V1}}^2+\beta^3_f {g^f_{A1}}^2\right] \;,
\end{equation}
where $N_C^f =1$ for leptons, and for quarks
\begin{eqnarray}
N_C^f &\cong&3\left[1+{\alpha_S\left(
M_Z\right)\over\pi}-1.1{\left(\alpha_S\left(
M_Z\right)\over\pi\right)}^2-12.8{\left(\alpha_S\left(
M_Z\right)\over\pi\right)}^3\right] \;,\\
\beta_f&=&\sqrt {1-{4m_f^2\over M_Z^2}} \;, \\
\rho&=&1+\Delta\rho_M+\Delta\rho_{SB}+\Delta\rho_t\;, \\
\Delta\rho_t &\simeq& {3G_Fm_t^2\over 8\pi^2\sqrt 2} \;.
\end{eqnarray}
where the $\rho$ parameter includes not only the effects of the
symmetry breaking $\left(\Delta\rho_{SB}\right)$\cite{SB} and those
of the mixings between the SM bosons and the new bosons
$\left(\Delta\rho_{M}\right)$, but also the loop effects
$\left(\Delta\rho_{t}\right)$.
$N_C^f$ above is obtained by accounting for QCD corrections up to
3-loop order in the $\overline{MS}$ scheme, and we ignore the different QCD
corrections for vector and axial-vector couplings, which arise not
only from the breaking of chiral invariance by fermion masses but also from the large mass
splitting between $b$ and $t$. We use for the vector and axial-vector couplings
$g^f_{V1}$ and $g^f_{A1}$ in Eq.~(7) the effective $\sin^2\theta_W$,
$\bar{x}_W=1-{M_W^2\over{\rho M_Z^2}}$.
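The size of the loop term $\Delta\rho_t$ is easy to check numerically. The sketch below evaluates it together with the resulting $\bar{x}_W$ for the simplification $\rho \approx 1 + \Delta\rho_t$ (i.e., ignoring $\Delta\rho_M$ and $\Delta\rho_{SB}$); the value of $G_F$ is the standard Fermi constant, and $M_W$ is reconstructed from the quoted ratio $1-(M_W/M_Z)^2 = 0.2257$:

```python
import math

G_F = 1.16637e-5                      # Fermi constant in GeV^-2 (standard value)
M_Z = 91.187                          # GeV, from the text
M_W = M_Z * math.sqrt(1.0 - 0.2257)   # GeV, from the quoted 1 - (M_W/M_Z)^2

def delta_rho_t(m_t):
    """Leading top-loop contribution: 3 G_F m_t^2 / (8 pi^2 sqrt(2))."""
    return 3.0 * G_F * m_t**2 / (8.0 * math.pi**2 * math.sqrt(2.0))

def xbar_W(rho):
    """Effective sin^2(theta_W) used for the vector couplings."""
    return 1.0 - M_W**2 / (rho * M_Z**2)

for m_t in (160.0, 175.0, 190.0):
    rho = 1.0 + delta_rho_t(m_t)   # simplification: ignores Delta rho_M, Delta rho_SB
    print(m_t, delta_rho_t(m_t), xbar_W(rho))
```

For $m_t = 175\,{\rm GeV}$ this gives $\Delta\rho_t \approx 0.0096$, which sets the scale of the quadratic $m_t$ dependence discussed above.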
In the case of $Z\longrightarrow b\bar{b}$,
the large $t$ vertex correction should be accounted for by the following
replacement
\begin{equation}
\rho \longrightarrow\rho-{4\over 3}\Delta\rho_t\,, \quad
\bar{x}_W\longrightarrow\bar{x}_W\left( 1+{2\over 3}\Delta\rho_t\right) \;.
\end{equation}
In the following analysis, we consider not only a constraint on the deviation
of $\Gamma_Z$ from the SM prediction\cite{parkkuo93}, $\Delta\Gamma_Z\leq
14$~MeV, which is the present experimental accuracy\cite{Luth}, but also the
present experimental
bound on $\Delta\rho_M$.
We use a direct model-independent bound on $\Delta\rho_M$,
$\Delta\rho_M\mathrel{\mathpalette\atversim<} 0.0147-0.0043{\left({m_t\over 120\,{\rm GeV}}\right)}^2$ from
$1-({M_W\over{M_Z}})^2=0.2257\pm 0.0017$ and $M_Z=91.187\pm
0.007\,{\rm GeV}$\cite{Luth}.
The values $M_H=100\,{\rm GeV}$, $\alpha_S(M_Z)=0.118$, and $\alpha(M_Z)=1/128.87$
will be used
throughout the numerical analysis.
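As a quick sanity check, the quoted bound on $\Delta\rho_M$ can be tabulated for the top masses considered below; the sketch is a direct transcription of the formula above:

```python
def delta_rho_M_bound(m_t):
    """Model-independent upper bound on Delta rho_M quoted in the text (m_t in GeV)."""
    return 0.0147 - 0.0043 * (m_t / 120.0) ** 2

for m_t in (160.0, 175.0, 190.0):
    print(m_t, delta_rho_M_bound(m_t))
```

The bound tightens noticeably as $m_t$ grows, which is why the allowed $(\phi_Z,\phi_W)$ regions shrink for heavier top masses.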
In Fig.~1 we present the model predictions for $\epsilon_1$ and $\epsilon_b$
only for the values of $\phi_Z$ and $\phi_W$ allowed by $\Delta\Gamma_Z$ and
$\Delta\rho_M$ constraints with $M_{Z^\prime}=1000$ and $M_{W^\prime}=800\,{\rm GeV}$
for $m_t=160, 175$ and $190\,{\rm GeV}$. We restrict $|\phi_{Z,W}|\leq 0.02$. We
also include in the figure the latest $90\%$CL ellipse from all LEP
data\cite{Altarelli}. The values of $m_t$ used are indicated above each
horizontal stripe of dots.
It is very interesting to see that the correlated constraint is much
stronger than the individual constraints.
The maximum deviation of $\epsilon_b$ in the model from the SM value is around
$1.3\%$ for $|\phi_{Z,W}|\mathrel{\mathpalette\atversim<} 0.02$. Although the deviation is very small, the
inclusion
of $\epsilon_b$ in the analysis makes the LEP data certainly much more
constraining.
Imposing the $\epsilon_1-\epsilon_b$ constraint by selecting only the values of
$\phi_Z$ and $\phi_W$ falling inside the ellipse in Fig.~1, we show in Fig.~2
the allowed regions in $(\phi_Z, \phi_W)$ for (a) $m_t=160\,{\rm GeV}$, (b)
$m_t=175\,{\rm GeV}$ and (c) $m_t=190\,{\rm GeV}$. The striking difference in the shape of
the allowed region between $m_t=175\,{\rm GeV}$ and $m_t=190\,{\rm GeV}$ comes from the fact
that the SM value
of $\epsilon_1$ for $m_t=175\,{\rm GeV}$ is inside the ellipse whereas the one for
$m_t=190\,{\rm GeV}$ is outside\cite{Kuo-park-eps}.
If the top quark turns out to be fairly heavy, e.g. $m_t\mathrel{\mathpalette\atversim>} 180\,{\rm GeV}$,
where the SM predictions always fall outside the ellipse in Fig.~1,
then the presence of the extra gauge bosons is certainly favored
because they can bring the model predictions inside the ellipse,
as seen in Fig.~1,
although there is an ambiguity in the model prediction for $\epsilon_1$ in that
the contribution from the extra gauge bosons can have either sign. This
situation can be contrasted with the one in
the MSSM where the heavy top quark is still consistent with the LEP data as
long as the chargino
is very light $\sim M_Z/2$, which is known as the ``light chargino
effect''\cite{BFC}, whose contribution to $\epsilon_1$ is always negative.
However, if the chargino were not discovered at LEP~II,
then the MSSM would be in serious trouble.
\section{Conclusions}
In this work we have extended our previous work on the precision EW tests in
the $Sp(6)_L \times U(1)_Y$ family model to include for the first time the important
$Z\rightarrow b\overline b$ vertex corrections encoded in a new variable $\epsilon_b$, utilizing all the
latest LEP data.
As has been the case with similar studies, the model is considerably
constrained. The most important effects of the model come from mixings of the
SM gauge bosons $Z$ and $W$ with the additional gauge bosons $Z^\prime$ and
$W^\prime$. We have included in our analysis the one loop EW radiative
corrections due to the new bosons in terms of $\epsilon_{1,b}$ and
$\Delta\Gamma_Z$.
It is found that the correlation between $\epsilon_1$ and $\epsilon_b$
makes the combined $\epsilon_1-\epsilon_b$ constraint much stronger than the
individual ones.
Using a global
fit to LEP data on $\Gamma_{l}, \Gamma_{b}, A^{l}_{FB}$ and $M_W/M_Z$
measurement, we find that the mixing angles $\phi_Z$ and $\phi_W$ are
constrained to lie in rather small regions. Also, larger ($\mathrel{\mathpalette\atversim>} 1\%$)
$\phi_Z$ and $\phi_W$ values are allowed only when there is considerable
cancellation between the $Z^\prime$ and $W^\prime$ contributions, corresponding
to $|\phi_Z|\approx|\phi_W|$. It is noteworthy that the results are sensitive
to the top quark mass.
For smaller $m_t$'s, the allowed parameter regions become considerably larger.
Only very tiny regions are allowed for $m_t=190\,{\rm GeV}$.
It is very interesting that the model cannot accommodate
$m_t\mathrel{\mathpalette\atversim>} 195\,{\rm GeV}$ at $90\%$CL, a value which is still consistent with the $m_t$ from
CDF.
As the top quark mass from the Tevatron becomes more accurate, we can narrow
down the mixing angles further with considerable precision.
\section*{Acknowledgements}
The authors would like to thank Professor J.~E.~Kim
for reading the manuscript.
The work of T.~K. has been supported in part by DOE. The work of G.~P.
has been supported by the Korea Science and Engineering Foundation
through the SRC program.
\newpage
\def\NPB#1#2#3{Nucl. Phys. B {\bf#1} (19#2) #3}
\def\PLB#1#2#3{Phys. Lett. B {\bf#1} (19#2) #3}
\def\PLIBID#1#2#3{B {\bf#1} (19#2) #3}
\def\PRD#1#2#3{Phys. Rev. D {\bf#1} (19#2) #3}
\def\PRL#1#2#3{Phys. Rev. Lett. {\bf#1} (19#2) #3}
\def\PRT#1#2#3{Phys. Rep. {\bf#1} (19#2) #3}
\def\MODA#1#2#3{Mod. Phys. Lett. A {\bf#1} (19#2) #3}
\def\IJMP#1#2#3{Int. J. Mod. Phys. A {\bf#1} (19#2) #3}
\def\TAMU#1{Texas A \& M University preprint CTP-TAMU-#1}
\def\PURD#1{Purdue University preprint PURD-TH-#1}
\def\ARAA#1#2#3{Ann. Rev. Astron. Astrophys. {\bf#1} (19#2) #3}
\def\ARNP#1#2#3{Ann. Rev. Nucl. Part. Sci. {\bf#1} (19#2) #3}
\section{Introduction}
A fundamental problem affecting many application areas is the processing of signals on graphs~\cite{shuman-2013}. In addition to representing networks, graphs provide an underlying structure for data in which connectivity and proximity affect processing. Distances as measured along graph edges induce notions of continuity and displacement, allowing for the adaptation of ideas from continuous geometry to sampled or fundamentally discrete data.
In particular, we focus on probability distributions \emph{over} graphs, whereby each vertex of a graph is associated with some amount of probabilistic mass. Given that countless algorithms and models in machine learning rely upon a notion of distance/divergence between probability distributions, it is desirable to design efficiently-computable and well-behaved distances taking into account the connectivity of the underlying graph.
A well-known notion of distance from information theory is provided by the KL divergence and its variants. The crucial drawback of these divergences in our context, however, is that they are agnostic to graph connectivity. That is, displacing probability from one node to another has the same cost regardless of the distance it travels along the edges.
One alternative taking connectivity into account is provided by \emph{optimal transportation} distances, also known as \emph{earth mover's distances} (EMDs). These measure the work needed to transform one probability distribution into another by transporting mass over the graph edges~\cite{villani-2003}. Setting the cost of moving a unit of mass between graph nodes $v$ and $w$ to $d(v,w)^p$ yields the $p$-Wasserstein transportation distance, where $d(\cdot,\cdot)$ denotes shortest-path graph distance.
Transportation distances are the topic of a well-developed mathematical theory establishing geometric properties for the $2$-Wasserstein distance desirable for applications. Unfortunately, the basic optimization for this distance involves a linear program scaling quadratically with the number of nodes, rendering the computation prohibitively expensive even for moderately-sized graphs. This has led to approximations using wavelets~\cite{cohen-2011}, smoothing~\cite{cuturi-2013}, sketching~\cite{mcgregor-2013}, and so on. These approximations impose strict requirements on the graph and can become noisy or discontinuous thanks to aggressive approximation. Others resort to the $1$-Wasserstein distance, which is easier computationally but has weak stability and geometric structure.
Here, we introduce a novel transportation distance between distributions over graphs. Our approach is inspired by a Riemannian framework for Wasserstein distance in $\R^n$ based on fluid flow~\cite{lott-2008}. We consider the set of distributions on a graph as a manifold and define appropriate analogs of tangent spaces and inner products; our transportation distances are geodesics in this abstract space. Although it does not come from the usual transportation linear program, our alternative definition still satisfies the triangle inequality and so on while benefiting from the new infinitesimal construction.
Our approach has several advantages. It scales linearly with the number of edges, enabling larger-scale computation especially on sparse graphs. Even with this favorable scaling, it has properties in common with $2$-Wasserstein distances. The Riemannian structure, not known to accompany the quadratically-scaling linear programming formulation, gives our distances additional properties in common with Wasserstein distances over $\R^n$, such as ``displacement interpolation'' paths explaining the motion of mass from one distribution to another. We also provide a suitable discretization and experiments suggesting our distance's structure and applicability to learning tasks.
\section{Optimal Transportation on Continuous Domains}
Suppose $M$ is a manifold with geodesic distance $d(\cdot,\cdot).$ We will use $\Prob(M)$ to denote the space of probability measures over $M$. Then, the $p$-\emph{Wasserstein} distance between $\mu_0,\mu_1\in\Prob(M)$ is:
\begin{equation}\label{eq:wasserstein}
\W_p(\mu_0,\mu_1)\equiv\inf_{\pi\in\Pi(\mu_0,\mu_1)}\left(
\iint_{M\times M} d(x,y)^p\,d\pi(x,y)
\right)^{\nicefrac{1}{p}}
\end{equation}
Measures $\pi\in\Pi(\mu_0,\mu_1)\subseteq\Prob(M\times M)$ are \emph{transportation plans} satisfying $\pi(U\times M)=\mu_0(U)$ and $\pi(M\times V)=\mu_1(V)$ for $U,V\subseteq M$. We can think of $\pi\in\Pi(\mu_0,\mu_1)$ as a matching of probabilistic mass from $\mu_0$ to $\mu_1$; then, $\W_p$ is the minimum cost matching, assuming the cost of moving mass from $x\in M$ to $y\in M$ is $d(x,y)^p$. We refer the reader to~\cite{villani-2003} for additional discussion.
In vision and learning, $\W_1$ is known as the ``earth mover's distance''~\cite{rubner-2000}, used on histograms and binned descriptors. Theoretically, however, the quadratic distance $\W_2$ admits stronger smoothness and uniqueness properties akin to the difference between $L_2$ and $L_1$ regularization. Regardless, algorithms optimizing~\eqref{eq:wasserstein} directly deal with distributions over the cross product $M\times M$; discretizing $M$ using $n$ points produces a linear program over $O(n^2)$ variables, which can be prohibitively expensive.
More recent methods establish a connection between~\eqref{eq:wasserstein} and Eulerian fluid mechanics. Most prominently, Benamou and Brenier show that $\W_2$ can be computed on $M$ as follows~\cite{benamou-2000}:
\begin{equation}\label{eq:bb}
[\W_2(\rho_0,\rho_1)]^2=
\left\{
\begin{aligned}
\inf_{\rho(t,x),v(t,x)} & \int_M\int_0^1 \rho(t,x) \|v(t,x)\|^2\,dx\,dt\\[1ex]
\mbox{such that } \; & \rho(0,x)=\rho_0(x),\
\rho(1,x) = \rho_1(x)\\
& \frac{\partial \rho}{\partial t} + \nabla\cdot(\rho v) = 0,\ \rho(t,x)\geq0
\end{aligned}
\right.
\end{equation}
We assume measures $\mu$ can be expressed using distributions $\rho$ such that $\mu(U)\equiv\int_U \rho(x)\,dx$. The variable $\rho(t,x)$ is a time-varying set of distributions advecting from $\rho_0$ to $\rho_1$ along vector field $v(t,x)$ with minimal kinetic energy $\iint\rho \|v\|^2 \, dx \, dt$. Discretizations of this problem scale well, since $[0,1]$ typically can be subdivided into far fewer than $n$ subintervals. Similar formulations exist for $\W_1$~\cite{santambrogio-2013}, but we focus on $\W_2$ for its structure and computational challenges on graphs.
Using~\eqref{eq:bb}, $\W_2$ can be viewed as a geodesic distance on $\Prob(M)$~\cite{lott-2008}. First-order optimality of~\eqref{eq:bb} shows $v(t,x)=\nabla \phi(t,x)$ for some $\phi(t,x)$. This is interpreted to mean that the set of gradients $\{\nabla \phi(\cdot)\}$ forms a ``tangent space" for $\Prob(M)$ at $\mu$ equipped with an inner product defined by
\begin{equation}\label{eq:inner_prod_bb}
\langle \nabla\phi_1,\nabla\phi_2\rangle_{\mu}\equiv\int_M \langle \nabla\phi_1(x),\nabla\phi_2(x)\rangle\,d\mu(x).
\end{equation}
Given a time-varying set of distributions $\rho(t,x)$, there exists a unique potential field $\nabla_x \phi(t,x)$ with $\frac{\partial\rho}{\partial t}+\nabla\cdot(\rho\nabla \phi)=0$, so if we write a ``curve'' $c(t):[0,1]\rightarrow\Prob(M)$ corresponding to $\rho(t,x)$, then we can define $c'(t)\equiv \nabla\phi(t,\cdot).$ Then,~\cite{lott-2008} proves:
\begin{equation}\label{eq:lott}
\W_2(\rho_0,\rho_1) = \inf_{\substack{c(0)=\rho_0\\c(1)=\rho_1}} \int_0^1 \langle c'(t),c'(t)\rangle_\mu^{\nicefrac{1}{2}}\,dt.
\end{equation}
That is, quadratic Wasserstein distance satisfies the geodesic equation under this inner product.
\section{Continuous-Flow Transportation Distances}
Now, we transition from manifold domains $M$ to undirected connected graphs $G=(V,E).$ Recall that our goal is to construct a transportation distance on discrete distributions using the structure of $G$ with the structure of $2$-Wasserstein distances in Euclidean space.
Define $d(v,w)$ to be the shortest-path distance between $v,w\in V$, and define $\Prob(G)\equiv\{p\in[0,1]^{|V|}:\mathbbm1^\top p=1\}.$ To mimic the construction of $\W_2$, we might attempt to compute the following quadratic transportation distance on $\Prob(G)$:
\begin{equation}\label{eq:pairwise}
[\overline\W_{\mathrm{full}}(p_0,p_1)]^2\equiv\left\{
\begin{aligned}
\inf_{T(v,w)\geq0} & \sum_{v,w\in V} d(v,w)^2T(v,w)\\
\mbox{such that } \;
& \sum_{w\in V} T(v,w)=p_0(v)\ \forall v\in V\\
& \sum_{v\in V} T(v,w)=p_1(w)\ \forall w\in V
\end{aligned}\right.
\end{equation}
While this distance has some properties in common with $\W_2$, the linear program requires $|V|^2$ variables $T(v,w)$ and precomputation of a dense matrix of pairwise distances $d(v,w).$
To alleviate these scaling problems, we might hope to find an Eulerian formulation of $\overline\W_{\mathrm{full}}$ in the style of~\eqref{eq:bb}. Such a distance is challenging to derive starting from~\eqref{eq:pairwise} for several reasons. First, graphs are discrete with no perfect analog of the advection equation $\frac{\partial\rho}{\partial t}+\nabla\cdot(\rho v)=0$. Furthermore, flows on graphs are per-edge and distributions are per-vertex, so the objective $\rho\|v\|^2$ must be approximated. Hence, rather than expecting to find a new formulation of $\overline\W_{\mathrm{full}},$ we will introduce a \emph{new} transportation distance constructed by working backward from~\eqref{eq:lott}.
\subsection{Definition}
The main ingredients defining transportation distances starting from~\eqref{eq:lott} are:
\begin{enumerate}
\item A model of advection for moving probabilistic mass between adjacent vertices of a graph over time using a per-edge flow.
\item A probability-based inner product on per-edge flows similar to~\eqref{eq:inner_prod_bb}.
\end{enumerate}
To simplify notation, define $\overline E\equiv \{(v\rightarrow w), (w\rightarrow v): (v,w)\in E\},$ a set of directed edges where each edge in $E$ is doubled with forward and backward orientation. Following~\cite{chapman-2011}, given a nonnegative oriented flow $U(t,e):[0,1]\times\overline E\rightarrow \R^+$ and distribution $p_0\in\Prob(G)$, we define advection of $p_0$ along $U$ as the solution of the system of ordinary differential equations
\begin{align}\label{eq:graph_advection}
\frac{d}{dt}p(t,v) &= \sum_{e=(w\rightarrow v)} U(t,e) p(t,w) - \sum_{e=(v\rightarrow w)} U(t,e) p(t,v)\,,\\
p(0,v) &= p_0(v)\,.\nonumber
\end{align}
The time variable $t$ aside, this model of advection is defined discretely in terms of the structure of the graph but still preserves mass in that $\sum_{v\in V} p(t,v)=1$ for all $t\geq0$.
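The ODE system above can be integrated directly. The following minimal sketch runs forward Euler on a three-vertex path graph with an arbitrary illustrative flow $U$; as claimed, total mass is preserved for any nonnegative flow, because each flux term leaves one vertex and enters another:

```python
# Forward-Euler integration of the graph advection ODE on a 3-vertex path
# a -- b -- c, with a constant rightward flow. The flow values here are an
# arbitrary illustrative choice; any nonnegative U preserves total mass.
edges = [(0, 1), (1, 2), (1, 0), (2, 1)]   # oriented edge set (both directions)
U = {(0, 1): 1.0, (1, 2): 1.0, (1, 0): 0.0, (2, 1): 0.0}

p = [1.0, 0.0, 0.0]   # p_0 = indicator of vertex a
dt, steps = 0.001, 1000

for _ in range(steps):
    dp = [0.0] * len(p)
    for (v, w) in edges:
        flux = U[(v, w)] * p[v]   # mass leaving v along the oriented edge (v -> w)
        dp[v] -= flux
        dp[w] += flux
    p = [p_i + dt * d_i for p_i, d_i in zip(p, dp)]

print(p, sum(p))
```

Running to $t = 1$, the mass at vertex $a$ decays roughly like $e^{-t}$ while the remainder spreads rightward, and $\sum_v p(t,v)$ stays at $1$ up to floating-point error.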
To define an inner product on flows, take $U,W:\overline E\rightarrow\R$ and $p\in\Prob(G)$ with $p(v)>0\ \forall v\in V$. Then, we define the \emph{advective inner product} between $U$ and $W$ at $p$ as:
\begin{equation}\label{eq:advective_inner_prod}
\langle U,W\rangle_p\equiv\sum_{e=(v\rightarrow w)} \left(\frac{p(v)}{p(w)}\cdot\frac{(p(v)+p(w))}{2}\right) U(e)W(e)\,.
\end{equation}
This is an $L_2$ inner product on $\R^{2|E|}$ weighted by a function of the distribution $p$. The asymmetry between $p(v)$ and $p(w)$ accounts for the structure of graph advection~\eqref{eq:graph_advection} and makes our transportation distance symmetric via Proposition~\ref{prop:momentum}. We also define the \emph{advective norm} $\|U\|_p\equiv\sqrt{\langle U,U\rangle_p}$.
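A direct transcription of the advective inner product, evaluated on a single doubled edge; the graph and numerical values here are illustrative only:

```python
# Weighted L2 inner product on oriented-edge flows, with weights depending on
# the distribution p, following the definition of <U, W>_p in the text.
def advective_inner(U, W, p, edges):
    """<U, W>_p over oriented edges (v, w); requires p[v] > 0 for all v."""
    total = 0.0
    for (v, w) in edges:
        weight = (p[v] / p[w]) * (p[v] + p[w]) / 2.0
        total += weight * U[(v, w)] * W[(v, w)]
    return total

def advective_norm(U, p, edges):
    return advective_inner(U, U, p, edges) ** 0.5

# Tiny example on a single doubled edge (a, b) with a uniform distribution:
edges = [(0, 1), (1, 0)]
p = [0.5, 0.5]
U = {(0, 1): 1.0, (1, 0): 0.0}
print(advective_norm(U, p, edges))
```

For a uniform $p$ the weight on each edge reduces to $\nicefrac{1}{2}(p(v)+p(w))$, so the norm is simply a scaled Euclidean norm of the flow.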
Now, we can define the \emph{continuous-flow transportation distance} on $\Prob(G)$ by imitating~\eqref{eq:lott}:
\begin{empheq}[box=\fbox]{equation}\label{eq:graph_geodesic}
\overline\W(p_0,p_1)\equiv\left\{
\begin{aligned}
\inf_{\substack{U(t,e)\geq0\\p(t,v)\geq0}} & \int_0^1 \|U(t,\cdot)\|_{p(t,\cdot)}\,dt\\
\mbox{such that }\; & p(0,v)=p_0(v), \, p(1,v)=p_1(v)\\%\ \forall v\in V\\
&\text{(\ref{eq:graph_advection}) holds}
\end{aligned}
\right.
\end{empheq}
\subsection{Properties}
We state several properties of our new distance $\overline\W$ to provide intuition for its behavior and to motivate how we will compute it numerically. We begin by providing an alternative convex formulation:
\begin{proposition} $\overline\W$ can be computed as follows:\label{prop:momentum}
\begin{equation}\label{eq:momentum_form}
[\overline\W(p_0,p_1)]^2=\left\{
\begin{aligned}
\inf_{\substack{J(t,e)\geq0\\p(t,v)\geq0}} & \int_0^1\sum_{e=(v\rightarrow w)} \frac{J(t,e)^2}{2}\left(\frac{1}{p(t,v)} + \frac{1}{p(t,w)}\right)\,dt\\
\mbox{such that } \; & p(0,v)=p_0(v), \, p(1,v)=p_1(v)\ \forall v\in V\\
& \frac{d}{dt} p(t,\cdot) = D^\top J(t,\cdot)
\end{aligned}
\right.
\end{equation}
Here, $D\in\R^{2|E|\times |V|}$ is the operator computing $p(w)-p(v)$ for each oriented edge $(v\rightarrow w)\in \overline E.$
\end{proposition}
\begin{proof}
First, substitute $J(t,e)\equiv p(t,v)U(t,e)$ into~\eqref{eq:graph_geodesic}, for $e=(v\rightarrow w).$ Then,
$$\langle U,U\rangle_p
=\sum_{e=(v\rightarrow w)\in\overline E} \frac{J(t,e)^2}{2}\left(\frac{1}{p(t,v)} + \frac{1}{p(t,w)}\right)\,.$$
An identical substitution into the advection equation shows that $J$ satisfies $\frac{dp}{dt}=D^\top J.$
Define a new inner product $\langle\cdot,\cdot\rangle_p'$ replacing the weights in~\eqref{eq:advective_inner_prod} with $\frac{1}{p(v)}+\frac{1}{p(w)}$.
In this notation, we must show that minimizing~\eqref{eq:momentum_form}, whose energy functional now can be written as $\int_0^1 \|J(t,\cdot)\|_{p(t,\cdot)}'^2\,dt$, is equivalent to squaring the result of minimizing the non-squared functional $\int_0^1 \|J(t,\cdot)\|_{p(t,\cdot)}'\,dt$ with the same constraints. We follow an analogous proof for manifold geodesics in Theorem 2.1 of~\cite{udriste-1994}.
First, take any $J$ and $p$ satisfying the constraints. As a consequence of H\"older's inequality, we know
$$\int_0^1 \|J(t,\cdot)\|_{p(t,\cdot)}'\,dt\leq\left(\int_0^1 \|J(t,\cdot)\|_{p(t,\cdot)}'^2\,dt\right)^{\nicefrac{1}{2}}.$$
This establishes~\eqref{eq:momentum_form} as an upper bound for~\eqref{eq:graph_geodesic}.
Now, suppose $(J,p)$ are minimizers of~\eqref{eq:graph_geodesic} after substitution. Define the ``arc length'' function
$\ell(t)\equiv \int_0^t \|J(\bar t,\cdot)\|_{p(\bar t,\cdot)}'\,d\bar t,$
and take $s(t)\equiv\nicefrac{\ell(t)}{\ell(1)}.$
Reparameterizing $p$ with respect to $s$ yields
$$\frac{d}{ds}p(s,\cdot)=\frac{dp}{dt}\frac{dt}{ds}=D^\top J(s,\cdot)\frac{dt}{ds}=D^\top\left(\frac{J(s,\cdot)\ell(1)}{\|J(s,\cdot)\|_p}\right)\equiv D^\top\tilde J(s,\cdot).$$
Hence, $\tilde J(s,\cdot)$ and $p(s,\cdot)$ satisfy the constraints with $t\mapsto s$. The advective norm of $\tilde J$, however, is a constant function of $s$ with $\|\tilde J(s,\cdot)\|'_p = \ell(1) = \overline\W(p_0,p_1).$ Thus we know $[\overline\W(p_0,p_1)]^2=\int_0^1 \|\tilde J(s,\cdot)\|'^2_p\,ds,$ establishing~\eqref{eq:graph_geodesic} as an upper bound for~\eqref{eq:momentum_form}.\end{proof}
With this alternative formulation in hand, we verify that $\overline\W$ truly satisfies the properties of a distance metric on $\Prob(G)$:
\begin{proposition} $\overline\W$ is a distance on $\Prob(G)$.
\end{proposition}\begin{proof}
Non-negativity follows directly from the form of $\overline\W$. Symmetry follows from~\eqref{eq:momentum_form} since $\overline E$ contains all forward \emph{and} backward edges; any forward-flowing $p(t,v)$ can be replaced by a backward flow $p(1-t,v)$ with the same integrated advective norm by placing $J$ on flipped edges. Finally, the triangle inequality follows from~\eqref{eq:graph_geodesic} identically to the proof of triangle inequality for geodesic curves by concatenating paths in $\Prob(G)$.
\end{proof}
Finally, we relate $\overline\W$ to the geometry of the underlying graph. For simplicity, we restrict our theoretical result to a weak but easily-proved bound and show experimentally in \S\ref{sec:experiments} that our bound is conservative:
\begin{proposition} Suppose $\delta_v,\delta_w\in\Prob(G)$ are the indicator functions of $v,w\in V$. Then, there exists $c\geq0$ not dependent on $G$ with $\overline\W(\delta_v,\delta_w)\leq cd(v,w).$\label{prop:bound}
\end{proposition}
\begin{proof}
For a two-node graph with $V=\{a,b\}$ and $E=\{(a,b)\}$, define $c\equiv\overline\W(\delta_a,\delta_b)$; using the discretization in \S\ref{sec:discretization} with a large number of time steps, $c\approx 2.2159$. Then, for $v,w\in V$ in a general graph $G=(V,E)$, define $\ell\equiv d(v,w)$. We construct a time-varying set of distributions $p(t,v)$ by dividing $[0,1]$ into $\ell$ subintervals of length $\nicefrac{1}{\ell}$ and use the two-node solution to transport mass along each edge of the shortest path from $v$ to $w$ one at a time. Compressing the two-node integral into an interval of length $\nicefrac{1}{\ell}$ scales the integral for $\overline\W^2$ by $\ell$, and there are $\ell$ such subintervals. Thus, this feasible point for~\eqref{eq:momentum_form} shows $[\overline\W(\delta_v,\delta_w)]^2\leq c^2\ell^2\implies \overline\W(\delta_v,\delta_w)\leq c\ell.$
\end{proof}
\subsection{Discretization}\label{sec:discretization}
While $\overline\W$ is a distance on $\Prob(G)$, a subset of the finite-dimensional set $[0,1]^{|V|}$, its computation involves solving variational problems~\eqref{eq:graph_geodesic} or~\eqref{eq:momentum_form} containing a continuous time variable $t$. For this reason, we propose a discrete approximation of $\overline\W$ by sampling $t$ before the minimization. This approximation can be evaluated using standard convex optimization machinery.
Since the functional $\nicefrac{x^2}{y}$ is convex so long as $y>0$, we use~\eqref{eq:momentum_form} as the starting point for our discretization; this objective term can be included in second-order cone programs~\cite{grant-2008,cvx-2014}. We divide the time span $[0,1]$ into $k$ subintervals, with improving approximation quality as $k\rightarrow\infty$. Similar to~\cite{papadakis-2014}, we propose using a \emph{staggered} discretization in which $J$ is sampled at subinterval midpoints and $p$ is sampled at subinterval endpoints; for this reason, our optimization variables are distributions $q_0,\ldots,q_k\in\Prob(G)$ with flows $J_1,\ldots,J_k$ so that $J_i$ is the flow from $q_{i-1}$ to $q_i$.
Our discretized version of the optimization~\eqref{eq:momentum_form} is:
\begin{equation}\label{eq:discretization}
[\overline\W_k(p_0,p_1)]^2\equiv
\left\{
\begin{aligned}
\min_{\substack{J^i(e)\geq0\\q^i(v)\geq0}}\;& k\sum_{i=1}^k \sum_{e=(v\rightarrow w)} \frac{J^i(e)^2}{2}\left(\frac{1}{q^{i-1}(v)}+\frac{1}{q^i(w)}\right)\,,\\
\mbox{such that } \; & q^0 = p_0, \, q^k = p_1, \,
D^\top J^i = q^i - q^{i-1}\,
\end{aligned}
\right.
\end{equation}
This discretization is convex. Taking $J=1$ along the shortest path from $v$ to $w$, one can show $\overline\W_k(\delta_v,\delta_w)=k$ when $d(v,w)=k,$ suggesting that the bound in Proposition~\ref{prop:bound} is conservative.
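The claim $\overline\W_k(\delta_v,\delta_w)=k$ when $d(v,w)=k$ can be checked by evaluating the discretized objective at the stated feasible point on a path graph. The sketch below is a bookkeeping check of that feasible point, not a solver for the convex program:

```python
import math

# Evaluate the discretized objective at the feasible point that pushes unit
# mass one edge per time step along a path graph 0 - 1 - ... - k. Here
# q^{i-1} is the indicator of vertex i-1, q^i the indicator of vertex i,
# and J^i = 1 on the single active edge (i-1 -> i), zero elsewhere.
def objective(k):
    total = 0.0
    for i in range(1, k + 1):
        J = 1.0
        q_prev_v = 1.0   # q^{i-1}(v) at the tail of the active edge
        q_next_w = 1.0   # q^i(w) at the head of the active edge
        total += (J**2 / 2.0) * (1.0 / q_prev_v + 1.0 / q_next_w)
    return k * total     # leading factor k in the discretized objective

for k in (1, 2, 5, 10):
    print(k, math.sqrt(objective(k)))
```

Each of the $k$ terms contributes $1$, so the objective equals $k^2$ and its square root equals $k$, matching the claim; the true optimum can only be smaller.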
One feature of our discretization is the mix of $q^{i-1}(v)$ and $q^i(w)$ in the optimization objective. This construction ensures that when $J^i(e)>0$, both denominators are nonzero since mass is transported from vertex $v$ at time $i-1$ to vertex $w$ at time $i$. This slight asymmetry allows the boundary distributions to contain zero values and vanishes for large $k$. If symmetry is desired, the forward and backward discretizations can be averaged.
This optimization problem has $O(k(|V|+|E|))$ variables. One advantage of such a formulation over the $O(|V|^2)$ linear programming approach is that additional edge-based pruning strategies can be applied to reduce the number of variables when we are confident certain edges will not figure into the computation. For instance, we can first compute the one-Wasserstein distance using network flow techniques and prune the edges that are not used by the resulting flow. This enhances efficiency when distributions are concentrated in the same region of the graph or have low entropy.
\section{Experiments}\label{sec:experiments}
\subsection{Convergence}
\begin{figure}[t]\centering
\begin{tabular}{cc}
\begin{tikzpicture}
\begin{axis}[
xlabel={\footnotesize $d(v,w)$},
ylabel={\footnotesize $\overline\W_k(\delta_v,\delta_w)$},
legend entries={$y=x$,$k=10$,$k=25$,$k=50$,$k=100$},
legend style={cells={anchor=west},legend pos=north west,font=\tiny},
width=.48\columnwidth,
height=.35\columnwidth,
every axis/.append style={font=\footnotesize},
xlabel near ticks,
ylabel near ticks,
line cap = round,
line join = round,
xlabel shift = -.08in,
ylabel shift = -.08in
]
\addplot[black,samples=300,very thick,dashed,domain=0:100]{x};
\addplot[mark=none,color=blue,very thick] table {figures/deltaconvergence/k10.dat};
\addplot[mark=none,color=green,very thick] table {figures/deltaconvergence/k25.dat};
\addplot[mark=none,color=red,very thick] table {figures/deltaconvergence/k50.dat};
\addplot[mark=none,color=purple,very thick] table {figures/deltaconvergence/k100.dat};
\end{axis}
\end{tikzpicture}
&
\begin{tikzpicture}
\tikzset{every mark/.append style={scale=.5}}
\begin{axis}[
xlabel={\footnotesize $k$},
ylabel={\footnotesize $\overline\W_k(p_0,p_1)/ \overline\W_{30}(p_0,p_1)$},
legend entries={$y=1$,$H=0$,$H=1.20$,$H=2.25$,$H=3.15$},
legend style={cells={anchor=west},legend pos=north east,font=\tiny},
width=.48\columnwidth,
height=.35\columnwidth,
every axis/.append style={font=\footnotesize},
xlabel near ticks,
ylabel near ticks,
line cap = round,
line join = round,
xlabel shift = -.08in,
ylabel shift = -.08in
]
\addplot[black,samples=300,very thick,dashed,domain=4:20]{1};
\addplot[mark=*,color=blue,very thick] table {figures/entropyconvergence/entropy0.dat};
\addplot[mark=*,color=green,very thick] table {figures/entropyconvergence/entropy1.11986.dat};
\addplot[mark=*,color=red,very thick] table {figures/entropyconvergence/entropy2.25171.dat};
\addplot[mark=*,color=purple,very thick] table {figures/entropyconvergence/entropy3.15293.dat};
\end{axis}
\end{tikzpicture}
\end{tabular}\vspace{-.15in}
\caption{Convergence of the discrete-time approximation. (left) Graph distance between nodes $v$ and $w$ versus distance between indicator functions $\delta_v$ and $\delta_w$; (right) convergence of the approximation $\overline\W_k$ for different values of the average entropy $H$ between $p_0$ and $p_1$.}\label{fig:convergence}
\end{figure}
Figure~\ref{fig:convergence} illustrates the convergence of our approximation $\overline\W_k$ for increasing $k$.
On the left, we measure $\overline\W_k$ between the indicators of two vertices $v,w\in V$ as in Proposition~\ref{prop:bound}. We plot the relationship between graph distance and $\overline\W_k$ for various $k$. As $k$ increases, $\overline\W_k(\delta_v,\delta_w)$ better approximates $d(v,w)$. This suggests a commonality with traditional transportation distances, which satisfy the diagonal relationship exactly, and shows that our bound in Proposition~\ref{prop:bound} likely could be tightened.
On the right, we plot $\overline\W_k$ as $k$ increases for assorted pairs $p_0,p_1\in\Prob(G)$. In this experiment, we measure distances after perturbing the indicators of the two farthest vertices of a thirty-vertex line graph with uniform noise. We plot convergence for distances for different levels of entropy in $p_0$ and $p_1$; our approximation succeeds even with low $k$ when the entropies of the distributions are high. This observation reflects the fact that transportation between high-entropy distributions is less likely to move mass large distances, since mass exists all over $G$.
\subsection{Displacement Interpolation}\label{sec:displacement_interpolation}
One well-understood phenomenon associated with the continuous two-Wasserstein distance $\W_2$ is \emph{displacement interpolation}, introduced by McCann in~\cite{mccann-1997}. Displacement interpolation can be thought of as the characterization of the ``path'' of distributions $\rho(t,\cdot)$ from $\rho_0$ to $\rho_1$ suggested in the flow-based formulation~\eqref{eq:bb}. This path is well-understood and isotropic in the $\W_2$ case, whereas for $\W_1$ the corresponding path is the trivial construction $\rho(t,x)=(1-t)\rho_0(x)+t\rho_1(x)$~\cite{villani-2003}.
Displacement interpolation largely is understood as a continuous phenomenon on connected rather than discrete domains. Even in the quadratic case~\eqref{eq:pairwise}, there does not appear to be a well-characterized flow of probability over time explaining the optimal matching $T(\cdot,\cdot).$ Contrastingly, our distance $\overline\W$ exhibits displacement interpolation phenomena by construction.
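For intuition, displacement interpolation has a closed form on the real line, where monotone rearrangement of samples gives the optimal map; the following one-dimensional sketch (an illustration of the continuous phenomenon, not the paper's graph-based construction) contrasts it with the trivial mixing path:

```python
import numpy as np

def displacement_interp(x0, x1, t):
    # 1-D W2 displacement interpolant between two empirical measures with
    # equally many samples: linearly interpolate the sorted samples
    # (the monotone map is the optimal transport map on the real line)
    return (1 - t) * np.sort(x0) + t * np.sort(x1)

x0 = np.array([0.0, 1.0, 2.0])     # mass near the origin
x1 = np.array([8.0, 9.0, 10.0])    # mass far to the right
mid = displacement_interp(x0, x1, 0.5)
print(mid)  # → [4. 5. 6.]: mass travels across the domain

# the trivial path p(t) = (1-t)p0 + t p1 instead mixes the endpoint
# measures in place, "teleporting" mass rather than moving it
```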
\begin{figure}
\begin{tabular}{r@{}c@{}c@{}c@{}c@{}c}
\rotatebox{90}{\tiny\hspace{.09in}Trivial}&
\includegraphics[width=0.2\textwidth]{usadisplacement/degenerate1.pdf}&
\includegraphics[width=0.2\textwidth]{usadisplacement/degenerate13.pdf}&
\includegraphics[width=0.2\textwidth]{usadisplacement/degenerate25.pdf}&
\includegraphics[width=0.2\textwidth]{usadisplacement/degenerate38.pdf}&
\includegraphics[width=0.2\textwidth]{usadisplacement/degenerate51.pdf} \\
\rotatebox{90}{\tiny \hspace{.15in}$\overline\W$}&
\includegraphics[width=0.2\textwidth]{usadisplacement/displacement1.pdf}&
\includegraphics[width=0.2\textwidth]{usadisplacement/displacement13.pdf}&
\includegraphics[width=0.2\textwidth]{usadisplacement/displacement25.pdf}&
\includegraphics[width=0.2\textwidth]{usadisplacement/displacement38.pdf}&
\includegraphics[width=0.2\textwidth]{usadisplacement/displacement51.pdf} \\
& $t=0$ & $t=\frac{1}{4}$ & $t=\frac{1}{2}$ & $t=\frac{3}{4}$ & $t=1$
\end{tabular}
\vspace{-.1in}
\caption{Displacement interpolation on a geometric graph. The one-Wasserstein distance $\W_1$ is explained by the trivial path $p(t,v)=(1-t)p_0+tp_1$ (top), while $\overline\W$ generates displacement interpolation phenomena in which the distribution moves across the map as a function of $t$.}\label{fig:displacement_interpolation}
\end{figure}
Figure~\ref{fig:displacement_interpolation} illustrates displacement interpolation computed between two distributions over a geometric graph of the United States ($|V|=1113, |E|=4058, k=50$). Whereas the trivial interpolation ``teleports'' mass from one distribution to the other, the time-varying sequence of distributions shows mass moving continuously along the domain.
\subsection{Comparison with $\W_1$}
\begin{figure}\centering
\begin{tabular}{c|ccc}
\includegraphics[width=0.22\textwidth]{w1comparison/no_noise/rho.pdf}&
\includegraphics[width=0.22\textwidth]{w1comparison/no_noise/euclidean1.pdf}&
\includegraphics[width=0.22\textwidth]{w1comparison/no_noise/w1.pdf}&
\includegraphics[width=0.22\textwidth]{w1comparison/no_noise/w2.pdf}\\
\includegraphics[width=0.22\textwidth]{w1comparison/noise/rho.pdf}&
\includegraphics[width=0.22\textwidth]{w1comparison/noise/euclidean1.pdf}&
\includegraphics[width=0.22\textwidth]{w1comparison/noise/w1.pdf}&
\includegraphics[width=0.22\textwidth]{w1comparison/noise/w2.pdf}\\
\includegraphics[width=0.22\textwidth]{w1comparison/perturbed/rho.pdf}&
\includegraphics[width=0.22\textwidth]{w1comparison/perturbed/euclidean1.pdf}&
\includegraphics[width=0.22\textwidth]{w1comparison/perturbed/w1.pdf}&
\includegraphics[width=0.22\textwidth]{w1comparison/perturbed/w2.pdf}\\
$p(v)$ & $\|p-\delta_v\|_1$ & $\W_1(p,\delta_v)$ & $\overline\W_{20}(p,\delta_v)$
\end{tabular}
\vspace{-.1in}
\caption{Approximating $p$ (left) with the indicator $\delta_v$ of a single vertex $v\in V$. The distribution $p(v)$ for three experiments is on the left, colored from black ($p(v)=0$) to white ($p(v)=1$); probability is concentrated on the circles uniformly (top), with noise (middle), or with a perturbation on a single vertex (bottom). Vertices $v$ on the right are colored by $d(p,\delta_v)$ for distance $d$ on the bottom; all $v\in V$ minimizing $d(p,\delta_v)$ are marked in blue.}\label{fig:w1}\vspace{-.2in}
\end{figure}
When the ground distance is the length of the shortest path between vertices, earth mover's distances on a graph are easily computed using multi-commodity network flow; see~\cite{hitchcock-1941} for an early reference. In particular, for $p,q\in\Prob(G)$, $\W_1(p,q)$ is the minimum of $\sum_{e\in E} |J(e)|$ such that $D^\top J=q-p.$ Hence, we should justify the expense of minimizing~\eqref{eq:discretization}, even if it is cheaper than the na\"ive quadratic distance $\overline\W_{\mathrm{full}}$ in~\eqref{eq:pairwise}. As evidence beyond that in \S\ref{sec:displacement_interpolation}, we provide an additional experiment here.
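As a concrete illustration, the flow formulation of $\W_1$ above can be solved as a small linear program; the sketch below (the three-vertex path graph and the use of `scipy.optimize.linprog` are illustrative assumptions, not the paper's solver) splits the signed flow $J$ into nonnegative parts to linearize $\sum_{e\in E}|J(e)|$:

```python
import numpy as np
from scipy.optimize import linprog

def w1_graph(D, p, q):
    """One-Wasserstein distance on a graph: min sum_e |J(e)| s.t. D^T J = q - p,
    where D is the |E| x |V| signed incidence matrix."""
    E, V = D.shape
    # split J = Jp - Jm with Jp, Jm >= 0 to linearize the absolute value
    c = np.ones(2 * E)
    A_eq = np.hstack([D.T, -D.T])           # D^T (Jp - Jm) = q - p
    res = linprog(c, A_eq=A_eq, b_eq=q - p,
                  bounds=[(0, None)] * (2 * E), method="highs")
    return res.fun

# path graph 0-1-2 with unit-length edges
D = np.array([[-1.0, 1.0, 0.0],
              [0.0, -1.0, 1.0]])
p = np.array([1.0, 0.0, 0.0])   # all mass on vertex 0
q = np.array([0.0, 0.0, 1.0])   # all mass on vertex 2
print(w1_graph(D, p, q))        # → 2.0 (mass travels two unit edges)
```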
Figure~\ref{fig:w1} shows an example constructed to compare $\overline\W$ to $\W_1$ ($|V|=120, |E|=119, k=20$). Suppose we wish to approximate a distribution $p\in\Prob(G)$ with the indicator $\delta_v$ of a single vertex $v\in V$ by minimizing a probabilistic distance. Here, $G$ consists of a line of vertices connecting two radial fans, and $p$ is uniform on the fans with zero probability along the line. $\W_1(p,\delta_v)$ cannot distinguish between the vertices $v$ on the line, and the resulting minimizer $v$ is unstable to perturbations of $p$. The $L_1$ distance $\|p-\delta_v\|_1$ is nearly minimized by any $v$ with mass on it. $\overline\W$, however, stably chooses $v$ to be the center of the line even in the presence of noise, and the distance function $f(v)\equiv\overline\W(p,\delta_v)$ clearly distinguishes between the vertices of $G$.
Intuition for this test comes from comparing the least squares problem $\min_x (\|x-x_1\|_2^2+\|x-x_2\|_2^2)$ to the geometric median problem $\min_x (\|x-x_1\|_2+\|x-x_2\|_2)$ on $\R^n$. The former is differentiable with minimizer $\nicefrac{1}{2}(x_1+x_2)$, while the latter is minimized by \emph{any} $x$ on the segment from $x_1$ to $x_2$.
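This distinction is easy to verify numerically; a minimal sketch (one-dimensional points chosen purely for illustration):

```python
import numpy as np

x1, x2 = np.array([0.0]), np.array([1.0])

def least_squares(x):   # ||x-x1||^2 + ||x-x2||^2: unique minimizer (x1+x2)/2
    return np.sum((x - x1) ** 2) + np.sum((x - x2) ** 2)

def median_obj(x):      # ||x-x1|| + ||x-x2||: minimized by ANY point on the segment
    return np.linalg.norm(x - x1) + np.linalg.norm(x - x2)

# the non-squared objective is flat along the whole segment [x1, x2]...
for t in [0.0, 0.25, 0.5, 1.0]:
    assert abs(median_obj((1 - t) * x1 + t * x2) - 1.0) < 1e-12

# ...while the squared objective strictly prefers the midpoint
assert least_squares(np.array([0.5])) < least_squares(np.array([0.25]))
```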
\begin{wraptable}{r}{.47\textwidth}\centerin
\vspace{-.15in}
\begin{tabular}{|>{\centering\arraybackslash}m{.3in}|>{\centering\arraybackslash}m{.6in}|>{\centering\arraybackslash}m{.6in}|}\hline
\multirow{2}{*}{$\bm n$} & \multicolumn{2}{c|}{\textbf{\% recovered}}\\\cline{2-3}
& \vspace{.02in}$\bm{\|\cdot\|_1}$ & \vspace{.02in}$\bm{\overline\W}$\\\hhline{|=|=|=|}
\vspace{.02in}10 & \vspace{.02in}64.4\% & \vspace{.02in}64.1\%\\
20 & 73.5\% & 75.7\%\\
30 & 79.4\% & 84.0\%\\
40 & 86.5\% & 89.4\%\\
50 & 89.9\% & 91.9\%\\\hline
\end{tabular}
\caption{Results of shape retrieval task showing the average percentage of shapes in the same category as the query retrieved within the first $n$ results (109 experiments).}\vspace{-.2in}\label{fig:bow}
\end{wraptable}
\subsection{Bag-of-Words Shape Retrieval}
One application of distributions with ground distances is the processing of ``bag-of-words'' models. Suppose we have a collection of objects with local descriptors, e.g.\ images with per-pixel features. We can cluster all descriptors from all the domains to yield a sampling of descriptor space; then, each object in the collection can be described as the distribution of how many of its local descriptors have each cluster center as its closest point. This strategy provides an effective, simple approach to clustering, search, and other problems~\cite{sivic-2005}.
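A minimal sketch of this pipeline, with synthetic descriptors and a small hand-rolled k-means standing in for a real clustering library (all names, sizes, and data here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=50):
    # simple Lloyd iterations; centers initialized from the data
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

def bag_of_words(descriptors, centers):
    # assign each local descriptor to its nearest cluster center,
    # then normalize the histogram into a distribution over centers
    d = np.linalg.norm(descriptors[:, None, :] - centers[None, :, :], axis=2)
    hist = np.bincount(d.argmin(axis=1), minlength=len(centers)).astype(float)
    return hist / hist.sum()

# pool descriptors from all objects, cluster once, then describe one object
all_desc = rng.normal(size=(200, 8))
centers, _ = kmeans(all_desc, k=16)
p = bag_of_words(all_desc[:100], centers)   # this object's histogram
assert abs(p.sum() - 1.0) < 1e-9
```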
An underlying issue with this model, however, is that the histogram bins are related, especially if the number of clusters is large. That is, local features may have multiple near-closest cluster centers and could be assigned to different bins equally well. $L_p$ distances between bag-of-words histograms neglect this structure and hence \emph{degrade} as the number of cluster centers---and correspondingly the sampling of descriptor space---increases beyond a certain point. Thus, transportation distances may have better performance for such tasks, as proposed in~\cite{jiang-2008,marinai-2011}.
Using extrinsic distance in descriptor space as the transportation ground distance, however, neglects the fact that the set of descriptors may exhibit manifold structure along which distances should be measured intrinsically. For this reason, rather than measuring Euclidean distance between the cluster centers for the ground distance, we propose using transportation along a graph in which the clusters are linked locally.
Table~\ref{fig:bow} illustrates an experiment for three-dimensional shape retrieval on a database of 109 organic three-dimensional shapes from~\cite{giorgi-2007} classified into 10 categories. We describe each shape using the bag-of-words method above on wave kernel signature (WKS) descriptors~\cite{aubry-2011} grouped into 1000 clusters; we connect the cluster centers into a connected graph by augmenting the Euclidean minimum spanning tree in WKS space with each center's two nearest neighbors ($|V|=1000, |E|=1668, k=10$). Compared to using the $L_1$ norm $\|p-q\|_1$ to compare $p,q\in\Prob(G)$, $\overline\W$ consistently achieves better within-class retrieval results using identical descriptors, beyond an initial set of shape matches well-distinguished by any distance.
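The graph construction described above can be sketched as follows; random points stand in for the WKS cluster centers, and standard `scipy` routines are assumed for the spanning tree and the connectivity check:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from scipy.spatial.distance import cdist

rng = np.random.default_rng(1)
centers = rng.normal(size=(50, 4))     # stand-ins for WKS cluster centers
W = cdist(centers, centers)            # pairwise Euclidean distances

# the Euclidean minimum spanning tree guarantees connectivity...
mst = minimum_spanning_tree(W).toarray()
adj = (mst + mst.T) > 0

# ...augmented with each center's two nearest neighbors for local linking
order = np.argsort(W, axis=1)
for i in range(len(centers)):
    for j in order[i, 1:3]:            # skip column 0 (the point itself)
        adj[i, j] = adj[j, i] = True

n_comp, _ = connected_components(adj.astype(float))
print(n_comp)  # → 1: the resulting graph is connected
```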
\section{Discussion and Conclusion}
Our transportation distance $\overline\W$ bridges the gap between continuous and discrete transportation. Computational techniques on discrete domains like graphs largely are limited to the \emph{one}-Wasserstein distance $\W_1$, which relates to maximum flow problems, but the use of non-squared ground distances creates non-uniqueness and other undesirable properties. Contrastingly, theoretical understanding of Wasserstein distances largely is limited to the \emph{two}-Wasserstein distance $\W_2$. Our distance $\overline\W$, however, has properties in common with $\W_2$ without a construction scaling quadratically in $|V|$.
The construction of $\overline\W$ provides a new avenue of research into the theory and practice of transportation distances on graphs. Most immediately, replacing our use of generic optimization tools with ADMM~\cite{boyd-2011,papadakis-2014} or another specialized technique may help scale $\overline\W$ to graphs like social networks and websites with millions of nodes. Alternatively, since $D^\top D$ is the Laplacian matrix of $G$, it may be possible to apply spectral techniques to approximate minima of~\eqref{eq:graph_geodesic} e.g.\ by decomposing $J=Da + B$ where $D^\top B=0$~\cite{chung-1994,jiang-2011}. More generally, our variational construction of $\overline\W$ by introducing a continuous time variable $t$ still led to a distance function over the finite-dimensional space $\Prob(G)$; this pattern may reveal a more general strategy for problems on graphs with more easily-understood continuous analogs.
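The identity $D^\top D = L$ mentioned above is easy to check directly; a minimal sketch on a triangle graph (the orientation of the edges in $D$ is an arbitrary choice and does not affect the product):

```python
import numpy as np

# signed incidence matrix D (|E| x |V|) for the triangle graph 0-1-2-0
D = np.array([[-1.0, 1.0, 0.0],
              [0.0, -1.0, 1.0],
              [1.0, 0.0, -1.0]])

A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)   # adjacency matrix
L = np.diag(A.sum(axis=1)) - A           # graph Laplacian: degree - adjacency

# D^T D has deg(v) on the diagonal and -1 for each edge (v, w)
assert np.allclose(D.T @ D, L)
```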
While our discussion has circulated largely around the construction of $\overline\W$ and some motivating examples, we foresee many practical applications of our machinery and larger optimizations in which it can play a key role. For instance, the distribution minimizing the sum of squared two-Wasserstein distances yields an effective strategy for computing a \emph{barycenter} summarizing a set of distributions~\cite{bruckstein-2012,cuturi-2013-2,bonneel-2014}; networks of such pairwise distances create effective strategies for semi-supervised learning~\cite{solomon-2014}. Similar strategies have been applied to construct techniques for clustering~\cite{wagner-2011} and matrix factorization~\cite{sandler-2011}. Optimal transportation also has had considerable influence on methods in vision~\cite{rubner-2000}, imaging~\cite{bonneel-2014}, graphics~\cite{solomon-2013}, shape processing~\cite{rabin-2010}, road networks~\cite{treleaven-2013}, and other application areas. With our graph-based treatment we hope to extend these benefits to other domains like documents with geometric embedding structure~\cite{mikolov-2013}.
\small{
\bibliographystyle{abbrv}
\section{Introduction}
Graphene is a two-dimensional carbon lattice that exhibits a wide range of unique electronic properties such as linear dispersion of charge carriers, Berry phase, and room-temperature quantum Hall effect~\cite{Rev1,Rev3}. Beyond the purely electronic properties, graphene has attracted a lot of attention in the field of nanophotonics, since it is both transparent and exhibits strong light-matter interaction~\cite{GR_Ph1,GR_Ph2}. Moreover, graphene supports the surface electromagnetic waves, analogous to surface plasmon polaritons in metallic and semiconductor films. The advantage of the graphene plasmon-polaritons over their metallic and semiconductor counterparts is that they can be efficiently controlled with the external gate voltage~\cite{Koppens_exp,Basovexp}. The attractive plasmonic properties of graphene nanostructures led to the emerging of the field of graphene nanoplasmonics~\cite{GraphPlasm_Grigorenko,GraphPlasm_Abajo}.
One of the topics in graphene plasmonics is the study of graphene \textit{metasurfaces}, arrays of graphene nanoislands that may exhibit optical properties differing significantly from those of two-dimensional graphene sheets~\cite{GraphSRR, GrapheneMM1,GrapheneMM2,GrapheneMM3, GrapheneSoukoulis, Engheta_Science}. For example, it has been shown that a two-dimensional array of graphene nanoislands can play the role of a perfect absorber for electromagnetic waves~\cite{Perfect_Absorption}. In the studies of graphene-based metasurfaces it is commonly assumed that the conductivity of individual graphene elements coincides with the conductivity of two-dimensional graphene. However, it is known from numerous studies that graphene nanopatterning can substantially alter the electronic band structure of graphene~\cite{GrNS_BD1,GrNS_BD2}. Specifically, in Ref.~\cite{GrNS_BD1} it was shown that one can open and efficiently control the width of the band gap by nanopatterning graphene. Altering the electronic band structure should lead to the modification of the AC conductivity and thus to the modification of the optical properties of the nanopatterned graphene.
\begin{figure}[!h]
\centerline{\includegraphics[width = 1.0\columnwidth]{figure1.eps}}
\caption{(Color online) (a) Geometry of the structure. (b,c) Structure of the array of tunnel-coupled zigzag (b) or armchair (c) graphene nanoribbons.}
\label{fig1}
\end{figure}
In Ref.~\cite{Abajo1} it was theoretically shown that quantum confinement effects start to play a substantial role in the optical response of patterned graphene when the characteristic size of the nanostructure elements becomes smaller than 10~nm.
Technically, the introduction of the superlattice to the graphene sheet at these scales can be realized by either etching the graphene sheet~\cite{etching}, introducing the periodic strain~\cite{Corrugation1,Corrugation2,Metallic_Substrate_Mask} or by introducing the periodic gate voltage~\cite{Superlattice_2_technology,Superlattice_3_technology}. Moreover, in Ref.~\cite{Self-organized_GMS} the possibility of molecular assembly of the arrays of several-atoms-thick nanoribbons was shown.
In our work we study the structures shown in Fig.~\ref{fig1}, which are essentially arrays of tunnel-coupled graphene nanoribbons. We demonstrate that the optical properties of these structures can be tailored by engineering the electronic band structure, and that such a system can be regarded as a uniaxial anisotropic medium both for electrons and for plasmon-polaritons. The optical properties of graphene nanoribbon arrays have been studied in a number of papers. Specifically, in~\cite{NMR} it has been shown that a nanoribbon array can play the role of an efficient polarizer for THz radiation due to its huge absorption anisotropy. However, in all of these papers the tunneling between the individual ribbons was not accounted for. Allowing for electron hopping between the ribbons results in a number of physical effects, such as the emergence of hyperbolic electronic bands from the edge states in the zigzag case and the overlap of electron and hole bands in the armchair case, which will be discussed below in detail.
The remainder of the paper is organized as follows. In section~\ref{sec1} we show the results on the electronic band structure and Fermi surface topology of the arrays of zigzag and armchair graphene nanoribbons. Section~\ref{sec2} presents the results on the conductivity spectra obtained with the Kubo formula. The conclusion of the paper is given in section~\ref{conc}. Finally, appendices~\ref{sec3} and~\ref{sec4} provide the detailed formalism for the calculation of the band structure and the conductivity, respectively.
\section{Band structure calculation\label{sec1}}
For both types of the ribbons edge the Hamiltonian can be written as~\cite{Theory_tight_binding}:
\begin{align}
\hat{H}=&-t\displaystyle\sum_{nearest} (\hat{a}^{\dag}_{\alpha} \hat{b}_{\beta}+ \hat{b}^{\dag}_{\beta} \hat{a}_{\alpha})-t^{'}\displaystyle\sum_{\substack{next \\ nearest}} (\hat{a}^{\dag}_{\alpha} \hat{a}_{\beta}+ \hat{b}^{\dag}_{\alpha} \hat{b}_{\beta}) - \nonumber \\
-&\tau\displaystyle\sum_{\substack{inter-\\ribbon}} (\hat{a}^{\dag}_{\alpha} \hat{b}_{\beta}+\hat{b}^{\dag}_{\beta} \hat{a}_{\alpha}), \label{Hamiltonian}
\end{align}
where the first term corresponds to the hopping of an electron between nearest atoms with the characteristic hopping energy $t=2.8$~eV, the second term to the next-nearest hopping with $t^{\prime}=0.2$~eV, and the last term corresponds to the tunneling between neighbouring ribbons with the tunneling energy $\tau$, which can be varied (see Fig.~\ref{fig1}). Generally, $\tau$ should decrease exponentially with the distance $\tilde{a}$ between the neighbouring ribbons: $\tau\approx t\exp(-2(\tilde{a}-a)/a)$~\cite{tunnel}, where $a$ is the interatomic distance. We note that as $\tilde{a}$ goes to infinity, the band structure and the conductivity should reduce to those of an individual nanoribbon. In the opposite limit $\tilde{a}=a$, the considered structure either converges to the conventional two-dimensional graphene case, as in Fig.~\ref{fig1}(b), or results in a more complicated structure different from two-dimensional graphene, as in Fig.~\ref{fig1}(c). In our calculations we have used $\tilde{a}=\sqrt{3}a$, which corresponds to $\tau=t^{\prime}$.
The indices $\alpha,\beta$ in Eq.~\eqref{Hamiltonian} each consist of three integers: the first one, $i$, is the number of the atom in the unit cell. The unit cells for the cases of armchair and zigzag ribbons are shown with dashed lines in Figs.~\ref{fig1}(b,c). Each unit cell contains $N$ atoms of each sublattice. The remaining two indices $(m,n)$ correspond to the position of the unit cell in the directions of $K_b$ and $k$, respectively (cf.\ Figs.~\ref{fig1}(b,c) for further details).
We then introduce Fourier transform of the annihilation and creation operators $a_{\alpha},a_{\alpha}^{\dagger}$:
\begin{align}
&a_{i,m,n}=\frac{1}{N_{c}^{1/2}}\sum_{k,K_b} a_{i}(k,K_b) e^{-i (K_b m L+k n a)}, \nonumber\label{afour}\\
&a^{\dag}_{i,m,n}=\frac{1}{N_{c}^{1/2}}\sum_{k,K_b} a^{\dag}_{i}(k,K_b) e^{i (K_b m L+k n a)}
\end{align}
where $N_c$ is the number of unit cells. The expressions for the operators $b$ are identical.
The wave functions of the structure are the linear combinations of the single atom eigenfunctions:
\begin{align}
\phi(k,K_b)=\displaystyle\sum_{i=1,N} \left[\mathcal{A}_i a_i^{\dagger}(k,K_b)|0\rangle +\mathcal{B}_i b_i^{\dagger}(k,K_b)|0\rangle \right],
\end{align}
and thus can be represented as a $2N$ vector of the expansion coefficients $\mathcal{A,B}$:
\begin{align}
\phi = \left(\mathcal{A}_1 \ldots \mathcal{A}_N,\mathcal{B}_1\ldots \mathcal{B}_N\right)^T.
\end{align}
If we substitute the expressions~\eqref{afour} into the Hamiltonian~\eqref{Hamiltonian}, we obtain $H=\displaystyle\sum_{k,K_b}H(k,K_b)$, where $H(k,K_b)$ can be represented as a $2N\times 2N$ matrix in the basis of the single-atom eigenfunctions:
\begin{align}
H=\begin{pmatrix} H_{AA} & H_{AB} \\ H_{BA} & H_{BB} \end{pmatrix}. \label{hmat}
\end{align}
Explicit form of the matrix depends on the type of the ribbon edge and is presented in appendix~\ref{sec3} for both types of the edge, here we just note that due to the requirement of Hermiticity the condition $H_{BA}=H_{AB}^{\dagger}$ holds.
The problem is then reduced to the eigenvalue problem for a $2N\times 2N$ matrix, $H(k,K_b)\phi=\epsilon(k,K_b)\phi$, which has been solved numerically in order to obtain the band structures and Fermi surfaces. For the case of zigzag ribbons $N=10$, and for the case of armchair ribbons $N=11$. The results are presented in Figs.~\ref{fig2} and \ref{fig3}.
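As an illustration of this workflow, the same diagonalization procedure applied to the standard two-band Bloch Hamiltonian of two-dimensional graphene (a minimal nearest-neighbour stand-in for the $2N\times 2N$ matrices of appendix~\ref{sec3}) recovers the familiar $\pm 3t$ energies at the $\Gamma$ point and the band touching at the Dirac point:

```python
import numpy as np

t, a = 2.8, 1.0                            # hopping (eV) and lattice scale
a1 = a * np.array([1.5, np.sqrt(3) / 2])   # graphene lattice vectors
a2 = a * np.array([1.5, -np.sqrt(3) / 2])

def bands(k):
    """Eigenvalues of the 2x2 Bloch Hamiltonian (nearest neighbours only);
    the paper's 2N x 2N H(k, K_b) is diagonalized in exactly the same way."""
    g = 1 + np.exp(1j * (k @ a1)) + np.exp(1j * (k @ a2))
    H = np.array([[0, -t * g], [-t * np.conj(g), 0]])
    return np.linalg.eigvalsh(H)           # ascending eigenvalues +/- t|g|

K = np.array([2 * np.pi / (3 * a), 2 * np.pi / (3 * np.sqrt(3) * a)])
print(bands(np.zeros(2)))   # +/- 3t at the Gamma point (+/- 8.4 eV here)
print(bands(K))             # both eigenvalues ~ 0: the Dirac point
```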
\begin{figure}[!h]
\centerline{\includegraphics[width = 1.0\columnwidth]{figure2.eps}}
\caption{(Color online) Electronic band structure of the array of tunnel coupled zigzag nanoribbons. (b) Contour plots of the Fermi surfaces of the structure. The numbers at the curves define the corresponding electron energy.}
\label{fig2}
\end{figure}
We first consider the zigzag edge case shown in Fig.~\ref{fig2}. We note that, due to the inclusion of the next-nearest-neighbour hopping, the electron-hole symmetry is broken and the electron and hole bands touch at a nonzero energy~\cite{RMP}. It can be seen that the individual branches of the single-ribbon band diagram evolve into bands. Moreover, the degeneracy of the two edge states is lifted, and we observe two distinct bands formed by the tunnel-coupled edge states. These bands resemble the plasmonic bands existing in layered metal-dielectric structures~\cite{Orlov}, which originate from the photon tunneling between individual surface plasmon polaritons propagating along the metal-dielectric interfaces. It is these plasmonic bands that form the hyperbolic isofrequency contours and make the layered metal-dielectric structures the most conventional realization of a hyperbolic medium~\cite{Hyprev}. Coupled waveguide arrays have been shown to exhibit hyperbolic media properties both for electromagnetic~\cite{fishnet1,fishnet2} and acoustic~\cite{acoustic1} waves. As can be seen in Fig.~\ref{fig2}(b), where the energy contour plots are presented, a similar situation takes place for the electrons in coupled zigzag ribbons. At the energy where the electron and hole bands touch (equal to $3t^{\prime}\approx 0.6$~eV) the contour plots reduce to single points at the touching position. Then, as we shift to the region where the band formed by the edge states exists, we observe hyperbolic-like isofrequency contours. Thus, if the chemical potential is tuned such that the Fermi energy falls into the narrow region of these bands, the electrons at the Fermi surface propagate in an effective hyperbolic medium.
As long as effectively ballistic propagation of electrons is considered, negative refraction and partial focusing of electron waves should be observed at the interface between two-dimensional graphene and the patterned graphene structure, just as for electromagnetic waves~\cite{SmithPartial}. The hyperbolic isofrequency contours also lead to an enhanced density of electronic states compared to two-dimensional graphene at the same energy. We also note that at certain energies the Fermi contour reduces to parallel lines, and the two-dimensional structure effectively behaves as a one-dimensional one. As is known~\cite{Pairing}, in the vicinity of such points the electron-electron interactions are substantially modified; the inclusion of the Hubbard repulsion in our tight-binding model is the subject of future work. The advantage of graphene-based structures is that the Fermi level can be fine-tuned with the external gate voltage, such that it falls into the region characterized by the hyperbolic isofrequency contour.
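The qualitative difference between closed and open isofrequency contours can be seen already for toy quadratic dispersions with equal-sign versus opposite-sign effective masses (a purely illustrative sketch, not the computed band structure):

```python
import numpy as np

# toy dispersions: E = kx^2 + kz^2 (elliptic) vs E = kx^2 - kz^2 (hyperbolic)
E = 1.0

# on the closed contour kx^2 + kz^2 = E, kx shrinks as kz grows and the
# contour terminates at |kz| = 1...
kz_closed = np.linspace(0.0, 0.99, 100)
kx_elliptic = np.sqrt(E - kz_closed**2)

# ...while on the open contour kx^2 - kz^2 = E, kx is defined for every kz
# and grows without bound, which enhances the density of states
kz_open = np.linspace(0.0, 10.0, 100)
kx_hyperbolic = np.sqrt(E + kz_open**2)

assert kx_elliptic[-1] < kx_elliptic[0]       # closed: kx decreases with kz
assert kx_hyperbolic[-1] > kx_hyperbolic[0]   # open: arbitrarily large |k|
```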
The band diagram and the Fermi surface contour plots for the case of an armchair ribbons are shown in Fig.\ref{fig3}.
\begin{figure}[!h]
\centerline{\includegraphics[width = 1.0\columnwidth]{figure3.eps}}
\caption{(Color online) Electronic band structure of the array of tunnel coupled armchair nanoribbons. (b) Contour plots of the Fermi surfaces of the structure. Blue (red) curves correspond to electron (hole) branches. The numbers on the curves define the corresponding electron energy. }
\label{fig3}
\end{figure}
For the case of armchair ribbons, the Dirac cones exist only if the number of atoms in the unit cell is $N=3m+2$, where $m$ is an integer~\cite{RMP}. In the considered structure $N=11$, and thus the Dirac cones are preserved. It can be seen that the coupling leads to an overlap of the electron and hole bands; as shown in Fig.~\ref{fig3}(b), at certain Fermi energies the Fermi surface therefore contains both electron and hole pockets.
We now move to the calculation of the AC conductivity of the considered structures.
\section{Conductivity tensor.\label{sec2}}
When the eigenvalues and eigenvectors are obtained, it is straightforward to compute the AC conductivity tensor of the structure using the Kubo formalism. The conductivity tensor is given by~\cite{Bruuc}
\begin{align}
\sigma_{\alpha \alpha}=\frac{\hbar}{iS} \sum_{s', s} \sum_{\substack{n,m\\k,K_{b}}} \frac
{\left( f[\varepsilon_{m}^{s'}]-f[\varepsilon_{n}^{s}] \right) \left|\langle \phi_{m}^{s'} |-ev_{\alpha}| \phi_{n}^{s} \rangle \right|^2}
{( \varepsilon_{m}^{s'}-\varepsilon_{n}^{s} ) ( \varepsilon_{m}^{s'}-\varepsilon_{n}^{s} +\hbar \omega+i\delta)}
\end{align}
The velocity operator $\mathbf{v}$ is defined through $-e\mathbf{v}/c=\partial H(\boldsymbol{A})/ \partial \boldsymbol{A} $, where $ H(\boldsymbol{A})$
is derived from the tight-binding model via Peierls substitution:
\begin{align}
t_{\boldsymbol{R},\boldsymbol{R'}} \to t_{\boldsymbol{R},\boldsymbol{R'}} e^{\frac{ie}{\hbar c}\int_{\boldsymbol{R}}^{\boldsymbol{R'}} \boldsymbol{A} d \boldsymbol{r} } =
t_{\boldsymbol{R},\boldsymbol{R'}} e^{\frac{ie}{\hbar c} (\boldsymbol{R'}-\boldsymbol{R}) \boldsymbol{A}}.
\end{align}
The latter equality is made within the approximation that the vector potential is constant at the scale of single unit cell. The velocity operator then can be presented as an $2N\times 2N$ matrix of the form:
\begin{align}
-e\mathbf{v}=-e\begin{pmatrix}\mathbf{v}_{AA} & \mathbf{v}_{AB} \\ \mathbf{v}_{BA} & \mathbf{v}_{BB}\end{pmatrix}.
\end{align}
Due to the requirement of the hermiticity of the velocity operator, we set $\mathbf{v}_{AB}=\mathbf{v}_{BA}^{\dagger}$ and within the nearest neighbour approximation $\mathbf{v}_{AA}=\mathbf{v}_{BB}=0$.
The explicit form of $\mathbf{v}_{AB}$ depends on the type of the edge and is presented in appendix~\ref{sec4}. The results are shown in Figs.~\ref{fig4}(a,b). The temperature was set to $300$~K and the chemical potential was equal to $3t^{\prime}$. The conductivity is normalized to the bare graphene conductivity $\sigma_0=e^2/h$.
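To illustrate how such a Kubo sum is evaluated in practice, the following sketch computes the interband conductivity of a two-band SSH-like chain at zero temperature (the hoppings $v$, $w$ and broadening $\eta$ are illustrative choices, not the ribbon parameters; the paper's $2N$-band sum is evaluated in the same way):

```python
import numpy as np

v, w, eta = 1.0, 0.5, 0.05           # intra/inter-cell hopping, broadening
ks = np.linspace(-np.pi, np.pi, 2001, endpoint=False)

def sigma(omega):
    """Kubo conductivity of a two-band chain, chemical potential in the gap."""
    s = 0.0 + 0.0j
    for k in ks:
        g = v + w * np.exp(1j * k)
        H = np.array([[0, g], [np.conj(g), 0]])
        V = np.array([[0, 1j * w * np.exp(1j * k)],
                      [-1j * w * np.exp(-1j * k), 0]])   # V = dH/dk
        eps, U = np.linalg.eigh(H)
        Vmn = U.conj().T @ V @ U      # velocity matrix elements in band basis
        f = (eps < 0).astype(float)   # T = 0: lower band filled
        for m in range(2):
            for n in range(2):
                if m == n:
                    continue
                s += (f[m] - f[n]) * abs(Vmn[m, n]) ** 2 / (
                    (eps[m] - eps[n]) * (eps[m] - eps[n] + omega + 1j * eta))
    return s / (1j * len(ks))

gap = 2 * abs(v - w)                  # minimum direct gap (at k = pi)
# the real (absorptive) part sets in at the gap energy
print(sigma(gap + 0.1).real, sigma(0.3 * gap).real)
```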
\begin{figure}[!h]
\centerline{\includegraphics[width = 1.0\columnwidth]{figure4.eps}}
\caption{(Color online) Real and imaginary parts of the principal components of the conductivity tensor for the case of zigzag (a) and armchair (b) ribbon edge. Letters in (a) and (b) correspond to the transitions shown in Fig.\ref{fig2}(a) and Fig.\ref{fig3}(a) respectively.}
\label{fig4}
\end{figure}
For the case of zigzag ribbons, we observe strong absorption for the field polarized along the ribbons. This effect is due to the presence of the edge states and is also present in the conductivity spectra of individual zigzag ribbons~\cite{Single_Ribbon} and graphene nanodiscs~\cite{Single_Disc}. However, due to the lifting of the degeneracy between the edge states, an additional absorption channel between the two edge-state bands appears, which results in the broad absorption resonance for the transverse-polarized electric field, marked with letter A in Fig.~\ref{fig4}(a). In the low-frequency region, the conductivity has a Drude-like shape for both electric field polarizations. In the higher-frequency region the conductivity is determined by the interband transitions shown in Fig.~\ref{fig2}(a). Namely, the transitions between the second hole band and the first electron band, and between the first electron band and the second hole band, result in two distinct absorption peaks in the $xx$ conductivity tensor component (marked with letters C, D) and one peak in the $zz$ component (marked with B). Transitions between the first hole and third electron bands and between the first electron and third hole bands result in the higher-frequency absorption peak in both conductivity components (marked with letters E, F).
For the case of armchair ribbons shown in Fig.~\ref{fig4}(b), it is worth noting that the interband absorption is almost fully suppressed for the transitions between the first electron and hole bands. This is quite different from the case of conventional graphene, which is characterized by broadband absorption caused by interband transitions. The suppression of interband absorption caused by the Fermi surface topology has been studied previously~\cite{suppr}. A detailed study of the interband absorption suppression in the considered geometry is the subject of ongoing work. In the higher-frequency region the absorption peaks are defined by the interband absorption between the higher electron and hole bands. We note that for the $xx$ component of the conductivity the absorption peak is step-like, similar to two-dimensional systems, while the $zz$ conductivity has peaks similar to the Van Hove singularities in one-dimensional systems. For electromagnetic applications, it is important that in a certain frequency region the absorption is suppressed and the imaginary parts of the conductivity tensor components have opposite signs, which suggests that a high-quality anisotropic metasurface can be realized based on the considered structure.
We should also note that the obtained conductivity tensors are nonlocal, i.e., the conductivity depends on the in-plane momentum of the electromagnetic wave. In Fig.~\ref{fig5} the real part of the conductivity of both zigzag and armchair ribbons is presented for different values of $K_b D$ and compared to the conductivity of two-dimensional graphene. A substantial modification of the real part of the conductivity is observed due to the opening of new absorption channels via indirect interband transitions.
\begin{figure}[!h]
\centerline{\includegraphics[width = 1.0\columnwidth]{figure5.eps}}
\caption{(Color online) Conductivity spectrum for the zigzag (a,b) and armchair (c) ribbons for different values of $K_bD$. The conductivity of two-dimensional graphene is shown for comparison.}
\label{fig5}
\end{figure}
\section{Conclusion and outlook.\label{conc}}
In this paper, we have analyzed the electronic band structure and conductivity spectrum of two-dimensional arrays of tunnel-coupled graphene nanoribbons. We have shown that both depend drastically on the type of the ribbon edges. Namely, for the case of zigzag edges the coupling of the electronic edge states lifts the degeneracy between the symmetric and antisymmetric electronic eigenmodes formed by the edge states. Moreover, each of the individual modes evolves into a narrow band defined by hyperbolic isofrequency surfaces. We have thus demonstrated a simple yet interesting analogy between the considered structures and multilayered metal-dielectric metamaterials, where the hyperbolic isofrequency surface for the electromagnetic field originates from the coupling between individual surface plasmon polariton modes. For the case of armchair ribbons, we have shown that the coupling leads to the overlap of electron and hole bands, resulting in the formation of hole and electron pockets in the Fermi surface of these structures in a certain doping range.
Furthermore, we have shown that these structures can be regarded as metasurfaces for electromagnetic waves in the optical to mid-IR frequency range~\cite{Metasurface_Review}. Specifically, we have demonstrated that both types of structures are characterized by opposite signs of the imaginary parts of the conductivity tensor components, together with low absorption levels in certain frequency ranges. Recently, a phenomenological study of the electromagnetic spectrum of such metasurfaces has been performed, showing that these structures can support a special type of Dyakonov-like hybrid surface electromagnetic waves~\cite{arx1}.
The modification of the electronic band structure as well as of the electronic eigenfunction profiles in nanostructured graphene should lead to a substantial modification of both electron-electron and electron-phonon interactions. Tailoring the electron-phonon scattering via electron band engineering may lead to a substantial modification of the plasmon scattering rates and thus of the optical response of these structures~\cite{Pathways}. A self-consistent calculation of the Coulomb self-energy as well as of the interaction-induced modification of the AC conductivity is the subject of future work. The ability to control the optical response of graphene metasurfaces via the modification of the electronic band structure through quantum confinement makes these structures a promising platform for novel optical to IR optoelectronic devices.
\section{Data used in Plots Generation}
In this appendix, we provide the detailed data that was used to generate the plots in Section~\ref{sec:experiments}.
\subsection{Effect of $\alpha$ and Seasons on Reward}
\label{sec:appendix-sim-days}
Table~\ref{tab:result-sim-days} provides the data used for generating Figure~\ref{fig:result-sim-days}.
\subsection{Comparison of Centralized vs Multi-zone Control}
\label{sec:appendix-multi-zone}
Table~\ref{tab:result-single-multi-zone} provides the data used for generating Figure~\ref{fig:result-multi-zone}.
\begin{table}[htbp]
\caption{Table comparing centralized vs multi-zone control on energy consumption, mean temperature violation, and reward as $\alpha$ is varied.}
\label{tab:result-single-multi-zone}
\begin{tabular}{ccp{0.6in}p{0.6in}c}
\toprule
Control type & $\alpha$ & Energy consumption (MWh) & Mean temperature violation ($\degree$C) & Reward\\
\midrule
\multirow{9}{*}{Centralized} & 0.001 & 265.5 & 1.422 & -0.073\\
& 0.01 & 266.2 & 1.287 & -0.075\\
& 0.1 & 264.8 & 1.395 & -0.098\\
& 0.5 & 264.8 & 1.395 & -0.298\\
& 1 & 274.4 & 1.14 & -0.276\\
& 5 & 265.4 & 0.568 & -0.851\\
& 10 & 272.6 & 0.2131 & -0.4215\\
& 100 & 276.9 & 0.497 & -5.492\\
& 1000 & 289 & 0.02 & -2.053\\
\midrule
\multirow{9}{*}{Multi-zone} & 0.001 & 258.4 & 0.943 & -0.0709\\
& 0.01 & 258.8 & 0.94 & -0.0721\\
& 0.1 & 259.1 & 0.998 & -0.0861\\
& 0.5 & 268.5 & 0.916 & -0.121\\
& 1 & 258.6 & 0.846 & -0.2943\\
& 5 & 263.2 & 0.783 & -0.767\\
& 10 & 276.5 & 0.3571 & -0.6893\\
& 100 & 301.9 & 0.028 & -0.3318\\
& 1000 & 279.2 & 0.0261 & -2.875\\
\bottomrule
\end{tabular}
\end{table}
\subsection{Impact of Weather on HVAC Optimization}
\label{sec:appendix-weather}
Table~\ref{tab:result-weather} provides the data used for generating Figure~\ref{fig:result-weather}. The following are the full names of the locations as found on the NREL website.
\begin{itemize}
\item CAN\_ON\_Toronto.716240\_CWEC
\item USA\_FL\_Tampa.Intl.AP.722110\_TMY3
\item USA\_CA\_San.Francisco.Intl.AP.724940\_TMY3
\item USA\_CO\_Golden-NREL.724666\_TMY3
\item USA\_IL\_Chicago-OHare.Intl.AP.725300\_TMY3
\item USA\_VA\_Sterling-Washington.Dulles.Intl.AP.724030\_TMY3
\end{itemize}
\begin{table}[htbp]
\caption{Table showing the impact of external weather on energy consumption, mean temperature violation, and reward for six locations, along with summary weather statistics.}
\label{tab:result-weather}
\begin{tabular}{p{0.4in}p{0.25in}p{0.4in}p{0.4in}p{0.3in}p{0.35in}p{0.4in}}
\toprule
Location & Energy & Mean temperature violation ($\degree$C) & Reward & Average temperature ($\degree$C) & Humidity (\%) & Mean weather temperature deviation ($\degree$C) \\
\midrule
Toronto & 269.3 & 1.088 & -0.271 & 7.38 & 73.83 & 22.6\\
Golden, CO & 224.9 & 1.287 & -0.1544 & 9.7 & 53.67 & 26\\
Chicago, IL & 350.1 & 1.48 & -0.1929 & 9.9 & 70.33 & 20.99\\
Dulles, VA & 393.5 & 1.231 & -0.1901 & 12.6 & 69.08 & 24.8\\
San Francisco, CA & 227.3 & 1.032 & -0.1128 & 13.8 & 74.5 & 6.5\\
Tampa, FL & 777.1 & 0.742 & -0.233 & 22.3 & 75.33 & 10.35\\
\bottomrule
\end{tabular}
\end{table}
\subsection{Comparison of Single vs Multi-agent}
Table~\ref{tab:result-multi-agent} provides the data used for generating Figure~\ref{fig:result-multi-agent}.
\begin{table}[H]
\caption{Table comparing multi-zone vs. multi-agent policies on reward value, energy consumption, and mean temperature violation as $\alpha$ is varied.}
\label{tab:result-multi-agent}
\begin{tabular}{ccp{0.6in}p{0.6in}c}
\toprule
Control type & $\alpha$ & Energy consumption (MWh) & Mean temperature violation ($\degree$C) & Reward\\
\midrule
\multirow{9}{*}{Multi-zone} & 0.001 & 258.4 & 0.943 & -0.0709\\
& 0.01 & 258.8 & 0.94 & -0.0721\\
& 0.1 & 259.1 & 0.998 & -0.0861\\
& 0.5 & 268.5 & 0.916 & -0.121\\
& 1 & 258.6 & 0.846 & -0.2943\\
& 5 & 263.2 & 0.783 & -0.767\\
& 10 & 276.5 & 0.3571 & -0.6893\\
& 100 & 301.9 & 0.028 & -0.3318\\
& 1000 & 279.2 & 0.0261 & -2.875\\
\midrule
\multirow{9}{*}{Multi-agent} & 0.001 & 248.1 & 0.903 & -0.0702\\
& 0.01 & 253.6 & 0.889 & -0.0712\\
& 0.1 & 258.1 & 0.8642 & -0.0784\\
& 0.5 & 257.1 & 0.8413 & -0.1425\\
& 1 & 268.8 & 0.446 & -0.2355\\
& 5 & 258.7 & 0.4652 & -0.4036\\
& 10 & 263.2 & 0.51 & -0.6612\\
& 100 & 275.2 & 0.022 & -0.2841\\
& 1000 & 277.5 & 0.018 & -1.7213\\
\bottomrule
\end{tabular}
\end{table}
\section{Deep Reinforcement Learning for Building HVAC Control}
\label{sec:drl}
Reinforcement learning (RL) is a branch of machine learning specialized for solving control problems. Unlike classical control problems, where the action is usually based on immediate feedback, in many RL problems the optimal policy can depend on delayed feedback or may need to discount immediate feedback. An RL problem primarily consists of the following components:
\begin{itemize}
\item \textbf{State}: State is the mathematical description of the environment. For example, in the case of HVAC control of a building, state would represent the current temperatures and humidity in various zones of the building, outdoor environment temperature and humidity, heating and cooling loads, etc.
\item \textbf{Action}: Action represents the control taken on the environment. In the example of HVAC control, action could be setting the thermostat setpoints for each zone, fan speeds, etc.
\item \textbf{Agent}: The job of an agent is to compute an optimal action for a given state, under an optimal policy.
\item \textbf{Environment}: The environment is the system being controlled; one or more actions result in a state transition with associated observations. Each state transition has an associated transition probability that depends on the current state and the action taken. Depending on the objective, a reward function computes the immediate reward for an action taken in a state.
\end{itemize}
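The components above map naturally onto an OpenAI-Gym-style interface. The following toy environment is purely illustrative (the single-zone thermal model and its constants are invented, not the paper's), but it shows how state, action, and reward fit together:

```python
# Purely illustrative toy environment (the zone model and constants are
# invented, not the paper's): one zone whose state is the indoor
# temperature, with a heating setpoint as the action.
class ToyHVACEnv:
    def __init__(self, outdoor_temp=5.0, comfort=(20.0, 25.0)):
        self.outdoor_temp = outdoor_temp
        self.comfort = comfort
        self.temp = 22.0  # state: indoor zone temperature (deg C)

    def step(self, setpoint):
        # Action: a heating setpoint. The zone drifts toward the
        # setpoint while losing heat to the outdoors; heating effort
        # stands in for energy use.
        effort = max(0.0, setpoint - self.temp)
        self.temp += 0.5 * effort - 0.1 * (self.temp - self.outdoor_temp)
        lo, hi = self.comfort
        violation = max(0.0, lo - self.temp) + max(0.0, self.temp - hi)
        reward = -(effort + violation)  # penalize energy and discomfort
        return self.temp, reward

env = ToyHVACEnv()
state, reward = env.step(setpoint=23.0)
```

A real setup replaces the one-line thermal model with an EnergyPlus simulation step and widens the state to the full observation vector.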
The goal of an RL agent is to compute an optimal policy given a sequence of actions and subsequent observations. There are primarily two ways of achieving this goal:
\begin{itemize}
\item \textbf{Model-based RL}: Model-based RL is used in environments where the transition probability and reward functions are known. In this case, we can use policy or value iteration to find the optimal policy.
\item \textbf{Model-free RL}: In most environments, we do not know the exact characteristics of the environment. In that case, the controller needs to determine the optimal policy without modeling the environment. Policy gradient, value-based, and actor-critic methods are some of the common approaches used in model-free RL. We use the model-free approach in this work.
\end{itemize}
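To make the model-based case concrete: when the transition probabilities and rewards are known, value iteration recovers the optimal state values directly. The example below is a made-up 2-state, 2-action problem, not anything from the paper:

```python
# Toy model-based example: P[state][action] = [(prob, next_state, reward), ...]
# Action 1 always moves to state 1 and pays reward 1; action 0 pays nothing.
P = {
    0: {0: [(1.0, 0, 0.0)], 1: [(1.0, 1, 1.0)]},
    1: {0: [(1.0, 0, 0.0)], 1: [(1.0, 1, 1.0)]},
}

def value_iteration(P, gamma=0.9, iters=200):
    # Repeatedly apply the Bellman optimality backup until convergence.
    V = {s: 0.0 for s in P}
    for _ in range(iters):
        V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in outs)
                    for outs in P[s].values())
             for s in P}
    return V

values = value_iteration(P)  # both states approach 1 / (1 - 0.9) = 10
```

For the building-control problem no such table of transition probabilities is available, which is exactly why the model-free approach is used instead.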
\subsection{DRL algorithms}
\label{sec:drl_algos}
Many algorithms for DRL have been developed over the last few years. They can be divided based on several factors, e.g., whether the algorithm keeps improving the same policy (on-policy) or continuously explores new policies (off-policy); whether the algorithm supports continuous or discrete control; and whether the algorithm is gradient-based or derivative-free. In this section, we discuss some of the popular DRL algorithms that are also supported in RLlib.
PPO (Proximal Policy Optimization)~\cite{schulman:arxiv17-ppo} is a popular on-policy algorithm, which improves upon TRPO (Trust Region Policy Optimization)~\cite{schulman:arxiv17-trpo} by avoiding the Kullback–Leibler divergence as a hard constraint, instead adding it as a penalty, which simplifies the computation. DDPG~\cite{lillicrap:arxiv19} is another model-free, off-policy algorithm designed for learning continuous actions. DDPG uses experience replay and slow-learning target networks from DQN (Deep Q-Network)~\cite{mnih:arxiv13}, and it is based on DPG (Deterministic Policy Gradient)~\cite{silver:icml14}.
Advantage Actor-Critic (A2C) and Asynchronous Advantage Actor-Critic (A3C)~\cite{mnih:arxiv16} are actor-critic methods; A3C updates the gradients asynchronously, which improves training times. Soft Actor Critic (SAC)~\cite{haarnoja:arxiv19} is an off-policy actor-critic DRL algorithm based on the maximum entropy reinforcement learning framework, where the actor aims to maximize expected reward while also maximizing entropy. Ape-X~\cite{horgan:arxiv18} provides a distributed architecture for DRL. When combined with DDPG, it can be used to scale across multiple instances.
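PPO's trust-region idea can be sketched in a few lines. The clipped surrogate objective below is the standard form from the PPO paper (the KL-penalty variant mentioned above is an alternative formulation); the ratio is the probability of the action under the new policy divided by its probability under the old one:

```python
# PPO's clipped surrogate objective for a single (state, action) sample.
# ratio = pi_new(a|s) / pi_old(a|s); advantage estimates how much
# better the action was than the average action in that state.
def ppo_clip_objective(ratio, advantage, eps=0.2):
    # Clip the ratio into [1 - eps, 1 + eps] so a single update cannot
    # move the policy too far from the old one.
    clipped = min(max(ratio, 1.0 - eps), 1.0 + eps)
    # Taking the min makes the bound pessimistic in both directions.
    return min(ratio * advantage, clipped * advantage)

# A large ratio with positive advantage is capped at (1 + eps) * advantage:
obj = ppo_clip_objective(ratio=2.0, advantage=1.0)
```

In practice this objective is averaged over a batch and maximized by stochastic gradient ascent; RLlib handles that machinery internally.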
\subsection{Multi-zone HVAC Control}
A building usually has multiple zones that could be spread over multiple floors. In normal operation, the zone setpoints are held at constant values. Since the desired temperature is specified as a range, the zonal setpoints need not be held constant; they can be adjusted to match the external weather and reduce energy consumption. The setpoints for all zones could be set to the same value, or they could be set independently to accommodate temperature differentials between zones; e.g., core zones are less impacted by external weather than the perimeter zones. We will explore these options in detail in Section~\ref{sec:exp-multi-zone}.
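The two action layouts can be sketched as follows (zone names are taken from the reference building; the helper functions themselves are hypothetical, not the paper's code). A centralized policy emits one (heating, cooling) pair shared by every zone, while a multi-zone policy emits one pair per zone:

```python
# Illustrative action layouts for centralized vs multi-zone control.
ZONES = ["Core_bottom", "Perimeter_bot_ZN_1", "Perimeter_bot_ZN_2"]

def expand_centralized(action):
    # One (heating, cooling) pair applied identically to every zone.
    heating, cooling = action
    return {z: (heating, cooling) for z in ZONES}

def expand_multi_zone(action):
    # Flat action vector [h1, c1, h2, c2, ...], two values per zone.
    return {z: (action[2 * i], action[2 * i + 1])
            for i, z in enumerate(ZONES)}
```

The multi-zone layout multiplies the action dimension by the number of zones, which is the price paid for the extra flexibility evaluated in Section~\ref{sec:exp-multi-zone}.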
\subsection{Multi-agent}
In a multi-agent setting, multiple agents make control decisions on an environment. There are two ways of setting up a multi-agent problem:
\begin{itemize}
\item \textbf{Cooperative}: In this setting, agents share their observations and might have a common reward. The idea is to maximize the rewards of all agents.
\item \textbf{Competitive}: Here agents do not share their observations; each tries to maximize only its individual reward.
\end{itemize}
For our experiments, we consider the cooperative type of multi-agent setting. Specifically, we create two agents: one to control the heating setpoint and another to control the cooling setpoint. The intuition is that each agent can learn a separate model, so that the agent responsible for the heating setpoints can optimize for colder months, while the other agent can optimize for warmer months.
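The cooperative split can be sketched as below. The agent names are hypothetical, but the dict-keyed shape mirrors the convention of multi-agent RLlib environments, where actions and rewards are dictionaries keyed by agent id:

```python
# Sketch of the cooperative two-agent split: one agent owns all heating
# setpoints, the other all cooling setpoints, and both receive the same
# shared reward (agent names are illustrative).
def merge_agent_actions(actions, zones):
    heating = actions["heating_agent"]  # one setpoint per zone
    cooling = actions["cooling_agent"]
    return {z: (heating[i], cooling[i]) for i, z in enumerate(zones)}

def shared_reward(reward, agent_ids=("heating_agent", "cooling_agent")):
    # Cooperative setting: every agent sees the same scalar reward.
    return {a: reward for a in agent_ids}
```

In RLlib, a policy-mapping function routes each agent id to its own network, which is what lets the heating and cooling policies specialize independently.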
\section{Experiments}
\label{sec:experiments}
\subsection{Setup}
The following are some of the important settings we use in our experiments.
\begin{itemize}
\item \textbf{EnergyPlus}: v9.3.0
\item \textbf{Building}: DOE Commercial Reference Building Medium office, new construction 90.1-2004 (RefBldgMediumOfficeNew2004\_Chicago.idf~\footnote{\url{https://bcl.nrel.gov/node/85021}}).
\item \textbf{Weather file}: Toronto, Canada (CAN\_ON\_Toronto.716240\_\\CWEC.epw~\footnote{\url{https://energyplus.net/weather-location/north_and_central_america_wmo_region_4/CAN/ON/CAN_ON_Toronto.716240_CWEC}}). We chose the Toronto location, as it has a large temperature difference between summer and winter months.
\item \textbf{Simulation days}: We simulate for an entire year, i.e., 365 days. For comparison, we also simulate for 30 days in the months of January and July in Section~\ref{sec:exp-sim-days}.
\item \textbf{Zones considered}: The reference building has three floors and each floor is subdivided into multiple core (center) and perimeter zones. The details of the zones and their names are listed in the next paragraph. We consider all of these zones.
\item \textbf{Controls}: Heating and cooling setpoints for all zones.
\item \textbf{Reward}: We use the reward defined in Equation~\ref{eqn:obj}.
\item \textbf{Algorithms}: APEX\_DDPG~\cite{lillicrap:arxiv19} and PPO~\cite{schulman:arxiv17-ppo}.
\item \textbf{RL library}: Amazon SageMaker RL\footnote{\url{https://docs.aws.amazon.com/sagemaker/latest/dg/reinforcement-learning.html}} with Ray RLlib, version 1.0.
\item \textbf{Deep learning framework}: PyTorch 1.8.1.
\item \textbf{Python version}: 3.6.
\item \textbf{Simulation timestep}: 15 minutes.
\item \textbf{Desired range of zone temperatures}: 20\degree C -- 25\degree C.
\item \textbf{Heating setpoint range}: 15\degree C to 22\degree C.
\item \textbf{Cooling setpoint range}: 22\degree C to 30\degree C.
\item \textbf{Training instance}: ml.g4dn.16xlarge, which has 64 cores, 256~GB of RAM, and 1 NVIDIA T4 GPU.
\end{itemize}
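The actual reward is defined by Equation~\ref{eqn:obj}; the sketch below is only a plausible per-timestep form (functional shape assumed, not quoted from the paper) showing how $\alpha$ trades off comfort violation against energy use within the desired 20--25$\degree$C range:

```python
# Plausible per-timestep reward sketch (NOT the paper's exact
# Equation eqn:obj): alpha weights the mean temperature violation
# against energy consumption.
def step_reward(energy, zone_temps, alpha, lo=20.0, hi=25.0):
    # Mean violation across zones, in degrees C outside [lo, hi].
    violation = sum(max(0.0, lo - t) + max(0.0, t - hi)
                    for t in zone_temps) / len(zone_temps)
    return -(energy + alpha * violation)
```

Under this form, small $\alpha$ makes the agent tolerate comfort violations to save energy, while large $\alpha$ forces tight comfort at higher energy cost, which matches the trend in the tables above.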
The reference building has the following 15 zones: \texttt{Core\_bottom}, \texttt{Core\_mid}, \texttt{Core\_top}, \texttt{Perimeter\_bot\_ZN\_1}, \texttt{Perimeter\_bot\_ZN\_2}, \texttt{Perimeter\_bot\_ZN\_3}, \texttt{Perimeter\_bot\_ZN\_4}, \texttt{Perimeter\_mid\_ZN\_1}, \texttt{Perimeter\_mid\_ZN\_2}, \texttt{Perimeter\_mid\_ZN\_3}, \texttt{Perimeter\_mid\_ZN\_4}, \texttt{Perimeter\_top\_ZN\_1}, \texttt{Perimeter\_top\_ZN\_2}, \texttt{Perimeter\_top\_ZN\_3}, and \texttt{Perimeter\_top\_ZN\_4}. Core refers to the center of a floor; perimeter refers to the zones on the periphery of a floor. `top', `mid', and `bottom' refer to the location of the floors.
In addition to zone temperatures, we use the following observations:
\begin{itemize}
\item \texttt{Air System Total Heating Energy}, which is the heat added to the air loop (sum of all components) in Joules. Similarly, \texttt{Air System Total Cooling Energy} is the heat removed from the air loop. These variables are collected for each VAV (variable air volume) system 1, 2, and 3. Their sums are represented as $P^h_z$ and $P^c_z$ in Table~\ref{tab:nomenclature}, respectively. VAV is a type of HVAC system, which, unlike constant air volume systems, can vary the airflow at a constant temperature.
\item \texttt{Site Outdoor Air Wetbulb Temperature} and \texttt{Site\\ Outdoor Air Drybulb Temperature}. Dry-bulb temperature measures the air temperature without the effect of moisture. On the other hand, wet-bulb temperature is the lowest temperature that can be reached under the current ambient conditions by the evaporation of water alone. Since wet-bulb temperature accounts for relative humidity, we include it as one of the observables.
\end{itemize}
In the following sections, we will explore the effect of various configurations on the reward optimization and training times.
\subsection{Effect of $\alpha$ and Seasons on Reward}
\label{sec:exp-sim-days}
As described earlier, $\alpha$ controls whether we optimize more for energy savings or for temperature comfort. The final reward of a simulation depends not only on $\alpha$, but also on the number of simulation days and the seasons captured in the simulation run. Since the rewards generated at every time step are accumulated into the final reward, we need to normalize the rewards by the number of simulation days. Figure~\ref{fig:result-sim-days} shows the comparison of the reward, energy consumption, and mean violation of zone temperatures for various $\alpha$, when simulated for 30 days in the months of January and July, and also for the entire year (365 days). January and July were chosen as these two months are representative of cold and warm months in a typical climate. The raw data points are provided in Table~\ref{tab:result-sim-days} in Appendix~\ref{sec:appendix-sim-days}.
\begin{figure}[ht]
\centering
\begin{subfigure}[ht]{\linewidth}
\centering
\includegraphics[width=\textwidth]{figs/sim-days-reward.pdf}
\caption{Plot of final reward.}
\label{fig:result-sim-days-reward}
\Description{RL: Impact of weather - reward}
\end{subfigure}
\begin{subfigure}[ht]{\linewidth}
\centering
\includegraphics[width=\textwidth]{figs/sim-days-energy.pdf}
\caption{Plot of energy consumption. The energy consumption of the 365-day simulation is normalized to 30 days for comparison.}
\label{fig:result-sim-days-energy}
\Description{RL: Impact of weather - energy}
\end{subfigure}
\begin{subfigure}[ht]{\linewidth}
\centering
\includegraphics[width=\textwidth]{figs/sim-days-temp.pdf}
\caption{Plot of mean temperature violation.}
\label{fig:result-sim-days-temp}
\Description{RL: Impact of weather - temperature}
\end{subfigure}
\caption{Plot showing the effect of number of simulation days and season on the reward for various $\alpha$.}
\label{fig:result-sim-days}
\Description{RL: Simulation days vs reward}
\end{figure}
From the plots, we see that, in general, energy consumption is higher in winter, as the differential between the external temperature and the indoor temperature is larger in winter than in summer (for the Toronto region). Similar reasoning applies to the mean temperature violation, where we see a higher violation for January than for July. Simulating for the entire year tries to find a policy that works for both seasons, and ends up being suboptimal in both. It is for this reason that we explore using a multi-agent policy (Section~\ref{sec:exp-multi-agent}), where one policy focuses on warmer months, while the other focuses on colder months.
\textbf{Comparison with baseline:} In addition to the above experiment, we simulated a baseline, where we set the heating setpoint to 20$\degree$C and the cooling setpoint to 22.5$\degree$C for all zones, as these are the minimum values that avoid temperature violation and thus minimize energy consumption. The resulting total baseline energy is 517.41~MWh, and the mean temperature violation is 0.118~$\degree$C. Comparing with the centralized policy, we see that the RL algorithm at the minimum (for $\alpha = 0.001$) gives a \textbf{75\%} reduction in energy consumption, while the mean temperature violation is lower by 0.116~$\degree$C.
\subsection{Multi-zone}
\label{sec:exp-multi-zone}
There are two approaches to controlling the HVAC of a building: one is centralized control, where every zone gets the same heating and cooling setpoints at a given time; the other is individualized control for each zone. We evaluate the rewards for various $\alpha$, along with the energy consumption and mean temperature violation, for the above approaches. These observations are summarized in Figure~\ref{fig:result-multi-zone}. From the figure, we see that the multi-zone approach does better on overall reward value, energy consumption, and mean temperature violation over the entire range of $\alpha$. For a couple of data points, the centralized policy seems to do better than multi-zone. This cross-over value is small, and we believe it could be due to the random nature of policy optimization. Overall, the result agrees with our intuition that different zones in a building need different temperature settings, as all zones do not have the same thermal dynamics; e.g., a perimeter zone on the south side will receive more sun than the core zones, and thus would need a different setpoint. Please refer to Appendix~\ref{sec:appendix-multi-zone} for the raw data used in the plots.
\begin{figure}[ht]
\centering
\begin{subfigure}[ht]{\linewidth}
\centering
\includegraphics[width=\textwidth]{figs/multi-zone-reward.pdf}
\caption{Plot of final reward.}
\label{fig:result-multi-zone-reward}
\Description{RL: Impact of weather - reward}
\end{subfigure}
\begin{subfigure}[ht]{\linewidth}
\centering
\includegraphics[width=\textwidth]{figs/multi-zone-energy.pdf}
\caption{Plot of energy consumption.}
\label{fig:result-multi-zone--energy}
\Description{RL: Impact of weather - energy}
\end{subfigure}
\begin{subfigure}[ht]{\linewidth}
\centering
\includegraphics[width=\textwidth]{figs/multi-zone-temp.pdf}
\caption{Plot of mean temperature violation.}
\label{fig:result-multi-zone-temp}
\Description{RL: Impact of weather - temperature}
\end{subfigure}
\caption{Plot comparing multi-zone policy vs centralized policy for various $\alpha$.}
\label{fig:result-multi-zone}
\Description{RL: Simulation days vs reward}
\end{figure}
\subsection{Impact of Weather on HVAC Optimization}
\label{sec:exp-weather}
In this section, we consider the impact of external weather on the HVAC optimization of a building. We use weather data from six locations and perform multi-zone optimization as described in the previous section. Figure~\ref{fig:result-weather} shows the plots of reward, energy consumption, and mean temperature violation for the different locations. The average temperature and humidity over the entire year for each location, along with the temperature difference between summer and winter months, are overlaid on the plots for better understanding. The details of the data and the full names of the locations can be found in Appendix~\ref{sec:appendix-weather}.
From the plots, we see that the average external temperature plays a big role in the achievable energy consumption, which also impacts the reward. Both low and high average temperatures decrease the reward (see Toronto and Tampa, FL), as they require more energy to keep the indoor temperature in the comfort zone. Note that the x-axis in Figure~\ref{fig:result-weather-reward} is inverted: the lower the bar, the better the reward value. In general, it takes more energy to cool than to heat, as we can see from Figure~\ref{fig:result-weather-energy}, where the higher average temperatures in Tampa, FL require more than double the energy consumption to cool the building compared to a building in Toronto, where temperatures are on average much cooler.
The mean difference between summer and winter temperatures also plays a role, as it represents the differential work the HVAC system needs to do across different seasons to keep indoor temperatures within the comfort zone. Similarly, higher humidity keeps the outdoor temperatures relatively constant, which reduces the mean temperature violation in zones, as seen with the San Francisco location.
\begin{figure}[ht]
\centering
\begin{subfigure}[ht]{\linewidth}
\centering
\includegraphics[width=\textwidth]{figs/weather-reward.pdf}
\caption{Plot of final reward.}
\label{fig:result-weather-reward}
\Description{RL: Impact of weather - reward}
\end{subfigure}
\begin{subfigure}[ht]{\linewidth}
\centering
\includegraphics[width=\textwidth]{figs/weather-energy.pdf}
\caption{Plot of energy consumption.}
\label{fig:result-weather-energy}
\Description{RL: Impact of weather - energy}
\end{subfigure}
\begin{subfigure}[ht]{\linewidth}
\centering
\includegraphics[width=\textwidth]{figs/weather-temp.pdf}
\caption{Plot of mean temperature violation.}
\label{fig:result-weather-temp}
\Description{RL: Impact of weather - temperature}
\end{subfigure}
\caption{Plot of impact of external weather on HVAC optimization. Each plot contains metrics for 6 cities along with plot of average temperature, average humidity, and mean temperature deviation from summer and winter months. $\alpha$ was set to 1.}
\label{fig:result-weather}
\Description{RL: Impact of weather}
\end{figure}
\subsection{Multi-agent}
\label{sec:exp-multi-agent}
Next, we explored setting up multiple agents to control the HVAC. The idea behind multi-agent control is that multiple agents may better optimize the reward when the controls are not homogeneous. For example, in the HVAC use case, we have two sets of controls: heating setpoints and cooling setpoints. Heating setpoints are used during winter months to increase the indoor temperature, while cooling setpoints are used in summer months to reduce it. A single agent models the environment using a single neural network and uses the same weights to decide the next heating and cooling setpoint controls, which may not be optimal. With multi-agent control, each agent (heating/cooling setpoint) optimizes an individual network for the control it is responsible for. Figure~\ref{fig:result-multi-agent} compares the multi-agent policy with the multi-zone policy discussed earlier. From the figure, we notice that multi-agent does better than multi-zone in general across various $\alpha$.
\begin{figure}[ht]
\centering
\begin{subfigure}[ht]{\linewidth}
\centering
\includegraphics[width=\textwidth]{figs/multi-agent-reward.pdf}
\caption{Plot of final reward. Note the x-axis is inverted. The smaller the length of the bar, better the reward.}
\label{fig:result-multi-agent-reward}
\Description{RL: multi-agent - reward}
\end{subfigure}
\begin{subfigure}[ht]{\linewidth}
\centering
\includegraphics[width=\textwidth]{figs/multi-agent-energy.pdf}
\caption{Plot of energy consumption.}
\label{fig:result-multi-agent-energy}
\Description{RL: multi-agent - energy}
\end{subfigure}
\begin{subfigure}[ht]{\linewidth}
\centering
\includegraphics[width=\textwidth]{figs/multi-agent-temp.pdf}
\caption{Plot of mean temperature violation.}
\label{fig:result-multi-agent-temp}
\Description{RL: multi-agent - temperature}
\end{subfigure}
\caption{Plot comparing multi-zone with multi-agent policies for various $\alpha$.}
\label{fig:result-multi-agent}
\Description{RL: Impact of weather}
\end{figure}
\subsection{Comparison of RL Algorithms}
In this experiment, we compare the two RL algorithms, PPO and APEX-DDPG, that were described in Section~\ref{sec:drl_algos}. Table~\ref{tab:result-algo-compare} and Figure~\ref{fig:result-algorithms} summarize the comparison. The rewards from PPO (for different observation filters) and APEX-DDPG are close. In terms of training times, APEX-DDPG has the lowest training time (\textbf{60\%} lower than PPO), and its reward converges in 84\% fewer iterations. \texttt{MeanStdFilter}, which keeps a running mean of the observations, took longer to converge with no significant improvement in reward.
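For readers unfamiliar with observation filters, the following is a sketch of what a filter such as RLlib's \texttt{MeanStdFilter} does: maintain running statistics (here via Welford's algorithm) and normalize each incoming observation. It is simplified to scalar observations for illustration; RLlib's implementation is vectorized:

```python
# Running mean/std normalizer in the spirit of RLlib's MeanStdFilter
# (simplified to scalars; Welford's online algorithm).
class RunningMeanStd:
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)  # accumulates sum of squared deviations

    def normalize(self, x):
        std = (self.m2 / self.n) ** 0.5 if self.n > 1 else 1.0
        return (x - self.mean) / (std if std > 0.0 else 1.0)
```

Such normalization helps when observation components have very different scales (e.g., temperatures in degrees C vs energy in Joules), at the cost of the extra bookkeeping that slowed convergence in our runs.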
\begin{table}[ht]
\caption{Table comparing RL algorithms along with different configuration values, in terms of reward value and training time.}
\label{tab:result-algo-compare}
\begin{tabular}{cp{0.7in}cp{0.5in}p{0.4in}}
\toprule
Algorithm & Observation filter & Reward & Iteration count & Training time (s)\\
\midrule
\multirow{2}{*}{PPO} & NoFilter & -293.19 & 104 & 25\\
& MeanStdFilter & -292.73 & 178 & 45\\
APEX-DDPG & -- & -293.02 & 17 & 10\\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{figs/algos.pdf}
\caption{Plot comparing training times for PPO (NoFilter and MeanStdFilter observation filters) and APEX-DDPG algorithms.}
\label{fig:result-algorithms}
\Description{RL: Simulation days vs reward}
\end{figure}
\subsection{Training Cost Optimization}
In the final experiment, we run our HVAC optimization on the different AWS instances listed in Table~\ref{tab:result-training-cost}. The table lists the GPU type (if the instance has a GPU), the number of CPU cores, the training time, and the resulting cost. From the table, we can infer that having many cores is very helpful for rolling out many RLlib workers, while at the same time a low-cost GPU is useful for training the model in less time. For our experiments, we found that ml.g4dn.16xlarge turns out to be the lowest-cost option.
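The cost column is simply the hourly instance price times the training time. The sketch below illustrates the arithmetic; the \$5.44/hr figure is an assumption inferred from the table's ml.g4dn.16xlarge row (15 minutes, \$1.36), not a quoted price:

```python
# Back-of-the-envelope check of the training-cost column:
# cost = hourly price * (minutes / 60), rounded to cents.
def training_cost(price_per_hour, minutes):
    return round(price_per_hour * minutes / 60.0, 2)

# An assumed $5.44/hr rate for 15 minutes reproduces the $1.36 entry.
cost = training_cost(5.44, 15)
```

This makes explicit why a faster-but-pricier instance can still win on cost: the GPU instance's shorter wall-clock time more than offsets its higher hourly rate.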
\begin{table}[ht]
\caption{Table comparing convergence times on different instances on Amazon SageMaker and their respective costs.}
\label{tab:result-training-cost}
\begin{tabular}{llp{0.4in}p{0.4in}p{0.4in}}
\toprule
Instance type & GPU type & CPU cores & Training time (mins) & Training cost (\$)\\
\midrule
ml.g4dn.16xlarge & T4 & 64 & 15 & 1.36\\
ml.p3.2xlarge & V100 & 8 & 40 & 2.55\\
ml.p2.xlarge & K80 & 4 & 135 & 2.53\\
ml.c5.18xlarge & -- & 72 & 35 & 2.14\\
ml.m5.24xlarge & -- & 96 & 50 & 4.6\\
\bottomrule
\end{tabular}
\end{table}
\section{Introduction}
Buildings account for 40\% of total energy consumption, 70\% of total electricity, and 30\% of carbon emissions in the United States~\cite{shaikh:rser14, doe:energy11}. HVAC systems account for 50\% of the total energy consumption in buildings. The aim of HVAC systems in residential and commercial buildings is to maintain indoor air temperature and air quality. Conventional building HVAC control is rule-based feedback control: temperature setpoints are set based on certain pre-determined schedules (e.g., day and night time schedules), and a simple controller such as Proportional-Integral-Derivative (PID)~\cite{geng:icca93} is used for tracking the setpoints. This simple reactive strategy works well to maintain indoor air temperature on a per-zone basis, but is not optimal for a large building with many thermal zones. The strategy also ignores the effects of external factors like weather. Additional energy savings can be achieved by intelligently controlling HVAC systems.
Optimal control strategies, such as Model Predictive Control (MPC), address the above limitations by iteratively optimizing an objective function over a finite time horizon. One of the limitations of MPC is the need for accurate models of the environment. The thermal dynamics of a large building are very complex, with zones at the periphery having different thermal properties than those in the middle, and similar variation is seen across the multiple floors of a building. Moreover, the dynamics of two buildings can be very different depending on the layout, HVAC configuration, and occupancy rates. Modeling these different thermal properties is difficult and needs to be repeated for every new building. Model-free optimization is preferred in such scenarios, and reinforcement learning (RL)~\cite{sutton:rl18} is well suited for it. RL directly interacts with an environment and learns through a set of actions and the corresponding feedback in the form of state changes. DRL incorporates deep learning with RL, allowing agents to make decisions from unstructured input data without manual engineering of the state space. DRL has achieved remarkable success in playing Atari and Go~\cite{mnih:nature15}. DRL is especially well suited for model-free RL, where the agent can learn to model the environment by exploring extensively. Ray RLlib~\cite{liang:arxiv18} is a popular DRL framework, which supports commonly used DRL algorithms.
Since RL algorithms require extensive action-state pairs from an environment to optimize, they are usually trained on simulators, as (i) trying those actions in the real world could be dangerous, e.g., setting the temperature too high or too low in a building, and (ii) collecting the extensive data would take a long time in the real world. EnergyPlus~\cite{crawley:energplus01} is a whole-building energy simulation program that models heating, cooling, ventilation, lighting, and water use in buildings. EnergyPlus is commonly used by design engineers and architects who would like to design HVAC equipment, analyze life-cycle cost, or optimize energy performance.
In this paper, we present a co-simulation framework, where DRL algorithms from Ray RLlib interact with the EnergyPlus simulator to arrive at an optimal HVAC policy. The co-simulation framework provides a seamless and scalable interface, where observations from EnergyPlus are provided to an OpenAI Gym environment, which in turn computes the reward. RLlib collects the rewards and computes the optimal actions for the next time step. These actions are conveyed back to the OpenAI Gym environment and supplied to EnergyPlus. Through this co-simulation framework, EnergyPlus, which is designed to run as a single process, is extended via RLlib's distributed model to scale to multiple CPUs and GPUs across clusters. This is made possible by using callbacks from EnergyPlus and utilizing queues to synchronize the two programs. Additionally, we demonstrate setting up multi-zone and multi-agent environments with this framework.
The main contributions of this work are summarized below:
\begin{enumerate}
\item This paper provides an optimization framework by orchestrating a co-simulation environment between Ray RLlib and EnergyPlus. The framework is highly scalable across multiple compute instances, including CPUs and GPUs.
\item The above framework uses the standard OpenAI Gym interface, which is customizable, and supports multiple DRL algorithms (all that are supported by RLlib). We demonstrate setting up multi-zone control and multiple agents to control the building at different times of the year.
\item We conduct experiments for different weather conditions and simulator configurations. We demonstrate the trade-off in training times and rewards w.r.t. energy-temperature penalty coefficient.
\end{enumerate}
\subsection{Related Work}
The recent literature on HVAC optimization for buildings generally falls into two categories: one that uses MPC and another that uses RL. One of the roadblocks to the widespread adoption of MPC is the need for a model~\cite{privara:cca11, killian:be16}. Due to the heterogeneity of buildings, we may need to develop a model for each thermal zone~\cite{lu:appEnergy15}. The models could be built based on physics, e.g., EnergyPlus, or based on material characteristics, e.g., computational fluid dynamics models. These models are not control-oriented~\cite{ercan:icbs16}, and although not impossible, it requires considerable work to use them for control.
Many DRL-based HVAC control methods have been proposed. The method of Wei et al.~\cite{wei:dac17}, based on Deep Q-Networks (DQN)~\cite{mnih:arxiv13}, was one of the first DRL-based controllers to minimize energy consumption while maintaining temperature in a desired range. Gao et al.~\cite{gao:arxiv19} proposed a Deep Deterministic Policy Gradients (DDPG)~\cite{lillicrap:arxiv19} based method to minimize energy consumption and thermal discomfort in a laboratory setting. Similarly, Zhang et al.~\cite{zhang:eb19} proposed an Asynchronous Advantage Actor Critic (A3C)~\cite{mnih:arxiv16} based controller that jointly optimizes energy demand and thermal comfort in an office building.
Some works have used EnergyPlus to model the building and serve as a simulator for verifying HVAC control methods. Chen et al.~\cite{chen:arxiv19} validated their work on a large office (16 thermal zones) within EnergyPlus by controlling the temperature setpoints. Moriyama et al.~\cite{moriyama:asc18} presented a test bed that integrates DRL with EnergyPlus for data center HVAC control; our work is most similar to theirs. One limitation of~\cite{moriyama:asc18} is that the co-simulation does not scale beyond one core. In this paper, we provide a framework that scales to multiple cores and multiple nodes.
The rest of the paper is structured as follows: Section~\ref{sec:model} covers problem formulation and introduction to EnergyPlus simulator. Section~\ref{sec:drl} introduces the reader to deep reinforcement learning algorithms for HVAC control. Section~\ref{sec:experiments} presents the results of exploring DRL algorithms with EnergyPlus, and finally we conclude the paper in Section~\ref{sec:conclusion}.
\section{Conclusion}
\label{sec:conclusion}
In this paper, we have presented a scalable framework for optimizing HVAC on a commercial building using deep reinforcement learning. We have conducted experiments, showing the impact of simulation days, weather, energy-temperature penalty coefficient on the reward. We have also shown how to set up multi-zone and multi-agent control in this framework. We believe this framework will ease the adoption of DRL in HVAC optimization with researchers and practitioners, and spur further innovation.
\bibliographystyle{ACM-Reference-Format}
\section{HVAC System Modeling of Buildings}
\label{sec:model}
In this section, we cover the problem formulation, discuss the EnergyPlus and reinforcement learning frameworks, and describe strategies for distributing workloads to speed up computation.
\subsection{Problem Formulation}
Table~\ref{tab:nomenclature} lists the nomenclature used in this paper:
\begin{table}[ht]
\caption{Nomenclature used in the paper.}
\label{tab:nomenclature}
\begin{tabular}{lp{2.65in}}
\toprule
Symbol & Description\\
\midrule
$N_z$ & Number of thermal zones.\\
$N_d$ & Number of simulation days.\\
$t_s$ & Timestep (s)\\
$T_{min}, T_{max}$ & Minimum and maximum acceptable zone temperature ($\degree$ C), respectively\\
$T_z$ & Zone $z$ temperature ($\degree$ C), $\mathbf{T}$ represents the vector of all zone temperatures.\\
$P_z^h$ & Heat added to zone $z$ per unit timestep\\
$P_z^c$ & Heat removed from zone $z$ per unit timestep\\
$P_b$ & Base energy expenditure (unrelated to zone) per unit timestep, e.g. energy used for lighting, etc.\\
$\alpha$ & Energy-temperature penalty ratio\\
$T_z^h$ & Heating setpoint temperature for zone $z$ ($\degree$ C)\\
$T_z^c$ & Cooling setpoint temperature for zone $z$ ($\degree$ C)\\
$T_e$ & Outdoor temperature ($\degree$ C)\\
$R_h$ & Relative humidity (\%)\\
$T^h_{min}, T^h_{max}$ & Minimum and maximum heating setpoint ($\degree$ C), respectively\\
$T^c_{min}, T^c_{max}$ & Minimum and maximum cooling setpoint ($\degree$ C), respectively\\
\bottomrule
\end{tabular}
\end{table}
The goal of our work is to minimize energy consumption in a building subject to maintaining zone temperatures within the range of comfort. The problem can be mathematically stated as follows:
\begin{align}
\displaystyle\min_{T_z^h(t_s),~T_z^c(t_s)} \quad & \displaystyle\sum_{t_s=0}^{t_s=N_d} \left[ P_b(t_s) + \displaystyle\sum_{z=0}^{z=N_z} P_z^h(T_z^h, t_s) + P_z^c(T_z^c, t_s) \right] \label{eqn:orig-obj}\\
s.t. \quad & T^h_{min} \le T_z^h(t_s) \le T^h_{max}, \quad \forall z, t_s \label{eqn:t_h_range}\\
& T^c_{min} \le T_z^c(t_s) \le T^c_{max}, \quad \forall z, t_s \label{eqn:t_c_range}\\
& T_z(t_s) = f_T(\mathbf{T}, T_e, R_h, \mathbf{P}^h, \mathbf{P}^c, t_s) \quad \forall z, t_s \label{eqn:t_func}\\
& P_z^h(t_s) = f_{Ph}(\mathbf{T}, T_z^h, t_s), \quad \forall z, t_s \label{eqn:p_h_func}\\
& P_z^c(t_s) = f_{Pc}(\mathbf{T}, T_z^c, t_s), \quad \forall z, t_s \label{eqn:p_c_func}\\
& T_{min} \le T_z(t_s) \le T_{max}, \quad \forall z, t_s \label{eqn:t_range}
\end{align}
Equation~\ref{eqn:orig-obj} is the objective function, where we minimize the overall energy consumption over all simulation steps and all zones. Power consumed for heating and cooling is described in Equations~\ref{eqn:p_h_func} and~\ref{eqn:p_c_func}, respectively. The power consumed is a function of the setpoint and the temperatures of all zones: since the temperature of a zone is influenced by nearby zones, we need all zone temperatures to compute the power required to bring a zone's temperature into its desired range. Similarly, the function computing zone temperatures depends on both heating and cooling setpoints and on the temperatures of all zones, as described in Equation~\ref{eqn:t_func}. Note that $f_{Ph}$, $f_{Pc}$, and $f_T$ are black-box functions, which the model-free RL algorithms described in Section~\ref{sec:drl} account for implicitly as they optimize the reward.
Of the constraints in Equations~\ref{eqn:t_h_range},~\ref{eqn:t_c_range}, and~\ref{eqn:t_range}, Equation~\ref{eqn:t_range} constrains the observed temperatures, while the rest constrain the input variables. Equation~\ref{eqn:t_range} presents a challenge for optimization, as enforcing a hard constraint increases the complexity of the algorithms. Equations~\ref{eqn:t_h_range} and~\ref{eqn:t_c_range} are constraints on control variables, which are easily handled by limiting the range of the inputs. A common way of addressing this issue is to replace the hard constraint with a soft constraint in the objective, with an added penalty coefficient, as shown in Equation~\ref{eqn:obj}.
\begin{multline}
\displaystyle\min_{T_z^h(t_s),~T_z^c(t_s)} \quad \displaystyle\sum_{t_s=0}^{t_s=N_d} \left[ P_b(t_s) + \displaystyle\sum_{z=0}^{z=N_z} \left( P_z^h(T_z^h, t_s) + P_z^c(T_z^c, t_s) \right. \right. \\
+ \alpha (\max(T_{min} - T_z(t_s), T_z(t_s) - T_{max}, 0))^\lambda)] \label{eqn:obj}
\end{multline}
\begin{align}
s.t. \quad & T^h_{min} \le T_z^h(t_s) \le T^h_{max}, \quad \forall z, t_s\\
& T^c_{min} \le T_z^c(t_s) \le T^c_{max}, \quad \forall z, t_s\\
& T_z(t_s) = f_T(\mathbf{T}, T_e, R_h, \mathbf{P}^h, \mathbf{P}^c, t_s) \quad \forall z, t_s\\
& P_z^h(t_s) = f_{Ph}(\mathbf{T}, T_z^h, t_s), \quad \forall z, t_s\\
& P_z^c(t_s) = f_{Pc}(\mathbf{T}, T_z^c, t_s), \quad \forall z, t_s\\
& \alpha, \lambda \ge 0, \label{eqn:alpha_const}\\
& \lambda \ge 1 \label{eqn:lambda_const}
\end{align}
We have added a second term in Equation~\ref{eqn:obj} to represent the violation of the zone temperature constraints: $\max(T_{min} - T_z(t_s), T_z(t_s) - T_{max}, 0)$ is the amount by which the zone temperature violates either the minimum or the maximum bound. The coefficient $\alpha$ trades off power consumption against temperature comfort: increasing $\alpha$ penalizes temperature violations more heavily, while decreasing $\alpha$ places more emphasis on energy reduction. $\alpha$ is constrained to be positive as in Equation~\ref{eqn:alpha_const}, but with no upper limit, because power consumption and temperature are on different scales and cannot be normalized to the same range.
$\lambda$ is a pre-determined constant that determines how the penalty grows with the deviation of the zone temperature beyond the minimum or maximum constraint; increasing $\lambda$ increases the penalty super-linearly. It is constrained to be at least $1$, as in Equation~\ref{eqn:lambda_const}. An example of the temperature penalty term for $\alpha=1$ and $\lambda=1.5$ is shown in Figure~\ref{fig:temp-penalty}, where the temperature delta is the amount by which the zone temperature exceeds the constraints.
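As a concrete illustration of the soft constraint in Equation~\ref{eqn:obj}, the sketch below computes the per-timestep reward as the negated objective. The comfort band, default coefficients, and function names here are illustrative assumptions, not values taken from our experiments.

```python
def temperature_penalty(t_zone, t_min=20.0, t_max=23.5, alpha=1.0, lam=1.5):
    """Soft-constraint term: alpha * max(T_min - T_z, T_z - T_max, 0) ** lambda."""
    violation = max(t_min - t_zone, t_zone - t_max, 0.0)
    return alpha * violation ** lam

def step_reward(p_base, p_heat, p_cool, zone_temps, alpha=1.0, lam=1.5):
    """Negated per-timestep objective: the agent maximizes reward, so energy
    use plus the summed temperature penalty enters with a minus sign."""
    energy = p_base + sum(p_heat) + sum(p_cool)
    penalty = sum(temperature_penalty(t, alpha=alpha, lam=lam) for t in zone_temps)
    return -(energy + penalty)
```

A zone temperature inside the band contributes no penalty; outside the band the penalty grows as a power $\lambda$ of the violation, matching the curve in Figure~\ref{fig:temp-penalty}.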
\begin{figure}[ht]
\centering
\includegraphics[width=0.9\linewidth]{figs/temp-penalty.pdf}
\caption{Temperature penalty curve adopted in our experiments.}
\label{fig:temp-penalty}
\Description{RL: Training cost}
\end{figure}
The remaining constraints are the same as in Equations~\ref{eqn:t_h_range}--\ref{eqn:p_c_func}.
The following sections discuss the simulation engine we use for our experiment, and the co-simulation with DRL optimizers.
\subsection{Ray RLlib and OpenAI Gym}
RLlib~\cite{liang:arxiv18} is a popular open-source library for reinforcement learning (RL) built on Ray~\cite{moritz:arxiv18}. RLlib offers high scalability and includes many commonly used algorithms for RL training, e.g. PPO~\cite{schulman:arxiv17-ppo}, DDPG~\cite{lillicrap:arxiv19}, etc. Gym~\cite{brockman:corr16} is a toolkit from OpenAI for developing reinforcement learning algorithms. Gym provides standard interfaces for initializing an environment, where we define the action and observation spaces, for resetting the environment, and for providing controls and observing outputs at every timestep of the simulation. RLlib interfaces with Gym seamlessly.
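To make the interface concrete, here is a minimal Gym-style environment skeleton for multi-zone setpoint control. It mimics the \texttt{reset}/\texttt{step} contract without importing Gym itself; the setpoint bounds and observation layout are illustrative assumptions, not values from the building IDF.

```python
class EnergyPlusGymSketch:
    """Gym-style environment skeleton for multi-zone setpoint control.

    The action is a heating and a cooling setpoint per zone; the observation
    is outdoor temperature, relative humidity, and each zone temperature.
    """

    def __init__(self, n_zones=2):
        self.n_zones = n_zones
        # Action bounds mirror the setpoint range constraints:
        # [T_h per zone] followed by [T_c per zone].
        self.action_low = [15.0] * n_zones + [22.0] * n_zones
        self.action_high = [22.0] * n_zones + [30.0] * n_zones
        # Observation layout: [T_e, R_h, T_1, ..., T_Nz].
        self.obs_dim = 2 + n_zones

    def reset(self):
        # A real implementation would (re)start EnergyPlus and return the
        # first observation; zeros keep the sketch self-contained.
        return [0.0] * self.obs_dim

    def step(self, action):
        clipped = [min(max(a, lo), hi)
                   for a, lo, hi in zip(action, self.action_low, self.action_high)]
        # A real implementation would push `clipped` to EnergyPlus and pull
        # the next observation and reward from the synchronization queues.
        obs = [0.0] * self.obs_dim
        reward, done = 0.0, False
        return obs, reward, done, {"action": clipped}
```

Clipping the action enforces the setpoint range constraints on the control variables directly, as discussed in the problem formulation.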
\subsection{Co-simulation Routine}
We combine EnergyPlus with an RL framework to co-simulate and determine the optimal settings for HVAC operation. A user provides building information, e.g. the HVAC system (heat sources, rate of cooling/heating, etc.), number of zones, building materials, etc., in an input file called an IDF. EnergyPlus reads the IDF file and the weather file and computes heating and cooling loads for every timestep.
Co-simulation refers to synchronizing two or more simulations running in parallel; in our case, reading observations from EnergyPlus and generating the action for the next timestep from RLlib. This turns out to be a challenge since RLlib and EnergyPlus run in different processes. EnergyPlus supports BCVTB (Building Controls Virtual Test Bed), a software environment that allows users to couple different simulation programs for co-simulation; BCVTB uses a Ptolemy server in the backend. This solution works when one program controls one run of EnergyPlus, but RLlib can spin up multiple rollout workers, each of which needs to start its own EnergyPlus process, so a BCVTB-based solution does not work in this scenario.
Fortunately, EnergyPlus from version 9.3 onward provides\\ \texttt{pyenergyplus}, a Python library that exposes callbacks into various states within EnergyPlus. Using these callbacks, we can intercept EnergyPlus, gather variables of interest, and pause the simulator until RLlib completes the necessary computation. In order for RLlib to gather outputs from EnergyPlus and compute the next action based on the reward, we use the callback function \texttt{callback\_end\_zone\_timestep\_after\_zone\_reporting},\\ which returns control to our Gym class at the end of every zone timestep, after outputs have been generated. In this function, we collect the outputs and place them in a queue, which is picked up by the Gym step function to compute the corresponding next action. The queues act as the synchronization mechanism between our Gym environment and EnergyPlus. The framework is depicted in Figure~\ref{fig:arch}.
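The queue hand-off described above can be sketched as follows, with a worker thread standing in for the EnergyPlus process. The function names are placeholders for illustration, not the \texttt{pyenergyplus} API itself.

```python
import queue
import threading

# Shared queues: EnergyPlus publishes observations, RLlib supplies actions.
obs_queue = queue.Queue(maxsize=1)
act_queue = queue.Queue(maxsize=1)

def zone_timestep_callback(zone_temps):
    """Placeholder for the end-of-zone-timestep callback: publish this
    timestep's outputs, then block EnergyPlus until an action arrives."""
    obs_queue.put(zone_temps)
    return act_queue.get()

# One hand-off, with a worker thread playing the EnergyPlus side.
result = {}

def energyplus_side():
    result["action"] = zone_timestep_callback([21.3, 22.8])

worker = threading.Thread(target=energyplus_side)
worker.start()
first_obs = obs_queue.get()      # Gym step: receive the observation ...
act_queue.put([20.0, 24.0])      # ... compute and deliver the next action.
worker.join()
```

Because both queue operations block, the simulator and the learner advance in lockstep, one zone timestep at a time, which is exactly the synchronization the framework needs.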
\begin{figure*}[ht]
\centering
\includegraphics[width=0.8\linewidth]{figs/arch.pdf}
\caption{Distributed co-simulation framework involving Ray RLlib, OpenAI gym, and EnergyPlus}
\label{fig:arch}
\Description{co-simulation framework}
\end{figure*}
In order to change the setpoints at every timestep, we add constant schedules such as the following to the building IDF:
\begin{Verbatim}[frame=single]
Schedule:Constant,
CLGSETP_SCH_Perimeter_top_ZN_4, !- Name
Temperature, !- Schedule Type Limits Name
20.0; !- Hourly Value
\end{Verbatim}
Through an EnergyPlus callback, we modify this setpoint to the desired value, which is then held constant for the given timestep.
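The override can be sketched with the actuator pattern below. The \texttt{get\_actuator\_handle}/\texttt{set\_actuator\_value} calls mirror the EnergyPlus 9.3+ Python API as we understand it, but a stub stands in for \texttt{api.exchange} so the pattern runs without EnergyPlus, and the heating-schedule name is a hypothetical example.

```python
class _ExchangeStub:
    """Stand-in for pyenergyplus's ``api.exchange`` so the pattern runs
    without EnergyPlus installed.  Assumes the real API shape
    get_actuator_handle(state, component_type, control_type, key) and
    set_actuator_value(state, handle, value)."""

    def __init__(self):
        self.values = {}

    def get_actuator_handle(self, state, component_type, control_type, key):
        return (component_type, control_type, key)

    def set_actuator_value(self, state, handle, value):
        self.values[handle] = value


exchange = _ExchangeStub()
state = None  # the real state object comes from the EnergyPlus runtime


def apply_setpoints(t_heat, t_cool, zone="Perimeter_top_ZN_4"):
    """Override the constant schedules for one timestep with the agent's action.
    HTGSETP_SCH_* is a hypothetical heating-schedule name for illustration."""
    for sched_name, value in ((f"CLGSETP_SCH_{zone}", t_cool),
                              (f"HTGSETP_SCH_{zone}", t_heat)):
        handle = exchange.get_actuator_handle(
            state, "Schedule:Constant", "Schedule Value", sched_name)
        exchange.set_actuator_value(state, handle, value)


apply_setpoints(20.0, 26.0)
```

Each call rewrites the \texttt{Schedule:Constant} value for the current timestep, so the IDF schedules above act as handles through which the agent's actions reach EnergyPlus.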
\section{\label{sec:int}Introduction}
Recent observations of quasi-periodic oscillations (QPOs),
particularly kHz QPOs, in X-ray emissions from accreting binaries,
have aroused a lot of interest in the astrophysical community. The
QPOs are very strong and remarkably coherent with frequencies
ranging from $\sim 10$ Hz to $\sim 1200$ Hz. They have been observed
in the X-ray flux of about 20 accreting neutron star sources and
five black hole sources by {\it Rossi X-Ray Timing Explorer}. Almost
all sources have shown twin spectral peaks in the QPOs in the kHz
part of the X-ray spectrum, with the value of the peak separation
being anti-correlated with the QPO frequencies
\cite{Van00,DMV03,MVF04}.
The clear similarities of kHz QPO properties in black hole systems
to those in neutron star binaries \cite{PBV99,BPV02}, and in white
dwarf systems \cite{WWP03}, suggest that the oscillation likely
originate in the accretion flow surrounding the central object.
Motivated by this argument, a variety of accretion-based models have
been proposed. Most models, however, fail to provide a general
explanation of QPO features in all potential sources. See, for
example, Li \& Narayan \cite{LN04} and references therein.
Li \& Narayan \cite{LN04} studied Rayleigh-Taylor and
Kelvin-Helmholtz instabilities at a possible interface between the
star's magnetosphere and the accretion disk. They found that modes
with low order azimuthal wavenumbers are expected to grow to large
amplitude and to contribute to kHz QPOs. In their study, the
magnetic field allows an interface with abrupt spatial
discontinuities in the flow density and/or angular velocity across
the interface. They ignore, however, the dynamical role of the
magnetic field of the central object and its interaction with the
accretion disk/flow.
Based on theoretical models and observations of the aurora in the
Earth's magnetosphere, Rezania et al. \cite{RSD04} and Rezania \&
Samson \cite{RS05} (hereafter paper I) have recently proposed a
generic magnetospheric model for accretion disk-neutron star systems
to address the occurrence and the behavior of the observed QPOs in
those systems. In the Earth's magnetosphere, the occurrence of
aurora is a result of the resonant coupling between shear Alfv\'en
waves and fast compressional waves (produced by the solar wind).
These resonances are known as field line resonances (FLRs). Paper I
argued that this resonant coupling is also likely to occur in
neutron star magnetospheres, due to interaction with the accreting
plasma. The MHD interaction of the infalling plasma (with a
sonic/supersonic speed) with the neutron star magnetosphere, would
alter not only the plasma flow toward the surface of the star, as
assumed by current QPO models, but also the structure of the star's
magnetosphere. The magnetic field of the neutron star is distorted
inward by the infalling plasma of the Keplerian accretion flow.
Furthermore, the plasma would likely be able to penetrate through
magnetic field lines and produce enhanced density regions within the
magnetosphere (similar to the magnetospheric interface introduced in
\cite{LN04}). Any instability in the compressional action of the
accretion flow would alter the quasi-equilibrium pressure balance
between the inward pressure of the infalling flow $\sim\rho v_r^2/2$
and the outward magnetic pressure $\sim B_p^2/8\pi$. This process
then can excite perturbations in the enhanced density region. Here
$\rho$ and $v_r$ are the density and radial velocity of the
infalling matter and $B_p$ is the poloidal magnetic field on the
plane of the disk. In analogy with the interaction of the solar wind
with the earth's magnetosphere \cite{Sam91}, one would expect the
excitation of resonant shear Alfv\'en waves, or FLRs, \cite{RS05}.
Paper I assumed a simple geometry with a rectilinear magnetic field,
and showed that, in the presence of a plasma flow, two resonant MHD
modes with frequencies in the kHz range can occur within a few
stellar radii. The results in paper I gave a reasonable prediction,
both quantitative and qualitative, of the kHz oscillations observed
in the X-ray fluxes in X-ray binaries. The existence of a non-zero
plasma displacement along the magnetic field lines, which oscillates
with resonance frequencies and then modulates the flow of the plasma
toward the surface of the neutron star, can explain the kHz
quasi-periodicity in the observed X-ray flux from the star. See
\cite{RS05} for more details.
In this letter we examine the above problem in a more realistic
configuration for the star's magnetosphere. We base our model on the
fact that the topology of the magnetospheres of isolated pulsars is
likely close to dipolar. In accreting pulsars, however, the geometry
of the magnetosphere will be distorted due to the inward flowing
plasma, specifically on the plane of the accretion disk.
Nevertheless, we approximate the topology of the magnetosphere of an
accreting neutron star with a dipolar geometry in order to study
the structure of shear Alfv\'en waves in the presence of an ambient
flow.
Furthermore, due to the existence of ambient flow along the magnetic
field, the plasma pressure is not expected to be isotropic. This can
be understood by noting that the plasma pressure parallel to the
field lines can be defined as
\stepcounter{sub} \begin{equation}\label{p_par}
p_{||}\sim m \int f (u_{||}-v_p)^2 d^3u,
\end{equation}
where $u_{||}={\bf u}\cdot\B/B$ is the
thermal velocity of the plasma particles and $v_p$ is the ambient
flow velocity along the field lines. Here $f({\bf r},{\bf u},t)$ is the
plasma distribution function satisfying the Vlasov equation
\cite{SHD97}. Similarly, the perpendicular component of the pressure
can be calculated from
\stepcounter{sub} \begin{equation}\label{p_per} p_\perp \sim m \int f
u_\perp^2 d^3u/2.
\end{equation}
Hence, the scalar pressure of isotropic MHD
must be replaced by a diagonal pressure tensor with two components:
a parallel component $p_{||}$ acting along the field lines and a
perpendicular component $p_\perp$ acting in the perpendicular
direction. The latter can be considered to be the ram pressure.
Consequently, in the MHD equations, the flow pressure must be
written as: ${\p}=p_\perp {\bf I} + (p_\perp-p_{||}) {\bf b}\b$, where
${\bf I}$ is the identity tensor and ${\bf b}=\B/B$ is the unit vector along
the magnetic field line.
The linearized, perturbed magnetohydrodynamic equations with
anisotropic pressure in the presence of an ambient flow now have
parallel and perpendicular components:
\stepcounter{sub}\begin{eqnarray}\label{eq-mhd1} \stepcounter{subeqn}\label{xi_para} && \rho \left(
{\partial \delta {\bf v} \over \partial t} + {\bf v} \cdot \nab \delta{\bf v} +
\delta{\bf v} \cdot \nab {\bf v}\right)_{||} =
- \nabla_{||} \delta p_{||} \no\\
&&\hspace{4cm}+ (p_\perp-p_{||}) {\nabla_{||}\delta B_{||}\over B}\\
\stepcounter{subeqn}\label{xi_perp}
&& \rho \left( {\partial \delta {\bf v} \over \partial t}
+ {\bf v} \cdot \nab
\delta{\bf v} + \delta{\bf v} \cdot \nab {\bf v}\right)_\perp =
- \nab_\perp \delta \left( p_\perp + {B^2\over 8\pi}\right)\no\\
&&\hspace{3cm}
+\Xi \left({ \delta\B \cdot \nab \B + \B \cdot \delta\nab \B \over 4\pi}\right)_\perp\\
\stepcounter{subeqn}\label{dB_t}
&&{\partial \delta \B\over\partial t} = \nab \times (\delta {\bf v} \times \B + {\bf v}
\times \delta \B), \\
\stepcounter{subeqn}\label{div_dB}
&&\nab\cdot\delta \B =0,\\
\stepcounter{subeqn}\label{dp_par}
&&\delta p_{||}=-2p_{||}\delta B_{||}/B,\\
\stepcounter{subeqn}\label{dp_per}
&&\delta p_\perp=p_\perp\delta B_{||}/B,
\end{eqnarray}
where $\delta {\bf v}=\partial \xxi/\partial t$,
$\Xi=1+2(c^2_\perp-c^2_{||})/v^2_{\rm A}$, and we ignore the
perturbation in the plasma density, i.e. $\delta\rho\simeq 0$. Here
$\rho, p_{||}, p_\perp, {\bf v},$ and $\B$ are the unperturbed quantities
while $\delta p_{||}, \delta p_\perp, \delta{\bf v},$ and $\delta\B$ are
the perturbed quantities. Equations (\ref{dp_par}) and
(\ref{dp_per}) are calculated from the two equations of state for
$p_{||}$ and $p_\perp$ that are known as double adiabatic equations,
${d \over dt}(p_{||}B^2/\rho^3)=0$ and ${d\over dt}(p_\perp/(\rho
B))=0$ \cite{CGL56}. $\Xi=1$ if the pressure is isotropic, i.e
$p_{||}=p_\perp=p$.
To avoid complexities, we shall ignore the rotation of the star and
consequently neglect both the toroidal field $B_\phi$ and velocity
$v_\phi$. These assumptions simplify our calculations significantly,
and, we believe, allow the model to retain the important physics.
Furthermore, we do not consider a jump condition in the enhanced
density region as discussed by \cite{LN04}. As a result, we do not
address Rayleigh-Taylor and/or Kelvin-Hemholtz instabilities.
We expand the MHD equations (\ref{eq-mhd1}) in the orthogonal
coordinate system ($\mu, \nu, \phi$), where $\mu=\cos\theta/r^2$ is
the magnetic field-aligned coordinate, $\nu=\sin^2\theta/r$
numerates magnetic shells in the direction perpendicular to the
field line, and $\phi$ is the azimuthal coordinate. The components
of the metric are $h_\mu=h_\nu h_\phi$,
$h_\nu=r^2/\sin\theta\sqrt{1+3\cos^2\theta}$, and
$h_\phi=r\sin\theta$ where ($r,\theta,\phi$) is the spherical
coordinate. The metric component $h_\mu$ allows a convenient
representation of the dipolar magnetic field in the form
$\B=\mu^{\rm mag}/h_\mu {\bf e}_\mu$ where $\mu^{\rm mag}$ is the
magnetic dipole moment of the star. Note that, for a dipolar
magnetic field $\B=B_p{\bf e}_\mu$, $B_p h_\mu=\mu^{\rm mag}$ is constant
which leads to $\nab\times\B=0$. We further assume that the ambient
flow velocity is along magnetic field lines, i.e. ${\bf v}=v_p{\bf e}_\mu$.
Assuming \stepcounter{sub} \begin{equation} \delta (\mu,\nu,\phi,t) \sim \delta(\nu) e^{i k\mu}
e^{-i\omega t} e^{im\phi}. \end{equation} we can reduce the equations of motion
(\ref{eq-mhd1}) to one second order differential equation for
$\delta v_\nu$ as
\begin{widetext}
\stepcounter{sub}
\begin{eqnarray}\label{diff-v-nu}
&&\frac{d^2\delta v_\nu}{d \nu^2}
+ F(\mu,\nu) \frac{d\delta v_\nu}{d \nu}
+ G(\mu,\nu) \delta v_\nu=0,\\
\stepcounter{subeqn}
&&
F(\mu,\nu)=\frac{1}{(c^2_\perp + v_A^2)\eta_1}\left[ (c^2_\perp+v^2_A)
\partial_\nu(\eta_1+ \eta_2) +
2\Xi {v^2_{\rm A}\over h^2_\mu}\eta_1\partial_\nu \ln h_\mu
+{2ik v_p \eta_1 (3c^2_{||}-c^2_\perp) \partial_\nu \ln h_\mu\over
h_\mu (i\omega_D+K_\mu-\nab\cdot{\bf v}_p)}\right]
,\\
\stepcounter{subeqn}
&&G(\mu,\nu)=\frac{1}{(c^2_\perp+v^2_A)\eta_1}
\left[ (c^2_\perp+ v^2_A) \partial_\nu\eta_2
+2\Xi {v^2_A\over h^2_\mu}\eta_2\partial_\nu \ln h_\mu
- (i\omega_D-K_\nu)h_\nu
-{h_\nu \Xi v^2_A \over h^2_\mu}{k^2+(\partial_\mu \ln h_\nu)^2
\over i\omega_D+K_\nu-\nab\cdot v_p}
\right. \no\\
&&\hspace{6cm} \left.
+{2v_p\partial_\nu \ln h_\mu/h_\mu \over i\omega_D-K_\mu-\nab\cdot v_p}
\left( ik(3c^2_{||}-c^2_\perp)\eta_2 - \partial_\nu(h_\mu v_p)/h_\nu\right)
\right] \,,\\
\stepcounter{subeqn}
&&\eta_1= (h_\phi Q/h_\mu)/[ (i\omega_D - K_\mu)Q
- (m^2/h^2_\phi)(c^2_\perp+v^2_{\rm A})~ (i\omega_D + K_\phi - \nab\cdot{\bf v}_p)],\\
\stepcounter{subeqn}
&&\eta_2=
(\eta_1/ h_\phi) \left[ \frac{1}{h_\mu} \partial_\nu(h_\mu h_\phi)
- \frac{2}{h_\nu} \partial_\nu h_\mu +{1\over h_\mu h_\nu}(h_\mu\partial_\nu v_p
- v_p\partial_\nu h_\mu )
~{i k - \partial_\mu h_\nu /h_\nu\over
i\omega_D + K_\nu - \nab\cdot{\bf v}_p}\right],\\
\stepcounter{subeqn}\label{Q}
&&
Q=\omega_D^2 + K^2_\phi + (i\omega_D-K_\phi)\nab\cdot{\bf v}_p -
{v^2_A\Xi\over h^2_\mu} [ k^2 +\frac{1}{h^2_\phi}(\partial_\mu h_\phi)^2],
\end{eqnarray}
\end{widetext}
where $\omega_D=\omega-k v_p/h_\mu$ is the Doppler shifted
frequency, $K_i= {\bf v}_p\cdot\nab\ln h_i$ ($i=\mu, \nu, \phi$), $v_{\rm
A}=B/\sqrt{4\pi\rho}$ is the Alfv\'en wave velocity,
$c_{||}=\sqrt{p_{||}/\rho}$ and $c_\perp=\sqrt{p_\perp/\rho}$ are
sound velocities parallel and perpendicular to the direction of the
magnetic field. We note that in deriving Eqs. (\ref{diff-v-nu}) we
assumed that $\partial_\nu [p_\perp+B^2/(8\pi)]\simeq 0$.
The shear Alfv\'en resonance happens at $\eta_1=0$ or equivalently
at $Q=0$ leading to \stepcounter{sub} \begin{equation}\label{omega_res} \omega_D^2 + i
\nab\cdot{\bf v}_p ~\omega_D + K^2_\phi -K_\phi \nab\cdot{\bf v}_p - {v^2_A
\Xi\over h^2_\mu} [ k^2 +\frac{1}{h^2_\phi}(\partial_\mu
h_\phi)^2]=0\,. \end{equation} As a result, resonance frequencies will be given
by
\stepcounter{sub}
\begin{eqnarray}\label{omega_res1}
&&\omega_\pm= k v_p/h_\mu -(i/2) \nab\cdot{\bf v}_p \pm (1/ 2)
\sqrt{\Delta},\\
&&\Delta=
4v^2_A \Xi[k^2+(\partial_\mu\ln h_\phi)^2]/h^2_\mu-( \nab\cdot{\bf v}_p-2K_\phi)^2
.\no
\end{eqnarray}
For a rectilinear configuration, resonance eigenfrequencies Eq.
(\ref{omega_res1}) for an incompressible plasma flow along the field
lines, $\nab\cdot{\bf v}_p=0$, with an isotropic pressure,
$p_{||}=p_\perp=p$, reduce to ones we obtained in paper I, and as
expected: $\omega_\pm= k( v_p \pm v_A)$.
Equation (\ref{omega_res1}), however, shows that whenever the
ambient flow is compressible, i.e. $\nab\cdot{\bf v}_p\ne 0$, and/or
$\Delta <0$, waves do not propagate and an instability develops. In
general, the ambient plasma is fairly incompressible, i.e.
$\nab\cdot{\bf v}_p = 0$. However, due to the topological deformation of
the magnetosphere caused by the compressional action of accreting
material, a non-zero density gradient through the plasma, and so
$\nab\cdot{\bf v}_p\ne 0$ can be expected: \stepcounter{sub} \begin{eqnarray}
&&\nab\cdot{\bf v}_p=-{1\over \rho} \frac{d\rho}{dt}=
-{1\over \rho}{\partial \rho \over \partial t} - {1\over \rho} {\bf v}_p\cdot\nab \rho,\no\\
&&\hspace{1.1cm}\simeq - {\bf v}_p\cdot\nab\ln \rho .
\end{eqnarray}
As a result, the resonant mode will grow (decay) if
${\bf v}_p\cdot\nab \rho>0$ ($<0$).
Approximating the plasma inflow velocity with the free fall
velocity $v_p\sim v_{\rm ff}(r)=(2GM/r)^{1/2}$, the growth/decay
timescale will be of order
of $\tau\sim r/v_p=(r^3/2GM)^{1/2}\sim 6\times 10^{-5}\;
(M/M_\odot)^{-1/2}(r/10\,{\rm km})^{3/2}$ s.
Therefore, the closer to the star the faster the
instability develops.
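A quick numerical check of this estimate, in cgs units and assuming $v_p\sim v_{\rm ff}$:

```python
from math import sqrt

G = 6.674e-8        # gravitational constant, cgs
M_SUN = 1.989e33    # solar mass, g

def growth_timescale(r_cm, mass_g=M_SUN):
    """tau ~ r / v_ff = sqrt(r^3 / (2 G M))."""
    return sqrt(r_cm ** 3 / (2.0 * G * mass_g))

tau = growth_timescale(1.0e6)   # r = 10 km, M = 1 M_sun
```

For a solar-mass star at $r=10$ km this reproduces the quoted $\tau\sim 6\times 10^{-5}$ s, and the $r^{3/2}$ scaling makes the instability develop faster closer to the star.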
The condition $\Delta <0$ is satisfied if either
\stepcounter{sub}
\begin{eqnarray}\label{Delta_eq}
\stepcounter{subeqn}\label{firehose}
&&{\rm I}:~~~ \Xi <0 \rightarrow c^2_{||} > c^2_\perp+v^2_{\rm A}/2,\\
\stepcounter{subeqn}\label{Delta1} &&{\rm II}:~~~ (\nab\cdot{\bf v}_p-2K_\phi)^2>
4v^2_A|\Xi|[k^2+(\partial_\mu\ln h_\phi)^2]/h^2_\mu . \no\\
\end{eqnarray}
Case I, which is known as the firehose instability in the literature,
happens when $p_{||}$ is much larger than $p_\perp +p_{\rm M}$,
where $p_{\rm M}=B^2/(8\pi)$ is the magnetic pressure.
The magnetic field channeling the parallel plasma streams
experiences a similar instability. Whenever the flux tube is
slightly bent, the flowing plasma exerts a centrifugal force that
tends to enhance the initial bending. The field line bending is
proportional to the density of energy in plasma motion along the
magnetic field $\sim \rho v^2_{||}\sim p_{||}$. Recalling Eq.
(\ref{p_par}), the ambient plasma flow would enhance the firehose
instability in the magnetosphere of an accreting neutron star: \stepcounter{sub}
\begin{equation}\label{p_par1} c^2_{||}\simeq {c'}^2_{||} + v^2_p, \end{equation} where
${c'}^2_{||}\simeq (m/\rho) \int f u^2_{||} d^3u$ and we assume
a Maxwellian distribution function, so the cross term vanishes.
Inserting Eq. (\ref{p_par1}) into Eq. (\ref{firehose}), we find \stepcounter{sub}
\begin{equation}\label{firehose1} v^2_p> v^2_{\rm A}/2+ ( c^2_\perp -{c'}^2_{||}
). \end{equation} Therefore, a firehose instability develops whenever condition
(\ref{firehose1}) is satisfied.
For an isotropic pressure, i.e. $\Xi\sim 1$, the firehose
instability would not be expected. However, an instability may
arise if the condition (\ref{Delta1}) is satisfied. For an
incompressible flow, we find that Eq. (\ref{Delta1}) reduces to
\stepcounter{sub}
\begin{equation}\label{cond} v_p > v_{\rm A} \sqrt{1+q^2},
\end{equation}
where $q=k/(\partial_\mu \ln h_\phi)$.
It is necessary to note that this
instability is enhanced whenever both $v_p\ne 0$ and $\partial_\mu
\ln h_\phi\ne 0$. The latter condition is due to the curvature of
magnetic field lines. Therefore, the non-flat topology of the
magnetic field can trigger some MHD instabilities through the
magnetosphere. An interesting note is that the condition
(\ref{Delta1}) is only valid for an isotropic pressure flow.
When the wave transfers energy to the flow, the wave decays, and the
extraction of flow energy by the wave will lead to a growing mode.
For a superAlfv\'enic flow, the extraction of energy from the flow
and growing MHD waves is very likely \cite{JNR97}. In this case
also the growth/decay timescale will be as order of $\tau\sim
r/v_p=(r^3/2GM)^{1/2}\sim 6\times 10^{-5}\;
(M/M_\odot)^{-1/2}(r/10\,{\rm km})^{3/2}$ s.
Furthermore, by approximating the Alfv\'en velocity by $v_{\rm
A}\sim B(r)/\sqrt{4\pi \rho_{\rm ff}}$ where $\rho_{\rm
ff}=\dot{M}/(v_{\rm ff}~ 4\pi r^2)$ is the free fall mass density
($v_p\sim v_{\rm ff}(r)=\sqrt{2GM/r}$), the instability condition
(\ref{cond}) is satisfied for distance further than \stepcounter{sub}
\begin{equation}\label{cond2} r>1.7\times 10^6\; {\rm cm}
\;(1+q^2)^{2/7}\mu^{4/7}_{26} \dot{M}_{17}^{-2/7}
(M/M_\odot)^{-1/7}. \end{equation} Therefore, this instability is very likely
to develop at a position where the accretion disk can distort the
dipolar magnetosphere. Here $\mu_{26}$ is the magnetic field dipole
moment at the surface of the star in units of $10^{26}$ G cm$^{3}$ and
$\dot{M}_{17}$ is the mass accretion rate in units of $10^{17}$ g
s$^{-1}$. We calculate Eq. (\ref{cond2}) for a solar mass neutron
star with 10 km radius.
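The coefficient in Eq. (\ref{cond2}) can be checked numerically by solving the instability condition (\ref{cond}) for $r$ with the stated fiducial values (cgs units):

```python
from math import sqrt

G = 6.674e-8       # gravitational constant, cgs
M = 1.989e33       # 1 solar mass, g
mu = 1.0e26        # magnetic dipole moment, G cm^3
mdot = 1.0e17      # mass accretion rate, g/s
q = 0.0

# v_ff > v_A * sqrt(1 + q^2), with v_A^2 = mu^2 v_ff / (r^4 mdot) and
# v_ff = sqrt(2 G M / r), solves to
# r > [(1 + q^2) mu^2 / (mdot sqrt(2 G M))]^(2/7).
r_crit = ((1.0 + q ** 2) * mu ** 2 / (mdot * sqrt(2.0 * G * M))) ** (2.0 / 7.0)
```

For $\mu_{26}=\dot{M}_{17}=1$, $q=0$, and $M=M_\odot$, this gives $r_{\rm crit}\approx 1.7\times 10^6$ cm, in agreement with Eq. (\ref{cond2}).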
To summarize, we believe we have shown that there are generic plasma
instabilities associated with the radial position where the
inflowing material of the accretion disk leads to a distortion of
the inner magnetosphere of the neutron star and field aligned plasma
flows. We suspect that these instabilities may be relevant in the
understanding of some details of observed QPOs in X-ray binary
systems, particularly when linked to shear Alfv\'en waves. The MHD
interaction of the infalling plasma with the neutron star's
magnetosphere can alter the topology of the star's inner
magnetosphere. The plasma flows and topology lead to the possibility
of: (1) firehose instabilities associated with the pressure anisotropy
produced by the plasma flow; (2) convective growth of waves in the
plasma flow. The unstable modes might produce shear Alfv\'en
resonances with large amplitude giving strong quasi-periodic
variations in X-ray fluxes.
\begin{acknowledgments}
This research was supported by the Natural Sciences and
Engineering Research Council of Canada (NSERC).
\end{acknowledgments}
\def\aj{{AJ}}
\def\araa{{ARA\&A\ }}
\def\apj{{ApJ\ }}
\def\apjl{{ApJ\ }}
\def\apjs{{ApJS\ }}
\def\apss{{Ap\&SS}}
\def\aap{{A\&A\ }}
\def\aapr{{A\&A~Rev.}}
\def\aaps{{A\&AS}}
\def\azh{{AZh}}
\def\baas{{BAAS}}
\def\jrasc{{JRASC}}
\def\memras{{MmRAS}}
\def\mnras{{MNRAS\ }}
\def\pra{{Phys.~Rev.~A}}
\def\prb{{Phys.~Rev.~B}}
\def\prc{{Phys.~Rev.~C\ }}
\def\prd{{Phys.~Rev.~D\ }}
\def\pre{{Phys.~Rev.~E}}
\def\prl{{Phys.~Rev.~Lett.\ }}
\def\pasp{{PASP}}
\def\pasj{{PASJ\ }}
\def\qjras{{QJRAS}}
\def\skytel{{S\&T}}
\def\solphys{{Sol.~Phys.}}
\def\sovast{{Soviet~Ast.\ }}
\def\ssr{{Space~Sci.~Rev.\ }}
\def\zap{{ZAp}}
\def\nat{{Nature\ }}
\def\iaucirc{{IAU~Circ. No.}}
\def\aplett{{Astrophys.~Lett.}}
\def{Astrophys.~Space~Phys.~Res.}{{Astrophys.~Space~Phys.~Res.}}
\def{Bull.~Astron.~Inst.~Netherlands}{{Bull.~Astron.~Inst.~Netherlands}}
\def\fcp{{Fund.~Cosmic~Phys.}}
\def\gca{{Geochim.~Cosmochim.~Acta}}
\def\grl{{Geophys.~Res.~Lett.}}
\def\jcp{{J.~Chem.~Phys.}}
\def\jgr{{J.~Geophys.~Res.}}
\def{J.~Quant.~Spec.~Radiat.~Transf.}{{J.~Quant.~Spec.~Radiat.~Transf.}}
\def{Mem.~Soc.~Astron.~Italiana}{{Mem.~Soc.~Astron.~Italiana}}
\def\nphysa{{Nucl.~Phys.~A}}
\def\nphysb{{Nucl.~Phys.~B\ }}
\def\physrep{{Phys.~Rep.}}
\def\physscr{{Phys.~Scr}}
\def\planss{{Planet.~Space~Sci.}}
\def\procspie{{Proc.~SPIE}}
\section{Introduction}
In their influential paper \cite{AB}, Atiyah and Bott used two--dimensional
Yang--Mills theory to compute cohomology of the moduli space $\mathcal N$ of stable
holomorphic bundles over a Riemann surface $F$. In essence, the computation
was inspired by the idea that the Yang--Mills functional in this dimension
should be an equivariantly perfect Morse--Bott function (this was proved
later in full generality by Daskalopoulos \cite{Da}).
A real structure $\sigma: F\to F$ induces a real structure $\sigma^{\#}$ on
the moduli space $\mathcal N$. Understanding the structure of the real moduli space
$\mathcal N' = \operatorname{Fix}\,(\sigma^{\#})$ is an important but subtle problem. It has been
addressed most recently in a series of papers by N.-K.~Ho and C.-C.~M.~Li,
the latest being \cite{HL}. They mainly treat the case when the real
structure $\sigma$ has no fixed points, which leads them to the study of
the Yang--Mills functional on the non--orientable surface $F/\sigma$.
In this paper we consider the other extreme, when $\sigma$ has the maximal
possible number of fixed circles. In this case, the pair $(F,\sigma)$ is
usually referred to as an $M$--curve. More specifically, we work with the
moduli space $\mathcal N$ of stable holomorphic bundles of rank 2 over $F$ with
fixed non--trivial determinant, and with the associated real moduli space
$\mathcal N'$. Instead of the infinite dimensional Yang--Mills functional we utilize
a finite dimensional Morse--Bott function as in Thaddeus \cite{T}. The
perfection of the function in Thaddeus' paper was suggested by the work of
Frankel \cite{F} while the perfection of ours is suggested by Duistermaat's
paper \cite{D}.
For technical reasons, in this paper we will only take up the case of genus
two $M$--curves, hoping to address the case of higher genera elsewhere. For
$M$--curves of genus two, the second author \cite{W1} described $\mathcal N'$
algebraically as the intersection of two quadrics in the five--dimensional
real projective space; however, it proved to be rather difficult to extract
any further information about the topology of $\mathcal N'$ from that description.
The Morse theoretic approach of this paper, on the other hand, gives a
complete calculation of the integral cohomology of $\mathcal N'$.
\begin{theorem}\label{T:main}
Let $(F,\sigma)$ be a genus two $M$--curve and $\mathcal N'$ the real moduli space
of stable holomorphic bundles of rank 2 over $F$ with fixed non--trivial
determinant. Then, at the level of graded abelian groups, there is an
isomorphism $H^* (\mathcal N',\mathbb Z) = H^* (S^1 \times S^1 \times S^1,\mathbb Z)$.
\end{theorem}
It should be mentioned that the above isomorphism does not hold at the
level of cohomology rings, see Remark \ref{R:rings}.
Parts of this project were accomplished while we attended the Summer 2006
Session of the Park City/IAS Mathematics Institute. We express our
appreciation to the organizers for providing a stimulating environment.
\section{Real moduli spaces over curves}
Let $F$ be a compact surface of genus $g \ge 2$ and fix a point $p \in F$.
Denote by $\mathcal N$ the moduli space of stable holomorphic bundles $\mathcal E \to F$
of rank 2 with determinant $\L_p^{-1}$, where $p \in F$ is viewed as a
divisor. Then $\mathcal N$ is a smooth complex manifold of real dimension $6g - 6$,
modeled on the complex vector space $H^1 (F,\operatorname{End} \mathcal E)$ by the deformation
theory.
Let $\sigma: F \to F$ be a real structure on $F$ whose real part $F' =
\operatorname{Fix}\,(\sigma)$ is non--empty. Then $F'$ contains at least one circle,
but may contain as many as $g + 1$ circles. In the latter case, the pair
$(F,\sigma)$ is called an $M$--curve. Choose $p \in F'$; then $\sigma: F
\to F$ induces an involution ${\sigma^{\#}}: \mathcal N \to \mathcal N$ by the formula
\[
\sigma^{\#}\,[\mathcal E] = [\,\sigma^*\,\overline{\mathcal E}\,],
\]
where $\overline{\mathcal E}$ stands for the complex conjugate of $\mathcal E$. Note that
since $\sigma: F \to F$ is orientation reversing, $\sigma^*{\L_p} =
\overline{\L}_p$, thus making the complex conjugation in the above formula
necessary.
Note that the linear map $H^1 (F,\operatorname{End} \mathcal E) \to H^1(F,\operatorname{End} (\sigma^*\overline{\mathcal E}))$
induced by $\sigma$ is a complex conjugation, therefore, the involution
$\sigma^{\#}: \mathcal N \to \mathcal N$ is anti-holomorphic and hence is a real structure
on $\mathcal N$. It follows that the real moduli space $\mathcal N' = \operatorname{Fix}\, (\sigma^{\#})$
is a closed smooth manifold of dimension $3g - 3$.
\begin{proposition}
The manifold $\mathcal N'$ is orientable.
\end{proposition}
\begin{proof}
According to \cite{AB}, the moduli space $\mathcal N$ is simply connected and spin.
Therefore, the real structure $\sigma^{\#}$ must be compatible with the
unique spin structure on $\mathcal N$ in the sense of \cite{W2}. The main result
in \cite{W2} then applies to infer that $\mathcal N'$ is orientable. Note that
$\sigma^{\#}$ need not preserve the spin structure in the usual sense,
since it is orientation reversing when the complex dimension of $\mathcal N$ is
odd, that is, when $g$ is even.
\end{proof}
\section{Representation varieties}
Let $F_0$ be the surface $F$ punctured at $p \in F$. The theorem of
Narasimhan and Seshadri \cite{NS} can be used to identify $\mathcal N$ with the
representation variety $\mathcal M$ which consists of the conjugacy classes of
$SU(2)$ representations of $\pi_1 (F_0) = \pi_1 (F_0, x_0)$ sending
$[\,\partial F_0\,] \in \pi_1 (F_0)$ to $-1 \in SU(2)$ (the latter condition
does not depend on the choice of basepoint because $-1$ is a central
element in $SU(2)$).
Given a real structure $\sigma: F\to F$ with non--empty $F'= \operatorname{Fix}\,(\sigma)$
and $p \in F'$ as above, choose a basepoint $x_0 \in F' \cap F_0$. Then we
have an induced involution $\sigma_*: \pi_1 (F_0) \to \pi_1 (F_0)$, which
in turn induces an involution $\sigma^*: \mathcal M \to \mathcal M$ by the formula
\[
\sigma^*\,[\alpha] = [\,\alpha\circ\sigma_*\,].
\]
That $\sigma^*$ is well defined follows from the fact that $\sigma_*\,
[\,\partial F_0\,] = [\,\partial F_0\,]^{-1} = -1$.
\begin{lemma}
Let $\phi: \mathcal M \to \mathcal N$ be the Narasimhan--Seshadri diffeomorphism; then
the following diagram commutes:
\[
\begin{CD}
\mathcal M @> \sigma^* >> \mathcal M \\
@V \phi VV @V \phi VV \\
\mathcal N @> \sigma^{\#} >> \mathcal N
\end{CD}
\]
\end{lemma}
\medskip
\begin{proof}
The Narasimhan--Seshadri correspondence assigns to every $[\alpha] \in
\mathcal M$ a stable holomorphic bundle $\mathcal E_{\alpha} \to F$ as follows. Let
$\tilde F_0 \to F_0$ be the universal covering space of $F_0$ and
consider the holomorphic bundle $\mathcal E \to F_0$ with
\[
\mathcal E\; =\; \tilde F_0\, \times_{\pi_1(F_0)}\, \mathbb C^2,
\]
where $\pi_1 (F_0)$ acts on $\mathbb C^2$ via $\alpha: \pi_1 (F_0) \to
SU(2)$. Obviously, $\det \mathcal E$ is trivial. Since the holonomy of $\alpha$
along the loop $\partial F_0$ is fixed, we can trivialize $\mathcal E$ near the boundary
$\partial F_0$. Glue $D^2 \times \mathbb C^2$ in using the transition function
$z^{-1}$ along $\partial F_0$. The result is the stable bundle $\mathcal E_{\alpha}
\to F$ with the determinant $\det\mathcal E_{\alpha} = \L^{-1}_p$ yielding the
Narasimhan--Seshadri correspondence.
For any given $[\alpha] \in \mathcal M$ we have $\sigma^{\#}\,[\mathcal E_{\alpha}] =
[\,\sigma^*\,\overline{\mathcal E}_{\alpha}\,] = [\,\mathcal E_{\sigma^* \overline \alpha}\,]$,
where $\overline \alpha$ is the complex conjugate of $\alpha$. However, for
any matrix
\[
A\; =\;
\left[\begin{array}{cc}
a & b\\
- \overline b & \overline a
\end{array}
\right]
\;\in\; SU(2),
\]
\noindent
its complex conjugate
\[
\overline A \; =\;
\left[\begin{array}{cc}
\overline a & \overline b\\
- b & a
\end{array}
\right]
\;=\;
\left[\begin{array}{cc}
0 & 1\\
-1 & 0
\end{array}
\right]
\;A\;
\left[\begin{array}{cc}
0 & 1\\
-1 & 0
\end{array}
\right]^{-1}
\]
\medskip\noindent
is the matrix conjugate of $A$ via
\[
j\; =\;
\left[\begin{array}{cc}
0 & 1\\
-1 & 0
\end{array}
\right]
\;\in\; SU(2)
\]
\noindent
(we use the standard identification between $SU(2)$ matrices and unit
quaternions). This means that $\overline \alpha$ and $\alpha$ are conjugate
representations, and therefore the bundles $\mathcal E_{\sigma^* \overline \alpha}$
and $\mathcal E_{\sigma^* \alpha}$ are isomorphic.
\end{proof}
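The matrix identity at the heart of the proof above, $\overline A = j\,A\,j^{-1}$, can be spot-checked numerically. The sketch below (our illustration, assuming NumPy is available) verifies it for a sample $SU(2)$ matrix.

```python
import numpy as np

# SU(2) matrix A = [[a, b], [-conj(b), conj(a)]] with |a|^2 + |b|^2 = 1,
# and the quaternion j represented as the matrix [[0, 1], [-1, 0]].
a, b = 0.6, 0.8j
A = np.array([[a, b], [-np.conj(b), np.conj(a)]])
J = np.array([[0.0, 1.0], [-1.0, 0.0]])

# Complex conjugation of A coincides with conjugation by j: conj(A) = J A J^{-1}.
assert np.allclose(np.conj(A), J @ A @ np.linalg.inv(J))
print("conj(A) = J A J^{-1} verified")
```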
Denote by $\mathcal M'$ the fixed point set of the involution $\sigma^*: \mathcal M \to
\mathcal M$; then the above lemma implies that $\mathcal M'$ and $\mathcal N'$ are diffeomorphic.
\begin{corollary}\label{C:orient}
The moduli space $\mathcal M'$ is a smooth closed orientable manifold of
dimension $3g - 3$.
\end{corollary}
In conclusion, note that $\mathcal M$ is a symplectic manifold with symplectic
form $\omega: H^1 (F;\operatorname{ad}\rho)\,\otimes\,H^1 (F;\operatorname{ad}\rho)\to \mathbb R$
given by $\omega(u,v) = - 1/2\,\operatorname{tr}\, (u\,\cup\,v)\,[F]$, see Goldman
\cite{G}.
\begin{lemma}\label{L:anti}
The map $\sigma^*: \mathcal M \to \mathcal M$ is a real structure with respect to the
symplectic form $\omega$, that is, $\sigma^* \omega = - \omega$.
\end{lemma}
\begin{proof}
This result is obtained by the following straightforward calculation\,:
\begin{multline}\notag
(\sigma^* \omega) (u,v) =
\omega (\sigma^* u,\sigma^* v) = -1/2\,\operatorname{tr}\, (\sigma^* u\,\cup\,\sigma^* v)
\,[F] \\ = -1/2\,\operatorname{tr}\, (\sigma^* (u\,\cup\,v))\,[F] = -1/2\,\operatorname{tr}\, (u\,\cup\,v)
\,\sigma_*[F] = -\omega(u,v).
\end{multline}
In the last equality, we used the fact that the map $\sigma: F \to F$ is
orientation reversing.
\end{proof}
\section{The case of $g = 2$}
Let $(F,\sigma)$ be a genus two $M$--curve embedded in $\mathbb R^3$ as
shown in Figure \ref{fig2}, with $\sigma: F \to F$ acting as the reflection
in the horizontal plane fixing the obvious three circles. Let $F_0$ be
a once punctured surface $F$, the puncture $p \in F$ being a real point
on the left circle of $F'$ (the other two options for positioning $p$
will be treated in Section \ref{S:proof}). Then $\pi_1 (F_0)$ is the free
group
\[
\pi_1 (F_0) = \langle\,a_1,b_1,a_2,b_2\,|\qquad\rangle
\]
whose generators $a_1$, $a_2$, $b_1$, $b_2$ are shown in the picture.
Also shown is the curve $c = [a_1,b_1] \cdot [a_2,b_2]$. All of the
above curves are oriented clockwise with respect to the projection
shown in Figure \ref{fig2}.
\bigskip
\begin{figure}[!ht]
\psfrag{p}{$p$}
\psfrag{c}{$c$}
\psfrag{a1}{$a_1$}
\psfrag{a2}{$a_2$}
\psfrag{b1}{$b_1$}
\psfrag{b2}{$b_2$}
\centering
\includegraphics{fig2.eps}
\caption{}
\label{fig2}
\end{figure}
The moduli space $\mathcal M$ can now be described as follows. Let us consider
the smooth map $\mu: SU(2)^4 \to SU(2)$ given by $\mu (A_1,B_1,A_2,B_2)
= - [A_1,B_1]\cdot [A_2,B_2]$. Since $1\in SU(2)$ is a regular value of
$\mu$, its preimage $\mu^{-1} (1)$ is a smooth manifold. It contains no
reducibles, that is, 4--tuples $(A_1,B_1,A_2,B_2)$ whose entries commute
with each other, because the latter would contradict the equation
$[A_1,B_1] \cdot [A_2,B_2] = -1$. Therefore, the action of $SO(3)
= SU(2)/\pm 1$ on $\mu^{-1}(1)$ by conjugation is free, and its quotient
space is then the smooth six--dimensional manifold $\mathcal M$.
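As a concrete sanity check (ours, not from the paper), quadruples such as $(j,\,i,\,z,\,w)$ with unit complex $z$, $w$, which appear later as the critical set $S_2$, satisfy the defining relation $[A_1,B_1]\cdot[A_2,B_2] = -1$. This can be verified in the standard $2\times 2$ matrix representation of the unit quaternions:

```python
import numpy as np

# Standard 2x2 complex representation of the unit quaternions i and j.
I2 = np.eye(2)
qi = np.array([[1j, 0], [0, -1j]])
qj = np.array([[0.0, 1.0], [-1.0, 0.0]])

def comm(A, B):
    """Group commutator [A, B] = A B A^{-1} B^{-1}."""
    return A @ B @ np.linalg.inv(A) @ np.linalg.inv(B)

# Unit complex numbers z, w embed as diagonal SU(2) matrices and commute.
z = np.diag([np.exp(0.7j), np.exp(-0.7j)])
w = np.diag([np.exp(1.3j), np.exp(-1.3j)])

# [j, i] = -1 while [z, w] = 1, so [A1, B1] . [A2, B2] = -1 as required.
assert np.allclose(comm(qj, qi), -I2)
assert np.allclose(comm(z, w), I2)
assert np.allclose(comm(qj, qi) @ comm(z, w), -I2)
```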
A straightforward calculation shows that the induced action
$\sigma_*: \pi_1 (F_0) \to \pi_1 (F_0)$ is given on the generators by the
formulas
\begin{alignat*}{1}
& \sigma_* (a_1) = a_1,\quad\sigma_*(b_1) = b_1^{-1}, \\
&\sigma_* (a_2) = b_1^{-1} c\,b_2\,a_2\,b_2^{-1} b_1,\quad
\sigma_* (b_2) = b_1^{-1} b_2^{-1} b_1.
\end{alignat*}
\noindent
In practical terms, $\mathcal M$ consists of the conjugacy classes $[A_1,B_1,A_2,
B_2]$ of quadruples $(A_1,B_1,A_2,B_2)$ such that $[A_1,B_1] \cdot [A_2,
B_2] = -1$, and the real structure $\sigma^*: \mathcal M \to \mathcal M$ is given by
\[
\sigma^* [A_1,B_1,A_2,B_2] = [B_1\,A_1\,B_1^{-1},\;B_1^{-1},\;
-B_2\,A_2\,B_2^{-1},\;B_2^{-1}].
\]
\noindent
As we already know, the fixed point set $\mathcal M'$ of $\sigma^*: \mathcal M \to \mathcal M$ is
a smooth orientable manifold of dimension 3.
\section{The function}
Let $f: \mathcal M \to \mathbb R$ be the function on the moduli space $\mathcal M$ defined
by the formula
\[
f\,([A_1,B_1,A_2,B_2]) = \operatorname{tr}\, (B_1)/2.
\]
This is a smooth function whose range is $[-1,1]$, and $f^{-1}(-1,1)$ is
acted upon by $S^1$ as follows. After conjugation if necessary, every
element of $f^{-1}(-1,1)$ can be written in the form $[A_1,B_1,A_2,B_2]$
with $B_1 = e^{i\beta}$ and $0 < \beta < \pi$. The circle action is then
given by $e^{i\phi}: [A_1,B_1,A_2,B_2] \mapsto [A_1\,e^{i\phi},B_1,A_2,
B_2]$.
\begin{lemma}
This is a well defined action on $f^{-1} (-1,1)$.
\end{lemma}
\begin{proof}
Any other choice of representative in $[A_1,B_1,A_2,B_2]$ with $B_1 =
e^{i\gamma}$ and $0 < \gamma < \pi$, will have the property that
$e^{i\gamma} = g e^{i\beta} g^{-1}$ for some $g \in SU(2)$. Therefore,
$\gamma = \beta$ and $g$ is a unit complex number. Because of that,
$(g A_1 g^{-1}) e^{i\phi} = g (A_1 e^{i\phi}) g^{-1}$, making the
action well defined. Of course, if $B_1 = \pm 1$, the conjugating
element $g$ can be any $SU(2)$ matrix, and the above argument fails.
\end{proof}
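The commutation argument in the proof above can also be checked numerically; in the sketch below (our illustration) $g$ and $e^{i\phi}$ are both diagonal $SU(2)$ matrices, hence they commute and the two ways of computing the action agree.

```python
import numpy as np

# g and e^{i phi} are diagonal SU(2) matrices (unit complex numbers),
# so they commute, making the circle action well defined.
g = np.diag([np.exp(0.4j), np.exp(-0.4j)])
e_phi = np.diag([np.exp(1.1j), np.exp(-1.1j)])
A1 = np.array([[0.6, 0.8j], [0.8j, 0.6]])  # a sample SU(2) matrix

# Acting first and conjugating, or conjugating first and acting, agree.
assert np.allclose((g @ A1 @ np.linalg.inv(g)) @ e_phi,
                   g @ (A1 @ e_phi) @ np.linalg.inv(g))
```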
According to Goldman \cite{G}, the above circle action preserves the
symplectic form $\omega$ on $\mathcal M$ and $\arccos f$ is its moment map
(up to a factor of $i$).
It is clear that $f\circ\sigma^* = f$ hence $f: \mathcal M \to \mathbb R$ is
invariant with respect to $\sigma^*: \mathcal M \to \mathcal M$. The restriction of $f$
to $\mathcal M'$ will be denoted by $f': \mathcal M'\to \mathbb R$. The range of $f'$
is again $[-1,1]$. The circle action on $f^{-1}(-1,1)$ is not defined
on $(f')^{-1}(-1,1)$. Nevertheless, it gives rise to the residual
$\mathbb Z/2$ action $r: \mathcal M' \to \mathcal M'$ defined on the entire moduli space $\mathcal M'$
(and in fact on the full moduli space $\mathcal M$) by the formula
\begin{equation}\label{E:res}
r\,([A_1,B_1,A_2,B_2]) = [-A_1,B_1,A_2,B_2].
\end{equation}
\section{The critical submanifolds of $f'$}
Thaddeus \cite{T} proved that $f: \mathcal M \to \mathbb R$ is a Morse--Bott
function and described its critical submanifolds. These critical
submanifolds are acted upon by $\sigma^*: \mathcal M \to \mathcal M$, and the fixed point
sets of this action are exactly the critical submanifolds of $f':
\mathcal M' \to \mathbb R$. These are of two types.
The first type is comprised of the manifolds $S_1'$, $S_3' \subset \mathcal M'$ on
which $f'$ achieves its absolute minimum and maximum. To be
precise, the absolute minimum $S_1 = f^{-1}(-1)$ of $f: \mathcal M \to \mathbb R$
is a copy of $SU(2)$ consisting of quadruples $[A_1,-1,i,j]$ parametrized
by $A_1 \in SU(2)$. The action
\[
\sigma^*[A_1,-1,\,i,\,j] = [A_1,-1,\,i,-j] = [i\,A_1\,i^{-1},-1,\,i,\,j]
\]
corresponds to the map $\operatorname{Ad}_{\,i}: SU(2) \to SU(2)$ sending $A_1$ to
$i\,A_1\,i^{-1}$. The fixed point set of this action, which is $S_1'$, is
the circle consisting of quadruples $[e^{i\phi},-1,\,i,\,j]$ parametrized
by $e^{i\phi}$. Similarly, $S_3 = f^{-1}(1)$, where $f: \mathcal M \to \mathbb R$
achieves its absolute maximum, gives rise to the circle $S_3'$
consisting of quadruples $[e^{i\psi},1,\,i,\,j]$.
\begin{lemma}
The critical submanifolds $S_1'$ and $S_3'$ of $f'$ are non-degenerate in
the Morse--Bott sense, and their indices are respectively 0 and 2.
\end{lemma}
\begin{proof}
The Hessian of $f$ is negative definite on the normal bundle of $S_3
\subset \mathcal M$. But then its restriction to the normal bundle of $S_3'
\subset \mathcal M'$, which is the Hessian of $f'$, is also negative definite.
In particular, $S_3'$ is non-degenerate, and its Morse--Bott index equals
the codimension of $S_3'$ in $\mathcal M'$, which is 2. The argument for $S_1'$
is analogous.
\end{proof}
Within $f^{-1}(-1,1)$, the critical points of $f: \mathcal M \to \mathbb R$
coincide with those of the moment map $\arccos f$, hence with the fixed
points of the circle action on $f^{-1}(-1,1)$. They form the submanifold
$S_2 \subset \mathcal M$ which is a 2--torus consisting of quadruples $[j,\,i,\,
z,\,w]$ parametrized by $z$, $w \in \mathbb C$ such that $|z| = |w| = 1$.
Note that $f = 0$ on $S_2$. One can easily see that the fixed point set
$S_2'$ of the involution $\sigma^*: S_2 \to S_2$ given by
\[
\sigma^* [j,\,i,\,z,\,w] = [-j,-i,-z,\,\bar w] = [j,\,i,-\bar z,\,w]
\]
consists of the two circles $[j,\,i,\,\pm i,\,w]$ with $|w| = 1$.
\begin{lemma}
The critical submanifold $S_2'$ of $f'$ is non-degenerate in the
Morse--Bott sense, and its index is equal to 1.
\end{lemma}
\begin{proof}
According to Frankel \cite{F}, the moment map $\arccos f$ is a Morse--Bott
function on $f^{-1}(-1,1)$, and hence so is the function $f$. The involution
$\mathcal M \to \mathcal M$ given by $[A_1,B_1,A_2,B_2] \mapsto [A_1,-B_1,A_2,B_2]$ changes
the sign of $f$. Since $S_2$ is connected, the index of $f$ is half the rank of
the normal bundle of $S_2 \subset \mathcal M$, or 2. The involution $\sigma^*$ is
antisymplectic, see Lemma \ref{L:anti}, hence we can apply
\cite[Proposition 2.2]{D} to the moment map $\arccos f$. It tells us that
$\arccos f'$ and hence $f'$ are Morse--Bott on $S_2'$, and the index of
$f'$ is half that of $f$.
\end{proof}
\begin{lemma}\label{L:action}
The residual involution $r: \mathcal M' \to \mathcal M'$ acts as the $180^{\circ}$ rotation
on each of the circles $S_1'$ and $S_3'$, and acts trivially on $S_2'$.
\end{lemma}
\begin{proof}
The circle $S_3'$ consists of the quadruples $[e^{i\psi},1,i,j]$ acted upon
by $r$ as $[e^{i\psi},1,i,j] \to [-e^{i\psi},1,i,j]$. The case of $S_1'$ is
completely similar. The manifold $S_2'$ consists of the quadruples $[j,i,
\pm i,w]$ where $w$ is a unit complex number. The action of $r$ is given by
$[j,i,\pm i,w]\to [-j,i,\pm i,w] = [j,i,\pm i,w]$, after conjugating by $i$.
\end{proof}
\section{The Morse--Bott spectral sequence}
As we have seen, the critical set of the Morse--Bott function $f': \mathcal M'\to
\mathbb R$ is a disjoint union $S_1'\,\cup\, S_2'\,\cup\, S_3'$, where
the index of $S_p'$ is equal to $p - 1$, and the restriction of $f'$
to each of the $S_p'$ is constant, $f' (S_p') = p - 2$ for
$p = 1, 2, 3$. Let $x_j = j - 3/2$ for $j = 0, 1, 2, 3$, and consider
the filtration
\[
\emptyset\; =\; U'_0\quad \subset\quad U'_1\quad \subset\quad
U'_2\quad \subset\quad U'_3\; =\; \mathcal M'
\]
of $\mathcal M'$ by the open sets $U_j' = (f')^{-1}
(x_0,x_j)$. The Morse Lemma implies that, up to homotopy, $U'_j$ is a
complex obtained from $U'_{j-1}$ by attaching, along its boundary, the
disc bundle associated to the negative normal bundle over
$S'_j$ whose fibers are the negative definite subspaces of the
Hessian of $f'$ on $S'_j$. The $E_1$ term of the Morse--Bott spectral
sequence associated with this filtration is of the form shown in Figure
\ref{fig3}, where the $(p-1)$st column represents $H^* (S_p')$ for
$p = 1, 2, 3$. This spectral sequence converges to $H^*(\mathcal M')$.
\bigskip
\begin{figure}[!ht]
\psfrag{Z}{$\mathbb Z$}
\psfrag{Z2}{$\mathbb Z^2$}
\psfrag{0}{$0$}
\psfrag{p}{$p$}
\psfrag{q}{$q$}
\centering
\includegraphics{fig4.eps}
\caption{}
\label{fig3}
\end{figure}
Observe that the filtration is preserved by the residual $\mathbb Z/2$ action
$r: \mathcal M' \to \mathcal M'$ hence $r$ induces an automorphism $r^*$ of the above
spectral sequence. Our next task will be to compute the differentials,
of which only $d_1$ and $d_2$ are not obviously zero.
\section{Differentials $d_1$}
The differential $d_1: E_1^{0,0} \to E_1^{1,0}$ must vanish because $H^0
(\mathcal M') \ne 0$. In particular, we conclude that $H^0(\mathcal M') = \mathbb Z$ and hence
$\mathcal M'$ is connected. Similarly, the differential $d_1: E_1^{1,1} \to
E_1^{2,1}$ must vanish because $\mathcal M'$ is orientable, see Corollary
\ref{C:orient}, so $H^3 (\mathcal M') = \mathbb Z$. In fact, we will show that the
remaining differentials $d_1$ also vanish but this will take some effort.
Let us first consider the differential $d_1: E_1^{1,0}\to E_1^{2,0}$. It
is represented as the composition
\begin{equation}\label{E:thom1}
H^0 (S_2')\; \to\; H^1 (U_2',U_1')\; \to\; H^1 (U_2')\; \to\;
H^2 (U_3',U_2')\; \to\; H^0 (S_3').
\end{equation}
Here, the second and third arrows are portions of the long exact sequences
of the respective pairs hence they commute with the automorphism $r^*$.
The first and the last arrows are Thom isomorphisms, therefore, their
behavior with respect to $r^*$ is determined by how $r$ acts on $S_2'$
and $S_3'$ and on their negative normal bundles. According to Lemma
\ref{L:action}, the action of $r^*$ on both $H^0(S_2')$ and $H^0(S_3')$
is trivial.
\begin{lemma}\label{L:thom1}
The involution $r$ acts as minus identity on the normal bundle of $S_2'
\subset \mathcal M'$ and, in particular, on its negative normal bundle.
\end{lemma}
\begin{proof}
This will follow from the fact that $r: \mathcal M \to \mathcal M$ given by the formula
(\ref{E:res}) is an involution and that its fixed point
set coincides with $S_2$. To show the latter, suppose that $[A_1,B_1,A_2,
B_2] = [-A_1,B_1,A_2,B_2]$. Then there is $u \in SU(2)$ such that $u A_1
= - A_1 u$ and $u$ commutes with $B_1$, $A_2$ and $B_2$. Since $(A_1,B_1,
A_2,B_2)$ is an irreducible representation, we conclude that $u^2 = 1$ or
$u^2 = -1$. The former cannot happen because then $-A_1 = A_1$ and $A_1 =
0$, a contradiction with $A_1 \in SU(2)$. Therefore, we have $u^2 = - 1$,
and $u = \pm i$ after conjugation. The fact that $u = \pm i$ commutes
with $B_1$, $A_2$ and $B_2$ means that all three of them are unit complex
numbers. In fact, one can conjugate by $j$ if necessary to make $B_1 =
e^{i\beta}$ with $\beta \in [0,\pi]$, perhaps at the expense of changing
the sign of $u$. Conjugate further by a unit complex number to make
$A_1 = a + bj$ with real non-negative $b$, while preserving $u$, $B_1$,
$A_2$ and $B_2$. The relation $[A_1,B_1] = [A_1,B_1]\cdot [A_2,B_2] = -1$
then tells us that, up to conjugation, $A_1 = j$ and $B_1 = i$. Therefore,
the only fixed points of $r: \mathcal M \to \mathcal M$ are of the form $[j,i,z,w]$, with
$z$ and $w$ some unit complex numbers. This is exactly $S_2 \subset \mathcal M$.
\end{proof}
Since the rank of the negative normal bundle to $S_2'$ is one, $r$
reverses orientation on the fiber hence $r^*$ anticommutes with the first
arrow in (\ref{E:thom1}). On the other hand, $S'_3$ is the absolute
maximum hence its negative normal bundle is the same as its normal bundle.
According to Lemma \ref{L:action}, the action of $r$ on $S'_3$ is
orientation preserving. Since $r: \mathcal M' \to \mathcal M'$ preserves orientation
(which follows for example from Lemma \ref{L:thom1}) it must be
orientation preserving on the negative normal bundle of $S_3'$. Therefore,
$r^*$ commutes with the last arrow in (\ref{E:thom1}). In conclusion,
$d_1: E_1^{1,0} \to E_1^{2,0} $ anticommutes with $r^*$ and thus must
vanish.
Finally, let us consider the differential $d_1: E_1^{0,1}\to E_1^{1,1}$.
It is represented as the composition
\begin{equation}\label{E:thom2}
H^1 (S_1')\; \to\; H^1 (U_1',U_0')\; \to\; H^1 (U_1')\;\to\;
H^2 (U_2',U_1')\; \to\; H^1 (S_2').
\end{equation}
As before, the two arrows in the middle commute with $r^*$, and the first
and the last arrows are Thom isomorphisms. The action of $r$ on $S_1'$ is
by the $180^{\circ}$ rotation hence the induced action on $H^1 (S_1')$ is
trivial. Since the negative normal bundle of $S_1'$ is zero dimensional,
there is no Thom class to worry about and we readily conclude that $r^*$
commutes with the first arrow in (\ref{E:thom2}). Concerning the last
arrow, we already know that $r^*$ anticommutes with the Thom class of the
negative normal bundle of $S_2'$. On the other hand, $r$ acts as the
identity on $S'_2$ and hence as the identity on $H^1(S'_2)$. This implies
that $r^*$ anticommutes with the last arrow and hence with the composition
(\ref{E:thom2}). Thus $d_1: E_1^{0,1} \to E_1^{1,1}$ vanishes.
\section{Differential $d_2$}
The results of the previous section imply that $E_2 = E_1$ hence all that
remains to do is compute the differential $d_2: E_2^{0,1}\to E_2^{2,0}$.
We will show that the edge homomorphism $i^*: H^1(\mathcal M') \to H^1 (S_1')$ in
the spectral sequence induced by the inclusion $i: S_1'\to \mathcal M'$ is
surjective. This will imply that $d_2 = 0$ because a generator of
$E_2^{0,1} = H^1(S'_1) = \mathbb Z$ must survive in the $E_{\infty}$ term of the
spectral sequence, hence it must be in the kernel of $d_2$. Note that a
similar argument could be used to show vanishing of $d_1: E_1^{0,1} \to
E_1^{1,1}$ above.
Remember that $S'_1$ consists of the quadruples $[e^{i\phi},-1, i, j] \in
\mathcal M'$. Consider the subset $\mathcal R' \subset \mathcal M'$ which consists of quadruples
$[1,B_1,A_2,B_2]$ fixed by the involution $\sigma^*: \mathcal M' \to \mathcal M'$.
\begin{lemma}
$\mathcal R' \subset \mathcal M'$ is a smoothly embedded 2--sphere which intersects $S_1'
\subset \mathcal M'$ transversely in exactly one point.
\end{lemma}
\begin{proof}
The relation $[1,B_1] \cdot [A_2,B_2] = [A_2,B_2] = -1$ on the points of
$\mathcal R'$ implies that, up to conjugation, $A_2 = i$ and $B_2 = j$. The fact
that $[1,B_1,i,j] \in \mathcal R'$ is fixed by $\sigma^*$ means that $\sigma^*
([1,B_1,i,j]) = [1, B_1^{-1},i,-j] = [1, i\,B_1^{-1} i^{-1},i,j]$ (we
used conjugation by $i$ in the last equality). Therefore, $\mathcal R'$ is
parametrized by $B_1 \in SU(2)$ such that $B_1\,i = i\,B_1^{-1}$. These
are precisely the unit quaternions with no $i$ component; they obviously
comprise an embedded 2--sphere in $\mathcal M'$. The intersection $\mathcal R'\,\cap\,S_1'$
consists of just one point, $[1,-1,i,j]$, and it is obviously transversal.
\end{proof}
Now, the lemma implies that the natural generator of $H^1 (S'_1) = \mathbb Z$
is the image under $i^*$ of the Poincar\'e dual of $\mathcal R'$. This shows
that $i^*: H^1(\mathcal M') \to H^1 (S'_1)$ is surjective and thus completes the
argument.
\section{Proof of Theorem \ref{T:main}}\label{S:proof}
The Morse--Bott spectral sequence associated with the function $f': \mathcal M'
\to \mathbb R$ converges to $H^* (\mathcal M') = H^* (\mathcal N')$. As we showed in the
last two sections, all the differentials in this spectral sequence vanish
and therefore it collapses at the $E_1$ term, $E_1 = E_{\infty}$.
This completes the proof in the case when the puncture belongs to the left
circle of $F'$ as shown in Figure \ref{fig2}.
If the puncture belongs to the right circle of $F'$, the involution
$\sigma^*: \mathcal M \to \mathcal M$ is given by the formula
\[
\sigma^* [A_1,B_1,A_2,B_2] = [-B_1\,A_1\,B_1^{-1},\;B_1^{-1},\;
B_2\,A_2\,B_2^{-1},\;B_2^{-1}],
\]
and Theorem \ref{T:main} follows by simply renaming the variables. In
the remaining case, when $p \in F'$ belongs to the middle circle, the
involution $\sigma^*: \mathcal M \to \mathcal M$ is given by
\[
\sigma^* [A_1,B_1,A_2,B_2] = [B_1\,A_1\,B_1^{-1},\;B_1^{-1},\;
B_2\,A_2\,B_2^{-1},\;B_2^{-1}].
\]
The above proof goes through with minimal changes, which can be safely
left to the reader.
\begin{remark}\label{R:rings}
The regular neighborhood of $S_1'\,\cup\,\mathcal R' = S^1\,\vee\,S^2$ in $\mathcal M'$
is a punctured $S^1 \times S^2$. This implies that $\mathcal M'$ splits into a
connected sum, one of the factors being $S^1 \times S^2$. In particular,
the isomorphism $H^* (\mathcal N') = H^* (S^1 \times S^1 \times S^1)$ of Theorem
\ref{T:main} is not a ring isomorphism.
\end{remark}
\section*{Abstract}
Microbial genome web portals have a broad range of capabilities that
address a number of information-finding and analysis needs for
scientists. This article compares the capabilities of the major
microbial genome web portals to aid researchers in determining
which portal(s) are best suited to solving their information-finding and
analytical needs.
We assessed both the bioinformatics tools and the data
content of BioCyc, KEGG, Ensembl Bacteria, KBase, IMG, and PATRIC.
For each portal,
our assessment compared and tallied the available capabilities. The
strengths of BioCyc include its genomic and metabolic tools,
multi-search capabilities, table-based
analysis tools, regulatory network tools and data, omics data
analysis tools, breadth of data content, and large amount
of curated data.
The strengths of KEGG include its genomic and metabolic tools.
The strengths of Ensembl Bacteria include its genomic tools and large number of genomes.
The strengths of KBase include its genomic tools and metabolic models.
The strengths of IMG include its genomic tools, multi-search
capabilities, large number of genomes, table-based analysis
tools, and breadth of data content.
The strengths of PATRIC include its large number of genomes,
table-based analysis tools, metabolic models, and breadth of
data content.
\section{Introduction}
A number of web portals exist for providing the scientific community
with access to the thousands of microbial genomes that have been
sequenced to date. This article compares the capabilities of the major
microbial genome web portals to aid researchers in determining
which portal(s) best serve their information-finding and
analytical needs.
The power that a genome web portal provides to its users is a function
of what data the portal contains, and of the types of software tools
the portal provides to users for querying, visualizing, and analyzing
the data. Query tools enable researchers to find what they are looking
for. Visualization tools speed the understanding of the information that
is found. Analysis tools enable extraction of new relationships from
the data.
We assess the data content of each portal both according to the types of
data it provides (e.g., does it provide regulatory network
information, protein localization data, or Gene Ontology
annotations?), and according to the number of genomes it provides. We
assess the software tools provided by each portal in several major
areas: genomics tools, metabolic tools, advanced search and analysis
tools, web services, table-based analysis, and user accounts. Omics data analysis
capabilities are also assessed, but are distributed among the
preceding areas. In each area, we enumerate
multiple software capabilities, such as the ability to paint omics data
onto pathway diagrams. We must emphasize that many of the portals
include a significant number of other capabilities that are not within
the purview of this study.
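The tallying procedure just described amounts to counting, per portal, how many capabilities in each area are present. A minimal sketch follows; the portal names come from the text, but the capability entries are illustrative placeholders rather than the paper's actual assessment data.

```python
# Illustrative capability matrix: True means the portal provides the capability.
# Portal names are from the text; the capabilities shown are placeholders.
capabilities = {
    "BioCyc": {"pathway omics painting": True,  "multi-search": True,  "web services": True},
    "KEGG":   {"pathway omics painting": True,  "multi-search": False, "web services": True},
    "PATRIC": {"pathway omics painting": False, "multi-search": True,  "web services": True},
}

# Tally the available capabilities for each portal.
tally = {portal: sum(caps.values()) for portal, caps in capabilities.items()}
print(tally)  # {'BioCyc': 3, 'KEGG': 2, 'PATRIC': 2}
```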
Search tools are a particularly important part of a portal because
they determine the user's ability to find information of interest;
therefore, we provide detailed comparisons of the search tools that each
portal provides for finding genes, proteins, DNA and RNA sites,
metabolites, and pathways. We call these multi-search tools because
they enable the user to search multiple database (DB) fields in combination.
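As a concrete illustration of what we mean by a multi-search, the
sketch below combines several per-field predicates with a logical AND
over a collection of gene records. This is our own minimal Python
sketch; the field names, records, and interface are hypothetical and
do not correspond to any portal's actual implementation.

```python
# Minimal sketch of a "multi-search": combine several database-field
# predicates with a logical AND over a collection of gene records.
# All field names and records below are hypothetical illustrations.

def multi_search(records, **criteria):
    """Return the records satisfying every (field, predicate) pair."""
    return [
        rec for rec in records
        if all(pred(rec.get(field)) for field, pred in criteria.items())
    ]

genes = [
    {"name": "trpA", "replicon": "chromosome", "length": 807},
    {"name": "lacZ", "replicon": "chromosome", "length": 3075},
]

# Combined query: genes on the chromosome longer than 1000 nt.
long_genes = multi_search(
    genes,
    replicon=lambda v: v == "chromosome",
    length=lambda v: v is not None and v > 1000,
)
print([g["name"] for g in long_genes])  # ['lacZ']
```

As the comparison tables below show, the portals layer numeric
constraints and ontology-aware matching on top of this basic
AND-of-fields idea.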
Although user friendliness is a critical aspect of any
website, it is extremely difficult to assess objectively. We have
assessed a small number of relatively objective user friendliness
criteria, such as the types of user documentation available, the
presence of explanatory tooltips (information windows that appear when
the user hovers over regions of the screen), and the speed of the
site's gene page.
Our criteria for inclusion in the comparison were portals with a
perceived high level of usage, large number of genomes, a relatively
rich collection of tools, and sites that are actively maintained and developed.
The portals we compare are BioCyc \cite{MetaCycNAR18} (version 22.0, April 2018),
KEGG \cite{KEGG17} (version 87.1, August 2018),
Ensembl Bacteria \cite{Kersey18} (Release 40, July 2018),
KBase \cite{KBASE18} (versions during August 2018 to October 2018),
IMG \cite{IMG17} (version 5.0 August 2018),
and PATRIC \cite{Wattam14} (version 3.5.21, July 2018).
Related portals that are not included in this comparison are Entrez
Genomes (whose capabilities are similar to Ensembl Bacteria),
MicroScope \cite{Vallenet17} (which uses Pathway Tools for its
metabolic component and therefore has the same functionality as
BioCyc), ModelSEED \cite{Henry10} (which is a metabolic model portal,
not a genome portal), the SEED \cite{Overbeek14} (which has been
inactive for a number of years and was subsumed by the PATRIC
project), MicrobesOnline \cite{Dehal10}, iMicrobe
(\href{https://www.imicrobe.us/}{https://www.imicrobe.us/}, which is
a portal for metagenomes and transcriptomes, not for single genomes),
and Microme (\href{http://www.microme.eu/}{http://www.microme.eu/};
the Microme website largely shut down as of January 2018).
\subsection{Summary of the Portals}
Here we introduce each portal. Note that some portals have
capabilities that are not covered in this comparison.
For each portal we provide a hyperlink to a sample gene page.
\subsubsection*{BioCyc}
BioCyc \cite{BioCyc17,MetaCycNAR16} is a microbial genome web portal that integrates sequenced
genomes with curated information from the biological literature,
with information imported from other biological DBs, and with
computational inferences. BioCyc data include metabolic pathways,
regulatory networks, and gene essentiality data. BioCyc provides
extensive query and visualization tools, as well as tools for omics
data analysis, metabolic path searching, and running metabolic
models. We omit discussion of many BioCyc comparative genomics and
metabolic operations under its Analysis $\rightarrow$ Comparative
Analysis menu. Scientists can use the Pathway Tools software
associated with BioCyc to perform metabolic reconstructions and create
BioCyc-like DBs for in-house genome data.
BioCyc contains information curated from 89,500 publications.
The curated information includes experimentally determined gene
functions and Gene Ontology terms,
experimentally studied metabolic pathways, and experimentally
determined parameters such as enzyme kinetics data and enzyme
activators and inhibitors. Curated information also includes textual
mini-reviews that summarize information about genes, pathways, and
regulation, with citations to the primary literature. The large
amount of curated information within BioCyc is unique with respect to
other genome portals.
\ \\
\noindent Home page: \href{https://biocyc.org/}{https://biocyc.org/}\\
Sample gene page: \href{https://biocyc.org/gene?orgid=ECOLI&id=EG10823}{https://biocyc.org/gene?orgid=ECOLI\&id=EG10823}.
\subsubsection*{KEGG}
The Kyoto Encyclopedia of Genes and Genomes (KEGG) is a resource for
understanding high-level functions of a biological system from
molecular-level information. It includes a focus on data
relevant for biomedical research (e.g., KEGG DISEASE and KEGG DRUG
databases) and includes tools for analysis of
large-scale molecular datasets generated by high-throughput
experimental technologies.
\ \\
\noindent Home page: \href{https://www.kegg.jp/}{https://www.kegg.jp/}\\
Sample gene page: \href{https://www.kegg.jp/dbget-bin/www_bget?eco:b2699}{https://www.kegg.jp/dbget-bin/www\_bget?eco:b2699}.
\subsubsection*{Ensembl Bacteria}
Ensembl Bacteria is a portal for bacterial and archaeal genomes. It
does not have any data or tools for metabolism, pathways, or
compounds, focusing instead on genes and proteins. Its strengths
appear to lie in its large collection of gene and protein family
data. Its capabilities are somewhat different from those of other
Ensembl sites. In addition to BLAST, it includes a hidden Markov
model (HMM) search tool for protein motifs. Pan-taxonomic comparative
tools are available for key species. It also includes Ensembl's
variant effect predictor, which can predict the functional
consequences of sequence variants.
\ \\
\noindent Home page: \href{https://bacteria.ensembl.org/}{https://bacteria.ensembl.org/}\\
Sample gene page: \href{https://bacteria.ensembl.org/Escherichia_coli_str_k_12_substr_mg1655/Gene/Summary?g=b2699;r=Chromosome:2822708-2823769;t=AAC75741;db=core}{https://bacteria.ensembl.org/Escherichia\_coli\_str\_k\_12\_substr\_mg1655/Gene/Summary?g=b2699;r=Chromosome:2822708-2823769;t=AAC75741;db=core}.
\subsubsection*{KBase}
KBase is an environment for systems biology research that provides
more than 160 applications to support user-driven analysis of a
variety of data ranging from raw reads to fully assembled and
annotated genomes, and metabolic models. In addition to its
genome-portal capabilities, KBase \cite{KBASE16} enables users to
assemble and annotate genomes, to analyze transcriptomics data, and to
create metabolic models for organisms with sequenced genomes. Once a
model is created, it can be analyzed using phylogenetic, expression
analysis, and comparative tools. KBase also allows users to integrate
custom code into their analysis pipeline and enables addition of
external applications by their developers using a software development
kit (SDK). Its other major aim is to support reproducible
computational experiments on models that can be published and shared
with other users.
\ \\
\noindent Home page: \href{https://kbase.us/}{https://kbase.us/}\\
Sample gene page: \href{https://narrative.kbase.us/#dataview/35926/2/1?sub=Feature&subid=b2699}{https://narrative.kbase.us/\#dataview/35926/2/1?sub=Feature\&subid=b2699}.
\subsubsection*{IMG}
The Integrated Microbial Genomes (IMG) system is a resource for
annotation and analysis of sequence data, integrated with
environmental and other metadata to support genome and microbiome
comparisons. In addition to being the vehicle for release of the data
generated by the DOE Joint Genome Institute, it provides a suite of
analytical and visualization tools available to explore and mine the
data for biological inference. Custom data marts dedicated to specific
research topics, such as secondary metabolite synthesis (IMG-ABC) or
viral eco-genomics (IMG/VR), are also included. Users can submit their
own data and metadata for integration in the system.
\ \\
\noindent Home page: \href{https://img.jgi.doe.gov/}{https://img.jgi.doe.gov/}\\
Sample gene page: \href{https://img.jgi.doe.gov/cgi-bin/m/main.cgi?section=GeneDetail&page=geneDetail&gene_oid=646314661}{https://img.jgi.doe.gov/cgi-bin/m/main.cgi?section=GeneDetail\&page=geneDetail\&gene\_oid=646314661}.
\subsubsection*{PATRIC}
PATRIC is designed to support the biomedical research community's work
on bacterial infectious diseases via integration of vital pathogen
information with data and analysis tools. Data is integrated across
sources, data types, molecular entities, and organisms. Data types
include genomics, transcriptomics, protein-protein interactions, 3D
protein structures, sequence typing data, and metadata. It supports
both genome assembly and annotation (RAST) and RNA-seq data analysis
via a job submission system.
\ \\
\noindent Home page: \href{https://www.patricbrc.org/}{https://www.patricbrc.org/}\\
Sample gene page: \href{https://www.patricbrc.org/view/Feature/PATRIC.511145.12.NC_000913.CDS.2820730.2821791.rev}{https://www.patricbrc.org/view/Feature/PATRIC.511145.12.NC\_000913.CDS.2820730.2821791.rev}.
\section{Results}
We assessed the software and data content capabilities of each portal
according to a number of topic areas, such as genomics-related tools
and metabolism-related tools. We chose topic areas that we considered
to be core elements of a microbial genome information portal --- that
is, a website that counts among its primary missions providing users
with data and knowledge regarding sequenced microbial genomes. A
number of the portals contain functionality outside of that mission,
for example, some portals contain software tools for annotating
microbial genomes (e.g., performing assembly and gene-function
prediction). We did not include such functionality because we
considered it outside the scope of a microbial genome information
portal. In many cases, we added new criteria within a topic area
(meaning rows within our comparison tables) as we learned about each
portal, such as adding the ability of Ensembl Bacteria to predict the effects
of sequence variants. Our choice of criteria is validated by the fact
that many of the criteria are shared among some or many of the portals.
For several of the topic areas, we provide multiple
tables to assess software capabilities, with one or two tables
focusing on DB search capabilities and another table focusing
on other capabilities in that area.
For example, Tables~\ref{tab:gene-protein-multi-search} and
\ref{tab:site-multi-search} describe genomics multi-search tools, and
Table~\ref{tab:genomics-tools} describes other genomics software tools.
We attempted to be as diligent as possible when evaluating each
portal's capabilities; however, being non-expert navigators of KEGG,
Ensembl Bacteria, KBase, and PATRIC, we may have overlooked or
misjudged some elements of those portals.
\begin{landscape}
\begin{table}[!h]
{\small
\centerline{
\begin{tabular}{|l|c|c|c|c|c|c|} \hline
{\bf Tool} & {\bf BioCyc} & {\bf KEGG} & {\bf Ensembl Bacteria} & {\bf KBase} & {\bf IMG} & {\bf PATRIC} \\ \hline \hline
Genome Browser & YES & YES & YES & YES & YES & YES \\ \hline
-- Operons, Promoters, TF binding sites &YES& no & no & no & partial & YES \\ \hline
-- Depicts Nucleotide Sequence & YES & YES & YES & YES & YES & YES \\ \hline
-- Customizable Tracks & YES & no & YES & no & partial & YES \\ \hline
-- Comparative, by Orthologs & YES & no\footnotemark[1] & no & no & YES & YES \\ \hline
-- Genome Poster & YES & no & no & no & no & no \\ \hline
Retrieve Gene Sequence & YES & YES & YES & YES & YES & YES \\ \hline
Retrieve Replicon Sequence & YES & YES & YES & no & YES & YES \\ \hline
Retrieve Protein Sequence & YES & YES & YES & YES & YES & YES \\ \hline
Nucleotide Sequence Alignment Viewer & YES & YES & no & no & YES & YES \\ \hline
Protein Sequence Alignment Viewer & YES & YES & no & no & YES & YES \\ \hline
Protein Phylogenetic Tree Analysis & no & YES & no & YES & YES & YES \\ \hline
Sequence Searching by BLAST & YES & YES & YES & YES & YES & YES \\ \hline
Sequence Pattern Search & YES & YES & no & YES & YES & no \\ \hline
Sequence Cassette Search & no & YES & YES & YES & YES & no \\ \hline
Orthologs & YES & YES & no & YES & YES & YES \\ \hline
Gene/Protein Page & YES & YES & YES & YES & YES & YES \\ \hline
Enrichment Analysis (GO Terms) & YES & no & no & YES & no & no \\ \hline
Enrichment Analysis (Regulation) & YES & no & no & no & no & no \\ \hline
Omics Dashboard & YES & no & no & no & no & no \\ \hline
Multi-Organism Comparative Analysis & YES & YES & YES & YES & YES & YES \\ \hline
Horizontal Gene Transfer Prediction & no & no & no & no & YES & no \\ \hline
Fused Protein Prediction & no & no & no & no & YES & no \\ \hline
Alternative ORF View & no & no & no & no & YES & YES \\ \hline
Genome Multi-Search & YES & no & no & no & YES & YES \\ \hline
gANI Computations & no & no & no & YES & YES & YES \\ \hline
Kmer Frequency Analysis & no & no & no & no & YES & no \\ \hline
Synteny Comparison & no & no & no & YES & YES & no \\ \hline
Proteome Comparisons & YES & no & no & YES & YES & YES \\ \hline
Statistical Analysis, Genome & YES & no & no & no & YES & no \\ \hline
Statistical Analysis, Expression & no & no & no & YES & YES & YES \\ \hline
Genome Function Comparison & no & no & no & YES & YES & YES \\ \hline
Insert Genomes into Reference Trees & no & no & no & YES & no & YES\footnotemark[2] \\ \hline
Predict Effects of Sequence Variants & no & no & YES & no & no & YES \\ \hline
\end{tabular}
} }
\caption{\label{tab:genomics-tools}
{\bf Genomics Tools Comparison.} ``Partial'' means that
the tool provides some but not all of the indicated functionality.
$^{1}$KEGG does have a rudimentary tool for this purpose, but it is
not based on a zoomable genome browser. $^{2}$PATRIC
supports construction of trees from an arbitrary set of in-group and out-group genomes.
}
\end{table}
\begin{table}[!h]
\centerline{
\begin{tabular}{|l|c|c|c|c|c|c|} \hline
{\bf Tool} & {\bf BioCyc} & {\bf KEGG} & {\bf Ensembl Bacteria} & {\bf KBase} & {\bf IMG} & {\bf PATRIC} \\ \hline \hline
Gene Name & YES & YES & YES & YES & YES & YES \\ \hline
Product Name & YES & YES & YES & YES & YES & YES \\ \hline
Database Identifier & YES & YES & YES & YES & YES & YES \\ \hline
EC Number & YES & YES & YES & no & YES & YES \\ \hline
Sequence Length & YES & no & no & YES & YES & YES \\ \hline
Replicon & YES & no & no & YES & YES & YES \\ \hline
Map Position & YES & YES & no & YES & YES & no \\ \hline
Product Mol Wt & YES & no & no & no & YES & no \\ \hline
Product Subunits & YES & no & no & no & YES & no \\ \hline
Product pI & YES & no & no & no & YES & no \\ \hline
Product Ligands & YES & no & no & no & YES & no \\ \hline
Evidence Code & YES & no & no & no & no & no \\ \hline
Cell Component & YES & no & no & no & no & no \\ \hline
GO Terms & YES & no & YES & YES & YES & YES \\ \hline
Protein Features & YES & no & YES & no & YES & no \\ \hline
Publication & YES & no & no & YES & no & no \\ \hline
Scaffold Length & no & YES & no & YES & YES & no \\ \hline
Scaffold GC Content & no & no & no & no & YES & YES \\ \hline
Protein Family Assignment & no & YES & YES & no & YES & YES \\ \hline
Is Partial & no & no & no & no & YES & no \\ \hline
Is Pseudogene & YES & no & no & no & YES & YES \\ \hline
\end{tabular}
}
\caption{\label{tab:gene-protein-multi-search}
{\bf Gene/protein multi-search capabilities.}
Does the portal support multi-searches for genes and gene products based on
the data fields or criteria listed?
``Publication'' means the ability to search for a gene based on a
publication cited in the gene entry.
``Scaffold Length'' means the ability to search for a gene based on
the length of the scaffold it resides on.
``Protein Family Assignment'' means the ability to search for a gene
based on what protein families it is assigned to (e.g., Pfam or
TIGRFAM family).
``Is Partial'' means search for partial (truncated) proteins.
}
\end{table}
\begin{table}[!h]
\centerline{
\begin{tabular}{|l|c|c|c|c|c|c|} \hline
{\bf Tool} & {\bf BioCyc} & {\bf KEGG} & {\bf Ensembl Bacteria} & {\bf KBase} & {\bf IMG} & {\bf PATRIC} \\ \hline \hline
Site Type & YES & no & no & no & no & no \\ \hline
-- Attenuators & YES & no & no & no & no & no \\ \hline
-- Origin of Replication & YES & no & no & no & no & no \\ \hline
-- Phage Attachment Sites & YES & no & no & no & no & no \\ \hline
-- REP Elements & YES & no & no & no & no & no \\ \hline
-- Promoters & YES & no & no & no & no & no \\ \hline
-- Terminators & YES & no & no & no & no & no \\ \hline
-- mRNA Binding Sites & YES & no & no & no & YES & no \\ \hline
-- Riboswitches & YES & no & no & no & YES & no \\ \hline
-- TF Binding Sites & YES & no & no & no & no & no \\ \hline
-- Transcription Units & YES & no & no & no & no & no \\ \hline
-- Transposons & YES & no & no & no & no & no \\ \hline
Replicon & YES & no & no & no & YES & no \\ \hline
Map Position & YES & no & no & no & YES & no \\ \hline
Site Regulator & YES & no & no & no & no & no \\ \hline
Site Ligands & YES & no & no & no & no & no \\ \hline
Evidence Code & YES & no & no & no & no & no \\ \hline
CRISPR Arrays & no & no & no & no & YES & no \\ \hline
\end{tabular}
}
\caption{\label{tab:site-multi-search}
{\bf DNA/RNA Site Multi-Search Capabilities.}
Does the portal support multi-searches for DNA and RNA sites based on
the data fields or criteria listed? For example, does the portal
support searches for sites by the type of site (e.g., for attenuators
versus transcription-factor binding sites), and by numeric constraints
on the genome position of the site?
}
\end{table}
\begin{table}[!h]
\centerline{
\begin{tabular}{|l|c|c|c|c|c|c|} \hline
{\bf Tool} & {\bf BioCyc} & {\bf KEGG} & {\bf Ensembl Bacteria} & {\bf KBase} & {\bf IMG} & {\bf PATRIC} \\ \hline \hline
Metabolite Page & YES & YES & no & no & no & no \\ \hline
Chemical Similarity Search & no & YES & no & no & no & no \\ \hline
Glycan Similarity Search & no & YES & no & no & no & no \\ \hline
Reaction Page & YES & YES & no & no & YES & no \\ \hline
-- Reaction Atom Mappings & YES & YES & no & no & no & no \\ \hline
Individual Pathway Diagram & YES & YES & no & YES & YES & YES \\ \hline
-- Automatic Pathway Layout & YES & no & no & no & no & no \\ \hline
-- Paint Omics Data onto Pathway & YES & YES & no & no & YES & no \\ \hline
-- Depict Enzyme Regulation & YES & no & no & no & no & no \\ \hline
-- Depict Genetic Regulation & YES & no & no & no & no & no \\ \hline
-- Depict Metabolite Structures & YES & YES (Tooltip) & no & no & no & no \\ \hline
Multi-Pathway Diagram & YES & no & no & no & no & no \\ \hline
Full Metabolic Network Diagram & YES & YES & no & no & no & no \\ \hline
-- Zoomable Metabolic Network & YES & YES & no & no & no & no \\ \hline
-- Paint Omics Data onto Diagram & YES & no & no & no & no & no \\ \hline
-- Animated Omics Data Painting & YES & no & no & no & no & no \\ \hline
-- Metabolic Poster & YES & no & no & no & no & no \\ \hline
-- Organism Comparison & YES & no & no & no & no & no \\ \hline
Automated Metabolic Reconstruction & YES (Desktop)\footnotemark[1] & YES & no & YES & YES & YES \\ \hline
Enrichment Analysis (Pathways) & YES & no & no & no & YES & no \\ \hline
Execute Metabolic Model & YES & no & no & YES & no & YES \\ \hline
-- Gene Knock-out Analysis & YES & no & no & YES & no & YES \\ \hline
Chokepoint Analysis & YES & no & no & no & no & no \\ \hline
Dead-End Metabolite Analysis & YES & no & no & no & no & no \\ \hline
Blocked-Reaction Analysis & YES & no & no & YES & no & no \\ \hline
Route Search Tool & YES & YES & no & no & no & no \\ \hline
Path Prediction Tool & no & YES & no & no & no & no \\ \hline
Assign EC Number & no & YES & no & no & no & no \\ \hline
\end{tabular}
}
\caption{\label{tab:metabolic-tools}
{\bf Metabolic Tools Comparison.}
$^{1}$ The desktop version of the Pathway Tools software performs
automated metabolic reconstruction.
}
\end{table}
\end{landscape}
\begin{table}[!h]
\centerline{
\begin{tabular}{|l|c|c|c|c|c|c|} \hline
{\bf Tool} & {\bf BioCyc} & {\bf KEGG} & {\bf Ensembl Bacteria} & {\bf KBase} & {\bf IMG} & {\bf PATRIC} \\ \hline \hline
Name & YES & YES & no & no & YES & YES\footnotemark[1] \\ \hline
Database Identifier & YES & YES & no & no & YES & YES\footnotemark[1] \\ \hline
Ontology & YES & no & no & no & YES & YES \\ \hline
Monoisotopic Mass & YES & no & no & no & partial & no \\ \hline
Molecular Weight & YES & no & no & no & partial & no \\ \hline
Chemical Formula & YES & no & no & no & partial & no \\ \hline
Chemical Substructure& YES & YES & no & no & partial & no \\ \hline
InChI String & YES & no & no & no & partial & no \\ \hline
InChI Key & YES & no & no & no & partial & no \\ \hline
\end{tabular}
}
\caption{\label{tab:compound-multi-search}
{\bf Compound multi-search capabilities. }
Does the portal support multi-searches for chemical compounds based on
the data fields or criteria listed? ``Ontology'' means the ability to
search for compounds based on a chemical ontology (classification).
$^{1}$This search will find pages of antimicrobial compounds.
}
\end{table}
\begin{table}[!h]
\centerline{
\begin{tabular}{|l|c|c|c|c|c|c|} \hline
{\bf Tool} & {\bf BioCyc} & {\bf KEGG} & {\bf Ensembl Bacteria} & {\bf KBase} & {\bf IMG} & {\bf PATRIC} \\ \hline \hline
Name & YES & YES & no & no & YES & YES \\ \hline
Ontology & YES & YES & no & no & YES & YES \\ \hline
Size in Reactions & YES & no & no & no & no & no \\ \hline
Substrates & YES & YES & no & no & YES & no \\ \hline
Evidence Code & YES & no & no & no & no & no \\ \hline
Publication & YES & no & no & no & no & no \\ \hline
\end{tabular}
}
\caption{\label{tab:pathway-multi-search}
{\bf Pathway multi-search capabilities.}
Does the portal support multi-searches for pathways based on
the data fields or criteria listed? ``Ontology'' means the ability to
search for pathways based on a pathway ontology (classification).
}
\end{table}
\begin{table}[!h]
\centerline{
\begin{tabular}{|l|c|c|c|c|c|c|} \hline
{\bf Table Capability} & {\bf BioCyc} & {\bf KEGG} & {\bf Ensembl Bacteria} & {\bf KBase} & {\bf IMG} & {\bf PATRIC} \\ \hline \hline
Table Datatypes: & & & & & & \\ \hline
\ \ \ Genomes & no & no & no & no & no & YES \\ \hline
\ \ \ Genes & YES & no & no & no & YES & YES\footnotemark[1] \\ \hline
\ \ \ Proteins & YES & no & no & no & YES & YES \\ \hline
\ \ \ RNAs & YES & no & no & no & YES & YES \\ \hline
\ \ \ Metabolites & YES & no & no & no & partial & no \\ \hline
\ \ \ Pathways & YES & no & no & no & partial & YES \\ \hline
\ \ \ Reactions & YES & no & no & no & partial & no \\ \hline
\ \ \ Promoters & YES & no & no & no & no & no \\ \hline
\ \ \ Terminators & YES & no & no & no & no & no \\ \hline
\ \ \ Transcription Factor Binding Sites & YES & no & no & no & no & no \\ \hline
\ \ \ Transcription Units & YES & no & no & no & partial & no \\ \hline
\ \ \ Publications & YES & no & no & no & no & no \\ \hline
\ \ \ Transcriptomics Experiments & no & no & no & no & partial & YES \\ \hline
\ \ \ Biosynthetic Clusters & no & no & no & no & YES & no \\ \hline
\ \ \ Protein Families & no & no & no & no & no & YES \\ \hline
Create Table from Uploaded File & YES & no & no & no & YES & YES \\ \hline
Create Table from database query result & YES & no & no & no & YES & YES \\ \hline
Include Database Properties as Table Columns & YES & no & no & no & YES & YES \\ \hline
Create Columns as Computational Transformations & YES & no & no & no & no & no \\ \hline
Set Operations Among Tables & YES & no & no & no & YES & YES \\ \hline
Filter Table Rows & YES & no & no & no & YES & YES \\ \hline
Export Table to File & YES & no & no & no & YES & YES \\ \hline
Share Table with Selected Users & YES & no & no & no & YES & YES \\ \hline
Share Table to the Public & YES & no & no & no & no & YES \\ \hline
\end{tabular}
}
\caption{\label{tab:smt}{\bf Table-Based Analysis Capabilities.}
$^{1}$PATRIC provides tables of genomes and tables of features (defined sections of a genome, e.g., genes, CDS, mRNAs).
}
\end{table}
\begin{table}[!h]
\centerline{
\begin{tabular}{|l|c|c|c|c|c|c|} \hline
{\bf Feature} & {\bf BioCyc} & {\bf KEGG} & {\bf Ensembl Bacteria} & {\bf KBase} & {\bf IMG} & {\bf PATRIC} \\ \hline \hline
Gene Page Load Time (sec)\footnotemark[1] & 4.4 & 2.5 & 10.0 & 9.8 & 13.5 & 34.9 \\ \hline
Tooltips & YES & no & YES & YES & YES & YES \\ \hline
User Guide & YES & YES & YES\footnotemark[2] & YES & YES & YES \\ \hline
Webinars & YES & no & YES\footnotemark[2] & YES & YES & YES \\ \hline
Workshops & YES & ? & YES & YES & YES & YES \\ \hline
\end{tabular}
}
\caption{\label{tab:user-exp}
{\bf User Experience Features}
}
$^{1}$The extent of gene details and visualization displayed is vastly
different among sites and can lead to longer page load times.
$^{2}$The user guide and webinars cover multiple Ensembl portals, not Ensembl Bacteria specifically.
\end{table}
\subsection{Genomics Tools}
Genomics tools enable researchers to query, analyze, and compare
genome-related information within an organism DB.
Table~\ref{tab:genomics-tools} assesses most genomics tools;
Tables~\ref{tab:gene-protein-multi-search} and
\ref{tab:site-multi-search} describe genomics multi-search tools.
An explanation of the rows within Table~\ref{tab:genomics-tools} is as follows.
\begin{itemize}
\item {\bf Genome Browser}: Can a user browse a
chromosome at different zoom levels to see the genomic features present?
\begin{itemize}
\item Are {\bf operons, promoters, and transcription-factor binding
sites} depicted in the genome browser?
\item Is the {\bf nucleotide sequence} depicted in the genome browser?
\item {\bf Customizable Tracks}: Can a user add additional tracks to
the genome browser, which show user-supplied data?
\item {\bf Comparative, by Orthologs}: Can a user compare
chromosome regions from several genomes side-by-side,
with orthologous genes indicated?
\item {\bf Genome Poster}: Can the portal generate a printable, detailed,
wall-sized poster of the entire genome, e.g., one that depicts every
gene in the genome?
\end{itemize}
\item {\bf Retrieve Gene Sequence}: Can a user retrieve the
nucleotide sequence of a gene?
\item {\bf Retrieve Replicon Sequence}: Can a user retrieve the
nucleotide sequence of a specified region of a replicon?
\item {\bf Retrieve Protein Sequence}: Can a user retrieve the
amino-acid sequence of a protein?
\item {\bf Nucleotide Sequence Alignment Viewer}: Can a user compare the
nucleotide sequence of a gene with orthologs from other organisms?
\item {\bf Protein Sequence Alignment Viewer}: Can a user compare the
amino-acid sequence of a protein with orthologs from other organisms?
\item {\bf Protein Phylogenetic Tree Analysis}: Can a user construct a
phylogenetic tree from a set of protein sequences?
\item {\bf Sequence Searching by BLAST}: Is searching for a sequence
in a genome by BLAST supported?
\item {\bf Sequence Pattern Search}: Is sequence searching by short sequence patterns
supported?
\item {\bf Sequence Cassette Search}: Is sequence searching by
protein family recognition patterns supported?
\item {\bf Orthologs}: Can a user query for the orthologs of a given gene in other
organisms?
\item {\bf Gene/Protein Page}: Does the portal provide gene pages,
showing relevant information such as the gene products and links to
other DBs?
\item {\bf Enrichment Analysis (GO Terms)}: Can a user find which
GO terms are statistically enriched, given a set of genes?
\item {\bf Enrichment Analysis (Regulation)}: Given a set of genes,
can a user compute which regulators of those genes are statistically
over-represented in the gene set?
\item {\bf Omics Dashboard}: Can a user submit a transcriptomics
dataset for analysis using a visual dashboard tool that enables
interactive summarization and exploration of the dataset in a manner
similar to the BioCyc Omics Dashboard \cite{DashGene17}?
\item {\bf Multi-Organism Comparative Analysis}: Can a user globally
compare a variety of different data types between several organisms?
\item {\bf Horizontal Gene Transfer Prediction}: Can the site show
which genes may have been acquired by horizontal gene transfer?
\item {\bf Fused Protein Prediction}: Can the portal show
which genes result from fusions of genes that can be found separately in other organisms?
\item {\bf Alternative ORF View (6-frame translation)}:
Can a user assess alternative ORFs to the ones predicted in a given genomic region?
\item {\bf Genome Multi-Search}: Does the portal support
search and retrieval across all genomes using sequencing, environmental, or other metadata attributes?
\item {\bf gANI (Whole-genome Average Nucleotide Identity) Computations}:
Whole-genome based average nucleotide identity (gANI) has been
proposed as a measure of genetic relatedness of a pair of genomes.
gANI for a pair of genomes is calculated by averaging the nucleotide
identities of orthologous genes. The fraction of orthologous genes
(alignment fraction or AF) is also reported as a complementary
measure of similarity of the two genomes.
\item {\bf Kmer Frequency Analysis}: Can the portal display principal
component analysis plots of oligonucleotide frequencies along the
genome length, allow comparison of genomes by the similarity of their
oligonucleotide composition, and identify sequences with abnormal
oligonucleotide composition, such as horizontally transferred
sequences and contaminating contigs/scaffolds?
\item {\bf Synteny Comparison}: Does the portal provide a tool for
evaluating conservation of gene order by plotting a pairwise genome
alignment? Potential translocations, inversions, or gaps relative to
a reference can be visualized. Such a tool gives a quick snapshot of
how closely related two strains might be.
\item {\bf Proteome Comparisons}: Find proteins that are
shared between two or more genomes or unique to a given genome.
\item {\bf Statistical Analysis, Genome}:
Example statistical analyses include
counts of genes assigned to a ``feature'' (such as presence of a
COG/Pfam/TIGRFAM/KEGG domains), and counts of genes in different
Gene Ontology categories.
\item {\bf Statistical Analysis, Expression}:
Does the portal provide tools for calculating statistical significance
of gene expression data?
\item {\bf Genome Function Comparison}: Genomes can be clustered based on a function profile
(e.g., COG/Pfam/TIGRFAM/KEGG features) and viewed as a hierarchical
cluster tree, principal component analysis, principal coordinate analysis plot, or other options,
to assess relatedness of selected genomes.
\item {\bf Insert Genomes into Reference Trees}:
Enables a user to determine evolutionary relationships between a
genome of interest and nearby reference genomes by building a tree of
49 concatenated universal sequences.
\item {\bf Predict Effects of Sequence Variants}:
Enables users to predict effects of variation, including SNPs and
indels on transcripts in the region of the variant.
\end{itemize}
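The gANI and AF definitions given above for the ``gANI Computations''
row reduce to a few lines of arithmetic. The following is a
simplified sketch of our own; the published gANI procedure applies
alignment-length and quality filters that are omitted here.

```python
# Simplified sketch of gANI and alignment fraction (AF) for a pair of
# genomes, given the nucleotide percent identity of each orthologous
# gene pair. The published procedure adds alignment-length and quality
# filters that are omitted here for clarity.

def gani_and_af(ortholog_identities, n_genes_query):
    """
    ortholog_identities: percent identities of the orthologous gene
                         pairs found between the two genomes.
    n_genes_query:       total number of genes in the query genome.
    Returns (gANI, AF).
    """
    if not ortholog_identities:
        return 0.0, 0.0
    gani = sum(ortholog_identities) / len(ortholog_identities)
    af = len(ortholog_identities) / n_genes_query
    return gani, af

# Example: 3 orthologous pairs found among 4 query genes.
gani, af = gani_and_af([99.1, 98.7, 97.9], n_genes_query=4)
print(round(gani, 2), af)  # 98.57 0.75
```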
\subsection{Metabolic Tools}
Metabolic tools enable researchers to query, analyze, and compare
information about metabolic pathways and reactions within an organism
DB, to run metabolic models, and to analyze high-throughput data
in the context of metabolic networks.
Table~\ref{tab:metabolic-tools} assesses most metabolic tools;
Table~\ref{tab:compound-multi-search} describes metabolite multi-search
capabilities, and Table~\ref{tab:pathway-multi-search} describes pathway
multi-search capabilities.
An explanation of the rows within Table~\ref{tab:metabolic-tools} is as follows.
\begin{itemize}
\item {\bf Metabolite Page}: Does the site provide a metabolite page,
showing relevant information such as synonyms, chemical structure,
and reactions in which the metabolite occurs?
\item {\bf Chemical Similarity Search}: Can the user search for
chemicals that have similar structures to a provided chemical?
\item {\bf Glycan Similarity Search}: Can the user search for
glycans that have similar structures to a provided glycan?
\item {\bf Reaction Page}: Does the site provide a reaction page,
showing relevant information such as EC numbers, reaction equation,
and enzymes catalyzing the reaction?
\item {\bf Reaction Atom Mappings}: Can the reaction equation
be shown with metabolite structures that depict the trajectories of atoms
from reactants to products?
\item {\bf Individual Pathway Diagram}: Can individual pathway diagrams be depicted?
\item {\bf Automatic Pathway Layout}: Are pathway diagrams generated
automatically by the software, thereby avoiding manual drawing?
\item {\bf Paint Omics Data onto Pathway}: Can a user visualize
omics data on pathway diagrams?
\item {\bf Depict Enzyme Regulation}: Can pathway diagrams
show regulation of enzymes by metabolites, to depict information such as feedback inhibition?
\item {\bf Depict Genetic Regulation}: Can pathway diagrams
show genetic regulation of enzymes, such as by transcription
factors and attenuation?
\item {\bf Depict Metabolite Structures}: Can pathway diagrams
show the chemical structures of metabolites?
\item {\bf Multi-Pathway Diagram}: Can users interactively create
diagrams consisting of multiple interacting metabolic pathways?
\item {\bf Full Metabolic Network Diagram}: Can the entire metabolic
reaction network of a genome be depicted and explored by an interactive graphical
interface?
\item {\bf Zoomable Metabolic Network}: Does the metabolic network
browser enable zooming in and out?
\item {\bf Paint Omics Data onto Network}: Can a user visualize
an omics dataset (e.g., gene expression, metabolomics) on the
metabolic network diagram?
\item {\bf Animated Omics Data Painting}: Can several omics data points be
visualized as an animation on the metabolic network diagram?
\item {\bf Metabolic Poster}: Can the portal generate a printable
wall-sized poster of the organism's metabolic network?
\item {\bf Organism Comparison}: Can a user compare the metabolic
networks of two organisms via the full metabolic network diagram?
\item {\bf Automated Metabolic Reconstruction}: Starting from a
functionally annotated genome, can the metabolic reaction network (and
pathways) be inferred in an automated fashion?
\item {\bf Enrichment Analysis (Pathways)}: Can the site compute
statistical enrichment of pathways within a large-scale dataset?
\item {\bf Execute Metabolic Model}: Can a user execute a steady-state
metabolic flux model via the portal?
\item {\bf Gene Knock-out Analysis}: Can a user run flux-balance analysis (FBA) on the metabolic
network by systematically disabling (knocking out) various genes, to investigate
how knock-outs perturb the network and to predict gene essentiality?
\item {\bf Chokepoint Analysis}: Can the site compute chokepoint
reactions (possible drug targets) in the full metabolic reaction network? A chokepoint
reaction is a reaction that either uniquely consumes a specific
reactant or uniquely produces a specific product in the metabolic
network.
\item {\bf Dead-End Metabolite Analysis}: Can the portal compute dead-end
metabolites in the full metabolic reaction network? Dead-end
metabolites are those that are either only consumed, or only
produced, by the reactions within a given cellular compartment,
including transport reactions.
\item {\bf Blocked-Reaction Analysis}: Can the portal compute blocked
reactions in the full metabolic reaction network? Blocked reactions
cannot carry flux because of dead-end metabolites upstream or
downstream of the reactions.
\item {\bf Route Search Tool}: Given a starting and an ending
metabolite, can the site compute an optimal series
of known reactions (routes) that converts the starting metabolite to the
ending metabolite?
\item {\bf Path Prediction Tool}: Given a starting
chemical compound, can the site predict a series of previously
unknown enzyme-catalyzed reactions that will act upon the input
compound and the products of previous reactions?
\item {\bf Assign EC Number}: Can the portal compute an appropriate
Enzyme Commission number for a user-provided reaction?
\end{itemize}
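The chokepoint and dead-end definitions given above reduce to simple set computations over the reaction network. As an illustration, a minimal sketch in Python (the reaction encoding and the toy network below are invented for this example, not taken from any portal):

```python
# Sketch: chokepoint and dead-end analysis over a toy reaction network.
# Each reaction is (name, reactant_set, product_set); reversible reactions
# would need to be entered in both directions. The network is hypothetical.
from collections import Counter

reactions = [
    ("R1", {"A"}, {"B"}),
    ("R2", {"B"}, {"C"}),
    ("R3", {"B"}, {"D"}),
    ("R4", {"E"}, {"C"}),
]

def chokepoints(reactions):
    """Reactions that uniquely consume a reactant or uniquely produce a product."""
    consumed = Counter(m for _, ins, _ in reactions for m in ins)
    produced = Counter(m for _, _, outs in reactions for m in outs)
    result = set()
    for name, ins, outs in reactions:
        if any(consumed[m] == 1 for m in ins) or any(produced[m] == 1 for m in outs):
            result.add(name)
    return result

def dead_ends(reactions):
    """Metabolites that are only consumed, or only produced, by the network."""
    consumed = {m for _, ins, _ in reactions for m in ins}
    produced = {m for _, _, outs in reactions for m in outs}
    return (consumed - produced) | (produced - consumed)
```

In this toy network, R1 uniquely consumes A, R3 uniquely produces D, and R4 uniquely consumes E, so all three are chokepoints; A and E are never produced and C and D are never consumed, so all four are dead-end metabolites.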
\subsection{Regulation Tools}
\label{sec:regulation}
BioCyc has a number of regulatory informatics tools that are not
provided by any of the portals. We list those tools here rather than
providing a table.
\begin{itemize}
\item BioCyc includes a network browser that depicts the full transcriptional regulatory network
of the organism. The network diagram can be queried interactively and
painted with transcriptomics data.
\item The BioCyc transcription-unit page depicts operon structure
including promoters, transcription factor binding sites, and terminators,
the evidence for each, and describes regulatory interactions between
these sites and associated transcription factors and small RNA regulators.
\item BioCyc generates diagrams that summarize all regulatory influences on a gene, including
regulation of transcription, translation, and of the gene product.
\item BioCyc depicts transcription-factor regulons as diagrams of all operons
regulated by a transcription factor.
\item BioCyc can depict regulatory influences on metabolism by
highlighting the regulon of a transcription factor on the BioCyc metabolic
map diagram.
\item BioCyc SmartTables can list the regulators or regulatees of each
gene within a SmartTable.
\item BioCyc can generate a report comparing the regulatory networks of two or more organisms.
\end{itemize}
\subsection{Advanced Search and Analysis}
These tools (see Table~\ref{tab:advanced-tools}) enable researchers to perform complex searches and analyses, to
retrieve data via web services and bulk downloads, and to create and
manipulate user accounts.
\begin{table}[!h]
\centerline{
\begin{tabular}{|l|c|c|c|c|c|c|} \hline
{\bf Tool} & {\bf BioCyc} & {\bf KEGG} & {\bf Ensembl Bacteria} & {\bf KBase} & {\bf IMG} & {\bf PATRIC} \\ \hline \hline
Advanced Search & YES & no & no & no & YES & no \\ \hline
Cross-Organism Search & YES & YES & YES & partial & YES & YES \\ \hline
Web Services & YES & YES & YES & YES & no & no \\ \hline
Other Query Options & * & * & * & * & * & * \\ \hline
User Account & opt/req & no & optional & required & opt/req & opt/req \\ \hline
Custom Notifications & YES & no & no & no & no & no \\ \hline
\multirow{3}{*}{Download Formats} & biopax,gff & json,sbml & fasta,gff,gff3 & genbank,gff,tsv & fasta,txt & csv,fasta,gff \\
& genbank & & json,mysql,rdf & fasta,json,sbml & & embl,json \\
& sbml & & & & & genbank \\ \hline
\end{tabular}
}
\caption{\label{tab:advanced-tools}
{\bf Comparison of Advanced Search and Analysis, Web
Services, and User Accounts.} ``Opt/Req'' means that user accounts
are optional for some operations and required for other operations.
IMG also provides for downloading of reads, assemblies, QC reports,
annotations, and more.
}
\end{table}
An explanation of the rows within Table~\ref{tab:advanced-tools} is as follows.
\begin{itemize}
\item {\bf Advanced Search}: Does the site enable the user to
construct multi-criteria queries that search arbitrary DB
fields using combinations of AND, OR, and NOT?
\item {\bf Cross-Organism Search}: Can a user search all organisms,
specified organism sets, or taxonomic groups of organisms, for genes, metabolites, or pathways?
\item {\bf Web Services}: Can DBs within the portal be queried
programmatically by means of web services, using for example XML
protocols?
\item {\bf Other Query Options}: What other query options are provided
by the portal?
\begin{itemize}
\item BioCyc supports queries via its BioVelo query language
\cite{BioVeloLangURL}. Users can download BioCyc data files for
text searches, and can load those data files into SRI's
BioWarehouse system for SQL query access. Users can download
bundled versions of subsets of BioCyc plus Pathway Tools, and
query the DBs via APIs for Python, Lisp, Java, Perl, and R.
\item Users can download KEGG data files for text searches.
\item Ensembl Bacteria provides a Perl API and public MySQL servers.
\item KBase includes ``code cells'' for adding Python code blocks to
enable custom analyses for which applications do not exist, or
for programmatically calling KBase native apps to automate
large-scale analyses.
\item PATRIC provides a downloadable command line
interpreter application that allows interactive submission of
DB queries using a query language.
\end{itemize}
\item {\bf User Account}: Are user accounts available for logging in, and
for storing data and preferences? ``Opt/Req'' means accounts are
optional for some operations and required for other operations.
\item {\bf Custom Notifications}: Does the portal enable the user to
register to be notified of curation updates in biological areas of
interest to the user?
\item {\bf Bulk Download Formats}: What formats are supported by the
portal for large scale data downloads?
\end{itemize}
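A multi-criteria search of the kind described under Advanced Search amounts to combining field predicates with AND, OR, and NOT. A minimal sketch (the records and field names below are invented for illustration, not drawn from any portal's schema):

```python
# Sketch: multi-criteria filtering with AND, OR, and NOT over DB-like records.
# The records and field names are hypothetical.
records = [
    {"name": "geneA", "length": 900,  "essential": True},
    {"name": "geneB", "length": 1500, "essential": False},
    {"name": "geneC", "length": 2000, "essential": True},
]

def search(records, predicate):
    """Return names of records satisfying an arbitrary boolean predicate."""
    return [r["name"] for r in records if predicate(r)]

# (length > 1000 AND essential) OR name == "geneA"
hits = search(records, lambda r: (r["length"] > 1000 and r["essential"])
                                  or r["name"] == "geneA")
```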
\newpage
\subsection{Table-Based Analysis Tools}
Table-based analysis tools
enable users to define lists of genes, proteins, metabolites, or
pathways that are stored within the portal, and can be displayed,
analyzed, manipulated, and shared with other users.
These tools are called SmartTables by BioCyc and are called Carts by IMG.
A typical series of SmartTable operations is to define a SmartTable
containing a list of genes (such as from a transcriptomics
experiment); to configure which DB properties are displayed for
each gene within the SmartTable (such as the gene name,
accession number, product name, and genome map position); to perform a
set operation on the SmartTable, such as taking the intersection with
another gene SmartTable; and to transform the gene SmartTable into, say,
a SmartTable of the metabolic pathways containing those genes, or the
set of transcriptional regulators for those genes.
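The workflow just described can be imitated with ordinary set and dictionary operations. A minimal sketch (the gene names and gene-to-pathway mapping below are invented for illustration, not taken from any portal):

```python
# Sketch of SmartTable-style operations on plain Python data structures.
# Gene names and the gene-to-pathway mapping are hypothetical.
table_a = {"geneA", "geneB", "geneC"}   # e.g., from a transcriptomics hit list
table_b = {"geneB", "geneC", "geneD"}   # e.g., from a second experiment

# Set operation: intersection of two gene tables.
common = table_a & table_b

# Transformation: map a gene table to the pathways containing those genes.
gene_to_pathways = {
    "geneB": {"glycolysis"},
    "geneC": {"glycolysis", "TCA cycle"},
}
pathways = set().union(*(gene_to_pathways.get(g, set()) for g in common))
```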
KBase does not have a tables mechanism, but it does have a data
sharing mechanism called narratives, which is not table-based.
An explanation of the rows within Table~\ref{tab:smt} is as follows.
\begin{itemize}
\item {\bf Datatypes Tables can Contain}: What types of entities may
be stored in tables within each portal? The more types of entities
that can be manipulated within tables, the more versatile the table
mechanism is.
\item {\bf Create Table from Uploaded File}:
Can tables be defined by uploading a data file that lists the entities
within the table?
\item {\bf Create Table from DB Query Result}:
Can tables be defined from the result of a query within the portal?
\item {\bf Include DB Properties as Table Columns}:
Can a user add columns to the table containing information from the
DB about a given entity, such as the accession number of a gene
or the nucleotide coordinate of a gene, or a diagram of the chemical
structure of a metabolite?
\item {\bf Create Table Columns as Computational Transformations}:
Can table columns contain information computed from another column,
such as adding a column that computes the pathways in which a gene participates?
\item {\bf Set Operations Among Tables}:
Can the portal create a new table by computing set operations between
two other tables, such as taking the union of the list of genes in two
other tables?
\item {\bf Filter Table Rows}:
Can the portal remove rows from a table according to a search, such as
removing all entries from a table of metabolites where the metabolite
name contains ``arginine''?
\item {\bf Export Table to File}:
Can the portal export the contents of a table to a data file?
\item {\bf Share Table with Selected Users}:
Can a user share a table with a specific set of users?
\item {\bf Share Table with the Public}:
Can a user share a table with the general public?
\end{itemize}
\newpage
\subsection{Data Content among the Portals}
\begin{table}[!h]
\centerline{
\begin{tabular}{|l|c|c|c|c|c|c|} \hline
{\bf Data Type} & {\bf BioCyc} & {\bf KEGG} & {\bf Ensembl Bacteria} & {\bf KBase} & {\bf IMG} & {\bf PATRIC} \\ \hline \hline
Genome Count & 14,560 & 5,130 & 44,046 & 122,688 & 97,179 & 184,000 \\ \hline
\ \ \ Bacterial Genomes & 14,134 & 4,854 & 43,552 & 121,994 & 66,362 & 181,260 \\ \hline
\ \ \ Archaeal Genomes & 394 & 276 & 494 & 694 & 1,724 & 2,881 \\ \hline
\ \ \ Uncultivated Organisms & & & 0 & & 11,466 & 0 \\ \hline
Genome Metadata & YES & YES & no & no & YES & YES \\ \hline
Regulatory Networks & 11 & no & no & no & no & no \\ \hline
Protein Localization & YES & no & no & no & no & no \\ \hline
Protein Features & YES & no & YES & no & partial & YES \\ \hline
Protein 3-D Structures & no & YES & no & no & no & no \\ \hline
GO Terms & YES & no & YES & YES & YES & YES \\ \hline
Evidence Codes & YES & no & no & no & YES & partial\footnotemark[1] \\ \hline
Operons & YES & no & no & no & no & YES \\ \hline
Prophages & YES & no & no & no & YES & YES \\ \hline
Growth Media & YES & no & no & YES & no & no \\ \hline
Gene Essentiality & YES & no & no & no & no & YES \\ \hline
Gene Clusters for Secondary Metabolites& no & no & no & no & YES & no \\ \hline
Gene Pairs with Correlated Expression &no & no & no & no & no & YES \\ \hline
Protein-Protein Interactions & no & no & no & no & no & YES \\ \hline
AMR Phenotypes & no & no & no & no & no & YES \\ \hline
\end{tabular}
}
\caption{\label{tab:data-content}
{\bf Data Types Comparison.}
$^{1}$PATRIC includes evidence codes in only two DB tables.
}
\end{table}
Table~\ref{tab:data-content} describes the types and quantities of
data present in each web portal.
An explanation of the rows within Table~\ref{tab:data-content} is
as follows.
\begin{itemize}
\item {\bf Genome Count}: How many genomes (organisms)
does the portal provide access to? Only bacteria and
archaea are counted here, although some resources also provide
eukaryotic and viral genomes.
\item {\bf Genome Metadata}: Does the portal contain genome metadata, such as the
lifestyle of the organism, and the location of where the organism sample was
obtained?
\item {\bf Regulatory Networks}: Is (gene) regulatory information provided by
the site? Eleven BioCyc DBs provide regulatory networks
larger than 100 transcriptional regulatory interactions.
\item {\bf Protein Localization}: Does the portal contain protein cellular locations?
\item {\bf Protein Features}: Are annotations of features of protein
sequences provided by the portal? Such features include which
residues bind to cofactors or to metal ions, and where signaling
peptide sequences reside. IMG provides transmembrane and signal
peptide features.
\item {\bf GO Terms}: Are GO term annotations provided by the site?
IMG provides evidence codes for GO terms. BioCyc provides evidence
terms for gene functions, pathway presence, and operon presence.
\item {\bf Evidence Codes}: Are evidence codes for the annotations
provided by the resource, so the level of validity of the data can
be assessed?
\item {\bf Operons}: Are genes grouped into operons, where applicable?
\item {\bf Prophages}: Are potential prophages indicated on the
genomes?
\item {\bf Growth Media}: Are growth media for known growth conditions
of the organisms provided by the site? (BioCyc provides
growth-media data for two organisms.)
\item {\bf Gene Essentiality}: Are gene essentiality data under
various growth conditions provided by the site? (BioCyc provides
gene-essentiality data for 36 organisms.)
\item {\bf Gene Clusters for Secondary Metabolites}:
Does the site identify putative operons of genes encoding enzymes for the production of
secondary metabolites?
\item {\bf Gene Pairs with Correlated Expression}: Does the site list
pairs of genes with correlated expression, based on experimental evidence?
\item {\bf Protein-Protein Interactions}: Does the site list pairs of
proteins with either experimental or computational evidence of interaction?
\item {\bf AMR Phenotypes}: Can the site display phenotypes for antimicrobial resistance (e.g., is a strain resistant
or susceptible to a particular antimicrobial compound)?
\end{itemize}
\subsection{User Experience}
Table~\ref{tab:user-exp} contains several features that reflect the usability of the various portals. These include
average loading times for typical gene pages for each portal,
and other features and resources that assist the user in learning to use each portal.
\begin{itemize}
\item {\bf Mean Load Time for Gene Pages}: Since gene pages are among the most
commonly visited information pages within a genome web portal, the
time required for the page to load in a web browser is central to
the user experience. The values in this row are the average
number of seconds required for each portal to load a gene page. The
values are averaged across six sessions, conducted from Menlo Park,
California and Richmond, Virginia to average out geographic
distances to each portal. Each session tested five genes on
each of the six portals. Testing was conducted using the Chrome browser version
68.0, running on MacOS 10.13.6. Testing consisted of clearing the
browser cache, and pasting the URL of the gene page into the
browser. The load was monitored using the ``Network'' panel of
Chrome's Developer Tools (More Tools $\rightarrow$ Developer Tools). The page
was allowed to completely load (including loading large files and
waiting for Ajax calls to complete). The number used is the ``Finish'' time in
the bottom line of the panel. While some portals were disadvantaged
by starting from an empty cache, forcing large files to be loaded,
others were slowed by long Ajax calls. We have removed the single worst time recorded
of the 30 times (5 genes $\times$ 6 sessions) for each portal.
\item{\bf Portal Information}: Lists the availability of a
user guide, extensive explanatory tooltips throughout the site, recorded webinars (either downloadable
files or on YouTube or a similar site), and user workshops.
\end{itemize}
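The timing protocol described above (30 measurements per portal, discard the single worst, then average) can be expressed compactly. A minimal sketch (the sample times below are invented for illustration, not actual measurements):

```python
# Sketch: mean page-load time after discarding the single worst measurement,
# per the protocol described above. Sample values are hypothetical.
def mean_load_time(times):
    """Average of the measurements after removing the single largest value."""
    trimmed = sorted(times)[:-1]
    return sum(trimmed) / len(trimmed)

samples = [1.2, 1.4, 1.3, 9.8, 1.1]   # one outlier, e.g. from a slow Ajax call
```

For these invented samples the outlier 9.8 s is dropped and the mean of the remaining four values is reported.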
\section{Discussion}
Table~\ref{tab:tallies} summarizes the number of capabilities present
in each portal. In each row of Table~\ref{tab:tallies} we have summed the
counts in the column for each portal from the specified tables, with
each ``YES'' counted as 1, each ``partial'' counted as $1/2$, and each
``no'' counted as 0.
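The tallying rule (``YES'' = 1, ``partial'' = 1/2, ``no'' = 0) used to produce Table~\ref{tab:tallies} amounts to a simple weighted sum over a table column, as in this sketch (the example column is hypothetical):

```python
# Sketch: scoring a column of table entries per the tallying rule above.
# The example column of entries is hypothetical.
SCORES = {"YES": 1.0, "partial": 0.5, "no": 0.0}

def tally(entries):
    """Sum the scores of a portal's column of YES/partial/no entries."""
    return sum(SCORES[e] for e in entries)

# A made-up column with three YES, one partial, and two no entries:
example = ["YES", "YES", "YES", "partial", "no", "no"]
```

The example column tallies to 3.5.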
\begin{table}[!h]
\centerline{
\begin{tabular}{|l|c|c|c|c|c|c|} \hline
{\bf Tool} & {\bf BioCyc} & {\bf KEGG} & {\bf Ensembl Bacteria} & {\bf KBase} & {\bf IMG} & {\bf PATRIC} \\ \hline \hline
Major & 51 & 30 & 14 & 27.5 & 35 & 29 \\ \hline
SmartTables & 20 & 0 & 0 & 0 & 13.5 & 15 \\ \hline
Multi-Search & 49 & 12 & 7 & 10 & 32 & 15 \\ \hline
Data Types & 10 & 2 & 2 & 2 & 5.5 & 9.5 \\ \hline
\end{tabular}
}
\caption{\label{tab:tallies}
{\bf Tallies of Portal Capabilities from Previous Tables.}
Row ``Major'' summarizes the major capabilities for
genomics tools, metabolic tools, and advanced tools
present in Tables~\ref{tab:genomics-tools}, \ref{tab:metabolic-tools},
and \ref{tab:advanced-tools}. Row ``SmartTables'' summarizes the number of SmartTables capabilities
for each portal present in Table~\ref{tab:smt}.
Row ``Multi-Search'' summarizes the number of multi-search capabilities
for each portal present in Tables~\ref{tab:gene-protein-multi-search},
\ref{tab:site-multi-search}, \ref{tab:compound-multi-search}, and \ref{tab:pathway-multi-search}.
Row ``Data Types'' summarizes the number of datatypes provided by
each portal present in Table~\ref{tab:data-content}, from row ``Genome
Metadata'' downward.
}
\end{table}
BioCyc received the highest count (51) of major capabilities (which
does not count its unique regulatory capabilities listed in Section~\ref{sec:regulation}).
IMG ranked second with a count of 35.
KEGG, PATRIC, and KBase ranked third, fourth, and fifth with counts of
30, 29, and 27.5, respectively. Ensembl Bacteria ranked sixth with a count of 14.
BioCyc has the most extensive multi-search capabilities, with IMG in
second place; these portals provide users with the most extensive
capabilities for finding desired information.
IMG has the most genomics capabilities, with PATRIC and BioCyc second
and third. Ensembl Bacteria
has the fewest genomics capabilities. BioCyc and IMG have the most
powerful gene/protein multi-search capabilities. BioCyc has the most
extensive capabilities for DNA/RNA site multi-searches.
BioCyc has the most extensive metabolic capabilities. KEGG ranks
second; it lacks metabolic modeling capabilities, and it lacks
network analysis tools such as dead-end metabolite analysis and chokepoint
analysis.
BioCyc has the most extensive metabolic multi-search
capabilities, with IMG second.
SmartTables make extensive data analysis capabilities
available to users that in many cases would otherwise require
assistance from a programmer.
BioCyc has the most extensive SmartTable capabilities, with PATRIC
ranking second and IMG ranking third. KEGG, Ensembl Bacteria, and KBase completely lack
SmartTables capabilities.
PATRIC has the largest number of genomes, with KBase and IMG ranked
second and third, respectively;
KEGG has the smallest number
of genomes. Most of the PATRIC genomes were assembled from
whole-genome shotgun data and thus are expected to be of lower
quality --- only 11,803 PATRIC bacterial genomes are complete genomes.
KEGG provides the fastest loading gene pages; BioCyc pages are the
second fastest. Pages for KBase, Ensembl Bacteria, and IMG are significantly slower.
PATRIC gene pages are the slowest, loading 13.96 times slower than KEGG gene pages.
BioCyc contains the most extensive analysis capabilities for
metabolomics and transcriptomics data,
including painting omics data onto individual pathways, multi-pathway
diagrams, and zoomable metabolic maps; enrichment analysis for GO
terms, regulation, and pathways; and an Omics Dashboard.
BioCyc contains extensive unique content not included in any of the
other portals including regulatory network data, data on growth under
different nutrient conditions,
experimental gene essentiality data, reaction atom mappings (also
present in KEGG), and thousands of textbook page
equivalents of mini-review summaries.
KEGG is particularly lacking in diverse datatypes; for
example, it lacks protein features, localization information,
GO terms, and evidence codes.
\section{Conclusions}
Microbial genome web portals have a broad range of capabilities, and
are quite variable in terms of what capabilities they provide.
We assessed the capabilities of BioCyc, KEGG, Ensembl Bacteria, KBase,
IMG, and PATRIC.
BioCyc provided the most capabilities overall in terms of
bioinformatics tools and breadth of data content; it also provides a
level of curated data content (curated from 89,000 publications) that far exceeds that within the other sites.
IMG ranked second overall, second in bioinformatics tools, and second in number of genomes.
KEGG ranked third overall, PATRIC ranked fourth, KBase ranked fifth,
and Ensembl Bacteria ranked sixth.
IMG provided the most extensive genome-related tools, with BioCyc a
close second.
BioCyc provided the most extensive metabolic tools, with KEGG ranked
second. Ensembl Bacteria provided no metabolic tools.
PATRIC provided the largest number of genomes.
BioCyc provided extensive regulatory network tools (and data) that are
not present in any of the other portals.
BioCyc provided the most extensive SmartTable tools and the most
extensive omics data analysis tools.
\section*{Acknowledgments}
We thank Dr. Nishadi De Silva of the European Bioinformatics Institute
for comments and corrections
regarding Ensembl Bacteria.
We thank the KBase team for comments and corrections regarding KBase.
We thank Maulik Shukla of Argonne National Laboratory and the
University of Chicago and Rebecca Wattam of Virginia Tech for comments and corrections regarding PATRIC.
Research reported in this publication was supported by SRI
International and by the National
Institute Of General Medical Sciences of the National Institutes of
Health under Award Number 5R01GM080745. The content is solely the
responsibility of the authors and does not necessarily represent the
official views of the National Institutes of Health.
For JGI contributors, the work presented in this paper was supported
by the Director, Office of Science, Office of Biological and
Environmental Research, Life Sciences Division, U.S. Department of
Energy under Contract No. DE-AC02-05CH11231.
\section{Introduction}
Detailed numerical simulations of the formation of the first stars,
the so-called population III stars, indicate that they were probably massive, with masses
greater than $20 \: {\rm M_{\odot}}$ \citep{abn02,bcl02,yoha06,oshn07}. The fact that no
population III star has ever been observed in the Milky Way provides some observational
support for this prediction \citep{tum06}. However, a number of low-mass, extremely
metal-poor, stars with $[{\rm Fe}/{\rm H}] < -3$ have been discovered in the Galactic halo
\citep{bc05}, suggesting that the distribution of stellar masses is sensitive to even very
low levels of metal enrichment. Explanations of this apparent change in the IMF have
concentrated on the fact that metal-enriched gas has more coolants than its primordial
counterpart. These coolants are suggested to provide an opportunity for efficient
fragmentation, since they can keep the gas temperature lower during the collapse process
than is possible with pure H$_{2}$ cooling in the primordial gas. If this is the case, then
the final IMF will likely contain at least some low-mass stars, even if it still differs
significantly from the present-day local IMF.
If metal enrichment is the key to the formation of low-mass stars, then logically
there must be some critical metallicity ${\rm Z_{\rm crit}}$ at which the formation
of low mass stars first becomes possible. However, the value of ${\rm Z_{\rm crit}}$
is a matter of ongoing debate. Some models suggest that low mass star formation
becomes possible only once atomic fine-structure line cooling from carbon and oxygen
becomes effective \citep{bfcl01,bl03,san06,fjb07}, setting a value for ${\rm Z_{\rm
crit}}$ at around $10^{-3.5} \: {\rm Z_{\odot}}$. Another possibility, and the one
that we explore with this paper, is that low mass star formation is a result of
dust-induced fragmentation occurring at high densities, and thus at a very late stage
in the protostellar collapse \citep{sch02,om05,sch06}. In this model,
$10^{-6}~\simless~{\rm Z_{\rm crit}}~\simless~10^{-4}~{\rm Z_{\odot}}$, where much of
the uncertainty in the predicted value results from uncertainties in the dust
composition and the degree of gas-phase depletion \citep{sch02,sch06}.
The recent simulations performed by \citet{to06}, which model the collapse of very high
density protogalactic gas, provide some support for the dust-induced fragmentation model.
Using a simple piecewise polytropic equation of state to describe the thermal evolution of
extremely metal-poor protogalactic gas, \citet{to06} show that fragmentation can occur at
metallicities as low as ${\rm Z} = 10^{-6} \: {\rm Z_{\odot}}$, and that it becomes more
effective as the metallicity increases. However, their study considered only the limiting
case of gas with zero angular momentum. Owing to the absence of angular momentum, the
fragments formed in their simulation do not survive for longer than a dynamical time, as
they simply fall to the center of the potential well, where they merge with other fragments.
It is also unclear whether the fragmentation process would be as effective if the angular
momentum of the gas were non-zero, as would be expected in reality.
We present the results of simulations of the high-density, dust-cooling dominated
regime that improve on those of \citet{to06} by including the effects of rotation,
and by following a much larger dynamical range of the collapse (ten orders of
magnitude in density), as well as employing an equation of state (EOS) which follows
that of Omukai et al.\ (2005) more closely. We also perform some simulations of purely
primordial gas for comparison. A key feature is the use of sink particles (Bate,
Bonnell \& Price 1995) to capture the formation and evolution of multiple collapsing
cores, which enables us to follow the evolution of the star-forming gas over several
free-fall timescales and thus to model the build-up of a stellar cluster. This
differs from previous calculations which either follow the collapse of a single core
to high densities \citep{abn02,bl04,yoha06}, or use sink particles to capture low
density ($n < 10^{6}$ cm$^{-3}$) fragmentation \citep[e.g.][]{bcl02}.
In the following sections, we give details of the numerical model (\S\ref{code}) and the initial
set-up of the simulations (\S\ref{setup}). The main results of our study are presented in
\S\ref{results} and we discuss the origin of the fragmentation in \S\ref{discussion}.
Potential caveats with the current model are highlighted in \S\ref{caveats}, and there is a
summary of our findings in \S\ref{conclusions}.
\begin{figure}[t]
\includegraphics[width=3.1in,height=1.75in]{f1}
\caption{\label{fig:EOS} The three equations of state (EOSs) from Omukai et~al. (2005) that are used
in our study. The primordial case (solid line), ${\rm Z} = 10^{-6} \: {\rm Z_{\odot}} $~(dotted line), and
${\rm Z} = 10^{-5} \: {\rm Z_{\odot}} $~(dashed line), are shown alongside an example of a polytropic EOS with an
effective $\gamma = 1.06$. }
\end{figure}
\begin{figure*}[t]
\centerline{
\includegraphics[width=6.4in,height=4.18in]{f2}
}
\caption{\label{fig:sequence} Time evolution of the density distribution in the innermost 400 AU of the
protogalactic halo shortly before and shortly after the formation of the first protostar at
$t_{\rm SF}$. We plot only gas at densities above $10^{10}\,$cm$^{-3}$. The dynamical
timescale at a density $n = 10^{13}\,$cm$^{-3}$ is of the order of only 10 years. Dark dots
indicate the location of protostars as identified by sink particles forming at $n \ge
10^{17}\,$cm$^{-3}$. Note that without usage of sink particles we would not have been able
to follow the build-up of the protostellar cluster beyond the formation of the first object.
There are 177 protostars when we stop the calculation at $t = t_{\rm SF} + 420\,$yr. They
occupy a region roughly a hundredth of the size of the initial cloud. With
$18.7\,$M$_{\odot}$ accreted at this stage, the stellar density is $2.25 \times
10^{9}\,$M$_{\odot}\,$pc$^{-3}$.}
\end{figure*}
\begin{figure*}[t]
\begin{center}
\unitlength1cm
\begin{picture}(18,5.5)
\put(1.0, 0.50){\includegraphics[width=1.8in,height=1.8in]{f3a}}
\put(7.0, 0.50){\includegraphics[width=1.8in,height=1.8in]{f3b}}
\put(13.0, 0.50){\includegraphics[width=1.8in,height=1.8in]{f3c}}
\end{picture}
\end{center}
\caption{ \label{fig:density} We illustrate the onset of the fragmentation process in the high resolution
${\rm Z} = 10^{-5} \: {\rm Z_{\odot}} $~simulation. The graphs show the densities of the particles, plotted as a
function of their x-position. Note that for each plot, the particle data has been centered on
the region of interest. We show here results at three different output times, ranging from
the time that the first star forms ($t_{\rm sf}$) to 221 years afterwards. The densities
lying between the two horizontal dashed lines denote the range over which
dust cooling lowers the gas temperature.}
\end{figure*}
\begin{figure*}[t]
\centerline{
\includegraphics[width=2.1in,height=2.1in]{f4a}
\includegraphics[width=2.1in,height=2.1in]{f4b}
\includegraphics[width=2.1in,height=2.1in]{f4c}
}
\caption{\label{fig:masses} Mass functions resulting from simulations with metallicities
${\rm Z} = 10^{-5} \: {\rm Z_{\odot}}$ (left-hand panel), ${\rm Z} = 10^{-6} \: {\rm
Z_{\odot}}$ (center panel), and ${\rm Z} = 0$ (right-hand panel). The plots refer to the
point in each simulation at which 19 $\mathrm{M_\odot}$~ of material has been accreted (which occurs at
a slightly different time in each simulation). The mass resolutions are 0.002 $\mathrm{M_\odot}$~ and
0.025 $\mathrm{M_\odot}$~ for the high and low resolution simulations, respectively. Note the
similarity between the results of the low-resolution and high-resolution simulations. The
onset of dust-cooling in the ${\rm Z} = 10^{-5} \: {\rm Z_{\odot}} $~cloud results in a stellar cluster which has a mass
function similar to that for present day stars, in that the majority of the mass resides in
the lower-mass objects. This contrasts with the ${\rm Z} = 10^{-6} \: {\rm Z_{\odot}} $~and primordial clouds, in which the
bulk of the cluster mass is in high-mass stars.}
\end{figure*}
\section{Details of the Calculations}
\subsection{Numerical Method}
\label{code}
We follow the evolution of the gas in this study using Smoothed Particle
Hydrodynamics (SPH). Our version of the code is essentially a parallelized version of
that used by Bate et~al.~(1995), which utilizes adaptive smoothing lengths for the gas
particles and a binary tree algorithm for computing gravitational forces and
constructing SPH neighbor lists. The calculations were performed on the J{\"u}lich
Multi-processor (JUMP) supercomputer at the John von Neumann Institute for Computing,
Research Center J{\"u}lich, Germany.
The thermal evolution of the gas in our simulations is modelled using a tabulated
equation of state (EOS) that is based on the results of the detailed chemical
modelling of \citet{om05}. We use their reported results for the temperature and
molecular fraction as functions of gas density (their Figures 1 \& 3) with
metallicities ${\rm Z} = 0$, $10^{-6}$ and $10^{-5} \: {\rm Z_{\odot}}$ to compute
the internal energy density and thermal pressure of the gas at a range of different
densities. These values are used to construct look-up tables which are then used by
the SPH code to compute the pressure and internal energy at any required density, for
a given gas metallicity, via linear interpolation (in log-log space) between the tabulated values.
The resulting equations of state are plotted in Figure
\ref{fig:EOS}. By using a tabulated equation of state, we avoid the large
computational expense involved in solving the full thermal energy equation, while
still obtaining qualitatively correct behavior.
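The table-lookup procedure can be illustrated with a short sketch. The tabulated values below are placeholders, not the actual \citet{om05} results; only the linear interpolation in log-log space is the point.

```python
import numpy as np

# Illustrative tabulated EOS lookup: temperature as a function of density,
# interpolated linearly in log-log space. The table entries are placeholders,
# NOT the Omukai et al. (2005) results.
log_n = np.log10([1e6, 1e8, 1e10, 1e12, 1e14])      # density [cm^-3]
log_T = np.log10([300., 400., 700., 1200., 600.])   # temperature [K]

def eos_temperature(n_gas):
    """Return T(n) by linear interpolation of the table in log-log space."""
    return 10.0 ** np.interp(np.log10(n_gas), log_n, log_T)
```

The same scheme is applied to the internal energy and pressure tables; tabulation avoids solving the full thermal energy equation at run time.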
To model the star formation in this study we use sink particles, as described by Bate
et~al.\ (1995). This involves replacing the innermost parts of dense, self-gravitating
regions of gas with particles that can both accrete further material from their
surroundings and interact with other particles in the simulation via gravity. Sink
particles are formed once an SPH particle and its neighbors are gravitationally
bound, collapsing (negative velocity divergence), and within an accretion radius,
$h_{\rm acc}$, which is taken here to be 0.4~AU. Gravitational interactions between
the sink particles and all other particles in the simulation are also smoothed,
self-consistently, to $h_{\rm acc}$. Our set-up allows us to identify sink particles
as the direct progenitors of individual stars \cite[for a more detailed discussion,
see e.g.][]{WK01}.
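The sink-creation test described above can be sketched as follows. This is a schematic, not the actual Bate et~al.\ (1995) implementation: the mass-weighted radial-velocity proxy for the velocity divergence and all variable names are our own simplifications.

```python
import numpy as np

G = 6.674e-8  # gravitational constant [cm^3 g^-1 s^-2]

def sink_formation_check(pos, vel, mass, e_therm, r_acc):
    """Schematic sink test: a clump of SPH particles is replaced by a sink
    if it lies within the accretion radius, is gravitationally bound, and
    is collapsing. The divergence test is approximated here by the sign of
    the mass-weighted radial velocity about the centre of mass."""
    com = np.average(pos, axis=0, weights=mass)
    vcom = np.average(vel, axis=0, weights=mass)
    r = pos - com
    v = vel - vcom
    if np.linalg.norm(r, axis=1).max() > r_acc:
        return False
    # kinetic + thermal energy vs. gravitational binding energy
    e_kin = 0.5 * np.sum(mass * np.sum(v ** 2, axis=1))
    e_grav = 0.0
    for i in range(len(mass)):
        for j in range(i + 1, len(mass)):
            e_grav += G * mass[i] * mass[j] / np.linalg.norm(pos[i] - pos[j])
    bound = e_kin + np.sum(e_therm) < e_grav
    # collapsing: mass-weighted radial velocity points inward
    v_rad = np.sum(np.sum(r * v, axis=1) * mass)
    return bool(bound and v_rad < 0)
```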
\subsection{Setup and Initial conditions}
\label{setup}
Our calculations are designed to start from the point where previous fragmentation
calculations ended. It is now well established that the gas which falls into
collapsing dark matter mini-halos undergoes a phase of fragmentation, resulting in
the formation of large, self-gravitating clumps, with masses of order 100 $\mathrm{M_\odot}$.
The origin of this fragmentation is the rapid cooling driven by ${\rm H_{2}}$ that
occurs at densities of around 10 -- $10^{4}$ cm$^{-3}$, and the masses of the clumps are
similar to the Jeans mass at the corresponding density and temperature in this
regime. We thus pick up the evolution from conditions similar to those reached in the
study of \citet[][]{bfcl01}. Since the process of chemical enrichment involves
supernovae events from the first stars we expect the gas to have a certain initial
level of turbulence, which is normally assumed to be absent during the formation of
the very first stars from purely primordial gas. We also adopt a net angular momentum
for the gas consistent with the results of simulations of cosmological structure
formation.
The clouds in this study have a mass of 500 $\mathrm{M_\odot}$, and are modelled using either 2
million or 25 million SPH particles. In the higher resolution calculations, this gives a
particle mass of $2 \times 10^{-5} \: {\rm M_{\odot}}$ and a mass resolution of 0.002
$\mathrm{M_\odot}$~ (Bate \& Burkert 1997), roughly 10 times smaller than the opacity limit set by the
\citet[][]{om05} equation of state. These simulations therefore have roughly an order of
magnitude of surplus resolution. The low-resolution calculations have a mass resolution
roughly equal to the mass at which the gas becomes optically thick. To ensure that the Jeans
condition is not violated, the sinks are always formed just before the optically thick
regime in all the simulations. Our clouds have an initial radius of 0.17~pc, at an initial
uniform density of $5 \times 10^{5}$~cm$^{-3}$. This corresponds to an initial free-fall
time of $t_{\rm ff} = (3\pi/32G\rho)^{1/2} = 5.1 \times 10^{4}$ years. At this scale and
density regime, the contributions from dark matter to the gravitational potential are small
and are thus not taken into account in our computational set up. One can see from Figure
\ref{fig:EOS} that the different gas metallicities have slightly different internal energies
at the starting density. Thus, the ${\rm Z} \leq 10^{-6} \: {\rm Z_{\odot}} $~calculations have an initial ratio of thermal to
gravitational energy of $\alpha = E_{\rm therm}/ |E_{\rm grav}| =0.39$, while the cooler
${\rm Z} = 10^{-5} \: {\rm Z_{\odot}} $~calculations have an initial value of $\alpha = 0.32$. All simulations are given a
low level of initial turbulence, with the ratio of turbulent to gravitational potential
energy $E_{\rm turb} / |E_{\rm grav}| = 0.1$, and thus an RMS Mach number of
$\mathcal{M}_{RMS} \approx 1$. The clouds are set in initially uniform rotation, with $\beta
= E_{\rm rot}/ |E_{\rm grav}| = 0.02$. The conditions at the start of our simulation are thus
similar to those at which \cite{bcl02} form their sink particles (see their Figure 5 for
comparison). We perform low-resolution simulations for all three metallicity cases, and
high-resolution simulations for the primordial and ${\rm Z} = 10^{-5} \: {\rm Z_{\odot}} $~cases.
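As a consistency check on the quoted free-fall time, one can evaluate $t_{\rm ff} = (3\pi/32G\rho)^{1/2}$ directly for the initial density. The mean molecular weight adopted below is our assumption; its exact value depends on the molecular fraction of the gas.

```python
import numpy as np

G = 6.674e-8        # [cm^3 g^-1 s^-2]
m_H = 1.673e-24     # proton mass [g]
mu = 2.0            # assumed mean molecular weight (depends on H2 fraction)
yr = 3.156e7        # one year [s]

n = 5e5                                            # initial density [cm^-3]
rho = n * mu * m_H                                 # mass density [g cm^-3]
t_ff = np.sqrt(3 * np.pi / (32.0 * G * rho)) / yr  # free-fall time [yr]
```

For these values $t_{\rm ff} \approx 5 \times 10^{4}$ years, consistent with the figure quoted in the text.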
\begin{figure*}[t]
\centerline{
\includegraphics[width=3.1in,height=3.1in]{f5a}
\includegraphics[width=3.1in,height=3.1in]{f5b}
}
\caption{ \label{fig:massrho} The mass (upper panels) and the number of Jeans masses (lower panels) are both
shown here as a function of gas density, with the high-resolution ${\rm Z} = 10^{-5} \: {\rm Z_{\odot}} $~simulation in the
left-hand plot and both the high-resolution primordial and low-resolution ${\rm Z} = 10^{-6} \: {\rm Z_{\odot}} $~simulations
in the right-hand plot. In the upper panels, we plot the mass residing above a density,
$n_{\rm gas}$, as a function of that density, as well as the Jeans mass. In the lower
panels, we plot the number of Jeans masses as a function of density. In the left-hand panel,
we show via the shaded areas, the heating (pink) and cooling (light blue) phases in the
${\rm Z} = 10^{-5} \: {\rm Z_{\odot}} $~EOS. In the right-hand plot, we show the conditions in the primordial
(high-resolution) simulation (solid lines), and ${\rm Z} = 10^{-6} \: {\rm Z_{\odot}} $~simulation (dashed lines). The gas
conditions are shown at the point of star formation and 50 years earlier, as well as at two
instants after star formation, when the clouds have converted around 9.5\,$\mathrm{M_\odot}$~ and
19\,$\mathrm{M_\odot}$~ of gas into protostars. Note that we only label the evolutionary stages in the
lower panels, but the same progression with time, going from lower left-most lines to those
at the top, applies also to the upper panels. }
\end{figure*}
\section{Results}
\label{results}
We find that enrichment of the gas to a metallicity of only ${\rm Z} = 10^{-5} \:{\rm
Z_{\odot}}$ dramatically enhances fragmentation within the dense, collapsing cloud.
We first focus our discussion on the evolution of the ${\rm Z} = 10^{-5} \: {\rm
Z_{\odot}}$ gas cloud, before contrasting with the $10^{-6} \: {\rm Z_{\odot}}$ and
primordial clouds.
The evolution of the high-resolution ${\rm Z} = 10^{-5} \: {\rm Z_{\odot}} $~simulation is illustrated in Figure
\ref{fig:sequence}, in a series of snapshots showing the density distribution of the
gas. We show several stages in the collapse process, spanning a time interval from
shortly before the formation of the first protostar (as identified by the formation of
a sink particle in the simulation) to 420 years afterwards. Note that we show only the
innermost 1\% of the full computational domain. A complementary view is given in Figure
\ref{fig:density}, which shows the particle densities as a function of position.
During the initial contraction, the cloud builds up a central core with a density of
about $n = 10^{10}\,$cm$^{-3}$. This core is supported by a combination of thermal
pressure and rotation. Eventually, the core reaches high enough densities to go into
free-fall collapse, and forms a single protostar. As more high angular momentum
material falls to the center, the core evolves into a disk-like structure with density
inhomogeneities caused by low levels of turbulence. As it grows in mass, its density
increases. When dust-induced cooling sets in, it fragments heavily into a tightly
packed protostellar cluster within only a few hundred years. One can see this behavior
in particle density-position plots in Figure \ref{fig:density}. We stop the simulation
420 years after the formation of the first stellar object (sink particle). At this
point, the core has formed 177 stars. The evolution in the low-resolution simulation is
very similar. The time between the formation of the first and second protostars is
roughly 23 years, which is two orders of magnitude longer than the free-fall time at
the density where the sinks are formed. Without the inclusion of sink particles, we
would only have been able to capture the formation of the first collapsing object which
forms the first protostar: {\em the formation of the accompanying cluster would have
been missed entirely}.
The mass functions of the protostars at the end of the ${\rm Z} = 10^{-5} \: {\rm Z_{\odot}} $~simulations (both high
and low resolution cases) are shown in Figure \ref{fig:masses} (left-hand panel). When
our simulation is terminated, the sink particles hold $\sim$ 19 $\mathrm{M_\odot}$~ of gas in
total. The mass function peaks somewhere below $0.1\ $M$_{\odot}$ and ranges from below
0.01$\,$M$_{\odot}$ to about $5\ $M$_{\odot}$. It is important to stress here that this
is not the final protostellar mass function. The continuing accretion of gas by the
cluster will alter the mass function, as will mergers between the newly-formed
protostars (which cannot be followed using our current sink particle implementation).
Protostellar feedback in the form of winds, jets and H{\sc ii} regions may also play a
role in determining the shape of the final stellar mass function. However, a key point
to note is that the chaotic evolution of a bound system such as this cluster ensures
that a wide spread of stellar masses will persist. Some stars will enjoy favourable
accretion at the expense of others that will be thrown out of the system (as can be
seen in Figure \ref{fig:sequence}), thus having their accretion effectively terminated
(see for example, the discussions in Bonnell \& Bate 2006 and Bonnell, Larson \&
Zinnecker 2007). The survival of some of the low mass stars formed in the cluster is
therefore inevitable.
Our calculations demonstrate that the dust-cooling model of \citet{om05} can indeed
lead to the formation of low-mass objects from gas with very low metallicity. This
suggests that the transition from high-mass primordial stars to Population II stars
with a more ``normal'' mass spectrum occurs early in the universe, at metallicities at
or below Z $\approx10^{-5}\ $Z$_{\odot}$. Our simulations even predict the existence of
brown dwarfs in this metallicity range. Their discovery would be a critical test for
our model of the formation of the first star cluster. The first hints of its validity
come from the very low metallicity sub-giant stars that have recently been discovered
in the Galactic halo \cite[][]{christlieb02,bc05}, which have iron abundances less than
$10^{-5}$ times the solar value and masses below one solar mass, consistent with the
range reported here.
Turning our attention to the ${\rm Z} = 0$ and ${\rm Z} = 10^{-6} \: {\rm Z_{\odot}} $~calculations, we also find that
fragmentation of the gas occurs, albeit at a much lower level than in the ${\rm Z} =
10^{-5} \: {\rm Z_{\odot}}$ run. The mass functions from these simulations are shown in
Figure \ref{fig:masses} (middle and right-hand panels), and are again taken when $\sim$
19 $\mathrm{M_\odot}$~ of gas has been accreted onto the sink particles, the same amount as is
accreted by the end of the ${\rm Z} = 10^{-5} \: {\rm Z_{\odot}} $~calculations.
The primordial gas clouds form fewer protostars than the ${\rm Z} = 10^{-5} \: {\rm Z_{\odot}} $~clouds, with the high
resolution simulation forming 25 sink particles and the low-resolution simulation
forming 22 sink particles. The mass functions are considerably flatter than the present
day IMF, in agreement with the suggestion that Population III stars are typically very
massive. The fragmentation in the ${\rm Z} = 10^{-6} \: {\rm Z_{\odot}} $~simulation is slightly more efficient than in
the primordial case, with 33 objects forming. Again we stress that there is a delay of
several local free-fall times between the formation of first and second protostars in
these simulations: {\em without the inclusion of sink particles, we would have missed
the formation of the lower mass objects.}
\section{Conditions for Fragmentation and Cluster Formation}
\label{discussion}
We now discuss the physical origin of the fragmentation in the ${\rm Z} = 10^{-5} \: {\rm Z_{\odot}} $~simulation, and
investigate the properties of the forming cluster.
First, we focus on the distribution of gas in the center of the halo right at the onset of star formation, i.e.\ at the time when we identify the first sink particle. Figure \ref{fig:ang-mom} shows $(a)$ the distribution of rotational speed $L/r$ relative to the Keplerian velocity $v_{\rm kep} = (GM/r)^{1/2}$ and $(b)$ the specific angular momentum $L$ in spherical mass shells around the halo center as a function of the enclosed mass. The blue curves denote the $10^{-5}\,$Z$_{\odot}$ case, while the red-dashed curves illustrate the behavior of zero metallicity gas, as discussed further below. It is evident that at the time when the first sink particle forms, rotational support plays only a minor role. The rotational velocity of gas in the inner parts of the halo is still sub-Keplerian by a factor of 4 to 5. This will change rapidly in the subsequent evolution as more and more higher angular momentum gas falls to the center. In $(c)$ we plot the enclosed mass and in $(d)$ the spherically averaged density as a function of distance from the center. The density roughly follows a power-law distribution with slope $-2\pm0.1$.
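The quantities plotted in Figure \ref{fig:ang-mom}$(a)$ can be sketched as follows; the function names are ours, and the shell-averaging of the actual simulation data is omitted.

```python
import numpy as np

G = 6.674e-8                    # [cm^3 g^-1 s^-2]
M_sun, AU = 1.989e33, 1.496e13  # [g], [cm]

def v_kep(m_enc, r):
    """Keplerian velocity for enclosed mass m_enc [g] at radius r [cm]."""
    return np.sqrt(G * m_enc / r)

def rotational_support(L, r, m_enc):
    """Ratio of the rotational speed L/r to the Keplerian velocity;
    values well below unity indicate sub-Keplerian rotation."""
    return (L / r) / v_kep(m_enc, r)
```

A shell rotating at a quarter of the Keplerian speed, as found in the inner halo, returns a support ratio of 0.25.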
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=8.5cm]{f6.eps}
\end{center}
\caption{\label{fig:ang-mom} $(a)$ Radially-averaged rotational velocity $L/r$ normalized
to the Keplerian velocity $v_{\rm kep} = (GM_{\rm enc}(r)/r)^{1/2}$ and $(b)$ absolute
value of the specific angular momentum $L$ in spherical shells centered on the density
peak as a function of enclosed mass; $(c)$ $M_{\rm enc}(r)$ and $(d)$ average local gas
density $n_{\rm gas}(r)$ as a function of distance $r$ from the center. Blue (solid line)
curves denote gas with Z$=10^{-5}$Z$_{\odot}$ and red (dashed line) curves denote gas
with Z$=0$. To guide the eye, a power-law slope of $-2$ is indicated in $(d)$.}
\end{figure}
We point out that the properties of the star forming clump in our model are virtually identical to those arising from full cosmological calculations taking into account the combined evolution of baryons and dark matter over time \cite[for comparison, see, e.g.][]{abn02,yoha06}. If anything, one could argue that the specific angular momentum we consider is somewhat on the low side. We therefore expect that fragmentation will also occur in full cosmological simulations, if these are continued beyond the formation of the very first stellar object \cite[see e.g.\ ][for some preliminary evidence of this]{yokh07}.
To illustrate how the conditions
for cluster formation arise we show in Figure \ref{fig:density} the density
distribution perpendicular to the rotational axis of the system at different times. In
Figure \ref{fig:massrho}, in the top panel, we plot both the mass of gas which resides
above a density $n$ as well as the Jeans mass. In the bottom panel, we plot
the number of Jeans masses, which is simply the mass at this density (or higher)
divided by the corresponding Jeans mass.
\begin{figure*}[t]
\begin{center}
\unitlength1cm
\begin{picture}(20,5.5)
\put(1.0, 0.50){\includegraphics[width=1.8in,height=1.8in]{f7a}}
\put(7.0, 0.50){\includegraphics[width=1.8in,height=1.8in]{f7b}}
\put(13.0, 0.50){\includegraphics[width=1.8in,height=1.8in]{f7c}}
\end{picture}
\end{center}
\caption{\label{fig:lowresxrho} Particle densities as a function of position in the
low-resolution simulations, for the primordial (left), ${\rm Z} = 10^{-6} \: {\rm Z_{\odot}} $~(middle) and
${\rm Z} = 10^{-5} \: {\rm Z_{\odot}} $~simulations (right). The particles are plotted once the protostars in each
simulation have accreted 19 $\mathrm{M_\odot}$~ of gas.}
\end{figure*}
The fragmentation in the low-metallicity gas (the ${\rm Z} = 10^{-5} \: {\rm Z_{\odot}} $~case) is the result of two
key features in its thermal evolution. First, the rise in the EOS curve between
densities $10^{9}$cm$^{-3}$ and $10^{11}$cm$^{-3}$ causes material to loiter at this
point in the gravitational contraction. A similar behavior at densities around $n =
10^3\,$cm$^{-3}$ is discussed by Bromm et al.\ (2001).
The rotationally stabilized disk-like structure, as seen in the plateau at $n \approx
10^{10}$cm$^{-3}$ in Figure \ref{fig:density}, is able to accumulate a significant
amount of mass in this phase and only slowly increases in density. Second, once the
density exceeds $n \approx 10^{12}$cm$^{-3}$, the sudden drop in the EOS curve lowers
the critical mass for gravitational collapse by two orders of magnitude. The Jeans mass
in the gas at this stage is only $M_{\rm J} = 0.01\ $M$_{\odot}$, as visible in the top
panel of Figure \ref{fig:massrho}. The disk-like structure suddenly becomes highly
unstable against gravitational collapse. We see this when looking at the behavior of
the number of Jeans masses $N_{J}$ above a certain density $n$ in the bottom panel of
Figure \ref{fig:massrho}, which shortly after the formation of the first stellar object
increases to values as high as $N_{\rm J} \approx 10^3$. Consequently, the disk
fragments vigorously on timescales of several hundred years. A very dense cluster of
embedded low-mass protostars builds up, and the protostars grow in mass by accretion
from the available gas reservoir. The number of protostars formed by the end of our
simulation (177) is nearly two orders of magnitude larger than the initial number of
Jeans masses in the cloud set-up.
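The two-orders-of-magnitude drop in the critical mass can be illustrated by evaluating the Jeans mass before and after the dust-cooling phase. The temperatures and densities below are illustrative values loosely based on the \citet{om05} EOS, not simulation output, and the mean molecular weight is our assumption.

```python
import numpy as np

G, k_B, m_H = 6.674e-8, 1.381e-16, 1.673e-24  # cgs
M_sun = 1.989e33
mu = 2.3  # assumed mean molecular weight for fully molecular gas

def jeans_mass(T, n):
    """Jeans mass [Msun] for temperature T [K] and number density n [cm^-3]."""
    rho = n * mu * m_H
    return ((5 * k_B * T / (G * mu * m_H)) ** 1.5
            * (3 / (4 * np.pi * rho)) ** 0.5 / M_sun)

# illustrative state before dust cooling (warm, loitering gas):
m_before = jeans_mass(1200.0, 1e12)
# after the dust-induced temperature drop at higher density:
m_after = jeans_mass(300.0, 1e14)
```

With these values the Jeans mass falls from a few tenths of a solar mass to $\sim 0.01\,$M$_{\odot}$, i.e.\ by roughly two orders of magnitude, as described in the text.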
Because the evolutionary timescale of the system is extremely short -- the free-fall
time at a density of $n = 10^{13}\,$cm$^{-3}$ is of the order of 10 years -- none
of the protostars that have formed by the time that we stop the simulation have yet
commenced hydrogen burning, justifying our decision to neglect the effects of
protostellar feedback in this study. Heating of the dust due to the significant
accretion luminosities of the newly-formed protostars will occur \citep{krum06}, but is
unlikely to be important, as the temperature of the dust at the onset of dust-induced
cooling is much higher than in a typical Galactic protostellar core ($T_{\rm dust} \sim
100 \: {\rm K}$ or more, compared to $\sim 10 \: {\rm K}$ in the Galactic case). The
rapid collapse and fragmentation of the gas also leaves no time for dynamo
amplification of magnetic fields \citep{tb04}, which in any case are expected to be
weak and dynamically unimportant in primordial and very low metallicity gas
\citep{wid02}.
The cluster forming in our ${\rm Z} = 10^{-5} \: {\rm Z_{\odot}} $~simulation represents a very extreme analogue of
the clustered star formation that we know dominates in the present-day Universe
\cite[][]{ll03}. A mere 420 years after the formation of the first object, the
cluster has formed 177 stars (see Figure \ref{fig:sequence}). These occupy a region
of only around 400~AU, or $2 \times 10^{-3}$~pc, in size, roughly a hundredth of the
size of the initial cloud. With $\sim 19 \,$M$_{\odot}$ accreted at this stage, the
stellar density is $2.25 \times 10^{9}$ $\mathrm{M_\odot}$~ pc$^{-3}$. This is about five orders
of magnitude greater than the stellar density in the Trapezium cluster in Orion
\cite[][]{hh98} and about a thousand times greater than that in the core of 30
Doradus in the Large Magellanic Cloud \cite[][]{mh98}. This means that dynamical encounters
will be extremely important during the formation of the first star cluster. The
violent environment causes stars to be thrown out of the denser regions of the cluster,
slowing down their accretion. The stellar mass spectrum thus depends on both the
details of the initial fragmentation process \cite[e.g.\ as discussed by][]{jappsen05,
clark05} as well as dynamical effects in the growing cluster \cite[][]{bcbp2001,bbv04}.
This is different from present-day star formation, where the
situation is less clear-cut and the relative importance of these two processes may
vary strongly from region to region \cite[][]{kmk05, bb06, blz07}.
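As an order-of-magnitude check of the quoted stellar density: the spherical geometry assumed below is our simplification, and the precise value depends on how the $\sim$400~AU region is idealized.

```python
# Order-of-magnitude check of the protostellar cluster density.
# Geometry is an assumption: the ~400 AU region is idealized here as a
# sphere of radius 200 AU.
AU_pc = 1.0 / 206265.0            # 1 AU in parsec
m_stars = 19.0                    # Msun accreted by the sinks
r = 200.0 * AU_pc                 # radius [pc]
volume = 4.0 / 3.0 * 3.141593 * r ** 3
rho_star = m_stars / volume       # Msun / pc^3, of order 10^9
```

The result is a few $\times 10^{9}\,$M$_{\odot}\,$pc$^{-3}$, consistent (to within a geometric factor) with the value quoted above.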
Dynamical encounters in the extremely dense protocluster will also influence the binary
fraction of the stars that form. Wide binaries will be rapidly disrupted, and so any
binary systems that survive will be tightly-bound close binaries \cite[][]{kroupa98}.
Recent observations suggest that extremely metal-poor low-mass stars have a higher
binary fraction than that found for normal metal-poor stars, and that the period
distribution of these binary systems is also skewed towards tight, short-period
binaries (Lucatello et~al.\ 2005). Mass transfer from a close binary companion may also
be able to explain the extremely high [C/Fe] ratios measured in the most metal-poor
stars currently known \citep{ry05,kom07}. Our results suggest that these stars may
originate in conditions similar to those that we find in our simulations. Further work
aimed at improving our understanding of the binary statistics of low-metallicity stars
formed by dust-induced fragmentation is clearly required.
As mentioned in Section \ref{results}, the primordial and the ${\rm Z} = 10^{-6} \: {\rm Z_{\odot}} $~cases also
exhibit some fragmentation. Careful analysis of the \citet{om05} EOS for
zero-metallicity gas shows roughly isothermal behavior in the density range
$10^{14}\,$cm$^{-3} \le n \le 10^{16}\,$cm$^{-3}$, i.e.\ just before the gas becomes
optically thick and begins to heat up adiabatically. Conservation of angular momentum
again leads to the build-up of a rotationally supported massive disk-like structure
which then fragments into several objects. This is understandable, as isothermal disks
are susceptible to gravitational instability \cite[][]{bodenheimer95} once they have
accumulated sufficient mass. Further, Goodwin et~al.~(2004a,b) show how even very low
levels of turbulence can induce fragmentation. Since turbulence creates
local anisotropies in the angular momentum on all scales, it can always provide some
centrifugal support against gravitational collapse. This support can then provide a window in
which fragmentation can occur.
In addition to the quasi-isothermal behavior of the gas, both of the
${\rm Z} \leq 10^{-6} \: {\rm Z_{\odot}} $~equations of state from Omukai et~al. (2005) also contain a brief phase of
cooling during the collapse, which further aids fragmentation. In the primordial case,
this occurs very late in the collapse, just above $n = 10^{14}$ cm$^{-3}$, and is due
to the onset of efficient cooling from ${\rm H_{2}}$ collision-induced emission
\citep{on98,ra04}. In the ${\rm Z} = 10^{-6} \: {\rm Z_{\odot}} $~EOS, the cooling occurs earlier, at around $n = 10^{10}$
cm$^{-3}$, and is due to a combination of effects -- enhanced ${\rm H_{2}}$ cooling
resulting from the rapid increase in the molecular fraction at these densities due to
efficient three-body ${\rm H_{2}}$ formation, and rotational and vibrational line
cooling from ${\rm H_{2}O}$ -- that are present in the model of Omukai et~al.\ (2005).
One can see the emergence of structure at these densities
in the particle plots shown in Figure~\ref{fig:lowresxrho}. However, the effect is much less
pronounced in the primordial case. Indeed a further low-resolution simulation of the
primordial EOS in which this dip was removed yielded almost identical results, forming
17 sink particles instead of 22, suggesting that the quasi-isothermal nature of the
gas is more important than this brief cooling phase.
In comparison to the ${\rm Z} = 10^{-5} \: {\rm Z_{\odot}} $~case, the strength of fragmentation in the
${\rm Z} \leq 10^{-6} \: {\rm Z_{\odot}} $~calculations is weak, and only a few objects form for the combination of total
mass ($M=500\,$M$_{\odot}$), angular momentum ($E_{\rm rot}/ |E_{\rm grav}|= 0.02$),
and level of initial turbulence ($E_{\rm turb}/|E_{\rm grav}| = 0.1$) that we adopted
in our simulations. Consequently the stars (sink particles) forming in the
${\rm Z} \leq 10^{-6} \: {\rm Z_{\odot}} $~simulations are typically of higher mass than those in the ${\rm Z} = 10^{-5} \: {\rm Z_{\odot}} $~simulations,
consistent with the predictions made by \citet[][]{abn02} and \citet[][]{bcl02}.
Although the recent high-resolution SPH simulations of primordial gas performed by
\citet{yoha06}, which predict a thermal evolution similar to the EOS used in our
primordial simulations, find no fragmentation, they do not follow the evolution of the
gas beyond the formation of the first collapsing core, since they do not include sink
particles in their study, and so they may miss this subsequent phase of fragmentation.
However, more recent work \citep{yokh07} does show evidence of fragmentation on scales
of around 0.1~pc.
\section{Caveats}
\label{caveats}
Although the equations of state taken from Omukai et~al.\ (2005) provide an opportunity
for vigorous fragmentation at low metallicities, there are a number of questions which
still need to be addressed. One immediate uncertainty is the applicability of the
results derived from a 1-zone model to a full three-dimensional collapse calculation,
which contains turbulence and thus local anisotropies in velocity and density
structure. In particular, the existence of a strict relationship between temperature
and density is unlikely, even when the cooling time of the gas is short compared to the
dynamical time (Whitehouse \& Bate 2006). If the gas were to collapse more slowly
than is assumed in the Omukai et~al.\ (2005) calculations, or if the opacity of the
gas were to be lower, then less heating would occur prior to the onset of efficient
dust cooling.
On the other hand, if the collapse is faster than Omukai et~al.\ (2005) assume or if
the gas opacity is greater, then more heating would occur. Since the amount of heating
that occurs during the loitering phase helps to accumulate material at high densities,
which then acts as a reservoir for fragmentation, the amount of fragmentation will be
sensitive to the thermal evolution of the gas during this phase of the collapse, and is
therefore somewhat uncertain. However, we note that unless the temperature dip is
eliminated entirely (which seems unlikely), fragmentation will still occur along the
lines outlined in this paper. We therefore believe our results to be qualitatively, if
not quantitatively, correct.
A related issue is one of dust opacity. In their study, Omukai et~al.\ (2005) use a
dust model based on Pollack et~al.\ (1994), which was designed for the study of
Galactic dust. However, it is not at all clear that this is the appropriate model to
use, given that high-redshift dust is likely to differ significantly from local dust
\citep[see e.g.][]{tod01}. Schneider et~al.\ (2006) have performed a similar study to
that of Omukai et~al.\ (2005), but use a dust model based on the results of
\citet{tod01} and \citet{sfs04} in place of the Pollack et~al.\ (1994) dust model.
They find that this leads to qualitatively similar behavior to the Omukai et~al.
(2005) model, but that the onset of effective dust cooling happens at lower
metallicity, ${\rm Z} = 10^{-6} \: {\rm Z_{\odot}} $~rather than ${\rm Z} = 10^{-5} \: {\rm Z_{\odot}} $. However, the use of this alternative dust model
can also be questioned, since \citet{noz03} have shown that the composition and
properties of the dust produced by population III supernovae are strongly dependent on
the (poorly-constrained) degree of mixing assumed to occur in the ejecta, and since
the model does not take into account the destruction of grains by thermal sputtering
in the reverse shock in the supernova remnant \citep{bs07}, or the growth of grains
through accretion or coagulation during the protostellar collapse \citep[see
e.g.][]{fpw05}.
Lastly, the total amount of heating produced by three body H$_{2}$ formation is
uncertain. This reaction, which takes place primarily at gas number densities between
$10^{8}$ and $10^{13} \: {\rm cm^{-3}}$, is responsible for much of the rise in gas
temperature prior to the dust-cooling phase. However, there is a difference of two
orders of magnitude between the three-body ${\rm H_{2}}$ formation rate used in the
simulations of primordial star-formation performed by Abel et~al.~(2002) and the most
recent theoretical determination by Flower \& Harris (2007), with other suggested
rates spanning the full range in between. This introduces an additional uncertainty
into the thermal evolution of the gas during the high-density loitering phase.
\section{Conclusions}
\label{conclusions}
We have performed numerical simulations of star formation in very high density gas (in
the range $5 \times 10^{5} \leq n \leq 10^{16} \: {\rm cm^{-3}}$) in the early
universe. The aim of the study was to investigate whether the dust-induced
fragmentation predicted by Omukai et~al.\ (2005) and Schneider et~al.\ (2002, 2006)
does actually occur in realistic systems, and to begin to constrain the resulting IMF.
The major differences of our work from the only previous numerical study of
dust-induced fragmentation (Tsuribe \& Omukai 2006) are the inclusion of the effects
of rotation, which prove to be of vital importance, and the use of sink particles to
capture the formation of multiple protostellar objects.
Based on the equations of state reported by Omukai et~al.\ (2005), our results show
that fragmentation of protogalactic gas at very high densities above $n_{\rm gas} =
10^{12}\,$cm$^{-3}$ is almost unavoidable, as long as the angular momentum is
non-negligible. In this case, rotation leads to the build-up of a massive disk-like
structure which provides the background for smaller-scale density fluctuations to grow,
some of which become gravitationally unstable and collapse to form stars. At
metallicities above Z~$\sim 10^{-5}\,$Z$_{\odot}$ dust cooling becomes effective at
densities $n_{\rm gas} \sim 10^{12}\,$cm$^{-3}$ and leads to a sudden drop of
temperature which in turn induces vigorous fragmentation. A very dense cluster of
low-mass protostars builds up, which we refer to as the first stellar cluster. The mass
spectrum peaks below $1\,$M$_{\odot}$, which is similar to the value in the solar
neighborhood \cite[][]{kroupa02,chabrier03} and is also comparable to the mass of the
very low metallicity subgiant stars recently discovered in the halo of our Milky Way
\citep{christlieb02, bc05}. If the dust-induced cooling model proposed by Omukai et~al.\
(2005) is accurate, then the high-density, low-metallicity fragmentation we describe
here may be the dominant process which shapes the stellar mass function.
We find that even purely primordial Z~$ = 0$~gas with sufficient rotation may fragment
at densities $10^{14}\,$cm$^{-3} \le n \le 10^{16}\,$cm$^{-3}$. In this density range,
zero metallicity gas is roughly isothermal and the disk-like structure that forms due
to angular momentum conservation is marginally unstable. It fragments into several,
quite massive objects, thus supporting the hypothesis that metal-free stars should have
masses in excess of several tens of M$_{\odot}$. Similar behavior is found for gas with
a metallicity of ${\rm Z} = 10^{-6} \: {\rm Z_{\odot}} $.
\begin{acknowledgments}
The authors would like to thank Kazuyuki Omukai, Tom Abel, Volker Bromm, Robi
Banerjee and Mordecai-Mark Mac Low for helpful discussions concerning this topic.
In particular, we would also like to thank Naoki Yoshida for many interesting
and insightful discussions regarding this paper. All computations described here
were performed at the J{\"u}lich Multi-processor (JUMP) supercomputer at the John
von Neumann Institute for Computing, Research Centre J{\"u}lich, Germany. PCC
acknowledges support by the Deutsche Forschungsgemeinschaft (DFG) under grant
KL~1358/5. SCOG acknowledges travel support from the European Commission FP6 Marie
Curie RTN CONSTELLATION (MRTN-CT-2006-035890).
\end{acknowledgments}
\renewcommand{\baselinestretch}{1.4}
\usepackage{babel}
\addto\extrasfrench{\providecommand{\og}{\leavevmode\flqq~}
\providecommand{\fg}{\ifdim\lastskip>\z@\unskip\fi~\frqq}}
\begin{document}
\title{\Large{Another method for viscosity solutions of integro-partial differential equations via concavity}}
\date{\today}
\selectlanguage{english}
\author{\textsc{L.\ SYLLA\footnotemark[2]}}
\footnotetext[2]{Universit\'e Gaston Berger, LERSTAD, CEAMITIC, e-mail: sylla.lamine@ugb.edu.sn}
\maketitle
\begin{abstract}
In this paper we consider the problem of viscosity solutions of integro-partial differential equations (IPDEs for short) via solutions of backward stochastic differential equations (BSDEs for short) with jumps, where the L\'evy measure is not necessarily finite. We mainly use the concavity of the generator in its second variable to establish the existence and uniqueness of the solution with non-local terms.\\
\end{abstract}
\bigskip{}
\textbf{Keywords}: Integro-partial differential equation; Backward stochastic differential equations with jumps; Viscosity solution; Non-local operator; Concavity.\\
\\
\textbf{MSC 2010 subject classifications}: 35D40, 35R09, 60H30.
\bigskip{}
\newpage
\section{Introduction}
We consider the following system of integro-partial differential equations, whose unknowns are functions of $(t,x)$: $\forall i\in\{1,\ldots,m\}$,
\begin{equation}\label{eq1}
\left
\{\begin{array}{ll}
-\partial_{t}u^{i}(t,x)-b(t,x)^{\top}\mathrm{D}_{x}u^{i}(t,x)-\frac{1}{2}\mathrm{Tr}(\sigma\sigma^{\top}(t,x)\mathrm{D}^{2}_{xx}u^{i}(t,x))-\mathrm{K}_{i}u^{i}(t,x)\\
-\mathit{h}^{(i)}(t,x,u^{i}(t,x),(\sigma^{\top}\mathrm{D}_{x}u^{i})(t,x),\mathrm{B}_{i}u^{i}(t,x))=0,\quad (t,x)\in\left[ 0,T\right] \times\mathbb{R}^{k};\\
u^{i}(T,x)=g^{i}(x);
\end{array}
\right.
\end{equation}
where the operators $\mathrm{B}_{i}$ and $\mathrm{K}_{i}$ are defined as follows:
\begin{eqnarray}
\mathrm{B}_{i}u^{i}(t,x) & = & \displaystyle\int_{\mathrm{E}}\gamma^{i}(t,x,e)(u^{i}(t,x+\beta(t,x,e))-u^{i}(t,x))\lambda(\mathrm{d} e);\label{2.2}\\
\mathrm{K}_{i}u^{i}(t,x) & = & \displaystyle\int_{\mathrm{E}}(u^{i}(t,x+\beta(t,x,e))-u^{i}(t,x)-\beta(t,x,e)^{\top}\mathrm{D}_{x}u^{i}(t,x))\lambda(de).\nonumber
\end{eqnarray}
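Let us point out (a standard verification, spelled out here for completeness) that, for a function $u^{i}$ of class $C^{2}$ with bounded first and second derivatives, the operators $\mathrm{B}_{i}$ and $\mathrm{K}_{i}$ are well defined under the assumptions on $\beta$, $\gamma^{i}$ and $\lambda$ given in Section 2 below. Indeed, by Taylor's formula and the bound $|\beta(t,x,e)|\leq C(1\wedge|e|)$,
$$\left|u^{i}(t,x+\beta(t,x,e))-u^{i}(t,x)-\beta(t,x,e)^{\top}\mathrm{D}_{x}u^{i}(t,x)\right|\leq \frac{1}{2}\,\|\mathrm{D}^{2}_{xx}u^{i}\|_{\infty}\,C^{2}(1\wedge|e|^{2}),$$
and similarly $|\gamma^{i}(t,x,e)|\,|u^{i}(t,x+\beta(t,x,e))-u^{i}(t,x)|\leq C^{2}\,\|\mathrm{D}_{x}u^{i}\|_{\infty}\,(1\wedge|e|^{2})$; both right-hand sides are $\lambda$-integrable since $\lambda$ integrates $(1\wedge|e|^{2})$.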
The resolution of (\ref{eq1}) is in connection with the following system of backward stochastic differential equations with jumps :
\begin{equation}\label{eq2}
\left
\{\begin{array}{ll}
dY^{i;t,x}_{s}=-f^{(i)}(s,X^{t,x}_{s},Y^{i;t,x}_{s},Z^{i;t,x}_{s},U^{i;t,x}_{s})ds+Z^{i;t,x}_{s}\mathrm{d}
\mathrm{B}_{s}+\displaystyle\int_{\mathrm{E}}\mathrm{U}^{i;t,x}
_{s}(e)\tilde{\mu}(\mathrm{d}s,\mathrm{d}e),\quad s\leq T;\\
Y^{i;t,x}_{T}=g^{i}(X^{t,x}_{T});
\end{array}
\right.
\end{equation}
where $\{X^{t,x}_{s},\,s\leq T\}$ is the solution of the following standard stochastic differential equation of jump-diffusion type:
\begin{equation}\label{2.4}
X^{t,x}_{s}=x+\displaystyle\int^{s}_{t}b(r,X^{t,x}_{r})\, \mathrm{d}r+\displaystyle\int^{s}_{t}\sigma(r,X^{t,x}_{r})\, \mathrm{d}B_{r}+\displaystyle\int^{s}_{t}
\displaystyle\int_{E}\beta(r,X^{t,x}_{r-},e)\tilde{\mu}(dr,de),
\end{equation}
for $s\in[t,T]$ and $X^{t,x}_{s}=x$ if $s\leq t$.\\
Several authors have studied (\ref{eq1}) by various methods.
Among others we can quote Barles et al. \cite{bar}, who use a comparison theorem under a monotonicity assumption on the generator, and Ouknine and Hamad\`ene \cite{hamaOuk}, who use the penalization method.\\
More recently, Hamad\`ene relaxed the monotonicity hypothesis on the generator by introducing a new class of functions (see \cite{hama}, page 216) for the resolution of (\ref{eq1}).\\
In this work we propose to solve (\ref{eq1}) by relaxing both the monotonicity of the generator and the class of functions introduced in \cite{hama}, assuming that $\lambda(E)=\infty$ and that the generator is concave in its second variable. We recall that this technique was used in \cite{fan} for the resolution of BSDEs.\\
Our paper is organized as follows: in the next section we give the notations and the assumptions; in Section 3 we recall a number of existing results; in Section 4 we establish estimates and properties needed for the resolution of our problem; Section 5 is devoted to our main result.\\
Finally, the classical definition of viscosity solutions is given in the Appendix.
\section{Notations and assumptions}
Let $\left(\Omega,\mathcal{F},(\mathcal{F}_{t})_{t\leq T},\mathbb{P}\right)$ be a stochastic basis such that $\mathcal{F}_{0}$ contains all $\mathbb{P}-$null sets of $\mathcal{F}$, and $\mathcal{F}_{t}=\mathcal{F}_{t+}:=\bigcap_{\epsilon>0}
\mathcal{F}_{t+\epsilon},~t\geq 0$, and we suppose that the filtration is generated by the two mutually independents processes:\\
(i) $B:=(B_{t})_{t\geq 0}$ a $d$-dimensional Brownian motion and,\\
(ii) a Poisson random measure $\mu$ on $\mathbb{R}^{+}\times\mathrm{E}$ where $\mathrm{E}:=\mathbb{R}^{\ell}-\{0\}$ is equipped with its Borel field $\mathcal{E}$ $(\ell\geq 1)$. The compensator $\nu(\mathrm{d}t,\mathrm{d}e)=\mathrm{d}t\lambda(\mathrm{d}e)$ is such that $\{\tilde{\mu}(\left[0,t\right]\times A)=(\mu-\lambda)(\left[ 0,t\right]\times A)\}_{t\geq 0}$ is a martingale for all $A\in\mathcal{E}$ satisfying $\lambda(A)<\infty$. We also assume that $\lambda$ is a $\sigma$-finite measure on $(E,\mathcal{E})$, integrates the function $(1\wedge\mid e\mid ^{2})$ and $\lambda(E)=\infty$.\\
Let's now introduce the following spaces:\\
(iii) $\mathcal{P}$ (resp. $\mathbf{P}$) the $\sigma$-field on $\left[0,T\right]\times \Omega$ of $(\mathcal{F}_{t})_{t\leq T}$-progressively measurable (resp. predictable) sets.\\
(iv) For $\kappa\geq 1$, $\mathbb{L}^{2}_{\kappa}(\lambda)$ the space of Borel measurable functions $\varphi:=(\varphi(e))_{e\in E}$ from $E$ into $\mathbb{R}^{\kappa}$ such that
$\|\varphi\|^{2}_{\mathbb{L}^{2}_{\kappa}(\lambda)}=\displaystyle\int_{E}\left|\varphi(e)\right|^{2}_{\kappa}\lambda(\mathrm{d}e)<\infty$; $\mathbb{L}^{2}_{1}(\lambda)$ will be simply denoted by $\mathbb{L}^{2}(\lambda)$;\\
(v) $\mathcal{S}^{2}(\mathbb{R}^{\kappa})$ the space of RCLL (for right continuous with left limits) $\mathcal{P}$-measurable and $\mathbb{R}^{\kappa}$-valued processes such that $\mathbb{E}[\sup_{s\leq T} \left|Y_{s}\right|^{2}]<\infty$;\\
(vi) $\mathbb{H}^{2}(\mathbb{R}^{\kappa\times d})$ the space of processes $Z:=(Z_{s})_{s\leq T}$ which are $\mathcal{P}$-measurable, $\mathbb{R}^{\kappa\times d}$-valued and satisfying $\mathbb{E}\left[\displaystyle\int^{T}_{0}\left|Z_{s}\right|^{2}\, \mathrm{d} s\right]<\infty$;\\
(vii) $\mathbb{H}^{2}(\mathbb{L}^{2}_{\kappa}(\lambda))$ the space of processes $U:=(U_{s})_{s\leq T}$ which are $\mathbf{P}$-measurable, $\mathbb{L}^{2}_{\kappa}(\lambda)$-valued and satisfying $\mathbb{E}\left[\displaystyle\int^{T}_{0}\|U_{s}(\omega)\|^{2}_{\mathbb{L}^{2}_{\kappa}(\lambda)}\, \mathrm{d} s\right]<\infty$;\\
(viii) $\Pi_{g}$ the set of deterministic functions\\ $\varpi:~(t,x)\in [0,T]\times \mathbb{R}^{\kappa}\mapsto\varpi(t,x)\in\mathbb{R}$ of polynomial growth, i.e., for which there exist two non-negative constants $C$ and $p$ such that for any $(t,x)\in [0,T]\times \mathbb{R}^{\kappa}$,
$$\left|\varpi(t,x)\right|\leq C(1+\left|x\right|^{p}).$$
The subspace of $\Pi_{g}$ of continuous functions will be denoted by $\Pi^{c}_{g}$;\\
(ix) $\mathcal{M}$ the class of functions which satisfy the $p$-order Mao condition in $x$, i.e., if $f\in\mathcal{M}$ then there exists a nondecreasing, continuous and concave function $\rho(\cdot):\mathbb{R}^{+}\mapsto\mathbb{R}^{+}$ with $\rho(0)=0$, $\rho(u)>0$ for $u>0$ and $\displaystyle\int_{0^{+}}\frac{du}{\rho(u)}=+\infty$, such that $d\mathbb{P}\times dt-$a.e., $\forall x, x^{'}\in\mathbb{R}^{k}\textrm{ and }\forall p\geq 2$,
$$|f(t,x,y,z,q)-f(t,x^{'},y,z,q)|\leq\rho^{\frac{1}{p}}(|x-x^{'}|^{p});$$
(x) For any process $\theta:=(\theta_{s})_{s\leq T}$ and $t\in(0,T],~\theta_{t-}=\lim_{s\nearrow t}\theta_{s}$ and\\
$$\Delta_{t}\theta=\theta_{t}-\theta_{t-}.$$
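Let us illustrate the class $\mathcal{M}$ introduced in (ix) (the following examples are ours and are given only as an illustration, for the value $p=2$ used in the sequel). Every function which is Lipschitz in $x$ with constant $K$, uniformly in the other variables, satisfies the condition: indeed,
$$|f(t,x,y,z,q)-f(t,x',y,z,q)|\leq K|x-x'|=\left(K^{2}|x-x'|^{2}\right)^{\frac{1}{2}}=\rho^{\frac{1}{2}}\left(|x-x'|^{2}\right)\quad\textrm{with}\quad\rho(u)=K^{2}u,$$
and $\rho$ is concave, nondecreasing, vanishes at $0$ and satisfies $\displaystyle\int_{0^{+}}\frac{du}{\rho(u)}=+\infty$. The class also contains non-Lipschitz functions: one may take for instance $\rho(u)=u\log(1/u)$ near the origin (extended in a concave nondecreasing way away from it), for which
$$\int_{0^{+}}\frac{du}{u\log(1/u)}=+\infty,$$
since an antiderivative of the integrand is $-\log\log(1/u)$.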
Now let $b$ and $\sigma$ be the following functions:
$$b:(t,x)\in [0,T]\times \mathbb{R}^{k}\mapsto b(t,x)\in\mathbb{R}^{k};$$
$$\sigma:(t,x)\in [0,T]\times \mathbb{R}^{k}\mapsto\sigma(t,x)\in\mathbb{R}^{k\times d}.$$
\begin{equation}
\textrm{We assume that } b \textrm{ and } \sigma \textrm{ belong to } \mathcal{M}\label{2.5}
\end{equation}
Let $\beta:(t,x,e)\in [0,T]\times \mathbb{R}^{k}\times E\mapsto \beta(t,x,e)\in\mathbb{R}^{k}$ be a measurable function such that for some real constant $C$, and for all $e\in E$,
\begin{eqnarray}\label{2.7}
& (i) & \left|\beta(t,x,e)\right| \leq C(1\wedge\left|e\right|); \\ \nonumber
& (ii) & \beta \textrm{ belongs to } \mathcal{M} ;\\
& (iii) & \textrm{the mapping}~(t,x)\in[0,T]\times \mathbb{R}^{k}\mapsto \beta(t,x,e)\in\mathbb{R}^{k}~\textrm{is continuous for any}~\mathrm{e}\in\mathrm{E}\nonumber.
\end{eqnarray}
Let $(g^{i})_{i=1,m}$ and $(h^{(i)})_{i=1,m}$ be the functions defined as follows: for $i=1,\ldots,m$,
\begin{eqnarray}
g^{i}:\mathbb{R}^{k} & \longrightarrow & \mathbb{R}\nonumber \\
{}{}x & \longmapsto & g^{i}(x)\nonumber
\end{eqnarray}
and
\begin{eqnarray}
h^{(i)}:[0,T]\times\mathbb{R}^{k+m+d+1} & \longrightarrow & \mathbb{R}\nonumber \\
{}{}(t,x,y,z,q) & \longmapsto & h^{(i)}(t,x,y,z,q).\nonumber
\end{eqnarray}
Moreover we assume they satisfy:\\
(\textbf{H1}): For any $i\in\left\lbrace 1,\ldots,m\right\rbrace$, the function $g^{i}$ belongs to $\mathcal{M}$.\\
\\
(\textbf{H2}): For any $i\in\left\lbrace 1,\ldots,m\right\rbrace$,
\begin{eqnarray}
& (i) &~\textrm{the function}~h^{(i)}~\textrm{is Lipschitz in}~ (y,z,q)~\textrm{uniformly in}~(t,x),~\textrm{i.e., there exists a real constant}\nonumber\\
& {}{} & \textrm{C such that for any}~ (t,x)\in[0,T]\times\mathbb{R}^{k}, (y,z,q)~\textrm{and}~(y',z',q')~\textrm{elements of}~\mathbb{R}^{m+d+1},\nonumber\\
& {}{} &\left|h^{(i)}(t,x,y,z,q)-h^{(i)}(t,x,y',z',q')\right|\leq C(\left|y-y'\right|+\left|z-z'\right|+\left|q-q'\right|);\label{2.11}\\
& (ii) & ~\textrm{the mapping}~(t,x)\mapsto h^{(i)}(t,x,y,z,q), ~\textrm{for fixed}~ (y,z,q)\in\mathbb{R}^{m+d+1},~\textrm{belongs to}~\mathcal{M}~\textrm{uniformly in}~(y,z,q).\nonumber
\end{eqnarray}
Next let $\gamma^{i},~i=1,\ldots,m$ be Borel measurable functions defined from $[0,T]\times\mathbb{R}^{k}\times E$ into $\mathbb{R}$ and satisfying:
\begin{eqnarray}\label{2.12}
& (i) &\left|\gamma^{i}(t,x,e)\right|\leq C(1\wedge\left|e\right|);\nonumber\\
& (ii) & \gamma^{i} \textrm{ belongs to }\mathcal{M};\\
& (iii) & \textrm{the mapping}~t\in[0,T]\mapsto \gamma^{i}(t,x,e)\in\mathbb{R}~ \textrm{is continuous for any}~(x,e)\in\mathbb{R}^{k}\times E.\nonumber
\end{eqnarray}
Finally we introduce the following functions $(f^{(i)})_{i=1,m}$ defined by:
\begin{equation}\label{2.13}
\forall (t,x,y,z,\zeta)\in[0,T]\times\mathbb{R}^{k+m+d}\times \mathbb{L}^{2}(\lambda),~f^{(i)}(t,x,y,z,\zeta):=h^{(i)}\left(t,x,y,z,\displaystyle\int_{E}\gamma^{i}(t,x,e)\zeta(e)\lambda(de)\right).
\end{equation}
The functions $(f^{(i)})_{i=1,m}$, enjoy the two following properties:
\begin{eqnarray}\label{2.15}
& (a) &~\textrm{The function}~f^{(i)}~\textrm{is Lipschitz in}~ (y,z,\zeta)~\textrm{uniformly in}~(t,x),~\textrm{i.e., there exists a real constant}\nonumber\\
& {}{} & \textrm{C such that}\nonumber\\
& {}{} &\left|f^{(i)}(t,x,y,z,\zeta)-f^{(i)}(t,x,y',z',\zeta')\right|\leq C(\left|y-y'\right|+\left|z-z'\right|+\|\zeta-\zeta'\|_{\mathbb{L}^{2}(\lambda)});\\
\nonumber
& {}{} & \textrm{since}~h^{(i)}~\textrm{is uniformly Lipschitz in}~(y,z,q)~\textrm{and}~ \gamma^{i}~\textrm{verifies (\ref{2.12})-(i)};\nonumber\\
& (b) &~\textrm{The function}~(t,x) \in[0,T]\times\mathbb{R}^{k}\mapsto f^{(i)}(t,x,0,0,0)~\textrm{belongs to}~\Pi^{c}_{g};\nonumber\\
\nonumber
& {}{} & \textrm{and then}~ \mathbb{E}\left[\displaystyle\int^{T}_{0}\left|f^{(i)}(r,X^{t,x}_{r},0,0,0)\right|^{2}\,dr \right]<\infty.
\end{eqnarray}
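For the reader's convenience, let us sketch why property (a) holds (this computation is standard and uses only facts recalled above): for fixed $(t,x)$ and $\zeta,\zeta'\in\mathbb{L}^{2}(\lambda)$, the Cauchy-Schwarz inequality and (\ref{2.12})-(i) give
$$\left|\displaystyle\int_{E}\gamma^{i}(t,x,e)(\zeta(e)-\zeta'(e))\lambda(de)\right|\leq\left(\displaystyle\int_{E}|\gamma^{i}(t,x,e)|^{2}\lambda(de)\right)^{\frac{1}{2}}\|\zeta-\zeta'\|_{\mathbb{L}^{2}(\lambda)}\leq C\left(\displaystyle\int_{E}(1\wedge|e|^{2})\lambda(de)\right)^{\frac{1}{2}}\|\zeta-\zeta'\|_{\mathbb{L}^{2}(\lambda)},$$
where the last factor is finite since $\lambda$ integrates $(1\wedge|e|^{2})$; combining this with the Lipschitz property (\ref{2.11}) of $h^{(i)}$ yields (a).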
\section{Some results in backward stochastic differential equation with jumps}
\subsection{A class of diffusion processes with jumps}
Let $(t,x)\in[0,T]\times\mathbb{R}^{k}$ and let $(X^{t,x}_{s})_{s\leq T}$ be the stochastic process solution of (\ref{2.4}).
Under assumptions (\ref{2.5})-(\ref{2.7}), the solution of Equation (\ref{2.4}) exists and is unique (see \cite{fuj} for more details).
We state some properties of the process $\{(X^{t,x}_{s}),~s\in[0,T]\}$ which can be found in \cite{fuj}.
\begin{proposition}\label{pro3.1}
For each $t\geq 0$, there exists a version of $\{(X^{t,x}_{s}),~s\in[t,T]\}$ such that $s\rightarrow X^{t}_{s}:=\left(X^{t,x}_{s}\right)_{x\in\mathbb{R}^{k}}$ is a $C^{2}(\mathbb{R}^{k})$-valued RCLL process. Moreover it satisfies the following estimates: $\forall p\geq 2,~x,x'\in\mathbb{R}^{k}$ and $s\geq t$,
\begin{eqnarray}
\mathbb{E}[\sup_{t\leq r\leq s} \left|X^{t,x}_{r}-x\right|^{p}] & \leq & M_{p}(s-t)(1+\left|x\right|^{p});\nonumber\\
\mathbb{E}[\sup_{t\leq r\leq s} \left|X^{t,x}_{r}-X^{t,x'}_{r}-(x-x')\right|^{p}] & \leq & M_{p}(s-t)(\left|x-x'\right|^{p});\label{3.16}
\end{eqnarray}
for some constant $M_{p}$.
\end{proposition}
\subsection{Existence and uniqueness for BSDE with jumps}
Let $(t,x)\in[0,T]\times\mathbb{R}^{k}$ and consider the following $m$-dimensional BSDE with jumps:
\begin{equation}\label{3.17}
\left
\{\begin{array}{ll}
(i)~\vec{Y}^{t,x}:=(Y^{i,t,x})_{i=1,m}\in\mathcal{S}^{2}(\mathbb{R}^{m}),~Z^{t,x}:=(Z^{i,t,x})_{i=1,m}\in\mathbb{H}^{2}(\mathbb{R}^{m\times d}),~ U^{t,x}:=(U^{i,t,x})_{i=1,m}\in\mathbb{H}^{2}(\mathbb{L}^{2}_{m}(\lambda));\\
(ii)~dY^{i;t,x}_{s}=-f^{(i)}(s,X^{t,x}_{s},Y^{i;t,x}_{s},Z^{i;t,x}_{s},U^{i;t,x}_{s})ds+Z^{i;t,x}_{s}\mathrm{d}
\mathrm{B}_{s}+\displaystyle\int_{\mathrm{E}}\mathrm{U}^{i;t,x}
_{s}(e)\tilde{\mu}(\mathrm{d}s,\mathrm{d}e),\quad s\leq T;\\
(iii)~Y^{i;t,x}_{T}=g^{i}(X^{t,x}_{T});
\end{array}
\right.
\end{equation}
where for any $i\in\{1,\ldots,m\}$, $Y^{i;t,x}_{s}$ is the $i$th component of $Y^{t,x}_{s}$, $Z^{i;t,x}_{s}$ is the $i$th row of $Z^{t,x}_{s}$ and $U^{i;t,x}_{s}$ is the $i$th component of $U^{t,x}_{s}$.\\
\begin{proposition}\label{prop3.2}
Assume that assumptions $(\mathbf{H1})\textrm{ and }(\mathbf{H2})$ hold. Then for any $(t,x)\in[0,T]\times\mathbb{R}^{k}$, the BSDE (\ref{3.17}) has a unique solution $(\vec{Y}^{t,x},Z^{t,x},U^{t,x})$.
\end{proposition}
For the proof of this proposition, see \cite{bar}.
\subsection{Viscosity solutions of integro-differential partial equation}
\begin{proposition}(see, \cite{bar})\label{prop3.3}
Assume that (\textbf{H1}) and (\textbf{H2}) are fulfilled. Then there exist continuous deterministic functions $(u^{i}(t,x))_{i=1,m}$ which belong to $\Pi_{g}$ such that for any $(t,x)\in[0,T]\times\mathbb{R}^{k}$, the solution of the BSDE (\ref{3.17}) verifies:
\begin{equation}\label{3.18}
\forall i\in\{1,\ldots,m\},~\forall s\in[t,T],~Y^{i;t,x}_{s}=u^{i}(s,X^{t,x}_{s}).
\end{equation}
Moreover if for any $i\in\{1,\ldots,m\}$,
\begin{eqnarray*}
& (i) & \gamma^{i}\geq 0;\\
& (ii) & \textrm{for any fixed}~(t,x,\vec{y},z)\in[0,T]\times \mathbb{R}^{k+m+d},~\textrm{the mapping}\\
& {}{} & (q\in\mathbb{R})\longmapsto h^{(i)}(t,x,\vec{y},z,q)\in\mathbb{R}~\textrm{is non-decreasing}.
\end{eqnarray*}
then the function $(u^{i})_{i=1,m}$ is a continuous viscosity solution (in the sense of Barles et al., see Definition \ref{def5.3} in the Appendix) of (\ref{eq1}). Moreover, the solution $(u^{i})_{i=1,m}$ of (\ref{eq1}) is unique in the class $\Pi^{c}_{g}$.
\end{proposition}
\begin{remark}(see, \cite{bar})
Under the assumptions (\textbf{H1}), (\textbf{H2}), there exists a unique viscosity solution of (\ref{eq1}) in the class of functions satisfying
\begin{equation}
\lim\limits_{\left|x\right| \to +\infty}\left|u(t,x)\right| e^{-\tilde{A}[\log(\left|x\right|)]^{2}}=0
\end{equation}
uniformly for $t\in[0,T]$, for some $\tilde{A}>0$.
\end{remark}
\section{Estimates and properties}
In this section, we will establish a priori estimates concerning solutions of BSDE (\ref{3.17}), which will play an important role in the proof of our main results.
\begin{lemma}(see, \cite{hama})\label{lem4.1}
Under assumptions (\textbf{H1}) and (\textbf{H2}), for any $p\geq 2$ there exist two non-negative constants $C$ and $\delta$ such that,
\begin{equation}\label{4.22}
\mathbb{E}\left[\left\lbrace \displaystyle\int^{T}_{0} ds \left(\displaystyle\int_{E}\left|U^{t,x}_{s}(e)\right|^{2}\lambda(de)\right)\right\rbrace^{\frac{p}{2}}\right]=\mathbb{E}\left[\left\lbrace \displaystyle\int^{T}_{0} ds\|U^{t,x}_{s}\|^{2}_{\mathbb{L}^{2}_{m}(\lambda)}\right\rbrace^{\frac{p}{2}}\right]\leq C\left(1+\left|x\right|^{\delta}\right).
\end{equation}
\end{lemma}
\begin{proposition}\label{prop4.2}
For any $i=1,\ldots,m$, there exist $C\geq 0$, $\kappa>0$ such that, $\forall$ $x$ and $x'$ elements of $\mathbb{R}^{k}$ $$|u^{i}(t,x)-u^{i}(t,x')|^{2}\leq \rho\left(M_{2}\left|x-x'\right|^{2}(1+\left|x-x'\right|^{2})\right)\left[C(1+\left|x\right|^{\kappa})\right]$$
\end{proposition}
\begin{proof}
Let $x$ and $x'$ be elements of $\mathbb{R}^{k}$. Let $(\vec{Y}^{t,x},Z^{t,x},U^{t,x})$ $(\textrm{resp. }(\vec{Y}^{t,x'},Z^{t,x'},U^{t,x'}))$ be the solution of the BSDE with jumps (\ref{3.17}) associated with ($f(s,X^{t,x}_{s},y,\eta,\zeta),g(X^{t,x}_{T})$)\\
$(\textrm{resp. }f(s,X^{t,x'}_{s},y,\eta,\zeta),g(X^{t,x'}_{T}))$. Applying It\^o's formula to $\left|\vec{Y}^{t,x}-\vec{Y}^{t,x'}\right|^{2}$ between $s$ and $T$, we have
\newpage
\begin{eqnarray}\label{4.31}
& {}{} &\left|\vec{Y}^{t,x}_{s}-\vec{Y}^{t,x'}_{s}\right|^{2}+\displaystyle\int^{T}_{s}\left|\Delta Z_{r}\right|^{2}\,dr+\sum_{s\leq r\leq T}\left|\Delta_{r}(\vec{Y}^{t,x}-\vec{Y}^{t,x'})\right|^{2}\\
& {}{} &=\left|g(X^{t,x}_{T})-g(X^{t,x'}_{T})\right|^{2}+2\displaystyle\int^{T}_{s}
<\left(\vec{Y}^{t,x}_{r}-\vec{Y}^{t,x'}_{r}\right),\Delta f(r)>\,dr\nonumber\\
& {}{} &\quad\quad-2\displaystyle\int^{T}_{s}
\displaystyle\int_{\mathrm{E}}\left(\vec{Y}^{t,x}_{r}
-\vec{Y}^{t,x'}_{r}\right)\left(\Delta U_{r}(e)\right)\tilde{\mu}(\mathrm{d}r,\mathrm{d}e)-2\displaystyle\int^{T}_{s} \left(\vec{Y}^{t,x}_{r}-\vec{Y}^{t,x'}_{r}\right)\left(\Delta Z_{r}\right)\,dB_{r};\nonumber
\end{eqnarray}
and taking expectation we obtain: $\forall s\in[t,T]$,
\begin{eqnarray}\label{4.33}
& {}{} &\mathbb{E}\left[\left|\vec{Y}^{t,x}_{s}-\vec{Y}^{t,x'}_{s}\right|^{2}+\displaystyle\int^{T}_{s}\left|\Delta Z_{r}\right|^{2}\,dr+\displaystyle\int^{T}_{s}\|\Delta U_{r}\|^{2}_{\mathbb{L}^{2}(\lambda)}\,dr\right]\\
& {}{} &\leq\mathbb{E}\left[\left|g(X^{t,x}_{T})-g(X^{t,x'}_{T})\right|^{2}+2\displaystyle\int^{T}_{s}
<\left(\vec{Y}^{t,x}_{r}-\vec{Y}^{t,x'}_{r}\right),\Delta f(r)>\,dr\right]\nonumber
\end{eqnarray}
where the processes $\Delta X_{r}$, $\Delta Y_{r}$, $\Delta f(r)$, $\Delta Z_{r}$ and $\Delta U_{r}$ are defined as follows: $\forall r\in[t,T]$,\\
$\Delta f(r):=(\Delta f^{(i)}(r))_{i=1,m}=(f^{(i)}(r,X^{t,x}_{r},\vec{Y}^{t,x}_{r},Z^{i;t,x}_{r},U^{i;t,x}_{r})-f^{(i)}(r,X^{t,x'}_{r},\vec{Y}^{t,x'}_{r},Z^{i;t,x'}_{r},U^{i;t,x'}_{r}))_{i=1,m}$,
$\Delta X_{r}=X^{t,x}_{r}-X^{t,x'}_{r}$, $\Delta Y(r)=\vec{Y}^{t,x}_{r}-\vec{Y}^{t,x'}_{r}=(Y^{j;t,x}_{r}-Y^{j;t,x'}_{r})_{j=1,m}$,\\
$\Delta Z_{r}=Z^{t,x}_{r}-Z^{t,x'}_{r}$ and $\Delta U_{r}=U^{t,x}_{r}-U^{t,x'}_{r}$
($<\cdot,\cdot>$ is the usual scalar product on $\mathbb{R}^{m}$).
Now we will give an estimation of each three terms of the second member of inequality (\ref{4.33}).\\
$\bullet$ As for any $i\in\{1,\ldots,m\}$ $g^{i}$ belongs to $\mathcal{M}~(\textrm{ for }p=2)$; therefore\\
\begin{eqnarray}
\mathbb{E}\left[\left|g(X^{t,x}_{T})-g(X^{t,x'}_{T})\right|^{2}\right] & \leq & \mathbb{E}\left[\rho\left(\left|X^{t,x}_{T}-X^{t,x'}_{T}\right|^{2}\right)\right]\nonumber\\
& \leq & \rho\left(\mathbb{E}\left[\left|X^{t,x}_{T}-X^{t,x'}_{T}\right|^{2}\right]\right)\left(\textrm{by Jensen's inequality}\right)\nonumber\\
& \leq & \rho\left(\mathbb{E}\left[\left|X^{t,x}_{T}-X^{t,x'}_{T}-(x-x')+(x-x')\right|^{2}\right]\right)\nonumber
\end{eqnarray}
and then, using the triangle inequality, the relation (\ref{3.16}) of Proposition \ref{pro3.1} and the fact that
$(a+b)^{p}\leq 2^{p-1}(a^{p}+b^{p})$, we obtain
\begin{equation}\label{4.34}
\mathbb{E}\left[\left|g(X^{t,x}_{T})-g(X^{t,x'}_{T})\right|^{2}\right]\leq \rho\left(M_{2}\left|x-x'\right|^{2}(1+\left|x-x'\right|^{2})\right),
\end{equation}
$\bullet$ To complete our estimation of (\ref{4.33}) we need to deal with $\mathbb{E}\left[2\displaystyle\int^{T}_{s}
<\left(\vec{Y}^{t,x}_{r}-\vec{Y}^{t,x'}_{r}\right),\Delta f(r)>\,dr\right].$
Taking into account the expression of $f^{(i)}$ given by (\ref{2.13}) we then split $\Delta f(r)$ in the following way: for $r\leq T$,
$$\Delta f(r)=(\Delta f(r))_{i=1,m}=\Delta_{1}(r)+\Delta_{2}(r)+\Delta_{3}(r)+\Delta_{4}(r)=(\Delta^{i}_{1}(r)+\Delta^{i}_{2}(r)+\Delta^{i}_{3}(r)+\Delta^{i}_{4}(r))_{i=1,m},$$
\newpage
where for any $i=1,\ldots,m$,
\begin{eqnarray*}
\Delta^{i}_{1}(r) & = & h^{(i)}\left(r,X^{t,x}_{r},\vec{Y}^{t,x}_{r},Z^{i;t,x}_{r},\displaystyle\int_{E}\gamma^{i}(r,X^{t,x}_{r},e)U^{i;t,x}_{r}(e)\lambda(de)\right)\\
&{}{}& -h^{(i)}\left(r,X^{t,x'}_{r},\vec{Y}^{t,x}_{r},Z^{i;t,x}_{r},\displaystyle\int_{E}\gamma^{i}(r,X^{t,x}_{r},e)U^{i;t,x}_{r}(e)\lambda(de)\right);\\
\Delta^{i}_{2}(r) & = & h^{(i)}\left(r,X^{t,x'}_{r},\vec{Y}^{t,x}_{r},Z^{i;t,x}_{r},\displaystyle\int_{E}\gamma^{i}(r,X^{t,x}_{r},e)U^{i;t,x}_{r}(e)\lambda(de)\right)\\
&{}{}& -h^{(i)}\left(r,X^{t,x'}_{r},\vec{Y}^{t,x'}_{r},Z^{i;t,x}_{r},\displaystyle\int_{E}\gamma^{i}(r,X^{t,x}_{r},e)U^{i;t,x}_{r}(e)\lambda(de)\right);\\
\Delta^{i}_{3}(r) & = & h^{(i)}\left(r,X^{t,x'}_{r},\vec{Y}^{t,x'}_{r},Z^{i;t,x}_{r},\displaystyle\int_{E}\gamma^{i}(r,X^{t,x}_{r},e)U^{i;t,x}_{r}(e)\lambda(de)\right)\\
&{}{}& -h^{(i)}\left(r,X^{t,x'}_{r},\vec{Y}^{t,x'}_{r},Z^{i;t,x'}_{r},\displaystyle\int_{E}\gamma^{i}(r,X^{t,x}_{r},e)U^{i;t,x}_{r}(e)\lambda(de)\right);\\
\Delta^{i}_{4}(r) & = & h^{(i)}\left(r,X^{t,x'}_{r},\vec{Y}^{t,x'}_{r},Z^{i;t,x'}_{r},\displaystyle\int_{E}\gamma^{i}(r,X^{t,x}_{r},e)U^{i;t,x}_{r}(e)\lambda(de)\right)\\
&{}{}& -h^{(i)}\left(r,X^{t,x'}_{r},\vec{Y}^{t,x'}_{r},Z^{i;t,x'}_{r},\displaystyle\int_{E}\gamma^{i}(r,X^{t,x'}_{r},e)U^{i;t,x'}_{r}(e)\lambda(de)\right).
\end{eqnarray*}
By the Cauchy-Schwarz inequality, the inequality $2ab\leq\epsilon a^{2}+\frac{1}{\epsilon}b^{2}$, (\textbf{H2})-(ii), the estimate (\ref{3.16}) and Jensen's inequality we have:
\begin{eqnarray}\label{4.36}
\mathbb{E}\left[2\displaystyle\int^{T}_{s}
\scriptstyle<\Delta Y(r),\Delta_{1}(r)>\,dr\right] & \leq & \mathbb{E}\left[\frac{1}{\epsilon}\int^{T}_{s}\scriptstyle{|\Delta Y(r)|^{2}\,dr+\epsilon}\displaystyle\int^{T}_{s}\rho\left(|X^{t,x}_{r}-X^{t,x'}_{r}|^{2}\right)\,dr\right]\nonumber\\
& \leq & \mathbb{E}\left[\frac{1}{\epsilon}\int^{T}_{s}|\Delta Y(r)|^{2}\,dr\right]+\epsilon\rho\left(M_{2}\left|x-x'\right|^{2}(1+\left|x-x'\right|^{2})\right).
\end{eqnarray}
Besides since $h^{(i)}$ is Lipschitz w.r.t. $(y,z,q)$ then,
\begin{equation}\label{4.37}
\mathbb{E}\left[2\displaystyle\int^{T}_{s}<\Delta Y(r),\Delta_{2}(r)>\,dr\right]\leq 2C\mathbb{E}\left[\int^{T}_{s}|\Delta Y(r)|^{2}\,dr\right],
\end{equation}
and
\begin{equation}\label{4.38}
\mathbb{E}\left[2\displaystyle\int^{T}_{s}<\Delta Y(r),\Delta_{3}(r)>\,dr\right]\leq\mathbb{E}\left[\frac{1}{\epsilon}\int^{T}_{s}|\Delta Y(r)|^{2}\,dr+C^{2}\epsilon\int^{T}_{s}|\Delta Z(r)|^{2}\,dr\right].
\end{equation}
It remains to obtain a control of the last term. But for any $s\in[t,T]$ we have,
\begin{eqnarray}\label{4.39}
& {}{}& \mathbb{E}\left[2\displaystyle\int^{T}_{s}<\Delta Y(r),\Delta_{4}(r)>\,dr\right]\\
& \leq & 2C\mathbb{E}\left[\int^{T}_{s}|\Delta Y(r)|\,dr\times \left|\int_{E}\left(\gamma(r,X^{t,x}_{r},e)U^{t,x}_{r}(e)-\gamma(r,X^{t,x'}_{r},e)U^{t,x'}_{r}(e)\right)\,\lambda(de)\right|\right]\nonumber.
\end{eqnarray}
Next, by splitting the cross terms as follows
$\gamma(r,X^{t,x}_{r},e)U^{t,x}_{r}(e)-\gamma(r,X^{t,x'}_{r},e)U^{t,x'}_{r}(e)=\Delta U_{r}(e)\gamma(r,X^{t,x}_{r},e)+U^{t,x'}_{r}(e)\left(\gamma(r,X^{t,x}_{r},e)-\gamma(r,X^{t,x'}_{r},e)\right)$\\
and setting $\Delta \gamma_{r}(e):=\gamma(r,X^{t,x}_{r},e)-\gamma(r,X^{t,x'}_{r},e)$,\\
we obtain,
\begin{eqnarray}\label{4.40}
\mathbb{E}\left[2\displaystyle\int^{T}_{s}<\Delta Y(r),\Delta_{4}(r)>\,dr\right]& \leq & 2C\mathbb{E}\left[\int^{T}_{s}|\scriptstyle\Delta Y(r)|\times\left(\displaystyle\int_{E}\scriptstyle(|U^{t,x'}_{r}(e)\Delta\gamma_{r}(e)|+|\Delta U_{r}(e)\gamma(r,X^{t,x}_{r},e)|)\,\lambda(de)\right)\,dr\right]\nonumber\\
& \leq & \frac{2}{\epsilon}\mathbb{E}\left[\int^{T}_{s}|\Delta Y(r)|^{2}\,dr\right]+C^{2}\epsilon\mathbb{E}\left[\int^{T}_{s}\left(\int_{E}|U^{t,x'}_{r}(e)\Delta\gamma_{r}(e)|\lambda(de)\right)^{2}\,dr\right]\nonumber\\
& {}{} &+C^{2}\epsilon\mathbb{E}\left[\int^{T}_{s}\left(\int_{E}|\Delta U_{r}(e)\gamma(r,X^{t,x}_{r},e)|\lambda(de)\right)^{2}\,dr\right].
\end{eqnarray}
By the Cauchy-Schwarz inequality, (\ref{2.12})-(ii), Jensen's inequality, (\ref{3.16}) and the result of Lemma \ref{lem4.1}, it holds:
\begin{eqnarray}\label{4.41}
\mathbb{E}\left[\int^{T}_{s}\left(\int_{E}|U^{t,x'}_{r}(e)\Delta\gamma_{r}(e)|\lambda(de)\right)^{2}\,dr\right] & \leq & \mathbb{E}\left[\int^{T}_{s}\,dr\left(\int_{E}|U^{t,x'}_{r}(e)|^{2}\lambda(de)\right)\left(\int_{E}|\Delta\gamma_{r}(e)|^{2}\lambda(de)\right)\right]\nonumber\\
&\leq & \rho\left(M_{2}\left|x-x'\right|^{2}(1+\left|x-x'\right|^{2})\right)\times \mathbb{E}\left[\int^{T}_{s}\,dr\left(\int_{E}|U^{t,x'}_{r}(e)|^{2}\lambda(de)\right)\right]\nonumber\\
& \leq & C\rho\left(M_{2}\left|x-x'\right|^{2}(1+\left|x-x'\right|^{2})\right)(1+\left|x\right|^{\kappa}).
\end{eqnarray}
On the other hand using once more Cauchy-Schwartz inequality and (\ref{2.12})-(i) we get
\begin{eqnarray}\label{4.42}
\mathbb{E}\left[\int^{T}_{s}\left(\int_{E}\scriptstyle|\Delta U_{r}(e)\gamma(r,X^{t,x}_{r},e)|\lambda(de)\right)^{2}\,dr\right] & \leq & \mathbb{E}\left[\int^{T}_{s}\,dr\left(\int_{E}\scriptstyle|\Delta U_{r}(e)|^{2}\lambda(de)\right)\left(\int_{E}|\gamma(r,X^{t,x}_{r},e)|^{2}\lambda(de)\right)\right]\nonumber\\
& \leq & C\mathbb{E}\left[\int^{T}_{s}\,dr\left(\int_{E}|\Delta U_{r}(e)|^{2}\lambda(de)\right)\right].
\end{eqnarray}
From (\ref{4.36}) to (\ref{4.42}) it follows that:
\begin{eqnarray*}\label{4.43}
& {}{} &\mathbb{E}\left[\left|\vec{Y}^{t,x}_{s}-\vec{Y}^{t,x'}_{s}\right|^{2}+\displaystyle\int^{T}_{s}\left|\Delta Z_{r}\right|^{2}\,dr+\displaystyle\int^{T}_{s}\|\Delta U_{r}\|^{2}_{\mathbb{L}^{2}(\lambda)}\,dr\right]\\
& {}{} &\leq\mathbb{E}\left[\left|g(X^{t,x}_{T})-g(X^{t,x'}_{T})\right|^{2}+2\displaystyle\int^{T}_{s}
<\left(\vec{Y}^{t,x}_{r}-\vec{Y}^{t,x'}_{r}\right),\Delta f(r)>\,dr\right]\\
& \leq & \rho\left(M_{2}\left|x-x'\right|^{2}(1+\left|x-x'\right|^{2})\right)\left[C(1+\left|x\right|^{\kappa})+1+\epsilon+\epsilon C^{3}\right]+\left(\frac{4}{\epsilon}+2C\right)\mathbb{E}\left[\int^{T}_{s}|\Delta Y(r)|^{2}\,dr\right]\\
&{}{}&+C^{2}\epsilon\mathbb{E}\left[\int^{T}_{s}|\Delta Z(r)|^{2}\,dr\right]+C^{3}\epsilon\mathbb{E}\left[\int^{T}_{s}\,dr\left(\int_{E}|\Delta U_{r}(e)|^{2}\lambda(de)\right)\right].
\end{eqnarray*}
By choosing $\epsilon$ small enough so that $\epsilon(C^{2}+C^{3})<1$, we deduce the existence of a constant $C'\geq 0$ such that for any $s\in[t,T]$,\\
$\mathbb{E}\left[|\Delta Y(s)|^{2}\right]\leq \rho\left(M_{2}\left|x-x'\right|^{2}(1+\left|x-x'\right|^{2})\right)\left[C'(1+\left|x\right|^{\kappa})\right]+C'\mathbb{E}\left[\displaystyle\int^{T}_{s}|\Delta Y(r)|^{2}\,dr\right]$\\
and by Gronwall lemma this implies that for any $s\in[t,T]$,\\
$$\mathbb{E}\left[|\Delta Y(s)|^{2}\right]\leq \rho\left(M_{2}\left|x-x'\right|^{2}(1+\left|x-x'\right|^{2})\right)\left[C(1+\left|x\right|^{\kappa})\right].$$
Finally in taking $s=t$ and considering (\ref{3.18}) we obtain the desired result.\\
\\
\end{proof}
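For the reader's convenience, we recall the backward form of Gronwall's lemma used in the last step of the proof above (a standard statement): if $\varphi:[t,T]\rightarrow\mathbb{R}^{+}$ is bounded, measurable and satisfies, for some constants $a,b\geq 0$,
$$\varphi(s)\leq a+b\displaystyle\int^{T}_{s}\varphi(r)\,dr,\quad s\in[t,T],$$
then $\varphi(s)\leq a\,e^{b(T-s)}$ for every $s\in[t,T]$; the exponential factor is absorbed into the constant.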
We now turn to a point where the definition of viscosity solutions used in \cite{bar} differs from the one used in \cite{hama}.\\
It should also be noted that this part contains our second contribution, the first one being Proposition \ref{prop4.2}.
It relies mainly on the use of the class $\mathcal{M}$ and of the Bihari inequality, as in \cite{fan}.
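We also recall the Bihari inequality in the form we shall rely on (a classical statement, see e.g. \cite{fan}): let $\rho:\mathbb{R}^{+}\rightarrow\mathbb{R}^{+}$ be continuous and nondecreasing with $\rho(0)=0$, $\rho(u)>0$ for $u>0$ and $\displaystyle\int_{0^{+}}\frac{du}{\rho(u)}=+\infty$. If $\varphi:[0,T]\rightarrow\mathbb{R}^{+}$ is bounded, measurable and satisfies
$$\varphi(t)\leq\displaystyle\int^{t}_{0}\rho(\varphi(s))\,ds,\quad t\in[0,T],$$
then $\varphi\equiv 0$ on $[0,T]$.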
\begin{proposition}\label{pro4.3}
For any $i=1,\ldots,m$, $(t,x)\in[0,T]\times\mathbb{R}^{k}$,
\begin{equation}
U^{i;t,x}_{s}(e)=u^{i}(s,X^{t,x}_{s-}+\beta(s,X^{t,x}_{s-},e))-u^{i}(s,X^{t,x}_{s-}),~~d\mathbb{P}\otimes ds\otimes d\lambda-\text{a.e. on}~\Omega\times[t,T]\times E.
\end{equation}
\end{proposition}
\begin{proof}
\textbf{Step 1: Truncation of the L\'evy measure}\\
For any $k\geq 1$, let us first introduce a new Poisson random measure $\mu_{k}$ (obtained from the truncation of $\mu$) and its associated compensator $\nu_{k}$ as follows:
$$\mu_{k}(ds,de)=1_{\{|e|\geq\frac{1}{k}\}}\mu(ds,de)~~\text{and }\nu_{k}(ds,de)=1_{\{|e|\geq\frac{1}{k}\}}\nu(ds,de).$$
This means that, as usual, $\tilde{\mu}_{k}(ds,de):=(\mu_{k}-\nu_{k})(ds,de)$ is the associated martingale random measure.\\
The main point to notice is that
\begin{eqnarray}
\lambda_{k}(E)=\displaystyle\int_{E}\,\lambda_{k}(de)& = &\displaystyle\int_{E}1_{\{|e|\geq\frac{1}{k}\}}\,\lambda(de)\nonumber\\
{}{}&=&\displaystyle\int_{\{|e|\geq\frac{1}{k}\}}\,\lambda(de)\nonumber\\
{}{}&=&\lambda(\{|e|\geq\frac{1}{k}\})<\infty.
\end{eqnarray}
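This finiteness can be checked in one line (a verification which we spell out for completeness): for $k\geq 1$ and $|e|\geq\frac{1}{k}$ one has $1\wedge|e|^{2}\geq\frac{1}{k^{2}}$, hence
$$\lambda\left(\left\{|e|\geq\tfrac{1}{k}\right\}\right)\leq k^{2}\displaystyle\int_{E}(1\wedge|e|^{2})\,\lambda(de)<\infty,$$
since $\lambda$ integrates the function $(1\wedge|e|^{2})$.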
As in \cite{hama}, let us introduce the process $^{k}X^{t,x}$ solving the following standard SDE of jump-diffusion type:
\begin{eqnarray}
& {}{} & ^{k}X^{t,x}_{s}=x+\displaystyle\int^{s}_{t}b(r,^{k}X^{t,x}_{r})\,dr+\displaystyle\int^{s}_{t}\sigma(r,^{k}X^{t,x}_{r})\,dB_{r}\nonumber\\
& {}{} &\qquad\qquad+\displaystyle\int^{s}_{t}\displaystyle\int_{\mathrm{E}}\beta(r,^{k}X^{t,x}_{r-},e)\tilde{\mu}_{k}\,(dr,de),~~~t\leq s\leq T;~^{k}X^{t,x}_{r}=x~\text{if }s\leq t.\nonumber\\
\end{eqnarray}
Note that thanks to the assumptions on $b$, $\sigma$, $\beta$ the process $^{k}X^{t,x}$ exists and is unique. Moreover it satisfies the same estimates as in (\ref{3.16}) since $\lambda_{k}$ is just a truncation at the origin of $\lambda$ which integrates $(1\wedge|e|^{2})_{e\in E}$.\\
On the other hand let us consider the following Markovian BSDE with jumps
\begin{equation}\label{4.46}
\left
\{\begin{array}{ll}
(i)~\mathbb{E}\left[\sup_{s\leq T}\left|^{k}Y^{t,x}_{s}\right|^{2}+\displaystyle\int^{T}_{s}\left|^{k}Z^{t,x}_{r}\right|^{2}\,dr+\displaystyle\int^{T}_{s}\|^{k}U^{t,x}_{r}\|^{2}_{\mathbb{L}^{2}(\lambda_{k})}\,dr\right]<\infty\\
(ii)~^{k}{Y}^{t,x}:=(^{k}Y^{i,t,x})_{i=1,m}\in\mathcal{S}^{2}(\mathbb{R}^{m}),~^{k}Z^{t,x}:=(^{k}Z^{i,t,x})_{i=1,m}\in\mathbb{H}^{2}(\mathbb{R}^{m\times d}),\\
^{k}U^{t,x}:=(^{k}U^{i,t,x})_{i=1,m}\in\mathbb{H}^{2}(\mathbb{L}^{2}_{m}(\lambda_{k}));\\
(iii)~^{k}Y^{t,x}_{s}=g(^{k}X^{t,x}_{T})+\displaystyle\int^{T}_{s}f_{\mu_{k}}(r,^{k}X^{t,x}_{r},^{k}Y^{t,x}_{r},^{k}Z^{t,x}_{r},^{k}U^{t,x}_{r})\,dr\\
\quad\quad\quad\quad\quad\quad\quad\quad-\displaystyle\int^{T}_{s}\left\lbrace ^{k}Z^{t,x}_{r}\,dB_{r}+\displaystyle\int_{\mathrm{E}}\{^{k}U^{t,x}_{r}(e)\}\tilde{\mu}_{k}(dr,de)\right\rbrace ,\quad s\leq T;\\
(iv)~^{k}Y^{t,x}_{T}=g(^{k}X^{t,x}_{T}).
\end{array}
\right.
\end{equation}
Finally let us introduce the functions $f_{\mu_{k}}:=(f^{(i)}_{\mu_{k}})_{i=1,m}$ defined by:
$\forall (t,x,y,z,\zeta)\in[0,T]\times\mathbb{R}^{k}\times\mathbb{R}^{m}\times\mathbb{R}^{m\times d}\times\mathbb{L}^{2}_{m}(\lambda_{k}),\\f_{\mu_{k}}(t,x,y,z,\zeta)=(f^{(i)}_{\mu_{k}}(t,x,y,z_{i},\zeta_{i}))_{i=1,m}:=\left( h^{(i)}\left(t,x,y,z_{i},\displaystyle\int_{E}\gamma^{i}(t,x,e)\zeta_{i}(e)\lambda_{k}(de)\right)\right)_{i=1,m}$.\\
First let us emphasize that this latter BSDE is related to the filtration $(\mathcal{F}^{k}_{s})_{s\leq T}$ generated by the Brownian motion and the independent random measure $\mu_{k}$. However this point does not raise major issues, since for any $s\leq T$, $\mathcal{F}^{k}_{s}\subset \mathcal{F}_{s}$, and thanks to the relationship between $\mu$ and $\mu_{k}$.\\
Next, by the properties of the functions $b$, $\sigma$, $\beta$ and by the same arguments as in Propositions \ref{prop3.2} and \ref{prop3.3}, there exists a unique triple $(^{k}Y^{t,x},^{k}Z^{t,x},^{k}U^{t,x})$ solving (\ref{4.46}), and there also exists a function $u^{k}$ from $[0,T]\times \mathbb{R}^{k}$ into $\mathbb{R}^{m}$ of $\Pi^{c}_{g}$ such that
\begin{equation}\label{4.47}
\forall s\in[t,T],~~^{k}Y^{t,x}_{s}=u^{k}(s,^{k}X^{t,x}_{s}),~\mathbb{P}-a.s.
\end{equation}
Moreover, as in Proposition \ref{prop4.2}, there exist positive constants $C$ and $\kappa$ which do not depend on $k$ such that:
\begin{equation}\label{4.48}
\forall t,x,x',~~|u^{k}(t,x)-u^{k}(t,x')|\leq \rho\left(M_{2}\left|x-x'\right|^{2}(1+\left|x-x'\right|^{2})\right)\left[C(1+\left|x\right|^{\kappa})\right].
\end{equation}
Finally, as $\lambda_{k}$ is finite, we have the following relationship between the process $^{k}U^{t,x}:=(^{k}U^{i;t,x})_{i=1,m}$ and the deterministic functions $u^{k}:=(u^{k}_{i})_{i=1,m}$ (see \cite{hamaMor}): $\forall i=1,\ldots,m$,
$$^{k}U^{i;t,x}_{s}(e)=u^{k}_{i}(s,^{k}X^{t,x}_{s-}+\beta(s,^{k}X^{t,x}_{s-},e))-u^{k}_{i}(s,^{k}X^{t,x}_{s-}),~~d\mathbb{P}\otimes ds\otimes d\lambda_{k}-a.e.~\text{on }\Omega\times[t,T]\times E.$$
This is mainly due to the fact that $^{k}U^{t,x}$ belongs to $\mathbb{L}^{1}\cap\mathbb{L}^{2}(ds\otimes d\mathbb{P}\otimes d\lambda_{k})$ since $\lambda_{k}(E)<\infty$, so that we can split the stochastic integral w.r.t. $\tilde{\mu}_{k}$ in (\ref{4.46}). Therefore, for all $i=1,\ldots,m$,
\begin{equation}\label{4.49}
^{k}U^{i;t,x}_{s}(e)1_{\{|e|\geq \frac{1}{k}\}}=(u^{k}_{i}(s,^{k}X^{t,x}_{s-}+\beta(s,^{k}X^{t,x}_{s-},e))-u^{k}_{i}(s,^{k}X^{t,x}_{s-}))1_{\{|e|\geq \frac{1}{k}\}},~~d\mathbb{P}\otimes ds\otimes d\lambda_{k}-a.e.~\text{on }\Omega\times[t,T]\times E.
\end{equation}
\end{proof}
\textbf{Step 2: Convergence of the auxiliary processes}\\
Let us now prove the following convergence result:
\begin{equation}\label{4.51}
\mathbb{E}\left[\sup_{s\leq T}\left|Y^{t,x}_{s}-^{k}Y^{t,x}_{s}\right|^{2}+\displaystyle\int^{T}_{0}\left|Z^{t,x}_{s}-^{k}Z^{t,x}_{s}\right|^{2}\,ds+\displaystyle\int^{T}_{0}\,ds\displaystyle\int_{E}\lambda(de)\left|U^{t,x}_{s}(e)-^{k}U^{t,x}_{s}(e)1_{\{|e|\geq \frac{1}{k}\}}\right|^{2}\right]\substack{\longrightarrow\\ k\longrightarrow+\infty}0;
\end{equation}
where $(Y^{t,x},Z^{t,x},U^{t,x})$ is solution of the BSDE with jumps (\ref{3.17}).\\
It should be noted that, as in the technique used in Proposition \ref{prop4.2}, this convergence (\ref{4.51}) requires the following convergence:
\begin{equation}\label{4.50}
\mathbb{E}\left[\sup_{s\leq T}\left|X^{t,x}_{s}-^{k}X^{t,x}_{s}\right|^{2}\right]\substack{\longrightarrow\\ k\longrightarrow+\infty}0.
\end{equation}
\begin{proof}[Proof of (\ref{4.50})]
\begin{eqnarray}
& {}{} & X^{t,x}_{s}-^{k}X^{t,x}_{s}=\displaystyle\int^{s}_{0}(b(r,X^{t,x}_{r})-b(r,^{k}X^{t,x}_{r}))\,dr+\displaystyle\int^{s}_{0}(\sigma(r,X^{t,x}_{r})-\sigma(r,^{k}X^{t,x}_{r}))\,dB_{r}\nonumber\\
& {}{} &\qquad\qquad+\displaystyle\int^{s}_{0}\displaystyle\int_{\mathrm{E}}(\beta(r,X^{t,x}_{r-},e)-\beta(r,^{k}X^{t,x}_{r-},e)1_{\{|e|\geq \frac{1}{k}\}})\tilde{\mu}\,(dr,de).\nonumber\\
\end{eqnarray}
Next let $\eta\in[0,T]$. Since $|a+b+c|^{2}\leq 3(|a|^{2}+|b|^{2}+|c|^{2})$ for any real numbers $a$, $b$ and $c$, and by the Cauchy-Schwarz and Burkholder-Davis-Gundy inequalities, we have:
\newpage
\begin{eqnarray}\label{4.36n}
& {}{} & \mathbb{E}\left[\sup_{0\leq s\leq \eta}\left|X^{t,x}_{s}-^{k}X^{t,x}_{s}\right|^{2}\right]\nonumber\\
{} & \leq & 3\mathbb{E}\left[\sup_{0\leq s\leq \eta}\left|\displaystyle\int^{s}_{0}(b(r,X^{t,x}_{r})-b(r,^{k}X^{t,x}_{r}))\,dr\right|^{2}+\sup_{0\leq s\leq \eta}\left|\displaystyle\int^{s}_{0}(\sigma(r,X^{t,x}_{r})-\sigma(r,^{k}X^{t,x}_{r}))\,dB_{r}\right|^{2}
\right.\nonumber\\
& {}{} &\left.
\qquad\qquad+\sup_{0\leq s\leq \eta}\left|\displaystyle\int^{s}_{0}\displaystyle\int_{\mathrm{E}}(\beta(r,X^{t,x}_{r-},e)-\beta(r,^{k}X^{t,x}_{r-},e)1_{\{|e|\geq \frac{1}{k}\}})\tilde{\mu}\,(dr,de)\right|^{2}\right]\nonumber\\
{} & \leq & C\mathbb{E}\left[\displaystyle\int^{\eta}_{0}\sup_{0\leq \tau\leq r}\left\lbrace \left|b(r,X^{t,x}_{\tau})-b(r,^{k}X^{t,x}_{\tau})\right|^{2}+\left|\sigma(r,X^{t,x}_{\tau})-\sigma(r,^{k}X^{t,x}_{\tau})\right|^{2}\right\rbrace\,dr\right]\nonumber\\
&{}{}&+C\mathbb{E}\left[\displaystyle\int^{\eta}_{0}\displaystyle\int_{\mathrm{E}}\sup_{0\leq \tau\leq r}\left|\beta(r,X^{t,x}_{\tau-},e)-\beta(r,^{k}X^{t,x}_{\tau-},e)\right|^{2}\,\lambda_{k}(de)dr
\right.\nonumber\\
& {}{} &\left.
+\displaystyle\int^{\eta}_{0}\displaystyle\int_{\mathrm{E}}\sup_{0\leq \tau\leq r}\left|\beta(r,X^{t,x}_{\tau-},e)\right|^{2}1_{\{|e|<\frac{1}{k}\}}\,\lambda(de)dr\right]\nonumber\\
\end{eqnarray}
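For the reader's convenience, let us recall the form of the Burkholder-Davis-Gundy inequality used above (stated here for the exponent $2$, where it also follows from Doob's inequality): if $(M_{s})_{s\leq T}$ is a square-integrable martingale with $M_{0}=0$, then
$$\mathbb{E}\left[\sup_{0\leq s\leq \eta}\left|M_{s}\right|^{2}\right]\leq C\,\mathbb{E}\left[[M]_{\eta}\right],\qquad \eta\in[0,T].$$
It is applied to the stochastic integrals w.r.t. $B$ and w.r.t. $\tilde{\mu}$, whose brackets are, respectively, $\displaystyle\int^{\eta}_{0}\left|\sigma(r,X^{t,x}_{r})-\sigma(r,^{k}X^{t,x}_{r})\right|^{2}\,dr$ and the corresponding $\lambda$-integral of the squared jump coefficient.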
Since $b$, $\sigma$ and $\beta$ belong to $\mathcal{M}$, we have, for all $r\in[0,T]$:
\begin{equation}
\sup_{0\leq \tau\leq r}\left\lbrace \left|b(r,X^{t,x}_{\tau})-b(r,^{k}X^{t,x}_{\tau})\right|^{2}+\left|\sigma(r,X^{t,x}_{\tau})-\sigma(r,^{k}X^{t,x}_{\tau})\right|^{2}\right\rbrace \leq C\rho\left(\sup_{0\leq \tau\leq r}\left|X^{t,x}_{\tau}-^{k}X^{t,x}_{\tau}\right|^{2}\right)
\end{equation}
and
\begin{equation}
\displaystyle\int_{\mathrm{E}}\sup_{0\leq \tau\leq r}\left|\beta(r,X^{t,x}_{\tau-},e)-\beta(r,^{k}X^{t,x}_{\tau-},e)\right|^{2}\,\lambda_{k}(de)\leq C\rho\left(\sup_{0\leq \tau\leq r}\left|X^{t,x}_{\tau}-^{k}X^{t,x}_{\tau}\right|^{2}\right).
\end{equation}
Plugging these last two inequalities into the previous one, we obtain: $\forall \eta\in[0,T]$,
\begin{eqnarray*}
\mathbb{E}\left[\sup_{0\leq s\leq \eta}\left|X^{t,x}_{s}-^{k}X^{t,x}_{s}\right|^{2}\right] &\leq & C\mathbb{E}\left[\displaystyle\int^{\eta}_{0}\rho\left(\sup_{0\leq \tau\leq r}\left|X^{t,x}_{\tau}-^{k}X^{t,x}_{\tau}\right|^{2}\right)\,dr\right]+C\displaystyle\int_{\{|e|<\frac{1}{k}\}}(1\wedge|e|^{2})\,\lambda(de)\\
&\leq & C\displaystyle\int^{\eta}_{0}\rho\left(\mathbb{E}\left[\sup_{0\leq \tau\leq r}\left|X^{t,x}_{\tau}-^{k}X^{t,x}_{\tau}\right|^{2}\right]\right)\,dr+C\displaystyle\int_{\{|e|<\frac{1}{k}\}}(1\wedge|e|^{2})\,\lambda(de)~~(\text{by Jensen and the concavity of }\rho).
\end{eqnarray*}
By Bihari's inequality (see \cite{fan} page 171 and \cite{pa}) and the fact that $\displaystyle\int_{\{|e|<\frac{1}{k}\}}(1\wedge|e|^{2})\,\lambda(de)\substack{\longrightarrow\\ k\longrightarrow+\infty}0$, we obtain the desired result (\ref{4.50}).
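Let us also recall the version of Bihari's inequality invoked here: if $\varphi:[0,T]\rightarrow\mathbb{R}_{+}$ satisfies
$$\varphi(\eta)\leq a+C\displaystyle\int^{\eta}_{0}\rho(\varphi(r))\,dr,\qquad \eta\in[0,T],$$
where $\rho$ is nondecreasing, concave, $\rho(0)=0$ and $\displaystyle\int_{0^{+}}\frac{du}{\rho(u)}=+\infty$, then $\sup_{\eta\leq T}\varphi(\eta)\rightarrow 0$ as $a\rightarrow 0$. It is applied with $\varphi(\eta)=\mathbb{E}\left[\sup_{0\leq s\leq \eta}\left|X^{t,x}_{s}-^{k}X^{t,x}_{s}\right|^{2}\right]$ and $a=C\displaystyle\int_{\{|e|<\frac{1}{k}\}}(1\wedge|e|^{2})\,\lambda(de)$.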
\end{proof}
We now focus on (\ref{4.51}). Note that we can apply It\^o's formula even though the BSDEs are related to different filtrations and Poisson random measures, since:\\
(i) $\mathcal{F}^{k}_{s}\subset\mathcal{F}_{s}$, $\forall s\leq T$;\\
(ii) for any $s\leq T$, $\displaystyle\int^{s}_{0}\displaystyle\int_{\mathrm{E}}^{k}U^{i;t,x}(e)\tilde{\mu}_{k}\,(dr,de)=\displaystyle\int^{s}_{0}\displaystyle\int_{\mathrm{E}}^{k}U^{i;t,x}(e)1_{\{|e|\geq\frac{1}{k}\}}\tilde{\mu}\,(dr,de)$ and then the first $(\mathcal{F}^{k}_{s})_{s\leq T}-$martingale is also an $(\mathcal{F}_{s})_{s\leq T}-$martingale.
It\^o's formula applied to the difference of the two BSDEs then yields: $\forall s\in[0,T]$,
\begin{eqnarray}
& {}{} &\left|\vec{Y}^{t,x}_{s}-^{k}Y^{t,x}_{s}\right|^{2}+\displaystyle\int^{T}_{s}\left|Z^{t,x}_{r}-^{k}Z^{t,x}_{r}\right|^{2}\,dr+\sum_{s\leq r\leq T}(^{k}\Delta_{r}\vec{Y}^{t,x}_{r})^{2}\nonumber\\
& {}{} &=\left|g(X^{t,x}_{T})-g(^{k}X^{t,x}_{T})\right|^{2}+2\displaystyle\int^{T}_{s}
\left(\vec{Y}^{t,x}_{r}-^{k}Y^{t,x}_{r}\right)\times ^{k}\Delta f(r)\,dr\nonumber \\
& {}{} &-2\displaystyle\int^{T}_{s}\displaystyle\int_{\mathrm{E}}\left(\vec{Y}^{t,x}_{r}
-^{k}Y^{t,x}_{r}\right)\left(^{k}\Delta U_{r}(e)\right)\tilde{\mu}(\mathrm{d}r,\mathrm{d}e)-2\displaystyle\int^{T}_{s} \left(\vec{Y}^{t,x}_{r}-^{k}\vec{Y}^{t,x}_{r}\right)\left(^{k}\Delta Z_{r}\right)\,dB_{r};\nonumber
\end{eqnarray}
and taking expectation we obtain: $\forall s\in[t,T]$,
\begin{eqnarray}\label{4.53}
& {}{} &\mathbb{E}\left[\left|\vec{Y}^{t,x}_{s}-^{k}Y^{t,x}_{s}\right|^{2}+\displaystyle\int^{T}_{s}\left\lbrace\left|Z^{t,x}_{r}-^{k}Z^{t,x}_{r}\right|^{2}+\displaystyle\int_{E}\left|U^{t,x}_{r}(e)-^{k}U^{t,x}_{r}(e)1_{\{|e|\geq\frac{1}{k}\}}\right|^{2}\,\lambda(de)\right\rbrace\,dr\right]\nonumber\\
& {}{} &\leq\mathbb{E}\left[\left|g(X^{t,x}_{T})-g(^{k}X^{t,x}_{T})\right|^{2}+2\displaystyle\int^{T}_{s}
\left(\vec{Y}^{t,x}_{r}-^{k}Y^{t,x}_{r}\right)\times ^{k}\Delta f(r)\,dr\right];\nonumber\\
\end{eqnarray}
where the processes $^{k}\Delta X_{r}$, $^{k}\Delta Y_{r}$, $^{k}\Delta f(r)$, $^{k}\Delta Z_{r}$ and $^{k}\Delta U_{r}$ are defined as follows: $\forall r\in[0,T]$,\\
$^{k}\Delta f(r):=(^{k}\Delta f^{(i)}(r))_{i=1,m}=(f^{(i)}(r,X^{t,x}_{r},\vec{Y}^{t,x}_{r},Z^{i;t,x}_{r},U^{i;t,x}_{r})-f^{(i)}_{\mu_{k}}(r,^{k}X^{t,x}_{r},^{k}Y^{t,x}_{r},^{k}Z^{t,x}_{r},^{k}U^{t,x}_{r}))_{i=1,m}$,
$^{k}\Delta X_{r}=X^{t,x}_{r}-^{k}X^{t,x}_{r}$, $^{k}\Delta Y_{r}=\vec{Y}^{t,x}_{r}-^{k}Y^{t,x}_{r}=(Y^{j;t,x}_{r}-^{k}Y^{j;t,x}_{r})_{j=1,m}$, $^{k}\Delta Z_{r}=Z^{t,x}_{r}-^{k}Z^{t,x}_{r}$ and $^{k}\Delta U_{r}(e)=U^{t,x}_{r}(e)-^{k}U^{t,x}_{r}(e)1_{\{|e|\geq\frac{1}{k}\}}$.\\
Next let us write, for $r\leq T$,
$$^{k}\Delta f(r)=f(r,X^{t,x}_{r},\vec{Y}^{t,x}_{r},Z^{t,x}_{r},U^{t,x}_{r})-f_{\mu_{k}}(r,^{k}X^{t,x}_{r},^{k}Y^{t,x}_{r},^{k}Z^{t,x}_{r},^{k}U^{t,x}_{r})=A(r)+B(r)+C(r)+D(r);$$
where for any $i=1,\ldots,m$,
\begin{eqnarray*}
A(r) & = & \left(h^{(i)}\left(r,X^{t,x}_{r},\vec{Y}^{t,x}_{r},Z^{i;t,x}_{r},\displaystyle\int_{E}\gamma^{i}(r,X^{t,x}_{r},e)U^{i;t,x}_{r}(e)\lambda(de)\right)
\right.\\
&{}{}&\left.
-h^{(i)}\left(r,^{k}X^{t,x}_{r},\vec{Y}^{t,x}_{r},Z^{i;t,x}_{r},\displaystyle\int_{E}\gamma^{i}(r,X^{t,x}_{r},e)U^{i;t,x}_{r}(e)\lambda(de)\right)\right)_{i=1,m};\\
B(r) & = & \left(h^{(i)}\left(r,^{k}X^{t,x}_{r},\vec{Y}^{t,x}_{r},Z^{i;t,x}_{r},\displaystyle\int_{E}\gamma^{i}(r,X^{t,x}_{r},e)U^{i;t,x}_{r}(e)\lambda(de)\right)
\right.\\
&{}{}&\left.
-h^{(i)}\left(r,^{k}X^{t,x}_{r},^{k}Y^{t,x}_{r},Z^{i;t,x}_{r},\displaystyle\int_{E}\gamma^{i}(r,X^{t,x}_{r},e)U^{i;t,x}_{r}(e)\lambda(de)\right)\right)_{i=1,m};\\
C(r) & = & \left(h^{(i)}\left(r,^{k}X^{t,x}_{r},^{k}Y^{t,x}_{r},Z^{i;t,x}_{r},\displaystyle\int_{E}\gamma^{i}(r,X^{t,x}_{r},e)U^{i;t,x}_{r}(e)\lambda(de)\right)
\right.\\
&{}{}&\left.
-h^{(i)}\left(r,^{k}X^{t,x}_{r},^{k}Y^{t,x}_{r},^{k}Z^{i;t,x}_{r},\displaystyle\int_{E}\gamma^{i}(r,X^{t,x}_{r},e)U^{i;t,x}_{r}(e)\lambda(de)\right)\right)_{i=1,m};\\
D(r) & = & \left( h^{(i)}\left(r,^{k}X^{t,x}_{r},^{k}Y^{t,x}_{r},^{k}Z^{i;t,x}_{r},\displaystyle\int_{E}\gamma^{i}(r,X^{t,x}_{r},e)U^{i;t,x}_{r}(e)\lambda(de)\right)
\right.\\
&{}{}&\left.
-h^{(i)}\left(r,^{k}X^{t,x}_{r},^{k}Y^{t,x}_{r},^{k}Z^{i;t,x}_{r},\displaystyle\int_{E}\gamma^{i}(r,^{k}X^{t,x}_{r},e)^{k}U^{i;t,x}_{r}(e)\lambda_{k}(de)\right)\right)_{i=1,m}.
\end{eqnarray*}
Since $g$ belongs to $\mathcal{M}$, by (\ref{4.50}) we have
\begin{equation}\label{4.54}
\mathbb{E}\left[\left|g(X^{t,x}_{T})-g(^{k}X^{t,x}_{T})\right|^{2}\right]\substack{\displaystyle\longrightarrow 0\\ k\rightarrow+\infty}
\end{equation}
We now turn to the term $\mathbb{E}\left[\displaystyle\int^{T}_{s}
\left(\vec{Y}^{t,x}_{r}-^{k}Y^{t,x}_{r}\right)\times ^{k}\Delta f(r)\,dr\right]$ in order to establish (\ref{4.51}).\\
By (\ref{2.11}), we have: $\forall r\in[0,T]$
\begin{eqnarray}\label{4.56}
\left|A(r)\right|^{2} &\leq & \rho(\left|X^{t,x}_{r}-^{k}X^{t,x}_{r}\right|^{2});\nonumber\\
\left|B(r)\right|+\left|C(r)\right| &\leq & C\{\left|\vec{Y}^{t,x}_{r}-^{k}Y^{t,x}_{r}\right|+\left|Z^{t,x}_{r}-^{k}Z^{t,x}_{r}\right|\}.\nonumber\\
\end{eqnarray}
Now let us deal with $D(r)$ which is more involved. First note that $D(r)=(D_{i}(r))_{i=1,m}$ where
\begin{eqnarray*}
D_{i}(r)& = & h^{(i)}\left(r,^{k}X^{t,x}_{r},^{k}Y^{t,x}_{r},^{k}Z^{i;t,x}_{r},\displaystyle\int_{E}\gamma^{i}(r,X^{t,x}_{r},e)U^{i;t,x}_{r}(e)\lambda(de)\right)\\
&{}{}&-h^{(i)}\left(r,^{k}X^{t,x}_{r},^{k}Y^{t,x}_{r},^{k}Z^{i;t,x}_{r},\displaystyle\int_{E}\gamma^{i}(r,^{k}X^{t,x}_{r},e)^{k}U^{i;t,x}_{r}(e)\lambda_{k}(de)\right).
\end{eqnarray*}
But as $h^{(i)}$ is Lipschitz w.r.t. its last component $q$, by relation (\ref{2.12}) we obtain:
\begin{eqnarray}\label{4.57}
\left|D(r)\right|^{2} & \leq & C\left\lbrace \displaystyle\int_{E}\left|\gamma^{i}(r,X^{t,x}_{r},e)U^{i;t,x}_{r}(e)-\gamma^{i}(r,^{k}X^{t,x}_{r},e)^{k}U^{i;t,x}_{r}(e)1_{\{|e|\geq\frac{1}{k}\}}\right|\lambda(de)\right\rbrace^{2}\nonumber\\
{}&\leq & C\left\lbrace \left\lbrace \displaystyle\int_{E}\left|\gamma^{i}(r,X^{t,x}_{r},e)-\gamma^{i}(r,^{k}X^{t,x}_{r},e)\right|\left|U^{i;t,x}_{r}(e)\right|\,\lambda(de)\right\rbrace^{2}
\right.\nonumber\\
&{}{}&\left.
+\left\lbrace \displaystyle\int_{E}\left|\gamma^{i}(r,X^{t,x}_{r},e)\right|\left| U^{i;t,x}_{r}(e)-^{k}U^{i;t,x}_{r}(e)1_{\{|e|\geq\frac{1}{k}\}}\right|\lambda(de)\right\rbrace^{2}\right\rbrace\nonumber\\
{}&\leq & C\rho(\left|X^{t,x}_{r}-^{k}X^{t,x}_{r}\right|^{2})\left\lbrace\displaystyle\int_{E}\left|U^{i;t,x}_{r}(e)\right|^{2}\lambda(de)\right\rbrace\nonumber\\
&{}{}&+C\displaystyle\int_{E}(1\wedge|e|)^{2}\lambda(de)\times\displaystyle\int_{E}\left| U^{i;t,x}_{r}(e)-^{k}U^{i;t,x}_{r}(e)1_{\{|e|\geq\frac{1}{k}\}}\right|^{2}\lambda(de),
\end{eqnarray}
Using the bounds obtained in (\ref{4.56}) and (\ref{4.57}) together with the Cauchy-Schwarz inequality, we get:
\begin{eqnarray}\label{4.58}
& {}{} &\mathbb{E}\left[\left|\vec{Y}^{t,x}_{s}-^{k}Y^{t,x}_{s}\right|^{2}+\displaystyle\int^{T}_{s}\left\lbrace\left|Z^{t,x}_{r}-^{k}Z^{t,x}_{r}\right|^{2}+\displaystyle\int_{E}\left|U^{t,x}_{r}(e)-^{k}U^{t,x}_{r}(e)1_{\{|e|\geq\frac{1}{k}\}}\right|^{2}\,\lambda(de)\right\rbrace\,dr\right]\nonumber\\
& {}{} &\leq\mathbb{E}\left[\left|g(X^{t,x}_{T})-g(^{k}X^{t,x}_{T})\right|^{2}\right]+C_{\epsilon}\mathbb{E}\left[\displaystyle\int^{T}_{s}
\left|\vec{Y}^{t,x}_{r}-^{k}Y^{t,x}_{r}\right|^{2}\,dr\right]+\displaystyle\int^{T}_{s}\rho(\mathbb{E}\left[\left|X^{t,x}_{r}-^{k}X^{t,x}_{r}\right|^{2}\right])\,dr\nonumber\\
& {}{} &+C\sqrt{\mathbb{E}\left[\sup_{s\leq r\leq T}\rho^{2}(\left|X^{t,x}_{r}-^{k}X^{t,x}_{r}\right|^{2})\right]}\times\sqrt{\mathbb{E}\left[\left(\displaystyle\int^{T}_{s}\displaystyle\int_{E}\left|U^{i;t,x}_{r}(e)\right|^{2}\lambda(de)\,dr\right)^{2}\right]}\nonumber\\
& {}{} &+C\epsilon\mathbb{E}\left[\displaystyle\int^{T}_{t}\displaystyle\int_{E}\left|U^{t,x}_{s}-^{k}U^{t,x}_{s}1_{\{|e|\geq\frac{1}{k}\}}\right|^{2}\,\lambda(de)\,ds\right]
\end{eqnarray}
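Note that the constants $C_{\epsilon}$ and $C\epsilon$ above come from splitting the cross term by Young's inequality: for any $\epsilon>0$ and any real numbers $a$, $b$,
$$2ab\leq \frac{1}{\epsilon}\,a^{2}+\epsilon\, b^{2},$$
applied with $a=\left|\vec{Y}^{t,x}_{r}-^{k}Y^{t,x}_{r}\right|$ and $b$ the part of $\left|^{k}\Delta f(r)\right|$ controlled by (\ref{4.56}) and (\ref{4.57}).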
Choosing $\epsilon$ such that $C\epsilon<1$ and applying Gronwall's lemma, together with the dominated convergence theorem, the continuity of $g$ and $\rho$, and Lemma \ref{lem4.1}, we obtain
\begin{equation}\label{4.59}
\mathbb{E}\left[\left|\vec{Y}^{t,x}_{s}-^{k}Y^{t,x}_{s}\right|^{2}\right]\substack{\displaystyle\longrightarrow 0\\ k\rightarrow+\infty}
\end{equation}
and taking $s=t$ we obtain $u^{k}(t,x)\substack{\displaystyle\longrightarrow u(t,x)\\ k\rightarrow+\infty}$. As $(t,x)\in[0,T]\times \mathbb{R}^{k}$ is arbitrary, $u^{k}\substack{\displaystyle\longrightarrow u\\ k\rightarrow+\infty}$ pointwise.\\
Using the same arguments as for (\ref{4.59}), combined once more with Lebesgue's dominated convergence theorem, we obtain
\begin{equation}\label{4.60}
\mathbb{E}\left[\displaystyle\int^{T}_{t}\displaystyle\int_{E}\left|U^{t,x}_{s}-^{k}U^{t,x}_{s}1_{\{|e|\geq\frac{1}{k}\}}\right|^{2}\,\lambda(de)\,ds\right]\substack{\displaystyle\longrightarrow 0\\ k\rightarrow+\infty}.
\end{equation}
\textbf{Step 3: Conclusion}\\
First note that, by (\ref{4.48}) and the pointwise convergence of $(u^{k})_{k}$ to $u$, if $(x_{k})_{k}$ is a sequence in $\mathbb{R}^{k}$ which converges to $x$, then $(u^{k}(t,x_{k}))_{k}$ converges to $u(t,x)$.\\
Now let us consider a subsequence, which we still denote by $\{k\}$, such that $\sup_{s\leq T}\left|X^{t,x}_{s}-^{k}X^{t,x}_{s}\right|^{2}\substack{\displaystyle\longrightarrow 0\\ k\rightarrow+\infty}$, $\mathbb{P}$-a.s. (and then $\left|X^{t,x}_{s-}-^{k}X^{t,x}_{s-}\right|\substack{\displaystyle\longrightarrow 0\\ k\rightarrow+\infty}$ since $\left|X^{t,x}_{s-}-^{k}X^{t,x}_{s-}\right|^{2}\leq \sup_{s\leq T}\left|X^{t,x}_{s}-^{k}X^{t,x}_{s}\right|^{2}$). By (\ref{4.50}), such a subsequence exists. As the mapping $x\mapsto \beta(t,x,e)$ is Lipschitz, the sequence
\begin{eqnarray}\label{4.61}
&{}{}&\left(^{k}U^{i;t,x}_{s}(e)1_{\{|e|\geq\frac{1}{k}\}}\right)_{k}=\left((u^{k}_{i}(s,^{k}X^{t,x}_{s-}+\beta(s,^{k}X^{t,x}_{s-},e))-u^{k}_{i}(s,^{k}X^{t,x}_{s-}))1_{\{|e|\geq\frac{1}{k}\}}\right)_{k\geq 1}\substack{\displaystyle\longrightarrow {}\\ k\rightarrow+\infty}\nonumber\\
&{}{}& (u_{i}(s,X^{t,x}_{s-}+\beta(s,X^{t,x}_{s-},e))-u_{i}(s,X^{t,x}_{s-})),\quad d\mathbb{P}\otimes ds\otimes d\lambda-a.e.\quad \text{on }\Omega\times[t,T]\times E\quad
\end{eqnarray}
for any $i=1,\ldots,m$. Finally from (\ref{4.60}) we deduce that
\begin{equation}\label{4.62}
U^{i;t,x}_{s}(e)=(u_{i}(s,X^{t,x}_{s-}+\beta(s,X^{t,x}_{s-},e))-u_{i}(s,X^{t,x}_{s-})),\quad d\mathbb{P}\otimes ds\otimes d\lambda-a.e.~\text{on }\Omega\times[t,T]\times E,
\end{equation}
which is the desired result.
\section{The main result}
Unlike Barles et al. \cite{bar}, our result on viscosity solutions is established with the following definition.
\begin{definition}\label{def5.1}
We say that a family of deterministic functions $u=(u^{i})_{i=1,m}$ such that $u^{i}$ belongs to $\mathcal{M}$ for all $i\in\{1,\ldots,m\}$ is a viscosity sub-solution (resp. super-solution) of the IPDE (\ref{eq1}) if:\\
$(i)\quad \forall x\in\mathbb{R}^{k}$, $u^{i}(T,x)\leq g^{i}(x)$ (resp. $u^{i}(T,x)\geq g^{i}(x)$);\\
$(ii)\quad\text{For any } (t,x)\in[0,T]\times\mathbb{R}^{k}$ and any function $\phi$ of class $C^{1,2}([0,T]\times\mathbb{R}^{k})$ such that $(t,x)$ is a global maximum point of $u^{i}-\phi$ (resp. global minimum point of $u^{i}-\phi$) and $(u^{i}-\phi)(t,x)=0$ one has
\begin{equation}\label{5.63}
-\partial_{t}\phi(t,x)-\mathcal{L}^{X}\phi(t,x)-h^{i}(t,x,(u^{j}(t,x))_{j=1,m},\sigma^{\top}(t,x)D_{x}\phi(t,x),B_{i}u^{i}(t,x))\leq 0
\end{equation}
$\left(resp.
\right.$
\begin{equation}\label{5.64}
\left.
-\partial_{t}\phi(t,x)-\mathcal{L}^{X}\phi(t,x)-h^{i}(t,x,(u^{j}(t,x))_{j=1,m},\sigma^{\top}(t,x)D_{x}\phi(t,x),B_{i}u^{i}(t,x))\geq 0\right).
\end{equation}
The family $u=(u^{i})_{i=1,m}$ is a viscosity solution of (\ref{eq1}) if it is both a viscosity sub-solution and viscosity super-solution.\\
Note that $\mathcal{L}^{X}\phi(t,x)=b(t,x)^{\top}\mathrm{D}_{x}\phi(t,x)+\frac{1}{2}\mathrm{Tr}(\sigma\sigma^{\top}(t,x)\mathrm{D}^{2}_{xx}\phi(t,x))+\mathrm{K}\phi(t,x)$;\\
where $\mathrm{K}\phi(t,x)=\displaystyle\int_{\mathrm{E}}(\phi(t,x+\beta(t,x,e))-\phi(t,x)-\beta(t,x,e)^{\top}\mathrm{D}_{x}\phi(t,x))\lambda(de).$
\end{definition}
\begin{theorem}
Under assumptions (\textbf{H1}) and (\textbf{H2}), the IPDE (\ref{eq1}) has a unique solution, which is the $m$-tuple of functions $(u^{i})_{i=1,m}$ defined in Proposition \ref{prop3.3} by (\ref{3.18}).
\end{theorem}
\newpage
\begin{proof}
\emph{\underline{Step $1$}:} \emph{Existence}\\
Assume that assumptions (\textbf{H1}) and (\textbf{H2}) are fulfilled; then the following multi-dimensional BSDE with jumps
\begin{equation}\label{5.65}
\left
\{\begin{array}{ll}
(i)~\underline{\vec{Y}}^{t,x}:=(\underline{Y}^{i;t,x})_{i=1,m}\in\mathcal{S}^{2}(\mathbb{R}^{m}),~\underline{Z}^{t,x}:=(\underline{Z}^{i;t,x})_{i=1,m}\in\mathbb{H}^{2}(\mathbb{R}^{m\times d}),~\underline{U}^{t,x}:=(\underline{U}^{i;t,x})_{i=1,m}\in\mathbb{H}^{2}(\mathbb{L}^{2}_{m}(\lambda));\\
(ii)~\underline{Y}^{i;t,x}_{s}= g^{i}(X^{t,x}_{T})-\displaystyle\int^{T}_{s}\underline{Z}^{i;t,x}_{r}\,\mathrm{d}\mathrm{B}_{r}-\displaystyle\int^{T}_{s}
\displaystyle\int_{\mathrm{E}}\underline{U}^{i;t,x}_{r}(e)\tilde{\mu}(\mathrm{d}r,\mathrm{d}e)\\
\quad\quad\quad+\displaystyle\int^{T}_{s}h^{(i)}(r,X^{t,x}_{r},\underline{Y}^{i;t,x}_{r},\underline{Z}^{i;t,x}_{r},\displaystyle\int_{\mathrm{E}}\gamma^{i}(r,X^{t,x}_{r},e)\{u^{i}(r,X^{t,x}_{r-}+\beta(r,X^{t,x}_{r-},e))-u^{i}(r,X^{t,x}_{r-})\}\,\lambda(de))dr;\\
(iii)~\underline{Y}^{i;t,x}_{T}= g^{i}(X^{t,x}_{T}).
\end{array}
\right.
\end{equation}
has a unique solution $(\underline{Y},\underline{Z},\underline{U})$.
Next, as $u^{i}$ belongs to $\mathcal{M}$ for any $i=1,\ldots,m$, by Proposition \ref{prop3.3} and (\ref{3.18}) there exists a family of deterministic continuous functions of polynomial growth $(\underline{u}^{i})_{i=1,m}$ such that for any $(t,x)\in[0,T]\times\mathbb{R}^{k}$,
$$\forall s\in[t,T],\qquad \underline{Y}^{i;t,x}_{s}=\underline{u}^{i}(s,X^{t,x}_{s}).$$
Moreover, by the same proposition, the family $(\underline{u}^{i})_{i=1,m}$ is a viscosity solution of the following system:
\begin{equation}\label{5.66}
\left
\{\begin{array}{ll}
-\partial_{t}\underline{u}^{i}(t,x)-b(t,x)^{\top}\mathrm{D}_{x}\underline{u}^{i}(t,x)-\frac{1}{2}\mathrm{Tr}(\sigma\sigma^{\top}(t,x)\mathrm{D}^{2}_{xx}\underline{u}^{i}(t,x))\\
\quad\quad-\mathrm{K}_{i}\underline{u}^{i}(t,x)-\mathit{h}^{(i)}(t,x,(\underline{u}^{j}(t,x))_{j=1,m},(\sigma^{\top}\mathrm{D}_{x}\underline{u}^{i})(t,x),\mathrm{B}_{i}u^{i}(t,x))=0,\quad (t,x)\in\left[ 0,T\right] \times\mathbb{R}^{k};\\
\underline{u}^{i}(T,x)=g^{i}(x).
\end{array}
\right.
\end{equation}
Now that the family $(\underline{u}^{i})_{i=1,m}$ is known to be a viscosity solution, our main objective is to find the relation between $(\underline{u}^{i})_{i=1,m}$ and $(u^{i})_{i=1,m}$ defined in (\ref{3.18}).\\
For this, let us consider the following system of BSDEs with jumps:
\begin{equation}\label{5.67}
\left
\{\begin{array}{ll}
(i)~\vec{Y}^{t,x}:=(Y^{i;t,x})_{i=1,m}\in\mathcal{S}^{2}(\mathbb{R}^{m}),~Z^{t,x}:=(Z^{i;t,x})_{i=1,m}\in\mathbb{H}^{2}(\mathbb{R}^{m\times d}),~\\U^{t,x}:=(U^{i;t,x})_{i=1,m}\in\mathbb{H}^{2}(\mathbb{L}^{2}_{m}(\lambda));\\
(ii)~Y^{i;t,x}_{s}= g^{i}(X^{t,x}_{T})-\displaystyle\int^{T}_{s}Z^{i;t,x}_{r}\,\mathrm{d}
\mathrm{B}_{r}-\displaystyle\int^{T}_{s}
\displaystyle\int_{\mathrm{E}}U^{i;t,x}_{r}(e)\tilde{\mu}(\mathrm{d}r,\mathrm{d}e)\\
\quad\quad\quad+\displaystyle\int^{T}_{s}h^{(i)}(r,X^{t,x}_{r},Y^{i;t,x}_{r},Z^{i;t,x}_{r},\displaystyle\int_{\mathrm{E}}\gamma^{i}(r,X^{t,x}_{r},e)U^{i;t,x}_{r}(e)\,\lambda(de))dr;\\
(iii)~Y^{i;t,x}_{T}=g^{i}(X^{t,x}_{T}).
\end{array}
\right.
\end{equation}
By uniqueness of the solution of the BSDE with jumps (\ref{5.65}), we have, for any $s\in[t,T]$ and all $i\in\{1,\ldots,m\}$, $\underline{Y}^{i;t,x}_{s}=Y^{i;t,x}_{s}$.\\
Therefore $\underline{u}^{i}=u^{i}$, and by (\ref{4.62}) we obtain $U^{t,x}_{s}(e)=(u_{i}(s,X^{t,x}_{s-}+\beta(s,X^{t,x}_{s-},e))-u_{i}(s,X^{t,x}_{s-}))$ on $\Omega\times[t,T]\times E$, which gives the viscosity solution in the sense of Definition \ref{def5.1} (see \cite{hama}) by plugging (\ref{4.61}) into $h^{(i)}$ in (\ref{5.66}).\\
\emph{\underline{Step $2$}:} \emph{Uniqueness}\\
For uniqueness, let $(\overline{u}^{i})_{i=1,m}$ be another family in $\mathcal{M}$ which is a viscosity solution of the system (\ref{eq1}) in the sense of Definition \ref{def5.1}, and let us consider the BSDE with jumps defined with $\overline{u}^{i}$:
\begin{equation}\label{5.68}
\left
\{\begin{array}{ll}
(i)~\vec{\overline{Y}}^{t,x}:=(\overline{Y}^{i;t,x})_{i=1,m}\in\mathcal{S}^{2}(\mathbb{R}^{m}),~\overline{Z}^{t,x}:=(\overline{Z}^{i;t,x})_{i=1,m}\in\mathbb{H}^{2}(\mathbb{R}^{m\times d}),~\overline{U}^{t,x}:=(\overline{U}^{i;t,x})_{i=1,m}\in\mathbb{H}^{2}(\mathbb{L}^{2}_{m}(\lambda));\\
(ii)~\overline{Y}^{i;t,x}_{s}= g^{i}(X^{t,x}_{T})-\displaystyle\int^{T}_{s}\overline{Z}^{i;t,x}_{r}\,\mathrm{d}\mathrm{B}_{r}-\displaystyle\int^{T}_{s}\displaystyle\int_{\mathrm{E}}\overline{U}^{i;t,x}_{r}(e)\tilde{\mu}(\mathrm{d}r,\mathrm{d}e)\\
\quad\quad\quad+\displaystyle\int^{T}_{s}h^{(i)}(r,X^{t,x}_{r},\overline{Y}^{i;t,x}_{r},\overline{Z}^{i;t,x}_{r},\displaystyle\int_{\mathrm{E}}\gamma^{i}(r,X^{t,x}_{r},e)(\overline{u}_{i}(r,X^{t,x}_{r-}+\beta(r,X^{t,x}_{r-},e))-\overline{u}_{i}(r,X^{t,x}_{r-}))\,\lambda(de))dr;\\
(iii)~\overline{Y}^{i;t,x}_{T}=g^{i}(X^{t,x}_{T}).
\end{array}
\right.
\end{equation}
By the Feynman-Kac formula, $\overline{u}^{i}(s,X^{t,x}_{s})=Y^{i;t,x}_{s}$, where $Y^{i;t,x}_{s}$ satisfies the BSDE with jumps (\ref{eq2}) associated with the IPDE (\ref{eq1}).\\
Since (\textbf{H1}) and (\textbf{H2}) are assumed to hold, the BSDE with jumps (\ref{5.68}) has a unique solution. By Proposition \ref{prop3.3} and (\ref{3.18}), there exists a family of deterministic continuous functions of polynomial growth $(v^{i})_{i=1,m}$ such that for any $(t,x)\in[0,T]\times\mathbb{R}^{k}$,
$$\forall s\in[t,T],\qquad \overline{Y}^{i;t,x}_{s}=v^{i}(s,X^{t,x}_{s}).$$
Moreover, by the same proposition, the family $(v^{i})_{i=1,m}$ is a viscosity solution of the following system:
\begin{equation}\label{5.69}
\left
\{\begin{array}{ll}
-\partial_{t}v^{i}(t,x)-b(t,x)^{\top}\mathrm{D}_{x}v^{i}(t,x)-\frac{1}{2}\mathrm{Tr}(\sigma\sigma^{\top}(t,x)\mathrm{D}^{2}_{xx}v^{i}(t,x))\\
\quad\quad-\mathrm{K}_{i}v^{i}(t,x)-\mathit{h}^{(i)}(t,x,(v^{j}(t,x))_{j=1,m},(\sigma^{\top}\mathrm{D}_{x}v^{i})(t,x),\mathrm{B}_{i}\overline{u}^{i}(t,x))=0,\quad (t,x)\in\left[ 0,T\right] \times\mathbb{R}^{k};\\
v^{i}(T,x)=g^{i}(x).
\end{array}
\right.
\end{equation}
By uniqueness of the solution of (\ref{5.68}), $\overline{u}^{i}$ is a viscosity solution of (\ref{5.69}); and by Proposition \ref{prop3.3}, $v^{i}=\overline{u}^{i}$ for all $i\in\{1,\ldots,m\}$.\\
To complete the proof, we now show that, $ds\otimes d\mathbb{P}\otimes d\lambda-\text{a.e.}$ on $\Omega\times[t,T]\times E$ and for all $i\in\{1,\ldots, m\}$,
\begin{eqnarray}\label{5.71}
\overline{U}^{i;t,x}_{s}(e) & = & (v^{i}(s,X^{t,x}_{s-}+\beta(s,X^{t,x}_{s-},e))-v^{i}(s,X^{t,x}_{s-}))\nonumber\\
{} & = & (\overline{u}_{i}(s,X^{t,x}_{s-}+\beta(s,X^{t,x}_{s-},e))-\overline{u}_{i}(s,X^{t,x}_{s-})).
\end{eqnarray}
Following Remark $3.4$ in \cite{hama}, let us consider a sequence $(x_{k})_{k\geq 1}$ of $\mathbb{R}^{k}$ which converges to $x\in\mathbb{R}^{k}$, and the two following BSDEs with jumps (adaptation is w.r.t. $\mathcal{F}^{k}$):
\begin{equation}\label{5.72}
\left
\{\begin{array}{ll}
(i)~\vec{\overline{Y}}^{k,t,x}:=(\overline{Y}^{i;k,t,x})_{i=1,m}\in\mathcal{S}^{2}(\mathbb{R}^{m}),~\overline{Z}^{k,t,x}:=(\overline{Z}^{i;k,t,x})_{i=1,m}\in\mathbb{H}^{2}(\mathbb{R}^{m\times d}),~\overline{U}^{k,t,x}:=(\overline{U}^{i;k,t,x})_{i=1,m}\in\mathbb{H}^{2}(\mathbb{L}^{2}_{m}(\lambda));\\
(ii)~\overline{Y}^{i;k,t,x}_{s}= g^{i}(X^{k,t,x}_{T})-\displaystyle\int^{T}_{s}\overline{Z}^{i;k,t,x}\mathrm{d}\mathrm{B}_{r}-\displaystyle\int^{T}_{s}
\displaystyle\int_{\mathrm{E}}\overline{U}^{i;k,t,x}_{r}(e)\tilde{\mu}(\mathrm{d}r,\mathrm{d}e)\\
\quad\quad\quad+\displaystyle\int^{T}_{s}h^{(i)}\left(r,X^{k,t,x}_{r},\overline{Y}^{i;k,t,x}_{r},\overline{Z}^{i;k,t,x}_{r},
\right.\\
\left.
\qquad\qquad\qquad\qquad\qquad\displaystyle\int_{\mathrm{E}}\gamma^{i}(r,X^{k,t,x}_{r},e)(\overline{u}_{i}(r,X^{k,t,x}_{r-}+\beta(r,X^{k,t,x}_{r-},e))-\overline{u}_{i}(r,X^{k,t,x}_{r-}))\,\lambda(de)\right)dr;\\
(iii)~\overline{Y}^{i;k,t,x}_{T}=g^{i}(X^{k,t,x}_{T}).
\end{array}
\right.
\end{equation}
and
\begin{equation}\label{5.73}
\left
\{\begin{array}{ll}
(i)~\vec{\overline{Y}}^{k,t,x_{k}}:=(\overline{Y}^{i;k,t,x_{k}})_{i=1,m}\in\mathcal{S}^{2}(\mathbb{R}^{m}),~\overline{Z}^{k,t,x_{k}}:=(\overline{Z}^{i;k,t,x_{k}})_{i=1,m}\in\mathbb{H}^{2}(\mathbb{R}^{m\times d}),\\\qquad\qquad\overline{U}^{k,t,x_{k}}:=(\overline{U}^{i;k,t,x_{k}})_{i=1,m}\in\mathbb{H}^{2}(\mathbb{L}^{2}_{m}(\lambda));\\
(ii)~\overline{Y}^{i;k,t,x_{k}}_{s}= g^{i}(X^{k,t,x_{k}}_{T})-\displaystyle\int^{T}_{s}\overline{Z}^{i;k,t,x_{k}}\mathrm{d}\mathrm{B}_{r}-\displaystyle\int^{T}_{s}
\displaystyle\int_{\mathrm{E}}\overline{U}^{i;k,t,x_{k}}_{r}(e)\tilde{\mu}(\mathrm{d}r,\mathrm{d}e)\\
\quad\quad\quad+\displaystyle\int^{T}_{s}h^{(i)}\left(r,X^{k,t,x_{k}}_{r},\overline{Y}^{i;k,t,x_{k}}_{r},\overline{Z}^{i;k,t,x_{k}}_{r},
\right.\\
\left.
\qquad\qquad\qquad\qquad\qquad\displaystyle\int_{\mathrm{E}}\gamma^{i}(r,X^{k,t,x_{k}}_{r},e)(\overline{u}_{i}(r,X^{k,t,x_{k}}_{r-}+\beta(r,X^{k,t,x_{k}}_{r-},e))-\overline{u}_{i}(r,X^{k,t,x_{k}}_{r-}))\,\lambda(de)\right)dr;\\
(iii)~\overline{Y}^{i;k,t,x_{k}}_{T}=g^{i}(X^{k,t,x_{k}}_{T}).
\end{array}
\right.
\end{equation}
By the proof of Step $2$ of Proposition \ref{pro4.3}, $(\overline{Y}^{i;k,t,x},\overline{Z}^{i;k,t,x},\overline{U}^{i;k,t,x}1_{\{|e|\geq \frac{1}{k}\}})_{k}$ converges to $(\overline{Y}^{i;t,x}, \overline{Z}^{i;t,x},\overline{U}^{i;t,x})$ in $\mathcal{S}^{2}(\mathbb{R})\times\mathbb{H}^{2}(\mathbb{R}^{\kappa\times d})\times\mathbb{H}^{2}(\mathbb{L}^{2}(\lambda))$.\\
Let $((\overline{v}^{k}_{i})_{i=1,m})_{k\geq 1}$ be the sequence of continuous deterministic functions such that for any $t\leq T$ and $s\in[t,T]$,\\
$$\overline{Y}^{i;k,t,x}_{s}=\overline{v}^{k}_{i}(s,^{k}X^{t,x}_{s})~\text{and } \overline{Y}^{i;k,t,x_{k}}_{s}=\overline{v}^{k}_{i}(s,^{k}X^{t,x_{k}}_{s})~~\forall i=1,\ldots,m.$$
Then, by Step $1$ and Step $2$ of the proof of Proposition \ref{pro4.3} respectively, we have:\\
$(i)~\overline{U}^{i;k,t,x}_{s}(e)=\overline{v}^{k}_{i}(s,^{k}X^{t,x}_{s-}+\beta(s,^{k}X^{t,x}_{s-},e))-\overline{v}^{k}_{i}(s,^{k}X^{t,x}_{s-})$, $ds\otimes d\mathbb{P}\otimes d\lambda_{k}$-a.e. on $[t,T]\times\Omega\times E$;\\
$(ii)~$the sequence $(\overline{v}^{k}_{i}(t,x))_{k\geq 1}$ converges to $v^{i}(t,x)$, by (\ref{4.59}).
Next, since $x_{k}\longrightarrow_{k} x$, we have the following estimate, obtained by It\^o's formula and the properties of $h^{(i)}$:
\begin{eqnarray}\label{5.74}
& {}{} &\mathbb{E}\left[\left|\vec{Y}^{k,t,x_{k}}_{s}-Y^{k,t,x}_{s}\right|^{2}+\displaystyle\int^{T}_{0}\left\lbrace\left|Z^{k,t,x_{k}}_{s}-Z^{k,t,x}_{s}\right|^{2}+\displaystyle\int_{E}\left|U^{k,t,x_{k}}_{s}-U^{k,t,x}_{s}\right|^{2}\,\lambda_{k}(de)\right\rbrace\,ds\right]\nonumber\\
& {}{} &\leq\mathbb{E}\left[\left|g(^{k}X^{t,x_{k}}_{T})-g(^{k}X^{t,x}_{T})\right|^{2}+2\displaystyle\int^{T}_{s}\left\langle \vec{Y}^{k,t,x_{k}}_{r}-Y^{k,t,x}_{r},^{k}\Delta h^{(i)}(r)\right\rangle\,dr\right]\nonumber
\end{eqnarray}
Using the same arguments as in the proof of (\ref{4.51}), it follows that for $s=t$ and all $i=1,\ldots,m$,\\
$$\overline{v}^{k}_{i}(t,x_{k})\longrightarrow_{k}v^{i}(t,x).$$
Therefore, by (i)-(ii), we have, for any $i=1,\ldots,m$,\\
\begin{equation}\label{5.75}
\overline{U}^{i;t,x}_{s}(e)=(v^{i}(s,X^{t,x}_{s-}+\beta(s,X^{t,x}_{s-},e))-v^{i}(s,X^{t,x}_{s-}))\quad ds\otimes d\mathbb{P}\otimes d\lambda-\text{a.e. in }[t,T]\times\Omega\times E, \quad\forall i\in\{1,\ldots, m\}.
\end{equation}
Thanks to this result, we can replace $(\overline{u}_{i}(s,X^{t,x}_{s-}+\beta(s,X^{t,x}_{s-},e))-\overline{u}_{i}(s,X^{t,x}_{s-}))$ by $\overline{U}^{i;t,x}_{s}(e)$ in (\ref{5.68}); we deduce that $(\overline{Y}^{t,x},\overline{Z}^{t,x},\overline{U}^{t,x})$ satisfies: $\forall i\in\{1,\ldots,m\}$,
\begin{equation}\label{5.76}
\left
\{\begin{array}{ll}
(i)~\vec{\overline{Y}}^{t,x}:=(\overline{Y}^{i;t,x})_{i=1,m}\in\mathcal{S}^{2}(\mathbb{R}^{m}),~\overline{Z}^{t,x}:=(\overline{Z}^{i;t,x})_{i=1,m}\in\mathbb{H}^{2}(\mathbb{R}^{m\times d}),~\overline{U}^{t,x}:=(\overline{U}^{i;t,x})_{i=1,m}\in\mathbb{H}^{2}(\mathbb{L}^{2}_{m}(\lambda));\\
(ii)~\overline{Y}^{i;t,x}_{s}= g^{i}(X^{t,x}_{T})-\displaystyle\int^{T}_{s}\overline{Z}^{i;t,x}_{r}\,\mathrm{d}
\mathrm{B}_{r}-\displaystyle\int^{T}_{s}
\displaystyle\int_{\mathrm{E}}\overline{U}^{i;t,x}_{r}(e)\,\tilde{\mu}(\mathrm{d}r,\mathrm{d}e)\\
\quad\quad\quad+\displaystyle\int^{T}_{s}h^{(i)}(r,X^{t,x}_{r},\overline{Y}^{i;t,x}_{r},\overline{Z}^{i;t,x}_{r},\displaystyle\int_{\mathrm{E}}\gamma^{i}(r,X^{t,x}_{r},e)\overline{U}^{i;t,x}_{r}(e)\,\lambda(de))dr;\\
(iii)~\overline{Y}^{i;t,x}_{T}=g^{i}(X^{t,x}_{T}).
\end{array}
\right.
\end{equation}
It follows that $$\forall i\in\{1,\ldots,m\},\quad \overline{Y}^{i;t,x}={Y}^{i;t,x}.$$
By uniqueness of the solution of (\ref{5.76}), we have $u^{i}=\overline{u}^{i}=v^{i}$, which means that the solution of (\ref{eq1}) in the sense of Definition \ref{def5.1} is unique within the class $\mathcal{M}$.
\end{proof}
In this paper we have shown the existence and uniqueness of the viscosity solution through BSDEs by weakening the condition on the generator used in \cite{hama}.\\
The question that arises now is whether the existence and uniqueness of the viscosity solution can also be obtained through RBSDEs.
\newpage
\section*{Appendix}
\subsection*{Barles et al.'s definition of a viscosity solution of the IPDE (\ref{eq1})}
\begin{definition}\label{def5.3}
We say that a family of continuous deterministic functions $u=(u^{i})_{i=1,m}$ is a viscosity sub-solution (resp. super-solution) of the IPDE (\ref{eq1}) if:\\
$(i)\quad \forall x\in\mathbb{R}^{k}$, $u^{i}(T,x)\leq g^{i}(x)$ (resp. $u^{i}(T,x)\geq g^{i}(x)$);\\
$(ii)\quad\text{For any } (t,x)\in[0,T]\times\mathbb{R}^{k}$ and any function $\phi$ of class $C^{1,2}([0,T]\times\mathbb{R}^{k})$ such that $(t,x)$ is a global maximum point of $u^{i}-\phi$ (resp. global minimum point of $u^{i}-\phi$) and $(u^{i}-\phi)(t,x)=0$, one has
\begin{equation*}
\min\left\lbrace u^{i}(t,x)-\ell(t,x);\,-\partial_{t}\phi(t,x)-\mathcal{L}^{X}\phi(t,x)-h^{i}\big(t,x,(u^{j}(t,x))_{j=1,m},\sigma^{\top}(t,x)D_{x}\phi(t,x),B_{i}\phi(t,x)\big)\right\rbrace\leq 0
\end{equation*}
$\left(resp.
\right.$
\begin{equation*}
\left.
\min\left\lbrace u^{i}(t,x)-\ell(t,x);\,-\partial_{t}\phi(t,x)-\mathcal{L}^{X}\phi(t,x)-h^{i}\big(t,x,(u^{j}(t,x))_{j=1,m},\sigma^{\top}(t,x)D_{x}\phi(t,x),B_{i}\phi(t,x)\big)\right\rbrace\geq 0\right).
\end{equation*}
The family $u=(u^{i})_{i=1,m}$ is a viscosity solution of (\ref{eq1}) if it is both a viscosity sub-solution and viscosity super-solution.\\
Note that $\mathcal{L}^{X}\phi(t,x)=b(t,x)^{\top}\mathrm{D}_{x}\phi(t,x)+\frac{1}{2}\mathrm{Tr}(\sigma\sigma^{\top}(t,x)\mathrm{D}^{2}_{xx}\phi(t,x))+\mathrm{K}\phi(t,x)$;\\
where $\mathrm{K}\phi(t,x)=\displaystyle\int_{\mathrm{E}}(\phi(t,x+\beta(t,x,e))-\phi(t,x)-\beta(t,x,e)^{\top}\mathrm{D}_{x}\phi(t,x))\lambda(de)$.
\end{definition}
\subsection*{Another Mao condition}
This paper was mainly concerned with the $p$-order Mao condition. One should know, however, that there is another Mao-type condition, used mainly when the generator is monotone, in order to apply the comparison theorem to the viscosity solution.
\begin{definition}
$f$ satisfies the $p$-order one-sided Mao condition in $x$, i.e., there exists a nondecreasing concave function $\rho(\cdot):\mathbb{R}^{+}\mapsto\mathbb{R}^{+}$ with $\rho(0)=0$ and $\rho(u)>0$ for $u>0$, satisfying $\displaystyle\int_{0^{+}}\frac{du}{\rho(u)}=+\infty$, such that, $d\mathbb{P}\times dt-$a.e., $\forall x, x'\in\mathbb{R}^{k}\textrm{ and }\forall p\geq 2$,
$$\left\langle\frac{x-x'}{|x-x'|}\mathds{1}_{\{|x-x'|\neq 0\}},\,f(t,x,y,z,q)-f(t,x',y,z,q)\right\rangle\leq\rho^{\frac{1}{p}}(|x-x'|^{p}).$$
\end{definition}
\begin{remark}
Applying the Cauchy--Schwarz inequality to the $p$-order one-sided Mao condition, we deduce the $p$-order Mao condition.
\end{remark}
\section{Introduction}
We study the problem of synthesizing novel views of a scene from a sparse set of input views.
This long-standing problem has recently seen progress due to advances in differentiable neural rendering~\cite{NeRF, NSVF, NeRFW, SRN}.
Across these approaches, a 3D scene is represented with a neural network, which can then be rendered into 2D views.
Notably, the recent method neural radiance fields (NeRF)~\cite{NeRF} has shown impressive performance on novel view synthesis of a specific scene by implicitly encoding volumetric density and color through a neural network. While NeRF can render photorealistic novel views, it is often impractical as it requires a large number of posed images and a lengthy per-scene optimization.
In this paper, we address these shortcomings by proposing pixelNeRF, a learning framework that enables predicting NeRFs from one or several images in a feed-forward manner. Unlike the original NeRF network, which does not make use of any image features, pixelNeRF takes spatial image features aligned to each pixel as an input. This image conditioning allows the framework to be trained on a set of multi-view images, where it can learn scene priors to perform view synthesis from one or few input views. In contrast, NeRF is unable to generalize and performs poorly when few input images are available, as shown in Fig.~\ref{fig:teaser}.
Specifically, we condition NeRF on input images by first computing a fully convolutional image feature grid from the input image. Then for each query spatial point $\tf x$ and viewing direction $\tf d$ of interest in the view coordinate frame, we sample the corresponding image feature via projection and bilinear interpolation. The query specification is sent along with the image features to the \textit{NeRF network} that outputs density and color, where the spatial image features are fed to each layer as a residual. When more than one image is available, the inputs are first encoded into a latent representation in each camera's coordinate frame, which are then pooled in an intermediate layer prior to predicting the color and density. The model is supervised with a reconstruction loss between a ground truth image and a view rendered using conventional volume rendering techniques.
This framework is illustrated in Fig.~\ref{fig:arch}.
PixelNeRF has many desirable properties for few-view novel-view synthesis. First, pixelNeRF can be trained on a dataset of multi-view images without additional supervision such as ground truth 3D shape or object masks. Second, pixelNeRF predicts a NeRF representation in the camera coordinate system of the input image instead of a canonical coordinate frame.
This is not only integral for generalization to unseen scenes and object categories~\cite{What3d, Shin2018}, but also for flexibility, since no clear canonical coordinate system exists on scenes with multiple objects or real scenes.
Third, it is fully convolutional, allowing it to preserve the spatial alignment between the image and the output 3D representation.
Lastly, pixelNeRF can incorporate a variable number of posed input views at test time without requiring any test-time optimization.
We conduct an extensive series of experiments on synthetic and real image datasets to evaluate the efficacy of our framework, going beyond the usual set of ShapeNet experiments
to demonstrate its flexibility. Our experiments show that pixelNeRF can generate novel views from a single image input for both category-specific and category-agnostic settings, even in the case of unseen object categories.
Further, we test the flexibility of our framework, both with a new multi-object benchmark for ShapeNet, where pixelNeRF outperforms prior approaches,
and with simulation-to-real transfer demonstration on real car images.
Lastly, we test capabilities of pixelNeRF on real images using the DTU dataset~\cite{DTU},
where despite being trained on under 100 scenes, it can generate plausible novel views of a real scene from three posed input views.
\begin{table}
\centering
\input{tables/comparison}
\caption{
\textbf{A comparison with prior works reconstructing
neural scene representations.}
The proposed approach learns a scene prior for one or few-view reconstruction
using only multi-view 2D image supervision.
Unlike previous methods in this regime,
we do not require a consistent canonical space across
the training corpus.
Moreover, we incorporate local image features to preserve local information which is
in contrast to methods that compress the structure and appearance
into a single latent vector such as Occupancy Networks (ONet)~\cite{OccupancyNetworks} and DVR~\cite{DVR}.
}
\label{tab:related_work_comparison}
\vspace{-1em}
\end{table}
\section{Related Work}
\label{sec:rel_work}
\noindent\textbf{Novel View Synthesis.}
The long-standing problem of novel view synthesis entails constructing new views of a scene from a set of input views.
Early work achieved photorealistic results but required densely captured views of the scene~\cite{levoy1996light, gortler1996lumigraph}.
Recent work has made rapid progress toward photorealism for both wider ranges of novel views and sparser sets of input views, by using 3D representations based on neural networks~\cite{NeRF, NeuralVolumes, meshry2019neural, DeepVoxels, thies2019deferred, dai2020neural}.
However, because these approaches fit a single model to each scene, they require many input views and substantial optimization time per scene.
Some methods can predict novel views from few input views, or even a single image, by learning shared priors across scenes.
Methods in the tradition of~\cite{shade1998layered, buehler2001unstructured} use depth-guided image interpolation~\cite{zhou2017view, DeepStereo, riegler2020free}.
More recently, the problem of predicting novel views from a single image has been explored~\cite{tucker2020singleview, wiles2020synsin, Shih3DP20,chen2019monocular}. However, these methods employ 2.5D representations, and are therefore limited in the range of camera motions they can synthesize. In this work we infer a 3D volumetric NeRF representation, which allows novel view synthesis from larger baselines.
Sitzmann et al.~\cite{SRN} introduces a representation based on a continuous 3D feature space
to learn a prior across scene instances.
However, using the learned prior at test time requires further optimization with known absolute camera poses.
In contrast, our approach is completely feed-forward and only requires relative camera poses.
We offer extensive comparisons with this approach to demonstrate the advantages our design affords.
Lastly, note that concurrent work \cite{GRF} adds image features to NeRF.
A key difference is that we operate in view rather than canonical space, which makes our approach applicable in more general settings.
Moreover, we extensively demonstrate our method's performance in few-shot view synthesis, while GRF shows very limited quantitative results for this task.
\vspace{0.1in}
\noindent\textbf{Learning-based 3D reconstruction.}
Advances in deep learning have led to rapid progress in single-view or multi-view 3D reconstruction.
Many approaches~\cite{kar2017learning, AtlasNet, Pixel2Mesh, GenRe, DeepVoxels, PIFu, DISN, OccupancyNetworks, CoReNet} propose learning frameworks with various 3D representations that require ground-truth 3D models for supervision.
Multi-view supervision \cite{PTN, DRC, SoftRas, Liu2019, SRN, DVR, ENR, Bautista_2021_WACV} is less restrictive and more ecologically plausible. However, many of these methods~\cite{PTN, DRC, SoftRas, Liu2019, DVR} require object masks; in contrast,
pixelNeRF can be trained from images alone,
allowing it to be applied to
scenes of two objects without modification.
Most single-view 3D reconstruction methods condition neural 3D representations on input images.
The majority employs global image features~\cite{park2019deepsdf, chen2019learning, DVR, OccupancyNetworks, ENR}, which, while memory efficient, cannot preserve details that are present in the image and often lead to retrieval-like results.
Spatially-aligned \textit{local} image features have been shown to achieve detailed reconstructions from a single view~\cite{DISN, PIFu}.
However, both of these methods require 3D supervision.
Our method is inspired by these approaches, but only requires multi-view supervision.
Within existing methods, the types of scenes that can be reconstructed are limited, particularly so for object-centric approaches (e.g.~\cite{Pixel2Mesh, SoftRas, AtlasNet, DRC, DeepVoxels, GenRe, OccupancyNetworks, DISN, DVR}).
CoReNet~\cite{CoReNet}
reconstructs scenes with multiple objects via a voxel grid with offsets, but it requires 3D supervision including the identity and placement of objects.
In comparison, we formulate a scene-level learning framework that can in principle be trained to scenes of arbitrary structure.
\vspace{0.5em}
\noindent\textbf{Viewer-centric 3D reconstruction.}
For the 3D learning task, prediction can be done either in a viewer-centered coordinate system, i.e. \textit{view space}, or in an object-centered coordinate system, i.e. \textit{canonical space}.
Most existing methods~\cite{DISN, OccupancyNetworks, DVR, SRN} predict in canonical space, where all objects of a semantic category are aligned to a consistent orientation.
While this makes learning spatial regularities easier,
using a canonical space
inhibits prediction performance on unseen object categories and scenes with more than one object, where there is no pre-defined or well-defined canonical pose.
PixelNeRF operates in view-space, which
has been shown to allow better reconstruction of unseen object categories in~\cite{Shin2018, Bautista_2021_WACV},
and discourages the memorization of the training set~\cite{What3d}.
We summarize key aspects of our approach relative to prior work in Table~\ref{tab:related_work_comparison}.
\section{Background: NeRF}
\label{sec:background}
We first briefly review the NeRF representation~\cite{NeRF}.
A NeRF encodes a scene as a continuous volumetric radiance field $f$ of color and density.
Specifically, for a 3D point $\tf x \in \mathbb{R}^3$ and viewing direction unit vector $\tf d \in \mathbb{R}^3$, $f$ returns a differential density $\sigma$ and RGB color $\tf c$:~\( f(\tf x, \tf d) = (\sigma, \tf c)\).
The volumetric radiance field can then be rendered into a 2D image via
\begin{equation}
\hat{\mathbf{C}}(\mathbf{r}) = \int_{t_n}^{t_f} T(t) \sigma(t) \mathbf{c}(t) dt
\label{eq:rendering}
\end{equation}
where~$T(t) = \exp\big(- \int_{t_n}^{t} \sigma(s) \,ds \big)$ handles occlusion.
For a target view with pose $\tf P$, a camera ray can be parameterized as $\tf r(t) = \tf o + t \tf d$, with the ray origin (camera center) $\tf o \in \mathbb{R}^3$ and ray unit direction vector $\tf d \in \mathbb{R}^3$.
The integral is computed along $\tf r$ between pre-defined depth bounds $[t_n, t_f]$.
In practice, this integral is approximated with numerical quadrature by sampling points along each pixel ray.
The rendered pixel value for camera ray $\tf r$ can then be compared against the corresponding ground truth pixel value, $\tf C(\tf r)$, for all the camera rays of the target view with pose $\tf P$.
The NeRF rendering loss is thus given by
\begin{equation}
\mathcal{L} =
\sum_{\tf r \in \mathcal{R}(\tf P)}
\left\lVert
\hat{\tf C}(\tf r)
-
\tf C(\tf r)
\right\rVert_2^2
\label{eq:mseloss}
\end{equation}
where $\mathcal{R}(\tf P)$ is the set of all camera rays of target pose $\tf P$.
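In practice, the quadrature and loss above amount to standard alpha compositing over the samples along each ray. A minimal NumPy sketch of this discretization follows; the function names are ours, not from the NeRF codebase:

```python
import numpy as np

def render_ray(sigmas, colors, t_vals):
    """Approximate the volume-rendering integral along one ray with
    numerical quadrature (the usual NeRF discretization).

    sigmas: (N,) densities at the sampled depths
    colors: (N, 3) RGB values at the sampled depths
    t_vals: (N,) sample depths in [t_n, t_f]
    """
    # interval lengths delta_i; the last interval is effectively infinite
    deltas = np.diff(t_vals, append=t_vals[-1] + 1e10)
    alphas = 1.0 - np.exp(-sigmas * deltas)         # per-sample opacity
    # T_i = prod_{j<i} (1 - alpha_j): transmittance handling occlusion
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)  # composited pixel color

def rendering_loss(pred_rgb, gt_rgb):
    """Squared-error rendering loss, summed over the rays of a target view."""
    return ((pred_rgb - gt_rgb) ** 2).sum()
```

An opaque sample early along the ray receives all of the compositing weight, which is how the transmittance term models occlusion.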
\vspace{.1in}
\noindent\textbf{Limitations.} While NeRF achieves state-of-the-art novel view synthesis results, it is an optimization-based approach using geometric consistency as the sole signal, similar to classical multi-view stereo methods~\cite{RomeInDay, COLMAPMVS}.
As such each scene must be optimized individually, with no knowledge shared between scenes.
Not only is this time-consuming, but in the limit of single or extremely sparse views, it is unable to make use of any prior knowledge of the world to accelerate reconstruction or for shape completion.
\section{Image-conditioned NeRF}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.9\textwidth]{figures/pipeline.pdf}
\end{center}
\vspace{-6mm}
\caption{\textbf{Proposed architecture in the single-view case.}
For a query point $\tf x$ along a target camera ray with view direction $\tf d$,
a corresponding image feature is extracted from the feature volume $\tf W$
via projection and interpolation.
This feature is then passed into the NeRF network $f$ along with the spatial coordinates.
The output RGB and density value is volume-rendered and compared with the target pixel value.
The coordinates $\mathbf{x}$ and $\mathbf{d}$ are in the camera coordinate system of the input view.
}
\label{fig:arch}
\vspace{-5mm}
\end{figure*}
To overcome the NeRF representation's inability to share knowledge between scenes, we propose an architecture to condition a NeRF on spatial image features.
Our model is comprised of two components:
a fully-convolutional image encoder $E$, which encodes the input image into a pixel-aligned feature grid, and a NeRF network $f$ which outputs color and density, given a spatial location and its corresponding encoded feature.
We choose to model the spatial query in the input view's camera space, rather than a canonical space, for the reasons discussed in $\S$~\ref{sec:rel_work}.
We validate this design choice in our experiments on unseen object categories ($\S$~\ref{sec:beyond_shapenet}) and complex unseen scenes ($\S$~\ref{sec:dtu}).
The model is trained with the volume rendering method and loss described in $\S$~\ref{sec:background}.
In the following, we first present our model for the single view case.
We then show how this formulation can be easily extended to incorporate multiple input
images.
\subsection{Single-Image pixelNeRF}
We now describe our approach to render novel views from one input image.
We fix our coordinate system as the \textit{view space} of the input image and specify positions and camera rays in this coordinate system.
Given an input image $\tf I$ of a scene, we first extract a feature volume $\tf W = E(\tf I)$.
Then, for a point on a camera ray $\tf x$, we retrieve the corresponding image feature by projecting $\tf x$ onto the image plane to the image coordinates $\pi(\tf x)$ using known intrinsics, then bilinearly interpolating between the pixelwise features to extract the feature vector $\tf W(\pi(\tf x))$.
The image features are then passed into the NeRF network, along with the position and view direction (both in the input view coordinate system), as
\begin{equation}
f(\gamma(\tf x), \tf d; \tf W(\pi(\tf x))) = (\sigma, \tf c)
\label{eq:icnerf_1view}
\end{equation}
where $\gamma(\cdot)$ is a positional encoding on $\tf x$ with $6$
exponentially increasing frequencies introduced in the original NeRF \cite{NeRF}.
The image feature is incorporated as a residual at each layer;
see $\S$~\ref{sec:impl_detail} for more information.
We show our pipeline schematically in Fig.~\ref{fig:arch}.
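The projection and interpolation step can be sketched as follows; `sample_feature`, the pinhole intrinsics matrix `K`, and the exact encoding layout are illustrative assumptions rather than code from the paper:

```python
import numpy as np

def positional_encoding(x, n_freqs=6):
    """gamma(x): sines and cosines at exponentially increasing frequencies."""
    freqs = 2.0 ** np.arange(n_freqs)
    return np.concatenate([fn(f * x) for f in freqs for fn in (np.sin, np.cos)])

def sample_feature(W, x_cam, K):
    """Bilinearly interpolate the feature grid W of shape (H, W, C) at the
    projection pi(x) of a camera-space point x_cam, using intrinsics K."""
    u, v, z = K @ x_cam
    u, v = u / z, v / z                    # perspective divide: pi(x)
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    du, dv = u - u0, v - v0
    # blend the four surrounding pixel-aligned feature vectors
    return ((1 - du) * (1 - dv) * W[v0, u0] + du * (1 - dv) * W[v0, u0 + 1]
            + (1 - du) * dv * W[v0 + 1, u0] + du * dv * W[v0 + 1, u0 + 1])
```

The encoded position, view direction, and sampled feature are then fed to the NeRF network as in the equation above.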
In the few-shot view synthesis task, the query view direction is a useful signal for
determining the importance of a particular image feature in
the NeRF network.
If the query view direction is similar to the input view orientation, the model can rely more directly on the input; if it is dissimilar, the model must leverage the learned prior.
Moreover, in the multi-view case, view directions could
serve as a signal for the relevance and positioning of different views.
For this reason, we input the view directions at the beginning of the NeRF network.
\subsection{Incorporating Multiple Views}
\label{sec:multiview}
Multiple views provide additional information about the scene and resolve 3D geometric ambiguities inherent to the single-view case.
We extend our model to allow for an arbitrary number of views at test time,
which distinguishes our method from existing approaches that are designed to use only a single input view at test time~\cite{ENR, GenRe}.
Moreover, our formulation is independent of the choice of world space and the order of input views.
In the case that we have multiple input views of the scene,
we assume only that the relative camera poses are known.
For purposes of explanation,
an arbitrary world coordinate system can be fixed
for the scene.
We denote the $i$th input image as $\tf I^{(i)}$ and its associated
camera transform from the world space to its view space as $\tf P^{(i)} = \begin{bmatrix}\tf R^{(i)} & \tf t^{(i)} \end{bmatrix}$.
For a new target camera ray, we transform a query point $\tf x$, with view direction $\tf d$, into the coordinate system of each input view $i$ with the world to camera transform as
\begin{equation}
\tf x^{(i)} = \tf P^{(i)} \tf x, \quad \tf d^{(i)} = \tf R^{(i)} \tf d
\end{equation}
To obtain the output density and color,
we process the coordinates and corresponding features
in each view coordinate frame independently
and aggregate across the views within the NeRF network.
For ease of explanation, we denote the initial layers of the NeRF network as $f_1$,
which process inputs in each input view space separately, and the final layers as $f_2$,
which process the aggregated views.
We encode each input image into feature volume $\tf W^{(i)} = E(\tf I^{(i)})$.
For the view-space point $\tf x^{(i)}$, we extract the corresponding image feature from the feature volume $\tf W^{(i)}$ at the projected image coordinate $\pi(\tf x^{(i)})$.
We then pass these inputs into $f_1$ to obtain intermediate vectors:
\begin{equation}
\tf V^{(i)} = f_1\left(\gamma(\tf x^{(i)}), \tf d^{(i)} ;\, \tf W^{(i)}\big( \pi(\tf x^{(i)}) \big) \right).
\end{equation}
The intermediate $\tf V^{(i)}$ are then aggregated with the average pooling operator $\psi$
and passed into the final layers $f_2$
to obtain the predicted density and color:
\begin{equation}
\label{eq:icnerf_2view}
(\sigma, \tf c) = f_2\left(
\psi\left(\tf V^{(1)}, \ldots, \tf V^{(n)}\right)
\right).
\end{equation}
In the single-view special case, this simplifies to Equation~\ref{eq:icnerf_1view} with $f = f_2 \circ f_1$,
by considering the view space as the world space.
An illustration is provided in the supplemental.
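Concretely, the per-view transformation and pooling can be sketched as below, where `f1` and `f2` are placeholder callables standing in for the two halves of the NeRF network, and the image features are assumed to be already sampled at each projected point:

```python
import numpy as np

def multi_view_query(x_world, d_world, poses, feats, f1, f2):
    """poses: list of world-to-camera transforms (R, t);
    feats: per-view feature vectors W^(i)(pi(x^(i))), already sampled;
    f1, f2: callables standing in for the NeRF network halves."""
    V = []
    for (R, t), W_i in zip(poses, feats):
        x_i = R @ x_world + t        # x^(i) = P^(i) x
        d_i = R @ d_world            # d^(i) = R^(i) d
        V.append(f1(x_i, d_i, W_i))  # intermediate vector in view i's frame
    pooled = np.mean(V, axis=0)      # psi: average pooling across views
    return f2(pooled)                # decode to (sigma, c)
```

With a single input view whose view space is taken as the world space, this reduces to $f = f_2 \circ f_1$, matching the single-image case.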
\section{Experiments}
We extensively demonstrate our approach in three experimental categories:
\begin{inparaenum}[1)]
\item existing ShapeNet~\cite{ShapeNet} benchmarks for category-specific and category-agnostic view synthesis,
\item ShapeNet scenes with unseen categories and multiple objects, both of which require geometric priors instead of recognition,
as well as domain transfer to real car photos and
\item real scenes from the DTU MVS dataset~\cite{DTU}.
\end{inparaenum}
\vspace{0.5em}
\noindent\textbf{Baselines}
For ShapeNet benchmarks,
we compare quantitatively and qualitatively to SRN~\cite{SRN} and DVR~\cite{DVR},
the current state-of-the-art in few-shot novel-view synthesis and 2D-supervised single-view reconstruction respectively.
We use the 2D multiview-supervised variant of DVR.
In the category-agnostic setting ($\S$~\ref{sec:multi_cat}), we also include
grayscale rendering of SoftRas~\cite{SoftRas} results.\footnote{
Color inference is not supported by the public SoftRas code.
}
In the experiments with multiple ShapeNet objects, we compare with SRN, which can also model entire scenes.
For the experiment on the DTU dataset, we compare to NeRF~\cite{NeRF} trained on sparse views.
Because NeRF is a test-time optimization method, we train a separate model for each scene in the test set.
\vspace{0.1in}
\noindent\textbf{Metrics}
\label{sec:metrics}
We report the standard image quality metrics PSNR and SSIM~\cite{SSIM} for all evaluations.
We also include LPIPS~\cite{LPIPS}, which more accurately reflects human perception, in all evaluations except in the category-specific setup ($\S$~\ref{sec:single_cat}).
In this setting, we exactly follow the protocol of SRN~\cite{SRN} to remain comparable to prior works \cite{Tatarchenko2015, WorrallGTB18, GQN, ENR, GRF}, for which source code is unavailable.
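As a reference point, PSNR follows its standard definition; this is generic metric code, not taken from any baseline's evaluation scripts:

```python
import numpy as np

def psnr(pred, gt, max_val=1.0):
    """Peak signal-to-noise ratio in dB for images with values in [0, max_val]."""
    mse = np.mean((pred - gt) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)
```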
\vspace{0.1in}
\label{sec:impl_detail}
\noindent\textbf{Implementation Details}
For the image encoder $E$,
to capture both local and global information effectively,
we extract a feature pyramid from the image.
We use a ResNet34 backbone pretrained on ImageNet for our experiments.
Features are extracted prior to the first $4$ pooling layers, upsampled using bilinear interpolation,
and concatenated to form latent vectors of size $512$ aligned to each pixel.
To incorporate a point's corresponding image feature into the NeRF network $f$, we choose a
ResNet architecture with a residual modulation rather than simply concatenating the feature vector with the point's position and view direction.
Specifically, we feed the encoded position and view direction through the network
and add the image feature as a residual at the beginning of each ResNet block.
We train an independent linear layer for each block residual, in a similar manner as AdaIn and SPADE~\cite{AdaIn, SPADE}, a method previously used with success in ~\cite{OccupancyNetworks, DVR}.
Please refer to the supplemental for additional details.
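As an illustration of this residual modulation, one block might look like the following sketch. The two-layer MLP, activation placement, and shapes are our assumptions; the per-block linear map applied to the image feature is the part described above:

```python
import numpy as np

def modulated_resnet_block(h, feat, W1, W2, W_feat):
    """One NeRF-network block with the image feature injected as a residual.

    h: (D,) hidden activation; feat: (C,) sampled image feature
    W1, W2: (D, D) block weights; W_feat: (D, C) block-specific linear layer
    """
    relu = lambda a: np.maximum(a, 0.0)
    z = h + W_feat @ feat                # feature added as a residual, not concatenated
    return z + W2 @ relu(W1 @ relu(z))   # standard residual connection
```

Feeding the feature through a learned linear layer per block, rather than concatenating it once at the input, lets every stage of the network re-access the image evidence.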
\subsection{ShapeNet Benchmarks}
We first evaluate our approach on category-specific and category-agnostic view synthesis tasks on ShapeNet.
\begin{table}
\centering
\input{tables/singlecat}
\caption{
\textbf{Category-specific 1- and 2-view reconstruction}.
Methods marked * do not require canonical poses at test time.
In all cases, a single model is trained for each category and used for both 1- and 2-view evaluation.
Note ENR is a 1-view only model.
}
\label{tab:single_cat}
\end{table}
\begin{table}
\centering
\input{tables/ablations}
\caption{\textbf{Ablation studies for ShapeNet chair reconstruction.} We show the benefit of using local features over a global code to condition the NeRF network ($-$Local vs Full), and of providing view directions to the network ($-$Dirs vs Full).
}
\vspace{-1em}
\label{tab:chairs_ablations}
\end{table}
\begin{figure*}[ht]
\centering
\includegraphics[width=\textwidth]{figures/category_agnostic_v2a.pdf}
\vspace{-0.6cm}
\caption{\textbf{Category-agnostic single-view reconstruction.}
Going beyond the SRN benchmark, we train a single model to the 13 largest ShapeNet categories;
we find that our approach produces superior visual results compared to a series of strong baselines.
In particular, the model recovers fine detail and thin structure more effectively, even for outlier shapes.
Quite visibly, images on monitors and tabletop textures are accurately reproduced; baselines
representing the scene as a single latent vector cannot preserve
such details of the input image.
SRN's test-time latent inversion becomes less reliable as well in this setting.
The corresponding quantitative evaluations are available in Table~\ref{tab:multi_cat}.
Due to space constraints, we show objects with interesting properties here.
Please see the supplemental for sampled results.}
\vspace{-.5em}
\label{fig:category_agnostic}
\end{figure*}
\begin{table*}[t]
\centering
\include{tables/multicat}
\vspace{-0.01cm}
\caption{
\textbf{Category-agnostic single-view reconstruction.}
Quantitative results for category-agnostic view-synthesis are presented, with a detailed breakdown by category.
Our method outperforms the state-of-the-art by significant margins in all categories.
}
\label{tab:multi_cat}
\vspace{-1em}
\end{table*}
\vspace{-0.5em}
\subsubsection{Category-specific View Synthesis Benchmark}
\label{sec:single_cat}
We perform one-shot and two-shot view synthesis on the ``chair" and ``car" classes of ShapeNet, using the protocol and dataset introduced in~\cite{SRN}.
The dataset contains 6591 chairs and 3514 cars with a predefined split across object instances.
All images have resolution $128 \times 128$.
A single model is trained for each object class with 50 random views per object instance, randomly sampling either one or two of the training views to encode.
For testing,
we use 251 novel views on an Archimedean spiral for each object in the test set of object instances, fixing one or two informative views as input.
We report our performance in comparison with state-of-the-art baselines in Table~\ref{tab:single_cat}, and show selected qualitative results in Fig.~\ref{fig:single_view_cars_chairs}.
We also include the quantitative results of baselines TCO~\cite{Tatarchenko2015}
and dGQN~\cite{GQN} reported in~\cite{SRN} where applicable, and
the values available in the recent works ENR~\cite{ENR} and GRF~\cite{GRF} in this setting.
PixelNeRF achieves noticeably superior results despite solving a problem \textit{significantly harder} than SRN because we:
\begin{inparaenum}[1)]
\item use feed-forward prediction, without test-time optimization,
\item do not use ground-truth absolute camera poses at test-time,
\item use view instead of canonical space.
\end{inparaenum}
\vspace{0.5em}
\noindent\textbf{Ablations.}
In Table~\ref{tab:chairs_ablations},
we show the benefit of using local features and view directions in our model for this category-specific setting.
Conditioning the NeRF network on pixel-aligned local features instead of a global code ($-$Local vs Full) improves performance significantly, for both single and two-view settings.
Providing view directions ($-$Dirs vs Full) also provides a significant boost.
For these ablations, we follow an abbreviated evaluation protocol on ShapeNet chairs, using 25 novel views on the Archimedean spiral.
\vspace{-0.6em}
\subsubsection{Category-agnostic Object Prior}
\label{sec:multi_cat}
While we found appreciable improvements over baselines in the simplest category-specific benchmark,
our method is by no means constrained to it.
We show in Table~\ref{tab:multi_cat} and
Fig.~\ref{fig:category_agnostic} that
our approach offers a much greater advantage in the
\textit{category-agnostic} setting of \cite{SoftRas, DVR}, where we train a single model to the $13$ largest categories of ShapeNet.
Please see the supplemental for randomly sampled results.
We follow community standards for 2D-supervised methods on multiple ShapeNet categories \cite{DVR, NMR, SoftRas} and use the renderings and splits from Kato et al.~\cite{NMR}, which provide
24 fixed elevation views of $64 \times 64$ resolution for each object instance.
During both training and evaluation, a random view is selected as the input view for each object and shared across all baselines.
The remaining $23$ views are used as target views for computing metrics (see
$\S$~\ref{sec:metrics}).
\subsection{Pushing the Boundaries of ShapeNet}
\label{sec:beyond_shapenet}
Taking a step towards reconstruction in less controlled capture scenarios, we perform experiments on ShapeNet data in three more challenging setups:
\begin{inparaenum}[1)]
\item unseen object categories,
\item multiple-object scenes, and
\item simulation-to-real transfer on car images.
\end{inparaenum}
In these settings, successful reconstruction requires geometric priors; recognition or retrieval alone is not sufficient.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/novel_category_v1.pdf}
\vspace{-1.3em}
\caption{
\textbf{Generalization to unseen categories.}
We evaluate a model trained on planes, cars, and chairs on 10 unseen ShapeNet categories.
We find that the model is able to synthesize reasonable views even in this difficult case.}
\vspace{-0.1em}
\label{fig:novel_cat}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\linewidth]{figures/2obj_cmp.pdf}
\vspace{-8mm}
\end{center}
\caption{\textbf{360\degree view prediction with multiple objects.}
We show qualitative results of our method compared with SRN on scenes composed of multiple ShapeNet chairs. We are easily able to handle this setting, because our prediction is done in view space; in contrast, SRN predicts in canonical space, and struggles with scenes that cannot be aligned in such a way.
}
\vspace{-.3em}
\label{fig:2obj}
\end{figure}
\begin{table}
\centering
\include{tables/beyond_shapenet}
\caption{
\textbf{Image quality metrics for challenging ShapeNet tasks.}
(Left) Average metrics on 10 unseen categories for models trained on only planes, cars, and chairs.
See the supplemental for a breakdown by category.
(Right) Average metrics for two-view reconstruction for scenes with multiple ShapeNet chairs.
}
\vspace{-1em}
\label{tab:beyond_shapenet}
\end{table}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{figures/sim2real_v1.pdf}
\caption{\textbf{Results on real car photos.}
We apply the
car model from $\S$~\ref{sec:single_cat} directly to images from the Stanford cars dataset~\cite{CarsDataset}. The background has been masked out using PointRend~\cite{PointRend}.
The views are rotations about the view-space vertical axis.}
\label{fig:real_car}
\vspace{-1em}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{figures/dtu_v3.pdf}
\vspace{-1.5em}
\caption{\textbf{Wide baseline novel-view synthesis on a real image dataset.}
We train our model to distinct scenes in the DTU MVS dataset~\cite{DTU}.
Perhaps surprisingly, even in this case, our model is able to infer novel views with reasonable quality for held-out scenes
without further test-time optimization, all from only three views.
Note the train/test sets share no overlapping scenes.}
\vspace{-1em}
\label{fig:dtu}
\end{figure*}
\vspace{0.5em}
\noindent\textbf{Generalization to novel categories.}
\label{sec:cross_cat_gen}
We first aim to reconstruct ShapeNet categories which were not seen in training.
Unlike the more standard category-agnostic task described in the previous section, such generalization is impossible with semantic information alone.
The results in Table~\ref{tab:beyond_shapenet} and Fig.~\ref{fig:novel_cat}
suggest our method learns intrinsic geometric and appearance priors which
are fairly effective
even for objects quite distinct from those seen during training.
We loosely follow the protocol used for zero-shot cross-category reconstruction from~\cite{GenRe, yan2017perspective}.
Note that our baselines~\cite{SRN, DVR} do not evaluate in this setting, and we adapt them for the sake of comparison.
We train on the airplane, car, and chair categories and test on 10 categories unseen during training, continuing to use the Kato et al. renderings described in $\S$~\ref{sec:multi_cat}.
\vspace{0.5em}
\noindent\textbf{Multiple-object scenes.}
We further perform few-shot $360\degree$ reconstruction for scenes with multiple randomly placed and oriented ShapeNet chairs.
In this setting, the network cannot rely solely on semantic cues for correct object placement and completion.
The priors learned by the network must be applicable in an arbitrary coordinate system.
We show in Fig.~\ref{fig:2obj} and Table~\ref{tab:beyond_shapenet} that our formulation allows us to perform well on these simple scenes without additional design modifications.
In contrast, SRN models scenes in a canonical space and struggles on held-out scenes.
We generate training images with 20 views randomly sampled on the hemisphere, and
render test images of a held-out set of chair instances with 50 views sampled on an Archimedean spiral.
During training, we randomly encode two input views; at test-time, we fix two informative views across the compared methods.
In the supplemental, we provide example images from our dataset as well as additional quantitative results and qualitative comparisons with varying numbers of input views.
\vspace{0.4em}
\noindent\textbf{Sim2Real on Cars.}
We also explore the performance of pixelNeRF on real images from the Stanford cars dataset~\cite{CarsDataset}.
We directly apply the car model from $\S$~\ref{sec:single_cat} without any fine-tuning.
As seen in Fig.~\ref{fig:real_car}, the network trained on synthetic data effectively infers shape and texture of the real cars, suggesting our model can transfer beyond the synthetic domain.
Synthesizing the $360\degree$ background from a single view is nontrivial and beyond the scope of this work.
For this demonstration, the off-the-shelf PointRend~\cite{PointRend} segmentation model is used to remove
the background.
\subsection{Scene Prior on Real Images}
\label{sec:dtu}
Finally, we demonstrate that our method is applicable for few-shot \textit{wide baseline} novel-view synthesis on real scenes in the DTU MVS dataset~\cite{DTU}.
Learning a prior for view synthesis on this dataset poses significant challenges:
not only does it consist of more complex scenes with no clear semantic similarities between them, it also contains inconsistent backgrounds and lighting across scenes.
Moreover, under 100 scenes are available for training.
We found that the standard data split introduced in MVSNet~\cite{MVSNet} contains overlap between scenes of the training and test sets.
Therefore, for our purposes, we use a different split of 88 training scenes and 15 test scenes, in which there are no shared or highly similar scenes between the two sets.
Images are down-sampled to a resolution of $400 \times 300$.
We train one model across all training scenes by encoding $3$ random views of a scene.
During test time, we choose a set of fixed informative input views shared across all instances.
We show in Fig.~\ref{fig:dtu} that our method can perform view synthesis on the held-out test scenes.
We further quantitatively compare the performance of our feed-forward model with NeRF optimized to the same set of input views
in Fig.~\ref{fig:dtu_views}.
Note that training each of the 60 NeRFs took 14 hours; in contrast,
pixelNeRF is applied to new scenes immediately without any test-time optimization.
\section{Discussion}
We have presented pixelNeRF, a framework to learn a scene prior for reconstructing NeRFs from one or a few images.
Through extensive experiments, we have established that our approach can be successfully applied in a variety of settings.
We addressed some shortcomings of NeRF, but there are challenges yet to be explored:
\begin{inparaenum}[1)]
\item
Like NeRF, our rendering time is slow, and in fact,
our runtime increases linearly when given more input views.
Further, some methods (e.g.~\cite{DVR, SoftRas}) can recover a mesh from the image enabling fast rendering and manipulation afterwards, while NeRF-based representations cannot be converted to meshes very reliably.
Improving NeRF's efficiency is an
important research question that
can enable real-time applications.
\item
As in the vanilla NeRF,
we manually tune ray sampling bounds $t_n, t_f$ and a scale for the positional encoding.
Making NeRF-related methods scale-invariant is a crucial challenge.
\item
While we have demonstrated our method on real data from the DTU dataset, we acknowledge that
this dataset was captured under controlled settings and has matching camera poses across all scenes with limited viewpoints.
Ultimately, our approach is bottlenecked by the availability of large-scale wide baseline multi-view datasets, limiting the applicability to datasets such as ShapeNet and DTU.
Learning a general prior for $360\degree$ scenes in-the-wild is an exciting direction for future work.
\end{inparaenum}
\section*{Acknowledgements}
We thank Shubham Goel and Hang Gao for comments on the text. We also thank Emilien Dupont and Vincent Sitzmann for helpful discussions.
{\small
\bibliographystyle{ieee_fullname}
\subsection{Experimental Details}
We first provide general details about the metrics
and training procedure common to all experiments,
then present more specific details for each experimental setting in subsections.
\paragraph{Metric details}
We use PSNR and SSIM from the scikit-image~\cite{scikitimage} package as in SRN~\cite{SRN},
whereas LPIPS is computed with the code provided by the LPIPS authors~\cite{LPIPS} after normalizing the pixel values to the $[-1, 1]$ range.
We use the VGG network version of LPIPS following NeRF~\cite{NeRF}.
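As a minimal illustration of these metrics, the standard PSNR definition (the same one scikit-image's \texttt{peak\_signal\_noise\_ratio} implements) and the $[-1, 1]$ input normalization expected by LPIPS can be sketched as follows; SSIM and the LPIPS network itself are omitted, and this is not the paper's evaluation code:

```python
import numpy as np

def psnr(gt, pred, data_range=1.0):
    """Peak signal-to-noise ratio for float images in [0, data_range].

    Matches the standard definition used by scikit-image's
    peak_signal_noise_ratio; a minimal sketch, not the paper's code.
    """
    mse = np.mean((gt - pred) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def to_lpips_range(img):
    """Rescale a [0, 1] image to the [-1, 1] range expected by LPIPS."""
    return img * 2.0 - 1.0

# A uniform error of 0.1 gives MSE = 0.01 and hence PSNR = 20 dB.
gt = np.zeros((8, 8, 3))
pred = np.full((8, 8, 3), 0.1)
```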
\paragraph{Training}
For all experiments, we take the learning rate to be $10^{-4}$.
We use a batch size of $4$ instances
and $128$ rays per instance.
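A hypothetical sketch of how such a per-step batch could be assembled; the function and array names are illustrative and the actual pixel-sampling code from the paper is not reproduced here:

```python
import numpy as np

def sample_ray_batch(num_instances=4, rays_per_instance=128,
                     height=128, width=128, rng=None):
    """Sketch of the per-step batch described above: 4 object
    instances with 128 random pixel rays each."""
    rng = rng if rng is not None else np.random.default_rng()
    # Sample integer pixel coordinates independently per instance.
    ys = rng.integers(0, height, size=(num_instances, rays_per_instance))
    xs = rng.integers(0, width, size=(num_instances, rays_per_instance))
    return np.stack([ys, xs], axis=-1)  # (num_instances, rays, 2)

batch = sample_ray_batch(rng=np.random.default_rng(0))
```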
\begin{table}[h]
\centering
\begin{tabular}{@{}llll@{}}
\toprule
\textbf{Full Name }& cabinet & display & speaker \\
\textbf{Abbreviation} & cbnt. & disp. & spkr.\\
\bottomrule
\end{tabular}
\caption{ShapeNet category name abbreviations.}
\label{tab:name_abbreviations}
\end{table}
\vspace{-1em}
\subsubsection{Single-category ShapeNet}
We train for 400000 iterations,
which took roughly $6$ days on a single Titan RTX.
For efficiency, we sample rays
from within a tight bounding box around the object for the first 300000 iterations, after which
we remove the bounding box to avoid background artifacts.
Further, we use $2$ input views for the first 300000 iterations
and after that,
we randomly choose to take either $1$ or $2$ views as input to encourage the model to work with either $1$ or $2$ views.
SRN's evaluation protocol is followed: in the 1-view case,
we use view $64$ as input,
and in the 2-view case, we use views $64$ and $128$.
\paragraph{Baselines}
For SRN~\cite{SRN}, we use the pretrained chair model from the public GitHub repository.
Note that SRN requires a test-time training step (latent inversion) to
generate result images; we apply latent inversion for 170000 iterations for both the 1-view and 2-view cases for chairs.
Recall that, due to a camera sampling bug, we use an updated car dataset provided by the SRN author.
Thus, we follow the instructions in the GitHub repository to train a model on the new dataset;
we train for 400000 iterations and apply latent inversion for 100000 iterations for each of the 1-view and 2-view cases.
Note the quantitative results we report are slightly lower than those in~\cite{ENR} in the single-view case, but substantially higher than those in the original SRN paper, which used the bugged renderings.
For the remaining baselines, we only report numbers from the relevant papers on the same task.
\subsubsection{Category-agnostic ShapeNet}
\label{sec:detail_multicat}
We train our model for 800000 iterations on the
entire training set, where rays are sampled from within a tight bounding box for
the first 400000 iterations. This took about 6 days on an RTX 2080Ti.
\paragraph{Evaluation protocol}
As discussed in the main paper, we evaluate on the test split
from~\cite{NMR} as provided by DVR~\cite{DVR}.
To ensure fairness, we sampled a random input view to encode for each object
and use this view for all baselines as well.
\paragraph{Baselines}
For DVR~\cite{DVR}, we use the pretrained 2D multiview-supervised model from the public GitHub and
the provided rendering code (in \texttt{render.py}).
For SoftRas~\cite{SoftRas}, we similarly use the pretrained ShapeNet model from the public GitHub repo
and obtain images using their renderer library.
Since SRN~\cite{SRN} did not originally evaluate in this setting, we train a model for this category-agnostic setting using the public code.
We train for 1 million iterations and perform latent inversion for 260000 iterations,
taking about 14 days on a Titan RTX in total.
\subsubsection{Generalization to Novel Categories}
We train our model for 680000 iterations across all instances of $3$ categories:
airplane, car, and chair.
Rays are sampled from within a tight bounding box
for the first 400000 iterations. This took about 5 days on a
GTX 1080Ti.
\paragraph{Evaluation protocol}
Since there are more than 25000 objects in the 10 remaining categories,
it would be computationally prohibitive to evaluate on all of them.
For our purposes, we sample
25\% of the objects from each category for testing, using the remaining for validation.
The protocol otherwise remains the same as in the category-agnostic model ($\S$~\ref{sec:detail_multicat}).
\paragraph{Baselines}
We train SRN and DVR as in $\S$~\ref{sec:detail_multicat}.
For DVR~\cite{DVR}
we turned off the use of visual hull depth for sampling,
since this information was not provided for all instances of the dataset shipped with DVR.
\subsubsection{Two-object Scenes}
We generate more complex synthetic scenes consisting of two ShapeNet chairs.
We subdivide ShapeNet chairs into 2715 training instances and 1101 test instances.
We generate scenes by randomly placing instances within each split around the origin, rotated randomly about each object's z-axis, and render $128\times128$ resolution images.
Per instance, we render 20 training views sampled uniformly on the hemisphere, and 50 testing views sampled on an Archimedean spiral, similar to the SRN protocol.
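The random placement described above could be sketched as follows (a hypothetical generator; \texttt{max\_offset} is an assumed placement range, not a value from the paper):

```python
import numpy as np

def random_chair_pose(rng, max_offset=1.0):
    """One object's placement in the two-chair scenes: a random rotation
    about the object's z-axis plus a random translation around the origin.
    """
    theta = rng.uniform(0.0, 2.0 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    pose = np.eye(4)  # 4x4 object-to-world transform
    pose[:3, :3] = [[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]]
    # Translate only in the ground plane (x, y).
    pose[:2, 3] = rng.uniform(-max_offset, max_offset, size=2)
    return pose

rng = np.random.default_rng(0)
scene = [random_chair_pose(rng) for _ in range(2)]
```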
We compare with SRN~\cite{SRN} as our baseline on this task, using the publicly available code.
We note that SRN performs prediction in a canonical object-centric coordinate system, and we used this version for this task.
We train a model for evaluation on this two-object dataset using one, two, and three input views.
We first train the model for 1 million iterations.
Then for each number of input views, we fix the set of input views per instance and perform latent inversion for 150,000 iterations.
\subsubsection{Sim2Real on Real Car Images}
We use car images from the Stanford Cars dataset~\cite{CarsDataset}.
PointRend~\cite{PointRend} is applied to the images
to obtain foreground masks and bounding boxes.
After removing the background with this mask, the image is then
translated and rescaled so that the center of the bounding box is at the center of the image
and the shorter side of the bounding box
is 1/4 of the image side length (128 pixels).
This normalization heuristic is motivated by the observation that the shorter side roughly corresponds to the height or width of the car, which is a more constant quantity than the length.
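As an illustration, this normalization can be sketched as an inverse mapping from output pixels to source pixels. The following is a minimal NumPy sketch; the function name, zero fill value, and nearest-neighbor resampling are our own simplifications, not the paper's implementation.

```python
import numpy as np

def normalize_car_image(img, bbox, out_size=128):
    """Sketch of the normalization heuristic: translate and rescale so the
    bounding-box center lands at the image center and the box's shorter
    side becomes 1/4 of the output side length. Nearest-neighbor
    resampling and the fill value are simplifications."""
    h, w = img.shape[:2]
    x0, y0, x1, y1 = bbox
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    scale = (out_size / 4.0) / min(x1 - x0, y1 - y0)
    # Inverse mapping: for each output pixel, find its source pixel.
    ys, xs = np.mgrid[0:out_size, 0:out_size]
    src_x = np.round((xs - out_size / 2.0) / scale + cx).astype(int)
    src_y = np.round((ys - out_size / 2.0) / scale + cy).astype(int)
    valid = (src_x >= 0) & (src_x < w) & (src_y >= 0) & (src_y < h)
    out = np.zeros((out_size, out_size) + img.shape[2:], img.dtype)
    out[valid] = img[src_y[valid], src_x[valid]]
    return out

img = np.zeros((200, 300, 3), np.uint8)
img[50:150, 100:260] = 255  # a 100x160 foreground box standing in for a car
out = normalize_car_image(img, (100, 50, 260, 150))
```

After this mapping, the bounding-box center (180, 100) lands at output pixel (64, 64) and the shorter box side spans 32 output pixels, i.e. 1/4 of 128.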
For evaluation, we set the camera pose to identity
and use the same sampling strategy and bounds as at train time for the single-category cars model.
\subsubsection{Real Images on DTU}
A single model is trained on the 88 training scenes.
We use exposure level $3$ only.
Note that while there are several views per scene with
incorrect exposure throughout the DTU dataset,
we did not remove them for training purposes.
At each training step, a random color jitter augmentation is applied equally to all views
of each object.
\paragraph{Dataset details}
While we solve a very different task
from MVSNet~\cite{MVSNet}
which predicts depth maps from short-baseline views
and is 2.5D supervised,
we considered using
the MVSNet~\cite{MVSNet} DTU split to conform to standards for training on DTU.
However, we found that the split contained effective overlap across the train/val/test sets,
making it a poor benchmark of cross-scene generalization,
as shown in Fig.~\ref{fig:dtu_bad}.
For our purposes, we created a different split to avoid this issue:
we use scans
8, 21, 30, 31, 34, 38, 40, 41, 45, 55, 63, 82, 103, 110, 114
for testing
and all other scans except
1, 2, 7, 25, 26, 27, 29, 39, 51, 54, 56, 57, 58, 73, 83, 111, 112, 113, 115, 116, 117
for training.
We downsampled all DTU images to $400 \times 300$
and adjusted the world scale of all scans by a factor of $1/300$.
\paragraph{Evaluation protocol}
We separately evaluate using 1, 3, 6, 9 informative input views and calculate image metrics with the remaining views.
Specifically, we selected views 25, 22, 28, 40, 44, 48, 0, 8, 13 for input, taking a prefix of this list
when fewer than $9$ views are used.
We exclude the views with bad exposure (they are 3, 4, 5, 6, 7, 16, 17, 18, 19, 20, 21, 36, 37, 38, 39) for testing.
\paragraph{Baselines}
We train a total of 60 NeRFs for comparison,
one for each scene and number of input views, using the original NeRF TensorFlow code.
Each NeRF is trained for 400000 iterations
with ray batch size 128, which takes about 14 hours on an RTX Titan, to ensure convergence.
We found that NeRF did not converge in 5 cases when given 6 or 9 views,
including in the case of smurf (scan 82) shown in the video. This is possibly due to
exposure variation in the DTU dataset.
For these scenes, we initialized the model to the trained weights for the same scene with $3$ fewer views
and trained for about $200000$ additional iterations to get reasonable results.
\subsection{Implementation Details}
Here we describe implementation details in the interest of reproducibility.
A general remark is that due to the high compute cost,
we did not spend significant effort to tune the architecture or
training procedure, and it is possible that
variations can perform better, or that smaller models may suffice.
\paragraph{Encoder $E$}
As briefly discussed in the main paper, we use a ResNet34 backbone and extract a feature pyramid by taking the feature maps
prior to the first pooling operation and after each of the first $3$ ResNet layers.
For a $H\times W$ image, the feature maps have shapes
\begin{compactenum}
\item
$64\times H/2\times W/2$
\item
$64\times H/4\times W/4$
\item
$128\times H/8 \times W/8$
\item
$256\times H/16\times W/16$
\end{compactenum}
These are upsampled bilinearly to $H/2 \times W/2$ and concatenated into a volume of size $512 \times H/2 \times W/2$.
For a $64\times64$ image, to avoid losing too much resolution, we skip the first pooling layer, so that
the image resolutions are at $1/2, 1/2, 1/4, 1/8$ of the input rather than $1/2, 1/4, 1/8, 1/16$.
We use ImageNet pretrained weights provided through PyTorch.
\paragraph{NeRF network $f$}
We employ a fully-connected ResNet architecture with $5$ ResNet blocks and width $512$, similar to that in \cite{DVR}.
To enable an arbitrary number of views as input, we aggregate across the source views after block $3$ using an average-pooling operation.
This architecture is illustrated in Fig.~\ref{fig:nerf_net_arch}.
We remark that
due to computational cost,
we did not tune this architecture very much in practice.
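The view-aggregation structure can be sketched as follows. This is a simplified PyTorch sketch with our own names; the real network also consumes view directions and positional encodings, and its exact layer layout differs in detail.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """A fully-connected residual block of the given width."""
    def __init__(self, width):
        super().__init__()
        self.fc1 = nn.Linear(width, width)
        self.fc2 = nn.Linear(width, width)

    def forward(self, x):
        return x + self.fc2(torch.relu(self.fc1(torch.relu(x))))

class MultiViewNeRFNet(nn.Module):
    """Sketch of the NeRF network: 5 fully-connected ResNet blocks of
    width 512, with an average-pool over the V source views after
    block 3. Input/output heads are simplified."""
    def __init__(self, d_in, width=512):
        super().__init__()
        self.inp = nn.Linear(d_in, width)
        self.f1 = nn.Sequential(*[ResBlock(width) for _ in range(3)])
        self.f2 = nn.Sequential(*[ResBlock(width) for _ in range(2)])
        self.out = nn.Linear(width, 4)  # RGB + sigma

    def forward(self, x):               # x: (V, N, d_in)
        h = self.f1(self.inp(x))        # per-view processing
        h = h.mean(dim=0)               # aggregate across source views
        return self.out(self.f2(h))     # (N, 4)

net = MultiViewNeRFNet(d_in=42)
y = net(torch.zeros(3, 10, 42))
```

In the single-view case $V = 1$, the mean is a no-op and the two halves collapse into a single ResNet, matching the remark in the figure caption.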
\begin{figure*}[t]
\centering
\includegraphics[width=0.8\linewidth]{supplemental/figures/nerfnet_arch.pdf}
\caption{\textbf{Multi-view NeRF Network Architecture.}
We use notation established in $\S$~\ref{sec:multi_cat} of the main paper, where $\gamma$ denotes a positional encoding with $6$ exponentially increasing frequencies. Each linear layer is followed by a ReLU activation.
Note that in the single-view
case, $f_1$ and $f_2$ can be considered a single ResNet $f = f_2 \circ f_1$.
}
\label{fig:nerf_net_arch}
\end{figure*}
\paragraph{Hierarchical volume sampling} To improve the sampling efficiency,
in practice, we also use \textit{coarse} and \textit{fine} NeRF networks $f_c, f_f$ as in the vanilla NeRF \cite{NeRF}, both of which share an identical architecture described above. Note that
the encoder $E$ is not duplicated.
More precisely, we use 64 stratified uniform and 16 importance samples,
and additionally take 16 fine samples with a normal distribution (SD 0.01) around the expected ray termination (i.e.~depth) from the coarse model, to further promote denser sampling near the surface.
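The three groups of samples can be sketched per ray as follows. Function and variable names are ours, and discrete resampling of the coarse depths stands in for NeRF's continuous inverse-CDF importance sampling.

```python
import torch

def sample_depths(near, far, coarse_w, coarse_z, n_unif=64, n_imp=16,
                  n_fine=16, sd=0.01):
    """Sketch of the per-ray sampling scheme: stratified uniform samples,
    importance samples drawn according to the coarse weights, and normal
    samples around the expected ray termination depth."""
    # 64 stratified uniform samples in [near, far]
    edges = torch.linspace(0.0, 1.0, n_unif + 1)
    t = edges[:-1] + (edges[1:] - edges[:-1]) * torch.rand(n_unif)
    z_unif = near + (far - near) * t
    # 16 importance samples: resample coarse depths by their weights
    probs = coarse_w / coarse_w.sum()
    z_imp = coarse_z[torch.multinomial(probs, n_imp, replacement=True)]
    # 16 depth-guided samples (SD 0.01) around the expected termination
    exp_depth = (probs * coarse_z).sum()
    z_fine = exp_depth + sd * torch.randn(n_fine)
    return torch.sort(torch.cat([z_unif, z_imp, z_fine]))[0]

z = sample_depths(0.8, 1.8, torch.rand(64) + 1e-3,
                  torch.linspace(0.8, 1.8, 64))
```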
\paragraph{NeRF rendering hyperparameters}
We use positional encoding $\gamma$ from NeRF for the spatial coordinates,
with exponentially increasing frequencies:
\begin{equation}
\gamma(\tf x) =
\begin{pmatrix}
\sin (2^0 \omega \tf x) \\
\cos (2^0 \omega \tf x)\\
\sin (2^1 \omega \tf x) \\
\cos (2^1 \omega \tf x)\\
\vdots \\
\sin (2^{L-1} \omega \tf x) \\
\cos (2^{L-1} \omega \tf x)
\end{pmatrix}
\end{equation}
Note that we do not apply the encoding to the view directions.
In all experiments, we set $L = 6$. We also
concatenate the input coordinates
alongside the encoding, as in the NeRF implementation.
$\omega$ is a scaling factor,
set (rather arbitrarily) to $1.5$ for the
single-category, category-agnostic ShapeNet experiments as well
as the DTU experiment,
and to $2.0$ for the multi-object experiment.
While the exponent base can be tuned, in practice we left it at $2$ as in NeRF.
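The encoding above, including the $\omega$ scaling and the concatenated raw coordinates, can be sketched as:

```python
import torch

def positional_encoding(x, L=6, omega=1.5):
    """gamma(x): sin/cos at L exponentially increasing frequencies scaled
    by omega, with the raw coordinates concatenated alongside."""
    freqs = omega * (2.0 ** torch.arange(L))           # (L,)
    angles = x[..., None, :] * freqs[:, None]          # (..., L, D)
    enc = torch.cat([torch.sin(angles), torch.cos(angles)], dim=-2)
    return torch.cat([x, enc.flatten(-2)], dim=-1)     # D + 2*L*D dims

out = positional_encoding(torch.zeros(3))              # 3 + 2*6*3 = 39 dims
```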
The sampling bounds were set manually for
each dataset.
They were $[1.25, 2.75]$ for ShapeNet chairs,
$[0.8, 1.8]$ for ShapeNet cars,
$[1.2, 4.0]$ for Kato et al.~\cite{NMR} renderings (category agnostic, novel category),
$[4.0, 9.0]$ for our rendered $2$-object dataset,
and $[0.1, 5.0]$ for DTU.
We use a white background color in NeRF to match the ShapeNet renderings, except in the DTU setup where a black background is used.
\paragraph{Model implementation}
We implement all models using the PyTorch~\cite{PyTorch} framework.
\section{Miscellaneous}
\subsection{ShapeNet Category Name Abbreviations}
Abbreviations used are listed below.
Also note that we interchangeably use
airplane and plane.\\
\begin{tabular}{@{}ll@{}}
\toprule
Abbreviation & Full Name \\ \midrule
cbnt. & cabinet \\
disp. & display \\
spkr. & speaker\\
\bottomrule
\end{tabular}
\vspace{1em}
\subsection{Things That Didn't Work}
In this section we discuss some things that were tried but didn't help in this project.
Future work may develop better solutions to the problems
they were intended to address.
\paragraph{Camera frustum filtering}
A slight complication in our pixel-aligned approach is that for a point outside
an input view's frustum, no image feature information flows to the query point.
The previous approach \cite{PIFu} simply zeros out the image feature vector for such points,
while in our main paper we repeat the boundary features.
During our experiments, we tried a slightly more sophisticated solution: for each point, we only aggregate the image features from views for which it is inside the frustum.
For points not visible from any input view, we cannot expect a reasonable prediction and hence simply set $\sigma$ to zero.
This process was implemented with a scatter operation.
However, we found that this only made results worse and created artifacts at the frustum boundaries.
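The aggregation variant we tried can be sketched as a masked mean over views. Names are ours, and this dense-mask sketch stands in for the scatter-based implementation.

```python
import torch

def masked_view_mean(feats, in_frustum):
    """Average per-point image features only over views whose frustum
    contains the point; also report which points are visible from no
    view so that sigma can be zeroed for them."""
    mask = in_frustum.unsqueeze(-1).float()       # (V, N, 1)
    count = mask.sum(dim=0).clamp(min=1.0)        # avoid divide-by-zero
    mean = (feats * mask).sum(dim=0) / count      # (N, C)
    visible = in_frustum.any(dim=0)               # (N,)
    return mean, visible

feats = torch.ones(3, 4, 8)                       # V=3 views, N=4 points
in_frustum = torch.tensor([[True, False, False, True],
                           [True, True,  False, False],
                           [False, True, False, True]])
mean, visible = masked_view_mean(feats, in_frustum)
```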
\paragraph{Alpha regularization}
In our early experiments, we incorporated the
freespace regularization of \cite{NeuralVolumes} to try to clear up ``cloudy'' artifacts common to NeRF representations.
While this appeared to help when overfitting to single scenes,
it did not make a difference with the main models in ablations and was therefore removed.
\section{Additional Results}
In this section, we provide additional qualitative and quantitative results
for several key experiments.
The reader is encouraged to
refer to the video and website for
a richer, animated presentation of qualitative results.
\subsection{Category-agnostic ShapeNet: Random Results}
We show \emph{randomly} sampled results
for the category-agnostic setting
($\S$~\ref{sec:multi_cat})
in Fig.~\ref{fig:random_1},
Fig.~\ref{fig:random_2}, and Fig.~\ref{fig:random_3}.
Specifically, we sample $6$ uniformly random objects
for each of the 13 largest ShapeNet categories and show comparisons to the baselines~\cite{SoftRas, DVR, SRN} as in the main paper.
Two random views are selected from the $24$ available views
to be source and target views respectively.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=1.0\linewidth]{supplemental/figures/random_1.pdf}
\end{center}
\caption{\textbf{Randomly sampled results.} Part 1.}
\label{fig:random_1}
\end{figure*}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=1.0\linewidth]{supplemental/figures/random_2.pdf}
\end{center}
\caption{\textbf{Randomly sampled results.} Part 2.}
\label{fig:random_2}
\vspace{-0.5em}
\end{figure*}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=1.0\linewidth]{supplemental/figures/random_3.pdf}
\end{center}
\caption{\textbf{Randomly sampled results.} Part 3.}
\label{fig:random_3}
\end{figure*}
\subsection{Generalization to novel categories}
In Table~\ref{tab:gen_cat_breakdown} we show a detailed breakdown of metrics by category on
unseen categories, as promised in the main paper.
\begin{table*}[t]
\vspace{2em}
\begin{center}
\input{tables/gencat}
\end{center}
\caption{
\textbf{Generalization to novel categories}.
Expanding on Table~\ref{tab:beyond_shapenet} in the main paper,
we show quantitative results with a
breakdown by category.
}
\label{tab:gen_cat_breakdown}
\vspace{2em}
\end{table*}
\subsection{Two-object Scenes}
We show samples from our rendered dataset in
Fig.~\ref{fig:sample_2obj}.
Table~\ref{tab:2obj_views} analyzes performance as more views become available, comparing our method with SRN.
We also show \textit{randomly sampled} results of scenes when given two input views in Figure~\ref{fig:2obj_random}.
We train our model using two random views, and give the model either one, two, or three fixed informative views during inference.
\begin{table*}
\centering
\input{tables/2obj_view}
\caption{\textbf{Performance on the synthetic two-object dataset with increasing number of views at test time.} Image quality metrics for SRN and our method as the number of views given at test time increases.}
\label{tab:2obj_views}
\end{table*}
\begin{figure*}
\vspace{1.5em}
\centering
\begin{subfigure}[t]{0.8\textwidth}
\centering
\includegraphics[width=\linewidth]{supplemental/figures/sample_2obj_train.pdf}
\caption{Train set}
\end{subfigure}%
\\
\begin{subfigure}[t]{0.8\textwidth}
\centering
\includegraphics[width=\linewidth]{supplemental/figures/sample_2obj_test.pdf}
\caption{Test set}
\end{subfigure}%
\caption{Randomly sampled images from the synthetic two-object scene dataset.}
\label{fig:sample_2obj}
\vspace{1em}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{supplemental/figures/2obj_random.pdf}
\caption{Randomly sampled results for two object scenes, when given two input views.}
\label{fig:2obj_random}
\end{figure*}
\subsection{DTU}
In Fig.~\ref{fig:dtu_extra}, we show quantitative results for each scene as well as renderings of
all test scenes not shown in the main paper.
\begin{figure*}[t]
\centering
\includegraphics[width=1.0\linewidth]{supplemental/figures/dtu_extra.pdf}
\caption{\textbf{Additional DTU results.} Views from the remaining 9 scenes are shown.}
\label{fig:dtu_extra}
\end{figure*}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=1.0\linewidth]{supplemental/figures/dtu_bad.png}
\end{center}
\caption{\textbf{DTU split overlap.} The
first and third scans (115, 119) are from the standard DTU training set from MVSNet,
while the second and fourth (114, 118) are from the test set.
In our split, highly similar scenes are either all placed in the same set or
discarded.}
\label{fig:dtu_bad}
\vspace{2em}
\end{figure*}
In Table~\ref{tab:dtu_table} we provide means and standard deviations of metrics for our method and NeRF on the DTU test set, with 1, 3, 6, 9 views. The PSNR here was plotted in
Fig.~\ref{fig:dtu_views} of the main paper.
\begin{table*}[t]
\vspace{2em}
\begin{center}
\input{tables/dtutab}
\end{center}
\caption{
\textbf{DTU aggregate metrics vs.\ NeRF}.
Expanding on Fig.~\ref{fig:dtu_views} in the main paper,
we compare our method to NeRF on
DTU test scenes quantitatively.
Recall higher is better for PSNR and SSIM, while lower is better for LPIPS.
Note that PixelNeRF is a feed-forward method,
while a NeRF was optimized
for 14 hours for each scene and set of input views.
}
\label{tab:dtu_table}
\vspace{2em}
\end{table*}
\section{Reproducibility}
\input{supplemental/impl_details}
\input{supplemental/exp_details}
\FloatBarrier
\section{Introduction}
As generative AI technologies continue to grow in power and utility, their use is becoming more mainstream. Generative models, including LLM-based foundation models~\cite{bommasani2021opportunities}, are being used for applications such as general Q\&A (e.g. ChatGPT\footnote{\url{http://chat.openai.com}}), software engineering assistance (e.g. Copilot\footnote{\url{http://copilot.github.com}}), task automation (e.g. Adept\footnote{\url{http://adept.ai}}), copywriting (e.g. Jasper.ai\footnote{\url{http://jasper.ai}}), and the creation of high-fidelity artwork (e.g. DALL-E 2~\cite{ramesh2022hierarchical}, Stable Diffusion~\cite{rombach2022high}, Midjourney\footnote{\url{http://midjourney.com}}). Given the explosion in popularity of these new kinds of generative applications, there is a need for guidance on how to design those applications to foster productive and safe use, in line with human-centered AI values~\cite{shneiderman2022human}.
Fostering productive use is a challenge, as revealed in a recent literature survey by \citet{campero2022test}. They found that many human-AI collaborative systems failed to achieve positive synergy -- the notion that a human-AI team is able to accomplish superior outcomes above either party working alone. In fact, some studies have found the opposite effect, that human-AI teams produced inferior results to either a human or AI working alone~\cite{clark2018creative, buccinca2021trust, jacobs2021machine, kleinberg2021humans}.
Fostering safe use is a challenge because of the potential risks and harms that stem from generative AI, either because of how the model was trained (e.g. \cite{weidinger2021ethical}) or because of how it is applied (e.g. \cite{houde2020business, muller2022drinking}).
In order to address these issues, we propose a set of design principles to aid the designers of generative AI systems. These principles are grounded in an environment of \textbf{generative variability}, indicating the two properties of generative AI systems that are inherently different from traditional discriminative\footnote{Our use of the term \emph{discriminative} is to indicate that the task conducted by the AI algorithm is one of determining to which class or group a data instance belongs; classification and clustering algorithms are examples of discriminative AI. Although our use of the term \emph{discriminative} may evoke imagery of human discrimination (e.g. via racial, religious, gender identity, genetic, or other lines), our use follows the scientific convention established in the machine learning community (see, e.g., \url{https://en.wikipedia.org/wiki/Discriminative_model})} AI systems: \textbf{generative}, because the aim of generative AI applications is to \emph{produce artifacts} as outputs, rather than \emph{determine decision boundaries} as discriminative AI systems do, and \textbf{variability}, indicating the fact that, for a given input, a generative system may produce a variety of possible outputs, many of which may be valid; in the discriminative case, it is expected that the output of a model does not vary for a given input.
We note that our principles are meant to generally apply to generative AI applications. Other sets of design principles exist for specific kinds of generative AI applications, including \citet{liu2022design}'s guidelines for engineering prompts for text-to-image models, and advice about one-shot prompts for generation of texts of different kinds \cite{greyling:engineering, reynolds:prompt, Denny:Conversing}.
There are also more general AI-related design guidelines \cite{acm2023words, amershi2019guidelines, apple, ibm2023reid, liao2020questioning}.
Six of our principles are presented as ``design for...'' statements, indicating the characteristics that designers should keep in mind when making important design decisions. One is presented as a ``design against...'' statement, directing designers to design \emph{against} potential harms that may arise from hazardous model outputs, misuse, potential for human displacement, or other harms we have not yet considered. The principles interact with each other in complex ways, schematically represented via overlapping circles in Figure~\ref{fig:teaser}. For example, the characteristic denoted in one principle (e.g. multiple outputs) can sometimes be leveraged as a strategy for addressing another principle (e.g. exploration). Principles are also connected by a user's aims, such as producing a singular artifact, seeking inspiration or creative ideas, or learning about a domain. They are also connected by design features or attributes of a generative AI application, such as the support for versioning, curation, or sandbox environments.
Our aim for these principles is threefold: (1) to provide the designers of generative AI applications with the language to discuss issues unique to \emph{generative} AI; (2) to provide strategies and guidance to help designers make important design decisions around how end users will interact with a generative AI application; and (3) to sensitize designers to the idea that generative AI applications may cause a variety of harms (likely inadvertently, but possibly intentionally). We hope these principles provide the human-AI co-creation community with a reasoned way to think through the design of novel generative AI applications.
\section{Design Principles for Generative AI Applications}
\label{sec:design-principles}
We developed seven design principles for generative AI applications based on recent research in the HCI and AI communities, specifically around human-AI co-creative processes. We conducted a literature review of research studies, guidelines, and analytic frameworks from these communities~\cite{acm2023words, amershi2019guidelines, apple, deterding2017mixed, grabe2022towards, ibm2023reid, liao2020questioning, maher2012computational, maher2022research, muller2020iccc, muller2022extending, lubart2005can, seeber2020machines, spoto2017mici}, which included experiments in human-AI co-creation~\cite{agarwal2020project, agarwal2020quality, kreminski2022evaluating, louie2020novice, sun2022investigating, weisz2021perfection, weisz2022better}, examinations of representative generative applications~\cite{Brown:GPT3, johnson2007measuring, kaiser2018generative, louie2020novice, metz:GPT-3, ramesh2022hierarchical, rombach2022high, ross2023programmers}, and a review of publications in recent workshops~\cite{geyer2021hai, muller2022genaichi, muller2022hcai, weisz2022hai}.
\subsection{The Environment: Generative Variability}
\label{sec:generative-variability}
Generative AI technologies present unique challenges for designers of AI systems compared to discriminative AI systems. First, generative AI is \emph{generative} in nature, which means its purpose is to produce artifacts as output, rather than decisions, labels, classifications, and/or decision boundaries. These artifacts may be composed of different types of media, such as text, images, audio, animations, or videos. Second, the outputs of a generative AI model are \emph{variable} in nature. Whereas discriminative AI aims for deterministic outcomes, generative AI systems may not produce the same output for a given input each time. In fact, \emph{by design}, they can produce multiple and divergent outputs for a given input, some or all of which may be satisfactory to the user. Thus, it may be difficult for users to achieve replicable results when working with a generative AI application.
Although the very nature of generative applications violates the common HCI principle that a system should respond consistently to a user's input (for critiques of this position, see \cite{aragon2022human, boyd2012critical, costanza2020design, d2020data, gitelman2013raw, muller2022drinking}), we take the position that this environment in which generative applications operate -- \emph{generative variability} -- is a core strength. Generative applications enable users to explore or populate a ``space'' of possible outcomes to their query. Sometimes, this exploration is explicit, as in the case of systems that enable latent space manipulations of an artifact. Other times, exploration of a space occurs when a generative model produces multiple candidate outputs for a given input, such as multiple distinct images for a given prompt \cite{ramesh2022hierarchical, rombach2022high} or multiple implementations of a source code program \cite{weisz2021perfection, weisz2022better}. Recent studies also show how users may improve their knowledge of a domain by working with a generative model and its variable outputs~\cite{weisz2021perfection, ross2023programmers}.
This concept of generative variability is crucially important for designers of generative AI applications to communicate to users. Users who approach a generative AI system without understanding its probabilistic nature and its capacity to produce varied outputs will struggle to interact with it in productive ways. The design principles we outline in the following sections -- designing for multiple outcomes \& imperfection, for exploration \& human control, and for mental models \& explanations -- are all rooted in the notion that generative AI systems are distinct and unique because they operate in an environment of generative variability.
\subsection{Design for Multiple Outputs}
\label{sec:design-multiple-outputs}
Generative AI technologies such as encoder-decoder models~\cite{sutskever2014sequence, cho2014learning}, generative adversarial networks~\cite{goodfellow2020generative}, and transformer models~\cite{vaswani2017attention} are probabilistic in nature and thus are capable of producing multiple, distinct outputs for a user's input. Designers therefore need to understand the extent to which these multiple outputs should be visible to users. Do users need the ability to annotate or curate? Do they need the ability to compare or contrast? How many outputs does a user need?
Understanding the user's task can help answer these questions. If the user's task is one of \emph{production}, in which the ultimate goal is to produce a single, satisfying artifact, then designs that help the user filter and visualize differences may be preferable. For example, a software engineer's goal is often to implement a method that performs a specific behavior. Tools such as Copilot take a user's input, such as a method signature or documentation, and provide a singular output. Contrarily, if the user's task is one of \emph{exploration}, then designs that help the user curate, annotate, and mutate may be preferable. For example, a software engineer may wish to explore a space of possible test cases for a code module. Or, an artist may wish to explore different compositions or styles to see a broad range of possibilities. Below we discuss a set of strategies for helping design for multiple outputs.
\subsubsection{Versioning}
Because of the randomness involved in the generative process, as well as other user-configurable parameters (e.g. a random seed, a temperature, or other types of user controls), it may be difficult for a user to produce exactly the same outcome twice. As a user interacts with a generative AI application and creates a set of outputs, they may find that they prefer earlier outputs to later ones. How can they recover or reset the state of the system to generate such earlier outputs? One strategy is to keep track of all of these outputs, as well as the parameters that produced them, by versioning them. Such versioning can happen manually (e.g. the user clicks a button to ``save'' their current working state) or automatically.
\subsubsection{Curation}
When a generative model is capable of producing multiple outputs, users may need tools to curate those outputs. Curation may include collecting, filtering, sorting, selecting, or organizing outputs (possibly from the versioned queue) into meaningful subsets or groups, or creating prioritized lists or hierarchies of outputs according to some subjective or objective criteria. For example, CogMol\footnote{\url{http://covid19-mol.mybluemix.net}} generates novel molecular compounds, which can be sorted by various properties, such as their molecular weight, toxicity, or water solubility~\cite{chenthamarakshan2020cogmol, chenthamarakshan2020targetspecific}. In addition, the confidence of the model in each output it produced may be a useful way to sort or rank outputs, although in some cases, model confidence scores may not be indicative of the quality of the model's output~\cite{agarwal2020quality}.
\subsubsection{Annotation}
When a generative model has produced a large number of outputs, users may desire to add marks, decorators, or annotations to outputs of interest. These annotations may be applied to the output itself (e.g. ``I like this'') or it may be applied to a portion or subset of the output (e.g. flagging lines of source code that look problematic and need review).
\subsubsection{Visualizing Differences}
In some cases, a generative model may produce a diverse set of distinct outputs, such as images of cats that look strikingly different from each other. In other cases, a generative model may produce a set of outputs for which it is difficult to discern differences, such as a source code translation from one language to another. In this case, tools that aid users in visualizing the similarities and differences between multiple outputs can be useful. Depending on the users' goals, they may seek to find the \textit{invariant} aspects across outcomes, such as identifying which parts of a source code translation were the same across multiple translations, indicating a confidence in its correctness. Or, users may prioritize the \textit{variant} aspects for greater creativity and inspiration. For example, Sentient Sketchbook~\cite{liapis2013sentient} is a video game co-creation system that displays a number of different metrics of the maps it generates, enabling users to compare newly-generated maps with their current map to understand how they differ.
\subsection{Design for Imperfection}
\label{sec:design-imperfect}
It is highly important for users to understand that the quality of a generative model's outputs will vary. Users who expect a generative AI application to produce exactly the artifact they desire will experience frustration when they work with the system and find that it often produces imperfect artifacts. By ``imperfect,'' we mean that the artifact itself may have imperfections, such as visual misrepresentations in an image, bugs or errors in source code, missing desired elements (e.g. ``an illustration of a bunny with a carrot'' fails to include a carrot), violations of constraints specified in the input prompt (e.g. ``write a 10 word sentence'' produces a much longer or shorter sentence), or even untruthful or misleading answers (e.g. a summary of a scientific topic that includes non-existent references~\cite{rose2022facebook}). But, ``imperfect'' can also mean ``doesn't satisfy the user's desire,'' such as when the user prompts a model and doesn't get back any satisfying outputs (e.g. the user didn't like any of the illustrations of a bunny with a carrot). Below we discuss a set of strategies for helping design for imperfection.
\subsubsection{Multiple Outputs}
Our previous design principle is also a strategy for handling imperfect outputs. If a generative model is allowed to produce multiple outputs, the likelihood that one of those outputs is satisfying to the user is increased. One example of this effect is in how code translation models are evaluated, via a metric called $pass@k$~\cite{roziere2020unsupervised, kulal2019spoc}. The idea is that the model is allowed to produce $k$ code translations for a given input, and if any of them pass a set of unit tests, then the model is said to have produced a correct translation. In this way, generating multiple outputs serves to mitigate the fact that the model's most-likely output may be imperfect. However, it is left up to the user to review the set of outputs and identify the one that is satisfactory; with multiple outputs that are very similar to each other, this task may be difficult~\cite{weisz2022better}, implying the need for a way to easily visualize differences.
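To make the metric concrete, here is a small sketch of the pass@$k$ idea. This is our own illustrative formulation, averaging over all size-$k$ subsets of the $n$ available samples; practical implementations typically use an equivalent closed-form unbiased estimator.

```python
from itertools import combinations

def pass_at_k_empirical(sample_passed, k):
    """Chance that at least one of k outputs drawn from the available
    samples passes the unit tests, averaged over all size-k subsets.
    sample_passed: list of booleans, one per generated sample."""
    subsets = list(combinations(sample_passed, k))
    return sum(any(s) for s in subsets) / len(subsets)

# 1 of 4 sampled translations passes the tests; allowing k=2 attempts:
score = pass_at_k_empirical([False, True, False, False], 2)
print(score)  # 0.5
```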
\subsubsection{Evaluation \& Identification}
Given that generative models may not produce perfect (or perfectly satisfying) outputs, they may still be able to provide users with a signal about the quality of its output, or indicate parts that require human review. As previously discussed, a model's per-output confidence scores may be used (with care) to indicate the quality of a model's output. Or, domain-specific metrics (e.g. molecular toxicity, compiler errors) may be useful indicators to evaluate whether an artifact achieved a desirable level of quality. Thus, evaluating the quality of generated artifacts and identifying which portions of those artifacts may contain imperfections (and thus require human review, discussed further in \citet{weisz2021perfection}) can be an effective way for handling imperfection.
\subsubsection{Co-Creation}
User experiences that allow for co-creation, in which both the user and the AI can edit a candidate artifact, will be more effective than user experiences that assume or aim for the generative model to produce a perfect output. Allowing users to edit a model's outputs provides them with the opportunity to find and fix imperfections, and ultimately achieve a satisfactory artifact. One example of this idea is Github Copilot~\cite{web:copilot}, which is embedded in the VSCode IDE. When Copilot produces an imperfect block of source code, developers are able to edit it \emph{right in context} without any friction. By contrast, tools like Midjourney or Stable Diffusion only produce a gallery of images to choose from; editing those images requires the user to shift to a different environment (e.g. Photoshop).
\subsubsection{Sandbox / Playground Environment}
A sandbox or playground environment ensures that when a user interacts with a generated artifact, their interactions (such as edits, manipulations, or annotations) do not impact the larger context or environment in which they are working. Returning to the example of Github Copilot, since it is situated inside a developer's IDE, code it produces is directly inserted into the working code file. Although this design choice enables co-creation, it also poses a risk that imperfect code is injected into a production code base. A sandbox environment that requires users to explicitly copy and paste code in order to commit it to the current working file may guard against the accidental inclusion of imperfect outputs in a larger environment or product.
\subsection{Design for Human Control}
\label{sec:design-control}
Keeping humans in control of AI systems is a core tenet of human-centered AI~\cite{shneiderman2020human, shneiderman2021human, shneiderman2022human}. Providing users with controls in generative applications can improve their experience by increasing their efficiency, comprehension, and ownership of generated outcomes \cite{louie2020novice}. But, in co-creative contexts, there are multiple ways to interpret what kinds of ``control'' people need. We identify three kinds of controls applicable to generative AI applications.
\subsubsection{Generic Controls}
One aspect of control relates to the exploration of a design space or range of possible outcomes (as discussed in Section~\ref{sec:design-explore}). Users need appropriate controls in order to drive their explorations, such as control over the number of outputs produced from an input or the amount of variability present in the outputs. We refer to these kinds of controls as \emph{generic controls}, as they are applicable to any particular generative technology or domain. As an example, some generative projects may involve a ``lifecycle'' pattern in which users benefit from seeing a great diversity of outputs in early stages of the process in order to search for ideas, inspirations, or directions. Later stages of the project may focus on a smaller number (or singular) output, requiring controls that specifically operate on that output. Many generative algorithms include a user-controllable parameter called \emph{temperature}. A low temperature setting produces outcomes that are very similar to each other; conversely, a high temperature setting produces outcomes that are very dissimilar to each other. In the ``lifecycle'' model, users may first set a high temperature for increased diversity, and then reduce it when they wish to focus on a particular area of interest in the output space. This effect was observed in a study of a music co-creation tool, in which novice users dragged temperature control sliders to the extreme ends to explore the limits of what the AI could generate~\cite{louie2020novice}.
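The mechanics of a temperature control can be illustrated with a minimal sampling sketch. This is not the implementation of any particular system; the logits, random seed, and three-option vocabulary are purely illustrative, and production models apply the same idea over vocabularies of tens of thousands of tokens.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Sample an index from `logits` after temperature scaling.

    Low temperatures sharpen the distribution (outputs become similar);
    high temperatures flatten it (outputs become diverse).
    """
    scaled = [x / temperature for x in logits]
    # Numerically stable softmax over the scaled logits.
    m = max(scaled)
    weights = [math.exp(x - m) for x in scaled]
    total = sum(weights)
    probs = [w / total for w in weights]
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]

rng = random.Random(0)
logits = [2.0, 1.0, 0.1]  # illustrative scores for three options
low = [sample_with_temperature(logits, 0.1, rng) for _ in range(200)]
high = [sample_with_temperature(logits, 10.0, rng) for _ in range(200)]
```

At a temperature of 0.1, nearly every sample is the top-scoring option, while at 10.0 the samples spread across all three options, mirroring the ``lifecycle'' pattern of first widening and then narrowing the search.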
\subsubsection{Technology-specific Controls}
Other types of controls will depend on the particular generative technology being employed. Encoder-decoder models, for example, often allow users to perform latent space manipulations of an artifact in order to control semantically-meaningful attributes. For example, \citet{liu2021neurosymbolic} demonstrate how semantic sliders can be used to control attributes of 3D models of animals, such as the animal's torso length, neck length, and neck rotation. Transformer models use a temperature parameter to control the amount of randomness in the generation process~\cite{vonplaten2020how}. Natural language prompting, and the emerging discipline of prompt engineering~\cite{liu2022design}, provide additional ways to tune or tweak the outputs of large language models. We refer to these kinds of controls as \emph{technology-specific controls}, as the controls exposed to a user in a user interface will depend upon the particular generative AI technology used in the application.
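As a sketch of how an encoder-decoder ``semantic slider'' might operate under the hood, the snippet below shifts a latent vector along a precomputed attribute direction. The direction vector, the ``neck length'' attribute, and all numeric values are hypothetical; a real system would learn the direction from data and decode the adjusted vector back into an artifact.

```python
def apply_semantic_slider(latent, direction, amount):
    """Shift a latent code along a semantic direction; `amount` is the
    slider value chosen by the user (0 leaves the artifact unchanged)."""
    return [z + amount * d for z, d in zip(latent, direction)]

# Illustrative 4-dimensional latent code and attribute direction.
latent = [0.5, -1.2, 0.0, 2.0]
neck_length_direction = [0.5, 0.0, -0.5, 0.0]
adjusted = apply_semantic_slider(latent, neck_length_direction, 2.0)
```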
\subsubsection{Domain-specific Controls}
Some types of user controls will be domain-specific, dependent on the type of artifact being produced. For example, generative models that produce molecules as output might be controlled by having the user specify desired properties such as molecular weight or water solubility; these types of constraints might be propagated to the model itself (e.g. expressed as a constraint in the encoder phase), or they may simply act as a filter on the model's output (e.g. hide anything from the user that doesn't satisfy the constraints). In either case, the control itself is dependent on the fact that the model is producing a specific kind of artifact, such as a molecule, and would not logically make sense for other kinds of artifacts in other domains (e.g. how would you control the water solubility for a text-to-image model?). Thus, we refer to these types of controls, independent of how they are implemented, as \emph{domain specific}. Other examples of domain-specific controls include the reading level of a text, the color palette or artistic style of an image, or the run time or memory efficiency of source code.
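One way such a domain-specific control could be implemented as an output filter is sketched below. The candidate structure, the property names, and the constraint ranges are all illustrative assumptions rather than the API of any real system.

```python
def filter_by_constraints(candidates, constraints):
    """Keep only candidates whose properties fall inside every
    user-specified (minimum, maximum) range."""
    def satisfies(props):
        return all(
            lo <= props.get(name, float("nan")) <= hi  # missing -> fail
            for name, (lo, hi) in constraints.items()
        )
    return [c for c in candidates if satisfies(c["properties"])]

# Hypothetical generated molecules with illustrative property values.
candidates = [
    {"smiles": "CCO", "properties": {"mol_weight": 46.07, "log_p": -0.31}},
    {"smiles": "c1ccccc1", "properties": {"mol_weight": 78.11, "log_p": 2.13}},
]
kept = filter_by_constraints(candidates, {"mol_weight": (0.0, 60.0)})
```

The same constraints could instead be pushed into the model itself (e.g. as conditions in the encoder phase); the filter form shown here only affects what the user sees.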
\subsection{Design for Exploration}
\label{sec:design-explore}
Because users are working in an environment of generative variability, they will need some way to ``explore'' or ``navigate'' the space of potential outputs in order to identify one (or more) that satisfies their needs. Below we discuss a set of strategies that help designers support this kind of exploration.
\subsubsection{Multiple Outputs}
The ability for a generative model to produce multiple outputs (Section~\ref{sec:design-multiple-outputs}) is an enabler of exploration. Returning to the bunny and carrot example, an artist may wish to explore different illustrative styles and prompt (and re-prompt) the model for additional candidates of ``a bunny with a carrot'' in various kinds of styles or configurations. Or, a developer can explore different ways to implement an algorithm by prompting (and re-prompting) a model to produce implementations that possess different attributes (e.g. ``implement this using recursion,'' ``implement this using iteration,'' or ``implement this using memoization''). In this way, a user can get a sense of the different possibilities the model is capable of producing.
\subsubsection{Control}
Depending on the specific technical architecture used by the generative application, there may be different ways for users to control it (Section~\ref{sec:design-control}). No matter the specific mechanisms of control, \emph{providing controls} to a user provides them with the ability to interactively work with the model to explore the space of possible outputs for their given input.
\subsubsection{Sandbox / Playground Environment}
A sandbox or playground environment can enable exploration by providing a separate place in which new candidates can be explored, without interfering with a user's main working environment. For example, in a project using Copilot, \citet{cheng2022would} suggest providing, ``a sandbox mechanism to allow users to play with the prompt in the context of their own project.''
\subsubsection{Visualization}
One way to help users understand the space in which they are exploring is to explicitly visualize it for them. \citet{kreminski2022evaluating} introduce the idea of expressive range coverage analysis (ERCA) in which a user is shown a visualization of the ``range'' of possible generated artifacts across a variety of metrics. Then, as users interact with the system and produce specific artifact instances, those instances are included in the visualization to show how much of the ``range'' or ``space'' was explored by the user.
\subsection{Design for Mental Models}
\label{sec:design-mental-models}
Users form mental models when they work with technological systems \cite{scheutz2017framework, fiore2001group, mathieu2000influence}. These models represent the user's understanding of how the system works and how to work with it effectively to produce the outcomes they desire. Due to the environment of generative variability, generative AI applications will pose new challenges to users because these applications may violate existing mental models of how computing systems behave (i.e. in a deterministic fashion). Therefore, we recommend designing to support users in creating accurate mental models of generative AI applications in the following ways.
\subsubsection{Orientation to Generative Variability}
Users may need a general introduction to the concept of generative AI. They should understand that the system may produce multiple outputs for their query (Section~\ref{sec:design-multiple-outputs}), that those outputs may contain flaws or imperfections (Section~\ref{sec:design-imperfect}), and that their effort may be required to collaborate \emph{with} the system in order to produce desired artifacts via various kinds of controls (Section~\ref{sec:design-control}).
\subsubsection{Role of the AI}
Research in human-AI interaction suggests that users may view an AI application as filling a role such as an assistant, coach, or teammate~\cite{seeber2020machines}. In a study of video game co-creation, \citet{guzdial2019friend} found participants to ascribe roles of friend, collaborator, student, and manager to the AI system. Recent work by \citet{ross2023programmers} examined software engineers' role orientations toward a programming assistant and found that people viewed the assistant with a tool orientation, but interacted with it as if it were a social agent. Clearly establishing the role of a generative AI application in a user's workflow, as well as its level of autonomy (e.g. \cite{fitts1951human, sheridan1978human, parasuraman2000model, horvitz1999principles}), will help users better understand how to interact effectively with it. Designers can reason about the role of their application by answering questions such as: Is it a tool or a partner? Does it act proactively, or does it only respond to the user? Does it make changes to an artifact directly, or does it simply make recommendations to the user?
\subsection{Design for Explanations}
\label{sec:design-xai}
Generative AI applications will be unfamiliar and possibly unusual to many users. They will want to know what the application can (and cannot) do, how well it works, and how to work with it effectively. Some users may even wish to understand the technical details of how the underlying generative AI algorithms work, although these details may not be necessary to work effectively with the model (as discussed in \cite{weisz2021perfection}).
In recent years, the explainable AI (XAI) community has made tremendous progress at developing techniques for explaining how AI systems work~\cite{arya2020ai, ehsan2022human, liao2020questioning, liao2021introduction, simkute2022xai}. Much of the work in XAI has focused on discriminative algorithms: how they generally make decisions (e.g. via interpretable models~\cite[Chapter 5]{molnar2020interpretable} or feature importance~\cite[Section 8.5]{molnar2020interpretable}), and why they make a decision in a specific instance (e.g. via counterfactual explanations~\cite[Section 9.3]{molnar2020interpretable}).
Recent work in human-centered XAI (HCXAI) has emphasized designing explanations that cater to human knowledge and human needs~\cite{ehsan2022human}. This work grew out of a general shift toward human-centered data science~\cite{aragon2022human}, in which explanations are intended not for a technical user (a data scientist), but for an end user who might be impacted by a machine learning model.
In the case of generative AI, recent work has begun to explore the needs for explainability. \citet{sun2022investigating} explored explainability needs of software engineers working with a generative AI model for various types of use cases, such as code translation and autocompletion. They identified a number of types of questions that software engineers had about the generative AI, its capabilities, and its limitations, indicating that explainability is an important feature for generative AI applications. They also identified several gaps in existing explainability frameworks stemming from the \emph{generative} nature of the AI system, indicating that existing XAI techniques may not be sufficient for generative AI applications. Thus, we make the following recommendations for how to design for explanations.
\subsubsection{Calibrate Trust by Communicating Capabilities and Limitations}
Because of the inherent imperfection of generative AI outputs, users would be well-served if they understood the limitations of these systems~\cite{muller2022forgetting, pinanez2021expose}, allowing them to \emph{calibrate} their trust in terms of what the application can and cannot do~\cite{pinanez2022breakdowns}. When these kinds of imperfections (Section~\ref{sec:design-imperfect}) are not signaled, users of co-creative tools may mistakenly blame themselves for shortcomings of generated artifacts~\cite{louie2020novice}, and users in Q \& A use cases may be shown deceptive misconceptions and harmful falsehoods as objective answers~\cite{lin2021truthfulqa}. One way to communicate the capabilities of a generative AI application is to show examples of what it can do. For example, Midjourney provides a public discussion space to orient new users and show them what other users have produced with the model. This space not only shows the outputs of the model (e.g. images), but also the textual prompts that produced those images. In this way, users can more quickly come to understand how different prompts influence the application's output. To communicate limitations, systems like ChatGPT present users with modal screens that describe the system's known shortcomings.
\subsubsection{Use Explanations to Create and Reinforce Accurate Mental Models}
\citet{weisz2021perfection} explored how a generative model's confidence could be surfaced in a user interface. Working with a transformer model on a code translation task, they developed a prototype UI that highlighted tokens in the translation that the model was not confident in. In their user study, they found that those highlights also served as explanations for how the model worked: users came to understand that each source code token was chosen probabilistically, and that the model had considered other alternatives. This design transformed an algorithmic weakness (imperfect output) into a resource for users to understand how the algorithm worked, and ultimately, to control its output (by showing users where they might need to make changes).
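A minimal sketch of this kind of confidence surfacing follows, assuming the application can obtain a per-token probability from the model. The tokens, confidence values, and review threshold below are illustrative, not taken from any particular system.

```python
def flag_low_confidence(tokens, confidences, threshold=0.5):
    """Pair each generated token with a flag marking whether its model
    confidence falls below the review threshold (a UI could highlight
    flagged tokens for the user to inspect)."""
    return [
        {"token": t, "confidence": c, "needs_review": c < threshold}
        for t, c in zip(tokens, confidences)
    ]

# Hypothetical per-token confidences for one generated line of code.
result = flag_low_confidence(["return", "x", "%", "y"],
                             [0.98, 0.91, 0.42, 0.88])
```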
\subsection{Design Against Harms}
\label{sec:design-against-harms}
The use of AI systems -- including generative AI applications -- may unfortunately lead to diverse forms of harm, especially for people in vulnerable situations. Much work in AI ethics communities has identified how discriminative AI systems may perpetuate harms such as the denial of personhood or identity~\cite{costanza2020design, kantayya2020coded, spiel2021they}, the deprivation of liberty or children~\cite{lyn2020risky, saxena2021framework}, and the erasure of persons, cultures, or nations through data silences~\cite{muller2022forgetting}. We identify three types of potential harms, some of which are unique to the generative domain, and others which represent existing risks of AI applications that may manifest in new ways.
Our aim in this section is to sensitize designers to the potential risks and harms that generative AI systems may pose. We do not prescribe solutions to address these risks, in part because it is an active area of research to understand how these kinds of risks could be mitigated. Risk identification, assessment, and mitigation is a sociotechnical problem involving computing resources, humans, and cultures. Even with our focus on the design of generative applications, an analysis of harms that is limited to design concepts may blur into technosolutionism~\cite{lindtner2016reconstituting, madaio2020co, resseguier2021ethics}.
We do posit that \emph{human-centered} approaches to generative AI design are a useful first step, but must be part of a larger strategy to understand who are the direct and indirect stakeholders of a generative application~\cite{friedman2019value, hendry2021value}, and to work directly with those stakeholders to \emph{identify} harms, \emph{understand} what are their differing priorities and value tensions~\cite{miller2007value}, and \emph{negotiate} issues of culture, policy, and (yes) technology to meet these diverse challenges (e.g., \cite{denzin2008handbook, disalvo2022design, hayes2014knowing}).
\subsubsection{Hazardous Model Outputs}
Generative AI applications may produce artifacts that cause harm. In an integrative survey paper, \citet{weidinger2021ethical} list six types of potential harms of large language models, three of which regard the harms that may be caused by the model's output:
\begin{itemize}
\item \textbf{Discrimination, Exclusion, and Toxicity}. Generative models may produce outputs that promote discrimination against certain groups, exclude certain groups from representation, or produce toxic content. Examples include text-to-image models that fail to produce ethnically diverse outputs for a given input (e.g. a request for images of doctors produces images of male, white doctors~\cite{cho2022dall}), or language models that produce inappropriate language such as swear words, hate speech, or offensive content~\cite{acm2023words, ibm2023reid}.
\item \textbf{Information Hazards}. Generative models may inadvertently leak private or sensitive information from their training data. For example, \citet{carlini2021extracting} found that strategically prompting GPT-2 revealed an individual's full name, work address, phone number, email, and fax number. Additionally, larger models may be more vulnerable to these types of attacks~\cite{carlini2021extracting, carlini2022quantifying}.
\item \textbf{Misinformation Harms}. Generative models may produce misinformation in response to a user's query. \citet{lin2021truthfulqa} found that GPT-3 can provide false answers that mimic human falsehoods and misconceptions, such as ``coughing can help stop a heart attack'' or ``[cold weather] tells us that global warming is a hoax''. \citet{singhal2022large} caution against the tendency of LLMs to hallucinate references, especially if consulted for medical decisions. \citet{albrecht2022despite} claim that LLMs have few defenses against adversarial attacks when advising about ethical questions. The Galactica model was found to hallucinate non-existent scientific references~\cite{heaven_2022}, and Stack Overflow has banned responses sourced from ChatGPT due to their high rate of incorrect, yet plausible, responses~\cite{vincent_2022}.
\end{itemize}
In addition to those harms, a generative model's outputs may be hazardous in other ways as well.
\begin{itemize}
\item \textbf{Deceit, Impersonation, and Manipulation}. Generative algorithms can be used to create false records or ``deep fakes'' (e.g., \cite{houde2020business, meskys2020regulating}), to impersonate others (e.g. \cite{stupp2019fraudsters}), or to distort information into politically-altered content~\cite{yang2019unsupervised}. In addition, they may manipulate users who believe that they are chatting with another human rather than with an algorithm, as in the case of an unreviewed ChatGPT ``experiment'' in which at least 4,000 people seeking mental health support were connected to a chatbot rather than a human counselor~\cite{morris2023we}.
\item \textbf{Copyright, Licenses, and Intellectual Property}. Generative models may have been trained on data protected by regulations such as the GDPR, which prohibits the re-use of data beyond the purposes for which it was collected. In addition, large language models have been referred to as ``stochastic parrots'' due to their ability to reproduce data that was used during their training~\cite{bender2021dangers}. One consequence of this effect is that the model may produce outputs that incorporate or remix materials that are subject to copyright or intellectual property protections~\cite{franceschelli2022copyright, hristov2016artificial, murray2022generative}. For example, the Codex model, which produces source code as output, may (re-)produce source code that is copyrighted or subject to a software license, or that was openly shared under a creative commons license that prohibits commercial re-use (e.g., in a pay-to-access LLM). Thus, the use of a model's outputs in a project may cause that project to violate copyright protections, or subject that project to a restrictive license (e.g. GPL). As of this writing, there is a lawsuit against GitHub, Microsoft, and OpenAI on alleged copyright violations in the training of Codex~\cite{web:githubcopilotlitigation}.
\end{itemize}
\subsubsection{Misuse}
\citet{weidinger2021ethical} describe how generative AI applications may be misused in ways unanticipated by the creators of those systems. Examples include making disinformation cheaper and more effective, facilitating fraud and scams, assisting code generation for cyberattacks, or conducting illegitimate surveillance and censorship. In addition to these misuses, \citet{houde2020business} also identify business misuses of generative AI applications such as facilitating insurance fraud and fabricating evidence of a crime. Although designers may not be able to prevent users from intentionally misusing their generative AI applications, there may be preventative measures that make sense for a given application domain. For example, output images may be watermarked to indicate they were generated by a particular model, blocklists may be used to disallow undesirable words in a textual prompt, or multiple people may be required to review or approve a model's outputs before they can be used.
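For instance, the prompt blocklist mentioned above might be sketched as a whole-word, case-insensitive check. The blocklisted term here is a placeholder, and real deployments would pair such lists with more robust moderation techniques.

```python
import re

def prompt_allowed(prompt, blocklist):
    """Return False if the prompt contains any blocklisted term as a
    whole word, ignoring case; otherwise return True."""
    for term in blocklist:
        if re.search(r"\b" + re.escape(term) + r"\b", prompt, re.IGNORECASE):
            return False
    return True

blocklist = ["badword"]  # placeholder term
```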
\subsubsection{Human Displacement}
One consequence of the large-scale deployment of generative AI technologies is that they may come to \emph{replace}, rather than \emph{augment}, human workers. Such concerns have been raised in related areas, such as the use of automated AI technologies in data science~\cite{wang2019human, wang2021towards}. \citet{weidinger2021ethical} specifically discuss the potential economic harms and inequalities that may arise as a consequence of widespread adoption of generative AI. If a generative model is capable of producing high-fidelity outputs that rival (or even surpass) what can be created by human effort, are the humans necessary anymore? Contemporary fears of human displacement by generative technologies are beginning to manifest in mainstream media, such as in the case of illustrators' concerns that text-to-image models such as Stable Diffusion and Midjourney will put them out of a job~\cite{wilkins2022will}. We urge designers to find ways to design generative AI applications that \emph{enhance} or \emph{augment} human abilities, rather than applications that aim to \emph{replace} human workers. Copilot serves as one example of a tool that clearly enhances the abilities of a software engineer: it operates on the low-level details of a source code implementation, freeing up software engineers to focus more of their attention on higher-level architectural and system design issues.
\section{Discussion}
\label{sec:discussion}
\subsection{Designing for User Aims}
Users of generative AI applications may have varied aims or goals in using those systems. Some users may be in pursuit of \emph{perfecting a singular artifact}, such as a method implementation in a software program. Other users may be in \emph{pursuit of inspiration or creative ideas}, such as when exploring a visual design space. As a consequence of working with a generative AI application, users may also \emph{enhance their own learning or understanding} of the domain in which they are operating, such as when a software engineer learns something new about a programming language from the model's output. Each of these aims can be supported by our design principles, as well as help designers determine the appropriate strategy for addressing the challenges posed by each principle.
To support artifact production, designers ought to carefully consider how to manage a model's multiple, imperfect outputs. Interfaces ought to support users in curating, annotating, and mutating artifacts to help users refine a singular artifact. The ability to version artifacts, or show a history of artifact edits, may also be useful to enable users to revisit discarded options or undo undesirable modifications. For cases in which users seek to produce one ``ideal'' artifact that satisfies some criteria, controls that enable them to co-create with the generative tool can help them achieve their goal more efficiently, and explanations that signal or identify imperfections can tell them how close or far they are from the mark.
To support inspiration and creativity, designers also ought to provide adequate controls that enable users to explore a design space of possibilities~\cite{kreminski2022evaluating, morris2022design}. Visualizations that represent the design space can also be helpful, as they can show which parts of the space the user has and has not explored, enabling them to venture into its unexplored regions. Tools that help users manage, curate, and filter the different outputs created during their explorations can be extremely helpful, such as a digital mood board for capturing inspiring model outputs.
Finally, to support learning how to effectively interact with a generative AI application, designers ought to help users create accurate mental models~\cite{kollmansberger2010helping} through explanations~\cite{arya2020ai, ehsan2022human, liao2020questioning, liao2021introduction, simkute2022xai}. Explanations can help answer general questions such as what a generative AI application is capable or not capable of generating, how the model's controls impact its output, and how the model was trained and the provenance of its training data. They can also answer questions about a specific model output, such as how confident the model was in that output, which portions of that output might need human review or revision, how to adjust or modify the input or prompt to adjust properties of the output, or what other options or alternatives exist for that output.
\subsection{The Importance of Value-Sensitive Design in Mitigating Potential Harms}
Designers need to be sensitive to the potential harms that may be caused by the rapid maturation and widespread adoption of generative AI technologies. Although sociotechnical means for mitigating these harms have yet to be developed, we recommend that designers use a Value Sensitive Design approach~\cite{friedman2019value, hendry2021value} when reasoning about how to design generative AI applications. By clearly identifying the different stakeholders and impacted parties of a generative AI application, and explicitly enumerating their values, designers can make more reasoned judgments about how those stakeholders might be impacted by hazardous model outputs, model misuse, and issues of human displacement.
\section{Limitations and Future Work}
Generative AI applications are still in their infancy, and new kinds of co-creative user experiences are emerging at a rapid pace. Thus, we consider these principles to be in their infancy as well, and it is possible that other important design principles, strategies, and/or user aims have been overlooked. In addition, although these principles can provide helpful guidance to designers in making specific design decisions, they need to be validated in real-world settings to ensure their clarity and utility.
\section{Conclusion}
We present a set of seven design principles for generative AI applications. These principles are grounded in an environment of generative variability, the key characteristics of which are that a generative AI application will generate artifacts as outputs, and those outputs may be varied in nature (e.g. of varied quality or character). The principles focus on designing for multiple outputs and the imperfection of those outputs, designing for exploration of a space or range of possible outputs and maintaining human control over that exploration, and designing to establish accurate mental models of the generative AI application via explanations. We also urge designers to design \emph{against} the potential harms that may be caused by hazardous model output (e.g. the production of inappropriate language or imagery, the reinforcement of existing stereotypes, or a failure to inclusively represent different groups), by misuse of the model (e.g. by creating disinformation or fabricating evidence), or by displacing human workers (e.g. by designing for the \emph{replacement} rather than the \emph{augmentation} of human workers). We envision these principles to help designers make reasoned choices as they create novel generative AI applications.
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
As generative AI technologies continue to grow in power and utility, their use is becoming more mainstream. Generative models, including LLM-based foundation models~\cite{bommasani2021opportunities}, are being used for applications such as general Q\&A (e.g. ChatGPT\footnote{\url{http://chat.openai.com}}), software engineering assistance (e.g. Copilot\footnote{\url{http://copilot.github.com}}), task automation (e.g. Adept\footnote{\url{http://adept.ai}}), copywriting (e.g. Jasper.ai\footnote{\url{http://jasper.ai}}), and the creation of high-fidelity artwork (e.g. DALL-E 2~\cite{ramesh2022hierarchical}, Stable Diffusion~\cite{rombach2022high}, Midjourney\footnote{\url{http://midjourney.com}}). Given the explosion in popularity of these new kinds of generative applications, there is a need for guidance on how to design those applications to foster productive and safe use, in line with human-centered AI values~\cite{shneiderman2022human}.
Fostering productive use is a challenge, as revealed in a recent literature survey by \citet{campero2022test}. They found that many human-AI collaborative systems failed to achieve positive synergy -- the notion that a human-AI team is able to accomplish superior outcomes above either party working alone. In fact, some studies have found the opposite effect, that human-AI teams produced inferior results to either a human or AI working alone~\cite{clark2018creative, buccinca2021trust, jacobs2021machine, kleinberg2021humans}.
Fostering safe use is a challenge because of the potential risks and harms that stem from generative AI, either because of how the model was trained (e.g. \cite{weidinger2021ethical}) or because of how it is applied (e.g. \cite{houde2020business, muller2022drinking}).
In order to address these issues, we propose a set of design principles to aid the designers of generative AI systems. These principles are grounded in an environment of \textbf{generative variability}, indicating two properties of generative AI systems that are inherently different from traditional discriminative\footnote{Our use of the term \emph{discriminative} is to indicate that the task conducted by the AI algorithm is one of determining to which class or group a data instance belongs; classification and clustering algorithms are examples of discriminative AI. Although our use of the term \emph{discriminative} may evoke imagery of human discrimination (e.g. via racial, religious, gender identity, genetic, or other lines), our use follows the scientific convention established in the machine learning community (see, e.g., \url{https://en.wikipedia.org/wiki/Discriminative_model}).} AI systems: \textbf{generative}, because the aim of generative AI applications is to \emph{produce artifacts} as outputs, rather than \emph{determine decision boundaries} as discriminative AI systems do, and \textbf{variability}, indicating that, for a given input, a generative system may produce a variety of possible outputs, many of which may be valid; in the discriminative case, the output of a model is expected not to vary for a given input.
We note that our principles are meant to generally apply to generative AI applications. Other sets of design principles exist for specific kinds of generative AI applications, including \citet{liu2022design}'s guidelines for engineering prompts for text-to-image models, and advice about one-shot prompts for generation of texts of different kinds \cite{greyling:engineering, reynolds:prompt, Denny:Conversing}.
There are also more general AI-related design guidelines \cite{acm2023words, amershi2019guidelines, apple, ibm2023reid, liao2020questioning}.
Six of our principles are presented as ``design for...'' statements, indicating the characteristics that designers should keep in mind when making important design decisions. One is presented as a ``design against...'' statement, directing designers to design \emph{against} potential harms that may arise from hazardous model outputs, misuse, potential for human displacement, or other harms we have not yet considered. The principles interact with each other in complex ways, schematically represented via overlapping circles in Figure~\ref{fig:teaser}. For example, the characteristic denoted in one principle (e.g. multiple outputs) can sometimes be leveraged as a strategy for addressing another principle (e.g. exploration). Principles are also connected by a user's aims, such as producing a singular artifact, seeking inspiration or creative ideas, or learning about a domain. They are also connected by design features or attributes of a generative AI application, such as the support for versioning, curation, or sandbox environments.
Our aim for these principles is threefold: (1) to provide the designers of generative AI applications with the language to discuss issues unique to \emph{generative} AI; (2) to provide strategies and guidance to help designers make important design decisions around how end users will interact with a generative AI application; and (3) to sensitize designers to the idea that generative AI applications may cause a variety of harms (likely inadvertently, but possibly intentionally). We hope these principles provide the human-AI co-creation community with a reasoned way to think through the design of novel generative AI applications.
\section{Design Principles for Generative AI Applications}
\label{sec:design-principles}
We developed seven design principles for generative AI applications based on recent research in the HCI and AI communities, specifically around human-AI co-creative processes. We conducted a literature review of research studies, guidelines, and analytic frameworks from these communities~\cite{acm2023words, amershi2019guidelines, apple, deterding2017mixed, grabe2022towards, ibm2023reid, liao2020questioning, maher2012computational, maher2022research, muller2020iccc, muller2022extending, lubart2005can, seeber2020machines, spoto2017mici}, which included experiments in human-AI co-creation~\cite{agarwal2020project, agarwal2020quality, kreminski2022evaluating, louie2020novice, sun2022investigating, weisz2021perfection, weisz2022better}, examinations of representative generative applications~\cite{Brown:GPT3, johnson2007measuring, kaiser2018generative, louie2020novice, metz:GPT-3, ramesh2022hierarchical, rombach2022high, ross2023programmers}, and a review of publications in recent workshops~\cite{geyer2021hai, muller2022genaichi, muller2022hcai, weisz2022hai}.
\subsection{The Environment: Generative Variability}
\label{sec:generative-variability}
Generative AI technologies present unique challenges for designers of AI systems compared to discriminative AI systems. First, generative AI is \emph{generative} in nature, meaning its purpose is to produce artifacts as output, rather than decisions, labels, classifications, and/or decision boundaries. These artifacts may be composed of different types of media, such as text, images, audio, animations, or videos. Second, the outputs of a generative AI model are \emph{variable} in nature. Whereas discriminative AI aims for deterministic outcomes, a generative AI system may not produce the same output for a given input each time. In fact, \emph{by design}, it can produce multiple and divergent outputs for a given input, some or all of which may be satisfactory to the user. Thus, it may be difficult for users to achieve replicable results when working with a generative AI application.
Although the very nature of generative applications violates the common HCI principle that a system should respond consistently to a user's input (for critiques of this position, see \cite{aragon2022human, boyd2012critical, costanza2020design, d2020data, gitelman2013raw, muller2022drinking}), we take the position that this environment in which generative applications operate -- \emph{generative variability} -- is a core strength. Generative applications enable users to explore or populate a ``space'' of possible outcomes to their query. Sometimes, this exploration is explicit, as in the case of systems that enable latent space manipulations of an artifact. Other times, exploration of a space occurs when a generative model produces multiple candidate outputs for a given input, such as multiple distinct images for a given prompt \cite{ramesh2022hierarchical, rombach2022high} or multiple implementations of a source code program \cite{weisz2021perfection, weisz2022better}. Recent studies also show how users may improve their knowledge of a domain by working with a generative model and its variable outputs~\cite{weisz2021perfection, ross2023programmers}.
This concept of generative variability is crucially important for designers of generative AI applications to communicate to users. Users who approach a generative AI system without understanding its probabilistic nature and its capacity to produce varied outputs will struggle to interact with it in productive ways. The design principles we outline in the following sections -- designing for multiple outcomes \& imperfection, for exploration \& human control, and for mental models \& explanations -- are all rooted in the notion that generative AI systems are distinct and unique because they operate in an environment of generative variability.
\subsection{Design for Multiple Outputs}
\label{sec:design-multiple-outputs}
Generative AI technologies such as encoder-decoder models~\cite{sutskever2014sequence, cho2014learning}, generative adversarial networks~\cite{goodfellow2020generative}, and transformer models~\cite{vaswani2017attention} are probabilistic in nature and thus are capable of producing multiple, distinct outputs for a user's input. Designers therefore need to understand the extent to which these multiple outputs should be visible to users. Do users need the ability to annotate or curate? Do they need the ability to compare or contrast? How many outputs does a user need?
Understanding the user's task can help answer these questions. If the user's task is one of \emph{production}, in which the ultimate goal is to produce a single, satisfying artifact, then designs that help the user filter outputs and visualize their differences may be preferable. For example, a software engineer's goal is often to implement a method that performs a specific behavior. Tools such as Copilot take a user's input, such as a method signature or documentation, and provide a singular output. Conversely, if the user's task is one of \emph{exploration}, then designs that help the user curate, annotate, and mutate outputs may be preferable. For example, a software engineer may wish to explore a space of possible test cases for a code module. Or, an artist may wish to explore different compositions or styles to see a broad range of possibilities. Below we discuss a set of strategies for designing for multiple outputs.
\subsubsection{Versioning}
Because of the randomness involved in the generative process, as well as other user-configurable parameters (e.g. a random seed, a temperature, or other types of user controls), it may be difficult for a user to produce exactly the same outcome twice. As a user interacts with a generative AI application and creates a set of outputs, they may find that they prefer earlier outputs to later ones. How can they recover or reset the state of the system to generate such earlier outputs? One strategy is to keep track of all of these outputs, as well as the parameters that produced them, by versioning them. Such versioning can happen manually (e.g. the user clicks a button to ``save'' their current working state) or automatically.
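As a minimal sketch of such a version store, each saved state records the output together with the parameters that produced it, so an earlier state can be restored on demand. The field names here (prompt, seed, temperature) are illustrative assumptions, not the schema of any particular system:

```python
import time
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Version:
    """One saved generation: the artifact plus everything needed to reproduce it."""
    output: str          # the generated artifact
    prompt: str          # the input that produced it
    seed: int            # random seed used for generation
    temperature: float   # sampling parameter
    timestamp: float = field(default_factory=time.time)

class VersionHistory:
    """Append-only history of generations, supporting manual saves
    (e.g. a 'save' button) or automatic saves after each generation."""
    def __init__(self):
        self._versions = []

    def save(self, output, prompt, seed, temperature):
        self._versions.append(Version(output, prompt, seed, temperature))
        return len(self._versions) - 1   # version id for later restore

    def restore(self, version_id):
        return self._versions[version_id]
```

A user interface could expose `restore` as a timeline or gallery of earlier outputs, re-seeding the generator from the stored parameters.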
\subsubsection{Curation}
When a generative model is capable of producing multiple outputs, users may need tools to curate those outputs. Curation may include collecting, filtering, sorting, selecting, or organizing outputs (possibly from the versioned queue) into meaningful subsets or groups, or creating prioritized lists or hierarchies of outputs according to some subjective or objective criteria. For example, CogMol\footnote{\url{http://covid19-mol.mybluemix.net}} generates novel molecular compounds, which can be sorted by various properties, such as their molecular weight, toxicity, or water solubility~\cite{chenthamarakshan2020cogmol, chenthamarakshan2020targetspecific}. In addition, the confidence of the model in each output it produced may be a useful way to sort or rank outputs, although in some cases, model confidence scores may not be indicative of the quality of the model's output~\cite{agarwal2020quality}.
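Filtering and ranking of this kind can be sketched generically over any list of outputs that carry per-output properties. The molecule properties below echo the CogMol example but are illustrative values, not data from that system:

```python
def curate(outputs, rank_key, descending=False, keep=None):
    """Filter generated outputs with an optional predicate,
    then rank the survivors by a chosen property."""
    kept = [o for o in outputs if keep is None or keep(o)]
    return sorted(kept, key=lambda o: o[rank_key], reverse=descending)

# Hypothetical generated candidates with per-output properties.
molecules = [
    {"id": "m1", "weight": 310.4, "toxic": False},
    {"id": "m2", "weight": 150.2, "toxic": True},
    {"id": "m3", "weight": 220.9, "toxic": False},
]

# Keep non-toxic candidates, lightest first.
shortlist = curate(molecules, "weight", keep=lambda m: not m["toxic"])
```

The same pattern applies to ranking by model confidence, with the caveat noted above that confidence may not track output quality.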
\subsubsection{Annotation}
When a generative model has produced a large number of outputs, users may desire to add marks, decorators, or annotations to outputs of interest. These annotations may be applied to the output as a whole (e.g. ``I like this'') or to a portion or subset of the output (e.g. flagging lines of source code that look problematic and need review).
\subsubsection{Visualizing Differences}
In some cases, a generative model may produce a diverse set of distinct outputs, such as images of cats that look strikingly different from each other. In other cases, a generative model may produce a set of outputs for which it is difficult to discern differences, such as a source code translation from one language to another. In this case, tools that aid users in visualizing the similarities and differences between multiple outputs can be useful. Depending on the users' goals, they may seek to find the \textit{invariant} aspects across outcomes, such as identifying which parts of a source code translation were the same across multiple translations, indicating a confidence in its correctness. Or, users may prioritize the \textit{variant} aspects for greater creativity and inspiration. For example, Sentient Sketchbook~\cite{liapis2013sentient} is a video game co-creation system that displays a number of different metrics of the maps it generates, enabling users to compare newly-generated maps with their current map to understand how they differ.
\subsection{Design for Imperfection}
\label{sec:design-imperfect}
It is important for users to understand that the quality of a generative model's outputs will vary. Users who expect a generative AI application to produce exactly the artifact they desire will experience frustration when they work with the system and find that it often produces imperfect artifacts. By ``imperfect,'' we mean that the artifact itself may have flaws, such as visual misrepresentations in an image, bugs or errors in source code, missing desired elements (e.g. ``an illustration of a bunny with a carrot'' fails to include a carrot), violations of constraints specified in the input prompt (e.g. ``write a 10 word sentence'' produces a much longer or shorter sentence), or even untruthful or misleading answers (e.g. a summary of a scientific topic that includes non-existent references~\cite{rose2022facebook}). But ``imperfect'' can also mean ``doesn't satisfy the user's desire,'' such as when the user prompts a model and doesn't get back any satisfying outputs (e.g. the user didn't like any of the illustrations of a bunny with a carrot). Below we discuss a set of strategies for designing for imperfection.
\subsubsection{Multiple Outputs}
Our previous design principle is also a strategy for handling imperfect outputs. If a generative model is allowed to produce multiple outputs, the likelihood that one of those outputs is satisfying to the user is increased. One example of this effect is in how code translation models are evaluated, via a metric called $pass@k$~\cite{roziere2020unsupervised, kulal2019spoc}. The idea is that the model is allowed to produce $k$ code translations for a given input, and if any of them pass a set of unit tests, then the model is said to have produced a correct translation. In this way, generating multiple outputs serves to mitigate the fact that the model's most-likely output may be imperfect. However, it is left up to the user to review the set of outputs and identify the one that is satisfactory; with multiple outputs that are very similar to each other, this task may be difficult~\cite{weisz2022better}, implying the need for a way to easily visualize differences.
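The metric is simple to state in code. The sketch below includes the direct definition (a problem counts as solved if any of the first $k$ candidates passes its unit tests) alongside the unbiased combinatorial estimator commonly used when $n \geq k$ samples are drawn per problem; the estimator formula is an assumption drawn from standard code-generation evaluation practice rather than from the works cited above:

```python
from math import comb

def solved_at_k(candidates_pass, k):
    """Direct definition: did any of the first k candidates pass the tests?"""
    return any(candidates_pass[:k])

def pass_at_k(n, c, k):
    """Unbiased estimate of pass@k when n samples are drawn per problem
    and c of them pass: 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0   # fewer than k failing samples: every draw of k includes a pass
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, with $n = 2$ samples of which $c = 1$ passes, pass@1 is $0.5$: a single draw succeeds half the time.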
\subsubsection{Evaluation \& Identification}
Even when generative models do not produce perfect (or perfectly satisfying) outputs, they may still be able to provide users with a signal about the quality of their outputs, or indicate parts that require human review. As previously discussed, a model's per-output confidence scores may be used (with care) to indicate the quality of a model's output. Or, domain-specific metrics (e.g. molecular toxicity, compiler errors) may be useful indicators to evaluate whether an artifact achieved a desirable level of quality. Thus, evaluating the quality of generated artifacts and identifying which portions of those artifacts may contain imperfections (and thus require human review, discussed further in \citet{weisz2021perfection}) can be an effective way of handling imperfection.
\subsubsection{Co-Creation}
User experiences that allow for co-creation, in which both the user and the AI can edit a candidate artifact, will be more effective than user experiences that assume or aim for the generative model to produce a perfect output. Allowing users to edit a model's outputs provides them with the opportunity to find and fix imperfections, and ultimately achieve a satisfactory artifact. One example of this idea is Github Copilot~\cite{web:copilot}, which is embedded in the VSCode IDE. In the case when Copilot produces an imperfect block of source code, developers are able to edit it \emph{right in context} without any friction. By contrast, tools like Midjourney or Stable Diffusion only produce a gallery of images to choose from; editing those images requires the user to shift to a different environment (e.g. Photoshop).
\subsubsection{Sandbox / Playground Environment}
A sandbox or playground environment ensures that when a user interacts with a generated artifact, their interactions (such as edits, manipulations, or annotations) do not impact the larger context or environment in which they are working. Returning to the example of Github Copilot, since it is situated inside a developer's IDE, code it produces is directly inserted into the working code file. Although this design choice enables co-creation, it also poses a risk that imperfect code is injected into a production code base. A sandbox environment that requires users to explicitly copy and paste code in order to commit it to the current working file may guard against the accidental inclusion of imperfect outputs in a larger environment or product.
\subsection{Design for Human Control}
\label{sec:design-control}
Keeping humans in control of AI systems is a core tenet of human-centered AI~\cite{shneiderman2020human, shneiderman2021human, shneiderman2022human}. Providing users with controls in generative applications can improve their experience by increasing their efficiency, comprehension, and ownership of generated outcomes \cite{louie2020novice}. But, in co-creative contexts, there are multiple ways to interpret what kinds of ``control'' people need. We identify three kinds of controls applicable to generative AI applications.
\subsubsection{Generic Controls}
One aspect of control relates to the exploration of a design space or range of possible outcomes (as discussed in Section~\ref{sec:design-explore}). Users need appropriate controls in order to drive their explorations, such as control over the number of outputs produced from an input or the amount of variability present in the outputs. We refer to these kinds of controls as \emph{generic controls}, as they are applicable regardless of the particular generative technology or domain. As an example, some generative projects may involve a ``lifecycle'' pattern in which users benefit from seeing a great diversity of outputs in early stages of the process in order to search for ideas, inspirations, or directions. Later stages of the project may focus on a smaller number (or singular) output, requiring controls that specifically operate on that output. Many generative algorithms include a user-controllable parameter called \emph{temperature}. A low temperature setting produces outcomes that are very similar to each other; conversely, a high temperature setting produces outcomes that are very dissimilar to each other. In the ``lifecycle'' model, users may first set a high temperature for increased diversity, and then reduce it when they wish to focus on a particular area of interest in the output space. This effect was observed in a study of a music co-creation tool, in which novice users dragged temperature control sliders to the extreme ends to explore the limits of what the AI could generate~\cite{louie2020novice}.
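Mechanically, temperature is typically applied by dividing a model's logits by the temperature before the softmax, which is why low values concentrate probability mass on a few outcomes (similar outputs) and high values flatten the distribution (dissimilar outputs). A minimal sketch, independent of any particular model:

```python
import math
import random

def temperature_probs(logits, temperature):
    """Softmax over logits scaled by 1/temperature.
    temperature -> 0: nearly all mass on the top token (similar outputs);
    temperature -> infinity: nearly uniform (dissimilar outputs)."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(logits, temperature, rng=random):
    """Draw one token index from the temperature-scaled distribution."""
    probs = temperature_probs(logits, temperature)
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]
```

A temperature slider in a user interface can map directly onto this parameter, giving users a single generic control over output diversity.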
\subsubsection{Technology-specific Controls}
Other types of controls will depend on the particular generative technology being employed. Encoder-decoder models, for example, often allow users to perform latent space manipulations of an artifact in order to control semantically-meaningful attributes. For example, \citet{liu2021neurosymbolic} demonstrate how semantic sliders can be used to control attributes of 3D models of animals, such as the animal's torso length, neck length, and neck rotation. Transformer models use a temperature parameter to control the amount of randomness in the generation process~\cite{vonplaten2020how}. Natural language prompting, and the emerging discipline of prompt engineering~\cite{liu2022design}, provide additional ways to tune or tweak the outputs of large language models. We refer to these kinds of controls as \emph{technology-specific controls}, as the controls exposed to a user in a user interface will depend upon the particular generative AI technology used in the application.
\subsubsection{Domain-specific Controls}
Some types of user controls will be domain-specific, dependent on the type of artifact being produced. For example, generative models that produce molecules as output might be controlled by having the user specify desired properties such as molecular weight or water solubility; these types of constraints might be propagated to the model itself (e.g. expressed as a constraint in the encoder phase), or they may simply act as a filter on the model's output (e.g. hide anything from the user that doesn't satisfy the constraints). In either case, the control itself is dependent on the fact that the model is producing a specific kind of artifact, such as a molecule, and would not logically make sense for other kinds of artifacts in other domains (e.g. how would you control the water solubility for a text-to-image model?). Thus, we refer to these types of controls, independent of how they are implemented, as \emph{domain-specific}. Other examples of domain-specific controls include the reading level of a text, the color palette or artistic style of an image, or the run time or memory efficiency of source code.
\subsection{Design for Exploration}
\label{sec:design-explore}
Because users are working in an environment of generative variability, they will need some way to ``explore'' or ``navigate'' the space of potential outputs in order to identify one (or more) that satisfies their needs. Below we discuss a set of strategies for designing for exploration.
\subsubsection{Multiple Outputs}
The ability for a generative model to produce multiple outputs (Section~\ref{sec:design-multiple-outputs}) is an enabler of exploration. Returning to the bunny and carrot example, an artist may wish to explore different illustrative styles and prompt (and re-prompt) the model for additional candidates of ``a bunny with a carrot'' in various kinds of styles or configurations. Or, a developer can explore different ways to implement an algorithm by prompting (and re-prompting) a model to produce implementations that possess different attributes (e.g. ``implement this using recursion,'' ``implement this using iteration,'' or ``implement this using memoization''). In this way, a user can get a sense of the different possibilities the model is capable of producing.
\subsubsection{Control}
Depending on the specific technical architecture used by the generative application, there may be different ways for users to control it (Section~\ref{sec:design-control}). No matter the specific mechanisms of control, \emph{providing controls} to a user provides them with the ability to interactively work with the model to explore the space of possible outputs for their given input.
\subsubsection{Sandbox / Playground Environment}
A sandbox or playground environment can enable exploration by providing a separate place in which new candidates can be explored, without interfering with a user's main working environment. For example, in a project using Copilot, \citet{cheng2022would} suggest providing, ``a sandbox mechanism to allow users to play with the prompt in the context of their own project.''
\subsubsection{Visualization}
One way to help users understand the space in which they are exploring is to explicitly visualize it for them. \citet{kreminski2022evaluating} introduce the idea of expressive range coverage analysis (ERCA) in which a user is shown a visualization of the ``range'' of possible generated artifacts across a variety of metrics. Then, as users interact with the system and produce specific artifact instances, those instances are included in the visualization to show how much of the ``range'' or ``space'' was explored by the user.
\subsection{Design for Mental Models}
\label{sec:design-mental-models}
Users form mental models when they work with technological systems \cite{scheutz2017framework, fiore2001group, mathieu2000influence}. These models represent the user's understanding of how the system works and how to work with it effectively to produce the outcomes they desire. Due to the environment of generative variability, generative AI applications will pose new challenges to users because these applications may violate existing mental models of how computing systems behave (i.e. in a deterministic fashion). Therefore, we recommend designing to support users in creating accurate mental models of generative AI applications in the following ways.
\subsubsection{Orientation to Generative Variability}
Users may need a general introduction to the concept of generative AI. They should understand that the system may produce multiple outputs for their query (Section~\ref{sec:design-multiple-outputs}), that those outputs may contain flaws or imperfections (Section~\ref{sec:design-imperfect}), and that their effort may be required to collaborate \emph{with} the system in order to produce desired artifacts via various kinds of controls (Section~\ref{sec:design-control}).
\subsubsection{Role of the AI}
Research in human-AI interaction suggests that users may view an AI application as filling a role such as an assistant, coach, or teammate~\cite{seeber2020machines}. In a study of video game co-creation, \citet{guzdial2019friend} found participants to ascribe roles of friend, collaborator, student, and manager to the AI system. Recent work by \citet{ross2023programmers} examined software engineers' role orientations toward a programming assistant and found that people viewed the assistant with a tool orientation, but interacted with it as if it were a social agent. Clearly establishing the role of a generative AI application in a user's workflow, as well as its level of autonomy (e.g. \cite{fitts1951human, sheridan1978human, parasuraman2000model, horvitz1999principles}), will help users better understand how to interact effectively with it. Designers can reason about the role of their application by answering questions such as: Is it a tool or a partner? Does it act proactively, or does it only respond to the user? Does it make changes to an artifact directly, or does it simply make recommendations for the user?
\subsection{Design for Explanations}
\label{sec:design-xai}
Generative AI applications will be unfamiliar and possibly unusual to many users. They will want to know what the application can (and cannot) do, how well it works, and how to work with it effectively. Some users may even wish to understand the technical details of how the underlying generative AI algorithms work, although these details may not be necessary to work effectively with the model (as discussed in \cite{weisz2021perfection}).
In recent years, the explainable AI (XAI) community has made tremendous progress at developing techniques for explaining how AI systems work~\cite{arya2020ai, ehsan2022human, liao2020questioning, liao2021introduction, simkute2022xai}. Much of the work in XAI has focused on discriminative algorithms: how they generally make decisions (e.g. via interpretable models~\cite[Chapter 5]{molnar2020interpretable} or feature importance~\cite[Section 8.5]{molnar2020interpretable}), and why they make a decision in a specific instance (e.g. via counterfactual explanations~\cite[Section 9.3]{molnar2020interpretable}).
Recent work in human-centered XAI (HCXAI) has emphasized designing explanations that cater to human knowledge and human needs~\cite{ehsan2022human}. This work grew out of a general shift toward human-centered data science~\cite{aragon2022human}, in which explanations are intended not for a technical user (a data scientist), but for an end user who might be impacted by a machine learning model.
In the case of generative AI, recent work has begun to explore the needs for explainability. \citet{sun2022investigating} explored explainability needs of software engineers working with a generative AI model for various types of use cases, such as code translation and autocompletion. They identified a number of types of questions that software engineers had about the generative AI, its capabilities, and its limitations, indicating that explainability is an important feature for generative AI applications. They also identified several gaps in existing explainability frameworks stemming from the \emph{generative} nature of the AI system, indicating that existing XAI techniques may not be sufficient for generative AI applications. Thus, we make the following recommendations for how to design for explanations.
\subsubsection{Calibrate Trust by Communicating Capabilities and Limitations}
Because of the inherent imperfection of generative AI outputs, users would be well-served if they understood the limitations of these systems~\cite{muller2022forgetting, pinanez2021expose}, allowing them to \emph{calibrate} their trust in terms of what the application can and cannot do~\cite{pinanez2022breakdowns}. When these kinds of imperfections (Section~\ref{sec:design-imperfect}) are not signaled, users of co-creative tools may mistakenly blame themselves for shortcomings of generated artifacts~\cite{louie2020novice}, and users in Q \& A use cases can be shown deceptive misconceptions and harmful falsehoods as objective answers~\cite{lin2021truthfulqa}. One way to communicate the capabilities of a generative AI application is to show examples of what it can do. For example, Midjourney provides a public discussion space to orient new users and show them what other users have produced with the model. This space not only shows the outputs of the model (e.g. images), but the textual prompts that produced the images. In this way, users can more quickly come to understand how different prompts influence the application's output. To communicate limitations, systems like ChatGPT contain modal screens to inform users of the system's limitations.
\subsubsection{Use Explanations to Create and Reinforce Accurate Mental Models}
\citet{weisz2021perfection} explored how a generative model's confidence could be surfaced in a user interface. Working with a transformer model on a code translation task, they developed a prototype UI that highlighted tokens in the translation that the model was not confident in. In their user study, they found that those highlights also served as explanations for how the model worked: users came to understand that each source code token was chosen probabilistically, and that the model had considered other alternatives. This design transformed an algorithmic weakness (imperfect output) into a resource for users to understand how the algorithm worked, and ultimately, to control its output (by showing users where they might need to make changes).
\subsection{Design Against Harms}
\label{sec:design-against-harms}
The use of AI systems -- including generative AI applications -- may unfortunately lead to diverse forms of harms, especially for people in vulnerable situations. Much work in AI ethics communities has identified how discriminative AI systems may perpetuate harms such as the denial of personhood or identity~\cite{costanza2020design, kantayya2020coded, spiel2021they}; the deprivation of liberty or of children~\cite{lyn2020risky, saxena2021framework}; and the erasure of persons, cultures, or nations through data silences~\cite{muller2022forgetting}. We identify four types of potential harms, some of which are unique to the generative domain, and others which represent existing risks of AI applications that may manifest in new ways.
Our aim in this section is to sensitize designers to the potential risks and harms that generative AI systems may pose. We do not prescribe solutions to address these risks, in part because it is an active area of research to understand how these kinds of risks could be mitigated. Risk identification, assessment, and mitigation is a sociotechnical problem involving computing resources, humans, and cultures. Even with our focus on the design of generative applications, an analysis of harms that is limited to design concepts may blur into technosolutionism~\cite{lindtner2016reconstituting, madaio2020co, resseguier2021ethics}.
We do posit that \emph{human-centered} approaches to generative AI design are a useful first step, but must be part of a larger strategy to understand who are the direct and indirect stakeholders of a generative application~\cite{friedman2019value, hendry2021value}, and to work directly with those stakeholders to \emph{identify} harms, \emph{understand} what are their differing priorities and value tensions~\cite{miller2007value}, and \emph{negotiate} issues of culture, policy, and (yes) technology to meet these diverse challenges (e.g., \cite{denzin2008handbook, disalvo2022design, hayes2014knowing}).
\subsubsection{Hazardous Model Outputs}
Generative AI applications may produce artifacts that cause harm. In an integrative survey paper, \citet{weidinger2021ethical} list six types of potential harms of large language models, three of which regard the harms that may be caused by the model's output:
\begin{itemize}
\item \textbf{Discrimination, Exclusion, and Toxicity}. Generative models may produce outputs that promote discrimination against certain groups, exclude certain groups from representation, or produce toxic content. Examples include text-to-image models that fail to produce ethnically diverse outputs for a given input (e.g. a request for images of doctors produces images of male, white doctors~\cite{cho2022dall}) or language models that produce inappropriate language such as swear words, hate speech, or offensive content~\cite{acm2023words, ibm2023reid}.
\item \textbf{Information Hazards}. Generative models may inadvertently leak private or sensitive information from their training data. For example, \citet{carlini2021extracting} found that strategically prompting GPT-2 revealed an individual's full name, work address, phone number, email, and fax number. Additionally, larger models may be more vulnerable to these types of attacks~\cite{carlini2021extracting, carlini2022quantifying}.
\item \textbf{Misinformation Harms}. Generative models may produce misinformation in response to a user's query. \citet{lin2021truthfulqa} found that GPT-3 can provide false answers that mimic human falsehoods and misconceptions, such as ``coughing can help stop a heart attack'' or ``[cold weather] tells us that global warming is a hoax''. \citet{singhal2022large} caution against the tendency of LLMs to hallucinate references, especially if consulted for medical decisions. \citet{albrecht2022despite} claim that LLMs have few defenses against adversarial attacks while advising about ethical questions. The Galactica model was found to hallucinate non-existent scientific references~\cite{heaven_2022}, and Stack Overflow has banned responses sourced from ChatGPT due to their high rate of incorrect, yet plausible, responses~\cite{vincent_2022}.
\end{itemize}
In addition to those harms, a generative model's outputs may be hazardous in other ways as well.
\begin{itemize}
\item \textbf{Deceit, Impersonation, and Manipulation}. Generative algorithms can be used to create false records or ``deep fakes'' (e.g., \cite{houde2020business, meskys2020regulating}), to impersonate others (e.g. \cite{stupp2019fraudsters}), or to distort information into politically-altered content~\cite{yang2019unsupervised}. In addition, they may manipulate users who believe that they are chatting with another human rather than with an algorithm, as in the case of an unreviewed ChatGPT ``experiment'' in which at least 4,000 people seeking mental health support were connected to a chatbot rather than a human counselor~\cite{morris2023we}.
\item \textbf{Copyright, Licenses, and Intellectual Property}. Generative models may have been trained on data protected by regulations such as the GDPR, which prohibits the re-use of data beyond the purposes for which it was collected. In addition, large language models have been referred to as ``stochastic parrots'' due to their ability to reproduce data that was used during their training~\cite{bender2021dangers}. One consequence of this effect is that the model may produce outputs that incorporate or remix materials that are subject to copyright or intellectual property protections~\cite{franceschelli2022copyright, hristov2016artificial, murray2022generative}. For example, the Codex model, which produces source code as output, may (re-)produce source code that is copyrighted or subject to a software license, or that was openly shared under a creative commons license that prohibits commercial re-use (e.g., in a pay-to-access LLM). Thus, the use of a model's outputs in a project may cause that project to violate copyright protections, or subject that project to a restrictive license (e.g. GPL). As of this writing, there is a lawsuit against GitHub, Microsoft, and OpenAI on alleged copyright violations in the training of Codex~\cite{web:githubcopilotlitigation}.
\end{itemize}
\subsubsection{Misuse}
\citet{weidinger2021ethical} describe how generative AI applications may be misused in ways unanticipated by the creators of those systems. Examples include making disinformation cheaper and more effective, facilitating fraud and scams, assisting code generation for cyberattacks, or conducting illegitimate surveillance and censorship. In addition to these misuses, \citet{houde2020business} also identify business misuses of generative AI applications such as facilitating insurance fraud and fabricating evidence of a crime. Although designers may not be able to prevent users from intentionally misusing their generative AI applications, there may be preventative measures that make sense for a given application domain. For example, output images may be watermarked to indicate they were generated by a particular model, blocklists may be used to disallow undesirable words in a textual prompt, or multiple people may be required to review or approve a model's outputs before they can be used.
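As one concrete illustration of the blocklist measure mentioned above, a minimal prompt filter might look like the following sketch. The list contents and word-level matching are placeholders; production moderation pipelines rely on far more robust matching, trained classifiers, and human review.

```python
# Illustrative sketch only: a naive word-level blocklist for prompts.
# Real moderation pipelines use classifiers, fuzzy matching, and human review.
import re

BLOCKLIST = {"badword", "slur"}  # hypothetical placeholder terms

def check_prompt(prompt: str) -> bool:
    """Return True if the prompt passes the blocklist, False otherwise."""
    tokens = re.findall(r"[a-z']+", prompt.lower())
    return not any(token in BLOCKLIST for token in tokens)

print(check_prompt("draw a mountain landscape"))  # allowed
print(check_prompt("please write a badword"))     # blocked
```

Even such a crude filter illustrates the design trade-off: exact word matching is easy to evade, while aggressive matching risks blocking legitimate prompts.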
\subsubsection{Human Displacement}
One consequence of the large-scale deployment of generative AI technologies is that they may come to \emph{replace}, rather than \emph{augment}, human workers. Such concerns have been raised in related areas, such as the use of automated AI technologies in data science~\cite{wang2019human, wang2021towards}. \citet{weidinger2021ethical} specifically discuss the potential economic harms and inequalities that may arise as a consequence of widespread adoption of generative AI. If a generative model is capable of producing high-fidelity outputs that rival (or even surpass) what can be created by human effort, are the humans necessary anymore? Contemporary fears of human displacement by generative technologies are beginning to manifest in mainstream media, such as in the case of illustrators' concerns that text-to-image models such as Stable Diffusion and Midjourney will put them out of a job~\cite{wilkins2022will}. We urge designers to find ways to design generative AI applications that \emph{enhance} or \emph{augment} human abilities, rather than applications that aim to \emph{replace} human workers. Copilot serves as one example of a tool that clearly enhances the abilities of a software engineer: it operates on the low-level details of a source code implementation, freeing up software engineers to focus more of their attention on higher-level architectural and system design issues.
\section{Discussion}
\label{sec:discussion}
\subsection{Designing for User Aims}
Users of generative AI applications may have varied aims or goals in using those systems. Some users may be in pursuit of \emph{perfecting a singular artifact}, such as a method implementation in a software program. Other users may be in \emph{pursuit of inspiration or creative ideas}, such as when exploring a visual design space. As a consequence of working with a generative AI application, users may also \emph{enhance their own learning or understanding} of the domain in which they are operating, such as when a software engineer learns something new about a programming language from the model's output. Each of these aims can be supported by our design principles, which can also help designers determine the appropriate strategy for addressing the challenges posed by each principle.
To support artifact production, designers ought to carefully consider how to manage a model's multiple, imperfect outputs. Interfaces ought to support users in curating, annotating, and mutating artifacts to help users refine a singular artifact. The ability to version artifacts, or show a history of artifact edits, may also be useful to enable users to revisit discarded options or undo undesirable modifications. For cases in which users seek to produce one ``ideal'' artifact that satisfies some criteria, controls that enable them to co-create with the generative tool can help them achieve their goal more efficiently, and explanations that signal or identify imperfections can tell them how close or far they are from the mark.
To support inspiration and creativity, designers also ought to provide adequate controls that enable users to explore a design space of possibilities~\cite{kreminski2022evaluating, morris2022design}. Visualizations that represent the design space can also be helpful as they can show which parts the user has vs. has not explored, enabling them to explore the novel parts of that space. Tools that help users manage, curate, and filter the different outputs created during their explorations can be extremely helpful, such as a digital mood board for capturing inspiring model outputs.
Finally, to support learning how to effectively interact with a generative AI application, designers ought to help users create accurate mental models~\cite{kollmansberger2010helping} through explanations~\cite{arya2020ai, ehsan2022human, liao2020questioning, liao2021introduction, simkute2022xai}. Explanations can help answer general questions such as what a generative AI application is capable or not capable of generating, how the model's controls impact its output, and how the model was trained and the provenance of its training data. They can also answer questions about a specific model output, such as how confident the model was in that output, which portions of that output might need human review or revision, how to adjust or modify the input or prompt to adjust properties of the output, or what other options or alternatives exist for that output.
\subsection{The Importance of Value-Sensitive Design in Mitigating Potential Harms}
Designers need to be sensitive to the potential harms that may be caused by the rapid maturation and widespread adoption of generative AI technologies. Although sociotechnical means for mitigating these harms have yet to be developed, we recommend that designers use a Value Sensitive Design approach~\cite{friedman2019value, hendry2021value} when reasoning about how to design generative AI applications. By clearly identifying the different stakeholders and impacted parties of a generative AI application, and explicitly enumerating their values, designers can make more reasoned judgments about how those stakeholders might be impacted by hazardous model outputs, model misuse, and issues of human displacement.
\section{Limitations and Future Work}
Generative AI applications are still in their infancy, and new kinds of co-creative user experiences are emerging at a rapid pace. Thus, we consider these principles to be in their infancy as well, and it is possible that other important design principles, strategies, and/or user aims have been overlooked. In addition, although these principles can provide helpful guidance to designers in making specific design decisions, they need to be validated in real-world settings to ensure their clarity and utility.
\section{Conclusion}
We present a set of seven design principles for generative AI applications. These principles are grounded in an environment of generative variability, the key characteristics of which are that a generative AI application will generate artifacts as outputs, and those outputs may vary in nature (e.g. in quality or character). The principles focus on designing for multiple outputs and the imperfection of those outputs, designing for exploration of a space or range of possible outputs and maintaining human control over that exploration, and designing to establish accurate mental models of the generative AI application via explanations. We also urge designers to design \emph{against} the potential harms that may be caused by hazardous model output (e.g. the production of inappropriate language or imagery, the reinforcement of existing stereotypes, or a failure to inclusively represent different groups), by misuse of the model (e.g. by creating disinformation or fabricating evidence), or by displacing human workers (e.g. by designing for the \emph{replacement} rather than the \emph{augmentation} of human workers). We envision these principles to help designers make reasoned choices as they create novel generative AI applications.
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
The main target of present and future $B$ factories is the study
of CP violation. It will provide tests of the flavor sector of the
Standard Model, which predicts that all CP violation results
from a single complex phase in the quark mixing matrix. The
determination of the sides and angles of the unitarity triangle
plays a central role in this program.\cite{BaBar} The angle
$\beta=\mbox{arg}(V_{td}^*)$ is accessible in a theoretically
clean way through the observation of the CP asymmetry in the decay
$B\to J/\psi K_S$, and a first measurement yielding $\sin 2\beta
=0.79_{-0.44}^{+0.41}$ has just been reported by the CDF
Collaboration.\cite{CDF}
Recently, there has been significant progress in the theoretical
understanding of the hadronic decays $B\to\pi K$, and methods have
been developed to extract information on $\gamma=\mbox{arg}(V_{ub}^*)$
from measurements of the rates for these processes. Here, we discuss
the charged modes $B^\pm\to\pi K$, which are particularly clean from
a theoretical point of view.\cite{us}$^-$\cite{me} For applications
involving the neutral decay modes the reader is referred to the
literature.\cite{FM,Robert}
\section{\boldmath Theory of $B^\pm\to\pi K$ decays\unboldmath}
The hadronic decays $B\to\pi K$ are mediated by a low-energy
effective weak Hamiltonian, whose operators allow for three
distinct classes of flavor topologies: QCD penguins, trees, and
electroweak (EW) penguins. In the Standard Model, the weak couplings
associated with these topologies are known. Data show that the QCD
penguins dominate the decay amplitudes, whereas trees and EW
penguins are subleading and of a similar strength. The theoretical
description of the two charged modes $B^\pm\to\pi^\pm K^0$ and
$B^\pm\to\pi^0 K^\pm$ exploits the fact that the amplitudes for
these processes differ in a pure isospin amplitude, $A_{3/2}$,
given by the matrix element of the isovector part of the effective
Hamiltonian between $B$ and the $(\pi K)$ isospin eigenstate with
$I=\frac 32$. Up to an overall strong-interaction phase $\phi$,
the parameters of this amplitude are determined in the limit of
SU(3) flavor symmetry.\cite{us} SU(3)-breaking corrections can be
calculated in the factorization approximation,\cite{Stech} so that
theoretical uncertainties enter only at the level of nonfactorizable
SU(3)-breaking corrections to a subleading decay amplitude.
A convenient parametrization of the decay amplitudes is
\begin{eqnarray}
{\cal A}(B^+\to\pi^+ K^0) &=& P \Big[ 1
- \varepsilon_a\,e^{i\gamma} e^{i\eta} \Big] \,,\nonumber\\
- \sqrt2\,{\cal A}(B^+\to\pi^0 K^+) &=& P \Big[ 1
- \varepsilon_a\,e^{i\gamma} e^{i\eta}
- \varepsilon_{3/2}\,e^{i\phi}
(e^{i\gamma} - \delta_{\rm EW}) \Big] \,,
\end{eqnarray}
where $P$ is the dominant penguin amplitude defined as the sum of
all terms in the $B^+\to\pi^+ K^0$ amplitude not proportional to
$e^{i\gamma}$, $\eta$ and $\phi$ are strong phases, and
$\varepsilon_a$, $\varepsilon_{3/2}$ and $\delta_{\rm EW}$ are
real hadronic parameters. From a naive quark-diagram analysis,
one does not expect the $B^+\to\pi^+ K^0$ amplitude to receive a
contribution from $b\to u$ tree topologies; however, such a
contribution can be induced through final-state rescattering or
annihilation contributions.\cite{Blok}$^-$\cite{At97} They are
parametrized by $\varepsilon_a$. Model estimates indicate that
$\varepsilon_a$ could be about 5--10\%. In the future, it will be
possible to put upper and lower bounds on this parameter by comparing
the CP-averaged branching ratios for the decays $B^\pm\to\pi^\pm K^0$
and $B^\pm\to K^\pm\bar K^0$.\cite{Fa97} Below, we assume
$\varepsilon_a\le 0.1$; however, our results will be almost
insensitive to this assumption.
The parameter $\delta_{\rm EW}=0.64\pm 0.15$ describes the ratio of
EW penguin and tree contributions to the isospin amplitude $A_{3/2}$.
In the SU(3) limit, it is calculable in terms of Standard Model
parameters.\cite{us,Fl96} The main uncertainty comes from the present
errors on $|V_{ub}|$ and the mass of the top quark. SU(3)-breaking
effects, which are included in the factorization approximation,
reduce the value of $\delta_{\rm EW}$ by 6\%. In the error analysis
we have assigned a 100\% uncertainty to this estimate. Note that, if
nonfactorizable SU(3) breaking would lead to a further reduction of
$\delta_{\rm EW}$, this would {\em strengthen\/} the bound on $\gamma$
derived below.
The parameter $\varepsilon_{3/2}$ describes the strength of the
$\Delta I=1$ tree amplitude relative to the leading penguin
amplitude. We define a related parameter $\bar\varepsilon_{3/2}$
by writing $\varepsilon_{3/2}=\bar\varepsilon_{3/2}
\sqrt{1-2\varepsilon_a\cos\eta\cos\gamma+\varepsilon_a^2}$, so
that the two quantities agree in the limit $\varepsilon_a\to 0$.
In the SU(3) limit, this new parameter can be determined
experimentally from the relation
\begin{equation}\label{eps}
\bar\varepsilon_{3/2} = \sqrt2\,R_{\rm SU(3)}
\left|\frac{V_{us}}{V_{ud}}\right| \left[
\frac{\mbox{B}(B^\pm\to\pi^\pm\pi^0)}
{\mbox{B}(B^\pm\to\pi^\pm K^0)} \right]^{1/2} \,.
\end{equation}
Factorizable SU(3)-breaking is accounted for by a factor
$R_{\rm SU(3)}\approx f_K/f_\pi\approx 1.2$. This may overestimate
the effect; however, reducing the value of $\bar\varepsilon_{3/2}$
would again {\em strengthen\/} the bound on $\gamma$. We thus feel
comfortable working with a large positive SU(3) correction and
assign an error of 50\% to it to account for nonfactorizable effects.
Preliminary data reported by the CLEO Collaboration,\cite{CLEO}
combined with some theoretical guidance, then yield
$\bar\varepsilon_{3/2}=0.24\pm 0.06$.\cite{us} With a precise
measurement of the CP-averaged branching ratios entering (\ref{eps}),
the uncertainty in this number could be reduced significantly.
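As a rough numerical cross-check of the relation above, the evaluation can be sketched as follows. The branching-ratio inputs are illustrative placeholders of roughly the CLEO-era magnitude, not the actual measurements.

```python
import math

# Illustrative inputs, NOT the measured values: the CKM ratio and
# CP-averaged branching ratios of roughly the CLEO-era magnitude.
R_su3 = 1.2            # factorizable SU(3) breaking, ~ f_K / f_pi
vus_over_vud = 0.226   # |V_us / V_ud|
br_pi_pi0 = 0.54e-5    # B(B+- -> pi+- pi0), placeholder value
br_pi_k0 = 1.4e-5      # B(B+- -> pi+- K0), placeholder value

eps_bar = math.sqrt(2) * R_su3 * vus_over_vud * math.sqrt(br_pi_pi0 / br_pi_k0)
print(f"eps_bar_3/2 = {eps_bar:.2f}")  # of order the quoted 0.24
```

With inputs of this size the result lands near the quoted central value $\bar\varepsilon_{3/2}\approx 0.24$; the experimental and SU(3)-breaking uncertainties discussed in the text are not propagated here.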
\section{\boldmath Lower bound on $\gamma$\unboldmath}
Generalizing an idea of Fleischer and Mannel,\cite{FM} a bound on
$\cos\gamma$ can be derived from a measurement of the ratio of the
CP-averaged branching ratios for the two $B^\pm\to\pi K$ decay
modes. The CLEO Collaboration has measured~\cite{CLEO}
\begin{equation}
R_* = \frac{\mbox{B}(B^\pm\to\pi^\pm K^0)}
{2\mbox{B}(B^\pm\to\pi^0 K^\pm)}
= 0.47\pm 0.24 \,.
\end{equation}
It is possible to derive an exact theoretical lower bound on $R_*$
by varying the strong phases $(\eta,\phi)$ independently between $-\pi$
and $\pi$. Knowing the other parameters ($\bar\varepsilon_{3/2}$
from data and $\delta_{\rm EW}$ from theory), this bound implies an
exclusion region for $\cos\gamma$, which becomes larger the smaller
the values of $R_*$ and $\bar\varepsilon_{3/2}$ are. Since the
$B^\pm\to\pi^\pm K^0$ branching ratio enters these two quantities
in an opposite way, it is advantageous to consider the ratio
${\cal R}=(1-\sqrt{R_*})/\bar\varepsilon_{3/2}$, for which the
experimental error on this branching ratio tends to cancel.\cite{Frank}
The current value is ${\cal R}=1.32\pm 0.47$. Theoretically, this ratio
has the advantage of being independent of $\bar\varepsilon_{3/2}$ to
leading order, and one obtains the bound~\cite{us}
\begin{equation}
{\cal R} \le |\delta_{\rm EW}-\cos\gamma|
+ O(\bar\varepsilon_{3/2},\varepsilon_a) \,.
\end{equation}
An exact formula for the higher-order terms can be found in the
literature.\cite{me} For values ${\cal R}>0.8$ the exact bound is
stronger than the approximate one shown above even after the
rescattering effects parametrized by $\varepsilon_a$ are included.
Varying the parameters in the intervals
$0.18\le\bar\varepsilon_{3/2}\le 0.30$ and $0.49\le\delta_{\rm EW}
\le 0.79$, and setting $\varepsilon_a=0.1$, we obtain the bound
shown in Figure~\ref{fig:bound}. Note that the effect of the
rescattering contribution is very small. The table next to the
figure shows the resulting bounds on $|\gamma|$ at different
confidence levels, obtained by taking into account that in
the Standard Model the largest allowed value of ${\cal R}$ is 1.35.
(This is more conservative than assuming a two-sided Gaussian
distribution.)
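The leading-order bound can be turned into a quick numerical estimate of the exclusion region. The sketch below uses only the approximate relation ${\cal R}\le|\delta_{\rm EW}-\cos\gamma|$ with the most conservative value $\delta_{\rm EW}=0.79$; the exact bound used for the table is stronger for ${\cal R}>0.8$, so the tabulated limits differ somewhat.

```python
import math

def gamma_bound_deg(R, delta_ew=0.79):
    """Leading-order exclusion |gamma| > arccos(delta_EW - R).

    Uses the most conservative delta_EW; the exact bound in the text
    is stronger for R > 0.8, so tabulated limits differ somewhat."""
    c = delta_ew - R
    if c >= 1.0:
        return 0.0  # no constraint
    return math.degrees(math.acos(c))

print(round(gamma_bound_deg(0.85)))  # ~93 degrees, as in the table
print(round(gamma_bound_deg(0.56)))  # ~77; the exact treatment gives 71
```

The second case shows why the exact bound matters: at the 90\% CL value ${\cal R}=0.56$ the approximate relation alone would give a slightly weaker limit than the $71^\circ$ quoted in the table.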
\begin{figure}
\epsfxsize=6cm
\epsfbox{bound.eps}
\raisebox{3.2cm}{
\begin{tabular}{|l|c|l|}
\hline
\qquad~\raisebox{0pt}[11pt][4.5pt]{${\cal R}$} & CL & ~~\,Bound \\
\hline
\raisebox{0pt}[11pt][0pt]{1.32} (mean) & \phantom{0}5\% &
$|\gamma|>157^\circ$ \\
\raisebox{0pt}[9pt][0pt]{0.85} ($1\sigma$) & 70\% &
$|\gamma|>93^\circ$ \\
\raisebox{0pt}[9pt][0pt]{0.56} ($1.62\sigma$) & 90\% &
$|\gamma|>71^\circ$ \\
\raisebox{0pt}[9pt][0pt]{0.41} ($1.94\sigma$) & 95\% &
$|\gamma|>60^\circ\,\mbox{or}\!$ \\
\raisebox{0pt}[9pt][6pt]{} & & $|\gamma|<26^\circ$ \\
\hline
\end{tabular}}
\caption{
Left: Theoretical upper bound on the ratio ${\cal R}$ versus
$|\gamma|$ for $\varepsilon_a=0.1$ (solid) and $\varepsilon_a=0$
(dashed). The horizontal line and band show the current experimental
value with its $1\sigma$ variation. Right: Bounds on $|\gamma|$
obtained for specific values of ${\cal R}$.
\label{fig:bound}}
\end{figure}
From a global analysis of the available information on the CKM
matrix elements, one can derive indirect constraints on $\gamma$
that lead to the allowed range $37^\circ\le\gamma\le 118^\circ$,
where the upper limit is determined by the current experimental limit
on $B_s$--$\bar B_s$ mixing, whereas the lower bound is deduced from
the measurement of CP violation in $K$--$\bar K$ mixing.\cite{BaBar}
Without this information, i.e.\ using data from $B$ physics alone,
this lower bound would disappear, and $\gamma=0$ would still be
allowed. The 90\% CL bound on $|\gamma|$ derived here, combined with
the upper bound from $B_s$--$\bar B_s$ mixing, implies that
$71^\circ<|\gamma|<118^\circ$, which is a significant improvement
over the range obtained from the global analysis. This, together with
the fact that $|V_{ub}|\ne 0$ as shown by the existence of semileptonic
$b\to u$ transitions, proves that the unitarity triangle is not
degenerate to a line. In the context of the CKM model, this implies
direct CP violation in $B$ decays.
\section{\boldmath Extraction of $\gamma$\unboldmath}
Ultimately, one would like not only to derive a bound on $\gamma$,
but to measure this parameter directly from a study of CP violation.
Recently, we have described a strategy for extracting $\gamma$ from
$B^\pm\to\pi K$ decays,\cite{us2} which generalizes a method suggested
by Gronau, Rosner and London~\cite{GRL} to include the effects of EW
penguins. This approach has later been generalized to account for
rescattering contributions to the $B^\pm\to\pi^\pm K^0$ decay
amplitudes.\cite{me}
In addition to the ratio $R_*$, one considers the following
combination of the direct CP asymmetries in the decays
$B^\pm\to\pi K$:
\begin{equation}
\widetilde A \equiv \frac{A_{\rm CP}(\pi^0 K^\pm)}{R_*}
- A_{\rm CP}(\pi^\pm K^0) \,.
\end{equation}
With this definition, the rescattering effects parametrized by
$\varepsilon_a$ are suppressed by a factor of $\bar\varepsilon_{3/2}$
and thus reduced to the percent level. The same is true for the ratio
$R_*$. Explicitly, we have
\begin{eqnarray}
R_*^{-1} &=& 1 + 2\bar\varepsilon_{3/2}\cos\phi\,
(\delta_{\rm EW}-\cos\gamma) \nonumber\\
&&\mbox{}+ \bar\varepsilon_{3/2}^2\,
(1-2\delta_{\rm EW}\cos\gamma+\delta_{\rm EW}^2)
+ O(\bar\varepsilon_{3/2}\,\varepsilon_a) \,, \nonumber\\
\widetilde A &=& 2\bar\varepsilon_{3/2} \sin\gamma \sin\phi
+ O(\bar\varepsilon_{3/2}\,\varepsilon_a) \,.
\end{eqnarray}
For fixed values of $\bar\varepsilon_{3/2}$ and $\delta_{\rm EW}$,
these equations define contours in the $(\gamma,\phi)$ plane. When
the rescattering corrections from $\varepsilon_a$ are included,
these contours become narrow bands. From the intersection of the
contour bands for $R_*$ and $\widetilde A$ one obtains the values of
$\gamma$ and the strong phase $\phi$ up to possible discrete
ambiguities. For some typical numerical examples, the theoretical
uncertainties on the extracted values of $\gamma$ resulting from the
variation of the input parameters $\bar\varepsilon_{3/2}$,
$\delta_{\rm EW}$ and $\varepsilon_a$ are found to add up to a
total error of order $\delta\gamma\sim 10^\circ$.\cite{us2,me} A
precise determination of this error requires, however, to know the
actual values of $R_*$ and $\widetilde A$. For instance, Gronau
and Pirjol~\cite{Pirj} find a larger error for the specific case
where the product $|\sin\gamma\sin\phi|$ is {\em very\/} close to 1,
which would imply a value of the CP asymmetry $\widetilde A$ close
to 50\%.
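For fixed hadronic parameters, the leading-order expressions for $R_*$ and $\widetilde A$ are straightforward to evaluate numerically when tracing the $(\gamma,\phi)$ contours. The sketch below drops the $O(\bar\varepsilon_{3/2}\,\varepsilon_a)$ rescattering terms and uses the central values $\bar\varepsilon_{3/2}=0.24$ and $\delta_{\rm EW}=0.64$.

```python
import math

def observables(gamma, phi, eps=0.24, dew=0.64):
    """Leading-order R_* and A-tilde from the expressions above,
    dropping the O(eps_bar * eps_a) rescattering terms."""
    cg, cp = math.cos(gamma), math.cos(phi)
    inv_rstar = (1.0 + 2.0 * eps * cp * (dew - cg)
                 + eps**2 * (1.0 - 2.0 * dew * cg + dew**2))
    atilde = 2.0 * eps * math.sin(gamma) * math.sin(phi)
    return 1.0 / inv_rstar, atilde

# Example point on the contour: gamma = 90 deg, vanishing strong phase.
rs, at = observables(math.radians(90.0), 0.0)
print(f"R_* = {rs:.3f}, A~ = {at:.3f}")
```

Scanning such points over a grid in $(\gamma,\phi)$ reproduces the contours whose intersections determine $\gamma$ up to discrete ambiguities.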
\section{Conclusions}
Measurements of the exclusive hadronic decays $B\to\pi K$ provide
interesting information on the weak phase $\gamma$. Using CLEO data
on the CP-averaged branching ratios of the two charged decay modes,
we have derived the bound $|\gamma|>71^\circ$ at 90\% CL. This bound
is stronger than the lower bound derived from the global analysis
of all other information on the CKM matrix. Combined with constraints
from $B_s$--$\bar B_s$ mixing and semileptonic $b\to u$ decays, and
in the context of the CKM model, it proves the existence of direct
CP violation in $B$ decays.
\newpage
\section*{Acknowledgments}
It is a pleasure to thank the organizers of WIN99, C.A.~Dominguez,
R.D. Viollier and their staff, for arranging a marvellous meeting in
a fantastic setting. I am grateful to Jon Rosner for collaboration
on most of the work reported here. I also wish to thank Frank
W\"urthwein for useful comments. This work was supported by the
Department of Energy under contract DE--AC03--76SF00515.
\def\mysection#1#2{\section{#1}\label{sec:#2}}
\def\mysubsection#1#2{\subsection{#1}\label{sec:#2}}
\def\mysubsubsection#1#2{\subsubsection{#1}\label{sec:#2}}
\newcommand{\refSec}[1]{Section~\ref{sec:#1}}
\newcommand{\refFig}[1]{Figure~\ref{fig:#1}}
\newcommand{\refEq}[1]{Equation~\ref{eq:#1}}
\newcommand{\refTbl}[1]{Table~\ref{tbl:#1}}
\newcommand{\unsure}[1]{{\sethlcolor{yellow}\hl{#1}}}
\definecolor{piotrcolor}{rgb}{0.1,0.6,0.2}
\newcommand{\piotr}[1]{\textcolor{piotrcolor}{Piotr: #1}}
\definecolor{rafalcolor}{rgb}{0.6,0.2,0.2}
\newcommand{\rafal}[1]{\textcolor{rafalcolor}{Rafal: #1}}
\definecolor{Deniscolor}{rgb}{0.6,0.2,0.6}
\newcommand{\Denis}[1]{\textcolor{Deniscolor}{Denis: #1}}
\definecolor{marekcolor}{rgb}{0.1,0.25,0.65}
\newcommand{\marek}[1]{\textcolor{marekcolor}{Marek: #1}}
\definecolor{karolcolor}{rgb}{0.7,0.25,0.3}
\newcommand{\karol}[1]{\textcolor{karolcolor}{Karol: #1}}
\definecolor{vamsicolor}{rgb}{1.0,0.65,0}
\newcommand{\vamsi}[1]{\textcolor{vamsicolor}{Vamsi: #1}}
\definecolor{changedcolor}{rgb}{1.0,0.0,0.0}
\newcommand{\changed}[1]{\textcolor{changedcolor}{#1}}
\renewcommand{\hl}[1]{#1}
\renewcommand{\st}[1]{}
\renewcommand{\vamsi}[1]{}
\renewcommand{\changed}[1]{#1}
\DeclareGraphicsExtensions{.png,.pdf,.ai,.psd}
\DeclareGraphicsRule{.ai}{pdf}{.ai}{}
\DeclareGraphicsRule{.psd}{pdf}{.psd}{}
\usepackage[pagebackref=true,breaklinks=true,letterpaper=true,colorlinks,bookmarks=false]{hyperref}
\cvprfinalcopy
\def\cvprPaperID{28}
\def\mbox{\tt\raisebox{-.5ex}{\symbol{126}}}{\mbox{\tt\raisebox{-.5ex}{\symbol{126}}}}
\ifcvprfinal\pagestyle{empty}\fi
\begin{document}
\title{Towards a quality metric for dense light fields}
\author{Vamsi Kiran Adhikarla$^{1}$
\and
Marek Vinkler$^{1}$
\and
Denis Sumin$^{1}$
\and
Rafa\l{} K. Mantiuk$^{3}$
\and
Karol Myszkowski$^{1}$
\and
Hans-Peter Seidel$^{1}$
\and
Piotr Didyk$^{1,2}$
}
\twocolumn
[
\begin{@twocolumnfalse}
\maketitle
\vspace{-1.0\baselineskip}
\begin{center}
$^{1}$MPI Informatik \quad $^{2}$Saarland University, MMCI \quad $^{3}$The Computer Laboratory, University of Cambridge
\end{center}
\vspace{2\baselineskip}
\end{@twocolumnfalse}
]
\thispagestyle{empty}
\begin{abstract}
Light fields have become a popular representation of three-dimensional
scenes, and there is interest in their processing, resampling, and
compression. As those operations often result in loss of quality, there
is a need to quantify it. In this work, we collect a new dataset of dense reference and
distorted light fields as well as the corresponding quality scores which are scaled in
perceptual units. The scores were acquired in a subjective experiment using
an interactive light-field viewing setup. The dataset contains typical
artifacts that occur in the light-field processing chain due to light-field reconstruction, multi-view compression, and limitations of automultiscopic
displays. We test a number of existing objective quality
metrics to determine how well they can predict the quality of light fields. We find that the existing image quality metrics provide good
measures of light-field quality, but require dense reference light fields for optimal performance. For more complex tasks of comparing two distorted light fields, their performance drops significantly, which reveals the need for new, light-field-specific metrics.
\end{abstract}
\mysection{Introduction}{Introduction}
A light field can be seen as a generalization of a 2D image, which
encodes most of the depth cues and allows rendering a
scene with arbitrary simulated optics (e.g., defocus
blur) \cite{Levoy1996}. It is a convenient representation for
automultiscopic and \changed{light-field displays}~\cite{Wetzstein2012},
but also an attractive format for capturing high-quality cinematographic
content, offering new editing possibilities in post-production \cite{LytroCinema}. Due to the enormous storage requirements, light fields are usually
sparsely sampled in spatial and angular dimensions, stored using lossy
compression, and reconstructed later. \changed{It is unclear how the distortions introduced on the way affect the perceived quality.}
Similar problems have been addressed for 2D images, videos, and sparse multiview content. Many quality metrics have been designed to predict perceived differences between various versions of the \changed{same content}~\cite{Aydin2010}. However, measuring quality for dense light fields still remains a complex task. While several works applied the existing metrics to \changed{such content} \cite{Higa2013,Dansereau2013}, their performance has never been systematically evaluated in this context. One of the challenges is acquiring dense light-field data to validate a metric. Wide baselines as in multi-camera rigs~\cite{Wilburn2005} need to be considered, and the reference light fields should be sufficiently dense to avoid uncontrolled visual artifacts. Obtaining human responses for light-field distortions is also difficult due to current display limitations. This work is an attempt to overcome these problems by first building a new dense light-field dataset which is suitable for testing quality metrics, and second, using a custom light-field viewing setup to obtain the quality judgments for this dataset. The collected subjective scores are used to evaluate the performance of existing metrics in the context of dense light fields.
We focus on light-field-specific angular effects akin to motion parallax, complex surface appearance, and binocular vision that arise in free viewing experience. To capture a rich variability over these effects and make quality scaling in our perceptual experiments tractable, we design fourteen real and synthetic scenes and introduce light-field distortions that are specific to light-field reconstruction, compression, and display. We then run a pair-wise comparison experiment over light-field pairs, and derive perceptual scaling of differences between original and distorted stimuli. This allows us to investigate the suitability of a broad spectrum of existing image, video, and multiview quality metrics to predict such perceptual scaling. We also propose simple extensions of selected metrics to capture the angular aspects of light-field perception. While the original metrics are not meant for light fields, our results show that they can be used in this context, given a dense light field as the reference. We also demonstrate that the robustness of such metrics predictions drops when evaluating the quality between two distorted light fields.
The main contributions of this work are:
\begin{itemize}
\vspace{-0.5em}
\item a publicly available dense light-fields dataset that is designed for training and evaluating quality metrics;
\vspace{-0.5em}
\item a perceptual experiment that provides human quality judgments for several typical light-field distortions;
\vspace{-0.5em}
\item an evaluation, analysis, and extensions of existing quality metrics in the context of light fields;
\vspace{-0.5em}
\item identified challenges of quality assessment in light fields, such as the need for a high quality reference.
\end{itemize}
\mysection{Previous works}{PreviousWork}
In this section, we provide an overview of existing datasets for light fields as well as the experiments that measure the perceived distortions in various types of content.
\textbf{Light-field datasets:}
There are several publicly available light-field datasets. The most popular ones are: the 4D light-field dataset~\cite{Wanner13} containing seven synthetic scenes and five real-world scenes, the Stanford archive \cite{Stan2008} with twenty 4D light fields, and the Disney 3D light-field dataset \cite{Kim2013} containing five scenes. Although the first two datasets provide good quality and a reasonable number of light fields, they are captured over very narrow baselines that are insufficient for the new generation of autostereoscopic displays. The Disney dataset provides high spatio-angular resolution light fields; however, they are few and do not have consistent spatial and angular resolution, which makes them difficult to use in quality evaluations. In the context of quality evaluation of 3D light fields, three real-world light fields are provided in the IRCCyN/IVC DIBR Images database \cite{Bosc2011}. These contain several scenes captured along a wide baseline at the cost of reduced angular resolution. Tamboli~et~al. \cite{Tamboli16} provided 360$^{\circ}$ round-table shots of three scenes that are used for quality evaluation on a 3D light-field display. These are rather simple scenes with single objects, and the images contain a lot of noise. In our work, we provide the first consistent dataset of dense, complex-scene light fields with large appearance variation. We use the dataset for training and evaluating quality metrics. The dataset can also serve as ground truth for automultiscopic displays.
\textbf{Metrics and experiments:}
Because of their proven efficiency on 2D images, 2D objective metrics are viable candidates for evaluating light-field quality. Yasakethu et al. \cite{Yasakethu08} tested the suitability of objective measures -- Structural SIMilarity (SSIM)~\cite{Wang2004}, Peak Signal-to-Noise Ratio (PSNR), and Video Quality Metric (VQM)~\cite{Pinson2004} -- for quality assessment of stereoscopic and 2D+Depth videos compressed at different bitrates. They carried out subjective experiments on an autostereoscopic display and showed that 2D metrics can be used separately on each view to assess 3D video quality. However, they used few sequences and studied only compression artifacts. \changed{Several metrics have been proposed to determine the quality of synthesized views from multiview images. Bosc et al.{~\cite{Bosc2011}} advocated two measures for assessing the quality of synthesized views, but did not conduct thorough subjective studies. Solh et al.{~\cite{Solh2009}} presented a metric for quantifying the geometric and photometric distortions in multiview acquisition. Bosc et al.{~\cite{Bosc2013}} suggested a method to assess the quality of virtual synthesized views in the context of multiview video. Battisti et al.{~\cite{Battisti2015}} proposed a more sophisticated framework for evaluating the quality of depth-image-based rendering techniques by comparing the statistical features of wavelet subbands, using image registration and skin detection steps for additional optimization. Sandic et al.{~\cite{Sandic2016}} exploited multi-scale pyramid decompositions with morphological filters to obtain the quality of intermediate views and showed that they achieve significantly higher correlation with subjective scores. These methods form a class of metrics specific to view-interpolation artifacts, and 2D stimuli containing the interpolated views are used for subjective experiments}.
Vangorp et al. \cite{Vangorp2011} ran a psychophysical study to account for the plausibility of visual artifacts associated with view interpolation methods. They considered such artifacts as a function of the number of input images; however, they limited their study to monocular viewing and Lambertian surfaces. An experiment was also performed on precomputed videos, so that the impact of the user's interaction and the dynamic aspects of free viewing could be judged. More recently, this work was extended to transitions between videos \cite{Tompkin2013}. Similar studies were also performed in the context of panoramas \cite{Morvan2009}. Tamboli et al.~\cite{Tamboli16} conducted subjective studies on a 3D light-field display. Users were asked to judge the quality as perceived from different viewing locations in front of the display, and the scores were averaged over all locations. The user could rate the quality only from a certain viewing position. Moreover, they considered only three distinct scenes. We believe that, for inferring light-field quality, all the views should be taken into account at the same time.
\textbf{Light-field displays:}
\changed{Our work focuses on wide-baseline 3D light fields which enable faithful simulation of stereoscopic viewing and the continuous horizontal motion parallax crucial for new light-field displays. Although many light-field display designs exist{~\cite{Masia2013}}, including more advanced ones that provide focus cues{~\cite{Maimone2013}}, they suffer from several drawbacks such as limited field of view, discontinuous motion parallax, visible crosstalk, and limited depth budget. Several strategies have been proposed to minimize these artifacts by filtering the content~\cite{Zwicker2006,Du2014} and manipulating depth~\cite{Lang2010,Didyk2012,Masia2013}. However, display designs that enable displaying reference light fields for quality measurements are still unavailable.}
\mycfigure{Scenes.jpg}{
Representative images of all light fields in our collection. Below each image representative EPIs are presented.
}
\mysection{Data collection}{DataCollection}
Our dataset consists of light fields which are parameterized using two parallel planes~\cite{Levoy1996}. We consider only horizontal motion parallax, which can be described using one plane and a line parallel to it.
\setlength{\intextsep}{5pt}
\setlength{\columnsep}{10pt}
\begin{wrapfigure}{r}{0.5\linewidth}
\includegraphics[width=\linewidth]{ parametrization}
\end{wrapfigure}
More formally, we denote our light fields as $L(\mathbf \omega, \mathbf x)\in(\mathbb R \times \mathbb R^2)\rightarrow\mathbb R^3$, where $\mathbf \omega$ is a position on the line, and $\mathbf x$ is a position on the plane. We refer to them as \emph{angular} and \emph{spatial} coordinates, respectively. In practice, $\mathbf \omega$ describes a position of the viewer, and $\mathbf x$ is a coordinate of the observed image. Below, we describe the acquisition of our light fields.
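The parameterization above maps naturally onto a multi-dimensional array. The sketch below illustrates the indexing convention; the spatial resolution is downscaled from the dataset's actual 960$\times$720 images to keep the example light-weight, and the array contents are a stand-in for real data.

```python
import numpy as np

# Downscaled stand-in for one light field from the dataset: the real data
# has 101 views of 960x720 RGB images. Axis order: (angular, vertical,
# horizontal, color), i.e. (omega, y, x, c).
n_views, height, width = 101, 72, 96
L = np.zeros((n_views, height, width, 3), dtype=np.float32)

view = L[42]          # the image seen from angular position omega = 42
epi = L[:, 36, :, :]  # an epipolar-plane image (EPI): one row across all views
```

Slicing a fixed spatial row across all views yields the epipolar-plane images (EPIs) shown in Figure~\ref{fig:Scenes.jpg}.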
\mysubsection{Scenes}{Scenessec}
We designed and rendered nine synthetic scenes and captured five real-world scenes (Figure~\ref{fig:Scenes.jpg}). They span a large variety of conditions, e.g.,\ outdoor/indoor, daylight/night, etc. They also contain objects with a large range of appearance properties. The distribution of scene objects in depth is widely varied in order to study the artifacts resulting from disocclusions and depth discontinuities. For capturing real-world scenes, we used a one-meter-long motorized linear stage with a Canon EOS 5D Mark II camera and 50\,mm and 28\,mm lenses. \changed{After capturing all views, we performed lens distortion correction using \changed{PTLens~\cite{PTlens}}, estimated the camera poses using \changed{Voodoo camera tracker~\cite{Voodoo}}, and rectified all the images \changed{using the baseline drawn from the first to the last camera, following the approach in~\cite{Fusiello2000}}.}
For rendered images, we used cameras with off-axis asymmetric frustums. For real-world scenes, the same effect was achieved by applying horizontal shift to the individual views.
All the light fields have identical spatial and angular resolution (960$\times$720$\times$101). The angular resolution was chosen high enough to avoid visible angular aliasing. This was achieved by assuring that the maximum on-screen disparity between consecutive views is around 1\,pixel. To guarantee comfortable viewing, the total disparity range during the presentation was limited to 0.2 visual degrees~\cite{Shibata2011}.
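As a back-of-envelope illustration, the 0.2 visual-degree disparity budget can be converted into on-screen pixels. The 60\,cm viewing distance and the 27'' Full HD panel come from the experiment section; the exact active panel width used below is an assumed typical value, not a measured one.

```python
import math

# Convert the 0.2 visual-degree disparity budget into on-screen pixels.
# The panel width is an assumed typical value for a 27" 16:9 display.
viewing_distance_cm = 60.0
panel_width_cm = 59.8                      # approx. active width, assumption
pixel_pitch_cm = panel_width_cm / 1920.0   # horizontal pixel size

disparity_deg = 0.2
disparity_cm = 2 * viewing_distance_cm * math.tan(math.radians(disparity_deg / 2))
disparity_px = disparity_cm / pixel_pitch_cm   # roughly 7 pixels
```

Under these assumptions, the comfortable total disparity range corresponds to only a handful of pixels on screen.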
\mysubsection{Distortions}{Distortions}
We considered typical light-field distortions that are specific to transmission, reconstruction, and display. For each distortion, we generated multiple light fields by varying the distortion severity level. \changed{The exact levels were chosen to keep the differences between two consecutive levels small and similar. To this end, we conducted a small pilot study with 10 distortion levels, and then, selected the final levels manually.}
\textbf{Transmission:} Transmitting light-field data requires an efficient compression algorithm. We consider the well-known 3D extension of the HEVC encoder \cite{GTech2016}. The light-field views are encoded into a bit stream at various quantization steps, and then decoded back from the bit stream using the 3D-HEVC coder. We chose the following quantization steps: \{25, 29, 33, 37, 41, 45\}.
\textbf{Reconstruction:} Light-field reconstruction techniques are used to recover a dense light field from sparse view samples. They interpolate the missing views using several techniques which alter the nature and appearance of the distortion. We chose the distortions resulting from linear ({\sc Linear}\xspace) and nearest-neighbor ({\sc NN}\xspace) interpolation, as well as image warping using optical flow estimation ({\sc Opt}\xspace). We also investigated the impact of using quantized depth maps ({\sc DQ}\xspace). All the distortions are parametrized by the angular subsampling factor $k$ (the distortion severity) that defines the angular resolution of the light field prior to applying the reconstruction technique. We considered $k \in \{2,5,8,11,18,25\}$. The linear filter reconstructs the dense light field by blending the reference views, and the {\sc NN}\xspace method clones the closest reference view. For the {\sc Opt}\xspace method, we used the TV-L1 optical flow~\cite{Sanchez2013} and applied an image-warping technique~\cite{Brox2004} to synthesize in-between views. For {\sc DQ}\xspace, we considered the ground-truth depth map quantized to 8 discrete levels. Then, we used the same image-warping technique~\cite{Brox2004} to reconstruct the light field. As this distortion requires ground-truth depth information, it is only applied to the synthetic scenes.
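The two simplest baselines above can be sketched as follows: given views kept at indices `kept_idx` after angular subsampling, the missing views are recovered either by cloning the closest kept view ({\sc NN}\xspace) or by blending the two bracketing kept views ({\sc Linear}\xspace). The array layout `(views, height, width, channels)` and the function names are assumptions of this sketch, not the authors' implementation.

```python
import numpy as np

def reconstruct(sparse_views, kept_idx, n_views, mode="linear"):
    # sparse_views: (len(kept_idx), height, width, channels)
    out = np.empty((n_views,) + sparse_views.shape[1:], dtype=np.float64)
    for v in range(n_views):
        pos = int(np.searchsorted(kept_idx, v))
        right = min(pos, len(kept_idx) - 1)
        left = max(right - 1, 0)
        if mode == "nn":
            # clone whichever kept view is angularly closest
            src = left if abs(v - kept_idx[left]) <= abs(kept_idx[right] - v) else right
            out[v] = sparse_views[src]
        elif kept_idx[left] == kept_idx[right]:
            out[v] = sparse_views[left]
        else:
            # blend the two bracketing kept views by angular distance
            t = (v - kept_idx[left]) / (kept_idx[right] - kept_idx[left])
            t = min(max(t, 0.0), 1.0)
            out[v] = (1 - t) * sparse_views[left] + t * sparse_views[right]
    return out
```

The {\sc Opt}\xspace and {\sc DQ}\xspace variants replace the blend with flow- or depth-guided image warping and are not shown here.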
\textbf{Display:} As an example of multiview autostereoscopic display artifacts, we chose crosstalk between adjacent views, which can be modeled as a Gaussian blur in the angular domain \cite{Liu2015}. Consequently, we include such artifacts in our dataset ({\sc GAUSS}). In particular, we considered the same angular subsampling parameters used in the light-field reconstruction distortions and created hypothetical displays with the corresponding number of views. The upsampling to a higher-resolution light field was performed using the display crosstalk model.
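The crosstalk model above amounts to a one-dimensional Gaussian blur applied along the angular (view) axis only, leaving spatial detail untouched. The kernel width `sigma_views` is a free parameter of this sketch.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def simulate_crosstalk(light_field, sigma_views=1.5):
    # light_field: (views, height, width, channels); blur along axis 0 only
    return gaussian_filter1d(light_field, sigma=sigma_views, axis=0, mode="nearest")
```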
Four different distortions with all severity levels were applied to every scene. To all synthetic scenes we applied {\sc NN}\xspace, {\sc Linear}\xspace, {\sc Opt}\xspace, and {\sc DQ}\xspace. To all real-world scenes, we applied {\sc NN}\xspace, {\sc Opt}\xspace, {\sc GAUSS}, and {\sc HEVC}\xspace. Including the original light fields, our database consists of 350 different light fields and is available online~\cite{MPILF}. Examples of the resulting artifacts are presented in Figure~\ref{fig:Distortions}. Please refer to the supplemental materials for the whole light-field dataset.
\myfigure{Distortions}{
Examples of distortions introduced to our light field for one of our scenes ({\sc Barcelona}). The images visualize central EPI of each of the distorted light fields and the enlarged portion of it is shown in the bottom row.
}
\mysection{Experiment}{Experiment}
To acquire subjective quality scores that enable both training and testing different quality metrics, we performed a large-scale subjective experiment.
\textbf{Equipment:}
\changed{To simulate stereoscopic viewing with high-quality motion parallax, we used our own setup} (Figure~\ref{fig:ExperimentSetup}) that consists of an ASUS VG278 $27\,''$ Full HD 120\,Hz LCD desktop monitor and an NVIDIA 3D Vision 2 Kit for displaying stereoscopic images. Motion parallax was reproduced using custom head tracking in which a small LED headlamp was tracked with a Logitech HD C920 Pro webcam (refer to the supplemental video).
The head tracking allowed the participants to view light fields in an unconstrained manner. The viewing distance was approximately 60\,cm, and users could move their heads along a baseline of 20\,cm in the direction parallel to the screen plane.
The eye accommodation was fixed to the screen and did not change with eye vergence. \changed{The display was operated at full brightness to minimize the effect of luminance on depth perception{~\cite{Didyk2012}}.}
\myfigure{ExperimentSetup}{Experiment session: viewer's position is tracked using a head lamp and a webcam, a pair of NVIDIA 3D Vision 2 Kit active glasses provides stereoscopic viewing.}
\textbf{Stimuli:}
Each stimulus was a pair of light fields. As the scaling procedure we use for obtaining quality scores (Section~\ref{sec:Analysis}) can handle an incomplete set of comparisons and performs better when more comparisons are made for pairs of similar quality \cite{Silverstein2001}, each pair consisted of light fields with neighboring severity levels of the same distortion type. This results in 336 different stimuli, which were presented stereoscopically.
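The stimulus count follows directly from the design described above: each scene/distortion combination has six severity levels plus the reference (seven conditions), and only neighboring severity levels are paired.

```python
# Sanity check on the stimulus count: 14 scenes x 4 distortions x 6
# neighboring-level pairs = 336 stimuli.
n_scenes = 9 + 5              # synthetic + real-world
n_distortions = 4             # four distortion types per scene
conditions = list(range(7))   # 0 = reference, 1..6 = severity levels
neighbor_pairs = list(zip(conditions, conditions[1:]))
n_stimuli = n_scenes * n_distortions * len(neighbor_pairs)
```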
\textbf{Task:}
We experimented with direct rating methods, such as ACR \cite{ITU-T-P.9102008}, in order to measure mean opinion scores of the distorted images. However, we found these methods to be insensitive to subtle but noticeable degradations of quality. Participants also found the direct rating task difficult. Therefore, we decided to use a more sensitive pair-wise comparison method with a two-alternative forced choice. In each trial, the participants were shown a pair of light fields side-by-side, and the task was to indicate the light field that they ``would prefer to see on a 3-dimensional display''.
Participants were given unlimited time to investigate the light fields, but they were allowed to give their response only after 80\% of all perspective images had been seen. The order of the light-field pairs as well as their placement on the screen were randomized. Before each session, the participants were provided with a form summarizing the task, and a training session was conducted to familiarize them with the experiment.
\textbf{Participants:}
Forty participants took part in the test, 20 male and 20 female, aged 24--40, with normal or corrected-to-normal vision. Each subject performed the test in three sessions within one week. In one session, the participants saw 120--180 light-field pairs covering all the test conditions, but for a subset of the scenes.
For a given subject, two test sessions were allowed during a single day, and these were separated by at least an hour of break.
\mysection{Analysis of subjective data}{Analysis}
The results of a pair-wise comparison experiment are usually scaled in just-noticeable-differences (JNDs). We observed, however, that considering the measured differences as ``noticeable'' leads to an incorrect interpretation of the experimental results. Two stimuli are 1\,JND apart if 75\% of observers can see the difference between them. However, our experimental question was not whether observers can tell that the light fields are different, but rather which one has higher quality. As shown in Figure~\ref{fig:jod_vs_jnd}, a pair of stimuli can be noticeably different from each other (JND $>$ 1) yet appear to have the same quality. For that reason, we denote the measured values as just-objectionable-differences ({\sc JODs}\xspace). These units quantify the quality difference in relation to the perfect reference image. Note that the measure of {\sc JOD}\xspace is more similar to visual equivalence \cite{Ramanarayanan2007a} or to quality expressed as a difference mean opinion score (DMOS) than to JNDs.
\myfigure{jod_vs_jnd}{Illustration of the difference between just-objectionable-differences (JODs) and just-noticeable-differences (JNDs). The image affected by blur and noise may appear to be similarly degraded in comparison to the reference image (the same JOD), but they are noticeably different and therefore several JNDs apart. The mapping between JODs and JNDs can be very complex and the relation shown in this plot using Cartesian and polar coordinates is just for illustration purposes.
}
To scale the results in {\sc JOD}\xspace units we used a Bayesian method based on the method of Silverstein and Farrell~\cite{Silverstein2001}. It employs a maximum-likelihood-estimator to maximize the probability that the collected data explains {\sc JOD}\xspace-scaled quality scores under the Thurstone Case V assumptions~\cite{RAFALC}. The optimization procedure finds a quality value for each pair of light fields that maximizes the likelihood modeled by the binomial distribution. Unlike standard scaling procedures, the Bayesian approach robustly scales pairs of conditions for which there is unanimous agreement. Such pairs are common when a large number of conditions are compared. It can also scale the result of an incomplete and imbalanced pair-wise design, when not all the pairs are compared and some are compared more often. As the pair-wise comparisons provide relative quality information, the {\sc JOD}\xspace values are relative. To maintain consistency across the scenes, we fix the starting point of the {\sc JOD}\xspace scale at 0 for different distortions and thus the quality degradation results in negative {\sc JOD}\xspace values.
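The scaling above can be sketched as follows: under Thurstone Case V, the probability that condition $i$ is preferred over $j$ is $\Phi((q_i - q_j)/\sigma)$, and the quality values $q$ maximize the binomial likelihood of the observed pairwise counts, with $\sigma$ chosen so that a 1-JOD gap maps to a 75\% preference rate. The crude numeric gradient descent below is a stand-in for the cited Bayesian solver, not a reimplementation of it.

```python
import math

SIGMA = 1.0 / 0.6745  # Phi(0.6745) ~= 0.75, so 1 JOD <-> 75% preference

def phi(z):
    # standard normal CDF
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def neg_log_lik(q, wins):
    # wins[i][j] = number of times condition i was preferred over j
    nll = 0.0
    for i in range(len(q)):
        for j in range(len(q)):
            if i != j and wins[i][j] > 0:
                p = min(max(phi((q[i] - q[j]) / SIGMA), 1e-9), 1 - 1e-9)
                nll -= wins[i][j] * math.log(p)
    return nll

def scale_jod(wins, iters=2000, lr=0.01, eps=1e-4):
    q = [0.0] * len(wins)
    for _ in range(iters):
        for k in range(1, len(q)):  # q[0] is pinned to 0 (the reference)
            bumped = q[:]
            bumped[k] += eps
            grad = (neg_log_lik(bumped, wins) - neg_log_lik(q, wins)) / eps
            q[k] -= lr * grad
    return q
```

With the reference pinned at 0, degraded conditions receive negative JOD values, matching the convention used in the plots.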
The results of the subjective quality assessment experiment are shown in Figure~\ref{fig:exp_results}. The error bars represent 95\% confidence intervals, relative to the reference light field, computed by bootstrapping (sampling with replacement).
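The bootstrap behind those intervals can be sketched as follows: per-observer scores for a condition are resampled with replacement, the statistic is recomputed on each resample, and the 2.5th/97.5th percentiles bound the interval. A plain mean stands in here for the full JOD-scaling pipeline.

```python
import random
import statistics

def bootstrap_ci(samples, n_boot=2000, alpha=0.05, seed=0):
    # resample with replacement, recompute the mean, take percentiles
    rng = random.Random(seed)
    means = sorted(
        statistics.mean(rng.choices(samples, k=len(samples)))
        for _ in range(n_boot)
    )
    return means[int(n_boot * alpha / 2)], means[int(n_boot * (1 - alpha / 2)) - 1]
```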
\mycfigure{exp_results}{
The results of the subjective quality assessment experiment. The distortion level indicates the distortion severity: 0 -- reference, 6 -- severest distortion level. JOD is the scaled subjective quality value. The error bars denote 95\% confidence intervals. The bars are horizontally displaced to avoid overlapping. The scene names are indicated in the corner of each plot. The character in parentheses after the scene name indicates whether the scene is synthetic (S) or real-world (R).
}
The results show interesting patterns in the objectionability of different distortions.
{\sc Opt}\xspace\ offers a consistent performance improvement over {\sc NN}\xspace. The only exception is the \emph{Furniture} scene featuring thin and irregularly shaped foreground objects, in which case all types of view interpolation are more objectionable than the selection of the nearest single view. The optical-flow interpolation works better for real-world scenes as there are more features that can be detected. The {\sc Linear}\xspace\ interpolation results in the worst performance in most cases, except for small distortion levels, which may indicate that the visible blur due to this distortion is strongly objectionable. Similar findings have been reported by Vangorp~et~al.~\cite{Vangorp2011} in their study on the visual performance of view interpolation methods in monocular vision.
{\sc HEVC}\xspace\ and {\sc GAUSS}\ distortions are usually the easiest to detect, as they induce a significant amount of spatial distortion compared to the others.
Overall, the results clearly show that light-field quality is scene-dependent and that a successful quality metric must predict the effect of scene content on the visibility of light-field distortions.
\mysection{Evaluation of quality metrics}{Metric}
We considered several popular image, video, stereo, and multiview quality metrics. We briefly describe the metrics and then show their individual performance on our dataset. To obtain the quality of a light field using image quality metrics, we apply the metrics to the individual light-field images and then average the scores over all images.
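The pooling just described can be sketched directly: a 2D metric is applied to each view of the reference/distorted pair, and the scores are averaged over all views. PSNR serves as the per-view metric here for concreteness; the function names are illustrative.

```python
import numpy as np

def psnr(ref, dist, peak=1.0):
    # peak signal-to-noise ratio for a single view
    mse = np.mean((np.asarray(ref, float) - np.asarray(dist, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def light_field_quality(ref_lf, dist_lf, view_metric=psnr):
    # ref_lf, dist_lf: (views, height, width, channels); average over views
    scores = [view_metric(r, d) for r, d in zip(ref_lf, dist_lf)]
    return sum(scores) / len(scores)
```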
\textbf{Quality metrics:}
\changed{Although studies show that perceptual metrics perform better than an absolute difference (AD)~\cite{Lin11}, we considered peak signal-to-noise ratio ({\sc PSNR}\xspace) because of its widespread use in image quality assessment. We also investigated {\sc SSIM\textsubscript{2D}}\xspace~\cite{Wang2004}, which is widely used on 2D images, and its extensions to the angular domain -- {{\sc SSIM\textsubscript{2D$\times$1D}}\xspace} and {{\sc SSIM\textsubscript{3D}}\xspace}. {{\sc SSIM\textsubscript{3D}}\xspace} computes the same statistics as standard {\sc SSIM\textsubscript{2D}}\xspace but on 3D patches extracted from the light-field volume. {{\sc SSIM\textsubscript{2D$\times$1D}}\xspace} uses a {2D$\times$1D} patch which contains a 2D window extracted from a particular view and a 1D row of pixels that extends from the center of the 2D window in the angular domain (see Figure{~\ref{fig:2D-3D}}). We applied the metrics to all light-field images without resampling and averaged the scores over all images. Although we experimented with various pooling strategies, we found that the average value performs best.
Based on their performance, we chose angular window sizes of 32 and 64 pixels for {\sc SSIM\textsubscript{2D$\times$1D}}\xspace\ and {\sc SSIM\textsubscript{3D}}\xspace, respectively. We also considered a multi-scale version of {\sc SSIM\textsubscript{2D}}\xspace -- {\sc MS-SSIM}\xspace~\cite{Wang2003} -- which extends {\sc SSIM\textsubscript{2D}}\xspace\ to compute differences on multiple scales. We also used {\sc GMSD}\xspace~\cite{Xue2014}, which provides good performance over a rich collection of image datasets. The most advanced 2D metric considered in our experiments was {\sc HDR-VDP-2}\xspace~\cite{Mantiuk2011}, which stands out among perception-based quality metrics.
}
\myfigure{2D-3D}{Patches used in our extensions of {\sc SSIM\textsubscript{2D}}\xspace.}
We further considered the NTIA General Model -- {\sc VQM}\xspace~\cite{Pinson2004} -- which was standardized for video-signal evaluation (ANSI T1.801.03-2003). For this metric, the light-field images are input in the form of a video panning from the leftmost view to the rightmost view and back. We also chose the stereoscopic image quality metric {\sc SIQM}\xspace \cite{Chen2013}, which is based on the concept of the cyclopean image; here, we averaged the scores obtained from all stereo pairs shown in our experiment. To capture the full range of stereo quality metrics, we also included a stereoscopic video quality metric, {\sc StSD\textsubscript{LC}}\xspace~\cite{DeSilva2013}.
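The frame ordering of the panning video fed to {\sc VQM}\xspace is simply a sweep from the leftmost view to the rightmost and back; the helper name below is illustrative.

```python
def panning_order(n_views):
    # left-to-right sweep followed by the return sweep (endpoints not repeated)
    return list(range(n_views)) + list(range(n_views - 2, -1, -1))
```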
Finally, we chose metrics that address multiview data and account for interpolation artifacts. {\sc 3DSwIM}\xspace, proposed by Battisti et al.~\cite{Battisti2015}, first shift-compensates blocks from the reference and distorted (interpolated) images. These matched blocks undergo the first level of a Haar wavelet transform, and a histogram of the sub-band corresponding to horizontal details in the block is computed. Finally, the Kolmogorov-Smirnov distance between these histograms is taken as the metric prediction. Another metric for multiview video is {\sc MP-PSNR}\xspace~\cite{Sandic2016}. It computes a multi-resolution morphological pyramid decomposition of the reference and test images. Detail images at the top levels of these pyramids are then compared through the mean squared error. The resulting per-pixel error maps are pooled and converted to a peak signal-to-noise ratio measure.
\mysubsection{Metric performance comparison}{Metric-performance}
\mycfigure{barplot}{The goodness-of-fit scores for the metrics expressed as Pearson Correlation Coefficient ($\rho$) and reduced chi-square ($\chi^2_{red}$) after cross-validation. The results for each cross-validation fold are shown. $\chi^2_{red} = 1$ indicates that the goodness of fit between the metric predictions and the subjective data is in perfect agreement with the measured subjective variance and $\rho = 1$ indicates perfect positive linear relation between objective scores and JODs. The error bars represent standard error.}
The quality values predicted by each metric are expected to be related to JOD values, but this relation can be complex and non-linear. To account for this relation, we follow a common practice and fit a logistic function:
\begin{equation}
q(o) = a_1 \left\{\frac{1}{2} - \frac{1}{1 + \exp\left[a_2 \left( o -
a_3 \right)\right]}\right\} + a_4 o + a_5,
\end{equation}
where $o$ is the output of a metric.
The parameters $a_{1..5}$ are optimized to minimize a given goodness-of-fit measure. We computed
several such measures, such as Spearman rank-order correlation, or
MSE, which can be found in the supplementary materials. Here we report
the reduced chi-squared statistic ($\chi^2_{red}$) and Pearson
correlation coefficient ($\rho$). $\chi^2_{red}$ is computed as a
weighted average of the squared differences, in which weights are the
inverse of sample variance. This accounts for the fact that larger JOD
values are more uncertain (refer to Figure~\ref{fig:exp_results}), and
therefore, the accuracy of their prediction can be lower. For a fair
comparison, we employed a seven-fold cross-validation across different
scenes. We measured the goodness-of-fit on two randomly chosen scenes
in a cross-validation fold and averaged the results over all
folds. The resulting Pearson correlation and $\chi^2_{red}$ values are
shown in Figure~\ref{fig:barplot}. The performance of the metrics on
individual distortions are shown in Figure~\ref{fig:distBarPlots}. \changed{A more elaborate analysis including the evaluation on real-world and synthetic scenes separately is presented in the supplementary materials.}
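The logistic mapping from the equation above and one common form of the reduced chi-square score can be written compactly. Dividing the inverse-variance-weighted squared residuals by the degrees of freedom is an assumption of this sketch; the text describes the statistic simply as a weighted average.

```python
import math

def logistic_map(o, a1, a2, a3, a4, a5):
    # the fitted mapping from raw metric output o to predicted JOD
    return a1 * (0.5 - 1.0 / (1.0 + math.exp(a2 * (o - a3)))) + a4 * o + a5

def chi2_red(pred, jod, var, n_params=5):
    # inverse-variance-weighted squared residuals over degrees of freedom
    resid = sum((p - j) ** 2 / v for p, j, v in zip(pred, jod, var))
    return resid / (len(jod) - n_params)
```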
The results show good performance of 2D image and video quality metrics. This is unexpected, as our dataset was meant to emphasize the visibility of angular artifacts, which are not directly considered by these metrics. We observed, however, that angular distortions indirectly translate into differences in spatial patterns, which could explain the good performance.
We hypothesize that relatively better performance of {\sc HDR-VDP-2}\xspace and {\sc GMSD}\xspace is
achieved by detecting changes in contrast across multiple scales,
which in case of {\sc HDR-VDP-2}\xspace is additionally backed by perceptual scaling of
distortions and discarding of those that are invisible. A comparable
performance of video ({\sc VQM}\xspace) and stereoscopic ({\sc StSD\textsubscript{LC}}\xspace) metrics can be
explained by their emphasis on the relation between neighboring views,
which in some way captures angular aspects of light-field viewing.
Figure~\ref{fig:distBarPlots} shows that some metrics are better at predicting some distortion types than others. For example, {\sc HDR-VDP-2}\xspace consistently under-predicts quality for {\sc HEVC}\xspace. Training such metrics for a particular distortion type could substantially boost their performance. Unexpectedly, our {\it ad hoc} attempts to extend the {\sc SSIM\textsubscript{2D}}\xspace metric by adding the angular dimension ({\sc SSIM\textsubscript{2D$\times$1D}}\xspace) or by directly considering 3D patches ({\sc SSIM\textsubscript{3D}}\xspace), which should account for angular changes, have led to significantly worse results. Clearly, there is room for improvement, and a suitable dataset, such as the one provided in this work, should help to develop better metrics in the future.
\mycfigure{distBarPlots}{ Left: The prediction accuracy per-distortion reported as reduced chi-squared goodness of fit score. Middle and right: $\chi^2_{red}$--fit for the metrics HDR-VDP and GMSD over all scenes. The prediction accuracy for individual distortions are shown inside the plots and the overall accuracy is indicated on the top of the plots.}
\mysubsection{Sparse light-field reference case}{distorted-reference}
\myfigure{dist_ref_plot}{The goodness-of-fit scores for the subset of the dataset when a dense LF is used as a reference (blue), when the nearest-neighbor reconstruction at the 2$^{nd}$ distortion level is a reference (cyan), or when optical flow is used to up-sample the reference LFs (yellow). The dots on the cyan bars mean that the value is statistically different from the dense LF case, and the dots on the yellow bars that the values are statistically different from the {\sc NN}\xspace case. The significance is computed by bootstrapping and running a one-tailed test ($p=0.05$). }
In all our tests, we provided a high-quality, 101-view light field as a reference for the quality metrics. In practice, in most applications only a sparsely sampled light field is available. When a sparse light field is used as a reference, a full-reference metric is asked to compare two distorted light fields without a perfect reference. This is a task that such metrics were not designed for, as they are intended to predict JODs relative to the perfect reference image, not JNDs relative to any other image (refer to Figure~\ref{fig:jod_vs_jnd}). This issue is potentially shared with other quality assessment tasks, for example when a metric is trained on 4K images but used on much lower resolution images. However, the problem is exacerbated in the case of light fields, where the reduction of angular resolution is often substantial.
To test whether the metrics can predict the quality of distorted light fields using sparse light fields as a reference, we measured the performance of the metrics on a subset of our dataset. As a reference, we chose light fields with the {\sc NN}\xspace distortion at severity level two, which corresponds to the original light fields subsampled to 21 angular views. For the test light fields, we considered all light fields with higher distortion levels. For a fair comparison, we also ran the metrics on the same subset using the full 101-view light fields as a reference. The results of these tests are shown as cyan and blue bars in Figure~\ref{fig:dist_ref_plot}. The significant differences in goodness-of-fit scores (marked with dots) show that metric predictions get worse if an imperfect (sparse) reference is used. This suggests that the existing metrics must be provided with a high-quality reference light field to reliably predict quality.
But if such a high-quality reference is not available, can it be approximated? Our subjective data shows that optical-flow interpolation ({\sc Opt}\xspace) produces the highest quality results. Therefore, we used {\sc Opt}\xspace to produce reference 101-view light fields from sparse 21-view light fields and reran the metrics on the subset. The results indicate that the predictions improved compared to using the sparse light field (yellow vs. cyan bars in Figure~\ref{fig:dist_ref_plot}). This suggests that a potential solution to the problem of an imperfect reference is to use a high-quality interpolation method to generate the reference.
\mysection{Conclusions and future work}{conclusions}
We have established a new 3D dense light-field dataset together with the subjective quality scaling for various distortions that occur in light-field applications.
Different methods in light-field processing lead to visual artifacts with quite different appearance, e.g., blur for {\sc Linear}\xspace, ghosting for {\sc Opt}\xspace, image flickering and jumping for {\sc NN}\xspace.
Our experiments reveal how these different artifacts affect perceived quality. Our subjective scores are derived from an interactive 3D light-field viewing setup and correspond to the overall quality of light fields rather than individual views. We have evaluated the potential of existing image, video, stereo, and multiview quality metrics in predicting the subjective scores. Our observations show that the metrics {\sc HDR-VDP-2}\xspace, {\sc GMSD}\xspace, {\sc StSD\textsubscript{LC}}\xspace, and {\sc VQM}\xspace perform reasonably well when comparing a distorted light field to a dense reference, and can be used in applications requiring such comparisons. When a dense light field is not available, which is the case in some applications, the usage of these metrics for quality assessment is not justified. The perceptually scaled data that we provide can be used for training and validating new light-field quality metrics. Of practical interest for such development is the problem identified in this work, where incomplete, sparse light fields must serve as the reference. Our results also reveal the quality of different light-field reconstruction methods, which can directly guide the choice of reconstruction technique. In the current work, we did not consider aspects such as the masking properties of the human visual system. It would be interesting to investigate how much the metrics gain by considering this effect rather than simply averaging scores over all views. When creating our dataset, we did not consider focus cues. We are, however, not aware of any display setup that could be used to evaluate both motion parallax and focus cue quality. We also believe that the problems revealed in this work should be addressed before including additional cues.
\vspace{2ex}
\textit{\textbf{Acknowledgements:}
This project was supported by the Fraunhofer and Max Planck cooperation
program within the German pact for research and innovation (PFI).
Denis Sumin was supported by the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 642841. The authors would like to thank Tobias Ritschel for the initial discussions and providing synthetic scenes.}
\section{Introduction}\label{sec:intro}
Let $X$ be a finite-dimensional CW-complex. From the perspective of homotopy theory, a topological vector bundle of complex rank $r$ over a space $X$ is identified with a classifying map $X \to BU(r)$. Topologically equivalent vector bundles over $X$ correspond to homotopic maps to $BU(r)$.
The integral cohomology of $BU(r)$ is generated by universal Chern classes $c_1,\ldots,c_r$, with $c_i \in H^{2i}(BU(r);\mathbb Z)$. These give rise to important invariants of complex bundles: the Chern classes of a bundle, defined for $V\colon X \to BU(r)$ as the pullbacks $$c_i(V):=V^*(c_i)\in H^{2i}(X;\mathbb Z).$$
In the case $X = \CP^n$, Chern classes are complete invariants of the {\em stable} equivalence class of the bundle. Explicitly, this means that $V\colon \CP^n \to BU(r)$ and $W\colon \CP^n \to BU(r')$ have the same Chern classes if and only if there exist positive integers $s$ and $t$ such that $V \oplus \underline \C^s$ and $W \oplus \underline \C^t$ are topologically equivalent. Here, $\underline \C$ is the trivial rank $1$ bundle on $\CP^n$.
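A standard illustration of stable equivalence between bundles of different ranks comes from the Euler sequence: writing $\mathcal O(1)$ for the line bundle on $\CP^n$ whose first Chern class generates $H^2(\CP^n;\mathbb Z)$, there is a topological splitting
$$T\CP^n \oplus \underline \C \cong \mathcal O(1)^{\oplus (n+1)},$$
so the rank $n$ tangent bundle $T\CP^n$ and the rank $n+1$ bundle $\mathcal O(1)^{\oplus (n+1)}$ are stably equivalent, and in particular have the same Chern classes.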
This leads to the following fundamental question:
\begin{q}\label{q1} Are Chern classes sufficient to determine the (unstable) topological class of a complex rank $r$ vector bundle on $\CP^n$, up to topological equivalence? If not, what invariants beyond Chern classes are needed to distinguish such bundles?
\end{q}
Rank $1$ bundles on all spaces $\CP^n$ are determined by their first Chern class \cite{AE,KS}. Rank $\geq n$ bundles on $\CP^n$ are also determined by their Chern classes. For $r$ strictly between $1$ and $n$, there is no uniform answer (although some patterns have been found when restricting to bundles with all Chern classes zero, see \cite{Hu}).
In \cite{AR}, Atiyah and Rees answer Question~\ref{q1} for complex rank $2$ topological bundles on $\CP^3$ by producing a $\Z/2$-valued invariant $\alpha$, which can be viewed as a characteristic class in the generalized cohomology of a classifying space.
\begin{thm}[{\cite[Theorem 2.8 and 3.3]{AR}}]\label{thm:AR} Given $a_1,a_2\in \Z$ with $a_1a_2\equiv 0 \pmod 2$, the number of rank $2$ bundles on $\CP^3$ with $i$-th Chern class $a_i$ is:
\begin{itemize}
\item equal to $2$ if $a_1\equiv 0 \pmod 2$; and
\item equal to $1$ otherwise.
\end{itemize}
In the first case, a rank $2$ vector bundle on $\CP^3$ is determined by $c_1,c_2,$ and $\alpha$. \end{thm}
\begin{rmk} The condition $a_1a_2\equiv 0 \pmod 2$ is necessary and sufficient for two integers to be the Chern classes of a rank $2$ bundle on $\CP^3$.\end{rmk}
The work of Atiyah and Rees shows that the classification of rank $2$ bundles on $\CP^3$ is a $2$-primary problem. Since there are similarities between the $2$-primary homotopical structure of $BU(2)$ and the $3$-primary homotopical structure of $BU(3)$, one might hope for an analogy between the classification of rank $2$ bundles on $\CP^3$ and of rank $3$ bundles on $\CP^5$. Our goal is to realize this analogy and answer Question~\ref{q1} for rank $3$ bundles on $\CP^5$. We do this by defining a $\Z/3$-valued invariant $\rho$ of such bundles and proving the following:
\begin{thm}\label{thm:combined} Given $a_1,a_2,a_3 \in \mathbb Z$ satisfying the Schwarzenberger condition $S_5$ (see Lemma~\ref{lem:explicit_S5}), the number of bundles of rank $3$ on $\CP^5$ with $i$-th Chern class equal to $a_i$ is:
\begin{itemize}
\item equal to 3 if $a_1\equiv 0 \pmod 3$ and $a_2\equiv 0 \pmod 3$; and
\item equal to 1 otherwise.
\end{itemize}
In the first case, a rank $3$ bundle on $\CP^5$ is determined by $c_1,c_2,c_3$ and $\rho$.
\end{thm}
\begin{rmk}\label{rmk:S5_nec_suff} In Subsection~\ref{subsec:schwarzenberger}, we show that the Schwarzenberger condition $S_5$ is necessary and sufficient for three integers to be the Chern classes of a rank $3$ bundle on $\CP^5$; we also give $S_5$ explicitly there. \end{rmk}
A priori, there is no simple geometric relationship between topologically distinct bundles with the same Chern classes. However, in both the case of rank $2$ bundles on $\CP^3$ and the case of rank $3$ bundles on $\CP^5$, any two bundles with the same Chern classes differ by an explicit action defined as follows.
\begin{const}\label{const:action_Z3} Associated to an inclusion $D^{2n} \hookrightarrow \CP^{n}$ of a disk in the top cell of $\CP^{n}$, we define $$Q\:\CP^n \to S^{2n} \vee \CP^n$$ by collapsing the boundary of $D^{2n}$ to a point.
Given vector bundles
$V\: \CP^n \to BU(r)$ and $\sigma\: S^{2n} \to BU(r)$ we define
$$\sigma V:= (\sigma \vee V )\circ Q\: \CP^n \to BU(r).$$
Diagrammatically:
\begin{center}
\begin{tikzcd}[row sep=1.5em, column sep=3.5em]
& S^{2n} \arrow[d,"\iota_1" left]\arrow[dr, "\sigma"] \\
\CP^n \arrow[r,"Q"] & S^{2n} \vee \CP^n \arrow[r, "\sigma \vee V" near start] & BU(r)\\
& \CP^n \arrow[u, "\iota_2" left] \arrow[ur, "V" below] &
\end{tikzcd}
\end{center}
where $\iota_1$ and $\iota_2$ are the standard maps of the summands into the wedge. The association $$(\sigma, V) \mapsto \sigma V$$ defines an action of $\pi_{2n}BU(r)$ on equivalence classes of rank $r$ vector bundles over $\CP^{n}$.
\end{const}
This action preserves Chern classes provided that $n>r$, since the Chern classes $c_i(\sigma)\in H^{2i}(S^{2n})$ of $\sigma$ vanish for $i\leq r<n$. Therefore, if nontrivial, this action gives topologically distinct bundles with the same Chern classes.
In the case that $r=2$ and $n=3$, the action of $\pi_6BU(2)\simeq \Z/2$ on rank $2$ bundles on $\CP^3$ with fixed Chern data is transitive, and is free if and only if $c_1 \equiv 0 \pmod 2$. This aligns with the enumeration of bundles with fixed Chern data in Theorem~\ref{thm:AR} above. The theorem below shows that the role of Construction~\ref{const:action_Z3} in analyzing rank $3$ bundles on $\CP^5$ is analogous.
\begin{theorem}\label{classification}
Let $a_1,$ $a_2$, and $a_3$ be integers satisfying $S_5$. Let $\V_{a_1,a_2,a_3}$ be the set of homotopy classes of rank $3$ bundles on $\CP^5$ with $i$-th Chern class equal to $a_i$. Then:
\begin{enumerate}
\item The action of $\pi_{10}BU(3) \simeq \Z/3$ on rank $3$ bundles over $\CP^5$, as given in Construction~\ref{const:action_Z3}, induces a transitive action on $\V_{a_1,a_2,a_3}.$
\item If $a_1$ or $a_2$ is nonzero $\bmod$ $3$, then the action of $\pi_{10}BU(3)$ on $\V_{a_1,a_2,a_3}$ is trivial.
\item If $a_1$ and $a_2$ are zero $\bmod$ $3$, then the action of $\pi_{10}BU(3)$ on $\V_{a_1,a_2,a_3}$ is free.
\end{enumerate}
\end{theorem}
This refines the enumeration result in Theorem~\ref{thm:combined}.
Theorem~\ref{classification} says that, if $a_1,a_2,a_3$ satisfy $S_5$, $a_1\equiv 0 \pmod 3$, and $a_2\equiv 0\pmod 3$, then the set of complex rank $3$ topological vector bundles on $\CP^5$ with $i$-th Chern class $a_i$ is a torsor for $\Z/3$. The goal of the rest of the paper is to trivialize this torsor via a bundle invariant. To explain our approach to defining such an invariant for rank $3$ bundles on $\CP^5$, we discuss the $\alpha$-invariant of rank $2$ bundles on $\CP^3$ in greater detail.
The Atiyah--Rees invariant $\alpha$ is initially defined for rank $2$ bundles with $c_1 = 0$. Such bundles are classified by maps to $BSU(2)$, allowing an invariant to be defined via a universal class in the generalized cohomology of $BSU(2)$ rather than $BU(2)$. Atiyah and Rees give a class $\alpha\in KO^{4}(BSU(2))$, where $KO$ denotes real $K$-theory. They define the $\alpha$-invariant of $V\: \CP^3 \to BSU(2)$ as $$\alpha(V):=p_*V^*(\alpha)\in KO^{-2}(\text{point})\simeq \Z/2,$$
where $V^*$ is pullback with respect to $V$ and $p_*\: KO^*(\CP^3)\to KO^{*-6}(\text{point})$ is the $KO$-theory pushforward for the spin manifold $\CP^3$.
They extend $\alpha$ to bundles with $c_1(V)\equiv 0 \pmod 2$ by $$\alpha(V):= \alpha
\left(V \otimes \mathcal O(-\frac{c_1(V)}{2})\right).$$
Alternatively, the Atiyah--Rees invariant can be rephrased as a {\em twisted characteristic class}. Recall that, given a virtual bundle $W$ over a space $X$, the Thom spectrum of $X$ with respect to $W$, written $\Th{X}{W}$, can be viewed as a twisted version of the suspension spectrum $\suspp X$. By a twisted characteristic class, we will mean a class in some generalized cohomology of a Thom spectrum over a classifying space.
In this framework, one can show that there is a class $\tilde \alpha\in KO^4(\Th{BU(2)}{-\gamma_2})$ which extends $\alpha$ in a precise sense. Given any rank $2$ vector bundle on $\CP^3$, the pullback of $\tilde \alpha$ gives a class
$$\tilde \alpha(V):=V^*{\tilde \alpha}\in KO^4(\Th{\CP^3}{-V}).$$
If $c_1(V)\equiv 0\pmod 2,$ $V$ is canonically $KO$-oriented, yielding a $KO$-Thom isomorphism $$KO^*(\Th{\CP^3}{-V})\simeq KO^*(\suspp \CP^3).$$ We can thus define
$$\alpha'(V)=p_*(\tilde\alpha(V))\in KO^{-2}(\text{point})\simeq \Z/2,$$ where as before $p_*$ is the $KO$-theory pushforward.
The invariant $\alpha'$ also distinguishes rank $2$ bundles on $\CP^3$ with $c_1\equiv 0 \pmod 2$ and agrees with the original $\alpha$-invariant when $c_1(V)=0$.
The insight here is that both $BSU(2)$ and $\Th{BU(2)}{-\gamma_2}$ stabilize $\pi_6BU(2)$, in the following sense. While $\pi_6BU(2)\simeq \Z/2,$ the stable homotopy group $\pi_6\left(\susp BU(2)\right)$ is trivial, so bundles differing by an element in the unstable group $\pi_6BU(2)$ cannot be distinguished by a characteristic class in the generalized cohomology of $BU(2)$ itself. However, both $\pi_6\left( \susp BSU(2)\right)$ and $\pi_6\Th{BU(2)}{-\gamma_2}$ are nontrivial and are canonically isomorphic to $\pi_6BU(2)$, permitting their $KO$-cohomology to supply the classes $\alpha$ and $\alpha'$, respectively.
Recalling Theorem~\ref{classification}, the relevant group for understanding rank $3$ bundles on $\CP^5$ is $\pi_{10}BU(3)$. We also find that $\pi_{10}BU(3)\simeq \Z/3$ is stably trivial. By the above discussion, we might attempt to classify rank $3$ bundles on $\CP^5$ by first stabilizing $\pi_{10}BU(3)$ and then detecting it with some generalized cohomology theory.
Indeed, our strategy to define an invariant of rank $3$ bundles on $\CP^5$ is as follows:
\begin{itemize}
\item We identify a Thom spectrum related to $BU(3)$ which stabilizes $\pi_{10}BU(3)$ (Introduction to Section~\ref{sec:exist});
\item We define a twisted characteristic class in an appropriate generalized cohomology of this Thom spectrum, with certain key properties (Section~\ref{sec:exist}); and
\item We show that our twisted characteristic class can be resolved to an honest invariant $\rho$, via orientation data, and that this invariant distinguishes vector bundles with the same Chern data (Section~\ref{sec:untwisting}).
\end{itemize}
The main result of Section~\ref{sec:exist} can be stated as follows:
\begin{thm}\label{thm:imprecise_existence} Let $\BUc$ be the homotopy fiber of $c_1\pmod 3\: BU(3) \to K(\Z/3,2)$. Let $\tmf$ denote the $3$-localization of the spectrum of topological modular forms. There is a
class $$\tilde \rho \in \tmf^{-3}(\sk^{26}\BUct)$$ such that the pullback of $\tilde \rho$ with respect to the Thomification of a generator for $\pi_{10}\BUc$ induces an isomorphism
\begin{equation}\label{eq:isom_pi_10}\pi_{10}\BUc \simeq \pi_{13}\tmf.\end{equation}
\end{thm}
The class $\tilde \rho$ and isomorphism \eqref{eq:isom_pi_10} tell us that the cohomology theory $\tmf$ stably detects $\pi_{10}BU(3)$ and therefore retains information about the bundles of interest. Under pullback, the class $\tilde\rho$ together with Thom isomorphisms determined by orientation data give rise to the invariant $\rho$ of Theorem~\ref{thm:combined}, as follows.
Theorem~\ref{thm:imprecise_existence} gives an association
\begin{equation}\label{eq:twisted_invariant_bad} V \mapsto \operatorname{Th}(V)^*(\tilde \rho) \in \tmf^{-3}(\Th{\CP^5}{-V}),\end{equation}
where $\operatorname{Th}(V)$ denotes the Thomification of the classifying map $V\: \CP^5 \to BU(3)$. Equation~\eqref{eq:twisted_invariant_bad} does not define an invariant of $V$ because the target depends on $V$ itself.
However, vector bundles with $c_1\equiv 0 \pmod 3$ and $c_2\equiv 0 \pmod 3$ are $\tmf$-orientable and therefore admit a $\tmf$-Thom isomorphism $\tmf^*(\Th{\CP^5}{-V})\simeq \tmf^*(\suspp \CP^5)$. The problem is not quite solved: we need a consistent way of choosing Thom isomorphisms. This is the main project of Section~\ref{sec:untwisting}, which involves a detailed study of $\tmf$-orientations for the relevant bundles. The problem cannot be reduced to a known orientation problem (e.g. using the celebrated string orientation for topological modular forms \cite{AHR}).
\subsecl{Paper outline}{subsec:outline}
The proof of Theorem~\ref{classification} is the main project of Section~\ref{sec:count} and proceeds via
analyses of the set of homotopy classes of maps from $\CP^5$ to $BU(3)$ localized at the primes $3$ and $2$. These arguments are carried out in Subsections~\ref{subsec:post_BU3} and \ref{subsec:2local}, respectively, and involve obstruction-theoretic arguments. Subsection~\ref{subsec:technical_claims_1} proves claims used in Subsection~\ref{subsec:post_BU3}. In Subsection~\ref{subsec:schwarzenberger} we compute the Schwarzenberger condition explicitly and show that it is necessary and sufficient for three integers to be the Chern classes of a rank $3$ bundle on $\CP^5$.
In Subsection~\ref{subsec:start_existence_proof}, we outline our method to produce the class $\tilde \rho$ in the $\tmf$-cohomology of $\sk^{26}\BUct$. The remaining Subsections~\ref{subsec:proof_exists}, \ref{subsec:cohomology_BUct}, and \ref{subsec:proof_unique} supply the details of the proof, which includes a uniqueness result. This concludes the proof of Theorem~\ref{thm:imprecise_existence}.
In Subsection~\ref{subsec:background_orient}, we review the theory of Thom isomorphisms and orientations and also establish notation. In Subsection~\ref{subsec:choiceOrientation}, we study orientations of rank $3$ bundles on $\CP^5$ with $c_1\equiv 0 \pmod 3$ and $c_2\equiv 0\pmod 3$ and isolate a desirable set of $\tmf$-orientations. Using this set of orientations, we are able to produce a well-defined invariant: in Subsection~\ref{subsec:rho_works}, we combine orientation data with $\tilde \rho$ to define the invariant $\rho$ of complex rank $3$ topological bundles on $\CP^5$ with $c_1\equiv 0\pmod 3$ and $c_2\equiv 0\pmod 3$. We prove that $\rho$ separates topological equivalence classes of rank $3$ bundles on $\CP^5$ with the same Chern data. This completes the proof of Theorem~\ref{thm:combined}.
The remaining subsections offer examples and suggest future directions. In Subsection~\ref{subsec:sum_line_bundles}, we show that $\rho(L^{\oplus 3})=0$ for $L$ a line bundle with $c_1(L)\equiv0\pmod 3$. We also state an additivity result for $\rho$ on sums of line bundles.
In Subsection~\ref{subsec:rank_2}, we show that the methods discussed in this paper also produce a $3$-local invariant of rank $2$ bundles.
\subsecl{Acknowledgements}{subsec:acknowledgements}
First and foremost I want to thank my PhD advisor, Mike Hopkins, for suggesting this project and for his immense support throughout my PhD program. I am also immensely grateful to Haynes Miller for his mentorship during my time in graduate school; and to both Haynes and Elden Elmanto for serving on my dissertation committee and offering feedback on my thesis write-up. After my move to UCLA, Mike Hill's guidance and encouragement -- mathematical and practical -- were invaluable for me while improving and revising my thesis. Hood Chatham and Jeremy Hahn were both extremely generous in offering specific suggestions for methods and strategies used in this paper. This work benefited greatly from my conversations with Aravind Asok, Lukas Brantner, Yang Hu, Dev Sinha, Alexander Smith, and Dylan Wilson.
While working on this project, the author was supported by the National Science Foundation under Award No.~2202914.
\newpage
\subsecl{Conventions}{subsec:conventions}
\begin{itemize}
\item ``Vector bundle'' will refer to a complex, topological vector bundle. As such, we use ``vector bundle'' and ``map to $BU(r)$'' interchangeably. ``Rank'' refers to complex rank.
\item Given spaces $X$ and $Y$, we write $[X,Y]$ for homotopy classes of maps from $X$ to $Y$.
\item $H^*$ will refer to ordinary cohomology with $\Z$ coefficients, unless otherwise stated. We write $\HF{p}{*}$ for cohomology with $\mathbb F_p=\Z/p$ coefficients.
\item If $C$ is a space, then $\pi_*C$ will refer to {\em unstable} homotopy groups. If $X$ is a spectrum, $\pi_*X$ will refer to its stable homotopy groups. Thus, the stable homotopy groups of a space $C$ will be written as $\pi_*(\suspp C)$.
\item Given a space or spectrum $X$, we write $\tau_{n}X$ for its $n$-th Postnikov section.
\item Given a virtual bundle $W$ on a topological space $Y$, we write $\Th{Y}{W}$ for the Thom spectrum of $W$. Given a map $f\: X \to Y$ of spaces, $f$ has a Thomification $$\op{Th}(f):\Th{X}{f^*W}\to \Th{Y}{W}.$$ When taking Thom spectra, we assume all bundles have virtual dimension zero.
\item Given a space or spectrum $X$, we write $\c{X}{p}$ for its completion at a prime $p$, and $X_{(p)}$ for its localization at $p$.
\item In Sections~\ref{sec:exist} and \ref{sec:untwisting}, all spaces and spectra are implicitly localized at the prime $3$.
\item Given an $E_\infty$-ring spectrum $R$ and two $R$-modules $X,Y$, we write $$\operatorname{Maps}_R(X,Y):=\operatorname{Maps}_{R\text{-}\op{Mod}}(X,Y)=R\text{-}\op{Mod}(X,Y).$$
\end{itemize}
\section{A count of rank $3$ bundles on $\CP^5$}\label{sec:count}
The primary goal of this section is to prove Theorem~\ref{classification} by computing the set $[\CP^5,BU(3)]$. For this we need the homotopy of $BU(3)$ through degree $10$, which can be computed via the fiber sequence
$$BSU(3) \to BU(3) \to BU(1)$$
and its associated homotopy long exact sequence. The homotopy of $U(1) \simeq S^1$ is known and enough of the homotopy of $SU(3)$ is computed in \cite{MT}. We give the result in Figure~\ref{fig:homotopy_BU3}.
\begin{figure}[h]
\begin{tabular}{| M{1.2cm} | M{1cm} | M{1cm} | M{1cm} | M{1cm} | M{1cm} | M{1cm} | M{1cm} | M{1cm} | M{1cm} | N}
\hline
& \textbf{$\pi_2$} & \textbf{$\pi_3$} & \textbf{$\pi_4$} & \textbf{$\pi_5$} & \textbf{$\pi_6$} & \textbf{$\pi_7$} & \textbf{$\pi_8$} & \textbf{$\pi_9$} & \textbf{$\pi_{10}$} \\
\hline
& & & & & & & & &
\\[-8pt]
$BU(3)$ & $\Z$ &0 & $\Z$ & 0 & $\Z$ & $\Z/6$ &0 & $\Z/12$ & $\Z/3$ \\[2pt]
\hline
\end{tabular}\caption{Homotopy of $BU(3)$}\label{fig:homotopy_BU3}
\end{figure}
Note that the torsion in $\pi_nBU(3)$ for $n\leq 10$ is either $2$- or $3$-primary. Thus we may break the computation into analyses at the primes $2$ and $3$.
The key tool which allows us to study the problem one prime at a time is the theory of rationalization and completion of spaces. Given a space $X$, let $\c{X}{p}$ denote its $p$-completion.
The Fracture Theorem for completion, as stated in \cite[Theorem 13.1.1]{MP}, implies that
an element of $[\CP^5,BU(3)]$ is the same data as a pair of maps
$$f_2\: \CP^5 \to \c{BU(3)}{2},\,\,\text{ }f_3\: \CP^5 \to \c{BU(3)}{3}$$
such that the $\c{\Z}{2}$- and $\c{\Z}{3}$-valued Chern classes are both in the image of the canonical inclusion $\Z \hookrightarrow \c{\Z}{p}$ and agree under this identification.
The most involved part of the proof of Theorem~\ref{classification} is the calculation of $[\CP^5,\c{BU(3)}{3}]$. This is carried out in Subsection~\ref{subsec:post_BU3}, with supporting technical results in Subsection~\ref{subsec:technical_claims_1}. In Subsection~\ref{subsec:2local}, we compute $[\CP^5,\c{BU(3)}{2}]$. In Subsection~\ref{subsec:schwarzenberger}, we show that the Schwarzenberger condition $S_5$ is necessary and sufficient for three integers to be the Chern classes of a rank $3$ bundle on $\CP^5$. This completes the proof of Theorem~\ref{classification} and justifies Remark~\ref{rmk:S5_nec_suff}.
\subsecl{$3$-complete rank $3$ vector bundles on $\CP^5$}{subsec:post_BU3}
We give the first stages of a Postnikov-type tower for the $3$-completion of $BU(3)$ and analyze maps from $\CP^5$ into this tower.
\begin{claim}\label{claim:princ_fib_BU3} There is a tower of principal fibrations given by the solid arrows below:
\begin{center}\label{post1}\begin{tikzcd}[column sep=.5em, row sep= 2em]
K(\Z/3,10)\arrow[r] & P_{10} \arrow[d] \arrow[from=ddl,dashed, "\tau_{10}\,\,\,\,\,\,\," {near end, above}]\\
K(\Z/3,7) \times K(\Z/3,9)\arrow[r, crossing over] & P_9 \arrow[d] \arrow[r, "U"] & K(\Z/3,11) \\
BU(3)\arrow[r,dashed, "{(c_1,c_2,c_3)}" {below}]\arrow[ur, dashed,"\tau_9" {near end, below}]
&K(\Z,2) \times K(\Z,4)\times K(\Z,6) \arrow[rr,"{k_7\times k_9}"] & & K(\Z/3, 8) \times K(\Z/3,10)
\end{tikzcd}\end{center}
where $(c_1,c_2,c_3)$ induces a $3$-complete equivalence on $\tau_{6}BU(3)$; $\tau_9$ induces a $3$-complete equivalence on $\tau_9BU(3)$; and $\tau_{10}$ induces a $3$-complete equivalence on $\tau_{10}BU(3)$.
\end{claim}
At present, the explicit forms of $k_7 \times k_9$ and $U$ are not needed.
Given Claim~\ref{claim:princ_fib_BU3}, we can calculate $[\CP^5,\tau_{10}\c{BU(3)}{3}] \simeq [\CP^5,\c{BU(3)}{3}]$ by working up the tower. We need the following standard lemma.
\begin{lem}\label{lem:torsor} Let $X$ be a connected space. For any other space $Y$ let $Y^X$ denote the mapping space.
Given a fiber sequence of connected spaces \begin{equation}\label{eq:fib1} \begin{tikzcd} F \ar[r]& E \ar[r]& B.\end{tikzcd}\end{equation} and a map $f\: X \to E$ so that the composite map to $B$ is nullhomotopic, the set of homotopy classes of choices of lifts of $f$ to $F$ is a torsor for $\operatorname{coker}\big(\pi_1(E^X,f) \to \pi_1(B^X,0)\big)$.
\end{lem}
\iffalse\begin{proof}
The fiber sequence $F \to E \to B$ gives a fiber sequence
\begin{equation}\label{eq:fib_exp}(F^{X},g) \to (E^{X},f) \to (B^{X},0),\end{equation}
where $g$ is any lift of $f$.
The last piece of the long exact sequence on homotopy for \eqref{eq:fib_exp} is a long exact sequence of pointed sets
$$\pi_1(E^X,f) \to \pi_1(B^X,0)\to \pi_0(F^X) \to \pi_0(E^X),$$
showing that elements in $\pi_0(F^X)$ lifting the given basepoint class in $f\in\pi_0(E^X)$ are a torsor for the cokernel in the statement of the lemma.
\end{proof} \fi
We apply Lemma~\ref{lem:torsor} to the diagram in Claim~\ref{claim:princ_fib_BU3}. Candidate Chern data $(a_1,a_2,a_3) \in H^{2}(\CP^5)\times H^{4}(\CP^5) \times H^{6}(\CP^5)$ lifts to $P_9$ if and only if $(k_7\times k_9)\circ \left( a_1,a_2,a_3\right) = 0$. This is a mod $3$ condition on Chern classes, which we do not compute since we recover the condition via different methods in Subsection~\ref{subsec:schwarzenberger}.
By Lemma~\ref{lem:torsor}, the set of lifts to $P_9$ is a torsor for a quotient of $$\pi_1\Big (\big(K(\Z/3,8) \times K(\Z/3,10)\big)^{\CP^5}\Big) \simeq \HF{3}{7}(\CP^5) \times \HF{3}{9}(\CP^5) =0,$$ so when a lift exists it is unique.
There are no obstructions to lifting from $P_9$ to $P_{10}$, since $\HF{3}{11}(\CP^5)=0$. Choices of lift are a torsor for
$$ \operatorname{coker}\big( \pi_1({P_9}^{\CP^5}) \xrightarrow{U\circ -} \pi_1(K(\Z/3,11)^{\CP^5}) \big).$$
Since $\pi_1(K(\Z/3,11)^{\CP^5}) \simeq \pi_0(K(\Z/3,10)^{\CP^5})\simeq \HF{3}{10}(\CP^5) \simeq \Z/3,$ there are two possibilities that will depend on $a_1,a_2,a_3$: \begin{itemize}
\item The map is surjective, the cokernel is trivial, and there is a unique lift; or
\item The map is zero, the cokernel is $\Z/3$ and there are three lifts. \end{itemize}
To compute $\op{Im}(U\circ -),$ we consider a related problem. From the principal fibration \begin{equation}\label{eq:princ_fib_nice}K(\Z/3,7)\times K(\Z/3,9) \to P_9 \to K(\Z,2) \times K(\Z,4) \times
K(\Z,6),\end{equation}
we get an action of the fiber on the total space, $\big(K(\Z/3,7) \times K(\Z/3,9)\big) \times P_9 \to P_9$. This gives an action
\begin{equation}\label{eq:action_KZ379}\pi_1\Big(K(\Z/3,7)^{\CP^5}\times K(\Z/3,9)^{\CP^5}\Big)\times \pi_1( P_9^{\CP^5})\to \pi_1(P_9^{\CP^5}). \end{equation}
\begin{claim}\label{claim:transitive_KZ379} The action given in Equation~\eqref{eq:action_KZ379} is transitive.
\end{claim}
Assuming this claim as well, fix $a_1,a_2,a_3 \in \Z$ with $(k_7\times k_9)\circ\left( a_1,a_2,a_3\right)=0$. Consider the
diagram:
\begin{equation}\label{eq:image_BU3}
\begin{tikzcd}[column sep=.8in,row sep=.6in]
{}&P_9\arrow[r,"U"] & K(\Z/3,11)
\\
* \times \CP^5\arrow[ur,"{[a_1,a_2,a_3]}"]\arrow[r,"* \times 1"]
& S^1 \times \CP^5 \arrow[u,"a"]\arrow[r, "{(x\iota_1 t^3, y \iota_1 t^4,a)}" above]\arrow[ur, phantom,"\dagger" {near end, above}]\arrow[ul,phantom,"\star" near start]
& K(\Z/3,7)\times K(\Z/3,9) \times P_9, \arrow[ul,bend right=10,"m" above]\arrow[u,"m^*U" right]
\end{tikzcd}
\end{equation}
where only the triangles $\dagger$ and $\star$ commute. In the above:
\begin{enumerate}
\item $a\:S^1 \times \CP^5 \to P_9$ restricts to $[a_1,a_2,a_3]$ on $* \times \CP^5$.
\item $x$,$y \in \Z/3$ are arbitrary coefficients of the classes $\iota_1t^3$ and $\iota_1 t^4$, which are the natural generators of
\begin{align*} \HF{3}{7}(S^1 \times \CP^5) &\simeq \HF{3}{1}(S^1) \otimes \HF{3}{6}(\CP^5) \\
&\simeq \Z/3 \{ \iota_1\} \otimes \Z/3 \{ t^3\}\end{align*}
and
\begin{align*} \HF{3}{9}(S^1 \times \CP^5) &\simeq \HF{3}{1}(S^1) \otimes \HF{3}{8}(\CP^5) \\
&\simeq \Z/3 \{ \iota_1\} \otimes \Z/3 \{ t^4\}.\end{align*}
\end{enumerate}
Given Claim~\ref{claim:transitive_KZ379}, to compute $\op{Im}(U\circ -)$, it suffices to compute $m^*U\circ(x\iota_1t^3,y\iota_1t^4,a)$
as $(x,y)$ ranges over $\Z/3 \times \Z/3$. We will obtain a formula for the difference \begin{equation}\label{eq:difference} m^*U\circ (x\iota_1t^3,y\iota_1t^4,a)-m^*U\circ
(0,0,a).\end{equation} Showing that $\op{Im}(U\circ -)$ is all of $\Z/3$ is equivalent to finding $x,y\in \Z/3$ so that the difference in Equation~\eqref{eq:difference} is nonzero.
\begin{claim}\label{claim:m_upper_U} The class $m^*U \in \HF{3}{11}\big(K(\Z/3,7) \times K(\Z/3,9)\times P_9\big)$ is given by
$$m^*U = U +P^1(\iota_7') -\iota_2\iota_9' +\iota_2^2\iota_7'-\iota_4\iota_7' \in \HF{3}{11}\big(K(\Z/3,7) \times K(\Z/3,9)\times P_9\big),$$ where $\iota_7'$ and $\iota_9'$ generate $\HF{3}{7}\big(K(\Z/3,7)\big)$ and $\HF{3}{9}\big(K(\Z/3,9)\big)$, respectively; and where $\iota_2$ and $\iota_4$ are the images in $\HF{3}{*}(P_9)$ of generators of $\HF{3}{2}(K(\Z,2))$ and $\HF{3}{4}(K(\Z,4))$, respectively.
\end{claim}
Given Claim~\ref{claim:m_upper_U}, since $\iota_2$ pulls back to $c_1 \pmod 3$ in $\HF{3}{*}(\CP^5)$ and $\iota_4$ pulls back to $c_2 \pmod 3$, we see that
\begin{align*} m^*U\circ (x\iota_1t^3,y\iota_1t^4,a)-m^*U\circ (0,0,a) &= U+ xP^1(\iota_1t^3)-( a_1t)y\iota_1t^4\\ & \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, +( a_1^2t^2)x\iota_1t^3-(a_2t^2)x\iota_1t^3 -U\\
&= -(ya_1-xa_1^2+xa_2) \iota_1t^5,
\end{align*}
where the second equality uses that $P^1(\iota_1t^3)=3\iota_1t^5=0$ by the Cartan formula.
The quantity $ya_1-xa_1^2+xa_2$ is zero $\mod 3$ for all choices of $x$ and $y$ if and only if $a_1$ and $a_2$ are both zero $\mod 3$.
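This final arithmetic step is elementary and can be confirmed by brute force over residues. The snippet below (a sanity check, not part of the paper's argument; the function name is illustrative) runs through all nine residue classes of $(a_1,a_2)$ mod $3$:

```python
# Brute-force check: y*a1 - x*a1^2 + x*a2 is 0 mod 3 for ALL (x, y) in
# (Z/3)^2 if and only if a1 and a2 are both 0 mod 3.
from itertools import product

def vanishes_for_all_xy(a1, a2):
    return all((y * a1 - x * a1**2 + x * a2) % 3 == 0
               for x, y in product(range(3), repeat=2))

for a1, a2 in product(range(3), repeat=2):
    assert vanishes_for_all_xy(a1, a2) == (a1 % 3 == 0 and a2 % 3 == 0)
```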
Predicated on Claims~\ref{claim:princ_fib_BU3}, \ref{claim:transitive_KZ379}, and \ref{claim:m_upper_U}, we have shown:
\begin{prop}\label{prop:classification_rank2_CP3_post} Given integers $(a_1,a_2,a_3)$ with $(k_7\times k_9)\circ \left( a_1,a_2,a_3\right)=0$, the following two situations can occur:
\begin{itemize}
\item If either $a_1$ or $a_2$ is nonzero $\mod 3$, then the map $U\circ -$ is surjective and, up to homotopy, there is a unique $3$-local vector bundle with $i$-th Chern class $a_i$.
\item If $a_1 \equiv 0 \pmod 3$ and $a_2 \equiv 0 \pmod 3$, then the map $U\circ -$ is zero and there are three distinct homotopy classes of $3$-local vector bundles with $i$-th Chern class $a_i$.
\end{itemize}
\end{prop}
\subsecl{Proof of technical claims}{subsec:technical_claims_1}
We now prove Claims~\ref{claim:princ_fib_BU3}, \ref{claim:transitive_KZ379}, and \ref{claim:m_upper_U}, completing the proof of Proposition~\ref{prop:classification_rank2_CP3_post}.
\begin{proof}[Proof of Claim~\ref{claim:princ_fib_BU3}]
The Chern class map $$c:=(c_1,c_2,c_3)\:BU(3) \to K(\Z,2) \times K(\Z,4) \times K(\Z,6)$$ induces an isomorphism on mod $3$ cohomology, and hence a $3$-complete equivalence, through degree $6$.
We correct the degree $8$ and degree $10$ cohomology terms simultaneously via a map
\begin{center}\begin{tikzcd}[column sep=3em]K(\Z,2) \times K(\Z,4)\times K(\Z,6) \arrow[r,"{k_7 \times k_9}"] & K(\Z/3, 8) \times K(\Z/3, 10),
\end{tikzcd}\end{center}
and a factorization
\begin{center}\begin{tikzcd}
& P_9\arrow[d,dashed] \\
BU(3) \arrow[ru, dashed,"{\tau_9}"]\arrow[r, "c"] & K(\Z,2) \times K(\Z,4)\times K(\Z,6) \arrow[r,dashed,"{k_7 \times k_9}"] & K(\Z/3, 8) \times K(\Z/3,10),
\end{tikzcd}\end{center}
where $P_9 := \operatorname{hofib}(k_7 \times k_9)$, such that:
\begin{enumerate}
\item The map $(k_7\times k_9) \circ c$ is nullhomotopic; and
\item The lift $\tau_9\: BU(3) \to P_9$ is a mod $3$ cohomology isomorphism up to at least degree $10$ and therefore realizes the $9$-truncation of $BU(3)$, up to $3$-completion.
\end{enumerate}
We complete (1) in Construction~\ref{const1} and (2) in Verification~\ref{verif1}.
\begin{const}\label{const1}\label{const2}
Let $P^i$ denote the $\mod 3$ Steenrod operation of degree $4i$. The first relation among Steenrod operations on Chern classes is $P^1$ on $c_2$: $P^1(c_2) = c_1^2 c_2 + c_2^2 - c_1c_3.$
Let $\iota_j$ denote a generator for $\HF{3}{j}(K(\Z,j))$ and take
$$k_7 := P^1 \iota_4 - \iota_2^2 \iota_4 - \iota_4^2 + \iota_2\iota_6 \in \HF{3}{8}\Big(K(\Z,2) \times K(\Z,4) \times K(\Z,6)\Big).$$
We identify a candidate for $k_9$ by computing $P^1$ on $c_3$: since
$P^1c_3= c_3(c_1^2+c_2)$, let $$k_9 := P^1\iota_6-\iota_6\big(\iota_2^2+\iota_4\big)= P^1\iota_6-\iota_6\iota_2^2-\iota_6\iota_4 \in \HF{3}{10}\Big(K(\Z,2) \times K(\Z,4) \times K(\Z,6)\Big).$$
\end{const}
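Both relations used in Construction~\ref{const1} can be verified via the splitting principle: write the Chern classes as the elementary symmetric polynomials $e_1,e_2,e_3$ in three degree-$2$ Chern roots, on which $P^1$ acts by $x\mapsto x^3$, extended by the Cartan formula. The following brute-force sketch (a sanity check, not part of the paper's argument; all helper names are illustrative) confirms both identities modulo $3$:

```python
# Polynomials in three Chern roots x, y, z are dicts {exponent-tuple: coeff}.
from collections import defaultdict
from itertools import product

def mul(p, q):
    r = defaultdict(int)
    for (m1, c1), (m2, c2) in product(p.items(), q.items()):
        r[tuple(a + b for a, b in zip(m1, m2))] += c1 * c2
    return dict(r)

def add(*ps):
    r = defaultdict(int)
    for p in ps:
        for m, c in p.items():
            r[m] += c
    return dict(r)

def scale(p, k):
    return {m: k * c for m, c in p.items()}

def P1(p):
    # Cartan formula on monomials in degree-2 roots:
    # P^1(x^i y^j z^k) = i x^(i+2) y^j z^k + j x^i y^(j+2) z^k + k x^i y^j z^(k+2).
    r = defaultdict(int)
    for m, c in p.items():
        for idx, exp in enumerate(m):
            if exp:
                bumped = tuple(e + 2 if t == idx else e for t, e in enumerate(m))
                r[bumped] += c * exp
    return dict(r)

def eq_mod3(p, q):
    return all(c % 3 == 0 for c in add(p, scale(q, -1)).values())

# Elementary symmetric polynomials e1, e2, e3, modeling c1, c2, c3.
e1 = {(1, 0, 0): 1, (0, 1, 0): 1, (0, 0, 1): 1}
e2 = {(1, 1, 0): 1, (1, 0, 1): 1, (0, 1, 1): 1}
e3 = {(1, 1, 1): 1}

# P^1(c_2) = c_1^2 c_2 + c_2^2 - c_1 c_3 mod 3
assert eq_mod3(P1(e2), add(mul(mul(e1, e1), e2), mul(e2, e2), scale(mul(e1, e3), -1)))
# P^1(c_3) = c_3 (c_1^2 + c_2) mod 3
assert eq_mod3(P1(e3), mul(e3, add(mul(e1, e1), e2)))
```

Over $\Z$ the exact differences are $P^1(e_2)-(e_1^2e_2+e_2^2-e_1e_3)=-3e_2^2$ and $P^1(e_3)-e_3(e_1^2+e_2)=-3e_2e_3$, both of which vanish modulo $3$.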
\begin{verif}\label{verif1}
The mod $3$ cohomology of integral Eilenberg--Mac\,Lane spaces can be computed directly from the path-loop fibration.
The main result we need is the following:
\begin{prop}\label{K2K4K6_cohomology} Let $\iota_j$ generate $\HF{3}{j}K(\Z,j)$ for $j=2,4,6.$ We can identify the multiplicative structure of $\HF{3}{*}\left(K(\Z,2) \times K(\Z,4) \times K(\Z,6)\right)$ through degree $11$ as follows:
$$\left(\HF{3}{*}\big(K(\Z,2) \times K(\Z,4) \times K(\Z,6)\big)\right)_{\leq 11} \simeq \left(\Z/3\Z[\iota_2, \iota_4,\iota_6, Y_8, W_{10}] \otimes \Lambda[N_9, S_{11}]\right)_{\leq 11}$$
where the subscript indicates the degree of the polynomial or exterior generator, the notation $(-)_{\leq 11}$ indicates that we quotient by all elements of degree at least $12$, and
\begin{align*}
Y_8&= P^1 \iota_4 & W_{10} &= P^1 \iota_6 \\
N_{9} &= \beta P^1 \iota_4 & S_{11} &= \beta P^1 \iota_6.
\end{align*}
\end{prop}
From the above, we can compute the Serre spectral sequence for the fibration $$K(\Z/3,7) \times K(\Z/3,9) \to P_9 \to K(\Z,2) \times K(\Z,4) \times K(\Z,6).$$
The $E_2$-page is given in Figure~\ref{ss1}. Moreover, with $\beta$ denoting the mod $3$ Bockstein:
\begin{align*}
L_8&=\beta \iota_7
\\
R_{10}&= \beta \iota_9
\\ M_{11}&=P^1\iota_7.
\end{align*}
\begin{figure}[b]
\centering
\textbf{The $E_2$-page of a spectral sequence computing $\HF{3}{*}(P_9).$}\par\medskip
\begin{tikzpicture}
\matrix (m) [matrix of math nodes,
nodes in empty cells,nodes={minimum width=2.5ex,
minimum height=2.5ex,outer sep=-1pt},
column sep=.25ex,row sep=.25ex]{
11 & M_{11} & & & & & & & & & & && \\
10 & R_{10} & & & & & & & & & & && \\
9 & \iota_9 & & & & & & & & & & && \\
8 & L_8& & & & & & & & & & && \\
7 &\iota_7 & & & & & & & & & & && \\
6 & & & & & & & & & & & && \\
5 & & & & & & & & & & & && \\
4 & & & & & & & & & & & && \\
3 & & & & & & & & & & & && \\
2 & & & & & & & & & & & && \\
1 & & & & & & & & & & & && \\
0 & & & \iota_2 & & \iota_4 & &\iota_6 & & Y_8 & N_9 & W_{10} & S_{11} & \\
\quad\strut & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 &11& \strut \\};
\draw[dashed] (m-1-2.north west) -- (m-13-14.south east);
\draw[thick] (m-1-1.east) -- (m-13-1.east) ;
\draw[thick] (m-13-1.north) -- (m-13-14.north) ;\end{tikzpicture}\caption{Only multiplicative generators for the $E_2$-page are indicated.}\label{ss1}\end{figure}
To obtain the associated graded of $\HF{3}{*}(P_9)$, we compute all relevant differentials using a combination of the following two facts (Kudo's transgression theorem, see \cite{Kudo56} or \cite[Ch. 6]{McCleary}):
\begin{itemize}
\item Given a principal fibration $F \to E \to K(\Z/p\Z,n)$, the fundamental class $\iota_{n-1}$ of $\Omega K(\Z/p\Z,n)\simeq K(\Z/p\Z,n-1)$ is transgressive in the mod $p$ Serre spectral sequence for $\Omega K(\Z/ p \Z,n) \to F \to E.$
\item A power operation applied to a transgressive class is transgressive; transgressions commute with power operations.
\end{itemize}
From the above items and the fact that $L_8 = \beta(\iota_7)$, we deduce that
\begin{align*} d_7 (\iota_7) &= Y_8- \iota_2^2\iota_4 - \iota_4^2+\iota_6\iota_2,\,\,\text{and}\\
d_8(L_8) &= \beta(d_7(\iota_7)) = \beta( Y_8- \iota_2^2\iota_4 - \iota_4^2+\iota_6\iota_2) = \beta(Y_8) = N_{9}.\end{align*}
Similarly, since $\beta(\iota_9)=R_{10}$, we get that
\begin{align*}d_{9}(\iota_9)&=W_{10}-\iota_6(\iota_2^2 -2\iota_4)\,\,\text{ and} \\
d_{10}(R_{10})&=\beta\big(W_{10}-\iota_6(\iota_2^2 -2\iota_4)\big)= S_{11}.\end{align*}
All other terms strictly below the dotted line in Figure~\ref{ss1} are computed using the Leibniz rule. Thus the images of $\iota_2$, $\iota_4$, and $\iota_6$ are polynomial generators for $\HF{3}{*}(P_9)$ up to degree $10$; since
$c^*\: \HF{3}{2j} (K(\Z,2) \times K(\Z,4) \times K(\Z,6)) \to \HF{3}{2j}(BU(3))$ satisfies $\iota_{2j} \mapsto c_j,$
this shows that a lift of $(c_1,c_2,c_3)$ induces an equivalence through degree $9$, completing Verification~\ref{verif1}.
\end{verif}
Evidently the next stage in the tower is given by a class $U\:P_9 \to K(\Z/3,11)$.
\end{proof}
\begin{proof}[Proof of Claim~\ref{claim:transitive_KZ379}] Consider the $\pi_1$ portion of the homotopy long exact sequence associated to the fibration \eqref{eq:princ_fib_nice}
\begin{equation}\label{keyLES2}
\begin{tikzcd}[column sep=.15in, row sep=.15in]
\pi_1\Big(\big(K(\Z/3, 7) \times K(\Z/3,9)\big)^{\CP^5}\Big) \arrow[d,"s"] \\
\pi_1(P_9^{\CP^5}) \arrow[d] \\
\pi_1\Big(\big(K(\Z,2) \times K(\Z,4) \times K(\Z,6)\big)^{\CP^5}\Big)
\end{tikzcd}
\end{equation}
where the basepoint for $\left(K(\Z,2) \times K(\Z,4) \times K(\Z,6)\right)^{\CP^5}$ is $(a_1, a_2,a_3)$.
The last term of \eqref{keyLES2} is zero, so $s$ is surjective. The action
\eqref{eq:action_KZ379} is given on $$(x,a)\in \pi_1\left(\big(K(\Z/3, 7) \times K(\Z/3,9)\big)^{\CP^5}\right)
\times \pi_1(P_9^{\CP^5})$$ by
\begin{equation}\label{eq:principal_action} (x,a)\,\, \mapsto \,\, s(x)a \in \pi_1(P_9^{\CP^5}).\end{equation}
Thus, surjectivity of $s$ implies the action is transitive.
\end{proof}
\begin{proof}[Proof of Claim~\ref{claim:m_upper_U}]
In order to understand $U$ more explicitly, we study the spectral sequence in Figure~\ref{ss1} up to and including the dotted line. Computing differentials, we see that the class $M_{11} = P^1(\iota_7)$
detects a nonzero class in $\HF{3}{11}(P_9)$.
The action \eqref{eq:principal_action} gives rise to a map of spectral sequences, from the Serre spectral sequence for
$$ K(\Z/3,7) \times K(\Z/3,9)\to P_9 \to K(\Z,2) \times K(\Z,4) \times K(\Z,6) $$
to the Serre spectral sequence for
\begin{equation}\label{eq:Serre1}\Big(K(\Z/3,7) \times K(\Z/3,9)\Big)^{\times 2}\to K(\Z/3,7) \times K(\Z/3,9) \times P_9 \to \prod_{i=1}^3 K(\Z,2i).\end{equation}
We compute this map of spectral sequences using the fiber-by-fiber action.
For $i=7$ and $i=9$, let $\iota_i$ and $\iota_i'$ generate the two copies of $\HF{3}{i}(K(\Z/3,i))$ in the fiber of Equation~\ref{eq:Serre1}. The comultiplication on the fiber implies that the coaction on the $E_2$-page is:
\begin{align*}\iota_7 &\mapsto \iota_7+\iota_7',& \iota_9 &\mapsto \iota_9 + \iota_9'.\end{align*}
We claim that, in the double complex of the source sequence, $U$ should be represented by
\begin{equation}\label{eq:U_really}M_{11} -\iota_4\iota_7+\iota_2^2\iota_7 -\iota_2\iota_9.\end{equation}
To see this, note that $M_{11}$ is transgressive and
\begin{align*}d_{11}(M_{11})&= d_{11}(P^1\iota_7)\\
&=P^1(d_7(\iota_7))\\
&=P^1(P^1\iota_4-\iota_2^2\iota_4 - \iota_4^2 + \iota_2\iota_6) \\
&= -P^2\iota_4+\iota_2^4\iota_4-\iota_2^2P^1\iota_4 +
\iota_4P^1\iota_4 + \iota_2^3\iota_6 +\iota_2P^1\iota_6 \\
&= -\iota_4^3+\iota_2^4\iota_4-\iota_2^2P^1\iota_4 +
\iota_4P^1\iota_4 + \iota_2^3\iota_6 +\iota_2P^1\iota_6 \\
&= \left(\iota_4(d_7\iota_7) +\iota_2^2\iota_4^2-\iota_2\iota_4\iota_6\right)+\iota_2^4\iota_4 -\iota_2^2P^1\iota_4+\iota_2^3\iota_6+\iota_2P^1\iota_6\\
&=\left( \iota_4(d_7\iota_7)-\iota_2^2d_7(\iota_7) +\iota_2^3\iota_6 \right)-\iota_2\iota_4\iota_6+\iota_2^3\iota_6+\iota_2P^1\iota_6\\
&= \iota_4(d_7\iota_7)-\iota_2^2d_7(\iota_7)-\iota_2\iota_4\iota_6-\iota_2^3\iota_6+\iota_2P^1\iota_6 \\
&= \iota_4(d_7\iota_7) -\iota_2^2(d_7\iota_7)+\iota_2(d_9\iota_9).\end{align*}
This indicates that a cocycle representative for $U$ in the double complex computing $H^*(P_9;\Z/3)$ is given by Equation~\eqref{eq:U_really} on the $E_2$-page.
Therefore, on the $E_2$-page, we see that the action on the class representing $U$ is detected by
\begin{align*}(M_{11} - \iota_4\iota_7+ \iota_2^2\iota_7 - \iota_2\iota_9 ) \xmapsto{m^*} (M_{11}- \iota_4\iota_7+\iota_2^2\iota_7 - \iota_2\iota_9 +P^1 \iota_7'-\iota_4\iota_7'+\iota_2^2\iota_7' -\iota_2\iota_9').\end{align*}
Passing to the $E_\infty$-page of the Serre spectral sequence for Equation~\eqref{eq:Serre1} we see that $m^*U$ is detected by \begin{align*}M_{11} + P^1 \iota_7'-\iota_4\iota_7' +\iota_2^2\iota_7'
-\iota_2\iota_9'\in \HF{3}{*}\big(K(\Z/3,7) \times K(\Z/3,9)\times P_9\big),\end{align*}
and therefore $m^*U=U + P^1 \iota_7'
-\iota_4\iota_7'+\iota_2^2\iota_7'-\iota_2\iota_9',$
completing the proof of the claim.
\end{proof}
\begin{rmk}\label{rmk:top_cell} Instead of analyzing the tower of Diagram~\ref{post1}, we could transpose over the skeleton-truncation adjunction and instead lift a map $\sk^0(\CP^5)\to \c{BU(3)}{3}$ up the higher skeleta of $\CP^5$. The action of $\op{coker}(U\circ -)$ corresponds to the action of $\pi_{10}\c{BU(3)}{3}$ on lifts of a given map $\sk^9\CP^5 \to \c{BU(3)}{3}$ to the $10$-skeleton. This shows that the action from Construction~\ref{const:action_Z3} is the relevant one.
\end{rmk}
\subsecl{$2$-complete rank $3$ vector bundles on $\CP^5$}{subsec:2local}
In this section we show that there are no $2$-complete bundles on $\CP^5$ beyond those already detected by Chern classes.
\begin{prop} Consider the map $c\: \c{BU(3)}{2} \to K(\c{\Z}{2},2)\times K(\c{\Z}{2},4)\times K(\c{\Z}{2},6)$ given by the product $c_1\times c_2\times c_3$ of two-completed Chern classes. The induced $2$-complete Chern class map from $[\CP^5,\c{BU(3)}{2}]$ to $H^2(\CP^5,\c{\Z}{2})\times H^4(\CP^5,\c{\Z}{2}) \times H^6(\CP^5,\c{\Z}{2})$ is injective.
\end{prop}
\begin{proof}To understand $[\CP^5,\c{BU(3)}{2}]$,
we build a map from $\CP^5$ into $\c{BU(3)}{2}$ cell-by-cell. First,
recall the $2$-complete homotopy of $BU(3)$, as in Figure~\ref{fig:htpy_BU3_2}, computed from Figure~\ref{fig:homotopy_BU3}.
\begin{figure}[h]
\begin{tabular}{| M{1.2cm} | M{1cm} | M{1cm} | M{1cm} | M{1cm} |M{1cm} | M{1cm} | M{1cm} | M{1cm} | M{1cm} | M{1cm} | N}
\hline
& \textbf{$\pi_2$} & \textbf{$\pi_3$} & \textbf{$\pi_4$} & \textbf{$\pi_5$} & \textbf{$\pi_6$} & \textbf{$\pi_7$} & \textbf{$\pi_8$} & \textbf{$\pi_9$} & \textbf{$\pi_{10}$} \\
\hline
& & & & & & & & &
\\[-8pt]
$\c{BU(3)}{2}$ & $\c{\Z}{2}$ &0 & $\c{\Z}{2}$ & 0 & $\c{\Z}{2}$ & $\Z/2$ &0 & $\Z/4$ & $0$ \\[2pt]
\hline
\end {tabular}\caption{$2$-complete homotopy of BU(3)}\label{fig:htpy_BU3_2}
\end{figure}
Consider the dotted arrows $(i)$ to $(v)$ in Diagram~\eqref{eq:skeletal} below.
\begin{equation}\label{eq:skeletal}
\begin{tikzcd}[row sep=.4cm]
\CP^5\arrow[from=d]\arrow[ddrr,bend left=32,dashed,"(v)" above]
& \\
\sk_8\CP^5\arrow[from=d]\arrow[drr,bend left=16,dashed,"(iv)" above]
& \\
\sk_6\CP^5\arrow[from=d]\arrow[rr,dashed,"(iii)"]
& &\c{BU(3)}{2} \\
\sk_4\CP^5 \arrow[from=d]\arrow[urr,dashed,bend right=16,"(ii)" above]
&\\
\sk_2\CP^5\arrow[from=d]\arrow[uurr,bend right=32,dashed,"(i)" above]
&\\
*
\end{tikzcd}
\end{equation}
An arrow $(i)$ corresponds to a 2-complete first Chern class $\sk_2\CP^5 \to K(\c{\Z}{2},2)$. The obstruction to lifting further is in $\pi_3(\c{BU(3)}{2})=0$.
The choices of lifts to an arrow $(ii)$ are acted on transitively by
$\pi_4(\c{BU(3)}{2}) \simeq \c{{\Z}}{2}$ and correspond to
$c_2$. The obstructions to lifting to an arrow $(iii)$ lie in $\pi_5(\c{BU(3)}{2})=0$, and the choices of lift to $(iii)$ correspond to $c_3$.
The obstruction to a lift to a map $(iv)$ lies in $\pi_7(\c{BU(3)}{2}) \simeq \Z/2.$
The choices of lifts are acted on transitively by $\pi_8(\c{BU(3)}{2})\simeq 0.$ The obstruction to lifting from $(iv)$ to $(v)$ lies in $\pi_9(\c{BU(3)}{2})\simeq \Z/4$. The choices of lift are acted upon transitively by $\pi_{10}(\c{BU(3)}{2})=0.$ Since the actions on lifts at the last two stages are by the trivial group, a lift $(v)$ is determined up to homotopy by the Chern classes $(c_1,c_2,c_3)$, which gives the claimed injectivity.
\end{proof}
\begin{rmk} We have shown that there are $\mod 2$ and $\mod 4$ conditions on the Chern classes of a rank $3$ vector bundle on $\CP^5$, but no new $2$-primary invariants. However, for a general $10$-skeletal space (one that is not even), there may be additional 2-complete bundles not determined by Chern classes.
\end{rmk}
\subsecl{The Schwarzenberger conditions}{subsec:schwarzenberger}
Finally, we discuss necessary conditions for a collection of integers $a_1,a_2, a_3\in \Z$ to be the Chern classes of a topological vector bundle of rank $3$ on $\CP^5$.
Following \cite{Thomas}, let integers $c_1,\ldots, c_k$ be given. Inductively, let
\begin{align*} s_1&:=c_1\\
s_k(c_1,\ldots, c_k) &:= \sum_{i=1}^{k-1}(-1)^{i+1}c_is_{k-i} + (-1)^{k+1}kc_k\\
f_1(s_1)&:=s_1 \\
f_n(s_1,\ldots, s_n) &:= f_{n-1}(s_2,\ldots, s_n) - (n-1)f_{n-1}(s_1,\ldots, s_{n-1}).
\end{align*}
\begin{defn} The Schwarzenberger condition $S_k$ on a set $c_1,\ldots, c_k$ of integers is the requirement that, for each $1\leq n \leq k$,
$$f_n(s_1(\vec c),\ldots, s_n(\vec c)) \equiv 0 \pmod {n!}\,\,\, , $$
where $\vec c:=(c_1,\ldots, c_k).$
\end{defn}
\begin{rmk} This condition has different forms, e.g. see \cite[Appendix A]{Hirz}.
\end{rmk}
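The recursions above are elementary to implement; the following Python sketch (the function names are ours, not from \cite{Thomas}) evaluates the power sums $s_k$ via Newton's identities and the $f_n$, and tests the condition $S_k$:

```python
from math import factorial

def power_sums(c):
    # Newton's identities: s_n = sum_{i=1}^{n-1} (-1)^(i+1) c_i s_{n-i} + (-1)^(n+1) n c_n
    s = []
    for n in range(1, len(c) + 1):
        val = sum((-1) ** (i + 1) * c[i - 1] * s[n - i - 1] for i in range(1, n))
        s.append(val + (-1) ** (n + 1) * n * c[n - 1])
    return s

def f(seq):
    # f_1(s_1) = s_1;  f_n = f_{n-1}(s_2, ..., s_n) - (n-1) f_{n-1}(s_1, ..., s_{n-1})
    return seq[0] if len(seq) == 1 else f(seq[1:]) - (len(seq) - 1) * f(seq[:-1])

def satisfies_S(c):
    # S_k: f_n(s_1, ..., s_n) = 0 mod n! for every 1 <= n <= k
    s = power_sums(c)
    return all(f(s[:n]) % factorial(n) == 0 for n in range(1, len(c) + 1))
```

For instance, `satisfies_S((1, 0, 0, 0, 0))` holds (these are the Chern classes of $\mathcal{O}(1)\oplus\underline{\mathbb{C}}^4$), while `satisfies_S((0, 1, 0, 0, 0))` fails, since there $f_4=-20\not\equiv 0 \pmod{24}$.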
\begin{thm}[\cite{Thomas}, Theorem A]\label{lem:Thomas_thmA} Integers $c_1,\ldots , c_k \in \Z$ are the Chern classes of a rank $k$ vector bundle on $\CP^k$ if and only if $c_1,\ldots, c_k$ satisfy the condition $S_k$.
\end{thm}
From this we can prove:
\begin{lem}\label{lem:schwarz_enough}Let $a_1,a_2,a_3 \in \Z$. Then there exists a complex rank $3$ topological vector bundle $V$ on $\CP^5$ with $c_i(V)=a_i$ if and only if the $5$-tuple $(a_1,a_2,a_3,0,0)$ satisfies the Schwarzenberger condition $S_5.$
\end{lem}
\begin{proof}By Theorem~\ref{lem:Thomas_thmA} above, the condition is necessary.
To show the condition $S_5$ is sufficient, we prove that a rank $5$ vector bundle $V'$ on $\CP^5$ with $c_4$ and $c_5$ equal to zero is in fact isomorphic to a bundle $V \oplus \underline {\mathbb C}^2$, i.e.\ its stable class has a rank $3$ representative. Consider \cite[Proposition 5.7.5]{Zab}, which
implies that any (complex) rank $7$ vector bundle on $\CP^5$ with top four Chern classes zero is a sum of a rank $3$ bundle and two trivial bundles. We apply this to $V'\oplus \underline {\mathbb C}^2$ to get the desired result.
\end{proof}
We now work out $S_5$ explicitly.
\begin{lem}\label{lem:explicit_S5} The condition $S_5$ on $(a_1,a_2,a_3,0,0)$ is equivalent to the system of equations:
\begin{align*}
a_3+a_1a_2 &\equiv 0 &\pmod 2\\
-a_1^2a_2+a_1a_3-a_2^2-a_2 &\equiv 0 &\pmod 3\\
a_1a_2-a_1^2a_3-a_1a_2^2+a_2a_3 +a_1^2a_2-a_1a_3+a_2^2+a_2& \equiv 0 &\pmod 3\\
-{a_1}^3a_2+{a_1}^2a_3+a_1{a_2}^2-a_2a_3-a_1a_2+a_3&\equiv 0 &\pmod 4\\
\end{align*}
\end{lem}
\begin{proof}
Using the definitions preceding Theorem~\ref{lem:Thomas_thmA}, we evaluate $s_1,\ldots, s_5$ at $(a_1,a_2,a_3,0,0)$. Let $\vec a := (a_1,a_2,a_3, 0, 0).$
\begin{align*}
s_1(\vec a)&= a_1\\
s_2(\vec a) & =a_1^2-2a_2 \\
s_3(\vec a) & = a_1^3-3a_1a_2+3a_3 \\
s_4(\vec a) &= a_1^4-4a_1^2a_2+4a_1a_3+2a_2^2\\
s_5(\vec a)&= a_1^5-5a_1^3a_2+5a_1^2a_3+5a_1a_2^2-5a_2a_3.
\end{align*}
We now compute $f_i(s_1,\ldots, s_i)$ for $1 \leq i \leq 5$.
\begin{align*}
f_1(s_1)&=s_1\\
f_2(s_1,s_2)&=s_2-s_1\\
f_3(s_1,s_2,s_3)&= (s_3-s_2)-2(s_2-s_1) \\
&= s_3-3s_2+2s_1 \\
f_4(s_1,s_2,s_3,s_4)&=s_4-3s_3+2s_2-3(s_3-3s_2+2s_1)\\
&= s_4-6s_3+11s_2-6s_1\\
f_5(s_1,s_2,s_3,s_4,s_5) &= s_5-6s_4+11s_3-6s_2-4(s_4-6s_3+11s_2-6s_1)\\
&= s_5-10s_4+35s_3-50s_2+24s_1.
\end{align*} From the above we get:
\begin{align*} f_2(s_i)|_{\vec a}&= a_1^2-2a_2-a_1\\ &= a_1(a_1-1)-2a_2\\
f_3(s_i)|_{\vec a} &=a_1^3-3a_1a_2+3a_3-3(a_1^2-2a_2)+2a_1 \\
&=a_1(a_1-1)(a_1-2) -3a_1a_2+3a_3+6a_2 \\
f_4(s_i)|_{\vec a}&=a_1^4-4a_1^2a_2+4a_1a_3+2a_2^2 -6(a_1^3-3a_1a_2+3a_3)+11(a_1^2-2a_2)-6(a_1) \\
&= a_1(a_1-1)(a_1-2)(a_1-3) -4a_1^2a_2+4a_1a_3+2a_2^2+18a_1a_2-18a_3-22a_2\end{align*}
\begin{align*}
f_5(s_i)|_{\vec a} &= a_1^5-5a_1^3a_2+5a_1^2a_3+5a_1a_2^2-5a_2a_3\\ & \,\,\,\,\,\,\,\,\,-10(a_1^4-4a_1^2a_2+4a_1a_3+2a_2^2)+35(a_1^3-3a_1a_2+3a_3)-50(a_1^2-2a_2)+24a_1\\
& = \prod_{i=0}^4(a_1-i) +5(-a_1^3a_2+a_1^2a_3+a_1a_2^2-a_2a_3 )\\ &
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, -10(-4a_1^2a_2+4a_1a_3+2a_2^2)+35(-3a_1a_2+3a_3)+100a_2.
\end{align*}
For simplicity, write $f_i(\vec a)$ for $f_i(s_1,\ldots, s_i)|_{\vec a}$. We now expand the equations $f_i(\vec a)\equiv 0\pmod{i!}$ for $2 \leq i \leq 5$:
\begin{enumerate}
\item[$f_2\pmod{2!}\,$:] $a_1(a_1-1)-2a_2\equiv 0 \pmod 2$ for any $a_1,a_2$, so this gives no condition.
\item[$f_3\pmod{3!}\,$:] $f_3(\vec a)\equiv a_3+a_1a_2 \pmod 2,$ so we get the condition
\begin{equation}\label{f3_2} a_3+ a_1a_2 \equiv 0 \pmod 2.\end{equation}
Since all terms of $f_3(\vec a)$ are divisible by $3$, there is no 3-primary condition from $f_3$.
\item[$f_4\pmod{4!}\,$:] All terms of $f_4(\vec a)$ are divisible by $2$, so this gives no condition.
Since $f_4(\vec a) \equiv -a_1^2a_2+a_1a_3+2a_2^2-a_2 \pmod 3,$ this gives the condition
\begin{equation}\label{f4_3}-a_1^2a_2+a_1a_3-a_2^2-a_2 \equiv 0 \pmod 3.\end{equation}
Consider $f_4(\vec a) \equiv 2a_2^2+2a_1a_2+2a_3-2a_2\pmod 4.$ This quantity is zero $\pmod 4$ if and only if $a_2^2+a_1a_2-a_3-a_2 \equiv 0 \pmod 2.$
However, this condition is implied by \eqref{f3_2}, so we get no new constraint.
\item[$f_5\pmod{5!}\,$:]$f_5(\vec a) \equiv a_1^3a_2+a_1^2a_3+a_1a_2^2+a_2a_3 +a_1a_2+a_3 \pmod 2$ giving the condition
$$a_1a_3+a_1a_2+ a_2a_3+a_3\equiv 0 \pmod 2.$$
However, this condition is implied by \eqref{f3_2} so we get no new constraint.
Next, we reduce $\pmod 3$:
\begin{align*}f_5(\vec a)& \equiv a_1^3a_2-a_1^2a_3-a_1a_2^2+a_2a_3 +a_1^2a_2-a_1a_3+a_2^2+a_2 \pmod 3\end{align*}
Thus $f_5(\vec a)\equiv 0 \pmod 3$ if and only if
\begin{equation}\label{f5_3}a_1a_2-a_1^2a_3-a_1a_2^2+a_2a_3 +a_1^2a_2-a_1a_3+a_2^2+a_2 \equiv 0 \pmod 3. \end{equation}
Since $f_5(\vec a) \equiv 0 \pmod 5$, the last condition comes from reducing modulo $4$:
$$f_5(\vec a) \equiv -{a_1}^3a_2+{a_1}^2a_3+a_1{a_2}^2-a_2a_3-a_1a_2+a_3 \pmod 4.$$
This gives the condition:
\begin{equation}\label{f5_4}-{a_1}^3a_2+{a_1}^2a_3+a_1{a_2}^2-a_2a_3-a_1a_2+a_3\equiv 0 \pmod 4 \end{equation}
\end{enumerate}
Equations~\eqref{f3_2}, \eqref{f4_3}, \eqref{f5_3}, and \eqref{f5_4} are precisely the conditions to be proved.
\end{proof}
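The case analysis above (which congruences are vacuous, and which are implied by \eqref{f3_2}) is easy to confirm by brute force. The following Python sketch (our own naming) recomputes the $f_n$ from the recursions and checks each claim for all small integer triples:

```python
def f_values(a1, a2, a3):
    # s_1,...,s_5 of (a1, a2, a3, 0, 0) via Newton's identities, then f_1,...,f_5
    c = [a1, a2, a3, 0, 0]
    s = []
    for n in range(1, 6):
        val = sum((-1) ** (i + 1) * c[i - 1] * s[n - i - 1] for i in range(1, n))
        s.append(val + (-1) ** (n + 1) * n * c[n - 1])
    def f(seq):
        return seq[0] if len(seq) == 1 else f(seq[1:]) - (len(seq) - 1) * f(seq[:-1])
    return [f(s[:n]) for n in range(1, 6)]

for a1 in range(-5, 6):
    for a2 in range(-5, 6):
        for a3 in range(-5, 6):
            f1, f2, f3, f4, f5 = f_values(a1, a2, a3)
            cond = (a3 + a1 * a2) % 2 == 0        # condition (f3_2)
            assert f2 % 2 == 0                    # f_2 gives no condition
            assert f3 % 3 == 0                    # no 3-primary condition from f_3
            assert (f3 % 2 == 0) == cond          # f_3 mod 6 is exactly (f3_2)
            assert f4 % 2 == 0                    # no 2-primary condition from f_4
            assert (f4 % 4 == 0) == cond          # the mod-4 part of f_4 is (f3_2)
            assert f5 % 5 == 0                    # f_5 is always divisible by 5
            assert (not cond) or f5 % 2 == 0      # (f3_2) implies f_5 = 0 mod 2
```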
\section{Defining a twisted $\tmf$-valued invariant}\label{sec:exist}
By Theorem~\ref{classification}, rank $3$ bundles on $\CP^5$ that are not determined by their Chern data arise from the action of $\pi_{10}BU(3) \simeq \Z/3$ given in Construction~\ref{const:action_Z3}. To go from an action to an invariant, we must study the unstable homotopy of $BU(3)$ in greater detail. We outline the key insights in Lemmas~\ref{lem:htpy_BU2_BU3_spheres}, \ref{lem:htpy_BU3_sphere2}, and \ref{lem:negative_gamma_works} below. These are motivational but not logically necessary for what follows, so we omit most proofs.
\begin{conv} Throughout this section all spaces and spectra are implicitly localized at $3$.
\end{conv}
For any $n$, there is a fiber sequence $$S^{2n+1} \xrightarrow{\delta_{n+1}} BU(n) \to BU(n+1)$$
(for example, see \cite[Section 72]{HRM_lec}). For each $n$, $\delta_{n+1}$ is the attaching map for a $(2n+2)$-cell corresponding to $c_{n+1}\in H^{2n+2}BU(n+1)$. The homotopy class of $\delta_{n+1}$ is linked to the existence of non-isomorphic vector bundles with the same Chern data in both the case of rank $2$ bundles on $\CP^3$ and that of rank $3$ bundles on $\CP^5$, as shown by the next result.
\begin{lem}\label{lem:htpy_BU2_BU3_spheres} A generator for $\pi_6BU(2)$ is given by the composite $$S^6 \xrightarrow{\eta} S^5 \xrightarrow{\delta_3} BU(2),$$
where $\eta$ is an unstable representative of the class of the same name in
$\pi_*\sphere$, which is the first element of order $2$.
A generator for $\pi_{10}BU(3)$ is given by the composite $$S^{10} \xrightarrow{\alpha_1} S^{7} \xrightarrow{\delta_4} BU(3),$$
where $\alpha_1$ is an unstable representative of the class of the same name in $\pi_*\sphere$, which is the first element of order $3$.
\end{lem}
Moreover, we can describe the generator for $\pi_{10}(BU(3))$ as a further composite.
\begin{lem}\label{lem:htpy_BU3_sphere2} Let $x\: S^4 \to BU(3)$ generate $\pi_4BU(3)$. Then there is a map $\epsilon\: S^7 \to S^4$ such that:
\begin{enumerate}
\item $\epsilon \circ x$ generates the three-torsion in $\pi_7BU(3)$; and
\item $\susp \epsilon = \alpha_1$.
\end{enumerate}
\end{lem}
\begin{rmk} Lemma~\ref{lem:htpy_BU2_BU3_spheres} (for $BU(3)$) and Lemma~\ref{lem:htpy_BU3_sphere2} will follow from the proof of Theorem~\ref{thm:tmf_class_exists}. \end{rmk}
Combining the previous two lemmas, $\pi_{10}BU(3)$ is generated by
\begin{equation}\label{eq:alpha12}S^{10}\xrightarrow{\alpha_1} S^7 \xrightarrow{\epsilon} S^4 \xrightarrow{x} BU(3).\end{equation}
This shows that a generator $\sigma$ for $\pi_{10}BU(3)$ is stably trivial in every sense: first, the bundle on $S^{10}$ represented by $\sigma$ is stably trivial as a map to $BU$; second, $\susp \sigma =\susp \alpha_1^2x=0$ since $\alpha_1^2=0$ in $\pi_*\sphere$. In fact the composition in \eqref{eq:alpha12} is null after just one suspension.
Instead of applying the suspension spectrum functor to Diagram~\eqref{eq:alpha12}, we can take a Thom spectrum with respect to one of the canonical bundles that are present. Let $V$ be a bundle on $BU(3)$ (for example, the universal bundle $\gamma_3$, its determinant, or $-\gamma_3$). Equation~\eqref{eq:alpha12} gives a sequence of spectra
\begin{equation}\label{eq:thomified_htpy}
\begin{tikzcd}[row sep=1em]
\sphere^{10}\ar[d]\ar[drrr,bend left=20, dashed, "\tilde v" above] \ar[r,"\alpha_1" below] &\sphere^7\ar[d] \\
\suspp S^{10}\ar[r,"\operatorname{Th}(\alpha_1)"] & \suspp S^7\ar[r,"\operatorname{Th}(\epsilon)"] & \Th{S^4}{V|_{S^4}} \ar[r,"\op{Th}(x)"] & \Th{BU(3)}{V}.
\end{tikzcd}
\end{equation}
For various choices of $V$, we can ask whether $\tilde v$ is null.
\begin{rmk}To obtain Diagram~\eqref{eq:thomified_htpy} by Thomifying, note that the bundles $V|_{S^7}$ and $V|_{S^{10}}$ are trivial as maps to $bu$ and fix a spherical orientation for $V|_{S^7}$.
\end{rmk}
The Thom spectrum $\Th{S^4}{V|_{S^4}}$ has two cells, one in degree four and one in degree zero. Thomifying can either keep the cells split or introduce an $\alpha_1$-attaching map, in which case the Thom spectrum is $C(\alpha_1)$.\footnote{$C(\alpha_1)$ is the cofiber of the map $\alpha_1: \sphere^3 \to \sphere^0$.} Which option occurs depends on $V$. In order for the dotted composite $\tilde v$ to be nonzero, the latter must occur.
Moreover, given any spectrum $X$ together with a map $C(\alpha_1)\to X$, if
\begin{equation}\label{eq:composite}\big(\sphere^{10} \to \sphere^{7} \to C(\alpha_1)\to X \big) \not\simeq 0,\end{equation} then the image in $X$ of the $4$-cell of $C(\alpha_1)$ cannot support a $P^1$ in $X$.
From experimentation, it seems that taking $X=\Th{BU(3)}{V}$ with $V$ a canonical construction on $\gamma_3$ fails one condition or the other: either $\Th{S^4}{V|_{S^4}}\simeq \suspp S^4$ or there is a nonzero $P^1$ on the relevant $4$-cell of $\Th{BU(3)}{V}$.
To resolve this issue, we modify our classifying space.
\begin{defn}\label{def:BUn_coz} Let $BU(n)_{\coz}:=\op{hofib}\big(c_1\pmod 3\: BU(n)\to K(\Z/3,2)\big).$\end{defn}
Note that:
\begin{itemize}
\item The space $BU(n)_{\coz}$ is even, so
any rank $n$ bundle on $\CP^k$ with $c_1\equiv 0 \pmod 3$ lifts uniquely, up to homotopy, along the natural map $BU(n)_{\coz}\to BU(n)$.
\item The natural map $BU(n)_{\coz} \to BU(n)$ is a homotopy equivalence above degree 2.
\item The space $BU(n)_{\coz}$ carries a universal bundle which we denote $\gamma_n$.
\end{itemize}
Thus our previous analyses of rank $3$ bundles on $\CP^5$ can be repeated after adding the constraint $c_1\equiv 0 \pmod 3$ and substituting $\BUc$ in place of $BU(3)$. In particular, we get a modification of Diagram~\eqref{eq:thomified_htpy}:
\begin{equation}\label{eq:thomified_htpy2}
\begin{tikzcd}[row sep=1em]
\sphere^{10}\ar[d]\ar[drrr,bend left=20, dashed, "\tilde v" above] \ar[r,"\alpha_1" below] &\sphere^7\ar[d] \\
\suspp S^{10}\ar[r,"\operatorname{Th}(\alpha_1)"] & \suspp S^7\ar[r,"\operatorname{Th}(\epsilon)"] & \Th{S^4}{-\gamma_3} \ar[r,"\op{Th}(x)"] & \BUct.
\end{tikzcd}
\end{equation}
\begin{lem}\label{lem:negative_gamma_works} In the diagram above, $\Th{S^{4}}{-\gamma_3}=C(\alpha_1)$ and the element $\tilde v$ is nontrivial in $\pi_{10}\big(\BUct\big).$
\end{lem}
It will be useful to have better terminology for the Thomified homotopy classes such as $\tilde v$.
\begin{defn}\label{def:thom_sigma} Given a pointed map $y\: S^n \to BU(r)_{\coz}$ representing a stably trivial bundle on $S^n$, and a nullhomotopy $u$ of the composite map to $BGL_1\sphere$, let $\op{Th}_u(y)\: \sphere^n \to BU(r)^{-\gamma_r}$ denote the composite
$$ \sphere^{n} \xrightarrow{i_2}\suspp S^n \xrightarrow{u_{\simeq}} \Th{S^n}{-y} \to \Th{BU(r)}{-\gamma_r},$$
where the arrow $i_2$ is the inclusion of the top cell determined by a base point and the map $u_{\simeq}$ is the spherical Thom isomorphism determined by $u$.\footnote{We recall the basics of orientations and Thom isomorphisms in Subsection~\ref{subsec:background_orient}.}
\end{defn}
The main goal of this section is to prove the following:
\begin{thm}\label{thm:tmf_class_exists} Given $\sigma\: S^{10} \to BU(3)$ generating $\pi_{10}BU(3)$ and a Thom class $u_0$ for $\sigma$, there is a unique class $$\tilde \rho \in \tmf^{-3}( \sk^{26}\BUct)$$ such that $$\operatorname{Th}_{u_0}(\sigma)^*\tilde \rho=\alpha_1\beta_1 \in \pi_{13}\tmf,$$ where $\sk^{26}\BUct$ is the Thomification of a $26$-skeleton of $\BUc.$
\end{thm}
\begin{rmk} The reader may wonder why we look for a class in $\tmf$ cohomology. The spectrum $X=\Sigma^{-3}\tmf$ carries a natural map $C(\alpha_1)\to \Sigma^{-3}\tmf$ induced by $\alpha_1\:\sphere^0\to \Sigma^{-3}\tmf$ such that Equation~\eqref{eq:composite} holds. Moreover, $\tmf$ is one of the simplest ring spectra with this property. This makes $\tmf$ a natural candidate to detect $\pi_{10}\BUc$.
\end{rmk}
In Subsection~\ref{subsec:start_existence_proof}, we outline the proof strategy for Theorem~\ref{thm:tmf_class_exists}. We prove $\tilde \rho$ exists in Subsection~\ref{subsec:proof_exists}, predicated on cohomology calculations which are recorded in Subsection~\ref{subsec:cohomology_BUct}. The proof of uniqueness of $\tilde \rho$, given in Subsection~\ref{subsec:proof_unique}, also uses these calculations.
\subsecl{Proof outline: existence and uniqueness of a twisted $\tmf$ invariant}{subsec:start_existence_proof}
In this subsection we will state a sequence of propositions and explain how they imply Theorem~\ref{thm:tmf_class_exists}. To begin, let $u$ be the canonical Thom class in the $\HZt$-cohomology of $\BUct$. As a module over the $\mod 3$ Steenrod algebra,
$$\HF{3}{*}\BUct \simeq \big(\HF{3}{*}\BUc\big)\cdot u.$$ We find that $P^1(u)=-c_2\cdot u$ and $P^1P^1(u)=0$ (see Proposition~\ref{prop:cohomology_BUct}), so there is a map
$$k\: C(\alpha_1)\to \BUct$$
that takes the $0$-cell in $C(\alpha_1)$ to the $0$-cell in $\BUct$ and the $4$-cell in $C(\alpha_1)$ to the cell dual to $-c_2\cdot u$.
\begin{prop}\label{prop:splitting} The map $k\: C(\alpha_1) \to \sk^{26}\BUct$ splits after tensoring with $\tmf$. More precisely, there is a map of $\tmf$-modules $$r\: (\sk^{26}\BUct)\otimes \tmf \to C(\alpha_1 \otimes \tmf)$$ so that $r\circ (k\otimes \tmf)$ is homotopic to the identity.
\end{prop}
Fix a spherical orientation $u_0$ for $\sigma\: S^{10}\to \BUc$ generating $\pi_{10}\BUc.$ Assuming the previous proposition, we immediately obtain:
\begin{cor}\label{cor:exists} There is a map $\tilde \rho\: \sk^{26}\BUct\to\Sigma^{-3}\tmf$ such that $$\left(\sphere^{10} \xrightarrow{\Thom_{u_0}(\sigma)} \sk^{26} \BUct \xrightarrow{\tilde\rho} \Sigma^{-3}\tmf\right)=\alpha_1\beta_1$$ in $\pi_{13}\tmf$.
\end{cor}
\begin{proof}[Proof of Corollary~\ref{cor:exists} assuming Proposition~\ref{prop:splitting}] Let $r$ and $k$ be as above. The map $\alpha_1\: \sphere^0\to\Sigma^{-3}\tmf$ extends over $C(\alpha_1)$, since $\alpha_1^2=0$.
Let $o: C(\alpha_1) \to \Sigma^{-3}\tmf$ denote an extension and let $\bar o=o\otimes \tmf$ be the associated map of $\tmf$-modules. Let $1\: \sphere \to \tmf$ be the unit map. We then define $\tilde \rho$ to be the composite
\[
\begin{tikzcd}[column sep=12]
\sk^{26}\BUct\otimes \sphere \ar[r,"1 \otimes \iota"]\ar[drr, "\tilde\rho" below]& \sk^{26}(\BUct)\otimes\tmf \ar[r,"r"] & C(\alpha_1)\otimes\tmf \ar[d,"\bar o" left]\\
& &\Sigma^{-3}\tmf.
\end{tikzcd}
\]
To show this map has the desired property, consider the homotopy commutative diagram:
\[
\begin{tikzcd}
\sk^{26}(\BUct)\otimes\tmf \ar[r,"r"] & C(\alpha_1)\otimes\tmf \ar[r,"\bar o"]\ar[l,bend right,"k\otimes \tmf" above] &\Sigma^{-3}\tmf \\
\sk^{26}(\BUct)\otimes\sphere \ar[u,"1\otimes \iota"] & C(\alpha_1)\otimes\sphere \ar[u,"1\otimes \iota"]\ar[ur,"o" below]\ar[l,bend right,"k" below] \\
\sphere^{10}\ar[u,"\Thom(\sigma)"]\ar[ur,"\beta_1\lbrack 0\rbrack" below]
\end{tikzcd}
\]
where $\beta_1\lbrack 0\rbrack$ is the image of $\beta_1\in\pi_{10}\sphere^0$ under $\sphere^0\rightarrow C(\alpha_1).$
The fact that the lower triangle commutes is a consequence of the fact that the composite $$\sphere^{10}\xrightarrow{\alpha_1}\sphere^7\xrightarrow{\alpha_1\lbrack 4\rbrack} C(\alpha_1)$$ represents the Toda bracket $\langle \alpha_1,\alpha_1,\alpha_1\rangle =\beta_1.$ The map $o \circ \beta_1\lbrack 0\rbrack \in \pi_{13}\tmf$ is precisely $\alpha_1\beta_1$.
\end{proof}
The splitting is not canonical. However, given an identification $\Th{S^{10}}{-\sigma} \simeq \sphere^0\oplus \sphere^{10}$,
the class $\tilde \rho$ is uniquely determined by requiring it to restrict to $\alpha_1\beta_1$ on $\sphere^{10}$.
\begin{prop}\label{prop:uniqueness} Let $\epsilon\: S^7 \to BU(3)$ generate $\pi_{7}BU(3)$ and let $u_0$ be a spherical orientation for $\epsilon$. Let $\Thom_{u_0}$ be as in Definition~\ref{def:thom_sigma}.
Any two classes $$\tilde \rho, \,\tilde \rho'\in\tmf^{-3}(\sk^{26}\BUct)$$ such that $\Thom_{u_0}(\epsilon)^*(\tilde \rho)=\Thom_{u_0}(\epsilon)^*(\tilde \rho')=\beta_1\in\pi_{10}\tmf$ are in fact equal in $\tmf^{-3}(\sk^{26}\BUct)$.
\end{prop}
We will prove this in Subsection~\ref{subsec:proof_unique}. This immediately implies:
\begin{cor}\label{cor:uniqueness} Let $\sigma = \alpha_1\circ \epsilon \in \pi_{10}BU(3).$
Then:
\begin{itemize}
\item $\sigma= \alpha_1\circ \epsilon$ generates $\pi_{10}BU(3)$.
\item If $\tilde \rho, \, \tilde \rho' \in \tmf^{-3}(\sk^{26}\BUct)$ satisfy $$\Thom_{u_0}(\sigma)^*(\tilde \rho) =\Thom_{u_0}(\sigma)^*(\tilde \rho')=\alpha_1\beta_1\in \pi_{13}\tmf,$$
then $\tilde \rho = \tilde \rho'.$
\end{itemize}
\end{cor}
\begin{proof} Note that the orientation $u_0$ for $\epsilon$ gives an orientation $u_0$ for $\sigma$ such that $$\alpha_1\cdot \Thom_{u_0}(\epsilon)=\Thom_{u_0}(\sigma).$$
For the second item, suppose that $\tilde\rho$ and $\tilde\rho'$ both satisfy
\begin{align*}\Thom(\sigma)^*(\tilde \rho) &= \alpha_1\beta_1=\Thom(\sigma)^*(\tilde \rho')\end{align*}
in $\pi_*\tmf$. This implies:
\begin{align*} \Thom(\epsilon)^*(\tilde \rho) &=\beta_1 = \Thom(\epsilon)^*(\tilde \rho')
, \end{align*}
so, by Proposition~\ref{prop:uniqueness},
$\tilde \rho =\tilde \rho' \in \tmf^{-3}\left(\sk^{26}\BUct\right)$.
For the first item, note that $\Thom(\sigma)^*(\tilde \rho) \neq 0$, which implies $\sigma \neq 0$. Since $\pi_{10}BU(3)$ is cyclic, this implies $\sigma$ generates.
\end{proof}
Together, Corollary~\ref{cor:exists} and Corollary~\ref{cor:uniqueness} imply Theorem~\ref{thm:tmf_class_exists}. It remains to prove Propositions~\ref{prop:splitting} and \ref{prop:uniqueness}. These are the projects of the next subsections.
\subsecl{Proof of Proposition~\ref{prop:splitting}}{subsec:proof_exists}
By a skeleton of $\BUct$ we mean a term in a filtration of $\BUct$ obtained by Thomifying a skeletal filtration of $\BUc$. Precisely, $\BUc$ has a cell structure filtering the space by a sequence of pushouts as in Diagram~\eqref{eq:filt_BUc} below.
\begin{equation}\label{eq:filt_BUc}
\begin{tikzcd}[row sep = 7]
& \vdots\\
\vee_{j \in H_{i+2}} S^{i+3}\ar[r,"\vee \bar c_{i+2,j}"] & \sk^{i+2}\BUc \ar[u] \\
\vee_{j \in H_{i}}S^{i+1} \ar[r,"\vee \bar c_{ij}"] & \sk^{i}\BUc \ar[u]\\
&\vdots \ar[u]
\end{tikzcd}
\end{equation}
Each $H_{i}$ above is a finite indexing set. Each skeleton carries a bundle pulled back from $\gamma_3$ on $\BUc$.
We can Thomify all diagrams involved to obtain a filtration for $\BUct$: each stage in Diagram~\eqref{eq:filt_BUc} gives a pushout in spectra
\begin{equation}\label{eq:skeleta_Thom_BSU3}
\begin{tikzcd}
* \ar[r] & \sk^{i+2}\BUct \\
\oplus_{H_{i}}\sphere^{i}\ar[u]\ar[r,"\oplus_{H_i}c_{ij}"]& \sk^{i}\BUct\,. \ar[u]
\end{tikzcd}\end{equation}
In Diagram~\ref{eq:skeleta_Thom_BSU3}, we define $\sk^i\BUct := \Th{\sk^i\BUc}{-\gamma_3}$ and $c_{ij}$ is the Thomification of $ \bar c_{ij}$ restricted to $\sphere^i$.
Consider the cofiber $$C:=\operatorname{cof}\big( C(\alpha_1)\xrightarrow{k} \sk^{26}\BUct\big).$$
\begin{lem}\label{lem:zero_maps}With notation as above, $\pi_0\operatorname{Maps}_{\tmf}(\Sigma^{-1}C\otimes \tmf, C(\alpha_1)\otimes \tmf) =0.$
\end{lem}
Given this lemma, the connecting map $\Sigma^{-1}C\otimes \tmf \to C(\alpha_1)\otimes \tmf$ is null, so there is an extension making Diagram~\eqref{eq:section_exists_2} homotopy commutative; this extension is the desired section $r$.
\begin{equation}\label{eq:section_exists_2}
\begin{tikzcd}
\Sigma^{-1}C\otimes \tmf \ar[r,"0"] & C(\alpha_1)\otimes \tmf \ar[r,"k"] \ar[d,"="]&\sk^{26}\BUct\otimes \tmf \ar[dl,dashed,"\exists r"] \\
& C(\alpha_1)\otimes \tmf
\end{tikzcd}
\end{equation}
\begin{proof}[Proof of Lemma~\ref{lem:zero_maps}]
Using the free-forgetful adjunction for $\tmf$-modules:
$$\pi_0\operatorname{Maps}_{\tmf}(\Sigma^{-1}C \otimes \tmf, C(\alpha_1)\otimes \tmf) \simeq \pi_0\operatorname{Maps}_\sphere(\Sigma^{-1}C, C(\alpha_1)\otimes \tmf).$$
To establish the result, we compute $\pi_0\operatorname{Maps}_\sphere(\Sigma^{-1}C, C(\alpha_1)\otimes \tmf)$ via an Atiyah--Hirzebruch spectral sequence
\begin{equation}\label{eq:AHSS_maps}E_{2}^{p,q}=H^p\left(\Sigma^{-1}C; \left(\tmf\right)_{-q}C(\alpha_1)\right)\implies \pi_{p+q}\operatorname{Maps}_{\sphere}\left(\Sigma^{-1}C, C(\alpha_1)\otimes \tmf\right).\end{equation}We use grading conventions indicated in Figure~\ref{fig:AHSS_schematic} and we depict the spectral sequence in Figure~\ref{fig:tmf_hom_maps}.
\begin{figure}[h]
\centering
\textbf{Schematic grading convention for the Atiyah--Hirzebruch spectral sequence.}\par\medskip
\begin{tikzpicture}
\matrix (m) [matrix of math nodes,
nodes in empty cells, nodes={minimum width=6ex,
minimum height=6ex},
column sep=.25ex,row sep=.25ex]{
& & & & & & p= &\\
& & \node[](d){};& & & & 4 &\\
& & & \node[] (c){};& & & 3 &\\
& & & &\node[] (b){}; & & 2 & \\
& & & & & & 1 \\
& & & & & \node[] (a){} ; & 0 & \\
q= & -4 & -3 & -2 & -1 & 0 & \\};
\draw[dashed] (m-1-1) -- (m-7-7);
\draw[dashed] (m-1-2) -- (m-6-7);
\draw[dashed] (m-2-1) -- (m-7-6);
\draw[] (m-1-6.east) -- (m-7-6.east);
\draw[] (m-7-1.north) -- (m-7-7.north) ;
\draw [arrow] (a) -- node[anchor=south west]{$d_2$} (b);
\draw [arrow] (a) -- node[anchor=south]{$d_3$} (c);
\draw [arrow] (a) -- node[anchor=south east]{$d_4$} (d);
\end{tikzpicture}\caption{Dotted lines indicate diagonals $p+q=i$, which converge to an associated graded of the $i$-th homotopy; the directions of the first few differentials are
indicated. }\label{fig:AHSS_schematic}\end{figure}
We need to understand some aspects of $\HF{3}{*}\Sigma^{-1}(C)$. From Proposition~\ref{prop:cohomology_BUct}, whose proof we defer to the next subsection, we have that
$$\HF{3}{*}(\BUct)\simeq \Z/3[t,c_2,c_3]\cdot u$$ as a module over the Steenrod algebra, where $|t|=2,$ $|c_2|=4$, $|c_3|=6$. We have drawn the $P^1$-action on those classes which are not multiples of $t$ in Figure~\ref{fig:P1_mod_BUct}. Note that $k^*$
is surjective, so $\HF{3}{*}(C)$ can be identified with a submodule of $\HF{3}{*}(\BUct)$ consisting of all elements except $\Z/3$-multiples of $u$ and $c_2\cdot u$.
\begin{figure}[h]
\centering
\textbf{$P^1$-module structure of $\BUct$ through degree $18$, excluding generators divisible by $t$.}\par\medskip
\begin{tikzpicture}
\matrix (m) [matrix of math nodes,
nodes in empty cells, nodes={minimum width=5ex,
minimum height=5ex},
column sep=.25ex,row sep=.25ex]{
22 && & & & & & & c_2c_3^3\cdot u \\
20&&
& & & &c_2^2c_3^2 \cdot u \\
18&& & & c_2^3c_3\cdot u & & & & c_3^3\cdot u \\
16&&c_2^4\cdot u & & & & c_2c_3^2 \cdot u & & \\
14&& & & c_2^2c_3\cdot u & & & & \\
12&&c_2^3\cdot u & & & & c_3^2 \cdot u & & \\
10&& & & c_2 c_3\cdot u & & & & \\
8&& c_2^2\cdot u& & & & & & \\
6&& & & c_3\cdot u & & & & \\
4&&c_2\cdot u & & & & & & \\
2& & \\
0 & & u \\ };
\draw[dashed] (m-4-3) -- (m-6-3);
\draw[dashed] (m-6-3) -- (m-8-3);
\draw[dashed] (m-10-3) -- (m-12-3);
\draw[dashed] (m-3-5) -- (m-5-5);
\draw[dashed] (m-5-5) -- (m-7-5);
\draw[dashed] (m-2-7) -- (m-4-7);
\draw[dashed] (m-4-7) -- (m-6-7);
\draw[dashed] (m-1-9) -- (m-3-9);
\end{tikzpicture}\caption{The left column is the degree of the cohomology class. Dotted lines indicate $\pm \alpha_1$ attaching maps detected by a $P^1$ in $\HF{3}{*}\BUct$. We omit classes above degree $18$ that do not attach to cells at or below degree $18$. }\label{fig:P1_mod_BUct}\end{figure} Given a class $t^lc_2^ic_3^j\cdot u \in \HF{3}{*}(C)$, the Leibniz rule and Corollary~\ref{cor:P1_u_classes} give
\begin{equation}\label{eq:P1_generators} P^1(t^lc_2^ic_3^j\cdot u) =l(t^{l+2}c_2^ic_3^j\cdot u) + (i+j-1)t^lc_2^{i+1}c_3^j \cdot u. \end{equation}
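To illustrate Equation~\eqref{eq:P1_generators}, here are two routine instances, with coefficients reduced mod $3$:
\begin{align*}
P^1(c_2c_3\cdot u)&=(1+1-1)\,c_2^2c_3\cdot u=c_2^2c_3\cdot u,\\
P^1(t^4\cdot u)&=4t^6\cdot u-t^4c_2\cdot u=t^6\cdot u-t^4c_2\cdot u.
\end{align*}
The first computation is the attaching map drawn between $c_2c_3\cdot u$ and $c_2^2c_3\cdot u$ in Figure~\ref{fig:P1_mod_BUct}.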
Since $\HF{3}{*}(\Sigma^{-1}C)$ is concentrated in odd degrees, terms on the diagonal $p+q=0$ on the $E_2$-page of the Atiyah--Hirzebruch spectral sequence of Equation~\eqref{eq:AHSS_maps} arise from odd-degree elements in $\pi_*(C(\alpha_1)\otimes \tmf)$. In the relevant range, we can describe
$\pi_*(C(\alpha_1)\otimes \tmf)$ as follows:
$$\pi_{* \leq 26}(C(\alpha_1)\otimes \tmf) \simeq \Z/3\cdot \{1, \alpha_1[4], \alpha_1\beta_1[4],\beta_1[0], \beta_1^2[0]\}\oplus H,\,\, \text{ where}$$
\begin{itemize}
\item $H$ has only even terms (arising from the classes $c_4$ and $c_6$ in $\pi_*\tmf$) and these terms support no $\alpha_1$ or $\beta_1$ multiplications, so they can be disregarded for our calculation.
\item The $\sphere$-module structure is given by:
\begin{align*}
\alpha_1\cdot(\alpha_1[4])&=\beta_1[0] &\beta_1\cdot 1& =\beta_1[0] & \\
\alpha_1\cdot (\alpha_1\beta_1[4])&=\beta_1^2[0] &\beta_1\cdot(\beta_1[0])&=\beta_1^2[0] \\ \beta_1\cdot(\alpha_1[4])&=\alpha_1\beta_1[4],
\end{align*}
and all other multiplications are zero.
\item The degree of an indicated class is the degree of the corresponding class in $\pi_*\tmf$ plus the number in the bracket.\footnote{These elements are named by the classes that detect them on the $E_2$-page of the Atiyah--Hirzebruch spectral sequence computing $\pi_*(C(\alpha_1)\otimes \tmf)$.} So, for example, $\alpha_1[4]$ has degree $7$.
\end{itemize}
The only odd classes in $\pi_{*\leq 26}\left(C(\alpha_1)\otimes \tmf\right)$ are $\alpha_1\beta_1[4]$ and $\alpha_1[4]$, so contributions
on the $E_2$-page are in bidegrees $(-17,17)$ and $(-7,7)$.
First consider the classes which are not multiples of $t$:
$$\alpha_1[4]\otimes (c_2^2\cdot u), \,\,\, \alpha_1\beta_1[4]\otimes (c_2^3c_3\cdot u), \,\,\, \alpha_1\beta_1[4]\otimes (c_3^3\cdot u).$$
By inspection of Figure~\ref{fig:P1_mod_BUct} and the fact that $\langle \alpha_1,\alpha_1,\alpha_1\rangle =\beta_1$ we see that
\begin{align*}
d_4(\alpha_1[4]\otimes c_2^2\cdot u)&=\beta_1[0]\otimes c_2^3\cdot u\\
d_4(\alpha_1\beta_1[4]\otimes c_3^3\cdot u) &=-\beta_1^2[0]\otimes c_2c_3^3\cdot u\\
\alpha_1\beta_1[4]\otimes c_2^3c_3\cdot u&=d_8(\beta_1[0]\otimes c_2c_3\cdot u).
\end{align*}
In degree $7$ we have the generators
$$t^4\cdot u, \,\,\,\,\,\, c_2^2\cdot u.$$
By Equation~\eqref{eq:P1_generators} we have:
\begin{align*}P^1(t^4 \cdot u)&=t^6\cdot u -t^4c_2\cdot u\\
P^1(c_2^2\cdot u) &= c_2^3\cdot u\end{align*}
Therefore:
\begin{align*}
d_4(\alpha_1[4]\otimes c_2^2\cdot u)&=\beta_1[0]\otimes c_2^3\cdot u\\
d_4(\alpha_1[4]\otimes t^4\cdot u)&=\beta_1[0]\otimes(t^6\cdot u -t^4c_2\cdot u).\end{align*}
Thus bidegree $(-7,7)$ does not contribute to the $E_\infty$-page.
In degree $17$ we have generators:
\begin{align*}
t^9 \cdot u & & t^7 c_2\cdot u & & t^6 c_3\cdot u & & t^5 c_2^2 \cdot u\\
t^4 c_2c_3\cdot u & & t^3 c_2^3 \cdot u && t^3 c_3^2 \cdot u & & t^2 c_2^2c_3 \cdot u\\
tc_2^4\cdot u & & tc_2c_3^2\cdot u & & c_2^3c_3\cdot u & & c_3^3\cdot u \\
\end{align*}
Using Equation~\eqref{eq:P1_generators}, the image of $P^1$ on the generators for $H^{17}(\Sigma^{-1}C)$ listed above is:
\begin{align*}
-t^9c_2 \cdot u & & t^9 c_2\cdot u & & 0 & & -t^7 c_2^2 \cdot u+t^5c_2^3\cdot u\\
t^6 c_2c_3\cdot u+t^4c_2^2c_3\cdot u & & -t^3 c_2^4 \cdot u && t^3 c_2c_3^2 \cdot u & & -t^4 c_2^2c_3 \cdot u -t^2c_2^3c_3\cdot u\\
t^3c_2^4\cdot u & & t^3c_2c_3^2\cdot u-tc_2^2c_3^2\cdot u & & 0 & & -c_2c_3^3\cdot u \\
\end{align*}
Since $d_4(\alpha_1\beta_1[4]\otimes x)=\beta_1^2[0]\otimes P^1(x),$ we see that $\ker\big(d_4\: E^4_{(-17,17)} \to E^4_{(-20,21)}\big)$ is generated by $\alpha_1\beta_1[4]$ tensored with:
\begin{align*}
t^9\cdot u+t^7c_2\cdot u & & t^6c_3\cdot u \\ tc_2^4\cdot u + t^3c_2^3\cdot u && c_2^3c_3\cdot u
\end{align*}
The next differential involved is a $d_8$ with source in bidegree $(-10,9)$, which is computed as $$d_8(\beta_1[0]\otimes y)=\alpha_1\beta_1[4]\otimes P^1P^1(y).$$ Using Equation~\eqref{eq:P1_generators}, we get:
\begin{align*}
P^1\left(P^1(t^5\cdot u)\right)&=P^1\left(P^1(t^5)\right)\cdot u -P^1(t^5)P^1(u) + t^5P^1\left(P^1(u)\right)\\
&= (7\cdot 5)t^9\cdot u -(5t^7)(-c_2\cdot u) + t^5\cdot 0=-(t^9\cdot u +t^7c_2\cdot u),\\
P^1\left(P^1(tc_2^2\cdot u)\right)&=P^1\left(P^1(t)\right)\cdot c_2^2 u -P^1(t)P^1(c_2^2\cdot u) + tP^1\left(P^1(c_2^2\cdot u)\right)\\
&= -(t^3c_2^3\cdot u +tc_2^4\cdot u),\\
P^1\left(P^1(t^2c_3\cdot u)\right)&=P^1\left(P^1(t^2)\right)\cdot c_3 u -P^1(t^2)P^1(c_3\cdot u) + t^2P^1\left(P^1(c_3\cdot u)\right)= -t^6c_3\cdot u, \\
\end{align*} since $P^1(c_3\cdot u)=0$. And similarly:
\begin{align*}
P^1\left(P^1(c_2c_3\cdot u)\right)&=P^1\left(P^1(c_2)\right)\cdot c_3 u -P^1(c_2)P^1(c_3\cdot u) + c_2P^1\left(P^1(c_3\cdot u)\right)= -c_2^3c_3\cdot u.
\end{align*}
Thus each generator of $\ker(d_4)$ listed above is, up to a unit, the target of a $d_8$. This shows that $\operatorname{gr}\big(\pi_0\operatorname{Maps}_\sphere(\Sigma^{-1}C, C(\alpha_1)\otimes \tmf)\big)=0,$ which implies the group itself is zero.
\begin{figure}[h]
\centering
\textbf{The relevant part of the $E_2$-page of the Atiyah--Hirzebruch spectral sequence computing $\pi_*\operatorname{Maps}_{\sphere}(\Sigma^{-1}C,C(\alpha_1)\otimes \tmf).$}\par\medskip
\begin{tikzpicture}
\matrix (m) [matrix of math nodes,
nodes in empty cells,nodes={ minimum width=4ex,
minimum height=3ex},
column sep=.1ex,row sep=.1ex]{
& & & & &&&&&&&&&& &\node[](f){}; \\
& \node[circle,draw,yscale=.2,xscale=.2] {}; \node[circle,draw,yscale=.3,xscale=.3] (b){};&&&\node[circle,draw,yscale=.2,xscale=.2] {}; &&&&&&&\node[circle,draw,yscale=.2,xscale=.2] {}; &&&\node[circle,draw,yscale=.2,xscale=.2] {}; &&21
\\
\\
& \node[circle,draw,yscale=.2,xscale=.2] {}; &&&\node[circle,draw,yscale=.2,xscale=.2] {}; &&&&&&&\node[circle,draw,yscale=.2,xscale=.2] {}; &&&\node[circle,draw,yscale=.2,xscale=.2] {}; &&19
\\
& \\
& \node[circle,draw,yscale=.2,xscale=.2] {}; &&&\node[circle,draw,yscale=.2,xscale=.2] {}; \node[rectangle,draw,yscale=.5,xscale=.8](a){};&&&&&&&\node[circle,draw,yscale=.2,xscale=.2] {}; &&&\node[circle,draw,yscale=.2,xscale=.2] {}; &&17
\\
\\
& \node[circle,draw,yscale=.2,xscale=.2] {}; &&&\node[circle,draw,yscale=.2,xscale=.2] {}; &&&&&&&\node[circle,draw,yscale=.2,xscale=.2] {}; &&&\node[circle,draw,yscale=.2,xscale=.2] {}; &&15
\\
& & \\%14
& \node[circle,draw,yscale=.2,xscale=.2] {}; &&&\node[circle,draw,yscale=.2,xscale=.2] {}; &&&&&&&\node[circle,draw,yscale=.2,xscale=.2] {}; &&& \node[circle,draw,yscale=.2,xscale=.2] {};& &13
\\
& & \\%12
& \node[circle,draw,yscale=.2,xscale=.2] {}; &&&\node[circle,draw,yscale=.2,xscale=.2] {}; &&&&&&&\node[circle,draw,yscale=.2,xscale=.2](e) {}; &&&\node[circle,draw,yscale=.2,xscale=.2] {}; &&11
\\
& \\%10
& \node[circle,draw,yscale=.2,xscale=.2] {}; &&&\node[circle,draw,yscale=.2,xscale=.2] {}; &&&&&&&\node[circle,draw,yscale=.2,xscale=.2](c) {}; &&& \node[circle,draw,yscale=.2,xscale=.2] {};&&9
\\
& & \\
& \node[circle,draw,yscale=.2,xscale=.2] {}; &&&\node[circle,draw,yscale=.2,xscale=.2] {}; &&&&&&&\node[circle,draw,yscale=.2,xscale=.2] {}; &&& \node[circle,draw,yscale=.2,xscale=.2](d) {}; \node[rectangle,draw,yscale=.5,xscale=.8]{};&&7
\\
\node[](h){}; & & & & & & & & & & & & & & & & \node[](i){};\\
\quad\strut -q & 20& 19&18 &17 & 16& 15& 14& 13 & 12 & 11 & 10 & 9& 8 & 7 &\node[](g) {}; \strut \\
\quad\strut &\smallbetasq &&& {\smallalphabeta} & && && & &\smallbeta & &&\smallalpha & \strut \\};
\draw [arrow,dashed] (a) -- node[anchor=west]{$d_4$} (b);
\draw[arrow,dashed](c) --node[anchor=west]{$d_8$}(a);
\draw[arrow,dashed](d) --node[anchor=west]{$d_4$}(e);
\draw[](f) --(g);
\draw[](h)--(i);
\end{tikzpicture}\caption{The $x$-axis is graded by $q$ with the generator of $\pi_{-q}(C(\alpha_1)\otimes \tmf)$. The $y$-axis is graded by the degree of generators for $\HF{3}{q}(\Sigma^{-1}C)$. A circle in degree $(p,-q)$ indicates a non-zero three-torsion group. Rectangles mark the bidegrees that can contribute to the $p+q=0$ line. We indicate the differentials that eliminate the terms on $p+q=0$.}\label{fig:tmf_hom_maps}\end{figure}
\end{proof}
\begin{rmk} One might hope that $C(\alpha_1)\otimes \tmf$ splits off of $\sk^i\BUct \otimes \tmf$ for $i>26$. In addition to being a cleaner result, such an extension might allow us to extend the $\rho$ invariant to vector bundles on higher-dimensional spaces. Unfortunately, our approach to splitting off $C(\alpha_1)\otimes \tmf$ is computationally intensive, and it is not at all clear how far up the skeleton the splitting extends. However, from discussions with M.J. Hopkins, we believe $\tilde \rho$ can be extended to all of $\BUct$ by a different method. We hope to work this out in the future, although it is not necessary for our result.
\end{rmk}
\subsecl{The cohomology of $\BUct$ and related spectra}{subsec:cohomology_BUct}
This subsection includes the main calculations needed for the proof of Theorem~\ref{thm:tmf_class_exists}. Some of these results have already been used in the previous subsection, and some are necessary for the next subsection. We begin with the cohomology of the relevant classifying space and the Thom spectrum of interest.
\begin{prop}\label{prop:cohomology_BUct}
The $P^1$-module structure on the mod $3$ cohomology of $\BUc$ and $\BUct$ is given as follows:
\begin{enumerate}
\item $\HF{3}{*}(\BUc) \simeq \Z/3[t,c_2,c_3]$, where $|t|=2$, $|c_2|=4$, and $|c_3|=6$; and
\begin{align*}
P^1c_2&=c_2^2 & (P^1P^1)c_2&=2c_2^3\\
P^1t&=t^3 & (P^1 P^1)t&=0\\
P^1c_3&=c_2c_3 & (P^1P^1)c_3&=-c_2^2c_3.\\
\end{align*}
\item $\HF{3}{*}(\BUct) \simeq \HF{3}{*}(\BUc)\cdot u,$ with
\begin{align*}
P^1u&=-c_2\cdot u & (P^1 P^1)u &=0\\
\end{align*}
where we view $\HF{3}{*}(\BUc)$ as a $P^1$-algebra and therefore the $P^1$-module structure is determined by the action on $u$.
\end{enumerate}
\end{prop}
\begin{proof}
Part (1) follows from Steenrod operations on Chern classes in $\HF{3}{*}(BU(3))$, together with the fact that the natural map $\BUc \to BU(3)$ induces $c_1\mapsto 0$, $c_2\mapsto c_2$, and $c_3 \mapsto c_3$.
Part (2) follows from the $\HZt$-Thom isomorphism together with the universal formula for Steenrod operations on Thom classes, which we now explain. However, we will later need formulas for Steenrod operations on the Thom class for the bundle $-\gamma_4$ on $BU(4)_{\coz}$, so we give these and then deduce those for $-\gamma_3$ on $\BUc$.
We first compute $P^1$ on the Thom class $u_{\gamma_4}$ for $\gamma_4$ on $BU(4)$ via the universal example of $MU(1)^{\times 4}$.
\begin{align*} P^1(wxyz) &=w^3xyz+wx^3yz+wxy^3z+wxyz^3\\
& =wxyz(w^2+ x^2+y^2+z^2)\\
&=wxyz((w+x+y+z)^2-2(wx+wy+wz+xy+xz+yz)), \end{align*}
implying that
\begin{equation}\label{eq:P1c4}
P^1(u_{\gamma_4})=(c_1^2+c_2)u_{\gamma_4}.
\end{equation}
The Chern classes for $-\gamma_4$ are the coefficients of the power series inverse to
$$c({\gamma_4})=1+c_1t+c_2t^2+c_3t^3+c_4t^4.$$
\begin{align*}
\frac{1}{c(\gamma_4)}&=1-(c_1t+c_2t^2+c_3t^3+c_4t^4)+(c_1t+c_2t^2+c_3t^3+c_4t^4)^2 \\
&\,\,\,\,\,\,\,\,\,\,\,\, -(c_1t+c_2t^2+c_3t^3+c_4t^4)^3 +(c_1t+c_2t^2+c_3t^3+c_4t^4)^4+\dots \\
&= 1-c_1t+(c_1^2-c_2)t^2+(-c_1^3+2c_1c_2-c_3)t^3+(c_1^4-3c_1^2c_2+c_2^2+2c_3c_1-c_4)t^4+\dots,
\end{align*}
so that
\begin{align*}
c_1(-\gamma_4) &= -c_1
& c_2(-\gamma_4) & =c_1^2-c_2
\\c_3(-\gamma_4) & = -c_1^3+2c_1c_2-c_3
&c_4(-\gamma_4)&=c_1^4+c_2^2-c_3c_1-c_4.
\end{align*}
The above implies
\begin{align}\label{eq:P1u}P^1(u_{-\gamma_4})&=\big( (-c_1)^2 + c_1^2-c_2\big) \cdot u_{-\gamma_4}= -(c_1^2+c_2)\cdot u_{-\gamma_4}.\end{align}
We next calculate $P^1c_1^2$ and $P^1c_2$ in $H^*(BU(4))$. For $c_1$ we have:
\begin{align*}P^1((w+x+y+z)^2)&=2(w+x+y+z)P^1(w+x+y+z)\\
&=2(w+x+y+z)(w+x+y+z)^3=2(w+x+y+z)^4, \,\, \text{ and}
\end{align*}
\begin{align}\label{P1c12}P^1(c_1^2)=-c_1^4.\end{align}
Let $s_n=s_n(w,x,y,z)$ denote the $n$-th elementary symmetric polynomial in $w,x,y,z$. The computation for $c_2$ goes as follows:
\begin{align*}
P^1(wx+wy+wz+xy+xz+yz)&=w^3x+wx^3+w^3y+wy^3+w^3z+wz^3\\ &\,\,\,\,\,\,\,\, +x^3y+xy^3+x^3z+xz^3+y^3z+yz^3\\
&=s_2(w^2+x^2+y^2+z^2)-s_3s_1+xyzw
\\
&=s_2(s_1^2+s_2) - s_3s_1 +s_4
\end{align*}
Therefore:
\begin{align}\label{eq:P1c2}
P^1(c_2)&=c_1^2c_2+c_2^2-c_1c_3+c_4
\end{align}
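As a consistency check, restricting Equation~\eqref{eq:P1c2} along $BU(3) \to BU(4)$ (so that $c_4=0$) and setting $c_1=0$ recovers the formula $P^1(c_2)=c_2^2$ asserted in part (1).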
Combining Equations~\eqref{eq:P1u},\eqref{P1c12}, and \eqref{eq:P1c2} we get:
\begin{align*}P^1 P^1(u_{-\gamma_4})&=-\big( P^1(c_1^2+c_2)\cdot u_{-\gamma_4}+ (c_1^2+c_2)P^1(u_{-\gamma_4})\big)\\
&= -\big(-c_1^4+c_1^2c_2+c_2^2-c_1c_3+c_4 -(c_1^2+c_2)^2 \big)\cdot u_{-\gamma_4}\\
&=-(c_1^4-c_1^2c_2-c_1c_3+c_4)\cdot u_{-\gamma_4}.
\end{align*}
Now let $\tilde u$ denote the Thom class of the negative of the universal bundle on $BU(4)_{\coz}$ and as before let $u$ denote the Thom class of $-\gamma_3$ on $\BUc$. Since the universal bundle has zero first Chern class, we get:
\begin{align}\label{eq:P1_tildeu}
P^1(\tilde u)&=-c_2\cdot \tilde u &P^1 P^1(\tilde u)&=-c_4\cdot \tilde u
\end{align}
Since $c_4$ restricts to zero on $\BUc$, we deduce
\begin{align}\label{eq:P1_real_u}
P^1( u)&=-c_2\cdot u &P^1P^1( u)&=0.
\end{align}
This completes the proof. \end{proof}
From the above we can use the Leibniz rule for $P^1$ to derive:
\begin{cor}\label{cor:P1_u_classes} In $\HF{3}{*}(\BUct)$,
\begin{align*}
P^1(t^i\cdot u)&=it^{i+2}\cdot u -t^ic_2\cdot u\\
P^1(c_2^ic_3^j\cdot u)& = (i+j-1) c_2^{i+1}c_3^j\cdot u.
\end{align*}
\end{cor}
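Indeed, both formulas are immediate from the Leibniz rule and Proposition~\ref{prop:cohomology_BUct}: for instance,
$$P^1(t^i\cdot u)=P^1(t^i)\cdot u+t^iP^1(u)=it^{i+2}\cdot u-t^ic_2\cdot u,$$
and the second formula follows in the same way from $P^1(c_2)=c_2^2$, $P^1(c_3)=c_2c_3$, and $P^1(u)=-c_2\cdot u$.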
In order to prove that $\tilde \rho$ is unique in Subsection~\ref{subsec:proof_unique}, we will need to analyze a certain cofiber. To that end, the following calculation will be useful.
\begin{prop}\label{prop:P1_module_cofiber} Let $\epsilon\: S^7 \to \BUc$ generate $\pi_7\BUc$, let $u_0$ be the Thom class for $\epsilon$, and let $C(\epsilon)$ denote the cofiber of $\Thom_{u_0}(\epsilon)\:\sphere^7 \to \BUct$. As a module over $P^1$,
$$\HF{3}{*}(C(\epsilon)) \simeq \HF{3}{*}\BUct \oplus \Z/3\cdot \{y\},$$
where $|y|=8$ and \begin{align*}
P^1(-c_2\cdot u)&=y\\
P^1(y)&=0
\end{align*}
\end{prop}
\begin{proof}
Let $\iota\: BU(3) \to BU(4)$ denote the natural map. Because the composite $\iota \circ \epsilon\: S^7 \to BU(4)$ is null, we get a homotopy commutative diagram
\begin{equation}\label{eq:BU3BU4}
\begin{tikzcd}[row sep=.5in, column sep=.5in]
S^7 \arrow[r,"{}"]\arrow[dr, "0"] & \BUc \arrow[d,"{\gamma_3 \oplus \underline{\C}}" near start]\arrow[r] & (\BUc)/S^7 \arrow[dl, dashed, "\delta"] \\
& BU(4)_{\coz},&
\end{tikzcd}
\end{equation}
where $\delta$ is any extension of the bundle $\gamma_3 \oplus \underline{\C}$ to the cofiber. We get a homotopy pushout by taking Thom spectra:
\begin{equation}\label{eq:pushout_spectra}
\begin{tikzcd}
\sphere^0 \oplus \sphere^{7} \arrow[r,"p_{1}"]\arrow[d,"\Thom(\epsilon)" left]
& \sphere^0\arrow[d]\\
\BUct\arrow[r]
&\Th{(\BUc)/S^7}{-\delta},
\end{tikzcd}
\end{equation}
where $p_1$ is projection onto the first factor. Therefore
$$C(\epsilon)\simeq\Th{(\BUc)/S^7}{-\delta}.$$
Let $T:=\BUct$ and $T_4:=\Th{BU(4)_{\coz}}{-\gamma_4}$ below. From Equation~\eqref{eq:BU3BU4} we get a commuting diagram
\begin{equation}\label{eq:coh_G}
\begin{tikzcd}[column sep = 10]
H^{i-1}(T)\arrow[r] & H^{i-1}(\sphere^7)\arrow[r] & H^i(C(\epsilon))\arrow[r] & H^i(T)\arrow[r] & H^i(S^7)\\
& & H^i(T_4)\arrow[ull]\arrow[ul,"0" above]\arrow[u,"a"]\arrow[ur]\arrow[urr,"0" above]
\end{tikzcd}
\end{equation}
where $H^i$ denotes either $\Z$ or $\Z/3$ coefficients and the top row is exact. The map $a$ induces a ring isomorphism
\begin{equation}\label{eq:cof_is_kinda_BU4} H^*(C(\epsilon))/H^{>8}\simeq H^*(\Th{BU(4)_{\coz}}{-\gamma_4})/H^{>8},\end{equation}
where $H^{>8}$ is the ideal of elements of degree greater than $8$. Equation~\eqref{eq:cof_is_kinda_BU4} identifies the class $c_4\cdot u$ with a class $y \in H^*(C(\epsilon))$ and implies that
\begin{align*}
P^1(-c_2\cdot u)&=y\\
P^1(y)&=0,
\end{align*}
as was to be shown.
To complete the proof, note that Diagram~\eqref{eq:coh_G} implies
$$H^{*}(\BUct) \simeq H^*(C(\epsilon))/\langle y\rangle$$ as modules over the Steenrod algebra.
\end{proof}
\subsecl{Proof of Proposition~\ref{prop:uniqueness}}{subsec:proof_unique}
Consider the cofiber $C:=\operatorname{Cof}(\epsilon\:S^7 \to \sk^{26}\BUc)$. Let
\begin{align*}C'(\epsilon)&=\operatorname{Cof}(\sphere^7 \to \sk^{26}\BUct) \simeq \Th{C}{-\delta},
\end{align*}
where $\delta\: C \to BU(4)$ is an extension of $\gamma_3\oplus \underline{\mathbb C}$ over $C$.
We get a diagram
\begin{equation}\label{eq:0_vee_beta}
\begin{tikzcd}[column sep=1em, row sep=1em]
& \sphere^{10} \arrow[d,"\alpha_1" left] \arrow[dr,"\tilde v"]\\
\Sigma^{-1}C'(\epsilon) \arrow[r,"\phi"]& \sphere^{7} \arrow[r]\arrow[d,"\beta_1" left] &\sk^{26}\BUct\arrow[dl,dashed, above] \\
& \Sigma^{-3}\tmf,
\end{tikzcd}
\end{equation} where $\tilde v$ is as in Diagram~\eqref{eq:thomified_htpy2}.
A dotted arrow is an element $\tilde \rho \in \tmf^{-3}\BUct$ satisfying Corollary~\ref{cor:exists}. Choices of the dotted arrow in Diagram~\eqref{eq:0_vee_beta} up to homotopy are a torsor for $\pi_0\operatorname{Maps}_{\sphere}(C'(\epsilon),\Sigma^{-3}\tmf).$ We show that this group is zero via an Atiyah--Hirzebruch spectral sequence
$$E^2_{p,q}=H^p(C'(\epsilon); \pi_{-q+3}\tmf)\implies \tmf^{p+q-3}(C'(\epsilon)),$$
depicted in Figure~\ref{fig:tmf_choice}.
We computed the cohomology of $C(\epsilon)$ in Proposition~\ref{prop:P1_module_cofiber} and
$H^{*\leq 26}C(\epsilon)\simeq H^{*\leq 26}C'(\epsilon)$. The homotopy of $\tmf$ is known \cite{Bauer,Mathew}. The terms along the line $p+q=0$ on the $E_2$-page are $u \otimes \alpha_1$ and $\alpha_1\beta_1$ tensored with elements in $H^{10}(C'(\epsilon))\simeq H^{10}(\BUct/S^7)\cdot u.$ Since $\pi_*\tmf$ has no other odd homotopy groups until degree $27$, there are no further possible contributions to consider.
First, consider $u\otimes \alpha_1$. Since $P^1P^1(u)=y$ and $\langle \alpha_1,\alpha_1,\alpha_1\rangle =\beta_1$, we have $d_8(u \otimes \alpha_1) = \beta_1 \otimes y \neq 0$, and the class $u\otimes\alpha_1$ does not survive the spectral sequence.
On the other hand, there are many bidegree $(-10,10)$ classes to check:
$$E^2_{(-10,10)} \simeq\alpha_1\beta_1\otimes \langle t^5\cdot u, \, t^3c_2\cdot u,\, t^2c_3\cdot u,\,tc_2^2\cdot u,\,c_2c_3\cdot u \rangle.$$
We claim that all classes above support a differential or are the target of a nonzero differential. Figure~\ref{fig:tmf_choice} shows the first interesting differentials in and out of this cell on the $E_2$-page.
\begin{figure}[h]
\centering
\textbf{A larger portion of the $E_2$-page of the Atiyah--Hirzebruch spectral sequence computing $\tmf^*\left(C'(\epsilon)\right)$. }\par\medskip
\begin{tikzpicture}
\matrix (m) [matrix of math nodes,
nodes in empty cells,nodes={minimum width=3ex,
minimum height=3ex},
column sep=.1ex,row sep=.1ex]{
& & & & & & & & & & & & p= \\
& \node[circle,draw,yscale=.1,xscale=.1](c) {}; \node[rectangle,draw,yscale=.1,xscale=.1]{};& & & & \node[rectangle,draw,yscale=.1,xscale=.1]{};& & & \node[circle,draw,yscale=.2,xscale=.2] {}; & \node[rectangle,draw,yscale=.1,xscale=.1]{}; & &\node[circle,draw,yscale=.2,xscale=.2] {}; & 18 \\
& & & & & & & & & & & & 17 \\
&\node[circle,draw,yscale=.2,xscale=.2] {}; \node[rectangle,draw,yscale=.1,xscale=.1]{}; & & & & \node[rectangle,draw,yscale=.1,xscale=.1]{};& & & \node[circle,draw,yscale=.2,xscale=.2] {}; & \node[rectangle,draw,yscale=.1,xscale=.1]{};& &\node[circle,draw,yscale=.2,xscale=.2] {}; & 16 \\
& & & & & & & & & & & & 15 \\
&\node[circle,draw,yscale=.2,xscale=.2] {}; \node[rectangle,draw,yscale=.1,xscale=.1](f){}; & & & & \node[rectangle,draw,yscale=.1,xscale=.1]{};& & & \node[circle,draw,yscale=.2,xscale=.2] {}; & \node[rectangle,draw,yscale=.1,xscale=.1]{};& &\node[circle,draw,yscale=.2,xscale=.2] {}; & 14 \\
& & & & & & & & & & & & 13 \\
& \node[circle,draw,yscale=.2,xscale=.2] {}; \node[rectangle,draw,yscale=.1,xscale=.1]{}; & & & & \node[rectangle,draw,yscale=.1,xscale=.1]{};& & \node[](b'){};& \node[circle,draw,yscale=.2,xscale=.2] {}; & \node[rectangle,draw,yscale=.1,xscale=.1]{};& &\node[circle,draw,yscale=.2,xscale=.2] {}; & 12 \\
& & & & & & & & & & & & 11 \\
&\node[circle,draw,yscale=.2,xscale=.2] {}; \node[rectangle,draw,yscale=.1,xscale=.1]{}; & & & & \node[rectangle,draw,yscale=.1,xscale=.1]{};& & & \node[rectangle,draw,yscale=.3,xscale=.3,rotate=45] (b) {}; & \node[rectangle,draw,yscale=.1,xscale=.1]{};& &\node[circle,draw,yscale=.2,xscale=.2] {}; & 10 \\
& & & & & & & & & & & & 9 \\
&\node[circle,draw,yscale=.2,xscale=.2] {}; \node[rectangle,draw,yscale=.1,xscale=.1]{}; & & & & \node[rectangle,draw,yscale=.1,xscale=.1]{};& & & \node[circle,draw,yscale=.2,xscale=.2] {};& \node[rectangle,draw,yscale=.1,xscale=.1]{};& &\node[circle,draw,yscale=.2,xscale=.2] {}; & 8 \\
& & & & & & & & & & && 7 \\
&\node[circle,draw,yscale=.2,xscale=.2] {}; \node[rectangle,draw,yscale=.1,xscale=.1]{}; & & & & \node[rectangle,draw,yscale=.1,xscale=.1]{};& & & \node[circle,draw,yscale=.2,xscale=.2] {}; & \node[rectangle,draw,yscale=.1,xscale=.1]{};& & \node[circle,draw,yscale=.2,xscale=.2] (a){};& 6\\
\quad\strut q=& -17 & -16 & -15 & -14& -13&-12 &-11 &-10 &-9 &-8 &-7\strut \\
\pi_{-q}
&\beta_1^2, & & & & c_4^2 & & & \alpha \beta & c_6 & & \beta \\
& c_4c_6 & & & & & & & & & & \\
};
\draw[dashed] (m-3-2.north) -- (m-13-12.north);
\draw [arrow] (a) -- node[anchor=west]{$d_4$} (b);
\draw [arrow] (b) -- node[anchor=south]{$d_8$} (c);
\end{tikzpicture}\caption{The dashed line indicates the $p+q=0$ line converging to $\pi_0\operatorname{Maps}_{\sphere}(C'(\epsilon),\Sigma^{-3}\tmf).$ A dot indicates a non-zero three-torsion group; a square, a non-zero torsion-free group. The diamond in bidegree $(-10,10)$ indicates a nonzero $E_2$-term which may contribute to the computation.}\label{fig:tmf_choice}\end{figure}
First, we check which of the above are the target of a differential in (A) below. Then we check that the remaining classes support a nonzero differential in (B).
\begin{enumerate}
\item[(A)] The only possible differential is a $d_4$ originating in bidegree $(-7,6)$, computed as follows: classes in this cell are $\beta_1$ times classes in \begin{equation}\label{eq:h6}H^6(\BUc)\cdot u \simeq \langle t^3 \cdot u, \, tc_2 \cdot u, \, c_3 \cdot u \rangle.\end{equation} Proposition~\ref{prop:P1_module_cofiber} implies:
\begin{align*}
P^1(t^3u)& =t^3P^1(u)=-c_2t^3u \\
P^1(tc_2\cdot u)&=t^3c_2\cdot u+tc_2^2\cdot u -tc_2^2\cdot u=t^3c_2\cdot u
\\
P^1(c_3 \cdot u)&=c_2c_3\cdot u -c_2c_3 \cdot u =0
\end{align*}
Therefore: $d_4(\beta_1 \otimes t^3\cdot u)= -\alpha_1\beta_1 \otimes c_2t^3\cdot u$ and the target dies on the $E^5$-page.
\item[(B)] A class $\alpha_1\beta_1 \otimes (z \cdot u)$ which survives to the $E^8$-page supports a nonzero $d_8$ if $P^1P^1(z\cdot u)$ is nonzero.
Using Proposition~\ref{prop:P1_module_cofiber} and standard facts about Steenrod operations, we get:
\begin{align*} -(P^1 P^1)(t^5\cdot u)&=-(P^1 P^1)(t^5)\cdot u +P^1(t^5)P^1(u) \\&= (t^9+t^7c_2)\cdot u\\ \\
-(P^1 P^1)(t^2c_3\cdot u) & = -(P^1P^1)(t^2)c_3\cdot u+P^1(t^2)P^1(c_3\cdot u)-t^2(P^1P^1)(c_3\cdot u)\\
&=t^6c_3\cdot u\\
-(P^1 P^1)(tc_2^2 \cdot u) &= -(P^1P^1)(tc_2\cdot u) c_2+P^1(tc_2\cdot u)P^1(c_2)-P^1P^1(c_2)(tc_2\cdot u)\\
&= -P^1(t^3c_2 \cdot u)c_2 +t^3c_2^3\cdot u+tc_2^4 \cdot u\\
&= -t^3P^1(c_2\cdot u)\cdot c_2+t^3c_2^3\cdot u+tc_2^4\cdot u\\
& = t^3c_2^3\cdot u+tc_2^4\cdot u\\ \\
-(P^1 P^1)(c_2c_3\cdot u) & = -(P^1P^1)(c_3 \cdot u)c_2+P^1(c_3\cdot u)P^1(c_2)-(P^1P^1)(c_2)(c_3\cdot u)\\
&= -(P^1P^1)(c_2)c_3\cdot u \\& =c_2^3c_3\cdot u.
\end{align*}
This shows that the classes $t^5 \cdot u, \, t^2c_3\cdot u, \, tc_2^2 \cdot u, \, c_2c_3\cdot u$ support nonzero differentials whose joint span is $4$-dimensional.
\end{enumerate}
No terms on the $p+q=0$ line survive the spectral sequence, so the group in question is zero.
\section{Untwisting the invariant for rank $3$ bundles on $\CP^5$}\label{sec:untwisting}
The work of the previous Section~\ref{sec:exist} provides an association
$$V \mapsto \Thom(V)^*\tilde \rho \in \tmf^{-3}(\Th{CP^5}{-V}).$$
Vector bundles on $\CP^5$ with $c_1\equiv 0 \pmod 3$ and $c_2\equiv 0 \pmod 3$ are $\tmf$-orientable, as we now explain: Any complex bundle $V$ carries an $H\Z$-Thom class. Since $\tau_0\tmf = H\Z$, $V$ is $\tmf$-orientable if and only if this class lifts up the Postnikov tower for $\tmf$. Since $\CP^5$ is finite-dimensional, this is a finite lifting problem. The Postnikov tower for $\tmf$ through degree $10$ has only one stage that obstructs lifting the $\HZ$-Thom class. This gives the condition that $c_1^2-2c_2\equiv 0 \pmod 3$.
Thus, for $V$ over $\CP^5$ with $c_1\equiv 0 \pmod 3$ and $c_2\equiv 0 \pmod 3$, there exist isomorphisms $\tmf^*(\Th{\CP^5}{-V})\simeq \tmf^*(\suspp \CP^5)$.
However, there are many choices of such an isomorphism, and our choice must not depend on $V$.
The ideal way to resolve this would be via a universal example: a space $B$ together with a bundle $V_B$ which carries a (canonical) $\tmf$-orientation, such that all bundles of interest are canonically pullbacks of $V_B$ and therefore inherit a $\tmf$-orientation. Classical examples of this phenomenon are numerous: $BU(n)$ is canonically $H\Z$ oriented, giving the classical $H\Z$-Thom isomorphism for complex bundles; $BSpin$ is canonically $KO$-oriented via the Atiyah--Bott--Shapiro orientation, giving a canonical $KO$-Thom isomorphism for spin bundles \cite{ABS}; $BString$ is canonically $tmf$-oriented, giving a canonical $tmf$-orientation for string bundles \cite{AHR}.
Our bundles are not string, nor is there an obvious candidate for a universal example. Other, more hands-on, approaches to resolving orientability problems can be found in the literature, for example in \cite{Chat,BhatChat}\footnote{Since $L_{K(2)}TMF=EO_2$ at the prime $3$, the orientability studied in \cite{Chat,BhatChat} is closely related to ours.}. The approach we take here is informed by discussions with H. Chatham.
In Theorem~\ref{thm:well_defined}, we show that for $V\: \CP^5 \to BU(3)$ with $c_1\equiv 0 \pmod 3$ and $c_2\equiv 0 \pmod 3$, there is a set of Thom isomorphisms subject to some concrete restrictions, with the following property: there is a map $i\:\sphere^{10}\to \suspp \CP^5$ such that, for any Thom isomorphism $v$ in this distinguished set, the image of $\Thom(V)^*(\tilde \rho)$ under the restriction
\begin{equation}\label{eq:indep}\tmf^*(\Th{ \CP^5}{-V}) \xrightarrow{v}\tmf^*(\suspp \CP^5)\xrightarrow{i^*} \tmf^{-3}(\sphere^{10})\end{equation} does not depend on the choice of $v$. Thus, applying the composite in Equation~\eqref{eq:indep} to the class $\Thom(V)^*(\tilde \rho)$ defines the desired invariant $\rho(V)$.
In Subsection~\ref{subsec:background_orient}, we briefly recapitulate the theory of orientations and establish notation. In Subsection~\ref{subsec:choiceOrientation} we study orientations of bundles on $\CP^5$ in detail and define the set of orientations that makes the composite in Equation~\eqref{eq:indep} independent of the choice within this set. We prove independence in Theorem~\ref{thm:well_defined}. We can then define the invariant $\rho$ for rank $3$ bundles $V$ on $\CP^5$ with $c_1\equiv 0 \pmod 3$ and $c_2\equiv 0 \pmod 3$ via the definition indicated in the previous paragraph.
The next Subsection~\ref{subsec:rho_works} features the verification that $\rho$ distinguishes bundles with the same Chern classes, completing the proof of Theorem~\ref{thm:combined}.
The final two subsections include some computations (Subsection~\ref{subsec:sum_line_bundles}) and the observation that $\rho$ can also be defined for rank $2$ bundles (Subsection~\ref{subsec:rank_2}). In these subsections we also discuss future research questions that we hope to address.
\begin{conv} Throughout this section, all spaces and spectra are implicitly localized at the prime $3$. \end{conv}
\subsecl{Background on orientability and orientations}{subsec:background_orient}
We present the relevant background on orientability, orientations, Thom classes, and Thom isomorphisms. The classical version is as follows. Let $V$ be a vector bundle over a space $X$, with disc bundle $dV$ and sphere bundle $sV$. Let $S^n \to dV$ be the inclusion of a fiber, inducing $ i\: S^n \to dV/sV$. Recall that, classically, a Thom class for a vector bundle $V$ over $X$ in a generalized cohomology theory $E$ is a class $v \in E^{\operatorname{dim}V}(dV/sV)$ such that $i^*v$ is a unit in $E^0(\sphere^0)\simeq E^n(\sphere^n)$. Orientability refers to the existence of such a class; an orientation is a choice of such a class. Given such an orientation, pairing with the Thom class under the Thom diagonal gives a Thom isomorphism
$$(-)\cdot v\: E^*(\suspp X) \simeq E^*(\Th{X}{V}).$$
\begin{conv} We will use the terms orientation, Thom class, and Thom isomorphism synonymously. \end{conv}
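A classical example, not needed in what follows: for $E=H\Z$ and $\gamma_1$ the tautological line bundle on $\CP^n$, the Thom space of $\gamma_1$ is $\CP^{n+1}$, a Thom class is a generator of $H^2(\CP^{n+1};\Z)$, and the resulting Thom isomorphism identifies $H^*(\suspp \CP^n)$ with the reduced cohomology $\tilde H^{*+2}(\CP^{n+1};\Z)$.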
Alternatively, a vector bundle $V\: X \to BU(n)$ gives rise to a (stable) spherical bundle
\begin{equation}\label{eq:assoc_spherical} V_\sphere:= \left(X \to BU(n)\to BU \xrightarrow{J}BGL_1\sphere\right),\end{equation}
where $J$ is the complex $J$-homomorphism.\footnote{The complex $J$-homomorphism is commonly defined as a map $J\:U \to GL_1\sphere$. Our $J$ is the delooping of the former.} The unit map $1\: \sphere \to E$ induces a map $BGL_11\: BGL_1\sphere \to BGL_1E$, and we obtain \begin{equation}\label{eq:assoc_E_bdl} V_E:= \left(X \xrightarrow{V_\sphere}BGL_1\sphere \xrightarrow{BGL_11}BGL_1E\right).\end{equation}
$E$-orientability is the condition that $V_E$ in Equation~\eqref{eq:assoc_E_bdl} is nullhomotopic; an $E$-orientation is a choice of nullhomotopy of $ V_E$ (see \cite{ABGHR2}, building on \cite{May77}).
\begin{rmk}\label{rm:module_iso} An orientation gives not just an isomorphism on cohomology, but an isomorphism of $E$-modules:
$$\Th{X}{V}\otimes E\simeq \suspp X \otimes E.$$
\end{rmk}
\begin{notation}\label{not:stabilized_bdl} In this section we will need to refer to many different maps associated to a given $V\: X \to BU(r)$. Our notation is slightly abusive, but clear from context.
\begin{itemize}
\item $V$ will refer to any of the following associated maps:
\begin{itemize}
\item The definitional map $X \to BU(r)$,
\item The composite $X \to BU(r) \to BU$, and
\item The transpose of the previous item under the loops-suspension adjunction, which is a map $\suspp X \to bu$.
\end{itemize}
\item For $E$ a commutative ring spectrum, $V_E$ will refer to both:
\begin{itemize}
\item The composite $X \xrightarrow{V} BU \xrightarrow{J} BGL_1\sphere \xrightarrow{BGL_11} BGL_1E$, and
\item The transpose $ \suspp X \xrightarrow{V} bu \xrightarrow{j} bgl_1\sphere \xrightarrow{bgl_11} bgl_1E$.
\end{itemize}
\end{itemize}
\end{notation}
\subsecl{Selecting orientations and the definition of $\rho$}{subsec:choiceOrientation}
The $3$-local spectrum $\Sigma^\infty_+ \CP^5$ has a splitting arising from the Adams splitting of $\suspp\CP^\infty$: \begin{equation}\label{eq:sum}\suspp \CP^5 \simeq X_0 \oplus X_1,\end{equation}
where
\begin{align*}X_1 &\simeq \Sigma^2C(\alpha_1) \oplus \sphere^{10}, &
X_0&\simeq \suspp \mathbb HP^2 \simeq \sphere^0 \oplus \Sigma^4 C(\alpha_1).\end{align*}
The summand $\suspp\mathbb HP^2$ is split via $p=\Sigma^\infty_+p'$ where $p'\: \mathbb CP^5 \to \mathbb HP^2$ is the map of spaces given by taking the $10$-skeleton of the quotient map $\CP^\infty \to \mathbb H P^\infty$.
\begin{rmk} We fix isomorphisms of $X_0$ and $X_1$ with the sums above. We will write $i$ for all inclusions of summands and $p$ for all projections onto summands. These choices can be made once independent of any future bundles involved.
\end{rmk}
Recall the terminology from Notation~\ref{not:stabilized_bdl}. To study $\tmf$-orientations is to study nullhomotopies of $$V_{\tmf}:\suspp \CP^5 \to bgl_1\tmf.$$ Our strategy is to restrict $V_{\tmf}$ to each summand in the decomposition of Equation~\eqref{eq:sum} and separately study nullhomotopies on each summand. On the summand $X_1$, we show that the bundles of interest possess a certain canonical orientation arising from the image of $j$ spectrum, while the choice on the summand $X_0$ will turn out not to matter.
At a prime $p$, the image of $j$ spectrum $bj$ can be defined as the cofiber of the map $\psi^q-1\: bu \to bu$, where $\psi^q$ is the $q$-th Adams operation and $q$ is a topological generator for $\Z_{(p)}^\times$. By the stable Adams conjecture (proved\footnote{Notable previous attempts at the stable Adams conjecture are \cite{Fried,FriedSey}. The Adams conjecture for spaces is proved in \cite{Sull,Quill}.} in \cite{BhatKit}) there is a factorization of the $j$-homomorphism $$j=\big( bu \to bj \xrightarrow{j'} bgl_1\sphere\big),$$ where the map $bu\to bj$ is the natural map to the cofiber of $\psi^q-1$.
\begin{lem}\label{lem:j_homtopy} The map $V_j:=\big(X_1 \xrightarrow{V|_{X_1}} bu\to bj\big)$ is nullhomotopic. Up to homotopy, there is a unique such nullhomotopy.
\end{lem}
\begin{proof} The Atiyah--Hirzebruch spectral sequence for computing $\pi_*\operatorname{Maps}_\sphere(X_1,bj)$ shows that both $\pi_0\operatorname{Maps}_{\operatorname{Sp}}(X_1,bj)$ and $\pi_0\operatorname{Maps}_{\operatorname{Sp}}(X_1, \Sigma^{-1}bj)$ are trivial, since $\pi_{\leq 10}bj$ is concentrated in degrees $0, 4,$ and $8$ \cite[Sec. 4]{MahRav} and $\HF{p}{*}X_1$ is concentrated in degrees $2,6,$ and $10$.
\end{proof}
\begin{defn}\label{def:jorient} Let $X$ be a space and let $W\: X\to bu$ be given. Assume that the composite $$W_j=\big(X \xrightarrow{W} bu \to bj\big)$$ is canonically nullhomotopic. Given any generalized cohomology theory $E$, the {\em $j$-orientation of $W_{E}$} will refer to the distinguished nullhomotopy
obtained by whiskering the canonical nullhomotopy of $W_j$ with the composite
$bj\xrightarrow{j'}bgl_1\sphere \xrightarrow{bgl_11} bgl_1E.$
\end{defn}
In particular, given a vector bundle $V$ of rank $3$ on $\CP^5$ with $c_1\equiv 0 \pmod 3$ and $c_2\equiv 0 \pmod 3$, the $j$-orientation of $(V|_{X_1})_{\tmf}$ will refer to the nullhomotopy of the composite $$X_1 \xrightarrow{V|_{X_1}} bu \to bj \to bgl_1\tmf$$ obtained from the unique nullhomotopy of $(V|_{X_1})_j$ given by Lemma~\ref{lem:j_homtopy}.
We are interested in orientations which extend $j$-orientations.
\begin{defn}\label{def:star} Let $Y$ be a space together with a splitting $\suspp Y=Z\oplus X$. Let $E$ be a ring spectrum with $\tau_0E=H\mathbb Z_{(3)}$.
Given a vector bundle $V$ on $Y$ such that $X \xrightarrow{V|_{X}}bu \to bj$ is canonically null, we say that an $E$-orientation $v$ of $V$ satisfies condition $(*)$ if:
\begin{itemize}
\item[$(*1)$] $v$ restricts to the $j$-orientation of $(V|_{X})_E$; and
\item[$(*2)$] $v$ lifts the canonical $\HZ$-orientation under $0$-truncation $bgl_1E\to bgl_1\HZ_{(3)}$.
\end{itemize}
\end{defn}
The main goal of this section is to prove the following:
\begin{thm}\label{thm:well_defined}Let $V$ be a vector bundle over $\CP^5$ with $c_1(V)\equiv c_2(V)\equiv 0\pmod 3$. Let $v$ be a $\tmf$-orientation for $V$ satisfying condition $(*)$ with respect to the decomposition $\suspp \CP^5=X_0 \oplus X_1.$
Then the composite \begin{equation}\label{eq:rho_def}
\begin{tikzcd}[column sep=-18em, row sep=.5em]\rho(V) := \left(\sphere^{10} \xrightarrow{i\otimes 1} \Sigma^\infty_+ \CP^5\otimes \tmf \xrightarrow{v} \Th{\CP^5}{-V}\otimes \tmf \xrightarrow{\Thom(V)^*(\tilde\rho)}\Sigma^{-3}\tmf\right)
\end{tikzcd}
\end{equation} is independent of $v$, as an element in $\pi_0\left(\operatorname{Maps}_{\sphere}(\sphere^{10},\Sigma^{-3}\tmf) \right)\simeq \pi_{13}\tmf\simeq\Z/3.$
\end{thm}
\begin{rmk} Note that an orientation $v$ satisfying the hypotheses of the theorem exists.
Let $$V\: \suspp \mathbb HP^2 \oplus X_1 \to bgl_1\sphere$$ be $\tmf$-orientable. Since $\pi_0\operatorname{Maps}_{\sphere}(X_1,gl_1H\Z)=0$, there is a unique nullhomotopy of any nullhomotopic map $X_1 \to bgl_1H\Z$. Summing the $j$-orientation of $V|_{X_1}$ with any $\tmf$-orientation of $V|_{\suspp \mathbb HP^2}$ lifting the canonical $H\Z$-orientation gives an appropriate orientation of $V$.\end{rmk}
The rest of this section is devoted to the proof of Theorem~\ref{thm:well_defined}. To begin, suppose that $v$ and $w$ are two $\tmf$-orientations of a bundle $V$, both satisfying $(*)$. Consider the following diagram of $\tmf$-modules:
\begin{equation}\label{diag:comms}
\begin{tikzcd}[row sep = .6em]
& \Sigma^\infty_+\CP^5 \otimes \tmf \\
\sphere^{10}\otimes \tmf \arrow[ur, "i"]\arrow[dr,"i" below] & \\
& \Sigma^\infty_+\CP^5 \otimes \tmf \arrow[uu, "w^{-1} v" right],
\end{tikzcd}
\end{equation}
where $w^{-1}v$ is the automorphism of $\suspp\CP^5 \otimes \tmf$ obtained by composing the Thom isomorphism corresponding to $v$ with the inverse of the Thom isomorphism corresponding to $w$ (see Remark~\ref{rm:module_iso}).
Note that, if Diagram~\ref{diag:comms} were to commute up to homotopy, the theorem would be immediate. However, commutativity is stronger than necessary. More precisely, we only need the diagram to commute after applying a certain $\tmf$-cohomology. We will return to this after some preliminary calculations.
Let $\Auttmf$ denote automorphisms in the category of $\tmf$-modules.
We study which elements of $\pi_0\operatorname{Aut}_{\tmf}(\Sigma^\infty_+\CP^5\otimes \tmf)$ arise as ratios of orientations satisfying condition $(*)$. Since $\suspp\CP^5\simeq X_0 \oplus X_1$, an automorphism $a$ of $\suspp \CP^5\otimes \tmf$ is represented by
\begin{equation}\label{eq:matrix_aut} a=\begin{pmatrix} a_{00} & a_{01} \\
a_{10} & a_{11}
\end{pmatrix}\end{equation} where the $a_{ij} \in \op{Maps}_{\tmf}(X_i\otimes \tmf,X_j\otimes \tmf)$ for $i\neq j$ and $a_{ii} \in \Auttmf(X_i\otimes \tmf).$ To study the failure of Diagram~\ref{diag:comms} to commute, we examine the possibilities for $a_{10}$ and $a_{11}$.
\begin{prop}\label{prop:ratio_thom} Suppose that $v,w$ are two $\tmf$-Thom classes for a $\tmf$-orientable bundle of rank $3$ on $\CP^5$ and that both satisfy condition $(*)$. Let $a=w^{-1}v$ be the associated automorphism of $\suspp\CP^5\otimes \tmf.$ Then $a_{10}\: X_1\otimes \tmf \to \suspp\mathbb HP^2\otimes \tmf$ is null.\end{prop}
\begin{proof}
Consider the cofiber sequence of spectra $X_1 \xrightarrow{i} \suspp \CP^5 \xrightarrow{p} X_0.$ Recall that $p=\suspp p'$, where $p'$ is the $10$-skeleton of the natural map $\CP^\infty \to \mathbb HP^\infty$. We have a diagram
\begin{equation}\label{diag:extension}
\begin{tikzcd}
X_1 \ar[dd]\ar[ddrr,"\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \implies" {labl, above},"(\dagger)","{V}|_{X_1}" below]\ar[ddrr,bend left,"0"]\\
\\
\suspp \CP^5 \ar[d,"\suspp p' "] \ar[r,"V"] &bu \ar[r] & bj \ar[r,"{j'}"] & bgl_1\sphere\ar[r, "bgl_11"] &bgl_1\tmf. \\
\suspp\mathbb HP^2 \ar[urr,dashed, "q" {below}]
\end{tikzcd}
\end{equation}
In Diagram~\ref{diag:extension}, the nullhomotopy marked $(\dagger)$ is the unique one. The dashed arrow $q$ is determined by the nullhomotopy $(\dagger)$ of the fiber. Given this, a choice $v$ of nullhomotopy of $V_{\tmf}$ extending the canonical $j$-orientation of $V|_{X_1}$ is equivalent to a choice $v'$ of nullhomotopy of $bgl_11\circ j' \circ q$.
Thus, the map $p'$ of spaces gives rise to a map $\Thom(p')\:\Th{\CP^5}{-V} \to \Th{\mathbb HP^2}{-j'\circ q}$ participating in a homotopy commutative diagram
\begin{center}
\begin{tikzcd}[column sep=4em, row sep=1em]
\Th{\CP^5}{-V}\otimes \tmf \ar[d,"v"] \ar[rr,"\operatorname{Th}(p') \otimes \tmf"] && \Th{\mathbb HP^2}{-j'\circ q}\otimes \tmf \ar[d,"v'"] \\
\suspp \CP^5\otimes \tmf \ar[rr,"\suspp p'\otimes \tmf"]&& \suspp \mathbb HP^2\otimes \tmf .
\end{tikzcd}
\end{center}
Applying the same argument to obtain $w$ and $w'$, we get a homotopy commutative diagram
\begin{equation}\label{diag:restrict_to_HP}
\begin{tikzcd}
\suspp \CP^5\otimes \tmf \ar[d,"v"]\ar[rr,"\suspp p'"]&& \suspp \mathbb HP^2\otimes \tmf\ar[d,"v'"] \\
\Th{\CP^5}{-V}\otimes \tmf \ar[d,"w^{-1}"] \ar[rr,"\operatorname{Th}(p') \otimes \tmf"] && \Th{\mathbb HP^2}{-j'\circ q}\otimes \tmf \ar[d,"(w')^{-1}"] \\
\suspp \CP^5\otimes \tmf \ar[rr,"\suspp p'"]&& \suspp \mathbb HP^2\otimes \tmf .
\end{tikzcd}
\end{equation}
Diagram~\ref{diag:restrict_to_HP} implies that $a=w^{-1}v$ has $a_{10}\simeq 0$.\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:well_defined}]
Given $v$ and $w$ both $\tmf$-orientations for $V$ satisfying condition $(*)$, let $a=w^{-1}v$ and let $a_{11}\:X_1\to X_1$ be as in Equation~\eqref{eq:matrix_aut}. By Proposition~\ref{prop:ratio_thom}, all but the left-most triangle in the following diagram commute:
\begin{equation}\label{diag:comms2}
\begin{tikzcd}[column sep=10, row sep=6]
& X_1\arrow[r]&\Sigma^\infty_+\CP^5 \otimes \tmf\arrow[dr,"w"] \\
\sphere^{10}\otimes \tmf \arrow[ur, "i"]\arrow[dr,"i"] & & & (\CP^5)^{-V}\otimes \tmf \ar[r,"\operatorname{Th}(V)"] & BU(3)^{-\gamma_3} \otimes \tmf.
\\
& X_1\arrow[uu,"a_{11}" right]\arrow[r] & \Sigma^\infty_+\CP^5 \otimes \tmf
\arrow[uu, "w^{-1}v"]\arrow[ur,"v"]
\end{tikzcd}
\end{equation}
To get that $\rho(V)$ as in Diagram~\eqref{eq:rho_def} is well-defined, it suffices to prove that Diagram~\eqref{diag:comms2} commutes after applying the functor $$\tmf^{-3}(-)=\pi_0\left(\op{Maps}_{\tmf}\left((-)\otimes \tmf,\Sigma^{-3}\tmf\right)\right).$$
Note that the map $a_{11} \circ i$ splits as a sum $b_0 \oplus b_1,$ where $b_0\in \Auttmf(\sphere^{10}\otimes \tmf)$ and $b_1\in \Maps_\sphere(\sphere^{10},\Sigma^2C(\alpha_1)\otimes \tmf).$ Since $\tmf^{-3}(\Sigma^2C(\alpha_1))=0$ by an Atiyah--Hirzebruch spectral sequence, it suffices to show that $b_0^*$ is the identity on $\tmf^{-3}(-)$.
We have reduced to studying the $\tmf$-module automorphisms of $\sphere^{10}\otimes \tmf$ which arise from automorphisms of $\suspp \CP^5\otimes \tmf$ of the form $w^{-1}v$ for $v,w$ satisfying conditions $(*)$.
Next, we partially compute the set of all nullhomotopies of $V_{\tmf}\:\suspp \CP^5\to bgl_1\tmf$. This set is a torsor for
$$G:=\pi_0\operatorname{Maps}_{\sphere}(\suspp\CP^5,\Sigma^{-1}bgl_1\tmf)=\pi_0\operatorname{Maps}_{\sphere}(\suspp\CP^5,gl_1\tmf).$$
Fix one nullhomotopy $v$ of $V_{\tmf}$ which satisfies the hypotheses of Theorem~\ref{thm:well_defined}.
We define a group homomorphism $h\: G \to \pi_0\operatorname{Aut}_{\tmf}(\suspp \CP^5 \otimes \tmf)$ by
\begin{align*} g&\mapsto \left( \suspp\CP^5 \otimes \tmf \xrightarrow{gv} \Th{\CP^5}{-V}\otimes \tmf \xrightarrow{ v^{-1}} \suspp \CP^5\otimes \tmf\right)
\end{align*}
and a subgroup $G' := \{ g \in G \, | \, gv \text{ satisfies conditions $(*)$ }\} \subset G.$ This induces a group homomorphism
$$h':=\Big(G'\hookrightarrow G \xrightarrow{h} \pi_0\Auttmf(\suspp\CP^5 \otimes \tmf) \xrightarrow{\tilde\pi} \pi_0\Auttmf(\sphere^{10} \otimes \tmf) \Big),$$ where $\tilde\pi$ takes an automorphism $a$ of $\suspp\CP^5 \otimes \tmf$ to the component $b_0\in \pi_0\Auttmf(\sphere^{10}\otimes \tmf).$
To show that
Diagram~\ref{diag:comms2} commutes after applying $\tmf^{-3}(-)$, it suffices to show that $h'$ is the zero map. We show that $G'$ is a subgroup of $\Z/3 \oplus \Z_{(3)}$. Since $\pi_0\Auttmf(\sphere^{10} \otimes \tmf) \simeq \Z_{(3)}^\times$ and $\Z/3 \oplus \Z_{(3)}$ admits no nontrivial maps to $\Z_{(3)}^\times,$ this suffices.
Given a basepoint for $\CP^5$, we split the zero cell of $\suspp\CP^5$. Requiring a given orientation to lift to the canonical $\HZ$-orientation determines the nullhomotopy on $\sphere^0$. Therefore we have that $G' \subset \pi_0 \operatorname{Maps}_{\operatorname{Sp}}(\susp\CP^5,gl_1\tmf).$ We compute this group via an Atiyah--Hirzebruch spectral sequence shown in Figure~\ref{fig:gl1_CP5} below.
\begin{figure}[h]
\centering
\textbf{Nonzero terms on the $E_2$-page of the Atiyah--Hirzebruch spectral sequence $H^p(\susp \CP^5; \pi_{-q}gl_1\tmf) \implies \pi_{p+q}\operatorname{Maps}_{\sphere}(\susp\CP^5,gl_1\tmf)$. }\par\medskip
\begin{tikzpicture}
\matrix (m) [matrix of math nodes,
nodes in empty cells,nodes={minimum width=2.5ex,
minimum height=2.5ex},
column sep=.1ex,row sep=.1ex]{
& & & & & & & & & & & & p= \\
& \node[circle,draw,yscale=.2,xscale=.2] {}; \node[star,star points=7,star point ratio=2,minimum height=.1cm,minimum width=.1cm,draw] {}; & &\node[rectangle,draw,yscale=.2,xscale=.2]{}; & \node[circle,draw,yscale=.2,xscale=.2] {}; & & & & \node[circle,draw,yscale=.2,xscale=.2] {}; & & & \node[diamond,draw,yscale=.2,xscale=.2] {}; & 10 \\
& & & & & & & & & & & & 9 \\
& \node[circle,draw,yscale=.2,xscale=.2] {}; & &\node[rectangle,draw,yscale=.2,xscale=.2]{};\node[star,star points=7,star point ratio=2,minimum height=.1cm,minimum width=.1cm,draw] {}; & \node[circle,draw,yscale=.2,xscale=.2] {}; & & & & \node[circle,draw,yscale=.2,xscale=.2] {}; & & & \node[diamond,draw,yscale=.2,xscale=.2] {}; & 8 \\
& & & & & & & & & & & & 7 \\
& \node[circle,draw,yscale=.2,xscale=.2] {}; & &\node[rectangle,draw,yscale=.2,xscale=.2]{}; & \node[circle,draw,yscale=.2,xscale=.2] {}; & & & & \node[circle,draw,yscale=.2,xscale=.2] {}; & & & \node[diamond,draw,yscale=.2,xscale=.2] {}; & 6 \\
& & & & & & & & & & & & 5 \\
& \node[circle,draw,yscale=.2,xscale=.2] {}; & &\node[rectangle,draw,yscale=.2,xscale=.2]{}; & \node[circle,draw,yscale=.2,xscale=.2] {}; & & & & \node[circle,draw,yscale=.2,xscale=.2] {}; & & & \node[diamond,draw,yscale=.2,xscale=.2] {}; & 4 \\
& & & & & & & & & & & & 3 \\
& \node[circle,draw,yscale=.2,xscale=.2] {}; & &\node[rectangle,draw,yscale=.2,xscale=.2]{}; & \node[circle,draw,yscale=.2,xscale=.2] {}; & & & & \node[circle,draw,yscale=.2,xscale=.2] {}; & & & \node[diamond,draw,yscale=.2,xscale=.2] {}; & 2 \\
\quad\strut q=& -10 & -9 & -8& -7& -6&-5 &-4 &-3 &-2 &-1 &0\strut \\
};
\end{tikzpicture}\caption{A circle indicates a non-zero three-torsion group; a square a non-zero torsion-free group; and a diamond a group $\Z_{(3)}^\times$. The stars in bidegree $(-10,10)$ and $(-8,8)$ indicate terms along the $p+q=0$ line which contribute.}\label{fig:gl1_CP5}\end{figure}
Thus, there is an extension of $\Z_{(3)}$-modules
$$0 \to \Z/3 \to \pi_0 \operatorname{Maps}_{\operatorname{Sp}}(\susp\CP^5,gl_1\tmf) \to \Z_{(3)} \to 0.$$
Since $\operatorname{Ext}^1_{\Z_{(3)}}(\Z_{(3)},\Z/3)=0$, the group is $\Z_{(3)}\oplus \Z/3.$
\end{proof}
\subsecl{The invariant $\rho$ separates Chern classes for rank $3$ bundles on $\CP^5$}{subsec:rho_works}
Our next goal is to show that the invariant $\rho$ defined by Theorem~\ref{thm:well_defined} distinguishes vector bundles of rank $3$ on $\CP^5$ with the same Chern classes. Recall that $\pi_{10}\BUc\simeq \Z/3$ acts on $[\CP^5,\BUc]$ as in Construction~\ref{const:action_Z3}. Given $(V,\sigma) \in [\CP^5,\BUc]\times \pi_{10}\BUc,$
$$(V,\sigma) \, \mapsto \, \sigma V:= \Big( \CP^5 \xrightarrow{Q} \CP^5 \vee S^{10}\xrightarrow{V\vee \sigma} \BUc\Big).$$
Fix $a_1,a_2,$ and $a_3$ with $a_1\equiv 0\pmod 3$. By Theorem~\ref{classification}, this action restricts to a transitive one on each set
$$\V_{a_1,a_2,a_3}=\{ V\: \CP^5 \to \BUc \, | \, c_i(V)=a_i\}/ \sim,$$ which is free if and only if $a_2\equiv 0 \pmod 3$.
By Theorem~\ref{thm:tmf_class_exists} we have a Thomification isomorphism
$$t\: \pi_{10}(\BUc) \to \pi_{10}(\Sigma^{-3}\tmf),$$
defined relative to an orientation for a generator of $\pi_{10}\BUct$. More precisely, let $\sigma \in \pi_{10}\BUct$ and take an orientation $v_0$ for $\sigma$ satisfying condition $(*)$ from Definition~\ref{def:star} with respect to the splitting $\suspp S^{10}\simeq \sphere^0 \oplus \sphere^{10}$. Then
\begin{equation}\label{def:t}t(\sigma):=\tilde\rho\circ \Thom_{v_0}(\sigma),\end{equation}
with $\Thom_{v_0}$ as in Definition~\ref{def:thom_sigma}.
\begin{conv} We write $v_0$ for both the nullhomotopy of $\sigma\: \suspp S^{10} \to bgl_1\sphere$ and the associated nullhomotopy of $bgl_11\circ \sigma$.
\end{conv}
Our main goal in this subsection is to prove:
\begin{prop}\label{prop:rho_works} Let $V$ be a rank $3$ vector bundle on $\CP^5$ with $c_1\equiv 0 \pmod 3$ and $c_2 \equiv 0 \pmod 3$. Then $\rho(\sigma V)= \rho(V)+t(\sigma).$
\end{prop}
This implies that the invariant $\rho$ separates Chern classes as follows.
\begin{cor}\label{cor:rho_works} Let $V_1,$ $V_2$ be rank $3$ vector bundles on $\CP^5$ with the same Chern classes, such that $$c_1(V_1)=c_1(V_2)\equiv 0 \pmod 3, \,\,\,\,\,\,c_2(V_1)=c_2(V_2)\equiv 0 \pmod 3.$$
Then $\rho(V_1)=\rho(V_2)$ if and only if $V_1$ and $V_2$ are represented by homotopic maps to $BU(3)$, i.e. if and only if $V_1$ and $V_2$ are topologically equivalent.
\end{cor}
\begin{proof}[Proof of Corollary~\ref{cor:rho_works} assuming Proposition~\ref{prop:rho_works}] Since $\pi_{10}(\BUc)$ acts transitively, there is some element $\sigma \in \pi_{10}(\BUc)$ such that $$V_1 = \sigma V_2 \text{ and }\rho(V_1)=\rho(V_2)+t(\sigma).$$
Since $t$ is an isomorphism, $\rho(V_1)=\rho(V_2)$ if and only if $\sigma=0$ if and only if $V_1\simeq V_2$.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:rho_works}]
Consider the diagram
\begin{equation}\label{eq:wedge_thom}
\begin{tikzcd}[row sep=2em]
\suspp \CP^5\ar[rr,"\sigma V" above] \ar[d,"\suspp Q" left]
& & bgl_1\tmf . \\
\suspp \CP^5 \oplus \sphere^{10}, \ar[urr,"\,\,\,\,\,V \oplus \sigma" below]
\end{tikzcd}
\end{equation}
where $Q\: \CP^5 \to \CP^5 \vee S^{10}$ is as in Construction~\ref{const:action_Z3}. By Equation~\ref{eq:sum}, $\suspp \CP^5 \simeq \suspp\HP^2 \oplus \Sigma^2C(\alpha_1)\oplus \sphere^{10}$. Under this identification, $$\suspp Q=1 \oplus 1 \oplus \Delta \: \suspp\HP^2 \oplus \Sigma^2C(\alpha_1)\oplus \sphere^{10} \to \suspp\HP^2 \oplus \Sigma^2C(\alpha_1)\oplus (\sphere^{10} \oplus \sphere^{10}).$$
Let $X_1':=\Sigma^2C(\alpha_1)\oplus (\sphere^{10}\oplus \sphere^{10})$. We obtain an orientation $v'$ of $V \oplus \sigma$ by summing the $j$-orientation of $(V\oplus \sigma|_{X_1'})_{\tmf}$ with any nullhomotopy of $(V|_{\suspp \HP^2})_{\tmf}$ that lifts the canonical $\HZ_{(3)}$-orientation. By construction, $v'$ satisfies $(*)$. This induces a nullhomotopy $v$ of $(\sigma V)_{\tmf}$ and a nullhomotopy $\bar v$ of $V$, both of which satisfy condition $(*)$. Thus we get a commuting diagram of $\tmf$-modules:
\begin{center}
\begin{tikzcd}
& & \Sigma^{-3}\tmf
\\
(\CP^5)^{-\sigma V}\otimes \tmf \ar[r,"\operatorname{Th}(Q)"]\arrow[urr, "\,\,(\sigma V)^*\tilde \rho"above, bend left=10] & (\CP^5 \oplus \sphere^{10})^{-(V\oplus \sigma)}\otimes \tmf \arrow[ur,"(V \oplus \sigma)^*\tilde\rho"]
\\
\suspp \CP^5 \otimes \tmf \ar[r,"Q"]\ar[u,"v^{-1}"] & (\suspp \CP^5 \oplus \sphere^{10})\otimes \tmf \ar[u,"(v')^{-1}"]
,\end{tikzcd}
\end{center}
where we suppress tensoring with $\tmf$ from the horizontal arrows.
Below, the pushout of spaces on the left induces the diagram of Thom spectra on the right:
\begin{equation*}
\begin{tikzcd}
* \ar[r] \ar[d]& S^{10} \ar[d]
& & \sphere^0 \ar[r]\ar[d] & (S^{10})^{-\sigma}\ar[d] \\
\CP^5\ar[r]& \CP^5 \vee S^{10}
& & (\CP^5)^{-V} \ar[r]& (\CP^5 \vee S^{10})^{-V\oplus \sigma}.
\end{tikzcd}
\end{equation*}
The $j$-orientation for $\sigma_\sphere$ gives an equivalence \begin{equation}\label{eq:equiv_sum}(\CP^5 \vee S^{10})^{-V \oplus \sigma} \simeq (\CP^5)^{-V} \oplus \sphere^{10}.\end{equation} The nullhomotopy $v'|_{\sphere^{10}}$ of $\sigma_{\tmf}$ extends the $j$-orientation. So, using the identification \eqref{eq:equiv_sum}, we have that $\Thom(V \oplus \sigma)^*\tilde \rho=\Thom(V)^*\tilde \rho \oplus t(\sigma)$.
Thus we get the homotopy commutative Diagram~\eqref{diag:commutes7} of $\tmf$-modules below:
\begin{equation}\label{diag:commutes7}
\begin{tikzcd}[column sep = 3em]
(\CP^5)^{-\sigma V}\otimes \tmf\ar[rrr, "\Thom(\sigma V)^*\tilde \rho" above, bend left]
& \big((\CP^5)^{-V} \oplus \sphere^{10}\big)\otimes \tmf \ar[rr,"\Thom(V)^*\tilde \rho\oplus t(\sigma)"] & &\Sigma^{-3}\tmf\\
\suspp \CP^5\otimes \tmf \ar[r,"Q"]\ar[u,"v^{-1}"] & \big(\suspp \CP^5 \oplus \sphere^{10}\big)\otimes \tmf \ar[u,"{\bar v}^{-1}\oplus 1" right] \\
\sphere^{10}\otimes \tmf \ar[uurrr, bend right=45, "\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\rho(V)+t(\sigma)" below]\ar[u,"i"]\ar[r,"\Delta\otimes \tmf"] & \big(\sphere^{10} \oplus \sphere^{10}\big)\otimes \tmf\ar[u,"i\oplus 1" right] & & .
\end{tikzcd}
\end{equation}
Comparing the two outer paths from the lower left-hand corner to $\Sigma^{-3}\tmf$ gives the result.
\end{proof}
\subsecl{Computing $\rho$ on certain sums of line bundles}{subsec:sum_line_bundles}
Let $\rho$ be as defined by Theorem~\ref{thm:well_defined}. For $L$ a line bundle, we define
$\rho(L):=\rho(L\oplus \underline{\C}^2)$.
\begin{lem}\label{lem:sum_line_bundles}
Suppose that $\O(a_1), \mathcal O(a_2)$ and $\mathcal O(a_3)$ are line bundles on $\CP^5$ with $a_i\equiv 0 \pmod 3$. Then
$\rho\left(\mathcal O(a_1)\oplus \mathcal O(a_2) \oplus \mathcal O(a_3)\right)=\rho(\O(a_1))+\rho(\O(a_2))+\rho(\O(a_3)) \in \Z/3.$
\end{lem}
This immediately implies:
\begin{cor}\label{cor:three_times_bundle} Let $a$ be an integer divisible by $3$. Then $\rho(\O(a)^{\oplus 3}) = 0.$
\end{cor}
\begin{proof}[Proof of Lemma~\ref{lem:sum_line_bundles}] Let $V:=\oplus_{i=1}^3\O(a_i)$. Let $\tilde V$ be the bundle on $(\CP^5)^{\times 3}$ given by $$\tilde V:=\oplus_{i=1}^3p_i^*\O(a_i),$$ where $p_i$ is the $i$-th projection. We can factor $V\: \CP^5 \to \BUc$ as follows
$$V\: \CP^5 \xrightarrow{\Delta} (\CP^5)^{\times 3} \xrightarrow{\tilde V} \BUc$$
and naturally identify $\Th{(\CP^5)^{\times 3}}{-\tilde V} \simeq \otimes_i\Th{\CP^5}{-\O(a_i)}.$ Using $\tmf$-orientations satisfying $(*)$ \footnote{See Definition~\ref{def:star}.} for each $\O(a_i)$, we get a diagram of $\tmf$-modules:
\begin{center}
\begin{tikzcd}
\Th{\CP^5}{-V}
\ar[r,"\op{Th}(\Delta)"]
& \otimes_i\Th{\CP^5}{-\O(a_i)} \ar[r,"\op{Th}(\tilde V)"]
&\BUct \ar[r,"\tilde \rho"]
& \Sigma^{-3}\tmf \\
\suspp\CP^5 \ar[r,"\suspp \Delta"] \ar[u,"\simeq_{\tmf}"]
& (\suspp \CP^5)^{\otimes 3} \ar[u,"\simeq_{\tmf}"]\\
\sphere^{10}\ar[u] \ar[r,"\Sigma^\infty \Delta"] & (\sphere^{10})^{\otimes 3}\ar[u]\\
\sphere^{10}\ar[uuurrr,dashed,bend right=30, "\rho(V)" below]\ar[u]\ar[r,"1 \oplus 1 \oplus 1",dashed]& \sphere^{10}\oplus \sphere^{10}\oplus \sphere^{10}\ar[u]\ar[uuurr,dashed,"\oplus_i\rho(\O(a_i))\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\," above]
\end{tikzcd}
\end{center}
where all terms in the diagram are implicitly tensored with $\tmf$ and the maps marked $\simeq_{\tmf}$ are $\tmf$-Thom isomorphisms.
The diagram is homotopy commutative and comparing the dashed arrows proves the Lemma.\end{proof}
While this section provides some computations of $\rho$, we do not have a general formula. Indeed, it is unclear what a formula for $\rho$ should look like, since $\rho$ cannot be computed from Chern classes. Some inspiration can be drawn from \cite{AR}, where Atiyah and Rees show that the $\alpha$-invariant of a rank $2$ bundle on $\CP^3$ can be computed as a holomorphic semi-characteristic \cite[Theorem 4.2]{AR}, provided we choose a holomorphic representative for the topological class of the bundle. This leads to the following question:
\begin{q} For $V$ an algebraic vector bundle on $\CP^5$, is there some description of the invariant $\rho(V)$ in terms of sheaf cohomology of $V$?
\end{q}
\subsecl{A $3$-torsion $\tmf$-valued invariant for rank $2$ bundles}{subsec:rank_2}
The homotopy groups of $BU(3)$ through degree $10$ are fairly sparse, so a complete analysis of $[\CP^5,BU(3)]$ was possible. This allowed us to classify rank $3$ bundles on $\CP^5$ by first enumerating such bundles and second defining an invariant to distinguish them.
While unraveling the structure of $\pi_*BU(3)$, we began to suspect that $\rho$ may also be interesting in the case of rank $2$ bundles on $\CP^5$.
\begin{prop}\label{prop:rank2} The class $\tilde \rho\: \sk^{26}\BUct \to \Sigma^{-3}\tmf$ factors through $\sk^{26}\Th{BU(2)_{\coz}}{-\gamma_2}.$ Moreover:
\begin{enumerate}
\item[a.] The induced map $\pi_{10}BU(2)_{\coz} \to \pi_{10}\Sigma^{-3}\tmf$ given by Thomifying with respect to $-\gamma_2$ followed by $\tilde \rho$ is a bijection.
\item[b.] The invariant $V \mapsto \rho(V)$ distinguishes $3$-local equivalence classes of rank $2$ vector bundles on $\CP^5$ with fixed $c_1,c_2$ where additionally $c_1\equiv 0\pmod 3$.
\end{enumerate}
\end{prop}
\begin{proof}
For (a), recall Diagram~\ref{eq:alpha12}. Note that the unstable generator for $\pi_4BU(3)$ factors through $BU(2)$, so Theorem~\ref{thm:tmf_class_exists} shows that the image of a generator for $\pi_{10}BU(2)_{\coz}$ is nonzero. Therefore it suffices to check that $\pi_{10}BU(2)\simeq \Z/3$. This is classical: since $BSU(2)\simeq BS^3$, the homotopy in the relevant range is given in Figure~\ref{fig:homotopy_BU2}.
\begin{figure}[h]
\begin{tabular}{| M{1.2cm} | M{1cm} | M{1cm} | M{1cm} | M{1cm} |M{1cm} | M{1cm} | M{1cm} | M{1cm} | M{1cm} | M{1cm} | N}
\hline
& \textbf{$\pi_2$} & \textbf{$\pi_3$} & \textbf{$\pi_4$} & \textbf{$\pi_5$} & \textbf{$\pi_6$} & \textbf{$\pi_7$} & \textbf{$\pi_8$} & \textbf{$\pi_9$} & \textbf{$\pi_{10}$} \\
\hline
& & & & & & & & &
\\[-8pt]
$BU(2)$ &$\Z$ & $0$ &$\Z$ &$\Z/2$ &$\Z/2$ &$\Z/12$ &$\Z/2$ &$\Z/2$ &$\Z/3$ \\[2pt]
\hline
\end{tabular}\caption{Homotopy of $BU(2)$}\label{fig:homotopy_BU2}
\end{figure}
Since $\CP^5$ is even, only even $3$-local homotopy gives rise to $3$-local invariants (odd $3$-local homotopy contributes to constraints on possible Chern classes). Therefore an argument as in Remark~\ref{rmk:top_cell} shows that the $\pi_{10}BU(2)$-action as in Construction~\ref{const:action_Z3} is the only source of $3$-local invariants beyond Chern classes; for rank $2$ bundles on $\CP^5$ with $c_1\equiv 0 \pmod 3$, these bundles are detected by $\rho$.
\end{proof}
\begin{qs}\label{rmk:more_needed_BU2} Proposition~\ref{prop:rank2} opens up several avenues of inquiry:
\begin{itemize}
\item For which $c_1,c_2\in \Z$ is the action of $\pi_{10}BU(2)$ nontrivial? In other words, for which Chern data is the invariant $\rho$ of rank $2$ bundles on $\CP^5$ nontrivial?
\item What is the $2$-local enumeration of rank $2$ bundles on $\CP^5$ with fixed Chern data? Figure~\ref{fig:homotopy_BU2} shows that there is significant $2$-local data to analyze.
\end{itemize}
\end{qs}
\bibliographystyle{abbrv}
\section{From toric varieties to toric schemes}
During the last forty years a huge amount of work on toric varieties was and still is published. Their theory was generalised in several directions, and this often led to a better understanding of classical toric varieties. However, the generalisation that seems to be the most natural and the most important -- the \textit{study of toric varieties from a scheme-theoretical point of view} -- was never actually carried out. It is clear that to do this one has to be able to make arbitrary base changes. Hence, instead of considering toric varieties over an algebraically closed field (or, as often done, over the field of complex numbers), one needs to study \textit{toric schemes,} that is ``toric varieties over arbitrary base rings''. Special cases of this generalisation were mentioned briefly in \cite[\textsection 4]{dem} (for regular fans and mainly over the ring of integers) and \cite[IV.3]{kkms} (over discrete valuation rings). But unfortunately later authors seemed to ignore this, and hence the knowledge of toric schemes is very small compared to the one of toric varieties.
Besides yielding a better understanding of the geometry of toric varieties, there are concrete applications of the above generalisation, as the following remark shows.
\begin{no}
Let $X$ be the toric variety over an algebraically closed field $K$ associated with a fan $\Sigma$. A fundamental question in algebraic geometry is then whether the Hilbert functor ${\rm Hilb}_{X/K}$ of $X$ over $K$ is representable, i.e. whether the Hilbert scheme of $X$ exists (cf.~\cite{fga}). If $X$ is projective, then this is indeed the case and follows from Grothendieck's more general result \cite[Th\'eor\`eme 3.1]{fga}. However, toric varieties are not necessarily projective, and in general it is not known whether their Hilbert schemes exist. Studying ${\rm Hilb}_{X/K}$ amounts to studying quasicoherent sheaves on the base change $X\otimes_KR$ for every $K$-algebra $R$, and it turns out that $X\otimes_KR$ is the same as the toric scheme over $R$ associated with $\Sigma$. Hence, in order to study Hilbert functors of toric varieties \textit{it is necessary to study toric schemes over more general bases than just over algebraically closed fields.}
\end{no}
The development of a theory of toric schemes was begun in the PhD Thesis \cite{diss}, and its contents were refined and extended in \cite{cof}, \cite{ts1}, \cite{ts2} and \cite{ts3}. Here we give an overview of the most important results and refer the reader to the aforementioned sources for a more extensive treatment including proofs.
\section{The geometry of toric schemes}
We start by briefly describing the construction of toric schemes from fans. In \cite{ts1}, toric schemes are obtained as a special case of the more general construction of schemes from so-called openly immersive projective systems of monoids (also yielding the Cox schemes introduced below).
\smallskip
\noindent\textit{$\bullet$\quad From now on let $V$ be an $\mathbbm{R}$-vector space of finite dimension $n$, let $N$ be a $\mathbbm{Z}$-structure on $V$ (i.e.~a subgroup of rank $n$ of the additive group underlying $V$ with $\langle N\rangle_{\mathbbm{R}}=V$), and let $M\mathrel{\mathop:}= N^*$ denote the dual of $N$ which is a $\mathbbm{Z}$-structure on the dual $V^*$ of $V$.}
\smallskip
An \textit{$N$-polycone (in $V$)} is the set of $\mathbbm{R}$-linear combinations with coefficients in $\mathbbm{R}_{\geq 0}$ of a finite subset of $N$, and an $N$-polycone is called \textit{sharp} if it does not contain a line. If $\sigma$ is an $N$-polycone then a \textit{face of $\sigma$} is a set of the form $\sigma\cap\ke(u)$ for some $u\in\sigma^{\vee}\cap M$ (where $E^{\vee}\mathrel{\mathop:}=\{v\in V^*\mid v(E)\subseteq\mathbbm{R}_{\geq 0}\}$ for a subset $E\subseteq V$). The set of faces of a (sharp) $N$-polycone is a finite set of (sharp) $N$-polycones. An \textit{$N$-fan (in $V$)} is a finite set $\Sigma$ of sharp $N$-polycones that is closed under taking faces and such that the intersection of two cones in $\Sigma$ is a common face of both of them. By means of the relation ``$\tau$ is a face of $\sigma$'', denoted by $\tau\preccurlyeq\sigma$, we consider an $N$-fan as an ordered set.
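To make the face definition concrete, the following Python sketch (an illustrative brute-force computation, not part of the original text; the function name and the search bound are our own choices) enumerates the faces of the sharp $N$-polycone $\sigma=\operatorname{cone}((1,0),(0,1))$ in $N=\mathbbm{Z}^2$, recording each face $\sigma\cap\ke(u)$ by the set of generators of $\sigma$ that $u$ kills (for a cone whose listed generators are its extreme rays, this set determines the face):

```python
from itertools import product

def faces(gens, box=3):
    """Faces of a 2-dimensional polycone with the given integer generators,
    computed as sigma ∩ ker(u) for dual lattice points u in sigma^v ∩ M
    (searched in a bounded box). Each face is recorded by the set of
    generators it contains."""
    dual_pts = [u for u in product(range(-box, box + 1), repeat=2)
                if all(u[0]*g[0] + u[1]*g[1] >= 0 for g in gens)]
    # the face cut out by u is spanned by the generators that u kills
    return {frozenset(g for g in gens if u[0]*g[0] + u[1]*g[1] == 0)
            for u in dual_pts}

quadrant = [(1, 0), (0, 1)]
for f in sorted(faces(quadrant), key=len):
    print(sorted(f))
```

For the first quadrant this recovers the expected four faces -- the zero cone, the two rays, and $\sigma$ itself -- illustrating that the set of faces of a sharp $N$-polycone is a finite set of sharp $N$-polycones.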
\smallskip
\noindent\textit{$\bullet$\quad From now on let $\Sigma$ be an $N$-fan in $V$ and let $R$ be a ring\/\footnote{By a ring, group or monoid we always mean a commutative ring, group or monoid, respectively, and by an algebra we always mean a commutative, unital and associative algebra.}.}
\smallskip
If $\sigma\in\Sigma$ then $\sigma^{\vee}\cap M$ is a torsionfree, cancellable, finitely generated submonoid of $M$, and if moreover $\tau\preccurlyeq\sigma$ then $\sigma^{\vee}\cap M$ is a submonoid of $\tau^{\vee}\cap M$. Taking spectra of algebras of monoids over $R$ and setting $X_{\sigma}(R)\mathrel{\mathop:}=\spec(R[\sigma^{\vee}\cap M])$ for $\sigma\in\Sigma$ we get an inductive system $(X_{\sigma}(R))_{\sigma\in\Sigma}$ of $R$-schemes over $\Sigma$. Its inductive limit exists and is an $R$-scheme denoted by $X_{\Sigma}(R)\rightarrow\spec(R)$ and called \textit{the toric scheme over $R$ associated with $\Sigma$ (and $N$).} It can be understood as obtained by glueing $(X_{\sigma}(R))_{\sigma\in\Sigma}$ along $(X_{\sigma\cap\tau}(R))_{(\sigma,\tau)\in\Sigma^2}$.
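To make the affine pieces $X_{\sigma}(R)=\spec(R[\sigma^{\vee}\cap M])$ concrete, consider the classical example $\sigma=\operatorname{cone}((1,0),(1,2))$ in $N=\mathbbm{Z}^2$, so that $\sigma^{\vee}\cap M=\{(x,y)\in\mathbbm{Z}^2\mid x\geq 0,\ x+2y\geq 0\}$. The following Python sketch (an illustrative brute-force search in a bounded box, not part of the original text) computes the irreducible generators of this monoid:

```python
from itertools import product

def monoid_generators(cone_gens, box=6):
    """Irreducible elements (Hilbert basis) of the monoid sigma^v ∩ M for a
    sharp 2-dimensional cone sigma, found by brute force inside a box."""
    pts = [p for p in product(range(-box, box + 1), repeat=2)
           if p != (0, 0)
           and all(p[0]*g[0] + p[1]*g[1] >= 0 for g in cone_gens)]
    pset = set(pts)
    # p is reducible if it is a sum of two nonzero monoid elements
    return sorted(p for p in pts
                  if not any((p[0]-q[0], p[1]-q[1]) in pset for q in pts))

print(monoid_generators([(1, 0), (1, 2)]))  # → [(0, 1), (1, 0), (2, -1)]
```

Writing $u=(0,1)$, $v=(1,0)$, $w=(2,-1)$, the single relation $u+w=2v$ gives $R[\sigma^{\vee}\cap M]\cong R[u,v,w]/(uw-v^2)$, so $X_{\sigma}(R)$ is the affine quadric cone over $R$; for the smooth cone $\operatorname{cone}((1,0),(0,1))$ the same computation returns the two coordinate generators and $X_{\sigma}(R)=\spec(R[x,y])$.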
The above construction of toric schemes gives rise to a contravariant functor $X_{\Sigma}$ from the category of rings to the category of schemes together with a morphism $X_{\Sigma}\rightarrow\spec$. Moreover, the functor $X_{\Sigma}$ is compatible with base change in the following sense.
\begin{prop}\mbox{\rm(\cite[5.9]{ts1})}\label{basechange}
There is a canonical isomorphism $$X_{\Sigma}(\bullet)\cong X_{\Sigma}(R)\otimes_R\bullet$$ of contravariant functors from the category of $R$-algebras to the category of $R$-schemes.
\end{prop}
In particular, if $\mathfrak{a}\subseteq R$ is an ideal then $X_{\Sigma}(R/\mathfrak{a})$ is canonically identified with a closed subscheme of $X_{\Sigma}(R)$.
\smallskip
The first important question is now of course how the base ring affects the geometry of a toric scheme. It turns out that some basic properties hold for all toric schemes, making them a class of ``nice schemes''. More precisely, on use of the above base change property we get the following result.
\begin{prop}\mbox{\rm(\cite[5.8]{ts1})}
The $R$-scheme $X_{\Sigma}(R)\rightarrow\spec(R)$ is separated, quasicompact, flat, and of finite presentation; it is faithfully flat if and only if\/ $\Sigma\neq\emptyset$ or $R=0$.
\end{prop}
In contrast, many other basic properties are respected and reflected by $X_{\Sigma}$. The following statements are proved by reducing to the affine case, i.e.~$X_{\sigma}$, and then applying corresponding results about algebras of monoids (see e.g.~\cite{gil}).
\begin{prop}\mbox{\rm(\cite[5.8]{ts1})}
a) The scheme $X_{\Sigma}(R)$ is reduced, connected, or normal if and only if $R$ is so or\/ $\Sigma=\emptyset$; it is irreducible, or integral if and only if $R$ is so and\/ $\Sigma\neq\emptyset$.
b) If\/ $\Sigma\neq\emptyset$ then there is a bijection\/ $\mathfrak{p}\mapsto X_{\Sigma}(R/\mathfrak{p})$ from the set of minimal prime ideals of $R$ to the set of irreducible components of $X_{\Sigma}(R)$.
c) The scheme $X_{\Sigma}(R)$ is Noetherian if and only if $R$ is so or\/ $\Sigma=\emptyset$; it is Artinian if and only if $R$ is so and $n=0$, or $R=0$, or\/ $\Sigma=\emptyset$.
d) If\/ $\Sigma\neq\emptyset$ then $$\dim(R)+n\leq\dim(X_{\Sigma}(R))\leq(n+1)\dim(R)+n;$$ if $R$ is moreover Noetherian then $\dim(R)+n=\dim(X_{\Sigma}(R))$.
e) If $R$ is Noetherian, then $X_{\Sigma}(R)$ is equidimensional if and only if $R$ is so or\/ $\Sigma=\emptyset$.
\end{prop}
The above shows in particular that on general toric schemes \textit{no satisfactory theory of Weil divisors is available.} Since many results about toric varieties were proved by heavy use of Weil divisor techniques (see e.g.~\cite{cox}, \cite{ful}), one has to come up with new proofs in order to generalise these results to toric schemes.
\smallskip
Finally, as an example of a property depending on the fan but not on the base ring we consider properness. Its characterisation needs the notion of a \textit{complete} $N$-fan $\Sigma$, i.e.~an $N$-fan $\Sigma$ with $\bigcup\Sigma=V$. This result is well-known for toric varieties (see e.g.~\cite[2.4]{ful}), and proved on use of torus operations for toric schemes associated with regular fans in \cite[\textsection 4 Proposition 4]{dem}. Our proof for arbitrary fans avoids speaking of torus operations and relies only on the valuative criterion for properness and on properties of projections of fans proved in \cite{cof}.
\begin{prop}\mbox{\rm(\cite[6.12]{ts1})}
The $R$-scheme $X_{\Sigma}(R)\rightarrow\spec(R)$ is proper if and only if\/ $\Sigma$ is complete, or\/ $\Sigma=\emptyset$, or $R=0$.
\end{prop}
\section{Sheaves on toric schemes}
Generalising work by Cox \cite{cox} and Musta\c{t}\v{a} \cite{mus1} we introduce a notion of Cox ring (not to be confused with the one introduced in \cite{hukeel}) and describe quasicoherent modules on toric schemes in terms of graded modules over these rings. In order to do so we need to define some objects encoding the combinatorics of the fan $\Sigma$.
\smallskip
Let $\Sigma_1$ denote the set of $1$-dimensional cones in $\Sigma$. Every $\rho\in\Sigma_1$ has a unique minimal $N$-generator (i.e.~an $x\in N$ with $\rho=\mathbbm{R}_{\geq 0}x$ such that $rx\notin N$ for every $r\in]0,1[$), denoted by $\rho_N$. There is an exact sequence of groups $$M\overset{c}\longrightarrow\mathbbm{Z}^{\Sigma_1}\overset{a}\longrightarrow A\longrightarrow 0,$$ where $c(u)\mathrel{\mathop:}=(u(\rho_N))_{\rho\in\Sigma_1}$ for $u\in M$ and where $a$ is defined as the cokernel of $c$. Note that $c$ is a monomorphism if and only if $\Sigma$ is \textit{full,} i.e.~$\langle\bigcup\Sigma\rangle_{\mathbbm{R}}=V$. We denote by $(\delta_{\rho})_{\rho\in\Sigma_1}$ the canonical basis of $\mathbbm{Z}^{\Sigma_1}$ and we set $\alpha_{\rho}\mathrel{\mathop:}= a(\delta_{\rho})$ for $\rho\in\Sigma_1$.
Now, we denote by $S$ the polynomial algebra $R[(Z_{\rho})_{\rho\in\Sigma_1}]$ in indeterminates $(Z_{\rho})_{\rho\in\Sigma_1}$ over $R$, furnished with the $A$-graduation induced by $a$, i.e.~such that $\deg(Z_{\rho})=\alpha_{\rho}$ for $\rho\in\Sigma_1$. For $\sigma\in\Sigma$ we set $\widehat{Z}_{\sigma}\mathrel{\mathop:}=\prod_{\rho\in\Sigma_1\setminus\sigma_1}Z_{\rho}\in S$ (where $\sigma_1$ denotes the set of $1$-dimensional faces of $\sigma$). Finally we define a graded ideal $I\mathrel{\mathop:}=\langle\widehat{Z}_{\sigma}\mid\sigma\in\Sigma\rangle_S$.
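For orientation, we spell out these objects in the simplest case (our added example, using the fan of the projective line):

```latex
% Added example: Cox data for the fan of the projective line.
For $N=\mathbbm{Z}$, $V=\mathbbm{R}$ and
$\Sigma=\{\{0\},\mathbbm{R}_{\geq 0},\mathbbm{R}_{\leq 0}\}$ we have
$\Sigma_1=\{\rho_+,\rho_-\}$ with $(\rho_+)_N=1$ and $(\rho_-)_N=-1$, so that
$c(u)=(u,-u)$ and the cokernel of $c$ is $A\cong\mathbbm{Z}$ via
$(a,b)\mapsto a+b$; in particular $\alpha_{\rho_+}=\alpha_{\rho_-}=1$. Thus
$S=R[Z_{\rho_+},Z_{\rho_-}]$ carries the standard $\mathbbm{Z}$-graduation,
$\widehat{Z}_{\{0\}}=Z_{\rho_+}Z_{\rho_-}$,
$\widehat{Z}_{\mathbbm{R}_{\geq 0}}=Z_{\rho_-}$,
$\widehat{Z}_{\mathbbm{R}_{\leq 0}}=Z_{\rho_+}$, and therefore
$I=\langle Z_{\rho_+},Z_{\rho_-}\rangle_S$ is the usual irrelevant ideal.
```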
\smallskip
\noindent\textit{$\bullet$\quad From now on let $B\subseteq A$ be a subgroup.}
\smallskip
The $B$-graded $R$-algebra $S_B\mathrel{\mathop:}=\bigoplus_{\alpha\in B}S_{\alpha}$ obtained from $S$ by degree restriction to $B$ is called \textit{the $B$-restricted Cox ring over $R$ associated with $\Sigma_1$ (and $N$),} and its graded ideal $I_B\mathrel{\mathop:}= I\cap S_B$ is called \textit{the $B$-restricted irrelevant ideal over $R$ associated with $\Sigma$ (and $N$).} One can show that $I_B$ is generated by finitely many monomials.
To proceed we need to ``invert the monomials $\widehat{Z}_{\sigma}$ in the Cox ring'', and hence we have to ensure that some power of these monomials lies in $S_B$. This amounts to supposing that $B$ is \textit{big,} i.e.~it has finite index in $A$.
\smallskip
\noindent\textit{$\bullet$\quad From now on suppose that $B$ is big, so that there exists $m\in\mathbbm{N}_0$ with $\widehat{Z}_{\sigma}^m\in S_B$ for every $\sigma\in\Sigma$.}
\smallskip
For $\sigma\in\Sigma$ the $B$-graded ring of fractions $(S_B)_{\widehat{Z}_{\sigma}^m}$ is independent of the choice of $m$. Its component of degree $0$ is independent of the choice of $B$ and is denoted by $S_{(\sigma)}$. Moreover, for $\tau\preccurlyeq\sigma$ there is a canonical morphism of rings $S_{(\sigma)}\rightarrow S_{(\tau)}$ which is independent of $m$ and $B$. Taking spectra and setting $Y_{\sigma}(R)\mathrel{\mathop:}=\spec(S_{(\sigma)})$ for $\sigma\in\Sigma$ we obtain an inductive system $(Y_{\sigma}(R))_{\sigma\in\Sigma}$ of $R$-schemes over $\Sigma$. Its inductive limit exists and is an $R$-scheme denoted by $Y_{\Sigma}(R)\rightarrow\spec(R)$ and called \textit{the Cox scheme over $R$ associated with $\Sigma$ (and $N$).} It can be understood as obtained by glueing $(Y_{\sigma}(R))_{\sigma\in\Sigma}$ along $(Y_{\sigma\cap\tau}(R))_{(\sigma,\tau)\in\Sigma^2}$.
The above construction of Cox schemes gives rise to a contravariant functor $Y_{\Sigma}$ from the category of rings to the category of schemes together with a morphism $Y_{\Sigma}\rightarrow\spec$, and $Y_{\Sigma}$ is compatible with base change in the sense of (\ref{basechange}).
\smallskip
Cox schemes are closely related to toric schemes as follows. The morphism of groups $c:M\rightarrow\mathbbm{Z}^{\Sigma_1}$ induces morphisms of rings $R[\sigma^{\vee}\cap M]\rightarrow S_{(\sigma)}$ for $\sigma\in\Sigma$, and these induce a canonical morphism of contravariant functors $\gamma:Y_{\Sigma}\rightarrow X_{\Sigma}$. Then, we have the following result.
\begin{prop}\mbox{\rm(\cite[3.17]{ts2})}
The canonical morphism of contravariant functors $\gamma:Y_{\Sigma}\rightarrow X_{\Sigma}$ is an isomorphism if and only if\/ $\Sigma$ is full.
\end{prop}
Using the (non-canonical) procedure to consider a toric scheme associated with a non-full fan as a toric scheme associated with a full fan (\cite[5.10]{ts1}), it suffices from now on to study Cox schemes instead of toric schemes. (Note that this reduction demands a base change and is in general \textit{not available for toric varieties.})
\medskip
Now we are ready to explain how $B$-graded $S_B$-modules give rise to quasicoherent sheaves on $Y_{\Sigma}(R)$. We denote by ${\sf GrMod}^B(S_B)$ and ${\sf QCMod}(\mathscr{O}_{Y_{\Sigma}(R)})$ the categories of $B$-graded $S_B$-modules and of quasicoherent $\mathscr{O}_{Y_{\Sigma}(R)}$-modules. Moreover, for a $B$-graded $S_B$-module $F$ we denote by $F_{(\sigma)}$ the component of degree $0$ of the $B$-graded module of fractions $F_{\widehat{Z}_{\sigma}^m}=F\otimes_{S_B}(S_B)_{\widehat{Z}_{\sigma}^m}$, and for an $S_{(\sigma)}$-module $G$ we denote by $\widetilde{G}$ the $\mathscr{O}_{Y_{\sigma}(R)}$-module associated with $G$.
\begin{prop}\mbox{\rm(\cite[4.2]{ts2})}
There exists a unique functor $$\mathscr{S}_B:{\sf GrMod}^B(S_B)\rightarrow{\sf QCMod}(\mathscr{O}_{Y_{\Sigma}(R)})$$ with $\mathscr{S}_B(F)\!\upharpoonright_{Y_{\sigma}(R)}=\widetilde{F_{(\sigma)}}$ for every $\sigma\in\Sigma$ and every $B$-graded $S_B$-module $F$.
\end{prop}
Since $\mathscr{S}_B$ coincides locally with the canonical equivalence between modules and quasicoherent sheaves on affine schemes it is exact and commutes with inductive limits. Furthermore, denoting by $\bullet(\alpha)$ the functor of shifting degrees by $\alpha$, we can construct a right quasiinverse $$\Gamma_*^B(\bullet)\mathrel{\mathop:}=\bigoplus_{\alpha\in B}\Gamma\bigl(Y_{\Sigma}(R),\bigl(\bullet\otimes_{\mathscr{O}_{Y_{\Sigma}(R)}}\mathscr{S}_B(S_B(\alpha))\bigr)\bigr)$$ for $\mathscr{S}_B$, called \textit{the first total functor of sections associated with $\Sigma$ and $B$ over $R$.} Thus, we get the following generalisation of \cite[Theorem 1.1]{mus1}, itself a generalisation of \cite[Theorem 3.2]{cox}.
\begin{theorem}\label{surj}\mbox{\rm(\cite[4.22]{ts2})}
The functor $\mathscr{S}_B:{\sf GrMod}^B(S_B)\rightarrow{\sf QCMod}(\mathscr{O}_{Y_{\Sigma}(R)})$ is essentially surjective.
\end{theorem}
Next, we restrict our attention to ideals. A graded ideal $\mathfrak{a}\subseteq S_B$ is called \textit{$I_B$-saturated} if $\mathfrak{a}=\bigcup_{k\in\mathbbm{N}_0}(\mathfrak{a}:_{S_B}I_B^k)$. Let $\mathbbm{J}^{{\rm sat}}_B$ and $\widetilde{\mathbbm{J}}$ denote the sets of $I_B$-saturated graded ideals of $S_B$ and of quasicoherent ideals of $\mathscr{O}_{Y_{\Sigma}(R)}$, respectively. Then, $\mathscr{S}_B$ induces by exactness a map $\Xi_B:\mathbbm{J}^{{\rm sat}}_B\rightarrow\widetilde{\mathbbm{J}}$. The next result addresses the question of whether this map is surjective or injective. To get injectivity, besides being big, the subgroup $B$ must not be ``too big''. More precisely, $B$ is called \textit{small (with respect to $\Sigma$)} if it is contained in $\bigcap_{\sigma\in\Sigma}\langle\{\alpha_{\rho}\mid\rho\in\Sigma_1\setminus\sigma_1\}\rangle_{\mathbbm{Z}}$.
\begin{theorem}\label{bij}\mbox{\rm(\cite[4.27]{ts2})}
The map\/ $\Xi_B:\mathbbm{J}_B^{{\rm sat}}\rightarrow\widetilde{\mathbbm{J}}$ is surjective, and if $B$ is small then it is bijective.
\end{theorem}
An example of a subgroup that is big and small (and moreover well understood) is given in the following remark (cf.~\cite[V.5]{ewald}).
Consider a family $(U_{\sigma})_{\sigma\in\Sigma}$ of subsets of $V^*$ such that for every $\sigma\in\Sigma$ there exists a (not necessarily unique) $m_{\sigma}\in M$ with $U_{\sigma}=m_{\sigma}+\sigma^{\vee}$. Such a family is called a \textit{virtual polytope over\/ $\Sigma$} if $\tau\subseteq\ke(m_{\sigma}-m_{\tau})$ for all $\sigma,\tau\in\Sigma$ with $\tau\preccurlyeq\sigma$, and this condition is independent of the choice of the family $(m_{\sigma})_{\sigma\in\Sigma}$. There is a canonical structure of group on the set of virtual polytopes over $\Sigma$, and the set of virtual polytopes of the form $(m+\sigma^{\vee})_{\sigma\in\Sigma}$ is a subgroup. The corresponding quotient group is denoted by $\pic(\Sigma)$ and called \textit{the Picard group of\/ $\Sigma$.} It can be considered as the group of virtual polytopes over $\Sigma$ modulo $M$-rational translations.
The map $$(m_{\sigma}+\sigma^{\vee})_{\sigma\in\Sigma}\mapsto(m_{\rho}(\rho_N))_{\rho\in\Sigma_1}$$ yields a monomorphism from the group of virtual polytopes over $\Sigma$ to $\mathbbm{Z}^{\Sigma_1}$, and this induces a monomorphism $\pic(\Sigma)\rightarrowtail A$ by means of which we consider $\pic(\Sigma)$ as a subgroup of $A$. Then, $\pic(\Sigma)$ is small, and if $\Sigma$ is simplicial then $\pic(\Sigma)$ is big. Hence, it provides an example of a subgroup of $A$ to which (\ref{bij}) can be applied.
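Continuing our added toy example of the fan of the projective line, one can check smallness and bigness directly:

```latex
% Added example: a big and small subgroup for the fan of the projective line.
There $A\cong\mathbbm{Z}$ and $\alpha_{\rho_+}=\alpha_{\rho_-}=1$, so for every
$\sigma\in\Sigma$ the group
$\langle\{\alpha_{\rho}\mid\rho\in\Sigma_1\setminus\sigma_1\}\rangle_{\mathbbm{Z}}$
equals $\mathbbm{Z}=A$; hence $B=A$ is small, and being of index $1$ it is also
big. This is consistent with
$\pic(\Sigma)\cong\mathbbm{Z}\cong\pic(\mathbbm{P}^1_{\mathbbm{C}})$.
```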
Finally, since $\pic(\Sigma)\cong\pic(X_{\Sigma}(\mathbbm{C}))$ by \cite[Theorem VII.2.15]{ewald} we get back \cite[Corollary 3.9]{cox} as a special case.
\section{Cohomology on toric schemes}
Our results about quasicoherent sheaves in the last section reveal that toric schemes (or more precisely, Cox schemes) are very similar to projective schemes. Hence, we ask if there is a toric version of the Serre-Grothendieck correspondence (cf.~\cite[20.4.4]{bs}), relating cohomology of quasicoherent sheaves on a Cox scheme to graded local cohomology of $B$-graded $S_B$-modules with respect to the irrelevant ideal $I_B$. This is indeed the case.
\smallskip
First, we have to explain what we mean by graded local cohomology. We denote by $${}^B\Gamma_{I_B}:{\sf GrMod}^B(S_B)\rightarrow{\sf GrMod}^B(S_B)$$ the $B$-graded $I_B$-torsion functor. Its right derived cohomological functor is denoted by $({}^BH^i_{I_B})_{i\in\mathbbm{Z}}$ and called \textit{$B$-graded local cohomology with respect to $I_B$.} The reason for this clumsy notation is that the ungraded module underlying a graded local cohomology module of a graded module $F$ might not be the same as the local cohomology module of the ungraded module underlying $F$. (A sufficient condition for the two to agree is coherence of the graded ring $S_B$.)
Next, we introduce a variant of sheaf cohomology that is useful for our purpose. We define a functor $$\Gamma_{**}^B(\bullet):{\sf GrMod}^B(S_B)\rightarrow{\sf GrMod}^B(S_B),$$ called \textit{the second total functor of sections associated with $\Sigma$ and $B$ over $R$,} by setting $$\Gamma_{**}^B(\bullet)\mathrel{\mathop:}=\bigoplus_{\alpha\in B}\Gamma(Y_{\Sigma}(R),\mathscr{S}_B(\bullet(\alpha))).$$ Note that despite its name it is defined on the category ${\sf GrMod}^B(S_B)$. However, by (\ref{surj}) this is merely a technical point. The reason for two (in general different) total functors of sections is that the canonical morphism $$\mathscr{S}_B(\bullet)\otimes_{\mathscr{O}_{Y_{\Sigma}(R)}}\mathscr{S}_B(S_B(\alpha))\rightarrow\mathscr{S}_B(\bullet(\alpha))$$ is not necessarily an isomorphism. The right derived cohomological functor of $\Gamma_{**}^B(\bullet)$ is denoted by $(H^i_{**,B})_{i\in\mathbbm{Z}}$ and contains the usual sheaf cohomology as a direct summand.
To go on we need a certain behaviour of injectives in the category ${\sf GrMod}^B(S_B)$. Namely, the $B$-graded ring $S_B$ is said to have \textit{the ITR-property with respect to $I_B$} if every $B$-graded $I_B$-torsion $S_B$-module has an injective resolution whose components are $B$-graded $I_B$-torsion $S_B$-modules. This is fulfilled for example if $S_B$ is Noetherian (as a graded ring), and in particular if $R$ is Noetherian. Using this notion and imitating the corresponding proof in the projective case we arrive at the Toric Serre-Grothendieck Correspondence.
\begin{theorem}\mbox{\rm(\cite[4.14]{ts3})}
If $S_B$ has the ITR-property with respect to $I_B$, then there exist an exact sequence of functors $$0\longrightarrow{}^B\Gamma_{I_B}\longrightarrow{\rm Id}_{{\sf GrMod}^B(S_B)}\longrightarrow\Gamma_{**}^B\overset{\zeta_B}\longrightarrow{}^BH^1_{I_B}\longrightarrow 0$$ and a unique morphism of $\delta$-functors $$(\zeta^i_B)_{i\in\mathbbm{Z}}:\bigl(H^i_{**,B}\bigr)_{i\in\mathbbm{Z}}\longrightarrow\bigl({}^BH^{i+1}_{I_B}\bigr)_{i\in\mathbbm{Z}}$$ with $\zeta^0_B=\zeta_B$, and $\zeta^i_B$ is an isomorphism for every $i\in\mathbbm{N}$.
\end{theorem}
As an application we can prove a toric version of Serre's Finiteness Theorem.
\begin{prop}\mbox{\rm(\cite[4.16]{ts3})}
Let $F$ be a finitely generated $B$-graded $S_B$-module, and suppose that $\Sigma$ is complete and that $R$ is Noetherian. Then, the $R$-modules $H^i_{**,B}(F)_{\alpha}$ and ${}^BH^i_{I_B}(F)_{\alpha}$ are finitely generated for every $i\in\mathbbm{Z}$ and every $\alpha\in B$.
\end{prop}
Considering the fibres of a toric scheme, this allows us to define and investigate Hilbert functions of toric schemes, a task we would like to address in future research. Note that the above hypothesis of a complete fan $\Sigma$ can be achieved by the Completion Theorem (\cite{cof}, \cite[6.13]{ts1}).
\section{Introduction}
Massive black hole (MBH) binaries (MBHBs) are among the primary candidate
sources of gravitational waves (GWs) at mHz frequencies
~\cite{enelt, jaffe, uaiti, ses04, ses05}, the range probed by the space-based
{\it Laser Interferometer Space Antenna} ({\it LISA}, ~\cite{bender}).
Today, MBHs are ubiquitous in the nuclei of nearby galaxies ~\cite{mago}. If MBHs were
also common in the past (as implied by the notion that many distant galaxies
harbor active nuclei for a short period of their life), and if their host
galaxies experience multiple mergers during their lifetime, as dictated by
popular cold dark matter (CDM) hierarchical cosmologies, then MBHBs inevitably formed
in large numbers during cosmic history. MBHBs that are able to coalesce in less than
a then Hubble time give origin to the loudest GW signals in the Universe.
Provided MBHBs do not ``stall'', their GW driven inspiral will then
follow the merger of galaxies and protogalactic structures at high redshifts.
A low--frequency detector like {\it LISA} will be sensitive to GWs from coalescing binaries
with total masses in the range $10^3-10^6\,\,{\rm M_\odot}$ out to $z\sim 10-15$ ~\cite{hughes}.
The formation and evolution of MBHs has been investigated recently by several
groups in the framework of hierarchical clustering cosmology
~\cite{MHN01, VHM03, KBD04}. {\it LISA} detection rates, ranging from a few to a few
hundred per year, were derived in a number of papers ~\cite{uaiti, ses04, ses05, rw05}.
A comprehensive understanding of the details of MBH formation and
evolution is essential in assessing {\it LISA} detection efficiency
and in planning sensible data analysis strategies.
In two recent papers ~\cite{ses07a, ses07b}, we considered in detail two important
ingredients of the MBH formation route. (i)
{\it The nature and abundance of the first black hole seeds}.
Our understanding of seed black hole formation is
extremely poor. There are several proposed formation mechanisms resulting
in a broad spectrum of possible seed populations ~\cite{madres01, KBD04, bvr06, ln06}.
Following ~\cite{ses07a}, we investigate different physically and cosmologically
motivated seed formation routes, showing their imprint on the
expected MBHB coalescence rate and hence on {\it LISA} detection.
(ii) {\it Extreme gravitational recoils}. Recent relativistic numerical simulations of
merging spinning black hole binaries have shown that the remnant may get a kick of
the order of a few thousand km/s ~\cite{tm07, cetal07}, which is
likely to eject it even from the center of a giant elliptical
(escape velocity $\gtrsim$ 2000 km/s), with important astrophysical
implications ~\cite{mad04, merr04, vol07}. We incorporate the effect of extreme gravitational
recoils following the merger of highly spinning black holes in our hierarchical
models ~\cite{ses07b}, exploring its consequences on the GW detection side.
We highlight in this paper the main results reported in ~\cite{ses07a, ses07b},
focusing on their implications for {\it LISA} detections and assessing
{\it LISA} capability to place constraints on MBH formation scenarios, looking
for reliable diagnostics to discriminate between the different models. The plan
of the paper is as follows. In Section 2, we describe the general framework
of MBHB formation in hierarchical scenarios. We briefly discuss the issue of GW
detection with {\it LISA} in Section 3, and then in Section 4 and 5 we discuss
the impact of the seed black hole population and of the gravitational recoil prescription
on {\it LISA} observations. We summarize our findings in Section 6.
\section{Hierarchical models of black hole formation}
In the hierarchical framework of structure formation ~\cite{wr78, pb82}, MBHs grow starting from
pregalactic seed black holes formed at early times. MBH evolution then follows
the merging history of their host galaxies and dark matter halos. The merger process
would inevitably form a large number of MBHBs during cosmic history.
In our models we apply the extended Press \& Schechter (EPS) formalism ~\cite{PS74, LC93, ST99} to the hierarchical assembly
of dark matter halos, using a range of prescriptions for the evolution of the
population of MBHs residing in the halo centers. The halo hierarchy
is followed by means of Monte Carlo realizations
of the merger hierarchy. Each model is constructed tracing backwards the
merger hierarchy of 220 dark matter halos in the mass range
$10^{11}-10^{15}\,{\rm M_\odot}$ up to $z=20$ ~\cite{VHM03},
then populating the halos with seed black holes and following their
evolution to the present time. Nuclear activity is triggered by halo mergers:
in each major merger the more massive hole accretes gas until its mass scales
with the fifth power of the circular velocity of the host halo,
normalized to reproduce the observed local correlation
between MBH mass and velocity dispersion ($m_{\rm BH}-\sigma_*$ relation ~\cite{tr02}).
Gas accretion onto the MBHs is assumed to occur at a fraction of the Eddington rate.
In the boundaries given by this general framework, there is a certain freedom in the choice
of the seed masses, in the accretion prescription, and in the MBHB coalescence efficiency.
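As an illustrative sketch (our addition, with standard textbook numbers rather than values quoted in the text), Eddington-limited accretion makes a seed grow exponentially on the Salpeter timescale; the radiative efficiency `eps` and the Eddington ratio `f_edd` below are free parameters of the sketch.

```python
import math

# Eddington (Salpeter) timescale for radiative efficiency eps = 1, in years;
# this normalization is a standard textbook value, not taken from the paper.
T_EDD_YR = 4.5e8

def mass_after(m0, t_yr, f_edd=1.0, eps=0.1):
    """Mass (same units as m0) after accreting for t_yr years at a
    fraction f_edd of the Eddington rate with radiative efficiency eps."""
    return m0 * math.exp(f_edd * (1.0 - eps) / eps * t_yr / T_EDD_YR)

# e.g. a 150 Msun PopIII seed accreting steadily at the Eddington rate
grown = mass_after(150.0, 5e8)
```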
\subsection{MBHB dynamics}
During a galactic merger, the central MBHs initially share their fate with the host galaxy.
The merging is driven by dynamical friction, which has been shown to efficiently
merge the galaxies and drive the MBHs into the central regions of the newly formed
galaxy when the mass ratio of the satellite halo to the main halo is sufficiently
large ~\cite{kz05}. The efficiency of dynamical friction decays
when the MBHs get close and form a binary. In gas-poor systems, the subsequent
evolution of the binary may be largely determined by three-body interactions
with background stars
~\cite{bbr80}, leading to a long coalescence timescale.
In gas rich high redshift halos, the orbital evolution of the central MBH is
likely dominated by dynamical friction against the surrounding gaseous medium.
The available simulations ~\cite{E04, D06} show that the binary may shrink to
about parsec or slightly subparsec scale by dynamical friction against the gas,
depending on the gas thermodynamics. We have assumed here that, if a hard MBH
binary is surrounded by an accretion disc, it coalesces instantaneously owing
to interaction with the gas disc. If instead there is no gas readily available,
the binary will lose orbital energy to the stars, following the scheme
described in ~\cite{VHM03}.
Assuming efficient coalescence for the MBH pairs, for each of the 220 halos
all the coalescence events happening during the cosmic history
are collected. The outputs are then weighted
using the EPS halo mass function and integrated over the observable
volume shell at every redshift to obtain numerically the coalescence
rate of MBHBs as a function of black hole masses and redshift.
\section{Gravitational wave signal}
Full discussion of the GW signal produced by an inspiraling
MBHB can be found in ~\cite{ses05}, along with all the relevant references.
Here we just recall that a MBHB at (comoving) distance $r(z)$
with chirp mass ${\cal M}=m_1^{3/5}m_2^{3/5}/(m_1+m_2)^{1/5}$ ($m_1>m_2$ are the
two MBH masses) generates a GW signal with a characteristic strain given by:
\begin{equation}
h_c=\frac{1}{3^{1/2}\pi^{2/3}}\,\frac{G^{5/6}
{{\cal M}}^{5/6}}{c^{3/2} r(z)}\,f_r^{-1/6},
\label{eq1h_c}
\end{equation}
where $G$ and $c$ have their standard meaning.
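As a hedged illustration (our addition, not part of the paper), the characteristic strain above can be evaluated numerically; the binary parameters below are arbitrary examples, and the comoving distance is supplied by hand rather than computed from a cosmology.

```python
import math

# Illustrative numerical evaluation of the characteristic strain formula.
# All quantities are in SI units; r is the comoving distance in metres.
G = 6.674e-11          # gravitational constant [m^3 kg^-1 s^-2]
C = 2.998e8            # speed of light [m s^-1]
MSUN = 1.989e30        # solar mass [kg]
MPC = 3.086e22         # megaparsec [m]

def chirp_mass(m1, m2):
    """Chirp mass M = (m1 m2)^(3/5) / (m1 + m2)^(1/5), masses in kg."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def h_c(m1, m2, r, f_r):
    """Characteristic strain at rest-frame frequency f_r [Hz]."""
    pref = 1.0 / (3.0 ** 0.5 * math.pi ** (2.0 / 3.0))
    return (pref * G ** (5.0 / 6.0) * chirp_mass(m1, m2) ** (5.0 / 6.0)
            / (C ** 1.5 * r) * f_r ** (-1.0 / 6.0))

# e.g. an equal-mass 10^6 + 10^6 Msun binary at r = 6 Gpc emitting at 1 mHz
strain = h_c(1e6 * MSUN, 1e6 * MSUN, 6e3 * MPC, 1e-3)
```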
An inspiraling binary is then detected if the signal-to-noise ratio ($S/N$)
{\it integrated over the observation} is larger than a given
detection threshold, where the optimal $S/N$ is given by ~\cite{FH98}
\begin{equation}
S/N=\sqrt{ \int d\ln f \, \left[
\frac{h_c(f_r)}{h_{\rm rms}(f)} \right]^2}.
\label{eqSN}
\end{equation}
Here $f=f_r/(1+z)$ is the observed frequency corresponding to the
rest-frame frequency $f_r$, and the integral is performed over the
frequency interval swept by the inspiraling binary during the observation time.
Finally, $h_{\rm rms}=\sqrt{5fS_h(f)}$ is the effective rms noise of the
instrument; $S_h(f)$ is the one-sided noise spectral density, and the
factor $\sqrt{5}$ accounts for the random directions and
orientations of the wave; $h_{\rm rms}$ is obtained by adding in the instrumental
noise contribution (given e.g. by Larson's online sensitivity curve generator,
http://www.srl.caltech.edu/$\sim$shane/sensitivity) and the confusion noise from
unresolved galactic ~\cite{N01} and extragalactic
~\cite{FP03} WD--WD binaries. Notice that extreme mass-ratio inspirals
(EMRIs) could also contribute to the confusion noise in the mHz frequency range ~\cite{BC04}.
All the results shown in the following sections assume a {\it LISA} operation time of
3 years, a cut-off at $10^{-4}$ Hz in the instrumental sensitivity and a detection
integrated threshold of $S/N=5$ (equation~\ref{eqSN}).
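The $S/N$ integral above can be sketched numerically as follows (our addition). The functions `h_c_of_f` and `h_rms_of_f` are assumed to be supplied by the user; in particular, no attempt is made here to model the actual LISA sensitivity curve.

```python
import math

def snr(h_c_of_f, h_rms_of_f, f_min, f_max, n=1000):
    """Trapezoidal approximation of sqrt( int [h_c/h_rms]^2 dln f )
    between observed frequencies f_min and f_max [Hz]."""
    lf0, lf1 = math.log(f_min), math.log(f_max)
    dx = (lf1 - lf0) / (n - 1)
    # integrand sampled on a uniform grid in ln f
    ys = [(h_c_of_f(math.exp(lf0 + i * dx)) /
           h_rms_of_f(math.exp(lf0 + i * dx))) ** 2 for i in range(n)]
    integral = sum((ys[i] + ys[i + 1]) * dx / 2.0 for i in range(n - 1))
    return math.sqrt(integral)
```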
\section{Seed black hole population imprint}
In this section we investigate the influence of different seed population prescriptions
on {\it LISA} detections. In all the models described below, we assume non spinning
binaries; the coalescence remnant thus experiences only a mild recoil, that is never larger
than $\sim$ 250 km/s ~\cite{fhh04, baker06}. The effects of extreme gravitational recoil
associated to the coalescence of highly spinning MBHs will be separately explored
in Section 5.
\subsection{Description of the models}
Several scenarios have been proposed for seed black hole formation:
seeds of $m_{\rm seed}\sim $few$\times100\,{\rm M_\odot}$ can form as remnants of metal free (PopIII)
stars at redshift $\gtrsim20$ (as assumed by Volonteri, Haardt and Madau ~\cite{VHM03},
hereinafter VHM model), while intermediate--mass seeds ($m_{\rm seed}\sim10^5\,{\rm M_\odot}$) can be the
endproduct of the dynamical instabilities arising in massive gaseous protogalactic
disks in the redshift range $10\lesssim z \lesssim15$ (as investigated
by Koushiappas, Bullock and Dekel ~\cite{KBD04}, hereinafter KBD model;
or by Begelman, Volonteri and Rees ~\cite{bvr06}, hereinafter BVR model).
In the VHM model, pregalactic seed holes form with masses
$m_{\rm seed}\sim150\,\,{\rm M_\odot}$ as remnants of the first generation of massive
metal-free stars with $m_*>260\,\,{\rm M_\odot}$ that do not disappear as
pair-instability supernovae ~\cite{madres01}. We place them in isolation
within halos above $M_H=1.6\times 10^7\,\,{\rm M_\odot}$ collapsing
at $z=20$ from rare $>3.5\sigma$ peaks of the primordial density
field. While $Z=0$ stars with $40<m_*<140\,\,{\rm M_\odot}$ are also predicted
to collapse to MBHs with masses exceeding half of the initial stellar mass
~\cite{hw02}, the merger rate of MBHBs in the mass range relevant to {\it LISA}
observations is not very sensitive to the precise choice for the seed hole mass.
A different class of models assumes that MBH seeds form already massive.
In the KBD model, seed MBHs form from the low angular momentum tail of material in halos
with efficient molecular hydrogen gas cooling. MBHs with mass
\beq
m_{\rm seed}\simeq5\times10^4\,{\rm M_\odot}\left(\frac{M_H}{10^7\,{\rm M_\odot}}\right)\left(\frac{1+z}{18}\right)^{3/2}\left(\frac{\lambda}{0.04}\right)^{3/2}
\label{KBD:mbh}
\eeq
form in dark matter halos with mass
\beq
M_H \gtrsim 10^7\,{\rm M_\odot}\left(\frac{1+z}{18}\right)^{-3/2}\left(\frac{\lambda}{0.04}\right)^{-3/2}.
\eeq
We have fixed the free parameters in equation \ref{KBD:mbh} by requiring an acceptable match
with the luminosity function of quasars at $z<6$.
Here $\lambda$ is the so-called spin parameter, which is a measure of the angular
momentum of a dark matter halo $\lambda \equiv J |E|^{1/2}/G M_H^{5/2}$, where
$J$, $E$ and $M_H$ are the total angular momentum, energy and mass of the halo.
The angular momentum of galaxies is believed to have been acquired by tidal
torques due to interactions with neighboring halos. The distribution of spin
parameters found in numerical simulations is well fit by a lognormal
distribution in $\lambda$, with mean $\bar \lambda=0.04$
and standard deviation $\sigma_\lambda=0.5$ ~\cite{B01, vdB02}.
We have assumed that the MBH formation process proceeds
until $z\approx15$.
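As a hedged sketch (our addition), the seed-mass relation and the accompanying halo threshold can be coded directly; we read the redshift factors as $\bigl((1+z)/18\bigr)^{\pm 3/2}$, which makes both expressions equal their normalizations at $z=17$.

```python
# Masses in solar masses; lam is the halo spin parameter.
def kbd_seed_mass(m_halo, z, lam=0.04):
    """KBD seed mass for a halo of mass m_halo at redshift z."""
    return 5e4 * (m_halo / 1e7) * ((1.0 + z) / 18.0) ** 1.5 * (lam / 0.04) ** 1.5

def kbd_halo_threshold(z, lam=0.04):
    """Minimum halo mass able to host a KBD seed at redshift z."""
    return 1e7 * ((1.0 + z) / 18.0) ** -1.5 * (lam / 0.04) ** -1.5
```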
In the BVR model, black hole seeds form in halos subject to runaway gravitational instabilities,
via the so-called ``bars within bars'' mechanism ~\cite{SFB89}.
We assumed here, as in BVR, that runaway instabilities are efficient
only in metal free halos with virial temperatures $T_{\rm vir} \gtrsim 10^4$K.
The ``bars within bars'' process produces in the center of the halo a ``quasistar''
(QSS) with a very low specific entropy. When the QSS core collapses, it leads
to a seed black hole of a few tens solar masses. Accretion from the QSS
envelope surrounding the collapsed core can however build up a substantial
black hole mass very rapidly until it reaches a mass of the order of the ``quasistar''
itself, $m_{\rm QSS}\simeq 10^4-10^5 \,{\rm M_\odot}$. The seed black hole accretion rate adjusts
so that the feedback energy flux equals the Eddington limit for the quasistar
mass; thus, the black hole grows at a super-Eddington rate as long as
$m_{\rm QSS} > m_{\rm seed}$. The result is that $m_{\rm seed}(t) \sim 4 \times 10^5 (t/10^7 \ {\rm yr})^2
M_\odot$ i.e., $m_{\rm seed} \propto t^2$. In metal rich halos star formation
becomes efficient, and depletes the gas inflow before the conditions for QSS
(and MBH) formation are reached. BVR
envisage that the process of MBH formation stops when gas is sufficiently
metal enriched. We consider here two scenarios, one in which star formation exerts a high
level of feedback and ensures a rapid metal enrichment (BVRhf), one in
which feedback is milder and halos remain metal free for longer (BVRlf).
In the former case MBH formation ceases at $z\approx 18$, in the latter at $z\approx 15$.
The BVRhf model appears to produce barely enough MBHs to reproduce the
observational constraints (ubiquity of MBHs in the local Universe, luminosity function of quasars).
We consider it a very strong lower limit to the number of seeds that need to be formed
in order to fit the observational constraints.
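The quadratic growth law quoted above can be sketched as follows (our addition); capping the black hole mass at the quasistar mass $m_{\rm QSS}$ is our reading of the statement that super-Eddington growth lasts only while $m_{\rm QSS}>m_{\rm seed}$.

```python
def bvr_seed_mass(t_yr, m_qss=1e5):
    """Seed mass in Msun after t_yr years of quasistar-fed growth,
    m_seed(t) ~ 4e5 (t / 10^7 yr)^2 Msun, capped at the quasistar mass."""
    return min(4e5 * (t_yr / 1e7) ** 2, m_qss)
```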
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics[clip=true]{FIGNEW1.ps}
\includegraphics[clip=true]{FIGNEW2.ps}}
\caption{{\it Left panel}: number of MBHB coalescences per observed year at $z=0$, per unit redshift,
in different $m_{\rm BH}=m_1+m_2$ mass intervals. {\it Solid lines}: VHM model;
{\it short--long dashed lines}: KBD model; {\it short--dashed lines}:
BVRlf model; {\it long--dashed lines}: BVRhf model.
{\it Right panel}: redshift distribution of MBHBs resolved with
$S/N>5$ by {\it LISA} in a 3-year mission. Line style as in the left panel.
The number of events predicted by the KBD model
({\it short--long dashed curve}) is divided by a factor of $10$.
The top-left corner label lists the total number of expected detections.
}
\label{figseed}
\end{figure}
\subsection{Coalescence rates and {\it LISA} detections}
Different seed populations would inevitably leave a peculiar imprint on
the merger rate of MBHBs relevant to {\it LISA}.
The left panel of figure \ref{figseed} shows the number of MBH binary coalescences per unit
redshift per unit {\it observed} year, $dN/dzdt$, predicted by the five models we tested.
Each panel shows the rates for
different $m_{\rm BH}=m_1+m_2$ mass intervals. The total coalescence rate
spans almost two orders of magnitude
ranging from $\sim 3$ yr$^{-1}$ (BVRhf) to $\sim 250$ yr$^{-1}$ (KBD).
As a general trend, coalescences of more massive MBHBs peak at lower
redshifts (for all the models the coalescence peak in the case
$m_{\rm BH}>10^6\,{\rm M_\odot}$ is at $z\sim2$).
Note that there are no merging MBHBs with $m_{\rm BH}<10^4\,{\rm M_\odot}$
in the KBD and BVR models.
\begin{figure}
\centering
\includegraphics[width=3.5in]{FIGNEW3.ps}
\caption{Mass function of the more massive member
of MBHBs resolved with $S/N>5$ by {\it LISA} in a 3-year mission.
Line style as in figure \ref{figseed}. All curves are normalized such that the integral
in $d\log(m_1)$ gives the number of detected events.
}
\label{m1dist}
\end{figure}
The right panel of figure~\ref{figseed} shows the redshift distribution of {\it LISA} MBHB
detections. The KBD model results in a number of events ($\simeq 700$) that is more than an
order of magnitude higher than that predicted by the other models,
with a skewed distribution peaking at considerably high redshift, $z \gtrsim 10$.
It is interesting to compare the {\it number of detections} with the {\it total number
of binary coalescences} predicted by the different formation models.
The KBD model produces $\simeq 750$ coalescences, the VHM model $\simeq 250$,
and the two BVR models just a few tens. A difference of a factor $\simeq 3$
between the KBD and the VHM models in the total number of coalescences
results in a difference of a factor of $\simeq 10$ in the {\it LISA} detections,
due to the different masses of the seed black holes.
Almost all the KBD coalescences involve massive binaries ($m_1 \gtrsim 10^4 \,{\rm M_\odot}$),
which are observable by {\it LISA}. The KBD and BVR models differ in
the sheer number of MBHs. The halo mass threshold in the KBD model is well below
the BVR one (by about three orders of magnitude), the latter requiring halos with virial
temperature above $10^4\,$K. In a broader context, results pertaining to the
KBD model describe the behavior of families of models in which efficient
MBH formation can also occur in mini-halos, where the source of cooling is molecular hydrogen.
It is difficult, on the basis of the redshift distributions of detected binaries only,
to discriminate between heavy and light MBH seed scenarios.
Although the VHM and BVRlf models predict a different number of observable sources,
the uncertainties in the models are so high that a difference
of a factor of two (96 for the VHM model, 44 for the BVRlf model) cannot be
considered a safe discriminant. Moreover, the redshift distributions are quite
similar, peaking at $z\simeq6-7$ and without any distinctive feature in their shape.
In~\cite{ses05} we showed that {\it LISA} will be sensitive to
binaries with masses $\lesssim10^3\,{\rm M_\odot}$ up to redshift ten. Hence the
discrimination between heavy and light MBH seed scenarios should be
easy on the basis of the mass function of detected binaries.
This is shown in figure~\ref{m1dist}. As expected, in the VHM model, the mass
distribution extends to masses $\lesssim10^3\,{\rm M_\odot}$, giving a clear and unambiguous
signature of a light seed scenario. The VHM model predicts that many detections (about 50$\%$) involve
low mass binaries ($m_{\rm BH}<10^{4}\,{\rm M_\odot}$) at high redshift ($z>8$). These sources
are observable during the inspiral phase, but their frequency at the last stable orbit $f_{\rm LSO}$ is too high for
{\it LISA} detection (see~\cite{ses05}, figure 2).
Heavy seed scenarios predict instead that the GW emission at $f_{\rm LSO}$ and the subsequent
plunge are always observable for all binaries.
\section{Gravitational recoil imprint}
In all the models detailed in the previous section, we implemented a `conservative' recoil prescription
appropriate to mergers of non spinning black holes. In the following, we quantify, for
selected seed scenarios, the maximum effect that gravitational recoil may have on {\it LISA}
detection rates.
\subsection{Description of the models}
We focus here on two specific models described in Section 4.1
that are representative of these two classes of MBH seed formation scenarios:
the VHM and the BVRlf models. For both of them, we consider two cases that bound the effect
of recoil in the assembly of MBHs and, as a consequence, LISA events: (i) no gravitational
recoil takes place and (ii) maximal gravitational recoil is associated with every MBHB merger,
using the model by Volonteri 2007~\cite{vol07}, which is based on the estimates reported by Campanelli
et al. 2007~\cite{cetal07}. For the latter we use the merger tree realizations
presented in~\cite{vol07}. The model consistently takes into account the
cosmic evolution of the mass ratio distribution of merging binaries and
of their spin parameters (see the discussion in~\cite{vol07}).
The spin orientations during each merger are instead always in the
configuration leading to the maximum recoil according to~\cite{cetal07},
i.e., MBH spins are assumed to lie in the binary orbital
plane, counter-aligned with each other. The recoil velocity is then
computed according to equation 1 of~\cite{cetal07}, assuming
$\Theta=\Theta_0$ (i.e., the maximum possible recoil velocity).
We would like to emphasize that the prescription we have chosen for (ii),
whose main features we have just summarized, is the least favorable for
GW observations and is (probably) unlikely to occur in such
extreme circumstances during MBH assembly.
\subsection{Merger rates and {\it LISA} detections}
The left panel of figure \ref{figrec} shows the number of MBH binary coalescences per unit
${\rm log}{\cal M}$ per unit {\it observed} year, $dN/d{\rm log}{\cal M} dt$,
predicted by the two models that we have considered, for both cases where
recoil is neglected and extreme recoil is taken into account. Each panel shows
the rates for different redshift intervals.
Note that when extreme recoil is included, the rate predicted by the BVRlf model at any
redshift is only marginally affected, while the VHM model is more sensitive to the GW recoil:
at $z>15$, GW kicks do not affect the coalescence rate;
by contrast, at $z<15$, the rate drops by a factor of
$\sim 3$ for ${\cal M}\gtrsim 10^3\,{\rm M_\odot}$
if extreme kicks are included in the evolution. This is related to the fraction
of seeds that experience multiple coalescences during the MBH assembly
history. We can schematically think of the assembly history as a
sequence of coalescence rounds. After each round,
extreme recoil depletes a large fraction of the remnants, and the relative
importance of each subsequent round drops accordingly.
In the VHM model, about 65\% of the
remnants of the first round will undergo a second round of coalescences,
so the second round has an important relative weight in the computation of the
total rate. When extreme recoil is taken into account, a large fraction of
the first round remnants is ejected from their hosting halos. We
find that the effective fraction of remnants that can experience
a second coalescence drops to $\sim30\%$. This is the reason why the number
of coalescences involving light black holes (${\cal M}<10^3 \,{\rm M_\odot}$) does not
drop at any redshift, while the number of coalescences involving more
massive binaries drops by a factor $\approx 3$. In the BVRlf scenario
seeds are rarer, and the fraction of first coalescence remnants that
participate in the second round is around 25\%; switching on
extreme recoil has a significantly smaller impact on the global rate in this case.
Moreover, in this model seeds are more massive and the bulk of
merging events happens at lower redshift, where the hosting halo
potential wells are deeper and consequently larger kicks are needed
to eject the coalescence remnants.
As a matter of fact, the seed abundance sets the mean number of major
mergers that a seed is expected to undergo during the cosmic history,
and this essentially determines the ability of extreme kicks to reduce
the coalescence rate.
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics[clip=true]{FIG_kick1.ps}
\includegraphics[clip=true]{FIG_kick2.ps}}
\caption{{\it Left panel}: number of MBHB coalescences per observed year at $z=0$, per unit
log chirp mass,
in different redshift intervals. {\it Dashed lines}: GW recoil
neglected; {\it solid lines}: extreme GW recoil included.
{\it Thick lines}: VHM model; {\it thin lines}: BVRlf model.
{\it Right panel}: redshift distribution of MBHBs resolved with
$S/N>5$ by {\it LISA} in a 3-year mission. Line style as in the left panel.
The top-right corner label lists the total number of expected detections.
}
\label{figrec}
\end{figure}
The right panel of figure~\ref{figrec} shows the redshift distribution of MBHBs detected by
{\it LISA}. The effect of extreme GW recoils on the source number
counts depends drastically on the abundance and nature of the seeds. In the VHM
model, the number of detectable sources drops by $\sim 60\%$, and
the number of potential {\it LISA} detections is reduced from
$\approx 140$, if the recoil is neglected, to $\approx 60$, if extreme
recoil is included. Conversely, the detection rate predicted by the BVRlf model
is only weakly affected by the extreme recoil prescription, and it
drops by about $15\%$ (from 40 to 34 events in 3 years of observation\footnote{Note that both numbers
are smaller than 44, which is the number of events found in Section 4 assuming a non-spinning recoil prescription.
This is consistent with a Poissonian variance in the number of coalescences for different Monte Carlo
realizations of the seed populations. The 15\% decrease found here (from 40 to 34) is instead
computed starting from the same Monte Carlo realization of the seed population,
and then applying different recoil recipes.}).
Note that though the overall number of coalescences in the VHM
model decreases only by about $25\%$ when extreme recoil is considered,
the number of {\it LISA} detections is reduced by a much larger
factor. This is because, if the seeds are light, {\it LISA} cannot
detect the bulk of the first coalescences of light
binaries happening at high redshift, which are responsible
for the major contribution to the coalescence rate and
are not affected by the recoil. {\it LISA}
can observe later events, involving more massive binaries,
that are largely suppressed by the MBH depopulation
due to extreme GW kicks. In the BVRlf
model, on the other hand, seeds are more massive, and the second
coalescence round is less important; in this case, the {\it LISA} sensitivity
is sufficient to observe almost all the first coalescences, and
the number of detections is only mildly reduced.
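The Poissonian-variance argument made in the footnote above can be checked directly. The sketch below (variable names are ours) verifies that the difference between the two Monte Carlo realizations of the BVRlf event count (44 vs. 40) is well within one Poisson standard deviation:

```python
import math

# Poisson consistency check for the BVRlf event counts quoted in the text:
# 44 events in one Monte Carlo realization of the seed population vs. 40 in another.
n_ref, n_alt = 44, 40
sigma = math.sqrt(n_ref)  # 1-sigma Poisson fluctuation on the reference count

print(f"1-sigma fluctuation = {sigma:.2f}, |difference| = {abs(n_ref - n_alt)}")
# The difference (4 events) is smaller than the ~6.6-event Poisson fluctuation,
# so the two realizations are statistically consistent, as the footnote states.
assert abs(n_ref - n_alt) < sigma
```
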
\section{Summary and conclusions}
Using dedicated Monte Carlo simulations of the hierarchical assembly
of dark matter halos along the cosmic history, we have
computed the expected gravitational wave signal from the evolving
population of massive black hole binaries.
We investigated the imprint of different seed black hole formation routes
and of extreme recoils on {\it LISA} detections.
We found that a large fraction of coalescences (depending on the model)
will be directly observable by {\it LISA}, and on the basis of the detection
rate, constraints can be put on the MBH formation process.
Detection of several hundred events in 3 years will be the sign of an
efficient formation of heavy MBH seeds in a large fraction of high redshift halos (KBD).
At the other extreme, a low event rate, of about a few tens in 3 years,
is characteristic of scenarios where either the seeds are light, and many
coalescences do not fall into the {\it LISA} band, or seeds are massive
but rare, as envisioned by, e.g., BVR (see also~\cite{ln06}).
In this case a decisive diagnostic is provided by the mass distribution
of detected events. In the light seed scenario, the mass distribution
of observed binaries extends to $\sim 10^3\,{\rm M_\odot}$, while there are no
sources with mass below $10^4\,{\rm M_\odot}$ in the heavy seed scenario.
We have then considered two specific MBH assembly models (VHM, BVRlf), representative
of two different MBH seed formation scenarios. For both of them, we investigated two cases
that bound the effect of recoil in the assembly of MBHs and, as a consequence, LISA events:
(i) no gravitational recoil takes place and (ii) maximal gravitational recoil is
associated with every MBHB merger, using the model described in~\cite{vol07}.
Our results show that at present it is not clear whether {\it LISA} will be
able to shed light on the importance of recoil in MBH assembly,
even in this extreme case, since the uncertainty introduced in the
number counts is at most a factor of $\sim 3$, comparable with the uncertainties
due to our ignorance of the MBH accretion history and of the
detailed dynamics of MBHBs (see, e.g., the discussion in~\cite{ses07a}).
On the other hand, this fact confirms that MBHBs are safe {\it LISA} targets; since
extreme recoil effects increase with the seed abundance, we expect the drop in
the detections to be more significant for those scenarios that predict a larger
number of sources.
In conclusion, from the point of view of the detection of low frequency
gravitational waves, massive black hole binaries are certainly one of
the major targets for the {\it LISA} mission, independently of the actual
seed black hole formation route and of the magnitude of the typical recoil
suffered by the remnants of binary coalescences. On astrophysical grounds,
{\it LISA} will be a unique probe of the formation, accretion and merger of
MBHs along the {\it entire} cosmic history of galactic structures.
\section*{References}
\begin{abstract}
We continue previous work by various authors and study the birational geometry of moduli spaces of stable rank-two vector bundles on surfaces with Kodaira dimension $-\infty$. To this end, we express vector bundles as natural extensions, by using two numerical invariants associated to vector bundles, similar to the invariants defined by Br\^\i nz\u anescu and Stoia in the case of minimal surfaces. We compute explicitly these natural extensions on blowups of general points on a minimal surface.
In the case of rational surfaces, we prove that any irreducible component of a moduli space is either rational or stably rational.
\end{abstract}
\section{Introduction}
Following the inception of GIT and its appearance in the works of Mumford, Takemoto, Maruyama and Gieseker, which set the foundations of modern vector bundle theory in the 1970s, several decisive results suggested that the geometry of moduli spaces of vector bundles
and the geometry of the base manifolds are interlaced. For example:
\begin{itemize}
\item A careful study of the geometry of the moduli spaces of vector bundles plays an essential role in Qin's proof of Van de Ven's conjecture on the deformation invariance of the Kodaira dimension.
\item Mukai proves that two-dimensional moduli spaces of vector bundles over K3 surfaces are also K3 surfaces. In arbitrary dimension, moduli spaces are holomorphically symplectic manifolds.
\item A Beilinson spectral sequence analysis carried out by Horrocks, Barth, Hulek and Maruyama shows that moduli spaces of vector bundles on the projective plane are either rational or unirational.
\end{itemize}
In \cite{Costa-MiroRoig_NMJ02}, the following natural question is addressed. Is it true that the moduli spaces of rank-two vector bundles with large enough second Chern class on rational surfaces are themselves rational? Building on previous contributions in this direction \cite{Costa-MiroRoig_JPAA99}, \cite{Costa-MiroRoig_CRELLE99}, this question is answered positively in \cite{Costa-MiroRoig_NMJ02}.
\medskip
The first goal of our paper is to answer an improved version of this question, dropping the condition on the second Chern class. We prove in Theorems \ref{thm:structure2} and \ref{thm:structure3} that any nonempty irreducible component of an arbitrary moduli space of stable rank-two bundles on a rational surface is either rational or stably rational. Since stably rational varieties are close to being rational, this is conclusive evidence that the moduli spaces are always rational without imposing any further condition on the second Chern class. (For a discussion of rationality, stable rationality and the differences between them see, for example, \cite{Beauville_arXiv}.)
The proof of our result relies on the use of natural numerical invariants associated to vector bundles, similar to the ones introduced by Br\^\i nz\u anescu and Stoia in the minimal case \cite{Brinzanescu-Stoia_REV}, \cite{Brinzanescu_LNM}. The definition makes perfect sense for rank-two vector bundles on any surface $X$ with Kodaira dimension $-\infty$. These invariants allow us to present any vector bundle on $X$ as an extension of a certain type, and this construction comes with some advantages. Recall that Serre's method permits us to write any rank-two bundle on an arbitrary surface as an extension involving line bundles and some zero-dimensional subschemes. Conversely, non-trivial extensions are locally free if the zero-dimensional subschemes in question satisfy a certain condition, called {\em Cayley-Bacharach}. Unfortunately, this condition is locally closed, and is neither closed nor open in general. Hence, if we want to use extension spaces corresponding to general bundles (in an irreducible component) to parametrise moduli spaces, we need to control the corresponding locus of zero-dimensional subschemes with the Cayley-Bacharach property. The ideal situation occurs, of course, if this locus coincides with the whole Hilbert scheme. The invariants we use place us precisely in this situation. Their definition and their basic properties form the content of section \ref{sec:invariants}.
Along the way, we prove in section \ref{sec:structure} that any irreducible component of a moduli space of rank-two vector bundles is dominated by a projective bundle over a Hilbert scheme, Theorem \ref{thm:structure}. The projective bundle in question is a space of extensions and the essential fact is that all the zero-dimensional subschemes we work with satisfy the Cayley-Bacharach property, see the proof of Theorem \ref{thm:structure}. A similar result was previously obtained by Qin for minimal surfaces \cite[Theorem~C]{Qin_INVENTIONES92}.
In section \ref{sec:extensions}, we find explicit values for the numerical invariants of general stable vector bundles on surfaces $X$ obtained as blowups of general points on a minimal surface $S$, Theorems \ref{thm:c1F=0} and \ref{thm:c1F=1}. As a consequence, we obtain a refinement of Theorem \ref{thm:structure} for these surfaces. If $C$ is the curve over which $S$ is ruled, and $F$ is the class of a general fibre lifted to $X$, we prove that the moduli spaces are birational to a projective bundle over either a product of two copies of the Jacobian of $C$ with a symmetric product of $C$ if $c_1\cdot F$ is even, or just a product of two copies of the Jacobian of $C$ if $c_1\cdot F$ is odd, see Corollaries \ref{cor:c1F=0} and~\ref{cor:c1F=1}.
\medskip
\noindent {\bf Notation:} We will work over an algebraically closed field $K$ of characteristic zero. Given a non-singular variety $X$, we denote by $K_X$ its canonical divisor and by $q(X)$ its irregularity. For any coherent sheaf $E$ on $X$, we denote by $H^i(X,E)$ the cohomology groups, while $h^i(X,E)$ stands for their dimensions. If $E$ and $E'$ are two coherent sheaves on $X$, the dimension of the space $\mathrm{Ext}^i_X(E,E')$ is denoted by $\mathrm{ext}^i_X(E,E')$. We denote by $\chi (X,E):= \sum_{i=0}^{\dim X} (-1)^ih^i(E)$ the Euler characteristic of $E$.
\section{Background}
\label{sec:background}
We start collecting the main results that we will use concerning stable vector bundles on a smooth projective surface and their moduli spaces.
\begin{defn}
Let $L$ be an ample divisor on a smooth projective surface $X$. A rank two vector bundle $V$ on $X$ is $L$-semistable if for any rank one subbundle $E$ of $V$,
\[ c_1(E)\cdot L \leq \frac{c_1(V)\cdot L}{2}. \]
If strict inequality holds, we say that $V$ is $L$-stable. We say that $V$ is simple if $\mathrm{Hom}_X(V,V)=K$. Notice that any $L$-stable vector bundle is simple.
\end{defn}
We will denote by ${\mathcal M_L(c_1,c_2)}$ the moduli space of rank two $L$-stable vector bundles $V$ on a smooth projective surface $X$ with $c_1(V)=c_1$ and $c_2(V)=c_2$.
\begin{thm}
\label{moduli}
Let $X$ be a smooth projective surface, $L$ an ample divisor on $X$ and $c_1,c_2 \in H^*(X, \ZZ)$ Chern classes. For all $c_2 \gg 0$, ${\mathcal M_L(c_1,c_2)}$ is a smooth, irreducible, quasiprojective variety of dimension $4c_2-c_1^2-3\chi(\mathcal O_X)+q(X)$.
\end{thm}
\proof
See \cite{Don86}, \cite{Zuo91}, \cite{GL96} and \cite{OG96}.
\vspace{3mm}
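As a quick illustration, the dimension formula of Theorem \ref{moduli} is straightforward to evaluate. The sample values below (Chern numbers and surface invariants) are hypothetical and chosen only for the example; for a rational surface one has $\chi(\mathcal O_X)=1$ and $q(X)=0$:

```python
# Expected dimension of the moduli space M_L(c1, c2) of rank-two stable bundles,
# per the formula 4*c2 - c1^2 - 3*chi(O_X) + q(X) quoted in the theorem above.
def moduli_dim(c2: int, c1_sq: int, chi_OX: int, q: int) -> int:
    """Return 4*c2 - c1^2 - 3*chi(O_X) + q(X)."""
    return 4 * c2 - c1_sq - 3 * chi_OX + q

# Hypothetical sample: a rational surface (chi = 1, q = 0) with c1^2 = 1, c2 = 5.
print(moduli_dim(c2=5, c1_sq=1, chi_OX=1, q=0))  # -> 16
```
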
One of the tools that we will use concerns prioritary sheaves. Prioritary sheaves were introduced on ruled surfaces by
Walter in \cite{Walter_Europroj}
as a generalization of semistable sheaves, and we recall their definition for the sake of completeness.
\begin{defn}
\label{prior} Let $\pi: S \longrightarrow C$ be a
ruled surface and let $F \in \mathrm{Num}(S)$ be the numerical class
of a fiber of $\pi$. A coherent sheaf $E$ on $S$ is said to be
prioritary if it is torsion free and if $\mathrm{Ext}^2_S(E,E(-F))=0$.
\end{defn}
\begin{rmk}
\label{stableimplicaprior} \rm If $H$ is an ample divisor on a ruled surface $S$
such that $H\cdot(K_S+F)<0$, then any $H$-stable, torsion free
sheaf is prioritary (see the proof of \cite[Theorem 1]{Walter_Europroj}).
\end{rmk}
We denote by ${\mathcal Spl(c_1,c_2)}$ the moduli space of rank two simple, prioritary,
torsion free sheaves $E$ on $S$ with Chern classes $c_1$
and $c_2$. The following result follows from \cite[Proposition 2]{Walter_Europroj}:
\begin{thm}
\label{spl}
Let $S$ be a smooth ruled surface, $L$ an ample divisor on $S$ and $c_1,c_2 \in H^*(S, \ZZ)$ Chern classes. Then, the moduli space ${\mathcal Spl(c_1,c_2)}$ is a smooth, irreducible, quasiprojective variety. Moreover, if $L\cdot (K_S+F)<0$, then the moduli space ${\mathcal M_L(c_1,c_2)}$ is an open dense subset of ${\mathcal Spl(c_1,c_2)}$.
\end{thm}
We end the section by gathering the relevant results on ruled surfaces that we will use throughout this paper.
\vspace{3mm}
Let $e$ and $m\ge 1$ be two integers. Let $p_1,\ldots,p_m$ be distinct points on a geometrically ruled surface $S$ of invariant $e$ over a smooth genus--$g$ curve $C$, let $\pi:S\to C$ be the ruling and $\sigma:X\to S$ a blowup of $S$ in $p_1=p_{11},\ldots,p_m=p_{m1}$ and possibly other infinitely near points $p_{ij}$ with $i=1,\ldots,m$ and $j=2,\ldots,k_i$. Put $\phi=\pi\circ\sigma$ and denote by $E_{11},\ldots,E_{1k_1},$ $E_{21},\ldots,E_{2k_2},\ldots,$ $E_{m1},\ldots,E_{mk_m}$ the irreducible components of the exceptional divisor. In this notation, since $p_{i1}=p_i$, $E_{i1}=E_i$ is the first component of the blowup of $S$ in $p_i$.
Denote by $C_0$ the minimal section of $S$ so that $e=-C_0^2$, by $F$ the fibre over a general point $p\in C$ of the ruling and, if no confusion arises, we use the same notation for their pullbacks to $X$. Denote $\widetilde{F}_i$ the strict transform of the fibre through $p_i$.
\section{Numerical invariants associated to vector bundles}
\label{sec:invariants}
The main goal of this section is to associate to any rank-two vector bundle $V$ on $X$ two invariants that will be a key ingredient in the classification of $L$-stable vector bundles later on. More precisely, in the minimal case ($m=0$), Br\^\i nz\u anescu and Stoia introduced two numerical invariants associated to any rank-two vector bundle, which are used to present the given bundle as a natural extension \cite{Brinzanescu_LNM}. In the sequel, we will define similar invariants in the non-minimal case, and we will find the corresponding canonical extension. To do so, we fix a rank-two vector bundle $V$ on $X$ with $\mathrm{det}(V)=\mathcal O_X(\alpha C_0+ \beta F+G)\otimes \phi^*P$, where $P\in\mathrm{Pic}^0(C)$ and $G$ is supported on the exceptional divisor. By twisting $V$ with multiples of $E_{ij}$ if necessary, we may assume that $G$ is effective and $c_2(V)=c_2 \in \ZZ$. The first invariant is given by the generic splitting type:
\medskip
{\em The invariant $d_V$}. For a general point $p$ in $C$, the restriction of $V$ to the corresponding fibre is of type $\mathcal O_F(d)\oplus\mathcal O_F(d')$ with $d\ge d'$ and $d+d'=\alpha$. The given $d$ is the first invariant $d_V$. Note that this invariant is upper-semicontinuous in flat families of rank-two bundles; indeed, $d_V\ge k$ if and only if $h^0(V|_{F_q}(-k))\ne 0$ for any fibre $F_q$ over $q\in C$.
\medskip
Once $d_V$ is determined, we define the second invariant:
\medskip
{\em The invariant $r_V$}. The push-forward $\phi_*V(-d_VC_0)$ is either a line bundle (if $2d_V>\alpha$) or a rank-two bundle (if $2d_V=\alpha$) on $C$. Indeed, since the target of $\phi$ is a smooth curve, $\phi$ is flat, implying that $\phi_*V(-d_VC_0)$ is torsion-free and hence locally free. Therefore, by Grauert's Theorem (\cite[Corollary 12.9]{Har}), over an open subset of the target $\mathrm{rank}(\phi_*V(-d_VC_0))$ equals one (if $2d_V>\alpha$) or two (if $2d_V=\alpha$).
Define $r=r_V$ to be the maximum degree of a line subbundle of $\phi_*V(-d_VC_0)$; by a result of Nagata \cite[Theorem 1]{Nagata_NMJ70}, $2r\ge \mathrm{deg}(\phi_*V(-d_VC_0))-g$. Alternatively, $r_V$ is the maximum number for which there is a non-zero morphism $\mathcal O_X(d_VC_0+rF)\otimes\phi^*M\to V$ with $M\in\mathrm{Pic}^0(C)$.
If $C=\mathbb P^1$ then the invariant $r_V$ has a simpler description:
\[
r_V=\mathrm{max}\left\{r|\ h^0((\phi_*V(-d_VC_0))(-r))\ne 0\right\}.
\]
Note that if $2d_V=\alpha$ and the genus of the base curve is at least one, the maximal subbundle is not necessarily unique, see \cite{Lange-Narasimhan_MATHANN83}.
\begin{lem}
The invariant $r_V$ is upper-semicontinuous in flat families of rank-two bundles with $d_V=0$.
\end{lem}
\proof
Let $\{V_t\}_{t\in T}$ be a flat family of rank-two bundles with $d_{V_t}=0$ for all $t$ and let $r\in \mathbb Z$. We need to prove that the set
\[
\{t\in T|\ r_{V_t}\ge r\}\subset T
\]
is closed. Note that $r_{V_t}\ge r$ if and only if there exists a line bundle $\mathcal L$ of degree $r$ on $C$ such that $h^0(X,V_t\otimes \phi^*\mathcal L)\ne 0$ and hence
\[
\{t\in T|\ r_{V_t}\ge r\}=\bigcup_{\mathcal L\in\mathrm{Pic}^r(C)}\{t\in T|\ h^0(V_t\otimes \phi^*\mathcal L)\ne 0\}.
\]
The conclusion follows observing that the subset
\[
\{(t,\mathcal L)\in T\times\mathrm{Pic}^r(C)|\ h^0(V_t\otimes \phi^*\mathcal L)\ne 0\}\subset T\times\mathrm{Pic}^r(C)
\]
is closed and it maps, via the first projection $T\times\mathrm{Pic}^r(C)\to T$, which is a proper map, to the subset in question
\[
\bigcup_{\mathcal L\in\mathrm{Pic}^r(C)}\{t\in T|\ h^0(V_t\otimes \phi^*\mathcal L)\ne 0\}\subset T.
\]
\endproof
Let $r$ be an integer such that $H^0(X,V(-d_VC_0-rF)\otimes\phi^*{M}^{-1}) \neq 0$ for some $M\in\mathrm{Pic}^0(C)$. A section $\sigma$ in $H^0(X,V(-d_VC_0-rF)\otimes\phi^*{M}^{-1})$, giving a morphism $\mathcal O_X(d_VC_0+rF)\otimes\phi^*M\to V$, will vanish along a zero-dimensional lci subscheme $Z$ plus possibly along an effective divisor $D$. In this case, $V$ is presented as an extension
{\footnotesize
\begin{equation}
\label{eqn:canonical}
0\to \mathcal O_X(d_VC_0+rF+D)\otimes\phi^*M\to V\to \mathcal I_Z((\alpha-d_V)C_0+(\beta-r)F+(G-D))\otimes\phi^*N\to 0
\end{equation}
}
with $M,N\in\mathrm{Pic}^0(C)$, $M\otimes N=P$.
By the definition of the invariants $d_V$ and $r_V$, if $r=r_V$ then the divisor $D$ must be supported along the exceptional divisor and the strict transforms $\widetilde{F}_i$, with $h^0(X,\mathcal O_X(D-F_q))=0$ for any fiber $F_q$ over $q\in C$. On the other hand, if $r_V > r$, then $D$ must be supported along the exceptional divisor, the strict transforms $\widetilde{F}_i$ and possibly copies of~$F$.
\vspace{3mm}
To an extension (\ref{eqn:canonical}) one associates a natural numerical class
\[
\zeta \equiv (2d_V-\alpha)C_0+(2r-\beta)F+(2D-G),
\]
see \cite{Qin_JDG94}, \cite{Qin_MANUSCRIPTA93}. It is clear from the definition that the length of the scheme $Z$ from the extension (\ref{eqn:canonical}) is computed as
\begin{equation}
\label{eqn:l(Z)}
\ell:= \ell(Z)=c_2+(\zeta^2-c_1^2)/4\ge 0.
\end{equation}
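The length formula (\ref{eqn:l(Z)}) can be made concrete using the intersection numbers on $X$: $C_0^2=-e$, $C_0\cdot F=1$, $F^2=0$, $E_{ij}^2=-1$, all other products of basis classes vanishing. The sketch below (with hypothetical sample values for $e$, $c_1$ and $\zeta$) evaluates $\zeta^2$, $c_1^2$ and $\ell(Z)$ for classes written in the basis $C_0$, $F$, $E_1,\ldots$:

```python
def self_intersection(a: int, b: int, exc: list[int], e: int) -> int:
    """Self-intersection of a*C0 + b*F + sum(c_i * E_i) on the blown-up ruled surface:
    C0^2 = -e, C0.F = 1, F^2 = 0, E_i^2 = -1, and all mixed products vanish."""
    return -e * a * a + 2 * a * b - sum(c * c for c in exc)

def length_Z(c2: int, zeta_sq: int, c1_sq: int) -> int:
    """Length of Z from the formula ell(Z) = c2 + (zeta^2 - c1^2) / 4."""
    ell, rem = divmod(4 * c2 + zeta_sq - c1_sq, 4)
    assert rem == 0, "zeta^2 - c1^2 must be divisible by 4"
    return ell

# Hypothetical example: e = 1, c1 = C0 + F (so alpha = beta = 1, c1^2 = 1),
# d_V = 1, r = 0, G = D = 0, hence zeta = C0 - F and zeta^2 = -3.
c1_sq = self_intersection(1, 1, [], e=1)     # -> 1
zeta_sq = self_intersection(1, -1, [], e=1)  # -> -3
print(length_Z(c2=3, zeta_sq=zeta_sq, c1_sq=c1_sq))  # -> 2
```
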
Following Z. Qin, \cite{Qin_JDG94}, \cite{Qin_MANUSCRIPTA93}, \cite{Qin_INVENTIONES92}, we denote by $E_\zeta(c_1,c_2)$ the family of nontrivial extensions of type (\ref{eqn:canonical}); it is birational to a projective bundle over $X^{[\ell]}\times \mathrm{Pic}^0(C)\times\mathrm{Pic}^0(C)$. Since $2d_V\ge\alpha$, it follows that $Z$ trivially satisfies the Cayley--Bacharach property with respect to $|K_X\otimes \mathcal O_X((\alpha-2d_V)C_0+(\beta-2r)F+G-2D) \otimes N\otimes M^{-1}|$ and hence, for any $Z$, $M$ and $N$, a general extension is a vector bundle.
These extension families are crucial in the birational description of the moduli spaces.
\begin{rmk} If we replace $V$ by a twist with a divisor supported on the exceptional divisor, the invariant $d_V$ remains the same, while the invariant $r_V$ might change. Indeed, let $C=\mathbb P^1$ and assume $m=1$ and $k_1=1$, i.e. the exceptional divisor has only one component, $E_1$. If $V$ is the trivial bundle, then the invariants $d_V$, $r_V$ are zero. However, the invariant $r_{V(-E_1)}$ of $V(-E_1)$ will be equal to $-1$, as $h^0(\mathcal O_X(-E_1))=0$ and $|\mathcal O_X(F-E_1)|$ consists of the strict transform $\widetilde{F}$ of a fibre.
However, the canonical extension is
\[
0\to \mathcal O_X(-F+\widetilde{F})\to \mathcal O_X(-E_1)^{\oplus 2}\to \mathcal O_X(-F+\widetilde{F})\to 0,
\]
i.e. it is indeed the canonical extension of the trivial bundle twisted by $\mathcal O_X(-E_1)$.
\end{rmk}
\begin{rmk}
\label{rmk:invariants}
By definition, it is clear that any vector bundle in an extension (\ref{eqn:canonical}) has $d_V=d$.
Moreover, if $2d_V>\alpha$ then any vector bundle $V$ in the extension (\ref{eqn:canonical}) will also have $r_V=r$.
Besides, for any $M,N^\prime\in\mathrm{Pic}^0(C)$ and $Z^\prime\subset X$, there is no nonzero morphism from $\mathcal O_X(d_VC_0+rF+D)\otimes \phi^*M$ to $\mathcal O_X((\alpha-d_V)C_0+(\beta-r_V)F+(G-D))\otimes \phi^*N^\prime\otimes\mathcal I_{Z^\prime}$ (as $d_V>\alpha-d_V$), and hence the divisor $D$, the line bundle $M$ and the subscheme $Z$ are also determined by $V$ and $r_V=r$.
In conclusion, if $2d_V>\alpha$ then the extension (\ref{eqn:canonical}) is uniquely determined by~$V$; this phenomenon is mainly due to the fact that $\phi_*V(-d_VC_0)$ is a line bundle.
On the contrary, if $2d_V=\alpha$, it might happen that some bundles presented as an extension (\ref{eqn:canonical}) have $r_V\ne r$, as shown in the next example. The equality occurs if and only if the rank-two bundle $\phi_*V(-d_VC_0)\otimes \mathcal O_C(-rp)\otimes {M}^{-1}$ on $C$ is normalized, i.e. $h^0(\phi_*V(-d_VC_0)\otimes \mathcal O_C(-rp)\otimes {M}^{-1}) \neq 0$ and $h^0(\phi_*V(-d_VC_0)\otimes \mathcal O_C(-(r+1)p)\otimes {M}^{-1})=0$.
\end{rmk}
\begin{ex} To construct an example in the simplest setup, let us assume that $S$ is a ruled surface over $\PP^1$ and that $X$ is the blowup of $S$ at one point. Let $V$ be a rank two vector bundle given by a nontrivial extension
\[
0\to \mathcal O_X(-nF) \to V\to \mathcal I_Z(nF+E_1)\to 0
\]
where $Z$ is a $0$-dimensional subscheme of length $2n+1$ such that 3 points lie on a fiber and the other ones lie in $2n-2$ different fibers. Notice that since $H^0(\mathcal I_Z((2n-1)F+E_1))\ne 0$, we have $h^0(V((n-1)F)) \ne 0$. Therefore, $r_V >r=-n$.
\end{ex}
Next, we address the following question:
\begin{question}
We place ourselves in the case $\alpha=d_V=0$. Let $\zeta$ be the numerical class $(2r-\beta)F+(2D-G)$ and let $E_\zeta(c_1,c_2)$ be the family of extensions
\begin{equation}
\label{eqn:canonical-alpha=0}
0\to \mathcal O_X(rF+D)\otimes \phi^*M\to V\to \mathcal I_Z((\beta-r)F+(G-D))\otimes \phi^*N\to 0
\end{equation}
with $M,N\in\mathrm{Pic}^0(C)$.
When is $r_V=r$ for a general $V$ in $E_\zeta(c_1,c_2)$?
\end{question}
We answer this question for $D=0$ and we will see later on that quite often $D=0$ (see the proof of Theorem \ref{thm:c1F=0}).
\begin{prop}
\label{bound}
Let $V_\eta$ be a vector bundle corresponding to a general extension $\eta\in E_\zeta(c_1,c_2)$ where $\zeta=(2r-\beta)F+2D-G$ with $G=\sum_{i=1}^{\rho}E_i\ge 0$ and $D=\sum_{i=1}^{\rho}q_iE_i$ with $q_i \geq 0$.
\begin{itemize}
\item[(a)] If $r_{V_\eta}=r$ for a general $\eta\in E_\zeta(c_1,c_2)$, then $2r\ge \beta-g-c_2$.
\item[(b)] If $D=0$ and $2r\ge \beta-g-c_2$, then $r_{V_\eta}=r$.
\end{itemize}
\end{prop}
\proof $(a)$ By definition, for any $\eta$ we have $r_{V_\eta}\ge r$. By semicontinuity, $r_{V_\eta}=r$ for a general $\eta$ if and only if there exists a $V$ with $r_V=r$, i.e. such that $(\phi_*V)(-rp)\otimes M^{-1}$ is normalized. By Nagata's Theorem (\cite[Theorem 1]{Nagata_NMJ70}), we obtain
\[
\mathrm{deg}((\phi_*V)(-rp))\le g.
\]
On the other hand, the non-zero morphism $\mathcal O_X(rF)\otimes\phi^*M\to V$ gives rise to a canonical short exact sequence
\[
0\to \mathcal O_X(rF+\sum_{i=1}^\rho q_iE_i)\otimes\phi^*M\to V\to \mathcal I_Z((\beta-r)F+\sum_{i=1}^\rho (1-q_i)E_i)\otimes \phi^*N\to 0
\]
with $q_i\ge 0$, $N\in\mathrm{Pic}^0(C)$ and $Z$ a zero-dimensional subscheme of length $\ell(Z)=c_2+\sum_{i=1}^\rho q_i(1-q_i)$. Since $C$ is a smooth curve, the pushforward of the above exact sequence gives a short exact sequence:
\[
0\to\mathcal O_C(rp)\otimes M\to \phi_*V\to \mathcal O_C((\beta-r)p)\otimes N\otimes\phi_*(\mathcal I_Z(\sum_{i=1}^\rho(1-q_i)E_i))\to 0.
\]
Hence, since $\phi_*\mathcal O_X(E_i)=\mathcal O_C$, we have
\[
\mathrm{deg}((\phi_*V)(-rp))=(\beta-2r)+\mathrm{deg}(\phi_*(\mathcal I_Z(\sum_{i=1}^\rho(1-q_i)E_i)))
\]
\[
\le(\beta-2r)-(c_2+\sum_{i=1}^\rho q_i(1-q_i))+(\sum_{i,\ q_i\ge 2}(1-q_i))
\]
\[
=(\beta-2r)-(c_2+\sum_{i,\ q_i\ge 2} q_i(1-q_i))+(\sum_{i,\ q_i\ge 2}(1-q_i))
=\beta-2r-c_2+\sum_{i,\ q_i\ge 2}(1-q_i)^2.
\]
Therefore,
\[
2r\ge \beta-g-c_2+\sum_{i,\ q_i\ge 2}(1-q_i)^2 \ge \beta-g-c_2.
\]
$(b)$ Suppose that $2r-\beta\ge -g-c_2$. We set $\ell:=c_2$ and we will prove that there exists a $V$ associated to an extension in $E_\zeta(c_1,c_2)$ for which $\phi_*V\otimes \mathcal O_C(-rp)\otimes {M}^{-1}$ is normalized.
Let $Z=\{z_1,\ldots,z_\ell\}$ be the reduced zero-dimensional subscheme of $X$ obtained by intersecting
$C_0$ with $\ell$ distinct fibres $F_{q_i}$ of $\phi$, over general points $q_1,\ldots,q_\ell\in C\setminus\{\phi(p_1),\ldots,\phi(p_m)\}$. In this case, $\phi_*\mathcal I_Z=\phi_*(\mathcal I_Z(G))\cong \mathcal O_C(-\sum_{i=1}^{\ell} q_i)$. We also choose $M=N=\mathcal O_C$.
\medskip
{\em Claim.} The map
\[
\mathrm{Ext}^1_X(\mathcal I_Z((\beta-r)F+G),\mathcal O_X(rF))\to \mathrm{Ext}^1_C(\mathcal O_C((\beta-2r)p-\sum_{i=1}^{\ell} q_i),\mathcal O_C)
\]
given by $V\mapsto (\phi_*V)(-rp)$ is surjective.
\medskip
We prove the claim in several steps, factoring the given map into a composition of surjective maps. First, we prove the surjectivity of the natural map
\[
\mathrm{Ext}^1_X(\mathcal I_Z((\beta-r)F+G),\mathcal O_X(rF))\to \mathrm{Ext}^1_X(\mathcal I_Z((\beta-r)F),\mathcal O_X(rF)).
\]
Indeed, this map is dual, via Serre duality, to the map
\[
H^1(X,\mathcal I_Z((\beta-2r)F)\otimes K_X)\to H^1(X,\mathcal I_Z((\beta-2r)F+G)\otimes K_X)
\]
which is injective, as
\[
H^0(G,(\mathcal I_Z((\beta-2r)F+G)\otimes K_X)|_G)=H^0(G,K_G)=0
\]
(use $\mathcal I_Z|_G\cong \mathcal O_G$ and $\mathcal O_X(F)|_G\cong \mathcal O_G$).
Second, we prove the surjectivity of the natural map
\[
\mathrm{Ext}^1_X(\mathcal I_Z((\beta-r)F),\mathcal O_X(rF))\to \mathrm{Ext}^1_X(\mathcal O_X((\beta-r)F-\sum_{i=1}^{\ell} F_{q_i}),\mathcal O_X(rF)).
\]
Since $\mathcal I_{\{z_i\}\subset F_{q_i}}=\mathcal O_{F_{q_i}}(-1)$, we obtain a short exact sequence:
\[
0\to\mathcal O_X\left(-\sum_{i=1}^\ell F_{q_i}\right) \to\mathcal I_Z \to \bigoplus_{i=1}^\ell\mathcal O_{F_{q_i}}(-1)\to 0
\]
which yields, after tensoring with $K_X\otimes \mathcal O_X((\beta-2r)F)$ and taking cohomology, an injective map
\[
H^1(X,K_X\otimes \mathcal O_X((\beta-2r)F)\otimes \mathcal O_X(-\sum_{i=1}^\ell F_{q_i}))
\to H^1(X,K_X\otimes \mathcal O_X((\beta-2r)F)\otimes \mathcal I_Z).
\]
Applying duality we obtain the surjective natural map
\[
\mathrm{Ext}^1_X(\mathcal I_Z((\beta-r)F),\mathcal O_X(rF))
\to \mathrm{Ext}^1_X(\mathcal O_X((\beta-r)F-\sum_{i=1}^\ell F_{q_i}),\mathcal O_X(rF))
\]
we were looking for.
Finally, the pushforward map
\[
\mathrm{Ext}^1_X(\mathcal O_X((\beta-r)F-\sum_{i=1}^\ell F_{q_i}),\mathcal O_X(rF))\to \mathrm{Ext}^1_C(\mathcal O_C((\beta-r)p-\sum_{i=1}^\ell q_i),\mathcal O_C(rp))
\]
is surjective, as the pullback is a right-inverse, by the projection formula. Hence, we have proved the claim.
Using \cite[Exercise 2.5 (c)]{Har}, if $\beta-2r-\ell\le g$, then a general extension in
$\mathrm{Ext}^1_C(\mathcal O_C((\beta-2r)p-\sum_{i=1}^\ell q_i),\mathcal O_C)$ is normalized, which implies, via the surjectivity of the map $V\mapsto(\phi_*V)(-rp)$, that a general extension in $\mathrm{Ext}^1_X(\mathcal I_Z((\beta-r)F+G),\mathcal O_X(rF))$ will have $r_V=r$.
\endproof
\section{The birational structure of moduli spaces}
\label{sec:structure}
The main result of this section is the following birational structure characterisation of moduli spaces (compare to \cite{Qin_MANUSCRIPTA93}, \cite{Qin_INVENTIONES92}, \cite{Costa-MiroRoig_CRELLE99}, \cite{Costa-MiroRoig_NMJ02}):
\begin{thm}
\label{thm:structure}
Let $H$ be an ample divisor on $X$. Then any nonempty irreducible component $\mathcal M$ of the moduli space $\mathcal M_H(c_1,c_2)$ is dominated by a projective bundle over $C^{[\ell]}\times\mathrm{Pic}^0(C)\times\mathrm{Pic}^0(C)$ for a suitable positive integer $\ell$.
\end{thm}
\proof
From the previous section, any vector bundle $V$ in $\mathcal M$ is presented as an extension (\ref{eqn:canonical}) for the corresponding $d_V$, $r_V$ and $D$. Given $d_V$, $r_V$ and $D$, the set of vector bundles that live in the corresponding extensions (\ref{eqn:canonical}) is a constructible set, as it is the image of a morphism from $E_\zeta(c_1,c_2)$ to $\mathcal M$. Note that the set of triples $(d_V,r_V,D)$ is countable ($D$ is supported on given fixed divisors) and hence $\mathcal M$ is a countable union of constructible subsets. It follows that $\mathcal M$ is in fact a finite union of such subsets, and in particular one of them must be dense (i.e. must contain an open subset); see for example \cite{Lieberman-Mumford_ARCATA}. In conclusion, a general vector bundle $V$ in $\mathcal M$ is presented as an extension (\ref{eqn:canonical}) with fixed $d_V$, $r_V$ and $D$. Since the general elements in the extension (\ref{eqn:canonical}) are locally free, it follows that $\mathcal M$ is dominated by the space of extensions $E_\zeta(c_1,c_2)$.
As for the structure of $E_\zeta(c_1,c_2)$, it is clear that it is birational to a projective bundle over $X^{[\ell]}\times\mathrm{Pic}^0(C)\times\mathrm{Pic}^0(C)$ via the map that associates to any extension the triple $(Z,M,N)$; the fibres of this map are the projective spaces
$$\mathbb P\mathrm{Ext}^1_X(\phi^*N\otimes \mathcal I_Z, \mathcal O_X((2d_V-\alpha)C_0+(2r_V-\beta)F+(2D-G))\otimes\phi^*M).$$
Since $X^{[\ell]}$ is also birational to a projective bundle over $C^{[\ell]}$ the conclusion follows.
\endproof
\begin{rmk}
The dominating extension family is not necessarily unique, see Theorem \ref{thm:c1F=0} and Example \ref{ex:AnotherExtension} in the next section.
\end{rmk}
\begin{rmk}
A similar argument shows that for any surface $S$, any irreducible component $\mathcal M$ of a moduli space of stable rank-two vector bundles with fixed Chern classes is dominated by a locally closed subset of a projective bundle over $S^{[\ell]}\times\mathrm{Pic}^0(S)\times\mathrm{Pic}^0(S)$. Indeed, any bundle can be presented as an element of the countable family of extensions
\[
0\to M_0\otimes M\to V\to N_0\otimes N\otimes I_Z\to 0
\]
where $M_0$ and $N_0$ are fixed line bundles, $M,N\in \mathrm{Pic}^0(S)$ and $Z$ is a zero-dimensional subscheme. A general vector bundle will belong to a single given extension family, and the locally closed subvariety is obtained from the condition that $Z$ satisfies the Cayley--Bacharach property with respect to the corresponding adjoint bundle. However, a nice description of this locus in full generality seems out of reach.
\end{rmk}
To obtain a full birational description of the irreducible components $\mathcal M$, we need first to describe the general fibres of the map $\psi:E_\zeta(c_1,c_2)\to\mathcal M$. If $2d_V>\alpha$ then, from Remark \ref{rmk:invariants}, the map $\psi$ is birational. In particular, we obtain (see also \cite[Theorem A]{Costa-MiroRoig_NMJ02}):
\begin{thm}
\label{thm:structure2}
If $c_1\cdot F$ is odd then any nonempty irreducible component $\mathcal M$ of the moduli space $\mathcal M_H(c_1,c_2)$ is birational to a projective bundle over $C^{[\ell]}\times\mathrm{Pic}^0(C)\times\mathrm{Pic}^0(C)$ for a suitable positive integer $\ell$.
In particular, for $C=\mathbb P^1$ the moduli space is rational.
\end{thm}
\proof
Since $c_1\cdot F$ is odd, we necessarily have $2d_V>\alpha$.
\endproof
In the case $2d_V=\alpha$ and $C=\mathbb P^1$, we can prove the following
\begin{thm}
\label{thm:structure3}
If $C=\mathbb P^1$ and $c_1\cdot F$ is even then any nonempty irreducible component $\mathcal M$ of the moduli space $\mathcal M_H(c_1,c_2)$ is stably rational.
\end{thm}
\proof
We use the notation from the proof of Theorem \ref{thm:structure} and the previous section. If $2d_V>\alpha$ then we can apply the discussion above to conclude that $\mathcal M$ is rational.
We are hence in the situation where $2d_V=\alpha$. The fibre of $\psi$ over $V$ in this case is the open subset of $\mathbb P H^0(V(-d_VC_0-rF-D))$ corresponding to sections that vanish along a zero-dimensional subscheme only. Indeed, any section of $H^0(V(-d_VC_0-rF-D))$ vanishing along a zero-dimensional subscheme gives a presentation of $V$ as an extension in $E_\zeta(c_1,c_2)$ and conversely, any presentation corresponds to a section.
Hence $E_\zeta(c_1,c_2)$ is birationally a projective bundle over $\mathcal M$. On the other hand, $E_\zeta(c_1,c_2)$ is rational, which implies that $\mathcal M$ must be stably rational.
\endproof
\section{Computation of the extension spaces}
\label{sec:extensions}
In the previous section we have proved that general elements in a given irreducible component of a moduli space of stable rank-two bundles are presented as extensions in some particular extension space. If $c_1\cdot F=1$, this extension space is unique; however, in the case $c_1\cdot F=0$, it is no longer unique, see Remark \ref{rmk:invariants}.
One can raise some very natural questions related to this situation. Can one effectively determine this extension space in the case $c_1\cdot F=1$? If $c_1\cdot F=0$, what is the most natural extension space that can cover (an irreducible component of) a moduli space? We answer these questions in the case when $X$ is the blowup of the minimal surface $S$ at general points, i.e. when there are no infinitely near blown-up points.
So, throughout this section, $X\to S$ will be the blowup of $S$ at $m$ general points $p_1,\ldots, p_m$. Since we are dealing with rank-two stable vector bundles $V$ on $X$, we can assume that $c_1(V)= \alpha C_0+ \beta F + \sum_{i=1}^{m} \gamma_i E_i$ with $\alpha, \beta, \gamma_i \in \{0,1 \}$.
Given any smooth surface $Y$, an ample divisor $H$ on $Y$ and $\sigma: \tilde{Y} \to Y$ the blowup of $Y$ at one point, for any $n \gg 0$, the divisor
\[ H_n:=n \sigma^*H-E_1 \]
is ample. Moreover, by \cite[Theorem 1]{Nakashima_JMKYAZ}, for $n \gg 0$, there exists an open immersion
\[
{\mathcal M_{Y,H}(c_1,c_2)} \subset {\mathcal M_{\tilde{Y},H_n}(\sigma^*c_1,c_2)}
\]
between smooth irreducible moduli spaces of the same dimension. Therefore, while describing the moduli space ${\mathcal M_{L}(c_1,c_2)}$ of $L$-stable rank two vector bundles on $X$, we can assume without loss of generality that
\[ c_1= \alpha C_0+ \beta F + \sum_{i=1}^{\rho} E_i \]
with $\alpha, \beta \in \{0,1 \}$ and $\rho=m$.
From now on, and until the end of the paper, we will work under this assumption and we will analyze separately the cases $\alpha=1$ and $\alpha=0$.
\subsection{The case $c_1\cdot F=0$} \
\vspace{3mm}
\noindent
Let $c_1=\eta F+\sum_{i=1}^m E_i$ with $\eta\in\{0,1\}$ and $c_2=2n+\varepsilon \gg 0$ with $\varepsilon\in\{0,1\}$. Let $L$ be a $(c_1,c_2)$--suitable ample divisor, i.e. $L$ belongs to a chamber of type $(c_1,c_2)$ whose closure contains the ray spanned by $F$, \cite[Definition 1, p. 142]{Friedman}.
In order to find the most natural extension space that dominates ${\mathcal M_L(c_1,c_2)}$, we first need to identify the invariants $d$ and $r$ of general bundles.
\begin{lem}
\label{lem:dV=0}
For any $V$ in ${\mathcal M_L(c_1,c_2)}$ we have $d_V=0$.
\end{lem}
\proof
The result is proved in its most general setting in \cite[Theorem 5]{Friedman}. For the reader's convenience, we present here the proof in our case.
Suppose $d=d_V>0$ for some $V$ and put $r=r_V$. Then $V$ lies in an extension space $E_\zeta(c_1,c_2)$ with $\zeta=2dC_0+(2r-\eta)F+(2D-\sum_{i=1}^m E_i)$. Note that $\zeta\cdot F=2d>0$ and $\zeta\cdot L<0$, by the $L$-stability of $V$.
We claim that $c_1^2-4c_2\le \zeta^2< 0$. In this case, $\zeta$ would define a wall separating $F$ and $L$, contradicting the $(c_1,c_2)$--suitability of~$L$.
The inequality $c_2+(\zeta^2-c_1^2)/4\ge 0$ is automatic and follows from (\ref{eqn:l(Z)}). To verify $\zeta^2<0$, we consider the numerical class $\xi:=(L\cdot F)\zeta-(L\cdot \zeta)F$. It is orthogonal to $L$ and hence, by the Hodge index Theorem it follows that $\xi^2\le 0$ with equality if and only if $\xi=0$. We compute $\xi^2=(L\cdot F)^2(\zeta^2)-2(L\cdot F)(\zeta\cdot L)(\zeta\cdot F)\le 0$. Since $L\cdot F>0$, $\zeta\cdot L<0$ and $\zeta\cdot F>0$, it follows that $\zeta^2<0$, and the claim is proved.
\endproof
\begin{thm}
\label{thm:c1F=0}
A general vector bundle $V$ in any irreducible component of ${\mathcal M_L(c_1,c_2)}$ lies in an extension of type
\begin{equation}
\label{eqn:c1F=0}
0\to\mathcal O_X(r_0F)\otimes\phi^*M\to V\to \mathcal I_Z((\eta-r_0)F+\sum_{i=1}^m E_i)\otimes \phi^*N\to 0
\end{equation}
with $h^0(V(-r_0F)\otimes \phi^*M^{-1})=1$ and $r_V=r_0:=\left\lceil\frac{\eta-c_2-g}{2}\right\rceil=\left\lceil\frac{\eta-2n-\varepsilon-g}{2}\right\rceil. $
\end{thm}
\proof
By semicontinuity, for a general vector bundle $V$ in an irreducible component of ${\mathcal M_L(c_1,c_2)}$, we have $r_V=r_1$ for some fixed integer $r_1$ and (using an argument as in the proof of Theorem \ref{thm:structure}) $V$ sits in an exact sequence
\begin{equation}
\label{eqn:r1}
0\to\mathcal O_X(r_1F+\sum_{i=1}^m \ell_iE_i)\otimes\phi^*M\to V\to\mathcal I_Z((\eta-r_1)F+\sum_{i=1}^m (1-\ell_i)E_i)\otimes\phi^*N\to 0
\end{equation}
where $\ell_i\ge 0$ and $Z$ is a zero-dimensional subscheme of length $\ell(Z)=2n+\varepsilon+\sum_{i=1}^{m} \ell_i(1-\ell_i)$.
We prove that
\[
r_1=r_0,\ \ell_i=0\mbox{ for all }i\mbox{ and }h^0(V(-r_0F)\otimes \phi^*M^{-1})=1.
\]
To this end, we compute the dimension of the family $\mathcal F$ of vector bundles given by extensions of type (\ref{eqn:r1}). Note that
\[
\mathrm{dim}(\mathcal F)=\mathrm{ext}^1_X+2\mathrm{dim}(\mathrm{Pic}^0(C))+2\ell(Z)-h^0(V(-r_1F-\sum_{i=1}^m\ell_iE_i)\otimes\phi^*M^{-1})
\]
where
\[
\mathrm{ext}^1_X=\mathrm{dim}\ \mathrm{Ext}^1_X(\mathcal I_Z((\eta-r_1)F+\sum_{i=1}^m (1-\ell_i)E_i)\otimes\phi^*N,\mathcal O_X(r_1F+\sum_{i=1}^m \ell_iE_i)\otimes\phi^*M)
\]
\[
=h^1(\mathcal I_Z((\eta-2r_1)F+\sum_{i=1}^m(1-2\ell_i)E_i)\otimes \phi^*(M^{-1}\otimes N)\otimes K_X)
\]
for a general choice of $Z$, $M$ and $N$.
Note that
\[
h^0(\mathcal I_Z((\eta-2r_1)F+\sum_{i=1}^m(1-2\ell_i)E_i)\otimes \phi^*(M^{-1}\otimes N)\otimes K_X)=0,
\]
as the coefficient of $C_0$ in the expression of the corresponding line bundle equals $-2$, and
\[
h^2(\mathcal I_Z((\eta-2r_1)F+\sum_{i=1}^m(1-2\ell_i)E_i)\otimes \phi^*(M^{-1}\otimes N)\otimes K_X)
\]
\[
=h^2(\mathcal O_X((\eta-2r_1)F+\sum_{i=1}^m(1-2\ell_i)E_i)\otimes \phi^*(M^{-1}\otimes N)\otimes K_X)
\]
\[
=h^0(\mathcal O_X((2r_1-\eta)F+\sum_{i=1}^m(2\ell_i-1)E_i)\otimes \phi^*(M\otimes N^{-1})).
\]
We claim that the latter $h^0$ vanishes. Indeed, if it were nonzero, then we would necessarily have
\[
2r_1-\eta\ge \#\{E_i|\ \ell_i=0\}
\]
since $F$ is numerically equivalent to each $E_i+\widetilde{F_i}$. Equivalently
\[
r_1\ge (\eta-r_1)+\#\{E_i|\ \ell_i=0\}
\]
which implies
\[
r_1F\cdot L\ge (\eta-r_1)F\cdot L+\sum(1-\ell_i)F\cdot L\ge
(\eta-r_1)F\cdot L+\sum(1-\ell_i)E_i\cdot L,
\]
taking into account that $F\cdot L\ge E_i\cdot L$.
This shows that the sequence (\ref{eqn:r1}) destabilises $V$ which is a contradiction.
In particular, it follows that
\[
\mathrm{ext}^1_X=-\chi(X,\mathcal I_Z((\eta-2r_1)F+\sum_{i=1}^m(1-2\ell_i)E_i)\otimes \phi^*(M^{-1}\otimes N)\otimes K_X)
\]
\[
=-\chi(X,\mathcal O_X((2r_1-\eta)F+\sum_{i=1}^m(2\ell_i-1)E_i)\otimes\phi^*(M\otimes N^{-1}))+\ell(Z).
\]
We compute from the Riemann-Roch Theorem
\[
\chi(X,\mathcal O_X((2r_1-\eta)F+\sum_{i=1}^m(2\ell_i-1)E_i)\otimes\phi^*(M\otimes N^{-1}))=1-g+2r_1-\eta-\sum_{i=1}^m (1-2\ell_i)(1-\ell_i).
\]
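Explicitly, writing $\Delta:=(2r_1-\eta)F+\sum_{i=1}^m(2\ell_i-1)E_i$ and using $F^2=F\cdot E_i=0$, $E_i^2=-1$, $K_X\cdot F=-2$, $K_X\cdot E_i=-1$ and $\chi(X,\mathcal O_X)=1-g$ (the degree-zero twist $\phi^*(M\otimes N^{-1})$ does not change the Euler characteristic), this reads
\[
1-g+\frac{\Delta^2-\Delta\cdot K_X}{2}=1-g+(2r_1-\eta)+\sum_{i=1}^m\frac{(2\ell_i-1)-(2\ell_i-1)^2}{2}
=1-g+2r_1-\eta-\sum_{i=1}^m (1-2\ell_i)(1-\ell_i).
\]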
Hence,
\[
\mathrm{dim}\ \mathcal F=-2r_1+\eta+3g-1+\sum_{i=1}^m(1-\ell_i^2)+3(2n+\varepsilon)-h^0(V(-r_1F-\sum_{i=1}^m \ell_iE_i)\otimes\phi^*M^{-1})
\]
\[
=-2r_1+(\eta+3g-1)+(m -\sum_{i=1}^m\ell_i^2)+3(2n+\varepsilon)-h^0(V(-r_1F-\sum_{i=1}^m \ell_iE_i)\otimes\phi^*M^{-1}).
\]
Since
\[
-2r_1\le -1-\eta+c_2+g,\ m -\sum_{i=1}^m\ell_i^2\le m\mbox{ and }
h^0(V(-r_1F-\sum_{i=1}^m \ell_iE_i)\otimes\phi^*M^{-1})\ge 1
\]
it follows that
\[
\mathrm{dim}\ \mathcal F\le (-1-\eta+c_2+g)+(\eta+3g-1)+m+3(2n+\varepsilon)-1=4(2n+\varepsilon)+4g-3+m.
\]
Moreover, if either $-2r_1<-1-\eta+c_2+g$ or there exists $i$ such that $\ell_i\ne 0$ or $h^0(V(-r_1F-\sum_{i=1}^m \ell_iE_i)\otimes\phi^*M^{-1})\ge 2$ then
\[
\mathrm{dim}\ \mathcal F<(-1-\eta+c_2+g)+(\eta+3g-1)+m+3(2n+\varepsilon)-1=4(2n+\varepsilon)+4g-3+m.
\]
On the other hand, the expected dimension of ${\mathcal M_L(c_1,c_2)}$ is precisely $4(2n+\varepsilon)+4g-3+m$, and this shows that if either $-2r_1<-1-\eta+c_2+g$ or there exists $i$ such that $\ell_i\ne 0$ or $h^0(V(-r_1F-\sum_{i=1}^m \ell_iE_i)\otimes\phi^*M^{-1})\ge 2$, then the family $\mathcal F$ cannot dominate a component of the moduli space. In particular, $\ell_i=0$ for all $i$, $h^0(V(-r_1F-\sum_{i=1}^m \ell_iE_i)\otimes\phi^*M^{-1})=1$ and $2r_1\le 1+\eta-c_2-g$. Since $2r_1\ge \eta-c_2-g$ from Proposition \ref{bound}, it follows that $r_1=r_0$.
\endproof
\begin{cor}
\label{cor:c1F=0}
Let $c_1=\eta F+\sum_{i=1}^m E_i$ with $\eta\in\{0,1\}$ and $c_2=2n+\varepsilon \gg 0$ with $\varepsilon\in\{0,1\}$ and let $L$ be a $(c_1,c_2)$--suitable ample divisor. Then,
the moduli space ${\mathcal M_L(c_1,c_2)}$ is smooth and irreducible and there exists a generically finite rational map from a projective bundle over $\mathrm{Pic}^0(C)\times\mathrm{Pic}^0(C)\times C^{[2n+\varepsilon]}$ to ${\mathcal M_L(c_1,c_2)}$.
\end{cor}
\proof
We apply the previous Theorem. The generic finiteness follows from the dimension computation of the family of extensions.
\endproof
\begin{ex}
\label{ex:AnotherExtension}
In this example we show that the dominating families of moduli spaces are not necessarily unique.
Let $S$ be a geometrically ruled surface of invariant $e >0$ over $\PP^1$, $n \gg 0$ an integer and $c_1=F$, $c_2=2n$.
Fix an ample $(c_1,c_2)$--suitable divisor $L=C_0+mF$ on $S$, with $m\gg 0$. By Theorem \ref{thm:c1F=0} a general vector bundle $V$ in $\mathcal M_L(F,2n)$
lies in an extension of type
\begin{equation}
0\to\mathcal O_S(-(n-1)F)\to V\to \mathcal I_Z(nF) \to 0
\end{equation}
where $Z$ is a $0$-dimensional subscheme of length $2n$. Let us construct another extension family dominating $\mathcal M_L(F,2n)$. To this end,
we consider the irreducible family ${\mathcal F_n}$ of rank-two vector
bundles $V$ on $S$ given by a nontrivial extension
\begin{equation}
\label{rk2se5}
0 \rightarrow {\mathcal O}_{S}(-D) \rightarrow V \rightarrow
\mathcal I_{Z}(D+F) \rightarrow 0
\end{equation}
where $D=nF$ and $Z$ is a sufficiently general 0-dimensional subscheme of length $2n$.
\vspace{3mm}
First of all, notice that $h^0(V(D))=3$. Indeed, this follows from the cohomology sequence associated to the
exact sequence (\ref{rk2se5}), together with the fact that, by the generality of $Z$, $h^0 (\mathcal I_{Z}(2D+F))=2$. Now, we are going to compute the dimension of ${\mathcal F_n}$.
By definition we have
\[ \begin{array}{ll}
\mathrm{dim}\ {\mathcal F_n} & = \# \mathrm{moduli}(Z)+ \mathrm{ext}^{1}(\mathcal I_{Z}(D+F),
{\mathcal O}_{S}(-D)) -h^{0}(V(D)) \\
& = 2\ell(Z) + \mathrm{ext}^{1}(\mathcal I_{Z},
\mathcal O_{S}(-2D-F))
-h^{0}(V(D)). \\
\end{array} \]
\vspace{2mm}
By Serre duality and the Riemann-Roch Theorem we get:
\[ \mathrm{ext}^{1}_S(\mathcal I_{Z},\mathcal O_{S}(-2D-F))= - \chi (S,{\mathcal O}_S(-2D-F)) + \ell(Z) = 4n . \]
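Indeed, since $D=nF$, $F^2=0$, $F\cdot K_S=-2$ and $\chi(S,\mathcal O_S)=1$ (the base curve being $\PP^1$), the Riemann-Roch Theorem gives
\[
\chi(S,\mathcal O_S(-2D-F))=1+\frac{(2n+1)^2F^2+(2n+1)F\cdot K_S}{2}=1-(2n+1)=-2n,
\]
while $\ell(Z)=2n$.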
Therefore,
\[ \mathrm{dim}\ {\mathcal F_n}=2(2n)+4n-3=4(2n)-3. \]
\vspace{2mm}
It is easy to check that for any $V \in \mathcal F_n$, $c_1(V)=F$ and $c_2(V)=2n$.
Let us see that $V$ is $L$-stable; i.e.,
for any rank $1$ subbundle ${\mathcal O}_{S}(A)$ of $V$ we have
$c_1({\mathcal O}_{S}(A))\cdot L<\frac{1}{2}$ or, equivalently,
\[c_1({\mathcal O}_{S}(A))\cdot L<\frac{c_1(V)\cdot L}{2}.\]
\vspace{2mm}
\noindent
Indeed, since $V$ sits in an extension of type (\ref{rk2se5}) we have
\[ \begin{array}{l}
(1) \quad {\mathcal O}_{S}(A) \hookrightarrow {\mathcal O}_{S}(-nF) \quad \mbox{or} \\
(2) \quad {\mathcal O}_{S}(A) \hookrightarrow
\mathcal I_{Z}((n+1)F). \end{array} \]
\vspace{2mm}
In the first case, $-A-nF$
is an effective divisor.
Since $L$ is an ample divisor we have $(-A-nF)\cdot L \geq 0$ and
\[c_1({\mathcal O}_{S}(A))\cdot L=A\cdot L \leq -nF\cdot L =-n <\frac{1}{2}= \frac{c_1(V)\cdot L}{2}.\]
\vspace{2mm}
If ${\mathcal O}_{S}(A) \hookrightarrow {\mathcal O}_{S}((n+1)F)\otimes \mathcal I_{Z}$ then $(n+1)F-A$ is an effective
divisor. On the other hand, using the generality of $Z$ we have
\[ H^0({\mathcal O}_{S}(A+(n-2)F) ) \subset H^0 (\mathcal I_Z((2n-1)F))=0. \]
So $A+(n-2)F$ is not an effective
divisor and writing $A=\alpha C_0 + \beta F $, we have either $ \beta
+n-2<0$ or $\alpha <0$.
\newline
Assume that $ \beta +n-2<0$; in particular $ \beta <0$.
Since $(n+1)F-A$ is effective, it follows that
$\alpha \leq 0$ and, using $m\gg e$, we have
\[ \begin{array}{ll}
c_1({\mathcal O_{S}}(A))\cdot L=A\cdot L & =- \alpha e + \alpha m+ \beta \\
& =\alpha(m-e)+\beta \\
& <\frac{1}{2}=\frac{c_1(V)\cdot L}{2}. \end{array} \]
Assume that $\alpha <0$ and $ \beta +n -2 \geq 0$. Using again the fact that
$(n+1)F-A$ is an effective divisor and hence $\beta \leq n+1$, we obtain
\[ \begin{array}{ll}
c_1({\mathcal O}_{S}(A))\cdot L=A\cdot L & = -\alpha e +\alpha m+ \beta \\ & \leq
-\alpha e +\alpha m+ n+1 \\ & =
\alpha(m-e)+n+1 \\ & <\frac{1}{2}=
\frac{c_1(V)L}{2}, \end{array} \]
as $m\gg n+\frac{3}{2}+e$,
which proves the $L$-stability of $V$. In conclusion, we have
a dominant morphism
\[ {\mathcal F_n} \longrightarrow \mathcal M_L(F,2n). \]
\end{ex}
\subsection{The case $c_1\cdot F=1$} \
\vspace{3mm}
\noindent
Let $c_1=C_0+\beta F+\sum_{i=1}^m E_i$ and let $L$ be a fixed ample divisor.
The most natural extension space that dominates ${\mathcal M_L(c_1,c_2)}$, for large $c_2$ is given by the following result:
\begin{thm}
\label{thm:c1F=1}
Let $L$ be an ample divisor with $L\cdot (K_X+F)<0$. Then, for $c_2\gg 0$, a general vector bundle $V$ in ${\mathcal M_L(c_1,c_2)}$ lies in an extension of type
\begin{equation}
\label{eqn:c1F=1}
0\to \mathcal O_X(C_0-(c_2-\beta)F)\otimes \phi^*M\to V\to\mathcal O_X(c_2F+\sum_{i=1}^{m} E_i)\otimes \phi^*N\to 0
\end{equation}
with $M,N\in\mathrm{Pic}^0(C)$. In particular, $d_V=1$ and $r_V=\beta-c_2$.
\end{thm}
\proof
The proof proceeds in several steps.
\medskip
{\em Step 1.}
We show that the dimension of the irreducible family $\mathcal F$ of non--trivial extensions of type (\ref{eqn:c1F=1}) equals the dimension of ${\mathcal M_L(c_1,c_2)}$.
Observe that $h^0(\mathcal O_X(-C_0+(2c_2-\beta)F+\sum_{i=1}^{m} E_i)\otimes \phi^*(N\otimes M^{-1}))=0$, which implies $h^0(V(-C_0+(c_2-\beta)F)\otimes\phi^*M^{-1})=1$ for any extension. Moreover
\[
\mathrm{ext}^1_X(\mathcal O_X(c_2F+\sum_{i=1}^{m} E_i)\otimes \phi^*N,\mathcal O_X(C_0-(c_2-\beta)F)\otimes\phi^*M)
\]
\[
=h^1(X,\mathcal O_X(C_0-(2c_2-\beta)F-\sum_{i=1}^{m} E_i)\otimes\phi^*(M\otimes N^{-1})).
\]
Note that for $c_2 \gg 0$ we have
\[
h^0(X,\mathcal O_X(C_0-(2c_2-\beta)F-\sum_{i=1}^{m} E_i)\otimes\phi^*(M\otimes N^{-1}))=0
\]
and, by duality
\[
h^2(X,\mathcal O_X(C_0-(2c_2-\beta)F-\sum_{i=1}^{m} E_i)\otimes\phi^*(M\otimes N^{-1}))=0
\]
and hence
\[
\mathrm{ext}^1_X(\mathcal O_X(c_2F+\sum_{i=1}^{m} E_i)\otimes \phi^*N,\mathcal O_X(C_0-(c_2-\beta)F)\otimes\phi^*M)
\]
\[
=-\chi(X,\mathcal O_X(C_0-(2c_2-\beta)F-\sum_{i=1}^{m} E_i)\otimes\phi^*(M\otimes N^{-1})).
\]
From the Riemann-Roch Theorem we have
\[
-\chi(X,\mathcal O_X(C_0-(2c_2-\beta)F-\sum_{i=1}^{m} E_i)\otimes\phi^*(M\otimes N^{-1}))
\]
\[
=4c_2-2\beta+\rho+2g+e-2.
\]
It follows that
\[
\mathrm{dim}\ \mathcal F=\mathrm{ext}^1_X(\mathcal O_X(c_2F+\sum_{i=1}^{m} E_i)\otimes \phi^*N,\mathcal O_X(C_0-(c_2-\beta)F)\otimes \phi^*M)+2\mathrm{dim}(\mathrm{Pic}^0(C))-1
\]
\[
=4c_2-2\beta+\rho+4g-3+e.
\]
Note that the dimension of ${\mathcal M_L(c_1,c_2)}$ equals the expected dimension
\[
4c_2-c_1^2-3\chi(X,\mathcal O_X)+g=4c_2+e-2\beta+\rho-3+4g
\]
i.e. it equals the dimension of $\mathcal F$.
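Explicitly, using $C_0^2=-e$, $C_0\cdot F=1$, $F^2=F\cdot E_i=C_0\cdot E_i=0$ and $E_i^2=-1$,
\[
c_1^2=\Big(C_0+\beta F+\sum_{i=1}^m E_i\Big)^2=-e+2\beta-\rho,\qquad \chi(X,\mathcal O_X)=1-g,
\]
which yields the right-hand side above.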
\medskip
{\em Step 2.}
We prove that any bundle $V$ in an extension (\ref{eqn:c1F=1}) is simple. To this end, we consider the exact sequence
\[
0\to \mathrm{Hom}(\mathcal O_X(c_2F+\sum_{i=1}^{m} E_i)\otimes\phi^*N,V)\to \mathrm{Hom}(V,V)\to \mathrm{Hom}(\mathcal O_X(C_0-(c_2-\beta)F)\otimes\phi^*M,V)
\]
The simplicity of $V$ follows from the following two facts: $h^0(V(-C_0+(c_2-\beta)F)\otimes\phi^*M^{-1})=1$, which we have already used, and $h^0(V(-c_2F-\sum_{i=1}^{m} E_i)\otimes\phi^*N^{-1})=0$. The latter vanishing is a direct consequence of the nontriviality of the extension (\ref{eqn:c1F=1}) and of the vanishing of $h^0(X,\mathcal O_X(C_0-(2c_2-\beta)F-\sum_{i=1}^{m} E_i)\otimes\phi^*(M\otimes N^{-1}))$.
\medskip
{\em Step 3.}
We prove that any bundle $V$ in an extension (\ref{eqn:c1F=1}) is prioritary. To this end, consider the exact sequence
\[
\mathrm{Ext}^2_X(\mathcal O_X(c_2F+\sum_{i=1}^{m} E_i)\otimes\phi^*N,V(-F))\to \mathrm{Ext}^2_X(V,V(-F))\to
\]
\[
\to\mathrm{Ext}^2_X(\mathcal O_X(C_0-(c_2-\beta)F)\otimes \phi^*M,V(-F))
\]
and we easily check that
\[
h^2(V(-(c_2+1)F-\sum_{i=1}^{m} E_i)\otimes\phi^*N^{-1})=h^2(V(-C_0+(c_2-\beta-1)F)\otimes\phi^*M^{-1})=0.
\]
\medskip
From the previous steps we obtain a natural map from the extension family $\mathcal F$ to the moduli space $\mathcal Spl(c_1,c_2)$ of simple prioritary bundles with Chern classes $c_1$ and $c_2$. This map is injective. Indeed, if a bundle $V$ is presented as two extensions
\[
0\to \mathcal O_X(C_0-(c_2-\beta)F)\otimes \phi^*M\stackrel{f}{\to} V\stackrel{g}{\to}\mathcal O_X(c_2F+\sum_{i=1}^{m} E_i)\otimes \phi^*N\to 0
\]
and
\[
0\to \mathcal O_X(C_0-(c_2-\beta)F)\otimes \phi^*M'\stackrel{f'}{\to} V\stackrel{g'}{\to}\mathcal O_X(c_2F+\sum_{i=1}^{m} E_i)\otimes \phi^*N'\to 0
\]
since $g\circ f'$ and $g'\circ f$ are necessarily zero, we obtain an isomorphism between $\mathcal O_X(C_0-(c_2-\beta)F)\otimes \phi^*M$ and $\mathcal O_X(C_0-(c_2-\beta)F)\otimes \phi^*M'$. The injectivity then follows from the fact that $h^0(V(-C_0+(c_2-\beta)F)\otimes\phi^*M^{-1})=1$. On the other hand, since $L\cdot(K_X+F)<0$ the moduli space $\mathcal Spl(c_1,c_2)$ is irreducible (Theorem \ref{spl}) and contains ${\mathcal M_L(c_1,c_2)}$ as an open subscheme. In particular, since $\mathrm{dim}\ \mathcal F=\mathrm{dim}(\mathcal Spl(c_1,c_2))$, it follows that ${\mathcal M_L(c_1,c_2)}$ and the image of $\mathcal F$ in $\mathcal Spl(c_1,c_2)$ contain a common non-empty open subset.
\endproof
As a consequence of this analysis we obtain the following refinement of Theorem \ref{thm:structure}:
\begin{cor}
\label{cor:c1F=1}
For any ample divisor $L$ on $X$ with $L\cdot(K_X+F)<0$, $c_1=C_0+\beta F+\sum_{i=1}^m E_i$ and $c_2 \gg 0$, the moduli space $\mathcal M_{L}(c_1,c_2)$ is irreducible and birational to a projective bundle over $\mathrm{Pic}^0(C)\times \mathrm{Pic}^0(C)$.
\end{cor}
\subsection{Architecture Overview}
\begin{figure}
\begin{center}
\includegraphics[trim = 55mm 45mm 0mm 29mm, clip=true, scale=0.4]{Thread.pdf}
\caption{DTranx architecture.}
\label{fig:seda}
\end{center}
\vspace{-5mm}
\end{figure}
As shown in Figure~\ref{fig:seda}, DTranx follows the SEDA~\cite{seda} design and introduces
three categories of stages: \embf{Service}, \embf{Internal}, and \embf{Daemon}. \embf{Service}
stages handle Remote Procedure Calls (RPCs). For example, \textit{ClientService} accepts transaction
commit requests from clients and \textit{TranxService} processes 2PC requests from peer servers.
\embf{Internal} stages manage local shared resources. For example, \textit{LockService} maintains
locking states and \textit{WAL} writes logs to persistent storage. \embf{Daemon} stages
run background tasks. For example, \textit{GC} periodically reclaims logs and \textit{TranxAck}
sends commit results from coordinators to participants.
To further exploit concurrency in SEDA, DTranx adjusts each of the stage components.
First, DTranx removes the dynamic control of the thread pools and instead statically assigns
a thread count to each stage, after which threads are bound to physical CPU cores.
We found that one thread per stage, where context switching is rare, yields better performance
than multiple threads per stage, where context switching happens frequently.
However, dynamic control of the thread pools is retained in certain stages whose handlers might
block. For example, \textit{Storage} launches multiple threads to handle I/O requests that
involve blocking system calls. Besides the core bindings for DTranx threads,
kernel Interrupt Request (IRQ) threads are bound to CPU cores as well, since I/O throughput
is severely affected otherwise.
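As a rough illustration only (the paper gives no code, and the helper name below is ours, not a DTranx API), the static one-thread-per-stage assignment with core pinning can be sketched in Python, which exposes Linux's sched_setaffinity:

```python
import os
import threading

def run_stage(stage_fn, core_id):
    """Run one stage handler on its own dedicated thread, pinned to a CPU
    core where the platform allows it (illustrative sketch; DTranx assigns
    threads and cores statically at startup rather than per call)."""
    def worker():
        if hasattr(os, "sched_setaffinity"):        # Linux-only API
            try:
                os.sched_setaffinity(0, {core_id})  # 0 = the calling thread
            except OSError:
                pass  # requested core absent on this machine; run unpinned
        stage_fn()
    t = threading.Thread(target=worker, daemon=True)
    t.start()
    return t
```

Pinning kernel IRQ threads, by contrast, happens outside the process, e.g. via the IRQ affinity interface under /proc/irq on Linux.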
Second, DTranx adopts lock-free queues as the incoming event queues, so that enqueue/dequeue
operations are nonblocking and high throughput is achieved without compromising consistency.
The lock-free queues utilize atomic primitives to reserve a spot and then proceed to read/write in
non-critical sections. In addition, multiple sub-queues are created inside each lock-free queue to spread the load.
Third, DTranx reduces queue element construction and destruction costs by pushing element pointers,
instead of the elements themselves, into the lock-free queues and by allocating an element pool to store destructed
elements. For example, \embf{Service} stages get elements from the pool when new requests arrive and
\embf{Internal} stages return elements to the pool when requests are completed.
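The pointer-passing queues and the element pool can be mimicked in a few lines of Python (a sketch with hypothetical names; a production implementation would use a lock-free MPMC queue built on compare-and-swap reservations, which collections.deque only approximates through its thread-safe appends):

```python
from collections import deque

class Event:
    """Reusable queue element; only a reference to it is ever enqueued."""
    def __init__(self):
        self.payload = None

class ElementPool:
    """Pool of destructed elements so stages avoid repeated (de)allocation."""
    def __init__(self, size):
        self._free = deque(Event() for _ in range(size))

    def get(self):
        # Service stages take an element here when a new request arrives.
        return self._free.popleft() if self._free else Event()

    def put(self, ev):
        # Internal stages reset and return the element once the request is done.
        ev.payload = None
        self._free.append(ev)

class StageQueue:
    """Incoming event queue of a stage, split into sub-queues to spread load."""
    def __init__(self, nqueues=4):
        self._queues = [deque() for _ in range(nqueues)]

    def push(self, ev, key):
        self._queues[key % len(self._queues)].append(ev)  # a pointer, not a copy

    def pop(self, key):
        q = self._queues[key % len(self._queues)]
        return q.popleft() if q else None
```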
\subsection{Serializability}
\begin{figure}
\begin{center}
\includegraphics[trim = 0mm 0mm 0mm 0mm, clip=true, scale=0.38]{CommitProtocol.pdf}
\vspace{+2mm}
\caption{Commit Protocol.}
\label{fig:commit_protocol}
\end{center}
\vspace{-8mm}
\end{figure}
DTranx combines OCC and 2PC protocols to guarantee serializability, following
Alexander's~\cite{docc2} hybrid OCC scheme that embeds lock acquisition
and validation in the 2PC. The main benefit compared to distributed Two-Phase
Locking (2PL) is that locks are only held during commit time, and DTranx
employs parallel validation for better scalability. The detailed protocol
flow is shown in Figure~\ref{fig:commit_protocol}. Additionally, if all data items in a transaction
are stored on the same server, 2PC is converted to One-Phase Commit (1PC) to reduce latencies.
Initially (Stage 1), transactions read data without locks
and clients keep track of the read items, read item versions, and write items in the local buffer.
At commit time (Stage 2), clients choose a server
as the coordinator and send it the transactions.
During the first phase, coordinators initiate
2PC by first sending prepare messages to participants.
Then, participants lock both the read
and write items, check the read item versions, write WAL logs
and reply to the coordinator. During the second phase, the coordinator waits for
responses from all participants, then decides whether to commit or abort,
and notifies all participants of the agreed result. However, if any participant aborts in the first phase,
the coordinator immediately sends out abort messages without waiting for all replies. Finally,
participants write WAL logs, update database states, and unlock all relevant data.
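The participant's work during the prepare phase — lock the read and write items, validate the read versions, and vote — can be sketched as follows. This is a simplified, hypothetical Python model, not the DTranx implementation: the `handle_prepare` name, the `store`/`locks` structures, and the string votes are our own, and WAL writing and messaging are omitted.

```python
def handle_prepare(store, locks, read_set, write_set):
    """Participant side of the prepare phase (sketch): lock the read and
    write items, validate read item versions, and vote Ready or Abort.
    `store` maps key -> (value, version); `locks` is the set of held keys."""
    keys = sorted(set(k for k, _ in read_set) | set(write_set))
    if any(k in locks for k in keys):
        return "Abort"                        # a lock was not granted
    locks.update(keys)
    for key, version_read in read_set:
        if store[key][1] != version_read:     # read item changed since read
            locks.difference_update(keys)     # release everything on abort
            return "Abort"
    return "Ready"                            # the WAL write would happen here

store = {"a": ("v0", 3), "b": ("v0", 7)}
vote_ok = handle_prepare(store, set(), read_set=[("a", 3)], write_set=["b"])
vote_stale = handle_prepare(store, set(), read_set=[("a", 2)], write_set=["b"])
```

A matching version yields a Ready vote; a stale read version forces an Abort and releases the acquired locks.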
\textbf{Proof of Serializability}
\textbf{\textit{Assumption}}: Two-phase locking (2PL) ensures serializability;
see the proof in \cite{dbcomplete}.
\textbf{\textit{Method}}: We reduce the hybrid OCC to 2PL, using the action
abbreviations L~(Lock), C~(Check), U~(Unlock), R~(Read), W~(Write) and the object abbreviations r~(read items)
and w~(write items). Concatenated action and object symbols represent tasks; e.g., Lr means
``lock read items''. The symbol ``-'' binds two actions and enforces an ``execute
before'' local order, and $\rightarrow$ binds two tasks and enforces an
``execute before'' distributed order. Our transactions can thus be
represented as R $\rightarrow$ Lrw-Cr $\rightarrow$ W-Urw. The Cr action validates the
read items. If any read item has been changed after it was read, the transaction aborts,
releasing all locks. If not, our successful transaction is equivalent to Lr-R
$\rightarrow$ Lw-Cr $\rightarrow$ W-Urw, thus Lr-R $\rightarrow$ Lw
$\rightarrow$ W-Urw. In this way, all locking actions precede all unlocking
actions, which is 2PL. Unlocking after committing to the
database avoids cascading rollbacks. For successful transactions, the
serialization point is the moment when all the write locks are granted.
\textbf{Deadlock}
Common deadlock avoidance methods are timeout, wait-for
graph, ordered locking and timestamps with wait-die or
wait-wound mechanisms. SiloR~\cite{silor} avoids deadlocks by enforcing
a global order on the locking sequences, necessitating multiple round trips in distributed environments.
The wait-for graph introduces too much network traffic, and the timestamps
method requires a globally synchronized clock, which
would become a bottleneck or a single point
of failure. Our deadlock avoidance method aborts transactions immediately
if read locks are not granted and waits for a configurable
time period (e.g., 50\,ms) before aborting write lock requests. However, if
the data is currently exclusively locked, the write lock request is
aborted immediately. If a transaction is aborted because write
locks are not granted, DTranx retries the commit after an exponential timeout.
If a transaction is aborted because read locks are not granted, DTranx
restarts it immediately, since read lock denial indicates that
concurrent transactions are updating the same item and retrying
the commit would fail again.
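The lock-request policy described above can be condensed into a small decision function. This is a hypothetical sketch of the policy, not DTranx code; the `lock_decision` name, the state strings, and the default 50 ms limit (taken from the example in the text) are our own:

```python
def lock_decision(kind, state, waited_ms, wait_limit_ms=50):
    """Decide the outcome of a lock request (policy sketch).
    kind: 'read' or 'write'; state: 'free', 'shared', or 'exclusive';
    waited_ms: how long this write request has already waited."""
    if state == "free" or (kind == "read" and state == "shared"):
        return "grant"
    if kind == "read":
        return "abort-restart"   # a writer holds the item: restart at once
    if state == "exclusive":
        return "abort-backoff"   # exclusively locked: abort immediately,
                                 # retry the commit after exponential backoff
    if waited_ms < wait_limit_ms:
        return "wait"            # shared lock held: a writer may wait ~50 ms
    return "abort-backoff"       # timed out: retry after exponential backoff
```

Reads never wait: a denied read lock restarts the transaction immediately, while denied writes fall back to exponential-backoff retries.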
\textbf{Livelock} Write starvation rarely happens, since write lock requests are blocked
for only a short fixed period and an exponential backoff technique is adopted to reduce the probability of
lock conflicts when transactions are retried.
The same holds for read starvation, since the number of read items is usually much larger
than the number of write items.
\subsection{Persistent Memory}
\input{pmem}
\subsection{Environment Setup}
First, we evaluate DTranx with a database of 120 million key value items in a 36-node cluster.
Test data keys are generated as integers from 1 to 120 million
and values are 100 bytes of random characters.
Transactions are categorized into read and update transactions.
Read transactions contain only read items and update transactions contain one write item.
The total number of read/write items in a single transaction is uniformly distributed between 1 and 3.
\subsection{Transaction}
\begin{figure}[t]
\begin{center}
\includegraphics[trim = 0mm 0mm 10mm 4mm, clip=true, scale=0.46]{hyperdex-eps-converted-to.pdf}
\caption{Distributed transactions. The horizontal axis represents the
percentage of read transactions while the vertical axis shows the
throughput on the left and commit
success rates on the right.}
\label{fig:hyperdex}
\end{center}
\vspace{-5mm}
\end{figure}
In Figure \ref{fig:hyperdex}, DTranx is compared with Hyperdex Warp~\cite{warp} that supports
distributed transactions. Only successful transactions are counted in the throughput metric.
DTranx shows approximately 30\% higher throughput than Hyperdex, and its throughput degrades slowly
as the percentage of update transactions increases. Moreover, DTranx maintains high commit success rates.
For example, DTranx reaches
99.65\% success rate for 50\% read
workloads.
On the other hand, Hyperdex shows high throughput but the software is
unstable and periodically fails due to internal assertion errors, leading to
low success rates. For example, several servers crashed during the 95\% read workload,
resulting in a 58.96\% success rate at 275.72k ops/sec throughput. To remedy the crash effect,
we restarted the servers manually after each run.
There are three reasons why DTranx outperforms Hyperdex. First, DTranx
follows the highly concurrent SEDA architecture with lock free queues and stages are bound to physical
cores, utilizing all CPU power and avoiding context switching overhead. Second, DTranx
integrates the NVM based log that bypasses system calls like sync/fsync, reducing
log persistence latencies. Third, DTranx applies various optimization techniques, such as
a preallocated element pool, a batched Ack phase, and an optimal core-binding strategy.
Furthermore, strace~\cite{strace} reveals that Hyperdex does not synchronize data to physical
storage devices immediately after write log calls.
While the Hyperdex paper describes fault recovery by replication, that version of the software is not
publicly available.
Lastly, the average latency for DTranx is
below 2ms when the throughput is 50\% of the maximum and it increases to 10ms
when the throughput reaches the maximum.
\begin{figure}
\begin{center}
\includegraphics[trim = 0mm 0mm 10mm 7mm, clip=true, scale=0.48]{scalability-eps-converted-to.pdf}
\caption{DTranx Scalability. Four workloads are run, namely 50\%, 75\%, 95\%, and 100\% read workloads.}
\vspace{-5mm}
\label{fig:scalability}
\end{center}
\end{figure}
\subsection{Scalability}
In this experiment, scalability tests are run against clusters of 3, 9,
18, and 36 servers. Corresponding to the cluster sizes, 10, 30, 60, and 120 million keys are
inserted into DTranx. As shown in Figure~\ref{fig:scalability}, the throughput increases
linearly as more nodes are involved. For example, with pure read workloads,
throughput reaches 574.76k reqs/sec with 36 nodes.
In addition, workloads with various mixtures of read and update transactions are benchmarked.
Even with 50\% read and 50\% update workloads, the throughput is 60\% to 85\% of
that with pure read workloads. The high scalability of DTranx results from our efficient hybrid
commit protocol design that minimizes the critical section of distributed locking and reduces the 2PC
to 1PC whenever possible.
\subsection{Persistent Memory}
\begin{figure}
\begin{center}
\begin{subfigure}{0.23\textwidth}
\includegraphics[trim = 0mm 0mm 0mm 0mm, scale=0.38]{pmem_throughput-eps-converted-to.pdf}
\caption{throughput}
\label{fig:pmemthroughput}
\end{subfigure}
\begin{subfigure}{0.23\textwidth}
\includegraphics[trim = 0mm 0mm 0mm 0mm, scale=0.38]{pmem_space-eps-converted-to.pdf}
\caption{log space usage}
\label{fig:pmemspace}
\end{subfigure}
\caption{The left plot shows the instant throughput with and without GC for both NVM and SSD.
The right plot shows the space usage with and without GC for NVM. Specifically, gc\_nvm means the system with
GC enabled and NVM based log; nogc\_nvm is the system with NVM based log but GC disabled; gc\_ssd
is the system with GC enabled and SSD based log.}
\end{center}
\vspace{-4mm}
\end{figure}
Two experiments are conducted to validate the effectiveness and efficiency of the NVM based log.
Both experiments are run with 36 servers and 95\% read transactions.
In Figure \ref{fig:pmemthroughput}, the instant throughput is plotted with and without
GC. The GC process does not affect normal transactions
when WALs are GC'ed every 10 seconds, since
the reclamation of volatile states, which does affect normal transactions,
is delayed until servers are under light load.
The system with SSD log shows 19k ops/sec on average, which is 30 times slower than that with NVM log.
In Figure \ref{fig:pmemspace}, we measure the space over time with and without GC
to show the GC efficiency. The logs are NVM files of
100\,MB each, so the space usage changes in units of 100\,MB.
The GC mechanism successfully keeps log space usage low since DTranx reclaims the
transactions from WALs much faster than it writes them.
After 120 seconds, the tests complete and the log usage with GC
converges to 200\,MB (one file for the GCLog and one for the current WAL).
\subsection{Cache}
DTranx enables a client-side cache to avoid excessive network traffic.
The caching policy works as follows: (1) the data cache is updated if read or commit requests succeed;
(2) the data cache is invalidated if commit requests fail. In addition, DTranx servers piggyback the
updated data on the response to failed commit requests so that the clients can update the local
cache, and read requests in the retried transaction can be served from the local cache.
On the other hand, DTranx enables a server-side database cache to serve read
requests with lower latency and adopts the write-through strategy for durability.
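The client-side caching policy above can be sketched as follows. This is a hypothetical Python illustration of the stated rules — update on success, invalidate on failure, and absorb piggybacked values — with class and method names of our own:

```python
class ClientCache:
    """Client-side cache policy sketch: update on successful reads and
    commits, invalidate on failed commits, and absorb the fresh values
    a server piggybacks on a failed-commit reply."""

    def __init__(self):
        self.data = {}

    def on_read_ok(self, key, value):
        self.data[key] = value               # rule (1): update on success

    def on_commit(self, success, write_set, piggyback=None):
        if success:
            self.data.update(write_set)      # rule (1): cache committed values
        else:
            for key in write_set:
                self.data.pop(key, None)     # rule (2): invalidate stale entries
            self.data.update(piggyback or {})  # piggybacked fresh values

cache = ClientCache()
cache.on_read_ok("x", 1)
cache.on_commit(False, {"x": 2}, piggyback={"x": 5})
```

After the failed commit, the stale entry has been replaced by the server-sent value, so the retried transaction can read it locally.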
\subsection{Exactly-Once RPC}
There are three different RPC calls corresponding
to the three stages in Figure~\ref{fig:seda}:
Read requests from clients to servers, Commit requests from
clients to servers, and Transaction requests from servers
to servers.
Duplicate processing of RPCs would lead to system failures in certain cases. For example,
if DTranx servers process a duplicate prepare request after the corresponding
commit request is done, the locking service would lock the data items and future transactions
would not be able to update these data items. Therefore, DTranx should guarantee
to process each RPC exactly once.
First, we guarantee at least once delivery by resending messages on the sender side if
no responses are received within a timeout. We build the RPC protocol based
on the ZeroMQ library, which automatically resends messages if they are lost.
In addition, DTranx implements the retry mechanism itself when no responses are received, since
at least once delivery in ZeroMQ does not imply at least once delivery in DTranx.
For example, if servers are restarted after the ZeroMQ library receives a message but before the DTranx system
detects the message, ZeroMQ does not retry the message and the message is lost.
Second, we guarantee at most once processing by blocking duplicate messages.
We assign distinct IDs to each RPC message, and receivers record
the IDs of completed messages. Read requests are never blocked since they are idempotent. For Commit requests, each
message has a clientID and messageID where the clientID is distinct
for each TCP connection and the messageID is monotonically increasing
for each client. ClientIDs are assigned by the ZeroMQ library when the connection
is established. For Transaction requests, each message has a unique transaction ID (TranxID) and
a message type. The TranxID is the concatenation of
the distinct coordinator server ID and a monotonically increasing integer.
There are four message types, corresponding to the four Transaction requests in Figure~\ref{fig:commit_protocol}:
\textit{Prepare}, \textit{Ready}, \textit{Commit}, and \textit{Abort}.
With at least once delivery and at most once processing, each message is processed exactly once.
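The at-most-once side of the guarantee — dedup by per-client, monotonically increasing message IDs — can be sketched as follows. This is a hypothetical Python model, not DTranx code: the `process_once` name and its interface are our own, and because message IDs grow monotonically per client, it suffices to remember the highest ID processed.

```python
def process_once(completed, client_id, message_id, handler, idempotent=False):
    """At-most-once processing sketch. `completed` maps client_id to the
    highest message_id already processed; idempotent (read) requests skip
    the check entirely."""
    if idempotent:
        return handler()
    if message_id <= completed.get(client_id, 0):
        return "duplicate"                 # already processed: drop it
    result = handler()
    completed[client_id] = message_id      # record after successful handling
    return result

completed = {}
r1 = process_once(completed, "c1", 1, lambda: "committed")
r2 = process_once(completed, "c1", 1, lambda: "committed")   # a retry
```

The first delivery is handled; the retried duplicate is blocked. Combined with sender-side retries (at least once delivery), this yields exactly-once processing.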
\subsection{LevelDB}
We choose LevelDB~\cite{leveldb} as the local database implementation
since it is lightweight and efficient compared to multi-version KV stores.
To validate the read items during the OCC commit, DTranx keeps a version number
for each key value pair by storing the combination of the real value and
a version number as the value in LevelDB.
The real value and the version number are separated by a special delimiter, such as
\textit{\#}. When clients send read requests, servers interpret the values retrieved from
LevelDB and return both the value and the version number to the clients.
When clients send Commit requests, servers increment the corresponding
version numbers by 1 if the transactions commit.
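The value-plus-version encoding above can be sketched in a few lines. This is a hypothetical Python illustration of the scheme — the function names and the in-memory `db` dict standing in for LevelDB are our own:

```python
DELIM = "#"   # special delimiter separating the real value and the version

def encode(value, version):
    # Store "value#version" as the LevelDB value.
    return f"{value}{DELIM}{version}"

def decode(stored):
    # Split at the LAST delimiter, so values containing '#' still parse.
    value, _, version = stored.rpartition(DELIM)
    return value, int(version)

def commit_write(db, key, new_value):
    # On a successful commit, bump the version number by one.
    _, version = decode(db.get(key, encode("", 0)))
    db[key] = encode(new_value, version + 1)

db = {"k": encode("hello", 3)}   # a dict stands in for LevelDB here
commit_write(db, "k", "world")
```

A read returns both parts of the decoded pair, so clients learn the version they later validate against during the OCC commit.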
\subsection{Fault Recovery}
As the cluster size increases, the probability of
server failures increases considerably. For example, if the aggregated
MTBF~(Mean Time Between Failures) of a server is 1 year, including disk
failures, network failures, etc., then in a cluster of 100 servers there
is a server failure every 3 to 4 days on average.
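The back-of-the-envelope estimate above is simply the per-server MTBF divided by the cluster size:

```python
# Failure-rate estimate from the text: with a per-server MTBF of one
# year, a 100-server cluster sees roughly one failure every
# 365 / 100 = 3.65 days on average.
mtbf_days = 365
servers = 100
days_between_cluster_failures = mtbf_days / servers
```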
DTranx triggers the recovery process in two stages: local recovery and global recovery.
Local recovery reapplies local logs, updating the database if a transaction has committed and
locking its data items if no agreement has been reached.
In addition, DTranx fills the TranxID space with aborted transactions.
Missing transactions are possible when servers
crash immediately after read item checking fails in coordinators.
Global recovery repairs transactions whose commit results cannot be decided unilaterally.
It is initiated after local recovery to inquire transaction states from other involved servers.
Specifically, if the coordinator is in the \textit{Prepare} state and all participants are in the \textit{Ready} state,
neither committing nor aborting violates distributed consensus. DTranx
chooses to abort such transactions so that clients can assume a transaction has failed
if no response is received.
On the other hand, DTranx starts the service stages in Figure~\ref{fig:seda} after local recovery
so that the changes from completed transactions are applied and the in-memory states of
ongoing transactions are restored. However, the order between service stage startup and
global recovery does not matter, and DTranx chooses to start the service stages before global
recovery to reduce the service downtime.
\subsection{Optimization}
In order to achieve better performance, multiple optimization techniques are applied.
The most significant techniques are listed below.
\begin{itemize}
\item \textbf{Delayed In-Memory Reclamation} DTranx reclaims the volatile \textit{Commit}/\textit{Abort}
states in participants when the servers are under light loads to avoid performance hiccups.
\item \textbf{Batch Ack Phase} DTranx delays the second phase (Ack phase) of 2PC, in which coordinators
send transaction commit results. We delegate the Ack phase to a separate stage, TranxAck in Figure~\ref{fig:seda},
to reduce the transaction latency and offload the high processing demand of the ClientService stage.
\item \textbf{Core Bindings} We manually analyze the queue size for each stage and bind the
threads to physical cores in an optimal way.
The best core binding strategy yields almost 6 times higher throughput than the worst.
In the future, we plan to explore how to automate the core bindings to attain
the best performance based on the number of CPU cores available.
\end{itemize}
\section{Introduction}
\input{introduction}
\section{Background}
\input{background}
\section{Design}
\label{sec:design}
\input{design}
\section{Implementation}
\label{sec:implementation}
\input{implementation}
\section{Evaluation}
\label{sec:evaluation}
\input{evaluation}
\section{Related Work}
\label{sec:related}
\input{related-work}
\section{Conclusions}
\input{conclusions}
\subsubsection{Log Design}
The logging module is designed in three vertical layers: NVM library, LogManager
and TranxLog. The NVM library provides the basic interface to persistent memory
to create files, read and write data. LogManager structures the log into a list of
log files, calculates checksums, and supports block read/write operations.
Lastly, TranxLog offers high level abstractions for distributed transaction logs and
presents a continuous and append-only log.
We use Intel's NVM~\cite{pmem} library to manipulate memory
mapping for log files in persistent memory. After log files are mapped
to the memory space, writes are immediately durable after being flushed from
the cache to memory (e.g., using \textit{clflush} in the x86 instruction set).
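A rough analogue of this memory-mapped log write can be sketched with Python's `mmap`, where `mmap.flush` plays the role the cache-line flush plays for NVM. This is only an illustration on an ordinary file, not the Intel NVM library, and the record contents are hypothetical:

```python
import mmap
import os
import tempfile

# Map a log file into memory, append a record, then flush so the write
# reaches the backing store (flush here is the stand-in for clflush).
LOG_SIZE = 4096
fd, path = tempfile.mkstemp()
os.ftruncate(fd, LOG_SIZE)            # logs are fixed-size files

record = b"CommitLog tranx=42"        # hypothetical serialized log record
with mmap.mmap(fd, LOG_SIZE) as log:
    log[0:len(record)] = record       # write through the mapping
    log.flush()                       # force the mapped write to be durable

with open(path, "rb") as f:           # read back from the backing store
    head = f.read(len(record))

os.close(fd)
os.remove(path)
```

After the flush, the record is visible through ordinary file reads, which is the durability property the NVM mapping provides without any sync/fsync system call on the write path.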
Two adjustments of the NVM library are made. First, a read pointer is added to
the original NVM library to provide an on-demand read interface.
Second, internal write locks are disabled since only one thread is launched in the
\textit{WAL} stage, so there are no race conditions.
On top of the NVM library, LogManager organizes the logs in a list structure so that logs of
variable sizes are supported. To reduce I/O system calls and reach higher
throughput, reads and writes are block based and logs in the same block are buffered
in memory. In addition, checksums are calculated and written for each block to detect
data corruption.
TranxLog serializes transaction logs such as \textit{CommitLog} that
records commit states for coordinators and \textit{ReadyLog} that records
ready states for participants. Then, TranxLog separates WALs into files
whose names are set to their creation timestamps. Thus, the file with the smallest
timestamp is the oldest one, with which the garbage collector starts. On the other hand, reclaiming
old log files does not interfere with current transactions, since current transactions
append logs to new log files.
\subsubsection{Garbage Collection}\label{subsubsec:garbagecollection}
Since the WAL is written as transactions commit, its size would grow indefinitely if DTranx
did not reclaim the WAL entries of completed transactions.
WAL entries for transactions that have reached consensus are not required during recovery.
Therefore, we introduce a state-transition-based garbage collection mechanism to identify
unnecessary logs without performance hiccups.
In particular, each transaction is assigned a unique TranxID that combines
the ServerID and a local monotonically increasing 64-bit integer.
Since ServerIDs are assigned as the server indexes in the group
membership stored in the replicated state machine Raft~\cite{raft}, TranxIDs are guaranteed
to be distinct among servers.
Moreover, each server keeps updating its largest committed TranxID, LC\_TranxID, such that
all transactions with TranxIDs less than LC\_TranxID have reached consensus.
Then, each server broadcasts its LC\_TranxID and stores the LC\_TranxIDs from other servers
in a fixed-size GC log. The benefits of the GC log are twofold:
its space usage is fixed and it enables WAL reclamation.
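The LC\_TranxID computation and the resulting WAL reclamation can be sketched as follows. This is a hypothetical Python illustration, not DTranx code: the function names are our own, IDs here are only the local monotonically increasing part of the TranxID, and the WAL-file contents are invented for the example.

```python
def largest_committed(completed_ids):
    """Compute LC_TranxID for one server (sketch): the largest ID such
    that every transaction with a smaller or equal local ID has reached
    consensus. Works like a cumulative acknowledgment."""
    lc = 0
    while lc + 1 in completed_ids:
        lc += 1
    return lc

def reclaimable(wal_files, lc_tranxid):
    # A WAL file can be reclaimed once every transaction it contains has
    # an ID at or below LC_TranxID (file names/contents are hypothetical).
    return [f for f, ids in wal_files.items() if max(ids) <= lc_tranxid]

lc = largest_committed({1, 2, 3, 5})           # ID 4 is still in flight
old = reclaimable({"wal-001": {1, 2}, "wal-002": {3, 5}}, lc)
```

With transaction 4 still in flight, LC\_TranxID stops at 3, so only the WAL file containing transactions 1 and 2 is reclaimable.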
The state transition flow is illustrated in Figure \ref{fig:state_transition}.
On the one hand, each transaction has a state representing its current stage in the 2PC, and
each server has a \textit{GC} state recording completed transactions, from which the LC\_TranxIDs are calculated.
On the other hand,
there are volatile and nonvolatile states where the nonvolatile
states are durable copies of the volatile ones.
For example, WALs are the nonvolatile copies of 2PC states including
\textit{Start}, \textit{Prepare}, \textit{Ready}, \textit{Commit}, and \textit{Abort}.
GCLog is the nonvolatile state persisting the \textit{GC} state.
Although both WALs and GCLog are persistent copies of the volatile states, their orders of updating volatile and nonvolatile
states differ. For WALs, volatile 2PC states are updated after the WALs are written, in order for the transactions to be recoverable.
For the GCLog, \textit{GC} state is updated before GCLog for two reasons.
First, the history can be replayed as long as WALs are not reclaimed yet. Second,
accumulating in-memory \textit{GC} states and writing to the GCLog in batch is more I/O efficient.
For coordinators, the GCThread periodically collects the volatile \textit{GC} states and updates the GCLog, after which
WALs containing only completed transactions are reclaimed and the LC\_TranxIDs are broadcast to all
the other servers. For participants, the GCBroadcast thread passively receives
the broadcast LC\_TranxID, updates the local \textit{GC} state and GCLog,
and then reclaims WALs.
Not only does the state transition help to reclaim WALs, it is also utilized to clean the aborted
transaction IDs in the lock service, which are referenced to avoid faulty re-lock situations. For example, suppose that after
a participant receives the prepare request of transaction~A and checks its volatile state to be \textit{Start},
an abort request of transaction~A arrives and changes the volatile state to \textit{Abort}.
Such an abort request can arrive before the prepare request is done, since the coordinator
immediately sends out abort requests if any participant aborts; the two requests are thus
processed concurrently.
The prepare request would then lock the data items, and these locks would never be unlocked.
Nonetheless, the committed transaction IDs are not stored in the lock service, since coordinators only
send out commit requests after all participants agree to commit, in which case prepare and
commit requests cannot be processed concurrently.
\chapter*{Introduction}
\addcontentsline{toc}{chapter}{Introduction}
K3-surfaces are special two-dimensional holomorphic symplectic
manifolds. They come equip\-ped with a symplectic form $\omega$, which
is unique up to a scalar factor, and their symmetries are naturally
partitioned into symplectic and nonsymplectic transformations. An
important class of K3-surfaces consists of those possessing an
\emph{antisymplectic involution}, i.e., a holomorphic involution $\sigma$ such that
$\sigma^* \omega = - \omega$.
K3-surfaces with antisymplectic involution occur classically as branched double covers of the projective plane, or more generally of Del Pezzo surfaces. This construction is a prominent source of examples and plays a significant role in the classification of log Del Pezzo surfaces of index two (see the works of Alexeev and Nikulin e.g.\,in \cite{AlexNikulin} and the classification by Nakayama \cite{Nakayama}).
Moduli spaces of K3-surfaces with antisymplectic involution are
studied by Yoshikawa in \cite{yoshikawaInvent},
\cite{yoshikawaPreprint}, and lead to new developments in the area of automorphic forms.
In this monograph we study K3-surfaces with antisymplectic involution from the point of view of symmetry. On a K3-surface $X$ with antisymplectic involution it is natural to consider those holomorphic symmetries of $X$ compatible with the given structure $(X,\omega, \sigma)$. These are symplectic automorphisms of $X$ commuting with $\sigma$.
Given a finite group $G$ one wishes to understand if it can act in the
above fashion on a K3-surface $X$ with antisymplectic involution
$\sigma$. If this is the case, i.e., if there exists a holomorphic
action of $G$ on $X$ such that $g^* \omega = \omega$ and $g \circ
\sigma = \sigma \circ g$ for all $g \in G$, then the structure of $G$
can yield strong constraints on the geometry of $X$. More precisely,
if the group $G$ has rich structure or large order, it is possible to
obtain a precise description of $X$. This can be considered the guiding
classification problem of this monograph.
In Chapter \ref{chapterlarge} we derive a classification of K3-surfaces with antisymplectic involution centralized by a group of symplectic automorphisms of order greater than or equal to 96. We prove (cf. Theorem \ref{roughclassi}):
\begin{theorem*} \label{1}
Let $X$ be a K3-surface with a symplectic action of $G$ centralized by an
antisymplectic involution $\sigma$ such that
$\mathrm{Fix}(\sigma)\neq \emptyset$. If $|G|>96$, then $X/\sigma$ is a
Del Pezzo surface and $\mathrm{Fix}(\sigma)$ is a smooth connected curve $C$ with $ g(C)\geq
3$.
\end{theorem*}
By a theorem due to Mukai \cite{mukai} finite groups of symplectic
transformations on K3-surfaces are characterized by the existence of a
certain embedding into a particular Mathieu group and are
subgroups of eleven specified finite groups of maximal symplectic symmetry.
This result naturally limits our considerations and has
led us to consider the above classification problem for a group $G$ from this list of eleven \emph{Mukai groups}.
Theorem \ref{1} above can be refined to obtain a complete classification of K3-surfaces with a symplectic action of a Mukai group centralized by an antisymplectic involution with fixed points (cf. Theorem \ref{thm mukai times invol}).
\begin{theorem*}
Let $G$ be a Mukai group acting on a K3-surface $X$ by symplectic transformations. Let $\sigma $ be an antisymplectic involution on $X$ centralizing $G$ with $\mathrm{Fix}_X(\sigma) \neq \emptyset$. Then the pair $(X,G)$ can be found in Table \ref{Mukai times invol}.
\end{theorem*}
In addition to a number of examples presented by Mukai we find new examples of K3-surfaces with maximal symplectic symmetry as equivariant double covers of Del Pezzo surfaces.
It should be emphasized that the description of K3-surfaces
with given symmetry does, however, not necessarily rely on the size of the group or its maximality, and a classification can also be obtained for rather small subgroups of the Mukai groups.
In order to illustrate that the approach does rather depend on the structure of the group, we prove a classification of K3-surfaces with a symplectic action of the group $C_3 \ltimes C_7$ centralized by an antisymplectic involution in Chapter \ref{chapterC3C7}. The surfaces with this given symmetry are characterized as double covers of $\mathbb P_2$ branched along invariant sextics in a precisely described one-dimensional family $\mathcal M$ (Theorem \ref{mainthmc3c7}).
\begin{theorem*}
The K3-surfaces with a symplectic action of $G = C_3 \ltimes C_7$ centralized by an antisymplectic involution $\sigma$ are parametrized by the space $\mathcal M$ of equivalence classes of sextic branch curves in $\mathbb P_2$.
\end{theorem*}
The group $C_3 \ltimes C_7$ is a subgroup of the simple group $L_2(7)$
of order 168 which is among the Mukai groups. The actions of $L_2(7)$ on
K3-surfaces have been studied by Oguiso and Zhang \cite{OZ168} in an a
priori more general setup. Namely, they consider finite groups containing $L_2(7)$ as a proper subgroup and obtain lattice-theoretic classification results using the Torelli theorem. Since a finite group containing $L_2(7)$ as a proper subgroup possesses, in the cases considered, an antisymplectic involution centralizing $L_2(7)$, we can apply Theorem \ref{thm mukai times invol} and improve the existing result (cf. Theorem \ref{improve OZ}).
All classification results summarized above are proved by applying the following general strategy.
The quotient of a K3-surface by an antisymplectic involution $\sigma$ with fixed points centralized by a finite group $G$ is a rational $G$-surface $Y$. We apply an equivariant version of the minimal model program respecting finite symmetry groups to the surface $Y$. Chapter \ref{chapter mmp} is dedicated to a detailed derivation of this method, a brief outline of which can also be found in the book of Koll\'ar and Mori (\cite{kollarmori} Example 2.18, see also Section 2.3 in \cite{Mori}).
In the setup of rational surfaces it leads to the well-known classification of $G$-minimal rational surfaces (\cite{maninminimal}, \cite{isk}).
Equivariant Mori reduction and the theory of $G$-minimal models have applications in various different contexts and can also be generalized to higher dimensions.
Initiated by Bayle and Beauville in \cite{Bayle}, the methods have been employed in the classification of subgroups of the Cremona group $\mathrm{Bir}(\mathbb P_2)$ of the plane, for example, by Beauville and Blanc (\cite{Beauville}, \cite{BeauBlancPrime}, \cite{PhDBlanc}, \cite{Blanc1}, etc.), de Fernex \cite{fernex}, Dolgachev and Iskovskikh \cite{DolgIsk}, and Zhang \cite{ZhangRational}.
The equivariant minimal model $Y_\mathrm{min}$ of $Y$ is obtained from
$Y$ by a finite number of blow-downs of (-1)-curves. Since individual
(-1)-curves are not necessarily invariant, each reduction step blows
down a number of disjoint (-1)-curves. The surface $Y_\mathrm{min}$ is, in all cases considered, a Del Pezzo surface.
Using detailed knowledge of the equivariant reduction map $Y \to Y_\mathrm{min}$, the shape of the invariant set $\mathrm{Fix}_X(\sigma)$, and the equivariant geometry of Del Pezzo surfaces, we classify $Y$, $Y_\mathrm{min}$, and $\mathrm{Fix}_X(\sigma)$ and can describe $X$ as an equivariant double cover of a possibly blown-up Del Pezzo surface. Besides the book of Manin, \cite{manin}, our analysis relies, to a certain extent, on Dolgachev's discussion of automorphism groups of Del Pezzo surfaces in \cite{dolgachev}, Chapter 10.
In addition to classification, this method yields a multitude of new
examples of K3-surfaces with given symmetry and a more geometric
understanding of existing examples. It should be remarked that a
number of these arise when the reduction $Y \to Y_{\mathrm{min}}$ is nontrivial.
In the last two chapters we present two different generalizations of our classification strategy for K3-surfaces with antisymplectic involution.
One of our starting points has been the
study of K3-surfaces with $L_2(7)$-symmetry by Oguiso and Zhang mentioned above. Apart from a classification result for K3-surfaces with an action of the group $L_2(7)\times C_4$, they also show that there does not exist a K3-surface with an action of the group $L_2(7)\times C_3$. We give an independent proof of this result in Chapter \ref{chapter non exist}.
Assuming the existence of such a surface and following the strategy above,
we consider the quotient by the nonsymplectic action of $C_3$ and apply the equivariant minimal model program to its desingularization. Combining this with additional geometric considerations, we reach a contradiction.
In the last chapter we consider K3-surfaces $X$ with an action of a
finite group $\tilde G$ which contains an antisymplectic involution
$\sigma$ but is not of the form $\tilde G_\mathrm{symp} \times \langle
\sigma \rangle$. Since the action of $\tilde G_\mathrm{symp}$ does not
descend to the quotient $X/\sigma$ we need to restrict our
considerations to the centralizer of $\sigma$ inside $\tilde G$. This
strategy is exemplified for a finite group $\tilde A_6$ characterized
by the short exact sequence $\{\mathrm{id}\} \to A_6 \to \tilde A_6
\to C_4 \to \{\mathrm{id}\}$. In analogy to the $L_2(7)$-case, the
action of $\tilde A_6$ on K3-surfaces has been studied by Keum,
Oguiso, and Zhang (\cite{KOZLeech}, \cite{KOZExten}), and a
characterization of $X$ using lattice theory and the Torelli theorem
has been derived. Since the existing realization of $X$ does however
not reveal its equivariant geometry, we reconsider the problem and,
though lacking the ultimate classification, find families of
K3-surfaces with $D_{16}$-symmetry, in which the $\tilde A_6$-surface
is to be found, as branched double covers.
These families are of independent interest and should be studied further. In particular,
it remains to find criteria to identify the $\tilde A_6$-surface inside these families. Possible approaches are outlined at the end of Chapter \ref{chapterA6}.
Since none of our results depends on the Torelli theorem, our approach to the classification problem allows generalization to fields of appropriate positive characteristic. This possible direction of further research was proposed to the author by Prof.\,Keiji Oguiso. Another potential further development would be the adaptation of the methods involved in the present work to related questions in higher dimensions.
\vfill
\begin{small}
\textit{Research supported by Studienstiftung des deutschen Volkes and Deutsche Forschungsgemeinschaft}.
\end{small}
\chapter{Finite group actions on K3-surfaces}
\pagestyle{fancy}
\fancyhf{}
\fancyhead[EL]{\thepage}
\fancyhead[ER]{\emph{\nouppercase{\leftmark}}}
\fancyhead[OR]{\thepage}
\fancyhead[OL]{\emph{\nouppercase{\rightmark}}}
This chapter is devoted to a brief introduction to finite group actions on K3-surfaces and presents a number of basic, well-known results:
We consider quotients of K3-surfaces by finite groups of symplectic or nonsymplectic automorphisms. It is shown that the quotient of a K3-surface by a finite group of symplectic automorphisms is a K3-surface, whereas the quotient by a finite group containing nonsymplectic transformations is either rational or an Enriques surface. Our attention concerning nonsymplectic automorphisms is then focussed on antisymplectic involutions and the description of their fixed point set.
The chapter concludes with Mukai's classification of finite groups of symplectic automorphisms on K3-surfaces and a discussion of basic examples.
\section{Basic notation and definitions}
Let $X$ be an $n$-dimensional compact complex manifold. We denote by $\mathcal O_X$ the sheaf of holomorphic functions on $X$ \nomenclature{$\mathcal O_X$}{the sheaf of holomorphic functions on $X$} and by $\mathcal K_X$ its canonical line bundle\nomenclature{$\mathcal K_X$}{the canonical line bundle of $X$}.
The \emph{ $i^{\text{th}}$ Betti number of $X$} \index{Betti number}\nomenclature{$b_i(X)$}{the $i^{\text{th}}$ Betti number of $X$} is the rank of the free part of $H_i(X)$ and denoted by $b_i(X)$.
A \emph{surface}\index{surface} is a compact connected complex manifold of complex dimension two. A \emph{curve} \index{curve} on a surface $X$ is an irreducible 1-dimensional closed subspace of $X$. The (arithmetic) genus of a curve $C$ is denoted by $g(C)$ \nomenclature{$g(C)$}{the (arithmetic) genus of a curve $C$}.
\begin{definition}
A \emph{K3-surface} \index{K3-surface} is a surface $X$ with trivial canonical bundle $\mathcal K_X$ and $b_1(X)=0$.
\end{definition}
Note that a K3-surface is equivalently characterized if the condition $b_1(X)=0$ is replaced by $q(X) = \mathrm{dim}_\mathbb C H^1(X, \mathcal O_X) =0$ or $\pi_1(X) =\{\mathrm{id}\}$, i.e., $X$ is simply-connected. Examples of K3-surfaces arise as Kummer surfaces, quartic surfaces in $\mathbb P_3$ or double coverings of $\mathbb P_2$ branched along smooth curves of degree six.
Let $X$ be a K3-surface. Triviality of $\mathcal K _X$ is equivalent to the existence of a nowhere vanishing holomorphic 2-form $\omega$ on $X$. Any 2-form on $X$ can be expressed as a complex multiple of $\omega$. We will therefore mostly refer to $\omega$ (or $\omega _X$) as ``the'' holomorphic 2-form on $X$ \nomenclature{$\omega_X$}{the holomorphic 2-form on a K3-surface $X$}. We denote by $\mathrm{Aut}_\mathcal O (X)=\mathrm{Aut}(X)$ the group of holomorphic automorphisms of $X$ \nomenclature{$\mathrm{Aut}(X)$}{the group of holomorphic automorphisms of $X$} and consider a (finite) subgroup $G \hookrightarrow \mathrm{Aut}(X)$. If the context is clear, the abstract finite group $G$ is identified with its image in $\mathrm{Aut}(X)$. The group $G$ is referred to as a transformation group, symmetry group or automorphism group of $X$. Note that our considerations are independent of the question whether the group $\mathrm{Aut}(X)$ is finite or not. The order of $G$ is denoted by $|G|$\nomenclature{$|G|$}{the order of the group $G$}.
\begin{definition}
The action of $G$ on $X$ is called \emph{symplectic} \index{symplectic action} if $\omega$ is $G$-invariant, i.e., $g^*\omega = \omega$ for all $g \in G$.
\end{definition}
For a finite group $G < \mathrm{Aut}(X)$ we denote by $G_\mathrm{symp}$ the subgroup of symplectic transformations in $G$ \nomenclature{$G_\mathrm{symp}$}{the subgroup of symplectic transformations in $G$}. This group is the kernel of the homomorphism $\chi : G \to \mathbb{C}^*$ defined by the action of $G$ on the space of holomorphic 2-forms $\Omega^2(X) \cong \mathbb C \omega$. It follows that $G$ fits into the short exact sequence
\begin{equation*}
\{\mathrm{id}\} \to G_\mathrm{symp} \to G \to C_n \to \{\mathrm{id}\}
\end{equation*}
for some cyclic group $C_n$\nomenclature{$C_n$}{the cyclic group of order $n$}.
If both $G_\text{symp}$ and $C_n \cong G/G_\text{symp}$ are nontrivial, then $G$ is called a symmetry group of \emph{mixed type} \index{mixed type}.
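For instance, if $G = \langle \sigma \rangle$ is generated by a single antisymplectic involution, then $G_\text{symp} = \{\mathrm{id}\}$ and the sequence reads
\[
\{\mathrm{id}\} \to \{\mathrm{id}\} \to G \to C_2 \to \{\mathrm{id}\},
\]
so $G$ is not of mixed type. By contrast, the group $L_2(7) \times C_4$ acting on the Klein-Mukai surface of Example \ref{L2(7)example} below, with $L_2(7)$ acting symplectically and $C_4$ acting nonsymplectically, is of mixed type.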
\section{Quotients of K3-surfaces}
Let $X$ be a surface and let $G < \mathrm{Aut}(X)$ be a finite subgroup of the group of holomorphic automorphisms of $X$. The orbit space $X/G$ carries the structure of a reduced, irreducible, normal complex space of dimension 2 where the sheaf of holomorphic functions is given by the sheaf of $G$-invariant functions on $X$. In many cases, the quotient is a singular space. The map $X \to X/G$ is referred to as a quotient map or a covering (map).
For reduced, irreducible complex spaces $X,Y$ of dimension 2 a proper holomorphic map $f: X \to Y$ is called \emph{bimeromorphic} \index{bimeromorphic map} if there exist proper analytic subsets $A \subset X$ and $B \subset Y$ such that $f: X \backslash A \to Y \backslash B$ is biholomorphic. A holomorphic, bimeromorphic map $f: X \to Y$ with $X$ smooth is a \emph{resolution of singularities of $Y$} \index{resolution of singularities}.
\begin{definition}
A resolution of singularities $f: X \to Y$ is called \emph{minimal}\index{resolution of singularities! minimal resolution of singularities} if it does not contract any (-1)-curves, i.e., there is no curve $E \subset X$ with $E \cong \mathbb{P}_1$ and $E^2=-1$ such that $f(E) = \{\text{point}\}$.
\end{definition}
Every normal surface $Y$ admits a minimal resolution of singularities $f: X \to Y$ which is uniquely determined by $Y$. In particular, any action of a finite group of automorphisms of $Y$ lifts to $X$, i.e., the minimal resolution is equivariant.
\subsection{Quotients by finite groups of symplectic transformations}
In the study and classification of finite groups of symplectic transformations on K3-surfaces, the following well-known result has proved to be very useful (see e.g. \cite{NikulinFinite}).
\begin{theorem}\label{K3quotsymp}
Let $X$ be a K3-surface, $G$ be a finite group of automorphisms of $X$ and $f: Y \to X/G$ be the minimal resolution of $X/G$. Then $Y$ is a K3-surface if and only if $G$ acts by symplectic transformations.
\end{theorem}
For the reader's convenience we give a detailed proof of this theorem.
We begin with the following lemma.
\begin{lemma}\label{betti}
Let $X$ be a simply-connected surface, $G$ be a finite group of
automorphisms and $f: Y \to X/G$ be an arbitrary
resolution of singularities of $X/G$. Then
$b_1(Y) =0$.
\end{lemma}
\begin{proof}
We denote by $\pi_1(Y)$ the fundamental group of $Y$ \nomenclature{$\pi_1(X)$}{the fundamental group of $X$} and by $[\gamma] \in \pi_1(Y)$ the homotopy equi\-va\-lence class of a closed continuous path $\gamma$.
The first Betti number is the rank
of the free part of
$$
H_1(Y) = \pi_1(Y) / [\pi_1(Y), \pi_1(Y)].
$$
We show that for each $[\gamma] \in \pi_1(Y)$ there exists $N \in \mathbb N$ such that $[\gamma]^N$ is trivial, i.e., $\gamma^N$ is null-homotopic. It then follows that $H_1(Y)$ is a torsion group and $b_1(Y) =0$.
Let $C \subset X/G$ be the union of branch curves of the
covering $q: X \to X/G$, let $P \subset X/G$ be the set of isolated singularities of $X/G$, and $E \subset Y$ be the exceptional locus of $f$. Let $\gamma: [0,1] \to Y$ be a closed path in $Y$. By choosing a path homotopic to
$\gamma$ which does not intersect $E \cup f^{-1}(C)$ we may assume without loss of generality that $\gamma \cap (E \cup f^{-1}(C)) =
\emptyset$.
The path $\gamma$ is mapped to a
closed path in $(X/G)\backslash (C \cup P)$ which we denote also by $\gamma$. The quotient $q:X \to X/G$ is unbranched
outside $C \cup P$ and we can lift $\gamma$ to a path $\widetilde{\gamma}$ in $X$.
Let $\widetilde{\gamma}(0) = x \in X$, then $\widetilde{\gamma}(1) = g.x$ for some $g
\in G$. Since $G$ is a finite group, it follows that $\widetilde{\gamma^N}$ is
closed for some $N \in \mathbb N$.
As $X$ is simply-connected, we know that also $X \backslash q^{-1}(P)$ is
simply-connected. So $\widetilde{\gamma^N }$ is homotopic to zero in $X\backslash q^{-1}(P)$. We can map the corresponding homotopy to $(X/G)\backslash P$ and conclude that
$\gamma^N$ is homotopic to zero in $(X/G)\backslash P$. It follows that $\gamma
^N$ is homotopic to zero in $Y\backslash E$ and therefore in $Y$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{K3quotsymp}]
We let $E \subset Y$ denote the exceptional locus of the map $f:Y \to X/G$ and $\pi: X \to X/G$ the quotient map.
If $Y$ is a K3-surface, let $\omega_Y$ denote the nowhere vanishing holomorphic 2-form on $Y$. Let $(X/G)_\text{reg}$ denote the regular part of $X/G$. Since $f|_{Y\backslash E}: Y\backslash E \to (X/G)_\text{reg}$ is biholomorphic, this defines a holomorphic 2-form $\omega_{(X/G)_\text{reg}}$ on $(X/G)_\text{reg}$. Pulling this form back to $X$, we obtain a $G$-invariant holomorphic 2-form on $\pi^{-1}((X/G)_\text{reg}) = X \backslash \{p_1,\dots,p_k\}$, where $p_1, \dots, p_k$ are the finitely many points with nontrivial isotropy. By Hartogs' theorem it extends to a nonzero, i.e., not identically zero, $G$-invariant holomorphic 2-form on $X$. In particular, any holomorphic 2-form on $X$ is $G$-invariant and the action of $G$ is by symplectic transformations.
Conversely, if $G$ acts by symplectic transformations on $X$, then $\omega_X$ defines a nowhere vanishing holomorphic 2-form on $(X/G)_\text{reg}$ and on $ Y \backslash E$ . Our aim is to show that it extends to a nowhere vanishing holomorphic 2-form on $Y$. In combination with Lemma \ref{betti} this yields that $Y$ is a K3-surface.
Locally at $p \in X$ the action of the isotropy group $G_p$ can be linearized, i.e., there exists a neighbourhood of $p$ in $X$ which is $G_p$-equivariantly isomorphic to a neighbourhood of $0 \in \mathbb C^2$ with a linear action of $G_p$. A neighbourhood of $\pi(p) \in X/G$ is isomorphic to a neighbourhood of the origin in $\mathbb C^2 / \Gamma$ for some finite subgroup $\Gamma < \mathrm{SL}(2,\mathbb C)$; here $\Gamma$ lies in $\mathrm{SL}(2,\mathbb C)$ because the linearized action preserves the 2-form $\omega$. In particular, the points with nontrivial isotropy are isolated. The singularities of $X/G$ are called simple singularities, Kleinian singularities, Du Val singularities or rational double points. Following \cite{shafarevic} IV.4.3, we sketch an argument which yields the desired extension result.
Let $X \times_{(X/G)} Y = \{ (x,y) \in X\times Y \, | \, \pi(x) = f(y) \}$ and let $ N$ be its normalization. Consider the diagram
$$
\begin{xymatrix}
{
X\ar[d]_{\pi} & \ar[l]_{p_X}\ar[d]^{p_Y}N \\
X/G & \ar[l]^f Y.
}
\end{xymatrix}
$$
We let $\omega_X$ denote the nowhere vanishing holomorphic 2-form on $X$. Its pullback $p_X^*\omega_X$ defines a nowhere vanishing holomorphic 2-form on $N_\text{reg}$. Simultaneously, we consider the meromorphic 2-form $\omega _Y$ on $Y$ obtained by pulling back the 2-form on $X/G$ induced by the $G$-invariant 2-form $\omega_X$. By construction, the pullback $p_Y^*\omega_Y$ coincides with the pullback $p_X^*\omega_X$ on $N_\text{reg}$.
Consider the finite holomorphic map $p_Y|_{N_\text{reg}}: N_\text{reg} \to p_Y(N_\text{reg}) \subset Y$. Since $p_Y^*\omega_Y$ is holomorphic on $N_\text{reg}$, one checks (by a calculation in local coordinates) that $\omega_Y$ is holomorphic on $p_Y(N_\text{reg}) = Y \backslash \{y_1, \dots, y_k\}$ and consequently extends to a holomorphic 2-form on $Y$. Since $p_X^*\omega_X = p_Y^*\omega_Y$ is nowhere vanishing on $N_\text{reg}$, it follows that $\omega_Y$ defines a global, nowhere vanishing holomorphic 2-form on $Y$.
\end{proof}
\begin{remark}
Let $g$ be a symplectic automorphism of finite order on a K3-surface $X$.
Since K3-surfaces are simply-connected, the covering $X \to X/\langle g \rangle$ can never be unbranched. It follows that $g$ must have fixed points.
\end{remark}
Using Theorem \ref{K3quotsymp} we give an outline of Nikulin's classification of finite Abelian groups of symplectic transformations on a K3-surface \cite{NikulinFinite}. Let $C_p$ be a cyclic group of prime order acting on a K3-surface $X$ by symplectic transformations and $Y$ be the minimal desingularization of the quotient $X/C_p$.
Notice that by adjunction the self-intersection number of a curve $D$ of genus $g(D)$ on a K3-surface is given by $D^2 = 2g(D)-2$. In particular, if $D$ is smooth, then $D^2 = -e(D)$.
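For later use we record the two cases of this formula that occur most frequently in the sequel:
\[
g(D) = 0 \;\Longrightarrow\; D^2 = -2, \qquad g(D) = 1 \;\Longrightarrow\; D^2 = 0,
\]
i.e., smooth rational curves on a K3-surface are (-2)-curves and elliptic curves have self-intersection zero.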
The exceptional locus of the map $Y \to X/C_p$ is a union of (-2)-curves and one can calculate their contribution to the topological Euler characteristic $e(Y)$ \nomenclature{$e(X)$}{the topological Euler characteristic of $X$} in relation to $e(X/C_p)$. Let $n_p$ denote the number of fixed points of $C_p$ on $X$. Since the covering is unbranched outside the fixed points and each of the $n_p$ singularities of $X/C_p$ is resolved by a chain of $p-1$ rational curves, we obtain
\begin{align*}
24 &= e(X) = p\cdot e(X/C_p) - (p-1)\,n_p \\
24 &= e(Y) = e(X/C_p) + (p-1)\,n_p.
\end{align*}
Combining these formulas gives $n_p = 24/(p+1)$. For a general finite Abelian group $G$ acting symplectically on a K3-surface $X$, one needs to consider all possible isotropy groups $G_x$ for $x \in X$. By linearization, $G_x < \mathrm{SL}_2(\mathbb C)$. Since $G$ is Abelian, it follows that $G_x$ is cyclic and an analogous formula relating the Euler characteristics of $X$, $X/G$, and $Y$ can be derived. A case-by-case study then yields Nikulin's classification. In particular, we emphasize the following remark.
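As a consistency check, for the primes $p \leq 7$ the formula $n_p = 24/(p+1)$ yields
\[
n_2 = \frac{24}{3} = 8, \quad n_3 = \frac{24}{4} = 6, \quad n_5 = \frac{24}{6} = 4, \quad n_7 = \frac{24}{8} = 3,
\]
in accordance with the corresponding columns of Table \ref{fix points symplectic} below.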
\begin{remark}\label{order of symp aut}
If $g \in \mathrm{Aut}(X)$ is a symplectic automorphism of finite order $n(g)$ on a K3-surface $X$, then $n(g)$ is bounded by eight and the number of fixed points of $g$ is given by the following table:
\begin{table}[h]
\centering
\begin{tabular}{c|c|c|c|c|c|c|c}
$n(g)$ & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline
$|\mathrm{Fix}_X(g)|$ & 8 & 6 & 4 & 4& 2& 3 & 2
\end{tabular}
\caption{Fixed points of symplectic automorphisms on K3-surfaces}\label{fix points symplectic}
\end{table}
\end{remark}
\subsection{Quotients by finite groups of nonsymplectic transformations}
In this subsection we consider the quotient of a K3-surface $X$ by a finite group $G$ such that $G \neq G_\text{symp}$, i.e., there exists $g \in G$ such that $g^* \omega \neq \omega$. We prove
\begin{samepage}
\begin{theorem}\label{K3quotnonsymp}
Let $X$ be a K3-surface and let $G < \mathrm{Aut}(X)$ be a finite group such that $g^* \omega \ne \omega$ for some $g \in G $. Then either
\begin{itemize}
\item[-]
$X/G$ is rational, i.e., bimeromorphically equivalent to $\mathbb P_2$, or
\item[-]
the minimal desingularization of $X/G$ is a minimal Enriques surface and $$G/G_\text{symp} \cong C_2.$$ In this case, $\pi: X \to X/G$ is unbranched if and only if $G_\text{symp} = \{ \mathrm{id}\}$.
\end{itemize}
\end{theorem}
\end{samepage}
Before giving the proof, we establish the necessary notation and state two useful lemmata.
We denote by $\pi: X \to X/G$ the quotient map. This map can be ramified at isolated points
and along curves. Let $P=
\{p_1, \dots, p_n\}$ denote the set of singularities of $X/G$. For simplicity, we denote the corresponding subset $\pi^{-1}(P)$ of $X$ also by $P$. Outside $P$, the map $\pi$ is ramified along curves $C_i$ of ramification order $c_i+1$. We write $C = \sum c_i C_i$.
Let $r: Y \to X/G$ denote a minimal resolution of singularities of
$X/G$. The exceptional locus of $r$ in $Y$ is denoted by $D$.
As $Y$ is not necessarily a minimal surface, we denote by $p: Y \to Y_ \text{min}$
the successive blow-down of (-1)-curves. The union of exceptional curves of $p$
is denoted by $E$.
\[
\begin{xymatrix}{
C \subset \mathbf{X}\supset P \ar[d]^\pi \\
\pi(C) \subset \mathbf{X/G} \supset P & D \subset \mathbf{Y} \supset E \ar[l]_>>>>>>{r} \ar[d]^p\\
& \mathbf{Y_\text{min}}
}
\end{xymatrix}
\]
The following two lemmata (cf. e.g. \cite{BPV} I.16 and Thm. I.9.1) will be useful in order to relate the canonical bundles of the spaces $X$, $(X/G)_\text{reg}$, $Y$ and $Y_ \text{min}$. For a divisor $D$ on a manifold $X$ we denote by $\mathcal O_X(D)$ the line bundle associated to $D$ \nomenclature{$\mathcal O_X(D)$}{the line bundle associated to the divisor $D$}.
\begin{lemma}\label{adj1}
Let $X,Y$ be surfaces and let $\varphi:X \to Y$ be a surjective
finite proper holomorphic map ramified along a curve $C$ in $X$ of ramification order
$k$. Then
\[
\mathcal{K}_X = \varphi^*( \mathcal K_Y ) \otimes \mathcal O_X (C)^{\otimes(k-1)}.
\]
More generally, if $\varphi$ is ramified along a ramification divisor $R = \sum_i r_i R_i$, where $R_i$ is an irreducible curve and $r_i + 1$ is the ramification order of $\varphi$ along $R_i$, then
\[
\mathcal{K}_X = \varphi^*( \mathcal K_Y ) \otimes \mathcal O_X (R).
\]
\end{lemma}
\begin{lemma}\label{adj2}
Let $X$ be a surface and let $b: X
\to Y$ be the blow-down of a (-1)-curve $E \subset X$. Then
\[
\mathcal{K}_X = b^* ( \mathcal K_Y ) \otimes \mathcal O_X (E).
\]
\end{lemma}
We present a proof of Theorem \ref{K3quotnonsymp} using the Enriques-Kodaira classification of surfaces.
\begin{proof}[Proof of Theorem \ref{K3quotnonsymp}]
The Kodaira dimension of the K3-surface $X$ is $\mathrm{kod}(X)=0$.
The Kodaira dimension of $X/G$, which is defined as the Kodaira dimension of some resolution of $X/G$, is less than or equal to the Kodaira dimension of $X$ (cf. Theorem 6.10 in \cite{ueno}):
\[
0=\mathrm{kod}(X) \geq \mathrm{kod}(X/G) = \mathrm{kod}(Y) =
\mathrm{kod}(Y_\text{min}) \in \{0, -\infty\}.
\]
By Lemma \ref{betti}, the first Betti number of $Y$ and $Y_\text{min}$ is zero.
If $\mathrm{kod}(Y) = - \infty$, then $Y$ is a smooth rational surface. If $\mathrm{kod}(Y) =\mathrm{kod}(Y_\text{min})= 0$, then, since $Y$ is not a K3-surface by Theorem \ref{K3quotsymp}, it follows that $Y_\text{min}$ is an Enriques surface.
If $Y_\text{min}$ is an Enriques surface, then $\mathcal K_{Y_\text{min}}^{\otimes 2}$ is trivial. Let $ s \in
\Gamma(Y_\text{min}, \mathcal{K}_{Y_\text{min}}^{\otimes 2})$ be a nowhere vanishing section.
Consecutive application of Lemma \ref{adj2} yields the following formula
\[
\mathcal{K}_Y^{\otimes 2} = (p^* \mathcal{K}_{Y_\mathrm{min}})^{\otimes 2} \otimes \mathcal O_Y(E)^{\otimes 2}= p^* (\mathcal{K}_{Y_\mathrm{min}}^{\otimes 2}) \otimes \mathcal O_Y(E)^{\otimes 2}.
\]
Let $e \in \Gamma(Y, \mathcal O_Y(E)^{\otimes 2})$ be the canonical section vanishing precisely along $E$ and write
$\tilde{s}= p^*(s) \cdot e$. This global section of $\mathcal{K}_Y^{\otimes 2}$ vanishes along $E$ and is
nowhere vanishing outside $E$.
By restricting $\tilde{s}$ to $Y\backslash D$ we obtain a section of
$\mathcal{K}_{Y \backslash D}^{\otimes 2}$. Since $r$ is biholomorphic
outside $D$, we can map the restricted section to $(X/G)\backslash P =
(X/G)_\text{reg}$ and obtain a section $\hat{s}$ of
$\mathcal{K}_{(X/G)_\text{reg}}^{\otimes 2}$. Note that $\hat{s}$ is not the
zero-section. If $E \neq \emptyset$, i.e., $Y$ is not minimal, let $E_1 \subset E$ be a (-1)-curve. The minimality of the resolution $r: Y \to X/G$ implies $E_1 \nsubseteq D$. It follows that $\hat{s}$ vanishes along the image of $E_1$ in $(X/G)_\text{reg}$.
We may now apply Lemma \ref{adj1} to the map $\pi|_{X\backslash P}$ to see
\begin{align*}
\mathcal{K}_{X \backslash P}^{\otimes 2} &= (\pi^*
\mathcal{K}_{(X/G)_\text{reg}})^{\otimes 2} \otimes \mathcal O_{X\backslash P}(C)^{\otimes 2}\\
&=\pi^* (\mathcal{K}_{(X/G)_\text{reg}}^{\otimes 2}) \otimes \mathcal O_{X\backslash P}(C)^{\otimes 2}.
\end{align*}
Let $c \in \Gamma(X\backslash P,
\mathcal O_{X\backslash P}(C)^{\otimes 2})$ be the canonical section vanishing precisely along $C$. Then $t := \pi^*
\hat{s} \cdot c \in \Gamma(X\backslash P, \mathcal{K}_{X\backslash P}^{\otimes2})$
is not the zero-section but
vanishes along $C$ and along the preimage of the zeroes of $\hat{s}$.
Now $t$ extends to a holomorphic section $\tilde{t} \in \Gamma(X,
\mathcal{K}_X^{\otimes 2})$. Since $X$ is K3, it follows that both $\mathcal{K}_X$ and $\mathcal{K}_X^{\otimes 2}$ are trivial and $\tilde t$ must be nowhere vanishing. Consequently,
both $E$ and $C$ must be empty.
It follows that the map $\pi$ is at worst branched at points $P$ (not along curves) and the
minimal resolution $Y$ of $X/G$ is a minimal surface.
\[
\begin{xymatrix}{
P \subset \mathbf{X} \ar[d]^\pi \\
P \subset \mathbf{X/G} & \mathbf{Y} \supset D \ar[l]_>>>>>>{r}
}
\end{xymatrix}
\]
The section $\tilde{t}$ on $X$ is $G$-invariant by construction. Let $\omega$
be a nonzero section of the trivial bundle $\mathcal{K}_X$ such that $\tilde t = \omega ^2$. The action of $G$ on $X$ is
nonsymplectic, therefore $\omega$ is not invariant but $\tilde{t}$ is. Hence
$G$ acts on $\omega$ by multiplication by $\pm 1$ and $G/G_\text{symp} \cong C_2$.
If $\pi: X \to X/G$ is unbranched, it follows that $\mathrm{Fix}_X(g) = \emptyset$ for all $g \in G \backslash \{\mathrm{id}\}$. Since symplectic automorphisms of finite order necessarily have fixed points, this implies $G_\text{symp} = \{\mathrm{id}\}$.
Conversely, if $G_\text{symp} = \{\mathrm{id}\}$, then $G \cong C_2$ and
it remains to show that the set $P=\{p_1,\dots,p_n\}$ is empty.
Our argument uses the Euler characteristics of $X$, $X/G$, and $Y$. By choosing a triangulation of $X/G$
such that all points $p_i$ lie on vertices we calculate $24= e(X) = 2e(X/G)-n$.
Resolving the $n$ singularities of $X/G$, each of which contributes one (-2)-curve, we obtain $12=e(Y) = e(X/G) +n$.
This implies $e(X/G) =12$ and $n=0$ and completes the proof of the theorem.
\end{proof}
\section{Antisymplectic involutions on K3-surfaces}
As a special case of the theorem above we consider the quotient of a K3-surface $X$ by an involution $\sigma \in \mathrm{Aut}(X)$ which acts on the 2-form $\omega$ by multiplication by $-1$ and is therefore called \emph{antisymplectic involution}\index{antisymplectic involution}.
\begin{proposition}\label{K3quotnonsympinvo}
Let $\pi:X \to X/\sigma$ be the quotient of a K3-surface by an antisymplectic involution $\sigma$. If $\mathrm{Fix}_X(\sigma) \neq \emptyset$, then $\mathrm{Fix}_X(\sigma)$ is a disjoint union of smooth curves and $X/\sigma$ is a smooth rational surface. Furthermore, $\mathrm{Fix}_X(\sigma) = \emptyset$ if and only if $X/\sigma$ is an Enriques surface.
\end{proposition}
\begin {proof}
If $\mathrm{Fix}_X(\sigma) \neq \emptyset$, then Theorem \ref{K3quotnonsymp} and linearization of the $\sigma$-action at its fixed points yield the proposition.
If $\mathrm{Fix}_X(\sigma) = \emptyset$, then $X \to X/\sigma$ is unbranched and $\mathrm{kod}(X) = \mathrm{kod}(X/\sigma)$. It follows that $X/\sigma$ is an Enriques surface.
\end {proof}
In order to sketch Nikulin's description of the fixed point set of an anti\-symplec\-tic involution we summarize some information about the Picard lattice of a K3-surface.
\subsection{Picard lattices of K3-surfaces}
Let $X$ be a complex manifold. The \emph{Picard group of $X$} \index{Picard group} \nomenclature{$\mathrm{Pic}(X)$}{the Picard group of $X$} is the group of isomorphism classes of line bundles on $X$ and denoted by $\mathrm{Pic}(X)$. It is isomorphic to $H^1(X, \mathcal O_X^*)$. Let $\mathbb Z_X$ denote the constant sheaf on $X$ corresponding to $\mathbb Z$, then the exponential sequence $0 \to \mathbb Z_X \to \mathcal O_X \to \mathcal O_X^* \to 0$ induces a map
\[
\delta: H^1(X, \mathcal O_X^*) \to H^2(X, \mathbb Z).
\]
Its kernel is the identity component $\mathrm{Pic}^0(X)$ of the Picard group. The quotient $\mathrm{Pic}(X) / \mathrm{Pic}^0(X)$ is isomorphic to a subgroup of $ H^2(X, \mathbb Z)$ and referred to as the \emph{N\'eron-Severi group $NS(X)$ of $X$} \index{N\'eron-Severi group}\nomenclature{$NS(X)$}{the N\'eron-Severi group of $X$}. On the space $ H^2(X, \mathbb Z)$ there is the natural intersection or cup product pairing. The rank of the N\'eron-Severi group of $X$ is denoted by $\rho(X)$ and referred to as the \emph{Picard number of $X$}\nomenclature{$\rho(X)$}{the Picard number of $X$}.
If $X$ is a K3-surface, then $H^1(X, \mathcal O_X)=\{0\}$ and $\mathrm{Pic}(X)$ is isomorphic to $NS(X)$. In particular, the Picard group carries the structure of a lattice, the \emph{Picard lattice} \index{Picard lattice} of $X$, and is regarded as a sublattice of $ H^2(X, \mathbb Z)$, which is known to have signature $(3,19)$ (cf. VIII.3 in \cite{BPV}).
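Explicitly, $H^2(X, \mathbb Z)$ equipped with the cup product pairing is isometric to the K3-lattice
\[
\Lambda_{\mathrm{K3}} = U^{\oplus 3} \oplus E_8(-1)^{\oplus 2},
\]
where $U$ denotes the hyperbolic plane of signature $(1,1)$ and $E_8(-1)$ the negative definite $E_8$-lattice of signature $(0,8)$; this decomposition exhibits the signature $(3,19)$ stated above.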
If $X$ is an algebraic K3-surface, i.e., the transcendence degree of the field of meromorphic functions on $X$ equals 2, then $\mathrm{Pic}(X)$ is a nondegenerate lattice of signature $(1, \rho -1)$ (cf. \S3.2 in \cite{NikulinFinite}).
\subsection{The fixed point set of an antisymplectic involution}
We can now present Nikulin's classification of the fixed point set of an antisymplectic involution on a K3-surface \cite{NikulinFix}.
\begin{theorem}\label{FixSigma}
The fixed point set of an antisymplectic involution $\sigma$ on a K3-surface $X$ is one of the following three types:
\[
\text{1.)\ }\ \mathrm{Fix}(\sigma) = D_g \cup \bigcup_{i=1}^n R_i,
\quad \quad
\text{2.)\ }\ \mathrm{Fix}(\sigma) = D_1 \cup D'_1,
\quad\quad
\text{3.)\ }\ \mathrm{Fix}(\sigma)= \emptyset,
\]
where $D_g$ denotes a smooth curve of genus $g \geq 0$ and $\bigcup_{i=1}^n R_i$
is a possibly empty union of smooth disjoint rational curves. In case 2.), $D_1$ and $D'_1$ denote disjoint elliptic curves.
\end{theorem}
\begin{proof}
Assume there exists a curve $D_g$ of genus $g \geq 2$ in $\mathrm{Fix}(\sigma)$. By adjunction, this curve has positive self-intersection. We claim that each curve $D$ in $\mathrm{Fix}(\sigma)$ disjoint from $D_g$ is rational.
First note that the existence of an antisymplectic automorphism on $X$ implies that $X$ is algebraic (cf. Thm. 3.1 in \cite{NikulinFinite}) and therefore $\mathrm{Pic}(X)$ is a nondegenerate lattice of signature $(1, \rho -1)$.
If $D$ is elliptic, then $D^2 =0$, $D_g^2>0$ and $D \cdot D_g=0$, contradicting the fact that $\mathrm{Pic}(X)$ has signature $(1,\rho-1)$. If $D$ is of genus $\geq 2$, then $D^2 >0$ and we obtain the same contradiction.
Now assume that there exists an elliptic curve $D_1$ in $\mathrm{Fix}(\sigma)$. By the considerations above, there may not be curves of genus $\geq 2$ in $\mathrm{Fix}(\sigma)$. If there are no further elliptic curves in $\mathrm{Fix}(\sigma)$, we are in case 1) of the classification. If there is another elliptic curve $D'_1$ in $\mathrm{Fix}(\sigma)$, this has to be linearly equivalent to $D_1$, as otherwise the intersection form of $\mathrm{Pic}(X)$ would degenerate on the span of $D_1$ and $D'_1$.
The linear system of $D_1$ defines an elliptic fibration $X \to \mathbb{P}_1$. The induced action of $\sigma$ on the base cannot be trivial, since this would force $\sigma$ to act trivially in a neighbourhood of $D_1$ in $X$. It follows that the induced action of $\sigma$ on $\mathbb P_1$ has precisely two fixed points and that $\mathrm{Fix}(\sigma)$ contains no curves other than $D_1$ and $D'_1$.
This completes the proof of the theorem.
\end{proof}
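Theorem \ref{FixSigma} is illustrated by the following standard example. Let $X \to \mathbb P_2$ be the double cover branched along a smooth sextic curve $B$ and let $\sigma$ denote the covering involution. Since $X/\sigma \cong \mathbb P_2$ is rational and in particular not a K3-surface, the involution $\sigma$ is antisymplectic by Theorem \ref{K3quotsymp}. Its fixed point set is a single smooth curve mapping isomorphically onto $B$, which by the genus formula for smooth plane curves has genus
\[
g = \frac{(6-1)(6-2)}{2} = 10,
\]
so we are in case 1.) of the classification with $D_g = D_{10}$ and $n = 0$.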
\section{Finite groups of symplectic automorphisms}
In preparation for stating Mukai's classification of finite groups of symplectic automorphisms on K3-surfaces we present his list \cite{mukai} of symplectic actions of finite groups $G$ on K3-surfaces $X$. It is an important source of examples, many of which will occur in our later discussion.
For the sake of brevity, at this point we do not introduce the notation for the groups used in this table.
\renewcommand{\baselinestretch}{1.5}
\begin{table}[h]
\centering
\begin{tabular}{l|l|l|l}
& $G$ & $|G|$ & \textbf{K3-surface} $X$ \\ \hline
1 & $L_2(7)$ & 168 & $\{x_1^3x_2+x_2^3x_3+x_3^3x_1+x_4^4 =0\} \subset \mathbb P_3$\\ \hline
2 & $A_6$ & 360 & $\{\sum_{i=1}^6 x_i = \sum_{i=1}^6 x_i^2 = \sum_{i=1}^6 x_i^3=0\} \subset \mathbb P_5$\\ \hline
3 & $S_5$ & 120 & $\{\sum_{i=1}^5 x_i = \sum_{i=1}^5 x_i^2 = \sum_{i=1}^5 x_i^3=0\} \subset \mathbb P_4$\\ \hline
4 & $M_{20}$ & 960 & $\{ x_1^4+ x_2 ^4 + x_3^4 +x_4^4 +12 x_1x_2x_3x_4 = 0\} \subset \mathbb P_3$\\ \hline
5 & $F_{384}$ & 384 & $\{ x_1^4+ x_2 ^4 + x_3^4 +x_4^4 = 0\} \subset \mathbb P_3$\\ \hline
6 & $A_{4,4}$ & 288 & $\{x_1^2+x_2^2 +x_3^2 = \sqrt{3}x_4^2\} \cap $\\
& & & $ \{x_1^2+ \omega x_2^2 +\omega^2 x_3^2 = \sqrt{3}x_5^2\} \cap $\\
& & & $ \{x_1^2+\omega^2 x_2^2 +\omega x_3^2 = \sqrt{3}x_6^2\} \subset \mathbb P_5$ \\ \hline
7 & $T_{192}$ & 192 & $\{ x_1^4+ x_2 ^4 + x_3^4 +x_4^4 - 2 \sqrt{-3}(x_1^2x_2^2 + x_3^2x_4^2) = 0\} \subset \mathbb P_3$\\\hline
8 & $H_{192}$ & 192 & $ \{x_1^2+x_3^2+x_5^2 = x_2^2 + x_4^2 + x_6^2\} \cap $ \\
& & & $ \{x_1^2+x_4^2 = x_2^2+x_5^2=x_3^2+x_6^2\} \subset \mathbb P_5$\\ \hline
9 & $N_{72}$ & 72 & $\{ x_1^3+ x_2 ^3 + x_3^3 +x_4^3= x_1x_2 + x_3x_4+ x_5^2 = 0 \} \subset \mathbb P_4$ \\ \hline
10 & $M_9$ & 72 & Double cover of $\mathbb P_2$ branched along \\
& & & $\{x_1^6+x_2^6 +x_3^6 -10(x_1^3x_2^3 + x_2^3x_3^3 +x_3^3x_1^3) =0\}$\\ \hline
11 & $ T_{48}$ & 48 & Double cover of $\mathbb P_2$ branched along \\
& & & $\{x_1x_2(x_1^4-x_2^4)+ x_3^6 =0\}$
\end{tabular}
\caption{Finite groups of symplectic automorphisms on K3-surfaces}\label{TableMukai}
\end{table}
\renewcommand{\baselinestretch}{1.1}
The following theorem (Theorem 0.6 in \cite{mukai}) characterizes finite groups of symplectic automorphisms on K3-surfaces.
\begin{theorem}\label{mukaithm}
A finite group $G$ admits an effective symplectic action on a K3-surface if and only if it is isomorphic to a subgroup of one of the eleven groups in Table \ref{TableMukai}.
\end{theorem}
The ``if''-implication of this theorem follows from the list of eleven examples summarized in Table \ref{TableMukai}: a subgroup of one of the eleven groups acts symplectically by restricting the given action. This list of examples is, however, far from being exhaustive. It is therefore desirable to find further examples of K3-surfaces where the groups from this list occur and to describe or classify these surfaces with maximal symplectic symmetry.
\begin{definition}\label{maxsymp}
By Proposition 8.8 in \cite{mukai} there are no subgroup relations among the eleven groups in Mukai's list. Therefore, the groups are \emph{maximal finite groups of symplectic transformations}\index{maximal group of symplectic trans\-for\-ma\-tions}. We refer to the groups in this list also as \emph{Mukai groups}. \index{Mukai group}
\end{definition}
\subsection{Examples of K3-surfaces with symplectic symmetry}
We conclude this chapter by presenting two typical examples of K3-surfaces with symplectic symmetry.
\begin{example}\label{L2(7)example}
The group $L_2(7) = \mathrm{PSL}(2, \mathbb F_7)=\mathrm{GL}_3(\mathbb F_2)$ is a simple group of order 168. It is generated by the three projective transformations $\alpha, \beta, \gamma$ of $\mathbb P_1( \mathbb F_7)$ given by
\[
\alpha(x) = x+1; \quad \beta(x) =2x; \quad \gamma(x) = -x^{-1}.
\]
In terms of these generators, we define a three-dimensional representation of $L_2(7)$ by
\[
\alpha \mapsto
\begin{pmatrix}
\xi & 0 & 0\\
0 & \xi^2 & 0\\
0 & 0 & \xi^4
\end{pmatrix};\quad
\beta \mapsto
\begin{pmatrix}
0 & 0 & 1\\
1 & 0 & 0\\
0 & 1 & 0
\end{pmatrix};\,
\gamma \mapsto \frac{-1}{\sqrt{-7}}
\begin{pmatrix}
a&b&c\\
b&c&a\\
c&a&b
\end{pmatrix}
\]
where
$
\xi=e^{\frac{2\pi i }{7}},\,
a=\xi^2-\xi^5,\,
b=\xi-\xi^6,\,
c=\xi^4-\xi^3,
$
and $\sqrt{-7}= \xi+\xi^2+\xi^4-\xi^3-\xi^5-\xi^6$. Klein's quartic curve
\[
C_\text{Klein}= \{x_1x_2^3 + x_2x_3^3 + x_3x_1^3=0\} \subset \mathbb P_2
\]
is invariant with respect to the induced action of $L_2(7)$ on $\mathbb P_2$. Mukai's example of a K3-surface with symplectic $L_2(7)$-symmetry is the smooth quartic hypersurface in $\mathbb P_3$ defined by
\[
X_\text{KM} = \{x_1x_2^3 + x_2x_3^3 + x_3x_1^3 + x_4^4=0\} \subset \mathbb P_3,
\]
where the action of $L_2(7)$ is defined to be trivial on the coordinate $x_4$ and defined as above on $x_1,x_2,x_3$. Since $L_2(7)$ is a simple group, it follows
that the action is effective and symplectic. The surface $ X_\text{KM}$ is called the \emph{Klein-Mukai surface}\index{Klein-Mukai surface}. By construction, it is a cyclic degree four cover of $\mathbb P_2$ branched along Klein's quartic curve. In fact, there is an action of the group $L_2(7) \times C_4$ on $X_\text{KM}$, where the action of $C_4$ is by nonsymplectic transformations. The Klein-Mukai surface will play an important role in Sections \ref{KMsurface} and \ref{168}.
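The invariance of $C_\text{Klein}$ can be checked directly on the generators: the matrix assigned to $\beta$ permutes the coordinates $x_1, x_2, x_3$ cyclically and hence permutes the three monomials of the defining polynomial among themselves, while under the diagonal matrix assigned to $\alpha$,
\[
x_1x_2^3 \mapsto \xi^{1+6}\, x_1x_2^3, \quad x_2x_3^3 \mapsto \xi^{2+12}\, x_2x_3^3, \quad x_3x_1^3 \mapsto \xi^{4+3}\, x_3x_1^3,
\]
so each monomial is fixed since $\xi^7=1$. The invariance under $\gamma$ follows from a similar, slightly longer computation.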
\end{example}
\subsubsection{Cyclic coverings}
Since many examples of K3-surfaces are constructed as double covers we discuss the construction of branched cyclic covers with emphasis on group actions induced on the covering space.
Let $Y$ be a surface such that the Picard group of $Y$ has no torsion, i.e., there does not exist a nontrivial line bundle $E$ on $Y$ such that $E^{\otimes n}$ is trivial for some $n \in \mathbb N$.
Let $B$ be an effective and reduced divisor on $Y$ and suppose there exists a line bundle $L$ on $Y$ such that $\mathcal O_Y(B) = L^{\otimes n}$ and a section $s \in \Gamma( Y , L^{\otimes n})$ whose zero-divisor is $B$. Let $p: L \to L^{\otimes n}$ denote the bundle homomorphism mapping each element $(y,z) \in L$ for $y \in Y$ to $(y,z^n ) \in L^{\otimes n}$. The preimage $X = p^{-1}(\mathrm{Im}(s))$ of the image of $s$ is an analytic subspace of $L$. The bundle projection $L \to Y$ restricted to $X$ defines a surjective holomorphic map $X \to Y$ of degree $n$.
\[
\begin{xymatrix}{
X \subset L \ar[r]^p \ar[d]& L^{\otimes n} \supset \mathrm{Im}(s) \ar[d]\\
Y \ar[r]_{\mathrm{id}}& Y \ar@/_1pc/[u]_s
}
\end{xymatrix}
\]
Since $\mathrm{Pic}(Y)$ is torsion free, the line bundle $L$ is uniquely determined by $B$. It follows that $X$ is uniquely determined and we refer to $X$ as \emph{the} cyclic degree $n$ covering of $Y$ branched along $B$. We note that $X$ is normal and irreducible. It is smooth if the divisor $B$ is smooth (cf. I.17 in \cite{BPV}).
Let $G$ be a finite group in $\mathrm{Aut}(Y)$ and assume that the divisor $B$ is invariant, i.e., $gB =B$ for all $g \in G$. Then the pull-back bundle $g^* L^{\otimes n}$ is isomorphic to $L^{\otimes n}$. We consider the group $\mathrm{BAut}(L^{\otimes n})$ of bundle maps of $ L^{\otimes n}$ and the homomorphism $\mathrm{BAut}(L^{\otimes n}) \to \mathrm{Aut}(Y)$ mapping each bundle map to the corresponding automorphism of the base. Its kernel is isomorphic to $\mathbb C^*$. The observation $g^* L^{\otimes n} \cong L^{\otimes n}$ implies that the group $G$ is contained in the image of $\mathrm{BAut}(L^{\otimes n})$ in $\mathrm{Aut}(Y)$.
By assumption, the zero set of the section $s$ is $G$-invariant.
The bundle map induced by $g^*$ maps the section $s$ to a multiple $\chi(g) s$ of $s$ for some character $\chi: G \to \mathbb C^*$.
It follows that the bundle map $\tilde g$ induced by $\chi(g)^{-1} g^*$ stabilizes the section.
The group $\tilde G = \{ \tilde g \, | \, g \in G \} \subset \mathrm{BAut}(L^{\otimes n})$ is isomorphic to $G$ and stabilizes $\mathrm{Im}(s) \subset L^{\otimes n} $.
In order to define a corresponding action on $X$, first observe that $g^* L \cong L$ for all $g \in G$. This follows from the observation that $ g^* L \otimes L^{-1}$ is a torsion bundle and the assumption that $\mathrm{Pic}(Y)$ has no torsion. As above, we deduce that the group $G$ is contained in the image of $\mathrm{BAut}(L)$ in $\mathrm{Aut}(Y)$. Let $\overline G$ be the preimage of $G$ in $\mathrm{BAut}(L)$. Then $\overline G$ is a central $\mathbb C^*$-extension of $G$,
\[
\{\mathrm{id}\} \to \mathbb C^* \to \overline G \to G \to \{\mathrm{id}\}.
\]
The map $p: L \to L^{\otimes n}$ induces a homomorphism $p_*:\mathrm{BAut}(L) \to \mathrm{BAut}(L^{\otimes n})$. Its kernel is isomorphic to $C_n < \mathbb C^*$ and we consider the preimage $H = p_*^{-1}(\tilde G)$ in $\mathrm{BAut}(L)$. The group $H < \overline G$ is a central $C_n$-extension of $\tilde G \cong G$,
\[
\{\mathrm{id}\} \to C_n \to H \to G \to \{\mathrm{id}\}.
\]
By construction, the subset $X \subset L$ is invariant with respect to $H$. This discussion proves the following proposition.
\begin{proposition}
Let $Y$ be a surface such that $\mathrm{Pic}(Y)$ is torsion free and $G < \mathrm{Aut}(Y)$ be a finite group. If $B \subset Y$ is an effective, reduced, $G$-invariant divisor defined by a section $s \in \Gamma( Y, L^{\otimes n})$ for some line bundle $L$, then the cyclic degree $n$ covering $X$ of $Y$ branched along $B$ carries the induced action of a central $C_n$-extension $H$ of $G$ such that the covering map $\pi: X \to Y$ is equivariant.
\end{proposition}
\begin{example}[Double covers]
For any finite subgroup $G < \mathrm{PSL}(3,\mathbb C)$ and any $G$-invariant smooth curve $C \subset \mathbb P_2$ of degree six, the double cover $X$ of $\mathbb P_2$ branched along $C$ is a K3-surface with an induced action of a degree two central extension of the group $G$.
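That $X$ is indeed a K3-surface follows from the canonical bundle formula for cyclic coverings: if $\pi: X \to Y$ is the cyclic degree $n$ covering branched along $B$ with $\mathcal O_Y(B) = L^{\otimes n}$, then $\mathcal K_X = \pi^*(\mathcal K_Y \otimes L^{\otimes (n-1)})$ (cf. I.17 in \cite{BPV}). In the present situation $n=2$, $Y = \mathbb P_2$ and $L = \mathcal O_{\mathbb P_2}(3)$, so
\[
\mathcal K_X = \pi^*(\mathcal O_{\mathbb P_2}(-3) \otimes \mathcal O_{\mathbb P_2}(3)) = \mathcal O_X,
\]
i.e., the canonical bundle of $X$ is trivial. Since moreover the irregularity of $X$ vanishes, $X$ is a K3-surface.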
Many interesting examples (nos.\ 10 and 11 in Mukai's table) can be constructed this way. For example, the Hessian of Klein's curve $\mathrm{Hess}(C_\text{Klein})$ is an $L_2(7)$-invariant sextic curve and the double cover of $\mathbb{P}_2$ branched along $\mathrm{Hess}(C_\text{Klein})$ is a K3-surface with a symplectic action of $L_2(7)$ centralized by the antisymplectic covering involution (cf. Section \ref{168}).
\end{example}
\chapter{Equivariant Mori reduction}\label{chapter mmp}
This chapter deals with a detailed discussion of Example 2.18 in \cite{kollarmori} (see also Section 2.3 in \cite{Mori}) and introduces a minimal model program for surfaces
respecting finite groups of symmetries. Given a projective algebraic
surface $X$ with $G$-action, in analogy to the usual minimal model program, one obtains from $X$ a $G$-minimal model $X_{G\text{-min}}$ by a finite number of $G$-equivariant blow-downs, each contracting a finite number of disjoint (-1)-curves. The surface $X_{G\text{-min}}$ is either a conic bundle over a smooth curve or a Del Pezzo surface, or it has nef canonical bundle. The case $G \cong C_2$ is also discussed in \cite{Bayle}, the case $G \cong C_p$ for $p$ prime in \cite{fernex}. As indicated in the introduction, applications can be found throughout the literature.
\section{The cone of curves and the cone theorem}
Throughout this chapter we let $X$ be a smooth projective algebraic surface and let $\mathrm{Pic}(X)$ denote the group of isomorphism classes of line bundles on $X$.
\begin{definition}
A \emph{divisor}\index{divisor} on $X$ is a formal linear combination of irreducible curves $D = \sum a_i C_i$ with $a_i \in \mathbb Z$.
A \emph{1-cycle} \index{1-cycle} on $X$ is a formal linear combination of irreducible curves $C = \sum b_i C_i$ with $b_i \in \mathbb R$. A 1-cycle is \emph{effective} if $b_i \geq 0$ for all $i$.
We define a pairing $\mathrm{Pic}(X) \times \{\text{divisors}\} \to \mathbb Z$ by $(L,D) \mapsto L \cdot D = \deg(L|_D)$. Extending by linearity, this defines a pairing $\mathrm{Pic}(X) \times \{\text{1-cycles}\} \to \mathbb R$.
\nomenclature{$L\cdot C$}{the intersection number of a line bundle $L$ and a 1-cycle $C$}\index{intersection number}
We use this notation for the intersection number also for pairs of divisors $C$ and $D$ and write $C\cdot D = \deg(\mathcal O_X(D)|_C)$.
Two 1-cycles $C,C'$ are called \emph{numerically equivalent} \index{numerically equivalent} if $L\cdot C = L \cdot C'$ for all $L \in \mathrm{Pic}(X)$. We write $C \equiv C'$. The numerical equivalence class of a 1-cycle $C$ is denoted by $[C]$.
The space of all 1-cycles with real coefficients modulo numerical equivalence is a real vector space denoted by $N_1(X)$.
Note that $N_1(X)$ is finite-dimensional.
\begin{remark}
Let $L$ be a line bundle on $X$ and let $L^{-1}$ denote its dual bundle. Then $L^{-1} \cdot C = -L \cdot C$ for all $[C]\in N_1(X)$. We therefore write $L^{-1} = -L$ in the following.
\end{remark}
\begin{definition}
A line bundle $L$ is called \emph{nef}\index{nef} if $L \cdot C \geq 0$ for all irreducible curves $C$.
\end{definition}
We set
\[
NE(X) = \{ \sum a_i[C_i] \ | \ C_i \subset X \text{ irreducible curve},\, 0 \leq a_i \in \mathbb R\} \subset N_1(X).
\]
The closure $\overline{NE}(X)$ of $NE(X)$ in $N_1(X)$ is called the \emph{Kleiman-Mori cone} or \emph{cone of curves} \index{cone of curves} on $X$. \nomenclature{$\overline{NE}(X)$}{the cone of curves on $X$}
For a line bundle $L$, we write $\overline{NE}(X)_{L\geq 0} = \{ [C]\in N_1(X) \ |\ L \cdot C\geq 0 \} \cap \overline{NE}(X)$. Analogously, we define $\overline{NE}(X)_{L\leq 0}$, $\overline{NE}(X)_{L > 0}$, and $\overline{NE}(X)_{L < 0}$.
\end{definition}
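For example, if $X = \mathbb P_2$, then $N_1(X) \cong \mathbb R$ is spanned by the class $[\ell]$ of a line and $\overline{NE}(X) = \mathbb R_{\geq 0}[\ell]$; in particular, a line bundle $\mathcal O_{\mathbb P_2}(d)$ is nef if and only if $d \geq 0$.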
Using this notation, we phrase Kleiman's ampleness criterion (cf. Theorem 1.18 in \cite{kollarmori}) as follows.
\begin{theorem}
A line bundle $L$ on $X$ is ample if and only if $\overline{NE}(X)_{L>0} = \overline{NE}(X)\backslash \{0\}$.
\end{theorem}
\begin{definition}
Let $V$ be a finite-dimensional real vector space. A subset $N \subset V$ is called a \emph{cone} if $0\in N$ and $N$ is closed under multiplication by positive real numbers.
A subcone $M \subset N$ is called \emph{extremal} if $u, v \in M$ whenever $u,v \in N$ and $u+v \in M$. An extremal subcone is also referred to as an \emph{extremal face}. A 1-dimensional extremal face is called an \emph{extremal ray}\index{extremal ray}.
For subsets $A, B \subset V$ we define $A+B := \{ a+b \,|\, a\in A, b\in B\}$.
\end{definition}
The cone of curves $\overline{NE}(X)$ is a convex cone in $N_1(X)$ and the following cone theorem, which is stated here only for surfaces, describes its geometry (cf. Theorem 1.24 in \cite{kollarmori}).
\begin{theorem}\label{conethm}
Let $X$ be a smooth projective surface and let $\mathcal{K}_X$ denote the canonical line bundle on $X$.
There are countably many rational curves $C_i \subset X$ such that $0 < -\mathcal{K}_X \cdot C_i \leq \mathrm{dim}(X) +1 $ and
\[
\overline{NE}(X) = \overline{NE}(X) _{\mathcal{K}_X \geq 0} + \sum_i \mathbb R_{\geq 0} [C_i].
\]
For any $\varepsilon>0$ and any ample line bundle $L$
\[
\overline{NE}(X) = \overline{NE}(X) _{(\mathcal{K}_X+\varepsilon L) \geq 0} + \sum_{\text{finite}} \mathbb R_{\geq 0} [C_i].
\]
\end{theorem}
\section{Surfaces with group action and the cone of invariant curves}
Let $X$ be a smooth projective surface and let $G< \mathrm{Aut}_{\mathcal{O}}(X)$ be a group of holomorphic transformations of $X$. We consider the induced action on the space of 1-cycles on $X$. For $g \in G$ and an irreducible curve $C_i$ we denote by $g C_i$ the image of $C_i$ under $g$. For a 1-cycle $C = \sum a_i C_i$ we define $gC = \sum a_i (g C_i)$. This defines a $G$-action on the space of 1-cycles.
\begin{lemma}
Let $C_1, C_2$ be 1-cycles and $C_1 \equiv C_2$. Then $gC_1 \equiv gC_2$ for any $g \in G$.
\end{lemma}
\begin{proof}
The 1-cycle $gC_1$ is numerically equivalent to $g C_2$ if and only if
$L \cdot (gC_1) = L \cdot (gC_2)$ for all $L \in \mathrm{Pic}(X)$. For $g \in G$ and
$L \in \mathrm{Pic}(X)$ let $g^*L$ denote the pullback of $L$ by
$g$. The claim above is equivalent to $((g^{-1})^*L) \cdot (gC_1) = ((g^{-1})^*L) \cdot (gC_2)$ for all $L \in \mathrm{Pic}(X)$. Now
$$((g^{-1})^*L) \cdot (gC_1) = \mathrm{deg}((g^{-1})^* L|_{gC_1}) = \mathrm{deg}(L|_{C_1}) = L \cdot C_1 = L \cdot C_2 = ((g^{-1})^*L) \cdot (gC_2)$$
for all $L \in\mathrm{Pic}(X)$.
\end{proof}
This lemma allows us to define a $G$-action on $N_1(X)$ by setting $g[C] := [gC]$ and extending by linearity. We write $N_1(X)^G = \{ [C] \in N_1(X) \ | \ [C]=[gC] \text{ for all } g \in G\}$, the set of invariant 1-cycles modulo numerical equivalence. This space is a linear subspace of $N_1(X)$.
Since the cone $NE(X)$ is a $G$-invariant set it follows that its closure $\overline{NE}(X)$ is $G$-invariant. The subset of invariant elements in $\overline{NE}(X)$ is denoted by $\overline{NE}(X)^G$. \nomenclature{$\overline{NE}(X)^G$}{the intersection of $\overline{NE}(X)$ with the space of invariant numerical equivalence classes of 1-cycles}
\begin{remark}
$
\overline{NE}(X)^G = \overline{NE(X) \cap N_1(X)^G}=\overline{NE}(X) \cap N_1(X)^G .
$
\end{remark}
The subcone $\overline{NE}(X)^G$ of $\overline{NE}(X)$ is seen to inherit the geometric properties of $\overline{NE}(X)$ established by the cone theorem.
Note however that the extremal rays of $\overline{NE}(X)^G$ are in general neither extremal in $\overline{NE}(X)$ (cf. Figure \ref{moribild}) nor generated by classes of curves but by classes of 1-cycles.
\begin{figure}[H]
\centering
\subfigure[The cone of curves and its \textcolor{red}{extremal rays}]
{\includegraphics[width=0.45\textwidth]{mori1}}\hspace{1cm}
\subfigure[The cone of curves and the \textcolor{green}{invariant subspace $N_1(X)^G$}.
Their intersection $\overline{NE}(X)^G$ has a \textcolor{blue}{new extremal ray}.]
{\includegraphics[width=0.45\textwidth]{mori2}}
\caption{The extremal rays of $\overline{NE}(X)^G$ are not extremal in $\overline{NE}(X)$}\label{moribild}
\end{figure}
\begin{definition}
The extremal rays of $\overline{NE}(X)^G$ are called \emph{$G$-extremal rays} \index{extremal ray!$G$-extremal rays}.
\end{definition}
\begin{lemma}\label{Gextremalray} Let $G$ be a finite group and let $R$ be a $G$-extremal ray with $\mathcal{K}_X \cdot R<0$. Then there exists a rational curve $C_0$ such that $R$ is generated by the class of the 1-cycle $C = \sum_{g \in G} gC_0$.
\end{lemma}
\begin{proof}
Consider a $G$-extremal ray $R = \mathbb{R }_{\geq 0}[E]$ where $[E] \in \overline{NE}(X)^G \subset \overline{NE}(X)$. By the cone theorem (Theorem \ref{conethm}) it can be written as $[E] = [\sum_i a_i C_i] +[F]$,
where $ \mathcal{K}_X \cdot F \geq 0$, $a_i \geq 0$ and $C_i$ are rational curves. Let $|G|$ denote the order of $G$ and let $[GF] = G[F] = \sum_{g\in G} g[F]$. Since $g[E]=[E]$ for all $g\in G$ we can write
\[
|G| [E] = \sum_{g\in G} g[E] = \sum_{g\in G}([\sum_i a_igC_i] + g[F]) = \sum_i a_i G[C_i] + G[F].
\]
The element $[\sum a_i (GC_i)] + [GF]$ of the extremal ray $\mathbb{R }_{\geq 0}[E]$ is decomposed as the sum of two elements of $\overline{NE}(X)^G$. Since $R$ is extremal in $\overline{NE}(X)^G$, both must lie in $R=\mathbb{R }_{\geq 0}[E]$.
Consider $[GF] \in R$. Since $g^*\mathcal{K}_X \equiv \mathcal{K}_X$ for all $g \in G$, we obtain
$$\mathcal{K}_X \cdot (GF) = \sum_{g\in G} \mathcal{K}_X \cdot (gF) = \sum_{g \in G}(g^*\mathcal{K}_X) \cdot F= |G|\mathcal{K}_X \cdot F \geq 0.$$
Since $\mathcal{K}_X \cdot R < 0$ by assumption, this implies $[F]=0$ and $\mathbb{R }_{\geq 0}[E] = \mathbb{R }_{\geq 0}[\sum a_i (GC_i)]$. Again using the fact that $R$ is extremal in $\overline{NE}(X)^G$, we conclude that each summand of $[\sum a_i (GC_i)]$ must be contained in $R=\mathbb{R }_{\geq 0}[E]$ and the extremal ray $\mathbb{R }_{\geq 0}[E]$ is therefore generated by $[GC_i]$ for some $C_i$ chosen such that $[GC_i] \neq 0$. This completes the proof of the lemma.
\end{proof}
\section{The contraction theorem and minimal models of surfaces}\label{mmp}
In this section, we state the contraction theorem for smooth
projective surfaces. The proof of this theorem can be found e.g. in
\cite{kollarmori} and needs to be modified slightly in order to give an equivariant contraction theorem in the next section.
\begin{definition}
Let $X$ be a smooth projective surface and let $F \subset \overline{NE}(X)$ be an extremal face. A morphism $\mathrm{cont}_F: X \to Z$ is called the \emph{contraction of $F$} \index{contraction morphism} if \nomenclature{$\mathrm{cont}_F$}{the contraction of an extremal face $F$}
\begin{itemize}
\item $(\mathrm{cont}_F)_*\mathcal{O}_X = \mathcal{O}_Z$ and
\item $\mathrm{cont}_F(C) = \{\text{point}\}$ for an irreducible curve $C\subset X$ if and only if $[C] \in F$.
\end{itemize}
\end{definition}
The following result is known as the contraction theorem (cf. Theorem 1.28 in \cite{kollarmori}).
\begin{theorem}\label{contractionthm} \index{contraction theorem}
Let $X$ be a smooth projective surface and $R \subset \overline{NE}(X)$ an extremal ray such that $\mathcal{K}_X \cdot R<0$. Then the contraction morphism $\mathrm{cont}_R: X \to Z$ exists and is one of the following types:
\begin{enumerate}
\item{$Z$ is a smooth surface and $X$ is obtained from $Z$ by blowing up a point. }
\item{$Z$ is a smooth curve and $\mathrm{cont}_R:X \to Z $ is a minimal ruled surface over $Z$.}
\item{$Z$ is a point and $-\mathcal{K}_X$ is ample. }
\end{enumerate}
\end{theorem}
The contraction theorem leads to the minimal model program for surfaces: Starting from $X$, if $\mathcal{K}_X$ is not nef, i.e., there exists an irreducible curve $C$ such that $\mathcal{K}_X \cdot C < 0$, then $\overline{NE}(X)_{\mathcal{K}_X <0}$ is nonempty and there exists an extremal ray $R$ which can be contracted. The contraction morphism either gives a new surface $Z$ (in case 1) or provides a structure theorem for $X$, which is then either a minimal ruled surface over a smooth curve (case 2) or isomorphic to $\mathbb P_2$ (case 3). Note that the contraction theorem as stated above only implies that $-\mathcal{K}_X$ is ample in case 3. It can be shown that $X$ is in fact $\mathbb P_2$. This is omitted here since the statement does not transfer to the equivariant setup. In case 1, we can repeat the procedure if $\mathcal K_Z$ is not nef. Since the Picard number drops with each blow-down, this process terminates after a finite number of steps. The surface obtained from $X$ at the end of this program is called a \emph{minimal model} \index{minimal model} of $X$.
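As an illustration, consider the simplest instance of this program: let $X$ be the blow-up of $\mathbb P_2$ at a single point with exceptional curve $E$. Then $R = \mathbb R_{\geq 0}[E]$ is an extremal ray with $\mathcal K_X \cdot R < 0$, its contraction $\mathrm{cont}_R: X \to \mathbb P_2$ is the blow-down of $E$, and since $-\mathcal K_{\mathbb P_2}$ is ample, the program terminates with the minimal model $\mathbb P_2$.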
\begin{remark}
Let $E$ be a (-1)-curve on $X$. If $C$ is any irreducible curve on $X$, then $E \cdot C < 0$ if and only if $C =E$. It follows that $\overline{NE}(X) = \mathrm{span}(\mathbb R_{\geq 0}[E], \overline{NE}(X)_{E \geq 0})$. Now $E^2 = -1$ implies $ E \not\in \overline{NE}(X)_{E \geq 0}$ and $E$ is seen to generate an extremal ray in $\overline{NE}(X)$. By adjunction, $\mathcal K_X \cdot E < 0$. The contraction of the extremal ray $R = \mathbb R_{\geq 0}[E]$ is precisely the contraction of the (-1)-curve $E$. Conversely, each extremal contraction of type 1 above is the contraction of a (-1)-curve generating the extremal ray $R$.
\end{remark}
\section{Equivariant contraction theorem and $G$-minimal models}
We state and prove an equivariant contraction theorem for smooth projective surfaces with finite groups of symmetries. Most steps in the proof are carried out in analogy to the proof of the standard contraction theorem.
\begin{definition}\label{equiContraction}
Let $G$ be a finite group, let $X$ be a smooth projective surface with $G$-action and let $R \subset \overline{NE}(X)^G$ be a $G$-extremal ray. A morphism $\mathrm{cont}_R^G: X \to Z$ is called the \emph{$G$-equivariant contraction of $R$} \index{contraction morphism!equivariant contraction morphism} if
\begin{samepage}
\begin{itemize}
\item $\mathrm{cont}_R^G$ is equivariant with respect to $G$
\item $(\mathrm{cont}_R^G)_*\mathcal{O}_X = \mathcal{O}_Z$ and
\item $\mathrm{cont}_R^G(C) = \{\text{point}\}$ for an irreducible curve $C\subset X$ if and only if $[GC] \in R$.
\end{itemize}
\end{samepage}
\end{definition}
\begin{theorem}\index{contraction theorem!equivariant contraction theorem}
Let $G$ be a finite group, let $X$ be a smooth projective surface with $G$-action and let $R$ be a $G$-extremal ray with $ \mathcal{K}_X \cdot R <0$. Then $R$ can be spanned by the class of $C= \sum_{g \in G} gC_0$ for a rational curve $C_0$, the equivariant contraction morphism $\mathrm{cont}_R^G: X \to Z$ exists and is one of the following three types:
\begin{enumerate}
\item $C^2 <0$ and $gC_0$ are smooth disjoint (-1)-curves. The map $\mathrm{cont}_R^G: X \to Z$ is the equivariant blow down of the disjoint union $\bigcup_{g \in G} gC_0$.
\item $C^2 =0$ and any connected component of $C$ is either irreducible or the union of two (-1)-curves intersecting transversally at a single point. The map $\mathrm{cont}_R^G: X \to Z$ defines an equivariant conic bundle over a smooth curve.
\item $C^2 >0$, $N_1(X)^G = \mathbb{R}$ and $\mathcal{K}_X^{-1}$ is ample, i.e., $X$ is a Del Pezzo surface. The map $\mathrm{cont}_R^G: X \to Z$ is constant, $Z$ is a point.
\end{enumerate}
\end{theorem}
\begin{proof}
Let $R$ be a $G$-extremal ray with $\mathcal{K}_X \cdot R <0$.
It follows from Lemma \ref{Gextremalray} that the ray $R$ can be
spanned by a 1-cycle of the form $C = GC_0$ for a rational curve
$C_0$. Let $n = |GC_0|$ and write $C = \sum_{i=1}^n C_i$
where the $C_i$ correspond to $gC_0$ for some $g \in G$.
We distinguish three cases according to the sign of the self-intersection of $C$.
\medskip
\textbf{The case $C^2 <0$}
We write $0 > C^2 = \sum_i C_i^2 + \sum_{i\neq j}C_i \cdot C_j$. Since the $C_i$ are distinct irreducible curves, we know $C_i \cdot C_j \geq 0$ for all $i \neq j$ and hence $\sum_i C_i^2 < 0$. The curves $C_i$ form a single $G$-orbit, so they all have the same, necessarily negative, self-intersection. Moreover, by assumption $\mathcal{K}_X \cdot C = \sum_i \mathcal{K}_X \cdot C_i = n(\mathcal{K}_X \cdot C_i) <0$, so that $\mathcal{K}_X \cdot C_i < 0$ for all $i$. Since the $C_i$ are rational, the adjunction formula reads
$
2g(C_i) -2 = -2 = \mathcal{K}_X \cdot C_i + C_i^2
$. Consequently, $\mathcal{K}_X \cdot C_i = -1$ and $C_i^2 = -1$.
It remains to show that the curves $C_i$ are pairwise disjoint. We assume the contrary; without loss of generality
$C_1 \cap C_2 \neq \emptyset$. Then $gC_1 \cap g C_2 \neq \emptyset $ for all $g \in G$ and $\sum_{i \neq j} C_i \cdot C_j \geq n$. This contradicts $0>C^2 = \sum_i C_i^2 + \sum_{i\neq j}C_i \cdot C_j = -n + \sum_{i\neq j}C_i \cdot C_j$.
We let $\mathrm{cont}_R^G: X \to Z$ be the blow-down of $\bigcup_{g \in G} gC_0$ which is equivariant with respect to the induced action on $Z$ and fulfills $(\mathrm{cont}_R^G)_*\mathcal{O}_X = \mathcal{O}_Z$. If $D$ is an irreducible curve such that $\mathrm{cont}_R^G(D)= \{\text{point}\}$, then $D= gC_0$ for some $g \in G$. In particular, $GD= GC_0 =C$ and $[GD] \in R$. Conversely, if $[GD] \in R$ for some irreducible curve $D$, then $[GD]= \lambda [C]$ for some $\lambda \in \mathbb R _{\geq 0}$. Now $(GD)\cdot C = \lambda C^2 <0$. It follows that $D$ is an irreducible component of $C$.
\medskip
\textbf{The case $C^2 >0$}
This case is treated in precisely the same way as the corresponding case in the standard contraction theorem.
Our aim is to show that $[C]$ is in the interior of $\overline{NE}(X)^G$. This is a consequence of the following lemma.
\begin{lemma}
Let $X$ be a projective surface and let $L$ be an ample line bundle on $X$. Then the set $Q = \{[E] \in N_1(X) \ |\ E^2 >0\}$ has two connected components
$Q^+=\{ [E] \in Q \ |\ L \cdot E >0\}$ and $Q^-=\{ [E] \in Q \ |\ L \cdot E <0\}$. Moreover, $Q^+ \subset \overline{NE}(X)$.
\end{lemma}
This result follows from the Hodge Index Theorem (cf. Theorem IV.2.14 in \cite{BPV}) and the fact that $E^2 >0$ implies that either $E$ or $-E$ is effective.
For a proof of this lemma, we refer the reader to Corollary 1.21 in \cite{kollarmori}.
We consider an effective cycle $C = \sum C_i$ with $C^2 >0$. By the above lemma, $[C]$ is contained in $Q^+$ which is an open subset of $N_1(X)$ contained in $\overline{NE}(X)$. It follows that $[C]$ lies in the interior of $\overline{NE}(X)$. The $G$-extremal ray $R = \mathbb{R }_{\geq 0} [C]$ can only lie in the interior if $\overline{NE}(X)^G= R$. By assumption $\mathcal{K}_X \cdot R <0$, so that $\mathcal{K}_X$ is negative on $\overline{NE}(X)^G \backslash \{0\}$ and therefore on $\overline{NE}(X) \backslash \{0\}$. The anticanonical bundle $\mathcal{K}_X^{-1}$ is ample by Kleiman's ampleness criterion and $X$ is a Del Pezzo surface.
We can define a constant map $\mathrm{cont}_R^G$ mapping $X$ to a point $Z$; this is the equivariant contraction of $R = \overline{NE}(X)^G$ in the sense of Definition \ref{equiContraction}.
\medskip
\textbf{The case $C^2 =0$}
Our aim is to show that for some $m >0$ the linear system $|mC|$ defines a conic bundle structure on $X$. The argument is separated into a number of lemmata.
For the convenience of the reader, we also include the proofs of well-known preparatory lemmata which do not involve group actions.
Recall that $\mathcal{O}(D)$ denotes the line bundle associated to the divisor $D$ on $X$.
\begin{lemma}
$H^2(X, \mathcal{O}(mC)) =0$ for $m \gg 0$.
\end{lemma}
\begin{proof}
By Serre's duality (cf. Theorem I.5.3 in \cite{BPV})
\[
h^2(X, \mathcal{O}(mC))= h^0(\mathcal{O}(-mC)\otimes \mathcal K_X).
\]
Since $C$ is an effective divisor on $X$, it follows that $h^0(\mathcal{O}(-mC)\otimes \mathcal K_X)=0$ for $m \gg 0$.
\end{proof}
\begin{lemma}
For $m \gg 0$ the dimension $h^0(X,\mathcal{O}(mC))$ of $H^0(X,\mathcal{O}(mC))$ is at least two.
\end{lemma}
\begin{proof}
Let $m$ be such that $h^2(X, \mathcal{O}(mC)) =0$. For a line bundle $L$ on $X$ we denote by $\chi(L) = \sum_i(-1)^i h^i(X,L)$ the Euler characteristic of $L$. Using the theorem of Riemann-Roch (cf. Theorem V.1.6 in \cite{hartshorne}),
\begin{align*}
h^0(X, \mathcal{O}(mC)) &\geq h^0(X, \mathcal{O}(mC))- h^1(X, \mathcal{O}(mC))\\
& = h^0(X, \mathcal{O}(mC)) - h^1(X, \mathcal{O}(mC)) + h^2(X, \mathcal{O}(mC))\\
& = \chi(\mathcal{O}(mC))\\
& = \chi (\mathcal{O}) + \frac{1}{2}(\mathcal{O}(mC)\otimes\mathcal{K}_X^{-1})\cdot(mC)\\
& \overset{C^2=0}{=} \chi (\mathcal{O}) - \frac{m}{2}\mathcal{K}_X \cdot C.
\end{align*}
Now $\mathcal{K}_X \cdot C<0$ implies the desired behaviour of $h^0(X, \mathcal{O}(mC))$.
\end{proof}
For a divisor $D$ on $X$ we denote by $|D|$ \nomenclature{$|D|$}{the linear system of a divisor $D$}\index{linear system}the complete \emph{linear system of $D$}, i.e., the set of all effective divisors linearly equivalent to $D$. A point $ p \in X$ is called a \emph{base point}\index{base point} of $|D|$ if $p \in \mathrm{support}(C)$ for all $C \in |D|$.
\begin{lemma}
There exists $m' >0$ such that the linear system $|m'C|$ is base point free.
\end{lemma}
\begin{proof}
We first exclude a positive dimensional set of base points. Let $m$ be chosen such that $h^0(X, \mathcal{O}(mC)) \geq 2$. We denote by $B$ the \emph{fixed part} of the linear system $|mC|$, i.e.,
the biggest divisor $B$ such that each $D \in |mC|$ can be decomposed as $D = B + E_D$ for some effective divisor $E_D$.
The support of $B$ is the union of all positive dimensional components of the set of base points of $|mC|$. We assume that $B$ is nonempty.
The choice of $m$ guarantees that $|mC|$ is not fixed, i.e., there exists $D \in |mC|$ with $D \neq B$.
Since $\mathrm{supp}(B) \subset \{s=0\}$ for all $ s \in \Gamma(X, \mathcal O(mC))$, each irreducible component of $\mathrm{supp}(B)$ is an irreducible component of $C$ and $G$-invariance of $C$ implies $G$-invariance of the fixed part of $|mC|$. It follows that $B=m_0 C$ for some $m_0 < m$.
Decomposing $|mC|$ into the fixed part $B = m_0C$ and the remaining \emph{free part} $|(m-m_0)C|$ shows that some multiple $|m'C|$ for $m' >0$ has no fixed components.
The linear system $|m'C|$ has no isolated base points since these would
correspond to isolated points of intersection of divisors linearly equivalent to $m'C$. Such intersections are excluded by $C^2=0$.
\end{proof}
We consider the base point free linear system $|m'C|$ and the induced morphism
\begin{align*}
\varphi =\varphi_{|m'C|}: &X \to \varphi(X) \subset \mathbb P(\Gamma(X,\mathcal{O}(m'C))^*)\\
&x \mapsto \{ s \in \Gamma(X,\mathcal{O}(m'C)) \, | \, s(x)=0 \}
\end{align*}
Since $C$ is $G$-invariant, it follows that $\varphi$ is an equivariant map with respect to the action of $G$ on $\mathbb P(\Gamma(X,\mathcal{O}(m'C))^*)$ induced by pullback of sections.
Let us study the fibers of $\varphi$. Let $z$ be a linear hyperplane in $\Gamma(X,\mathcal{O}(m'C))$. By definition, $\varphi^{-1}(z)= \bigcap _{s\in z}(s)_0$ where $(s)_0$ denotes the zero set of the section $s$. Since $(s)_0$ is linearly equivalent to $m'C$ and $C^2=0$, the intersection $\bigcap _{s\in z}(s)_0$ does not consist of isolated points but all $(s)_0$ with $s \in z$ have a common component. In particular, each fiber is one-dimensional.
Let $f: X \to Z$ be the Stein factorization of $\varphi: X \to \varphi(X)$. The space $Z$ is normal and 1-dimensional, i.e., $Z$ is a smooth curve. Note that there is a $G$-action on the smooth curve $Z$ such that $f$ is equivariant.
\begin{lemma}
The map $f: X \to Z$ defines an equivariant conic bundle\index{conic bundle}, i.e., an equivariant fibration with general fiber isomorphic to $\mathbb{P}_1$.
\end{lemma}
\begin{proof}
Let $F$ be a smooth fiber of $f$. By construction, $F$ is a component of $(s)_0$ for some $s \in \Gamma(X,\mathcal{O}(m'C))$. We can find an effective 1-cycle $D$ such that $(s)_0 = F+D$. Averaging over the group $G$ we obtain
$
\sum_{g\in G}gF + \sum_{g\in G}gD = \sum_{g\in G}g(s)_0
$.
Recalling $(s)_0 \sim m'C $ and $[C] \in \overline{NE}(X)^G$ we deduce
\[
[\sum_{g\in G}gF + \sum_{g\in G}gD] = [\sum_{g\in G}g(s)_0] = m'[\sum_{g\in G}gC]= m'|G| [C]
\]
showing that $[\sum_{g\in G}gF + \sum_{g\in G}gD]$ is contained in the $G$-extremal ray generated by $[C]$. Now by the definition of extremality $[\sum_{g\in G}gF] = \lambda [C] \in \mathbb{R }^{>0}[C]$ and therefore $\mathcal{K}_X \cdot (\sum_{g\in G}gF) <0$. This implies $\mathcal{K}_X \cdot F<0$.
In order to determine the self-intersection of $F$, we first observe $(\sum_{g\in G}gF)^2= \lambda^2 C^2 =0$. Since $F$ is a fiber of a $G$-equivariant fibration, we know that $\sum_{g\in G}gF = kF + kF_1 + \dots + kF_l$ where $F, F_1, \dots, F_l$ are distinct fibers of $f$ and $k \in \mathbb N ^{>0}$. Now
$0=(\sum_{g\in G}gF)^2 = (l+1)k^2F^2$
shows $F^2=0$. The adjunction formula then implies $g(F)=0$ and $F$ is isomorphic to $\mathbb P_1$.
\end{proof}
The map $\mathrm{cont}_R^G:=f$ is equivariant and fulfills $f_* \mathcal{O}_X = \mathcal{O}_Z$ by Stein's factorization theorem. Let $D$ be an irreducible curve in $X$ such that $f$ maps $D$ to a point, i.e., $D$ is contained in a fiber of $f$. Going through the same arguments as above one checks that $[GD] \in R$. Conversely, if $D$ is an irreducible curve in $X$ such that $[GD] \in R$ it follows that $(GD) \cdot C=0$. If $D$ is not contracted by $f$, then $f(D) = Z$ and $D$ meets every fiber of $f$. In particular, $D \cdot C >0$, a contradiction. It follows that $D$ must be contracted by $f$.
This completes the proof of the equivariant contraction theorem.
\end{proof}
The singular fibers of the conic bundle in case 2 of the theorem above are characterized by the following lemma.
\begin{lemma}\label{singular fibers of conic bundle}
Let $R = \mathbb R ^{>0} [C]$ be a $\mathcal K_X$-negative $G$-extremal ray with $C^2=0$. Let $\mathrm{cont}_R^G:=f: X \to Z$ be the equivariant contraction of $R$ defining a conic bundle structure on $X$. Then every singular fiber of $f$ is a union of two (-1)-curves intersecting transversally.
\end{lemma}
\begin{proof}
Let $F$ be a singular fiber of $f$. The same argument as in the previous lemma yields that $\mathcal{K}_X \cdot F<0$ and $F^2 =0$. Since $F$ is connected, adjunction implies that the arithmetic genus of $F$ is zero and $\mathcal{K}_X \cdot F = -2$. It follows from the assumption on $F$ being singular that $F$ must be reducible. Let $F = \sum F_i$ be the decomposition into irreducible components. Now $g(F)=0$ implies $g(F_i)=0$ for all $i$.
We apply the same argument as above to the component $F_i$ of $F$: after averaging over $G$ we deduce that $GF_i$ is in the $G$-extremal ray $R$ and $ \mathcal K _X \cdot F_i <0$.
Since $-2 = \mathcal K _X \cdot F = \sum \mathcal K _X \cdot F_i$, we may conclude that $F = F_1 +F_2$ and $F_i^2=-1$.
The desired result follows.
\end{proof}
\section*{$G$-minimal models of surfaces}
Let $X$ be a surface with $G$-action such that $\mathcal{K}_X$ is not nef, i.e., $\overline{NE}(X)_{\mathcal{K}_X<0}$ is nonempty.
\begin{lemma}
There exists a $G$-extremal ray $R$ such that $\mathcal{K}_X \cdot R<0$.
\end{lemma}
\begin{proof}
Let $[C] \in \overline{NE}(X)_{\mathcal{K}_X<0}\neq \emptyset$ and consider $[GC] \in \overline{NE}(X)^G$. The $G$-average $GC = \sum_{g \in G} gC$ of a $\mathcal{K}_X$-negative effective curve is again $\mathcal{K}_X$-negative. It follows that $\overline{NE}(X)^G_{\mathcal{K}_X<0}$ is nonempty.
Let $L$ be a $G$-invariant ample line bundle on $X$.
By the cone theorem,
for any $\varepsilon>0$
\[
\overline{NE}(X)^G = \overline{NE}(X)^G _{(\mathcal{K}_X+\varepsilon L) \geq 0} + \sum_{\text{finite}} \mathbb R_{\geq 0} G[C_i],
\]
where $\mathcal K _X \cdot C_i < 0$ for all $i$.
Since $\overline{NE}(X)^G_{\mathcal{K}_X<0}$ is nonempty, we may choose $\varepsilon>0$ such that $\overline{NE}(X)^G \neq \overline{NE}(X)^G _{(\mathcal{K}_X+\varepsilon L) \geq 0}$.
If the ray $R_1 = \mathbb R_{\geq 0} G[C_1]$ is not extremal in $\overline{NE}(X)^G$, then its generator $G[C_1]$ can be decomposed as a sum of elements of $\overline{NE}(X)^G$ not contained in $R_1$. It follows that
\[
\overline{NE}(X)^G = \overline{NE}(X)^G _{(\mathcal{K}_X+\varepsilon L) \geq 0} + \underset{\text{finite}}{\sum_{i\neq 1}} \mathbb R_{\geq 0} G[C_i],
\]
i.e., the ray $R_1$ is superfluous in the formula.
By assumption $\overline{NE}(X)^G \neq \overline{NE}(X)^G _{(\mathcal{K}_X+\varepsilon L) \geq 0}$, so we cannot remove all rays $R_i$ from the formula; hence at least one ray $R_i = \mathbb R_{\geq 0} G[C_i]$ is $G$-extremal.
\end{proof}
We apply the equivariant contraction theorem to $X$:
In case 1 we obtain from $X$ a new surface $Z$ by blowing down a $G$-orbit of disjoint (-1)-curves. There is a canonically defined holomorphic $G$-action on $Z$ such that the blow-down is equivariant. If $\mathcal{K}_Z$ is not nef, we repeat the procedure; since each blow-down decreases the Picard number, it stops after a finite number of steps. In case 2 we obtain an equivariant conic bundle structure on $X$. In case 3 we conclude that $X$ is a Del Pezzo surface with $G$-action. We call the $G$-surface obtained from $X$ at the end of this procedure a \emph{$G$-minimal model of $X$} \index{minimal model!$G$-minimal model}.
As a special case, we consider a rational surface $X$ with $G$-action.
Since the canonical bundle $\mathcal{K}_X$ of a rational surface $X$ is never nef (cf. Theorem VI.2.1 in \cite{BPV}), a $G$-minimal model of $X$ is an equivariant conic bundle over $Z$ or a Del Pezzo surface with $G$-action. Note that the base curve $Z$ must be rational: if $Z$ is not rational, one finds nonzero holomorphic one-forms on $Z$. Pulling these back to $X$ gives rise to nonzero holomorphic one-forms on the rational surface $X$, a contradiction.
This proves the well-known classification of $G$-minimal models of rational surfaces (cf. \cite{maninminimal}, \cite{isk}).
Although this classification classically does not rely on Mori theory, the proof given above is based on Mori's approach. We therefore refer to an equivariant reduction $Y \to Y_\mathrm{min}$ as an \emph{equivariant Mori reduction}\index{equivariant Mori reduction}.
In the following chapters we will apply the equivariant minimal model program to quotients of K3-surfaces by nonsymplectic automorphisms.
\chapter{Centralizers of antisymplectic involutions}\label{chapterlarge}
This chapter is dedicated to a rough classification of K3-surfaces with anti\-sym\-plec\-tic involutions centralized by large groups of symplectic transformations (Theorem \ref{roughclassi}).
We consider a K3-surface $X$ with an action of a finite group $G
\times C_2 < \mathrm{Aut}(X)$ and assume that the action of $G$ is by
symplectic transformations whereas $C_2$ is generated by an
antisymplectic involution $\sigma$ centralizing $G$. Furthermore, we assume that $\mathrm{Fix}_X(\sigma) \neq \emptyset$. Let $\pi: X \to X/ \sigma = Y$ denote the quotient map. The quotient surface $Y$ is a smooth rational $G$-surface to which we apply the equivariant minimal model program developed in the previous chapter. A $G$-minimal model of $Y$ can either be a Del Pezzo surface or an equivariant conic bundle over $\mathbb P_1$. In the latter case, the possibilities for $G$ are limited by the classification of finite groups with an effective action on $\mathbb P_1$.
\begin{remark}\label{autP_1}
The classification of finite subgroups of $\mathrm{SU}(2, \mathbb C)$ (or $\mathrm{SO}(3, \mathbb R)$) yields the following list of finite groups with an effective action on $\mathbb P_1$:
\begin{samepage}
\nomenclature{$D_{2n}$}{the dihedral group of order $2n$}
\nomenclature{$T_{12}$}{the tetrahedral group}
\nomenclature{$O_{24}$}{the octahedral group}
\nomenclature{$I_{60}$}{the icosahedral group}
\begin{itemize}
\item
cyclic groups $C_n$
\item
dihedral groups $D_{2n}$
\item
the tetrahedral group $T_{12} \cong A_4$
\item
the octahedral group $O_{24} \cong S_4$
\item
the icosahedral group $I_{60} \cong A_5$
\end{itemize}
\end{samepage}
If $G$ is any finite group acting on a space $X$,
we refer to the number of elements in an orbit $G.x = \{ g.x \, | \, g \in G\}$ as the \emph{length of the $G$-orbit $G.x$}\index{length of an orbit}.
Note that the length of a $T_{12}$-orbit in $\mathbb P_1$ is at least four, the length of an $O_{24}$-orbit in $\mathbb P_1$ is at least six, and the length of an $I_{60}$-orbit in $\mathbb P_1$ is at least twelve.
\end{remark}
\begin{lemma}\label{conicbundle}
If a $G$-minimal model $Y_\mathrm{min}$ of $Y$ is an equivariant conic bundle, then $|G| \leq 96$.
\end{lemma}
\begin{proof}
Let $\varphi: Y_\mathrm{min} \to \mathbb{P}_1$ be an equivariant conic bundle structure on $Y_\mathrm{min}$. By definition, the general fiber of $\varphi$ is isomorphic to $\mathbb{P}_1$. We consider the induced action of $G$ on the base $\mathbb P_1$. If this action is effective, then $G$ is among the groups specified in the remark above. Since the maximal order of an element in $G$ is eight (cf. Remark \ref{order of symp aut}), it follows that the order of $G$ is bounded by 60.
If the action of $G$ on the base $\mathbb P_1$ is not effective, every element $n$ of the ineffectivity $N < G$ has two fixed points in the general fiber. This gives rise to a positive-dimensional $n$-fixed point set in $Y_\mathrm{min}$ and $Y$. A symplectic automorphism however has only isolated fixed points. It follows that the action of $n$ on $X$ coincides with the action of $\sigma $ on $\pi^{-1}(\mathrm{Fix}_Y(N))$. In particular, the order of $n$ is two. Since $N$ acts effectively on the general fiber, it follows that $N$ is isomorphic to either $C_2$ or $C_2 \times C_2$.
If $G/N$ is isomorphic to the icosahedral group $I_{60}= A_5$, then $G$ fits into the exact sequence $ 1 \to N \to G \to A_5 \to 1$ for $N=C_2$ or $C_2 \times C_2$. Let $\eta$ be an element of order five inside $A_5$. One can find an element $\xi$ of order five in $G$ which is mapped to $\eta$. Since neither $C_2$ nor $C_2 \times C_2$ has automorphisms of order five it follows that $\xi$ centralizes the normal subgroup $N$. In particular, there is a subgroup $C_2 \times C_5 \cong C_{10}$ in $G$, contradicting the fact that $G$ is a group of symplectic transformations and its elements therefore have order at most eight.
If $G/N$ is cyclic or dihedral, we again use the fact that the order of elements in $G$ is bounded by 8 and conclude $|G/N| \leq 16$. Together with $|T_{12}| = 12$ and $|O_{24}| = 24$, it follows that the maximal possible order of $G/N$ is 24. Using $|N| \leq 4$ we obtain $|G| \leq 96$.
\end{proof}
If $| G | >96$, the lemma above allows us to restrict our classification to the case where a $G$-minimal model $Y_\mathrm{min}$ of $Y$ is a Del Pezzo surface. The next section is devoted to a brief introduction to Del Pezzo surfaces and their automorphism groups.
\section{Del Pezzo surfaces}
A \emph{Del Pezzo surface}\index{Del Pezzo surface} is a smooth surface $Z$ such that the
anticanonical bundle $\mathcal K_Z^{-1} = \mathcal O _Z(-K_Z)$ is ample.
The self-intersection number of the
canonical divisor\index{canonical divisor}\nomenclature{$K_X$}{the canonical divisor of $X$} $d:= K_Z^2$ is referred to as the \emph{degree} \index{degree of a Del Pezzo surface} of the Del
Pezzo surface and $1 \leq d
\leq 9$ (cf. Theorem 24.3 in \cite{manin}).
\begin{example}
Let $Z = \{f_3=0\} \subset \mathbb P_3$ be a smooth cubic surface. The anticanonical bundle $\mathcal K_Z^{-1}$ of $Z$ is given by the restriction of the hyperplane bundle $\mathcal O_{\mathbb P_3}(1)$ to $Z$ and therefore ample.
\end{example}
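For blow-ups of $\mathbb P_2$ the degree can be computed directly: if $b: Z \to \mathbb P_2$ is the blow-up in $r$ distinct points with exceptional curves $E_1, \dots, E_r$, then $K_Z = b^* K_{\mathbb P_2} + \sum_{i=1}^r E_i$ and, using $b^*K_{\mathbb P_2} \cdot E_i = 0$ and $E_i \cdot E_j = -\delta_{ij}$,
\[
d = K_Z^2 = K_{\mathbb P_2}^2 + \sum_{i=1}^r E_i^2 = 9 - r.
\]
For instance, the cubic surface above, being the blow-up of $\mathbb P_2$ in six points, has degree $9-6=3$, in accordance with Theorem \ref{classiDelPezzo} below.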
As a consequence of the adjunction formula, an
irreducible curve with negative self-inter\-sec\-tion on a Del Pezzo
surface is a (-1)-curve. The following theorem (cf. Theorem 24.4 in \cite{manin}) gives a classification
of Del Pezzo surfaces according to their degree.
\begin{theorem}\label{classiDelPezzo}
Let $Z$ be a Del Pezzo surface of degree $d$.
\begin{itemize}
\item
If $d=9$, then $Z$ is isomorphic to $\mathbb P_2$.
\item
If $d=8$, then $Z$ is isomorphic to either $\mathbb P_1 \times \mathbb
P_1$ or the blow-up of $\mathbb P_2$ in one point.
\item
If $1 \leq d \leq 7$, then $Z$ is isomorphic to the blow-up of
$\mathbb P_2$ in $9-d$ points in general position, i.e., no three
points lie on one line and no six points lie on one conic.
\end{itemize}
\end{theorem}
In our later considerations of Del Pezzo surfaces Table \ref{minus one curves} below
(cf. Theorem 26.2 in \cite{manin})
specifying the number of (-1)-curves on a Del Pezzo surface of degree
$d$ will be very useful.
\begin{table}[H]
\centering
\begin{tabular}{l|ccccccc}
degree $d$ & 1&2&3&4&5&6&7 \\
\hline
number of (-1)-curves & 240 & 56 & 27 & 16 & 10 & 6 & 3
\end{tabular}
\caption{(-1)-curves on Del Pezzo surfaces}\label{minus one curves}
\end{table}
\begin{example}
Let $Z$ be a Del Pezzo surface of degree 5. It follows from the theorem above that $Z$ is isomorphic to the blow-up of $\mathbb P_2$ in four points $p_1, \dots, p_4$
in general position. We denote by $E_i$ the preimage of $p_i$ in $Z$. Let $L_{ij}$ denote the line in $\mathbb P_2$ joining $p_i$ and $p_j$ and note that there are precisely six lines of this type. The proper transform of $L_{ij}$ is a (-1)-curve in $Z$. We have thereby specified all ten (-1)-curves in $Z$. Their incidence graph is known as the \emph{Petersen graph} \index{Petersen graph}.
\end{example}
The following theorem summarizes properties of the
anticanonical map, i.e., the map associated to the linear system $|-K_Z|$ of
the anticanonical divisor (Theorem 24.5 in \cite{manin} and Theorem
8.3.2 in \cite{dolgachev}).
\begin{theorem}\label{antican models of del pezzo}
Let $Z$ be a Del Pezzo surface of degree $d$. If $d \geq 3 $, then
$\mathcal K_Z^{-1}$ is very ample and the anticanonical map is a holomorphic
embedding of $Z$ into $\mathbb P_d$ such that the image of $Z$ in $\mathbb P_d$
is of degree $d$.
If $d=2$, then the anticanonical map is a
holomorphic degree two cover $\varphi: Z \to \mathbb P_2$ branched
along a smooth quartic curve.
If $d=1$, then the linear system
$|-K_Z|$ has exactly one base point $p$. Let $ Z' \to Z$ be the blow-up
of $p$. Then the pull-back of $-K_Z$ to $Z'$ defines an elliptic
fibration $f: Z' \to \mathbb P_1$. The linear system $|-2K_Z|$ defines a finite map of degree two onto a quadric cone $Q$ in $\mathbb P_3$. Its branch locus is given by the intersection of $Q$ with a cubic surface.
\end{theorem}
Our understanding of Del Pezzo surfaces as surfaces obtained by
blowing-up points in $\mathbb P_ 2$ in general position or as degree
$d$ subvarieties of $\mathbb P_d$ enables us to decide whether
certain finite groups $G$ can occur as subgroups of the automorphism
group $\mathrm{Aut}(Z)$ of a Del Pezzo surface $Z$.
\begin{example}\label{DelPezzoC3C7}
Consider the semi-direct product $G= C_3 \ltimes C_7$ where the action of
$C_3$ on $C_7$ is defined by the embedding of $C_3$ into
$\mathrm{Aut}(C_7) \cong C_6$. The group $G$ is a maximal subgroup of the simple group $L_2(7)$ which is discussed below.
Let $Z$ be a Del Pezzo surface of degree $d$ with an effective action of $G$.
Since $G$ does not admit a two-dimensional representation, it follows
that $G$ does not have fixed points in $Z$. In particular, $d \neq
1$. For the same reason, $Z$ is not the blow-up of $\mathbb P_2$ in
one or two points.
Since there is no nontrivial homomorphism $G \to C_2$ and no
injective homomorphism $ G \to \mathrm{PSL}(2, \mathbb C)$, it follows
that $G \not\hookrightarrow \mathrm{Aut}(\mathbb P_1 \times \mathbb
P_1) = (\mathrm{PSL}_2(\mathbb C) \times \mathrm{PSL}_2(\mathbb C)) \rtimes C_2$.
\end{example}
In many cases it can be useful to consider possible actions of a
finite group $G$ on the union of (-1)-curves on a Del Pezzo surfaces.
\begin{example}\label{DelPezzoL2(7)}
We consider $G= L_2(7)$, the simple group of order 168. Its maximal
subgroups are $C_3 \ltimes C_7$ and $S_4$. Assume $G$ acts effectively on a Del
Pezzo surface $Z$ of degree $d$. Since $L_2(7)$ does not
stabilize any smooth rational curve, the $G$-orbit of a (-1)-curve $E
\subset Z$ consists of 7, 8, 14, 24 or more curves. It now follows from
Table \ref{minus one curves} that $d \neq 3, 5, 6$.
If $d=4$, then the
union of (-1)-curves on $Z$ would consist of two $G$-orbits of length 8. In
particular, $\mathrm{Stab}_G(E)\cong C_3 \ltimes C_7$ for any
(-1)-curve $E \subset Z$.
Blowing down $E$ to a point $p \in Z'$ induces an action of $C_3 \ltimes C_7$ on $Z'$ fixing $p$. Since $C_3 \ltimes C_7$ does not admit a two-dimensional representation, it follows that the normal subgroup $C_7$ acts trivially on $Z'$ and therefore on $Z$. This is a contradiction.
Using the result of the previous example, it follows that $Z$ is
either a Del Pezzo surface of degree 2 or isomorphic to $\mathbb P_2
$. Both cases will play a role in our discussion of K3-surfaces with
an action of $L_2(7)$.
\end{example}
\begin{example}
Let $Z$ be the Del Pezzo surface obtained by blowing up one point $p$ in $\mathbb P_2$. Then its automorphism group is the subgroup of $\mathrm{Aut}(\mathbb P_2)$ fixing the point $p$. Similarly, if $Z$ is the Del Pezzo surface obtained by blowing up two points $p,q$ in $\mathbb P_2$, then $\mathrm{Aut}(Z) = G \rtimes C_2$ where $G$ is the subgroup of $\mathrm{Aut}(\mathbb P_2)$ fixing the two points $p, q$ and $C_2$ acts by switching the exceptional curves $E_p,E_q$.
\end{example}
In the previous chapter we have shown that Del Pezzo surfaces can occur as equivariant minimal models. It should be remarked that the blow-up of $\mathbb P_2$ in one or two points is never equivariantly minimal: Let $Z$ be the surface obtained by blowing up one or two points in $\mathbb P_2$. Then $Z$ contains an $\mathrm{Aut}(Z)$-invariant (-1)-curve, namely the curve $E_p$ in the first case and the proper transform of the line joining $p$ and $q$ in the second case. This curve can always be blown down equivariantly. Using the language of equivariant Mori theory introduced in the previous chapter, the $\mathrm{Aut}(Z)$-invariant (-1)-curve spans an $\mathrm{Aut}(Z)$-extremal ray $R$ of the cone of invariant curves $\overline{NE}(Z)^{\mathrm{Aut}(Z)}$ with $\mathcal K_Z \cdot R <0$. Its contraction defines an $\mathrm{Aut}(Z)$-equivariant map to $\mathbb P_2$. In particular, $Z$ is not equivariantly minimal.
\begin{remark}
A complete classification of automorphism groups of Del Pezzo surfaces
can be found in \cite{dolgachev}.
\end{remark}
\section{Branch curves and Mori fibers} \label{branch curves mori fibers}
We return to the initial setup where $X$ is a K3-surface with an action of $G \times \langle \sigma \rangle$ and $\pi: X \to X/\sigma =Y$ denotes the quotient map, and fix an equivariant Mori reduction $M: Y \to Y_\mathrm{min}$.
A rational curve $E \subset Y$ is called a \emph{Mori fiber} \index{Mori fiber} if it is contracted in some step of the equivariant Mori reduction $Y \to Y_\mathrm{min}$. The set of all Mori fibers is denoted by $\mathcal E$. Its cardinality $|\mathcal E|$ is denoted by $m$. We let $n$ denote the total number of rational curves in $\mathrm{Fix}_X(\sigma)$.
\begin{lemma}\label{moribound}
The total number $m$ of Mori fibers in $Y$ is bounded by $m \leq n+ 12 - e(Y_\mathrm{min}) \leq n+9$.
\end{lemma}
\begin{proof}\index{Euler characteristic formula}
Recall that $\mathrm{Fix}_X(\sigma)$ is a disjoint union of smooth curves.
We choose a triangulation of $\mathrm{Fix}_X(\sigma)$ and extend it to a triangulation of the surface $X$. The topological Euler characteristic of the double cover is
\begin{align*}
e(X) = 24 &= 2e(Y) - \sum_{C \subset \mathrm{Fix}_X(\sigma)} e(C)\\
&= 2e(Y) - \sum_{C \subset \mathrm{Fix}_X(\sigma)} (2-2g(C))\\
&= 2e(Y) - 2n + \underset{\ g(C) \geq 1}{\sum_{C \subset \mathrm{Fix}_X(\sigma)}} (2g(C)-2)\\
& \geq 2e(Y) -2n\\
&= 2(e(Y_\mathrm{min}) +m) -2n
\end{align*}
This yields $m \leq n+12 - e(Y_\mathrm{min})$, where we have used that each of the $m$ blow-downs in the Mori reduction decreases the Euler characteristic by one, i.e., $e(Y) = e(Y_\mathrm{min}) + m$. Since $Y_\mathrm{min}$ is a rational surface, $e(Y_\mathrm{min}) = 2 + b_2(Y_\mathrm{min}) \geq 3$, which completes the proof of the lemma.
\end{proof}
Let $R:= \mathrm{Fix}_X(\sigma) \subset X$ denote the ramification
locus of $\pi$ and let $B:=\pi(R) \subset Y$ be its branch locus. In the following, we repeatedly use the fact that for a finite proper surjective holomorphic map of complex manifolds (spaces) $\pi: X \to Y$ of degree $d$, the intersection number of pullback divisors fulfills $(\pi^*D_1 \cdot \pi^*D_2) = d(D_1 \cdot D_2)$.
\begin{lemma}\label{preimage of E in X}
Let $E \in \mathcal{E}$ be a Mori fiber such that $E \not\subset B$ and either $|E\cap B| \geq 2$ or $E \cdot B \geq 3$.
Then $E^2 = -1$ and $\pi^{-1}(E)$
is a smooth rational curve in $X$. Furthermore, $E \cdot B = |E \cap B| =2$.
\end{lemma}
\begin{proof}
Let $k < 0$ denote the self-intersection number of $E$. By the remark above, the divisor
$\pi^{-1}(E) = \pi^* E$ has self-inter\-section $2k$. Assume that $\pi^{-1}(E)$ is
reducible and let $\tilde E_1, \tilde E_2$ denote its irreducible components. They are rational
and therefore, by adjunction on the K3-surface $X$, have self-intersection number $-2$. Write
$$
0 > 2k = (\pi^{-1}(E))^2 = \tilde E_1^2 + \tilde E_2^2 + 2 (\tilde E_1 \cdot \tilde E_2) = -4 + 2 (\tilde E_1 \cdot \tilde E_2).
$$
Since $\tilde E_1$ and $\tilde E_2$ intersect at points in the preimage of $E \cap B$, we obtain $\tilde E_1 \cdot \tilde E_2 \geq 2$, a contradiction. It follows that $\pi^{-1}(E)$ is irreducible. Since an irreducible curve on a K3-surface has self-intersection at least $-2$, with equality precisely for smooth rational curves, we conclude $2k = -2$. Consequently, $k=-1$ and $\pi^{-1}(E)$ is a smooth rational curve with two $\sigma$-fixed points.
\end{proof}
\begin{remark}\label{self-int of Mori-fibers}
Let $E \in \mathcal E$ be a Mori fiber.
\begin{itemize}
\item
If $E \subset B$, then $E$ is the image of a rational curve in $X$ and $E^2 = -4$ (cf. Corollary \ref{minusfour} below).
\item
If $E \not\subset B$ and $\pi^{-1}(E)$ is irreducible, then $2E^2 = (\pi^{-1}(E))^2 <0$. Adjunction on $X$ implies that $(\pi^{-1}(E))^2=-2$ and that $\pi^{-1}(E)$ is a smooth rational curve in $X$.
The action of $\sigma$ has two fixed points on $\pi^{-1}(E)$ and
the restricted degree two map $\pi|_{\pi^{-1}(E)}: \pi^{-1}(E) \to E$ is necessarily branched, i.e., $E \cap B \neq \emptyset$.
\item
If $E \not\subset B$ and $\pi^{-1}(E)= \tilde E_1 + \tilde E_2$ is reducible, then
\[
2E^2 = \underset{\geq -2}{\underbrace{\tilde E_1^2}} + \underset{\geq 0}{\underbrace{2(\tilde E_1 \cdot \tilde E_2)}} + \underset{\geq -2}{\underbrace{\tilde E_2^2}} \geq -4.
\]
In particular, $E^2 \in \{-1,-2\}$.
\begin{itemize}
\item
If $E^2=-1$, then $\tilde E_1 \cdot \tilde E_2 =1$ and $E\cap B \neq \emptyset$.
\item
If $E^2=-2$, then $\tilde E_1\cdot \tilde E_2 =0$ and $E\cap B = \emptyset$.
\end{itemize}
\end{itemize}
In summary, a Mori fiber $E \not\subset B$ has self-intersection -1 if and only if $E\cap B \neq \emptyset$ and self-intersection -2 if and only if $E\cap B = \emptyset$. A Mori fiber $E$ has self-intersection -4 if and only if $E \subset B$.
More generally, any (-1)-curve $E$ on $Y$ meets $B$ in either one or two points. If $|E \cap B |=1$, then $\pi^{-1}(E) = \tilde E_1 \cup \tilde E_2$ is reducible. If $|E \cap B |=2$, then $\pi^{-1}(E)$ is irreducible and meets $\mathrm{Fix}_X(\sigma)=R = \pi^{-1}(B)$ in two points.
\end{remark}
\begin {proposition} \label {at most two}
Every Mori fiber $E \in \mathcal{E}$, $ E \not\subset B$ meets the branch locus $B$ in at
most two points. If $E$ and $B$ are tangent at $p$, then
$E\cap B = \{p\}$ and $(E \cdot B)_p =2$.
\end {proposition}
\begin{proof}
Let $E \in \mathcal E$, $ E \not\subset B$ and assume $|E\cap B| \geq 2$ or $E \cdot B \geq
3$. Then by the lemma above, $\tilde E = \pi^{-1}(E)$ is a smooth
rational curve in $X$. Since $\tilde E \not\subset \mathrm{Fix}_X(\sigma)$, the involution $\sigma$ has exactly two fixed points on $\tilde E$ showing $|E\cap B| = 2$. It remains to show that the intersection is transversal.
To see this, let $N_{\tilde{E}}$ denote the normal bundle of $\tilde{E}$ in $X$. We
consider the induced action of $\sigma$ on $N_{\tilde{E}}$ by a bundle
automorphism. Using an
equivariant tubular neighbourhood theorem we may equivariantly identify a
neighbourhood of $\tilde E$ in $X$ with $N_{\tilde E}$ via a
$C^{\infty}$-diffeomorphism. The $\sigma$-fixed point curves
intersecting $\tilde{E}$ map to curves of $\sigma$-fixed points in
$N_{\tilde{E}}$ intersecting the zero-section and vice versa.
Let $D$ be a curve of $\sigma$-fixed points in $N_{\tilde{E}}$. If $D$ is
not a fiber of $N_{\tilde E}$, it follows that $\sigma$ stabilizes all
fibers intersecting
$D$ and the induced action of $\sigma$ on the base must be trivial, a contradiction.
It follows that the $\sigma$-fixed point curves correspond to fibers of
$N_{\tilde{E}}$, and $E$ and $B$ meet transversally.
Contrapositively, if $E$ and $B$ are tangent at $p$, then $|E \cap B |=1$ and $E \cdot B \leq 2$; since tangency means $(E \cdot B)_p \geq 2$, we conclude $E\cap B = \{p\}$ and $(E \cdot B)_p =2$.
\end{proof}
\subsection{Rational branch curves}
In this section we find conditions on $G$, in particular conditions on the order of $G$, guaranteeing the absence of rational curves in $\mathrm{Fix}_X(\sigma)$.
\begin{lemma}\label{selfintbranch}
Let $\pi:X \to Y$ be a cyclic degree two cover of surfaces and let $C \subset X$ be a smooth curve contained in the ramification locus of $\pi$. Then the image of $C$ in $Y$ has self-intersection $(\pi(C))^2= 2 C^2$.
\end{lemma}
\begin{proof}
We recall that the intersection of pullback divisors fulfills $\pi^*D_1 \cdot \pi^*D_2 = 2(D_1 \cdot D_2)$. In the setup of the lemma, $(\pi^* \pi(C))^2 = 2 (\pi(C))^2$. Now $\pi^* \pi(C) \sim 2C$ implies the desired result.
\end{proof}
Note that the lemma above can also be proved by considering the normal bundle $N_C$ of $C$ and the induced action of $\sigma$ on it. The normal bundle $N_{\pi(C)}$ is isomorphic to $N_C^{\otimes 2}$. Since the self-intersection of a curve is the degree of the normal bundle restricted to the curve, the formula follows.
\begin{corollary}\label{minusfour}
Let $X$ be a K3-surface and let $\pi: X \to Y$ be a cyclic degree two cover.
Then a rational branch curve of $\pi $ has self-intersection -4.
\end{corollary}
\begin{proof}
Let $C$ be a rational curve on the K3-surface $X$. Then by adjunction $C^2 =-2$ and the image $\pi(C)$ in $Y$ is a (-4)-curve by Lemma \ref{selfintbranch} above.
\end{proof}
On a Del Pezzo surface a curve with negative self-intersection necessarily has self-intersection -1. So if $Y_\mathrm{min}$ is a Del Pezzo surface, all rational branch curves of $\pi$, which have self-intersection -4 by Corollary \ref{minusfour}, need to be modified by the Mori reduction when passing to $Y_{\mathrm{min}}$ and therefore have nonempty intersection with the union of Mori fibers.
An important tool in the study of rational branch curves is provided by the following lemma which describes the behaviour of self-intersection numbers under monoidal transformations.
\begin{lemma}\label{selfintblowdown}
Let $\tilde X$ and $X$ be smooth projective surfaces and let $b: \tilde X \to X$ be the blow-down of a (-1)-curve $ E \subset \tilde X$. For a curve $B \subset \tilde X$ having no common component with $ E$ the self-intersection of its image in $X$ is given by
\[
b( B)^2 = B^2 + ( E \cdot B)^2.
\]
\end{lemma}
\begin{proof}
Let $p = b(E) \in X$ denote the point blown up by $b$. We may choose an ample divisor $H$ in $X$ with $p \not\in \mathrm{supp}(H)$ and $D$ linearly equivalent to $ b(B) +H$ such that $p \not\in \mathrm{supp}(D)$.
Since $b$ is biholomorphic away from $p$, we know
\[
(b(B) +H)^2 = D^2 =(b^*D)^2 = (b^*(b(B) +H))^2.
\]
Using $(b^*H)^2 = H^2$ and $b^*(b(B))\cdot b^*H = b(B) \cdot H$ we find
$b(B)^2 = (b^*(b(B)))^2$.
Now $b^*(b(B)) = B + \mu E$ where $\mu$ denotes the multiplicity of the point $p \in b(B)$. This multiplicity equals the intersection number $E \cdot B$. Therefore,
\[
b( B)^2 = (b^*(b(B)))^2= (B + \mu E)^2 = B^2 + 2 \mu^2 -\mu^2= B^2 + \mu^2,
\]
and the lemma follows.
\end{proof}
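For instance, if $C$ is a rational branch curve with $C^2 = -4$ (cf. Corollary \ref{minusfour}) and $E$ is a (-1)-curve meeting $C$ transversally in a single point, then
\[
b(C)^2 = -4 + 1^2 = -3,
\]
whereas $E \cdot C = 2$, i.e., $E$ meets $C$ in two points or tangentially, yields
\[
b(C)^2 = -4 + 2^2 = 0.
\]
Applying the formula twice, the contraction of two disjoint (-1)-curves, each meeting $C$ transversally in one point, transforms $C$ into a (-2)-curve.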
We denote by $\mathcal{C}$ the set of rational branch curves of
$\pi$. The total number $|\mathcal{C}|$ of these curves is denoted
by $n$. The union of all Mori fibers not contained in the branch locus $B$ is denoted by $\bigcup E_i$.
Let $\mathcal{C}_{\geq k}= \{ C \in \mathcal C \, | \, |C \cap \bigcup E_i | \geq k\} $ be the set of those rational branch curves $C$ which meet $\bigcup E_i$ in at least $k$ distinct points and let $| \mathcal{C}_{\geq k}| = r_k$.
We let
$\mathcal{E}_{\geq k}$ denote the set of Mori fibers $E \not\subset B$ which
intersect some $C$ in $\mathcal{C}_{\geq k}$ and define
\[
P_k = \{ (p,E) \, |\, p \in C \cap E,\, E \in \mathcal{E}_{\geq k},\, C
\in \mathcal{C}_{\geq k}\} \subseteq Y \times \mathcal{E}_{\geq k}
\]
and the projection map $\mathrm{pr}_k: P_k \to \mathcal{E}_{\geq k}$ mapping
$(p,E)$ to $E$. This map is surjective by definition of
$\mathcal{E}_{\geq k}$ and its fibers consist of $\leq 2$ points by
Proposition \ref{at most two}. Using $|P_k| \geq k r_k$ we see
\begin{equation}\label{boundforE_k}
|\mathcal{E}_{\geq k}| \geq \frac{k}{2}r_k.
\end{equation}
Let $N$ be the largest positive integer such that $\mathcal{C}_{\geq N} =
\mathcal{C}$, i.e., each rational branch curve is intersected at least $N$
times by Mori fibers.
A curve $C \in \mathcal{C}$ which is intersected precisely $N$ times by
Mori fibers is referred to as a \emph{minimizing curve}. In the
following, let $C$ be a minimizing curve and let $ H =
\mathrm{Stab}_G(C) < G$ be the stabilizer of $C$ in $G$.
\begin{remark}
The index of
$H$ in $G$ is bounded by $n= r_N$.
\end{remark}
\subsection*{Bounds for $n$}
A smooth rational curve
on a K3-surface has self-intersection -2 and all curves in
$\mathrm{Fix}_X(\sigma)$ are disjoint. Therefore, the rational curves
in $\mathrm{Fix}_X(\sigma)$ generate a negative definite sublattice of
$\mathrm{Pic}(X)$ of signature $(0,n)$. Since $\mathrm{Pic}(X)$ has signature $(1, \rho -1)$ with $\rho \leq 20$, it follows immediately that $n
\leq 19$.
A sharper bound $n \leq 16$ for the number of disjoint (-2)-curves on a K3-surface has been
obtained by Nikulin \cite{NikulinKummer} and the following optimal bound in our setup is due to Zhang \cite{ZhangInvolutions}, Theorem 3.
\begin{proposition}
The total number of connected components of the fixed point set of an antisymplectic
involution on a K3-surface is bounded by 10.
\end{proposition}
\begin{corollary}\label{atmostten}
The number $n$ of rational curves in $\mathrm{Fix}_X(\sigma)$ is at
most 10. If $n=10$, then $\mathrm{Fix}_X(\sigma)$ is a union of
rational curves.
\end{corollary}
In the following, we use Zhang's bound $n \leq 10$. Note, however, that all results can likewise be obtained by using the weaker bound $n \leq 19$.
For $N \geq 4 $ Zhang's bound can be sharpened using the notion of
Mori fibers and minimizing curves.
\begin{lemma}\label{boundforn}
$\frac{N}{2}n \leq n +12 - e(Y_\mathrm{min}) \leq n+9$.
\end{lemma}
\begin{proof}
Using Lemma \ref{moribound} and inequality \eqref{boundforE_k}, we obtain
$
\frac{N}{2}n = \frac{N}{2}r_N \leq |\mathcal{E}_{\geq N}|\leq
|\mathcal{E}| \leq n +12 - e(Y_\mathrm{min}) \leq n+9
$.
\end{proof}
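For the values of $N$ appearing in the case distinction below this gives, using $12 - e(Y_\mathrm{min}) \leq 9$,
\[
N=4:\ 2n \leq n+9 \Rightarrow n \leq 9, \qquad
N=5:\ \tfrac{5}{2}n \leq n+9 \Rightarrow n \leq 6, \qquad
N=6:\ 3n \leq n+9 \Rightarrow n \leq 4,
\]
and $N = 12$ forces $n = 1$.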
In the following we consider the stabilizer $H$ of a minimizing curve
$C$ and using the above bounds for $n$, we obtain bounds for $|G|$.
\subsection*{A bound for $|G|$}
\begin{proposition}
Let $X$ be a K3-surface with an action of a finite group $G \times \langle \sigma
\rangle$ such that $ G < \mathrm{Aut}_\mathrm{symp}(X)$ and $\sigma$ is
an antisymplectic involution with fixed points. If $| G | > 108$,
then $\mathrm{Fix}_X(\sigma)$ contains no rational curves.
\end{proposition}
\begin{proof}
Assume that $\mathrm{Fix}_X(\sigma)$ contains rational curves and
consider a minimizing curve $C \subset B$ and its stabilizer $\mathrm{Stab}_G(C) =:H$. Since a symplectic automorphism on $X$ does not admit a one-dimensional set of fixed points, it follows that the action of $H$ on $C$ is effective and $H$ is among the groups discussed in Remark \ref{autP_1}.
We recall the possible lengths of $H$-orbits in $C$: the length of an orbit of a dihedral group is at least two, the length of a $T_{12}$-orbit in $\mathbb P_1$ is at least four, the length of an $O_{24}$-orbit in $\mathbb P_1$ is at least six, and the length of an $I_{60}$-orbit in $\mathbb P_1$ is at least twelve.
Let $Y_\mathrm{min}$ be a $G$-minimal model of $X/\sigma =Y$. Since $|G| > 96$, Lemma \ref{conicbundle} implies that $Y_\mathrm{min}$ is a Del Pezzo surface. Each rational branch curve is a (-4)-curve in $Y$. Since its image in $Y_\mathrm{min}$ has self-intersection $\geq -1$, it must intersect Mori fibers.
\begin{itemize}
\item
If $N=1$, i.e., the rational curve $C$ meets the union of Mori fibers in exactly one point $p$, then $p$ is a fixed point of the $H$-action on $C$. In particular, $H$ is a cyclic group $C_k$. By Remark \ref{order of symp aut} $k \leq 8 $. Since the index of $H$ in $G$ is bounded by $n \leq 10$, it follows that $|G | \leq 80$.
\item
If $N=2$, then $H$ is either a cyclic or a dihedral group. By Proposition 3.10 in \cite{mukai} the maximal order of a dihedral group of symplectic automorphisms on a K3-surface is 12. We first assume $H \cong D_{2m}$ and that the $G$-orbit $G.C$ of the rational branch curve $C$ has the maximal length $n=|G.C|=10$, i.e., $B = G \cdot C$.
Each curve in $G \cdot C$ meets the union of Mori fibers in precisely two points forming a $D_{2m}$-orbit. If a Mori fiber $E$ meets the curve $C$ twice, then it follows from Proposition \ref{at most two} that $E$ meets no other curve in $B$. The contraction of $E$ transforms $C$ into a singular curve of self-intersection zero. The Del Pezzo surface $Y_\mathrm{min}$ does not, however, admit a curve of this type. It follows that $E$ meets a Mori fiber $E'$ which is contracted in a later step of the Mori reduction, and that $E$ meets no Mori fiber other than $E'$.
The described configuration $G E \cup G E'$ requires a total number of at least 20 Mori fibers and therefore contradicts Lemma \ref{moribound}. If $C$ meets two distinct Mori fibers $E_1, E_2$, each of these two can meet at most one further curve in $B$. The contraction of $E_1$ and $E_2$ transforms $C$ into a (-2)-curve. As above, the existence of further Mori fibers meeting $E_i$ follows. Again, by invariance, the total number of Mori fibers exceeds 20, a contradiction. It follows that either $H$ is cyclic or $ |G.C|\leq 9$. Both imply $|G| \leq 108$.
\item
If $N=3$, let $S=\{p_1,p_2,p_3\}$ be the points
of intersection of $C$ with the union $\bigcup E_i$ of Mori fibers. The set $S$ is
$H$-invariant. It follows that $H$ is either trivial or isomorphic to $C_2$, $C_3$
or $D_6$, and that $|G| \leq 60$.
\item
If $N = 4$, it follows from Lemma \ref{boundforn} that $n \leq 9 $. Now $|H| \leq 12$ implies $|G| \leq 108$. The bound for the
order of $H$ is attained by the tetrahedral group $T_{12}$. If the group $G$
does not contain a tetrahedral group, then $|H| \leq 8$ and $|G| \leq
72$.
\item
If $N=5$, the largest possible group acting on $C$ such
that there is an invariant subset of car\-di\-na\-li\-ty 5 is the dihedral
group $D_{10}$. Since \ref{boundforn} implies $n \leq 6$, we conclude $|G| \leq 60$.
\item
If $N=6$, then $n \leq 4 $ and $|H| \leq 24$ implies $|G| \leq
96$. This bound is attained if and only if $H \cong O_{24}$. If there is no
octahedral group in $G$, then $|H| \leq 12$ and $|G|\leq 48$.
\item
If $N\geq 12$, then $n=1$ and $H =G$. The maximal order 60 is attained by
the icosahedral group.
\item
If $6 < N <12$,
we combine $n \leq 4$ and $|H| \leq 24$ to obtain
$|G| \leq 96$.
If $H$ is not the octahedral group, then $|H| \leq 16$ and $|G| \leq
64$.
\end{itemize}
The case-by-case discussion shows that the existence of a rational curve in $B$ implies $| G|\leq 108$, and the proposition follows.
\end{proof}
\begin{remark}
If the group $G$ under consideration does not contain certain subgroups (such as large dihedral groups or $T_{12}$, $O_{24}$ or $I_{60}$) then the condition $|G| >108$ in the proposition above can be improved and non-existence of rational ramification curves also follows for smaller $G$.
\end{remark}
\subsection{Elliptic branch curves}
The aim of this section is to find conditions on the order of $G$ which allow us to exclude elliptic curves in $\mathrm{Fix}_X(\sigma)$. We prove:
\begin{proposition}\label{elliptic branch}
Let $X$ be a K3-surface with an action of a finite group $G \times \langle \sigma
\rangle$ such that $ G < \mathrm{Aut}_\mathrm{symp}(X)$ and $\sigma$ is
an antisymplectic involution with fixed points. If $| G | > 108$,
then $\mathrm{Fix}_X(\sigma)$ contains neither rational nor elliptic ramification curves.
\end{proposition}
\begin{proof}
By the previous proposition $\mathrm{Fix}_X(\sigma)$ contains no rational curves. It follows from Nikulin's description of $\mathrm{Fix}_X(\sigma)$ (cf. Theorem \ref{FixSigma}) that it is either a single curve of genus $g \geq 1$ or the disjoint union of two elliptic curves.
Let $T \subset B$ be an elliptic branch curve and let $H:= \mathrm{Stab}_G(T)$. If $H \neq G$, then $H$ has index two in $G$. The action of $H$ on $T$ is effective.
The automorphism group $\mathrm{Aut}(T)$ of $T$ is a semidirect product $L \ltimes T$, where $L$ is a
linear cyclic group of order at most 6. We consider the projection $\mathrm{pr}_L: \mathrm{Aut}(T) \to
L$ and let $\lambda \in \mathrm{pr}_L(H)$ be a generating root of unity. We consider $T$ as a quotient $\mathbb C / \Gamma$ and choose $h \in H$ with $h(z) = \lambda z + \omega$ and $t \in T$ such that $\omega +(1-\lambda)t = 0$. After conjugation with the translation $z \mapsto z+t$ the group $H < \mathrm{Aut}(T)$ inherits the semidirect
product structure of $\mathrm{Aut}(T)$, i.e.,
\[
H = (H \cap L) \ltimes (H \cap T) .
\]
We refer to this decomposition as the \emph{normal form} of $H$.
By Lemma \ref{conicbundle} a $G$-minimal model of $Y$ is a Del Pezzo surface and therefore does not admit elliptic curves with self-intersection zero. It follows that $T$ meets the union $\bigcup E_i$ of Mori fibers. Let $E$ be a Mori fiber meeting $T$. By Proposition \ref{at most two}, $|T \cap E| \in \{1,2\}$. The stabilizer of $E$ in $H$ is denoted by $\mathrm{Stab}_H(E)$. Since the total number of Mori fibers is bounded by 9 (cf. Lemma \ref{moribound}), the index of $\mathrm{Stab}_H(E)$ in $H$ is bounded by 9.
If $T \cap E =\{p\}$, then $\mathrm{Stab}_H(E)$ is a cyclic group of order less than or equal to six. It follows that $|G | \leq 6\cdot 9 \cdot 2 = 108$.
If $T \cap E =\{p_1, p_2\}$, then $B \cap E = T \cap E$ and the stabilizer $\mathrm{Stab}_G(E)$ of $E$ in $G$ is contained in $H$. If both points $p_1,p_2$ are fixed by $\mathrm{Stab}_G(E)$, then $|\mathrm{Stab}_G(E)| \leq 6$. If $p_1,p_2$ form a $\mathrm{Stab}_G(E)$-orbit, then in the normal form $|\mathrm{Stab}_G(E) \cap T | =2$. It follows that $\mathrm{Stab}_G(E)$ is either $C_2$ or $D_4= C_2 \times C_2$. The index of $\mathrm{Stab}_G(E)$ in $G$ is bounded by 9 and $|G| \leq 54$.
In summary, the existence of an elliptic curve in $B$ implies $|G| \leq 108$ and the proposition follows.
\end{proof}
\section{Rough classification}
With the preparations of the previous sections we may now turn to a classification result for K3-surfaces with antisymplectic involution centralized by a large group.
\begin{theorem}\label{roughclassi}
Let $X$ be a K3-surface with a symplectic action of $G$ centralized by an
antisymplectic involution $\sigma$ such that
$\mathrm{Fix}(\sigma)\neq \emptyset$. If $|G|>96$, then $Y$ is a $G$-minimal
Del Pezzo surface and there are no rational or elliptic curves in $\mathrm{Fix}(\sigma)$. In
particular, $\mathrm{Fix}(\sigma)$ is a single smooth curve $C$ with $ g(C)\geq
3$ and $\pi(C) \sim -2K_Y$, where $K_Y$ denotes the canonical divisor on $Y$.
\end{theorem}
\begin{proof}
The group $G$ is a subgroup of one of the eleven groups on Mukai's list \cite{mukai} (cf. Theorem \ref{mukaithm} and Table \ref{TableMukai}).
The orders of these groups are 48, 72, 120, 168, 192, 288, 360, 384, and 960. None of these groups can have a subgroup $G$ of order strictly between 96 and 120.
In particular, the order of $G$ is at least 120.
We may therefore apply the results of the previous two sections and conclude that $\pi: X \to Y$ is branched along a single smooth curve $C$ of general type. Its genus $g(C)$ must be $\geq 3$ by Hurwitz' formula.
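Explicitly, since $G$ acts effectively on the branch curve $C$ and $|G| \geq 120$, the Hurwitz bound $|\mathrm{Aut}(C)| \leq 84(g-1)$ for a curve of genus $g \geq 2$ gives
\[
g(C) \;\geq\; 1 + \frac{|G|}{84} \;\geq\; 1 + \frac{120}{84} \;>\; 2,
\]
and therefore $g(C) \geq 3$.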
It remains to show that $Y$ is $G$-minimal.
Assume the contrary and let $E\subset Y$ be a Mori fiber with $E^2 = -1$. As before we let $B \subset Y$ denote the branch locus of $\pi: X \to Y$. By Remark \ref{self-int of Mori-fibers} $E \cap B \neq \emptyset$.
It follows that $|E \cap B| \in \{1,2\}$. Let $\mathrm{Stab}_G(E)$ denote the stabilizer of $E$ in $G$.
If $\pi^{-1}(E)$ is reducible its two irreducible components meet transversally in one point corresponding to $\{p\} =E \cap B$. The curve $E$ is tangent to $B$ at $p$ and we consider the linearization of the action of $\mathrm{Stab}_G(E)$ at $p$. If the action of $\mathrm{Stab}_G(E)$ on $E$ is not effective, the linearization of the ineffectivity $I < \mathrm{Stab}_G(E)$ yields a trivial action of $I$ on the tangent line of $B$ at $p$. It follows that the action of $I$ is trivial in a neighbourhood of $\pi^{-1}(p) \in R =\pi^{-1}(B)$. This is contrary to the assumption that $G$ acts symplectically on $X$.
Consequently, the action of $\mathrm{Stab}_G(E)$ on $E$ is effective and in particular, $\mathrm{Stab}_G(E)$ is a cyclic group.
If $\pi^{-1}(E)$ is irreducible, then it is a smooth rational curve with an effective action of $\mathrm{Stab}_G(E)$. It follows that $\mathrm{Stab}_G(E)$ is either cyclic or dihedral. The largest dihedral group with a symplectic action on a K3-surface is $D_{12}$ (Proposition 3.10 in \cite{mukai}).
We conclude that the order of $\mathrm{Stab}_G(E)$ is bounded by 12, so the index of $\mathrm{Stab}_G(E)$ in $G$ is $> 9$. By Lemma \ref{moribound}, however, the total number $m$ of Mori fibers satisfies $m \leq 9$. This contradiction shows that $Y$ is $G$-minimal and, in particular, a Del Pezzo surface.
\end{proof}
\begin{remark}\label{stab of minus one curve}
Let $X$ be a K3-surface with a symplectic action of $G$ centralized by an
antisymplectic involution $\sigma $ with $\mathrm{Fix}_X(\sigma) \neq \emptyset$ and let $E$ be a (-1)-curve on $Y = X/\sigma$. Then the argument above can be applied to see that the stabilizer of $E$ in $G$ is cyclic or dihedral and therefore has order at most 12.
\end{remark}
In the following chapter, the classification above is applied and extended to the case where $G$ is a maximal group of symplectic transformations on a K3-surface.
\chapter{Mukai groups centralized by antisymplectic involutions}\label{chaptermukai}
In this chapter we consider K3-surfaces with a symplectic action of one of the eleven groups from Mukai's list (Table \ref{TableMukai}) and assume that it is centralized by an antisymplectic involution. We prove the following classification result.
\begin{theorem}\label{thm mukai times invol}
Let $G$ be a Mukai group acting on a K3-surface $X$ by symplectic transformations and $\sigma $ be an antisymplectic involution on $X$ centralizing $G$ with $\mathrm{Fix}_X(\sigma) \neq \emptyset$. Then the pair $(X,G)$ is in Table \ref{Mukai times invol} below. In particular, for groups $G$ numbered 4-8 on Mukai's list, there does not exist a K3-surface with an action of $G \times C_2$ with the properties above.
\renewcommand{\baselinestretch}{1.4}
\begin{table}[h]
\centering
\begin{tabular}{l|l|l|l}
& $G$ & $|G|$ & \textbf{K3-surface} $X$ \\ \hline
1a & $L_2(7)$ & 168 & $\{x_1^3x_2+x_2^3x_3+x_3^3x_1+x_4^4 =0\} \subset \mathbb P_3$\\ \hline
1b & $L_2(7)$ & 168 & Double cover of $\mathbb P_2$ branched along \\
& & & $\{x_1^5x_2+x_3^5x_1+x_2^5x_3-5x_1^2 x_2^2 x_3^2 =0\}$\\ \hline
2 & $A_6$ & 360 & Double cover of $\mathbb P_2$ branched along \\
& & & $\{10 x_1^3x_2^3+ 9 x_1^5x_3 + 9 x_2^3x_3^3-45 x_1^2 x_2^2 x_3^2-135 x_1 x_2 x_3^4 + 27 x_3^6 =0\}$\\ \hline
3a & $S_5$ & 120 & $\{\sum_{i=1}^5 x_i = \sum_{i=1}^6 x_i^2 = \sum_{i=1}^5 x_i^3=0\} \subset \mathbb P_5$\\ \hline
3b & $S_5$ & 120 & Double cover of $\mathbb P_2$ branched along \\
& & & $\{ F_{S_5} =0\}$\\ \hline
9 & $N_{72}$ & 72 & $\{ x_1^3+ x_2 ^3 + x_3^3 +x_4^3= x_1x_2 + x_3x_4+ x_5^2 = 0 \} \subset \mathbb P_4$ \\ \hline
10 & $M_9$ & 72 & Double cover of $\mathbb P_2$ branched along \\
& & & $\{x_1^6+x_2^6 +x_3^6 -10(x_1^3x_2^3 + x_2^3x_3^3 +x_3^3x_1^3) =0\}$\\ \hline
11a & $ T_{48}$ & 48 & Double cover of $\mathbb P_2$ branched along \\
& & & $\{x_1x_2(x_1^4-x_2^4)+ x_3^6 =0\}$\\ \hline
11b & $ T_{48}$ & 48 & Double cover of $\{ x_0x_1(x_0^4-x_1^4)+ x_2^3+x_3^2=0 \} \subset \mathbb P(1,1,2,3)$\\
& & & branched along $\{x_2=0\} $
\end{tabular}
\caption{K3-surfaces with $G \times C_2$-symmetry}\label{Mukai times invol}
\end{table}
\renewcommand{\baselinestretch}{1.1}\\
The polynomial $F_{S_5}$ in case 3b) is given by
\begin {align*}
&2(x^4yz+xy^4z+xyz^4)-2(x^4y^2+x^4z^2+x^2y^4+x^2z^4+y^4z^2+y^2z^4)\\
+&2(x^3y^3+x^3z^3+y^3z^3)+x^3y^2z+x^3yz^2+x^2y^3z+x^2yz^3+xy^3z^2+xy^2z^3-6x^2y^2z^2.
\end{align*}
\end{theorem}
\begin{remark}
The examples 1a, 3a, 9, 10, and 11a appear in Mukai's list,
the remaining cases 1b, 2, 3b, and 11b provide additional examples of K3-surfaces with maximal symplectic symmetry.
\end{remark}
For the proof of this theorem we consider each group separately and apply the following general strategy.
For a K3-surface $X$ with $G \times C_2$-symmetry we consider the quotient $Y= X/C_2$ and a $G$-minimal model $Y_\mathrm{min}$ of the rational surface $Y$. We show that $Y_\mathrm{min}$ is a Del Pezzo surface and investigate which Del Pezzo surfaces admit an action of the group $G$.
It is then essential to study the branch locus $B$ of the covering $X \to Y$. As a first step, we exclude rational and elliptic curves in $B$. In order to exclude rational branch curves, we study their images in $Y_\mathrm{min}$ and their intersection with the union of Mori fibers.
We then deduce that $B$ consists of a single curve of genus $\geq 2$ with an effective action of the group $G$. The possible genera of $B$ are restricted by the nature of the group $G$ and the Riemann-Hurwitz formula for the quotient of $B$ by an appropriate normal subgroup $N$ of $G$. The equations of $B$ or $X$ given in Table \ref{Mukai times invol} are derived using invariant theory.
Throughout the remainder of this chapter, the Euler characteristic formula\index{Euler characteristic formula!}
\[
24 = e(X) = 2 e(Y_\mathrm{min}) + 2 m -2n +\underset{\text{branch curve exists}}{\underset{\text{if non-rational}}{\underbrace{(2g-2)}}}
\]
is exploited various times.
Here $m$ denotes the total number of Mori contractions of the reduction $Y \to Y_ \mathrm{min}$, the total number of rational branch curves is denoted by $n$ and $g$ is the genus of a non-rational branch curve.
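For instance, if $Y = Y_\mathrm{min} = \mathbb P_2$ and the branch locus consists of a single smooth non-rational curve, i.e., $m = n = 0$, the formula reads
\[
24 = 2\, e(\mathbb P_2) + (2g-2) = 6 + 2g - 2,
\]
so $g = 10$, the genus of a smooth plane sextic.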
All classification results are up to \emph{equivariant equivalence}:
\begin{definition}\label{equivariantequivalence}
Let $(X_1, \sigma_1)$ and $(X_2, \sigma_2)$ be K3-surfaces with antisymplectic involution and let $G$ be a finite group acting on $X_1$ and $X_2$ by
\[
\alpha_i: G \to \mathrm{Aut}_\mathrm{symp}(X_i),
\]
such that $\alpha_i(g) \circ \sigma_i = \sigma_i \circ \alpha_i(g)$ for $i =1,2$ and all $g \in G$.
Then the surfaces $(X_1, \sigma_1)$ and $(X_2, \sigma_2)$ are considered \emph{equivariantly equivalent}\index{equivariantly equivalent} if there exist a biholomorphic map $\varphi: X_1 \to X_2$ and a group automorphism $\psi \in \mathrm{Aut}(G)$ such that
\[
\alpha_2(g) \varphi (x) = \varphi ( \alpha_1(\psi(g))x) \quad \text{and} \quad \sigma_2(\varphi (x)) = \varphi ( \sigma_1(x))
\]
for all $x \in X_1$ and all $g \in G$.
More generally, two surfaces $Y_1$ and $Y_2$, without additional structure such as a symplectic form or an involution, with actions of a finite group $G$
\[
\alpha_i: G \to \mathrm{Aut}(Y_i)
\]
are considered equivariantly equivalent if there exist a biholomorphic map $\varphi: Y_1 \to Y_2$ and a group automorphism $\psi \in \mathrm{Aut}(G)$ such that
\[
\alpha_2(g) \varphi (y) = \varphi ( \alpha_1(\psi(g))y)
\]
for all $y \in Y_1$ and all $g \in G$.
\end{definition}
This notion differs from the notion of equivalence in representation theory. Two non-equivalent linear representations of a group $G$ can induce equivalent actions on the projective plane if they differ by an outer automorphism of the group.
\begin{remark}
If two K3-surfaces $(X_1, \sigma_1)$ and $(X_2, \sigma_2)$ are $G$-equivariantly equivalent, then the quotient surfaces $X_i / \sigma_i$ are equivariantly equivalent with respect to the induced action of $G$.
Conversely, let $Y$ be a rational surface with two actions of a finite group $G$ which are equivalent in the above sense and let $\varphi \in \mathrm{Aut}(Y)$ be the isomorphism identifying these two actions. We consider a smooth $G$-invariant curve $B$ linearly equivalent to $-2K_Y$ and the K3-surfaces $X_B$ and $X_{\varphi(B)}$ obtained as double covers branched along $B$ and $\varphi(B)$, equipped with their respective antisymplectic covering involutions.
Note that $X_B$ and $X_{\varphi(B)}$ are constructed as subsets of the anticanonical line bundle where the involution $\sigma$ is canonically defined. The induced biholomorphic map $\varphi_X: X_B \to X_{\varphi(B)}$ fulfills $\sigma \circ \varphi_X = \varphi_X \circ \sigma$ by construction.
If all elements of the group $G$ can be lifted to symplectic transformations on $X_B$ and $X_{\varphi(B)}$, then the central degree two extensions $E$ of $G$ acting on $X_B$, $X_{\varphi(B)}$, respectively, split as $E = E_\mathrm{symp} \times C_2$ with $E_\mathrm{symp} =G$. In this case the group $G$ acts by symplectic transformations on
$X_B$ and $X_{\varphi(B)}$ and these are $G$-equivariantly equivalent in the strong sense introduced above.
This follows from the assumption that the corresponding $G$-actions on the base $Y$ are equivalent and the fact that for each $g \in G \subset \mathrm{Aut}(Y)$ there is only one choice of symplectic lifting $\tilde g \in \mathrm{Aut}(X_B)$ and $\mathrm{Aut}(X_{\varphi(B)})$.
\end{remark}
In the following sections we go through the list of Mukai groups and for each group prove the classification claimed in Theorem \ref{thm mukai times invol}.
\section{The group $L_2(7)$}\label{mukaiL2(7)}\index{Mukai group! $L_2(7)$}
Let $G \cong L_2(7)$ be the finite simple group of order 168. If $G$ acts on a K3-surface $X$, then, since $G$ is simple, the homomorphism $G \to \mathrm{Aut}(X)$ is injective and the induced character of $G$ on the space of holomorphic two-forms is trivial, i.e., the action is effective and symplectic. Let $\sigma$ be an antisymplectic involution on $X$ centralizing $G$. Since $G$ has an element of order seven, which is known to have exactly three fixed points $p_1, p_2, p_3$, and $\sigma$ permutes these three points and therefore fixes at least one of them, we know that $\mathrm{Fix}_X(\sigma) \neq \emptyset$. By Theorem \ref{roughclassi}, the K3-surface $X$ is a double cover of a Del Pezzo surface $Y$. Our study of Del Pezzo surfaces with an action of $L_2(7)$ in Example \ref{DelPezzoL2(7)} has revealed that $Y$ is either $\mathbb P_2$ or a Del Pezzo surface of degree 2. In the first case, $\pi: X \to Y$ is branched along a curve of genus 10, in the second case $\pi$ is branched along a curve of genus 3.
Section \ref{168} in the next chapter is devoted to an inspection of K3-surfaces with an action of $L_2(7) \times C_2$, where a precise classification result in the setup above is obtained: the pair $(X,G)$ is equivariantly isomorphic to either surface 1a or 1b of Table \ref{Mukai times invol}.
\section{The group $A_6$}\label{A6Valentiner}\index{Mukai group! $A_6$}
Let $G \cong A_6$ be the alternating group of degree 6. It is a simple group, so if it acts on a K3-surface $X$, this action is effective and symplectic. Let $\sigma$ be an antisymplectic involution on $X$ centralizing $G$ and assume that $\mathrm{Fix}_X(\sigma) \neq \emptyset$. By Theorem \ref{roughclassi}, the K3-surface $X$ is a double cover of a Del Pezzo surface $Y$ with an effective action of $A_6$.
\begin{lemma}
The Del Pezzo surface $Y$ is isomorphic to $\mathbb P_2$ with a uniquely determined action of $A_6$ given by a nontrivial central extension $V=3.A_6$ of degree three known as \emph{Valentiner's group}\index{Valentiner's group}.
\end{lemma}
\begin{proof}
We go through the list of Del Pezzo surfaces.
\begin{itemize}
\item
If $Y$ has degree one, then $| -K_Y|$ has precisely one base point which would have to be an $A_6$-fixed point. This is contrary to the fact that $A_6$ has no faithful two-dimensional representation.
\item
We recall that the stabilizer of a (-1)-curve $E$ in $Y$ is either cyclic or dihedral (Remark \ref{stab of minus one curve}). In particular, its order is at most 12 and therefore its index in $A_6$ is at least 30.
Using Table \ref{minus one curves} we see that $Y$ cannot be a Del Pezzo surface of degree 2, 3, 4, 5, or 6.
\item
Since the blow-up of $\mathbb P_2$ in one point is never $G$-minimal,
it remains to exclude $Y \cong \mathbb P_1 \times \mathbb P_1$. Assume there is an action of $A_6$ on $\mathbb P_1 \times \mathbb P_1$. Since $A_6$ has no subgroup of index two, it follows that $A_6 < \mathrm{PSL}_2(\mathbb C) \times \mathrm{PSL}_2(\mathbb C)$ and both canonical projections are $A_6$-equivariant. Since $A_6$ has neither an effective action on $\mathbb P_1$ nor nontrivial normal subgroups of ineffectivity, it follows that $A_6$ acts trivially on $Y$, a contradiction.
\end{itemize}
It follows that $Y \cong \mathbb P_2$. The action of $A_6$ on $\mathbb P_2$ is given by a degree three central extension of $A_6$. Since $A_6$ has no faithful three-dimensional representation, this extension is nontrivial and isomorphic to the unique nontrivial degree three extension $V=3.A_6$ known as Valentiner's group. Up to equivariant equivalence, there is a unique action of $A_6$ on $\mathbb P_2$. This follows from the classification of finite subgroups of $\mathrm{SL}_3(\mathbb C)$ (cf. \cite{blichfeldtbook}, \cite{blichfeldt}, and \cite{YauYu}) and can also be derived as follows: an action of $A_6$ on $\mathbb P_2$ is given by a three-dimensional projective representation. We wish to show that any two actions induced by $\rho_1, \rho_2$ are equivalent. We restrict the projective representations $\rho_1$ and $\rho_2$ to the subgroup $A_5$. The restricted representations are linear and after a change of coordinates $\rho_1(A_5) = \rho_2(A_5) \subset \mathrm{SL}_3(\mathbb C)$. We fix a subgroup $A_4$ in $A_5$ and consider its normalizer $N$ in $A_6$. The groups $N$ and $A_4$ generate the full group $A_6$ and it suffices to prove that $\rho_1(N) = \rho_2(N)$. This is shown by considering an explicit three-dimensional representation of $A_4 < A_5$ and the normalizer $\mathcal N$ of $A_4$ inside $\mathrm{PSL}_3(\mathbb C)$. The group $A_4$ has index two in $\mathcal N$ and therefore $\mathcal N = \rho_1(N)= \rho_2(N)$.
\end{proof}
The covering $X \to Y$ is branched along an invariant curve $C$ of degree six. This curve is defined by an invariant polynomial $F_{A_6}$ of degree six, which is unique by Molien's formula. Its explicit equation is derived in \cite{crass}. In appropriately chosen coordinates,
\[
F_{A_6}(x_1,x_2,x_3) = 10 x_1^3x_2^3+ 9 x_1^5x_3 + 9 x_2^3x_3^3-45 x_1^2 x_2^2 x_3^2-135 x_1 x_2 x_3^4 + 27 x_3^6.
\]
If a K3-surface with $A_6 \times C_2$-symmetry exists, then it must be the double cover of $\mathbb P_2$ branched along $\{F_{A_6}=0\}$.
The action of $A_6$ on $\mathbb P_2$ induces an action of a central degree two extension $E$ of $A_6$ on the double cover branched along $\{F_{A_6}=0\}$,
\[
\{\mathrm{id}\} \to C_2 \to E \to A_6 \to \{\mathrm{id}\}.
\]
Let $E_\mathrm{symp} \neq E$ be the normal subgroup of symplectic automorphisms in $E$. Since $A_6$ is simple, it follows that $E_\mathrm{symp}$ is mapped surjectively to $A_6$ and $E_\mathrm{symp} \cong A_6$. In particular, the group $E$ splits as $E_\mathrm{symp} \times C_2$ where $C_2$ is generated by the antisymplectic covering involution. This proves the existence of a unique K3-surface with $A_6 \times C_2$-symmetry. We refer to this K3-surface as the \emph{Valentiner surface}\index{Valentiner surface}.
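Concretely, in the standard realization of a double cover of $\mathbb P_2$ branched along a sextic, the Valentiner surface is the degree six hypersurface
\[
\{w^2 = F_{A_6}(x_1,x_2,x_3)\} \subset \mathbb P(1,1,1,3),
\]
where the variable $w$ has weight three and the covering involution is given by $w \mapsto -w$, in analogy with the weighted hypersurface model in case 11b of Table \ref{Mukai times invol}.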
\section{The group $S_5$}\index{Mukai group! $S_5$}
In this section we study K3-surfaces with a symplectic action of the symmetric group $S_5$ centralized by an antisymplectic involution.
Let $X$ be a K3-surface with a symplectic action of $G = S_5$ and let $\sigma$ denote an antisymplectic involution centralizing $G$. We assume that $\mathrm{Fix}_X(\sigma) \neq \emptyset$.
We may apply Theorem \ref{roughclassi} which yields that $X/ \sigma =Y$ is a $G$-minimal Del Pezzo surface and $\pi: X \to Y$ is branched along a smooth connected curve $B$ of genus
\[
g(B) = 13-e(Y).
\]
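This is the Euler characteristic formula with $m = n = 0$: since $e(Y) = 12 - d(Y)$ for a Del Pezzo surface of degree $d(Y)$, it can be rewritten as
\[
24 = 2e(Y) + 2g(B) - 2 \quad \Longrightarrow \quad g(B) = 13 - e(Y) = 1 + d(Y).
\]
In particular, $g(B) = 4$ if $d(Y) = 3$ and $g(B) = 6$ if $d(Y) = 5$.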
We will see in the following that only very few Del Pezzo surfaces admit an effective action of $S_5$ or a smooth $S_5$-invariant curve of appropriate genus.
\begin{lemma}
The degree $d(Y)$ of the Del Pezzo surface $Y$ is either three or five.
\end{lemma}
\begin{proof}
We prove the statement by excluding Del Pezzo surfaces of degree $\neq 3,5$.
\begin{itemize}
\item
Assume $Y \cong \mathbb P_2$. Then $G =S_5$ acts effectively on $\mathbb P_2$, i.e., $S_5 \hookrightarrow \mathrm{PSL}_3(\mathbb C)$. Let $\tilde G$ denote the preimage of $G$ in $\mathrm{SL}_3(\mathbb C)$. Since $A_5$ has no nontrivial central extension of degree three, it follows that the preimage of $A_5 < S_5$ in $\tilde G$ splits as $\tilde A_5 = A_5 \times C_3$. It has index two in $\tilde G$ and is therefore a normal subgroup of $\tilde G$. Let $g \in S_5$ be any transposition and pick $\tilde g$ in its preimage with $\tilde g^2 = \mathrm{id}$. Now $\tilde g$ and $A_5$ generate a copy of $S_5$ in $\mathrm{SL}_3(\mathbb C)$, so the action of $S_5$ is given by a three-dimensional representation.
The irreducible representations of $S_5$ have dimensions $1$, $4$, $5$, or $6$, and it follows that there is no faithful three-dimensional representation of $S_5$ and therefore no effective $S_5$-action on $\mathbb P_2$.
\item
Assume that $Y$ is isomorphic to $\mathbb P_1 \times \mathbb P_1$. We investigate the action of $S_5 = A_5 \rtimes C_2$ and note that $A_5$ is a simple group. The automorphism group $\mathrm{Aut}(Y)$ is given by
\[
(\mathrm{PSL}_2( \mathbb C) \times \mathrm{PSL}_2(\mathbb C)) \rtimes C_2.
\]
It follows that $A_5 < \mathrm{PSL}_2( \mathbb C) \times \mathrm{PSL}_2( \mathbb C)$, and the action of $A_5$ respects the product structure, i.e., the canonical projections onto the factors are $A_5$-equivariant.
If $A_5$ acts trivially on one of the factors, then the generator $\tau$ of the outer $C_2$ stabilizes this factor because $A_5$ must act nontrivially on the second factor. It follows that $S_5$ stabilizes the second factor, which is impossible since there is no effective action of $S_5$ on $\mathbb P_1$. It follows that $A_5$ acts effectively on both factors and $\tau$ exchanges them.
We consider an element $\lambda$ of order five in $A_5$ and choose coordinates on $\mathbb P_1 \times \mathbb P_1$ such that $\lambda$ acts by
\[
([z_1:z_2],[w_1:w_2]) \mapsto ([\xi z_1:z_2],[\xi ^a w_1:w_2])
\]
for some $a \in \{1,2,3,4\}$ and $\xi^5 =1$. The automorphism $\lambda$ has four fixed points
\begin{align*}
p_1 = ([1:0],[1:0]),\quad
p_2 = ([1:0],[0:1]),\quad
p_3 = ([0:1],[1:0]),\quad
p_4 = ([0:1],[0:1]).
\end{align*}
Since it lifts to a symplectic automorphism on the K3-surface $X$ with four fixed points, all fixed points must lie on the branch curve.
The branch curve $B \subset Y$ is a smooth invariant curve linearly equivalent to $-2K_Y$ and is therefore given by an $S_5$-semi-invariant polynomial $f$ of bidegree $(4,4)$. Since $f$ must be invariant with respect to the subgroup $A_5$, it is a linear combination of $\lambda$-invariant monomials of bidegree $(4,4)$.
For each choice of $a$ one lists all $\lambda$-invariant monomials of bidegree $(4,4)$. For $a=1$ these are
\[
z_1 z_2^3 w_1^4,\,\, z_1^2 z_2^2 w_1^3 w_2 ,\,\, z_1^3 z_2 w_1^2 w_2^2,\,\, z_1^4 w_1 w_2^3 , \,\,z_2^4 w_2^4.
\]
Since $f$ must vanish at $p_1, \dots, p_4$, one sees that $f$ cannot contain the monomial $z_2^4 w_2^4$. The remaining monomials have the common factor $z_1 w_1$, so $f$ factorizes and $B$ must be reducible, a contradiction.
The same argument can be carried out for each choice of $a$. It follows that the action of $S_5$ on $\mathbb P_1 \times \mathbb P_1$ does not admit invariant irreducible curves of bidegree $(4,4)$. This eliminates the case $Y \cong \mathbb P_1 \times \mathbb P_1$.
\item
Again using the fact that the largest subgroup of $S_5$ which can stabilize a (-1)-curve in $Y$ is the group $D_{12}$ of index 10,
it follows that the number of (-1)-curves in a $G$-orbit is at least 10.
A Del Pezzo surface of degree six has six (-1)-curves and therefore $d(Y) \neq 6$.
A Del Pezzo surface of degree four contains sixteen (-1)-curves. Since 16 does not divide the order of $S_5$, the set of these curves is not a single $G$-orbit. As it cannot be the union of $G$-orbits either, we conclude $d(Y) \neq 4$.
\item
If $d(Y) =2$, then the anticanonical map defines an $\mathrm{Aut}(Y)$-equivariant double cover of $\mathbb P_2$. The induced action of $S_5$ on $\mathbb P_2$ would have to be effective and therefore we obtain a contradiction as in the case $Y \cong \mathbb P_2$.
\item
If $d(Y)=1$, then the anticanonical system $|-K_Y|$ is known to have precisely one base point, which has to be a fixed point of the action of $S_5$. Since $S_5$ has no faithful two-dimensional representation, this is a contradiction.
\end{itemize}
Since we have considered all possible $G$-minimal Del Pezzo surfaces, the proof of the lemma is complete.
\end{proof}
\subsection{Double covers of Del Pezzo surfaces of degree three}
The following example of a K3-surface $X$ with an action of $S_5 \times C_2$ such that $X/\sigma$ is a Del Pezzo surface of degree three can be found in Mukai's list \cite{mukai} (cf. also Table \ref{TableMukai}).
\begin{example}\label{MukaiS5}
Let $X$ be the K3-surface in $\mathbb P_5$ given by
\[
\sum_{i=1}^5 x_i = \sum_{i=1}^6 x_i^2 = \sum_{i=1}^5 x_i^3=0
\]
and let $S_5$ act on $\mathbb P_5$ by permuting the first five variables and by the character $\mathrm{sgn}$ on the last variable. This induces an action on $X$.
The commutator subgroup $ S_5' = A_5 < S_5$ acts by symplectic transformations. In order to show that the full group acts symplectically, consider the transposition $\tau = (12) \in S_5$ acting on $\mathbb P_5$ by $[x_1:x_2:x_3:x_4:x_5:x_6] \mapsto [x_2:x_1:x_3:x_4:x_5:-x_6]$. One checks that the induced involution on $X$ has isolated fixed points and is therefore symplectic. It follows that $S_5 < \mathrm{Aut}_\mathrm{symp}(X)$.
Let $\sigma : \mathbb P_5 \to \mathbb P_5$ be the involution $[x_1:x_2:x_3:x_4:x_5:x_6] \mapsto [x_1:x_2:x_3:x_4:x_5:-x_6]$. This defines an involution on $X$ with a positive-dimensional set of fixed points $\{x_6=0 \} \cap X$. Therefore $\sigma$ is an antisymplectic involution on $X$ which centralizes the action of $S_5$.
The quotient $Y$ of $X$ by $\sigma$ is obtained by restricting the rational map $[x_1:x_2:x_3:x_4:x_5:x_6] \mapsto [x_1:x_2:x_3:x_4:x_5]$ to $X$. The surface $Y$ is given by
\[
\{ \sum_{i=1}^5 y_i= \sum_{i=1}^5 y_i^3 =0\} \subset \mathbb P_4
\]
and is isomorphic to the Clebsch diagonal surface
$\{z_1^2 z_2 + z_1 z_3^2 + z_3 z_4^2 + z_4 z_2^2 = 0\} \subset \mathbb P_3$ (cf. Theorem 10.3.10 in \cite{dolgachev}), a Del Pezzo surface of degree three. The branch set $B$ is given by $\{ \sum_{i=1}^5 y_i^2=0\} \cap Y \subset \mathbb P_4$.
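As a consistency check, adjunction on the cubic surface $Y$ yields
\[
g(B) = 1 + \tfrac{1}{2}\,(-2K_Y)\cdot(-2K_Y + K_Y) = 1 + K_Y^2 = 4,
\]
in accordance with $g(B) = 13 - e(Y) = 13 - 9 = 4$.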
\end{example}
By the following proposition, the example above is the unique K3-surface with $S_5 \times C_2$-symmetry such that $X/\sigma$ is a Del Pezzo surface of degree three.
\begin{proposition}\label{S5 on degree three}
Let $X$ be a K3-surface with a symplectic action of the group $S_5$ centralized by an antisymplectic involution $\sigma$. If $Y=X/\sigma$ is a Del Pezzo surface of degree three, then $X$ is equivariantly isomorphic to Mukai's $S_5$-example $\{\sum_{i=1}^5 x_i = \sum_{i=1}^6 x_i^2 = \sum_{i=1}^5 x_i^3=0\} \subset \mathbb P_5$.
\end{proposition}
\begin{proof}
We consider the $\mathrm{Aut}(Y)$-equivariant embedding of the Del Pezzo surface $Y$ into $\mathbb P_3$ given by the anticanonical map. Any automorphism of $Y$ is induced by an automorphism of the ambient projective space.
It follows from the representation and invariant theory of the group $S_5$ that a Del Pezzo surface of degree three with an effective action of the group $S_5$ is equivariantly isomorphic to the Clebsch cubic $\{z_1^2 z_2 + z_1 z_3^2 + z_3 z_4^2 + z_4 z_2^2 = 0\} \subset \mathbb P_3$ (cf. Theorems 10.3.9 and 10.3.10, Table 10.3 in \cite{dolgachev}).
The branch curve $B\subset Y$ is linearly equivalent to $-2K_Y$. We show that $B$ is given by intersecting $Y$ with a quadric in $\mathbb P_3$.
Applying the formula
\[
h^0(Y, \mathcal{O}(-rK_Y))= 1+ \frac{1}{2}r(r+1)d(Y)
\]
(cf. e.g. Lemma 8.3.1 in \cite{dolgachev})
to $d=d(Y)=3$ and $r=2$ we obtain $h^0(Y,\mathcal{O}( -2K_Y))= 10$. This is also the dimension of the space of sections of $\mathcal O_{\mathbb P_3} (2)$ in $\mathbb P_3$ (homogeneous polynomials of degree two in four variables). It follows that the restriction map
\[
H^0(\mathbb P_3, \mathcal O (2)) \to H^0(Y, \mathcal O(-2K_Y))
\]
is surjective and $B = Y \cap Q$ for some quadric $Q = \{f=0\}$ in $\mathbb P_3$.
Since $B$ is an $S_5$-invariant curve in $Y$, it follows that for each $g \in S_5$ the intersection of $gQ = \{ f \circ g^{-1} =0\}$ with $Y$ coincides with $B$. It follows that $f|_Y$ is a multiple of $(f\circ g^{-1})|_Y$, i.e., there exists a constant $c \in \mathbb C^*$ such that $(f \circ g^{-1}) - cf$ vanishes identically on $Y$. Since $Y$ is irreducible, this implies $f \circ g^{-1} = cf$. It follows that the polynomial $f$ is an $S_5$-semi-invariant and therefore invariant with respect to the commutator subgroup $A_5$.
We have previously noted that after a suitable linear change of coordinates the surface $Y$ is given by $\{ \sum_{i=1}^5 y_i= \sum_{i=1}^5 y_i^3 =0\} \subset \mathbb P_4$ where $S_5$ acts by permutation. The action of any transposition on an $S_5$-semi-invariant polynomial is given by multiplication by $\pm 1$. It follows that in the coordinates $[y_1:\dots:y_5]$ the semi-invariant polynomial $f$ is given by
\[
a \sum_{i=1}^5 y_i^2 + b \Big(\sum_{i=1}^5 y_i\Big)^2
\]
for some $a,b \in \mathbb C$. Using the fact that $Y \subset \{\sum_{i=1}^5 y_i=0\}$, it follows that $B$ is given by intersecting $Y$ with $\{\sum_{i=1}^5 y_i^2 =0\}$ and $X$ is Mukai's $S_5$-example discussed in Example \ref{MukaiS5}.
\end{proof}
\subsection{Double covers of Del Pezzo surfaces of degree five}
A second class of candidates of K3-surfaces with $S_5 \times C_2$-symmetry is given by double covers of Del Pezzo surfaces of degree five.
Any two Del Pezzo surfaces of degree five are isomorphic and the automorphism group of a Del Pezzo surface $Y$ of degree five is $S_5$. The ten (-1)-curves on $Y$ form a graph known as the \emph{Petersen graph}. The Petersen graph has $S_5$-symmetry and every symmetry of the abstract graph is induced by a unique automorphism of the surface $Y$.
The following proposition classifies K3-surfaces with $S_5 \times C_2$-symmetry which are double covers of Del Pezzo surfaces of degree five.
\begin{proposition}
Let $X$ be a K3-surface with a symplectic action of the group $S_5$ centralized by an antisymplectic involution $\sigma$. If $Y=X/\sigma$ is a Del Pezzo surface of degree five, then $X$ is equivariantly isomorphic to the minimal desingularization of the double cover of $\mathbb P_2$ branched along the sextic
\begin{align*}
\{&2(x^4yz+xy^4z+xyz^4)
-2(x^4y^2+x^4z^2+x^2y^4+x^2z^4+y^4z^2+y^2z^4)
+2(x^3y^3+x^3z^3+y^3z^3)\\
&+x^3y^2z+x^3yz^2+x^2y^3z+x^2yz^3+xy^3z^2+xy^2z^3
-6x^2y^2z^2
=0\}
\end{align*}
\end{proposition}
\begin{proof}
Let $B \subset Y$ denote the branch locus of the covering $X \to Y$. The curve $B$ is smooth, connected, invariant with respect to the full automorphism group of $Y$ and linearly equivalent to $-2K_Y$.
The Del Pezzo surface $Y$ is the blow-up of $\mathbb P_2$ in four points $p_1,p_2,p_3,p_4$ in general position. We may choose coordinates $[x:y:z]$ on $\mathbb P_2$ such that
\begin{align*}
p_1=[1:0:0],\quad
p_2=[0:1:0],\quad
p_3=[0:0:1],\quad
p_4=[1:1:1].
\end{align*}
Let $m: Y \to \mathbb P_2$ be the blow-down map and let $E_i = m^{-1}(p_i)$.
Consider the $S_4$-action on $\mathbb P_2$ permuting the points $\{p_i\}$. The isotropy at the point $p_1$ is isomorphic to $S_3$ and induces an effective $S_3$-action on $E_1$.
Let $E$ be any (-1)-curve on $Y$. By adjunction $E\cdot B=2$. Since $Y$ contains precisely ten (-1)-curves forming an $S_5$-orbit, the group $H = \mathrm{Stab}_{S_5}(E)$ has order 12 and all stabilizer groups of (-1)-curves in $Y$ are conjugate.
It follows that the group $H$ contains a copy of $S_3$ acting effectively on $E$, and therefore $H$ is isomorphic to the dihedral group of order 12. The points of intersection $B\cap E$ form an $H$-invariant subset of $E$. Since $H$ has no fixed points in $E$ and precisely one orbit $H.p = \{p,q\}$ consisting of two elements, it follows that $B$ meets $E$ transversally in $p$ and $q$.
In particular, each curve $E_i$
meets $B$ in two points and the image curve $C = m(B)$ has nodes at the four points $p_i$.
By Lemma \ref{selfintblowdown}, the self-intersection number of $C$ is $20 + 4\cdot 4= 36$, so $C$ is a sextic curve. It is invariant with respect to the action of $S_4$ given by permutation of $p_1, \dots, p_4$. For simplicity, we first only consider the action of $S_3$ permuting $p_1,p_2,p_3$ and conclude that $C$ is given by $\{f =0 \}$, where $f=\sum a_i f_i$ is a linear combination of the following degree six polynomials
\begin{align*}
f_1&=x^6+y^6 +z^6\\
f_2&=x^5y + x^5z+ xy^5 +xz^5+y^5z+yz^5\\
f_3&=x^4yz+xy^4z+xyz^4\\
f_4&=x^4y^2+x^4z^2+x^2y^4+x^2z^4+y^4z^2+y^2z^4\\
f_5&=x^3y^3+x^3z^3+y^3z^3\\
f_6&=x^3y^2z+x^3yz^2+x^2y^3z+x^2yz^3+xy^3z^2+xy^2z^3\\
f_7&=x^2y^2z^2
\end{align*}
The fact that $C$ passes through $p_i$ and is singular at $p_i$ yields $a_1=a_2=0$ and
\[
3a_3+6a_4+3a_5+6a_6+a_7=0.
\]
The two tangent lines of $C$ at the node $p_i$ correspond to the unique $\mathrm{Stab}(E_i)$-orbit of length two in $E_i$. We consider the point $p_3$ and the subgroup $S_3 < S_4$ stabilizing $p_3$. The action of $S_3$ on $E_3$ is given by the linearized $S_3$-action on the set of lines through $p_3$. One checks that in local affine coordinates $(x,y)$ the unique orbit of length two corresponds to the line pair $x^2 -xy +y^2 =0$.
Dehomogenizing $f$ at $p_3$, i.e., setting $z=1$, we obtain the local equation $f_\mathrm{dehom}$ of $C$ at $p_3$.
The polynomial $f_\mathrm{dehom}$ modulo terms of order three or higher must be a multiple of $x^2 -xy +y^2$.
Therefore $a_3 = -a_4$.
Next we consider the intersection of $C$ with the line $L_{34} = \{x=y\}$ joining $p_3$ and $p_4$. We know that $f|_{L_{34}}$ vanishes of order two at $p_3$ and $p_4$ and at one or two further points on $L_{34}$.
Let $\widetilde L_{34}$ denote the proper transform of $L_{34}$ inside the Del Pezzo surface $Y$. The curve $\widetilde L_{34}$ is a (-1)-curve, hence its stabilizer $\mathrm{Stab}_G(\widetilde L_{34})$ is isomorphic to $D_{12} = S_3 \times C_2$. The factor $C_2$ acts trivially on $\widetilde L_{34}$. Since the intersection of $\widetilde L_{34}$ with $B$ is $\mathrm{Stab}_G(\widetilde L_{34})$-invariant, it follows that $\widetilde L_{34} \cap B$ is the unique $S_3$-orbit of length two in $\widetilde L_{34}$.
We wish to transfer our determination of the unique $S_3$-orbit of length two in $E_3$ above to the curve $\widetilde L_{34}$ using an automorphism of $Y$ mapping $E_3$ to $\widetilde L_{34}$. Consider the automorphism $\varphi$ of $Y$ induced by the birational map of $\mathbb P_2$ given by
\[
[x:y:z] \mapsto [x(z-y):z(x-y):xz]
\]
(cf. Theorem 10.2.2 in \cite{dolgachev}) and let $\psi$ be the automorphism of $Y$ induced by the permutation of the points $p_2$ and $p_3$ in $\mathbb P_2$. Then $\psi \circ \varphi$ is an automorphism of $Y$ mapping $E_3$ to $\widetilde L_{34}$. If $[X:Y]$ denote homogeneous coordinates on $E_3$ induced by the affine coordinates $(x,y)$ in a neighbourhood of $p_3$, then a point $[X:Y] \in E_3$ is mapped to the point corresponding to $[X:X: X-Y] \in L_{34} \subset \mathbb P_2$. It was derived above that the unique $S_3$-orbit of length two in $E_3$ is given by $X^2-XY +Y^2$ and it follows that the unique $S_3$-orbit of length two in $\widetilde L_{34}$ corresponds to the points $[x:x:z] \in \mathbb P_2$ fulfilling $x^2 -xz +z^2 =0$.
Therefore, $f|_{L_{34}}$ is a multiple of the polynomial $x^2(x-z)^2(x^2 -xz +z^2)$.
Comparing coefficients with $f(x:x:z)$ yields
\begin{align*}
2a_3+2a_6 &= 2a_5 +2a_6\\
2a_4 + a_5&= 2a_4+a_3\\
8a_4+4a_5 &= 2a_4+2a_6 +a_7\\
-6a_4 -3a_5 &= 2a_5 +2a_6.
\end{align*}
We conclude $a_3=a_5=2= -a_4$, $a_6=1$, and $a_7=-6$. So if $X$ as in the proposition exists, it is the double cover of $Y$ branched along the proper transform of $\{f=0\}$ in $Y$, where
\begin{align*}
f(x,y,z)=&2(x^4yz+xy^4z+xyz^4)\\
&-2(x^4y^2+x^4z^2+x^2y^4+x^2z^4+y^4z^2+y^2z^4)\\
&+2(x^3y^3+x^3z^3+y^3z^3)\\
&+x^3y^2z+x^3yz^2+x^2y^3z+x^2yz^3+xy^3z^2+xy^2z^3\\
&-6x^2y^2z^2.
\end{align*}
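The polynomial $f$ obtained above can be double-checked with exact integer arithmetic. The following sketch (a verification aid only, not part of the argument) confirms the coefficient relation, the $S_3$-symmetry in the coordinates, the vanishing at the four base points, and the restriction to the line $\{x=y\}$ derived above; the proportionality factor $-2$ in the last check is our own computation.

```python
from itertools import permutations

# the sextic f with coefficients a_3 = a_5 = 2, a_4 = -2, a_6 = 1, a_7 = -6
def f(x, y, z):
    return (2*(x**4*y*z + x*y**4*z + x*y*z**4)
            - 2*(x**4*y**2 + x**4*z**2 + x**2*y**4
                 + x**2*z**4 + y**4*z**2 + y**2*z**4)
            + 2*(x**3*y**3 + x**3*z**3 + y**3*z**3)
            + (x**3*y**2*z + x**3*y*z**2 + x**2*y**3*z
               + x**2*y*z**3 + x*y**3*z**2 + x*y**2*z**3)
            - 6*x**2*y**2*z**2)

# the coefficients satisfy the linear relation 3a_3 + 6a_4 + 3a_5 + 6a_6 + a_7 = 0
assert 3*2 + 6*(-2) + 3*2 + 6*1 + (-6) == 0

# f is symmetric under all permutations of the coordinates
for p in [(2, 3, 5), (1, -4, 7)]:
    assert len({f(*q) for q in permutations(p)}) == 1

# f vanishes at the four base points p_1, ..., p_4
for p in [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]:
    assert f(*p) == 0

# on the line {x = y}: f(x, x, z) = -2 x^2 (x - z)^2 (x^2 - x z + z^2)
for x, z in [(2, 3), (5, -1), (7, 11)]:
    assert f(x, x, z) == -2 * x**2 * (x - z)**2 * (x**2 - x*z + z**2)
```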
In order to prove existence, let $X$ be the minimal desingularization of the double cover of $\mathbb P_2$ branched along $\{f=0\}$. Then $X$ is the double cover of the Del Pezzo surface $Y$ of degree five branched along the proper transform $D$ of $\{f=0\}$ in $Y$.
Since all automorphisms of $Y$ are induced by explicit biholomorphic or birational transformation of $\mathbb P_2$
one can check by direct computations that $D$ is in fact invariant with respect to the action of $\mathrm{Aut}(Y) = S_5$. The covering involution $\sigma$ is antisymplectic.
On $X$ there is an action of a central extension $E$ of $S_5$ by $C_2$. Let $E_\mathrm{symp}$ be the subgroup of symplectic automorphisms in $E$. Since $E$ contains the antisymplectic covering involution, $E_\mathrm{symp} \neq E$. The image $N$ of $E_\mathrm{symp}$ in $S_5$ is normal and therefore either $N \cong S_5$ or $N \cong A_5$.
If $N \cong A_5$ and $| E_\mathrm{symp}| =60$, then $E_\mathrm{symp} \cong A_5$. Lifting any transposition from $S_5$ to an element $g$ of order two in $E$, the group generated by $g$ and $E_\mathrm{symp}$ inside $E$ is isomorphic to $S_5$. It follows that $E$ splits as $S_5 \times C_2$ and $E / E_\mathrm{symp} \cong C_2 \times C_2$. This is a contradiction, since $E_\mathrm{symp}$ is the kernel of the homomorphism $E \to C_2$ distinguishing symplectic and antisymplectic automorphisms and therefore has index at most two in $E$.
If $N \cong A_5$ and $| E_\mathrm{symp}| =120$, then $ E = E_\mathrm{symp} \times C_2$, where the outer $C_2$ is generated by the antisymplectic covering involution $\sigma$, and $E / C_2 = S_5$ implies that $E_\mathrm{symp} \cong S_5$. This is contradictory to the assumption $N \cong A_5$.
In the last remaining case $N \cong S_5$. Since $E_\mathrm{symp} \neq E$, also $E_\mathrm{symp} \cong S_5$ and $E$ splits as $E_\mathrm{symp} \times C_2$.
It follows that the action of $S_5$ on $Y$ induces a symplectic action of $S_5$ on the double cover $X$ centralized by the antisymplectic covering involution. This completes the proof of the proposition.
\end{proof}
\subsection{Conclusion}
We summarize our results of the previous subsections in the following theorem.
\begin{theorem}
Let $X$ be a K3-surface with a symplectic action of the group $S_5$ centralized by an antisymplectic involution $\sigma$ with $\mathrm{Fix}_X(\sigma) \neq \emptyset$. Then $X$ is equivariantly isomorphic to either Mukai's $S_5$-example or the minimal desingularization of the double cover of $\mathbb P_2$ branched along the sextic
\begin{align*}
\{&F_{S_5}(x,y,z)=\\
&2(x^4yz+xy^4z+xyz^4)-2(x^4y^2+x^4z^2+x^2y^4+x^2z^4+y^4z^2+y^2z^4)+2(x^3y^3+x^3z^3+y^3z^3)\\
&+x^3y^2z+x^3yz^2+x^2y^3z+x^2yz^3+xy^3z^2+xy^2z^3-6x^2y^2z^2=0\}.
\end{align*}
\end{theorem}
\section{The group $M_{20} = C_2^4 \rtimes A_5$}\index{Mukai group! $M_{20}$}
\begin{proposition}
There does not exist a K3-surface with a symplectic action of $M_{20}$ centralized by an antisymplectic involution $\sigma$ with $\mathrm{Fix}_X(\sigma) \neq \emptyset$.
\end{proposition}
\begin{proof}
Assume that a K3-surface $X$ with these properties exists. Applying Theorem \ref{roughclassi} we see that $X \to Y$ is branched along a single $M_{20}$-invariant smooth curve $C$ on the Del Pezzo surface $Y$. The curve $C$ is neither rational nor elliptic. By Hurwitz' formula,
\[
| \mathrm{Aut} (C) | \leq 84(g(C)-1),
\]
the genus of $C$ must be at least twelve, as $|\mathrm{Aut}(C)| \geq |M_{20}| = 960 > 840 = 84 \cdot 10$. Since $C$ is linearly equivalent to $-2K_Y$, the adjunction formula
\[
2g(C)-2 = K_Y \cdot C + C^2 = 2K_Y^2
\]
implies $\mathrm{deg}(Y) = K_Y^2 \geq 11$. This is a contradiction since the degree of a Del Pezzo surface is at most nine.
\end{proof}
\section{The group $F_{384} = C_2^4 \rtimes S_4$}\index{Mukai group! $F_{384}$}
Before we prove non-existence of K3-surfaces with $F_{384} \times C_2$-symmetry, we note the following useful fact about $S_4$-actions on Riemann surfaces.
\begin{lemma}\label{S4 not on g=1,2}
The group $S_4$ does not admit an effective action on a Riemann surface of genus one or two.
\end{lemma}
\begin{proof}
The automorphism group of a Riemann surface $T$ of genus one is of the form $\mathrm{Aut}(T)= L \ltimes T$ for $L \in \{C_2, C_4, C_6\}$. We have seen before (cf. Proof of Proposition \ref{elliptic branch}) that any subgroup $H$ of $\mathrm{Aut}(T)$ can be put into the form $H = (H \cap L) \ltimes (H \cap T)$. The nontrivial normal subgroups of $S_4$ are $A_4$ and $C_2 \times C_2$. Since $A_4$ is not Abelian and the quotient of $S_4$ by $S_4 \cap T = C_2 \times C_2$ is not cyclic, we
conclude that $S_4$ is not a subgroup of $ \mathrm{Aut}(T)$.
Assume that $S_4$ acts effectively on a Riemann surface $H$ of genus two. Note that $H$ is hyperelliptic and the quotient of $H$ by the hyperelliptic involution is branched at six points. Since $S_4$ has no normal subgroup of order two, the induced action of $S_4$ on the quotient $\mathbb P_1$ is effective and therefore has precisely one orbit consisting of six points. The isotropy subgroup at these points is isomorphic to $C_4$. The isotropy group at the corresponding points in $H$ must be isomorphic to $C_4 \times C_2$. Since this group is not cyclic, it cannot act effectively with fixed points on a Riemann surface and we obtain a contradiction.
\end{proof}
\begin{proposition}
There does not exist a K3-surface with a symplectic action of $F_{384}$ centralized by an antisymplectic involution $\sigma$ with $\mathrm{Fix}_X(\sigma) \neq \emptyset$.
\end{proposition}
\begin{proof}
As above, assume that a K3-surface $X$ with these properties exists and apply Theorem \ref{roughclassi} to see that $X \to Y$ is branched along a single $F_{384}$-invariant smooth curve $C$ on the Del Pezzo surface $Y$. It follows from Hurwitz' formula that the genus of $C$ is at least 6.
We use the realization of $F_{384}$ as a semi-direct product $C_4^2 \rtimes S_4$ (cf. \cite{mukai}) and consider the quotient $Q$ of the branch curve $C$ by the normal subgroup $N = C_4^2$. On $Q$ there is the induced action of $S_4$. It follows from the lemma above that $Q$ is either rational or $g(Q) >2$. In the second case, if we apply the Riemann-Hurwitz formula to the covering $ C \to Q$, then
\[
e(C) = 16 e(Q) - \text{branch point contributions} \leq -64
\]
and $g(C) \geq 33$. This contradicts the adjunction formula on the Del Pezzo surface $Y$ and implies that $Q$ is a rational curve.
It follows from adjunction that $K_Y^2 = g(C)-1$. Therefore, the degree of the Del Pezzo surface $Y$ is at least five.
We consider the action of $F_{384}$ on the configuration of (-1)-curves on $Y$ and recall that the order of a stabilizer of a (-1)-curve in $Y$ is at most twelve (cf. Remark \ref{stab of minus one curve}); such a stabilizer therefore has index at least $32$ in $G$. It follows that $Y$ is either $\mathbb P_1 \times \mathbb P_1$ or $\mathbb P_2$. In the first case, the canonical projections of $\mathbb P_1 \times \mathbb P_1$ are equivariant with respect to a subgroup of index two in $F_{384}$ and thereby contradict Lemma \ref{conicbundle}. Consequently, $Y \cong \mathbb P_2$. In particular, $g(C) =10$ and $e(C) = -18$. It follows that the branch point contribution of the covering $C \to Q$ must be $50$. Since isotropy groups must be cyclic, the only possible isotropy subgroups of $N = C_4^2$ at a point in $C$ are $C_2$ and $C_4$, of index eight and four, respectively. The full branch point contribution must therefore be a multiple of four. This contradiction yields the non-existence claimed.
\end{proof}
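The parity argument closing the proof above can be made completely explicit: each $N$-orbit of branch points contributes $8$ (isotropy $C_2$: eight points of ramification index two) or $12$ (isotropy $C_4$: four points of ramification index four), and no sum of these values equals $50$. The following sketch verifies this exhaustively; it is an illustration, not part of the proof.

```python
# Each branch orbit of N = C_4^2 (order 16) contributes
#   isotropy C_2: 16/2 = 8 points of ramification index 2 -> 8 * (2 - 1) = 8
#   isotropy C_4: 16/4 = 4 points of ramification index 4 -> 4 * (4 - 1) = 12
# Both contributions are divisible by four, so a total of 50 is unreachable.
total = 50
reachable = {8 * i + 12 * j
             for i in range(total // 8 + 1)
             for j in range(total // 12 + 1)}
assert all(c % 4 == 0 for c in reachable)
assert total not in reachable
```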
\section{The group $A_{4,4} = C_2^4 \rtimes A_{3,3}$}\index{Mukai group! $A_{4,4}$}
By $S_{p,q}$ for $p+q =n$ we denote a subgroup $S_p \times S_q$ of $S_n$ preserving a partition of the set $\{1,\dots, n\}$ into two subsets of cardinality $p$ and $q$. The intersection of $A_n$ with $S_{p,q}$ is denoted by $A_{p,q}$.
\begin{proposition}
There does not exist a K3-surface with a symplectic action of $A_{4,4}$ centralized by an antisymplectic involution $\sigma$ with $\mathrm{Fix}_X(\sigma) \neq \emptyset$.
\end{proposition}
\begin{proof}
We again assume that a K3-surface with these properties exists. Applying Theorem \ref{roughclassi} we see that $X \to Y$ is branched along a single $A_{4,4}$-invariant smooth curve $C$ on the Del Pezzo surface $Y$. The group $A_{4,4}$ is a semi-direct product $C_2^4 \rtimes A_{3,3}$ (see e.g. \cite{mukai}). We consider the quotient $Q$ of $C$ by the normal subgroup $N \cong C_2^4$. On $Q$ there is an action of $A_{3,3}$.
Since $A_{3,3}$ contains the subgroup $C_3 \times C_3$, which does not act effectively on a rational curve, it follows that $Q$ is not rational. We apply the Riemann-Hurwitz formula to the covering $C \to Q$.
If $Q$ is elliptic, then $2g(C) -2$ equals the branch point contribution of the covering $C \to Q$. As above, isotropy groups must be cyclic and the maximal possible isotropy group of the $C_2^4$-action on $C$ is $C_2$ and has index eight in $C_2^4$. Consequently, the branch point contribution at each branch point is eight.
Recall that any group $H$ acting on the torus $Q$ can be put into the form $H = (H \cap L) \ltimes (H \cap Q)$ for $L \in \{C_2, C_4, C_6\}$. Since $Q$ acts freely on itself by translations,
the action of $C_3 \times C_3 < A_{3,3}$ on the elliptic curve $Q$ has orbits of length greater than or equal to three. Therefore, the total branch point contribution must be greater than or equal to $24$. In particular, $g(C) = \mathrm{deg}(Y) +1 \geq 13$ contrary to $\mathrm{deg}(Y) \leq 9$.
If $g(Q) \geq 2$, then $g(C) \geq 17$, which is also contrary to $\mathrm{deg}(Y) \leq 9$.
\end{proof}
\section{The groups $T_{192} = (Q_8 * Q_8) \rtimes S_3$ and $H_{192} = C_2^4 \rtimes D_{12}$}\index{Mukai group! $T_{192}$}
By $Q_8$\nomenclature{$Q_8$}{the quaternion group} we denote the quaternion group $\{+1,-1, +I,-I, +J,-J,+K,-K\}$ where $I^2= J^2= K^2 = IJK = -1$.
The central product $ Q_8 * Q_8 $ is defined as the quotient of $Q_8 \times Q_8$ by the central involution $(-1, -1)$, i.e., $Q_8 * Q_8 = (Q_8 \times Q_8) / (-1,-1)$.
Note that both groups $T_{192}$ and $H_{192}$ are semi-direct products $C_2^3 \rtimes S_4$ (cf. \cite {mukai}).
\begin{proposition}
For $G =T_{192}$ or $G = H_{192}$ there does not exist a K3-surface with a symplectic action of $G$ centralized by an antisymplectic involution $\sigma$ with $\mathrm{Fix}_X(\sigma) \neq \emptyset$.
\end{proposition}
\begin{proof}
Assume that a K3-surface with these properties exists. Applying Theorem \ref{roughclassi} we see that $X \to Y$ is branched along a single $G$-invariant smooth curve $C$ on the Del Pezzo surface $Y$. The genus of $C$ is at least four by Hurwitz' formula and therefore $\mathrm{deg}(Y) \geq 3$. We consider the quotient $Q$ of $C$ by the normal subgroup $N = C_2^3$. By Lemma \ref{S4 not on g=1,2} the quotient $Q$ is either rational or $g(Q) >2$. In the second case $g(C) \geq 19$ and we obtain a contradiction to $\mathrm{deg}(Y) = g(C)-1 \leq 9$. It follows that $Q$ is a rational curve.
We consider the action of $G$ on the Del Pezzo surface $Y$ of degree $\geq 3$, in particular the induced action on its configuration of (-1)-curves. By Remark \ref{stab of minus one curve} the stabilizer of a (-1)-curve in $Y$ has index $\geq 16$ in $G$ and we may immediately exclude the cases $\mathrm{deg}(Y) = 3,5,6,7$.
The automorphism group of a Del Pezzo surface of degree four is $C_2^4 \rtimes \Gamma$ for $\Gamma \in \{C_2, C_4, S_3, D_{10}\}$ (cf. \cite{dolgachev}). In particular, the maximal possible order is 160 and therefore $\mathrm{deg}(Y) \neq 4$.
Assume that $Y \cong \mathbb P_1 \times \mathbb P_1$. The canonical projections $\pi_1, \pi_2: Y \to \mathbb P_1$ are equivariant with respect to a subgroup $H$ of $G$ of index at most two. It follows that $H$ fits into the exact sequences
\begin{align*}
\{\mathrm{id}\} \to I_1 \to H \overset{(\pi_1)_*}{\to} H_1\to \{\mathrm{id}\} \\
\{\mathrm{id}\} \to I_2 \to H \overset{(\pi_2)_*}{\to} H_2\to \{\mathrm{id}\}
\end{align*}
where $I_i \cong C_2 \times C_2$ is the ineffectivity of the induced $H$-action on the base and $H_i \cong S_4$ (cf. proof of Lemma \ref{conicbundle}).
Since the action of $G$ on $\mathbb P_1 \times \mathbb P_1$ is effective by assumption, it follows that $I_2$ acts effectively on $\pi_1(\mathbb P_1 \times \mathbb P_1)$. We find a set of four points in $\pi_1(\mathbb P_1 \times \mathbb P_1)$ with nontrivial isotropy with respect to $I_2 \cong C_2 \times C_2$. Since $I_2$ is a normal subgroup of $H$, this set is $H$-invariant. The action of $H_1 \cong S_4$ on $\pi_1(\mathbb P_1 \times \mathbb P_1)$ does not, however, admit invariant sets of cardinality four, since the minimal $S_4$-orbit in $\mathbb P_1$ has length six.
We conclude that $Y$ must be isomorphic to $\mathbb P_2$. It follows that $g(C) =10$. Return to the covering $C \to Q$,
\[
-18 = e(C) = 8\cdot e(Q) - \text{branch point contributions}.
\]
Since $Q$ is rational, the branch point contribution must be $34$. The only possible nontrivial isotropy group of $N = C_2^3$ at a point in $C$ is $C_2$, so the full branch point contribution must be divisible by four. This contradiction yields the desired non-existence.
\end{proof}
\section{The group $N_{72} = C_3^2 \rtimes D_8$}\index{Mukai group! $N_{72}$}\label{N72}
We let $X$ be a K3-surface with a symplectic action of $G=N_{72}$ centralized by an antisymplectic involution $\sigma$ with $\mathrm{Fix}_X(\sigma) \neq \emptyset$.
Note that in this case we may not apply Theorem \ref{roughclassi} and therefore begin by excluding that a $G$-minimal model of $Y=X/\sigma$ is an equivariant conic bundle.
\begin{lemma}\label{N72notconicbundle}
A $G$-minimal model of $Y$ is a Del Pezzo surface.
\end{lemma}
\begin{proof}
Assume the contrary and let $Y_\mathrm{min}$ be an equivariant conic bundle and a $G$-minimal model of $Y$. We consider the induced action of $G$ on the base $B=\mathbb P_1$
and denote by $I \lhd G$ the ineffectivity of the $G$-action on $B$. Arguing as in the proof of Lemma \ref{conicbundle}, we see that $I$ is trivial or isomorphic to either $C_2$ or $C_2 \times C_2$. In all cases the quotient $G/I$ contains the subgroup $C_3 \times C_3$, which has no effective action on the rational curve $B$.
\end{proof}
As we will see, only very few Del Pezzo surfaces admit an effective action of the group $N_{72}$. We will explicitly use the group structure of $N_{72}= C_3^2 \rtimes D_8$: the action of $D_8 = C_2 \ltimes (C_2 \times C_2) = \langle \alpha \rangle \ltimes ( \langle \beta \rangle \times \langle \gamma \rangle) < \mathrm{Aut}(C_3 \times C_3)$ on $C_3 \times C_3$ is given by
\[
\alpha(a,b) = (b,a), \quad \beta(a,b)=(a^2,b), \quad \gamma(a,b) = (a,b^2).
\]
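These dihedral relations can be checked directly by modelling $C_3 \times C_3$ additively as $\mathbb F_3^2$, so that $a \mapsto a^2$ becomes negation of the first coordinate. The following short sketch is a verification aid only.

```python
# D_8 = <alpha> x| (<beta> x <gamma>) acting on C_3 x C_3, written additively as F_3^2
def alpha(v):
    a, b = v
    return (b, a)                 # swap the two C_3-factors

def beta(v):
    a, b = v
    return ((-a) % 3, b)          # a -> a^2 is inversion in C_3

def gamma(v):
    a, b = v
    return (a, (-b) % 3)

pts = [(a, b) for a in range(3) for b in range(3)]
# beta and gamma are commuting involutions
assert all(beta(beta(v)) == v and gamma(gamma(v)) == v for v in pts)
assert all(beta(gamma(v)) == gamma(beta(v)) for v in pts)
# conjugation by alpha swaps beta and gamma: alpha beta alpha = gamma
assert all(alpha(beta(alpha(v))) == gamma(v) for v in pts)
```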
As a first step we show:
\begin{lemma}
The degree of a Del Pezzo surface $Y_\mathrm{min}$ is at most four.
\end{lemma}
\begin{proof}
We exclude Del Pezzo surfaces of degree $\geq 5$.
\begin{itemize}
\item
A Del Pezzo surface of degree five has automorphism group $S_5$ and $N_{72} \nless S_5$.
\item
The automorphism group of a Del Pezzo surface of degree six is $(\mathbb C^* )^2 \rtimes (S_3 \times C_2)$ (cf. Theorem 10.2.1 in \cite{dolgachev}). Assume that $N_{72} = C_3^2 \rtimes D_8$ is contained in this group and consider the intersection $ A = N_{72} \cap (\mathbb C^* )^2$. The quotient of $N_{72}$ by $A$ embeds into $S_3 \times C_2$; in particular it has order at most 12 and cannot contain a copy of $C_3^2$. Therefore, the order of $A$ is at least six and $A$ contains a copy of $C_3$. If $|A| =6$, then $A = C_6 = C_3 \times C_2$ and $C_2$ is central in $N_{72}$. Using the group structure of $N_{72}$ specified above one finds that there is no copy of $C_2$ in $N_{72}$ centralizing $C_3 \times C_3$ and therefore $C_2$ cannot be contained in the centre of $N_{72}$.
For every choice of $C_3$ inside $C_3 \times C_3$ there is precisely one element in $\{\alpha, \beta, \gamma\}$ acting trivially on it, and the centralizer of $C_3$ inside $D_8$ is isomorphic to $C_2$. If $|A| >6$, then the centralizer of $C_3$ in $D_8$ has order greater than 2, a contradiction.
\item
A Del Pezzo surface of degree seven is obtained by blowing up two points $p, q$ in $\mathbb P_2$. As was mentioned before, such a surface is never $G$-minimal.
\item
If $G$ acts on $\mathbb P_1 \times \mathbb P_1$, then the canonical projections are equivariant with respect to a subgroup $H$ of index two in $G$. We consider one of these projections. The action of $H$ induces an effective action of $H/I$ on the base $\mathbb P_1$. The group $I$ is either trivial or isomorphic to $C_2$ or $C_2 \times C_2$. In all cases we find an effective action of $C_3 ^2$ on the base, a contradiction.
\item
It remains to exclude $\mathbb P_2$.
If $N_{72}$ acts on $\mathbb P_2$ we consider its embedding into $\mathrm{PSL}_3(\mathbb C)$, in particular the realization of the subgroup $C_3^2 = \langle a \rangle \times \langle b \rangle$ and its lifting to $\mathrm{SL}_3(\mathbb C)$.
We fix a preimage $\tilde a$ of $a$ inside $\mathrm{SL}_3(\mathbb C)$ and may assume that $\tilde a$ is diagonal. Since the action of $a$ on $\mathbb P_2$ is induced by a symplectic action on $X$, it follows that $a$ does not have a positive-dimensional set of fixed points. In appropriately chosen coordinates
\[
\tilde a=
\begin{pmatrix}
1 & 0 & 0 \\
0 & \xi & 0\\
0&0& \xi^2
\end{pmatrix},
\]
where $\xi$ is a primitive third root of unity.
As a next step, we want to specify a preimage $\tilde b$ of $b$ inside $\mathrm{SL}_3(\mathbb C)$. Since $a$ and $b$ commute in $\mathrm{PSL}_3(\mathbb C)$, we know that
\[
\tilde a \tilde b \tilde a^{-1} \tilde b^{-1} = \xi^k \mathrm{id}_{\mathbb C^3}
\]
for $k \in \{0, 1,2\}$.
Note that $\tilde b$ is not diagonal in the coordinates chosen above: otherwise there would be $C_3^2$-fixed points in $\mathbb P_2$, which would correspond to $C_3^2$-fixed points on the double cover $X \to Y$; this is a contradiction, since a symplectic action of $C_3^2 \nless \mathrm{SL}_2(\mathbb C)$ on a K3-surface does not admit fixed points. An explicit calculation yields
\begin{align*}
\tilde b = \tilde b_1 =
\begin{pmatrix}
0 & 0 & * \\
* & 0 & 0\\
0 & * & 0
\end{pmatrix} \quad \text{or} \quad
\tilde b = \tilde b_2=
\begin{pmatrix}
0 & * & 0 \\
0 & 0 & *\\
* & 0 & 0
\end{pmatrix}.
\end{align*}
We can introduce a change of coordinates commuting with $\tilde a$ such that
\begin{align*}
\tilde b = \tilde b_1 =
\begin{pmatrix}
0 & 0 & 1 \\
1 & 0 & 0\\
0 & 1 & 0
\end{pmatrix} \quad \text{or} \quad
\tilde b = \tilde b_2=
\begin{pmatrix}
0 & 1 & 0 \\
0 & 0 & 1\\
1&0&0
\end{pmatrix}.
\end{align*}
Since $\tilde b_1^2 = \tilde b_2$, the two choices above correspond to the two choices of generators $b$ and $b^2$ of $\langle b \rangle$. We pick $\tilde b = \tilde b_1$.
The action of $D_8$ on $C_3^2 $ is specified above and the element $\beta \in D_8$ acts on $C_3^2 $ by $a \to a^2$ and $b \to b$. There is no element $T \in \mathrm{SL}_3(\mathbb C)$ such that (projectively) $T \tilde a T^{-1} = \tilde a^2$ and $T \tilde b T^{-1} = \tilde b$. It follows that there is no action of $N_{72}$ on $\mathbb P_2$.
\end{itemize}
This completes the proof of the lemma.
\end{proof}
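The non-existence of the element $T$ used at the end of the proof reflects the Heisenberg-type commutation relation $\tilde a \tilde b \tilde a^{-1} \tilde b^{-1} = \xi \cdot \mathrm{id}$: conjugation preserves this central scalar, whereas relations $T \tilde a T^{-1} = \xi^k \tilde a^2$ and $T \tilde b T^{-1} = \xi^l \tilde b$ would replace it by $[\tilde a^2, \tilde b] = \xi^2 \cdot \mathrm{id}$. Both commutators can be checked numerically; the sketch below is illustrative only.

```python
import cmath

xi = cmath.exp(2j * cmath.pi / 3)   # primitive third root of unity

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def scalar(c):
    return [[c if i == j else 0 for j in range(3)] for i in range(3)]

def close(A, B):
    return all(abs(A[i][j] - B[i][j]) < 1e-9 for i in range(3) for j in range(3))

a     = [[1, 0, 0], [0, xi, 0], [0, 0, xi**2]]
a_inv = [[1, 0, 0], [0, xi**2, 0], [0, 0, xi]]
b     = [[0, 0, 1], [1, 0, 0], [0, 1, 0]]
b_inv = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]

# [a, b] = a b a^-1 b^-1 = xi * id
assert close(matmul(matmul(a, b), matmul(a_inv, b_inv)), scalar(xi))

# [a^2, b] = xi^2 * id, a different central scalar
a2, a2_inv = matmul(a, a), matmul(a_inv, a_inv)
assert close(matmul(matmul(a2, b), matmul(a2_inv, b_inv)), scalar(xi**2))
```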
As a next step, we study the possibility of rational curves in $\mathrm{Fix}_X(\sigma)$.
\begin{lemma}
There are no rational curves in $\mathrm{Fix}_X(\sigma)$.
\end{lemma}
\begin{proof}
Let $n$ denote the total number of rational curves in $\mathrm{Fix}_X(\sigma)$ and recall $n \leq 10$. If $n \neq 0$, let $C$ be a rational curve in the image of $\mathrm{Fix}_X(\sigma)$ in $Y$ and let $H = \mathrm{Stab}_G(C)$ be its stabilizer. The index of $H$ in $G$ is at most nine, therefore the order of $H$ is at least eight. The action of $H$ on $C$ is effective.
First note that $G$ does not contain $S_4 = O_{24}$ as a subgroup. If this were the case, consider the intersection $S_4 \cap C_3^2$ and the quotient $S_4 \to S_4 / (S_4 \cap C_3^2) < D_8$. Since the only nontrivial normal subgroups of $S_4$ are $A_4$ and $C_2 \times C_2$, this leads to a contradiction.
Consequently, the order of $H$ is at most twelve. In particular, $n \geq 6$. Since $C_8 \nless G$, the group $H$ is not cyclic and any $H$-orbit on $C$ consists of at least two points.
It follows from $C^2 = -4$ that $C$ must meet the union of Mori fibers; since every $H$-orbit in $C$ has length at least two, the union of Mori fibers meets the curve $C$ in at least two points. Recalling that each Mori fiber meets the branch locus $B$ in at most two points, we see that at least $n$ Mori fibers meeting $B$ are required. However, no configuration of $n$ Mori fibers is sufficient to transform the curve $C$ into a curve on a Del Pezzo surface, and further Mori fibers are required. By invariance, the total number $m$ of Mori fibers must be at least $2n$.
Combining the Euler-characteristic formula
\[
24 = 2e(Y_\mathrm{min}) +2m - 2n + \underset{\text{branch curve exists}}{\underset{\text{if non-rational }}{\underbrace{2g-2}}}
\]
with our observation $\mathrm{deg}(Y_\mathrm{min}) \leq 4$, i.e., $e(Y_\mathrm{min}) \geq 8$, we see that $n \leq 4$. However, it was shown above that if $n \neq 0$, then $n \geq 6$. It follows that $n =0$.
\end{proof}
\begin{proposition}
The quotient surface $Y$ is $G$-minimal and isomorphic to the Fermat cubic $\{x_1^3 + x_2^3 +x_3^3 +x_4^3 =0\} \subset \mathbb P_3$. Up to equivalence, there is a unique action of $G$ on $Y$ and the branch locus of $X \to Y$ is given by $\{x_1x_2 + x_3x_4 =0\}$. In particular, $X$ is equivariantly isomorphic to Mukai's $N_{72}$-example.
\end{proposition}
\begin{proof}
We first show that the total number $m$ of Mori fibers equals zero. By the Euler-characteristic formula above, the number $m$ is bounded by four. Using the fact that the maximal order of a stabilizer group of a Mori fiber is twelve (cf. proof of Theorem \ref{roughclassi}) we see that $Y$ must be $G$-minimal.
In order to conclude that $Y$ is the Fermat cubic we consult Dolgachev's lists of automorphism groups of Del Pezzo surfaces of degree less than or equal to four (\cite{dolgachev}, Section 10.2.2; Tables 10.3, 10.4, and 10.5):
It follows immediately from the order of $G$ that $Y$ is not of degree two or four. If $G$ were a subgroup of an automorphism group of a Del Pezzo surface of degree one, it would contain a central copy of $C_3$. The group structure of $N_{72}$ does however not allow this.
After excluding the cases $\mathrm{deg}(Y) \in \{1,2,4\}$ the result now follows from the uniqueness of the cubic surface in $\mathbb P_3$ with an action of $N_{72}$ (cf. Appendix \ref{N72appendix}).
The action of $G$ on $Y$ is induced by a four-dimensional (projective) representation of $G$ and the branch curve $C \subset Y$ is the intersection of $Y$ with an invariant quadric (compare proof of Proposition \ref{S5 on degree three}).
In the Appendix \ref{N72appendix} it is shown that there is a uniquely determined action of $N_{72}$ on $\mathbb P_3$ and a unique invariant quadric hypersurface $\{x_1x_2 + x_3x_4 =0\}$. In particular, the branch curve in $Y$ is defined by $\{x_1x_2 + x_3x_4 =0\} \cap Y$.
Mukai's $N_{72}$-example is defined by $\{ x_1^3+ x_2 ^3 + x_3^3 +x_4^3= x_1x_2 + x_3x_4+ x_5^2 = 0 \} \subset \mathbb P_4$. An antisymplectic involution centralizing the action of $N_{72}$ is given by the map $ x_5 \mapsto -x_5$. The quotient of Mukai's example by this involution is the Fermat cubic and the fixed point set of the involution is given by $\{x_1x_2 + x_3x_4= 0 \}$.
\end{proof}
\section{The group $M_9 =C_3^2 \rtimes Q_8$}\index{Mukai group! $M_9$}\label{M9}
Let $G = M_9$ and let $X$ be a K3-surface with a symplectic $G$-action centralized by the antisymplectic involution $\sigma$ such that $\mathrm{Fix}_X(\sigma) \neq \emptyset$.
We proceed in analogy to the case $G=N_{72}$ above. Arguing precisely as in the proof of Lemma \ref{N72notconicbundle} one shows:
\begin{lemma}
A $G$-minimal model of $Y$ is a Del Pezzo surface.
\end{lemma}
We may exclude rational branch curves without studying configurations of Mori fibers.
\begin{lemma}\label{subgroupsM9}
There are no rational curves in $\mathrm{Fix}_X(\sigma)$.
\end{lemma}
\begin{proof}
Let $n$ be the total number of rational curves in $\mathrm{Fix}_X(\sigma)$. Assume $n \neq 0$, let $C$ be a rational curve in the image of $\mathrm{Fix}_X(\sigma)$ in $Y$ and let $H <G$ be its stabilizer. The action of $H$ on $C$ is effective. We go through the list of finite groups with an effective action on a rational curve.
Since $M_9$ is a group of symplectic transformations on a K3-surface, its elements have order at most eight.
Clearly, $A_6 \nless M_9$ and $D_{10}, \, D_{14}, \, D_{16} \nless M_9$. If $S_4 < M_9 = C_3^2 \rtimes Q_8$, then $S_4 \cap C_3^2$ is a normal subgroup of $S_4$ and it is therefore trivial. Now $ S_4 = S_4 / (S_4 \cap C_3^2) < M_9 / C_3^2 = Q_8$ yields a contradiction. The same argument can be carried out for $A_4$, $D_8$ and $C_8$. If $D_{12} < M_9 = C_3^2 \rtimes Q_8$, then either $D_{12} \cap C_3^2 = C_3$ and $C_2 \times C_2 = D_{12} / C_3 < M_9 / C_3^2 = Q_8$ or $D_{12} \cap C_3^2 = \{ \mathrm{id} \}$ and $D_{12} < Q_8$, both are impossible.
It follows that the subgroups of $M_9$ admitting an effective action on a rational curve have index greater than or equal to twelve. Therefore $n \geq 12$, contrary to the bound $n \leq 10$ obtained in Corollary \ref{atmostten}.
\end{proof}
\begin{proposition}
The quotient surface $Y$ is $G$-minimal and isomorphic to $\mathbb P_2$. Up to equivalence, there is a unique action of $G$ on $Y$ and the branch locus of $X \to Y$ is given by
$\{x_1^6 + x_2^6+ x_3^6-10( x_1^3x_2^3 + x_2^3x_3^3+ x_3^3x_1^3 ) =0\}$. In particular, $X$ is equivariantly isomorphic to Mukai's $M_9$-example.
\end{proposition}
\begin{proof}
We first check that $Y$ is $G$-minimal. Again, we proceed as in the proof of Theorem \ref{roughclassi} and Lemma \ref{subgroupsM9} above to see that the largest possible stabilizer group of a Mori fiber is $D_6 < G$. If $Y$ is not $G$-minimal, this implies that the total number of Mori fibers is $ \geq 12$, contradicting $m \leq 9$.
Note that $X \to Y$ is not branched along one or two elliptic curves as this would imply $e(Y) =12$ and contradict the fact that $Y$ is a Del Pezzo surface.
Let $D$ be the branch curve of $ X \to Y$ and consider the quotient $Q$ of $D$ by the normal subgroup $N = C_3^2$ in $G$. On $Q$ there is an action of $Q_8$ implying that $Q$ is not rational.
We show that $Q_8$ does not act on an elliptic curve $Q$. If this were the case, consider
the decomposition $Q_8 = (Q_8 \cap Q) \rtimes (Q_8 \cap L)$, where $(Q_8 \cap L)$ is a nontrivial cyclic group. Since every nontrivial subgroup of $Q_8$ contains the center $\{+1,-1\}$ of $Q_8$, the center is contained in $(Q_8 \cap L)$.
Let $q: Q_8 \to Q_8 /(Q_8 \cap Q) \cong Q_8 \cap L $ denote the quotient homomorphism. The commutator subgroup $Q_8' = \{+1,-1\}$ must be contained in the kernel of $q$. This contradiction yields that $Q_8$ does not act on an elliptic curve.
It follows that the genus of $Q$ is at least two and the genus of $D$ is at least ten. Adjunction on the Del Pezzo surface $Y$ now implies $g=10$ and $Y \cong \mathbb P_2$.
It is shown in Appendix \ref{M9 on P2} that,
up to natural equivalence, there is a unique action of $M_9$ on the projective plane. In suitably chosen coordinates the generators $a, b$ of $C_3^2$ are represented as
\begin{align*}
\tilde a=
\begin{pmatrix}
1 & 0 & 0 \\
0 & \xi & 0\\
0&0& \xi^2
\end{pmatrix}, \quad
\tilde b=
\begin{pmatrix}
0 & 1 & 0 \\
0 & 0 & 1\\
1&0&0
\end{pmatrix}
\end{align*}
and $I,J \in Q_8$ are represented as
\begin{align*}
\tilde I= \frac{1}{\xi -\xi^2}
\begin{pmatrix}
1 & 1 & 1 \\
1 & \xi & \xi^2\\
1 & \xi^2& \xi
\end{pmatrix}, \quad
\tilde J= \frac{1}{\xi -\xi^2}
\begin{pmatrix}
1 & \xi & \xi \\
\xi^2 & \xi & \xi^2\\
\xi^2 & \xi^2& \xi
\end{pmatrix}.
\end{align*}
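These matrices can be checked directly; the following numerical sketch (an addition for illustration, not part of the original argument) verifies that $\tilde a, \tilde b, \tilde I, \tilde J$ satisfy the quaternion relations of $Q_8$ up to the scalar ambiguity inherent in $\mathrm{PGL}_3(\mathbb C)$.

```python
# Numerical sanity check of the matrices representing I, J in M_9 < PGL_3(C):
# tilde_I has order four, tilde_I^2 = tilde_J^2 represents -1 in Q_8, and the
# commutator [tilde_I, tilde_J] is a scalar multiple of tilde_I^2.
import numpy as np

xi = np.exp(2j * np.pi / 3)                    # primitive third root of unity
s = 1.0 / (xi - xi**2)
I3 = np.eye(3)

tI = s * np.array([[1, 1, 1],
                   [1, xi, xi**2],
                   [1, xi**2, xi]])
tJ = s * np.array([[1, xi, xi],
                   [xi**2, xi, xi**2],
                   [xi**2, xi**2, xi]])

assert np.allclose(np.linalg.det(tI), 1) and np.allclose(np.linalg.det(tJ), 1)
assert np.allclose(np.linalg.matrix_power(tI, 4), I3)   # I has order four
assert np.allclose(tI @ tI, tJ @ tJ)                    # I^2 = J^2 (= -1 in Q_8)

# [I, J] = -1 in Q_8, so the matrix commutator is a scalar multiple of tI^2
comm = tI @ tJ @ np.linalg.inv(tI) @ np.linalg.inv(tJ)
scal = comm @ np.linalg.inv(tI @ tI)
assert np.allclose(scal, scal[0, 0] * I3)
```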
We study the action of $M_9$ on the space of sextic curves. By restricting our consideration to the subgroup $C_3^2$ first, we see that a polynomial defining an invariant curve must be a linear combination of the following polynomials:
\begin{align*}
f_1 &= x_1^6 + x_2^6+ x_3^6;\\
f_2 &= x_1^2x_2^2x_3^2;\\
f_3 &= x_1^3x_2^3 + x_1^3x_3^3+ x_2^3x_3^3;\\
f_4 &= x_1^4x_2x_3 + x_1x_2^4x_3+ x_1x_2x_3^4.
\end{align*}
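The invariance of $f_1, \dots, f_4$ under the generators $\tilde a$, $\tilde b$ of $C_3^2$ can be verified symbolically; the following sketch is an added illustration and not part of the original argument.

```python
# Symbolic check: f_1, ..., f_4 are invariant under the substitutions induced
# by the generators a (diagonal) and b (cyclic permutation) of C_3^2.
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
xi = sp.Rational(-1, 2) + sp.sqrt(3) * sp.I / 2   # primitive cube root of unity

f1 = x1**6 + x2**6 + x3**6
f2 = x1**2 * x2**2 * x3**2
f3 = x1**3 * x2**3 + x1**3 * x3**3 + x2**3 * x3**3
f4 = x1**4 * x2 * x3 + x1 * x2**4 * x3 + x1 * x2 * x3**4

for f in (f1, f2, f3, f4):
    # action of a: (x1, x2, x3) -> (x1, xi x2, xi^2 x3)
    fa = f.subs({x2: xi * x2, x3: xi**2 * x3}, simultaneous=True)
    # action of b: (x1, x2, x3) -> (x2, x3, x1)
    fb = f.subs({x1: x2, x2: x3, x3: x1}, simultaneous=True)
    assert sp.expand(fa - f) == 0
    assert sp.expand(fb - f) == 0
```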
Taking now the additional symmetries into account, we find three $M_9$-invariant sextic curves, namely
\[
\{f_1- 10f_3 =x_1^6 + x_2^6+ x_3^6-10( x_1^3x_2^3 + x_2^3x_3^3+ x_3^3x_1^3 ) =0\},
\]
which is the example found by Mukai, and additionally
\[
\{f_a=f_1 + (18-3a) f_2 +2f_3 + af_4 =0\},
\]
where $a$ is a solution of the quadratic equation $a^2-6a+36=0$, i.e. $a= -6\xi$ or $a= -6\xi^2$. The polynomial $f_a$ is invariant with respect to the action of $M_9$ for $a= -6\xi^2$ and semi-invariant if $a= -6\xi$.
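Both the roots of the quadratic equation and the projective invariance of Mukai's sextic $f_1 - 10 f_3$ under $\tilde I$ admit a quick numerical check; the sketch below is an added illustration and only verifies invariance up to a scalar factor.

```python
# Check that a^2 - 6a + 36 = 0 has the roots -6 xi and -6 xi^2, and that
# g = f_1 - 10 f_3 is mapped to a scalar multiple of itself by tilde_I.
import numpy as np

xi = np.exp(2j * np.pi / 3)
for r in np.roots([1, -6, 36]):
    assert min(abs(r + 6 * xi), abs(r + 6 * xi**2)) < 1e-9

def g(v):
    x1, x2, x3 = v
    return (x1**6 + x2**6 + x3**6
            - 10 * (x1**3 * x2**3 + x2**3 * x3**3 + x3**3 * x1**3))

tI = (1.0 / (xi - xi**2)) * np.array([[1, 1, 1],
                                      [1, xi, xi**2],
                                      [1, xi**2, xi]])

rng = np.random.default_rng(0)
pts = rng.standard_normal((5, 3)) + 1j * rng.standard_normal((5, 3))
ratios = [g(tI @ p) / g(p) for p in pts]
assert np.allclose(ratios, ratios[0])   # the ratio is a constant scalar
```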
We wish to show that $X$ is not the double cover of $\mathbb P_2$ branched along $\{f_a=0\}$. If this were the case,
consider the fixed point $p=[0:1:-1]$ of the automorphism $I$ and note that $f_a(p)=0$. So $\pi^{-1}(p)$ consists of one point $x \in X$ and we linearize the action of $\langle I \rangle \times \langle \sigma \rangle$ at $x$. In suitably chosen coordinates the action of the symplectic automorphism $I$ of order four is of the form $(z,w) \mapsto (iz, -iw)$. Since the action of $\sigma$ commutes with $I$, the $\sigma$-quotient of $X$ is locally given by
\[
(z,w) \mapsto (z^2, w) \quad \text{or}\quad (z,w) \mapsto (z, w^2).
\]
It follows that the action of $I$ on $Y$ is locally given by either
\[
\begin {pmatrix}
-1 & 0\\
0 & -i
\end {pmatrix}
\quad \text{or} \quad
\begin {pmatrix}
i & 0\\
0 & -1
\end {pmatrix}.
\]
In particular, the local linearization of $I$ at $p$ has determinant $\neq 1$.
By a direct computation using the explicit form of $\tilde I$ given above, in particular the facts that $\mathrm{det}(\tilde I) =1$ and $\tilde I v =v$ for $[v]=p$, we obtain a contradiction.
This completes the proof of the proposition.
\end{proof}
\begin{remark}\label{M9 symplectic}
In the proof of the proposition above we have observed that an element of $\mathrm{SL}_3(\mathbb C)$ does not necessarily lift to a symplectic transformation on the double cover of $\mathbb P_2$ branched along a sextic given by an invariant polynomial.
Mukai's $M_9$-example $X$ is a double cover of $\mathbb P_2$ branched along the sextic curve $\{x_1^6 + x_2^6+ x_3^6-10( x_1^3x_2^3 + x_2^3x_3^3+ x_3^3x_1^3 ) =0\}$ and for this particular example, the action of $M_9$ does lift to a group of symplectic transformations as claimed by Mukai.
To see this consider the set $\{a,b,I,J\}$ of generators of $M_9$. Since $a$ and $b$ are commutators in $M_9$, they can be lifted to symplectic transformations $\overline a, \overline b$ on $X$. For $I,J$ consider the linearization at the fixed point $[0:1:-1]$ and check that it has determinant one. Since $[0:1:-1]$ is \emph{not} contained in the branch set of the covering, its preimage in $X$ consists of two points $p_1,p_2$. We can lift $I$ ($J$, respectively) to a transformation of $X$ fixing both $p_1,p_2$ and a neighbourhood of $p_1$ is $I$-equivariantly isomorphic to a neighbourhood of $ [0:1:-1] \in \mathbb P_2$. In particular, the action of the lifted element $\overline I$ ($\overline J$, respectively) is symplectic. On $X$ there is the action of a degree two central extension $E$ of $M_9$,
\[
\{\mathrm{id}\} \to C_2 \to E \to M_9 \to \{\mathrm{id}\}.
\]
The elements $\overline a, \overline b, \overline I, \overline J$ generate a subgroup $\tilde M_9$ of $E_\mathrm{symp}$ mapping onto $M_9$. Since $E_\mathrm{symp} \neq E$, the order of $\tilde M_9$ is 72 and it follows that $\tilde M_9$ is isomorphic to $M_9$. In particular, $E$ splits as $E_\mathrm{symp} \times C_2$ with $E_\mathrm{symp}= M_9$.
\end{remark}
\section{The group $T_{48} = Q_8 \rtimes S_3$}\index{Mukai group! $T_{48}$}
We let $X$ be a K3-surface with an action of $T_{48} \times C_2$ where the action of $G = T_{48}$ is symplectic and the generator $\sigma$ of $C_2$ is antisymplectic and has fixed points. The action of $S_3$ on $Q_8$ is given as follows: The element $c$ of order three in $S_3$ acts on $Q_8$ by permuting $I,J,K$ and an element $d$ of order two acts by exchanging $I$ and $J$ and mapping $K$ to $-K$.
\begin{lemma}\label{not two}
A $G$-minimal model $Y_\mathrm{min}$ of $Y$ is either $\mathbb P_2$, a Hirzebruch surface $\Sigma_n$ with $n >2$, or $e(Y_\mathrm{min}) \geq 9$.
\end{lemma}
\begin{proof}
Let us first consider the case where $ Y_\mathrm{min}$ is a Del Pezzo surface and go through the list of possibilities.
\begin{itemize}
\item
Let $Y_\mathrm{min} \cong \mathbb P_1 \times \mathbb P_1$. Since $T_{48}$ acts on $Y_\mathrm{min}$, both canonical projections are equivariant with respect to the index two subgroup $G'= Q_8 \rtimes C_3$.
Since $Q_8$ has no effective action on $\mathbb P_1$, it follows that the subgroup $Z =\{+1,-1\} < Q_8$ acts trivially on the base. Since this holds with respect to both projections, the subgroup $Z$ acts trivially on $Y_\mathrm{min}$, a contradiction.
\item
Using the group structure of $T_{48}$ one checks that the only nontrivial normal subgroup $N$ of $T_{48}$ such that $N \cap Q_8 \neq Q_8$ is the center $Z= \{+1,-1\}$ of $T_{48}$. It follows that $T_{48}$ is neither a subgroup of $(\mathbb C^*)^2 \rtimes (S_3 \times C_2)$ nor a subgroup of any of the automorphism groups $C_2^4 \rtimes \Gamma$ for $\Gamma \in \{C_2, C_4, S_3, D_{10}\}$ of a Del Pezzo surface of degree four.
Furthermore, $T_{48} \nless S_5$. It follows that $d(Y_\mathrm{min}) \neq 4,5,6$.
\end{itemize}
So if $Y_\mathrm{min}$ is a Del Pezzo surface, then $Y_\mathrm{min} \cong \mathbb P_2$ or $e(Y_\mathrm{min}) \geq 9$.
Let us now turn to the case where $Y_\mathrm{min}$ is an equivariant conic bundle.
We first show that $Y_\mathrm{min}$ is not a conic bundle with singular fibers. We assume the contrary and let $p: Y_\mathrm{min} \to \mathbb P_1$ be an equivariant conic bundle with singular fibers.
The center $Z = \{+1,-1\}$ of $G= T_{48}$ acts trivially on the base and has two fixed points in the generic fiber.
Let $C_1$ and $C_2$ denote the two curves of $Z$-fixed points in $Y_\mathrm{min}$. By Lemma \ref{singular fibers of conic bundle} any singular fiber $F$ is the union of two (-1)-curves $F_1,F_2$ meeting transversally in one point. We consider the action of $Z$ on this union of curves. The group $Z$ does not act trivially on either component of $F$ since linearization at a smooth point of $F$ would yield a trivial action of $Z$ on $Y_\mathrm{min}$.
Consequently,
it has either one or three fixed points on $F$. The first is impossible since $C_1$ and $C_2$ intersect $F$ in two points. It follows that $Z$ stabilizes each curve $F_i$. We linearize the action of $Z$ at the point of intersection $F_1 \cap F_2$. The intersection is transversal and the action of $Z$ is by $-\mathrm{Id}$ on $T_{F_1} \oplus T_{F_2}$, contradicting the fact that $Z$ acts trivially on the base. Thus $Y_\mathrm{min}$ is not a conic bundle with singular fibers.
If $Y_\mathrm{min} \to \mathbb P_1$ is a Hirzebruch surface $\Sigma_n$, then the action of $T_{48}$ induces an effective action of $S_4$ on the base $\mathbb P_1$.
The action of $T_{48}$ on $\Sigma_n$ stabilizes two disjoint sections $E_\infty$ and $E_0$, the curves of $Z$-fixed points. This is only possible if $E_0^2 = -E_\infty^2 = n$. Removing the exceptional section $E_\infty$ from $\Sigma_n$, we obtain the hyperplane bundle $H^n$ of $\mathbb P_1$. Since $T_{48}$ stabilizes the section $E_0$, we choose this section to be the zero section and conclude that the action of $T_{48}$ on $H^n$ is by bundle automorphisms.
If $n=2$, then $H^n$ is the anticanonical line bundle of $\mathbb P_1$ and the action of $S_4$ on the base induces an action of $S_4$ on $H^2$ by bundle automorphisms. It follows that $T_{48}$ splits as $S_4 \times C_2$, a contradiction. Thus, if $Y_\mathrm{min}$ is a Hirzebruch surface $\Sigma_n$, then $n \neq 2$.
\end{proof}
\begin{lemma}\label{no rat T48}
There are no rational curves in $\mathrm{Fix}_X(\sigma)$.
\end{lemma}
\begin{proof}
We let $n$ denote the total number of rational curves in $\mathrm{Fix}_X(\sigma)$ and assume $n >0$. Recall $n \leq 10$, let $C$ be a rational curve in $B=\pi(\mathrm{Fix}_X(\sigma)) \subset Y$ and let $H = \mathrm{Stab}_G(C) < G$ be its stabilizer group. The action of $H$ on $C$ is effective, and the index of $H$ in $G$ is at most eight, since it divides $|G| = 48$ and is bounded by $n \leq 10$. Using the quotient homomorphism $T_{48} \to T_{48}/Q_8 = S_3$ one checks that $T_{48}$ does not contain $O_{24}=S_4$ or $T_{12}= A_4$ as a subgroup. It follows that $H$ is a cyclic or a dihedral group.
If $H\in \{C_6, C_8, D_8\}$, then $H$ and all conjugates of $H$ in $G$ contain the center $Z= \{+1,-1\}$ of $G$. It follows that $Z$ has two fixed points on each curve $gC$ for $g \in G$. Since there are six (or eight) distinct curves $gC$ in $Y$, it follows that $Z$ has at least 12 fixed points in $Y$ and in $X$. This contradicts the assumption that $Z < G$ acts symplectically on $X$ and therefore has eight fixed points in the K3-surface $X$.
It remains to study the cases $H = D_{12}$ and $H = D_6$ where $n = 8$ or $n=4$.
We note that a Hirzebruch surface has precisely one curve with negative self-intersection and only fibers have self-intersection zero. A Del Pezzo surface does not contain curves of self-intersection less than $-1$. The rational branch curves must therefore meet the union of Mori fibers in $Y$.
The total number of Mori fibers is bounded by $n+9$. We study the possible stabilizer subgroups $\mathrm{Stab}_G(E) < G$ of Mori fibers. A Mori fiber $E$ with self-intersection (-1) meets the branch locus $B$ in one or two points and its stabilizer is either cyclic or dihedral. If $\mathrm{Stab}_G(E) \in \{C_4, D_8\}$, then the points of intersection of $E$ and $B$ are fixed points of the center $Z$ of $G$ and we find too many $Z$-fixed points on $X$.
Assume $n=4$ and let $R_1, \dots, R_4$ be the rational curves in $B$. We denote by $\tilde R_i$ their images in $Y_\mathrm{min}$. The total number $m$ of Mori fibers is bounded by 12.
We go through the list of possible configurations:
\begin{itemize}
\item
If $m = 4$, there is no invariant configuration of Mori fibers such that the contraction maps the four rational branch curves to a configuration on the Hirzebruch or Del Pezzo surface $Y_\mathrm{min}$.
\item
If $m= 6$, then $\mathrm{Stab}_G(E) = C_8$ and the points of intersection of $E$ and $B$ are $Z$-fixed. Since $Z$ has at most eight fixed points on $B$, it follows that each curve $E$ meets $B$ only once. The images $\tilde R_i$ of the $R_i$ contradict our observations about curves in Del Pezzo and Hirzebruch surfaces.
\item
If $m=8$ and all Mori fibers have self-intersection $-1$, then each Mori fiber meets $\bigcup R_i$ in a $Z$-fixed point.
Since there are at most eight such points, it follows that each Mori fiber meets $\bigcup R_i$ only once and their contraction does not transform the curves $R_i$ sufficiently.
\item
If $m=8$ and only four Mori fibers have self-intersection $-1$, we consider the four Mori fibers of the second reduction step. Each of these meets a Mori fiber $E$ of the first step in precisely one point. By invariance, this would have to be a fixed point of the stabilizer $\mathrm{Stab}_G(E)= D_{12}$, a contradiction.
\item
If $m=12$, then either $e(Y_\mathrm{min}) = 3$ and there exists a branch curve $D_{g=2}$ of genus two or $e(Y_\mathrm{min}) = 4$ and $B = \bigcup R_i$. In the first case, $Y_\mathrm{min} \cong \mathbb P_2$ and twelve Mori fibers are not sufficient to transform $B = D_{g=2} \cup \bigcup R_i$ into a configuration of curves in the projective plane.
So $Y_\mathrm{min} = \Sigma_n$ for $n > 2$.
Since $Z$ has two fixed points in each fiber of $p: \Sigma_n \to \mathbb P_1$, the $Z$-action on $\Sigma_n$ has two disjoint curves of fixed points. As was remarked above, these curves are the exceptional section $E_\infty$ of self-intersection $-n$ and a section $E_0 \sim E_\infty + n F$ of self-intersection $n$. Here $F$ denotes a fiber of $p: \Sigma_n \to \mathbb P_1$. There is no automorphism of $\Sigma_n$ mapping $E_\infty$ to $E_0$.
Each rational branch curve $\tilde R_i$ has two $Z$-fixed points. These are exchanged by an element of $\mathrm{Stab}_G(R_i)$ and therefore both lie on either $E_\infty$ or $E_0$, i.e., $\tilde R_i$ cannot have nontrivial intersection with both $E_0$ and $E_\infty$. By invariance all curves $\tilde R_i$ either meet $E_0$ or $E_\infty$ and not both.
Using the fact that $\sum \tilde R_i$ is linearly equivalent to $-2K_{\Sigma_n} \sim 4 E_\infty +(2n +4)F$ we find that $\tilde R_i \cdot E_\infty = 0$ and $n=2$, a contradiction to Lemma \ref{not two}.
\end{itemize}
We have shown that
all possible configurations in the case $n = 4$ lead to a contradiction. We now turn to the case $n=8$ and let $R_1, \dots, R_8$ be the rational ramification curves. The total number of Mori fibers is bounded by 16. Note that by invariance, the orbit of a Mori fiber meets $\bigcup R_i$ in at least 16 points or not at all. In particular, Mori fibers meeting $R_i$ come in orbits of length $\geq 8$. As above, we go through the list of possible configurations.
\begin{itemize}
\item
If $m =16$, then the set of all Mori fibers consists of two orbits of length eight. If all 16 Mori fibers meet $B$, then each meets $B$ in one point and $R_i$ is mapped to a (-2)-curve in $Y_\mathrm{min}$, which is impossible on a Del Pezzo or Hirzebruch surface by the observations above. If only eight Mori fibers meet $B$, then each of the eight Mori fibers of the second reduction step meets one Mori fiber $E$ of the first reduction step in one point. This point would have to be a $\mathrm{Stab}_G(E)$-fixed point. But if $\mathrm{Stab}_G(E)$ is cyclic, its fixed points coincide with the points $E\cap B$.
\item
If $m=12$, then the set of all Mori fibers consists of a single $G$-orbit and each curve $R_i$ meets three distinct Mori fibers. Their contraction transforms $R_i$ into a (-1)-curve on $Y_\mathrm{min}$. It follows that $Y_\mathrm{min}$ contains at least eight (-1)-curves and is a Del Pezzo surface of degree $\leq 5$. We have seen above that $d( Y_\mathrm{min}) \neq 4,5$ and therefore $e(Y_\mathrm{min}) \geq 9$. With $m=12$ and $n=8$, this contradicts the Euler characteristic formula $24 = 2 e(Y_\mathrm{min}) +2m -2n +(2g-2)$.
\item
If $m=8$ there is no invariant configuration of Mori fibers such that the contraction maps the eight rational branch curves to a configuration on the Hirzebruch or Del Pezzo surface $Y_\mathrm{min}$.
\end{itemize}
This completes the proof of the lemma.
\end{proof}
Since there is an effective action of $T_{48}$ on $\mathrm{Fix}_X(\sigma)$, it is neither an elliptic curve nor the union of two elliptic curves. It follows that $ X \to Y$ is branched along a single $T_{48}$-invariant curve $B$ with $g(B) \geq 2$.
\begin{lemma}
The genus of $B$ is neither three nor four.
\end{lemma}
\begin{proof}
We consider the quotient $Q = B/Z$ of the curve $B$ by the center $Z$ of $G$ and apply the Euler characteristic formula, $e(B) = 2 e(Q) - |\mathrm{Fix}_B(Z)|$. On $Q$ there is an effective action of the group $G/Z = (C_2 \times C_2) \rtimes S_3 = S_4$. Using Lemma \ref{S4 not on g=1,2} we see that $e(Q) \in \{2, -4, -6, -8, \dots\}$.
If $g(B)=3$, then $e(B) = -4$ and the only possibility is $Q \cong \mathbb P_1$ and $|\mathrm{Fix}_B(Z)| =8$. In particular, all $Z$-fixed points on $X$ are contained in the curve $B$. Let $A < G$ be the group generated by $I \in Q_8 = \{ \pm 1, \pm I, \pm J, \pm K \}$. The four fixed points of $A$ in $X$ are contained in $\mathrm{Fix}_X(Z) = \mathrm{Fix}_B(Z)$ and the quotient group $A/Z \cong C_2$ has four fixed points in $Q$. This is a contradiction, since a nontrivial automorphism of $Q \cong \mathbb P_1$ has precisely two fixed points.
If $g(B)=4$, then $e(B) = -6$ and the only possibility is $Q \cong \mathbb P_1$ and $|\mathrm{Fix}_B(Z)| =10$. This contradicts the fact that $Z$ has at most eight fixed points in $B$ since it has precisely eight fixed points in $X$.
\end{proof}
In Lemma \ref{not two} we have reduced the classification to the cases $e(Y_\mathrm{min}) \in \{3,4, 9, 10, 11\}$. In the following, we will exclude the cases $e(Y_\mathrm{min}) \in \{4, 9, 10\}$ and describe the remaining cases more precisely. Recall that the maximal possible stabilizer subgroup of a Mori fiber is $D_{12}$, in particular, $m = 0$ or $ m \geq 4$.
\begin{lemma}
If $e(Y_\mathrm{min}) =3$, then $Y_\mathrm{min} = Y = \mathbb P_2$ and $X \to Y$ is branched along the curve $\{x_1x_2(x_1^4-x_2^4) + x_3^6=0\}$. In particular, $X$ is equivariantly isomorphic to Mukai's $T_{48}$-example.
\end{lemma}
\begin{proof}
Let $ M: Y \to \mathbb P_2$ denote a Mori reduction of $Y$ and let $B \subset Y$ be the branch curve of the covering $X \to Y$.
If $Y = Y_\mathrm{min}$, then $B= M(B)$ is a smooth sextic curve.
If $Y \neq Y_\mathrm{min}$, then the Euler characteristic formula with $m \in \{4,6,8\}$ shows that $g(B) \in \{2,4,6\}$. The case $m=6$, $g(B) =4$ has been excluded by the previous lemma.
If $m=4$, then the stabilizer group of each Mori fiber is $D_{12}$ and each Mori fiber meets $B$ in two points. Furthermore, since in this case $g(B) =6$, the self-intersection of $\mathrm{Fix}_X(\sigma)$ in $X$ equals ten and therefore $B^2 = 20$. The image $M(B)$ of $B$ in $Y_\mathrm{min}$ has self-intersection $20 + 4 \cdot 4 = 36$ and is therefore an irreducible singular sextic.
If $m=8$, then $g(B) =2$ and $B^2 =4$. Since the self-intersection number $M(B)^2$ must be a square, one checks that all possible invariant configurations of Mori fibers yield $M(B)^2 =36$ and involve Mori fibers meeting $B$ in two points. In particular, $M(B)$ is a singular sextic.
We study the action of $T_{48}$ on the projective plane. As a first step, we may choose coordinates on $\mathbb P_2$ such that the automorphism $-1 \in Q_8 < T_{48}$ is represented as
\[
\widetilde {-1}=
\begin{pmatrix}
-1 & 0 & 0 \\
0 & -1 & 0\\
0&0& 1
\end{pmatrix}.
\]
We denote by $V$ the $(-1)$-eigenspace of this operator.
For each of the elements $I, J, K$ there is a unique choice $\widetilde I, \widetilde J, \widetilde K$ in $\mathrm{SL}_3(\mathbb C)$ such that $\widetilde I ^2 = \widetilde J^2 = \widetilde K^2 = \widetilde {-1}$. One checks $\widetilde I \widetilde J\widetilde K = \widetilde {-1}$.
Therefore $\widetilde I, \widetilde J, \widetilde K$ generate a subgroup of $\mathrm{SL}_3(\mathbb C)$ isomorphic to $Q_8$.
By construction $\widetilde I, \widetilde J, \widetilde K$ stabilize the vector space $V$. Up to isomorphisms, there is a unique faithful 2-dimensional representation of $Q_8$ and it follows that $I,J,K$ are represented as
\begin{align*}
\widetilde I =
\begin{pmatrix}
-i & 0 & 0 \\
0 & i & 0\\
0&0& 1
\end{pmatrix}, \quad
\widetilde J=
\begin{pmatrix}
0 & -1 & 0 \\
1 & 0 & 0\\
0 & 0 & 1
\end{pmatrix}, \quad
\widetilde K=
\begin{pmatrix}
0 & i & 0 \\
i & 0 & 0\\
0 & 0 & 1
\end{pmatrix}.
\end{align*}
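As an added consistency check (not part of the original argument), the quaternion relations for $\widetilde I, \widetilde J, \widetilde K$ can be verified directly:

```python
# The matrices above satisfy I^2 = J^2 = K^2 = IJK = -1 and lie in SL_3(C).
import numpy as np

m1 = np.diag([-1, -1, 1]).astype(complex)   # the matrix representing -1
I = np.diag([-1j, 1j, 1]).astype(complex)
J = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=complex)
K = np.array([[0, 1j, 0], [1j, 0, 0], [0, 0, 1]], dtype=complex)

for M in (I, J, K):
    assert np.allclose(M @ M, m1)            # each element squares to -1
    assert np.allclose(np.linalg.det(M), 1)  # the unique choice in SL_3(C)
assert np.allclose(I @ J @ K, m1)            # IJK = -1
assert np.allclose(I @ J, K)                 # the quaternion relation IJ = K
```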
We recall that the action of $S_3$ on $Q_8$ is given as follows: The element $c$ of order three in $S_3$ acts on $Q_8$ by permuting $I,J,K$ and an element $d$ of order two acts by exchanging $I$ and $J$ and mapping $K$ to $-K$.
With $\mu = \sqrt{\frac{i}{2}}$ and $ \nu = \frac{i}{\sqrt{2}}$ it follows that the elements $c$ and $d$ are represented as
\begin{align*}
\widetilde c =
\begin{pmatrix}
-i\mu & i\mu& 0 \\
\mu & \mu & 0\\
0&0& 1
\end{pmatrix}, \quad
\widetilde d=
\begin{pmatrix}
-i\nu & -\nu & 0 \\
\nu & i\nu & 0\\
0 & 0 & -1
\end{pmatrix}.
\end{align*}
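A numerical check (added for illustration) confirms that conjugation by $\widetilde c$ and $\widetilde d$ realizes the described action of $S_3$ on $Q_8$:

```python
# Conjugation by tilde_c cyclically permutes I, J, K; conjugation by tilde_d
# exchanges I and J and maps K to -K. Here mu = sqrt(i/2) and nu = i/sqrt(2).
import cmath
import numpy as np

mu = cmath.sqrt(0.5j)                  # principal branch, mu = (1 + i)/2
nu = 1j / cmath.sqrt(2)

m1 = np.diag([-1, -1, 1]).astype(complex)
I = np.diag([-1j, 1j, 1]).astype(complex)
J = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=complex)
K = np.array([[0, 1j, 0], [1j, 0, 0], [0, 0, 1]], dtype=complex)
c = np.array([[-1j * mu, 1j * mu, 0], [mu, mu, 0], [0, 0, 1]])
d = np.array([[-1j * nu, -nu, 0], [nu, 1j * nu, 0], [0, 0, -1]])

ci, di = np.linalg.inv(c), np.linalg.inv(d)
assert np.allclose(c @ I @ ci, J) and np.allclose(c @ J @ ci, K)
assert np.allclose(c @ K @ ci, I)              # c: I -> J -> K -> I
assert np.allclose(d @ I @ di, J) and np.allclose(d @ J @ di, I)
assert np.allclose(d @ K @ di, m1 @ K)         # d maps K to -K
assert np.allclose(d @ d, np.eye(3))           # d is an involution
```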
In particular, there is a unique action of $T_{48}$ on $\mathbb P_2$. In the following, we denote by $[x_1:x_2:x_3]$ homogeneous coordinates such that the action of $T_{48}$ is as above. Using the explicit form of the $T_{48}$-action and the fact that the commutator subgroup of $T_{48}$ is $Q_8 \rtimes C_3$ one can check that any invariant curve of degree six is of the form
\[
C_\lambda = \{ x_1x_2(x_1^4-x_2^4) + \lambda x_3^6 =0\}.
\]
In order to avoid this calculation, one can also argue that the polynomial $x_1x_2(x_1^4-x_2^4)$ is the lowest order invariant of the octahedral group $S_4 \cong T_{48} / Z$.
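A numerical spot check (an added illustration) confirms that the sextic polynomial defining $C_\lambda$ is invariant under all five generators of the $T_{48}$-action written out above:

```python
# The polynomial x1 x2 (x1^4 - x2^4) + x3^6 is invariant under the
# generators tilde_I, tilde_J, tilde_K, tilde_c, tilde_d of T_48.
import cmath
import numpy as np

mu = cmath.sqrt(0.5j)
nu = 1j / cmath.sqrt(2)
gens = [
    np.diag([-1j, 1j, 1]).astype(complex),                         # I
    np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=complex),   # J
    np.array([[0, 1j, 0], [1j, 0, 0], [0, 0, 1]], dtype=complex),  # K
    np.array([[-1j * mu, 1j * mu, 0], [mu, mu, 0], [0, 0, 1]]),    # c
    np.array([[-1j * nu, -nu, 0], [nu, 1j * nu, 0], [0, 0, -1]]),  # d
]

def F(v):
    x1, x2, x3 = v
    return x1 * x2 * (x1**4 - x2**4) + x3**6

rng = np.random.default_rng(1)
pts = rng.standard_normal((5, 3)) + 1j * rng.standard_normal((5, 3))
for M in gens:
    for p in pts:
        assert abs(F(M @ p) - F(p)) < 1e-8 * max(1.0, abs(F(p)))
```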
The curve $C_\lambda$ is smooth and it follows that $Y = Y_\mathrm{min}$.
We may adjust the coordinates equivariantly such that $\lambda =1$ and find that our surface $X$ is precisely Mukai's $T_{48}$-example.
\end{proof}
\begin{remark}\label{T48 symplectic}
As claimed by Mukai, the action of $T_{48}$ on $\mathbb P_2$ does indeed lift to a symplectic action of $T_{48}$ on the double cover of $\mathbb P_2$ branched along the invariant curve $\{ x_1x_2(x_1^4-x_2^4) + x_3^6 =0\}$. The elements of the commutator subgroup can be lifted to symplectic transformations on the double cover $X$.
The remaining generator $d$
is an involution fixing the point $[0:0:1]$. Any involution $\tau$ with a fixed point $p$ outside the branch locus can be lifted to a symplectic involution on the double cover $X$ as follows:
The linearized action of $\tau$ at $p$ has determinant $\pm 1$. We consider the lifting $\tilde \tau$ of $\tau$ fixing both points in the preimage of $p$. Its linearization coincides with the linearization on the base and therefore also has determinant $\pm 1$.
In particular, $\tilde \tau$ is an involution.
It follows that either $\tilde \tau$ or the second choice of a lifting $\sigma \tilde \tau$ acts symplectically on $X$.
The group generated by all lifted automorphisms is either isomorphic to $T_{48}$ or to the full central extension $E$
\[
\{\mathrm{id}\} \to C_2 \to E \to T_{48} \to \{\mathrm{id}\}
\]
acting on the double cover.
Since $E_\mathrm{symp} \neq E$, the latter is impossible and it follows that $E$ splits as $E_\mathrm{symp} \times C_2$ with $E_\mathrm{symp} = T_{48}$.
\end{remark}
Finally, we return to the remaining possibilities $e(Y_\mathrm{min}) \in \{4, 9, 10, 11\}$.
\begin{lemma}
$e(Y_\mathrm{min}) \not\in \{4,9,10\}$.
\end{lemma}
\begin{proof}
Recalling that the genus of the branch curve $B$ is neither three nor four and that $m$ is either zero or $\geq 4$, we may exclude $e(Y_\mathrm{min}) =9,10$ using the Euler characteristic formula $12 = e(Y_\mathrm{min}) +m +g-1$. It remains to consider the case $Y_\mathrm{min} = \Sigma _n$ with $n >2$ and we claim that this is impossible.
Let $M: Y \to Y_\mathrm{min} = \Sigma_n$ denote a (possibly trivial) Mori reduction of $Y$. The image $M(B)$ of $B$ in $\Sigma _n$ is linearly equivalent to $-2K_{\Sigma _n}$. Now $M(B) \cdot E_\infty = 2(2-n) < 0$ and it follows that $M(B)$ contains the rational curve $E_\infty$. This is a contradiction since $B$ does not contain any rational curves by Lemma \ref{no rat T48}.
\end{proof}
In the last remaining case, i.e., $e(Y_\mathrm{min})=11$, the quotient surface $Y$ is a $G$-minimal Del Pezzo surface of degree 1. Consulting \cite{dolgachev}, Table 10.5, we find that $Y$ is a hypersurface in weighted projective space $\mathbb P(1,1,2,3)$ defined by the degree six equation
\[
x_0x_1(x_0^4-x_1^4)+ x_2^3+x_3^2=0.
\]
This follows from the invariant theory of the group $S_4 \cong T_{48}/Z$ and the fact that $Y$ is a double cover of a quadric cone $Q$ in $\mathbb P_3$ branched along the intersection of $Q$ with a cubic hypersurface (cf. Theorem \ref{antican models of del pezzo}).
The linear system of the anticanonical divisor $-K_Y$ has precisely one base point $p$. In coordinates $[x_0:x_1:x_2:x_3]$ this point is given as $[0:0:1:i]$. It is fixed by the action of $T_{48}$. The linearization of $T_{48}$ at $p$ is given by the unique faithful 2-dimensional representation of $T_{48}$. This representation has implicitly been discussed above as a subrepresentation $V$ of the three-dimensional representation of $T_{48}$. It follows that there is a unique action of $T_{48}$ on $Y$. The branch curve $B$ is linearly equivalent to $-2K_Y$, i.e., $B = \{s=0\}$ for a section $s \in \Gamma(Y, \mathcal O(-2K_Y))$ which is either invariant or semi-invariant.
By an adjunction formula for hypersurfaces in weighted projective space, $\mathcal O(-2K_Y) = \mathcal O _Y(2)$. The four-dimensional space of sections $\Gamma(Y, \mathcal O(-2K_Y))$ is generated by the weighted homogeneous polynomials $x_0^2, x_1^2, x_0x_1, x_2$.
We consider the map $ Y \to \mathbb P (\Gamma(Y, \mathcal O(-2K_Y))^*)$ associated to $|-2K_Y|$. Since this map is equivariant with respect to $\mathrm{Aut}(Y)$, the fixed point $p$ is mapped to a fixed point in $\mathbb P (\Gamma(Y, \mathcal O(-2K_Y))^*)$. It follows that the section corresponding to the homogeneous polynomial $x_2$ is invariant or semi-invariant with respect to $T_{48}$. It is the only section of $\mathcal O(-2K_Y)$ with this property since the representation of $T_{48}$ on the span of $x_0^2, x_1^2, x_0x_1$ is irreducible.
The curve $B \subset Y$ defined by $s=0$ is connected and has arithmetic genus $2$. Since $T_{48}$ acts effectively on $B$ and does not act on $\mathbb P_1$ or a torus, it follows that $B$ is nonsingular.
It remains to check that the action of $T_{48}$ on $Y$ lifts to a group of symplectic transformations on the double cover $X$ branched along $B$. First note that $B$ does not contain the base point $p$.
For $I,J,K, c \in T_{48}$ we choose liftings $ \overline I, \overline J, \overline K , \overline c \in \mathrm{Aut}(X)$ fixing both points in $\pi^{-1}(p)= \{p_1,p_2\}$. The linearization of $ \overline I, \overline J, \overline K , \overline c$ at $p_1$ is the same as the linearization at $p$ and in particular has determinant one.
By the general considerations in Remark \ref{T48 symplectic} the involution $d$ can be lifted to a symplectic involution on $X$.
The symplectic liftings of $I,J,K,c,d$ generate a subgroup $\tilde G$ of $\mathrm{Aut}(X)$ which is isomorphic to either $T_{48}$ or to the central degree two extension of $T_{48}$ acting on $X$.
In analogy to Remarks \ref{M9 symplectic} and \ref{T48 symplectic} we conclude that $\tilde G \cong T_{48}$ and the action of $T_{48}$ on $Y$ induces a symplectic action of $T_{48}$ on the double cover $X$.
This completes the classification of K3-surfaces with $T_{48} \times C_2$-symmetry. We have shown:
\begin{theorem}
Let $X$ be a K3-surface with a symplectic action of the group $T_{48}$ centralized by an antisymplectic involution $\sigma$ with $\mathrm{Fix}_X(\sigma) \neq \emptyset$. Then $X$ is equivariantly isomorphic either to Mukai's $T_{48}$-example or to the double cover of
\[
\{x_0x_1(x_0^4-x_1^4)+ x_2^3+x_3^2=0\} \subset \mathbb P(1,1,2,3)
\]
branched along $\{x_2=0\}$.
\end{theorem}
\begin{remark}
The automorphism group of the Del Pezzo surface $Y = \{x_0x_1(x_0^4-x_1^4)+ x_2^3+x_3^2=0\} \subset \mathbb P(1,1,2,3)$ is the trivial central extension $C_3 \times T_{48}$. By construction, the curve $B=\{s=0\}$ is invariant with respect to the full automorphism group. The double cover $X$ of $Y$ branched along $B$ carries the action of a finite group $\tilde G$ of order $2 \cdot 3 \cdot 48 = 288$ containing $T_{48} < \tilde G_\mathrm{symp}$. Since $T_{48}$ is a maximal group of symplectic transformations, we find $T_{48} = \tilde G_\mathrm{symp}$ and therefore
\[
\{\mathrm{id}\} \to T_{48} \to \tilde G \to C_6 \to \{\mathrm{id}\}.
\]
In analogy to the proof of Claim 2.1 in \cite{OZ168}, one can check that 288 is the maximal order of a finite group $H$ acting on a K3-surface with $T_{48} < H_\mathrm{symp}$. It follows that $\tilde G$ is a maximal finite subgroup of $\mathrm{Aut}(X)$. For an arbitrary finite group $H$ acting on a K3-surface with
$\{\mathrm{id}\} \to T_{48} \to H \to C_6 \to \{\mathrm{id}\}$,
there need not, however, exist an involution in $H$ centralizing $T_{48}$.
\end{remark}
\chapter{K3-surfaces with an antisymplectic involution centralizing $C_3 \ltimes C_7$}\label{chapterC3C7}
In this chapter it is illustrated that a classification of K3-surfaces with antisymplectic involution $\sigma$ can be carried out even if the centralizer $G$ of $\sigma$ inside the group of symplectic transformations is relatively small, i.e., well below the bound 96 obtained in Theorem \ref{roughclassi}, and not among the maximal groups of symplectic transformations. We consider the group $G = C_3 \ltimes C_7$, which is a subgroup of $L_2(7)$. The principles presented in Chapter \ref{chapterlarge} can be transferred to this group $G$ and yield a description of K3-surfaces with $G \times \langle \sigma \rangle$-symmetry. Using this, we deduce the classification of K3-surfaces with an action of $L_2(7) \times C_2$ announced in Section \ref{mukaiL2(7)}. The results presented in this chapter have appeared in \cite{FKH1}.
To begin with, we present a family of K3-surfaces with $G \times \langle \sigma \rangle$-symmetry.
\begin{example}
We consider the action of $G$ on $\mathbb P_2$ given by one of its three-dimensional representations.
After a suitable change of coordinates, the action of the commutator subgroup $G'=C_7 < G$ is given by
\[
[z_0:z_1:z_2] \mapsto [ \lambda z_0: \lambda^2 z_1: \lambda^4 z_2]
\]
for $\lambda = \mathrm{exp}(\frac{2\pi i}{7})$ and $C_3$ is generated by the permutation
\[
[z_0:z_1:z_2] \mapsto [z_2:z_0:z_1].
\]
The vector space of $G$-invariant homogeneous polynomials of degree six is the span $V$ of $P_1 = z_0^2 z_1^2 z_2^2$ and $P_2 = z_0^5 z_1 + z_2 ^5 z_0 + z_1^5 z_2$.
The family $\mathbb{P}(V)$ of curves defined by polynomials in $V$
contains exactly four singular curves, namely the curve defined by
$z_0^2z_1^2z_2^2$ and those defined by
$3z_0^2z_1^2z_2^2 -\zeta ^k(z_0^5z_1+z_2^5z_0+z_1^5z_2)$, where
$\zeta $ is a nontrivial third root of unity, $k=1,2,3$.
We let $\Sigma = \mathbb P(V) \backslash \{z_0^2z_1^2z_2^2=0\}$.
The double cover of $\mathbb P_2$ branched along a curve $C \in \Sigma$ is a K3-surface (a singular K3-surface if $C$ is singular) with an action of $G \times C_2$ where $C_2$ acts nonsymplectically. It follows that $\Sigma$ parametrizes a family of K3-surfaces with $G \times C_2$-symmetry.
\end{example}
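The invariance of $P_1$ and $P_2$ under the generators of $G$ written above can be spot-checked numerically; the following sketch is an added illustration.

```python
# P1 = z0^2 z1^2 z2^2 and P2 = z0^5 z1 + z2^5 z0 + z1^5 z2 are invariant
# under the generators of G = C_3 |x C_7 acting as above.
import cmath
import numpy as np

lam = cmath.exp(2j * cmath.pi / 7)   # primitive seventh root of unity

def P1(z0, z1, z2):
    return z0**2 * z1**2 * z2**2

def P2(z0, z1, z2):
    return z0**5 * z1 + z2**5 * z0 + z1**5 * z2

rng = np.random.default_rng(2)
for _ in range(5):
    z0, z1, z2 = rng.standard_normal(3) + 1j * rng.standard_normal(3)
    for P in (P1, P2):
        # generator of C_7: [z0 : z1 : z2] -> [lam z0 : lam^2 z1 : lam^4 z2]
        assert abs(P(lam * z0, lam**2 * z1, lam**4 * z2) - P(z0, z1, z2)) < 1e-8
        # generator of C_3: [z0 : z1 : z2] -> [z2 : z0 : z1]
        assert abs(P(z2, z0, z1) - P(z0, z1, z2)) < 1e-8
```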
\begin{remark}
Let us consider the cyclic group $\Gamma$ of order
three generated by the transformation
$[z_0:z_1:z_2]\mapsto [z_0:\zeta z_1: \zeta ^2z_2]$ and its induced action on the space $\Sigma$. One finds that the three irreducible singular $G$-invariant curves form a $\Gamma$-orbit. Furthermore, if two curves $C_1, C_2 \in \Sigma$ are equivalent with respect to the action of $\Gamma$, then the corresponding K3-surfaces are equivariantly isomorphic (see Section \ref{EquiEqui} for a detailed discussion).
\end{remark}
\begin{remark}
The singular curve $C_\text{sing} \subset \mathbb P_2$ defined by
$3z_0^2z_1^2z_2^2 -(z_0^5z_1+z_2^5z_0+z_1^5z_2)$ has exactly seven
singular points $p_1, \dots, p_7$ forming a $G$-orbit. Since they are in
general position (cf. Proposition \ref{blowdownseven}), the blow up of $\mathbb{P}_2$ in these points defines a Del Pezzo surface $Y_\text{Klein}$ of degree two with an action of $G$. It is seen to be the double cover of $\mathbb{P}_2$ branched along Klein's quartic curve\index{Klein's curve!}
$$
C_\text{Klein}:=\{z_0z_1^3+z_1z_2^3+z_2z_0^3=0\}.
$$
The proper transform $B$ of $C_\text{sing}$ in $Y_\text{Klein}$ is a
smooth $G$-invariant curve. It is a normalization of $C_\text{sing}$ and has genus three by the genus formula. The curve $B$ coincides with the
preimage of $C_{\text{Klein}}$ in $Y_\text{Klein}$.
The minimal resolution $\tilde X_\text{sing}$ of the singular surface $X_\text{sing}$ defined as the double cover of $\mathbb P_2$ branched along $C_\text{sing}$ is a K3-surface with an action of $G$.
By construction, it is the double cover of
$Y_\text{Klein}$ branched along $B$. In particular, $\tilde X
_\text{sing}$ is the degree four cyclic cover of $\mathbb P_2$ branched along $C_\text{Klein}$ and known as the Klein-Mukai-surface $X_{\text{KM}}$ (cf. Example \ref{L2(7)example}).
\end{remark}
\begin{notation}
In the following, the notion of ``$G \times C_2$-symmetry'' abbreviates a symplectic action of $G$ centralized by an antisymplectic action of $C_2$.
\end{notation}
In this chapter we will show that the space $\mathcal M = \Sigma / \Gamma$ parametrizes K3-surfaces with $G \times C_2$-symmetry up to equivariant equivalence. More precisely, we prove:
\begin{theorem}\label{mainthmc3c7}
The K3-surfaces with a symplectic action of $G = C_3 \ltimes C_7$ centralized by an antisymplectic involution $\sigma$ are parametrized by the space $\mathcal M = \Sigma / \Gamma$ of equivalence classes of sextic branch curves in $\mathbb P_2$. The Klein-Mukai-surface occurs as the minimal desingularization of the double cover branched along the unique singular curve in $\mathcal M$.
\end{theorem}
Inside the family $\mathcal M$ one finds two K3-surfaces with a symplectic action of the larger group $L_2(7)$ centralized by an antisymplectic involution.
\begin{theorem}\label{L2(7) times invol}
There are exactly two K3-surfaces with an action of the group $L_2(7)$ centralized by an antisymplectic involution. These are the Klein-Mukai-surface $X_\mathrm{KM}$ and the double cover of $\mathbb P_2$ branched along the curve $\mathrm{Hess}(C_\text{Klein})=\{z_0^5z_1+z_2^5z_0+z_1^5z_2-5z_0^2 z_1^2 z_2^2=0\}$.
\end{theorem}
\section{Branch curves and Mori fibers}
Let $X$ be a K3-surface with a symplectic action of $G=C_3 \ltimes C_7$ centralized by the antisymplectic involution $\sigma$. We consider the quotient $\pi: X \to X/\sigma =Y$. Since the action of $G'$ has precisely three fixed points in $X$ and $\sigma$ acts on this point set, we know that $\mathrm{Fix}_X(\sigma)$ is not empty. It follows that $Y$ is a smooth rational surface with an effective action of the group $G$, to which we apply the equivariant minimal model program. The following lemma excludes the possibility that a $G$-minimal model is a conic bundle. The argument resembles that in the proof of Lemma \ref{conicbundle}.
\begin{lemma}\label{C3C7 conic bundle}
A $G$-minimal model of $Y$ is a Del Pezzo surface.
\end{lemma}
\begin{proof}
Assume the contrary and let $Y_\mathrm{min} \to \mathbb P_1$ be a $G$-equivariant conic bundle. Since $G$ has no effective action on the base, there must be a nontrivial normal subgroup acting trivially on the base. This subgroup must be $G'$. The action of $G'$ on the generic fiber has two fixed points and gives rise to a positive-dimensional $G'$-fixed point set in $Y_\mathrm{min}$ and $Y$. Since the action of $G'$ on $Y$ is induced by a symplectic action of $G'$ on $X$, this is a contradiction.
\end{proof}
\begin{remark}\label{P1xP1C3C7}
Since $G$ has no subgroup of index two,
the above proof also shows that $Y_\mathrm{min} \not\cong \mathbb P_1 \times \mathbb P_1$.
\end{remark}
In analogy to the procedure of the previous chapter we exclude rational and elliptic ramification curves and show that $\pi$ is branched along a single curve of genus greater than or equal to three.
\begin{proposition}
The set $\mathrm{Fix}_X(\sigma)$ consists of a single curve $C$ and $g(C) \geq 3$.
\end{proposition}
\begin{proof}
We let $\{x_1,x_2,x_3\} = \mathrm{Fix}_X(G')$. Since $G$ has no faithful two-dimensional representation, it has no fixed points in $X$ and therefore acts transitively on $\{x_1,x_2,x_3\}$. It follows that the central involution $\sigma$, which fixes at least one point $x_i$, fixes all three points by invariance. Now $\{x_1,x_2,x_3\} \subset \mathrm{Fix}_X(\sigma)$ implies that $G'$ has precisely three fixed points in $Y$. Let $C_i$ denote the connected component of $\mathrm{Fix}_X(\sigma)$ containing $x_i$. Since $G$ acts on the set $\{C_1,C_2,C_3\}$, it follows that either $C_1=C_2=C_3$ or no two of them coincide.
In the latter case, it follows from Theorem \ref{FixSigma} that at least two of the curves, say $C_1,C_2$, are rational. The action of $G'$ on a rational curve $C_i$ has two fixed points. We therefore find at least five $G'$-fixed points in $X$, contradicting $|\mathrm{Fix}_X(G')|=3$.
It follows that all three points $x_1,x_2,x_3$ lie on one $G$-invariant connected component $C$ of $\mathrm{Fix}_X(\sigma)$. The action of $G$ on $C$ is effective and it follows that $C$ is not rational.
If $g(C)=1$, then an effective action of $G$ on $C$ would force $G'$ to act by translations on $C$, in particular freely, a contradiction.
If $g(C)=2$, then $C$ is hyperelliptic. The quotient $C \to \mathbb P_1$ by the hyperelliptic involution is $\mathrm{Aut}(C)$-equivariant and would induce an effective action of $G$ on $\mathbb P_1$, a contradiction.
It follows that $g(C) \geq 3$ and it remains to check that there are no rational ramification curves.
We let $n$ denote the total number of rational curves in $\mathrm{Fix}_X(\sigma)$. Since $G'$ acts freely on the complement of $C$ in $X$, it follows that the number $n$ must be a multiple of seven. Combining this observation with the bound $n \leq 9$ from Corollary \ref{atmostten} we conclude that $n$ is either 0 or 7.
We suppose $n =7$ and let $m$ denote the total number of Mori contractions of a reduction $Y \to Y_\mathrm{min}$. The Euler characteristic formula
\[
13 - g(C) = e(Y_\mathrm{min}) +m -n
\]
with $n=7$, $g(C) \geq 3$ and $e(Y_\mathrm{min})\geq 3$ implies $m \leq 14$.
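For the reader's convenience, this formula follows from the standard Euler characteristic count for the double cover $\pi: X \to Y$ branched along $B = C \cup R_1 \cup \dots \cup R_n$, together with $e(Y) = e(Y_\mathrm{min}) + m$, since each Mori contraction blows down a single curve to a point:
\[
24 = e(X) = 2e(Y) - e(B) = 2\bigl(e(Y_\mathrm{min}) + m\bigr) - \bigl((2-2g(C)) + 2n\bigr).
\]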
Let us first check that no Mori fiber $E$ coincides with a rational branch curve $B$. If this were the case, then all seven rational branch curves would coincide with Mori fibers. Rational branch curves have self-intersection -4 by Corollary \ref{minusfour}. Before they may be contracted, they need to be transformed into (-1)-curves by earlier reduction steps. The remaining seven or fewer Mori contractions are not sufficient to achieve this transformation. It follows that each rational branch curve is mapped to a curve in $Y_\mathrm{min}$ and not to a point.
We first consider the case $m=14$. The Euler characteristic formula implies $Y_\mathrm{min} \cong \mathbb P_2$ and $g(C)=3$. Using our study of Mori fibers and branch curves in Section \ref{branch curves mori fibers}, in particular Remark \ref{self-int of Mori-fibers} and Proposition \ref{at most two}, we see that no configuration of 14 Mori fibers allows the images in $Y_\mathrm{min} \cong \mathbb P_2$ of two rational branch curves to have nonempty intersection; since any two curves in $\mathbb P_2$ intersect, this is impossible. It follows that $m \leq 13$.
Let $R_1, \dots, R_7 \subset Y$ denote the rational branch curves. Each curve $R_i$ has self-intersection -4 and therefore has nontrivial intersection with at least one Mori fiber. Let $E_1$ be a Mori fiber meeting $R_1$, let $H \cong C_3$ be the stabilizer of $R_1$ in $G$ and let $I$ be the stabilizer of $E_1$ in $G$. Since $m \leq 13$, the group $I$ is nontrivial. If $I$ does not stabilize $R_1$, then $E_1$ meets the branch locus in at least three points. This is contrary to Proposition \ref{at most two}. It follows that $I=H$. If $E_1$ meets any other rational branch curve $R_2$, then it meets all curves in the $H$-orbit through $R_2$. Since $H$ acts freely on the set $\{R_2, \dots, R_7\}$, it follows that $E_1$ meets three more branch curves. This again contradicts Proposition \ref{at most two}.
Since $m \leq 13$ it follows that each rational branch curve meets exactly one Mori fiber. Their intersection can be one of the following three types:
\begin{enumerate}
\item
$E_i \cap R_i = \{p_1,p_2\}$ or
\item
$E_i \cap R_i = \{p\}$ and $ (E_i, R_i)_p =2$ or
\item
$E_i \cap R_i = \{p\}$ and $ (E_i, R_i)_p =1$.
\end{enumerate}
In all three cases the contraction of $E_i$ alone does not transform the curve $R_i$ into a curve on a Del Pezzo surface. So further reduction steps are needed and require the existence of Mori fibers $F_i$ disjoint from $\bigcup R_i$. Each $F_i$ is a (-2)-curve meeting $\bigcup E_i$ transversally in one point and the total number of Mori fibers exceeds our bound 13.
This contradiction yields $n=0$ and the proof of the proposition is completed.
\end{proof}
\section{Classification of the quotient surface $Y$}
We now turn to a classification of the quotient surface $Y$.
\begin {proposition}
The surface $Y$ is either $G$-minimal or the blow-up of $\mathbb P_2$ in the seven singularities of an irreducible $G$-invariant sextic.
\end {proposition}
\begin {proof}
Since $n=0$, the Euler characteristic formula yields
$m \leq 7$. The fact that
$G$ acts on the set of Mori fibers implies that
$m \in \{ 0,3,6, 7\}$. If $m \in \{3, 6\}$,
then $G'$ stabilizes every Mori fiber, and
consequently it has more than three fixed points, a contradiction. Thus we only need to
consider the case $m =7$.
In this case the set of Mori fibers is a $G$-orbit and it follows that every
Mori fiber has self-inter\-section -1 and therefore has
nonempty intersection with $\pi(C)$ by Remark \ref{self-int of Mori-fibers}.
As before, the Euler characteristic formula
implies that $g(C)=3$ and $Y_\mathrm{min}=\mathbb P_2$ and
adjunction in $X$ shows that $(\pi(C))^2=8$ in $Y$. The fact that
$\pi(C)$ has nonempty intersection with seven different Mori fibers implies
that its image $D$ in $Y_\mathrm{min}$ has self-intersection either $15 = 8 + 7$ or $36 = 8 + 4 \cdot 7$. Since the first is impossible ($D$ is a plane curve, so its self-intersection must be the square of its degree), it follows that $E \cdot \pi(C)=2$ for all Mori fibers $E$ and the $G$-invariant irreducible sextic $D$ has seven singular points corresponding to the images of the curves $E$ in $\mathbb P_2$.
\end {proof}
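As a consistency check, note that $D^2 = 36$ identifies $D$ as a sextic, and, assuming its seven double points are ordinary, the genus formula for plane curves gives
\[
g(C) = \frac{(6-1)(6-2)}{2} - 7 = 10 - 7 = 3
\]
for its normalization $C$, in agreement with the value obtained from the Euler characteristic formula.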
\begin{corollary}\label{sing exam}
If $Y$ is not $G$-minimal, then $X$ is the minimal desingularization of a double cover of $\mathbb{P}_2$ branched along an irreducible $G$-invariant sextic with seven singular points.
\end{corollary}
We conclude this section with a classification of possible
$G$-minimal models of $Y$.
\begin {proposition}
The surface $Y_\mathrm{min}$ is either a Del Pezzo surface of degree two
or $\mathbb P_2$.
\end {proposition}
\begin {proof}
The case $Y_\mathrm{min}=\mathbb P_1\times \mathbb P_1$ is excluded by Example \ref{DelPezzoC3C7} and also by Remark \ref{P1xP1C3C7}.
Thus $Y_\mathrm{min}=Y_d$
is a Del Pezzo surface of degree $d=1,\ldots ,9$ which is a blowup
of $\mathbb P_2$ in $9-d$ points.
If $Y_\mathrm{min}=Y_1$, the anticanonical map has exactly one base point.
This point has to be $G$-fixed and since $G$ has no faithful
two-dimensional representations, this case does not occur.
It remains to eliminate $d=8,\ldots ,3$. In these
cases the sets $\mathcal S$ of (-1)-curves consist
of 1, 3, 6, 10, 16 or 27 elements, respectively (cf. Table \ref{minus one curves}).
The $G$-orbits in $\mathcal S$
consist of $1$, $3$, $7$ or $21$ curves and
there must be orbits of length three or one.
If $G$ stabilizes a curve
in $\mathcal S$, then its contraction gives rise to a two-dimensional
representation of $G$ which does not exist. If $G$ has an
orbit consisting of three curves, then $G'$ stabilizes each of the
curves in this orbit. Thus $G'$ has at least six fixed points in
$Y_\mathrm{min}$ and in $Y$. This contradicts the fact that $| \mathrm{Fix}_Y(G')|=3$.
\end {proof}
\section{Fine classification -- Computation of invariants}
We have reduced the classification of K3-surfaces with $G \times C_2$-symmetry to the study of equivariant double covers of rational surfaces $Y$ branched along a single invariant curve of genus $g \geq 3$. Here $Y$ is either $\mathbb P_2$, the blow-up of $\mathbb P_2$ in seven singular points of an irreducible $G$-invariant sextic, or a Del Pezzo surface of degree two.
\subsection{The case $Y = Y_\mathrm{min} = \mathbb P_2$}
An effective action of $G$ on $\mathbb P_2$ is given by an injective homomorphism $G \to \mathrm {PSL}_3(\mathbb C)$. There are two central degree three extensions of $G$, the trivial extension and $C_9 \ltimes C_7$. A study of their three-dimensional representations reveals that in both cases the action of $G$ on $\mathbb P_2$ is given by an irreducible representation $G \hookrightarrow \mathrm {SL}_3(\mathbb C)$. There are two isomorphism classes of irreducible three-dimensional representations.
Since these differ by a group automorphism
and the corresponding actions on $\mathbb{P}_2$ are therefore equivalent,
we may assume that in appropriately chosen coordinates a generator of $G'$ acts by
\begin{equation}\label{c7action}
[z_0:z_1:z_2]\mapsto [\lambda z_0:\lambda ^2 z_1:\lambda ^4 z_2],
\end{equation}
where $\lambda =\mathrm{exp}(\frac{2\pi i}{7})$ and a generator of
$C_3$ acts by the cyclic permutation
$\tau $ which is defined by
\begin{equation}\label{tauaction}
[z_0:z_1:z_2]\mapsto [z_2:z_0:z_1].
\end{equation}
A homogeneous polynomial defining an invariant curve
must be a $G$-semi-invariant with $G'$ acting with eigenvalue one.
The $G'$-invariant monomials of degree six are
$$
\mathbb C[z_0,z_1,z_2]_{(6)}^{G'}=
\mathrm {Span}\{z_0^2z_1^2z_2^2, z_0^5z_1,z_2^5z_0,z_1^5z_2\}\,.
$$
Letting $P_1=z_0^2z_1^2z_2^2 $ and $P_2=z_0^5z_1+z_2^5z_0+z_1^5z_2$,
it follows that
$$
\mathbb C[z_0,z_1,z_2]_{(6)}^{G}=
\mathrm {Span}\{P_1,P_2\}=:V\,.
$$
There are two $G$-semi-invariants which are not invariant, namely
$z_0^5z_1+\zeta z_2^5z_0+\zeta ^2z_1^5z_2$ for $\zeta ^3=1$
but $\zeta \not =1$. By direct computation one checks that
the curves defined by these polynomials are smooth and that in both
cases all $\tau $-fixed points in $\mathbb P_2$ lie on them.
Thus, $\tau $ has only three fixed points on the K3-surface
$X$ obtained as a double cover and therefore does not act symplectically (cf. Table \ref{fix points symplectic}). Consequently,
$G$ does not lift to an action by symplectic transformations
on the K3-surfaces defined
by these two curves. Hence it is enough to consider
ramified covers $X\to Y=\mathbb P_2$, where the branch curves are defined by invariant polynomials $f \in V$.
We wish to determine which polynomials
$P_{\alpha,\beta}=\alpha P_1+\beta P_2$ define singular curves.
Since $\mathrm {Fix}(\tau) = \{ [1:\zeta :\zeta ^2] \, | \, \zeta ^3 =1 \}$, the
curves which contain $\tau $-fixed points are
defined by the condition $\alpha +3\zeta \beta =0$.
Let $C_{P_1} = \{ P_1 =0\} $ and let $C_\zeta $ be
the curve defined by $P_{\alpha,\beta}$ for $\alpha +3\zeta \beta =0$. A direct computation shows that $C_\zeta $ is singular at the point $[1:\zeta :\zeta ^2]$. We let $\Sigma_\mathrm{reg} $ be
the complement of this set of four curves, $\Sigma_\mathrm{reg} = \mathbb P(V) \backslash \{C_{P_1}; \, C_\zeta \, | \, \zeta ^3 =1 \}$.
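The direct computation above can be reproduced symbolically. The following sympy sketch (an aside, not part of the argument) verifies that the polynomial $P_2 - 3\zeta P_1$, whose coefficients satisfy $\alpha + 3\zeta\beta = 0$, vanishes together with all its first partial derivatives at the $\tau$-fixed point $[1:\zeta:\zeta^2]$.

```python
import sympy as sp

z0, z1, z2 = sp.symbols('z0 z1 z2')
zeta = sp.Rational(-1, 2) + sp.sqrt(3)*sp.I/2   # primitive third root of unity

P1 = z0**2 * z1**2 * z2**2
P2 = z0**5*z1 + z2**5*z0 + z1**5*z2
P = P2 - 3*zeta*P1   # representative of P_{alpha,beta} with alpha + 3*zeta*beta = 0

point = {z0: 1, z1: zeta, z2: zeta**2}
# C_zeta passes through the tau-fixed point and is singular there:
# P and all first partials vanish at [1 : zeta : zeta^2]
for F in (P, sp.diff(P, z0), sp.diff(P, z1), sp.diff(P, z2)):
    assert sp.expand(F.subs(point)) == 0
```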
\begin{lemma}
A curve $C\in \mathbb P(V)$ is smooth if and only if $C \in \Sigma_\mathrm{reg} $.
\end{lemma}
\begin{proof}
Let $C \in \Sigma_\mathrm{reg} $. Since $\tau $ has no fixed points in $C$ by definition and every subgroup of
order three in $G$ is conjugate to $\langle \tau \rangle $, it follows that any $G$-orbit $G.p$ through a point $p\in C$ has length three or 21.
The only
subgroup of order seven in $G$ is the commutator group
$G'$. So the $G$-orbits of length three
are the orbits of the $G'$-fixed points $[1:0:0], [0:1:0],[0:0:1]$. One checks by direct computation that every $C\in \Sigma_\mathrm{reg} $ is smooth
at these three points.
An irreducible curve of degree six has at most ten singular points by the genus formula.
Suppose that $C$ is singular at some point $q$. Then it
is singular at each of the 21 points in $G.q$ and $C$ must be reducible.
Considering
the $G$-action on the space of irreducible components of $C$ yields a contradiction
and it follows that $C$ is smooth.
\end{proof}
For any curve $C \in \Sigma_\mathrm{reg}$ the double cover of $\mathbb P_2$ branched along $C$ is a K3-surface $X_C$ with an action of a degree two central extension of $G$. By the following lemma, this action is always of the desired type.
\begin{lemma}\label{G acts sympl}
For every $C \in \Sigma_\mathrm{reg}$ the K3-surface $X_C$ carries an action of the group
$G \times \langle \sigma \rangle $.
The group $G$ acts by symplectic transformations on $X_C$ and $\sigma$ denotes the covering involution.
\end{lemma}
\begin{proof}
It follows from the group structure of $G$
that the central degree two extension of $G$ acting on $X_C$ splits as $G \times C_2$. The factor $C_2$ is by construction generated by the covering involution $\sigma$. It remains to check that $G$ acts symplectically. As the commutator subgroup $G'$ acts symplectically it is sufficient to check whether $\tau$ lifts to a symplectic automorphism. Consider the $\tau$-fixed point $p=[1:1:1]$ and check that the linearization of $\tau$ at $p$ is in $\mathrm{SL}(2, \mathbb C)$. Since $p$ is not contained in $C$, it follows that the linearization of $\tau$ at a corresponding fixed point in $X_C$ is also in $\mathrm{SL}(2, \mathbb C)$. Consequently, the group $G$ acts by symplectic transformations on $X_C$.
\end{proof}
\subsection{Equivariant equivalence} \label{EquiEqui}
We wish to describe the space of K3-surfaces with $G \times C_2$-symmetry modulo equivariant equivalence.
For this, we study the family of K3-surfaces parametrized by the family of branch curves $\Sigma_\mathrm{reg}$. Consider the cyclic group $\Gamma$ of order three in $\mathrm{PGL}(3, \mathbb C)$ generated by
\[
[z_0:z_1:z_2]\mapsto [z_0:\zeta z_1: \zeta ^2z_2]
\]
for $\zeta = \mathrm{exp}(\frac{2 \pi i}{3})$. The group $\Gamma$ acts on $\Sigma_\mathrm{reg}$ and by the following proposition the induced equivalence relation is precisely equivariant equivalence formulated in Definition \ref{equivariantequivalence}.
\begin{proposition}
Two K3-surfaces $X_{C_1}$ and $X_{C_2}$ for $C_1,C_2 \in \Sigma_\mathrm{reg}$ are equivariantly equivalent if and only if $C_1 = \gamma C_2$ for some $\gamma \in \Gamma$, i.e., the quotient $\Sigma_\mathrm{reg}/ \Gamma$ parametrizes equivariant equivalence classes of K3-surfaces $X_C$ for $C \in \Sigma_\mathrm{reg}$.
\end{proposition}
\begin{proof}
If two K3-surfaces $X_{C_1}$ and $X_{C_2}$ for $C_1,C_2 \in \Sigma_\mathrm{reg}$ are equivariantly equivalent, then the isomorphism $X_{C_1} \to X_{C_2}$ induces an automorphism of $\mathbb P_2$ mapping $C_1$ to $C_2$.
Let $C\in \Sigma_\mathrm{reg}$ and for $T\in \mathrm {SL}_3(\mathbb C)$
assume that $T(C)\in \Sigma_\mathrm{reg}$. We consider the group
$S$ generated by $TGT^{-1}$ and $G$. By Lemma \ref{G acts sympl}, the group $G$ acts by symplectic transformations on $X_C$ and $X_{T(C)}$. We argue precisely as in the proof of this lemma to see that $TGT^{-1}$ also acts symplectically on the K3-surface $X_{T(C)}$. It follows that $S$ acts as a group of symplectic transformations on this K3-surface.
If $S=G$, then $T$ normalizes $G$. The normalizer $N$ of $G$ in
$\mathrm {PGL}_3(\mathbb C)$
is the product
$\Gamma \times G$ and
it follows that $gT$ is contained
in $\Gamma $ for some $g \in G$ and $T(C) = gT(C) = \gamma C$.
Note
that $L_2(7)$ is the only group in Mukai's
list which contains $G$. Therefore,
$S$ is a subgroup of $L_2(7)$. The group $G$ is a maximal subgroup of $L_2(7)$ and if $S\not=G$, then it follows that
$S=L_2(7)$. Any two subgroups of order 21 in $L_2(7)$ are conjugate. This implies the existence of $s \in S= L_2(7)$ such that $sTGT^{-1}s^{-1} = G$. Now $sT \in N = \Gamma \times G$ can be written as $sT = \gamma g$ for $(\gamma, g) \in \Gamma \times G$. By assumption, $s$ stabilizes $T(C)$ and $T(C) = sT(C) = \gamma g (C) = \gamma C$.
This completes the proof of the proposition.
\end{proof}
\subsection{The case $Y \neq Y_\mathrm{min}$}
Let us now consider the three singular irreducible curves in our family $\mathbb P (V)$. They are identified by the action of $\Gamma$. Using Corollary \ref{sing exam} we see that if $Y = X/\sigma$ is not $G$-minimal, then, up to equivariant equivalence, the K3-surface $X$ is the minimal desingularization of
the double cover of $\mathbb P_2$ branched along $C_{\zeta=1} = C_\mathrm{sing}$ and $Y$ is the blow-up of $\mathbb P_2$ in the seven singular points of $C_\mathrm{sing}$. These points are the $G'$-orbit of $[1:1:1]$. In the following proposition we prove that these are in general position and therefore $Y$ is a Del Pezzo surface.
\begin{proposition}\label{blowdownseven}
If $Y$ is not minimal, then it is the Del Pezzo surface of degree two which
arises by blowing up the seven singular points $p_1,\ldots ,p_7$ on the curve
$C_\mathrm {sing}$ in $\mathbb P_2$. The corresponding map $Y \to
\mathbb P_2$ is $G$-equivariant and therefore a Mori reduction of
$Y$.
\end{proposition}
\begin{proof}
We show that the points $\{p_1, \dots, p_7\} = G'.[1:1:1]$
are in general position, i.e., no three lie on one line and
no six lie on one conic.
It follows from direct computation that no three points in $G'.[1:1:1]$
lie on one line.
If $p_1,\dots, p_6$ lie on a conic $Q$, then $g.p_1, \dots , g.p_6$ lie on
$g.Q$ for every $g \in G$. Since $\{p_1, \dots, p_7\}$ is a
$G$-invariant set, the conics $Q$ and $g.Q$ intersect in at
least five points and therefore coincide. It follows that
$Q$ is an invariant conic meeting $C_\mathrm{sing}$ at its
seven singularities and $(Q,C_\mathrm{sing}) \geq 14$ implies
$Q \subset C_\mathrm{sing}$, a contradiction.
\end{proof}
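The collinearity statement in the proof can also be checked numerically. The following sympy sketch (an aside, not part of the argument) verifies that all $\binom{7}{3} = 35$ determinants formed by triples of the seven points are nonzero, i.e., that no three of them lie on a line.

```python
import sympy as sp
from itertools import combinations

lam = sp.exp(2*sp.pi*sp.I/7)
# the G'-orbit of [1:1:1]: the seven singular points of C_sing
pts = [sp.Matrix([lam**k, lam**(2*k), lam**(4*k)]) for k in range(7)]

# three projective points are collinear iff the 3x3 coordinate matrix is singular
for a, b, c in combinations(pts, 3):
    det = sp.Matrix.hstack(a, b, c).det()
    assert abs(complex(sp.N(det))) > 1e-8  # no three points are collinear
```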
\section{Klein's quartic and the Klein-Mukai surface}\label{KMsurface}
In this section we show that the Del Pezzo surface discussed in Proposition \ref{blowdownseven} above can be realized as the double cover of $\mathbb P_2$ branched along Klein's quartic curve.
\begin{proposition}\label{DelPezzoYKlein}
A Del Pezzo surface of degree two with an action of $G$ is equivariantly isomorphic to the double cover $Y_\mathrm{Klein}$ of $\mathbb P_2$ branched along Klein's quartic curve.
\end{proposition}
\begin{proof}
Recall that the anticanonical map of a Del Pezzo surface $Y$ of degree two defines a 2:1 map to $\mathbb P_2$. This map is branched along a smooth curve of degree four and equivariant with respect to $\mathrm{Aut}(Y)$. We obtain an action of $G$ on $\mathbb P_2$ stabilizing a smooth quartic. As before, we may choose coordinates such that $G$ is acting as in equations \eqref{c7action} and \eqref{tauaction}. Then
$$
\mathbb C[z_0,z_1,z_2]_{(4)}^{G'}
=\mathrm {Span}\{z_0^3z_2, z_1^3z_0,z_2^3z_1\}
$$
is a direct sum of $G$-eigenspaces. The eigenspace
of the eigenvalue $\zeta $ is spanned by the polynomial
$Q_\zeta :=z_0^3z_2+\zeta z_2^3z_1+\zeta ^2z_1^3z_0$ with $\zeta $
being a third root of unity.
In order to take into account equivariant equivalence
we consider the cyclic group
$\Gamma \subset \mathrm {SL}_3(\mathbb C)$ which
is generated by the transformation
$\gamma$, $[z_0:z_1:z_2]\mapsto [z_0:\zeta z_1:\zeta^2z_2]$.
The induced action on $\mathbb C[z_0,z_1,z_2]_{(4)}^{G'}$ is transitive on the $G$-eigenspaces
spanned by the $Q_\zeta $. Consequently, up to equivariant
equivalence, we may assume that
$Y\to \mathbb P_2$ is branched along Klein's curve $C_\text{Klein}$\index{Klein's curve!}
which is defined by $Q_1$.
\end{proof}
\begin{corollary}
A Del Pezzo surface of degree two with an action of $G$ is never $G$-minimal. Its Mori reduction $Y_\mathrm{Klein} \to \mathbb P_2$ is precisely the map discussed in Proposition \ref{blowdownseven}.
\end{corollary}
We summarize our observations in the following proposition.
\begin {proposition}
If $X$ is a K3-surface with a symplectic $G$-action centralized
by an antisymplectic involution $\sigma $, then $Y_\mathrm{min}=\mathbb P_2$.
In all but one case $X/\sigma =Y=Y_\mathrm{min}$. In the exceptional case
$Y=Y_\mathrm{Klein}$, the Mori reduction $Y\to Y_\mathrm{min}$ is the contraction of seven (-1)-curves to the singular
points of $C_\mathrm{sing}$ and the branch set $B$ of $X\to Y$
is the proper transform of $C_\mathrm {Klein}$ in $Y$.
\end {proposition}
\begin {proof}
It remains to prove that $B$
is the proper transform of $C_\mathrm {Klein}$ in $Y$.
Suppose that the
branch curve of $X\to Y$ is some other curve
$\widetilde B$ linearly equivalent to $-2K_Y$. Let
$I:=\widetilde B\cap B$ and note that $\vert I\vert \le B \cdot \widetilde B = 4 K_Y ^2 = 8$.
Since $G$ has no fixed points in $B$, it follows that
$\vert I\vert =3$ and that $I$ is a $G$-orbit.
Thus the
intersection multiplicities at the three points in $\widetilde B\cap B$ are
the same. Since 3 does not divide 8, this is a contradiction.
\end {proof}
In order to complete the proof of Theorem \ref{mainthmc3c7} it remains to show that the action of $G$ on $Y_\mathrm{Klein}$ lifts to a group of symplectic transformations on the K3-surface $X=X_\mathrm{KM}$ defined as a double cover of $Y_\mathrm{Klein}$ branched along the proper transform of $C_\mathrm{sing}$.
Since $G$ stabilizes $C_\text{Klein}$ and does not admit nontrivial central extensions of degree two, it lifts to a subgroup of $\mathrm{Aut}(Y_\text{Klein})$ and subsequently to a subgroup of $\mathrm{Aut}(X)$.
The covering involution of
$Y_\text{Klein}\to \mathbb P_2$
lifts to a holomorphic transformation of $X$ where we also find the
involution defining $X \to Y_\text{Klein}$. These two transformations
generate a group $F$
of order four. The elements of $F$ all have a positive-dimensional
fixed point set. It follows that $F$ acts solely by nonsymplectic transformations and is therefore isomorphic to $C_4$.
The full preimage of $G$ in $\mathrm {Aut}(X)$ splits
as $G\times C_4$.
Since the commutator group $G'$
automatically acts by symplectic transformations, we must
only check that the lift of the cyclic permutation $\tau $,
$[z_0:z_1:z_2]\mapsto [z_2:z_0:z_1]$, acts symplectically.
As above, this follows from a linearization argument at a $\tau$-fixed point not in $C_\mathrm{Klein}$.
In conclusion, up to equivalence there
is a unique action of $G$ by symplectic transformations
on the K3-surface $X_\mathrm{KM}$. It is centralized by a cyclic
group of order four which acts faithfully on the symplectic
form.
The Klein-Mukai-surface is the only surface with $G \times C_2$-symmetry for which $Y \not\cong \mathbb P_2$. As in the introduction of this chapter, we define $\Sigma$ as the complement of $C_{P_1}$ in $\mathbb P(V)$. Then $\Sigma = \Sigma_\mathrm{reg} \cup \{C_\zeta \, | \, \zeta^3 =1\}$. Using this notation the space
\[
\mathcal M = \Sigma / \Gamma
\]
parametrizes the space of K3-surfaces with $G \times C_2$-symmetry up to equivariant equivalence.
This completes the proof of Theorem \ref{mainthmc3c7}.
\section{The group $L_2(7)$ centralized by an antisymplectic involution}\label{168}
We consider the simple group of order 168. This group is $\mathrm{PSL}(2, \mathbb F_7)$ and usually denoted by $L_2(7)$. It contains our group $G = C_3 \ltimes C_7$ as a subgroup.
Since $L_2(7)$ is a simple group, if it acts on a K3-surface, it automatically acts by symplectic transformations.
We wish to prove Theorem \ref{L2(7) times invol} stating that there are exactly two K3-surfaces with an action of the group $L_2(7)$ centralized by an antisymplectic involution. These are the Klein-Mukai-surface $X_\mathrm{KM}$\index{Klein-Mukai surface!} and the double cover of $\mathbb P_2$ branched along the curve $\mathrm{Hess}(C_\text{Klein})=\{z_0^5z_1+z_2^5z_0+z_1^5z_2-5z_0^2 z_1^2 z_2^2=0\}$.
We have to check which elements of $\mathcal M$ have the symmetry of the larger group. The Klein-Mukai-surface is known to have $L_2(7) \times C_4$-symmetry (cf. Example \ref{L2(7)example}).
If $X \neq X_\mathrm{KM}$ has $L_2(7)$-symmetry, then it follows from the considerations of the previous sections that $X$ is an $L_2(7)$-equivariant double cover of $\mathbb P_2$ branched along a smooth $L_2(7)$-invariant sextic curve. That is, it remains to identify the surfaces with $L_2(7)$-symmetry in the family parametrized by $\Sigma_\mathrm{reg} / \Gamma$.
\begin{lemma}\label{L2(7)onP2}
The action of $L_2(7)$ on $\mathbb P_2$ is necessarily given by a three-dimensional representation.
\end{lemma}
\begin{proof}
The lemma follows from the fact that the group $L_2(7)$ does not admit nontrivial degree three central extensions. This can be derived from the cohomology group $H^2(L_2(7), \mathbb C^*) \cong C_2$, known as the Schur multiplier.
\end{proof}
There are two isomorphism classes of three-dimensional representations and these differ by an outer automorphism. We may therefore consider the particular representation given in Example \ref{L2(7)example}. One checks that the curve $\mathrm{Hess}(C_\mathrm{Klein})$ is $L_2(7)$-invariant.
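The assertion that $\mathrm{Hess}(C_\mathrm{Klein})$ is the sextic given above can also be verified symbolically. The following sympy sketch (an aside, not part of the argument) checks that the Hessian determinant of Klein's quartic equals $-54$ times the stated polynomial.

```python
import sympy as sp

z0, z1, z2 = sp.symbols('z0 z1 z2')
F = z0*z1**3 + z1*z2**3 + z2*z0**3            # Klein's quartic
H = sp.hessian(F, (z0, z1, z2)).det()          # its Hessian determinant, a sextic

Hess = z0**5*z1 + z2**5*z0 + z1**5*z2 - 5*z0**2*z1**2*z2**2
assert sp.expand(H + 54*Hess) == 0             # H = -54 * Hess(C_Klein)
```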
The maximal possible isotropy group is $C_7$ and each $L_2(7)$-orbit in $\mathrm{Hess}(C_\mathrm{Klein})$ consists of at least 21 elements. If there were another $L_2(7)$-invariant curve $C$ in $\Sigma_\mathrm{reg}$, then the invariant set $ C \cap \mathrm{Hess}(C_\mathrm{Klein})$ would consist of at most 36 points, hence of a single orbit of length 21; the 21 equal intersection multiplicities would then have to sum to the intersection number 36, which is impossible. It follows that $\mathrm{Hess}(C_\mathrm{Klein})$ is the only $L_2(7)$-invariant curve in $\Sigma_\mathrm{reg}$.
It remains to check that $L_2(7)$ lifts to a subgroup of $\mathrm{Aut}(X_{\mathrm{Hess}(C_\mathrm{Klein})})$: On $X_{\mathrm{Hess}(C_\mathrm{Klein})}$ we find an action of a central degree two extension $E$ of $L_2(7)$. Since $E \neq E_\mathrm{symp}$ and $L_2(7)$ is simple, the subgroup of symplectic transformations inside $E$ must be isomorphic to $L_2(7)$.
It follows that $X_\mathrm{KM}$ and the double cover of $\mathbb P_2$ branched along $\mathrm{Hess}(C_\mathrm{Klein})$ are the only examples of K3-surfaces with $L_2(7) \times C_2$ symmetry. This completes the proof of Theorem \ref{L2(7) times invol}.
\begin{remark}
If we consider the quotient $Y_\mathrm{Klein}$ of $X_\mathrm{KM}$ by the antisymplectic involution $\sigma \in C_4$, this surface was seen not to be minimal with respect to the action of $C_3 \ltimes C_7$. It is however $L_2(7)$-minimal, as we cannot find an equivariant contraction morphism blowing down an orbit of disjoint (-1)-curves in $Y_\mathrm{Klein}$.
Such an orbit would have to consist of seven Mori fibers, stabilized by a subgroup of index seven, which is isomorphic to $S_4$. A Mori fiber of self-intersection -1 does, however, not admit an action of the group $S_4$ (cf. proof of Theorem \ref{roughclassi}).
\end{remark}
\chapter{The simple group of order 168}\label{chapter non exist}
In this chapter we consider finite groups containing $L_2(7)$, the simple group of order 168, and their actions on K3-surfaces. Based on our considerations about $L_2(7) \times C_2$-actions on K3-surfaces in Section \ref{168} we derive a classification result (Theorem \ref{improve OZ}). This gives a refinement of a lattice-theoretic result due to Oguiso and Zhang \cite{OZ168}. The main part of this chapter is dedicated to proving the non-existence of K3-surfaces with an action of the group $L_2(7) \times C_3$ (Theorem \ref{nonexist}) using equivariant Mori reduction.
\section{Finite groups containing $L_2(7)$}
If $H$ is a finite group acting on a K3-surface and $L_2(7) \lneqq H$, then it follows from Mukai's theorem and the simplicity of $L_2(7)$ that $H$ fits into the short exact sequence
\[
1 \to L_2(7) = H_\mathrm{symp} \to H \to C_m \to 1
\]
for some $m \in \mathbb N$. As noted by Oguiso and Zhang (Claim 2.1 in \cite{OZ168}), it follows from Proposition 3.4 in \cite{mukai} that $m \in \{1,2,3,4,6\}$.
The action of $H$ on $L_2(7)$ by conjugation defines a homomorphism
$H \to \mathrm{Aut}(L_2(7))$.
Factorizing by the group of inner automorphisms of $L_2(7)$ we obtain a homomorphism
\[
C_m \cong H/L_2(7) \to \mathrm{Out}(L_2(7)) \cong C_2.
\]
If $H$ is not the nontrivial semidirect product $L_2(7) \rtimes C_2$, this homomorphism has a nontrivial kernel. In particular, we find a cyclic group $C_k < C_m$ centralizing $L_2(7)$. If $k$ is even, we may apply our results on K3-surfaces with $L_2(7) \times C_2$-symmetry from the previous chapter.
If $m = 3$ or $m = 6$, then $k=3$ or $k=6$. These cases may be excluded as is shown in \cite{OZ168}, Added in proof, Proposition 1. An independent proof of this fact, i.e., the non-existence of K3-surfaces with $L_2(7) \times C_3$-symmetry, using equivariant Mori theory, in particular the classification of $L_2(7)$-minimal models, is given below (Theorem \ref{nonexist}).
We summarize our observations about K3-surfaces with $L_2(7)$-symmetry in the following theorem, which improves the classification result due to Oguiso and Zhang.
\begin{theorem}\label{improve OZ}
Let $H$ be a finite group acting on a K3-surface $X$ with $L_2(7) \lneqq H$. Then
\begin{samepage}
\begin{itemize}
\item
$|H/L_2(7)| \in \{2,4\}$.
\item
If $|H/L_2(7)|= 4$, then $H = L_2(7) \times C_4$ and $X \cong X_\mathrm{KM}$.
\item
If $|H/L_2(7)|= 2$ and $H = L_2(7) \times C_2$, then either $X \cong X_\mathrm{KM}$ or $X \cong X_{\mathrm{Hess}(C_\mathrm{Klein})}$.
\end{itemize}
\end{samepage}
\end{theorem}
The first statement follows from the non-existence of K3-surfaces with $L_2(7) \times C_3$-symmetry (Theorem \ref{nonexist} below) and the third statement follows from Theorem \ref{L2(7) times invol}. The remaining part is covered in the following lemma (cf. Main Theorem in \cite{OZ168}).
\begin{lemma}\label{OZresult}
If $X$ is a K3-surface with an action of a finite group containing $L_2(7)$ as a subgroup of index four, then $X$ is the Klein-Mukai surface.
\end{lemma}
\begin{proof}
We let $X$ be a K3-surface and $H$ be a finite subgroup of $\mathrm{Aut}(X)$ with $L_2(7) < H$ and $|H/L_2(7)|=4$.
Since $L_2(7)$ is simple and a maximal group of symplectic transformations,
it coincides with the group of symplectic transformations in $H$.
In particular, $H / L_2(7) = C_4$ and a group
$\langle \sigma \rangle$ of
order two is contained in the kernel of the homomorphism $H\to \mathrm {Aut}(L_2(7))$.
It follows that we are in the setting of Theorem \ref{L2(7) times invol} where
$\Lambda :=H/\langle \sigma \rangle$ acts on $Y=X/\sigma $. If
$X \neq X_\mathrm{KM}$, then $Y= \mathbb P_2$. This possibility needs to be eliminated.
Let $\tau $ be any element of $\Lambda $ which
is not in $L_2(7)$ and let $\Gamma = C_3 \ltimes C_7 < L_2(7)$.
Since any two subgroups of order 21 in $L_2(7)$ are conjugate by an element of $L_2(7)$, it follows that there exists
$h\in L_2(7)$ with $(h\tau )\Gamma(h\tau)^{-1}=\Gamma$. Thus, the normalizer
$N(\Gamma)$ of $\Gamma$ in $\Lambda $ is a group of order 42 which also normalizes the commutator subgroup
$\Gamma'$ and therefore stabilizes its set $F$ of fixed points.
Using coordinates $[z_0:z_1:z_2]$ of $\mathbb{P}_2$
as in Theorem \ref{L2(7) times invol} one
checks by direct computation that the only transformations in $\mathrm {Stab}(F)$ which stabilize
the branch curve $\mathrm {Hess}(C_{\mathrm {Klein}})$
are those in $\Gamma$ itself. This contradiction shows that
$Y \neq \mathbb P_2$ and therefore $X=X_\mathrm{KM}$.
\end{proof}
\section{Non-existence of K3-surfaces with an action of $L_2(7) \times C_3$}
The method of equivariant Mori reduction can be applied to obtain both classification and non-existence results.
In the following, we exemplify a general approach to proving the non-existence of K3-surfaces with specified symmetry by considering the group $L_2(7) \times C_3$ and give an independent proof of the following observation of Oguiso and Zhang \cite{OZ168}:
\begin{theorem}\label{nonexist}
There does not exist a K3-surface with an action of $L_2(7) \times C_3$.
\end{theorem}
The remainder of this chapter is dedicated to the proof of this theorem.
\subsection{Global structure}
Let $G \cong L_2(7)$, let $D \cong C_3$, and assume there exists a K3-surface $X$ with a holomorphic action of $G \times
D$. Since $G$ is a simple group and a maximal group of symplectic transformations on a K3-surface, it follows that $G$ acts symplectically whereas the action of $D$ is nonsymplectic. We obtain the following commuting diagram.
\[
\xymatrix{
X \ar[d]^{\pi} & \hat{X} \ar[d]^{\hat{\pi}} \ar[l]_{b_X}\\
X/D=Y & \hat{Y} \ar[l]_{b_Y}\ar[d]^{M_{\mathrm{red}}}\\
& \hat{Y}_\mathrm{min}=Z
}
\]
Here $b_X$ is the blow-up of the isolated $D$-fixed points in $X$.
The singularities of $X/D$ correspond to isolated $D$-fixed points.
Since the linearization of the $D$-action at an isolated fixed point is locally of the form $(z,w) \mapsto (\chi z, \chi w)$ for some nontrivial character $\chi: D \to \mathbb C^*$, each singularity of $X/D$ is resolved by a single blow-up. We let
$b_Y$ denote the simultaneous blow-up of all singularities of $Y$. We fix a $G$-Mori reduction $M_\mathrm{red}: \hat Y \to \hat{Y}_\mathrm{min}=Z$. All maps in
the diagram are $G$-equivariant. By Theorem \ref{K3quotnonsymp}, the surface $\hat{Y}$ is rational. As conic bundles do not admit an action of $G$ (cf. Lemma \ref{C3C7 conic bundle}), we know that $\hat{Y}_\mathrm{min}$ is
a Del Pezzo surface. The following lemma specifies $Z$.
\begin{lemma}
The Del Pezzo surface $Z$ is either $\mathbb P_2$
or a surface obtained from $\mathbb P_2$ by blowing up 7 points in general
position. In the latter case, $Z$ is a $G$-equivariant double cover of $\mathbb P_2$ branched along
Klein's quartic curve. The action of $G$ on $\mathbb P_2$ is given by a three-dimensional representation.
\end{lemma}
\begin{proof}
The first part of the lemma follows from our observations in Example \ref{DelPezzoL2(7)}, the last part has been discussed in Lemma \ref{L2(7)onP2}.
If $Z$ is a Del Pezzo surface of degree two, then the anticanonical map realizes it as an equivariant double cover of $\mathbb P_2$ branched along a smooth quartic curve $C$. We choose coordinates on $\mathbb P_2$ such that the action of $G$ is given by the representation $\rho$ of Example \ref{L2(7)example} (or its dual representation $\rho^*$) and have already seen that Klein's quartic curve
\[
C_\text{Klein}= \{x_1x_2^3 + x_2x_3^3 + x_3x_1^3=0\} \subset \mathbb P_2
\]
is $G$-invariant. If $C \neq C_\text{Klein}$, then $C \cap C_\text{Klein}$ is a $G$-invariant subset of $\mathbb P_2$. Since the maximal cyclic subgroup of $G$ is of order seven, it follows that a $G$-orbit $G.p$ of a point $p \in C \cap C_\text{Klein}$ consists of at least 24 elements. By Bézout's theorem, however, $C \cap C_\text{Klein}$ consists of at most 16 points,
this is a contradiction. Therefore, $C= C_\text{Klein}$ and the lemma follows.
\end{proof}
\subsection*{$D$-fixed points}
The map $\pi$ is in general ramified both at points and along curves.
Let $x$ be an isolated
$D$-fixed point in $X$. As was noted above, the isotropy representation of the nonsymplectic $D$-action at $x$ in local coordinates $(z,w)$ is given by
$(z,w) \mapsto (\chi z, \chi w)$ for some nontrivial character $\chi: D \to \mathbb C^*$. The action of $D$ on the rational
curve $\hat E$ obtained by blowing up $x$ is trivial and therefore $\hat E$ is contained
in the ramification set $\mathrm{Fix}_{\hat{X}}(D)$. Let $\{\hat{E_i}\}$ denote the set of (-1)-curves in $\hat{X}$ obtained from blowing up isolated $D$-fixed points in $X$ and define $E_i = \hat{\pi}(\hat{E_i})$.
If $C$ is a curve of $D$-fixed points in $X$, it follows that $\hat \pi$ is ramified along $b_X^{-1}(C)$. Let $\{\hat {F_j}\}$ denote the set of all ramification curves of type $b_X^{-1}(C)$ and define $F_j= \hat \pi(\hat {F_j})$. The map $\hat \pi$ is a $D$-quotient and ramified along curves
\[
\mathrm{Fix}_{\hat{X}} (D) = \bigcup \hat{E_i} \cup \bigcup \hat{F_j}.
\]
\subsection{Mori contractions and $C_7$-fixed points}
Many aspects of the group theory of $G$ can be well understood in terms of its generators $\alpha, \beta, \gamma$ of orders 7, 3, and 2, respectively. Since the action of $G$ on $\mathbb{P}_2$ is given by a three-dimensional irreducible representation, the action of $G$ on $Z$ is given explicitly in terms of $\alpha, \beta, \gamma$.
We let $S= \langle \alpha \rangle \cong C_7 < G$ be a cyclic subgroup of order seven in $G$.
The symplectic action of a cyclic group of order seven on a K3-surface has exactly three fixed points.
Since $p_1=[1:0:0]$, $p_2=[0:1:0]$ and $p_3=[0:0:1]$ all lie on $C_\mathrm{Klein} \subset \mathbb P_2$, the action of
$S$ on $Z$ has exactly three fixed points.
Let $\mathrm{Fix}_{\hat{Y}}(S) =: \{y_1, \dots,y_k\}$ and let
$\mathrm{Fix}_{\hat{X}}(S) =: \{x_1, \dots,x_l\}$. Since blowing up an $S$-fixed point in $X$ replaces the fixed
point by a rational curve with two $S$-fixed points in $\hat{X}$, we find $3 \leq k \leq l
\leq 6$.
\begin{lemma}\label{fixSinfixD}
The fixed points of $S$ in $\hat{X}$ are contained in the
$D$-ramification set, i.e., $\mathrm{Fix}_{\hat{X}}(S) \subset
\mathrm{Fix}_{\hat{X}} (D)$.
\end{lemma}
\begin{proof}
Since $D$ centralizes $S$, the action of $D$ stabilizes the $S$-fixed point set.
We first show that $\mathrm{Fix}_{{X}}(S) \subset
\mathrm{Fix}_{{X}} (D)$. Assume the contrary and let $\mathrm{Fix}_X(S)=
\{s_1,s_2,s_3\}$ be a $D$-orbit and $\pi(s_i)=y$. Then $y$ is
a smooth point and fixed by the action of $S$ on $Y$. There exists a neighbourhood of $y$ in $Y$ which is
biholomorphic to a neighbourhood of $b_Y^{-1}(y)= \tilde{y}$ in
$\hat{Y}$. By construction, $\tilde{y}\in \mathrm{Fix}_{\hat{Y}}(S)$. Since $\mathrm{Fix}_{\hat{Y}}(S)$ consists of at least three points, we let $\tilde{\tilde{y}} \neq \tilde{y}$ be an additional $S$-fixed point on
$\hat{Y}$. The fiber $\pi^{-1}(b_Y(\tilde{\tilde{y}}))$ consists of one or three points and is disjoint from $\{s_1,s_2,s_3\}$. Since the point $\tilde{\tilde{y}}$ is a fixed point of $S$, we know that $S \cong C_7$
acts on the fiber $\pi^{-1}(b_Y(\tilde{\tilde{y}}))$ and is seen to fix it pointwise. This is contrary to the fact that $\mathrm{Fix}_X(S)=\{s_1,s_2,s_3\}$. It follows that $\mathrm{Fix}_{{X}}(S) \subset
\mathrm{Fix}_{{X}} (D)$.
It remains to show the corresponding inclusion on $\hat{X}$. If the
points $s_i$ do not coincide with isolated $D$-fixed points, the
statement follows since $b_X$ is equivariant and biholomorphic
outside the isolated $D$-fixed points.
If $s_i$ is an isolated
$D$-fixed point, we have seen above that the action of $D$ on the blow-up of $s_i$ is trivial. In particular, $\mathrm{Fix}_{\hat{X}}(S) \subset
\mathrm{Fix}_{\hat{X}} (D)$.
\end{proof}
\subsection*{Excluding the case $|\mathrm{Fix}_{\hat{Y}}(S)|=3$}
\begin{lemma} If $|\mathrm{Fix}_{\hat{Y}}(S)
|=3$, then $\mathrm{Fix}_{\hat{Y}}(S) \cap \bigcup E_i = \emptyset$.
\end{lemma}
\begin{proof}
Fixed points
of $S$ on a curve $\hat{E_i}$ always come in pairs: If the curve $\hat{E_i}$ contains a fixed point of $S$, then the isotropy
representation of $S$ at the fixed point $b_X(\hat{E_i})$ in $X$ defines an action of the cyclic group $S$ on the rational curve $\hat{E_i}$ with exactly two fixed points.
If $|\mathrm{Fix}_{\hat{Y}}(S)|=|\mathrm{Fix}_{\hat{X}}(S)
|=3$ and $\mathrm{Fix}_{\hat{Y}}(S) \cap \bigcup E_i \neq \emptyset$, then two of the $S$-fixed points lie on the same curve $\hat{ E_i}$ and $|\mathrm{Fix}_{X}(S)| \leq 2$, a contradiction.
\end{proof}
\begin{lemma}\label{fixed points on mori fibers}
If $|\mathrm{Fix}_{\hat{Y}}(S)
|=3$, then the set $\mathrm{Fix}_{\hat{Y}}(S)$ has empty intersection with the exceptional locus of the full equivariant Mori reduction $M_{\mathrm{red}}: \hat{Y} \to Z$.
\end{lemma}
\begin{proof}
Let $C$ be any exceptional curve of the Mori reduction and assume there is a fixed point of $S$ on $C$. As the point $p$ obtained from blowing down $C$ has to be a fixed point of $S$, it follows that the curve $C$ is $S$-invariant. In particular, we know that the action of $S$ on $C$ has exactly two fixed points. Now blowing down $C$ reduces the number of $S$-fixed points by one. This contradicts the fact that $|\mathrm{Fix}_{Z}(S)|=3$.
\end{proof}
\begin{lemma}\label{shapeofSaction}
Let $|\mathrm{Fix}_{\hat{Y}}(S)|=3$ and let $p \in \mathrm{Fix}_Z (S)$. Then there exist local coordinates $(u,v)$ at $p$ and a nontrivial character $\mu: S \to \mathbb{C}^*$ such that the action of $S$ at $p$ is locally given by either
\[
(u,v) \mapsto (\mu^3 u, \mu^{-1} v)\quad \text{or} \quad (u,v) \mapsto (\mu u, \mu^{-3} v).
\]
\end{lemma}
\begin{proof}
On the K3-surface $X$ the action of $S$ at a fixed point is in local coordinates $(z,w)$ given by $(z,w) \mapsto (\mu
z, \mu^{-1}w)$ for some nontrivial character $\mu: S \to \mathbb{C}^*$. Since $\mathrm{Fix}_{\hat{Y}}(S) \cap \bigcup E_i = \emptyset$, the map $b_X$ is biholomorphic in a neighbourhood of the fixed point.
Recalling that $\mathrm{Fix}_{\hat{X}}(S)$ is contained in the ramification locus of $\hat{\pi}$, i.e., that each such fixed point lies in $\mathrm{Fix}_{\hat{X}}(D)$, the action of $D$ may be linearized at this fixed point.
Since $S$ and $D$ commute, the action of $D$ is diagonal in the chosen local coordinates $(z,w)$. We conclude that $\hat \pi$ is locally of the form $(z,w) \mapsto (z^3,w)$ or $(z,w^3)$. The action of $S$ at a fixed point in $\hat{Y}$ is defined by $(\mu^3, \mu^{-1})$ or $(\mu, \mu^{-3})$, respectively.
By the lemma above, the fixed points of $S$ are not affected by the Mori reduction. The map $M_{\mathrm{red}}$ is $S$-equivariant and locally biholomorphic in a neighbourhood of a fixed point of $S$. The lemma follows.
\end{proof}
Using our explicit knowledge of the $G$-action on $Z$ we will show in the following that the linearization of the action of $S < G$ at a fixed point in the Del Pezzo surface $Z$ is not of the type described by the lemma above. We distinguish two cases when studying $Z$.
Let $Z \cong \mathbb{P}_2$ and let $[x_0:x_1:x_2]$ denote homogeneous coordinates on $\mathbb{P}_2$ such that the action of $S <G$ on $\mathbb{P}_2$ is given by $[x_0:x_1:x_2] \mapsto [\zeta x_0: \zeta^2 x_1: \zeta^4 x_2]$, where $\zeta$ is a primitive $7^\text{th}$ root of unity. Using affine coordinates $z= \frac{x_1}{x_0}, w= \frac{x_2}{x_0}$ we check that the action of $S$ at $p_1=[1:0:0]$ is locally given by $(z,w) \mapsto (\zeta z, \zeta^3 w)$. This contradicts Lemma \ref{shapeofSaction}.
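Indeed, if $(\zeta, \zeta^3)$ were of the form $(\mu^3, \mu^{-1})$, then $\mu = \zeta^{-3} = \zeta^4$, whereas
\[
\mu^3 = \zeta^{12} = \zeta^5 \neq \zeta;
\]
if it were of the form $(\mu, \mu^{-3})$, then $\mu = \zeta$, whereas $\mu^{-3} = \zeta^{-3} = \zeta^4 \neq \zeta^3$. Since the two normal forms of Lemma \ref{shapeofSaction} are interchanged by swapping the two coordinates, this excludes all possible choices of local coordinates.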
Let $Z \overset{q}{\to} \mathbb{P}_2$ be the double cover of $\mathbb{P}_2$ branched along Klein's quartic curve and
let $[x_0:x_1:x_2]$ denote homogeneous coordinates on $\mathbb{P}_2$. As above, using affine coordinates $u= \frac{x_1}{x_0}, v= \frac{x_2}{x_0}$ we check that the action of $S$ in a neighbourhood of $[1:0:0]$ is locally given by $(u,v) \mapsto (\zeta u, \zeta^3 v)$. The branch curve $C_\mathrm{Klein} \subset \mathbb{P}_2$ is defined by the equation $u^3+uv^3+v=0$. In new coordinates $(\tilde u (u,v), \tilde v(u,v))= (u, u^3+uv^3+v)$ the branch curve is defined by $\tilde v = 0$ and the action of $S$ is given by $(\tilde u,\tilde v) \mapsto (\zeta \tilde u, \zeta^3 \tilde v)$. Consider the fixed point $[1:0:0] \in \mathbb P_2$ and its preimage $p \in Z$. At $p$, coordinates $(z,w)$ can be chosen such that the covering map is locally given by $(z,w) \mapsto (z,w^2) = (\tilde u, \tilde v)$. It follows that the action of $S$ at $p \in Z$ is locally given by $(z,w) \mapsto (\zeta z, \zeta^5 w)$. This is again contrary to Lemma \ref{shapeofSaction}.
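As before, neither normal form of Lemma \ref{shapeofSaction} matches these weights:
\[
(\mu^3, \mu^{-1}) = (\zeta, \zeta^5) \;\Rightarrow\; \mu = \zeta^{-5} = \zeta^2,\ \mu^3 = \zeta^6 \neq \zeta, \qquad
(\mu, \mu^{-3}) = (\zeta, \zeta^5) \;\Rightarrow\; \mu = \zeta,\ \mu^{-3} = \zeta^4 \neq \zeta^5.
\]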
In summary, if $|\mathrm{Fix}_{\hat{Y}}(S)|=3$, the action of $S < G$ on the Del Pezzo surface $Z$ cannot be induced by a symplectic $C_7$-action on the K3-surface $X$. This proves the following lemma.
\begin{lemma}
$|\mathrm{Fix}_{\hat{Y}}(S)|\geq 4$.
\end{lemma}
\subsection{Lifting Klein's quartic}
The discussion of the previous section shows that there must be a step in the Mori reduction where the blow-down of a (-1)-curve identifies two $S$-fixed points. Let $z \in Z$ be a fixed point of $S$. Then, by equivariance, all points in the $G$-orbit of $z$ are obtained by blowing down (-1)-curves in the process of Mori reduction.
If $Z \cong \mathbb{P}_2$, we denote by $C_\mathrm{Klein} \subset Z$ Klein's quartic curve.
If $Z$ is the double cover of $\mathbb{P}_2$ branched along Klein's curve, we abuse notation and denote by $C_\mathrm{Klein}$ the ramification curve in $Z$. In the latter case $C_\mathrm{Klein}$ is a $G$-invariant curve of genus 3 and self-intersection 8 by Lemma \ref{selfintbranch}.
Let $z \in \mathrm{Fix}_Z (S) \subset C_\mathrm{Klein}$ and consider the $G$-orbit $G\cdot z$. By invariance, $G\cdot z \subset C_\mathrm{Klein}$. The isotropy group $G_z$ must be cyclic; since it contains $S$ and $S$ is a maximal cyclic subgroup of $G$, it follows that $G_z =S$ and $|G\cdot z|=168/7=24$. Let $B$ denote the strict transform of $C_\mathrm{Klein}$ in $\hat Y$. The curve $B$ is a smooth $G$-invariant curve of genus 3 and meets at least 24 Mori fibers. Applying Lemma \ref{selfintblowdown} to $M_\mathrm{red}(B)=C_\mathrm{Klein}$ we obtain
\[
B^2 \leq C_\mathrm{Klein}^2 -24 \leq -8.
\]
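Here both possibilities for $Z$ are covered: if $Z \cong \mathbb P_2$, then $C_\mathrm{Klein}$ is a smooth quartic and
\[
C_\mathrm{Klein}^2 = 4^2 = 16, \qquad B^2 \leq 16 - 24 = -8,
\]
while in the double cover case $C_\mathrm{Klein}^2 = 8$ and $B^2 \leq 8 - 24 = -16$.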
\begin{lemma}
The curve $B$ does not coincide with any of the curves of type $E$ or $F$. Its preimage $\hat B := \hat \pi^{-1}(B) \subset \hat X$ is a cyclic degree three cover of $B$ branched at $B \cap(\bigcup E_i \cup \bigcup F_j)$.
\end{lemma}
\begin{proof}
The curves $E_i \subset \hat Y$ are (-3)-curves whereas $B$ has self-intersection less than or equal to $-8$. Assume $B = F_j$ for some $j$. Then, by Lemma \ref{selfintbranch}, $\hat B$ is a curve of self-intersection less than or equal to $-4$ which is mapped biholomorphically onto a curve in the K3-surface $X$. We obtain a contradiction since K3-surfaces do not admit curves of self-intersection less than $-2$.
\end{proof}
Since $\mathrm{Fix}_Z (S) \subset C_\mathrm{Klein}$ there are three fixed points of $S$ on $\hat B$. From $\mathrm{Fix}_{\hat{X}}(S) \subset \mathrm{Fix}_{\hat{X}} (D)$ it follows that $\hat \pi|_{\hat B}: \hat B \to B$ is branched at three or more points. In particular, the curve $\hat{B}$ is connected.
In the following, we will distinguish two cases: the curve $\hat B$ being reducible or irreducible.
\subsection*{Case 1: The curve $\hat{B}$ is reducible}
\addcontentsline{toc}{subsection}{\hspace{1.1cm} Case 1: The curve $\hat B$ is reducible}
The three irreducible components $\hat{B}_i$, $i= 1,2,3$ of $\hat B$ are smooth curves which are mapped biholomorphically onto $B$. Since $B$ is exceptional, the configuration of curves $\hat{B}$ is also exceptional.
It follows that the intersection matrix $(\hat{B}_i \cdot \hat{B}_j)_{ij}$ is negative definite. In the following we study the intersection matrix of $\hat B$ and will obtain a contradiction.
The restricted map $b_X: \hat B_i \to b_X(\hat B_i)$ is the normalization of $b_X(\hat B_i)$ and consequently the arithmetic genus of $b_X(\hat{B}_i)$ is given by the formula (cf. II.11 in \cite{BPV})
\[
g(b_X(\hat{B}_i)) = g(\hat {B_i}) + \delta(b_X(\hat{B}_i)),
\]
where the number $\delta$ is computed as
$\delta(b_X(\hat{B}_i)) = \sum_{p \in b_X(\hat{B}_i)} \mathrm{dim}_\mathbb{C}({b_X}_*\mathcal{O}_{\hat{B}_i}/\mathcal{O}_{b_X(\hat{B}_i)})_p$.
Note that the sum can also be taken over the singular points $p \in b_X(\hat{B}_i)$ only, since smooth points do not contribute to the sum.
Since $X$ is a K3-surface, the adjunction formula for $b_X(\hat{B}_i)$ reads
\[
(b_X(\hat{B}_i))^2 = 2 g(b_X(\hat{B}_i)) -2 = 2g(\hat B_i) +2 \delta(b_X(\hat{B}_i)) -2.
\]
By Lemma \ref{selfintblowdown}, the self-intersection number $(b_X(\hat{B}_i))^2$ can be expressed in terms of the self-intersection $\hat{B}_i^2$ and the intersection multiplicities $\hat{E}_j\cdot \hat{B}_i$:
\[
(b_X(\hat{B}_i))^2 = \hat{B}_i^2 + \sum_j (\hat{E}_j \cdot \hat{B}_i)^2.
\]
It follows that the self-intersection number of $\hat{B}_i$ can be expressed as
\begin{equation}\label{selfintB}
\hat{B}_i^2 = 2g(\hat B_i) +2 \delta(b_X(\hat{B}_i)) -2 -\sum_j (\hat{E}_j\cdot \hat{B}_i)^2.
\end{equation}
For simplicity, we first consider the case where $\hat{B}_i$ has nontrivial intersection with only one curve of type $\hat{E}$. We refer to this curve as $\hat{E}$. The general case then follows by summation over all curves $\hat E_j$: the number $\delta$ for the full contraction $b_X$ is the sum of the numbers $\delta$ obtained when blowing down the disjoint curves $\hat{E}_j$ stepwise.
\paragraph{Estimating the number $\delta$}
\begin{example}
Let $C= C_1 \cup C_2$ be a connected curve consisting of two irreducible components. Then the arithmetic genus of $C$ is calculated as
$g(C)= g(C_1) + g(C_2) + C_1 \cdot C_2 -1$.
The normalization $ \tilde C$ of $C$ is given by the disjoint union of the normalizations $\tilde C_i$ of $C_1$ and $C_2$. In particular,
$g(\tilde C) = g( \tilde C_1) + g(\tilde C_2) -1$,
so that $\delta(C) = \delta(C_1) + \delta(C_2) + C_1 \cdot C_2$
(cf. II.11 in \cite{BPV}).
\end{example}
Since the number $\delta$ is a sum of contributions $ \delta_p$ at singular points $p$, we can calculate the number $\delta_p$ locally at each singularity, where we decompose the germ of the curve into irreducible components and use a formula generalizing the example above. We refer to an irreducible component of a curve germ realized in an open neighbourhood of the surface as a \emph{curve segment}.
In order to study the singularities of $b_X(\hat B_i)$ one needs to consider the points of intersection $\hat E \cap \hat B_i$. These points of intersection can be of different types:
\begin{itemize}
\item
\textbf{Type $ m= 1$:} The intersection at $b \in \hat B_i$ is transversal and the local intersection multiplicity at $b$ is equal to 1. A neighbourhood of $b$ in $\hat B_i$ is mapped to a smooth curve segment in $b_X(\hat B_i)$.
\item
\textbf{Type $m>1$:} The intersection at $b \in \hat B_i$ is of higher multiplicity $m(b)$, i.e., $\hat E$ is tangent to $\hat B_i$ and in local coordinates $(z,w)$ we may write $\hat E = \{ z=0\}$ and $\hat B_i = \{ z- w^m=0\}$. Blowing down $\hat E$ transforms a neighbourhood of $b$ into a curve segment isomorphic to $\{ x^{m+1} -y^m =0\}$. For the singularity $(0,0)$ of this curve we calculate
\[
\delta_{(0,0)}= \frac{1}{2}m(m-1).
\]
\end{itemize}
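For instance, in the first nontrivial case $m=2$ the image segment is an ordinary cusp $\{x^3 - y^2 = 0\}$, normalized by $t \mapsto (t^2,t^3)$, and
\[
\delta_{(0,0)}= \tfrac{1}{2}\cdot 2 \cdot (2-1) = 1,
\]
matching the fact that a cuspidal plane cubic has arithmetic genus one and geometric genus zero.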
Let $b_m$ denote the number of points in $\hat E \cap \hat B_i$ with local intersection multiplicity $m$. For each point of intersection of $\hat E$ and $\hat B_i$ we obtain an irreducible component of the germ of $b_X(\hat B_i)$ at $p = b_X(\hat E)$. We compute $\delta_p$ by decomposing this germ and need to determine local intersection multiplicities of all combinations of irreducible components.
\begin{lemma}\label{normarg}
Two irreducible components of the germ of $b_X(\hat B_i)$ at $p$ corresponding to points in $\hat E \cap \hat B_i$ of type $m$ and $n$ meet with local intersection multiplicity greater than or equal to $mn$.
\end{lemma}
\begin{proof}
In order to determine the intersection multiplicity of two irreducible components corresponding to points of type $m$ and $n$, we write one curve as $\{ x^{m+1} -y^m =0\}$. The second curve can be expressed as $\{h_1(x,y)^{n+1} - h_2(x,y)^n=0\}$ where $(x,y) \mapsto (h_1(x,y),h_2(x,y))$ is a holomorphic change of coordinates. Now normalizing the first curve by $\xi \mapsto (\xi^m, \xi^{m+1})$ and pulling back the equation of the second curve to the normalization $\mathbb{C}$, we obtain the equation
$h_1(\xi^m, \xi^{m+1})^{n+1} - h_2(\xi^m, \xi^{m+1})^n=0$
which has degree at least $mn$ in $\xi$. It follows that the local intersection multiplicity is greater than or equal to $mn$.
\end{proof}
Counting different types of intersections of irreducible components we obtain the following estimate for $\delta_p$
\begin{align*}
\delta_p &= \sum_i \delta_p(C_i) + \sum_{i<j}(C_i \cdot C_j)_p\\
&\geq \sum_{m\in\mathbb N}\frac{b_m}{2}m(m-1) + \frac{1}{2}\sum_{m \in \mathbb N}b_m(b_m-1)m^2 + \sum_{m> n} b_mb_n mn
\end{align*}
where $\sum_{i<j}(C_i \cdot C_j)_p$ decomposes into intersections $(C_i \cdot C_j)_p$ of type $mm$ and intersections of type $mn$ for $m \neq n$. The formula above applies to each curve $\hat E_j$ having nontrivial intersection with $\hat B_i$.
Let $p_j$ be the point on $X$ obtained by blowing down $\hat E_j$ and let $b_m^j$ denote the number of points of type $m$ in $\hat B_i \cap \hat E_j$. Then
\begin{align*}
\delta ( b_X(\hat B_i)) &= \sum_j \delta_{p_j}(b_X(\hat B_i)) \\
&\geq \sum_j(\sum_{m\in\mathbb N}\frac{b_m^j}{2}m(m-1) + \frac{1}{2}\sum_{m\in \mathbb N}b_m^j(b_m^j-1)m^2 + \sum_{m> n} b_m^jb_n^j mn).
\end{align*}
Returning to the formula (\ref{selfintB}) for $\hat B_i^2$ we obtain
\begin{align*}
\hat{B}_i^2 &= 2g(\hat B_i) +2 \delta(b_X(\hat{B}_i)) -2 -\sum_j (\hat{E}_j \cdot \hat{B}_i)^2\\
& \geq \sum_j(\sum_{m\in\mathbb{N}}b_m^jm(m-1) + \sum_{m\in \mathbb{N}}b_m^j(b_m^j-1)m^2 + 2\sum_{m> n} b_m^jb_n^j mn)\\
&-2 - \sum_j(\sum_m b_m^jm)^2\\
& \geq -2-\sum_j \sum_m b_m^j m.
\end{align*}
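The last estimate follows by expanding the square,
\[
\Big(\sum_m b_m^j m\Big)^2 = \sum_m (b_m^j)^2 m^2 + 2 \sum_{m>n} b_m^j b_n^j mn,
\]
so that the mixed terms cancel and, using $g(\hat B_i) \geq 0$, only
\[
-2 + \sum_j \sum_m b_m^j\big(m(m-1) - m^2\big) = -2 - \sum_j \sum_m b_m^j m
\]
remains.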
As a next step, we will find a bound for $(\hat B_i \cdot \hat B_k)$ in the
case $i \neq k$. If a curve $\hat B_i$ intersects a ramification curve of type $\hat
E$ or $\hat F$ in a point $x$, then $(\hat B_i \cdot \hat B_k)_x \geq
1$. If $(\hat B_i \cdot \hat E_j)_x =m$, then for $k \neq i$
\[
(\hat B_k \cdot\hat E_j)_x = (\varphi_D( \hat B_i) \cdot \hat E_j)_x= ( \varphi_D(\hat B_i) \cdot \varphi_D(\hat E_j))_x = (\hat B_i \cdot \hat E_j)_x =m
\]
where $\varphi_D \in D$ is a biholomorphic transformation with $\varphi_D(\hat B_i) = \hat B_k$ and $\hat E_j$
is in the fixed locus of $D$.
\begin{lemma}
Assume $\hat B_i$ meets a curve of type $\hat E$ or $\hat F$ in $x$ with local intersection multiplicity $m$. Then $(\hat B_i \cdot \hat B_k)_x \geq m$.
\end{lemma}
\begin{proof}
Let $\hat E$, $\hat F$ respectively, be locally given by
$\{z=0\}$. Then $\hat B_i$ is locally given by $\{z-w^m=0\}$ and $\hat
B_k$ by $\{h_1(z,w) - h_2(z,w)^m =0\}$ where $(z,w) \mapsto
(h_1(z,w), h_2(z,w))$ is, as in the proof of Lemma \ref{normarg}, a holomorphic
change of coordinates. Note that it stabilizes $\{z=0\}$,
i.e., $h_1(0,w)=0$ for all $w$ and we can write $h_1(z,w) =z
\tilde{h}_1(z,w)$. The intersection of $\hat B_i$ and $\hat B_k$ corresponds to the equation
$ w^m \tilde{h}_1(w^m, w) - h_2(w^m, w)^m$
which vanishes to order greater than or equal to $m$ at $w=0$. The lemma follows.
\end{proof}
Summing over all points of intersection of $\hat B_i$ and $ \hat B_k$ one finds
$\hat B_i \cdot \hat B_k \geq \sum_j \sum_m b_m^j m$.
Recall that by Lemma \ref{fixSinfixD}
$\mathrm{Fix}_{\hat X}(S)$ is contained in $\mathrm{Fix}_{\hat X}(D)$ and that the curve $B$ contains
three $S$-fixed points. Therefore, it intersects the
ramification locus of $\hat \pi$ in at least three points. At these points the three irreducible components of $\hat B$ must meet. In particular, $\hat B_i \cdot \hat B_k \geq 3 $. This yields
\begin{align*}
&(1,1,1) \begin{pmatrix}
\hat B_1^2 & \hat B_1 \cdot \hat B_2 &\hat B_1 \cdot \hat B_3\\
\hat B_2 \cdot \hat B_1 & \hat B_2^2 & \hat B_2 \cdot \hat B_3\\
\hat B_3 \cdot \hat B_1 & \hat B_3 \cdot \hat B_2 & \hat B_3^2
\end{pmatrix}
\begin{pmatrix}
1\\1\\1
\end{pmatrix}\\
&= \hat B_1^2 + \hat B_2^2 + \hat B_3^2 + 2(\hat B_1 \cdot \hat B_2 + \hat B_2 \cdot \hat B_3 + \hat B_1 \cdot \hat B_3)\\
& \geq -6 - 3 \sum_j \sum_m b_m^j m+3\sum_j \sum_m b_m^j m +(\hat B_1 \cdot \hat B_2 + \hat B_2 \cdot \hat B_3 + \hat B_1 \cdot \hat B_3) \\
&= -6 +(\hat B_1 \cdot \hat B_2 + \hat B_2 \cdot \hat B_3 + \hat B_1 \cdot \hat B_3)\\
&\geq 3.
\end{align*}
Hence, the intersection matrix $(\hat B_i \cdot \hat B_j)_{ij}$ is not negative definite, contradicting the fact that $\hat B$ is exceptional. It follows that the curve $\hat B$ must be irreducible.
\subsection*{Case 2: The curve $\hat B$ is irreducible}
\addcontentsline{toc}{subsection}{\hspace{1.1cm} Case 2: The curve $\hat B$ is irreducible}
Let $n: N \to \hat B$ be the normalization of $\hat B$. Since $b_X$ is a blow-up, $b_X \circ n: N \to b_X(\hat B)$ is the normalization of the curve $b_X(\hat B) \subset X$. It follows that
$g(b_X(\hat{B})) = g(N) + \delta(b_X(\hat{B}))$.
By adjunction, the self-inter\-sec\-tion of $b_X(\hat B)$ is given by
\[
(b_X(\hat B))^2 = 2g(b_X(\hat{B}))- 2 = 2g(N) + 2\delta(b_X(\hat{B})) -2.
\]
As above, by Lemma \ref{selfintblowdown},
$(b_X(\hat{B}))^2 = \hat{B}^2 + \sum_j (\hat{E}_j \cdot \hat{B})^2$.
Thus, the self-in\-ter\-sec\-tion of $\hat B$ can be expressed as
\[
\hat{B}^2 = 2g(N) + 2\delta(b_X(\hat{B})) -2 - \sum_j (\hat{E}_j \cdot \hat{B})^2.
\]
Since the curve $\hat B$ is exceptional, this self-intersection number must be negative. By finding a lower bound for $\hat B^2$ we will obtain a contradiction.
Let us first examine the points of intersection $\hat B \cap \hat E$ for one curve $\hat E$ among the exceptional curves of the blow-down $b_X$.
We consider the corresponding points of intersection of $B$ and $E$ in
$\hat Y$ and we choose coordinates $(\xi, \eta)$ such that $E$ is locally defined by $\{\xi =0\}$, the map $\hat \pi $ is locally given by $(z,w) \mapsto (z^3,w)=(\xi, \eta)$ and $B = \{f(\xi, \eta)=0\}$. It follows that $\hat B$ is locally defined by $\{ h= f\circ \hat\pi =0\}$.
If $E$ and $B$ meet transversally, we know that the function $f(\xi, \eta)$ fulfills $\frac{\partial f}{\partial \eta}|_{(0,0)} \neq 0$. It follows that $\frac{\partial h}{\partial w}|_{(0,0)} \neq 0$ and after a suitable change of coordinates $h(z,w)= z^m-w$.
If $E$ and $B$ meet tangentially, we know that the function $f(\xi, \eta)$ fulfills $\frac{\partial f}{\partial \eta}|_{(0,0)} = 0$. Since $B$ is smooth, we know $\frac{\partial f}{\partial \xi}|_{(0,0)} \neq 0$. After a suitable change of coordinates $h(z,w)= z^3-w^n$ with $n>0$. Note that in both cases the coordinate change on $\hat X$ is such that $\hat E$ is still defined by $\{z=0\}$. This will be important when describing the blow-down $b_X$ of $\hat E$.
Consider a curve segment $\{h=0\}$ in $\hat X$ and its image under the map $b_X$. If $h(z,w)= z^m-w$ then the corresponding smooth segment of $b_X(\hat B)$ is defined by $ x^{m+1} -y =0$. If $h(z,w)= z^3-w^n$ then the corresponding piece of $b_X(\hat B)$ is defined by $ x^{n+3} -y^n =0$ and has a singular point if $n>1$.
Let $p =b_X(\hat E)$. We will determine $\delta_p$ by decomposing the germ of $b_X(\hat B)$ at $p$ into its irreducible components. There are four different types of such components:
\begin{enumerate}
\item
smooth components locally defined by $ x^{m+1} -y =0$,
\item
singular components locally defined by $ x^{n+3} -y^n =0$ for $n>1$ not divisible by $3$,
\item
triplets of smooth components locally defined by $ x^6 -y^3 =0$,
\item
triplets of singular components locally defined by $ x^{n+3} -y^n =0$ for $n=3k$ and $k >1$.
\end{enumerate}
The singularity in case 2) gives $\delta = \frac{n^2+n-2}{2}$. In case 4), each component is defined by an equation of type $x^{k+1}-y^k=0$ and the singularity of each component gives $\delta = \frac{k^2-k}{2}$. \\
In order to determine $\delta_p$ we need to specify intersection multiplicities for all combinations of irreducible components.
\begin{lemma}
The local intersection multiplicities of
pairs of irreducible components of the germ of $b_X(\hat B)$ at $p$ in general position are given by the following table.
\renewcommand{\baselinestretch}{1.5}
\begin{table}[H]
\begin{center}
\begin{tabular}{c|c|c|c|c}
local equation & $ x^{m_1+1} -y $ & $ x^{n_1+3} -y^{n_1}$ & $x^2-y$ & $x^{k_1+1}-y^{k_1}$\\ \hline
$ x^{m_2+1} -y $ & 1& $n_1$ & 1& $k_1$\\ \hline
$ x^{n_2+3} -y^{n_2} $& $n_2$ & $n_1n_2$ & $n_2$ & $n_2k_1$ \\ \hline
$x^2-y $ & 1& $n_1$ & 1 \text{\ or\ }(2) & $k_1$ \\ \hline
$x^{k_2+1}-y^{k_2}$ & $k_2$ & $k_2n_1$ & $k_2$ & $k_1k_2$ \text{\ or\ }($k^2+k$)
\end{tabular}
\end{center}
\end{table}
\renewcommand{\baselinestretch}{1.1}
\end{lemma}
Note that the local equations in the first row and column, although
all written as functions of $(x,y)$, describe the curve segments in
different choices of local coordinates.
\begin{proof}[Sketch of proof]
As above, we rewrite one equation as
$f(h_1(x,y),h_2(x,y))$ where $(h_1,h_2)$ is a holomorphic change of local
coordinates. The intersection multiplicities can then be calculated by
the method introduced in the proof of Lemma \ref{normarg}.
Two irreducible components in a triplet of type 3) meet with
intersection multiplicity 2. Two irreducible components in a triplet
of type 4) meet with intersection multiplicity $k^2+k$. These
quantities are indicated in brackets as they differ from the intersection multiplicities of two irreducible components from different triplets.
\end{proof}
\begin{remark}
If two irreducible components of the germ of $b_X(\hat B)$ at $p$ are in special position, their local intersection multiplicity is greater than the value specified in the above table. In particular, the table gives lower bounds for the respective intersection numbers.
\end{remark}
Let $a$ denote the number of irreducible components of type 1), let $b_n$ denote the number of irreducible components of type 2) where $ n \not\in 3\mathbb{N}$, let $c \in 3\mathbb{N}$ denote the number of irreducible components of type 3) and let $d_k \in 3 \mathbb{N}$ denote the number of irreducible components of type 4). We set $e=a+c$.
A lower bound for $\delta_p$ is given by
\begin{align*}
\delta_p &\geq \sum_n b_n \frac{n^2+n-2}{2} + \sum_k d_k \frac{k^2-k}{2}\\
& + \frac{1}{2}e(e-1) + c +\sum_n e b_n n + \sum_k e d_k k\\
& + \frac{1}{2}\sum_n b_n(b_n-1)n^2 +\sum_{n_1 > n_2} b_{n_1}b_{n_2} n_1n_2 + \sum_{n,k} b_nd_k nk\\
& + \frac{1}{2}\sum_k d_k(d_k-1)k^2 + \sum_k d_k k+ \sum_{k_1 > k_2} d_{k_1}d_{k_2} k_1k_2.
\end{align*}
For simplicity, we first consider only one curve $\hat E$ intersecting $\hat B$. The formula for $\hat B^2$ becomes
\begin{align}
\hat{B}^2 &= 2g(N) + 2\delta(b_X(\hat{B})) -2 - (\hat{E} \cdot \hat{B})^2 \notag\\
&=2g(N) + 2\delta(b_X(\hat{B})) -2 - \underset{(\hat{E} \cdot \hat{B})^2}{\underbrace{(e+ \sum_n b_n n + \sum_k d_k k)^2}} \notag\\
&\geq 2g(N)-2- e +2c+ \sum_k d_k k + \sum_n b_n(n-2) \notag \\
&\geq 2g(N)-2- a. \label{ineq}
\end{align}
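The simplification carried out in the last two lines can be double-checked numerically. The following Python sketch (purely a sanity check, not part of the argument) sets $2\delta$ equal to twice the lower bound for $\delta_p$ given above and verifies the identity $2\delta - (\hat E \cdot \hat B)^2 = -e + 2c + \sum_k d_k k + \sum_n b_n(n-2)$ for random values of $a$, $c$, $b_n$, $d_k$:

```python
import random

random.seed(1)
for _ in range(200):
    a, c = random.randint(0, 5), random.randint(0, 5)
    e = a + c
    b = {n: random.randint(0, 3) for n in (2, 4, 5)}   # n >= 2, n not divisible by 3
    d = {k: random.randint(0, 3) for k in (2, 3, 4)}   # k > 1
    Sb = sum(n * bn for n, bn in b.items())
    Sd = sum(k * dk for k, dk in d.items())
    # twice the lower bound for delta_p, term by term as in the displayed formula
    two_delta = (sum(bn * (n * n + n - 2) for n, bn in b.items())
                 + sum(dk * (k * k - k) for k, dk in d.items())
                 + e * (e - 1) + 2 * c
                 + 2 * e * Sb + 2 * e * Sd
                 + sum(bn * (bn - 1) * n * n for n, bn in b.items())
                 + 2 * sum(b[n1] * b[n2] * n1 * n2 for n1 in b for n2 in b if n1 > n2)
                 + 2 * sum(bn * dk * n * k for n, bn in b.items() for k, dk in d.items())
                 + sum(dk * (dk - 1) * k * k for k, dk in d.items())
                 + 2 * Sd
                 + 2 * sum(d[k1] * d[k2] * k1 * k2 for k1 in d for k2 in d if k1 > k2))
    EB = e + Sb + Sd                   # (E . B) = e + sum b_n n + sum d_k k
    rhs = -e + 2 * c + Sd + sum(bn * (n - 2) for n, bn in b.items())
    assert two_delta - EB * EB == rhs
print("identity verified")
```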
The same formula also holds if we consider the general case of curves $\bigcup_i \hat E_i$ intersecting $\hat B$ since both the calculation of $\delta$ and the intersection number $\sum_i(\hat E_i \cdot \hat B)^2$ can be obtained from the above by addition. The number $a$ now represents the number of points of type 1) in the union of curves $\hat E_i$.
The map $n \circ \hat \pi: N \to \hat B \to B$ is a degree three cover of the smooth curve $B$ branched at $V \subset B$. The genus of $B$ is three, the topological Euler characteristic is $e(B)= -4$. Let $\tilde V := B \cap (\bigcup E_i \cup \bigcup F_j)$ denote the branch locus of $\hat \pi: \hat B \to B$. Then $V \subset \tilde V$ and $V$ must contain those points in $\tilde V$ which correspond to smooth points on $\hat B$. In particular, $|V| \geq a$.
The Euler characteristic of $N$ is given by
$e(N) = 3e(B) - 2|V| = -12 -2|V| = 2 - 2g(N)$,
and inequality \eqref{ineq} becomes
\[
\hat B ^2 \geq 12 + 2|V|-a \geq 12 + |V| \geq 0
\]
contradicting the fact that $\hat B$ is exceptional.
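The final chain of inequalities depends only on the elementary fact that $|V| \geq a \geq 0$; a quick numerical illustration in Python (not part of the argument):

```python
# verify 12 + 2V - a >= 12 + V >= 0 whenever V >= a >= 0
checked = 0
for V in range(0, 60):
    for a in range(0, V + 1):
        assert 12 + 2 * V - a >= 12 + V >= 0
        checked += 1
print(checked, "cases checked")
```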
\subsection*{Conclusion}
\addcontentsline{toc}{subsection}{Conclusion}
The above contradiction shows the non-existence of a K3-surface with an action of $G \times C_3$. This completes the proof of Theorem \ref{nonexist}.
\chapter{The alternating group of degree six}\label{chapterA6}
In the previous chapters we have considered symplectic automorphism groups of K3-surfaces centralized by an antisymplectic involution, i.e., the groups under consideration were of the form $\tilde G = G \times \langle \sigma \rangle$ where $\tilde G_\mathrm{symp} = G$. In this chapter we wish to discuss more general automorphism groups $\tilde G$ of mixed type: if $\tilde G $ contains an antisymplectic involution $\sigma$ with fixed points we consider the quotient by $\sigma$. In general, if $\sigma$ does not centralize the group $\tilde G_\mathrm{symp}$ inside $\tilde G$, the action of $\tilde G_\mathrm{symp}$ does \emph{not} descend to the quotient surface. We therefore restrict our consideration to the centralizer $Z_{\tilde G}(\sigma)$ of $\sigma$ inside $\tilde G$ (or $\tilde G_\mathrm{symp}$) and study its action on the quotient surface.
If we are able to describe the family of K3-surfaces with $Z_{\tilde G} (\sigma)$-symmetry, it remains to detect the surfaces with $\tilde G$-symmetry inside this family.
This chapter is devoted to a situation where the group $\tilde G$ contains the alternating group of degree six.
Although a precise classification cannot be obtained at present, we achieve an improved understanding of the equivariant geometry of K3-surfaces with $\tilde G$-symmetry and classify families of K3-surfaces with $Z_{\tilde G}(\sigma)$-symmetry (cf. Theorem \ref{classiA6}). In this sense, this closing chapter serves as an outlook on how the method of equivariant Mori reduction allows generalization to more advanced classification problems.
\section{The group $\tilde A_6$}
We let $\tilde G$ be any finite group which fits into the exact sequence
\[
\{\mathrm{id}\} \to A_6 \to \tilde G \overset {\alpha}{\to} C_n \to \{\mathrm{id}\},
\]
and in the following we consider a K3-surface $X$ with an effective action of $\tilde G$.
The group of
symplectic automorphisms $(\tilde{G})_{\text{symp}}$ in $\tilde G$
coincides with $A_6$.
This particular situation is considered by Keum, Oguiso, and Zhang in \cite{KOZLeech} and \cite{KOZExten}. They lay special emphasis on the maximal possible choice of $\tilde G$ and therefore consider a group $\tilde G = \tilde A_6$ characterized by the exact sequence
\begin{equation}\label{tilde A6}
\{\mathrm{id}\} \to A_6 \to \tilde A_6 \overset {\alpha}{\to} C_4 \to \{\mathrm{id}\}.
\end{equation}
Let $N := \mathrm{Inn}(\tilde{A_6}) \subset
\mathrm{Aut}(A_6)$ denote the group of inner automorphisms of $\tilde A_6$ and let $\mathrm{int} : \tilde A_6 \to N$ be the homomorphism mapping an element $g \in \tilde A_6$ to conjugation by $g$.
It can be shown that the group $\tilde{A_6}$ is a semidirect product $A_6
\rtimes C_4$ embedded in $N\times C_4$ by the map $(\mathrm{int}, \alpha)$
(Theorem 2.3 in \cite{KOZExten}). By Theorem 4.1 in \cite{KOZExten} the group $N$ is isomorphic to $M_{10}$
and the isomorphism class of $\tilde A_6$ is uniquely determined by \eqref{tilde A6} and the condition that it acts on a K3-surface.
In \cite{KOZLeech} a lattice-theoretic proof of the following classification result (Theorem 5.1, Theorem 3.1, Proposition 3.5) is given.
\begin{theorem}
A K3-surface $X$ with an effective action of $\tilde A_6$ is isomorphic to the minimal desingularization of the surface in $\mathbb P_1 \times \mathbb P_2$ given by
\[
S^2(X^3+Y^3 + Z^3) -3 (S^2 + T^2) XYZ =0.
\]
\end{theorem}
Although this realization is very concrete, the action of $\tilde A_6$ on this surface is hidden. The existence of an isomorphism from a K3-surface with $\tilde A_6$-symmetry to the surface defined by the equation above follows abstractly since both surfaces are shown to have the same transcendental lattice. It is therefore desirable to achieve a more geometric understanding of K3-surfaces with $\tilde A_6$-symmetry in general and in particular to obtain an explicit realization of $X$ where the action of $\tilde A_6$ is visible.
We let the generator of the
factor $C_4$ in the semidirect product $\tilde{A_6} = A_6
\rtimes C_4$ be denoted by $\tau$. The order four
automorphism $\tau$ is nonsymplectic and has fixed points. It follows that the antisymplectic
involution $\sigma := \tau^2$ fulfils
\[
\mathrm{Fix }_X(\sigma) \neq \emptyset.
\]
Since $\sigma$ is mapped to the trivial automorphism in
$\mathrm{Out}(A_6) = \mathrm{Aut}(A_6)/\mathrm{int}(A_6) \cong C_2 \times C_2$
there exists $h \in A_6$ such that $
\mathrm{int}(h) = \mathrm{int}(\sigma) \in \mathrm{Aut}(A_6)$.
The antisymplectic involution $h \sigma$ centralizes $A_6$ in $\tilde{A_6}$.
\begin{remark}
If $\mathrm{Fix}_X(h \sigma) \neq \emptyset$, we are in the
situation dealt with in Section \ref{A6Valentiner}, i.e., the K3-surface $X$ is an $A_6$-equivariant double
cover of $\mathbb{P}_2$ where $A_6$ acts as Valentiner's group and the branch locus is given by $F_{A_6}(x_1,x_2,x_3) = 10 x_1^3x_2^3+ 9 x_1^5x_3 + 9 x_2^3x_3^3-45 x_1^2 x_2^2 x_3^2-135 x_1 x_2 x_3^4 + 27 x_3^6$. By construction, there is an evident action of $A_6 \times C_2$ on the Valentiner surface\index{Valentiner surface!}, it is however not clear whether this surface admits the larger symmetry group $\tilde A_6$.
\end{remark}
In the following we assume that $h \sigma$ acts without fixed points on $X$ as otherwise the remark above yields an $A_6$-equivariant classification of $X$.
\subsection{The centralizer $G$ of $\sigma$ in $\tilde{A_6}$}\label{centralizer}
We study the quotient $ \pi : X \to X/\sigma = Y$. As mentioned above, the action of the centralizer of $\sigma$ descends to an action on $Y$. We therefore start by identifying the centralizer $G :=Z_{\tilde{A_6}}(\sigma)$ of $\sigma$ in
$\tilde{A_6}$.
\begin{lemma}
The group $G$ equals $Z_{A_6}(\sigma) \rtimes C_4$ and
$Z_{A_6}(\sigma) = Z_{A_6}(h)$.
\end{lemma}
\begin{proof}
The lemma follows from direct computations:
we write an element of $\tilde A_6$ as $a\tau^k$ with $a \in A_6$. Then $a \tau^k$ is in $Z_{\tilde{A_6}}(\sigma)$ if and only if $a \tau^k \tau^2 = \tau^2 a \tau^k$. This is the case if and only if $a \tau^2 = \tau^2 a$, i.e., if $a \in Z_{A_6}(\sigma)$. Now $\langle \tau \rangle < Z_{\tilde{A_6}}(\sigma)$ implies the first part of the lemma. The second part follows from the equality $\mathrm{int}(\sigma) = \mathrm{int}(h)$.
\end{proof}
\begin{lemma}
$Z_{A_6}(h) = D_8$.
\end{lemma}
\begin{proof}
Since $\mathrm{int}(\sigma) = \mathrm{int}(h)$ and $\sigma^2 = \mathrm{id}$, it follows that $h^2$ commutes with any element in $A_6$. As $Z(A_6)= \{ \mathrm{id} \}$, it follows that $h$ is of order two. There is only one conjugacy class of elements of order two in $A_6$. We calculate $Z_{A_6}(h) = D_8$ for one particular choice of $h$.
Let
\[
h=
\begin{pmatrix}
1 & 2 & 3 & 4 & 5 & 6 \\
3 & 4 & 1 & 2 & 5 & 6
\end{pmatrix}.
\]
Any element in the centralizer of $h$ must be of the form
\[
\begin{pmatrix}
1 & 2 & 3 & 4 & 5 & 6 \\
* & * & * & * & 5 & 6
\end{pmatrix}
\quad \text{or} \quad
\begin{pmatrix}
1 & 2 & 3 & 4 & 5 & 6 \\
* & * & * & * & 6 & 5
\end{pmatrix}
\]
It is therefore sufficient to perform all calculations in $S_4$. If an element of $S_4$ is a composition of an even (odd) number of transpositions, the corresponding element of $Z_{A_6}(h)$ is given by completing it with the identity map (transposition map) on the fifth and sixth letter.
Let
\[
g_1=
\begin{pmatrix}
1 & 2 & 3 & 4 \\
2 & 1 & 4 & 3
\end{pmatrix},
g_2=
\begin{pmatrix}
1 & 2 & 3 & 4 \\
3 & 2 & 1 & 4
\end{pmatrix},
g_3=
\begin{pmatrix}
1 & 2 & 3 & 4 \\
1 & 4 & 3 & 2
\end{pmatrix}
\]
and check that $g_1,g_2,g_3 \in Z_{A_6}(h)$. Define $g_1 g_2 =:c$ and check
\[
c=
\begin{pmatrix}
1 & 2 & 3 & 4 \\
2 & 3 & 4 & 1
\end{pmatrix}, \quad
c^2
= h.
\]
Now $g_3 c g_3 = c^3 $ and the subgroup of $S_4$ ($A_6$, respectively) generated by $c$ and $g_3$ is seen to be a dihedral group of order eight; $\langle g_3 \rangle \ltimes \langle c \rangle = D_8 < Z_{A_6}(h)$. In order to show equality, assume that $Z_{A_6}(h)$ is bigger. It then follows that the centralizer of $h$ in $S_4$ is a subgroup of order 12, in particular, it has a subgroup of order three. Going through the list of elements of order three in $S_4$ one checks that none commutes with $h$ and obtains a contradiction.
\end{proof}
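The centralizer computation can also be confirmed by brute force. The following Python sketch (an independent check; the letters $1,\dots,6$ are renamed $0,\dots,5$) lists all even permutations of six letters commuting with $h = (1\,3)(2\,4)$ and confirms that they form a group of order eight with five elements of order two, which identifies it as $D_8$ rather than the quaternion group:

```python
from itertools import permutations

def compose(p, q):                     # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def is_even(p):                        # parity via inversion count
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return inv % 2 == 0

def order(p):
    e, q, k = tuple(range(len(p))), p, 1
    while q != e:
        q, k = compose(q, p), k + 1
    return k

h = (2, 3, 0, 1, 4, 5)                 # h = (1 3)(2 4) on the letters 0..5

A6 = [p for p in permutations(range(6)) if is_even(p)]
Z = [p for p in A6 if compose(p, h) == compose(h, p)]

print(len(Z))                          # 8
print(sorted(order(p) for p in Z))     # [1, 2, 2, 2, 2, 2, 4, 4]
```

Among the groups of order eight, only the dihedral group has exactly five elements of order two, so the printed order profile pins down $Z_{A_6}(h) \cong D_8$.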
Let $D_8=C_2\ltimes C_4$ where $C_2$ is generated by
$g= g_3$ and $C_4$ by $c$ and note that $c^2 =h$.
We study the action of $\tau $
on $D_8$ by conjugation. Since $C_4$ is the only cyclic subgroup
of order four in $D_8$, it is $\tau $-invariant.
If $c$ is $\tau $-fixed, i.e. $\tau c = c \tau$, then
\[
(\tau c)^2= c \tau \tau c = c \sigma c \overset{c \in Z(\sigma)}{=} \sigma c^2 =
\sigma h.
\]
In this case
$\tau c$ generates a cyclic group of order four acting
freely on $X$, a contradiction (a group acting freely on a K3-surface has order at most two, since $\chi(\mathcal O_X) = 2$ must be divisible by the group order).
So $\tau $ acts on $\langle c\rangle$
by $c \mapsto c^3$ and $c^2 \mapsto c^2$.
Now $\tau g\tau^{-1}=c^kg$ for some $k \in \{0,1,2,3\}$.
If $k=2$, then
\[
(\tau g)^2= \tau g \tau g = \tau g \tau^{-1} \tau^2 g = c^2 g \sigma g
\overset{g \in Z(\sigma)}{=} c^2 \sigma = \sigma h
\]
and we obtain the same
contradiction as above. So $k\in \{1,3\}$ and by choosing the
appropriate generator of $\langle c\rangle$ we may assume that $k=3$.
The action of $\tau$
on $Z_{A_6}(h)=D_8$ is therefore given by $g\mapsto c^3 g$ and $c\mapsto c^3$.
\begin{lemma}
$G'=\langle c\rangle$.
\end{lemma}
\begin{proof}
The commutator subgroup $G'$ is the smallest normal subgroup $N$ of $G$ such that $G/N$ is Abelian.
We use the above considerations about the action of $\tau$ on $D_8$ by conjugation.
The subgroup $\langle c \rangle$ is normal in $G = D_8 \rtimes \langle \tau \rangle$ and $G/ \langle c \rangle$ is seen to be Abelian. Since $G / \langle c^2 \rangle$ is not Abelian, $ G' \neq \langle c^2 \rangle$ and the lemma follows.
\end{proof}
\subsection{The group $H = G / \langle \sigma \rangle$}
We consider the quotient $Y=X/\sigma $ equipped with
the action of
$G/\langle \sigma \rangle =:H=Z_{\tilde A_6}(\sigma)/\langle \sigma \rangle=D_8\rtimes C_2$. The
group $C_2$ is generated by $[\tau]_\sigma$.
For simplicity, we transfer the above notation from $G$ to $H$ by writing, e.g., $\tau$ for
$[\tau]_\sigma$, etc. Since $\tau g\tau ^{-1}=c^3 g= g c$, it follows as above that
$H'=\langle c\rangle$.
Let $K < G$ be the cyclic group of order eight generated by $g \tau $.
\[
K = \{ \mathrm{id}, g \tau , c \sigma, g \tau^3 c, c^2, g \tau c^2, \sigma
c^3, g c \tau^3 \}.
\]
We denote the image
of $K$ in $G/\sigma $ by the same symbol.
Since $[\sigma c ]_\sigma = [c]_\sigma \in K$, the group $K$ contains $H'=\langle c\rangle$
and we can write
$H=\langle \tau \rangle \ltimes K=D_{16}$.
\begin {lemma}\label{normal groups}
There is no nontrivial normal subgroup $N$ in $H$ with
$N\cap H'=\{\mathrm{id}\}$.
\end {lemma}
\begin {proof}
If such a group exists, first consider the case $N \cap K =
\{\mathrm{id}\}$. Then $N \cong C_2$ and
$H= K \times N$ would be Abelian, a contradiction. If $N \cap K \neq
\{\mathrm{id}\}$ then $N \cap K = \langle (g \tau) ^k\rangle $ for some $k \in
\{1,2,4\}$. This implies $(g \tau )^4 =c^2 \in N$ and contradicts $N \cap H' = N
\cap \langle c \rangle = \{\mathrm{id}\}$.
\end{proof}
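The lemma can also be verified by a direct computation in a concrete model of $H \cong D_{16}$ (realized here, up to isomorphism, as the symmetry group of a regular octagon; this realization is an assumption made only for the check). The Python sketch below enumerates all subgroups, filters the normal ones, and confirms that every nontrivial normal subgroup meets the commutator subgroup:

```python
from itertools import product

n = 8
e = tuple(range(n))

def mul(a, b):                         # (a * b)(i) = a(b(i))
    return tuple(a[b[i]] for i in range(n))

def inv(a):
    return tuple(a.index(i) for i in range(n))

r = tuple((i + 1) % n for i in range(n))   # rotation of order 8
s = tuple((-i) % n for i in range(n))      # a reflection

def closure(gens):                     # subgroup generated by gens (finite group)
    H = {e} | set(gens)
    while True:
        new = {mul(a, b) for a in H for b in H}
        if new <= H:
            return H
        H |= new

G = closure([r, s])                    # the dihedral group of order 16
Hprime = closure([mul(r, r)])          # its commutator subgroup <r^2>, order 4

# every subgroup of a dihedral group is generated by at most two elements
subgroups = {frozenset(closure(p)) for p in product(sorted(G), repeat=2)}
normal = [N for N in subgroups
          if all(mul(mul(g, x), inv(g)) in N for g in (r, s) for x in N)]
bad = [N for N in normal if len(N) > 1 and N & Hprime == {e}]
print(len(G), len(Hprime), len(bad))   # 16 4 0
```

Here normality is tested by conjugating with the generators $r$ and $s$ only, which suffices in a finite group.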
The following observations rely strongly on the assumption that $\sigma h$ acts freely on $X$.
\begin {lemma}\label{free on B}
The subgroup $H'$ acts freely on the branch set $B = \pi(\mathrm{Fix}_X(\sigma))$
in $Y$.
\end {lemma}
\begin {proof}
If for some $b \in B$ the isotropy group $H'_b$ is nontrivial, then $c^2(b) = h(b)=b$ and
$\sigma h$ fixes the corresponding point $\tilde b\in X$.
\end {proof}
\begin {corollary}
The subgroup $H'$ acts freely on the set $\mathcal R$
of rational branch curves. In particular, the number of rational branch curves
$n$ is a
multiple of four.
\end {corollary}
\begin {corollary}\label{tau-fixed}
The subgroup $H'$ acts freely on the set of $\tau $-fixed
points in $Y$.
\end {corollary}
\begin {proof}
We show $\mathrm{Fix}_Y(\tau) \subset B$.
Since $\sigma = \tau ^2$ on $X$, a $\langle \tau \rangle$-orbit $\{x, \tau x,
\sigma x, \tau^3 x \}$ in $X$ gives rise to a $\tau$-fixed point $y$ in the
quotient $Y = X / \sigma$ if and only if $\sigma x = \tau x $. Therefore,
$\tau$-fixed points in $Y$ correspond to $\tau$-fixed points in $X$. By
definition $\mathrm{Fix}_X(\tau) \subset \mathrm{Fix}_X(\sigma)$ and the claim
follows.
\end {proof}
\section {$H$-minimal models of $Y$}\label {reduction to del Pezzo}
Since $\mathrm{Fix}_X(\sigma) \neq \emptyset$, the quotient surface $Y$ is a smooth rational $H$-surface to which we apply the equivariant minimal model program. We denote by $Y_\mathrm{min}$ an $H$-minimal model of $Y$. It is known that $Y_\mathrm{min}$ is either a Del Pezzo surface or an $H$-equivariant conic bundle over $\mathbb P_1$.
\begin {theorem}\label {no equivariant fibration}
An $H$-minimal model $Y_{\mathrm {min}}$ does not admit an $H$-equivariant $\mathbb P_1$-fibration. In particular, $Y_{\mathrm {min}}$ is a Del Pezzo surface.
\end {theorem}
In order to prove this statement we begin with the following general fact (cf. Proof of Lemma \ref{fixed points on mori fibers}).
\begin {lemma}\label{no increase}
If $Y\to Y_{\mathrm {min}}$ is an $H$-equivariant Mori reduction and $A$ a cyclic subgroup of $H$, then
\[
\vert \mathrm {Fix}_Y(A)\vert \geq
\vert \mathrm {Fix}_{Y_{\mathrm {min}}}(A)\vert \,.
\]
\end {lemma}
\begin {proof}
Each step of a Mori reduction is known to contract a disjoint union of (-1)-curves.
It is sufficient to prove the statement for one step
in a Mori reduction. If such a step
changes the
number of fixed points, then some Mori fiber $E$ of the reduction
is contracted to an $A$-fixed point.
The rational curve
$E$ is $A$-invariant and therefore contains two $A$-fixed points, so the number of fixed points drops under the contraction.
\end {proof}
Suppose that some $Y_\mathrm{min}$ is an $H$-equivariant conic bundle, i.e., there is an $H$-equivariant fibration
$p :Y_{\mathrm {min}}\to \mathbb P_1$ with generic fiber $\mathbb P _1$. We
let
$p _*:H\to \mathrm {Aut}(\mathbb P_1)$
be the associated homomorphism.
\begin {lemma}\label{ker p*}
$\mathrm {Ker}(p_*)\cap H'=\{\mathrm {id}\}\,.$
\end {lemma}
\begin {proof}
The elements of $\mathrm {Ker}(p_*)$ fix two points
in every generic $p $-fiber. If $h = c^2 \in H' = \langle c \rangle$ fixes
points in every generic $p$-fiber, then $h$ acts trivially on a one-dimensional subset $C \subset Y$. Since $h=c^2$ acts symplectically on $X$ it has only
isolated fixed points in $X$. Therefore, on the preimage $\tilde C = \pi^{-1}(C) \subset X$, the action of $h$ coincides with the action of $\sigma$. But then $\sigma h | _{\tilde C} = \mathrm{id}| _{\tilde C}$ contradicts the assumption that $\sigma h$ acts freely on $X$.
\end {proof}
\begin{proof}[Proof of Theorem \ref{no equivariant fibration}]
Since there are no nontrivial normal
subgroups in $H$ which have trivial intersection with
$H'$ (Lemma \ref{normal groups}), it follows from Lemma \ref{ker p*} that $\mathrm {Ker}(p_*)=
\{ \mathrm{id} \}$, i.e., the group $H$
acts effectively on the base.
We regard $H$ as the semidirect product
$H=\langle \tau \rangle \ltimes K$, where $K=C_8$
is described above. The group $H$ acts on the base as a dihedral group and therefore $\tau$ exchanges the $K$-fixed points. We will obtain a contradiction by
analyzing the $K$-actions on the fibers over its two fixed
points. Since $\tau $ exchanges these fibers, it is
enough to study the $K$-action on one of them which
we denote by $F$.
By Lemma \ref{singular fibers of conic bundle} there are two situations which we must consider:
\begin{enumerate}
\item
$F$ is a regular fiber of $Y_{\mathrm {min}}\to \mathbb P_1$.
\item
$F=F_1\cup F_2$ is the union of two (-1)-curves intersecting transversally in
one point.
\end{enumerate}
We study the fixed points
of $c$, $h=c^2$ and $g \tau $ in $Y_{\mathrm {min}}$. Recall that
in $X$ the symplectic transformation $c$ has precisely four fixed
points and $h$ has precisely eight fixed points. This set of eight
points is stabilized by the full centralizer of $h$, in particular by
$K = \langle g \tau \rangle \cong C_8$.
Since $h \sigma$ acts by assumption freely on $X$, it follows that
$\sigma$ acts freely on the set of $h$-fixed points in $X$. If $hy=y$ for some $y \in Y$, then the preimage of $y$ in $X$ consists of two elements $x_1,\sigma x_1=x_2$. If these form an $\langle h \rangle $-orbit, then both are $ \sigma h $-fixed, a contradiction. It follows that $\{x_1,x_2 \} \subset \mathrm{Fix}_X(h)$ and the number of $h$-fixed points in $Y$ is precisely four. In particular, $h$ acts effectively on any curve in $Y$.
Let us first consider
Case 2 where $F= F_1 \cup F_2$ is reducible.
Since $\langle c\rangle $ is the only subgroup
of index two in $K$, it follows that $\langle c \rangle $ stabilizes $F_i$ and both $c$ and $h$ have
three fixed points in $F$ (two on each irreducible component, one is the point of intersection $F_1 \cap F_2$), i.e., six fixed points on $F \cup \tau F \subset Y_\mathrm{min}$. This is contrary to Lemma \ref{no increase}
because $h$ has at most four fixed
points in $Y_{\mathrm {min}}$.
If $F$ is regular (Case 1),
then the cyclic group $K$ has two fixed points on the rational curve
$F$. Since $h \in K$, the four $K$-fixed points on $F \cup \tau F$ are
contained in the set of $h$-fixed points on $Y_\mathrm{min}$. As
$|\mathrm{Fix}_{Y_\mathrm{min}}(h)| \leq 4$, the $K$-fixed points
coincide with the four $h$-fixed points in $Y_\mathrm{min}$;
\[
\mathrm{Fix}_{Y_\mathrm{min}}(h)= \mathrm{Fix}_{Y_\mathrm{min}}(K).
\]
In particular, the Mori reduction does not affect the four $h$-fixed
points $\{y_1, \dots y_4\}$ in $Y$. By equivariance of the reduction,
the group $K$ acts trivially on this set of four points. Passing to
the double cover $X$, we conclude that the action of $g \tau \in K$ on
a preimage $\{x_i , \sigma x_i\}$ of $y_i$ is either trivial or
coincides with the action of $\sigma$. In both cases it follows that
$(g \tau )^2 = c\sigma$ acts trivially on the set of $h$-fixed points in $X$. As $\mathrm{Fix}_X(c) \subset \mathrm{Fix}_X(h)$, this is contrary to the fact that $\sigma$ acts freely on $\mathrm{Fix}_X(h)$.
\end{proof}
In the following we wish to identify the Del Pezzo surface $Y_\mathrm{min}$. For this, we use the Euler characteristic
formulas,
\[
24= e(X)= 2 e(Y)-2n + \underset{\text{if $D_g$ is present}}{\underbrace{2g-2}},
\]
where $D_g \subset B$ is of general type, $g = g(D_g) \geq 2$, and
\[
e(Y)= e(Y_{\mathrm{min}}) + m,
\]
where $m = | \mathcal E|$ denotes the total number of Mori fibers.
For convenience we introduce the difference
$\delta =m -n$.
If a branch curve $D_g$ of general type is present, then
$
13-g-\delta =e(Y_{\mathrm {min}})
$
and if it is not present
$
12-\delta =e(Y_{\mathrm {min}})
$.
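The bookkeeping leading to these two expressions is elementary and can be re-checked mechanically; the following Python sketch (an illustration only, with $n$, $m$, $g$ ranging over the values relevant below) solves the Euler characteristic formulas for $e(Y_\mathrm{min})$:

```python
for n in range(0, 9):                  # number of rational branch curves
    for m in range(0, 17):             # total number of Mori fibers
        delta = m - n
        for g in range(2, 10):         # branch curve D_g of general type present
            eY = 13 + n - g            # solves 24 = 2 e(Y) - 2n + (2g - 2)
            assert 24 == 2 * eY - 2 * n + (2 * g - 2)
            assert eY - m == 13 - g - delta      # e(Y_min) = e(Y) - m
        eY = 12 + n                    # no curve of general type: 24 = 2 e(Y) - 2n
        assert 24 == 2 * eY - 2 * n
        assert eY - m == 12 - delta
print("bookkeeping consistent")
```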
\begin {proposition}
For every Mori fiber $E$ the orbit $H.E$ consists
of at least four Mori fibers.
\end {proposition}
\begin {proof}
We need to distinguish three cases:
\[
\text{1.)}\,\, E\cap B \neq \emptyset\text{\ and\ } E \not\subset B; \quad\quad
\text{2.)}\,\, E \subset B; \quad\quad
\text{3.)}\,\, E \cap B = \emptyset
\]
{\bf Case 1 }
Since $H'$ acts freely on the branch curves
and $E$ meets $B$ in at most two points,
we know $\vert H'.E\vert \ge 2$.
If $\vert H.E\vert =2$, then the isotropy group
$H_E$ is a normal subgroup of index two which necessarily
contains the commutator group $H'$, a contradiction.
{\bf Case 2 }
We show that the $H'$-orbit of $E$ consists of four Mori fibers.
If it consisted of less than four Mori fibers, the stabilizer $H'_E \neq \{\mathrm{id}\}$ of $E$ in $H'$ would fix two points in $E \subset B$. This contradicts Lemma \ref{free on B}.
{\bf Case 3 }
All Mori fibers disjoint from $B$ have self-intersection (-2) and meet exactly one Mori fiber of the previous steps of the reduction in exactly one point.
If $E \cap B = \emptyset$ there is a chain of Mori fibers $E_1, \dots, E_k =E$ connecting $E$ and $B$ with the following properties:
The Mori fiber $E_1$ is the only one to have nonempty intersection with $B$ and is the first curve of this configuration to be blown down in the reduction process. The curves fulfil $(E_i, E_{i+1})=1$ for all $i \in \{1, \dots, k-1\}$ and $(E_i,E_j)=0$ for all $j\neq i+1$. The curves are blown down subsequently and meet no Mori fibers outside this chain.
The $H$-orbit of this union of Mori fibers consists of at least four copies of this chain. This is due to the fact that the $H$-orbit of $E_1$ consists of at least four Mori fibers by Case 1. In particular, the $H$-orbit of $E$ consists of at least four copies of $E$.
\end{proof}
\begin {corollary}
The difference $\delta$ is a non-negative multiple of four, $\delta = 4k$. If $\delta =0$, then $X$ is a double cover of $Y = Y_\mathrm{min} = \mathbb P_1 \times \mathbb P_1$ branched along a curve of genus nine.
\end {corollary}
\begin {proof}
Above we have shown that $m$ and $n$ are multiples of four. Therefore $\delta =4k$.
If $\delta$ were negative, i.e.,
$m < n$, there would be no configuration of Mori fibers meeting the rational branch curves such that the corresponding contractions transform the (-4)-curves in $Y$ to curves on a Del Pezzo surface $Y_\mathrm{min}$. It follows that $\delta$ is non-negative.
If $\delta= 0$, then $n= m=0$ and $Y$ is $H$-minimal. The commutator subgroup $H' \cong C_4$ acts freely on the branch locus $B$; the quotient $B/H'$ is then a smooth curve with even Euler characteristic, so $e(B) = 4\,e(B/H')$ is a multiple of eight and, since $n=0$ excludes rational components, $e(B)\in \{0,-8,-16, \dots \}$. Since the Euler characteristic of the Del Pezzo surface $Y$ is at least 3 and at most 11,
\[
6 \leq 2e(Y)= 24 + e(B) \leq 22,
\]
we only need to consider the case $e(Y)\in \{4,8\}$ and $B=D_{g}$ for $g \in \{9,5\}$.
The automorphism group of a Del Pezzo surface of degree 4 is $C_2^4 \rtimes \Gamma$ for $\Gamma \in \{C_2,C_4, S_3, D_{10} \}$. If $D_{16} < C_2^4 \rtimes \Gamma$ then $ A := D_{16} \cap C_2^4 \lhd D_{16}$ and $A$ is either trivial or isomorphic to $C_2$. In both cases $D_{16} / A$ is not a subgroup of $\Gamma$ in any of the cases listed above. Therefore, $e(Y) \neq 8$.
A Del Pezzo surface of degree 8 is either the blow-up of $ \mathbb P_2$ in one point or $\mathbb P_1 \times \mathbb P_1$. Since the first is never equivariantly minimal, it follows that $Y \cong \mathbb P_1 \times \mathbb P_1$ and $g(B)=9$.
\end {proof}
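The numerical part of the proof can be summarized in a few lines of Python (again only a sanity check): with $e(B)$ a non-positive multiple of eight and $3 \leq e(Y) \leq 11$, the relation $2e(Y) = 24 + e(B)$ leaves exactly the two cases treated above:

```python
cases = []
for eB in range(0, -40, -8):           # e(B) in {0, -8, -16, ...}
    eY, rem = divmod(24 + eB, 2)       # from 2 e(Y) = 24 + e(B)
    if rem == 0 and 3 <= eY <= 11:     # Euler characteristic of a Del Pezzo surface
        g = (2 - eB) // 2              # e(B) = 2 - 2g for a single curve D_g
        cases.append((eY, g))
print(cases)                           # [(8, 5), (4, 9)]
```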
\begin {theorem}
Any $H$-minimal model $Y_\mathrm{min}$ of $Y$ is
$\mathbb P_1\times \mathbb P_1$.
\end {theorem}
\begin {proof}
Suppose $\delta \neq 0$.
Since $\delta \ge 4$, it follows that $e(Y_\mathrm{min})=13-g-\delta \le 7$ if a branch curve $D_g$ of general type is present, and $e(Y_\mathrm{min})=12-\delta \le 8$
if not. We go through the list
of Del Pezzo surfaces with $e(Y_\mathrm{min}) \leq 8$.
\begin {itemize}
\item
If $e(Y_{\mathrm {min}})=8$, i.e., $\mathrm{deg}(Y_\mathrm{min}) = 4$, then the possible automorphism groups are very limited and we have already noted above that
$D_{16}$ does not occur.
\item
If $e(Y_{\mathrm {min}})=7$, then
$\mathrm {Aut}(Y_{\mathrm {min}})=S_5$. Since $120$ is not
divisible by $16$, we see that a Del Pezzo surface of degree five does not admit an effective action of the group $H$.
\item
If $e(Y_{\mathrm {min}})=6$, then
$A:=\mathrm {Aut}(Y_{\mathrm {min}})=
(\mathbb C^*)^2\rtimes (S_3\times C_2)$.
We denote by $A^\circ \cong (\mathbb C^*)^2$ the connected component of $A$.
If $q :A\to A/A^\circ$ is the canonical quotient homomorphism
then $q (H') < q(A)'\cong C_3$. Consequently
$H'=C_4 < A^\circ$. We may realize $Y_{\mathrm {min}}$ as $\mathbb P_2$ blown up at the three
corner points and $A^\circ$ as the space of diagonal
matrices in $\mathrm {SL}_3(\mathbb C)$.
Every possible representation of $C_4$ in this group
has ineffectivity along one of the lines joining corner points.
But, as we have seen before, the elements of $H'$, in particular $c^2 = h$, have only isolated fixed points
in $Y_{\mathrm {min}}$.
\item
A Del Pezzo surface obtained by blowing up one or two points in $\mathbb P_2$ is never $H$-minimal and therefore does not occur.
\item
Finally, $Y_{\mathrm {min}}\not=\mathbb P_2$:
If $e(Y_\mathrm{min}) =3$ then either $\delta =9$ (if $D_g$ is not present), a contradiction to $\delta = 4k$, or $g + \delta =10$. In the latter case, $\delta =4,8$ forces $g= 6,2$. In both cases, the Euler characteristic $2-2g$ of $D_g$ is not divisible by 4. This contradicts the fact that $H'$ acts freely on $D_g$.
\end {itemize}
We have hereby excluded all possible Del Pezzo surfaces except $\mathbb P_1\times \mathbb P_1$ and the theorem follows.
\end {proof}
\section{Branch curves and Mori fibers}
We let $M : Y \to Y_\mathrm{min} = \mathbb P_1 \times \mathbb P_1$ denote an $H$-equivariant Mori reduction of $Y$.
\begin{lemma}
The length of an orbit of Mori fibers is at least eight.
\end{lemma}
\begin{proof}
Consider the action of $H$ on $\mathbb P_1 \times \mathbb P_1$.
Both canonical projections are equivariant with respect to the commutator subgroup $H'= \langle c \rangle \cong C_4$. Since $c^2 \in H'$ does not act trivially on any curve in $Y$ or $Y_\mathrm{min}$, it follows that $H'$ has precisely four fixed points in $Y_\mathrm{min} =\mathbb P_1 \times \mathbb P_1$.
Since $h = c^2$ has precisely four fixed points in $Y$ and $\mathrm{Fix}_Y (H') = \mathrm{Fix}_Y (c) \subset \mathrm{Fix}_Y (c^2)$, we conclude that $H'$ has precisely four fixed points in $Y$ and it follows that the Mori fibers do not pass through $H'$-fixed points. Note that the $H'$-fixed points in $Y$ coincide with the $h$-fixed points.
Suppose there is an $H$-orbit $H.E$ of Mori fibers of length strictly less than eight and let $p = M(E)$. We obtain an $H$-orbit $H.p$ in $\mathbb P_1 \times \mathbb P_1$ with $|H.p| \leq 4$. Now $| K.p| \leq 4$ implies that $K_p \neq \{\mathrm{id}\}$, in particular, $h= c^2 \in K_p$. It follows that $p$ is an $h$-fixed point. This contradicts the fact that the Mori fibers do not pass through fixed points of $h$.
\end{proof}
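The step from $|K.p| \leq 4$ to $h \in K_p$ uses only that $C_8$ has a unique element of order two lying in every nontrivial subgroup; this can be checked in additive notation, where $K \cong \mathbb Z/8$ and $h$ corresponds to the residue $4$:

```python
# all subgroups of Z/8 are generated by a single residue d
subgroups = [{(k * d) % 8 for k in range(8)} for d in range(8)]
assert all(4 in S for S in subgroups if len(S) > 1)
print("every nontrivial subgroup of C8 contains the order-two element")
```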
\begin{corollary}
The total number $m$ of Mori fibers equals 0, 8, or 16.
\end{corollary}
\begin{proof}
A total number of 24 or more Mori fibers would require 16 rational curves in $B$. This contradicts the bound for the number of connected components of the fixed point set of an antisymplectic involution on a K3-surface (cf. Corollary \ref{atmostten}).
\end{proof}
Recalling that the number of rational branch curves is a multiple of four, i.e., $n \in \{0,4,8\}$ and
using the fact $m \in \{0,8,16\}$ along with $m \leq n+9$, we conclude that the surface $Y$ is of one of the following types.
\begin{enumerate}
\item
$m=0$\\
The quotient surface $Y$ is $H$-minimal. The map $X \to Y \cong \mathbb P_1 \times \mathbb P_1$ is branched along a single curve $B$. This curve $B$ is a smooth $H$-invariant curve of bidegree $(4,4)$.
\item
$m=8$ and $e(Y) = 12$\\
The surface $Y$ is the blow-up of $\mathbb P_1 \times \mathbb P_1$ in an $H$-orbit consisting of eight points.
\begin{enumerate}
\item
If the branch locus $B$ of $X \to Y$ contains no rational curves, then $e(B)=0$ and $B$ is either an elliptic curve or the union of two elliptic curves defining an elliptic fibration on $X$.
\item
If the branch locus $B$ of $X \to Y$ contains rational curves, their number is exactly four (Observe that eight or more rational branch curves of self-intersection (-4) cannot be modified sufficiently and mapped to curves on a Del Pezzo surface by contracting eight Mori fibers). It follows that the branch locus is the disjoint union of an invariant curve of higher genus and four rational curves.
\end{enumerate}
\item
$m=16$ and $e(Y) =20$\\
The map $X \to Y$ is branched along eight disjoint rational curves.
\end{enumerate}
We can simplify the above situation by studying rational curves in $B$, their intersection with Mori fibers and their images in $\mathbb P_1 \times \mathbb P_1$.
\begin{proposition}
If $e(Y)=12$, then $n=0$.
\end{proposition}
\begin{proof}
Suppose $n \neq 0$ and let $C_i \subset Y$ be a rational branch curve. Since $C_i^2 =-4$ and $M(C_i) \subset \mathbb P_1 \times \mathbb P_1$ has self-intersection $\geq 0$, it must meet the union of Mori fibers $\bigcup E_j$.
All possible configurations of Mori fibers yield image curves $M(C_i)$ of self-inter\-sec\-tion $\leq 4$. If $M(C_i)$ is a curve of bidegree $(a,b)$, then, by adjunction,
\[
2g(M(C_i)) -2 = (M(C_i))^2 + (M(C_i) \cdot K_{\mathbb P_1 \times \mathbb P_1} )= 2ab -2a-2b,
\]
and $(M(C_i))^2 = 2ab \leq 4$ implies
that $g(M(C_i))=0$. In particular, $M(C_i)$ must be nonsingular. Hence each Mori fiber meets $C_i$ in at most one point. It follows that $C_i$ meets four Mori fibers, each in one point, and $(M(C_i))^2 =0$.
Curves of self-intersection zero in $\mathbb P_1 \times \mathbb P_1$ are fibers of the canonical projections $\mathbb P_1 \times \mathbb P_1 \to \mathbb P_1$.
The curve $C_1$ meets four Mori fibers $E_1, \dots E_4$ and each of these Mori fibers meets some $C_i$ for $i \neq 1$. After renumbering, we may assume that $E_1$ and $E_2$ meet $C_2$ and therefore $M(C_1)$ and $M(C_2)$ meet in more than one point, a contradiction. It follows that $e(Y) = 12$ implies $n=0$.
\end{proof}
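The genus computation via adjunction can be tabulated: with $g = (a-1)(b-1)$, the constraint $(M(C_i))^2 = 2ab \leq 4$ admits only genus-zero bidegrees. A short check (illustration only):

```python
# bidegrees (a,b) with a,b >= 1 and self-intersection 2ab <= 4 on P1 x P1
cand = [(a, b) for a in range(1, 10) for b in range(1, 10) if 2 * a * b <= 4]
genera = {(a, b): (a - 1) * (b - 1) for (a, b) in cand}   # 2g - 2 = 2ab - 2a - 2b
print(genera)
```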
\begin{proposition}
If $e(Y)=20$, then $Y$ is the blow-up of $\mathbb P_1 \times \mathbb P_1$ in sixteen points
\[
\{p_1, \dots p_{16}\} = (F_1 \cup F_2 \cup F_3 \cup F_4) \cap (F_5 \cup F_6 \cup F_7 \cup F_8),
\]
where $F_1, \dots F_4$ are fibers of the canonical projection $\pi_1$ and $F_5, \dots F_8$ are fibers of $\pi_2$. The branch locus is given by the proper transform of $\bigcup F_i$ in $Y$.
\end{proposition}
\begin{proof}
We denote the eight rational branch curves by $C_1, \dots C_8$. The Mori reduction can have two steps. A slightly more involved study of possible configurations of Mori fibers shows that $0 \leq (M(C_i))^2 \leq 4$.
As above $M(C_i)$ is seen to be nonsingular and each Mori fiber can meet $C_i$ in at most one point. Any configuration of curves with this property yields $(M(C_i))^2=0$ and $F_i = M(C_i)$ is a fiber of a canonical projection $\mathbb P_1 \times \mathbb P_1 \to \mathbb P_1$.
If there are Mori fibers disjoint from $B$, these are blown down in the second step of the Mori reduction. Let $E_1, \dots, E_8$ denote the Mori fibers of the first step and $\tilde E_1, \dots, \tilde E_8$ those of the second step. We label them such that $\tilde E_i$ meets $E_i$. Each curve $E_i$ meets two rational branch curves $C_i$ and $C_{i+4}$, and their images $F_i = M(C_i)$ and $F_{i+4}=M(C_{i+4})$ meet with multiplicity $\geq 2$. This is contrary to the fact that they are fibers of the canonical projections. It follows that there are no Mori fibers disjoint from $B$ and all 16 Mori fibers are contracted simultaneously. There is precisely one possible configuration of Mori fibers on $Y$ such that all rational branch curves are mapped to fibers of the canonical projections of $\mathbb P_1 \times \mathbb P_1$: the curves $C_1, \dots C_4$ are mapped to fibers of $\pi_1$ and $C_5, \dots, C_8$ are mapped to fibers of $\pi_2$. The Mori reduction contracts 16 curves to the 16 points of intersection $\{p_1, \dots p_{16} \} = (\bigcup_{i=1}^4 F_i) \cap(\bigcup_{i=5}^8 F_i) \subset \mathbb P_1 \times \mathbb P_1$.
\end{proof}
We now restrict our attention to the case where the branch locus $B$ is the union of two linearly equivalent elliptic curves, and show that this case does not occur.
\subsection{Two elliptic branch curves}
In this section we prove:
\begin{theorem}\label{two elliptic branch curves}
$\mathrm{Fix}_X(\sigma)$ is not the union of two elliptic curves.
\end{theorem}
We assume the contrary, let $\mathrm{Fix}_X(\sigma) = D_1 \cup D_2$ with $D_i$ elliptic and let
$f :X\to \mathbb P_1$ denote the elliptic fibration
defined by the curves $D_1$ and $D_2$.
Recall that $\sigma $ acts effectively on the base $\mathbb P_1$ as otherwise $\sigma$ would act trivially in a neighbourhood of $D_i$ by a linearization argument (cf. Theorem \ref{FixSigma}).
It follows that the group of order
four generated by $\tau $ acts effectively on $\mathbb P_1$.
Let $I$ be the ineffectivity of the induced $G$-action on the base
$\mathbb P_1$. We regard
$G=C_4\ltimes D_8$ where $C_4=\langle \tau \rangle$ and $D_8$ is the centralizer of $\sigma$ in $A_6$ (cf. Section \ref{centralizer}) and define $J:=I\cap D_8$. First, note that $I$ is nontrivial:
\begin {lemma}
The group $G$ does not act effectively on $\mathbb P_1$, i.e., $I \neq \{ \mathrm{id}\}$.
\end {lemma}
\begin {proof}
If $G$ acts effectively on $\mathbb P_1$, then $G$ is among the groups specified in Remark \ref{autP_1}. In our special case $|G| = 32$ and $G$ would have to be cyclic or dihedral.
Since the group $G$ does not contain a cyclic group of order 16, this is a contradiction.
\end {proof}
\begin {lemma}
The intersection $J=I\cap D_8$ is nontrivial.
\end {lemma}
\begin{proof}
Assume the contrary and let $J = I \cap D_8 = \{e\}$. We consider the quotient $G \to G/D_8 \cong C_4$ and see that either $I \cong C_2$ or $I \cong C_4$.
\begin{itemize}
\item
If $I\cap D_8=\{e\}$ and $I\cong C_2$, we write $I=\langle \sigma \xi\rangle $ with $\xi \in D_8$
an element of order two. Now $I$ is normal if and only if $\xi =h$, i.e., $I=\langle \sigma h\rangle$.
In this case, since $\sigma h \notin K$, the image of $K$ in $G/I$ is a normal subgroup of
index two and one checks that $G/I\cong D_{16}$. The group $K$ is mapped
injectively into $G/I$. The equivalence
relation defining this quotient identifies $\sigma $ and $h$ and
both are in the image of $K$. So $h$-fixed points in $X$ must lie in the fibers over the $\sigma$-fixed
points in $\mathbb P_1$, i.e., the $\sigma $-fixed point sets $D_1, D_2$.
Since $\sigma h$ acts freely on $X$, this is a contradiction.
\item
If $I\cap D_8=\{e\}$ and $I\cong C_4$ we write
$I =\langle \tau \xi\rangle$ and
show that for no choice of $\xi$ the group $I =\langle \tau \xi\rangle$ is normal in $G$:
If $\xi =c^kg$, then $\langle \tau \xi\rangle =K$ is of order eight.
If $\xi =c^k$, then $\langle \tau \xi\rangle$ is of order
four and has trivial intersection with $D_8$. It is however not
normalized by $g$.
\end{itemize}
As we obtain contradictions in all cases, we see that the intersection $J=I\cap D_8$ is nontrivial.
\end{proof}
In the following, we consider the different possibilities for the order of $J$ and show that in fact none of these occur.
If $|J|=8$ then $D_8 \subset I$. Recall that any automorphism group of an elliptic curve splits into an Abelian part acting freely and a cyclic part fixing a point. Since $D_8$ is not Abelian, any $D_8$-action on the fibers of $f$ must have points with nontrivial isotropy and gives rise to a positive-dimensional fixed point set of some subgroup of $D_8$ on $X$ contradicting the fact that $D_8$ acts symplectically on $X$.
It follows that the maximal possible order of $J$ is four.
\begin{lemma}\label{I does not contain c}
The ineffectivity $I$ does not contain $\langle c\rangle$.
\end{lemma}
\begin{proof}
Assume the contrary and consider the fixed points of $c^2$. If a $c^2$-fixed point lies at a smooth point of a fiber of $f$, then the linearization of the $c^2$-action at this fixed point gives rise to a positive-dimensional fixed
point set in $X$ and yields a contradiction.
It follows that
the fixed points of $c^2$ are contained
in the singular $f $-fibers.
Since $\langle \tau \rangle $ normalizes $\langle c\rangle$ and the $\langle \tau \rangle $-orbit of a singular
fiber consists of four such fibers, we must only consider two cases:
\begin {enumerate}
\item
The eight $c^2$-fixed points are contained in four singular
fibers (one $\langle \tau \rangle $-orbit of fibers), each of these fibers contains two $c^2$-fixed points.
\item
The eight $c^2$-fixed points are contained in eight singular fibers
(two $\langle \tau \rangle$-orbits).
\end {enumerate}
Note that $\langle c^2 \rangle$ is normal in $I$
and therefore $I$ acts on the set of
$\langle c^2\rangle $-fixed points.
In the second case, all eight $c^2$-fixed points are
also $c$-fixed. This is contrary to $c$ having only four fixed
points and therefore the second case does not occur.
The first case does not occur for similar reasons: If
$c^2$ has exactly two fixed points $x_1$ and $x_2$ in some
fiber $F$, then $\langle c\rangle $ either acts transitively
on $\{x_1,x_2\}$ or fixes both points. Since $\mathrm{Fix}_X(c) \subset \mathrm{Fix}_X(c^2)$ and $\langle c\rangle $
must have exactly one fixed point on $F$, this is impossible.
\end {proof}
\begin {corollary}
$|J| \neq 4$.
\end {corollary}
\begin {proof}
Assume $|J| = 4$. Using $\tau$ we check that no subgroup of $D_8$ isomorphic to $C_2 \times C_2$ is normal in $G$. It follows that the group $\langle c\rangle$ is the only order four subgroup of $D_8$ which is normal in $G$ and therefore $J = \langle c\rangle$. By the lemma above this is however impossible.
\end {proof}
It remains to consider the case where $|J|=2$.
The only normal subgroup
of order two in $D_8$ is $J=\langle h\rangle$.
\begin {lemma}
If $|J|=2$, then $I=\langle \sigma c\rangle$.
\end {lemma}
\begin {proof}
We first show that $|J|=2$ implies $|I|=4$:
If $|I|=2$, then $ I = \langle h \rangle$ and $G / I = C_4 \ltimes (C_2\times C_2)$. Since this group does not act effectively on $\mathbb P_1$, this is a contradiction.
If $|I| \geq 8$, then $G/I$ is Abelian and therefore $I$ contains the commutator subgroup $G' = \langle c \rangle$. This contradicts Lemma \ref{I does not contain c}.
It follows that $|I|=4$ and either $I \cong C_4$ or $I \cong C_2 \times C_2$. In the latter case, the only possible choice is $I = \langle \sigma \rangle \times \langle h \rangle$, which contradicts the fact that $ \sigma$ acts effectively on the base.
It follows that
$I=\langle \sigma \xi \rangle$, where $\xi ^2=h$
and therefore $\xi =c$.
\end {proof}
Let us now consider the action of $G$ on $X$ with
$I=\langle \sigma c\rangle $.
Recall that
the cyclic group $\langle \tau \rangle$ acts effectively on the base
and has two fixed points there. Since $\sigma =\tau ^2$, these
are precisely the two $\sigma $-fixed points. In particular,
$\langle \tau \rangle$ stabilizes both $\sigma $-fixed point
curves $D_1$ and $D_2$ in $X$.
Furthermore, the transformations $\sigma c$ and $c$ stabilize $D_i$ for $i =1,2$.
Since the only fixed points of $c$ in $\mathbb P_1$ are the
images of $D_1$ and $D_2$,
$$
\mathrm {Fix}_X(c)\subset D_1\cup D_2=\mathrm {Fix}_X(\sigma).
$$
On the other hand, we know
that $\mathrm {Fix}_X(c)\cap \mathrm {Fix}_X(\sigma )=\emptyset$.
Thus $I=\langle \sigma c\rangle $ is not possible
and the case $|J|=2$ does not occur.
We have hereby eliminated all possibilities for $|J|$ and completed the proof of Theorem \ref{two elliptic branch curves}.
\section{Rough classification of $X$}
We summarize the observations of the previous section in the following classification result.
\begin{theorem}\label{roughclassiA6}
Let $X$ be a K3-surface with an effective action of the group $G$ such that $\mathrm{Fix}_X(h\sigma) = \emptyset$. Then $X$ is one of the following types:
\begin{enumerate}
\item
a double cover of $\mathbb P_1 \times \mathbb P_1$ branched along a smooth $H$-invariant curve of bidegree (4,4).
\item
a double cover of a blow-up of $\mathbb P_1 \times \mathbb P_1$ in eight points and branched along a smooth elliptic curve $B$. The image of $B$ in $\mathbb P_1 \times \mathbb P_1$ has bidegree (4,4) and eight singular points.
\item
a double cover of a blow-up $Y$ of $\mathbb P_1 \times \mathbb P_1$ in sixteen points $\{p_1, \dots p_{16}\} = (\bigcup_{i=1}^4 F_i) \cap(\bigcup_{i=5}^8 F_i)$, where $F_1, \dots F_4$ are fibers of the canonical projection $\pi_1$ and $F_5, \dots F_8$ are fibers of $\pi_2$. The branch locus is given by the proper transform of $\bigcup F_i$ in $Y$. The set $\bigcup F_i$ is an invariant reducible subvariety of bidegree (4,4).
\end{enumerate}
\end{theorem}
\begin{proof}
It remains to consider case 2 and show that the image of $B$ in $\mathbb P_1 \times \mathbb P_1$ has bidegree (4,4) and eight singular points.
We prove that each Mori fiber $E$ meets the branch locus $B$ either in two points or once with multiplicity two, i.e., we need to check that $E$ cannot meet $B$ transversally in exactly one point.
If this were the case, the image $M(B)$ of the branch curve would be a smooth $H$-invariant curve of bidegree $(2,2)$.
The double cover $X'$ of $\mathbb P_1 \times \mathbb P_1$ branched along the smooth curve $M(B)= C_{(2,2)}$ is a smooth surface. Since $X$ is K3 and therefore minimal, the induced birational map $X \to X'$ is an isomorphism. This is a contradiction since $X'$ is not a K3-surface.
As each Mori fiber meets $B$ with multiplicity two, the self-intersection number of $M(B)$ is 32 and $M(B)$ is a curve of bidegree (4,4) with eight singular points. These singularities are either nodes or cusps depending on the kind of intersection of $E$ and $B$. We obtain a diagram
\[
\xymatrix{
X_\mathrm {sing}\ar[d]^{2:1} & X \ar[d]^{2:1} \ar[l]^{\text{desing.}}\\
C_{(4,4)} \subset \mathbb P_1 \times \mathbb P_1 & \ar[l]_>>>>>>{M} Y \supset B}
\]
\end{proof}
In order to obtain a description of possible branch curves, we study the action of $H$ on $\mathbb P_1 \times \mathbb P_1$ and its invariants.
\subsection{The action of $H$ on $\mathbb P_1 \times \mathbb P_1$}
Recall that we consider the dihedral group $H \cong D_{16}$ generated by $\tau g$ of order eight and $\tau$. For convenience, we recall the group structure of $H$:
\begin {align*}
c= ( g \tau )^2, & \quad \tau g \tau = gc,\\
g^2 = \mathrm{id}, & \quad \tau c \tau = c^3,\\
c^4 = \mathrm{id}, & \quad \tau ^2 = \mathrm{id}.
\end {align*}
In this section, we prove:
\begin{proposition}
In appropriately chosen coordinates the action of $H$ on $\mathbb P_1\times \mathbb P_1$ is given by
\begin {itemize}
\item
$
c([z_0:z_1],[w_0:w_1])=
([iz_0:z_1],[-iw_0:w_1])
$
\item
$
\tau ([z_0:z_1],[w_0:w_1])=
([z_1:z_0],[iw_1:w_0])
$
\item
$
g([z_0:z_1],[w_0:w_1])=
([w_0:w_1],[z_0:z_1])\,.
$
\end {itemize}
\end{proposition}
\begin{proof}[Sketch of proof]
First note that the index two subgroup $H_1$ of $H$ preserving the canonical projections is generated by $\tau$ and $c$, i.e.,
$H_1 = \langle \tau \rangle \ltimes \langle c \rangle \cong D_8$. We begin by choosing coordinates such that
$$
c([z_0:z_1],[w_0:w_1])=
([\chi _1(c)z_0:z_1],[\chi _2(c)w_0:w_1])
$$
where $\chi _i : H' \to S^1$ are faithful characters of $H' = \langle c \rangle$. Since $\tau$ acts transitively on the set of $H'$-fixed points, we conclude that after an appropriate change of coordinates not affecting the $H'$-action
$$
\tau ([z_0:z_1],[w_0:w_1])=
([z_1:z_0],[w_1:w_0]).
$$
The automorphism $g$ permutes the factors of $\mathbb P_1 \times \mathbb P_1$, stabilizes the fixed point set of $H'$ and fulfills $gcg^{-1}= c^3$ and $g \tau g^{-1} = c\tau$. Therefore, one finds
\begin {itemize}
\item
$
c([z_0:z_1],[w_0:w_1])=
([iz_0:z_1],[-iw_0:w_1])
$
\item
$
\tau ([z_0:z_1],[w_0:w_1])=
([z_1:z_0],[w_1:w_0])
$
\item
$
g([z_0:z_1],[w_0:w_1])=
([\lambda w_0:w_1],[\lambda ^{-1}z_0:z_1])\,,
$
where $\lambda ^2=i$.
\end {itemize}
We introduce a change of coordinates such that $g$ is of the simple form
\[
g([z_0:z_1],[w_0:w_1]) = ([w_0:w_1],[z_0:z_1]).
\]
This change does affect the shape of the $\tau$-action and yields the action of $H$ described in the proposition.
\end{proof}
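As a spot check (ours, not part of the original argument), the relation $\tau c \tau = c^3$ can be verified directly in these coordinates: applying $\tau$, then $c$, then $\tau$ to a point gives
\[
([z_0:z_1],[w_0:w_1]) \mapsto ([z_1:z_0],[iw_1:w_0]) \mapsto ([iz_1:z_0],[w_1:w_0]) \mapsto ([-iz_0:z_1],[iw_0:w_1]),
\]
which agrees with $c^3([z_0:z_1],[w_0:w_1]) = ([i^3z_0:z_1],[(-i)^3w_0:w_1]) = ([-iz_0:z_1],[iw_0:w_1])$.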
\subsection{Invariant curves of bidegree $(4,4)$}
Given the action of $H$ on $\mathbb P_1 \times \mathbb P_1$ discussed above, we wish to study the invariants and semi-invariants of bidegree $(4,4)$.
The space of $(a,b)$-bihomogeneous polynomials in $[z_0 : z_1][w_0 : w_1]$ is denoted by $\mathbb C_{(a,b)} ([z_0 : z_1][w_0 : w_1])$.
An invariant curve $C$ is given by a $D_{16}$-eigenvector $f \in \mathbb C_{(4,4)} ([z_0 : z_1][w_0 : w_1])$. The kernel of the $D_{16}$-representation on the line $\mathbb C f$ spanned by $f$ contains the commutator subgroup $H' = \langle c \rangle $. It follows that $f$ is a linear combination of $c$-invariant monomials of bidegree $(4,4)$. These are
\begin{align*}
z_0^4w_0^4,\,
z_0^4w_1^4, \,
z_1^4w_0^4, \,
z_1^4w_1^4, \,
z_0^2z_1^2w_0^2w_1^2, \,
z_0^3z_1w_0^3w_1, \,
z_0z_1^3w_0w_1^3.
\end{align*}
The polynomials
\begin{align*}
f_1 =z_0^4w_0^4 + z_1^4w_1^4, \quad
f_2 =z_0^4w_1^4 + z_1^4w_0^4, \quad
f_3 =z_0^3z_1w_0^3w_1 -i z_0z_1^3w_0w_1^3
\end{align*}
span the space of $D_{16}$-invariants. Semi-invariants are appropriate linear combinations of
\begin{align*}
g_1 =z_0^4w_0^4 - z_1^4w_1^4, \quad
g_2 =z_0^4w_1^4 - z_1^4w_0^4,\quad
g_3 =z_0^3z_1w_0^3w_1 +i z_0z_1^3w_0w_1^3,\quad
g_4 =z_0^2z_1^2w_0^2w_1^2.
\end{align*}
Note
{\begin{displaymath}
\begin{array}{llll}
\tau (g_1) = -g_1, & \tau (g_2) = -g_2, & \tau (g_3) = -g_3, & \tau (g_4) = -g_4, \\
g(g_1) = g_1, & g(g_2) = -g_2, & g(g_3) = g_3, & g(g_4) = g_4.
\end{array}
\end{displaymath}
}
It follows that a $D_{16}$-invariant curve of bidegree $(4,4)$ in $\mathbb P_1 \times \mathbb P_1$ is of one of the following three types:
\begin{align*}
C_a &= \{a_1 f_1 + a_2 f_2 + a_3 f_3 = 0\}, \\
C_b &= \{b_1 g_1 + b_3 g_3 + b_4 g_4 =0\}, \\
C_0 &= \{g_2 =0\}.
\end{align*}
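As an illustration of the invariance claims above (our verification, not in the original), substituting $\tau([z_0:z_1],[w_0:w_1]) = ([z_1:z_0],[iw_1:w_0])$ into $f_3$ gives
\[
f_3 \circ \tau = z_1^3 z_0 (iw_1)^3 w_0 - i\, z_1 z_0^3 (iw_1) w_0^3 = -i\,z_0z_1^3w_0w_1^3 + z_0^3z_1w_0^3w_1 = f_3,
\]
so $f_3$ is $\tau$-invariant; analogous substitutions show that $f_3$ is also invariant under $c$ and $g$.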
\subsection{Refining the classification of $X$}
Using the above description of invariant curves of bidegree (4,4) we may refine Theorem \ref{roughclassiA6}.
\subsubsection{Reducible curves of bidegree $(4,4)$}
\begin{theorem}
Let $X$ be a K3-surface with an effective action of the group $G$ such that $\mathrm{Fix}_X(h\sigma) = \emptyset$. If $e(X/\sigma) = 20$, then $X/\sigma$ is equivariantly isomorphic to the blow up of $\mathbb P_1 \times \mathbb P_1$ in the singular points of the curve $C = \{f_1-f_2=0\}$ and $X \to Y$ is branched along the proper transform of $C$ in $Y$.
\end{theorem}
\begin{proof}
It follows from Theorem \ref{roughclassiA6} that $X$ is the double cover of $\mathbb P_1 \times \mathbb P_1$ blown up in sixteen points. These sixteen points are the points of intersection of eight fibers of $\mathbb P_1 \times \mathbb P_1$, four from each of the two canonical fibrations.
By invariance these fibers lie over the base points $[1:1], [1:-1], [1: i], [1:-i]$, and the configuration of eight fibers is defined by the invariant polynomial $f_1-f_2$.
The double cover $X \to Y$ is branched along the proper transform of this configuration of eight rational curves. This proper transform is a disjoint union of eight rational curves in $Y$, each with self-intersection (-4).
\end{proof}
\subsubsection{Smooth curves of bidegree $(4,4)$}
\begin{theorem}
Let $X$ be a K3-surface with an effective action of the group $G$ such that $\mathrm{Fix}_X(h\sigma) = \emptyset$. If $X/\sigma \cong \mathbb P_1 \times \mathbb P_1$, then after a change of coordinates the branch locus is $C_a$ for some $a_1,a_2,a_3 \in \mathbb C$.
\end{theorem}
\begin{proof}
The surface $X$ is a double cover of $\mathbb P_1 \times \mathbb P_1$ branched along a smooth $H$-invariant curve of bidegree (4,4). The invariant (4,4)-curves $C_b$ and $C_0$ discussed above are seen to be singular at $([1:0],[1:0])$ or $([1:0],[0:1])$.
\end{proof}
Note that the general curve $C_a$ is smooth. We obtain a 2-dimensional family $\{C_a\}$ of smooth branch curves and a corresponding family of K3-surfaces $\{X_{C_a}\}$.
\subsubsection{Curves of bidegree $(4,4)$ with eight singular points}
It remains to consider case 2 of the classification.
Our aim is to find an example of a K3-surface $X$ such that $X/\sigma = Y $ has a nontrivial Mori reduction $M: Y \to \mathbb P_1 \times \mathbb P_1= Z$ contracting a single $H$-orbit of Mori fibers consisting of precisely 8 curves. In this case the branch locus $B \subset Y$ is mapped to a singular $(4,4)$-curve $C= M(B)$ in $Z$. The curve $C$ is irreducible and has precisely 8 singular points along a single $H$-orbit in $Z$.
As we have noted above, many of the curves $C_a,C_b,C_0$ are seen to be singular at $([1:0],[1:0])$ or $([1:0],[0:1])$. Since both points lie in $H$-orbits of length two, these curves are not candidates for our construction. This argument excludes the curves $C_b, C_0$ and $C_a$ if $a_1 = 0$ or $a_2 = 0$.
For $C_a$ with $a_3=0$ one checks that $C_a$ has singular points if and only if $a_1 = -a_2$, i.e., if $C_a$ is reducible. It therefore remains to consider curves $C_a$ where all coefficients $a_i \neq 0$. We choose $a_3=1$.
\begin{lemma}
If $a_i\neq 0$ for $i=1,2,3$, then $C_a$ is irreducible.
\end{lemma}
\begin{proof}[Sketch of proof]
First note that $C_a$ does not pass through $([1:0],[1:0])$ or $([1:0],[0:1])$. Therefore, possible singularities or points of intersection of irreducible components come in orbits of length eight.
Assume that $C_a$ is reducible, consider the decomposition into irreducible components and the $H$-action on it. A curve of type $(n,0)$ is always reducible for $n>1$ and therefore does not occur in the decomposition.
If $C_a$ contains a (2,2)-curve $C_a^{(2,2)}$, then the $H$-orbit of $C_a^{(2,2)}$ has length $\leq2 $ and $C_a^{(2,2)}$ is stable with respect to the subgroup $H' = \langle c \rangle $ of $H$. All $c$-semi-invariants of bidegree (2,2) are, however, reducible. Similarly, all $c$-semi-invariants of bidegree (1,2) or (2,1) are reducible and therefore $C_a$ does not have a curve of this type as an irreducible component.
The curve $C_a$ is not the union of a (1,3)- and a (3,1)-curve, since
their intersection number is 10, which contradicts invariance. Similarly one excludes the union of a (1,1)- and a (3,3)-curve.
If $C_a$ is a union of (1,1) or (1,0) and (0,1)-curves, one checks by direct computation that the requirement that $C_a$ is $H$-invariant gives strong restrictions and finds that in all cases at least one coefficient $a_i$ has to be zero.
\end{proof}
One possible choice of an orbit of length eight is given by the orbit through a $\tau$-fixed point $p_\tau = ([1:1],[\pm \sqrt{i}:1])$. One checks that $p_\tau \in C_a$ for any choice of $a_i$. However, if we want $C_a$ to be singular in $p_\tau$, then $a_2=0$. It then follows that $C_a$ is singular at points outside $H .p_\tau$. It has more than eight singular points and is therefore reducible.
All other orbits of length eight are given by orbits through $g$-fixed points $p_x= ([1:x],[1:x])$ for $x \neq 0$. One can choose coefficients $a_i(x)$ such that $C_{a(x)}$ is singular at $p_x$ if and only if $x^8 \neq 1$. If the curve $C_{a(x)}$ is irreducible, then it has precisely eight singular points $H.p_x$ of multiplicity 2 (cusps or nodes) and the double cover of $\mathbb P_1 \times \mathbb P_1$ branched along $C_{a(x)}$ is a singular K3-surface with precisely eight singular points. We obtain a diagram
\[
\xymatrix{
X_\mathrm {sing}\ar[d]^{2:1} & X \ar[d]^{2:1} \ar[l]^{\text{desing.}}\\
C_{(4,4)} \subset \mathbb P_1 \times \mathbb P_1 & \ar[l]^<<<<<{M} Y \supset B}
\]
If $p_x$ is a node in $C_{a(x)}$, then the corresponding singularity of $X_\mathrm{sing}$ is resolved by a single blow-up. The (-2)-curve in $X$ obtained from this desingularization is a double cover of a (-1)-curve in $Y$ meeting $B$ in two points.
If $p_x$ is a cusp in $C_{a(x)}$, then the corresponding singularity of $X_\mathrm{sing}$ is resolved by two blow-ups. The union of the two intersecting (-2)-curves in $X$ obtained from this desingularization is a double cover of a (-1)-curve in $Y$ tangent to $B$ in one point.
The information determining whether $p_x$ is a cusp or a node is encoded in the rank of the Hessian of the equation of $C_{a(x)}$ at $p_x$. The condition that this rank is one gives a nontrivial polynomial condition. For a general irreducible member of the family $\{C_{a(x)} \, | \, x\neq 0, \, x^8 \neq 1 \}$ the singularities of $C_{a(x)}$ are nodes.
We let $q$ be the polynomial in $x$ that vanishes if and only if the rank of the Hessian of $C_{a(x)}$ at $p_x$ is one. It has degree 24, but 16 of its solutions give rise to reducible curves $C_{a(x)}$. The remaining eight solutions give rise to four different irreducible curves. These are identified by the action of the normalizer of $H$ in $\mathrm{Aut}(\mathbb P_1 \times \mathbb P_1)$ and therefore define equivalent K3-surfaces.
We summarize the discussion in the following main classification theorem.
\begin{samepage}
\begin{theorem}\label{classiA6}
Let $X$ be a K3-surface with an effective action of the group $G$ such that $\mathrm{Fix}_X(h \sigma) = \emptyset$. Then $X$ is an element of one of the following families of K3-surfaces:
\begin{enumerate}
\item
the two-dimensional family $\{X_{C_a}\}$ for $C_a$ smooth,
\item
the one-dimensional family of minimal desingularizations of double covers of $\mathbb P_1 \times \mathbb P_1$ branched along curves in $\{C_{a(x)} \, | \, x\neq 0, \, x^8 \neq 1 \}$. The general curve $C_{a(x)}$ has precisely eight nodes along an $H$-orbit. Up to natural equivalence there is a unique curve $C_{a(x)}$ with eight cusps along an $H$-orbit.
\item
the trivial family consisting only of the minimal desingularization of the double cover of $\mathbb P_1 \times \mathbb P_1$ branched along the curve $C_a = \{f_1-f_2=0\}$ where $a_1 =1, a_2 =-1, a_3=0$.
\end{enumerate}
\end{theorem}
\end{samepage}
\begin{corollary}
Let $X$ be a K3-surface with an effective action of the group $\tilde A_6$. If $\mathrm{Fix}_X(h \sigma) = \emptyset$, then $X$ is an element of one of the families 1.--3. above. If $\mathrm{Fix}_X(h \sigma) \neq \emptyset$, then $X$ is $A_6$-equivariantly isomorphic to the Valentiner surface.
\end{corollary}
\section{Summary and outlook}
Recall that our starting point was the description of K3-surfaces with $\tilde A_6$-symmetry. Using the group structure of $\tilde A_6$ we have divided the problem into two possible cases corresponding to the question whether $\mathrm{Fix}_X(h\sigma)$ is empty or not. If it is nonempty, the K3-surface with $\tilde A_6$-symmetry is the Valentiner surface discussed in Section \ref{A6Valentiner}.
If it is empty, our discussion in the previous sections has reduced the problem to finding the $\tilde A_6$-surface in the families of surfaces $X_{C_a}$ with $D_{16}$-symmetry.
It is known that a K3-surface with $\tilde A_6$-symmetry has maximal Picard rank 20. This follows from a criterion due to Mukai (cf. \cite{mukai}) and is explicitly shown in \cite{KOZLeech}.
All surfaces $X_{C_a}$ for $C_a \subset \mathbb P_1 \times \mathbb P_1$ a (4,4)-curve are elliptic, since the natural fibrations of $\mathbb P_1 \times \mathbb P_1$ induce an elliptic fibration on the double cover (or its desingularization).
A possible approach for finding the $\tilde A_6$-example inside our families is to find those surfaces with maximal Picard number by studying the elliptic fibration.
It would be desirable to apply the following formula for the Picard rank of an elliptic surface $f: X \to \mathbb P_1$ with a section (cf. \cite{shiodainose}):
\[
\rho(X) = 2 + \mathrm{rank}(MW_f) + \sum_i (m_i-1)
\]
where the sum is taken over all singular fibers, $m_i$ denotes the number of irreducible components of the singular fiber and $\mathrm{rank}(MW_f)$ is the rank of the Mordell-Weil group of sections of $f$. The number two in the formula is the dimension of the hyperbolic lattice spanned by a general fiber and the section.
First, one has to ensure that the fibration under consideration has a section. One approach to finding sections is to consider the quotient $q:\mathbb P_1 \times \mathbb P_1 \to \mathbb P_2$ and the image of the curve $C_a$ inside $\mathbb P_2$. If we find an appropriate bitangent to $q(C_a)$ such that its preimage in $\mathbb P_1 \times \mathbb P_1$ is everywhere tangent to $C_a$, then its preimage in the double cover of $\mathbb P_1 \times \mathbb P_1$ is reducible and both its components define sections of the elliptic fibration. For $C_a$ the curve with eight nodes, the existence of a section (in fact two sections) follows from an application of the Pl\"ucker formula to the curve $q(C_a)$ with 3 cusps and its dual curve.
As a next step, one wishes to understand the singular fibers of the elliptic fibrations. Singular fibers occur whenever the branch curve $C_a$ intersects a fiber $F$ of $\mathbb P_1 \times \mathbb P_1$ in less than four points. Depending on the nature of the intersection $F \cap C_a$ one can describe the corresponding singular fiber of the elliptic fibration. For $C_a$ the curve with eight cusps one finds precisely eight singular fibers of type $I_3$, i.e., three rational curves forming a closed cycle. In particular, the contribution of all singular fibers $\sum_i (m_i-1)$ in the formula above is 16. In the case where $C_a$ is smooth or has eight nodes, this contribution is less.
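As a worked instance of the formula above (our inference from the stated numbers, not a claim of the original): for the curve with eight cusps the singular fibers contribute $\sum_i (m_i-1) = 8\,(3-1) = 16$, so
\[
\rho(X_{C_a}) = 2 + \mathrm{rank}(MW_f) + 16,
\]
and the maximal Picard rank $\rho = 20$ expected for the $\tilde A_6$-surface would then require $\mathrm{rank}(MW_f) = 2$.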
In order to determine the number $ \rho(X_{C_a})$ it is necessary either to understand the Mordell-Weil group and its rank $\mathrm{rank}(MW_f)$ or to find curves which give additional contributions to $\mathrm{Pic}(X_{C_a})$ not included in $2 + \sum_i (m_i-1)$.
In conclusion, the method of equivariant Mori reduction applied to quotients $X/\sigma$ yields an explicit description of families of K3-surfaces with $D_{16} \times \langle \sigma \rangle$-symmetry and, by construction, the K3-surface with $\tilde A_6$-symmetry is contained in one of these families. It remains to find criteria to characterize this particular surface inside this family. The possible approach of understanding the function
\[
a \mapsto \rho( X_{C_a})
\]
using the elliptic structure of $X_{C_a}$ requires a detailed analysis of the Mordell-Weil group.
Most models of galaxy formation in a hierarchical universe assume that the merger history of the surrounding dark matter halo plays an important role in determining the properties of a galaxy (e.g. \cite{wr78,wf91,baugh06} and references therein). Although halo merger histories can be measured using N-body simulations, these can be time consuming and computationally intensive \citep{millennium05}. This has fueled considerable study of the formation and merger histories of dark matter haloes from a Monte Carlo perspective. Monte Carlo merger trees have the advantage of being fast and one may easily probe mass regimes inaccessible to current N-body simulations. Moreover, unlike N-body experiments, the cosmology and initial conditions may be easily modified.
The excursion set framework \citep{betal91,lc93}, which is motivated by the pioneering work of \cite{ps74}, provides the basis for current models of halo assembly. Initially, this framework was based on the assumption that haloes form from a spherical collapse, of the type first described by \cite{gg72}. Fast algorithms for generating halo merger trees, in which haloes were assumed to form from a spherical collapse, were developed in the 1990s \citep{kw93,sk99,sl99,galform00} (see \cite{zmf08} for a review). However, spherical collapse overpredicts (underpredicts) the abundance of haloes in the low (high) mass regime. To address these issues, \cite{st99} extended the excursion set framework to include ellipsoidal collapse \citep{smt01,st02}. This clearly showed that merger-trees which assume spherical collapse are inadequate.
\cite{hp06} describe a merger tree algorithm which extends some of the older algorithms to incorporate aspects of the ellipsoidal collapse results. In addition, a number of new algorithms have recently been published \citep{pch07,eyal1}; although efficient and accurate, such methods side-step the idea of ellipsoidal collapse altogether. Moreover, these methods are calibrated to match N-body simulations, and are therefore limited by the accuracy and scope of these simulations.
The aim of the present work is to provide a merger history tree algorithm which is based explicitly on the excursion-set formalism with ellipsoidal collapse. The most significant difference between the algorithm we derive and all the others described above is that it takes discrete steps in mass rather than time. This feature allows us to study a few problems which are more difficult to address with the other methods.
After we completed this project, \cite{zmf08} presented an alternative algorithm with ellipsoidal collapse (and discrete-time snapshots). This algorithm generates progenitors across many mass bins and then assigns them to final haloes. In this sense, it is very similar to the \cite{kw93} merger tree, but solves many of its problems. Although this tree is quite different from ours, it also attempts to describe the merger history of a halo from the excursion-set formalism. These two approaches show that generating merger trees without tuning to N-body simulations is a quite non-trivial, yet interesting, challenge.
A review of background material and a description of our algorithm are given in Section~\ref{tree}. Section~\ref{nbody} compares our algorithm with excursion-set theory predictions, and with measurements in N-body simulations. The tests include the progenitor mass fractions and mass functions, the formation-redshift distribution, the mass distribution at formation, and the redshift distribution of the most recent major merger. Section~\ref{conclusions} summarises our findings and discusses possible applications of our algorithm, an outline of which is in Appendix~\ref{algorithm}.
\section{Merger trees in the excursion set
approach}\label{tree}
In the excursion set approach, the problem of estimating halo abundances is mapped to one of estimating the distribution of the number of steps a Brownian-motion random walk must take before it first crosses a barrier of specified height \citep{betal91}. In this approach, the height of the barrier plays a crucial role.
\subsection{Constant and moving barriers}
The Press-Schechter mass function is associated with barriers of constant height - such barriers arise naturally in models in which haloes form via spherical collapse. In contrast, in the ellipsoidal collapse model, barriers of the form
\begin{equation}
\label{ellbarrier}
B(S,\delta_{c}) =
\sqrt{q}\delta_{c}\,
\left\{ 1+\beta \left[\frac{S}{q \delta_{c}^{2}}\right]^{\gamma}\right\}.
\end{equation} are more natural. Here $\delta_{\rm c}$ is the overdensity required for spherical collapse - it is a monotonically increasing function of redshift, and it is given by $\delta_{\rm c0}/D(z)$, where $\delta_{\rm c0} \equiv \delta_{\rm c}(z=0) \sim 1.686$ and $D(z)$ is the linear growth factor. $S$ is a monotonically decreasing function of halo mass, given by:
\begin{equation}
S(m) \equiv \sigma^{2}(m)
= \frac{1}{2\pi^2}\,\int {\rm d}k\,k^2\, P(k)\,\tilde{W}^2(kR),
\label{sofm}
\end{equation}
where $P(k)$ is the initial power spectrum of density fluctuations, linearly evolved to the present time, $\tilde{W}$ is the Fourier transform of $W(x) = (3/x^3) (\sin x -x\cos x)$, $R = (3m/4\pi\bar\rho)^{1/3}$, and $\bar\rho$ is the comoving background density. At large $R$, the overdensity contained in the associated volume is practically zero. As $R$ (and $m(R)$) decreases, $S(R)$ increases, and $\delta_{R}$ executes a random walk. We refer the reader to \cite{lc93} for more details.
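The variance integral above is straightforward to evaluate numerically. The sketch below (Python) does so for a hypothetical power-law spectrum $P(k)=Ak^n$ rather than a tabulated CDM spectrum; the normalisation, slope and truncation of the oscillatory tail are illustrative choices, not values used in the text.

```python
import numpy as np
from scipy.integrate import quad

def tophat(x):
    """Fourier transform of the spherical top-hat window, W(x) = 3(sin x - x cos x)/x^3."""
    return 3.0 * (np.sin(x) - x * np.cos(x)) / x**3

def S_of_m(m, n=-1.5, A=1.0, rho_bar=1.0):
    """S(m) = (1/2 pi^2) Int dk k^2 P(k) W^2(kR), with R = (3m / 4 pi rho_bar)^(1/3).
    P(k) = A k^n is an illustrative power law, not a realistic CDM spectrum."""
    R = (3.0 * m / (4.0 * np.pi * rho_bar)) ** (1.0 / 3.0)
    kmax = 200.0 / R  # truncate the oscillatory tail; scaling with R preserves self-similarity
    val, _ = quad(lambda k: k**2 * A * k**n * tophat(k * R)**2,
                  0.0, kmax, limit=500)
    return val / (2.0 * np.pi**2)
```

For a power law, $S(m)\propto m^{-(n+3)/3}$, so $S$ is indeed a monotonically decreasing function of mass, as stated above.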
Consider a barrier $B(S,\delta_{\rm c}(z))$, as in equation~(\ref{ellbarrier}), and an ensemble of random walks which start from the origin: $(S,\delta)=(0,0)$. The excursion set approach maps the distribution of $S$, the number of steps a random walk must take to first cross such a barrier, to the fraction of mass in $m$-haloes at redshift $z$. This quantity is associated with the so-called unconditional mass function. The conditional mass function of high redshift progenitors of a more massive final $M$-halo at some lower redshift $Z$ is modeled using walks which start from $(S_M,\delta_{\rm c}(Z))$ instead.
The shape of the mass functions (unconditional and conditional) is determined by the shape of the barrier, encoded in the $(q,\beta,\gamma)$ parameters. The spherical collapse model is associated with $(q,\beta,\gamma)=(1,0,0)$, whereas ellipsoidal collapse has $(0.707,0.47,0.615)$. When $\gamma > 1/2$, not all walks are guaranteed to cross the ellipsoidal collapse barrier \citep{st02}. Moreover, the barriers associated with two different times may intersect; of course, this never happens for the spherical collapse barriers. Sheth \& Tormen suggest that this intersection of barriers may represent the possibility that haloes can fragment. This complicates the algorithm we describe below, so we instead study the limiting case of `square-root' barriers for which $\gamma = 1/2$:
\begin{equation}
\label{sqrtB}
B(S,\delta_{c}) = \sqrt{q}\delta_{c} + \beta\sqrt{S}.
\end{equation}
The predicted halo abundances associated with $(q,\beta,\gamma)=(0.55,0.5,0.5)$ are very similar to those in simulations \citep{mr05,ms08}. Moreover, the first crossing distribution of this barrier is known \citep{breiman}.
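The mapping between walks and first-crossing distributions is easy to check with a small Monte Carlo. In the sketch below (Python), the independent Gaussian steps in $S$ correspond to a sharp-$k$ filter; the walk number, step count and $S$ range are illustrative choices, not parameters from the text.

```python
import numpy as np

def first_crossing(n_walks, n_steps, S_max, delta_c, q=0.55, beta=0.5, seed=1):
    """Monte Carlo first crossings of the square-root barrier
    B(S) = sqrt(q)*delta_c + beta*sqrt(S) by random walks with independent
    Gaussian steps in S (a sharp-k filter assumption).
    Returns (crossing values of S, fraction of walks that crossed by S_max)."""
    rng = np.random.default_rng(seed)
    dS = S_max / n_steps
    S = dS * np.arange(1, n_steps + 1)
    steps = rng.normal(0.0, np.sqrt(dS), size=(n_walks, n_steps))
    walks = np.cumsum(steps, axis=1)
    B = np.sqrt(q) * delta_c + beta * np.sqrt(S)
    crossed = walks >= B
    first = np.argmax(crossed, axis=1)   # index of first True per walk
    ever = crossed.any(axis=1)
    return S[first[ever]], ever.mean()
```

With $\beta=0$ and $q=1$ this reduces to the constant barrier, for which the crossing fraction by $S_{\rm max}$ is ${\rm erfc}(\delta_{\rm c}/\sqrt{2S_{\rm max}})$ (up to a small discreteness bias); raising $\beta$ lowers the crossing fraction at fixed $S_{\rm max}$, since the barrier is higher everywhere.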
\subsection{A conditional scaling symmetry}\label{wsim}
Recall that the unconditional mass function is associated with the first crossing distribution of walks which start from the origin: $(S,\delta)=(0,0)$. When constant barriers are used, it can be expressed self-similarly as
\begin{equation}
f(m|z) {\rm d}m=f(S|\delta_{\rm c}) {\rm d}S=f(\nu) {\rm d} \nu,
\end{equation}
where $\nu\equiv\delta_{c}^{2}/S$. The conditional mass function of $m$-haloes at $z$ that end up in bound objects of mass $M>m$ at $Z<z$ is given by $f(m,z|M,Z){\rm d}m=f(S_m,\delta_1|S_M,\delta_0){\rm d}S_m$, where $\delta_1=\delta_{\rm c}(z)$ and $\delta_0=\delta_{\rm c}(Z)$. In other words, the conditional mass function is associated with the first crossing distribution of a barrier of height $\delta_1$ by random walks with origin at $(S_M,\delta_0)$. Because a straight line is straight whatever the origin of the coordinate system, the conditional mass function in the spherical collapse model has the same functional form as the unconditional one, provided one sets $\nu = (\delta_1 - \delta_0)^2/(S_m-S_M)$.
However, for the square-root barrier, a walk which starts from $(\sqrt{q}\,\delta_0 + \beta \sqrt{S_M},S_M)$ must cross a barrier of shape $\sqrt{q}(\delta_1-\delta_0) + \beta\sqrt{S_m-S_M + S_M}$. This is not quite of the same form as equation~(\ref{sqrtB}). As a result, the conditional mass function is not simply a rescaled version of the unconditional one. Rather, in this model,
\begin{equation}
f(m,z|M,Z)\,{\rm d}m = f(S_m/S_M|\eta)\,{\rm d}(S_m/S_M),
\end{equation}
where
\begin{equation}
\label{omega}
\eta \equiv \frac{\delta_1-\delta_0}{\sqrt{S_M}}.
\end{equation}
(see \cite{breiman} or \cite{gmst07} for the exact expressions). Thus, final haloes of different masses will have similar progenitor mass functions when expressed in terms of $S_m/S_M$, provided they have similar values of $\eta$. While this scaling is like that for the constant barrier model, in the square-root barrier model the progenitor mass function is {\em not} a function of the combination $\nu=\eta^2/(S_m/S_M - 1)$. This is interesting, because \cite{st02} have shown that the conditional mass function in simulations is not well-fit by a function of $\nu$. In what follows, we will present evidence that it is, however, a function of $\eta$ and $S_m/S_M$ separately, so the qualitatively different scaling associated with the square-root barrier is indeed seen in simulations.
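The $\eta$-scaling can be made concrete with conditional walks. In the sketch below (Python; the resolution and the outer limit $S_m/S_M\le 40$ are illustrative choices), walks start on the barrier at $(S_M,B(S_M,\delta_0))$ and the ratio $S_m/S_M$ at first crossing of the $\delta_1$ barrier is recorded. Because the problem is scale-free in $S/S_M$ at fixed $\eta$, two configurations with different $(S_M,\delta_1-\delta_0)$ but equal $\eta$ driven by a common random seed yield identical scaled histories, which makes the symmetry explicit.

```python
import numpy as np

def conditional_ratios(S_M, d0, d1, q=0.55, beta=0.5,
                       n_walks=3000, n_steps=1500, ratio_max=40.0, seed=2):
    """S_m/S_M at first crossing of B(S, d1) = sqrt(q)*d1 + beta*sqrt(S)
    for walks starting on the barrier at (S_M, B(S_M, d0))."""
    rng = np.random.default_rng(seed)
    s_max = (ratio_max - 1.0) * S_M
    ds = s_max / n_steps
    s = ds * np.arange(1, n_steps + 1)
    S = S_M + s
    start = np.sqrt(q) * d0 + beta * np.sqrt(S_M)
    walks = start + np.cumsum(rng.normal(0.0, np.sqrt(ds),
                                         (n_walks, n_steps)), axis=1)
    crossed = walks >= np.sqrt(q) * d1 + beta * np.sqrt(S)
    first = np.argmax(crossed, axis=1)
    ever = crossed.any(axis=1)
    # with a common seed, the dimensionless walks are identical for any
    # (S_M, d1-d0) pair sharing eta = (d1-d0)/sqrt(S_M)
    return S[first[ever]] / S_M
```

Comparing, say, $(S_M,\delta_1-\delta_0)=(1,0.66)$ and $(4,1.32)$, both with $\eta=0.66$, returns the same distribution of $S_m/S_M$.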
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{Fig1}
\caption{A random walk and its associated mass history. The dark filled circles represent the history of a halo of mass $M$ at redshift $z=0$. A merger $(m',m-m') \rightarrow m$ at redshift $z$ is depicted by the $S_{m} \rightarrow S_{m'}$ {\it jump} at height $\delta_{c}(z)$. A new branch associated with $(m-m')$ is connected at $(S_{m-m'},\delta_{c}(z))$. The light-filled circles denote the mass history of this object.}
\label{chist}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{Fig2}
\caption{The same random walk as in Figure~\ref{chist}, but now with square-root rather than constant barriers, illustrating that the mass accretion history depends on the barrier shape. In our algorithm, the new object with mass $(m-m')$ is now connected at $(S_{m-m'},\sqrt{q}\delta_{\rm c}(z)+\beta\sqrt{S_{m-m'}})$.}
\label{shist}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=0.9\hsize]{Fig3}
\caption{The progenitor mass fraction (left) and mass function (right) at redshifts $z=(0.5,1,2,3,5)$, for haloes of mass $M/M_{\star}=0.06$ at $z=0$. Filled circles show measurements in the GIF2 simulation, and open circles are from our square-root trees. The smooth solid and dashed curves show the exact square-root barrier solution, and the series approximation, respectively. The long-dashed curve shows the ellipsoidal collapse model with $\gamma>1/2$, and the short-dashed curve is the constant barrier prediction. Values of the scaling parameter $\eta$ (equation~\ref{omega}) are also shown (see Figure~\ref{wprog}).}
\label{prog1}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.9\hsize]{Fig4}
\caption{Same as Figure~\ref{prog1}, but with
$M/M_{\star}=0.6$.}
\label{prog3}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.9\hsize]{Fig5}
\caption{Same as Figure~\ref{prog1}, but
with $M/M_{\star}=6$.}
\label{prog5}
\end{figure*}
\subsection{Mass histories and merger trees}\label{masshist}
Figure~\ref{chist} illustrates how the mass growth history of an object is encoded in the excursion set approach if objects form from spherical collapse (also see Figure~1 of \cite{lc93}). The jagged line shows a random walk which starts from the origin: $(S,\delta)=(0,0)$. Imagine drawing a horizontal line with height $\delta_{\rm c0}=1.686$ and marking the {\it smallest} value of $S$ at which the walk intersects this `barrier' of constant height ($\delta_{\rm c0}$ corresponds to the present time and $\delta_{c} > \delta_{\rm c0}$ corresponds to higher redshifts). The dotted horizontal line denotes such a barrier. Then increase the height of this barrier, and record how this value of $S$ changes as $\delta$ increases. Such mass history points are depicted as dark filled circles in Figure~\ref{chist}. The dashed lines show that $S$ will occasionally jump from a small value to a larger one. Since $S$ is a proxy for mass, and $\delta_{\rm c}$ for time, such a jump is a proxy for an instantaneous change in mass: a merger. Note that the random walk steps under such jumps are not part of the mass history (e.g., the gray portion with $S_m < S < S_{m'}$).
The key to our merger tree algorithm, which is described in detail in Appendix~\ref{algorithm}, is to recognize that these jumps mean that there are a set of other walks which one might associate with this one -- one for each jump. One such walk is illustrated by the second jagged curve, which starts at about the middle of the panel. If the jump from $S_m$ to $S_{m'}$ occurred when the barrier height was $\delta_{\rm c}(z)$, then this other walk starts from $(S_{m-m'},\delta_{\rm c}(z))$. The `merger history' associated with this new {\it branch} is represented by the light-shade filled circles in Figure~\ref{chist}. For every such jump, a new random walk must be drawn. For each jump within each of those new walks, the same process applies -- more walks must be drawn. In summary, the bundle of such walks encodes the entire merger history of a present-day object. Notice that jumps can occur at any $z$ -- there is no constraint that they happen at discrete times. However, if one is interested in the mass function of progenitors at some fixed $z$, one simply reads off the list of values of $S$ at which this bundle of walks first cross $\delta_{\rm c}(z)$.
So far, we have discussed how to generate trees in the spherical collapse model. Figure~\ref{shist} shows the same walk as before, but now the mass growth history associated with the walk is given by its intersection with square-root barriers of gradually increasing height. This shows clearly that the jumps in mass, and the times at which they occur, are modified. But the overall logic remains the same. Each jump gives rise to a new walk that starts from $(S_{m-m'},B(S_{m-m'},\delta_{\rm c}(z)))$, where $B$ is given by equation~(\ref{sqrtB}).
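The bundle-of-walks construction can be sketched in code. The toy implementation below (Python) is not the algorithm of Appendix~\ref{algorithm}: it assumes white-noise initial conditions, so that $S(m)=1/m$ in arbitrary units, traces the mass history through the running maxima of $y=\delta-\beta\sqrt{S}$ (which mark the first crossings of square-root barriers of ever-increasing height $\sqrt{q}\,\delta_{\rm c}$), and spawns a new walk for the complementary mass of every jump, down to an illustrative resolution limit.

```python
import numpy as np

def mass_records(y, y_birth):
    """Indices where y = delta - beta*sqrt(S) sets a new running maximum above
    the birth level: these points trace the mass history of the branch."""
    idx, cur = [], y_birth
    for i, yi in enumerate(y):
        if yi > cur:
            idx.append(i)
            cur = yi
    return idx

def build_tree(M0, y0, m_dust, q=0.55, beta=0.5, n_steps=2000, seed=0):
    """Toy excursion-set tree for the square-root barrier, assuming white noise
    so that S(m) = 1/m. Each branch is (birth level y, S grid, y values)."""
    rng = np.random.default_rng(seed)
    S_dust = 1.0 / m_dust
    branches = []

    def grow(S_start, y_birth):
        ds = (S_dust - S_start) / n_steps
        S = S_start + ds * np.arange(1, n_steps + 1)
        delta0 = y_birth + beta * np.sqrt(S_start)   # the branch starts on the barrier
        delta = delta0 + np.cumsum(rng.normal(0.0, np.sqrt(ds), n_steps))
        y = delta - beta * np.sqrt(S)
        branches.append((y_birth, S, y))
        S_prev, y_prev = S_start, y_birth
        for k in mass_records(y, y_birth):
            m_new = 1.0 / S_prev - 1.0 / S[k]        # complementary mass of the jump
            if m_new > m_dust:                       # spawn a branch for it
                grow(1.0 / m_new, y_prev)
            S_prev, y_prev = S[k], y[k]

    grow(1.0 / M0, y0)
    return branches

def progenitor_masses(branches, level):
    """Masses of all resolved progenitors when the barrier is at y-level `level`."""
    out = []
    for y_birth, S, y in branches:
        if y_birth >= level:
            continue                                 # branch not yet born at this level
        hit = np.nonzero(y >= level)[0]
        if hit.size:                                 # else: below resolution (dust)
            out.append(1.0 / S[hit[0]])
    return out
```

By construction, the complementary masses of all jumps below a given barrier level, plus the mass of the surviving branch, telescope to the final mass, so the resolved progenitor masses at any level sum to at most $M$.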
A natural generalisation would be to use the original $\gamma>1/2$ barrier of \cite{smt01} rather than the square-root barrier. Such a choice would, however, complicate this algorithm significantly. First of all, as Figure~\ref{shist} illustrates, the shape of the square-root barrier is the same at all redshifts, up to an overall vertical shift. This is not the case when $\gamma>1/2$. As $\delta_{\rm c}$ increases (increasing redshift), the $S$-dependent term in equation~(\ref{ellbarrier}), which scales as $\delta_{\rm c}^{1-2\gamma}$, makes the barrier increase less rapidly with $S$. A consequence of this is that the barriers associated with different redshifts cross. In the absence of crossing barriers (e.g., constant and square-root barriers), one may uniquely map any point in the $(S,\delta)$ plane to $(m,z)$. The crossing of barriers invalidates this property, making the identification of jumps with mergers at a given time ill-defined.
\section{Comparison with simulations}
\label{nbody}
In this section, we compare the statistical properties of our merger history trees with expectations from the excursion set theory which they are supposed to reproduce, and with measurements in the GIF2\footnote{German Israel Fund.} N-body simulation \citep{gao04b}. The simulation followed the evolution of $400^3$ particles in a periodic cubic box $110h^{-1}$Mpc on a side in a flat ${\rm \Lambda}$CDM background cosmology with parameters $(\Omega_m,\sigma_8,h,\Omega_bh^2,n) = (0.3,0.9,0.7,0.0196,1)$. Fifty simulation snapshots were output, equally spaced in $\log(1+z)$. At each snapshot, haloes were identified using the spherical overdensity criterion, adopting for virial mass the definition of \cite{eke} (i.e., with virial density $\sim 324\bar{\rho}$ at redshift zero). The particle mass is $m_{\rm p} = 1.73 \times 10^{9} h^{-1} {\rm M_\odot}$ and only objects with at least ten particles are considered. $M_{\star}(z)$, defined by $\delta^{2}_{c}(z)=S(M_{\star}(z))$, is the typical mass scale at redshift $z$. It is common practice to express halo masses in terms of $M_{\star}=M_{\star}(z=0)$ (the $z$-dependence is suppressed for the present time). For this cosmology and initial power spectrum, $M_{\star}= 8.7 \times 10^{12}h^{-1} {\rm M_{\odot}} \simeq 5030 m_{\rm p}$. The simulation data and halo catalogues are available at \href{http://www.mpa-garching.mpg.de/Virgo}{\texttt{http://www.mpa-garching.mpg.de/Virgo}}. See \cite{getal08} for more details regarding the post-processing of the simulation.
To compare our merger histories with those in the GIF2 simulation, we generated 2000 realisations of our tree for each final halo mass bin $M$ of interest. In all cases, the minimum mass considered was $m_{\rm dust}=M/1000$, and the merger histories of haloes with mass below this were not followed (we call this minimum mass the `branching-mass resolution'). We used random walks with $10^{5}$ steps in between $S_{M}$ and $S_{\rm dust}$ to ensure that the mass change between the steps was less than $m_{\rm dust}$. Having a small step size is essential to faithfully reproduce random walks in a computer. Moreover, if the step size is too large, we run the risk of missing branches. Numerical tests showed that the outputs converge once the step size is sufficiently small. See the Appendix for more details on the implementation of our Monte Carlo tree. Recall that our tree does not take discrete steps in time. Nevertheless, for fair comparison with the measurements from the GIF2 simulation, the tree data were stored in the same discrete redshift bins as were output from the simulation. We use `Cont' to denote the original tree data and `Snap' for the data stored in redshift snapshots. Sections~\ref{mformation} and \ref{lastmerger} study some merger-related quantities which are sensitive to the differences between these two ways of storing trees (Figures~\ref{mF}, \ref{zoom} and \ref{zlmm}).
\begin{figure*}
\centering
\includegraphics[width=0.9\hsize]{Fig6}
\caption{The $\eta$-symmetry. Different combinations of $M$ and $z$ with similar $\eta$ (equation~\ref{omega} and Table~\ref{wtable}). N-body simulation measurements and the \citet{st02} result with $\gamma > 1/2$ are shown.}
\label{wprog}
\end{figure*}
\subsection{The progenitor mass function}
\label{prog}
Figures~\ref{prog1}-\ref{prog5} show the
progenitor mass fractions and mass functions
at five different redshifts ($z=0.5,1,2,3,5$), for haloes identified at $z=0$ with final masses given by $M/M_{\star}=0.06,0.6$ and 6. The corresponding values of $\eta$ (equation~\ref{omega}) are shown in each panel. In all three figures, filled circles show measurements in the GIF2 simulation, and open circles show results from our square-root trees. We probe the $m<m_{\rm p}$ regime with our trees to verify consistency with analytic excursion-set predictions (the smooth curves in all the panels). The expressions we use are given explicitly in Appendix~A of \cite{gmst07}. The short-dashed curve shows the constant barrier $(\beta,q)=(0,1)$ prediction associated with spherical collapse \citep{lc93}. The solid curves show the exact square-root barrier solution with $(\beta,q,\gamma)=(0.5,0.55,0.5)$; this is a complicated affair, involving sums of Parabolic Cylinder functions \citep{breiman}. Dashed curves show the considerably simpler approximation to the solution which is due to \cite{st02}; this approximation is excellent over the entire range of interest. The long-dashed curves show this same approximation for the ellipsoidal collapse barrier: $(\beta,q,\gamma)=(0.707,0.47,0.615)$. The square-root barrier prediction agrees well with the $\gamma=0.615$ curve, except in the high-mass regime. This discrepancy becomes evident when $\eta>1$ and it is amplified with increasing $\eta$ (see below).
Before we ask how our merger tree algorithm compares with simulations, we note that it produces progenitor mass functions that are well-described by the theory curves over a wide range of masses and redshifts. At high redshifts, our tree data lie slightly below the theory curves at both high and low $m/M$, and slightly above in between, although where the cross-over points occur depends on $z$ and $M$. In all other regimes, our Monte Carlo trees match the square-root barrier predictions. Any additional disagreement with the GIF2 simulation measurements (compare open and filled circles) is due to limitations of the $\gamma=1/2$ model.
\begin{figure*}
\centering
\includegraphics[width=0.9\hsize]{Fig7}
\caption{Distribution of formation redshifts. Filled circles show simulation data, open circles and triangles show results from the square-root and constant barrier trees. Smooth curves show equation~(\ref{pw}) with $q=1$, 0.707 and 0.55 (short-dashed, long-dashed, and dashed), corresponding to the predicted distribution for constant barriers (spherical collapse), moving (ellipsoidal collapse) and square-root barriers.}
\label{zF}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.9\hsize]{Fig8}
\caption{Distribution of the mass at formation for several final masses. The left half of each panel shows the mass just prior to formation, whereas the right half shows the mass just after formation. Filled circles show simulation data, open circles and triangles are from the square-root and constant barrier trees. The solid curve shows $\mu q(\mu)$ (right half) and $\mu p(\mu)$ (left half) (equations~\ref{qu} and~\ref{pu} respectively). The dashed curve shows these same expressions with
$\mu \rightarrow 1/(4\mu)$.}
\label{mF}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.9\hsize]{Fig9}
\caption{Same as Figure~\ref{mF}, but showing only the region around $m/M=1/2$. The peak in the simulations (filled symbols) is less pronounced than in the merger trees (jagged lines). Open circles show the result of sampling the merger trees at the same redshifts as the simulation snapshots: this makes a dramatic difference around $0.49\le \mu\le 0.51$, suggesting that the sharp cusp predicted by the theory will also be present in simulations with sufficiently closely spaced outputs. The smaller discrepancies further from the peak remain.}
\label{zoom}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.9\hsize]{Fig10}
\caption{The redshift distribution of the most recent major merger, for several final masses. A major merger is defined as one in which the minor component has at least 1/3 of the mass of the major component. Filled circles show the GIF2 simulation data, open symbols show the corresponding measurements in the same `snapshot' versions of our trees, and smooth curves show the `continuous' distributions which would be seen with arbitrarily closely spaced output times.}
\label{zlmm}
\end{figure*}
\begin{table}
\begin{center}
\begin{tabular}{lll|lll} \hline
$M/M_{\star}$ & $z$ & $\eta$ & $M/M_{\star}$ & $z$ & $\eta$ \\ \hline \hline
0.06 & 1 & 0.3 & 6 & 0.5 & 0.31 \\
0.06 & 2 & 0.65 & 6 & 1 & 0.66 \\
0.6 & 3 & 1.45 & 6 & 2 & 1.44 \\ \hline
\end{tabular}
\caption{Pairs of $(M,z)$ combinations with similar values of the scaling variable $\eta$ (equation~\ref{omega}), used for the comparisons shown in Figure~\ref{wprog}.}
\label{wtable}
\end{center}
\end{table}
Finally, recall that the square-root and constant barrier models make specific predictions for how the conditional mass functions should scale with final halo mass and time. Table~\ref{wtable} lists pairs with similar $\eta$, yet quite distinct values of $M$ and $z$. Figure~\ref{wprog} compares the associated conditional mass functions. The black squares and long-dashed lines refer to the left-hand side of Table~\ref{wtable} (low final masses), whereas the gray triangles and short-dashed lines refer to the right-hand side (high final masses). Notice that the curves are remarkably similar to one another, as are the symbols. This is true despite the fact that the values of $\eta$ are not perfectly identical, and that plotting versus $m/M$ involves the additional approximation $f(m,z|M,Z)\,{\rm d}m = f(S_m/S_M|\eta)\,{\rm d}(S_m/S_M) \simeq f(m/M|\eta)\,{\rm d}(m/M)$. The results for low-mass haloes (black squares) are truncated at higher $m/M$ than they are for larger $M$ (gray triangles), simply because only haloes with at least ten particles are considered. Evidently, the conditional mass functions are indeed functions of $\eta$ and $S_m/S_M$ separately, rather than of the combination $\nu$.
\subsection{The distribution of formation redshifts}\label{zformation}
Following \cite{lc93}, a halo is said to have `formed' when it first acquires half of its final mass. For a given halo mass, there is a distribution of formation redshifts. This distribution is expected to peak at earlier times for lower mass haloes. The excursion set model with constant barriers provides a simple expression for this distribution of formation times:
\begin{equation}
p(\omega_{F})\, {\rm d}\omega_{F} =
2\omega_{F}\,{\rm erfc}(\omega_{F}/\sqrt{2})\,{\rm d}\omega_{F},
\label{pw}
\end{equation}
where
\begin{equation}
\omega_{F} \equiv \frac{\delta_{\rm c}(z_{\rm F})-\delta_{\rm c}(z_0)} {\sqrt{S(M/2)-S(M)}}
= \frac{\eta}{\sqrt{S(M/2)/S(M)-1}},
\label{omegaf}
\end{equation}
with $\eta$ given by equation (\ref{omega}).
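Equation~(\ref{pw}) is properly normalised, and the $\sqrt{q}$ rescaling used for moving barriers (see Section~\ref{zformation}) preserves this. A quick numerical check (Python; the rescaled density $2q\,\omega_F\,{\rm erfc}(\sqrt{q}\,\omega_F/\sqrt{2})$ simply folds the replacement $\omega_F\rightarrow\sqrt{q}\,\omega_F$ into the distribution):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc
from scipy.optimize import brentq

def p_wF(w, q=1.0):
    """Formation-time distribution, with the replacement w_F -> sqrt(q) w_F
    folded in; q = 1 recovers the constant-barrier (spherical collapse) case."""
    return 2.0 * q * w * erfc(np.sqrt(q) * w / np.sqrt(2.0))

def median_wF(q=1.0):
    """Median of p(w_F); smaller q shifts formation to larger w_F."""
    cdf = lambda w: quad(p_wF, 0.0, w, args=(q,))[0]
    return brentq(lambda w: cdf(w) - 0.5, 1e-6, 20.0)
```

Since the rescaled median scales as $1/\sqrt{q}$, the square-root barrier value $q=0.55$ places the peak at larger $\omega_F$ (earlier formation) than the constant barrier $q=1$, consistent with the ordering of the curves in Figure~\ref{zF}.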
The filled circles in Figure~\ref{zF} show the formation redshift distributions for haloes with masses in the range $0.9\,M \leq m \leq 1.1\,M$, with $\log_{10}(M/M_{\star}) = \{1, \dots, -1.5 \}$ in steps of $-0.5$, in the GIF2 simulation. We use the first snapshot at which at least half the mass is in a single progenitor as the formation time, and make no attempt to
interpolate our simulation formation redshifts between these discretely spaced output times \citep{harker,gmst07}. The open circles and triangles show the corresponding formation time distributions from our square-root and constant barrier trees. Recall that we do not discretise redshift in our tree, so the question of interpolation does not arise.
For the ellipsoidal collapse model, \cite{gmst07} showed that the formation redshift is well-described by equation~(\ref{pw}) if one replaces $\omega_F \rightarrow \sqrt{q}\omega_F$. The smooth curves show this with $q=1$, 0.707 and 0.55 (short-dashed, long-dashed, and dashed), which represent the (constant, $\gamma=0.615$, and square-root) barrier predictions. For higher values of $q$, the peaks are located at lower redshifts and the widths of the curves decrease. Strictly speaking, equation~(\ref{pw}) only holds for white-noise initial conditions. Nevertheless, as pointed out by \cite{lc93}, it remains a reasonable approximation to the CDM case. Furthermore, note that it provides an excellent description of the formation times generated by our trees. However, no choice of $q$ provides particularly good agreement with the GIF2 simulation, a discrepancy noted by previous authors \citep{linjinglin,hp06,gmst07}. This is likely a consequence of the excursion set assumption that different steps in the walk are uncorrelated \citep{st02}. See \cite{pan} and references therein for how one might improve on this.
\subsection{The mass distribution at formation}\label{mformation}
The previous subsection studied halo formation, where formation was defined as the first time that the mass of one of the progenitors exceeds half the total. Therefore, this mass can have any value between 1/2 and 1 times the final mass, and one can study the distribution of masses at, and just prior to, formation. The excursion set constant barrier model makes a prediction for this distribution \citep{nusser}. The mass distribution at formation is expected to be
\begin{equation}
\label{pu}
p(\mu)\, {\rm d}\mu = \frac{2}{\pi} \sqrt{\frac{1-\mu}{2\mu-1}}\,
\frac{{\rm d}\mu}{\mu^{2}}, \quad \mbox{where} \quad 1/2 \leq \mu \leq 1,
\end{equation}
with $\mu \equiv m/M$; the distribution just before formation is
\begin{equation}
\label{qu}
q(\mu)\, {\rm d}\mu = \frac{1}{\pi(1-\mu)}
\Big( \sqrt{\frac{\mu}{1-2\mu}}-\sqrt{1-2\mu} \Big) \,
\frac{{\rm d}\mu}{\mu^{2}},
\end{equation}
where $1/4 \leq \mu \leq 1/2$. We have found that, to a very good
approximation, $\mu\,p(\mu) \to \mu\,q(\mu)$ if one replaces $\mu \rightarrow 1/(4\mu)$ (solid and dashed curves in Figure~\ref{mF},
respectively). Let $m_B$ and $m_A$ denote the masses before and after formation, respectively. Roughly speaking, the symmetry about $M/2$ indicates that having a given ratio of $m_B$ to $M/2$ before formation is equally likely as having the same ratio of $M/2$ to $m_A$ after formation.
Although these expressions were derived assuming a white-noise power spectrum, they are expected to be relatively independent of $P(k)$. \cite{st04} showed that they did indeed match numerical simulations well for different cosmologies and initial power spectra. Figure~\ref{mF} shows that they also work well for square-root barriers.
The agreement between the theory curves (smooth curves) and our Monte Carlo trees (jagged lines, labeled `Cont') is excellent. All panels show that agreement with simulation data is also quite good. However, there is a systematic discrepancy: the cusp at $\mu=1/2$ appears to be less pronounced in the simulation, with correspondingly lower tails. A similar discrepancy was seen by \cite{st04}, who suggested that the fact that the simulations only provide discrete snapshots in time may be smoothing out the peak. By sampling our trees at the simulation snapshots (open symbols, labeled `Snap'), we have attempted to model the magnitude of this effect. Figure~\ref{zoom} illustrates that the cusp has indeed been smoothed, but this is a dramatic effect only around $0.49\le \mu\le 0.51$. The discrepancies further from the peak remain (Figure~\ref{mF}).
\subsection{The redshift distribution of the last major merger}
\label{lastmerger}
Mergers of galaxies with similar masses are expected to produce strong short-lived periods of star formation: \emph{starbursts} \citep{hern}. Recent numerical studies suggest that the mass ratio of the galaxies involved plays an important role: merger-induced bursts occur when the galaxies have similar masses \citep{gao04a,sh05,cox}. Moreover, as suggested by \cite{maller}, a galaxy's Hubble type is strongly correlated with its last major merger.
Understanding such mergers requires understanding the mergers their host haloes undergo. Consider a merger $(m',m-m') \rightarrow m$, with $m'>m-m'$. For ease of comparison with \cite{pch07}, we will define a `major' merger as one in which $(m-m')/m' \geq 1/3$. The filled circles in Figure~\ref{zlmm} show the redshift distribution of the last major merger on to the main branch. (The last major merger does not necessarily happen on the main branch. However, Figure~3 of \cite{pch07} suggests that, in most cases, it does. Presumably, this is because the assembly of haloes in recent times is dominated by mergers.) Curves show measurements in our full trees (`Cont'), and open symbols show the result of only sampling the trees at the GIF2 simulation outputs (`Snap'). For the discretely-sampled data, only mergers involving haloes with at least ten particles are considered. Note that the anomalously low data point that is second from the left in all panels appears to be an effect of seeing the tree at discrete snapshots -- the smooth curves show no such dip. This feature is also present in the simulation analysed by \cite{pch07}; we expect it to disappear if more finely spaced snapshots are analysed.
For high masses, the data from the square-root trees peak at about the same redshifts as the simulations; the constant barrier, spherical collapse trees peak at lower redshifts. This improvement relative to the spherical collapse case is similar to that in the modified \textsc{galform} trees of \cite{pch07}. However, our square-root trees tend to lie above the simulation at low redshifts, and below at higher redshifts. The discrepancy with simulation becomes increasingly worse at small masses, although it is possible that the GIF2 results for our two smallest mass bins are not reliable -- the high redshift mergers involve haloes with few particles. Because we require haloes to have more than 10 particles, we are likely to miss major mergers once the typical mass becomes of this order.
\section{Discussion}\label{conclusions}
We presented an algorithm for generating merger histories of dark matter haloes. This algorithm is based on the excursion set approach (Figures~\ref{chist}, \ref{shist} and related discussion), and can handle moving barriers of the sort that are associated with ellipsoidal collapse (equation~\ref{ellbarrier}). We illustrated its use by generating the forest of trees associated with a square root barrier (equation~\ref{sqrtB}). The halo mass function associated with this barrier is known to provide a reasonable description of halo abundances in the GIF2 simulation against which we test our model \citep{mr05,ms08}.
Our algorithm produces merger histories which yield the progenitor mass functions which are commonly computed using excursion set theory -- demonstrating that our approach is nearly self-consistent -- over a broad range of masses and redshifts (Figures~\ref{prog1}-\ref{prog5}). These progenitor mass functions are also in reasonable agreement with measurements in simulations; they are a significant improvement on trees based on the constant barrier, spherical collapse model.
Our algorithm is different from others in the literature because it is based on taking discrete steps in mass, whereas all others (full N-body simulations included) take discrete steps in time. We also used our algorithm to show that while the distribution of times at which haloes assemble half their mass depends quite strongly on the barrier shape (Figure~\ref{zF}), the distribution of masses at formation does not (Figure~\ref{mF}). Excursion set related formulae for this `universal' distribution (equations~\ref{pu} and~\ref{qu}) provide an excellent description of our merger trees; the agreement with simulations is good, but not perfect.
The algorithm is described in detail in Appendix~\ref{algorithm}. In essence, it requires that one be able to generate one-dimensional random walks quickly. Since this reduces to being able to generate long strings of Gaussian variables, and fast routines for this exist, it is reasonably fast. Significant speed-ups can be obtained if one exploits known properties of random walks. For example, in the present context, all steps in the walk which lie below the current threshold value are not interesting (e.g. gray-jagged portions of the random walks during the $S_m \rightarrow S_{m'}$ jump in Figures~\ref{chist}-\ref{shist}). If the distribution of the number of steps it takes to first exceed the current value is known, then one need not generate all these steps, one can instead draw a number from this distribution, and simply jump this number of steps. Incorporating such changes into our algorithm is the subject of ongoing work.
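For the driftless walk, the continuum limit of that jump distribution is known in closed form: the $S$-interval after which a Brownian walk first rises by $a$ above its current value follows the L\'evy law, which can be sampled as $\Delta S = a^2/Z^2$ with $Z$ a standard normal. The sketch below (Python) is offered only as an illustration of the proposed speed-up, as a continuum approximation to the discrete-step walk, not as our implementation:

```python
import numpy as np

def first_passage_jump(a, size, rng):
    """Sample the S-interval after which a driftless Brownian walk first rises
    by `a` above its current value: Delta_S = a^2 / Z^2, Z ~ N(0,1).
    (Continuum approximation to the discrete-step walk in the text.)"""
    z = rng.normal(0.0, 1.0, size)
    return a**2 / z**2
```

A simple consistency check: $\Delta S > a^2$ exactly when $Z^2 < 1$, so the fraction of samples exceeding $a^2$ should be ${\rm erf}(1/\sqrt{2}) \approx 0.683$.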
The excursion-set approach successfully describes many properties of the hierarchical growth of dark matter haloes. However, with the exception of white-noise initial conditions, the progenitor distributions it predicts cannot be made consistent with the intuitively attractive notion that, in sufficiently small time-steps, mergers are binary \citep{sp97,benson,eyal2,jun,benson2}. This accounts for some of the discrepancy between our trees and the excursion set based theory curves. A number of authors have attempted to alleviate this by relaxing the assumption of binary mergers in their merger tree algorithms. For example, one of the two algorithms in \cite{sl99} reproduces the spherical collapse based results exactly, for arbitrary power spectra. In this algorithm, mergers occur between groups of objects rather than just two but, as they pointed out, this was a rather contrived solution. \cite{sk99}, \cite{eyal2} and \cite{zmf08} discuss other possibilities. We make no attempt to address this issue with our algorithm.
The appendix also shows that building the main branch is straightforward (Figure~\ref{split}). We expect our approach to facilitate studies involving the mass accretion history (MAH) of a halo \citep{avila,nusser,vdb02,wechsler,gao05,vladimir,kyle,getal08,lidia}. This is the subject of work in progress.
We mentioned in the introduction that halo merger histories play an important role in galaxy formation models. Models of the mergers of supermassive black holes \citep{menouetal,vetal03,yoo,vetal08}, merger-induced starbursts \citep{hern} and quasars \citep{kh00} also have the assembly of haloes as their backbone. Halo assembly histories also play a key role in studies of the brightest cluster galaxies \citep{delucia}, luminous red galaxies \citep{almeidaetal,conroy,masjedi,wake}, satellites and the intracluster light \citep{cwk,ssm07}, and the nature of substructure in galaxy clusters. This last topic is important for interpreting the Sunyaev-Zel'dovich \citep{holderetal} and strong lensing \citep{natetal} signals.
One of the advantages of using Monte Carlo merger trees is that one may easily change the underlying cosmology and initial power spectrum. Recently there have been attempts to describe spherical collapse and non-linear growth in modified-gravity theories \citep{schafer,laszlo}. In principle, such modifications can easily be incorporated into our algorithm. We hope that our algorithm will prove useful in some of these studies.
\section*{Acknowledgments}
We wish to thank E. Neistein, K. Stewart, V. Avila-Reese, J. Pan, M. Martino, J. Newman and J. Hyde for useful discussions and positive feedback. We also thank our referee, O. Fakhouri, for his insightful criticisms and genuine interest in improving this work. J. Moreno would like to thank V. Moreno and the kids for their patience and understanding during the completion of this project. This work was supported in part by CONACyT--Mexico, and by Grant 2002352 from the US-Israel BSF.
\bibliographystyle{mn2e}
\section{Introduction}
High energy baryon-baryon interactions can be decomposed into at least two distinct sources of produced hadrons. The first one is associated with the baryon valence quarks and a quark-gluon cloud coupled to the valence quarks. These partons preexist long before the interaction and can be considered a thermalized statistical ensemble. When the coherence of these partonic systems is destroyed by the strong interaction between the two colliding baryons, these partons hadronize into the particles released from the collision. The hadrons from this source are presumably distributed in the transverse plane with respect to the collision axis according to a Boltzmann-like exponential statistical distribution. The second source of hadrons is directly related to the virtual partons exchanged between the two colliding partonic systems. In QCD this mechanism is described by the BFKL Pomeron exchange. The partons radiated from this Pomeron presumably have a power-law spectrum typical of pQCD. Figure~\ref{fig} schematically shows these two sources of particles produced in high energy baryonic collisions.
\begin{figure}[h]
\includegraphics[width =8cm]{HP}
\caption{\label{fig} Two different sources of hadroproduction: red arrows - particles produced by the preexisted partons, green - particles produced via the Pomeron exchange.}
\end{figure}
Recently, a new unified approach to describing the shape of particle production spectra was proposed~\cite{OUR1}. It was suggested to approximate the charged particle spectra as a function of the particle's transverse momentum~($P_T$) by a sum of an exponential (Boltzmann-like) and a power law statistical distribution.
\begin{equation}
\label{eq:exppl}
\frac{d\sigma}{P_T d P_T} = A_e\exp {(-E_{Tkin}/T_e)} +
\frac{A}{(1+\frac{P_T^2}{T^{2}\cdot n})^n},
\end{equation}
where $E_{Tkin} = \sqrt{P_T^2 + M^2} - M$, with $M$ the mass of the produced hadron, and $A_e, A, T_e, T, n$ are free parameters to be determined by a fit to the data. The detailed arguments for this particular choice are given in~\cite{OUR1}.
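For concreteness, the parameterization~(\ref{eq:exppl}) can be transcribed directly into code; the snippet below evaluates the sum of the two terms (the parameter values in the check line are illustrative only, not fitted values from any experiment):

```python
import math

def spectrum(pT, Ae, Te, A, T, n, M):
    """d(sigma)/(pT dpT) of eq. (1): a Boltzmann-like exponential in the
    transverse kinetic energy plus a power law term in pT."""
    ETkin = math.sqrt(pT**2 + M**2) - M
    exponential = Ae * math.exp(-ETkin / Te)
    power_law = A / (1.0 + pT**2 / (T**2 * n)) ** n
    return exponential + power_law

# at pT = 0 the transverse kinetic energy vanishes, so the value is Ae + A
assert abs(spectrum(0.0, Ae=30.0, Te=0.15, A=10.0, T=0.7, n=3.0, M=0.13957) - 40.0) < 1e-9
```

For $P_T \gg T$ the exponential term becomes negligible and the power law term takes over, reproducing the behaviour discussed in the text.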
The proposed new parameterization matches the naive illustrative picture of hadroproduction in baryon-baryon collisions described above.
A typical charged particle spectrum as a function of transverse energy, fitted with the function~(\ref{eq:exppl}), is shown in Fig.~\ref{fig:0}.
As one can see, the exponential term dominates the particle spectrum at low $P_T$ values.
\begin{figure}[h]
\includegraphics[width =8cm]{PI}
\caption{\label{fig:0} Charged particle differential cross section~\cite{UA1} fitted to the function~(\ref{eq:exppl}): the red (dashed) line shows the exponential term and the green (solid) one the power law term.}
\end{figure}
The contributions of the exponential and power law terms of the parameterization~(\ref{eq:exppl}) to the typical spectrum of charged particles produced in $pp$-collisions are also shown separately in Figure~\ref{fig:0}.
The relative contribution of these terms is characterized by the ratio $R$ of the integral of the power law term alone to the integral of the full parameterization function, both taken over $P_T^{2}$:
\begin{equation}
R = \frac{AnT^{2}}{AnT^{2} + A_e(2MT_e + 2T_e^2)(n-1)}
\end{equation}
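The expression for $R$ follows from the elementary integrals of the two terms of the parameterization~(\ref{eq:exppl}) over $P_T^{2}$. With the substitution $x = P_T^2$, and $u = \sqrt{x + M^2} - M$ in the exponential term, one finds
\[
\int_0^\infty \frac{A\,dx}{(1+\frac{x}{T^{2} n})^n} = \frac{A n T^2}{n-1},
\qquad
\int_0^\infty A_e e^{-(\sqrt{x+M^2}-M)/T_e}\,dx = \int_0^\infty A_e e^{-u/T_e}\,(2u+2M)\,du = A_e\left(2MT_e + 2T_e^2\right),
\]
so that $R$ is the ratio of the first integral to the sum of the two.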
It was found that the exponential contribution dominates the charged particle spectra in baryon-baryon collisions, while it is completely missing in gamma-gamma interactions~\cite{OUR1}. Moreover, the exponential contribution is characteristic of charged pion production and is much less pronounced in kaon or proton (antiproton) production spectra~\cite{OUR2}. There is also no room for an exponential contribution in heavy quarkonium production in $pp$ collisions~\cite{CDF}. In addition, the power law term contribution increases with the event charged multiplicity and with the energy of baryon-baryon collisions~\cite{OUR4}. These observations fit naturally into the simple hadroproduction picture described above.
Indeed:
1. In gamma-gamma interactions there are no preexisting partons to form the exponential part of the produced particle spectra.
2. In $pp$-collisions the BFKL Pomerons are more flavor democratic than the radiation related to the valence quarks. This results in a much smaller exponential contribution to the charged kaon spectra produced in baryon-baryon interactions than to the charged pion spectra. This behavior was recently confirmed by the comparative analysis~\cite{OUR3} of the data provided by the PHENIX experiment for $pp$ collisions at RHIC~\cite{Adare:2011vy}. Moreover, it turned out that the difference in the relative contribution of the exponential part of the approximation (\ref{eq:exppl}) for pions and kaons determines the peculiar shape of the kaon to pion yield ratio as a function of the hadron transverse momentum.
3. The AGK cutting rules~\cite{AGK} state that the charged multiplicity in hadronic interactions is proportional to the number of Pomerons involved in the interaction. Therefore, the relative contribution of the exponential part of the approximation (\ref{eq:exppl}) decreases with increasing charged multiplicity in $pp$ interactions. This has recently been checked in the analysis~\cite{OUR4} of charged particle spectra for $pp$-collision events with different visible charged multiplicities published by the STAR experiment~\cite{STAR}.
Moreover, one could make further predictions about some observable features of the hadroproduction phenomenon:
1. In high energy photon-proton collisions the exponential contribution to the charged particle spectra will depend strongly on the rapidity of the produced charged particles. The hadrons produced in the proton direction in rapidity space will show a sizable exponential contribution to their spectra, while the distribution of hadrons produced on the photon side of the event will be described by the power law function alone. Therefore, one expects to observe a change between these two regimes for particles produced around zero rapidity in the photon-proton center of mass system.
2. Another prediction concerns the absolute dominance of the exponential contribution to the spectra of particles produced in the high rapidity proton fragmentation region, where the role of the valence quarks is more important.
Prediction (1) could be checked with a detailed measurement of photon-proton interactions at the HERA experiments. For that, the double differential distribution of the produced charged particles in transverse momentum and rapidity has to be measured in the photon-proton center of mass system.
To check prediction (2) it is possible to use the already available data published by the UA1 experiment~\cite{UA1}. These data consist of charged particle spectra for $pp$-collisions in 5 different pseudorapidity regions, covering the total interval $|\eta|<3.0$. Pseudorapidity distributions of charged particles in $p\bar{p}$ collisions~\cite{P238, UA5} at center of mass energies close to that of the UA1 experiment~\cite{UA1} are shown in Figure~\ref{fig:01}. The central rapidity plateau region, where the contribution of the Pomeron exchange is important, is limited for the UA1 data to $|\eta|<2.5$, while the fragmentation region extends to higher rapidity values. The UA1 data thus allow exploring the hadroproduction properties in the transition between the central plateau and the fragmentation region.
\begin{figure}[h]
\includegraphics[width =8cm]{UA1etas}
\caption{\label{fig:01} Pseudorapidity distribution of charged particles in $p\bar{p}$ collisions at $\sqrt{s} = 630$~GeV~\cite{P238} and $\sqrt{s} = 546$~GeV~\cite{UA5}.}
\end{figure}
\begin{figure*}[!]
\includegraphics[width =18cm]{UA1s}
\caption{\label{fig:02} Charged particle spectrum~\cite{UA1} for different values of pseudorapidity fitted to the function~(\ref{eq:exppl}): the red (dashed) curve shows the exponential term and the green (solid) one stands for the power law term.}
\end{figure*}
\begin{figure}[h]
\includegraphics[width =8cm]{UAr}
\caption{\label{fig:03} The relative contribution of the power law term to the approximation (\ref{eq:exppl}) as a function of pseudorapidity.}
\end{figure}
Two examples of charged particle spectra in different pseudorapidity regions, together with the fit to the function (\ref{eq:exppl}), are shown in Figure \ref{fig:02}. It can be seen from this figure that the relative contribution of the exponential part increases with rising pseudorapidity, as expected from the simple model given above. The pseudorapidity dependence of the relative contribution of the two terms of the approximation (\ref{eq:exppl}) needed to describe the spectrum shape of the produced charged particles is shown in Figure \ref{fig:03}. The trend demonstrated in Figure \ref{fig:03} is in accord with the qualitative predictions discussed above.
In conclusion, a simple naive model for the hadroproduction mechanism was proposed to explain a number of recently observed phenomena. Within the framework of this model there are two distinct sources of hadrons produced in particle-particle collisions. One is the radiation of hadrons by the preexisting valence quarks. This source of hadrons is characteristic of colliding massive baryons and is completely missing for colliding gamma quanta. The other source of hadrons is related to QCD-vacuum fluctuations, described by the Pomeron exchange. The Pomeron interactions give rise to hadrons distributed according to the QCD-like power-law statistical distribution, while the valence quark radiation results in the formation of an exponential Boltzmann-like spectrum of produced particles. This simple model turned out to be successful in describing a number of observed hadroproduction phenomena. The increase of the relative exponential contribution to the approximation (\ref{eq:exppl}) describing the particle spectra in $pp$-collisions with increasing pseudorapidity, predicted within this model, was confirmed using the data previously published by the UA1 Collaboration~\cite{UA1}.
The authors thank Professor M.~G.~Ryskin for fruitful discussions and for his help during the preparation of this short note.
\section{Introduction}
This paper is closely related to the authors' recent investigations \cite{FKKM20} on the matricial truncated Hamburger moment problem.
In \cite{FKKM20}, the authors obtained a parametrization of the Stieltjes transforms of the solution set in terms of a linear fractional transformation.
This parametrization works for the general matrix case of the problem under consideration.
Our main goal in this paper is to use results of \cite{FKKM20} to achieve a description of the set of matrix values attained by the Stieltjes transforms of the solutions of the moment problem at a prescribed point of the open upper half plane.
We show that this set of values fills a (closed) matrix ball and present explicit expressions for the center and the semi-radii of this matrix ball, which will also be called the Weyl matrix ball associated with the concrete matricial Hamburger moment problem under consideration.
Following the classical monograph of Akhiezer \cite{MR0184042}, the terminology ``Weyl circles'', and later ``Weyl matrix balls'', was consistently used in the Soviet literature (see, \teg{}\ \cite{MR0425671,MR647177,MR703593,MR752056,MR751390,MR1473266}).
As explained in \zitaa{MR0184042}{\cch{1}{}} or in \cite{MR2077204}, the history of the scalar case is intimately related to the classical papers \cite{MR1511560,MR1512075,zbMATH02604576,MR1503221}.
Hellinger \cite{MR1512075} mentioned the analogies of his considerations on the moment problem with H.~Weyl's paper \cite{MR1511560} on a boundary value problem for ordinary differential equations.
In 1934, H.~Weyl studied connections between the papers \cite{MR1512075} and \cite{MR1511560}.
However, he did not consider the moment problem, but the Nevanlinna--Pick problem for holomorphic functions in the open upper half plane, the imaginary part of which is non-negative.
The main purpose of \cite{MR1503221} is to formulate, in the area of ordinary differential equations, the problem which corresponds to the Nevanlinna--Pick interpolation problem.
Concerning the matrix case, one can observe two lines of investigations.
The first one is connected with particular types of ordinary differential equations.
It is a generalization of the topic opened by H.~Weyl in his fundamental paper \cite{MR1511560} on Sturm--Liouville equations on the semi-axis.
In this framework, we mention the landmark paper \cite{MR0425671} by S.~A.~Orlov.
The central theme of the present paper can be considered as part of the second line of investigations in the matrix case.
Starting in the 1970's, this line was essentially initiated by the research of the school of V.~P.~Potapov on matricial generalizations of classical interpolation and moment problems.
Concerning the matricial version of the non-degenerate Hamburger moment problem, the corresponding matrix balls were explicitly computed by I.~V.~Kovalishina in \cite{MR703593}.
Her approach is based on V.~P.~Potapov's method of ``Fundamental matrix inequalities'' (FMI method).
This paper is part of the investigations of the first and second authors on matricial versions of classical interpolation and moment problems in the general (possibly degenerate) matrix case.
In the first period of this research, together with A.~Lasarow, they concentrated on interpolation problems for holomorphic matrix functions in the open unit disc, namely the matrix versions of the classical interpolation problems named after Carath\'eodory and Schur.
In both problems, the key instrument of this approach is the so-called central solution.
The central features of this work can be summarized in three steps:
1) analysis of the structure of the central solution (see \cite{MR2104258,MR2066852});
2) inspired by the first step, construction of a particular matrix polynomial which parametrizes the solution set via a linear fractional transformation of the Schur class (see \cite{MR2222523,MR2493510});
3) closer analysis of the blocks of the generating matrix polynomial realizing the parametrization of the solution set, in order to obtain information on the concrete shape of the parameters of the associated Weyl matrix ball (see \cite{MR2656833,MR2390673}).
This strategy could be also used to handle the corresponding interpolation problem for \(J\)\nobreakdash-Potapov functions.
This was done in collaboration with U.~Raabe and K.~Sieber (see \cite{MR2535572,MR2647541,MR2746081,MR2890903}).
Later the authors started their studies on matricial versions of truncated power moment problems on the real axis in the general matrix case.
There the holomorphic matrix functions are defined in the open upper half plane.
What are the differences from the former investigations in the case of the unit disk, and which ideas can be carried over?
The first difference now comes from the absence of an analogue of the central solution.
Remember that the existence of this solution essentially governed what we have done in the unit disk case.
This forces us to find a convenient way to construct a matrix polynomial, called the resolvent matrix in the sequel, which produces the parametrization of the solution set (more precisely, of the set of the corresponding Stieltjes transforms).
In \cite{FKKM20}, we obtained the resolvent matrix on the basis of a Schur type algorithm.
This procedure provided the four \tqqa{blocks} of the resolvent matrix in a form which is not convenient for obtaining information about the corresponding matrix balls.
For this reason, in this paper, we will take a closer look at the resolvent matrix.
In particular, we will recognize that these \tqqa{blocks} can be interpreted as parts of a corresponding quadruple of orthogonal \tqqa{matrix} polynomials.
This observation enables us now to continue in a way suggested by \cite{MR2222523}.
More precisely, the blocks of the resolvent matrix suggest considering a distinguished rational matrix function, which proves to be a key instrument for the subsequent considerations.
Roughly speaking, we thus obtain a conjecture about the parameters of the Weyl matrix ball we are looking for.
This conjecture will then be verified.
This paper is organized as follows.
In \rsec{S-II}, we summarize some basic facts on matricial Hamburger moment problems.
In particular, we recall the well-known characterizations of the existence of solutions (see \rthmss{SolvHam}{T0708}).
Following the classical line, in \rsec{S-III}, we reformulate the original moment problem as an interpolation problem for appropriate classes of holomorphic matrix-valued functions in the open upper half plane \(\pip\).
For this reason, we discuss several integral representations of these functions (see \rthmss{Theo21}{FKMM16_32}).
The main aim of \rsec{S-IV} is to provide essential ingredients for the parametrization of the set of Stieltjes transforms of all solutions.
More precisely, we introduce a special class of ordered pairs of meromorphic \tqqa{matrix}-valued functions in \(\pip\), which later serves as the set of parameters.
In \rsec{S1111}, we take a closer look at the structure of \tHnnde{} (sometimes also called Hankel non-negative definite extendable) sequences \(\seqs{2n}\) of complex \tqqa{matrices} and we introduce the \tsHp{} (see \rdefn{CM2.1.64}).
We consider the so-called \tnatext{} \(\seqsh{2n+1}\) of \(\seqs{2n}\) (see \rdefn{D.nat-ext}).
In \zitaa{FKKM20}{\cthm{8.11}{}}, we obtained a parametrization of the set of all Stieltjes transforms of the solution set via a linear fractional transformation of meromorphic pairs.
The generating matrix-valued function is a \taaa{2q}{2q}{matrix} polynomial, which was constructed via a Schur type algorithm.
For the purposes of this paper, it is essential to recognize that the four \tqqa{blocks} are parts of a \tabcd{}.
This is done in \rsec{S-VII} (see in particular \rthm{CM123}).
The investigations in \rsec{S1129} are guided by the experiences from \cite{MR2222523}.
As in \cite{MR2222523}, a closer look at the resolvent matrix leads us to a sequence of rational matrix-valued functions (see \rdefn{K17.1}), which contain the key information about the Weyl matrix balls we are striving for.
These functions turn out to be related by our Schur type algorithm (see \rlem{P1056}).
This observation, in combination with \zitaa{MR3380267}{\cprop{8.6}{}}, shows that this function belongs to the \tqqa{Herglotz}--Nevanlinna class (see \rprop{P1129}).
\rsec{Cha13} contains the central result of this paper (see \rthm{T45CD}).
More precisely, it is a description of the Weyl matrix ball.
In \rsec{S1031}, we indicate how the matrix ball description, which I.~V.~Kovalishina \cite{MR703593} obtained in the non-degenerate case, can be derived within our approach which works for the most general case.
\section{Notation and preliminaries}\label{S-II}
First we state some notation.
Let \(\C\), \(\R\), \(\NO\), and \(\N\) be the set of all complex numbers, the set of all real numbers, the set of all \tnn{} integers, and the set of all positive integers, respectively.
Further, for every choice of \(\alpha,\beta\in\R\cup\set{-\infty,\infty}\), let \(\mn{\alpha}{\beta}\) be the set of all integers \(k\) such that \(\alpha\leq k\leq\beta\).
Throughout this paper, if not explicitly mentioned otherwise, then let \(p,q,r\in\N\).
If \(\mathcal{X}\) is a \tne{} set, then \(\mathcal{X}^\pq\) represents the set of all \tpqa{matrices} each entry of which belongs to \(\mathcal{X}\), and \(\mathcal{X}^p\) is short for \(\mathcal{X}^{p\times 1}\).
The notation \(\CHq\) is used to denote the set of all \tH{} complex \tqqa{matrices}.
We write \(\Cggq\) and \(\Cgq\) to designate the set of all \tnnH{} complex \tqqa{matrices} and the set of all positive \tH{} complex \tqqa{matrices}, respectively.
Let \(\OA\) be a measurable space.
Then each countably additive mapping defined on \(\fA\) with values in \(\Cggq\) is called a \tnnH{} \tqqa{measure} on \(\OA\) and the notation \(\Mggqa{\Omega,\fA}\) stands for the set of all \tnnH{} \tqqa{measures} on \(\OA\).
Let \(\mu=\matauuuo{\mu_{jk}}{j}{k}{1}{q}\) be a \tnnH{} \tqqa{measure} on \(\OA\).
Then we use \(\LOC{\mu}\) to denote the set of all Borel measurable functions \(f\colon\Omega\to\C\) for which \(\int_\Omega\abs{f}\dif\nu_{jk}<\infty\) holds true for every choice of \(j\) and \(k\) in \(\mn{1}{q}\), where \(\nu_{jk}\) is the variation of the complex measure \(\mu_{jk}\) (see also \rrem{M.8-1}).
If \(f\in\LOC{\mu}\), then let \(\int_\Omega f\dif\mu\defeq\matauuuo{\int_\Omega f\dif\mu_{jk}}{j}{k}{1}{q}\) and we also write \(\int_\Omega f(\omega)\mu\rk{\dif\omega}\) for this integral.
Denote by \(\BsA{\R}\) (\tresp{}\ \(\BsA{\C}\)) the \(\sigma\)\nobreakdash-algebra of all Borel subsets of \(\R\) (or \(\C\), respectively).
Let \(\Omega\in\BsA{\R}\setminus\set{\emptyset}\).
Then designate by \(\BsA{\Omega}\) the \(\sigma\)\nobreakdash-algebra of all Borel subsets of \(\Omega\) and by \(\Mggqa{\Omega}\) the set of all \tnnH{} \tqqa{measures} on \(\rk{\Omega,\BsA{\Omega}}\), \tie{}, \(\Mggqa{\Omega}\) is short for \(\Mggqa{\Omega,\BsA{\Omega}}\).
Let \(\kappa\in\NOinf\).
Then denote by \(\Mgguoa{\kappa}{q}{\Omega}\) the set of all \(\sigma\in\Mggqa{\Omega}\) such that, for all \(j\in\mn{0}{ \kappa}\), the function \(f_j\colon\Omega\to\C\) defined by \(f_j(\omega)\defeq\omega^j\) belongs to \(\Ls{1}{\Omega}{\BsA{\Omega}}{\sigma}{\C}\).
If \(\sigma\in\Mgguoa{\kappa}{q}{\Omega}\), then, for all \(j\in\mn{0}{\kappa}\), let \(\mpm{\sigma}{j}\defeq\int_\Omega\omega^j\sigma\rk{\dif\omega}\).
We will consider the following types of a matricial Hamburger power moment problem:
\begin{problem}[\mprobR{\kappa}{=}]
Let \(\kappa\in\NOinf\) and let \(\seqska\) be a sequence of complex \tqqa{matrices}.
Parametrize the set \(\MggqRsg{\kappa}\) of all \(\sigma\in\MgguqR{\kappa}\) fulfilling \(\su{j}=\mpm{\sigma}{j}\) for all \(j\in\mn{0}{\kappa}\).
\end{problem}
\begin{problem}[\mprobR{2n}{\lleq}]
Let \(n\in\NO\) and let \(\seqs{2n}\) be a sequence of complex \tqqa{matrices}.
Parametrize the set \(\MggqRskg{2n}\) of all \(\sigma\in\MgguqR{2n}\) for which the matrix \(\su{2n}-\mpm{\sigma}{2n}\) is \tnnH{} and, in the case \(n\geq1\), for which additionally \(\su{j}=\mpm{\sigma}{j}\) is fulfilled for all \(j\in\mn{0}{2n-1}\).
\end{problem}
For our further consideration, we introduce certain sets of sequences of complex \tqqa{matrices} which are determined by properties of particular \tbHms{} built of them.
If \(n\in\NO\) and if \(\seqs{2n}\) is a sequence of complex \tqqa{matrices}, then \(\seqs{2n}\) is called \emph{\tHnnd{}} (\emph{\tHpd{}}, respectively) if the \tbHm{}
\beql{N2}
\Hn
\defeq\mat{\su{j+k}}_{j,k =0}^n
\eeq
is \tnnH{} (positive \tH{}, respectively).
(Note that \tHnnd{} (\tHpd{}, respectively) sequences of complex \tqqa{matrices} are also said to be Hankel non-negative definite (Hankel positive definite, respectively).)
For all \(n\in\NO\), we will write \(\Hggq{2n}\) (or \(\Hgq{2n}\), respectively) for the set of all sequences \(\seqs{2n}\) of complex \tqqa{matrices} which are \tHnnd{} (\tHpd{}, respectively).
A well-known solvability criterion for Problem~\mprobR{2n}{\lleq} is the following:
\bthml{SolvHam}
Let \(n\in\NO\) and let \(\seqs{2n}\) be a sequence of complex \tqqa{matrices}.
Then \(\MggqRskg{2n}\neq\emptyset\) if and only if \(\seqs{2n}\in\Hggq{2n}\).
\ethm
There are various proofs of \rthm{SolvHam}, for example \cite[\cthm{3.2}{}]{MR1624548}
and \cite[\cthm{4.16}{795}]{MR2570113}.
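For orientation, \rthm{SolvHam} can be illustrated numerically in the scalar case \(q=1\): the power moments of any measure on \(\R\) with finite moments form a \tHnnd{} sequence. The following sketch (the function names and the elimination-based test are our own illustration and belong to no cited work) builds the \tbHm{} from a given moment sequence and checks its definiteness:

```python
def hankel(s):
    """Hankel matrix [s_{j+k}]_{j,k=0..n} built from moments s_0, ..., s_{2n}
    (scalar case q = 1; illustrative helper, not from the cited works)."""
    n = (len(s) - 1) // 2
    return [[s[j + k] for k in range(n + 1)] for j in range(n + 1)]

def is_nonneg_definite(H, tol=1e-10):
    """Symmetric Gaussian elimination: a real symmetric matrix is
    nonnegative definite iff all pivots are >= 0 and every zero pivot
    has a vanishing row.  Sketch only; production code would use a
    rank-revealing factorization."""
    A = [row[:] for row in H]
    m = len(A)
    for i in range(m):
        piv = A[i][i]
        if piv < -tol:
            return False
        if abs(piv) <= tol:
            # a zero pivot of a PSD matrix forces its whole row to vanish
            if any(abs(A[i][j]) > tol for j in range(i + 1, m)):
                return False
            continue
        for r in range(i + 1, m):
            f = A[r][i] / piv
            for c in range(i, m):
                A[r][c] -= f * A[i][c]
    return True

# moments s_j of the standard Gaussian measure: 1, 0, 1, 0, 3 -> H_2 is PSD
assert is_nonneg_definite(hankel([1.0, 0.0, 1.0, 0.0, 3.0]))
# s_0 = 1, s_1 = 2, s_2 = 1 would force a negative variance -> no measure exists
assert not is_nonneg_definite(hankel([1.0, 2.0, 1.0]))
```

For \(q>1\) one would replace the scalar entries by \tqqa{blocks} and the pivot test by a suitable block factorization.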
If \(n\in\NO\) and if \(\seqs{2n}\in\Hggq{2n}\) (or \(\seqs{2n}\in\Hgq{2n}\), respectively), then, for each \(m\in\mn{0}{n}\), the sequence \(\seqs{2m}\) obviously belongs to \(\Hggq{2m}\) (or \(\Hgq{2m}\), respectively).
Thus, let \(\Hggqinf\) (or \(\Hgqinf\), respectively) be the set of all sequences \(\seqsinf\) of complex \tqqa{matrices} such that, for all \(n\in\NO\), the sequence \(\seqs{2n}\) belongs to \(\Hggq{2n}\) (or \(\Hgq{2n}\), respectively).
For all \(n\in\NO\), let \(\Hggeq{2n}\) (\tresp{}\ \(\Hgeq{2n}\)) be the set of all sequences \(\seqs{2n}\) of complex \tqqa{matrices} for which there exist complex \tqqa{matrices} \(\su{2n+1}\) and \(\su{2n+2}\) such that \(\seqs{2(n+1)}\) belongs to \(\Hggq{2\rk{n+1}}\) (\tresp{}\ \(\Hgq{2\rk{n+1}}\)).
Furthermore, for all \(n\in\NO\), we will use \(\Hggeq{2n+1}\) (\tresp{}\ \(\Hgeq{2n+1}\)) to denote the set of all sequences \(\seqs{2n+1}\) of complex \tqqa{matrices} for which there exists a complex \tqqa{matrix} \(\su{2n+2}\) such that \(\seqs{2(n+1)}\) belongs to \(\Hggq{2\rk{n+1}}\) (\tresp{}\ \(\Hgq{2\rk{n+1}}\)).
For each \(m\in\NO\), the elements of the set \(\Hggeq{m}\) are called \emph{\tHnnde{}} (or Hankel non-negative definite extendable) sequences.
For technical reasons, we set \(\Hggeqinf\defeq\Hggqinf\) and \(\Hgeqinf\defeq\Hgqinf\).
A well-known solvability criterion for Problem~\mprobR{\kappa}{=} is the following:
\bthml{T0708}
Let \(\kappa\in\NOinf\) and let \(\seqska\) be a sequence of complex \tqqa{matrices}.
Then \(\MggqRsg{\kappa}\neq\emptyset\) if and only if \(\seqska\in\Hggeqka\).
\ethm
A proof of \rthm{T0708} is given, \teg{}, in \zitaa{MR2805417}{\cthm{6.6}{486}}.
\breml{Schr2.7}
According to \zitaa{MR2570113}{\crem{2.29}{} and \cprop{2.30}{}}, we have \(\Hgeq{2\tau}=\Hgq{2\tau}\subseteq\Hggeq{2\tau}\subseteq\Hggq{2\tau}\) and \(\Hgq{2\tau}\neq\Hggeq{2\tau}\) for all \(\tau\in\NOinf\).
Furthermore, \(\Hgeq{\kappa}\subseteq\Hggeq{\kappa}\) for all \(\kappa\in\NOinf\) as well as \(\Hggeq{0}=\Hggq{0}\), whereas \(\Hggeq{2\tau}\neq\Hggq{2\tau}\) for all \(\tau\in\Ninf\).
\erem
The following result is essential for a parametrization of the set \(\MggqRskg{2n}\):
\bthml{SK7}
Let \(n\in\NO\) and let \(\seqs{2n}\in\Hggq{2n}\).
Then there exists a unique sequence \(\seqa{\tilde{s}}{2n}\in\Hggeq{2n}\) such that \(\MggqRakg{\seqa{\tilde{s}}{2n}}=\MggqRskg{2n}\).
\ethm
The existence of such a sequence \(\seqa{\tilde{s}}{2n}\) was first stated in \cite[\clem{2.12}{}]{MR1395706}.
A complete proof of \rthm{SK7} can be found in \cite[\cthm{7.3}{806--808}]{MR2570113}.
The main goal of \cite{MR3889658} is to present a purely matrix theoretical object which yields, applied to a special case, the explicit construction of the desired sequence \(\seqa{\tilde{s}}{2n}\in\Hggeq{2n}\).
At the end of this introductory section, we give some further notation.
We will write \(\Iq\) to denote the identity matrix in \(\Cqq\), whereas \(\Opq\) is the zero matrix belonging to \(\Cpq\).
Sometimes, if the size is clear from the context, we will omit the indices and write \(\EM\) and \(\NM\), \tresp{}
For each \(A\in\Cpq\), let \(\ran{A}\) be the column space of \(A\), let \(\nul{A}\) be the null space of \(A\), and let \(\rank A\) be the rank of \(A\).
For each \(A\in\Cqq\), we will use \(\rre A\) and \(\rim A\) to denote the real part and the imaginary part of \(A\), respectively:
\(\rre A\defeq\frac{1}{2}(A+A^\ad)\) and \(\rim A\defeq\frac{1}{2\iu}(A-A^\ad)\).
Furthermore, for each \(A\in\Cpq\), let \(\normS{A}\) be the operator norm of \(A\).
A complex \tpqa{matrix} \(A\) is said to be contractive if \(\normS{A}\leq1\).
We use \(\Kpq\) in order to designate the set of all contractive complex \tpqa{matrices}.
If \(A\in\Cqq\), then \(\det A\) denotes the determinant of \(A\).
For each \(A\in\Cqp\), let \(A^\mpi\) be the Moore--Penrose inverse of \(A\).
If \(p_1,p_2,q_1,q_2\in\N\) and if \(A_j\in\Coo{p_j}{q_j}\) for every choice of \(j\in\set{1,2}\), then let \(\diag(A_1,A_2)\defeq\smat{A_1 & \Ouu{p_1}{q_2}\\ \Ouu{p_2}{q_1} & A_2}\).
Furthermore, within the set \(\CHq\), we use the L\"owner semi-ordering:
If \(A\) and \(B\) are complex \tH{} \tqqa{matrices}, then we will write \(A\lleq B\) (or \(B\lgeq A\)) to indicate that \(B-A\) is a \tnnH{} matrix.
For all \(x,y\in\Cq\), by \(\ipE{x}{y}\) we denote the (left-hand side) Euclidean inner product of \(x\) and \(y\), \tie{}, we have \(\ipE{x}{y}\defeq y^\ad x\).
If \(\mM\) is a \tne{} subset of \(\Cq\), then let \(\mM^\oc\) be the set of all vectors in \(\Cq\) which are orthogonal to \(\mM\) (with respect to the Euclidean inner product \(\ipE{.}{.}\)).
If \(\mU\) is a linear subspace of \(\Cq\), then let \(\OPu{\mU}\) be the orthogonal projection matrix onto \(\mU\).
Let \(\dom\) be a \tne{} open subset of the complex plane.
If \(f\) is a complex-valued function meromorphic in \(\dom\), then we use \(\holpt{f}\) to denote the set of all points \(w\) at which \(f\) is holomorphic and we use the notation
\beql{NM}
\nst{f}
\defeq\setaca{w\in\holpt{f}}{f\rk{w}=0}.
\eeq
A \tpqa{matrix}-valued function \(F=\mat{f_{jk}}_{\substack{j=1,\dotsc,p\\k=1,\dotsc,q}}\) is said to be meromorphic in \(\dom\) if \(f_{jk}\) is meromorphic in \(\dom\) for each \(j\in\mn{1}{p}\) and each \(k\in\mn{1}{q}\).
In this case, we signify \(\holpt{F}\defeq\bigcap_{j=1}^p\bigcap_{k=1}^q\holpt{f_{jk}}\).
\section{Particular classes of holomorphic matrix functions}\label{S-III}
We will reformulate the matricial moment problems under consideration as equivalent interpolation problems for particular classes of holomorphic matrix-valued functions.
For this reason, we introduce in this section the corresponding function classes and summarize some of their essential properties needed in the sequel.
Most of this material is taken from \cite{MR2988005,MR3380267}.
Let \(\pip\defeq\setaca{z\in\C}{\rim z\in(0,\infty)}\) be the upper half-plane of the complex plane.
The class \(\RFq\) of all \emph{\tqqa{Herglotz--Nevanlinna} functions in \(\pip\)} consists of all matrix-valued functions \(F\colon\pip\to\Cqq\) which are holomorphic in \(\pip\) and which satisfy \(\rim F\rk{z}\in\Cggq\) for all \(z\in\pip\).
Detailed observations about matrix-valued Herglotz--Nevanlinna functions can be found in \cite{MR1784638,MR2988005}.
Especially, the functions belonging to \(\RFq\) admit a well-known integral representation.
Before we formulate this matricial generalization of a famous result due to R.~Nevanlinna, we observe that, for every choice of \(\nu\in\MggqR\) and \(z\in\CR\), the function \(f_z\colon\R\to\C\) given by \(f_z(x) \defeq\rk{1 + xz}/\rk{x-z}\) belongs to \(\LRC{\nu}\).
\bthmnl{Nevanlinna}{Theo21}
\benui
\item For each \(F\in\RFq\), there exists a unique triple \((\alpha, \beta, \nu)\in\CHq \times \Cggq \times \MggqR\) such that
\begin{align}\label{FKMM16_31_1}
F\rk{w}&=\alpha +\beta w +\int_{\R}\frac{1 + xw}{x - w}\nu \rk{\dif x}&\text{for each }w&\in\pip.
\end{align}
\item If \(\alpha\in\CHq\), if \(\beta\in\Cggq\), and if \(\nu\in\MggqR\), then \(F\colon\pip\to\Cqq\) defined by \eqref{FKMM16_31_1} belongs to \(\RFq\).
\eenui
\ethm
For each \(F\in\RFq\), the unique triple \((\alpha, \beta, \nu)\in\CHq \times \Cggq \times \MggqR\) for which the representation \eqref{FKMM16_31_1} holds true is called the \emph{\tNpo{\(F\)}} and we also write \((\alpha_F, \beta_F, \nu_F)\) instead of \((\alpha, \beta, \nu)\).
\blemnl{See \zitaa{MR1784638}{\cthm{5.4(iv)}{}}}{fkm12bP3.3}
If \(F\in\RFq\), then \(\beta_F = \lim_{y\to\infty} \rk{\iu y}^\inv F\rk{\iu y}\).
\elem
For our consideration, the class \(\RFOq\) given by
\[
\RFOq
\defeq\setaca*{ F\in\RFq }{ \sup_{y\in[1,\infty)} y\normS*{F\rk{\iu y}}<\infty},
\]
plays a key role.
The functions belonging to \(\RFOq\) admit a special integral representation as well.
Before we formulate this result, let us observe that, for every choice of \(\sigma\in\MggqR\) and \(z\in\C\setminus\R\), the function \(h_{z}\colon\R\to\C\) given by \(h_{z}(x)\defeq\rk{x-z}^\inv\) is bounded and continuous and, in particular, belongs to \(\LRC{\sigma}\).
Now we formulate the well-known matricial generalization of another classical result due to R.~Nevanlinna \cite{zbMATH02604576}:
\bthml{FKMM16_32}
\benui
\il{FKMM16_32.a} For each \(F\in\RFOq\), there is a unique \(\sigma\in\MggqR\) such that
\begin{align}\label{FKMM16_N1140D}
F\rk{w}&=\int_\R\frac{1}{x -w}\sigma\rk{\dif x}&\text{for each }w&\in\pip.
\end{align}
\il{FKMM16_32.b} If \(\sigma\in\MggqR\), then \(F\colon\pip\to\Cqq\) defined by \eqref{FKMM16_N1140D} belongs to \(\RFOq\).
\eenui
\ethm
\rthm{FKMM16_32} can be proved by using its well-known scalar version in the case \(q=1\) as well as \rrem{M.8-1}, \zitaa{MR2988005}{\clem{B.3}{1788}}, and the fact that, for each \(F\in\RFOq\) and each \(u\in\Cq\), the function \(u^\ad Fu\) belongs to \(\RFOu{1}\).
If \(F\in\RFOq\), then the unique \(\sigma\in\MggqR\) for which \eqref{FKMM16_N1140D} holds true is the so-called \emph{\tRSm{} (or matricial spectral measure) of \(F\)} and we also write \(\smF\) instead of \(\sigma\).
If \(\sigma\in\MggqR\) is given, then \(F\colon\pip\to\Cqq\) defined by \eqref{FKMM16_N1140D} is said to be the \emph{\tRSto{\(\sigma\)}}.
\breml{R16}
In view of \rthm{FKMM16_32}, one can now reformulate Problems~\mprobR{\kappa}{=} and~\mprobR{2n}{\lleq} in the language of \tRSt{s}:
\begin{problem}[\rprobR{\kappa}{=}]
Let \(\kappa\in\NOinf\) and let \(\seqska\) be a sequence of complex \tqqa{matrices}.
Parametrize the set \(\RFOqsg{\kappa}\) of all matrix-valued functions \(F\in\RFOq\) the \tRSm{} of which belongs to \(\MggqRsg{\kappa}\).
\end{problem}
\begin{problem}[\rprobR{2n}{\lleq}]
Let \(n\in\NO\) and let \(\seqs{2n}\) be a sequence of complex \tqqa{matrices}.
Parametrize the set \(\RFOqskg{2n}\) of all matrix-valued functions \(F\in\RFOq\) the \tRSm{} of which belongs to \(\MggqRskg{2n}\).
\end{problem}
\erem
Note that parametrizations of the solution sets of Problems~\rprobR{\kappa}{=} and~\rprobR{2n}{\lleq} in the general case are given in \cite{MR1624548,MR1740433,MR1395706,MR3380267,Thi06,FKKM20}.
\bleml{CS314}
If \(F\in\RFOq\), then, for each \(z\in\pip\), the equations \(\ran{F\rk{z}}=\ran{\sigma_F\rk{\R}}\) and \(\nul{F\rk{z}}=\nul{\sigma_F\rk{\R}}\)
hold true.
\elem
There is a proof of \rlem{CS314}, \teg{}, in \cite[\clem{8.2}{1785} and \cprop{8.9}{1786}]{MR2988005}.
Let
\[
\RFoq{-2}
\defeq\setaca*{F\in\RFq}{\lim_{y\to\infty}\rk*{\frac{1}{y}\normS*{F\rk{\iu y}}}=0}.
\]
\bexanl{\tcf{}~\zitaa{MR3380267}{\cexa{3.3}{190}}}{E0520}
Let \(E\in\Cqq\) be such that \(\rim E\in\Cggq\).
Then \(F\colon\pip\to\Cqq\) defined by \(F(z)\defeq E\) belongs to \(\RFoq{-2}\).
\eexa
For all \(A\in\Cpq\), let \(\Pqevena{A}\defeq\setaca{F\in\RFoq{-2}}{\nul{A}\subseteq\nul{\alpha_F}\cap\nul{\nu_F\rk{\R}}}\).
\bleml{L1654}
Let \(E\in\Cqq\) be such that \(\rim E\in\Cggq\) and let \(A\in\Cpq\) be such that \(\nul{A}\subseteq\nul{E}\).
Then \(F\colon\pip\to\Cqq\) defined by \(F(z)\defeq E\) belongs to \(\Pqevena{A}\).
\elem
\bproof
From \rexa{E0520} we see \(F\in\RFoq{-2}\).
In particular, \(F\in\RFq\).
Therefore, we can apply \zitaa{MR2988005}{\cprop{3.7}{1772}} to get \(\nul{F\rk{z}}=\nul{\alpha_F}\cap\nul{\beta_F}\cap\nul{\nu_F\rk{\R}}\) for all \(z\in\pip\), where \(\rk{\alpha_F,\beta_F,\nu_F}\) denotes the \tNpo{\(F\)}.
Consequently, \(\nul{A}\subseteq\nul{E}=\nul{\alpha_F}\cap\nul{\beta_F}\cap\nul{\nu_F\rk{\R}}\subseteq\nul{\alpha_F}\cap\nul{\nu_F\rk{\R}}\) follows.
\eproof
\section{Some facts on Nevanlinna pairs}\label{S-IV}
In this section, we state some results on certain pairs of matrix-valued functions meromorphic in \(\pip\).
These pairs take over the role of the free parameters within the parametrization of the set of solutions to the matricial power moment problems.
Before we recall the definition of this well-known class of so-called Nevanlinna pairs, we observe the following well-known fact:
\breml{AB53N}
The matrix \(\Jimq\) given by
\beql{JQ}
\Jimq
\defeq
\Mat{
\Oqq&-\iu\Iq \\
\iu\Iq&\Oqq
}
\eeq
is a \taaa{2q}{2q}{signature} matrix, \tie{}, \(\Jimq^\ad=\Jimq\) and \(\Jimq^2=\Iu{2q}\) hold true.
Moreover, \(\tmat{A \\ B}^\ad \rk{-\Jimq}\tmat{A \\ B}=2\rim\rk{B^\ad A}\)
for all \(A,B\in\Cqq\).
In particular, the case \(B=\Iq\) is of interest.
\erem
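For instance, in the scalar case \(q=1\), formula \eqref{JQ} reduces to \(\Jimq=\tmat{0&-\iu\\\iu&0}\), and for the choice \(A=\iu\) and \(B=1\) one computes directly \(\tmat{\iu\\1}^\ad\rk{-\Jimq}\tmat{\iu\\1}=\mat{-\iu,1}\tmat{\iu\\1}=2=2\rim\rk{B^\ad A}\).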
Let \(\dom\) be a \tne{} open subset of the complex plane.
As usual, we will call a subset \(\mD\) of \(\dom\) a discrete subset of \(\dom\) if \(\mD\) does not have an accumulation point in \(\dom\).
\bdefnl{def-nev-paar}
Let \(\phi\) and \(\psi\) be \tqqa{matrix}-valued functions meromorphic in \(\pip\).
The pair \(\copa{\phi}{\psi}\) is called \emph{\tqqa{Nevanlinna} pair in \(\pip\)} if there is a discrete subset \(\mD\) of \(\pip\) such that the following three conditions are fulfilled:
\baeqi{0}
\il{def-nev-paar.i} \(\phi\) and \(\psi\) are holomorphic in \(\pip \setminus \mD\).
\il{def-nev-paar.ii} \(\rank\smat{\phi \rk{w}\\\psi\rk{w}} =q\) for each \(w\in\pip \setminus \mD\).
\il{def-nev-paar.iii} \(\smat{\phi \rk{w}\\\psi\rk{w}}^\ad \rk{-\Jimq} \smat{\phi \rk{w}\\\psi\rk{w}}\in\Cggq\) for each \(w\in\pip \setminus \mD\).
\eaeqi
We denote the set of all \tqqa{Nevanlinna} pairs in \(\pip\) by \(\PRF{q}\).
\edefn
\breml{IG.b}
\rrem{AB53N} shows that condition~\ref{def-nev-paar.iii} of \rdefn{def-nev-paar} is equivalent to:
\begin{aseqi}{2}
\il{IG.iii'} \(\rim\rk{\ek{\psi\rk{w}}^\ad \phi\rk{w}}\in\Cggq\) for all \(w\in\pip\setminus\mD\).
\end{aseqi}
\erem
\breml{Ma09_1.11}
Let \(\copa{\phi}{\psi}\in\PRF{q}\).
For each \tqqa{matrix}-valued function \(g\) meromorphic in \(\pip\) such that the function \(\det g\) does not vanish identically, one can easily see that the pair \(\copa{\phi g}{\psi g}\) belongs to \(\PRF{q}\) as well.
Pairs \(\copa{\phi_1}{\psi_1},\copa{\phi_2}{\psi_2}\) belonging to \(\PRFq\) are said to be \emph{equivalent} if there exist a \tqqa{matrix}-valued function \(g\) meromorphic in \(\pip\) and a discrete subset \(\mD\) of \(\pip\) such that \(\phi_{1}\), \(\psi_{1}\), \(\phi_{2}\), \(\psi_{2}\), and \(g\) are holomorphic in \(\pip\setminus\mD\) and that \(\det g\rk{w} \neq0\) as well as \(\phi_2\rk{w}=\phi_1\rk{w}g\rk{w}\) and \(\psi_2\rk{w}=\psi_1\rk{w}g\rk{w}\) hold true for each \(w\in\pip\setminus\mD\).
Indeed, it is readily checked that this relation defines an equivalence relation on \(\PRF{q}\).
For each \(\copa{\phi}{\psi}\in\PRF{q}\), let \(\cpcl{\phi}{\psi}\) denote the equivalence class generated by \(\copa{\phi}{\psi}\).
Furthermore, if \(\mM\) is a \tne{} subset of \(\PRF{q}\), then let \(\cpsetcl{\mM}\defeq\setaca{\cpcl{\phi}{\psi}}{\copa{\phi}{\psi}\in\mM}\).
\erem
Now we want to study special subclasses of the class \(\PRF{q}\).
For each linear subspace \(\cU\) of \(\Cq\), let \(\OPu{\cU}\) be the orthogonal projection matrix onto \(\cU\) (see \rremss{PM}{R.P}).
\bnotal{CbezA5}
Let \(M\in\Cqp\).
We denote by \(\PRFqa{M}\) the set of all pairs \(\copa{\phi}{\psi}\in\PRF{q}\) such that \(\OPu{\ran{M}}\phi=\phi\) is fulfilled.
\enota
The construction of \rnota{CbezA5} will later be used to treat Problem~\mprobR{2n}{\lleq} by choosing \(M=\hp{2n}\), where \(\hp{2n}\) is given by \rdefn{CM2.1.64} below.
\breml{CbemA8}
Let \(M\in\Cqp \) and let \(\copa{\phi}{\psi}\in\PRFqa{M}\).
In view of \rrem{Ma09_1.11} and \rnota{CbezA5}, then \(\copa{\eta}{\theta}\in\PRFqa{M}\) holds true for all \(\copa{\eta}{\theta}\in\cpcl{\phi}{\psi}\).
\erem
The following lemma plays an important role in the proof of \rprop{L1204}, which is an essential step to a main result of this paper.
\bleml{CfolA15}
Let \(M\in\CHq\), let \(\copa{\phi}{\psi}\in\PRFqa{M}\), and let \(P\defeq\OPu{\ran{M}}\) and \(Q\defeq\OPu{\nul{M}}\).
Then there exists a pair \(\copa{\cpfP }{\cpfQ}\in\cpcl{\phi}{\psi}\) such that
\begin{align}\label{ST37N}
P\cpfP &=\cpfP,&
\cpfP P&=\cpfP,&
P\cpfQ &=\cpfQ -Q,&
&\text{and}&
\cpfQ P&=\cpfQ -Q.
\end{align}
\elem
\bproof
Let \(r\defeq\rank M\).
First we consider the case \(r\geq 1\).
Let \(u_1,u_2,\dotsc,u_r\) be an orthonormal basis of \(\ran{M}\) and let \(U\defeq\mat{u_1,u_2,\dotsc,u_r}\).
Then, from \zitaa{FKKM20}{\cprop{4.12(b)}{14}} we can infer that there exists a pair \(\copa{\tilde\phi}{\tilde\psi}\in\PRF{r}\) such that \(\copa{U\tilde\phi U^\ad}{U\tilde\psi U^\ad+\OPu{\ran{M}^\oc}}\in\PRFq\) and \(\cpcl{U\tilde\phi U^\ad}{U\tilde\psi U^\ad+\OPu{\ran{M}^\oc}}=\cpcl{\phi}{\psi}\), \tie{}\ \(\copa{U\tilde\phi U^\ad}{U\tilde\psi U^\ad+\OPu{\ran{M}^\oc}}\in\cpcl{\phi}{\psi}\) hold true.
Thus, setting \(\cpfP\defeq U\tilde\phi U^\ad\) and \(\cpfQ\defeq U\tilde\psi U^\ad+\OPu{\ran{M}^\oc}\), we have \(\copa{\cpfP}{\cpfQ}\in\cpcl{\phi}{\psi}\).
\rrem{R.P} shows \(P^2=P\) and \(P^\ad=P\).
Clearly, \(PU=U\).
Hence, we get \(U^\ad=\rk{PU}^\ad=U^\ad P\).
Consequently, \(P\cpfP=PU\tilde\phi U^\ad=\cpfP\) and \(\cpfP P=U\tilde\phi U^\ad P=\cpfP\).
From the assumption \(M\in\CHq\) and \rrem{tsa2} we get \(\nul{M}=\nul{M^\ad}=\ran{M}^\oc\), implying \(Q=\OPu{\ran{M}^\oc}\).
Hence, \(\cpfQ=U\tilde\psi U^\ad+Q\).
Using \rrem{A.R.0<P<1}, we obtain furthermore \(Q=\Iq-P\).
In view of \(P^2=P\), then \(QP=\Oqq\) and \(PQ=\Oqq\) follow.
Consequently, \(P\cpfQ=P\rk{U\tilde\psi U^\ad+Q}=PU\tilde\psi U^\ad+PQ=U\tilde\psi U^\ad=\cpfQ-Q\) and \(\cpfQ P=\rk{U\tilde\psi U^\ad+Q}P=U\tilde\psi U^\ad P+QP=U\tilde\psi U^\ad=\cpfQ-Q\).
Thus, in the case \(r\geq1\), the proof is complete.
If \(r=0\), then the assertion can be easily checked using \zitaa{FKKM20}{\cprop{4.12(a)}{14}}.
We omit the details.
\eproof
\section{\hHp{s}}\label{S1111}
Let \(\kappa\in\NOinf\) and let \(\seqska\) be a sequence of complex \tpqa{matrices}.
For every choice of integers \(\ell \) and \(m\) fulfilling \(0\leq \ell \leq m \leq\kappa\), let
\begin{align}\label{yz}
\yuu{\ell }{m}&\defeq\Mat{\su{\ell }\\\vdots\\\su{m}}&
&\text{and}&
\zuu{\ell }{m}&\defeq\mat{\su{\ell },\dotsc,\su{m}}.
\end{align}
Let
\begin{align}\label{Trip}
\Trip{0}&\defeq\Opq&
&\text{and}&
\Trip{n}\defeq\zuu{n}{2n-1}\Hu{n-1}^\mpi\yuu{n}{2n-1}
\end{align}
for each \(n\in\N\) such that \(2n-1\leq\kappa\).
For all \(n\in\NO\) fulfilling \(2n+1\leq\kappa\), we also introduce the \tbHm{} \(\Kn\defeq\matauuuo{\su{j+k+1}}{j}{k}{0}{n}\).
For every choice of \(n\in\N\) fulfilling \(2n-1\leq\kappa\), we set
\[
\Sigma_n
\defeq\zuu{n}{2n-1}\Hu{n-1}^\mpi\Ku{n-1}\Hu{n-1}^\mpi\yuu{n}{2n-1}.
\]
For each \(n\in\N\) fulfilling \(2n\leq\kappa\), let
\begin{align*}
M_n&\defeq\zuu{n}{2n-1}\Hu{n-1}^\mpi\yuu{n+1}{2n}&
&\text{and}&
N_n&\defeq\zuu{n+1}{2n}\Hu{n-1}^\mpi\yuu{n}{2n-1}.
\end{align*}
Let
\begin{align}\label{Lam}
\Lam{0}&\defeq\Opq&
&\text{and}&
\Lam{n}&\defeq M_n+N_n -\Sigma_n
\end{align}
for all \(n\in\N\) fulfilling \(2n\leq\kappa\).
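For instance, in the scalar case \(q=1\) with \(\su{0}=1\), we have \(\Ku{0}=\su{1}\), \(M_1=\su{1}\su{2}\), \(N_1=\su{2}\su{1}\), and \(\Sigma_1=\su{1}^3\), so that \eqref{Lam} yields \(\Lam{1}=2\su{1}\su{2}-\su{1}^3\).
In particular, for the moment sequence \(\su{j}=1\), \(j\in\NO\), of the Dirac measure concentrated at the point \(1\), this gives \(\Lam{1}=1=\su{3}\).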
Now we turn our attention to sequences of complex \tqqa{matrices} which are introduced in \rsec{S-II} and which are defined by certain properties of the \tbHms{} built from the given sequence.
\breml{R1624}
Let \(\kappa\in\NOinf\) and let \(\seqska\in\Hggeqka\).
Then \(\su{j}^\ad=\su{j}\) for each \(j\in\mn{0}{\kappa}\).
\erem
\breml{Schr2.5}
Let \(\kappa\in\NOinf\) and let \(\seqs{\kappa}\) be a sequence of complex \tqqa{matrices}.
It is easy to see that \(\seqs{\kappa}\in\Hggeq{\kappa}\) is valid if and only if \(\seqs{m}\in\Hggeq{m}\) holds true for each \(m\in\mn{0}{\kappa}\).
\erem
Now we recall useful parameters which will play a key role.
\bdefnnl{\zitaa{MR3014199}{\cdefn{2.3}{}}, \zitaa{FKKM20}{\cdefn{5.5}{}}}{CM2.1.64}
Let \(\kappa\in\NOinf\) and let \(\seqska\) be a sequence of complex \tpqa{matrices}.
For each \(k\in\NO\) fulfilling \(2k\leq\kappa\), let \(\hp{2k}\defeq\su{2k}-\Trip{k}\), where \(\Trip{k}\) is given by \eqref{Trip}, and, for each \(k\in\NO\) fulfilling \(2k+1\leq\kappa\), let \(\hp{2k+1}\defeq\su{2k+1}-\Lam{k}\), where \(\Lam{k}\) is given by \eqref{Lam}.
Then \(\sHp{\kappa}\) is called the \emph{\tsHp{} (or sequence of canonical Hankel parameters) of \(\seqska\)}.
\edefn
In view of \eqref{Trip} and \eqref{Lam}, we have in particular
\begin{align}\label{Hp.01}
\hp{0}&=\su{0}&
&\text{and}&
\hp{1}&=\su{1}.
\end{align}
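For instance, in the scalar case \(q=1\), \rdefn{CM2.1.64} and \eqref{Trip} yield \(\hp{2}=\su{2}-\su{1}\su{0}^\mpi\su{1}\), so that, \teg{}, the choice \(\su{0}=1\), \(\su{1}=0\), and \(\su{2}=2\) leads to \(\hp{0}=1\), \(\hp{1}=0\), and \(\hp{2}=2\).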
\breml{CM2.1.65}
Let \(\kappa\in\NOinf\), let \(\seqska\) be a sequence of complex \tpqa{matrices}, and let \(\sHp{\kappa}\) be the \tsHpo{\(\seqska\)}.
Then, for all \(m\in\mn{0}{\kappa}\), the sequence \(\sHp{m}\) coincides with the \tsHpo{\(\seqs{m}\)}.
\erem
\bdefnl{D.nat-ext}
Let \(n\in\NO\) and let \(\seqs{2n}\) be a sequence of complex \tpqa{matrices}.
Let \(\hat{s}_j\defeq\su{j}\) for each \(j\in\mn{0}{2n}\) and let \(\hat{s}_{2n+1}\defeq\Lam{n}\), where \(\Lam{n}\) is given by \eqref{Lam}.
Then we call the sequence \(\seqsh{2n+1}\) the \emph{\tnatexto{\(\seqs{2n}\)}}.
\edefn
In view of the construction of \(\seqsh{2n+1}\), we can immediately see from \eqref{Lam} that this sequence is completely determined by the original sequence \(\seqs{2n}\).
This observation will play an important role in the rest of the paper.
It should be mentioned that the notion introduced in \rdefn{D.nat-ext} is in some sense analogous to the central extension of a matricial Carath\'eodory or Schur sequence.
We get the following remark, which is essential for our further considerations.
\breml{R26-SK1}
Let \(\kappa\in\NOinf\), let \(\seqska\) be a sequence of complex \tpqa{matrices}, and let \(\sHp{\kappa}\) be the \tsHpo{\(\seqska\)}.
Let \(n\in\NO\) be such that \(2n\leq\kappa\) and let \(\shHp{2n+1}\) be the \tsHp{} of the \tnatext{} \(\seqsh{2n+1}\) of \(\seqs{2n}\).
Then \rrem{CM2.1.65} and \rdefnss{CM2.1.64}{D.nat-ext} show that \(\hhp{j}=\hp{j}\) for all \(j\in\mn{0}{2n}\) and \(\hhp{2n+1}=\Opq\).
\erem
In the sequel, we will repeatedly use the sequence \(\seqsh{2n+1}\) given in \rdefn{D.nat-ext}.
\breml{R8Z}
Let \(\kappa\in\NOinf\), let \(\seqska\) be a sequence of complex \tpqa{matrices}, and let \(\sHp{\kappa}\) be the \tsHpo{\(\seqska\)}.
Then it is readily checked that \(\seq{\hp{j}^\ad}{j}{0}{\kappa}\) is the \tsHpo{\(\seqa{s^\ad}{\kappa}\)}.
\erem
\bpropnl{\tcf{}~\zitaa{MR2805417}{\cprop{2.10(b)}{454} and \cprop{2.15(b)}{457}}}{DFKMT2.30N}
Let \(\kappa\in\NOinf\) and let \(\seqska\) be a sequence of complex \tqqa{matrices} with \tsHp{} \(\sHp{\kappa}\).
Then \(\seqska\) belongs to \(\Hggeq{\kappa}\) if and only if the following three conditions are fulfilled:
\bAeqi{0}
\il{DFKMT2.30N.I} \(\hp{2k}\in\Cggq\) for all \(k\in\NO\) such that \(2k\leq\kappa\).
\il{DFKMT2.30N.II} If \(\kappa\geq1\), then \(\hp{2k-1}^\ad=\hp{2k-1}\) as well as \(\ran{\hp{2k-1}}\subseteq\ran{\hp{2k-2}}\) hold true for all \(k\in\N\) fulfilling \(2k-1\leq\kappa\).
\il{DFKMT2.30N.III} If \(\kappa\geq2\), then \(\ran{\hp{2k}}\subseteq\ran{\hp{2k-2}}\) for all \(k\in\N\) such that \(2k\leq\kappa\).
\eAeqi
\eprop
\bpropnl{\tcf{}~\zitaa{MR2805417}{\cprop{2.10(d)}{454} and \cprop{2.15(c)}{457}}}{P1644}
Let \(\kappa\in\NOinf\) and let \(\seqs{2\kappa}\) be a sequence of complex \tqqa{matrices} with \tsHp{} \(\sHp{2\kappa}\).
Then \(\seqs{2\kappa}\) belongs to \(\Hgq{2\kappa}\) if and only if the following two conditions are fulfilled:
\bAeqi{0}
\il{P1644.I} \(\hp{2k}\in\Cgq\) for all \(k\in\mn{0}{\kappa}\).
\il{P1644.II} If \(\kappa\geq1\), then \(\hp{2k-1}^\ad=\hp{2k-1}\) for all \(k\in\mn{1}{\kappa}\).
\eAeqi
\eprop
\bpropl{L1237-A}
Let \(\kappa\in\NOinf\) and let \(\seqska\in\Hggeqka\).
Let \(n\in\NO\) be such that \(2n\leq\kappa\).
Then the \tnatext{} \(\seqsh{2n+1}\) of \(\seqs{2n}\) belongs to \(\Hggeq{2n+1}\).
\eprop
\bproof
Since \rrem{Schr2.5} yields \(\seqs{2n}\in\Hggeq{2n}\), the assertion follows from \rrem{R1624}, \zitaa{MR2805417}{\crem{2.1}{}}, and \zitaa{MR2570113}{\cprop{2.18}{770}}.
\eproof
Now we want to recall the definition of the first \tScht{} of sequences from \(\Cpq\).
This requires a little preparation.
Let \(\kappa\in\NOinf\) and let \(\seqska\) be a sequence of complex \tpqa{matrices}.
Then the sequence \(\seqa{s^\rez}{\kappa}\) of complex \tqpa{matrices} given by \(\su{0}^\rez\defeq\su{0}^\mpi\) and, for all \(k\in\mn{1}{\kappa}\), recursively by
\[
\su{k}^\rez
\defeq-\su{0}^\mpi\sum_{j=0}^{k-1}\su{k-j}\su{j}^\rez,
\]
is called the \emph{\trso{\(\seqska\)}}.
A detailed discussion of \trs{s} is given in \cite{MR3014197}.
As our main application of \trs{s} we explain the elementary step of the Schur type algorithm under consideration.
Let \(\kappa\in\minf{2}\cup\set{\infi}\) and let \(\seqska\) be a sequence of complex \tpqa{matrices} with \trs{} \(\seqa{s^\rez}{\kappa}\).
Then the sequence \(\seqa{s^\St{1}}{\kappa-2}\) defined, for all \(j\in\mn{0}{\kappa-2}\), by
\[
\su{j}^\St{1}
\defeq-\su{0}\su{j+2}^\rez\su{0}
\]
is said to be the \emph{first \tSchto{\(\seqska\)}}.
As considered in \zitaa{MR3014199}{\cdefn{9.1}{167}} already, the repeated application of the first \tScht{} in a natural way generates a corresponding algorithm for (finite or infinite) sequences of complex \tpqa{matrices}:
\breml{K-TRA}
Let \(\kappa\in\NOinf\) and let \(\seqska\) be a sequence of complex \tpqa{matrices}.
Then the sequence \(\seqa{s^\St{0}}{\kappa}\) given by \(\su{j}^\St{0}\defeq\su{j}\) for all \(j\in\mn{0}{\kappa}\) is called the \emph{\ath{0} \tSchto{\(\seqska\)}}.
If \(\kappa\geq2\), then the \ath{k} \tScht{} is defined recursively:
For all \(k\in\N\) fulfilling \(2k\leq\kappa\), the first \tScht{} \(\seqa{s^\St{k}}{\kappa-2k}\) of \(\seqa{s^\St{k-1}}{\kappa-2(k-1)}\) is called the \emph{\ath{k} \tSchto{\(\seqska\)}}.
\erem
\section{Special matrix polynomials}\label{S-VII}
In this section, we discuss special matrix polynomials, which have been used, \teg{}, in \cite[\cform{(4.13)}{226}]{MR1624548} and \cite[\capp{C}{}]{MR3380267} already.
Such matrix polynomials will be applied for the description of the solution set \(\RFOqskg{2n}\) of Problem~\rprobR{2n}{\lleq}, which was recognized as an equivalent reformulation of the Hamburger moment problem \mprobR{2n}{\lleq}.
More precisely, these matrix polynomials act as generating matrix-valued functions of the linear fractional transformations which establish the description of the set \(\RFOqskg{2n}\).
\bnotal{BezWV}
Let \(A,B\in\Cpq\).
Then let \(V_{A,B}\colon\C\to\Coo{\rk{p+q}}{\rk{p+q}}\) be defined by
\[
V_{A,B}\rk{z}
\defeq
\begin{pmat}[{|}]
\Opp & -A\cr\-
A^\mpi & z\Iq-A^\mpi B\cr
\end{pmat}.
\]
We set \(V_A\defeq V_{A, \Opq}\) (see also \cite[\cch{4}{219}]{MR1624548}).
\enota
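For instance, in the scalar case \(p=q=1\), the choice \(A=1\) and \(B=0\) gives \(V_{1}\rk{z}=\tmat{0&-1\\1&z}\) and, in particular, \(\det V_{1}\rk{z}=1\) for all \(z\in\C\).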
\bnotanl{see {\cite[\cpage{267}]{MR3380267}}}{SN-Tra}
Let \(\kappa\in\NOinf\) and let \(\seqska\) be a sequence of complex \tpqa{matrices}.
For all \(k\in\NO\) such that \(2k\leq\kappa\), let \(\seqa{s^\St{k}}{\kappa-2k}\) be the \ath{k} \tSchto{\(\seqska\)}.
In view of \rnota{BezWV}, for all \(n\in\NO\) fulfilling \(2n\leq\kappa\), let
\beql{SN-Tra.0}
\fV^{\rk{\seqs{2n}}}
\defeq
\begin{cases}
V_{\su{0}^\St{0}}\tincase{n =0}\\
V_{\su{0}^\St{0},\su{1}^\St{0}}V_{\su{0}^\St{1},\su{1}^\St{1}}\dotsm V_{\su{0}^\St{n-1},\su{1}^\St{n-1}}V_{\su{0}^\St{n}}\tincase{n\geq1}
\end{cases}
\eeq
and, for all \(n\in\NO\) fulfilling \(2n+1\leq\kappa\), let
\beql{SN-Tra.1}
\fV^{\rk{\seqs{2n+1}}}
\defeq V_{\su{0}^\St{0},\su{1}^\St{0}}V_{\su{0}^\St{1},\su{1}^\St{1}}\dotsm V_{\su{0}^\St{n},\su{1}^\St{n}}.
\eeq
\enota
The matrix-valued functions introduced in \rnota{SN-Tra} can be expressed by the \tsHp{} if the given sequence \(\seqska\) belongs to \(\Hggeqka\).
This is caused by the following theorem:
\bthmnl{\zitaa{MR3014199}{\cthm{9.15}{178}}}{HS}
Let \(\kappa\in\NOinf\) and let \(\seqska\in\Hggeq{\kappa}\) with \tsHp{} \(\sHp{\kappa}\).
For each \(k\in\NO\) such that \(2k\leq\kappa\), let \(\seqa{s^\St{k}}{\kappa-2k}\) be the \ath{k} \tSchto{\(\seqska\)}.
Then \(\hp{2k}=\su{0}^\St{k}\) for all \(k\in\NO\) such that \(2k\leq\kappa\) and \(\hp{2k+1}=\su{1}^\St{k}\) for all \(k\in\NO\) fulfilling \(2k+1\leq\kappa\).
\ethm
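For instance, in the scalar case \(q=1\), let \(\su{0}=1\), \(\su{1}=a\), and \(\su{2}=b\) with \(a,b\in\C\).
Then \(\su{0}^\rez=1\), \(\su{1}^\rez=-a\), and \(\su{2}^\rez=a^2-b\), so that
\[
\su{0}^\St{1}
=-\su{0}\su{2}^\rez\su{0}
=b-a^2
=\su{2}-\su{1}\su{0}^\mpi\su{1}
=\hp{2},
\]
which is in accordance with \rthm{HS} for \(k=1\) whenever \(\seqs{2}\in\Hggeq{2}\).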
Given \(n\in\N\) and arbitrary rectangular complex matrices \(A_1,A_2,\dotsc,A_n\), we write \(\col\seq{A_j}{j}{1}{n}\) (\tresp{}, \(\row\seq{A_j}{j}{1}{n}\)) for the block column (\tresp{}, block row) built from the matrices \(A_1,A_2,\dotsc,A_n\) if their numbers of columns (\tresp{}, rows) are all equal.
\bnotal{B.N.deg}
Let \(P\) be a complex \tpqa{matrix} polynomial.
For each \(n\in\NO\), let \(
\cvuo{n}{P}
\defeq\col\seq{A_j}{j}{0}{n}
\), where \((A_j)_{j=0}^\infi\) is the uniquely determined sequence of complex \tpqa{matrices}, such that \(P(w)=\sum_{j=0}^\infi w^jA_j\) holds true for all \(w\in\C\).
Denote by \(\deg P\defeq\sup\setaca{j\in\NO}{A_j\neq\Opq}\) the \emph{degree of \(P\)}.
If \(k\defeq\deg P\geq0\), then the matrix \(A_k\) is called the \emph{leading coefficient matrix of \(P\)}.
\enota
In particular, we have \(\deg P=-\infty\) if \(P(z)=\Opq\) for all \(z\in\C\).
\breml{B.R.P=euY}
If \(P\) is a complex \tqqa{matrix} polynomial, then \(P=\eu{n}\cvuo{n}{P}\) for all \(n\in\NO\) with \(n\geq \deg P\), where \(\eu{n}\colon\C\to\Coo{q}{(n+1)q}\) is defined by
\beql{NMB}
\eua{n}{z}
\defeq\mat{z^0\Iq,z^1\Iq,z^2\Iq,\dotsc,z^n\Iq}.
\eeq
\erem
\bdefnl{143.D1419}
Let \(\kappa\in\NOinf\) and let \(\seqs{2\kappa}\) be a sequence of complex \tqqa{matrices}.
A sequence \(\seq{P_k}{k}{0}{\kappa}\) of complex \tqqa{matrix} polynomials is called \emph{\tmosb{\(\seq{\su{j}}{j}{0}{2\kappa}\)}} if it satisfies the following conditions:
\bAeqi{0}
\il{143.D1419.I} For each \(k\in\mn{0}{\kappa}\), the matrix polynomial \(P_k\) has degree \(k\) and leading coefficient matrix \(\Iq\).
\il{143.D1419.II} \(\ek{\cvuo{n}{P_j}}^\ad\Hu{n}\ek{\cvuo{n}{P_k}}=\Oqq\) for all \(j,k\in\mn{0}{\kappa}\) with \(j\neq k\), where \(n\defeq\max\set{j,k}\) and \(\Hu{n}\) is given by \eqref{N2}.
\eAeqi
\edefn
Using \rrem{B.R.P=euY}, we can conclude from~\zitaa{MR2805417}{\cpropss{5.8(a1)}{479}{5.9(a)}{481}}:
\bpropnl{\zitaa{MR4181333}{\cprop{D.5}{488}}}{B.P.oMP-lGs}
Let \(\kappa\in\NOinf\), let \(\seqs{2\kappa}\in\Hggq{2\kappa}\), and let \(\seq{P_k}{k}{0}{\kappa}\) be a sequence of complex \tqqa{matrix} polynomials, satisfying condition~\ref{143.D1419.I} of \rdefn{143.D1419}.
Then \(\seq{P_k}{k}{0}{\kappa}\) is a \tmosb{\(\seqs{2\kappa}\)} if and only if \(\Hu{k-1}X_{k}=\yuu{k}{2k-1}\) for all \(k\in\mn{1}{\kappa}\), where \(X_{k}\) is taken from the \tbr{} \(\cvuo{k}{P_k}=\tmat{-X_{k}\\ \Iq}\) and where \(\Hu{k-1}\) and \(\yuu{k}{2k-1}\) are given by \eqref{N2} and \eqref{yz}, respectively.
\eprop
The following considerations of this section are aimed at taking a closer look at the four \tqqa{blocks} of the \taaa{2q}{2q}{matrix}-valued function \(\fV^{\rk{\seqs{2n}}}\).
We will see that these are particular \tqqa{matrix} polynomials which satisfy recurrence formulas with the \tsHp{} as coefficients.
For each \(\kappa\in\NOinf\), let
\beql{K5.11}
\ev{\kappa}
\defeq\sup\setaca{k\in\NO}{2k-1\leq\kappa}.
\eeq
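For instance, \eqref{K5.11} gives \(\ev{0}=0\), \(\ev{1}=\ev{2}=1\), \(\ev{3}=\ev{4}=2\), and \(\ev{\infi}=\infi\).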
\bdefnl{K19}
Let \(\kappa\in\NOinf\), let \(\seqska\) be a sequence of complex \tqqa{matrices}, and let \(\sHp{\kappa}\) be the \tsHpo{\(\seqska\)}.
Let \(\pa{0}, \pb{0}, \pc{0}, \pd{0}\colon\C\to\Cqq\) be defined by
\begin{align}\label{K19.0}
\pa{0}\rk{z}&\defeq\Oqq,&
\pb{0}\rk{z}&\defeq\Iq, &
\pc{0}\rk{z}&\defeq\Oqq,&
&\text{and}&
\pd{0}\rk{z}&\defeq\Iq.
\end{align}
If \(\kappa\geq 1\), then let \(\pa{1}, \pb{1}, \pc{1}, \pd{1}\colon\C\to\Cqq\) be given via
\begin{align}\label{K19.1}
\pa{1}\rk{z}&\defeq\hp{0}, &
\pb{1}\rk{z}&\defeq z\Iq-\hp{0}^\mpi \hp{1}, &
\pc{1}\rk{z}&\defeq\hp{0}, &
&\text{and} &
\pd{1}\rk{z}&\defeq z\Iq-\hp{1}\hp{0}^\mpi.
\end{align}
If \(\kappa\geq 2\), then, for all \(k\in\mn{2}{\infi}\) fulfilling \(2k-1\leq\kappa\), let \(\pa{k}, \pb{k}, \pc{k}, \pd{k}\colon\C\to\Cqq\) be defined recursively by
\begin{align}
\paa{k}{z}&\defeq\pa{k-1}\rk{z}(z\Iq-\hp{2k-2}^\mpi \hp{2k-1})-\pa{k-2}\rk{z}\hp{2k-4}^\mpi \hp{2k-2},\label{K19.pa}\\%\label{K8.3}
\pba{k}{z}&\defeq\pb{k-1}\rk{z}(z\Iq-\hp{2k-2}^\mpi \hp{2k-1})-\pb{k-2}\rk{z}\hp{2k-4}^\mpi \hp{2k-2},\label{K19.pb}\\%\label{K8.4}
\pca{k}{z}&\defeq\rk{z\Iq-\hp{2k-1}\hp{2k-2}^\mpi}\pc{k-1}\rk{z}-\hp{2k-2}\hp{2k-4}^\mpi \pc{k-2}\rk{z},\label{K19.pc}\\
\intertext{and}
\pda{k}{z}&\defeq\rk{z\Iq-\hp{2k-1}\hp{2k-2}^\mpi}\pd{k-1}\rk{z}-\hp{2k-2}\hp{2k-4}^\mpi \pd{k-2}\rk{z}.\label{K19.pd}
\end{align}
Regarding \eqref{K5.11}, we call the quadruple \(\abcd{\ev{\kappa}}\) the \emph{\tabcd{}}, abbreviating \sabcd{}, \emph{associated with \(\seqska\)}.
\edefn
\breml{SK32R}
Under the assumptions of \rdefn{K19}, for all \(k\in\NO\) such that \(2k-1\leq\kappa\), the matrix-valued functions \(\pa{k}\), \(\pb{k}\), \(\pc{k}\), and \(\pd{k}\) indeed are matrix polynomials, where the matrix polynomials \(\pb{k}\) and \(\pd{k}\) both have degree \(k\) and the same leading coefficient matrix \(\Iq\) (see also \zitaa{MR2805417}{\cthm{5.5}{475}}).
\erem
\breml{BK8.1}
Let \(\kappa\in\NOinf\), let \(\seqska\) be a sequence of complex \tqqa{matrices}, and let \(\abcd{\ev{\kappa}}\) be the \sabcdo{\(\seqska\)}.
In view of \rrem{CM2.1.65} and \rdefn{K19}, for all \(m\in\mn{0}{\kappa}\), then \(\abcd{\ev{m}}\) equals the \sabcdo{\(\seqs{m}\)}.
\erem
\bpropnl{\tcf{}~\zitaa{MR2805417}{\cthm{5.5(a)}{475}}}{143.T1336}
Let \(\kappa\in\NOinf\), let \(\seqs{2\kappa}\in\Hggq{2\kappa}\), and let \(\abcd{\kappa}\) be the \sabcdo{\(\seqs{2\kappa}\)}.
Then \(\seq{\pb{k}}{k}{0}{\kappa}\) is a \tmosb{\(\seqs{2\kappa}\)}.
\eprop
Let \(\kappa\in\NOinf\) and let \(\seqska\) be a sequence of complex \tpqa{matrices}.
For all \(m\in\mn{0}{\kappa}\), then let
\beql{M.N.S}
\SUu{m}
\defeq
\Mat{
s_0 & s_1 & s_2 & \hdots & s_m \\
\NM & s_0 & s_1 & \hdots & s_{m-1} \\
\NM & \NM & s_0 & \hdots & s_{m-2} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\NM & \NM & \NM & \hdots & s_0
}.
\eeq
\bnotal{B.N.sec}
Let \(\kappa\in\NOinf\), let \(\seqs{\kappa}\) be a sequence of complex \tqqa{matrices}, and let \(P\) be a complex \tqqa{matrix} polynomial with degree \(k\defeq\deg P\) satisfying \(k\leq\kappa+1\).
Then let \(P^\secra{s}\colon\C\to\Cqq\) be defined by \(P^\secra{s}(z)=\Oqq\) if \(k\leq0\) and by
\(
P^\secra{s}(z)
\defeq\eua{k-1}{z}\mat{\Ouu{kq}{q},\SUu{k-1}}\cvuo{k}{P}
\)
if \(k\geq1\).
\enota
\breml{143.R1532}
Let \(\kappa\in\NOinf\), let \(\seqska\) be a sequence of complex \tqqa{matrices}, and let \(P\) and \(Q\) be two complex \tqqa{matrix} polynomials, each having degree at most \(\kappa+1\).
Then \((P+Q)^\secra{s}=P^\secra{s}+Q^\secra{s}\).
Furthermore, \((PA)^\secra{s}=P^\secra{s}A\) for all \(A\in\Cqq\).
\erem
\blemnl{\zitaa{MR4181333}{\clem{E.4}{489}}}{143.L1644}
Let \(\kappa\in\Ninf\), let \(\seqska\) be a sequence of complex \tqqa{matrices}, let \(k\in\N\) with \(2k-1\leq\kappa\), and let \(P\) be a complex \tqqa{matrix} polynomial with degree \(k\) and leading coefficient matrix \(\Iq\), satisfying \(\Hu{k-1}X=\yuu{k}{2k-1}\), where the matrix \(X\) is taken from the \tbr{} \(\cvuo{k}{P}=\tmat{-X\\ \Iq}\).
Let the matrix polynomial \(Q\) be defined by \(Q(w)\defeq wP(w)\).
For all \(z\in\C\), then \(Q^\secra{s}(z)=zP^\secra{s}(z)\).
\elem
Using \rnota{B.N.sec} we have:
\bpropl{P1203}
Let \(\kappa\in\NOinf\), let \(\seqska\in\Hggeq{\kappa}\), and let \(\abcd{\ev{\kappa}}\) be the \sabcdo{\(\seqska\)}.
For all \(k\in\NO\) with \(2k-1\leq\kappa\), then \(\pa{k}=\pb{k}^\secra{s}\).
\eprop
\bproof
For all \(\ell\in\NO\) with \(2\ell-1\leq\kappa\), from \rrem{SK32R} we see that \(\pb{\ell}\) is a matrix polynomial with \(\deg\pb{\ell}=\ell\leq\kappa+1\) and leading coefficient matrix \(\Iq\).
Because of \(\deg\pb{0}=0\) and \rnota{B.N.sec}, we have \(\pb{0}^\secra{s}(z)=\Oqq\) for all \(z\in\C\).
In view of \eqref{K19.0}, hence \(\pb{0}^\secra{s}=\pa{0}\).
In the case \(\kappa=0\), the proof is complete.
Now suppose \(\kappa\geq1\).
Using \(\deg\pb{1}=1\), \rnotass{B.N.sec}{B.N.deg}, \eqref{NMB}, \eqref{M.N.S}, and \eqref{Hp.01}, we obtain
\[
\pb{1}^\secra{s}(z)
=\eua{0}{z}\mat{\Oqq,\SUu{0}}\cvuo{1}{\pb{1}}
=\Iq\cdot\mat{\Oqq,\su{0}}\matp{\uk}{\Iq}
=\su{0}
=\hp{0}
\]
for all \(z\in\C\).
In view of \eqref{K19.1}, hence \(\pb{1}^\secra{s}=\pa{1}\).
In the case \(1\leq\kappa\leq2\) the proof is complete.
Now suppose \(\kappa\geq3\).
Then there exists an integer \(\ell \in\minf{2}\) with \(2\ell -1\leq\kappa\) such that \(\pa{\ell -1}=\pb{\ell -1}^\secra{s}\) and \(\pa{\ell -2}=\pb{\ell -2}^\secra{s}\) hold true.
From \(\seqska\in\Hggeq{\kappa}\) we can infer \(\seqs{2\ell -2}\in\Hggq{2\ell -2}\).
Regarding \rrem{BK8.1}, we can thus apply \rprop{143.T1336} to see that \(\seq{\pb{k}}{k}{0}{\ell -1}\) is a \tmosb{\(\seqs{2\ell -2}\)}.
In view of \rprop{B.P.oMP-lGs}, then \rlem{143.L1644} yields \(B_{\ell -1}^\secra{s}(z)=z\pb{\ell -1}^\secra{s}(z)\) for all \(z\in\C\), where \(B_{\ell -1}\colon\C\to\Cqq\) is defined by \(B_{\ell -1}(z)\defeq z\pba{\ell -1}{z}\).
Because of \(\pa{\ell -1}=\pb{\ell -1}^\secra{s}\), we hence obtain \(B_{\ell -1}^\secra{s}=A_{\ell -1}\), where \(A_{\ell -1}\colon\C\to\Cqq\) is defined by \(A_{\ell -1}(z)\defeq z\paa{\ell -1}{z}\).
According to \eqref{K19.pb} and \eqref{K19.pa}, we have furthermore \(\pb{\ell }=B_{\ell -1}-\pb{\ell -1}\hp{2\ell -2}^\mpi\hp{2\ell -1}-\pb{\ell -2}\hp{2\ell -4}^\mpi\hp{2\ell -2}\) and \(\pa{\ell }=A_{\ell -1}-\pa{\ell -1}\hp{2\ell -2}^\mpi\hp{2\ell -1}-\pa{\ell -2}\hp{2\ell -4}^\mpi\hp{2\ell -2}\).
Regarding \(\deg B_{\ell -1}=\ell \leq\kappa+1\), the application of \rrem{143.R1532} yields then
\[\begin{split}
\pb{\ell }^\secra{s}
&=\rk{B_{\ell -1}-\pb{\ell -1}\hp{2\ell -2}^\mpi\hp{2\ell -1}-\pb{\ell -2}\hp{2\ell -4}^\mpi\hp{2\ell -2}}^\secra{s}\\
&=B_{\ell -1}^\secra{s}-\pb{\ell -1}^\secra{s}\hp{2\ell -2}^\mpi\hp{2\ell -1}-\pb{\ell -2}^\secra{s}\hp{2\ell -4}^\mpi\hp{2\ell -2}\\
&=A_{\ell -1}-\pa{\ell -1}\hp{2\ell -2}^\mpi\hp{2\ell -1}-\pa{\ell -2}\hp{2\ell -4}^\mpi\hp{2\ell -2}
=\pa{\ell }.
\end{split}\]
Thus, \(\pa{k}=\pb{k}^\secra{s}\) is inductively proved for all \(k\in\NO\) with \(2k-1\leq\kappa\).
\eproof
Now we write the recurrence formulas stated in \rdefn{K19} in an alternative form.
\breml{MD51N}
Let \(\kappa\in\Ninf\) and let \(\seqska\) be a sequence of complex \tqqa{matrices} with \tsHp{} \(\sHp{\kappa}\) and \sabcd{} \(\abcd{\ev{\kappa}}\).
For every choice of \(k\in\N\) such that \(2k+1\leq\kappa\) and \(z\in\C\), then
\begin{align*}
\Mat{
\paa{k}{z}&\pa{k+1}\rk{z}\\
\pba{k}{z}&\pb{k+1}\rk{z}
}
&=
\Mat{
\pa{k-1}\rk{z}&\paa{k}{z}\\
\pb{k-1}\rk{z}&\pba{k}{z}
}
\begin{pmat}[{|}]
\Oqq&-\hp{2k-2}^\mpi \hp{2k}\cr\-
\Iq &z\Iq -\hp{2k}^\mpi\hp{2k+1}\cr
\end{pmat}
\intertext{and}
\Mat{
\pca{k}{z}&\pda{k}{z}\\
\pc{k+1}\rk{z}&\pd{k+1}\rk{z}
}
&=
\begin{pmat}[{|}]
\Oqq&\Iq \cr\-
-\hp{2k}\hp{2k-2}^\mpi&z\Iq -\hp{2k+1}\hp{2k}^\mpi\cr
\end{pmat}
\Mat{
\pc{k-1}\rk{z}&\pd{k-1}\rk{z}\\
\pca{k}{z}&\pda{k}{z}
}.
\end{align*}
\erem
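For orientation, we note that the first identity in \rrem{MD51N} is nothing but the recursions \eqref{K19.pa} and \eqref{K19.pb} in matrix form: comparing, for instance, the respective second columns of both sides yields
\[
\pa{k+1}\rk{z}
=\pa{k-1}\rk{z}\rk*{-\hp{2k-2}^\mpi\hp{2k}}+\paa{k}{z}\rk*{z\Iq-\hp{2k}^\mpi\hp{2k+1}},
\]
which coincides with \eqref{K19.pa} written for the index \(k+1\), and analogously for \(\pb{k+1}\).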
\breml{BK8.2}
Let \(\kappa\in\NOinf\), let \(\seqska\) be a sequence of \tH{} complex \tqqa{matrices}, and let \(\abcd{\ev{\kappa}}\) be the \sabcdo{\(\seqska\)}.
In view of \rremss{R8Z}{A.R.A++*}, then \(\pca{k}{z}=[\paa{k}{\ko{z}}]^\ad\) and \(\pda{k}{z}=[\pba{k}{\ko{z}}]^\ad\) hold true for every choice of \(z\in\C\) and \(k\in\NO\) fulfilling \(2k-1\leq\kappa\).
\erem
Now we are going to consider a quadruple of \tqqa{matrix} polynomials which, as we will see, connects the \sabcdo{\(\seqs{2n}\)} introduced in \rdefn{K19} with the particular sequence \(\seqsh{2n+1}\) introduced in \rdefn{D.nat-ext}.
\bnotal{N-abcdO}
Let \(\kappa\in\NOinf\) and let \(\seqska\) be a sequence of complex \tqqa{matrices} with \tsHp{} \(\sHp{\kappa}\) and \sabcd{} \(\abcd{\ev{\kappa}}\), where \(\ev{\kappa}\) is given in \eqref{K5.11}.
Let \(\pao{1},\pbo{1},\pco{1},\pdo{1}\colon\C\to\Cqq\) be defined by
\begin{align}\label{abcdO-1}
\pao{1}\rk{z}&\defeq\hp{0},&
\pbo{1}\rk{z}&\defeq z\Iq,&
\pco{1}\rk{z}&\defeq\hp{0},&
\pdo{1}\rk{z}&\defeq z\Iq.
\end{align}
For all \(k\in\mn{2}{\infi}\) fulfilling \(2k-2\leq\kappa\), let \(\pao{k},\pbo{k},\pco{k},\pdo{k}\colon\C\to\Cqq\) be defined by
\begin{align*}
\pao{k}\rk{z}&\defeq z\paa{k-1}{z}-\pa{k-2}\rk{z}\hp{2k-4}^\mpi \hp{2k-2},&
\pbo{k}\rk{z}&\defeq z\pba{k-1}{z}-\pb{k-2}\rk{z}\hp{2k-4}^\mpi \hp{2k-2},\\
\pco{k}\rk{z}&\defeq z\pca{k-1}{z}-\hp{2k-2}\hp{2k-4}^\mpi \pc{k-2}\rk{z},&
\pdo{k}\rk{z}&\defeq z\pda{k-1}{z}-\hp{2k-2}\hp{2k-4}^\mpi \pd{k-2}\rk{z}.
\end{align*}
\enota
In view of \eqref{K19.0} and \eqref{K19.1}, we have in particular
\begin{align*}
\pao{2}\rk{z}&=z\hp{0},&
&&
\pbo{2}\rk{z}
&=z^2\Iq-z\hp{0}^\mpi\hp{1}-\hp{0}^\mpi\hp{2},\\
\pco{2}\rk{z}&=z\hp{0},&
&\text{and}&
\pdo{2}\rk{z}
&=z^2\Iq-z\hp{1}\hp{0}^\mpi-\hp{2}\hp{0}^\mpi
\end{align*}
for all \(z\in\C\).
Regarding \rdefn{K19} and \rnota{N-abcdO}, we see:
\breml{R1358}
Let \(\kappa\in\Ninf\) and let \(\seqska\) be a sequence of complex \tqqa{matrices} with \tsHp{} \(\sHp{\kappa}\) and \sabcd{} \(\abcd{\ev{\kappa}}\).
Then
\begin{align*}
\paa{k}{z}&=\paoa{k}{z}-\paa{k-1}{z}\hp{2k-2}^\mpi\hp{2k-1},&
\pba{k}{z}&=\pboa{k}{z}-\pba{k-1}{z}\hp{2k-2}^\mpi\hp{2k-1},\\
\pca{k}{z}&=\pcoa{k}{z}-\hp{2k-1}\hp{2k-2}^\mpi\pca{k-1}{z},&
\pda{k}{z}&=\pdoa{k}{z}-\hp{2k-1}\hp{2k-2}^\mpi\pda{k-1}{z}
\end{align*}
for all \(k\in\N\) with \(2k-1\leq\kappa\) and all \(z\in\C\).
\erem
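As a consistency check for \(k=1\), we recall from \eqref{abcdO-1} that \(\pboa{1}{z}=z\Iq\) and \(\pdoa{1}{z}=z\Iq\), whereas \eqref{K19.0} provides \(\pba{0}{z}=\Iq\) and \(\pda{0}{z}=\Iq\) (see also \eqref{CN3} below). Hence, \rrem{R1358} yields
\[
\pba{1}{z}=z\Iq-\hp{0}^\mpi\hp{1}
\qquad\text{and}\qquad
\pda{1}{z}=z\Iq-\hp{1}\hp{0}^\mpi,
\]
in accordance with \eqref{K19.1}.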
The following result shows that the \tqqa{matrix} polynomials introduced in \rnota{N-abcdO} occur in the \sabcdo{\(\seqsh{2n+1}\)}.
\bleml{L1237-B}
Let \(\kappa\in\NOinf\) and let \(\seqska\) be a sequence of complex \tqqa{matrices}.
Let \(n\in\NO\) be such that \(2n\leq\kappa\) and let \(\seqsh{2n+1}\) be the \tnatexto{\(\seqs{2n}\)}.
Let \(\abcd{\ev{\kappa}}\) be the \sabcdo{\(\seqska\)} and let \(\habcd{n+1}\) be the \sabcdo{\(\seqsh{2n+1}\)}.
Then
\begin{align*}
\pah{k}&=\pa{k}, &
\pbh{k}&=\pb{k}, &
\pch{k}&=\pc{k}, &
&\text{and} &
\pdh{k}&=\pd{k}
\end{align*}
for each \(k\in\mn{0}{n}\).
Furthermore,
\begin{align*}
\pah{n+1}&=\pao{n+1}, &
\pbh{n+1}&=\pbo{n+1}, &
\pch{n+1}&=\pco{n+1}, &
&\text{and} &
\pdh{n+1}&=\pdo{n+1}.
\end{align*}
\elem
\bproof
Regarding \rrem{R26-SK1} and \rnota{N-abcdO}, this can be seen from the (recursive) construction in \rdefn{K19}.
\eproof
In particular, we see from \rlem{L1237-B} that the quadruple \([\pao{n+1},\pbo{n+1},\pco{n+1},\pdo{n+1}]\) is completely determined by the sequence \(\seqs{2n}\).
In the remaining considerations of this section, we always consider sequences of \tqqa{matrices} which are \tHnnde{}.
This is done against the background of \rthm{SK7}.
\bleml{BK8.6}
Let \(\kappa\in\NOinf\), let \(\seqska\in\Hggeq{\kappa}\), and let \(\abcd{\ev{\kappa}}\) be the \sabcdo{\(\seqska\)}.
Then \(\det\pba{k}{z}\neq 0\) as well as \(\det\pda{k}{z}\neq 0\) hold true for every choice of \(k\in\NO\) fulfilling \(2k-1\leq\kappa\) and for each \(z\in\C\setminus\R\).
\elem
\bproof
The case \(\kappa=0\) is trivial.
Consider the case that \(\kappa=2\tau\) with some \(\tau\in\Ninf\).
Since \eqref{K5.11} yields \(\ev{\kappa}=\tau\) and \rrem{Schr2.7} provides \(\Hggeq{2\tau}\subseteq\Hggq{2\tau}\), then the assertion follows immediately from \zitaa{MR2805417}{\cthm{5.5}{475}}.
Consider the case that \(\kappa=2n-1\) with some \(n\in\N\).
Since we have supposed that \(\seqska\) belongs to \(\Hggeq{\kappa}\), there exists a matrix \(\su{2n}\in\Cqq\) such that \(\seqs{2n}\) belongs to \(\Hggq{2n}\).
Let \(\dabcd{n}\) be the \sabcdo{\(\seqs{2n}\)}.
Regarding \eqref{K5.11}, the application of \rrem{BK8.1} with \(m=2n-1\) to the sequence \(\seqs{2n}\) then yields that \(\dabcd{n}\) is the \sabcdo{\(\seqs{2n-1}\)}, \tie{}, with \(\seqska\).
Thus, we receive that \(\pb{k}^\diamond=\pb{k}\) and \(\pd{k}^\diamond=\pd{k}\) for all \(k\in\mn{0}{n}\).
Consequently, regarding \(\ev{\kappa}=n\), the application of \zitaa{MR2805417}{\cthm{5.5}{475}} to the sequence \(\seqs{2n}\) completes the proof.
\eproof
\bleml{K15-1}
Let \(\kappa\in\NOinf\) and let \(\seqska\in\Hggeq{\kappa}\).
For every choice of \(n\in\NO\) fulfilling \(2n\leq\kappa\) and \(z\in\C\setminus\R\), then \(\det\pboa{n+1}{z}\neq0\) and \(\det\pdoa{n+1}{z}\neq0\).
\elem
\bproof
Let \(n\in\NO\) be such that \(2n\leq\kappa\).
\rprop{L1237-A} shows that the \tnatext{} \(\seqsh{2n+1}\) of \(\seqs{2n}\) belongs to \(\Hggeq{2n+1}\).
Let \(\habcd{n+1}\) be the \sabcdo{\(\seqsh{2n+1}\)}.
Using \rlem{L1237-B}, then we perceive that \(\pbh{n+1}=\pbo{n+1}\) and \(\pdh{n+1}=\pdo{n+1}\) hold true.
Since the application of \rlem{BK8.6} to \(\seqsh{2n+1}\) yields \(\det\pbh{n+1}\rk{z}\neq 0\) and \(\det\pdh{n+1}\rk{z}\neq 0\) for all \(z\in\C\setminus\R\), the proof is complete.
\eproof
\breml{R1019}
Let \(\kappa\in\NOinf\) and let \(\seqska\in\Hggeq{\kappa}\) with \tsHp{} \(\sHp{\kappa}\).
From \rprop{DFKMT2.30N} we infer
\begin{align}\label{R1019.0}
\hp{j}^\ad&=\hp{j}&\text{for all }j&\in\mn{0}{\kappa}
\end{align}
and \(\ran{\hp{2k-1}}\subseteq\ran{\hp{2k-2}}\) for all \(k\in\N\) fulfilling \(2k-1\leq\kappa\) as well as \(\ran{\hp{2k}}\subseteq\ran{\hp{2k-2}}\) for all \(k\in\N\) fulfilling \(2k\leq\kappa\).
Using \eqref{R1019.0} and \rrem{tsa2}, we can conclude \(\nul{\hp{2k-2}}\subseteq\nul{\hp{2k-1}}\) for all \(k\in\N\) fulfilling \(2k-1\leq\kappa\) and \(\nul{\hp{2k-2}}\subseteq\nul{\hp{2k}}\) for all \(k\in\N\) such that \(2k\leq\kappa\).
Hence, the application of \rrem{tsa12} yields
\begin{align}
\hp{2k-2}\hp{2k-2}^\mpi\hp{2k-1}&=\hp{2k-1},&
\hp{2k-1}\hp{2k-2}^\mpi\hp{2k-2}&=\hp{2k-1}&
\text{for all }k&\in\N\text{ with }2k-1\leq\kappa\label{R1019.1}
\intertext{and}
\hp{2k-2}\hp{2k-2}^\mpi\hp{2k}&=\hp{2k},&
\hp{2k}\hp{2k-2}^\mpi\hp{2k-2}&=\hp{2k}&
\text{ for all }k&\in\N\text{ with }2k\leq\kappa.\label{R1019.2}
\end{align}
\erem
The following considerations yield interesting interrelations between the sequences forming a \tabcd{}.
\bleml{L0908}
Let \(\kappa\in\Ninf\) and let \(\seqska\in\Hggeq{\kappa}\) with \tsHp{} \(\sHp{\kappa}\) and \sabcd{} \(\abcd{\ev{\kappa}}\).
For all \(n\in\NO\) such that \(2n+1\leq\kappa\), then
\beql{L0908.R4A}
\Mat{\pc{n}&\pd{n}\\\pc{n+1}&\pd{n+1}}
\Mat{\Oqq&-\Iq \\\Iq &\Oqq}
\Mat{\pa{n}&\pa{n+1}\\\pb{n}&\pb{n+1}}
=
\begin{pmat}[{|}]
\pd{n}\pa{n}-\pc{n}\pb{n}&\pd{n}\pa{n+1}-\pc{n}\pb{n+1}\cr\-
\pd{n+1}\pa{n}-\pc{n+1}\pb{n}&\pd{n+1}\pa{n+1}-\pc{n+1}\pb{n+1}\cr
\end{pmat}
\eeq
and
\beql{L0908.R4c}
\Mat{\pc{n}&\pd{n}\\\pc{n+1}&\pd{n+1}}
\Mat{\Oqq&-\Iq \\\Iq &\Oqq}
\Mat{\pa{n}&\pa{n+1}\\\pb{n}&\pb{n+1}}
=
\Mat{
\Oqq&\hp{2n}\\
-\hp{2n}&\Oqq
}.
\eeq
\elem
\bproof
The identity \eqref{L0908.R4A} follows for all \(n\in\NO\) with \(2n+1\leq\kappa\) by straightforward computation.
According to \rrem{R1019}, we have \eqref{R1019.1} and \eqref{R1019.2}.
Let \(\varepsilon\colon\C\to\C\) be defined by \(\varepsilon\rk{z}\defeq z\).
From \eqref{L0908.R4A}, \rdefn{K19}, and \eqref{R1019.1} we get
\begin{multline}\label{L0908.R4BaaD}
\Mat{\pc{0}&\pd{0}\\\pc{1}&\pd{1}}
\Mat{\Oqq&-\Iq \\\Iq &\Oqq}
\Mat{\pa{0}&\pa{1}\\\pb{0}&\pb{1}}
=
\begin{pmat}[{|}]
\pd{0}\pa{0}-\pc{0}\pb{0}&\pd{0}\pa{1}-\pc{0}\pb{1}\cr\-
\pd{1}\pa{0}-\pc{1}\pb{0}&\pd{1}\pa{1}-\pc{1}\pb{1}\cr
\end{pmat}\\
=
\begin{pmat}[{|}]
\Oqq&\hp{0}\cr\-
-\hp{0}&\rk{\varepsilon \Iq -\hp{1}\hp{0}^\mpi}\hp{0}-\hp{0}\rk{\varepsilon \Iq -\hp{0}^\mpi\hp{1}}\cr
\end{pmat}
=\Mat{
\Oqq&\hp{0}\\
-\hp{0}&\Oqq
}.
\end{multline}
If \(\kappa\leq2\), then, regarding \eqref{L0908.R4BaaD}, the proof is finished.
Now suppose \(\kappa\geq3\).
Regarding \eqref{L0908.R4BaaD}, the following statement holds true:
\bAeqi{0}
\il{L0908.I} There is an \(m\in\N\) fulfilling \(2m+1\leq\kappa\) such that \eqref{L0908.R4c} holds true for all \(n\in\mn{0}{m-1}\).
\eAeqi
Taking into account \rrem{MD51N}, \ref{L0908.I}, and \eqref{R1019.2}, we conclude
\beql{L0908.R4Baz}\begin{split}
&\Mat{\pc{m}&\pd{m}\\\pc{m+1}&\pd{m+1}}
\Mat{\Oqq&-\Iq \\\Iq &\Oqq}
\Mat{\pa{m}&\pa{m+1}\\\pb{m}&\pb{m+1}}\\
&=
\begin{pmat}[{|}]
\Oqq&\Iq \cr\-
-\hp{2m}\hp{2m-2}^\mpi&\varepsilon \Iq -\hp{2m+1} \hp{2m}^\mpi\cr
\end{pmat}\\
&\qquad\times
\Mat{\pc{m-1}&\pd{m-1}\\\pc{m}&\pd{m}}
\Mat{\Oqq&-\Iq \\\Iq &\Oqq}
\Mat{\pa{m-1}&\pa{m}\\\pb{m-1}&\pb{m}}
\begin{pmat}[{|}]
\Oqq&-\hp{2m-2}^\mpi \hp{2m}\cr\-
\Iq &\varepsilon \Iq -\hp{2m}^\mpi \hp{2m+1}\cr
\end{pmat}\\
&=
\begin{pmat}[{|}]
\Oqq&\Iq \cr\-
-\hp{2m}\hp{2m-2}^\mpi&\varepsilon \Iq -\hp{2m+1} \hp{2m}^\mpi\cr
\end{pmat}
\Mat{\Oqq&\hp{2m-2}\\-\hp{2m-2}&\Oqq}
\begin{pmat}[{|}]
\Oqq&-\hp{2m-2}^\mpi \hp{2m}\cr\-
\Iq &\varepsilon \Iq -\hp{2m}^\mpi \hp{2m+1}\cr
\end{pmat}\\
&=\Mat{
\Oqq&\hp{2m-2}\hp{2m-2}^\mpi \hp{2m}\\
-\hp{2m}\hp{2m-2}^\mpi\hp{2m-2}&R_{m+1}}
=\Mat{\Oqq& \hp{2m}\\- \hp{2m}&R_{m+1}},
\end{split}\eeq
where
\[
R_{m+1}
\defeq-\hp{2m}\hp{2m-2}^\mpi\hp{2m-2}\rk{\varepsilon \Iq -\hp{2m}^\mpi \hp{2m+1}}+\rk{\varepsilon \Iq -\hp{2m+1} \hp{2m}^\mpi}\hp{2m-2}\hp{2m-2}^\mpi \hp{2m}.
\]
Using \eqref{R1019.2} and \eqref{R1019.1}, one can easily see that
\beql{L0908.R4Bax}\begin{split}
R_{m+1}
&=-\hp{2m}\rk{\varepsilon \Iq -\hp{2m}^\mpi \hp{2m+1}}+\rk{\varepsilon \Iq -\hp{2m+1} \hp{2m}^\mpi}\hp{2m}\\
&=-\varepsilon\hp{2m}+\hp{2m+1}+\varepsilon\hp{2m}-\hp{2m+1}
=\Oqq.
\end{split}\eeq
By virtue of \eqref{L0908.R4Baz}, \eqref{L0908.R4Bax}, and \ref{L0908.I}, we get that \eqref{L0908.R4c} is fulfilled for all \(n\in\mn{0}{m}\).
Therefore, \eqref{L0908.R4c} is proved by mathematical induction.
\eproof
\bleml{P8.14}
Let \(\kappa\in\NOinf\) and let \(\seqska\in\Hggeq{\kappa}\) with \tsHp{} \(\sHp{\kappa}\) and \sabcd{} \(\abcd{\ev{\kappa}}\).
Then \(\pda{n}{z}\paa{n}{z}=\pca{n}{z}\pba{n}{z}\) is valid for each \(n\in\NO\) fulfilling \(2n-1\leq\kappa\) and all \(z\in\C\).
Furthermore, if \(\kappa\geq 1\), then \(\pda{n-1}{z}\paa{n}{z}-\pc{n-1}\rk{z}\pba{n}{z}=\hp{2n-2}\) and \(\pda{n}{z}\pa{n-1}\rk{z}-\pca{n}{z}\pba{n-1}{z}=-\hp{2n-2}\)
hold true for all \(n\in\N\) fulfilling \(2n-1\leq\kappa\) and all \(z\in\C\).
\elem
\bproof
This is a consequence of \eqref{K19.0} and \rlem{L0908}.
\eproof
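To illustrate \rlem{P8.14} in the first non-trivial case, let \(\kappa\geq1\) and \(n=1\). In view of \eqref{K19.0} and \eqref{K19.1} (cf.\ the computation \eqref{L0908.R4BaaD}), we have \(\paa{1}{z}=\hp{0}\), \(\pba{1}{z}=z\Iq-\hp{0}^\mpi\hp{1}\), \(\pca{1}{z}=\hp{0}\), and \(\pda{1}{z}=z\Iq-\hp{1}\hp{0}^\mpi\), so that \eqref{R1019.1} yields
\[
\pda{1}{z}\paa{1}{z}-\pca{1}{z}\pba{1}{z}
=\rk*{z\Iq-\hp{1}\hp{0}^\mpi}\hp{0}-\hp{0}\rk*{z\Iq-\hp{0}^\mpi\hp{1}}
=\hp{0}\hp{0}^\mpi\hp{1}-\hp{1}\hp{0}^\mpi\hp{0}
=\Oqq
\]
for all \(z\in\C\).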
Against the background of the later application in \rprop{CH428} below, we now consider a sequence \(\seqs{2n+1}\in\Hggeq{2n+1}\).
We are going to determine the \tqqa{blocks} of \(\fV^{\rk{\seqs{2n+1}}}\) in terms of the \tqqa{matrix} polynomials \(\pa{n},\pa{n+1},\pb{n},\pb{n+1}\) introduced in \rdefn{K19}.
\bpropl{C2A3}
Let \(n\in\NO\) and let \(\seqs{2n+1}\in\Hggeq{2n+1}\) with \tsHp{} \(\sHp{2n+1}\) and \sabcd{} \(\abcd{n+1}\).
Then the matrix-valued function \(\fV^{\rk{\seqs{2n+1}}}\colon\C\to\Coo{2q}{2q}\) given by \rnota{SN-Tra} admits, for each \(z\in\C\), the representation
\[
\fV^{\rk{\seqs{2n+1}}}\rk{z}
=
\Mat{
-\paa{n}{z}\hp{2n}^\mpi & -\paa{n+1}{z} \\
\pba{n}{z}\hp{2n}^\mpi & \pba{n+1}{z}
}.
\]
\eprop
\bproof
Because of the assumption \(\seqs{2n+1}\in\Hggeq{2n+1}\) and \rthm{HS}, we have \(\hp{2k}=\su{0}^\St{k}\) and \(\hp{2k+1}=\su{1}^\St{k}\) for all \(k\in\mn{0}{n}\).
In view of \eqref{SN-Tra.1}, \rnota{BezWV}, \eqref{K19.0}, and \eqref{K19.1}, then
\[
\fV^{(\seqs{1})}\rk{z}
=V_{\su{0}^\St{0},\su{1}^\St{0}}\rk{z}
=V_{\hp{0},\hp{1}}\rk{z}
=
\begin{pmat}[{|}]
\Oqq & -\hp{0} \cr\-
\hp{0}^\mpi & z\Iq-\hp{0}^\mpi \hp{1}\cr
\end{pmat}
=
\Mat{
-\pa{0}\rk{z}\hp{0}^\mpi & -\pa{1}\rk{z} \\
\pb{0}\rk{z}\hp{0}^\mpi & \pb{1}\rk{z}
}
\]
follows for all \(z\in\C\).
In the case \(n=0\), the proof is finished.
Now suppose \(n\geq1\).
Proceeding inductively, we assume that there is an \(m\in\mn{1}{n}\) such that
\[
\fV^{(\seqs{2k-1})}\rk{z}
=
\Mat{
-\pa{k-1}\rk{z}\hp{2(k-1)}^\mpi & -\paa{k}{z} \\
\pb{k-1}\rk{z}\hp{2(k-1)}^\mpi & \pba{k}{z}
}
\]
is valid for every choice of \(k\in\mn{1}{m}\) and \(z\in\C\).
Consequently, from \eqref{SN-Tra.1}, \rnota{BezWV}, \eqref{K19.pa}, and \eqref{K19.pb} we get then
\[\begin{split}
\fV^{(\seqs{2m+1})}\rk{z}
&=\fV^{(\seqs{2m-1})}\rk{z}V_{\su{0}^\St{m},\su{1}^\St{m}}\rk{z}
=\fV^{(\seqs{2m-1})}\rk{z}V_{\hp{2m},\hp{2m+1}}\rk{z}\\
&=
\Mat{
-\pa{m-1}\rk{z}\hp{2(m-1)}^\mpi & -\pa{m}\rk{z}\\
\pb{m-1}\rk{z}\hp{2(m-1)}^\mpi & \pb{m}\rk{z}
}
\begin{pmat}[{|}]
\Oqq & -\hp{2m} \cr\-
\hp{2m}^\mpi & z\Iq-\hp{2m}^\mpi \hp{2m+1}\cr
\end{pmat}\\
&=
\begin{pmat}[{|}]
-\pa{m}\rk{z}\hp{2m}^\mpi & \pa{m-1}\rk{z}\hp{2(m-1)}^\mpi \hp{2m}-\pa{m}\rk{z}\rk{z\Iq-\hp{2m}^\mpi \hp{2m+1}}\cr\-
\pb{m}\rk{z}\hp{2m}^\mpi &-\pb{m-1}\rk{z}\hp{2(m-1)}^\mpi \hp{2m}+\pb{m}\rk{z}\rk{z\Iq-\hp{2m}^\mpi \hp{2m+1}} \cr
\end{pmat}\\
&=
\Mat{
-\pa{m}\rk{z}\hp{2m}^\mpi & -\pa{m+1}\rk{z} \\
\pb{m}\rk{z}\hp{2m}^\mpi & \pb{m+1}\rk{z}
}
\end{split}\]
for all \(z\in\C\).
The assertion is proved inductively.
\eproof
\bpropl{CH428}
Let \(n\in\NO\) and let \(\seqs{2n}\in\Hggeq{2n}\) with \tsHp{} \(\sHp{2n}\) and \sabcd{} \(\abcd{n}\).
Then the matrix-valued function \(\fV^{\rk{\seqs{2n}}}\colon\C\to\Coo{2q}{2q}\) given by \rnota{SN-Tra} admits the representation
\beql{N101N}
\fV^{\rk{\seqs{2n}}}\rk{z}
=
\Mat{
-\paa{n}{z}\hp{2n}^\mpi & -\paoa{n+1}{z} \\
\pba{n}{z}\hp{2n}^\mpi & \pboa{n+1}{z}
}
\eeq
for each \(z\in\C\),
where \(\pao{n+1}\) and \(\pbo{n+1}\) are defined in \rnota{N-abcdO}.
\eprop
\bproof
\rprop{L1237-A} shows that the \tnatext{} \(\seqsh{2n+1}\) of \(\seqs{2n}\) belongs to \(\Hggeq{2n+1}\).
Let \(\habcd{n+1}\) be the \sabcdo{\(\seqsh{2n+1}\)}.
\rlem{L1237-B} yields then \(\pah{n}=\pa{n}\) and \(\pbh{n}=\pb{n}\) as well as \(\pah{n+1}=\pao{n+1}\) and \(\pbh{n+1}=\pbo{n+1}\).
Let \(\shHp{2n+1}\) be the \tsHpo{\(\seqsh{2n+1}\)}.
From \rrem{R26-SK1} then we recognize \(\hhp{2n}=\hp{2n}\) and \(\hhp{2n+1}=\Oqq\).
Because of \rthm{HS}, thus \(\hat{s}_1^\St{n}=\Oqq\).
According to \rnota{BezWV}, hence \(V_{\hat{s}_{0}^\St{n},\hat{s}_{1}^\St{n}}=V_{\hat{s}_{0}^\St{n}}\).
Taking into account \eqref{SN-Tra.1} and \eqref{SN-Tra.0}, then we conclude \(\fV^{\rk{\seqsh{2n+1}}}=\fV^{\rk{\seqsh{2n}}}\).
In view of \rdefn{D.nat-ext}, thus \(\fV^{\rk{\seqsh{2n+1}}}=\fV^{\rk{\seqs{2n}}}\) follows.
Consequently, the application of \rprop{C2A3} to \(\seqsh{2n+1}\) completes the proof.
\eproof
Now we are able to rewrite \zitaa{FKKM20}{\cthm{8.11}{43}} in terms of the \tabcd{}.
\bthml{CM123}
Let \(n\in\NO\) and let \(\seqs{2n}\in\Hggeq{2n}\) with \tsHp{} \(\sHp{2n}\) and \sabcd{} \(\abcd{n}\).
Further, let \(\pars{n}\), \(\pbrs{n}\), \(\paors{n+1}\), and \(\pbors{n+1}\) be the restrictions of \(\pa{n}\), \(\pb{n}\), \(\pao{n+1}\), and \(\pbo{n+1}\) onto \(\pip\), respectively, where \(\pao{n+1}\) and \(\pbo{n+1}\) are defined in \rnota{N-abcdO}.
Then:
\benui
\il{CM123.a} For each pair \(\copa{\phi}{\psi}\in\PRFqa{\hp{2n}}\), the function \(\det\rk{\pbrs{n}\hp{2n}^\mpi\phi+\pbors{n+1}\psi}\) does not vanish identically and the matrix-valued function \(F\defeq-\rk{\pars{n}\hp{2n}^\mpi\phi+\paors{n+1}\psi}\rk{\pbrs{n}\hp{2n}^\mpi\phi+\pbors{n+1}\psi}^\inv\) belongs to \(\RFOqskg{2n}\).
\il{CM123.b} For each \(F\in\RFOqskg{2n}\), there exists a pair \(\copa{\phi}{\psi}\in\PRFqa{\hp{2n}}\) such that both \(\phi\) and \(\psi\) are holomorphic in \(\pip\), that
\beql{L1204.1}
\det\rk*{\pba{n}{z}\hp{2n}^\mpi\phi\rk{z}+\pboa{n+1}{z}\psi\rk{z}}
\neq0
\eeq
holds true for all \(z\in\pip\), and that \(F\) admits the representation
\beql{L1204.2}
F\rk{z}
=-\ek*{\paa{n}{z}\hp{2n}^\mpi\phi\rk{z}+\paoa{n+1}{z}\psi\rk{z}}\ek*{\pba{n}{z}\hp{2n}^\mpi\phi\rk{z}+\pboa{n+1}{z}\psi\rk{z}}^\inv
\eeq
for all \(z\in\pip\).
\il{CM123.c} For each \(\ell\in\set{1,2}\) let \(\copa{\phi_\ell}{\psi_\ell}\in\PRFqa{\hp{2n}}\) and let \(F_\ell\defeq-\rk{\pars{n}\hp{2n}^\mpi\phi_\ell+\paors{n+1}\psi_\ell}\rk{\pbrs{n}\hp{2n}^\mpi\phi_\ell+\pbors{n+1}\psi_\ell}^\inv\).
Then \(F_1=F_2\) if and only if \(\cpcl{\phi_1}{\psi_1}=\cpcl{\phi_2}{\psi_2}\).
\eenui
\ethm
\bproof
Using \rthm{HS} and the notation therein, we receive \(\hp{2n}=\su{0}^\St{n}\).
From \rprop{CH428} we obtain the \tqqa{\tbr{}} \eqref{N101N} for all \(z\in\C\).
Applying \zitaa{FKKM20}{\cthm{8.11}{43}} and comparing the \tqqa{\tbr{}} therein with \eqref{N101N} completes the proof.
\eproof
As already observed above, parametrizations of the set \(\RFOqskg{2n}\) with independent parameters are given in \cite{MR1624548,MR1395706,Thi06,FKKM20}.
Now we get a refinement of a result (see \zitaa{MR3380267}{\cthm{12.1}{272}}), which concerns the moment problem~\mprobR{2n}{=}.
\bthml{T1153}
Let \(n\in\NO\) and let \(\seqs{2n}\in\Hggeq{2n}\) with \tsHp{} \(\sHp{2n}\) and \sabcd{} \(\abcd{n}\).
Further, let \(\pars{n}\), \(\pbrs{n}\), \(\paors{n+1}\), and \(\pbors{n+1}\) be the restrictions of \(\pa{n}\), \(\pb{n}\), \(\pao{n+1}\), and \(\pbo{n+1}\) onto \(\pip\), respectively, where \(\pao{n+1}\) and \(\pbo{n+1}\) are defined in \rnota{N-abcdO}.
Then:
\benui
\il{T1153.a} For each \(G\in\Pqevena{\hp{2n}}\), the function \(\det\rk{\pbrs{n}\hp{2n}^\mpi G+\pbors{n+1}}\) does not vanish identically and the matrix-valued function \(F\defeq-\rk{\pars{n}\hp{2n}^\mpi G+\paors{n+1}}\rk{\pbrs{n}\hp{2n}^\mpi G+\pbors{n+1}}^\inv\) belongs to \(\RFOqsg{2n}\).
\il{T1153.b} For each \(F\in\RFOqsg{2n}\), there exists a unique \(G\in\Pqevena{\hp{2n}}\) such that \(\det\rk{\pba{n}{z}\hp{2n}^\mpi G\rk{z}+\pboa{n+1}{z}}
\neq 0\) and \(F\rk{z}=-\ek{\paa{n}{z}\hp{2n}^\mpi G\rk{z}+\paoa{n+1}{z}}\ek{\pba{n}{z}\hp{2n}^\mpi G\rk{z}+\pboa{n+1}{z}}^\inv\) for all \(z\in\pip\).
\eenui
\ethm
\bproof
Using \rthm{HS} and the notation therein, we receive \(\hp{2n}=\su{0}^\St{n}\).
From \rprop{CH428} we obtain the \tqqa{\tbr{}} \eqref{N101N} for all \(z\in\C\).
Applying \zitaa{MR3380267}{\cprop{11.22(a)}{270} and \cthm{12.1}{272}} completes the proof.
\eproof
\section{Particular rational matrix-valued functions}\label{S1129}
In \rsec{Cha13}, we will focus on the values of the respective functions belonging to the solution set \(\RFOqskg{2n}\) of Problem~\rprobR{2n}{\lleq}, \tie{}, we want to describe the set
\begin{align}\label{N128N}
\setaca*{F\rk{w}}{F\in\RFOqskg{2n}}
\end{align}
for each \(\seqs{2n}\in\Hggeq{2n}\) and each \(w\in\pip\).
In order to realize this aim, we are essentially guided by \rprop{CH428} and \rthm{CM123}.
These statements contain essential information about the description of the set \(\RFOqskg{2n}\).
The concrete shape of the \tqqa{blocks} of the \taaa{2q}{2q}{matrix} polynomial \(\fV^{\rk{\seqs{2n}}}\) given by \eqref{N101N} suggests considering a particular sequence of rational \tqqa{matrix}-valued functions.
It will turn out that these functions play a key role in our further considerations.
We start with a little preparation.
\breml{TC}
Let \(\kappa\in\NOinf\), let \(\seqska\in\Hggeqka\) with \sabcd{} \(\abcd{\ev{\kappa}}\), and let \(n\in\NO\) be such that \(2n-1\leq\kappa\).
According to \rrem{SK32R}, the functions \(\det\pb{n}\) and \(\det\pd{n}\) are non-trivial polynomials for which, in view of \eqref{NM} and \rlem{BK8.6}, the respective sets of zeros \(\nst{\det\pb{n}}\) and \(\nst{\det\pd{n}}\) are finite and, in particular, discrete subsets of \(\C\) such that \(\nst{\det\pb{n}}\cup\nst{\det\pd{n}}\subseteq\R\).
Consequently, \(\pb{n}^\inv\) and \(\pd{n}^\inv\) are matrix-valued functions meromorphic in \(\C\), which fulfill \(\C\setminus\nst{\det\pb{n}}\subseteq\holpt{\pb{n}^\inv }\), \(\C\setminus\nst{\det\pd{n}}\subseteq\holpt{\pd{n}^\inv }\), and, in particular, \(\CR\subseteq\holpt{\pb{n}^\inv }\cap\holpt{\pd{n}^\inv }\).
\erem
The following sequence of rational matrix-valued functions will play a key role in our following considerations.
We note that a procedure is used here which is applied in a similar way in \cite{MR2656833} for the matricial Carath\'eodory problem.
\bdefnl{K17.1}
Let \(\kappa\in\NOinf\) and let \(\seqska\in\Hggeqka\) with \tsHp{} \(\sHp{\kappa}\) and \sabcd{} \(\abcd{\ev{\kappa}}\).
Let \(\XF{-1}\colon\C\to\Cqq\) be defined by \(\XF{-1}\rk{z}\defeq\Oqq\).
For all \(n\in\NO\) such that \(2n\leq\kappa\), let \(\XF{2n}\defeq\hp{2n}\pb{n}^\inv\pbo{n+1}\).
For all \(n\in\NO\) fulfilling \(2n+1\leq\kappa\), let \(\XF{2n+1}\defeq\hp{2n}\pb{n}^\inv\pb{n+1}\).
Then \(\seqX{\kappa}\) is called the \emph{\tsXFo{\(\seqska\)}}.
\edefn
In view of \eqref{K19.0}, \eqref{abcdO-1}, \eqref{Hp.01}, \eqref{K19.1}, and \eqref{R1019.1}, we have
\begin{align}\label{XF.01}
\XFa{-1}{z}&=\Oqq,&
\XFa{0}{z}&=z\hp{0}=z\su{0},&
&\text{and}&
\XFa{1}{z}&=z\hp{0}-\hp{1}=z\su{0}-\su{1}
\end{align}
for all \(z\in\C\).
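To give one more example beyond \eqref{XF.01}, consider \(n=1\) with \(2\leq\kappa\). Combining \rdefn{K17.1} with \eqref{K19.1} and the formula for \(\pbo{2}\) stated after \rnota{N-abcdO}, we obtain
\[
\XFa{2}{z}
=\hp{2}\ek*{z\Iq-\hp{0}^\mpi\hp{1}}^\inv\rk*{z^2\Iq-z\hp{0}^\mpi\hp{1}-\hp{0}^\mpi\hp{2}}
\]
for all \(z\in\C\setminus\nst{\det\pb{1}}\).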
The \tXF{s} can be considered in a wider sense as an analogue of the rational \tqqa{matrix}-valued function \(\Theta_n\) defined in \zitaa{MR2222523}{\cpage{294}}.
\breml{XF-tru}
Let \(\kappa\in\NOinf\), let \(\seqska\in\Hggeqka\) with \tsXF{} \(\seqX{\kappa}\), and let \(m\in\mn{0}{\kappa}\).
Regarding \rdefn{K17.1}, \rnota{N-abcdO}, and \eqref{K5.11}, in view of \rremsss{Schr2.5}{CM2.1.65}{BK8.1} then \(\seqs{m}\) belongs to \(\Hggeq{m}\) and \(\seqX{m}\) equals the \tsXFo{\(\seqs{m}\)}.
\erem
We introduce a notation which is used throughout the subsequent considerations.
For each \(m\in\NO\), let
\begin{align}\label{SK5-11}
\ef{m}
&\defeq\max\setaca{k\in\NO}{2k\leq m}&
&\text{and}&
\eff{m}
&\defeq2\ef{m},
\end{align}
\tie{}, if \(m=2n\) or \(m=2n+1\) for some \(n\in\NO\), then \(\ef{m}=n\) and \(\eff{m}=2n\).
\breml{R0735}
Let \(\kappa\in\NOinf\) and let \(\seqska\in\Hggeqka\) with \tsXF{} \(\seqX{\kappa}\).
In view of \rdefn{K17.1} and \rrem{TC}, then \(\CR\subseteq\C\setminus\nst{\det\pb{\ef{m}}}\subseteq\holpt{\XF{m}}\) for all \(m\in\mn{0}{\kappa}\).
\erem
\breml{R52SK}
Let the assumptions of \rdefn{K17.1} be fulfilled.
In view of \rrem{TC}, then:
\benui
\il{R52SK.b} Let \(n\in\NO\) be such that \(2n\leq\kappa\).
For all \(z\in\CR\), then \(\det\pba{n}{z}\neq0\) and \(\XFa{2n}{z}=\hp{2n}\ek{\pba{n}{z}}^\inv\pboa{n+1}{z}\) hold true.
\il{R52SK.a} Let \(n\in\NO\) be such that \(2n+1\leq\kappa\).
For all \(z\in\CR\), then \(\det\pba{n}{z}\neq0\) and
\beql{Z18}
\XFa{2n+1}{z}
=\hp{2n}\ek*{\pba{n}{z}}^\inv\pba{n+1}{z}.
\eeq
\eenui
\erem
Now we are going to prove that the \tsXF{}, which is introduced in \rdefn{K17.1}, can also be expressed in terms of the sequence \(\seq{\pd{k}}{k}{0}{\ev{\kappa}}\) instead of the sequence \(\seq{\pb{k}}{k}{0}{\ev{\kappa}}\).
This is a consequence of the following result.
We use the notation given by \eqref{NM}.
\bpropl{KN}
Let \(\kappa\in\NOinf\) and let \(\seqska\in\Hggeq{\kappa}\) with \tsHp{} \(\sHp{\kappa}\) and \sabcd{} \(\abcd{\ev{\kappa}}\).
Then:
\benui
\il{KN.a} If \(k\in\NO\) is such that \(2k-1\leq\kappa\), then \(\nst{\det\pb{k}}\cup\nst{\det\pd{k}}\subseteq\R\).
\il{KN.b} If \(\kappa\geq 1\), for all \(n\in\NO\) fulfilling \(2n+1\leq\kappa\), then
\begin{multline}\label{KN.1}
\hp{2n}\ek*{\pba{n}{z}}^\inv \pba{n+1}{z}
=\pda{n+1}{z}\ek*{\pda{n}{z}}^\inv \hp{2n}\\
\text{for all }z\in\C\setminus\rk*{\nst{\det\pb{n}}\cup\nst{\det\pd{n}}}.
\end{multline}
\il{KN.c} If \(\kappa\geq 1\), for all \(n\in\NO\) fulfilling \(2n+1\leq\kappa\), then
\begin{multline}\label{KN.2}
\hp{2n}^\mpi \hp{2n}\ek*{\pba{n+1}{z}}^\inv \pba{n}{z}\hp{2n}^\mpi
=\hp{2n}^\mpi \pda{n}{z}\ek*{\pda{n+1}{z}}^\inv \hp{2n}\hp{2n}^\mpi\\
\text{for all }z\in\C\setminus\rk*{\nst{\det\pb{n+1}}\cup\nst{\det\pd{n+1}}}.
\end{multline}
\eenui
\eprop
\bproof
\eqref{KN.a} Use \rlem{BK8.6}.
\eqref{KN.b}, \eqref{KN.c} Suppose \(\kappa\geq 1\).
According to \rrem{R1019}, we have \eqref{R1019.1} and \eqref{R1019.2}.
The subsequent part of our proof proceeds inductively.
From \eqref{K19.0}, \eqref{K19.1}, and \eqref{R1019.1} we discern that \(\nst{\det\pb{0}}=\emptyset\), that \(\nst{\det\pd{0}}=\emptyset\), and that
\beql{CN3}\begin{split}
\hp{0}\ek*{\pb{0}\rk{z}}^\inv \pb{1}\rk{z}
&=\hp{0}\Iq^\inv \rk{z\Iq-\hp{0}^\mpi \hp{1}}
=z\hp{0}-\hp{0}\hp{0}^\mpi \hp{1}
=z\hp{0}-\hp{1}\\
&=z\hp{0}-\hp{1}\hp{0}^\mpi \hp{0}
= \rk{z\Iq-\hp{1}\hp{0}^\mpi}\Iq^\inv \hp{0}
=\pd{1}\rk{z}\ek*{\pd{0}\rk{z}}^\inv \hp{0}
\end{split}\eeq
holds true for all \(z\in\C\).
Consequently, \eqref{KN.1} is proved for \(n=0\).
For each \(z\in\C\setminus\rk{\nst{\det\pb{1}}\cup\nst{\det\pd{1}}}\), from \eqref{CN3} we conclude
\[
\hp{0}\rk*{\ek*{\pb{1}\rk{z}}^\inv \pb{0}\rk{z}}^\inv
=\hp{0}\ek*{\pb{0}\rk{z}}^\inv \pb{1}\rk{z}
=\pd{1}\rk{z}\ek*{\pd{0}\rk{z}}^\inv \hp{0}
=\rk*{\pd{0}\rk{z}\ek*{\pd{1}\rk{z}}^\inv}^\inv \hp{0}
\]
and, hence,
\[
\pd{0}\rk{z}\ek*{\pd{1}\rk{z}}^\inv \hp{0}
=\hp{0}\ek*{\pb{1}\rk{z}}^\inv \pb{0}\rk{z},
\]
which implies
\[
\hp{0}^\mpi \hp{0}\ek*{\pb{1}\rk{z}}^\inv \pb{0}\rk{z}\hp{0}^\mpi
=\hp{0}^\mpi \pd{0}\rk{z}\ek*{\pd{1}\rk{z}}^\inv \hp{0}\hp{0}^\mpi.
\]
Thus, \eqref{KN.2} is checked for \(n=0\).
Consequently, there is an \(m\in\NO\) fulfilling \(2m+1\leq\kappa\) such that \eqref{KN.1} and \eqref{KN.2} are valid for all \(n\in\mn{0}{m}\).
If \(2m+1\in\set{\kappa, \kappa-1}\), then \eqref{KN.b} and \eqref{KN.c} are proved.
Assume that \(2m+1\leq\kappa -2\).
We consider an arbitrary \(z\in\C\setminus\rk{\nst{\det\pb{m+1}}\cup\nst{\det\pd{m+1}}}\).
Using \eqref{K19.pb}, \eqref{R1019.1}, and \eqref{R1019.2}, we obtain
\beql{CN3-1}\begin{split}
&\hp{2m+2}\ek*{\pb{m+1}\rk{z}}^\inv \pb{m+2}\rk{z}\\
&= \hp{2m+2}\ek*{\pb{m+1}\rk{z}}^\inv \ek*{\pb{m+1}\rk{z}\rk{z\Iq-\hp{2m+2}^\mpi \hp{2m+3}}-\pb{m}\rk{z}\hp{2m}^\mpi \hp{2m+2}}\\
&= z\hp{2m+2}-\hp{2m+2}\hp{2m+2}^\mpi \hp{2m+3}-\hp{2m+2}\ek*{\pb{m+1}\rk{z}}^\inv \pb{m}\rk{z}\hp{2m}^\mpi \hp{2m+2}\\
&= z\hp{2m+2}-\hp{2m+3}-\hp{2m+2}\hp{2m}^\mpi \hp{2m}\ek*{\pb{m+1}\rk{z}}^\inv \pb{m}\rk{z}\hp{2m}^\mpi \hp{2m+2}.
\end{split}\eeq
Similarly, \eqref{K19.pd}, \eqref{R1019.1}, and \eqref{R1019.2} yield
\beql{CN3-2}
\pd{m+2}\rk{z}\ek*{\pd{m+1}\rk{z}}^\inv \hp{2m+2}
= z\hp{2m+2}-\hp{2m+3}-\hp{2m+2}\hp{2m}^\mpi \pd{m}\rk{z}\ek*{\pd{m+1}\rk{z}}^\inv \hp{2m}\hp{2m}^\mpi \hp{2m+2}.
\eeq
Since \eqref{KN.2} is assumed to be valid for \(n=m\), the comparison of \eqref{CN3-1} and \eqref{CN3-2} provides that \eqref{KN.1} holds true for \(n=m+1\).
If \(z\) belongs to \(\C\setminus(\nst{\det\pb{m+1}}\cup\nst{\det\pd{m+1}}\cup\nst{\det\pb{m+2}}\cup\nst{\det\pd{m+2}})\), then \eqref{KN.1}, used with \(n=m+1\), implies
\[
\hp{2m+2}\rk*{\ek*{\pb{m+2}\rk{z}}^\inv \pb{m+1}\rk{z}}^\inv
=\rk*{\pd{m+1}\rk{z}\ek*{\pd{m+2}\rk{z}}^\inv}^\inv \hp{2m+2}
\]
and, consequently,
\[
\hp{2m+2}\ek*{\pb{m+2}\rk{z}}^\inv \pb{m+1}\rk{z}
= \pd{m+1}\rk{z}\ek*{\pd{m+2}\rk{z}}^\inv \hp{2m+2}.
\]
Continuity arguments yield that this identity is valid for all \(z\in\C\setminus(\nst{\det\pb{m+2}}\cup\nst{\det\pd{m+2}})\).
Thus, multiplication from the left and from the right by \(\hp{2m+2}^\mpi\) shows that \eqref{KN.2} holds true for \(n=m+1\).
Consequently, \rpartss{KN.b}{KN.c} are proved by mathematical induction.
\eproof
Now we state the announced alternative description of the \tsXF{}.
\bpropl{K17-5}
Let \(\kappa\in\NOinf\) and let \(\seqska\in\Hggeqka\) with \tsHp{} \(\sHp{\kappa}\) and \sabcd{} \(\abcd{\ev{\kappa}}\).
Further, let \(\seqX{\kappa}\) be the \tsXFo{\(\seqska\)}.
Then:
\benui
\il{K17-5.b} Let \(n\in\NO\) be such that \(2n\leq\kappa\).
For all \(z\in\CR\), then \(\det\pda{n}{z}\neq0\) and
\beql{K17-5.A}
\XFa{2n}{z}
=\pdoa{n+1}{z}\ek*{\pda{n}{z}}^\inv\hp{2n}.
\eeq
In particular, \(\XF{2n}=\pdo{n+1}\pd{n}^\inv\hp{2n}\).
\il{K17-5.a} Let \(n\in\NO\) be such that \(2n+1\leq\kappa\).
For all \(z\in\CR\), then \(\det\pda{n}{z}\neq0\) and
\beql{GR2}
\XFa{2n+1}{z}
=\pda{n+1}{z}\ek*{\pda{n}{z}}^\inv\hp{2n}.
\eeq
In particular, \(\XF{2n+1}=\pd{n+1}\pd{n}^\inv\hp{2n}\).
\eenui
\eprop
\bproof
\eqref{K17-5.b} Let \(z\in\CR\).
In the case \(n=0\), equation \eqref{K17-5.A} follows immediately from \eqref{XF.01}, \eqref{K19.0}, and \eqref{abcdO-1}.
Now suppose \(n\geq1\).
According to \rpartss{KN.a}{KN.c} of \rprop{KN}, we realize that \(\det\pba{n}{z}\neq0\) and \(\det\pda{n}{z}\neq0\) hold true, and that \(\hp{2n-2}^\mpi \hp{2n-2}\ek{\pba{n}{z}}^\inv\pba{n-1}{z}\hp{2n-2}^\mpi=\hp{2n-2}^\mpi\pda{n-1}{z}\ek{\pda{n}{z}}^\inv \hp{2n-2}\hp{2n-2}^\mpi\) is valid.
Since \rrem{R1019} shows \(\hp{2n}\hp{2n-2}^\mpi \hp{2n-2}=\hp{2n}\) and \(\hp{2n-2}\hp{2n-2}^\mpi \hp{2n}=\hp{2n}\), multiplication from the left and from the right by \(\hp{2n}\) yields then \(\hp{2n}\ek{\pba{n}{z}}^\inv\pba{n-1}{z}\hp{2n-2}^\mpi\hp{2n}=\hp{2n}\hp{2n-2}^\mpi\pda{n-1}{z}\ek{\pda{n}{z}}^\inv\hp{2n}\).
Taking additionally into account \rremp{R52SK}{R52SK.b} and \rnota{N-abcdO}, we thus conclude
\[\begin{split}
\XFa{2n}{z}
&=\hp{2n}\ek*{\pba{n}{z}}^\inv\pboa{n+1}{z}
=\hp{2n}\ek{\pba{n}{z}}^\inv\ek*{z\pba{n}{z}-\pba{n-1}{z}\hp{2n-2}^\mpi\hp{2n}}\\
&=z\hp{2n}-\hp{2n}\ek*{\pba{n}{z}}^\inv\pba{n-1}{z}\hp{2n-2}^\mpi\hp{2n}
=z\hp{2n}-\hp{2n}\hp{2n-2}^\mpi\pda{n-1}{z}\ek*{\pda{n}{z}}^\inv\hp{2n}\\
&=\ek*{z\pda{n}{z}-\hp{2n}\hp{2n-2}^\mpi\pda{n-1}{z}}\ek{\pda{n}{z}}^\inv\hp{2n}
=\pdoa{n+1}{z}\ek*{\pda{n}{z}}^\inv\hp{2n}.
\end{split}\]
In view of \rrem{TC} and continuity arguments, we perceive \(\XF{2n}=\pdo{n+1}\pd{n}^\inv\hp{2n}\).
\eqref{K17-5.a} Let \(z\in\CR\).
According to \rpartss{KN.a}{KN.b} of \rprop{KN}, we realize that \(\det\pba{n}{z}\neq0\) and \(\det\pda{n}{z}\neq0\) hold true, and that \(\hp{2n}\ek{\pba{n}{z}}^\inv\pba{n+1}{z}=\pda{n+1}{z}\ek{\pda{n}{z}}^\inv \hp{2n}\) is valid.
In view of \rremp{R52SK}{R52SK.a}, then \eqref{GR2} follows.
\rrem{TC} and continuity arguments yield \(\XF{2n+1}=\pd{n+1}\pd{n}^\inv\hp{2n}\).
\eproof
We proceed by providing essential as well as technical characteristics of the \tsXFo{given} data \(\seqska\), which prove beneficial for the general treatment of the description of the set given in \eqref{N128N}.
\bleml{K17-2}
Let \(\kappa\in\Ninf\) and let \(\seqska\in\Hggeqka\) with \tsHp{} \(\sHp{\kappa}\) and \tsXF{} \(\seqX{\kappa}\).
For every choice of \(n\in\NO\) fulfilling \(2n+1\leq\kappa\), then \(\XF{2n}-\XF{2n+1}=\hp{2n+1}\) and, for all \(z\in\CR\), in particular,
\beql{N47SK}
\XFa{2n}{z}-\XFa{2n+1}{z}
=\hp{2n+1}.
\eeq
\elem
\bproof
Let \(n\in\NO\) be such that \(2n+1\leq\kappa\).
\rrem{R1019} yields \(\hp{2n}\hp{2n}^\mpi\hp{2n+1}=\hp{2n+1}\).
In view of \rremss{R52SK}{R1358}, for all \(z\in\CR\), we then acquire
\[\begin{split}
\XFa{2n}{z}
&=\hp{2n}\ek{\pba{n}{z}}^\inv\pboa{n+1}{z}
=\hp{2n}\ek*{\pba{n}{z}}^\inv\ek*{\pba{n+1}{z}+\pba{n}{z}\hp{2n}^\mpi\hp{2n+1}}\\
&=\hp{2n}\ek{\pba{n}{z}}^\inv\pba{n+1}{z}+\hp{2n}\hp{2n}^\mpi\hp{2n+1}
=\XFa{2n+1}{z}+\hp{2n+1}.
\end{split}\]
Regarding \rdefn{K17.1} and \rrem{TC}, continuity arguments complete the proof.
\eproof
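In the special case \(n=0\), the identity \eqref{N47SK} can alternatively be read off directly from \eqref{XF.01}:
\[
\XFa{0}{z}-\XFa{1}{z}
=z\hp{0}-\rk*{z\hp{0}-\hp{1}}
=\hp{1}
\qquad\text{for all }z\in\C.
\]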
The following result contains an interesting connection between a sequence \(\seqs{2n}\in\Hggeq{2n}\) and its \tnatext{} \(\seqsh{2n+1}\), which describes the interplay between sequences of \tXF{s}.
\bleml{L1520}
Let \(\kappa\in\NOinf\), let \(\seqska\in\Hggeqka\) with \tsXF{} \(\seqX{\kappa}\), and let \(n\in\NO\) with \(2n\leq\kappa\).
Then the \tnatext{} \(\seqsh{2n+1}\) of \(\seqs{2n}\) belongs to \(\Hggeq{2n+1}\) and the \tsXF{} \(\seqXh{2n+1}\) associated with \(\seqsh{2n+1}\) fulfills \(\XFh{2n+1}=\XF{2n}\) and, for all \(z\in\CR\), in particular, \(\XFh{2n+1}\rk{z}=\XFa{2n}{z}\).
\elem
\bproof
\rprop{L1237-A} yields \(\seqsh{2n+1}\in\Hggeq{2n+1}\).
Let \(\shHp{2n+1}\) be the \tsHpo{\(\seqsh{2n+1}\)}.
Then, the application of \rlem{K17-2} to the sequence \(\seqsh{2n+1}\) provides \(\XFh{2n}-\XFh{2n+1}=\hhp{2n+1}\).
According to \rrem{R26-SK1}, we have \(\hhp{2n+1}=\Oqq\).
From \rrem{XF-tru} we can conclude \(\XFh{2n}=\XF{2n}\).
Consequently, we obtain \(\XFh{2n+1}=\XFh{2n}-\hhp{2n+1}=\XFh{2n}=\XF{2n}\).
Regarding \rremss{TC}{R0735}, the proof is complete.
\eproof
\bleml{L0625}
Let \(\kappa\in\NOinf\), let \(\seqska\in\Hggeqka\) with \tsXF{} \(\seqX{\kappa}\), and let \(m\in\mn{-1}{\kappa}\).
Then \(\ek{\XFa{m}{z}}^\ad=\XFa{m}{\ko{z}}\) for all \(z\in\holpt{\XF{m}}\).
In particular, \(\ek{\XFa{m}{x}}^\ad=\XFa{m}{x}\) for all \(x\in\R\cap\holpt{\XF{m}}\).
\elem
\bproof
\rrem{R1019} yields \eqref{R1019.0}.
By virtue of \eqref{XF.01}, we see then \(\XF{-1}\rk{\ko{z}}=\Oqq=\ek{\XF{-1}\rk{z}}^\ad\) and \(\XF{0}\rk{\ko{z}}
=\ko{z}\hp{0}
=\ko{z}\hp{0}^\ad
=\ek{\XF{0}\rk{z}}^\ad\) for all \(z\in\C\).
Thus, in the case \(m\leq0\), the proof is complete.
Now assume \(m\geq1\).
First we consider the case that \(m=2n\) is fulfilled with some \(n\in\N\).
According to \eqref{R1019.0}, we have \(\hp{2n}^\ad=\hp{2n}\) and \(\hp{2n-2}^\ad=\hp{2n-2}\).
Consequently, \rrem{A.R.A++*} shows that \(\rk{\hp{2n-2}^\mpi}^\ad=\hp{2n-2}^\mpi\) is valid.
Because of \rrem{R1624}, we can infer from \rrem{BK8.2} furthermore \(\pda{n}{\ko{z}}=\ek{\pba{n}{z}}^\ad\) and \(\pda{n-1}{\ko{z}}=\ek{\pba{n-1}{z}}^\ad\) for all \(z\in\C\).
For all \(z\in\C\setminus\nst{\det\pb{n}}\), then \(\det\pda{n}{\ko{z}}\neq0\) and, using \rpropp{K17-5}{K17-5.b}, \rnota{N-abcdO}, and \rdefn{K17.1}, we receive moreover
\begin{multline*}
\XF{2n}\rk{\ko{z}}
=\pdoa{n+1}{\ko z}\ek*{\pda{n}{\ko z}}^\inv\hp{2n}
=\ek*{\ko{z}\pda{n}{\ko{z}}-\hp{2n}\hp{2n-2}^\mpi\pd{n-1}\rk{\ko{z}}}\ek*{\pda{n}{\ko{z}}}^\inv\hp{2n}\\
=\rk*{\ko{z}\ek*{\pba{n}{z}}^\ad-\hp{2n}^\ad (\hp{2n-2}^\mpi)^\ad\ek*{\pba{n-1}{z}}^\ad}\ek*{\pba{n}{z}}^\invad\hp{2n}^\ad
=\ek*{\pboa{n+1}{z}}^\ad\ek*{\pba{n}{z}}^\invad\hp{2n}^\ad
=\ek*{\XFa{2n}{z}}^\ad.
\end{multline*}
In the case that \(m=2n+1\) is fulfilled with some \(n\in\NO\),
using \rpropp{K17-5}{K17-5.a} instead of \rpropp{K17-5}{K17-5.b}, we obtain similarly
\[
\XF{2n+1}\rk{\ko{z}}
=\pda{n+1}{\ko{z}}\ek*{\pda{n}{\ko{z}}}^\inv\hp{2n}
=\ek*{\pba{n+1}{z}}^\ad\ek*{\pba{n}{z}}^\invad\hp{2n}^\ad
=\ek*{\XFa{2n+1}{z}}^\ad
\]
for all \(z\in\C\setminus\nst{\det\pb{n}}\).
Taking into account \eqref{SK5-11}, we have thus shown \(\XF{m}\rk{\ko{z}}=\ek{\XFa{m}{z}}^\ad\) for all \(z\in\C\setminus\nst{\det\pb{\ef{m}}}\).
By continuity, then \(\ek{\XFa{m}{z}}^\ad=\XFa{m}{\ko{z}}\) follows for all \(z\in\holpt{\XF{m}}\).
\eproof
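The case \(m\leq0\) of the preceding lemma can be seen concretely from \eqref{XF.01}: since \(\hp{0}^\ad=\hp{0}\), one has \(\ek{\XFa{0}{z}}^\ad=\ko{z}\hp{0}=\XFa{0}{\ko{z}}\). A minimal NumPy check of this instance follows; the random Hermitian stand-in for \(\hp{0}\) is an assumption for illustration.

```python
import numpy as np

# Hedged sketch of the m = 0 case of the symmetry [X_m(z)]^* = X_m(conj(z)):
# with X_0(z) = z*h0 and h0 Hermitian, (z*h0)^* = conj(z)*h0 = X_0(conj(z)).
rng = np.random.default_rng(1)
q = 3
M = rng.standard_normal((q, q)) + 1j * rng.standard_normal((q, q))
h0 = M @ M.conj().T            # Hermitian (here even psd) stand-in for h_0

def X0(z):
    return z * h0

for z in (2.0 - 0.5j, -1.0 + 3.0j):
    assert np.allclose(X0(z).conj().T, X0(np.conj(z)))
print("adjoint symmetry for X_0 verified")
```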
\bleml{K17-6}
Let \(\kappa\in\NOinf\), let \(\seqska\in\Hggeqka\), and let \(\sHp{\kappa}\) be the \tsHpo{\(\seqska\)}.
Further, let \(\seqX{\kappa}\) be the \tsXFo{\(\seqska\)}.
For all \(m\in\mn{0}{\kappa}\) and all \(z\in\CR\), then \(\ran{\XFa{m}{z}}=\ran{\hp{\eff{m}}}\) and \(\nul{\XFa{m}{z}}=\nul{\hp{\eff{m}}}\).
\elem
\bproof
Let \(z\in\CR\).
\rlem{BK8.6} yields \(\det\pba{k}{z}\neq0\) and \(\det\pda{k}{z}\neq0\) for all \(k\in\NO\) with \(2k-1\leq\kappa\).
Furthermore, \rlem{K15-1} yields \(\det\pboa{n+1}{z}\neq0\) and \(\det\pdoa{n+1}{z}\neq0\) for all \(n\in\NO\) with \(2n\leq\kappa\).
Taking into account \rrem{R52SK} and \rprop{K17-5}, we can conclude from \rrem{R1241} then \(\ran{\XFa{2n+1}{z}}=\ran{\hp{2n}}\) and \(\nul{\XFa{2n+1}{z}}=\nul{\hp{2n}}\) for all \(n\in\NO\) with \(2n+1\leq\kappa\) as well as \(\ran{\XFa{2n}{z}}=\ran{\hp{2n}}\) and \(\nul{\XFa{2n}{z}}=\nul{\hp{2n}}\) for all \(n\in\NO\) with \(2n\leq\kappa\).
In view of \eqref{SK5-11}, the proof is complete.
\eproof
Now we consider Moore--Penrose inverses of the matrix-valued functions belonging to the \tsXFo{the} given data \(\seqska\).
\bleml{L0946}
Let \(\kappa\in\Ninf\) and let \(\seqska\in\Hggeqka\) with \tsHp{} \(\sHp{\kappa}\), \sabcd{} \(\abcd{\ev{\kappa}}\), and \tsXF{} \(\seqX{\kappa}\).
Then \(\det\pba{n+1}{z}\neq0\) and \(\ek{\XFa{2n+1}{z}}^\mpi=\hp{2n}^\mpi\hp{2n}\ek{\pba{n+1}{z}}^\inv\pba{n}{z}\hp{2n}^\mpi\) as well as \(\det\pda{n+1}{z}\neq0\) and \(\ek{\XFa{2n+1}{z}}^\mpi=\hp{2n}^\mpi\pda{n}{z}\ek{\pda{n+1}{z}}^\inv\hp{2n}\hp{2n}^\mpi\) are valid for all \(n\in\NO\) such that \(2n+1\leq\kappa\) and all \(z\in\CR\).
\elem
\bproof
Let \(n\in\NO\) be such that \(2n+1\leq\kappa\) and let \(z\in\CR\).
In view of \rlem{BK8.6}, all the matrices \(B_{n+1}\defeq\pba{n+1}{z}\), \(B_{n}\defeq\pba{n}{z}\), \(D_{n+1}\defeq\pda{n+1}{z}\), and \(D_{n}\defeq\pda{n}{z}\) are invertible.
We set \(L\defeq D_{n}D_{n+1}^\inv\), \(R\defeq B_{n+1}^\inv B_{n}\), \(M\defeq\hp{2n}\), \(N\defeq LMR^\inv\), and \(X\defeq MR^\inv\).
Then the matrices \(L\) and \(R\) are invertible and \(R^\inv=B_{n}^\inv B_{n+1}\).
Consequently, \(N=D_{n}D_{n+1}^\inv MB_{n}^\inv B_{n+1}\) and \(X=MB_{n}^\inv B_{n+1}\).
\rpropp{KN}{KN.b} provides \(MB_{n}^\inv B_{n+1}=D_{n+1}D_{n}^\inv M\).
Hence,
\[
M
=D_{n}D_{n+1}^\inv D_{n+1}D_{n}^\inv M
=D_{n}D_{n+1}^\inv MB_{n}^\inv B_{n+1}
=N.
\]
Applying \rlem{KB-5}, then we conclude
\begin{align}
X^\mpi
&=N^\mpi NRM^\mpi
=M^\mpi MB_{n+1}^\inv B_{n}M^\mpi,\\
X^\mpi
&=N^\mpi LMM^\mpi
=M^\mpi D_{n}D_{n+1}^\inv MM^\mpi.\label{L0946.2}
\end{align}
On the other hand, from \rrem{R52SK} we recognize that \(\XFa{2n+1}{z}=MB_{n}^\inv B_{n+1}=X\) holds true.
Comparing this to \eqref{L0946.2} completes the proof.
\eproof
\bleml{K17-7}
Let \(\kappa\in\NOinf\) and let \(\seqska\in\Hggeqka\) with \tsHp{} \(\sHp{\kappa}\) and \tsXF{} \(\seqX{\kappa}\).
Then:
\benui
\il{K17-7.a} \(\XFa{2n}{z}=z\hp{2n}-\hp{2n}\ek{\XFa{2n-1}{z}}^\mpi\hp{2n}\) for all \(n\in\NO\) fulfilling \(2n\leq\kappa\) and all \(z\in\CR\).
\il{K17-7.b} If \(\kappa\geq1\), then \(\XFa{2n+1}{z}=z\hp{2n}-\hp{2n}\ek{\XFa{2n-1}{z}}^\mpi\hp{2n}-\hp{2n+1}\) for all \(n\in\NO\) fulfilling \(2n+1\leq\kappa\) and all \(z\in\CR\).
\eenui
\elem
\bproof
\eqref{K17-7.a} Let \(n\in\NO\) be such that \(2n\leq\kappa\) and let \(z\in\CR\).
We first consider the case \(n=0\).
By virtue of \rdefn{K17.1} and \eqref{XF.01}, we see then \(z\hp{0}-\hp{0}\ek{\XFa{-1}{z}}^\mpi\hp{0}=z\hp{0}-\hp{0}\cdot\Oqq^\mpi\cdot\hp{0}=z\hp{0}=\XFa{0}{z}\).
Now suppose \(n\geq1\).
From \rlem{L0946} we can infer \(\det\pba{n}{z}\neq0\) and
\beql{FL1}
\ek*{\XFa{2n-1}{z}}^\mpi
=\hp{2n-2}^\mpi\hp{2n-2}\ek{\pba{n}{z}}^\inv\pba{n-1}{z}\hp{2n-2}^\mpi.
\eeq
\rrem{R1019} yields \(\hp{2n}\hp{2n-2}^\mpi\hp{2n-2}=\hp{2n}\).
Consequently, from \eqref{FL1} we perceive \(\hp{2n}\ek{\XFa{2n-1}{z}}^\mpi=\hp{2n}\ek{\pba{n}{z}}^\inv\pba{n-1}{z}\hp{2n-2}^\mpi\).
In view of \rrem{R52SK} and \rnota{N-abcdO}, we have \(\XFa{2n}{z}=\hp{2n}\ek{\pba{n}{z}}^\inv\ek{z\pba{n}{z}-\pba{n-1}{z}\hp{2n-2}^\mpi\hp{2n}}\).
Therefore,
\[
z\hp{2n}-\hp{2n}\ek*{\XFa{2n-1}{z}}^\mpi\hp{2n}
=z\hp{2n}-\rk*{\hp{2n}\ek{\pba{n}{z}}^\inv \pba{n-1}{z}\hp{2n-2}^\mpi}\hp{2n}
=\XFa{2n}{z}.
\]
\eqref{K17-7.b} Suppose \(\kappa\geq1\) and that \(n\in\NO\) is such that \(2n+1\leq\kappa\).
Then \rlem{K17-2} shows that \eqref{N47SK} holds true for all \(z\in\CR\).
Thus, applying \rpart{K17-7.a} completes the proof of \rpart{K17-7.b}.
\eproof
We note that \rlem{K17-7} plays an essential role for \rlem{P1056} and \rprop{P1129} below, which contain significant observations about the \tsXF{}.
Now we are going to look at the \tsXF{} under the light of a particular Schur transform, which was introduced in \cite{MR3380267}.
Before doing this we recall the following notion.
\bdefnnl{\zitaa{MR3380267}{\csec{8}{218}}}{K11}
Let \(\mG\) be a \tne{} subset of \(\C\), let \(F\colon\mG\to\Cpq\) be a matrix-valued function, and let \(A\) and \(B\) be complex \tpqa{matrices}.
Then let \(F^\ABFt{A}{B}\colon\mG\to\Cpq\) be defined by \(F^\ABFt{A}{B}\rk{z}\defeq-A\rk{z\Iq+\ek{F\rk{z}}^\mpi A}+B\).
The function \(F^\ABFt{A}{B}\) is called the \emph{\tABSchto{A}{B}{\(F\)}}.
\edefn
\bleml{P1056}
Let \(\kappa\in\NOinf\), let \(\seqska\in\Hggeqka\), and let \(\sHp{\kappa}\) be the \tsHpo{\(\seqska\)}.
Further, let \(\seqX{\kappa}\) be the \tsXFo{\(\seqska\)}.
Then \(\XFa{2n}{z}=\XF{2n-1}^\ABFt{-\hp{2n}}{\Oqq}\rk{z}\) for all \(n\in\NO\) such that \(2n\leq\kappa\) and all \(z\in\CR\).
Moreover \(\XFa{2n+1}{z}=\XF{2n-1}^\ABFt{-\hp{2n}}{-\hp{2n+1}}\rk{z}\) for all \(n\in\NO\) fulfilling \(2n+1\leq\kappa\) and all \(z\in\CR\).
\elem
\bproof
This is a direct consequence of \rdefn{K11} and \rlem{K17-7}.
\eproof
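For \(n=0\), the preceding lemma can be verified by hand from \eqref{XF.01}: since \(\XF{-1}\) vanishes identically, the transform of \rdefn{K11} reduces to \(\XF{-1}^\ABFt{A}{B}\rk{z}=-zA+B\). The NumPy sketch below checks this numerically; the function \texttt{schur\_transform} and the random test data are illustrative assumptions, with \texttt{numpy.linalg.pinv} standing in for the Moore--Penrose inverse.

```python
import numpy as np

# Hedged sketch: the [A,B]-Schur transform of Definition K11,
#   F^{(A,B)}(z) = -A (z I + F(z)^+ A) + B   ("+" = Moore-Penrose inverse),
# checked against the n = 0 case, where X_{-1} = 0, X_0(z) = z*h0, and
# X_1(z) = z*h0 - h1.
rng = np.random.default_rng(2)
q = 3
M = rng.standard_normal((q, q)) + 1j * rng.standard_normal((q, q))
h0 = M @ M.conj().T            # psd stand-in for h_0
h1 = M + M.conj().T            # Hermitian stand-in for h_1
Iq = np.eye(q)

def schur_transform(F, A, B, z):
    Fz = F(z)
    return -A @ (z * Iq + np.linalg.pinv(Fz) @ A) + B

Xm1 = lambda z: np.zeros((q, q), dtype=complex)   # X_{-1} is identically 0
z = 1.5 + 0.8j
assert np.allclose(schur_transform(Xm1, -h0, np.zeros((q, q)), z), z * h0)  # X_0
assert np.allclose(schur_transform(Xm1, -h0, -h1, z), z * h0 - h1)          # X_1
print("transform identities verified for n = 0")
```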
Now we state an interrelation to the class \(\RFq\).
\bpropl{P1129}
Let \(\kappa\in\NOinf\), let \(\seqska\in\Hggeqka\), and let \(\sHp{\kappa}\) be the \tsHpo{\(\seqska\)}.
Further, let \(\seqX{\kappa}\) be the \tsXFo{\(\seqska\)}.
For all \(m\in\mn{0}{\kappa}\), then \(G_m\defeq\rstr_{\pip}\XF{m}\) belongs to the class \(\RFq\) and \(\beta_{m}=\hp{\eff{m}}\), where \((\alpha_{m}, \beta_{m}, \nu_{m})\) is the \tNpo{\(G_m\)} and \(\eff{m}\) is given by \eqref{SK5-11}.
\eprop
\bproof
\rprop{DFKMT2.30N} yields that \(\hp{j}^\ad=\hp{j}\) for all \(j\in\mn{0}{\kappa}\) and \(\hp{2k}\in\Cggq\) for all \(k\in\NO\) fulfilling \(2k\leq\kappa\) hold true.
The proof is divided into three parts:
(I) From \eqref{XF.01} we get \(G_0\rk{w}=w\hp{0}\) for all \(w\in\pip\).
Regarding \(\hp{0}\in\Cggq\), \rthm{Theo21} yields \(G_0\in\RFOq \) and \(\beta_{0}=\hp{0}\).
If \(\kappa=0\), then, in view of \eqref{SK5-11}, the proof is complete.
(II) Suppose \(\kappa\geq1\).
From \eqref{XF.01} we receive \(G_1\rk{w}=-\hp{1}+w\hp{0}\) for all \(w\in\pip\).
Regarding \(\hp{0}\in\Cggq\) and \(\hp{1}^\ad=\hp{1}\), consequently \rthm{Theo21} provides \(G_1\in\RFq\) and \(\beta_1=\hp{0}\).
If \(\kappa=1\), then, in view of \eqref{SK5-11}, the proof is finished.
(III) Suppose \(\kappa\geq2\).
According to parts~(I) and~(II) of the proof, there is an \(n\in\N\) fulfilling \(2n\leq\kappa\) such that \(G_l\in\RFq\) and \(\beta_{l}=\hp{\eff{l}}\) hold true for each \(l\in\mn{0}{2n-1}\).
\rlem{P1056} shows that \(\XFa{2n}{z}=\XF{2n-1}^\ABFt{-\hp{2n}}{\Oqq}\rk{z}\) is valid for all \(z\in\CR\).
In particular, \(G_{2n}=G_{2n-1}^\ABFt{-\hp{2n}}{\Oqq}\).
Taking into account \(G_{2n-1}\in\RFq\) as well as \(\hp{2n}\in\Cggq\) and \(\Oqq\in\CHq\), from \zitaa{MR3380267}{\cprop{8.6}{219}} we conclude \(G_{2n}\in\RFq\).
According to \rlem{fkm12bP3.3}, we acquire
\begin{align}\label{SKM11}
\beta_{2n}&=\lim_{y\to\infty}\rk{\iu y}^\inv G_{2n}\rk{\iu y}&
&\text{and}&
\beta_{2n-1}&=\lim_{y\to\infty}\rk{\iu y}^\inv G_{2n-1}\rk{\iu y}.
\end{align}
Because of \rlem{K17-6} and \(\beta_{2n-1}=\hp{2n-2}\), we obtain
\[
\rank G_{2n-1}\rk{w}
=\dim\ran{G_{2n-1}\rk{w}}
=\dim\ran{\hp{2n-2}}
=\rank\hp{2n-2}
=\rank\beta_{2n-1}
\]
for all \(w\in\pip\).
Therefore, the second equation in \eqref{SKM11} and \rprop{MPK} provide \(\lim_{y\to\infty}\rk{\iu y\ek{G_{2n-1}\rk{\iu y}}^\mpi}=\beta_{2n-1}^\mpi\) and, consequently,
\beql{SKM13}
\lim_{y\to\infty}\rk*{\rk{\iu y}^\inv\ek*{G_{2n-1}\rk{\iu y}}^\mpi}
=\Oqq.
\eeq
Using \eqref{SKM11}, \rlemp{K17-7}{K17-7.a}, and \eqref{SKM13}, we perceive
\beql{SKM}\begin{split}
\beta_{2n}
&=\lim_{y\to\infty}\rk{\iu y}^\inv\rk*{\iu y\hp{2n}-\hp{2n}\ek*{G_{2n-1}\rk{\iu y}}^\mpi\hp{2n}}\\
&=\hp{2n}-\hp{2n}\ek*{\lim_{y\to\infty}\rk*{\rk{\iu y}^\inv\ek*{G_{2n-1}\rk{\iu y}}^\mpi}}\hp{2n}
=\hp{2n}.
\end{split}\eeq
Now assume that \(2n+1\leq\kappa\).
From \rlem{K17-2} we recognize that \eqref{N47SK} holds true for each \(z\in\CR\).
Thus, by virtue of \eqref{N47SK} and \(G_{2n}\in\RFq\), from \rthm{Theo21} we conclude
\[
G_{2n+1}\rk{w}
=G_{2n}\rk{w}-\hp{2n+1}
=\alpha_{2n}-\hp{2n+1}+w\beta_{2n}+\int_\R\frac{1+xw}{x-w}\nu_{2n}\rk{\dif x}
\]
for all \(w\in\pip\).
In view of \(\hp{2n+1}^\ad=\hp{2n+1}\), \rthm{Theo21}, and \eqref{SKM}, consequently, \(G_{2n+1}\in\RFq\) and \(\beta_{2n+1}=\beta_{2n}=\hp{2n}\) hold true.
For each \(m\in\mn{0}{\kappa}\), in view of \eqref{SK5-11}, hence, \(G_m\in\RFq\) and \(\beta_{m}=\hp{\eff{m}}\) are proved inductively.
\eproof
\bcorl{C1213}
Let \(\kappa\in\NOinf\), let \(\seqska\in\Hggeqka\) with \sabcd{} \(\abcd{\ev{\kappa}}\) and \tsXF{} \(\seqX{\kappa}\), and let \(m\in\mn{0}{\kappa}\).
In view of \rprop{P1129}, denote by \(\rk{\alpha_m,\beta_m,\nu_{m}}\) the \tNpo{\(G_m\defeq\rstr_{\pip}\XF{m}\)}.
Then \(\nu_m\rk{\R\setminus\nst{\det\pb{\ef{m}}}}=\Oqq\), where \(\ef{m}\) is given by \eqref{SK5-11}.
\ecor
\bproof
From \rrem{SK32R} we can conclude that \(\det\pb{\ef{m}}\) is a non-zero complex polynomial.
Consequently, \(\nst{\det\pb{\ef{m}}}\) is a finite subset of \(\C\).
Hence, there are an integer \(\ell\in\N\) and a sequence \(\seq{\mathcal{I}_k}{k}{1}{\ell}\) of pairwise disjoint open (real, bounded or unbounded) intervals such that \(\R\setminus\nst{\det\pb{\ef{m}}}=\bigcup_{k=1}^\ell\mathcal{I}_k\).
Consider an arbitrary \(k\in\mn{1}{\ell}\).
Using \rrem{R0735}, we can infer then \(\pip\cup\mathcal{I}_k\subseteq\holpt{\XF{m}}\).
Therefore, \(G_{m,k}\defeq\rstr_{\pip\cup\mathcal{I}_k}\XF{m}\) is a continuous continuation of \(G_m\) onto \(\pip\cup\mathcal{I}_k\).
\rlem{L0625} furthermore shows \(G_{m,k}\rk{\mathcal{I}_k}\subseteq\CHq\).
Since \rprop{P1129} yields \(G_m\in\RFq\), we can then apply \zitaa{MR2570113}{\crem{B.3}{823}} to obtain \(\nu_m\rk{\mathcal{I}_k}=\Oqq\).
Hence, \(\nu_m\rk{\R\setminus\nst{\det\pb{\ef{m}}}}=\sum_{k=1}^\ell\nu_m\rk{\mathcal{I}_k}=\Oqq\) follows.
\eproof
\bcorl{C1232}
Let \(\kappa\in\NOinf\), let \(\seqska\in\Hggeqka\) with \tsXF{} \(\seqX{\kappa}\), and let \(m\in\mn{-1}{\kappa}\).
Then \(\frac{1}{\rim z}\rim\XFa{m}{z}\in\Cggq\) for every choice of \(z\in\CR\).
\ecor
\bproof
For \(m=-1\) this is an immediate consequence of \rdefn{K17.1}.
Now suppose \(m\in\mn{0}{\kappa}\).
Because of the definition of the class \(\RFq\) and \rprop{P1129}, we realize that \(\frac{1}{\rim w}\rim\XFa{m}{w}\in\Cggq\) for all \(w\in\pip\).
Now we consider an arbitrary \(z\in\lhp\).
Then \(w\defeq\ko{z}\) belongs to \(\pip\) and, consequently, as already shown, \(\frac{1}{\rim w}\rim\XFa{m}{w}\in\Cggq\) holds true.
Because of \rrem{R0735} and \rlem{L0625}, we acquire \(\ek{\XFa{m}{z}}^\ad=\XFa{m}{w}\), implying \(\rim\XFa{m}{w}=-\rim\XFa{m}{z}\).
Thus, additionally taking into account \(\rim w=-\rim z\), we receive \(\frac{1}{\rim z}\rim\XFa{m}{z}\in\Cggq\).
In view of \(\CR=\pip\cup\lhp\), the proof is complete.
\eproof
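For \(m\in\set{0,1}\), the assertion of the preceding corollary is immediate from \eqref{XF.01}, since \(\rim\XFa{m}{z}=\rim\rk{z}\hp{0}\). A minimal NumPy sketch of these two cases follows; the random matrices are assumptions for illustration only.

```python
import numpy as np

# Hedged sketch for m in {0, 1}: with X_0(z) = z*h0 and X_1(z) = z*h0 - h1
# (h0 psd, h1 Hermitian), Im X_m(z) / Im z equals h0, which is positive
# semidefinite in both half-planes.
rng = np.random.default_rng(3)
q = 3
M = rng.standard_normal((q, q)) + 1j * rng.standard_normal((q, q))
h0 = M @ M.conj().T
h1 = M + M.conj().T

def matrix_im(A):
    # matricial imaginary part (A - A^*) / (2i), always Hermitian
    return (A - A.conj().T) / 2j

for z in (0.4 + 2.0j, -1.0 - 0.3j):               # one point per half-plane
    for Xz in (z * h0, z * h0 - h1):
        G = matrix_im(Xz) / z.imag
        assert np.min(np.linalg.eigvalsh(G)) > -1e-9   # psd up to rounding
print("(Im z)^{-1} Im X_m(z) is psd for m = 0, 1")
```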
Now we extend the results of \rlem{K17-6}.
\bpropl{L1347}
Let \(\kappa\in\NOinf\), let \(\seqska\in\Hggeqka\) with \tsHp{} \(\sHp{\kappa}\) and \tsXF{} \(\seqX{\kappa}\), and let \(m\in\mn{0}{\kappa}\).
Then, for each \(z\in\CR\), the equations
\beql{L1347.1}
\Ran{\ek*{\XFa{m}{z}}^\ad}
=\Ran{\XFa{m}{z}}
=\Ran{\rim\XFa{m}{z}}
=\ran{\hp{\eff{m}}}
\eeq
and
\beql{L1347.2}
\Nul{\ek*{\XFa{m}{z}}^\ad}
=\Nul{\XFa{m}{z}}
=\Nul{\rim\XFa{m}{z}}
=\nul{\hp{\eff{m}}}
\eeq
hold true, where \(\eff{m}\) is given by \eqref{SK5-11}.
\eprop
\bproof
(I) First we consider an arbitrary \(w\in\pip\).
By virtue of \rprop{P1129}, the matrix-valued function \(G_m\defeq\rstr_{\pip}\XF{m}\) belongs to \(\RFq\) with \(\beta_m=\hp{\eff{m}}\), where \((\alpha_m, \beta_m, \nu_m)\) signifies the \tNpo{\(G_m\)}.
Thus, from \zitaa{MR2988005}{\crem{3.5}{1772}} we recognize that \(\ran{\ek{G_m\rk{w}}^\ad}=\ran{G_{m}\rk{w}}\) and \(\nul{\ek{G_{m}\rk{w}}^\ad}=\nul{G_{m}\rk{w}}\) are valid.
Hence, \(\ran{\ek{\XFa{m}{w}}^\ad}=\ran{\XFa{m}{w}}\) and \(\nul{\ek{\XFa{m}{w}}^\ad}=\nul{\XFa{m}{w}}\).
Setting \(\gamma_m\defeq\nu_m\rk{\R}\), furthermore \zitaa{MR2988005}{\cprop{3.7}{1772}} yields \(\ran{G_m\rk{w}}=\ran{\alpha_m}+\ran{\beta_m}+\ran{\gamma_m}\) and \(\nul{G_m\rk{w}}=\nul{\alpha_m}\cap\nul{\beta_m}\cap\nul{\gamma_m}\) as well as \(\ran{\rim G_m\rk{w}}=\ran{\beta_m}+\ran{\gamma_m}\) and \(\nul{\rim G_m\rk{w}}=\nul{\beta_m}\cap\nul{\gamma_m}\).
Taking into account \rlem{K17-6} and \(\beta_m=\hp{\eff{m}}\), we can infer
\[
\Ran{\XFa{m}{w}}
=\ran{\hp{\eff{m}}}
=\ran{\beta_m}
\subseteq\Ran{\rim G_m\rk{w}}
\subseteq\Ran{G_m\rk{w}}
=\Ran{\XFa{m}{w}}
\]
and
\[
\Nul{\XFa{m}{w}}
=\Nul{G_{m}\rk{w}}
\subseteq\Nul{\rim G_m\rk{w}}
\subseteq\nul{\beta_m}
=\nul{\hp{\eff{m}}}
=\Nul{\XFa{m}{w}}.
\]
Consequently, \(\ran{\rim\XFa{m}{w}}=\ran{\rim G_{m}\rk{w}}=\ran{\hp{\eff{m}}}=\ran{\XFa{m}{w}}\) and \(\nul{\rim\XFa{m}{w}}=\nul{\rim G_{m}\rk{w}}=\nul{\hp{\eff{m}}}=\nul{\XFa{m}{w}}\).
In particular, \eqref{L1347.1} and \eqref{L1347.2} hold true for all \(z\in\pip\).
(II) Let \(z\in\lhp\).
Then \(w\defeq\ko{z}\) belongs to \(\pip\).
According to \rrem{R0735} and \rlem{L0625}, we get \(\XFa{m}{w}=\ek{\XFa{m}{z}}^\ad\).
Hence, \(\XFa{m}{z}=\ek{\XFa{m}{w}}^\ad\) and \(\rim\XFa{m}{z}=-\rim\XFa{m}{w}\) follow.
Consequently, we obtain \eqref{L1347.1} and \eqref{L1347.2} for all \(z\in\lhp\) as well by applying \cpart{(I)}.
(III) Because of~(I),~(II), and \(\CR=\pip\cup\lhp\), the assertion is proved.
\eproof
\bcorl{C1236}
Let \(\kappa\in\NOinf\), let \(\seqska\in\Hgeqka\) with \tsXF{} \(\seqX{\kappa}\), and let \(m\in\mn{0}{\kappa}\).
Then \(\det\XFa{m}{z}\neq0\) for all \(z\in\CR\).
Furthermore \(\rim\XFa{m}{z}\in\Cgq\) for all \(z\in\pip\).
\ecor
\bproof
\rrem{Schr2.7} yields \(\seqska\in\Hggeqka\).
Furthermore, we easily see that \(\hp{\eff{m}}\in\Cgq\) and, in particular, \(\ran{\hp{\eff{m}}}=\Cq\).
Applying \rprop{L1347} and \rcor{C1232} completes the proof.
\eproof
\bpropl{M2140}
Let \(\kappa\in\minf{3}\cup\set{\infi}\), let \(\seqska\in\Hggeqka\) with \tsHp{} \(\sHp{\kappa}\), \sabcd{} \(\abcd{\ev{\kappa}}\), and \tsXF{} \(\seqX{\kappa}\), and let \(n\in\N\) be such that \(2n+1\leq\kappa\).
For every choice of \(z\) and \(w\) in \(\CR\), then \(\det\pda{n}{z}\neq0\) and \(\det\pba{n}{\ko{w}}\neq0\) as well as
\begin{multline*}
\XFa{2n+1}{z}-\ek*{\XFa{2n+1}{w}}^\ad
=\rk{z-\ko{w}}\hp{2n}+\hp{2n}\ek*{\pba{n}{\ko{w}}}^\inv\pb{n-1}\rk{\ko{w}}\hp{2n-2}^\mpi\\
\lbtimes\rk*{\XFa{2n-1}{z}-\ek*{\XFa{2n-1}{w}}^\ad}\hp{2n-2}^\mpi\pda{n-1}{z}\ek*{\pda{n}{z}}^\inv\hp{2n}.
\end{multline*}
\eprop
\bproof
According to \rrem{R1019}, we have \eqref{R1019.2}.
Consider arbitrary \(z,w\in\CR\).
We set \(Z\defeq\XFa{2n-1}{z}\) and \(W\defeq\XF{2n-1}\rk{\ko{w}}\).
Regarding \eqref{SK5-11}, from \rprop{L1347} we recognize that \(\ran{Z^\ad}=\ran{Z}=\ran{\hp{2n-2}}=\ran{W}=\ran{W^\ad}\) is valid.
Using \rrem{tsb3}, we conclude then
\begin{align}\label{M2140.1}
Z^\mpi Z
&=\OPu{\ran{Z^\ad}}
=\OPu{\ran{W^\ad}}
=W^\mpi W
&\text{and}&
WW^\mpi
&=\OPu{\ran{W}}
=\OPu{\ran{Z}}
=ZZ^\mpi.
\end{align}
In view of \rrem{R0735}, \rlem{L0625} yields \(W=\ek{\XFa{2n-1}{w}}^\ad\) and
\beql{KD4}
\XFa{2n+1}{z}-\ek*{\XFa{2n+1}{w}}^\ad
=\XFa{2n+1}{z}-\XF{2n+1}\rk{\ko{w}}.
\eeq
According to \rlem{L0946}, we have \(\det\pda{n}{z}\neq0\) and \(\det\pba{n}{\ko{w}}\neq0\) as well as
\begin{align}
Z^\mpi&=\hp{2n-2}^\mpi\pda{n-1}{z}\ek*{\pda{n}{z}}^\inv\hp{2n-2}\hp{2n-2}^\mpi,\\
W^\mpi&=\hp{2n-2}^\mpi\hp{2n-2}\ek*{\pba{n}{\ko{w}}}^\inv\pb{n-1}\rk{\ko{w}}\hp{2n-2}^\mpi.\label{KD8}
\end{align}
From \rlemp{K17-7}{K17-7.b} we obtain \(\XFa{2n+1}{z}=z\hp{2n}-\hp{2n}Z^\mpi\hp{2n}-\hp{2n+1}\) and \(\XF{2n+1}\rk{\ko{w}}=\ko{w}\hp{2n}-\hp{2n}W^\mpi\hp{2n}-\hp{2n+1}\).
Then, using additionally \eqref{KD4}, \eqref{M2140.1}, \eqref{KD8}, and \eqref{R1019.2}, we receive
\[\begin{split}
&\XFa{2n+1}{z}-\ek*{\XFa{2n+1}{w}}^\ad
=\XFa{2n+1}{z}-\XF{2n+1}\rk{\ko{w}}\\
&=\rk{z\hp{2n}-\hp{2n}Z^\mpi\hp{2n}-\hp{2n+1}}-\rk{\ko{w}\hp{2n}-\hp{2n}W^\mpi\hp{2n}-\hp{2n+1}}\\
&=\rk{z-\ko{w}}\hp{2n}-\hp{2n}\rk{Z^\mpi-W^\mpi}\hp{2n}
=\rk{z-\ko{w}}\hp{2n}-\hp{2n}\rk{Z^\mpi ZZ^\mpi-W^\mpi WW^\mpi}\hp{2n}\\
&=\rk{z-\ko{w}}\hp{2n}-\hp{2n}\rk{W^\mpi WZ^\mpi-W^\mpi ZZ^\mpi}\hp{2n}\\
&=\rk{z-\ko{w}}\hp{2n}-\hp{2n}W^\mpi\rk{W-Z}Z^\mpi\hp{2n}\\
&=\rk{z-\ko{w}}\hp{2n}-\hp{2n}\rk*{\hp{2n-2}^\mpi\hp{2n-2}\ek*{\pba{n}{\ko{w}}}^\inv\pb{n-1}\rk{\ko{w}}\hp{2n-2}^\mpi}\rk{W-Z}\\
&\qquad\lbtimes\rk*{\hp{2n-2}^\mpi\pda{n-1}{z}\ek*{\pda{n}{z}}^\inv\hp{2n-2}\hp{2n-2}^\mpi}\hp{2n}\\
&=\rk{z-\ko{w}}\hp{2n}+\hp{2n}\ek*{\pba{n}{\ko{w}}}^\inv\pb{n-1}\rk{\ko{w}}\hp{2n-2}^\mpi\rk{Z-W}\hp{2n-2}^\mpi\pda{n-1}{z}\ek*{\pda{n}{z}}^\inv\hp{2n}.
\end{split}\]
In view of \(W=\ek{\XFa{2n-1}{w}}^\ad\), the proof is complete.
\eproof
\bcorl{M2145}
Let \(\kappa\in\minf{3}\cup\set{\infi}\), let \(\seqska\in\Hggeqka\) with \tsHp{} \(\sHp{\kappa}\), \sabcd{} \(\abcd{\ev{\kappa}}\), and \tsXF{} \(\seqX{\kappa}\), and let \(n\in\N\) be such that \(2n+1\leq\kappa\).
For all \(z\in\CR\), then
\[
\rim\XFa{2n+1}{z}
=\ek*{\rim\rk{z}}\hp{2n}+\hp{2n}\ek*{\pba{n}{\ko{z}}}^\inv\pba{n-1}{\ko{z}}\hp{2n-2}^\mpi\ek*{\rim\XFa{2n-1}{z}}\hp{2n-2}^\mpi\pda{n-1}{z}\ek*{\pda{n}{z}}^\inv\hp{2n}.
\]
\ecor
\bproof
This follows by application of \rprop{M2140} with \(w=z\).
\eproof
\section{Description of the values of the solutions}\label{Cha13}
In this section, ultimately we target an explicit description of the set
\beql{FR}
\setaca*{F\rk{w}}{F\in\RFOqskg{2n}},
\eeq
\tie{}, a parametrization of the set
\[
\setaca*{\int_\R\frac{1}{x-w}\sigma\rk{\dif x}}{\sigma\in\MggqRskg{2n}}
\]
with respect to an arbitrarily prescribed sequence \(\seqs{2n}\in\Hggeq{2n}\) and an arbitrarily chosen point \(w\in\pip\).
As already mentioned, the set \eqref{FR} can be described as a matrix ball.
In order to realize our aim, we introduce a notation which will turn out to provide possible parameters of this matrix ball.
In order to justify that this notation is well defined, we need a little preparation.
Observe that the following constructions are well defined due to \rremss{SK32R}{R0735}:
\bnotal{N1326}
Let \(\kappa\in\Ninf\) and let \(\seqska\in\Hggeqka\) with \tsHp{} \(\sHp{\kappa}\), \sabcd{} \(\abcd{\ev{\kappa}}\), and \tsXF{} \(\seqX{\kappa}\).
For all \(n\in\NO\) such that \(2n+1\leq\kappa\) let \(\PA{n},\PB{n},\PC{n},\PD{n}\colon\CR\to\Cqq\) be defined by
\begin{align*}
\PAa{n}{z}&\defeq\paa{n}{z}\hp{2n}^\mpi\ek*{\XFa{2n+1}{z}}^\ad-\paa{n+1}{z},&
\PBa{n}{z}&\defeq\pba{n}{z}\hp{2n}^\mpi\ek*{\XFa{2n+1}{z}}^\ad-\pba{n+1}{z},\\
\PCa{n}{z}&\defeq\ek*{\XFa{2n+1}{z}}^\ad\hp{2n}^\mpi\pca{n}{z}-\pca{n+1}{z},&
\PDa{n}{z}&\defeq\ek*{\XFa{2n+1}{z}}^\ad \hp{2n}^\mpi\pda{n}{z}-\pda{n+1}{z}.
\end{align*}
\enota
In view of \eqref{K19.0}, \eqref{K19.1}, \rlem{L0625}, and \eqref{XF.01}, for all \(z\in\CR\), we have
\begin{align}\label{ABCD0}
\PAa{0}{z}&=-\hp{0},&
\PBa{0}{z}&=\ko z\hp{0}^\mpi\hp{0}-z\Iq,&
\PCa{0}{z}&=-\hp{0},&
\PDa{0}{z}&=\ko z\hp{0}\hp{0}^\mpi-z\Iq.
\end{align}
\breml{R1352}
Let \(\kappa\in\Ninf\) and let \(\seqska\in\Hggeqka\).
Using \eqref{R1019.0}, \rremssss{A.R.A++*}{R1624}{BK8.2}{R0735}, and \rlem{L0625}, it is readily checked that \(\PCa{n}{z}=\ek{\PAa{n}{\ko{z}}}^\ad\) and \(\PDa{n}{z}=\ek{\PBa{n}{\ko{z}}}^\ad\) hold true for all \(n\in\NO\) fulfilling \(2n+1\leq\kappa\) and all \(z\in\CR\).
\erem
\bleml{L2412}
Let \(\kappa\in\Ninf\), let \(\seqska\in\Hggeqka\), and let \(n\in\NO\) be such that \(2n+1\leq\kappa\).
Then \(\det\PBa{n}{z}\neq0\) and \(\det\PDa{n}{z}\neq0\) for all \(z\in\CR\).
\elem
\bproof
Let \(z\in\CR\).
We consider an arbitrary \(x\in\nul{\PBa{n}{z}}\).
Thus
\beql{L2412.1}
\pba{n}{z}\hp{2n}^\mpi\ek{\XFa{2n+1}{z}}^\ad x
=\pba{n+1}{z}x.
\eeq
\rlem{BK8.6} shows that
\begin{align}\label{L2412.2}
\det\pba{n}{z}&\neq0&
&\text{and}&
\det\pba{n+1}{z}&\neq0
\end{align}
hold true.
Because of \eqref{L2412.1} and \eqref{L2412.2}, we acquire
\beql{L2412.3}
\hp{2n}^\mpi\ek{\XFa{2n+1}{z}}^\ad x
=\ek*{\pba{n}{z}}^\inv\pba{n+1}{z}x.
\eeq
Regarding \eqref{SK5-11}, \rprop{L1347} yields \(\ran{\ek{\XFa{2n+1}{z}}^\ad}=\ran{\hp{2n}}\).
\rremp{tsa12}{tsa12.a} implies
\beql{L2412.4}
\hp{2n}\hp{2n}^\mpi\ek*{\XFa{2n+1}{z}}^\ad
=\ek*{\XFa{2n+1}{z}}^\ad.
\eeq
By virtue of \rrem{R52SK}, we perceive \eqref{Z18}.
Taking into account \eqref{Z18}, \eqref{L2412.3}, and \eqref{L2412.4}, we get
\[
\XFa{2n+1}{z}x
=\hp{2n}\ek*{\pba{n}{z}}^\inv\pba{n+1}{z}x
=\hp{2n}\hp{2n}^\mpi\ek*{\XFa{2n+1}{z}}^\ad x
=\ek*{\XFa{2n+1}{z}}^\ad x.
\]
This implies \(\rk{\XFa{2n+1}{z}-\ek{\XFa{2n+1}{z}}^\ad}x=\Ouu{q}{1}\).
Therefore, \(x\in\nul{\rim\XFa{2n+1}{z}}\).
In view of \rprop{L1347}, we get \(x\in\nul{\ek{\XFa{2n+1}{z}}^\ad}\).
Because of \eqref{L2412.1}, then \(\pba{n+1}{z}x=\Ouu{q}{1}\) follows.
Thus, using \eqref{L2412.2}, we receive \(x=\Ouu{q}{1}\).
Hence, \(\nul{\PBa{n}{z}}\subseteq\set{\Ouu{q}{1}}\).
Consequently, \(\det\PBa{n}{z}\neq0\) for all \(z\in\CR\).
Using \rrem{R1352}, we then conclude \(\det\PDa{n}{z}\neq0\) for all \(z\in\CR\) as well.
\eproof
\bleml{L1257}
Let \(\kappa\in\Ninf\), let \(\seqska\in\Hggeqka\), and let \(n\in\NO\) be such that \(2n+1\leq\kappa\).
Then \(\PDa{n}{z}\PAa{n}{z}=\PCa{n}{z}\PBa{n}{z}\) for all \(z\in\CR\).
\elem
\bproof
Let \(z\in\CR\).
According to \rlem{P8.14}, we acquire
\begin{align}
\pda{n}{z}\paa{n}{z}-\pca{n}{z}\pba{n}{z}&=\Oqq,&\pda{n}{z}\paa{n+1}{z}-\pca{n}{z}\pba{n+1}{z}&=\hp{2n},\label{L1257.1}\\
\pda{n+1}{z}\paa{n}{z}-\pca{n+1}{z}\pba{n}{z}&=-\hp{2n},&\pda{n+1}{z}\paa{n+1}{z}-\pca{n+1}{z}\pba{n+1}{z}&=\Oqq.\label{L1257.2}
\end{align}
Regarding \eqref{SK5-11}, \rprop{L1347} yields \(\ran{\ek{\XFa{2n+1}{z}}^\ad}=\ran{\hp{2n}}\) and \(\nul{\ek{\XFa{2n+1}{z}}^\ad}=\nul{\hp{2n}}\).
Using \rrem{tsa12}, then
\begin{align}\label{L1257.3}
\hp{2n}\hp{2n}^\mpi\ek*{\XFa{2n+1}{z}}^\ad&=\ek*{\XFa{2n+1}{z}}^\ad&
&\text{and}&
\ek*{\XFa{2n+1}{z}}^\ad\hp{2n}^\mpi\hp{2n}&=\ek*{\XFa{2n+1}{z}}^\ad
\end{align}
follow.
From \eqref{L1257.1}, \eqref{L1257.2}, and \eqref{L1257.3}, thus we get
\[\begin{split}
&\rk*{\ek*{\XFa{2n+1}{z}}^\ad\hp{2n}^\mpi\pda{n}{z}-\pda{n+1}{z}}\rk*{\paa{n}{z}\hp{2n}^\mpi\ek*{\XFa{2n+1}{z}}^\ad-\paa{n+1}{z}}\\
&\qquad-\rk*{\ek*{\XFa{2n+1}{z}}^\ad\hp{2n}^\mpi\pca{n}{z}-\pca{n+1}{z}}\rk*{\pba{n}{z}\hp{2n}^\mpi\ek*{\XFa{2n+1}{z}}^\ad-\pba{n+1}{z}}\\
&=\ek*{\XFa{2n+1}{z}}^\ad\hp{2n}^\mpi\ek*{\pda{n}{z}\paa{n}{z}-\pca{n}{z}\pba{n}{z}}\hp{2n}^\mpi\ek*{\XFa{2n+1}{z}}^\ad\\
&\qquad-\ek*{\XFa{2n+1}{z}}^\ad\hp{2n}^\mpi\ek{\pda{n}{z}\paa{n+1}{z}-\pca{n}{z}\pba{n+1}{z}}\\
&\qquad-\ek*{\pda{n+1}{z}\paa{n}{z}-\pca{n+1}{z}\pba{n}{z}}\hp{2n}^\mpi\ek*{\XFa{2n+1}{z}}^\ad+\pda{n+1}{z}\paa{n+1}{z}-\pca{n+1}{z}\pba{n+1}{z}\\
&=-\ek*{\XFa{2n+1}{z}}^\ad\hp{2n}^\mpi\hp{2n}-\rk{-\hp{2n}}\hp{2n}^\mpi\ek*{\XFa{2n+1}{z}}^\ad
=\Oqq.
\end{split}\]
In view of \rnota{N1326}, the proof is complete.
\eproof
\bleml{L1313}
Let \(\kappa\in\NOinf\), let \(\seqska\in\Hggeqka\) with \tsHp{} \(\sHp{\kappa}\), \sabcd{} \(\abcd{\ev{\kappa}}\), and \tsXF{} \(\seqX{\kappa}\), and let \(n\in\NO\) be such that \(2n\leq\kappa\).
For each \(z\in\CR\), then
\(
\det\rk{\ek{\XFa{2n}{z}}^\ad\hp{2n}^\mpi\pda{n}{z}-\pdoa{n+1}{z}}
\neq0
\).
\elem
\bproof
We consider an arbitrary \(z\in\CR\).
\rprop{L1237-A} shows that the \tnatext{} \(\seqsh{2n+1}\) of \(\seqs{2n}\) belongs to \(\Hggeq{2n+1}\).
Let \(\seqXh{2n+1}\) be the \tsXFo{\(\seqsh{2n+1}\)}.
Then \rlem{L1520} provides \(\XFha{2n+1}{z}=\XFa{2n}{z}\).
Let \(\shHp{2n+1}\) be the \tsHpo{\(\seqsh{2n+1}\)}.
Then \rrem{R26-SK1} yields \(\hhp{2n}=\hp{2n}\).
Let \(\habcd{n+1}\) be the \sabcdo{\(\seqsh{2n+1}\)}.
Then \rlem{L1237-B} shows \(\pdh{n}=\pd{n}\) and \(\pdh{n+1}=\pdo{n+1}\).
Consequently, we receive
\[
\ek*{\XFa{2n}{z}}^\ad\hp{2n}^\mpi\pda{n}{z}-\pdoa{n+1}{z}
=\ek*{\XFha{2n+1}{z}}^\ad \hhp{2n}^\mpi\pdha{n}{z}-\pdha{n+1}{z}.
\]
Regarding \rnota{N1326}, applying \rlem{L2412} to the sequence \(\seqsh{2n+1}\) completes the proof.
\eproof
Now we introduce the central object of this section.
Observe that the following constructions are well defined due to \rcor{C1232} and \rlemsss{BK8.6}{L1313}{L2412}, regarding \rnota{N1326}:
\bnotal{CB15}
Let \(\kappa\in\NOinf\), let \(\seqska\in\Hggeqka\), let \(\sHp{\kappa}\) be the \tsHpo{\(\seqska\)}, let \(\abcd{\ev{\kappa}}\) be the \sabcdo{\(\seqska\)}, and let \(\seqX{\kappa}\) be the \tsXFo{\(\seqska\)}.
\benui
\il{CB15.a} For each \(n\in\NO\) such that \(2n\leq \kappa\), let \(\lrk{2n},\rrk{2n},\mpk{2n}\colon\CR\to\Cqq\) be defined by
\begin{align*}
\lrka{2n}{z}&\defeq\ek*{\pda{n}{z}}^\inv\hp{2n}\sqrt{\rk{\rim z}^\inv\rim\XFa{2n}{z}}^\mpi,\\
\rrka{2n}{z}&\defeq\sqrt{\rk{\rim z}^\inv\rim\XFa{2n}{z}}^\mpi\hp{2n}\ek*{\pba{n}{z}}^\inv,
\intertext{and}
\mpka{2n}{z}&\defeq-\rk*{\ek*{\XFa{2n}{z}}^\ad\hp{2n}^\mpi\pda{n}{z}-\pdoa{n+1}{z}}^\inv\rk*{\ek*{\XFa{2n}{z}}^\ad\hp{2n}^\mpi\pca{n}{z}-\pcoa{n+1}{z}}.
\end{align*}
\il{CB15.b} For each \(n\in\NO\) such that \(2n+1\leq\kappa\), let \(\lrk{2n+1},\rrk{2n+1},\mpk{2n+1}\colon\CR\to\Cqq\) be given by
\begin{align*}
\lrka{2n+1}{z}&\defeq\ek{\pda{n}{z}}^\inv\hp{2n}\sqrt{\rk{\rim z}^\inv\rim\XFa{2n+1}{z}}^\mpi,\\
\rrka{2n+1}{z}&\defeq\sqrt{\rk{\rim z}^\inv\rim\XFa{2n+1}{z}}^\mpi\hp{2n}\ek{\pba{n}{z}}^\inv,
\intertext{and}
\mpka{2n+1}{z}&\defeq-\rk*{\ek*{\XFa{2n+1}{z}}^\ad\hp{2n}^\mpi\pda{n}{z}-\pda{n+1}{z}}^\inv\rk*{\ek*{\XFa{2n+1}{z}}^\ad\hp{2n}^\mpi\pca{n}{z}-\pca{n+1}{z}}.
\end{align*}
\eenui
\enota
Recall that \(\Kpq\) stands for the set of all contractive complex \tpqa{matrices}.
The set
\[
\cmb{M}{A}{B}
\defeq\setaca{M+AKB}{K\in\Kpq}
\]
signifies the (closed) matrix ball with \emph{center} \(M\), \emph{left semi-radius} \(A\), and \emph{right semi-radius} \(B\) with respect to given matrices \(M\in\Cpq\), \(A\in\Cpp\), and \(B\in\Cqq\).
The ambient theory dates back to Yu.~L.~Shmul\cprime yan \cite{MR0273377}, who, moreover, examined the operator case in the context of Hilbert spaces.
Observe that the particular case of matrices is elaborated in \zitaa{MR1152328}{\csec{1.5}{44--51}}.
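The matrix ball \(\cmb{M}{A}{B}\) can be illustrated numerically: sampling a contraction \(K\) yields an element \(M+AKB\), and for invertible semi-radii the parameter \(K\) can be recovered from the element. A minimal NumPy sketch under these assumptions (random data; invertibility of \(A\) and \(B\) is assumed) follows.

```python
import numpy as np

# Hedged sketch of the matrix ball K(M; A, B) = { M + A K B : ||K|| <= 1 }:
# sample a random contraction K (clipping singular values at 1), form an
# element X, and, for invertible semi-radii, recover K = A^{-1}(X - M)B^{-1}.
rng = np.random.default_rng(4)
q = 3
M = rng.standard_normal((q, q))
A = np.eye(q) + 0.1 * rng.standard_normal((q, q))  # assumed invertible
B = np.eye(q) + 0.1 * rng.standard_normal((q, q))  # assumed invertible

U, s, Vh = np.linalg.svd(rng.standard_normal((q, q)))
K = U @ np.diag(np.minimum(s, 1.0)) @ Vh           # contraction: ||K||_2 <= 1

X = M + A @ K @ B                                   # an element of the ball
K_back = np.linalg.solve(A, X - M) @ np.linalg.inv(B)
assert np.allclose(K_back, K)
assert np.linalg.norm(K_back, 2) <= 1 + 1e-12
print("matrix ball element generated and parameter recovered")
```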
Using \rnotap{CB15}{CB15.a}, we are now able to formulate the central result of this paper, which generalizes a result of Kovalishina \zitaa{MR703593}{\S2}, who studied the non-degenerate case in which the given sequence \(\seqs{2n}\) belongs to \(\Hgq{2n}\).
\bthml{T45CD}
Let \(n\in\NO\) and let \(\seqs{2n}\in\Hggeq{2n}\).
For each \(w\in\pip\), then
\[
\setaca*{F\rk{w}}{F\in\RFOqskg{2n}}
=\Cmb{\mpka{2n}{w}}{\rk{w-\ko w}^\inv\lrka{2n}{w}}{\rrka{2n}{w}},
\]
where \(\lrk{2n}\), \(\rrk{2n}\), and \(\mpk{2n}\) are given by \rnotap{CB15}{CB15.a}.
\ethm
The main goal of this section is to give a proof of \rthm{T45CD}.
Before doing this, we look at \rnota{CB15}.
At first glance it might seem surprising that \rnota{CB15} is introduced for the two different indices \(2n\) and \(2n+1\), since \rnota{CB15} will later be applied to describe the Weyl matrix ball associated with a sequence \(\seqs{2n}\in\Hggeq{2n}\).
It will turn out (see \rlemss{C220N}{L3N10} below) that the corresponding matrices introduced in \rpartss{CB15.a}{CB15.b} of \rnota{CB15} coincide.
The final steps for the proof of \rthm{T45CD} are \rpropss{L1204}{L1301}.
They are formulated in terms of \rnotap{CB15}{CB15.a}.
However, in the corresponding proofs we use the matrices in the form expressed by \rnotap{CB15}{CB15.b}.
The strategy of our proof of \rthm{T45CD} is in a wider sense inspired by the proof of \zitaa{MR2656833}{\cthm{1.1}{}}, which contains the description of Weyl matrix balls associated with a finite \tqqa{Carath\'eodory} sequence.
Against this background, the terminology for the parameters of the Weyl matrix balls was chosen in a way similar to that of \cite{MR2656833}.
In both cases, the first obstacle was to formulate the right conjecture about the parameters of the Weyl matrix ball we are striving for.
As in the Carath\'eodory case the conjectured parameters of the Weyl matrix balls are essentially determined by a rational \tqqa{matrix}-valued function and some constant matrices.
In our case, this corresponds to the rational \tqqa{matrix}-valued function \(\XF{2n}\) introduced in \rdefn{K17.1} and the \tHp{} \(\hp{2n}\) given by \rdefn{CM2.1.64}.
In the following, we will use the terms listed in \rnota{CB15} without always referring to their definition.
\breml{ABM-tru}
Let \(\kappa\in\NOinf\) and let \(\seqska\in\Hggeqka\).
Regarding \rnotass{CB15}{N-abcdO}, one can see from \rremsss{CM2.1.65}{BK8.1}{XF-tru} that, for each \(m\in\mn{0}{\kappa}\), the functions \(\lrk{m}\), \(\rrk{m}\), and \(\mpk{m}\) are built only from the matrices \(\su{0},\su{1},\dotsc,\su{m}\) and do not depend on the matrices \(\su{j}\) with \(j\geq m+1\).
Moreover, \rlemss{C220N}{L3N10} below even show that \(\lrk{m}\), \(\rrk{m}\), and \(\mpk{m}\) are independent of \(\su{j}\) with \(j\geq\eff{m}+1\), where \(\eff{m}\) is given by \eqref{SK5-11}.
\erem
\bleml{L0810}
Let \(\kappa\in\NOinf\), let \(\seqska\in\Hggeqka\), and let \(z\in\CR\).
Then
\begin{align*}
\lrka{0}{z}&=\sqrt{\su{0}},&
\rrka{0}{z}&=\sqrt{\su{0}},&
&\text{and}&
\mpka{0}{z}&=\rk{\ko z-z}^\inv\su{0}.
\end{align*}
\elem
\bproof
First observe that \(\su{0}\in\Cggq\) is valid because of \(\seqska\in\Hggeqka\), and that \(\ko z-z\neq0\) holds true due to \(z\in\CR\).
According to \eqref{Hp.01}, we have \(\hp{0}=\su{0}\).
By virtue of \eqref{XF.01}, furthermore \(\XF{0}\rk{z}=z\su{0}\).
Consequently, \(\ek{\XFa{0}{z}}^\ad=\ko z\su{0}\) and \(\rim\XF{0}\rk{z}=\rim\rk{z}\su{0}\).
In particular, \(\rk{\rim z}^\inv\rim\XF{0}\rk{z}=\su{0}\).
\rrem{A.R.A+sqrt} yields \(\sqrt{\su{0}}^\mpi\su{0}=\sqrt{\su{0}}\) and \(\su{0}\sqrt{\su{0}}^\mpi=\sqrt{\su{0}}\).
Thus, taking additionally into account \rnotap{CB15}{CB15.a} and \eqref{K19.0}, we perceive \(\lrka{0}{z}=\ek{\pda{0}{z}}^\inv\hp{0}\sqrt{\rk{\rim z}^\inv\rim\XF{0}\rk{z}}^\mpi=\su{0}\sqrt{\su{0}}^\mpi=\sqrt{\su{0}}\) and \(\rrka{0}{z}=\sqrt{\rk{\rim z}^\inv\rim\XFa{0}{z}}^\mpi\hp{0}\ek{\pba{0}{z}}^\inv=\sqrt{\su{0}}^\mpi\su{0}=\sqrt{\su{0}}\).
Regarding \eqref{K19.0} and \eqref{abcdO-1}, we get furthermore
\beql{L0810.1}
\ek*{\XFa{0}{z}}^\ad\hp{0}^\mpi\pda{0}{z}-\pdoa{1}{z}
=\ko z\su{0}\hp{0}^\mpi\cdot\Iq-z\Iq
=\ko z\su{0}\su{0}^\mpi-z\Iq
\eeq
and
\beql{L0810.2}
\ek*{\XFa{0}{z}}^\ad\hp{0}^\mpi\pca{0}{z}-\pcoa{1}{z}
=\ko z\su{0}\hp{0}^\mpi\cdot\Oqq-\hp{0}
=-\su{0}.
\eeq
\rlem{L1313} yields that the matrix on the left-hand side of \eqref{L0810.1} is non-singular.
Consequently, the matrix on the right-hand side of \eqref{L0810.1} is non-singular as well.
Since \(\rk{\ko z\su{0}\su{0}^\mpi-z\Iq}\su{0}=\rk{\ko z-z}\su{0}\) holds true, we thus conclude \(\rk{\ko z-z}^\inv\su{0}=\rk{\ko z\su{0}\su{0}^\mpi-z\Iq}^\inv\su{0}\).
Using additionally \rnotap{CB15}{CB15.a}, \eqref{L0810.1}, and \eqref{L0810.2}, we get finally \(\mpka{0}{z}=-\rk{\ko z\su{0}\su{0}^\mpi-z\Iq}^\inv\rk{-\su{0}}={\rk{\ko z-z}}^\inv\su{0}\).
\eproof
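To make the formulas of \rlem{L0810} concrete, consider the scalar case \(q=1\) with \(\su{0}=a\) for some \(a>0\); this worked instance is only an added illustration and is not used in the sequel. Here the Moore--Penrose inverse reduces to the ordinary reciprocal, and the lemma yields:

```latex
% Scalar illustration (q = 1, s_0 = a > 0); an added example, not part of
% the formal development.  For z = \iu we have \ko z - z = -2\iu.
\[
\lrka{0}{z}=\rrka{0}{z}=\sqrt{a},
\qquad
\mpka{0}{z}=\frac{a}{\ko z-z},
\qquad\text{and, \teg{},}\quad
\mpka{0}{\iu}=\frac{a}{-2\iu}=\frac{\iu a}{2}.
\]
```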
Now we are going to verify the announced statements concerning the coincidence of the corresponding matrices introduced in \rpartss{CB15.a}{CB15.b} of \rnota{CB15}.
\bleml{C220N}
Let \(\kappa\in\Ninf \), let \(\seqska\in\Hggeqka\), let \(n\in\NO\) be such that \(2n+1\leq\kappa\), and let \(z\in\CR\).
Then \(\lrka{2n}{z}=\lrka{2n+1}{z}\) and \(\rrka{2n}{z}=\rrka{2n+1}{z}\).
\elem
\bproof
Let \(\sHp{\kappa}\) be the \tsHpo{\(\seqska\)}.
From \rrem{R1019} we can infer \(\rim\hp{2n+1}=\Oqq\).
Using \rlem{K17-2}, then we get \(\rim\XFa{2n}{z}=\rim\XFa{2n+1}{z}\).
In view of \rnota{CB15}, the proof is complete.
\eproof
\bleml{L0819}
Let \(\kappa\in\Ninf\), let \(\seqska\in\Hggeqka\), and let \(n\in\NO\) be such that \(2n+1\leq\kappa\).
Let \(\PA{n},\PB{n},\PC{n},\PD{n}\) be given by \rnota{N1326}.
For all \(z\in\CR\), then the matrices \(\PBa{n}{z}\) and \(\PDa{n}{z}\) are invertible and, furthermore, \(\mpka{2n+1}{z}=-\PAa{n}{z}\ek{\PBa{n}{z}}^\inv\) as well as \(\mpka{2n+1}{z}=-\ek{\PDa{n}{z}}^\inv\PCa{n}{z}\) hold true.
\elem
\bproof
Let \(z\in\CR\).
\rlem{L2412} shows that the matrices \(\PBa{n}{z}\) and \(\PDa{n}{z}\) are invertible.
Because of \rlem{L1257}, we have \(\PDa{n}{z}\PAa{n}{z}=\PCa{n}{z}\PBa{n}{z}\).
Consequently, we conclude \(-\PAa{n}{z}\ek{\PBa{n}{z}}^\inv=-\ek{\PDa{n}{z}}^\inv\PCa{n}{z}\).
By virtue of \rnotass{CB15}{N1326}, we furthermore see \(\mpka{2n+1}{z}=-\ek{\PDa{n}{z}}^\inv\PCa{n}{z}\), which completes the proof.
\eproof
\bleml{L3N10}
Let \(\kappa\in\Ninf\), let \(\seqska\in\Hggeqka\), let \(n\in\NO\) be such that \(2n+1\leq\kappa\), and let \(z\in\CR\).
Then \(\mpka{2n}{z}=\mpka{2n+1}{z}\).
\elem
\bproof
Let \(\sHp{\kappa}\) be the \tsHpo{\(\seqska\)}, let \(\abcd{\ev{\kappa}}\) be the \sabcdo{\(\seqska\)}, and let \(\seqX{\kappa}\) be the \tsXFo{\(\seqska\)}.
Let \(\PCoa{n}{z}\defeq\ek{\XFa{2n}{z}}^\ad\hp{2n}^\mpi\pca{n}{z}-\pcoa{n+1}{z}\) and \(\PDoa{n}{z}\defeq\ek{\XFa{2n}{z}}^\ad\hp{2n}^\mpi\pda{n}{z}-\pdoa{n+1}{z}\).
According to \rlem{L1313}, we infer \(\det\PDoa{n}{z}\neq0\), whereas \rlem{L2412} yields \(\det\PDa{n}{z}\neq0\).
We see from \rnotass{CB15}{N1326} that
\begin{align}\label{IWN}
\mpka{2n}{z}&=-\ek*{\PDoa{n}{z}}^\inv\PCoa{n}{z}&
&\text{and}&
\mpka{2n+1}{z}&=-\ek*{\PDa{n}{z}}^\inv\PCa{n}{z}
\end{align}
hold true.
\rlem{K17-2} provides \(\XFa{2n}{z}-\XFa{2n+1}{z}=\hp{2n+1}\).
Since \rrem{R1019} yields \(\hp{2n+1}^\ad =\hp{2n+1}\), consequently, \(\ek{\XFa{2n}{z}}^\ad-\ek{\XFa{2n+1}{z}}^\ad=\hp{2n+1}\) follows.
Using \rrem{R1358}, we can furthermore infer that
\begin{align*}
\pcoa{n+1}{z}-\pca{n+1}{z}&=\hp{2n+1}\hp{2n}^\mpi\pca{n}{z}&
&\text{and}&
\pdoa{n+1}{z}-\pda{n+1}{z}&=\hp{2n+1}\hp{2n}^\mpi\pda{n}{z}
\end{align*}
are valid.
Regarding \rnota{N1326}, hence we obtain
\[
\PCoa{n}{z}-\PCa{n}{z}
=\rk*{\ek*{\XFa{2n}{z}}^\ad-\ek*{\XFa{2n+1}{z}}^\ad}\hp{2n}^\mpi\pca{n}{z}-\ek*{\pcoa{n+1}{z}-\pca{n+1}{z}}
=\Oqq
\]
and, analogously, \(\PDoa{n}{z}-\PDa{n}{z}=\Oqq.\)
In view of \eqref{IWN}, the proof is then complete.
\eproof
We continue providing further statements that address essential properties of the objects defined in \rnota{CB15}.
Moreover, these results prepare the ground for establishing the desired parametrization.
\bleml{CL18}
Let \(\kappa\in\NOinf\) and let \(\seqska\in\Hggeqka\).
For each \(m\in\mn{0}{\kappa}\), then the matrix-valued functions \(\lrk{m},\rrk{m},\mpk{m}\colon\CR\to\Cqq\) given by \rnota{CB15} are continuous.
\elem
\bproof
Let \(m\in\mn{0}{\kappa}\) and let \(\seqX{\kappa}\) be the \tsXFo{\(\seqska\)}.
Using \rrem{R0735}, we can infer that \(f,g\colon\CR\to\Cqq\) defined by \(f\rk{z}\defeq\ek{\XFa{m}{z}}^\ad\) and \(g\rk{z}\defeq\rk{\rim z}^\inv\rim\XFa{m}{z}\), respectively, both are continuous.
According to \rcor{C1232}, we have furthermore \(g\rk{z}\in\Cggq\) for all \(z\in\CR\).
Because of \rrem{ZR2}, then \(r\colon\CR\to\Cggq\) defined by \(r\rk{z}\defeq\sqrt{g\rk{z}}\) is continuous.
Let \(\sHp{\kappa}\) be the \tsHpo{\(\seqska\)}.
From \rprop{L1347} we can conclude \(\ran{g\rk{z}}=\ran{\hp{\eff{m}}}\) for all \(z\in\CR\).
In view of \rrem{A.R.r-sqrt}, then \(\ran{r\rk{z}}=\ran{\hp{\eff{m}}}\) for all \(z\in\CR\) follows.
In particular, \(\rank r\rk{z}=\rank\hp{\eff{m}}=\rank r\rk{w}\) for every choice of \(z\) and \(w\) in \(\CR\).
Consequently, using \rlem{LemC1}, we receive that \(\rho\colon\CR\to\Cqq\) given by \(\rho\rk{z}\defeq\ek{r\rk{z}}^\mpi\) is continuous.
For our following considerations, we note that \(m=2n\) or \(m=2n+1\), where \(n\defeq\ef{m}\) is given by \eqref{SK5-11}.
We first show that \(\lrk{m}\) and \(\rrk{m}\) are continuous.
\rrem{SK32R} and \rlem{BK8.6} provide that \(\pb{n}\) and \(\pd{n}\) are matrix polynomials which fulfill \(\det\pba{n}{z}\neq0\) and \(\det\pda{n}{z}\neq0\) for all \(z\in\CR\).
Therefore, \rlem{LemC1} yields that \(\eta,\theta\colon\CR\to\Cqq\) defined by \(\eta\rk{z}\defeq\ek{\pba{n}{z}}^\inv\) and \(\theta\rk{z}\defeq\ek{\pda{n}{z}}^\inv\), respectively, both are continuous.
Taking into account that \rnota{CB15} implies \(\lrka{m}{z}=\theta\rk{z}\hp{2n}\rho\rk{z}\) and \(\rrka{m}{z}=\rho\rk{z}\hp{2n}\eta\rk{z}\)
for all \(z\in\CR\), we obtain that both \(\lrk{m}\) and \(\rrk{m}\) are continuous.
We now show that \(\mpk{m}\) is continuous.
Consider the case \(m=2n\).
Since \(f\) is continuous, we conclude from \rrem{SK32R} and \rnota{N-abcdO} that \(c,d\colon\CR\to\Cqq\) defined by \(c\rk{z}\defeq f\rk{z}\hp{2n}^\mpi\pca{n}{z}-\pcoa{n+1}{z}\) and \(d\rk{z}\defeq f\rk{z}\hp{2n}^\mpi\pda{n}{z}-\pdoa{n+1}{z}\), respectively, both are continuous.
\rlem{L1313} shows that \(\det d\rk{z}\neq0\) holds true for all \(z\in\CR\).
By virtue of \rlem{LemC1}, then \(\delta\colon\CR\to\Cqq\) given by \(\delta\rk{z}\defeq\ek{d\rk{z}}^\inv\) is continuous as well.
Since we realize from \rnotap{CB15}{CB15.a} that \(\mpk{2n}=-\delta c\) is valid, thus we obtain that \(\mpk{2n}\) is continuous.
Using \rlem{L2412} instead of \rlem{L1313}, the case \(m=2n+1\) can be treated analogously.
\eproof
Now we verify that, for each \(z\in\CR\), the spaces \(\nul{\lrka{m}{z}}\) and \(\ran{\rrka{m}{z}}\) are completely determined by \(\hp{\eff{m}}\) and, consequently, independent of \(z\).
\bpropl{CL19}
Let \(\kappa\in\NOinf\), let \(\seqska\in\Hggeqka\) with \tsHp{} \(\sHp{\kappa}\), and let \(z\in\CR\).
For all \(m\in\mn{0}{\kappa}\), then
\begin{align}\label{BG}
\nul{\lrka{m}{z}}&=\nul{\hp{\eff{m}}}&
&\text{as well as}&
\ran{\rrka{m}{z}}&=\ran{\hp{\eff{m}}}
\end{align}
and, in particular, \(\rank\lrka{m}{z}=\rank\hp{\eff{m}}\) and \(\rank\rrka{m}{z}=\rank\hp{\eff{m}}\), where \(\lrk{m}\) and \(\rrk{m}\) are given by \rnota{CB15} and where \(\eff{m}\) is given by \eqref{SK5-11}.
\eprop
\bproof
Our proof is divided into four parts:
(I) First we consider the case that \(m=2n+1\) with some \(n\in\NO\).
According to \eqref{SK5-11}, then \(\eff{m}=2n\).
We use the notation introduced in the proof of \rlem{CL18}.
Regarding \rnotap{CB15}{CB15.b} and \rlem{BK8.6}, then \rrem{R1241} shows that
\begin{align}
\Nul{\lrka{2n+1}{z}}&=\Nul{\ek*{\pda{n}{z}}^\inv\hp{2n}\rho\rk{z}}=\Nul{\hp{2n}\rho\rk{z}}\label{N9V}\\
\intertext{and}
\Ran{\rrka{2n+1}{z}}&=\Ran{\rho\rk{z}\hp{2n}\ek*{\pba{n}{z}}^\inv}=\Ran{\rho\rk{z}\hp{2n}}\label{M2V}
\end{align}
are valid.
According to the proof of \rlem{CL18} and the notation therein, we also get \(\ek{r\rk{z}}^\ad=r\rk{z}\) and \(\ran{r\rk{z}}=\ran{\hp{\eff{m}}}\).
Hence, using \(\eff{m}=2n\) and \rrem{A.R.r-mpi}, we receive
\begin{align}
\Ran{\rho\rk{z}}
=\Ran{\ek*{r\rk{z}}^\mpi}
=\Ran{\ek*{r\rk{z}}^\ad}
=\Ran{r\rk{z}}
=\ran{\hp{2n}}\label{LBM}
\intertext{and}
\Nul{\rho\rk{z}}
=\Nul{\ek*{r\rk{z}}^\mpi}
=\Nul{\ek*{r\rk{z}}^\ad}
=\Nul{r\rk{z}}
=\nul{\hp{2n}}.\label{NVB}
\end{align}
Since \rrem{R1019} yields \(\hp{2n}^\ad=\hp{2n}\), in view of \eqref{LBM} and \eqref{NVB}, then \rlem{C2} provides \(\nul{\hp{2n}\rho\rk{z}}=\nul{\rho\rk{z}}\) and \(\ran{\rho\rk{z}\hp{2n}}=\ran{\rho\rk{z}}\).
Consequently, in view of \eqref{N9V} and \eqref{NVB}, we obtain \(\nul{\lrka{2n+1}{z}}=\nul{\hp{2n}\rho\rk{z}}=\nul{\rho\rk{z}}=\nul{\hp{2n}}\) and, because of \eqref{M2V} and \eqref{LBM}, analogously, \(\ran{\rrka{2n+1}{z}}=\ran{\hp{2n}}\).
Regarding \eqref{SK5-11}, thus \eqref{BG} is proved in the case that \(m\) is a positive odd integer.
(II) Consider now the case that \(m=2n\) with some \(n\in\NO\).
\rprop{L1237-A} shows that the \tnatext{} \(\seqsh{2n+1}\) of \(\seqs{2n}\) belongs to \(\Hggeq{2n+1}\).
By virtue of \rdefn{D.nat-ext} and \rrem{ABM-tru}, we have \(\lrhka{2n}{z}=\lrka{2n}{z}\) and \(\rrhka{2n}{z}=\rrka{2n}{z}\), where \(\lrhk{2n}\) and \(\rrhk{2n}\) are built according to \rnotap{CB15}{CB15.a} from the sequence \(\seqsh{2n+1}\).
\rlem{C220N} provides \(\lrhka{2n}{z}=\lrhka{2n+1}{z}\) and \(\rrhka{2n}{z}=\rrhka{2n+1}{z}\), where \(\lrhk{2n+1}\) and \(\rrhk{2n+1}\) are built according to \rnotap{CB15}{CB15.b} from the sequence \(\seqsh{2n+1}\).
Let \(\shHp{2n+1}\) be the \tsHpo{\(\seqsh{2n+1}\)}.
According to \rrem{R26-SK1}, then \(\hhp{2n}=\hp{2n}\).
Using \cpart{(I)} of the proof, consequently
\[
\Nul{\lrka{2n}{z}}
=\Nul{\lrhka{2n}{z}}
=\Nul{\lrhka{2n+1}{z}}
=\nul{\hhp{2n}}
=\nul{\hp{2n}}
\]
and, analogously, \(\ran{\rrka{2n}{z}}=\ran{\hp{2n}}\).
Regarding \eqref{SK5-11}, thus we realize that \eqref{BG} is proved in the case that \(m\) is a non-negative even integer as well.
(III) Summarizing parts~(I) and~(II), we see that \eqref{BG} is proved for all \(m\in\mn{0}{\kappa}\).
(IV) For all \(m\in\mn{0}{\kappa}\), from \eqref{BG} we conclude finally \(\rank\lrka{m}{z}=q-\dim\nul{\lrka{m}{z}}=q-\dim\nul{\hp{\eff{m}}}=\rank\hp{\eff{m}}\) and, obviously, \(\rank\rrka{m}{z}=\rank\hp{\eff{m}}\).
\eproof
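As an immediate consequence of \rprop{CL19} (recorded here only as an orienting observation added for illustration), non-singularity of \(\hp{\eff{m}}\) transfers to the semi-radius factors:

```latex
% If \det\hp{\eff{m}} \neq 0, then \rank\lrka{m}{z} = \rank\rrka{m}{z} = q
% by the rank identities of \rprop{CL19}, hence:
\[
\det\hp{\eff{m}}\neq0
\quad\Longrightarrow\quad
\det\lrka{m}{z}\neq0
\quad\text{and}\quad
\det\rrka{m}{z}\neq0
\qquad\text{for all }z\in\CR.
\]
```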
Now we state some technical auxiliary results.
\bleml{L1433}
Let \(n\in\NO\), let \(\seqs{2n+1}\in\Hggeq{2n+1}\), and let \(z\in\CR\).
Then the matrix \(\PDa{n}{z}\) is invertible, the matrix \(\rk{\rim z}^\inv\rim\XFa{2n+1}{z}\) is \tnnH{}, and the equation
\beql{L1433.A}
\rk{\ko z-z}\ek*{\PDa{n}{z}}^\inv\hp{2n}\hp{2n}^\mpi
=\lrka{2n+1}{z}\sqrt{\rk{\rim z}^\inv\rim\XFa{2n+1}{z}}^\mpi
\eeq
holds true, where \(\PD{n}\) is given by \rnota{N1326} and \(\lrk{2n+1}\) is given by \rnotap{CB15}{CB15.b}.
\elem
\bproof
From \rlem{L2412} we see that \(\det\PDa{n}{z}\neq0\).
By virtue of \rcor{C1232}, we acquire that \(B\defeq\rk{\rim z}^\inv\rim\XFa{2n+1}{z}\) satisfies \(B\in\Cggq\).
According to \rpropp{K17-5}{K17-5.a}, we have \(\det\pda{n}{z}\neq0\) and \(\XFa{2n+1}{z}=\pda{n+1}{z}\ek{\pda{n}{z}}^\inv\hp{2n}\).
\rprop{L1347} yields \(\nul{\ek{\XFa{2n+1}{z}}^\ad}=\nul{\hp{2n}}\), which, in view of \rremp{tsa12}{tsa12.b}, implies \(\ek{\XFa{2n+1}{z}}^\ad\hp{2n}^\mpi\hp{2n}=\ek{\XFa{2n+1}{z}}^\ad\).
Taking additionally into account \rnota{N1326}, we hence get
\[\begin{split}
\PDa{n}{z}\ek*{\pda{n}{z}}^\inv\hp{2n}
&=\ek*{\XFa{2n+1}{z}}^\ad\hp{2n}^\mpi\hp{2n}-\pda{n+1}{z}\ek{\pda{n}{z}}^\inv\hp{2n}\\
&=\ek*{\XFa{2n+1}{z}}^\ad-\XFa{2n+1}{z}
=-2\iu\rim\rk*{\XFa{2n+1}{z}}
=-2\iu\rim\rk{z}B
=\rk{\ko z-z}B
\end{split}\]
and, consequently,
\beql{FR3}
\ek*{\pda{n}{z}}^\inv\hp{2n}
=\rk{\ko z-z}\ek*{\PDa{n}{z}}^\inv B.
\eeq
By virtue of \rprop{L1347}, we acquire \(\ran{B}=\ran{\hp{2n}}\).
Thus, from \rrem{tsb3} we can conclude \(BB^\mpi=\hp{2n}\hp{2n}^\mpi\).
Additionally, using \eqref{FR3}, \rrem{A.R.A+>}, and \rnotap{CB15}{CB15.b}, we obtain finally
\[\begin{split}
\rk{\ko z-z}\ek*{\PDa{n}{z}}^\inv\hp{2n}\hp{2n}^\mpi
&=\rk{\ko z-z}\ek*{\PDa{n}{z}}^\inv BB^\mpi
=\ek*{\pda{n}{z}}^\inv\hp{2n}B^\mpi\\
&=\ek*{\pda{n}{z}}^\inv\hp{2n}\sqrt{B}^\mpi\sqrt{B}^\mpi
=\lrka{2n+1}{z}\sqrt{B}^\mpi.\qedhere
\end{split}\]
\eproof
\bleml{L0852}
Let \(n\in\NO\), let \(\seqs{2n+1}\in\Hggeq{2n+1}\), and let \(z\in\CR\).
Let \(\cpP\in\Cqq\) be such that \(\ran{\cpP}\subseteq\ran{\hp{2n}}\) and let \(\cpQ\in\Cqq\).
Furthermore, let
\begin{align}
\STA{2n+1}&\defeq\paa{n}{z}\hp{2n}^\mpi \cpP+\paa{n+1}{z}\cpQ&
&\text{and}&
\STB{2n+1}&\defeq\pba{n}{z}\hp{2n}^\mpi \cpP+\pba{n+1}{z}\cpQ.\label{L0852.AB}
\end{align}
Then the matrices \(\PDa{n}{z}\) and \(\pba{n}{z}\) are invertible, the matrix \(\rk{\rim z}^\inv\rim\XFa{2n+1}{z}\) is \tnnH{}, and the identities
\beql{L0852.C}
\rk{z-\ko z}\rk*{\ek*{\PDa{n}{z}}^\inv\PCa{n}{z}\STB{2n+1}-\STA{2n+1}}
=\lrka{2n+1}{z}\sqrt{\rk{\rim z}^\inv\rim\XFa{2n+1}{z}}^\mpi\rk*{\cpP+\ek*{\XFa{2n+1}{z}}^\ad \cpQ}
\eeq
and
\[
\hp{2n}\ek*{\pba{n}{z}}^\inv\STB{2n+1}
=\cpP+\XFa{2n+1}{z}\cpQ
\]
hold true, where \(\PC{n}\) and \(\PD{n}\) are given by \rnota{N1326} and \(\lrk{2n+1}\) is given by \rnotap{CB15}{CB15.b}.
\elem
\bproof
From \rlem{L1433} we realize that \(\PDa{n}{z}\) is invertible, that \(\rk{\rim z}^\inv\rim\XFa{2n+1}{z}\) is \tnnH{}, and that \eqref{L1433.A} holds true.
By virtue of \rlem{P8.14}, we receive that
\begin{align}\label{L0852.1}
\pda{n}{z}\paa{n}{z}&=\pca{n}{z}\pba{n}{z}&
&\text{and}&
\pda{n+1}{z}\paa{n+1}{z}&=\pca{n+1}{z}\pba{n+1}{z}
\end{align}
as well as
\begin{align}\label{L0852.2}
\pda{n}{z}\paa{n+1}{z}-\pca{n}{z}\pba{n+1}{z}&=\hp{2n}&
&\text{and}&
\pda{n+1}{z}\paa{n}{z}-\pca{n+1}{z}\pba{n}{z}&=-\hp{2n}
\end{align}
hold true.
\rprop{L1347} yields \(\ran{\ek{\XFa{2n+1}{z}}^\ad}=\ran{\hp{2n}}\) and \(\nul{\ek{\XFa{2n+1}{z}}^\ad}=\nul{\hp{2n}}\).
Thus, in view of \rrem{tsa12}, we obtain
\begin{align}\label{L0852.3}
\hp{2n}\hp{2n}^\mpi\ek*{\XFa{2n+1}{z}}^\ad&=\ek*{\XFa{2n+1}{z}}^\ad&
&\text{and}&
\ek*{\XFa{2n+1}{z}}^\ad\hp{2n}^\mpi\hp{2n}&=\ek*{\XFa{2n+1}{z}}^\ad.
\end{align}
Using \rnota{N1326}, \eqref{L0852.AB}, \eqref{L0852.1}, \eqref{L0852.2}, and \eqref{L0852.3}, we perceive
\beql{L0852.4}\begin{split}
\PCa{n}{z}\STB{2n+1}-\PDa{n}{z}\STA{2n+1}
&=\rk*{\ek*{\XFa{2n+1}{z}}^\ad\hp{2n}^\mpi\pca{n}{z}-\pca{n+1}{z}}\rk*{\pba{n}{z}\hp{2n}^\mpi \cpP+\pba{n+1}{z}\cpQ}\\
&\qquad-\rk*{\ek*{\XFa{2n+1}{z}}^\ad\hp{2n}^\mpi\pda{n}{z}-\pda{n+1}{z}}\rk*{\paa{n}{z}\hp{2n}^\mpi \cpP+\paa{n+1}{z}\cpQ}\\
&=\ek*{\XFa{2n+1}{z}}^\ad\hp{2n}^\mpi\ek*{\pca{n}{z}\pba{n}{z}-\pda{n}{z}\paa{n}{z}}\hp{2n}^\mpi \cpP\\
&\qquad+\ek*{\XFa{2n+1}{z}}^\ad\hp{2n}^\mpi\ek*{\pca{n}{z}\pba{n+1}{z}-\pda{n}{z}\paa{n+1}{z}}\cpQ\\
&\qquad-\ek*{\pca{n+1}{z}\pba{n}{z}-\pda{n+1}{z}\paa{n}{z}}\hp{2n}^\mpi \cpP\\
&\qquad-\ek*{\pca{n+1}{z}\pba{n+1}{z}-\pda{n+1}{z}\paa{n+1}{z}}\cpQ\\
&=-\ek*{\XFa{2n+1}{z}}^\ad\hp{2n}^\mpi\hp{2n}\cpQ-\hp{2n}\hp{2n}^\mpi \cpP
=-\ek*{\XFa{2n+1}{z}}^\ad \cpQ-\hp{2n}\hp{2n}^\mpi \cpP\\
&=-\hp{2n}\hp{2n}^\mpi\ek*{\XFa{2n+1}{z}}^\ad \cpQ-\hp{2n}\hp{2n}^\mpi \cpP
=-\hp{2n}\hp{2n}^\mpi\rk*{\cpP+\ek*{\XFa{2n+1}{z}}^\ad \cpQ}.
\end{split}\eeq
Because of \(\ek{\PDa{n}{z}}^\inv\PCa{n}{z}\STB{2n+1}-\STA{2n+1}=\ek{\PDa{n}{z}}^\inv\ek{\PCa{n}{z}\STB{2n+1}-\PDa{n}{z}\STA{2n+1}}\) and \eqref{L0852.4} as well as \eqref{L1433.A}, we conclude that \eqref{L0852.C} holds true.
By virtue of the assumption \(\ran{\cpP}\subseteq\ran{\hp{2n}}\) and \rremp{tsa12}{tsa12.a}, we acquire \(\hp{2n}\hp{2n}^\mpi \cpP=\cpP\).
According to \rremp{R52SK}{R52SK.a}, we have \(\det\pba{n}{z}\neq0\) and \(\XFa{2n+1}{z}=\hp{2n}\ek{\pba{n}{z}}^\inv\pba{n+1}{z}\).
Regarding \eqref{L0852.AB}, then it follows
\[
\hp{2n}\ek*{\pba{n}{z}}^\inv\STB{2n+1}
=\hp{2n}\hp{2n}^\mpi \cpP+\hp{2n}\ek*{\pba{n}{z}}^\inv\pba{n+1}{z}\cpQ
=\cpP+\XFa{2n+1}{z}\cpQ.\qedhere
\]
\eproof
\bleml{L0827}
Let \(n\in\NO\) and let \(\seqs{2n+1}\in\Hggeq{2n+1}\).
Let \(P\defeq\OPu{\ran{\hp{2n}}}\) and \(Q\defeq\OPu{\nul{\hp{2n}}}\).
Let \(C\in\Cqq\), let \(z\in\CR\), and let \(E\defeq-\ek{\XFa{2n+1}{z}}^\ad\) and \(B\defeq\rk{\rim z}^\inv\rim E\).
Then \(B\in\Cggq\) and the matrices \(\cpP\defeq E\sqrt{B}^\mpi-E^\ad\sqrt{B}^\mpi CP\) and \(\cpQ\defeq\sqrt{B}^\mpi-\sqrt{B}^\mpi CP+Q\) fulfill \(\det\rk{\pba{n}{z}\hp{2n}^\mpi \cpP +\pba{n+1}{z}\cpQ}\neq0\).
\elem
\bproof
Clearly, \(B=\rk{\rim z}^\inv\rim\XFa{2n+1}{z}\).
\rcor{C1232} then shows that \(B\in\Cggq\).
Obviously, we have \(-E^\ad=\XFa{2n+1}{z}\).
Therefore, \(\cpP=\XFa{2n+1}{z}\sqrt{B}^\mpi CP-\ek{\XFa{2n+1}{z}}^\ad\sqrt{B}^\mpi\).
Hence,
\begin{multline}\label{N368H}
\pba{n}{z}\hp{2n}^\mpi\cpP +\pba{n+1}{z}\cpQ
=\ek*{\pba{n}{z}\hp{2n}^\mpi\XFa{2n+1}{z}-\pba{n+1}{z}}\sqrt{B}^\mpi CP\\
-\rk*{\pba{n}{z}\hp{2n}^\mpi\ek{\XFa{2n+1}{z}}^\ad-\pba{n+1}{z}}\sqrt{B}^\mpi+\pba{n+1}{z}Q.
\end{multline}
We consider an arbitrary \(v\in\nul{\pba{n}{z}\hp{2n}^\mpi \cpP+\pba{n+1}{z}\cpQ}\).
From \eqref{N368H} then
\begin{multline}\label{L0827.1}
\ek*{\pba{n}{z}\hp{2n}^\mpi\XFa{2n+1}{z}-\pba{n+1}{z}}\sqrt{B}^\mpi CPv+\pba{n+1}{z}Q v\\
=\rk*{\pba{n}{z}\hp{2n}^\mpi\ek{\XFa{2n+1}{z}}^\ad-\pba{n+1}{z}}\sqrt{B}^\mpi v
\end{multline}
follows.
Since \rrem{R52SK} yields \(\det\pba{n}{z}\neq0\) and \(\XFa{2n+1}{z}=\hp{2n}\ek{\pba{n}{z}}^\inv\pba{n+1}{z}\), multiplication of equation \eqref{L0827.1} from the left by \(\hp{2n}\ek{\pba{n}{z}}^\inv\) provides
\begin{multline}\label{N38}
\ek*{\hp{2n}\hp{2n}^\mpi\XFa{2n+1}{z}-\XFa{2n+1}{z}}\sqrt{B}^\mpi CPv+\XFa{2n+1}{z}Q v\\
=\rk*{\hp{2n}\hp{2n}^\mpi\ek*{\XFa{2n+1}{z}}^\ad-\XFa{2n+1}{z}}\sqrt{B}^\mpi v.
\end{multline}
\rprop{L1347} provides \(\ran{\ek{\XFa{2n+1}{z}}^\ad}=\ran{\XFa{2n+1}{z}}=\ran{\hp{2n}}\) and \(\nul{\rim\XFa{2n+1}{z}}=\nul{\XFa{2n+1}{z}}=\nul{\hp{2n}}\).
In view of \rremp{tsa12}{tsa12.a}, we obtain then \(\hp{2n}\hp{2n}^\mpi\ek{\XFa{2n+1}{z}}^\ad=\ek{\XFa{2n+1}{z}}^\ad\) and \(\hp{2n}\hp{2n}^\mpi\XFa{2n+1}{z}=\XFa{2n+1}{z}\).
Consequently, from \eqref{N38} we get
\beql{M38}
\XFa{2n+1}{z}Q v
=\rk*{\ek*{\XFa{2n+1}{z}}^\ad-\XFa{2n+1}{z}}\sqrt{B}^\mpi v.
\eeq
\rrem{R.P} yields \(\ran{Q}=\nul{\hp{2n}}\).
In view of \(\nul{\XFa{2n+1}{z}}=\nul{\hp{2n}}\), hence \(\ran{Q}=\nul{\XFa{2n+1}{z}}\), implying \(\XFa{2n+1}{z}Qv=\Ouu{q}{1}\).
Taking additionally into account \(\rim\rk{z}B=\rim\XFa{2n+1}{z}\), from \eqref{M38} we can conclude \(\Ouu{q}{1}=-2\iu\rim\rk{z}B\sqrt{B}^\mpi v\).
Since \rrem{A.R.A+sqrt} yields \(B\sqrt{B}^\mpi=\sqrt{B}\), hence \(\Ouu{q}{1}=-2\iu\rim\rk{z}\sqrt{B}v\), which, in view of \(z\in\CR\), implies \(v\in\nul{\sqrt{B}}\).
According to \rrem{A.R.r-sqrt}, we have \(\nul{\sqrt{B}}=\nul{B}\).
In view of \(B=\rk{\rim z}^\inv\rim\XFa{2n+1}{z}\) and \(\nul{\rim\XFa{2n+1}{z}}=\nul{\hp{2n}}\), we thus obtain \(v\in\nul{\hp{2n}}\), implying \(Qv=v\).
From \rrem{R.P} we see \(\nul{P}=\ran{\hp{2n}}^\oc\).
According to \rrem{tsa2}, we have \(\ran{\hp{2n}}^\oc=\nul{\hp{2n}^\ad}\).
Since \rrem{R1019} yields \(\hp{2n}^\ad=\hp{2n}\), then \(\nul{P}=\nul{\hp{2n}}\) follows, implying \(Pv=\Ouu{q}{1}\).
By virtue of \rrem{A.R.r-mpi}, we see \(\nul{\sqrt{B}^\mpi}=\nul{\sqrt{B}^\ad}=\nul{\sqrt{B}}\).
In view of \(v\in\nul{\sqrt{B}}\), thus \(\sqrt{B}^\mpi v=\Ouu{q}{1}\).
Using additionally \(Pv=\Ouu{q}{1}\) and \(Qv=v\), from \eqref{L0827.1} we obtain then \(\pba{n+1}{z}v=\Ouu{q}{1}\).
Since \rlem{BK8.6} yields \(\det\pba{n+1}{z}\neq0\), we get \(v=\Ouu{q}{1}\).
Hence, \(\nul{\pba{n}{z}\hp{2n}^\mpi \cpP+\pba{n+1}{z}\cpQ}\subseteq\set{\Ouu{q}{1}}\), which completes the proof.
\eproof
The following result plays an important role in the proofs of \rpropss{L1204}{L1301}, which provide the two sides of \rthm{T45CD}.
\bpropl{L1745}
Let \(n\in\NO\), let \(\seqs{2n+1}\in\Hggeq{2n+1}\), and let \(z\in\CR\).
Let \(P\defeq\OPu{\ran{\hp{2n}}}\) and \(Q\defeq\OPu{\nul{\hp{2n}}}\).
Then there exist matrices \(\cpP,\cpQ\in\Cqq\) such that the following three conditions are fulfilled:
\baeqi{0}
\il{L1745.i} \(\rim\rk{z}\rim\rk{\cpQ ^\ad \cpP }\in\Cggq\).
\il{L1745.ii} \(P\cpP =\cpP \), \(\cpP P=\cpP \), and \(\cpQ P=\cpQ -Q\).
\il{L1745.iii} \(\det\rk{\pba{n}{z}\hp{2n}^\mpi \cpP +\pba{n+1}{z}\cpQ }\neq0\).
\eaeqi
If \(\cpP,\cpQ\in\Cqq\) are arbitrary matrices such that~\ref{L1745.i}--\ref{L1745.iii} are fulfilled, then the matrix \(\rk{\rim z}^\inv\rim\XFa{2n+1}{z}\) is \tnnH{}, the matrix
\[
\cpC
\defeq\sqrt{\rk{\rim z}^\inv\rim\XFa{2n+1}{z}}^\mpi\rk*{\cpP +\ek*{\XFa{2n+1}{z}}^\ad \cpQ }\rk*{\cpP +\XFa{2n+1}{z}\cpQ }^\mpi\sqrt{\rk{\rim z}^\inv\rim\XFa{2n+1}{z}}
\]
is contractive, and the identity
\begin{multline}\label{L1745.A}
-\ek*{\paa{n}{z}\hp{2n}^\mpi \cpP +\paa{n+1}{z}\cpQ }\ek*{\pba{n}{z}\hp{2n}^\mpi \cpP +\pba{n+1}{z}\cpQ }^\inv\\
=\mpka{2n+1}{z}+\rk{z-\ko z}^\inv \lrka{2n+1}{z}\cpC \rrka{2n+1}{z}
\end{multline}
holds true, where \(\lrk{2n+1}\), \(\rrk{2n+1}\), and \(\mpk{2n+1}\) are given by \rnotap{CB15}{CB15.b}.
\eprop
\bproof
\rrem{R1019} yields \(\hp{2n}^\ad=\hp{2n}\).
According to \rrem{tsa2}, we have \(\nul{\hp{2n}^\ad}=\ran{\hp{2n}}^\oc\).
Consequently, \(\nul{\hp{2n}}=\ran{\hp{2n}}^\oc\) follows.
Using \rrem{A.R.0<P<1}, we can thus conclude \(P+Q=\Iq\).
From \rlem{BK8.6} we easily see then that, \teg{}, \(\cpP=\Oqq\) and \(\cpQ=\Iq\) fulfill~\ref{L1745.i}--\ref{L1745.iii}.
Now let \(\cpP,\cpQ\in\Cqq\) be arbitrarily chosen such that~\ref{L1745.i}--\ref{L1745.iii} are fulfilled.
In order to apply \rlem{L1104}, we set \(E\defeq-\ek{\XFa{2n+1}{z}}^\ad\), \(b\defeq\rim\rk{z}\), and \(B\defeq b^\inv\rim E\).
Clearly, then
\begin{align}\label{L1745.2}
\XFa{2n+1}{z}&=-E^\ad,&
\ek*{\XFa{2n+1}{z}}^\ad&=-E,&
&\text{and}&
\rk{\rim z}^\inv\rim\XFa{2n+1}{z}&=B.
\end{align}
Because of~\ref{L1745.ii} and \rrem{R.P}, we discern
\beql{L1745.1}
\ran{\cpP}
=\ran{P\cpP}
\subseteq\ran{P}
=\ran{\hp{2n}}.
\eeq
Let \(\STA{2n+1}\) and \(\STB{2n+1}\) be given by \eqref{L0852.AB}.
Regarding \eqref{L1745.1} and \eqref{L1745.2}, from \rlem{L0852} we can conclude then that the matrices \(\PDa{n}{z}\) and \(\pba{n}{z}\) are both invertible, that \(B\) is \tnnH{}, and that
\beql{L1745.11A}
\rk{z-\ko z}\rk*{\ek*{\PDa{n}{z}}^\inv\PCa{n}{z}\STB{2n+1}-\STA{2n+1}}
=\lrka{2n+1}{z}\sqrt{B}^\mpi\rk{\cpP-E\cpQ}
\eeq
and
\beql{L1745.11B}
\hp{2n}\ek*{\pba{n}{z}}^\inv\STB{2n+1}
=\cpP-E^\ad\cpQ
\eeq
hold true.
In view of \eqref{L1745.2} and \rnotap{CB15}{CB15.b}, we have
\begin{align}\label{L1745.12}
\cpC &=\sqrt{B}^\mpi\rk{\cpP-E\cpQ}\rk{\cpP-E^\ad\cpQ}^\mpi\sqrt{B}&
&\text{and}&
\rrka{2n+1}{z}&=\sqrt{B}^\mpi\hp{2n}\ek*{\pba{n}{z}}^\inv.
\end{align}
\rprop{L1347} yields \(\ran{\rim\XFa{2n+1}{z}}=\ran{\ek{\XFa{2n+1}{z}}^\ad}=\ran{\XFa{2n+1}{z}}=\ran{\hp{2n}}\) and \(\nul{\rim\XFa{2n+1}{z}}=\nul{\ek{\XFa{2n+1}{z}}^\ad}=\nul{\XFa{2n+1}{z}}=\nul{\hp{2n}}\).
Regarding \eqref{L1745.2}, consequently,
\begin{align}\label{L1745.4}
\ran{B}&=\ran{E}=\ran{E^\ad}=\ran{\hp{2n}}&
&\text{and}&
\nul{B}&=\nul{E}=\nul{E^\ad}=\nul{\hp{2n}}
\end{align}
follow.
Taking into account \eqref{L0852.AB} and~\ref{L1745.iii}, we perceive \(\det\STB{2n+1}\neq0\).
Therefore, using \eqref{L1745.11B} and \rrem{R1241}, we obtain
\beql{L1745.5}
\ran{\cpP-E^\ad\cpQ }
=\Ran{\hp{2n}\ek*{\pba{n}{z}}^\inv\STB{2n+1}}
=\ran{\hp{2n}}.
\eeq
The combination of \eqref{L1745.1}, \eqref{L1745.4}, and \eqref{L1745.5} yields \(\ran{\cpP}\subseteq\ran{E}=\ran{B}=\ran{\cpP-E^\ad\cpQ}\).
Regarding that \(z\in\CR\) implies \(b\in\R\setminus\set{0}\) and taking additionally into account the first equation in \eqref{L1745.12} and \(B\in\Cggq\), we hence can apply \rlem{L1104} and~\ref{L1745.i} to obtain that \(\cpC \) is contractive.
From \rnotass{CB15}{N1326} we conclude
\beql{L1745.6}
\mpka{2n+1}{z}
=-\ek*{\PDa{n}{z}}^\inv\PCa{n}{z}.
\eeq
From \rrem{R.P} and \eqref{L1745.4} we see \(\ran{Q}=\nul{\hp{2n}}=\nul{E}=\nul{E^\ad}\), implying \(EQ=\Oqq\) and \(E^\ad Q=\Oqq\).
Hence, using additionally~\ref{L1745.ii}, we infer
\beql{L1745.16}
\rk{\cpP-E\cpQ}P
=\cpP P-E\cpQ P
=\cpP-E\rk{\cpQ -Q}
=\cpP-E\cpQ+EQ
=\cpP-E\cpQ
\eeq
and
\[
\rk{\cpP-E^\ad\cpQ}P
=\cpP P-E^\ad\cpQ P
=\cpP-E^\ad\rk{\cpQ -Q}
=\cpP-E^\ad\cpQ+E^\ad Q
=\cpP-E^\ad\cpQ.
\]
By virtue of \(P+Q=\Iq\), we can thus conclude \(\rk{\cpP-E^\ad\cpQ}Q=\Oqq\).
In view of \(\ran{Q}=\nul{\hp{2n}}\), we can infer then \(\nul{\hp{2n}}\subseteq\nul{\cpP-E^\ad\cpQ}\).
Furthermore, because of \eqref{L1745.5} and a familiar result of linear algebra, we acquire \(\dim\nul{\hp{2n}}=q-\dim\ran{\hp{2n}}=q-\dim\ran{\cpP-E^\ad\cpQ}=\dim\nul{\cpP-E^\ad\cpQ}<\infty\).
Therefore, \(\nul{\hp{2n}}=\nul{\cpP-E^\ad\cpQ}\) follows.
Using \rrem{tsa2}, we can infer then \(\ran{\hp{2n}^\ad}=\ran{\rk{\cpP-E^\ad\cpQ}^\ad}\).
Taking additionally into account \(\hp{2n}^\ad=\hp{2n}\) and \rrem{tsb3}, we obtain \(P=\OPu{\ran{\hp{2n}}}=\OPu{\ran{\hp{2n}^\ad}}=\OPu{\ran{\rk{\cpP-E^\ad\cpQ}^\ad}}=\rk{\cpP-E^\ad\cpQ}^\mpi\rk{\cpP-E^\ad\cpQ}\).
Consequently, using \eqref{L1745.16}, we recognize that
\beql{L1745.14}
\rk{\cpP-E\cpQ}\rk{\cpP-E^\ad\cpQ}^\mpi\rk{\cpP-E^\ad\cpQ}
=\rk{\cpP-E\cpQ}P
=\cpP-E\cpQ
\eeq
is fulfilled.
Regarding \(\det\STB{2n+1}\neq0\), from \eqref{L1745.11B} we receive
\beql{L1745.13}
\rk{\cpP-E^\ad\cpQ}\STB{2n+1}^\inv
=\hp{2n}\ek*{\pba{n}{z}}^\inv.
\eeq
\rrem{tsb3} yields \(\sqrt{B}\sqrt{B}^\mpi=\OPu{\ran{\sqrt{B}}}\).
From \rrem{A.R.r-sqrt} and \eqref{L1745.4}, we discern \(\ran{\sqrt{B}}=\ran{B}=\ran{\hp{2n}}\).
Consequently,
\beql{L1745.21}
\sqrt{B}\sqrt{B}^\mpi\hp{2n}
=\OPu{\ran{\sqrt{B}}}\hp{2n}
=\OPu{\ran{\hp{2n}}}\hp{2n}
=\hp{2n}.
\eeq
Regarding that \(z\in\CR\) implies \(z-\ko z\neq0\) and using \(\det\STB{2n+1}\neq0\), \eqref{L1745.6}, \eqref{L1745.11A}, \eqref{L1745.14}, \eqref{L1745.13}, \eqref{L1745.21}, and \eqref{L1745.12}, finally we obtain
\[\begin{split}
-\STA{2n+1}\STB{2n+1}^\inv-\mpka{2n+1}{z}
&=-\STA{2n+1}\STB{2n+1}^\inv-\rk*{-\ek*{\PDa{n}{z}}^\inv\PCa{n}{z}}
=\rk*{\ek*{\PDa{n}{z}}^\inv\PCa{n}{z}\STB{2n+1}-\STA{2n+1}}\STB{2n+1}^\inv\\
&=\rk{z-\ko z}^\inv \lrka{2n+1}{z}\sqrt{B}^\mpi\rk{\cpP-E\cpQ}\STB{2n+1}^\inv\\
&=\rk{z-\ko z}^\inv \lrka{2n+1}{z}\sqrt{B}^\mpi\rk{\cpP-E\cpQ}\rk{\cpP-E^\ad\cpQ}^\mpi\rk{\cpP-E^\ad\cpQ}\STB{2n+1}^\inv\\
&=\rk{z-\ko z}^\inv \lrka{2n+1}{z}\sqrt{B}^\mpi\rk{\cpP-E\cpQ}\rk{\cpP-E^\ad\cpQ}^\mpi\hp{2n}\ek*{\pba{n}{z}}^\inv\\
&=\rk{z-\ko z}^\inv \lrka{2n+1}{z}\sqrt{B}^\mpi\rk{\cpP-E\cpQ}\rk{\cpP-E^\ad\cpQ}^\mpi\sqrt{B}\sqrt{B}^\mpi\hp{2n}\ek*{\pba{n}{z}}^\inv\\
&=\rk{z-\ko z}^\inv \lrka{2n+1}{z}\cpC \rrka{2n+1}{z}.
\end{split}\]
Regarding~\ref{L1745.iii} and \eqref{L0852.AB}, thus \eqref{L1745.A} is proved as well.
\eproof
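For orientation, substituting the admissible pair \(\cpP=\Oqq\) and \(\cpQ=\Iq\) from the beginning of the proof into \eqref{L1745.A} gives the following specialization (an added sanity check under this particular choice; it is not needed in the sequel):

```latex
% With \cpP = \Oqq and \cpQ = \Iq, the left-hand side of \eqref{L1745.A}
% reduces to -\paa{n+1}{z}[\pba{n+1}{z}]^{-1}, and since
% \cpP + \XFa{2n+1}{z}\cpQ = \XFa{2n+1}{z}, the contraction \cpC simplifies:
\[
-\paa{n+1}{z}\ek*{\pba{n+1}{z}}^\inv
=\mpka{2n+1}{z}+\rk{z-\ko z}^\inv\lrka{2n+1}{z}\,\cpC\,\rrka{2n+1}{z},
\]
where \(\cpC=\sqrt{B}^\mpi\ek*{\XFa{2n+1}{z}}^\ad\ek*{\XFa{2n+1}{z}}^\mpi\sqrt{B}\) and \(B\defeq\rk{\rim z}^\inv\rim\XFa{2n+1}{z}\).
```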
We keep in mind that our goal is a parametrization of the set \(\setaca{F\rk{w}}{F\in\RFOqskg{2n}}\) with respect to an arbitrarily prescribed sequence \(\seqs{2n}\in\Hggeq{2n}\) and an arbitrarily chosen \(w\in\pip\).
The next proposition provides the proof of one half of \rthm{T45CD}.
\bpropl{L1204}
Let \(n\in\NO\), let \(\seqs{2n}\in\Hggeq{2n}\), and let \(F\in\RFOqskg{2n}\).
For each \(w\in\pip\), then
\beql{L1204.A}
F\rk{w}
\in\Cmb{\mpka{2n}{w}}{\rk{w-\ko w}^\inv\lrka{2n}{w}}{\rrka{2n}{w}},
\eeq
where \(\lrk{2n}\), \(\rrk{2n}\), and \(\mpk{2n}\) are given by \rnotap{CB15}{CB15.a}.
\eprop
\bproof
Let \(\sHp{2n}\) be the \tsHpo{\(\seqs{2n}\)} and let \(\abcd{n}\) be the \sabcdo{\(\seqs{2n}\)}.
Using \rthmp{CM123}{CM123.b}, we perceive that there exists a pair \(\copa{\phi}{\psi}\in\PRFqa{\hp{2n}}\) such that both \(\phi\) and \(\psi\) are holomorphic in \(\pip\) and that \eqref{L1204.1} and \eqref{L1204.2}
hold true for all \(z\in\pip\).
\rrem{R1019} yields \(\hp{2n}^\ad=\hp{2n}\).
Hence, setting \(P\defeq\OPu{\ran{\hp{2n}}}\) and \(Q\defeq\OPu{\nul{\hp{2n}}}\), from \rlem{CfolA15} we obtain that there exists a pair \(\copa{\cpfP}{\cpfQ}\in\cpcl{\phi}{\psi}\) such that all four equations in \eqref{ST37N} are valid.
In particular, in view of \rrem{Ma09_1.11}, we recognize that \(\copa{\cpfP}{\cpfQ}\in\PRFq\) and that there exist a \tqqa{matrix}-valued function \(g\) meromorphic in \(\pip\) and a discrete subset \(\mD_1\) of \(\pip\) such that \(\phi\), \(\psi\), \(\cpfP\), \(\cpfQ\), and \(g\) are holomorphic in \(\pip\setminus\mD_1\) and that
\begin{align}\label{L1204.3}
\cpfP\rk{z}&=\phi\rk{z}g\rk{z},&
\cpfQ\rk{z}&=\psi\rk{z}g\rk{z},&
&\text{and}&
\det g\rk{z}&\neq0
\end{align}
as well as
\begin{align}\label{L1204.4}
P\cpfP\rk{z}&=\cpfP\rk{z},&
\cpfP\rk{z}P&=\cpfP\rk{z},&
&\text{and}&
\cpfQ\rk{z}P
=\cpfQ\rk{z}-Q
\end{align}
hold true for all \(z\in\pip\setminus\mD_1\).
Using the first two equations in \eqref{L1204.3}, we obtain \(\paa{n}{z}\hp{2n}^\mpi\cpfP\rk{z}+\paoa{n+1}{z}\cpfQ\rk{z}=\ek{\paa{n}{z}\hp{2n}^\mpi\phi\rk{z}+\paoa{n+1}{z}\psi\rk{z}}g\rk{z}\) and \(\pba{n}{z}\hp{2n}^\mpi\cpfP\rk{z}+\pboa{n+1}{z}\cpfQ\rk{z}=\ek{\pba{n}{z}\hp{2n}^\mpi\phi\rk{z}+\pboa{n+1}{z}\psi\rk{z}}g\rk{z}\) for all \(z\in\pip\setminus\mD_1\).
Taking additionally into account the last equation in \eqref{L1204.3} as well as \eqref{L1204.1} and \eqref{L1204.2}, we can conclude
\beql{L1204.5}
\det\rk*{\pba{n}{z}\hp{2n}^\mpi \cpfP\rk{z}+\pboa{n+1}{z}\cpfQ\rk{z}}
=\det\rk*{\pba{n}{z}\hp{2n}^\mpi\phi\rk{z}+\pboa{n+1}{z}\psi\rk{z}}\cdot\det g\rk{z}
\neq0
\eeq
and
\beql{L1204.6}\begin{split}
&-\ek*{\paa{n}{z}\hp{2n}^\mpi \cpfP\rk{z}+\paoa{n+1}{z}\cpfQ\rk{z}}\ek*{\pba{n}{z}\hp{2n}^\mpi \cpfP\rk{z}+\pboa{n+1}{z}\cpfQ\rk{z}}^\inv\\
&=-\rk*{\ek*{\paa{n}{z}\hp{2n}^\mpi\phi\rk{z}+\paoa{n+1}{z}\psi\rk{z}}g\rk{z}}\rk*{\ek*{\pba{n}{z}\hp{2n}^\mpi\phi\rk{z}+\pboa{n+1}{z}\psi\rk{z}}g\rk{z}}^\inv\\
&=-\ek*{\paa{n}{z}\hp{2n}^\mpi\phi\rk{z}+\paoa{n+1}{z}\psi\rk{z}}\ek*{\pba{n}{z}\hp{2n}^\mpi\phi\rk{z}+\pboa{n+1}{z}\psi\rk{z}}^\inv
=F\rk{z}
\end{split}\eeq
for all \(z\in\pip\setminus\mD_1\).
Regarding \(\copa{\cpfP}{\cpfQ}\in\PRFq\) and \rdefn{def-nev-paar}, we realize that \(\cpfP\) and \(\cpfQ\) are \tqqa{matrix}-valued functions meromorphic in \(\pip\), that there is a discrete subset \(\mD_2\) of \(\pip\) such that \(\cpfP\) and \(\cpfQ\) are holomorphic in \(\pip\setminus\mD_2\), and that, in view of \rrem{IG.b}, moreover,
\beql{L1204.7}
\rim\rk*{\ek*{\cpfQ\rk{z}}^\ad \cpfP\rk{z}}
\in\Cggq
\eeq
is valid for each \(z\in\pip\setminus\mD_2\).
Now we consider an arbitrary \(w\in\pip\).
Obviously, \(\mD\defeq\mD_1\cup\mD_2\) is a discrete subset of \(\pip\).
Consequently, it is feasible to choose a sequence \(\rk{z_\ell}_{\ell=1}^\infi\) of points belonging to \(\pip\setminus\mD\) converging to \(w\).
Taking into account that \(\cpfP\) and \(\cpfQ\) are holomorphic in \(\pip\setminus\mD\) and that \eqref{L1204.4} and \eqref{L1204.7} hold true for all \(z\in\pip\setminus\mD\), for each \(\ell\in\N\), the matrices \(\cpP_\ell\defeq \cpfP\rk{z_\ell}\) and \(\cpQ_\ell\defeq \cpfQ\rk{z_\ell}\) fulfill
\begin{align}\label{L1204.8}
P\cpP_\ell&=\cpP_\ell,&
\cpP_\ell P&=\cpP_\ell,&
\cpQ_\ell P&=\cpQ_\ell-Q,&
&\text{and}&
\rim\rk{z_\ell}\rim\rk{\cpQ_\ell^\ad \cpP_\ell}&\in\Cggq.
\end{align}
Since \eqref{L1204.5} and \eqref{L1204.6} hold true for all \(z\in\pip\setminus\mD\), we receive
\beql{L1204.9}
\det\rk*{\pba{n}{z_\ell}\hp{2n}^\mpi \cpP_\ell+\pboa{n+1}{z_\ell}\cpQ_\ell}
\neq0
\eeq
and
\beql{L1204.10}
F\rk{z_\ell}
=-\ek*{\paa{n}{z_\ell}\hp{2n}^\mpi \cpP_\ell+\paoa{n+1}{z_\ell}\cpQ_\ell}\ek*{\pba{n}{z_\ell}\hp{2n}^\mpi \cpP_\ell+\pboa{n+1}{z_\ell}\cpQ_\ell}^\inv
\eeq
for all \(\ell\in\N\).
Let \(\seqsh{2n+1}\) be the \tnatexto{\(\seqs{2n}\)} and let \(\habcd{n+1}\) be the \sabcdo{\(\seqsh{2n+1}\)}.
\rlem{L1237-B} yields then \(\pah{n}=\pa{n}\) and \(\pbh{n}=\pb{n}\) as well as \(\pah{n+1}=\pao{n+1}\) and \(\pbh{n+1}=\pbo{n+1}\).
Let \(\shHp{2n+1}\) be the \tsHpo{\(\seqsh{2n+1}\)}.
According to \rrem{R26-SK1}, we have \(\hhp{2n}=\hp{2n}\).
Consequently, regarding \eqref{L1204.9} and \eqref{L1204.10} we can conclude
\beql{L1204.11}
\det\rk*{\pbh{n}\rk{z_\ell}\hhp{2n}^\mpi\cpP_\ell+\pbh{n+1}\rk{z_\ell}\cpQ_\ell}
\neq0
\eeq
and
\beql{L1204.12}
F\rk{z_\ell}
=-\ek*{\pah{n}\rk{z_\ell}\hhp{2n}^\mpi\cpP_\ell+\pah{n+1}\rk{z_\ell}\cpQ_\ell}\ek*{\pbh{n}\rk{z_\ell}\hhp{2n}^\mpi\cpP_\ell+\pbh{n+1}\rk{z_\ell}\cpQ_\ell}^\inv
\eeq
for all \(\ell\in\N\).
From \rprop{L1237-A} we infer that \(\seqsh{2n+1}\) belongs to \(\Hggeq{2n+1}\).
Regarding \(\hhp{2n}=\hp{2n}\), we see that \(P=\OPu{\ran{\hhp{2n}}}\) and \(Q=\OPu{\nul{\hhp{2n}}}\).
Thus, in view of \eqref{L1204.8}, \eqref{L1204.11}, and \eqref{L1204.12}, the application of \rprop{L1745} provides that, for all \(\ell\in\N\), there exists a contractive complex \tqqa{matrix} \(\cpC _\ell\) such that
\beql{L1204.13}
F\rk{z_\ell}
=\mphk{2n+1}\rk{z_\ell}+\rk{z_\ell-\ko{z_\ell}}^\inv\lrhk{2n+1}\rk{z_\ell}\cpC _\ell\rrhk{2n+1}\rk{z_\ell}
\eeq
holds true, where \(\lrhk{2n+1}\), \(\rrhk{2n+1}\), and \(\mphk{2n+1}\) are built according to \rnotap{CB15}{CB15.b} from the sequence \(\seqsh{2n+1}\).
Now we continue with a natural limit procedure.
Since \((\cpC _\ell)_{\ell=1}^\infi\) is a sequence of contractive complex \tqqa{matrices}, \rrem{KTK} yields the existence of a subsequence \((\cpC _{\ell_m})_{m=1}^\infi\) and a contractive complex \tqqa{matrix} \(\cpC\) such that \(\lim_{m\to\infty}\cpC_{\ell_m}=\cpC\).
Since \(F\) belongs to \(\RFOqskg{2n}\), we recognize that \(F\) is holomorphic in \(\pip\) and, in particular, continuous.
Thus, \(\lim_{m\to\infty} z_{\ell_m}=w\) implies \(\lim_{m\to\infty} F\rk{z_{\ell_m}}=F\rk{w}\).
Regarding \(\seqsh{2n+1}\in\Hggeq{2n+1}\), from \rlem{CL18} we discern that all the matrix-valued functions \(\lrhk{2n+1}\), \(\rrhk{2n+1}\), and \(\mphk{2n+1}\) are continuous in \(\CR\).
Thus, \(\lim_{m\to\infty}\lrhka{2n+1}{z_{\ell_m}}=\lrhka{2n+1}{w}\) and \(\lim_{m\to\infty}\rrhka{2n+1}{z_{\ell_m}}=\rrhka{2n+1}{w}\) as well as \(\lim_{m\to\infty}\mphka{2n+1}{z_{\ell_m}}=\mphka{2n+1}{w}\).
Consequently, letting \(m\to\infty\), from \eqref{L1204.13} we perceive \(F\rk{w}=\mphka{2n+1}{w}+\rk{w-\ko w}^\inv\lrhka{2n+1}{w}\cpC \rrhka{2n+1}{w}\).
In view of \(\seqsh{2n+1}\in\Hggeq{2n+1}\), from \rlem{C220N} we obtain \(\lrhk{2n}=\lrhk{2n+1}\) and \(\rrhk{2n}=\rrhk{2n+1}\), whereas \rlem{L3N10} yields \(\mphk{2n}=\mphk{2n+1}\), where \(\lrhk{2n}\), \(\rrhk{2n}\), and \(\mphk{2n}\) are built according to \rnotap{CB15}{CB15.a} from the sequence \(\seqsh{2n+1}\).
By virtue of \rdefn{D.nat-ext} and \rrem{ABM-tru}, we have \(\lrhk{2n}=\lrk{2n}\) and \(\rrhk{2n}=\rrk{2n}\) as well as \(\mphk{2n}=\mpk{2n}\).
Therefore, we can finally conclude \(F\rk{w}=\mpka{2n}{w}+\rk{w-\ko w}^\inv\lrka{2n}{w}\cpC \rrka{2n}{w}\).
Since \(\cpC \) is contractive, then \eqref{L1204.A} is proved.
\eproof
Now we are able to state and prove the still missing converse part of \rthm{T45CD}.
As in the proof of \rprop{L1204}, we will see that the \tnatext{} \(\seqsh{2n+1}\) of \(\seqs{2n}\) plays an essential role.
\bpropl{L1301}
Let \(n\in\NO\) and let \(\seqs{2n}\in\Hggeq{2n}\).
Furthermore, let \(w\in\pip\) and let \(C\) be a contractive complex \tqqa{matrix}.
Then there exists a rational matrix-valued function \(F\) which belongs to \(\RFOqskg{2n}\) and which fulfills \(F\rk{w}=\mpka{2n}{w}+{\rk{w-\ko w}}^\inv\lrka{2n}{w}C\rrka{2n}{w}\), where \(\lrk{2n}\), \(\rrk{2n}\), and \(\mpk{2n}\) are given by \rnotap{CB15}{CB15.a}.
\eprop
\bproof
\rprop{L1237-A} shows that the \tnatext{} \(\seqsh{2n+1}\) of \(\seqs{2n}\) belongs to \(\Hggeq{2n+1}\).
Let \(\seqXh{2n+1}\) be the \tsXFo{\(\seqsh{2n+1}\)}.
Let \(\Ec\defeq-\ek{\XFha{2n+1}{w}}^\ad\), let \(b\defeq\rim\rk{w}\), let \(\Bc\defeq b^\inv\rim\Ec\), and let \(\Pc\defeq\OPu{\ran{\Ec}}\) and \(\Qc\defeq\OPu{\nul{\Ec}}\).
Clearly, then
\begin{align}\label{L1301.1}
\XFha{2n+1}{w}&=-\Ec^\ad,&
\ek*{\XFha{2n+1}{w}}^\ad&=-\Ec,&
&\text{and}&
\rk{\rim w}^\inv\rim\XFha{2n+1}{w}&=\Bc.
\end{align}
From \rcor{C1232} we can infer then \(\Bc\in\Cggq\).
Obviously, the (constant) matrix-valued functions \(\phi,\psi\colon\pip\to\Cqq\) defined by
\begin{align}\label{L1301.2}
\phi\rk{z}&\defeq\Ec\sqrt{\Bc}^\mpi-\Ec^\ad\sqrt{\Bc}^\mpi C\Pc&
&\text{and}&
\psi\rk{z}&\defeq\sqrt{\Bc}^\mpi-\sqrt{\Bc}^\mpi C\Pc+\Qc,
\end{align}
respectively, fulfill
\beql{L1301.11}
\holpt{\phi}
=\holpt{\psi}
=\pip.
\eeq
Let \(\shHp{2n+1}\) be the \tsHpo{\(\seqsh{2n+1}\)}.
\rprop{L1347} yields \(\ran{\rim\XFha{2n+1}{z}}=\ran{\ek{\XFha{2n+1}{z}}^\ad}=\ran{\hhp{2n}}\) and \(\nul{\rim\XFha{2n+1}{z}}=\nul{\ek{\XFha{2n+1}{z}}^\ad}=\nul{\hhp{2n}}\) for all \(z\in\pip\).
By virtue of \eqref{L1301.1}, consequently,
\begin{align}\label{L1301.3}
\ran{\Bc}&=\ran{\Ec}=\ran{\hhp{2n}}&
&\text{and}&
\nul{\Bc}&=\nul{\Ec}=\nul{\hhp{2n}}
\end{align}
follow.
Regarding that \(w\in\pip\) implies \(b\in(0,\infp)\) and taking into account \(B\in\Cggq\) and \(\ran{\Bc}=\ran{\Ec}\) as well as \eqref{L1301.2} and \(C\in\Kqq\), we can apply \rlem{L2043} to discern that
\begin{align}\label{L1301.4}
\rank\Mat{\phi\rk{z}\\\psi\rk{z}}&=q&
&\text{and}&
\rim\rk*{\ek*{\psi\rk{z}}^\ad \phi\rk{z}}&\in\Cggq,
\end{align}
that
\begin{align}\label{L1301.5}
P\phi\rk{z}&=\phi\rk{z},&
\phi\rk{z}P&=\phi\rk{z},&
&\text{and}&
\psi\rk{z}P&=\psi\rk{z}-Q,
\end{align}
and that
\beql{L1301.6}
\sqrt{B}^\mpi\ek*{\phi\rk{z}-E\psi\rk{z}}\ek*{\phi\rk{z}-E^\ad \psi\rk{z}}^\mpi\sqrt{B}
=PCP
\eeq
hold true for all \(z\in\pip\).
In view of \rdefn{def-nev-paar} and \rrem{IG.b}, from \eqref{L1301.11} and \eqref{L1301.4} we recognize that the pair \(\copa{\phi}{\psi}\) belongs to \(\PRFq\).
Let \(\sHp{2n}\) be the \tsHpo{\(\seqs{2n}\)}.
\rrem{R26-SK1} yields \(\hhp{2n}=\hp{2n}\).
Regarding \eqref{L1301.3}, consequently
\begin{align}\label{L1301.0}
P&=\OPu{\ran{E}}=\OPu{\ran{\hhp{2n}}}=\OPu{\ran{\hp{2n}}}&
&\text{and}&
Q&=\OPu{\nul{E}}=\OPu{\nul{\hhp{2n}}}=\OPu{\nul{\hp{2n}}}.
\end{align}
From \eqref{L1301.0} and \eqref{L1301.5} we receive \(\OPu{\ran{\hp{2n}}}\phi\rk{z}=\phi\rk{z}\) for all \(z\in\pip\).
According to \rnota{CbezA5}, hence \(\copa{\phi}{\psi}\in\PRFqa{\hp{2n}}\).
Let \(\abcd{n}\) be the \sabcdo{\(\seqs{2n}\)}.
In view of \rnota{N-abcdO}, let \(\pars{n}\), \(\pbrs{n}\), \(\paors{n+1}\), and \(\pbors{n+1}\) be the restrictions of \(\pa{n}\), \(\pb{n}\), \(\pao{n+1}\), and \(\pbo{n+1}\) onto \(\pip\), respectively.
From \rthmp{CM123}{CM123.a} we can then infer that the function \(\det\rk{\pbrs{n}\hp{2n}^\mpi\phi+\pbors{n+1}\psi}\) does not vanish identically in \(\pip\) and that the matrix-valued function
\beql{L1301.7}
F
\defeq-\rk{\pars{n}\hp{2n}^\mpi\phi+\paors{n+1}\psi}\rk{\pbrs{n}\hp{2n}^\mpi\phi+\pbors{n+1}\psi}^\inv
\eeq
belongs to \(\RFOqskg{2n}\).
Since from \eqref{L1301.2} we recognize that \(\phi\) and \(\psi\) are constant matrix-valued functions and since \(\pars{n}\), \(\pbrs{n}\), \(\paors{n+1}\), and \(\pbors{n+1}\) are restrictions of matrix polynomials, we see that \(F\) is a rational matrix-valued function.
Let
\begin{align}\label{L1301.10}
\cpP&\defeq\phi\rk{w}&
&\text{and}&
\cpQ&\defeq\psi\rk{w}.
\end{align}
By virtue of \eqref{L1301.10} and \eqref{L1301.2}, we see then that
\begin{align}\label{L1301.9}
\cpP&=\Ec\sqrt{\Bc}^\mpi-\Ec^\ad\sqrt{\Bc}^\mpi C\Pc&
&\text{and}&
\cpQ&=\sqrt{\Bc}^\mpi-\sqrt{\Bc}^\mpi C\Pc+\Qc
\end{align}
hold true.
Now we are going to justify that all assumptions for the application of \rprop{L1745} to the sequence \(\seqsh{2n+1}\) are satisfied.
Let \(\habcd{n+1}\) be the \sabcdo{\(\seqsh{2n+1}\)}.
According to \eqref{L1301.0}, we have \(\Pc=\OPu{\ran{\hhp{2n}}}\) and \(\Qc=\OPu{\nul{\hhp{2n}}}\).
Using additionally \(\seqsh{2n+1}\in\Hggeq{2n+1}\) and \(\Ec=-\ek{\XFha{2n+1}{w}}^\ad\) as well as \(\Bc=\rk{\rim w}^\inv\rim\Ec\) and \eqref{L1301.9}, we can apply \rlem{L0827} to obtain
\beql{L1301.8}
\det\rk*{\pbha{n}{w}\hhp{2n}^\mpi\cpP+\pbha{n+1}{w}\cpQ}
\neq0.
\eeq
\rlem{L1237-B} yields \(\pah{n}=\pa{n}\) and \(\pbh{n}=\pb{n}\) as well as \(\pah{n+1}=\pao{n+1}\) and \(\pbh{n+1}=\pbo{n+1}\).
Taking additionally into account the definition of \(\pars{n}\), \(\pbrs{n}\), \(\paors{n+1}\), and \(\pbors{n+1}\) as well as \(\hhp{2n}=\hp{2n}\), from \eqref{L1301.11}, \eqref{L1301.10}, and \eqref{L1301.8} we can conclude \(\det\rk{\pbrsa{n}{w}\hp{2n}^\mpi\phi\rk{w}+\pborsa{n+1}{w}\psi\rk{w}}\neq0\) and, in view of \eqref{L1301.7}, consequently
\begin{multline}\label{L1301.15}
-\ek*{\paha{n}{w}\hhp{2n}^\mpi\cpP+\paha{n+1}{w}\cpQ}\ek*{\pbha{n}{w}\hhp{2n}^\mpi\cpP+\pbha{n+1}{w}\cpQ}^\inv\\
=-\ek*{\parsa{n}{w}\hp{2n}^\mpi\phi\rk{w}+\paorsa{n+1}{w}\psi\rk{w}}\ek*{\pbrsa{n}{w}\hp{2n}^\mpi\phi\rk{w}+\pborsa{n+1}{w}\psi\rk{w}}^\inv
=F\rk{w}.
\end{multline}
In view of \(\rim\rk{w}\in(0,\infp)\) and \eqref{L1301.10}, from \eqref{L1301.4} and \eqref{L1301.5}, we can conclude \(\rim\rk{w}\rim\rk{\cpQ^\ad\cpP}\in\Cggq\) as well as \(P\cpP=\cpP\), \(\cpP P=\cpP\), and \(\cpQ P=\cpQ-Q\).
Regarding \(\seqsh{2n+1}\in\Hggeq{2n+1}\) as well as \(\Pc=\OPu{\ran{\hhp{2n}}}\) and \(\Qc=\OPu{\nul{\hhp{2n}}}\) and using additionally \eqref{L1301.8}, we can thus apply \rprop{L1745} to infer that the matrix
\beql{L1301.12}
\mathbf{K}
\defeq\sqrt{\rk{\rim w}^\inv\rim\XFha{2n+1}{w}}^\mpi\rk*{\cpP+\ek*{\XFha{2n+1}{w}}^\ad\cpQ}\rk*{\cpP+\XFha{2n+1}{w}\cpQ}^\mpi\sqrt{\rk{\rim w}^\inv\rim\XFha{2n+1}{w}}
\eeq
fulfills
\begin{multline}\label{L1301.13}
-\ek*{\paha{n}{w}\hhp{2n}^\mpi\cpP +\paha{n+1}{w}\cpQ }\ek*{\pbha{n}{w}\hhp{2n}^\mpi \cpP +\pbha{n+1}{w}\cpQ }^\inv\\
=\mphka{2n+1}{w}+\rk{w-\ko w}^\inv \lrhka{2n+1}{w}\mathbf{K} \rrhka{2n+1}{w},
\end{multline}
where \(\lrhk{2n+1}\), \(\rrhk{2n+1}\), and \(\mphk{2n+1}\) are built according to \rnotap{CB15}{CB15.b} from the sequence \(\seqsh{2n+1}\).
By virtue of \eqref{L1301.12}, \eqref{L1301.1}, \eqref{L1301.10}, and \eqref{L1301.6}, we discern
\beql{L1301.17}
\mathbf{K}
=\sqrt{B}^\mpi\ek*{\phi\rk{w}-E\psi\rk{w}}\ek*{\phi\rk{w}-E^\ad \psi\rk{w}}^\mpi\sqrt{B}
=\Pc C\Pc.
\eeq
In view of \(\seqsh{2n+1}\in\Hggeq{2n+1}\), from \rlem{C220N} we obtain \(\lrhka{2n}{w}=\lrhka{2n+1}{w}\) and \(\rrhka{2n}{w}=\rrhka{2n+1}{w}\), whereas \rlem{L3N10} yields \(\mphka{2n}{w}=\mphka{2n+1}{w}\), where \(\lrhk{2n}\), \(\rrhk{2n}\), and \(\mphk{2n}\) are built according to \rnotap{CB15}{CB15.a} from the sequence \(\seqsh{2n+1}\).
Because of \rdefn{D.nat-ext} and \rrem{ABM-tru}, we can discern \(\lrhka{2n}{w}=\lrka{2n}{w}\) and \(\rrhka{2n}{w}=\rrka{2n}{w}\) as well as \(\mphka{2n}{w}=\mpka{2n}{w}\).
Consequently,
\begin{align}\label{L1301.14}
\lrhka{2n+1}{w}&=\lrka{2n}{w},&
\rrhka{2n+1}{w}&=\rrka{2n}{w},&
&\text{and}&
\mphka{2n+1}{w}&=\mpka{2n}{w}.
\end{align}
From \rprop{CL19} we get \(\nul{\lrka{2n}{w}}=\nul{\hp{2n}}\) and \(\ran{\rrka{2n}{w}}=\ran{\hp{2n}}\).
According to \eqref{L1301.0}, we have \(\Pc=\OPu{\ran{\hp{2n}}}\) and \(\Qc=\OPu{\nul{\hp{2n}}}\).
Clearly, then \(P\rrka{2n}{w}=\rrka{2n}{w}\) follows.
By virtue of \rrem{R.P}, we see furthermore \(\ran{P}=\ran{\hp{2n}}\) and \(\ran{Q}=\nul{\hp{2n}}\).
Hence, we get \(\lrka{2n}{w}Q=\Oqq\).
\rrem{tsa2} provides \(\nul{\hp{2n}^\ad}=\ran{\hp{2n}}^\oc\).
Since \rrem{R1019} yields \(\hp{2n}^\ad=\hp{2n}\), then \(\nul{\hp{2n}}=\ran{\hp{2n}}^\oc\) follows.
Thus, \rrem{A.R.0<P<1} yields \(P+Q=\Iq\), implying \(\lrka{2n}{w}P=\lrka{2n}{w}\).
Consequently, we infer \(\lrka{2n}{w}\Pc C\Pc\rrka{2n}{w}=\lrka{2n}{w}C\rrka{2n}{w}\).
Using additionally \eqref{L1301.15}, \eqref{L1301.13}, \eqref{L1301.17}, and \eqref{L1301.14}, we finally conclude
\[\begin{split}
F\rk{w}
&=-\ek*{\paha{n}{w}\hhp{2n}^\mpi\cpP+\paha{n+1}{w}\cpQ}\ek*{\pbha{n}{w}\hhp{2n}^\mpi\cpP+\pbha{n+1}{w}\cpQ}^\inv\\
&=\mphka{2n+1}{w}+\rk{w-\ko w}^\inv \lrhka{2n+1}{w}\mathbf{K} \rrhka{2n+1}{w}\\
&=\mpka{2n}{w}+\rk{w-\ko w}^\inv \lrka{2n}{w}\Pc C\Pc\rrka{2n}{w}
=\mpka{2n}{w}+\rk{w-\ko w}^\inv \lrka{2n}{w}C\rrka{2n}{w}.\qedhere
\end{split}\]
\eproof
Note that \rprop{L1301} provides the remarkable additional information that each matrix from the matrix ball \(\cmb{\mpka{2n}{w}}{\rk{w-\ko w}^\inv\lrka{2n}{w}}{\rrka{2n}{w}}\) can be attained as the value of a rational \tqqa{matrix}-valued function belonging to \(\RFOqskg{2n}\).
In conclusion, we are now in a position to prove the principal \rthm{T45CD}:
\bproof[Proof of \rthm{T45CD}]
Combine \rpropss{L1204}{L1301}.
\eproof
We add the following result, which, for arbitrarily given \(w\in\pip\), explicitly describes a rational matrix-valued function \(F\) belonging to \(\RFOqsg{2n}\) whose value \(F(w)\) coincides with the center of the matrix ball considered in \rthm{T45CD}.
\bpropl{P1004}
Let \(n\in\NO\) and let \(\seqs{2n}\in\Hggeq{2n}\) with \tsHp{} \(\sHp{2n}\), \sabcd{} \(\abcd{n}\), and \tsXF{} \(\seqX{2n}\).
Denote by \(\pars{n}\), \(\pbrs{n}\), \(\paors{n+1}\), and \(\pbors{n+1}\) the restrictions of \(\pa{n}\), \(\pb{n}\), \(\pao{n+1}\), and \(\pbo{n+1}\) onto \(\pip\), respectively, where \(\pao{n+1}\) and \(\pbo{n+1}\) are defined in \rnota{N-abcdO}.
Let \(w\in\pip\) and let \(G\colon\pip\to\Cqq\) be defined by \(G\rk{z}\defeq-\ek{\XFa{2n}{w}}^\ad\).
Then \(G\in\Pqevena{\hp{2n}}\), the function \(\det\rk{\pbrs{n}\hp{2n}^\mpi G+\pbors{n+1}}\) does not vanish identically, and \(F\defeq-\rk{\pars{n}\hp{2n}^\mpi G+\paors{n+1}}\rk{\pbrs{n}\hp{2n}^\mpi G+\pbors{n+1}}^\inv\) is a rational matrix-valued function belonging to \(\RFOqsg{2n}\) fulfilling \(F\rk{w}=\mpka{2n}{w}\), where \(\mpk{2n}\) is defined in \rnotap{CB15}{CB15.a}.
\eprop
\bproof
From \rcor{C1232} we can infer \(\rim\XFa{2n}{w}\in\Cggq\).
Setting \(E\defeq-\ek{\XFa{2n}{w}}^\ad\), hence \(\rim E=-\rim\rk{\ek{\XFa{2n}{w}}^\ad}=\rim\XFa{2n}{w}\in\Cggq\) follows.
\rprop{L1347} yields \(\nul{\ek{\XFa{2n}{w}}^\ad}=\nul{\hp{2n}}\).
Consequently, \(\nul{E}=\nul{\hp{2n}}\).
Using \rlem{L1654}, we can then infer \(G\in\Pqevena{\hp{2n}}\).
Thus, \rthmp{T1153}{T1153.a} shows that \(\det\rk{\pbrs{n}\hp{2n}^\mpi G+\pbors{n+1}}\) does not vanish identically, and that \(F\) belongs to \(\RFOqsg{2n}\).
From \rprop{L1237-A} we see that the \tnatext{} \(\seqsh{2n+1}\) of \(\seqs{2n}\) belongs to \(\Hggeq{2n+1}\).
Hence, we can apply \rlem{L0819} to the sequence \(\seqsh{2n+1}\) to obtain \(\det\PBha{n}{w}\neq0\) and \(\mphka{2n+1}{w}=-\PAha{n}{w}\ek{\PBha{n}{w}}^\inv\), where \(\mphk{2n+1}\) and \(\PAh{n},\PBh{n}\) are built according to \rnotass{CB15}{N1326}, \tresp{}, from the sequence \(\seqsh{2n+1}\).
Let \(\habcd{n+1}\) be the \sabcdo{\(\seqsh{2n+1}\)}.
\rlem{L1237-B} yields then \(\pah{n}=\pa{n}\) and \(\pbh{n}=\pb{n}\) as well as \(\pah{n+1}=\pao{n+1}\) and \(\pbh{n+1}=\pbo{n+1}\).
Let \(\shHp{2n+1}\) be the \tsHpo{\(\seqsh{2n+1}\)}.
\rrem{R26-SK1} yields then \(\hhp{2n}=\hp{2n}\).
Let \(\seqXh{2n+1}\) be the \tsXFo{\(\seqsh{2n+1}\)}.
\rlem{L1520} shows that \(\XFha{2n+1}{w}=\XFa{2n}{w}\) holds true.
Regarding additionally \rnota{N1326}, we thus get
\begin{multline*}
\parsa{n}{w}\hp{2n}^\mpi G\rk{w}+\paorsa{n+1}{w}
=-\paa{n}{w}\hp{2n}^\mpi\ek*{\XFa{2n}{w}}^\ad+\paoa{n+1}{w}\\
=-\paha{n}{w}\hhp{2n}^\mpi\ek*{\XFha{2n+1}{w}}^\ad+\paha{n+1}{w}
=-\rk*{\paha{n}{w}\hhp{2n}^\mpi\ek*{\XFha{2n+1}{w}}^\ad-\paha{n+1}{w}}
=-\PAha{n}{w}
\end{multline*}
and, similarly, \(\pbrsa{n}{w}\hp{2n}^\mpi G\rk{w}+\pborsa{n+1}{w}=-\PBha{n}{w}\).
Consequently, the inequality \(\det\rk{\pbrsa{n}{w}\hp{2n}^\mpi G\rk{w}+\pborsa{n+1}{w}}\neq0\) and the equation
\[\begin{split}
F\rk{w}
&=-\ek*{\parsa{n}{w}\hp{2n}^\mpi G\rk{w}+\paorsa{n+1}{w}}\ek*{\pbrsa{n}{w}\hp{2n}^\mpi G\rk{w}+\pborsa{n+1}{w}}^\inv\\
&=-\PAha{n}{w}\ek*{\PBha{n}{w}}^\inv
=\mphka{2n+1}{w}
\end{split}\]
follow.
In view of \(\seqsh{2n+1}\in\Hggeq{2n+1}\), we can apply \rlem{L3N10} to the sequence \(\seqsh{2n+1}\) to obtain \(\mphk{2n}=\mphk{2n+1}\), where \(\mphk{2n}\) is built according to \rnotap{CB15}{CB15.a} from the sequence \(\seqsh{2n+1}\).
By virtue of \rdefn{D.nat-ext} and \rrem{ABM-tru}, we have \(\mphk{2n}=\mpk{2n}\).
Therefore, we can finally conclude \(F\rk{w}=\mpka{2n}{w}\).
\eproof
The following designation is closely connected with the quantities introduced in \rnota{CB15}.
Observe that the following constructions are well defined due to \rlemss{BK8.6}{L2412}, regarding \rnota{N1326}:
\bnotal{N2150M}
Let \(\kappa\in\NOinf\) and let \(\seqska\in\Hggeqka\) with \tsHp{} \(\sHp{\kappa}\), \sabcd{} \(\abcd{\ev{\kappa}}\), and \tsXF{} \(\seqX{\kappa}\).
\benui
\il{N2150M.a} For all \(n\in\NO\) such that \(2n\leq\kappa\), let \(\lpsdrk{2n},\rpsdrk{2n}\colon\CR\to\Cqq\) be defined by
\begin{align*}
\lpsdrka{2n}{z}&\defeq\abs{z-\ko z}^\inv\ek*{\pda{n}{z}}^\inv\hp{2n}\ek*{\rk{\rim z}^\inv\rim\XFa{2n}{z}}^\mpi\hp{2n}\ek*{\pda{n}{z}}^\invad
\intertext{and}
\rpsdrka{2n}{z}&\defeq\abs{z-\ko z}^\inv\ek*{\pba{n}{z}}^\invad\hp{2n}\ek*{\rk{\rim z}^\inv\rim\XFa{2n}{z}}^\mpi\hp{2n}\ek*{\pba{n}{z}}^\inv.
\end{align*}
\il{N2150M.b} If \(\kappa\geq1\), for all \(n\in\NO\) such that \(2n+1\leq\kappa\), let \(\lpsdrk{2n+1},\rpsdrk{2n+1}\colon\CR\to\Cqq\) be defined by
\begin{align*}
\lpsdrka{2n+1}{z}&\defeq\abs{z-\ko z}^\inv\ek*{\pda{n}{z}}^\inv\hp{2n}\ek*{\rk{\rim z}^\inv\rim\XFa{2n+1}{z}}^\mpi\hp{2n}\ek*{\pda{n}{z}}^\invad
\intertext{and}
\rpsdrka{2n+1}{z}&\defeq\abs{z-\ko z}^\inv\ek*{\pba{n}{z}}^\invad\hp{2n}\ek*{\rk{\rim z}^\inv\rim\XFa{2n+1}{z}}^\mpi\hp{2n}\ek*{\pba{n}{z}}^\inv.
\end{align*}
\eenui
\enota
\breml{R228S}
Let \(\kappa\in\NOinf\), let \(\seqska\in\Hggeqka\), and let \(m\in\mn{0}{\kappa}\).
Using \eqref{R1019.0} in \rrem{R1019} as well as \rrem{A.R.A+>}, it is readily checked that the matrix-valued functions \(\lrk{m},\rrk{m}\colon\CR\to\Cqq\) given by \rnota{CB15} and \(\lpsdrk{m},\rpsdrk{m}\colon\CR\to\Cqq\) given by \rnota{N2150M} are connected via \(\lpsdrka{m}{z}=\abs{z-\ko z}^\inv\ek{\lrka{m}{z}}\ek{\lrka{m}{z}}^\ad\) and \(\rpsdrka{m}{z}={\abs{z-\ko z}}^\inv\ek{\rrka{m}{z}}^\ad\ek{\rrka{m}{z}}\) for all \(z\in\CR\).
In particular, \(\lpsdrka{m}{z}\in\Cggq\) and \(\rpsdrka{m}{z}\in\Cggq\) as well as \(\ek{\rk{z-\ko z}^\inv\lrka{m}{z}}\ek{\rk{z-\ko z}^\inv\lrka{m}{z}}^\ad=\abs{z-\ko z}^\inv\sqrt{\lpsdrka{m}{z}}\sqrt{\lpsdrka{m}{z}}^\ad\) and \(\ek{\rrka{m}{z}}^\ad\rrka{m}{z}=\abs{z-\ko z}\sqrt{\rpsdrka{m}{z}}^\ad\sqrt{\rpsdrka{m}{z}}\) hold true for all \(z\in\CR\).
\erem
\breml{LRC-tru}
Let \(\kappa\in\NOinf\) and let \(\seqska\in\Hggeqka\).
Regarding \rnota{N2150M}, one can then see from \rremsss{CM2.1.65}{BK8.1}{XF-tru} that, for each \(m\in\mn{0}{\kappa}\), the functions \(\lpsdrk{m}\) and \(\rpsdrk{m}\) are built only from the matrices \(\su{0},\su{1},\dotsc,\su{m}\) and do not depend on the matrices \(\su{j}\) with \(j\geq m+1\).
Moreover, the following \rlem{L0844} even shows that \(\lpsdrk{m}\) and \(\rpsdrk{m}\) are independent of \(\su{j}\) with \(j\geq\eff{m}+1\), where \(\eff{m}\) is given by \eqref{SK5-11}.
\erem
\bleml{L0844}
Let \(\kappa\in\Ninf\), let \(\seqska\in\Hggeqka\), and let \(n\in\NO\) be such that \(2n+1\leq\kappa\).
Then \(\lpsdrka{2n}{z}=\lpsdrka{2n+1}{z}\) and \(\rpsdrka{2n}{z}=\rpsdrka{2n+1}{z}\) for all \(z\in\CR\).
\elem
\bproof
Use \rrem{R228S} and \rlem{C220N}.
\eproof
Now we cite a result for operator balls due to Yu.~L.~Shmul\cprime{}yan.
\bpropnl{\zitaa{MR0273377}{\cthm{1.3}{868}}}{d92t152}
Let \(M\in\Cpq\), let \(A_1, A_2\in\Cpp\), and let \(B_1, B_2\in\Cqq\).
Then the following statements are equivalent:
\baeqi{0}
\il{d92t152.i} \(\cmb{M}{A_1}{B_1}=\cmb{M}{A_2}{B_2}\).
\il{d92t152.ii} One of the following two conditions is satisfied:
\begin{enumerate}
\item Each of the pairs \(\rk{A_1,B_1}\) and \(\rk{A_2,B_2}\) contains a zero matrix.
\item There exists a positive real number \(\rho\) such that the equations \(A_1A_1^\ad=\rho A_2A_2^\ad\) and \(B_1^\ad B_1=\rho^\inv B_2^\ad B_2\) are satisfied.
\end{enumerate}
\eaeqi
\eprop
Note that there is a detailed proof of \rprop{d92t152} also in \zitaa{MR1152328}{\cthm{1.5.2}{49}}.
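As a purely numerical illustration of the sufficiency direction in \rprop{d92t152} (outside the formal text), recall that the matrix ball \(\cmb{M}{A}{B}\) consists of all matrices \(M+AKB\) with contractive \(K\). If \(A_2\defeq\rho^{-1/2}A_1U\) and \(B_2\defeq\rho^{1/2}VB_1\) with \(\rho>0\) and unitary matrices \(U\) and \(V\), then both equations in condition~(b) are fulfilled and \(A_2KB_2=A_1\rk{UKV}B_1\), so that the two balls coincide, since \(K\mapsto UKV\) maps the contractive matrices bijectively onto themselves. The following Python sketch (all identifiers are ad hoc) checks this on random data.

```python
import numpy as np

rng = np.random.default_rng(0)
p, q, rho = 3, 2, 2.5   # ball of p x q matrices, left/right semi-radii A, B

def random_unitary(m):
    # The Q factor of a QR factorization of a random complex matrix is unitary.
    a = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
    return np.linalg.qr(a)[0]

A1 = rng.standard_normal((p, p)) + 1j * rng.standard_normal((p, p))
B1 = rng.standard_normal((q, q)) + 1j * rng.standard_normal((q, q))
U, V = random_unitary(p), random_unitary(q)
A2 = rho ** -0.5 * A1 @ U
B2 = rho ** 0.5 * V @ B1

# The two semi-radius equations from condition (b) hold with this rho:
assert np.allclose(A1 @ A1.conj().T, rho * (A2 @ A2.conj().T))
assert np.allclose(B1.conj().T @ B1, (B2.conj().T @ B2) / rho)

# Every element M + A2 K B2 of the second ball equals M + A1 (U K V) B1,
# and U K V is again contractive, so the two balls coincide.
K = rng.standard_normal((p, q)) + 1j * rng.standard_normal((p, q))
K /= max(1.0, np.linalg.norm(K, 2))            # force the spectral norm <= 1
assert np.linalg.norm(U @ K @ V, 2) <= 1.0 + 1e-12
assert np.allclose(A2 @ K @ B2, A1 @ (U @ K @ V) @ B1)
```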
Now we are able to reformulate \rthm{T45CD} with a new representation of the matrix ball in question.
\bthml{F49}
Let \(n\in\NO\) and let \(\seqs{2n}\in\Hggeq{2n}\).
For all \(w\in\pip\), then
\[
\setaca*{F\rk{w}}{F\in\RFOqskg{2n}}
=\Cmb{\mpka{2n}{w}}{\sqrt{\lpsdrk{2n}\rk{w}}}{\sqrt{\rpsdrk{2n}\rk{w}}}.
\]
\ethm
\bproof
Regarding \rthm{T45CD} and \rrem{R228S}, the assertion follows by applying \rprop{d92t152}.
\eproof
Now we derive a connection between the right and left semi-radii of the matrix ball described in \rthm{F49}.
\bpropl{L2024}
Let \(\kappa\in\NOinf\), let \(\seqska\in\Hggeqka\), and let \(m\in\mn{0}{\kappa}\).
For all \(z\in\CR\), then \(\rpsdrka{m}{z}=\lpsdrka{m}{\ko{z}}\), where \(\lpsdrk{m}\) and \(\rpsdrk{m}\) are given by \rnota{N2150M}.
\eprop
\bproof
Let \(\sHp{\kappa}\) be the \tsHpo{\(\seqska\)}, let \(\abcd{\ev{\kappa}}\) be the \sabcdo{\(\seqska\)}, and let \(\seqX{\kappa}\) be the \tsXFo{\(\seqska\)}.
We consider an arbitrary \(z\in\CR\).
According to \rnota{N2150M}, we have \(\lpsdrka{m}{\ko z}=\abs{\ko z-z}^\inv\ek{\pda{n}{\ko z}}^\inv\hp{2n}\ek{\rk{\rim\ko z}^\inv\rim\XFa{m}{\ko z}}^\mpi\hp{2n}\ek{\pda{n}{\ko z}}^\invad\) and \(\rpsdrka{m}{z}=\abs{z-\ko z}^\inv\ek{\pba{n}{z}}^\invad\hp{2n}\ek{\rk{\rim z}^\inv\rim\XFa{m}{z}}^\mpi\hp{2n}\ek{\pba{n}{z}}^\inv\), where \(n\defeq\ef{m}\) is given by \eqref{SK5-11}.
Because of \rrem{R1624}, we have \(\su{j}^\ad=\su{j}\) for all \(j\in\mn{0}{\kappa}\).
Consequently, we can apply \rrem{BK8.2} to get \(\pda{n}{\ko{z}}=\ek{\pba{n}{z}}^\ad\).
Regarding \rrem{R0735}, \rlem{L0625} provides \(\XFa{m}{\ko{z}}=\ek{\XFa{m}{z}}^\ad\), implying \(\rim\XFa{m}{\ko{z}}=-\rim\XFa{m}{z}\).
Taking additionally into account \(\rim\ko z=-\rim z\), we consequently perceive
\[\begin{split}
\abs{\ko z-z}\lpsdrka{m}{\ko{z}}
&=\ek*{\pda{n}{\ko z}}^\inv\hp{2n}\ek*{\rk{\rim\ko z}^\inv\rim\XFa{m}{\ko z}}^\mpi\hp{2n}\ek*{\pda{n}{\ko z}}^\invad\\
&=\ek*{\pba{n}{z}}^\invad\hp{2n}\ek*{\rk{\rim z}^\inv\rim\XFa{m}{z}}^\mpi\hp{2n}\ek*{\pba{n}{z}}^\inv
=\abs{z-\ko z}\rpsdrka{m}{z},
\end{split}\]
which, in view of \(\abs{\ko z-z}=\abs{z-\ko z}\neq0\), yields the assertion.
\eproof
\section{Some connections to I.~V.~Kovalishina's investigations in the non-degenerate case}\label{S1031}
In this section, we first sketch the approach developed by I.~V.~Kovalishina \cite{MR703593} who considered the non-degenerate case of a sequence \(\seqs{2n}\in\Hgq{2n}\).
Actually, she studied simultaneously the Hamburger moment problem, the matricial Carath\'eodory problem, and the matricial Nevanlinna--Pick problem in the framework of V.~P.~Potapov's method of Fundamental Matrix Inequalities (FMI~method).
Concerning the computation of the corresponding Weyl matrix balls, she concentrated on the Nevanlinna--Pick case.
The computation of the Weyl matrix balls for the Hamburger case along the line of I.~V.~Kovalishina was realized by Chen/Hu~\zitaa{MR1740433}{\cpage{77}} and also in an unpublished handwritten manuscript by Yu.~M.~Dyukarev \cite{Dyu04}.
The bridge to our approach is built by representing the canonical \tqqa{blocks} of the resolvent matrix of Kovalishina in terms of orthogonal matrix polynomials of the first and the second type.
These \tqqa{matrix} polynomials turn out to be strongly interrelated to our \tabcd{} introduced in \rdefn{K19} in terms of which the parameters of the corresponding Weyl matrix balls are written (see \rnota{CB15}, \rthm{T45CD}).
In order to recall I.~V.~Kovalishina's description of the set \(\RFOqskg{2n}\), we need some notations.
Let \(\rv{0}\defeq\Iq\) and let \(\T{0}\defeq\Oqq\).
Furthermore, for all \(k\in\N\), let \(\rv{k}\defeq\smat{\Iq\\\Ouu{kq}{q}}\) and let \(\T{k}\defeq\smat{\Ouu{q}{kq}&\Oqq\\\Iu{kq}&\Ouu{kq}{q}}\).
Since \(\T{k}\) is nilpotent, for all \(k\in\NO\) and all \(z\in\C\), the matrix \(\Iu{\rk{k+1}q}-z\T{k}\) is invertible.
In particular, for all \(k\in\NO\), the function \(\RT{k}\colon\C\to\Coo{\rk{k+1}q}{\rk{k+1}q}\) given by \(\RTa{k}{z}\defeq{\rk{\Iu{\rk{k+1}q}-z\T{k}}}^\inv\) is well defined.
One can easily see that, for all \(z\in\C\) and all \(n\in\NO\), we have
\beql{Rblock}
\RTa{n}{z}
=
\Mat{
\Iq&\Oqq&\Oqq&\hdots&\Oqq&\Oqq\\
z\Iq&\Iq&\Oqq&\hdots&\Oqq&\Oqq\\
z^2\Iq&z\Iq&\Iq&&\Oqq&\Oqq\\
\vdots&\vdots&\vdots&\ddots&&\vdots\\
z^{n-1}\Iq&z^{n-2}\Iq&z^{n-3}\Iq&\hdots&\Iq&\Oqq\\
z^n\Iq&z^{n-1}\Iq&z^{n-2}\Iq&\hdots&z\Iq&\Iq
}
\eeq
and, in view of \eqref{NMB}, hence
\begin{align}\label{Rv=E}
\RTa{n}{z}\rv{n}&=\ek*{\eua{n}{\ko z}}^\ad&
&\text{and}&
\rv{n}^\ad\ek*{\RTa{n}{\ko z}}^\ad&=\eua{n}{z}.
\end{align}
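As a quick numerical cross-check (purely illustrative, outside the formal development), the following Python sketch compares \(\rk{\Iu{\rk{n+1}q}-z\T{n}}^\inv\) with the explicit lower triangular Toeplitz form \eqref{Rblock} and verifies the first identity in \eqref{Rv=E} in the scalar case \(q=1\), reading \(\eua{n}{z}\) as the row \(\rk{1,z,\dotsc,z^n}\) (an assumption about the notation from \eqref{NMB}).

```python
import numpy as np

def T(k):
    """Shift matrix T_k for q = 1: ones on the first subdiagonal (nilpotent)."""
    return np.diag(np.ones(k), -1).astype(complex)

def R_T(k, z):
    """R_{T_k}(z) = (I - z T_k)^{-1}, well defined since T_k is nilpotent."""
    return np.linalg.inv(np.eye(k + 1, dtype=complex) - z * T(k))

n, z = 3, 0.7 + 0.4j
R = R_T(n, z)

# Explicit lower triangular Toeplitz form (Rblock): entry (j, l) is z^(j-l).
R_explicit = np.array([[z ** (j - l) if j >= l else 0.0 for l in range(n + 1)]
                       for j in range(n + 1)], dtype=complex)
assert np.allclose(R, R_explicit)

# First identity in (Rv=E): R_{T_n}(z) v_n = [e_n(conj z)]^*, i.e. the first
# column of R_{T_n}(z) is (1, z, ..., z^n)^T.
v = np.zeros((n + 1, 1), dtype=complex)
v[0, 0] = 1.0
assert np.allclose(R @ v, np.array([[z ** j] for j in range(n + 1)]))
```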
Let \(\kappa\in\NOinf\) and let \(\seqska\) be a sequence of complex \tpqa{matrices}.
Then let
\begin{align}\label{u}
\ru{0}&\defeq\Opq&
&\text{and}&
\ru{n}
&\defeq
\Mat{
\Opq\\
-\yuu{0}{n-1}
}
\end{align}
for all \(n\in\N\) with \(n-1\leq\kappa\).
\bnotal{N.rU}
Let \(n\in\NO\) and let \(\seqs{2n}\in\Hgq{2n}\).
Then let \(\rU{n}\colon\C\to\Coo{2q}{2q}\) be defined by \(\rUa{n}{z}\defeq\Iu{2q}+\iu z\mat{\ru{n},\rv{n}}^\ad\ek*{\RTa{n}{\ko z}}^\ad\Hu{n}^\inv\mat{\ru{n},\rv{n}}\Jimq\).
\enota
Now we are able to present Kovalishina's description of the set \(\RFOqskg{2n}\).
\bthmnl{\cite{MR703593}}{T1111}
Let \(n\in\NO\) and let \(\seqs{2n}\in\Hgq{2n}\).
Further, let \(\smat{U_{n;11}&U_{n;12}\\U_{n;21}&U_{n;22}}\) be the \tqqa{\tbr{}} of the restriction of \(\rU{n}\) onto \(\pip\).
Then:
\benui
\il{T1111.a} For each \(\copa{\phi}{\psi}\in\PRFq\), the function \(\det\rk{U_{n;21}\phi+U_{n;22}\psi}\) does not vanish identically and the matrix-valued function \(F\defeq\rk{U_{n;11}\phi+U_{n;12}\psi}\rk{U_{n;21}\phi+U_{n;22}\psi}^\inv\) belongs to \(\RFOqskg{2n}\).
\il{T1111.b} For each \(F\in\RFOqskg{2n}\), there exists a pair \(\copa{\phi}{\psi}\in\PRFq\) such that \(F=\rk{U_{n;11}\phi+U_{n;12}\psi}\rk{U_{n;21}\phi+U_{n;22}\psi}^\inv\).
\il{T1111.c} For each \(\ell\in\set{1,2}\) let \(\copa{\phi_\ell}{\psi_\ell}\in\PRFq\) and let \(F_\ell\defeq\rk{U_{n;11}\phi_\ell+U_{n;12}\psi_\ell}\rk{U_{n;21}\phi_\ell+U_{n;22}\psi_\ell}^\inv\).
Then \(F_1=F_2\) if and only if \(\cpcl{\phi_1}{\psi_1}=\cpcl{\phi_2}{\psi_2}\).
\eenui
\ethm
The function \(\rU{n}\) introduced in \rnota{N.rU} satisfies a useful functional equation.
\blemnl{\cite{MR703593}}{L1720}
Let \(n\in\NO\) and let \(\seqs{2n}\in\Hgq{2n}\).
Then the matrix-valued function \(\rU{n}\) given by \rnota{N.rU} is a matrix polynomial of degree at most \(n+1\) and the following statements hold true:
\benui
\il{L1720.a} For every choice of \(w,z\in\C\),
\[
\Jimq-\rUa{n}{z}\Jimq\ek*{\rUa{n}{w}}^\ad
=\iu\rk{\ko w-z}\mat{\ru{n},\rv{n}}^\ad\ek*{\RTa{n}{\ko z}}^\ad\Hu{n}^\inv\RTa{n}{\ko w}\mat{\ru{n},\rv{n}}.
\]
\il{L1720.b} For \(w\in\pip\), the matrix \(\Jimq-\rUa{n}{w}\Jimq\ek*{\rUa{n}{w}}^\ad\) is \tnnH{}.
In particular, \(\Jimq-\rUa{n}{x}\Jimq\ek*{\rUa{n}{x}}^\ad=\Ouu{2q}{2q}\) is fulfilled for all \(x\in\R\).
\il{L1720.c} For all \(z\in\C\), the matrix \(\rUa{n}{z}\) is non-singular and \(\ek{\rUa{n}{z}}^\inv=\Jimq\ek{\rUa{n}{\ko z}}^\ad\Jimq\) holds true.
In particular, \(\rU{n}^\inv\) is a matrix polynomial of degree at most \(n+1\), which satisfies the representation \(\ek{\rUa{n}{z}}^\inv=\Iu{2q}-\iu z\mat{\ru{n},\rv{n}}^\ad\Hu{n}^\inv\RTa{n}{z}\mat{\ru{n},\rv{n}}\Jimq\) for all \(z\in\C\).
\eenui
\elem
Note that a detailed proof of \rlem{L1720} is given, \teg, in \zitaa{Roe03}{\clem{8.7}{89}}.
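The identities in \rlem{L1720} can also be checked numerically in the scalar case \(q=1\). The following Python sketch is purely illustrative and assumes the standard signature matrix \(\Jimq=\smat{\Oqq&-\iu\Iq\\\iu\Iq&\Oqq}\) known from the Potapov-method literature (this matrix is not reproduced in the present section); it builds \(\Hu{n}\) from the moments of a discrete measure with \(n+2\) atoms, which guarantees \(\det\Hu{n}\neq0\), and verifies the identities from parts (a) and (c).

```python
import numpy as np

# ASSUMED standard signature matrix for q = 1.
J = np.array([[0.0, -1j], [1j, 0.0]])

def moments(n, rng):
    """s_0, ..., s_2n of a discrete measure with n + 2 atoms, so that the
    Hankel matrix H_n = [s_{j+k}] is positive definite."""
    x = rng.standard_normal(n + 2)
    m = rng.uniform(0.5, 1.5, n + 2)
    return np.array([np.sum(m * x ** j) for j in range(2 * n + 1)])

def data(n, s):
    """H_n, T_n, u_n = (0; -y_{0,n-1}), v_n = (1; 0; ...; 0)."""
    H = np.array([[s[j + k] for k in range(n + 1)] for j in range(n + 1)],
                 dtype=complex)
    T = np.diag(np.ones(n), -1).astype(complex)
    u = np.zeros((n + 1, 1), dtype=complex)
    u[1:, 0] = -s[:n]
    v = np.zeros((n + 1, 1), dtype=complex)
    v[0, 0] = 1.0
    return H, T, u, v

def R(T, z):
    return np.linalg.inv(np.eye(T.shape[0]) - z * T)   # R_{T_n}(z)

def U_n(n, z, s):
    """U_n(z) = I_2 + i z [u, v]^* [R(conj z)]^* H_n^{-1} [u, v] J (Notation N.rU)."""
    H, T, u, v = data(n, s)
    uv = np.hstack([u, v])
    return (np.eye(2) + 1j * z * uv.conj().T @ R(T, np.conj(z)).conj().T
            @ np.linalg.inv(H) @ uv @ J)

rng = np.random.default_rng(7)
n, z, w = 2, 0.4 + 0.9j, -1.1 + 0.3j
s = moments(n, rng)
H, T, u, v = data(n, s)
uv = np.hstack([u, v])
# Part (a): J - U(z) J U(w)^* = i (conj w - z) [u,v]^* R(conj z)^* H^{-1} R(conj w) [u,v].
lhs = J - U_n(n, z, s) @ J @ U_n(n, w, s).conj().T
rhs = (1j * (np.conj(w) - z) * uv.conj().T @ R(T, np.conj(z)).conj().T
       @ np.linalg.inv(H) @ R(T, np.conj(w)) @ uv)
assert np.allclose(lhs, rhs)
# Part (c): [U(z)]^{-1} = J [U(conj z)]^* J.
assert np.allclose(np.linalg.inv(U_n(n, z, s)),
                   J @ U_n(n, np.conj(z), s).conj().T @ J)
```

Note that part~(c) also follows from part~(a) with \(w=\ko z\), which is exactly what the final assertion reflects.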
\blemnl{\cite{MR703593,MR1740433,Dyu04}}{L1737}
Under the assumptions of \rlem{L1720}, one can see that the function \(\rW{n}\colon\C\to\Coo{2q}{2q}\) defined by \(\rWa{n}{z}\defeq\ek{\rUa{n}{z}}^\invad\rk{-\Jimq}\ek{\rUa{n}{z}}^\inv\) fulfills \(\det\rWa{n}{z}\neq0\) and \(\ek{\rWa{n}{z}}^\inv=\Jimq\rWa{n}{\ko z}\Jimq\) for all \(z\in\C\).
Moreover, \rlem{L1720} shows that, for all \(z\in\C\), the \tqqa{\tbr}
\[
\rWa{n}{z}
=
\Mat{
-\rRa{n}{z}&\rSa{n}{z}\\
\ek{\rSa{n}{z}}^\ad&-\rTa{n}{z}
}
\]
holds true, where
\begin{align*}
\rRa{n}{z}&\defeq\iu\rk{\ko z-z}\rv{n}^\ad\ek*{\RTa{n}{z}}^\ad\Hu{n}^\inv\ek*{\RTa{n}{z}}\rv{n},\\
\rSa{n}{z}&\defeq\iu\Iq+\iu\rk{\ko z-z}\rv{n}^\ad\ek*{\RTa{n}{z}}^\ad\Hu{n}^\inv\ek*{\RTa{n}{z}}\ru{n},
\intertext{and}
\rTa{n}{z}&\defeq\iu\rk{\ko z-z}\ru{n}^\ad\ek*{\RTa{n}{z}}^\ad\Hu{n}^\inv\ek*{\RTa{n}{z}}\ru{n}.
\end{align*}
In particular, for all \(w\in\pip\), the matrices \(\rRa{n}{w}\) and \(-\rRa{n}{\ko w}\) are \tpH{}, where \(-\ek{\rRa{n}{\ko w}}^\inv=\ek{\rSa{n}{w}}^\ad\ek{\rRa{n}{w}}^\inv\rSa{n}{w}-\rTa{n}{w}\).
\elem
Note that a detailed proof of \rlem{L1737} is stated in \zitaa{Tau14}{\clemss{10.6}{265}{10.8}{269}}.
\bnotal{N.rCdg}
Let \(n\in\NO\) and let \(\seqs{2n}\in\Hgq{2n}\).
Then let \(\rC{n},\rg{n},\rd{n}\colon\pip\to\Coo{q}{q}\) be defined by
\[
\rCa{n}{w}
\defeq\ek*{\rRa{n}{w}}^\inv\rSa{n}{w}
\]
and by
\begin{align*}
\rga{n}{w}&\defeq\ek*{\rRa{n}{w}}^\inv&
&\text{and}&
\rda{n}{w}&\defeq\ek*{\rSa{n}{w}}^\ad\ek*{\rRa{n}{w}}^\inv\rSa{n}{w}-\rTa{n}{w}.
\end{align*}
\enota
\breml{R1542}
Let \(n\in\NO\) and let \(\seqs{2n}\in\Hgq{2n}\).
According to \rlem{L1737}, for all \(w\in\pip\), then \(\rga{n}{w},\rda{n}{w}\in\Cgq\).
\erem
\breml{R9.10}
Under the assumptions of \rlem{L1720}, from \rlem{L1737}, for all \(z\in\C\), we conclude
\[
\rWa{n}{z}\Jimq\rWa{n}{\ko z}
=\rWa{n}{z}\Jimq\rWa{n}{\ko z}\Jimq^2
=\rWa{n}{z}\ek*{\rWa{n}{z}}^\inv\Jimq
=\Jimq
\]
and, in view of \eqref{JQ}, in particular
\[\begin{split}
\Oqq
&=\mat{\Iq,\Oqq}\Jimq\Mat{\Iq\\\Oqq}
=\mat{\Iq,\Oqq}\rWa{n}{z}\Jimq\rWa{n}{\ko z}\Mat{\Iq\\\Oqq}\\
&=\iu\rk*{\rRa{n}{z}\ek*{\rSa{n}{\ko z}}^\ad-\rSa{n}{z}\rRa{n}{\ko z}},
\end{split}\]
which implies \(\rCa{n}{w}=\ek{\rRa{n}{w}}^\inv\rSa{n}{w}=\ek{\rSa{n}{\ko w}}^\ad\ek{\rRa{n}{\ko w}}^\inv\) for all \(w\in\pip\).
\erem
Using \rnota{N.rCdg} and \rrem{R1542}, we are now able to describe, for each \(w\in\pip\), the set \(\setaca{F\rk{w}}{F\in\RFOqskg{2n}}\) as a matrix ball.
\bthmnl{\cite{MR703593,MR1740433,Dyu04}}{T1546}
Let \(n\in\NO\) and let \(\seqs{2n}\in\Hgq{2n}\).
For all \(w\in\pip\), then
\[
\setaca*{F\rk{w}}{F\in\RFOqskg{2n}}
=\Cmb{\rCa{n}{w}}{\sqrt{\rga{n}{w}}}{\sqrt{\rda{n}{w}}}.
\]
\ethm
From \rthm{T1546}, which is dedicated to the non-degenerate case, \rthm{F49}, where the general case is considered, and \rprop{d92t152}, it is clear that there is an explicit connection between the parameters of the matrix balls in question, where the centers necessarily coincide by a result due to Shmul\cprime{}yan \zitaa{MR0273377}{\csec{1}{}} (see also \zitaa{MR1152328}{\ccor{1.5.1}{47}}).
The main aim of this section is now to verify that the parameters of the matrix ball occurring in \rthm{T1546} even coincide with the matrices occurring in \rthm{F49}.
This requires some considerations.
Let \(\kappa\in\NOinf\) and let \(\seqska\in\Hggeqka\).
Then, in view of \rnota{B.N.deg}, for all \(n\in\NO\) with \(2n-1\leq\kappa\), let
\beql{yn}
\yu{n}
\defeq\cvuo{n}{\pb{n}}.
\eeq
Regarding \rrem{SK32R}, in particular \(\yu{0}=\Iq\).
Because of \rremss{SK32R}{B.R.P=euY}, \eqref{Rv=E}, and \eqref{yn}, for all \(n\in\NO\) such that \(2n-1\leq\kappa\) and all \(z\in\C\), we can infer
\beql{PbR}
\pba{n}{z}
=\rv{n}^\ad\ek*{\RTa{n}{\ko z}}^\ad\yu{n}
\eeq
and, using additionally \rremss{R1624}{BK8.2}, hence
\beql{PdR}
\pda{n}{z}
=\yu{n}^\ad\RTa{n}{z}\rv{n}.
\eeq
\bleml{L0952}
Let \(\kappa\in\NOinf\), let \(\seqska\in\Hggeqka\), and let \(\abcd{\ev{\kappa}}\) be the \sabcdo{\(\seqska\)}.
For all \(k\in\NO\) with \(2k-1\leq\kappa\), then \(\paa{k}{z}=-\ru{k}^\ad\ek{\RTa{k}{\ko z}}^\ad\yu{k}\) and \(\pca{k}{z}=-\yu{k}^\ad\RTa{k}{z}\ru{k}\) for all \(z\in\C\).
\elem
\bproof
Let \(k\in\NO\) with \(2k-1\leq\kappa\) and let \(z\in\C\).
In view of \rprop{P1203}, then \(\pa{k}=\pb{k}^\secra{s}\).
Using \eqref{u}, \rrem{SK32R}, \rnota{B.N.sec}, and \eqref{K19.0}, in the case \(k=0\) we easily obtain then
\[
-\ru{0}^\ad\ek*{\RTa{0}{\ko z}}^\ad\yu{0}
=\Oqq
=\pb{0}^\secra{s}(z)
=\paa{0}{z}.
\]
Now suppose \(k\geq1\).
In view of \eqref{yz} and \rrem{R1624}, we have \(\yuu{0}{k-1}^\ad=\zuu{0}{k-1}\).
Using \eqref{yz}, \eqref{Rblock}, \eqref{NMB}, and \eqref{M.N.S}, we conclude
\[
\zuu{0}{k-1}\ek*{\RTa{k-1}{\ko z}}^\ad
=\row\Seq{\sum_{\ell=0}^jz^\ell\su{j-\ell}}{j}{0}{k-1}
=\eua{k-1}{z}\SUu{k-1}.
\]
From \rrem{SK32R}, \rnota{B.N.sec}, and \eqref{yn} we know
\(
\pb{k}^\secra{s}(z)
=\eua{k-1}{z}\mat{\Ouu{kq}{q},\SUu{k-1}}\yu{k}
\).
Taking additionally into account \eqref{u} and \eqref{Rblock}, then
\[
\begin{split}
-\ru{k}^\ad\ek*{\RTa{k}{\ko z}}^\ad\yu{k}
&=-\mat{\Oqq,-\zuu{0}{k-1}}
\Mat{
\uk&\uk\\
\Ouu{kq}{q}&\ek{\RTa{k-1}{\ko z}}^\ad
}\yu{k}\\
&=\mat*{\Oqq,\zuu{0}{k-1}\ek{\RTa{k-1}{\ko z}}^\ad}\yu{k}
=\mat*{\Oqq,\eua{k-1}{z}\SUu{k-1}}\yu{k}\\
&=\eua{k-1}{z}\mat{\Ouu{kq}{q},\SUu{k-1}}\yu{k}
=\pb{k}^\secra{s}(z)
=\paa{k}{z}
\end{split}
\]
follows.
Using \rremss{R1624}{BK8.2}, we can easily get the second identity from the first one.
\eproof
\breml{N9.3}
Let \(\kappa\in\Ninf\) and let \(\seqs{2\kappa}\in\Hggq{2\kappa}\).
In view of \rpropss{143.T1336}{B.P.oMP-lGs} and the notation given in \eqref{yn}, for all \(n\in\mn{1}{\kappa}\), we have \(\yu{n}=\tmat{-\Xu{n}\\\Iq}\) with some matrix \(\Xu{n}\) belonging to \(\tu{n}\defeq\setaca{r\in\Coo{nq}{q}}{\Hu{n-1}r=\yuu{n}{2n-1}}\).
If \(\seqs{2\kappa}\in\Hgq{2\kappa}\), then, for all \(n\in\mn{1}{\kappa}\), the matrix \(\Hu{n-1}\) is non-singular and \(\yu{n}=\rY{n}\) where
\beql{YY}
\rY{n}
\defeq
\Mat{
-\Hu{n-1}^\inv\yuu{n}{2n-1}\\
\Iq
}.
\eeq
\erem
Setting \(\rY{0}\defeq\Iq\) and using \eqref{YY}, we see from \rprop{P1644} that the following construction is well defined:
\bnotal{N.pn}
Let \(\kappa\in\NOinf\) and let \(\seqs{2\kappa}\in\Hgq{2\kappa}\) with \tsHp{} \(\sHp{2\kappa}\).
Then, for all \(k\in\mn{0}{\kappa}\), let \(\pna{k}, \pnb{k},\pnc{k}, \pnd{k}\colon\C\to\Cqq\) be defined by
\begin{align*}
\pnaa{k}{z}&\defeq-\ru{k}^\ad\ek*{\RTa{k}{\ko z}}^\ad\rY{k}\sqrt{\hp{2k}}^\inv,&
\pnba{k}{z}&\defeq\rv{k}^\ad\ek*{\RTa{k}{\ko z}}^\ad\rY{k}\sqrt{\hp{2k}}^\inv,\\
\pnca{k}{z}&\defeq-\sqrt{\hp{2k}}^\inv\rY{k}^\ad\RTa{k}{z}\ru{k},&
\pnda{k}{z}&\defeq\sqrt{\hp{2k}}^\inv\rY{k}^\ad\RTa{k}{z}\rv{k}.
\end{align*}
\enota
\breml{R48lr}
Under the assumption of \rnota{N.pn}, for every choice of \(k\in\mn{0}{\kappa}\) and \(z\in\C\), we have obviously \(\pnca{k}{z}=\ek{\pnaa{k}{\ko z}}^\ad\) and \(\pnda{k}{z}=\ek{\pnba{k}{\ko z}}^\ad\).
\erem
\bleml{R1704}
Let \(\kappa\in\NOinf\) and let \(\seqs{2\kappa}\in\Hgq{2\kappa}\) with \tsHp{} \(\sHp{2\kappa}\) and \sabcd{} \(\abcd{\kappa}\).
For all \(k\in\mn{0}{\kappa}\), then
\begin{align*}
\pna{k}\sqrt{\hp{2k}}&=\pa{k},&
\pnb{k}\sqrt{\hp{2k}}&=\pb{k},&
\sqrt{\hp{2k}}\pnc{k}&=\pc{k},&
\sqrt{\hp{2k}}\pnd{k}&=\pd{k}.
\end{align*}
\elem
\bproof
In view of \rnota{N.pn} and \rrem{N9.3}, the assertion follows from \eqref{PbR}, \eqref{PdR}, and \rlem{L0952}.
\eproof
An important step in the approach due to Kovalishina is now to write the matrices introduced in \rlem{L1737} in terms of the matrix polynomials from \rnota{N.pn}:
\bpropnl{\cite{MR703593}}{P1550}
Let \(n\in\NO\) and let \(\seqs{2n}\in\Hgq{2n}\).
For all \(z\in\C\), then
\begin{align*}
\rRa{n}{z}&=\iu\rk{\ko z-z}\sum_{k=0}^n\ek*{\pnda{k}{z}}^\ad\pnda{k}{z},&
\rSa{n}{z}&=\iu\Iq-\iu\rk{\ko z-z}\sum_{k=0}^n\ek*{\pnda{k}{z}}^\ad\pnca{k}{z},
\end{align*}
and
\[
\rTa{n}{z}
=\iu\rk{\ko z-z}\sum_{k=0}^n\ek*{\pnca{k}{z}}^\ad\pnca{k}{z}.
\]
\eprop
Note that detailed proofs of \rprop{P1550} are given in \cite{Dyu04} and \zitaa{Tau14}{\cbem{10.16}{280}}.
As a consequence of \rprop{P1550} and \rrem{R48lr} we will express the parameters of the matrix ball occurring in \rthm{T1546} in terms of the matrix polynomials from \rnota{N.pn}.
\bcorl{C1645}
Let \(n\in\NO\) and let \(\seqs{2n}\in\Hgq{2n}\).
For all \(w\in\pip\), then
\begin{align*}
\rCa{n}{w}&=\rk*{\iu\rk{\ko w-w}\sum_{k=0}^n\ek*{\pnda{k}{w}}^\ad\pnda{k}{w}}^\inv\rk*{\iu\Iq-\iu\rk{\ko w-w}\sum_{k=0}^n\ek*{\pnda{k}{w}}^\ad\pnca{k}{w}},\\
\rCa{n}{w}&=\rk*{\iu\Iq-\iu\rk{\ko w-w}\sum_{k=0}^n\pnaa{k}{w}\ek*{\pnba{k}{w}}^\ad}\rk*{\iu\rk{\ko w-w}\sum_{k=0}^n\pnba{k}{w}\ek*{\pnba{k}{w}}^\ad}^\inv
\end{align*}
and
\begin{align*}
\rga{n}{w}&=\rk*{\iu\rk{\ko w-w}\sum_{k=0}^n\ek*{\pnda{k}{w}}^\ad\pnda{k}{w}}^\inv,&
\rda{n}{w}&=\rk*{\iu\rk{\ko w-w}\sum_{k=0}^n\pnba{k}{w}\ek*{\pnba{k}{w}}^\ad}^\inv.
\end{align*}
\ecor
A proof of \rcor{C1645} can be easily obtained using \rlem{L1737} and \rremss{R9.10}{R48lr}.
We omit the details.
The final considerations are aimed at viewing the above investigations from the perspective of \rsec{Cha13}.
\bleml{L0824}
Let \(\kappa\in\Ninf\), let \(\seqska\in\Hgeqka\) with \tsHp{} \(\sHp{\kappa}\) and \sabcd{} \(\abcd{\ev{\kappa}}\), and let \(n\in\NO\) be such that \(2n+1\leq\kappa\).
Then \(\det\hp{2n}\neq0\) and, for each \(z\in\CR\), moreover
\begin{align}
\PAa{n}{z}\hp{2n}^\inv\ek*{\pba{n}{z}}^\ad&=\paa{n}{z}\hp{2n}^\inv\ek*{\pba{n+1}{z}}^\ad-\paa{n+1}{z}\hp{2n}^\inv\ek*{\pba{n}{z}}^\ad,\label{K1}\\
\PBa{n}{z}\hp{2n}^\inv\ek*{\pba{n}{z}}^\ad&=\pba{n}{z}\hp{2n}^\inv\ek*{\pba{n+1}{z}}^\ad-\pba{n+1}{z}\hp{2n}^\inv\ek*{\pba{n}{z}}^\ad,\notag\\
\ek*{\pda{n}{z}}^\ad\hp{2n}^\inv\PCa{n}{z}&=\ek*{\pda{n+1}{z}}^\ad\hp{2n}^\inv\pca{n}{z}-\ek*{\pda{n}{z}}^\ad\hp{2n}^\inv\pca{n+1}{z},\notag\\
\ek*{\pda{n}{z}}^\ad\hp{2n}^\inv\PDa{n}{z}&=\ek*{\pda{n+1}{z}}^\ad\hp{2n}^\inv\pda{n}{z}-\ek*{\pda{n}{z}}^\ad\hp{2n}^\inv\pda{n+1}{z}.\notag
\end{align}
\elem
\bproof
First observe that \rrem{Schr2.7} yields \(\seqska\in\Hggeqka\).
Clearly, we have \(\seqs{2n}\in\Hgq{2n}\).
From \rprop{P1644} we see then \(\hp{2n}\in\Cgq\).
In particular, \(\hp{2n}^\ad=\hp{2n}\) and \(\det\hp{2n}\neq0\).
Taking into account \rremp{R52SK}{R52SK.a} and \(\hp{2n}^\ad=\hp{2n}\), we can infer \(\det\pba{n}{z}\neq0\) and \(\ek{\XFa{2n+1}{z}}^\ad=\ek{\pba{n+1}{z}}^\ad\ek{\pba{n}{z}}^\invad\hp{2n}\).
Consequently, \(\ek{\XFa{2n+1}{z}}^\ad\hp{2n}^\inv\ek*{\pba{n}{z}}^\ad=\ek{\pba{n+1}{z}}^\ad\).
Regarding additionally \rnota{N1326} and \(\det\hp{2n}\neq0\), we can conclude then
\[\begin{split}
\PAa{n}{z}\hp{2n}^\inv\ek*{\pba{n}{z}}^\ad
&=\rk*{\paa{n}{z}\hp{2n}^\inv\ek*{\XFa{2n+1}{z}}^\ad-\paa{n+1}{z}}\hp{2n}^\inv\ek*{\pba{n}{z}}^\ad\\
&=\paa{n}{z}\hp{2n}^\inv\ek*{\pba{n+1}{z}}^\ad-\paa{n+1}{z}\hp{2n}^\inv\ek*{\pba{n}{z}}^\ad
\end{split}\]
and, analogously, \(\PBa{n}{z}\hp{2n}^\inv\ek{\pba{n}{z}}^\ad=\pba{n}{z}\hp{2n}^\inv\ek{\pba{n+1}{z}}^\ad-\pba{n+1}{z}\hp{2n}^\inv\ek{\pba{n}{z}}^\ad\).
Using \rpropp{K17-5}{K17-5.a} instead of \rremp{R52SK}{R52SK.a}, the remaining equations can be obtained similarly.
\eproof
\bleml{L2038}
Let \(\kappa\in\minf{3}\cup\set{\infi}\), let \(\seqska\in\Hgeqka\) with \tsHp{} \(\sHp{\kappa}\) and \sabcd{} \(\abcd{\ev{\kappa}}\), and let \(n\in\N\) be such that \(2n+1\leq\kappa\).
For all \(z\in\CR\), then
\begin{align*}
\PAa{n}{z}\hp{2n}^\inv\ek*{\pba{n}{z}}^\ad&=\rk{\ko z-z}\paa{n}{z}\hp{2n}^\inv\ek*{\pba{n}{z}}^\ad+\PAa{n-1}{z}\hp{2n-2}^\inv\ek*{\pba{n-1}{z}}^\ad,\\
\PBa{n}{z}\hp{2n}^\inv\ek*{\pba{n}{z}}^\ad&=\rk{\ko z-z}\pba{n}{z}\hp{2n}^\inv\ek*{\pba{n}{z}}^\ad+\PBa{n-1}{z}\hp{2n-2}^\inv\ek*{\pba{n-1}{z}}^\ad,\\
\ek*{\pda{n}{z}}^\ad\hp{2n}^\inv\PCa{n}{z}&=\rk{\ko z-z}\ek*{\pda{n}{z}}^\ad\hp{2n}^\inv\pca{n}{z}+\ek*{\pda{n-1}{z}}^\ad\hp{2n-2}^\inv\PCa{n-1}{z},\\
\ek*{\pda{n}{z}}^\ad\hp{2n}^\inv\PDa{n}{z}&=\rk{\ko z-z}\ek*{\pda{n}{z}}^\ad\hp{2n}^\inv\pda{n}{z}+\ek*{\pda{n-1}{z}}^\ad\hp{2n-2}^\inv\PDa{n-1}{z}.
\end{align*}
\elem
\bproof
First observe that \rrem{Schr2.7} yields \(\seqska\in\Hggeqka\).
\rrem{R1019} then shows \(\hp{j}\in\CHq\) for all \(j\in\mn{0}{\kappa}\).
Clearly, we have \(\seqs{2n}\in\Hgq{2n}\).
From \rprop{P1644} we see then \(\hp{2n},\hp{2n-2}\in\Cgq\).
In particular, \(\det\hp{2n}\neq0\) and \(\det\hp{2n-2}\neq0\).
Let \(z\in\CR\).
Taking into account \rdefn{K19} and \(\hp{2n+1},\hp{2n},\hp{2n-2}\in\CHq\) as well as \(\det\hp{2n}\neq0\) and \(\det\hp{2n-2}\neq0\), we can conclude \(\paa{n+1}{z}=\paa{n}{z}\rk{z\Iq-\hp{2n}^\inv\hp{2n+1}}-\paa{n-1}{z}\hp{2n-2}^\inv\hp{2n}\) and \(\ek{\pba{n+1}{z}}^\ad=\rk{\ko z\Iq-\hp{2n+1}\hp{2n}^\inv}\ek{\pba{n}{z}}^\ad-\hp{2n}\hp{2n-2}^\inv\ek{\pba{n-1}{z}}^\ad\).
Consequently,
\[
\paa{n+1}{z}\hp{2n}^\inv\ek*{\pba{n}{z}}^\ad
=z\paa{n}{z}\hp{2n}^\inv\ek*{\pba{n}{z}}^\ad-\paa{n}{z}\hp{2n}^\inv\hp{2n+1}\hp{2n}^\inv\ek*{\pba{n}{z}}^\ad-\paa{n-1}{z}\hp{2n-2}^\inv\ek*{\pba{n}{z}}^\ad
\]
and
\[
\paa{n}{z}\hp{2n}^\inv\ek*{\pba{n+1}{z}}^\ad
=\ko z\paa{n}{z}\hp{2n}^\inv\ek*{\pba{n}{z}}^\ad-\paa{n}{z}\hp{2n}^\inv\hp{2n+1}\hp{2n}^\inv\ek*{\pba{n}{z}}^\ad-\paa{n}{z}\hp{2n-2}^\inv\ek*{\pba{n-1}{z}}^\ad
\]
follow.
Using \rlem{L0824}, we can infer that \eqref{K1} and
\[
\PAa{n-1}{z}\hp{2n-2}^\inv\ek*{\pba{n-1}{z}}^\ad
=\paa{n-1}{z}\hp{2n-2}^\inv\ek*{\pba{n}{z}}^\ad-\paa{n}{z}\hp{2n-2}^\inv\ek*{\pba{n-1}{z}}^\ad
\]
hold true.
Thus, we obtain
\[\begin{split}
\PAa{n}{z}\hp{2n}^\inv\ek*{\pba{n}{z}}^\ad
&=\rk*{\ko z\paa{n}{z}\hp{2n}^\inv\ek*{\pba{n}{z}}^\ad-\paa{n}{z}\hp{2n}^\inv\hp{2n+1}\hp{2n}^\inv\ek*{\pba{n}{z}}^\ad-\paa{n}{z}\hp{2n-2}^\inv\ek*{\pba{n-1}{z}}^\ad}\\
&\;-\rk*{z\paa{n}{z}\hp{2n}^\inv\ek*{\pba{n}{z}}^\ad-\paa{n}{z}\hp{2n}^\inv\hp{2n+1}\hp{2n}^\inv\ek*{\pba{n}{z}}^\ad-\paa{n-1}{z}\hp{2n-2}^\inv\ek*{\pba{n}{z}}^\ad}\\
&=\rk{\ko z-z}\paa{n}{z}\hp{2n}^\inv\ek*{\pba{n}{z}}^\ad-\paa{n}{z}\hp{2n-2}^\inv\ek*{\pba{n-1}{z}}^\ad+\paa{n-1}{z}\hp{2n-2}^\inv\ek*{\pba{n}{z}}^\ad\\
&=\rk{\ko z-z}\paa{n}{z}\hp{2n}^\inv\ek*{\pba{n}{z}}^\ad+\PAa{n-1}{z}\hp{2n-2}^\inv\ek*{\pba{n-1}{z}}^\ad.
\end{split}\]
In view of \rdefn{K19} and \rlem{L0824} the remaining assertions can be obtained similarly.
\eproof
\bleml{L0609}
Let \(\kappa\in\Ninf\) and let \(\seqska\in\Hgeqka\) with \tsHp{} \(\sHp{\kappa}\) and \sabcd{} \(\abcd{\ev{\kappa}}\).
For all \(n\in\NO\) such that \(2n+1\leq\kappa\) and all \(z\in\CR\), then
\begin{align*}
\PAa{n}{z}\hp{2n}^\inv\ek*{\pba{n}{z}}^\ad&=-\Iq+\rk{\ko z-z}\sum_{k=0}^n\paa{k}{z}\hp{2k}^\inv\ek*{\pba{k}{z}}^\ad,\\
\PBa{n}{z}\hp{2n}^\inv\ek*{\pba{n}{z}}^\ad&=\rk{\ko z-z}\sum_{k=0}^n\pba{k}{z}\hp{2k}^\inv\ek*{\pba{k}{z}}^\ad,\\
\ek*{\pda{n}{z}}^\ad\hp{2n}^\inv\PCa{n}{z}&=-\Iq+\rk{\ko z-z}\sum_{k=0}^n\ek*{\pda{k}{z}}^\ad\hp{2k}^\inv\pca{k}{z},
\intertext{and}
\ek*{\pda{n}{z}}^\ad\hp{2n}^\inv\PDa{n}{z}&=\rk{\ko z-z}\sum_{k=0}^n\ek*{\pda{k}{z}}^\ad\hp{2k}^\inv\pda{k}{z}.
\end{align*}
\elem
\bproof
First observe that \rrem{Schr2.7} yields \(\seqska\in\Hggeqka\).
Clearly, we have \(\seqs{0}\in\Hgq{0}\).
From \rprop{P1644} we get then \(\hp{0}\in\Cgq\).
In particular, \(\det\hp{0}\neq0\).
Let \(z\in\CR\).
By virtue of \eqref{ABCD0}, we can conclude \(\PAa{0}{z}=-\hp{0}\), \(\PBa{0}{z}=\rk{\ko z-z}\Iq\), \(\PCa{0}{z}=-\hp{0}\), and \(\PDa{0}{z}=\rk{\ko z-z}\Iq\).
Taking additionally into account \eqref{K19.0}, then \(\PAa{0}{z}\hp{0}^\inv\ek{\pba{0}{z}}^\ad=-\Iq+\rk{\ko z-z}\paa{0}{z}\hp{0}^\inv\ek{\pba{0}{z}}^\ad\) and \(\PBa{0}{z}\hp{0}^\inv\ek{\pba{0}{z}}^\ad=\rk{\ko z-z}\pba{0}{z}\hp{0}^\inv\ek{\pba{0}{z}}^\ad\) as well as \(\ek{\pda{0}{z}}^\ad\hp{0}^\inv\PCa{0}{z}=-\Iq+\rk{\ko z-z}\ek{\pda{0}{z}}^\ad\hp{0}^\inv\pca{0}{z}\) and \(\ek{\pda{0}{z}}^\ad\hp{0}^\inv\PDa{0}{z}=\rk{\ko z-z}\ek{\pda{0}{z}}^\ad\hp{0}^\inv\pda{0}{z}\)
follow.
Combining these equations with \rlem{L2038} completes the proof.
\eproof
\bpropl{L0926}
Let \(\kappa\in\Ninf\), let \(\seqska\in\Hgeqka\) with \tsHp{} \(\sHp{\kappa}\) and \sabcd{} \(\abcd{\ev{\kappa}}\), let \(n\in\NO\) be such that \(2n+1\leq\kappa\), and let \(z\in\CR\).
Then the matrices \(\lpsdrka{2n+1}{z}\) and \(\rpsdrka{2n+1}{z}\) are invertible and fulfill
\begin{align}
\ek*{\lpsdrka{2n+1}{z}}^\inv&=\frac{\abs{z-\ko z}}{z-\ko z}\rk*{\ek*{\pda{n}{z}}^\ad\hp{2n}^\inv\pda{n+1}{z}-\ek*{\pda{n+1}{z}}^\ad\hp{2n}^\inv\pda{n}{z}}\label{L0926.1}
\intertext{and}
\ek*{\rpsdrka{2n+1}{z}}^\inv&=\frac{\abs{z-\ko z}}{z-\ko z}\rk*{\pba{n+1}{z}\hp{2n}^\inv\ek*{\pba{n}{z}}^\ad-\pba{n}{z}\hp{2n}^\inv\ek*{\pba{n+1}{z}}^\ad}\label{L0926.2}.
\end{align}
\eprop
\bproof
First observe that \rrem{Schr2.7} yields \(\seqska\in\Hggeqka\).
Clearly, we have \(\seqs{2n}\in\Hgq{2n}\).
From \rprop{P1644} we see then \(\hp{2n}\in\Cgq\).
In particular, \(\hp{2n}^\ad=\hp{2n}\) and \(\det\hp{2n}\neq0\).
In view of \(\det\hp{2n}\neq0\) and \eqref{SK5-11}, we can infer from \rprop{L1347} that \(\det\rim\XFa{2n+1}{z}\neq0\).
Regarding additionally \rnotap{N2150M}{N2150M.b} and \(\det\hp{2n}\neq0\), we can then conclude \(\det\lpsdrka{2n+1}{z}\neq0\) and
\[
\ek*{\lpsdrka{2n+1}{z}}^\inv
=\abs{z-\ko z}\ek*{\pda{n}{z}}^\ad\hp{2n}^\inv\ek*{\rk{\rim z}^\inv\rim\XFa{2n+1}{z}}\hp{2n}^\inv\pda{n}{z}.
\]
Taking into account \rpropp{K17-5}{K17-5.a} and \(\hp{2n}^\ad=\hp{2n}\), we can infer \(\det\pda{n}{z}\neq0\) as well as \eqref{GR2} and \(\ek{\XFa{2n+1}{z}}^\ad=\hp{2n}\ek{\pda{n}{z}}^\invad\ek{\pda{n+1}{z}}^\ad\).
Hence, using additionally \(\rim\XFa{2n+1}{z}=\frac{1}{2\iu}\rk{\XFa{2n+1}{z}-\ek{\XFa{2n+1}{z}}^\ad}\), we obtain \eqref{L0926.1}.
Applying \rremp{R52SK}{R52SK.a} instead of \rpropp{K17-5}{K17-5.a}, inequality \(\det\rpsdrka{2n+1}{z}\neq0\) and equation \eqref{L0926.2} can be obtained similarly.
\eproof
\bpropl{L0627}
Let \(\kappa\in\Ninf\), let \(\seqska\in\Hgeqka\) with \tsHp{} \(\sHp{\kappa}\) and \sabcd{} \(\abcd{\ev{\kappa}}\), let \(n\in\NO\) be such that \(2n+1\leq\kappa\), and let \(z\in\CR\).
Then the matrices \(\lpsdrka{2n+1}{z}\) and \(\rpsdrka{2n+1}{z}\) are invertible and fulfill
\begin{align*}
\ek*{\lpsdrka{2n+1}{z}}^\inv&=\abs{z-\ko z}\sum_{k=0}^n\ek*{\pda{k}{z}}^\ad\hp{2k}^\inv\pda{k}{z}&
&\text{and}&
\ek*{\rpsdrka{2n+1}{z}}^\inv&=\abs{z-\ko z}\sum_{k=0}^n\pba{k}{z}\hp{2k}^\inv\ek*{\pba{k}{z}}^\ad.
\end{align*}
\eprop
\bproof
Using \rprop{L0926} and \rlemss{L0824}{L0609}, we can infer \(\det\lpsdrka{2n+1}{z}\neq0\) and
\[\begin{split}
\ek*{\lpsdrka{2n+1}{z}}^\inv
&=\frac{\abs{z-\ko z}}{z-\ko z}\rk*{\ek*{\pda{n}{z}}^\ad\hp{2n}^\inv\pda{n+1}{z}-\ek*{\pda{n+1}{z}}^\ad\hp{2n}^\inv\pda{n}{z}}\\
&=-\frac{\abs{z-\ko z}}{z-\ko z}\ek*{\pda{n}{z}}^\ad\hp{2n}^\inv\PDa{n}{z}
=\abs{z-\ko z}\sum_{k=0}^n\ek*{\pda{k}{z}}^\ad\hp{2k}^\inv\pda{k}{z}.
\end{split}\]
The remaining assertions can be obtained similarly.
\eproof
\bpropl{P1229}
Let \(n\in\NO\), let \(\seqs{2n}\in\Hgq{2n}\) with \tsHp{} \(\sHp{2n}\) and \sabcd{} \(\abcd{n}\), and let \(z\in\CR\).
Then the matrices \(\lpsdrka{2n}{z}\) and \(\rpsdrka{2n}{z}\) are invertible and fulfill
\begin{align*}
\ek*{\lpsdrka{2n}{z}}^\inv&=\abs{z-\ko z}\sum_{k=0}^n\ek*{\pda{k}{z}}^\ad\hp{2k}^\inv\pda{k}{z}&
&\text{and}&
\ek*{\rpsdrka{2n}{z}}^\inv&=\abs{z-\ko z}\sum_{k=0}^n\pba{k}{z}\hp{2k}^\inv\ek*{\pba{k}{z}}^\ad.
\end{align*}
\eprop
\bproof
First observe that \rrem{Schr2.7} yields \(\seqs{2n}\in\Hggeq{2n}\).
\rprop{L1237-A} then shows that the \tnatext{} \(\seqsh{2n+1}\) of \(\seqs{2n}\) belongs to \(\Hggeq{2n+1}\).
Let \(\shHp{2n+1}\) be the \tsHpo{\(\seqsh{2n+1}\)}.
Then \rrem{R26-SK1} yields \(\hhp{j}=\hp{j}\) for all \(j\in\mn{0}{2n}\).
From \rprop{P1644} we see \(\hp{2k}\in\Cgq\) for all \(k\in\mn{0}{n}\).
Hence, we can infer \(\hhp{2k}\in\Cgq\) for all \(k\in\mn{0}{n}\).
Furthermore, \rrem{R1019} provides \(\hhp{j}^\ad=\hhp{j}\) for all \(j\in\mn{0}{2n+1}\).
Using \rprop{P1644}, then one can check that \(\seqsh{2n+1}\) belongs to \(\Hgeq{2n+1}\).
Thus, we can apply \rprop{L0627} to the sequence \(\seqsh{2n+1}\) to obtain \(\det\lpsdrkha{2n+1}{z}\neq0\) and \(\ek{\lpsdrkha{2n+1}{z}}^\inv=\abs{z-\ko z}\sum_{k=0}^{n}\ek{\pdha{k}{z}}^\ad\hhp{2k}^\inv\pdha{k}{z}\), where the matrix-valued function \(\lpsdrkh{2n+1}\) is built according to \rnotap{N2150M}{N2150M.b} from the sequence \(\seqsh{2n+1}\) and \(\habcd{n+1}\) is the \sabcdo{\(\seqsh{2n+1}\)}.
According to \rdefn{D.nat-ext}, we have \(\shu{j}=\su{j}\) for all \(j\in\mn{0}{2n}\).
Hence, \rrem{BK8.1} yields \(\pdh{k}=\pd{k}\) for all \(k\in\mn{0}{n}\) and \rrem{LRC-tru} shows \(\lpsdrkha{2n}{z}=\lpsdrka{2n}{z}\).
Furthermore, \rlem{L0844} provides \(\lpsdrkha{2n}{z}=\lpsdrkha{2n+1}{z}\).
Taking additionally into account that \(\hhp{j}=\hp{j}\) holds true for all \(j\in\mn{0}{2n}\), we can conclude then that \(\lpsdrka{2n}{z}\) is invertible and fulfills \(\ek{\lpsdrka{2n}{z}}^\inv=\abs{z-\ko z}\sum_{k=0}^{n}\ek{\pda{k}{z}}^\ad\hp{2k}^\inv\pda{k}{z}\).
The remaining assertions can be obtained similarly.
\eproof
\bleml{L0723}
Let \(\kappa\in\Ninf\), let \(\seqska\in\Hgeqka\) with \tsHp{} \(\sHp{\kappa}\) and \sabcd{} \(\abcd{\ev{\kappa}}\), let \(n\in\NO\) be such that \(2n+1\leq\kappa\), and let \(z\in\CR\).
Then
\begin{align*}
\mpka{2n+1}{z}&=-\rk*{\rk{z-\ko z}\sum_{k=0}^n\ek*{\pda{k}{z}}^\ad\hp{2k}^\inv\pda{k}{z}}^\inv\rk*{\Iq+\rk{z-\ko z}\sum_{k=0}^n\ek*{\pda{k}{z}}^\ad\hp{2k}^\inv\pca{k}{z}}
\intertext{and}
\mpka{2n+1}{z}&=-\rk*{\Iq+\rk{z-\ko z}\sum_{k=0}^n\paa{k}{z}\hp{2k}^\inv\ek*{\pba{k}{z}}^\ad}\rk*{\rk{z-\ko z}\sum_{k=0}^n\pba{k}{z}\hp{2k}^\inv\ek*{\pba{k}{z}}^\ad}^\inv.
\end{align*}
\elem
\bproof
First observe that \rrem{Schr2.7} yields \(\seqska\in\Hggeqka\).
Therefore, we can apply \rlem{L0819} to infer \(\det\PDa{n}{z}\neq0\) and \(\mpka{2n+1}{z}=-\ek{\PDa{n}{z}}^\inv\PCa{n}{z}\).
Clearly, we have \(\seqs{2n}\in\Hgq{2n}\).
From \rprop{P1644} we see then \(\hp{2k}\in\Cgq\), and, in particular, \(\det\hp{2k}\neq0\) for all \(k\in\mn{0}{n}\).
Furthermore, \rlem{BK8.6} yields \(\det\pda{n}{z}\neq0\).
Taking additionally into account \rlem{L0609}, we can thus infer
\[\begin{split}
\mpka{2n+1}{z}
&=-\ek*{\PDa{n}{z}}^\inv\PCa{n}{z}
=-\rk*{-\ek*{\pda{n}{z}}^\ad\hp{2n}^\inv\PDa{n}{z}}^\inv\rk*{-\ek*{\pda{n}{z}}^\ad\hp{2n}^\inv\PCa{n}{z}}\\
&=-\rk*{\rk{z-\ko z}\sum_{k=0}^n\ek*{\pda{k}{z}}^\ad\hp{2k}^\inv\pda{k}{z}}^\inv\rk*{\Iq+\rk{z-\ko z}\sum_{k=0}^n\ek*{\pda{k}{z}}^\ad\hp{2k}^\inv\pca{k}{z}}.
\end{split}\]
The remaining identity can be obtained similarly.
\eproof
\bpropl{P1157}
Let \(n\in\NO\), let \(\seqs{2n}\in\Hgq{2n}\) with \tsHp{} \(\sHp{2n}\) and \sabcd{} \(\abcd{n}\), and let \(z\in\CR\).
Then
\begin{align*}
\mpka{2n}{z}&=-\rk*{\rk{z-\ko z}\sum_{k=0}^n\ek*{\pda{k}{z}}^\ad\hp{2k}^\inv\pda{k}{z}}^\inv\rk*{\Iq+\rk{z-\ko z}\sum_{k=0}^n\ek*{\pda{k}{z}}^\ad\hp{2k}^\inv\pca{k}{z}}
\intertext{and}
\mpka{2n}{z}&=-\rk*{\Iq+\rk{z-\ko z}\sum_{k=0}^n\paa{k}{z}\hp{2k}^\inv\ek*{\pba{k}{z}}^\ad}\rk*{\rk{z-\ko z}\sum_{k=0}^n\pba{k}{z}\hp{2k}^\inv\ek*{\pba{k}{z}}^\ad}^\inv.
\end{align*}
\eprop
\bproof
First observe that \rrem{Schr2.7} yields \(\seqs{2n}\in\Hggeq{2n}\).
\rprop{L1237-A} then shows that the \tnatext{} \(\seqsh{2n+1}\) of \(\seqs{2n}\) belongs to \(\Hggeq{2n+1}\).
Let \(\shHp{2n+1}\) be the \tsHpo{\(\seqsh{2n+1}\)}.
Then \rrem{R26-SK1} yields \(\hhp{j}=\hp{j}\) for all \(j\in\mn{0}{2n}\).
As in the proof of \rprop{P1229} we can infer \(\seqsh{2n+1}\in\Hgeq{2n+1}\).
Thus, we can apply \rlem{L0723} to the sequence \(\seqsh{2n+1}\) to obtain \(\mphka{2n+1}{z}=-\rk{\rk{z-\ko z}\sum_{k=0}^n\ek{\pdha{k}{z}}^\ad\hhp{2k}^\inv\pdha{k}{z}}^\inv\rk{\Iq+{\rk{z-\ko z}}\sum_{k=0}^n\ek{\pdha{k}{z}}^\ad\hhp{2k}^\inv\pcha{k}{z}}\), where the matrix-valued function \(\mphk{2n+1}\) is built according to \rnotap{CB15}{CB15.b} from the sequence \(\seqsh{2n+1}\) and \(\habcd{n+1}\) is the \sabcdo{\(\seqsh{2n+1}\)}.
According to \rdefn{D.nat-ext}, we have \(\shu{j}=\su{j}\) for all \(j\in\mn{0}{2n}\).
Hence, \rrem{BK8.1} yields \(\pch{k}=\pc{k}\) and \(\pdh{k}=\pd{k}\) for all \(k\in\mn{0}{n}\) and \rrem{ABM-tru} shows \(\mphka{2n}{z}=\mpka{2n}{z}\).
Furthermore, \rlem{L3N10} provides \(\mphka{2n}{z}=\mphka{2n+1}{z}\).
Taking additionally into account that \(\hhp{j}=\hp{j}\) holds true for all \(j\in\mn{0}{2n}\), we can conclude then \(\mpka{2n}{z}=-\rk{\rk{z-\ko z}\sum_{k=0}^n\ek{\pda{k}{z}}^\ad\hp{2k}^\inv\pda{k}{z}}^\inv\rk{\Iq+\rk{z-\ko z}\sum_{k=0}^n\ek{\pda{k}{z}}^\ad\hp{2k}^\inv\pca{k}{z}}\).
The remaining identity can be obtained similarly.
\eproof
Now we are able to see that the parameters of the matrix balls stated in \rthmss{T1546}{F49} indeed coincide:
\bpropl{P1712}
Let \(n\in\NO\) and let \(\seqs{2n}\in\Hgq{2n}\).
For all \(w\in\pip\), then
\begin{align*}
\mpka{2n}{w}&=\rCa{n}{w},&
\lpsdrka{2n}{w}&=\rga{n}{w},&
&\text{and}&
\rpsdrka{2n}{w}&=\rda{n}{w}.
\end{align*}
\eprop
\bproof
For all \(w\in\pip\) we have \(\abs{w-\ko w}=\abs{2\iu\rim w}=2\rim w=\iu\rk{\ko w-w}\).
Combining \rpropss{P1157}{P1229} with \rcor{C1645} and \rlem{R1704} completes the proof.
\eproof
Star formation in galaxies is regulated by a wealth of complex
physical mechanisms, such as the formation, growth and merger of dark
matter halos, and the cooling and heating of baryon gas by radiative
and feedback processes. Characterising the star formation history
of observed galaxies represents, therefore, an important step in
understanding how galaxies form and evolve. Despite being the most
abundant type of galaxy in the universe, dwarf (low-mass) galaxies
remain elusive as far as their formation and evolution are concerned.
Their observed blue colour is usually taken as an indication that these
galaxies are dominated by young stellar populations
\citep[e.g.][]{Kauffmannb2003}. By studying stacked
spectra from the Sloan Digital Sky Survey (SDSS), \cite{Heavens2004} found a very
different formation history between low- and high-mass galaxies.
While galaxies with stellar masses smaller than $10^{10}$ M$_{\odot}$
could be well represented by a flat star formation rate over the past
3 Gyr and a declining rate towards earlier epochs,
more massive galaxies generally form most of their stellar masses
earlier. Recent investigations about low-mass galaxies, however,
challenge the stereotype that low-mass galaxies are all young.
For local group dwarf galaxies in which individual stars can be
resolved, the star formation history can be obtained
through `archaeological' age reconstruction
\citep[e.g.][]{Mateo1998,Dolphin2005,Aloisi2007,
Tolstoy2009,Weisz2011,Annibali2013,Weisz2014a,Sacchi2016,Albers2019}.
The analysis based on the colour-magnitude diagram of resolved stellar
populations in general suggests that most stars in dwarf galaxies
were formed more than 5 Gyr ago. For example, \cite{Weisz2011}
analysed the star formation histories (SFHs) of 60 nearby
($D < 4$ Mpc) dwarf galaxies and found that these galaxies
on average have formed half of their stars before $z\sim 2$,
regardless of their morphological types.
The existence of such an old stellar population is supported by
other types of observations. Using the CANDELS survey,
\cite{vanderWell2011} found a population of extreme emission
line galaxies at redshift $z\sim1.7$, with a number density
so high that they can contribute a significant fraction of the
total stellar mass contained in present-day dwarf galaxies
with masses between $10^8$ and $10^9$ M$_{\odot}$.
These authors suggested that most of the stellar mass of
these dwarf galaxies should have formed before $z\sim 1$.
\cite{Kauffmann2014} used the 4000 \AA\ break and H$\delta_A$
indices in combination with SFR/M$_*$ derived from emission line
measurements to constrain the SFHs of a sample of SDSS galaxies with
stellar masses in the range $10^8-10^{10}{\rm M}_{\odot}$,
and concluded that galaxies with stellar masses smaller than
$10^9{\rm M}_{\odot}$ are not all young but have half-mass
formation times ranging from 1 to 10 Gyr. Similar conclusions
have also been reached by spectral energy
distribution (SED) modelling of a sample of blue
compact dwarf galaxies with masses between $10^7$ and
$10^9 {\rm M}_{\odot}$ \citep{Janowiecki2017} and a sample of HII galaxies
\citep{Telles2018}. Stars in most of these galaxies are best described
by two or more stellar populations, with the oldest population
often dominating the stellar mass.
In agreement with those observations, the empirical model presented
in \citet{Lu2014} and \citet{Lu2015} predicts that dwarf galaxies
in small dark matter halos ($M_h<10^{11}h^{-1}{\rm M_{\odot}}$)
had a strong episode of star formation at $z>2$, producing
significant amounts of old stars in them. However, such an
old population is not predicted in many other models \citep{Lim2017}.
SFHs of low-mass galaxies have also been investigated using hydrodynamical
simulations. For example, \cite{Digby2019} analysed the SFHs of a number of
field and satellite dwarf galaxies in the APOSTLE and Auriga
simulations, and found that the predicted mass fractions of stars of
different ages are quite different from those observed in real surveys.
\cite{Kimmel2019} analysed about 500 dwarf galaxies in the FIRE-2
zoom-in simulations and found that the cumulative SFHs of the simulated
galaxies do not match those observed.
Clearly, accurate measurements of SFHs of low-mass galaxies can
provide important constraints on galaxy formation models.
However, the investigations so far have their limitations.
For example, the number of dwarf galaxies for which stars can be
resolved is very limited \citep{Weisz2011}, and so it is difficult to
draw reliable statistical conclusions. Methods based on
SED fitting \citep[e.g.][]{Janowiecki2017, Telles2018}
can make use of galaxy photometry over a wide wavelength
coverage, but they may lose spectral features that contain
information about the SFH. This shortcoming may be remedied
by methods using galaxy spectra, but high SNR spectra are
needed to probe the faint old stellar population.
The stacking of spectra of individual galaxies has been
used to achieve a high enough SNR for such analysis
\citep[e.g.][]{Kauffmann2014}, but such stacking mixes
the signals of individual galaxies.
Here we contribute to this effort
by analysing a sample of low-mass galaxies selected from the Mapping
Nearby Galaxies at Apache Point Observatory (MaNGA; \citealt{Bundy2015}).
With its integral-field spectroscopy (IFS), MaNGA provides a large
number of integral-field unit (IFU) spectra of individual galaxies.
These not only allow us to obtain high signal-to-noise composite spectra
for individual galaxies, which is essential for constraining
the SFH in detail, but also to study spatial variations
of the SFH within individual galaxies. The large sample of low-mass
galaxies also makes it possible to study how the SFH depends on
other galaxy properties. In addition, MaNGA is designed to
overlap as much as possible with other surveys, which allows
us to make use of information from those observations, thereby
obtaining additional constraints. Our analysis is based on
our newly developed stellar population synthesis (SPS) code,
Bayesian Inference for Galaxy Spectra (BIGS), which has been successfully
used to constrain the IMF of MaNGA early-type galaxies \citep{Zhou2019}.
BIGS fits the full composite spectrum of a galaxy and constrains
its SFH along with other properties of its stellar population.
The Bayesian approach provides a statistically rigorous way to explore
potential degeneracies in model parameters, and to distinguish between different
models through the Bayesian evidence. Moreover, the flexibility of BIGS also
allows us to incorporate new observational constraints into our inferences.
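The Bayesian-evidence model comparison mentioned above can be made concrete with a toy sketch. The models, data, priors, and error bar below are all invented for illustration and are not part of BIGS: two one-parameter star-formation-rate models (constant versus exponentially declining) are fitted to a synthetic declining "SFH", and the evidence of each is obtained by direct integration of likelihood times prior.

```python
import math

# Toy illustration of model selection via the Bayesian evidence
# (marginal likelihood).  All data, models and priors here are
# hypothetical -- this is NOT the actual BIGS implementation.

t = [0.0, 1.0, 2.0, 3.0, 4.0]                  # "epochs"
y = [math.exp(-ti) for ti in t]                # noise-free declining "SFH"
sigma = 0.1                                    # assumed Gaussian error

def loglike(a, model):
    """Gaussian log-likelihood up to a constant (cancels in the Bayes factor)."""
    return -0.5 * sum(((yi - model(a, ti)) / sigma) ** 2 for yi, ti in zip(y, t))

model_const = lambda a, ti: a                  # M1: constant SFR
model_decl = lambda a, ti: a * math.exp(-ti)   # M2: declining SFR

def evidence(model, a_min=0.0, a_max=2.0, n=801):
    """Z = integral of L(a)*pi(a) da with a flat prior, via the trapezoid rule."""
    da = (a_max - a_min) / (n - 1)
    prior = 1.0 / (a_max - a_min)
    vals = [math.exp(loglike(a_min + i * da, model)) * prior for i in range(n)]
    return da * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

bayes_factor = evidence(model_decl) / evidence(model_const)
print(f"Bayes factor (declining vs constant): {bayes_factor:.3e}")
```

With data drawn from the declining model, the Bayes factor strongly favours it; in the full analysis the same ratio of evidences is what allows competing SFH parametrisations to be ranked.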
The paper is organised as follows. In \S\ref{sec:data} we present our
data reduction process, including sample selection and spectral
stacking procedure. We then introduce the SPS model and the Bayesian
approach used to fit galaxy spectra in \S\ref{sec:analysis}. Our
main results are presented in \S\ref{sec:results}, and we discuss
some potential uncertainties in \S\ref{sec:Uncertainties}.
Finally, we summarise and discuss our results in \S\ref{sec:summary}.
Throughout this work we use a standard $\Lambda$CDM cosmology
with $\Omega_{\Lambda}=0.7$, $\Omega_{\rm M}=0.3$
and $H_0$ = 70 \kms Mpc$^{-1}$.
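Given these parameters, the distances implied by the adopted cosmology follow from integrating the Friedmann equation. A minimal sketch, independent of any cosmology library (the redshift $z=0.1$ used as an example is illustrative, near the middle of the MaNGA range):

```python
import math

# Luminosity distance in the adopted flat LambdaCDM cosmology
# (Omega_M = 0.3, Omega_Lambda = 0.7, H0 = 70 km/s/Mpc).

C_KMS, H0, OMEGA_M, OMEGA_L = 299792.458, 70.0, 0.3, 0.7

def e_of_z(z):
    """Dimensionless Hubble rate E(z) = H(z)/H0 for a flat universe."""
    return math.sqrt(OMEGA_M * (1.0 + z) ** 3 + OMEGA_L)

def luminosity_distance(z, n=1000):
    """D_L in Mpc: (1+z) * (c/H0) * int_0^z dz'/E(z'), trapezoid rule."""
    dz = z / n
    integral = 0.5 * (1.0 / e_of_z(0.0) + 1.0 / e_of_z(z))
    integral += sum(1.0 / e_of_z(i * dz) for i in range(1, n))
    return (1.0 + z) * (C_KMS / H0) * integral * dz

print(f"D_L(z=0.10) = {luminosity_distance(0.10):.1f} Mpc")
```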
\section{Data}
\label{sec:data}
\subsection{The MaNGA survey}
As one of the three core programmes in the fourth-generation Sloan Digital Sky Survey
(SDSS-IV, \citealt{Blanton2017}), MaNGA aims to collect high resolution,
spatially resolved spectra for about 10,000 nearby galaxies in the redshift
range $0.01 < z < 0.15$ \citep{Yana2016,Wake2017}. MaNGA targets are selected
from the NASA Sloan Atlas catalogue
\footnote{\label{foot:nsa}\url{ http://www.nsatlas.org/}} (NSA, \citealt{Blanton2005}),
and are chosen to cover the stellar mass range
$5\times10^8 {\rm M}_{\odot}h^{-2} \leq M_*\leq 3 \times 10^{11}
{\rm M}_{\odot}h^{-2}$. For each target, the MaNGA IFU covers a radius up
to either $1.5 R_e$ or $2.5 R_e$ ($R_e$ being the effective radius), to
construct the ``Primary'' and ``Secondary'' samples, respectively \citep{Law2015}.
MaNGA observes the selected galaxies with the two dual-channel BOSS spectrographs
\citep{smee2013} on the Sloan 2.5~m telescope
\citep{Gunn2006}, which provides simultaneous wavelength coverage
over $3600-10,300$ {\AA}, with a spectral resolution $R\sim2000$
\citep{Drory2015}. The spectrophotometry calibration of MaNGA is
described in detail in \cite{Yanb2016}, while the initial performance
is given in \cite{Yana2016}. Raw data from MaNGA are calibrated and reduced
by the Data Reduction Pipeline (DRP; \citealt{Law2016}) to produce spectra with
a relative flux calibration better than $5\%$ over more than $80\%$ of the
wavelength range \citep{Yana2016}. In addition, MaNGA provides measurements
of stellar kinematics (velocity and velocity dispersion), emission-line
properties (kinematics, fluxes, and equivalent widths), and spectral
indices for each spaxel through the MaNGA Data Analysis Pipeline \citep[DAP;][]{Westfall2019,Belfiore2019}.
\subsection{UKIDSS}
Near-infrared (NIR) photometric data are commonly used to trace the
stellar mass of galaxies, which can be compared with the mass
estimated from the stellar population synthesis modeling of the optical
spectra provided by MaNGA. As described in \cite{Yanb2016}, the MaNGA
targets are chosen to overlap as much as possible with the United Kingdom
Infrared Telescope (UKIRT) Infrared Deep Sky Survey (UKIDSS)
footprint. UKIDSS uses the Wide Field Camera (WFCAM) on the 3.8 m United
Kingdom Infra-red Telescope (UKIRT,\citealt{Casali2007}), providing ZYJHK images over a
large sky coverage. The basic information of the survey can be found
in \cite{Lawrence2007}, the photometric system is described in \cite{Hewett2006}, and the calibration is described in \cite{Hodgkin2009}.
The UKIDSS data are reduced by the official pipeline and the science products
are released through
the WFCAM Science Archive\footnote{\label{foot:WSA}\url{http://wsa.roe.ac.uk/}}
(hereafter WSA, \citealt{Hambly2008}).
\subsection{Sample selection}
The galaxy sample used here is selected from the internal data release
of MaNGA, the MaNGA Product Launch 7 (MPL-7),
which includes a total of 4,621 unique galaxies and has been
made publicly available together with the fifteenth SDSS data release
\citep[SDSS DR15;][]{Aguado2019}.
We select a set of the least massive galaxies ($M_*<10^9$ M$_{\odot}$)
according to their total stellar masses given by NSA.
During the selection, we exclude galaxies with apparent problems
in MaNGA DRP and DAP data processing. Galaxies that host AGN or with
severe sky line contamination at the red end of the spectra
are also excluded. After this selection process, we obtain a sample
of 254 low-mass galaxies.
We then cross-match galaxies in this sample with data from WSA,
selecting galaxies that have measurements of SDSS u, g, r, i, z and
UKIDSS Y, J, H, K band magnitudes. This cross-match yields a total of
752 galaxies in MaNGA MPL-7, 22 of which are in the least massive
sample. Considering potential differences between SDSS and MaNGA,
such as flux calibrations, we convolve the spectrum of a galaxy
obtained by stacking its spaxels within 1~$R_e$ (see below)
with the SDSS filters to derive its MaNGA $(g-r)$ colour.
We only select galaxies whose MaNGA $(g-r)$ colours are
within 0.05 mag of the $(g-r)$ colours listed in WSA.
This yields a final sample of 19 low-mass galaxies with both
optical and NIR photometry.
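The colour-consistency cut above can be sketched as follows. This is a simplified illustration: the top-hat filter curves and the `synthetic_mag` helper are hypothetical stand-ins for a proper convolution with the SDSS filter responses.

```python
import numpy as np

def synthetic_mag(wave, flux, filt_wave, filt_trans):
    """Transmission-weighted mean flux through a filter, on a magnitude
    scale (a simplified stand-in for a true AB synthetic magnitude)."""
    trans = np.interp(wave, filt_wave, filt_trans, left=0.0, right=0.0)
    return -2.5 * np.log10(np.average(flux, weights=trans))

def colour_consistent(wave, flux, g_filt, r_filt, gr_archive, tol=0.05):
    """Keep a galaxy only if the (g-r) colour synthesised from its
    stacked MaNGA spectrum matches the archival colour within `tol` mag."""
    gr_synth = (synthetic_mag(wave, flux, *g_filt)
                - synthetic_mag(wave, flux, *r_filt))
    return abs(gr_synth - gr_archive) < tol
```

For a flat spectrum the synthetic $(g-r)$ is zero, so only archival colours within the tolerance of zero would pass the cut.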
In what follows, we use
the sample of 254 low-mass galaxies selected from MaNGA
to investigate the statistical properties of the SFHs of these
galaxies, and use the 19 low-mass galaxies with photometry from UKIDSS
to study additional constraints on the SFHs from the NIR photometry.
\subsection{Spectral stacking}
The original spectra provided by MaNGA DRP have typical $r$-band
SNRs of $4-8$ {\AA}$^{-1}$ toward the outer radii of galaxies \citep{Law2016}.
For the low-mass galaxies considered here, the SNR can be as low as
2 {\AA}$^{-1}$ due to their relatively low surface brightness.
One thus needs to combine spectra in each IFU plate
of each individual galaxy to obtain a stacked spectrum with
a sufficiently high SNR.
We use two kinds of stacked spectra of every individual galaxy.
First, to study the global SFHs of individual galaxies and the overall variations of
SFH from galaxy to galaxy, we bin spaxels with elliptical annuli inside one $R_e$ of each galaxy to form
a single spectrum. Second, to study the variations of the
SFH within a galaxy, we divide spaxels of the galaxy into three
radial bins, $(0.0-0.3)R_e$, $(0.3-0.7)R_e$ and $(0.7-1.2)R_e$,
according to their normalised radii of elliptical annuli
given in MaNGA DAP, and stack the spectra within
individual radial bins. These radial bins are similar to those
used in related MaNGA studies, such as \cite{Zheng_etal2019}.
The stacking procedure used here is similar to that in \cite{Zhou2019},
and we refer the reader to that paper for details.
As discussed in the MaNGA DAP paper \citep[see][for details]{Westfall2019},
the SNR of stacked spectra deviates from simple noise propagation,
because the spaxels provided by the DRP are not fully independent.
Here we use the
correction term given by \cite{Westfall2019} to account
for the covariance between spaxels and to estimate the SNR.
The correction term can be written as
\begin{equation}
n_{\rm real}/n_{\rm no\,covar} = 1 + 1.62\log(N_{\rm bin}),
\end{equation}
where $N_{\rm bin}$ is the number of spectra used in the stacking,
and $n_{\rm real}$ and $n_{\rm no\,covar}$ are the corrected noise
vector and the noise vector assuming no covariance between
spaxels (namely, that obtained from simple noise propagation),
respectively. With this correction, the typical SNR of
the stacked spectra is around 40 pixel$^{-1}$.
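As a concrete illustration, a minimal stacking routine that applies this covariance correction might look like the following sketch (in practice the DRP noise vectors are wavelength-dependent and the stacking is flux-weighted):

```python
import numpy as np

def corrected_snr(fluxes, noises):
    """Stack N spectra and correct the propagated noise for spaxel
    covariance following the MaNGA DAP calibration of Westfall et al.:
    n_real / n_nocovar = 1 + 1.62 * log10(N_bin)."""
    fluxes = np.asarray(fluxes)
    noises = np.asarray(noises)
    n_bin = fluxes.shape[0]
    stacked_flux = fluxes.sum(axis=0)
    noise_nocovar = np.sqrt((noises**2).sum(axis=0))  # naive propagation
    noise_real = noise_nocovar * (1.0 + 1.62 * np.log10(n_bin))
    return stacked_flux / noise_real  # per-pixel SNR of the stack
```

For 100 identical spaxels the correction factor is $1 + 1.62\times2 \approx 4.24$, i.e. the effective SNR is about a quarter of the naive estimate.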
\section{Analysis}
\label{sec:analysis}
The inferences of stellar population properties from galaxy spectra
can be achieved by comparing stellar population models with the
observed spectra. In practice, two complementary approaches,
absorption-feature modeling and full-spectrum fitting, have been
widely used. The full spectrum in principle contains more information, but the information can be fully used only when
both the continuum shape and the spectral features are modelled
accurately, and when the SNR of the spectra is sufficiently high.
In contrast, absorption features can be selected to have the greatest
sensitivity to the main parameters of interest, such as age,
metallicity and $\alpha$ enhancement \citep[][]{Worthey1994}.
Because of their narrow wavelength windows, absorption features
may avoid influences from spectral regions that are not properly
described by the model. However, the
shortcoming is that some important information contained in
the rest of the spectrum may be missed. In this paper, we adopt the
full spectrum fitting method using Bayesian statistics to infer the SFH
of individual galaxies and its co-variance with other properties.
\subsection{The spectral synthesis model}
To accurately model galaxy spectra, proper templates that meet the
resolution and wavelength coverage of the data are crucial.
Several popular codes are available to model simple stellar
populations (SSPs) of given ages and metallicities, including
BC03 \citep{BC03}, M05 \citep{Maraston2005}, and E-MILES \citep{Vazdekis2016}.
These SSP models can be combined with an assumption of the
SFH of a galaxy to predict its spectrum.
Different SSP models are based on different stellar templates and
isochrones, and thus have their own merits and shortcomings.
Given that the wavelength range of the original MaNGA data is
$3600-10300$ {\AA}, and the median redshift of MaNGA galaxies
is $z\sim$0.03, we select models that have uniform wavelength
coverage to $\sim 9000$ {\AA}. In addition, the UKIDSS data
require an extension of the coverage to $\sim 2.5\,{\rm \mu m}$.
With these considerations, we decide to adopt the
E-MILES\footnote{\label{foot:emiles}\url{http://miles.iac.es/}}
model.
The MILES models, first presented in \cite{Vazdekis2010}, are constructed with the MILES \citep{miles2006MNRAS} stellar library. The E-MILES models extend MILES both bluewards and redwards using the CaT \citep{Cenarro2001},
Indo-U.S. \citep{Valdes2004} and IRTF \citep{Cushing2005,Rayner2009} stellar libraries. With all these stellar templates combined, the E-MILES SSP
spectra cover the wavelength range from 1680.2 {\AA} to
$5{\rm \mu m}$ with a moderately high spectral resolution. In
particular, the SSPs reach a resolution of 2.51 {\AA} (FWHM)
over the range from 3540 {\AA} to 8950 {\AA}, which covers the
main portion of the spectral range of MaNGA. The spectral
resolution decreases towards longer wavelengths, but is
sufficient for photometry calculations.
The E-MILES model is computed for several IMFs.
As our focus is on low-mass galaxies, which tend to
have a bottom-light IMF \citep{LHY2018}, we choose the model
constructed with the Chabrier IMF. Moreover, E-MILES provides two
sets of isochrones, the Padova+00 isochrones \citep{Girardi2000}
and the BaSTI isochrones
\footnote{\label{foot:BaSTI}\url{http://www.oa-teramo.inaf.it/BASTI}}.
The impact of using different isochrones has not been
investigated extensively in the literature. For MaNGA galaxies,
\cite{Ge2019} found that using different isochrones
does not lead to significant changes in the fitting quality of
observed spectra. Using mock spectra, however,
the authors found that the Padova+00 model works better at
low metallicity ($[{\rm Z/H}]<-1.0$), while the BaSTI model works better
for galaxies of higher metallicity. Since low-mass galaxies
in general are metal poor, we use E-MILES templates with
Padova+00 isochrones.
\subsubsection{Star formation history}
\begin{figure*}
\centering
\includegraphics[width=1.0\textwidth]{figures/method_demo_step.pdf}
\caption{An example of fitting mock spectra with different SFH models.
The red line in the right panel shows the SFH used to generate the mock spectrum.
White noise is added to the mock spectrum so that SNR=40.
Blue solid and green dashed lines are the best-fit results of the mock
spectrum from 3800 {\AA} to 8900 {\AA} using the $\Gamma$ and $\Gamma$+B
SFH models, respectively, while the orange solid lines show the best-fit
results from the stepwise model. The result
of the $\Gamma$+B model using the optical spectrum plus the
$(g-K)$ colour is shown as yellow dash-dotted lines.
Residuals of the best-fit spectra
are shown at the bottom of the left panel, with the zero points of the
blue, green and orange lines shifted by 0.2, 0.1 and $-0.1$, respectively.
}
\label{fig:SFH_example}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=1.0\textwidth]{figures/method_demo_step_noburst.pdf}
\caption{An example similar to Fig.~\ref{fig:SFH_example}, but for a mock SFH that
does not contain a significant early burst.}
\label{fig:SFH_example2}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=1.0\textwidth]{figures/sample_test_fold_plot.pdf}
\caption{The distribution of the difference in the mass
fraction of the old stellar
population (defined to be stars with ages larger than 8 Gyr)
between the best-fit model and the input SFH. The three columns
are results for mock spectra with different levels of SNR,
as indicated. Upper panels are results from mock spectra with
a high fraction ($>0.5$) of old ($>8$ Gyr) stellar populations,
similar to the one shown in Fig.~\ref{fig:SFH_example}, while
the lower panels are for those with a lower fraction ($<0.5$)
of old stars, similar to the one shown in Fig.~\ref{fig:SFH_example2}.
The four distributions are results obtained from different SFH models,
as indicated. Probability distributions are each normalized to 1.}
\label{fig:SFH_mock_sample}
\end{figure*}
In general, the details of the star formation history
of a galaxy can be very complicated. However, limited by
data quality, the observed galaxy spectra can only be modeled
in terms of a small number of dominant stellar populations.
For low-mass galaxies,
observations based on resolved stars \citep[e.g.][]{Weisz2011}
have shown that the SFHs of different galaxies are quite similar
over most of cosmic time, with differences seen only in
the recent few Gyrs. These observations also indicate that
low-mass galaxies generally had enhanced star formation
in the early universe ($z>1$), when they formed more than
half of their stars, and that some of them have gone through
complex star formation histories during the most recent Gyr.
Such two-phase star formation is also seen in empirical models
such as that of \cite{Lu2015}.
Traditionally, SFHs of galaxies have been
modelled with two distinct approaches: a parametric approach
that assumes a functional form specified by a small number of parameters,
and a non-parametric approach that models the SFH as a histogram
(a step-wise function) in a number of time bins. In
our analysis, we use both approaches and make comparisons between them.
The parametric model adopted here is motivated by the
empirical model of \cite{Lu2015}, who found that the SFH of a dwarf galaxy
can be well represented by a burst in the early universe followed by a
continuous SFH. To describe such a SFH, we model a smooth component
using a $\Gamma$ function:
\begin{equation}
\Psi(t)=\frac{1}{\tau\gamma(\alpha,t_0/\tau)}
\left({t_0-t\over \tau}\right)^{\alpha-1}
e^{-(t_0-t)/\tau}\,,
\label{gamma-sfh}
\end{equation}
where $t_0-t$ is the look-back time,
and $\gamma(\alpha,t_0/\tau)\equiv \int_0^{t_0/\tau} x^{\alpha-1}e^{-x}\,dx$.
The flexibility of the function allows cases where a galaxy
is dominated by old stars (with both $\alpha$ and $\tau$ small)
or dominated by younger populations (with both $\alpha$ and
$\tau$ large). On top of this continuous SFH, we include
an additional burst component to mimic the old stellar
component in dwarf galaxies seen in resolved stars.
The burst is specified by two free parameters:
$t_b$ describing when the burst occurs, and $f_{b}$
specifying the relative fraction of stellar mass formed in the burst.
Thus we consider the following two SFH model types:
\begin{itemize}
\item $\Gamma$ model: SFH given by equation (\ref{gamma-sfh});
\item $\Gamma$+B model: SFH given by equation (\ref{gamma-sfh}) plus a burst.
\end{itemize}
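The smooth component of equation (\ref{gamma-sfh}) can be evaluated directly; the sketch below (using SciPy for the incomplete gamma function, with illustrative parameter values) checks that the SFH is normalised to unity over $0 \le t \le t_0$:

```python
import numpy as np
from scipy.special import gamma as Gamma, gammainc

def gamma_sfh(t, t0=14.0, alpha=2.0, tau=3.0):
    """Normalised Gamma-function SFH:
    Psi(t) = [(t0-t)/tau]^(alpha-1) e^{-(t0-t)/tau} / (tau * g),
    where g is the unregularised lower incomplete gamma gamma(alpha, t0/tau).
    Times are in Gyr; t0 - t is the look-back time."""
    x = (t0 - t) / tau
    # scipy's gammainc is regularised; multiply by Gamma(alpha) to undo that
    g = gammainc(alpha, t0 / tau) * Gamma(alpha)
    return x**(alpha - 1) * np.exp(-x) / (tau * g)
```

Small $(\alpha, \tau)$ concentrate the star formation at early times (old-dominated), while large values shift it to late times, as described in the text.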
Non-parametric models are more flexible. In real applications, however, the
flexibility depends strongly on the number of time bins used,
and the accuracy of the inference that can be achieved is still limited by data quality.
In addition, using too many time bins will lead to degeneracy in
the solution and increase the computational time for sampling the posterior
distribution. \cite{Panter2007} used 11 time bins in their
analysis of SDSS galaxies, but found the model to be too ambitious
for most galaxies. To model the color-magnitude diagram (CMD) based on
resolved stars, \cite{Weisz2011} used a 6-step model to describe
the SFH of local dwarf galaxies. In our analysis, we adopt
a stepwise SFH model similar to \cite{Weisz2011}.
The model is described by the average star formation rates
in 7 time intervals: $0\to 0.2$, $0.2\to 0.5$, $0.5\to 1.0$, $1\to 2$,
$2\to 6$, $6\to 10$ and $10 \to 14$ Gyr.
Since only the relative fractions of stars formed
in each interval change the spectral shape, the
model is specified by six free parameters.
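The stepwise parameterisation can be sketched as follows, with the SFR of the oldest bin fixed to unity and six free log-SFR parameters for the remaining bins:

```python
import numpy as np

# Edges of the seven look-back-time intervals (Gyr)
BIN_EDGES = np.array([0.0, 0.2, 0.5, 1.0, 2.0, 6.0, 10.0, 14.0])

def stepwise_mass_fractions(log_sfr_rel):
    """Mass fraction formed in each of the 7 time bins.
    `log_sfr_rel` holds the six free parameters: log10 of the SFR in
    the first six bins relative to the oldest (10-14 Gyr) bin,
    whose SFR is fixed at 1."""
    sfr = np.concatenate([10.0**np.asarray(log_sfr_rel), [1.0]])
    mass = sfr * np.diff(BIN_EDGES)   # SFR x bin width
    return mass / mass.sum()
```

With all six parameters at zero (equal SFR in every bin) the mass fractions are simply proportional to the bin widths.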
We test the validity of these three forms of SFH using mock
spectra generated
from theoretical SFHs from the empirical model of \cite{Lu2015}.
To do this, we first convolve the theoretical
SFHs with the E-MILES SSPs to obtain the corresponding noise-free
composite spectra. Different levels of Gaussian noise are then added to
the spectra to mimic real observations.
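The mock-spectrum construction described above can be sketched as below, where toy SSP arrays stand in for the E-MILES templates:

```python
import numpy as np

def make_mock_spectrum(ssp_fluxes, mass_fractions, snr, seed=0):
    """Composite spectrum as a mass-weighted sum of SSP spectra, with
    white Gaussian noise added to reach a target per-pixel SNR.
    `ssp_fluxes` is an (n_ssp, n_pix) array."""
    rng = np.random.default_rng(seed)
    clean = np.asarray(mass_fractions) @ np.asarray(ssp_fluxes)
    noise = clean / snr                       # per-pixel 1-sigma noise
    mock = clean + rng.normal(0.0, noise)
    return mock, noise
```

The returned noise vector is what a fitting code would use as the per-pixel uncertainty when evaluating the likelihood.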
An example of fitting such a mock spectrum with SNR=40 is shown in
Fig.~\ref{fig:SFH_example}.
Note that, in Fig.~\ref{fig:SFH_example}
and other figures, the burst component
of the $\Gamma$+B model is represented by a Gaussian
peak with finite width, even though it is modelled as a single SSP
in the fitting. As one can see, although both the
$\Gamma$ and $\Gamma$+B models can give a reasonable fit to
the mock spectrum, the former fails to recover the input SFH.
On the other hand, the $\Gamma$+B model has the flexibility to
recover the early burst component, although the exact burst time is not reproduced.
Similar to $\Gamma$+B, the step-wise SFH can also reproduce the
early burst component, although the specific shape of the SFH is not
accurately reproduced due to the time resolution used.
For comparison, we also show the result
obtained using both the optical spectrum and $(g-K)$ colour to
constrain the model (see \S\ref{ssec_fittingproc}).
In this case the recovered SFH (the yellow curve) closely
matches the input SFH, with approximately the same burst
strength and time. Fig.~\ref{fig:SFH_example2} shows another example in which
the mock SFH does not contain any significant burst.
Here the inclusion of a burst component in the model and the NIR
photometry in the constraint does not lead to any significant changes
in the inferred SFH, as the $\Gamma$ function already has the
flexibility to approximately describe this kind of SFH.
From the cumulative plots shown in the right panels of
Figs.~\ref{fig:SFH_example} and \ref{fig:SFH_example2},
we can see that all models, except the $\Gamma$ model, predict
similar half-mass formation times and old stellar fractions,
although the predicted shapes of the SFH are quite different
at early times. Because of this, we will use the old stellar fraction
and half mass formation time to characterize the old population,
without paying much attention to the exact shape of the derived SFH.
To compare the true and estimated physical properties,
we analyze 2,000 such mock spectra randomly chosen from the
empirical models of \cite{Lu2015}, with SNRs ranging from 10 to 70.
The differences in the mass fraction of the old stellar population
(with stellar ages $>8$ Gyr) between the best-fit and input SFHs
are shown in Fig.~\ref{fig:SFH_mock_sample}. It is seen that the
$\Gamma$ model systematically underestimates the mass fraction of
the old population, regardless of the SNR.
The $\Gamma$+B model recovers the input old fraction well, with
some underestimates at low SNR.
The stepwise model also recovers the old fraction well,
but with a systematic underestimate at low SNR.
In addition, including the NIR photometry as an additional constraint
can improve the accuracy of the derived mass fraction, especially
when the SNR is low. Although these test results are for ideal cases,
where the stellar populations are perfectly described by the SSPs, they do indicate that
the $\Gamma$+B model and the stepwise model can both recover
the old stellar population predicted by the empirical model
in some low-mass galaxies, although the details of the
SFH may not be modeled accurately.
In what follows, we will apply these SFH models to MaNGA spectra
and examine whether the data prefer a particular SFH model,
and how model inferences are affected by the assumption of
the SFH.
\subsubsection{Dust attenuation}
Dust attenuation can also affect the inferred SFH. Since dust absorption is
more significant at shorter wavelengths, a stellar population that
contains more dust can mimic an older and/or more metal-rich population,
producing the well known age-metallicity-dust degeneracy. Thus,
dust attenuation has to be properly taken into account in order to
make unbiased inferences from the observed spectra.
In practice, dust attenuation is usually treated as an additional model
parameter specifying the attenuation curve assumed.
For star-forming galaxies, the Calzetti Law \citep{Calzetti2000} is
widely used. For galaxies with complex stellar populations,
the two-component dust model of \cite{Charlot2000} may be adopted to
account for the difference in dust attenuation between the birth clouds
of young stars (populations younger than 10 Myr) and older stellar populations.
As different attenuation curves are very similar
to each other in the optical and NIR bands, we
follow \cite{Charlot2000} and use a single optical depth
parameter to describe the attenuation of the stellar
population in a galaxy.
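The single-optical-depth attenuation can be sketched as below; the $\lambda^{-0.7}$ power-law slope is the Charlot \& Fall (2000) form and is adopted here purely for illustration:

```python
import numpy as np

def attenuate(wave, flux, tau_v, slope=-0.7):
    """Apply a single-optical-depth dust attenuation,
    F_obs = F_int * exp(-tau_v * (lambda / 5500 A)^slope),
    where tau_v is the optical depth at 5500 A and slope = -0.7
    follows the Charlot & Fall (2000) power-law curve."""
    return flux * np.exp(-tau_v * (wave / 5500.0)**slope)
```

Because the curve flattens towards the red, the same $\tau_v$ dims the optical far more than the NIR, which is why the $(g-K)$ colour helps break the age-metallicity-dust degeneracy.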
\subsubsection{The implementation of BIGS}
\label{ssec:BIGS}
In complex problems, such as the spectrum fitting problem addressed here,
the likelihood function can be complicated and may not be represented by
simple analytical functions. One thus needs an efficient sampling method
to sample the posterior distribution. In addition, as
the Bayesian evidence ratio involves integration in high-dimensional
space, an effective numerical method is also needed to evaluate it.
BIGS adopts a Bayesian sampler, MULTINEST \citep{Feroz2009,Feroz2013},
which uses the nested sampling algorithm to estimate the
Bayesian evidence and gives the posterior distribution as a by-product.
Briefly, BIGS works as follows. For each data spectrum,
we pre-process it using pPXF \citep{Cappellari2017} and obtain the velocity distribution
of the source. We then convolve our template spectra with a Gaussian
that accounts for both the instrumental resolution and
velocity dispersion of stars. The data spectrum and the templates
are then provided to BIGS. Using the prior distribution of model parameters,
BIGS uses MULTINEST to generate a proposal parameter vector for the
spectral synthesis model. These parameters, which specify
the SFH, metallicity and dust attenuation of the stellar population,
are used to generate a model spectrum to be compared with the
data spectrum to calculate the corresponding likelihood. The MULTINEST sampler then
either accepts or rejects the proposal according to the posterior probability and
generates a new proposal for the model parameters, until a convergence criterion is
met. Once converged, the posterior distribution of model parameters,
together with the Bayesian evidence, are stored for
statistical analyses of the model.
\subsection{The fitting procedure}
\label{ssec_fittingproc}
We compare spectra predicted by the E-MILES SPS model
to the observed MaNGA spectra to infer the SFHs of the low-mass galaxies
using the procedure described below. To begin with, we mask some
spectral regions to ensure the validity of the fitting. For example,
we mask the observed continua in the wavelength range
6800-8100 {\AA}, where fits are commonly problematic, due to
issues in the flux calibration of the templates, residual telluric
absorption, or even the flux calibration of the data
(see \citealt{Zhou2019} for more discussion).
In addition, as the E-MILES templates do not contain the youngest
stellar population (<0.06 Gyr), we also mask the very blue end of the
spectra (<3800 {\AA}) in the fitting.
To take into account the effects of stellar kinematics and instrumental
resolution, we first use the software pPXF to pre-fit each data spectrum and obtain an effective velocity dispersion,
$\sigma_{\rm ppxf}$. The E-MILES template spectra are then convolved
with a Gaussian of this width to generate artificially broadened templates that are used to
compute the synthesised spectra to be compared with the
corresponding data spectra. In this step, apparent emission lines in the spectra
are also identified and masked in subsequent analyses.
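The broadening step can be sketched as follows, assuming (as is standard in this kind of analysis, though not stated explicitly above) that the templates are resampled onto a logarithmic wavelength grid so that a fixed velocity dispersion corresponds to a constant width in pixels:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

C_KMS = 299792.458  # speed of light in km/s

def broaden_template(flux, sigma_kms, dlnlam):
    """Broaden a template sampled on a log-lambda grid by a Gaussian
    of effective dispersion sigma_kms (instrumental + stellar, e.g.
    from a pPXF pre-fit). dlnlam is the log-wavelength step per pixel."""
    sigma_pix = (sigma_kms / C_KMS) / dlnlam
    return gaussian_filter1d(flux, sigma_pix, mode="nearest")
```

The convolution preserves total flux, so broadening changes line shapes without altering the continuum level.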
After this pre-processing, we first normalise the model and data spectra in
the wavelength window $4500-5500$ {\AA} and then send them to BIGS.
BIGS runs the fitting loop as described in \S\ref{ssec:BIGS},
assuming a flat prior and a $\chi^2$-like likelihood function.
In the fitting that uses only the MaNGA spectra, the likelihood
function is defined as
\begin{equation}
\label{likelyhood}
\ln {L(\theta)}\propto-\frac{1}{2}\sum_{i,j=1}^N\left(f_{\theta,i}-f_{D,i}\right)\left({\cal
M}^{-1}\right)_{ij}\left(f_{\theta,j}-f_{D,j}\right)\,
\end{equation}
where $N$ is the total number of wavelength bins, $f_{\theta}$ and $f_{D}$ are
the flux predicted from the parameter set $\theta$ and that of the data spectrum,
respectively, and ${\cal M}_{ij}\equiv
\langle \delta f_{D, i}\delta f_{D, j} \rangle$
is the covariance matrix of the data. For spectra that have
UKIDSS observations, we use the following definition:
\begin{equation}
\label{eq:likelyhood_NIR}
\ln {L(\theta)}\propto-\frac{1}{2}\sum_{i,j=1}^N\left(f_{\theta,i}-f_{D,i}\right)\left({\cal
M}^{-1}\right)_{ij}\left(f_{\theta,j}-f_{D,j}\right)-
\frac{(K_{\theta}-K_{D})^2}{2\sigma_{K}^2}\,
\end{equation}
where $K_{\theta}$ and $K_{D}$ are the $(g-K)$ colour predicted from the
parameter set $\theta$ and that from the data, respectively.
The uncertainty in the $(g-K)$ colour is denoted by $\sigma_{K}$.
In general, it is difficult to model $\sigma_{K}$.
The value of $\sigma_{K}$ is related to the accuracy of the UKIDSS photometry,
which has an uncertainty of less than 2\% in the $K$
band \citep{Dye2006}. In addition, $\sigma_{K}$ should also include
the relative flux variation between the MaNGA stacked
spectra and the NIR photometry, which is hard to model.
To minimize such influence, we only select galaxies with
$\Delta (g-r)<0.05$ mag between the MaNGA and UKIDSS archive data.
In this case, our tests show that setting $\sigma_{K}=0.02$
is appropriate to describe the constraints from the NIR observations.
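The likelihood definitions above translate directly into code; the sketch below takes the covariance matrix and the optional colour term as inputs:

```python
import numpy as np

def ln_likelihood(f_model, f_data, cov, gk_model=None, gk_data=None,
                  sigma_k=0.02):
    """Chi^2-like log-likelihood (up to an additive constant):
    a covariance-weighted spectral term, plus an optional (g-K)
    colour term for galaxies with UKIDSS photometry."""
    resid = f_model - f_data
    # -(1/2) resid^T M^{-1} resid, without forming the explicit inverse
    lnl = -0.5 * resid @ np.linalg.solve(cov, resid)
    if gk_model is not None:
        lnl -= 0.5 * ((gk_model - gk_data) / sigma_k)**2
    return lnl
```

Solving the linear system instead of inverting $\cal M$ is the numerically preferable way to evaluate the quadratic form.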
We list all the fitting parameters
for the $\Gamma$+B model in Table \ref{tab:SFH_parameters},
together with their prior distributions (assumed to be flat).
For the priors of the stepwise SFH, we assume that
the SFR in the last time interval, $10-14$ Gyr, is a constant
normalized to 1 (0 in logarithmic scale), and that the prior distributions
of the SFR in all other time bins are flat between $-2$ and $2$
in logarithmic space. All other parameters are
specified in the same way as for the $\Gamma$+B model.
\begin{table}
\centering
\caption{Priors of model parameters used to fit galaxy spectra}
\label{tab:SFH_parameters}
\begin{tabular}{lcr}
\hline
Parameter & Description & Prior range\\
\hline
$\log(Z/Z_{\odot})$ & Metallicity & $[-2.3, 0.2]$\\
$\tau$ & SFH parameter in Eq.\,(\ref{gamma-sfh}) & $[0.0,10.0]$\\
$\alpha$ & SFH parameter in Eq.\,(\ref{gamma-sfh}) & $[0.0,20.0]$\\
$\tau_v$ & Dust optical depth at 5500 \AA & $[0.0,2.0]$\\
$f_{\rm burst}$ & Relative mass fraction of the burst & $[0.0,1.0]$\\
$\log(Z_{\rm burst}/Z_{\odot})$ & Metallicity of the old population & $[-2.3, 0.2]$\\
$A_{\rm burst}$ (Gyr) & Age of the old population & $[0.0,14.0]$\\
\hline
\end{tabular}
\end{table}
\section{Results}
\label{sec:results}
\subsection{Bayesian model selection}
\label{ssec:modelselection}
\begin{figure}
\includegraphics[height=70mm]{figures/ev.pdf}
\caption{The evidence ratio between the $\Gamma$+B and
$\Gamma$ models as a function of the SNR of the stacked spectra.
Each red dot stands for the result of
a MaNGA galaxy obtained by fitting its stacked spectrum. Blue stars
are the median values in five SNR bins and are linked by a blue line.
Each green triangle stands for the result obtained from fitting both
the MaNGA stacked spectrum and the $(g-K)$ colour (see \S\ref{ssec_nir}).
Yellow stars are the median values in two SNR bins and are connected
by a yellow line. Error bars are the $1\sigma$ scatter among galaxies
in individual bins.
}
\label{fig:evratio}
\end{figure}
\begin{figure}
\centering
\includegraphics[height=65mm]{figures/ev_step.pdf}
\caption{The evidence ratio between the $\Gamma$+B and stepwise models
as a function of SNR of the stacked spectra. Each red dot stands for
the result of a MaNGA galaxy obtained by fitting its stacked spectrum.
Blue stars are the median values in five SNR bins and are linked by
a blue line. Error bars are $1\sigma$ scatter among galaxies
in individual bins.
}
\label{fig:evratio_step}
\end{figure}
\begin{figure*}
\includegraphics[height=0.35\textwidth]{figures/sfh_example_real.pdf}
\caption{
Top panels: optical images of six example galaxies. Bottom panels: the
corresponding best fit SFHs inferred from the three SFH models, as labelled.
}
\label{fig:SFH_real_example}
\end{figure*}
Our goal is to investigate the existence or absence
of an old stellar population in low-mass galaxies.
To this end, we fit the stacked spectra (each being a stack of
pixels within the effective radius of a galaxy) of individual
galaxies with three SFH models: $\Gamma$, $\Gamma$+B
and stepwise, while keeping all other parts of the model intact.
We first examine whether the data show a preference for one of the three models.
As described above, the Bayesian evidence ratio provided by BIGS
can serve as a discriminator between different model families.
Fig.~\ref{fig:evratio} shows the evidence ratio between the two continuous models, $\ln(E_{\rm \Gamma+B}/E_{\rm \Gamma})$, obtained from
the stacked spectra, as a function of the signal-to-noise ratio
(SNR) of the stacked spectra. The median values in five SNR bins
are shown as blue stars to represent the global trend,
with the error bars showing the $1\sigma$ scatter
among galaxies in individual bins.
Fig.~\ref{fig:evratio} shows clearly that the median value of the
evidence ratio increases with increasing SNR. This indicates that the SFHs of these low-mass galaxies are more likely
a composite of two distinct stellar components than a single
component. Although the young stellar component may dominate the
luminosity and make the galaxies blue, the faint old stellar
component that formed early may contribute significantly to their
total stellar masses, as we will quantify next.
As a comparison, we plot in Fig.~\ref{fig:evratio_step} the evidence ratio
between the $\Gamma$+B and stepwise models. Although the number of free parameters
in the stepwise model is much larger than that in the $\Gamma$+B model,
the Bayesian evidences of the two models are comparable. The medians of the evidence ratio are close to unity, with
large dispersion among different galaxies. This indicates that
some of the galaxies have a preference for the $\Gamma$+B model,
while others have the opposite preference. The absence of a systematic
trend suggests that both models have similar abilities to describe
the overall SFH in the current data, while the preferences
of different galaxies for different models indicate the
intrinsic variations in the detailed shapes of their SFHs.
It is, however, not straightforward to quantify the
preference between different models. Bayesian model selection
has been used for this purpose, but a quantitative criterion for
model selection is still not well established.
The widely adopted Jeffreys' scale uses a ratio of
$E_1/E_2>150$, or $\ln(E_1/E_2)>5$, where $E_1$ and $E_2$ are
the evidences of the two competing models $1$ and $2$, respectively,
to indicate a `strong' preference for model 1
\citep[e.g.][]{Hobson2009}. But the validity of this scale
has been questioned \citep[e.g.][]{Nesseris2013}.
In spectral fitting, the situation may be even worse.
The number of data points is large
(thousands of pixels for each spectrum) and the current SSP
models are still not perfect, which may lead to very large
evidence ratios between different models
\citep[e.g.][]{Han2019}. In what follows, we simply assume
that a model is preferred when it has the largest evidence
among all models considered.
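The selection rule adopted here, picking the model with the largest evidence, can be sketched as follows (the log-evidence values in the example are made-up numbers):

```python
import numpy as np

def preferred_model(ln_evidences):
    """Pick the model with the largest Bayesian evidence (the simple
    rule adopted here) and also return the log evidence ratio of the
    best to the second-best model; ln(E1/E2) > 5 would count as a
    'strong' preference on the Jeffreys scale."""
    names = list(ln_evidences)
    lnz = np.array([ln_evidences[n] for n in names])
    order = np.argsort(lnz)[::-1]          # descending evidence
    best = names[order[0]]
    ln_ratio = lnz[order[0]] - lnz[order[1]]
    return best, ln_ratio
```

Returning the ratio alongside the winner makes it easy to flag cases where the preference is marginal rather than decisive.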
\subsection{The stellar populations in low-mass galaxies}
\subsubsection{The star formation history}
\begin{figure}
\includegraphics[height=0.4\textwidth]{figures/cum_sfh_mix.pdf}
\caption{
The cumulative SFH inferred from the posterior distribution
constrained by the stacked spectra of different galaxies
within $1 R_e$. The red, blue and green solid lines show the results
from the best-fit $\Gamma$+B, $\Gamma$ and stepwise models,
respectively. The thick orange line shows the
average SFH of the preferred model for each galaxy,
selected by Bayesian evidence.
The shaded region around each line represents the variance
of the mean SFH, estimated from the jackknife
resampling method. The result of \citet{Weisz2011} is shown as
a black dash-dotted line.
Results obtained from the MaNGA stacked spectra plus the $(g-K)$
colour (see \S\ref{ssec_nir}) using the $\Gamma$+B and $\Gamma$ models are shown as red and blue dashed lines, respectively. The horizontal black dashed line marks the position
of half of the total star formation.
}
\label{fig:cum_SFH}
\end{figure}
\begin{figure}
\includegraphics[height=0.4\textwidth]{figures/cum_sfh_free_cs.pdf}
\caption{
The cumulative SFH inferred from the posterior distribution
constrained by the stacked spectra of different galaxies
within $1 R_e$. The red, blue and green lines show the best fits
of the $\Gamma$+B, $\Gamma$ and preferred SFHs, respectively.
Solid and dashed lines are
mean results for central and satellite galaxies,
respectively. The shaded region around each line represents
the variance of the mean SFH, estimated from the jackknife
resampling method. The horizontal black dashed line marks the
position of half of the total star formation. }
\label{fig:cum_SFH_cs}
\end{figure}
\begin{figure}
\centering
\includegraphics[height=0.4\textwidth]{figures/t_half_errored_free.pdf}
\includegraphics[height=0.4\textwidth]{figures/t_half_errored_preferred.pdf}
\caption{The distribution of the half mass formation time inferred from the
best-fit SFH models. Red and blue histograms show the results of
central and satellite galaxies, respectively. Solid histograms are obtained using the $\Gamma$+B SFH
(top panel) and the preferred SFH (bottom panel),
while dashed ones are for the $\Gamma$ SFH.
Error bars, only shown for the $\Gamma$+B SFH (top panel)
and the preferred SFH (bottom panel), are obtained
from the jackknife resampling method. Each histogram is normalized to 1
over all the bins used.
}
\label{fig:t_half_stack}
\end{figure}
\begin{figure*}
\centering
\includegraphics[height=0.85\textwidth]{figures/mass_fractions_mix.pdf}
\caption{The distribution of the mass fraction of stars in the old ($>8$ Gyr, left),
middle-age ($4-8$ Gyr, middle), and young ($<4$ Gyr, right) stellar populations. Top panels are results obtained from the $\Gamma$+B SFH model, middle panels
are from the $\Gamma$ model, while bottom panels are from the preferred SFH. Red and blue histograms are
for centrals and satellites, respectively,
with dashed lines showing the median values. Black dots are results
of \citet{Weisz2011}. All histograms are normalized to 1
over the bins used.}
\label{fig:hist_fractions}
\end{figure*}
We first obtain the best-fit SFH for each galaxy from
the posterior distribution. Results for six representative galaxies are shown in
Fig.~\ref{fig:SFH_real_example}. As can be seen,
results from the $\Gamma$+B model and the stepwise model
are broadly consistent with each other, both predicting the
existence of an old stellar population for most (but not all)
galaxies, while the $\Gamma$ model clearly misses such a
population.
We plot the average cumulative SFH in Fig.~\ref{fig:cum_SFH}.
To estimate the statistical uncertainty,
we divide the galaxy sample into 20 sub-samples of
similar size, and construct 20 jackknife copies
by eliminating one sub-sample in turn.
The variances of the mean SFH inferred from
the 20 jackknife copies are shown in Fig.~\ref{fig:cum_SFH} as
the shaded regions around the corresponding lines.
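The jackknife estimate behind the shaded uncertainty bands can be sketched as:

```python
import numpy as np

def jackknife_mean_variance(samples, n_sub=20):
    """Delete-one-subsample jackknife for the variance of the mean.
    `samples` is an (n_galaxy, n_time) array of per-galaxy curves;
    galaxies are split into n_sub groups, and each jackknife copy
    omits one group in turn."""
    groups = np.array_split(np.asarray(samples), n_sub)
    means = np.array([
        np.concatenate(groups[:i] + groups[i + 1:]).mean(axis=0)
        for i in range(n_sub)
    ])
    jk_mean = means.mean(axis=0)
    jk_var = (n_sub - 1) / n_sub * ((means - jk_mean)**2).sum(axis=0)
    return jk_mean, jk_var
```

For a sample of identical curves the jackknife variance is zero, as expected.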
The results derived from the $\Gamma$+B model (red lines)
indicate that low-mass galaxies on average formed about
half of their stellar masses more than 8 Gyrs ago,
which may be associated with the starburst events observed in
extreme emission line galaxies detected in the CANDELS
survey \citep{vanderWell2011}. As a comparison, the black line
shows the cumulative SFHs of local dwarf galaxies in the mass range
$10^8-10^9{\rm M}_{\odot}$ obtained from resolved
stars by \citet{Weisz2011}. It is remarkable
that the results obtained from the $\Gamma$+B model are in
good agreement with those of \citet{Weisz2011}.
In contrast, the results obtained from the
$\Gamma$ model (blue lines) indicate that most of
the stellar mass in low-mass galaxies formed recently.
This discrepancy is expected. Since an old stellar
population is much fainter than the young population
of the same mass, a model that is not sufficiently
flexible to allow for the existence of both populations
will miss the old population. As shown in \S \ref{ssec:modelselection},
this limitation of the $\Gamma$ model weakens its
ability to describe the true SFH, which leads to
the smaller Bayesian evidence in comparison to the
$\Gamma$+B model in fitting spectra of sufficiently high SNR.
For comparison, we also show in Fig.~\ref{fig:cum_SFH} the
result obtained from the stepwise model as a green line.
In addition, we also obtain the best-fit SFH from the preferred
model for each galaxy based on Bayesian model selection.
The result, referred to as the {\it preferred SFH}, is plotted
in Fig.~\ref{fig:cum_SFH} as a thick orange line.
The two results are in qualitative agreement with those from
the $\Gamma$+B model, indicating
that the $\Gamma$+B model may be sufficiently
broad for the current data, and that our conclusion does not
depend on the exact form assumed for the SFH,
as long as it is sufficiently flexible.
In what follows, we will thus focus on the
results derived from the $\Gamma$+B model and use them to
compare with those in the literature. In addition, we will also
show results from the preferred SFH to demonstrate how our
results may vary due to the use of a different SFH model.
In Fig.~\ref{fig:cum_SFH_cs}, we show results separately for central
and satellite galaxies, using the central/satellite classification
from the Galaxy Environment for MaNGA Value
Added Catalog (GEMA-VAC, \citealt{Argudo2015}). This VAC uses the
group catalogue of \cite{Yang2007} to separate MaNGA galaxies
into centrals and satellites. Environmental effects can be
seen from Fig.~\ref{fig:cum_SFH_cs},
in that satellite galaxies appear to form their stars
slightly earlier than central galaxies.
This is expected, as low-mass satellites may have their star
formation quenched by environmental effects of their host
halos \citep[e.g.][]{vandenBosch2008,Peng2012}.
In addition to the cumulative SFH, we also derive
the half-mass formation time, $t_{\rm half}$, defined as the
look-back time when a galaxy forms half of its final stellar mass,
from the best-fit SFH models and plot the results in
Fig.~\ref{fig:t_half_stack}. Again, GEMA-VAC is used to divide
our sample into centrals (red) and satellites (blue).
For both the $\rm \Gamma+B$ model and the preferred SFH,
$t_{\rm half}$ varies from 2 Gyr to 12 Gyr, with a broad peak at
about 9 Gyr. In contrast, $t_{\rm half}$
inferred from the $\rm \Gamma$ model is
smaller on average, ranging from zero to 12 Gyr with a broad
peak at $<4$ Gyr.
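The half-mass formation time defined above can be obtained from a cumulative SFH by simple interpolation. The sketch below is illustrative only: the array layout and the assumed 13.8 Gyr age of the Universe are our conventions, not taken from the actual fitting code.

```python
import numpy as np

T_UNIVERSE = 13.8  # Gyr; assumed age of the Universe (illustrative)

def half_mass_time(t_cosmic, cum_frac):
    """Look-back time at which half of the final stellar mass was in place.

    t_cosmic : cosmic times in Gyr, increasing.
    cum_frac : cumulative mass fraction formed by each time, rising 0 -> 1.
    """
    # Cosmic time at which the cumulative fraction crosses 0.5 ...
    t50 = np.interp(0.5, cum_frac, t_cosmic)
    # ... converted to a look-back time.
    return T_UNIVERSE - t50
```

An SFH that places half of the mass in place early returns a large look-back time, as for the old-population-dominated galaxies discussed here.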
Comparing results obtained for centrals and satellites,
one sees that the peak at 9 Gyr is weaker for centrals, and there is
a weak excess of central galaxies at $t_{\rm half}\sim 4\,{\rm Gyr}$.
This excess may indicate a secondary
star formation episode for some of the central galaxies,
as is expected from the ``gappy'' star formation history found
by \citet{Wright2019}. These results are in rough agreement
with those of \citet{Kauffmann2014}, who found that the distribution of
$t_{\rm half}$ for low-mass galaxies, derived from analysis of
Dn4000 and ${\rm H}_{\delta_A}$ absorption features,
is quite broad and shows double peaks.
We note that there is a significant peak in the distribution
of $t_{\rm half}$ at around 13 Gyr inferred from $\Gamma$+B.
However, our extensive tests show that this peak may not
be real. We use only a single SSP to describe the burst in the
early Universe, and the age of the burst is confined to be within the age of
the Universe. As the spectra of old SSPs are insensitive to the age
\citep[e.g.][]{BC03}, a galaxy with an early starburst that contributes
more than half of its stellar mass could have the best-fit
$t_{\rm half}$ at the edge of the prior. When we extend the prior
age range to the maximum age of the SSP model, which is
18 Gyr for the E-MILES model, we find that some best-fit ages
move beyond the previous boundary, and the peak at 13 Gyr
disappears. These results indicate that the current model
is not able to describe the ages of old stellar populations
accurately. Because of this limitation, the exact ages of the old
population predicted by the model should be treated with caution.
We emphasise, however, that this limitation does not affect the
conclusion that these galaxies contain large fractions
of old stars. Indeed, the peak is much weaker in the distribution
inferred from the preferred SFH, but the predicted fraction of
galaxies with $t_{\rm half}> 8\,{\rm Gyr}$ is similar
for both the $\Gamma$+B and the preferred SFH.
\begin{figure*}
\centering
\includegraphics[height=0.3\textwidth]{figures/sfh_cum_radial_free.pdf}
\includegraphics[height=0.3\textwidth]{figures/sfh_cum_radial_mix.pdf}
\caption{The cumulative SFH inferred from the best-fit
$\Gamma$+B model (top panels) and the preferred SFH
(bottom panels). Solid lines in each panel show results for the radial bins
[0.0-0.3]$R_e$, [0.3-0.7]$R_e$, and [0.7-1.2]$R_e$, respectively.
As a reference, results obtained by stacking spaxels of the entire galaxy are shown
by dashed lines in all panels. Red and blue lines are for central
and satellite galaxies, respectively. The shaded region around
each line represents the variance of the mean SFH,
estimated from the jackknife resampling method. The horizontal black dashed line
marks the position of half of the total star formation.}
\label{fig:sfh_cum}
\end{figure*}
\subsubsection{The old stellar populations}
Fig.~\ref{fig:hist_fractions} shows the mass fraction of stars in
three populations, old (>8~Gyr), middle-age (4-8 Gyr), and young
(<4 Gyr). We show the results for central and satellite galaxies
separately, and separately for ${\rm \Gamma+B}$ (upper panels),
$\rm \Gamma$ (middle panels) and the preferred SFH (bottom panels).
For ${\rm \Gamma+B}$ and the preferred SFH,
the fractions are $\sim60\%$, $20\%$ and $20\%$ for the three
populations, respectively, and are quite similar for both
centrals and satellites.
For the $\Gamma$ model, the corresponding
fraction is about one third for each of the three populations.
Therefore, the $\Gamma$ model significantly underestimates
the fraction of old stars while overestimating the young
fraction in comparison with the ${\rm \Gamma +B}$ model
and the preferred SFH.
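The three mass fractions can be obtained from a best-fit SFH by binning the SSP components by look-back age. The sketch below follows the 4 and 8 Gyr boundaries used in the text; the function interface and the young/middle/old output order are illustrative choices of ours.

```python
import numpy as np

AGE_EDGES = [4.0, 8.0]  # Gyr boundaries: young (<4), middle-age (4-8), old (>8)

def population_fractions(ages, masses):
    """Mass fractions of the young, middle-age, and old populations.

    ages   : look-back formation ages of SSP components (Gyr).
    masses : corresponding stellar masses.
    Returns an array [f_young, f_middle, f_old].
    Components with an age exactly on a boundary fall into the older bin.
    """
    masses = np.asarray(masses, dtype=float)
    idx = np.digitize(ages, AGE_EDGES)  # 0=young, 1=middle-age, 2=old
    frac = np.array([masses[idx == k].sum() for k in range(3)])
    return frac / masses.sum()
```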
For comparison, we show the results obtained from the resolved observation
of \citet{Weisz2011} as black dots. Although the resolved data are sparse,
they match well the distributions inferred from
both $\Gamma$+B and the preferred SFH,
but mismatch significantly the inferences of
the $\Gamma$ model, indicating again that the inclusion of an early episode
of star formation is the minimal requirement to explain
the stellar population in low-mass galaxies.
Note that the amount of stars in the middle-age population
(4-8 Gyr) is relatively small both in the predictions of
the $\Gamma$+B/preferred SFH and in the results based on resolved stars.
This is not well reproduced in hydrodynamic simulations
\citep{Digby2019}, suggesting that the observational data
can provide important constraints on the formation and evolution
of low-mass galaxies.
\subsubsection{Radial dependence}
\label{ssec_radial}
\iffalse
\begin{figure*}
\centering
\includegraphics[height=0.6\textwidth]{figures/t_half_radial_errored_free.pdf}
\caption{The distribution of the half mass formation time inferred from the best-fit
$\Gamma$+B model. The three columns are for radial bins of [0.0-0.3]$R_e$,
[0.3-0.7]$R_e$, and [0.7-1.2]$R_e$ respectively, with results from the
entire galaxy shown in blue as a reference.
Top and bottom rows are for central and satellite galaxies,
respectively. Error bars are obtained from the jackknife re-sampling method.
Histograms are normalized to 1 over the bins adopted.}
\label{fig:t_half}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[height=0.65\textwidth]{figures/t_half_radial_errored_mix.pdf}
\caption{Similar to Fig.~\ref{fig:t_half}, but for the preferred SFH.}
\label{fig:t_half_preferred}
\end{figure*}
\fi
The spatially resolved spectra provided by MaNGA allow us to investigate
the SFHs in different parts of a galaxy. Here we use MaNGA spectra stacked
in four apertures: three radial bins, $[0.0-0.3)R_e$, $[0.3-0.7)R_e$,
and $[0.7-1.2)R_e$, plus the entire galaxy with all spaxels,
to investigate how the SFH varies with radius.
We derive the average SFH from the radially stacked spectra
using the $\Gamma$+B model
and the stepwise model. Fig.~\ref{fig:sfh_cum}
shows the cumulative distribution of the SFH obtained for the four
radial intervals. Results are shown for the $\Gamma$+B model
(top panels) and the preferred SFH (bottom panels),
and separately for central and satellite galaxies.
It is seen that the old stellar population exists not
only in the central part, but spreads over the entire galaxy,
making a significant contribution to the total stellar mass.
Satellites form their stars slightly earlier than
centrals and this is true for all radii. In addition, there
is an indication that stars in the innermost part on
average formed earlier, by $\sim 1\,{\rm Gyr}$, than
in the outer part. However, the signal for the age
gradient is too weak and the uncertainty in the result is
too large to draw a definite conclusion.
\subsection{Constraints from NIR}
\label{ssec_nir}
\begin{figure*}
\centering
\includegraphics[height=0.4\textwidth]{figures/nir_example.pdf}
\caption{An example showing the changes in the inferred SFH when the
UKIDSS NIR colour is included in the fitting. The left panel shows the optical
image of the galaxy, and the right panel shows the best-fit SFHs
from different models. Red and blue lines show the results from the
$\Gamma$+B and $\Gamma$ models, respectively. Dashed lines are
results from the stacked MaNGA spectra alone, while solid lines are
results that also include the $(g-K)$ colour.
\label{fig:SFH_nir}
\end{figure*}
The results presented above are obtained using MaNGA optical
spectra as constraints. The SSP templates from the E-MILES model
in fact have wavelength coverage from 1600 {\AA} to $5\,{\rm \mu m}$.
Thus, these SSP templates allow us to predict the fluxes in
both optical and NIR bands once a set of model parameters is given.
By comparing the predicted colour with observations, we can get additional
constraints on the stellar population of galaxies.
Compared to spectroscopic observations, broad band measurements
are much easier to make, although they may lose some spectral information.
As old stars emit the majority of their light in the NIR,
a combination of optical spectra and
NIR photometry is expected to provide better constraints on
the SFH model which includes early star formation.
In this subsection, we demonstrate this
using the combination of UKIDSS $K$-band flux and MaNGA optical
spectrum.
The fitting example of mock spectra in Fig.~\ref{fig:SFH_example}
illustrates the improvement of the derived SFH when the $(g-K)$ colour
is included in the fitting. As a real example, Fig.~\ref{fig:SFH_nir}
shows the fitting results for MaNGA 9876-3703, one of the galaxies
in our sample.
From the optical image shown in the left panel, one can see that
this galaxy is quite blue, indicative of ongoing star formation.
Indeed, if we assume a simple $\Gamma$ model to fit
its MaNGA spectra, the SFH obtained is dominated by recent star formation,
with almost no population older than 8 Gyr, as shown by the blue lines.
In contrast, the use of the $\Gamma$+B model reveals the existence of
a significant old stellar population. However, neither of
the two models can perfectly recover the observed
$(g-K)$ colour for this galaxy. Using the posterior distribution of
the model parameters obtained from the E-MILES templates,
the best fits of the $\Gamma$ and $\Gamma$+B models predict
$(g-K)=2.08$ and $(g-K)=2.22$, respectively, while the measurement
from WSA is $(g-K)=2.43$. These results indicate that
the lack of constraints from NIR may still lead to biased inferences
of the stellar population, even if a proper SFH is assumed.
In order to check the effects of including the NIR photometry,
we perform a new set of fits, adopting the modified likelihood
function described by equation (\ref{eq:likelyhood_NIR}).
The results obtained for MaNGA 9876-3703 are plotted in
Fig.~\ref{fig:SFH_nir} with solid lines. The inclusion of
the $(g-K)$ colour changes the fitting result of
the $\Gamma$+B model substantially, in that the fraction
of the old population increases significantly.
The predicted $(g-K)$ colour is $2.38$, very close to the
observed value. In contrast, the result obtained from the $\Gamma$ model
is not affected as much by the inclusion of the NIR data,
with $(g-K)=2.27$, which is still too blue.
We apply the same fitting to all 19 galaxies that have UKIDSS NIR
photometry. The green triangles in Fig.~\ref{fig:evratio} show the
evidence ratio between the $\Gamma$+B and the $\Gamma$ models
as a function of SNR of the stacked spectrum. Yellow stars
are the median of the values in two SNR bins, divided at ${\rm SNR}=40$.
Compared to the red dots that only use the MaNGA stacked
spectra, there is a significant increase in the evidence ratio, indicating
an improved ability to discriminate between the two SFH models.
The red and blue dashed lines in Fig.~\ref{fig:cum_SFH}
show the cumulative SFHs for the NIR sample obtained from the
best-fit $\Gamma$+B and $\Gamma$ model, respectively.
Compared to the solid lines that show results using MaNGA spectra
alone, the result obtained with the $\Gamma$+B model including
the NIR data shows a significant increase in the stellar mass
formed more than 8 Gyr ago, although the uncertainty is large
owing to the limited sample size. In contrast, the result
obtained by the $\Gamma$ model does not change much.
In Fig.~\ref{fig:compare_nir}, we compare the half mass formation
times and the fractions of old (>8 Gyr) population derived from
optical-only and optical plus NIR data. Results are shown for both the $\Gamma$+B model (squares) and the stepwise model (stars). We
also indicate the preferred SFH with a circle.
It is seen that for most galaxies, the inclusion of
the NIR constraint tends to increase $t_{\rm half}$,
or equivalently, to produce a higher fraction
in the old population. This suggests that the
old fraction inferred from the optical spectra
alone may be an underestimate.
\begin{figure}
\centering
\includegraphics[height=0.4\textwidth]{figures/t_half_compare_nir_colored_mix.pdf}
\includegraphics[height=0.4\textwidth]{figures/fraction_compare_nir_colored_mix.pdf}
\caption{Comparison of the half mass formation time (top) and the fraction
of old population (bottom) between fitting only the MaNGA stacked
spectra (horizontal axis) and fitting both optical spectra and
NIR photometry (vertical axis), colour coded by the $(g-K)$
colour of galaxies. Squares are results obtained from the $\Gamma$+B model, while stars are results from the stepwise model.
Results from the same galaxy are linked by a blue dashed line,
while the preferred SFH model for each galaxy is marked by
a red circle.}
\label{fig:compare_nir}
\end{figure}
However, we should point
out that these results also raise a concern.
The tension between the predictions based on optical spectra
and NIR photometry indicates that the model adopted may not be
sufficiently general. To examine this problem in more detail,
we apply the posterior predictive check method to the sample
of 19 galaxies with NIR photometry. The details are presented in
Appendix \ref{appedix_nir}.
In general, we find that the model is often over-constrained
by the optical spectra, so that the posterior predictive distribution
(PPD) of the NIR photometry is very narrow and the observed NIR data
is almost always rejected by the posterior predictive check (PPC).
In Bayesian statistics, the inferences obtained from a data set
apply only to the model (hypothesis) assumed. If the model is
not general enough to accommodate all the information in the data,
inferences can still be made, but only within the assumed model.
In such a case, one might want to use all available data,
in the hope of obtaining a balance between the different constraints.
In summary, the combination of the NIR photometry and optical
spectra provides additional evidence that an old stellar population
formed early in most of the low-mass galaxies. However, due to the limitation
of the assumed model, the current analysis is unable to reach a
quantitative consistency between the optical-only and
optical$+$NIR results. This should be kept in mind when
one is concerned with the details of the inferred SFH.
\begin{figure*}
\centering
\includegraphics[height=70mm]{figures/ev_free_sc.pdf}
\caption{
The evidence ratio between the $\Gamma$+B and $\Gamma$+2B models (left panel)
and between the $\Gamma$+2B and $\Gamma$+3B models (right panel)
as a function of SNR of the stacked spectra. Each red dot stands for
the result of a MaNGA galaxy. Blue stars connected by a blue line
are the median values in five SNR bins. Error bars are $1\sigma$ scatter among galaxies in individual
bins.
}
\label{fig:sfh_compare_scburst}
\end{figure*}
\begin{figure}
\centering
\includegraphics[height=0.7\textwidth]{figures/compare_scburst_thalf.pdf}
\caption{
The distribution of the difference in the half mass formation time
between the $\Gamma$+B and $\Gamma$+2B models (top), and
the distribution of the difference normalized by the
inference uncertainty (bottom). The mean value and standard deviation
are indicated for the histogram in each panel.
}
\label{fig:sfh_compare_scburst_thalf}
\end{figure}
\section{Uncertainties}
\label{sec:Uncertainties}
The results obtained above are based on the analysis of
MaNGA spectra and UKIDSS NIR photometry with the use of BIGS.
We adopt the state-of-the-art SPS model based on the E-MILES
SSP templates, and assume three models for the SFH.
In this section we examine further potential uncertainties
in different parts of our analysis.
\subsection{SFH models}
We first examine whether or not our SFH model is sufficient to
characterise the SFH of real galaxies.
To this end, we first extend our SFH model by including additional
bursts in the fitting. Models that have two and three bursts are
referred to as the $\Gamma$+2B model and $\Gamma$+3B model, respectively. Fig.~\ref{fig:sfh_compare_scburst} shows the evidence ratio
between these models as a function of SNR. Compared with
the $\Gamma$+B model, the $\Gamma$+2B model is preferred by
some galaxies, but the global trend for this preference is mild.
The evidence ratio between $\Gamma$+2B and $\Gamma$+B is much smaller than
that between $\Gamma$+B and $\Gamma$ shown in Fig.~\ref{fig:evratio},
indicating a weaker preference for more complex models.
Inspecting the evidence ratio between
the $\Gamma$+2B and $\Gamma$+3B models, one can see that,
even for galaxies which prefer a second burst, a third burst
is unnecessary. These results show again that, given the current
data quality and the SPS model, the $\Gamma$+B model seems
to be sufficient.
We further test whether the additional burst changes our
basic conclusion. To this end, we estimate the difference between
$t_{\rm half}$ derived from SFH models with one and two bursts,
denoted by $\Delta t_{\rm half}$, for individual galaxies.
In addition, we compare $\Delta t_{\rm half}$ with the
inference uncertainty:
${\rm err}_{t_{\rm half}} \equiv
\left(\sigma^2_{t_{\rm half},\,\Gamma+{\rm B}} +
\sigma^2_{t_{\rm half},\,\Gamma+{\rm 2B}}\right)^{1/2}$,
where $\sigma_{t_{\rm half},\,\Gamma+{\rm B}}$ and
$\sigma_{t_{\rm half},\,\Gamma+{\rm 2B}}$ are the standard deviations
of the posterior distributions of $t_{\rm half}$
inferred from the $\Gamma$+B and $\Gamma$+2B models, respectively.
We use the ratio between $\Delta t_{\rm half}$ and ${\rm err}_{ t_{\rm half}}$
to characterize how well the differences in the derived
$t_{\rm half}$ are accounted for by the inference uncertainty.
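The comparison described above amounts to the following quadrature sum and ratio (a trivial sketch; the variable names are ours):

```python
import numpy as np

def thalf_tension(t_b, sig_b, t_2b, sig_2b):
    """Difference in t_half between the one- and two-burst models,
    and that difference normalized by the combined posterior
    uncertainty (standard deviations added in quadrature)."""
    delta = t_b - t_2b
    err = np.hypot(sig_b, sig_2b)  # sqrt(sig_b**2 + sig_2b**2)
    return delta, delta / err
```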
The results plotted in Fig.~\ref{fig:sfh_compare_scburst_thalf}
show that adding a new burst to the model has an almost negligible
effect on the derived $t_{\rm half}$. The differences between the
derived $t_{\rm half}$ are only slightly larger than the
inference uncertainty, indicating that the $\Gamma$+B
model is an acceptable choice for our purpose.
We also briefly discuss the differences between the
parametric and non-parametric models. Non-parametric models avoid
the limitation imposed by assuming a specific functional form,
but the total number of time intervals (time resolution) is usually
limited by the data. In practice, codes that focus on spectral
fitting and stellar population parameters,
such as STARLIGHT \citep{STARLIGHT} and pPXF \citep{Cappellari2004},
usually adopt the non-parametric approach, while the ones
that focus on constraining model parameters, such as
CIGALE \citep{Noll2009,Boquien2019} and BEAGLE \citep{Chevallard2016},
prefer the parametric approach. Our analysis uses a stepwise model
with 7 time intervals. As shown by both the mock tests
and the fitting to real data, the stepwise model in
general gives results similar to the $\Gamma$+B model.
This consistency indicates that, as long as the SFH model is
sufficiently flexible to describe the major stellar populations,
the results from our spectral fitting are robust.
For brevity, we will use the $\Gamma$+B model
for the rest of our discussion.
\begin{figure}
\includegraphics[height=70mm]{figures/ev_bc_gamma_free.pdf}\\
\includegraphics[height=70mm]{figures/ev_mix_free.pdf}\\
\caption{The evidence ratio between the $\Gamma$+B SFH and
the $\Gamma$ SFH as a function of SNR of the stacked spectra,
derived from the BC03 model (top panel) and the mixed model
(bottom panel). Each red dot stands for the result of a MaNGA
galaxy. Blue stars are the median values in five SNR bins
connected by a blue line. Error bars are $1\sigma$ scatter
of galaxies in individual bins.
}
\label{fig:evratio_bc}
\end{figure}
\subsection{SSP models}
\label{ssec:discussion_ssp}
Spectral modelling is based on the linear combination of SSP templates.
Thus, the accuracy and completeness of the SSP templates
can influence fitting results. Unfortunately, the uncertainties
of the SSP templates are not well-understood.
As a test of this uncertainty, we use the BC03 model to perform a
consistency check. In contrast to the E-MILES model, the BC03 SSPs
are constructed with the STELIB \citep{Borgne2003} empirical
stellar templates. Models assuming Padova isochrones \citep{Bertelli1994}
and the Chabrier IMF are used in the comparison.
The top panel of Fig.~\ref{fig:evratio_bc} shows the evidence ratio
between the $\Gamma$+B and $\Gamma$ models, derived from the
fitting with BC03 templates, as a function of the SNR.
The trend seen here is similar to that shown in Fig.~\ref{fig:evratio},
although it is slightly weaker.
We have also made a test using a mixture of E-MILES and BC03.
In this model, the model fluxes are calculated as
$F=F_{\rm BC}\times f_{\rm BC}+F_{\rm EM}\times (1-f_{\rm BC})$,
where $F_{\rm BC}$ and $F_{\rm EM}$ are the fluxes obtained
from the BC03 and E-MILES templates. The relative contribution of
BC03 is described by $f_{\rm BC}$, which is treated as a free parameter.
The evidence ratio derived from this model, shown
in the bottom panel of Fig.~\ref{fig:evratio_bc},
is found to lie between the E-MILES and BC03 results.
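The mixed model above is a per-wavelength linear combination of the two template fluxes, which can be sketched as follows (the function name and arrays are illustrative; in practice the combination is evaluated inside the fitting code with $f_{\rm BC}$ as a free parameter):

```python
import numpy as np

def mixed_flux(f_bc, flux_bc, flux_em):
    """Mixture of BC03 and E-MILES model fluxes evaluated on a common
    wavelength grid; f_bc in [0, 1] is the free mixing parameter."""
    return f_bc * np.asarray(flux_bc) + (1.0 - f_bc) * np.asarray(flux_em)
```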
We plot the cumulative SFH inferred from the best-fit $\Gamma$+B model
using E-MILES, BC03 and the mixture of the two in Fig.~\ref{fig:sfh_compare_bc}.
The underestimation of the old stellar population by the
simple $\Gamma$ SFH can be seen easily by comparing it with
the $\Gamma$+B SFH, regardless of the assumed SSP model.
For the $\Gamma$+B SFH, the inferred shapes of the SFH from the two
SSP models are in overall
agreement with each other. The prediction of the mixture model falls
between the two; it is closer to E-MILES at early times
and becomes closer to BC03 at later times.
All the models reveal the presence of an old stellar
population that dominates the stellar mass,
but they differ in detail in their predictions
of the burst strength and age.
These differences can be
caused by the different constructions of the two SSP
models, such as the underlying stellar templates,
the isochrones used to calculate the SSPs,
and even the methods used to populate stars in
parameter space, but it is beyond the scope of the
present paper to figure out the exact difference between
the two SSP models. However, our test does show that
the results obtained above are qualitatively robust
against the variation of the SSP templates, but that
quantitative details can be affected significantly
by using different SSP models.
\begin{figure}
\centering
\includegraphics[height=0.45\textwidth]{figures/compare_mix_revised.pdf}
\caption{
The cumulative SFH inferred from the best-fit SFH models using
E-MILES templates (dash line), BC03 templates (dash-dotted line),
and the mixed model (solid line). Lines show the mean SFHs of the sample,
while the shaded region shows the scatter among sample galaxies
obtained from the mixed model.
Red and blue colours are for the $\Gamma$+B and $\Gamma$ models, respectively.
}
\label{fig:sfh_compare_bc}
\end{figure}
\subsection{The NIR photometry}
\label{ssec:discussion_nir}
The accuracy of the predicted NIR photometry depends on the
accuracy of the SSP templates in the NIR. For the E-MILES model,
the SSPs at $\lambda>8950$ {\rm \AA} are constructed
using the empirical IRTF stellar template \citep{Cushing2005,Rayner2009}.
This treatment provides self-consistent E-MILES SSP spectra
with moderately high resolution ($\sigma=60$\kms) in the NIR.
In contrast, the BC03 model uses theoretical, low resolution
NIR spectra of BaSeL \citep{Westera2002}. This difference
may affect the predicted NIR photometry. To test this uncertainty,
we apply the BC03 model to the sample that has both MaNGA and
UKIDSS data. We plot the half mass formation look-back time,
$t_{\rm half}$, and the fraction of old (>8 Gyr) stellar
population derived from optical only data and
optical plus NIR data in Fig.~\ref{fig:compare_nir_bc}.
Comparing with the results shown in Fig.~\ref{fig:compare_nir},
we see that the two SSP models reach the same conclusion
that the inclusion of NIR data enhances the significance
of the old stellar population.
\begin{figure}
\centering
\includegraphics[height=0.4\textwidth]{figures/t_half_compare_nir_bc_colored.pdf}
\includegraphics[height=0.4\textwidth]{figures/fraction_compare_nir_bc_colored.pdf}
\caption{Comparison of the half mass formation time (top) and the fraction
of old population (bottom) obtained from fitting only the MaNGA stacked
spectra (horizontal axis) and fitting both optical spectra and
NIR photometry (vertical axis), using the BC03 templates.
Results are colour coded by the $(g-K)$ colour of the galaxy.}
\label{fig:compare_nir_bc}
\end{figure}
In addition to the BC03 model, here we mention briefly another SSP model
family, the SSP model of Maraston \citep{Maraston2005,Maraston2011}.
This model was first presented in \cite{Maraston2005} using the low-resolution
theoretical BaSeL stellar libraries, and an updated version using
new sets of stellar templates was published in \cite{Maraston2011}.
The Maraston model contains treatments of TP-AGB and HB stars
that are different from those in E-MILES and BC03.
These treatments lead to redder colours for SSPs with ages around 1 Gyr.
As the dwarf galaxies studied here have quite a significant population
of such ages, the difference in the predicted
$(g-K)$ colour can be as large as $\sim 0.15$.
Unfortunately, the Maraston model cannot be directly compared
with E-MILES. The high-resolution model, which is needed for
our modelling, is based on theoretical MARCS templates
that have very limited age and metallicity coverage.
The models based on empirical templates have relatively wider
coverage in age and metallicity, but they generally only cover the
optical range. For these reasons, it is difficult to make
a meaningful comparison with this model.
In summary, our current SPS modelling is sufficient to describe the stellar
population in the galaxies studied here, and our method is in general
robust against the variation in both the SSP model and the SFH model.
Thus, the main conclusions we have reached are robust, while
the details are still uncertain.
\section{Summary and discussion}
\label{sec:summary}
In this paper, we analyse the spectra of a sample of low-mass galaxies using
the Bayesian inference code BIGS. Our sample galaxies are selected from
the SDSS IV-MaNGA IFU survey, and some of the galaxies also have
NIR photometry obtained from the WSA catalogue. We stack the spatially
resolved spectra of each galaxy within a radius of $1.0 R_e$ to
obtain a representative high SNR spectrum for the galaxy.
We also stack spectra in three radial bins, $[0.0-0.3] R_e$,
$[0.3-0.7] R_e$ and $[0.7-1.2] R_e$, to study possible radial
variations. We analyse the stacked spectra using a full spectrum fitting
approach, making use of the MaNGA spectra from 3400 \AA \ to 8900 \AA.
In our analyses, we adopt the state-of-the-art E-MILES SSP templates,
and assume different SFH models to derive the stellar
population properties. We use Bayesian model selection to distinguish
between different models, and use the posterior distribution to constrain
model parameters. Our main results can be summarised as follows:
\begin{itemize}
\item
Based on Bayesian model selection, we demonstrate that low-mass
galaxies contain an old stellar population that may be missed
in results obtained from low-resolution spectra or an
overly restrictive model for the SFH.
\item
The best-fit SFHs from both parametric and non-parametric
models show that most of the low-mass galaxies formed more
than half of their stellar mass at $z>2$.
The half mass formation time and the cumulative SFH from
our spectral fitting of the unresolved stellar populations are in
good agreement with those obtained from resolved
stars \citep{Weisz2011}.
\item
The average mass fraction of the old stellar population derived from
SFH models that are sufficiently flexible
is as high as $\sim0.6$, while a simple $\Gamma$ model
significantly underestimates this fraction.
This result is consistent with the resolved observations
\citep{Weisz2011}, but inconsistent with some hydrodynamic
simulations \citep[e.g.][]{Digby2019}.
\item
Central galaxies on average have more recent star formation
than satellite galaxies, indicating that star formation in
satellite galaxies may be affected by their environment.
\item
The old stellar population in a low-mass galaxy
exists not only in the central part, but is spread over the
entire galaxy. On average, the variation of
the SFH with radius is rather weak.
\item
A model of SFH needs to be sufficiently flexible to reproduce
an observed optical spectrum and the $(g-K)$ colour simultaneously.
A higher fraction of the old stellar population is obtained
when the $(g-K)$ colour is included as an additional constraint.
However, the results obtained from the optical and NIR data
have some tension, indicating that the current spectral
synthesis model may not be sufficiently flexible to accommodate
the data.
\item
We test potential uncertainties both in the SSP model and the SFH model,
and find that our main results about the existence of an old stellar
population in low-mass galaxies are robust.
However, the details of the
SFH are still poorly constrained.
\end{itemize}
Although SFHs in low-mass galaxies have been studied
quite extensively, the underlying physical processes
that regulate star formation are still poorly
understood. For example, our results suggest that the SFH of
low-mass galaxies may consist of an early star formation
episode, where about half of the stellar mass was formed,
followed by a secondary and more extended phase of star formation.
This type of SFH is consistent with the empirical model
of \cite{Lu2014}, which predicts that many dwarf galaxies
have experienced a phase of rapid star formation
at $z>2$. This enhanced star formation at $z>2$ may be
related to the fast accretion of dark matter halos
\citep[e.g.][]{Zhao2003,MoMao2004} and seems to be required
by the observed upturn in the low-mass end of the
stellar mass function of galaxies \citep[][]{Lan2016},
but is not predicted by many models of galaxy formation
\citep[][]{Lim2017}. In particular, the mass fractions in
different stellar populations obtained from our analysis
are not well reproduced by current hydrodynamic simulations
\citep[e.g.][]{Digby2019}. These discrepancies suggest that the feedback
effect assumed in the model to suppress star formation in
low-mass halos may be too strong at high redshift.
Clearly, the observational constraints on the SFH
we obtain here can provide important information about
the feedback processes operating in the population of low-mass
galaxies.
Of all the approaches adopted to probe the SFH of low-mass galaxies,
the most reliable methods are perhaps those based on resolved stars.
However, such observations can be made only for a small number
of nearby galaxies. In contrast, methods based on stellar
population models can use a large sample of galaxies.
Among the approaches based on spectral synthesis,
SED fitting of broad-band photometry
\citep[e.g.][]{Janowiecki2017,Telles2018} and absorption line analysis
\citep[e.g.][]{Kauffmann2014} have been used to infer the SFHs of low-mass
galaxies. Compared to these two approaches, our method based on
full spectral fitting can in principle extract more information
from the spectra. However, SED fitting has the advantage
that multi-band photometry is easier to obtain.
As we have shown, analysis based on spectra with
limited wavelength coverage can result in biases in the inferred
stellar population. Our Bayesian analysis that combines MaNGA
optical spectra and NIR photometry from UKIDSS is an attempt to
overcome this difficulty. As shown in \S\ref{ssec_nir}, this
approach is promising in probing the stellar population in
low-mass galaxies, in particular in revealing the old population
that may be missed in earlier investigations.
However, it should also be noted that this approach is still in
its early stage, and more explorations are needed to take full
advantage of it. Accurate and self-consistent
SSP templates are crucial for this type of analysis.
As seen in \S\ref{sec:Uncertainties}, although variations
in the SSP model generally do not change our results qualitatively,
they do affect the details of inferences.
Care must be taken in calibrating different observations.
In the future, with improvements of our understanding about
stellar spectra and of stellar spectral templates,
the method and analysis proposed in this paper are
expected to provide an important avenue to explore the star
formation in low-mass galaxies.
\section*{Acknowledgements}
This work is supported by the National Key R\&D Program of China
(grant No. 2018YFA0404502, 2018YFA0404503), and the National
Science Foundation of China (grant Nos. 11821303, 11973030,
11761131004, 11761141012). MB acknowledges FONDECYT regular grant 1170618. G.R. is supported by the National Research Foundation of Korea (NRF) through Grants No.
2017R1E1A1A01077508 and 2020R1A2C1005655 funded by the Korean Ministry of Education,
Science and Technology (MoEST), and by the faculty research fund of Sejong University.
Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P.
Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions.
SDSS-IV acknowledges support and resources from the Center for High-Performance Computing at
the University of Utah. The SDSS web site is www.sdss.org.
\par
SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, the Chilean Participation Group, the French Participation Group, Harvard-Smithsonian Center for Astrophysics, Instituto de Astrof\'isica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, Lawrence Berkeley National Laboratory, Leibniz Institut f\"ur Astrophysik Potsdam (AIP), Max-Planck-Institut f\"ur Astronomie (MPIA Heidelberg), Max-Planck-Institut f\"ur Astrophysik (MPA Garching), Max-Planck-Institut f\"ur Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observat\'ario Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Aut\'onoma de M\'exico, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University.
\section*{Data availability}
The data underlying this article were accessed from: SDSS DR15 \url{https://www.sdss.org/dr15/}; UKIDSS \url{http://wsa.roe.ac.uk/}. The derived data generated in this research will be shared on reasonable request to the corresponding author.
\bibliographystyle{mnras}
\section{Introduction}
{\em Quadratic gravity} refers to theories generalizing General Relativity (GR) by adding terms quadratic in the curvature to the Lagrangian density. The motivations for such modifications go back several decades (see the critical paper \cite{Rose1991}), and today there is a general consensus that
modern string theory (see e.g. \cite{Gaume2015}) and other approaches to quantum gravity (see e.g. \cite{Olmo2011})
present that structure, even with higher powers of the curvature tensor, in their effective actions.
On the other hand, it is often convenient to have a description of concentrated
sources, that is, of concentrated matter and energy in gravity
theories. These concentrated sources represent for instance thin
shells of matter (or braneworlds, or domain walls) and impulsive
matter or gravitational waves. They can mathematically be modelled by
using distributions, such as Dirac deltas or the like, hence, one has
to resort to using {\em tensor distributions}. However, one cannot
simply assume that the metric is a distribution because products
of distributions are not well defined in general, and therefore the
curvature (and Einstein) tensor will not be defined. Thus, one must
identify the class of metrics whose curvature is defined as a
distribution, and such that the field equations make sense. For
sources on thin shells, the appropriate class of metrics were
identified in \cite{I,L,T} in GR, further discussed in
\cite{GT}. Essentially, these are the metrics which are smooth except
on localized hypersurfaces where the metric is only continuous.
We carry on a similar program in the most general quadratic theory of
gravity, where extra care must be taken: the field equations, as well
as the Lagrangian density, contain products of Riemann tensors, and,
moreover, their second derivatives. Therefore, the {\em singular
distributional part} ---such as the Dirac deltas--- cannot arise in
the Riemann tensor itself, which can have at most finite jumps except in some very exceptional situations. We identify these and then concentrate on the generic, and more relevant, situation,
performing a detailed calculation using the rigorous calculus of tensor
distributions (see the Appendices for definitions and fundamental
formulas with derivations) to obtain the energy-momentum quantities on
the shells. They depend on the extrinsic geometrical properties of the
hypersurface supporting it, as well as on the possible discontinuities
of the curvature and their derivatives.
Surprisingly, and as already demonstrated in \cite{Senovilla13,Senovilla14,Senovilla15}, a
contribution of ``dipole'' type also appears in the energy-momentum
content supported on the shell. This is what we call a double layer,
in analogy with the terminology used in classical electrodynamics
\cite{J} for the case of electrodipole surface
distributions. This analogy makes the interpretation of these double layers somewhat mysterious, as there are no negative masses ---and thus no mass dipoles--- in gravitation.
One of our purposes is to shed some light into this new mystery. From our results and those in \cite{Senovilla13,Senovilla14,Senovilla15}, these double layers seem to arise when abrupt changes in the Einstein tensor occur.
We also find the field equations obeyed by all these energy-momentum
quantities, which generalize the traditional Israel equations
\cite{I}, and describe the conservation of energy and
momentum. Actually, we explicitly prove that the full energy-momentum tensor
is divergence-free (in the distributional sense) by virtue of the
mentioned field equations.
Previous works on junction conditions in quadratic gravity include
\cite{BD,D,DSS,BF} ---see also \cite{DD,GW} for the Gauss-Bonnet
case---, but none of them provided the correct full field equations
with matter outside the shell, and they all missed the double-layer
contributions, which are fundamental for the energy-momentum
conservation.
This may be due to the
widespread use of Gaussian coordinates adapted to the thin shell: these
coordinates prevent one from making a mathematically sound analysis of the
distributional part of the energy-momentum tensor, as the derivatives
of the Dirac delta supported on the shell seem to be ill-defined in
those coordinates. This is explained in detail in Appendix
\ref{App:E}.
The paper
is structured as follows. In Section \ref{sec:matching} we present
a purely geometric review on spacetimes with distributional curvature constructed by joining
smooth spacetimes. The quadratic gravity field equations are introduced in
Section \ref{sec:quadratic_grav}, where the proper junction conditions for the
description of thin shells (layers) are found. This is achieved by using distributional calculus,
briefly reviewed in the Appendices. In Section \ref{sec:compute_deltas},
the matter content supported on the layer, i.e.
the distributional part of the global energy momentum tensor, is found
to contain a ``usual'' Dirac-delta term $\widetilde{T}_{\mu\nu}\delta^\Sigma$ together with another contribution of double-layer type as
mentioned above; the latter is denoted by $\underline{t}_{\mu\nu}$.
Then, both $\widetilde{T}_{\mu\nu}$ and $\underline{t}_{\mu\nu}$ are computed
in terms of geometrical quantities: the curvatures at either side of the layer and the extrinsic and intrinsic geometry of the hypersurface supporting it.
The tensor $\widetilde{T}_{\mu\nu}$ is decomposed into the proper energy momentum of the shell
$\tau_{\alpha\beta}$, external flux momentum $\tau_\alpha$ and external pressure (or tension) $\tau$
corresponding to the completely tangent, tangent-normal and normal parts respectively.
The double layer energy-momentum tensor distribution is found to resemble the energy-momentum content of
a dipole surface charge distribution with strength $\mu_{\alpha\beta}$.
This strength depends on the jump of the Einstein ---or, equivalently, the Ricci--- tensor at the layer.
The allowed jumps of the curvature (and its derivatives up to second order)
at the layer are determined in Section \ref{sec:nG2}, again from a purely geometrical perspective.
The general quadratic gravity field equations are obtained in Section \ref{sec:field_eqs_S}. These are the inherited field equations on the layer, and they involve $\tau_{\alpha\beta}$, $\tau_\alpha$, $\tau$
and $\mu_{\alpha\beta}$ together with jumps on the layer of the spacetime energy-momentum tensor. These fundamental equations are the generalization of the Israel equations in GR
to the general quadratic gravity theories. The covariant conservation of the full energy-momentum tensor with its distributional parts
is explicitly demonstrated in Section \ref{section:divergence}, where we discuss how the double layer
term is necessary for that.
The field equations on the layer are analysed and further discussed in Section \ref{sec:8},
where a classification of the junction conditions into the following cases is presented:
proper matching, thin shells with no double layers, and pure double layers.
In particular we find that if there is no double layer, then no external flux momentum $\tau_\alpha$
nor external tension $\tau$ can exist.
Finally, in Section \ref{sec:consequences} some comparisons with the general GR case,
and particular matchings of spacetimes, are provided.
It is found that any GR solution containing a proper matching
hypersurface will contain a double layer and/or a thin shell at the matching hypersurface
if the true theory is quadratic. Therefore, if any quantum
regimes require, excite or switch on quadratic terms in the Lagrangian density, then GR
solutions modelling two regions with different matter contents will develop thin shells and
double layers on their interfaces.
In order to have a self-contained text, we devote some Appendices to review
distributional calculus in manifolds and to present some
useful general calculations with distributions. On the other hand,
we present in Appendix \ref{App:E} a discussion, which we hope is
clarifying, of the difficulty, and in fact inconvenience,
of using Gaussian coordinates for dealing with layers in quadratic Lagrangian
theories, as it has been often done in the literature.
\section{Junction: spacetimes with distributional curvature}\label{sec:matching}
The space-time is given by an $(n+1)$-dimensional Lorentzian manifold $(V,g)$.
Let us consider the case where $(V,g)$ possesses two different regions, say with different matter contents or different gravitational fields, separated by a border. This border will locally be a hypersurface $\Sigma\subset V$ which can have any causal character, the physically more interesting case arising when it is {\em timelike}, which we will assume throughout this paper. $\Sigma$ divides the manifold $V$ into two regions $V^\pm$, as shown schematically in Fig.~\ref{fig:glued}.
\begin{figure}[!ht]
\includegraphics[height=8cm]{glued}
\caption{\footnotesize{Schematic diagram of the situation under consideration: $\Sigma$ is a timelike hypersurface separating two regions of the space-time, $V^+$ and $V^-$, with corresponding smooth metrics $g^+$ and $g^-$. These two metrics also have well defined, definite limits when approaching $\Sigma$. If, and only if, the first fundamental forms inherited by $\Sigma$ from $V^+$ and $V^-$ agree, one can build a local coordinate system such that the entire metric is continuous across $\Sigma$ too. In that case, one can define a unique unit normal $n^\mu$, which we choose to point from $V^-$ towards $V^+$, as shown.}}
\label{fig:glued}
\end{figure}
The metrics $g_{\mu\nu}^\pm$ are assumed to be smooth on $V^\pm$ respectively ($\mu,\nu,\dots = 0,1,\dots,n$). An important observation is that one can actually deal with two different coordinate systems on $V^\pm$. In fact, this is {\em needed} in most practical problems, as one is usually given two distinct solutions of the field equations that are to be matched: for instance, one solution describing an interior with matter and another describing vacuum; or a background solution upon which a localized perturbation, such as a wave front or a shell of matter, propagates. Thus, we will be presented with two sets of local coordinates $\{x^\mu_{\pm}\}$ {\em with no relation whatsoever}, each valid on the corresponding part $V^\pm$ \cite{I}.
Two corresponding timelike hypersurfaces $\Sigma^{\pm}\subset V^{\pm}$ which bound the regions $V^{\pm}$ must be chosen on each $\pm$-side to be matched. Of course, these two hypersurfaces are to be identified in the final glued spacetime, so that they must be diffeomorphic. The junction of $V^+$ with $V^-$ by identifying $\Sigma^+$ with $\Sigma^-$
depends crucially on the particular diffeomorphism used for this identification, hence we assume that this has already
been chosen and is known. The glued global manifold $V$ is defined as the disjoint union of $V^{+}$ and $V^{-}$ with diffeomorphically related points of $\Sigma^{+}$ and $\Sigma^{-}$ identified. This unique
hypersurface is the {\em matching hypersurface} we denote simply by $\Sigma$.
Let $\{\xi^a\}$ be a set of local coordinates on $\Sigma$ ($a,b,\dots =1,\dots , n$). Then, there are two parametric representations
$$
x_{\pm}^\mu =x_{\pm}^\mu (\xi^a)
$$
of $\Sigma$, one for each imbedding into each of $V^\pm$. As explained in Appendix \ref{App:B}, in order to have well defined curvature tensors in the sense of distributions we need a global metric which is at least continuous across $\Sigma$. As is known \cite{CD,MS}, this happens if and only if the two first fundamental forms $h^{\pm}$ of $\Sigma$ inherited from both sides $V^\pm$ agree. This agreement requires the equalities on $\Sigma$
\begin{equation}
h^+_{ab} = h_{ab}^- ,\hspace{1cm}
h^\pm_{ab} \equiv g_{\mu\nu}^\pm (x(\xi))\frac{\partial x_\pm^\mu}{\partial \xi^a}\frac{\partial x_\pm^\nu}{\partial \xi^b}
\label{h=h}
\end{equation}
and implies that one can build local coordinate systems in which the metric can be extended to be continuous across $\Sigma$.
The unique metric defined on the entire manifold that coincides with
$g^{\pm}$ in the respective $V^{\pm}$ and is continuous across $\Sigma$ is denoted simply by $g$.
Let $n^\pm_\mu$ be the unit normals to $\Sigma$ as seen from $V^\pm$ respectively. They are fixed up to a sign by the conditions
$$
n^\pm_\mu \frac{\partial x_\pm^\mu}{\partial \xi^a}=0, \quad n^\pm_\mu n^{\pm\mu}=1
$$
and one must choose one of them (say $n^-_\mu$) pointing outwards from $V^-$ and the other ($n^+_\mu$) pointing towards $V^+$. Hence, the two bases on the tangent spaces at any point of $\Sigma$
$$
\{n^{+\mu},\frac{\partial x_+^\mu}{\partial \xi^a}\} \quad \quad \leftrightarrow \quad \quad\{n^{-\mu},\frac{\partial x_-^\mu}{\partial \xi^a}\}
$$
agree and are then identified, so we drop the $\pm$ (even though, in explicit calculations, one can still use both versions using the two coordinate systems on each side). We denote by $\vec{e}_a$ the vector fields tangent to $\Sigma$ defined by the above imbeddings
$$
\vec{e}_a := \left.\frac{\partial x_+^\mu}{\partial \xi^a}\frac{\partial}{\partial x^\mu_+}\right|_\Sigma =\left.\frac{\partial x_-^\mu}{\partial \xi^a}\frac{\partial}{\partial x^\mu_-}\right|_\Sigma .
$$
Note that $\{\vec{e}_a\}$ are defined only on $\Sigma$. The basis dual to $\{n^\mu,e^\mu_a\}$ is denoted by
$$
\{n_\mu, \omega^a_\mu\}
$$
where the one-forms $\bm{\omega}^a$ are characterized by
$$
n^\mu \omega^a_\mu =0, \hspace{1cm} e^\mu_b \omega^a_\mu =\delta^a_b .
$$
The space-time version of the first fundamental form, which is now unique due to (\ref{h=h}), is given by the projector to $\Sigma$ (defined only on $\Sigma$)
\begin{equation}
h_{\mu\nu}=g_{\mu\nu}-n_\mu n_\nu .\label{proj}
\end{equation}
Notice that
$$
n^\mu h_{\mu\nu} =0, \hspace{1cm} h_{\mu\rho} h^\rho{}_\nu =h_{\mu\nu}, \hspace{1cm} h^\mu{}_\mu =n, \hspace{1cm} h_{\mu\nu}e^\mu_a e^\nu_b =h_{ab}
$$
and that
$$
e^\mu_a =h_{ab}\, \omega^b_\nu \, g^{\nu\mu} , \hspace{1cm} e^\mu_c \omega^c_\nu =h^\mu_\nu \, .
$$
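These projector identities are straightforward to verify numerically. The following minimal sketch (Python/NumPy, purely illustrative and not part of the paper; the flat-signature metric and the particular spacelike unit normal are arbitrary example choices) checks orthogonality, idempotency and the trace:

```python
import numpy as np

# Flat (n+1)-dimensional metric, signature (-,+,...,+), with n+1 = 4
g = np.diag([-1.0, 1.0, 1.0, 1.0])
g_inv = np.linalg.inv(g)

# An example unit spacelike normal: n_mu n^mu = +1
n_lo = np.array([0.0, 0.0, 0.0, 1.0])   # n_mu
n_up = g_inv @ n_lo                     # n^mu
assert np.isclose(n_lo @ n_up, 1.0)

# Projector h_{mu nu} = g_{mu nu} - n_mu n_nu
h_lo = g - np.outer(n_lo, n_lo)
h_mix = g_inv @ h_lo                    # h^mu_nu

# n^mu h_{mu nu} = 0
assert np.allclose(n_up @ h_lo, 0.0)
# h^mu_rho h^rho_nu = h^mu_nu (idempotent)
assert np.allclose(h_mix @ h_mix, h_mix)
# h^mu_mu = n (here n = 3, the dimension of Sigma)
assert np.isclose(np.trace(h_mix), 3.0)
print("projector identities verified")
```

The same check goes through for any unit spacelike normal, since the identities are algebraic consequences of $n_\mu n^\mu=1$.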
Despite all the above, the extrinsic curvatures, or second fundamental forms, inherited by $\Sigma$ from both sides $V^\pm$ will be, in principle, different, because the derivatives of the metric are not continuous in general. We denote them by $K^\pm_{\mu\nu}$, and they are defined, as usual, by
$$
K^\pm_{\mu\nu}:= h^\rho{}_{\nu}h^\sigma_\mu \nabla^\pm_\rho n_\sigma \hspace{1cm} K^\pm_{\mu\nu}=K^\pm_{\nu\mu}
$$
where only tangent derivatives are involved. Obviously $n^\mu K^\pm_{\mu\nu}=0$, thus only the $n(n+1)/2$ components tangent to $\Sigma$ do not vanish identically.
In terms of the imbeddings these components are given by
\begin{equation}
K^\pm_{ab} \equiv -n^\pm_\mu \left(\frac{\partial^2 x^\mu_\pm}{\partial\xi^a\partial\xi^b}+\Gamma^{\pm \mu}_{\rho\sigma}\frac{\partial x_\pm^\rho}{\partial \xi^a} \frac{\partial x_\pm^\sigma}{\partial \xi^b}\right), \label{2FF}
\end{equation}
which is adapted to explicit calculations. These components correspond to the second fundamental form, defined as a tensor
in $\Sigma$ by
$$
K^\pm_{ab} =-n_\mu e^\rho_a\nabla^\pm_\rho e^\mu_b =e^\mu_b e^\rho_a\nabla^\pm_\rho n_\mu \, .
$$
As shown in the Appendix \ref{App:C}, the Riemann tensor can be computed in the distributional sense and acquires, in general, a singular part proportional to the distribution $\delta^\Sigma$ supported on $\Sigma$ ---which is defined in Appendix \ref{App:B}---:
\begin{equation}
\underline{R}^\alpha{}_{\beta\mu\nu}=R^{+\alpha}{}_{\beta\mu\nu}\underline{\theta} + R^{-\alpha}{}_{\beta\mu\nu}(\underline{1}-\underline{\theta})+\delta^\Sigma H^\alpha{}_{\beta\mu\nu} .\label{Riedist}
\end{equation}
Here
$H^\alpha{}_{\beta\mu\nu}$ is called the singular part of the Riemann tensor distribution and as shown in the Appendix \ref{App:C}
reads
$$
H^\alpha{}_{\beta\lambda\mu}= n_{\lambda}\left
[\Gamma^{\alpha}_{\beta\mu} \right ] - n_{\mu} \left [
\Gamma^{\alpha}_{\beta\lambda} \right ]
$$
where the square brackets always denote the jump of the enclosed object across $\Sigma$ according to the definition (\ref{discont}) given in Appendix \ref{App:B}.
We can provide a more interesting formula for this singular part. First, note that from the general formula for the discontinuities of derivatives (\ref{discf}) in Appendix \ref{App:D2} we have
$$
\left[\partial_\alpha g_{\mu\nu} \right]= n_\alpha \zeta_{\mu\nu}
$$
for some symmetric tensor field $\zeta_{\mu\nu}$ defined only on $\Sigma$. This immediately gives
\begin{equation}
\left [ \Gamma^{\alpha}_{\beta\lambda} \right ] =\frac{1}{2} \left(\zeta^\alpha{}_\beta n_\lambda +\zeta^\alpha{}_\lambda n_\beta -n^\alpha \zeta_{\beta\lambda} \right)\label{Gammadisc}
\end{equation}
which implies
\begin{equation}
H_{\alpha\beta\lambda\mu} = \frac{1}{2}\left(- n_{\alpha}\zeta_{\beta\mu}n_{\lambda}+n_{\alpha}\zeta_{\beta\lambda}n_{\mu}-n_{\beta}\zeta_{\alpha\lambda}n_{\mu}+n_{\beta}\zeta_{\alpha\mu}n_{\lambda} \right).\label{HRie0}
\end{equation}
Note that this expression is invariant under the change $\zeta_{\mu\nu}\longrightarrow \zeta_{\mu\nu}+n_\mu X_\nu +n_\nu X_\mu$ for arbitrary $X_\mu$ and thus only the part of $\zeta_{\mu\nu}$ {\em tangent} to $\Sigma$ enters into the formula. Actually, one can prove the existence of $C^1$, piecewise $C^\infty$, changes of coordinates that remove any normal part of $\zeta_{\mu\nu}$ arising in (\ref{Gammadisc}) ---see, e.g., \cite{MS}. Thus, from now on we assume that such a change has been performed and we will restrict ourselves to assuming that $\zeta_{\mu\nu}$ is tangent to $\Sigma$: $n^\mu \zeta_{\mu\nu} =0$. But using (\ref{2FF}) together with (\ref{Gammadisc}) we deduce
\begin{equation}
K^+_{ab} -K^-_{ab} = -n_\mu \left[\Gamma^\mu_{\rho\sigma}\right] e^\rho_a e^\sigma_b= \frac{1}{2} \zeta_{\rho\sigma}e^\rho_a e^\sigma_b \label{discK}
\end{equation}
that is to say, the tangent part of $\zeta_{\mu\nu}$ is characterized by the difference of the two $\pm$-second fundamental forms. Thus, defining the {\em jump} on $\Sigma$ of the second fundamental form as usual
\begin{equation}
\left[K_{\mu\nu}\right]:= K^{+}_{\mu\nu}-K^{-}_{\mu\nu}, \hspace{1cm} n^{\mu}\left[K_{\mu\nu}\right] =0 \label{Kdisc}
\end{equation}
we can rewrite (\ref{HRie0}) as the desired formula for the singular part of the Riemann tensor distribution:
\begin{equation}
H_{\alpha\beta\lambda\mu} = - n_{\alpha}\left[K_{\beta\mu}\right]n_{\lambda}+n_{\alpha}\left[K_{\beta\lambda}\right]n_{\mu}-n_{\beta}\left[K_{\alpha\lambda}\right]n_{\mu}+n_{\beta}\left[K_{\alpha\mu}\right]n_{\lambda} .\label{HRie}
\end{equation}
This important formula informs us that {\em the singular part of the Riemann tensor distribution vanishes if, and only if, the jump of the second fundamental form vanishes}.
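Both the $\zeta$-shift invariance noted before (\ref{discK}) and the algebraic symmetries of the singular part can be verified numerically. The sketch below (Python/NumPy, illustrative only; the normal and the random $\zeta_{\mu\nu}$ and $X_\mu$ are arbitrary sample data) checks them for expression (\ref{HRie0}):

```python
import numpy as np

rng = np.random.default_rng(2)
n_lo = np.array([0.0, 0.0, 0.0, 1.0])   # example unit spacelike normal, indices down

def H_from_zeta(z):
    # Singular part of the Riemann tensor built from a symmetric zeta,
    # eq. (HRie0) of the text, all indices down, output order (alpha,beta,lambda,mu)
    return 0.5*(-np.einsum('a,bm,l->ablm', n_lo, z, n_lo)
                + np.einsum('a,bl,m->ablm', n_lo, z, n_lo)
                - np.einsum('b,al,m->ablm', n_lo, z, n_lo)
                + np.einsum('b,am,l->ablm', n_lo, z, n_lo))

zeta = rng.normal(size=(4, 4)); zeta = zeta + zeta.T
X = rng.normal(size=4)
H = H_from_zeta(zeta)

# Only the tangent part of zeta matters: shifts along the normal drop out
shift = np.outer(n_lo, X) + np.outer(X, n_lo)
assert np.allclose(H, H_from_zeta(zeta + shift))

# H has the algebraic symmetries of a Riemann tensor
assert np.allclose(H, -np.transpose(H, (1, 0, 2, 3)))   # antisymmetry (alpha, beta)
assert np.allclose(H, -np.transpose(H, (0, 1, 3, 2)))   # antisymmetry (lambda, mu)
assert np.allclose(H, np.transpose(H, (2, 3, 0, 1)))    # pair symmetry
print("zeta-shift invariance and symmetries verified")
```

Replacing $\frac{1}{2}\zeta_{\mu\nu}$ by $[K_{\mu\nu}]$ turns this into a direct check of (\ref{HRie}) itself.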
By contractions on (\ref{Riedist}) we get (with obvious notations):
\begin{itemize}
\item The Ricci tensor distribution
\begin{equation}
\underline{R}_{\beta\mu}=R^+_{\beta\mu}\underline{\theta} +R^-_{\beta\mu} (\underline{1}-\underline{\theta}) +H_{\beta\mu} \delta^\Sigma \label{Ricdist}
\end{equation}
where its singular part is given by
\begin{equation}
H_{\beta\mu}:= H^\rho{}_{\beta\rho\mu} =-\left[K_{\beta\mu}\right] -\left[K^\rho{}_{\rho}\right] n_\beta n_\mu .\label{HRic}
\end{equation}
Thus, {\em the singular part of the Ricci tensor distribution vanishes if, and only if, the jump of the second fundamental form vanishes, hence, if and only if that of the full Riemann tensor distribution does}.
\item The scalar curvature distribution
\begin{equation}
\underline R = R^+\underline{\theta} +R^- (\underline{1}-\underline{\theta}) + H \delta^\Sigma \label{scalardist}
\end{equation}
whose singular part reads
\begin{equation}
H:= H^\rho{}_{\rho}=-2\left[K^\mu{}_{\mu}\right] . \label{Hscalar}
\end{equation}
It follows that {\em the singular part of the scalar curvature distribution vanishes if, and only if, the jump of the \underline{trace} of second fundamental form vanishes}.
\item And the Einstein tensor distribution
\begin{equation}
\underline{G}_{\beta\mu} := \underline{R}_{\beta\mu}-\frac{1}{2} g_{\beta\mu} \underline R =G^+_{\beta\mu}\underline{\theta} +G^-_{\beta\mu} (\underline{1}-\underline{\theta}) +{\cal G}_{\beta\mu} \delta^\Sigma \label{Gdist}
\end{equation}
with a singular part
\begin{equation}
{\cal G}_{\beta\mu} = -\left[K_{\beta\mu}\right]+h_{\beta\mu}\left[K^\rho{}_{\rho}\right] , \hspace{1cm} n^\mu {\cal G}_{\beta\mu} =0 \label{HG}
\end{equation}
which is tangent to $\Sigma$.
\end{itemize}
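The chain of contractions above can also be checked numerically. The following sketch (Python/NumPy, illustrative and not part of the paper; the flat metric, the normal and the random tangent jump standing in for $[K_{\mu\nu}]$ are arbitrary choices) builds $H_{\alpha\beta\lambda\mu}$ from (\ref{HRie}) and recovers (\ref{HRic}), (\ref{Hscalar}) and (\ref{HG}) by contraction:

```python
import numpy as np

rng = np.random.default_rng(3)
g = np.diag([-1.0, 1.0, 1.0, 1.0]); g_inv = np.linalg.inv(g)
n_lo = np.array([0.0, 0.0, 0.0, 1.0]); n_up = g_inv @ n_lo
h_lo = g - np.outer(n_lo, n_lo); h_mix = g_inv @ h_lo

# A random symmetric tensor projected tangent to Sigma stands in for [K_{mu nu}]
S = rng.normal(size=(4, 4)); S = S + S.T
K = h_mix.T @ S @ h_mix                       # n^mu K_{mu nu} = 0
trK = np.einsum('ab,ab->', g_inv, K)          # [K^rho_rho]

# Singular part of the Riemann tensor distribution, eq. (HRie), indices down
H4 = (-np.einsum('a,bm,l->ablm', n_lo, K, n_lo)
      + np.einsum('a,bl,m->ablm', n_lo, K, n_lo)
      - np.einsum('b,al,m->ablm', n_lo, K, n_lo)
      + np.einsum('b,am,l->ablm', n_lo, K, n_lo))

# Ricci part H_{beta mu} = g^{alpha lambda} H_{alpha beta lambda mu}, eq. (HRic)
H2 = np.einsum('al,ablm->bm', g_inv, H4)
assert np.allclose(H2, -K - trK*np.outer(n_lo, n_lo))

# Scalar part H = g^{beta mu} H_{beta mu} = -2 [K^mu_mu], eq. (Hscalar)
H0 = np.einsum('bm,bm->', g_inv, H2)
assert np.isclose(H0, -2.0*trK)

# Einstein part, eq. (HG), tangent to Sigma
Gs = H2 - 0.5*g*H0
assert np.allclose(Gs, -K + trK*h_lo)
assert np.allclose(n_up @ Gs, 0.0)
print("contractions verified")
```

In particular, the check confirms that all three singular parts vanish together exactly when the random stand-in for $[K_{\mu\nu}]$ is set to zero, as stated in the text.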
A general result proven in \cite{MS} is that the second Bianchi identity holds in the distributional sense:
$$
\nabla_{\rho}\underline{R}^\alpha{}_{\beta\mu\nu}+\nabla_{\mu}\underline{R}^\alpha{}_{\beta\nu\rho}+\nabla_{\nu}\underline{R}^\alpha{}_{\beta\rho\mu}=0
$$
from where one deduces by contraction
$$
\nabla^\beta \underline{G}_{\beta\mu}=0
$$
for the Einstein tensor distribution. By using (\ref{Gdist}) and the general formula (\ref{nablaT1}) this implies
\begin{equation}
0= \nabla^\beta \underline{G}_{\beta\mu} = n^\beta \left[G_{\beta\mu}\right] \underline{\delta}^\Sigma +\nabla^\beta \left({\cal G}_{\beta\mu}\underline{\delta}^\Sigma \right)\, .\label{divG=0}
\end{equation}
The second summand on the righthand side is computed according to the general formula (\ref{nablaTdelta}) in Appendix \ref{App:D1}
\begin{eqnarray*}
\nabla^\beta \left({\cal G}_{\beta\mu}\underline{\delta}^\Sigma \right)=
g^{\beta\rho}\nabla_\rho \left({\cal G}_{\beta\mu}\underline{\delta}^\Sigma \right)=
g^{\beta\rho}\nabla_\sigma \left({\cal G}_{\beta\mu} n_\rho n^\sigma\delta^\Sigma \right)+g^{\beta\rho}
h^\lambda_\rho\nabla_\lambda {\cal G}_{\beta\mu} \delta^\Sigma =h^{\rho\lambda}
\nabla_\lambda {\cal G}_{\rho\mu}\, \delta^\Sigma
\end{eqnarray*}
which, via (\ref{nabla=nabla1}) finally gives
$$
\nabla^\beta \left({\cal G}_{\beta\mu}\underline{\delta}^\Sigma \right)=\left(\overline\nabla^\beta {\cal G}_{\beta\mu}-K^\Sigma_{\rho\sigma}{\cal G}^{\rho\sigma}n_\mu \right)\delta^\Sigma \, .
$$
Introducing this into (\ref{divG=0}) we arrive at
$$
0=\underline{\delta}^\Sigma\left(n^\beta \left[G_{\beta\mu}\right] +\overline\nabla^\beta {\cal G}_{\beta\mu}-\frac{1}{2}n_\mu {\cal G}^{\rho\sigma}(K^+_{\rho\sigma}+K^-_{\rho\sigma})\right)
$$
which implies, by taking the normal and tangent components, the following relations
\begin{eqnarray}
(K^+_{\rho\sigma}+K^-_{\rho\sigma}){\cal G}^{\rho\sigma} = 2n^\beta n^\mu \left[ G_{\beta\mu}\right]=2n^\beta n^\mu \left[ R_{\beta\mu}\right]-[R], \label{1}\\
\overline\nabla^\beta {\cal G}_{\beta\mu}=-n^\rho h^\sigma{}_\mu \left[ G_{\rho\sigma}\right]=-n^\rho h^\sigma{}_\mu \left[ R_{\rho\sigma}\right] \label{2} .
\end{eqnarray}
(These equations can also be obtained \cite{I} by using part of the Gauss and Codazzi equations for $\Sigma$ on both sides, specifically (\ref{gauss}) and (\ref{coda}) in Appendix \ref{App:D1}).
{\bf Remark:} A very important remark is that all formulae in this section are {\em purely geometric}, independent of any field equations, and therefore valid in any theory of gravity based on a Lorentzian manifold.
\section{Quadratic gravity}
\label{sec:quadratic_grav}
We are going to concentrate on the case of quadratic theories of gravity because, apart from their own intrinsic interest, and as we are going to discuss, they allow for cases where gravitational double layers arise.
Let us consider a quadratic theory of gravity in $n+1$ dimensions described by the Lagrangian density
\begin{equation}
\mathcal{L} = \frac{1}{2\kappa}\left(R-2\Lambda + a_1 R^2 + a_2 R_{\mu\nu}R^{\mu \nu} + a_3 R_{\alpha \beta \mu \nu} R^{\alpha \beta \mu \nu}\right)+\mathcal{L}_{matter},\label{lag}
\end{equation}
where $\kappa =8\pi G/c^4$ is the gravitational coupling constant, $\Lambda$ is the cosmological constant, $a_1,a_2,a_3$ are three constants selecting the particular theory, and $\mathcal{L}_{matter}$ is the Lagrangian density describing the matter fields. $\Lambda^{-1}$ and $a_1,a_2,a_3$ have physical units of $L^{2}$.
The field equations derived from this Lagrangian read (see e.g. \cite{F} and references therein)
\begin{equation}
G_{\alpha\beta}+\Lambda g_{\alpha\beta}+G^{(2)}_{\alpha\beta}=\kappa T_{\alpha\beta}, \label{fe}
\end{equation}
where $T_{\alpha\beta}$ is the energy-momentum tensor of the matter fields derived from $\mathcal{L}_{matter}$,
$G_{\alpha\beta}$ is the Einstein tensor and $G^{(2)}_{\alpha\beta}$ encodes the part that comes from the quadratic terms:
\begin{eqnarray}
G^{(2)}_{\alpha \beta}&=& 2\left\{\frac{}{} a_1 R R_{\alpha \beta} -2a_3 R_{\alpha \mu}R_{\beta}^\mu + a_3 R_{\alpha \rho \mu \nu}R_\beta{}^{\;\rho \mu \nu} + (a_2 + 2a_3)R_{\alpha \mu \beta \nu}R^{\mu \nu} \right.\nonumber\\
&&\left. - \left(a_1 + \frac{1}{2}a_2 + a_3\right)\nabla_\alpha \nabla_\beta R+\left( \frac{1}{2}a_2 + 2 a_3 \right)\Box R_{\alpha \beta}\right\}\nonumber\\
&&-\frac{1}{2} g_{\alpha \beta}\left\{(a_1 R^2 + a_2 R_{\mu\nu}R^{\mu \nu} + a_3 R_{\rho \gamma \mu \nu} R^{\rho \gamma \mu \nu}) - (4a_1 + a_2) \Box R \right\} \label{eq:G2}
\end{eqnarray}
where $\Box:= g^{\mu\nu}\nabla_\mu\nabla_\nu$ denotes the d'Alembertian in $(V,g)$.
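As a quick illustration of the structure of $G^{(2)}_{\alpha\beta}$ (a numerical sketch, not part of the paper; the flat-signature metric, the curvature scale and the coefficients below are arbitrary), one can check that in a maximally symmetric space, where all covariant derivatives of the curvature vanish, the quadratic tensor is proportional to the metric, so constant-curvature spaces solve the vacuum field equations with an effective cosmological constant:

```python
import numpy as np

D = 4                                   # spacetime dimension n+1
g = np.diag([-1.0, 1.0, 1.0, 1.0]); g_inv = np.linalg.inv(g)
k = 0.5                                 # curvature scale R/(D(D-1)), arbitrary
a1, a2, a3 = 0.7, -1.3, 2.1             # arbitrary theory coefficients

# Maximally symmetric Riemann tensor, all indices down; its covariant
# derivatives vanish, so only the algebraic terms of G^{(2)} survive
Riem = k*(np.einsum('am,bn->abmn', g, g) - np.einsum('an,bm->abmn', g, g))
Ric = np.einsum('am,abmn->bn', g_inv, Riem)
R = np.einsum('bn,bn->', g_inv, Ric)
assert np.isclose(R, k*D*(D - 1))

RiemMixed = np.einsum('rp,mq,ns,bpqs->brmn', g_inv, g_inv, g_inv, Riem)  # R_b^{rho mu nu}
RicUp = g_inv @ Ric @ g_inv
T = (a1*R*Ric
     - 2*a3*np.einsum('am,mn,bn->ab', Ric, g_inv, Ric)
     + a3*np.einsum('armn,brmn->ab', Riem, RiemMixed)
     + (a2 + 2*a3)*np.einsum('ambn,mn->ab', Riem, RicUp))
RicRic = np.einsum('ab,ab->', Ric, RicUp)
RiemRiem = np.einsum('armn,pa,prmn->', Riem, g_inv, RiemMixed)

G2 = 2*T - 0.5*g*(a1*R**2 + a2*RicRic + a3*RiemRiem)

# G^{(2)}_{alpha beta} is proportional to g_{alpha beta}: it only shifts
# the cosmological-constant term in the field equations
lam = G2[1, 1]/g[1, 1]
assert np.allclose(G2, lam*g)
print("G2 is proportional to the metric")
```

The proportionality holds for any values of $a_1,a_2,a_3$ and of the curvature scale, as expected from the isotropy of maximally symmetric spaces.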
If we want to find the proper junction conditions, or a description of thin shells or braneworlds in these theories, we have to resort to the distributional calculus (see Appendices) and use the formulas provided in the previous section.
Then, in order to have the Lagrangian density, as well as the tensor $G^{(2)}_{\alpha \beta}$, well defined in a distributional sense ---so that the field equations (\ref{fe}) make sense mathematically--- one has to avoid any multiplication of singular distributions (such as ``$\delta^\Sigma \delta^\Sigma$''). One could also hope for some cancellation of such terms between different parts of the Lagrangian, and of $G^{(2)}_{\alpha \beta}$; this is discussed in the following subsection for completeness, but one has to bear in mind that these cancellations are probably ill defined anyway, and thus not relevant. In order to deal properly with products of distributions
one would need a more general calculus, based e.g.\ on Colombeau algebras \cite{Colombeau, Vickers}, within which one could hope that such cancellations actually occur and are well defined.
\subsection{Dubious possible cancellation of non-linear $\delta^\Sigma \delta^\Sigma$ terms}
Let us start by examining the Lagrangian (\ref{lag}) recalling that the different curvature terms possess now singular parts proportional to $\delta^\Sigma$, as given in
(\ref{HRie}) and its contractions (\ref{HRic}) and (\ref{Hscalar}). One could naively compute the products of these singular parts arising from the quadratic terms in (\ref{lag}) and collect them in a common-factor fashion. The result would be a term of type
$$
\delta^\Sigma\deltaS \left( 2\kappa_1 [K_\rho^\rho]^2 +2 \kappa_2 [K_{\alpha\beta}][K^{\alpha\beta}] \right)
$$
where we have introduced the abbreviations
\begin{equation}
\kappa_1:= 2a_1+a_2/2,\qquad
\kappa_2:= 2a_3+a_2/2,
\label{def:kappas}
\end{equation}
to be used repeatedly in what follows. One should then require the vanishing of the term in brackets. A similar naive computation can be performed with the non-linear distributions arising from the quadratic terms in the field equations (\ref{eq:G2}). Imposing again that the full combination must vanish, and separating the resulting condition into its parts normal and tangent to $\Sigma$, we would find
\begin{eqnarray}
&&\left \lbrace \kappa_1 [K_\rho^\rho]^2 + \kappa_2 (3[K^{\mu \nu}][K_{\mu \nu}] - 2[K_\rho^\rho]^2 ) \right \rbrace n_\alpha n_\beta \label{eq:normaldeltadelta}\\
&&+\kappa_1 [K_\rho^\rho](2[K_{\alpha\beta}] - [K_\rho^\rho]h_{\alpha\beta}) + \kappa_2 (2[K_\rho^\rho][K_{\alpha\beta}]-[K_{\mu\nu}][K^{\mu\nu}]h_{\alpha\beta}) = 0.\label{eq:tangentdeltadelta}
\end{eqnarray}
The normal (\ref{eq:normaldeltadelta}) and tangent (\ref{eq:tangentdeltadelta}) parts should vanish separately. In particular the trace of the tangent part reads
\begin{equation}
\kappa_1 [K_\rho^\rho]^2(2 - n) + \kappa_2 (2[K_\rho^\rho]^2-n[K_{\mu\nu}][K^{\mu\nu}]) = 0.\label{eq:tracetangentdeltadelta}
\end{equation}
We see directly that $\kappa_1 = \kappa_2 = 0$ solves (\ref{eq:normaldeltadelta}) and (\ref{eq:tangentdeltadelta}), but in order to find all solutions we compute the determinant of the system (\ref{eq:normaldeltadelta}) and (\ref{eq:tracetangentdeltadelta}). This yields
\begin{eqnarray}
(3-n)[K_\rho^\rho]^2([K_\rho^\rho]^2 - [K^{\mu \nu}][K_{\mu \nu}]) = 0.
\end{eqnarray}
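As a cross-check, this determinant can be verified symbolically; the following sketch (using Python's \texttt{sympy}, with scalar stand-ins \texttt{K2} for $[K_\rho^\rho]^2$ and \texttt{KK} for $[K_{\mu\nu}][K^{\mu\nu}]$) recovers it up to an irrelevant positive factor.

```python
import sympy as sp

# Unknowns kappa_1, kappa_2, the scalars [K^rho_rho]^2 (K2) and
# [K_{mu nu}][K^{mu nu}] (KK), and the dimension n of Sigma.
k1, k2, K2, KK, n = sp.symbols('k1 k2 K2 KK n')

# Coefficient matrix, in the unknowns (kappa_1, kappa_2), of the linear
# system formed by the normal part and the trace of the tangent part.
M = sp.Matrix([[K2, 3*KK - 2*K2],
               [K2*(2 - n), 2*K2 - n*KK]])

det = sp.expand(M.det())
# The determinant equals 2*(3-n)*K2*(K2 - KK): the same vanishing locus
# as quoted in the text (the overall factor 2 is irrelevant).
assert sp.simplify(det - 2*(3 - n)*K2*(K2 - KK)) == 0
```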
Take first $[K_\rho^\rho]=0$. Then, (\ref{eq:normaldeltadelta}) and (\ref{eq:tracetangentdeltadelta}) reduce to $\kappa_2 [K_{\mu \nu}][K^{\mu \nu}] = 0$. If $[K_\rho^\rho]\neq 0$ but
$[K_\rho^\rho]^2 = [K^{\mu \nu}][K_{\mu \nu}]$, (\ref{eq:normaldeltadelta}) reads $(\kappa_1 + \kappa_2)[K_\rho^\rho]^2=0$ and (\ref{eq:tangentdeltadelta}) is redundant since it becomes $(\kappa_1 + \kappa_2)[K_\rho^\rho](2[K_{\alpha\beta}] - [K_\rho^\rho] h_{\alpha\beta})=0$. Thus, $\kappa_1 + \kappa_2 =0$ would follow.
Finally, if $n=3$ (and $[K_\rho^\rho]^2 \neq 0$), (\ref{eq:normaldeltadelta}),
(\ref{eq:tangentdeltadelta}) and (\ref{eq:tracetangentdeltadelta}) yield
a new possibility not considered so far, summarized in
\begin{equation}
[K_{\alpha\beta}] = \frac{1}{3} h_{\alpha\beta} \Rightarrow [K_\rho^\rho] = 1, \quad [K_{\alpha\beta}][K^{\alpha\beta}] = \frac{1}{3}, \quad \quad \kappa_1 -\kappa_2=0.
\end{equation}
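The scalar data of this $n=3$ case can be checked directly: substituting $[K_\rho^\rho]^2=1$ and $[K_{\mu\nu}][K^{\mu\nu}]=1/3$ into (\ref{eq:normaldeltadelta}) and (\ref{eq:tracetangentdeltadelta}), both conditions collapse to $\kappa_1-\kappa_2=0$. A short symbolic sketch in \texttt{sympy}:

```python
import sympy as sp

k1, k2 = sp.symbols('kappa_1 kappa_2')

# For n = 3 and [K_{alpha beta}] = h_{alpha beta}/3 one has
# [K^rho_rho]^2 = 1 and [K_{mu nu}][K^{mu nu}] = 1/3.
K2, KK = sp.Integer(1), sp.Rational(1, 3)

normal = k1*K2 + k2*(3*KK - 2*K2)          # normal part
trace  = k1*K2*(2 - 3) + k2*(2*K2 - 3*KK)  # trace of the tangent part

# Both conditions reduce to kappa_1 - kappa_2 = 0.
assert sp.simplify(normal - (k1 - k2)) == 0
assert sp.simplify(trace + (k1 - k2)) == 0
```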
In short, each of the following possibilities would seem to allow for the mutual annihilation of ``$\delta^\Sigma \delta^\Sigma$'' terms in (\ref{eq:G2}) ---and in (\ref{lag})---:
\begin{enumerate}
\item $\kappa_1 = \kappa_2 = 0$.
\item $[K_\rho^\rho] = 0$ and $\kappa_2 = 0$.
\item $[K_\rho^\rho]^2 = [K_{\mu\nu}][K^{\mu\nu}] = 0$.
\item $[K_\rho^\rho]^2 = [K_{\mu\nu}][K^{\mu\nu}] \neq 0$ and $\kappa_1 + \kappa_2 = 0$.
\item If the spacetime is 4-dimensional, $\kappa_1 - \kappa_2 = 0$ and $[K_{\alpha\beta}] = h_{\alpha\beta}/3$.
\end{enumerate}
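Possibility 1 contains in particular the Gauss-Bonnet combination $R^2-4R_{\mu\nu}R^{\mu\nu}+R_{\rho\gamma\mu\nu}R^{\rho\gamma\mu\nu}$, i.e.\ $(a_1,a_2,a_3)=(1,-4,1)$, for which both $\kappa_1$ and $\kappa_2$ of (\ref{def:kappas}) vanish; a one-line check with \texttt{sympy}:

```python
import sympy as sp

a1, a2, a3 = sp.symbols('a1 a2 a3')
kappa1 = 2*a1 + a2/2   # definition (def:kappas)
kappa2 = 2*a3 + a2/2

# Gauss-Bonnet combination: a1 = 1, a2 = -4, a3 = 1.
gb = {a1: 1, a2: -4, a3: 1}
assert kappa1.subs(gb) == 0 and kappa2.subs(gb) == 0  # possibility 1
```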
Although we have included this analysis here for completeness, we should not forget that these cases are not mathematically sound, and therefore they should not be taken seriously unless a more rigorous study establishes their feasibility. To understand the problems behind these naive calculations, we want to emphasize that there is no known way to give a sensible meaning to $\delta^\Sigma \delta^\Sigma$, let alone to objects such as $f \delta^\Sigma \delta^\Sigma$. Thus, taking for granted that combinations of type $f_1\delta^\Sigma \delta^\Sigma + f_2 \delta^\Sigma\deltaS$ are related to $(f_1+f_2)\delta^\Sigma \delta^\Sigma$ is, at least, dubious. Such difficulties were noted, for instance, in \cite{DD} for the Gauss-Bonnet case ---corresponding to possibility 1 above---, and one has to resort to analyzing thick shells, that is, layers of finite width, or to a setting more general than distributions,
such as the theory of nonlinear generalized functions described in \cite{Colombeau, Vickers} and references therein. The thin shell formalism is simply not available.
Therefore, we will abandon this route for now, and in this paper we will concentrate on the generic and well-defined cases analyzed in the next subsection.
\subsection{Well defined possibilities: no $\delta^\Sigma\deltaS$ terms}
The only mathematically well-defined possibilities within the available theory of distributions for the thin-shell formalism, as just argued, are those where no $\delta^\Sigma\deltaS$ term ever arises. Setting aside the case of GR (defined by $a_1 =a_2 =a_3 =0$), this leads to two different possibilities:
\begin{enumerate}
\item If either $a_2$ or $a_3$ is different from zero, then products of the Ricci tensor with itself, or with the Riemann tensor, appear in (\ref{eq:G2}), and these are ill defined if the singular parts (\ref{HRie}) and (\ref{HRic}) are non-zero. Thus, we must demand that the singular parts (\ref{HRie}) and (\ref{HRic}) vanish, which happens, as proven above, if and only if the jump of the second fundamental form vanishes. Hence, in this situation it is indispensable to require
\begin{equation}
\left[K_{\mu\nu}\right] =0 .\label{Kdisc=0}
\end{equation}
In this case, all the curvature tensors are tensor distributions associated to tensor fields ---see Appendix \ref{App:A}--- with possible discontinuities across $\Sigma$. Observe that then the Lagrangian density (\ref{lag}) is also a well-defined, locally integrable, function.
\item If on the other hand $a_2=a_3=0$, then only products of $R$ by itself or by the Ricci tensor appear in (\ref{eq:G2}), and thus it is enough to demand that $R$ is a locally integrable function without singular part. Hence, in this case it is enough to require that (\ref{Hscalar}) vanishes, that is to say, that the trace of the second fundamental form has no jump: $[K^\rho_\rho]=0$. Observe that, again, the Lagrangian density (\ref{lag}) is in this case a well-defined locally integrable function.
\end{enumerate}
In either of the above two possibilities, expression (\ref{fe}) with (\ref{eq:G2}) has a remarkable property: {\em there are no terms quadratic in derivatives of the curvature tensors}. Taking into account that tensor distributions can be covariantly differentiated according to the rules explained in the appendices, the derivatives of the curvature tensors may have singular parts and still the field equations (\ref{fe}) are mathematically sound. This opens the door to the existence of matching hypersurfaces which represent {\em double layers}. Case 2 above was extensively treated in \cite{Senovilla13,Senovilla14,Senovilla15}, where gravitational double layers were found for the first time. Therefore, we will here concentrate on the more general case 1, and thus we will assume hereafter that (\ref{Kdisc=0}) holds. Notice that (\ref{Kdisc=0}) coincides precisely with the matching conditions that are needed in General Relativity to avoid distributional matter contents, as follows from (\ref{HG}) together with the Einstein field equations.
Once (\ref{Kdisc=0}) is enforced, the lefthand side of the field equations (\ref{fe}) can be computed in the distributional sense. From (\ref{Riedist}) and (\ref{Kdisc=0}) we know that the Riemann tensor distribution
\[
\underline R_{\alpha\beta\mu\nu}=R^+_{\alpha\beta\mu\nu}\underline{\theta}+R^-_{\alpha\beta\mu\nu}(\underline{1}-\underline{\theta}),
\]
is actually associated to a locally integrable (and piecewise differentiable) tensor field. However, this tensor field may be discontinuous across $\Sigma$, and thus $[R_{\alpha\beta\mu\nu}]$ may be non-vanishing. This leads, when computing covariant derivatives of $\underline R_{\alpha\beta\mu\nu}$, to singular terms proportional to $\delta^\Sigma$ and its derivatives. And these are going to arise in $\underline G^{(2)}_{\alpha\beta}$.
Thus, the energy-momentum tensor on the righthand side of (\ref{fe}) must be treated as a tensor distribution and contain such terms, localized on $\Sigma$, giving the energy-matter contents of the thin shell or double layer.
In order to compute this matter content supported on $\Sigma$ we only have to calculate the singular part of $\underline G^{(2)}_{\alpha\beta}$, because ${\cal G}_{\alpha\beta}$ in (\ref{Gdist}) vanishes as follows from (\ref{Kdisc=0}) with (\ref{HG}). But
the only terms in (\ref{eq:G2}) that are relevant for this singular part
are $\nabla_\alpha\nabla_\beta R$ and $\Box R_{\alpha\beta}$ (and its contraction $\Box R$). More precisely,
we need to obtain the singular part of the expression
\begin{eqnarray}
&& - \left(2a_1 + a_2 + 2a_3\right)\nabla_\alpha \nabla_\beta \underline R+\left( a_2 + 4 a_3 \right)\Box \underline R_{\alpha \beta}+ \left(2a_1 +\frac{1}{2} a_2\right) \Box \underline R\, g_{\alpha\beta}\nonumber\\
&&=- \left(\kappa_1+\kappa_2\right)\nabla_\alpha \nabla_\beta \underline R+2\kappa_2\Box \underline R_{\alpha \beta}+ \kappa_1 \Box \underline R\, g_{\alpha\beta} .
\label{eq:sing_expression}
\end{eqnarray}
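The rewriting of the coefficients in (\ref{eq:sing_expression}) in terms of $\kappa_1$ and $\kappa_2$ from (\ref{def:kappas}) is elementary but worth a quick symbolic verification:

```python
import sympy as sp

a1, a2, a3 = sp.symbols('a1 a2 a3')
kappa1 = 2*a1 + a2/2   # definition (def:kappas)
kappa2 = 2*a3 + a2/2

# Coefficients of nabla nabla R, Box Ricci and g Box R, respectively.
assert sp.simplify((2*a1 + a2 + 2*a3) - (kappa1 + kappa2)) == 0
assert sp.simplify((a2 + 4*a3) - 2*kappa2) == 0
assert sp.simplify((2*a1 + a2/2) - kappa1) == 0
```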
This is the purpose of the next section.
\section{Energy-momentum on the layer $\Sigma$}
\label{sec:compute_deltas}
From (\ref{Ricdist}) and the assumption (\ref{Kdisc=0}) we know that
$$
\underline R_{\alpha \beta} = R_{\alpha \beta}^+ \, \underline{\theta} + R_{\alpha \beta}^- (\underline{1}-\underline{\theta})
$$
from where, using the general formula (\ref{nablaT1}) twice we deduce
\begin{eqnarray}
\nabla_\nu \underline R_{\alpha \beta} &=& \nabla_\nu R_{\alpha \beta}^+ \, \underline{\theta} + \nabla_\nu R_{\alpha \beta}^- (\underline{1}-\underline{\theta}) + [R_{\alpha \beta}] n_\nu \delta^\Sigma,\nonumber \\
\nabla_\mu \nabla_\nu \underline R_{\alpha \beta} &=& \nabla_\mu \nabla_\nu R_{\alpha \beta}^+\, \underline{\theta} + \nabla_\mu \nabla_\nu R_{\alpha \beta}^- (\underline{1}-\underline{\theta}) + [\nabla_\nu R_{\alpha \beta}]n_\mu \delta^\Sigma + \nabla_\mu \left([R_{\alpha \beta}]n_\nu \delta^\Sigma \right). \label{eq:dd_ricci_dist}
\end{eqnarray}
Via contractions here, or directly from (\ref{scalardist}), we also obtain
\begin{eqnarray}
\underline R &=& R^+ \underline{\theta} + R^- (\underline{1}-\underline{\theta}),\nonumber \\
\nabla_\nu \underline R &=& \nabla_\nu R^+ \underline{\theta} + \nabla_\nu R^- (\underline{1}-\underline{\theta}) + [R]n_\nu \delta^\Sigma, \nonumber \\
\nabla_\mu \nabla_\nu \underline R &=& \nabla_\mu \nabla_\nu R^+ \underline{\theta} + \nabla_\mu \nabla_\nu R^- (\underline{1}-\underline{\theta}) + [\nabla_\nu R]n_\mu \delta^\Sigma + \nabla_\mu \left([R]n_\nu \delta^\Sigma \right)
\label{eq:2_der_R}
\end{eqnarray}
as well as
\begin{eqnarray}
\Box \underline R_{\alpha\beta}= \Box R_{\alpha \beta}^+ \underline{\theta} + \Box R_{\alpha \beta}^- (\underline{1}-\underline{\theta}) + n^\rho[\nabla_\rho R_{\alpha\beta}] \delta^\Sigma + g^{\mu \nu} \nabla_\mu \left([R_{\alpha \beta}]n_\nu \delta^\Sigma\right), \\
\Box \underline R = \Box R^+ \underline{\theta} + \Box R^- (\underline{1}-\underline{\theta}) + n^\rho[\nabla_\rho R] \delta^\Sigma + g^{\mu \nu} \nabla_\mu \left([R]n_\nu \delta^\Sigma\right).
\end{eqnarray}
Thus, we need to control the discontinuities of the Ricci tensor and the scalar curvature, and also to provide an expression for the singular distribution $\nabla_\mu \left([R_{\alpha \beta}]n_\nu \delta^\Sigma \right)$ supported on $\Sigma$. The general formula (\ref{nablaTdelta}) provides
$$
\nabla_\mu\left(n_\nu [R_{\alpha\beta}]\delta^\Sigma\right)=\nabla_\rho \left( [R_{\alpha\beta}] n_\mu n_\nu n^\rho \delta^\Sigma \right)+ \left\{ h^\rho_\mu \nabla_\rho (n_\nu [R_{\alpha\beta}] )-K^{\rho}{}_\rho \, [R_{\alpha\beta}] n_\mu n_\nu \right\} \delta^\Sigma .
$$
At this point we introduce a 4-covariant tensor distribution $\underline{\Delta}_{\mu\nu\alpha\beta}$ with support on $\Sigma$, which takes care of the first summand here and is defined by
$$
\underline{\Delta}_{\mu\nu\alpha\beta}:= \nabla_\rho \left([R_{\alpha\beta}]n_\mu n_\nu n^\rho \, \delta^\Sigma \right)
$$
or equivalently by
$$
\left \langle \underline{\Delta}_{\mu\nu\alpha\beta}, Y^{\mu \nu \alpha \beta}\right \rangle := - \int_\Sigma [R_{\alpha \beta}] n_\nu n_\mu n^\rho \nabla_\rho Y^{\mu \nu \alpha \beta} d \sigma .
$$
Note that $\underline{\Delta}_{\mu\nu\alpha\beta}=\underline{\Delta}_{\nu\mu\alpha\beta}=\underline{\Delta}_{\mu\nu\beta\alpha}$. In summary, we have
$$
\nabla_\mu\left(n_\nu [R_{\alpha\beta}]\delta^\Sigma\right)= \underline{\Delta}_{\mu\nu\alpha\beta}+ \left\{n_\nu h^\rho_\mu \nabla_\rho [R_{\alpha\beta}] +[R_{\alpha\beta}] (K_{\mu\nu}-K^{\rho}{}_\rho \, n_\mu n_\nu) \right\} \delta^\Sigma
$$
and therefore (\ref{eq:dd_ricci_dist}) becomes
\begin{eqnarray*}
\nabla_\mu \nabla_\nu \underline R_{\alpha \beta} = \nabla_\mu \nabla_\nu R_{\alpha \beta}^+\, \underline{\theta} + \nabla_\mu \nabla_\nu R_{\alpha \beta}^- (\underline{1}-\underline{\theta}) +\underline{\Delta}_{\mu\nu\alpha\beta}\\
+\left\{ [\nabla_\nu R_{\alpha \beta}]n_\mu+ n_\nu h^\rho_\mu \nabla_\rho [R_{\alpha\beta}] +[R_{\alpha\beta}] (K_{\mu\nu}-K^{\rho}{}_\rho \, n_\mu n_\nu) \right\} \delta^\Sigma .
\end{eqnarray*}
From the general formula (\ref{disc1f}), conveniently generalised, we have
\begin{equation}
\label{eq:[d_ricci]}
[\nabla_\rho R_{\beta\mu}]=n_\rho r_{\beta\mu}+h^\sigma_\rho\nabla_\sigma[R_{\beta\mu}],
\end{equation}
where
\begin{equation}
r_{\beta\mu}:= n^\rho[\nabla_\rho R_{\beta\mu}], \hspace{1cm} r_{\beta\mu} =r_{\mu\beta} \label{rab}
\end{equation}
are the discontinuities of the normal derivatives of the Ricci tensor. Thus, we finally get
\begin{eqnarray}
\nabla_\mu \nabla_\nu \underline R_{\alpha \beta} = \nabla_\mu \nabla_\nu R_{\alpha \beta}^+\, \underline{\theta} + \nabla_\mu \nabla_\nu R_{\alpha \beta}^- (\underline{1}-\underline{\theta}) +\underline{\Delta}_{\mu\nu\alpha\beta}\hspace{1cm} \nonumber \\
+\left\{ r_{\alpha\beta} \, n_\nu n_\mu+ n_\mu h^\rho_\nu \nabla_\rho [R_{\alpha\beta}]+n_\nu h^\rho_\mu \nabla_\rho [R_{\alpha\beta}] +[R_{\alpha\beta}] (K_{\mu\nu}-K^{\rho}{}_\rho \, n_\mu n_\nu) \right\} \delta^\Sigma .\label{nablanablaRic}
\end{eqnarray}
Observe that the entire singular part is symmetric in $(\alpha\beta)$ and in $(\mu\nu)$.
From (\ref{nablanablaRic}) we immediately get all the sought terms. First, by contracting with $g^{\alpha\beta}$ we find \cite{Senovilla13,Senovilla14,Senovilla15}
\begin{eqnarray}
\nabla_\mu \nabla_\nu \underline R = \nabla_\mu \nabla_\nu R^+\, \underline{\theta} + \nabla_\mu \nabla_\nu R^- (\underline{1}-\underline{\theta}) +\underline{\Delta}_{\mu\nu}\hspace{1cm} \nonumber \\
+\left\{ b n_\nu n_\mu+ n_\mu \overline{\nabla}_\nu [R]+n_\nu \overline{\nabla}_\mu [R] +[R] (K_{\mu\nu}-K^{\rho}{}_\rho \, n_\mu n_\nu) \right\} \delta^\Sigma \label{nablanablaR}
\end{eqnarray}
where \cite{Senovilla14,Senovilla15}
\begin{equation}
b:= r^\rho_\rho =n^\rho\nabla_\rho [R] \label{b}
\end{equation}
measures the discontinuity on the normal derivative of the scalar curvature, and \cite{Senovilla14}
$$
\underline{\Delta}_{\mu\nu} := g^{\alpha\beta}\underline{\Delta}_{\mu\nu\alpha\beta}
$$
is a 2-covariant symmetric tensor distribution with support on $\Sigma$ acting as follows\footnote{There are some errata in the formulae for $\underline{\Delta}_{\mu\nu}$ and $\underline{\Omega}_{\mu\nu}$ in \cite{Senovilla13}, and for $\underline{t}_{\mu\nu}$ in \cite{Senovilla14,Senovilla15}: in all cases $Y$ must be replaced by $Y^{\mu\nu}$.}
\begin{equation}
\left \langle \underline\Delta_{\mu \nu}, Y^{\mu \nu}\right \rangle := - \int_\Sigma [R] n_\nu n_\mu n^\rho \nabla_\rho Y^{\mu \nu} d \sigma ; \hspace{1cm} \underline\Delta_{\mu \nu} = \nabla_\rho \left([R] n_\mu n_\nu n^\rho \delta^\Sigma \right).
\label{def:dl}
\end{equation}
Similarly, contracting (\ref{nablanablaRic}) with $g^{\mu\nu}$ we readily get
\begin{eqnarray}
\Box \underline R_{\alpha \beta} = \Box R_{\alpha \beta}^+ \underline{\theta} + \Box R_{\alpha \beta}^- (\underline{1}-\underline{\theta}) + r_{\alpha \beta} \delta^\Sigma + g^{\mu \nu} \underline{\Delta}_{\mu\nu\alpha\beta}
\label{eq:box_ricci_dist}
\end{eqnarray}
where the last distribution acts as follows
\begin{eqnarray*}
\left\langle g^{\mu\nu}\underline{\Delta}_{\mu\nu\alpha\beta}, Y^{\alpha\beta}\right\rangle=
\left\langle \underline{\Delta}_{\mu\nu\alpha\beta}, g^{\mu\nu}Y^{\alpha\beta}\right\rangle=
-\int_\Sigma[R_{\alpha\beta}]n_\nu n_\mu n^\rho\nabla_\rho(Y^{\alpha\beta}g^{\mu\nu})d\sigma\nonumber\\
=-\int_\Sigma[R_{\alpha\beta}] n^\rho\nabla_\rho Y^{\alpha\beta}d\sigma ; \hspace{2cm} g^{\mu\nu}\underline{\Delta}_{\mu \nu \alpha \beta}=\nabla_\rho\left([R_{\alpha\beta}]n^\rho\delta^\Sigma\right).
\end{eqnarray*}
Finally, by tracing either of (\ref{nablanablaR}) or (\ref{eq:box_ricci_dist}) we easily derive
\begin{equation}
\Box \underline{R} = \Box R^+ \underline{\theta} + \Box R^- (\underline{1}-\underline{\theta}) + b\, \delta^\Sigma + \underline\Delta,
\end{equation}
where we have introduced the notation $\underline\Delta:= g^{\mu \nu} \underline\Delta_{\mu \nu}$. Note that \cite{Senovilla13}
\[
\left\langle \underline\Delta, Y\right\rangle=\left\langle g^{\mu\nu}\underline\Delta_{\mu \nu}, Y\right\rangle=-\int_\Sigma [R]n^\rho \nabla_\rho Y d\sigma ;
\hspace{1cm} \underline\Delta = \nabla_\rho \left([R] n^\rho \delta^\Sigma\right)
\]
What we have proven is that the distribution $\underline{G}^{(2)}_{\alpha\beta}$ takes the following form
\begin{equation}
\underline{G}^{(2)}_{\alpha\beta} = G^{(2)+}_{\alpha\beta}\underline{\theta} + G^{(2)-}_{\alpha\beta}(\underline{1}-\underline{\theta})+\widetilde{G}_{\alpha\beta} \delta^\Sigma +\mathscr{G}_{\alpha\beta} \label{G2dist}
\end{equation}
where
\begin{equation}
\widetilde{G}_{\alpha\beta}=2\kappa_2 r_{\alpha\beta}+\kappa_1 b g_{\alpha\beta} -(\kappa_1 +\kappa_2) \left\{ b n_\alpha n_\beta+ n_\alpha \overline{\nabla}_\beta [R]+n_\beta \overline{\nabla}_\alpha [R] +[R] (K_{\alpha\beta}-K^{\rho}{}_\rho \, n_\alpha n_\beta) \right\} ,\label{tildeG}
\end{equation}
and after a trivial rearrangement
\begin{eqnarray}
\mathscr{G}_{\alpha\beta}=\kappa_1 \left( g_{\alpha\beta}\underline\Delta - \underline\Delta_{\alpha \beta}\right) + \kappa_2 \left( 2 g^{\mu \nu} \underline{\Delta}_{\mu \nu \alpha \beta} - \underline\Delta_{\alpha \beta}\right) . \label{arranged1}
\end{eqnarray}
From (\ref{arranged1}) we define two new 2-covariant tensor distributions with support on $\Sigma$ \cite{Senovilla14}:
\begin{equation}
\underline{\Omega}_{\alpha\beta} := g_{\alpha\beta}\underline\Delta - \underline\Delta_{\alpha \beta}=\nabla_\rho\left( [R] h_{\alpha\beta} n^\rho\delta^\Sigma\right) ; \hspace{7mm} \left<\underline{\Omega}_{\alpha\beta},Y^{\alpha\beta} \right>=
-\int_\Sigma [R] h_{\alpha\beta}\, n^\rho\nabla_\rho Y^{\alpha\beta}d\sigma \label{Omega}
\end{equation}
and
\begin{equation}
\underline{\Phi}_{\alpha\beta} := g^{\mu \nu} \underline{\Delta}_{\mu \nu \alpha \beta} -\frac{1}{2} \underline\Delta_{\alpha \beta}-\frac{1}{2}\underline{\Omega}_{\alpha\beta}
=\nabla_\rho \left( [G_{\alpha\beta}] n^\rho \delta^\Sigma\right) ; \hspace{3mm}
\left<\underline{\Phi}_{\alpha\beta},Y^{\alpha\beta} \right>=
-\int_\Sigma [G_{\alpha\beta}]\, n^\rho\nabla_\rho Y^{\alpha\beta} d\sigma \label{Phi }
\end{equation}
(recall that $[G_{\alpha\beta}]$ is tangent to $\Sigma$, $n^\alpha[G_{\alpha\beta}]=0$, due to (\ref{1}) and (\ref{2}) together with the vanishing of ${\cal G}_{\alpha\beta}$ as follows from (\ref{HG}) and (\ref{Kdisc=0})). With these definitions, (\ref{arranged1}) is rewritten simply as
\begin{equation}
\mathscr{G}_{\alpha\beta} = (\kappa_1+\kappa_2) \underline{\Omega}_{\alpha\beta} +2\kappa_2 \underline{\Phi}_{\alpha\beta}; \hspace{1cm} \mathscr{G}_{\alpha\beta} = \nabla_\rho \left(\left\{(\kappa_1+\kappa_2) [R] h_{\alpha\beta} +2\kappa_2[G_{\alpha\beta}]\right\}n^\rho \delta^\Sigma \right). \label{Glayer}
\end{equation}
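The equivalence between (\ref{arranged1}) and (\ref{Glayer}) can be checked at the level of the tensorial ``strengths'': every distribution involved is of the form $\nabla_\rho(X_{\alpha\beta}\,n^\rho\delta^\Sigma)$, and this assignment is linear, so it suffices to compare the tensors $X_{\alpha\beta}$. A minimal numerical sketch in four dimensions, assuming a unit spacelike normal (timelike $\Sigma$) and using (\ref{eq:[ricci]})--(\ref{B=G}) for the curvature jumps:

```python
import numpy as np

rng = np.random.default_rng(0)

g = np.diag([-1.0, 1.0, 1.0, 1.0])       # 4-dim metric, signature (-+++)
ginv = np.linalg.inv(g)
n_lo = np.array([0.0, 0.0, 0.0, 1.0])    # unit spacelike normal n_mu
n_up = ginv @ n_lo                       # n^mu, with n.n = +1
h = g - np.outer(n_lo, n_lo)             # first fundamental form h_{mu nu}
P = np.eye(4) - np.outer(n_up, n_lo)     # projector h^mu_nu

# Random symmetric tensor B_{ab} tangent to Sigma, encoding the jumps
S = rng.normal(size=(4, 4))
B = P.T @ (S + S.T) @ P

jump_R = 2.0 * np.einsum('ab,ab->', ginv, B)        # [R] = 2 B^rho_rho
jump_Ric = B + 0.5 * jump_R * np.outer(n_lo, n_lo)  # [R_{ab}]
jump_G = jump_Ric - 0.5 * jump_R * g                # [G_{ab}]

k1, k2 = rng.normal(size=2)                         # arbitrary kappas

# strengths: Delta_{ab} ~ [R] n_a n_b, Delta ~ [R],
#            g^{mn} Delta_{mn ab} ~ [R_{ab}]
lhs = (k1 * (jump_R * g - jump_R * np.outer(n_lo, n_lo))
       + k2 * (2.0 * jump_Ric - jump_R * np.outer(n_lo, n_lo)))  # (arranged1)
rhs = (k1 + k2) * jump_R * h + 2.0 * k2 * jump_G                 # (Glayer)

assert np.allclose(lhs, rhs)
```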
Given the structure (\ref{G2dist}), the field equations (\ref{fe}) can only be satisfied if the energy-momentum tensor on the righthand side is a tensor distribution with the following terms
\begin{equation}
\underline T_{\mu\nu}=T^+_{\mu\nu} \underline\theta +T^-_{\mu\nu} (\underline{1}-\underline\theta)+\widetilde{T}_{\mu\nu} \delta^\Sigma + \underline{t}_{\mu\nu} \label{emt0}
\end{equation}
where $\widetilde{T}_{\mu\nu}$ is a symmetric tensor field defined only on $\Sigma$ and $\underline{t}_{\mu\nu}$ is by definition the singular part of $\underline{T}_{\mu\nu}$ with support on $\Sigma$ {\em not proportional} to $\delta^\Sigma$. We perform an orthogonal decomposition of $\widetilde{T}_{\mu\nu}$ into tangent, normal-tangent and normal parts with respect to $\Sigma$
\begin{equation}
\widetilde{T}_{\mu\nu} =\tau_{\mu\nu}+\tau_\mu n_\nu +\tau_\nu n_\mu +\tau n_\mu n_\nu \label{emtortog}
\end{equation}
with
\begin{eqnarray*}
\tau_{\mu\nu} := h^\rho_\mu h^\sigma_\nu \widetilde{T}_{\rho\sigma}, \hspace{3mm} \tau_{\mu\nu} =\tau_{\nu\mu} , \hspace{3mm} n^\mu\tau_{\mu\nu}=0; \hspace{7mm} \tau_{\mu} := h^\rho_\mu \widetilde{T}_{\rho\nu}n^\nu, \hspace{3mm} n^\mu \tau_\mu =0; \hspace{7mm} \tau := n^\mu n^\nu \widetilde{T}_{\mu\nu}
\end{eqnarray*}
so that
\begin{equation}
\underline T_{\mu\nu}=T^+_{\mu\nu} \underline\theta +T^-_{\mu\nu} (\underline{1}-\underline\theta)+\left(\tau_{\mu\nu}+\tau_\mu n_\nu +\tau_\nu n_\mu +\tau n_\mu n_\nu \right) \delta^\Sigma + \underline{t}_{\mu\nu}. \label{emt}
\end{equation}
Following \cite{Senovilla14,Senovilla15} the proposed names for the objects in (\ref{emt}) supported on $\Sigma$, with their respective explicit expressions, are:
\begin{enumerate}
\item the {\em energy-momentum tensor} $\tau_{\alpha\beta}$ on $\Sigma$, given by
\begin{equation}
\kappa \tau_{\alpha\beta} =-(\kappa_1 + \kappa_2) [R] K_{\alpha\beta}+\kappa_1 b h_{\alpha\beta} + 2 \kappa_2 r_{\mu\nu} h^\mu_\alpha h^\nu_\beta .\label{tauexc}
\end{equation}
$\tau_{\alpha\beta}$ is the only quantity usually defined in standard shells.
\item the {\em external flux momentum} $\tau_{\alpha}$ defined by
\begin{equation}
\kappa \tau_\alpha =-(\kappa_1 + \kappa_2)\overline{\nabla}_\alpha [R] + 2\kappa_2 r_{\mu\nu}n^\mu h^\nu_\alpha . \label{tauex}
\end{equation}
This momentum vector describes normal-tangent components of $\underline{T}_{\mu\nu}$ supported on $\Sigma$. Nothing like that exists in GR.
\item the {\em external pressure or tension} $\tau$
\begin{equation}
\kappa \tau = (\kappa_1 + \kappa_2) [R] K^\rho_\rho + \kappa_2 (2r_{\mu\nu}n^\mu n^\nu -b). \label{taue}
\end{equation}
Taking the trace of (\ref{tauexc}) one obtains a relation between $b$, $\tau$ and the trace of $\tau_{\mu\nu}$:
\begin{equation}
\kappa \left(\tau^{\rho}{}_{\rho}+\tau\right) = (\kappa_1 n+\kappa_2) b . \label{para_sacar_b}
\end{equation}
The scalar $\tau$ measures the total normal pressure/tension supported on $\Sigma$. Again, such a scalar does not exist in GR.
\item the {\em double-layer energy-momentum tensor distribution} $\underline t_{\alpha\beta}$, which is defined by
\begin{equation}
\kappa \underline t_{\alpha\beta} = \mathscr{G}_{\alpha\beta} = \nabla_\rho \left(\left\{(\kappa_1+\kappa_2) [R] h_{\alpha\beta} +2\kappa_2[G_{\alpha\beta}]\right\}n^\rho \delta^\Sigma \right)
\end{equation}
or, equivalently, by acting on any test tensor field $Y^{\alpha\beta}$ as
\begin{equation}
\kappa \left<\underline t_{\alpha\beta},Y^{\alpha\beta}\right> = - \int_\Sigma \left\{(\kappa_1+\kappa_2) [R] h_{\alpha\beta} +2\kappa_2[G_{\alpha\beta}]\right\} n^\rho\nabla_\rho Y^{\alpha\beta} \, d\sigma . \label{t}
\end{equation}
$\underline{t}_{\alpha\beta}$ is a symmetric tensor distribution of ``delta-prime'' type: it has support on $\Sigma$ but its product with objects intrinsic to $\Sigma$ is not defined unless their extensions off $\Sigma$ are known. As argued in \cite{Senovilla14,Senovilla15}, $\underline{t}_{\alpha\beta}$ resembles the energy-momentum content of double-layer surface charge distributions, or ``dipole distributions'', with strength
\begin{equation}
\kappa \mu_{\alpha\beta}:= (\kappa_1+\kappa_2) [R] h_{\alpha\beta} +2\kappa_2[G_{\alpha\beta}] ,\hspace{7mm} \mu_{\alpha\beta}=\mu_{\beta\alpha}, \hspace{5mm} n^\alpha \mu_{\alpha\beta} =0. \label{strength}
\end{equation}
We note in passing that
\begin{equation}
\kappa \mu^\rho{}_\rho = (\kappa_1 n +\kappa_2) [R], \hspace{1cm} \kappa \underline{t}^\rho{}_\rho =(\kappa_1 n +\kappa_2)\underline\Delta .
\label{tracestrength}
\end{equation}
The appearance of such double layers is remarkable, as ``massive dipoles'' do not exist. However, in quadratic theories of gravity they arise, as we have just shown, in the generic situation when thin shells are considered. In this case, $\underline{t}_{\alpha\beta}$ seems to represent the idealization of abrupt changes, or jumps, in the curvature of the space-time.
\end{enumerate}
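The projections (\ref{tauexc})--(\ref{taue}), and the trace relation (\ref{para_sacar_b}), can be verified against the singular part (\ref{tildeG}) by building $\widetilde{G}_{\alpha\beta}$ from arbitrary data and decomposing it orthogonally. A numerical sketch in four spacetime dimensions ($n=3$), assuming a unit spacelike normal and using arbitrary symmetric $r_{\mu\nu}$ for this purely algebraic check:

```python
import numpy as np

rng = np.random.default_rng(1)

g = np.diag([-1.0, 1.0, 1.0, 1.0]); ginv = np.linalg.inv(g)
n_lo = np.array([0.0, 0.0, 0.0, 1.0]); n_up = ginv @ n_lo  # n.n = +1
h = g - np.outer(n_lo, n_lo)                 # h_{mu nu}
P = np.eye(4) - np.outer(n_up, n_lo)         # projector h^mu_nu

def tangent_sym(M):
    """Project a matrix to a symmetric tensor tangent to Sigma."""
    return P.T @ (M + M.T) @ P

K = tangent_sym(rng.normal(size=(4, 4)))     # second fundamental form
r = rng.normal(size=(4, 4)); r = r + r.T     # r_{mu nu} (arbitrary here)
jump_R = rng.normal()                        # [R]
v = P.T @ rng.normal(size=4)                 # tangent vector bar-nabla[R]
b = np.einsum('ab,ab->', ginv, r)            # b = r^rho_rho
trK = np.einsum('ab,ab->', ginv, K)

k1, k2 = rng.normal(size=2)
nn = np.outer(n_lo, n_lo)

# singular part (tildeG)
Gt = (2*k2*r + k1*b*g
      - (k1 + k2)*(b*nn + np.outer(n_lo, v) + np.outer(v, n_lo)
                   + jump_R*(K - trK*nn)))

# tangent part vs (tauexc)
kappa_tau_ab = P.T @ Gt @ P
assert np.allclose(kappa_tau_ab,
                   -(k1 + k2)*jump_R*K + k1*b*h + 2*k2*(P.T @ r @ P))

# normal-tangent part vs (tauex)
kappa_tau_a = P.T @ (Gt @ n_up)
assert np.allclose(kappa_tau_a, -(k1 + k2)*v + 2*k2*(P.T @ (r @ n_up)))

# normal part vs (taue)
kappa_tau = n_up @ Gt @ n_up
r_nn = n_up @ r @ n_up
assert np.allclose(kappa_tau, (k1 + k2)*jump_R*trK + k2*(2*r_nn - b))

# trace relation (para_sacar_b) with n = 3
tr_tau = np.einsum('ab,ab->', ginv, kappa_tau_ab)
assert np.allclose(tr_tau + kappa_tau, (3*k1 + k2)*b)
```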
\section{Curvature discontinuities}\label{sec:nG2}
In the next section we are going to derive the field equations satisfied by the energy-momentum quantities (\ref{tauexc}), (\ref{tauex}), (\ref{taue}) and (\ref{strength}) supported on $\Sigma$. To that end, we have to perform a detailed calculation of the discontinuities of the field equations (\ref{fe}): these obviously include the discontinuities of the energy-momentum tensor $T_{\mu\nu}$, which must be related to the energy-momentum content concentrated on $\Sigma$.
The discontinuity of the lefthand side of (\ref{fe}) contains $[G_{\alpha\beta}^{(2)}]$ (actually, we will only need $n^\alpha[G_{\alpha\beta}^{(2)}]$) and this involves discontinuities of quadratic terms in the Riemann tensor, such as
$[R^2]$, $[R_{\alpha\beta}R^{\alpha\beta}]$, $[R_{\alpha\beta\mu\nu}R^{\alpha\beta\mu\nu}]$,
$[R R_{\alpha \beta}]$, $[R_{\alpha \mu}R_{\beta}^\mu]$, $[R_{\alpha \rho \mu \nu}R_\beta{}^{\;\rho \mu \nu}]$
and $[R_{\alpha \mu \beta \nu}R^{\mu \nu}]$, as well as discontinuities of derivatives of the curvature tensors, such as $[\nabla_\alpha\nabla_\beta R]$, $[\Box R_{\alpha\beta}]$ or $[\Box R]$. Thus, we have to use systematically the rules (\ref{discprod}) and either of (\ref{disc1f}) or (\ref{disc1f1}) supplemented with (\ref{Kdisc=0}), and we also need to have some knowledge on the discontinuities of the Riemann tensor (and its derivatives).
\subsection{Discontinuities of the curvature tensors}
Let us start by controlling the allowed discontinuities of the Riemann tensor across $\Sigma$. From the requirement (\ref{Kdisc=0}) we know that $\zeta_{\mu\nu}=0$ and thus
$$
[\Gamma^\alpha_{\beta\mu}] =0.
$$
Then, the general formula (\ref{discf}) gives
$$
\left[\partial_\lambda \Gamma^\alpha_{\beta\mu} \right]= n_\lambda \gamma^\alpha{}_{\beta\mu}
$$
for some functions $\gamma^\alpha{}_{\beta\mu}$ such that $\gamma^\alpha{}_{\beta\mu}=\gamma^\alpha{}_{\mu\beta}$, and therefore
\begin{equation}
\left[R_{\alpha\beta\lambda\mu}\right]=n_\lambda \gamma_{\alpha\beta\mu} -n_\mu \gamma_{\alpha\beta\lambda} .
\end{equation}
The antisymmetry of $R_{\alpha\beta\lambda\mu}$ in $[\alpha\beta]$ implies then that $\gamma_{(\alpha\beta)\mu}=-B_{\alpha\beta} n_\mu$ for some symmetric tensor $B_{\alpha\beta}=B_{\beta\alpha}$ defined only on $\Sigma$. Hence
$$
\gamma_{\alpha\beta\mu} =\tilde{\gamma}_{\alpha\beta\mu} -B_{\alpha\beta} n_\mu, \hspace{1cm} \tilde{\gamma}_{\alpha\beta\mu}=-\tilde{\gamma}_{\beta\alpha\mu}
$$
and
$$
\left[R_{\alpha\beta\lambda\mu}\right]=n_\lambda \tilde\gamma_{\alpha\beta\mu} -n_\mu \tilde\gamma_{\alpha\beta\lambda} .
$$
However, the symmetry of $ \gamma_{\alpha\beta\mu}$ in $(\beta\mu)$ implies $\tilde\gamma_{\alpha[\beta\mu]}-B_{\alpha[\beta} n_{\mu]}=0$ as well as $\tilde\gamma_{[\alpha\beta\mu]}=0$ from where one easily derives
$$
\tilde\gamma_{\alpha\beta\mu}=2\tilde\gamma_{\mu[\beta\alpha]}= n_\alpha B_{\beta\mu} -n_\beta B_{\alpha\mu}
$$
and we recover the standard formula \cite{MS}
\begin{equation}
\label{eq:[Rie]}
[R_{\alpha\beta\lambda\mu}]=n_\alpha n_\lambda B_{\beta\mu}-n_\lambda n_\beta B_{\alpha\mu}-n_\mu n_\alpha B_{\beta\lambda}+n_\mu n_\beta B_{\alpha\lambda}.
\end{equation}
As argued after formula (\ref{HRie}), there is no loss of generality by assuming that $B_{\alpha\beta}$ is tangent to $\Sigma$, i.e. such that
$$
B_{\alpha\beta}n^\alpha=0 \hspace{1cm} \Longrightarrow \hspace{3mm} B_{\alpha\beta} = B_{ab}\omega^a_\alpha \omega^b_\beta .
$$
Given this, plus the symmetry of $B_{\alpha\beta}$, there are $n(n+1)/2$ independent allowed discontinuities for the curvature tensor, all encoded in $B_{ab}$.
Successive contractions on (\ref{eq:[Rie]}) provide
\begin{equation}
\label{eq:[ricci]}
[R_{\beta\mu}]=B_{\beta\mu}+\frac{1}{2} n_\beta n_\mu [R],\qquad [R]=2B^\rho_\rho,
\end{equation}
or equivalently
\begin{equation}
B_{\beta\mu} = [R_{\beta\mu}]- \frac{1}{2}[R] n_\beta n_\mu =[G_{\beta\mu}] +\frac{1}{2} h_{\beta\mu} [R] \, .\label{B=G}
\end{equation}
In other words, the $n(n+1)/2$ allowed independent discontinuities of the Riemann tensor can be chosen to be the discontinuities of the $\Sigma$-tangent part of the Einstein tensor (or equivalently, of the Ricci tensor).
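The contractions leading from (\ref{eq:[Rie]}) to (\ref{eq:[ricci]}) and (\ref{B=G}) can be verified numerically for a random symmetric tangent $B_{\alpha\beta}$; a minimal sketch in four dimensions, assuming a unit spacelike normal (timelike $\Sigma$):

```python
import numpy as np

rng = np.random.default_rng(1)
g = np.diag([-1.0, 1.0, 1.0, 1.0]); ginv = np.linalg.inv(g)
n_lo = np.array([0.0, 0.0, 0.0, 1.0]); n_up = ginv @ n_lo  # n.n = +1
h = g - np.outer(n_lo, n_lo)
P = np.eye(4) - np.outer(n_up, n_lo)

S = rng.normal(size=(4, 4))
B = P.T @ (S + S.T) @ P            # symmetric B_{ab} tangent to Sigma

# [R_{abcd}] = n_a n_c B_{bd} - n_c n_b B_{ad} - n_d n_a B_{bc} + n_d n_b B_{ac}
Rj = (np.einsum('a,c,bd->abcd', n_lo, n_lo, B)
      - np.einsum('c,b,ad->abcd', n_lo, n_lo, B)
      - np.einsum('d,a,bc->abcd', n_lo, n_lo, B)
      + np.einsum('d,b,ac->abcd', n_lo, n_lo, B))

# Contract with g^{ac}: [R_{bd}] = B_{bd} + (1/2) n_b n_d [R], [R] = 2 B^r_r
Ricj = np.einsum('ac,abcd->bd', ginv, Rj)
trB = np.einsum('ab,ab->', ginv, B)
jumpR = np.einsum('bd,bd->', ginv, Ricj)
assert np.allclose(jumpR, 2*trB)
assert np.allclose(Ricj, B + 0.5*jumpR*np.outer(n_lo, n_lo))

# And B in terms of the Einstein tensor jump: B = [G] + (1/2) h [R]
Gj = Ricj - 0.5*jumpR*g
assert np.allclose(B, Gj + 0.5*jumpR*h)
```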
\subsection{Discontinuities of the derivatives of the curvature tensors}
Concerning the covariant derivative of the Riemann tensor, the general formula (\ref{disc1f}) leads to
\begin{equation}
\label{eq:[d_Rie]}
[\nabla_{\rho}R_{\alpha\beta\lambda\mu}]=n_\rho r_{\alpha\beta\lambda\mu}+ h^{\sigma}_\rho\nabla_\sigma[R_{\alpha\beta\lambda\mu}],
\end{equation}
where $r_{\alpha\beta\mu\nu}$ is a tensor field defined only on $\Sigma$ and with the symmetries of a Riemann
tensor. Using the second Bianchi identity for the Riemann tensor the previous formula implies
$$
n_{[\rho} r_{\alpha\beta]\lambda\mu}+h_{\sigma[\rho} \nabla^\sigma[R_{\alpha\beta]\lambda\mu}]=0
$$
which, on using (\ref{eq:[Rie]}) and after some calculations, implies the following structure for
$r_{\alpha\beta\mu\nu}$:
\begin{eqnarray}
r_{\alpha\beta\mu\nu}&=& K_{\alpha\mu}B_{\nu\beta}
-K_{\alpha\nu}B_{\mu\beta}+K_{\beta\nu}B_{\mu\alpha}-K_{\beta\mu}B_{\nu\alpha}\nonumber\\
&+&\left(\overline{\nabla}_\mu B_{\rho\nu}-\overline{\nabla}_\nu B_{\rho\mu}\right)(n_\alpha h^\rho_\beta-n_\beta h^\rho_\alpha)
+\left(\overline{\nabla}_\alpha B_{\rho\beta}-\overline{\nabla}_\beta B_{\rho\alpha}\right)(n_\mu h^\rho_\nu-n_\nu h^\rho_\mu)\nonumber\\
&+& n_\alpha n_\mu \rho_{\beta\nu} -n_\alpha n_\nu \rho_{\beta\mu} - n_\beta n_\mu \rho_{\alpha\nu}+ n_\beta n_\nu \rho_{\alpha\mu}, \label{eq:rel_tB}
\end{eqnarray}
where $\rho_{\beta\mu}$ is a new symmetric tensor field, defined only on $\Sigma$ and tangent to $\Sigma$, $n^\beta\rho_{\beta\mu}=0$, which encodes the {\em allowed new} independent discontinuities of the covariant derivative of the Riemann tensor. There are $n(n+1)/2$ of those again.
As far as we know, relation (\ref{eq:rel_tB}) has only been derived in \cite{MS}.
Contraction of (\ref{eq:rel_tB}) leads to equation (\ref{eq:[d_ricci]}), but now with an explicit expression for the discontinuity of the normal derivative of the Ricci tensor, which reads, on using (\ref{B=G}),
\begin{eqnarray}
r_{\beta\nu} &=&\rho_{\beta\nu} + K^\rho_\rho B_{\beta\nu} +\frac{1}{2}[R] K_{\beta\nu} -K_{\rho\beta}B^\rho_\nu- B_{\rho\beta}K^\rho_\nu \nonumber\\
&-&n_\beta\overline{\nabla}_\rho [G^\rho _\nu] -n_\nu \overline{\nabla}_\rho [G^\rho _\beta] \nonumber\\
&+&n_\beta n_\nu \rho^\alpha_\alpha, \label{disnormDRicc}
\end{eqnarray}
where a natural orthogonal decomposition of $r_{\beta\mu}$ appears: the first line is its complete tangent part which, given that $\rho_{\beta\nu}$ entails the allowed new independent discontinuities, is in itself a symmetric tensor field tangent to $\Sigma$ codifying those discontinuities. We are going to denote it by
\begin{equation}
{\cal R}_{\beta\mu}:= h^\rho_\beta h^\sigma_\mu r_{\rho\sigma} =h^\rho_\beta h^\sigma_\mu n^\lambda [\nabla_\lambda R_{\rho\sigma}]; \label{rtangent}
\end{equation}
the second line is its tangent-normal part, which is completely determined by the covariant derivative within $\Sigma$ of the discontinuity of the Einstein tensor
\begin{equation}
n^\beta h^\nu_\mu r_{\beta\nu} = -\overline{\nabla}^\rho [G_{\rho\mu}];
\label{eq:dRnnt}
\end{equation}
and finally, the third line gives the total normal component of $r_{\beta\mu}$, which can be related to the discontinuity (\ref{b}) of the normal derivative of $R$ by simply taking the trace $r^\rho_\rho =b $ leading to
\begin{equation}
r_{\beta\mu}n^\beta n^\mu =\frac{b}{2}+K^{\rho\sigma} [G_{\rho\sigma}] . \label{eq:dRnnt1}
\end{equation}
Using this we get a useful relation for the trace of $\mathcal{R}_{\alpha\beta}$, that does not depend on $\rho_{\alpha\beta}$
\begin{equation}
\mathcal{R}_\alpha^\alpha = \frac{b}{2} - K^{\rho \sigma}[G_{\rho \sigma}]. \label{traceRcal}
\end{equation}
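The latter is, in fact, an immediate consequence of the previous two relations: using the completeness relation $g^{\beta\mu}=h^{\beta\mu}+n^\beta n^\mu$, valid for the unit spacelike normal with $n^\mu n_\mu=1$, the trace $r^\rho{}_\rho=b$ splits as
$$
b=h^{\beta\mu}r_{\beta\mu}+n^\beta n^\mu r_{\beta\mu}={\cal R}^\alpha_\alpha+\frac{b}{2}+K^{\rho\sigma}[G_{\rho\sigma}],
$$
where the last equality uses (\ref{rtangent}) and (\ref{eq:dRnnt1}); solving for ${\cal R}^\alpha_\alpha$ gives (\ref{traceRcal}).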
\subsection{Second-order derivative discontinuities}
Let us now consider the jumps in the second derivatives of the Ricci tensor. The starting point is equation (\ref{eq:[d_ricci]}). We can find an expression for the second summand there by differentiating (\ref{eq:[ricci]}) along
$\Sigma$ and using the general rule (\ref{nabla=nabla1}) (see Appendix \ref{App:D1}),
\begin{equation}
\label{eq:[d_diff_ricci]}
h^\sigma_\rho\nabla_\sigma[R_{\beta\mu}]=\frac{1}{2}n_\beta n_\mu\overline{\nabla}_\rho [R]
+n_{(\mu}\left(K_{\beta)\rho}[R] -2B_{\beta)\lambda}K^\lambda_\rho\right) +\overline{\nabla}_\rho B_{\beta\mu}.
\end{equation}
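As a consistency check, the trace of (\ref{eq:[d_diff_ricci]}) with $g^{\beta\mu}$ works out correctly: using the tangency of $K_{\alpha\beta}$ and $B_{\alpha\beta}$, together with $B^\mu{}_\mu=[R]/2$ (the trace of (\ref{eq:[ricci]})), one finds
$$
h^\sigma_\rho\nabla_\sigma[R]=\frac{1}{2}\overline{\nabla}_\rho[R]+\overline{\nabla}_\rho B^\mu{}_\mu=\overline{\nabla}_\rho[R],
$$
as it must be for a scalar function.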
The jumps of the second-order derivatives of the Ricci tensor,
due to the general formula (\ref{disc1f}), can be written as
\begin{equation}
\label{eq:[dd_ricci]}
[\nabla_\lambda\nabla_\rho R_{\beta\mu}]=n_\lambda A_{\rho\beta\mu}+h^\sigma_\lambda\nabla_\sigma [\nabla_\rho R_{\beta\mu}]
\end{equation}
where $A_{\rho\beta\mu}=A_{\rho(\beta\mu)}$ is a shorthand for
$$
A_{\rho\beta\mu}=n^\lambda [\nabla_\lambda \nabla_\rho R_{\beta\mu}].
$$
The last term
$h^\sigma_\lambda\nabla_\sigma [\nabla_\rho R_{\beta\mu}]$ can be further expanded
by first using (\ref{eq:[d_ricci]}) to obtain
\[
h^\sigma_\lambda\nabla_\sigma [\nabla_\rho R_{\beta\mu}]=
K_{\lambda\rho}r_{\beta\mu} +n_\rho h^\sigma_\lambda\nabla_\sigma r_{\beta\mu}
+h^\sigma_\lambda \nabla_\sigma \left(h^\gamma_\rho \nabla_\gamma[R_{\beta\mu}]\right),
\]
and then computing the last summand here, which leads to
\begin{eqnarray}
&&h^\sigma_\lambda\nabla_\sigma [\nabla_\rho R_{\beta\mu}]=
K_{\lambda\rho} r_{\beta\mu}+ 2 n_{(\mu} K_{\beta)(\lambda}\overline{\nabla}_{\rho)}[R] +[R] K_{\rho(\beta}K_{\mu)\lambda}-4 K^\gamma_{(\rho}\overline{\nabla}_{\lambda)} B_{\gamma(\beta} n_{\mu)}\nonumber \\
&&\qquad +n_\beta n_\mu\left(\frac{1}{2}\overline{\nabla}_\lambda\overline{\nabla}_\rho [R]
-[R]K_\lambda^\sigma K_{\rho\sigma}
+2K^\gamma_\rho K^\sigma_\lambda B_{\sigma\gamma}\right)
\nonumber\\
&&\qquad + \left(\overline{\nabla}_\lambda K^\gamma_\rho-n_\rho K^\sigma_\lambda K^\gamma_\sigma\right)\left([R] h_{\gamma(\beta}-2B_{\gamma(\beta}\right)n_{\mu)}
-\frac{1}{2} n_\mu n_\beta n_\rho K_\lambda^\sigma\overline{\nabla}_\sigma [R]
\nonumber\\
&&\qquad +\overline{\nabla}_\lambda \overline{\nabla}_\rho B_{\beta\mu} -2K^\gamma_\rho B_{\gamma(\beta}K_{\mu)\lambda}
- n_\rho K^\sigma_\lambda \overline{\nabla}_\sigma B_{\beta\mu}
+ n_\rho h_\lambda^\sigma \nabla_\sigma r_{\beta\mu}.
\label{eq:A}
\end{eqnarray}
Let us stress that all the terms in the first two lines of the above expression are symmetric in $(\lambda\rho)$.
Concerning $A_{\rho\beta\mu}$,
let us first decompose it into normal and tangential parts
by
$$
A_{\rho\beta\mu}=n_\rho A_{\beta\mu}+h^\gamma_\rho A_{\gamma\beta\mu},
\hspace{1cm} A_{\beta\mu} := n^\rho A_{\rho\beta\mu}, \hspace{5mm} A_{\beta\mu} =A_{\mu\beta} .
$$
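This splitting is nothing but the completeness relation $\delta^\gamma_\rho=n_\rho n^\gamma+h^\gamma_\rho$ (again for a unit spacelike normal) applied to the first index,
$$
A_{\rho\beta\mu}=\delta^\gamma_\rho A_{\gamma\beta\mu}=n_\rho\, n^\gamma A_{\gamma\beta\mu}+h^\gamma_\rho A_{\gamma\beta\mu},
$$
so that only the tangent part $h^\gamma_\rho A_{\gamma\beta\mu}$ remains to be computed.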
In order to obtain an expression for $h^\gamma_\rho A_{\gamma\beta\mu}$
we take the antisymmetric part of (\ref{eq:[dd_ricci]}) with respect to $[\lambda \rho]$,
and contract with $n^\lambda$.
For the lefthand side of (\ref{eq:[dd_ricci]}) we
use the Ricci identity applied to the Ricci tensor on both sides $V^\pm$, and take the
difference of the limits on $\Sigma$, so that
\[
[(\nabla_{\lambda}\nabla_{\rho} -\nabla_{\rho}\nabla_{\lambda})R_{\beta\mu}]=[R^\gamma{}_{\beta\rho\lambda}R_{\gamma\mu}]+[R^\gamma{}_{\mu\rho\lambda}R_{\beta\gamma}].
\]
For the righthand side of (\ref{eq:[dd_ricci]}), after the contraction with $n^\lambda$, we get
\[
A_{\rho\beta\mu}-n^\lambda n_{\rho} A_{\lambda\beta\mu}-n^\lambda h^\sigma_\rho \nabla_\sigma [\nabla_\lambda R_{\beta\mu}]=
h^\gamma_{\rho} A_{\gamma\beta\mu}-n^\lambda h^\sigma_\rho \nabla_\sigma [\nabla_\lambda R_{\beta\mu}].
\]
Isolating $h^\gamma_\rho A_{\gamma\beta\mu}$ and using (\ref{eq:A}) for the last term of the above equation,
it is then straightforward to obtain
\begin{eqnarray}
&&A_{\rho\beta\mu}=n_\rho A_{\beta\mu}
+n^\nu[R^\gamma{}_{\beta\rho\nu}R_{\gamma\mu}]
+n^\nu[R^\gamma{}_{\mu\rho\nu}R_{\beta\gamma}]
+h^\sigma_\rho\nabla_\sigma r_{\beta\mu}\nonumber \\
&&\qquad -\frac{1}{2}n_\mu n_\beta K^\sigma_\rho\overline{\nabla}_\sigma [R]
-K^\sigma_\rho \overline{\nabla}_\sigma B_{\beta\mu}
-[R] K^\sigma_\rho K_{\sigma(\beta} n_{\mu)}
+ 2K^\sigma_\rho K^\gamma_\sigma B_{\gamma(\beta} n_{\mu)}.
\label{eq:ddRn}
\end{eqnarray}
The expression for $[\nabla_\lambda\nabla_\rho R_{\beta\mu}]$ now follows
by combining (\ref{eq:[dd_ricci]}) with
(\ref{eq:A}) and (\ref{eq:ddRn}). After some rearranging, this reads
\begin{eqnarray}
&&[\nabla_\lambda\nabla_\rho R_{\beta\mu}]=n_\lambda n_\rho A_{\beta\mu}
+n_\lambda n^\nu\left([R^\gamma{}_{\beta\rho\nu}R_{\gamma\mu}]+[R^\gamma{}_{\mu\rho\nu}R_{\beta\gamma}]\right)
+2n_{(\lambda} h_{\rho)}{}^\sigma\nabla_\sigma r_{\beta\mu}\nonumber\\
&&\qquad - n_\mu n_\beta n_{(\lambda} K_{\rho)}{}^\sigma\overline{\nabla}_\sigma [R]
-2 n_{(\lambda} K_{\rho)}{}^\sigma \overline{\nabla}_\sigma B_{\beta\mu}
- 2 [R] n_{(\lambda} K_{\rho)}{}^\sigma K_{\sigma(\beta} n_{\mu)}\nonumber \\
&&\qquad +4n_{(\lambda} K_{\rho)}{}^\sigma K^\gamma_\sigma B_{\gamma(\beta} n_{\mu)}
+ 2 n_{(\mu} K_{\beta)(\lambda}\overline{\nabla}_{\rho)}[R] +[R] K_{\rho(\beta}K_{\mu)\lambda}-4 K^\gamma{}_{(\rho}\overline{\nabla}_{\lambda)} B_{\gamma(\beta} n_{\mu)}\nonumber \\
&&\qquad +n_\beta n_\mu\left(\frac{1}{2}\overline{\nabla}_\lambda\overline{\nabla}_\rho [R]
-[R] K_\lambda^\sigma K_{\rho\sigma}
+2K^\gamma_\rho K^\sigma_\lambda B_{\sigma\gamma}\right) + K_{\lambda\rho} r_{\beta\mu}
\nonumber\\
&&\qquad + \overline{\nabla}_\lambda K^\gamma_\rho\left([R] h_{\gamma(\beta}-2B_{\gamma(\beta}\right)n_{\mu)}
+\overline{\nabla}_\lambda \overline{\nabla}_\rho B_{\beta\mu}-2K^\gamma_\rho B_{\gamma(\beta}K_{\mu)\lambda}.
\label{eq:[ddRicci]}
\end{eqnarray}
We must stress that some terms in (\ref{eq:[ddRicci]}), namely $A_{\beta\mu}$ and $r_{\beta\mu}$,
are not completely independent of each other.
The contraction of (\ref{eq:[ddRicci]}) with $g^{\rho\lambda}$ yields
\begin{eqnarray}
&&[\Box R_{\beta\mu}]=A_{\beta\mu}+K r_{\beta\mu}+
n_\mu n_\beta\left(\frac{1}{2} \Box [R]-[R]K^{\rho\sigma} K_{\rho\sigma}+2K^{\sigma\rho}K_\rho^\gamma B_{\sigma\gamma}\right) \nonumber \\
&&\qquad +2n_{(\mu} h^\lambda_{\beta)}\left(\overline{\nabla}_\rho [R] K^\rho_\lambda-2K^{\gamma\rho}\overline{\nabla}_\rho B_{\gamma\lambda}+\frac{1}{2}\overline{\nabla}_\rho K^\rho_\lambda - \overline{\nabla}_\rho K^{\rho\gamma}B_{\gamma\lambda} \right)\nonumber\\
&&\qquad +[R] K_{\rho\beta}K_{\mu}^\rho+\overline\Box B_{\beta\mu}-2K^\gamma_\rho K^\rho_{(\mu}B_{\beta)\gamma},
\label{eq:[box_ricci]}
\end{eqnarray}
while contracting with $g^{\beta\mu}$ we obtain \cite{Senovilla13}
\begin{equation}
[\nabla_\nu\nabla_\mu R]=A^\rho_\rho n_\nu n_\mu+2n_{(\nu}\overline{\nabla}_{\mu)}b
-2n_{(\nu} K_{\mu)}^\lambda\overline{\nabla}_\lambda [R]+b K_{\nu\mu}+\overline{\nabla}_\nu\overline{\nabla}_\mu [R].
\label{eq:[dd_ricciS]}
\end{equation}
From either of the previous expressions we readily obtain
\begin{equation}
[\Box R]=A^\rho_\rho+bK+\overline\Box [R] .
\label{eq:[box_ricciS]}
\end{equation}
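For instance, contracting (\ref{eq:[dd_ricciS]}) with $g^{\nu\mu}$ and using $n^\mu n_\mu=1$ together with the tangency properties $n^\mu\overline{\nabla}_\mu b=0$ and $n^\nu K_\nu{}^\lambda=0$, all the mixed terms drop out and
$$
[\Box R]=A^\rho_\rho+b\,K^\mu{}_\mu+\overline{\nabla}^\mu\overline{\nabla}_\mu[R]=A^\rho_\rho+bK+\overline\Box[R].
$$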
The energy-momentum quantities (\ref{tauexc}-\ref{taue}) will arise from the discontinuities of the {\em normal} components of the lefthand side of (\ref{fe}). In other words, we will only need to consider $n^\alpha[G^{(2)}_{\alpha\beta}]$.
Observe then that $A_{\beta\mu}$ only appears in $[\Box R_{\beta\mu}]$,
and since we only need the terms contracted with the normal once, in particular $n^\beta[\Box R_{\beta\mu}]$,
we are only interested in controlling $n^\beta A_{\beta\mu}$. This can be done by using the identities $2\nabla^\rho R^\pm_{\rho\mu}=\nabla_\mu R^\pm$ at both sides of $\Sigma$, and taking the difference after one further differentiation:
\begin{equation}
n^\nu[\nabla_\nu\nabla^\rho R_{\rho\mu}]=\frac{1}{2}n^\nu [\nabla_\nu\nabla_\mu R].
\label{eq:bianchi_contr}
\end{equation}
The lefthand side here comes from (\ref{eq:[dd_ricci]}) combined with (\ref{eq:ddRn}) after one contraction,
whereas for the righthand side
we simply have to contract (\ref{eq:[dd_ricciS]}) with $n^\nu$.
Equation (\ref{eq:bianchi_contr}) is thus found to be equivalent to
\begin{eqnarray}
n^\rho A_{\rho\mu}
+n^\sigma\left(-[R^\gamma_\sigma R_{\gamma\mu}]+[R_{\gamma\mu\rho\sigma} R^{\gamma\rho}]\right)+h^{\beta\sigma}\nabla_\sigma r_{\beta\mu}-K^{\beta\sigma}\overline{\nabla}_\sigma B_{\beta\mu}\nonumber \\
-n_\mu\left(\frac{1}{2}[R] K_{\rho\sigma} K^{\rho\sigma} -K^{\sigma\beta}K_\sigma^\gamma B_{\gamma\beta}\right)
=\frac{1}{2}\left(A^\rho_\rho n_\mu+ \overline{\nabla}_\mu b -K_\mu^\lambda\overline{\nabla}_\lambda [R]\right).
\label{eq:for_nddRn}
\end{eqnarray}
\subsection{Discontinuities of terms quadratic in the curvature}
Now, let us concern ourselves with the many terms in (\ref{eq:G2}) quadratic in curvature tensors. To start with,
using (\ref{eq:[ricci]}) with (\ref{discprod}) we readily obtain
\begin{eqnarray}
&&[R_{\alpha\beta}R^{\alpha\beta}]=2 [R^{\alpha\beta}]R^\Sigma_{\alpha\beta} = \left(2B^{\alpha\beta}+ [R] n^\alpha n^\beta\right)R^\Sigma_{\alpha\beta}, \\
&&[R R_{\alpha\beta}]=R^\Sigma [R_{\alpha\beta}] +[R] R^\Sigma_{\alpha\beta} = R^\Sigma\left(B_{\alpha\beta}+\frac{1}{2}n_\alpha n_\beta [R]\right) +[R] R^\Sigma_{\alpha\beta}, \label{eq:[RR]}\\
&& [R^2]=2[R]R^\Sigma.
\label{eq:[RR]1}
\end{eqnarray}
Regarding $n^\alpha[R_{\alpha \mu}R_{\beta}^\mu]$, let us first consider the contraction $n^\sigma n^\mu [R^\gamma_\sigma R_{\gamma\mu}]$. The chain of equalities
\begin{equation}
n^\sigma n^\mu [R^\gamma_\sigma R_{\gamma\mu}]=2n^\sigma n^\mu R^\Sigma{}^\gamma_\sigma[R_{\gamma\mu}]=[R] n^\gamma n^\mu R^\Sigma{}_{\gamma\mu}
\label{eq:nn[RR]1}
\end{equation}
follows from (\ref{eq:[ricci]}) and (\ref{discprod}). Half-adding the two $\pm$ equations (\ref{gauss}) and using the result in (\ref{eq:nn[RR]1}) we derive
\begin{equation}
n^\sigma n^\mu [R^\gamma_\sigma R_{\gamma\mu}]=\frac{1}{2} [R] (R^\Sigma-\overline{R}+(K^\rho_\rho)^2 - K_{\rho\sigma} K^{\rho\sigma} ).
\label{eq:nn[RR]}
\end{equation}
Analogous procedures using the Gauss equation (\ref{gauss0}) accordingly yield
\begin{eqnarray}
&&n^\sigma h^\mu_\nu [R^\gamma_\sigma R_{\gamma\mu}]=B_{\alpha\nu}(\overline{\nabla}_\beta K^{\beta\alpha}-\overline{\nabla}^\alpha K^\rho_\rho )+\frac{1}{2}[R](\overline{\nabla}_\alpha K^\alpha_\nu-\overline{\nabla}_\nu K^\rho_\rho ),\label{eq:ne[RR]}\\
&&n^\alpha n^\beta[R_{\alpha\mu\beta\nu} R^{\mu\nu}]=
\left(2R^\Sigma_{\mu\nu}- \overline{R}_{\mu\nu} +K^\rho_\rho K_{\mu\nu}-K_{\mu\sigma}K^\sigma_\nu\right) B^{\mu\nu},
\label{eq:nn[RieRic]}\\
&&n^\alpha h^\beta_\lambda[R_{\alpha\mu\beta\nu} R^{\mu\nu}]=
(\overline{\nabla}_\beta K_{\lambda\alpha}-\overline{\nabla}_\lambda K_{\beta\alpha}) B^{\alpha\beta}-(\overline{\nabla}_\alpha K^{\alpha\beta}-\overline{\nabla}^\beta K^\rho_\rho )B_{\beta\lambda},
\label{eq:ne[RieRic]}\\
&&n^\alpha n^\beta[R_{\alpha\rho\mu\nu} R_\beta{}^{\rho\mu\nu}]=4n^\alpha n^\beta[R_{\alpha\mu\beta\nu} R^{\mu\nu}]-4R^\Sigma_{\rho\sigma} B^{\rho\sigma},
\label{eq:nn[RieRie]}\\
&&n^\alpha h^\beta_\lambda [R_{\alpha\rho\mu\nu} R_\beta{}^{\rho\mu\nu}]=2 B^{\alpha\beta}(\overline{\nabla}_\alpha K_{\lambda\beta}-\overline{\nabla}_\lambda K_{\alpha\beta}),\\
&&[R_{\alpha\beta\mu\nu} R^{\alpha\beta\mu\nu}]=2n^\alpha n^\beta[R_{\alpha\rho\mu\nu} R_\beta{}^{\rho\mu\nu}]= 8B^{\alpha\beta}(R^\Sigma_{\alpha\beta}-\overline{R}_{\alpha\beta}+K^\rho_\rho K_{\alpha\beta}-K_{\alpha\rho}K^\rho_\beta).
\label{eq:[RieRie]}
\end{eqnarray}
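Note that the final expression in (\ref{eq:[RieRie]}) is a direct consequence of the two relations (\ref{eq:nn[RieRic]}) and (\ref{eq:nn[RieRie]}); indeed,
$$
2n^\alpha n^\beta[R_{\alpha\rho\mu\nu} R_\beta{}^{\rho\mu\nu}]
=8\left(2R^\Sigma_{\mu\nu}-\overline{R}_{\mu\nu}+K^\rho_\rho K_{\mu\nu}-K_{\mu\sigma}K^\sigma_\nu\right)B^{\mu\nu}-8R^\Sigma_{\mu\nu}B^{\mu\nu},
$$
which reproduces the righthand side of (\ref{eq:[RieRie]}) upon collecting the $R^\Sigma_{\mu\nu}B^{\mu\nu}$ terms.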
\subsection{Discontinuities of the quadratic part $[G^{(2)}_{\alpha\beta}]$}
We are now ready to compute the full $n^\alpha[G^{(2)}_{\alpha\beta}]$. To keep track of the different terms, we split the computation into three parts, corresponding to the terms multiplied by each of the three constants $a_1, a_2, a_3$ in (\ref{eq:G2}).
\begin{itemize}
\item Terms with $a_1$:
The terms in (\ref{eq:G2}) that go with $a_1$ are
\[
G^{(2)a_1}_{\alpha\beta}:= 2 R R_{\alpha\beta}-2\nabla_\beta\nabla_\alpha R
-\frac{1}{2}g_{\alpha\beta}R^2+2g_{\alpha\beta}\Box R,
\]
and we can compute their jump using (\ref{eq:[RR]}), (\ref{eq:[dd_ricciS]}) and (\ref{eq:[box_ricciS]}) to obtain
\begin{equation}
n^\alpha n^\beta[G^{(2)a_1}_{\alpha\beta}]=2[R] R^\Sigma_{\alpha\beta}n^\alpha n^\beta
+2b K^\rho_\rho+2\overline\Box [R]
\label{eq:nnGa1}
\end{equation}
and
\begin{equation}
n^\alpha h^\beta_\mu [G^{(2)a_1}_{\alpha\beta}]=
2[R] R^\Sigma_{\alpha\beta}n^\alpha h^\beta_\mu-2\overline{\nabla}_\mu b+2 K_\mu^\alpha \overline{\nabla}_\alpha [R].
\label{eq:neGa1}
\end{equation}
\item Terms with $a_2$:
The terms in (\ref{eq:G2}) relative to $a_2$ are
\[
G^{(2)a_2}_{\alpha\beta}:= 2 R_{\alpha\mu\beta\nu} R^{\mu\nu}-\nabla_\beta\nabla_\alpha R
+\Box R_{\alpha\beta}-\frac{1}{2}g_{\alpha\beta}\left(R_{\mu\nu} R^{\mu\nu}-\Box R\right).
\]
Before using (\ref{eq:nn[RieRic]}) and (\ref{eq:ne[RieRic]}) it is convenient to
write down $n^\alpha [\Box R_{\alpha\beta}]$ using (\ref{eq:[box_ricci]}) combined with (\ref{eq:for_nddRn}),
since some terms simplify. With the help of
(\ref{eq:nn[RieRic]}), (\ref{eq:ne[RieRic]}), (\ref{eq:[dd_ricciS]}), (\ref{eq:[RR]})
and (\ref{rtangent}-\ref{eq:dRnnt1}) it is then easy to get
\begin{eqnarray}
n^\alpha n^\beta[G^{(2)a_2}_{\alpha\beta}] &=& \frac{b}{2} K^\rho_\rho + \overline\Box [R] + \frac{1}{4}[R](R^\Sigma-\overline{R} + (K^\rho_\rho)^2)-\frac{3}{4}[R] K_{\rho\sigma} K^{\rho\sigma}
\nonumber\\
&&+ {\cal R}_{\rho\sigma} K^{\rho\sigma} + \overline{\nabla}_\rho\overline{\nabla}_\mu [G^{\rho\mu}] +B^{\mu\nu}(R^\Sigma_{\mu\nu} - \overline{R}_{\mu\nu} + K^\rho_\rho K_{\mu\nu}) , \\
n^\alpha h^\beta_\mu[G^{(2)a_2}_{\alpha\beta}]&=& -\frac{1}{2}\overline{\nabla}_\mu b + \frac{3}{2} K^\alpha_\mu \overline{\nabla}_\alpha [R]+ [R] \left( \overline{\nabla}_\alpha K^\alpha_\mu - \frac{1}{2} \overline{\nabla}_\mu K\right) -\overline{\nabla}_\alpha {\cal R}^\alpha_\mu + K^\alpha_\mu\overline{\nabla}^\nu[G_{\nu\alpha}] \nonumber\\
&& + B^{\alpha\beta}(\overline{\nabla}_\beta K_{\alpha\mu} -\overline{\nabla}_\mu K_{\alpha\beta})
-B_{\alpha\mu}\overline{\nabla}_\beta K^{\alpha\beta}-K^{\alpha\beta}\overline{\nabla}_\beta B_{\alpha\mu}.
\end{eqnarray}
\item Terms with $a_3$:
Regarding $a_3$ we have
\[
G^{(2)a_3}_{\alpha\beta}:= -4R_{\alpha\mu}R^\mu_\beta + 2 R_{\alpha \rho \mu \nu} R_\beta^{\; \rho \mu \nu} + 4 R_{\alpha\mu\beta\nu} R^{\mu \nu}
- 2\nabla_\beta\nabla_\alpha R + 4 \Box R_{\alpha\beta} - \frac{1}{2} g_{\alpha\beta}R_{\rho\gamma\mu\nu}R^{\rho\gamma\mu\nu}.
\]
All terms have already appeared except for the last one, for which we use (\ref{eq:[RieRie]}).
Straightforward calculations lead to
\begin{eqnarray}
n^\alpha n^\beta [G^{(2)a_3}_{\alpha\beta}] &=& 4 {\cal R}_{\alpha\beta} K^{\alpha\beta} + 4 \overline{\nabla}_\alpha \overline{\nabla}_\beta [G^{\alpha\beta}] +4[G_{\alpha\rho}]K^{\alpha\beta}K^\rho_\beta + 2\overline\Box [R]\nonumber \\
&& +4 B^{\alpha\beta}(R^\Sigma_{\alpha\beta}-\overline{R}_{\alpha\beta}+K_{\alpha\beta}K^\rho_\rho-K_{\alpha\rho}K^\rho_\beta) , \\
n^\alpha h^\beta_\mu [G^{(2)a_3}_{\alpha\beta}] &=& +4 K^\alpha_\mu \overline{\nabla}^\beta[G_{\beta\alpha}] - 4\overline{\nabla}_\alpha{\cal R}^\alpha_\mu + 4K^\alpha_\mu \overline{\nabla}_\alpha [R]
- 4 \overline{\nabla}_\beta (B_{\alpha\mu}K^{\alpha\beta}) \nonumber \\
&& +2[R]\overline{\nabla}_\alpha K^\alpha_\mu -4B_{\beta\mu}\overline{\nabla}_\alpha K^{\alpha\beta}+4 B^{\alpha\beta}(\overline{\nabla}_\beta K_{\alpha\mu} -\overline{\nabla}_\mu K_{\alpha\beta}).
\end{eqnarray}
\end{itemize}
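In performing the sum it is convenient to note that, combining (\ref{eq:nn[RR]1}) with (\ref{eq:nn[RR]}),
$$
2[R]\, R^\Sigma_{\alpha\beta}n^\alpha n^\beta=[R]\left(R^\Sigma-\overline{R}+(K^\rho_\rho)^2-K_{\rho\sigma}K^{\rho\sigma}\right),
$$
which expresses the first term of (\ref{eq:nnGa1}) in terms of quantities intrinsic and extrinsic to $\Sigma$.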
Collecting all the above, we finally obtain
\begin{eqnarray}
&&n^\alpha n^\beta [G^{(2)}_{\alpha\beta}]=\kappa_1\left\{
bK^\rho_\rho+\overline\Box [R]+\frac{1}{2}[R] \left(R^\Sigma-\overline{R}+(K^\rho_\rho)^2-K_{\rho\sigma} K^{\rho\sigma}\right)\right\}\nonumber \\
&&\qquad+\kappa_2\left\{2 {\cal R}_{\alpha\beta} K^{\alpha\beta} +2\overline{\nabla}_\alpha \overline{\nabla}_\beta [G^{\alpha\beta}] +2B^{\alpha\beta}(R^\Sigma_{\alpha\beta}-\overline{R}_{\alpha\beta}+K_{\alpha\beta}K^\rho_\rho-K_{\alpha\rho}K^\rho_\beta)\right.\nonumber \\
&&\qquad \left. +2[G_{\alpha\mu}]K^{\alpha\beta}K_\beta^\mu+\overline\Box [R]\right\} \label{eq:nn[G2]}
\end{eqnarray}
\begin{eqnarray}
&&n^\alpha h^\beta_\mu [G^{(2)}_{\alpha\beta}]=\kappa_1\left\{[R](\overline{\nabla}_\alpha K^\alpha_\mu-\overline{\nabla}_\mu K^\rho_\rho)-\overline{\nabla}_\mu b+K_\mu^\alpha\overline{\nabla}_\alpha [R]\right\}\nonumber \\
&&\qquad+\kappa_2\left\{-2\overline{\nabla}_\alpha{\cal R}^\alpha_\mu+2K^\alpha_\mu \overline{\nabla}^\beta[G_{\beta\alpha}] +2 B^{\alpha\beta}(\overline{\nabla}_\beta K_{\alpha\mu}-\overline{\nabla}_\mu K_{\alpha\beta})+2K^\alpha_\mu \overline{\nabla}_\alpha [R]\right.\nonumber\\
&&\qquad+\left.[R]\overline{\nabla}_\alpha K^\alpha_\mu-2B_{\alpha\mu}\overline{\nabla}_\beta K^{\alpha\beta}-2K^{\alpha\beta}\overline{\nabla}_\beta B_{\alpha\mu}\right\}.\label{eq:ne[G2]}
\end{eqnarray}
{\bf Remark:} To close this section, we stress that all the discontinuities computed in this section \ref{sec:nG2} are purely geometrical, and therefore valid in any theory based on a Lorentzian manifold whenever (\ref{Kdisc=0}) holds.
\section{Field equations on the layer $\Sigma$}
\label{sec:field_eqs_S}
Relations (\ref{eq:nn[G2]}) and (\ref{eq:ne[G2]}) are the equations we were looking for, but we wish to rewrite them in terms of (derivatives of) the energy-momentum quantities supported on $\Sigma$ given in (\ref{tauexc}-\ref{taue}) and (\ref{strength}). Observe, first of all, that the three relations (\ref{rtangent}), (\ref{eq:dRnnt}) and (\ref{eq:dRnnt1}) allow us to rewrite the energy-momentum contents supported on $\Sigma$ (\ref{tauexc}-\ref{taue}) as follows
\begin{eqnarray}
\kappa \tau_{\alpha\beta} =-(\kappa_1 + \kappa_2) [R] K_{\alpha\beta}+\kappa_1 b h_{\alpha\beta} + 2 \kappa_2 {\cal R}_{\alpha\beta} ,\label{tauexc1}
\\
\kappa \tau_\alpha =-(\kappa_1 + \kappa_2)\overline{\nabla}_\alpha [R] - 2\kappa_2 \overline{\nabla}^\rho[G_{\rho\alpha}] , \label{tauex1}
\\
\kappa \tau = (\kappa_1 + \kappa_2) [R] K^\rho_\rho + 2\kappa_2 K^{\rho\sigma} [G_{\rho\sigma}], \label{taue1}
\end{eqnarray}
and using the definition of the double-layer strength (\ref{strength}) the last two here can be rewritten as
\begin{eqnarray}
\tau_\alpha =-\overline{\nabla}^\rho\mu_{\rho\alpha} , \label{tauex2}
\\
\tau = K^{\rho\sigma} \mu_{\rho\sigma}. \label{taue2}
\end{eqnarray}
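As a consistency check, both (\ref{tauex2}) and (\ref{taue2}) are reproduced from (\ref{tauex1}) and (\ref{taue1}) by a strength of the form
$$
\kappa\,\mu_{\rho\alpha}=(\kappa_1+\kappa_2)[R]\,h_{\rho\alpha}+2\kappa_2\,[G_{\rho\alpha}],
$$
since metric compatibility of $\overline{\nabla}$ gives $\overline{\nabla}^\rho\left([R]h_{\rho\alpha}\right)=\overline{\nabla}_\alpha[R]$ (this identification can be checked directly against the definition (\ref{strength})).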
Now, a direct computation provides the following expressions for some combinations of derivatives of these objects:
\begin{eqnarray}
\kappa \left( \overline{\nabla}^\beta \tau_{\alpha\beta} + K^\rho_\rho \tau_\alpha +\overline{\nabla}_\alpha \tau \right) = -(\kappa_1 + \kappa_2) \left(K_\alpha{}^\beta\overline{\nabla}_\beta [R] + [R] (\overline{\nabla}^\beta K_{\alpha\beta}-\overline{\nabla}_\alpha K^\rho_\rho)\right)\nonumber\\
+\kappa_1 \overline{\nabla}_\alpha b +2\kappa_2(\overline{\nabla}^\beta {\cal R}_{\alpha\beta}+\overline{\nabla}_\alpha(K^{\rho\sigma}[G_{\rho\sigma}])+K^\rho_\rho \overline{\nabla}^\mu[G_{\mu\alpha}]),
\end{eqnarray}
\begin{eqnarray}
\kappa \left( \tau_{\alpha\beta}K^{\alpha\beta} - \overline{\nabla}^\alpha \tau_\alpha \right) = (\kappa_1 + \kappa_2)(\overline\Box [R] - [R] K_{\rho\sigma}K^{\rho\sigma}) \nonumber
\\
+ \kappa_1 b K^\rho_\rho + 2\kappa_2 (K^{\rho\sigma} {\cal R}_{\rho\sigma} + \overline{\nabla}^\alpha \overline{\nabla}^\beta [G_{\alpha\beta}]).
\end{eqnarray}
Using these, equations (\ref{eq:ne[G2]}) and (\ref{eq:nn[G2]}) become, respectively (after some rewriting using (\ref{gauss2}), (\ref{gauss3}) and (\ref{fe})),
\begin{eqnarray*}
\kappa\left( n^\alpha h^\rho_\beta [T_{\alpha\rho}]+ \overline{\nabla}^\alpha \tau_{\alpha\beta} + K^\rho_\rho \tau_\beta +\overline{\nabla}_\beta \tau \right)
&=& 2\kappa_2 \left \lbrace K^{\alpha\rho}\overline{\nabla}_\beta [G_{\alpha\rho}] - K^\rho_\rho \overline{\nabla}^\alpha [G_{\alpha\beta}]\right.\nonumber\\
&& \left. + \overline{\nabla}_\rho ([G^{\alpha\rho}]K_{\alpha\beta}) - \overline{\nabla}_\rho([G_{\alpha\beta}]K^{\alpha\rho}) \right \rbrace, \\
\kappa\left( n^\alpha n^\beta [T_{\alpha\beta}]+ \overline{\nabla}^\alpha \tau_\alpha - \tau_{\alpha\beta}K^{\alpha\beta} \right)&=&
(\kappa_1 +\kappa_2) [R]\left( n^\alpha n^\beta R^\Sigma_{\alpha\beta} + K_{\alpha\beta}K^{\alpha\beta}\right)\nonumber\\&&+2\kappa_2
[G^{\mu\nu}]\left (n^\alpha n^\gamma R^\Sigma_{\alpha \mu\gamma\nu} + K_\mu^\rho K_{\nu\rho} \right ).
\end{eqnarray*}
Using now the definition of the strength (\ref{strength}) these become
\begin{eqnarray*}
n^\alpha h^\rho_\beta [T_{\alpha\rho}]+ \overline{\nabla}^\alpha \tau_{\alpha\beta} + K^\rho_\rho \tau_\beta +\overline{\nabla}_\beta \tau
&=& K^{\alpha\rho}\overline{\nabla}_\beta\mu_{\alpha\rho} - K^\rho_\rho \overline{\nabla}^\alpha \mu_{\alpha\beta} \nonumber \\
&& + \overline{\nabla}_\rho (\mu^{\alpha\rho}K_{\alpha\beta}) - \overline{\nabla}_\rho(\mu_{\alpha\beta}K^{\alpha\rho}) \\
n^\alpha n^\beta [T_{\alpha\beta}]+ \overline{\nabla}^\alpha \tau_\alpha - \tau_{\alpha\beta}K^{\alpha\beta} &= &
\mu^{\mu\nu}\left (n^\alpha n^\gamma R^\Sigma_{\alpha \mu\gamma\nu} + K_\mu^\rho K_{\nu\rho} \right).
\end{eqnarray*}
Recalling here the relations (\ref{tauex2}) and (\ref{taue2}) between $\tau_\alpha$ and $\tau$ with the double-layer strength $\mu_{\alpha\beta}$, we finally obtain the following field equations
\begin{empheq}[box=\fbox]{align}
n^\alpha h^\rho_\beta [T_{\alpha\rho}]+ \overline{\nabla}^\alpha \tau_{\alpha\beta}
&= -\mu_{\alpha\rho} \overline{\nabla}_\beta K^{\alpha\rho} + \overline{\nabla}_\rho (\mu^{\alpha\rho}K_{\alpha\beta}) - \overline{\nabla}_\rho(\mu_{\alpha\beta}K^{\alpha\rho}) ,
\label{eq:1} \\
n^\alpha n^\beta [T_{\alpha\beta}] - \tau_{\alpha\beta}K^{\alpha\beta} &=
\overline{\nabla}^\alpha\overline{\nabla}^\beta \mu_{\alpha\beta} + \mu^{\mu\nu}\left (n^\alpha n^\gamma R^\Sigma_{\alpha \mu\gamma\nu} + K_\mu^\rho K_{\nu\rho} \right ).\label{eq:2}
\end{empheq}
These are the fundamental field equations satisfied by the energy-momentum quantities (\ref{tauexc}) and (\ref{strength}) within $\Sigma$. They generalize the classical Israel equations of GR \cite{I} and they are very satisfactory from a physical point of view. They possess an obvious structure with a clear interpretation as energy-momentum conservation relations. There are three types of terms in these relations. The first type is given by the corresponding first summands on the lefthand side. They simply describe the jump of the normal components of the energy-momentum tensor across $\Sigma$. Therefore, they are, in a sense, the main source for the energy-momentum contents in $\Sigma$. The second type of terms are those on the lefthand side involving $\tau_{\alpha\beta}$, the energy-momentum tensor in the shell/layer $\Sigma$. We want to remark that the first equation (\ref{eq:1}) provides the divergence of $\tau_{\alpha\beta}$. Finally, the third type of terms are those on the righthand side, involving the
strength
$\mu_{\alpha\beta}$ of a double layer. These terms act also as sources of the energy-momentum contents within $\Sigma$, combined with extrinsic geometric properties of $\Sigma$ and curvature components in the space-time.
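In particular, if the double-layer strength vanishes, $\mu_{\alpha\beta}=0$, then (\ref{tauex2}) and (\ref{taue2}) imply $\tau_\alpha=0$ and $\tau=0$, the righthand sides of (\ref{eq:1}) and (\ref{eq:2}) vanish, and the field equations reduce to
$$
n^\alpha h^\rho_\beta [T_{\alpha\rho}]+\overline{\nabla}^\alpha\tau_{\alpha\beta}=0,\qquad
n^\alpha n^\beta [T_{\alpha\beta}]=\tau_{\alpha\beta}K^{\alpha\beta},
$$
which are formally the Israel conservation equations of GR \cite{I}.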
An alternative version of (\ref{eq:1}), after use of the Codazzi equation (\ref{codazzi}), reads
\begin{empheq}[box=\fbox]{align}
n^\alpha h^\rho_\beta [T_{\alpha\rho}]+ \overline{\nabla}^\alpha \tau_{\alpha\beta}
= \mu^{\alpha\rho} n^\sigma R^\Sigma_{\sigma \alpha \lambda \rho}h^\lambda_\beta +K_{\alpha\beta} \overline{\nabla}_\rho \mu^{\alpha\rho} - \overline{\nabla}_\rho(\mu_{\alpha\beta}K^{\alpha\rho}) .
\label{eq:2bis}
\end{empheq}
Note that the allowed jumps in the Riemann tensor (\ref{eq:[Rie]}) lead to $ n^\sigma [R_{\sigma \alpha\lambda\rho}] h^{\alpha}_\gamma h^\lambda_\beta h^{\rho}_\xi = 0$ and therefore the term $\mu^{\alpha\rho} n^\sigma R^\Sigma_{\sigma \alpha\lambda\rho}h^\lambda_\beta$ in the last formula can be written simply as $\mu^{\alpha\rho} n^\sigma R_{\sigma \alpha\lambda\rho} h^\lambda_\beta$.
\section{Energy-momentum conservation}\label{section:divergence}
The divergence of the lefthand side of the field equations (\ref{fe}) vanishes identically due to the Ricci and Bianchi identities, and therefore the conservation equation for the energy-momentum tensor $\nabla_\mu T^{\mu\nu}=0$ follows. In our situation, however, we are dealing with tensor distributions, and with (\ref{fe}) considered in a distributional sense. The question arises whether or not the energy-momentum tensor distribution (\ref{emt}) is covariantly conserved. We know that the Bianchi and Ricci identities hold for distributions (see Appendices),
hence it is expected that the divergence of $\underline{T}_{\mu\nu}$ vanishes when distributions are considered. In this section we prove that this is the case, when taking into account the fundamental field equations (\ref{eq:1}) and (\ref{eq:2}). The following calculation can therefore be seen, alternatively, as an independent derivation of (\ref{eq:1}) and (\ref{eq:2}) from the covariant conservation of $\underline{T}_{\mu\nu}$.
From (\ref{emt0}) and (\ref{nablaT1}) we directly get
\begin{equation}
\nabla^\alpha \underline{T}_{\alpha\beta} =n^\alpha [T_{\alpha\beta}] \delta^\Sigma +\nabla^\alpha (\widetilde T_{\alpha\beta}\delta^\Sigma ) + \nabla^\alpha \underline{t}_{\alpha\beta}.\label{step}
\end{equation}
Let us first compute the middle term on the righthand side. From the orthogonal decomposition (\ref{emtortog})
$$
\nabla^\alpha (\widetilde T_{\alpha\beta}\delta^\Sigma ) = \nabla^\alpha \left( \left\{\tau_\beta+\tau n_\beta \right\} n_\alpha \delta^\Sigma\right) +\nabla^\alpha \left(\left\{\tau_{\alpha\beta}+\tau_\alpha n_\beta \right\} \delta^\Sigma\right)
$$
and using the general formula (\ref{nablaTdelta}) the second summand can be expanded to get
$$
\nabla^\alpha (\widetilde T_{\alpha\beta}\delta^\Sigma ) = \nabla^\alpha \left( \left\{\tau_\beta+\tau n_\beta \right\} n_\alpha \delta^\Sigma\right) +\left(\overline{\nabla}^\alpha \tau_{\alpha\beta}
-\tau_{\alpha\rho}K^{\alpha\rho} n_\beta +\tau^\alpha K_{\alpha\beta} +n_\beta \overline{\nabla}^\alpha \tau_\alpha \right)\delta^\Sigma
$$
so that with the help of (\ref{tauex2}) we get
\begin{eqnarray}
\nabla^\alpha (\widetilde T_{\alpha\beta}\delta^\Sigma ) &=& \nabla^\alpha \left( \left\{\tau_\beta+\tau n_\beta \right\} n_\alpha \delta^\Sigma\right) \nonumber \\
&+& \left(\overline{\nabla}^\alpha \tau_{\alpha\beta} -\tau_{\alpha\rho}K^{\alpha\rho} n_\beta- K_{\alpha\beta} \overline{\nabla}^\rho \mu_{\rho\alpha}- n_\beta \overline{\nabla}^\alpha \overline{\nabla}^\rho\mu_{\alpha\rho} \right)\delta^\Sigma .\label{step1}
\end{eqnarray}
With respect to the last term in (\ref{step}), using the definitions (\ref{t}) and (\ref{strength}) together with the Ricci identity, we can write for any test vector field $Y^\beta$
\begin{eqnarray*}
\left<\nabla^\alpha \underline{t}_{\alpha\beta} , Y^\beta \right> =-\left<\underline{t}_{\alpha\beta} , \nabla^\alpha Y^\beta \right>=\int_\Sigma \mu_{\alpha\beta} n^\rho \nabla_\rho \nabla^\alpha Y^\beta d\sigma \\
=\int_\Sigma \left(\mu_{\alpha\beta} n^\rho \left\{\nabla^\alpha \nabla_\rho Y^\beta +R^\beta{}_{\sigma\rho}{}^\alpha Y^\sigma \right\} \right) d\sigma \\
=\int_\Sigma \mu_{\alpha\beta} n^\rho \nabla^\alpha \nabla_\rho Y^\beta d\sigma - \left<n^\rho\mu^{\alpha\sigma} R^\Sigma_{\rho\alpha\beta\sigma}\delta^\Sigma,Y^\beta\right>.
\end{eqnarray*}
The first integral here can be expanded as
\begin{eqnarray*}
\int_\Sigma \mu_{\alpha\beta} n^\rho \nabla^\alpha \nabla_\rho Y^\beta d\sigma =\int_\Sigma \mu_{\alpha\beta} \left\{\nabla^\alpha (n^\rho \nabla_\rho Y^\beta) - K^{\alpha\rho} \nabla_\rho Y^\beta\right\} d\sigma \\
=\int_\Sigma n^\rho\nabla_\rho Y^\beta \left(\mu_{\alpha\sigma} K^{\alpha\sigma} n_\beta- \overline{\nabla}^\alpha \mu_{\alpha\beta} \right) d\sigma
-\int_\Sigma \mu_{\alpha\beta} K^{\alpha\rho} \left(\overline{\nabla}_\rho \overline{Y}^\beta +(n_\sigma Y^\sigma) K_\rho{}^\beta \right) d\sigma \\
= \int_\Sigma (\tau n_\beta +\tau_\beta) n^\rho\nabla_\rho Y^\beta d\sigma +\int_\Sigma Y^\beta \left(\overline{\nabla}_\rho(\mu_{\alpha\beta} K^{\alpha\rho}) -n_\beta \mu_{\alpha\sigma} K^{\alpha\rho}K_\rho{}^\sigma \right)d\sigma \\
=- \left<\nabla^\alpha \left( \left\{\tau_\beta+\tau n_\beta \right\} n_\alpha \delta^\Sigma\right), Y^\beta \right>
+\left<\left(\overline{\nabla}_\rho(\mu_{\alpha\beta} K^{\alpha\rho}) -n_\beta \mu_{\alpha\sigma} K^{\alpha\rho}K_\rho{}^\sigma \right)\delta^\Sigma , Y^\beta \right>
\end{eqnarray*}
so that we arrive at
\begin{equation}
\nabla^\alpha \underline{t}_{\alpha\beta}= -\nabla^\alpha \left( \left\{\tau_\beta+\tau n_\beta \right\} n_\alpha \delta^\Sigma\right)+
\left(\overline{\nabla}_\rho(\mu_{\alpha\beta} K^{\alpha\rho}) -n_\beta \mu_{\alpha\sigma} K^{\alpha\rho}K_\rho{}^\sigma -n^\rho\mu^{\alpha\sigma} R^\Sigma_{\rho\alpha\beta\sigma}\right)\delta^\Sigma . \label{step2}
\end{equation}
Inserting (\ref{step1}) and (\ref{step2}) into (\ref{step}) we finally obtain
\begin{eqnarray*}
\nabla^\alpha \underline{T}_{\alpha\beta} &=&\left\{ n^\alpha [T_{\alpha\beta}] +\overline{\nabla}^\alpha \tau_{\alpha\beta} -\tau_{\alpha\rho}K^{\alpha\rho} n_\beta +
\overline{\nabla}_\rho(\mu_{\alpha\beta} K^{\alpha\rho}) \right. \\
&& \left. - n_\beta \mu_{\alpha\sigma} K^{\alpha\rho}K_\rho{}^\sigma -n^\rho\mu^{\alpha\sigma} R^\Sigma_{\rho\alpha\beta\sigma} - K_{\alpha\beta} \overline{\nabla}^\rho \mu_{\rho\alpha}- n_\beta \overline{\nabla}^\alpha \overline{\nabla}^\rho\mu_{\alpha\rho} \right\} \delta^\Sigma .
\end{eqnarray*}
The fundamental equations (\ref{eq:2}) and (\ref{eq:2bis}) show that this expression vanishes, leading to
$$
\nabla^\alpha \underline{T}_{\alpha\beta} =0
$$
as claimed. As remarked in \cite{Senovilla14,Senovilla15}, this calculation shows that the double-layer energy-momentum distribution $\underline{t}_{\alpha\beta}$ is essential to keep energy-momentum conservation. Without the double-layer contribution the total energy-momentum tensor distribution $\underline{T}_{\alpha\beta}$ would not be covariantly conserved.
\section{Matching hypersurfaces, thin shells and double layers}
\label{sec:8}
Once we have discussed the junction conditions for gravity theories with quadratic terms, and have obtained the corresponding field equations on $\Sigma$, we are in a position to analyze their consequences. Before entering into this discussion, it is convenient to remark on the following important result.
\begin{result}\label{NoDL}
If there is no double layer (that is $\mu_{\alpha\beta}=0$), then there can be neither external flux momentum $\tau_\alpha$ nor external pressure/tension $\tau$.
\end{result}
This follows directly from expressions (\ref{tauex2}) and (\ref{taue2}). In other words, there exist non-vanishing external flux momentum and/or external pressure/tension {\em only if} there is a double layer.
Thus, there are three levels of junction, depending on whether or not thin shells and/or double layers are allowed. We term them as follows:
\begin{itemize}
\item {\em Proper matching}: this is the case where the matching hypersurface $\Sigma$ does not support any distributional matter content, describing simply an interface with jumps in the energy-momentum tensor, so that there are neither thin shells nor double layers. This situation models, for instance, the gravitational field of stars (non-empty interior) with a vacuum exterior. Or the case of vacuoles in cosmological surroundings.
\item {\em Thin shells, but no double layer}: This is an idealized situation where an enormous quantity of matter is concentrated on a very thin region mathematically described by $\Sigma$ but no double layer is permitted to exist. Thus, delta-type terms proportional to $\delta^\Sigma$ are allowed, and the expression (\ref{tauexc}) provides the energy-momentum tensor of the thin shell. However, from Result \ref{NoDL} the other possible quantities (\ref{tauex}) and (\ref{taue}) accompanying $\delta^\Sigma$ vanish identically. This situation is analogous to that in GR where only (\ref{tauexc}) appears. The main difference with a generic quadratic gravity arises in the explicit expression for (\ref{tauexc}), as the field equations turn out to adopt the same form.
\item {\em Double layers}: this is the general case with no further assumptions, which describes a large concentration of matter on $\Sigma$, as in the previous case, but accompanied with a brusque jump in the curvature of the spacetime. Still, there are several sub-possibilities depending on the vanishing or not of any of (\ref{tauexc}), (\ref{tauex}) or (\ref{taue}). There is also an extreme possibility, that we term a {\em pure double layer}, where the thin shell is not present but the double layer is: this is the case with all (\ref{tauexc}), (\ref{tauex}) and (\ref{taue}) vanishing but a non-vanishing (\ref{t}). Nothing like any of these different possibilities can be described in GR.
\end{itemize}
We classify the junction condition for these different cases in turn.
\subsection{Thin shells without double layer}
\label{subsecion:thinshells}
From (\ref{t}) it follows that the strength of the double layer $\mu_{\alpha\beta}$ must be set to zero, and thus from (\ref{strength}) we have
\begin{equation}
(\kappa_1 +\kappa_2) [R] h_{\alpha\beta} +2\kappa_2 [G_{\alpha\beta}]=0 \hspace{1cm} \Longrightarrow \hspace{3mm} (\kappa_2 +n\kappa_1) [R]=0, \label{mu=0}
\end{equation}
which implies that $\tau$ and $\tau_\alpha$ both vanish (see Result \ref{NoDL}).
Hence, only the tangential part of the distributional energy momentum tensor on $\Sigma$ survives, given explicitly by (\ref{tauexc1}). Its trace, upon using (\ref{traceRcal}), reads
\begin{equation}
\kappa \tau_\alpha^\alpha = (n\kappa_1 + \kappa_2)b - K^{\alpha \beta}\mu_{\alpha \beta} = (n\kappa_1 + \kappa_2)b. \label{trace_emsigma}
\end{equation}
The equations (\ref{eq:1}) and (\ref{eq:2}) in this case read
\begin{equation}
n^\alpha h^\rho_\beta [T_{\alpha\rho}]= -\overline{\nabla}^\alpha \tau_{\alpha\beta},\qquad
n^\alpha n^\beta [T_{\alpha\beta}]= \tau_{\alpha\beta} K^{\alpha\beta}. \label{Israelcond_thinshell}
\end{equation}
Observe that, remarkably, these are identical with the Israel conditions derived in GR.
We have to distinguish whether $\kappa_2=0$ or not.
\begin{itemize}
\item $\kappa_2 \neq 0$.
If $(n\kappa_1 + \kappa_2) \neq 0$ relations (\ref{mu=0}) imply that $[R]=0$ and $[G_{\alpha\beta}] = 0$ in full.
Direct consequences are $[R_{\alpha\beta}] = [R_{\alpha\beta\mu\nu}] = 0$, and the discontinuities in the derivatives are given by
\begin{equation}
\left[\nabla_\mu R_{\alpha\beta\lambda \nu}\right] =(n_\alpha n_\lambda {\cal{R}}_{\beta \nu} -n_\alpha n_\nu \mathcal{R}_{\beta \lambda} -n_\beta n_\lambda \mathcal{R}_{\alpha \nu} +n_\beta n_\nu \mathcal{R}_{\alpha \lambda})n_\mu, \label{noshellsDRiemann}
\end{equation}
for some symmetric tensor $\mathcal{R}_{\alpha\beta}$ tangent to $\Sigma$.
From (\ref{para_sacar_b}) we get $b = 2 \mathcal{R}_\rho^\rho$ and therefore the energy-momentum tensor (\ref{tauexc}) on $\Sigma$ just reads
\begin{equation}
\kappa \tau_{\alpha\beta} = \kappa_1 b h_{\alpha\beta} + 2 \kappa_2 \mathcal{R}_{\alpha\beta}. \nonumber
\end{equation}
With regard to the exceptional possibility $n \kappa_1 + \kappa_2 = 0$, equation (\ref{mu=0}) implies in particular that the tensor $B_{\alpha\beta}$ is proportional to the first fundamental form. The explicit relation reads
\begin{equation}
B_{\alpha\beta} = \frac{1}{2n}[R] h_{\alpha\beta}, \nonumber
\end{equation}
which for the discontinuity of the Riemann tensor produces
\begin{equation}
\left[R_{\alpha\beta\lambda \mu }\right] = \frac{[R]}{2n}\left(n_\alpha n_\lambda h_{\beta \mu}-n_\lambda n_\beta h_{\alpha \mu}-n_\mu n_\alpha h_{\beta \lambda}+n_\mu n_\beta h_{\alpha \lambda}\right). \label{noshellparticularRie}
\end{equation}
Taking contractions in this last expression we find the allowed jumps in the Ricci and Einstein tensor
\begin{equation}
\left[R_{\alpha\beta}\right] =\frac{[R] }{2} \left(\frac{1}{n} h_{\alpha\beta}+ n_\alpha n_\beta \right) \Rightarrow \left[G_{\alpha\beta}\right]=\frac{1-n}{2n}[R] h_{\alpha\beta}. \label{noshellparticularGandRic}
\end{equation}
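For instance, the first relation follows by contracting (\ref{noshellparticularRie}) with $g^{\alpha\lambda}$: using $n^\alpha n_\alpha =1$, $n^\alpha h_{\alpha\beta}=0$ and $h^\alpha{}_\alpha =n$ one obtains
\begin{equation}
\left[R_{\beta\mu}\right] = g^{\alpha\lambda}\left[R_{\alpha\beta\lambda\mu}\right] = \frac{[R]}{2n}\left(h_{\beta\mu} + n\, n_\beta n_\mu\right), \nonumber
\end{equation}
while the second relation follows upon subtracting $\frac{1}{2}g_{\beta\mu}[R]$ with $g_{\beta\mu}=h_{\beta\mu}+n_\beta n_\mu$.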
Note that $[R]$ is the only degree of freedom allowed for the discontinuities of the curvature tensors.
The remaining allowed discontinuities of the derivative of the Ricci tensor are encoded in $r_{\alpha\beta} = n^\mu [\nabla_\mu R_{\alpha\beta}]$, so that
\begin{equation}
\left[\nabla_\mu R_{\alpha\beta}\right] = r_{\alpha\beta} n_\mu +\frac{1}{2}\left(n_\alpha n_\beta + \frac{1}{n}h_{\alpha\beta}\right)\overline{\nabla}_\mu [R] + \left(\frac{1-n}{2n} \right) [R] \left(n_\alpha K_{\beta \mu} + n_\beta K_{\alpha\mu}\right). \label{noshellparticularDRicci}
\end{equation}
Recalling that $b=r^\alpha_\alpha = n^\rho [\nabla_\rho R]$ the explicit form of the energy momentum tensor on $\Sigma$ can be obtained from (\ref{tauexc1}). Due to (\ref{trace_emsigma}), $\tau_{\alpha\beta}$ is traceless. Nevertheless, the relevance of this exceptional case is probably marginal, as the coupling constants satisfy a dimensionally dependent condition.
\item $\kappa_2 = 0$.
We have to assume then that $\kappa_1 \neq 0$, as otherwise all the terms (\ref{tauexc}), (\ref{tauex}) and (\ref{taue}) vanish identically and thus there are no thin shells.
Let us also recall that $a_2$ and $a_3$ are assumed not to vanish simultaneously, as that case was fully analysed in \cite{Senovilla13,Senovilla14,Senovilla15}, so it would be more precise to label this case as $a_2 = -4a_3$ with $a_1 \neq a_3$.
This case reduces to the condition $[R]=0$ (see (\ref{mu=0})). All the remaining jumps on the curvature tensor and its derivatives are allowed.
The energy-momentum tensor on $\Sigma$ is just given by
\begin{equation}
\kappa \tau_{\alpha\beta}=\kappa_1 b h_{\alpha\beta}, \label{noshellk2zerotau}
\end{equation}
with $b=n^\alpha[\nabla_\alpha R]$,
and therefore the thin shell $\Sigma$ only contains, at most,
a ``cosmological constant''-type of matter content.
\end{itemize}
\subsection{Proper matching hypersurface}
In addition to the requirement imposed in the previous case of thin shells, we demand now that the full $\tilde T_{\alpha\beta}$ vanishes. Thus we have to add $\tau_{\alpha\beta} = 0$ to the conditions discussed in the previous Subsection \ref{subsecion:thinshells}.
In general, from (\ref{Israelcond_thinshell}) we have
\begin{equation}
n^\alpha [T_{\alpha\beta}]=0 \label{Israelcond}
\end{equation}
which adopt exactly the same form as in GR and which we call the generalized Israel conditions. They imply the continuity of the normal components of the energy-momentum tensor across $\Sigma$.
Again, we have to distinguish two cases depending on whether $\kappa_2$ vanishes or not.
\begin{itemize}
\item $\kappa_2\neq 0$.
If $(n\kappa_1 + \kappa_2) \neq 0$, we already know from the previous section that $[R] = 0$ and $[G_{\alpha\beta}] = 0$. The trace relation (\ref{trace_emsigma}) provides $b=0$ and moreover $\tau_{\alpha\beta}=0$ implies, via (\ref{tauexc1}), ${\cal R}_{\alpha\beta}=0$. Plugging this information into (\ref{noshellsDRiemann}) it follows that the derivatives of the curvature tensors do not present discontinuities.
\begin{result}
In the generic case with $4a_3 +a_2\neq0$ and $4a_3 +(1+n)a_2 +4na_1 \neq 0$, the full set of matching conditions amount to those of GR (agreement of the first and second fundamental forms on $\Sigma$) plus the agreement of the Ricci tensor and its first derivative on $\Sigma$:
\begin{equation}
[R_{\alpha\beta}]=0 , \hspace{2cm} [\nabla_\rho R_{\alpha\beta}]=0. \label{junction}
\end{equation}
\end{result}
This actually implies that the full Riemann tensor and its first derivatives have no jumps across $\Sigma$:
$$
[R_{\alpha\beta\lambda\mu}]=0 , \hspace{2cm} [\nabla_\rho R_{\alpha\beta\lambda\mu}]=0.
$$
With regard to the exceptional possibility $\kappa_2 +n \kappa_1 =0$, the curvature tensors satisfy (\ref{noshellparticularRie}) and (\ref{noshellparticularGandRic}). Now $\tau_{\alpha\beta}=0$ provides
$$
{\cal R}_{\alpha\beta} =\frac{1}{2n} \left( (n-1) [R] K_{\alpha\beta} +b h_{\alpha\beta}\right) ,
$$
and thus $r_{\alpha\beta}=n^\rho [\nabla_\rho R_{\alpha\beta}]$ gets determined in terms of $[R]$ and $b$, so that (\ref{noshellparticularDRicci}) for $[\nabla_\mu R_{\alpha\beta}]$ reads
\begin{eqnarray}
\left[\nabla_\mu R_{\alpha\beta}\right] &=& \left(\frac{1}{2n} \left( (n-1) [R] K_{\alpha\beta} +b h_{\alpha\beta}\right) -2n_{(\alpha} \overline{\nabla}_{\beta)} [R] +\left(\frac{b}{2} + \frac{1-n}{2n}[R]K_\rho^\rho \right)n_\alpha n_\beta \right)n_\mu \nonumber \\
&+&\frac{1}{2}\left(n_\alpha n_\beta + \frac{1}{n}h_{\alpha\beta}\right)\overline{\nabla}_\mu [R] + \left(\frac{1-n}{2n} \right) [R] \left(n_\alpha K_{\beta \mu} + n_\beta K_{\alpha\mu}\right). \nonumber
\end{eqnarray}
Hence, the entire set of discontinuities of the Riemann tensor and its first derivative can be written just in terms of $[R]$ and $b=n^\rho [\nabla_\rho R]$, which remain as two free degrees of freedom. As mentioned before, this case is probably irrelevant due to its defining condition depending on the dimension $n$.
\item $\kappa_2 =0$ but $\kappa_1\neq 0$.
From the results from the previous section we know that $[R]=0$ and the energy momentum on $\Sigma$ is given by (\ref{noshellk2zerotau}). Thus, for a proper matching we find $b=0$.
The discontinuity in the derivative is
\begin{eqnarray*}
[\nabla_\mu R_{\alpha\beta}] &=& n_\mu\left([R_{\rho\nu}]K^{\rho\nu} n_\alpha n_\beta -2 \overline{\nabla}^\rho [R_{\rho(\beta}]n_{\alpha)} + {\cal R}_{\alpha\beta}\right) \\
&+& \overline{\nabla}_\mu [R_{\alpha\beta}] -2K_\mu^\rho [R_{\rho(\alpha}] n_{\beta)} ,
\end{eqnarray*}
where also ${\cal R}^\rho_\rho = -K^{\alpha\beta}[R_{\alpha\beta}]$.
\item $\kappa_1 = \kappa_2 = 0$.
Or equivalently $a_1 = a_3 = -a_2/4$. In this case all the terms (\ref{tauexc}), (\ref{tauex}) and (\ref{taue}) and (\ref{t}) vanish identically and thus there are no further restrictions other than $[K_{ab}] = 0$. The junction conditions are just the same as in GR. This is the case where the quadratic part of the Lagrangian (\ref{lag}) is the Gauss-Bonnet term \cite{Lovelock71}.
\end{itemize}
\subsection{The double layer fauna; pure double layers}
The generic occurrence in quadratic gravity, as shown above, is that any thin shell comes accompanied by a double layer, which in turn generically implies the existence of non-zero external pressure/tension and external flux momentum. However, there are several special possibilities in which one of these quantities, or all, disappear. This gives rise to a fauna of different kinds of double layers. There is also the possibility that the double layer term (\ref{t}) is non-zero while the remaining distributional part in the energy-momentum tensor, that is $\tilde T_{\alpha\beta}\delta^\Sigma$, vanishes. In other words, a double layer without a classical thin shell. We call such a case a {\em pure double layer}. In the rest of this section we explore this novel possibility.
For pure double layers, the vanishing of the external pressure/tension $\tau$ and of the external flux momentum $\tau_\alpha$ first implies,
by virtue of (\ref{tauex}) and (\ref{tauex2})
\begin{equation}
\mu_{\alpha\beta}K^{\alpha\beta}=0,\quad \overline{\nabla}^\rho \mu_{\rho\alpha}=0.
\label{pure_DB_1}
\end{equation}
In particular, then, the double layer strength is conserved.
The first equation in (\ref{pure_DB_1}) yields
\begin{equation}
(\kappa_1 +\kappa_2)[R] K^\sigma_\sigma + 2\kappa_2 K^{\rho \sigma}[G_{\rho \sigma}] = 0\label{dlnoshell2}
\end{equation}
while the second gives
\begin{equation}
(\kappa_1 +\kappa_2) \overline{\nabla}_\alpha [R] + 2\kappa_2 \overline{\nabla}^\rho [G_{\rho\alpha}]= 0. \label{dlnoshell3}
\end{equation}
Equation (\ref{dlnoshell2}) combined with the vanishing of the trace of $\tau_{\alpha \beta}$ provides
\begin{equation}
(\kappa_1 n + \kappa_2)b=0 \label{dlnoshell1}
\end{equation}
so that, generically --- $n \kappa_1 + \kappa_2 \neq 0$ --- one has $b=0$. A first consequence is that the jump in the derivative of the Ricci scalar is now tangent to $\Sigma$ and fully determined by the tangent derivative of $[R]$
\begin{equation}
[\nabla_\alpha R] = \overline{\nabla}_\alpha [R].\label{nablaR=nablaR}
\end{equation}
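Indeed, decomposing the gradient into its normal and tangential parts,
\begin{equation}
[\nabla_\alpha R] = n_\alpha\, n^\rho [\nabla_\rho R] + h_\alpha{}^{\rho} [\nabla_\rho R] = n_\alpha b + \overline{\nabla}_\alpha [R], \nonumber
\end{equation}
so that setting $b=0$ leaves only the tangential piece.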
The vanishing of $\tau_{\alpha \beta}$, using (\ref{tauexc}), is now equivalent to
\begin{equation}
\kappa_2 {\cal R}_{\alpha\beta} = (\kappa_1 + \kappa_2)\frac{[R]}{2} K_{\alpha \beta}. \label{dlnoshell4}
\end{equation}
The expression for the discontinuity of the normal derivative of the Ricci tensor has to be studied depending on $\kappa_2$ vanishing or not.
\begin{itemize}
\item $\kappa_2\neq 0$
The relations above allow us to write the discontinuity of the normal derivative of the Ricci tensor as
\begin{equation}
r_{\alpha\beta} = \frac{1}{2}\left(1+ \frac{\kappa_1}{\kappa_2} \right)([R]K_{\alpha\beta} + n_\beta \overline{\nabla}_\alpha [R] + n_\alpha \overline{\nabla}_\beta [R] - K[R] n_\alpha n_\beta), \nonumber
\end{equation}
whereas the tangent part of the derivative keeps its original form given in (\ref{eq:[d_diff_ricci]}).
\item $\kappa_2=0$ (and $\kappa_1 \neq 0$).
Equations (\ref{dlnoshell3}) and (\ref{dlnoshell4}) read
\begin{equation}
\overline{\nabla}_\alpha [R] = 0, \quad [R]K_{\alpha\beta}=0, \label{eq:dlk1zero}
\end{equation}
and (\ref{dlnoshell2}) is automatically satisfied. Thus, (\ref{nablaR=nablaR}) implies
$[\nabla_\alpha R] = 0$.
Observe that since $\kappa_2=0$, (\ref{strength}) establishes that the strength of the double layer is proportional to $[R]$. Hence, in order to have a nonzero $\mu_{\alpha\beta}$, $[R]$ cannot vanish.
Then $K_{\alpha\beta}=0$ necessarily, and the allowed jumps are encoded in $[R_{\alpha\beta}]$ and $r_{\alpha\beta}$.
\end{itemize}
For completeness, we provide finally the formulas for the exceptional case $n\kappa_1 + \kappa_2=0$ ---discarding $\kappa_1 = \kappa_2 = 0$ for which the double layer simply disappears. The equations $\tau = 0$, $\tau_\alpha= 0$ and $\tau_{\alpha \beta} = 0$ result, respectively, in
\begin{eqnarray}
(1-n) [R] K^\beta_\beta - 2n K^{\alpha\beta}[G_{\alpha\beta}] &=& 0, \nonumber \\
(1-n)\overline{\nabla}_\alpha [R] -2n \overline{\nabla}^\rho [G_{\rho \alpha}] &=&0, \nonumber \\
(1-n)[R] K_{\alpha\beta} - b h_{\alpha\beta} + 2n \mathcal{R}_{\alpha \beta} &=& 0. \nonumber
\end{eqnarray}
While the third equation provides $\mathcal{R}_{\alpha \beta}$, the first two constitute
constraints on the allowed jumps of the Ricci tensor that should be analysed
in each particular situation.
In all cases, the allowed discontinuity in the derivative of the Ricci tensor can be written as
\begin{eqnarray}
r_{\alpha\beta} &=& - \frac{1}{2n} \left((1-n)[R]K_{\alpha\beta} - bh_{\alpha\beta}\right) -\frac{1-n}{2n} \left(n_\beta \overline{\nabla}_\alpha [R] + n_\alpha \overline{\nabla}_\beta [R]\right) \nonumber\\
&& +\frac{1}{2}\left(b + \frac{1-n}{n}[R]K^\rho_\rho \right)n_\alpha n_\beta . \nonumber
\end{eqnarray}
Observe that now the strength of the double layer is traceless, $\mu^\rho_\rho=0$
(see e.g.\ (\ref{tracestrength})).
\section{Consequences}
\label{sec:consequences}
The proper matching conditions in GR are the agreement of the first and second fundamental forms on $\Sigma$. Therefore, any matching hypersurface in GR satisfies (\ref{Kdisc=0}), and the allowed jumps in the energy-momentum tensor are equivalent to non-vanishing discontinuities of the Ricci (and Riemann) tensor. Thus, in GR properly matched space-times will generally have $[R_{\alpha\beta}]\neq 0$.
This simple and well-known fact implies that any GR solution containing a proper matching hypersurface {\em will contain a double layer and/or a thin shell at the matching hypersurface if the true theory is quadratic}!
At least two relevant consequences follow from this fact: (i) generically, matched solutions in GR are no longer solutions in quadratic theories; and (ii) if any quantum regimes require, excite or switch on quadratic terms in the Lagrangian density, then GR solutions modelling two regions with different matter contents will develop thin shells and double layers on their interfaces. Let us elaborate.
Consider, for instance, the case of a perfect fluid matched to a vacuum in GR. As is well known, the GR matching hypersurface is determined by the condition that
$$
p^{GR} |_\Sigma =0
$$
where $p^{GR}$ is the isotropic pressure of the fluid in GR. It follows that the Ricci tensor has a discontinuity of the following type
$$
[G_{\alpha\beta}] = \left. \kappa \varrho^{GR} u_\alpha u_\beta \right|_\Sigma, \hspace{1cm} [R_{\alpha\beta}] =\left. \kappa \varrho^{GR} \left(u_\alpha u_\beta +\frac{1}{n-1} g_{\alpha\beta}\right)\right|_\Sigma
$$
$u^\alpha$ being the unit velocity vector of the perfect fluid. Therefore, using (\ref{tauexc1}-\ref{taue1}) and (\ref{t}) we see that the very same space-time has, in any quadratic theory of gravity, an energy-momentum tensor distribution with all type of thin-shell and double-layer terms.
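These discontinuities can be checked directly from the GR field equations on either side of $\Sigma$: with $T_{\alpha\beta}=(\varrho^{GR}+p^{GR})u_\alpha u_\beta + p^{GR} g_{\alpha\beta}$ in the interior, vacuum in the exterior, and $p^{GR}|_\Sigma =0$, one has $[G_{\alpha\beta}]=\kappa [T_{\alpha\beta}]=\left.\kappa \varrho^{GR} u_\alpha u_\beta\right|_\Sigma$. Taking the trace with $u^\alpha u_\alpha =-1$ in the $(n+1)$-dimensional spacetime gives $[R]=2\kappa \varrho^{GR}/(n-1)$, and then $[R_{\alpha\beta}]=[G_{\alpha\beta}]+\frac{1}{2}g_{\alpha\beta}[R]$ reproduces the stated expression.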
Imagine the situation of a collapsing perfect fluid (to form a black hole, say) with vacuum exterior. Then one can use any of the known solutions in GR to describe this situation ---the reader may have in mind, for instance, the Oppenheimer-Snyder model. The GR solution describes this process accurately in the initial and intermediate stages, when the curvature of the space-time is moderate and the values of $a_1 R^2$, $a_2 R_{\alpha\beta}R^{\alpha\beta}$ and $a_3 R_{\alpha\beta\mu\nu}R^{\alpha\beta\mu\nu}$, for instance, or other similar quantities, are small enough to render any quadratic terms in the Lagrangian totally negligible. However, as the collapse proceeds and one approaches the black hole region ---and later the classical singularity---, regimes with very high curvatures are reached. Then, the quadratic terms coming from any quantum corrections (be they string theory counter-terms, or any other) to the Einstein-Hilbert Lagrangian start to be important, and eventually to dominate, the curvature being enormous. In this regime, the original matching hypersurface actually becomes a thin double layer.
Of course, the description of a global space-time via a matching is an approximation, and the use of tensor distributions is just another approximation to a real situation where a gigantic quantity of matter is concentrated in a very thin region of the space-time. Nevertheless, both approximations are satisfactory in the sense that they are believed to mimic a realistic situation where the layer is thick and the jumps in the energy variables are extremely big, but finite. If this is the case, then the above reasoning seems to imply that, {\em if quadratic theories of gravity are correct}, at least in some extreme regimes, then a huge concentration of matter will develop around the interface between the interior and the exterior of the collapsing star. And this huge concentration will generically manifest itself as a shell with double-layer properties.
\section*{Acknowledgments}
The authors are supported by FONDOS FEDER under grant FIS2014-57956-P (Spanish government), and by projects GIU12/15 IT592-13 (Gobierno Vasco) and UFI 11/55 (UPV/EHU). JMMS is also supported by project P09-FQM-4496 (J. Andaluc\'{\i}a---FEDER). BR acknowledges financial support from the Basque Government grant BFI-2011-250.
\section*{Appendices}
\section{Introduction}
Since the celebrated discovery of the quantum anomalous Hall (QAH) effect~\cite{Yu61,Chang167}, the study of two-dimensional topological quantum systems and the search for new material platforms with a large topological gap has gained significant attention due to their strong potential in spintronic applications~\cite{bansil2016colloquium, RevModPhys.83.1057, RevModPhys.82.3045,RevModPhys.82.1539}.
The chiral edge state in QAH systems propagates dissipationlessly along the edge and is believed to provide a promising solution to the Moore's-law bottleneck in the silicon industry.
As the analog of the integer quantum Hall effect under a strong magnetic field, the QAH acquires the quantized Hall conductivity by intrinsic magnetism without Landau Levels.
From the theoretical proposal by F.D.M. Haldane~\cite{PhysRevLett.61.2015}, however, little progress was made in experiments on QAH systems before the field was cross-fertilized by
the discovery of the 2D quantum spin Hall (QSH) effect~\cite{doi:10.1126/science.1133734,doi:10.1126/science.1148047} and three-dimensional (3D) topological insulators (TIs) ~\cite{RevModPhys.82.3045,zhang2009topological,doi:10.1126/science.1173034,PhysRevB.81.205407,zhang2010crossover}. Note that the QAH is distinctly different from Chern insulator realizations in classical platforms, where the dissipationless character of the topological edge modes is not found~\cite{marin}.
The discovery of the QSH and 3D TIs implicitly paved the way for the realization of the QAH in experiments, as the field turned its focus to materials with significant spin-orbit coupling and a careful analysis of bulk and surface effect in semiconductor composites~\cite{Chang167, checkelsky2014trajectory,PhysRevLett.113.137201,chang2015high,PhysRevLett.118.246801,PhysRevB.103.235111}.
QSH systems and 3D TIs are time-reversal (TR) invariant topological systems. Their Hamiltonian is block-diagonal with different angular momenta $j=l\pm1/2$ being TR partners.
They form two subsystems that are also individually topological.
Each subsystem can be viewed as a QAH system. Their combination under TR symmetry generates the QSH or the surface states in 3D TIs.
Thus, breaking TR can potentially revert the above combination to yield the QAH.
The essential condition for this process is to retain the topological character of one subsystem while dissolving that of the other~\cite{Wang:2015wr, Liu:2016tz, KOU201534, PhysRevB.78.195424}.
Theoretical proposals on magnetically doped HgTe/CdTe ~\cite{PhysRevB.85.125401} and InAs/GaSb ~\cite{PhysRevLett.113.147201} quantum wells nicely fulfilled the required criteria.
Unfortunately, experimental attempts did not succeed in realizing the QAH because long-range ferromagnetic order could not be established. In summary, the direct manipulation of 2D QSH states did not, at first sight, appear to be a promising route to the QAH.
Instead, the final breakthrough was made along the other route, i.e., magnetically doping the surface states of 3D TIs~\cite{Yu61, Chang167}.
Experiments found that doping tetradymite semiconductors Bi$_{2}$Te$_{3}$, Bi$_{2}$Se$_{3}$, and Sb$_{2}$Te$_{3}$ with Cr, Mn, and V can significantly enhance the bulk spin susceptibility and establish a ferromagnetic insulating phase through the van Vleck mechanism~\cite{Yu61, Chang167, PhysRevB.81.195203, KOU2013, KOU2013-2, PhysRevLett.114.146802, PhysRevB.71.115214}.
At the same time, the ferromagnetic exchange coupling makes one copy of the QAH subsystem revert to normal band order and, therefore, lose its nontrivial topology. Consequently, only one edge state survives, leading to the QAH.
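The mechanism just described admits a compact illustration. As a minimal sketch in the spirit of the Bernevig--Hughes--Zhang model (not the material-specific Hamiltonian), the TR-invariant system is block diagonal,
\begin{equation}
H(\mathbf{k})=\left(\begin{array}{cc} h(\mathbf{k}) & 0 \\ 0 & h^{*}(-\mathbf{k})\end{array}\right),
\qquad h(\mathbf{k})=\epsilon(\mathbf{k})+\mathbf{d}(\mathbf{k})\cdot \boldsymbol{\sigma},
\qquad d_z(\mathbf{k}) = M - B k^{2}, \nonumber
\end{equation}
where each block is a QAH subsystem carrying Chern number $\pm 1$ in the inverted regime $M/B>0$. A ferromagnetic exchange field shifts the two masses oppositely, $M \rightarrow M \pm \mathcal{M}$, so that for $|\mathcal{M}|>|M|$ one block reverts to the normal band order while the other remains inverted; only one chiral edge state then survives, yielding the QAH.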
In recent years, the MnBi$_{2}$Te$_{4}$-family materials have also provided a platform for QAH with intrinsic magnetism from the inserted MnTe bilayer~\cite{Luo2013FromOL}.
The interplay between the magnetic coupling and the topologically nontrivial bands endows the MnBi$_{2}$Te$_{4}$-family materials with rich topological phases~\cite{PhysRevLett.122.107202, PhysRevLett.122.206401, doi:10.1126/sciadv.aaw5685,10.1093/nsr/nwaa089, PhysRevX.9.041038, PhysRevLett.124.167204, PhysRevLett.124.197201, PhysRevLett.124.126402, PhysRevX.10.031013, PhysRevLett.126.176403, PhysRevLett.123.096401, PhysRevX.11.011039}.
Despite the successful realization of the QAH in magnetically-doped tetradymite semiconductors and MnBi$_{2}$Te$_{4}$-family materials, the operating temperature is limited to a relatively low value. Therefore, new QAH materials or QSH parent compounds with large topological gaps are strongly sought after.
A large gap QSH state was experimentally realized in the monolayer bismuthene system~\cite{Reis287, PhysRevB.98.165146}.
This system is geometrically equivalent to graphene, while a completely different mechanism characterizes its low-energy excitation.
The $\sigma$-bond states form an effective two-orbital Kane-Mele model, which permits a large onsite spin-orbit coupling (SOC) that supports a topological gap as large as 0.8 eV.
Bismuthene is the QSH system with the largest topological gap confirmed in experiment so far, and can thus potentially be used in room-temperature spintronic device applications.
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{Figure-1.jpg}
\caption{(Color online) (a-b) Structural model of BiIn/Al$_2$O$_3$ in top and side views. (c) Half-passivation of BiIn with nitrogen. (d) The 2D Brillouin zone and the high-symmetry k-path adopted in the calculation. (e) Electronic structure of BiIn/Al$_{2}$O$_{3}$ without SOC (blue dashed line) and with SOC (red solid line). (f) Calculations of topological edge mode of a ribbon with zigzag termination. (g) and (h) are same as (e) and (f), but for nitrogen half-passivated BiIn/Al$_{2}$O$_{3}$.}
\label{Fig1}
\end{figure*}
In this paper we revisit 2D QSH parent compounds as a recipe, hitherto less successful, for reaching the QAH. As a game-changing starting point, however, we employ the room-temperature QSH in bismuthene as the parent state for our QAH design.
We start from a large-gap QSH system and introduce long-range ferromagnetism by breaking the closed-shell configuration of certain atoms, such that TR symmetry is broken.
Our idea is inspired by recent studies on several 2D materials with honeycomb structure formed of elements of groups IV~\cite{PhysRevLett.111.136804,Chou_2014,wang2016tunable}, V~\cite{PhysRevB.90.085431,song2014quantum,jin2015quantum,C4CP02213K,hsu2016two}, and III-V~\cite{chuang2014prediction,crisostomo2015robust,yao2015predicted,li2015giant,zhao2015driving,PhysRevB.91.235306,PhysRevB.91.165430,jin2015quantum}.
QAH states were found in buckled Bi-III honeycomb systems by chemically decorating one side with hydrogen and the other side with nitrogen.
As a result, a net magnetic moment can be established~\cite{crisostomo2017chemically}.
Furthermore, we employ the fact that functionalizing semiconductors with nitrogen is an experimentally feasible operation; for example, it induces a magnetic moment in 3D TI Bi$_2$Te$_3$~\cite{Jin_2012}.
Thus, introducing charge imbalance and breaking complete shell configurations by functionalizing large-gap QSH insulators with nitrogen atoms may result in a large-gap QAH scenario.
We propose the experimentally feasible monolayer/substrate combination, i.e., Bi-III monolayer on Al$_{2}$O$_{3}$, as a new platform for realizing a large-gap QSH.
It represents a promising material candidate similar to recent proposals~\cite{xia2021hightemperature}, with high experimental feasibility for epitaxial synthesis.
We further tune these QSH systems into large-gap QAH insulators by functionalizing them with nitrogen, with a bandgap as large as 405 meV.
In particular, we provide a unified, effective Hamiltonian description of QSH and QAH states that originate from 2D honeycomb systems with low-energy excitations dominated by $\sigma$-bonds.
Our paper is organized as follows.
In Section~\ref{Sec:QSH}, we propose $\alpha$-Al$_{2}$O$_{3}$ as the appropriate substrate for Bi-III monolayer that a large-gap QSH can be realized.
Based on these systems, we further modify them by functionalizing bismuth with nitrogen to introduce long-range ferromagnetism.
In Section~\ref{Sec:model}, we provide a detailed understanding of the QSH and QAH in Bi-III monolayer/Al$_{2}$O$_{3}$, as well as the topological transition between them by explicitly constructing an effective tight-binding model.
\section{Material platforms}
\label{Sec:QSH}
Corundum, i.e., $\alpha$-Al$_2$O$_3$, is a typical substrate material; the $\alpha$ phase is its most stable phase in nature.
It is widely used in experiments to epitaxially grow topological thin films~\cite{WOS:000375889700047, WOS:000209956000002, WOS:000335720300038, WOS:000501493600006}.
The experimental lattice constant of Al$_{2}$O$_{3}$ is $4.761$Å~\cite{peintinger2014quantum}.
The calculated equilibrium lattice constants of the three Bi-III monolayer systems studied in this work are 4.928 Å (BiTl), 4.805 Å (BiIn), and 4.521 Å (BiGa)~\cite{chuang2014prediction}.
Thus, the lattice mismatch of the Bi-III monolayers relative to the substrate ranges from $-5\%$ (BiGa) to $3.5\%$ (BiTl), indicating that growing them on Al$_{2}$O$_{3}$ is experimentally feasible.
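The quoted mismatch window follows directly from the lattice constants above. A few lines suffice to reproduce it (a sketch; the mismatch is taken relative to the substrate, which matches the quoted $-5\%$ to $3.5\%$ range):

```python
# Lattice mismatch of each Bi-III monolayer relative to the Al2O3 substrate,
# defined as (a_monolayer - a_substrate) / a_substrate.
a_sub = 4.761  # Al2O3 in-plane lattice constant (Angstrom)
monolayers = {"BiTl": 4.928, "BiIn": 4.805, "BiGa": 4.521}
strain = {name: (a - a_sub) / a_sub for name, a in monolayers.items()}
for name, s in sorted(strain.items(), key=lambda kv: kv[1]):
    print(f"{name}: {100 * s:+.1f}%")  # BiGa: -5.0%, BiIn: +0.9%, BiTl: +3.5%
```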
Our first-principles calculations are based on the generalized-gradient approximation (GGA) in the Perdew-Burke-Ernzerhof (PBE) form~\cite{PhysRev.136.B864, PhysRev.140.A1133, PhysRevLett.45.566, PhysRevB.23.5048, PhysRevLett.77.3865} within the framework of the density-functional theory (DFT) using projector-augmented-wave (PAW)~\cite{PhysRevB.59.1758} wave functions as implemented in the Vienna Ab-Initio Simulation Package (VASP)~\cite{PhysRevB.47.558,PhysRevB.54.11169}.
The effect of Van-der-Waals (VdW) interactions was taken into account by using the empirical correction scheme of Grimme (DFT-D2)~\cite{https://doi.org/10.1002/jcc.20495}.
The cut-off energy for the wave functions was set at 500 eV. To simulate the experimental situation, we only allow the atomic positions of the Bi and group-III atoms to relax while keeping the lattice constant and the atomic positions of the substrate unchanged. Atomic positions were optimized for each lattice constant value considered until the residual forces were less than $5 \times 10^{-3}$ eV/Å. The self-consistent convergence threshold for the total energy was set at $10^{-5}$ eV. A vacuum layer of at least 20 Å along the z-direction was used to simulate thin films. A $\Gamma$-centered Monkhorst-Pack~\cite{PhysRevB.13.5188} grid of $9 \times 9 \times 1$ was used for 2D integrations in the Brillouin zone (BZ). In the phonon calculations, we used a $5 \times 5 \times 1$ supercell, and the self-consistent convergence threshold for the total energy was set at $10^{-7}$ eV.
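For orientation, these settings correspond to a VASP input along the following lines. This is only a sketch assuming standard INCAR tags; the actual input files used for the calculations are not reproduced here.

```
# INCAR (sketch)
GGA     = PE          # PBE exchange-correlation functional
ENCUT   = 500         # plane-wave cutoff (eV)
EDIFF   = 1E-5        # electronic convergence (eV); 1E-7 for the phonon runs
EDIFFG  = -5E-3       # relax until residual forces are below 5e-3 eV/Angstrom
IVDW    = 10          # Grimme DFT-D2 van der Waals correction
LSORBIT = .TRUE.      # spin-orbit coupling (omitted in the SOC-free runs)

# KPOINTS: Gamma-centered 9 x 9 x 1 Monkhorst-Pack grid
```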
Finally, the edge states were calculated by using our in-house code {\it LTM} (\mbox{\underline{L}ibrary for \underline{T}opological \underline{M}aterial calculations}) with the iterative Green's function approach~\cite{Sancho_1985} based on the maximally localized Wannier functions~\cite{PhysRevB.56.12847} obtained through the VASP2WANNIER90~\cite{MOSTOFI2008685}.
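The iterative Green's function step can be sketched in a few lines. The following is a minimal illustration of the decimation scheme of Ref.~\cite{Sancho_1985} for a semi-infinite stack of principal layers, not the actual implementation in {\it LTM}; the function name and interface are illustrative.

```python
import numpy as np

def surface_green_function(H00, H01, energy, eta=1e-3, tol=1e-10, max_iter=100):
    """Retarded surface Green's function of a semi-infinite stack of principal
    layers (intra-layer block H00, inter-layer coupling H01), via the iterative
    decimation scheme of Lopez Sancho, Lopez Sancho and Rubio (1985)."""
    dim = H00.shape[0]
    z = (energy + 1j * eta) * np.eye(dim)
    eps_s = H00.astype(complex)          # renormalized surface block
    eps = H00.astype(complex)            # renormalized bulk block
    alpha = H01.astype(complex)          # effective coupling, decays to zero
    beta = H01.conj().T.astype(complex)
    for _ in range(max_iter):
        g = np.linalg.inv(z - eps)
        eps_s = eps_s + alpha @ g @ beta
        eps = eps + alpha @ g @ beta + beta @ g @ alpha
        alpha = alpha @ g @ alpha
        beta = beta @ g @ beta
        if np.abs(alpha).max() < tol:    # coupling decimated away: converged
            break
    return np.linalg.inv(z - eps_s)

# Sanity check: for a semi-infinite 1D chain with hopping t = 1, the exact
# retarded surface Green's function at the band centre E = 0 is -i.
G = surface_green_function(np.array([[0.0]]), np.array([[1.0]]), energy=0.0)
```

The surface spectral density, from which the edge-state dispersion is read off, is then $-\mathrm{Im}\,G/\pi$ evaluated at each $(k_\parallel, E)$ point of the ribbon geometry.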
\subsection{QSH}
As buckled Bi-III monolayers are QSH systems by themselves~\cite{chuang2014prediction}, an appropriate substrate should couple only weakly to them so as to preserve their topology.
In the present work, we therefore consider the Al-terminated (0001) surface of Al$_{2}$O$_{3}$, as observed in experiments.
If an O-terminated surface is used, additional hydrogen passivation would be needed~\cite{PhysRevB.84.155406}.
All three Bi-III/Al$_2$O$_3$ systems studied in our work lead to similar conclusions. Therefore, we use only BiIn/Al$_2$O$_3$ as an example in the main text.
Results on other systems can be found in the Supplementary Information.
Fig.~\ref{Fig1}(a) and ~\ref{Fig1}(b) illustrate the crystal structure of BiIn/Al$_2$O$_3$.
Here, we adopt a symmetric structure to simulate a semi-infinite substrate; otherwise, the other side of the substrate would need to be carefully saturated to prevent dangling-bond states from interfering with the Fermi level.
The BiIn monolayer binds to the Al-terminated surface of Al$_{2}$O$_{3}$ via VdW force.
Full relaxation of the BiIn layer results in a separation of 1.68 \AA~from the substrate surface and a buckling amplitude of 1.20 \AA~between the Bi and In layers.
The corresponding bulk electronic structure is displayed in Fig.~\ref{Fig1}(e) with the BZ shown in Fig.~\ref{Fig1}(d).
In the energy range shown in Fig.~\ref{Fig1}(e), all states derive from BiIn; the substrate states lie far from the Fermi level and do not interfere with the topology of the BiIn monolayer.
Without SOC, the system is a semi-metal with two parabolic bands touching at the $\Gamma$ point.
When SOC is included, the system becomes a semiconductor with a band gap as large as 420 meV, smaller than that of bismuthene/SiC(0001)~\cite{Reis287} but much larger than those of HgTe~\cite{doi:10.1126/science.1133734,doi:10.1126/science.1148047} and InSb/GaAs~\cite{knez2012quantum}.
This gap is topologically nontrivial.
A direct calculation of a semi-infinite ribbon with a zigzag boundary gives two counter-propagating edge modes connecting the valence and conduction bands, which is the characteristic of 2D QSH insulators.
The topological edge states reside right in the middle of the bulk band gap, which is of high utility for device applications.
In contrast to bismuthene/SiC(0001), the nontrivial topology here stems from the band inversion of $p_{x/y}$ orbitals.
In supplementary Fig.~S4, we show the orbital components for the states around the Fermi level, which are governed by all three $p$ orbitals of Bi and In.
Thus, the low-energy model description of BiIn/Al$_{2}$O$_{3}$ will not be the simple $p_{x}$ and $p_{y}$ model~\cite{Reis287, PhysRevB.98.165146}.
Due to the buckled structure of the BiIn monolayer, $p_{z}$ enters as an active orbital.
However, the band inversion is between $p_{x/y}$. Thus, the $p_{z}$ orbital plays a marginal role here.
Nevertheless, due to the participation of the $p_{z}$ orbital, the topological gap is no longer proportional to the onsite strength of $l_{z}s_{z}$ SOC.
Thus, the size of the topological gap is reduced in these buckled systems as compared to bismuthene/SiC(0001)~\cite{Reis287}.
\subsection{QAH from chemical functionalization}
There are two strategies to break TR symmetry and realize the QAH effect in two-dimensional systems: chemical functionalization (passivation) and magnetic doping with transition-metal atoms.
So far, the theoretically proposed QAH candidate materials include the HgTe quantum well magnetically doped with Mn~\cite{PhysRevLett.101.146802, PhysRevB.88.085315}, (Bi, Sb)$_{2}$Te$_{3}$ doped with Cr and V~\cite{Yu61}, group-IV and group-V honeycomb monolayers passivated by transition-metal atoms~\cite{PhysRevB.82.161414, PhysRevB.89.035409, Kaloni2014, jin2015quantum, PhysRevLett.113.256401, Chen2016, Huang2018, HUANG2020246} or in proximity to magnetic substrates~\cite{ PhysRevLett.112.116404}, and the intrinsic magnetic intercalation in MnBi$_{2}$Te$_{4}$-family materials~\cite{PhysRevLett.122.107202, PhysRevLett.122.206401, doi:10.1126/sciadv.aaw5685,10.1093/nsr/nwaa089, PhysRevX.9.041038, PhysRevLett.124.167204, PhysRevLett.124.197201, PhysRevLett.124.126402, PhysRevX.10.031013, PhysRevLett.126.176403, PhysRevLett.123.096401, PhysRevX.11.011039}.
Some of them have been experimentally examined and confirmed~\cite{Chang167, chang2015high, checkelsky2014trajectory, PhysRevLett.113.137201}.
Their success stimulates our search for QAH candidate systems with larger topological gaps and higher operating temperatures.
After obtaining the large-gap QSH state, we further explored half-passivation of the BiIn monolayer with H, OH, F, Cl, Br, I, and N in search of the QAH effect.
We find that only N-adsorption yields a substantial charge transfer from the bismuth and indium atoms to the nitrogen atoms. A charge imbalance between the spin-up and spin-down electrons then forms on nitrogen, which breaks the TR symmetry.
Electrons at different nitrogen sites do not hop directly but through indium and bismuth atoms.
Thus, the ferromagnetic couplings are mediated by the superexchange mechanism and favored by the Goodenough-Kanamori rule~\cite{PhysRev.100.564, KANAMORI195987}.
As illustrated in Fig.~\ref{Fig1}(c), the nitrogen atom connects to bismuth but not to indium, i.e., half-passivation.
To remove the influence of the electric dipole moment, we adopt the structure of Fig.~\ref{Fig1}(c) with space-inversion symmetry, obtained by sandwiching Al$_2$O$_3$ with N-Bi-III layers on both sides.
Spin-polarized band calculation without SOC shows that chemical adsorption of nitrogen leads to an intrinsic magnetic moment of 2.0 $\mu_B$ per unit cell due to the uneven distribution of the additional charge between the spin-up and spin-down states of the nitrogen atom.
The ferromagnetic state is energetically more stable than the nonmagnetic and antiferromagnetic states.
Local moments mainly form at N and In.
The Curie temperature is estimated theoretically as $T_{c}\sim 110$~K with a classical Heisenberg model.
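The quoted $T_{c}\sim 110$~K comes from the authors' classical Heisenberg modeling, whose exchange parameters are not reproduced here. As a rough, hypothetical cross-check, the mean-field result for classical unit spins, $k_{B}T_{c}=zJ/3$ with coordination number $z$, can be inverted to see what nearest-neighbor exchange $J$ such a $T_{c}$ would correspond to; the value of $J$ below is chosen for illustration only.

```python
# Mean-field Curie temperature of a classical Heisenberg model
#   H = -J * sum_<ij> S_i . S_j,  |S_i| = 1,
# for which k_B * T_c = z * J / 3 (z = coordination number).
K_B = 8.617333e-5  # Boltzmann constant (eV/K)

def mean_field_tc(J_eV, z):
    """Mean-field Curie temperature in kelvin for exchange J (eV)."""
    return z * J_eV / (3.0 * K_B)

# On a honeycomb lattice (z = 3), T_c ~ 110 K corresponds to J ~ 9.5 meV
# at this crude mean-field level (J = 9.48 meV is a hypothetical value).
print(round(mean_field_tc(9.48e-3, 3), 1))  # -> 110.0
```

Mean-field theory overestimates $T_{c}$ in two dimensions, so this inversion only sets a scale for $J$, not a quantitative value.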
As a consequence of the ferromagnetism, we obtain a half-metal, i.e., spin-up bands are gapless while spin-down bands are gapped.
Two spin-up bands are degenerate at the $\Gamma$ point.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Figure-2.png}
\caption{Electron localization function (ELF) of BiIn/Al$_{2}$O$_{3}$ before and after nitrogen adsorption. The crystal model is overlaid on the ELF map, showing the valence charge centers. The Al$_{2}$O$_{3}$ substrate displays the expected ionic-bonding nature, with positive and negative valences at the Al and O sites, respectively. The Bi-In and In-N bonds share valence electrons, displaying a strongly covalent character. The signed number next to each element denotes its valence.}
\label{Fig_ELF}
\end{figure}
Before discussing the topological nature, we first explain how the local magnetic moments form due to the half-passivation of nitrogen.
To this end, we calculated the electron localization function and the ionic charge of Bi, In, and N from the Bader-Charge method~\cite{doi:10.1063/1.3553716,HENKELMAN2006354,sanville2007improved,Tang_2009}.
The former helps to understand the bonding nature of Bi, In, and N. The latter directly tells us the change of valence charge after nitrogen adsorption.
Without nitrogen, bismuth/indium gains/loses valence electrons, respectively, forming valence Bi$^{0.48-}$ and In$^{0.49+}$. Such valence configurations are far from their formal valence. Thus, they will not form an ionic bond.
As shown in Fig.~\ref{Fig_ELF} (a), bismuth and indium share valence charges and form a covalent bond with strong metallic nature in between.
The Al-O bond in the substrate, however, remains ionic, and the BiIn monolayer bonds with Al$_{2}$O$_{3}$ via VdW bonding.
After adsorption with nitrogen, the valence of indium does not change, but bismuth transfers valence electrons to nitrogen.
Similar to the Bi-In covalent bond shown in Fig.~\ref{Fig_ELF} (a), the Bi-In bond remains covalent with metallic nature after nitrogen adsorption.
They form a positive valence center bonding with the negative valence center N$^{0.52-}$, indicating a strong hybridization of the electronic states on these atoms.
Thus, the charge polarization at the nitrogen site (see Tab.~\ref{Table-1}) will lift the spin degeneracy of the electronic structures.
As a result, bismuth and indium stay in the intrinsic magnetic field created by nitrogen charge polarization, and the electronic structure becomes spin-polarized.
The strong hybridization of the nitrogen, bismuth, and indium orbitals can also be seen from the projective electronic structure in Fig.~S3 of the Supplementary Information.
We also find that, except for the different binding energy levels, the spin-up and spin-down band curvatures are slightly different, implying a weak spin-exchange interaction between the spin-up and spin-down electrons.
The Zeeman splitting alone plays the dominant role.
As BiIn is a QSH insulator, both spin-up and spin-down bands are topologically nontrivial.
Each contributes a dissipationless edge mode; the two edge currents propagate in opposite directions and are related by TR symmetry.
To change such a QSH state into a QAH state, it is sufficient to shift one spin component away from the Fermi level.
Thus, there is no need for exchange coupling to transform topologically trivial bands into nontrivial ones.
\begin{table}
\centering
\renewcommand\arraystretch{1.7}
\renewcommand\tabcolsep{5.0pt}
\caption{The gain/loss of valence electrons ($-$/$+$) before and after nitrogen adsorption, the charge polarization, and the magnetic moments of N, Bi, and the group-III elements.}
\label{Table-1}
\begin{tabular}{cccccc} \toprule
& & \multicolumn{2}{c}{Ionic charge change} & Charge Pola. & M ($\mu_B$) \\ \midrule
& N &\diagbox {}{} & 0.57- & +1.07 & 0.822 \\
N-BiTl & Bi & 0.20- & 0.25+ & -0.14 & -0.012 \\
& Tl & 0.22+ & 0.38+ & +0.07 & -0.003 \\
\hline
& N & \diagbox {}{} & 0.52- & +1.11 & 0.877 \\
N-BiIn & Bi & 0.48- & 0.10+ & -0.16 & -0.021 \\
& In & 0.49+ & 0.47+ & -0.03 & -0.039 \\
\hline
& N & \diagbox {}{} & 0.56- & +1.08 & 0.827 \\
N-BiGa & Bi & 0.42- & 0.18+ & -0.12 & -0.011 \\
& Ga & 0.49+ & 0.41+ & +0.05 & -0.005 \\
\bottomrule
\end{tabular}
\end{table}
After understanding the origin of the ferromagnetic long-range order and the fully spin-polarized low-energy excitations, we further calculated the edge states for a zigzag termination, shown in Fig.~\ref{Fig1}(h). A helical edge state connects the valence and conduction bands, corresponding to a Chern number $C=1$.
More importantly, the topological band gap of BiIn shown in Fig.~\ref{Fig1}(g) remains as large as 266 meV, which still promises an excellent chance of observing the QAH effect in experiments.
We note that nitrogen doping/adsorption is feasible in experiments and has successfully induced ferromagnetic order in various semiconductors~\cite{doi:10.1021/jacs.6b12934, doi:10.1021/jp303465u, Liu2013, zhang2018nitrogen}.
The QAH states in nitrogen adsorbed BiIn/Al$_{2}$O$_{3}$ benefit from two features of the N-functionalization.
First, the nitrogen spin polarization breaks the TR symmetry; second, it does not induce additional states at the Fermi level. Consequently, a topological gap formed by the spin-polarized band structure is obtained, which leads to the single helical edge mode.
\section{Tight-binding Model}
\label{Sec:model}
To better understand the low-energy physics and the topological transition between the QSH and QAH phases in N-BiIn monolayer/Al$_2$O$_3$, we analytically construct a tight-binding model (TBM) based on all $p$ orbitals of Bi and In, which reproduces the electronic structure and topological nature found in the DFT calculations above.
We take the basis functions
$\left|p_{x \sigma}^{Bi}\right\rangle,\left|p_{y \sigma}^{Bi}\right\rangle,\left|p_{z \sigma}^{Bi}\right\rangle,\left|p_{x \sigma}^{In}\right\rangle,\left|p_{y \sigma}^{In}\right\rangle,\left|p_{z \sigma}^{In}\right\rangle$,
and construct an effective Hamiltonian with the following form:
\begin{eqnarray}\label{Eq:model}
\mathcal{H} & =& H_{0} + H_{M} + H_{SO}\;, \nonumber\\
H_{0}&=&\sum_{i\alpha\sigma} \varepsilon_{i\sigma}^{\alpha} c_{i\sigma}^{\alpha, \dagger} c_{i\sigma}^{\alpha}+\sum_{\langle i, j\rangle\alpha\beta\sigma} t_{i j \sigma}^{\alpha\beta}\left(c_{i\sigma}^{\alpha,\dagger}c_{j\sigma}^{\beta}+h . c .\right)\;, \nonumber\\
H_{M}&=&- \sum_{i\sigma\sigma^{\prime}} \lambda_{m}^{i} c_{i \sigma}^{\dagger} c_{i \sigma^{\prime}}[\hat{\mathbf{m}} \cdot \hat{\mathbf{s}}]_{\sigma \sigma^{\prime}}\;, \nonumber\\
H_{SO}&=& \sum_{i\alpha\beta\sigma\sigma^{\prime}} \left\langle \alpha \sigma| \lambda_{SO} \vec{L} \cdot \vec{S} |\beta \sigma^{\prime}\right\rangle c_{i\sigma}^{\alpha, \dagger}c_{i\sigma^{\prime}}^{ \beta}\;.
\end{eqnarray}
$H_{0}$ denotes the tight-binding model resulting only from single-particle hoppings.
$\varepsilon_{i\sigma}^{\alpha}$ and $c_{i\sigma}^{\alpha}$/$c_{i\sigma}^{\alpha, \dagger}$ represent the on-site energy and the electron annihilation/creation operators for the $\alpha$-orbital of the $i$th atom, respectively.
$t_{i j \sigma}^{\alpha \beta}$ is the hopping amplitude for electrons with spin $\sigma$ between the $\alpha$-orbital of the $i$th atom and the $\beta$-orbital of the $j$th atom, which can be easily calculated from Slater-Koster integrals~\cite{PhysRev.94.1498}.
$H_{M}$ and $H_{SO}$ are the corresponding magnetic coupling and SOC terms.
$\vec{L}$ and $\vec{S}$ denote the orbital and spin angular momentum operators, and $\lambda_{SO}$ is the strength of SOC.
The potential difference between the A and B sublattices reflects the different environments around Bi and In, breaking the sublattice symmetry.
We note that $H_{0}$ and $H_{SO}$ are similar to the tight-binding model of the Bi-III monolayer on SiO$_{2}$~\cite{xia2021hightemperature}.
Additionally, the current model includes a TR-symmetry-breaking term $H_{M}$, which accounts for the Zeeman splitting and shifts the two spin-degenerate bands up and down by an equal amount of energy.
$\hat{\mathbf{m}}$ is the direction of magnetic polarization and $\lambda_{m}^{i}$ is the magnitude of magnetic polarization.
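As a concrete numerical sketch of $H_{0}+H_{M}$ (with placeholder parameters; the on-site energies, Slater-Koster integrals, and $\lambda_{m}$ below are illustrative assumptions, not the fitted values of this work), the Bloch Hamiltonian for three $p$ orbitals per site on a honeycomb lattice, with $\hat{\mathbf{m}}=(0,0,1)$, can be assembled as follows:

```python
import numpy as np

# Hypothetical parameters (illustration only; not the fitted values of this work)
Vpps, Vppp = 1.8, -0.6   # Slater-Koster pp-sigma / pp-pi integrals (eV)
eps_A, eps_B = 0.0, 0.9  # sublattice (Bi / In) on-site energies (eV)
lam_m = 0.5              # Zeeman (exchange) splitting lambda_m (eV)

sq3 = np.sqrt(3.0)
# Nearest-neighbour vectors from sublattice A to B on a honeycomb lattice (a = 1)
deltas = [np.array([0.0, 1/sq3]),
          np.array([-0.5, -1/(2*sq3)]),
          np.array([0.5, -1/(2*sq3)])]

def sk_block(d):
    """3x3 Slater-Koster pp hopping block for an in-plane bond direction d."""
    l, m = d / np.linalg.norm(d)
    n = 0.0  # monolayer: bonds lie in the plane
    return np.array(
        [[l*l*Vpps + (1-l*l)*Vppp, l*m*(Vpps-Vppp),          l*n*(Vpps-Vppp)],
         [m*l*(Vpps-Vppp),         m*m*Vpps + (1-m*m)*Vppp,  m*n*(Vpps-Vppp)],
         [n*l*(Vpps-Vppp),         n*m*(Vpps-Vppp),          n*n*Vpps + (1-n*n)*Vppp]])

def h0(k):
    """Spinless 6x6 Bloch Hamiltonian H_0(k): 2 sites x 3 p orbitals."""
    hAB = sum(np.exp(1j*np.dot(k, d)) * sk_block(d) for d in deltas)
    H = np.zeros((6, 6), complex)
    H[:3, :3] = eps_A * np.eye(3)
    H[3:, 3:] = eps_B * np.eye(3)
    H[:3, 3:] = hAB
    H[3:, :3] = hAB.conj().T
    return H

def h_with_zeeman(k):
    """12x12 H_0 + H_M for m along z: spin-up shifted by -lam_m, spin-down by +lam_m."""
    sz = np.diag([1.0, -1.0])  # spin-major basis: spin (x) orbital-site
    return np.kron(np.eye(2), h0(k)) - lam_m * np.kron(sz, np.eye(6))
```

Fitting then amounts to adjusting these parameters until the eigenvalues of `h_with_zeeman(k)` match the DFT bands along high-symmetry lines.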
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{Figure-3.png}
\caption{(Color online) Comparison of the $p$-orbital model in Eq.~(\ref{Eq:model}) and the DFT calculations. In (a-c), the blue dashed lines and the solid red lines correspond to the DFT results and the tight-binding fitting, respectively. (a-b) Spin-polarized electronic structure without SOC. (c) Spin-polarized electronic structure with SOC. (d) The topological edge states calculations using the tight-binding model in a ribbon geometry with a zigzag edge. The blue and yellow circles represent the probability of the wave functions residing at the left edge and the right edge.}
\label{Fig2}
\end{figure}
Because the presence of the intrinsic magnetization breaks time-reversal symmetry, we use two different sets of parameters to construct $H_{0}$.
Discarding all terms that break spin conservation, which are unimportant at the level of our discussion, the $12 \times 12$ Hamiltonian $H_{0}$ can be cast as a direct sum of two $6 \times 6$ Hamiltonians for the two spin sectors:
\begin{equation}\label{H_0}
H_{0} = \left(\begin{array}{cc}
H_{\uparrow \uparrow} & 0 \\
0 & H_{\downarrow \downarrow}
\end{array}\right)
\end{equation}
with
\begin{equation}
H_{\sigma \sigma} = \left(\begin{array}{cccccc}
\epsilon_{Bi,px}^{\sigma} & 0 & 0 & h_{xx}^{\sigma} & h_{xy}^{\sigma} & h_{xz}^{\sigma} \\
0 & \epsilon_{Bi,py}^{\sigma} & 0 &h_{yx}^{\sigma} & h_{yy}^{\sigma} & h_{yz}^{\sigma} \\
0 & 0 & \epsilon_{Bi,pz}^{\sigma} & h_{zx}^{\sigma} & h_{zy}^{\sigma} & h_{zz}^{\sigma} \\
\dagger & \dagger & \dagger & \epsilon_{In,px}^{\sigma} & 0 & 0 \\
\dagger & \dagger & \dagger & 0 & \epsilon_{In, py}^{\sigma} & 0 \\
\dagger & \dagger & \dagger & 0 & 0 & \epsilon_{In,pz}^{\sigma}
\end{array}\right)\;,
\end{equation}
where the $\dagger$ entries denote the complex conjugates of the corresponding transposed elements, making $H_{\sigma\sigma}$ Hermitian.
We find that, to reproduce the DFT electronic structure reasonably well, it is sufficient to consider only nearest-neighbor hopping between different sites.
More details of the model parameters can be found in the Supplementary Information.
In N-BiIn/Al$_2$O$_3$, the direction of magnetic polarization $\hat{\mathbf{m}} = (0,0,1)$ is normal to the monolayer plane.
Consequently, the Zeeman term only contributes diagonal elements to the effective Hamiltonian, i.e., intrinsic magnetization only leads to the spin-dependent energy shift and does not change the shape of the bands.
We denote the new onsite energy level as
\begin{equation}\label{onsite}
\Delta_{\alpha}^{\sigma} = \epsilon_{\alpha}^{\sigma} \mp \lambda_{m}^{\alpha}
\end{equation}
where $-$ and $+$ correspond to spin-up and spin-down electrons.
As for the SOC, we only consider the simplest atomic contribution and discard the Rashba term.
The latter only affects the details of the electronic structure and is not essential to the topological phase transition.
\begin{subequations}\label{SOC}
\begin{align}
&\left\langle p_{y}|\vec{L} \cdot \vec{S}| p_{x}\right\rangle=i \sigma_{z} \\
&\left\langle p_{z}|\vec{L} \cdot \vec{S}| p_{x}\right\rangle=-i \sigma_{y} \\
&\left\langle p_{z}|\vec{L} \cdot \vec{S}| p_{y}\right\rangle=i \sigma_{x}
\end{align}
\end{subequations}
Due to the presence of the $p_{z}$ orbital in this model, additional SOC terms between the $p_{z}$ and $p_{x/y}$ orbitals appear as compared to the bismuthene model~\cite{Reis287, PhysRevB.98.165146}.
Adding SOC to the tight-binding model opens a large global gap and induces a band inversion at $\Gamma$.
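The matrix elements above (with spin written in terms of Pauli matrices) follow from $(L_{i})_{\alpha\beta}=-i\,\epsilon_{i\alpha\beta}$ in the Cartesian $p$ basis, and can be checked with a few lines of linear algebra:

```python
import numpy as np

# Angular momentum in the Cartesian p basis {p_x, p_y, p_z}: (L_i)_{ab} = -i * eps_{iab}
eps = np.zeros((3, 3, 3))
for i, a, b in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, a, b], eps[i, b, a] = 1.0, -1.0
L = -1j * eps  # L[i] is the 3x3 matrix of L_i

# Pauli matrices (spin written as sigma, as in the SOC matrix elements above)
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)
sig = [sx, sy, sz]

# 6x6 atomic L.S operator in the orbital (x) spin basis
LdotS = sum(np.kron(L[i], sig[i]) for i in range(3))

def block(a, b):
    """2x2 spin block <p_a| L.S |p_b>."""
    return LdotS[2*a:2*a+2, 2*b:2*b+2]

x, y, z = 0, 1, 2
assert np.allclose(block(y, x),  1j * sz)   # <p_y|L.S|p_x> =  i sigma_z
assert np.allclose(block(z, x), -1j * sy)   # <p_z|L.S|p_x> = -i sigma_y
assert np.allclose(block(z, y),  1j * sx)   # <p_z|L.S|p_y> =  i sigma_x
```

The same $6\times 6$ block, scaled by $\lambda_{SO}$, is what enters $H_{SO}$ at every site.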
To obtain these model parameters, we mainly fit three spin-up bands around the Fermi level and three spin-down ones around 1 eV, as shown in Fig.~\ref{Fig2}(a-b).
Other bands stay at higher binding energies; thus, the quality of the fit for these bands is not crucial.
Here, we primarily fit the spin-up bands, as they determine the topological nature and gap size of the system; the spin-down bands stay away from the Fermi level and are less critical to the low-energy model.
In Fig.~\ref{Fig2}, DFT bands are shown as blue dashed lines. Bands from the fitted tight-binding model are shown as solid red lines.
The spin-up and spin-down bands differ in shape, but their degeneracies and symmetries remain identical.
Using two different parameter sets for the spin-up and spin-down electrons captures this difference.
When SOC is included, band inversion between the $p_x$ and $p_y$ orbitals occurs in both the spin-up and spin-down bands; see supplementary Fig.~S4 for all three systems.
However, as the Zeeman field shifts the electronic structure of the spin-down electrons upwards by 1 eV, only band inversion in the spin-up sector remains at the Fermi level.
Consequently, we effectively have a spin-polarized electronic structure with low-energy physics dominated by spin-up electrons only.
The topological gap, thus, stems only from one spin component.
{\it As a result, the topological nature and the nontrivial edge mode depend only on one spin sector, i.e., a single helical edge mode is obtained. }
We confirm the above analysis by directly calculating the edge states in Fig.~\ref{Fig2} (d), where blue and yellow bands correspond to the edge modes residing at the two different terminations in a ribbon calculation.
Along each edge, there is only one topological mode, consistent with the DFT calculations shown in Fig.~\ref{Fig1} (h); this confirms the QAH nature of the proposed model in Eq.~(\ref{Eq:model}).
The proposed model correctly captures the QAH nature of N-BiIn/Al$_{2}$O$_{3}$, and it is generic to all four material systems studied in this work.
In supplementary Fig.~S7, we further show the comparison of the fitted model to the DFT calculations for other materials, as well as the calculated helical edge modes from the model.
The first three rows correspond to the spin-up, spin-down, and SOC bands.
Our tight-binding model can explain the topology of a large variety of material systems.
Replacing Bi with other group-V elements, such as Sb and As, is expected to result in similar band structures and the same QAH states.
Although Al$_{2}$O$_{3}$ may no longer be a good substrate candidate in those cases due to lattice mismatch, the model we propose most likely still applies.
\section{Discussions and Conclusions}
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{Figure-4.png}
\caption{(Color online) Electronic structure and topological phase transition in a $p_{x/y} + $ Zeeman model. Panels (a-c) and (d-f) correspond to the
TR-symmetric and TR-broken cases, respectively. Electronic structures in (a, b, d, e) correspond to calculations of the simplified model with (a) $H_{0}$, (b) $H_{0}+H_{SO}$, (d) $H_{0} + H_{M}$, and (e) $H_{0}+H_{M}+H_{SO}$ terms, respectively.
(c) and (f) show the topological edge modes for the QSH and QAH states, respectively.}
\label{Fig3}
\end{figure}
We have proposed using two different parameter sets to reproduce the DFT electronic structure.
In this section, we further simplify the proposed model and explain the topological phase transition more transparently.
To obtain a universal effective model through simplification, we have identified two aspects of the full generic model in Eq.~(\ref{Eq:model}) that turn out to be irrelevant to the topological nature and the QSH-QAH phase transition in honeycomb-type electronic $p$-orbital models:
(i) the adoption of different parameter sets for spin-up and spin-down bands, and (ii) the $p_{z}$ orbitals.
Discarding them from Eq.~(\ref{Eq:model}), we arrive at a model in the following basis:
$\left|p_{x \uparrow}^{Bi}\right\rangle,\left|p_{y \uparrow}^{Bi}\right\rangle,\left|p_{x \uparrow}^{X}\right\rangle,\left|p_{y \uparrow}^{X}\right\rangle,\left|p_{x \downarrow}^{Bi}\right\rangle,\left|p_{y \downarrow}^{Bi}\right\rangle,\left|p_{x \downarrow}^{X}\right\rangle,\left|p_{y \downarrow}^{X}\right\rangle$.
$H_{0}$ is an $8 \times 8$ matrix consisting of two $4 \times 4$ matrices for the two spin sectors.
\begin{equation}
H_{\uparrow\uparrow} = H_{\downarrow\downarrow} = \left(\begin{array}{cccc}
\epsilon_{Bi,px} & 0 & h_{xx} & h_{xy} \\
0 & \epsilon_{Bi,py} &h_{yx} & h_{yy} \\
\dagger & \dagger & \epsilon_{X,px} & 0 \\
\dagger & \dagger & 0 & \epsilon_{X,py}
\end{array}\right)
\end{equation}
The matrix elements can be found in Eq.~(S1) of the Supplementary Information.
Here, we neglect the spin dependence of all matrix elements and require them to be identical in both spin sectors.
The Zeeman splitting and the SOC term also take the same form as in Eq.~(\ref{onsite}) and Eq.~(\ref{SOC}).
This simplified model is sufficient to explain the topological phase transition induced by nitrogen absorption.
We note that such a tight-binding model is generic to all two-dimensional spin-polarized $p_{xy}$-orbital systems~\cite{Kaloni2014, PhysRevLett.113.256401, Chen2016, Huang2018, HUANG2020246}.
In Fig.~\ref{Fig3}, we show the electronic structure of the simplified model.
Without SOC and Zeeman splitting, this model shows two parabolic bands touching at the Fermi level, as displayed in Fig.~\ref{Fig3} (a).
Each band is doubly degenerate due to spin. SOC lifts the band degeneracy at $\Gamma$, leading to a topological QSH state.
This model contains two counter-propagating edge modes inside the bulk energy gap, as shown in Fig.~\ref{Fig3} (c).
Half-passivation of nitrogen leads to the onset of a long-range ferromagnetic order, which induces the Zeeman splitting in our model.
Upon turning on the Zeeman splitting, the spin-degenerate bands shown in Fig.~\ref{Fig3} (a) start to separate.
We keep the spin-up bands at the Fermi level by adjusting the chemical potential; see Fig.~\ref{Fig3} (d).
The separation of the two bands does not modify the topology of either band. When SOC is further included in Fig.~\ref{Fig3} (e), both bands become gapped and attain a topological character.
Because the Fermi level is now governed only by the spin-up bands, this model exhibits just one helical edge mode on each edge, as shown in Fig.~\ref{Fig3} (f), even though the spin-down bands at 2 eV are topological as well.
Thus, in Bi-III monolayer/Al$_{2}$O$_{3}$, the topological phase transition between QSH and QAH is mainly induced by Zeeman splitting. Both spin-up and spin-down bands remain topological, but one of them is removed from the Fermi level by the Zeeman field.
Consequently, the desired QAH state is reached.
Here, the magnetic order plays a simple role: it induces the topological phase transition merely by shifting the topological gap of one spin sector away from the Fermi level, without destroying its topology. Destroying that topology would require strong off-diagonal spin-exchange couplings, which are small in the materials we have considered.
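This picture — each spin sector carrying an opposite Chern number, with the exchange field merely moving one sector off the Fermi level — can be made quantitative in any spin-conserving QSH model. The sketch below uses the Kane-Mele model (a stand-in, not the $p$-orbital model of this work; $t$ and $\lambda$ are arbitrary illustrative values) and the Fukui-Hatsugai-Suzuki lattice algorithm to compute the lower-band Chern number in each spin sector; a spin-diagonal Zeeman term leaves these integers unchanged and only shifts the sectors in energy.

```python
import numpy as np

t, lam = 1.0, 0.2   # hopping and intrinsic SOC (arbitrary illustrative values)

sq3 = np.sqrt(3.0)
a1, a2 = np.array([1.0, 0.0]), np.array([0.5, sq3/2])  # honeycomb lattice vectors
bs = [a1, a2 - a1, -a2]                 # next-nearest-neighbour triple (sums to zero)
G1 = 2*np.pi*np.array([1.0, -1/sq3])    # reciprocal lattice vectors
G2 = 2*np.pi*np.array([0.0, 2/sq3])

def h_spin(k, s):
    """2x2 Kane-Mele Bloch Hamiltonian (periodic gauge) for spin s = +1 or -1."""
    f = t * (1 + np.exp(-1j*np.dot(k, a1)) + np.exp(-1j*np.dot(k, a2)))
    g = 2*lam*s*sum(np.sin(np.dot(k, b)) for b in bs)   # Haldane-type mass, odd in k
    return np.array([[g, f], [np.conj(f), -g]])

def lower_band_state(k, s):
    return np.linalg.eigh(h_spin(k, s))[1][:, 0]

def chern(s, n=24):
    """Chern number of the lower band via the Fukui-Hatsugai-Suzuki algorithm."""
    u = [[lower_band_state(i/n*G1 + j/n*G2, s) for j in range(n)] for i in range(n)]
    c = 0.0
    for i in range(n):
        for j in range(n):
            ip, jp = (i + 1) % n, (j + 1) % n
            loop = (np.vdot(u[i][j], u[ip][j]) * np.vdot(u[ip][j], u[ip][jp])
                    * np.vdot(u[ip][jp], u[i][jp]) * np.vdot(u[i][jp], u[i][j]))
            c += np.angle(loop)
    return c / (2*np.pi)

c_up, c_dn = chern(+1), chern(-1)
print(round(c_up), round(c_dn))   # equal and opposite unit Chern numbers
```

The two sectors give equal and opposite unit Chern numbers; shifting one sector in energy, as the Zeeman term does, then leaves a single topological edge mode at the Fermi level, exactly as in the discussion above.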
In summary, through first-principles calculations, we propose that Al$_{2}$O$_{3}$ can be a promising substrate candidate for growing binary monolayers consisting of group-III elements (Al, Ga, In, Tl) and bismuth (Bi).
These systems belong to the same category of the large-gap QSH states as bismuthene/SiC(0001), with topological gaps of hundreds of meV.
When the monolayers are further half-passivated with nitrogen, long-range ferromagnetic order is induced by breaking the complete-shell configuration of the N valence electrons.
Consequently, a transition from the QSH to the QAH phase occurs. The topological gap remains considerably large in the QAH state, which promises a good chance of reaching applications in spintronic devices.
We have further provided an analytical understanding of their low-energy topological physics by constructing a generic tight-binding model, which reveals that the topological phase transition between QSH and QAH is essentially induced by Zeeman splitting.
The nontrivial character of both the spin-up and spin-down bands remains unaffected by the onset of ferromagnetism.
At the same time, the topological gap of one spin is removed from the Fermi level, which effectively creates a spin-polarized half-metal and, thus, the QAH under SOC.
Our work provides a generalization of the bismuthene platform to the case of broken time-reversal symmetry. We believe that the proposed tight-binding model applies to all similar systems dominated by $p_{x}, p_{y}$ orbitals, which will allow theoretical guidance to supplement the experimental effort in the search for large-gap QAH systems.
\section{Acknowledgement}
This work was supported by the National Key R$\&$D Program of China (2017YFE0131300), the Sino-German mobility program (M-0006), the National Natural Science Foundation of China under Grant No. 11874263, and the Shanghai Technology Innovation Action Plan 2020-Integrated Circuit Technology Support Program (Project No. 20DZ1100605). W.S. acknowledges the financial support of the Science and Technology Commission of Shanghai Municipality (STCSM) (Grant No. 22ZR1441800) and the Shanghai-XFEL Beamline Project (SBP) (31011505505885920161A2101001).
This work in W\"urzburg is funded by the Deutsche
Forschungsgemeinschaft (DFG, German Research Foundation) through
Project-ID 258499086 - SFB 1170 and through the W\"urzburg-Dresden
Cluster of Excellence on Complexity and Topology in Quantum Matter - ct.qmat Project-ID 390858490 - EXC 2147.
Part of the calculations were performed at the HPC Platform of ShanghaiTech University Library and Information Services, at the School of Physical Science and Technology, and at the Scientific Data Analysis Platform of Center for Transformative Science.
\bibliographystyle{apsrev4-1}
\section{Applications to A/B testing}
\label{s:ab-testing}
\label{S:AB-TESTING}
We now discuss applications of the inference approach we developed in
Section~\ref{s:param-inf} to the problem of A/B testing of auctions.
\subsection{Estimating revenues of novel mechanisms}
\label{s:ab-revenue}
Consider the setup described in the introduction where an
auction house running auction $A$ would like to determine the revenue
of a novel mechanism $B$. The typical approach for doing so is to run
the auction $B$ with some probability $\epsilon>0$ and $A$ with the
remaining probability. Ideally, if in doing so, the auction house
obtains $\epsilon N$ bids in response to the auction $B$ out of a total of
$N$ bids, the revenue of $B$ can be estimated within an error bound of
\begin{align}
\label{error-ideal}
\Theta\left(\frac 1{\sqrt{\epsilon}}\right)\frac{\sup_{q}\{x_B'(q)\}}{\sqrt{N}}
\end{align}
where $x_B$ denotes the allocation rule corresponding to $B$. We
refer to this approach as {\em ideal A/B testing}.
In practice, however, instead of obtaining bids in equilibrium for
auction $B$, the analyst obtains bids in equilibrium for the aggregate
mechanism $C=(1-\epsilon) A +\epsilon B$. We can then use
Definition~\ref{d:estimator} to estimate the revenue of $B$.
As a consequence of Corollary~\ref{cor:allpay-y}, and noting that
$x_B'(q)/x_C'(q)\le 1/\epsilon$ for all quantiles $q$, we
obtain the following error bound.
\begin{corollary}
\label{cor1}
The revenue of a rank based mechanism $B$ can be estimated from $N$
bids of a rank-based mechanism $C=(1-\epsilon)A+\epsilon B$ with absolute error
bounded by
\begin{align}
\label{error-true}
\frac{80\, n \log(n/\epsilon)}{\sqrt{N}}.
\end{align}
\end{corollary}
Relative to the ideal situation described above, our error bound has a
better dependence on $\epsilon$ and a worse dependence on $n$. Note that
when $\epsilon$ is very small, our error bound of equation
\eqref{error-true} may be smaller than the ideal bound of equation
\eqref{error-ideal}. In fact, we obtain a non-trivial bound on the
error even when $\epsilon=0$, per Theorem~\ref{thm:allpay-simple}:
\begin{corollary}
\label{cor1-new}
The revenue of a rank based mechanism $B$ can be estimated from $N$
bids of any rank-based mechanism $C$ with absolute error
bounded by
\begin{align}
\label{error-true-new}
\frac{16\,n^2 \log N}{\sqrt{N}}.
\end{align}
\end{corollary}
This is not surprising: the ideal bound ignores information that we
can learn about the revenue of $B$ from the $(1-\epsilon)N$ bids obtained
when $B$ is not run.
When $B$ is a multi-unit auction, we obtain a slightly better error bound using
Theorem~\ref{thm:allpay-general} which is closer to the ideal bound of
equation \eqref{error-ideal}.
\begin{corollary}
\label{cor2}
The revenue of the highest-$k$-bids-win mechanism $B$ can be
estimated from $N$ bids of a rank-based mechanism $C=(1-\epsilon)A+\epsilon
B$ with absolute error bounded by
\begin{align}
\label{error-true-k}
\frac{80 \sup_{q}\{x_B'(q)\} \log (n/\epsilon)}{\sqrt{N}}.
\end{align}
\end{corollary}
\section{Proofs for Section~\ref{s:ab-testing}}
\label{s:proofs-5}
We will now prove Theorem~\ref{theorem:sign-AB}, restated here for convenience.
\begin{numberedtheorem}
{\ref{theorem:sign-AB}}
For arbitrary $n$-agent rank-based auctions $A$, $B_1$, and $B_2$ and
$N$ bids from the equilibrium bid distribution of mechanism $C=\epsilon
B_1+\epsilon B_2+(1-2\epsilon)A$, the estimator for the binary classifier
$\gamma={\bf 1}\{P_{B_1}-\alpha\,P_{B_2}>0\}$, that establishes
whether the revenue of mechanism $B_1$ exceeds $\alpha$ times the
revenue of mechanism $B_2$, has error rate bounded by
$$\exp\left(-O\left(\frac{Na^2}{\alpha^2n^3\log(n/\epsilon)}\right)\right),$$
where $a=|P_{B_1}-\alpha\,P_{B_2}|$, as long as $N\gg n/\epsilon a$.
\end{numberedtheorem}
Whereas we bound the expected absolute error of our revenue estimator
in Section~\ref{s:param-inf}, in this section we will require a
concentration result for the error. We state this concentration result
below and prove it in Section~\ref{s:concentration-proof}. We focus on
the main term in our error bound,
$\expect[q\not\in\Lambda]{-Z'_y(q)(\ebid(q)-\bid(q))}$. We
can split this error into two components, one corresponding to the
bias in the estimated bid function and the other corresponding to the
deviation of the estimated bids from their mean:
\begin{align}
\left|\expect[q\not\in\Lambda]{-Z'_y(q)(\ebid(q)-\bid(q))}\right|
\le \left|\expect[q\not\in\Lambda]{Z'_y(q)(\ebid(q)-\tilde{b}(q))}\right|
+ \left|\expect[q\not\in\Lambda]{Z'_y(q)(\tilde{b}(q)-\bid(q))}\right|.
\label{eq:bias-deviation-split}
\end{align}
Here, $\tilde{b}$ is a step function that equals the expectation of the
empirical bid function $\ebid$: $\tilde{b}(q) = \expect{\ebid(q)}$.
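As a quick numerical illustration of why this split is useful (a sketch, not part of the proof: the bid curve $b(q)=q^2$ and uniform quantiles are our own assumptions), the bias of a middle order statistic scales like $1/N$ while its typical deviation scales like $1/\sqrt{N}$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, i, trials = 400, 200, 2000
b = lambda q: q**2                      # assumed monotone bid curve

# Empirical bid at quantile i/N is the i-th smallest of N sampled bids.
samples = np.sort(b(rng.uniform(0, 1, (trials, N))), axis=1)
ebid = samples[:, i - 1]                # \hat{b}(i/N) across trials

# The i-th uniform order statistic is Beta(i, N-i+1) distributed;
# E[U^2] = Var + mean^2 gives the exact value of tilde_b(i/N).
mean_u = i / (N + 1)
var_u = i * (N - i + 1) / ((N + 1) ** 2 * (N + 2))
tilde_b = var_u + mean_u**2

bias = tilde_b - b(i / N)                    # O(1/N)
deviation = float(np.mean(np.abs(ebid - tilde_b)))  # O(1/sqrt(N))
```

Under these assumptions the bias is roughly thirty times smaller than the typical deviation, matching the two scales in the decomposition above.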
The bias of the estimator, i.e., the second term above, is small:
\begin{lemma}
\label{lem:bias-bound}
With $\tilde{b}$ defined as above,
$$\left|\expect[q\not\in\Lambda]{Z'_y(q)(\tilde{b}(q)-\bid(q))}\right|=
\frac{O(1)}{N} \sup_{q}\{\alloc'(q)\}\,\,\sup_{q}
\left\{ \frac{y'(q)}{\alloc'(q)} \right\}.$$
\end{lemma}
The deviation from the mean, i.e., the first term in equation~\eqref{eq:bias-deviation-split}, is concentrated.
\begin{lemma}
\label{lem:deviation-prob}
Let $\Delta=\sup_{q\not\in\Lambda}|(b'(q))^{-1}(\ebid(q)-\tilde{b}(q))|$. Then for any $a>0$,
$$\prob{\left|\expect[q\not\in\Lambda]{Z'_y(q)(\ebid(q)-\tilde{b}(q))}\right|\ge a\, \middle| \, \Delta}\le
\exp \left(-\frac{a^2}{n(80\Delta \rxy)^2}\right).$$
\end{lemma}
The proofs of Lemmas~\ref{lem:bias-bound} and \ref{lem:deviation-prob} are deferred to the next subsection.
We are now ready to prove Theorem~\ref{theorem:sign-AB}.
\begin{proofof}{Theorem~\ref{theorem:sign-AB}}
We need to bound the probability that the error of the estimate
$\hat{P}_{B_1}-\alpha\,\hat{P}_{B_2}$ for $P_{B_1}-\alpha\,P_{B_2}$ is greater than $|P_{B_1}-\alpha\,P_{B_2}|$. This
error can in turn be decomposed into the error in estimating $P_{B_1}$ and
that in estimating $P_{B_2}$. Denote $a=|P_{B_1}-\alpha\,P_{B_2}|>0$. Then,
\begin{align*}
& \prob{|(\hat{P}_{B_1}-\alpha\,\hat{P}_{B_2})
-(P_{B_1}-\alpha\,P_{B_2})|>a} \\
& \leq
\prob{|\hat{P}_{B_1}-P_{B_1}|>a/2}
+ \prob{|\hat{P}_{B_2}-P_{B_2}|>a/2\alpha}.
\end{align*}
Let $\alloc$ denote the allocation rule of the mechanism $C$ that we
are running, and let $\bid$ be the corresponding bid function. Recall the definitions
\begin{align*}
\Delta & =\sup_q|(b'(q))^{-1}(\widehat{b}(q)-b(q))| \text{ and } \\
\rxy[x_{B_1}] & =\sup_{q}\{x_{B_1}'(q)\}\max\left\{1, \log\sup_{q: x'_{B_1}(q)\ge 1}\frac{\alloc'(q)}{x_{B_1}'(q)},
\log\sup_{q}\frac{x_{B_1}'(q)}{\alloc'(q)} \right\}.
\end{align*}
Equations~\eqref{eq:error-allpay} and \eqref{eq:bias-deviation-split} bound the error in estimation as a sum of five terms. Of these, all but the first term in equation~\eqref{eq:bias-deviation-split} can be bounded by $O(n/\epsilon N)$ using Lemmas~\ref{second-term-bound}, \ref{third-term-bound}, \ref{fourth-term-bound}, and \ref{lem:bias-bound}. Then, Lemma~\ref{lem:deviation-prob} implies that, conditioned on $\Delta$,
$$
\prob{|\hat{P}_{B_1}-P_{B_1}|>a/2} \leq
2\exp\left(-\frac{1}{n(80\,\Delta\,\rxy[x_{B_1}])^2}\left(\frac a2-O\left(\frac {n}{\epsilon N}\right)\right)^2\right).
$$
Finally, $\rxy<n\log (n/\epsilon)$, and with high probability
$\Delta$ is at most a constant times $1/\sqrt{N}$ (Lemma~\ref{error
bid function}). Consequently, for $N\gg n/\epsilon a$,
$$
\prob{|\hat{P}_{B_1}-P_{B_1}|>a/2} \leq
\exp\left(-O\left(\frac{Na^2}{n^3\log(n/\epsilon)}\right)\right).
$$
Likewise,
$$
\prob{|\hat{P}_{B_2}-P_{B_2}|>a/2\alpha} \leq
\exp\left(-O\left(\frac{Na^2}{\alpha^2n^3\log(n/\epsilon)}\right)\right).
$$
\end{proofof}
\subsection{Concentration bound for the revenue estimator}
\label{s:concentration-proof}
\begin{numberedlemma}{\ref{lem:bias-bound}}
With $\tilde{b}$ defined as above,
$$\left|\expect[q\not\in\Lambda]{Z'_y(q)(\tilde{b}(q)-\bid(q))}\right|=
\frac{O(1)}{N} \sup_{q}\{\alloc'(q)\}\,\,\sup_{q}
\left\{ \frac{y'(q)}{\alloc'(q)} \right\}.$$
\end{numberedlemma}
\begin{proof}
We can write the function $\tilde{b}(i/N)$ as
\begin{align*}
\tilde{b}(i/N)& =\frac{N!}{(i-1)!(N-i)!}\int G(t)^{i-1}(1-G(t))^{N-i}g(t) t\,dt\\
&=\frac{N!}{(i-1)!(N-i)!}\int t^{i-1}(1-t)^{N-i}b(t)\,dt.
\end{align*}
Note that
$$\frac{N!}{(i-1)!(N-i)!} t^{i-1}(1-t)^{N-i}$$
is the density of the beta distribution with parameters $\alpha=i$ and $\beta=N-i+1$. Denote this density $f(t;\alpha,\beta)$. Then we can write
$$ \tilde{b}(i/N)=\int^1_0b(t)f(t;\alpha,\beta)\,dt.$$
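This identity can be spot-checked numerically (a sketch; the monotone bid curve $b(t)=\sqrt{t}$ and the parameter values are arbitrary assumptions): the Monte Carlo mean of the $i$-th order statistic matches the quadrature of $b$ against the $\mathrm{Beta}(i, N-i+1)$ density.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
N, i, trials = 50, 20, 50_000
b = np.sqrt                                # assumed monotone bid curve

# Monte Carlo estimate of tilde_b(i/N): mean of the i-th smallest of N bids.
u = np.sort(rng.uniform(0, 1, (trials, N)), axis=1)
mc = float(b(u[:, i - 1]).mean())

# Trapezoidal quadrature of b against the Beta(i, N-i+1) density.
t = np.linspace(1e-9, 1 - 1e-9, 200_001)
log_coef = math.lgamma(N + 1) - math.lgamma(i) - math.lgamma(N - i + 1)
f = np.exp(log_coef + (i - 1) * np.log(t) + (N - i) * np.log(1 - t))
vals = b(t) * f
quad = float(((vals[:-1] + vals[1:]) / 2 * np.diff(t)).sum())
```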
Now let $q \in [i/N,\,(i+1)/N]$, and consider an expansion of $b(t)$ at $q$ such that
$$ b(t)=b(q)+b'(q)(t-q)+O((t-q)^2).$$
Now we substitute this expansion into the formula for $\tilde b(\cdot)$ above to get
$$ \tilde{b}(i/N) = b(q)+b'(q)\int^1_0(t-q)f(t;\alpha,\beta)\,dt+O(\int^1_0(t-q)^2f(t;\alpha,\beta)\,dt).$$
The mean of the beta distribution is $\alpha/(\alpha+\beta)$ and the variance is $\alpha\beta/((\alpha+\beta)^2(\alpha+\beta+1))$. This means that
$$ \tilde{b}\left(\frac{i}{N}\right)-b(q)=b'(q)\left(\frac{i}{N+1}-q\right)+O\left(\frac{1}{N^2}\right). $$
Thus
$$\sup_{q \in [i/N,(i+1)/N]}\left| \tilde{b}(i/N)-b(q)\right| \leq \sup_qb'(q)\frac{2}{N}+O\left(\frac{1}{N^2}\right).$$
Therefore, the bias term
$\left|\expect[q\not\in\Lambda]{Z'_y(q)(\tilde{b}(q)-\bid(q))}\right|$ is at most $O(1)/N
\, \sup_q\{\alloc'(q)\} \, \sup_q Z_y(q)$.
\end{proof}
We now focus on the deviation of our estimator from its mean. In order to obtain a concentration bound, we express the estimator as a sum over many independent terms.
To this end, we first identify the set of quantiles at which the
function $\ebid$ ``crosses'' the function $\tilde{b}$ from below. This
set is defined inductively.
Define $i_0=\delta_N N$. Then, inductively, let
$i_\ell$ be the smallest integer strictly greater than $i_{\ell-1}$
such that
$$\ebid\left(\frac{i_{\ell}-1}{N}\right)\le\tilde{b}\left(\frac{i_\ell-1}{N}\right) \, \text{ and }\,
\ebid\left(\frac{i_{\ell}}{N}\right)>\tilde{b}\left(\frac{i_\ell}{N}\right).$$
Let $i_{m-1}$ be the last integer so defined, and let $i_m=(1-\delta_N)N$. Let $I$
denote the set of indices $\{i_0,\ldots, i_m\}$.
Let $T_{i,j}$ denote the following integral:
$$T_{i,j} = \int_{q=i/N}^{q=j/N}
Z'_y(q)(\ebid(q)-\tilde{b}(q)) \, \text{d}q.$$
Then, our goal is to bound the quantity
$\expect[\hat{\bid}]{|T_{0,N}|}$, where $T_{0,N}$ can be written
as the sum:
$$ T_{0,N} = \sum_{\ell=0}^{m-1}T_{i_{\ell}, i_{\ell+1}}. $$
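The inductive construction of $I$ and the telescoping of $T_{0,N}$ can be transcribed directly (a sketch on synthetic data; the mean bid curve, the noise level, and the constant weight used in place of $Z'_y$ are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
N, delta = 200, 0.05
i0, im = int(delta * N), N - int(delta * N)

grid = np.arange(N + 1) / N
btilde = grid**2                               # stand-in for the mean bid function
ebid = btilde + rng.normal(0, 0.02, N + 1)     # noisy "empirical" bids

# Indices where ebid crosses btilde from below, per the inductive definition.
I = [i0]
for i in range(i0 + 1, im):
    if ebid[i - 1] <= btilde[i - 1] and ebid[i] > btilde[i]:
        I.append(i)
I.append(im)

# Segment integrals T_{i,j} (with a constant weight), approximated on the grid.
def T(i, j):
    return float(np.sum(ebid[i:j] - btilde[i:j]) / N)

parts = sum(T(I[l], I[l + 1]) for l in range(len(I) - 1))
whole = T(i0, im)
```

The consecutive segments $[i_\ell, i_{\ell+1})$ partition $[i_0, i_m)$, so the segment sums recover the whole integral exactly.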
We now claim that conditioned on $I$ and the maximum weighted bid
error, this is a sum over independent random variables.
\begin{lemma}
\label{lem:independent-sums}
Conditioned on the set of indices $I$ and
$\Delta=\sup_{q\not\in\Lambda}|(b'(q))^{-1}(\ebid(q)-\tilde{b}(q))|$, over the
randomness in the bid sample, the random variables $T_{i_{\ell},
i_{\ell+1}}$ are mutually independent.
\end{lemma}
\begin{proof}
Fix $I$ and $\ell$, and note that the function $\tilde{b}$ is fixed
(that is, it does not depend on the empirical bid sample). Then, the
sum $T_{i_{\ell}, i_{\ell+1}}$ depends only on the empirical bid
values $\ebid(q)$ for quantiles in the interval $[i_\ell/N,
i_{\ell+1}/N)$. By the definition of $I$, we know that the smallest
$i_\ell$ bids in the sample are all smaller than
$\tilde{b}((i_\ell-1)/N)\le\tilde{b}(i_\ell/N)$, and the largest $N-i_{\ell+1}$ bids in the
sample are all larger than $\tilde{b}(i_{\ell+1}/N)\ge \tilde{b}((i_{\ell+1}-1)/N)$. On the other
hand, the empirical bids $\ebid(q)$ for $q\in [i_\ell/N,
i_{\ell+1}/N)$ lie within $[\tilde{b}(i_\ell/N),
\tilde{b}((i_{\ell+1}-1)/N)]$. Therefore, conditioned on $i_\ell$ and
$i_{\ell+1}$, the latter set of empirical bids is independent of the
former set of empirical bids.
\end{proof}
Since within each interval $(i_{\ell}, i_{\ell+1})$ the multiplier
$\ebid(q)-\tilde{b}(q)$ changes sign only once, we can apply
the approach of Section~\ref{s:inference-k} to bound each
individual $T_{i_{\ell}, i_{\ell+1}}$ by $40\Delta \rxy$. We then
apply Chernoff-Hoeffding bounds to obtain a bound on the probability
that $|T_{0,N}|$ exceeds some value $a>0$, conditioned on $I$ and
$\Delta$.
\begin{numberedlemma}{\ref{lem:deviation-prob}}
Let $\Delta=\sup_{q\not\in\Lambda}|(b'(q))^{-1}(\ebid(q)-\tilde{b}(q))|$. Then for any $a>0$,
$$\prob{\left|\expect[q\not\in\Lambda]{Z'_y(q)(\ebid(q)-\tilde{b}(q))}\right|\ge a\, \middle| \, \Delta}\le
\exp \left(-\frac{a^2}{n(80\Delta \rxy)^2}\right).$$
\end{numberedlemma}
\begin{proof}
We will use Chernoff-Hoeffding bounds to bound the expectation of $T_{0,N}$ over the bid sample, conditioned on $I$ and $\Delta$. We first note that $T_{0,N}$ has mean zero because for any integer $i\in [0,N]$, $\expect[\text{samples}]{\ebid(i/N)} = \tilde{b}(i/N)$.
Next we note that the $T_{i,j}$'s are bounded random variables. Specifically, let $Q$ be an interval of quantiles over which the difference $\ebid(q)-\tilde{b}(q)$ does not change sign. Then, following the proof of Lemma~\ref{first-term-bound}, we can bound
\begin{align*}
|T_Q| & = \left|\int_Q Z'_y(q)(\ebid(q)-\tilde{b}(q)) \, \text{d}q \right|\\
& \le 40\Delta \underbrace{\,\,\sup_{q}\{y'(q)\}\,\,\max \left\{ 1,\log\sup_{q: y'(q)\ge 1} \frac{\alloc'(q)}{y'(q)},
\log\sup_{q}\frac{y'(q)}{\alloc'(q)} \right\}}_{=:\, \rxy}.
\end{align*}
Likewise, over an interval $Q$ where $Z'_y$ does not change sign, we again get $|T_Q|\le 40 \Delta \rxy$ with $\rxy$ defined as above. Moreover, for an interval $Q$ over which $Z'_y$ changes sign at most $t$ times, we have
$$ \int_Q |Z'_y(q)(\ebid(q)-\tilde{b}(q))| \, \text{d}q \le t\cdot 40\Delta \rxy.$$ Finally, noting that $Z_y$ is a weighted sum over the $n$ functions $Z_k$ defined for the $k$-unit auctions, and that by Lemma~\ref{lem:Z-bound-1} each $Z_k$ has a unique maximum, we note that $Z'_y$ changes sign at most $2n$ times.
We now apply Chernoff-Hoeffding bounds to bound the probability that the sum $\sum_{\ell=0}^{\ell=m-1}T_{i_{\ell}, i_{\ell+1}}$ exceeds some constant $a$. With $\tau_\ell$ denoting the upper bound on $|T_{i_{\ell}, i_{\ell+1}}|$, this probability is at most
$$\exp\left(-\frac{a^2}{\sum_{\ell} \tau_\ell^2}\right).$$
By our observations above, for all $\ell$, $\tau_\ell\le 80\Delta \rxy$, and $\sum_\ell \tau_\ell \le \int_0^1 |Z'_y(q)(\ebid(q)-\tilde{b}(q))| \, \text{d}q \le 80n\Delta \rxy$. Therefore, $\sum_{\ell} \tau_\ell^2\le n(80\Delta \rxy)^2$. Since the bound does not depend on $I$, we can remove the conditioning
on $I$.
\end{proof}
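The Chernoff-Hoeffding step can be sanity-checked by simulation (a sketch using the textbook Hoeffding constants, which differ from the constants tracked in the lemma): for independent zero-mean terms $T_\ell$ with $|T_\ell|\le\tau_\ell$, the tail $\Pr[|\sum_\ell T_\ell|\ge a]$ is at most $2\exp(-a^2/(2\sum_\ell\tau_\ell^2))$.

```python
import numpy as np

rng = np.random.default_rng(0)
m, tau, a, trials = 50, 1.0, 10.0, 100_000

# Independent zero-mean terms with |T_l| <= tau (uniform on [-tau, tau]).
T = rng.uniform(-tau, tau, (trials, m))
S = T.sum(axis=1)

empirical_tail = float(np.mean(np.abs(S) >= a))
hoeffding_bound = 2 * np.exp(-a**2 / (2 * m * tau**2))
```

The empirical tail (around $1.4\%$ here) sits well below the bound, as expected: Hoeffding-type bounds are loose but dimension-free in exactly the way the proof exploits.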
\section{Inference methodology and error bounds for first-price auctions}
\label{s:fp-inf}
In this section we define and analyze an estimator for counterfactual
revenue from the bids in first-price auctions. Our approach will be to
reduce this estimation problem to the all-pay estimation problem that
we solved previously. Recall that the all-pay estimator is a weighted
order statistic of the empirical all-pay bid function. Our
first-price estimator will map the empirical first-price bid function
to an empirical all-pay bid function and then apply to it the all-pay
estimator.
Recall that the Bayes-Nash equilibrium bid functions of the first-price
auction and the all-pay auction are related by the payment identity.
Specifically, an all-pay bid is paid deterministically and equals the
agent's expected payment, while in a first-price auction an
agent pays her bid only upon winning. To facilitate comparison to previous
results we notate the equilibrium bid function of the all-pay auction
as $\bidap$ and the equilibrium bid function of the first-price
auction as $\bidfp$. Given the allocation rule $\alloc$, the payment
identity requires $\bidap(q) = \alloc(q)\,\bidfp(q)$.
Consequently, an empirical all-pay bid function can be defined from
the empirical first-price bid function as $\ebidap(q) =
\alloc(q)\,\ebidfp(q)$. Note that while in previous
sections the empirical all-pay bid function was piece-wise constant,
the empirical all-pay bid function induced here is not: it is the
product of the piece-wise constant empirical first-price bid function
and the continuous allocation rule $\alloc$.
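This mapping is a one-liner; the following sketch (which assumes the single-winner two-agent rule $x(q)=q$, our own choice for illustration) also exhibits the loss of piece-wise constancy:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10
x = lambda q: q                # assumed allocation rule (single winner, n = 2)

# Piece-wise constant empirical first-price bid function from N sorted bids.
fp_bids = np.sort(rng.uniform(0.1, 0.5, N))
def ebid_fp(q):
    return fp_bids[np.minimum((np.asarray(q) * N).astype(int), N - 1)]

# Induced empirical all-pay bid function via the payment identity.
def ebid_ap(q):
    return x(np.asarray(q)) * ebid_fp(q)

# Two quantiles inside the same step of ebid_fp: the first-price values agree,
# but the induced all-pay values differ because x(q) varies within the step.
q = np.array([0.31, 0.39])
```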
Partition the quantile range into extreme quantiles
$\Lambda=[0,\delta_N]\cup[1-\delta_N, 1]$ and the moderate quantiles
$[\delta_N,1-\delta_N]$. Recall that truncation trades off a
(potentially diverging) variance of the estimator suggested by
\autoref{l:counterfactual-revenue} at the extreme quantiles with a
bias that can be bounded. Specifically, truncation replaces bids at
low quantiles with zero and bids at high quantiles with the upper
bound $\ebidap(1)$ (which, in terms of the first-price bids, is
$\alloc(1)\,\ebidfp(1)$).
As in \autoref{s:inference}, the estimator for counterfactual revenue
plugs the truncated empirical bid function into the counterfactual revenue
equation~\eqref{eq:P_y-truncated} of
Lemma~\ref{l:counterfactual-revenue}. We obtain the following
estimator in terms of the empirical first-price bids:
\begin{align*}
\hat{P}_y & =
\expect[q\not\in\Lambda]{-Z'_y(q)\,\alloc(q)\,\ebidfp(q)}
+ Z_y(1-\delta_N)\,\alloc(1)\,\ebidfp(1).
\end{align*}
This estimator is a weighted order statistic as formalized in the
following definition.
\begin{definition}
\label{d:estimator-fp}
The estimator $\hat{P}_y$ (with truncation parameter $\delta_N$) for
the revenue of an auction with allocation rule $y$ from $N$ samples
$\ebidfp_1 \leq \cdots \leq \ebidfp_N$ from the equilibrium bid
distribution of a first price auction with allocation rule $x$ is:
\begin{align*}
\hat{P}_y
& = \sum\nolimits_{i=\delta_N N}^{N-\delta_N N} \expect[q \in [i,i+1]/N]{-Z'_y(q)\,\alloc(q)}\,\ebidfp_i + Z_y(1-\delta_N)\,\alloc(1)\,\ebidfp_N.
\end{align*}
\end{definition}
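To make this definition concrete, here is a small end-to-end sketch. The setup is our own assumption for illustration: two bidders with uniform values, the highest-bid-wins rule $x(q)=q$, and the counterfactual $y=x$, so the equilibrium first-price bid at quantile $q$ is $q/2$, $Z_y(q)=1-q$, and (under a per-agent normalization, also our assumption) each agent's expected payment is $1/6$.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n = 5000, 2
x = lambda q: q                       # highest-bid-wins allocation, n = 2
Zy = lambda q: 1.0 - q                # Z_y(q) = (1-q) y'(q)/x'(q) with y = x

# N samples from the equilibrium first-price bid distribution: b_fp(q) = q/2.
bids = np.sort(rng.uniform(0.0, 1.0, N) / 2)

delta_N = max(25 * np.log(np.log(N)), n) / N
lo, hi = int(delta_N * N), N - int(delta_N * N)

# E_{q in [i,i+1]/N}[-Z'_y(q) x(q)] approximated at interval midpoints;
# here -Z'_y(q) = 1, so each weight is just x at the midpoint, scaled by 1/N.
mid = (np.arange(N) + 0.5) / N
weights = x(mid) / N

est = float(weights[lo:hi] @ bids[lo:hi]) + Zy(1 - delta_N) * x(1.0) * bids[-1]
```

Under these assumptions the estimate lands near $1/6 \approx 0.167$; the final term $Z_y(1-\delta_N)\,x(1)\,\hat b^{fp}_N$ compensates for the truncated top quantiles.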
To obtain a bound on the mean absolute error of the estimator, we
judiciously plug the identity relating first-price and all-pay
equilibrium bids into the error bound of
equation~\eqref{eq:error-allpay} to get:
\begin{align}
\label{eq:error-first-price}
|\emurevk-\murevk| \le &
\left|\expect[q\not\in\Lambda]{\smash{-Z'_k(q)\,\alloc(q)\,(\ebidfp(q)-\bidfp(q))}}\right|
+ \left| \expect[q\in\Lambda]{Z_k(q)\,\bidap'(q)}\right|\\
\notag & + \left| \smash{Z_k(1-\delta_N)\,(\bidap(1-\delta_N) -\alloc(1)\,\ebidfp_N)}\right|
+ \left| Z_k(\delta_N)\,\bidap(\delta_N)\right|
\end{align}
It is clear that terms that depend only on the equilibrium bid
functions and not the empirical bid functions need no further
analysis. Specifically \autoref{second-term-bound} and
\autoref{fourth-term-bound} bound the contribution to the error of the
second and fourth terms of equation~\eqref{eq:error-first-price}. It
remains to bound the contribution from the first and third terms.
These bounds come from relatively minor adjustments to the analogous
bounds for all-pay auctions.
For the third term, we can adapt the analysis of
\autoref{third-term-bound}. Denote the quantile of bid $\ebidfp_N$ by
$\hat{\quant}$, i.e., $\ebidfp_N = \bidfp(\hat{\quant})$. There are two parts
of the analysis: the first part is for the case $\hat{\quant} \geq
1-\delta_N$ and the second part is for the case $\hat{\quant} \leq
1-\delta_N$.
For the first part, the proof of \autoref{third-term-bound} upper
bounds $\ebidap_N$ by $\bidap(1)$. We can do the same for
$\alloc(1)\,\ebidfp_N$: $\alloc(1)\,\bidfp(\hat{\quant}) \leq \alloc(1)
\bidfp(1) = \bidap(1)$. The first inequality follows from the
monotonicity of the equilibrium bid function, i.e., $\bidfp(\hat{\quant})
\leq \bidfp(1)$ for $\hat{\quant} \leq 1$. Thus, we can upper bound the
error in the case that $\hat{\quant} \geq 1-\delta_N$ by
$\smash{Z_k(1-\delta_N)\,(\bidap(1) - \bidap(1-\delta_N))}$ which was
bounded already in the proof of \autoref{third-term-bound}.
For the second part, write
\begin{align*}
\alloc(1)\,\bidfp(\hat{\quant})
&= \alloc(1)\,\bidap(\hat{\quant}) / \alloc(\hat{\quant})\\
&\geq \alloc(\hat{\quant})\,\bidap(\hat{\quant}) / \alloc(\hat{\quant})\\
&= \bidap(\hat{\quant}),
\end{align*}
where the inequality follows from the monotonicity of $\alloc$. Thus, we can
upper bound the error in the case that $\hat{\quant} \leq 1-\delta_N$ by
$\smash{Z_k(1-\delta_N)\,(\bidap(1-\delta_N) - \bidap(\hat{\quant}))}$ which was
bounded already by \autoref{third-term-bound}.
To analyze the first term in the error bound of
equation~\eqref{eq:error-first-price}, we begin with the following upper
bound:
\begin{align*}
\left|\expect[q\not\in\Lambda]{\smash{-Z'_y(q)\alloc(q)(\ebidfp(q)-\bidfp(q))}}\right|
& \le \expect[q\not\in\Lambda]{\left|\frac{Z'_y(q)}{h(Z_y(q))}\right|}
\sup\limits_q\left|\alloc(q)h(Z_y(q))(\ebidfp(q)-\bidfp(q))\right|\\
& \le \expect[q\not\in\Lambda]{\left|\frac{Z'_y(q)}{h(Z_y(q))}\right|}
\sup\limits_q\left|h(Z_y(q))(\ebidfp(q)-\bidfp(q))\right|.
\end{align*}
We can then carry out an analysis identical to the proof of
\autoref{first-term-bound} with an appropriate choice of
$h(\cdot)$. The only difference is in the application of
\autoref{error bid function}. Whereas for all-pay auctions the lemma
bounds the weighted error in bids in terms of $\sup_q
\{q(1-q)x'(q)\}\le n/4$, in the case of first-price auctions, this
term is replaced by $\sup_q \{q(1-q)x'(q)/x(q)\}$, which is no more
than $n$ for rank-based allocation rules. We obtain the following
theorem.
\begin{theorem}\label{th: first price}
The expected absolute error in estimating the revenue of a position
auction with allocation rule $y$ using $N$ samples from the bid
distribution for a first-price position auction with allocation rule
$x$ is bounded by both of the expressions below; here $n$ is the number of positions
in the two position auctions.
\begin{align*}
\Err{P_y} & \le \frac{28n^2\log N}{\sqrt{N}},\\
\Err{P_y} & \le \frac{80}{\sqrt N}\,n \log \sup\nolimits_{q}
n \frac{y'(q)}{\alloc'(q)}.
\end{align*}
When $y$ is the highest-$k$-bids-win allocation rule, the latter bound
improves to:
\begin{align*}
\Err{P_k}
& \le \frac{80}{\sqrt{N}} \rxy[\kalloc]
\end{align*}
with $\rxy[\kalloc]$ as defined in equation~\eqref{eq:rxy}.
\end{theorem}
Because the error bounds in Theorem~\ref{th: first price} are
identical up to constant factors to those in
Theorems~\ref{thm:allpay-simple}, \ref{thm:allpay-general} and
Corollary~\ref{cor:allpay-y}, other results in Lemma~\ref{lem:univ},
Corollaries~\ref{cor1}, \ref{cor2}, \ref{cor3}, \ref{cor:universal},
and Theorems~\ref{theorem:sign-AB} and \ref{thm:sw} continue to hold
when bids are drawn from a first-price auction.
\section{Proofs for Section~\ref{s:param-inf}}
\label{s:proofs-3}
In this section we prove the results from Section~\ref{s:param-inf}
which analyze the error of the counterfactual revenue estimator for
both multi-unit and (more generally) rank-based auctions with all-pay
payment semantics.
Recall that for all-pay auctions with allocation rule
$\alloc(q)$, the equilibrium bid function $\bid(q)$
satisfies $\bid'(q) = v(q)\,\alloc'(q)$. From $N$ bids
in a mechanism with allocation rule $x$ we are estimating the
counterfactual revenue of a mechanism with allocation rule $y$.
Recall that for an implicit allocation rule $x$ and another allocation
rule $y$, we define the function $Z_y(q) = (1-q)
\frac{y'(q)}{x'(q)}$. When $y$ is the allocation rule corresponding to
a $k$-unit auction, we let $Z_k(q)$ denote $Z_{\kalloc}(q)$. Our
analysis treats the contribution to the error from extreme quantiles
$q \in \Lambda = [0,\delta_N]\cup [1-\delta_N, 1]$ for $\delta_N =
\max(25\log\log N,n)/N$ and moderate quantiles $q \not \in \Lambda$
separately. In equation~\eqref{eq:error-allpay}, restated below, the
first term is the error from moderate quantiles and the latter three
terms are the error from extremal quantiles.
\begin{align}
\tag{\ref{eq:error-allpay}}
|\emurevk-\murevk| \le &
\left|\expect[q\not\in\Lambda]{\smash{-Z'_k(q)(\hat{\bid}(q)-\bid(q))}}\right|
+ \left| \expect[q\in\Lambda]{Z_k(q)\,\bid'(q)}\right|\\
\notag & + \left| \smash{Z_k(1-\delta_N)\,(\bid(1-\delta_N) -\hat{\bid}_N)}\right|
+ \left| Z_k(\delta_N)\,\bid(\delta_N)\right|
\end{align}
The proofs in this appendix are organized as follows. The error in our
estimator for the revenue $\murevk$ of a $k$-unit auction from
moderate quantiles is analyzed in Section~\ref{sec:first-term-proof}.
Section~\ref{sec:alloc-bid-bounds} proves some basic properties of
allocation rules and bid functions for rank-based auctions that will
be employed in Section~\ref{sec:extreme-quantile-bounds} where the
error from extremal quantiles, specifically the three latter terms of
equation~\eqref{eq:error-allpay}, is analyzed. The main results from
Section~\ref{s:inference-k-thm}, namely
Theorems~\ref{thm:allpay-simple} and~\ref{thm:allpay-general} and
Corollary~\ref{cor:allpay-y} are proven in
Section~\ref{sec:inference-theorem-proofs}.
\subsection{Bounding the error from moderate quantiles}
\label{sec:first-term-proof}
We will now restate and prove Lemmas~\ref{lem:Z-bound-1}
and~\ref{first-term-bound-simple}, bounding the contribution to the
error of the estimator from moderate quantiles,
$\expect[q\not\in\Lambda]{|Z'_k(q)|\,
|\hat{\bid}(q)-\bid(q)|}$. The first lemma proves that
$Z_k$ has a single local maximum.
\begin{numberedlemma}{\ref{lem:Z-bound-1}}
For any rank-based auction and $k$-highest-bids-win auction
with allocation rules $\alloc$ and $\kalloc$, respectively, the
function $Z_k(q)=(1-q)
\frac{\kalloc'(q)}{\alloc'(q)}$ achieves a single local
maximum for $q\in [0,1]$.
\end{numberedlemma}
\begin{proof}
Consider the function $A(q) = 1/Z_k(q) = x'(q)/\bigl((1-q)\,x'_k(q)\bigr)$.
Recall that $x'(q)$ is a weighted sum over $x'_j(q)$ for
$j\in\{1,\cdots,n-1\}$. Thus, $A(q)$ is a weighted sum over terms
$x'_j(q)/(1-q)x'_k(q)$. Let us look at these terms closely.
$$
\frac{x'_j(q)}{(1-q)x'_k(q)} = \alpha_{k,j} q^{k-j}(1-q)^{j-k-1}
$$
where the coefficient $\alpha_{k,j}$ is a positive constant. The functions
$q^{k-j}(1-q)^{j-k-1}$ are convex. This implies that $A(q)$, which is a
positively weighted sum of convex functions, is also convex. Consequently, $A$
decreases and then increases, so $Z_k(q) = 1/A(q)$ achieves a single local maximum.
\end{proof}
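For a concrete rank-based rule the unimodality is easy to check numerically (a sketch; the choice $n=5$, $k=2$, and uniform position weights is an arbitrary assumption, under which $x'(q)=1$ by the binomial theorem and $Z_k(q)=12\,q^2(1-q)^2$):

```python
import math
import numpy as np

n, k = 5, 2
q = np.linspace(0.001, 0.999, 1999)

# Derivative of the j-highest-bids-win allocation rule with n agents
# (standard order-statistic identity).
def xj_prime(j):
    return (n - 1) * math.comb(n - 2, j - 1) * q ** (n - 1 - j) * (1 - q) ** (j - 1)

# Rank-based x with uniform weights over positions j = 1..n-1; here x'(q) = 1.
x_prime = sum(xj_prime(j) for j in range(1, n)) / (n - 1)
Zk = (1 - q) * xj_prime(k) / x_prime

# Z_k increases, peaks once, then decreases: its discrete slope changes sign once.
d = np.diff(Zk)
sign_changes = int(np.sum(d[:-1] * d[1:] < 0))
```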
The following lemma gives the basic analysis of the error from
moderate quantiles. A key aspect of this proof is that its dependence
on $\sup_{q \not \in \Lambda} Z_k(q)$ is logarithmic.
Immediately following this proof we give a more refined analysis that
enables better bounds when estimating the revenue of counterfactual
mechanism $y$ from bids in $\alloc$ when the allocation rules of
$\alloc$ and $y$ are related.
\begin{numberedlemma}{\ref{first-term-bound-simple}}
For $Z_k$ and $\Lambda$ defined as above, the first error term in
equation~\eqref{eq:error-allpay} of the estimator $\hat{P}_k$ is bounded by:
\begin{align*}
\expect[\hat{\bid}]{\left|\expect[q\not\in\Lambda]{Z'_k(q)\, (\hat{\bid}(q)-\bid(q))}\right|}
&\le
\frac{8n\log N}{\sqrt{N}}\sup_{q}\{\kalloc'(q)\}
\end{align*}
\end{numberedlemma}
\begin{proof}
Recall from Section~\ref{s:inference-k} that we can write the error on the moderate quantiles as:
\begin{align}
\tag{\ref{eq:error-split}}
\left|\expect[q\not\in\Lambda]{Z'_k(q) \,
(\hat{\bid}(q)-\bid(q))}\right|
& \le \expect[q\not\in\Lambda]{\left|\frac{Z'_k(q)}{Z_k(q)}\right|}
\sup_{q}\left|Z_k(q)\,(\hat{\bid}(q)-\bid(q))\right|.
\end{align}
Using Lemma~\ref{lem:Z-bound-1}, the first term on the right in equation~\eqref{eq:error-split},
$\expect[q\not\in\Lambda]{\left|\frac{Z'_k(q)}{Z_k(q)}\right|}$,
is bounded by $2(\sup_{q\not\in\Lambda}\log
Z_k(q)-\inf_{q\not\in\Lambda}\log Z_k(q))$.
We note that for $q\not\in\Lambda$, and any rank-based
allocation rule $y$, $y'(q)\in (\delta_N^n, n]$. Therefore,
$Z_k(q)\in [\delta_N^{n}/n, n\delta_N^{-n}]\subset (N^{-n},N^n)$. Therefore, we have:
\begin{align*}
\expect[q\not\in\Lambda]{\left|\frac{Z'_k(q)}{Z_k(q)}\right|}
& < 4\log N^n = 4n\log N.
\end{align*}
To bound the second term on the right in equation~\eqref{eq:error-split}, we write:
\begin{align*}
\sup_{q}\left|Z_k(q)\,(\hat{\bid}(q)-\bid(q))\right|
& \le \sup_{q} \kalloc'(q)
\sup_q\left|\frac{1}{\alloc'(q)}\,(\hat{\bid}(q)-\bid(q))\right|\\
& \le \sup_{q} \kalloc'(q)
\sup_q\left|\frac{1}{\bid'(q)}\,(\hat{\bid}(q)-\bid(q))\right|.\\
\intertext{Invoking Lemma~\ref{error bid function}, the expected value of this term for random samples from the bid distribution is bounded as:}
\expect[\ebid]{\sup_{q}\left|Z_k(q)\,(\hat{\bid}(q)-\bid(q))\right|}
& \le \sup_{q} \kalloc'(q) \frac{1}{\sqrt{N}} \left(1+\frac{4n\log\log N}{\sqrt{N}}\right).
\end{align*}
Putting the two bounds together, we get,
\begin{align*}
\expect[\hat{\bid}]{\left|\expect[q\not\in\Lambda]{Z'_k(q)\, (\hat{\bid}(q)-\bid(q))}\right|}
&\le
\frac{4n\log N}{\sqrt{N}}\sup_{q}\{\kalloc'(q)\}\left(1+\frac{4n\log\log
N}{\sqrt{N}}\right)
\end{align*}
We may assume without loss of generality that
$4n\log N<\sqrt{N}$, otherwise the first term, and therefore the
entire error bound, exceeds $1$ and is trivially true. Under this
assumption, the term in brackets is no more than $2$, and the lemma follows.
\end{proof}
The following lemma gives a refinement of
Lemma~\ref{first-term-bound-simple} that enables better bounds when
estimating the revenue of counterfactual mechanism $y$ from bids in
$\alloc$ when the allocation rules of $\alloc$ and $y$ are related.
Unfortunately,
$\expect[q\not\in\Lambda]{\left|\frac{Z'_k(q)}{Z_k(q)}\right|}$
can be quite large, as $Z_k(q)$ can take on exponentially large
values at extreme quantiles (see Example~\ref{eg:extremal} in
Section~\ref{s:inference-k}). The main idea in the refined analysis
is a better factoring in the error from moderate quantiles in
equation~\eqref{eq:error-split}. We instead factor this error term as
follows, for an appropriate function $h(Z_k)$ which is just slightly
sublinear in $Z_k$.
\begin{align*}
\left|\expect[q\not\in\Lambda]{Z'_k(q) \,
(\hat{\bid}(q)-\bid(q))}\right|
& \le \expect[q\not\in\Lambda]{\left|\frac{Z'_k(q)}{h(Z_k)}\right|}
\sup_{q}\left|h(Z_k)\,(\hat{\bid}(q)-\bid(q))\right|.
\end{align*}
This factoring gives greater control in balancing the error generated from
the two terms. For an appropriate choice of the function $h(\cdot)$,
we obtain the following lemma.
\begin{lemma}
\label{first-term-bound}
For $Z_k$ and $\Lambda$ defined as above, the first error term in
equation~\eqref{eq:error-allpay} of the estimator $\hat{P}_k$ is bounded by:
\begin{align*}
&\expect[\hat{\bid}]{\left|\expect[q\not\in\Lambda]{Z'_k(q)\, (\hat{\bid}(q)-\bid(q))}\right|}\\
&\le \frac{40}{\sqrt{N}}\,\, \left(1+\frac{4n\log\log N}{\sqrt{N}}\right)
\sup_{q}\{\kalloc'(q)\}\,\,\max\left\{1,\log
\sup_{q: \kalloc'(q)\ge 1}\frac{\alloc'(q)}{\kalloc'(q)}, \log\sup_{q}\frac{\kalloc'(q)}{\alloc'(q)} \right\}.
\end{align*}
\end{lemma}
\begin{proof}
For any $\alpha>0$ we can write
$$
|\hat{P}_k-P_k|\leq \expect{\frac{\left(\log(1+Z_k(q))\right)^{\alpha}}{Z_k(q)}|Z'_k(q)|}
\sup\limits_q\left|\frac{Z_k(q)}{\left(\log(1+Z_k(q))\right)^{\alpha}}(\hat{b}(q)-b(q))\right|.
$$
We start by considering the first term.
Lemma~\ref{lem:Z-bound-1} shows that $Z'_k(\cdot)$ changes sign
only once. Consider the region where the sign of $Z'_k(\cdot)$ is constant
and make the change of variable $t=Z_k(q)$.
Denote $Z_k^*=\sup_qZ_k(q)$, and note that $\inf_qZ_k(q) \geq 0$.
The first term evaluates as
$$
\expect{\frac{\left(\log(1+Z_k(q))\right)^{\alpha}}{Z_k(q)}|Z'_k(q)|} \leq
2 \int^{Z_k^*}_0\frac{(\log\,(1+t))^{\alpha}}{t}\,dt.
$$
Note that for any $t>0$, $\log(1+t)\le t$. Thus,
$$
\int^{\delta}_0\frac{(\log\,(1+t))^{\alpha}}{t}\,dt< \frac{\delta^{\alpha}}{\alpha}.
$$
Now split the integral into two pieces as
$$
\int^{Z_k^*}_0\frac{(\log\,(1+t))^{\alpha}}{t}\,dt=\int^{1}_0\frac{(\log\,(1+t))^{\alpha}}{t}\,dt+\int^{Z_k^*}_1\frac{(\log\,(1+t))^{\alpha}}{t}\,dt.
$$
We just proved that the first piece is at most $1/\alpha$. Now we
upper bound the second piece and consider the integrand at $t \ge 1$. First, note that
$$
(\log\,(1+t))^{\alpha}=\left(\log\,t+\log(1+\frac{1}{t})\right)^{\alpha}\le\left(\log\,t+\frac{1}{t}\right)^{\alpha}\le(\log\,t+1)^{\alpha}.
$$
Thus, the integral behaves as
$$
\int^{Z_k^*}_1\frac{(\log\,(1+t))^{\alpha}}{t}\,dt\le\int^{Z_k^*}_1\frac{(\log\,(t)+1)^{\alpha}}{t}\,dt=\frac{1}{1+\alpha}(\log\,Z_k^*+1)^{1+\alpha}.
$$
Thus, we just showed that
$$
\expect{\frac{\left(\log(1+Z_k(q))\right)^{\alpha}}{Z_k(q)}|Z'_k(q)|} \leq \frac{2}{\alpha}+\frac{2}{1+\alpha}(\log\,Z_k^*+1)^{1+\alpha},
$$
which is at most $2(1+e)/\alpha$ for $\alpha<1/\log\,Z_k^*$.
Now consider the term
$$
\sup\limits_q\left|\frac{Z_k(q)}{\left(\log(1+Z_k(q))\right)^{\alpha}}(\hat{b}(q)-b(q))\right|.
$$
Note that $\log(1+t)\ge \min\{1,t\}/2$. So the first term can be bounded from above as
$$
\frac{Z_k(q)}{\left(\log(1+Z_k(q))\right)^{\alpha}} \leq 2^{\alpha} \max \left\{
Z_k(q),\,(Z_k(q))^{1-\alpha} \right\}.
$$
Thus using Lemma~\ref{error bid function},
\begin{align*}
&\expect{\sup\limits_q\left|\frac{Z_k(q)}{\left(\log(1+Z_k(q))\right)^{\alpha}}(\hat{b}(q)-b(q))\right|}\\
&\le \sup\limits_q\left|\frac{Z_k(q)}{\left(\log(1+Z_k(q))\right)^{\alpha}}
b'(q)\right|
\expect{ \sup\limits_q \left|\frac{\hat{b}(q)-b(q)}{b'(q)}\right|}\\
& \leq 2^{\alpha}\sup_q\left(\max \left\{
x'_k(q),\,(x'_k(q))^{1-\alpha}(x'(q))^{\alpha}
\right\}
\right)\frac{1}{\sqrt{N}}\left(1+ 16\frac{\log\log N}{\sqrt{N}}\sup_q q(1-q) b'(q)\right)\\
& \le
2^{\alpha}\sup_q\left(x'_k(q)\right)\left(\underbrace{\max\left(1,\sup_{q:x'_k(q)\ge 1}\frac{x'(q)}{x'_k(q)}\right)}_{=: \, A}\right)^{\alpha} \frac{1}{\sqrt{N}}\left(1+ \frac{4n\log\log N}{\sqrt{N}}\right),
\end{align*}
where the last inequality follows by noting that $b'(q)\le x'(q)\le
n$, and $q(1-q)\le 1/4$.
Now we combine the two evaluations together and pick
$\alpha=\min\{1,1/\log A, 1/\log Z_k^*\}$, with $A$ defined as above, to obtain
\begin{align*}
\expect{|\hat{P}_k-P_k|} &\leq
\frac{2(1+e)}{\alpha}2^{\alpha}A^{\alpha}\frac{1}{\sqrt{N}} \sup_q\left(x'_k(q)\right)\\
& \leq \frac{40}{\sqrt{N}} \,\,
\sup_{q}\{\kalloc'(q)\}\,\,\max\left\{1,\log A,\log \sup_{q}
\left\{ \frac{\kalloc'(q)}{\alloc'(q)} \right\}\right\} \left(1+ \frac{4n\log\log N}{\sqrt{N}}\right).
\end{align*}
\end{proof}
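The calculus in the proof can be spot-checked by quadrature (a sketch; the values $Z_k^*=10^6$ and $\alpha=1/\log Z_k^*$ are arbitrary assumptions): the integral $\int_1^{Z_k^*} (\log(1+t))^{\alpha}/t \, dt$ indeed stays below $(1+\log Z_k^*)^{1+\alpha}/(1+\alpha)$.

```python
import numpy as np

Zstar = 1e6
alpha = 1 / np.log(Zstar)

# Log-spaced grid on [1, Zstar]; trapezoidal quadrature of (log(1+t))^alpha / t.
t = np.exp(np.linspace(0.0, np.log(Zstar), 400_001))
f = np.log1p(t) ** alpha / t
integral = float(((f[:-1] + f[1:]) / 2 * np.diff(t)).sum())

bound = (1 + np.log(Zstar)) ** (1 + alpha) / (1 + alpha)
```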
\input{appendix-new-deltaN-proofs}
\subsection{Proofs of main theorems}
\label{sec:inference-theorem-proofs}
This section gives the complete proofs for the main theorems of
Section~\ref{s:inference-k-thm}. These theorems follow fairly
directly from the previous lemmas.
\begin{numberedtheorem}{\ref{thm:allpay-simple}}
The mean absolute error in estimating the revenue of a rank-based
auction with allocation rule $y$ using $N$ samples from the bid
distribution for an all-pay rank-based auction with allocation rule
$x$ is bounded as below. Here $n$ is the number of positions in the
two auctions, and $\hat{P}_y$ is the estimator in Definition~\ref{d:estimator}
with $\delta_N$ set to $\max(25\log\log N,n)/N$.
\begin{align*}
\Err{P_y} & \le \frac{16n^2\log N}{\sqrt{N}}.
\end{align*}
\end{numberedtheorem}
\begin{proof}
As in the proof of Lemma~\ref{first-term-bound-simple}, we may assume
without loss of generality that $4n\log N<\sqrt{N}$, and indeed,
$16n^2\log N<\sqrt{N}$. This implies $\delta_N<1/n$, and then
Lemmas~\ref{first-term-bound-simple}, \ref{second-term-bound},
\ref{third-term-bound}, and \ref{fourth-term-bound} together imply
that the error in $P_k$ is bounded by:
\begin{align*}
\Err{\murevk} \le &
\,\frac{8n\log N}{\sqrt{N}}\sup_{q\not\in\Lambda}\{\kalloc'(q)\}
+ 2e\delta_N \kalloc'(\delta_N) + (2e+8) \delta_N^2 \kalloc'(1-\delta_N)
\end{align*}
Further, $16n^2\log N<\sqrt{N}$ also implies that the second and third
terms together are no larger than the first. The theorem then follows
by recalling that $\sup_{q} \kalloc'(q) \le n$.
\end{proof}
We will now prove the improved error bounds of
Theorem~\ref{thm:allpay-general} and Corollary~\ref{cor:allpay-y}.
Recall the definition of $\rxy$ from equation~\ref{eq:rxy} in
Section~\ref{s:inference-k-thm}.
\begin{align}
\tag{\ref{eq:rxy}}
\rxy &:= \sup_{q}\{y'(q)\}\,\max\left\{1, \,\log
\sup_{q: y'(q)\ge 1}\frac{\alloc'(q)}{y'(q)}, \,\log\sup_{q}\frac{y'(q)}{\alloc'(q)} \right\}.
\end{align}
Theorem~\ref{thm:allpay-general} follows from
Lemma~\ref{first-term-bound} in much the same way as
Theorem~\ref{thm:allpay-simple} does from
Lemma~\ref{first-term-bound-simple}. We may assume, without loss of
generality, that $\sqrt{N}\ge 80$, since otherwise the claimed bound
exceeds $1$ and holds trivially; in this case the errors from the
extreme quantiles get absorbed into the error from the moderate quantiles.
\begin{numberedtheorem}{\ref{thm:allpay-general}}
Let $\alloc$ and $\kalloc$ denote the allocation rules for any
all-pay rank-based auction and the $k$-highest-bids-win auction over
$n$ positions, respectively. Let $\hat{\murevk}$ denote the
estimator from Definition~\ref{d:estimator} for estimating the
revenue $P_k$ of the latter auction from $N$ samples of the
bid distribution of the former, with $\delta_N$ set to
$\max(25\log\log N,n)/N$. If $\delta_N\le 1/n$, the mean absolute
error of the estimator $\hat{\murevk}$ is bounded as follows.
\begin{align*}
\Err{\murevk}
\le & \,\frac{80}{\sqrt{N}}\,\rxy[\kalloc].
\end{align*}
\end{numberedtheorem}
We now generalize the error bound to estimate the revenue $P_y$ of an
arbitrary rank-based auction with allocation rule $y$ from the bids of
another rank-based auction with allocation rule $x$.
\begin{numberedcorollary}{\ref{cor:allpay-y}}
Let $\alloc$ and $y$ denote the allocation rules for any two all-pay
rank-based auctions over $n$ positions. Let $\hat{P}_y$ denote the
estimator from Definition~\ref{d:estimator} for estimating the
revenue of the latter from $N$ samples of the bid
distribution of the former, with $\delta_N$ set to $\max(25\log\log
N,n)/N$. If $\delta_N\le 1/n$, the mean absolute error of the
estimator $\hat{P}_y$ is bounded as follows.
\begin{align*}
\Err{P_y}
\le & \,\frac{80}{\sqrt{N}}\, n \, \log \sup\nolimits_q n\, \frac{y'(q)}{x'(q)}.
\end{align*}
\end{numberedcorollary}
\begin{proof}
Write $y$ as a rank-based auction with weights $\wals$:
\begin{align*}
y & = \sum\nolimits_k \margwalk\,\kalloc, \,\,\,\text{and, } \,\,\,
P_y = \sum\nolimits_k \margwalk\,P_k.\\
\intertext{Accordingly, the error in $P_y$ is bounded by a weighted sum of the
errors in $P_k$, which are bounded by Theorem~\ref{thm:allpay-general}. The weighted sum of these errors is simplified by observing
that $\kalloc'(q)\le y'(q)/\margwalk$ for all $k$ and $q$:}
\Err{P_y} & \le \sum\nolimits_k \margwalk\,\Err{P_k}\\
& \le \tfrac{80}{\sqrt{N}} \sum\nolimits_k \margwalk \rxy[\kalloc]\\
& \le \tfrac{80}{\sqrt{N}} \sum\nolimits_k \margwalk \sup_q
\{\kalloc'(q)\} \max\Big\{\log n,\ \log\tfrac{1}{\margwalk}+ \log\sup_q \tfrac{y'(q)}{x'(q)}\Big\}.\\
\end{align*}
We now simplify the terms one at a time. Recall that $\sup_q
\{\kalloc'(q)\}\le n$ for all $k$. The first and third terms can therefore be
simplified using $\sum\nolimits_k \margwalk\le 1$. For the second
term, we observe $\sum\nolimits_k \margwalk \log \frac{1}{\margwalk}
\le \log n$. We therefore have:
\begin{align*}
\Err{P_y}
& \yestag \label{eq:weak-bound-py}
\le \frac{80}{\sqrt{N}}\, n\,\log \sup_q n\,\frac{y'(q)}{x'(q)}.
\end{align*}
\end{proof}
\subsection{Bounds for the allocation rules and bid distributions of
rank-based auctions}
\label{sec:alloc-bid-bounds}
In this section we prove some basic properties of allocation rules for
rank-based auctions. These properties will be useful, in
Section~\ref{sec:extreme-quantile-bounds}, for analyzing the error of
the estimator at extreme quantiles. As described in
Section~\ref{s:prelim}, the allocation rule and its derivative for the
$n$-agent $k$-unit auction are
\begin{align*}
x_k(q) &= \sum_{i=0}^{k-1} \binom{n-1}{i} q^{n-i-1} (1-q)^{i},\\
x'_k(q) &= (n-1) \binom{n-2}{k-1} q^{n-k-1} (1-q)^{k-1}.\\
\end{align*}
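These closed forms are easy to check numerically. The following sketch (in Python; the choice of $n$ and $k$ is illustrative) confirms that the stated derivative matches a finite difference of $x_k$:

```python
from math import comb

def x_k(q, n, k):
    # Allocation rule of the k-highest-bids-win auction with n agents:
    # probability that at most k-1 of the other n-1 quantiles exceed q.
    return sum(comb(n - 1, i) * q ** (n - i - 1) * (1 - q) ** i
               for i in range(k))

def x_k_prime(q, n, k):
    # Closed form of the derivative stated above.
    return (n - 1) * comb(n - 2, k - 1) * q ** (n - k - 1) * (1 - q) ** (k - 1)

# A central finite difference agrees with the closed form.
n, k, h = 7, 3, 1e-6
for q in (0.2, 0.5, 0.8):
    fd = (x_k(q + h, n, k) - x_k(q - h, n, k)) / (2 * h)
    assert abs(fd - x_k_prime(q, n, k)) < 1e-4
```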
We will be interested in the behavior of allocation rule $\kalloc$ and
its derivative $\kalloc'$ at the extremes, specifically for $q\in
[0,1/n]$ and $q\in[1-1/n,1]$. The allocation rule is steepest at
$q=(k-1)/(n-2)$ and is convex before this point and concave after it.
Specifically, $\kalloc[1]$ is steepest at $q=1$ and is convex and
$\kalloc[n-1]$ is steepest at $q=0$ and is concave. For all other $k
\in \{2,\ldots,n-2\}$, the allocation rule derivative $\kalloc'$ is
maximized between $1/(n-2) > 1/n$ and $(n-3)/(n-2) < 1-1/n$.
The following two lemmas bound the derivative of the allocation of
multi-unit auctions at extreme quantiles. Combining them we obtain
the subsequent theorem.
\begin{lemma}
For $k \in \{2,\ldots,n-2\}$ units and $\delta < 1/n$, the allocation rule
derivative $x'_k$ satisfies:
\begin{enumerate}
\item $\sup_{q < \delta} x'_k(q) = x'_k(\delta)$ and
\item $\sup_{q > 1-\delta} x'_k(q) = x'_k(1-\delta)$.
\end{enumerate}
\end{lemma}
\begin{proof}
This lemma follows from convexity of the allocation rule $x_k$ on
$[0,1/n]$ and concavity on $[1-1/n,1]$.
\end{proof}
\begin{lemma}
For $k \in \{1,n-1\}$ units and $\delta < 1/n$, the allocation rule
derivative $x'_k$ satisfies:
\begin{enumerate}
\item $\sup_{q < \delta} x'_{n-1}(q) \leq e\,x'_{n-1}(\delta)$ and
\item $\sup_{q > 1-\delta} x'_1(q) \leq e\, x'_1(1-\delta)$.
\end{enumerate}
\end{lemma}
\begin{proof}
This lemma follows from the closed form of the allocation rule derivatives as
$x'_1(q) = (n-1)\,q^{n-2}$ and $x'_{n-1}(q) = (n-1)\,(1-q)^{n-2}$.
Thus, $x'_1(1) = x'_{n-1}(0) = n-1$ and
\begin{align*}
x'_1(1-\delta) = x'_{n-1}(\delta) &= (n-1)\,(1-\delta)^{n-2} \\
&\geq \frac 1e\, (n-1)\\
&= \frac 1e\,x'_1(1) = \frac 1e\,x'_{n-1}(0).
\end{align*}
Concavity of $x_{n-1}$ and convexity of $x_1$, then, imply the result.
\end{proof}
\begin{theorem}
\label{t:extremal-xprime-bound}
For any $n$-agent rank-based mechanism with allocation rule $x$ and
$\delta < 1/n$, the allocation rule derivative $x'$ satisfies:
\begin{enumerate}
\item $\sup_{q < \delta} x'(q) \leq e\,x'(\delta)$ and
\item $\sup_{q > 1-\delta} x'(q) \leq e\, x'(1-\delta)$.
\end{enumerate}
\end{theorem}
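The theorem can be spot-checked numerically. The sketch below (Python; the uniform mixture weights are an arbitrary illustrative choice) verifies both bounds on a grid for a rank-based rule given as a convex combination of the multi-unit rules:

```python
from math import comb, e

n = 8

def xk_prime(q, k):
    # Derivative of the k-unit allocation rule with n agents.
    return (n - 1) * comb(n - 2, k - 1) * q ** (n - k - 1) * (1 - q) ** (k - 1)

# Hypothetical rank-based rule: uniform mixture over the k-unit auctions.
weights = [1.0 / (n - 1)] * (n - 1)

def x_prime(q):
    return sum(w * xk_prime(q, k) for w, k in zip(weights, range(1, n)))

delta = 0.9 / n  # the theorem requires delta < 1/n
grid = [i * delta / 1000 for i in range(1001)]
assert max(x_prime(q) for q in grid) <= e * x_prime(delta) + 1e-9
assert max(x_prime(1 - q) for q in grid) <= e * x_prime(1 - delta) + 1e-9
```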
The bid function $b(\cdot)$ can be bounded by the allocation rule
$x(\cdot)$ and its derivative $x'(\cdot)$ via the following lemma.
The subsequent theorem follows from the lemma via
Theorem~\ref{t:extremal-xprime-bound}.
\begin{lemma}
\label{l:bid-bound}
For any all-pay mechanism with allocation rule $x$ and $\delta \in
[0,1]$, the equilibrium bid function $b$ satisfies
\begin{enumerate}
\item \label{part:bid-first}
$b'(\delta) \leq x'(\delta)$,
\item \label{part:bid-second}
$b(\delta) \leq \delta \sup_{q < \delta} x'(q)$, and
\item \label{part:bid-third}
$b(1) - b(1-\delta) \leq \delta \sup_{q > 1-\delta} x'(q)$.
\end{enumerate}
\end{lemma}
\begin{proof}
The equilibrium bid function is defined by $b'(q) = v(q) x'(q)$ and
$b(0) = 0$ (where $v(q) \in [0,1]$ is the value function). Part
(\ref{part:bid-first}) follows from the upper bound $v(q) \leq 1$. Parts
(\ref{part:bid-second}) and (\ref{part:bid-third}) follow by upper
bounding $x'(q)$ by its supremum on the interval of the integral and
integrating the bound of part (\ref{part:bid-first}). For example for
part (\ref{part:bid-second}), $b(\delta) = \int_0^\delta v(r)\, x'(r)\, dr
\leq \int_0^\delta \sup_{q < \delta} x'(q) \,dr = \delta\,\sup_{q <
\delta} x'(q)$.
\end{proof}
\begin{theorem}
\label{t:bid-bound}
For any $n$-agent all-pay rank-based mechanism with allocation rule
$x$ and $\delta < 1/n$, the equilibrium bid function $b$ satisfies
\begin{enumerate}
\item $b(\delta) \leq \delta\, e\,x'(\delta)$, and
\item $b(1) - b(1-\delta) \leq \delta\,e\,x'(1-\delta)$.
\end{enumerate}
\end{theorem}
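For a concrete instance, the bid bounds above can be verified by numerically integrating $b'(q) = v(q)\,x'(q)$. The sketch below assumes uniform $[0,1]$ values (so $v(q) = q$) and an illustrative $2$-unit auction; both choices are for demonstration only:

```python
from math import comb, e

n, k = 6, 2

def x_prime(q):
    # Derivative of the 2-unit allocation rule with n = 6 agents.
    return (n - 1) * comb(n - 2, k - 1) * q ** (n - k - 1) * (1 - q) ** (k - 1)

def bid(q, steps=20000):
    # b(q) = int_0^q v(r) x'(r) dr; v(r) = r assumes uniform[0,1] values.
    h = q / steps
    return sum((i + 0.5) * h * x_prime((i + 0.5) * h)
               for i in range(steps)) * h

delta = 0.9 / n  # the theorem requires delta < 1/n
assert bid(delta) <= delta * e * x_prime(delta)
assert bid(1.0) - bid(1 - delta) <= delta * e * x_prime(1 - delta)
```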
\subsection{Bounding the error at extreme quantiles}
\label{sec:extreme-quantile-bounds}
We now bound the remaining terms in
equation~\eqref{eq:error-allpay}. Once again these bounds rely on the
observation that for any quantile $q$, $Z_k(q)b(q)$ is bounded,
because $Z_k$ depends inversely on $x'(q)$, whereas $b(q)$ is roughly
proportional to it.
\begin{lemma}
\label{second-term-bound}
For $Z_y$ and $\Lambda$ as defined above, if $\delta_N\le 1/n$,
the second error term of
the estimator $\hat{P}_y$ is bounded as follows.
\begin{align*}
\expect[q\in\Lambda]{Z_y(q)\,\bid'(q)} &\le
e\,\delta_N\, y'(\delta_N) + e\,\delta_N^2 y'(1-\delta_N).
\end{align*}
\end{lemma}
\begin{proof}
Apply part~(\ref{part:bid-first}) of Lemma~\ref{l:bid-bound} and the
definition of $Z_y(q) = (1-q)\,y'(q)/x'(q)$ to obtain the following
upper bound:
$$
\expect[q\in\Lambda]{Z_y(q)\,\bid'(q)}
\le
\expect[q \in \Lambda]{(1-q)\,y'(q)}.
$$
For $q < \delta_N$, bound this expectation by $e\,\delta_N\,
y'(\delta_N)$ from Theorem~\ref{t:extremal-xprime-bound}. For $q >
1-\delta_N$, bound this expectation by $e\,\delta_N^2\,
y'(1-\delta_N)$.
Note, we could alternatively obtain the bound $\delta_N n$ by using
the fact that $\sup_q y'(q) \leq n$ (Fact~\ref{fact:max-slope}).
\end{proof}
\begin{lemma}
\label{third-term-bound}
For $Z_y$ and $\Lambda$ as defined above, if $\delta_N\le 1/n$, the third error term of
the estimator $\hat{P}_y$ is bounded as follows.
\begin{align*}
\expect[\hat{\bid}]{\left|\smash{Z_y(1-\delta_N)(\bid(1-\delta_N) -\hat{\bid}_N)}\right|}
& \le \delta_N y'(1-\delta_N)\, (e\delta_N+ \tfrac{8}{N}).
\end{align*}
\end{lemma}
\begin{proof}
Let $\hat{\quant}$ be the quantile of the highest of the $N$ observed bids,
i.e., $\bid(\hat{\quant}) = \ebid_N$.
Conditioned on $\hat{\quant}>1-\delta_N$, bid $\ebid_N = \bid(\hat{\quant})$ is
upper bounded by $\bid(1)$. Applying \autoref{t:bid-bound} to bound
$\bid(1) - \bid(1-\delta_N)$ gives a conditional error bound of
$$\smash{Z_y(1-\delta_N)(\bid(1) - \bid(1-\delta_N))} \leq
e\,\delta_N^2 y'(1-\delta_N).$$
Now condition on $\hat{\quant} < 1-\delta_N$. For this conditioning,
Lemma~\ref{l:bid-bound} shows that $\bid(1-\delta_N) - \bid(\hat{\quant})
\le \alloc(1-\delta_N) - \alloc(\hat{\quant})$. We will now bound
$\expect[\hat{\quant}]{\alloc(1-\delta_N)-\alloc(\hat{\quant}) \given \hat{\quant} <
1-\delta_N}\prob{\hat{\quant} \leq 1-\delta_N}$ which is at most
$\expect[\hat{\quant}]{1-\alloc(\hat{\quant})}$.
We first analyze $\expect[\hat{\quant}]{1-\alloc(\hat{\quant})}$ in the case
that $\alloc=\alloc_k$ is the allocation rule for the $k$-unit
auction. We have,
\begin{align*}
\expect[\hat{\quant}]{1-\alloc(\hat{\quant})} & = \int_0^1 (1-x_k(q)) N q^{N-1} dq \\
& = N \int_0^1 q^{N-1}\left( \sum_{i=k}^{i=n-1} {n-1 \choose i} q^{n-1-i}
(1-q)^i \right) dq \\
& = N \sum_{i=k}^{i=n-1} {n-1 \choose i} \int_0^1 q^{N+n-2-i} (1-q)^i
dq \\
& = N \sum_{i=k}^{i=n-1} {n-1 \choose i} \frac{(N+n-2-i)!
i!}{(N+n-1)!} \\
& = \frac{N}{N+n-1} \sum_{i=k}^{i=n-1} \frac{{n-1 \choose i}}{{N+n-2
\choose i}} \\
& \le \frac{N}{N+n-1} \sum_{i=k}^{i=n-1} \left(\frac{n}{N}\right)^i \\
& \le \frac{N}{N+n-1} \left(\frac{n}{N}\right)^k \frac{1}{1-n/N} \\
& \le 2 \left(\frac{n}{N}\right)^k,
\end{align*}
where the last inequality uses $N>1.5n$.
Substituting this back, we get for $x=x_k$:
\begin{align*}
& \expect[\hat{\bid}]{\left|\smash{Z_y(1-\delta_N)(\bid(1-\delta_N)
-\hat{\bid}_N)}\right|}\\
& \le \delta_N \frac{y'(1-\delta_N)}{x'_k(1-\delta_N)} \left\{
e\delta_Nx'_k(1-\delta_N) + 2 \left(\frac{n}{N}\right)^k \right\}\\
& = e\delta_N^2 y'(1-\delta_N) + 2\delta_N y'(1-\delta_N) \left\{
\frac{1}{(n-1){n-2\choose k-1} (1-\delta_N)^{n-1-k}\delta_N^{k-1}}
\left(\frac{n}{N}\right)^k \right\}\\
& \le e\delta_N^2 y'(1-\delta_N) + \frac{2}{N}\left(\frac{n}{n-1}\right) \delta_N y'(1-\delta_N)
\left(\frac{n}{N\delta_N}\right)^{k-1} \frac{1}{{n-2\choose k-1} (1-\delta_N)^{n-1-k}}\\
& \le e\delta_N^2 y'(1-\delta_N) + \frac{8}{N}\delta_N y'(1-\delta_N).
\end{align*}
Here the last inequality follows by noting that ${n-2\choose k-1}\ge
1$, $(1-\delta_N)^n>1/4$, and using $\delta_N\ge n/N$,
$\frac{n}{N\delta_N}\le 1$.
Finally, since $x$ is a linear combination of the $x_k$'s, we have,
\begin{align*}
& \expect[\hat{\bid}]{\left|\smash{Z_y(1-\delta_N)(\bid(1-\delta_N)
-\hat{\bid}_N)}\right|}\\
& \le \delta_N \frac{y'(1-\delta_N)}{x'(1-\delta_N)}
\left(e\delta_Nx'(1-\delta_N) + \expect[\hat{\bid}]{|1-x(q)|} \right)\\
& \le \max_k \delta_N \frac{y'(1-\delta_N)}{x'_k(1-\delta_N)}
\left(e\delta_Nx'_k(1-\delta_N) + \expect[\hat{\bid}]{|1-x_k(q)|}
\right)\\
& \le e\delta_N^2 y'(1-\delta_N) + \frac{8}{N}\delta_N y'(1-\delta_N). \qedhere
\end{align*}
\end{proof}
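The closed form derived in the proof above for the expected allocation shortfall at the top sampled quantile can be evaluated directly, giving a quick numerical check of the bound $2(n/N)^k$ (a Python sketch; the parameter choices are illustrative):

```python
from math import comb

def expected_shortfall(n, k, N):
    # E[1 - x_k(qhat)] for qhat the quantile of the largest of N samples,
    # via the closed form (N/(N+n-1)) * sum_i C(n-1,i)/C(N+n-2,i).
    return (N / (N + n - 1)) * sum(comb(n - 1, i) / comb(N + n - 2, i)
                                   for i in range(k, n))

for n, k, N in [(5, 1, 100), (5, 3, 100), (8, 2, 1000)]:
    assert expected_shortfall(n, k, N) <= 2 * (n / N) ** k
```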
\begin{lemma}
\label{fourth-term-bound}
For $Z_y$ and $\Lambda$ as defined above, if $\delta_N\le 1/n$,
the fourth error term of the estimator $\hat{P}_y$ is bounded as
follows.
\begin{align*}
Z_y(\delta_N)\bid(\delta_N)
& \le e\,\delta_N\,y'(\delta_N).
\end{align*}
\end{lemma}
\begin{proof}
The lemma follows directly from the definition of $Z_y$ with the
upper-bound on $b(\delta_N)$ of Theorem~\ref{t:bid-bound}.
\end{proof}
\section{Constructing a position auction with a target vector of position weights}
\label{a:position-auction-construction}
In this section we show that in any position auction environment given
by position weights $\wals$, we can construct a rank based auction
with induced position weights $\iwals$ satisfying $\cumiwals \leq
\cumwals$. The allocation rule of the auction is constructed as a
(random) sequence of iron by rank and rank reserve operations.
\begin{lemma}
In any position auction environment with position weights $\wals$ and
target weights $\iwals$ satisfying $\cumiwals \leq \cumwals$, there
exists a (random) sequence of iron by rank and rank reserve operations
following which the induced position weights are exactly $\iwals$.
\end{lemma}
\begin{proof}
Suppose that we have $\cumiwals \leq \cumwals$. We will describe how
to assign agents to slots so as to obtain position weights
$\iwals$. Let $\yals$ denote the position weights corresponding
to an intermediate assignment. We begin by assigning the agent with
the $i$th largest bid to the $i$th slot for all $i\in [n]$. The
position weights for this assignment are given by
$\yals=\wals$. We will then construct a series of transformations
or reassignments of agents to positions, each time making a small
change to the weights $\yals$, so as to bring them closer to the
target $\iwals$. Each transformation is either a rank-based ironing
operation or a rank reserve.
Let $i$ denote the largest index such that $\cumiwalk=\cumyalk$
for all $k\le i$. We will now present a (randomized) transformation
that will increase the value of $i$. Specifically, we will reassign
some agents to positions in a manner such that the resulting
position weights $\newyals$ satisfy: $\cumiwalk=\cumnewyalk$ for
$k\le i+1$ and $\cumyalk\ge \cumnewyalk\ge \cumiwalk$ for
$k>i+1$.
Consider the operation of ironing by rank over the interval
$\{i,\ldots,i'\}$ for some index $i'>i$. Recall that this operation
averages out the position weights over this interval, setting each
such weight equal to $(\cumyalk[i']-\cumyalk[i])/(i'-i)$,
while leaving all other position weights intact. It also preserves
cumulative weights at positions $k\le i$ and positions $k\ge
i'$. Note also that the larger that $i'$ is, the smaller is the
average weight $(\cumyalk[i']-\cumyalk[i])/(i'-i)$. In
particular, for any $i'>i+1$, this operation strictly decreases the
$i+1$th position weight.
Suppose that there exists an index $i'$, with $i+1<i'< n$, such that
$(\cumyalk[i']-\cumyalk[i])/(i'-i) \ge \iwalk[i+1]$ and
$(\cumyalk[i'+1]-\cumyalk[i])/(i'+1-i) < \iwalk[i+1]$. Let
$A:= (\cumyalk[i']-\cumyalk[i])/(i'-i)$ and
$B:=(\cumyalk[i'+1]-\cumyalk[i])/(i'+1-i)$. Let $\alpha\in
(0,1]$ be defined such that $\alpha A + (1-\alpha) B =
\iwalk[i+1]$. Now consider the following transformation. With
probability $\alpha$, we iron over the rank interval
$\{i,\ldots,i'\}$ and with probability $1-\alpha$, we iron over the
rank interval $\{i,\ldots,i'+1\}$. Let $\newyals$ and
$\cumnewyals$ denote the new positions weights and cumulative
position weights at the end of the (randomized) ironing
operation. Note that both of these ironing operations preserve the
position weights over positions $k\le i$ and $k>i'+1$. Over
positions $k\in \{i,\ldots,i'\}$, the new position weight
$\newyalk$ is exactly $\alpha A + (1-\alpha)B =
\iwalk[i+1]$. Finally, both ironing operations maintain the same
cumulative weight at position $i'+1$. Since $\cumnewyalk[i'] =
\cumyalk[i']= \cumiwalk[i']$ and $\cumnewyalk[i'+1] =
\cumyalk[i'+1]>\cumiwalk[i'+1]$, we get that the new position
weight at $i'+1$ is at least $\iwalk[i'+1]$. This completes one step
of our transformation.
Alternately, suppose that for $i'=n$, we have
$(\cumyalk[i']-\cumyalk[i])/(i'-i) \ge \iwalk[i+1]$. Let $A:=
(\cumyalk[i']-\cumyalk[i])/(i'-i)$ and let $\alpha\in [0,1]$ be
defined such that $\alpha A = \iwalk[i+1]$. Now consider the
following transformation. With probability $\alpha$, we iron over
the rank interval $\{i,\ldots,n\}$ and with probability $1-\alpha$,
we set a rank reserve of $i$, that is, we reject every agent with
rank $>i$. Note that both of these operations preserve the position
weights over positions $k\le i$. For $k>i$, the new position weights
are exactly $\alpha A = \iwalk[i+1]$. Therefore, once again, we
obtain $\cumiwalk=\cumnewyalk$ for $k\le i+1$ and $\cumyalk\ge
\cumnewyalk\ge \cumiwalk$ for $k>i+1$.
To summarize, we described a sequence of randomized operations. Each
step of the sequence increases the number of positions over which
the position weights corresponding to our current assignment,
$\yals$, match the target position weights, $\iwals$. After at
most $n$ such operations we obtain a randomized assignment of agents
to positions achieving the target position weights.
\end{proof}
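A single deterministic iron-by-rank step is easy to state in code. The sketch below (Python; the weight vector is an arbitrary example) averages the position weights over a rank interval and checks that, as used in the proof above, cumulative weights outside the interval are preserved:

```python
def iron_by_rank(weights, lo, hi):
    # Average the position weights over ranks lo..hi (0-indexed, inclusive),
    # leaving the other positions intact.
    avg = sum(weights[lo:hi + 1]) / (hi + 1 - lo)
    return weights[:lo] + [avg] * (hi + 1 - lo) + weights[hi + 1:]

w = [1.0, 0.9, 0.5, 0.2, 0.0]
ironed = iron_by_rank(w, 1, 3)
# Cumulative weights before and after the ironed interval are preserved.
assert ironed[0] == w[0]
assert abs(sum(ironed[:4]) - sum(w[:4])) < 1e-12
assert ironed[1] == ironed[2] == ironed[3]
```

The randomized step in the proof mixes two such operations (or one ironing with a rank reserve) so that the expected weight at position $i+1$ hits the target exactly.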
\section{Finding the optimal iron by rank auction}
\label{s:iron-opt-app}
Recall that iron by rank auctions are weighted sums of multi-unit
auctions. Therefore, their revenue can be expressed as a weighted sum
over the revenues $P_k$ of $k$-unit auctions. We consider a position environment given by non-increasing weights $\wals
= (\walk[1],\ldots,\walk[n])$, with $\walk[0] = 0$, $\walk[1]=1$, and $\walk[n+1] = 0$. Define the cumulative position
weights $\cumwals = (\cumwalk[1],\ldots,\cumwalk[n])$ as $\cumwalk =
\sum_{j \leq k} \walk[j]$.
Define the {\em multi-unit revenue
curve} as the piecewise linear function connecting the points
$(0,\murevk[0]),\ldots,(n,\murevk[n])$. This function may or may not
be concave. Define the {\em ironed multi-unit revenue curve}
$\imurevs = (\imurevk[1],\ldots,\imurevk[n])$ as the smallest concave
function that upper bounds the multi-unit revenue curve. Define the
multi-unit marginal revenues $\mumargs =
(\mumargk[1],\ldots,\mumargk[n])$ and $\imumargs =
(\imumargk[1],\ldots,\imumargk[n])$ as the left slopes of the multi-unit
and ironed multi-unit revenue curves, respectively. I.e., $\mumargk = \murevk - \murevk[k-1]$ and $\imumargk = \imurevk - \imurevk[k-1]$.
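The ironed marginal revenues can be computed by pooling adjacent violators: whenever the marginals increase, average the offending block. The sketch below (Python; the revenue values are a made-up non-concave example) computes the slopes of the least concave majorant this way:

```python
def ironed_marginals(P):
    # P = [P_0, P_1, ..., P_n] are the multi-unit revenues, P_0 = 0.
    # The ironed marginals are the slopes of the least concave majorant,
    # obtained by averaging any adjacent block where marginals increase.
    marg = [P[i + 1] - P[i] for i in range(len(P) - 1)]
    blocks = []  # list of (sum, count) for pooled blocks
    for m in marg:
        blocks.append((m, 1))
        # Merge while the block averages are not nonincreasing.
        while (len(blocks) >= 2 and
               blocks[-2][0] / blocks[-2][1] < blocks[-1][0] / blocks[-1][1]):
            s2, c2 = blocks.pop()
            s1, c1 = blocks.pop()
            blocks.append((s1 + s2, c1 + c2))
    out = []
    for s, c in blocks:
        out.extend([s / c] * c)
    return out

# Example: a non-concave revenue curve P = (0, 1, 1.2, 2.1, 2.2).
im = ironed_marginals([0, 1, 1.2, 2.1, 2.2])
assert all(im[i] >= im[i + 1] for i in range(len(im) - 1))
assert abs(sum(im) - 2.2) < 1e-12
```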
We now see how the revenue of any position auction can be expressed in
terms of the multi-unit revenue curves and marginal revenues.
\begin{align*}
\expect{\text{revenue}} &= \sum_{k=0}^n \murevk\,\margwalk
= \sum_{k=0}^n \mumargk\,\walk\\
&\leq \sum_{k=0}^n \imurevk\,\margwalk
= \sum_{k=0}^n \imumargk\,\walk.
\end{align*}
The first equality follows from viewing the position auction with
weights $\wals$ as a convex combination of multi-unit auctions (where
its revenue is the convex combination of the multi-unit auction
revenues). The second and final equalities follow from rearranging
the sum (a manipulation equivalent to integration by parts). The
inequality follows from the fact that $\imurevs$ is defined as the
smallest concave function that upper bounds $\murevs$ and, therefore,
satisfies $\imurevk \geq \murevk$ for all $k$. Of course the
inequality is an equality if and only if $\margwalk = 0$ for every
$k$ such that $\imumargk > \mumargk$.
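The two rearrangement identities in the display above are a discrete integration by parts and can be checked on toy data (Python; the weights and revenues are arbitrary illustrative numbers):

```python
# w_k: nonincreasing position weights (with w_{n+1} = 0);
# P_k: multi-unit revenues, P_0 = 0.
w = [1.0, 0.7, 0.4, 0.1]
P = [0.0, 0.30, 0.45, 0.50, 0.52]
n = len(w)
marg_w = [w[k] - (w[k + 1] if k + 1 < n else 0.0) for k in range(n)]
marg_P = [P[k + 1] - P[k] for k in range(n)]     # marginal revenues
lhs = sum(P[k + 1] * marg_w[k] for k in range(n))  # sum_k P_k w'_k
rhs = sum(marg_P[k] * w[k] for k in range(n))      # sum_k P'_k w_k
assert abs(lhs - rhs) < 1e-9
```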
We now characterize the optimal ironing-by-rank position auction.
Given position auction weights $\wals$, we would like the
ironing-by-rank which produces $\iwals$ (with cumulative weights
satisfying $\cumwals \geq \cumiwals$) with optimal revenue. By the
above discussion, revenue is accounted for by marginal revenues, and
upper bounded by ironed marginal revenues. If we optimize for ironed
marginal revenues and the condition for equality holds then this is
the optimal revenue. Notice that ironed revenues are concave in $k$,
so ironed marginal revenues are monotone (weakly) decreasing in $k$.
The position weights are also monotone (weakly) decreasing. The
assignment between ranks and positions that optimizes ironed marginal
revenue is greedy with positions corresponding to ranks with negative
ironed marginal revenue discarded. Tentatively assign the $k$th rank
agent to slot $k$ (discarding agents that correspond to discarded
positions). This assignment indeed maximizes ironed marginal revenue
for the given position weights but may not satisfy the condition for
equality of revenue with ironed marginal revenue. To meet this
condition with equality we can randomly permute (a.k.a., iron by rank)
the positions that correspond to intervals where the revenue curve is
ironed. This does not change the surplus of ironed marginal revenue
as the ironed marginal revenues on this interval are the same, and the
resulting position weights $\iwals$ satisfy the condition for equality
of revenue and ironed marginal revenue.
\subsection{Comparing revenues}
\label{s:direct-comparison}
We have considered the case where the empirical task was to recover
the revenues for one mechanism ($y$) using the sample of bids
responding to another mechanism ($x$). In many practical situations
the empirical task is simply the verification of whether the revenue
from a given mechanism is higher than the revenue from another
mechanism. Or, equivalently, the task could be to verify whether one
mechanism provides revenue which is a certain percentage above that of
another mechanism. We now demonstrate that this is a much easier
empirical task in terms of accuracy than the task of inferring the
revenue.
\par
Suppose that we want to compare the revenues of mechanisms $B_1$ and
$B_2$ by mixing them into an incumbent mechanism $A$, and running the
composite mechanism $C = \epsilon B_1 + \epsilon B_2 + (1-2\epsilon) A$.
Specifically, we would like to determine whether $\REV{B_1}>\alpha
\REV{B_2}$ for some $\alpha>0$. Consider the binary classifier
$\hat{\gamma}$ which is equal to $1$ when $\hat{P}_{B_1}>\alpha\,
\hat{P}_{B_2}$ and $0$ otherwise. Let $\gamma={\bf
1}\{P_{B_1}-\alpha\,P_{B_2}>0\}$ be the corresponding ``ideal''
classifier for the case where the distribution of bids from mechanism
$C$ is known precisely. To evaluate the accuracy of the classifier, we
need to evaluate the probability $\prob{\hat{\gamma}=1|\gamma=0}$,
and likewise, $\prob{\hat{\gamma}=0|\gamma=1}$. The classifier will
give the wrong output if the sampling noise in estimating
$\hat{P}_{B_1}-\alpha\,\hat{P}_{B_2}$ is greater than
$|P_{B_1}-\alpha\,P_{B_2}|$.
Our main result of this section says that fixing the number of
positions $n$, $\alpha$, and the difference
$|P_{B_1}-\alpha\,P_{B_2}|$, with the number of samples from the bid
distribution, $N$, being large enough, the probability of incorrect
output decreases exponentially with $N$.
\begin{theorem}
\label{theorem:sign-AB}
For arbitrary $n$-agent rank-based auctions $A$, $B_1$, and $B_2$ and
$N$ bids from the equilibrium bid distribution of mechanism $C=\epsilon
B_1+\epsilon B_2+(1-2\epsilon)A$, the estimator for the binary classifier
$\gamma={\bf 1}\{P_{B_1}-\alpha\,P_{B_2}>0\}$, that establishes
whether the revenue of mechanism $B_1$ exceeds $\alpha$ times the
revenue of mechanism $B_2$, has error rate bounded by
$$\exp\left(-O\left(\frac{Na^2}{\alpha^2n^3\log(n/\epsilon)}\right)\right),$$
where $a=|P_{B_1}-\alpha\,P_{B_2}|$, as long as $N\gg n/(\epsilon a)$.
\end{theorem}
We obtain a similar error bound when our goal is to estimate which of
$r$ different novel mechanisms obtains the most revenue, for any
$r>1$:
\begin{corollary}
Suppose that our goal is to determine which of $r$ rank-based
auctions, $B_1, B_2, \cdots, B_r$, obtains the most revenue while
running incumbent mechanism $A$, by running each of the novel
mechanisms with probability $\epsilon/r$. Then the error probability of
the corresponding classifier constructed using $N$ bids from
composite mechanism $C = \sum_{i=1}^{r} (\epsilon/r) B_i + (1-\epsilon)A$ is
bounded from above by
$$r\exp\left(-O\left(\frac{Na^2}{n^3\log(rn/\epsilon)}\right)\right),$$
where $a$ is the absolute difference between the revenue obtained by
the best two of the $r$ mechanisms.
\end{corollary}
\section{Discussion and Conclusions}
We conclude with some observations and discussion.
\begin{itemize}
\item Good inference requires careful design of the mechanism. Perfect
inference and perfect optimality cannot be achieved together.
\item We cannot achieve good accuracy in inferring the revenue of an
arbitrary mechanism, or in inferring the entire revenue curve. In
contrast, the multi-unit revenues $P_k$ are special functions that
depend linearly on the bid distribution (and not, for example, on
bid density). This property enables them to be learned accurately.
\item Rank-based mechanisms achieve a good tradeoff between revenue
optimality and quality of inference in position environments: (1)
They are close to optimal regardless of the value distribution;
(2) Optimizing over this class for revenue requires estimating only
$n$ parameters $P_k$ that, by our observation above, are ``easy'' to
estimate accurately; (3) Rank-based mechanisms satisfy the
necessary conditions on the slope of the allocation function that
enable good inference.
\end{itemize}
\section{Inference methodology and error bounds for all-pay auctions}
\label{s:inference}
\label{s:param-inf}
\label{S:PARAM-INF}
We will now develop a methodology and error bounds for estimating the
revenue of one rank-based auction using bids from another rank-based
auction. There are two advantages of the restriction of our analysis
to rank-based auctions. First, the allocation rule (in quantile
space) of a rank-based auction is independent of the bid and value
distribution; therefore, it is known and does not need to be
estimated. Second, the allocation rules that result from rank-based
auctions are well behaved, in particular their slopes are bounded, and
our error analysis makes use of this property.
Recall from Section~\ref{sec:rank-based-basics} that the
revenue of any rank-based auction can be expressed as a linear
combination of the multi-unit revenues $\murevk[1],\ldots,\murevk[n]$
with $\murevk$ equal to the per-agent revenue of the
highest-$k$-bids-win auction. Therefore, in order to estimate the
revenue of a rank-based auction, it suffices to estimate each
$\murevk$ accurately.
In Section~\ref{s:inference-equation} we derive the counterfactual
revenue estimator. We state and discuss the error bounds of this
estimator in Section~\ref{s:inference-k-thm}. Two bounds are given;
the first bound holds in worst case over counterfactual and incumbent
mechanisms and the second bound depends on the closeness of the
allocation rules of the counterfactual and incumbent mechanisms. The
main ideas of the derivation of the error bounds are given in
Section~\ref{s:inference-k}.
\subsection{The revenue estimator}
\label{s:inference-equation}
This section derives an estimator for counterfactual revenue of an
auction with a given allocation rule from the bids of an all-pay
rank-based auction. The estimator is based on the following two
observations. The first observation is that the revenue
equation~\eqref{eq:bne-rev} can be expanded with the inference
equation~\eqref{eq:ap-inf} and integrated by parts to give an
equation for counterfactual revenue as a linear function of the bid
function. Our estimator is then a weighted order statistic of the empirical bid
function. The second observation is that truncating the bid
distribution at its extremes results in a tradeoff of the variance of
the resulting estimator (which can diverge) with a bias (which is bounded).
Consider estimating the revenue of an auction with allocation rule $y$
from the bids of an all-pay rank-based auction. In terms of the
revenue curve $R(\cdot)$ or inverse demand function $\val(\cdot)$,
the per-agent revenue of the allocation rule $y$ is given by:
\begin{align}
\notag
P_y &= \expect[q]{y'(q)\,R(q)} =
\expect[q]{y'(q)\,\val(q)\,(1-q)}.\\
\intertext{Let $\alloc$ denote the allocation rule of the auction that we run,
and $\bid$ denote the bid distribution in BNE of this auction. Recall
that for an all-pay auction format, we can convert the bid
distribution into the value distribution as follows: $\val(q) =
\bid'(q)/\alloc'(q)$. Substituting this equation into the expression
for $P_y$ above we get}
\label{eq:inference-pre-integration-by-parts}
P_y &=
\expect[q]{y'(q)(1-q)\frac{\bid'(q)}{\alloc'(q)}}
= \expect[q]{Z_y(q)\,\bid'(q)}\\ \intertext{where
$Z_y(q)=(1-q)\frac{y'(q)}{\alloc'(q)}$.
Treating this expectation as an integral and integrating it by
parts, when the constant terms are zero, gives:}
\label{eq:inference-integration-by-parts}
P_y &= \expect[q]{-Z'_y(q)\,\bid(q)}.
\end{align}
The subsequent analysis will include consideration of the constant
terms when they are not zero.
The analysis above gives two ways to write counterfactual revenue.
Equation~\eqref{eq:inference-pre-integration-by-parts} writes the
revenue as linear in the derivative of the bid function while
equation~\eqref{eq:inference-integration-by-parts} writes it as linear
in the bid function. We will define our estimator in terms of the former
for extreme quantiles and in terms of the latter for moderate
quantiles. The reason for this definition is that the latter gives a
simple and well behaved estimator in terms of the bid function, but
might diverge at the extremes;\footnote{Both
$Z_y(q)=(1-q)\frac{y'(q)}{\alloc'(q)}$ and
$Z'_y(q)$ can be infinite at the boundary $q \in \{0,1\}$
when $\alloc$ and $y$ are polynomials of different degrees.} while
the former at the extremes introduces only modest bias when
approximated by zero.
\begin{lemma}
\label{l:counterfactual-revenue}
\label{weights lemma}
The per-agent counterfactual revenue of a rank-based auction with
allocation rule $y$ can be expressed in terms of the bid function
$\bid$ of an all-pay mechanism $\alloc$ as:
\begin{align}
\label{eq:P_y}
P_y & =
\overbrace{\expect[q\not\in\Lambda]{-Z'_y(q)\,\bid(q)} +
Z_y(1-\delta_N)\,\bid(1-\delta_N)-Z_y(\delta_N)\,\bid(\delta_N)}^{\makebox[0in]{\tiny
contribution from moderate quantiles}}\ + \ \\ \notag &\quad\,
\underbrace{\expect[q\in\Lambda]{Z_y(q)\,\bid'(q)}}_{\makebox[0in]{\tiny
contribution from extreme quantiles}}\\
\intertext{%
where $Z_y(q) = (1-q) \frac{y'(q)}{\alloc'(q)}$,
extreme quantiles are $\Lambda = [0,\delta_N] \cup [1-\delta_N,1]$,
and the truncation parameter is $\delta_N \in [0,1/2]$. For bid
functions that are constant on the extreme quantiles, the
counterfactual revenue can be written as}
\label{eq:P_y-truncated}
P_y & =
\expect[q\not\in\Lambda]{-Z'_y(q)\,\bid(q)} +
Z_y(1-\delta_N)\,\bid(1).
\end{align}
In this latter case or when $\delta_N = 0$, the expressed revenue is
linear in the bid function.
\end{lemma}
\begin{proof}
The first part of the lemma follows from plugging the all-pay
inference equation~\eqref{eq:ap-inf} into the revenue
equation~\eqref{eq:bne-rev} and integrating by parts on moderate quantiles $[\delta_N,1-\delta_N]$. The second part
of the lemma simplifies the first part using $\bid'(q) = 0$ for
extremal $q \in \Lambda$, $\bid(0) = \bid(\delta_N) = 0$, and $\bid(1-\delta_N) = \bid(1)$.
\end{proof}
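The decomposition in the lemma can be checked numerically for a case where everything has a closed form. The sketch below assumes $n=5$ agents with uniform $[0,1]$ values, incumbent rule $x = x_2$ and counterfactual $y = x_1$, so that $b(q) = 3q^4 - 2.4q^5$ and the true revenue is $P_y = 2/15$; all of these choices are illustrative:

```python
def integrate(f, a, b, steps=20000):
    # Midpoint rule, accurate enough for these smooth integrands.
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

y_prime = lambda q: 4 * q ** 3                 # x_1' for n = 5
x_prime = lambda q: 12 * q ** 2 * (1 - q)      # x_2' for n = 5
Z = lambda q: (1 - q) * y_prime(q) / x_prime(q)  # simplifies to q/3
Zp = lambda q: 1 / 3
bid = lambda q: 3 * q ** 4 - 2.4 * q ** 5      # BNE bid under uniform values
bidp = lambda q: q * x_prime(q)                # b'(q) = v(q) x'(q)

d = 0.1  # truncation parameter
moderate = integrate(lambda q: -Zp(q) * bid(q), d, 1 - d)
boundary = Z(1 - d) * bid(1 - d) - Z(d) * bid(d)
extreme = (integrate(lambda q: Z(q) * bidp(q), 0, d)
           + integrate(lambda q: Z(q) * bidp(q), 1 - d, 1))
assert abs(moderate + boundary + extreme - 2 / 15) < 1e-4
```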
This formulation allows the estimation of $P_y$ directly as a weighted
order statistic of the observed bids, with $b(\cdot)$ replaced by the
estimated bid distribution $\hat{b}(\cdot)$. \autoref{error bid
function} tells us that, except at the extreme quantiles, the
estimated bid distribution $\hat{b}(\cdot)$ closely approximates
$b(\cdot)$. At the extreme quantiles there is a bias-variance
tradeoff. The variance from including the contribution to the revenue
from these quantiles in the estimator can greatly exceed the bias from
excluding them entirely. Thus, to prevent the larger error at the
extreme quantiles from degrading the accuracy of the estimator, these
estimated bids are rounded down to zero and up to the maximum observed
bid at the low and high extremes, respectively. Recall that the
estimated bid distribution $\hat{b}(\cdot)$ is defined, in
equation~\eqref{bid function}, as a piecewise constant function with
$N$ pieces. Thus, the estimator $\hat{P}_y =
\expect[q]{\smash{-Z'_y(q)\,\hat{\bid}(q)}}$ can be simplified as
expressed in the following definition.
\begin{definition}
\label{d:estimator}
The estimator $\hat{P}_y$ (with truncation parameter $\delta_N$) for
the revenue of an auction with allocation rule $y$ from $N$ samples
$\hat{b}_1 \leq \cdots \leq \hat{b}_N$ from the equilibrium bid
distribution of an all-pay auction with allocation rule $x$ is:
\begin{align*}
\hat{P}_y& = \sum_{i=\delta_N N}^{N-\delta_N N} \left[ \left(1-\frac
{i-1} N\right)\frac{y'(\frac {i-1}N)}{x'( \frac {i-1}N)} -
\left(1-\frac{i}{N}\right)\frac{y'(\frac{i}N)}{x'(\frac{i}N)}
\right] \, \hat{\bid}_i + \delta_N \frac{y'(1-\delta_N)}{x'(1-\delta_N)} \hat{b}_N.
\end{align*}
\end{definition}
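As an illustration, the estimator can be computed in a few lines from a sorted bid sample. The sketch below is our own (the helper name is hypothetical); it assumes the allocation-rule derivatives are available as callables that are positive on the evaluated quantile grid:

```python
import math

def counterfactual_revenue_estimate(bids, yprime, xprime, delta):
    """Weighted-order-statistic estimator of counterfactual revenue (a sketch).

    bids   : sorted list of N observed all-pay bids (ascending)
    yprime : derivative of the counterfactual allocation rule y
    xprime : derivative of the incumbent allocation rule x
    delta  : truncation parameter delta_N in [0, 1/2]
    """
    N = len(bids)
    Z = lambda q: (1.0 - q) * yprime(q) / xprime(q)
    lo = math.ceil(delta * N)        # first untruncated 1-based index
    hi = N - lo                      # last untruncated 1-based index
    total = 0.0
    for i in range(max(lo, 1), hi + 1):
        # weight on the i-th order statistic: difference of Z at grid quantiles
        w = Z((i - 1) / N) - Z(i / N)
        total += w * bids[i - 1]
    # boundary term: note delta * yprime(1-delta)/xprime(1-delta) = Z(1-delta),
    # its mass is placed on the maximum observed bid
    total += Z(1.0 - delta) * bids[-1] if delta > 0 else 0.0
    return total
```

With $x = y$ (so $Z(q) = 1-q$ and each weight equals $1/N$) the estimate reduces to the sample mean of the bids, matching the all-pay revenue identity.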
This estimator is a weighted order statistic. Our main theorems set
the truncation parameter to a specific value $\delta_N = \max(25 \log
\log N,n)/N$ and show that, with no assumptions on the distribution of
values or bids, the truncated estimator's mean absolute error is
bounded. Importantly, this truncated estimator does not have any
parameters that need to be tuned to the distribution of bids or
values.
We refer to the estimator with truncation parameter set to zero as the
untruncated estimator. We study the untruncated estimator in
simulations in Section~\ref{s:simulations} and demonstrate that, when
$Z_y(0)$ and $Z_y(1)$ are small, it can be quite accurate with very
few bid samples.
\subsection{Error bounds}
\label{s:inference-k-thm}
Our first main result of this section is the following error bound for
the estimator of Definition~\ref{d:estimator}.
\begin{theorem}\label{thm:allpay-simple}
The mean absolute error in estimating the revenue of a rank-based
auction with allocation rule $y$ using $N$ samples from the bid
distribution for an all-pay rank-based auction with allocation rule
$x$ is bounded as below. Here $n$ is the number of positions in the
two auctions, and $\hat{P}_y$ is the estimator in Definition~\ref{d:estimator}
with $\delta_N$ set to $\max(25\log\log N,n)/N$.
\begin{align*}
\Err{P_y} & \le \frac{16n^2\log N}{\sqrt{N}}.
\end{align*}
\end{theorem}
Observe that the above error bound is independent of the allocation
rules $x$ and $y$. When $x$ and $y$ are similar to each other, our
estimator should in fact achieve a much better error rate than the one
above. For example, when $x$ and $y$ are identical, the error in
estimation should have the same dependence on the number of samples as
the statistical error in bids, namely $1/\sqrt{N}$. Our next
theorem quantifies this relationship.
In order to capture the dependence of our error bounds in estimating
$P_y$ on the relationship between the incumbent allocation rule $x$
and the counterfactual allocation rule $y$, we define a new
quantity, $\rxy$, as follows:
\begin{align}
\label{eq:rxy}
\rxy &:= \sup_{q}\{y'(q)\}\,\max\left\{1, \,\log
\sup_{q: y'(q)\ge 1}\frac{\alloc'(q)}{y'(q)}, \,\log
\sup_{q}\frac{y'(q)}{\alloc'(q)} \right\}.
\end{align}
\noindent
We then obtain the following theorem for the special case of
estimating the multi-unit revenues.
\begin{theorem}
\label{thm:allpay-general}
Let $\alloc$ and $\kalloc$ denote the allocation rules for any
all-pay rank-based auction and the $k$-highest-bids-win auction over
$n$ positions, respectively. Let $\hat{\murevk}$ denote the
estimator from Definition~\ref{d:estimator} for estimating the
revenue $P_k$ of the latter auction from $N$ samples of the
bid distribution of the former, with $\delta_N$ set to
$\max(25\log\log N,n)/N$. If $\delta_N\le 1/n$, the mean absolute
error of the estimator $\hat{\murevk}$ is bounded as follows.
\begin{align*}
\Err{\murevk}
\le & \,\frac{80}{\sqrt{N}}\,\rxy[\kalloc].
\end{align*}
\end{theorem}
We obtain a slightly worse error bound when $y$ is a general rank-based auction:
\begin{corollary}
\label{cor:allpay-y}
Let $\alloc$ and $y$ denote the allocation rules for any two all-pay
rank-based auctions over $n$ positions. Let $\hat{P}_y$ denote the
estimator from Definition~\ref{d:estimator} for estimating the
revenue of the latter from $N$ samples of the bid
distribution of the former, with $\delta_N$ set to $\max(25\log\log
N,n)/N$. If $\delta_N\le 1/n$, the mean absolute error of the
estimator $\hat{P}_y$ is bounded as follows.
\begin{align*}
\Err{P_y}
\le & \,\frac{80}{\sqrt{N}}\, n \, \log \sup\nolimits_q n\, \frac{y'(q)}{x'(q)}.
\end{align*}
\end{corollary}
We sketch the main ideas of Theorem~\ref{thm:allpay-simple} in
Section~\ref{s:inference-k}. The full proof and the refinement
necessary to obtain Theorem~\ref{thm:allpay-general} and
Corollary~\ref{cor:allpay-y} are given in Appendix~\ref{s:proofs-3}.
From Theorem~\ref{thm:allpay-general}, the error in the estimator
depends on the slopes of the allocation rules $x$ and $y$. The maximum
slope of the multi-unit allocation rules, and therefore also that of
any rank-based auction, is always bounded by $n$, the number of agents
in the auction (summarized as Fact~\ref{fact:max-slope}, below).
\begin{fact}
\label{fact:max-slope}
The maximum slope of the allocation rule $\alloc$ of any $n$-agent
rank-based auction is bounded by $n$: $\sup_q \alloc'(q)\le n$. More
specifically, the maximum slope of the allocation rule $\kalloc$ for
the $n$-agent highest-$k$-bids-win auction is bounded by $$\sup_q
\kalloc'(q)\in \left[\frac{1}{\sqrt{2\pi}},\frac{1}{\sqrt{\pi}}\right]
\frac{n-1}{\sqrt{\min\{k-1, n-k\}}} = \Theta\left(\frac
n{\sqrt{\min\{k,n-k\}}}\right).$$
\end{fact}
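For intuition, Fact~\ref{fact:max-slope} can be checked numerically. The sketch below uses the closed form $\kalloc'(q)=(n-1)\binom{n-2}{k-1}\,q^{n-1-k}(1-q)^{k-1}$, which we reconstruct from the endpoint cases in Example~\ref{eg:extremal} and the ratio identity in the proof of Lemma~\ref{lem:univ}; treat it as an illustration rather than a formula stated in the text:

```python
from math import comb

def xk_prime(q, n, k):
    # Derivative of the k-highest-bids-win allocation rule over n agents;
    # matches Example eg:extremal at k = 1 and k = n-1.
    return (n - 1) * comb(n - 2, k - 1) * q ** (n - 1 - k) * (1 - q) ** (k - 1)

def max_slope(n, k, gridsize=1000):
    # Numerically approximate sup_q x_k'(q) on a uniform quantile grid.
    return max(xk_prime(i / gridsize, n, k) for i in range(gridsize + 1))
```

For $n = 20$ the numerical maxima stay below $n$ for every $k$, and for middle $k$ they fall inside the $\Theta(n/\sqrt{\min\{k,n-k\}})$ window of the fact.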
We evaluate the error bound given by Theorem~\ref{thm:allpay-general}
for a few special cases of $x$ and $x_k$. For simplicity in applying
Fact~\ref{fact:max-slope}, we assume $k < n/2$.
\begin{itemize}
\item When $\alloc=\kalloc$, $\rxy[\kalloc]\le n/\sqrt{k}$ and we get
an error bound of $80
\frac{n}{\sqrt{Nk}}$, which is the same (within a constant factor)
as the statistical error in bids.
\item The bound in the previous case degrades smoothly when $\alloc$
and $\kalloc$ are close but not identical, as in $\epsilon\kalloc'\le \alloc'\le
\kalloc'/\epsilon$ for $\epsilon>0$. We have $\rxy[\kalloc]\le
\log(1/\epsilon)\, n/\sqrt{k}$ and the error bound is: $80 \log(1/\epsilon)
\frac{n}{\sqrt{Nk}}$.
\item Finally, as long as $\alloc'\ge \epsilon\kalloc'$, that is, the
highest-$k$-bids-win auction is mixed in with $\epsilon$ probability
into $\alloc$, we observe via Fact~\ref{fact:max-slope} that $\sup_{q:
\kalloc'(q)\ge 1} \alloc'(q)/\kalloc'(q) \le \sup_q \alloc'(q)\le
n$, and obtain an error bound of $80 \log(n/\epsilon) \frac{n}{\sqrt{Nk}}$.
\end{itemize}
\subsection{Derivation of error bounds}
\label{s:inference-k}
We now give the main ideas behind the proof of
Theorem~\ref{thm:allpay-simple}. This analysis is non-trivial because
the estimator, which is a weighted order statistic, is based on
weights whose magnitudes can be exponentially large. Importantly,
$Z_k(q) = (1-q) y'(q)/x'(q)$ and, while the numerator $y'(q)$ is
bounded by $n$ (the number of agents) by Fact~\ref{fact:max-slope},
the denominator can be exponentially small. Thus, both $Z_k(q)$ and
its derivative $Z'_k(q)$ can be exponentially large, specifically
$N^{cn}$ for absolute constant $c \in (0,1)$ (see
Example~\ref{eg:extremal}, below). The revenue estimator is a
weighted order statistic with weights proportional to $Z'_k$ and thus
straightforward analyses will not give good bounds on the error. We
begin with one such analysis that gives an error bound that is linear
in the maximum of $Z_k(q)$ and modify it to reduce the dependence on
this term to be logarithmic. For non-extremal quantiles $q$,
$Z_k(q)$ is bounded by $N^n$ and thus $\log Z_k(q)$ is at most
the $n \log N$ term that appears in the error bound of
Theorem~\ref{thm:allpay-simple}.
\begin{example}
\label{eg:extremal} The allocation rules and derivatives for the $k=1$ unit auction and the $k=n-1$ unit auction are:
\begin{align*}
\kalloc[n-1](q) &= 1-(1-q)^{n-1}; & \kalloc[n-1]'(q) &= (n-1)\,(1-q)^{n-2}.\\
\kalloc[1](q) &= q^{n-1};& \kalloc[1]'(q) &= (n-1) q^{n-2}.
\end{align*}
Consider the estimator for the revenue of the $(n-1)$-unit auction
from bids in the one-unit auction, i.e., $y = \kalloc[n-1]$ and
$\alloc = \kalloc[1]$, at the lower extreme quantile $q = \log
\log N / N$ and with number of samples $N \gg n$. We get $Z_k(q)
= (1-q) y'(q) / \alloc'(q) = (1-q)^{n-1}\,q^{2-n}
\approx q^{2-n}$ and $Z'_k(q) \approx (2-n)\,q^{1-n}$; thus
the magnitudes of both $Z_k(q)$ and $Z'_k(q)$ at quantile
$q = \log \log N / N$ are on the order of $[N/(\log \log
N)]^{n}$ which is upper and lower bounded by $N^{cn}$ for
appropriate absolute constants $c$. Recall that terms from extreme
quantiles below $\delta_N = O(\log \log N / N)$ are rounded down to zero
in the estimator; thus, this upper bound is tight for the subsequent
analysis.
\end{example}
The remainder of this section sketches the main ideas in deriving the
above logarithmic bound on the error. The full proof of
Theorem~\ref{thm:allpay-simple}, as well as the more detailed analysis
that gives Theorem~\ref{thm:allpay-general}, is given in
Appendix~\ref{s:proofs-3}.
Assume that the counterfactual auction $y$ is the highest-$k$-bids-win
auction for some $k$; denote the allocation rule of this auction by
$\kalloc$, and let $Z_k = Z_{\kalloc}$. We will prove
Theorem~\ref{thm:allpay-simple} for this special case. Then, by virtue
of the fact that $P_y$ is a weighted average of the constituent
$\murevk$'s, the theorem trivially extends to all rank-based auctions
$y$.
As in the statement of the theorem, let $\delta_N=\max(25\log\log
N,n)/N$, and let $\Lambda=[0,\delta_N]\cup[1-\delta_N, 1]$ denote the
set of extreme quantiles. Apply equation~\eqref{eq:P_y} and
equation~\eqref{eq:P_y-truncated} from
\autoref{l:counterfactual-revenue} to the true bid function and
truncated empirical bid function to write the counterfactual revenue
and the estimated revenue, respectively, as:
\begin{align}
\notag \murevk & =
\expect[q\not\in\Lambda]{-Z'_k(q)\,\bid(q)} +
\expect[q\in\Lambda]{Z_k(q)\,\bid'(q)} \\ \notag &\quad +
Z_k(1-\delta_N)\bid(1-\delta_N)-Z_k(\delta_N)\bid(\delta_N).\\ \notag
\emurevk & =
\expect[q\not\in\Lambda]{\smash{-Z'_k(q)\,\hat{\bid}(q)}} +
Z_k(1-\delta_N)\hat{\bid}_N.
\intertext{The mean absolute error is bounded by the
expected value of the absolute value of the difference in these two quantities:}
\label{eq:error-allpay}
|\emurevk-\murevk| \le &
\left|\expect[q\not\in\Lambda]{\smash{-Z'_k(q)(\hat{\bid}(q)-\bid(q))}}\right|
+ \left| \expect[q\in\Lambda]{Z_k(q)\,\bid'(q)}\right|\\
\notag & + \left| \smash{Z_k(1-\delta_N)\,(\bid(1-\delta_N) -\hat{\bid}_N)}\right|
+ \left| Z_k(\delta_N)\,\bid(\delta_N)\right|
\end{align}
There are now two steps to the analysis. The first step is an analysis
of the contribution to the error from moderate quantiles, i.e., the
first term in equation~\eqref{eq:error-allpay}. We will sketch this
step below. The second step is an analysis of the contribution to the
error from extreme quantiles, i.e., the remaining terms in
equation~\eqref{eq:error-allpay}. In Appendix~\ref{s:proofs-3} we
show that the error from these terms is dominated by the error in the
first term. As a summary of this deferred analysis, consider the
final term $Z_k(\delta_N)\, b(\delta_N)$: the denominator of
$Z_k(q)$ can be very small, but $\bid(q)$ in the numerator is
approximately proportional to it, so the two small factors cancel.
The other terms are similarly bounded.
The following straightforward analysis gives a bound on the error in
the estimator from moderate quantiles that is linear in $\sup_{q
\not\in\Lambda} Z_k(q)$. Specifically, the expected error is bounded as
\begin{align*}
\left|\expect[q\not\in\Lambda]{\smash{Z'_k(q) \,
(\hat{\bid}(q)-\bid(q))}}\right|
&\leq \expect[q\not\in\Lambda]{|Z'_k(q)|} \cdot
\sup\nolimits_{q \not\in \Lambda} |\hat{\bid}(q)-\bid(q)|.
\end{align*}
For the second term in this expression, Lemma~\ref{error bid function} provides a uniform
bound on the absolute error in bids
$|\hat{\bid}(q)-\bid(q)|$. For the first term, the
following lemma shows that $Z_k$ is single-peaked and, thus,
$\expect[q\not\in\Lambda]{|Z'_k(q)|} \le 2 \sup_{q\not\in\Lambda}
Z_k(q)$. The proof of this lemma, which is formally given in
Appendix~\ref{s:proofs-3}, follows from the fact that $\alloc$ is a
convex combination of multi-unit auctions and that the ratio of the
derivatives of the allocation rules of two multi-unit auctions is
single-peaked.
\begin{lemma}
\label{lem:Z-bound-1}
For any rank-based auction and $k$-highest-bids-win auction
with allocation rules $\alloc$ and $\kalloc$, respectively, the
function $Z_k(q)=(1-q)
\frac{\kalloc'(q)}{\alloc'(q)}$ achieves a single local
maximum for $q\in [0,1]$.
\end{lemma}
As described in Example~\ref{eg:extremal},
$\sup_{q\not\in\Lambda} Z_k(q)$ can be very large. In order
to obtain a better bound, we observe that the error in bids is large
precisely at quantiles where $Z_k$ is small and vice versa: $Z_k$
depends inversely on the slope of the allocation rule of the incumbent
auction, $\alloc'$, whereas, the error in bids is directly
proportional to the bid density $\bid'$, which in turn is proportional
to $\alloc'$. We utilize this observation as follows:
\begin{align}
\label{eq:error-split}
\left|\expect[q\not\in\Lambda]{Z'_k(q) \,
(\hat{\bid}(q)-\bid(q))}\right| & \le
\expect[q\not\in\Lambda]{\left|\frac{Z'_k(q)}{Z_k(q)}\right| \,
\left|Z_k(q)\,(\hat{\bid}(q)-\bid(q))\right|} \\
\notag
& \le \expect[q\not\in\Lambda]{\left|\frac{Z'_k(q)}{Z_k(q)}\right|}
\sup_{q}\left|Z_k(q)\,(\hat{\bid}(q)-\bid(q))\right|.
\end{align}
As the integral of $Z'_k(q) / Z_k(q)$ is $\log Z_k(q)$, this analysis
and the single-peaked-ness of $Z_k(q)$ gives an error bound that is
logarithmic instead of linear in $\sup_{q\not\in\Lambda} Z_k(q)$. The
following lemma, formally proved in Appendix~\ref{s:proofs-3},
summarizes the bound on the error from moderate quantiles.
\begin{lemma}
\label{first-term-bound-simple}
For $Z_k$ and $\Lambda$ defined as above, the first error term in
equation~\eqref{eq:error-allpay} of the estimator $\hat{P}_k$ is bounded by:
\begin{align*}
\expect[\hat{\bid}]{\left|\expect[q\not\in\Lambda]{Z'_k(q)\, (\hat{\bid}(q)-\bid(q))}\right|}
&\le \frac{8n\log N}{\sqrt{N}}\sup_{q \not\in \Lambda}\{\kalloc'(q)\}.
\end{align*}
\end{lemma}
This lemma combines with analyses of the contribution to the error of
extremal quantiles to give Theorem~\ref{thm:allpay-simple}. The
refined bound of Theorem~\ref{thm:allpay-general} comes from improved
factoring of the error term over that of equation~\eqref{eq:error-split}
in Lemma~\ref{first-term-bound-simple}.
\section{Applications to instrumented optimization}
\label{s:iron-rank}
In this section we consider the problem of the principal who would
like a mechanism that optimizes revenue for the current distribution
of agent values while simultaneously enabling the inference necessary
to reoptimize the mechanism in the future, should the distribution of
values change. Recall that the optimal auctions of the classical theory
pool agents with distinct values and are thus not well suited to
counterfactual inference. In Section~\ref{s:iron-rank-opt} we develop
a theory for optimizing revenue over the class of all rank-based
auctions that resembles Myerson's theory for optimal auction design.
Importantly, the optimal rank-based auction does not require knowledge
of the full distribution; instead, the multi-unit revenues
$(\murevk[1],\ldots,\murevk[n])$ are sufficient; moreover, the
previous developments of this paper enable the estimation of these
multi-unit revenues. Where Myerson's theory employs ironing by value
and value reserves, our approach analogously employs ironing by rank
and rank reserves. In Section~\ref{s:universal} we extend the
A/B-testing approach of Section~\ref{s:param-inf} to develop a
``universal B test mechanism'' that can be used to estimate all of the
multi-unit revenues $(\murevk[1],\ldots,\murevk[n])$ simultaneously.
Combined with the optimal rank-based mechanism A, the A/B-test mechanism
with this universal B test is simultaneously good for revenue and
counterfactual inference. In Section~\ref{sec:strict-opt} we take a
more principled approach to optimization subject to inference, and
solve for the revenue-optimal mechanism subject to the constraint that
all multi-unit revenues can be estimated. Finally, in
Section~\ref{s:rank-approx} we show that the revenue of an optimal
rank-based auction approximates the revenue of the optimal auction
of \citet{mye-81}.
We begin by reviewing position environments and rank-based
auctions. In a rank-based auction the allocation to an agent depends
solely on the ordinal rank of his bid among other agents' bids, and
not on the cardinal value of the bid. For a position environment, a
rank-based auction assigns agents (potentially randomly) to positions
based on their ranks. Consider a position environment given by
non-increasing weights $\wals = (\walk[1],\ldots,\walk[n])$. For
notational convenience, define
$\walk[n+1] = 0$. Define the cumulative position
weights $\cumwals = (\cumwalk[1],\ldots,\cumwalk[n])$ as $\cumwalk =
\sum_{j=1}^k \walk[j]$, and $\cumwalk[0]=0$. We can view the cumulative weights as defining
a piece-wise linear, monotone, concave function given by connecting
the point set $(0,\cumwalk[0]), \ldots, (n,\cumwalk[n])$.
Multi-unit highest-bids-win auctions form a basis for position
auctions. Consider the marginal position weights $\margwals =
(\margwalk[1],\ldots,\margwalk[n])$ defined by $\margwalk = \walk -
\walk[k+1]$. The allocation rule induced by the position auction with
weights $\wals$ is identical to the allocation rule induced by the
convex combination of multi-unit auctions where the $k$-unit auction
is run with probability $\margwalk$.
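As a minimal sketch (the helper name is ours), the mixture weights are the discrete differences of the position weights:

```python
def multiunit_mixture(w):
    """Marginal position weights w'_k = w_k - w_{k+1}; with w_1 normalized
    to 1 these are the probabilities of running each k-unit auction."""
    w = list(w) + [0.0]                       # convention: w_{n+1} = 0
    return [w[k] - w[k + 1] for k in range(len(w) - 1)]
```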
A randomized assignment of agents to positions based on their ranks
induces an expected weight to which agents of each rank are assigned,
e.g., $\iwalk$ for the $k$th ranked agent. These expected weights can
be interpreted as a position auction environment themselves with
weights $\iwals$. As for the original weights, we can define the
cumulative position weights $\cumiwals$ as $\cumiwalk = \sum_{j=1}^k
\iwalk[j]$. The following lemma, which can be found in \citet{DHH-13},
characterizes the position weights $\iwals$ that can be induced by any
rank-based auction in a position environment $\wals$.
\begin{lemma}[e.g., \citealp{DHH-13}]
\label{l:rank-based-feasibility}
There is a rank-based auction with induced position weights $\iwals$
for position environment with weights $\wals$ if and only if their
cumulative weights satisfy $\cumiwalk \leq \cumwalk$ for all $k$,
denoted $\cumiwals \leq \cumwals$.
\end{lemma}
Any feasible weights $\iwals$ can be constructed from $\wals$ by a
(random) sequence of the following two operations (cf. \citet{HLP-29},
and proof in Appendix~\ref{a:position-auction-construction}).
\begin{description}
\item[rank reserve] For a given rank $k$, all agents with ranks
between $k+1$ and $n$ are rejected. The resulting weights $\iwals$
are equal to $\wals$ except $\iwalk[k'] = 0$ for $k' > k$.
\item[iron by rank] Given ranks $k' < k''$, the ironing-by-rank
operation corresponds to, when agents are ranked, assigning the
agents ranked in an interval $\{k',\ldots,k''\}$ uniformly at
random to these same positions. The ironed position weights
$\iwals$ are equal to $\wals$ except the weights on the ironed
interval of positions are averaged. The cumulative ironed position
weights $\cumiwals$ are equal to $\cumwals$ (viewed as a concave
function) except that a straight line connects
$(k'-1,\cumiwalk[k'-1])$ to $(k'',\cumiwalk[k''])$. Notice that
concavity of $\cumwals$ (as a function) and this perspective of the
ironing procedure as replacing an interval with a line segment
connecting the endpoints of the interval implies that $\cumiwals \leq
\cumwals$ coordinate-wise, i.e., $\cumiwalk \leq \cumwalk$ for all
$k$.
\end{description}
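Both operations act directly on the weight vector. A minimal sketch with 1-based ranks (helper names are ours):

```python
def rank_reserve(w, k):
    """Keep the top-k ranks; agents ranked k+1..n are rejected."""
    return [wi if i < k else 0.0 for i, wi in enumerate(w)]

def iron_by_rank(w, lo, hi):
    """Average the position weights on ranks lo..hi (1-based, inclusive)."""
    avg = sum(w[lo - 1:hi]) / (hi - lo + 1)
    return w[:lo - 1] + [avg] * (hi - lo + 1) + w[hi:]
```

Both transformations only move weight downward or average it, so the cumulative weights of the result are coordinate-wise below those of the input, as Lemma~\ref{l:rank-based-feasibility} requires.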
\subsection{Optimal rank-based auctions}
\label{s:iron-rank-opt}
In this section we describe how to optimize for expected revenue over
the class of rank-based auctions. Recall that rank-based auctions
are linear combinations over $k$-unit auctions. The characterization
of Bayes-Nash equilibrium, cf.\@ equation~\eqref{eq:bne-rev}, shows
that revenue is a linear function of the allocation rule. Therefore,
the revenue of a position auction can be calculated as the convex
combination of the revenue $\murevk$ from the $k$-highest-bids-win
auction for $k\in \{0,\ldots, n\}$. Note that $\murevk[0] = \murevk[n] = 0$.
Given these multi-unit revenues, $\murevs =
(\murevk[0],\ldots,\murevk[n])$, the problem of designing the optimal
rank-based auction is well defined: given a position environment
with weights $\wals$, find the weights $\iwals$ for a rank-based
auction with cumulative weights $\cumiwals \leq \cumwals$ maximizing
the sum $\sum_{k} (\iwalk-\iwalk[k+1])P_k$. This optimization problem
is isomorphic to the theory of envy-free optimal pricing
developed by \citet{DHY-14}. We summarize this theory below; a
complete derivation can be found in Appendix~\ref{s:iron-opt-app}.
Define the {\em multi-unit revenue curve} as the piece-wise linear
function connecting the points $(0,\murevk[0]),\ldots,(n,\murevk[n])$.
This function may or may not be concave. Define the {\em ironed
multi-unit revenues} as $\imurevs =
(\imurevk[0],\ldots,\imurevk[n])$ according to the smallest concave
function that upper bounds the multi-unit revenue curve. Define the
multi-unit marginal revenues, $\mumargs =
(\mumargk[1],\ldots,\mumargk[n])$ and $\imumargs =
(\imumargk[1],\ldots,\imumargk[n])$, as the left slope of the
multi-unit and ironed multi-unit revenue curves, respectively. I.e.,
$\mumargk = \murevk - \murevk[k-1]$ and $\imumargk = \imurevk -
\imurevk[k-1]$. The proof of the following theorem is given in the
appendix.
\begin{theorem}
\label{thm:rank-based-opt}
Given a position environment with weights $\wals$, the revenue-optimal
rank-based auction is defined by position weights $\iwals$ that are
equal to $\wals$, except ironed on the same intervals as $\murevs$ is
ironed to obtain $\imurevs$, and set to $0$ at positions $k$ for which
$\imumargk$ is negative.
\end{theorem}
As is evident from this description of the optimal rank-based
auction, the only quantity that needs to be ascertained to run this
auction is the multi-unit revenue curve defined by $\murevs$.
Therefore, an econometric analysis for optimizing rank-based
auctions need not estimate the entire value distribution; estimation
of the multi-unit revenues is sufficient.
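Computationally, $\imurevs$ is the upper concave hull of the points $(k, \murevk)$. The sketch below is our own illustration (not code from the paper); it builds the hull in one pass and interpolates it back onto integer positions:

```python
def iron_revenue_curve(P):
    """Upper concave hull of the piece-wise linear curve through (k, P[k]).

    P = [P_0, ..., P_n] are the multi-unit revenues (P_0 = P_n = 0).
    Returns the ironed revenues, i.e. the hull evaluated at k = 0..n.
    """
    n = len(P) - 1
    hull = [0]                              # hull indices, starting at k = 0
    for k in range(1, n + 1):
        # pop while the last hull point lies on or below the chord to k
        while len(hull) >= 2 and (
            (P[k] - P[hull[-2]]) * (hull[-1] - hull[-2])
            >= (P[hull[-1]] - P[hull[-2]]) * (k - hull[-2])
        ):
            hull.pop()
        hull.append(k)
    ironed = list(P)
    for a, b in zip(hull, hull[1:]):        # interpolate between hull points
        for k in range(a + 1, b):
            ironed[k] = P[a] + (P[b] - P[a]) * (k - a) / (b - a)
    return ironed
```

Positions strictly between consecutive hull indices are exactly the ironed intervals of Theorem~\ref{thm:rank-based-opt}; an already-concave revenue curve is returned unchanged.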
\subsection{Universal B test}
\label{s:universal}
In Section~\ref{s:ab-revenue} we discussed how to estimate the revenue
of a single auction B from the bids of the A/B test mechanism
C. Corollary~\ref{cor1-new} shows that an A/B test is not necessary as
long as we have enough samples from the bid distribution: the revenue
of B can be estimated from any incumbent mechanism A directly. In
fact, we can estimate the revenue of {\em all} rank-based mechanisms
simultaneously from the bids of a single mechanism A. However, the
error in estimation depends suboptimally on the number of samples, as
$\log(N)/\sqrt{N}$ rather than $1/\sqrt{N}$. A
natural question is whether it is possible to estimate all rank-based
revenues simultaneously at an optimal error rate from bids of a single
incumbent auction. Precisely, we now consider the problem of identifying
a B test mechanism for which the revenue of any position auction D can
be estimated from the equilibrium bids in the A/B test mechanism C.
Since the revenue of D is given by the convex combination of the
multi-unit revenues $\murevk$, it suffices to estimate all of these
multi-unit revenues. What properties should the auction B have in
order to enable this estimation? (Equivalently, what properties
should C have?)
\begin{definition}
A {\em universal B test mechanism} satisfies the following: for
any rank-based auctions A and D, any $\epsilon>0$, and auction C defined
by $x_C = (1-\epsilon)x_A +\epsilon x_B$, the revenue $\murevk[D]$
can be estimated from $N$ equilibrium bids of C with the dependence of
the mean absolute error on $N$ and $\epsilon$ bounded by
$O(\log(1/\epsilon)/\sqrt{N})$.
\end{definition}
Since the revenue of D can be estimated from the revenue of all
multi-unit auctions, Corollary~\ref{cor2} implies that it suffices to
mix every multi-unit auction into C with some small probability. The
uniform-stair mechanism (Definition~\ref{d:uniform-stair} in
Section~\ref{s:simulations}), with position weights $\walk =
\frac{n-k}{n-1}$ for each $k$, gives a mechanism B with such a
mixture.
\begin{corollary}
\label{cor3}
The uniform-stair position auction is a universal B test mechanism
with mean absolute error bounded by
$80\, n\log (n/\epsilon)/\sqrt{N}$.
\end{corollary}
Next we observe that in fact we can get similar results by mixing in
just a few of the multi-unit auctions. In particular, in order to
estimate $\murevk$ accurately, it suffices to mix in a multi-unit
auction with no more than $k$ units, and another one with no less than
$k$ units. This gives us a more efficient universal B test for
simultaneously inferring all of the multi-unit revenues (see
Corollary~\ref{cor:universal}).
\begin{lemma}
\label{lem:univ}
The revenue of the highest-$k$-bids-win mechanism B can be
estimated from $N$ bids of a rank-based all-pay auction
C$\ =\ (1-2\epsilon)$A$\ +\ \epsilon$B$_1\ +\ \epsilon$B$_2$, where A is an
arbitrary rank-based auction, and B$_1$ and B$_2$ are the
highest-$k_1$-bids-win and highest-$k_2$-bids-win auctions,
respectively, with $k_1\le k\le k_2$. The mean absolute error of the
estimator is bounded by
\begin{align*}
\frac{80}{\sqrt{N}}\, (n+\log (1/\epsilon))\,\,\sup_{q}\{\kalloc'(q)\}.
\end{align*}
\end{lemma}
\begin{proof}
We begin by noting that for any $j$ and $k$ with $k\le j$,
$$
\frac{\alloc'_k(q)}{\alloc'_j(q)} = \frac{{n-2\choose
k-1}}{{{n-2}\choose {j-1}}} \left(\frac{q}{1-q}\right)^{j-k}.
$$
When $k\le j$ and $q\le 1/2$, this ratio is less than $2^n$.
Likewise, when
$k\ge j$ and $q\ge 1/2$, the ratio is less than $2^n$.
Therefore, for any $q$, and C$\ =\ (1-2\epsilon)$A$\ +\ \epsilon$B$_1\ +\ \epsilon$B$_2$
where B$_1$ and B$_2$ are the highest-$k_1$-bids-win and
highest-$k_2$-bids-win auctions respectively, with $k_1\le k\le k_2$,
we have
$$
\sup_q \frac{\alloc'_k(q)}{\alloc'_C(q)} \le \frac{2^n}{\epsilon}.
$$
Next we note that $\sup_q \alloc'_C(q)\le n$ and, therefore, $\sup_{q:
\kalloc'(q)\ge 1}\frac{\alloc_C'(q)}{\kalloc'(q)}\le n$.
Putting these quantities together with Theorem~\ref{thm:allpay-general}, we
get that the absolute error in estimating $\murevk$ from bids drawn from C is at
most
\begin{align*}
\Err{\murevk} \le & \frac{80}{\sqrt{N}}\,\,
\sup_{q}\{\kalloc'(q)\}\,\, \left(n+\log
1/\epsilon\right). \qedhere
\end{align*}
\end{proof}
\begin{corollary}
\label{cor:universal}
The rank-by-bid position auction with weights $w_1=1$, $w_k=1/2$ for
$1< k<n$, and $w_n=0$
is a universal B test mechanism
with mean absolute error bounded by $O(n(n+\log
(1/\epsilon))/\sqrt{N})$.
\end{corollary}
\subsection{Optimal rank-based auctions with strict monotonicity}
\label{sec:strict-opt}
Position auctions, by definition, have non-increasing position weights
$\wals$. The ironing in the iron-by-rank optimization of
Section~\ref{s:iron-rank-opt} converted the problem of optimizing
multi-unit marginal revenue subject to non-increasing position weight,
to a simpler problem of optimizing multi-unit marginal revenue without
any constraints. In this section, we describe the optimization of
rank-based auctions (i.e., ones for which position weights can be
shifted only downwards or discarded) subject to {\em strictly
decreasing} position weights. In particular, we can reinterpret the
decreasing position weights of the universal B test mechanism from
\autoref{s:universal} as such a strictness requirement. The optimal
mechanism with this strictness requirement will satisfy the same
inference guarantee proven for the A/B test while improving its
revenue.
As described by Lemma~\ref{l:rank-based-feasibility}, position weights
$\iwals$ are feasible as a rank-based auction in the position environment
$\wals$ if the cumulative position weights satisfy $\cumwalk \geq
\cumiwalk$ for all $k$. Suppose we would like to optimize $\iwals$
for position weight $\wals$ subject to the monotonicity constraint
that the difference in successive position weights is at least that of
some other position weights $\epswals =
(\epswalk[1],\ldots,\epswalk[n])$. Formally, $\margiwalk = \iwalk
- \iwalk[k+1] \geq \epswalk - \epswalk[k+1] = \margepswalk$ for all
$k$. For example, $\epswals$ could be $\epsilon$ times the position
weights of the universal B test mechanism of the preceding section.
We call an allocation rule satisfying these monotonicity constraints
an $\epswals$-strictly-monotone allocation rule. As non-trivial
ironing by rank always results in consecutive positions with the same
weight, i.e., $\margiwalk = 0$ for some $k$, the optimal rank-based
mechanism with strict monotonicity will require overlapping ironed intervals.
To our knowledge, performance optimization subject to a strict
monotonicity constraint has not previously been considered in the
literature. At a high level our approach is the following. We start
with $\wals$ which induces the cumulative position weights $\cumwals$
which constrain the resulting position weights $\iwals$ of any
feasible rank-based auction via its cumulative $\cumiwals$. We view
$\iwals$ as the combination of two position auctions. The first has
weakly monotone weights $\iyals = (\iyalk[1],\ldots,\iyalk[n])$; the
second has strictly monotone weights $\epswals =
(\epswalk[1],\ldots,\epswalk[n])$; and the combination has weights $\iwalk
= \iyalk + \epswalk$ for all $k$. The revenue of the combined position
auction is the sum of the revenues of the two component position
auctions. Since the second auction has fixed position weights, its
revenue is fixed. Since the first position auction is weakly monotone
and the second is strictly, the combined position auction is strictly
monotone and satisfies the constraint that
$\margiwalk \geq \margepswalk$ for all $k$.
This construction focuses attention on optimization of $\iyals$
subject to the induced constraint imposed by $\wals$ and after the
removal of the $\epswals$-strictly-monotone allocation rule. I.e.,
$\iwals$ must be feasible for $\wals$. The suggested feasibility
constraint for optimization of $\iyals$ is given by position weights
$\yals$ defined as $\yalk = \walk - \epswalk$. Notice that, in
this definition of $\yals$, a lesser amount is subtracted from
successive positions. Consequently, monotonicity of $\wals$ does
not imply monotonicity of $\yals$.
To obtain $\iyals$ from $\yals$ we may need to iron for two reasons,
(a) to make $\iyals$ monotone and (b) to make the multi-unit revenue
curve monotone. In fact, both of these ironings are good for revenue.
The ironing construction for monotonizing $\yals$ constructs the
concave hull of the cumulative position weights $\cumyals$. This
concave hull is weakly higher than the curve given by $\cumyals$ (i.e.,
connecting $(0,\cumyalk[0]), \ldots, (n,\cumyalk[n])$). Similarly the
ironed multi-unit revenue curve given by $\imurevs$ is the concave
hull of the multi-unit revenue curve given by $\murevs$. The correct
order in which to apply these ironing procedures is to first (a) iron
the position weights $\yals$ to make it monotone, and second (b) iron
the multi-unit revenue curve $\murevs$ to make it concave. This order
is important as the revenue of the position auction with weights
$\iyals$ is only given by the ironed revenue curve $\imurevs$ when
$\margiyals = 0$ on the ironed intervals of $\imurevs$.
\begin{theorem}
\label{thm:rank-based-opt-strict}
The optimal $\epswals$-strictly-monotone rank-based auction for position
weights $\wals$ has position weights $\iwals$ constructed by
\begin{enumerate}
\item defining $\yals$ by $\yalk = \walk - \epswalk$ for all $k$,
\item averaging position weights of $\yals$ on intervals where $\yals$
should be ironed to be monotone,
\item averaging the resulting position weights on intervals where
$\murevs$ should be ironed to be concave to get $\iyals$, and
\item setting $\iwals$ as $\iwalk = \iyalk + \epswalk$;
\end{enumerate}
and is feasible for $\wals$ if $\epswals$ is feasible for $\wals$.
\end{theorem}
\begin{proof}
The proof of this theorem follows directly from the construction and
its correctness.
\end{proof}
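The monotonizing steps of the construction can be sketched numerically. The fragment below (Python; function names are hypothetical) carries out steps 1, 2, and 4 of the theorem; step 3 is omitted since it requires the distribution-dependent multi-unit revenue curve. Averaging on an ironed interval corresponds to replacing a segment of the cumulative-weight curve by its chord, which is exactly what the concave hull computes.

```python
import numpy as np

def upper_concave_hull(c):
    """Concave majorant of the points (k, c[k]), evaluated at integer k."""
    n = len(c)
    hull_idx = [0]                    # indices on the upper hull (monotone-chain scan)
    for k in range(1, n):
        while len(hull_idx) >= 2:
            i, j = hull_idx[-2], hull_idx[-1]
            # drop j if it lies (weakly) below the chord from i to k
            if (c[j] - c[i]) * (k - j) <= (c[k] - c[j]) * (j - i):
                hull_idx.pop()
            else:
                break
        hull_idx.append(k)
    return np.interp(np.arange(n), hull_idx, c[hull_idx])

def iron_weights(w, eps):
    """Steps 1, 2, and 4 of the theorem's construction (illustrative sketch).

    w, eps: position weights w_1..w_n and the strict-monotonicity floor
    eps_1..eps_n, with eps assumed feasible for w.  Step 3 (ironing where
    the multi-unit revenue curve should be concave) is distribution
    dependent and omitted here.
    """
    y = w - eps                                   # step 1: subtract the floor
    cum = np.concatenate(([0.0], np.cumsum(y)))   # cumulative position weights
    y_ironed = np.diff(upper_concave_hull(cum))   # step 2: iron y to be monotone
    return y_ironed + eps                         # step 4: add the floor back
```

On any input, the output weights are monotone, preserve the total weight (the hull agrees with the cumulative curve at its endpoints), and satisfy the $\epswals$-strict-monotonicity margin.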
As described previously, the rank-based auction given by $\iwals$ in
position environment given by $\wals$ can be implemented by a sequence
of iron-by-rank and rank-reserve operations. Such a sequence of
operations can be found, e.g., via an approach of \citet{AFHHM-12}
or \citet{HLP-29}.
The following proposition shows that this optimal
$\epswals$-strictly-monotone mechanism inherits the inference
properties of the mechanism with position weights $\epswals$, in
particular, the A/B testing results of
Corollaries~\ref{cor1}, \ref{cor2}, \ref{cor3},
and~\ref{cor:universal}.
\begin{proposition}
\label{prop:optimization-with-inference-metatheorem}
For position weights $\epswals$ defined as $\epsilon$ times the
position weights of a B test mechanism, if position weights $\epswals$
are feasible for $\wals$ then the optimal $\epswals$-strictly-monotone
rank-based auction for position weights $\wals$ has the same inference
guarantee as the A/B test with $\epsilon$ probability of B.
\end{proposition}
\subsection{Approximation via rank-based auctions}
\label{s:rank-approx}
In this section we show that the revenue of optimal rank-based auction
approximates the optimal revenue (over all auctions) for position
environments. Instead of making this comparison directly we will
instead identify a simple non-optimal rank-based auction that
approximates the optimal auction. Of course the optimal rank-based
auction of Theorem~\ref{thm:rank-based-opt} has revenue at least that
of this simple rank-based auction, thus its revenue also satisfies the
same approximation bound.
Our approach is as follows. Just as arbitrary rank-based mechanisms
can be written as convex combinations over $k$-highest-bids-win
auctions, the optimal auction can be written as a convex combination
over optimal $k$-unit auctions. We begin by showing that the revenue
of optimal $k$-unit auctions can be approximated by multi-unit
highest-bids-win auctions when the agents' values are distributed
according to a regular distribution (Lemma~\ref{lem:approx-regular},
below). In the irregular case, on the other hand, rank-based auctions
cannot compete against arbitrary optimal auctions. For example, if the
agents' value distribution contains a very high value with probability
$o(1/n)$, then an optimal auction may exploit that high value by
setting a reserve price equal to that value; on the other hand, a
rank-based mechanism cannot distinguish very well between values
corresponding to quantiles above $1-1/n$. We show that rank-based
mechanisms can approximate the revenue of any mechanism that does not
iron or reserve price within the quantile interval $[1-1/n,1]$ (but
may arbitrarily optimize over the remaining
quantiles). Theorem~\ref{thm:rank-based-approx} presents the precise
statement.
\begin{lemma}
\label{lem:approx-regular}
For regular $k$-unit $n$-agent environments, there exists a $k' \leq
k$ such that the highest-bid-wins auction that restricts supply to
$k'$ units (i.e., a rank reserve) obtains at least half the revenue of
the optimal auction.
\end{lemma}
\begin{proof}
This lemma follows easily from a result of \citet{BK-96} that states
that for agents with values drawn i.i.d.\@ from a regular distribution
the revenue of the $k'$-unit $n$-agent highest-bid-wins auction is at
least the revenue of the $k'$-unit $(n-k')$-agent optimal auction. To
apply this theorem to our setting, let us use $\opt{k}{n}$ to denote
the revenue of an optimal $k$-unit $n$-agent auction, and recall that
$nP_k$ is the revenue of a $k$-unit $n$-agent highest-bids-win
auction.
When $k\le n/2$, we pick $k'=k$. Then,
$$nP_k\ge \opt{k}{n-k} \ge \frac{(n-k)}{n} \opt{k}{n}\ge \frac
12\opt{k}{n},$$ and we obtain the lemma. Here the first inequality
follows from \citeauthor{BK-96}'s theorem and the third from the
assumption that $k\le n/2$. The second inequality follows by
lower bounding $\opt{k}{n-k}$ by the following auction which has
revenue exactly $\frac{(n-k)}{n} \opt{k}{n}$: simulate the optimal
$k$-unit $n$-agent auction on the $n-k$ real agents and $k$ fake agents with
values drawn independently from the distribution. Winners of the
simulation that are real agents contribute to revenue and the
probability that an agent is real is $(n-k)/n$.
When $k>n/2$, we pick $k'=n/2$. As before we have:
\begin{align*}
nP_{n/2}\ge \opt{n/2}{n/2} &= \frac 12\opt{n}{n}\ge \frac 12\opt{k}{n}.\qedhere
\end{align*}
\end{proof}
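For intuition, the $k\le n/2$ case of the lemma can be checked by simulation for the (regular) uniform distribution, for which the optimal $k$-unit auction serves the top $k$ values above the reserve $1/2$ and its expected revenue equals the expected virtual surplus with $\phi(v)=2v-1$. The Python sketch below is illustrative only and not part of the proof.

```python
import numpy as np

# Monte Carlo check of the k <= n/2 case for values ~ U[0,1] (regular):
# the k-highest-bids-win auction earns at least half the optimal revenue.
rng = np.random.default_rng(0)
n, k, trials = 8, 3, 200_000
vals = np.sort(rng.random((trials, n)), axis=1)    # ascending order

# k-highest-bids-win: each of the k winners pays the (k+1)-st highest value
vickrey_rev = k * vals[:, -(k + 1)].mean()

# optimal k-unit auction for U[0,1]: top k values above reserve 1/2 win;
# expected revenue = expected virtual surplus, phi(v) = 2v - 1
top = vals[:, -k:]
opt_rev = np.where(top >= 0.5, 2 * top - 1, 0.0).sum(axis=1).mean()
```

For this instance the observed ratio is far better than the guaranteed $1/2$.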
\begin{lemma}
\label{lem:approx-irregular}
For (possibly irregular) $n$-agent environments with revenue
curve $R(\cdot)$ and quantile $q\le 1-1/n$, there exists an integer
$k\le (1-q)n$ such that the revenue of the $k$-highest-bids-win
auction is at least a quarter of $n R(q)$, the revenue from posting
a price of $\val(q)$.
\end{lemma}
\begin{proof}
First we get a lower bound on $\murevk$ for any $k$. For any value
$z$, the total expected revenue of the $k$-highest-bids-win auction is
at least $zk$ times the probability that at least $k+1$ agents have
value at least $z$. The median of a binomial random variable
corresponding to $n$ Bernoulli trials with success probability
$(k+1)/n$ is $k+1$. Thus, the probability that this binomial is at
least $k+1$ is at least $1/2$. Combining these observations by
choosing $z = \val(1-(k+1)/n)$ we have,
\begin{align*}
n \, \murevk &\geq \val(1-(k+1)/n)\, k / 2.\\
\intertext{Choosing $k = \lfloor (1-q)n\rfloor -1$, for which $\val(1-(k+1)/n) \geq \val(q)$, the bound simplifies to,}
n \, \murevk &\geq \val(q)\, k / 2.
\end{align*}
The ratio of $P_k$ and $R(q) = (1-q)\,\val(q)$ is therefore at least
\begin{align*}
\frac{k}{2(1-q)n} &> \frac{k}{2(k+2)}.
\end{align*}
For $q\le 1-3/n$ (or, $k\ge 2$) this ratio is at least $1/4$.
For $q\in (1-3/n,1-1/n]$, we pick $k=1$. Then, $\murevk[1]$ is at least $1/n$
times $\val(q)$ times the probability that at least two agents have a
value greater than or equal to $\val(q)$. We can verify for $n\ge 2$ that
$$\murevk[1] \ge \frac {\val(q)}n \left( 1-q^n-n(1-q)q^{n-1} \right)\ge \frac 14 (1-q)\,\val(q).$$
\end{proof}
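The binomial-median fact used in the proof, that $\Pr[\mathrm{Binomial}(n,(k+1)/n) \geq k+1] \geq 1/2$ whenever the mean $k+1$ is an integer, is easy to verify exactly for small $n$; the Python sketch below does so by direct summation.

```python
from math import comb

def tail_at_least(n, p, m):
    """Exact P[Binomial(n, p) >= m]."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(m, n + 1))

# The mean k+1 is an integer, hence a median of Binomial(n, (k+1)/n),
# so the upper tail at k+1 has probability at least 1/2.
for n in range(2, 40):
    for k in range(n - 1):
        assert tail_at_least(n, (k + 1) / n, k + 1) >= 0.5
```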
\begin{theorem}
\label{thm:rank-based-approx}
For regular value distributions and position environments, the optimal
rank-based auction obtains at least half the revenue of the optimal
auction. For any value distribution (possibly irregular) and position
environments, the optimal rank-based auction obtains at least a
quarter of the revenue of the optimal auction that does not iron or
set a reserve price for the highest $1/n$ measure of values, i.e.,
$q \in [1-1/n,1]$.
\end{theorem}
\begin{proof}
In the regular setting, the theorem follows from
Lemma~\ref{lem:approx-regular} by noting that the optimal auction
(that irons by value and uses a value reserve) in a position
environment is a convex combination of optimal $k$-unit auctions:
since the revenue of each of the latter can be approximated by that of
a $k'$-unit highest-bids-win auction with $k'\le k$, the revenue of
the convex combination can be approximated by that of the same convex
combination over $k'$-unit highest-bids-win auctions; the resulting
convex combination over $k'$-unit auctions satisfies the same position
constraint as the optimal auction.
In the irregular setting, once again, any auction in a position
environment is a convex combination of optimal $k$-unit auctions. The
expected revenue of any $k$-unit auction is bounded from above by the
expected revenue of the optimal auction that sells at most $k$ items
in expectation. The per-agent revenue of such an auction is bounded by
$\bar{R}(1-k/n)$, the revenue of the optimal allocation rule with ex
ante probability of sale $k/n$. Here $\bar{R}(\cdot)$ is the ironed
revenue curve (that does not iron on quantiles in
$[1-1/n,1]$). $\bar{R}(1-k/n)$ is a convex combination of at most two
points on the revenue curve, $R(a)$ and $R(b)$, with $a\le 1-k/n\le b
< 1-1/n$. Now, we can use Lemma~\ref{lem:approx-irregular} to obtain
an integer $k_a < n(1-a)$ such that $P_{k_a}$ is at least a quarter of
$R(a)$, likewise $k_b$ for $b$. Taking the appropriate convex
combination of these multi-unit auctions gives us a $4$-approximation
to the optimal $k$-unit auction (that does not iron over the
quantile interval $[1-1/n,1]$). Finally, the convex combination of
the multi-unit auctions with $k_a$ and $k_b$ corresponds to a position
auction that is feasible for a $k$-unit auction (with respect to
serving the top $k$ positions with probability one, service
probability is only shifted to lower positions).
\end{proof}
\section{Introduction}
\label{s:intro}
This paper develops data-driven methods that enable a principal to
adjust the parameters of an auction so as to optimize its performance,
i.e., for {\em mechanism redesign}. These methods are especially
relevant to the design of electronic markets such as online
advertising, hotel booking platforms, online auctions,
etc.\footnote{We have had extensive discussions over the last decade
with R\&D teams at companies in this space, especially Microsoft,
which brought us to the model and questions studied in this paper.}
For a paradigmatic family of auctions, we derive counterfactual
estimators for the revenue and welfare of one auction in the family
from equilibrium bids in another. One application for these
estimators is a framework for A/B testing of auctions, a.k.a.,
randomized controlled trials, wherein an auctioneer can compare the
revenue of auctions A and B. Another application is in instrumented
optimization where we identify sufficient properties of an auction so
that the revenue of any counterfactual auction can be estimated from
equilibrium bids of the former; we then optimize revenue over auctions
with these properties.
Our main technical contribution is a method for
counterfactual revenue estimation: given two auctions we define an
estimator for the equilibrium revenue of one from equilibrium bids of
the other. Our estimator has a number of appealing properties that
contrast with the standard econometric approach to inference in
auctions. In the standard approach, first the value distribution is
inferred from bids of the first auction using equilibrium analysis,
and then the estimated value distribution is used to calculate the
expected equilibrium revenue of the second auction. To infer the
value distribution, the standard approach employs estimates of the
derivative of the bid function via an estimator that typically must be
tuned to trade-off bias and variance using assumptions on the bid
distribution. In contrast, our method estimates revenue directly from
the bids and our estimator requires no distribution dependent tuning.
Our method is statistically efficient with estimation error
proportional to one over the square root of the number of observed
bids.
Our work applies to first-price and all-pay position auctions, a model
popularized by studies of auctions for advertising on Internet search
engines (cf.\@ \citealp{var-06}, and
\citealp{EOS-07}).\footnote{First-price and all-pay position auctions
generalize classical single-item and multi-unit auction models and
are an important form of auction for theoretical study, see e.g.,
\citet{CH-13} and \citet{HT-16}. Unfortunately, our methods cannot
be directly applied to position auctions with the so-called
``generalized second price'' payment rule of Google's AdWords
platform.} A position auction is defined by a decreasing sequence
of weights which correspond to allocation probabilities; bidders are
assigned to weights assortatively by bid, and pay their bid if
allocated (first-price) or always (all-pay). The configurable
parameters in this family of auctions are the weights of the
positions. Note that classical revenue optimization in auctions, e.g.,
by reserve prices and ironing \citep{mye-81}, is at odds with
classical structural inference. Reserve pricing and ironing pool
bidders with distinct values and, thereafter, no procedure for
structural inference can distinguish them. The position auctions
(with neither ironing nor reserve prices) of our study do not exhibit
this stark behavior.
Given two position auctions B and C, each defined by positions
weights, and $N$ samples from the Bayes-Nash equilibrium bid
distribution from C, our estimator for the Bayes-Nash equilibrium
revenue of B is a {\em weighted order statistic}. We apply a formula
to the position weights in B and C to get a weight for each order
statistic of the $N$ bids, and then the estimator is the weighted sum
of the order statistics. The error bounds for this estimator are
proportional to $\sqrt {1/N}$ and a term derived from the position
weights of B and C.
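Structurally, the estimator is simple to state in code. In the Python sketch below, `weight_fn` is a hypothetical placeholder for the paper's formula mapping the position weights of B and C to a coefficient for each order statistic; only the weighted-sum shape of the estimator is depicted.

```python
import numpy as np

def weighted_order_statistic_estimate(bids, weight_fn):
    """Weighted sum of the order statistics of the N observed bids.

    weight_fn(i, N) stands in for the formula derived from the position
    weights of B and C (a placeholder here, not the paper's definition).
    """
    b = np.sort(np.asarray(bids, dtype=float))    # b[0] <= ... <= b[N-1]
    N = len(b)
    coeffs = np.array([weight_fn(i, N) for i in range(N)])
    return coeffs @ b
```

For example, with `weight_fn = lambda i, N: 1 / N` the estimator degenerates to the average observed bid.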
The first application of our revenue estimator is to A/B testing of
auctions (see Section~\ref{s:ab-testing}). A/B testing, otherwise
known as randomized controlled trials, is an industry standard method
for evaluating, tuning, and optimizing Internet services and
e-commerce systems \citep[e.g.,][]{KLSH-09}. This form of online experimentation is happening
all the time and the participants of the experiments are almost always
unaware that the experiment is being conducted. Our framework for A/B
testing of auctions is motivated -- as we describe subsequently -- by
auction environments where {\em ideal A/B testing} is
impossible.\footnote{In ideal A/B testing, the bids in A and B are
respectively in equilibrium for A and B.} In our framework bidders
bid first and then the experimenter randomly selects and runs control
auction A or treatment auction B on the bids. Importantly, the
bidders are unaware of whether they are in the control A or treatment
B, but have instead bid according to the Bayes-Nash equilibrium of
auction C, the convex combination of auctions A and B. Our task of
A/B testing of auctions is then to calculate and compare estimates of
the revenues of A and B given bids in C. Note that a convex combination
of position auctions is a position auction with position weights given
by the convex combination. Suppose the A/B test auction C runs the
control auction $A$ with probability $1-\epsilon$ and the treatment
auction $B$ with probability $\epsilon$. Our main result for A/B
testing is that the revenue from B can be estimated from bids in C
with error that depends on $\epsilon$ as $\log (1/\epsilon)$. This
error bound improves exponentially over the $\sqrt {1/\epsilon}$
dependence on $\epsilon$ that would be obtained by ideal A/B testing.
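Since a convex combination of position auctions is again a position auction, the A/B test auction C is formed directly from the position weights. A minimal Python sketch (variable names hypothetical):

```python
import numpy as np

def ab_test_auction(w_A, w_B, eps):
    """Position weights of the A/B test auction C = (1-eps) A + eps B.

    Running A with probability 1-eps and B with probability eps on the
    same ranked bids is itself a rank-based auction whose position
    weights are the convex combination of those of A and B.
    """
    w_A, w_B = np.asarray(w_A, dtype=float), np.asarray(w_B, dtype=float)
    return (1 - eps) * w_A + eps * w_B

# example with n = 4: A is the one-unit auction, B the uniform-stair auction
w_A = np.array([1.0, 0.0, 0.0, 0.0])
w_B = np.array([1.0, 2 / 3, 1 / 3, 0.0])   # weights (n-k)/(n-1)
w_C = ab_test_auction(w_A, w_B, eps=0.1)
```

The resulting weights are again decreasing, so C is a well-formed position auction.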
The second application of our revenue estimator is to instrumented
optimization (see Section~\ref{s:iron-rank}). The error in our revenue
estimate for auction B from the bids in auction C depends on the
allocation rule for auction C. (It depends, in particular, on a
relationship between the position weights in B and C; for example, in
the A/B testing application, above, the position weights are related
by $\epsilon$.) Our first result shows that there is universal
treatment B such that in the A/B test mechanism C, the revenue of any
other position auction can be estimated. A solution, then, to the
instrumented optimization problem is to run the A/B test mechanism C
that corresponds to the revenue optimal position auction A and this
universal treatment B. Our second result incorporates a bound on the
desired rate of estimation into the revenue optimization problem and
derives the revenue optimal auction subject to good revenue inference.
Our analysis gives a tradeoff between revenue bounds (relative to the
optimal position auction) and the desired rate of inference. Finally,
we show that the revenue optimal position auction (without reserve
prices or ironing) approximates the revenue optimal auction (with
reserve prices and ironing); thus, there is little loss in revenue
from restricting to the family of position auctions for which our
inference methods are applicable.
Our bounds on the error of our estimator are expressed in terms of
the number of samples (from the bid distribution, subsequently denoted
by $N$) and the number of bidders (in each auction, subsequently
denoted by $n$). A straightforward econometric analysis might treat
the number of bidders in each auction as a constant; consequently, the
bounds from such an analysis would not preclude the possibility that
there is a very large error until the number of samples is
exponentially larger than this number of bidders. Such an error bound
would have no practical significance as, for auctions with ten or more
bidders, the number of samples required for convergence to the limit
behavior could be more than the number of people on the planet.
In contrast, our bounds, though their proofs are more complex, show
that the dependence on the number of bidders is modest.
While much of this paper focuses on estimating and optimizing revenue,
in Section~\ref{s:welfare} we extend the method to the estimation of
social welfare.
\subsection{Motivating Example: Auctions for Internet Search Advertising.}
\label{s:example-search-advertising}
Our work is motivated by the auctions that sell advertisements on
Internet search engines (see historical discussion by
\citealp{FP-06}). The first-price position auction we study in this
paper was introduced in 1998 by the Internet listing company GoTo.
This auction was adapted by Google in 2002 for their AdWords platform,
modified to have a second-price-like payment rule, and is known as the
{\em generalized second-price auction}. Early theoretical studies of
equilibrium in the generalized second-price auction were conducted by
\citet{var-06} and \citet{EOS-07}; unlike the second-price auction for
which it is named, the generalized second-price auction does not admit
a truthtelling equilibrium.
Internet search advertising works as follows. A user looking for web
pages on a given topic specifies {\em keywords} on which to search. The
search engine then returns a listing of web pages that relate to these
keywords. Alongside these search results, the search engine displays
sponsored results. These results are conventionally explicitly
labeled as sponsored and appear in the {\em mainline}, i.e., above the
search results, or in the {\em sidebar}, i.e., to the right of the
search results. The mainline typically contains up to four ads and
the sidebar contains up to seven ads. The order of the ads is of
importance as the Internet user is more likely to read and click on
ads in higher positions on the page. In the classic model of
\citet{var-06} and \citet{EOS-07} the user's click behavior is
exogenously given by weights associated with the
positions,\footnote{Endogenous click models have also been considered,
e.g., \citet{AE-11}, but are less prevalent in the literature.} and
the weights are decreasing in position. An advertiser only pays for
the ad if the user clicks on it. Thus in the classic first-price
position auction, advertisers are assigned to positions in order of
their bids, and the advertisers on whose ads the user clicks each pay
their bids.
As described above, the ads in the mainline have higher click rates
than those in the sidebar. The mainline, however, is not required
to be filled to capacity (a maximum of four ads). In the first-price
position auction described above, the choice of the number of ads to
show in the mainline affects the revenue of the auction and, in the
standard auction-theoretic model of Bayes-Nash equilibrium, this
choice is ambiguous with respect to revenue ranking. For some
distributions of advertiser values, showing more ads in the mainline
gives more revenue, while for other distributions fewer ads gives more
revenue.
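The ambiguity of the revenue ranking is easy to reproduce in a toy simulation. Below (Python; an illustrative sketch, not from the paper) we compare selling one versus two slots among $n=3$ bidders under the revenue-equivalent truthful payment rule, in which each of the $k$ winners pays the $(k+1)$-st highest value: a near-constant value distribution favors selling two slots, while a longer-tailed exponential distribution favors selling one.

```python
import numpy as np

rng = np.random.default_rng(2)
n, T = 3, 300_000

def rev(vals, k):
    """Expected revenue of the k-highest-bids-win auction where each of
    the k winners pays the (k+1)-st highest value (truthful rule)."""
    v = np.sort(vals, axis=1)
    return k * v[:, -(k + 1)].mean()

near_constant = 0.9 + 0.1 * rng.random((T, n))  # values concentrated near 1
long_tailed = rng.exponential(size=(T, n))      # exponential values
```

For the near-constant values, `rev(near_constant, 2)` $\approx 1.85$ exceeds `rev(near_constant, 1)` $\approx 0.95$; for exponential values the ranking flips ($\approx 0.67$ versus $\approx 0.83$).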
The keywords of the user enable the advertisers to target users with
distinct interests. For example, hotels in Hawaii may wish to show
ads to a user searching for ``best beaches,'' while a computer
hardware company would prefer users searching for ``laptop reviews.''
Thus, search advertising is in fact a collection of many partially
overlapping markets, with some high-volume high-demand keywords and a
long tail of niche keywords. The conditions of each of these markets
are distinct and thus, as per the discussion of the preceding
paragraph, the number of ads to show in the mainline depends on the
keywords of the search.
One empirical method for evaluating two alternatives, e.g., showing
one or two mainline ads, is A/B testing. In the ideal setting of A/B
testing, the auctions for a given keyword would be randomly divided
into the A and B groups. In part A the advertisers would bid in
Bayes-Nash equilibrium for A and in part B they would bid in
equilibrium for B. Unfortunately, because we need to test both A and
B in each market, ideal A/B testing would require soliciting distinct
bids for each variant of the auction. This approach is impractical,
both from an engineering perspective and from a public relations
perspective. In practice, A/B tests are run on these ad platforms all
the time and without informing the advertisers. Of course,
advertisers can observe any overall change in the mechanism and adapt
their bids accordingly, i.e., they can be assumed to be in
equilibrium. Our approach of A/B testing, where bids are in
equilibrium for auction C, the convex combination of A and B, is
consistent with the industry standard practice for Internet search
advertising.
Our A/B testing framework is motivated specifically by the goal of
optimizing an auction to local characteristics of the market in which
the auction is run. It is important to distinguish this goal from
that of another framework for A/B testing which is commonly used to
evaluate global properties of auctions across a collection of disjoint
markets. This framework randomly partitions the individual markets
into a control group (where auction A is run) and a treatment group
(with auction B). From such an A/B test we can evaluate whether it is
better for all markets to run A or for all to run B. It cannot be
used, however, for our motivating application of determining the
number of mainline ads to show, where the optimal number naturally
varies across distinct markets. The work of \citet{OS-11} on reserve pricing
in Yahoo!'s ad auction demonstrates how such a global A/B test can be
valuable. They first used a parametric estimator for the value
distribution in each market to determine a good reserve price for that
market. Then they did a global A/B test to determine whether the
auction with their calculated reserve prices (the B mechanism) has
higher revenue on average than the auction with the original reserve
prices (the A mechanism). Our methods relate to and can replace the
first step of their analysis.
\input{related-work}
\section{Simulation evidence}
\label{s:simulations}
This section presents evidence from simulations that support,
extend, and improve on our theoretical results. Of interest are
\begin{itemize}
\item the dependence of the error bound on the number of samples $N$
and the number of bidders in each auction $n$,
\item the dependence of the error bound on the similarity between the
incumbent and counterfactual auctions, and
\item the role of truncation, i.e., the dropping of the extremal
quantiles of the empirical bid distribution in the estimator.
\end{itemize}
While we have not given theoretical bounds without truncation, our
simulations show that the error of the estimator without truncation is
small when the counterfactual and incumbent mechanism are similar.
Our first simulation study looks at the performance of the estimator
without truncation when the number of samples $N$ is relatively small
and the incumbent and counterfactual auction are similar. The goal of
this study is to empirically evaluate the dependence of the
non-truncated estimator's error on $n$, $N$, and the auctions'
similarity (to be defined as $\epsilon$) and compare this error to the
theoretical bounds given for the estimator with truncation.
In this simulation setup we infer the revenue of a position auction B
using bid data from the mixed auction
C$\ =\ (1-\epsilon)\,$A$\ +\ \epsilon\,$B, where A is another position
auction. This scenario comes from the application to A/B testing
which we develop subsequently in Section~\ref{s:ab-testing}. Recall
that $x_A$ denotes the allocation rule of A, $x_B$ the allocation rule
of B, and $x_C(q) = (1-\epsilon)\,x_A(q) + \epsilon\,x_B(q)$ is the
allocation rule of C. For this experiment, we consider the following
three designs.
\begin{itemize}
\item {\bf Design 1:} Auction A is the one-unit auction, $x_A(q) =
q^{n-1}$; and auction B is the uniform-stair position auction
(defined below), $x_B(q)=q$.
\item {\bf Design 2:} Auction A is the uniform-stair position auction, $x_A(q) = q$; and auction B is the one-unit auction, $x_B(q)=q^{n-1}$.
\item {\bf Design 3:} Auction A is the $(n-1)$-unit auction, $x_A(q) = 1-(1-q)^{n-1}$; and auction B is the one-unit auction, $x_B(q)=q^{n-1}$.
\end{itemize}
\begin{definition}
\label{d:uniform-stair}
The {\em uniform-stair position auction} is given by position weights
$\wals$ with $\walk = \frac{n-k}{n-1}$ for each $k\in [1, n]$ and has allocation
rule $\alloc(q) = q$.
\end{definition}
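The stated allocation rule can be verified by simulation: an agent at quantile $q$ is outranked by $B \sim \mathrm{Binomial}(n-1, 1-q)$ opponents and therefore receives weight $w_{1+B}$, whose expectation is $q$ under the uniform-stair weights. A quick Python check (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 8, 400_000
w = (n - np.arange(1, n + 1)) / (n - 1)   # uniform-stair weights (n-k)/(n-1)

def empirical_allocation(q):
    """Average weight received by an agent at quantile q against n-1
    i.i.d. uniform opponents; rank = 1 + number of higher opponents."""
    ranks = 1 + rng.binomial(n - 1, 1 - q, size=trials)
    return w[ranks - 1].mean()
```

Since $\mathbf{E}[(n-1-B)/(n-1)] = q$, the empirical allocation matches $q$ up to Monte Carlo noise.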
Our second simulation study evaluates the role of truncation by
comparing the error of the estimator with and without truncation when
the incumbent and counterfactual mechanism are not similar.
In this simulation setup, the incumbent mechanism is the $(n-1)$-unit
auction with $n$ players (auction C) while the counterfactual
mechanism is the one-unit auction (auction B). The object of interest
is the estimated revenue of auction B from the bids in auction C.
Observe that Design~4, below, is equivalent to Design~3 with $\epsilon$
set to zero.
\begin{itemize}
\item {\bf Design 4:} Auction B is the one-unit auction, $x_B(q) =
q^{n-1}$; and auction C is the $(n-1)$-unit auction, $x_C(q)=1-(1-q)^{n-1}$.
\end{itemize}
Notice that the auctions of Design~4 are quite different.
Specifically, auction C's revenue is driven by the bidders at the
bottom of the support of the distribution of values while the revenue
of auction B is determined by the bidders at the top of the support.
This dissimilarity between the auctions results in the truncated terms
in the estimator being high-variance. Thus, truncation serves to
lower the error of the estimator.
For all designs, the values of the bidders are drawn from the beta
distribution with parameters $\alpha=\beta=2$. This distribution of
values is supported on $[0,1]$; it is unimodal with the mode and the
mean at $1/2$, and it is symmetric about the mean.
\subsection*{Methodology}
We perform simulations to calculate the mean absolute deviation of our
estimator $\hat{P}_B$ for the revenue of auction B from the auction's
expected revenue $P_B$. The allocation rules $\alloc_B$ and
$\alloc_C$, their derivatives $\alloc'_B$ and $\alloc'_C$, and the
revenue curve $R$ are calculated analytically. The expected
revenue $P_B$ is calculated from the revenue curve $R$ and
$\alloc'_B$ by equation~\eqref{eq:bne-rev} via numerical integration
(i.e., by averaging the values of $R(q)\,\alloc'_B(q)$ on
a grid). The equilibrium bids in auction C for values on a uniform
grid are calculated from equation~\eqref{eq:ap-inf} via numerical
integration on a grid. Each simulation draws $N$ bids from this set
of bids with replacement, the estimated revenue $\hat{P}_B$ is
calculated from Definition~\ref{d:estimator}, and the mean absolute
deviation is calculated by averaging $|P_B - \hat{P}_B|$ over 8000
Monte Carlo simulations.
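As a concrete instance of this pipeline, the expected revenue $P_B$ for Design 1 (where $\alloc'_B \equiv 1$) can be computed by the described grid average. The Python sketch below inverts the Beta(2,2) cdf $F(v) = 3v^2 - 2v^3$ by bisection and averages the revenue curve $R(q) = (1-q)\,\val(q)$ on a midpoint grid.

```python
import numpy as np

def quantile(q):
    """val(q) = F^{-1}(q) for Beta(2,2), F(v) = 3v^2 - 2v^3, by bisection."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if 3 * mid**2 - 2 * mid**3 < q:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

grid = (np.arange(10_000) + 0.5) / 10_000                 # midpoint grid on (0, 1)
R = (1 - grid) * np.array([quantile(q) for q in grid])    # revenue curve R(q)
P_B = R.mean()    # integral of R(q) * x'_B(q) dq with x'_B identically 1
```

For this distribution the integral has the closed form $\int_0^1 v\,(1-F(v))\,f(v)\,dv = 13/70 \approx 0.186$, which the grid average reproduces.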
\subsection*{Results and observations}
\autoref{cor:allpay-y} provides a bound for the mean absolute
deviation of the estimate of the revenue of auction B. We can
evaluate this bound individually for each design to obtain the bound
of $\frac{80}{\sqrt N}\, n\,\log \frac{n}{\epsilon}$ for Designs 1-3.
We now study the performance of the estimator without truncation
for Designs 1-3. The empirical errors are
compared to the theoretical bounds for the truncated estimator. The
justification for this study is as follows. First, when the auctions
are similar the truncated terms are not big, so truncating does not
help lower error. Thus, we expect the simulations to show similar
error as our theoretical bounds for the truncated estimator. Second,
truncation is only valid when the number of samples is very large
compared to the number of bidders in each auction (typically, we need
$N \geq 25\,n^2$); thus the truncated estimator cannot be studied in
the small $N$ regime.
In Figure~\ref{table:MC} we report our empirically observed mean
absolute error in revenue for each of the three designs; the auction
mixture is set as $\epsilon = .001$ and parameters $n$ and $N$ are
varied. In order to discern the dependence of the error on $N$ and
$n$, the values in Figure~\ref{table:MC} are normalized by the factor
$\sqrt{N}$. By replicating the Monte Carlo sampling we ensured that
the Monte Carlo sample size leads to a relative error of at most
6\%.
\begin{figure}
\begin{center}
\begin{tabular}{|c|ccccc|}
\hline
\multicolumn{6}{|c|}{{\bf Design 1:} $x_A(q)=x^{(1:n)}(q)$ and $x_B(q)=q$.}\\
\hline
$n=$&\multicolumn{5}{|c|}{$N=$}\\
\cline{2-6}
&$10^1$&$10^2$&$10^3$&$10^4$&$10^5$\\
\cline{2-6}
$2^1$ & 0.1215 & 0.1150 & 0.1169 & 0.1177 & 0.1196\\
$2^2$ & 0.5631 & 0.4164 & 0.3830 & 0.3823 & 0.3961\\
$2^3$ & 1.1846 & 0.5820 & 0.5147 & 0.5350 & 0.5031\\
$2^4$ & 1.4014 & 0.4977 & 0.4374 & 0.4630 & 0.4285\\
$2^5$ & 1.4764 & 0.3908 & 0.3059 & 0.3082 & 0.2854\\
$2^6$ & 1.3843 & 0.2811 & 0.1962 & 0.2019 & 0.1918\\
$2^7$ & 2.5895 & 0.2691 & 0.1598 & 0.1503 & 0.1508\\
$2^8$ & 1.3640 & 0.2839 & 0.1374 & 0.1302 & 0.1335\\
$2^9$ & 0.5693 & 0.4647 & 0.1309 & 0.1153 & 0.1175\\
\hline\hline
\multicolumn{6}{|c|}{{\bf Design 2:} $x_A(q)=q$ and $x_B(q)=x^{(1:n)}(q)$.}\\
\hline
$2^1$ & 0.1215 & 0.1150 & 0.1169 & 0.1177 & 0.1196\\
$2^2$ & 0.0814 & 0.0605 & 0.0582 & 0.0596 & 0.0642\\
$2^3$ & 0.0779 & 0.0653 & 0.0652 & 0.0672 & 0.0661\\
$2^4$ & 0.0690 & 0.0621 & 0.0612 & 0.0646 & 0.0623\\
$2^5$ & 0.0566 & 0.0522 & 0.0494 & 0.0508 & 0.0487\\
$2^6$ & 0.0425 & 0.0358 & 0.0355 & 0.0356 & 0.0349\\
$2^7$ & 0.0230 & 0.0281 & 0.0241 & 0.0248 & 0.0253\\
$2^8$ & 0.0117 & 0.0204 & 0.0171 & 0.0171 & 0.0186\\
$2^9$ & 0.0060 & 0.0158 & 0.0120 & 0.0118 & 0.0178\\
\hline\hline
\multicolumn{6}{|c|}{{\bf Design 3:} $x_A(q)=x^{(n-1:n)}(q)$; $x_B(q)=x^{(1:n)}(q)$.}\\
\hline
$2^1$ & 0.1215 & 0.1150 & 0.1169 & 0.1177 & 0.1196\\
$2^2$ & 0.2391 & 0.2257 & 0.2289 & 0.2262 & 0.2322\\
$2^3$ & 0.2162 & 0.1138 & 0.1090 & 0.1074 & 0.1100\\
$2^4$ & 0.1155 & 0.1061 & 0.1067 & 0.1118 & 0.1136\\
$2^5$ & 0.0870 & 0.0853 & 0.0842 & 0.0878 & 0.0887\\
$2^6$ & 0.0571 & 0.0636 & 0.0637 & 0.0665 & 0.0672\\
$2^7$ & 0.0370 & 0.0486 & 0.0458 & 0.0490 & 0.0546\\
$2^8$ & 0.0210 & 0.0340 & 0.0348 & 0.0370 & 0.0431\\
$2^9$ & 0.0097 & 0.0253 & 0.0257 & 0.0271 & 0.0371\\
\hline
\end{tabular}
\caption{Mean absolute deviation for $\hat{P}_{B}$ across Monte Carlo simulations,
normalized by $\sqrt{N}$.}
\label{table:MC}
\end{center}
\end{figure}
We make the following observations that contrast the simulation
results with the theoretical bound of Corollary~\ref{cor:allpay-y}:
\begin{itemize}
\item {\bf Dependence on $N$:} Per our theoretical bound and
normalization, we expect the values reported in the table to stay
constant across different numbers of samples $N$ if the estimator
were truncated; the table shows that, without truncation, the
normalized error indeed stabilizes once $N$ is moderately large.
\item {\bf Dependence on $n$:} Per our theoretical bound and
normalization, we expect the values reported in the table to be
increasing with $n$ (as $n \log n$). The table shows an empirical
performance of the estimator that has much better dependence on $n$
and indicates that it may be possible to further improve the
theoretical bound.
\item {\bf Constants:} The simulations suggest that constants in the
theoretical bound are quite loose, specifically, for the designs
studied the theoretical bound is larger than one, i.e., trivial,
until $N \gg 10^6$. In the simulations the estimator worked quite
well even for $N = 10$.
\end{itemize}
One reason why the theoretical bound may be loose (both in its constants
and in its dependence on $n$) is that the bounds were developed to be
versatile enough to capture the extremal cases where the incumbent and
counterfactual mechanisms are quite different. For applications like
this simulation, where the incumbent and counterfactual mechanisms are
related by a modest $\epsilon$, the analysis of the initial steps in the
bound could be simplified and improved.
\begin{figure}
\begin{center}
\includegraphics[width=6in]{new-new-dependence-epsilon.png}
\caption{Dependence of the relative median absolute
  error on the mixture weight $\epsilon$. The $x$-axis
  displays values of $\epsilon$ on a log scale, and the $y$-axis plots
  error on a linear scale.}
\label{figure:eps}
\end{center}
\end{figure}
We next illustrate the dependence of the estimation error on the
choice of the mixture weight $\epsilon$ for the three considered
designs. We fix the number of agents $n=32$ and the sample size
$N=1000$ and vary the mixture probability $\epsilon$ between 0 and 1,
exclusive.  In Figure~\ref{figure:eps} we plot the relative median
absolute error, computed as the ratio of the median absolute error to
the estimated revenue, as a function of $\log(1/\epsilon)$.  In
Designs 1 and 3, the dependence on $\log(1/\epsilon)$
at moderate values of $\epsilon$ is linear as expected by the
theoretical bound, and is sublinear at very small values or very large
values of $\epsilon$ showing that the theoretical bound is weak at
those values. Design 2 displays a different trend: as $\epsilon$
decreases, the error decreases at first and then flattens
out. Intuitively, because the bid distribution that auction A in
Design~2 generates has high density on its entire range, this auction
is much better for inferring the revenue of~B than~B
itself. Therefore, as the weight $1-\epsilon$ of A in C increases, our
inference of B's revenue improves until it hits a plateau at
$\epsilon\approx 0.1$.
Finally, we consider the challenge case of Design~4 where the data
from the $(n-1)$-unit auction C is used to infer the revenue of the
one-unit auction B. This inference is especially challenging as $n$
grows because the bids of the $(n-1)$-auction are pushed down due to
the competition between bidders with the lowest values. At the same
time, the revenue from the 1-unit auction is driven by competition
between bidders with the highest values. Specifically, the term
$\sup_q (x'_B(q)/x'_C(q))$ in the theoretical bound
diverges at extreme quantiles, and the truncation of the estimator
plays an active role in controlling the error.
\begin{figure}
\begin{center}
\begin{tabular}{|c|cccccc|}
\hline
\multicolumn{7}{|c|}{{\bf Design 4:} $x_B(q)=x^{(1:n)}(q)$ and $x_C(q)=x^{(n-1:n)}(q)$.}\\
\hline
&\multicolumn{6}{|c|}{Estimator with truncation}\\
\cline{2-7}
$n=$&\multicolumn{6}{|c|}{$N=$}\\
\cline{2-7}
&$10^5$&$4 \cdot 10^6$&$5 \cdot 10^6$&$6 \cdot 10^6$&$7 \cdot 10^6$&$8 \cdot 10^6$\\
\cline{2-7}
2 &0.00020& 0.00014 & 0.00015 & 0.00013 & 0.00013 & 0.00014 \\
3 &0.00012& 0.00005 & 0.00005 & 0.00005 & 0.00005 & 0.00005 \\
4 &0.00011& 0.00014 & 0.00009 & 0.00017 & 0.00043 & 0.00019 \\
5 &0.00019& 0.00009 & 0.00011 & 0.00008 & 0.00010 & 0.00014 \\
10 &0.00009& 0.00007 & 0.00008 & 0.00010 & 0.00005 & 0.00004 \\
\hline
&\multicolumn{6}{|c|}{Estimator without truncation}\\
\cline{2-7}
2 &0.00012& 0.00014 & 0.00015 & 0.00013 & 0.00013 & 0.00014 \\
3 &0.00005& 0.00005 & 0.00005 & 0.00005 & 0.00005 & 0.00005 \\
4 &2.38256& 4.82498 & 7.47$\times 10^1$ & 1.67$\times 10^2$ & 2.82$\times 10^2$ & 3.53 $\times 10^2$ \\
5 &2.95732 & 3.99$\times 10^7$ & 6.31$\times 10^7$ & 9.16$\times 10^7$& 1.25$\times 10^8$ &2.02$\times 10^8$ \\
10& 3.20$\times 10^1$&3.48$\times 10^{10}$& 4.99$\times 10^{10}$&3.27$\times 10^{10}$& 2.09$\times 10^{10}$&3.95$\times 10^{10}$ \\
\hline
\end{tabular}
\caption{The ratio of the mean absolute error of the revenue estimator in Monte Carlo simulations to the theoretical bound of Theorem~\ref{thm:allpay-simple}, i.e., $16\,n^2\,\log N / \sqrt{N}$. Ratios below one are consistent with the theoretical bound; ratios above one are inconsistent.}
\label{table:MC:large}
\end{center}
\end{figure}
Figure~\ref{table:MC:large} exhibits this behavior: for $n\ge 4$, the
error from the non-truncated estimator is orders of magnitude larger
than the error from the truncated estimator, and indeed larger than
the bound given by Theorem~\ref{thm:allpay-simple}. The truncated
estimator, on the other hand, exhibits stable behavior with respect to
both the sample size and the number of bidders, and its error is well
below the theoretical bound given by Theorem~\ref{thm:allpay-simple}.
Also shown in Figure~\ref{table:MC:large}, the non-truncated estimator
performs well when $n=2$ and $n=3$. When $n=2$, the auctions B and C
are one and the same. The $n=3$ case is interesting and the result of
a general observation: when the counterfactual auction is a $k$-unit
auction and the incumbent auction is a $(k+1)$-unit auction, the
function $Z(q)$ turns out to be linear in $q$, meaning that $Z'(q)$ is
constant, and the non-truncated estimator is a simple average of the
observed bids (scaled appropriately). In this case, the non-truncated
estimator is well-behaved and exhibits small error.
\section{Preliminaries}\label{s:prelim}
\label{S:PRELIM}
\subsection{Auction Theory}
A standard auction design problem is defined by a set $[n] =
\{1,\ldots,n\}$ of $n\ge 2$ agents, each with a private value $\vali$
for receiving a service. The values are bounded as $\vali\in[0,1]$
and are independently and identically distributed according to a
continuous distribution $F$. If $\alloci$ indicates the
probability of service and $\pricei$ the expected payment required,
agent $i$ has linear utility $\utili = \vali \alloci - \pricei$. An
auction elicits bids $\bids = (\bidi[1],\ldots,\bidi[n])$ from the
agents and maps the vector $\bids$ of bids to an allocation
$\tallocs(\bids) = (\talloci[1](\bids),\ldots,\talloci[n](\bids))$,
specifying the probability with which each agent is served, and prices
$\tprices(\bids) = (\tpricei[1](\bids),\ldots,\tpricei[n](\bids))$,
specifying the expected amount that each agent is required to pay. An
auction is denoted by $(\tallocs,\tprices)$.
\paragraph{Standard payment formats}
In this paper we study two standard payment formats. In a {\em
first-price} format, each agent pays his bid upon winning, that is,
$\tpricei(\bids) = \bidi \, \talloci(\bids)$. In an {\em all-pay} format,
each agent pays his bid regardless of whether or not he wins, that is,
$\tpricei(\bids) = \bidi$.
\paragraph{Bayes-Nash equilibrium}
Recall that the values are independently and identically distributed
according to a continuous distribution $F$; this distribution is
common knowledge to the agents.  A strategy $\strati$ for agent $i$ is a
function that maps the
value of the agent to a bid. The distribution of values $F$ and a
profile of strategies $\strats = (\strati[1],\cdots,\strati[n])$
induces interim allocation and payment rules (as a function of bids)
as follows for agent $i$ with bid $\bidi$.
\begin{align*}
\talloci(\bidi) &= \expect[\valsmi \sim F]{\talloci(\bidi,\stratsmi(\valsmi))} \text{ and}\\
\tpricei(\bidi) &= \expect[\valsmi \sim F]{\tpricei(\bidi,\stratsmi(\valsmi))}.
\intertext{Agents have linear utility, which can be expressed in the interim as:}
\tutili(\vali,\bidi) &= \vali\talloci(\bidi) - \tpricei(\bidi).
\end{align*}
The strategy profile forms a {\em Bayes-Nash equilibrium} (BNE) if for
all agents $i$, values $\vali$, and alternative bids $\bidi$, bidding
$\strati(\vali)$ according to the strategy profile is at least as good
as bidding $\bidi$. I.e.,
\begin{align}
\label{eq:br}
\tutili(\vali,\strati(\vali)) &\geq \tutili(\vali,\bidi).
\end{align}
A symmetric equilibrium is one where all agents bid by the same
strategy, i.e., $\strats$ satisfies $\strati = \strat$ for all $i$
and some $\strat$.  For a symmetric equilibrium of a symmetric
auction, the interim allocation and payment rules are also symmetric,
i.e., $\talloci = \talloc$ and $\tpricei = \tprice$ for all $i$.  For
an implicit distribution $F$ and symmetric equilibrium given by
strategy $\strat$, a mechanism can be described by the pair
$(\talloc,\tprice)$.  \citet{CH-13} show that the equilibrium of every
auction in the class we consider is unique and symmetric.
The strategy profile allows the mechanism's outcome rules to be
expressed in terms of the agents' values instead of their bids; the
distribution of values allows them to be expressed in terms of the
agents' values relative to the distribution. This latter
representation exposes the geometry of the mechanism. Define the {\em
quantile} $q$ of an agent with value $\val$ to be the
probability that $\val$ is larger than a random draw from the
distribution $F$, i.e., $q=F(\val)$. Denote the agent's
value as a function of quantile as $\val(q) =
F^{-1}(q)$, and his bid as a function of quantile as
$\bid(q) = \strat(\val(q))$. The outcome rule of the
mechanism in quantile space is the pair
$(\alloc(q),\price(q)) =
(\talloc(\bid(q)),\tprice(\bid(q)))$.
\paragraph{Revenue curves and auction revenue}
\citet{mye-81} characterized Bayes-Nash equilibria and this
characterization enables writing the revenue of a mechanism as a
weighted sum of revenues of single-agent posted pricings. Formally,
the {\em revenue curve} $R(q)$ for a given value distribution
specifies the revenue of the single-agent mechanism that serves an
agent with value drawn from that distribution if and only if the
agent's quantile exceeds $q$: $R(q) =
\val(q)\,(1-q)$. Myerson's characterization of BNE then
implies that the expected revenue of a mechanism at BNE from an agent
facing an allocation rule $\alloc(q)$, notated $\REV{\alloc}$,
can be written as follows:
\begin{eqnarray}
\label{eq:bne-rev}
\REV{\alloc} = R(0)\,\alloc(0) + \expect[q]{R(q)\,\alloc'(q)} = R(1)\,\alloc(1) - \expect[q]{R'(q)\,\alloc(q)}
\end{eqnarray}
where $\alloc'$ and $R'$ denote the derivative of $\alloc$ and
$R$ with respect to $q$, respectively. For value
distributions supported on $[0,1]$, $R(0) = R(1) = 0$ and the
constant terms in equation~\eqref{eq:bne-rev} are identically zero.
The expected revenue of an auction is the sum over the agents of its
per-agent expected revenue; for auctions with symmetric equilibrium
allocation rule $\alloc$ this revenue is $n \, \REV{\alloc}$.
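Equation~\eqref{eq:bne-rev} can be checked numerically on a case where every quantity has a known closed form. The sketch below assumes values uniform on $[0,1]$ (so $\val(q)=q$ and $R(q)=q(1-q)$) and the single-unit highest-bid-wins allocation rule $x(q)=q^{n-1}$; the choice $n=4$ is purely illustrative. The total revenue $n\,\REV{\alloc}$ should then equal the expected second-highest of $n$ uniform draws, $(n-1)/(n+1)$, the revenue of the second-price auction.

```python
n = 4  # number of agents (illustrative choice)

# Uniform values on [0,1]: v(q) = q, so the revenue curve is R(q) = q(1 - q).
def R(q):
    return q * (1.0 - q)

# Single-unit highest-bid-wins rule: x(q) = q^(n-1), so x'(q) = (n-1) q^(n-2).
def x_prime(q):
    return (n - 1) * q ** (n - 2)

# Per-agent revenue E_q[R(q) x'(q)], by midpoint-rule numerical integration.
M = 100000
rev = sum(R((i + 0.5) / M) * x_prime((i + 0.5) / M) for i in range(M)) / M

# Total revenue n * REV vs. the second-price benchmark (n-1)/(n+1).
total = n * rev
expected = (n - 1) / (n + 1)
print(total, expected)  # both close to 0.6 for n = 4
```

The exact integral is $(n-1)/(n(n+1))$ per agent, so the two printed numbers agree up to the discretization error of the integration.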
\paragraph{Position environments and rank-based auctions}
\label{sec:rank-based-basics}
A {\em position environment} expresses the feasibility constraint of
the auction designer in terms of {\em position weights} $\wals$
satisfying $1 \ge \walk[1]\ge \walk[2] \ge \cdots \ge \walk[n] \geq
0$. A {\em position auction} assigns agents (potentially randomly) to
positions $1$ through $n$, and an agent assigned to position $i$ gets
allocated with probability $\walk[i]$. The {\em rank-by-bid position
auction} orders the agents by their bids, with
ties broken randomly, and assigns agent $i$, with the $i$th largest
bid, to position $i$, with allocation probability $\walk[i]$. {\em
Multi-unit environments} are a special case and are defined for $k$
units as $\walk[j] = 1$ for $j \in \{1,\ldots,k\}$ and $\walk[j] = 0$
for $j \in \{k+1,\ldots,n\}$. The {\em highest-$k$-bids-win}
multi-unit auction is the special case of the rank-by-bid position
auction for the $k$-unit environment.
In our model with agent values drawn i.i.d.\@ from a continuous
distribution, rank-by-bid position auctions with either all-pay or
first-price payment semantics have a unique Bayes-Nash equilibrium and
this equilibrium is symmetric and efficient, i.e., in equilibrium, the
agents' bids and values are in the same order \citep{CH-13}.
Rank-by-bid position auctions can be viewed as convex combinations of
highest-bids-win multi-unit auctions. The {\em marginal weights} of a
position environment are $\margwals =
(\margwalk[1],\ldots,\margwalk[n])$ with $\margwalk = \walk -
\walk[k+1]$. Define $\margwalk[0] = 1-\walk[1]$ and note that the
marginal weights $\margwals$ can be interpreted as a probability
distribution over $\{0,\ldots,n\}$. As rank-by-bid position auctions
are efficient, the rank-by-bid position auction with weights $\wals$
has the exact same allocation rule as the mechanism that draws a
number of units $k$ from the distribution given by $\margwals$ and
runs the highest-$k$-bids-win auction.
Denote the highest-$k$-bids-win allocation rule as $\knalloc$ and its
per-agent revenue as $\murevk = \REV{\knalloc} =
\expect[q]{-R'(q)\,\knalloc(q)}$. This allocation
rule is precisely the probability an agent with quantile $q$ has
one of the highest $k$ quantiles of $n$ agents, or at most $k-1$ of
the $n-1$ remaining agents have quantiles greater than $q$.
Formulaically,
\begin{align*}
\knalloc(q) &= \sum_{i=0}^{k-1} \tbinom{n-1}{i}
q^{n-1-i}(1-q)^{i}.
\\\intertext{Importantly, the allocation rule (in quantile space) of a
rank-by-bid position auction does not depend on the distribution at
all. The allocation rule $\alloc$ of the rank-by-bid position
auction with weights $\wals$ is:}
\alloc(q) &= \sum\nolimits_k \margwalk\, \knalloc(q).
\\\intertext{By revenue equivalence \citep{mye-81}, the per-agent
revenue of the rank-by-bid position auction with weights $\wals$
is:}
\REV{\alloc} &= \sum\nolimits_k \margwalk\,\murevk.
\end{align*}
Of course, $\murevk[0] = \murevk[n] = 0$ as always serving or never
serving the agents gives zero revenue.
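The mixture identity above can also be verified numerically. The sketch below uses an arbitrary nonincreasing weight vector (chosen purely for illustration) and compares the rank-by-bid position auction's allocation rule, computed directly from the rank probabilities, with the mixture over highest-$k$-bids-win rules given by the marginal weights.

```python
from math import comb

n = 5
w = [1.0, 0.8, 0.5, 0.2, 0.0]  # illustrative nonincreasing position weights

def x_kn(k, q):
    # Probability an agent at quantile q is among the top k of n agents:
    # at most k-1 of the n-1 others have quantile above q.
    return sum(comb(n - 1, i) * q ** (n - 1 - i) * (1 - q) ** i for i in range(k))

def x_direct(q):
    # The agent lands in position i (i-1 others above q) with probability
    # C(n-1, i-1) (1-q)^(i-1) q^(n-i), and is then served with probability w_i.
    return sum(w[i - 1] * comb(n - 1, i - 1) * (1 - q) ** (i - 1) * q ** (n - i)
               for i in range(1, n + 1))

def x_mixture(q):
    # Marginal weights w'_k = w_k - w_{k+1} (with w_{n+1} = 0) give a mixture
    # over highest-k-bids-win auctions with the same allocation rule.
    wk = w + [0.0]
    return sum((wk[k - 1] - wk[k]) * x_kn(k, q) for k in range(1, n + 1))

qs = [0.05, 0.2, 0.5, 0.8, 0.95]
print([x_direct(q) - x_mixture(q) for q in qs])  # all numerically zero
```

The equality follows by telescoping: $\sum_k (\walk-\walk[k+1])\,x_{k:n}(q)$ regroups into $\sum_i \walk[i]$ times the probability of landing in position $i$.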
A {\em rank-based} auction is one where the probability that an agent
is served is a function only of the rank of the agent's bid among the
other bids and not the magnitudes of the bids. Any rank-based auction
induces a position environment where $\iwalk$ denotes the probability
that the agent with the $k$th ranked bid is served. This auction is
equivalent to the rank-by-bid position auction with these weights
$\iwals$. In a position environment with weights $\wals$, the following
lemma characterizes the weights $\iwals$ that are induced by
rank-based auctions.
\begin{lemma}[e.g., \citealp{DHH-13}]
There is a rank-based auction with induced position weights $\iwals$
for a position environment with weights $\wals$ if and only if their
cumulative weights satisfy $\sum_{j=1}^{k} \iwalk[j] \leq
\sum_{j=1}^{k} \walk[j]$ for all $k$.
\end{lemma}
\subsection{Inference}
As we discussed in the introduction, the traditional structural
inference in the auction settings is based on inferring distribution
of values, which is unobserved but can be inferred from the
distribution of bids, which is observed. Once the value distribution
is inferred, other properties of the value distribution such as its
corresponding revenue curve, which is fundamental for optimizing
revenue, can be obtained. In this section we briefly overview this
approach.
The key idea behind the inference of the value distribution from the
bid distribution is that the strategy which maps values to bids is a
best response, by equation~\eqref{eq:br}, to the distribution of bids.
As the distribution of bids is observed, and given suitable continuity
assumptions, this best response function can be inverted.
We assume that the value distribution function $F(\cdot)$, the
allocation rule $\alloc(\cdot)$, and consequently also the quantile function of bid
distribution $\bid(\cdot)$, are monotone, continuously
differentiable, and invertible.
\paragraph{Inference for first-price auctions}
Consider a first-price rank-based auction with a symmetric bid
function $\bid(q)$ and allocation rule $\alloc(q)$ in BNE.
To invert the bid function we solve for the bid that the agent with
any value would make. Continuity of this bid function implies that
its inverse is well defined. Applying this inverse to the bid
distribution gives the value distribution.
The utility of an agent with quantile $q$ as a function of his bid $z$
is
\begin{align*}
\yestag\label{eq:fp-util}
\util(q,z) &= (\val(q) - z) \, \alloc(\bid^{-1}(z)).\\
\intertext{Differentiating with respect to $z$ we get:}
\tfrac{d}{dz}\util(q,z) &= -\alloc(\bid^{-1}(z)) +
\big(\val(q) - z\big) \, \alloc'(\bid^{-1}(z))\,
\tfrac{d}{dz}\bid^{-1}(z).\\
\intertext{Here $\alloc'$ is the
derivative of $\alloc$ with respect to the quantile $q$. Because
$\bid(\cdot)$ is in BNE, the derivative $\tfrac{d}{dz}\util(q,z)$ is $0$ at
$z=\bid(q)$. Rearranging, we obtain:}
\yestag
\label{eq:fp-inf}
\val(q) &= \bid(q) + \tfrac{\alloc(q)\,\bid'(q)}{\alloc'(q)}
\end{align*}
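Equation~\eqref{eq:fp-inf} can be sanity-checked on a case with known closed forms. The sketch below assumes a single-unit first-price auction with values uniform on $[0,1]$, for which $\val(q)=q$, $x(q)=q^{n-1}$, and the well-known equilibrium bid is $b(q)=\frac{n-1}{n}\,q$; the choice $n=3$ is illustrative.

```python
n = 3

# Closed forms for the single-unit first-price auction with uniform values
# (an assumption made only for this check): v(q) = q, b(q) = (n-1) q / n.
def b(q):       return (n - 1) * q / n
def b_prime(q): return (n - 1) / n
def x(q):       return q ** (n - 1)
def x_prime(q): return (n - 1) * q ** (n - 2)

# Equation (fp-inf): v(q) = b(q) + x(q) b'(q) / x'(q).
for q in [0.2, 0.5, 0.8]:
    recovered = b(q) + x(q) * b_prime(q) / x_prime(q)
    print(q, recovered)  # recovered value equals q up to float rounding
```

Algebraically, $b + x b'/x' = \frac{n-1}{n}q + \frac{q}{n} = q$, so the inference formula recovers the value function exactly here.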
\paragraph{Inference for all-pay auctions}
We repeat the calculation above for rank-based all-pay auctions; the starting
equation \eqref{eq:fp-util} is replaced with the analogous equation for all-pay auctions:
\begin{align*}
\yestag\label{eq:ap-util}
\util(q,z) &= \val(q)\,\alloc(\bid^{-1}(z)) - z.
\intertext{Differentiating with respect to $z$ we obtain:}
\tfrac{d}{dz}\util(q,z) &= \val(q)\,\alloc'(\bid^{-1}(z)) \,\frac{d}{dz}\bid^{-1}(z) - 1,\\
\intertext{Again the first-order condition of BNE implies that this expression is zero at $z = \bid(q)$; therefore,}
\yestag \label{eq:ap-inf}
\val(q) & = \tfrac{\bid'(q)}{\alloc'(q)}.
\end{align*}
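Equation~\eqref{eq:ap-inf} admits the same kind of check. The sketch below assumes a single-unit all-pay auction with values uniform on $[0,1]$, for which the equilibrium bid in quantile space is $b(q)=\frac{n-1}{n}\,q^{n}$ (obtained by integrating $\val(t)\,x'(t)$); again $n=3$ is an illustrative choice.

```python
n = 3

# Single-unit all-pay auction with uniform values (assumed for this check):
# b(q) = (n-1) q^n / n, so b'(q) = (n-1) q^(n-1); and x'(q) = (n-1) q^(n-2).
def b_prime(q): return (n - 1) * q ** (n - 1)
def x_prime(q): return (n - 1) * q ** (n - 2)

# Equation (ap-inf): v(q) = b'(q) / x'(q); with uniform values v(q) = q.
for q in [0.25, 0.5, 0.75]:
    print(q, b_prime(q) / x_prime(q))  # recovers v(q) = q
```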
\paragraph{Known and observed quantities}
Recall that the functions $\alloc(q)$ and $\alloc'(q)$ are
known precisely: these are determined by the rank-based auction
definition. The functions $\bid(q)$ and $\bid'(q)$ are
observed. The calculations above hold in the limit as the number of
samples from the bid distribution goes to infinity, at which point these
observations are precise.
Equations~\eqref{eq:fp-inf} and~\eqref{eq:ap-inf} enable the value
function, or equivalently, the value distribution, to be estimated
from the estimated bid function and an estimator for the derivative of
the bid function, or equivalently, the density of the bid
distribution. Estimation of densities is standard; however, it
requires assumptions on the distribution, e.g., continuity, and the
convergence rates in most cases will be slower. Our main results do not take
this standard approach. Below we discuss errors in estimation of the
bid function.
\subsection{Statistical Model and Methods}
Our framework for counterfactual auction revenue analysis is based on directly
using the distribution of bids for inference. The main error in
estimation of the bid distribution is the {\em sampling error} due to
drawing only a finite number of samples from the bid distribution.
Evaluation of the auction revenue requires the knowledge of
the {\it quantile function} of the bid distribution. While estimation of
empirical distributions is standard, quantile functions can be
significantly more difficult to estimate, especially if the density of
the distribution can approach zero on its support, since the
distribution function is non-invertible at those points. As we show below,
estimation of the counterfactual auction revenues requires the knowledge of the
{\it density-weighted quantile function} which can be robustly estimated despite the
potential non-invertibility of the distribution function. In this subsection, we overview
the uniform absolute error bound of the density-weighted quantile function of the
bid distribution of a multi-unit auction based on the results in \citet{csorgo:83}.
The analyst obtains $N$ samples from the bid distribution. Each
sample is the corresponding agent's best response to the true
bid distribution.
We can estimate the quantile function of the equilibrium bid
distribution $\bid(q)$ as follows. Let
$\sbidi[1],\cdots,\sbidi[N]$ denote the $N$ samples
drawn from the bid distribution. Sort the bids so that $\sbidi[1]
\leq \sbidi[2] \leq \cdots \leq \sbidi[N]$ and define the {\em
estimated quantile function of the bid distribution} $\ebid(\cdot)$ as
\begin{align}\label{bid function}
\ebid(q) &= \sbidi &\forall i \in [N],\ q \in \big[\tfrac{i-1}{N},\,\tfrac{i}{N}\big).
\end{align}
\begin{definition}
For a function
$\bid(\cdot)$, an estimator $\ebid(\cdot)$, and a weighting function $\omega(\cdot) \geq 0$, the {\em weighted uniform mean absolute
error}
is defined as $$\expect[\ebid]{ \sup\nolimits_q \omega(q)\big| \bid(q) -
\ebid(q) \big|}.$$
\end{definition}
The main object that will arise in our subsequent analyses will be the
weighted quantile function of the bid distribution where the weights
are determined by the allocation rules of the auctions under
consideration, e.g., $\expect[q]{\omega(q)\,b(q)}$ for some quantile
weighting function given by $\omega(\cdot)$.\footnote{Estimators of
these functions, i.e., replacing the bid distribution with the
empirical bid distribution, are called $L$-statistics in the
statistics literature.} The important insight is that while the
estimation of the quantile function of the bid distribution
$\hat{b}(\cdot)$ may be problematic around the points where the density
of the bid distribution is close to zero, the estimation of the
density-weighted quantile function is much more robust. As we will
show further, estimation of auction revenues involves such a
density-weighted form of the quantile function. Our error bounds are
based on the uniform convergence of quantile processes and weighted
quantile processes in \citet{csorgo:78}, \citet{csorgo:83}, and
\citet{cheng:97}.
For the quantile weighting function $\omega(q) = 1/b'(q)$, i.e., the
inverse derivative of the bid function, the $\sqrt{N}$-normalized mean
absolute error is bounded by a universal constant.
\begin{lemma}
\label{error bid function}
Suppose that $b$ and $b'$ exist on $(0,\,1)$ and
$
\sup_{q \in (0,\,1)}q(1-q)b'(q) <\infty.
$
Then the density-weighted uniform mean absolute error of the empirical quantile function
$\ebid(\cdot)$ on $q \in [\delta_N,\,1-\delta_N]$ with $\delta_N=\frac{25 \log\log N}{N}$ is bounded almost surely as
\begin{align*}\expect[\ebid]{
\sup\nolimits_{q \in [\delta_N,\,1-\delta_N]} \big|\sqrt{N}(b^{\prime}(q))^{-1}( \bid(q) - \ebid(q)) \big|} <
1+16\,\frac{\log\log N}{\sqrt{N}}\,
\sup\nolimits_q\,q(1-q)b'(q).
\end{align*}
\end{lemma}
This result is a consequence of statement (3.2.3) in Theorem 3.2.1 in
\cite{csorgo:83}. For all-pay auctions by equation \eqref{eq:ap-inf},
the term $\sup_q\{q(1-q)b'(q)\}$ is bounded by $\frac14
\sup_q\{x'(q)\}$. For first-price auctions by equation
\eqref{eq:fp-inf}, it is bounded by
$\sup_q\{q(1-q)x'(q)/x(q)\}$.
\subsection{Related Work}
Our work is motivated, in part, by field work in the past decade that
considers the empirical optimization of reserve prices in auctions
(e.g., \citealp{Ril-06}; \citealp{BM-09}; and \citealp{OS-11}). The
field study of \citeauthor{OS-11} is most similar to our theoretical
study and the motivating example of
Section~\ref{s:example-search-advertising}. They consider the
generalized second-price position auction of Internet search
advertising (on Yahoo!). They assume that the distribution of
advertiser (bidder) values is lognormal and use structural inference
to estimate the parameters of the distribution for the keywords of
each search. This allows for inference of the optimal reserve price.
They then suggest using a reserve price that is slightly smaller.
Finally, they evaluate the method of setting reserves via a global A/B
test that compares the original reserves with their reserves across
all keywords. While the authors motivate the usage of reserves
slightly smaller than the optimal reserves for reasons of robustness,
in the context of our motivation these smaller reserves also allow
future inference around the optimal reserve price where the optimal
reserves do not.
The classical approach to counterfactual inference is based on
recovering the values of bidders by inverting their best responses
using the empirical distribution of bids. This approach was developed
by \citet{guerre} for single-unit first-price auctions and it has seen
application broadly in auction theory (e.g.\@ see
\citealp{athey:2007}, and \citealp{paarsch}). This classical method
of counterfactual estimation requires an estimator for the derivative
of the bid distribution. In contrast, our approach directly estimates
revenue without explicitly inferring the values of bidders.  Our
inference method is based on a technique similar to that in
\citet{mye-81} that ``integrates out'' best responses of agents so
that the auction revenue can be expressed directly in terms of
the observable bids.
Our estimator of the counterfactual auction revenue is simply a
weighted order statistic of samples from the bid distribution. Other
works have proposed using similarly simple estimators to obtain bounds
on the performance of counterfactual auctions.  Unlike our work, these
estimators do not use the first-order condition of Bayes-Nash
equilibrium, but in exchange for a weakening of the assumptions of the
model, they obtain bounds instead of point estimates. For example,
\citet{coey:2014} consider ascending single-item auctions and use the
main theorem of \citet{BK-96}, the revenue submodularity of \citet{DRS-12},
and the expected second- and third-highest bids to bound the revenue
of the (counterfactual) optimal auction. See their related work
section for a discussion of similar studies.
The mechanism design literature has previously considered the problem
of an uninformed designer who wishes to optimize a mechanism under three
conditions: (a) repeatedly on agents from the same population (each
agent participates only once; see \citealp{KL-03}; \citealp{BH-05};
and \citealp{CGM-15}), (b) with samples from the value distribution (see
\citealp{CR-14}, and \citealp{FHHK-14}), and (c) on the fly in one
mechanism (see \citealp{GHKSW-06}; \citealp{seg-03}; and
\citealp{BV-03}). These works exclusively consider mechanisms that
have truthtelling equilibria and for which, consequently, inference is
trivial. The papers listed in category (a) also consider a model
where the designer only learns the revenue of the mechanism in each
round and not the individual bids. These papers adapt methods from
the multi-armed bandit literature, e.g., \citet{ACFS-02}, which trade off
exploring the performance of mechanisms that the designer is less
informed about with exploiting the mechanisms which have been learned
to perform well. Our approach of instrumented optimization is similar
to the exploration steps of these multi-armed bandit algorithms,
except that we assume that bids are in equilibrium for the
distribution over mechanisms rather than for each individual
mechanism. This distinction is important for mechanisms that do not
have truthtelling equilibria.
Finally, the theory that we develop for optimizing revenue over the
class of rank-by-bid position auctions is isomorphic to the theory of
envy-free optimal pricing developed by \citet{DHY-14}.
\section{Inference for social welfare}
\label{s:welfare}
We now consider the problem of estimating the social welfare of a
rank-based auction using bids from another rank-based all-pay
auction. Consider a rank-based auction with induced position weights
$\wals$. By definition, the expected {\em per-agent} social welfare
obtained by this auction is as below, where $\evalk$ is the expected
value of the $k$th highest value agent, or the $k$th order statistic
of the value distribution.
$$
\sw = \frac 1n \sum_{k=1}^{n} \walk \evalk.
$$
We note that the value order statistics, $\evalk$, are closely
related to the expected revenues of the multi-unit auctions. The
$k$-unit second-price auction serves the top $k$ agents with
probability $1$, and charges each agent the $(k+1)$th highest value. Its
expected revenue is therefore $nP_k = k\evalk[k+1]$. We
therefore obtain:
$$
\sw = \walk[1]\frac{\evalk[1]}{n} + \sum_{k=1}^{n-1} \walk[k+1] \, \frac {P_{k}}{k}.
$$
The methodology developed in the previous sections can be used to
estimate the $P_k$'s in the above expression. The first order
statistic of the values, $\evalk[1]$, cannot be directly estimated in
this manner. Notate the expected value of an agent as
$$
\expval = \expect[q]{\val(q)} = \frac 1 n \sum_{k=1}^n \evalk.
$$
Therefore, we can calculate the social welfare of the position auction with weights $\wals$ as
\begin{eqnarray}
\sw = \walk[1] \expval - \sum_{k=2}^{n} (\walk[1]-\walk) \frac
{\evalk}{n} = \walk[1]\expval - \sum_{k=1}^{n-1} (\walk[1]-\walk[k+1]) \, \frac {P_k}{k}.
\label{eq:sw}
\end{eqnarray}
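Equation~\eqref{eq:sw} can be checked against the direct definition of per-agent welfare on a case with closed forms. The sketch below assumes values uniform on $[0,1]$, for which the $k$th highest of $n$ draws has expectation $\frac{n+1-k}{n+1}$, the mean value is $\expval=\tfrac12$, and $P_k = \frac{k(n-k)}{n(n+1)}$; the weight vector is an illustrative choice.

```python
n = 4
w = [1.0, 0.7, 0.4, 0.1]  # illustrative position weights

# Closed forms for uniform values: E[V_(k)] = (n+1-k)/(n+1), mean value 1/2,
# and k-unit per-agent revenue P_k = k * E[V_(k+1)] / n = k (n-k) / (n (n+1)).
Ev = [(n + 1 - k) / (n + 1) for k in range(1, n + 1)]
mu = 0.5
P = [k * (n - k) / (n * (n + 1)) for k in range(1, n)]  # P_1 .. P_{n-1}

# Direct per-agent welfare: (1/n) sum_k w_k E[V_(k)].
sw_direct = sum(w[k] * Ev[k] for k in range(n)) / n

# Equation (sw): w_1 * mu - sum_{k=1}^{n-1} (w_1 - w_{k+1}) P_k / k.
sw_eq = w[0] * mu - sum((w[0] - w[k]) * P[k - 1] / k for k in range(1, n))
print(sw_direct, sw_eq)  # both equal 0.35 for these parameters
```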
We now argue that $\expval$ can be estimated at a good rate from the
bids of another rank-based all-pay auction. Let $\alloc$ denote the allocation rule of the auction that we run,
and $\bid$ denote the bid distribution in BNE of this auction. Then we
note that
$$
\expval = \expect[q]{\val(q)} =
\expect[q]{\frac{\bid'(q)}{\alloc'(q)}} = \expect[q]{-\Zbar'(q)\bid(q)}
$$
where $\Zbar(q) = 1/\alloc'(q)$. We might now try to directly apply
Theorems~\ref{thm:allpay-simple} or \ref{thm:allpay-general} to bound
the error in our estimate of $\expval$. This does not immediately
work, as Lemma~\ref{lem:Z-bound-1} fails to hold for $\Zbar$. Instead,
we observe that since $\alloc'(q)$ is a degree $n-1$ polynomial, it
has fewer than $n$ local minima, and therefore $\Zbar$ has fewer than $n$
local maxima. We can therefore adapt the arguments for the
aforementioned theorems to obtain the following lemma:
\begin{lemma}
\label{lem:inference-expval}
The mean absolute error in estimating the expected value $\expval$
using $N$ samples from the bid distribution for an all-pay rank-based
auction with allocation rule $x$ is bounded as given by the two
expressions below. Here $n$ is the number of positions in the position
auction.
\begin{align*}
\Err{\expval} & \le \frac{8n^2\log N}{\sqrt{N}}\\
\Err{\expval} & \le \frac{40n}{\sqrt{N}} \max\left\{1, \log\sup\nolimits_{q\not\in\Lambda}\alloc'(q),
\log \sup\nolimits_{q\not\in\Lambda}\tfrac{1}{\alloc'(q)} \right\}
\end{align*}
\end{lemma}
As an example application of Lemma~\ref{lem:inference-expval}, we
adapt Corollary~\ref{cor3} to bound the error from estimating the
social welfare of any position auction using bids from another
position auction that is mixed with the uniform-stair auction. Recall
that the uniform-stair auction is a universal B test. Using the
universal B test of Corollary~\ref{cor:universal} instead of the
uniform-stair auction gives a slightly worse error bound, because the
slope of the allocation rule for that auction can be as small as
$N^{-O(n)}$. Other revenue estimation results can be similarly adapted
to estimate social welfare.
\begin{theorem}
\label{thm:sw}
For any rank-based auction A; uniform-stair auction B with position
weights $\walk = \frac{n-k}{n-1}$ for each $k\in [1, n]$; and all-pay
rank-based auction C with $x_C = (1-\epsilon)x_A + \epsilon x_B$; the mean
absolute error for estimating the social welfare of any rank-based
auction D from $N$ samples from the bid distribution of C is bounded
by:
\begin{align*}
O\left( \frac{n}{\sqrt{N}} +\frac{n\log n \log(n/\epsilon)}{\sqrt{N}}\right)
& = O\left( \frac{n\log n \log(n/\epsilon)}{\sqrt{N}} \right).
\end{align*}
\end{theorem}
The theorem follows by combining Lemma~\ref{lem:inference-expval} with
equation~\eqref{eq:sw} and Corollary~\ref{cor3}. The first term
follows from Lemma~\ref{lem:inference-expval} by noting that the
uniform-stair auction satisfies $x'(q)=1$ for all $q$. The second term
follows from the error bounds on $P_k$ given by Corollary~\ref{cor3};
the extra factor of $\log n$ (relative to the statement of the
corollary) arises from the fact that the total weight of the
multipliers for the terms in equation~\eqref{eq:sw} can be as large as
$\sum_{k=1}^n 1/k \approx \log n$.
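The harmonic-sum estimate in the last step can be checked numerically; the following is a minimal sketch (ours, purely illustrative) comparing the partial sums $H_n = \sum_{k=1}^n 1/k$ with $\log n$:

```python
import math

# Partial harmonic sum H_n = sum_{k=1}^n 1/k; it grows like
# log(n) + 0.5772... (Euler-Mascheroni constant) + O(1/n).
def harmonic(n):
    return sum(1.0 / k for k in range(1, n + 1))

for n in (10, 100, 10_000):
    print(n, harmonic(n), math.log(n))
```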
\section{XOR games with $d$-outputs}
\section{Introduction.}
Quantum non-local correlations are one of the most intriguing aspects of Nature, evidenced in the violation of Bell inequalities. Besides their foundational interest, these correlations have also proven to be useful in information processing tasks such as secure device-independent randomness amplification and expansion \cite{rand}, cryptographic secure key generation \cite{securekey} and reduction of communication complexity \cite{CommCompl}.
Concerning such applications, it is typically of most interest to compute the classical and quantum value of the Bell expression, the classical value being the maximum over local realistic assignments of outcomes while the quantum value is the maximum attained using measurements on entangled quantum states. However, neither of these values is easy to calculate. Computing the classical value is done by means of an integer program and is in general a hard problem \cite{Pitowsky, Hastad}. On the other hand, it is not even known whether the quantum value is computable for all Bell inequalities, since there is a priori no restriction on the dimension of the Hilbert space for the quantum states and measurements; although in some instances it is possible to compute the value efficiently or to find a good approximation. A hierarchy of semi-definite programs from \cite{Navascues} is typically used to get (upper) bounds on the quantum value, although the quality of approximation achieved by these bounds remains unknown. The size of these programs also increases exponentially with the number of inputs and outputs in the Bell expression, so that a central problem of utmost importance in non-locality theory is to find easily computable good bounds to handle general classes of Bell inequalities.
An important class of Bell inequalities for which the quantum value \textit{can} be computed exactly is the class known as two-party binary \textsc{xor} games or equivalently as bipartite two-outcome correlation inequalities. In a binary \textsc{xor} game, the two parties Alice and Bob receive inputs $x \in [m_A], y \in [m_B]$ (we denote $[m_A] := \{1,\dots, m_A\}$) and respond with outputs $a, b \in \{0,1\}$. The winning constraint for each pair of inputs $(x, y)$ only depends on the \textsc{xor} modulo $2$ of the parties' answers, i.e., the Bell expression in the binary \textsc{xor} game only involves probabilities $P(a \oplus_{2} b = k| x, y)$ for $k \in \{0,1\}$. The fact that these are equivalent to Bell inequalities for correlation functions with binary outcomes is seen by noting that in this case the correlators $\mathcal{E}_{x,y}$ are given by $\mathcal{E}_{x,y} = \sum_{k=0,1} (-1)^k P(a \oplus_{2} b = k | x, y)$. For these games, it was shown in \cite{Cleve, Wehner} based upon a theorem by Tsirelson \cite{Tsirelson} that the quantum value can be computed efficiently by means of a semi-definite program, although computing the classical value is known to be a hard problem even for this class of games \cite{Hastad}. Besides binary \textsc{xor} games, few general results are known regarding the maximum quantum violation of classes of Bell inequalities.
The study of correlation Bell inequalities for binary outcomes was in part driven by the fact that many of the quantum information-processing protocols were developed for qubits, for which binary outcome games appear naturally. Recently, there has been much interest in developing applications of higher-dimensional entanglement \cite{Exp-high-dim, Qudit-Toffoli, Qudit-randomness, Qudit-key-dist} for which Bell inequalities with more than two outcomes may be naturally suited. Therefore, both for fundamental reasons as well as for these applications, the study of Bell inequalities with more outcomes is crucial.
A natural extension of the binary outcome \textsc{xor} games is to the class of generalized \textsc{xor}-d games, where the outputs of the two parties are not restricted to be binary, although the winning constraint still depends upon the generalized \textsc{xor} (addition modulo $d$), with $d$ being the number of outcomes.
The generalization can also be extended to the class known as \textsc{linear} games \cite{Hastad}, where the parties output answers that are elements of a finite Abelian group and the winning constraint depends upon the group operation acting on the outputs. Linear games are the paradigmatic example of non-local games with more than two outcomes, and a study of their classical and quantum values is crucial, especially in light of applications such as \cite{Jed}.
In the context of Bell inequalities, these were first studied in \cite{Buhrman}, where a large alphabet generalization of the CHSH inequality called CHSH-d was considered, which has since been investigated in \cite{Liang, Ji, Bavarian, Howard, GRRH+15}.
An important property of the \textsc{xor}-d games concerns their relationship with communication complexity: following \cite{vanDam, Wang}, it is seen that correlations (boxes) winning a non-trivial total function \textsc{xor}-d game for prime $d$ can result in a trivialization of communication complexity.
A related information-theoretic principle called \textit{no quantum advantage in non-local computation} (no-NLC) has also been suggested in \cite{NLC}; this proposes that quantum correlations are those that do not provide any advantage over classical correlations in the task of distributed non-local computation of arbitrary binary functions, while general no-signaling correlations do. It is also of interest to investigate whether the above principle can be extended to functions of more outcomes.
In this paper, we present a novel efficiently computable bound to the quantum value of linear games and use it to derive several interesting properties, with particular emphasis on the important case of \textsc{xor}-d games for prime $d$. We illustrate the bound with the example of the CHSH-d game for prime and prime power $d$, recovering recent results derived using alternative (more technical) methods.
As another illustration, we use the bound to show that for uniformly chosen inputs, no non-trivial total function \textsc{xor}-d game can be won with a quantum strategy, and consequently that the no-signaling boxes trivializing communication complexity cannot be realized within quantum theory. We further prove a large alphabet generalization of the no-NLC principle, showing that quantum theory provides no advantage in the task of non-local computation of a restricted class of functions with $d$ outcomes for prime $d$.
For the sake of clarity of exposition, we only include sketches of proofs in the main text with details deferred to the Appendices.
\section{A bound on the quantum value of linear games.}
\label{sec:lin-bound}
Linear games are a generalization of \textsc{xor} games to an arbitrary output alphabet size and are defined as follows:
\begin{mydef}
A two-player linear game $\textsl{g}^{l} = (q, f)$ is one where two players Alice and Bob receive questions $u$, $v$ from sets $Q_A$ and $Q_B$ respectively, chosen from a probability distribution $q(u,v)$ by a referee. They reply with respective answers $a, b \in (G,+)$ where $G$ is a finite Abelian group with associated operation $+$. The game is defined by a winning constraint $a + b = f(u,v)$ for some function $f : Q_A \times Q_B \rightarrow G$.
\end{mydef}
The most interesting linear games are arguably the \textsc{xor}-d games, denoted $\textsl{g}^{\oplus}$, which are the linear games corresponding to the cyclic group $\mathbb{Z}_d$, the integers under addition modulo $d$ ($\oplus_d$).
The value of the linear game is given by the expression
\begin{equation}
\omega_s(\textsl{g}^{l}) = \max_{\{P_{A,B|U,V}\} \in \mathcal{S}} \sum_{\substack{u \in Q_A \\ v \in Q_B}} \sum_{a,b \in G} q(u,v) V(a,b|u,v) P(a,b | u, v),
\end{equation}
where $V(a,b|u,v) = 1$ if $a + b =f(u,v)$ and $0$ otherwise and the maximum is taken over all boxes $\{P_{A,B|U,V}\}$ in the set $\mathcal{S}$ which may correspond to the set of classical $\mathcal{C}$, quantum $\mathcal{Q}$ or more general no-signaling boxes $\mathcal{NS}$.
The maximum classical value of the game (the maximum over all deterministic assignments of $a, b$ for each respective input $u,v$, or their convex combinations) is denoted $\omega_c(\textsl{g}^{l})$, the maximum value of the game achieved by a quantum strategy (POVM measurements on a shared entangled state of arbitrary Hilbert space dimension) is denoted $\omega_q(\textsl{g}^{l})$, and the maximum value achieved by a no-signaling strategy (where neither party can signal their choice of input using the correlations) is denoted $\omega_{ns}(\textsl{g}^{l})$. These games have been studied \cite{Hastad, Khot} in the context of hardness of approximation of several important optimization problems, in attempts to determine whether polynomial-time algorithms exist that approximate the optimum solution to within a constant factor. Linear games belong to the class of unique games \cite{Kempe}: in a unique game $\textsl{g}^u$, for every answer $a$ of Alice there is a unique answer $b = \pi_{u,v}(a)$ of Bob that wins the game, where $\pi_{u,v}$ is a permutation depending on the input pair $(u,v)$. For every game in this class, a no-signaling box exists that wins the game, so that $\omega_{ns}(\textsl{g}^l) = \omega_{ns}(\textsl{g}^u) = 1$. Such a box for the general unique game with $d$ outcomes is defined by the entries $P(a,b|u,v) = 1/d$ if $b = \pi_{u,v}(a)$ and $0$ otherwise, for all input pairs $(u,v)$. This strategy clearly wins the game, and it is no-signaling since the output distribution seen by each party is fully random for every input, i.e., $P(a|u) = P(b|v) = 1/d$.
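The winning no-signaling box just described is easy to verify numerically. The following is a minimal sketch (our illustration, with an assumed winning function $f(u,v) = u \cdot v \bmod d$ and the permutations $\pi_{u,v}(a) = f(u,v) - a \bmod d$ chosen for concreteness):

```python
import itertools

import numpy as np

# Sketch of the no-signaling box described above, for a unique game with
# d = 3 outcomes. The winning function f and the permutations
# pi_{u,v}(a) = f(u,v) - a (mod d) are assumptions chosen for concreteness.
d, m = 3, 3
f = lambda u, v: (u * v) % d

def box(a, b, u, v):
    """P(a,b|u,v) = 1/d if b = pi_{u,v}(a), i.e. a + b = f(u,v) mod d, else 0."""
    return 1.0 / d if (a + b) % d == f(u, v) else 0.0

for u, v in itertools.product(range(m), repeat=2):
    # The box wins the game with certainty for every input pair ...
    win = sum(box(a, b, u, v) for a in range(d) for b in range(d)
              if (a + b) % d == f(u, v))
    assert np.isclose(win, 1.0)
    # ... and each party's marginal is uniform for every input, so neither
    # party can signal: P(a|u) = P(b|v) = 1/d.
    for a in range(d):
        assert np.isclose(sum(box(a, b, u, v) for b in range(d)), 1.0 / d)
    for b in range(d):
        assert np.isclose(sum(box(a, b, u, v) for a in range(d)), 1.0 / d)
print("winning no-signaling box verified")
```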
As in the case of Boolean functions \cite{BBLV,Brassard2}, the classical value $\omega_c(g^l)$ of any linear game is strictly greater than the pure random guess value $1/|G|$; this is shown in Lemma \ref{lem:cl-value}.
\begin{lemma}
\label{lem:cl-value}
For any linear game $g^{l}$ corresponding to a function $f(u,v)$ with $u \in Q_A, v \in Q_B$ and for an arbitrary probability distribution $q(u,v)$, we have
\begin{equation}
\label{cl-low-b}
\omega_c(g^{l}) \geq \frac{1}{|G|} \left( 1 + \frac{|G|-1}{m}\right),
\end{equation}
where $m = \min\{|Q_A|, |Q_B|\}$.
\end{lemma}
\begin{proof}
Let $d = |G|$, and let Alice and Bob receive inputs $u, v$ of $\log_d |Q_A|$ and $\log_d |Q_B|$ dits respectively. Suppose w.l.o.g.\ that $|Q_A| \leq |Q_B|$ (so $m = |Q_A|$), and let the two parties share a uniformly distributed random variable $w$ of $\log_d |Q_A|$ dits. The following classical strategy achieves the lower bound in Eq.(\ref{cl-low-b}). Bob outputs $b = f(w,v)$, while Alice checks whether $u = w$: if so, she outputs the identity element $a = e$; if not, she outputs a uniformly distributed $a \in G$. When $u = w$, which happens with probability $\frac{1}{m}$, we have $a + b = e + f(w,v) = f(u,v)$ and the strategy succeeds. When $u \neq w$, the element $a + f(w,v)$ is uniformly random since $a$ is uniform, and the strategy succeeds with probability $\frac{1}{d}$. The value achieved by this strategy is therefore $\frac{1}{m} + \left(1 - \frac{1}{m}\right) \frac{1}{d}$.
\end{proof}
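The value of this shared-randomness strategy can be evaluated exactly by enumeration; the following sketch (ours, for the cyclic group $G = \mathbb{Z}_d$ with identity $e = 0$ and assumed small parameters) confirms that it matches the bound of Lemma~\ref{lem:cl-value}. Note that the success probability is independent of the particular function $f$:

```python
from fractions import Fraction
import itertools

# Exact evaluation (illustrative sketch) of the strategy from the proof
# above for G = Z_d with identity e = 0: Bob outputs f(w, v); Alice outputs
# 0 when u == w and a uniform group element otherwise. The success
# probability does not depend on f, only on whether u == w.
d, mA, mB = 3, 4, 6
win = Fraction(0)
weight = Fraction(1, mA * mB * mA)        # uniform q(u,v) and uniform shared w
for u, v, w in itertools.product(range(mA), range(mB), range(mA)):
    if u == w:
        # a = 0, b = f(w,v): 0 + f(w,v) = f(u,v), so the strategy always wins
        win += weight
    else:
        # a uniform makes a + f(w,v) uniform: success probability 1/d
        win += weight * Fraction(1, d)

m = min(mA, mB)
assert win == Fraction(1, m) + (1 - Fraction(1, m)) * Fraction(1, d)
assert win >= Fraction(1, d) * (1 + Fraction(d - 1, m))
print(win)  # 1/2 for these parameters
```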
Computing the quantum value of a linear game is in general an onerous task, and efficiently computable bounds are hard to find.
We now present a bound on the quantum value of a linear game in Theorem \ref{thm2} by using the norms of a set of \textit{game matrices} defined using the characters of the associated group. The detailed derivation of the bound is shown in the proof of this theorem presented in the Appendix \ref{sec:lin-bound-app}, and the utility and possible tightness of the bound (in scenarios such as the CHSH-d game that is applicable to tasks such as relativistic bit commitment \cite{Jed}) is considered in this section.
\begin{thm}\label{thm2}
\label{norm-bound}
The quantum value of a linear game $\textsl{g}^l$ with input sets $ Q_A, Q_B$ can be bounded as
\begin{eqnarray}
\label{xor-d-bound-2}
\omega_{q}(\textsl{g}^{l}) \leq \frac{1}{|G|} \left[ 1 + \sqrt{|Q_A| |Q_B|} \sum_{x \in G\setminus \{e\}} \Vert \Phi_{x} \Vert \right],
\end{eqnarray}
where
$\Phi_{x} = \sum_{(u,v) \in Q_A \times Q_B} q(u,v) \chi_{x}(f(u,v)) | u \rangle \langle v|$ are the game matrices, $\chi_{x}$ are the characters of the group $G$ and $\Vert \cdot \Vert$ denotes the spectral norm. In particular, for an \textsc{xor}-d game with $m_A$ and $m_B$ inputs for the two parties, the quantum value can be bounded as
\begin{eqnarray}
\label{eq:xor-d-bound-3}
\omega_{q}(\textsl{g}^{\oplus}) \leq \frac{1}{d} \left[ 1 + \sqrt{m_A m_B} \sum_{k= 1}^{d-1} \Vert \Phi_{k} \Vert \right],
\end{eqnarray}
with $\Phi_k = \sum_{\substack{u \in [m_A] \\ v \in [m_B]}} q(u,v) \zeta^{k f(u,v)} |u \rangle \!\langle v|$ and $\zeta = \exp{(2 \pi I/d)}$.
\end{thm}
\begin{proof}
We sketch the proof of the bound using the Fourier transform for the \textsc{xor}-d games here; the generalization to linear games uses the analogous Fourier transform on finite Abelian groups \cite{Terras} and is deferred to the Appendix \ref{sec:lin-bound-app}. For a quantum strategy given by projective measurements $\{\Pi_{u}^{a} \}, \{\Sigma_{v}^{b} \}$ on a pure state $| \Psi \rangle \in \mathbb{C}^{D} \otimes \mathbb{C}^{D}$, we introduce the generalized correlators $\langle A_{u}^{x} \otimes B_{v}^{y} \rangle$
for unitary operators defined as
\begin{equation}
A_{u}^{x} = \sum_{a \in G} \zeta^{-ax} \Pi_{u}^{a} \; \; \text{and} \; \; B_{v}^{y} = \sum_{b \in G} \zeta^{-by} \Sigma_{v}^{b}.
\end{equation}
The probabilities $P(a,b|u,v)$ that enter the game expression are calculated from the inverse transform to be
\begin{eqnarray}
\label{eq:game-prob}
P(a \oplus_d b = f(u,v) | u,v) = \frac{1}{d} \sum_{k =0}^{d-1} \zeta^{k f(u,v)} \langle A_{u}^{k} \otimes B_{v}^{k} \rangle.
\end{eqnarray}
Now, with vectors $|\alpha_{k} \rangle, |\beta_{k} \rangle$ and the \textsc{xor}-d game matrices $\Phi_{k}$ defined as
\begin{eqnarray}
&&|\alpha_{k} \rangle = \sum_{u \in Q_A} \left((A_{u}^{k})^{\dagger} \otimes \leavevmode\hbox{\small1\kern-3.8pt\normalsize1} \right) |\Psi \rangle \otimes |u \rangle, \nonumber \\
&&|\beta_{k} \rangle = \sum_{v \in Q_B} \left(\leavevmode\hbox{\small1\kern-3.8pt\normalsize1} \otimes B_{v}^{k} \right) |\Psi \rangle \otimes |v \rangle, \nonumber \\
&&\Phi_{k} = \sum_{(u,v) \in Q_A \times Q_B} q(u,v) \zeta^{k f(u,v)} | u \rangle \langle v|,
\end{eqnarray}
the game expression $\sum_{(u,v) \in Q_A \times Q_B} q(u,v) P(a \oplus_d b = f(u,v)|u,v)$ can be rewritten using Eq.(\ref{eq:game-prob}) as
$(1/d) \sum_{k=0}^{d-1} \langle \alpha_k | \leavevmode\hbox{\small1\kern-3.8pt\normalsize1} \otimes \Phi_{k} | \beta_{k} \rangle$ and the norm bound in Eq.(\ref{eq:xor-d-bound-3}) follows.
\end{proof}
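The bound of Theorem~\ref{thm2} is straightforward to evaluate numerically: one builds the game matrices $\Phi_k$ and sums their spectral norms. The following sketch (ours; the parameters and the CHSH-type winning function are assumptions for illustration) evaluates Eq.(\ref{eq:xor-d-bound-3}) for $d=3$:

```python
import numpy as np

# Numerical sketch of the XOR-d bound: build the game matrices Phi_k for
# uniform inputs and evaluate (1/d) * (1 + sqrt(mA*mB) * sum_k ||Phi_k||).
# Shown for d = 3 with the CHSH-d style function f(u,v) = u*v mod d.
d = 3
m = d                                    # mA = mB = d questions per party
zeta = np.exp(2j * np.pi / d)
q = 1.0 / m**2                           # uniform input distribution

def game_matrix(k, f):
    """Phi_k with entries q(u,v) * zeta^(k * f(u,v))."""
    return np.array([[q * zeta ** (k * f(u, v)) for v in range(m)]
                     for u in range(m)])

f = lambda u, v: (u * v) % d
norms = [np.linalg.norm(game_matrix(k, f), 2) for k in range(1, d)]
bound = (1 + m * sum(norms)) / d         # sqrt(mA * mB) = m here
print(bound)                             # = 1/3 + 2/(3*sqrt(3)), about 0.718
```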
It should be noted that, as shown in \cite{Kempe}, the quantum value of a linear game can be efficiently approximated: to be precise, for any linear game $\textsl{g}^{l}$ with $\omega_q(\textsl{g}^{l}) = 1 - \delta$, there exists an efficient algorithm that approximates this value using a semi-definite program and a rounding procedure, yielding an entangled strategy achieving $\omega_q^{\text{app}}(\textsl{g}^{l}) = 1 - 4 \delta'$, where $\delta/4 \leq \delta' \leq \delta$. While this is highly significant and useful for proving results such as a parallel repetition theorem for the quantum value of such games \cite{Kempe}, the approximation is only good when the quantum value is close to unity, which is not the case for simple examples like the CHSH-d game. For uniform probability inputs $q(u,v) = 1/|Q_A| |Q_B|$, or when the input distribution possesses certain symmetries, the simple linear-algebraic bound above supplements this result and, as we shall see, proves very useful for deriving other interesting properties of these games.
We first illustrate the applicability and possible tightness of the bound by considering the flagship scenario of the CHSH-d game, which generalizes the well-known CHSH game to higher dimensional outputs. In this game, Alice and Bob are asked questions $u, v$ chosen uniformly at random from a finite field $\mathbb{F}_d$ of size $d$, so that $q(u, v) = 1/d^2$, where $d$ is a prime or a prime power. They return answers $a, b \in \mathbb{F}_d$ with the aim of satisfying $a \oplus b = u \cdot v$, where the arithmetic operations are those of the finite field. In \cite{Bavarian}, an intensive study of this game was performed, with two significant results obtained on the asymptotic classical and quantum values of the game. We now apply Theorem \ref{thm2} to re-derive in a simple manner the upper bound for the quantum value of CHSH-d. Comparison with the numerical results of \cite{Ji, Liang} indicates that the bound in the following example may not be tight in general; note also that the optimum value of the game for Pauli measurements was recently derived in \cite{Howard}.
\begin{example}[see also \cite{Bavarian}]
\textit{The quantum value of the CHSH-d game for prime and prime power $d$, i.e., $d = p^r$ where $p$ is prime and $r \geq 1$ is an integer, can be bounded as
\begin{equation}
\label{eq:Bavarian-bound}
\omega_q(CHSH-d) \leq \frac{1}{d} + \frac{d-1}{d \sqrt{d}}.
\end{equation} }
\end{example}
\begin{proof}
Let us consider the CHSH-d game with associated function $f(u, v) = u \cdot v$. The entries of the game matrix $\Phi_k$ for prime $d$ are by definition $\Phi_k(u,v) = q(u,v)\zeta^{k (u \cdot v)}$ where $\zeta = \exp{\frac{2 \pi I}{d}}$ and $u, v \in \{0, \dots, d-1\}$, and we consider uniform probability inputs $q(u,v) = 1/d^2$. It is readily seen that for prime $d$, the game matrices $\Phi_k$ for $k \in \{1, \dots, d-1\}$ are equal to each other up to a permutation of rows (or columns). Moreover, a direct calculation using $\sum_{j=0}^{d-1} \zeta^j = 0$ yields that $\Phi_k^{\dagger} \Phi_k = \leavevmode\hbox{\small1\kern-3.8pt\normalsize1}/d^3$, so that $\Vert \Phi_k \Vert = 1/d \sqrt{d}, \; \; \forall k \in [d-1]$. Substitution into Eq.(\ref{eq:xor-d-bound-3}) with $m_A = m_B = d$ yields the bound in Eq.(\ref{eq:Bavarian-bound}) for prime $d$.
Strictly analogous results are obtained for prime power $d = p^{r}$, where $p$ is prime and $r > 1$ is an integer. Note that here the operation $u \cdot v$ in the CHSH-d game is not defined as multiplication modulo $d$, but as multiplication in the finite field $\mathbb{F}_d$, see \cite{fields, Bavarian}. The non-zero elements of $\mathbb{F}_d$ under this multiplication operation form a cyclic group of size $d-1$, and we have $a^d = a, \; \; \forall a \in \mathbb{F}_d$. Here again,
the game matrices $\Phi_k$ for $k \in [d-1]$ are equal to each other up to a permutation of rows (or columns). By explicit calculation, using the following properties of the characters: $\chi_k(a+b) = \chi_k(a) \chi_k(b)$ for any $a, b \in \mathbb{F}_d$; $\chi_k(a) = 1 \Longleftrightarrow a = 0$ and $\sum_{a \in \mathbb{F}_d} \chi_k(a \cdot b) = 0$ for $b \neq 0$ we obtain that $\Phi_k^{\dagger} \Phi_k = \frac{1}{d^3} \leavevmode\hbox{\small1\kern-3.8pt\normalsize1}$ for all $k$. Substituting $\Vert \Phi_k \Vert = \frac{1}{d \sqrt{d}}, \; \; \forall k \in [d-1]$ into Eq.(\ref{eq:xor-d-bound-3}) with $|Q_A| = |Q_B| = d$ yields the bound.
\end{proof}
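The key algebraic step above, $\Phi_k^{\dagger} \Phi_k = \leavevmode\hbox{\small1\kern-3.8pt\normalsize1}/d^3$, is easy to verify numerically for small primes; a minimal sketch (ours, for the prime case only):

```python
import numpy as np

# Check of the key step above: for prime d, the CHSH-d game matrices with
# uniform inputs q(u,v) = 1/d^2 satisfy Phi_k^dagger Phi_k = (1/d^3) * I,
# hence ||Phi_k|| = 1/(d*sqrt(d)) for every k in {1, ..., d-1}.
for d in (2, 3, 5, 7):
    zeta = np.exp(2j * np.pi / d)
    for k in range(1, d):
        Phi = np.array([[zeta ** (k * u * v) / d**2 for v in range(d)]
                        for u in range(d)])
        gram = Phi.conj().T @ Phi
        assert np.allclose(gram, np.eye(d) / d**3)
        assert np.isclose(np.linalg.norm(Phi, 2), 1 / (d * np.sqrt(d)))
print("Phi_k^dagger Phi_k = (1/d^3) I verified for d = 2, 3, 5, 7")
```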
Given the quantum bound, a natural question is whether there are linear games where the quantum value $\omega_q(g^{l})$ equals one, i.e., can quantum strategies win a linear game with certainty? Interest in this question also stems from the domain of communication complexity. Following the results of \cite{vanDam, Wang}, any non-trivial total function \textsc{xor}-d game for prime $d$, with $n$ dits as input $\textbf{u} = (u_1, \dots, u_n), \textbf{v} = (v_1, \dots, v_n)$, is won by a no-signaling box that can result in a trivialization of communication complexity. To elaborate, it was shown that any no-signaling box that wins a non-trivial total function \textsc{xor}-d game for prime $d$ must contain as a sub-box one of the functional boxes of the form $P(a,b | u,v) = 1/d$ if $a \oplus_d b = f(u,v)$ and $0$ otherwise, for $a, b, u, v \in \{0, \dots, d-1\}$; having $d^n$ copies of the box and addressing this sub-box in each, Alice and Bob can compute any function with $d$ outputs with a single dit of communication, resulting in a trivialization of communication complexity.
We now apply the bound to exclude these boxes that result in a trivialization of communication complexity from the set of quantum boxes. In particular, the following Lemma \ref{comm-comp-lem} shows that no non-trivial game for a total function $f(u,v)$ (a total function is one which is defined for all input pairs $(u,v)$) within the class of \textsc{xor}-d games $\textsl{g}^{\oplus}$ with uniformly chosen inputs can be won by a quantum strategy, meaning that there is no pseudo-telepathy game \cite{Brassard} within this class.
\begin{lemma}
\label{comm-comp-lem}
For \textsc{xor}-d games $\textsl{g}^{\oplus}$ corresponding to total functions with $m$ questions per player, when the input distribution is uniform $q(u, v) = 1/m^2$, $\omega_q(\textsl{g}^{\oplus}) = 1$ iff $\omega_c(\textsl{g}^{\oplus}) = 1$, i.e., when rank$(\Phi_1) = 1$.
\end{lemma}
\begin{proof}
The constraint that the input distribution of questions to the players is uniform, $q(u,v) = 1/m^2$ for all $u, v$, implies $\Vert \Phi_k \Vert \leq 1/m$, since both the maximum (absolute value) column-sum and row-sum matrix norms equal $1/m$. Now $\omega_q(\textsl{g}^{\oplus}) = 1$ requires, from the bound in Eq.(\ref{eq:xor-d-bound-3}), that $\Vert \Phi_k \Vert = 1/m$ for all $k \in \{1, \dots, d-1\}$. Consider the matrix ${\Phi_1}^{\dagger} \Phi_1$, which has entries $({\Phi_1}^{\dagger} \Phi_1)_{u,v} = \sum_{w=1}^{m} q(w,u) q(w,v) \zeta^{-f(w,u) + f(w,v)}$, where $\zeta = \exp{(2 \pi I/d)}$ is the $d$-th root of unity. Let $\{ \lambda_j \}$ be the eigenvector of ${\Phi_1}^{\dagger} \Phi_1$ corresponding to the maximum eigenvalue $1/m^2$, with complex entries $\lambda_j = \vert \lambda_j \vert \zeta^{{\theta}_j}$. Let the entries of the eigenvector be ordered by absolute value, $\vert \lambda_1 \vert \geq \dots \geq \vert \lambda_m \vert$, and consider the eigenvalue equation corresponding to $\lambda_1$:
\begin{equation}
\sum_{v, w = 1}^{m} |\lambda_v| \zeta^{-f(w,1) + f(w,v) + \theta_v} = m^2 |\lambda_1| \zeta^{\theta_1}.
\end{equation}
Clearly, the above equation can only be satisfied when $\vert \lambda_j \vert = \vert \lambda_{j'} \vert \;\; \forall j, j'$ and when the phases align, i.e., when $f(w,v) - f(w,1) + \theta_v = f(w',v') - f(w',1) + \theta_{v'}\; \; \forall v,w,v',w'$; in particular, choosing $w = w'$, we get $f(w,v) - f(w,v') = \theta_{v'} - \theta_{v} \; \forall w,v,v'$. With all $|\lambda_{j}|$ equal, the remaining eigenvalue equations (for $u \neq 1$) lead to similar, consistent constraint equations.
We deduce that $\omega_q(\textsl{g}^{\oplus}) = 1$ only when the columns of the game matrix $\Phi_1$ are proportional to each other, the proportionality factor between columns $k, l$ being $\zeta^{f(u,k) - f(u,l)} = \zeta^{\theta_l - \theta_k}$. In this case (with $\text{rank}(\Phi_1) = 1$), a classical winning strategy which always exists for the first column of the game matrix $\Phi_1$ can be straightforwardly extended to a classical winning strategy for the entire game, meaning $\omega_c(\textsl{g}^{\oplus}) = 1$ also.
\end{proof}
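The rank-1 criterion of Lemma~\ref{comm-comp-lem} can be illustrated numerically; the following sketch (ours, with assumed small parameters $d = m = 3$) contrasts a ``trivial'' total function, whose columns of $\Phi_1$ are proportional, with the non-trivial CHSH-3 function:

```python
import numpy as np

# Illustration of Lemma 3 for d = m = 3 with uniform inputs q(u,v) = 1/m^2.
# A "trivial" total function f(u,v) = u + v mod d makes the columns of
# Phi_1 proportional: rank(Phi_1) = 1 and ||Phi_1|| = 1/m, so the bound
# permits omega_q = 1 (and indeed omega_c = 1). The non-trivial CHSH-3
# function has rank(Phi_1) > 1 and ||Phi_1|| < 1/m, forcing omega_q < 1:
# no pseudo-telepathy.
d, m = 3, 3
zeta = np.exp(2j * np.pi / d)
q = 1.0 / m**2

def phi1(f):
    return np.array([[q * zeta ** f(u, v) for v in range(m)] for u in range(m)])

trivial = phi1(lambda u, v: (u + v) % d)
assert np.linalg.matrix_rank(trivial) == 1
assert np.isclose(np.linalg.norm(trivial, 2), 1 / m)

chsh = phi1(lambda u, v: (u * v) % d)
assert np.linalg.matrix_rank(chsh) > 1
assert np.linalg.norm(chsh, 2) < 1 / m
print("rank-1 criterion matches the norm condition")
```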
It was recently shown that all the extremal points of the no-signaling polytope for any number of inputs and outputs cannot be realized within quantum theory \cite{our}. It remains an open question whether \textit{all} such vertices lead to a trivialization of communication complexity (at least in a probabilistic setting); if so, this would be a compelling reason for their exclusion from correlations that can be realized in nature. Also, note that while the exclusion of the boxes trivializing communication complexity from the quantum set is not surprising, we include it here as an illustration of the applicability of the bound. Indeed, in subsequent work \cite{GRRH+15}, the techniques used in this paper have also been applied to exclude boxes that win games corresponding to partial functions $f(u,v)$ from the quantum set; this further illustrates the utility of the technique, since these latter boxes do not trivialize communication complexity and therefore cannot be excluded on that basis.
\section{Linear games with no quantum advantage: the task of non-local computation.}
\label{sec:nlc}
Even though quantum non-local correlations cannot be used to transmit information, they enable the performance of several tasks impossible in the classical world, such as the expansion and amplification of intrinsic randomness, device-independent secure key generation, etc. An unexpected limitation of quantum correlations, however, is the fact that they do not provide any advantage over classical correlations in the performance of a fundamental information-theoretic task, namely the non-local distributed computation of Boolean functions \cite{NLC}, even though certain super-quantum no-signaling correlations do.
Consider a Boolean function $f(z_1, \dots, z_n)$ from $n$ bits to $1$ bit. A non-local (distributed) computation of the function is defined as follows. Two parties, Alice and Bob, are given inputs $(x_1, \dots, x_n)$ and $(y_1, \dots, y_n)$ obeying $x_i \oplus_2 y_i = z_i$, each bit $x_i, y_i$ being $0$ or $1$ with equal probability. This ensures that neither party has access to any input $z_i$ on their own. To perform the non-local computation, Alice and Bob must output bits $a$ and $b$ respectively such that $a \oplus_2 b = f(x_1 \oplus_2 y_1, \dots, x_n \oplus_2 y_n)$. Their goal is thus to maximize the probability of success in this task for some given input distribution $p(z_1, \dots, z_n) = p(x_1 \oplus_2 y_1, \dots, x_n \oplus_2 y_n)$. In \cite{NLC}, it was shown that, surprisingly, for \textit{any} input distribution $p(z_1, \dots, z_n)$, Alice and Bob sharing quantum resources cannot do any better than with classical resources (both give rise only to a linear approximation of the computation), while they could successfully perform the task if the resources they shared were limited by the no-signaling principle alone. This no-advantage in non-local computation (NANLC) was so striking that it was postulated as an information-theoretic principle that picks out quantum theory from among general no-signaling theories, in relation to the correlations that the theory gives rise to \cite{NLC}.
The above consideration of functions with a single-bit output is important since these encapsulate all decision problems, a natural class of problems used to define computational complexity classes. In the program of characterizing quantum correlations, however, we must consider functions with multi-bit outputs as well as functions with higher input and output alphabets. We now use the bound (\ref{eq:xor-d-bound-3}) to construct a generalized non-local computation task for functions with higher input-output alphabets.
Consider the following generalization of the non-local computation task to \textsc{xor}-d games, namely the computation of the function $g(z_1, \dots, z_n)$ with $z_i \in \{0, \dots, d-1\}$, where $d$ is a prime. In these games, which we label $NLC_d$, Alice and Bob receive $n$ dits $\textbf{x}_n = (x_1, \dots, x_n)$ and $\textbf{y}_n = (y_1, \dots, y_n)$ which obey $x_i \oplus_d y_i = z_i$. Their task is to output dits $a, b$ respectively such that
\begin{equation}
\label{eq:func-NLC}
a \oplus_d b = g(\textbf{x}_{n-1} \oplus_d \textbf{y}_{n-1}) \cdot (x_n\oplus_d y_{n}),
\end{equation}
where $\textbf{x}_{n-1} \oplus_d \textbf{y}_{n-1}$ is the dit-wise \textsc{xor} of the $n-1$ dits, i.e., $\{x_1 \oplus_d y_1, \dots, x_{n-1} \oplus_d y_{n-1}\}$ and $g$ is an arbitrary function from $n-1$ dits to $1$ dit. The inputs are chosen according to
\begin{align}\label{probdistr}
\frac{1}{d^{n+1}} p(\textbf{x}_{n-1} \oplus_d \textbf{y}_{n-1})
\end{align}
for $p(\textbf{x}_{n-1} \oplus_d \textbf{y}_{n-1})$ an arbitrary probability distribution. As mentioned previously, all unique games, including the \textsc{xor}-d games, have a no-signaling value of unity, so that in general $\omega_{ns}(NLC_d) = 1 > \omega_q(NLC_d)$.
We now present in Theorem \ref{thm-nlc} the result that the games $NLC_d$ defined above exhibit no quantum advantage; the detailed proof of this theorem is presented in Appendix \ref{sec:app-nlc}.
\begin{thm}
\label{thm-nlc}
The games $NLC_d$ for arbitrary prime $d$ and for input distribution satisfying \eqref{probdistr} have no quantum advantage, i.e., $\omega_c(NLC_d) = \omega_q(NLC_d)$.
\end{thm}
\textit{Sketch of proof.}
Consider the games $NLC_d$ for prime $d$ and an arbitrary number $n$ of input dits for each party. Denote the total number of inputs for each party by $m=d^n$, and the corresponding game matrices by ${\Phi^{(n)}_k}$. The $NLC_d$ games are composed of ``building-block games''
$G(t):=\DE{a \oplus_d b= t \cdot (x \oplus_d y)}$,
with $t \in \{0, \dots, d-1\}$.
Denote the Fourier vectors as $|f_j\rangle$, i.e.,
$|f_{j} \rangle = \left(1, \zeta^j, \dots, \zeta^{(d-1)j}\right)^T$, where as usual $\zeta = \exp{\frac{2 \pi I}{d}}$.
We find that ${\Phi^{(n)}_k}^{\dagger} \Phi^{(n)}_k$ are block-circulant matrices and are hence diagonal in the basis formed by the tensor products of the Fourier vectors $\{|f_{i_1}\rangle \otimes \dots \otimes |f_{i_{n}} \rangle\}$ with $i_1, \dots, i_n \in \{0, \dots, d-1\}$. Explicit calculation of the maximum eigenvector yields that $\| \Phi^{(n)}_k \| = d \Lambda$ for $\Lambda := \max_{i_n \in \{0, \dots, d-1\}} \lambda(i_n)$, with $\lambda(i_n)$ being the number of times the
game $G((d-1) \cdot i_n)$ appears in the first row of $\Phi^{(n)}_k$. Let $\mu \in \{0, \dots, d-1\}$ denote the value of $i_n$ for which the maximum of $\lambda(i_n)$ is achieved.
For prime $d$, we obtain the following bound
on the quantum value in the uniform case
\begin{equation}
\label{uni-q-bound}
\omega_q(NLC_d) \leq \frac{1}{d}\left(1 + \frac{(d-1) \Lambda}{d^{n-1}} \right).
\end{equation}
The explicit classical strategy where Alice outputs deterministically $a = \mu x_n$, independently of her input $\textbf{x}_{n-1}$, and Bob outputs $b = \mu y_n$, independently of his input $\textbf{y}_{n-1}$, recovers this bound.
\qed
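The coincidence of the classical strategy with the quantum upper bound can be checked exhaustively in the smallest non-trivial case; the following sketch (ours, for assumed parameters $d=3$, $n=2$ and uniform $p$) verifies it for every function $g$ of one dit:

```python
from fractions import Fraction
import itertools

# Exhaustive check of the no-quantum-advantage claim for d = 3, n = 2 and
# uniform p: for every g : Z_3 -> Z_3, the deterministic strategy
# a = mu*x_n, b = mu*y_n (mu = most frequent value of g, i.e. Lambda hits)
# attains exactly the bound (1/d)(1 + (d-1)*Lambda/d^(n-1)).
d, n = 3, 2
for g_tuple in itertools.product(range(d), repeat=d ** (n - 1)):
    g = dict(enumerate(g_tuple))                  # g takes one dit (n-1 = 1)
    counts = [sum(1 for z in g if g[z] == t) for t in range(d)]
    Lam = max(counts)
    mu = counts.index(Lam)
    win = Fraction(0)
    total = d ** (2 * n)                          # uniform inputs (x1,x2,y1,y2)
    for x1, x2, y1, y2 in itertools.product(range(d), repeat=2 * n):
        a, b = (mu * x2) % d, (mu * y2) % d
        target = (g[(x1 + y1) % d] * ((x2 + y2) % d)) % d
        if (a + b) % d == target:
            win += Fraction(1, total)
    bound = Fraction(1, d) * (1 + Fraction((d - 1) * Lam, d ** (n - 1)))
    assert win == bound
print("classical strategy meets the quantum bound for every g")
```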
Let us state some open questions in this line of research. Note that the slight restriction in Eq. (\ref{eq:func-NLC}) (a fixed dependence on $x_n \oplus_d y_n$) means that the games do not cover the entire class of functions considered in \cite{NLC}; it remains open whether there is no quantum advantage for the remaining functions in this class as well. It is also of interest to identify other tasks beyond NLC where quantum correlations do not provide an advantage over classical ones, and the bound should be useful in characterizing these. Also, we remark that the original NANLC principle (and most of the other principles proposed so far) is known not to pick out exactly the set of quantum correlations, since there exists a set of so-called almost quantum correlations \cite{Navascues} that also satisfies the principle. The generalized NANLC principle subsumes the original NANLC principle, since the latter corresponds to the special case $d=2$. While we expect this to be the case, it remains to be checked whether the generalized NANLC principle proposed here is also satisfied by the almost quantum set. Finally, it is also of interest to determine whether any of the inequalities corresponding to these games define facets of the classical polytope (a facet of a polytope is a face with dimension one less than that of the polytope). Games with this property (having $\omega_c = \omega_q$ and defining facets of the classical polytope) define non-trivial boundaries of the quantum set, and it has been posed as an open question in \cite{GYNI, UPB2} whether such games exist for two-party Bell scenarios.
\section{Conclusions.} In this paper, we have presented an easily computable bound on the quantum value of linear games, with particular emphasis on \textsc{xor}-d games for prime $d$. We have illustrated this bound by using it to rule out from the quantum set a class of no-signaling boxes that would result in a trivialization of communication complexity. To do this, we have shown that no uniform-input total-function \textsc{xor}-d game can be a pseudo-telepathy game. We have also shown how the recently discovered bound on the CHSH-d game in \cite{Bavarian} can be derived in a simple manner for prime and prime-power $d$; in this context it is interesting to note that these games have recently found application in relativistic bit commitment \cite{Jed}. Finally, we have extended the NANLC principle to general prime-dimensional outputs, showing that quantum theory provides no advantage over classical theories in the distributed non-local computation of a class of functions with prime-dimensional output.
In the future, it would be interesting to extend the proposed bound on the quantum value to classes of Bell inequalities beyond linear games, especially to the more general unique games. Further applications of the bound such as in the device-independent detection of genuine multipartite entanglement \cite{BGLP, GM} for arbitrary Hilbert space dimensions, in multi-party communication complexity, as well as in the identification of information processing tasks with no quantum advantage \cite{NLC}, are of immediate interest.
\textit{ Acknowledgements.} We thank P. Horodecki and M. Horodecki for useful discussions, as well as Matej Pivoluska and J\c edrzej Kaniewski for useful comments on an earlier version of this manuscript. R.R. is supported by the ERC AdG grant QOLAPS and the Foundation for Polish Science TEAM project co-financed by the EU European Regional Development Fund. R. A. acknowledges support from the ERC AdG grant OSYRIS, the EU project SIQS, the Spanish project FOQUS and the John Templeton Foundation. G.M. acknowledges support from the Polish Ministry of Science and Higher Education Grant no. IdP2011 000361 and the Brazilian agency Fapemig (Fundação de Amparo à Pesquisa do estado de Minas Gerais).
\section{Introduction}\label{sec:int}
Motivated by an increasing amount of observational evidence that
in the center of several galaxies there is a black hole \cite{evidence},
Herrera-Nucamendi (hereafter referred to as HN)
\cite{ulises} recently developed a theoretical approach
to determine the mass and rotation parameters of a Kerr black hole in terms
of the redshift-blueshift $z_{red}$, $z_{blue}$
of the photons emitted by particles orbiting around the black hole and
the radii of their trajectories.
In HN, an explicit expression for the rotation parameter
$a=a(r_c,z_{red},z_{blue},M)$ was found, where $r_c$ is the radius of
circular orbits; the mass parameter $M$, however, can only be found
by solving an eighth-order polynomial equation. These circular
orbits are required to be bounded and stable. In this context,
given a set of observational data $\{z_{red},z_{blue},r_c\}$, that is,
a set of red and blue shifts emitted by particles orbiting
a Kerr black hole at different radii, one would like to determine the
mass and rotation parameters in terms of that data set.
A detailed analysis of how this can be accomplished was recently
performed in \cite{us}, where the mass of the black hole
for SgrA$^{*}$ and its corresponding
angular momentum (recently estimated
\cite{estimates}: $M \sim 2.72 \times 10^6 M_{\odot}$
and $a \sim 0.9939 M$) were employed.
In that work, the mass parameter of axisymmetric
non-rotating compact objects, such as the Schwarzschild and Reissner-Nordstr\"om
black holes, was also found in terms of the red-blue shift of light
and the orbiting radius of the emitting particles.
In the present work, we carry out this sort of study on
Einstein-Maxwell-Dilaton (EMD) spacetimes. Dilaton fields coupled
to Einstein-Maxwell fields appear naturally in the low energy limit
of string theory and as a result of dimensional reduction of the
5-dimensional Kaluza-Klein theory. As a candidate for dark matter,
dilaton fields have been employed to explain the large scale structure
of the universe \cite{LS} and at galactic
level, to explain the rotation curves \cite{RC}. A dilatonic compact object
has also been studied as a gravitational lens \cite{Lens}. More recently,
the properties and dynamics of black holes within the EMD theory have been
considered \cite{Dynamics}. In this paper, we apply the HN
approach to some static exact solution of the EMD theory, namely,
the Chatterjee class of solutions \cite{MaciasMatos} and
the Gibbons-Maeda spacetime \cite{GM}.
Both solutions have the Schwarzschild black hole as a special case in certain
limit of one of their parameters, which amounts to make the dilaton field
vanish. In \cite{us}, an explicit formula $M=M(r_c,z)$ for the Schwarzschild
black hole was found, and it turned out that $z$ is bounded as
$|z|<1/\sqrt{2}$.
Hence, a study of the influence of the dilaton field upon the attainment
of $M=M(r_c,z)$ and the bounds of $z$ is carried out in this work.
We provide a brief summary of the HN theoretical scheme in section II,
and in section III we deal with the non-rotating examples mentioned above.
\section{H-N Theoretical Approach}
\label{review}
Starting with a rotating axisymmetric space-time
in spherical coordinates $(x^{\mu})=(t,r,\theta,\phi)$,
the geodesic equations for a massive
particle stem from the Lagrangian given by
\begin{equation}
\mathcal{L}= \frac{1}{2} \left (
g_{tt} \dot{t}^2+2 g_{t\phi}\dot{t} \dot{\phi} + g_{rr} \dot{r}^2 +
g_{\theta \theta} \dot{\theta}^2+ g_{\phi \phi} \dot{\phi}^2 \right )
\label{Lagrangian}
\end{equation}
The photons emitted by this massive particle move along null geodesics
whose equations come also from the same Lagrangian.
We assume that the space-time is endowed
with two Killing vectors, $\xi=(1,0,0,0)$ and $\psi=(0,0,0,1)$,
so the metric and the Lagrangian
depend only on the variables $r$ and $\theta$;
thus, there are two quantities
that are conserved along the geodesics of the massive particles
\begin{eqnarray}
p_t&=& \frac{\partial \mathcal{L}}{\partial \dot{t}} =
g_{tt} \dot{t} + g_{t \phi} \dot{\phi}=
g_{tt} U^t + g_{t \phi} U^{\phi}= -E, \nonumber \\
p_{\phi} &=& \frac{\partial \mathcal{L}}{\partial \dot{\phi}} =
g_{t \phi} \dot{t} + g_{\phi \phi} \dot{\phi}=
g_{t \phi} U^t + g_{\phi \phi} U^{\phi}= L,
\label{constants}
\end{eqnarray}
\noindent where $U^{\mu}=\frac{d x^{\mu}}{d \tau}$ is the 4-velocity
and $\tau$ is the proper
time. This 4-velocity is normalized to unity rendering
\begin{eqnarray}
-1 &=& g_{tt} (U^t)^2 + g_{rr} (U^r)^2 + g_{\theta \theta} (U^{\theta})^2+
g_{\phi \phi} (U^{\phi})^2 \nonumber \\
&& + g_{t \phi} U^t U^{\phi}.
\label{condition}
\end{eqnarray}
\noindent
Using (\ref{constants}), two of these 4-velocity components can
be written in terms of $g_{\mu \nu}$, $E$ and $L$ as
\begin{equation}
U^t = \frac{g_{\phi \phi} E + g_{t \phi} L}{g_{t\phi}^2 -g_{tt} g_{\phi \phi}}
\quad , \quad
U^{\phi} = -\frac{g_{t \phi} E + g_{tt } L}{g_{t\phi}^2 -g_{tt} g_{\phi \phi}}.
\label{us}
\end{equation}
\noindent
Then (\ref{condition}) yields
\begin{equation}
g_{rr} \left ( U^r \right )^2 + V_{eff} =0,
\label{veff}
\end{equation}
\noindent
where $V_{eff}$ is an effective potential given by
\begin{equation}
V_{eff}=
1+ g_{\theta \theta} \left ( U^{\theta} \right )^2 -
\frac{E^2 g_{\phi \phi}+ L^2 g_{tt} + 2 E L g_{t \phi}}
{g_{t \phi}^2-g_{tt}g_{\phi \phi}}.
\label{veffexplicit}
\end{equation}
\noindent
Our aim is to get the parameters of an axisymmetric
space-time in terms of the red and blue shifts
$z_{red}$ and $z_{blue}$ of light
emitted by massive particles orbiting around a compact object.
These photons have 4-momentum $k^{\mu}=(k^t,k^r,k^{\theta},k^{\phi})$
and move along null geodesics $k_{\mu} k^{\mu}=0$.
Using the same Lagrangian (\ref{Lagrangian})
one obtains two conserved quantities:
$-E_{\gamma}= g_{tt} k^t + g_{t\phi} k^{\phi}$ and
$L_{\gamma} =g_{\phi t} k^t + g_{\phi \phi} k^{\phi}$. By inverting these two
expressions one can write $k^t$ and $k^{\phi}$ in terms of $g_{\mu \nu}$,
$E_{\gamma}$ and $L_{\gamma}$.
The frequency shift $z$ associated to the emission and detection of
photons is defined as
\begin{equation}
1+z= \frac{\omega_e}{\omega_d}=
\frac{- k_{\mu} U^{\mu} |_e }{- k_{\nu} U^{\nu} |_d}.
\label{zzz}
\end{equation}
\noindent where $\omega_e$ is the frequency emitted by an observer
moving with the massive particle at the emission point $e$ and $\omega_d$
the frequency detected by an observer far away from the source
of emission, and $U^{\mu}_e$ and $U^{\mu}_d$ are the 4-velocities of the emitter and
detector, respectively. If the detector is located very far away from
the source ($r \to \infty$), then
$U^{\mu}_d=(1,0,0,0)$, since $U^r_d,U^{\theta}_d,U^{\phi}_d \to 0$
and $U^t_d=E=1$. The frequency $\omega_e= -k_{\mu}U^{\mu}|_e$ is
explicitly given by \\
$\omega_e = \left ( E_{\gamma}U^t-L_{\gamma}U^{\phi}-g_{rr} U^r k^r-
g_{\theta \theta} U^{\theta} k^{\theta} \right)|_e $,\\
\noindent
with a similar expression for $\omega_d$; hence (\ref{zzz}) becomes
\begin{equation}
1+z=
\frac{\left ( E_{\gamma}U^t-L_{\gamma}U^{\phi}-g_{rr} U^r k^r-
g_{\theta \theta} U^{\theta} k^{\theta} \right)|_e}
{ \left ( E_{\gamma} U^t - L_{\gamma} U^{\phi}-g_{rr} U^r k^r-
g_{\theta \theta} U^{\theta} k^{\theta} \right)|_d }.
\label{oneplusz}
\end{equation}
\noindent This is a general expression for the red-blue shifts of light
emitted by massive particles that are orbiting around a compact object
measured by a distant observer. We shall study the particular case of
circular ($U^r=0$) and equatorial ($U^{\theta}=0$) motion, which
simplifies the expression (\ref{oneplusz}) to
\begin{equation}
1+z=
\frac{\left ( E_{\gamma}U^t-L_{\gamma}U^{\phi} \right)|_e}
{ \left ( E_{\gamma} U^t - L_{\gamma} U^{\phi} \right)|_d }
= \frac{U^t_e-b_eU^{\phi}_e}{U^t_d - b_d U^{\phi}_d}
\label{simple}
\end{equation}
\noindent where the apparent impact parameter $b=L_{\gamma}/E_{\gamma}$
of photons was introduced. Since it is defined in terms of quantities that are
conserved all the way from the point of emission to the point of
detection, one has that $b_e=b_d$. In addition, the
kinematic redshift-blueshift of photons $z_{kin}=z-z_c$ was considered;
here $z_c$ is the redshift corresponding to a photon emitted by a static
particle located at $b=0$
\begin{equation}
1+z_c= \frac{U^t_e}{U^t_d}.
\label{zc}
\end{equation}
\noindent The kinematic redshift $z_{kin} = (1+z) - (1+z_c)$ can
then be written as
\begin{equation}
z_{kin} =
\frac{(U^t-bU^{\phi}-\frac{1}{E_{\gamma}}g_{rr}U^rk^r-\frac{1}{E_{\gamma}}
g_{\theta \theta} U^{\theta}k^{\theta})|_e}
{(U^t-bU^{\phi}-\frac{1}{E_{\gamma}}g_{rr}U^rk^r-\frac{1}{E_{\gamma}}
g_{\theta \theta} U^{\theta}k^{\theta})|_d}-\frac{U^t_e}{U^t_d}
\label{zcine}
\end{equation}
\noindent
The analysis may be performed with either $z_{kin}$ using (\ref{zcine})
or with $z$ using (\ref{oneplusz}). We work with $z_{kin}$ in this paper.
The general expression (\ref{zcine}) acquires a rather simple form for
circular orbits ($U^r=0$) in the equatorial plane ($U^{\theta}=0$)
\begin{equation}
z_{kin}= \frac{U^t_e U_d^{\phi} b_d- U^t_d U^{\phi}_e b_e}
{U^t_d(U^t_d-b_d U^{\phi}_d)}.
\label{zkin}
\end{equation}
In (\ref{zkin}) we still need to take into account the bending of
light due to the gravitational field, that is, to find
$b$ as a function of the location of the emitter $r$:
$b=b(r)$. The criterion employed in \cite{ulises} to
construct this mapping was to choose the maximum value of $z$ at a fixed
distance from the observed center of the source (at a fixed
$b$). From $k_{\mu} k^{\mu}=0$ with $k^r=k^{\theta}=0$ one arrives at
\begin{equation}
b_{\pm}= \frac{-g_{t \phi} \pm \sqrt{g_{t \phi}^2-g_{tt}g_{\phi \phi}}}{g_{tt}},
\label{bmm}
\end{equation}
\noindent $b_{\pm}$ can be evaluated at the emitter or detector
position. Since in general there are two different values of
$b_{\pm}$, there will be two different values of $z$ for photons
emitted by a receding ($z_1$) or an approaching ($z_2$) object with
respect to a distant observer. These kinematic shifts of photons emitted
on either side of the central value $b=0$ read
\begin{equation}
z_1=\frac{U^t_e U^{\phi}_d b_{d_{-}}-U^t_d U^{\phi}_e b_{e_{-}}}
{U^t_d(U^t_d-U^{\phi}_d b_{d_{-}})},
\label{z1}
\end{equation}
\begin{equation}
z_2=\frac{U^t_e U^{\phi}_d b_{d_{+}}-U^t_d U^{\phi}_e b_{e_{+}}}
{U^t_d(U^t_d-U^{\phi}_d b_{d_{+}})}.
\label{z2}
\end{equation}
For the case of static space-times, that is, for
$g_{t \phi}=0$, the apparent impact parameter becomes
$b_{\pm}= \pm \sqrt{-g_{\phi \phi}/g_{t t}}$
and the effective potential (\ref{veffexplicit}) acquires a
rather simple form
\begin{equation}
V_{eff} = 1 + \frac{E^2}{g_{t t}}+\frac{L^2}{g_{\phi \phi}}.
\label{potentialreduce}
\end{equation}
\noindent
For circular orbits, $V_{eff}$ and its derivative
$\frac{d V_{eff}}{dr}$ vanish.
From these two conditions one finds two general expressions
for the constants of motion $E^2$ and $L^2$, valid for any non-rotating
axisymmetric space-time
\begin{equation}
E^2=-\frac{g_{tt}^2 g_{\phi \phi}^{\prime}}
{g_{tt} g_{\phi\phi}^{\prime}-g_{tt}^{\prime} g_{\phi \phi}},
\quad L^2=\frac{g_{\phi \phi}^2 g_{tt}^{\prime}}
{g_{tt} g_{\phi\phi}^{\prime} - g_{tt}^{\prime}g_{\phi \phi} },
\label{E2L2}
\end{equation}
\noindent
where primes denote derivatives with respect to $r$.
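As a quick numerical check (our own illustrative sketch, not part of the original derivation; variable names are ours), one can verify that substituting $E^2$ and $L^2$ from (\ref{E2L2}) into (\ref{potentialreduce}) makes $V_{eff}$ and its radial derivative vanish at the orbit radius, here for the Schwarzschild metric in the equatorial plane with geometrized units:

```python
# Sketch: check Eq. (E2L2) against Eq. (potentialreduce) for Schwarzschild
# (equatorial plane, geometrized units, M = 1). Both V_eff and dV_eff/dr
# must vanish at the circular-orbit radius r.
M, r = 1.0, 10.0
g_tt = lambda x: -(1.0 - 2.0 * M / x)   # metric component g_tt
g_pp = lambda x: x**2                    # metric component g_phi-phi
dgtt = lambda x: -2.0 * M / x**2         # d g_tt / dr
dgpp = lambda x: 2.0 * x                 # d g_phi-phi / dr

den = g_tt(r) * dgpp(r) - dgtt(r) * g_pp(r)
E2 = -g_tt(r)**2 * dgpp(r) / den         # Eq. (E2L2)
L2 = g_pp(r)**2 * dgtt(r) / den

V = lambda x: 1.0 + E2 / g_tt(x) + L2 / g_pp(x)   # Eq. (potentialreduce)
h = 1e-6
print(V(r), (V(r + h) - V(r - h)) / (2.0 * h))    # both ~ 0
```

The same check works for any static metric by swapping the four metric functions.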
Stability of these circular orbits requires
$V_{eff}^{\prime \prime}>0$. If we use (\ref{E2L2}), a general expression for
$V_{eff}^{\prime \prime}$ in terms of $g_{\mu \nu}$ and its derivatives is found
\begin{equation}
V_{eff}^{\prime \prime} =
\frac{g_{\phi \phi}^{\prime} g_{tt}^{\prime\prime}
- g_{tt}^{\prime}
g_{\phi \phi}^{\prime \prime}}{g_{tt}g_{\phi \phi}^{\prime}-g_{tt}^{\prime} g_{\phi \phi}}
+\frac{2 g_{tt}^{\prime} g_{\phi \phi}^{\prime}}{g_{\phi \phi}g_{tt}}.
\label{segundita}
\end{equation}
\noindent In the same way, using the explicit forms of $E$ and $L$
from (\ref{E2L2}) in (\ref{us}),
expressions for the 4-velocities solely in terms of the metric components
are obtained
\begin{equation}
U^{\phi}=\sqrt{\frac{g_{t t}^{\prime}}
{g_{tt} g_{\phi\phi}^{\prime} - g_{tt}^{\prime} g_{\phi \phi}}}, \quad
U^t= -\sqrt{\frac{-g_{\phi \phi}^{\prime}}
{g_{tt} g_{\phi\phi}^{\prime} - g_{tt}^{\prime} g_{\phi \phi} }}.
\label{velocidades}
\end{equation}
\noindent
The angular velocity of the particles in these circular paths is
$\Omega = \sqrt{-g_{tt}^{\prime}/g_{\phi \phi}^{\prime}}$.
Since $b_{+}= - b_{-}$, the redshift $z_1=z_{red}$ and the blueshift $z_2=z_{blue}$
are equal in magnitude but opposite in sign, $z_1= -z_2$; the explicit expression is
\begin{equation}
z_1=\frac{-U^t_e U^{\phi}_d b_{d_{+}}+U^t_d U^{\phi}_e
b_{e_{+}}}{U^t_d(U^t_d+U^{\phi}_db_{d_{+}})}.
\label{zfinal}
\end{equation}
\noindent
Moreover, if the detector is located far away from the compact object
($r_d \to \infty$), then, as mentioned before,
$U_d^{\mu} \to (1,0,0,0)$, and (\ref{zfinal}) becomes
\begin{equation}
z_1 = U^{\phi}_e b_{e_{+}}=
\sqrt{\frac{-g_{\phi \phi}g_{tt}^{\prime}}{g_{tt} (g_{tt} g_{\phi\phi}^{\prime} -
g_{tt}^{\prime} g_{\phi \phi} )}}.
\label{zfinalFB}
\end{equation}
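Equation (\ref{zfinalFB}) is straightforward to evaluate numerically for any static metric. The sketch below (our own, with finite-difference derivatives; not part of the original text) does so for the Schwarzschild metric, for which one can check that (\ref{zfinalFB}) reduces to $z^2 = Mr/[(r-2M)(r-3M)]$:

```python
import math

def z_emitter(g_tt, g_pp, r, h=1e-6):
    """Eq. (zfinalFB): z = sqrt(-g_pp g_tt' / (g_tt (g_tt g_pp' - g_tt' g_pp))),
    with the radial derivatives approximated by central differences."""
    gtt, gpp = g_tt(r), g_pp(r)
    dgtt = (g_tt(r + h) - g_tt(r - h)) / (2.0 * h)
    dgpp = (g_pp(r + h) - g_pp(r - h)) / (2.0 * h)
    return math.sqrt(-gpp * dgtt / (gtt * (gtt * dgpp - dgtt * gpp)))

# Schwarzschild, equatorial plane, geometrized units with M = 1
M = 1.0
z_num = z_emitter(lambda r: -(1.0 - 2.0 * M / r), lambda r: r**2, 8.0)
z_exact = math.sqrt(M * 8.0 / ((8.0 - 2.0 * M) * (8.0 - 3.0 * M)))
print(z_num, z_exact)  # agree to within the finite-difference error
```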
\section{Dilatonic Objects}
Since dilaton and Einstein-Maxwell fields appear in the low energy
limit of string theory and as a result of dimensional reduction of
five-dimensional Kaluza-Klein theory, the four-dimensional effective
action of these theories can be written in the following form
\cite{horne}
\begin{equation}
S= \int d^4x \sqrt{-g} \left [ -R + 2 (\nabla \Phi)^2 +
e^{-2 \alpha \Phi} F_{\mu \nu} F^{\mu \nu} \right ]
\label{accion}
\end{equation}
\noindent where $g=det(g_{\mu \nu})$, $R$ is the Ricci scalar,
$\Phi$ the dilaton field, $F_{\mu \nu}$ the Faraday electromagnetic
tensor and $\alpha \ge 0$ the dilaton coupling constant whose
value determines the special theories contained in (\ref{accion}).
While $\alpha = \sqrt{3}$ leads to the Kaluza-Klein field equations
obtained by dimensional reduction from the five-dimensional Einstein
vacuum equations, $\alpha=1$ leads to the low energy
limit of string theory, and $\alpha=0$
leads to the Einstein-Maxwell theory minimally coupled to the dilaton
scalar field. The field equations obtained from the action (\ref{accion})
read
\begin{equation}
\left ( e^{-2\alpha \Phi} F^{\mu \nu} \right )_{;\mu}=0,
\label{eqn1}
\end{equation}
\begin{equation}
\Phi^{;\mu}_{\mu} + \frac{\alpha}{2} e^{-2\alpha \Phi} F_{\mu \nu}F^{\mu \nu}=0,
\label{eqn2}
\end{equation}
\begin{equation}
R_{\mu \nu} = 2 \Phi_{,\mu} \Phi_{,\nu} + 2 e^{-2\alpha \Phi}
\left ( F_{\mu \sigma}F_{\nu}^{\sigma}-\frac{g_{\mu \nu}}{4}
F_{\beta \gamma}F^{\beta \gamma} \right )
\label{eqn3}
\end{equation}
\noindent where a comma means partial differentiation and a semicolon
represents covariant derivative with respect to $g_{\mu \nu}$.
\subsection{Generalized Chatterjee space-time}
The generalized Chatterjee space-time was found some years ago
\cite{MaciasMatos}, and takes the form
\begin{equation}
ds^2 = \frac{dr^2}{(1-\frac{2M}{r})^{\delta}} +
(1-\frac{2M}{r})^{1-\delta} r^2 d\Omega^2
- (1-\frac{2M}{r})^{\delta} dt^2
\label{MM}
\end{equation}
\noindent where, as usual, $d\Omega^2=d\theta^2+\sin^2{\theta} d\phi^2$ and
$\delta$ is a free parameter. The solution possesses a dilaton scalar field
but no electromagnetic field. The dilaton field reads
\begin{equation}
e^{-2\alpha \Phi} = \frac{\kappa_0^2}{4} \left ( 1- \frac{2M}{r} \right )^{-\alpha \sqrt{|1-\delta^2|}}
\label{dilaton}
\end{equation}
\noindent It is apparent that for $\delta=1$ the dilaton field
becomes constant and (\ref{MM}) reduces to the Schwarzschild black hole
with its event horizon at $r=2M$. For $\delta=1/2$, the metric
components are real provided that $r>2M$. We will restrict ourselves
to the region $r>2M$ henceforth.
The solution (\ref{MM}) includes the following three known
cases (see the details in \cite{MaciasMatos}):
\begin{enumerate}
\item $\delta=1$ corresponds to the Schwarzschild black hole.
\item For $\delta=1/2$ and $\alpha=\sqrt{3}$, the solution (\ref{MM}) reduces
to the Kaluza-Klein soliton, which corresponds to the
Chatterjee solution.
\item $\delta=2$ with arbitrary $\alpha$ represents a black hole within the
dilaton gravity framework.
\end{enumerate}
\subsubsection{Case $\delta=1$}
In \cite{us} the relationship (\ref{zfinalFB}) for a Schwarzschild
black hole ($\theta=\pi/2$) turned out to be
\begin{equation}
M= r_c \mathcal{F}(z) \quad \text{where}
\quad \mathcal{F}_{\pm}(z)=\frac{1+5z^2\pm \sqrt{1+10z^2+z^4}}{12z^2}.
\label{Mrz}
\end{equation}
\noindent where $M$ is the mass parameter, $r_c$ the radius of
the circular orbit of the light-emitting
massive particle, and $z$ its redshift. These
circular orbits are stable as long as
$V_{eff}^{\prime \prime}>0$. Since
\begin{equation}
V_{eff}^{\prime \prime}= \frac{2M(r_c-6M)}{r_c^2(r_c-2M)(r_c-3M)},
\label{V2explicita}
\end{equation}
\noindent is positive provided that $r_c > 6 M$,
i.e. $\frac{r_c}{M}= \mathcal{F}^{-1} > 6$, which
is fulfilled if and only if $|z| < 1/\sqrt{2}$,
and solely for the minus sign $\mathcal{F}_{-}(z)$.
Therefore, from a measurement of the redshift $z$ of light
emitted by a particle that follows
a circular orbit of radius $r_c$ in the equatorial plane around a
Schwarzschild black hole, the mass parameter is determined by
$M = r_c \mathcal{F}_{-}(z)$, where $z$ must satisfy $|z|< 1/\sqrt{2}$.
The energy, angular momentum, velocities $U^t$, $U^{\phi}$, and
the angular velocity of
the emitter can be computed from (\ref{E2L2}) and
(\ref{velocidades}), and written as functions
of the measurable redshift $z$ and radius $r_c$ of the circular
orbit of the photon source \cite{us}. Since $|z|<1/\sqrt{2}$,
all these five quantities $E^2,L^2,U^{t},U^{\phi},\Omega$
are bounded by $z$ given a specific circular orbit of radius $r_c$.
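A minimal sketch of the inversion $M = r_c \mathcal{F}_{-}(z)$, including the bound $|z|<1/\sqrt{2}$ (the function names are ours, for illustration only). The round trip below generates the shift of an $M=1$ orbit from the Schwarzschild relation $z^2 = M r_c/[(r_c-2M)(r_c-3M)]$, which is equivalent to (\ref{Mrz}), and recovers $M$:

```python
import math

def F_minus(z):
    """Minus branch of Eq. (Mrz): M / r_c as a function of the measured shift z."""
    z2 = z * z
    return (1.0 + 5.0 * z2 - math.sqrt(1.0 + 10.0 * z2 + z2 * z2)) / (12.0 * z2)

def schwarzschild_mass(r_c, z):
    """Mass parameter from a single (r_c, z) observation; stable circular
    orbits require |z| < 1/sqrt(2)."""
    if not 0.0 < abs(z) < 1.0 / math.sqrt(2.0):
        raise ValueError("need 0 < |z| < 1/sqrt(2) for a stable circular orbit")
    return r_c * F_minus(z)

# Round trip: choose M and r_c, compute the shift, recover M
M, r_c = 1.0, 10.0
z = math.sqrt(M * r_c / ((r_c - 2.0 * M) * (r_c - 3.0 * M)))
print(schwarzschild_mass(r_c, z))  # 1.0 up to rounding
```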
\subsubsection{Cases $\delta \neq 1$}
For the Schwarzschild black hole, the redshift is bounded as
$|z|<1/\sqrt{2}$. We will now determine
whether $z$ is bounded for $\delta \neq 1$, and the dependence
of the mass parameter $M$ in terms of the frequency shift of photons
emitted by particles traveling in circular orbits of radius $r_c$, that is,
$M=M(r_c,z)$ will be found for any given $\delta \neq 1$.
To that end, we start with the relationship (\ref{zfinalFB})
which for this metric becomes
\begin{equation}
F(M;r,z)=z^2 \left [ r-M(1+2\delta) \right ] \left ( 1- \frac{2M}{r}
\right )^{\delta} - \delta M = 0.
\label{z2chatterjee}
\end{equation}
\noindent There is no analytic solution of this equation for arbitrary
$\delta$; its roots have to be found numerically.
Circular orbits on the equatorial plane exist provided that the
following two conditions
\begin{eqnarray}
E^2 &=& \left ( 1-\frac{2M}{r} \right )^{\delta}
\frac{r-M(1+\delta)}{r-M(1+2\delta)} >0 \nonumber \\
L^2 &=& \left ( 1-\frac{2M}{r} \right )^{-\delta}
\frac{M r \delta (r-2M)}{r-M(1+2\delta)} >0
\label{E2L2Chatter}
\end{eqnarray}
\noindent are simultaneously fulfilled. Since $r>2M$, $L^2$ is
positive as long as $r-M(1+2\delta)>0$; the fulfillment of this
inequality also guarantees that $E^2>0$. Stability of these circular orbits
requires that
\begin{equation}
V''(r)=\frac{2M \delta \left [ (2+6\delta+4\delta^2)M^2 + r^2 - 2Mr (1+ 3 \delta) \right ] }{r^2 (r-2M)^2 (r-M(1+2\delta))}
\label{V2charttejee}
\end{equation}
\noindent be positive. For $\delta < 1/\sqrt{5}$, it turns out that
$V''>0$ for all $r>2M$ with $r>M(1+2\delta)$ (the conditions for
circular orbits to exist). For $\delta \ge 1/\sqrt{5}$, (\ref{V2charttejee})
can be written as
\begin{equation}
V''= \frac{2M\delta (r-M \Lambda_{+})(r-M\Lambda_{-})}{r^2(r-2M)^2(r-M(1+2\delta))},
\label{VppFactor}
\end{equation}
\noindent where $\Lambda_{\pm}=1+3\delta \pm \sqrt{5\delta^2-1}$.
From this expression, it is apparent that circular orbits will be stable
for either $\Lambda_{+} < r/M$ or $r/M <\Lambda_{-}$. To perform the
analysis, we proceed in the following manner: for a fixed value of
$\delta$, a domain $\mathcal{D}_{r,z}= (r_{min},r_{max}) \times (z_{min},z_{max})$
is set and a search for positive roots of (\ref{z2chatterjee})
is carried out. We numerically find these roots using a hybrid
algorithm, a combination of bisection and Newton-Raphson
methods \cite{NR}.
At some points of the domain $\mathcal{D}_{r,z}$
there is more than one positive root of (\ref{z2chatterjee}).
We then test whether
the conditions (\ref{E2L2Chatter}) and $V'' >0$ for circular and stable orbits
are simultaneously satisfied. Those
roots of (\ref{z2chatterjee}) at a point $q \in \mathcal{D}_{r,z}$
that do not fulfill the conditions for circular and stable orbits are
discarded. We have found that not every point
$q \in \mathcal{D}_{r,z}$
admits a root of $F(M;r,z)=0$ that
leads to a stable circular orbit of radius $r_c$
followed by a photon-emitting particle; only in a
subset $\mathcal{D} \subset \mathcal{D}_{r,z}$ does such a mass parameter exist.
Moreover, in all the surveys we have performed on domains
of different sizes, at every point $q \in \mathcal{D}$
the parameter $M$ attained is unique.
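The root search just described can be sketched as follows (a simplified illustration of ours: plain bisection on a single bracket rather than the hybrid bisection/Newton-Raphson scheme, and only the circularity and stability conditions quoted above are checked):

```python
import math

def F(M, r, z, delta):
    """Left-hand side of Eq. (z2chatterjee)."""
    return (z * z * (r - M * (1.0 + 2.0 * delta))
            * (1.0 - 2.0 * M / r) ** delta - delta * M)

def chatterjee_mass(r, z, delta):
    """Bisect F(M) = 0 on (0, r/2): F(0+) = z^2 r > 0 while
    F -> -delta r/2 < 0 as M -> r/2, so a root is bracketed."""
    lo, hi = 1e-12, r / 2.0 - 1e-12
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if F(lo, r, z, delta) * F(mid, r, z, delta) <= 0.0:
            hi = mid
        else:
            lo = mid
    M = 0.5 * (lo + hi)
    # accept only roots giving circular (r - M(1+2 delta) > 0) and
    # stable (bracket of Eq. (V2charttejee) positive) orbits
    assert r - M * (1.0 + 2.0 * delta) > 0.0
    assert ((2.0 + 6.0 * delta + 4.0 * delta**2) * M**2 + r**2
            - 2.0 * M * r * (1.0 + 3.0 * delta)) > 0.0
    return M

# delta = 1 must reproduce the Schwarzschild value M = r_c F_-(z)
r_c = 10.0
z = math.sqrt(1.0 * r_c / ((r_c - 2.0) * (r_c - 3.0)))  # shift produced by M = 1
print(chatterjee_mass(r_c, z, delta=1.0))  # ~1.0
```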
We work with geometrized units ($G=c=1$) and we scale $M$ and $r$ by a
multiple of the solar mass $p M_{\odot}$.
We show the plots of $M=M(z,r_c)$ in figure \ref{masses}
for the three special cases: Kaluza-Klein soliton
($\delta=0.5$), Schwarzschild black hole ($\delta=1$) and dilaton
black hole ($\delta=2$). It can be seen that as $\delta$ increases,
the mass obtained decreases and is symmetric with respect to
the frequency shift $z$ ($z_{red}>0$ and $z_{blue}<0$). Given a set of $N$
pairs $\{ r_c, z\}_i$ of observed redshifts (blueshifts) $z$ of emitters
traveling around a Chatterjee object along circular orbits of radii $r_c$,
a Bayesian statistical analysis could be carried out in order to estimate
the black hole mass parameter.
\begin{figure}[htp]
\resizebox{70mm}{!}{\includegraphics{Masa05.eps}}\\
\resizebox{70mm}{!}{\includegraphics{Masa1.eps}}\\
\resizebox{70mm}{!}{\includegraphics{Masa2.eps}}\\
\caption{Mass parameters in terms of redshift ($z>0$) or
blueshift ($z<0$) of photons emitted by particles
in circular orbits of radius $r$ around
the Chatterjee space-time. The upper graph corresponds to
the case of the Kaluza-Klein soliton ($\delta=1/2$), the middle
graph corresponds to the Schwarzschild black hole ($\delta=1$)
and lower graph corresponds to a black hole in the dilaton gravity framework
($\delta=2$). $M$ and $r$ are in geometrized units and scaled by
$p M_{\odot}$ where $p$ is an arbitrary proportionality factor.}
\label{masses}
\end{figure}
Regarding the frequency shift bounds: for each $\delta > 1/2$
there is a $z_b$ such that for $|z|< z_b$ there exists a
unique value of the mass parameter $M(r_c,z)$, and $z_b(\delta)$
does not depend on the radii $r_c$ of the circular orbits of photon emitters.
$z_b=z_b(\delta)$ is shown in the upper plot of figure
\ref{deltavsz}, asterisks (in blue) represent the
three known cases: Kaluza-Klein solution ($\delta=1/2$),
Schwarzschild black hole ($\delta=1$) and dilaton black hole
($\delta=2$). For $\delta$'s in a small vicinity of unity, the bound $z_b$
varies little around $1/\sqrt{2}$, which is the Schwarzschild frequency shift
bound. For $\delta \in (1.7,3)$, $|z_b - 1/\sqrt{2}|<0.04$.
Nonetheless, for $\delta \in (1/\sqrt{5},1/2)$, there is a
significant departure from the Schwarzschild bound; furthermore,
there are two frequency shift regions with a positive root
$M(r_c,z)$ of (\ref{z2chatterjee}) corresponding to circular and stable orbits
of photon emitters. The lower plot in figure \ref{deltavsz} shows
these two regions: $|z| > z_{b2}$ and $|z|<z_{b1}$ are
physically acceptable. For $0 < \delta < 1/\sqrt{5}$ the stability condition
$V''>0$ always holds and there is no bound for $z$.
\begin{figure}[htp]
\resizebox{60mm}{!}{\includegraphics{delta_zeta1New.eps}}\\
\resizebox{60mm}{!}{\includegraphics{deltazetaNew.eps}}\\
\caption{ The upper plot shows the value $z_b=z_b(\delta)$
such that, for $|z|<z_b$ there exists a
unique value of the mass parameter $M(r,z)$.
Asterisks (in blue) represent the
three known cases: Kaluza-Klein solution ($\delta=1/2$), Schwarzschild
black hole ($\delta=1$) and dilaton black hole ($\delta=2$).
In the lower plot, the two regions $|z| > z_{b2}$ and $|z|<z_{b1}$ are
physically acceptable. }
\label{deltavsz}
\end{figure}
The upper plot in figure \ref{zChatt} shows the frequency shift $|z|$ as a
function of $r/M$ for $\delta=0.45$. There is a discontinuity between the
physically acceptable frequency shift values corresponding to
$|z| > z_{b2}$ (green curve) and $|z|<z_{b1}$ (red curve).
The lower plot shows $|z|=|z|(r/M)$ for $\delta=0.5$
(red curve) and $\delta=2$ (green curve).
In the upper and lower plots, the dashed black curves
correspond to the Schwarzschild black hole.
For $\delta$'s not close to unity, there is an evident departure
from the Schwarzschild curve that might allow
us to distinguish (if observational data were available)
whether a Schwarzschild or Chatterjee space-time sits at the center of a galaxy.
\begin{figure}[htp]
\resizebox{55mm}{!}{\includegraphics{Zdelta45.eps}} \vspace{0.7cm}\\
\resizebox{55mm}{!}{\includegraphics{Zdelta52.eps}}\\
\caption{The upper plot ($\delta=0.45$) has a gap corresponding
to $|z| > z_{b2}$ (green curve) and $|z|<z_{b1}$ (red curve).
The lower plot shows $|z|=|z|(r/M)$ for $\delta=0.5$
(red curve) and $\delta=2$ (green curve).
In both plots, the dashed black curves correspond to the
Schwarzschild black hole. }
\label{zChatt}
\end{figure}
\subsection{Gibbons-Maeda space-time}
The Gibbons-Maeda metric could represent a charged static black hole
with a scalar field \cite{GM}. This metric reads
\begin{equation}
ds^2=-( 1-\frac{2M}{r}) dt^2 + ( 1-\frac{2M}{r})^{-1}
dr^2 + r (r- \frac{Q^2}{M}) d\Omega^2
\label{GM}
\end{equation}
\noindent
For $Q=0$ (\ref{GM}) becomes the Schwarzschild black hole. We shall work in the
region $r>2M$. Particles will follow circular and equatorial orbits
in Gibbons-Maeda space-time provided that
\begin{eqnarray}
L^2 &=& \frac{2r(Q^2-Mr)^2}{2M(2Q^2+r^2)-(6M^2+Q^2)r} > 0 \nonumber \\
E^2 &=& \frac{(r-2M)^2(-Q^2+2Mr)}{2M(2Q^2+r^2)-(6M^2+Q^2)r} > 0.
\label{E2L2GM}
\end{eqnarray}
\noindent $L^2>0$ holds if and only if
\begin{eqnarray}
P(r,M)&=&2M(2Q^2+r^2)-(6M^2+Q^2)r \nonumber \\
&=& 2M (r-r_{-})(r-r_{+})>0
\label{PrM}
\end{eqnarray}
\noindent where
\begin{equation}
r_{\pm}=\frac{6M^2+Q^2\pm 6\sqrt{(M^2-Q^2/2)(M^2-Q^2/18)}}{4M}
\label{rmasmenos}
\end{equation}
In order for $r_{\pm}$ to be real, $2M^2>Q^2$ must hold.
Since for the Schwarzschild black hole circular orbits exist solely for
$r>3M$, and since $r_{-} \to 0$ and $r_{+} \to 3M$ as $Q \to 0$, we work in
the region $r>r_{+}$. It turns out that $r>r_{+}$ and $2M^2>Q^2$ imply
the fulfillment of $E^2>0$. Circular orbits are stable as long as the condition
\begin{equation}
V''=\frac{-4M^2(2Q^4-6MrQ^2+M(6M-r)r^2)}{P(r,M) r^2 (Mr-Q^2)(r-2M)} > 0,
\label{VppGM}
\end{equation}
\noindent holds and this is so as long as
$Mr^3-6M^2r^2+6MQ^2r-2Q^4>0$. The relation that connects $M,r,Q^2$ and $z$
comes from (\ref{zfinalFB}) and it reads
\begin{equation}
z^2=\frac{2Mr(Mr-Q^2)}{(r-2M) \left [ 2M(2Q^2+r^2)-(6M^2+Q^2)r \right ] }
\label{z2Maeda}
\end{equation}
In order to find $M=M(r_c,z;Q^2)$ one has to solve this cubic equation for $M$.
Equation (\ref{z2Maeda}) has either three real roots or one real plus
two imaginary roots. We keep only the real positive roots $M=M(r_c,z;Q^2)$ that
fulfill the conditions for circular and stable orbits, and it turns out
that, from Cardano's analytic formula, the only one that satisfies
those conditions is
\begin{equation}
M(r_c,z,Q^2)=\left [ 2 \sqrt{\frac{-p}{3}} \cos{(\frac{\phi}{3}+\frac{4 \pi}{3})}- \frac{a}{3} \right ] \quad \mbox{where} \nonumber
\end{equation}
\begin{eqnarray}
p &=& b - \frac{a^2}{3}, \quad q = c-\frac{ab}{3}+\frac{2a^3}{27},
\quad
\phi = \cos^{-1}\left [ \frac{\sqrt{27} q}{2p\sqrt{-p}} \right ] \nonumber \\
a &=& -\left ( \frac{2Q^2}{3r_c} + \frac{5z^2+1}{6z^2}r_c \right ), \quad
c = - \frac{r_cQ^2}{12} \nonumber \\
b &=& \left ( Q^2 \frac{3z^2+1}{6z^2} + \frac{r_c^2}{6} \right ).
\label{pqphi}
\end{eqnarray}
This analysis is performed numerically in the following
manner: we set a domain $\mathcal{D}=(Q_1,Q_2)\times (r_1,r_2) \times (z_1,z_2)$;
for each $q \in \mathcal{D}$ the roots of (\ref{z2Maeda}) are
found. With these roots at hand, we check whether the following conditions
are all satisfied: (i) $r>2M$, (ii) $r > r_{+}$, (iii) $2 M^2 > Q^2$ and
(iv) $M r^3 - 6M^2 r^2 + 6 MQ^2r-2Q^4 > 0$. The second and third inequalities
guarantee that indeed, we have circular and equatorial orbits; the fourth
inequality guarantees stability of these orbits. As mentioned, it
turns out that the only root satisfying all the conditions has the form
given by (\ref{pqphi}). For given values of $r_c$ and $Q^2$,
we search for the minimum and maximum value of $z$ for which these four
conditions simultaneously hold; this process yields bounds on the frequency
shifts. For $Q= 0$, the result for Schwarzschild ($|z| < z_b= 1/\sqrt{2}$)
is recovered as it should be. Figure \ref{Cotas} shows the surfaces
$z_{min}=z_{min}(r_c,Q^2)$ and $z_{max}=z_{max}(r_c,Q^2)$. Only for
frequency shifts $z$ such that $|z| \in (z_{min},z_{max})$,
the corresponding values for the mass parameter $M=M(r_c,z,Q)$
are acceptable. We observe that the larger $Q^2$ is, the narrower the gap
between $z_{min}$ and $z_{max}$ becomes. For large $Q^2$'s and small $r_c$'s,
$z_{min}$ and $z_{max}$ sharply increase.
\begin{figure}[htp]
\centerline{ \epsfysize=6.5cm \epsfbox{Cotas2.eps}}
\caption{The two redshift surfaces $z_{min}$ and $z_{max}$
as a function of the radius $r$ of circular orbits followed by photon
emitters around a Gibbons-Maeda spacetime and of its charge
parameter $Q^2$ are shown. Only for redshifts $z \in (z_{min},z_{max})$
the corresponding values $M=M(r,z,Q^2)$ are acceptable.
$M$, $Q$, and $r$ are in geometrized units and scaled by $pM_{\odot}$
where $p$ is an arbitrary factor of proportionality .}
\label{Cotas}
\end{figure}
Due to condition (iii), $2M^2 > Q^2$, as the charge $Q^2$ increases,
the corresponding acceptable value $M(r_c,z,Q^2)$ also increases. The upper
plot of figure \ref{MGM} shows $M=M(r_c,z,Q^2=10)$, with $M>\sqrt{5}$ as it
should be according to condition (iii); the lower plot shows
$M=M(r_c,z,Q^2=70)$, with $M>\sqrt{35}$.
$M$, $r_c$ and $Q$ are in geometrized units and scaled by $p M_{\odot}$
where $p$ is an arbitrary proportionality factor. For $Q=0$, the result
of the Schwarzschild black hole shown in the middle plot of figure
\ref{masses} with $z_b=1/\sqrt{2}$ is recovered.
\begin{figure}[htp]
\resizebox{75mm}{!}{\includegraphics{Masa2a.eps}}\\
\resizebox{75mm}{!}{\includegraphics{Masa4a.eps}}\\
\caption{Mass parameters in terms of redshift ($z>0$) and
blueshift ($z<0$) of photons emitted by particles
in circular orbits of radius $r$ around
Gibbons-Maeda space-time. The upper plot shows $M(r,z,Q^2=10)$ and the
lower plot $M(r,z,Q^2=70)$. $M$, $r$ and $Q$
are in geometrized units and scaled by $p M_{\odot}$ where $p$ is an
arbitrary proportionality factor.}
\label{MGM}
\end{figure}
\section{Final remarks}
There have been efforts to find astrophysical signatures of scalar fields.
In \cite{Matos-Brena} the following spacetime was considered
\begin{eqnarray}
ds^2 &=&-( 1-\frac{2M}{r} ) dt^2 + e^{2k_s} \frac{dr^2}{1-2M/r}
\nonumber \\
&& + r^2 \left ( e^{2k_s} d\theta^2 + \sin^2{\theta}\, d\phi^2 \right )
\label{MB}
\end{eqnarray}
\noindent where
\begin{eqnarray}
e^{2k_s}&=& \left ( 1 + \frac{M^2 \sin^2{\theta}}{r^2 (1-2M/r)} \right )^{-1/b^2}
\nonumber \\
\Phi&=&\frac{1}{2b} \ln \left( 1-\frac{2M}{r} \right ),
\label{PhiMB}
\end{eqnarray}
\noindent with $b$ an integration constant. As $b \to \infty$, the scalar field
vanishes and the Schwarzschild black hole is recovered. The metric
(\ref{MB}) was employed to model the Sun, and the deflection of light
rays passing nearby was computed; the authors found that the current
observational errors for this effect set the limit $b > 0.02$. One of their
conclusions was that even if a scalar field of the form (\ref{PhiMB})
were actually present in the solar system, it could not presently be detected.
If we apply the HN formalism to the metric (\ref{MB}), considering equatorial
and circular orbits, then $M=M(r_c,z)$ and the bound for $z$
would be exactly the same as that for the Schwarzschild black hole,
since the components $g_{rr}$ and $g_{\phi \phi}$
are just the same. As a matter of fact, any solution of the EMD theory that,
for equatorial and circular orbits, leaves $g_{rr}=(1-2M/r)^{-1}$ and
$g_{\phi \phi}=r^2$ would yield the same results as the
Schwarzschild metric.
From our analysis of the Chatterjee spacetime, we observed that, for
$\delta$'s not close to unity, the departure of our results from those
of the Schwarzschild black hole is noticeable and becomes truly apparent
for $\delta \in (1/\sqrt{5},1/2)$, where
there are two frequency-shift regions with a positive root
$M(r_c,z)$ of (\ref{z2chatterjee}) corresponding to circular and stable orbits
of photon emitters. For the Gibbons-Maeda metric, the bounds on the frequency
shifts are two surfaces $z_{min}(r_c,Q^2)$ and $z_{max}(r_c,Q^2)$ whose gap
narrows as $Q^2$ increases and $r_c$ decreases, as shown in
figure \ref{Cotas}. It would be interesting to apply this formalism to
rotating dilaton black holes \cite{horne} and contrast the results with those
found in \cite{us} for the Kerr black hole. \\
\noindent {\bf ACKNOWLEDGMENTS} \\
S.V-A acknowledges partial support from PRODEP, under 4025/2016RED project.
F.A and R. B. acknowledge partial support by CIC-UMSNH.
The authors thank professor Ulises Nucamendi for useful comments and
discussions on the results of this work.
\thebibliography{99}
\bibitem{evidence} M. B. Begelman, Evidence for black holes, Science
{\bf 300}, 1898 (2003). Z. Q. Shen, K. Y. Lo, M.-C. Liang,
P. T. P. Ho, and J.-H. Zhao, A size of $\approx$ 1 au for the radio
source Sgr $A^*$ at the center of the Milky Way, Nature (London)
{\bf 438}, 62 (2005). A. M. Ghez, S. Salim, N. N. Weinberg,
J. R. Lu, T. Do, J. K. Dunn, K. Matthews, M. R. Morris, S. Yelda, E.
E. Becklin, T. Kremenek, M. Milosavljevic, and J. Naiman, Measuring
distance and properties of the Milky Way's central supermassive
black hole with stellar orbits, Astrophys. J. {\bf 689}, 1044
(2008). M. R. Morris, L. Meyer, and A. M. Ghez, Galactic center
research: Manifestations of the central black hole,
Res. Astron. Astrophys. {\bf 12}, 995 (2012)
\bibitem{ulises} Alfredo Herrera and Ulises Nucamendi,
Kerr black hole parameters in terms of the redshift/blueshift of photons
emitted by geodesic particles, Phys. Rev. D {\bf 92}, 045024 (2015).
\bibitem{us} Ricardo Becerril, Susana Valdez-Alvarado, Ulises Nucamendi,
Obtaining mass parameters of compact objects from redshifts and blueshifts
emitted by geodesic particles around them, Phys. Rev. D {\bf 94},
124024 (2016).
\bibitem{estimates}
B. Aschenbach, N. Grosso, D. Porquet and P. Predehl,
X-ray flares reveal mass and angular momentum of the Galactic Center
black hole, A \& A {\bf 417}, 71 (2004).
\bibitem{LS} T. Matos, F.S. Guzman and L. Ure\~na. Scalar Fields as Dark
Matter in the Universe, Class. Quantum Grav. {\bf 17}, (2000)
1707. Mikel Susperregi, Dark matter and dark energy from an
inhomogeneous dilaton, Phys. Rev. D {\bf 68}, 123509 (2003).
\bibitem{RC} T. Matos and F.S. Guzman. Scalar fields as dark matter in
spiral galaxies, Class. Quantum Grav. {\bf 17}, L9 (2000).
\bibitem{Lens} T. Matos and R. Becerril, An axially symmetric scalar field
as a gravitational lens, Class. Quantum Grav. {\bf 18}, 2015 (2001).
\bibitem{Dynamics} Eric W. Hirschmann, Luis Lehner, Steven L. Liebling,
and Carlos Palenzuela, Black Hole Dynamics in Einstein-Maxwell-Dilaton Theory.
arXiv:1706.09875, (2017).
\bibitem{MaciasMatos} T. Matos and A. Mac\'ias. Black holes from generalized
Chatterjee solutions in dilaton gravity, Modern Physics Letters A {\bf 9},
3707 (1994).
\bibitem{GM} G. W. Gibbons and Kei-ichi Maeda.
Black holes and membranes in higher-dimensional theories with dilaton fields,
Nuclear Physics B {\bf 298}, 741 (1988).
David Garfinkle, Gary T. Horowitz and Andrew Strominger,
Charged black holes in string theory, Phys. Rev. D {\bf 43}, 3140 (1991).
\bibitem{NR} William H. Press, Saul A. Teukolsky, William T. Vetterling and Brian P. Flannery, Numerical Recipes. Cambridge University Press.
\bibitem{Matos-Brena} T. Matos and Hugo V. Brena, Possible astrophysical signatures of scalar fields, Class. Quantum Grav. {\bf 17}, 1455 (2000).
\bibitem{horne} James H. Horne and Gary T. Horowitz, Rotating dilaton black holes. Phys. Rev. D {\bf 46}, 1340 (1992).
\end{document}
\section{Introduction}
\label{sec:introduction}
The perturbative QCD
description of many observables measured at colliders is plagued by
large corrections arising from soft and collinear parton emission,
even for fairly generic kinematical conditions.
For example, near threshold, large logarithmic corrections remain
\cite{Sterman:1987aj,Catani:1989ne}
after cancellation of singular virtual and real gluon contributions,
their large size being a result of the nearby threshold
restricting the real gluons to be soft. In terms of a (Mellin)
variable $N$, in terms of which threshold is approached by
$N \rightarrow \infty$, such large threshold corrections
take the form ($L = \ln N$),
\begin{equation}
\label{eq:29}
\alpha_s^i \sum_{j=0}^{2i}\, a_{ij} L^j\,,
\end{equation}
where the $a_{ij}$ depend in general on the process.
Another example
\cite{Dokshitzer:1978yd,Parisi:1979se,Altarelli:1984pt,Collins:1981uk,Collins:1981va,Collins:1985kg}
is when an identified part $F_P$ of a final state has
acquired small transverse momentum by soft recoil
($Q_T$) against the remaining, unmeasured part of the final state. Then the
perturbative expression for the differential cross section with
respect to $p_T$ of $F_P$ takes again the form of Eq.~\eqref{eq:29},
but with different coefficients $a_{ij}$ and with $L = \ln b$, $b$ being
the impact parameter Fourier conjugate to $Q_T$.
Such large logarithmic corrections can be brought
under control by all-order resummation, and there exists a large literature
demonstrating the viability, where applicable, of threshold, recoil as well
as their joint resummation, for a wide variety of observables.
It is interesting to try to extend all-order control
to classes of large terms beyond the logarithmic corrections.
One such new set consists of numerically
large constants (``$\pi^2$ terms'') originating from the same
infrared-sensitive regions of those Feynman diagrams that also
produce the logarithms \cite{Parisi:1980xd,Magnea:1990zb,Eynck:2003fn}.
Another important class of potentially large terms, of soft-collinear origin,
can be represented as
\begin{equation}
\label{eq:30}
\alpha_s^i \sum_{j=0}^{2i-1}\, d_{ij} \frac{\ln^jN}{N}\,.
\end{equation}
Their phenomenological importance was first demonstrated in
Ref.~\cite{Kramer:1996iq} in which the leading terms $j=2i-1$
were also summed to all orders for Higgs production
and Drell-Yan. The assessment of these terms was made
more meaningful in the context of a complete next-to-next-leading
order (NNLO)
\cite{Harlander:2002wh,Harlander:2001is,Anastasiou:2002yz,Anastasiou:2004xq,Ravindran:2003um,Ravindran:2003ia,Catani:2001ic}
calculation, and a consistent next-to-next-leading logarithmic (NNLL)
threshold-resummed result \cite{Catani:2003zt}. It is not yet clear how to sum next-to-leading terms in \eqref{eq:30}.
In this paper we examine the impact of the leading terms in
\eqref{eq:30} for a single particle inclusive observable,
the $p_T$ spectrum of prompt photons produced in hadronic collisions.
We do this in the context of both a
threshold \cite{Laenen:1998qw,Catani:1998tm,Catani:1999hs,Kidonakis:1999hq,Bolzoni:2005xn}
and joint \cite{Laenen:2000de,Laenen:2000ij,Sterman:2004yk,Li:1998is}
resummed calculation for this spectrum.
The paper is organized as follows. In section 2 we review briefly the
threshold and joint resummed prompt photon $p_T$ distribution.
In section 3 we describe and motivate our extension to include
the leading $\alpha_s^k \tfrac{\ln^{2k-1}N}{N}$ terms. In section 4
we assess the numerical impact of these corrections, and
we conclude in section 5.
\section{Threshold and joint resummation for prompt photon production}
\label{sec:thresh-joint-resumm}
We consider the inclusive $p_T$ distribution of prompt photons produced
in hadron-hadron collisions at center of mass (cm) energy $\sqrt{S}$
\begin{equation}
\label{eq:el-proc}
h_A(p_A) + h_B(p_B) \to \gamma(p_c)+X \>,
\end{equation}
where $h_{A,B}$ refer to the two incoming hadrons
and $X$ to the unobserved part of the final state.
The lowest order QCD processes producing the prompt photon
at partonic cm energy $\sqrt{s}$ are
\begin{equation}
\label{eq:parton-proc}
\begin{split}
&q(p_a) + \bar q(p_b) \to \gamma(p_c) + g(p_d)\>, \\
&g(p_a) + q(p_b) \to \gamma(p_c) + q(p_d)\>.
\end{split}
\end{equation}
The distance to threshold is customarily measured
by the variable $1-x_T^2$, where $x_T^2 = 4p_T^2/S$. At the parton level this
distance is given by $1-\hat{x}_T^2 = 1-4p_T^2/s$.
Below we review the result for
the joint resummed prompt photon $p_T$ distribution.
At the end of this section we recall
how the threshold resummed result may be derived
from it.
The joint resummation formalism for prompt photon production
\cite{Laenen:2000de,Laenen:2000ij}
implements the notion that, in
the presence of soft QCD radiation
with summed transverse momentum ${\bf Q}_T$ of soft
recoiling partons,
the actual transverse momentum produced
by the hard collision is not ${\bf p}_T$
but rather ${\bf p}'_T={\bf p}_T-{\bf Q}_T/2$.
Stated more precisely: in the context of a refactorization analysis
\cite{Laenen:2000ij} one can
identify a short-distance process at cm energy $Q$
that produces a prompt photon with momentum ${\bf p}'_T$ in a recoiling frame.
One defines accordingly $\tilde x^2_T = 4 {p^\prime}^2_T/Q^2$.
The extreme situation $Q_T = 2 p_T$ in which all transverse momentum is produced
through soft recoil leads to a singularity in the short-distance process,
which we avoid by imposing an
upper limit $\bar{\mu}$ on $Q_T$ \cite{Laenen:2000de}. A recently proposed
extension \cite{Sterman:2004yk} of joint resummation avoids this
singularity.
The joint resummed $p_T$ distribution of prompt photons in hadronic
collisions is written as
\begin{eqnarray}
\label{eq:6}
{p_T^3 d \sigma^{({\rm resum})}_{AB\to \gamma+X} \over dp_T}
&=& \sum_{ij} \frac{p_T^4}{8 \pi S^2} \int_{\cal C} {dN \over 2 \pi i}\;
f_{i/A}(N,\mu_F) f_{j/B}(N,\mu_F)
\nonumber\\
&\ & \hspace{5mm} \times
\; \int_0^1 d\tilde x^2_T \left(\tilde x^2_T \right)^N\;
{|M_{ij}(\tilde x^2_T)|^2\over \sqrt{1-\tilde{x}_T^2}}\,
\;
C^{(ij\to \gamma k)}(\alpha_s(\mu),\tilde
x_T^2)
\nonumber \\
&& \hspace{10mm} \times
\int {d^2 {\bf Q}_T \over (2\pi)^2}\;
\Theta\left(\bar{\mu}-Q_T\right)
\left( \frac{S}{4 {\bf p}_T'{}^2} \right)^{N+1}\nonumber\\
&\ & \hspace{15mm} \times
\int d^2 {\bf b} \,
{\rm e}^{i {\bf b} \cdot {\bf Q}_T} \,
\exp\left[E_{ij\to \gamma k}\left( N,b,\frac{4 p_T^2}{\tilde
x^2_T},\mu_F \right)\right]\, .
\end{eqnarray}
Let us explain each of the terms on the right hand side of Eq.~\eqref{eq:6}.
The top line displays the moments of
standard parton distribution functions, as well as the sum over
initial state parton flavors.
The next line contains the Mellin
transform over the partonic scaling variable $\tilde{x}_T^2$ in the recoiling
frame, the Born amplitudes,
and the $N$- and $b$-independent hard virtual corrections summarized in $C^{(ij\to \gamma k)}$.
The second to last line contains the integral over the recoil
momentum of the soft partons,
as well as a kinematic factor linking
recoil and threshold effects.
The last line contains the Sudakov exponentials from
initial and final state partons, as well as soft wide-angle radiation
in combined Mellin-impact parameter space.
As indicated in the last line of Eq.~\eqref{eq:6}, large threshold and recoil
logarithms,
expressed through $\ln N$ and
$\ln b$, can be resummed into an exponential form.
The perturbative exponential moment dependence at next-to-leading
logarithmic (NLL) accuracy is given by
\begin{multline}
\label{eq:11}
E_{ij\rightarrow \gamma k}^{\rm PT} (N,b,Q,\mu,\mu_F) = \\
E^{\rm PT}_i (N,b,Q,\mu,\mu_F) + E^{\rm PT}_j (N,b,Q,\mu,\mu_F) + F_k (N,Q,\mu) + G_{ijk} (N,\mu)\, .
\end{multline}
Let us discuss each of these terms in turn. The
initial state perturbative exponent reads, in integral form
\begin{equation}
\label{eq:39}
E^{\rm PT}_i (N,b,Q,\mu,\mu_F)
= -\int_{Q^2/\chi^2}^{Q^2} {d k_T^2\over k_T^2}\; \left\{\
A_i\left(\alpha_s(k_T)\right)\; \ln\left(\frac{Q}{\bar{N}k_T} \right)\right\}
- 2\ln \bar{N}\int^{Q^2}_{\mu_F^2} {d k_T^2\over k_T^2}A_i\left(\alpha_s(k_T)\right)
\end{equation}
where
$\mu,\mu_F$ are the renormalization and factorization scale, respectively.
The function $\chi(N,b)$ defines the $N$- and $b$-dependent scale of
soft gluons to be included in the resummation,
and is chosen as \cite{Kulesza:2002rh}
\begin{equation}
\label{eq:18}
\chi(N,b)=\bar{b} + \frac{\bar{N}}{1+\frac{\eta{\bar{b}}}{\bar{N}}}\; ,
\end{equation}
where $\eta$ is a suitably chosen constant and
\begin{equation}
\label{eq:17}
\bar{N} = N e^{\gamma_E}, \quad
\bar{b} = bQ e^{\gamma_E}/2\,,
\end{equation}
with $\gamma_E$ the Euler constant. An older form used in \cite{Laenen:2000de}
\begin{equation}
\label{eq:19}
\chi(N,b)=\bar{b} + \bar{N}
\end{equation}
generates spurious subleading logarithms in $Q_T$ \cite{Kulesza:2002rh}.
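As a quick check of the interpolating behaviour of Eq.~\eqref{eq:18} (a standalone numerical sketch with illustrative values), one can verify that $\chi \to \bar N$ for $\bar b \ll \bar N$ (threshold-dominated regime) and $\chi \to \bar b$ for $\bar b \gg \bar N$ (recoil-dominated regime):

```python
def chi(Nbar, bbar, eta=0.25):
    """Interpolating function of Eq. (18), chi = bbar + Nbar/(1 + eta*bbar/Nbar)."""
    return bbar + Nbar / (1.0 + eta * bbar / Nbar)

# Threshold-dominated limit, bbar << Nbar: chi ~ Nbar
# Recoil-dominated limit,    bbar >> Nbar: chi ~ bbar
```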
We postpone elaborating on the integral in Eq.~\eqref{eq:39} to the next
section.
The final state jet exponent reads to NLL accuracy
\begin{equation}
\label{eq:12}
F_k (N,Q,\mu) \equiv \frac{1}{\alpha_s (\mu)} f^{(0)}_k (\lambda) +
f^{(1)}_k (\lambda,Q,\mu)\, ,
\end{equation}
where
\begin{equation}
\label{eq:25}
\lambda = b_0 \alpha_s(\mu^2)\ln \bar{N} \,.
\end{equation}
The exponent associated with wide angle soft radiation is
\begin{equation}
\label{eq:22}
G_{ijk} (N) \equiv g^{(1)}_{ijk} (\lambda)\, .
\end{equation}
The functions $f^{(0,1)}_k$ and $g^{(1)}_{ijk} (\lambda)$
as well as the functions $C^{(ij\to \gamma k)}$ \cite{Catani:1998tm,Catani:1999hs}
are listed in the Appendix.
A nonperturbative term must be added to the perturbative exponent in
Eq.~\eqref{eq:11}
in order to regularize the limit in which $Q_T$ is very small. As in Refs.~\cite{Laenen:2000de,Laenen:2000ij}
we take
\begin{equation}
\label{eq:37}
E^{\mathrm{NP}}_{ij} = -\tfrac{1}{2} g_{\mathrm{NP}} b^2 \qquad
ij = q\bar{q}, \, qg \,.
\end{equation}
The threshold-resummed result can now be derived by simply
neglecting $\bf Q_T$ in the factor $( S/[4 |{\bf p}_T - {\bf Q}_T/2|^2])^{N+1}$
in Eq.~\eqref{eq:6}. Then the $\bf Q_T$ integral sets $\bf b$ to zero
everywhere, yielding the threshold-resummed result.
\section{Including leading $\ln N/N$ terms}
\label{sec:including-leading-ln}
The leading terms in Eq.~\eqref{eq:30} originate from both initial and final state
radiation, and to resum them we will use two different methods upon which we
elaborate in this section.
There are moreover two classes of functions in momentum space at order
$\alpha_s^j$ that generate the leading
$\ln^{2j-1}N/N$ terms upon Mellin transformation. In terms of the
variable $z$, $0<z<1$, which in the present case can represent either
$\hat{x}_T^2$ or $\tilde{x}_T^2$,
one of the two classes is formed by
the singular
plus distributions $[\ln^{2j-1}(1-z)/(1-z)]_+$, the other by the singular
but integrable $\ln^{2j-1}(1-z)$.
The $\ln N/N$ contributions from the former can be computed
using the methods of \cite{Catani:1996yz} and can be found e.g. in
Ref.~\cite{Kramer:1996iq}.
The $\ln N/N$ contributions from the latter can be generated at any order in
perturbation theory by a simple
replacement in the resummed expression (see Eq.~\eqref{eq:21} below),
expanding the resulting expression to the desired order, and
keeping the leading term in Eq.~\eqref{eq:30}. Roughly speaking,
the replacement
is equivalent to exchanging at order $j$ one
soft-collinear gluon (corresponding to one factor
$\alpha_s \ln^2 N$) for a hard-collinear one
(corresponding to a factor $\alpha_s \ln N/N$)
\begin{equation}
\label{eq:31}
\alpha_s^k \ln^{2k} N \rightarrow \alpha_s^k \frac{\ln^{2k-1}N}{N}\,.
\end{equation}
This replacement is easily included in the existing threshold resummation
formulae. A preliminary study for prompt photon production was carried
out in Ref.~\cite{Mathews:2004pu}.
We employ this replacement method for the final state related
$\alpha_s^k \ln^{2k-1}N/N$ terms.
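The claim that the integrable class $\ln^{2j-1}(1-z)$ produces $1/N$-suppressed logarithms can be verified directly at the lowest order, since the exact Mellin transform is $\int_0^1 dz\, z^{N-1}\ln(1-z) = -H_N/N \simeq -(\ln N + \gamma_E)/N$ at large $N$. A small numerical sketch (purely illustrative):

```python
import math

def mellin_log(N, steps=200_000, tmax=50.0):
    """Midpoint-rule evaluation of  int_0^1 dz z^{N-1} ln(1-z)
    after the substitution 1 - z = exp(-t), which removes the
    integrable endpoint singularity at z = 1."""
    h = tmax / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * h
        total += (1.0 - math.exp(-t)) ** (N - 1) * (-t) * math.exp(-t)
    return total * h

def harmonic(N):
    """Harmonic number H_N = sum_{k=1}^{N} 1/k."""
    return sum(1.0 / k for k in range(1, N + 1))
```

For $N=20$ the quadrature reproduces $-H_{20}/20$, and the asymptotic form $-(\ln N+\gamma_E)/N$ is already accurate at the percent level, exhibiting the $\ln N/N$ behaviour of Eq.~\eqref{eq:30}.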
It was pointed out in Refs.~\cite{Kulesza:2002rh,Kulesza:2003wn} that
the initial state related $\alpha_s^k \ln^{2k-1}N/N$
terms could be generated in the context of joint resummation by extending
evolution of parton densities to a soft scale. We will use this
method as well, for the first time for a one-particle inclusive observable.
We now discuss the initial and final state $\ln N/N$ contributions
in turn.
\subsection{Initial state}
\label{sec:initial-state}
Our procedure for the initial state follows
Refs.~\cite{Kulesza:2002rh,Kulesza:2003wn}, where the joint resummation for
electroweak or Higgs boson production at mass $Q$ and transverse
momentum $Q_T$ was given.
We recall the key points here.
The integral form of the initial state NLL exponent \eqref{eq:39} can be written as
\begin{multline}
\label{eq:13}
E_i^{\rm PT}(N,b,Q,\mu,\mu_F) =
-\int_{Q^2/\chi^2}^{Q^2} {d k_T^2\over k_T^2}\; \left\{\
A_i\left(\alpha_s(k_T)\right)\; \ln\left(\frac{Q}{k_T} \right)
+ B_i\left(\alpha_s(k_T)\right)\right\}\\
+ \int_{\mu_F^2}^{Q^2/\chi^2} {d k_T^2\over k_T^2}\;
\left\{\
- \ln \bar N A_i\left(\alpha_s(k_T)\right) - B_i\left(\alpha_s(k_T)\right)
\right\}\,.
\end{multline}
The first term in this expression leads to
\begin{equation}
\label{eq:4}
E_i^{\rm PT}(N,b,Q,\mu) =
\frac{1}{\alpha_s (\mu)}h_i^{(0)} (\beta) +
h_i^{(1)} (\beta,Q,\mu) \; ,
\end{equation}
where
\begin{equation}
\label{eq:38}
\beta = b_0\, \alpha_s (\mu)
\ln \left( \chi \right) \, .
\end{equation}
We recall that the $\chi$ depends on $N$ and $b$ through Eq.~\eqref{eq:18}.
The functions $h_i^{(0,1)}$ are listed in the Appendix.
The second term represents flavor-conserving evolution to NLL accuracy
(the integrand consists of the $\ln N$ and constant terms
for the anomalous dimension matrix $\gamma_{i/j}(N)$ for $j=i$)
from the hard scale $\mu_F$ to the soft scale $Q/\chi$.
One now performs the replacement \cite{Kulesza:2002rh,Kulesza:2003wn}
\begin{equation}
\label{eq:15}
- A_i(\alpha_s) \ln\left( \bar{N}\right) - B_i(\alpha_s) \; \longrightarrow \;
\gamma_{i/i}(N,\alpha_s) \; ,
\end{equation}
that includes the leading, flavor-diagonal $\ln N/N$ effects
generated by the $k_T$ integral
(the $1/N$ part of $\gamma_{i/i}$ combines with the $\ln N$ terms).
In fact, one may go further and include the off-diagonal contributions via
the replacement
\begin{equation}
\label{eq:16}
\delta_{ig}\, \exp\left[ \frac{-A_g^{(1)}\ln\bar{N} -
B_g^{(1)}}{2\pi b_0} \, s(\beta) \right] \, f_{g/H}(N ,\mu_F)\;
\longrightarrow \;
{\cal E}_{ik} \left(N,Q/\chi,\mu_F\right) \,
f_{k/H}(N ,\mu_F)\, ,
\end{equation}
where $s(\beta) = \ln(1-2\beta)$ plus NLL corrections.
As a result, we can replace in Eq.~\eqref{eq:6} the combination
\begin{equation}
\label{eq:10}
f_{i/A}(\mu_F,N) f_{j/B}(\mu_F,N)
\exp\left[E^{\rm PT}_i (N,b,Q,\mu,\mu_F) + E^{\rm PT}_j (N,b,Q,\mu,\mu_F)\right]
\end{equation}
by
\begin{equation}
\label{eq:2}
{\mathcal C}_{i/A}(Q,b,N ) \; {\mathcal C}_{j/B}(Q,b,N)\;
\exp\left[E_i^{\rm PT}(N,b,\mu,Q)+E_j^{\rm PT}(N,b,\mu,Q)\right]
\end{equation}
where
\begin{equation}
\label{eq:5}
{\mathcal C}_{i/H}(Q,b,N)
= \sum_{k} {\cal E}_{ik} \left(N,Q/\chi,\mu_F\right) \,
f_{k/H}(N ,\mu_F) \; .
\end{equation}
The matrix $\mathcal{E}$ implements evolution from scale $\mu_F$ to scale $Q/\chi$,
and is normalized to be the unit matrix if these two scales are equal.
Note that the dependence on $\mu_F$ cancels among the factors in Eq.~\eqref{eq:5}.
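The action of the evolution matrix in Eq.~\eqref{eq:5} can be sketched as follows. This is only an illustration of the LO flavor-diagonal non-singlet entry with a one-loop running coupling (the calculation in the text uses the full NLO anomalous dimension matrix), and $\Lambda_{\rm QCD}$ and the scales below are illustrative assumptions. Quark number conservation fixes ${\cal E}_{qq}(N=1)=1$ for any pair of scales:

```python
import math

CF = 4.0 / 3.0
NF = 5
B0 = (33.0 - 2.0 * NF) / (12.0 * math.pi)   # one-loop coefficient b_0

def alpha_s(mu2, lam2=0.04):
    """One-loop running coupling; lam2 = Lambda_QCD^2 in GeV^2 (illustrative)."""
    return 1.0 / (B0 * math.log(mu2 / lam2))

def gamma_qq(N):
    """LO non-singlet anomalous dimension (Mellin moment of P_qq, integer N),
    in the convention d f / d ln mu^2 = (alpha_s / 2 pi) gamma_qq f."""
    HN = sum(1.0 / k for k in range(1, N + 1))
    return CF * (1.5 + 1.0 / (N * (N + 1.0)) - 2.0 * HN)

def evol_factor(N, mu_soft2, muF2):
    """Diagonal LO entry of the evolution matrix E of Eq. (5):
    f(N, mu_soft) = E_qq(N) * f(N, muF)."""
    return (alpha_s(muF2) / alpha_s(mu_soft2)) ** (gamma_qq(N) / (2.0 * math.pi * B0))
```

Evolving downward from $\mu_F$ to the soft scale $Q/\chi$, the first moment is left untouched while higher non-singlet moments are modified; the off-diagonal entries, not sketched here, are what drive the net suppression discussed in section 4.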
\subsection{Final state}
\label{sec:final-state}
Leading $\ln N/N$ effects arising from final state radiation
can be derived from the jet functions \cite{Sterman:1987aj,Kidonakis:1998bk}
that enter threshold or
joint resummed expressions for observables having final state
partons at lowest order.
The integral form of the final state exponent $F_k$ in Eq.~\eqref{eq:11} reads
\begin{equation}
\label{eq:1}
\int_0^1 dz \frac{z^{N-1}-1}{1-z} \left\{\int_{(1-z)^2Q^2}^{(1-z)Q^2} \frac{dq^2}{q^2} A_k(\alpha_s(q^2))
+ B_k(\alpha_s((1-z)Q^2))\right\}\,.
\end{equation}
To include leading $\ln N/N$ dependence in $F_k(N,Q,\mu)$ we make the replacement
\cite{Kramer:1996iq,Catani:2003zt,Mathews:2004pu}
\begin{equation}
\label{eq:21}
\frac{z^{N-1} -1}{1-z} A^{(1)}_i \rightarrow \left[\frac{z^{N-1} -1}{1-z} - p_i z^{N-1}\right]\,
A^{(1)}_i + {\cal O}\left(\frac{1}{N^2}\right)\,,
\end{equation}
where $p_q = 1 , p_g = 2$. The extra terms can be cast in a more
convenient form. Using
\begin{equation}
\label{eq:35}
z^{N-1} = \frac{z^{N-1}-1 -(z^N-1)}{1-z}
\end{equation}
and the replacement (accurate to NLL)
\begin{equation}
\label{eq:40}
z^{N-1}-1 \rightarrow -\theta\left(1-z-\frac{1}{\bar{N}} \right)
\end{equation}
one finds
\begin{equation}
\label{eq:24}
F_k(N,Q,\mu) = \frac{1}{\alpha_s(\mu)} f^{(0)}_k(\lambda) + f^{(1)}_k(\lambda,Q,\mu)
+f'_k(\lambda,\alpha_s)+O(\alpha_s(\alpha_s\ln N)^n)\,,
\end{equation}
where the extra terms $f'_k$ that include the leading $\ln N/N$ terms due to final state radiation
read
\begin{equation}
\label{eq:27}
f^\prime_q
=\frac{A^{(1)}_q}{ 2 \pi b_0}
\exp\left(-\frac{\lambda}{\alpha_s b_0}\right)\left[\ln(1-2 \lambda)-\ln(1- \lambda)\right]\,,
\end{equation}
\begin{equation}
\label{eq:28}
f^\prime_g
=\frac{3 A^{(1)}_g}{ 2 \pi b_0}
\exp\left(-\frac{\lambda}{\alpha_s b_0}\right)\left[\ln(1-2 \lambda)-\ln(1- \lambda)\right]\,.
\end{equation}
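Expanding Eq.~\eqref{eq:27} to lowest order, with $\lambda = b_0\alpha_s\ln\bar N$ and $e^{-\lambda/(\alpha_s b_0)} = 1/\bar N$, gives $f'_q \simeq -(A_q^{(1)}\alpha_s/2\pi)\,\ln\bar N/\bar N$, i.e. precisely a term of the type \eqref{eq:30}. A numerical sketch (the value $A_q^{(1)}=C_F$ is an assumption here; the normalization is fixed in the Appendix):

```python
import math

B0 = 23.0 / (12.0 * math.pi)   # one-loop b_0 for nf = 5
A1Q = 4.0 / 3.0                # illustrative A_q^{(1)} = C_F (see Appendix)

def f_prime_q(alpha_s, lnNbar):
    """Eq. (27): f'_q = A_q^(1)/(2 pi b0) * exp(-lambda/(alpha_s b0))
                        * [ln(1 - 2 lambda) - ln(1 - lambda)]."""
    lam = B0 * alpha_s * lnNbar
    return (A1Q / (2.0 * math.pi * B0) * math.exp(-lam / (alpha_s * B0))
            * (math.log(1.0 - 2.0 * lam) - math.log(1.0 - lam)))

def leading(alpha_s, lnNbar):
    """Lowest-order expansion: -(A_q^(1) alpha_s / 2 pi) * ln(Nbar) / Nbar."""
    Nbar = math.exp(lnNbar)
    return -A1Q * alpha_s / (2.0 * math.pi) * lnNbar / Nbar
```

The ratio of the two expressions approaches unity as $\alpha_s \to 0$ at fixed $\ln\bar N$, confirming that $f'_q$ is negative (a suppression) with the advertised $\alpha_s\ln N/N$ leading behaviour.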
There is no leading $\ln N/N$ contribution arising from wide angle soft radiation.
As a result, we finally arrive at the following equation
for the joint resummed prompt photon hadroproduction
$p_T$ spectrum in which leading soft-collinear effects are
included:
\begin{eqnarray}
\label{eq:23}
{p_T^3 d \sigma^{({\rm resum})}_{AB\to \gamma} \over dp_T}
&=& \frac{p_T^4}{8 \pi S^2}\ \sum_{ij}\ \int_{\cal C} {dN \over 2 \pi i}
\int {d^2 {\bf Q}_T \over (2\pi)^2}\;
\theta\left(\bar{\mu}-|{\bf Q}_T|\right)
\int d^2 {\bf b}\; {\rm e}^{i {\bf b} \cdot {\bf Q}_T}
\;\nonumber \\
&\ & \hspace{-25mm} \times\;
\int_0^1 d\tilde x^2_T\; \left(\tilde x^2_T \right)^N
{|M_{ij}(\tilde x^2_T)|^2\over \sqrt{1-\tilde{x}_T^2}}\;
C^{(ij\to \gamma k)}(\alpha_s(\mu),\tilde
x_T^2)\;\left( \frac{S}{4 |{\bf p}_T - {\bf Q}_T/2|^2} \right)^{N+1}
\nonumber\\
&\ & \hspace{-20mm} \times
{\mathcal C}_{i/A}(Q,b,N ) \; {\mathcal C}_{j/B}(Q,b,N)\;
\exp\left[E_i^{\rm PT}(N,b,\mu,Q)+E_j^{\rm PT}(N,b,\mu,Q)\right]\nonumber \\
&\ & \hspace{-15mm} \times
\exp\left[\frac{1}{\alpha_s(\mu)} f^{(0)}_k(\lambda) + f^{(1)}_k(\lambda,Q,\mu)
+f'_k(\lambda,\alpha_s)
+ g^{(1)}_{ijk} (\lambda)\right]\,.
\end{eqnarray}
As before, the corresponding threshold result may be obtained by neglecting
$-{\bf Q}_T/2$ in the last factor on the second line.
\section{Results}
\label{sec:results-1}
Here we study numerically the inclusion of the $\ln N/N$ terms for the
case of prompt photon production for two kinematic conditions:
those of $p\bar{p}$ collisions at the Tevatron at $\sqrt{S} = 1.96$ TeV
\cite{Abazov:2005wc,Acosta:2004bg},
and those of the $pN$ collisions in the
E706 \cite{Apanasevich:2004dr} fixed target experiment with $E_{\mathrm{beam}} = 530$ GeV.
Our main aim is to assess the effect of such terms
in relevant kinematic conditions, rather than provide optimized
and realistic theoretical calculations for comparison with
data (see Ref.~\cite{Aurenche:2006vj} for a recent study).
For instance, we do not include contributions from fragmentation processes,
which have recently been addressed in Ref.~\cite{deFlorian:2005wf}
and shown to be significant. Our assessments mainly consist
of comparing the same calculation with and without $\ln N/N$ terms.
Our default choices for various input parameters are as follows.
We use the GRV parton density set \cite{Gluck:1998xa}, corresponding to
$\alpha_s(M_Z)=0.114$, with the evolution code of
Ref.~\cite{Vogt:2004ns}, changing flavor number at $\mu = m_c\; (1.4
\,\mathrm{GeV})$ and $m_b \;(4.5 \, \mathrm{GeV})$. We choose the
factorization and renormalization scale equal to $p_T$, and
the non-perturbative parameter $g_{\mathrm{NP}}$ in
Eq.~\eqref{eq:37} equal to $1\,\mathrm{GeV}^2$.
For the parameter $\chi$ we use
the expression in Eq.~\eqref{eq:18}, following \cite{Kulesza:2002rh},
with $\eta = 1/4$ \footnote{Choosing $\eta =1 $ does not substantially modify results,
but choosing the form in Eq.~\eqref{eq:19}, which generates spurious subleading
recoil logs \cite{Kulesza:2002rh}, does lead to significant changes at larger $p_T$.
}.
For our joint-resummed results, we chose for Tevatron (E706) kinematics
the cut-off $\bar{\mu}$ in Eq.~\eqref{eq:6} equal to $15\, (5)$ GeV.
Regarding logarithmic accuracy: unless otherwise stated, we refer to LL when
using only $h^{(0)}_a$ in Eq.~\eqref{eq:4}, $f^{(0)}_k$ in Eq.~\eqref{eq:24},
and $\bar{C}^{(ij\to \gamma k)} = 1$ for the processes
in \eqref{eq:parton-proc}; we refer to NLL
when also including $h^{(1)}_a$ and $f^{(1)}_k$ and the
virtual corrections in \eqref{eq:34}. For the evolution
from scale $\mu_F$ to $Q/\chi$ in Eq.~\eqref{eq:16} we use the
full NLO anomalous dimension in all cases.
Starting with Tevatron kinematics we compare in Figs.~\ref{fig1}-\ref{fig3}
results at LL and NLL accuracy, with and without the leading $\ln N/ N$
contribution for joint resummation.
\begin{figure}[htb!]
\centering
\includegraphics[width=0.49\textwidth]{FIG1a.epsi}
\includegraphics[width=0.47\textwidth]{FIG1b.epsi}
\caption{\small $\ln N/N$ contributions for Tevatron kinematics.
Left pane: LL without $\ln N/N$ ($a$, solid), NLL without $\ln N/N$ ($b$, dashed),
NLL with $\ln N/N$ ($c$, short-dashed). Right pane:
ratio of NLL to LL (solid), ratio of NLL with $\ln N/N$ to
NLL without (dashed).}
\label{fig1}
\end{figure}
For clarity we have here included the constant corrections in \eqref{eq:34}
also for the LL case. Fig.~\ref{fig1} shows that the effect of the leading
$\ln N/N$ is appreciable when compared to the effect of passing from LL to NLL,
the latter difference being almost negligible. Inclusion of $\ln N/N$ effects
leads to noticeable suppression for most of the $p_T$ range, and to enhancement
at very small and very large $p_T$.
To better understand the origin of these $\ln N/N$ suppressed
contributions, we examine in Figs.~\ref{fig2} and \ref{fig3} for each channel
in \eqref{eq:parton-proc} the contributions from the initial and final state.
\begin{figure}[htb!]
\centering
\includegraphics[width=0.5\textwidth]{FIG2a_ratio.epsi}
\caption{\small $\ln N/N$ effects for $q{\bar q}$ channel at LL, Tevatron kinematics.
Ratio to LL without $\ln N/N$ of initial state (solid) and final state (dashed) effects,
and both (short-dashed). }
\label{fig2}
\end{figure}
\begin{figure}[htb!]
\centering
\includegraphics[width=0.5\textwidth]{FIG2b_ratio.epsi}
\caption{\small $\ln N/N$ effects for $qg$ channel at LL, Tevatron kinematics. Labels as
in Fig.~\ref{fig2}. }
\label{fig3}
\end{figure}
We plot these contributions for the LL cross sections only to facilitate interpretation.
To help understand the results, we can
expand the perturbative exponent in Eq.~\eqref{eq:11} to lowest order in
$\alpha_s$, keeping only the $\ln^2N$ and $\ln N/N$
terms
\begin{align}
\label{eq:33}
& \mathrm{q\bar{q}}:\quad
\frac{\alpha_s}{\pi}\, \ln^2N\,\Big(2A_q^{(1)}-\tfrac{1}{2}A_g^{(1)}\Big)
+ \frac{\alpha_s}{\pi}\, \frac{\ln N}{N}\,\Big(2A_q^{(1)}-\tfrac{3}{2}A_g^{(1)}\Big) \\
& \mathrm{qg}:\quad
\frac{\alpha_s}{\pi}\, \ln^2N\,\Big(A_q^{(1)}+A_g^{(1)}-\tfrac{1}{2}A_q^{(1)}\Big)
+ \frac{\alpha_s}{\pi}\, \frac{\ln N}{N}\,\Big(A_q^{(1)}+3A_g^{(1)}-\tfrac{1}{2}A_q^{(1)}\Big)\,.
\end{align}
The expressions suggest that
the initial state $\ln N/N$ terms enhance the cross section
for the $q\bar{q}$ and in particular the $qg$ channels, while
the final state $\ln N/N$ terms suppress it, again by an amount
that depends on the channel. The net result
turns out to be suppression in the former channel and enhancement in the
latter.
These qualitative aspects are indeed borne out if we use the same method
to compute initial state $\ln N/N$ effects as we did
for the final state in section \ref{sec:final-state}\footnote{The net
result in the $q\bar{q}$ channel is actually still enhancement, because the
contribution of the $f^\prime_{q,g}$ functions in Eqs.~\eqref{eq:27} and \eqref{eq:28} is
very small.}.
In the present case however, the net $\ln N/N$ effect in both
channels is suppression, indicating that the
non-diagonal terms in the evolution matrix give a sizeable negative
contribution.
Note that for Tevatron kinematics, when combining channels,
the $qg$ channel dominates at low $p_T$, because the required momentum
fractions are not too large. At large $p_T$, where parton
momentum fractions are larger, the valence-quark dominated $q{\bar q}$
channel takes over.
Turning to E706 kinematics we perform the same studies as we did
for the Tevatron. The results are shown in Figs.~\ref{fig4}-\ref{fig6}.
\begin{figure}[htb!]
\centering
\includegraphics[width=0.49\textwidth]{FIG3a.epsi}
\includegraphics[width=0.47\textwidth]{FIG3b.epsi}
\caption{\small $\ln N/N$ contributions for E706 kinematics. Labels
as in Fig.~\ref{fig1}.}
\label{fig4}
\end{figure}
We observe an overall enhancement due to the $\ln N/N$ effects, somewhat
smaller than the change from LL to NLL. Both effects are more pronounced
than for the Tevatron. This is due both to a larger value of $\alpha_s$
and to being closer to threshold in this fixed-target kinematical regime.
Examining the effects per channel in Figs.~\ref{fig5}
and \ref{fig6}, we now see a noticeable enhancement
from the initial state $\ln N/N$ effects in the $q\bar{q}$ channel, but still
suppression in the $qg$ channel. Clearly the non-diagonal terms in the evolution
matrix play a significant role for the E706 case as well.
\begin{figure}[htb!]
\centering
\includegraphics[width=0.5\textwidth]{FIG4a_ratio.epsi}
\caption{\small $\ln N/N$ effects for $q\bar{q}$ channel at LL, E706 kinematics.
Labels as in Fig.~\ref{fig2}. }
\label{fig5}
\end{figure}
\begin{figure}[htb!]
\centering
\includegraphics[width=0.5\textwidth]{FIG4b_ratio.epsi}
\caption{\small $\ln N/N$ effects for $qg$ channel at LL, E706 kinematics.
Labels as in Fig.~\ref{fig2}. }
\label{fig6}
\end{figure}
Next, we examine the differences between
threshold and joint resummation.
In Fig.~\ref{fig8} we compare resummed results directly
by showing the ratios with respect to the joint-resummed
$p_T$ distribution without $\ln N/N$ terms.
For Tevatron kinematics we see that the
threshold-resummed result dominates the joint-resummed one at large $p_T$,
while at low $p_T$ the converse is true.
For the E706 case the threshold resummed results are
entirely below the joint-resummed ones.
\begin{figure}[htb!]
\centering
\includegraphics[width=0.46\textwidth]{FIG5a_ratio.epsi}
\includegraphics[width=0.46\textwidth]{FIG5b_ratio.epsi}
\caption{\small Comparison of joint resummation and threshold resummation effects,
ratios to NLL without $\ln N/N$ for Tevatron (left pane)
and E706 (right pane).}
\label{fig8}
\end{figure}
The threshold resummed curves are shown separately in
Fig.~\ref{fig7}, which is analogous to the rightmost panels in
Figs.~\ref{fig1} and \ref{fig4}.
\begin{figure}[htb!]
\centering
\includegraphics[width=0.46\textwidth]{FIG6a_ratio.epsi}
\includegraphics[width=0.46\textwidth]{FIG6b_ratio.epsi}
\caption{\small $\ln N/N$ effects in threshold resummation,
for Tevatron (left pane) and E706 (right pane). Labels as in
Fig.~\ref{fig1} right pane.}
\label{fig7}
\end{figure}
For Tevatron kinematics the inclusion of $\ln N/N$ terms in
threshold resummation leads, as for joint resummation, to a transition
from suppression at small $p_T$ to enhancement at larger $p_T$,
but more noticeably so.
For E706 kinematics, in contrast to the joint-resummation
case, the enhancement at small $p_T$ turns to suppression
just below $p_T = 6$ GeV. The cross section even becomes
negative beyond $6.5$ GeV, because
the nearness of the threshold drives the scale $Q/\chi$ in Eq.~\eqref{eq:16}
effectively below the starting scale of the PDF evolution.
\section{Conclusions}
\label{sec:conclusions}
We have examined the effects of including terms of the form
\begin{equation}
\label{eq:26}
\alpha_s^i \sum_{j=0}^{2i-1} d_{ij}\, \frac{\ln^jN}{N}
\end{equation}
in joint-resummed and threshold-resummed prompt photon
$p_T$ distributions at both collider and fixed-target kinematics,
at leading accuracy ($j=i$).
The complete structure of subleading terms of the form \eqref{eq:26}
is still unknown. Note that we have not considered the fragmentation component
of the prompt photon production cross section in our analysis\footnote{To do so would
require inclusion of more partonic subprocesses, each containing
a sum over color structures for the wide-angle soft radiation
component, as well as photon fragmentation functions~\cite{deFlorian:2005wf}.
Presumably, soft-collinear effects for the fragmentation component of prompt photon production could be included in a way analogous
to what we did in the present paper for the initial state: via adjustment of
the resummed part, and evolution of the fragmentation functions.}.
To the extent that terms of the form \eqref{eq:26} arise from initial state radiation
effects, we used the method of Refs.~\cite{Kulesza:2002rh,Kulesza:2003wn}
to include them, now in a single-particle inclusive cross section.
Those arising from final state emission we included
by extending the jet function to leading $\ln N/N$
accuracy. Numerically we found the combined $\ln N/N$ terms to be
comparable in size to the NLL corrections and, depending on the
kinematics, either enhancing or suppressing.
The final state $\ln N/N$ contributions were particularly small,
while in the initial state the non-leading $1/N$ contributions
are appreciable, depending again on channel and kinematics.
The flavour non-diagonal terms in the evolution matrix
were found to be numerically significant, and the main source of
discrepancy with expectations based on simple approximations.
We conclude that, because the effects, though small, are non-negligible,
understanding the structure of $\ln N/N$ terms better is a worthwhile pursuit.
\subsection*{Acknowledgments}
This work was supported by the Foundation for Fundamental Research of
Matter (FOM) and the National Organization for Scientific Research
(NWO). RB and AM would like to thank NIKHEF and EL and AM the IMSc in Chennai
for local hospitality.
\section{Introduction}
Deep neural networks (DNNs) have become the state-of-the-art standard across a wide variety of fields ranging from computer vision to speech recognition, and they have been widely adopted across many industries~\cite{abiodun2018state}. As a result, many organizations have come to rely heavily on neural networks as part of their core operations, which requires a substantial investment in powerful computing resources, vast quantities of data, and specialized machine learning expertise. Hence, organizations that build and train their own models need to protect their systems from plagiarism, and those who sell or share their models also want to demonstrate ownership of the system when an infringement of copyright occurs.
Thus far, attempts to develop provable ownership in neural networks have mainly relied on two distinct categories of watermarking techniques: (1) watermarks that are embedded through backdooring attacks by injecting the backdoor via training images~\cite{Zhang-watermarks, weakness-into-strength-backdoor}, and (2) watermarks created and embedded directly into the neural network~\cite{Uchida-watermarks, rouhani2018deepsigns}. Our work focuses on the first set of watermarking techniques, which we refer to as ``Backdoor Watermarks.'' These techniques typically embed specially-crafted inputs into the training of a neural network so that the network produces a highly consistent, but unusual, output on those inputs during testing. For example, a watermark may be embedded into a network by including a subset of images during initial training that skews the network to classify those images unexpectedly. This subset may involve a ``trigger set'' of unrelated images~\cite{weakness-into-strength-backdoor}, or it may contain content or noise overlaid on the image~\cite{Zhang-watermarks}. In all of these cases, feeding the specially-crafted inputs into the trained system returns a consistent output that would not be expected normally. Because backdoors and watermarks are easily conflated, we provide a comparison of related work in Table~\ref{table:description} to highlight where our work falls within this domain.
\begin{table}[!t]
\caption{Comparison of neural network backdooring and watermarking techniques.}
\label{table:description}
\begin{tabular}{l|l|l|l}
\cline{2-3}
& Offensive (backdoor) & Defensive (watermark) & \\ \cline{1-3}
\multicolumn{1}{|l|}{\multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Bit\\ embedding\end{tabular}}} & N/A & \begin{tabular}[c]{@{}l@{}}Uchida et al.~\cite{Uchida-watermarks}, \\ DeepSigns~\cite{rouhani2018deepsigns}, etc.\end{tabular} & \\
\multicolumn{1}{|l|}{} & \begin{tabular}[c]{@{}l@{}}\emph{Mitigated by:}\\ N/A\end{tabular} & \begin{tabular}[c]{@{}l@{}}\emph{Mitigated by:}\\ N/A\end{tabular} & \\ \cline{1-3}
\multicolumn{1}{|l|}{\multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Backdoor \\ embedding\end{tabular}}} & \begin{tabular}[c]{@{}l@{}}BadNets~\cite{gu2017badnets},\\ Liu et al.~\cite{Trojannn}, etc.\end{tabular} & \begin{tabular}[c]{@{}l@{}}Adi et al.~\cite{weakness-into-strength-backdoor},\\ Zhang et al.~\cite{Zhang-watermarks}, etc.\end{tabular} & \\
\multicolumn{1}{|l|}{} & \begin{tabular}[c]{@{}l@{}}\emph{Mitigated by:}\\ Fine-Pruning~\cite{liu2018fine},\\ Neural Cleanse~\cite{wangneural}\end{tabular} & \begin{tabular}[c]{@{}l@{}}\emph{Mitigated by:}\\ \textbf{Our work}\\ \phantom{holder}\end{tabular} & \\ \cline{1-3}
\end{tabular}
\end{table}
The owner organization can then use the network's output to demonstrate their ownership of the model because only their watermarked model would behave specifically in this way. That is, black-box watermarks attempt to prove ownership of a model using only public API access by querying the potentially plagiarized network with carefully constructed inputs.
In particular, Zhang et al.~\cite{Zhang-watermarks} proposed a watermarking model to be secure against model-pruning, fine-pruning, and inversion attacks. Adi et al.~\cite{weakness-into-strength-backdoor} also proposed a watermarking model to be robust against similar removal attacks. These approaches~\cite{Zhang-watermarks, weakness-into-strength-backdoor} allow for the embedding and detection of watermarks that are human-readable (content-based watermarks) as well as those that are not human-readable (unrelated or noise-based watermarks) in black-box scenarios. Moreover, Zhang et al.'s model~\cite{Zhang-watermarks} appears to be considered for deployment at large IT companies that deploy deep neural network services, such as IBM\footnote{https://www.ibm.com/blogs/research/2018/07/ai-watermarking/}. However, it is still questionable whether currently suggested state-of-the-art watermarking techniques are really robust against sophisticated and targeted manipulation of the structure of the neural network, especially given recent research demonstrating significant success at removing backdoors from neural networks altogether~\cite{wangneural, liu2018fine}. Backdoors and watermarks both may exploit the overparameterization of neural networks to learn multiple tasks, but while a backdoor is generally used~\emph{by} adversaries for malicious ends (e.g., misclassifying stop signs with stickers as speed limit signs~\cite{gu2017badnets}), watermarks are used~\emph{against} adversaries to prevent their deployment of stolen models.
\begin{figure}[t!]
\centering
\begin{tabular}{c c c}
\includegraphics[scale=1.5]{./images/1_content.png}\hspace{.80 cm} &
\includegraphics[scale=1.5]{./images/1_noise.png}\hspace{.80 cm} &
\includegraphics[scale=1.5]{./images/m.png} \cr
(a) Content\hspace{.80 cm} &
(b) Noise\hspace{.80 cm} &
(c) Unrelated\cr
\includegraphics[scale=1.3]{./images/105_automobile.png}\hspace{.80 cm} &
\includegraphics[scale=1.3]{./images/105a_automobile.png}\hspace{.80 cm} &
\includegraphics[scale=1.4]{./images/831.png} \cr
(d) Content\hspace{.80 cm} &
(e) Noise\hspace{.80 cm} &
(f) Unrelated \cr
\end{tabular}
\caption{Examples of watermarked images used in the MNIST (top) and CIFAR-10 (bottom) datasets following the description in previous work~\cite{Zhang-watermarks}. MNIST examples are watermarked to be classified as ``0'', and CIFAR-10 examples as ``airplane''.}
\label{fig:Zhang-technqiues}
\end{figure}
In this work, we present our novel neural network ``laundering'' algorithm to effectively remove potentially watermarked neurons or channels in DNN layers. We achieve this via a three-step process of watermark recovery, detecting and resetting watermarked neurons, and retraining on the reconstructed watermarks and watermark masks, each step taking advantage of novel contributions. Moreover, our approach considers various types of backdoor attacks in black-box models as well, and we present the application of our proposed ``laundering'' technique to defeat backdoor attacks~\cite{chen2017targeted, Trojannn}.
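The detect-and-reset step of this pipeline can be illustrated with a toy, runnable sketch. Here a unit is scored by the difference between its mean activation on trigger-bearing and clean inputs, and the most trigger-selective units have their incoming weights zeroed before retraining; the scoring rule, reset fraction, and all names below are illustrative simplifications rather than our exact procedure.

```python
import numpy as np

# Toy sketch of "detect and reset": compare mean activations of a layer on
# clean vs. trigger-bearing inputs and zero the weights of the most
# trigger-selective units. Illustrative simplification, not our algorithm.
rng = np.random.default_rng(0)

def reset_trigger_neurons(weights, clean_acts, trigger_acts, frac=0.1):
    """weights: (n_in, n_units); clean_acts/trigger_acts: (n_samples, n_units)."""
    score = trigger_acts.mean(axis=0) - clean_acts.mean(axis=0)
    n_reset = max(1, int(frac * weights.shape[1]))
    suspects = np.argsort(score)[-n_reset:]      # most trigger-selective units
    laundered = weights.copy()
    laundered[:, suspects] = 0.0                 # reset; retraining follows
    return laundered, suspects

# Toy demo: unit 3 fires strongly only when the trigger is present.
W = rng.normal(size=(8, 10))
clean = rng.normal(size=(100, 10))
triggered = clean.copy()
triggered[:, 3] += 5.0
W_new, suspects = reset_trigger_neurons(W, clean, triggered)
print(sorted(suspects.tolist()))   # unit 3 is singled out
```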
To show the effectiveness of our laundering algorithm, we compare it with the removal attempts made within these original watermark proposal papers~\cite{Zhang-watermarks, weakness-into-strength-backdoor}. We also compare our results to the ``Fine-Pruning''~\cite{liu2018fine} backdoor removal technique; detailed backdoor background information and evaluation are available in Appendix~\ref{backdoor-background}. The structures of all our models can also be found in Appendix~\ref{architectures}.
Our approach shows weaknesses of current deep neural network watermarking techniques -- we can defeat the state-of-the-art watermarking techniques proposed by Zhang et al.~\cite{Zhang-watermarks} and Adi et al.~\cite{weakness-into-strength-backdoor}. In particular, we show that with an appropriate representation of the variety of training data, adversaries who have no knowledge of the watermark are able to successfully remove the majority of watermarking techniques, retaining accuracy much higher than stated in the prior work (e.g., up to 20\% higher in some cases~\cite{Zhang-watermarks}).
In addition, previous studies (e.g.,~\cite{xie2018mitigating-comp1, song2018pixeldefend-comp2}) presenting defense mechanisms for adversarial attacks often argued that some attacks require significant computation. However, as in other DNN attack research~\cite{athalye2018obfuscated}, we show that this assumption is not sufficient to provide adequate defense. Instead, if thieves even suspect that a watermarked model could be laundered, they may invest significant effort to remove the watermark, since laundering a stolen model may save significant time and money in data collection, data labeling, and neural network design and construction.
We make the following contributions in this paper:
\begin{itemize}
\item We present our novel neural network ``laundering'' algorithm, which effectively removes neurons or channels in DNN layers that contribute to the classification of watermarked images. Differentiating us from previous work which focused on adversarial backdoors~\cite{wangneural}, we take on the viewpoint of the attacker attempting to remove watermarks (i.e., defensive backdoors) and evaluate our effectiveness under various limited training sets to which an attacker may have access.
%
\item We provide an intensive overview of the combination of parameters used for laundering a neural network for different types of layers and network architectures. We also evaluate the effectiveness of different combinations of parameters and available data both regarding the removal of watermarks~\cite{Zhang-watermarks} and backdoors~\cite{chen2017targeted, Trojannn} as well as the preservation of model performance.
%
\item We discuss in-depth the findings from our experiments and highlight the previously-overlooked weaknesses that currently exist within most watermarking schemes. We also provide new insights into the reasons adversaries will attack a watermarked model despite accuracy loss as well as the reasons previous backdoor-removal techniques do not exploit the weaknesses in watermarks specifically.
\end{itemize}
\input{sections/Background.tex}
\input{sections/AttackModel.tex}
\input{sections/OurProposals.tex}
\input{sections/Experiments.tex}
\input{sections/RemovingBackdoors.tex}
\input{sections/Results.tex}
\input{sections/RelatedWork.tex}
\input{sections/Conclusion.tex}
\bibliographystyle{IEEEtran}
\section{Attack model}
\label{Attack Model}
In our attack model, we define two parties: the true owner \emph{O} of the neural network model \emph{m} and the plagiarizer \emph{P} who has managed to procure \emph{m}. \emph{P} may have acquired \emph{m} through a variety of ways, not all of which may be malicious; however, the exact means by which \emph{P} acquires \emph{m} is outside the scope of this paper. The model \emph{m} performs a particular task \emph{t}, but is watermarked in such a way that certain, carefully-constructed examples \emph{$X_{w}$} will give highly specific output at the task \emph{t}. In our attack model, the goal of the plagiarizer \emph{P} is to alter \emph{m} in such a way that the examples \emph{$X_{w}$} will no longer result in predictable outputs from \emph{m} while minimally impacting \emph{t}.
Our attack model places certain limitations on plagiarizer \emph{P}. First, \emph{P} has a substantially limited set of training data when compared to the creator \emph{O}. Otherwise, \emph{P} could trivially label a large set of non-watermarked training data using \emph{m} to create a non-watermarked model by predictive model theft techniques~\cite{tramer2016stealing}. Second, \emph{P} does not know if \emph{m} has been watermarked but assumes it to be. Therefore, our proposed attack method should be able to overcome the robustness of the watermarked model \emph{m} to pruning and fine-tuning with limited training data such that \emph{m} can adequately perform task \emph{t} without reacting to watermarked examples.
As described in other work~\cite{Uchida-watermarks,Zhang-watermarks,rouhani2018deepsigns}, watermarks in neural networks are designed to be robust to pruning, fine-tuning, and/or watermark overwriting as well as secure against discovery of the presence of a watermark in the model. Although there exist watermarking techniques that are robust to both traditional pruning and retraining, we present more sophisticated methods that are able to greatly hinder and defeat the effectiveness of these black-box watermarking embedding algorithms.
%
%
%
%
%
%
%
%
%
%
\section{Background}
\label{sec: background}
Backdooring attacks on neural networks have highlighted serious weaknesses in the black-box nature of neural networks throughout a variety of different tasks and model structures~\cite{gu2017badnets, chen2017targeted, Trojannn}. Backdoors exploit the vulnerability of the overparameterization of deep neural networks to hide deliberately-designed backdoors in the model.
If the training of a network is outsourced to a third party that surreptitiously inserts specific and maliciously-labeled training images, the victim organization will receive a model that behaves correctly on the surface but contains a hidden backdoor. For example, the BadNets research~\cite{gu2017badnets} demonstrated that it was possible to force the misclassification of stop signs to more than 90\% using only a yellow Post-it note sized square overlaid on the images.
\subsection{Backdoor Watermarking Techniques}
Some research has proposed utilizing the weaknesses of neural networks to backdoor attacks as a method for embedding watermarks. One specific implementation of this watermarking process is Zhang et al.'s~\cite{Zhang-watermarks} black-box technique that uses watermarked images as part of the training set of the network, consistently labeled as one class. These watermarked images include the following three types of images as shown in Figure~\ref{fig:Zhang-technqiues}: 1) meaningful content (e.g., a word) placed over part of the image in some subset of training images (``content''), 2) pre-specified (Gaussian) noise over some subset of training images (``noise''), or 3) completely unrelated images (``unrelated''). Note that Figure~\ref{fig:Zhang-technqiues} (f) is an image taken from MNIST but used as an ``unrelated'' watermark in CIFAR-10. In their experiments, they demonstrate minimal impact on the accuracy of the model, and their watermarks remain strong even after substantial pruning, tuning, and model inversion attacks~\cite{fredrikson2015model} against the watermarked model.
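To make the ``noise'' variant concrete, the following sketch constructs a watermark key in the spirit of this scheme: a fixed Gaussian pattern added to a small subset of training images, all of which are relabeled to one target class. The image shape, noise scale, subset fraction, and target class are our own illustrative choices, not the parameters used in the original work.

```python
import numpy as np

# Illustrative construction of a "noise"-style watermark key: a fixed
# Gaussian pattern is added to a subset of training images, which are all
# relabeled to one target class. All parameters here are arbitrary choices.
rng = np.random.default_rng(42)
pattern = rng.normal(0.0, 0.1, size=(28, 28, 1))   # fixed secret pattern

def make_noise_watermarks(images, target_class, frac=0.02):
    n = max(1, int(frac * len(images)))
    idx = rng.choice(len(images), size=n, replace=False)
    watermarked = np.clip(images[idx] + pattern, 0.0, 1.0)
    labels = np.full(n, target_class)
    return watermarked, labels

# Toy demo on random "images" in [0, 1].
images = rng.uniform(0.0, 1.0, size=(1000, 28, 28, 1))
wm_images, wm_labels = make_noise_watermarks(images, target_class=0)
print(wm_images.shape, set(wm_labels.tolist()))
```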
Adi et al.~\cite{weakness-into-strength-backdoor} similarly proposed using the mechanics of backdoors to embed watermarks to prove non-trivial ownership of neural network models. In their approach, the authors utilized 100 abstract images, each randomly assigned to a target class for their trigger set. Furthermore, their embedding procedure required the owner to sample \textit{k} images from the trigger set during each training batch.
Note that unlike watermarking, backdoors typically do not rely heavily on the ``unrelated'' style of image from~\cite{Zhang-watermarks} or the abstract images from~\cite{weakness-into-strength-backdoor}. Overlaying part of the image (e.g., with glasses or noise) is common in backdoors, but unrelated images such as those used in watermarking schemes like~\cite{weakness-into-strength-backdoor} are not commonly addressed in backdoor removal papers because such images would be predicted randomly. Nonetheless, our work focuses on highlighting the shortcomings of all varieties of such watermarking techniques: ``content'', ``noise'', and ``unrelated.''
\subsection{General Backdoor Removal Techniques}
Other research has tackled the similar problem of removing backdoors in neural networks; one highly related effort is the work of Liu et al.~\cite{liu2018fine}, which proposed a two-step process that first prunes the network and then fine-tunes the pruned network. Their method shows success in removing backdoors from different deep neural network implementations. Similarly, recent research by Wang et al.~\cite{wangneural} has shown the ability to recover sufficiently similar backdoor triggers embedded in maliciously backdoored neural networks. Their research focuses on the ability of victims to detect and remove backdoors from their networks, where the victims have access to their full training set but would like to avoid intensive retraining of the model; it also covers the setting where the victim has no knowledge of a backdoor trigger but nonetheless attempts to remove it. Because of the similarity in goals, we leverage Wang et al.'s backdoor reconstruction algorithm~\cite{wangneural}, which begins with discovering a trigger with the following formulation:
\begin{equation}
\begin{array}{l}
A(\mathbf{x},\mathbf{m},\Delta) = \mathbf{x}^{\prime}\,,\\
\mathbf{x}^{\prime}_{i,j,c} = (1 - \mathbf{m}_{i,j}) \cdot \mathbf{x}_{i,j,c} + \mathbf{m}_{i,j} \cdot \Delta_{i,j,c},
\end{array}
\end{equation}
where \emph{A}(·) is the trigger application function, \textbf{x} is the original image, $\Delta$ is the trigger image, and \textbf{m} is the mask for the trigger. They further constrict this algorithm by measuring the magnitude of the trigger by the \emph{L}1 norm of \textbf{m} to result in the final formulation of:
\begin{equation}
\begin{array}{l}
\min\limits_{\mathbf{m},\Delta} \; \ell(y_{t},f(A(\mathbf{x},\mathbf{m},\Delta))) + \lambda \cdot |\mathbf{m}|\\
\quad \textrm{for } \mathbf{x} \in \mathbf{X},
\end{array}
\end{equation}
where \emph{f}(·) is the network's output prediction function, $\ell$(·) is the loss function, $\lambda$ weights the regularization term controlling the size of the reversed trigger, and \textbf{X} represents the available non-watermarked images. For more detailed information regarding the reconstruction scheme, we direct the reader to the original paper~\cite{wangneural} or to their open-source implementation\footnote{https://github.com/bolunwang/backdoor}.
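A minimal sketch of the trigger application function $A(\cdot)$ and the $L1$-regularized objective above, written with NumPy broadcasting. The helper names are ours, and the classification loss is left abstract (passed in as a precomputed value):

```python
import numpy as np

# Sketch of the trigger application A(x, m, Delta) for images shaped
# (H, W, C) and a 2-D mask shaped (H, W) with values in [0, 1].
def apply_trigger(x, mask, delta):
    m = mask[..., None]                 # broadcast the 2-D mask over channels
    return (1.0 - m) * x + m * delta

# The reverse-engineering objective then adds an L1 penalty on the mask;
# loss_value stands in for l(y_t, f(A(x, m, Delta))).
def objective(loss_value, mask, lam):
    return loss_value + lam * np.abs(mask).sum()

# Toy check: a zero mask leaves the image untouched; a unit mask replaces it.
x = np.random.rand(4, 4, 3)
delta = np.random.rand(4, 4, 3)
print(np.allclose(apply_trigger(x, np.zeros((4, 4)), delta), x))
print(np.allclose(apply_trigger(x, np.ones((4, 4)), delta), delta))
```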
\section{Conclusion and future work}
We proposed a novel ``laundering'' algorithm that focuses on low-level manipulation of a neural network based on the relative activation of neurons to remove the detection of watermarks via black-box methods. We demonstrated significant weaknesses in current watermarking techniques, especially when compared to the results reported in the original research~\cite{Zhang-watermarks, weakness-into-strength-backdoor}. Specifically, we were able to remove watermarks below ownership-proving thresholds, while achieving test accuracies above 97\% and 80\% for both MNIST and CIFAR-10, respectively, for all evaluated model and watermark combinations using a highly restricted dataset (0.6\% of the original training size in most cases). %
In addition, we provided new insight in marrying the fields of watermark embedding and removal with backdoor embedding and removal. As we have shown in this research, these two will have opposite repercussions on each other: increasing defense against backdoors will reduce the effectiveness of watermarks. From this work, we hope that future work will continue to challenge and improve existing neural network watermarking strategies as well as explore additional avenues for detecting and removing backdoors in neural networks.
Future work will investigate the feasibility of further decreasing black-box watermark effectiveness in two ways. First, future work will consider the possibility of boosting a collection of laundered neural networks using the black-box adversary method. It may be possible to boost the overall accuracy by combining multiple heavily laundered (10+ rounds) models and tallying all their votes for one classification using a method such as AdaBoost~\cite{rojas2009adaboost}. Second, future work will consider the ability of an adversary to attempt to prevent queries to a laundered network when the victim attempts to feed unrelated images for classification. Existing work has already considered detecting out-of-distribution examples~\cite{hendrycks2016baseline}, and even a poor classifier of this type could further weaken the effectiveness of unrelated watermarks below the acceptable threshold.
To conclude, our results indicate that current works proposing to use backdoors as watermarks in neural networks overestimate the robustness of these watermarks while underestimating the ability of attackers to retain high test accuracy. The content-based and noise-based watermarks in particular are not robust to detection, reconstruction, and removal attacks. All methods do present some additional overhead to model thieves, both in computational resources and in the final classification accuracy, particularly in the unrelated style; however, we hope our work demonstrates that their adoption should not be considered secure against persistent removal attempts as more complex backdoor reconstruction methods are developed.
\section{Experiments}
\label{sec:Experiments}
\newcolumntype{Y}{>{\centering\arraybackslash}X}
\begin{table*}[ht!]
\setlength\tabcolsep{1.0pt}
\caption{Results of the black-box adversary algorithm for various watermarks and backdoors.}
\label{black-results}
\begin{tabularx}{\textwidth}{|Y|Y|Y|Y|Y|Y|Y|Y|Y|}
\hline
\textbf{Method} & \textbf{Dataset} & \textbf{Watermark Type} & \textbf{Original Test Accuracy} & \textbf{Laundered Test Accuracy} & \textbf{Original Watermark Accuracy} & \textbf{Laundered Watermark Accuracy} & \textbf{Vanilla Model Watermark Accuracy} & \textbf{Limited Retraining Size} \\ \hline \hline
Zhang et al. & \multirow{6}{*}{MNIST} & Content & 99.46\% & 97.03\% & 100\% & 99.95\% & 1.5\% & 16.6\% \\ \cline{3-9}
(90\% pruning) & & Noise & 99.41\% & 95.19\% & 100\% & 99.55\% & 6.0\% & 16.6\% \\ \cline{3-9}
& & Unrelated & 99.43\% & 93.55\% & 100\% & 99.9\% & 20\% & 16.6\% \\ \clineB{1-1}{2.5} \clineB{3-9}{2.5}
\multirow{3}{*}{Ours} & & Content & 99.90\% & 98.34\% & 100\% & 0.01\% & 1.5\% & 0.6\% \\ \cline{3-9}
& & Noise & 99.86\% & 97.45\% & 99.99\% & 0.07\% & 6.0\% & 0.6\% \\ \cline{3-9}
& & Unrelated & 99.92\% & 98.33\% & 99.71\% & 15\% & 20\% & 0.6\% \\ \hline \hline
Zhang et al. & \multirow{6}{*}{CIFAR-10} & Content & 78.41\% & 64.9\% & 99.93\% & 99.47\% & 5.0\% & 20.0\% \\ \cline{3-9}
(90\% pruning) & & Noise & 78.49\% & 59.29\% & 100\% & 65.13\% & 4.0\% & 20.0\% \\ \cline{3-9}
& & Unrelated & 78.12\% & 62.15\% & 99.86\% & 10.93\% & 52.0\% & 20.0\% \\ \clineB{1-1}{2.5} \clineB{3-9}{2.5}
\multirow{3}{*}{Ours} & & Content & 90.24\% & 87.65\% & 100\% & 1.4\% & 5.0\% & 0.6\% \\ \cline{3-9}
& & Noise & 89.01\% & 84.14\% & 100\% & 0.50\% & 4.0\% & 0.6\% \\ \cline{3-9}
& & Unrelated & 89.77\% & 85.34\% & 99.94\% & 16\% & 52.0\% & 0.6\% \\ \hline \hline
Adi et al. (PT) & \multirow{4}{*}{CIFAR-10} & \multirow{4}{*}{Unrelated} & 93.65\% & $\sim$90\% & 100\% & 100\% & 7\% & $\sim$ \\ \cline{1-1} \cline{4-9}
Ours (PT) & & & 91.55\% & 88.25\% & 100\% & 7.0\% & 12\% & 10.8\% \\ \cline{1-1} \cline{4-9}
Adi et al. (FS) & & & 93.81\% & $\sim$90\% & 100\% & 80\% & 7\% & $\sim$ \\ \cline{1-1} \cline{4-9}
Ours (FS) & & & 91.85\% & 84.73\% & 100\% & 7.0\% & 12\% & 10.8\% \\ \hline
\end{tabularx}
\end{table*}
In order to demonstrate the feasibility of our approach, we recreated the results of Zhang et al.'s work~\cite{Zhang-watermarks} for both MNIST and CIFAR-10. In general, the architectures of the deep neural networks followed the procedure according to the original papers~\cite{Zhang-watermarks, weakness-into-strength-backdoor}, but we will elaborate on any differences where relevant. We followed Zhang et al.'s implementation as described in their paper~\cite{Zhang-watermarks} for MNIST and CIFAR-10; however, while following the description given for the CIFAR-10 model, our models consistently converged at approximately 73\% test accuracy, rather than the 78\% given in the original work.
For re-implementing Adi et al.'s watermarking scheme, we followed both the pre-trained (PT) procedure where watermarks are embedded into the network following non-watermarked training as well as the from-scratch (FS) procedure where watermarks are embedded during training. Their original implementation converged at 93.65\% for CIFAR-10; however, our models converged at 91.15\% when implementing their method in Keras using ResNet18~\cite{he2016deep}. Nevertheless, we were able to recreate the 100\% watermark accuracies on this model described in the original work~\cite{weakness-into-strength-backdoor}.
The results in this section correspond to \textbf{one round of laundering}. We reset all layers in all DNNs except for those models using ResNet18+ (which is used in Adi et al.'s scheme~\cite{weakness-into-strength-backdoor}). Due to the depth of that model, we reset the second half of the weights only; the weights of the first half appear to be learning very high-level features that do not correspond directly with the watermarks.
We implemented our neural laundering prototype in Python 3.6 with Keras 2.1.6 and Tensorflow 1.7.0. The
experiments were conducted on a machine with an Intel i5 CPU, 16 GB RAM, and 3 Nvidia 1080 Ti GPUs with 11GB GDDR5X.
In order to evaluate our laundering technique in a realistic setting, we limited the adversary's retraining dataset size in the case of black-box attackers. Especially for MNIST, using even half of the test set size for training or retraining as performed in Zhang et al.'s original work~\cite{Zhang-watermarks} is sufficient to train a model to above 90\%, with or without the watermarked model. As a result, if we do not limit the training size, adversaries could simply train their own neural network from the output of the watermarked model using prediction algorithms found in Tram{\`e}r et al.'s work~\cite{tramer2016stealing}. This situation inherently implies there would be no need for laundering a watermarked network at all given a large retraining set size.
As a result, for the results against Zhang et al.'s watermarking scheme~\cite{Zhang-watermarks} we purposely limited the adversary's MNIST retraining dataset to be 0.6\% of the original training set size of 60,000 handwritten digits, which results in approximately 42 images per category. Likewise, we also limited the CIFAR-10 dataset to 0.6\%, which results in approximately 36 images per category. In reality, it is likely that adversaries would have more data available, and they would achieve better results than the conservative results we report.
For the following evaluation sections, we will repeatedly refer to Table~\ref{black-results}, wherein we list our results (the rows labeled ``Ours'' in column 1, ``Method'') directly below each proposed watermarking approach and the other authors' original watermark removal attempts for comparison. Column 4 (``Original Test Accuracy'') and column 5 (``Laundered Test Accuracy'') record the accuracy on the never-before-seen test set of the original watermarked network and of the laundered network, respectively. Similarly, column 6 (``Original Watermark Accuracy'') and column 7 (``Laundered Watermark Accuracy'') record the watermark detection accuracy before and after laundering, respectively. The thresholds at which the watermarks become unusable for demonstrating ownership are listed in column 8 (``Vanilla Model Watermark Accuracy''); we discuss how we derive these values in Section~\ref{min-treshold}. Finally, column 9 (``Limited Retraining Size'') compares the size of the dataset used in the watermark removal process, which corresponds to the ``normally labeled data'' referenced in Algorithm~\ref{LaunderingAlg}.
\textbf{MNIST and CIFAR-10 Results via Zhang et al.'s Method.} For the MNIST model, our method was able to reduce all watermark types below usable levels. The content and noise watermarks fall below even random classification, and although the unrelated watermarks remain much higher (e.g., 15\% for the MNIST unrelated watermark), they still fall below the vanilla model watermark accuracy (e.g., 20\% for the same watermark).
For CIFAR-10, our method was again able to reduce all watermark types below usable levels while limiting the drop in test accuracy to about 5\% overall. One notable exception is that, for the CIFAR-10 unrelated watermarks, our method does not reduce watermark accuracy as much as the pruning techniques reported in~\cite{Zhang-watermarks}. Nevertheless, our method maintains a higher fraction of the final test accuracy: the original paper recorded a drop from 78.12\% to 62.15\% (columns 4 and 5 in Table~\ref{black-results}), whereas our method drops from 89.77\% to only 85.34\%, and it does so with a smaller retraining set.
\begin{figure*}[t!]
\centering
\includegraphics[width=.77\textwidth]{final_images/final_graph_MNIST-EMNIST.png}
\caption{Results of laundering the MNIST and MNIST+ models to remove Zhang et al.'s proposed watermarks~\cite{Zhang-watermarks}.}
\label{rounds-MNIST-both}
\end{figure*}
\textbf{CIFAR-10 Results via Adi et al.'s Method.} In Adi et al.'s proposed watermarking scheme~\cite{weakness-into-strength-backdoor}, 100 unrelated (abstract) images are each labeled with a random class and included in the training data. Their method proposes two ways to embed the watermark: into the model from scratch (FS) or by fine-tuning a pre-trained (PT) model. We evaluate both approaches in Table~\ref{black-results}.
While there may be more efficient ways to attack the Adi et al.~\cite{weakness-into-strength-backdoor} watermarking scheme, we deliberately do not tailor our method to it; we aimed to test our watermark removal strategy as an agnostic method with only one minor modification. Empirically, for very deep networks such as ResNet18+~\cite{he2016deep}, resetting many shallow layers significantly degraded the final performance of the model: while resetting a single layer is insufficient, resetting all of the shallow layers causes the deep layers to receive activations they were never trained on. As a result, for the ResNet model we chose to reset only the second half of the weights. Additionally, the results in Table~\ref{black-results} use a larger subset of the available training data than the experiments against Zhang et al.~\cite{Zhang-watermarks}; we discuss this further in Section~\ref{sec:abridged_data}.
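The half-reset modification can be sketched as follows. This is an illustrative helper of our own, operating on a plain list of per-layer weight arrays; with Keras one would instead iterate over `model.layers` and call `layer.set_weights(...)`, and the initializer shown is an assumption, not the original one.

```python
import numpy as np

def reset_second_half(model_weights, seed=0):
    """Re-initialize the deeper half of the layers; keep shallow ones."""
    rng = np.random.RandomState(seed)
    start = len(model_weights) // 2
    reset = []
    for j, w in enumerate(model_weights):
        if j >= start:
            # deep layers: fresh random weights (watermark-bearing half)
            reset.append(rng.normal(0.0, 0.05, size=w.shape))
        else:
            # shallow layers: keep the existing generic features
            reset.append(w.copy())
    return reset
```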
Using our approach, we find that in both cases (from scratch and pre-trained) the watermarks are significantly less robust than Adi et al.~\cite{weakness-into-strength-backdoor} originally claimed. We compared our results to their original watermark removal technique, referred to as ``Re-train All Layers (RTAL)''. Even in the original research, RTAL significantly reduced the accuracy of the pre-trained watermark set. We find that the FS watermarks face similar problems, perhaps due to their reliance on batch normalization~\cite{batch-norm}; before retraining, we also reset the batch normalization statistics. Note also that the original authors did not limit the dataset when evaluating their watermark removal methods.
\section{Evaluation of Required Data for Watermark Removal}
\label{sec:abridged_data}
In other backdoor or watermark research~\cite{Zhang-watermarks, liu2018fine}, researchers have typically given the remover an entire train-test split's worth of data. For example, in the original Zhang et al.~\cite{Zhang-watermarks} watermark proposal, the adversary was given the entire test set of the default MNIST train-test split to attempt to remove the watermark via pruning and retraining. Although those techniques were unable to remove the watermarks, an adversary with that much data would have no need to remove them in the first place: using the neural network structure and dataset split of Zhang et al.'s paper~\cite{Zhang-watermarks}, adversaries could train their own MNIST models on the test set alone and still easily reach accuracies of 90\% or above, completely removing the need to launder the watermarked model. Because our approach directly targets the weaknesses of watermarking techniques, and because we approach this issue from the attacker's perspective, we consider an adversary even more data-limited than prior watermark or backdoor removal research has assumed.
Therefore, in Table~\ref{black-results} in Section~\ref{sec:Experiments}, we included results where the adversary was limited to 0.6\% of the total training data (except against Adi et al.~\cite{weakness-into-strength-backdoor}, shown in the final section of Table~\ref{black-results}), as well as results from only one round of laundering. However, we also investigated the effectiveness of our approach under a variety of other splits: 10.8\%, 6\%, 0.6\%, and finally only one example per class, as depicted by the various lines in Figures~\ref{rounds-MNIST-both} and~\ref{rounds-adi-cifar}. Additionally, because a black-box adversary has no knowledge of the watermark detection accuracy, we also evaluated scenarios where an adversary performs multiple rounds of laundering (up to 10).
In Figures~\ref{rounds-MNIST-both} and~\ref{rounds-adi-cifar}, the blue lines represent the final test accuracy on unseen data, and the red lines represent the accuracy on watermarked images. Each line style corresponds to an available dataset size, and the solid black line represents what we propose as the minimum watermark accuracy required to claim ownership of a model. We discuss this minimum watermark accuracy further in Section~\ref{min-treshold}.
\begin{figure*}[t!]
\centering
\includegraphics[width=0.77\textwidth]{final_images/final_graph_ADI-CIFAR-one-correct.png}
\caption{Results of laundering the CIFAR-10 model to remove Adi et al.'s~\cite{weakness-into-strength-backdoor} and Zhang et al.'s~\cite{Zhang-watermarks} proposed watermarks.}
\label{rounds-adi-cifar}
\end{figure*}
\textbf{MNIST Results via Zhang et al.'s Method.} Due to the simplicity of the model, even a small number of laundering examples (down to a single image per class in some cases) is enough to make a significant impact on the watermark accuracy, especially within a small number of iterations. Most combinations perform similarly, as shown in Figure~\ref{rounds-MNIST-both}: panels (a), (b), and (c) all show a decline in watermark accuracy (red lines) after very few rounds. The final testing accuracy (blue lines) also shows significant resilience even after multiple rounds of laundering, declining only gradually. However, perhaps due to the complexity of the watermark in the content-based scenario (Figure~\ref{rounds-MNIST-both}~(a)), one example per class (solid blue and red lines) is not enough to remove the watermark for plain MNIST.
Moreover, we also trained a more complex version of MNIST with six additional classes taken from the EMNIST dataset~\cite{cohen2017emnist} (specifically the letters `t', `u', `w', `x', `y', and `z'). While these images come from the EMNIST dataset, we refer to this model as the MNIST+ model, as it does not contain all EMNIST classes. As shown in Figure~\ref{rounds-MNIST-both} (d), (e), and (f), the inclusion of additional classes made two notable differences. First, on this model our algorithm was able to remove the content-based watermarks completely, which strongly suggests that even a few more training examples suffice to reconstruct content-based watermarks more effectively, although future work is required to understand this phenomenon in depth. Second, because of the larger number of classes, the unrelated watermark images were classified as the watermarked class less often by the clean network, pushing the target black line lower.
As a result, the smaller dataset sizes (0.6\% and one-per-class; dashed-triangle and solid lines, respectively) struggled to push the watermark below that threshold. Our MNIST+ experiments also capture more generalized behavior than plain MNIST in the testing accuracy: as expected, the one-per-class setting impacted testing accuracy the most, a phenomenon that the simplicity of plain MNIST masked in those experiments.
\textbf{CIFAR-10 Results via Adi et al.'s Method.} Although the CIFAR-10 task is significantly more complex than MNIST, the 10.8\%, 6.0\%, and 0.6\% retraining sets in Figure~\ref{rounds-adi-cifar} (a) and (b) were able to remove the watermarks while maintaining a high test accuracy. As expected, the accuracy dropped over multiple rounds of laundering, particularly as the size of the laundering set decreased; we speculate that this is due more to overfitting than to our laundering method itself. The one-per-class laundering set, however, is completely inadequate in both cases, especially over time. Regardless of the combination, Adi et al.'s~\cite{weakness-into-strength-backdoor} from-scratch (FS) technique outperforms the pre-trained (PT) technique. Nevertheless, due to the complexity of the task and the resilience of the watermark scheme, our method does incur a loss of test accuracy on small laundering set sizes; we discuss the implications of this drop in Section~\ref{drop}.
\textbf{CIFAR-10 Results via Zhang et al.'s Method.} Again, due to the complexity of classifying CIFAR-10 images, we expected a large impact on the test accuracy over time, and Figure~\ref{rounds-adi-cifar} (c), (d), and (e) shows this phenomenon. As with the Adi et al. model, the one-per-class laundering set is inadequate. However, both the content and noise watermarks were removed by our method in the 10.8\%, 6.0\%, and 0.6\% cases, although at a cost to final test accuracy that worsened across iterations. The unrelated watermarks were much more difficult to remove consistently, with seemingly random high fluctuations. However, the unrelated watermarks in CIFAR-10 are also classified as the target class at random even in vanilla models, and we argue that relying on a watermark with such a high degree of uncertainty is quite risky; we go into more detail about these risks in the following section.
\textbf{General Laundering Observations.} To conclude this rather extensive evaluation, we make general observations regarding the rounds and percentages used for laundering:
\begin{enumerate}
\item The larger the laundering dataset, the more rounds of laundering are possible without a subsequent decrease in accuracy.
\item The more complex the model, the more likely subsequent rounds of laundering will significantly harm the final test accuracy.
\item In most cases, once the watermark has been removed, it is unlikely to return unless it was embedded with a high randomly-occurring threshold (as in CIFAR-10 unrelated watermarks).
\end{enumerate}
As a general rule, we recommend that adversaries launder for no more than 3 rounds: beyond that, returns diminish, as final test accuracy is lost without any further reduction in watermark accuracy.
\section{Proposed laundering technique}
\label{Proposed Laundering Technique}
In this section, we describe our algorithms for removing watermarks from neural networks in more detail. Adversaries, described as the plagiarizer \textit{P} in the preceding Attack Model section, have access to the intermediate pre-trained layers of the watermarked neural network ($L_j$), which, in this case, were watermarked during the original training process. Additionally, the adversaries have their own correctly labeled training dataset ($X_i$), procured either manually or by using the watermarked network's outputs to automatically label the examples~\cite{tramer2016stealing}. Note that in the latter case the watermarked network would not intentionally misclassify any of these images, as none of them would contain the specific watermark by pure chance.
While throughout this section we draw upon previous backdoor-reconstruction techniques~\cite{wangneural}, we include our own novel contributions, specifically designed for black-box watermark-removal scenarios. These include:
\begin{itemize}
\item Combining the ``pruning'' and ``unlearning'' steps proposed in~\cite{wangneural} as a two-step approach to removing watermarks even with very limited retraining data.
\item Implementing a statistics-based approach for deciding whether neurons within a layer should be reset, based on the relative average activation of the entire dense or convolutional layer.
\item Extensively investigating the ``unrelated'' and ``noise'' styles of watermarks, which tend to avoid detection and removal in existing backdoor removal schemes~\cite{wangneural, liu2018fine}.
\item Offering an additional application of the reverse-engineered masks generated during the watermark reverse-engineering process to aid in the removal of the unrelated style of watermark.
\end{itemize}
\begin{figure*}[ht!]
\centering
\includegraphics[scale=0.76]{images/new_black-box_laundering.png}
\caption{Overview of black-box laundering procedure, where the vanilla images are normally labeled data and red crosses are the reset neurons. This shows the process of recovering potential watermarks using methods such as Wang et al.'s~\cite{wangneural}, laundering the network via our algorithm, and finally retraining the network based on re-labeled reconstructed images. The process is repeated over multiple iterations where the retrained model is fed back into the reconstruction algorithm.}
\label{fig:demo2}
\end{figure*}
A black-box watermark removal technique requires more effort than the white-box adversary scenario. The recent ``Neural Cleanse''~\cite{wangneural} research represents a significant step toward reducing the threat that backdoors pose to neural networks. At the same time, however, it highlights a weakness in the \emph{security} requirement of neural network watermarks, which states that a network shall not reveal any information regarding the watermarks' presence. While the Neural Cleanse method succeeded in many backdoor scenarios, where the victim was able to reconstruct a highly representative example of the actual backdoor, our results demonstrated the need to incorporate its reconstructive process into a larger attack algorithm from the adversary's standpoint.
In order to perform such an attack, we propose the following 3-step process: 1) watermark recovery, 2) black-box attack laundering via Algorithm~\ref{LaunderingAlg}, and 3) black-box adversary retraining. Step 2 is required to reset potentially watermarked neurons, and Step 3 restores the performance of the laundered model back to acceptable levels in instances where Step 2 has reset a large number of neurons that also serve an important role in non-watermarked classification. We present the end-to-end black-box laundering procedure in Figure~\ref{fig:demo2} and now describe each step in more detail.
\textbf{Step 1. Watermark Recovery.} As discussed in Section~\ref{sec: background}, there exist methods to discover potential backdoors within neural networks~\cite{wangneural}. Using such a method, it is possible to reconstruct the smallest perturbations required to push one class to another. However, this step alone is insufficient in the watermark removal domain.
\textbf{Step 2. Black-box Attack Laundering.} While black-box adversaries have no access to known watermark images, they assume the reconstructed watermarks to be accurate representations of the actual watermarked images. As such, the reconstructed watermarks are overlain on the adversary's available training data, and these sets are sent through the laundering algorithm.
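The overlay step can be sketched with the mask/pattern formulation used in Neural Cleanse-style reverse engineering, $x' = (1 - m) \cdot x + m \cdot \Delta$. This is our illustrative helper, not the original implementation; the variable names are our own.

```python
import numpy as np

def apply_reconstructed_watermark(images, mask, pattern):
    """Blend the reconstructed `pattern` into each image wherever
    the reconstructed `mask` is non-zero (mask values in [0, 1])."""
    mask = np.clip(mask, 0.0, 1.0)
    # broadcasting applies the same mask/pattern to every image
    return (1.0 - mask) * images + mask * pattern
```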
Using these manually constructed watermarks, the adversary observes and records the activations of each layer $L_j$ for each watermarked image $W_k$ to calculate the total watermarked activation for each neuron, $AW_j^{total}$. The adversary then calculates the average activation of each neuron, $AW_j^{avg}$, by dividing by the number of watermarked examples, as shown in line 8 of Algorithm~\ref{LaunderingAlg}.
Following this, the adversary performs a similar calculation across all non-watermarked images $X_i$ through all layers $L_j$, recording the total non-watermarked activation for each neuron, $AN_j^{total}$, and averaging to obtain $AN_j^{avg}$. The adversary can now subtract the average normal activation $AN_j^{avg}$ from the average watermarked activation $AW_j^{avg}$, yielding the average difference in activation for that layer, $A_j^{diff}$, as demonstrated in line 10 of Algorithm~\ref{LaunderingAlg}.
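The averaging and differencing just described can be sketched in a few lines. Here `layer_fn` stands in for a Keras sub-model that returns the activations of one layer $L_j$; the helper name and list-based inputs are our own simplification.

```python
import numpy as np

def activation_difference(layer_fn, normal_x, watermarked_x):
    """Per-neuron difference between average watermarked and average
    normal activations for one layer (A_diff = AW_avg - AN_avg)."""
    an_total = sum(layer_fn(x) for x in normal_x)       # AN_total
    aw_total = sum(layer_fn(x) for x in watermarked_x)  # AW_total
    an_avg = an_total / len(normal_x)                   # AN_avg
    aw_avg = aw_total / len(watermarked_x)              # AW_avg
    return aw_avg - an_avg                              # A_diff
```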
From here, the adversary simply resets any neuron that activates strongly in the presence of watermarked images but not for non-watermarked images. The adversary does this by stepping through each neuron $v$ in $L_j$ and resetting it if the difference in activation between watermarked and non-watermarked images ($A^{diff}_{j_{v}}$) rises above some threshold value $DT$, as in line 14 of Algorithm~\ref{LaunderingAlg}. If the layer is convolutional (and the activation difference rises above $CT$), the adversary instead resets the entire intermediate channel $L_{j_{v}}$.
``Resetting'' a neuron or channel may take many forms. In our case, we immediately set the weights of the inputs into the layer to zero during the algorithm and maintain those reset weights on each successive pass through all layers $L_j$ while retraining the network, preventing the watermarked neurons from reappearing. For most activation functions, setting the weights to zero achieves the desired effect; however, this is not always the case. Neurons using activation functions such as softplus~\cite{dugas2001incorporatingsoft}, which produce non-zero output at zero, may still be highly activated in the presence of zero-weighted inputs. In such a case, it is up to the adversary to choose a more appropriate resetting procedure, although we suggest that simply setting those neurons' or channels' weights to the layer's median weight may suffice.
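The zero-reset can be sketched as follows for a single dense layer. `reset_neurons` is a hypothetical helper operating on the layer's (inputs $\times$ neurons) kernel; in Keras this would correspond to editing the array returned by `layer.get_weights()` and writing it back with `layer.set_weights(...)`.

```python
import numpy as np

def reset_neurons(weights, a_diff, threshold):
    """Zero the incoming weights of neurons whose activation
    difference (A_diff) exceeds the dense threshold DT."""
    flagged = np.where(a_diff > threshold)[0]
    laundered = weights.copy()
    laundered[:, flagged] = 0.0  # kill the watermark-linked neurons
    return laundered, flagged
```

On later passes, the same flagged indices would be re-zeroed after each retraining update so the watermarked neurons cannot reappear.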
\def\NoNumber#1{{\def\alglinenumber##1{}\State #1}\addtocounter{ALG@line}{-1}}
\newcommand{\mathrel{+}=}{\mathrel{+}=}
\begin{algorithm}[t!]
\caption{Watermark Laundering algorithm (Step 2)}\label{LaunderingAlg}
\begin{algorithmic}[1]
\State Given:
\NoNumber{Trained intermediate layers $(L_{j})$ for $j = 0, 1,\cdots J$}
\NoNumber{Normally labeled data $(X_{i})$ for $i = 0, 1,\cdots, I$}
\NoNumber{(Recovered) Watermarked data $(W_{k})$ for $k = 0, 1,\cdots, K$}
\State Predefined:
\NoNumber{Dense layer activation threshold $(DT)$}
\NoNumber{Convolutional layer activation threshold $(CT)$}
\For{$j = 0, 1, \cdots, J$}
\For{$i = 0, 1, \cdots, I$}
\State $AN^{total}_{j} \mathrel{+}= L_{j}(X_{i})$
\EndFor
\For{$k = 0, 1, \cdots, K$}
\State $AW^{total}_{j} \mathrel{+}= L_{j}(W_{k})$
\EndFor
\EndFor
\State Avg layer watermark activation $AW^{avg}_{j} = AW^{total}_{j} / K$
\State Avg layer normal activation $AN^{avg}_{j} = AN^{total}_{j} / I$
\State Activation difference $A^{diff}_{j} = AW^{avg}_{j} - AN^{avg}_{j}$
\For{$j = 0, 1, \cdots, J$}
\If {$L_{j}$ TYPE is DENSE}
\For{$v = 0, 1, \cdots, V$}
\If {$A^{diff}_{j_{v}} > DT$}
\State Reset intermediate layer neuron $L_{j_{v}}$
\EndIf
\EndFor
\ElsIf {$L_{j}$ TYPE is CONV}
\For{$v = 0, 1, \cdots, V$}
\If {$A^{diff}_{j_{v}} > CT$}
\State Reset intermediate layer channel $L_{j_{v}}$
\EndIf
\EndFor
\EndIf
\EndFor
\end{algorithmic}
\end{algorithm}
\textbf{Step 3. Black-box Adversary Retraining.} Finally, the model is retrained on all available examples, including those collected during reconstruction. These non-watermarked examples are the same examples passed through the neural network in the laundering step, except that in this step the watermarked examples are labeled with the correct class. This is similar to the retraining steps in backdoor-removal methods~\cite{wangneural, liu2018fine}; however, unlike backdoor scenarios, we also face the ``unrelated'' watermark, which has no ``correct'' label. Unlike Neural Cleanse~\cite{wangneural}, we use the Median Absolute Deviation technique to identify the class \emph{least likely} to be watermarked and label our reconstructed unrelated images as that class during retraining. For this reason, we stop laundering early if the original most-likely-infected class comes to be considered the least likely to be infected; otherwise, retraining could strengthen the original watermark. Additionally, we also feed the reconstructed masks generated during the reverse-engineering step into the retraining dataset, again labeled as the least likely class; this acts as a secondary approach to remove neurons potentially watermarked with the unrelated style of image. The retrained model is then fed back into the backdoor reconstruction algorithm.
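The least-likely-class selection can be sketched as follows, assuming, as in Neural Cleanse, that the Median Absolute Deviation statistic is computed over the L1 norms of the per-class reverse-engineered trigger masks (an anomalously small mask suggests infection, so the largest positive anomaly is the least suspicious). The helper name and the consistency constant usage are our own.

```python
import numpy as np

def least_likely_watermarked(mask_l1_norms):
    """Return the class index least likely to be watermarked, via a
    MAD-normalized anomaly score over trigger-mask L1 norms."""
    norms = np.asarray(mask_l1_norms, dtype=float)
    med = np.median(norms)
    mad = np.median(np.abs(norms - med)) * 1.4826  # consistency const
    # strongly negative score => suspiciously small trigger (infected);
    # the largest score => least likely to be watermarked
    anomaly = (norms - med) / (mad + 1e-12)
    return int(np.argmax(anomaly))
```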
\section{Related work}
Despite the application of neural networks across a wide variety of domains, research into digital rights management and intellectual property protection schemes for them remains limited. Some research has investigated the feasibility of incorporating encryption schemes into neural networks; one such example is the work of Hesamifard et al.~\cite{hesamifard2017cryptodl}, who developed CryptoDL to preserve the privacy of raw images being trained in CNNs. Using a modification of such encryption schemes, it may be possible to train a network that can only make predictions on new data when the input has been encrypted with the same private key used during training, thus preserving the intellectual property rights of the neural network's owner in the case of theft.
Originally, Uchida et al.~\cite{Uchida-watermarks} proposed embedding watermarks into the convolutional layer(s) of a deep neural network using a parameter regularizer. However, their procedure required white-box access to the weights of the neural network, which is not feasible or practical in many theft scenarios. Therefore, black-box watermarking models have been proposed as well. One proposed model takes advantage of adversarial examples to detect watermarks~\cite{merrer2017adversarial}, but this method may be vulnerable to retraining techniques aimed to reduce the impact of adversarial examples. ``DeepSigns''~\cite{rouhani2018deepsigns} demonstrated the potential to detect watermarks through the probability density function of the outputs of a neural network in both white-box and black-box scenarios.
Other approaches have attempted to embed watermarks that are more resilient to vanilla fine-tuning and fine-pruning techniques. One such work proposed exponential weighting~\cite{namba-exponential}, whereby small weights of the neural network are significantly diminished, leaving only weights with large absolute values to influence the operation of the model. However, this method is also susceptible to our algorithm, because such high-magnitude activations appear especially in the presence of watermarked images but not non-watermarked images; this corresponds to line 10 in Algorithm~\ref{LaunderingAlg}. More advanced backdoor embedding schemes have also been proposed: Li et al.~\cite{Li:how-to-prove} embed a watermark via an encoder such that the final watermarked image is barely distinguishable from the original image. Future work will investigate the robustness of such schemes.
Furthermore, two additional recent works explore the ability to undermine the robustness of watermarks that take advantage of neural network backdoors. Hitaj and Mancini~\cite{hitaj2018have} propose using an ensemble of networks to reduce the likelihood of a watermark being correctly classified, as well as a detector network that returns random classes if it believes a watermark is present in the image. Shafieinejad et al.~\cite{shafieinejad2019robustness} also investigate the content, noise, and abstract categories of watermarks, but instead of recovering and removing the watermark, they propose copying the functionality of the model directly through queries with a non-watermarked dataset.
Finally, while we focus primarily on the intersection of neural network backdoors and neural network watermarking, other research has investigated the relationship of watermarking a digital media with adversarial machine learning. Quiring et al.~\cite{quiring2018forgotten} demonstrate that watermarking and adversarial machine learning attacks correlate such that increasing the robustness of a classifier can be used to prevent oracle attacks against watermarked image detection. %
\section{Discussion}
\label{Discussion}
In this section, we discuss several key issues related to choosing watermark detection thresholds; the integrity, reliability, and accuracy of recovered watermarks; and adversarial watermarks.
\subsection{On choosing the minimum watermark detection threshold}
\label{min-treshold}
Figure~\ref{unrelated_fluctuations} (a) shows our MNIST model training on clean data while attempting to classify unrelated watermarks. As the figure shows, there is no true \emph{final} value at which a converged, clean model will classify ``unrelated'' watermark images. It is entirely possible that a legitimate network stops training during an epoch where the watermark classification accuracy is, purely by coincidence, quite high. For the unrelated watermarks, and especially for the CIFAR-10 model in Figure~\ref{unrelated_fluctuations} (b), this effect is even greater, with fluctuations reaching above 50\% after certain epochs.
As a result, we argue that any laundered network that falls below the highest naturally occurring watermark classification rate cannot provide provable ownership for the true owner of the network. Furthermore, \emph{even laundered models that fall slightly above this threshold} may not provide meaningful ownership, because a black-box demonstration relies strictly on accuracy, not loss, and a (perhaps random) fluctuation of a few percentage points may not be enough to demonstrate ownership of a model. Note that in Figure~\ref{unrelated_fluctuations}, the CIFAR-10 model does not reach 90\% because we did not perform data augmentation (rotating, scaling, etc.) when training this example model.
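The ownership criterion argued above can be sketched directly. This is a minimal illustration of our reasoning, not a formal test; `can_demonstrate_ownership` is a hypothetical helper, and the per-epoch accuracies would come from evaluating a clean (vanilla) model on the watermark set after each training epoch.

```python
def can_demonstrate_ownership(laundered_wm_acc, vanilla_wm_acc_per_epoch):
    """Ownership is only plausibly demonstrable if the laundered model
    detects the watermark strictly above the highest rate a clean model
    ever reached by pure chance during training."""
    threshold = max(vanilla_wm_acc_per_epoch)
    return laundered_wm_acc > threshold
```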
\begin{figure}[t!]
\hspace*{-7mm}
\centering
\includegraphics[scale=0.31]{final_images/FINAL_fluctuations-reduced.png}
\caption{Demonstrations of how legitimately trained models can fluctuate on classifying a watermarked image as the backdoored class during training.}
\label{unrelated_fluctuations}
\end{figure}
\subsection{On the reliability and integrity of DNN watermarks}
In the DeepSigns~\cite{rouhani2018deepsigns} research, the authors expand upon the requirements for effective watermarks proposed in Uchida et al.'s original design~\cite{Uchida-watermarks}. In addition to generalizability (that the scheme should work in both white-box and black-box settings), the authors argue that neural network watermarks should have both \textbf{1) reliability}, yielding minimal false negatives, and \textbf{2) integrity}, yielding minimal false positives. However, applying our method to watermarked neural networks significantly hinders the ability of current watermark embedding processes to claim reliability and integrity: adversaries can significantly increase false negatives and false positives in watermarked networks without access to the original training watermarks or the original training set.
For example, the reliability and integrity of the watermarks are in question if the detection accuracy is not consistent. We argue this is especially true for unrelated watermarks, because unrelated watermarks have no ``correct'' answer in these networks. In the MNIST set, the network was trained to classify the letter ``m'' as a ``0''. After adversary laundering, the unrelated images were still classified as ``0'' 16\% of the time, but were classified as ``8'' 45\% of the time and as ``9'' 25\% of the time. In the case of CIFAR-10, the unrelated images of an MNIST ``1'' digit were embedded as the watermarked class ``airplane'' and were detected as such 23\% of the time, but were also classified as ``ship'' 52\% and as ``truck'' 21\% of the time. Non-watermarked networks could produce similar classifications, so such results may not demonstrate ownership under current methods.
Further complicating the reliability and integrity of demonstrating ownership through watermarks is the ability for adversaries to generate adversarial images via a stolen model. Such examples allow an adversary to conversely claim ownership retroactively through a stolen model. We discuss this in more detail in Appendix~\ref{adversarial-watermarks}.
\captionsetup[subfigure]{oneside,margin={-0.3cm,0cm}}
\begin{figure}[t!]
\centering
\begin{subfigure}{0.15\textwidth}
\centering
\includegraphics[scale=7.5]{reconstruction_images/cifar10_content_original.png}
\caption{Original Image}
\end{subfigure}\hspace{1mm}
\begin{subfigure}{0.15\textwidth}
\centering
\includegraphics[scale=7.5]{reconstruction_images/cifar10_content_recon.png}
\caption{Recovered Image}
\label{fig:subim2}
\end{subfigure}\hspace{1mm}
\begin{subfigure}{0.15\textwidth}
\centering
\includegraphics[scale=2.45]{reconstruction_images/cifar10_content_applied.png}
\caption{Retraining Image}
\label{fig:subim3}
\end{subfigure}
\caption{(a) Watermark, (b) attempted reconstruction, and (c) reconstruction applied to a clean image from Zhang et al.'s~\cite{Zhang-watermarks} content-based watermarks.}
\label{reconstruction}
\end{figure}
\subsection{On the inaccuracy of recovered watermarks}
Using multiple iterations of the full laundering algorithm results in watermark reconstructions that are far from perfect yet sufficient in most cases. For example, in Figure~\ref{reconstruction}, even though the recovered watermark bears no human-interpretable resemblance to the original watermark, it is sufficient to remove the watermark during the laundering process. We speculate that these attacks work on watermarks because of the black-box nature of neural networks, where it is very difficult to control exactly which features are learned. Therefore, even if the reconstructions are not exact, they will be similar enough to the learned features to 1) identify potentially-watermarked neurons, and 2) overwrite them during retraining.
As a result of this phenomenon, we argue that current watermarking approaches, in which the watermarked images are simply added to the neural network's training data and trained on indiscriminately, do not adequately fulfill some crucial criteria. In addition to existing watermark criteria (fidelity, robustness, etc.), we propose that watermarks should be embedded with \textbf{specificity}. For example, to satisfy the specificity requirement, a network watermarked with content such as that in Figure~\ref{reconstruction} (a) would not lose its watermarked quality even if (b) is recovered and the network is retrained on (c). Although (b) contains some very generalized features of the watermark in (a), we posit that this should not be sufficient to violate such a specificity requirement: a watermark should only be removed or overwritten by retraining when the retraining images contain substantially similar watermarked examples.
\captionsetup[subfigure]{oneside,margin={-0.0cm,0cm}}
\begin{figure}[t!]
\centering
\hspace{-1mm}
\begin{subfigure}{0.16\textwidth}
\includegraphics[scale=0.76]{adversarial_images/normal_frog.png}
\caption{Original Image\\(classified as ``frog'')}
\end{subfigure}\hspace{12mm}
\begin{subfigure}{0.16\textwidth}
\includegraphics[scale=0.76]{adversarial_images/adv_frog.png}
\caption{Adversarial Image\\(classified as ``cat'')}
\label{fig:subim99}
\end{subfigure}
\caption{Adversarial example constructed against the stolen model that fools both the original model and the laundered model into classifying the image as ``cat''.}
\label{cifar10-adversarial}
\end{figure}
\subsection{On the drop in test accuracy of laundered models}
\label{drop}
In proposed watermarking strategies~\cite{Zhang-watermarks}, the proponents could argue that the drop in test accuracy caused by watermark removal techniques is sufficient to deter attackers from stealing or using stolen models. In our experiments, our method generally maintains test accuracy well, with the largest drop, approximately 9\%, occurring on CIFAR-10 against the Zhang et al.~\cite{Zhang-watermarks} ``unrelated'' watermark type. However, we argue that such a reduction (or even a larger one) in accuracy does \textbf{not} imply that a laundered model is not useful. Indeed, while it is fair to point out that laundered accuracies on MNIST and CIFAR-10 reach neither the original model accuracies nor the state-of-the-art, in many of our scenarios an attacker could not approach such accuracies with their limited datasets. We emphasize that a laundered model is useless only \emph{if the laundered model performs worse than a model trained from scratch on the same (limited) dataset}.
In situations where adversaries have very limited datasets (e.g., 0.6\% of the original training size), adversaries can use a stolen laundered model to improve their classification accuracy. In such cases, where training data may be very difficult to obtain or very expensive (such as medical data, high-quality clean data, etc.), a stolen model, even if watermarked, remains enticing to an adversary, not only to steal for private use but perhaps also to launder and deploy.
Additionally, we demonstrate that the reported results of black-box backdoor watermark attacks overestimate the ability of models to retain those watermarks. Related work does not adequately explore the potential for attackers to detect and remove neurons and/or weights that contribute to the detection of watermarks.
\subsection{On the issue of adversarial watermarks}
\label{adversarial-watermarks}
Another point to consider is that a stolen model is susceptible to a wide range of adversarial attacks. Notwithstanding other security threats posed by adversarial examples in such a scenario, given white-box access to a model, an adversary could easily manufacture images that they claim to be watermarked but are actually adversarially-crafted against the original network. In this case, it will be additionally difficult to prove ownership of the model.
For example, Figure~\ref{cifar10-adversarial} contains an adversarial image constructed against a stolen Zhang et al.~\cite{Zhang-watermarks} CIFAR-10 content-watermarked network. The original model classifies the adversarial image of a ``frog'' as ``cat''. Due to the similarity between the two networks and the ability of adversarial examples to transfer between models~\cite{adv-transferabililty}, it is also misclassified as ``cat'' by the post-laundering model, without targeting that model specifically. This particular laundered model achieves 85\% accuracy on the original task, but only 1.5\% accuracy on the original watermarks. Moreover, when constructing adversarial images, the adversary can choose any percentage of final watermark detection accuracy by also purposefully crafting adversarial watermarks that fail to be detected.
Moreover, even without actually embedding their own ``evil'' watermark into this model, an adversary can simply create an adversarial example that looks sufficiently similar to a watermark of which the adversary claims ownership. One may argue that adversarial examples of this sort may be easy to detect because the text (``evil'', in this case) contains some human-noticeable noise, yet an optimization algorithm that forbids noise in those pixel locations would likely find a suitable minimum without significantly more work. We leave such an implementation to future work.
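As a rough sketch of this idea, a standard FGSM-style perturbation step can simply be zeroed wherever noise is forbidden, e.g., over the pixels covered by the claimed watermark text. The toy gradient and function names below are illustrative assumptions, not a worked attack:

```python
import numpy as np

def masked_fgsm_step(x, grad, eps, forbidden):
    """FGSM-style step whose perturbation is suppressed on `forbidden` pixels,
    so the crafted image stays clean where the fake watermark text sits."""
    perturbation = eps * np.sign(grad) * (~forbidden)
    return np.clip(x + perturbation, 0.0, 1.0)

x = np.full((4, 4), 0.5)                 # toy grayscale image
grad = np.ones((4, 4))                   # toy loss gradient w.r.t. the input
forbidden = np.zeros((4, 4), dtype=bool)
forbidden[1:3, 1:3] = True               # pixels the optimizer must not touch
adv = masked_fgsm_step(x, grad, eps=0.1, forbidden=forbidden)
```

In a real attack the gradient would come from the stolen model, and the step would be iterated; the masking constraint itself is unchanged.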
\subsection{Limitations of prior watermark removal techniques}
Considering the similarities between watermarks and backdoors, it is natural to question why the original Neural Cleanse approach~\cite{wangneural} is not sufficient for watermark removal. We argue that this arises from two differences in our approaches. First, our method does not inherently rely on the outlier detection mechanism to be accurate; an adversary assumes there to be a watermark regardless of its outcome. This is especially necessary for watermarks as opposed to backdoors due to 1) the limitation of the datasets available, and 2) the types of augmentations that are used as watermarks. Indeed, for the content-based watermark reconstruction, we do find that the results are as expected from the original approach; however, for the noise and unrelated reconstructions shown in Figure~\ref{fig:similar-reconstructions}, an outlier detection scheme may struggle to decide which label is watermarked.
Second, as described earlier, the unrelated watermark has no ground-truth class, and neither the original Neural Cleanse approach~\cite{wangneural} nor Fine-Pruning~\cite{liu2018fine} considers this situation; both leave the unrelated watermark largely untouched. In contrast, we retrain on all images with the reconstructed mask applied, labeled as the correct class, as well as on the masks themselves, labeled as the least-likely class.
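A minimal sketch of how such a retraining set can be assembled from a recovered mask and trigger; the helper name and exact labeling pipeline here are illustrative assumptions rather than our full implementation:

```python
import numpy as np

def build_retraining_set(images, labels, mask, delta, least_likely_class):
    """Pair (1) the recovered mask applied to every clean image, keeping the
    *correct* label, with (2) the bare reconstructed trigger, labeled as the
    least-likely class, so the unrelated watermark is also overwritten."""
    m3 = mask[..., None]                            # broadcast over channels
    with_mask = (1.0 - m3) * images + m3 * delta    # still labeled correctly
    trigger_only = m3 * delta                       # labeled least-likely
    xs = np.concatenate([with_mask, trigger_only[None]])
    ys = np.concatenate([labels, [least_likely_class]])
    return xs, ys

imgs = np.zeros((2, 4, 4, 3))
labels = np.array([3, 7])
mask = np.zeros((4, 4)); mask[0, 0] = 1.0           # toy one-pixel trigger
delta = np.ones((4, 4, 3))
xs, ys = build_retraining_set(imgs, labels, mask, delta, least_likely_class=9)
```

Retraining on `xs`/`ys` then forces the model to treat masked images normally while unlearning the trigger-to-watermark-class association.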
\begin{figure}[t!]
\centering
\begin{tabular}{c c}
\includegraphics[scale=2.2]{reconstruction_images/cifar10_laundered_1_noise_10_visualize_fusion_label_0.png} &
\includegraphics[scale=2.2]{reconstruction_images/cifar10_laundered_1_noise_10_visualize_fusion_label_3.png}\cr
(a) Noise Recon Label 0 &
(b) Noise Recon Label 3 \cr
\includegraphics[scale=2.2]{reconstruction_images/cifar10_laundered_1_unrelated_10_visualize_fusion_label_0.png} &
\includegraphics[scale=2.2]{reconstruction_images/cifar10_laundered_1_unrelated_10_visualize_fusion_label_3.png} \cr
(c) Unrelated Recon Label 0 &
(d) Unrelated Recon Label 3 \cr
\end{tabular}
\caption{CIFAR-10 reconstructed watermarks using the Neural Cleanse~\cite{wangneural} algorithm. Outlier detection struggles to detect the watermarked class; actual watermarked class is Label 0.}
\label{fig:similar-reconstructions}
\end{figure}
\section{Introduction}
Deep neural networks (DNNs) have become the state-of-the-art standard across a wide variety of fields ranging from computer vision to speech recognition systems, and they have been predominantly adopted by many industries~\cite{abiodun2018state}. As a result, many organizations have come to heavily rely on neural networks as part of their core operations, which requires a substantial investment into very powerful computing resources, vast quantities of data, and specialized machine learning expertise. Hence, organizations that do build and train their own models would need to protect their systems from plagiarism, and those who sell or share their models would also want to demonstrate ownership of the system when an infringement of copyright occurs.
Thus far, attempts to develop provable ownership in neural networks have mainly relied on two distinct categories of watermarking techniques: (1) Watermarks that are embedded through backdooring attacks by injecting the backdoor via training images~\cite{Zhang-watermarks, weakness-into-strength-backdoor}, and (2) Watermarks created and embedded directly into the neural network~\cite{Uchida-watermarks, rouhani2018deepsigns}. Our work focuses on the first set of watermarking techniques, which we refer to as ``Backdoor Watermarks.'' These techniques typically embed specially-crafted inputs into the training of a neural network; the inputs are designed to produce a highly consistent, but unusual, output during testing. For example, a watermark may be embedded into a network by including a subset of images during initial training that skews the network to classify those images unexpectedly. This subset may involve a ``trigger set'' of unrelated images~\cite{weakness-into-strength-backdoor}, or it may contain content or noise overlaid on the image~\cite{Zhang-watermarks}. In all of these cases, feeding the specially-crafted inputs into the trained system returns a consistent output that would not normally be expected. Because backdoors and watermarks are often conflated, we provide a comparison of related work in Table~\ref{table:description} to highlight where our work falls within this domain.
\begin{table}[!t]
\caption{Comparison of neural network backdooring and watermarking techniques.}
\label{table:description}
\begin{tabular}{l|l|l|l}
\cline{2-3}
& Offensive (backdoor) & Defensive (watermark) & \\ \cline{1-3}
\multicolumn{1}{|l|}{\multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Bit\\ embedding\end{tabular}}} & N/A & \begin{tabular}[c]{@{}l@{}}Uchida et al.~\cite{Uchida-watermarks}, \\ DeepSigns~\cite{rouhani2018deepsigns}, etc.\end{tabular} & \\
\multicolumn{1}{|l|}{} & \begin{tabular}[c]{@{}l@{}}\emph{Mitigated by:}\\ N/A\end{tabular} & \begin{tabular}[c]{@{}l@{}}\emph{Mitigated by:}\\ N/A\end{tabular} & \\ \cline{1-3}
\multicolumn{1}{|l|}{\multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Backdoor \\ embedding\end{tabular}}} & \begin{tabular}[c]{@{}l@{}}BadNets~\cite{gu2017badnets},\\ Liu et al.~\cite{Trojannn}, etc.\end{tabular} & \begin{tabular}[c]{@{}l@{}}Adi et al.~\cite{weakness-into-strength-backdoor},\\ Zhang et al.~\cite{Zhang-watermarks}, etc.\end{tabular} & \\
\multicolumn{1}{|l|}{} & \begin{tabular}[c]{@{}l@{}}\emph{Mitigated by:}\\ Fine-Pruning~\cite{liu2018fine},\\ Neural Cleanse~\cite{wangneural}\end{tabular} & \begin{tabular}[c]{@{}l@{}}\emph{Mitigated by:}\\ \textbf{Our work}\\ \phantom{holder}\end{tabular} & \\ \cline{1-3}
\end{tabular}
\end{table}
The owner organization can then use the network's output to demonstrate their ownership of the model because only their watermarked model would behave specifically in this way. That is, black-box watermarks attempt to prove ownership of a model using only public API access by querying the potentially plagiarized network with carefully constructed inputs.
In particular, Zhang et al.~\cite{Zhang-watermarks} proposed a watermarking model to be secure against model-pruning, fine-pruning, and inversion attacks. Adi et al.~\cite{weakness-into-strength-backdoor} also proposed a watermarking model to be robust against similar removal attacks. These approaches~\cite{Zhang-watermarks, weakness-into-strength-backdoor} allow for the embedding and detection of watermarks that are human-readable (content-based watermarks) as well as those that are not human-readable (unrelated or noise-based watermarks) in black-box scenarios. Moreover, Zhang et al.'s model~\cite{Zhang-watermarks} appears to be considered for deployment at large IT companies that deploy deep neural network services, such as IBM\footnote{https://www.ibm.com/blogs/research/2018/07/ai-watermarking/}. However, it is still questionable whether currently suggested state-of-the-art watermarking techniques are really robust against sophisticated and targeted manipulation of the structure of the neural network, especially given recent research demonstrating significant success at removing backdoors from neural networks altogether~\cite{wangneural, liu2018fine}. Backdoors and watermarks both may exploit the overparameterization of neural networks to learn multiple tasks, but while a backdoor is generally used~\emph{by} adversaries for malicious ends (e.g., misclassifying stop signs with stickers as speed limit signs~\cite{gu2017badnets}), watermarks are used~\emph{against} adversaries to prevent their deployment of stolen models.
\begin{figure}[t!]
\centering
\begin{tabular}{c c c}
\includegraphics[scale=1.5]{./images/1_content.png}\hspace{.80 cm} &
\includegraphics[scale=1.5]{./images/1_noise.png}\hspace{.80 cm} &
\includegraphics[scale=1.5]{./images/m.png} \cr
(a) Content\hspace{.80 cm} &
(b) Noise\hspace{.80 cm} &
(c) Unrelated\cr
\includegraphics[scale=1.3]{./images/105_automobile.png}\hspace{.80 cm} &
\includegraphics[scale=1.3]{./images/105a_automobile.png}\hspace{.80 cm} &
\includegraphics[scale=1.4]{./images/831.png} \cr
(d) Content\hspace{.80 cm} &
(e) Noise\hspace{.80 cm} &
(f) Unrelated \cr
\end{tabular}
\caption{Examples of watermarked images used in the MNIST (top) and CIFAR-10 (bottom) datasets following the description in previous work~\cite{Zhang-watermarks}. MNIST examples are watermarked to be classified as ``0'', and CIFAR-10 examples as ``airplane''.}
\label{fig:Zhang-technqiues}
\end{figure}
In this work, we present our novel neural network ``laundering'' algorithm to effectively remove potentially watermarked neurons or channels in DNN layers. We achieve this via a three-step process of watermark recovery, detecting and resetting watermarked neurons, and retraining on the reconstructed watermarks and watermark masks, with each step incorporating novel contributions. Moreover, our approach considers various types of backdoor attacks in black-box models as well, and we present the application of our proposed ``laundering'' technique to defeat backdoor attacks~\cite{chen2017targeted, Trojannn}.
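At a high level, the detect-and-reset step can be sketched as follows; the helper names and the simple activation-difference heuristic below are illustrative assumptions, not our exact implementation:

```python
import numpy as np

def flag_watermark_neurons(act_clean, act_marked, threshold=0.5):
    """Flag neurons whose mean activation rises sharply when the recovered
    watermark is applied to clean inputs (hypothetical heuristic)."""
    diff = act_marked.mean(axis=0) - act_clean.mean(axis=0)
    return diff > threshold

def reset_neurons(weights, flagged):
    """Zero the outgoing weights of flagged neurons so that the subsequent
    retraining step overwrites whatever they had learned."""
    laundered = weights.copy()
    laundered[flagged, :] = 0.0
    return laundered

# Toy example: 4 neurons, neuron 2 responds strongly to the watermark.
act_clean  = np.array([[0.1, 0.2, 0.1, 0.3], [0.2, 0.1, 0.2, 0.2]])
act_marked = np.array([[0.1, 0.2, 0.9, 0.3], [0.2, 0.1, 1.0, 0.2]])
flagged = flag_watermark_neurons(act_clean, act_marked)
weights = np.ones((4, 3))
laundered = reset_neurons(weights, flagged)
```

In the full algorithm this is applied per layer (or per channel for convolutional layers), followed by retraining on the limited clean set plus the reconstructed watermarks.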
To show the effectiveness of our laundering algorithm, we compare it with the removal attempts made within these original watermark proposal papers~\cite{Zhang-watermarks, weakness-into-strength-backdoor}. We also compare our results to the ``Fine-Pruning''~\cite{liu2018fine} backdoor removal technique; detailed backdoor background information and evaluation are available in Appendix~\ref{backdoor-background}. The structures of all our models can also be found in Appendix~\ref{architectures}.
Our approach shows weaknesses of current deep neural network watermarking techniques -- we can defeat the state-of-the-art watermarking techniques proposed by Zhang et al.~\cite{Zhang-watermarks} and Adi et al.~\cite{weakness-into-strength-backdoor}. In particular, we show that with an appropriate representation of the variety of training data, adversaries who have no knowledge of the watermark are able to successfully remove the majority of watermarking techniques, retaining accuracy much higher than stated in the prior work (e.g., up to 20\% higher in some cases~\cite{Zhang-watermarks}).
In addition, previous studies (e.g.,~\cite{xie2018mitigating-comp1, song2018pixeldefend-comp2}) presenting defense mechanisms against adversarial attacks have often argued that some attacks require significant computation. However, as in other DNN attack research~\cite{athalye2018obfuscated}, we show that this assumption does not provide an adequate defense. If thieves even suspect that a watermarked model could be laundered, they may invest significant effort to remove the watermark: laundering a stolen model may save significant time and money in data collection, data labeling, and neural network design and construction.
We make the following contributions in this paper:
\begin{itemize}
\item We present our novel neural network ``laundering'' algorithm, which effectively removes neurons or channels in DNN layers that contribute to the classification of watermarked images. Differentiating us from previous work which focused on adversarial backdoors~\cite{wangneural}, we take on the viewpoint of the attacker attempting to remove watermarks (i.e., defensive backdoors) and evaluate our effectiveness under various limited training sets to which an attacker may have access.
%
\item We provide an intensive overview of the combination of parameters used for laundering a neural network for different types of layers and network architectures. We also evaluate the effectiveness of different combinations of parameters and available data both regarding the removal of watermarks~\cite{Zhang-watermarks} and backdoors~\cite{chen2017targeted, Trojannn} as well as the preservation of model performance.
%
\item We discuss in-depth the findings from our experiments and highlight the previously-overlooked weaknesses that currently exist within most watermarking schemes. We also provide new insights into the reasons adversaries will attack a watermarked model despite accuracy loss as well as the reasons previous backdoor-removal techniques do not exploit the weaknesses in watermarks specifically.
\end{itemize}
\input{sections/Background.tex}
\input{sections/AttackModel.tex}
\input{sections/OurProposals.tex}
\input{sections/Experiments.tex}
\input{sections/RemovingBackdoors.tex}
\input{sections/Results.tex}
\input{sections/RelatedWork.tex}
\input{sections/Conclusion.tex}
\bibliographystyle{IEEEtran}
\section{Attack model}
\label{Attack Model}
In our attack model, we define two parties: the true owner \emph{O} of the neural network model \emph{m} and the plagiarizer \emph{P} who has managed to procure \emph{m}. \emph{P} may have acquired \emph{m} through a variety of ways, not all of which may be malicious; however, the exact means by which \emph{P} acquires \emph{m} is outside the scope of this paper. The model \emph{m} performs a particular task \emph{t}, but is watermarked in such a way that certain carefully-constructed examples \emph{$X_{w}$} produce highly specific outputs on task \emph{t}. In our attack model, the goal of the plagiarizer \emph{P} is to alter \emph{m} in such a way that the examples \emph{$X_{w}$} will no longer result in predictable outputs from \emph{m} while minimally impacting \emph{t}.
Our attack model places certain limitations on plagiarizer \emph{P}. First, \emph{P} has a substantially limited set of training data when compared to the creator \emph{O}. Otherwise, \emph{P} could trivially label a large set of non-watermarked training data using \emph{m} to create a non-watermarked model by predictive model theft techniques~\cite{tramer2016stealing}. Second, \emph{P} does not know if \emph{m} has been watermarked but assumes it to be. Therefore, our proposed attack method should be able to overcome the robustness of the watermarked model \emph{m} to pruning and fine-tuning with limited training data such that \emph{m} can adequately perform task \emph{t} without reacting to watermarked examples.
As described in other work~\cite{Uchida-watermarks,Zhang-watermarks,rouhani2018deepsigns}, watermarks in neural networks are designed to be robust to pruning, fine-tuning, and/or watermark overwriting as well as secure against discovery of the presence of a watermark in the model. Although there exist watermarking techniques that are robust to both traditional pruning and retraining, we present more sophisticated methods that are able to greatly hinder and defeat the effectiveness of these black-box watermarking embedding algorithms.
\section{Background}
\label{sec: background}
Backdooring attacks on neural networks have highlighted serious weaknesses in the black-box nature of neural networks throughout a variety of different tasks and model structures~\cite{gu2017badnets, chen2017targeted, Trojannn}. Backdoors exploit the vulnerability of the overparameterization of deep neural networks to hide deliberately-designed backdoors in the model.
If the training of a network is outsourced to a third party that surreptitiously inserts specific and maliciously-labeled training images, the victim organization will receive a model that behaves correctly on the surface but contains a hidden backdoor. For example, the BadNets research~\cite{gu2017badnets} demonstrated that it was possible to force the misclassification of stop signs to more than 90\% using only a yellow Post-it note sized square overlaid on the images.
\subsection{Backdoor Watermarking Techniques}
Some research has proposed utilizing the weaknesses of neural networks to backdoor attacks as a method for embedding watermarks. One specific implementation of this watermarking process is Zhang et al.'s~\cite{Zhang-watermarks} black-box technique that uses watermarked images as part of the training set of the network, consistently labeled as one class. These watermarked images include the following three types of images as shown in Figure~\ref{fig:Zhang-technqiues}: 1) meaningful content (e.g., a word) placed over part of the image in some subset of training images (``content''), 2) pre-specified (Gaussian) noise over some subset of training images (``noise''), or 3) completely unrelated images (``unrelated''). Note that Figure~\ref{fig:Zhang-technqiues} (f) is an image taken from MNIST but used as an ``unrelated'' watermark in CIFAR-10. In their experiments, they demonstrate minimal impact on the accuracy of the model, and their watermarks remain strong even after substantial pruning, tuning, and model inversion attacks~\cite{fredrikson2015model} against the watermarked model.
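For concreteness, the three watermark styles can be approximated with simple image transforms. This is a minimal sketch with hypothetical function names; the exact embedding pipeline in the original work~\cite{Zhang-watermarks} differs in detail:

```python
import numpy as np

rng = np.random.default_rng(0)

def content_watermark(img, text_mask, value=1.0):
    # Overlay meaningful content (e.g., rendered text) where text_mask is set.
    out = img.copy()
    out[text_mask] = value
    return out

def noise_watermark(img, sigma=0.1):
    # Add a fixed, pre-specified Gaussian noise pattern.
    noise = rng.normal(0.0, sigma, size=img.shape)
    return np.clip(img + noise, 0.0, 1.0)

def unrelated_watermark(unrelated_img, target_shape):
    # Use an out-of-distribution image, padded to the input shape.
    out = np.zeros(target_shape)
    h, w = unrelated_img.shape
    out[:h, :w] = unrelated_img
    return out

img = np.full((28, 28), 0.5)
mask = np.zeros((28, 28), dtype=bool)
mask[10:14, 4:24] = True                      # toy "text" region
wm_content = content_watermark(img, mask)
wm_noise = noise_watermark(img)
wm_unrelated = unrelated_watermark(np.ones((20, 20)), (28, 28))
# All three variants are then labeled as the single watermark class (e.g., "0").
```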
Adi et al.~\cite{weakness-into-strength-backdoor} similarly proposed using the mechanics of backdoors to embed watermarks to prove non-trivial ownership of neural network models. In their approach, the authors utilized 100 abstract images, each randomly assigned to a target class for their trigger set. Furthermore, their embedding procedure required the owner to sample \textit{k} images from the trigger set during each training batch.
Note that unlike watermarking, backdoors typically do not rely heavily on the ``unrelated'' style of image from~\cite{Zhang-watermarks} or the abstract images from~\cite{weakness-into-strength-backdoor}. Overlaying part of the image (e.g., with glasses or noise) is common in backdoors, but unrelated images like those used in watermarking schemes such as~\cite{weakness-into-strength-backdoor} are not commonly addressed in backdoor removal papers because such images would be predicted randomly. Nonetheless, our work focuses on highlighting the shortcomings of all such varieties of watermarking techniques: ``content'', ``noise'', and ``unrelated.''
\subsection{General Backdoor Removal Techniques}
Other research has tackled the similar problem of removing backdoors from neural networks; one highly related effort is the work by Liu et al.~\cite{liu2018fine}, which proposed a two-step process of first pruning the network and then fine-tuning the pruned network. Their method succeeds in removing backdoors from different deep neural network implementations. Similarly, recent research by Wang et al.~\cite{wangneural} has shown the ability to recover sufficiently similar backdoor triggers embedded in maliciously backdoored neural networks. Their research focuses on the ability of victims to detect and remove backdoors from their networks, where the victims have access to their full training set but would like to avoid intensive retraining of the model; it also investigates removal when the victim has no knowledge of a backdoor trigger but nevertheless aims to remove it. Because of the similarity in goals, we leverage Wang et al.'s backdoor reconstruction algorithm~\cite{wangneural}, which begins by discovering a trigger with the following formulation:
\begin{equation}
A(\mathbf{x}, \mathbf{m}, \Delta) = \mathbf{x}^{\prime}, \qquad
\mathbf{x}^{\prime}_{i,j,c} = (1 - \mathbf{m}_{i,j}) \cdot \mathbf{x}_{i,j,c} + \mathbf{m}_{i,j} \cdot \Delta_{i,j,c},
\end{equation}
where $A(\cdot)$ is the trigger application function, $\mathbf{x}$ is the original image, $\Delta$ is the trigger image, and $\mathbf{m}$ is the mask for the trigger. They further constrain this optimization by measuring the magnitude of the trigger via the $L_{1}$ norm of $\mathbf{m}$, resulting in the final formulation:
\begin{equation}
\min_{\mathbf{m},\, \Delta} \; \ell\left(y_{t}, f(A(\mathbf{x}, \mathbf{m}, \Delta))\right) + \lambda \cdot |\mathbf{m}| \quad \textrm{for } \mathbf{x} \in \mathbf{X},
\end{equation}
where $f(\cdot)$ is the network's output prediction function, $\ell(\cdot)$ is the loss function, $\lambda$ controls the weight given to the size of the reversed trigger, and $\mathbf{X}$ represents the available non-watermarked images. For more detailed information regarding the reconstruction scheme, we direct the reader to the original paper~\cite{wangneural} or to their open-source implementation\footnote{https://github.com/bolunwang/backdoor}.
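The trigger application function $A(\cdot)$ defined above is a simple per-pixel blend and can be written directly. This NumPy sketch covers only the application step, not the full trigger optimization:

```python
import numpy as np

def apply_trigger(x, m, delta):
    """x'_{i,j,c} = (1 - m_{i,j}) * x_{i,j,c} + m_{i,j} * delta_{i,j,c}."""
    m3 = m[..., None]                    # broadcast the 2-D mask over channels
    return (1.0 - m3) * x + m3 * delta

x = np.zeros((4, 4, 3))                  # toy clean image
delta = np.ones((4, 4, 3))               # toy trigger pattern
m = np.zeros((4, 4)); m[0, 0] = 1.0      # trigger covers a single pixel
x_prime = apply_trigger(x, m, delta)
l1_penalty = np.abs(m).sum()             # the |m| term weighted by lambda
```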
\section{Conclusion and future work}
We proposed a novel ``laundering'' algorithm that focuses on low-level manipulation of a neural network, based on the relative activation of neurons, to remove the detection of watermarks via black-box methods. We demonstrated significant weaknesses in current watermarking techniques, especially when compared to the results reported in the original research~\cite{Zhang-watermarks, weakness-into-strength-backdoor}. Specifically, we were able to reduce watermark accuracy below ownership-proving thresholds while achieving test accuracies above 97\% and 80\% for MNIST and CIFAR-10, respectively, for all evaluated model and watermark combinations using a highly restricted dataset (0.6\% of the original training size in most cases). %
In addition, we provided new insight by marrying the fields of watermark embedding and removal with those of backdoor embedding and removal. As we have shown in this research, the two have opposite repercussions for each other: improving defenses against backdoors will reduce the effectiveness of watermarks. We hope that future work will continue to challenge and improve existing neural network watermarking strategies as well as explore additional avenues for detecting and removing backdoors in neural networks.
Future work will investigate the feasibility of further decreasing black-box watermark effectiveness in two ways. First, future work will consider the possibility of boosting a collection of laundered neural networks using the black-box adversary method. It may be possible to boost the overall accuracy by combining multiple heavily laundered (10+ rounds) models and tallying all their votes for one classification using a method such as AdaBoost~\cite{rojas2009adaboost}. Second, future work will consider the ability of an adversary to attempt to prevent queries to a laundered network when the victim attempts to feed unrelated images for classification. Existing work has already considered detecting out-of-distribution examples~\cite{hendrycks2016baseline}, and even a poor classifier of this type could further weaken the effectiveness of unrelated watermarks below the acceptable threshold.
To conclude, our results indicate that current proposals to use backdoors as watermarks in neural networks overestimate the robustness of these watermarks while underestimating the ability of attackers to retain high test accuracy. The content-based and noise-based watermarks, in particular, are not robust to detection, reconstruction, and removal attacks. All methods do impose some additional overhead on model thieves, both in computational resources and in final classification accuracy, particularly in the unrelated style; nevertheless, our work demonstrates that, as more sophisticated backdoor reconstruction methods are developed, their adoption should not be considered secure against persistent removal attempts.
\section{Experiments}
\label{sec:Experiments}
\newcolumntype{Y}{>{\centering\arraybackslash}X}
\begin{table*}[ht!]
\setlength\tabcolsep{1.0pt}
\caption{Results of the black-box adversary algorithm for various watermarks and backdoors.}
\label{black-results}
\begin{tabularx}{\textwidth}{|Y|Y|Y|Y|Y|Y|Y|Y|Y|}
\hline
\textbf{Method} & \textbf{Dataset} & \textbf{Watermark Type} & \textbf{Original Test Accuracy} & \textbf{Laundered Test Accuracy} & \textbf{Original Watermark Accuracy} & \textbf{Laundered Watermark Accuracy} & \textbf{Vanilla Model Watermark Accuracy} & \textbf{Limited Retraining Size} \\ \hline \hline
Zhang et al. & \multirow{6}{*}{MNIST} & Content & 99.46\% & 97.03\% & 100\% & 99.95\% & 1.5\% & 16.6\% \\ \cline{3-9}
(90\% pruning) & & Noise & 99.41\% & 95.19\% & 100\% & 99.55\% & 6.0\% & 16.6\% \\ \cline{3-9}
& & Unrelated & 99.43\% & 93.55\% & 100\% & 99.9\% & 20\% & 16.6\% \\ \clineB{1-1}{2.5} \clineB{3-9}{2.5}
\multirow{3}{*}{Ours} & & Content & 99.90\% & 98.34\% & 100\% & 0.01\% & 1.5\% & 0.6\% \\ \cline{3-9}
& & Noise & 99.86\% & 97.45\% & 99.99\% & 0.07\% & 6.0\% & 0.6\% \\ \cline{3-9}
& & Unrelated & 99.92\% & 98.33\% & 99.71\% & 15\% & 20\% & 0.6\% \\ \hline \hline
Zhang et al. & \multirow{6}{*}{CIFAR-10} & Content & 78.41\% & 64.9\% & 99.93\% & 99.47\% & 5.0\% & 20.0\% \\ \cline{3-9}
(90\% pruning) & & Noise & 78.49\% & 59.29\% & 100\% & 65.13\% & 4.0\% & 20.0\% \\ \cline{3-9}
& & Unrelated & 78.12\% & 62.15\% & 99.86\% & 10.93\% & 52.0\% & 20.0\% \\ \clineB{1-1}{2.5} \clineB{3-9}{2.5}
\multirow{3}{*}{Ours} & & Content & 90.24\% & 87.65\% & 100\% & 1.4\% & 5.0\% & 0.6\% \\ \cline{3-9}
& & Noise & 89.01\% & 84.14\% & 100\% & 0.50\% & 4.0\% & 0.6\% \\ \cline{3-9}
& & Unrelated & 89.77\% & 85.34\% & 99.94\% & 16\% & 52.0\% & 0.6\% \\ \hline \hline
Adi et al. (PT) & \multirow{4}{*}{CIFAR-10} & \multirow{4}{*}{Unrelated} & 93.65\% & $\sim$90\% & 100\% & 100\% & 7\% & $\sim$ \\ \cline{1-1} \cline{4-9}
Ours (PT) & & & 91.55\% & 88.25\% & 100\% & 7.0\% & 12\% & 10.8\% \\ \cline{1-1} \cline{4-9}
Adi et al. (FS) & & & 93.81\% & $\sim$90\% & 100\% & 80\% & 7\% & $\sim$ \\ \cline{1-1} \cline{4-9}
Ours (FS) & & & 91.85\% & 84.73\% & 100\% & 7.0\% & 12\% & 10.8\% \\ \hline
\end{tabularx}
\end{table*}
In order to demonstrate the feasibility of our approach, we recreated the results of Zhang et al.'s work~\cite{Zhang-watermarks} for both MNIST and CIFAR-10. In general, the architectures of the deep neural networks followed those described in the original papers~\cite{Zhang-watermarks, weakness-into-strength-backdoor}, but we elaborate on any differences where relevant. We followed Zhang et al.'s implementation as described in their paper~\cite{Zhang-watermarks} for MNIST and CIFAR-10; however, while following the description given for the CIFAR-10 model, our models consistently converged at approximately 73\% test accuracy, rather than the 78\% given in the original work.
For re-implementing Adi et al.'s watermarking scheme, we followed both the pre-trained (PT) procedure where watermarks are embedded into the network following non-watermarked training as well as the from-scratch (FS) procedure where watermarks are embedded during training. Their original implementation converged at 93.65\% for CIFAR-10; however, our models converged at 91.15\% when implementing their method in Keras using ResNet18~\cite{he2016deep}. Nevertheless, we were able to recreate the 100\% watermark accuracies on this model described in the original work~\cite{weakness-into-strength-backdoor}.
The results in this section correspond to \textbf{one round of laundering}. We reset all layers in all DNNs except for those models using ResNet18+ (which is used in Adi et al.'s scheme~\cite{weakness-into-strength-backdoor}). Due to the depth of that model, we reset the second half of the weights only; the weights of the first half appear to be learning very high-level features that do not correspond directly to the watermarks.
We implemented our neural laundering prototype in Python 3.6 with Keras 2.1.6 and Tensorflow 1.7.0. The
experiments were conducted on a machine with an Intel i5 CPU, 16 GB RAM, and 3 Nvidia 1080 Ti GPUs with 11GB GDDR5X.
In order to evaluate our laundering technique in a realistic setting, we limited the adversary's retraining dataset size in the case of black-box attackers. Especially for MNIST, using even half of the test set size for training or retraining as performed in Zhang et al.'s original work~\cite{Zhang-watermarks} is sufficient to train a model to above 90\%, with or without the watermarked model. As a result, if we do not limit the training size, adversaries could simply train their own neural network from the output of the watermarked model using prediction algorithms found in Tram{\`e}r et al.'s work~\cite{tramer2016stealing}. This situation inherently implies there would be no need for laundering a watermarked network at all given a large retraining set size.
As a result, for the results against Zhang et al.'s watermarking scheme~\cite{Zhang-watermarks} we purposely limited the adversary's MNIST retraining dataset to be 0.6\% of the original training set size of 60,000 handwritten digits, which results in approximately 42 images per category. Likewise, we also limited the CIFAR-10 dataset to 0.6\%, which results in approximately 36 images per category. In reality, it is likely that adversaries would have more data available, and they would achieve better results than the conservative results we report.
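The dataset limitation above can be made concrete with a minimal sketch (the function name and the exact sampling procedure are our own assumptions; the paper only fixes the resulting fraction):

```python
import numpy as np

def limit_retraining_set(X, y, fraction=0.006, seed=0):
    """Randomly keep only `fraction` of the labeled training data,
    mimicking the 0.6% splits used in our black-box experiments.
    (Illustrative sketch; the exact sampling we used is an assumption.)"""
    rng = np.random.default_rng(seed)
    n_keep = int(round(len(X) * fraction))
    keep = rng.choice(len(X), size=n_keep, replace=False)
    return X[keep], y[keep]
```

With 60,000 MNIST training images this leaves 360 examples in total, i.e., a few dozen per class on average.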
For the following evaluation sections, we will repeatedly refer to Table~\ref{black-results}, wherein we list our results (the rows labeled ``Ours'' in column 1, ``Method'') directly below the proposed watermarking approach and other authors' original watermark removal attempts for comparison. Column 4 (``Original Test Accuracy'') and column 5 (``Laundered Test Accuracy'') record the accuracy on the never-before-seen test set of the original watermarked network before laundering and of the laundered network, respectively. Similarly, column 6 (``Original Watermark Accuracy'') and column 7 (``Laundered Watermark Accuracy'') record the accuracy on the detected watermarks of the original watermarked network before laundering and of the laundered network, respectively. The thresholds at which the watermarks become unusable for ownership demonstration purposes are listed in column 8 (``Vanilla Model Watermark Accuracy''); we discuss how we derive these values in Section~\ref{min-treshold}. Finally, we provide a comparison of the size of the dataset used in the watermark removal process in column 9 (``Limited Retraining Size''), which corresponds to the ``normally labeled data'' referenced in Algorithm~\ref{LaunderingAlg}.
\textbf{MNIST and CIFAR-10 Results via Zhang et al.'s Method.} For the MNIST model, across all watermarking types our method was able to reduce detected watermarks below usable levels. The content and noise watermark accuracies fall well below even random classification, and although the unrelated watermark accuracy is much higher (e.g., 15\% for the MNIST unrelated watermark), it still falls below the vanilla model watermark accuracy (e.g., 20\% for the same watermark).
For the CIFAR-10 model, our method was again able to reduce detected watermarks below usable levels for all watermark types, while limiting the drop in test accuracy to about 5\% overall. One point of importance is that for CIFAR-10 unrelated watermarks, our method does not reduce watermark accuracy as much as the pruning techniques reported in~\cite{Zhang-watermarks}. Nevertheless, our method maintains a higher percentage of the final test accuracy: the original paper recorded a drop from 78.12\% to 62.15\% (shown in columns 4 and 5 of Table~\ref{black-results}), while our method results in a drop from 89.77\% to 85.34\%. Additionally, our method demonstrates its effectiveness while using a smaller dataset size.
\begin{figure*}[t!]
\centering
\includegraphics[width=.77\textwidth]{final_images/final_graph_MNIST-EMNIST.png}
\caption{Results of laundering the MNIST and MNIST+ models to remove Zhang et al.'s proposed watermarks~\cite{Zhang-watermarks}.}
\label{rounds-MNIST-both}
\end{figure*}
\textbf{CIFAR-10 Results via Adi et al.'s Method.} In Adi et al.'s proposed watermarking scheme~\cite{weakness-into-strength-backdoor}, they included 100 unrelated (abstract) images, labeled each image with a random class, and trained their model on these images as well. However, their method proposed two ways to embed the watermark, either into the model from scratch (FS) or by fine-tuning a pre-trained (PT) model. We include evaluations of both approaches in Table~\ref{black-results}.
While there are perhaps more efficient ways to attack the Adi et al.~\cite{weakness-into-strength-backdoor} watermarking scheme, we did not tailor our method specifically to it. We aimed to test our watermark removal strategy as an agnostic method with only one minor modification. Empirically, for very deep networks such as ResNet18+~\cite{he2016deep}, resetting many shallow layers significantly impacted the overall final performance of the model: while resetting one layer is not enough, resetting all shallow layers results in deep layers receiving activations that they were not originally trained to receive. As a result, for the ResNet model, we chose to reset only the second half of the weights. Additionally, the results in Table~\ref{black-results} use a larger subset of available training data than was used against Zhang et al.~\cite{Zhang-watermarks}; this is discussed further in Section~\ref{sec:abridged_data}.
Using our approach, we find that in both cases (from-scratch and pre-trained) the watermarks are significantly less robust than originally claimed by Adi et al.~\cite{weakness-into-strength-backdoor}. We compared our results to their original watermark removal technique, ``Re-train All Layers (RTAL)''. Even in the original research, RTAL significantly reduced the accuracy of the pre-trained watermark set. However, we find that the FS watermarks face similar problems, perhaps due to the reliance on batch normalization~\cite{batch-norm}: before retraining, we also reset the batch normalization layers. Additionally, the original authors did not use a limited dataset to evaluate their watermark removal methods.
\section{Evaluation of Required Data for Watermark Removal}
\label{sec:abridged_data}
In other backdoor or watermark research~\cite{Zhang-watermarks, liu2018fine}, researchers have typically given the remover an entire train-test split's worth of data. For example, in the original Zhang et al.~\cite{Zhang-watermarks} watermark proposal, the adversary was given the entire test set of the default MNIST train-test split to attempt to remove the watermark via pruning and retraining. Even though these techniques are typically unable to remove watermarks, an adversary with that much data would have no need to remove them: using the neural network structure and dataset split from Zhang et al.'s paper~\cite{Zhang-watermarks}, adversaries could train their own MNIST models on the test set alone and still easily reach 90\% accuracy or above, completely removing the need to launder the watermarked model. Because our approach directly targets the weaknesses of watermarking techniques, and because we approach this issue from the attacker's point of view, we consider an adversary even more limited than those assumed in other watermark or backdoor removal research.
Therefore, in Table~\ref{black-results} in Section~\ref{sec:Experiments}, we included the results where an adversary was limited to 0.6\% of the total training data (except in the case of Adi et al.~\cite{weakness-into-strength-backdoor}, shown in the final section of Table~\ref{black-results}), as well as the results from only one round of laundering. However, we also investigated the effectiveness of our approach under a variety of other splits: 10.8\%, 6\%, 0.6\%, and finally only one example per class, as depicted in the various lines in Figures~\ref{rounds-MNIST-both} and~\ref{rounds-adi-cifar}. Additionally, because a black-box adversary has no knowledge of the watermark detection accuracy, we also evaluated scenarios where an adversary performs multiple rounds of laundering (up to 10).
In Figures~\ref{rounds-MNIST-both} and \ref{rounds-adi-cifar}, the blue lines represent the final test accuracy on unseen data, and the red lines represent the accuracy on watermarked images. Each line style represents an available dataset size, and the solid black line represents what we propose to be the minimum watermark accuracy required to claim ownership of a model. We discuss this minimum watermark accuracy value further in Section~\ref{min-treshold}.
\begin{figure*}[t!]
\centering
\includegraphics[width=0.77\textwidth]{final_images/final_graph_ADI-CIFAR-one-correct.png}
\caption{Results of laundering the CIFAR-10 model to remove Adi et al.'s~\cite{weakness-into-strength-backdoor} and Zhang et al.'s~\cite{Zhang-watermarks} proposed watermarks.}
\label{rounds-adi-cifar}
\end{figure*}
\textbf{MNIST Results via Zhang et al.'s Method.} Due to the simplicity of the model, even a small number of laundering examples (down to one image per class in some cases) is enough to make a significant impact on the watermark accuracy, especially within a small number of iterations. Most combinations perform similarly, as shown in Figure~\ref{rounds-MNIST-both}: panels (a), (b), and (c) all show a decline in watermark accuracy (red lines) after very few rounds. Additionally, the final testing accuracy (blue lines) shows significant resilience even after multiple rounds of laundering, declining only gradually. However, perhaps due to the complexity of the watermark in the content-based scenario (shown in Figure~\ref{rounds-MNIST-both} (a)), one example per class (solid blue and red lines) is not enough to remove the watermark for plain MNIST.
Moreover, we also trained a more complex version of MNIST with six additional classes taken from the EMNIST dataset~\cite{cohen2017emnist} (specifically the letters `t', `u', `w', `x', `y', `z'). Note that while these images are from the EMNIST dataset, we refer to this model as the MNIST+ model, as it does not contain all EMNIST classes. As shown in Figure~\ref{rounds-MNIST-both} (d), (e), (f), the inclusion of additional classes led to two notable differences. First, in this model our algorithm was able to remove the content-based watermarks completely. This strongly suggests that having a few more training examples is enough to reconstruct content-based watermarks more effectively, although future work will be required to understand this phenomenon in depth. Second, because of the larger number of classes, the unrelated watermarks were classified as the watermarked class less often in the clean network, pushing the target black line lower.
As a result, the smaller dataset sizes (0.6\% and one-per-class -- dashed triangles and solid lines, respectively) struggled to push the watermark below that threshold. Our MNIST+ dataset also captures more generalized results than plain MNIST in the testing accuracy as well. As expected, one-per-class impacted testing accuracy the most. Due to the simplicity of plain MNIST, this phenomenon was not captured in those experiments.
\textbf{CIFAR-10 Results via Adi et al.'s Method.} Although the CIFAR-10 task is significantly more complex than MNIST, the 10.8\%, 6.0\%, and 0.6\% retraining sets in Figure~\ref{rounds-adi-cifar} (a) and (b) were able to remove the watermarks while maintaining a high test accuracy. However, as expected, the accuracy dropped over multiple rounds of laundering, particularly as the size of the laundering set decreased; we speculate that this is owed more to overfitting than to our laundering method. In this case, however, the one-per-class laundering set is completely inadequate, especially over time, in both cases. Regardless of combination, Adi et al.'s~\cite{weakness-into-strength-backdoor} from-scratch (FS) technique outperforms the pre-trained (PT) technique. Nevertheless, due to the complexity of the task as well as the resilience of the watermark scheme, our scheme does incur a loss of test accuracy on small laundering set sizes. We discuss the implications of this drop in test accuracy in Section~\ref{drop}.
\textbf{CIFAR-10 Results via Zhang et al.'s Method.} Again, due to the complexity of classifying CIFAR-10 images, we expected a large impact on the test accuracy over time, and the results in Figure~\ref{rounds-adi-cifar} (c), (d), and (e) do show this phenomenon. As in the Adi et al. model, the one-per-class laundering set is again inadequate. However, both the content and noise watermarks were removed via our method in the 10.8\%, 6.0\%, and 0.6\% cases, although with a cost to final test accuracy that worsened across multiple iterations. The unrelated watermarks were much more difficult to remove consistently, with seemingly random high fluctuations. However, the unrelated watermarks in CIFAR-10 are also classified essentially at random even in vanilla models; we argue that relying on a watermark with such a high degree of uncertainty is quite risky, and we go into more detail about these risks in the following section.
\textbf{General Laundering Observations.} To conclude this rather extensive evaluation, we make general observations regarding the rounds and percentages used for laundering:
\begin{enumerate}
\item The larger the laundering dataset size, the more rounds of laundering are possible without subsequent decrease in accuracy.
\item The more complex the model, the more likely subsequent rounds of laundering will significantly harm the final test accuracy.
\item In most cases, once the watermark has been removed, it is unlikely to return unless it was embedded with a high randomly-occurring threshold (as in CIFAR-10 unrelated watermarks).
\end{enumerate}
As a result, we offer a general rule of thumb: laundering for more than three rounds yields diminishing returns for adversaries, who will lose final test accuracy without removing additional watermark accuracy.
\section{Proposed laundering technique}
\label{Proposed Laundering Technique}
In this section, we describe in more detail our algorithms for removing watermarks from neural networks. Adversaries, described as the plagiarizer \textit{P} in the previous Attack Model section, have access to the intermediate pre-trained layers of the watermarked neural network ($L_j$), which in this case were watermarked during the original training process. Additionally, the adversaries have their own correctly labelled training dataset ($X_i$), procured either manually or perhaps by using the watermarked network's outputs to automatically label the examples~\cite{tramer2016stealing}. It is important to note that, if the adversary chooses the latter method, the watermarked network would not intentionally misclassify any of these images during creation of the correctly labelled dataset, since none of them would contain the specific watermark trigger by pure chance.
While throughout this section we draw upon previous backdoor-reconstruction techniques~\cite{wangneural}, we include our own novel contributions, specifically designed for black-box watermark-removal scenarios. These include:
\begin{itemize}
\item Combining the ``pruning'' and ``unlearning'' steps proposed in~\cite{wangneural} as a two-step approach to removing watermarks even with very limited retraining data.
\item Implementing a statistical-based approach used to decide if neurons should be reset within a layer based on the relative average activation of the entire dense or convolutional layer.
\item Extensively investigating the ``unrelated'' and ``noise'' styles of watermarks, which tend to evade detection and removal in backdoor removal schemes~\cite{wangneural, liu2018fine}.
\item Offering an additional application of the masks generated during the watermark reverse-engineering process to aid in the removal of the unrelated style of watermark.
\end{itemize}
\begin{figure*}[ht!]
\centering
\includegraphics[scale=0.76]{images/new_black-box_laundering.png}
\caption{Overview of black-box laundering procedure, where the vanilla images are normally labeled data and red crosses are the reset neurons. This shows the process of recovering potential watermarks using methods such as Wang et al.'s~\cite{wangneural}, laundering the network via our algorithm, and finally retraining the network based on re-labeled reconstructed images. The process is repeated over multiple iterations where the retrained model is fed back into the reconstruction algorithm.}
\label{fig:demo2}
\end{figure*}
A black-box watermark removal technique requires more effort than a white-box adversary scenario. The recent ``Neural Cleanse''~\cite{wangneural} research poses a significant step toward reducing the threat that backdoors pose to neural networks. At the same time, however, it highlights the weaknesses of the \emph{security} of neural network watermarks, which requires that a network not reveal any information regarding the watermarks' presence. While this method showed success in many backdoor scenarios, where the victim was able to reconstruct a highly representative example of the actual backdoor, our results demonstrate a need to incorporate the proposed reconstruction process into a larger attack algorithm from the adversary's standpoint.
In order to perform such an attack, we propose the following 3-step process: 1) watermark recovery, 2) black-box attack laundering via Algorithm~\ref{LaunderingAlg}, and 3) black-box adversary retraining. Step 2 is required to reset potentially watermarked neurons, and Step 3 restores the performance of the fully laundered model back to acceptable levels in instances where Step 2 has reset a large number of neurons that also serve an important role in final non-watermarked classification. We present the end-to-end black-box laundering procedure in Figure~\ref{fig:demo2}. We now describe each of these steps in more depth.
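The 3-step process can be sketched as a simple driver loop; \texttt{recover}, \texttt{reset}, and \texttt{retrain} are hypothetical callables standing in for Steps 1--3 (they are not functions from our implementation):

```python
def launder_black_box(model, X, y, recover, reset, retrain, rounds=3):
    """Illustrative black-box laundering loop.
    recover: Step 1, reverse-engineers candidate watermark triggers
             (e.g., via a Neural Cleanse-style procedure);
    reset:   Step 2, zeroes out suspicious neurons;
    retrain: Step 3, restores test accuracy on re-labeled data.
    The retrained model is fed back into recovery on each round."""
    for _ in range(rounds):
        triggers = recover(model)
        model = reset(model, X, triggers)
        model = retrain(model, X, y)
    return model
```

Passing the three steps as callables keeps the loop agnostic to the underlying model framework, matching the agnostic spirit of our attack.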
\textbf{Step 1. Watermark Recovery.} As discussed in Section~\ref{sec: background}, there exist methods to discover potential backdoors within neural networks~\cite{wangneural}. Using such a method, it is possible to reconstruct the smallest perturbations required to push one class to another. However, this step alone is insufficient in the watermark removal domain.
\textbf{Step 2. Black-box Attack Laundering.} While black-box adversaries have no access to known watermark images, they assume the reconstructed watermarks to be accurate representations of the actual watermarked images. As such, the reconstructed watermarks are overlain on the adversary's available training data, and these sets are sent through the laundering algorithm.
Using these manually constructed watermarks, the adversary is able to observe and record the activations of each layer $L_j$ of the neural network for each watermarked image $W_k$, in order to calculate the total watermarked activation for each neuron, $AW_j^{total}$. Then, the adversary simply calculates the average activation of each neuron, $AW_j^{avg}$, by dividing by the number of watermarked examples $K$, as shown in line 8 of Algorithm~\ref{LaunderingAlg}.
Following this, the adversary performs a similar calculation across all non-watermarked images $X_i$ through all the neural network layers $L_j$, recording the total non-watermarked activation for each neuron, $AN_j^{total}$, and dividing by $I$ to obtain the average $AN_j^{avg}$. The adversary can now subtract the average normal activation $AN_j^{avg}$ from the average watermarked activation $AW_j^{avg}$, which yields the average difference in activation of that layer, $A_j^{diff}$, as demonstrated in line 10 of Algorithm~\ref{LaunderingAlg}.
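For a single layer, this bookkeeping reduces to two means and a subtraction. A minimal numpy sketch (variable names mirror the algorithm's symbols; the per-image activation matrices are assumed to be precomputed):

```python
import numpy as np

def activation_difference(acts_watermarked, acts_normal):
    """Per-neuron activation difference for one layer j.
    acts_watermarked: shape (K, V), activations on the K recovered
                      watermark images;
    acts_normal:      shape (I, V), activations on the I normal images.
    Returns A^diff_j = AW^avg_j - AN^avg_j."""
    aw_avg = acts_watermarked.mean(axis=0)  # AW^avg_j (line 8)
    an_avg = acts_normal.mean(axis=0)       # AN^avg_j (line 9)
    return aw_avg - an_avg                  # A^diff_j (line 10)
```

Neurons with a large positive entry in the result activate strongly for watermarked images but not for normal ones, and are the reset candidates.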
From here, the adversary simply resets any neuron that activated strongly in the presence of watermarked images but did not activate strongly for non-watermarked images. The adversary does this by stepping through each neuron $v$ in $L_j$ and removing it if the difference in activation between watermarked and non-watermarked images ($A^{diff}_{j_{v}}$) falls above some threshold value $DT$, as in line 14 of Algorithm~\ref{LaunderingAlg}. If the layer is convolutional (and the activation difference falls above $CT$), the adversary instead resets the entire intermediate channel $L_{j_{v}}$.
``Resetting'' a neuron or channel may take many forms. In our case, we immediately reset the weights of the input into the layer to zero during the algorithm, maintaining those reset weights on each successive pass through all layers $L_j$ in order to retrain the network while preventing the watermarked neurons from reappearing. For most activation functions, setting the weights to zero achieves the desired effect; however, this may not always be the case. Neurons that use activation functions such as softplus~\cite{dugas2001incorporatingsoft}, which produce non-zero output at zero, may still be highly activated in the presence of zero-weighted inputs. In such a case, it would be up to the adversary to choose a more appropriate resetting procedure, although we suggest that simply setting those neurons' or channels' weights to the layer's median weight may suffice as well.
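For a dense layer, the reset amounts to zeroing the incoming weight column (and bias) of every neuron whose activation difference exceeds $DT$. The following is an illustrative numpy version of this step, not our exact Keras code:

```python
import numpy as np

def reset_dense_layer(kernel, bias, a_diff, dt):
    """Zero out neurons whose watermark-vs-normal activation difference
    exceeds the dense threshold DT. `kernel` has shape (fan_in, V), with
    column v holding neuron v's incoming weights. For a convolutional
    layer, the same test against CT would zero an entire channel."""
    suspicious = a_diff > dt
    kernel = kernel.copy()
    bias = bias.copy()
    kernel[:, suspicious] = 0.0
    bias[suspicious] = 0.0
    return kernel, bias, suspicious
```

Keeping the boolean mask around lets the adversary re-apply the zeros after each retraining pass so the reset neurons cannot be relearned.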
\def\NoNumber#1{{\def\alglinenumber##1{}\State #1}\addtocounter{ALG@line}{-1}}
\newcommand{\mathrel{+}=}{\mathrel{+}=}
\begin{algorithm}[t!]
\caption{Watermark Laundering algorithm (Step 2)}\label{LaunderingAlg}
\begin{algorithmic}[1]
\State Given:
\NoNumber{Trained intermediate layers $(L_{j})$ for $j = 0, 1,\cdots J$}
\NoNumber{Normally labeled data $(X_{i})$ for $i = 0, 1,\cdots, I$}
\NoNumber{(Recovered) Watermarked data $(W_{k})$ for $k = 0, 1,\cdots, K$}
\State Predefined:
\NoNumber{Dense layer activation threshold $(DT)$}
\NoNumber{Convolutional layer activation threshold $(CT)$}
\For{$j = 0, 1, \cdots, J$}
\For{$i = 0, 1, \cdots, I$}
\State $AN^{total}_{j} \mathrel{+}= L_{j}(X_{i})$
\EndFor
\For{$k = 0, 1, \cdots, K$}
\State $AW^{total}_{j} \mathrel{+}= L_{j}(W_{k})$
\EndFor
\EndFor
\State Avg layer watermark activation $AW^{avg}_{j} = AW^{total}_{j} / K$
\State Avg layer normal activation $AN^{avg}_{j} = AN^{total}_{j} / I$
\State Activation difference $A^{diff}_{j} = AW^{avg}_{j} - AN^{avg}_{j}$
\For{$j = 0, 1, \cdots, J$}
\If {$L_{j}$ TYPE is DENSE}
\For{$v = 0, 1, \cdots, V$}
\If {$A^{diff}_{j_{v}} > DT$}
\State Reset intermediate layer neuron $L_{j_{v}}$
\EndIf
\EndFor
\ElsIf {$L_{j}$ TYPE is CONV}
\For{$v = 0, 1, \cdots, V$}
\If {$A^{diff}_{j_{v}} > CT$}
\State Reset intermediate layer channel $L_{j_{v}}$
\EndIf
\EndFor
\EndIf
\EndFor
\end{algorithmic}
\end{algorithm}
\textbf{Step 3. Black-box Adversary Retraining.} Finally, the model is retrained on all available examples, including those collected during reconstruction. Note that these non-watermarked examples are the same examples passed through the neural network in the laundering step, except that in this step, watermarked examples are labeled with the correct class. This is similar to the retraining steps in backdoor-removal methods~\cite{wangneural, liu2018fine}; however, unlike backdoor scenarios, we also face the ``unrelated'' watermark, which has no ``correct'' label. Also unlike Neural Cleanse~\cite{wangneural}, we use the Median Absolute Deviation technique to identify the class \emph{least likely} to be watermarked and label our reconstructed unrelated images with that class during retraining. For this reason, we stop laundering early if the original most-likely-infected class is considered the least likely to be infected; otherwise, retraining could strengthen the original watermark. Additionally, we also feed the reconstructed masks generated during the reverse-engineering step into the retraining dataset. These are also labeled as the least likely class, which acts as a secondary approach to remove neurons potentially watermarked with the unrelated style of image. The retrained model is then fed back into the backdoor reconstruction algorithm.
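Selecting the least-likely-watermarked class uses the same Median Absolute Deviation statistic that Neural Cleanse applies to flag infected classes, only read in the opposite direction. A sketch, under the assumption (as in Neural Cleanse) that an unusually small reverse-engineered trigger norm marks a likely-infected class:

```python
import numpy as np

def least_likely_infected_class(trigger_l1_norms):
    """Return the index of the class whose reverse-engineered trigger is
    *largest* relative to the median under the MAD test, i.e., the class
    least likely to carry a watermark; reconstructed unrelated images are
    relabeled with this class during retraining."""
    norms = np.asarray(trigger_l1_norms, dtype=float)
    med = np.median(norms)
    mad = 1.4826 * np.median(np.abs(norms - med))  # consistency constant
    anomaly = (norms - med) / (mad if mad > 0 else 1.0)
    return int(np.argmax(anomaly))  # most positive = least likely infected
```

Because relabeling toward this class is only safe while it stays "clean", checking that the originally most-likely-infected class has not swapped roles gives the early-stopping condition described above.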
\section{Related work}
Despite the application of neural networks across a wide variety of domains, research into digital rights management or intellectual property protection schemes for them remains limited. Some research has investigated the feasibility of incorporating encryption schemes into neural networks; one such example is the work by Hesamifard et al.~\cite{hesamifard2017cryptodl}, who developed CryptoDL to preserve the privacy of raw images being trained in CNNs. Using a modification of such encryption schemes, it may be possible to train a network that can only make predictions on new data when the input has been encrypted with the same training private key, thus preserving the intellectual property rights of the neural network's owner in the case of theft.
Originally, Uchida et al.~\cite{Uchida-watermarks} proposed embedding watermarks into the convolutional layer(s) of a deep neural network using a parameter regularizer. However, their procedure required white-box access to the weights of the neural network, which is not feasible or practical in many theft scenarios. Therefore, black-box watermarking models have been proposed as well. One proposed model takes advantage of adversarial examples to detect watermarks~\cite{merrer2017adversarial}, but this method may be vulnerable to retraining techniques aimed to reduce the impact of adversarial examples. ``DeepSigns''~\cite{rouhani2018deepsigns} demonstrated the potential to detect watermarks through the probability density function of the outputs of a neural network in both white-box and black-box scenarios.
Other approaches have similarly attempted to embed watermarks that are more resilient to vanilla fine-tuning and fine-pruning techniques. One such work proposed exponential weighting~\cite{namba-exponential}, whereby small weights of the neural network are significantly diminished, leaving only weights with large absolute values to influence the operation of the model. However, this method is also susceptible to our algorithm, because such large activation values appear especially in the presence of watermarked images but not of non-watermarked images; this corresponds to line 10 in Algorithm~\ref{LaunderingAlg}. Nevertheless, more advanced backdoor embedding schemes have been proposed as well. Specifically, Li et al.~\cite{Li:how-to-prove} propose embedding a watermark via an encoder such that the final watermarked image is barely distinguishable from the original image. Future work will investigate the robustness of such schemes.
Furthermore, two additional recent works explore the ability to undermine the robustness of watermarks that take advantage of neural network backdoors. Hitaj and Mancini~\cite{hitaj2018have} propose using an ensemble of networks to reduce the likelihood of a watermark being correctly classified, as well as a detector network that attempts to return random classes if it believes a watermark is present in the image. Shafieinejad et al.~\cite{shafieinejad2019robustness} also investigate content, noise, and abstract categories of watermarks, but instead of attempting to recover the watermark and then remove it, the authors propose copying the functionality of the model directly through queries via a non-watermarked dataset.
Finally, while we focus primarily on the intersection of neural network backdoors and neural network watermarking, other research has investigated the relationship of watermarking a digital media with adversarial machine learning. Quiring et al.~\cite{quiring2018forgotten} demonstrate that watermarking and adversarial machine learning attacks correlate such that increasing the robustness of a classifier can be used to prevent oracle attacks against watermarked image detection. %
\section{Discussion}
\label{Discussion}
In this section, we discuss several key issues related to choosing watermark detection thresholds; the integrity, reliability, and accuracy of recovered watermarks; and adversarial watermarks.
\subsection{On choosing the minimum watermark detection threshold}
\label{min-treshold}
Figure~\ref{unrelated_fluctuations} (a) shows our MNIST model trained on clean data attempting to classify unrelated watermarks. As shown in Figure~\ref{unrelated_fluctuations}, there is no true \emph{final} value at which a converged, clean model will classify ``unrelated'' watermark images. It is entirely possible that a legitimate network could stop being trained during an epoch where the watermark classification accuracy is quite high, purely by coincidence. For the unrelated watermarks, and especially for the CIFAR-10 model in Figure~\ref{unrelated_fluctuations} (b), this effect is even greater, with fluctuations reaching above 50\% after certain epochs.
As a result, we argue that any laundered network that falls below the highest naturally occurring watermark classification rate is unable to provide any provable ownership for the true owner of the network. Furthermore, \emph{even laundered models that fall closely above this threshold} may not provide any meaningful ownership claim, because a black-box model relies strictly on accuracy, not loss, and a (perhaps random) fluctuation of a couple of percentage points may not be enough to demonstrate ownership of a model. Note that in Figure~\ref{unrelated_fluctuations}, the CIFAR-10 model does not reach 90\% because we did not perform data augmentation (rotation, scaling, etc.) during the training of this example model.
\begin{figure}[t!]
\hspace*{-7mm}
\centering
\includegraphics[scale=0.31]{final_images/FINAL_fluctuations-reduced.png}
\caption{Demonstrations of how legitimately trained models can fluctuate on classifying a watermarked image as the backdoored class during training.}
\label{unrelated_fluctuations}
\end{figure}
\subsection{On the reliability and integrity of DNN watermarks}
In the DeepSigns~\cite{rouhani2018deepsigns} research, the authors further expand upon the requirements of effective watermarks proposed in Uchida et al.'s original design~\cite{Uchida-watermarks}. In addition to generalizability (that the watermarking method should work in both white-box and black-box settings), the authors argue that neural network watermarks should have both \textbf{1) reliability} to yield minimal false negatives and \textbf{2) integrity} to yield minimal false positives. However, applying our method to watermarked neural networks significantly hinders the ability of current watermark embedding processes to claim reliability and integrity. Adversaries are able to significantly increase false negatives and false positives in watermarked networks without access to the original training watermarks or the original training set.
For example, the reliability and integrity of the watermarks are in question if the detection accuracy is not consistent. We argue this is especially true for unrelated watermarks because unrelated watermarks have no ``correct'' answer in these networks. In the MNIST case, the network was trained to classify the letter ``m'' as a ``0''. After applying adversarial laundering, the unrelated images were still classified as ``0'' 16\% of the time, but were classified as ``8'' 45\% of the time and as ``9'' 25\% of the time. In the case of CIFAR-10, the unrelated images of an MNIST ``1'' digit were embedded as the watermarked class of ``airplane'' and were detected as such 23\% of the time, but were also classified as ``ship'' 52\% of the time and as ``truck'' 21\% of the time. Non-watermarked networks could come to similar classifications and may not demonstrate ownership under the current methods.
Further complicating the reliability and integrity of demonstrating ownership through watermarks is the ability of adversaries to generate adversarial images via a stolen model. Such examples allow an adversary to conversely claim ownership of a stolen model retroactively. We discuss this in more detail in Appendix~\ref{adversarial-watermarks}.
\captionsetup[subfigure]{oneside,margin={-0.3cm,0cm}}
\begin{figure}[t!]
\centering
\begin{subfigure}{0.15\textwidth}
\centering
\includegraphics[scale=7.5]{reconstruction_images/cifar10_content_original.png}
\caption{Original Image}
\end{subfigure}\hspace{1mm}
\begin{subfigure}{0.15\textwidth}
\centering
\includegraphics[scale=7.5]{reconstruction_images/cifar10_content_recon.png}
\caption{Recovered Image}
\label{fig:subim2}
\end{subfigure}\hspace{1mm}
\begin{subfigure}{0.15\textwidth}
\centering
\includegraphics[scale=2.45]{reconstruction_images/cifar10_content_applied.png}
\caption{Retraining Image}
\label{fig:subim3}
\end{subfigure}
\caption{(a) Watermark, (b) attempted reconstruction, and (c) reconstruction applied to a clean image from Zhang et al.'s~\cite{Zhang-watermarks} content-based watermarks.}
\label{reconstruction}
\end{figure}
\subsection{On the inaccuracy of recovered watermarks}
Using multiple iterations of the full laundering algorithm yields watermark reconstructions that are far from perfect yet sufficient in most cases. For example, in Figure~\ref{reconstruction}, even though the recovered watermark does not resemble any human-interpretable version of the original watermark, it is sufficient to remove the watermark during the laundering process. We speculate that these attacks work on watermarks because of the black-box nature of neural networks, where it is very difficult to control exactly which features are learned. Therefore, even if the reconstructions are not exact, they will be similar enough to the learned features to 1) identify potentially-watermarked neurons, and 2) overwrite them during retraining.
As a result of this phenomenon, we argue that current watermarking approaches, in which watermarked images are simply added to the neural network's training data and trained on indiscriminately, do not adequately fulfill some crucial criteria. In addition to existing watermark criteria (fidelity, robustness, etc.), we propose that watermarks should be embedded with \textbf{specificity}. As an example, to achieve the specificity requirement, a network watermarked with content such as that in Figure~\ref{reconstruction} (a) would not lose its watermarked quality even if (b) is recovered and the network is retrained on (c). Although the content of (b) contains some very generalized features of the watermark in (a), we posit that this should not be sufficient to violate such a specificity requirement: a watermark should only be removed or overwritten by retraining when the retraining images contain substantially similar watermarked examples.
\captionsetup[subfigure]{oneside,margin={-0.0cm,0cm}}
\begin{figure}[t!]
\centering
\hspace{-1mm}
\begin{subfigure}{0.16\textwidth}
\includegraphics[scale=0.76]{adversarial_images/normal_frog.png}
\caption{Original Image\\(classified as ``frog'')}
\end{subfigure}\hspace{12mm}
\begin{subfigure}{0.16\textwidth}
\includegraphics[scale=0.76]{adversarial_images/adv_frog.png}
\caption{Adversarial Image\\(classified as ``cat'')}
\label{fig:subim99}
\end{subfigure}
\caption{Adversarial example constructed against the stolen model that fools both the original model and the laundered model into classifying the image as ``cat''.}
\label{cifar10-adversarial}
\end{figure}
\subsection{On the drop in test accuracy of laundered models}
\label{drop}
In proposed watermarking strategies~\cite{Zhang-watermarks}, the proponents could argue that the drop in test accuracy from watermark removal techniques is sufficient to prevent attackers from stealing or using stolen models. In our examples, our method generally performs well at maintaining test accuracy, with the largest drop, approximately 9\%, occurring on CIFAR-10 against the Zhang et al.~\cite{Zhang-watermarks} ``unrelated'' watermark type. However, we argue that such a reduction (or larger) in accuracy does \textbf{not} imply that a laundered model is not useful. Indeed, while it is fair to point out that laundered accuracies on MNIST and CIFAR-10 reach neither the original model accuracies nor the state-of-the-art, in many of our scenarios an attacker would not be able to approach such accuracies with their limited datasets. We emphasize that a laundered model is useless only \emph{if the laundered model performs worse than a model trained from scratch on the same (limited) dataset}.
In situations where adversaries have very limited datasets (e.g., 0.6\% of the original training size), adversaries can use a stolen laundered model to improve their classification accuracy. In such cases, where training data may be very difficult to obtain or very expensive (such as medical data, high-quality clean data, etc.), a stolen model, even if watermarked, remains enticing to an adversary, not only to steal for private use but also perhaps to launder and deploy.
Additionally, we also demonstrate that the reported results of black-box backdoor watermarking schemes overestimate the ability of models to retain those watermarks. Related work has not adequately explored the potential for attackers to detect and remove neurons and/or weights that contribute to the detection of watermarks.
\subsection{On the issue of adversarial watermarks}
\label{adversarial-watermarks}
Another point to consider is that a stolen model is susceptible to a wide range of adversarial attacks. Notwithstanding other security threats posed by adversarial examples in such a scenario, given white-box access to a model, an adversary could easily manufacture images that they claim to be watermarked but are actually adversarially-crafted against the original network. In this case, it will be additionally difficult to prove ownership of the model.
For example, Figure~\ref{cifar10-adversarial} contains an adversarial image constructed against a stolen Zhang et al.~\cite{Zhang-watermarks} CIFAR-10 content-watermarked network. The original model classifies the adversarial image of a ``frog'' as ``cat''. Due to the similarity between the two networks and the ability of adversarial examples to transfer between models~\cite{adv-transferabililty}, it is also misclassified as ``cat'' by the post-laundering model, without targeting that model specifically. This particular laundered model performs with 85\% accuracy on the original task, but with 1.5\% accuracy on the original watermarks. However, when constructing adversarial images, the adversary can choose any percentage of final watermark detection accuracy by also purposefully crafting adversarial watermarks that fail to be detected.
Moreover, even without actually embedding their own ``evil'' watermark into this model, an adversary can simply create an adversarial example that looks sufficiently similar to a watermark of which the adversary claims ownership. One may argue that adversarial examples of this sort may be easy to detect because the text (``evil'', in this case) contains some human-noticeable noise, yet an optimization algorithm that forbids noise in those pixel locations would likely find a suitable minimum without significantly more work. We leave such an implementation to future work.
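The attack described above only requires standard targeted adversarial-example machinery. As an illustration (not our paper's actual implementation), the sketch below runs iterated targeted FGSM on a toy linear softmax classifier standing in for the stolen model; the optional \texttt{frozen\_mask} argument mimics forbidding perturbations in the pixel locations that carry the fake watermark text. All names are ours.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def targeted_fgsm_step(x, y_target, W, b, eps, frozen_mask=None):
    """One targeted FGSM step on a linear softmax classifier (a toy
    stand-in for the stolen network; all names are ours).

    frozen_mask: optional boolean array marking pixels (e.g. the
    region carrying the fake watermark text) that must stay noise-free.
    """
    p = softmax(x @ W + b)
    onehot = np.eye(W.shape[1])[y_target]
    grad = W @ (p - onehot)        # d(target-class cross-entropy)/dx
    step = -eps * np.sign(grad)    # descend toward the target class
    if frozen_mask is not None:
        step = np.where(frozen_mask, 0.0, step)
    return np.clip(x + step, 0.0, 1.0)
```

For a deep network the gradient would come from backpropagation rather than the closed form above, but the iteration is otherwise the same.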
\subsection{Limitations of prior watermark removal techniques}
Considering the similarities between watermarks and backdoors, it is natural to ask why the original Neural Cleanse approach~\cite{wangneural} is not sufficient for some backdoor removal attempts. We argue that this stems from two differences in our approaches. First, our method does not inherently rely on the outlier detection mechanism being accurate; an adversary assumes there to be a watermark regardless of its outcome. This is especially necessary for backdoors as opposed to watermarks due to 1) the limitation of the datasets available, and 2) the types of augmentations that are used as watermarks. Indeed, in the content-based watermark reconstruction, we do find that the results are as expected from the original approach; however, for the noise and unrelated reconstructions shown in Figure~\ref{fig:similar-reconstructions}, an outlier detection scheme may struggle to decide which label is watermarked.
Second, as described earlier, the unrelated watermark has no ground-truth class, and neither the original Neural Cleanse approach~\cite{wangneural} nor Fine-Pruning~\cite{liu2018fine} considers this situation. Either of these two methods leaves the unrelated watermark largely untouched. On the other hand, we retrain on all images applied with the reconstructed mask, labeled as the correct class, as well as on all masks themselves, labeled as the least-likely class.
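Our retraining-set construction can be sketched in a few lines of numpy. Assuming the reconstructed trigger is represented as a blend mask $m$ and pattern $\Delta$ (so a triggered image is $(1-m)\odot x + m\odot\Delta$), the hypothetical helper below labels triggered clean images with their true classes and the bare trigger with the least-likely class; names are ours, not our released code.

```python
import numpy as np

def build_unlearning_set(images, labels, mask, pattern, least_likely):
    """Build the retraining examples described above (a sketch).

    images:  (N, H, W) clean images with true labels `labels`
    mask:    (H, W) blend mask in [0, 1] from the reconstruction
    pattern: (H, W) reconstructed trigger pattern
    """
    # Clean images with the trigger applied keep their TRUE labels,
    # teaching the model to ignore the trigger.
    triggered = (1.0 - mask) * images + mask * pattern
    x = np.concatenate([triggered, pattern[None]])
    # The bare trigger itself is pushed toward the least-likely class.
    y = np.concatenate([labels, [least_likely]])
    return x, y
```

Retraining on this set both overwrites the trigger-to-target association and actively discourages the network from firing on the reconstructed trigger alone, which is what handles the unrelated-watermark case.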
\begin{figure}[t!]
\centering
\begin{tabular}{c c}
\includegraphics[scale=2.2]{reconstruction_images/cifar10_laundered_1_noise_10_visualize_fusion_label_0.png} &
\includegraphics[scale=2.2]{reconstruction_images/cifar10_laundered_1_noise_10_visualize_fusion_label_3.png}\cr
(a) Noise Recon Label 0 &
(b) Noise Recon Label 3 \cr
\includegraphics[scale=2.2]{reconstruction_images/cifar10_laundered_1_unrelated_10_visualize_fusion_label_0.png} &
\includegraphics[scale=2.2]{reconstruction_images/cifar10_laundered_1_unrelated_10_visualize_fusion_label_3.png} \cr
(c) Unrelated Recon Label 0 &
(d) Unrelated Recon Label 3 \cr
\end{tabular}
\caption{CIFAR-10 reconstructed watermarks using the Neural Cleanse~\cite{wangneural} algorithm. Outlier detection struggles to detect the watermarked class; the actual watermarked class is Label 0.}
\label{fig:similar-reconstructions}
\end{figure} |
\section{Introduction}
\label{sec:intro}
Fast Radio Bursts (FRBs) are bright millisecond-duration radio bursts that are cosmological in origin. They were discovered over a decade ago \citep{lbm+07} and have been studied ever since at major radio observatories including Parkes \citep{lbm+07,ksk+12,tsb+13,zhd+19}, Arecibo \citep{sch+14}, Green Bank \citep{mkl+15}, and Molonglo \citep{cfb+17, ffb+18}.
Recently, two new facilities with a wide field of view have been discovering FRBs in large numbers: the Australian Square Kilometre Array Pathfinder; ASKAP \citep{smb+18} and the Canadian Hydrogen Intensity Mapping Experiment; CHIME \citep{chime}.
Most FRBs seem to be one-off events, while some repeat \citep{ssh+16}.
CHIME has been particularly effective at finding repeating FRBs, with 17 published so far \citep{chime_repeater, chime_8}.
The origin of these FRBs remains a hot topic of debate and speculation \citep{pww+19}.
The Five-hundred-meter Aperture Spherical radio Telescope
(FAST; \citealt{FAST}) is the largest telescope in the world \citep{jpy+19}.
Due to FAST's superior sensitivity, \citet{lor18} and \citet{zhang18} predicted that it would be able to detect FRBs of significantly higher dispersion measures (DMs) than those from less-sensitive telescopes. Since high-DM sources are most likely very luminous, FAST
surveys could help to constrain the high end of the FRB luminosity function and enable more cosmological applications of FRBs \citep{zhang18}. As a first step toward this goal,
we report here a highly dispersed FRB from commissioning observations of FAST. In \S 2, we describe the observations and the method used to discover the FRB. In \S 3.1, we present the FRB detection; in \S 3.2, we derive a constraint on the FAST FRB event rate. In \S 4, we summarize the results and discuss the implications of this first blind-search FRB discovery.
\section{Observations}
\subsection{FAST Drift scan survey}
The Commensal Radio Astronomy FAST Survey (CRAFTS\footnote{\url{http://crafts.bao.ac.cn}}, \citealt{li2018}) is a multi-purpose all-sky survey designed to obtain data streams for pulsar searching, transients searching, HI imaging and HI galaxies simultaneously. The survey began testing in August 2017, initially using a single-beam wide-band receiver covering 270--1620~MHz. After May 2018, the survey started using the FAST L-band Array of Nineteen Beams (FLAN), which covers 1050--1450~MHz band with a system temperature of about 20~K \citep{li2018}.
The drift scan survey typically happens at night (Beijing time from 9 pm to 8 am), during which time no other observations are scheduled. A total of 138 nights of observations were conducted, and $\sim$1500 hours of 19-beam observations were taken from May 2018 to November 2018, when the burst event was discovered. Data taken subsequently are still being processed.
While we report on FRB searches here, we note that the CRAFTS survey has already discovered over 100 new pulsars\footnote{\url{http://crafts.bao.ac.cn/pulsar}} \citep{qpl+19, zlh+19}.
The original FAST data were written in PSRFITS format \citep{hvm04} with two polarizations and 8-bit sampling at 196.608~$\mu$s intervals and with 4096 spectral channels between 1000 and 1500~MHz. Due to the large data volume, we sum the two polarizations and compress the data to 1-bit before further processing. In the following sections, the signal searching and significance calculations are both based on the single bit summed data, and we include a
degradation factor of 33\% to account for the loss in signal-to-noise during data compression.
The resulting system parameters we adopt are an average system temperature of 23\,K, including contributions from the cosmic microwave background, the foreground sky, the Earth's atmosphere, and radiation from the surrounding terrain, and an effective telescope gain of 10\,K/Jy for beam 17 (15\,K/Jy before digitization loss) \citep{jpy+19,jth+20}.
\subsection{Single Pulse Search System}
FRB~181123 was identified by a novel GPU-based single-pulse search system that integrates the PICS AI software \citep{zbm+14} for selecting single-pulse candidates with the FAST multibeam data. This system uses GPUs to dedisperse the original data streams from each beam into eight subbands for 4096 trial DMs in the range 8.7---9211~pc~cm$^{-3}$. The DM step $\Delta$DM is determined by:
\begin{equation}
C \, \Delta {\rm DM} \left(\frac{1}{\nu^2_{\rm min}}-\frac{1}{\nu^2_{\rm max}}\right) = s \sqrt{\tau^2_{\rm samp} + \tau^2_{\rm pulse} + \tau^2_{\rm smear}},
\end{equation}
Here the left-hand side is the pulse broadening across the whole band due to one DM step, and the right-hand side is the pulse broadening in the lowest channel, composed of the sampling time $\tau_{\rm samp}=$196.608~$\mu$s, an assumed minimal pulse width $\tau_{\rm pulse}=0.5$~ms, and the intra-channel DM smearing $\tau_{\rm smear}=2C\,{\rm DM}\,\delta \nu/\nu^3_{\rm min}$ evaluated at the trial DM, where $\delta \nu=0.122$~MHz is the channel width, $C=4148.808$~MHz$^2$pc$^{-1}$cm$^3$s is the dispersion constant, $s=2$ is a manually chosen sparseness parameter, $\nu_{\rm max}=1500$~MHz, and $\nu_{\rm min}=1000$~MHz.
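For reference, the DM-step condition above is easy to evaluate numerically. The sketch below is our own illustration (not the survey pipeline), with the smearing term evaluated at the trial DM; it yields a sub-unity DM spacing at the low end of the search range and a coarser spacing at high DM.

```python
import numpy as np

C_DISP = 4148.808       # dispersion constant, MHz^2 pc^-1 cm^3 s
NU_MIN, NU_MAX = 1000.0, 1500.0   # band edges, MHz
DELTA_NU = 0.122        # channel width, MHz
TAU_SAMP = 196.608e-6   # sampling time, s
TAU_PULSE = 0.5e-3      # assumed minimal pulse width, s

def dm_step(dm, s=2.0):
    """DM trial spacing implied by the matching condition above,
    with the intra-channel smearing evaluated at the trial DM."""
    tau_smear = 2.0 * C_DISP * dm * DELTA_NU / NU_MIN**3
    broadening = s * np.sqrt(TAU_SAMP**2 + TAU_PULSE**2 + tau_smear**2)
    delay_per_unit_dm = C_DISP * (1.0 / NU_MIN**2 - 1.0 / NU_MAX**2)
    return broadening / delay_per_unit_dm   # pc cm^-3
```

Because the smearing term grows linearly with DM, the spacing widens toward the high-DM end of the 8.7--9211~pc~cm$^{-3}$ range.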
The dedispersed time series in each subband are downsampled by a small factor (usually 8) in the GPUs to match the expected typical pulse width of $\sim1$~ms. The code outputs the dedispersed time series in each subband for each DM trial to memory. We then search for threshold-crossing burst events in a summed time series combining all subbands in CPUs with multiple levels of possible burst widths.
While searching for bursts, the code uses multi-level wavelets\footnote{\url{https://github.com/PyWavelets/pywt}} \citep{Lee2019}
to reduce red noise and search for significant bursts that pass a threshold of 7$\sigma$.
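The threshold-crossing search over multiple burst widths is, in essence, matched filtering with boxcars of several lengths. The generic sketch below (not the pipeline's actual code) normalizes a dedispersed time series robustly, convolves it with variance-preserving boxcars, and reports samples above the significance threshold.

```python
import numpy as np

def boxcar_single_pulse_search(ts, widths, threshold=7.0):
    """Search a dedispersed time series for bursts by convolving with
    boxcars of several widths and thresholding the matched S/N
    (a generic sketch, not the pipeline's actual implementation)."""
    # Robust normalization: median / MAD instead of mean / std,
    # so bright bursts do not bias the noise estimate.
    med = np.median(ts)
    mad = np.median(np.abs(ts - med))
    ts = (ts - med) / (1.4826 * mad)
    hits = []
    for w in widths:
        kernel = np.ones(w) / np.sqrt(w)   # preserves noise variance
        snr = np.convolve(ts, kernel, mode="same")
        for i in np.flatnonzero(snr > threshold):
            hits.append((int(i), int(w), float(snr[i])))
    return hits
```

A real pipeline would additionally cluster neighboring hits and keep the width that maximizes the S/N for each event.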
This burst search normally results in thousands of detections in each dataset. The code then takes the detected signal position in time and DM for each candidate and extracts from the dedispersed subband time series a segment of data that contains the burst signal. Each segment contains eight frequency subbands and enough time bins to span 32 times the burst width. We refer to these segments as dedispersed snapshots of the burst. Despite some exceptions, most pulsar-like bursts are wide-band signals, so their snapshots often contain a full or partial vertical line, which is distinguishable from narrow-band radio frequency interference (RFI). We then employ the CNN classifier in PICS \citep{zbm+14} that was trained using the frequency-vs-phase subplot of the {\it PRESTO}\footnote{\url{https://github.com/scottransom/presto}} \citep{2011ascl.soft07017R} candidate plots.
The image pattern of a real pulse in our snapshots resembles that in the frequency-versus-phase subplot of pulsar candidates, i.e., a vertical line. Our experiments show that the PICS-CNN classifier was able to rank the most pulsar-like burst snapshots to the top of all snapshots. We pick only the top candidates from these snapshots (usually with a zero-to-one score $>$0.96, as determined by experiments) to form the final output candidate list. These candidates were subsequently plotted and inspected by eye. This GPU single-pulse search system (enabled by PICS) helped in the discovery of over 20 new pulsars in the FAST drift scan survey, including those reported in \citet{qpl+19} and \citet{zlh+19}. This PICS-aided search system uses non-standard ranking criteria.
Although successful in finding some pulsars, it does not necessarily detect all pulses that cross the event threshold.
A careful study of the recall of this system will be presented in a later contribution (Zhu et al., in prep.). For now, we assume that this system does not find all true signals that cross the threshold. Meanwhile, we also searched the data using the more standard {\it HEIMDALL}\footnote{\url{http://sourceforge.net/projects/heimdall-astro}} \citep{bbbf12} pipeline and will report the results in a future publication.
\section{Results}
\label{sec:result}
\subsection{FRB~181123}
FRB~181123 was detected with
a significance of 19$\sigma$ in beam 17 of the multibeam receiver on MJD 58445.
More detailed parameters with uncertainties are summarized in Table 1.
We searched time series from other beams that are dedispersed with the same DM but found no signal above $3\sigma$ during the same time.
From the logged position of the receiver cabin at the time, we infer that the FRB came from the direction of $l=184^{\degr}$.06, $b=-13^{\degr}$.47 with a positional
uncertainty of 3' based on the full width of the FAST beam at the center frequency of 1250~MHz. The observed DM of this FRB (1812~pc~cm$^{-3}$) is substantially greater than the maximum DM expected from the Galaxy in this direction, $\sim$150~pc~cm$^{-3}$ \citep{ymw16}.
Figure~\ref{fig:dedisp} shows a more detailed look at the time--frequency spectrum of FRB~181123, along with the dedispersed pulse profile. The burst shows a multi-peak pulse profile with three distinguishable peaks separated by a few milliseconds (labeled P1, P2, and P3). The measured parameters of these peaks are presented in Table \ref{tab:par}.
From Gaussian fits to the pulse profiles, we infer that P2 arrives about 5.6~ms after P1, and P3 arrives about 4~ms after P2 in the observer's frame; these correspond to 1.9~ms and 1.4~ms delays in the rest frame of the FRB.
Using the radiometer equation to convert our data to a Jansky scale, we measure the specific peak flux to be 65~mJy for P1 and find a specific fluence of 0.2~Jy~ms for all three peaks.
FRB 181123's flux and multi-peak pulse profile resemble those from some previously discovered FRBs \citep{cpk+16}.
In particular, the two smaller peaks P2, P3, show narrow band features that resemble the down-drift pattern seen in the repeating bursts of FRB~121102 (\citealt{gsp+18, hessels19}; Li et al. in prep.).
\citet{hessels19} presented a detailed analysis of the complex time-frequency structures seen in the repeating bursts of FRB~121102. We followed their approach and estimated the drift rate between FRB~181123's P2 and P3 to be $\lesssim-140$~MHz~ms$^{-1}$ (Figure \ref{fig:dedisp}, right panel) in the observer's frame and $\lesssim-400$~MHz~ms$^{-1}$ in the rest frame of the FRB; the estimated drift rate has significant uncertainty, and it could be underestimated because we only see part of the spectrum of P2 and P3.
Nevertheless, our estimated drift rate of $\lesssim-400$~MHz~ms$^{-1}$ fits well within the range measured for FRB~121102 \citep{hessels19} around the rest-frame emission frequency of 3--4.4~GHz, enhancing the similarities between FRB~181123 and FRB~121102.
Figure \ref{fig:DMcurve} shows the results of a fine frequency-time structure analysis applied to FRB~181123, following the approach in \citet{hessels19}. We found significant fine structures, characterized by the square of Gaussian-smoothed forward-difference time derivatives (i.e., the changes between consecutive time bins), in the dedispersed bursts around the position of P1, and minor structures in P2 and P3. These fine structures allow us to derive the optimal DM as 1812$\pm$1~pc~cm$^{-3}$. This estimate is consistent with, and slightly more constrained than, the value we derived from the S/N.
Unlike \citet{gsp+18} and \citet{hessels19}, we did not observe FRB~181123 in a coherent-dedispersion mode, thus the intra-channel smearing due to a DM of $\sim$1812~pc~cm$^{-3}$ is 0.5~ms to 2~ms across our observing band. Hence,
DM smearing does not allow us to resolve structure finer than 0.5~ms despite our 0.196608~ms sampling interval.
The measured widths of P1, P2, and P3 are consistent with this DM-smearing width and show no significant evidence of scattering tails.
As can be seen in the frequency structure plot in Figure~\ref{fig:dedisp}, P1 of FRB~181123 is brighter in the lower part of the band. From P1's on-pulse minus off-pulse spectrum shown in the middle panel of Figure~\ref{fig:dedisp}, we find that the best-fit spectral index is $-3.3\pm0.5$. \citet{sch+14} detected the first burst of FRB~121102 in the sidelobe of the Arecibo beam. They argue that the sidelobe position varied with frequency and caused the detected burst spectrum to be steep and up-swinging (with spectral index 7--11). The same argument can be applied conversely to FRB~181123, in which case the FRB is likely detected by the main beam instead of the sidelobe. In contrast to the original observation of FRB~121102 \citep{sch+14}, FRB~181123 was detected in a drift scan where the beam was moving across the sky while the burst arrived, further changing the observed burst spectral index.
We use a theoretical antenna power pattern to evaluate how these two factors change the FRB's spectral shape (Figure \ref{fig:beam}).
The antenna power pattern of a uniformly illuminated dish is
\begin{equation}
P(x,y,\nu)=\left[\frac{2J_1(u)}{u}\right]^2,
\end{equation}
where
\begin{equation}
u=\pi\sqrt{x^2+y^2} D \nu/c.
\end{equation}
Here $x$ and $y$ represent the source position with respect to the beam center, $D$ is the dish diameter, $\nu$ is the observing frequency, $c$ is the speed of light and $J_1(u)$ is a Bessel function of the first kind \citep{ToRA}.
We integrate $P$ along the drifting path of the FRB
\begin{equation}
I=\int G(\nu) (\nu/\nu_0)^2 P(x(t), y, \nu(t))dt,
\end{equation}
to get an approximated power for two subbands: the bottom band (1000--1250~MHz) and the top band (1250--1500~MHz), here $G(\nu)$ is the gain of the telescope as a function of frequency $\nu$, and $(\nu/\nu_0)^2 $ is a normalizing factor with $\nu_0=1250$~MHz.
For convenience, we assume $G(\nu)$ to be flat while in practice it varies slightly with $\nu$ \citep{jth+20}.
The result of this integration depends on the assumed starting position of the FRB (i.e.~its position at 1500~MHz), and the FRB's DM value.
We then use the ratio between the integrated power in the top and bottom bands to derive an approximation for the extra spectral index: $\Delta \gamma \sim \log(I_{\rm top}/I_{\rm bottom})/\log(\rm \nu_{\rm top}/\nu_{\rm bottom})$, where $\nu_{\rm top} = 1375$~MHz and $\nu_{\rm bottom}=1125$~MHz.
As shown in Figure ~\ref{fig:beam}, if FRB~181123 were detected in the sidelobe, its spectrum would likely have been significantly impacted. But, we observed a relatively flat burst spectrum, and the main peak P1's signal persists across the whole band. This suggests that FAST likely caught the FRB in the main lobe.
Admittedly, the observed antenna pattern of FAST \citep{jth+20} may differ quantitatively from the theoretical one, but our conclusion should still hold.
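The uniformly-illuminated-dish pattern used above is straightforward to evaluate. The sketch below is our own illustration: it computes $J_1$ from its integral representation with numpy only (to avoid extra dependencies) and reproduces a half-power beam width of roughly 2.8' at 1250~MHz for a 300~m aperture, broadly consistent with the beam size used for the positional uncertainty.

```python
import numpy as np

C_LIGHT = 299_792_458.0   # speed of light, m/s
D_DISH = 300.0            # illuminated aperture diameter, m

def bessel_j1(u):
    """J1(u) via its integral representation (numpy-only, accurate
    enough for sketching a beam pattern)."""
    theta = np.linspace(0.0, np.pi, 2001)
    f = np.cos(theta - np.outer(np.atleast_1d(u), np.sin(theta)))
    h = theta[1] - theta[0]
    # composite trapezoidal rule along the theta axis
    return (f.sum(axis=-1) - 0.5 * (f[:, 0] + f[:, -1])) * h / np.pi

def beam_power(r_rad, nu_hz):
    """Airy power pattern [2 J1(u)/u]^2 at angular offset r_rad."""
    u = np.pi * np.atleast_1d(r_rad) * D_DISH * nu_hz / C_LIGHT
    small = u < 1e-8
    u_safe = np.where(small, 1.0, u)
    p = (2.0 * bessel_j1(u_safe) / u_safe) ** 2
    return np.where(small, 1.0, p)   # the u -> 0 limit is 1
```

The half-power point of the Airy pattern falls at $u\simeq1.616$, so the beam radius scales as $1/(D\nu)$, which is what makes the pattern frequency-dependent and imprints the extra spectral index discussed above.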
The observed dispersion measure of FRB~181123 ($1812\pm 2$~pc~cm$^{-3}$) includes contributions from the intergalactic medium (IGM) -- DM$_{\rm IGM}$, from the Galaxy -- DM$_{\rm Gal}$, and from the host galaxy of the FRB -- DM$_{\rm host}$. We assume DM$_{\rm Gal}\sim149.5$~pc~cm$^{-3}$ based on \citet{ymw16} and DM$_{\rm host}\sim 40/(1+z)$~pc~cm$^{-3}$ \citep{xh15,yz16}, where $z$ is the redshift of the host galaxy.
\citet{zhang18} derived how one could estimate the upper limit of FRB luminosity based on its observed DM. They also derived a DM--$z$ relation that correctly accounts for the integrated dispersion effect for objects in cosmological distances assuming homogeneous IGM, and provided an approximated formula $z\sim$DM$_{\rm IGM} / 855$~pc~cm$^{-3}$ for $z<2$.
We follow their calculations closely and solve for the redshift $z$ of FRB 181123 using the equation ${\rm DM_{obs}-DM_{Gal}} = (855z + 40/(1+z))$~pc~cm$^{-3}$.
The best solution is $z\lesssim $1.93 and DM$_{\rm IGM}\simeq1650$~pc~cm$^{-3}$.
Note that we kept 3 significant digits for $z$ and DM$_{\rm IGM}$ to show the exact solution to the above equation.
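The redshift solution can be reproduced by solving the DM budget equation numerically. Below is a plain-Python bisection sketch (our own illustration) using the approximate DM$_{\rm IGM}\simeq855z$~pc~cm$^{-3}$ relation quoted above:

```python
DM_OBS = 1812.0    # observed DM, pc cm^-3
DM_GAL = 149.5     # Galactic contribution (YMW16), pc cm^-3
DM_HOST = 40.0     # assumed host DM in its rest frame, pc cm^-3

def dm_budget_residual(z):
    """855*z (approximate IGM term for z < 2) plus the
    redshift-diluted host term, minus the extragalactic DM."""
    return 855.0 * z + DM_HOST / (1.0 + z) - (DM_OBS - DM_GAL)

# The residual is monotonically increasing in z, so bisect.
lo, hi = 0.0, 3.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if dm_budget_residual(mid) > 0.0:
        hi = mid
    else:
        lo = mid
z_est = 0.5 * (lo + hi)
dm_igm = 855.0 * z_est
```

The bisection converges to $z\simeq1.93$ and DM$_{\rm IGM}\simeq1650$~pc~cm$^{-3}$, matching the values quoted above; both should be read as upper limits given the uncertainty discussion that follows.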
To quantify the uncertainties on the
above $z$ estimate, we now step through the relevant contributions.
We note that the estimated DM$_{\rm Gal}$ has $\sim$50\% uncertainty \citep{ymw16}, DM$_{\rm host}$ may contain $\sim $100\% uncertainty, together they contribute 5\% relative uncertainty to the estimated DM$_{\rm IGM}$.
Furthermore, due to inhomogeneity in the IGM, objects with the same DM$_{\rm IGM}$ could be at different $z$ \citep{plm+19}; according to \citet{wmb18}, this could increase the uncertainty in our estimated $z$ by an additional 10\%.
In the above derivations, we assumed probable distributions of DM$_{\rm Gal}$ and DM$_{\rm host}$, but we could still underestimate them substantially and thus overestimate DM$_{\rm IGM}$ and $z$.
Therefore, we treat the FRB's derived redshift, luminosity, and energy as upper limits.
Assuming the $\Lambda$CDM cosmological parameters with $H_0=67.8\pm0.9$~km~s$^{-1}$~Mpc$^{-1}$
and $\Omega_{\rm M} =0.308\pm0.012$ \citep{planck16}, the luminosity distance of a $z=1.93$ object is $\simeq15.3$~Gpc \footnote{\url{https://docs.astropy.org/en/stable/cosmology}}.
Based on equation (8) and (9) in \citet{zhang18}, FRB~181123's peak luminosity is $\gtrsim 2\times10^{43}$~erg~s$^{-1}$ and the isotropic energy $\gtrsim 2\times10^{40}$~erg, both limits contain relative uncertainties of $\sim 15$\%.
These values are comparable to those derived in Table 1 of \citet{zhang18} from previously discovered FRBs.
\begin{figure*}
\centering
\includegraphics[scale=0.7]{dedisp_new.eps} \\
\caption{\label{fig:dedisp}
{\bf Left-Top panel}: The summed pulse profile from the dedispersed pulse, showing two smaller peaks following closely after the main pulse. The red, green, and blue dotted lines show the best Gaussian fit to the three peaks. The best-fit parameters are listed in Table \ref{tab:par}. {\bf Left panel}: The dedispersed pulse plot showing clear multi-peak structures. The straightness of the pulse indicates a good fit to the $\nu^{-2}$ dispersion law. The horizontal white strips are the results of channels being cleared due to RFI contamination. {\bf Center panel}: The spectrum of FRB 181123 (on-pulse mean spectrum minus the off-pulse mean spectrum for the main peak P1). Here we only take the on-pulse part (24 spectral samples) of P1 for the on-pulse spectrum. We take 400 spectral samples to form the average off-pulse spectrum, 200 on the left of the P1 pulse, and 200 on the right of the P3 pulse. The gray shadow indicates the uncertainty of the on-off spectrum estimated from the root mean squares of the on- and off-pulse spectra. {\bf Right panel:} The zoomed dynamic spectrum of P2 and P3 smoothed with a Gaussian filter. We fit the two peaks with 2D Gaussian functions to estimate the frequency drift rate between the two peaks. The white line connects the centers of the two best-fit Gaussian functions. {\bf Right-Top panel:} A zoomed view of the pulse profiles of P2 and P3.}
\end{figure*}
\begin{figure}
\centering
\includegraphics[scale=0.7]{DM_struct.eps} \\
\caption{\label{fig:DMcurve}
{Similar to Figure 2 of \citet{hessels19}, the left panel shows the
square of the Gaussian-smoothed forward-difference time derivative of the
dedispersed burst profile as a function of DM and time. The profiles are
downsampled by a factor of 2 to boost the S/N. The right panel shows the
sum along the time axis and its Gaussian fit. The best-fit DM from this
fine-structure analysis is $1812\pm1$~pc~cm$^{-3}$, consistent with the
estimate from the S/N of P1 (Table \ref{tab:par}).}
}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.5]{beam_diff.eps} \\
\caption{\label{fig:beam}
The circular contours show the theoretical power pattern of the FAST main beam at 1250~MHz, assuming a uniformly illuminated dish of 300~m diameter. The 0.5-level circle labels the half-power contour of the main beam. The two 0.017-level contours on the outside mark the rough position of the first sidelobe. The colored image underneath the contour shows the approximate extra spectral index caused by the frequency-dependent beam pattern and the source drifting. The black arrow illustrates how far a celestial object would drift in the dispersed duration of FRB~181123 (4.17~s). }
\end{figure}
\begin{deluxetable}{lc}
\tablecaption{ \label{tab:par}
Observational Parameters of FRB 181123.}
\tabletypesize{\scriptsize}
\tablewidth{0pt}
\tablehead{\colhead{Parameter} & \colhead{Value
\tablenotemark{a} }}
\startdata
Date (UTC) & 2018 Nov 23\\
Time (UTC) & 17:49:09 \\
MJD arrival time\tablenotemark{b} & 58445.74246675\\
Right ascension\tablenotemark{c} & 05$^{\rm h}$06$^{\rm m}$06$^{\rm s} $.76\\
Declination\tablenotemark{c} & 18$^{\degr}$09$^{'}$35$^{''} $.7 \\
Gal. long. (deg) & 184.06\\
Gal. lat. (deg) & --13.47\\
DM (pc~cm$^{-3}$) & 1812(2) \\
P1 pulse arrival time\tablenotemark{d}(s) & 2.64422(4)\\
P1 pulse height (mJy) & 65(3)\\
P1 pulse width (ms) & 1.05(6)\\
P2 pulse arrival time\tablenotemark{d} (s) & 2.6499(1) \\
P2 pulse height (mJy) & 24(3)\\
P2 pulse width (ms) & 0.6(2) \\
P3 pulse arrival time\tablenotemark{d} (s) & 2.6538(1) \\
P3 pulse height (mJy) & 19(2) \\
P3 pulse width (ms) & 1.2(2) \\
P1 Spectral index\tablenotemark{e} & --3.3(5) \\
P2-P3 frequency drift rate\tablenotemark{f} & $\lesssim$--140~MHz~ms$^{-1}$
\enddata
\tablenotetext{a}{Number in parentheses indicates 1-$\sigma$ uncertainty in the least significant digit.}
\tablenotetext{b}{The topocentric arrival time of the first peak (P1) in MJD.}
\tablenotetext{c}{Coordinates obtained from the position of the
receiver cabin when P1 arrived. We assume a position error circle of 3' radius.}
\tablenotetext{d}{Pulse arrival time at the top edge of the band
(1500~MHz) since the start of the particular data file.}
\tablenotetext{e}{$S(\nu)\propto\nu^\alpha$}
\tablenotetext{f}{Measured in the observer's frame.}
\end{deluxetable}
\subsection{A lower bound on the FAST event rate}
Since the PICS-aided GPU search system is an experimental pipeline, it probably does not find all events that cross the 7$\sigma$ threshold. A more thorough search is currently being conducted using standard software such as {\it HEIMDALL}.
With one detection from a pipeline with recall $<1$, we can only calculate a lower bound on the FAST event rate for the given detection threshold of 7$\sigma$, i.e., 25~mJy~ms for 1-bit polarization-summed data.
Assuming that FRB events follow a Poisson process, the waiting time to the first detected event follows an exponential distribution, i.e., the probability of zero events up to a given time decays exponentially.
In this case, the cumulative distribution function of the exponential
distribution is $F(x) = 1 - e^{-k x}$, where $k$ is the
event rate and $x$ is the time to the first detection.
Here we would like to find the 95\% confidence lower limit for the event rate
$k$, given one event in 1500 hours. We find that $F(1500\ \rm hours) > 0.05$ when $k > 0.034$ events per 1000 hours (i.e., 0.3 events per year).
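As a quick numerical check (a sketch, not part of the actual search pipeline), the lower bound follows from inverting the exponential waiting-time CDF at the 5\% level:

```python
import math

def rate_lower_bound(t_obs_hours, confidence=0.95):
    """Lower bound on a Poisson rate at the given confidence level,
    given that the first event arrived within t_obs_hours of observing.
    Solves F(t_obs) = 1 - exp(-k * t_obs) = 1 - confidence for k."""
    return -math.log(confidence) / t_obs_hours

k = rate_lower_bound(1500.0)   # events per hour
print(k * 1000)                # ~0.034 events per 1000 hours
print(k * 24 * 365.25)         # ~0.3 events per year
```

The same routine applies to any exposure time, which is useful as the FAST drift-scan dataset grows.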
Because our search system does not have a 100\% recall rate, no upper bound can be set.
Due to small-number statistics, the lower limit of $k > 0.034$ events per 1000 hours does not yet constrain most theoretical predictions \citep{lhz+17, lllz18, lml+20}.
Nevertheless, this first detection attests to FAST's potential to systematically detect FRBs in the future, and such detections will put far more stringent constraints on the FRB-rate at high DM.
\citet{lor18} and \citet{zhang18} showed that the FAST event rate, especially the rate of high-DM FRBs, would help determine the luminosity function of FRBs.
Detection of very high DM events ($> 6500$ pc~cm$^{-3}$) could probe FRBs at redshifts beyond $z=10$ \citep{zhang18} and help shed light on the cosmological distribution of FRBs.
\section{Discussion and Summary}
FRB~181123 shows a clear multi-peak profile. Its two smaller peaks, P2 ($5.7\pm0.2$~ms after P1) and P3 ($3.9\pm0.2$~ms after P2), show narrow-band features that resemble the down-drifting pattern seen in the bursts of the repeating FRBs~121102 \citep{gsp+18,hessels19}, 181128, 181222, and 181226 \citep{chime_8}.
Although multiple sub-bursts and fine pulse structures have also been observed from (so far) non-repeating FRBs \citep{cpk+16,ffb+18,cms+20}, the combination of multiple sub-bursts (or components) with millisecond spacing
and a down-drifting pattern has mostly been seen in repeating FRBs. This suggests that FRB~181123 could be a repeating FRB source. To test this, we conducted follow-up observations toward the position of the FRB using FAST. So far, we have observed the position during four independent sessions, each with one hour of integration, on 2020 February 2, 28, and 29 and 2020 March 27. We have not detected any repeating bursts above the fluence level of 0.012~Jy~ms (7$\sigma$ limit; we used 8-bit digitization and two polarizations in the follow-up observations, thus reaching a lower detection threshold than in the original 1-bit data).
The non-detection of repeating bursts from FRB~181123 may be due to one of the following four reasons \citep[e.g.][]{palaniswamy18}: 1. The waiting times for producing repeating bursts may be longer than the duration of our follow-up; 2. Faint repeating bursts may be produced in our observing window but lie below the detection threshold. This requires that the peak fluxes of the putative repeating bursts are lower than that of FRB~181123 by a factor of more than six\footnote{A similar case has been observed in FRB 171019 \citep{kumar19}, whose repeating bursts are much fainter than the originally detected burst.}; 3. The burst activity may be intermittent (e.g. changing due to unidentified periodic activity), as in FRB 121102, and our follow-up observations may have been taken when the source was not active; 4. The source is an intrinsically non-repeating FRB. Any of the first three reasons may be at play. We plan to continue monitoring the FRB in the coming months and will hopefully detect some bursts eventually if the FRB is a repeater.
The last possibility is difficult to prove, but if true, a catastrophic event has to be able to produce multiple peaks during the emission process. This is challenging for most models, even though in some scenarios it may be possible. For example, the ``blitzar'' model \citep{fr14} suggests that an FRB could originate from the final flash of a supramassive neutron star collapsing into a black hole through magnetic braking. Detailed simulations \citep{most18} showed that this scenario can produce a series of sub-ms pulses whose amplitudes decay exponentially with time. The observed durations of the sub-pulses of FRB 181123, when corrected for the redshift factor, may be consistent with this model. The down-drifting feature seen in the sub-pulses may be understood within the generic bunching curvature radiation model invoking open field lines \citep{wang19}, which is invoked in the blitzar model during the magnetospheric ejection phase \citep{most18}.
FAST's sensitivity makes it one of the most effective telescopes at detecting FRBs from high redshift; therefore, its FRB detection rate is an important observable.
\citet{lhz+17} predicted that the FRB detection rate for the FAST 19-beam receiver would be $5\pm2$ detections per 1000 hours, based on an all-sky event rate of $3\times10^4$ day$^{-1}$ crossing a fluence threshold of 0.03~Jy~ms.
From a different approach, based on the event rate density of the luminosity function presented in \citet{lllz18}, \citet{lml+20} predicted an all-sky event rate of $10^4$--$10^5$~day$^{-1}$ for events with flux higher than 5~mJy, which corresponds to 1.5--15 events per 1000 hours given the field of view of the FAST 19-beam receiver.
With one detection of FRB 181123, we
can place a lower bound of $0.034$ events per 1000 hours,
which translates to an all-sky rate of $>9\times10^2$ day$^{-1}$.
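The conversion to an all-sky rate scales the per-telescope rate by the ratio of the full sky to the instantaneous field of view. A rough numerical check, assuming a FAST 19-beam field of view of about 0.037 deg$^2$ (an assumed value, not quoted in the text; roughly 19 beams of $\sim$3$'$ width):

```python
FULL_SKY_DEG2 = 41253.0   # total sky area in square degrees
FOV_DEG2 = 0.037          # assumed FAST 19-beam field of view (hypothetical value)

k_per_1000h = 0.034       # 95% confidence lower bound derived above
k_per_day = k_per_1000h / 1000.0 * 24.0

all_sky_per_day = k_per_day * FULL_SKY_DEG2 / FOV_DEG2
print(all_sky_per_day)    # ~9e2 events per day
```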
If FRB~181123 is a one-off FRB rather than a repeater, at least some energetic FRBs may form a distinct category from the repeaters.
One may then use this event to estimate the event rate density of such energetic events and compare it with models predicting one-off FRBs, e.g. those invoking compact star mergers.
Equation (10) in \citet{zhang18} shows how the fluence ($F_{\nu}\simeq S_{\nu}\tau_{\rm obs}$) of a putative FRB scales with redshift: \begin{equation}
F_{z'} = \left(\frac{1+z'}{1+z}\right)^{1+\alpha}\left(\frac{D_{\rm L}}{D_{\rm L}'}\right)^2F_{z},
\end{equation}
where $D_{\rm L}$ and $D_{\rm L}'$ denote the luminosity distances corresponding to redshifts $z$ and $z'$, and $\alpha$ is the spectral index of the FRB.
Following this equation, and using the observed spectral index of P1 for $\alpha$, we find that an FRB like FRB~181123 could be detected with a
0.025~Jy~ms fluence at a redshift of $z\simeq4.25$.
The resulting redshift corresponds to a comoving volume of 1800~Gpc$^3$.
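Equation (1) can be evaluated numerically. The sketch below assumes a flat $\Lambda$CDM cosmology with $H_0 = 67.7$~km~s$^{-1}$~Mpc$^{-1}$ and $\Omega_m = 0.31$ (our assumed parameters, not stated in the text) and the P1 spectral index $\alpha = -3.3$:

```python
import math

C_KM_S = 299792.458  # speed of light in km/s

def luminosity_distance(z, H0=67.7, Om=0.31, steps=2000):
    """Luminosity distance (Mpc) in flat LambdaCDM via trapezoidal integration."""
    if z == 0:
        return 0.0
    dz = z / steps
    E = lambda zz: math.sqrt(Om * (1 + zz) ** 3 + (1 - Om))
    integral = sum((1 / E(i * dz) + 1 / E((i + 1) * dz)) / 2 * dz
                   for i in range(steps))
    return (1 + z) * C_KM_S / H0 * integral

def fluence_scaled(F_z, z, z_prime, alpha=-3.3):
    """Eq. (1): fluence an identical burst would have if moved from z to z'."""
    ratio_z = ((1 + z_prime) / (1 + z)) ** (1 + alpha)
    ratio_d = (luminosity_distance(z) / luminosity_distance(z_prime)) ** 2
    return ratio_z * ratio_d * F_z
```

Given an estimate of the burst's true redshift, one would solve `fluence_scaled(F, z, z_prime) = 0.025` Jy~ms for `z_prime` to recover the detection horizon quoted above.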
So far, we have made a single detection of an FRB event with energy above $10^{40}$~erg in a volume of 1800~Gpc$^3$.
For the total amount of data we searched, we infer a 95\% confidence lower limit on the event rate of 900 per day, and a lower limit on the event rate density of $>200$~Gpc$^{-3}$~yr$^{-1}$ for FRBs with energy $>10^{40}$~erg.
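The event-rate density follows directly from the rate and horizon-volume numbers above (yielding roughly $2\times10^2$~Gpc$^{-3}$~yr$^{-1}$ after rounding):

```python
all_sky_per_day = 900.0   # 95% confidence lower limit on the all-sky rate
volume_gpc3 = 1800.0      # comoving volume out to z ~ 4.25

rate_density = all_sky_per_day * 365.25 / volume_gpc3  # events per Gpc^3 per year
print(rate_density)       # roughly 2e2 Gpc^-3 yr^-1
```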
This lower limit is conservative for several reasons: we can observe only a fraction of the FRBs, some (maybe most) FRB events have isotropic energies lower than $10^{40}$~erg, and some FRBs may be beamed and thus not beaming towards us.
Interestingly, this lower limit is already in mild tension with the black hole--black hole (BH-BH) merger event rate density ($\sim 200$~Gpc$^{-3}$~yr$^{-1}$) inferred from LIGO observations \citep{mg18} (regardless of whether BH-BH mergers can make FRBs), but could be consistent with the neutron star--neutron star (NS-NS) merger event rate density ($\sim 1.5\times10^3$~Gpc$^{-3}$~yr$^{-1}$; \citealt{aaa+17}). More detections may be made in the same dataset we used, and the true event rate density may then be constrained to a (much) higher value than our limit. This could lead to better constraints on the event rate density of energetic events \citep{lml+20}, giving tighter tests of the consistency with compact star merger models \citep[see also][]{wang20}.
FAST is a very sensitive telescope. The FRB sample from a FAST blind search
would likely be composed of many high-DM, high-redshift events with higher
isotropic energies than the samples from other telescopes. Therefore, the FAST
blind-search FRB sample may become relevant for testing catastrophic models for ``one-off'' FRBs.
\section{Illustration of the IV Selection Procedure for $P=2$}\label{app:illustration}
Figure \ref{fig:Ward2} illustrates the procedure. Here, we have a situation with four IVs and two endogenous regressors. Instrument No.~1 is invalid, because it is directly correlated with the outcome, while the remaining three IVs (2, 3, 4) are related to the outcome only through the endogenous regressors and are hence valid.
In the first graph on the top left, we have plotted each just-identified estimate. The horizontal and vertical axes represent the coefficient estimates of the effects of the first ($\beta_1$) and second regressor ($\beta_2$), respectively. Because there are four candidate IVs, each point has been estimated with two IVs, in this case with the IV pairs 1-2, 1-3, 1-4, 2-3, 2-4 and 3-4.
In the initial Step (0), each just-identified estimate forms its own cluster. In Step 1, we join the estimates which are closest in terms of their Euclidean distance, e.g. those estimated with pairs 2-3 and 2-4. These two estimates now form one cluster, leaving five clusters. We recompute the distances to this new cluster and continue with this procedure until only one cluster is left in the bottom-right graph. At each step, we evaluate the Sargan test using the IVs involved in the estimation of the largest cluster. When the p-value is larger than a certain threshold, say 0.05, we stop the procedure. Ideally this will be the case at Step 2 or 3 of the algorithm, because here the largest cluster (in orange) is formed only by the valid IVs (2, 3 and 4). In that case, exactly the valid IVs are selected as valid.
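The merging rule can be sketched in a few lines. The snippet below is illustrative only (the just-identified estimation and the Sargan stopping rule are omitted): it implements plain Ward-style agglomeration over a set of estimates, recording the partition at every step.

```python
def ward_distance(A, B):
    """Ward linkage: |A||B|/(|A|+|B|) times the squared distance between cluster means."""
    dim = len(A[0])
    mean_a = [sum(p[d] for p in A) / len(A) for d in range(dim)]
    mean_b = [sum(p[d] for p in B) / len(B) for d in range(dim)]
    sq = sum((x - y) ** 2 for x, y in zip(mean_a, mean_b))
    return len(A) * len(B) / (len(A) + len(B)) * sq

def agglomerate(estimates):
    """Repeatedly merge the closest pair of clusters; return the full merge path."""
    clusters = [[e] for e in estimates]
    path = [[list(c) for c in clusters]]
    while len(clusters) > 1:
        pairs = [(i, j) for i in range(len(clusters))
                 for j in range(i + 1, len(clusters))]
        i, j = min(pairs, key=lambda ij: ward_distance(clusters[ij[0]], clusters[ij[1]]))
        merged = clusters[i] + clusters[j]
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]
        path.append([list(c) for c in clusters])
    return path
```

At each step of the returned path, one would run the Sargan test on the IVs underlying the largest cluster and stop once its p-value exceeds the chosen threshold.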
\newgeometry{left=1cm, right=1cm, top=2.5cm, bottom=3cm}
\begin{figure}[H]
\caption{Illustration of the Algorithm with Two Regressors \label{fig:Ward2}}
\includegraphics[scale=0.9, trim=25 25 25 25, clip]{pics/Step0.pdf}
\includegraphics[scale=0.9, trim=25 25 25 25, clip]{pics/Step1.pdf}
\includegraphics[scale=0.9, trim=25 25 25 25, clip]{pics/Step2.pdf}
\includegraphics[scale=0.9, trim=25 25 25 25, clip]{pics/Step3.pdf}
\includegraphics[scale=0.9, trim=25 25 25 25, clip]{pics/Step4.pdf}
\includegraphics[scale=0.9, trim=25 25 25 25, clip]{pics/Step5.pdf}
\end{figure}
\restoregeometry
\section{Properties of just-identified estimates when $P \geq 1$}\label{app:PropertiesOfJustIdentified}
There are $\binom{J}{P}$ just-identified models. We write the corresponding just-identified estimators for $\bm{\beta}$ and $\bm{\alpha}$ analogously to the proof of Proposition A.5 in \citet*{Windmeijer2021Confidence} for the case $P = 1$. First, for an arbitrary $[j]$, partition the matrix $\mathbf{Z} = (\mathbf{Z}_1 \quad \mathbf{Z}_2)$, where $\mathbf{Z}_1$ is the $n \times P$ matrix containing the $[j]$-th columns of $\mathbf{Z}$, and $\mathbf{Z}_2$ is the $n \times (J-P)$ matrix containing the remaining columns of $\mathbf{Z}$. Let $\bm{\gamma} = (\bm{\gamma}_1^\prime \quad \bm{\gamma}_2^\prime)^\prime$ be the corresponding partition of the matrix of first-stage coefficients. Let $\mathbf{Z}^* = [\hat{\mathbf{D}}\quad \mathbf{Z}_2]$; then $\mathbf{Z}^* = \mathbf{Z} \hat{\mathbf{H}}$, with
\[
\bm{\hat{H}} =
\left( {\begin{array}{cc}
\hat{\bm{\gamma}}_1 & 0 \\
\hat{\bm{\gamma}}_2 & \mathbf{I}_{J - P}\\
\end{array} } \right)
; \quad
\bm{\hat{H}^{-1}} =
\left( {\begin{array}{cc}
\hat{\bm{\gamma}}_1^{-1} & 0 \\
-\hat{\bm{\gamma}}_2\hat{\bm{\gamma}}_1^{-1} & \mathbf{I}_{J - P}\\
\end{array} } \right)
\]
\noindent The just-identified 2SLS estimators using $\mathbf{Z}_{[j]}$ as instruments and controlling for the remaining instruments can be written as
\[
(\hat{\bm{\beta}}_{[j]}^\prime \quad \hat{\bm{\alpha}}_{[j]}^\prime)^\prime = \hat{\mathbf{H}}^{-1}\hat{\bm{\Gamma}} = \hat{\mathbf{H}}^{-1}(\mathbf{Z}' \mathbf{Z})^{-1} \mathbf{Z}' (\mathbf{D}\bm{\beta} + \mathbf{Z} \bm{\alpha} + \mathbf{u}) = \hat{\mathbf{H}}^{-1}(\hat{\bm{\gamma}}\bm{\beta} + \bm{\alpha}+(\mathbf{Z}' \mathbf{Z})^{-1} \mathbf{Z}'\mathbf{u})
\]
\noindent Note that $\hat{\bm{\gamma}}\bm{\beta}+\bm{\alpha}$ is equal to
\[
\left(
\begin{array}{c}
\hat{\bm{\gamma}}_1 \bm{\beta} + \bm{\alpha}_1\\
\hat{\bm{\gamma}}_2 \bm{\beta} + \bm{\alpha}_2\\
\end{array}
\right) \text{.}
\]
\noindent By Assumption \ref{ass:Asymptotics}, we have the following asymptotics
\[
plim(\hat{\bm{\beta}}_{[j]}' \quad \hat{\bm{\alpha}}_{[j]}')'
=plim (\mathbf{\hat{H}^{-1}} \left(
\begin{array}{c}
\hat{\bm{\gamma}}_1 \bm{\beta} + \bm{\alpha}_1\\
\hat{\bm{\gamma}}_2 \bm{\beta} + \bm{\alpha}_2
\end{array}
\right))=\left(
\begin{array}{c}
\bm{\beta} + \bm{\gamma}_1^{-1}\bm{\alpha}_1\\
-\bm{\gamma}_2 \bm{\gamma}_1^{-1}\bm{\alpha}_1 + \bm{\alpha}_2
\end{array}
\right)
\]
\noindent We denote the $\binom{J}{P}$ $P \times 1$-dimensional inconsistency terms as $plim(\hat{\bm{\beta}}_{[j]} - \bm{\beta}) = \bm{\gamma}_{[j]}^{-1}\bm{\alpha}_{[j]} = \mathbf{q}$.
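A small simulation (a sketch with made-up coefficients, using numpy; not from the paper) illustrates the result: a combination of valid IVs recovers $\bm{\beta}$, while a combination containing an invalid IV converges to $\bm{\beta} + \bm{\gamma}_1^{-1}\bm{\alpha}_1$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, P, J = 200_000, 2, 4
beta = np.array([1.0, -0.5])
alpha = np.array([0.8, 0.0, 0.0, 0.0])   # only IV 1 is invalid
Gamma = np.array([[1.5, 0.2],            # first-stage coefficients (J x P)
                  [0.3, 1.4],
                  [1.0, -0.5],
                  [0.4, 1.1]])

Z = rng.normal(size=(n, J))
V = rng.normal(size=(n, P))              # first-stage errors
u = 0.5 * V[:, 0] + rng.normal(size=n)   # endogeneity via shared noise
D = Z @ Gamma + V
y = D @ beta + Z @ alpha + u

def beta_hat(cols):
    """Just-identified 2SLS: instrument D with Z[:, cols], control for the rest."""
    rest = [j for j in range(J) if j not in cols]
    X = np.column_stack([D, Z[:, rest]])
    PZX = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)   # projection of regressors on Z
    return np.linalg.solve(PZX.T @ X, PZX.T @ y)[:P]

# Inconsistency of the combination containing IVs 1 and 2 (indices 0, 1):
q = np.linalg.solve(Gamma[[0, 1]], alpha[[0, 1]])
# beta_hat([1, 2]) is close to beta; beta_hat([0, 1]) is close to beta + q
```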
\section{$\mathcal{F}_0$ consists of valid IVs only}\label{app:gP}
Next, we show that the family with $\mathbf{q}=\mathbf{0}$ is composed only of valid IVs, i.e. those with $\bm{\alpha}_1=\mathbf{0}$. Let $\bm{\gamma}$, $\mathbf{Z}$ and $\bm{\alpha}$ be partitioned in the same way as in Appendix \ref{app:PropertiesOfJustIdentified}.
\begin{remark}
$\bm{\alpha}_1 = \mathbf{0}$ is necessary and sufficient for $\mathbf{q} = \mathbf{0}$.
\end{remark}
\paragraph{Proof:} First, we prove sufficiency by direct proof:
Assume $\bm{\alpha}_1=\mathbf{0}$ holds. Then $\mathbf{q} = \bm{\gamma}_1^{-1} \bm{\alpha}_1=\mathbf{0}$ follows directly.\\
Second, we prove necessity by contraposition: Assume $\bm{\alpha}_1 \neq \mathbf{0}$; then $\bm{\gamma}_1^{-1}\bm{\alpha}_1 \neq \mathbf{0}$. The latter holds because
otherwise $\bm{\gamma}_1^{-1}$ would have a non-trivial null space and hence would not be invertible, so $(\bm{\gamma}_1^{-1})^{-1} = \bm{\gamma}_1$ would not exist, which it clearly does by Assumption 1.a. $\qed$
\noindent
This also implies that $\mathcal{F}_0$ consists only of valid IVs, and all combinations $[j]$ with $\bm{\gamma}_1^{-1}\bm{\alpha}_1=\mathbf{0}$ are elements of $\mathcal{F}_0$.
Hence, the following remark directly follows:
\begin{remark}
$|\mathcal{F}_0| = \binom{g}{P}$.
\end{remark}
\iffalse
\section{One family can consist of different vectors $\alpha$}\label{app:onefamily}
We have shown that the number of valid IVs defines the size of the family $|\mathcal{F}_0|$. However, this relation between $g$ and $|\mathcal{F}_0|$ is available only when $\bm{\alpha}_1=\mathbf{0}$.
\begin{remark}
The function $f(\bm{\alpha}) = \bm{\gamma}_1^{-1}\bm{\alpha} = \mathbf{q}$ is non-injective.
\end{remark}
\paragraph{Proof:} Proof by counter-example: Show that there is more than one element in the domain which leads to the same image, i.e.
\[
\exists (\bm{\gamma}_1,\bm{\alpha}), (\bm{\gamma}_1',\bm{\alpha}') \quad s.t. \quad \bm{(\alpha}, \bm{\gamma}_1) \neq (\bm{\alpha}_1', \bm{\gamma}_1') \text{ with } f(\bm{\alpha}, \bm{\gamma}_1) = \bm{\gamma}_1^{-1}\bm{\alpha} = \bm{\gamma}_1'^{-1}\bm{\alpha}' = f(\bm{\alpha}', \bm{\gamma}_1')
\]
Define $ (\bm{\alpha}', \bm{\gamma}_1') = (\bm{\alpha} + \bm{\delta}_{\alpha}, \bm{\gamma}_1 + \bm{\delta}_{\gamma_1})$. Then, $\mathbf{c}=\bm{\hat{\gamma_1}}^{-1}\bm{\alpha}$ and $\mathbf{c'}=\bm{\hat{\gamma_1}}^{'-1}\bm{\alpha'}$. Assume $\mathbf{c}=\mathbf{c}'$.
\begin{align}
\hat{\bm{\gamma}}_1\mathbf{c} & = \bm{\alpha}\\
\hat{\bm{\gamma}}_1\mathbf{c} + \bm{\delta}_{\gamma} \mathbf{c} - \bm{\delta}_{\alpha} &= \bm{\alpha}
\end{align}
Therefore
\begin{align}
\bm{\delta}_{\gamma} \mathbf{c} & = \bm{\delta}_{\alpha} \\
\end{align}
This means that all $(\bm{\alpha}, \bm{\gamma}) \neq (\bm{\alpha'}, \bm{\gamma'})$ s.t. $\bm{\delta}_{\gamma} \mathbf{c} = \bm{\delta}_{\alpha}$ lead to the same $\mathbf{q}$. $\qed$
Hence, even though the number of IVs with the same value $\alpha_j$ is smaller than $\binom{g}{P}$, the largest family might still consist of combinations of invalid IVs, because the first-stage coefficient matrix also determines $\mathbf{q}$.
\fi
\section{Oracle Properties}\label{app:oracle}
This section gives proofs for Lemma \ref{lemma:AssignToFamily1} and Theorems \ref{th:ConsistentSelection} and \ref{th:ConsistentSelectionLATE}. All proofs apply for the general case that $P \geq 1$.
\subsection{Proof of Lemma 1}\label{app:ProofOfLemma1}
Overall, we want to show that the probability that a cluster $\mathcal{S}_j$ whose elements come from the true underlying partition $\mathcal{S}_{0q}$ is merged with another cluster whose elements come from the same partition $\mathcal{S}_{0q}$ goes to 1.
\noindent
The proof is structured as follows:
\begin{enumerate}
\item We note that the means of clusters whose elements come from the same family converge to the same vector as each estimator in the cluster.
\item Merging two clusters whose elements all come from the same family is equivalent to the two clusters having minimal distance.
\item We show that clusters whose members come from the same family have distance zero, and clusters whose elements come from different families have non-zero distance, with probability going to one.
\end{enumerate}
\begin{proof}
\iffalse
$$\{[j], \, [k]\} \in \mathcal{F}_A \, ,\quad A \in \mathbb{R}^P$$
$$\{[l]\} \in \mathcal{F}_B \, ,\quad B \in \mathbb{R}^P, \, B \neq A$$
Under Remark 1:
$$plim(\hat{\beta}_{[j]}) = A$$
$$plim(\hat{\beta}_{[k]}) = A$$
$$plim(\hat{\beta}_{[l]}) = B$$
$$\bar{\beta}_{q} = \frac{\sum\limits_{[j] \in \hat{\theta}_{k}} \hat{\beta}_{[j]}}{|\hat{\theta}_{k}|} \text{ where }\hat{\theta}_{k} \subseteq \mathcal{F}_q$$
and hence $$plim(\bar{\beta}_{q}) = \mathbf{q}$$
\fi
\textit{Part 1}:
Consider
\begin{align*}
\begin{split}
[j], [k] \in \mathcal{F}_{q} \, ,\quad \mathbf{q} \in \mathbb{R}^P\\
[l] \in \mathcal{F}_r \, ,\quad \mathbf{r} \in \mathbb{R}^P, \quad \mathbf{r} \neq \mathbf{q}\\
\end{split}
\end{align*}
Under Assumptions 1 - 5:
\begin{align}
\begin{split}
plim(\hat{\bm{\beta}}_{[j]}) = plim(\hat{\bm{\beta}}_{[k]}) = \mathbf{q}\\
plim(\hat{\bm{\beta}}_{[l]}) = \mathbf{r}
\end{split}
\end{align}
Let $\mathcal{S}_j$ and $\mathcal{S}_k$ be clusters associated with elements from the same family: $\mathcal{S}_j$, $\mathcal{S}_k \subset \mathcal{S}_{0q}$ and $\mathcal{S}_{l} \subset \mathcal{S}_{0r}$.
\begin{equation}\label{eq:conv}
plim \,\, \bar{\mathcal{S}}_j = plim \, \frac{\sum\limits_{\hat{\bm{\beta}}_{[j]} \in \mathcal{S}_{j}} \hat{\bm{\beta}}_{[j]}}{|\mathcal{S}_{j}|} = \frac{|\mathcal{S}_{j}|\,\mathbf{q}}{|\mathcal{S}_{j}|} \text{, where }\mathcal{S}_{j} \subset \mathcal{S}_{0q}
\end{equation}
and hence $$plim(\bar{\mathcal{S}}_{j}) = \mathbf{q}\text{.}$$
\noindent
\textit{Part 2:}
Consider the case where the Algorithm decides whether to merge two clusters, $\mathcal{S}_j$ and $\mathcal{S}_k$, containing estimators using combinations from the same family, or to merge two clusters from different underlying partitions, $\mathcal{S}_j$ and $\mathcal{S}_l$. The two clusters which are closest in terms of their weighted Euclidean distance are merged first. Hence, we need to consider the distances between $\mathcal{S}_j$ and $\mathcal{S}_k$, $\mathcal{S}_j$ and $\mathcal{S}_l$, as well as $\mathcal{S}_k$ and $\mathcal{S}_l$.
$\mathcal{S}_j$ is merged with a cluster with elements of its own $\mathcal{S}_{0q}$ iff
$\frac{|\mathcal{S}_j||\mathcal{S}_k|}{|\mathcal{S}_j| + |\mathcal{S}_k|}||\bar{\mathcal{S}}_j - \bar{\mathcal{S}}_k||^2 < \frac{|\mathcal{S}_j||\mathcal{S}_l|}{|\mathcal{S}_j| + |\mathcal{S}_l|}||\bar{\mathcal{S}}_j - \bar{\mathcal{S}}_l||^2$. The following two are hence equivalent
\begin{equation*}
lim \, P (\mathcal{S}_j \cup \mathcal{S}_k = \mathcal{S}_{jk} \subseteq \mathcal{S}_{0q}) = 1
\end{equation*}
\begin{equation}\label{eq:Lemma1ToShow}
\Leftrightarrow \quad lim \, P(\frac{|\mathcal{S}_j||\mathcal{S}_k|}{|\mathcal{S}_j| + |\mathcal{S}_k|}||\bar{\mathcal{S}}_j - \bar{\mathcal{S}}_k||^2 < \frac{|\mathcal{S}_j||\mathcal{S}_l|}{|\mathcal{S}_j| + |\mathcal{S}_l|}||\bar{\mathcal{S}}_j - \bar{\mathcal{S}}_l||^2) = 1
\end{equation}
where $\mathcal{S}_{jk}$ is the new merged cluster.
\textit{Part 3}: We want to prove equation \eqref{eq:Lemma1ToShow} in the following. We can then prove $lim \, P(\frac{|\mathcal{S}_j||\mathcal{S}_k|}{|\mathcal{S}_j| + |\mathcal{S}_k|}||\bar{\mathcal{S}}_k - \bar{\mathcal{S}}_j||^2 < \frac{|\mathcal{S}_k||\mathcal{S}_l|}{|\mathcal{S}_k| + |\mathcal{S}_l|}||\bar{\mathcal{S}}_k - \bar{\mathcal{S}}_l||^2) = 1$ by changing the subscripts.
First, define $a = \frac{|\mathcal{S}_j||\mathcal{S}_k|}{|\mathcal{S}_j| + |\mathcal{S}_k|}||\bar{\mathcal{S}}_j - \bar{\mathcal{S}}_k||^2$ , $b=\frac{|\mathcal{S}_j||\mathcal{S}_l|}{|\mathcal{S}_j| + |\mathcal{S}_l|}||\bar{\mathcal{S}}_j - \bar{\mathcal{S}}_l||^2$ and $c=\frac{|\mathcal{S}_j||\mathcal{S}_l|}{|\mathcal{S}_j| + |\mathcal{S}_l|} (\mathbf{q}-\mathbf{r})' (\mathbf{q}-\mathbf{r})$.
Under \eqref{eq:conv}
\begin{align*}
\begin{split}
plim(a)& = 0\\
plim(b)& = c\\
\end{split}
\end{align*}
To show: $\underset{n \rightarrow \infty}{\lim}\, P (a<b)=1$.
Proof by contradiction: Show that $\underset{n \rightarrow \infty}{\lim}\, P(b<a)\neq0$ leads to a contradiction. Let $\lim$ imply $\underset{n \rightarrow \infty}{\lim}$ in the following.
\noindent
By the definitions of convergence in probability, it follows that
\begin{equation}\label{eq:plim-a}
\lim \, P(a < \varepsilon) = 1
\end{equation}
and
\begin{equation}\label{eq:plim-b-c}
\lim \, P(|b-c|< \varepsilon)=1 \text{.}
\end{equation}
for any $\varepsilon > 0$.
Therefore, $\lim \, P(b<a)\neq0$ and $\lim \, P(a < \varepsilon)=1$ imply $\lim \, P(b<\varepsilon)\neq0$.
Now, consider $\varepsilon < \frac{1}{2}c$.
Then,
\begin{equation}\label{eq:lemmaproofcontradict}
\lim \, P(b<\frac{1}{2}c)\neq0
\end{equation}
Because of the absolute value $|b-c|$, consider two cases, $b<c$ and $b \geq c$.
If $b<c$: $\lim \, P(c-b< \frac{1}{2}c)=1 \, \Leftrightarrow \, \lim \, P(c-b > \frac{1}{2}c)=0$.
$\Rightarrow \, \lim \, P(b<\frac{1}{2}c)=0$, a contradiction with \eqref{eq:lemmaproofcontradict}.
If $b\geq c$: $a<\varepsilon<\frac{1}{2}c<c \leq b$ and hence
$\lim \, P(a<b)=1 \, \Leftrightarrow \, \lim \, P(b\leq a)=0$, again a contradiction. $\qed$
\iffalse
Given that $\lim P \, (a<\varepsilon)=1$, we want to show that $lim \, P (b < a < \varepsilon)>0$, i.e. $lim P (b < \varepsilon)>0$, for any $\varepsilon$. However, this is ruled out by $lim P(|b-c|>\varepsilon)=0$, for any $\varepsilon$, with $c$ fixed and larger zero, a contradiction. In other words, in order for this to hold we would need to have
$$plim(b)=0$$
and
$$plim(b)=c$$
where $c\neq0$. But a RV can't converge to two limits at the same time, right?
\begin{align*}
\begin{split}
plim(\frac{|\mathcal{S}_j||\mathcal{S}_k|}{|\mathcal{S}_j| + |\mathcal{S}_k|}||\bar{\mathcal{S}}_j - \bar{\mathcal{S}}_k||^2)& = \bm{0}\\
plim(\frac{|\mathcal{S}_j||\mathcal{S}_l|}{|\mathcal{S}_j| + |\mathcal{S}_l|}||\bar{\mathcal{S}}_j - \bar{\mathcal{S}}_l||^2)& = \frac{|\mathcal{S}_j||\mathcal{S}_l|}{|\mathcal{S}_j| + |\mathcal{S}_l|} (\mathbf{q}-\mathbf{r})' (\mathbf{q}-\mathbf{r})\\
\end{split}
\end{align*}
This means
\begin{align*}
\begin{split}
lim \, P (\frac{|\mathcal{S}_j||\mathcal{S}_k|}{|\mathcal{S}_j| + |\mathcal{S}_k|}||\bar{\mathcal{S}}_j - \bar{\mathcal{S}}_k||^2 - \frac{|\mathcal{S}_j||\mathcal{S}_l|}{|\mathcal{S}_j| + |\mathcal{S}_l|}||\bar{\mathcal{S}}_j - \bar{\mathcal{S}}_l||^2 + \frac{|\mathcal{S}_j||\mathcal{S}_l|}{|\mathcal{S}_j| - |\mathcal{S}_l|}(\mathbf{q}-\mathbf{r})'(\mathbf{q}-\mathbf{r}) < \bm{\varepsilon}) & = 1\\
lim \, P (\frac{|\mathcal{S}_j||\mathcal{S}_k|}{|\mathcal{S}_j| + |\mathcal{S}_k|}||\bar{\mathcal{S}}_j - \bar{\mathcal{S}}_k||^2 < \frac{|\mathcal{S}_j||\mathcal{S}_l|}{|\mathcal{S}_j| + |\mathcal{S}_l|}||\bar{\mathcal{S}}_j - \bar{\mathcal{S}}_l||^2 + \bm{\varepsilon} - \frac{|\mathcal{S}_j||\mathcal{S}_l|}{|\mathcal{S}_j| + |\mathcal{S}_l|}(\mathbf{q}-\mathbf{r})' (\mathbf{q}-\mathbf{r})) & = 1\\
\end{split}
\end{align*}
for any $\bm{\varepsilon} > \bm{0}$, which can get arbitrarily close to zero. The last term is a quadratic form and hence is positive. Therefore if this inequality holds, Equation \ref{eq:Lemma1ToShow} also holds.
\\
\fi
\iffalse
Now, show that
\begin{equation*}
lim \, P(\frac{|\mathcal{S}_j||\mathcal{S}_l|}{|\mathcal{S}_j| + |\mathcal{S}_l|}||\bar{\mathcal{S}}_j - \bar{\mathcal{S}}_l||^2 < \frac{|\mathcal{S}_j||\mathcal{S}_k|}{|\mathcal{S}_j| + |\mathcal{S}_k|}||\bar{\mathcal{S}}_j - \bar{\mathcal{S}}_k||^2) = 0
\end{equation*}
in the same way.
\begin{align*}
\begin{split}
lim \, P (\frac{|\mathcal{S}_j||\mathcal{S}_k|}{|\mathcal{S}_j| + |\mathcal{S}_k|}||\bar{\mathcal{S}}_j - \bar{\mathcal{S}}_k||^2 - \frac{|\mathcal{S}_j||\mathcal{S}_l|}{|\mathcal{S}_j| + |\mathcal{S}_l|}||\bar{\mathcal{S}}_j - \bar{\mathcal{S}}_l||^2 + \frac{|\mathcal{S}_j||\mathcal{S}_l|}{|\mathcal{S}_j| + |\mathcal{S}_l|}(\mathbf{q}-\mathbf{r})^\intercal (\mathbf{q}-\mathbf{r}) > \bm{\varepsilon}) & = 0\\
lim \, P (\frac{|\mathcal{S}_j||\mathcal{S}_l|}{|\mathcal{S}_j| + |\mathcal{S}_l|}||\bar{\mathcal{S}}_j - \bar{\mathcal{S}}_l||^2 - \frac{|\mathcal{S}_j||\mathcal{S}_l|}{|\mathcal{S}_j| + |\mathcal{S}_l|}(\mathbf{q}-\mathbf{r})' (\mathbf{q}-\mathbf{r}) + \bm{\varepsilon} < \frac{|\mathcal{S}_j||\mathcal{S}_k|}{|\mathcal{S}_j| + |\mathcal{S}_k|}||\bar{\mathcal{S}}_j - \bar{\mathcal{S}}_k||^2)& = 0\\
\end{split}
\end{align*}
The last inequality is stronger than the one to be shown. Even decreasing the first term on the left-hand side by a positive term does not decrease it enough to make it smaller than the right-hand side.
\fi
\iffalse
\begin{align}
\begin{split}
lim \, P([j] \in \hat{\theta}_k \subseteq \mathcal{F}_q) & = lim \, P(||\hat{\beta}_{[j]} - \bar{\beta}_{q}||^2 < ||\hat{\beta}_{[j]} - \bar{\beta}_{q}||^2)\\
& = lim \, P(||\hat{\beta}_{[j]} - \bar{\beta}_{q}||^2 - ||\hat{\beta}_{[j]} - \bar{\beta}_{r}||^2 < 0) = 1
\end{split}
\end{align}
The latter holds because\\
$$lim \, P(||\hat{\beta}_{[j]} - \bar{\beta}_{q}||^2 - ||\hat{\beta}_{[j]} - \bar{\beta}_{r}||^2) =||q-q||^2 -||q - r||^2 = -||q - r||^2 < 0$$
\begin{align}
\begin{split}
lim \, P([j] \in \hat{\theta}_k \nsubseteq \mathcal{F}_q) & = lim \, P(||\hat{\beta}_{[j]} - \bar{\beta}_{r}||^2 < ||\hat{\beta}_{[j]} - \bar{\beta}_{q}||^2)\\
& = lim \, P(||\hat{\beta}_{[j]} - \bar{\beta}_{r}||^2 - ||\hat{\beta}_{[j]} - \bar{\beta}_{q}||^2 < 0) = 0
\end{split}
\end{align}
means
$$plim(||\hat{\beta}_{[j]} - \hat{\beta}_{[k]}||^2) = ||A - A||^2 = 0$$
$$plim(||\hat{\beta}_{[j]} - \hat{\beta}_{[l]}||^2) = ||A - B||^2$$
\begin{align}
\begin{split}
lim \, P(\text{Join two from same family}) & = lim \, P(||\bar{\beta}_{[j]} - \bar{\beta}_{[k]}||^2 < ||\bar{\beta}_{[j]} - \bar{\beta}_{[l]}||^2)\\
& = lim \, P(||\bar{\beta}_{[j]} - \bar{\beta}_{[k]}||^2 - ||\bar{\beta}_{[j]} - \bar{\beta}_{[l]}||^2 < 0) = 1
\end{split}
\end{align}
\textcolor{red}{But does (8) really follow from this last line?\\
Or could we show it from the definition of consistency?}
\vspace{10px}
Other way:
Two elements are joined if their distance is minimal
\begin{align}
\begin{split}
plim(min(||\bar{\beta}_{q} - \bar{\beta}_{r}||^2) = 0)
& = min(||A - A||)^2 = 0 \quad \text{ if r and q are subsets of the same family}
\end{split}
\end{align}
If $r = q$
\fi
\end{proof}
\subsection{Proof of Theorem \ref{th:ConsistentSelection}}
\begin{proof}
The proof for Theorem \ref{th:ConsistentSelection} is structured as follows:
\begin{enumerate}
\item We show that asymptotically the selection path generated by Algorithm \ref{algo:ward} contains $\mathcal{F}_0$, the family formed by all the valid instrumental variables.
\item We show that Algorithm \ref{algo:Sargan} can recover $\mathcal{F}_0$ from the selection path from Algorithm \ref{algo:ward}.
\end{enumerate}
\textit{Part 1} follows from Corollary \ref{coro:GroupStructureRetrieved} directly.
\noindent \textit{Part 2}: Firstly, we establish the properties of the Sargan statistic.
The following two equations can also be found in WLHB (p.~10). Let $\mathcal{I}$ be the true set of invalid instruments and $\mathcal{V}$ be the true set of valid instruments. The oracle model is
\begin{equation*}
\mathbf{y} = \mathbf{D}\bm{\beta} + \mathbf{Z}_{\mathcal{I}} \bm{\alpha}_{\mathcal{I}} + \mathbf{u} = \mathbf{X}_\mathcal{I} \bm{\theta}_\mathcal{I} + \mathbf{u}
\end{equation*}
with $\mathbf{X}_\mathcal{I} = \left[\mathbf{D} \quad \mathbf{Z}_{\mathcal{I}}\right]$ and $\bm{\theta}_{\mathcal{I}} = \left[\bm{\beta}^\prime \quad \bm{\alpha}_{\mathcal{I}}^\prime\right]^\prime$, the Sargan test statistic is given by
\begin{equation}
S(\hat{\bm{\theta}}_{\mathcal{I}})=\frac{\hat{\mathbf{u}}(\hat{\bm{\theta}}_{\mathcal{I}})^\prime\mathbf{Z}(\mathbf{Z}^\prime\mathbf{Z})^{-1}\mathbf{Z}^\prime\hat{\mathbf{u}}(\hat{\bm{\theta}}_{\mathcal{I}})}{\hat{\mathbf{u}}(\hat{\bm{\theta}}_{\mathcal{I}})^\prime\hat{\mathbf{u}}(\hat{\bm{\theta}}_{\mathcal{I}})/n}
\end{equation}
where $\hat{\mathbf{u}}(\hat{\bm{\theta}}_{\mathcal{I}})=\mathbf{y} - \mathbf{X}_{\mathcal{I}}\hat{\bm{\theta}}_{\mathcal{I}}$, with $\hat{\bm{\theta}}_{\mathcal{I}}$ the 2SLS estimator of $\bm{\theta}_{\mathcal{I}}$, and where the projection is onto the full instrument matrix $\mathbf{Z}$.
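A numerical sketch of the statistic (using numpy, with simulated data; not code from the paper): the projection is onto the full instrument matrix, so in a just-identified model the 2SLS residual is exactly orthogonal to $\mathbf{Z}$ and the statistic degenerates to zero, while overidentifying restrictions make it positive.

```python
import numpy as np

def sargan(y, X, Z):
    """Sargan statistic for 2SLS of y on X using the full instrument matrix Z."""
    PZ = Z @ np.linalg.solve(Z.T @ Z, Z.T)            # projection onto col(Z)
    theta = np.linalg.solve(X.T @ PZ @ X, X.T @ PZ @ y)
    u = y - X @ theta                                  # 2SLS residual
    return float(u @ PZ @ u / (u @ u / len(y)))

rng = np.random.default_rng(1)
n, J = 500, 4
Z = rng.normal(size=(n, J))
D = Z @ rng.normal(size=(J, 1)) + rng.normal(size=(n, 1))
y = D @ np.array([1.0]) + rng.normal(size=n)           # all instruments valid here

X_over = D                               # 1 parameter, 4 instruments: overidentified
X_just = np.column_stack([D, Z[:, :3]])  # 4 parameters, 4 instruments: just-identified
# sargan(y, X_just, Z) is numerically ~0; sargan(y, X_over, Z) follows chi2 with 3 df
```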
\noindent Let $\hat{\mathcal{I}}$ be the estimated set of invalid instruments and $\hat{\mathcal{V}}$ be the estimated set of valid instruments where $\hat{\mathcal{I}} = \mathcal{J} \setminus \hat{\mathcal{V}}$. Following Proposition $3.2$ in \cite{windmeijer2019two}, the Sargan statistic has the following properties:
\begin{property}Properties of the Sargan statistic\label{prop:Sargan}
\begin{enumerate}
\item For all ${|\hat{\mathcal{V}}| \choose P}$ combinations of the instruments from $\hat{\mathcal{V}}$, if the IVs contained in them belong to the same family, then $S(\hat{\bm{\theta}}_{\hat{\mathcal{I}}}) \overset{d}{\rightarrow} \chi^2_{|\mathcal{J}| - |\hat{\mathcal{I}}| - P}$.
\item For all ${|\hat{\mathcal{V}}| \choose P}$ combinations of the instruments from $\hat{\mathcal{V}}$, if the IVs contained in them belong to a mixture of families, then $S(\hat{\bm{\theta}}_{\hat{\mathcal{I}}})=O_p(n)$.
\end{enumerate}
\end{property}
\noindent
With these properties we can show that the downward testing procedure described in Algorithm \ref{algo:Sargan} selects valid instruments consistently with $\xi_{n, J - |\hat{\mathcal{I}}| - P} \rightarrow \infty \text{ for } n \rightarrow \infty \text{, and } \xi_{n, J - |\hat{\mathcal{I}}| - P}=o(n)\text{.}$ Let the number of clusters formed in Algorithm \ref{algo:ward} at a given step be $K$, e.g.\ at Step 1, $K = {J \choose P}$ and at Step 2, $K = {J \choose P}-1$, etc. Let the true number of families be $Q$. Consider applying the Sargan test to the model selected by the largest cluster at each step under the following scenarios:
\begin{enumerate}
\item $1 \leq K < Q$. For each of these steps, the largest cluster is either associated with a mixture of different families, or with one family.
\begin{itemize}
\item Consider the case where the largest cluster is associated with a mixture of different families. Then by Property \ref{prop:Sargan} and $\xi_{n, J - |\hat{\mathcal{I}}| - P}=o(n)$, we have
\begin{equation*}
\lim_{n \rightarrow \infty} P (S(\hat{\bm{\theta}}_{\hat{\mathcal{I}}}) < \xi_{n, J - |\hat{\mathcal{I}}| - P}) = 0 \text{.}
\end{equation*}
In this case, asymptotically the Sargan test would be rejected and the downward testing procedure moves to the next step.
\item Consider the case where the largest cluster is associated with one family. This family must then be $\mathcal{F}_0$ since, by Assumption \ref{ass:familyplurality}, $\mathcal{F}_0$ is the largest family among all $Q$ families. Then, by Property \ref{prop:Sargan} and $\xi_{n, J - |\hat{\mathcal{I}}| - P} \rightarrow \infty$, for the Sargan test we have
\begin{equation}\label{eq:sarganpass}
\lim_{n \rightarrow \infty} P (S(\hat{\bm{\theta}}_{\hat{\mathcal{I}}}) < \xi_{n, J - |\hat{\mathcal{I}}| - P}) = 1 \text{,}
\end{equation}
indicating that $\mathcal{V}$ would be selected as the set of valid instruments asymptotically.
\end{itemize}
\item $K = Q$. By Corollary \ref{coro:GroupStructureRetrieved} we know that the $K$ clusters are associated with the $Q$ families respectively, and by Assumption \ref{ass:familyplurality}, the cluster associated with $\mathcal{F}_0$ is the largest cluster. Then applying the Sargan test at this step would be testing all the valid instruments, hence we also have Equation \eqref{eq:sarganpass} and Algorithm \ref{algo:Sargan} selects $\mathcal{V}$ as the set of valid instruments.
\end{enumerate}
To summarize, asymptotically, at steps $1 \leq K < Q$, Algorithm \ref{algo:Sargan} only stops when $\mathcal{F}_0$ forms the largest cluster and hence selects the oracle model, otherwise it moves to step $K = Q$ and selects the oracle model.
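For concreteness, the selection path plus downward testing can be sketched in a few lines. This is an illustrative re-implementation for the single-regressor case ($P=1$), not the authors' code; the helper names and the particular critical-value sequence (any $\xi \rightarrow \infty$ with $\xi = o(n)$ works) are our own choices.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import chi2

def sargan(y, X, Z):
    """Sargan statistic of the 2SLS fit of y on X with the full instrument set Z."""
    PzX = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
    u = y - X @ np.linalg.lstsq(PzX, y, rcond=None)[0]
    Pz_u = Z @ np.linalg.lstsq(Z, u, rcond=None)[0]
    return float(u @ Pz_u / (u @ u / len(y)))

def ahc_select(y, D, Z, level=0.01):
    """Largest-cluster downward testing for P = 1; returns a validity mask over IVs."""
    n, J = Z.shape
    beta_j = (Z.T @ y) / (Z.T @ D)                        # just-identified estimates
    link = linkage(beta_j.reshape(-1, 1), method="ward")  # agglomerative (Ward) path
    valid = np.ones(J, dtype=bool)
    for K in range(1, J):                                 # coarse -> fine clusterings
        labels = fcluster(link, t=K, criterion="maxclust")
        largest = np.argmax(np.bincount(labels)[1:]) + 1
        valid = labels == largest                         # largest cluster kept valid
        X = np.column_stack([D, Z[:, ~valid]])            # invalid IVs as controls
        df = J - int((~valid).sum()) - 1
        xi = chi2.ppf(1 - level / np.log(n), df)          # one choice of xi_n
        if df > 0 and sargan(y, X, Z) < xi:
            break                                         # Sargan passes: stop here
    return valid
```

The loop traverses the dendrogram from coarse to fine clusterings and stops the first time the largest cluster passes the Sargan test, mirroring the scenarios enumerated above.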
\noindent
Combining \textit{Part 1} and \textit{Part 2} proves Theorem \ref{th:ConsistentSelection}.
\end{proof}
\subsection{Proof of Theorem \ref{th:ConsistentSelectionLATE}}
The proof of Theorem \ref{th:ConsistentSelectionLATE} works in the same way as the proof of Theorem \ref{th:ConsistentSelection}.
\begin{proof}
The proof for Theorem \ref{th:ConsistentSelectionLATE} is structured as follows:
\begin{enumerate}
\item We note that asymptotically the selection path generated from Algorithm \ref{algo:ward} contains all groups $\mathcal{G}_q$.
\item We show that Algorithm \ref{algo:Sargan} can recover all $\mathcal{G}_q$ from the selection path from Algorithm \ref{algo:ward}.
\end{enumerate}
\textit{Part 1} again follows directly from Corollary \ref{coro:GroupStructureRetrieved}.
\noindent \textit{Part 2}: Firstly, we establish the properties of the Sargan statistic.
\begin{property}Properties of the Sargan statistic\label{prop:SarganLATE}
\begin{enumerate}
\item For all combinations of instruments from $\hat{\mathcal{G}}_k$, if their just-identified estimators are associated with the same group, then $S(\hat{\bm{\theta}}_{\hat{\mathcal{G}}_k}) \overset{d}{\rightarrow} \chi^2_{J - |\hat{\mathcal{G}}_k| - P}$.
\item For all combinations of instruments from $\hat{\mathcal{G}}_k$, if their just-identified estimators are associated with a mixture of groups, then $S(\hat{\bm{\theta}}_{\hat{\mathcal{G}}_k})=O_p(n)$.
\end{enumerate}
\end{property}
\noindent
As before, $\xi_{n, J - |\hat{\mathcal{I}}| - P} \rightarrow \infty \text{ for } n \rightarrow \infty \text{, and } \xi_{n, J - |\hat{\mathcal{I}}| - P}=o(n)\text{.}$ Consider applying the Sargan test to each cluster separately at each step, under the following scenarios:
\begin{enumerate}
\item $1 \leq K < Q$, i.e.\ the number of clusters is smaller than the number of groups. For each of these steps, at least one cluster is associated with a mixture of different groups.
When a cluster contains a mixture of different groups, by Property \ref{prop:SarganLATE} and $\xi_{n, J - |\hat{\mathcal{G}}_k| - P}=o(n)$, we have
\begin{equation}\label{eq:sarganpassLATE}
\lim_{n \rightarrow \infty} P (S(\hat{\bm{\theta}}_{\hat{\mathcal{G}}_k}) < \xi_{n, J - |\hat{\mathcal{G}}_k| - P}) = 0 \text{.}
\end{equation}
In this case, at least one of the Sargan tests is rejected asymptotically and the downward testing procedure moves to the next step.
\item $K = Q$. By Corollary \ref{coro:GroupStructureRetrieved} we know that the $K$ clusters correspond to the $Q$ groups respectively, so that each $\hat{\mathcal{G}}_k$ equals one group $\mathcal{G}_q$. Then for each of the $K$ tests we have
\begin{equation}
S(\hat{\bm{\theta}}_{\hat{\mathcal{G}}_k}) = S(\hat{\bm{\theta}}_{\mathcal{G}_q})\text{.}
\end{equation}
By Property \ref{prop:SarganLATE} and $\xi_{n, J - |\mathcal{G}_q| - P} \rightarrow \infty$, we have
\begin{equation*}
\lim_{n \rightarrow \infty} P (S(\hat{\bm{\theta}}_{\mathcal{G}_q}) < \xi_{n, J - |\mathcal{G}_q| - P}) = 1 \text{.}
\end{equation*}
Since each of the $K$ tests uses IVs from a single group, asymptotically none of them is rejected, and Algorithm \ref{algo:Sargan} stops.
\end{enumerate}
To summarize, asymptotically Algorithm \ref{algo:Sargan} does not stop at steps $1 \leq K < Q$; it then moves to step $K = Q$ and recovers all groups $\mathcal{G}_q$.
\noindent
Combining \textit{Part 1} and \textit{Part 2} proves Theorem \ref{th:ConsistentSelectionLATE}.
\end{proof}
\end{appendices}
\section{Monte Carlo Simulations}\label{sec:MonteCarlo}
\subsection{All Candidate Instruments are Strong}\label{sec:strongsimulation}
We conduct Monte Carlo simulation experiments to illustrate the performance of our AHC method in IV selection and estimation, and compare it with the existing Confidence Interval Method (CIM) and the Two-Stage Hard Thresholding (HT) Method in situations where Assumption \ref{ass:FirstStageSingle} and Assumption \ref{ass:FirstStageMulti} are satisfied. In this set of simulations we find that our method works as well as CIM in terms of bias and outperforms HT in small-sample settings. When there are multiple regressors, the summed bias is very close to the oracle bias and is only a fraction of the bias of the naive estimator.
We closely follow the setting in \citet*{Windmeijer2021Confidence}: there are 21 candidate instruments, 12 of which are invalid and 9 valid, with $\bm{\alpha} = c_\alpha \left( \bm{\iota}_6', \, \, \, 0.5 \bm{\iota}_6', \,\,\, \bm{0}_9' \right)'$, where $\bm{0}_r$ is an $r \times 1$ vector of zeros and $\bm{\iota}_r$ is an $r \times 1$ vector of ones. The first-stage parameters are given by $\bm{\gamma} = c_\gamma \times \bm{\iota}_{21}$. We set $c_\alpha=1$ and $c_\gamma=0.4$. The true $\beta$ is $0$ and $\mathbf{z}_i \sim N(0, \bm{\Sigma_z})$ with $\bm{\Sigma}_{z,jk} = 0.5^{|j-k|}$. The errors are generated from
$$\left(\begin{array}{r}
u_i \\
\varepsilon_i
\end{array}\right) \sim N\left(\bm{0},
\left(
\begin{array}{cc}
1 & 0.25\\
0.25 & 1\\
\end{array}\right) \right) \text{.}
$$
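This data generating process can be reproduced with a short numpy sketch (the function name simulate_design is ours):

```python
import numpy as np

def simulate_design(n, c_alpha=1.0, c_gamma=0.4, rho=0.25, seed=0):
    """One draw from the design above: 21 candidate IVs, the first 12 invalid."""
    rng = np.random.default_rng(seed)
    J = 21
    alpha = c_alpha * np.r_[np.ones(6), 0.5 * np.ones(6), np.zeros(9)]
    gamma = c_gamma * np.ones(J)
    # Toeplitz covariance Sigma_{z,jk} = 0.5^{|j-k|}
    idx = np.arange(J)
    Sigma_z = 0.5 ** np.abs(idx[:, None] - idx[None, :])
    Z = rng.multivariate_normal(np.zeros(J), Sigma_z, size=n)
    u, eps = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n).T
    D = Z @ gamma + eps
    y = Z @ alpha + u            # true beta = 0, so y = D * 0 + Z alpha + u
    return y, D, Z
```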
The IV selection and estimation results are presented in Table \ref{tab:frank} for sample sizes $N = 500,$ $1000, 2000$ for $1000$ Monte Carlo replications. We report the median absolute error (MAE) and the standard deviation (SD) of the IV estimators, and the coverage rate of the $95\%$ confidence intervals (\textit{Coverage}). For IV selection results we report three statistics: the number of selected invalid instruments (\textit{\# invalid}), the frequency of selecting all invalid instruments as invalid (\textit{p allinv}) and the frequency of selecting the oracle model (\textit{p oracle}).
\input{Comparison_frank}
For $N = 500$, the oracle 2SLS estimator (\textit{oracle}), which uses only the valid IVs and controls for the truly invalid ones, has the lowest MAE at $0.016$, and the coverage rate of the $95\%$ confidence interval is $0.929$. The naive 2SLS estimator (\textit{naive}), which treats all candidate instruments as valid irrespective of their validity, has a much larger median absolute error of about $1.056$, and its $95\%$ confidence interval never covers the true value. As expected, this does not change when the sample size is increased to $2000$. When using the two-stage hard thresholding (HT) method with 500 observations, the MAE is even larger than that of the naive 2SLS estimator and the method never chooses the oracle model, so none of the confidence intervals covers the true value. This is in line with the IV selection results - the frequency of including all invalid instruments as invalid, and that of selecting the oracle model, are both $0$. When using CIM, the MAE is already low at $N=500$, the number of IVs chosen as invalid is close to 12, the frequency with which the oracle model is selected is $0.966$, and the coverage rate is $0.906$. Results are very similar for our AHC method. When the sample size increases, the performance improves for all three selection methods. For CIM and AHC, the MAE equals that of the oracle estimator at both $N = 1000$ and $N = 2000$, and the probabilities of selecting the oracle model are close to one, while for HT they are lower, showing that CIM and AHC have better finite-sample performance.
\input{MultipleRegressors}
We also inspect the performance of our method when there are multiple endogenous regressors; the existing selection methods do not allow for such an extension. Again, we draw 21 IVs with $\bm{\alpha} = c_\alpha \left( \bm{\iota}_6', \, \, \, 0.5 \bm{\iota}_6', \,\,\, \bm{0}_9' \right)'$. The first-stage parameters are drawn from uniform distributions as $\bm{\gamma}_1 \sim \text{unif}(1,2)$, $\bm{\gamma}_2 \sim \text{unif}(3,4)$ and, when there is a third endogenous regressor, $\bm{\gamma}_3 \sim \text{unif}(5,6)$. The remaining parameters are the same as before. With this setting we estimate $\bm{\beta} = \bm{0}$ over $m=1000$ replications. The results can be found in Table \ref{tab:MultipleRegressors}. The performance of our method approaches that of the oracle estimator as the sample size grows.
However, as the number of endogenous variables increases from 1 to 3, a larger sample size is needed to achieve oracle selection.
The fact that AHC only uses the point estimates in the selection also comes at a cost.
In the setting with only one regressor, the standard deviation of the AHC estimate is never the lowest among the three methods. For example, with 1000 observations the standard deviation of AHC is 0.135, compared with 0.114 for HT and 0.017 for CIM. Interestingly, the standard deviation of AHC is not necessarily the worst for $N=500$ and $N=2000$ either. Still, caution is needed when using AHC with fairly small samples.
\subsection{Some Weak Instruments Among the Candidate Instruments\label{sec:weaksimulations}}
We now check the performance of the aforementioned methods when Assumption \ref{ass:FirstStageSingle} and Assumption \ref{ass:FirstStageMulti} are violated, i.e.\ when there are weak instruments among the candidates. Overall, we find that AHC clearly outperforms CIM in all settings with weak IVs, and it also outperforms HT in the case where the largest group does not consist of strong and valid IVs. Moreover, with two endogenous regressors AHC is still very close to oracle performance.
For individually weak instruments, we consider the local-to-zero setup and set their first-stage parameters to $\gamma_j = C/\sqrt{n}$ with $C = 0.1$.
Firstly, consider the same setting as in Section \ref{sec:strongsimulation} with one endogenous variable, but with the following variations:
\begin{itemize}
\item Design 1: All 12 invalid instruments are weak, and all 9 valid instruments are strong: $\bm{\gamma} = c_{\gamma}\left(\bm{\iota}_{12}'C/\sqrt{n}, \,\,\,\bm{\iota}_9'\right)'$.
\item Design 2: All 12 invalid instruments are weak, and almost half of the valid instruments (4 out of 9) are also weak: $\bm{\gamma} = c_{\gamma}\left(\bm{\iota}_{16}'C/\sqrt{n}, \,\,\,\bm{\iota}_5'\right)'$.
\item Design 3: Both the valid and the invalid instruments are mixtures of weak and strong instruments.
\begin{itemize}
\item a). Strong and valid instruments still form the largest group:\\ $\bm{\gamma} = c_{\gamma}\left(\bm{\iota}_6', \,\,\, \bm{\iota}_{7}'C/\sqrt{n}, \,\,\,\bm{\iota}_8'\right)'$.
\item b). Strong and valid instruments do not form the (strictly) largest group:\\ $\bm{\gamma} = c_{\gamma}\left(\bm{\iota}_6', \,\,\, \bm{\iota}_{9}'C/\sqrt{n}, \,\,\,\bm{\iota}_6'\right)'$.
\end{itemize}
\end{itemize}
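The four first-stage coefficient vectors can be written compactly; the sketch below (the function name is ours) encodes Designs 1--3b with local-to-zero weak coefficients:

```python
import numpy as np

def first_stage_gammas(design, n, C=0.1, c_gamma=0.4):
    """First-stage coefficients for Designs 1-3b; weak entries are local to zero."""
    w = C / np.sqrt(n)  # local-to-zero strength of a weak instrument
    designs = {
        "1":  np.r_[w * np.ones(12), np.ones(9)],             # 12 weak invalid
        "2":  np.r_[w * np.ones(16), np.ones(5)],             # 4 valid also weak
        "3a": np.r_[np.ones(6), w * np.ones(7), np.ones(8)],  # strong+valid largest
        "3b": np.r_[np.ones(6), w * np.ones(9), np.ones(6)],  # tie of two groups
    }
    return c_gamma * designs[design]
```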
All the other parameters are the same as in Section \ref{sec:strongsimulation}. We focus on the large-sample performance in the presence of weak instruments and fix the sample size at $N = 2000$. Simulation results are based on $1000$ Monte Carlo replications. We present the results in Table \ref{tab:weaksingle}, where MAE, \textit{\# invalid} and \textit{p allinv} are defined as in Section \ref{sec:strongsimulation}. We report three further IV selection statistics: the frequency of selecting all valid and strong instruments as valid (\textit{strongvalid}), the frequency of selecting all weak invalid instruments as invalid (\textit{weakin}), and the frequency of selecting all weak valid instruments as invalid (\textit{weakva}). In these designs, the oracle model includes only the strong and valid instruments as valid. Our primary focus is the selection of the invalid instruments: it is crucial that all the invalid instruments (whether strong or weak) are selected as invalid, because including any invalid instrument in IV estimation can cause severe bias.
\input{weak_IV_single}
In Table \ref{tab:weaksingle} we can see that in the presence of weak instruments, the CI method can be very problematic - the frequencies of selecting all invalid instruments as invalid are low in all settings (lowest at $0.024$ in Design 1 and highest at $0.351$ in Design 3a), meaning that it almost always includes invalid instruments as valid. Consequently, the MAE of the post-selection estimator is very large (and much larger than those of the oracle, HT and AHC). We have provided reasons for this behavior in the discussion on weak instruments.
The HT method performs well in almost all designs. It selects all weak instruments (both valid and invalid) as invalid with probability almost equal to 1. Also, it has high frequencies of selecting all strong and valid instruments as valid. It can be seen that if the strong and valid instruments form the largest group, the voting mechanism of the HT method can select the oracle model. This is due to the pre-selection of strong IVs performed in HT and might be a further way to complement CIM (and potentially AHC).
\input{weak_double_setup}
In line with the selection performance, the MAEs of HT are identical to those of the oracle models. In Design 3b, however, the plurality rule no longer holds - there is a tie between the group of strong and valid instruments and the group of strong and invalid instruments. In this situation, the voting mechanism does not perform well, as \textit{p allinv} is only $0.053$. This results in a significantly larger MAE than that of the oracle model.
AHC performs well in general, with an MAE similar to that of the oracle model in all settings. For Designs 1, 2 and 3a, it ensures that all the invalid instruments are selected as invalid, with \textit{p allinv} and \textit{weakin} close to 1. In terms of valid instruments, all the strong valid instruments are included as valid with high frequency (\textit{strongvalid} close to 1). Some of the weak valid instruments are selected as valid. This is because the just-identified estimators of the weak valid instruments may not be far from those of the strong and valid instruments, so in some cases the algorithm does not fully separate them. This is not a major concern: among the weak valid instruments, the algorithm only keeps those whose Wald ratio estimators are not severely distorted, hence the effect of the selected weak instruments on the resulting post-selection IV estimator is limited (the MAEs of AHC are very close to those of the oracle models). Notably, in Design 3b, where there are two largest groups, AHC outperforms HT, including all the invalid instruments as invalid with a frequency of $0.847$. Moreover, AHC can alternatively report both groups.
\input{weak_IV_double}
We also investigate the performance of AHC in the presence of weak IVs with two endogenous variables in large samples (with the sample size fixed at $N = 5000$). Simulations are conducted in four designs with $9$ candidate instruments (see Table \ref{tab:weakdoublesetup}).
In Design 1, each instrument is valid but strong for only one endogenous variable, violating Assumption \ref{ass:FirstStageMulti}. We are interested in whether the AHC method can include all the instruments as valid. In Design 2, all the candidate instruments are still strong for only one treatment variable, but some of them are invalid. In the last design, some of the instruments are weak for both variables and the candidate set is a mixture of valid and invalid instruments. Results are presented in Table \ref{tab:weakdouble}. In all designs, AHC achieves selection results close to the oracle model and hence very similar MAEs.
This shows that even in settings where the usual 2SLS estimator would fail because the first-stage coefficient matrix is nearly rank-reduced, we can still obtain useful estimates. This is because some of the just-identified estimates use combinations of IVs that are strong, which provides sufficient information for selecting valid instruments and hence delivers consistent estimates.
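The just-identified estimates that drive the selection generalize directly to $P>1$: each size-$P$ subset of IVs solves an exactly identified system. A minimal sketch (the helper name is ours):

```python
import numpy as np
from itertools import combinations

def just_identified_grid(y, D, Z):
    """All {J choose P} just-identified IV estimates for P endogenous regressors."""
    n, P = D.shape
    J = Z.shape[1]
    estimates = {}
    for subset in combinations(range(J), P):
        Zs = Z[:, subset]
        # exactly identified IV estimator: solve (Zs'D) b = Zs'y
        estimates[subset] = np.linalg.solve(Zs.T @ D, Zs.T @ y)
    return estimates
```

Subsets whose instruments are weak for one of the regressors produce a near-singular $\mathbf{Z}_s'\mathbf{D}$ and hence erratic estimates, while subsets of jointly strong IVs remain informative.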
\section{Applications}\label{sec:Applications}
In this section we apply our method to the estimation of the return to education in the US and of the effects of immigration on wages. We first describe the settings and then discuss the results. The first application concentrates on a setting with one regressor and shows how AHC can help distinguish strong and valid IVs from weak and/or invalid ones; we compare our results with those of CIM and TSHT. The second application illustrates the three problems that our new estimator can tackle: estimating the coefficients of multiple endogenous regressors in the presence of weak \textit{and} invalid IVs. We also discuss a third possible application of our method: the estimation of the effects of pollutants on human health.
\subsection{The Return to Education - \citet*{Angrist1991Does}}
\begin{table}[!t]
\begin{center}
\input{AK_AHC.tex}
\end{center}
\end{table}
In our first application, we apply our method to the estimation of the effect of years of education on log weekly wages.
A large literature in the economics of education has tried to estimate this effect; \citet*{Angrist1991Does} estimate a positive effect.
\noindent
We estimate the following model
\begin{align*}\label{eq:AK_AHC}
ln(Wages_i) & = \alpha + Educ_{i}\beta + \sum_{t=1940}^{1949}\gamma_t Year_t + u_{i}\\
Educ_{i} & = \alpha + \sum_{t=1940}^{1949}\sum_{q=1}^{3} Y_t Q_q \theta_{qt} + \sum_{t=1940}^{1949}\gamma_t Year_t + \varepsilon_i
\end{align*}
where $\alpha$ is the intercept, $i$ indexes the individual, $Educ_i$ stands for years of schooling, $ln(Wages_i)$ is the logarithm of weekly wages and $\beta$ denotes the coefficient of interest: the return to education. The year dummies are $Year_t$ and $\gamma_t$ are their coefficients. In the first-stage equation \citet*{Angrist1991Does} use interactions between quarter- and year-of-birth dummies as instruments, which yields 30 instruments for men born between 1940 and 1949.
The data are taken from the 5 percent sample of the 1980 Census and comprise 486,926 observations of men born in the United States. We replicate the results in column 2 of Table VI in \citet*{Angrist1991Does}.
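The instrument set can be constructed as follows. This is a pandas sketch under assumed (hypothetical) column names yob and qob; we drop one quarter per year to avoid collinearity with the year dummies, and the omitted category in the original replication may differ.

```python
import pandas as pd

def qob_yob_instruments(df, yob="yob", qob="qob", drop_quarter=4):
    """Quarter-of-birth x year-of-birth dummies; 10 years x 3 quarters = 30 IVs."""
    combo = df[yob].astype(str) + ":" + df[qob].astype(str)
    Z = pd.get_dummies(combo)
    # drop one quarter per year so the dummies are not collinear with year effects
    keep = [c for c in Z.columns if not c.endswith(f":{drop_quarter}")]
    return Z[keep]
```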
The reason for using IVs is that unobserved factors such as ability might affect education and wages at the same time. However, \citet*{Bound1995Problems} show that this study might suffer from a weak instrument problem despite the large sample size. Therefore, in this application we examine the performance of our estimator in the presence of a large number of weak instruments. If there is a subset of valid and strong instruments, corrected estimation may still be possible.
Table \ref{tab:AK_AHC} shows the results of the analysis. Column 1 replicates the OLS results from column 1 in Table VI of \citet*{Angrist1991Does}. The replication of the 2SLS estimate is in column 2. Both coefficients are about 0.06 and statistically significant. For the 2SLS estimate the p-value of the Hansen-Sargan test is very close to zero. The first-stage F-statistic is 7.274, indicating a weak IV problem.
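The three quantities reported in the table (the 2SLS coefficient, the Hansen-Sargan statistic and the first-stage F-statistic) can be computed directly. The following Python fragment is a minimal sketch on simulated data, not the Census extract; the instrument strength, the number of invalid instruments and all variable names are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, J = 2000, 30                      # observations, instruments

def proj(Z, v):
    """Projection of v onto the column space of Z."""
    return Z @ np.linalg.solve(Z.T @ Z, Z.T @ v)

# Hypothetical DGP: one endogenous regressor, a few invalid instruments
Z = rng.normal(size=(n, J))
alpha = np.zeros(J)
alpha[:5] = 0.2                      # 5 instruments with direct effects
gamma = np.full(J, 0.15)             # modest first-stage strength
u = rng.normal(size=n)
d = Z @ gamma + 0.8 * u + rng.normal(size=n)   # endogenous regressor
y = 0.06 * d + Z @ alpha + u                   # true effect 0.06

# 2SLS estimate
d_hat = proj(Z, d)
beta_2sls = (d_hat @ y) / (d_hat @ d)

# Sargan overidentification statistic: n * R^2 of 2SLS residuals on Z
res = y - d * beta_2sls
sargan = n * (res @ proj(Z, res)) / (res @ res)
p_sargan = stats.chi2.sf(sargan, df=J - 1)

# First-stage F statistic for the joint significance of the instruments
rss1 = ((d - d_hat) ** 2).sum()
rss0 = ((d - d.mean()) ** 2).sum()
F = ((rss0 - rss1) / J) / (rss1 / (n - J - 1))
```

With several instruments exerting direct effects, the Sargan p-value collapses towards zero even though the first-stage F-statistic signals instrument relevance, mirroring the pattern in column 2 of the table.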
In column 3, we report the results of the CIM. Sixteen IVs are selected as invalid. The F-statistic is now even lower. Surprisingly, the coefficient is now negative and significant, with a magnitude similar to the 2SLS and OLS coefficient estimates. This indicates that the selection via CIM goes wrong in practice, with results that are not in line with economic theory. When using TSHT, 19 IVs are selected as invalid and the F-statistic is almost 17. The pre-selection of strong IVs hence seems to work. The coefficient estimate is still close to the ones from OLS and 2SLS. However, the Hansen-Sargan p-value remains close to zero and suggests rejection of the null hypothesis. This might be due to the lack of a downward testing procedure, which would allow one to explicitly select a significance level for the Hansen-Sargan test.
Selecting the largest group of IVs with AHC, we end up with 9 valid and 21 invalid IVs. This leads to a doubled coefficient of education, which is still statistically significant. The IVs selected as invalid are (with one exception) the interactions between the first three year dummies and the quarter dummies. When controlling for the 21 IVs selected as weak or invalid, the Hansen-Sargan p-value increases to over 0.7. Moreover, the first-stage F-statistic is 14.34 and now exceeds the threshold of 10. Therefore, concerns about weak IVs are moderated. Interestingly, there is a large overlap between the variables selected as invalid by TSHT and those selected by AHC: 17 of the 21 variables selected as invalid by AHC have also been selected by TSHT. This gives us confidence that the set of selected variables is not erratic. We take the rejection of the post-TSHT model as evidence that our AHC method has the potential to improve variable selection in empirical settings and is more reliable than the other methods.
This suggests that our selection method in fact helps uncover larger positive effects of schooling on wages.
The difference between these estimates is consistent with the 2SLS estimate being biased towards the OLS estimate, which is itself biased. Among the five reported estimates, our post-AHC estimator provides a slightly more credible estimate of the causal effect.
\subsection{Effect of Immigration on Wages}
In the preceding example, we could still compare our estimates with those from other selection methods. In the next example we show a setting where the other methods are not applicable.
Many recent studies have tried to estimate the causal effects of immigration on labor market outcomes.\footnote{An overview of the literature can be found in \citet*{Dustmann2016impact}.}
Most papers in the literature only estimate the contemporaneous effects of immigration on labor market outcomes. \citet*{Jaeger2020Shift} point out that there might be long-term adjustments that affect wages, for example because local workers and firms react to the inflow of migrants over time. This calls for including lagged immigration in the regression equation.
\noindent To illustrate our new method, we estimate the following linear model: \begin{equation}\label{eq:BP}
\Delta y_{lt} = \beta^{short} \Delta immi_{l,t} + \beta^{long} \Delta immi_{l,t-10} + \psi_{t} + \varepsilon_{lt} \text{,}
\end{equation}
as in \citet*{Basso2015Association}.
Here, there are three years $t \in \{1990, 2000, 2010\}$ and 722 commuting zones $l$. The dependent variable $\Delta y_{lt}$ is the change in log weekly wages of high-skilled workers. The independent variables are $\Delta immi_{l,t}$, denoting the \textit{current} change of immigrants in employment, and $\Delta immi_{l,t-10}$, denoting the same change ten years earlier (note the lagged time subscript). The coefficients of interest are the short-term (contemporaneous) effect $\beta^{short}$ and the long-term effect $\beta^{long}$.
Decade fixed effects are captured by $\psi_{t}$ and $\varepsilon_{lt}$ is the error term. Commuting-zone fixed effects are eliminated through first-differencing, as is standard with panel data \citep[see e.g.][p. 315]{Wooldridge2010Econometric}. This regression is canonical in migration economics.
The authors use data from the Census Integrated Public Use Micro Samples and the American Community Survey \citep{Ruggles2015Integrated}.
The key econometric challenge is that migrants select where to live endogenously. For example, migrants might choose where to live based on economic conditions in a region. This creates a bias in the estimates.
A much-used estimation strategy to address this issue is to use historical settlement patterns of migrants from many countries of origin as instruments. When earlier migrants attract migrants at later points in time, the instruments are relevant. This identification strategy dates back to \citet*{Altonji1991effects}. The papers that use this type of instrument in this context are numerous \citep{Jaeger2020Shift}.
Therefore, we use as IVs all shares of working-age foreign-born people (whom we call migrants, analogously) from a certain origin country $j$ in region $l$ at a base period $t_0$. The share is measured relative to their total number in the country and is denoted by $s_{jlt_0}$. We use origin-specific shares from 19 origin-country groups and the base years 1970 and 1980 as separate IVs and obtain $L=38$ IVs. The reasons that attracted migrants in the past are usually expected to be quasi-random with respect to current migration; validity is typically defended on these grounds.
\noindent However, these previous settlement patterns might be invalid.
\citet*{Jaeger2020Shift} show that IV estimators that rely on this kind of exclusion restriction might be inconsistent, first, because of correlation of the IVs with unobserved demand shocks and, second, because of dynamic adjustment processes. Hence, none of these two should play a role. However, it is well plausible that some origin country groups did not locate randomly in the past or have had direct effects on the wages. The second challenge can be somewhat tackled by including lagged immigration as an additional regressor. Of course, this will also be subject to the same endogeneity problem as before and hence should also be instrumented.
To circumvent these problems, we apply the new estimator, which allows for direct effects of many migrant settlement variables on wages by pre-selecting the valid instruments.
This approach is also highly relevant in the current applied economics literature: in a recent paper, \citet*{Goldsmith-Pinkham2020Bartik} discuss a class of IVs which is extensively used in labor economics.\footnote{These so-called shift-share IVs combine the previous settlement shares that we use in this application with aggregate-level shocks, so-called shifts.} A sufficient condition for this type of IV to be valid is that all shares are valid. Therefore, the selection method proposed here can also be used to improve the construction of this class of instruments, as shown in \citet{Apfel2021Relaxing}.
\begin{table}[!t]
\begin{center}
\input{Table_Migration.tex}
\end{center}
\end{table}
\paragraph{Results}
The results can be found in Table \ref{tab:BP}. The first column shows the ordinary least squares results: the contemporaneous effect is 0.586, while the lagged effect is smaller and negative. When using all shares as valid IVs, both effects are larger in absolute terms, but only the contemporaneous effect is marginally statistically significant. The Hansen-Sargan test for this model gives a $p$-value of 0.0126, which is lower than the proposed significance level of $0.1/\log(n)$.
When using AHC with this significance level in the downward testing procedure, two origin-country shares are selected as invalid: the share of Scandinavians and Northern Europeans for base year 1980 and the share of foreign-born from the Baltic States. The coefficient estimates of the short- and long-term effects increase considerably in absolute terms. Now, the coefficient estimate of the short-term effect is clearly statistically significant at the 5 percent level and the estimate of the long-term effect is significant at the 10 percent level. This indicates that the use of AHC indeed makes a big difference. Moreover, the $p$-value of the Sargan test is pushed to 0.1137, above the threshold of $0.013$ used in the testing procedure.
The two IVs selected as invalid are a priori similar in that both are shares from the same broad region.
It is plausible that these shares are indeed problematic, through a combination of two reasons: invalidity and weakness. First, as to invalidity, correlation of unobserved shocks might be the culprit. The concentration of Americans of Swedish descent is highest in the Midwest, especially in Minnesota, where cheap land attracted northern European settlers to these agricultural centers. The agricultural sector remained one of the main sectors in this region in subsequent decades. Baltic migrants, in turn, concentrated in the same large cities which attracted migrants with high wages in the subsequent decades. For both shares, it is therefore likely that wages or unobserved productivity shocks that have driven the initial settlement are correlated over time, invalidating the initial shares as instruments.
Second, weak instruments might exacerbate the problem of inconsistent estimates when using the two selected shares. Northern European and Baltic migration accounted for a small fraction of overall migration, as compared to large migrant groups such as Mexicans or Indians. Using these shares to predict more recent \textit{overall} migration, in which their fraction is even less empirically relevant (especially so for Scandinavian migration), must therefore result in a low correlation and hence in weak instruments.
In their application, \citet*{Goldsmith-Pinkham2020Bartik} show sensitivity-to-misspecification weights that illustrate how the overall bias changes as a certain share's invalidity increases. Notably, the country groups that we estimate to be invalid are not among the shares with the five highest sensitivity-to-misspecification weights. This shows how small and inconspicuous shares might lead to misleading results and how our method can help in identifying them.
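The downward testing procedure used in this application can be sketched in a few lines of Python. The fragment below is a stylized version with one endogenous regressor and simulated data; the candidate valid-IV sets are supplied by hand here, whereas the actual procedure obtains them from the AHC dendrogram and additionally includes the deselected IVs as controls, which this sketch omits.

```python
import numpy as np
from scipy import stats

def sargan_p(y, d, Z):
    """Sargan overidentification p-value for 2SLS with instrument matrix Z."""
    d_hat = Z @ np.linalg.solve(Z.T @ Z, Z.T @ d)
    beta = (d_hat @ y) / (d_hat @ d)
    res = y - d * beta
    stat = len(y) * (res @ Z @ np.linalg.solve(Z.T @ Z, Z.T @ res)) / (res @ res)
    return stats.chi2.sf(stat, df=Z.shape[1] - 1)

def downward_testing(y, d, Z, nested_sets):
    """Walk down a nested sequence of candidate valid-IV sets (largest first)
    and return the first set whose Sargan test is not rejected at 0.1/log(n)."""
    threshold = 0.1 / np.log(len(y))
    for valid in nested_sets:
        if sargan_p(y, d, Z[:, valid]) > threshold:
            return valid
    return nested_sets[-1]

# Hypothetical example: instrument 0 is invalid, the rest are valid
rng = np.random.default_rng(2)
n = 3000
Z = rng.normal(size=(n, 6))
u = rng.normal(size=n)
d = Z @ np.full(6, 0.4) + 0.7 * u + rng.normal(size=n)
y = 0.5 * d + 0.5 * Z[:, 0] + u     # direct effect of IV 0 on the outcome

sets = [list(range(6)), [1, 2, 3, 4, 5]]   # e.g. successive largest AHC clusters
selected = downward_testing(y, d, Z, sets)
```

The full set including the invalid instrument is rejected by the Sargan test, so the procedure moves down to the next candidate set.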
\subsection{The Effect of Air Pollutants on Health}
A literature in environmental and health economics estimates the effect of air pollutants on health outcomes, such as deaths, health care use and medical costs. The problem with estimating the effect of fine particulate matter (PM2.5) on health is that pollution reflects economic activity, which is correlated with the local sorting of individuals over the territory, leading to biased estimates in OLS regressions. To address this problem, environmental economists use wind-direction variables to instrument for the pollutants. For example, \citet*{Deryugina2019Mortality} examine the effect of PM2.5 on health outcomes and use wind direction for each group of pollution monitors, where a group might contain several counties. \citet*{Godzinski2021Disentangling} use a large range of altitude-weather variables derived from a climate model, yielding 328 instrumental variables.
The idea behind the use of weather variables as instruments for pollutants is that they are relevant because wind and weather variables codetermine pollution. They carry pollutants from other, potentially distant places or they codetermine the width of the planetary boundary layer, i.e. the space in which pollutants concentrate. Moreover, the exclusion restriction is fulfilled when the weather variables themselves do not have a direct effect on health outcomes, an assumption which may or may not be fulfilled.
In this application, our method is also likely to be a helpful complement, because there are many potential instruments, many of which may violate the exclusion restriction. The common criticism in this literature is that the instruments might be related to other pollutants which are not included in the model. Also, some weather-related variables, such as humidity at altitude, might be directly correlated with health outcomes, while others are not, as mentioned in \citet*{Godzinski2021Disentangling}. Weather variables that describe conditions close to the ground are more likely to act as controls, and hence the choice of altitude is important. This also suggests that there might be groups of IVs which are more credible than others: for example, those further away from the ground should be more credible than those close to the ground. \citet*{Godzinski2021Disentangling} select instrumental variables to achieve efficiency. In contrast, our approach is suited to select weather-related variables that display a strong correlation with pollutants and are not correlated with health directly.
Moreover, including the effects of many separate pollutants means that there are multiple endogenous regressors, a setting that AHC can address.
\iffalse
\subsection{Estimating the effect of immigration on native wages}
In migration economics, a large body of literature has focused on the estimation of the effect of a labor supply shock through immigration on labor market outcomes. Many studies try to estimate the short-term partial-equilibrium effects on native wages. These effects are usually expected to be negative, because immigrants increase the labor force, compete with natives and this results in a shift of the labor supply curve to the right, leading to a decrease in equilibrium wages.
The endogeneity problem in this case is that migrants self-select into regions with good labor market conditions. Therefore, if one runs a regression of wages on immigration using ordinary least squares, the coefficient is likely to be positively biased.
To solve this endogeneity problem, many studies use shift-share instruments which interact shares of migrants from several origin countries at a period in the past, with origin-country specific inflows of migrants. \citet*{Jaeger2018Shift} document the widespread use of this type of instrument in the migration literature.
\citet*{Goldsmith-Pinkham2020Bartik} show that validity of shares is sufficient for the validity of the entire instrument. Thus, the exclusion restriction in fact needs to hold for all origin countries. This set of assumptions is very strict, because defending the exclusion restriction for each single origin country requires perfect structural knowledge which often is unavailable.
Why should one worry about the validity of the shares? \citet*{Jaeger2018Shift} allow for dynamic adjustments of production factors to an inflow of migrants. These dynamic adjustments can lead to a correlation of the instrument and the outcome variable. Also, the initial shares might be correlated with unobserved shocks which are correlated over time and then correlated with the outcome. The dynamic adjustments are thought to be positive and thus to bias the coefficient of immigration on wages positively.
As a solution to the dynamic adjustment problem, \citet*{Jaeger2018Shift} propose to directly model the dynamic adjustments using a lag of change in migrant population additionally to the contemporaneous variable. This additional regressor is modeled with an additional shift-share instrument which uses an even longer base period in the construction of the instrument. In this way, the model has two endogenous regressors and the number of shares is twice the number of origin countries. This is a setting with two endogenous regressors and many potential IVs. Depending on the points in time available in the data, the regressions can use multiple lags to better mimic the adjustment processes and use even longer lags as instruments.
This approach might lead to a more credible exclusion restriction. Still, the exclusion restriction now must hold for an increased number of origin country shares. It is difficult to argue that taking mid-term dynamic adjustment into account, none of the origin-specific past shares have a direct effect. What if, for example the capital adjustments kicked off by Mexicans indeed leveled off after ten years, while those of Poles and Canadians were still ongoing after 30 years? In that case, even a low direct correlation might lead to a large bias of the estimates.
\fi
\section{Illustration of the IV Selection Procedure for $P=2$}\label{app:illustration}
Figure \ref{fig:Ward2} illustrates the procedure. Here, we have a situation with four IVs and two endogenous regressors. Instrument 1 is invalid, because it is directly correlated with the outcome, while the remaining three IVs (2, 3, 4) are related to the outcome only through the endogenous regressors and are hence valid.
In the first graph on the top left, we have plotted each just-identified estimate. The horizontal and vertical axes represent coefficient estimates of the effects of the first ($\beta_1$) and second regressor ($\beta_2$), respectively. Each point has been estimated with two IVs, in this case with IV pairs 1-2, 1-3, 1-4, 2-3, 2-4 and 3-4, because there are four candidate IVs.
In the initial step (0), each just-identified estimate forms its own cluster. In step 1, we join the estimates which are closest in terms of their Euclidean distance, e.g. those estimated with pairs 2-3 and 2-4. These two estimates now form one cluster, leaving five clusters. We recompute the distances to this new cluster and continue with this procedure until only one cluster is left, in the bottom-right graph. At each step, we evaluate the Sargan test using the IVs involved in the estimation of the largest cluster. When the p-value is larger than a certain threshold, say 0.05, we stop the procedure. Ideally, this happens at step 2 or 3 of the algorithm, because there the largest cluster (in orange) is formed only by valid IVs (2, 3 and 4). If this is the case, only the valid IVs are selected as valid.
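The steps above can be sketched numerically. The following Python fragment is a stylized version on simulated data, assuming five candidate IVs (one invalid) and two endogenous regressors; all coefficient values are hypothetical. It uses off-the-shelf Ward linkage and, for brevity, cuts the tree at two clusters instead of applying the Sargan-based stopping rule.

```python
import numpy as np
from itertools import combinations
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
n, J, P = 2000, 5, 2                  # sample size, candidate IVs, regressors

# Hypothetical DGP: IV 0 has a direct effect (invalid), IVs 1-4 are valid
Gamma = np.array([[1.0, 0.2],
                  [0.2, 1.0],
                  [0.8, -0.5],
                  [-0.4, 0.9],
                  [0.6, 0.7]])        # first-stage coefficients (J x P)
alpha = np.array([2.0, 0.0, 0.0, 0.0, 0.0])
Z = rng.normal(size=(n, J))
u = rng.normal(size=n)
D = Z @ Gamma + rng.normal(size=(n, P)) + np.outer(u, [0.5, 0.5])
y = D @ np.array([1.0, -1.0]) + Z @ alpha + u

# One just-identified IV estimate per P-tuple of instruments
tuples = list(combinations(range(J), P))
est = np.array([np.linalg.solve(Z[:, list(t)].T @ D, Z[:, list(t)].T @ y)
                for t in tuples])

# Ward agglomerative clustering of the estimates in R^P
tree = linkage(est, method="ward")
labels = fcluster(tree, t=2, criterion="maxclust")

# The largest cluster should collect the estimates based on valid IVs only
sizes = np.bincount(labels)
largest = sizes.argmax()
valid_ivs = set().union(*(tuples[i] for i in np.flatnonzero(labels == largest)))
```

The estimates built from valid tuples concentrate around the true coefficient vector, whereas each tuple containing the invalid IV converges to a different point, so the largest cluster recovers the valid IVs.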
\newgeometry{left=1cm, right=1cm, top=2.5cm, bottom=3cm}
\begin{figure}[H]
\caption{Illustration of the Algorithm with Two Regressors \label{fig:Ward2}}
\includegraphics[scale=0.9, trim=25 25 25 25, clip]{Step0.pdf}
\includegraphics[scale=0.9, trim=25 25 25 25, clip]{Step1.pdf}
\includegraphics[scale=0.9, trim=25 25 25 25, clip]{Step2.pdf}
\includegraphics[scale=0.9, trim=25 25 25 25, clip]{Step3.pdf}
\includegraphics[scale=0.9, trim=25 25 25 25, clip]{Step4.pdf}
\includegraphics[scale=0.9, trim=25 25 25 25, clip]{Step5.pdf}
\end{figure}
\restoregeometry
\section{Properties of just-identified estimates when $P \geq 1$}\label{app:PropertiesOfJustIdentified}
There are $\binom{J}{P}$ just-identified models. We write the corresponding just-identified estimators for $\bm{\beta}$ and $\bm{\alpha}$ analogously to the proof of Proposition A.5 in \citet*{Windmeijer2021Confidence} for the case $P = 1$. First, for an arbitrary $[j]$, partition the matrix $\mathbf{Z} = (\mathbf{Z}_1 \quad \mathbf{Z}_2)$, where $\mathbf{Z}_1$ is an $n \times P$ matrix containing the $[j]$-th columns of $\mathbf{Z}$, and $\mathbf{Z}_2$ is an $n \times (J-P)$ matrix containing the remaining columns of $\mathbf{Z}$. Let $\bm{\gamma} = (\bm{\gamma}_1^\prime \quad \bm{\gamma}_2^\prime)'$ be the equivalent partition of the matrix of first-stage coefficients. Let $\mathbf{Z^*} = [\hat{\mathbf{D}}\quad \mathbf{Z}_2]$; then $\mathbf{Z^*} = \mathbf{Z} \hat{\mathbf{H}}$, with
\[
\bm{\hat{H}} =
\left( {\begin{array}{cc}
\hat{\bm{\gamma}}_1 & 0 \\
\hat{\bm{\gamma}}_2 & \mathbf{I}_{J - P}\\
\end{array} } \right)
; \quad
\bm{\hat{H}^{-1}} =
\left( {\begin{array}{cc}
\hat{\bm{\gamma}}_1^{-1} & 0 \\
-\hat{\bm{\gamma}}_2\hat{\bm{\gamma}}_1^{-1} & \mathbf{I}_{J - P}\\
\end{array} } \right)
\]
\noindent The just-identified 2SLS estimators using $\mathbf{Z}_{[j]}$ as instruments and controlling for the remaining instruments can be written as
\[
(\hat{\bm{\beta}}_{[j]}' \quad \hat{\bm{\alpha}}_{[j]}')' = \hat{\mathbf{H}}^{-1}\hat{\bm{\Gamma}} = \hat{\mathbf{H}}^{-1}(\mathbf{Z}' \mathbf{Z})^{-1} \mathbf{Z}' (\mathbf{D}\bm{\beta} + \mathbf{Z} \bm{\alpha} + \mathbf{u}) = \hat{\mathbf{H}}^{-1}(\hat{\bm{\gamma}}\bm{\beta} + \bm{\alpha}+(\mathbf{Z}' \mathbf{Z})^{-1} \mathbf{Z}'\mathbf{u})
\]
\noindent Note that $\hat{\bm{\gamma}}\bm{\beta}+\bm{\alpha}$ is equal to
\[
\left(
\begin{array}{c}
\hat{\bm{\gamma}}_1 \bm{\beta} + \bm{\alpha}_1\\
\hat{\bm{\gamma}}_2 \bm{\beta} + \bm{\alpha}_2\\
\end{array}
\right) \text{.}
\]
\noindent By Assumption \ref{ass:Asymptotics}, we have the following asymptotics
\[
plim(\hat{\bm{\beta}}_{[j]}' \quad \hat{\bm{\alpha}}_{[j]}')'
=plim (\mathbf{\hat{H}^{-1}} \left(
\begin{array}{c}
\hat{\bm{\gamma}}_1 \bm{\beta} + \bm{\alpha}_1\\
\hat{\bm{\gamma}}_2 \bm{\beta} + \bm{\alpha}_2
\end{array}
\right))=\left(
\begin{array}{c}
\bm{\beta} + \bm{\gamma}_1^{-1}\bm{\alpha}_1\\
-\bm{\gamma}_2 \bm{\gamma}_1^{-1}\bm{\alpha}_1 + \bm{\alpha}_2
\end{array}
\right)
\]
\noindent We denote the $\binom{J}{P}$ $P \times 1$-dimensional inconsistency terms as $plim(\hat{\bm{\beta}}_{[j]} - \bm{\beta}) = \bm{\gamma}_{[j]}^{-1}\bm{\alpha}_{[j]} = \mathbf{q}$.
\section{$\mathcal{F}_0$ consists of valid IVs only}\label{app:gP}
Next, we show that the family with $\mathbf{q}=\mathbf{0}$ is composed only of valid IVs, i.e. those with $\bm{\alpha}_1=\mathbf{0}$. Let $\bm{\gamma}$, $\mathbf{Z}$ and $\bm{\alpha}$ be partitioned in the same way as in Appendix \ref{app:PropertiesOfJustIdentified}.
\begin{remark}
$\bm{\alpha}_1 = \mathbf{0}$ is necessary and sufficient for $\mathbf{q} = \mathbf{0}$.
\end{remark}
\paragraph{Proof:} First, prove sufficiency by direct proof: assume $\bm{\alpha}_1=\mathbf{0}$ holds. Then $\mathbf{q} = \bm{\gamma}_1^{-1} \bm{\alpha}_1=\mathbf{0}$ follows directly.\\
Second, prove necessity by contraposition: assume $\bm{\alpha}_1 \neq \mathbf{0}$; then $\bm{\gamma}_1^{-1}\bm{\alpha}_1 \neq \mathbf{0}$. The latter holds because otherwise $\bm{\gamma}_1^{-1}$ would map a non-zero vector to zero and hence be singular, so that $(\bm{\gamma}_1^{-1})^{-1} = \bm{\gamma}_1$ would not exist, contradicting Assumption 1.a. $\qed$
\noindent
This also implies that $\mathcal{F}_0$ consists of valid IVs only and that all combinations $[j]$ with $\bm{\gamma}_1^{-1}\bm{\alpha}_1=\mathbf{0}$ are elements of $\mathcal{F}_0$.
The following remark is then immediate:
\begin{remark}
$|\mathcal{F}_0| = \binom{g}{P}$.
\end{remark}
\iffalse
\section{One family can consist of different vectors $\alpha$}\label{app:onefamily}
We have shown that the number of valid IVs defines the size of the family $|\mathcal{F}_0|$. However, this relation between $g$ and $|\mathcal{F}_0|$ is available only when $\bm{\alpha}_1=\mathbf{0}$.
\begin{remark}
The function $f(\bm{\alpha}) = \bm{\gamma}_1^{-1}\bm{\alpha} = \mathbf{q}$ is non-injective.
\end{remark}
\paragraph{Proof:} Proof by counter-example: Show that there is more than one element in the domain which leads to the same image, i.e.
\[
\exists (\bm{\gamma}_1,\bm{\alpha}), (\bm{\gamma}_1',\bm{\alpha}') \quad s.t. \quad \bm{(\alpha}, \bm{\gamma}_1) \neq (\bm{\alpha}_1', \bm{\gamma}_1') \text{ with } f(\bm{\alpha}, \bm{\gamma}_1) = \bm{\gamma}_1^{-1}\bm{\alpha} = \bm{\gamma}_1'^{-1}\bm{\alpha}' = f(\bm{\alpha}', \bm{\gamma}_1')
\]
Define $ (\bm{\alpha}', \bm{\gamma}_1') = (\bm{\alpha} + \bm{\delta}_{\alpha}, \bm{\gamma}_1 + \bm{\delta}_{\gamma_1})$. Then, $\mathbf{c}=\bm{\hat{\gamma_1}}^{-1}\bm{\alpha}$ and $\mathbf{c'}=\bm{\hat{\gamma_1}}^{'-1}\bm{\alpha'}$. Assume $\mathbf{c}=\mathbf{c}'$.
\begin{align}
\hat{\bm{\gamma}}_1\mathbf{c} & = \bm{\alpha}\\
\hat{\bm{\gamma}}_1\mathbf{c} + \bm{\delta}_{\gamma} \mathbf{c} - \bm{\delta}_{\alpha} &= \bm{\alpha}
\end{align}
Therefore
\begin{align}
\bm{\delta}_{\gamma} \mathbf{c} & = \bm{\delta}_{\alpha} \\
\end{align}
This means that all $(\bm{\alpha}, \bm{\gamma}) \neq (\bm{\alpha'}, \bm{\gamma'})$ s.t. $\bm{\delta}_{\gamma} \mathbf{c} = \bm{\delta}_{\alpha}$ lead to the same $\mathbf{q}$. $\qed$
Hence, even though the number of IVs with the same value $\alpha_j$ is smaller than $\binom{g}{P}$, the largest family might still consist of combinations of invalid IVs, because the first-stage coefficient matrix also determines $\mathbf{q}$.
\fi
\section{Oracle Properties}\label{app:oracle}
This section gives proofs for Lemma \ref{lemma:AssignToFamily1} and Theorems \ref{th:ConsistentSelection} and \ref{th:ConsistentSelectionLATE}. All proofs apply for the general case that $P \geq 1$.
\subsection{Proof of Lemma 1}\label{app:ProofOfLemma1}
Overall, we want to show that the probability that a cluster $\mathcal{S}_j$ containing elements from a group $\mathcal{S}_{0q}$ of the true underlying partition is merged with a cluster containing elements from the same group $\mathcal{S}_{0q}$ goes to 1.
\noindent
The proof is structured as follows:
\begin{enumerate}
\item We note that the means of clusters which are associated with elements from the same family converge to the same vector as each estimator in the cluster.
\item Merging two clusters which are associated only with elements from the same family is equivalent to the two clusters having minimal distance.
\item We show that clusters which are associated with members of the same family have distance zero, and clusters which are associated with elements from different families have non-zero distance, with probability going to one.
\end{enumerate}
\begin{proof}
\iffalse
$$\{[j], \, [k]\} \in \mathcal{F}_A \, ,\quad A \in \mathbb{R}^P$$
$$\{[l]\} \in \mathcal{F}_B \, ,\quad B \in \mathbb{R}^P, \, B \neq A$$
Under Remark 1:
$$plim(\hat{\beta}_{[j]}) = A$$
$$plim(\hat{\beta}_{[k]}) = A$$
$$plim(\hat{\beta}_{[l]}) = B$$
$$\bar{\beta}_{q} = \frac{\sum\limits_{[j] \in \hat{\theta}_{k}} \hat{\beta}_{[j]}}{|\hat{\theta}_{k}|} \text{ where }\hat{\theta}_{k} \subseteq \mathcal{F}_q$$
and hence $$plim(\bar{\beta}_{q}) = \mathbf{q}$$
\fi
\textit{Part 1}:
Consider
\begin{align*}
\begin{split}
[j], [k] \in \mathcal{F}_{q} \, ,\quad \mathbf{q} \in \mathbb{R}^P\\
[l] \in \mathcal{F}_r \, ,\quad \mathbf{r} \in \mathbb{R}^P, \quad \mathbf{r} \neq \mathbf{q}\\
\end{split}
\end{align*}
Under Assumptions 1 - 5:
\begin{align}
\begin{split}
plim(\hat{\bm{\beta}}_{[j]}) = plim(\hat{\bm{\beta}}_{[k]}) = \mathbf{q}\\
plim(\hat{\bm{\beta}}_{[l]}) = \mathbf{r}
\end{split}
\end{align}
Let $\mathcal{S}_j$ and $\mathcal{S}_k$ be clusters associated with elements from the same family: $\mathcal{S}_j$, $\mathcal{S}_k \subset \mathcal{S}_{0q}$ and $\mathcal{S}_{l} \subset \mathcal{S}_{0r}$.
\begin{equation}\label{eq:conv}
plim \, \bar{\mathcal{S}}_j = plim \, \frac{\sum\limits_{\hat{\bm{\beta}}_{[j]} \in \mathcal{S}_{j}} \hat{\bm{\beta}}_{[j]}}{|\mathcal{S}_{j}|} = \frac{|\mathcal{S}_{j}|\mathbf{q}}{|\mathcal{S}_{j}|} \text{ where }\mathcal{S}_{j} \subset \mathcal{S}_{0q}
\end{equation}
and hence $$plim(\bar{\mathcal{S}}_{j}) = \mathbf{q}\text{.}$$
\noindent
\textit{Part 2:}
Consider the case where the algorithm decides whether to merge two clusters, $\mathcal{S}_j$ and $\mathcal{S}_k$, containing estimators using combinations from the same family, or to merge two clusters from different underlying groups, $\mathcal{S}_j$ and $\mathcal{S}_l$. The two clusters which are closest in terms of their weighted Euclidean distance are merged first. Hence, we need to consider the distances between $\mathcal{S}_j$ and $\mathcal{S}_k$, $\mathcal{S}_j$ and $\mathcal{S}_l$, as well as $\mathcal{S}_k$ and $\mathcal{S}_l$.
$\mathcal{S}_j$ is merged with a cluster containing elements of its own group $\mathcal{S}_{0q}$ iff
$\frac{|\mathcal{S}_j||\mathcal{S}_k|}{|\mathcal{S}_j| + |\mathcal{S}_k|}||\bar{\mathcal{S}}_j - \bar{\mathcal{S}}_k||^2 < \frac{|\mathcal{S}_j||\mathcal{S}_l|}{|\mathcal{S}_j| + |\mathcal{S}_l|}||\bar{\mathcal{S}}_j - \bar{\mathcal{S}}_l||^2$. The following two statements are hence equivalent:
\begin{equation*}
lim \, P (\mathcal{S}_j \cup \mathcal{S}_k = \mathcal{S}_{jk} \subseteq \mathcal{S}_{0q}) = 1
\end{equation*}
\begin{equation}\label{eq:Lemma1ToShow}
\Leftrightarrow \quad lim \, P(\frac{|\mathcal{S}_j||\mathcal{S}_k|}{|\mathcal{S}_j| + |\mathcal{S}_k|}||\bar{\mathcal{S}}_j - \bar{\mathcal{S}}_k||^2 < \frac{|\mathcal{S}_j||\mathcal{S}_l|}{|\mathcal{S}_j| + |\mathcal{S}_l|}||\bar{\mathcal{S}}_j - \bar{\mathcal{S}}_l||^2) = 1
\end{equation}
where $\mathcal{S}_{jk}$ is the new merged cluster.
\textit{Part 3}: We want to prove equation \eqref{eq:Lemma1ToShow} in the following. We can then prove $lim \, P(\frac{|\mathcal{S}_j||\mathcal{S}_k|}{|\mathcal{S}_j| + |\mathcal{S}_k|}||\bar{\mathcal{S}}_k - \bar{\mathcal{S}}_j||^2 < \frac{|\mathcal{S}_k||\mathcal{S}_l|}{|\mathcal{S}_k| + |\mathcal{S}_l|}||\bar{\mathcal{S}}_k - \bar{\mathcal{S}}_l||^2) = 1$ by changing the subscripts.
First, define $a = \frac{|\mathcal{S}_j||\mathcal{S}_k|}{|\mathcal{S}_j| + |\mathcal{S}_k|}||\bar{\mathcal{S}}_j - \bar{\mathcal{S}}_k||^2$ , $b=\frac{|\mathcal{S}_j||\mathcal{S}_l|}{|\mathcal{S}_j| + |\mathcal{S}_l|}||\bar{\mathcal{S}}_j - \bar{\mathcal{S}}_l||^2$ and $c=\frac{|\mathcal{S}_j||\mathcal{S}_l|}{|\mathcal{S}_j| + |\mathcal{S}_l|} (\mathbf{q}-\mathbf{r})' (\mathbf{q}-\mathbf{r})$.
Under \eqref{eq:conv}
\begin{align*}
\begin{split}
plim(a)& = 0\\
plim(b)& = c
\end{split}
\end{align*}
To show: $\underset{n \rightarrow \infty}{\lim}\, P (a<b)=1$.
Proof by contradiction: we show that $\underset{n \rightarrow \infty}{\lim}\, P(b<a)\neq0$ leads to a contradiction. In the following, $\lim$ denotes $\underset{n \rightarrow \infty}{\lim}$.
\noindent
By the definitions of convergence in probability, it follows that
\begin{equation}\label{eq:plim-a}
\lim \, P(a < \varepsilon) = 1
\end{equation}
and
\begin{equation}\label{eq:plim-b-c}
\lim \, P(|b-c|< \varepsilon)=1 \text{.}
\end{equation}
for any $\varepsilon > 0$.
Therefore, $\lim \, P(b<a)\neq0$ and $\lim \, P(a < \varepsilon)=1$ imply $\lim \, P(b<\varepsilon)\neq0$.
Now, consider $\varepsilon < \frac{1}{2}c$.
Then,
\begin{equation}\label{eq:lemmaproofcontradict}
\lim \, P(b<\frac{1}{2}c)\neq0
\end{equation}
Because of the absolute value in $|b-c|$, consider two cases, $b<c$ and $b \geq c$.
If $b<c$: $\lim \, P(c-b< \frac{1}{2}c)=1 \, \Leftrightarrow \, \lim \, P(c-b > \frac{1}{2}c)=0$.
$\Rightarrow \, \lim \, P(b<\frac{1}{2}c)=0$, a contradiction with \eqref{eq:lemmaproofcontradict}.
If $b\geq c$: $a<\varepsilon<\frac{1}{2}c<c \leq b$ and hence
$\lim \, P(a<b)=1 \, \Leftrightarrow \, \lim \, P(b\leq a)=0$, again a contradiction. $\qed$
\end{proof}
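The lemma's conclusion can also be illustrated numerically: with $a$ converging in probability to $0$ and $b$ converging in probability to $c>0$, the probability of $b\leq a$ vanishes as $n$ grows. A minimal Monte Carlo sketch (sample means stand in for the two sequences; all names and distributions are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def prob_b_leq_a(n, c=1.0, reps=2000):
    """Estimate P(b <= a) where a = |mean of n N(0,1) draws| -> 0 in
    probability and b = mean of n N(c,1) draws -> c > 0 in probability."""
    a = np.abs(rng.normal(0.0, 1.0, size=(reps, n)).mean(axis=1))
    b = rng.normal(c, 1.0, size=(reps, n)).mean(axis=1)
    return float(np.mean(b <= a))

# P(b <= a) shrinks toward zero as n grows, as the lemma asserts
for n in (5, 50, 1000):
    print(n, prob_b_leq_a(n))
```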
\subsection{Proof of Theorem \ref{th:ConsistentSelection}}
\begin{proof}
The proof of Theorem \ref{th:ConsistentSelection} is structured as follows:
\begin{enumerate}
\item We show that asymptotically the selection path generated by Algorithm \ref{algo:ward} contains $\mathcal{F}_0$, the family formed by all the valid instrumental variables.
\item We show that Algorithm \ref{algo:Sargan} can recover $\mathcal{F}_0$ from the selection path from Algorithm \ref{algo:ward}.
\end{enumerate}
\textit{Part 1} follows directly from Corollary \ref{coro:GroupStructureRetrieved}.
\noindent \textit{Part 2}: Firstly, we establish the properties of the Sargan statistic.
The following two equations can also be found in WLHB (p.~10). Let $\mathcal{I}$ be the true set of invalid instruments and $\mathcal{V}$ the true set of valid instruments. The oracle model is
\begin{equation*}
\mathbf{y} = \mathbf{D}\bm{\beta} + \mathbf{Z}_{\mathcal{I}} \bm{\alpha}_{\mathcal{I}} + \mathbf{u} = \mathbf{X}_\mathcal{I} \bm{\theta}_\mathcal{I} + \mathbf{u}
\end{equation*}
with $\mathbf{X}_\mathcal{I} = \left[\mathbf{D} \quad \mathbf{Z}_{\mathcal{I}}\right]$ and $\bm{\theta}_{\mathcal{I}} = \left[\bm{\beta}' \quad \bm{\alpha}_{\mathcal{I}}'\right]^\prime$. The Sargan test statistic is given by
\begin{equation}
S(\hat{\bm{\theta}}_{\mathcal{I}})=\frac{\hat{\mathbf{u}}(\hat{\bm{\theta}}_{\mathcal{I}})^\prime\mathbf{Z}(\mathbf{Z}^\prime\mathbf{Z})^{-1}\mathbf{Z}^\prime\hat{\mathbf{u}}(\hat{\bm{\theta}}_{\mathcal{I}})}{\hat{\mathbf{u}}(\hat{\bm{\theta}}_{\mathcal{I}})^\prime\hat{\mathbf{u}}(\hat{\bm{\theta}}_{\mathcal{I}})/n}
\end{equation}
where $\hat{\mathbf{u}}(\hat{\bm{\theta}}_{\mathcal{I}})=\mathbf{y} - \mathbf{X}_{\mathcal{I}}\hat{\bm{\theta}}_{\mathcal{I}}$, with $\hat{\bm{\theta}}_{\mathcal{I}}$ the 2SLS estimator of $\bm{\theta}_{\mathcal{I}}$.
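To make the statistic concrete, the following is a minimal numerical sketch of $S(\hat{\bm{\theta}}_{\mathcal{I}})$. This is not the authors' code: the data-generating process, coefficient values, and function names are made up for illustration.

```python
import numpy as np

def sargan(y, D, Z_valid, Z_invalid):
    """Sargan statistic for y = D beta + Z_invalid alpha + u, estimated by
    2SLS with the full candidate instrument matrix Z = [Z_valid Z_invalid];
    the invalid instruments enter the model as regressors."""
    n = len(y)
    Z = np.column_stack([Z_valid, Z_invalid])  # all J candidate instruments
    X = np.column_stack([D, Z_invalid])        # X_I = [D  Z_I]
    ZZ = Z.T @ Z
    Xhat = Z @ np.linalg.solve(ZZ, Z.T @ X)    # first-stage fitted values
    theta = np.linalg.solve(Xhat.T @ X, Xhat.T @ y)  # 2SLS estimator
    u = y - X @ theta                          # 2SLS residuals
    Zu = Z.T @ u
    return (Zu @ np.linalg.solve(ZZ, Zu)) / (u @ u / n)

# toy data: 5 valid IVs, 1 invalid IV (direct effect 0.8), one regressor
rng = np.random.default_rng(1)
n = 5000
Zv = rng.normal(size=(n, 5))
Zi = rng.normal(size=(n, 1))
eps = rng.normal(size=n)
D = Zv @ np.full(5, 0.5) + 0.5 * Zi[:, 0] + eps
u = 0.5 * eps + rng.normal(size=n)             # u correlated with eps: D endogenous
y = 1.0 * D + 0.8 * Zi[:, 0] + u
S = sargan(y, D, Zv, Zi)   # oracle model: S ~ chi-square with 6 - 1 - 1 = 4 df
```

Treating the invalid instrument as if it were valid instead makes the statistic grow at rate $n$, which is the behaviour the downward testing procedure exploits.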
\noindent Let $\hat{\mathcal{I}}$ be the estimated set of invalid instruments and $\hat{\mathcal{V}}$ be the estimated set of valid instruments where $\hat{\mathcal{I}} = \mathcal{J} \setminus \hat{\mathcal{V}}$. Following Proposition $3.2$ in \cite{windmeijer2019two}, the Sargan statistic has the following properties:
\begin{property}Properties of the Sargan statistic\label{prop:Sargan}
\begin{enumerate}
\item If all of the $\binom{|\hat{\mathcal{V}}|}{P}$ combinations of instruments from $\hat{\mathcal{V}}$ belong to the same family, then: $S(\hat{\bm{\theta}}_{\hat{\mathcal{I}}}) \overset{d}{\rightarrow} \chi^2_{|\mathcal{J}| - |\hat{\mathcal{I}}| - P}$.
\item If the $\binom{|\hat{\mathcal{V}}|}{P}$ combinations of instruments from $\hat{\mathcal{V}}$ belong to a mixture of families, then: $S(\hat{\bm{\theta}}_{\hat{\mathcal{I}}})=O_p(n)$.
\end{enumerate}
\end{property}
\noindent
With these properties we can show that the downward testing procedure described in Algorithm \ref{algo:Sargan} selects valid instruments consistently with $\xi_{n, J - |\hat{\mathcal{I}}| - P} \rightarrow \infty \text{ for } n \rightarrow \infty$ and $\xi_{n, J - |\hat{\mathcal{I}}| - P}=o(n)$. Let $K$ be the number of clusters formed in Algorithm \ref{algo:ward} at a given step, e.g.\ at Step 1, $K = \binom{J}{P}$, at Step 2, $K = \binom{J}{P}-1$, etc. Let the true number of families be $Q$. Consider applying the Sargan test to the model selected by the largest cluster at each step under the following scenarios:
\begin{enumerate}
\item $1 \leq K < Q$. For each of these steps, the largest cluster is either associated with a mixture of different families, or with one family.
\begin{itemize}
\item Consider the case where the largest cluster is associated with a mixture of different families. Then by Property \ref{prop:Sargan} and $\xi_{n, J - |\hat{\mathcal{I}}| - P}=o(n)$, we have
\begin{equation*}
\underset{n \rightarrow \infty}{lim} P (S(\hat{\bm{\theta}}_{\hat{\mathcal{I}}}) < \xi_{n, J - |\hat{\mathcal{I}}| - P}) = 0 \text{.}
\end{equation*}
In this case, asymptotically the Sargan test would be rejected and the downward testing procedure moves to the next step.
\item Consider the case where the largest cluster is associated with one family. This family must then be $\mathcal{F}_0$ since, by Assumption \ref{ass:familyplurality}, $\mathcal{F}_0$ is the largest family among all $Q$ families. Then, by Property \ref{prop:Sargan} and $\xi_{n, J - |\hat{\mathcal{I}}| - P} \rightarrow \infty$, for the Sargan test we have
\begin{equation}\label{eq:sarganpass}
\underset{n \rightarrow \infty}{lim} P (S(\hat{\bm{\theta}}_{\hat{\mathcal{I}}}) < \xi_{n, J - |\hat{\mathcal{I}}| - P}) = 1 \text{,}
\end{equation}
indicating that $\mathcal{V}$ would be selected as the set of valid instruments asymptotically.
\end{itemize}
\item $K = Q$. By Corollary \ref{coro:GroupStructureRetrieved} we know that the $K$ clusters are associated with the $Q$ families respectively, and by Assumption \ref{ass:familyplurality}, the cluster associated with $\mathcal{F}_0$ is the largest cluster. Then applying the Sargan test at this step would be testing all the valid instruments, hence we also have Equation \eqref{eq:sarganpass} and Algorithm \ref{algo:Sargan} selects $\mathcal{V}$ as the set of valid instruments.
\end{enumerate}
To summarize: asymptotically, at steps $1 \leq K < Q$, Algorithm \ref{algo:Sargan} stops only when $\mathcal{F}_0$ forms the largest cluster, in which case it selects the oracle model; otherwise it moves on to step $K = Q$ and selects the oracle model there.
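The stopping logic just described can be sketched as a short routine. All numbers here are illustrative stand-ins (the Sargan values, the path, and the critical-value rule are made up); the real Algorithm \ref{algo:Sargan} operates on the estimated clusters:

```python
def downward_testing(path, sargan_stat, crit, J, P=1):
    """Traverse the candidate valid sets on the selection path and stop at
    the first one whose Sargan statistic is below xi_{n, J - |I_hat| - P}."""
    for valid in path:
        invalid = J - len(valid)              # |I_hat|
        df = J - invalid - P                  # degrees of freedom
        if df > 0 and sargan_stat(valid) < crit(df):
            return valid                      # selected set of valid IVs
    return None                               # no candidate passed the test

# stand-in numbers: 6 IVs, {1,2,3,4} the set associated with F_0
stats = {frozenset({1, 2, 3, 4, 5, 6}): 950.0,   # mixture of families: O_p(n)
         frozenset({1, 2, 3, 4}): 2.1}           # F_0 only: chi-square sized
path = [frozenset({1, 2, 3, 4, 5, 6}), frozenset({1, 2, 3, 4})]
chosen = downward_testing(path, lambda V: stats[V], crit=lambda df: 25.0, J=6)
```

The mixed set is rejected (its statistic grows with $n$ while the critical value is $o(n)$), so the procedure stops at the set of valid instruments.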
\noindent
Combining \textit{Part 1} and \textit{Part 2} proves Theorem \ref{th:ConsistentSelection}.
\end{proof}
\subsection{Proof of Theorem \ref{th:ConsistentSelectionLATE}}
The proof of Theorem \ref{th:ConsistentSelectionLATE} proceeds in the same way as that of Theorem \ref{th:ConsistentSelection}.
\begin{proof}
The proof of Theorem \ref{th:ConsistentSelectionLATE} is structured as follows:
\begin{enumerate}
\item We note that asymptotically the selection path generated by Algorithm \ref{algo:ward} contains all groups $\mathcal{G}_q$.
\item We show that Algorithm \ref{algo:Sargan} can recover all $\mathcal{G}_q$ from the selection path from Algorithm \ref{algo:ward}.
\end{enumerate}
\textit{Part 1} again follows directly from Corollary \ref{coro:GroupStructureRetrieved}.
\noindent \textit{Part 2}: Firstly, we establish the properties of the Sargan statistic.
\begin{property}Properties of the Sargan statistic\label{prop:SarganLATE}
\begin{enumerate}
\item For all combinations of instruments from $\hat{\mathcal{G}}_k$, if their just-identified estimators are associated with the same group, then: $S(\hat{\bm{\theta}}_{\hat{\mathcal{G}}_k}) \overset{d}{\rightarrow} \chi^2_{J - |\hat{\mathcal{G}}_k| - P}$
\item For all combinations of instruments from $\hat{\mathcal{G}}_k$, if their just-identified estimators are associated with a mixture of groups, then: $S(\hat{\bm{\theta}}_{\hat{\mathcal{G}}_k})=O_p(n)$.
\end{enumerate}
\end{property}
\noindent
As before, $\xi_{n, J - |\hat{\mathcal{I}}| - P} \rightarrow \infty \text{ for } n \rightarrow \infty \text{, and } \xi_{n, J - |\hat{\mathcal{I}}| - P}=o(n)\text{.}$ Consider applying the Sargan test to each cluster separately at each step under the following scenarios:
\begin{enumerate}
\item $1 \leq K < Q$, i.e. the number of clusters is smaller than the number of groups. For each of these steps, at least one cluster is associated with a mixture of different groups.
When a cluster is formed by a mixture of different groups, then by Property \ref{prop:SarganLATE} and $\xi_{n, J - |\hat{\mathcal{G}}_k| - P}=o(n)$, we have
\begin{equation}\label{eq:sarganpassLATE}
\underset{n \rightarrow \infty}{lim} P (S(\hat{\bm{\theta}}_{\hat{\mathcal{G}}_k}) < \xi_{n, J - |\hat{\mathcal{G}}_k| - P}) = 0 \text{.}
\end{equation}
In this case, asymptotically at least one of the Sargan tests would be rejected and the downward testing procedure moves to the next step.
\item $K = Q$. By Corollary \ref{coro:GroupStructureRetrieved} we know that the $K$ clusters are formed by the $Q$ groups respectively and $\hat{\mathcal{G}}_k = \mathcal{G}_q$ for all $q$. Then for each of the $K$ tests we have
\begin{equation}
S(\hat{\bm{\theta}}_{\hat{\mathcal{G}}_k}) = S(\hat{\bm{\theta}}_{\mathcal{G}_q})\text{.}
\end{equation}
By Property \ref{prop:SarganLATE} and $\xi_{n, J - |\mathcal{G}_q| - P} \rightarrow \infty$, we have
\begin{equation*}
\underset{n \rightarrow \infty}{lim} P (S(\hat{\bm{\theta}}_{\mathcal{G}_q}) < \xi_{n, J - |\mathcal{G}_q| - P}) = 1 \text{.}
\end{equation*}
Applying the Sargan tests to each cluster at this step amounts to testing IVs from the same group each time, so none of the tests is rejected asymptotically and Algorithm \ref{algo:Sargan} stops.
\end{enumerate}
To summarize: asymptotically, at steps $1 \leq K < Q$, Algorithm \ref{algo:Sargan} does not stop; it then moves on to step $K = Q$ and selects the oracle model.
\noindent
Combining \textit{Part 1} and \textit{Part 2} proves Theorem \ref{th:ConsistentSelectionLATE}.
\end{proof}
\end{appendices}
\end{document}
\section{Conclusion}\label{sec:Conclusion}
We have proposed a novel method to select valid instruments. This method can be particularly helpful in cases when the number of candidate instruments is large and tests of overidentifying restrictions reject.
The method is applied to the estimation of the effect of immigration on wages in the US.
The method can also be easily applied to any other overidentified setting. Another suitable example is Mendelian Randomization, the use of instrumental variables in epidemiology.
The advantages of our method are that it extends straightforwardly to the setting with multiple endogenous regressors and that, even without a pre-selection step, it can deal with weak instruments. In fact, one might also use our method directly to select strong IVs. We also discuss a setting with heterogeneous treatment effects. It would be worth investigating how to retrieve causal effects when there are richer forms of heterogeneity.
Another way to improve the method would be to account for the variance of each just-identified estimator in the selection algorithm, and to apply it in nonlinear models. We leave these as directions for future research.
\section{Extensions}\label{sec:Extensions}
In this section, we propose extensions of the method to a setting with multiple endogenous regressors and discuss the performance of our method in the presence of weak instruments as compared with the HT and CI methods. We also discuss a setting with heterogeneous treatment effects.
\subsection{Multiple Endogenous Regressors}
One shortcoming of previous methods that try to select invalid instruments is that they only allow for one endogenous regressor. Therefore, in this section we show how our method can be naturally extended to select invalid instruments when $P > 1$. First of all, the inputs of our method, the just-identified estimators, are now estimated from all the $P$-combinations of $\mathbf{z}_1, ..., \mathbf{z}_J$. Hence we now have $\binom{J}{P}$ instead of $J$ just-identified estimators. Let $[j]$ be a set of indices of any $P$ instruments such that the model is exactly identified with these $P$ instruments. Let $\mathbf{Z}_{[j]}$ denote the corresponding $n \times P$ instrument matrix. To guarantee that all the $\binom{J}{P}$ just-identified estimators exist, we modify Assumption \ref{ass:FirstStageSingle} as follows:
\setcounter{referredassumption}{1}
\begin{intassumption}\label{ass:FirstStageMulti} Existence of just-identified estimators\\
For all possible values of $[j]$, let $\bm{\gamma}_{[j]}$ be the $P \times P$ matrix formed by the $k$th rows of $\bm{\gamma}$ for all $k \in [j]$. Then we assume $$rank(\bm{\gamma}_{[j]})=P.$$
\end{intassumption}
\noindent
The plurality assumption also needs modification for $P > 1$. For $P = 1$, Assumption \ref{ass:plurality} states that the valid instruments form the largest group, where instruments form a group if their just-identified estimators converge to the same value. If we find the largest set of just-identified estimators that converge to the same value, then this set is automatically the largest group of instruments as each just-identified estimator is estimated by a single instrument. However, when $P > 1$, each just-identified estimator is estimated by multiple instruments, hence the equivalence between the largest set of just-identified estimators and the largest group of instruments may not hold. In this case, we modify the plurality rule so it is based on the combinations of $P$ instruments instead of individual instruments. The modification starts with revisiting the asymptotics of the just-identified estimators for $P > 1$. The technical details can be found in Appendix \ref{app:PropertiesOfJustIdentified}.
\noindent
Let $\bm{ \hat{\beta}}_{[j]}$ be the just-identified 2SLS estimator estimated with $\mathbf{Z}_{[j]}$, then analogously to the case with one regressor, we have the following property of just-identified estimates:
\begin{property}Properties of just-identified estimates with $P\geq1$\label{prop:JustIdentified2}\\
Under Assumptions \ref{ass:FirstStageMulti} to \ref{ass:ZwNormal} it holds that
$$plim \, \hat{\bm{\beta}}_{[j]} = \bm{\beta}_0 + \bm{\gamma}_{[j]}^{-1}\bm{\alpha}_{[j]} = \bm{\beta}_0 + \mathbf{q}$$
\end{property}
\noindent
where the inconsistency term is $plim \, \hat{\bm{\beta}}_{[j]} - \bm{\beta}_0 = \bm{\gamma}_{[j]}^{-1}\bm{\alpha}_{[j]} = \mathbf{q}$, and there are $\binom{J}{P}$ inconsistency terms $\mathbf{q}$. Note that $\mathbf{q}$ is a $P \times 1$ vector.
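This limit can be checked by simulation: the just-identified 2SLS estimator $(\mathbf{Z}_{[j]}'\mathbf{D})^{-1}\mathbf{Z}_{[j]}'\mathbf{y}$ settles near $\bm{\beta}_0 + \bm{\gamma}_{[j]}^{-1}\bm{\alpha}_{[j]}$. A sketch with $P=2$, standard-normal instruments, and made-up coefficient values (all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n, P = 200_000, 2
beta0 = np.array([1.0, -0.5])
gamma_j = np.array([[1.0, 0.3],    # P x P first-stage matrix gamma_[j]
                    [0.2, 1.1]])
alpha_j = np.array([0.4, 0.0])     # direct effects of the P instruments

Z = rng.normal(size=(n, P))                    # instruments, identity covariance
eps = rng.normal(size=(n, P))
D = Z @ gamma_j + eps                          # first stage
u = Z @ alpha_j + 0.5 * eps[:, 0] + rng.normal(size=n)  # endogenous error
y = D @ beta0 + u

beta_hat = np.linalg.solve(Z.T @ D, Z.T @ y)   # just-identified 2SLS
q = np.linalg.solve(gamma_j, alpha_j)          # predicted inconsistency
```

With this sample size, `beta_hat` is close to `beta0 + q` rather than to `beta0`.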
Because, when $P>1$, each IV is no longer associated with a single scalar $q$, we introduce the concept of a \textit{family}:
\begin{definition}
A family is a set of just-identifying IV combinations that is associated with just-identified estimators which converge to the same value.
$$\mathcal{F}_{q} = \{[j]: plim \, \hat{\bm{\beta}}_{[j]}=\bm{\beta}_0 + \mathbf{q}\}$$
\end{definition}
\noindent
Note that each element of a family is itself a set of $P$ IVs, such that a model is just-identified. By definition, the family that consists of IV combinations which generate consistent estimators is
$$\mathcal{F}_0 = \{[j]: \mathbf{q}=\mathbf{0}\}.$$
Let there be $Q$ families. Note that when $P=1$ a group of IVs automatically is a family.
Analogously to Assumption \ref{ass:plurality}, we assume that $\mathcal{F}_0$ is the largest family:
\begin{equation*}
|\mathcal{F}_0| > \underset{\mathbf{q} \neq \mathbf{0}}{max} |\mathcal{F}_q|
\end{equation*}
We show in Appendix \ref{app:gP} that a combination of IVs is an element of $\mathcal{F}_0$ if and only if all of the $P$ IVs in the combination are in fact valid. This means that the family of valid IVs consists of all combinations that use $P$ IVs from the set of valid instruments, $\mathcal{V}$, and hence $|\mathcal{F}_0| = \binom{g}{P}$. Therefore, the plurality assumption can be modified to
\setcounter{referredassumption}{6}
\setcounter{intassumption}{0}
\begin{intassumption}\label{ass:familyplurality} Family plurality\\
$$ \binom{g}{P} > \underset{\mathbf{q}\neq\mathbf{0}}{max} |\mathcal{F}_q|$$
\end{intassumption}
\noindent
The inconsistency term of elements in $\mathcal{F}_{\mathbf{q}}$ with $\mathbf{q} \neq \mathbf{0}$ depends on the first-stage coefficient vectors and hence there is no direct relation from $\bm{\alpha}_{[j]}$ to $\mathbf{q}$.
One way in which this new plurality assumption can be fulfilled is when the largest set of IVs has zero direct effects, $\alpha_j=0$, and the vectors $\bm{\gamma}_{[j]}^{-1} \bm{\alpha}_{[j]}$ generated by $P$-sets with $\bm{\alpha}_{[j]}\neq \bm{0}$ are sufficiently dispersed. Strictly speaking, the family plurality assumption can also hold when the largest group of IVs has some common direct effect $\alpha_j=c \neq 0$: if the dispersion of the $\bm{\gamma}_{[j]}^{-1} \bm{\alpha}_{[j]}$ is large enough, the largest family will still be constituted by valid IVs only.
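The family structure and the count $|\mathcal{F}_0|=\binom{g}{P}$ can be made concrete with a small enumeration. The first-stage coefficients below are made up for illustration; this is a sketch, not the paper's implementation:

```python
from itertools import combinations
import numpy as np

# toy setting: J = 5 candidate IVs, P = 2, IVs 0-3 valid, IV 4 invalid
J, P = 5, 2
gamma = np.array([[1.0, 0.2],
                  [0.3, 1.1],
                  [0.9, -0.4],
                  [-0.2, 1.3],
                  [0.7, 0.5]])            # J x P first-stage coefficients
alpha = np.array([0.0, 0.0, 0.0, 0.0, 0.8])

families = {}
for comb in combinations(range(J), P):    # every just-identifying set [j]
    q = np.linalg.solve(gamma[list(comb), :], alpha[list(comb)])
    key = tuple(np.round(q, 10))          # combinations sharing q share a family
    families.setdefault(key, []).append(comb)

F0 = families[(0.0, 0.0)]                 # combinations of valid IVs only
largest_other = max(len(v) for k, v in families.items() if k != (0.0, 0.0))
```

Here every combination containing the invalid IV yields its own distinct $\mathbf{q}$, so family plurality holds with room to spare.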
The procedure to estimate $\mathcal{V}$ is analogous to the one in the preceding section (see Appendix \ref{app:illustration} for an illustration), except that now we need to account for the presence of families.
Firstly, for a given number of clusters, $K$, a unique cluster is selected by the algorithm. This works as follows: the algorithm selects the cluster which contains the largest number of point estimates, $\hat{\bm{\beta}}_{[j]}$, as potentially the cluster associated with the valid instruments at $K$. Again, this largest cluster is $\hat{\mathcal{S}}_m(K)$.
$$\hat{\mathcal{S}}_m(K) = \{\hat{\mathcal{S}}_k(K): |\hat{\mathcal{S}}_k(K)| = \underset{k'}{max}|\hat{\mathcal{S}}_{k'}(K)|\}$$
The cluster $\hat{\mathcal{S}}_m(K)$ is a cluster of just-identified estimates. This needs to be translated into the \textit{family} associated with the largest cluster, i.e.\ the set of IV combinations, $\hat{\mathcal{F}}_m(K)$, used for the estimates that end up in the largest cluster.
$$\hat{\mathcal{F}}_m(K) = \{[j]: \hat{\bm{\beta}}_{[j]} \in \hat{\mathcal{S}}_m(K)\}$$
In the case with one regressor, each cluster is directly associated with a group of IVs. Now the families need to be translated into sets of IVs to be tested. To achieve this, for each $K$, the potentially valid IVs are selected as those appearing in combinations contained in the largest family.
$$\hat{\mathcal{V}}_m(K) = \{j:[j] \in \hat{\mathcal{F}}_m(K)\}$$
The remaining IVs are then selected as invalid.
$$\hat{\mathcal{I}}(K) = \mathcal{J}\quad \backslash \quad \hat{\mathcal{V}}_m(K)$$
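To make the translation from clusters to families and IV sets concrete, the steps above can be sketched in a few lines of Python. This is illustrative only, not the paper's R implementation: the values of $J$, $P$ and the cluster labels below are made-up inputs.

```python
from collections import Counter
from itertools import combinations

# Hypothetical example: J = 4 candidate IVs, P = 2 regressors, so there
# are C(4, 2) = 6 just-identified estimates, one per IV combination [j].
J, P = 4, 2
iv_combos = list(combinations(range(J), P))

# Made-up cluster labels for the 6 estimates (e.g. from Ward clustering).
labels = [0, 0, 1, 0, 2, 2]

# Largest cluster of point estimates, S_m(K) ...
m = Counter(labels).most_common(1)[0][0]
# ... its family F_m(K): the IV combinations behind those estimates ...
family = [iv_combos[i] for i, lab in enumerate(labels) if lab == m]
# ... the potentially valid IVs V_m(K), and the invalid set I(K).
valid = sorted({j for combo in family for j in combo})
invalid = sorted(set(range(J)) - set(valid))
print(family, valid, invalid)
```

Here the largest cluster contains the estimates from the combinations $(0,1)$, $(0,2)$ and $(1,2)$, so IVs $0$, $1$, $2$ are selected as potentially valid and IV $3$ as invalid.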
For a given $K$, there might be multiple maximal clusters $\hat{\mathcal{S}}_m(K)$ and thus multiple associated sets $\hat{\mathcal{V}}_m(K)$. Let $\hat{\mathcal{V}}^M(K)$ denote the collection of these $\hat{\mathcal{V}}_m(K)$.
In such a case, we select the cluster in which the most IVs are involved. If there are multiple clusters with a maximal number of estimates \textit{and} of IVs, we select the set of IVs with the smallest Sargan test statistic. Then, for each $K$, the unique set of instruments to be checked by the Sargan test is:
\begin{equation}\label{eq:SelectedSargan}
\hat{\mathcal{V}}^{Sar}(K) = \{\hat{\mathcal{V}}_m(K): |\hat{\mathcal{V}}_m(K)| = \underset{\hat{\mathcal{V}} \in \hat{\mathcal{V}}^M(K)}{max} |\hat{\mathcal{V}}| \,\,\, \& \,\,\, Sar(\hat{\mathcal{V}}_m(K)) = \underset{\hat{\mathcal{V}} \in \hat{\mathcal{V}}^M(K)}{min} \, Sar(\hat{\mathcal{V}}) \}
\end{equation}
The downward testing procedure considers the selection via $\hat{\mathcal{V}}^{Sar}(K)$, for each number of clusters $K \in \{1,...,\binom{J}{P}-1\}$, and chooses the smallest $K$ such that the selected group of IVs passes the Sargan test:
\begin{equation}\label{eq:SelectedDownward}
\hat{\mathcal{V}}^{dts} = \hat{\mathcal{V}}^{Sar}(K^{*}), \quad K^{*} = min \{K \in \{1, ..., \binom{J}{P}-1\}: \, Sar(\hat{\mathcal{V}}^{Sar}(K)) < \xi_{n, J - |\hat{\mathcal{I}}| - P} \}
\end{equation}
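The downward testing procedure can be sketched as the following control flow. This is an illustrative sketch, not the paper's implementation: `select_vsar(K)` and `sargan_stat(ivs)` are hypothetical stand-ins for the unique selection rule at $K$ clusters and for the Sargan statistic of the model estimated with the IVs in `ivs`.

```python
from scipy.stats import chi2

def downward_testing(select_vsar, sargan_stat, n_combos, P, alpha=0.05):
    """Return the candidate set at the smallest K that passes the Sargan test."""
    for K in range(1, n_combos):          # K = 1, ..., C(J, P) - 1
        ivs = select_vsar(K)              # unique candidate valid set at K
        df = len(ivs) - P                 # overidentifying restrictions
        if df > 0 and sargan_stat(ivs) < chi2.ppf(1 - alpha, df):
            return ivs
    return None                           # no candidate set passes the test
```

With made-up inputs where the five-IV set at $K=1$ has a large statistic and the four-IV set at $K=2$ a small one, the procedure returns the four-IV set.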
The method has oracle properties, as stated in Theorems \ref{th:ConsistentSelection} and \ref{th:AsymptoticOracleDistribution}. Here, we formally establish the theoretical results for the general case with an arbitrary number of regressors, $P \geq 1$ (see Appendix \ref{app:oracle} for proofs of all theorems). Suppose Algorithm \ref{algo:ward} decides whether to merge two of the three clusters $\mathcal{S}_j$, $\mathcal{S}_k$ and $\mathcal{S}_l$, where all the IV combinations associated with the just-identified estimators in $\mathcal{S}_j$ and $\mathcal{S}_k$ are from the same true cluster $\mathcal{S}_{0q}$. The cluster $\mathcal{S}_l$, however, contains at least one estimator whose IV combination is from a family other than $\mathcal{F}_q$. The following lemma establishes that, asymptotically, Algorithm \ref{algo:ward} merges $\mathcal{S}_j$ and $\mathcal{S}_k$.
\begin{lemma}\label{lemma:AssignToFamily1}
Let $\mathcal{S}_j$ and $\mathcal{S}_k$ be two clusters such that any just-identified estimator $\hat{\bm{\beta}}_{[j]}$ contained in $\mathcal{S}_j$ or $\mathcal{S}_k$ satisfies $[j] \in \mathcal{F}_q$. Let $\mathcal{S}_l$ be a cluster such that $\exists \hat{\bm{\beta}}_{[l]}: \hat{\bm{\beta}}_{[l]} \in \mathcal{S}_l$ and $[l] \in \mathcal{F}_r$ with $r \neq q$. Under Assumptions \ref{ass:FirstStageMulti}, \ref{ass:RankAssumption}, \ref{ass:ErrorStructure}, \ref{ass:Asymptotics}, \ref{ass:ZwNormal} and \ref{ass:familyplurality}, if Algorithm \ref{algo:ward} merges two of $\mathcal{S}_j$, $\mathcal{S}_k$ and $\mathcal{S}_l$, then $\mathcal{S}_j$ and $\mathcal{S}_k$ are merged with probability converging to 1.
\end{lemma}
\noindent
In Algorithm \ref{algo:ward}, we start from the number of clusters $K = \binom{J}{P}$. At each subsequent step, according to Step 3 of Algorithm \ref{algo:ward}, two clusters are joined to form a new cluster.
Based on Lemma \ref{lemma:AssignToFamily1}, along the path of Algorithm \ref{algo:ward}, members of different families will not be joined with each other until all the members of each family have been merged into one cluster. Once, for each family, all the just-identified estimators associated with the IV combinations in that family have been merged into the same cluster, the total number of clusters is $K = Q$. This implies that when the number of clusters is smaller than $Q$, at least one cluster contains estimators that use IV combinations from different families. If the number of clusters is larger than $Q$, the estimated families are subsets of a family.
\begin{corollary} \label{coro:MergingClustersInOwnFamily}
Under assumptions 1.a to \ref{ass:ZwNormal}, in steps 3 and 4 of Algorithm \ref{algo:ward}:
\begin{equation*}\label{eq:lemma}
\text{When } \binom{J}{P} \geq K \geq Q, \quad \forall k \,\, \exists q: \quad lim \, P(\hat{\mathcal{F}}_{k} \subseteq \mathcal{F}_q) = 1
\end{equation*}
\end{corollary}
\noindent
To better understand why this is the case, consider the following analogy.
There are $N$ guests (the $\binom{J}{P}$ just-identified estimates) belonging to $Q$ families. The $N$ guests live in a hotel with $N$ rooms (clusters). Each day, one room disappears, and one of the guests needs to move into the room of some other guest. The members of a family have closer ties, so the guest whose room disappears moves into the room of somebody from their own family. This goes on until each family occupies one crowded room. The hotel continues to shrink, and only now are guests from different families merged into the same rooms. The largest family can be detected when all of its members have been merged into one room, while the other families have not yet been mixed with each other (or have each just been merged into one room of their own).
In Algorithm \ref{algo:ward}, the number of clusters starts at $K = \binom{J}{P}$ and ends at $K = 1$. At each step in between, the number of clusters decreases by 1, hence there must be a step where $K = Q$. Based on Lemma \ref{lemma:AssignToFamily1} and Corollary \ref{coro:MergingClustersInOwnFamily}, estimators from different families are joined together only after all elements of their own family have been joined into one cluster. In particular, when $K=Q$, there is a cluster such that all the just-identified estimators in it use only valid instruments. Therefore, since there must be one step at which $K=Q$, the path generated by Algorithm \ref{algo:ward} contains the true family with probability going to 1.
\begin{corollary}\label{coro:GroupStructureRetrieved}
When $K = Q$, $lim \, P(\hat{\mathcal{F}}_k = \mathcal{F}_q) = 1 \quad \forall k, q$.
\end{corollary}
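The merging behaviour described by Lemma \ref{lemma:AssignToFamily1} and the corollaries can be illustrated with a small $P=1$ simulation. This is an illustrative design with made-up parameter values, using off-the-shelf Ward clustering rather than the paper's package: with four valid and two invalid IVs, cutting the dendrogram at two clusters collects the valid IVs in the largest cluster, whose estimates lie near $\beta = 1$.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(0)
n, beta = 50_000, 1.0
alpha = np.array([0.0, 0.0, 0.0, 0.0, 0.8, 0.8])  # direct effects: IVs 4, 5 invalid
gamma = np.full(6, 0.5)                           # first-stage strengths
Z = rng.normal(size=(n, 6))
u = rng.normal(size=n)                            # confounder
d = Z @ gamma + u + rng.normal(size=n)
y = beta * d + Z @ alpha + u + rng.normal(size=n)

# One just-identified IV estimate per instrument.
b = np.array([(Z[:, j] @ y) / (Z[:, j] @ d) for j in range(6)])

# Ward clustering of the estimates; cut the tree at K = 2 clusters.
labels = fcluster(linkage(b.reshape(-1, 1), method="ward"), t=2, criterion="maxclust")
largest = np.bincount(labels).argmax()
print(sorted(np.where(labels == largest)[0].tolist()))  # members of the largest cluster
```

The invalid IVs have just-identified estimates near $\beta + \alpha_j/\gamma_j = 2.6$, so they end up in their own, smaller cluster.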
\noindent
The theoretical results above establish that the selection path generated by Algorithm \ref{algo:ward} covers the family which uses only valid IVs, $\mathcal{F}_0$. In Appendix \ref{app:oracle} we show that, by Algorithm \ref{algo:Sargan}, we can locate this $\mathcal{F}_0$ and select the valid instruments consistently. This consistent selection property is summarized in Theorem \ref{th:ConsistentSelection}, which holds for $P \geq 1$ under Assumption \ref{ass:FirstStageSingle} (\ref{ass:FirstStageMulti}) to Assumption \ref{ass:plurality} (\ref{ass:familyplurality}). These assumptions must also hold for Theorem \ref{th:AsymptoticOracleDistribution}.
\subsection{Weak instruments}\label{sec:Weak}
In the previous sections, we assumed by Assumptions \ref{ass:FirstStageSingle} and \ref{ass:FirstStageMulti} that all the candidate instruments (or all $\binom{J}{P}$ IV combinations when $P>1$) are relevant for the endogenous variables. In practice, however, these assumptions might not hold, in the sense that some of the candidate instruments are only weakly correlated with the endogenous variables. We now relax these assumptions and allow for individually weak instruments among the candidates. To be specific, we model the weak instruments as local to zero following \citet*{Staiger1997Instrumental}: an instrument $Z_j$ is weak if $\gamma_j = C/\sqrt{n}$, where $C$ is a fixed scalar with $C \neq 0$. For consistent IV selection, we maintain the plurality assumption \ref{ass:plurality} for \textit{strong and valid} instruments as in \citet*{Guo2018Confidence}: the group formed by all the strong and valid instruments is the largest group. Note that the largest group now also needs to be strong, while IVs in other groups can be weak.\footnote{The equivalent holds for the largest family when there are multiple regressors.}
Inherently, the AHC method can rule out weak and invalid instruments. This is because, under Models \ref{eq:Structural} and \ref{eq:FirstStage}, the just-identified estimators of such instruments tend to infinity.\footnote{Consider $P = 1$. Let $Z_j$ be a weak and invalid instrument, i.e. $\gamma_j = C/\sqrt{n}$ and $\alpha_j \neq 0$. Following Appendix A.5 in \citet*{Windmeijer2021Confidence}, for the just-identified estimator of $Z_j$, denoted by $\hat{\beta}_j$, we have $plim(\hat{\beta}_j) = plim(\beta_j) = plim(\beta + \frac{\alpha_j}{\gamma_j}) = \beta + plim(\sqrt{n}\frac{\alpha_j}{C})$ with $\alpha_j \neq 0$. Therefore $\hat{\beta}_j \rightarrow \infty$ as $n \rightarrow \infty$.} Therefore, they can be separated by the algorithm from the just-identified estimators of the strong and valid instruments, as the latter converge to the true value of the causal effect.
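The divergence argument in the footnote can be checked numerically. The sketch below simulates a single weak invalid instrument with $\gamma_j = C/\sqrt{n}$ and $\alpha_j \neq 0$; all parameter values are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
beta, alpha_j, C = 1.0, 0.5, 1.0
ests = []
for n in [1_000, 100_000, 1_000_000]:
    gamma_j = C / np.sqrt(n)            # local-to-zero first stage
    z = rng.normal(size=n)
    u = rng.normal(size=n)              # confounder
    d = gamma_j * z + u + rng.normal(size=n)
    y = beta * d + alpha_j * z + u + rng.normal(size=n)
    ests.append((z @ y) / (z @ d))      # just-identified estimate
    print(n, round(ests[-1], 1))        # drifts away from beta = 1 as n grows
```

The estimate moves ever further from $\beta = 1$ as $n$ grows, so this instrument's estimate cannot end up in the cluster of the strong valid IVs.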
As for weak valid instruments, the result depends to a large extent on the performance of the Sargan test. If the power of the Sargan test is not asymptotically one in such a setting, it might not reject in the presence of a mixture of strong and weak valid IVs and hence select weak valid IVs as valid. Unlike the HT method, which uses first-stage hard thresholding and simply classifies all weak valid instruments as invalid, the AHC method therefore allows some weak valid IVs to be classified as valid. An important extension of the method would be to use a weak-instrument-robust overidentification test statistic, such as the Anderson-Rubin test, in the downward testing procedure, as proposed in \citet*{Apfel2021Relaxing}.
The AHC method has two advantages for the selection of weak valid instruments. Firstly, compared with the HT method, which drops all such instruments: in settings where the largest group of IVs is still strong and there are additional weak valid IVs that add information, the strong and the individually weak instruments can be informative together. Secondly, the method limits the impact of including the selected weak instruments on IV estimation. By construction of the algorithm, if the weak valid instruments are classified as valid, their just-identified estimators are not biased too far from the true value. Moreover, \citet{Windmeijer2019Two} shows that the 2SLS estimator is a weighted average of all the just-identified estimates, where the weight of each IV-specific estimate increases with the strength of that IV. By the plurality assumption, there are already strong valid instruments available for post-selection IV estimation.
The biasing effect of including additional weak valid instruments on the 2SLS estimator is therefore small, as their weights in the 2SLS estimator are small.
In comparison, the CIM can be problematic in the presence of weak instruments among the candidates, as it tends to select weak invalid instruments as valid, causing severe bias of the post-selection estimator.
Why is this so? With weak instruments, the confidence intervals are very wide. Most of them will therefore overlap with all other confidence intervals, and the resulting largest group (which would be the selected set of valid instruments) will always contain some of the weak invalid instruments. At the same time, the point estimates of the strong valid IVs are not exactly the same, but their confidence intervals are narrower. This can lead to inconsistent selection in settings where the valid IVs are strong and the invalid IVs are weak: once the algorithm decreases the critical value of the confidence intervals, the point estimates that fall apart into smaller groups are the valid ones, which have narrower confidence intervals. It is noteworthy that inconsistent selection can hence arise especially in settings that seem advantageous at first sight. The AHC method does not suffer from this problem because it does not use confidence intervals: the weakness of the instruments makes the corresponding estimates scatter more widely, which makes it easier for the algorithm to find the valid group.
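This mechanism can be seen in a small simulation (illustrative design values, not the CIM implementation itself): the confidence interval of a weak invalid instrument is orders of magnitude wider than those of strong valid instruments, so it overlaps them.

```python
import numpy as np

rng = np.random.default_rng(2)
n, beta = 10_000, 1.0
setups = [(0.6, 0.0), (0.6, 0.0), (0.02, 0.3)]  # (gamma_j, alpha_j): last IV weak, invalid
widths = []
for gamma_j, alpha_j in setups:
    z = rng.normal(size=n)
    u = rng.normal(size=n)                      # confounder
    d = gamma_j * z + u + rng.normal(size=n)
    y = beta * d + alpha_j * z + u + rng.normal(size=n)
    b = (z @ y) / (z @ d)                       # just-identified estimate
    resid = y - b * d
    se = np.sqrt((resid @ resid / n) * (z @ z)) / abs(z @ d)
    widths.append(2 * 1.96 * se)                # 95% CI width
    print(f"gamma={gamma_j}: estimate {b:.2f}, CI width {widths[-1]:.2f}")
```

The weak invalid IV's interval is wide enough to cover the tight intervals of both strong valid IVs, which is exactly the overlap that misleads CIM.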
As for the HT method, besides the potential loss of information from dropping all weak valid instruments, it is also not clear how to choose the optimal value of the threshold for a given sample, as noted in \citet*{Windmeijer2021Confidence}.
In Section \ref{sec:weaksimulations}, we provide a detailed comparison via Monte Carlo experiments.
To summarize, the AHC method can select all invalid instruments as invalid regardless of their strength, which is key for consistent estimation of the causal effect. It treats weak valid instruments in a flexible way, retaining the moderately weak and discarding the very weak instruments, while limiting the bias-inducing effect of including weak instruments in IV estimation. Our simulation results are in line with this discussion. We leave additional theoretical results on the behaviour of the method with weak IVs for future work.
\subsection{Local violations}
A common criticism of the IV selection literature is a non-uniformity issue: if the violations of validity are very small, the selection algorithms might select invalid instruments with positive probability, leading to an asymptotic bias.
In this section, we show how our method can be applied in a setting where some IVs are strongly invalid and some have a local-to-zero invalidity. In an influential paper, \citet*[CHR]{Conley2012Plausibly} propose procedures to derive confidence intervals when the IVs are \textit{plausibly exogenous}, so that the direct effect parameter $\alpha$ is near, but not exactly, zero. The AHC method can help improve some of these procedures.
Following the asymptotic setting referred to in CHR, we write a violation as
\begin{equation}
\alpha = \frac{c}{n^{\kappa}}
\end{equation}
and term it as \textit{mild} when $\kappa=1/2$, as \textit{minor} when $1/2<\kappa<\infty$, following \citet*{Caner2014Near}, and as \textit{strong} when $\kappa < 1/2$.
CHR propose to use possible values of the invalidity vector $\alpha$ to create multiple confidence intervals and then take the union of these intervals, obtaining confidence intervals with conservative coverage. The main drawback of this method is that, even with only one strong violation, the confidence interval becomes very wide and risks being uninformative in practice. Similarly, \citet*{Kang2022Two} propose the union of CIs obtained from estimating overidentified models. The drawback is again that inference can be very conservative; moreover, in this case estimation becomes infeasible with a moderate number of IVs. In recent work, \citet*{Guo2021Post} proposes searching and sampling methods for uniform confidence intervals. The drawback of this method is that an initial range for the true $\beta$ is needed. As described in their Algorithm 3, TSHT and CIIV can be combined with these methods; equally, AHC could be used to obtain an initial range for $\beta$.
In a situation where a large group of violations is mild or minor, our method can help improve this procedure by acting as a pre-screening step that excludes strong violations. Our proposed procedure works as follows.
\begin{enumerate}
\item Run AHC.
\item Use just-identified models from IVs selected as valid to compute CIs.
\item Take union of these CIs.
\end{enumerate}
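A minimal sketch of steps 2 and 3, with made-up (estimate, standard error) pairs as inputs; the union is reported here as the convex hull of the individual intervals, a conservative simplification.

```python
def union_ci(just_identified, z_crit=1.96):
    """just_identified: (beta_hat, se) pairs from the IVs selected as valid.
    Returns the convex hull of the individual 95% confidence intervals."""
    lowers = [b - z_crit * se for b, se in just_identified]
    uppers = [b + z_crit * se for b, se in just_identified]
    return min(lowers), max(uppers)

# Three plausibly exogenous IVs with mildly shifted point estimates.
print(union_ci([(1.02, 0.05), (0.97, 0.04), (1.10, 0.06)]))
```

Because AHC has already screened out strong violations, the inputs' point estimates are close to each other and the resulting union stays informative.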
If there is at least one valid IV in the group selected as valid, the union of CIs will be wider than needed but will still include the correct value, resulting in conservative coverage in the sense that, for a significance level $\tau$, $P(\beta \in CI(1-\tau)) \geq 1 - \tau$. The presence of mild violations somewhat widens the CI, but the effect of strong violation groups or outliers, which might otherwise have widened the union of CIs beyond usefulness, is reined in.
\subsection{Heterogeneous Treatment Effects}
The instrumental variable estimator also has a local average treatment effect (LATE) interpretation, as estimating the average treatment effect for the sub-population whose treatment status can be changed by the instrument \citep{Imbens1994Estimation}. Hence, LATEs naturally vary with the instruments. For example, an increase in the minimum school-leaving age and proximity to a school will induce different populations to increase their schooling. In this section, we show that our method can be interpreted as retrieving the largest group associated with a given LATE, or the whole set of different LATEs.
For simplicity, we look at a setting with a binary treatment $d_i$, a binary instrument $z_i$ and potential outcomes $y_{1i}$ and $y_{0i}$. The outcome and the treatment can be written as
\begin{align*}
y_i &= y_{0i} (1 - d_i) + y_{1i} d_i\\
d_i &= d_{0i} (1 - z_i) + d_{1i} z_i
\end{align*}
\begin{assumption}\label{ass:Independence}Independence
$\{y_{0i}, y_{1i}, d_{0i}, d_{1i}\} \perp \!\!\! \perp z_i $
\end{assumption}
\begin{assumption}\label{ass:FirstStage}First Stage
$P(d_i=1|z_i=1) \neq P(d_i=1|z_i=0)$
\end{assumption}
\begin{assumption}\label{ass:Monotonicity}Monotonicity
$d_{1i} \geq d_{0i}$
\end{assumption}
\noindent
If the last three assumptions are fulfilled, \citet*{Imbens1994Estimation} show that the IV estimand is the average treatment effect of the compliers:
\begin{equation}
\beta_j = \frac{E(y_i | z_i = 1) - E(y_i | z_i = 0)}{E(d_i | z_i = 1) - E(d_i | z_i = 0)} = E(y_{1i} - y_{0i} | d_{1i} > d_{0i})
\end{equation}
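The Wald estimand above can be illustrated with simulated binary data. The type shares and effect sizes below are made up, with compliers assigned a treatment effect of 2, so the IV ratio should recover roughly 2 under independence and monotonicity.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
z = rng.integers(0, 2, size=n)
# Potential treatments by type: always-takers, compliers, never-takers.
types = rng.choice(["a", "c", "n"], size=n, p=[0.2, 0.5, 0.3])
d0 = (types == "a").astype(float)
d1 = ((types == "a") | (types == "c")).astype(float)
d = d0 * (1 - z) + d1 * z
y0 = rng.normal(size=n)
y1 = y0 + np.where(types == "c", 2.0, 0.5)  # complier effect 2, others 0.5
y = y0 * (1 - d) + y1 * d
# Wald ratio: difference in mean outcomes over difference in take-up rates.
wald = (y[z == 1].mean() - y[z == 0].mean()) / (d[z == 1].mean() - d[z == 0].mean())
print(round(wald, 2))
```

Even though always-takers have a different (here smaller) effect, the Wald ratio only reflects the compliers, whose treatment status the instrument shifts.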
\noindent We are interested in a setting where the by-IV treatment effects form groups:
\begin{equation}\label{eq:LATEgroups}
\mathcal{G}_q = \{j: \, \beta_j = q \} \text{.}
\end{equation}
Note that Lemma \ref{lemma:AssignToFamily1} and Corollaries \ref{coro:MergingClustersInOwnFamily} and \ref{coro:GroupStructureRetrieved} also hold in the heterogeneous effects setting. In this case, the algorithm can find groups of heterogeneous treatment effects.
Now, Algorithms \ref{algo:ward} and \ref{algo:Sargan} are altered. Instead of Steps 4 and 5 of Algorithm \ref{algo:ward}, which select the largest cluster and run post-selection 2SLS, we still perform the downward testing procedure, but now apply the Sargan test to all clusters and stop at the step where none of the Sargan tests rejects. Finally, all cluster centers are reported.
\noindent In the same way as before:
\begin{theorem}Consistent selection of LATE groups\label{th:ConsistentSelectionLATE}\\
Let $\xi_n$ be the critical value for the Sargan test in Algorithm \ref{algo:Sargan}. Under Assumptions \ref{ass:Independence} - \ref{ass:Monotonicity} and Lemma 2, for $\xi_n \rightarrow \infty$ and $\xi_n = o(n)$,
$$lim \, P(\hat{\mathcal{G}}_k = \mathcal{G}_q) = 1 \quad \forall k,q \text{.}$$
\end{theorem}
\noindent The proof is in the Appendix. The theorem states that we can retrieve all heterogeneous treatment effect groups when the heterogeneity is structured in groups. The difference from the setting with invalid IVs is that in the LATE setting not only the largest cluster contains valuable information: the smaller clusters also contain coefficient estimates obtained with valid instruments. The researcher needs to argue whether the heterogeneity in the estimates comes from violations of the exclusion restriction or from treatment effect heterogeneity. Allowing for these two possibilities simultaneously is the subject of ongoing research.
\subsection{Different Proximity Measures}\label{sec:ProximityMeasures}
In Algorithm \ref{algo:ward} we have made use of the Euclidean distance to assess the proximity of clusters. One might worry that the results are sensitive to the choice of proximity measure. In practice, however, this choice does not seem to play a big role.
Especially in settings with multiple regressors, there might be better choices to assess proximity. \citet*{Aggarwal2001Surprising} show that the difference between the maximum and minimum distances to a given point becomes zero as the number of dimensions increases. This problem is exacerbated for higher-order norms, that is for $||\cdot||_k$-norms with large $k$. Therefore, the authors suggest relying on the Manhattan distance instead of the Euclidean distance in high dimensions. Going further, fractional norms of the form $\left[\sum_{d=1}^D (x_1^d - x_2^d)^f\right]^{1/f}$ with $0 < f < 1$ are introduced. It is shown that these fractional distance metrics preserve the contrast between points better than integer-norm distance metrics.
Therefore, we also allow for alternative distances in Algorithm \ref{algo:ward}. We consider the Manhattan distance and the Minkowski distance; the latter is similar to the fractional distance proposed in \citet*{Aggarwal2001Surprising}, with the difference that the absolute value of the coordinate differences is taken.
Furthermore, Algorithm \ref{algo:ward} computes the weighted squared Euclidean distance to evaluate the distance between clusters. The choice of linkage and distance definition is associated with a specific objective function, as discussed in \citet{Ward1963Hierarchical}. Ward's linkage aims to minimize the sum of within-cluster variation. In complete linkage, the distance between two clusters is defined by their two most distant elements. Alternative ways to assess proximity are to use the medians or centroids of each cluster. We allow for alternative distance definitions and linkage methods in the R-package we provide.
In additional simulations we considered these variants of the agglomerative hierarchical clustering algorithm; the results are very similar to those obtained with the Euclidean distance and Ward's linkage function, and are available upon request.
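For concreteness, the following Python sketch (our own illustration; the two points and the fractional parameter $f$ are arbitrary) evaluates the Euclidean, Manhattan, and a fractional distance between two points:

```python
import numpy as np

def fractional_distance(x, y, f=0.5):
    # Fractional distance of Aggarwal et al. (2001): (sum_d |x_d - y_d|^f)^(1/f)
    return float(np.sum(np.abs(x - y) ** f) ** (1.0 / f))

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])

euclid = float(np.sqrt(np.sum((x - y) ** 2)))  # ||x - y||_2
manhattan = float(np.sum(np.abs(x - y)))       # ||x - y||_1
frac = fractional_distance(x, y, f=0.5)        # fractional norm with f = 1/2
```

For $f<1$ the fractional distance magnifies coordinate-wise differences, which is the contrast-preserving behaviour discussed above.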
\section{IV Selection and Estimation Method}\label{sec:IVSelectionAndEstimation}
Based on the definition of groups and the plurality rule, a natural strategy for IV selection is to find the $Q$ IV groups and then select the largest group as the set of valid instruments. In this paper, we explore clustering methods to discover the IV groups. First, we fit the general clustering framework to the IV selection problem, which is summarized in the minimization problem in Equation \ref{eq:min}. This general method needs a pre-specified parameter $K$, the number of clusters. We show that when $K$ equals the number of groups, there is a unique solution to this minimization problem, and this solution coincides with the true underlying partition. However, the fact that consistent selection depends on $K$ makes the method difficult to implement in practice, as we do not have prior knowledge of the number of groups. If $K$ is too large (larger than the number of groups), the largest group will be split. If $K$ is too small, the largest group might end up in a cluster with some other group. To tackle this problem, we propose a downward testing procedure which combines the agglomerative hierarchical clustering method (Ward's method) with the Sargan test for overidentifying restrictions, and which allows us to select the valid instruments without pre-specifying $K$.
\subsection{Clustering Method for IV Selection}
Let $\mathbf{\mathcal{S}} = \{\mathcal{S}_1, ..., \mathcal{S}_K\}$ be a partition of $J$ just-identified estimators $\hat{\beta}_j$ into $K$ cluster cells with cluster identities $k = 1,..., K$. The clustering result is the solution to the following minimization problem:
\begin{equation}\label{eq:min}
\mathbf{\hat{\mathcal{S}}}(K)= \underset{\mathbf{\mathcal{S}}}{\argmin} \sum_{k=1}^{K}\sum_{\hat{\beta}_{j} \in \mathcal{S}_k}||\hat{\beta}_{j} - \bar{\mathcal{S}}_k||^2 \text{,}
\end{equation}
where $\bar{\mathcal{S}}_k$ is the arithmetic mean of all just-identified estimators in cluster $\mathcal{S}_k$. This objective is the intra-cluster variance, summed over all clusters.\\
Let the clustering result $\hat{\mathcal{S}}(K)$ be an estimator of the sets containing the IV estimators $\hat{\beta}_j$. The IV estimators in a cluster $\hat{\mathcal{S}}_k$ are selected to belong to a certain group of IVs:
$$\hat{\mathcal{G}}_k = \{j: \hat{\beta}_j \in \hat{\mathcal{S}}_k\}$$
Based on Assumption \ref{ass:plurality}, the cluster that consists of estimators that use valid IVs is estimated as the cluster that contains the largest number of just-identified estimators:
$$\hat{\mathcal{S}}_m(K) = \{ \hat{\mathcal{S}}_k(K): |\hat{\mathcal{S}}_k(K)| = \underset{l}{\max}|\hat{\mathcal{S}}_l(K)|\} \text{.}$$
The valid IVs are selected as those IVs that are used to estimate the largest cluster $\hat{\mathcal{S}}_m(K)$:
$$\hat{\mathcal{V}}(K) = \{j: \hat{\beta}_j \in \hat{\mathcal{S}}_m(K)\} \text{.}$$
Then, the remaining IVs are selected as invalid
$$\hat{\mathcal{I}}(K) = \mathcal{J} \setminus \hat{\mathcal{V}}(K) \text{.}$$
When the number of clusters $K$ is equal to the number of groups $Q$, $K=Q$, there is asymptotically a unique partition minimizing the sum in Equation \ref{eq:min}. This occurs when the grouping is such that $\hat{\mathcal{G}}_k = \mathcal{G}_q$, i.e. each selected group $\hat{\mathcal{G}}_k$ is in fact formed by a true group $\mathcal{G}_q$. Define the partition leading to this grouping of IVs as the true partition $\mathbf{\mathcal{S}}_0 = \{\mathcal{S}_{01}, ..., \mathcal{S}_{0Q} \}$.
To see this, first note that if the partition is such that $\hat{\mathcal{S}}_k = \mathcal{S}_{0q} \,\, \forall k, q$, i.e. $\hat{\mathbf{\mathcal{S}}}(K) = \mathbf{\mathcal{S}}_0$, then
\begin{equation*}
g(\hat{\mathbf{\mathcal{S}}}(K)) = g(\mathbf{\mathcal{S}}_0) = plim \{\sum_{k=1}^{K}\sum_{\hat{\beta}_{j} \in \mathcal{S}_k}||\hat{\beta}_{j} - \bar{\mathcal{S}}_k||^2\} = 0 \text{.}
\end{equation*}
For all $\hat{\beta}_{j} \in \mathcal{S}_k$, we have $plim \, \hat{\beta}_j = plim \, \bar{\mathcal{S}}_k$, and $plim\{||\hat{\beta}_{j} - \bar{\mathcal{S}}_k||^2\} = 0$. This holds for all $k \in \{1,...,K\}$, hence $g(\mathbf{\mathcal{S}}_0) = 0$. Second, if the partition is such that some $\mathcal{S}_k \neq \mathcal{S}_{0q}$, i.e. $\mathbf{\mathcal{S}} \neq \mathbf{\mathcal{S}}_0$, then $plim \, \hat{\beta}_j \neq plim \, \bar{\mathcal{S}}_k$ for some $\hat{\beta}_j \in \mathcal{S}_k$ and $g(\mathbf{\mathcal{S}}) > 0$. This means that as $n \rightarrow \infty$ there is a unique solution to Equation \ref{eq:min}, namely $\mathbf{\mathcal{S}}=\mathbf{\mathcal{S}}_0$. A necessary condition for this to hold is that $K=Q$.
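The objective in Equation \ref{eq:min} can be evaluated directly. The following Python sketch (our own illustration with made-up estimates) computes the within-cluster sum of squares for two candidate partitions of six just-identified estimates; the partition respecting the group structure attains the smaller value:

```python
import numpy as np

def partition_sse(estimates, labels):
    # Objective of the minimization problem: within-cluster sum of squared
    # deviations of each just-identified estimate from its cluster mean
    total = 0.0
    for k in np.unique(labels):
        members = estimates[labels == k]
        total += np.sum((members - members.mean()) ** 2)
    return total

# Six hypothetical just-identified estimates forming three groups
beta_hat = np.array([0.99, 1.00, 1.01, 1.49, 1.51, 2.00])
true_partition = np.array([0, 0, 0, 1, 1, 2])   # respects the group structure
wrong_partition = np.array([0, 0, 1, 1, 2, 2])  # splits the largest group

sse_true = partition_sse(beta_hat, true_partition)
sse_wrong = partition_sse(beta_hat, wrong_partition)
```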
\subsection{Ward's Algorithm for IV Selection}
To choose the correct value of $K$ without prior knowledge of the number of groups, we propose a selection method which combines Ward's algorithm, a general agglomerative hierarchical clustering procedure proposed by \citet*{Ward1963Hierarchical}, with the Sargan test of overidentifying restrictions. Our selection algorithm has two parts.
The set of instruments selected as valid by the algorithm is denoted by $\hat{\mathcal{V}}^{dts}$.
The first part is Ward's algorithm, as described in Algorithm \ref{algo:ward} below.
Ward's algorithm aims to minimize the total within-cluster sum of squared error.
This is achieved by minimizing the increase in within-cluster sum of squared error at each step of the algorithm. The method generates a path of cluster assignments with $K$ clusters at each step so that $K \in \{1,...,J\}$. After obtaining the clusters for each $K$, we use a downward testing procedure based on the Sargan-test to select the set of valid instruments (Algorithm \ref{algo:Sargan}).
Ward's algorithm works as follows:
\begin{algorithm}\label{algo:ward}
Ward's algorithm
\begin{enumerate}
\item \textbf{Input:} Each just-identified point estimate is calculated. The Euclidean distance between all of these estimates is calculated and written as a dissimilarity matrix.
\item \textbf{Initialization:} Each just-identified estimate has its own cluster. The total number of clusters in the beginning hence is $J$.
\item \textbf{Joining:} The two clusters which are closest as measured by their weighted squared Euclidean distance $\frac{|\mathcal{S}_k||\mathcal{S}_l|}{|\mathcal{S}_k| + |\mathcal{S}_l|}||\bar{\mathcal{S}}_k - \bar{\mathcal{S}}_l||^2$ are joined to a new cluster. $|\mathcal{S}_k|$ is the number of estimates in cluster $k$. $\bar{\mathcal{S}}_k$ denotes the mean of cluster $k$, which is the arithmetic mean of all the just-identified estimates in $\mathcal{S}_k$.
\item \textbf{Iteration:} The joining step is repeated until all just-identified point-estimates are in one cluster.
\end{enumerate}
\end{algorithm}
\noindent
This yields a path of $S = J-1$ merge steps, along which there are clusterings with $K \in \{1, ..., J\}$ clusters.
\citet{Ward1963Hierarchical} originally also allows for alternative objective functions, which are associated with different dissimilarity metrics and different ways to define the distance between clusters. Our motivation for using the Euclidean distance is that the resulting objective function is the intra-cluster variance, i.e. the sum of within-cluster squared errors. We discuss alternative choices of these so-called linkage methods and dissimilarity metrics in Section \ref{sec:ProximityMeasures}.
After generating the clustering path by Algorithm 1, we select the set of valid instruments following Algorithm \ref{algo:Sargan}:
\begin{algorithm}\label{algo:Sargan}
Downward testing procedure
\begin{enumerate}
\item Starting from $K = 1$, find the cluster that contains the largest number of just-identified estimators. At $K=1$, all estimators are in one cluster.
\item Perform the Sargan test on the instruments associated with the largest cluster, using the rest of the IVs as controls. If there are multiple largest clusters, select the one with the smallest Sargan statistic.
\item Repeat the procedure for each $K = 2,..., J-1$.
\item Stop the first time the model implied by the largest cluster at some $K$ is not rejected by the Sargan test.
\item Select the instruments associated with the cluster from Step 4 as valid instruments.
\end{enumerate}
\end{algorithm}
\noindent
The Sargan statistic used in Algorithm \ref{algo:Sargan} is given by
\begin{equation*}
Sar\left(K\right)=\frac{\widehat{\mathbf{u}}(\widehat{\bm{\theta}}_K)^{\prime}\mathbf{Z}\left(\mathbf{Z}^{\prime}\mathbf{Z}\right)^{-1}\mathbf{Z}^{\prime}\widehat{\mathbf{u}}(\widehat{\bm{\theta}}_K)}{\widehat{\mathbf{u}}(\widehat{\bm{\theta}}_K)^{\prime}\widehat{\mathbf{u}}(\widehat{\bm{\theta}}_K)/n}
\end{equation*}
where $\hat{\bm{\theta}}_K$ is the 2SLS estimator using the instruments associated with the largest cluster for each $K$ as valid instruments and controlling for the rest of the instruments, and $\widehat{\mathbf{u}}(\widehat{\bm{\theta}}_K)$ is the 2SLS residual. We show later that, to guarantee consistent selection, the critical value for the Sargan test, denoted by $\xi_{n, J - |\hat{\mathcal{I}}| - P}$, should satisfy $\xi_{n, J - |\hat{\mathcal{I}}| - P} \rightarrow \infty$ and $\xi_{n, J - |\hat{\mathcal{I}}| - P} = o(n)$. In practice, we choose the significance level $\frac{0.1}{\log(n)}$ following \citet*{Windmeijer2021Confidence}.
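A minimal Python sketch of this statistic is given below; the data-generating process is hypothetical (four instruments, the fourth violating the exclusion restriction) and serves only to show that the statistic is small for a correctly specified set of valid IVs and large when an invalid IV is treated as valid:

```python
import numpy as np

def sargan(y, X, Z):
    # Sargan statistic u'Z(Z'Z)^{-1}Z'u / (u'u / n), evaluated at the 2SLS residual
    Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)
    theta = np.linalg.solve(X.T @ Pz @ X, X.T @ Pz @ y)
    u = y - X @ theta
    return float(u @ Pz @ u / (u @ u / len(y)))

rng = np.random.default_rng(1)
n = 2000
Z = rng.standard_normal((n, 4))
e = rng.standard_normal(n)
d = Z.sum(axis=1) + e                 # all four IVs are relevant
u = 0.5 * e + rng.standard_normal(n)  # endogeneity via e
y = 0.5 * d + 2.0 * Z[:, 3] + u       # the fourth IV violates the exclusion restriction

# Correct model: first three IVs valid, fourth controlled for
sar_ok = sargan(y, np.column_stack([d, Z[:, 3]]), Z)
# Misspecified model: all four IVs treated as valid
sar_bad = sargan(y, d.reshape(-1, 1), Z)
```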
\begin{figure}[t]
\caption{Illustration of the Algorithm with One Regressor \label{fig:Ward}}
\includegraphics[scale=1.5, trim=320 300 290 270, clip]{illu_P1_1.pdf}\\
\includegraphics[scale=1.5, trim=320 300 290 270, clip]{illu_P1_2.pdf}\\
\includegraphics[scale=1.5, trim=320 300 290 270, clip]{illu_P1_3.pdf}\\
\includegraphics[scale=1.5, trim=320 300 290 270, clip]{illu_P1_4.pdf}\\
\includegraphics[scale=1.5, trim=320 300 290 270, clip]{illu_P1_5.pdf}\\
\includegraphics[scale=1.5, trim=320 300 290 270, clip]{illu_P1_6.pdf}
\end{figure}
\noindent
The procedure is illustrated in Figure \ref{fig:Ward}. Here, we have a situation with six instruments. Three of them are valid, as they affect the outcome variable only through the endogenous regressor, while the other three are invalid. In the graph, the circles above the real line denote the just-identified estimates of the coefficient $\beta_0$ obtained with each of the six instruments. From left to right, we number these estimates and their corresponding instruments as No.\ 1 to No.\ 6.
In the initial Step 0 of the clustering process, each just-identified estimate has its own cluster. In Step 1, we join the two estimates which are closest in terms of their weighted Euclidean distance, i.e. those estimated with instruments No.\ 3 and No.\ 4 (the two orange circles). These two estimates now form one cluster, and we are left with five clusters. We re-calculate the distances with the new cluster and merge the closest two into a new cluster. We continue with this procedure until there is only one cluster left in the bottom-right graph. We then follow Algorithm \ref{algo:Sargan} and evaluate the Sargan test at each step, using the instruments contained in the largest cluster. When the p-value is larger than a certain threshold, say $0.1/\log(n)$, we stop the procedure. Ideally this will be the case at Step 3 of the algorithm, because here the largest group (in orange) is formed only by valid IVs (No.\ 2, 3 and 4). In that case, exactly the valid IVs are selected.
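The whole procedure can be sketched end-to-end in Python. The simulation below is our own toy example with six instruments, three of them valid; to keep the sketch deterministic, the structural error is residualized on the instruments in-sample, so that the moment conditions of correctly specified models hold exactly:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import chi2

rng = np.random.default_rng(0)
n, J = 2000, 6
Zmat = rng.standard_normal((n, J))
e = rng.standard_normal(n)
d = Zmat.sum(axis=1) + e                    # all six IVs are relevant
alpha = np.array([0, 0, 0, 1.0, 2.0, 3.0])  # IVs 4-6 violate the exclusion restriction
u = 0.5 * e + rng.standard_normal(n)
u = u - Zmat @ np.linalg.lstsq(Zmat, u, rcond=None)[0]  # in-sample Z'u = 0
y = 0.5 * d + Zmat @ alpha + u

def sargan(y, X, Z):
    # Sargan statistic u'P_Z u / (u'u / n) at the 2SLS residual
    Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)
    theta = np.linalg.solve(X.T @ Pz @ X, X.T @ Pz @ y)
    uhat = y - X @ theta
    return float(uhat @ Pz @ uhat / (uhat @ uhat / len(y)))

# Just-identified estimates: instrument j treated as valid, all others as controls
beta_hat = np.empty(J)
for j in range(J):
    X = np.column_stack([d, np.delete(Zmat, j, axis=1)])
    beta_hat[j] = np.linalg.solve(Zmat.T @ X, Zmat.T @ y)[0]

merge_path = linkage(beta_hat.reshape(-1, 1), method="ward")
crit = lambda df: chi2.ppf(1 - 0.1 / np.log(n), df)  # significance level 0.1/log(n)

selected = None
for K in range(1, J):  # downward testing over the Ward path
    labels = fcluster(merge_path, t=K, criterion="maxclust")
    sizes = np.bincount(labels)[1:]
    best = None
    for k in np.flatnonzero(sizes == sizes.max()) + 1:  # ties: smallest statistic
        valid = np.flatnonzero(labels == k)
        ctrl = Zmat[:, labels != k]
        X = np.column_stack([d, ctrl]) if ctrl.shape[1] else d.reshape(-1, 1)
        stat = sargan(y, X, Zmat)
        df = len(valid) - 1  # one endogenous regressor
        if df > 0 and (best is None or stat < best[0]):
            best = (stat, df, set(valid))
    if best is not None and best[0] < crit(best[1]):
        selected = best[2]
        break
```

The procedure stops as soon as the largest cluster is formed only by the three valid instruments, and `selected` then contains exactly their indices.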
To make the procedure robust to heteroskedasticity, clustering and serial correlation, the Sargan test can be replaced with a robust score test, such as the Hansen J-test \citep*{Hansen1982Large}, analogously to \citet*{Windmeijer2021Confidence}.
\subsection{Oracle Selection and Estimation Property}\label{sec:oracle}
In this section, we state the theoretical properties of the IV selection results obtained by Algorithm \ref{algo:ward} and Algorithm \ref{algo:Sargan} and the post-selection estimators. See Section \ref{sec:Extensions} for detailed theoretical results developed for the general case $P \geq 1$. We establish that our method can achieve oracle properties in the sense that it can select the valid instruments consistently, and that the post-selection IV estimator has the same limiting distribution as if we knew the true set of valid instruments.
\begin{theorem}{Consistent selection\label{th:ConsistentSelection}}\\
Let $\xi_n$ be the critical value for the Sargan test in Algorithm 2. Let $\hat{\mathcal{V}}^{dts}$ be the set of instruments selected from Algorithm \ref{algo:ward} and Algorithm \ref{algo:Sargan}. Under Assumptions 1 - 5, for $\xi_n \rightarrow \infty$ and $\xi_n = o(n)$,
$$\lim_{n \rightarrow \infty} P (\hat{\mathcal{V}}^{dts} = \mathcal{V}) = 1 \text{.}$$
\end{theorem}
\noindent
The post-selection 2SLS estimator using the selected valid instruments and controlling for the selected invalid instruments has the same asymptotic distribution as the oracle estimator:
\begin{theorem}{Asymptotic oracle distribution\label{th:AsymptoticOracleDistribution}}\\
Let $\mathbf{Z}_{\hat{\mathcal{I}}} = \mathbf{Z} \setminus \mathbf{Z}_{\hat{\mathcal{V}}^{dts} }$ with $\mathbf{Z}_{\hat{\mathcal{I}}}$ , $\mathbf{Z}_{\hat{\mathcal{V}}^{dts} } $ being the selected invalid and valid instruments respectively. Let $\hat{\beta}_{\hat{\mathcal{V}}^{dts}} $ be the 2SLS estimator given by
$$\hat{\beta}_{\hat{\mathcal{V}}^{dts}} = (\hat{\mathbf{d}}^\prime \mathbf{M_{\mathbf{Z}_{\hat{\mathcal{I}}}}}\hat{\mathbf{d}})^{-1}\hat{\mathbf{d}}^\prime \mathbf{M_{\mathbf{Z}_{\hat{\mathcal{I}}}}} \mathbf{y} \text{.}$$
Under Assumptions 1-5, the limiting distribution of $\hat{\beta}_{\hat{\mathcal{V}}^{dts}} $ is
\begin{equation*}
\sqrt{n} (\hat{\beta}_{\hat{\mathcal{V}}^{dts}} - \beta) \overset{d}{\rightarrow} N(0, \sigma^2_{or})
\end{equation*}
where $\sigma_{or}^{2}$ is the asymptotic variance for the oracle 2SLS estimator given by
\begin{equation*}
\sigma_{or}^{2}=\sigma_{u}^{2}\left(E\left[\mathbf{z}_{i.}d_{i}\right]^{\prime}E\left[\mathbf{z}_{i.}\mathbf{z}_{i.}^{\prime}\right]^{-1}E\left[\mathbf{z}_{i.}d_{i}\right]-E\left[\mathbf{z}_{\mathcal{I},i.}d_{i}\right]^{\prime}E\left[\mathbf{z}_{\mathcal{I},i.}\mathbf{z}_{\mathcal{I},i.}^{\prime}\right]^{-1}E\left[\mathbf{z}_{\mathcal{I},i.}d_{i}\right]\right)^{-1}.\label{sig2bor}
\end{equation*}
with $\mathcal{I}$ being the true set of invalid instruments.
\end{theorem}
\noindent
The proof of Theorem \ref{th:AsymptoticOracleDistribution} follows from the proof of Theorem 2 in \citet*{Guo2018Confidence}, which shows that consistent selection leads to the oracle properties.
\subsection{Computational Complexity}
Recent implementations of the hierarchical agglomerative clustering algorithm have a computational cost of $O(J^2)$ \citep{Amorim2016Ward}. In the downward testing procedure, a maximum of $J-1$ different models needs to be tested. Therefore, the computational cost of the downward testing algorithm is $O(J^2)$. This is an improvement over the CIM, which has a time complexity of $O(J^2 \log(J))$ and where the maximal number of tests is $J(J-1)/2$.
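For a hypothetical $J = 21$, the two bounds on the number of tests compare as follows:

```python
# Maximal number of Sargan tests for a hypothetical number of instruments J
J = 21
dts_tests = J - 1             # downward testing over the Ward path
cim_tests = J * (J - 1) // 2  # upper bound for the Confidence Interval method
```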
\section{Introduction}\label{sec:intro}
Instrumental variables (IV) estimation is a widely used statistical method for analysing the causal effect of treatment variables on an outcome when the causal relationship between them is confounded. Consistent IV estimation requires that all instruments are valid, that is:
\begin{enumerate}[label=(\alph*)]
\item Instruments are associated with the endogenous variables (relevance condition);
\item Instruments do not affect the outcome directly or through unobserved factors (exclusion restriction).
\end{enumerate}
In practice, a main challenge in IV estimation is that some instrumental variables may be invalid in the sense that they fail the exclusion restriction. The key task therefore is to estimate the causal effect in situations where the number of candidate IVs is large and some of them may be invalid.
In this paper, we propose a new method to select the valid instruments and to estimate the causal effect. The method combines the agglomerative hierarchical clustering (AHC) algorithm, a statistical learning algorithm typically employed in cluster analysis, with the Sargan test for overidentifying restrictions. The estimator that we develop relies on the plurality rule \citep*{Guo2018Confidence} which states that the largest group of IVs consists of valid instruments. Instruments are said to form a group if their instrument-specific just-identified estimators converge to the same value. Under the plurality rule, our method achieves oracle selection. This means that the estimator works as well as if the set of true instruments were known: valid instruments can be selected consistently, and the two-stage least squares (2SLS) estimator using the instruments selected as valid has the same limiting distribution as the ideal estimator that uses the set of truly valid instruments.
Our work adds to a growing literature on valid IV selection inspired by \citet{Andrews1999Consistent}, who proposes moment selection criteria and a downward testing procedure. The setting considered in this literature is one where the number of IVs by far exceeds the number of regressors, so that considering all possible overidentified models becomes infeasible; the number of instruments does not, however, grow with the number of observations. The literature we relate to also differs from the one that uses regularization to find an optimal set of instruments, as in \citet*{Belloni2012Sparse}, in that it does not maintain the assumption that all IVs fulfil the exclusion restriction.
\noindent
Our method improves upon the existing methods in that it is the first to allow for multiple endogenous regressors, to deal with weak instruments without calling for a first-stage selection, to accommodate heterogeneous treatment effects, and to handle combinations of these features. Moreover, it outperforms the existing methods in simulations.
A prominent example of a setting with a large number of IVs, all of which have to be valid, is the estimation of the effect of immigration on wages in labor economics. To identify causal effects, researchers often rely on lagged origin-country-specific immigration patterns, measured by previous shares of immigrants. If none of the previous shares by origin country is directly or indirectly correlated with the outcome variable, the causal effect can be consistently estimated.
This assumption is invoked very often in the literature.\footnote{See Table 6 in \citet{Apfel2021Relaxing} for a non-exhaustive list of papers in this literature.}
However, some of the shares may violate the exclusion restriction, as they may affect the wage variable directly through long-term dynamic adjustment processes, or be correlated with unobserved demand shocks.
Another example where a large number of instruments is present is the estimation of the returns to education. Here, interactions of quarter-of-birth and year-of-birth fixed effects have famously been used by \citet*{Angrist1991Does}. These IVs have been shown to suffer from a weak instrument problem by \citet*{Bound1995Problems}. In this context, our method might be helpful in selecting IVs which are strong and valid.
Another field that makes use of a large number of instruments, some of which may be invalid, is Mendelian randomization. Here, researchers use genetic variation to estimate the causal effect of an exposure on a health-related outcome. This field has also inspired much of the initial invalid IV selection literature. An example is the estimation of the effect of C-reactive protein on coronary heart disease \citep{Wensley2011Association}.
In the applied literature, the two most common approaches are to select valid instruments from the set of potential instruments based on economic intuition, or to directly include all the candidate instruments in IV estimation. These approaches can be problematic because including invalid instruments often leads to severely biased results. Therefore, it is important to develop data-driven methods to select the valid instruments when complete knowledge about the candidate instruments’ validity is absent.
Previous work has tackled the IV selection problem in the single endogenous variable case. \citet*{Kang2016Instrumental} propose a selection method based on the least absolute shrinkage and selection operator (Lasso). \citet*{Windmeijer2019Use} make improvements by proposing an adaptive-Lasso-based method that has oracle properties under the assumption that more than half of the candidate instruments are valid (the \textit{majority} rule). \citet*{Guo2018Confidence} propose the Hard Thresholding with Voting method (HT), which has oracle properties under the sufficient and necessary identification condition that the largest group is formed by all the valid instruments (the \textit{plurality} rule). This is a relaxation of the majority rule. Under the same identification condition, \citet*{Windmeijer2021Confidence} propose the Confidence Interval method (CIM), which has better finite-sample performance.
\noindent
Our research adds to the literature in five ways:
\begin{enumerate}
\item We combine agglomerative hierarchical clustering with a traditional statistical test, the Sargan over-identification test, to yield a novel downward testing algorithm for IV selection. This new method provides the theoretical guarantee that under the plurality rule it can select the true set of valid instruments consistently, and is computationally feasible.
\item We extend the method to settings with multiple endogenous regressors. Such an extension is not available for the aforementioned methods, but it is straightforward in our setting.
\item Our method performs well in the presence of weak valid or invalid instruments, which is an advantage over existing methods.
\item We also discuss the application of our method to a setting with heterogeneous treatment effects. Importantly, we can retrieve and inspect the entire group structure, a possibility that the previous methods do not offer.
\item Our algorithm is computationally less complex than the CIM and HT methods.
Also, the only pre-specified parameter for our algorithm is the critical value for the Sargan test, which has been well established in the existing literature to guarantee consistent selection.
\end{enumerate}
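To make the selection logic in item 1 concrete, the sketch below clusters scalar just-identified estimates by single-linkage agglomeration and then walks down the dendrogram from the coarsest partition, accepting the first largest cluster that passes a test. This is only an illustration: the within-cluster spread threshold \texttt{tol} is a hypothetical stand-in for the Sargan test, and all names and numbers are made up.

```python
from itertools import combinations

def ahc_select(betas, tol=0.05):
    """Toy sketch of AHC-based IV selection (spread check replaces Sargan)."""
    clusters = [[i] for i in range(len(betas))]
    history = [list(clusters)]                     # finest partition first
    while len(clusters) > 1:
        # merge the pair of clusters with the smallest single-linkage distance
        a, b = min(combinations(range(len(clusters)), 2),
                   key=lambda p: min(abs(betas[i] - betas[j])
                                     for i in clusters[p[0]]
                                     for j in clusters[p[1]]))
        merged = clusters[a] + clusters[b]
        clusters = [c for k, c in enumerate(clusters) if k not in (a, b)]
        clusters.append(merged)
        history.append(list(clusters))
    # downward testing: start from the coarsest partition and accept the
    # largest cluster once it "passes the test" (here: small spread)
    for partition in reversed(history):
        largest = max(partition, key=len)
        vals = [betas[i] for i in largest]
        if max(vals) - min(vals) <= tol:
            return sorted(largest)
    return []

# three estimates agree (the plurality of valid IVs), two are outliers
print(ahc_select([1.00, 1.01, 0.99, 2.5, 3.7]))   # -> [0, 1, 2]
```

In this toy example the first three estimates form the plurality cluster and are selected as the valid set; the actual algorithm replaces the spread check by the Sargan test at each level of the dendrogram.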
We also discuss implications in settings with local-to-zero violations of the exclusion restriction.
We conduct Monte Carlo simulations to examine the performance of our method, and compare it with two existing methods: the Hard Thresholding method and the Confidence Interval method. We compare with these two methods, because they also rely on the plurality rule. The simulation results show that our method achieves oracle performance in both single and multiple endogenous regressors settings in large samples when all the instruments are strong. Also, our method works well when some of the candidate instruments are weak, outperforming HT and CIM.
We illustrate the strengths of our method with two empirical applications.
We apply our method to the estimation of the short- and long-run effects of immigration on wages in the US by reproducing and revisiting the results of \citet*{Basso2015Association}. In this example, we have two endogenous regressors and potentially weak \textit{and} invalid instruments. The results of \citet*{Angrist1991Does} on the returns to education are also revisited. Here, the main concern is that instruments are weak. In both applications our estimator indicates that the actual effects might be much larger than suggested by the standard 2SLS estimates. In particular for the second application, the first-stage F-statistic doubles after preselection via AHC. We also provide an R-package that makes implementation of our method easy in practice.
The remainder of this paper is structured as follows. In Section \ref{sec:ModelAndAssumptions}, we state the model and assumptions and illustrate some of the well-established properties of the just-identified 2SLS estimator. In Section \ref{sec:IVSelectionAndEstimation}, we describe the basic method and the algorithm when there is a single endogenous variable, and investigate its asymptotic properties. In Section \ref{sec:Extensions}, we present extensions to settings with multiple endogenous regressors and weak instruments, and discuss our method in the presence of heterogeneous treatment effects. In Section \ref{sec:MonteCarlo}, we provide Monte Carlo simulation results. In Section \ref{sec:Applications}, we apply our method to estimate the effects of immigration on wages and to the returns to education and add a discussion of how our method could be applied to the estimation of the effect of pollutants on human health. Section \ref{sec:Conclusion} concludes.
\section{Model and Assumptions}\label{sec:ModelAndAssumptions}
In the following, we introduce notational conventions used throughout this paper. Matrices are in upper case and bold. Vectors are in lower case and bold. Scalars are in lower case and not in bold.
Let $\mathbf{y}$ be an $n \times 1$ vector of the observed outcome,
$\mathbf{d}_1$, ..., $\mathbf{d}_P$ be $P$ endogenous regressor vectors (each $n \times 1$), which can be subsumed in an $n \times P$ matrix $\mathbf{D}$, and
$\mathbf{z}_1$, ..., $\mathbf{z}_J$ be $J$ instrument vectors, which can be subsumed in an $n \times J$ matrix $\mathbf{Z}$.
Let the error terms be $\mathbf{u}$ and $\bm{\varepsilon}_p$ for $p \in \{1, ..., P\}$, which are all $n \times 1$ error vectors and are correlated with $\sigma_{up} := cov(\mathbf{u}, \bm{\varepsilon}_p)$. The latter covariances measure the endogeneity of the regressors in $\mathbf{D}$. The $P \times 1$ coefficient vector of interest is $\bm{\beta}$.
The $J \times P$ matrix $\bm{\gamma}$ contains the first-stage coefficients.\footnote{To be consistent with the literature, we denote this matrix in lower case because upper case $\bm{\Gamma}$ denotes the reduced-form parameters.}
Let $g$ be the number of instruments in the set of valid instruments, $\mathcal{V}$, $s$ be the number of instruments in the set of invalid instruments, $\mathcal{I}$, and $J = g + s$ be the total number of instruments in the overall set of instruments, $\mathcal{J}$.
The arithmetic mean of a variable $x$ is defined as $\mu_x = \frac{\Sigma_{i=1}^{n} x_i}{n}$, and the mean of a vector is the vector of dimension-wise arithmetic means. $\lVert \cdot \rVert$ denotes the L2-norm, and $|\cdot|$ denotes cardinality when used around a set and absolute value when used around a scalar quantity. The symbol $\&$ denotes the logical conjunction, \textit{and}. The $n \times n$ projection matrix is $\mathbf{P}_X = \mathbf{X}(\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'$, the annihilator matrix is $\mathbf{M}_X = \mathbf{I} - \mathbf{P}_X$, and $\hat{\mathbf{D}} = \mathbf{P}_Z \mathbf{D}$ denotes the fitted values. Throughout the paper, we assume that $J$, $Q$ and $g$ are fixed and $P < J$.
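These matrix definitions can be checked numerically; the following is a quick sketch with simulated data (dimensions and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, J = 200, 3
Z = rng.standard_normal((n, J))                  # instrument matrix
D = Z @ rng.standard_normal((J, 1)) + rng.standard_normal((n, 1))

P_Z = Z @ np.linalg.inv(Z.T @ Z) @ Z.T           # projection matrix P_Z
M_Z = np.eye(n) - P_Z                            # annihilator matrix M_Z
D_hat = P_Z @ D                                  # fitted values

assert np.allclose(P_Z @ P_Z, P_Z)               # P_Z is idempotent
assert np.allclose(M_Z @ Z, 0)                   # M_Z annihilates Z
```

The trace of $\mathbf{P}_Z$ equals its rank $J$, which is a convenient sanity check in larger simulations.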
\subsection{Model Setup}
We start from the model setup with a single endogenous regressor, i.e., throughout Sections 2 and 3, $P = 1$. The extension of our method to the case of multiple endogenous regressors can be found in Section 4.1. All proofs in the Appendix are for general $P$.
We adopt the following observed data model which takes the potentially invalid instruments into account:
\begin{equation}\label{eq:Structural}
\mathbf{y} = \mathbf{d} \beta + \mathbf{Z} \bm{\alpha} + \mathbf{u} \text{,}
\end{equation}
with $E[u_i | \mathbf{z}_i] = 0$. The linear projection of $\mathbf{d}$ on $\mathbf{Z}$ is
\begin{equation}\label{eq:FirstStage}
\mathbf{d} = \mathbf{Z} \bm{\gamma} + \bm{\varepsilon} \text{.}
\end{equation}
The vector $\bm{\alpha}$ is $J \times 1$ and has entries $\alpha_j$, each of which is associated with an individual instrument. Each entry indicates which of the instruments has a direct effect on the outcome variable and hence is invalid. Following a large econometric and statistical literature, such as \citet*{Masten2021Salvaging}, \citet*{Conley2012Plausibly}, \citet*{Guo2018Confidence} or \citet*{Kang2016Instrumental}, we define a valid instrument as:
\begin{definition}
For $j = 1, ..., J$, instrument $\mathbf{z}_j$ is valid if $\alpha_j = 0$. If $\alpha_j \neq 0$, then $\mathbf{z}_j$ is an invalid instrument.
\end{definition}
\noindent Following the cited literature, we restrict our attention to violations of the exclusion restriction. This could be extended to violations of exogeneity, as in $Cov(\mathbf{Z},\mathbf{e})\neq0$, where $\mathbf{e} = \mathbf{Z}\bm{\alpha} + \mathbf{u}$. The consequences of this important conceptual difference are, however, beyond the scope of this paper, and we leave them to future work.
The ideal model which selects the truly valid instruments as valid and controls for the set of invalid instruments is the oracle model, defined as follows:
\begin{equation}\label{eq:Oracle}
\mathbf{y} = \mathbf{d} \beta + \mathbf{Z}_{\mathcal{I}} \bm{\alpha}_{\mathcal{I}} + \mathbf{u} = \mathbf{X}_{\mathcal{I}} \bm{\theta}_{\mathcal{I}} + \mathbf{u} \text{,}
\end{equation}
where $\mathbf{X}_\mathcal{I} = (\mathbf{d} \quad \mathbf{Z}_\mathcal{I})$ and $\bm{\theta}_\mathcal{I} = (\beta \quad \bm{\alpha}_\mathcal{I}')'$.
\subsection{Assumptions}
The assumptions that follow are the same as in \citet*{Windmeijer2021Confidence}.
The first assumption is a rank assumption.
\begin{assumption}\label{ass:RankAssumption}
Rank assumption.\\
$$E(\mathbf{z}_i \mathbf{z}_i') = \mathbf{Q} \text{ with } \mathbf{Q} \text{ a finite and full rank matrix.}$$
\end{assumption}
The second assumption makes sure that the just-identified estimators all exist.
\begin{assumption}\label{ass:FirstStageSingle} Existence of just-identified estimators.
$$\bm{\gamma} = (E[\mathbf{z}_i\mathbf{z}_i'])^{-1}E[\mathbf{z}_i d_i], \quad \gamma_j \neq 0 \text{ for } j = 1, ..., J.$$
\end{assumption}
\begin{assumption}\label{ass:ErrorStructure} Error structure.\\
Let $\mathbf{w}_i = (u_i \quad \varepsilon_i)'$. Then, $E(\mathbf{w}_i)=0$ and $E[\mathbf{w}_i \mathbf{w}_i']= \left( \begin{array}{rr}
\sigma_u^2 & \sigma_{u, \varepsilon} \\
\sigma_{u, \varepsilon} & \sigma_{\varepsilon}^2
\end{array}\right)
= \bm{\Sigma} $ with \\
$Var(u_i) = \sigma_u^2, \quad Var(\varepsilon_i) = \sigma_{\varepsilon}^2, \quad Cov(u_i, \varepsilon_i) = \sigma_{u, \varepsilon}$, and the elements of $\bm{\Sigma}$ are finite.
\end{assumption}
\begin{assumption}\label{ass:Asymptotics}
\begin{align}
plim (n^{-1} \mathbf{Z}' \mathbf{Z}) = E(\mathbf{z}_i \mathbf{z}_i') = \mathbf{Q}
\quad &; \quad plim(n^{-1} \mathbf{Z}' \mathbf{d}) = E(\mathbf{z}_i d_i) \notag\\
plim(n^{-1} \mathbf{Z}' \bm{u})= E(\mathbf{z}_i u_i) = 0 \quad &; \quad
plim(n^{-1} \mathbf{Z}' \bm{\varepsilon}) = E(\mathbf{z}_i \varepsilon_i) = 0 \notag\\
plim(n^{-1}\sum\limits_{i=1}^{n} \mathbf{w}_i) = 0 \quad &; \quad plim(n^{-1}\sum\limits_{i=1}^{n} \mathbf{w}_i \mathbf{w}_i') = \bm{\Sigma} \text{.} \notag
\end{align}
\end{assumption}
\begin{assumption}\label{ass:ZwNormal}
$\frac{1}{\sqrt{n}}\sum\limits_{i=1}^n vec(\mathbf{z}_i \mathbf{w}_i') \overset{d}{\rightarrow} N(0,\bm{\Sigma \otimes \mathbf{Q}}) \text{ as } n\rightarrow \infty$.
\end{assumption}
\noindent The assumptions above will be modified when there is more than one endogenous regressor. Assumption \ref{ass:ZwNormal} is made for ease of exposition, but the method can be easily extended to accommodate heteroskedasticity, clustering and serial correlation.
From (\ref{eq:Structural}) and (\ref{eq:FirstStage}), we have the outcome-instrument reduced form
\begin{equation*}
\mathbf{y} = \mathbf{Z}\bm{\Gamma} +\bm{\epsilon}
\end{equation*}
where $\Gamma_j = \gamma_j \beta + \alpha_j$.
Each individual instrument $\mathbf{z}_j$ is associated with a just-identified estimator for $\beta$, denoted by $\hat{\beta}_j$, which is defined as the two-stage least squares (2SLS) estimator using $\mathbf{z}_j$ as the single valid instrument and treating the remaining IVs as controls. There are $J$ such just-identified IV estimators, which we write as in \citet*{Windmeijer2021Confidence}:
\begin{equation*}
\hat{\beta}_{j}= \frac{\hat{\Gamma}_j}{\hat{\gamma}_j}
\end{equation*}
where $\hat{\Gamma}_j$ and $\hat{\gamma}_j$ are the OLS estimators for $\Gamma_j$ and $\gamma_j$ respectively. Then we have
\begin{property}Properties of just-identified estimates.\label{prop:JustIdentified}\\
Under Assumptions \ref{ass:FirstStageSingle} to \ref{ass:ZwNormal} it holds that
\begin{equation*}
plim(\hat{\beta}_{j})= plim\left(\frac{\hat{\Gamma}_j}{\hat{\gamma}_j}\right) = \beta + \frac{\alpha_j}{\gamma_j}
\end{equation*}
\end{property}
\noindent Hence, the inconsistency of $\hat{\beta}_j$ is $plim(\hat{\beta}_{j}) - \beta = \frac{\alpha_j}{\gamma_j} = q$.
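Property \ref{prop:JustIdentified} is easy to verify by simulation; the numbers below (one invalid instrument with $\alpha_3 = 0.3$ and $\gamma_3 = 0.5$, so $\beta_3 = 1.6$) are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta = 200_000, 1.0
gamma = np.array([0.5, 0.5, 0.5])       # first-stage coefficients
alpha = np.array([0.0, 0.0, 0.3])       # third IV is invalid

Z = rng.standard_normal((n, 3))
eps = rng.standard_normal(n)
u = 0.5 * eps + rng.standard_normal(n)  # endogeneity: cov(u, eps) != 0
d = Z @ gamma + eps
y = d * beta + Z @ alpha + u

# reduced-form and first-stage OLS coefficients; beta_j = Gamma_j / gamma_j
Gamma_hat = np.linalg.lstsq(Z, y, rcond=None)[0]
gamma_hat = np.linalg.lstsq(Z, d, rcond=None)[0]
beta_hat = Gamma_hat / gamma_hat        # approx. beta + alpha_j / gamma_j
```

In this draw, \texttt{beta\_hat} is close to $(1.0, 1.0, 1.6)$, i.e. to $\beta + \alpha_j/\gamma_j$ for each instrument.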
We define a group following the definition in \citet*{Guo2018Confidence} as:
\begin{definition}\label{def:Group}
A group $\mathcal{G}_q$ is a set of IVs whose just-identified estimands share the same value $\beta_j = \beta + q$:
$$\mathcal{G}_{q} = \{j: \beta_j = \beta + q\}$$
\end{definition}
\noindent
Then the group consisting of all valid instruments is
$$\mathcal{G}_0 = \{j: \beta_j = \beta\} \text{.}$$
Let the number of groups be $Q$, which is finite because the number of IVs $J$ is also finite.
The next assumption is the key assumption for identification. It states that among the $Q$ groups formed by $\mathbf{z}_1, ..., \mathbf{z}_J$, the largest group is composed of all the valid IVs, where, as in Definition \ref{def:Group}, a group is a set of instruments whose just-identified estimators converge to the same value $\beta + q$.
\begin{assumption}\label{ass:plurality}Plurality Rule.\\
$$g > \underset{q\neq0}{\max} |\mathcal{G}_q| $$
\end{assumption}
\iffalse
\subsection{Calculation of just-identified estimates}
Let $\mathbf{\dot{Z}}$ be the $N \times (\binom{J}{P}\cdot P)$ matrix of all $\binom{J}{P}$ combinations of the vectors in $\mathbf{Z}$. Then the just-identified estimates can be written as
$$\hat{\mathbf{\beta}} = \left( (\mathbf{I}_{\binom{J}{P}} \otimes \mathbf{1}_P) \times \mathbf{\dot{Z}}' \mathbf{D} \right)^{-1} \mathbf{\dot{Z}}' Y $$
\fi
\iffalse
\begin{remark}
\textbf{Properties}\\
When valid IVs are used as valid, and invalid IVs are used as controls (IVs are used as valid for which $\alpha_j=0$ holds), under Assumptions 1 to 4:
\begin{equation}
plim(\hat{\beta}_{IV}^{[j]}) = \beta
\end{equation}
When $\alpha_j \neq 0$ for at least one IV used:
\begin{equation}
plim(\hat{\beta}_{IV}^{[j]}) = \beta + q
\end{equation}
\end{remark}
\textbf{Proof:}
Assume that $P$ invalid IVs have been chosen as valid.
Partition the instrument matrix into $\mathbf{Z} = (\mathbf{Z_{\mathcal{I}}} \quad \mathbf{Z_{\mathcal{V}}}) = (\mathbf{Z}_{[j]} \quad \mathbf{Z}_{[-j]} \quad \mathbf{Z_{\mathcal{V}}})$. The first matrix is an $n \times P$ matrix that includes the IVs selected as valid. The second part consists of invalid IVs selected as invalid which are hence partialled out and the third part is an $n \times g$ matrix of valid IVs. Split $\alpha$ into three separate vectors $\alpha_j$, $\alpha_{-j}$ and $\alpha_{\mathcal{V}}$ of length $P$, $L-g-P$ and $g$.\footnote{These lengths apply when $P$ invalid IVs are used as valid.} $\mathbf{M^{[-j]}}$ is the projection matrix onto the orthogonal space to the instrument matrix which excludes $P$ instruments. There are $\binom{L}{P}$ such matrices. Then
\begin{align}\label{eq:inconsistency}
\begin{split}
plim(\hat{\beta}_{IV}^{[j]}) &= plim((\hat{\mathbf{D}}' \mathbf{M^{[-j]}} \mathbf{D})^{-1} \hat{\mathbf{D}}' \mathbf{M^{[-j]}} \mathbf{Y} )\\
& = plim((\hat{\mathbf{D}}' \mathbf{M^{[-j]}} \mathbf{D})^{-1} \hat{\mathbf{D}}' \mathbf{M^{[-j]}} (\mathbf{D} \beta_0 + \mathbf{Z}_{\mathcal{I}} \alpha_{\mathcal{I}} + \mathbf{Z}_{\mathcal{V}} \alpha_{\mathcal{V}} + \varepsilon))\\
& = \beta_0 + plim((\hat{\mathbf{D}}' \mathbf{M^{[-j]}} \mathbf{D})^{-1} \hat{\mathbf{D}}' \mathbf{M^{[-j]}}( \mathbf{Z}_{[-j]} \alpha_{-j} + \mathbf{Z}_{[j]} \alpha_{j}))\\
& = \beta_0 + plim((\hat{\mathbf{D}}' \mathbf{M^{[-j]}} \mathbf{D})^{-1} \hat{\mathbf{D}}' \mathbf{M^{[-j]}}( \mathbf{Z}_{[j]} \alpha_{j}))
\end{split}
\end{align}
where the third equality holds because $E(\mathbf{Z}' \varepsilon)=0$ and $\alpha_{\mathcal{V}}=0$, and the fourth equation holds because $\mathbf{M^{[-j]}} \mathbf{Z}_{[-j]} = 0$. When at least one invalid IV is used as valid, the second part on the last row of \ref{eq:inconsistency}, which we call $q$ is not zero in general.\\
If only valid IVs are chosen as valid and the invalid IVs are partialled out, then \ref{eq:inconsistency} becomes
\begin{align}\label{eq:inconsistency2}
\begin{split}
plim(\hat{\beta}_{IV}^{[j]})
& = \beta_0 + plim((\hat{\mathbf{D}}' \mathbf{M^{[-j]}} \mathbf{D})^{-1} \hat{\mathbf{D}}' \mathbf{M^{[-j]}}( \mathbf{Z}_{[-j]} \alpha_{\mathcal{I}} + \mathbf{Z}_{[j]} \alpha_{\mathcal{V}}))\\
& = \beta_0
\end{split}
\end{align}
\qed
\fi
\section{Introduction}
\label{intro}
Recently, neutrino physics has given us many surprises, with strong
evidence for the conversion of flavor neutrinos into other types of
neutrinos. Analyses of data from solar, atmospheric and reactor
neutrinos have shown that no other mechanism can explain all the
data unless neutrinos are
massive~\cite{Mohapatra:2006gs,Smirnov:2007pw}. These experiments
provide the first strong evidence for non-conservation of
family lepton number, and this may indicate that new symmetries and
interactions are the source of this phenomenon.
Experimental evidence of massive neutrinos implies that the Minimal
Standard Model (SM) is incomplete. The simplest extension would
be the inclusion of right-handed sterile neutrinos, which would allow
Dirac mass terms for neutrinos. Despite its simplicity, this approach
does not help us to understand the neutrino mass scale or predict
the neutrino masses. Due to the large gap between the neutrino mass scale
and the other SM scales, several mechanisms have been suggested to
generate neutrino masses, relating this mass scale to new physics.
In many of these models the masses are of the Majorana type or a mix
between Majorana and Dirac types, which implies non-conservation of
lepton number.
As is well known, lepton number is an accidental global symmetry
($U_{L}(1)$) of the Standard Model. So, if the neutrino mass matrix
includes Majorana terms, lepton number is broken either explicitly
or spontaneously. If lepton number ($L$) is indeed a global
symmetry\footnote{In some grand unified theories (GUTs), lepton number
is gauged and becomes a subgroup of a larger gauge symmetry.%
}, its spontaneous breaking will generate a Goldstone boson, usually
called the Majoron~\cite{Mohapatra, Gelmini}. In this case the breaking of
$L$ sets a new scale and
requires a scalar which carries lepton number and acquires a non-null
vacuum expectation value (vev). Several extensions of the SM allow
spontaneous $L$ breaking
and predict the existence of the Majoron. However, the simplest extensions
(with a triplet scalar) are excluded by the experimental results
of LEP on the invisible $Z^{0}$ decay.
Another important class of models which predicts the existence of
Majorons are supersymmetric extensions of the Standard Model with
spontaneous R parity breaking. In these models, the introduction of
anti-neutrino superfields ($N^{C}$) and new singlet superfields
($\Phi$) (which contain neutral leptonic scalars), allows
spontaneous breaking of lepton number~\cite{Masiero, Romao, Romao2}.
In almost all of these models the Majoron will be the imaginary part
of some linear combination of the sneutrinos, the scalar components of
the Higgs superfields ($H_{u}$ and $H_{d}$) and the $\Phi$
superfields. Therefore we may safely assume, as a model-independent
coupling, the following interaction term between $J$ and
$\nu$\footnote{The same is valid for non-supersymmetric models as well.}:
\begin{eqnarray}
\mathcal{L}=\sum_{\alpha,\beta=e,\mu,\tau}
ig_{\alpha\beta}\bar{\nu}_{\alpha}\gamma^{5}\nu_{\beta}J\mbox{, }
\end{eqnarray}
where $g_{\alpha\beta}$ is a general complex coupling matrix in the
flavor basis. Because in most models $J$ is essentially a singlet
(avoiding the constraints imposed by the LEP results), the above
couplings are usually the most
relevant ones for phenomenological analyses (at least at low energies).
In most models we must also include couplings between neutrinos
and a new light scalar (which we call $\chi$) with the same couplings as $J$:
\begin{eqnarray}
\mathcal{L}=\sum_{\alpha,\beta=e,\mu,\tau}
g_{\alpha\beta}\bar{\nu}_{\alpha}\nu_{\beta}\chi\mbox{.}
\end{eqnarray}
Usually the neutrino masses and mixings will depend on the vevs associated
with the spontaneous breaking of $L$ and on the matrix $g$. In this
context, knowledge of the couplings between neutrinos and Majorons
may help us to understand the neutrino mass scale. However, this relation
is very model dependent and may be very hard to exploit in practice.
Trying to make our results as model independent as possible, we will
make no assumptions on $g_{\alpha\beta}$ and present our results
with and without the existence of the massive scalar $\chi$. Nevertheless, assuming Majorana neutrinos (which is reasonable since lepton number is violated), bounds on $g_{\alpha \beta}$ may be transformed to the mass basis through the relation
\begin{equation}
G=U^{T}gU
\end{equation}
where $G_{ij}$ is the neutrino-Majoron coupling matrix in the basis where the neutrino mass matrix is diagonal ($M=diag(m_{1},m_{2},m_{3})$) and $U$ rotates the mass eigenstates to the flavor eigenstates (see Section \ref{iid}).
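As a numerical illustration of this change of basis, consider a two-flavor toy example with a real (orthogonal) mixing matrix; the angle and couplings below are made-up values, not fitted ones:

```python
import numpy as np

theta = np.deg2rad(33.0)                         # illustrative mixing angle
U = np.array([[np.cos(theta),  np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])  # orthogonal mixing matrix
g = np.diag([1e-5, 2e-5])                        # made-up flavor-basis couplings

G = U.T @ g @ U                                  # couplings in the mass basis
# the rotation is invertible: U G U^T recovers the flavor-basis matrix
assert np.allclose(U @ G @ U.T, g)
```

Because $U$ is orthogonal here, $U G U^{T}$ recovers $g$ exactly; in the full three-flavor case $U$ is the (possibly complex) leptonic mixing matrix and the same algebra applies.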
Majoron models can also be interesting from the cosmological point of
view because they can affect the bounds on neutrino masses from large-scale
structure~\cite{beacom1}. Neutrinos coming from astrophysical
sources can also be significantly affected by fast decays, for which
the only mechanism not yet ruled out is the coupling of neutrinos
to Majorons. This can affect the very high energy region, strongly
changing the flavor ratios between different neutrino
species~\cite{beacom2}, or the lower energy region, as for supernova
neutrinos~\cite{yasaman1,tomas2}.
\begin{table}
\begin{tabular}{|c|c|c|c|}
\hline Category& Upper Bound& Process & Reference \tabularnewline
\hline \hline
\hline
& & & \tabularnewline
solar neutrino constraint& $|G_{21}|^2 < 4\times 10^{-6}
\left(\dfrac{7\times 10^{-5} ~{\mbox eV}^2}{\Delta
m^2_{\myodot}}\right)$& $\nu_{2} \rightarrow J + \nu_{1}$&
\cite{beacom3}\tabularnewline
& & & \tabularnewline
& & & \tabularnewline \hline
& & & \tabularnewline
supernova bounds & $|g_{ee}| < (1-20) \times 10^{-5}
\mbox{,} \quad 2 \times 10^{-11}<
|g_{e\mu}||g_{\mu\mu}| < 3 \times 10^{-10}$& $\nu
\rightarrow J+\nu\mbox{, }\nu+\nu\rightarrow J$& \cite{yasaman1}
\tabularnewline
& $|g_{\alpha\beta}| < 3 \times 10^{-7}\mbox{ or } |g_{\alpha\beta}|
> 2 \times 10^{-5}$& $\nu
\rightarrow J+\nu^{\prime}\mbox{, }\nu+\nu\rightarrow J$& \cite{tomas2} \tabularnewline
& & & \tabularnewline
& & & \tabularnewline \hline
& & & \tabularnewline
$\beta\beta 0\nu$ decay & $|g_{ee}| < 2 \times
10^{-4}$ &
$(A,Z)\rightarrow (A,Z+2)+2e+J$&
\cite{dbeta} \tabularnewline
& & & \tabularnewline
& & & \tabularnewline \hline
& & & \tabularnewline
microwave background data & $G_{ij} \leq \, 0.61\times 10^{-11}\, m_{50}^{-2}\quad \mbox{ and }
\quad G_{ii} \leq 10^{-7}$ & $\nu
\rightarrow J+\nu^{\prime}$ & \cite{Hannestad:2005ex}
\tabularnewline
& & & \tabularnewline
& & & \tabularnewline \hline
& & & \tabularnewline
meson decay &
$\sum_{l=e,\mu,\tau}|g_{el}|^2 < 3 \times 10^{-5} \mbox{,} \quad
\sum_{l=e,\mu,\tau} |g_{\mu l}|^2 < 2 \times 10^{-4}$
&
$\pi /K\rightarrow e+\nu +J$& ~\cite{Barger,Gelmini2,Britton:1993cj} \tabularnewline
& & & \tabularnewline
& & & \tabularnewline \hline
\end{tabular}
\caption{Some of the previous bounds on neutrino-Majoron couplings
from different sources. The last two columns show the process
used to constrain the couplings and the respective references.}
\label{Tab0}
\end{table}
Presently we know that the role of neutrino-Majoron couplings is
marginal in solar and atmospheric neutrinos; therefore it is possible
to put the limit~\cite{beacom3}
\begin{eqnarray}
|G_{21}|^2 < | g_{\alpha\beta} U^{*}_{\alpha 2}U_{\beta 1}|^2 <
4\times 10^{-6} \left(\dfrac{7\times 10^{-5} ~{\mbox eV}^2}{\Delta
m^2_{\myodot}}\right)
\end{eqnarray}
where $G_{21}$ is the neutrino-Majoron coupling in the mass basis
and $\Delta m^2_{\myodot}$ is the solar squared-mass difference
($\Delta m^2_{21}~\equiv~ m^2_{2}-m^2_{1}$). The observation of the
1987A explosion ensures that a large part of the binding energy of
the supernova is released into neutrinos, which can be translated into
the bounds~\cite{yasaman1}
\begin{eqnarray}
|g_{ee}| < (1-20) \times 10^{-5}\mbox{,} \quad 2 \times 10^{-11}<
|g_{e\mu}||g_{\mu\mu}| < 3 \times 10^{-10}
\end{eqnarray}
Such bounds were read off from Fig.~1 and Figs.~3 and 4 of
Ref.~\cite{yasaman1} for $g_{ee}$ and $|g_{e\mu}||g_{\mu\mu}|$,
respectively. Also, limits from the decay and scattering of Majorons
inside the supernova give the bounds~\cite{tomas2}
\begin{eqnarray}
|g_{\alpha\beta}| < 3 \times 10^{-7}\mbox{ or } |g_{\alpha\beta}|
> 2 \times 10^{-5}.
\end{eqnarray}
The first limit appears because, if the neutrino-Majoron coupling is
strong enough, the supernova energy is drained by Majoron emission
and no explosion occurs; the second limit appears because, if the
neutrino-Majoron coupling is too strong, the Majoron becomes trapped
inside the supernova and no constraint is possible.
Neutrinoless double beta decay ($\beta\beta 0\nu$) experiments provide the constraints
\begin{eqnarray}
|g_{ee}| < 2 \times 10^{-4}\mbox{ and } |g_{ee}| < 1.5
\end{eqnarray}
where the first (second) bound corresponds to Majorons with lepton
number $L=0$ ($L=2$) at 90\% C.L.~\cite{dbeta}. Also, no evidence of
Majoron production was seen in pion and kaon decays, and
therefore~\cite{Barger,Gelmini2,Britton:1993cj}
\begin{eqnarray}
\sum_{l=e,\mu,\tau} |g_{el}|^2 < 3 \times 10^{-5} \mbox{,} \quad
\sum_{l=e,\mu,\tau} |g_{\mu l}|^2 < 2 \times 10^{-4}.
\end{eqnarray}
Besides the bounds mentioned above, there are bounds that depend on
the rate of neutrino decay ($\nu \rightarrow \nu^{\prime} J$). Such
a reaction depends on the neutrino lifetime, $\tau$, which is a
function of the neutrino-Majoron couplings in the mass basis, which we
denote by $G$. Without additional assumptions on the neutrino hierarchy,
we cannot relate directly the neutrino-Majoron couplings and the
neutrino lifetime. One example is Ref.~\cite{Hannestad:2005ex},
which, using cosmic microwave background data, puts the stronger
constraint
\begin{equation}
G_{ij} \leq \, 0.61\times 10^{-11}\, m_{50}^{-2}\quad \mbox{ and }
\quad G_{ii} \leq 10^{-7} \label{hann-raff}
\end{equation}
where $m_{50}=m/(50\;\mathrm{meV})$ and $G$ is the neutrino-Majoron
coupling matrix in the mass basis: $G_{ii}$ and $G_{ij}$ are,
respectively, the diagonal and off-diagonal elements of $G$.
Future experiments can improve the present bounds by many orders of
magnitude; we refer to Refs.~\cite{Fogli:2004gy,Serpico:2007pt} for
details.
A summary of some of the previous bounds is shown in
Table~\ref{Tab0}, where we also show the relevant process
used to constrain the neutrino-Majoron couplings. Almost all the
bounds shown in Table~\ref{Tab0} assume one particular model or
class of models. Probably the most model-independent results are
from~\cite{Barger,Gelmini2,Britton:1993cj}, but in this case they
assume not only neutrino-Majoron couplings but also neutrino-$\chi$
couplings to compute the upper bounds shown in Table~\ref{Tab0}.
Here we will try to improve these limits or make them more
model independent through an analysis of both meson and
lepton decays. In Section \ref{iia} we discuss the limits from
pion, kaon, D, D$_s$ and B decays, including decays of mesons
into taus; in Sections \ref{iib} and \ref{iic} we include bounds from
lepton decays (from the total rate and from spectral distortions).
We conclude by transforming our bounds to the mass basis in Section \ref{iid}.
\section{Results}
\label{ii}
Here we try to improve the bounds on neutrino-Majoron couplings through
the analysis of their possible effects on meson and lepton decays, as well
as on the spectrum of the muon decay. We also rewrite our results
in the mass basis, which in many cases is more relevant for model
analysis. All the bounds obtained here are at 90\% C.L. and were obtained through
the chi-square method, assuming Gaussian distributions and including
both statistical and theoretical errors as follows
\begin{equation}
\chi^{2}=\dfrac{\left(R_{data}-R_{theor} \right)^2}{\sigma^2_{data}+\sigma^2_{theor}}
\label{chi}
\end{equation}
where $R_{data}$, $R_{theor}$, $\sigma_{data}$ and $\sigma_{theor}$
are, respectively, the experimental value of the rate $R$, the theoretical
prediction for the process (assuming an incoherent sum of the SM rate and the
Majoron contribution), the experimental error and the theoretical error.
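For a single parameter, a 90\% C.L. bound corresponds to $\chi^{2} = 2.71$. The helper below sketches how an upper bound on $|g|^{2}$ follows from Eq.(\ref{chi}) when the Majoron contribution enters linearly in $|g|^{2}$; the rate, prediction and errors are made-up placeholder numbers, not the values used in our fits:

```python
import math

def g2_upper_bound(R_data, R_SM, k, sig_data, sig_theor, dchi2=2.71):
    """90% C.L. upper bound on |g|^2, assuming R_theor = R_SM + k*|g|^2.

    Solves chi^2 = (R_data - R_theor)^2 / (sig_data^2 + sig_theor^2) = dchi2
    for the larger root in |g|^2 (illustrative one-parameter treatment).
    """
    sigma = math.sqrt(sig_data**2 + sig_theor**2)
    return max(0.0, (R_data - R_SM + math.sqrt(dchi2) * sigma) / k)

# made-up numbers: data agree with the SM, so the bound is set by the error
bound = g2_upper_bound(R_data=1.00, R_SM=1.00, k=50.0,
                       sig_data=0.03, sig_theor=0.04)
```

When the data agree with the SM, the bound reduces to $\sqrt{2.71}\,\sigma/k$, i.e. it is set entirely by the combined error.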
\subsection{Meson decay rates}
\label{iia}
The process $M\rightarrow l+\nu_{l}$ was extensively studied in the
literature and has the following total
decay rate~\cite{PDG}:
\begin{equation}
\Gamma_{SM}=\dfrac{G_{F}^{2}|V_{qq'}|^{2}}{8\pi}f_{m}^{2}m_{l}^{2}m_{M}
\left(1-\dfrac{m_{l}^{2}}{m_{M}^{2}}\right)^{2} f_{rad},
\label{MesonDecay}
\end{equation}
where $f_{rad}$ accounts for radiative corrections. In
Eq.(\ref{MesonDecay}), $m_{M}$ and $m_{l}$ are the meson and lepton masses,
$G_{F}$ is the Fermi constant, $f_{m}$ is the
meson decay constant and $V_{qq'}$ is the respective
Cabibbo-Kobayashi-Maskawa (CKM) matrix element. Unless specified otherwise,
we use the quantities as listed in the Particle Data Group
compilation~\cite{PDG}. We also use the same source to compute the relevant radiative
corrections to the meson decay rates. An important
feature of Eq.(\ref{MesonDecay}) is that, because it is a 2-body decay,
$\Gamma_{SM}$ is proportional to $m_{l}^{2}$, as it must be to conserve
angular momentum.
In the last few years several of the meson decay constants have been
calculated through lattice QCD~\cite{LatticeQCD}, which can be used
to obtain stronger theoretical predictions. We used both the
experimental~\cite{PDG} and theoretical
values~\cite{Blattice,Kdecaycte,Dsdecaycte1,Dsdecaycte2} of $f_{m}$
in our calculations, but in most cases the results differ only by 10\%.
For this reason we will only show the results using the experimental
values of $f_{m}$.
Due to the neutrino-Majoron couplings, the following process also
contributes to the meson decay rates:
\begin{eqnarray}
M\rightarrow l+\nu_{l'}+J,
\end{eqnarray}
where $J$ stands for the Majoron and $\nu_{l'}$ may be of any neutrino
flavor. A full analytic expression for the total decay rate is
given in~\cite{Gelmini2}. Here we show a simpler result, valid in the limit
$m_{l}=m_{\nu}=0$:
\begin{eqnarray}
\Gamma_{J}=\dfrac{G_{F}^{2}|V_{qq'}|^{2}}{768\pi^{3}}f_{m}^{2}m_{M}^{3}
\sum_{m=e,\mu,\tau}|g_{l m}|^{2}
\end{eqnarray}
This result shows that, when Majorons are included, the total decay
rate is no longer proportional to the lepton mass (since we now have
a 3-body decay). Therefore, the Majoron contribution ($\Gamma_{J}$) may easily
overcome the SM prediction ($\Gamma_{SM}$) if $g\sim1$:
\begin{eqnarray}
\dfrac{\Gamma_{J}}{\Gamma_{SM}}\approx\dfrac{1}{48\pi^{2}}
\dfrac{m_{M}^{2}}{m_{l}^{2}}\gg 1
\end{eqnarray}
where we have assumed $m_{l} \ll m_{M}$. Assuming that the total decay rate is
\begin{eqnarray}
\Gamma_{total}=\Gamma_{SM}+\Gamma_{J}\label{TotalTax} ,
\end{eqnarray}
the decay into $J$ will be the dominant channel unless $g$ is
small.
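Plugging PDG masses into the ratio above makes the enhancement explicit (a quick numerical check; note that the $m_{l}\ll m_{M}$ approximation is poor for the muon channel, so the second number is only indicative):

```python
import math

m_pi, m_e, m_mu = 139.57, 0.511, 105.66   # MeV (PDG values)

def enhancement(m_M, m_l):
    """Gamma_J / Gamma_SM for g ~ 1, in the m_l << m_M limit."""
    return m_M**2 / (48 * math.pi**2 * m_l**2)

print(enhancement(m_pi, m_e))    # ~1.6e2: Majoron channel would dominate
print(enhancement(m_pi, m_mu))   # ~4e-3 (approximation poor for the muon)
```

The helicity suppression of the SM 2-body rate is what makes the electron channel so sensitive to the Majoron coupling.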
Because only small deviations from the SM are allowed by the experimental
data, we must have $g\ll1$. Following Eq.(\ref{chi}), we
calculated upper bounds for $|g_{\alpha \beta}|^2$ at 90\% C.L.;
Table~\ref{Tab1} shows the bounds obtained through this analysis.
\begin{table}
\begin{tabular}{|c|c|c|c|c|}
\hline
Mesons&
$|g_{e\alpha}|^{2}$&
$|g_{\mu\alpha}|^{2}$&
$|g_{\tau\alpha}|^{2}$&
Refs (exp. values)
\tabularnewline
\hline
\hline
$\pi$&
$1.6\times10^{-4}$&
$2.1\times10^{-1}$&
&
\cite{PDG}
\tabularnewline
\hline
$K$&
$9.5\times10^{-4}$&
$9.3$&
&
\cite{PDG}
\tabularnewline
\hline
$D$&
$1.6\times10^{-1}$&
$2.3$&
$23$&
\cite{PDG}
\tabularnewline
\hline
$D_{s}$&
&
$1$&
$6.3$
&
\cite{Artuso:2006kz}
\tabularnewline
\hline
$B$&
$0.85$&
$1.5$&
$19$&
\cite{Satoyama:2006xn}
\tabularnewline
\hline
\end{tabular}
\caption{Upper bounds on $\sum_{l=e,\mu,\tau}|g_{l \beta}|^{2}$
from meson decays with 90\% C.L. The references for the experimental values used are
shown in the last column. We only include the Majoron contribution,
and not the new light scalar $\chi$. }
\label{Tab1}
\end{table}
As expected from the above remarks and the results in Table
\ref{Tab1}, the most constrained matrix elements $g_{\alpha\beta}$
are those involving $e$, since the approximation $m_{l} \ll
m_{M}$ is good in this case. We found that this bound can be
improved using recent data~\cite{NA48} on the following ratio:
\begin{eqnarray}
\dfrac{\Gamma(K^{+}\rightarrow
e^{+}+\nu_{e})}{\Gamma(K^{+}\rightarrow\mu^{+}+\nu_{\mu})}=
(2.416\pm0.043)\times10^{-5}
\end{eqnarray}
where the error is the quadrature sum of the statistical and systematic
errors. Because the Majoron contributions must be suppressed (as shown in
Table \ref{Tab1}), we may approximate the above ratio as:
\begin{eqnarray}
\dfrac{\Gamma(K^{+}\rightarrow
e^{+}+\nu_{e})}{\Gamma(K^{+}\rightarrow\mu^{+}+\nu_{\mu})}=
\dfrac{\Gamma_{SM}^{e} + \Gamma_{J}^{e}}{\Gamma_{SM}^{\mu} + \Gamma_{J}^{\mu}}\approx
\dfrac{\Gamma_{SM}^{e} + \Gamma_{J}^{e}}{\Gamma_{SM}^{\mu}} ,
\end{eqnarray}
where $\Gamma^{e(\mu)}$ represents the decay rate with an $e$ ($\mu$)
in the final state. In this way, using the previous statistical
analysis, we can constrain the elements $g_{e\alpha}$ (at 90\% C.L.):
\begin{eqnarray}
\sum_{l=e,\mu,\tau}|g_{el}|^{2}<1.1\times10^{-5}
\end{eqnarray}
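The strong helicity suppression underlying this ratio can be checked at tree level; the sketch below neglects radiative corrections (which shift the result by a few percent) and uses illustrative mass values:

```python
def Rk_tree(m_e=0.5110, m_mu=105.66, m_K=493.68):
    """Tree-level SM prediction for Gamma(K->e nu)/Gamma(K->mu nu).

    Masses in MeV; helicity suppression gives the (m_e/m_mu)^2 factor,
    and the remaining factor is the two-body phase space ratio.
    """
    phase = ((1 - (m_e / m_K) ** 2) / (1 - (m_mu / m_K) ** 2)) ** 2
    return (m_e / m_mu) ** 2 * phase

print(f"R_K (tree level) ~ {Rk_tree():.3e}")
```

The tree-level value of $\sim2.6\times10^{-5}$ sits close to the measured $(2.416\pm0.043)\times10^{-5}$, which is why even a small Majoron contribution to the electron channel is tightly constrained.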
When it comes to the $\mu$ matrix elements ($g_{\mu\alpha}$), the
constraints from Table \ref{Tab1} may also be improved if we
consider meson decay channels into four leptons~\cite{PDG}:
\begin{eqnarray}
BR(K^{+}\rightarrow\mu^{+}+\nu_{\mu}+\nu+\bar{\nu})<6\times10^{-6}
\end{eqnarray}
Since the SM contribution to this decay is negligible, we may assume:
\begin{eqnarray}
BR(K^{+}\rightarrow\mu^{+}+\nu_{l'}+J)<6\times10^{-6}
\end{eqnarray}
resulting in (at 90\% C.L.):
\begin{eqnarray}
\sum_{l=e,\mu,\tau}|g_{\mu l}|^{2}<9\times10^{-5}
\end{eqnarray}
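An order-of-magnitude version of this conversion can be sketched using the massless-lepton expression for $\Gamma_{J}$ (only approximate for a muon in the final state); the input values below are illustrative, and the full statistical analysis yields the tighter bound quoted above:

```python
import math

# Illustrative inputs, natural units with energies in GeV.
G_F   = 1.166e-5        # Fermi constant [GeV^-2]
V_us  = 0.2243          # CKM element
f_K   = 0.156           # kaon decay constant [GeV]
m_K   = 0.4937          # kaon mass [GeV]
tau_K = 1.238e-8        # K+ lifetime [s]
hbar  = 6.582e-25       # [GeV s]

# Gamma_J at g^2 = 1 in the m_l = m_nu = 0 limit (expression in the text).
gamma_j1 = G_F**2 * V_us**2 * f_K**2 * m_K**3 / (768 * math.pi**3)
br_j1 = gamma_j1 * tau_K / hbar      # branching ratio at g^2 = 1
g2_bound = 6e-6 / br_j1              # from BR(K -> mu nu nu nubar) < 6e-6

print(f"BR_J(g^2=1) ~ {br_j1:.1e}; naive bound g^2 < {g2_bound:.1e}")
```

This naive estimate lands within an order of magnitude of the quoted $9\times10^{-5}$; the difference reflects the finite muon mass and the chi-square treatment used in the text.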
Finally, new experimental data for leptonic decay rates of heavy
mesons such as the $D^{+}$, $D_{s}^{^{+}}$ and $B^{^{+}}$
mesons~\cite{PDG,Corwin:2006wb,Satoyama:2006xn,Artuso:2006kz,Artuso:2005ym},
allow us to impose limits to the $\tau$ matrix elements
($g_{\tau\alpha}$), as shown in Table \ref{Tab1}. The best bound comes
from the $D_{s}^{+}$ leptonic decay into $\tau^{+}+\nu_{\tau}$ (at 90\% C.L.)~\cite{Artuso:2006kz}:
\begin{eqnarray}
\sum_{l=e,\mu,\tau}|g_{\tau l}|^{2}<6.3
\end{eqnarray}
Because of the large experimental uncertainty, this bound is quite
weak, as can be seen above.
We stress that, unlike~\cite{Barger,Gelmini2,Britton:1993cj}, the results shown
so far do not include possible decays into a light scalar $\chi$ and
are therefore less model-dependent. If this new scalar is included
with a mass of 1~keV (other choices for the $\chi$ mass
do not change these results as long as it is well below the initial-state
masses), the previous results improve by roughly a factor of 2 (again, at 90\% C.L.):
\begin{eqnarray}
\sum_{\alpha}|g_{e\alpha}|^{2}<5.5\times10^{-6}\mbox{ ,
}\sum_{\alpha}|g_{\mu\alpha}|^{2}<4.5\times10^{-5}\mbox{
and }\sum_{\alpha}|g_{\tau\alpha}|^{2}<3.2
\label{boundmeson}
\end{eqnarray}
\subsection{Lepton decay rates}
\label{iib}
Because of their good experimental precision, lepton
decays are good candidates for imposing bounds on neutrino-Majoron
couplings. Moreover, in this case there are no uncertainties from
meson decay constants and CKM elements. However, the leading term
in $\Gamma(l_{i}\rightarrow l_{j}+\bar{\nu}_{j}+\nu_{i})$ is no
longer proportional to the final lepton mass (as it was in the case
of mesons), because the SM decay is already a 3-body decay. For this
reason $\Gamma_{J}<\Gamma_{SM}$ even for $g\sim 1$. In fact, the
inclusion of Majorons in the final state decreases the decay rate by
a factor of $\approx 10$ ($\Gamma_{J} \approx \Gamma_{SM}/10$ for
$g=1$), instead of increasing it as in the meson case.
Therefore we expect much weaker bounds in this case. Nevertheless, as we will
show below, it is still possible to obtain good bounds for certain
decays\footnote{We thank J. F. Beacom for suggesting the use of lepton
decays to constrain neutrino-Majoron couplings.}. To calculate the
4-body decay rate $\Gamma(l\rightarrow l'+\bar{\nu}+\nu+J)$ we
used the programs FeynArts and FormCalc~\cite{Hahn:2000kx,Hahn:1998yk}.
As in the meson case, to constrain the $g_{\alpha\beta}$
matrix we assume that the total lepton decay rate receives
contributions from Majoron emission:
\begin{eqnarray}
\Gamma_{total}(l_{\alpha}\rightarrow
l_{\beta}+\bar{\nu}_{\beta}+\nu_{\alpha})=\Gamma_{J}(l_{\alpha}\rightarrow
l_{\beta}+\bar{\nu}+\nu+J)+\Gamma_{SM}(l_{\alpha}\rightarrow
l_{\beta}+\bar{\nu}_{\beta}+\nu_{\alpha})
\end{eqnarray}
Because Majoron emission may change the neutrino flavor (since
$g_{\alpha\beta}$ may be non-diagonal), $\Gamma_{J}$ may have any
type of neutrino in its final state. For this reason we omitted the
subindex in $\Gamma_{J}$. Besides, both neutrinos ($\nu$ or
$\bar{\nu}$) may emit Majorons, which implies:
\begin{eqnarray}
\Gamma_{J}(l_{\alpha}\rightarrow l_{\beta}+\bar{\nu}+\nu+J)\propto
\sum_{\delta}(|g_{\alpha \delta}|^{2}+|g_{\beta \delta}|^{2})
\label{LepProp}
\end{eqnarray}
where $g_{\alpha \delta}$ and $g_{\beta \delta}$ are the couplings
between Majoron and the $\alpha$ anti-neutrino and $\beta$ neutrino,
respectively. In Eq.(\ref{LepProp}), the interference terms
$g_{\alpha \delta}g_{\beta \delta}$ are proportional to neutrino
masses squared and were neglected. Because Table~\ref{Tab1} shows
that lighter leptons have stronger upper bounds, we will assume
$g_{\alpha\delta}\gg g_{\beta\delta}$. Therefore we will consider
that Majoron emission by $\bar{\nu}_{\alpha}$ is dominant:
\begin{eqnarray}
\Gamma_{J}(l_{\alpha}\rightarrow l_{\beta}+\bar{\nu}+\nu+J)\propto
\sum_{\delta}|g_{\alpha \delta}|^{2}
\end{eqnarray}
Using the experimental values for the $\mu$ and $\tau$ decay
rates~\cite{PDG} and the same kind of analysis used in the last
section, the following bounds were obtained at 90\% C.L.:
\begin{eqnarray}
\sum_{\alpha}|g_{\mu\alpha}|^{2}<4\times 10^{-4}\mbox{ ,}\quad
\sum_{\alpha}|g_{\tau\alpha}|^{2}<10\times 10^{-2},
\end{eqnarray}
where the first bound comes from $\mu$ decay and the second from
$\tau$ decay, both at 90\% C.L. For the $\tau$ decay the same constraint is
obtained if one considers decays into $e$'s or $\mu$'s. If we include
the contributions from $\chi$ emission (again with a mass of 1~keV and at 90\% C.L.):
\begin{eqnarray}
\sum_{\alpha}|g_{\mu\alpha}|^{2}<2.7\times 10^{-4}\mbox{ ,}\quad
\sum_{\alpha}|g_{\tau\alpha}|^{2}<5.5\times 10^{-2}.
\label{leptonb}
\end{eqnarray}
\subsection{Spectrum of lepton decay with Majorons}
\label{iic}
Another method that can be used to improve the limits obtained above
is the analysis of the electron spectrum in the muon decay, which can be
modified by the inclusion of Majorons. The normalized spectrum for
the SM case and the Majoron case only are shown in
Figure~\ref{ElecSpec}.
\begin{figure}[hbt]
\includegraphics[scale=0.27]{ElecSpec}
\includegraphics[scale=0.40]{EspecEnd}
\caption{At left, normalized electron spectra for muon decay in the SM
(solid line) and with Majoron emission only (dashed line). At right,
the experimental allowed region (between solid lines) for the electron
spectrum and the total predicted spectrum (SM plus Majorons) for three
values of $g^{2}=\sum_{\alpha}|g_{\mu\alpha}|^{2}$.}
\label{ElecSpec}
\end{figure}
Precision measurements of the electron spectrum (used to impose constraints on
non V-A interactions) may constrain $g$ if we consider the changes
in the SM spectrum after including Majoron emission. The usual analysis parametrizes
the electron spectrum using two parameters ($\rho$ and
$\eta$)~\cite{PDG}:
\begin{eqnarray}
\dfrac{d \Gamma
(x)}{dx}=\frac{G_{F}^{2}m_{\mu}^{5}}{48\pi^{3}}x^{2}[3(1-x)+
\dfrac{2}{3}\rho(4x-3)+3\eta\dfrac{m_{e}}{E_{max}}\dfrac{1-x}{x}]
\end{eqnarray}
where $x=\dfrac{E}{E_{max}}$ and
$E_{max}=\dfrac{m_{\mu}^{2}+m_{e}^{2}}{2m_{\mu}}$. For the SM the
predicted values are $\rho=0.75$ and $\eta=0$:
\begin{eqnarray}
\dfrac{d \Gamma_{SM}(x)}{dx}=\frac{G_{F}^{2}m_{\mu}^{5}}{48\pi^{3}}[\dfrac{3}{2}x^{2}-x^{3}]
\end{eqnarray}
The current experimental values are
$\rho=0.7509\pm0.001\mbox{ and }\eta=0.001\pm0.024$~\cite{PDG}.
When the total spectrum (SM plus Majoron) is considered, we have found
\begin{eqnarray}
\dfrac{d
\Gamma_{total}(x)}{dx}=\frac{G_{F}^{2} m_{\mu}^{5}}{48\pi^{3}}
[0.0066|g|^{2}-0.09|g|^{2}x
+(\dfrac{3}{2}+0.35|g|^{2})x^{2}-(1+0.25|g|^{2})x^{3}]
\end{eqnarray}
where $|g|^{2}=\sum_{\alpha}|g_{\mu\alpha}|^{2}$.
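A quick consistency check of this expression, verifying that it reduces to the SM shape for $g=0$ and that the effective $\rho$ parameter matches the relation used below (the overall factor $G_{F}^{2}m_{\mu}^{5}/48\pi^{3}$ is dropped):

```python
def dGamma_total(x, g2):
    """Shape of the total electron spectrum (SM + Majoron), overall
    normalization G_F^2 m_mu^5 / 48 pi^3 dropped; coefficients from the text."""
    return (0.0066 * g2 - 0.09 * g2 * x
            + (1.5 + 0.35 * g2) * x**2 - (1 + 0.25 * g2) * x**3)

def dGamma_sm(x):
    """SM spectrum shape, i.e. rho = 0.75 and eta = 0."""
    return 1.5 * x**2 - x**3

def rho_total(g2):
    """Effective rho parameter read off the x^3 coefficient of the total spectrum."""
    return 3.0 / 8.0 * (2 - 0.25 * g2)

# Sanity checks: g = 0 recovers the SM shape and the SM value rho = 0.75.
assert all(abs(dGamma_total(x, 0.0) - dGamma_sm(x)) < 1e-12
           for x in [0.1, 0.5, 0.9, 1.0])
print(rho_total(0.0))  # 0.75, the SM value
```

The Majoron terms soften the spectrum near the end point (large $x$), which is why that region drives the constraint on $|g|^{2}$.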
\begin{figure}[hbt]
\includegraphics[scale=0.32]{EnSpec}
\includegraphics[scale=0.32]{MunSpec}
\caption{At left (right), the normalized electron-neutrino (muon-neutrino)
spectrum for muon decay: the solid curve is the SM prediction and the
dashed curve includes Majoron emission only. In both cases we
assume a diagonal $g_{\alpha\beta}$.}
\label{EnSpec}
\end{figure}
From the above expression and Figure \ref{ElecSpec} we see that the
most sensitive region is at the end of the spectrum (large $x$),
which can be used to constrain $g$. Figure \ref{ElecSpec} also shows the
allowed region by experimental data (region between solid gray
lines) and the shape of the total spectrum (including the SM and
Majoron contributions) with different values of
$\sum_{\alpha}|g_{\mu\alpha}|^{2}$.
\begin{table}
\begin{tabular}{|c|c|}
\hline
Previous Bounds & Revised Bounds
\tabularnewline
\hline
\hline
& \tabularnewline
$\sum_{\alpha}|g_{e\alpha}|^{2}<3\times10^{-5}$&
$\sum_{\alpha}|g_{e\alpha}|^{2}<5.5\times10^{-6}$
\tabularnewline
& \tabularnewline
& \tabularnewline \hline
& \tabularnewline
$\sum_{\alpha}|g_{\mu\alpha}|^{2}<2.4\times10^{-4}$&
$\sum_{\alpha}|g_{\mu\alpha}|^{2}<4.5\times10^{-5}$
\tabularnewline
& \tabularnewline
& \tabularnewline \hline
& \tabularnewline
none &
$\sum_{\alpha}|g_{\tau\alpha}|^{2}<5.5\times10^{-2}$
\tabularnewline
& \tabularnewline
& \tabularnewline \hline
\end{tabular}
\caption{Comparison between the strongest bounds (including the scalar
$\chi$) obtained here and the previous bounds from the same
processes. All bounds are at 90\% C.L. and the previous bounds are
from ~\cite{Barger,Gelmini2,Britton:1993cj}.}
\label{FinalRes}
\end{table}
Because the spectrum is more sensitive to changes in the cubic term
(or the $\rho$ parameter), we consider the Majoron contributions to
$\rho$:
\begin{eqnarray}
\rho_{total}=\frac{3}{8}(2-0.25|g|^{2})
\end{eqnarray}
Using the chi-square method at 90\% C.L. we obtain:
\begin{eqnarray}
\sum_{\alpha}|g_{\mu\alpha}|^{2}<8\times10^{-3}
\label{leptonbound}
\end{eqnarray}
As can be seen in Figure \ref{EnSpec},
the main Majoron-induced modifications occur
in the neutrino spectra, which have been measured by the KARMEN
experiment~\cite{Karmen}. However, due to experimental uncertainties,
the resulting bounds on $g$ are too weak in this case.
Summarizing, the strongest bounds are given in Table
\ref{FinalRes}, where we compare the previous limits with the new
constraints obtained here.
All bounds from Eqs.~(\ref{boundmeson}), (\ref{leptonb}),
(\ref{leptonbound}) can be written as
\begin{eqnarray}
\sum_{\alpha=e,\mu,\tau} |g_{l\alpha}|^{2} < L_{l}^{2}
\label{allbounds}
\end{eqnarray}
where $L_{l}^{2}$ is the strongest upper bound on $\sum_{\alpha}
|g_{l\alpha}|^{2}$ (see Table \ref{FinalRes}). From these
constraints, we adopt the conservative limit where the upper bound
applies not only to the sum, $\sum_{\alpha} |g_{l\alpha}|^{2}$, but
also to the individual elements $|g_{l\alpha}|$:
\begin{eqnarray}
|g_{l\alpha}|< L_{l},\;\forall\;\alpha=e,\mu,\tau
\end{eqnarray}
\subsection{Mass Basis}
\label{iid}
All the results obtained so far are written in the flavor basis. However,
in many cases, theoretical analyses are easier in the mass basis.
We have two possible cases: Dirac or Majorana neutrinos. In this
section we assume Majorana neutrinos to transform our bounds to
the mass basis.
We can translate the previous results to the mass basis using
the transformation matrix $U$~\cite{PDG}:
\begin{eqnarray}
U=\left(\begin{array}{ccc}
c_{12}c_{13} & s_{12}c_{13} & s_{13}e^{-i\delta}\\
-s_{12}c_{23}-c_{12}s_{23}s_{13}e^{i\delta} &
c_{12}c_{23}-s_{12}s_{23}s_{13}e^{i\delta} & s_{23}c_{13}\\
s_{12}s_{23}-c_{12}c_{23}s_{13}e^{i\delta} &
-c_{12}s_{23}-s_{12}c_{23}s_{13}e^{i\delta} &
c_{23}c_{13}\end{array}\right)\times
diag(e^{i\alpha_{1}/2},e^{i\alpha_{2}/2},1)
\end{eqnarray}
where $c_{ij}=\cos(\theta_{ij})$ and $s_{ij}=\sin(\theta_{ij})$. The
neutrino mass matrix is given by $M=diag(m_{1},m_{2},m_{3})$ and, for a
given mass $m_1$, we can write all other masses as functions of
$m_1$ and the squared mass differences as follows:
\begin{eqnarray}
\Delta m_{12}^{2}\equiv m_{2}^{2}-m_{1}^{2}=\Delta m_{\odot}^{2}
\mbox{ and }
\Delta m_{23}^{2}\equiv m_{3}^{2}-m_{2}^{2}=\Delta m_{atm}^{2}.
\end{eqnarray}
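In practice, the heavier masses follow from $m_1$ by adding the splittings in quadrature; a short sketch with illustrative splitting values (normal ordering assumed):

```python
import math

# Illustrative values of the solar and atmospheric splittings [eV^2];
# the exact numbers are not critical for this sketch.
dm2_sol = 7.5e-5
dm2_atm = 2.5e-3

def masses(m1_ev):
    """Normal-ordering masses (m1, m2, m3) in eV from m1 and the splittings."""
    m2 = math.sqrt(m1_ev**2 + dm2_sol)
    m3 = math.sqrt(m2**2 + dm2_atm)
    return m1_ev, m2, m3

print(masses(0.0))   # hierarchical limit
print(masses(0.2))   # quasi-degenerate limit, m1 ~ m2 ~ m3
```

For $m_1$ well above the splittings the spectrum becomes quasi-degenerate, which is the regime where the mass-basis bounds in Figure \ref{MassBounds} flatten out.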
Although the mass differences and angles have
been measured experimentally~\cite{PDG},
we have no information on the Dirac phase $\delta$ or the Majorana
phases $\alpha_{1}$ and $\alpha_{2}$.
\begin{figure}[hbt]
\includegraphics[scale=0.47]{massboundj1}
\caption{Exclusion curves in the $m_{1}-G$ plane. We show the curves
for $g_1$, $g_2$ and $g_3$. These curves are for $\theta _{13}=0$,
non-zero values of $\theta_{13}$ are more restrictive.}
\label{MassBounds}
\end{figure}
To calculate the bounds in the mass basis, we will use the transformation rule for
Majorana neutrinos
\begin{equation}
G=U^{T}gU
\label{eq2}
\end{equation}
where $g$ is the neutrino-Majoron coupling matrix in the flavor basis
and $G$ is the neutrino-Majoron coupling matrix in the mass basis.
Although it is not valid in general, many models~\cite{Chikashige:1980ui,Joshipura:1992hp,Schechter:1981bd} have
the following property (at least in some limit)
\begin{equation}
G=diag(g_{1},g_{2},g_{3})\propto M=
diag(m_{1},m_{2},m_{3})
\label{eq1}
\end{equation}
Following~\cite{tomas1}, we calculate the allowed region for
different values of $\delta$, $\alpha_{1}$ and $\alpha_{2}$ and then
choose the union of these regions as the final result, valid for any
value of the phases, as shown in Figure \ref{MassBounds}.
\section{Conclusions}
\label{iii}
Using three different techniques we were able to
constrain the neutrino-Majoron couplings. The strongest constraints
are shown in Table \ref{FinalRes}. Considering only the limits
from meson decays we improve by one order of magnitude the previous
limits on $|g_{e\alpha}|^{2}$ and
$|g_{\mu\alpha}|^{2}$~\cite{Barger,Gelmini2,Britton:1993cj}.
Although the best constraints were obtained from meson decay rates,
we have shown that independent bounds can also be obtained from
$\mu$ and $\tau$ decays, the latter providing the best constraint on
the $g_{\tau\alpha}$ elements. We stress that the bound on
$g_{\tau\alpha}$ shown in Table \ref{FinalRes} is probably the first
model-independent constraint on this parameter.
The third alternative used was an analysis of the spectrum of muon
decay. Despite its potential for constraining the $g_{\mu\alpha}$
elements, the experimental values are not precise enough to make such
an analysis useful. Our best constraints are
$|g_{e\alpha}|^{2}<5.5\times10^{-6}$,
$|g_{\mu\alpha}|^{2}<4.5\times10^{-5}$ and
$|g_{\tau\alpha}|^{2}<5.5\times10^{-2}$, for $\alpha=e,\mu,\tau$, at 90\% C.L.
Because the models cited here usually try to explain the neutrino
mass scale, it may be convenient to analyze the limits on
neutrino-Majoron couplings in the mass basis. With that in mind we
transformed all our results from the flavor basis to the mass basis,
using the current values for the angles of the neutrino mixing
matrix. As shown in Figure \ref{MassBounds}, the constraints in the
mass basis are usually weaker than those in the flavor basis.
\begin{acknowledgments}
This work was supported by Funda\c{c}\~ao de Amparo
\`a Pesquisa do Estado de S\~ao Paulo (FAPESP) and Conselho
Nacional de Desenvolvimento Cient\'\i fico e Tecnol\'ogico
(CNPq).
\end{acknowledgments}
\section{Introduction}
\label{sec:intro}
Fast radio bursts \citep[FRBs,][]{lorimer2007bright} are transient pulses
of radio light observed out to cosmological distances; both their origins
and emission mechanisms remain unclear. Even though thousands of FRB events
occur over the full sky every day \citep{CHIMEFRB_CAT1,2018MNRAS.475.1427B},
their detection with traditional radio telescopes is challenging due to the
randomly-occurring nature of the majority of bursts.
With its unique design optimized for rapid wide-field
observations and a powerful
real-time transient-search engine \citep[CHIME/FRB,][]{FRBSystemOverview},
the Canadian Hydrogen Intensity Mapping Experiment
\citep[CHIME\footnote{\href{https://chime-experiment.ca}
{https://chime-experiment.ca}}, ][]{chime_overview_2021}
has become the leading facility for detection of FRBs,
detecting over 500 FRBs \citep{CHIMEFRB_CAT1} and
18 new repeating sources \citep{R2,RN, RN2}
in its first year of full operations.
Such an unprecedented sample of events with a single survey has enabled
detailed studies of statistical properties of the FRB population
such as fluence distribution and sky rate, scattering time,
dispersion measure (DM) distribution, spatial
distribution, burst morphology, and correlations with large-scale structure
\citep{CHIMEFRB_CAT1,FRB_Morphology2021,FRB_galactic_lat2021,FRB_lss_xcor2021,
2021arXiv210710858C}.
However, except for FRBs with low dispersion measure
\citep{michilli2020analysis,Bhardwaj_M81,2021arXiv210812122B},
CHIME/FRB's arcminute localization
precision is insufficient to localize these bursts to their host galaxies,
which is crucial to understand their nature and unlock their
potential as probes of the intergalactic medium and large-scale structure.
To overcome this limitation, the CHIME/FRB collaboration is currently
developing CHIME/FRB Outriggers, a program to deploy
CHIME-like outrigger telescopes at continental baseline distances.
CHIME and the outriggers will
form a dedicated very-long-baseline interferometry (VLBI) network
capable of detecting hundreds of FRBs each year with sub-arcsecond
localization precision in near real-time,
allowing for the unique identification of FRB
galaxy hosts and source environments.
Because VLBI localizes sources by precisely measuring the
difference in the arrival time
of astronomical signals between independent telescopes across
far-separated sites, it is critical to use very stable
local reference signals (i.e., clocks) that allow
the synchronization of VLBI stations without losing coherence during
observations and between calibrations.
This is particularly important for stationary telescopes like CHIME
and the outrigger stations that can only be calibrated when a bright
radio source transits through their field of view.
The superior stability performance of
hydrogen masers on short and intermediate timescales makes them the
preferred option for VLBI applications
\citep{2018PASP..130a5002M,2019ApJ...875L...2E,2020arXiv200409987S}.
Here, we present a
hardware and software clock stabilization solution for the CHIME
telescope that effectively
transfers the reference clock from its original GPS-disciplined
crystal oscillator to a passive hydrogen maser during
VLBI observations, meeting the timing requirements for FRB VLBI with CHIME/FRB
Outriggers. Furthermore, this system can be implemented
without interrupting CHIME's current observational campaign and without
modifications to the correlator or
the data-analysis pipelines
for cosmology and radio transient science.
The paper is organized as follows:
Section~\ref{sec:inst_overview} describes
the features of the CHIME instrument that are relevant to its use as
a VLBI station in CHIME/FRB Outriggers.
Section~\ref{sec:requirements} discusses the CHIME/FRB Outriggers
clock stability requirements for FRB VLBI.
Section~\ref{sec:clock_stabilization} describes the hardware and software of the
stabilization system that transfers CHIME's reference clock to
a passive hydrogen maser. Section \ref{sec:validation} shows
the results of the suite of tests that validate the clock stabilization
system with VLBI-style observations between CHIME and the CHIME Pathfinder
\citep[an early small-scale prototype of CHIME recently
outfitted as an outrigger test-bed, ][]{PathfinderOverview,leung2020synoptic}.
Section~\ref{sec:no_maser_outrigger} presents an alternate clock solution
for outrigger stations that do not have the infrastructure to support
a hydrogen maser. Section~\ref{sec:conclusions} presents the conclusions.
\section{Instrument overview}
\label{sec:inst_overview}
A detailed description of the CHIME instrument and the
CHIME/FRB project is presented in
\cite{chime_overview_2021} and \cite{FRBSystemOverview}. In this section
we give a brief introduction to these systems focused on the
features that are relevant for FRB VLBI. We also
give an overview of CHIME/FRB Outriggers.
\subsection{CHIME and CHIME/FRB}
\label{subsec:chime_chimefrb}
CHIME is a hybrid cylindrical transit interferometer located at the
Dominion Radio Astrophysical Observatory (DRAO) near Penticton, B.C.,
Canada. It consists of four 20 m $\times$ 100 m cylindrical reflectors
oriented north-south and instrumented with a total of 1024
dual-polarization feeds and low-noise receivers operating in the
400-800~MHz band. The cylinders are fixed with no moving parts,
so CHIME operates as a drift-scan instrument that surveys the northern
half of the sky every day with an instantaneous field of view of
$\sim 120^\circ$ north-south by $2.5^\circ-1.3^\circ$ east-west.
Although CHIME's design was driven by its primary scientific goal to
probe the nature
of dark energy by mapping the large-scale structure of neutral hydrogen in the universe across the redshift range $0.8\le z\le 2.5$,
its combination of high sensitivity and large field of view also
make it an excellent instrument to study the radio transient sky. Thus,
in its final stages of commissioning, the CHIME correlator was upgraded
with additional hardware and software backends
to perform additional real-time data
processing operations for pulsar timing and FRB science.
The correlator
\citep{2016JAI.....541005B,2016JAI.....541004B,2020JAI.....950014D}
is an FX design (temporal Fourier transform before spatial
cross-multiplication of data), where the F-engine digitizes the 2048
analog inputs at 800~MSPS
and separates the 400~MHz input bandwidth into 1024
frequency channels with 390~kHz spectral resolution.
The F-engine also implements the corner-turn network
that re-arranges the complex-valued channelized data (also known
as ``baseband'') before sending it to the X-engine that
computes a variety of data products for the different
real-time scientific backends: interferometric visibilities for the
hydrogen intensity mapping backend \citep{chime_overview_2021},
dual-polarization tracking
voltage beams for the pulsar monitoring backend \citep{2020arXiv200805681C},
and high-frequency resolution power beams
for the 21-cm absorption systems
backend \citep{2014PhRvL.113d1303Y} and for the CHIME/FRB
backend that triggers on highly dispersed radio transients
to search for FRBs in real time \citep{FRBSystemOverview}.
Additionally, a $\sim$36~s long memory buffer in the X-engine
stores baseband data (2.56~$\mu$s time resolution,
390~kHz spectral resolution, and 4-bit real~+~4-bit imaginary
bit depth for the 2048 correlator inputs)
that can be saved to disk when the CHIME/FRB search pipeline detects
an FRB candidate, enabling polarization and high-time resolution
analysis of FRB events, as well as sub-arcminute localization
precision \citep{michilli2020analysis}.
Eventually it will also enable VLBI localization
with CHIME/FRB Outriggers.
\subsection{CHIME/FRB Outriggers}
\label{subsec:chimefrb_outriggers}
The scientific goal of CHIME/FRB Outriggers is to provide 50~mas
localization for nearly all CHIME detected FRBs
with sub-hour latency. This
angular resolution is sufficient to determine galaxy hosts and
source environments, and is well matched to current best optical
follow-up observations. To this end, the CHIME/FRB collaboration
is currently building outrigger telescopes at distances ranging
from hundreds to several thousands of kilometers from DRAO.
The outriggers will be small-scale versions of CHIME, each with
about one eighth of CHIME's collecting area, the same field of view,
and tilted such that they monitor the same region of the sky as CHIME.
In contrast to traditional VLBI that is typically performed for known
targets with small fields of view and manageable data rates,
the random nature of most FRBs requires the real-time processing
of massive data rates in order to detect and localize these
events in blind searches with wide fields of view.
The baseband data rate of CHIME is
6.6~Tbit/s while that of each outrigger station will be
0.8~Tbit/s. Since such high data rates cannot be continuously saved,
the outriggers will adopt the triggered FRB VLBI approach demonstrated
in \cite{leung2020synoptic}, where each station buffers its local baseband
data in memory and only writes it to disk upon receipt of a trigger
from the CHIME/FRB real-time search pipeline
over internet links. The local data of each
station is then transmitted to a central
facility where the signals are correlated
together such that the outriggers operate with CHIME as an interferometric
instrument with the angular resolution of a telescope with an
aperture of thousands of kilometers.
\section{Clock stability requirements}
\label{sec:requirements}
Accurate timing is critical for VLBI since the localization of
radio sources is ultimately derived from the relative time
of arrival of signals at the telescope stations.
By synthesizing the available frequency channels it is
possible to obtain a statistical precision on the measured delay given
by \citep{1970RaSc....5.1239R}
\begin{equation}
\label{eq:sigma_tau}
\sigma_\tau^{\text{stat}} = \frac{1}{2\pi \cdot \text{SNR}\cdot \text{BW}_{\text{eff}}}
\end{equation}
\noindent where $\text{SNR}$ is the signal-to-noise ratio of the
VLBI event and $\text{BW}_{\text{eff}}$ is the effective bandwidth.
For the CHIME/FRB detection threshold\footnote{The SNR in VLBI
is related to the SNR at CHIME as
$\text{SNR}/\text{SNR}_{\text{CH}} = \sqrt{2A_{\text{O}}/A_{\text{CH}}} = 1/2$
where $A_{\text{CH}}$ and $A_{\text{O}}$
are the collecting areas of CHIME and the
outrigger respectively, and the factor of $\sqrt{2}$ comes
from the difference in the detailed noise statistics of a
cross-correlation compared to an auto-correlation
\citep{2015A&C....12..181M}.
While the CHIME/FRB real-time detection pipeline
has a detection threshold of $\sim 10$ \citep{michilli2020analysis},
the SNR rises by $\sim 50\%$
through the more detailed analysis of the saved baseband data.
As such, we take the floor on the CHIME detection SNR to be
$\text{SNR}_{\text{CH}} = 15$.} and bandwidth (\text{BW})
this corresponds to
\begin{equation}
\label{eq:sigma_tau_1}
\sigma_\tau^{\text{stat}} \approx \frac{184\text{ ps}}
{\displaystyle \left(\frac{\text{SNR}}{7.5}\right)\left(\frac{\text{BW}}{400\text{ MHz}}\right)}.
\end{equation}
For a VLBI baseline $b$, a delay precision $\sigma_\tau$ corresponds to
a statistical localization uncertainty
\begin{equation}
\label{eqn:delay2pos_err}
\sigma_\theta \approx \frac{c}{b} \sigma_\tau
\end{equation}
\noindent which gives $\sigma_\theta^{\text{stat}} \lesssim 11$~mas
for a 1000~km baseline. However,
the (relative) delay measured by the interferometer
includes not only the geometric delay
(which ultimately provides the source localization)
but also additional contributions that need to be accounted for such as
propagation though the troposphere and ionosphere, baseline
errors, drift between clocks of different stations (clock timing errors),
and other instrumental delays. In practice,
the localization uncertainty of CHIME/FRB Outriggers
will be limited
by systematic errors due to uncompensated delay
contributions,
and particularly by errors in the
determination of the dispersive delay due to the ionosphere.
Although the large observation bandwidth of the instrument
helps to mitigate this effect, it still represents the
most important challenge for
system stability at CHIME frequencies.
Our simulations indicate that we can reliably localize
FRB events to 50~mas which, for $b = 1000$ km,
corresponds to a delay error budget of $\sigma_\tau \approx 800$~ps.
Anticipating that the ionosphere
will be the main contributor to delay errors, the
clock timing error specification\footnote{This is not a
hard upper limit,
but rather a reasonable reference value that represents
our goal to keep the clock timing errors well below the 800~ps
total timing error budget.} has been set
to $\sigma_\tau^{\text{clk}}\lesssim 200$~ps.
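The numbers above can be reproduced with a short calculation. The flat-band effective (rms) bandwidth $\text{BW}_{\text{eff}}=\text{BW}/\sqrt{12}$ is an assumption of this sketch, chosen because it reproduces the quoted 184~ps:

```python
import math

C = 2.998e8  # speed of light [m/s]

def delay_precision(snr, bw_hz):
    """Statistical delay precision sigma_tau = 1/(2 pi SNR BW_eff),
    assuming a flat band so that BW_eff = BW / sqrt(12)."""
    bw_eff = bw_hz / math.sqrt(12)
    return 1.0 / (2 * math.pi * snr * bw_eff)

def localization_mas(sigma_tau_s, baseline_m):
    """Localization uncertainty sigma_theta ~ (c/b) sigma_tau, in mas."""
    rad = C * sigma_tau_s / baseline_m
    return rad * 180 / math.pi * 3600 * 1000  # radians -> milliarcseconds

sigma_tau = delay_precision(snr=7.5, bw_hz=400e6)
print(f"sigma_tau   ~ {sigma_tau * 1e12:.0f} ps")              # ~184 ps
print(f"sigma_theta ~ {localization_mas(sigma_tau, 1e6):.0f} mas (1000 km)")
```

The same helper shows why the 800~ps systematic budget, rather than the statistical term, dominates the 50~mas localization goal on a 1000~km baseline.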
Note that for blind FRB searches, this
specification must be met at all times. Indeed,
it may not always be possible to find
a calibrator immediately after the
detection of an FRB for phase referencing.
An additional complication is that
stationary telescopes like CHIME and the outriggers
observe the sky as it transits through their field of view
and thus cannot slew towards favorable calibrators. Therefore, it is especially important to have
a reference clock that is reliable on the timescales required to
connect an FRB detection to a calibrator observation,
potentially hours later.
Although the list of steady radio sources
potentially suitable for calibrating
low-frequency VLBI arrays with $\gtrsim 1000$~km baselines
has significantly increased thanks to the
ongoing LOw Frequency ARray (LOFAR) Long-Baseline Calibrator Survey
\citep[LBCS, ][]{2015A&A...574A..73M,jackson2016lbcs},
during its initial stages CHIME/FRB Outriggers
will adopt a more conservative strategy
relying mainly on bright pulsars for calibration.
Pulsars are compact, can be separated from the
steady radio background in the time domain,
and are sufficiently
abundant to be used as the primary sources for phase referencing.
Accordingly, the backend of each outrigger will also have the ability to
form tracking baseband beams for pulsar analysis and calibration.
Recently, \cite{2021arXiv210705659C} demonstrated
the potential of pulsars as calibrators
for CHIME/FRB Outriggers through the triggered VLBI
detection of an FRB over a $\sim 3000$~km
baseline between CHIME and the
Algonquin Radio Observatory (ARO) 10-m Telescope using
PSR B0531+21 in the Crab nebula for phase referencing.
We estimate that an FRB detection can be phase referenced
to an $\text{SNR}\gtrsim 15$ pulsar\footnote{This represents the
VLBI SNR after coherent addition of the pulses within
a single pulsar transit.}
within less than $\sim 10^3$~s.
This timescale and the
relative clock timing error specification set the clock stability
(Allan deviation) requirement to
\begin{equation}
\label{eq:spec}
\sigma_y(10^3~\text{s}) \lesssim 2 \cdot 10^{-13}.
\end{equation}
As explained in Appendix~\ref{app:adev}, the Allan deviation
is a measure of stability commonly used in
precision clocks and oscillators, with
$\Delta t \cdot \sigma_y(\Delta t)$ roughly
representing the rms of clock timing errors
a time $\Delta t$ after calibration.
The specification in Equation~\ref{eq:spec} is typically obtained
with hydrogen masers.
However, in Section~\ref{sec:no_maser_outrigger} we show
that even frequency references that initially do not meet this
requirement can be used as reference clocks for the outriggers
by interpolating timing solutions between calibrators.
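The correspondence between this specification and the 200~ps clock error budget follows directly from the rms interpretation $\Delta t\cdot\sigma_y(\Delta t)$:

```python
# Translate the Allan-deviation spec into an rms timing error:
# rms clock error ~ dt * sigma_y(dt), a time dt after calibration.
dt = 1e3          # s, the assumed interval between calibrator transits
sigma_y = 2e-13   # required Allan deviation at dt

rms_error_ps = dt * sigma_y * 1e12
print(f"rms clock timing error after {dt:.0f} s: ~{rms_error_ps:.0f} ps")
```

This recovers the $\sigma_\tau^{\text{clk}}\lesssim200$~ps specification of Section~\ref{sec:requirements}.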
\section{clock stabilization system}
\label{sec:clock_stabilization}
In this section we discuss the considerations that led to
the current clock stabilization solution for CHIME,
as well as the hardware and data analysis
for the maser signal.
\subsection{Hardware/Software considerations}
\label{subsec:hw_sw_considerations}
The CHIME F-engine is implemented using the ICE
hardware, firmware and software framework \citep{2016JAI.....541005B}.
It consists of field programmable gate array (FPGA)-based motherboards
specialized to perform the data acquisition and channelization
of the 2048 CHIME analog inputs. The ICE motherboards are packaged in
eight crates with custom backplanes that implement the networking engine
that re-organizes and sends the baseband data to a dedicated
graphics processing unit (GPU) cluster that performs the X-engine
operations. The outriggers will also use an ICE-based F-engine.
The data acquisition and signal processing of the
F-engine are driven by a
single 10~MHz clock signal provided by a
Spectrum Instruments TM-4D
global positioning system (GPS)-disciplined
ovenized crystal oscillator. The GPS module also generates
an inter-range instrumentation group (IRIG)-B timecode signal
internally synchronized to the clock and
that is used by the
correlator to time stamp the data.
A low-jitter distribution system sends the clock and
time signals to each ICE backplane and motherboard
and is ultimately used to generate the
analog-to-digital converter (ADC) sampling clocks.
The time stamping process is implicit: The F-engine uses the
IRIG-B signal to synchronize the start of the data acquisition to
an integer second (up to the 10~ns resolution of the IRIG-B decoder
in the FPGA firmware)
and it also tags each data frame with a frame counter
value. The X-engine time stamps the data by calculating
the offset from the start time
based on the frame counter value and assuming a fixed
2.56~$\mu$s baseband sampling time.
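The implicit time stamping described above amounts to a single multiply-add per frame; a minimal sketch (names are ours, not the ICE software's):

```python
SAMPLING_TIME = 2.56e-6  # s, fixed baseband sampling time assumed by the X-engine

def frame_timestamp(start_second, frame_counter):
    """Time stamp a data frame from the acquisition start time (an
    integer second synchronized via the IRIG-B signal) and the frame
    counter value attached by the F-engine."""
    return start_second + frame_counter * SAMPLING_TIME

# one million frames after the start corresponds to 2.56 s
t = frame_timestamp(1_600_000_000, 1_000_000)
```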
Figure~\ref{fig:chime_clk_adev} shows the Allan deviation of the
CHIME GPS clock (blue line) as measured with the clock stabilization
system described in Sections~\ref{subsec:maser_hw} and
\ref{subsec:pipeline}.
The GPS disciplined crystal oscillator, being locked
to a GPS time reference determined by a vast network of
atomic clocks, will eventually surpass the stability performance
of a single hydrogen maser on very long timescales
($\Delta t \gtrsim 10^6$~s).
On intermediate and long timescales ($\Delta t \sim 10^3-10^5$~s),
including the ones of interest for CHIME/FRB Outriggers,
the CHIME clock stability is dominated by
white delay
noise ($\sigma_y(\Delta t) \propto 1/\Delta t$) corresponding to
$\sim 6$~ns root mean square (rms) timing errors.
While the coherence of this frequency standard
is sufficient for CHIME's operations
as a connected interferometer and for all its backends,
the high precision needed for FRB VLBI
requires the development of a more stable clock system.
\begin{figure}[h!]
\centering
\includegraphics[scale=.35]{chime_clk_adev.png}
\caption{Allan deviation of the CHIME GPS clock
and the DRAO hydrogen maser.
Blue: Allan deviation of the CHIME GPS clock
as measured with the clock stabilization system
described in Section~\ref{sec:clock_stabilization}. A total of ten days
of raw ADC data at 30~s cadence was collected for the
measurement. Dashed blue: expected measurement
error contribution to the Allan deviation obtained from
simulations of uncorrelated
but time-dependent errors in the range $\sim 4-20$~ps rms
(the range observed in the measured delays).
Dashed green: stability requirement from Equation~\ref{eq:spec}
assuming white noise delay errors.
Red: manufacturer-specified Allan deviation
of the DRAO maser.
The CHIME GPS clock does not meet the stability requirements for
FRB VLBI, but the DRAO maser does (Equation~\ref{eq:spec}).}
\label{fig:chime_clk_adev}
\end{figure}
As a continuously tracking global navigation satellite system (GNSS)
station and as part of the
Western Canada Deformation Array \citep[WCDA, ][]{wcda1996_chen} and
the Canadian Active Control System \citep[CACS\footnote{\href{http://cgrsc.ca/resources/geodetic-control-networks/canadian-active-control-system-cacs/}
{http://cgrsc.ca/resources/geodetic-control-networks/canadian-active-control-system-cacs}}, ][]{cacs1996_duval},
DRAO is equipped with an atomic frequency standard consisting
of a T4Science pH Maser 1008 passive hydrogen maser owned
and operated by Natural Resources Canada (NRCan).
The maser is installed in a seismic vault at the DRAO site
and has a primary output of 5~MHz (sine wave). It
is directly connected to a low noise distribution amplifier in the same
rack that serves as electrical isolation
for the maser and also derives multiple
copies of the 10~MHz reference signal (sine wave).
NRCan has approved the use of
two of those signals for CHIME-related operations.
The manufacturer-specified Allan deviation of the DRAO maser is
shown in Figure~\ref{fig:chime_clk_adev} (red points).
The maser clearly exceeds the stability requirements for
FRB VLBI with CHIME (Equation~\ref{eq:spec}).
Some of the outrigger sites will also have access
to hydrogen maser frequency references with similar performance.
Although in principle the ICE system can be operated with
an IRIG-B time signal that is not phase-locked to the
10 MHz reference clock (such as an independent maser),
an important restriction
to using the maser as the master clock for the
CHIME correlator is the fact that both
the F-engine and X-engine software and the scientific data analysis
pipelines were developed on the premise that the clock and IRIG-B
signals are synced (e.g., to time stamp the data),
something that cannot be guaranteed if the two
signals are generated by independent systems (the maser and the
GPS receiver respectively). Even if the relative drift between clock and
time signals could be tracked, both the correlator and
data analysis pipelines
would need to be updated to implement this change.
As such, we had to develop a clock
stabilization system that did not impact
the normal operations of CHIME, its existing real-time backends,
and the other scientific teams.
\subsection{Maser signal conditioning}
\label{subsec:maser_hw}
The clock stabilization system designed for FRB VLBI with CHIME
keeps the current GPS-disciplined crystal oscillator as the
master clock and instead feeds the
maser signal to one of the ICE ADC
daughter boards so it is digitized by the crystal-oscillator-driven
F-engine. The data is processed to
monitor the variations in the phase of the sampled maser signal
which correspond to variations in the relative delay
between the maser and the master clock.
By using this information to correct
phase variations in the baseband data recorded at the time
of an FRB detection, the system effectively transfers the
reference clock from the GPS disciplined oscillator to the more
stable maser signal during FRB observations.
As shown in Figure~\ref{fig:chime_clk_adev} (dashed blue line),
the noise penalty
associated with the clock transfer operation
is essentially white
($\sigma_y(\Delta t) \propto 1/\Delta t$)
on the timescales relevant for FRB VLBI
and small ($\lesssim 20$~ps) compared
to the total clock timing error budget.
A block diagram of the maser signal path is shown in
Figure~\ref{fig:maser_setup}.
The first point of access to the 10~MHz maser signal is the
low noise distribution amplifier within the seismic vault.
From there, the signal is transported through $\sim 500$~m of
buried coaxial cable to one of the two radio-frequency (RF)-shielded
huts that house the CHIME F-engine.
We use the same type (LMR-400)
of low-loss coaxial cable used in the CHIME
analog receivers and whose thermal susceptibility has been
extensively tested in the field. At the RF hut, the cable
interfaces with a ground block and the signal is then
carried inside the RF room using standard SMA cables
where it is connected to an isolation transformer to refer
the next stages to the F-engine crate ground.
One complication in the digitization of the maser signal is that the
ADC daughter boards that specialize the ICE system for
CHIME have a bandpass transfer function that strongly attenuates
signals below $\sim 100$~MHz. For this reason, instead of feeding
the maser signal directly to an ADC daughter board, the signal is
used to drive a low-noise sine-to-square wave signal translator
that generates 10~MHz harmonics well into the CHIME band.
The output of the translator is then filtered
to the CHIME band using the same band-defining filter amplifier (FLA)
used in the CHIME receivers
\citep{PathfinderOverview,chime_overview_2021}.
Finally, the FLA is connected
directly to one of the correlator inputs
where it is digitized at 800~MSPS with an 8-bit ADC.
\begin{figure}[h!]
\centering
\includegraphics[scale=.39]{maser_setup.png}
\caption{Maser signal path. The 10~MHz maser signal is transported
through $\sim 500$~m of buried coaxial cable from the seismic vault
to one of the CHIME F-engine RF huts. There, the maser
signal is conditioned to a waveform that can be
digitized by the CHIME F-engine
(see Section~\ref{subsec:maser_hw}
for details).}
\label{fig:maser_setup}
\end{figure}
\subsection{Clock stabilization pipeline}
\label{subsec:pipeline}
The FPGA within each ICE motherboard processes
the data from its digitizers using the custom CHIME F-engine firmware
\citep[for details, see][]{2016JAI.....541005B,2016JAI.....541004B}. Briefly,
the raw ADC data from each input is passed to the
frequency channelizer module as frames of 2048 8-bit samples.
The channelizer forms the baseband data by splitting the 400 MHz input
bandwidth into 1024 frequency channels, each truncated to
a 4-bit real + 4-bit imaginary complex number. Additionally,
a probe sub-module within the channelizer
can be configured to periodically capture a subset of
the raw ADC data that is separately saved and typically
used in CHIME for system monitoring.
By default, the CHIME F-engine software pipeline
saves one raw ADC frame (2.56 $\mu$s of data)
from each input every 30~s, but this cadence can
be modified before starting a
data acquisition.
The clock stabilization pipeline
extracts the raw ADC frames
from the maser input, Fourier transforms each frame via a
Fast Fourier Transform (FFT), and separates the frequency channels
corresponding to the harmonics of the 10 MHz signal in the CHIME
band.
The quality of each harmonic is assessed based on its
signal-to-quantization-noise ratio (SQNR) and its
susceptibility to spurious aliased harmonics (relevant for
harmonics near the edges of the CHIME band).
Low quality harmonics are discarded.
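The harmonic-extraction step can be illustrated as follows (a simplified sketch, not the production pipeline; the toy frame length is chosen so the 10~MHz harmonics land exactly on FFT bin centers, which is not generally true for the real 2048-sample frames):

```python
import numpy as np

FS = 800e6   # ADC sampling rate, Hz
N = 8000     # toy frame length -> 100 kHz bin width
F0 = 10e6    # maser fundamental

rng = np.random.default_rng(1)
t = np.arange(N) / FS
# Toy "maser comb": harmonics of 10 MHz plus additive noise.
frame = sum(np.cos(2 * np.pi * k * F0 * t) for k in range(1, 40))
frame = frame + 0.1 * rng.normal(size=N)

spec = np.fft.rfft(frame)
bin_width = FS / N
# Keep the bins nearest each harmonic; in the real pipeline,
# low-SQNR harmonics and those near the band edges are discarded.
harmonic_bins = [int(round(k * F0 / bin_width)) for k in range(1, 40)]
amps = np.abs(spec[harmonic_bins]) / (N / 2)    # ~1 for each injected tone
phases = np.angle(spec[harmonic_bins])
```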
Since the ADC that digitizes the maser signal uses the GPS clock
as the reference for sampling, the variations in the delay of the
sampled maser signal represent the delay variations of the
GPS clock with respect to the maser, the latter of which is more
stable on short and intermediate timescales. These delay
variations $\Delta\tau(t)$ will induce
phase variations $\Delta \phi (t, \nu)$ in the maser harmonics of
the form
\begin{equation}
\label{eq:delay_to_phase_var}
\Delta \phi (t, \nu) = 2\pi\nu \Delta\tau(t)
\end{equation}
\noindent where $\nu$ is the harmonic frequency.
Since we are interested in the GPS clock delay variations
relative to the delay at the time of VLBI calibration,
the phase of the maser harmonics is initially referenced to
the phase of a frame close to calibration time. Then for each frame,
a line described by Equation~\ref{eq:delay_to_phase_var} is fit to
the phase as a function of harmonic frequency to recover the
GPS clock delay (relative to the maser)
as a function of time.
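Because Equation~\ref{eq:delay_to_phase_var} is a line through the origin, the per-frame fit reduces to a single least-squares slope. A sketch of this step (our own implementation, not the pipeline code):

```python
import numpy as np

def delay_from_phases(freqs, dphi):
    """Recover the relative delay from the phase changes of the maser
    harmonics, dphi = 2*pi*freq*dtau: least-squares slope of a
    zero-intercept line in frequency. Assumes the phases carry no
    2*pi ambiguity (small enough delay variations)."""
    freqs = np.asarray(freqs, dtype=float)
    dphi = np.asarray(dphi, dtype=float)
    slope = np.sum(freqs * dphi) / np.sum(freqs ** 2)
    return slope / (2.0 * np.pi)

# e.g. 10 MHz harmonics in the 400-800 MHz band, 20 ps relative delay
freqs = 10e6 * np.arange(41, 80)
dphi = 2 * np.pi * freqs * 20e-12
tau = delay_from_phases(freqs, dphi)   # recovers 2.0e-11 s
```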
The dominant component of the
recovered delay $\Delta\tau(t)$ is a slow
linear drift as a function of time ($\sim 50$~ns/day)
corresponding to a constant offset of the maser frequency from
10~MHz. This linear trend is removed from $\Delta\tau(t)$
since it is due to the maser frequency calibration and not due to the
instability of the GPS clock.
The captured raw ADC
frames are only a small fraction of the available CHIME data; thus,
the times at which GPS clock delay measurements are available are
not necessarily aligned with the times of a calibration observation
or an FRB detection. This means that the GPS clock delay timestream
must be interpolated in order to find the clock
contribution to the total delay measured in a VLBI observation.
We use linear interpolation
to find the GPS clock delay at any arbitrary time, a method
that is motivated by the short timescale behavior of the
clock delay variations.
Figure~\ref{fig:tuning_jitter} shows a
few examples of the behavior of the
GPS clock delay on timescales of a few seconds as measured by
the clock stabilization system.
On these timescales the timing variations are dominated by
the tuning jitter generated by the algorithm that disciplines
the crystal oscillator. In essence, the algorithm works
by counting the number of clock cycles between
successive GPS receiver pulse-per-second (PPS) pulses and adjusting
the crystal's temperature to ensure ten million counts between pulses.
The size of the temperature tuning steps is progressively
reduced as the crystal oscillator frequency approaches 10~MHz.
As shown in Figure~\ref{fig:tuning_jitter},
this discipline procedure gives a characteristic
triangle-wave shape to the tuning jitter, and although in a
perfectly-tuned oscillator the transitions should occur every second,
in practice we observe that they can take longer.
Thus, as long as the GPS clock delay is sampled at cadences
below $\sim 500$~ms we can track the tuning jitter features and
a linear interpolation provides a good approximation to
the true delay at any time.
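The interpolation itself is simple; the subtlety is the cadence requirement. A sketch with a synthetic triangle-wave jitter (illustrative values only, not measured data):

```python
import numpy as np

def clock_delay_at(t_query, t_meas, tau_meas):
    """Linearly interpolate the measured GPS clock delay timestream to
    an arbitrary time, e.g. an FRB detection or a VLBI calibration."""
    return np.interp(t_query, t_meas, tau_meas)

# Synthetic tuning jitter: a 2 s period, 1 ns amplitude triangle wave
# sampled at 40 ms cadence; sampling faster than ~500 ms is what makes
# linear interpolation a good approximation to the true delay.
t = np.arange(0.0, 10.0, 0.04)
tau = 1e-9 * np.abs((t % 2.0) - 1.0)
tau_frb = clock_delay_at(1.23, t, tau)   # on the linear ramp: 0.23 ns
```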
\begin{figure}[h!]
\centering
\includegraphics[scale=.42]{tuning_jitter.png}
\caption{Three examples of the behavior of the
GPS clock delay on timescales of a few seconds as measured by
the clock stabilization system with respect to the DRAO maser.
The raw ADC data cadence for these measurements was 40~ms.
The measurement errors are in the range $\sim 2-13$~ps.
The characteristic triangle wave pattern is due to
the algorithm that disciplines the crystal oscillator
in the GPS unit.
The algorithm works by counting the number of clock cycles between
successive GPS PPS pulses and adjusting
the crystal's temperature to ensure ten million
counts between pulses.
The size of the temperature tuning steps changes depending on
the tuning history of the oscillator.}
\label{fig:tuning_jitter}
\end{figure}
As the current version of the F-engine control software only allows
saving raw ADC data for all the correlator inputs at the same time, raw ADC
data at cadences below 10~s cannot be saved during normal
telescope operations and is restricted to
times scheduled for
hardware maintenance and software upgrades.
A modification to the F-engine control software is ongoing to
allow saving fast-cadence raw ADC data for the maser input while
keeping the default cadence for the remaining correlator inputs,
a change that does not impact the normal operations of the
correlator and the data-analysis pipelines.
It is also possible to process baseband
data directly to extract the maser signal and measure the GPS clock
delay variations. The operation is very similar
to that of raw ADC data, except that the maser data has already
been transformed to the frequency domain by the F-engine.
Since in this case
most maser harmonics do not lie exactly in the center of
a frequency channel, the pipeline selects the closest F-engine frequency
channel. Then for each selected channel, it
performs an additional channelization by
using an FFT along the time domain to isolate the harmonic frequency.
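This second channelization stage can be sketched as a "zoom" FFT along the time axis of the selected channel (a simplified illustration with hypothetical numbers; names are ours):

```python
import numpy as np

FRAME_RATE = 1.0 / 2.56e-6   # baseband frame rate, ~390.625 kHz

def zoom_to_harmonic(chan_ts, f_offset):
    """Second-stage channelization: FFT the complex timestream of the
    F-engine channel nearest the maser harmonic, then keep the bin
    nearest the harmonic's frequency offset from the channel center."""
    spec = np.fft.fft(chan_ts)
    freqs = np.fft.fftfreq(len(chan_ts), d=1.0 / FRAME_RATE)
    return spec[np.argmin(np.abs(freqs - f_offset))]

# A harmonic offset ~95.4 kHz from the channel center, phase 0.7 rad
n = 4096
f_off = FRAME_RATE * 1000 / n          # lands exactly on a zoom bin
ts = np.exp(1j * (2 * np.pi * f_off * np.arange(n) / FRAME_RATE + 0.7))
phase = np.angle(zoom_to_harmonic(ts, f_off))   # ~0.7
```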
Although working with baseband data is logistically convenient in
cases where we need to test the performance of the
clock stabilization system (see Section~\ref{sec:validation}),
during regular operations the current system is designed to work mostly
with raw ADC data. This is mainly because a baseband dump for an
FRB event is typically collected in
$\sim 100$~ms segments at different
times for each frequency channel in order to account for the
dispersion delay of the transient, with a total event duration lasting
tens of seconds \citep{michilli2020analysis}.
This leaves only a few megahertz of bandwidth
available at any particular instant, making the monitoring
of clock delay variations more challenging. Furthermore,
when using baseband dumps we still need to rely on the
continuously saved raw ADC data to track and correct the
long-timescale linear drift of the maser.
\section{Validation of the clock stabilization system}
\label{sec:validation}
We tested the reliability of the clock stabilization system
by installing it in the Pathfinder telescope and
comparing maser-based measurements of the CHIME-Pathfinder
relative clock drift to independent measurements
obtained from VLBI-style observations of steady
radio sources.
\subsection{The Pathfinder as an outrigger}
\label{subsec:pathfinder}
The Pathfinder is presented in detail in \cite{PathfinderOverview}.
It is a small-scale prototype of CHIME with identical
design and field of view, and it has
the same collecting area as the
outriggers under construction.
The telescope is located $\sim 400$~m from CHIME
and was constructed
before CHIME as a test-bed for technology development.
With the same correlator architecture as CHIME, the Pathfinder
operates as an independent connected interferometer with
its own GPS-disciplined clock.
Recently, \cite{leung2020synoptic} repurposed the Pathfinder
as an outrigger to demonstrate the feasibility of
triggered FRB VLBI for CHIME/FRB Outriggers.
The Pathfinder correlator is now equipped with a custom
baseband data recorder capable of processing
one quarter of the CHIME band and programmed
to write its local baseband data to disk
upon receipt of a trigger from CHIME/FRB.
We also connected the additional copy of the maser signal from the
seismic vault to one of the Pathfinder correlator inputs using
a signal path identical
to that of CHIME and shown
in Figure~\ref{fig:maser_setup}
(except for the transport cable
which is longer for the Pathfinder setup).
\subsection{Comparison to interferometric observations}
\label{subsec:CygA}
The clock stabilization system measures the delay variations
of the CHIME and Pathfinder clocks by processing the
maser data from each telescope as described in
Section~\ref{sec:clock_stabilization}. The delays
from each clock are then interpolated to the observation times
so the relative clock drift can be
tracked over time\footnote{If the maser data
comes from simultaneous baseband dumps
instead of raw ADC samples then clock
delays from the two telescopes can be directly compared
without interpolation.}.
An independent way to measure the CHIME-Pathfinder relative clock
delay is to interferometrically track a known point source over time
using both telescopes and their independently-running backends.
If we properly account for all the other contributions
to the measured interferometric delay
(geometric, ionosphere, etc.) as the source transits through the
field of view of the two telescopes, then any residual delay
should correspond to the relative drift between the two clocks.
If the clock stabilization system is robust,
its measurements should agree very
closely with the interferometric measurement,
which we use as a standard.
\subsubsection{Short timescale test}
\label{subsubsec:short_timescale_test}
We used Cygnus-A (henceforth referred to as CygA)
for the VLBI-style observations since it
is the brightest radio source seen by CHIME that
is unresolved on a CHIME-Pathfinder baseline.
For the first test, we programmed the CHIME/FRB backend to
trigger short baseband dumps
simultaneously for CHIME and the Pathfinder
during a single CygA transit. In this way, we collected
seven 10~ms-long baseband dumps, spaced by a minute, while
the source was in the field of view.
The observation was carried out in November 2020 during a day
scheduled for instrument maintenance, so we were also able
to collect raw ADC maser data at 200~ms cadence with the two telescopes.
Since the telescopes are co-located, they experience a common ionosphere,
suppressing relative ionospheric fluctuations (we see no evidence for these
in our observations). Thus, the residual delay in the
interferometric visibility after accounting for
the geometric contribution gives a measurement of the relative
clock drift. The residual interferometric delays
are calculated using a procedure identical to
that described in \cite{leung2020synoptic}.
In summary, each telescope is internally calibrated
(to measure the directional response of each antenna in the
telescope array) using a separate observation of a bright point source
\citep{chime_overview_2021}.
Then, for each telescope and baseband dump, the data is
coherently summed over antennas to form
a phased-array voltage beam towards CygA.
The beamformed data from CHIME and the Pathfinder
is then cross-correlated on a frequency-by-frequency basis
to form the complex visibility. The phase of each visibility is
compensated for the geometric delay.
The variance of each visibility is found empirically by
splitting each baseband dump into short time segments,
computing the visibility for each segment, and calculating the
variance over segments as a function of frequency.
The seven visibilities are phase-referenced to that of the first
baseband dump since we are only interested in
changes of the relative clock delay.
Finally, the wideband fringe-fitting procedure to find residual delay
performs a least-mean-squares fit of a complex exponential
with linear phase and frequency-dependent amplitude
to the measured visibilities.
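A much simplified stand-in for that fringe-fitting step is a grid search over trial delays that maximizes the band-averaged coherence (the actual procedure is a least-mean-squares fit with a frequency-dependent amplitude; the sketch and its names are ours):

```python
import numpy as np

def fringe_search(freqs, vis, taus):
    """Return the trial delay that maximizes the coherent sum of the
    visibilities after removing a linear-in-frequency phase."""
    coh = [np.abs(np.sum(vis * np.exp(-2j * np.pi * freqs * tau)))
           for tau in taus]
    return taus[int(np.argmax(coh))]

# Visibilities with a 30 ps residual clock delay across 400-800 MHz
freqs = np.linspace(400e6, 800e6, 1024)
vis = np.exp(2j * np.pi * freqs * 30e-12)
taus = np.linspace(-1e-9, 1e-9, 4001)         # 0.5 ps steps
tau_hat = fringe_search(freqs, vis, taus)     # ~3e-11 s
```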
\begin{figure}[h!]
\centering
\includegraphics[scale=.35]{cygasingle.png}
\caption{Comparison of the CHIME-Pathfinder relative clock delay
inferred via the clock stabilization system and interferometric
observations from a single transit of CygA.
Top: relative clock delay (in ns)
as function of time as inferred from
CygA baseband data (blue),
raw ADC maser data (red) and maser baseband data (green).
Bottom: Difference between sky-based and maser-based measurements
of the relative clock delay
(raw ADC maser data in red, baseband maser data in green),
demonstrating agreement between the two methods. The large error bars
for all but the last point in the raw ADC data analysis (red)
are due to current limitations of the Pathfinder raw ADC
acquisition system (see Section~\ref{subsubsec:short_timescale_test}
for details). The error bar in the
last red point of the top plot ($\sim 14$~ps)
is representative of the expected accuracy
of the clock stabilization system using raw ADC data.}
\label{fig:short_timescale}
\end{figure}
The top panel of Figure~\ref{fig:short_timescale} shows the resulting
comparison between the CHIME-Pathfinder relative clock delays
calculated from interferometric observations (blue)
and those found through the clock stabilization system
(from raw ADC data in red, from baseband data in green).
The $1\sigma$ error bars are too small to be visible in the plot but
they are in the range $\sim 17-22$~ps for CygA measurements,
$\sim 14-238$~ps for raw ADC maser data measurements,
and $\sim 18-33$~ps for baseband maser data measurements.
Measurements with the clock stabilization system
show excellent agreement with the sky. This is further
highlighted in the bottom panel of Figure~\ref{fig:short_timescale}
that shows the difference between sky-based and maser-based measurements
of the relative clock delay
(raw ADC maser data in red, baseband maser data in green).
The large error bars for all but the last point in the
raw ADC data analysis (red)
are dominated by the error in the measurements of
the maser delay at the Pathfinder. These can be traced back to current
limitations of the Pathfinder raw data acquisition system, which
occasionally drops packets when we collect raw ADC data at fast cadence
for the reasons explained in Section~\ref{subsec:pipeline}.
This limitation will be solved in the next upgrade of the
F-engine control software. For the observation times that
fell within sections of missing Pathfinder raw ADC data
(the longest of which was $\sim 80$~s),
the delay values were obtained by performing
a smoothing spline interpolation based on the available
measurements. To estimate the uncertainty in the delay values
obtained with this method we analyzed a segment of the
delay timestream for which there were no gaps due to dropped
packets, $\sim 10$~min before the sky observations.
The errors were found using a procedure similar to the one
used in Section~\ref{sec:no_maser_outrigger} to evaluate
the performance of alternate reference clocks,
where we introduce artificial gaps in the delay timestream and
analyze the statistics of an ensemble of interpolation
residuals. Only the raw ADC delays (red)
for the last observation time could be measured
using the default interpolation for both telescopes
(see Section~\ref{subsec:pipeline}). The
uncertainty for this measurement is $\sim 14$~ps, and
is representative of the expected accuracy
of the clock stabilization system using raw ADC data.
\subsubsection{Long timescale test}
\label{subsubsec:long_timescale_test}
To test the performance of the clock stabilization system
on long timescales we collected five CygA baseband dumps,
each 10~ms in duration, spaced one minute apart,
for nine days in a row for a total of 45 delay measurements.
The observations were carried out in April 2021 during
normal CHIME operations so we relied on baseband
maser data for delay measurements with the clock stabilization
system for the reasons explained in Section~\ref{subsec:pipeline}.
Both interferometric and maser-based delays
are calculated using the same procedure as in the
short timescale test described in
Section~\ref{subsubsec:short_timescale_test}, with the
visibilities phase-referenced to one of the observations
on the fifth day.
\begin{figure}[h!]
\centering
\includegraphics[scale=.35]{cyga_multi.png}
\caption{Top: Comparison of the CHIME-Pathfinder relative clock delay
inferred via the clock stabilization system (red) and interferometric
observations (blue) from multiple transits of CygA.
For each transit, we made five measurements of the
relative clock delay, spaced by one minute,
for nine days in a row.
Bottom: Difference between sky-based and maser-based
measurements of the relative clock delay.
The two methods show excellent agreement
on short (minute) and long (many days) timescales.
This indicates that the clock stabilization system
we have implemented can track clock delay variations with
better than $\sim 30$~ps rms level precision.}
\label{fig:long_timescale}
\end{figure}
The results of the long timescale test are shown in Figure~\ref{fig:long_timescale}. The interferometric
measurements agree with the maser-based measurements
at the $\sim 30$~ps rms level, demonstrating that
after correction with the clock stabilization system
the resulting reference clock is stable
over timescales of more than a week,
and that the signal
chain used to inject the maser signal into the correlator
is not a limitation for the system's performance
in CHIME/FRB Outriggers.
\section{A reference clock for outrigger stations without a maser}
\label{sec:no_maser_outrigger}
The clock stabilization system allows us to inject external
reference clock signals into radio telescopes which share the
CHIME correlator architecture.
In addition to its use in the new outriggers,
the system enables reference clocks
to be swapped out at existing telescopes like CHIME
and legacy systems like the Pathfinder
without making major changes to the software framework or existing
scientific backends, while expanding the telescopes' capabilities to
include VLBI.
Most outriggers will also have access to
hydrogen maser frequency references that can be used in the
same way as CHIME (Section \ref{sec:clock_stabilization})
to meet the stability requirements
for FRB VLBI. However, we still
need to address the possibility that certain outrigger
stations may be built at locations (e.g., greenfield land) that will
lack the infrastructure to support a hydrogen maser.
In this scenario, alternate reference signals
(e.g., from rubidium microwave oscillators)
can also be injected into the correlator to track and compensate
for GPS clock drifts. The performance of these oscillators is
inferior to that of a hydrogen maser, but they are still
more stable than the CHIME GPS clock on the timescales relevant
for FRB VLBI. They are also less expensive and
more readily available than a maser.
Although these
frequency references could in principle
be used directly as the correlator master clock, since they typically
come in units that provide GPS disciplining as well as absolute time,
it is still desirable to use them separately as free-running clocks
for short and intermediate timescale observations: not only are they
inherently more stable than the primary CHIME clock, but when not
locked to GPS they are also free of short-timescale tuning jitter.
Equation~\ref{eq:spec}
provides a convenient way to determine whether an off-the-shelf
clock meets the requirements for FRB VLBI
by simply reading the $\sigma_y(10^3~\text{s})$
value from the unit's datasheet.
Passive hydrogen masers meet and exceed this requirement.
However, this specification
was derived from Equation~\ref{eq:adev_wn_app}
which assumes uncorrelated clock timing errors and
perfect calibration measurements.
In practice, the actual clock timing errors
will depend on aspects not necessarily captured by
this equation including
the detailed statistics of the delay variations,
the methods used to estimate them, the
timing and accuracy of the calibration measurements,
and the technique that we use to inject the clock to our system.
These aspects become relevant when
the frequency standard does not clearly exceed the
specification in Equation~\ref{eq:spec}.
As part of the implementation of the clock stabilization system,
\cite{timing_pipeline_scary_2021} developed a software package
with methods to
determine the suitability of precision clocks for
VLBI with transit telescopes like CHIME/FRB Outriggers.
These methods take
into account the details of the
noise processes that determine the stability of the clocks
and simulate realistic timing
calibration scenarios.
The basic input to the software is a timestream that
represents the delay variations as a function of
time of the clock under test.
The delay timestream data can be either from measurements or from
simulations; in the latter case the software provides tools to
generate timestreams described by
combinations of power-law noise processes
commonly observed in precision clocks and oscillators including
white phase modulation noise, white frequency modulation noise,
flicker frequency modulation noise,
and random walk frequency modulation noise
\citep{1539968}.
Similarly, the software provides tools to generate
delay timestreams from a set of Allan deviation measurements,
which is convenient to evaluate the performance of a
clock based on its manufacturer specifications.
In this case, it is assumed that the delay variations
are described by a combination of power-law noise processes
where the weight of each noise component is found by
fitting the Allan variance data
to a model consisting of a linear combination of the
Allan variances of the previously described noise processes.
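A toy version of the timestream generator for two of these noise processes (the software package supports the full set and the Allan-variance fitting; this sketch and its parameter names are ours):

```python
import numpy as np

def simulate_delay(n, tau0, wpm=0.0, wfm=0.0, rng=None):
    """Generate a clock delay timestream of n samples at cadence tau0
    as a combination of white phase modulation (uncorrelated delay
    errors of rms `wpm`) and white frequency modulation (a random walk
    in delay with per-step fractional-frequency rms `wfm`)."""
    rng = rng or np.random.default_rng()
    x = wpm * rng.normal(size=n)                        # WPM term
    x = x + np.cumsum(wfm * tau0 * rng.normal(size=n))  # WFM term
    return x

rng = np.random.default_rng(2)
# e.g. 5 ps white delay errors on top of a slow random walk, 30 s cadence
x = simulate_delay(20_000, 30.0, wpm=5e-12, wfm=1e-14, rng=rng)
```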
A calibrator is parametrized
by its observing time, number of
clock timing measurements, and $\text{SNR}$ per transit.
For example, for a calibrator at the equator
the observing time with CHIME
is $\sim 6$~min, and with a $\sim 2$~min
integration time we would have three
delay measurements per transit. The $\text{SNR}$ determines
the uncertainty in the calibration delay measurements
(see Equation~\ref{eq:sigma_tau}).
Given a timescale $\Delta t_{cal}$ that represents the
maximum expected time
separation between calibrators, the method masks a
random $\Delta t_{cal}$-long section
of the delay timestream, interpolates
using a best-fit function determined from
the available calibration measurements at each end
of the masked section, and keeps the interpolation residuals.
The process is repeated a
configurable number of times
to obtain a statistical ensemble of interpolation residual
timestreams, each of length\footnote{In
practice there is an additional overhead equivalent to one integration.}
$\sim \Delta t_{cal}$.
As the default metric of the
stability of the clock at $\Delta t_{cal}$ timescales, the method
uses the largest value of the ensemble standard deviation
in the interval $\left [0, \Delta t_{cal} \right]$.
Other metrics of performance are available.
Since throughout the paper
we have used the convention that $\Delta t$ represents the
time between a calibration and an observation, this metric
can be interpreted as an
estimate of the largest clock timing rms error
for $\Delta t$ up to $\sim \Delta t_{cal}/2$.
Different interpolation methods are available including
linear fit, smoothing spline, and
nearest available calibrator.
The fitting weights are determined
by the calibrator $\text{SNR}$ and the level of
the noise added by the timing stabilization system.
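The masking-and-interpolation procedure can be sketched as follows (a simplified illustration with hypothetical names: it uses a single calibration sample at each end of the gap, linear interpolation only, and ignores the SNR-dependent fitting weights):

```python
import numpy as np

def interpolation_residuals(delay, dt, dt_cal, n_trials=100, rng=None):
    """Monte Carlo estimate of clock interpolation errors: mask a random
    dt_cal-long stretch of the delay timestream, linearly interpolate
    across it from the calibration samples at each end, and keep the
    residuals.  Returns the largest ensemble standard deviation over
    the gap (the default metric described in the text)."""
    rng = np.random.default_rng() if rng is None else rng
    n_gap = int(dt_cal / dt)             # samples in the masked section
    residuals = []
    for _ in range(n_trials):
        i0 = rng.integers(1, len(delay) - n_gap - 1)
        i1 = i0 + n_gap
        # Linear interpolation anchored on the samples bracketing the gap.
        t = np.arange(n_gap)
        interp = delay[i0 - 1] + (delay[i1] - delay[i0 - 1]) * (t + 1) / (n_gap + 1)
        residuals.append(delay[i0:i1] - interp)
    return np.array(residuals).std(axis=0).max()

# Example with a random-walk delay timestream (arbitrary units):
rng = np.random.default_rng(0)
delay = np.cumsum(rng.normal(size=10000))
m_short = interpolation_residuals(delay, 1.0, 10.0, n_trials=200, rng=rng)
m_long = interpolation_residuals(delay, 1.0, 100.0, n_trials=200, rng=rng)
```

As expected for a random-walk clock, the projected error grows with the calibrator gap (`m_long > m_short`).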
As a candidate for outriggers without a maser, we evaluated the
performance of the EndRun Technologies Meridian II US-Rb
rubidium oscillator. The unit was installed
in the Pathfinder RF room and connected to a
separate input of the correlator so we could test its
performance against the DRAO maser under conditions
comparable to those of a typical outrigger. A signal
conditioning chain identical to
the maser was used (signal translator + FLA).
We collected $\sim 42$~h of raw ADC data
at 1~s cadence for both the maser and the Rb clock
and used the clock stabilization pipeline to extract
the clock delay variations relative to the maser.
The top panel of Figure~\ref{fig:merid_clk_adev}
shows the measured delay variations after
removing a slow linear drift component due to the maser
(see Section~\ref{subsec:pipeline}).
The bottom panel
shows the corresponding Allan deviation of the Rb clock (blue),
the measurement error (dashed blue), and
the manufacturer-specified Allan deviation of the
Rb clock (green points) and the DRAO maser (red points).
For $\Delta t \lesssim 40$~s the
variations are dominated by
the noise associated with the timing stabilization system,
which is in the range $\sim 10-30$~ps, small compared
to the $\sim 200$~ps clock timing error budget.
At longer timescales the measured
Allan deviation of the Rb oscillator is consistent
with the manufacturer specification. These
results are also consistent with direct
Allan deviation measurements
of the Rb oscillator performed with a phase noise analyzer,
confirming that
the hardware of the stabilization system
is not a limitation for clock performance in
CHIME/FRB Outriggers.
\begin{figure}[h!]
\centering
\includegraphics[scale=.35]{meridian_clk_adev.png}
\caption{Top: Measured delay variations of the
rubidium oscillator tested as a candidate reference clock
for outriggers without a maser.
Bottom: Measured Allan deviation of the Rb clock (blue),
measurement error (dashed blue), and
manufacturer-specified Allan deviation of the
Rb clock (green points) and the DRAO maser (red points).
The measured Allan deviation of the Rb clock is consistent
with the specification at intermediate and long timescales.
At short timescales the noise of the timing stabilization system
dominates the performance, but it is still small ($\sim 10-30$~ps)
compared to the clock timing error budget. This confirms that
the hardware of the clock stabilization system
is not a limitation for clock performance in
CHIME/FRB Outriggers.}
\label{fig:merid_clk_adev}
\end{figure}
Note that if we rely only on the manufacturer-specified
Allan deviation, the Rb clock does not meet the
requirement in Equation~\ref{eq:spec}. This justifies
a more detailed analysis of the clock performance to
determine whether it can still be used as a frequency
reference for the outriggers without a maser.
The measured delay timestream was analyzed with the
software package described above and in \cite{timing_pipeline_scary_2021}
to evaluate the expected performance
of the Rb clock under different calibration conditions.
The results are shown in Figure~\ref{fig:meridian}.
For the analysis we assumed that the calibrator observing time
was 9~min (roughly the value at CHIME's zenith)
with two integrations per observation.
The best performance is obtained with
linear interpolation between calibrators.
We tested calibrator $\text{SNRs}$ of 10 (purple),
15 (green), 20 (red), and $\infty$ (blue),
the latter representing the case where the clock performance
is not limited by calibration errors.
For comparison we also show the projected clock timing errors
for the DRAO maser using synthetic data generated from
the manufacturer-specified
Allan deviation with $\text{SNR}=15$ (dashed green).
Figure~\ref{fig:meridian} shows that,
even in the most conservative scenario
where we assume that all the calibrators have $\text{SNR} = 15$,
the Rb clock timing errors stay below
$225$~ps up to $\Delta t \sim 10^3$~s
by interpolating between timing solutions (solid green).
This clock timing error is still well below
the total timing budget of $\sim 800$~ps,
leaving enough room to handle ionospheric delay errors.
\begin{figure}[h!]
\centering
\includegraphics[scale=.36]{meridian_linear_ip.png}
\caption{Projected clock errors, $\sigma_\tau^{\text{clk}}$, of the
Rb clock as function of the time between calibrators
$\Delta t_{cal}$ from measured delay variations and
simulations of
realistic timing calibration scenarios.
This metric represents an estimate of the
largest clock timing error for
$\Delta t$ up to $\sim \Delta t_{cal}/2$
(see Section~\ref{sec:no_maser_outrigger} for details).
The dashed black horizontal line represents
$\sigma_\tau^{\text{clk}} = 200$~ps.
Even in the most conservative scenario
where we assume that all the
calibrators have $\text{SNR} = 15$ (solid green),
the Rb clock timing errors stay below
$225$~ps up to $\Delta t \sim 10^3$~s
by interpolating between timing solutions,
meeting the requirements for FRB VLBI with CHIME/FRB Outriggers.
}
\label{fig:meridian}
\end{figure}
\section{Conclusions}
\label{sec:conclusions}
We developed a clock stabilization system for CHIME/FRB Outriggers that
allows synchronization of CHIME and outrigger
stations at the $\sim 200$~ps level
on short and long timescales. This meets the requirements
for 50~mas localization of FRBs detected with the
CHIME/FRB real-time pipeline. Our proof-of-principle clock
transfer has demonstrated that a variety of different data
products can be used for precise time transfer from an external
reference clock into data acquisition backends using the ICE
framework. This method is minimally invasive to existing
telescopes like CHIME and the Pathfinder,
expanding the capabilities of these instruments to include VLBI
without impacting their existing scientific backends.
It also allows for
increased flexibility and modularity for future systems such
as those at CHIME outriggers.
For outriggers that do not have the infrastructure to
support a hydrogen maser, we demonstrated that
it is still possible to meet the required clock stability
specification by using alternate reference clocks
and interpolating timing solutions between calibrations.
\begin{acknowledgments}
We acknowledge that CHIME is located on the traditional,
ancestral, and unceded territory of the Syilx/Okanagan people.
We are grateful to the staff of the Dominion Radio
Astrophysical Observatory, which is
operated by the National
Research Council Canada.
CHIME is funded by a grant from the Canada Foundation
for Innovation (CFI) 2012 Leading Edge Fund (Project 31170)
and by contributions from the provinces of British Columbia,
Qu\'ebec and Ontario. The CHIME/FRB Project is funded by a
grant from the CFI 2015 Innovation Fund (Project 33213) and
by contributions from the provinces of British Columbia and
Qu\'ebec, and by the Dunlap Institute for Astronomy and
Astrophysics at the University of Toronto.
Additional support was provided by the Canadian
Institute for Advanced Research (CIFAR), McGill
University and the McGill Space Institute via the
Trottier Family Foundation, and the University of
British Columbia.
The CHIME/FRB Outriggers program is funded by
the Gordon and Betty Moore Foundation
and by a National Science Foundation (NSF) grant (2008031).
FRB research at MIT is supported by an NSF grant (2008031).
FRB research at WVU is supported by an NSF grant (2006548, 2018490).
J.M.P. is a Kavli Fellow.
C.L. was supported by the U.S. Department of Defense (DoD)
through the National Defense Science \& Engineering Graduate
Fellowship (NDSEG) Program.
V.M.K. holds the Lorne Trottier Chair in Astrophysics \&
Cosmology and a Distinguished James McGill Professorship
and receives support from an NSERC Discovery Grant and
Herzberg Award, from an R. Howard Webster Foundation
Fellowship from CIFAR, and from the FRQNT Centre de
Recherche en Astrophysique du Qu\'{e}bec.
J.L.S. acknowledges support from the Canada 150 programme.
\end{acknowledgments}
\vspace{5mm}
\facilities{CHIME}
\software{\texttt{AllanTools}~\citep{2018ascl.soft04021W}}
\section{Introduction}
\label{sec:intro}
Fast radio bursts \citep[FRBs,][]{lorimer2007bright} are transient pulses
of radio light observed out to cosmological distances; both their origins
and emission mechanisms remain unclear. Even though thousands of FRB events
occur over the full sky every day \citep{CHIMEFRB_CAT1,2018MNRAS.475.1427B},
their detection with traditional radio telescopes is challenging due to the
randomly occurring nature of the majority of bursts.
With its unique design optimized for rapid wide-field
observations and a powerful
real-time transient-search engine \citep[CHIME/FRB,][]{FRBSystemOverview},
the Canadian Hydrogen Intensity Mapping Experiment
\citep[CHIME\footnote{\href{https://chime-experiment.ca}
{https://chime-experiment.ca}}, ][]{chime_overview_2021}
has become the leading facility for detection of FRBs,
detecting over 500 FRBs \citep{CHIMEFRB_CAT1} and
18 new repeating sources \citep{R2,RN, RN2}
in its first year of full operations.
Such an unprecedented sample of events with a single survey has enabled
detailed studies of statistical properties of the FRB population
such as fluence distribution and sky rate, scattering time,
dispersion measure (DM) distribution, spatial
distribution, burst morphology, and correlations with large-scale structure
\citep{CHIMEFRB_CAT1,FRB_Morphology2021,FRB_galactic_lat2021,FRB_lss_xcor2021,
2021arXiv210710858C}.
However, except for FRBs with low dispersion measure
\citep{michilli2020analysis,Bhardwaj_M81,2021arXiv210812122B},
CHIME/FRB's arcminute localization
precision is insufficient to localize these bursts to their host galaxies,
which is crucial to understand their nature and unlock their
potential as probes of the intergalactic medium and large-scale structure.
To overcome this limitation, the CHIME/FRB collaboration is currently
developing CHIME/FRB Outriggers, a program to deploy
CHIME-like outrigger telescopes at continental baseline distances.
CHIME and the outriggers will
form a dedicated very-long-baseline interferometry (VLBI) network
capable of detecting hundreds of FRBs each year with sub-arcsecond
localization precision in near real-time,
allowing for the unique identification of FRB
galaxy hosts and source environments.
Because VLBI localizes sources by precisely measuring the
difference in the arrival time
of astronomical signals between independent telescopes across
far-separated sites, it is critical to use very stable
local reference signals (i.e., clocks) that allow
the synchronization of VLBI stations without losing coherence during
observations and between calibrations.
This is particularly important for stationary telescopes like CHIME
and the outrigger stations that can only be calibrated when a bright
radio source transits through their field of view.
The superior stability performance of
hydrogen masers on short and intermediate timescales makes them the
preferred option for VLBI applications
\citep{2018PASP..130a5002M,2019ApJ...875L...2E,2020arXiv200409987S}.
Here, we present a
hardware and software clock stabilization solution for the CHIME
telescope that effectively
transfers the reference clock from its original GPS-disciplined
crystal oscillator to a passive hydrogen maser during
VLBI observations, meeting the timing requirements for FRB VLBI with CHIME/FRB
Outriggers. Furthermore, this system can be implemented
without interrupting CHIME's current observational campaign and without
modifications to the correlator or
the data-analysis pipelines
for cosmology and radio transient science.
The paper is organized as follows:
Section~\ref{sec:inst_overview} describes
the features of the CHIME instrument that are relevant to its use as
a VLBI station in CHIME/FRB Outriggers.
Section~\ref{sec:requirements} discusses the CHIME/FRB Outriggers
clock stability requirements for FRB VLBI.
Section~\ref{sec:clock_stabilization} describes the hardware and software of the
stabilization system that transfers CHIME's reference clock to
a passive hydrogen maser. Section \ref{sec:validation} shows
the results of the suite of tests that validate the clock stabilization
system with VLBI-style observations between CHIME and the CHIME Pathfinder
\citep[an early small-scale prototype of CHIME recently
outfitted as an outrigger test-bed, ][]{PathfinderOverview,leung2020synoptic}.
Section~\ref{sec:no_maser_outrigger} presents an alternate clock solution
for outrigger stations that do not have the infrastructure to support
a hydrogen maser. Section~\ref{sec:conclusions} presents the conclusions.
\section{Instrument overview}
\label{sec:inst_overview}
A detailed description of the CHIME instrument and the
CHIME/FRB project is presented in
\cite{chime_overview_2021} and \cite{FRBSystemOverview}. In this section
we give a brief introduction to these systems focused on the
features that are relevant for FRB VLBI. We also
give an overview of CHIME/FRB Outriggers.
\subsection{CHIME and CHIME/FRB}
\label{subsec:chime_chimefrb}
CHIME is a hybrid cylindrical transit interferometer located at the
Dominion Radio Astrophysical Observatory (DRAO) near Penticton, B.C.,
Canada. It consists of four 20 m $\times$ 100 m cylindrical reflectors
oriented north-south and instrumented with a total of 1024
dual-polarization feeds and low-noise receivers operating in the
400-800~MHz band. The cylinders are fixed with no moving parts,
so CHIME operates as a drift-scan instrument that surveys the northern
half of the sky every day with an instantaneous field of view of
$\sim 120^\circ$ north-south by $2.5^\circ-1.3^\circ$ east-west.
Although CHIME's design was driven by its primary scientific goal to
probe the nature
of dark energy by mapping the large-scale structure of neutral hydrogen in the universe across the redshift range $0.8\le z\le 2.5$,
its combination of high sensitivity and large field of view also
makes it an excellent instrument to study the radio transient sky. Thus,
in its final stages of commissioning, the CHIME correlator was upgraded
with additional hardware and software backends
to perform real-time data
processing operations for pulsar timing and FRB science.
The correlator
\citep{2016JAI.....541005B,2016JAI.....541004B,2020JAI.....950014D}
is an FX design (temporal Fourier transform before spatial
cross-multiplication of data), where the F-engine digitizes the 2048
analog inputs at 800~MSPS
and separates the 400~MHz input bandwidth into 1024
frequency channels with 390~kHz spectral resolution.
The F-engine also implements the corner-turn network
that re-arranges the complex-valued channelized data (also known
as ``baseband'') before sending it to the X-engine that
computes a variety of data products for the different
real-time scientific backends: interferometric visibilities for the
hydrogen intensity mapping backend \citep{chime_overview_2021},
dual-polarization tracking
voltage beams for the pulsar monitoring backend \citep{2020arXiv200805681C},
and high-frequency resolution power beams
for the 21-cm absorption systems
backend \citep{2014PhRvL.113d1303Y} and for the CHIME/FRB
backend that triggers on highly dispersed radio transients
to search for FRBs in real time \citep{FRBSystemOverview}.
Additionally, a $\sim$36~s long memory buffer in the X-engine
stores baseband data (2.56~$\mu$s time resolution,
390~kHz spectral resolution, and 4-bit real~+~4-bit imaginary
bit depth for the 2048 correlator inputs)
that can be saved to disk when the CHIME/FRB search pipeline detects
an FRB candidate, enabling polarization and high-time resolution
analysis of FRB events, as well as sub-arcminute localization
precision \citep{michilli2020analysis}.
Eventually it will also enable VLBI localization
with CHIME/FRB Outriggers.
\subsection{CHIME/FRB Outriggers}
\label{subsec:chimefrb_outriggers}
The scientific goal of CHIME/FRB Outriggers is to provide 50~mas
localization for nearly all CHIME detected FRBs
with sub-hour latency. This
angular resolution is sufficient to determine galaxy hosts and
source environments, and is well matched to current best optical
follow-up observations. To this end, the CHIME/FRB collaboration
is currently building outrigger telescopes at distances ranging
from hundreds to several thousands of kilometers from DRAO.
The outriggers will be small-scale versions of CHIME, each with
about one eighth of CHIME's collecting area, the same field of view,
and tilted such that they monitor the same region of the sky as CHIME.
In contrast to traditional VLBI that is typically performed for known
targets with small fields of view and manageable data rates,
the random nature of most FRBs requires the real-time processing
of massive data rates in order to detect and localize these
events in blind searches with wide fields of view.
The baseband data rate of CHIME is
6.6~Tbit/s while that of each outrigger station will be
0.8~Tbit/s. Since such high data rates cannot be continuously saved,
the outriggers will adopt the triggered FRB VLBI approach demonstrated
in \cite{leung2020synoptic}, where each station buffers its local baseband
data in memory and only writes it to disk upon receipt of a trigger
from the CHIME/FRB real-time search pipeline
over internet links. The local data of each
station is then transmitted to a central
facility where the signals are correlated
together such that the outriggers operate with CHIME as an interferometric
instrument with the angular resolution of a telescope with an
aperture of thousands of kilometers.
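These rates follow from the correlator numbers in Section 2.1 (a back-of-the-envelope check of our own; the assumptions are that each input carries the full 400 MHz band as critically sampled complex baseband and that an outrigger's data rate scales as its one-eighth share of CHIME's collecting area):

```python
n_inputs = 2048          # CHIME correlator inputs
complex_rate = 400e6     # complex samples/s per input: 1024 channels
                         # x 390 kHz, i.e. one sample per 2.56 us per channel
bits = 4 + 4             # 4-bit real + 4-bit imaginary per sample

rate_chime = n_inputs * complex_rate * bits   # bits per second
print(rate_chime / 1e12)                      # ~6.6 Tbit/s

rate_outrigger = rate_chime / 8               # 1/8 of CHIME's inputs
print(rate_outrigger / 1e12)                  # ~0.8 Tbit/s

# The ~36 s baseband buffer at CHIME therefore holds roughly:
buffer_bytes = rate_chime * 36 / 8
print(buffer_bytes / 1e12)                    # ~30 TB
```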
\section{Clock stability requirements}
\label{sec:requirements}
Accurate timing is critical for VLBI since the localization of
radio sources is ultimately derived from the relative time
of arrival of signals at the telescope stations.
By synthesizing the available frequency channels it is
possible to obtain a statistical precision on the measured delay given
by \citep{1970RaSc....5.1239R}
\begin{equation}
\label{eq:sigma_tau}
\sigma_\tau^{\text{stat}} = \frac{1}{2\pi \cdot \text{SNR}\cdot \text{BW}_{\text{eff}}}
\end{equation}
\noindent where $\text{SNR}$ is the signal-to-noise ratio of the
VLBI event and $\text{BW}_{\text{eff}}$ is the effective bandwidth.
For the CHIME/FRB detection threshold\footnote{The SNR in VLBI
is related to the SNR at CHIME as
$\text{SNR}/\text{SNR}_{\text{CH}} = \sqrt{2A_{\text{O}}/A_{\text{CH}}} = 1/2$
where $A_{\text{CH}}$ and $A_{\text{O}}$
are the collecting areas of CHIME and the
outrigger respectively, and the factor of $\sqrt{2}$ comes
from the difference in the detailed noise statistics of a
cross-correlation compared to an auto-correlation
\citep{2015A&C....12..181M}.
While the CHIME/FRB real-time detection pipeline
has a detection threshold of $\sim 10$ \citep{michilli2020analysis},
the SNR rises by $\sim 50\%$
through the more detailed analysis of the saved baseband data.
As such, we take the floor on the CHIME detection SNR to be
$\text{SNR}_{\text{CH}} = 15$.} and bandwidth (\text{BW})
this corresponds to
\begin{equation}
\label{eq:sigma_tau_1}
\sigma_\tau^{\text{stat}} \approx \frac{184\text{ ps}}
{\displaystyle \left(\frac{\text{SNR}}{7.5}\right)\left(\frac{\text{BW}}{400\text{ MHz}}\right)}.
\end{equation}
For a VLBI baseline $b$, a delay precision $\sigma_\tau$ corresponds to
a statistical localization uncertainty
\begin{equation}
\label{eqn:delay2pos_err}
\sigma_\theta \approx \frac{c}{b} \sigma_\tau
\end{equation}
\noindent which gives $\sigma_\theta^{\text{stat}} \lesssim 11$~mas
for a 1000~km baseline. However,
the (relative) delay measured by the interferometer
includes not only the geometric delay
(which ultimately provides the source localization)
but also additional contributions that need to be accounted for such as
propagation though the troposphere and ionosphere, baseline
errors, drift between clocks of different stations (clock timing errors),
and other instrumental delays. In practice,
the localization uncertainty of CHIME/FRB Outriggers
will be limited
by systematic errors due to uncompensated delay
contributions,
and particularly by errors in the
determination of the dispersive delay due to the ionosphere.
Although the large observation bandwidth of the instrument
helps to mitigate this effect, it still represents the
most important challenge for
system stability at CHIME frequencies.
Our simulations indicate that we can reliably localize
FRB events to 50~mas which, for $b = 1000$ km,
corresponds to a delay error budget of $\sigma_\tau \approx 800$~ps.
Anticipating that the ionosphere
will be the main contributor to delay errors, the
clock timing error specification\footnote{This is not a
hard upper limit,
but rather a reasonable reference value that represents
our goal to keep the clock timing errors well below the 800~ps
total timing error budget.} has been set
to $\sigma_\tau^{\text{clk}}\lesssim 200$~ps.
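The numbers above can be reproduced directly (our own arithmetic; we assume the effective bandwidth of a flat band is $\text{BW}/\sqrt{12}$, its rms bandwidth, which reproduces the 184~ps figure):

```python
import numpy as np

snr, bw = 7.5, 400e6                   # VLBI detection threshold, bandwidth (Hz)
b, c = 1000e3, 299792458.0             # 1000 km baseline, speed of light (m/s)

bw_eff = bw / np.sqrt(12.0)            # rms bandwidth of a flat band (assumption)
sigma_tau = 1.0 / (2.0 * np.pi * snr * bw_eff)   # statistical delay precision
print(sigma_tau * 1e12)                          # ~184 ps

rad_to_mas = np.degrees(1.0) * 3600.0e3
sigma_theta = (c / b) * sigma_tau * rad_to_mas   # small-angle localization error
print(sigma_theta)                               # ~11 mas

# Conversely, the 50 mas localization target maps onto the total
# delay error budget quoted above:
budget = (50.0 / rad_to_mas) * b / c
print(budget * 1e12)                             # ~800 ps
```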
Note that for blind FRB searches, this
specification must be met at all times. Indeed,
it may not always be possible to find
a calibrator immediately after the
detection of an FRB for phase referencing.
An additional complication is that
stationary telescopes like CHIME and the outriggers
observe the sky as it transits through their field of view
and thus cannot slew towards favorable calibrators. Therefore, it is especially important to have
a reference clock that is reliable on the timescales required to
connect an FRB detection to a calibrator observation,
potentially hours later.
Although the list of steady radio sources
potentially suitable for calibrating
low-frequency VLBI arrays with $\gtrsim 1000$~km baselines
has significantly increased thanks to the
ongoing LOw Frequency ARray (LOFAR) Long-Baseline Calibrator Survey
\citep[LBCS, ][]{2015A&A...574A..73M,jackson2016lbcs},
during its initial stages CHIME/FRB Outriggers
will adopt a more conservative strategy
relying mainly on bright pulsars for calibration.
Pulsars are compact, can be separated from the
steady radio background in the time domain,
and are sufficiently
abundant to be used as the primary sources for phase referencing.
Accordingly, the backend of each outrigger will also have the ability to
form tracking baseband beams for pulsar analysis and calibration.
Recently, \cite{2021arXiv210705659C} demonstrated
the potential of pulsars as calibrators
for CHIME/FRB Outriggers through the triggered VLBI
detection of an FRB over a $\sim 3000$~km
baseline between CHIME and the
Algonquin Radio Observatory (ARO) 10-m Telescope using
PSR B0531+21 in the Crab nebula for phase referencing.
We estimate that an FRB detection can be phase referenced
to an $\text{SNR}\gtrsim 15$ pulsar\footnote{This represents the
VLBI SNR after coherent addition of the pulses within
a single pulsar transit.}
within less than $\sim 10^3$~s.
This timescale and the
relative clock timing error specification set the clock stability
(Allan deviation) requirement to
\begin{equation}
\label{eq:spec}
\sigma_y(10^3~\text{s}) \lesssim 2 \cdot 10^{-13}.
\end{equation}
As explained in Appendix~\ref{app:adev}, the Allan deviation
is a measure of stability commonly used in
precision clocks and oscillators, with
$\Delta t \cdot \sigma_y(\Delta t)$ roughly
representing the rms of clock timing errors
a time $\Delta t$ after calibration.
The specification in Equation~\ref{eq:spec} is typically obtained
with hydrogen masers.
However, in Section~\ref{sec:no_maser_outrigger} we show
that even frequency references that initially do not meet this
requirement can be used as reference clocks for the outriggers
by interpolating timing solutions between calibrators.
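The Allan deviation can be estimated directly from a delay (time-error) timestream with the standard second-difference estimator; the numpy-only helper below is a minimal sketch (in practice a dedicated package such as \texttt{AllanTools} would be used), together with the conversion of the 200~ps requirement into the specification above:

```python
import numpy as np

def adev_from_phase(x, dt, m):
    """Overlapping Allan deviation at tau = m*dt from a time-error
    (phase) series x, via the standard second-difference estimator."""
    tau = m * dt
    d2 = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]
    return np.sqrt(0.5 * np.mean(d2 ** 2)) / tau

# Rule of thumb: dt * sigma_y(dt) ~ rms timing error a time dt after
# calibration, so the 200 ps requirement at 10^3 s becomes:
sigma_y_req = 200e-12 / 1e3
print(sigma_y_req)    # 2e-13
```

For white frequency noise with unit per-sample fractional-frequency scatter, this estimator returns $1/\sqrt{m}$, consistent with the $\sigma_y(\Delta t)\propto \Delta t^{-1/2}$ scaling of that process.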
\section{Clock stabilization system}
\label{sec:clock_stabilization}
In this section we discuss the considerations that led to
the current clock stabilization solution for CHIME,
as well as the hardware and data analysis
for the maser signal.
\subsection{Hardware/Software considerations}
\label{subsec:hw_sw_considerations}
The CHIME F-engine is implemented using the ICE
hardware, firmware and software framework \citep{2016JAI.....541005B}.
It consists of field programmable gate array (FPGA)-based motherboards
specialized to perform the data acquisition and channelization
of the 2048 CHIME analog inputs. The ICE motherboards are packaged in
eight crates with custom backplanes that implement the networking engine
that re-organizes and sends the baseband data to a dedicated
graphics processing unit (GPU) cluster that performs the X-engine
operations. The outriggers will also use an ICE-based F-engine.
The data acquisition and signal processing of the
F-engine are driven by a
single 10~MHz clock signal provided by a
Spectrum Instruments TM-4D
global positioning system (GPS)-disciplined
ovenized crystal oscillator. The GPS module also generates
an inter-range instrumentation group (IRIG)-B timecode signal
internally synchronized to the clock and
that is used by the
correlator to time stamp the data.
A low-jitter distribution system sends the clock and
time signals to each ICE backplane and motherboard
and is ultimately used to generate the
analog-to-digital converter (ADC) sampling clocks.
The time stamping process is implicit: The F-engine uses the
IRIG-B signal to synchronize the start of the data acquisition to
an integer second (up to the 10~ns resolution of the IRIG-B decoder
in the FPGA firmware)
and it also tags each data frame with a frame counter
value. The X-engine time stamps the data by calculating
the offset from the start time
based on the frame counter value and assuming a fixed
2.56~$\mu$s baseband sampling time.
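Schematically (with hypothetical names; the real firmware works in FPGA counters rather than floating point), the timestamp reconstruction is simply:

```python
T_FRAME = 2.56e-6    # fixed baseband sampling time, seconds

def frame_to_unix_time(frame_counter, start_second):
    """start_second: integer second (from IRIG-B) at which acquisition
    began; the offset is implied by the frame counter alone."""
    return start_second + frame_counter * T_FRAME

# e.g. frame 390625 lands exactly 1 s after the start second:
print(frame_to_unix_time(390625, 1_600_000_000))
```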
Figure~\ref{fig:chime_clk_adev} shows the Allan deviation of the
CHIME GPS clock (blue line) as measured with the clock stabilization
system described in Sections~\ref{subsec:maser_hw} and
\ref{subsec:pipeline}.
The GPS-disciplined crystal oscillator, being locked
to a GPS time reference determined by a vast network of
atomic clocks, will eventually surpass the stability performance
of a single hydrogen maser on very long timescales
($\Delta t \gtrsim 10^6$~s).
On intermediate and long timescales ($\Delta t \sim 10^3-10^5$~s),
including the ones of interest for CHIME/FRB Outriggers,
the CHIME clock stability is dominated by
white delay
noise ($\sigma_y(\Delta t) \propto 1/\Delta t$) corresponding to
$\sim 6$~ns root mean square (rms) timing errors.
While the coherence of this frequency standard
is sufficient for CHIME's operations
as a connected interferometer and for all its backends,
the high precision needed for FRB VLBI
requires the development of a more stable clock system.
\begin{figure}[h!]
\centering
\includegraphics[scale=.35]{chime_clk_adev.png}
\caption{Allan deviation of the CHIME GPS clock
and the DRAO hydrogen maser.
Blue: Allan deviation of the CHIME GPS clock
as measured with the clock stabilization system
described in Section~\ref{sec:clock_stabilization}. A total of ten days
of raw ADC data at 30~s cadence was collected for the
measurement. Dashed blue: expected measurement
error contribution to the Allan deviation obtained from
simulations of uncorrelated
but time-dependent errors in the range $\sim 4-20$~ps rms
(the range observed in the measured delays).
Dashed green: stability requirement from Equation~\ref{eq:spec}
assuming white noise delay errors.
Red: manufacturer-specified Allan deviation
of the DRAO maser.
The CHIME GPS clock does not meet the stability requirements for
FRB VLBI, but the DRAO maser does (Equation~\ref{eq:spec}).}
\label{fig:chime_clk_adev}
\end{figure}
As a continuously tracking global navigation satellite system (GNSS)
station and as part of the
Western Canada Deformation Array \citep[WCDA, ][]{wcda1996_chen} and
the Canadian Active Control System \citep[CACS\footnote{\href{http://cgrsc.ca/resources/geodetic-control-networks/canadian-active-control-system-cacs/}
{http://cgrsc.ca/resources/geodetic-control-networks/canadian-active-control-system-cacs}}, ][]{cacs1996_duval},
DRAO is equipped with an atomic frequency standard consisting
of a T4Science pH Maser 1008 passive hydrogen maser owned
and operated by Natural Resources Canada (NRCan).
The maser is installed in a seismic vault at the DRAO site
and has a primary output of 5~MHz (sine wave). It
is directly connected to a low noise distribution amplifier in the same
rack that serves as electrical isolation
for the maser and also derives multiple
copies of the 10~MHz reference signal (sine wave).
NRCan has approved the use of
two of those signals for CHIME-related operations.
The manufacturer-specified Allan deviation of the DRAO maser is
shown in Figure~\ref{fig:chime_clk_adev} (red points).
The maser clearly exceeds the stability requirements for
FRB VLBI with CHIME (Equation~\ref{eq:spec}).
Some of the outrigger sites will also have access
to hydrogen maser frequency references with similar performance.
Although in principle the ICE system can be operated with
an IRIG-B time signal that is not phase-locked to the
10 MHz reference clock (such as an independent maser),
an important restriction
to using the maser as the master clock for the
CHIME correlator is the fact that both
the F-engine and X-engine software and the scientific data analysis
pipelines were developed on the premise that the clock and IRIG-B
signals are synced (e.g., to time stamp the data),
something that cannot be guaranteed if the two
signals are generated by independent systems (the maser and the
GPS receiver, respectively). Even if the relative drift between clock and
time signals could be tracked, both the correlator and
data analysis pipelines
would need to be updated to implement this change.
As such, we had to develop a clock
stabilization system that did not impact
the normal operations of CHIME, its existing real-time backends,
and the other scientific teams.
\subsection{Maser signal conditioning}
\label{subsec:maser_hw}
The clock stabilization system designed for FRB VLBI with CHIME
keeps the current GPS-disciplined crystal oscillator as the
master clock and instead feeds the
maser signal to one of the ICE ADC
daughter boards so it is digitized by the crystal-oscillator-driven
F-engine. The data is processed to
monitor the variations in the phase of the sampled maser signal
which correspond to variations in the relative delay
between the maser and the master clock.
By using this information to correct
phase variations in the baseband data recorded at the time
of an FRB detection, the system effectively transfers the
reference clock from the GPS-disciplined oscillator to the more
stable maser signal during FRB observations.
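Schematically, applying the clock correction amounts to a per-channel phase rotation of the saved baseband data. The sketch below is our own illustration, not the CHIME pipeline code, and the sign convention depends on how the delay is defined:

```python
import numpy as np

def apply_clock_correction(baseband, freqs_hz, tau_s):
    """Remove a measured clock delay tau_s from channelized baseband
    data (shape: channels x time) by rotating the phase of each
    frequency channel by exp(+2*pi*i*f*tau)."""
    phase = np.exp(2j * np.pi * freqs_hz * tau_s)
    return baseband * phase[:, None]
```

In the full system the measured maser-versus-clock delay timestream supplies a time-dependent $\tau$, so the rotation is evaluated at the time of the FRB dump.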
As shown in Figure~\ref{fig:chime_clk_adev} (dashed blue line),
the noise penalty
associated with the clock transfer operation
is essentially white
($\sigma_y(\Delta t) \propto 1/\Delta t$)
on the timescales relevant for FRB VLBI
and small ($\lesssim 20$~ps) compared
to the total clock timing error budget.
A block diagram of the maser signal path is shown in
Figure~\ref{fig:maser_setup}.
The first point of access to the 10~MHz maser signal is the
low noise distribution amplifier within the seismic vault.
From there, the signal is transported through $\sim 500$~m of
buried coaxial cable to one of the two radio-frequency (RF)-shielded
huts that house the CHIME F-engine.
We use the same type (LMR-400)
of low-loss coaxial cable used in the CHIME
analog receivers and whose thermal susceptibility has been
extensively tested in the field. At the RF hut, the cable
interfaces with a ground block and the signal is then
carried inside the RF room using standard SMA cables
where it is connected to an isolation transformer to refer
the next stages to the F-engine crate ground.
One complication in the digitization of the maser signal is that the
ADC daughter boards that specialize the ICE system for
CHIME have a bandpass transfer function that strongly attenuates
signals below $\sim 100$~MHz. For this reason, instead of feeding
the maser signal directly to an ADC daughter board, the signal is
used to drive a low-noise sine-to-square wave signal translator
that generates 10~MHz harmonics well into the CHIME band.
The output of the translator is then filtered
to the CHIME band using the same band-defining filter amplifier (FLA)
used in the CHIME receivers
\citep{PathfinderOverview,chime_overview_2021}.
Finally, the FLA is connected
directly to one of the correlator inputs
where it is digitized at 800~MSPS with an 8-bit ADC.
\begin{figure}[h!]
\centering
\includegraphics[scale=.39]{maser_setup.png}
\caption{Maser signal path. The 10~MHz maser signal is transported
through $\sim 500$~m of buried coaxial cable from the seismic vault
to one of the CHIME F-engine RF huts. There, the maser
signal is conditioned to a waveform that can be
digitized by the CHIME F-engine
(see Section~\ref{subsec:maser_hw}
for details).}
\label{fig:maser_setup}
\end{figure}
\subsection{Clock stabilization pipeline}
\label{subsec:pipeline}
The FPGA within each ICE motherboard processes
the data from its digitizers using the custom CHIME F-engine firmware
\citep[for details, see][]{2016JAI.....541005B,2016JAI.....541004B}. Briefly,
the raw ADC data from each input is passed to the
frequency channelizer module as frames of 2048 8-bit samples.
The channelizer forms the baseband data by splitting the 400 MHz input
bandwidth into 1024 frequency channels, each truncated to
a 4-bit real + 4-bit imaginary complex number. Additionally,
a probe sub-module within the channelizer
can be configured to periodically capture a subset of
the raw ADC data that is separately saved and typically
used in CHIME for system monitoring.
By default, the CHIME F-engine software pipeline
saves one raw ADC frame (2.56 $\mu$s of data)
from each input every 30~s, but this cadence can
be modified before starting a
data acquisition.
The clock stabilization pipeline
extracts the raw ADC frames
from the maser input, Fourier transforms each frame via a
Fast Fourier Transform (FFT), and separates the frequency channels
corresponding to the harmonics of the 10 MHz signal in the CHIME
band.
The quality of each harmonic is assessed based on its
signal-to-quantization-noise ratio (SQNR) and its
susceptibility to spurious aliased harmonics (relevant for
harmonics near the edges of the CHIME band).
Low quality harmonics are discarded.
Since the ADC that digitizes the maser signal uses the GPS clock
as the reference for sampling, the variations in the delay of the
sampled maser signal represent the delay variations of the
GPS clock with respect to the maser, the latter of which is more
stable on short and intermediate timescales. These delay
variations $\Delta\tau(t)$ will induce
phase variations $\Delta \phi (t, \nu)$ in the maser harmonics of
the form
\begin{equation}
\label{eq:delay_to_phase_var}
\Delta \phi (t, \nu) = 2\pi\nu \Delta\tau(t)
\end{equation}
\noindent where $\nu$ is the harmonic frequency.
Since we are interested in the GPS clock delay variations
relative to the delay at the time of VLBI calibration,
the phase of the maser harmonics is initially referenced to
the phase of a frame close to calibration time. Then for each frame,
a line described by Equation~\ref{eq:delay_to_phase_var} is fit to
the phase as a function of harmonic frequency to recover the
GPS clock delay (relative to the maser)
as a function of time.
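As an illustrative sketch of this fit (hypothetical function and variable names; the actual pipeline also weights harmonics by their assessed quality), the delay follows from a least-squares line through the origin:

```python
import numpy as np

def delay_from_harmonic_phases(phases_rad, freqs_hz):
    """Recover the clock delay dtau from maser-harmonic phases by fitting
    the linear model dphi(nu) = 2*pi*nu*dtau.  The phases are assumed
    unwrapped and already referenced to a frame near calibration time."""
    x = 2.0 * np.pi * np.asarray(freqs_hz)
    phi = np.asarray(phases_rad)
    # Least-squares slope of a line constrained to pass through the origin.
    return np.sum(x * phi) / np.sum(x * x)
```

For scale, a 50~ps delay produces phases of only 0.13--0.25~rad across harmonics spanning 400--800~MHz, so combining many harmonics in one fit is what makes a picosecond-level delay estimate possible.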
The dominant component of the
recovered delay $\Delta\tau(t)$ is a slow
linear drift as a function of time ($\sim 50$~ns/day)
corresponding to a constant offset of the maser frequency from
10~MHz. This linear trend is removed from $\Delta\tau(t)$
since it is due to the maser frequency calibration and not due to the
instability of the GPS clock.
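This detrending step amounts to subtracting a best-fit line from the delay timestream, e.g. (a minimal sketch with hypothetical names):

```python
import numpy as np

def remove_linear_drift(t_s, dtau_s):
    """Subtract the best-fit linear trend from the recovered clock delay:
    a constant maser frequency offset appears as a steady drift
    (~50 ns/day here) that reflects the maser frequency calibration,
    not a GPS clock instability."""
    slope, intercept = np.polyfit(t_s, dtau_s, 1)
    return dtau_s - (slope * t_s + intercept)
```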
The captured raw ADC
frames are only a small fraction of the available CHIME data; thus,
the times at which GPS clock delay measurements are available are
not necessarily aligned with the times of a calibration observation
or an FRB detection. This means that the GPS clock delay timestream
must be interpolated in order to find the clock
contribution to the total delay measured in a VLBI observation.
We use linear interpolation
to find the GPS clock delay at any arbitrary time, a method
that is motivated by the short timescale behavior of the
clock delay variations.
Figure~\ref{fig:tuning_jitter} shows a
few examples of the behavior of the
GPS clock delay on timescales of a few seconds as measured by
the clock stabilization system.
On these timescales the timing variations are dominated by
the tuning jitter generated by the algorithm that disciplines
the crystal oscillator. In essence, the algorithm works
by counting the number of clock cycles between
successive GPS receiver pulse-per-second (PPS) pulses and adjusting
the crystal's temperature to ensure ten million counts between pulses.
The size of the temperature tuning steps is progressively
reduced as the crystal oscillator frequency approaches 10~MHz.
As shown in Figure~\ref{fig:tuning_jitter},
this discipline procedure gives a characteristic
triangle-wave shape to the tuning jitter, and although in a
perfectly-tuned oscillator the transitions should occur every second,
in practice we observe that they can take longer.
Thus, as long as the GPS clock delay is sampled at cadences
below $\sim 500$~ms we can track the tuning jitter features and
a linear interpolation provides a good approximation to
the true delay at any time.
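Concretely, interpolating the delay timestream to an arbitrary event time is a one-line operation (hypothetical names; valid as long as the measurement cadence resolves the tuning-jitter transitions):

```python
import numpy as np

def clock_delay_at(t_obs, t_meas, dtau_meas):
    """Linearly interpolate the measured GPS clock delay timestream to
    arbitrary observation times (e.g., an FRB detection time)."""
    return np.interp(t_obs, t_meas, dtau_meas)
```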
\begin{figure}[h!]
\centering
\includegraphics[scale=.42]{tuning_jitter.png}
\caption{Three examples of the behavior of the
GPS clock delay on timescales of a few seconds as measured by
the clock stabilization system with respect to the DRAO maser.
The raw ADC data cadence for these measurements was 40~ms.
The measurement errors are in the range $\sim 2-13$~ps.
The characteristic triangle wave pattern is due to
the algorithm that disciplines the crystal oscillator
in the GPS unit.
The algorithm works by counting the number of clock cycles between
successive GPS PPS pulses and adjusting
the crystal's temperature to ensure ten million
counts between pulses.
The size of the temperature tuning steps changes depending on
the tuning history of the oscillator.}
\label{fig:tuning_jitter}
\end{figure}
As the current version of the F-engine control software only allows
saving raw ADC data for all the correlator inputs at the same time, raw ADC
data at cadences below 10~s cannot be saved during normal
telescope operations; such captures are restricted to
times scheduled for
hardware maintenance and software upgrades.
A modification to the F-engine control software is ongoing to
allow saving fast-cadence raw ADC data for the maser input while
keeping the default cadence for the remaining correlator inputs,
a change that does not impact the normal operations of the
correlator and the data-analysis pipelines.
It is also possible to process baseband
data directly to extract the maser signal and measure the GPS clock
delay variations. The operation is very similar
to that of raw ADC data, except that the maser data has already
been transformed to the frequency domain by the F-engine.
Since in this case
most maser harmonics do not lie exactly in the center of
a frequency channel, the pipeline selects the closest F-engine frequency
channel. Then for each selected channel, it
performs an additional channelization by
using an FFT along the time domain to isolate the harmonic frequency.
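A sketch of this second channelization step for one selected channel (names and the per-channel frame rate are illustrative; for CHIME the channel frame rate is 800 MHz / 2048 ~ 390.625 kHz):

```python
import numpy as np

def harmonic_phase_from_channel(chan_timestream, frame_rate_hz, offset_hz):
    """Second channelization step: given the complex baseband timestream of
    the F-engine channel nearest a maser harmonic, FFT along time and read
    off the phase at the harmonic's offset from the channel center."""
    spec = np.fft.fft(chan_timestream)
    f = np.fft.fftfreq(len(chan_timestream), d=1.0 / frame_rate_hz)
    k = np.argmin(np.abs(f - offset_hz))  # FFT bin nearest the harmonic
    return np.angle(spec[k])
```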
Although working with baseband data is logistically convenient in
cases where we need to test the performance of the
clock stabilization system (see Section~\ref{sec:validation}),
during regular operations the current system is designed to work mostly
with raw ADC data. This is mainly because a baseband dump for an
FRB event is typically collected in
$\sim 100$~ms segments at different
times for each frequency channel in order to account for the
dispersion delay of the transient, with a total event duration lasting
tens of seconds \citep{michilli2020analysis}.
This leaves only a few megahertz of bandwidth
available at any particular instant, making the monitoring
of clock delay variations more challenging. Furthermore,
when using baseband dumps we still need to rely on the
continuously saved raw ADC data to track and correct the
long-timescale linear drift of the maser.
\section{Validation of the clock stabilization system}
\label{sec:validation}
We tested the reliability of the clock stabilization system
by installing it in the Pathfinder telescope and
comparing maser-based measurements of the CHIME-Pathfinder
relative clock drift to independent measurements
obtained from VLBI-style observations of steady
radio sources.
\subsection{The Pathfinder as an outrigger}
\label{subsec:pathfinder}
The Pathfinder is presented in detail in \cite{PathfinderOverview}.
It is a small-scale prototype of CHIME with identical
design and field of view, and it has
the same collecting area as the
outriggers under construction.
The telescope is located $\sim 400$~m from CHIME
and was constructed
before CHIME as a test-bed for technology development.
With the same correlator architecture as CHIME, the Pathfinder
operates as an independent connected interferometer with
its own GPS-disciplined clock.
Recently, \cite{leung2020synoptic} repurposed the Pathfinder
as an outrigger to demonstrate the feasibility of
triggered FRB VLBI for CHIME/FRB Outriggers.
The Pathfinder correlator is now equipped with a custom
baseband data recorder capable of processing
one quarter of the CHIME band and programmed
to write its local baseband data to disk
upon receipt of a trigger from CHIME/FRB.
We also connected the additional copy of the maser signal from the
seismic vault to one of the Pathfinder correlator inputs using
a signal path identical
to that of CHIME and shown
in Figure~\ref{fig:maser_setup}
(except for the transport cable
which is longer for the Pathfinder setup).
\subsection{Comparison to interferometric observations}
\label{subsec:CygA}
The clock stabilization system measures the delay variations
of the CHIME and Pathfinder clocks by processing the
maser data from each telescope as described in
Section~\ref{sec:clock_stabilization}. The delays
from each clock are then interpolated to the observation times
so the relative clock drift can be
tracked over time\footnote{If the maser data
comes from simultaneous baseband dumps
instead of raw ADC samples then clock
delays from the two telescopes can be directly compared
without interpolation.}.
An independent way to measure the CHIME-Pathfinder relative clock
delay is to interferometrically track a known point source over time
using both telescopes and their independently-running backends.
If we properly account for all the other contributions
to the measured interferometric delay
(geometric, ionosphere, etc.) as the source transits through the
field of view of the two telescopes, then any residual delay
should correspond to the relative drift between the two clocks.
If the clock stabilization system is robust,
its measurements should agree very
closely with the interferometric measurement,
which we use as a standard.
\subsubsection{Short timescale test}
\label{subsubsec:short_timescale_test}
We used Cygnus-A (henceforth referred to as CygA)
for the VLBI-style observations since it
is the brightest radio source seen by CHIME that
is unresolved on a CHIME-Pathfinder baseline.
For the first test, we programmed the CHIME/FRB backend to
trigger short baseband dumps
simultaneously for CHIME and the Pathfinder
during a single CygA transit. In this way, we collected
seven 10~ms-long baseband dumps, spaced by a minute, while
the source was in the field of view.
The observation was carried out in November 2020 during a day
scheduled for instrument maintenance, so we were also able
to collect raw ADC maser data at 200~ms cadence with the two telescopes.
Since the telescopes are co-located, they experience a common ionosphere,
suppressing relative ionospheric fluctuations (we see no evidence for these
in our observations). Thus, the residual delay in the
interferometric visibility after accounting for
the geometric contribution gives a measurement of the relative
clock drift. The residual interferometric delays
are calculated using a procedure identical to
that described in \cite{leung2020synoptic}.
In summary, each telescope is internally calibrated
(to measure the directional response of each antenna in the
telescope array) using a separate observation of a bright point source
\citep{chime_overview_2021}.
Then, for each telescope and baseband dump, the data is
coherently summed over antennas to form
a phased-array voltage beam towards CygA.
The beamformed data from CHIME and the Pathfinder
is then cross-correlated on a frequency-by-frequency basis
to form the complex visibility. The phase of each visibility is
compensated for the geometric delay.
The variance of each visibility is found empirically by
splitting each baseband dump into short time segments,
computing the visibility for each segment, and calculating the
variance over segments as a function of frequency.
The seven visibilities are phase-referenced to that of the first
baseband dump since we are only interested in
changes of the relative clock delay.
Finally, the wideband fringe-fitting procedure to find residual delay
performs a least-mean-squares fit of a complex exponential
with linear phase and frequency-dependent amplitude
to the measured visibilities.
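The residual-delay fit can be approximated by a grid search over trial delays that maximizes the coherently summed visibility amplitude; the following is a simplified stand-in (hypothetical names) for the full least-squares fit described above:

```python
import numpy as np

def fit_residual_delay(vis, freqs_hz, max_delay_s=5e-9, n_grid=2001):
    """Find the delay tau that maximizes |sum_nu V(nu) exp(-2j*pi*nu*tau)|
    over a uniform grid of trial delays
    (grid step = 2*max_delay_s / (n_grid - 1))."""
    taus = np.linspace(-max_delay_s, max_delay_s, n_grid)
    # (n_grid, n_freq) trial phase factors applied to the visibility spectrum
    amp = np.abs(np.exp(-2j * np.pi * np.outer(taus, freqs_hz)) @ vis)
    return taus[np.argmax(amp)]
```

In practice one would refine around the grid maximum and weight each frequency channel by its empirically estimated variance.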
\begin{figure}[h!]
\centering
\includegraphics[scale=.35]{cygasingle.png}
\caption{Comparison of the CHIME-Pathfinder relative clock delay
inferred via the clock stabilization system and interferometric
observations from a single transit of CygA.
Top: relative clock delay (in ns)
as a function of time as inferred from
CygA baseband data (blue),
raw ADC maser data (red) and maser baseband data (green).
Bottom: Difference between sky-based and maser-based measurements
of the relative clock delay
(raw ADC maser data in red, baseband maser data in green),
demonstrating agreement between the two methods. The large error bars
for all but the last point in the raw ADC data analysis (red)
are due to current limitations of the Pathfinder raw ADC
acquisition system (see Section~\ref{subsubsec:short_timescale_test}
for details). The error bar in the
last red point of the top plot ($\sim 14$~ps)
is representative of the expected accuracy
of the clock stabilization system using raw ADC data.}
\label{fig:short_timescale}
\end{figure}
The top panel of Figure~\ref{fig:short_timescale} shows the resulting
comparison between the CHIME-Pathfinder relative clock delays
calculated from interferometric observations (blue)
and those found through the clock stabilization system
(from raw ADC data in red, from baseband data in green).
The $1\sigma$ error bars are too small to be visible in the plot but
they are in the range $\sim 17-22$~ps for CygA measurements,
$\sim 14-238$~ps for raw ADC maser data measurements,
and $\sim 18-33$~ps for baseband maser data measurements.
Measurements with the clock stabilization system
show excellent agreement with the sky. This is further
highlighted in the bottom panel of Figure~\ref{fig:short_timescale}
that shows the difference between sky-based and maser-based measurements
of the relative clock delay
(raw ADC maser data in red, baseband maser data in green).
The large error bars for all but the last point in the
raw ADC data analysis (red)
are dominated by the error in the measurements of
the maser delay at the Pathfinder. These can be traced back to current
limitations of the Pathfinder raw data acquisition system, which
occasionally drops packets when we collect raw ADC data at fast cadence
for the reasons explained in Section~\ref{subsec:pipeline}.
This limitation will be solved in the next upgrade of the
F-engine control software. For the observation times that
fell within sections of missing Pathfinder raw ADC data
(the longest of which was $\sim 80$~s),
the delay values were obtained by performing
a smoothing spline interpolation based on the available
measurements. To estimate the uncertainty in the delay values
obtained with this method we analyzed a segment of the
delay timestream for which there were no gaps due to dropped
packets, $\sim 10$~min before the sky observations.
The errors were found using a procedure similar to the one
used in Section~\ref{sec:no_maser_outrigger} to evaluate
the performance of alternate reference clocks,
where we introduce artificial gaps in the delay timestream and
analyze the statistics of an ensemble of interpolation
residuals. Only the raw ADC delays (red)
for the last observation time could be measured
using the default interpolation for both telescopes
(see Section~\ref{subsec:pipeline}). The
uncertainty for this measurement is $\sim 14$~ps, and
is representative of the expected accuracy
of the clock stabilization system using raw ADC data.
\subsubsection{Long timescale test}
\label{subsubsec:long_timescale_test}
To test the performance of the clock stabilization system
on long timescales we collected five CygA baseband dumps,
each 10~ms in duration, spaced one minute apart,
for nine days in a row for a total of 45 delay measurements.
The observations were carried out in April 2021 during
normal CHIME operations so we relied on baseband
maser data for delay measurements with the clock stabilization
system for the reasons explained in Section~\ref{subsec:pipeline}.
Both interferometric and maser-based delays
are calculated using the same procedure as in the
short timescale test described in
Section~\ref{subsubsec:short_timescale_test}, with the
visibilities phase-referenced to one of the observations
on the fifth day.
\begin{figure}[h!]
\centering
\includegraphics[scale=.35]{cyga_multi.png}
\caption{Top: Comparison of the CHIME-Pathfinder relative clock delay
inferred via the clock stabilization system (red) and interferometric
observations (blue) from multiple transits of CygA.
For each transit, we made five measurements of the
relative clock delay, spaced by one minute,
for nine days in a row.
Bottom: Difference between sky-based and maser-based
measurements of the relative clock delay.
The two methods show excellent agreement
on short (minute) and long (many days) timescales.
This indicates that the clock stabilization system
we have implemented can track clock delay variations with
better than $\sim 30$~ps rms level precision.}
\label{fig:long_timescale}
\end{figure}
The results of the long timescale test are shown in Figure~\ref{fig:long_timescale}. The interferometric
measurements agree with the maser-based measurements
at the $\sim 30$~ps rms level, demonstrating that
after correction with the clock stabilization system
the resulting reference clock is stable
over timescales of more than a week,
and that the signal
chain used to inject the maser signal into the correlator
is not a limitation for the system's performance
in CHIME/FRB Outriggers.
\section{A reference clock for outrigger stations without a maser}
\label{sec:no_maser_outrigger}
The clock stabilization system allows us to inject external
reference clock signals into radio telescopes which share the
CHIME correlator architecture.
In addition to its use in the new outriggers,
the system enables reference clocks
to be swapped out at existing telescopes like CHIME
and legacy systems like the Pathfinder
without making major changes to the software framework or existing
scientific backends, while expanding the telescopes' capabilities to
include VLBI.
Most outriggers will also have access to
hydrogen maser frequency references that can be used in the
same way as CHIME (Section \ref{sec:clock_stabilization})
to meet the stability requirements
for FRB VLBI. However, we still
need to address the possibility that certain outrigger
stations may be built at locations (e.g., greenfield land) that will
lack the infrastructure to support a hydrogen maser.
In this scenario, alternate reference signals
(e.g., from rubidium microwave oscillators)
can also be injected into the correlator to track and compensate
for GPS clock drifts. The performance of these oscillators is
inferior to that of a hydrogen maser, but they are still
more stable than the CHIME GPS clock on the timescales relevant
for FRB VLBI. They are also less expensive and
more readily available than a maser.
Although these
frequency references could in principle
be used directly as the correlator master clock, since they typically
come in units that provide GPS disciplining as well as absolute time,
it is still desirable to use them separately as free-running clocks
for short- and intermediate-timescale observations. Not only are they inherently more stable than the primary CHIME clock, but they are also not subject to short-timescale tuning jitter when not locked to GPS.
Equation~\ref{eq:spec}
provides a convenient way to determine whether an off-the-shelf
clock meets the requirements for FRB VLBI
by simply reading the $\sigma_y(10^3~\text{s})$
value from the unit's datasheet.
Passive hydrogen masers meet and exceed this requirement.
However, this specification
was derived from Equation~\ref{eq:adev_wn_app}
which assumes uncorrelated clock timing errors and
perfect calibration measurements.
In practice, the actual clock timing errors
will depend on aspects not necessarily captured by
this equation including
the detailed statistics of the delay variations,
the methods used to estimate them, the
timing and accuracy of the calibration measurements,
and the technique used to inject the clock signal into our system.
These aspects become relevant when
the frequency standard does not clearly exceed the
specification in Equation~\ref{eq:spec}.
As part of the implementation of the clock stabilization system,
\cite{timing_pipeline_scary_2021} developed a software package
with methods to
determine the suitability of precision clocks for
VLBI with transit telescopes like CHIME/FRB Outriggers.
These methods take
into account the details of the
noise processes that determine the stability of the clocks
and simulate realistic timing
calibration scenarios.
The basic input to the software is a timestream that
represents the delay variations as a function of
time of the clock under test.
The delay timestream data can be either from measurements or from
simulations; in the latter case the software provides tools to
generate timestreams described by
combinations of power-law noise processes
commonly observed in precision clocks and oscillators including
white phase modulation noise, white frequency modulation noise,
flicker frequency modulation noise,
and random walk frequency modulation noise
\citep{1539968}.
Similarly, the software provides tools to generate
delay timestreams from a set of Allan deviation measurements,
which is convenient to evaluate the performance of a
clock based on its manufacturer specifications.
In this case, it is assumed that the delay variations
are described by a combination of power-law noise processes
where the weight of each noise component is found by
fitting the Allan variance data
to a model consisting of a linear combination of the
Allan variances of the previously described noise processes.
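As an illustration of this fitting step (hypothetical names; the actual package would enforce non-negative weights, e.g. via constrained least squares), the Allan variance can be decomposed against the standard power-law templates:

```python
import numpy as np

def fit_power_law_noise(taus, avar):
    """Fit sigma_y^2(tau) as a linear combination of power-law templates:
    white PM ~ tau^-2, white FM ~ tau^-1, flicker FM ~ tau^0,
    random-walk FM ~ tau^1 (unconstrained least squares for brevity)."""
    exponents = np.array([-2.0, -1.0, 0.0, 1.0])
    templates = np.power.outer(np.asarray(taus, dtype=float), exponents)
    coeffs, *_ = np.linalg.lstsq(templates, np.asarray(avar, dtype=float),
                                 rcond=None)
    return dict(zip(["wpm", "wfm", "ffm", "rwfm"], coeffs))
```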
A calibrator is parametrized
by its observing time, number of
clock timing measurements, and $\text{SNR}$ per transit.
For example, for a calibrator at the equator
the observing time with CHIME
is $\sim 6$~min, and with a $\sim 2$~min
integration time we would have three
delay measurements per transit. The $\text{SNR}$ determines
the uncertainty in the calibration delay measurements
(see Equation~\ref{eq:sigma_tau}).
Given a timescale $\Delta t_{cal}$ that represents the
maximum expected time
separation between calibrators, the method masks a
random $\Delta t_{cal}$-long section
of the delay timestream, interpolates
using a best-fit function determined from
the available calibration measurements at each end
of the masked section, and keeps the interpolation residuals.
The process is repeated a
configurable number of times
to obtain a statistical ensemble of interpolation residual
timestreams, each of length\footnote{In
practice there is an additional overhead equivalent to one integration.}
$\sim \Delta t_{cal}$.
As the default metric of the
stability of the clock at $\Delta t_{cal}$ timescales, the method
uses the largest value of the ensemble standard deviation
in the interval $\left [0, \Delta t_{cal} \right]$.
Other metrics of performance are available.
Since throughout the paper
we have used the convention that $\Delta t$ represents the
time between a calibration and an observation, this metric
can be interpreted as an
estimate of the largest clock timing rms error
for $\Delta t$ up to $\sim \Delta t_{cal}/2$.
Different interpolation methods are available including
linear fit, smoothing spline, and
nearest available calibrator.
The fitting weights are determined
by the calibrator $\text{SNR}$ and the level of
the noise added by the timing stabilization system.
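A stripped-down version of this gap-masking procedure, using linear interpolation across each masked section (hypothetical names; the published package supports additional interpolators, SNR-dependent weights, and other metrics):

```python
import numpy as np

def masked_interp_error(t, delay, gap_s, n_trials=200, seed=0):
    """Repeatedly mask a random gap of length gap_s in a uniformly sampled
    delay timestream, interpolate across it from the surviving endpoint
    measurements, and return the largest ensemble standard deviation of
    the interpolation residuals."""
    rng = np.random.default_rng(seed)
    dt = t[1] - t[0]
    n_gap = int(round(gap_s / dt))
    resid = np.empty((n_trials, n_gap))
    for i in range(n_trials):
        j0 = int(rng.integers(1, len(t) - n_gap))  # keep anchors on both sides
        j1 = j0 + n_gap
        fit = np.interp(t[j0:j1], [t[j0 - 1], t[j1]],
                        [delay[j0 - 1], delay[j1]])
        resid[i] = delay[j0:j1] - fit
    return resid.std(axis=0).max()
```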
As a candidate for outriggers without a maser, we evaluated the
performance of the EndRun Technologies Meridian II US-Rb
rubidium oscillator. The unit was installed
in the Pathfinder RF room and connected to a
separate input of the correlator so we could test its
performance against the DRAO maser under conditions
comparable to those of a typical outrigger. A signal
conditioning chain identical to
the maser was used (signal translator + FLA).
We collected $\sim 42$~h of raw ADC data
at 1~s cadence for both the maser and the Rb clock
and used the clock stabilization pipeline to extract
the clock delay variations relative to the maser.
The top panel of Figure~\ref{fig:merid_clk_adev}
shows the measured delay variations after
removing the slow linear drift component due to the maser
(see Section~\ref{subsec:pipeline}).
The bottom panel
shows the corresponding Allan deviation of the Rb clock (blue),
the measurement error (dashed blue), and
the manufacturer-specified Allan deviation of the
Rb clock (green points) and the DRAO maser (red points).
For $\Delta t \lesssim 40$~s the
variations are dominated by
the noise associated with the timing stabilization system,
which is in the range $\sim 10-30$~ps, small compared
to the $\sim 200$~ps clock timing error budget.
At longer timescales the measured
Allan deviation of the Rb oscillator is consistent
with the manufacturer specification. These
results are also consistent with direct
Allan deviation measurements
of the Rb oscillator performed with a phase noise analyzer,
confirming that
the hardware of the stabilization system
is not a limitation for clock performance in
CHIME/FRB Outriggers.
\begin{figure}[h!]
\centering
\includegraphics[scale=.35]{meridian_clk_adev.png}
\caption{Top: Measured delay variations of the
rubidium oscillator tested as a candidate reference clock
for outriggers without a maser.
Bottom: Measured Allan deviation of the Rb clock (blue),
measurement error (dashed blue), and
manufacturer-specified Allan deviation of the
Rb clock (green points) and the DRAO maser (red points).
The measured Allan deviation of the Rb clock is consistent
with the specification at intermediate and long timescales.
At short timescales the noise of the timing stabilization system
dominates the performance, but it is still small ($\sim 10-30$~ps)
compared to the clock timing error budget. This confirms that
the hardware of the clock stabilization system
is not a limitation for clock performance in
CHIME/FRB Outriggers.}
\label{fig:merid_clk_adev}
\end{figure}
Note that if we rely only on the manufacturer-specified
Allan deviation, the Rb clock does not meet the
requirement in Equation~\ref{eq:spec}. This justifies
a more detailed analysis of the clock performance to
determine whether it can still be used as a frequency
reference for the outriggers without a maser.
The measured delay timestream was analyzed with the
software package described above and in \cite{timing_pipeline_scary_2021}
to evaluate the expected performance
of the Rb clock under different calibration conditions.
The results are shown in Figure~\ref{fig:meridian}.
For the analysis we assumed that the calibrator observing time
was 9~min (roughly the value at CHIME's zenith)
with two integrations per observation.
The best performance is obtained with
linear interpolation between calibrators.
We tested calibrator $\text{SNRs}$ of 10 (purple),
15 (green), 20 (red), and $\infty$ (blue),
the latter representing the case where the clock performance
is not limited by calibration errors.
For comparison we also show the projected clock timing errors
for the DRAO maser using synthetic data generated from
the manufacturer-specified
Allan deviation with $\text{SNR}=15$ (dashed green).
Figure~\ref{fig:meridian} shows that,
even in the most conservative scenario
where we assume that all the calibrators have $\text{SNR} = 15$,
the Rb clock timing errors stay below
$225$~ps up to $\Delta t \sim 10^3$~s
by interpolating between timing solutions (solid green).
This clock timing error is still well below
the total timing budget of $\sim 800$~ps,
leaving enough room to handle ionospheric delay errors.
\begin{figure}[h!]
\centering
\includegraphics[scale=.36]{meridian_linear_ip.png}
\caption{Projected clock errors, $\sigma_\tau^{\text{clk}}$, of the
Rb clock as a function of the time between calibrators
$\Delta t_{cal}$ from measured delay variations and
simulations of
realistic timing calibration scenarios.
This metric represents an estimate of the
largest clock timing error for
$\Delta t$ up to $\sim \Delta t_{cal}/2$
(see Section~\ref{sec:no_maser_outrigger} for details).
The dashed black horizontal line represents
$\sigma_\tau^{\text{clk}} = 200$~ps.
Even in the most conservative scenario
where we assume that all the
calibrators have $\text{SNR} = 15$ (solid green),
the Rb clock timing errors stay below
$225$~ps up to $\Delta t \sim 10^3$~s
by interpolating between timing solutions,
meeting the requirements for FRB VLBI with CHIME/FRB Outriggers.
}
\label{fig:meridian}
\end{figure}
\section{Conclusions}
\label{sec:conclusions}
We developed a clock stabilization system for CHIME/FRB Outriggers that
allows synchronization of CHIME and outrigger
stations at the $\sim 200$~ps level
on short and long timescales. This meets the requirements
for 50~mas localization of FRBs detected with the
CHIME/FRB real-time pipeline. Our proof-of-principle clock
transfer has demonstrated that a variety of different data
products can be used for precise time transfer from an external
reference clocks into data acquisition backends using the ICE
framework. This method is minimally invasive to existing
telescopes like CHIME and the Pathfinder,
expanding the capabilities of these instruments to include VLBI
without impacting their existing scientific backends.
It also allows for
increased flexibility and modularity for future systems such
as those at CHIME outriggers.
For outriggers that do not have the infrastructure to
support a hydrogen maser, we demonstrated that
it is still possible to meet the required clock stability
specification by using alternate reference clocks
and interpolating timing solutions between calibrations.
\begin{acknowledgments}
We acknowledge that CHIME is located on the traditional,
ancestral, and unceded territory of the Syilx/Okanagan people.
We are grateful to the staff of the Dominion Radio
Astrophysical Observatory, which is
operated by the National
Research Council Canada.
CHIME is funded by a grant from the Canada Foundation
for Innovation (CFI) 2012 Leading Edge Fund (Project 31170)
and by contributions from the provinces of British Columbia,
Qu\'ebec and Ontario. The CHIME/FRB Project is funded by a
grant from the CFI 2015 Innovation Fund (Project 33213) and
by contributions from the provinces of British Columbia and
Qu\'ebec, and by the Dunlap Institute for Astronomy and
Astrophysics at the University of Toronto.
Additional support was provided by the Canadian
Institute for Advanced Research (CIFAR), McGill
University and the McGill Space Institute via the
Trottier Family Foundation, and the University of
British Columbia.
The CHIME/FRB Outriggers program is funded by
the Gordon and Betty Moore Foundation
and by a National Science Foundation (NSF) grant (2008031).
FRB research at MIT is supported by an NSF grant (2008031).
FRB research at WVU is supported by NSF grants (2006548, 2018490).
J.M.P. is a Kavli Fellow.
C.L. was supported by the U.S. Department of Defense (DoD)
through the National Defense Science \& Engineering Graduate
Fellowship (NDSEG) Program.
V.M.K. holds the Lorne Trottier Chair in Astrophysics \&
Cosmology and a Distinguished James McGill Professorship
and receives support from an NSERC Discovery Grant and
Herzberg Award, from an R. Howard Webster Foundation
Fellowship from CIFAR, and from the FRQNT Centre de
Recherche en Astrophysique du Qu\'{e}bec.
J.L.S. acknowledges support from the Canada 150 programme.
\end{acknowledgments}
\vspace{5mm}
\facilities{CHIME}
\software{\texttt{AllanTools}~\citep{2018ascl.soft04021W}}
\section{Introduction}
\label{intro}
Multiplex network analysis \rev{has} emerged as a promising approach to investigate complex systems.
A multiplex network is a compact network model used to represent multiple modes of interaction or different \rev{types of} relationships among entities of the same type (e.g. people). Th\rev{is} model ha\rev{s} been used to study a large variety of systems across disciplines\rev{,} ranging from living organisms and human societies to transportation systems and critical infrastructures. For example, a description of the full protein-protein interactom\rev{e}\footnote[1]{\rev{A}n interactom\rev{e} is the totality of protein--protein interactions happen\rev{ing} in a cell\rev{.}} \rev{involves}, for some organisms, up to seven distinct modes of interaction among thousands of protein molecules \citep{DeDomenico2015}. Another example is in air transportation systems when modeling the connections between airports through direct flights; here, the different commercial airlines can be seen as different modes of connection among airports \citep{Cardillo2013}.
Figure \ref{fig:mpx} shows a typical layered representation of a multiplex network, where each layer corresponds to a type of interaction and nodes \rev{(also called vertices)} in different layers can be associated to the same actor, e.g. the same person or the same airport. Here, we adopt the term \emph{actor} from the field of social network analysis, where multiplex networks have been first applied, and the term \emph{layer} from recent generalizations of the original multiplex model~\citep{Brodka2011a,Magnani2011b,Kivela2014,DickisonMagnaniRossi2016}.
\begin{figure}[!htpb]
\centering
\includegraphics[width=0.8\textwidth]{Figures/Introduction/mpx_network.eps}
\caption{A\rev{n} example of a multiplex network with two types of interaction among five actors. This is represented as five nodes replicated in two layers. The two nodes representing the same actor (e.g. the same person) are linked by a dotted line}
\label{fig:mpx}
\end{figure}
A core task in network analysis is to identify and understand communities, also known as clusters or cohesive groups; that is, to explain why groups of entities (actors) belong together based on the explicit ties among them and/or the implicit ties induced by some similarity measures given some attributes of these entities. Since members of a community tend to share common properties, revealing the community structure in a network can provide a better understanding of the overall functioning of the network.
\rev{Unfortunately, c}ommunity detection methods \rev{for} simple graphs are not sufficient to deal with the complexity of the multiplex
\rev{model, for three main reasons. First, without allowing the analysis of subsets of the layers some communities may become hidden by edges in irrelevant layers. This is a common problem also in traditional multivariate data analysis, where several preprocessing methods have been developed to remove irrelevant information and algorithms have been extended to explore subsets of the data dimensions, as done by subspace clustering methods. Second, algorithms not explicitly representing the different layers cannot differentiate between different types of multiplex communities, e.g., those present on a single layer and those made of specific combinations of layers. Third, without a concept of layer it is not possible to include the same actor in different communities depending on the layer where the actor is active. In other words, community detection methods for simple graphs cannot conceptually represent (and thus discover) some types of communities that can only be defined on multilayer networks, although this does not
imply that
non-multilayer methods will always be outperformed by multilayer algorithms.}
\rev{To address the above limitations,
several community detection algorithms for multiplex networks have been recently proposed, based on different definitions of community and different computational approaches.}
Recent works have provided a partial overview of \rev{existing algorithms}. \citet{Kim2015} proposed some criteria to compare multi-layered community detection algorithms, but without \rev{any} experimental evaluation. Similarly, \citet{Bothorel2015} highlighted the conceptual differences among clustering methods over attributed graphs, including edge-labeled graphs that can be used to represent multiplex networks, but only provided a taxonomy of the different algorithms without any experimental analysis.
\citet{Loe2015} instead performed a pairwise comparison of the different clusterings produced by some existing algorithms.
This article provides a systematic review and experimental comparison of existing methods\rev{, with the aim of} simplify\rev{ing} the choice and the setup of the most appropriate algorithm for the task at hand. \rev{We test the accuracy of the different methods with respect to some given ground truth on both synthetic and real networks and we study their scalability in terms of the size of the network, both vertically (number of layers) and horizontally (number of actors).} At the same time, \rev{we} highlight weaknesses and strengths of specific methods and of the current state-of-the-art as a whole\rev{, showing how even the most sophisticated methods fail to identify some types of communities}.
The focus of this survey is on algorithms explicitly designed to discover communities in multiplex networks through the analysis of the network structure. Several community detection algorithms have been proposed \rev{to} deal with models \rev{related to but} not \rev{compatible} with the multiplex model\rev{, such as}
Heterogeneous Information Networks \citep{Sun2013,Sun2009,Zhou2013,Sun2009a} and \rev{b}ipartite \rev{n}etworks \citep{Barber2007,Guimera2007}, and are not included in our article. Since we focus on \rev{network structure}, graph clustering on attributed networks \citep{Boden2012a,Ruan, Silva,Xu,Zhou, Qi,Li} \rev{is also not} included in our analysis. For a survey on attributed graph clustering we refer the reader to \citep{Bothorel2015}.
The rest of this work is organized as follows. \rev{Section~\ref{sec:mpx_community} provides some basic definitions used throughout the article.} In Section~\ref{sec:taxonomy} we introduce a taxonomy of existing multiplex community detection methods. \rev{Section~\ref{sec:theory} provides a theoretical comparison of the reviewed algorithms, while} Section~\ref{sec:experiments} presents the experimental settings and the evaluation datasets used in our experiments\rev{. T}he results \rev{of the experimental analysis are given} in Section~\ref{sec:results}. We summarize our main findings and indicate usage guidelines emerged from our experiments in Section~\ref{sec:discussion}.
\section{\rev{Multiplex networks and communities}}\label{sec:prel}
\rev{A multiplex network is a special case of a multilayer network.
A \textit{multilayer network} is defined as a tuple $(A, L, V, E)$, where $A$ is a set of actors, $L$ is a set of layers, and $(V, E)$ is a graph on $V \subseteq A \times L$. Notice that this definition does not require all the actors to be present in all the layers, and allows actors to be present in some layers without having any neighbor on those layers.}
\rev{In multiplex networks $E$ is restricted to intra-layer edges, that is, an edge $((v_1, l_1), (v_2, l_2))$ is allowed only if $l_1 = l_2$. In the following we use $\#a$, $\#l$, $\#v$, and $\#e$ to refer to the cardinality of, respectively, $A$, $L$, $V$, and $E$. We use the terms vertex or node to indicate the elements of $V$, that is, actors inside a layer.}
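As an illustration only, the model defined above can be sketched as a small data structure (all names are hypothetical and do not refer to any existing library):

```python
class MultiplexNetwork:
    """Minimal multiplex network: actors, layers, nodes as
    (actor, layer) pairs, and intra-layer edges only."""
    def __init__(self):
        self.actors = set()
        self.layers = set()
        self.nodes = set()   # subset of actors x layers
        self.edges = set()   # frozensets of two nodes on the same layer

    def add_node(self, actor, layer):
        self.actors.add(actor)
        self.layers.add(layer)
        self.nodes.add((actor, layer))

    def add_edge(self, a1, a2, layer):
        # The multiplex restriction: only intra-layer edges are allowed,
        # so both endpoints must live on the same layer.
        self.add_node(a1, layer)
        self.add_node(a2, layer)
        self.edges.add(frozenset({(a1, layer), (a2, layer)}))

net = MultiplexNetwork()
net.add_edge("A1", "A2", "L1")
net.add_edge("A1", "A3", "L2")
net.add_node("A4", "L1")   # actors may be present on a layer without neighbors
```

Note that, as in the definition, not every actor needs a node in every layer, and a node may be isolated on its layer.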
\label{sec:mpx_community}
The \rev{most common} output of a community detection algorithm for multiplex networks is a set of communities $\mathcal{C}$ = \(\{C_1, C_2, \dots , C_k\} \) such that each community contains a non-empty subset of $V$. \rev{$\mathcal{C}$ is a representation of the \emph{community structure} of the network.} \rev{Sometimes the term \emph{cluster} is also used} as a synonym of community, \rev{although the term community can be interpreted more broadly to also refer to the subgraph induced by its nodes, or even more broadly to indicate the real-world concept it represents, e.g., a group of people with shared norms, values or objectives in a social network. A few community detection methods discover clusters of edges instead of clusters of nodes or actors. Keeping the above considerations in mind, the term}
\emph{clustering} \rev{is also used to refer to the} set of \rev{all} communities. Figure~\ref{fig:clustering_types} illustrates different possible types of clusterings on a multiplex network.
\begin{figure}[!htpb]
\centering
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Community_Definition/complete_clustering}
\caption{Total}
\label{fig:total}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Community_Definition/incomplete_clustering}
\caption{Partial}
\label{fig:partial}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Community_Definition/node-overlapping_clustering}
\caption{Node-overlapping}
\label{fig:node_overlapping}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Community_Definition/node-partitioning_clustering}
\caption{Node-disjoint}
\label{fig:node_partitioning}
\end{subfigure}
\begin{subfigure}[t]{0.31\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Community_Definition/actor-overlapping_clustering}
\caption{Actor-overlapping}
\label{fig:actor_overlapping}
\end{subfigure}
\begin{subfigure}[t]{0.31\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Community_Definition/actor-partitioning_clustering}
\caption{Actor-disjoint}
\label{fig:actor_partitioning}
\end{subfigure}
\caption{Different types of clustering on a multiplex network\rev{. In (c) the two overlapping nodes are A4\_L1 and A3\_L2. In (e) A2 is the overlapping actor}}
\label{fig:clustering_types}
\end{figure}
\begin{figure}[!htpb]
\centering
\begin{subfigure}[t]{0.42\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Community_Definition/pillar_coms}
\caption{Pillar communities}
\label{fig:pillar}
\end{subfigure}
\begin{subfigure}[t]{0.42\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Community_Definition/semi_pillar_coms}
\caption{Semi-pillar communities}
\label{fig:spillar}
\end{subfigure}
\caption{Pillar and semi-pillar multiplex community structures}
\label{fig:pillar_vs_spillar}
\end{figure}
A clustering $\mathcal{C}$ is \textit{total} if every node in $V$ belongs to at least one community, and it is \textit{partial} otherwise.
We also call a clustering \textit{node-overlapping} if there is at least one node that belongs to more than one cluster, otherwise the clustering is called \textit{node-disjoint}. Analogously, if there is at least one actor belonging to more than one cluster we call the clustering \textit{actor-overlapping}, otherwise it is called \textit{actor-disjoint}. Notice that a node-overlapping clustering is also actor-overlapping, while an actor-overlapping clustering may or may not be node-overlapping.
\rev{Finally, a} multiplex community is called \rev{\textit{semi-pillar} on layers $L' \subset L$ if for each actor $a \in A$ in the community all nodes in $\{ (a,l) \in V : l \in L' \}$ are included in the community. When $L' = L$ we talk of a \textit{pillar} community (Figure \ref{fig:pillar_vs_spillar}). Please notice that two pillar (or semi-pillar) communities are either disjoint or both actor- and node-overlapping.}
\section{A taxonomy of the reviewed algorithms}
\label{sec:taxonomy}
In this section we provide a taxonomy of multiplex community detection methods \rev{with three levels of classification. The top-level distinction is between \emph{global} or \emph{local} methods, respectively discovering all communities in the input network or generating a single community around one or more seed nodes. The results of these two types of algorithms are not directly comparable without arbitrary choices in the selection of seed nodes, so we treat them in separate sections in our experimental evaluation. The second level regards the way in which the algorithms handle the presence of multiple layers: reducing them to a single layer (flattening), processing each layer independently to then merge the results of single-layer community detection, or considering all the layers at the same time. The last level of the taxonomy groups the algorithms based on more specific approaches, such as optimizing an objective function, considering the behavior of a random walker or identifying dense subgraphs.} Figure \ref{fig:taxonomy} and Table \ref{table:algorithms} show an overview of the related methods. \rev{Please notice that Section~\ref{sec:theory}, describing some theoretical properties of the algorithms such as whether they are deterministic or not, can also be used to differentiate between different types of algorithms.}
\begin{figure}
\includegraphics[width=\textwidth]{Figures/Taxonomy/taxonomy.eps}
\caption{A taxonomy of multiplex community detection algorithms}
\label{fig:taxonomy}
\end{figure}
\begin{table}[!htpb]
\caption{Multiplex community detection algorithms covered in this survey}
\begin{minipage}{\columnwidth}
\begin{center}
\resizebox{\textwidth}{!}{%
\begin{tabular}{ lll }
\toprule
Algorithm & Notation & Reference \\
\toprule
Non-Weighted Flattening & \textsf{NWF} & \citep{Berlingerio2011c} \\
Weighted Flattening (Edge Count) & \textsf{WF\_EC} &\citep{Berlingerio2011c} \\
Weighted Flattening (Neighbourhood)& \textsf{WF\_N} & \citep{Berlingerio2011c} \\
Weighted Flattening (Differential) & \textsf{WF\_Diff} & \citep{Kim2016} \\
\hline
Frequent pattern mining-based community discovery & \textsf{ABACUS} & \citep{Berlingerio2013} \\
Ensemble-based Multi-layer Community Detection & \textsf{EMCD} &\citep{Tagarelli2017}\\
Principal Modularity Maximization & \textsf{PMM} & \citep{Tang2009UncoverningGV,Tang2012}\\
Subspace Analysis on Grassmann Manifolds & \textsf{SCML} & \citep{Dong2014}\\
\hline
Multi Layer Clique Percolation Method & \textsf{ML-CPM} & \citep{Afsarmanesh2018}\\
Locally Adaptive Random Transitions & \textsf{LART} &\citep{Kuncheva2015}\\
Modular Flows on Multilayer Networks & \textsf{Infomap} & \citep{DeDomenico2015_infomap,Edler2017} \\
Generalized Louvain & \textsf{GLouvain} & \citep{Mucha2010glouvain} \\
Fast algorithm for comm.~detection based on multiplex net.~modularity & \textsf{FCDMNN} & \citep{Zhai2018}\\
Multilink community detection & \textsf{MLink} & \citep{Mondragon2018}\\
Multilevel memetic algorithm for composite community detection & \textsf{MNCD} & \citep{Ma2018}\\
Multi Dimen\rev{s}ional Label Propagation & \textsf{MLP} & \citep{Boutemine2017}\\
\hline
Andersen-Chung-Lang cut & \textsf{ACLcut} &\citep{JEUB2017} \\
Multilayer local community detection &\textsf{ML-LCD}&\citep{Interdonato2017}\\
\bottomrule
\end{tabular}}
\end{center}
\label{table:algorithms}
\end{minipage}
\end{table}
\subsection{\rev{Global methods}}
Global methods are designed to discover all possible communities in a network, thus requiring knowledge on the whole network structure. \rev{As it happens for many multiplex data analysis methods \cite{DickisonMagnaniRossi2016}, global community detection algorithms can also be grouped into three typical main classes, described in the following.}
\subsubsection{\rev{Flattening}}
The first approach consists \rev{in} simplifying the multiplex network into a graph by merging its layers, using a so-called \textit{flattening} algorithm, then applying a traditional community detection algorithm. \rev{This process is illustrated in Figure~\ref{fig:com-flat}.}
\begin{figure}
\includegraphics[width=\textwidth]{Figures/Taxonomy/com_flat.eps}
\caption{\rev{The general process used by flattening methods: a single-layer network is first constructed merging edges from the different layers, then a traditional community detection algorithm is applied to the flattened network, and its result can be used to induce communities on the original network}}
\label{fig:com-flat}
\end{figure}
\rev{The algorithms belonging to this class are defined by the flattening method and by the single-layer community detection algorithm applied to the flattened network. The simplest flattening method consists in creating an unweighted graph where two nodes are adjacent if their corresponding actors are adjacent on any of the input layers \rev{\citep{Berlingerio2011c}}. The advantage of this approach is that the resulting graph is easier to handle, because there are more clustering algorithms for simple graphs than for weighted graphs and weights often imply an additional level of complexity, e.g., deciding a threshold above which weighted edges should be considered. A potential disadvantage is that an unweighted flattening is more susceptible to noise.}
\rev{Weighted flattenings reflect} some structural properties of the original multiplex network in the form of weights assigned to the output edges \rev{\citep{Berlingerio2011c,Kim2016}}. \rev{In theory these methods are less susceptible to noise, but the resulting communities may be biased towards edges appearing on several layers, and the results can be more difficult to interpret because of the weights.}
\rev{In general, the algorithms in this class are only able to identify pillar communities, and some communities may emerge because of edges spread on different layers that would not constitute a community on any individual layer, because of the flattening process.}
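The unweighted and edge-count weighted flattening schemes can be sketched as follows, assuming each layer is given as a list of actor pairs (a simplified illustration; function names are our own):

```python
from collections import Counter

def flatten_unweighted(layer_edges):
    """Unweighted flattening: two actors become adjacent if they are
    adjacent on any layer."""
    flat = set()
    for edges in layer_edges.values():
        flat |= {frozenset(e) for e in edges}
    return flat

def flatten_edge_count(layer_edges):
    """Weighted flattening: the weight of an edge is the number of
    layers on which it appears (the 'edge count' scheme)."""
    weight = Counter()
    for edges in layer_edges.values():
        for e in edges:
            weight[frozenset(e)] += 1
    return dict(weight)

# Hypothetical two-layer input
layers = {
    "L1": [("A1", "A2"), ("A2", "A3")],
    "L2": [("A1", "A2")],
}
flat = flatten_unweighted(layers)
weights = flatten_edge_count(layers)
```

A traditional single-layer community detection algorithm would then be run on `flat` or on the weighted graph defined by `weights`.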
\subsubsection{\rev{Layer by layer}}
\rev{While the methods in the previous class merge the layers and then apply traditional community detection algorithms, layer-by-layer methods first apply traditional community detection algorithms to each layer, then merge their results. This process is illustrated in Figure~\ref{fig:com-lbl}.}
\begin{figure}
\includegraphics[width=\textwidth]{Figures/Taxonomy/com_lbl.eps}
\caption{\rev{The general process used by layer-by-layer methods: communities are identified in each layer, the information obtained from each layer is used to cluster the actors, and this clustering can be used to induce communities on the original network}}
\label{fig:com-lbl}
\end{figure}
\rev{As a consequence of the layer-by-layer community detection step, these methods include actors in the same community only when they are part of the same community in at least one layer. Also, due to the merging of layer-specific communities, these methods can in principle only identify pillar communities.
}
\rev{We have identified three types of layer-by-layer approaches in the literature. The \emph{pattern mining} approach exploits association rule mining methods, which are among the main data-mining tasks used to find objects that frequently co-occur together in different transactions. (A typical example of transaction is the basket of products bought together by a customer at a supermarket.) ABACUS considers each single-layer community as a transaction, so that the final communities contain actors that are part of the same community in at least a minimum number of layers \rev{\citep{Berlingerio2013}}.}
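A much-simplified sketch of this transaction idea, restricted to actor pairs for brevity (the actual ABACUS mines frequent itemsets of arbitrary size with an off-the-shelf frequent pattern miner):

```python
from collections import Counter
from itertools import combinations

def frequent_co_members(layer_communities, min_support):
    """Count, for each pair of actors, in how many layers they end up
    in the same single-layer community (each community acting as a
    'transaction'), and keep the pairs reaching the support threshold."""
    support = Counter()
    for communities in layer_communities.values():
        for com in communities:
            for pair in combinations(sorted(com), 2):
                support[pair] += 1
    return {pair for pair, s in support.items() if s >= min_support}

# Hypothetical single-layer community structures for two layers
per_layer = {
    "L1": [{"A1", "A2", "A3"}, {"A4", "A5"}],
    "L2": [{"A1", "A2"}, {"A3", "A4", "A5"}],
}
pairs = frequent_co_members(per_layer, min_support=2)
```

Here only the pairs co-clustered on both layers survive the support threshold.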
\rev{The second way to merge the result of single-layer community detection methods is based on a notion of \textit{consensus}: given a set (or ensemble) of community structure solutions from the individual layers, the goal is to find a single, meaningful solution that is representative of the input ensemble, by optimizing an objective function that is designed to aggregate information from the individual solutions in the ensemble. While early approaches such as the one in~\citep{ConClus2012} are limited to use a clustering ensemble method as a black-box tool for combining multiple clustering solutions from a single-layer network,
the first well-principled formulation of the ensemble-based multilayer community detection (EMCD) problem, provided in~\citep{Tagarelli2017}, does not limit aggregation at node membership level, but rather it accounts for intra-community and inter-community connectivity.
The consensus solution discovered by EMCD is the one with maximum multilayer modularity from a search space of candidates delimited by topological upper-bound and lower-bound solutions, respectively, of the input multilayer network.
}
\rev{Finally, some methods in the literature process the layer-specific adjacency matrices, or derived matrices,
and extend spectral-clustering for simple graphs by exploiting the relationship between the eigenvectors and eigenvalues in the constructed matrices and the presence of clusters in the corresponding graphs.
As an example, the principal modularity maximization (PMM) method~\cite{Tang2009UncoverningGV} extracts structural features from the various layers, then concatenates the features and performs PCA to select the top eigenvectors. Using these eigenvectors, a low-dimensional embedding is computed to capture the principal patterns across the layers; finally, a simple $k$-means is applied to assign nodes to communities.
Further details on this class of approaches can be found in~\citep{TangLiu2010}. }
\subsubsection{\rev{Multilayer}}
\rev{The third class of algorithms operates directly on the multiplex network model, as shown in Figure~\ref{fig:com-ml}. As an example, a method belonging to this class based on a random walker would allow the walker to switch from one layer to the other, which would not be possible if the layers have been flattened or if we want to separately identify communities on individual layers.}
\begin{figure}
\centering
\includegraphics[width=.5\textwidth]{Figures/Taxonomy/com-ml.eps}
\caption{\rev{The general process used by multilayer methods: communities are discovered directly on the multiplex data}}
\label{fig:com-ml}
\end{figure}
\rev{Various approaches originally developed for simple graphs have been extended to the multilayer case. \emph{Density-based methods} first identify dense regions of the network, then include adjacent regions in the same community. A popular method for simple graphs is clique percolation, where dense regions correspond to cliques and adjacency consists in having common nodes. The multilayer clique percolation method extends this process by looking for cliques spanning multiple layers, and redefining adjacency so that both common nodes and common layers are required \rev{\citep{Afsarmanesh2018}}.}
\rev{Methods based on \emph{random walks} consider that an entity randomly following the edges in a network would tend to get trapped inside communities, because of the higher edge density between nodes inside the same community, less frequently moving from one community to the other. LART \rev{\citep{Kuncheva2015}} and Infomap \rev{\citep{DeDomenico2015_infomap}} are both based on this consideration, with Infomap using a shortest information coding approach to identify the corresponding communities.}
\rev{Several of the reviewed algorithms in the multilayer class use an objective function that, given an assignment of the nodes to communities, returns a higher value when there are more edges inside communities and fewer edges across communities. Once the objective function has been defined, different optimization methods can be used to identify a community assignment corresponding to a high value of the function. Generalized Louvain \rev{\citep{Jutla}}, the best-known method in this class, uses an extended version of modularity, and has been analyzed in more detail in \cite{hanteer_unspoken_2020}. This class also includes a method returning a different type of communities with respect to the ones generated by the other algorithms, where edges are grouped instead of actors and nodes \cite{Mondragon2018}.}
\rev{Finally, the multilayer class includes an algorithm based on \emph{label propagation} \rev{\citep{Boutemine2017}}. A traditional label propagation method would start assigning a different label to each node, then having each node replace its label with one that is frequent among its neighbors, until some stopping condition is satisfied. The multilayer version of this approach follows the same idea, weighting the contribution of each neighbor based on their similarity with the node on the different layers. For example, two nodes being adjacent on all layers and having the same neighbors on all layers would have a higher probability of getting the same label.}
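A toy sketch of the label propagation idea (this is not the actual MLP algorithm: here a neighbor's vote is simply weighted by the number of layers on which the two actors are adjacent, while MLP uses a more refined similarity-based weighting):

```python
from collections import Counter

def multiplex_label_propagation(layer_edges, actors, max_iter=50):
    """Toy multiplex label propagation: each actor repeatedly adopts
    the label with the largest total weight among its neighbors, where
    the weight of a neighbor is the number of layers on which the two
    actors are adjacent."""
    weight = Counter()
    for edges in layer_edges.values():
        for a, b in edges:
            weight[frozenset({a, b})] += 1
    neigh = {a: set() for a in actors}
    for pair in weight:
        a, b = tuple(pair)
        neigh[a].add(b)
        neigh[b].add(a)

    labels = {a: a for a in actors}   # every actor starts with its own label
    for _ in range(max_iter):
        changed = False
        for a in sorted(actors):      # deterministic update order
            votes = Counter()
            for b in neigh[a]:
                votes[labels[b]] += weight[frozenset({a, b})]
            if votes:
                best = max(sorted(votes), key=votes.__getitem__)
                if best != labels[a]:
                    labels[a], changed = best, True
        if not changed:
            break
    return labels

# Hypothetical two-layer network with two natural actor groups
layer_edges = {
    "L1": [("A1", "A2"), ("A2", "A3"), ("A1", "A3"), ("A4", "A5")],
    "L2": [("A1", "A2"), ("A4", "A5")],
}
labels = multiplex_label_propagation(layer_edges, {"A1", "A2", "A3", "A4", "A5"})
```

On this toy input the propagation converges to one label per dense actor group.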
\subsection{\rev{Local methods}}
\rev{L}ocal methods (also known as \textit{node-centric}) are \textit{query-dependent}, i.e., they are designed to discover the community \rev{around} a set of \rev{input} query nodes. \rev{Please notice that the term \emph{local} has also been used with other meanings in the literature, for methods finding global community structures using only neighborhood information when processing vertices in the graph.}
\rev{At the time of writing, we recognize the availability of two methods able to discover multiplex local communities: \textsf{ML-LCD}~\citep{Interdonato2017} and \textsf{ACLcut}~\citep{JEUB2017}.
\textsf{ML-LCD} searches for the local community associated to a seed actor without having a complete knowledge of the network graph, through an incremental exploration of the neighborhood of the query actor, according to the optimization of a criterion function based on the internal and external connectivity of the local community.
\textsf{ACLcut} exploits the solution of a personalized PageRank approximated for an input seed-set (i.e., a set of query actors) in order to find the local communities, using a sweep cut method
to sample local communities based on the lowest conductance values.
Both methods operate directly on the multiplex network model, so that the \textit{Local} branch of our hierarchy only includes the \textit{Multilayer} class. Nevertheless (even if, to the best of our knowledge, there are no such examples in literature) it is in theory possible to easily design multiplex local community detection methods that operate through flattening or layer-by-layer schemes, by exploiting existing single-layer local community detection methods such as \textsf{LCD}~\citep{ChenZG09} and \textsf{Lemon}~\citep{LiHBH15}.}
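The personalized PageRank and sweep-cut machinery underlying \textsf{ACLcut} can be sketched on a single-layer graph as follows (the actual method operates on the multilayer structure and uses an efficient approximate computation; the power-iteration approach and all names here are our own simplifications):

```python
def personalized_pagerank(neigh, seeds, alpha=0.85, iters=100):
    """Power-iteration approximation of personalized PageRank with
    restart on the seed set."""
    nodes = sorted(neigh)
    restart = {v: (1.0 / len(seeds) if v in seeds else 0.0) for v in nodes}
    p = dict(restart)
    for _ in range(iters):
        nxt = {v: (1 - alpha) * restart[v] for v in nodes}
        for v in nodes:
            if neigh[v]:
                share = alpha * p[v] / len(neigh[v])
                for u in neigh[v]:
                    nxt[u] += share
        p = nxt
    return p

def sweep_cut(neigh, p):
    """Sweep over nodes ranked by degree-normalized PPR score and
    return the prefix set with the lowest conductance."""
    order = sorted(neigh, key=lambda v: -p[v] / max(len(neigh[v]), 1))
    vol_total = sum(len(neigh[v]) for v in neigh)
    best, best_phi = None, float("inf")
    S, vol_S, cut = set(), 0, 0
    for v in order[:-1]:              # never take the whole graph
        # edges from v into S leave the cut; edges to the outside join it
        inside = len(neigh[v] & S)
        cut += len(neigh[v]) - 2 * inside
        S.add(v)
        vol_S += len(neigh[v])
        phi = cut / min(vol_S, vol_total - vol_S)
        if phi < best_phi:
            best_phi, best = phi, set(S)
    return best

# Toy graph: two triangles joined by the bridge C-D
neigh = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B", "D"},
         "D": {"C", "E", "F"}, "E": {"D", "F"}, "F": {"D", "E"}}
com = sweep_cut(neigh, personalized_pagerank(neigh, {"A"}))
```

On this toy graph, sweeping from seed A recovers the triangle containing A as the lowest-conductance local community.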
\subsection{\rev{Selection of algorithms}}
\rev{In the following sections we will provide a detailed comparative analysis of a large subset of the algorithms in our taxonomy. We include at least one representative method for each leaf in the taxonomy. In those cases where different well-known methods inside the same leaf show significant differences, either theoretically or experimentally, we have also included them, as detailed in the following.}
\rev{We only focus on a selection of the flattening methods, with one representative for each class (unweighted and weighted), because of the small variation between the different approaches and because the features and performance of these algorithms are determined more by the single-layer approach used to implement them than by the way in which weights are assigned. While the main interest of this article is on multilayer-specific methods, we still considered it important to test some flattening methods in detail, because as we will see in our comparative analysis these simpler approaches can still produce good and sometimes better results than more sophisticated methods.}
\rev{We include all the methods from the layer-by-layer class (ABACUS, EMCD, PMM and SCML), because they are representative of different ways to merge the results of the single-layer algorithms. PMM has been first published in conference proceedings \citep{Tang2009UncoverningGV} and then abstracted and extended in a journal article \citep{Tang2012}. We use the conference version, because the code for the journal version is not available.}
\rev{From the multilayer class we include all the methods except FCDMNN and MNCD, both because GLouvain is by far the best-known representative optimization algorithm and because the code to test these two alternatives is not available to the best of our knowledge. MLink, while included in the scalability analysis, produces link communities that are not directly comparable with the ones produced by other methods.}
\rev{We also include all the local methods (ACLcut and ML-LCD), because they use significantly different approaches.}
\section{\rev{Theoretical analysis}}\label{sec:theory}
\rev{In this section we present some theoretical properties of the reviewed algorithms. We describe the types of community structures that can be returned by each algorithm, we indicate some features of the algorithms themselves such as whether they are deterministic, and we discuss parameter setting and computational complexity.}
\rev{These properties should be considered in combination with the results of our experimental evaluation. For example, the fact that \emph{in theory} an algorithm is able to produce some types of multiplex communities does not imply that these types of communities will be found in practice.
Nonetheless, knowing that some algorithms are not able to return some types of communities or that their execution time grows exponentially with respect to the number of layers can be useful to choose which algorithms to use in specific situations.}
\subsection{\rev{Types of community structures}}
In Section~\ref{sec:prel} we have described different properties of multiplex community structures.
\rev{Table~\ref{table:com_type} indicates which ones are associated to each reviewed algorithm. In particular,
\begin{itemize}
\item[\textbf{(NP)}] if the algorithm can generate non-pillar communities;
\item[\textbf{(AO)}] if it can generate actor-overlapping community structures;
\item[\textbf{(NO)}] if it can generate node-overlapping community structures;
\item[\textbf{(PA)}] if it can generate partial community structures.
\end{itemize}
}
\rev{An algorithm not satisfying these properties --- i.e., one with an `$\times$' in the table --- would, respectively, only be able to produce pillar communities, only partition the actors and nodes, and force all nodes to belong to at least one community. Notice that this can be perfectly fine in some cases, so whether an algorithm satisfies the properties above does not make it better or worse. These properties should only be used as an indication of the appropriateness of an algorithm for specific scenarios.}
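To make these properties concrete, the following sketch classifies a given multiplex community structure according to the four properties above. It is purely illustrative: the function name and the encoding of nodes as (actor, layer) pairs are our own, and the property definitions are assumed to match the conventions of Section~\ref{sec:prel}.

```python
def structure_properties(communities, actors, layers):
    """Classify a multiplex community structure (illustrative sketch).

    Nodes are (actor, layer) pairs; `communities` is a list of sets of
    such pairs.  The flags NP/AO/NO/PA mirror the columns of the table.
    """
    all_nodes = {(a, l) for a in actors for l in layers}
    covered = set().union(*communities) if communities else set()
    partial = covered != all_nodes  # PA: some node in no community

    # NO: some node belongs to more than one community
    node_count = {}
    for com in communities:
        for node in com:
            node_count[node] = node_count.get(node, 0) + 1
    node_overlapping = any(c > 1 for c in node_count.values())

    # AO: some actor belongs (through any of its nodes) to >1 community
    actor_coms = {}
    for i, com in enumerate(communities):
        for actor, _ in com:
            actor_coms.setdefault(actor, set()).add(i)
    actor_overlapping = any(len(s) > 1 for s in actor_coms.values())

    # pillar: each community contains, for every actor in it,
    # that actor's nodes on all layers
    pillar = all(
        {(a, l) for (a, _) in com for l in layers} <= com
        for com in communities
    )
    return {"NP": not pillar, "AO": actor_overlapping,
            "NO": node_overlapping, "PA": partial}
```

For example, a partition into pillar communities yields all four flags false, while splitting an actor's nodes across communities turns on NP and AO without necessarily making the structure node-overlapping, consistently with the patterns discussed below.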
\begin{table}[!htpb]
\caption{Types of clustering \rev{produced} by the reviewed methods \rev{and algorithmic properties. The second column recalls the class of the algorithm (G-Flat: global flattening, G-LBL: global layer by layer, G-ML: global multilayer, L-ML: local multilayer). Columns NP (Non-Pillar), AO (Actor-Overlapping), NO (Node-Overlapping) and PA (Partial) indicate if the algorithm can (\checkmark) or cannot ($\times$) produce that type of community structure. Columns LR (Layer Relevance), Det (Deterministic), AK (Automated selection of the number of communities) and SS (Subgraph Structure) refer to the functioning of the algorithm.} (*) indicates that the answer depends on the single-layer clustering algorithm used by the method. \rev{(-) indicates that the property is not relevant for the algorithm.}}
\begin{minipage}{\columnwidth}
\begin{center}
\begin{tabular}{ll | llll | llll}
\toprule
Algorithm & \rev{Category} & NP & AO & NO & PA & LR & Det & AK & SS \\
\toprule
NWF & \rev{G-Flat} & \rev{$\times$} & * & * & * & $\times$ & * & * & *\\
WF\_EC & \rev{G-Flat} & \rev{$\times$} & * & * & * & $\times$ & * & * & *\\
\hline
ABACUS & \rev{G-LBL} & \rev{$\times$} & \rev{\checkmark} & \rev{\checkmark} & \rev{\checkmark} & $\times$ & * & \checkmark & \checkmark \\
EMCD & \rev{G-LBL} & \rev{$\times$} & \rev{$\times$} & \rev{$\times$} & \rev{$\times$} & \checkmark & * & \checkmark & \checkmark\\
PMM & \rev{G-LBL} & \rev{$\times$} & \rev{$\times$} & \rev{$\times$} & \rev{$\times$} & $\times$ & $\times$ & \rev{$\times$} & \rev{$\times$} \\
SCML & \rev{G-LBL} & \rev{$\times$} & \rev{$\times$} & \rev{$\times$} & \rev{$\times$} & $\times$ & \rev{$\times$} & $\times$ & \rev{$\times$} \\
\hline
ML-CPM & \rev{G-ML} & \rev{\checkmark} & \rev{\checkmark} & \rev{\checkmark} & \rev{\checkmark} &$\times$ & \checkmark & \checkmark & \checkmark \\
Infomap & \rev{G-ML} & \rev{\checkmark} & \rev{\checkmark} & \rev{\checkmark} & \rev{\checkmark} & \rev{$\times$} & \checkmark & \checkmark & \rev{\checkmark} \\
LART & \rev{G-ML} & \rev{\checkmark} & \rev{\checkmark} & \rev{$\times$} & \rev{$\times$} & \rev{\checkmark} & \rev{$\times$} & \checkmark & \checkmark \\
GLouvain & \rev{G-ML} & \rev{\checkmark} & \rev{\checkmark} & \rev{$\times$} & \rev{$\times$} & \checkmark & $\times$ & \checkmark & \checkmark \\
MLP & \rev{G-ML} & \rev{\checkmark} & \rev{\checkmark} & \rev{$\times$} & \rev{$\times$} & \rev{\checkmark} & $\times$ & \checkmark & \checkmark \\
\hline
ML-LCD & \rev{L-ML} & \rev{$\times$} & \rev{-} & \rev{-} & \rev{-} & \rev{\checkmark} & \rev{\checkmark} & - & \rev{\checkmark} \\
ACLcut & \rev{L-ML} & \rev{\checkmark} & \rev{-} & \rev{-} & \rev{-} & \rev{$\times$} & \rev{$\times$} & - & \rev{\checkmark} \\
\bottomrule
\end{tabular}
\end{center}
\end{minipage}
\label{table:com_type}
\end{table}
\rev{Looking at Table~\ref{table:com_type}, we can identify some patterns:
\begin{itemize}
\item For all flattening methods, the type of the resulting community structure (Overlapping/Disjoint and Total/Partial) depends on the single-layer algorithm used after flattening. The choice of the single-layer algorithm can then be made depending on the desired result.
\item All flattening methods produce pillar communities, because the actors on different layers are reduced to a single node in the flattened graph.
\item All multilayer methods can produce non-pillar communities in theory, although our experimental evaluation shows that pillar communities are often returned by some of these methods.
\item Pillar actor-overlapping communities are always node-overlapping, by definition.
\item Non-pillar actor-overlapping communities may or may not be node-overlapping.
\end{itemize}}
\subsection{\rev{Algorithmic properties}}
In their survey work, \citet{Kim2015} discussed a classification framework based on a set of desired properties for multilayer community detection methods. These properties are: multiple layer applicability, consideration of each layer's importance, flexible layer participation (i.e., every community can have a different coverage of the layers' structure), no-layer-locality assumption (e.g., independence from initialization steps biased by a particular layer), independence from the order of layers, algorithm insensitivity, and overlapping layers (e.g., two or more communities can share substructures over different layers).
We observe that the first of the properties \rev{listed above} (multiple layer applicability) \rev{is satisfied by} all the methods we reviewed\rev{; therefore, we do not elaborate on it} further. By contrast, the second property (consideration of each layer's importance) \rev{is also included in our list and further elaborated, as detailed below (Layer Relevance)}. \rev{We collapse the properties about independence from the order in which nodes and layers are examined into a single property, which also covers stochastic behaviors such as random walkers (Determinism). As we focus on multiplex networks, we do not treat the case where layers are ordered.} The insensitivity property (i.e., independence or robustness against the main tunable input parameters) is instead \rev{replaced by a more specific property on whether the number of communities is automatically derived (Auto-detection), and by a more general discussion about how to set additional parameters, in the next paragraph. The last property we consider (Subgraph Structure) was not discussed in previous surveys.}
In light of the above considerations, we next define \rev{the following} four properties\rev{, also indicated in Table~\ref{table:com_type}}.
\begin{itemize}
\item[\rev{(\textbf{LR})}] \textbf{Layer relevance.} Some methods take into consideration each layer's importance\rev{, also called relevance in some of the reviewed works,}
in order to control their contribution to the computation of the multiplex community structure.
\rev{Layer relevance can either be} learned based on the layer characteristics, or \rev{provided as an input to the algorithm} based on a-priori knowledge (e.g., user preferences).
\item[\rev{(\textbf{Det})}] \textbf{Determinism.} This refers to whether a method has a deterministic behavior, \rev{e.g.,} whether its output is independent of the order of examination of the nodes and/or layers.
\item[\rev{(\textbf{AK})}] \textbf{Auto-detection of the number of communities.}
Some methods expect the number of communities to be decided ahead of time, while others can determine it automatically.
\item[\rev{(\textbf{SS})}] \textbf{Subgraph structure.}
\rev{The primary output of all the reviewed methods is the cluster membership of nodes. However, some methods also tell us something about the multilayer subgraph structures underlying each community, that is, we can get more information about which edges contributed to the discovery of each community.}
\end{itemize}
\rev{
Different algorithms tune layer relevance (LR) in different ways. The only algorithm allowing weights to be specified as input parameters is GLouvain, through the parameter $\omega$ that gives more or less importance to the fact that the same actor is included in the same community in different layers. However, these weights are assigned to pairs of actors in different layers, not to individual layers, and in practice $\omega$ is set to a single value for the whole network.
In EMCD, the importance of the various layers may be considered by differently setting the resolution parameter in the multilayer modularity. Both LART and MLP use a concept of layer relevance (that is, how important a layer is for a node or a pair of nodes) to weight the probability of the random walker to switch layer or of a label to be propagated.
ML-LCD is designed to explicitly incorporate layer relevance weighting schemes in the local community functions.
}
\rev{Non-determinism is the result of different features in different algorithms: using heuristics to optimize an objective function (as in GLouvain), using non-deterministic clustering algorithms as sub-procedures (as in PMM and SCML), using stochastic choices (as in LART) or updating community assignments iteratively (as in MLP).}
\rev{With regard to the last property, all the methods returning non-pillar communities provide information about which layers define each community. For example, in ML-CPM communities are combinations of adjacent cliques, so all the edges in these cliques can be considered part of the community. As another example, MLP computes a score for each pair of nodes indicating how likely a label is to be propagated from one to the other, leading to a common community. However, even methods that do not return information about layers as their primary output could be used to indicate which layers and edges determine each community. EMCD only accounts for those edges from different layers that contribute to maximizing the multilayer modularity of the consensus community structure. In ABACUS, even if the output of the algorithm is about actors, for each pair of actors included in the same community we could look at which layers determined that assignment.}
\subsection{\rev{Parameter setting}}
\rev{Apart from the number of communities to discover, which is required by some algorithms as input, the reviewed methods have a variety of additional input parameters to set. While explaining the meaning of each parameter goes beyond the aims of this survey, we believe it can be useful to characterize the methods with respect to how difficult and/or important it is to properly set their parameters.}
\rev{Some methods can be executed parameter-free. This is the case for all flattening methods, unless their single-layer clustering algorithm requires parameters, and for MLP and Infomap, although Infomap provides additional options that the interested reader can check on the information-rich website provided by the authors.\footnote{https://www.mapequation.org}}
\rev{ABACUS and ML-CPM require the user to specify minimum values for the number of layers and actors to be included in a community, which makes them able to identify partial community structures. These parameters affect the result by making it progressively harder to accept a group of nodes as a community; while setting suitable values may require multiple trials, in our opinion the meaning of these parameters is easy to grasp.}
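For intuition, the merging step behind ABACUS can be sketched as a brute-force search over shared community memberships. The actual algorithm relies on frequent closed itemset mining (the eclat implementation mentioned later in this paper); the function below, including its name and input format, is only an illustrative approximation.

```python
from itertools import combinations

def abacus_like(layer_communities, k, m):
    """Brute-force sketch of the merging idea behind ABACUS.

    Each actor is seen as a set of (layer, community) items; every
    group of at least k actors sharing at least m such items becomes a
    multiplex community.  `layer_communities` is a list (one entry per
    layer) of partitions, each partition being a list of actor sets.
    """
    items = {}
    for layer, partition in enumerate(layer_communities):
        for cid, com in enumerate(partition):
            for actor in com:
                items.setdefault(actor, set()).add((layer, cid))

    result = set()
    actors = sorted(items)
    max_size = max(len(s) for s in items.values())
    for r in range(m, max_size + 1):
        # candidate itemsets of size r, drawn from the actors' item sets
        itemsets = set()
        for a in actors:
            itemsets.update(combinations(sorted(items[a]), r))
        for subset in itemsets:
            group = frozenset(a for a in actors if set(subset) <= items[a])
            if len(group) >= k:  # support threshold: at least k actors
                result.add(group)
    return result
```

Raising $m$ filters out groups that do not recur across several single-layer communities, which is exactly the effect of the \textsf{ABACUS$_{(42)}$} setting used in the experiments below.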
\rev{EMCD requires the co-association threshold $\theta$ to be specified, which may have a strong impact on the resulting consensus communities. The original paper presenting this algorithm indicates optimal ranges of values on some networks and suggests that similar values can be used for similar networks.}
\rev{PMM requires the number of structural features to be specified, which can be any number between 1 and $\#a-2$. Also in this case different settings can lead to quite different results, and this parameter has a less intuitive meaning compared with those required by other methods. Similarly, SCML requires a regularization parameter $\lambda$. In addition, both methods require the number of expected communities, as mentioned in the previous section, and the number of times the k-means algorithm used as a sub-procedure should be repeated. In general, different executions of k-means can lead to different results.}
\rev{GLouvain requires only two parameters: $\omega$, weighting inter-layer contributions, and $\gamma$, the so-called resolution parameter. Regarding $\gamma$, we refer the reader to the literature about its usage and shortcomings in the single-layer version of modularity. $\omega$, which in theory can be set individually for each pair of actors in different layers but is more practically set to a single value, has an apparently intuitive meaning: a low value gives priority to intra-layer communities, while a higher value tends to discover communities spanning multiple layers. We refer the reader to \cite{hanteer_unspoken_2020} for a deeper discussion about what can and cannot be identified with different settings of $\omega$.}
\rev{LART requires four parameters: $t$, $\epsilon$, $\gamma$, and \textit{linkage}. While the interpretation of some of these parameters is intuitive, in particular the type of hierarchical clustering to be performed inside the algorithm (\textit{linkage}) and the number of steps to be taken by the random walker ($t$), it is in general difficult to predict what impact each setting will have on the final result, which makes these parameters more difficult to set than those of other methods.}
\rev{Regarding the local methods, they naturally take the set of query nodes as an input parameter. \textsf{ML-LCD} has no additional parameters, except for the ones controlling layer weights in the \textsf{ML-LCD$_{(lwsim)}$} formulation. However, in the absence of exogenous information about the importance of each layer, uniform weights can be used without loss of generality. Concerning \textsf{ACLcut}, the main parameters are the ones controlling the random walk generating the input transition tensor. Two alternative models can be used, which differ in how they navigate the multiplex network: a classic random walk, controlled by a uniform interlayer edge weight $\omega$, and a relaxed random walk, controlled by a layer-jumping probability $r$. These parameters are shown to have a major impact on the characteristics of the resulting local communities, thus it is not clear how to set them in general cases.
\textsf{ACLcut} also includes an underlying \textsf{APPR} (Approximated Personalized PageRank) procedure, whose resolution is controlled by two additional parameters: the teleportation parameter
$\gamma$ and the truncation parameter $\epsilon$.
A default value of $0.95$ can be used for $\gamma$, while arbitrary small values can be used for $\epsilon$ (e.g., inversely proportional to the number of nodes in the network).}
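As a rough illustration of the role of $\gamma$ and $\epsilon$, the sketch below computes a personalized PageRank vector by plain power iteration. Note that the actual \textsf{APPR} procedure inside \textsf{ACLcut} uses a local push algorithm rather than global iteration; the function, its parameter handling and the stopping rule are our own simplification.

```python
def personalized_pagerank(adj, seeds, gamma=0.95, eps=1e-6, max_iter=2000):
    """Power-iteration personalized PageRank (illustrative sketch).

    gamma plays the role of the teleportation parameter and eps of the
    truncation parameter: entries below eps are dropped at the end.
    `adj` maps each node to the set of its neighbours (undirected).
    """
    # teleportation distribution: uniform over the seed nodes
    s = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in adj}
    p = dict(s)
    for _ in range(max_iter):
        new = {n: (1 - gamma) * s[n]
               + gamma * sum(p[m] / len(adj[m]) for m in adj[n])
               for n in adj}
        if max(abs(new[n] - p[n]) for n in adj) < 1e-12:
            p = new
            break
        p = new
    # truncation: keep only entries with non-negligible mass
    return {n: v for n, v in p.items() if v >= eps}
```

With a higher $\gamma$ the walk strays further from the seeds, enlarging the support of the vector; a larger $\epsilon$ truncates more aggressively, both effects mirroring the resolution behaviour discussed above.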
\subsection{\rev{Some notes on computational complexity}}
\rev{In most cases, a detailed study of the computational complexity of community detection algorithms is not provided in the original references. This can be explained by the fact that many well-known algorithms were neither developed by computer scientists nor published in computer science venues. However, we also notice that worst-case complexity would often not be particularly informative: execution time typically depends strongly on the data and on the parameter settings, making an experimental analysis more useful in characterizing the methods.
At the same time, some considerations can be useful to either predict or understand the behaviour of some algorithms in specific situations.}
\rev{For flattening methods, time complexity depends on the flattening step and on the subsequent single-layer community detection step. Basic types of flattening are in $O(\#e)$, in which case the complexity of the algorithm corresponds to the one of the community detection step.}
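A minimal sketch of such a basic flattening step (the subsequent single-layer community detection is not shown; the function name and the edge-list input format are our own):

```python
from collections import Counter

def flatten(layers, weighted=True):
    """Basic flattening of a multiplex network (illustrative sketch).

    `layers` is a list of edge lists over a shared set of actors.  The
    result maps each undirected actor pair to a weight: the number of
    layers containing the edge when `weighted` is True (flattening by
    edge count), and 1 otherwise (unweighted flattening).  A single
    pass over all edges, hence O(#e).
    """
    flat = Counter()
    for edges in layers:
        for u, v in edges:
            # normalize the pair so (u, v) and (v, u) count as one edge
            flat[(min(u, v), max(u, v))] += 1
    if not weighted:
        return {e: 1 for e in flat}
    return dict(flat)
```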
\rev{As for layer-by-layer methods, the complexity also depends on the community detection algorithm applied to each layer, but the step where the communities from the different layers are merged can be significantly more expensive than a flattening. ABACUS uses association rule mining, which can in theory generate an exponential number of rules. The actual execution time is however dependent on the input thresholds: the minimum number of layers where actors must be assigned to the same community to be included in the final result (corresponding to the support count measure in association rule mining) and the minimum number of actors in a community to be counted (limiting the transaction size in the association rule mining algorithm).
\rev{EMCD scales linearly with the number of multilayer edges and with the number of consensus communities.}
PMM extracts $f$ structural features from each dimension, then computes a singular value decomposition on data of size $\#a \times f\#l$; therefore, its complexity depends on the number of actors, the number of layers (that is, the data), and on the number of features (which is an input parameter).}
\rev{ML-CPM requires the computation of maximal cliques, which is NP-hard even on a single layer. This implies that dense regions of the input networks spanning $m$ or more layers and consisting of a few tens of nodes may lead to impractically slow computations. Maximal clique detection can however be very fast in practice for sparser networks with small communities. GLouvain uses a heuristic to optimize an extended modularity objective function, as modularity optimization is already NP-hard on single networks. In general, label propagation algorithms have a complexity of $O(\#e \cdot i)$, where $i$ is the number of iterations, which is often small. However, MLP also contains a subroutine iterating over all subsets of the layers, to compute pairwise weights to be used when labels are propagated. This makes its complexity exponential in the number of layers $\#l$.}
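To illustrate the $O(\#e \cdot i)$ behaviour of label propagation, and one source of its non-determinism (the node visiting order), here is a sketch of plain single-layer label propagation. It is not the MLP algorithm, which additionally computes cross-layer weights; names and tie-breaking rules are our own.

```python
import random
from collections import Counter, defaultdict

def label_propagation(edges, max_iter=100, seed=0):
    """Plain single-layer label propagation (illustrative sketch).

    Each node starts with its own label; at every iteration each node
    adopts the most frequent label among its neighbours (ties broken by
    the smallest label).  One iteration touches each edge a constant
    number of times, giving the O(#e * i) cost; the shuffled visiting
    order is one source of non-determinism in this family of methods.
    """
    rng = random.Random(seed)
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    labels = {n: n for n in adj}
    for _ in range(max_iter):
        changed = False
        nodes = list(adj)
        rng.shuffle(nodes)
        for n in nodes:
            counts = Counter(labels[m] for m in adj[n])
            best = max(counts.values())
            new = min(l for l, c in counts.items() if c == best)
            if new != labels[n]:
                labels[n] = new
                changed = True
        if not changed:  # converged: no label moved in a full pass
            break
    return labels
```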
\rev{The computational complexity of \textsf{ML-LCD} is proportional to the size of the generated community, thus the overall upper bound is $\mathcal{O}(|C|^2 \times d \times \Phi)$, where $|C|$ is the size of the local community, $d$ is the maximum degree of a node in the network and $\Phi$ is the cost of optimizing the $LC$ function. Possible values of $\Phi$ depend on the three alternative formulations and are
$\mathcal{O}(\#l \times d)$ for \textsf{ML-LCD$_{(lwsim)}$}, $\mathcal{O}(\#l \times d^2 \log d)$ for \textsf{ML-LCD$_{(wlsim)}$} and $\mathcal{O}(|C| \times d^2 \log d \times \#l^2)$ for \textsf{ML-LCD$_{(clsim)}$}. The complexity of \textsf{ACLcut} has not been studied in the original paper.}
\section{Experimental evaluation}
\label{sec:experiments}
We devised an experimental evaluation to pursue two main goals in comparing the various methods: one relating to the quality of the produced communities, the other to efficiency aspects. More specifically, our experiments were carried out to answer the following research questions:
\begin{itemize}
\item[\rev{\textbf{Q1}}] To what extent
are the evaluated methods able to detect ground truth communities?
\item[\rev{\textbf{Q2}}] To what extent do the evaluated methods produce similar community structures?
\item[\textbf{Q3}] To what extent are the evaluated methods scalable?
\end{itemize}
Two main stages of evaluation were devised: one for global methods (Sect.~\ref{ssec:results_global}), whose output is a set of communities, and one for local methods (Sect.~\ref{ssec:results_local}), whose output is a single community centered around a node (or set of nodes). Due to their structural differences, these two tracks had to be evaluated separately and by means of different criteria.
\subsection{\rev{Data}}
\rev{To evaluate the communities discovered by the tested methods, we use} a selection of real datasets widely used in the literature, \rev{representing different application areas and with different characteristics: AUCS (short for Aarhus University Computer Science) \cite{Rossi2015}, a hybrid online/offline network with different types of relationships between employees of a university department; DKPol (short for Dansk Politik) \cite{DBLP:conf/asunam/Hanteer0DM18}, a network with three types of online relations between Danish Members of Parliament on Twitter; Airports (short for Air Transportation Multiplex) \cite{Cardillo2013}, with flight connections between European airports; and Rattus \cite{DeDomenico2015}, about genetic interactions. AUCS and DKPol also come with some possible community structures, referred to as ground truth in the following: respectively, the research groups at the department and the affiliation to political parties.}
\rev{As ground truth in the real datasets has a quite simple structure, mostly containing pillar non-overlapping communities, we also generated synthetic datasets forcing specific types of community structures, illustrated in Figure~\ref{fig:CCSS}. The code used to generate these networks is available at the following address: https://bitbucket.org/uuinfolab/20csur.}
General information about the\rev{se} networks including the mean and standard deviation over the layers for density, degree, average path length and clustering coefficients are reported in Table \ref{table:mpx_info}. More detailed information about the evaluation datasets used in the experiments \rev{is} provided \rev{in} the Appendix.
\rev{Finally, we generated networks with varying numbers of actors (100 to 10000) and layers (1 to 20) to perform scalability tests. These networks have the same structure indicated as PEP (Pillar Equal Partitioning) in Figure~\ref{fig:CCSS}, because this is the only type of community structure that most of the methods can correctly recover, as we shall see in the results of our experiments.}
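A minimal sketch of how a PEP-style benchmark can be generated (this is not the generator used in our experiments, which is available in the repository mentioned above; the function name, parameters and default probabilities are our own):

```python
import random

def generate_pep(n_actors, n_layers, n_coms, p_in=0.3, p_out=0.01, seed=0):
    """Generate a multiplex network with a Pillar Equal Partitioning
    (PEP) community structure (illustrative sketch).

    Every layer shares the same partition of the actors into equally
    sized communities, with dense intra-community and sparse
    inter-community edges.  Returns one edge list per layer and the
    actor-to-community map.
    """
    rng = random.Random(seed)
    # same equal-size partition of the actors on every layer (pillar)
    com = {a: a * n_coms // n_actors for a in range(n_actors)}
    layers = []
    for _ in range(n_layers):
        edges = [(u, v)
                 for u in range(n_actors)
                 for v in range(u + 1, n_actors)
                 if rng.random() < (p_in if com[u] == com[v] else p_out)]
        layers.append(edges)
    return layers, com
```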
\sisetup{separate-uncertainty}
\begin{table}[!htbp]
\centering
\caption{ \textbf{Summary of structural characteristics of the evaluation networks}: number of layers (\textbf{\#l}), number of actors (\textbf{\#a}), number of nodes (\textbf{\#n}), number of edges (\textbf{\#e}), and mean/std over the layers of density (\textbf{a\_den}), node degree (\textbf{a\_deg}), average path length (\textbf{a\_p\_len}), and clustering coefficient (\textbf{ccoef})}
\label{table:mpx_info}
\begin{tabular}{lrrrr
S[table-format=1.2(3)]
S[table-format=2.2(3)]
S[table-format=1.2(3)]
S[table-format=1.2(3)]}
\toprule
\textbf{Network} & \textbf{\#l} & \textbf{\#a} & \textbf{\#n} & \textbf{\#e} & {\textbf{a\_den}} & {\textbf{a\_deg}} & {\textbf{a\_p\_len}} & {\textbf{ccoef}} \\
\toprule
\input{"exp/data_summary_real.csv"}
\bottomrule
\end{tabular}
\subcaption{Real datasets}
\bigskip
\begin{tabular}{lrrrr
S[table-format=1.2(3)]
S[table-format=2.2(3)]
S[table-format=1.2(3)]
S[table-format=1.2(3)]
}
\toprule
\textbf{Network} & \textbf{\#l} & \textbf{\#a} & \textbf{\#n} & \textbf{\#e} & {\textbf{a\_den}} & {\textbf{a\_deg}} & {\textbf{a\_p\_len}} & {\textbf{ccoef}} \\
\toprule
\input{"exp/data_summary_synt.csv"}
\bottomrule
\end{tabular}
\subcaption{Synthetic datasets with a controlled community structure}
\end{table}
\begin{figure}
\centering
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Experiments/Synthetic_with_community_structure/PEP_.eps}
\caption{Pillar Equal Partitioning (PEP)}
\label{fig:CCSS:pep}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Experiments/Synthetic_with_community_structure/PEO_.eps}
\caption{Pillar Equal Overlapping (PEO)}
\label{fig:CCSS:peo}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Experiments/Synthetic_with_community_structure/PNP_.eps}
\caption{Pillar Non-Equal Partitioning (PNP)}
\label{fig:CCSS:pnp}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Experiments/Synthetic_with_community_structure/PNO_.eps}
\caption{Pillar Non-Equal Overlapping (PNO)}
\label{fig:CCSS:pno}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Experiments/Synthetic_with_community_structure/SEP_.eps}
\caption{Semi-pillar Equal Partitioning (SEP)}
\label{fig:CCSS:sep}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Experiments/Synthetic_with_community_structure/SEO_.eps}
\caption{Semi-pillar Equal Overlapping (SEO)}
\label{fig:CCSS:seo}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Experiments/Synthetic_with_community_structure/SNP.eps}
\caption{Semi-pillar Non-equal Partitioning (SNP)}
\label{fig:CCSS:snp}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Experiments/Synthetic_with_community_structure/SNO.eps}
\caption{Semi-pillar Non-equal Overlapping (SNO)}
\label{fig:CCSS:sno}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Experiments/Synthetic_with_community_structure/NHN_.eps}
\caption{Hierarchical (HIE)}
\label{fig:CCSS:HIE}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Experiments/Synthetic_with_community_structure/NNM_.eps}
\caption{Mixed (MIX)}
\label{fig:CCSS:nnm}
\end{subfigure}
\caption{An illustration of \rev{the types of} synthetic multiplex networks generated for different possible multiplex community structures\rev{. Equal/Non-Equal refers to the number of nodes (size) in the communities}}
\label{fig:CCSS}
\end{figure}
\subsection{\rev{Detailed setting for each method}}
\rev{For all methods based on a single-layer algorithm, we use Louvain. Using the same algorithm makes the comparison fairer; however, we must point out that this deviates from some original publications. We also tested the methods using the single-layer algorithms mentioned in the original references (e.g., label propagation). We think that the relevance of these methods for this paper lies in the way they deal with the multilayer structure rather than in the specific algorithm that is used on the single-layer network. Within this perspective, using Louvain provides more stable, more accurate and more comparable results in general.}
With respect to parameter setting, in general we used the default values proposed by the original works. In some specific cases, where \rev{different parameter settings are expected to be used to identify different types of community structures} (i.e., GLouvain, ML-CPM, ABACUS, ACLcut, ML-LCD, and Infomap), we \rev{tested multiple settings as detailed in the following.}
\begin{itemize}
\item \rev{For \textsf{ABACUS},} two main parameters \rev{affect the filtering of possible multiplex communities when single-layer communities are merged into the final result}, namely, the minimum number of actors in a community (\rev{$k$}) and the minimum number of single-layer communities in which the actors must have been grouped together (\rev{$m$}). We use this algorithm with two settings: \textsf{ABACUS$_{(31)}$} with (\rev{$k$}=3, \rev{$m$}=1) and \textsf{ABACUS$_{(42)}$} with (\rev{$k$}=4, \rev{$m$}=2), which filters out the communities that are not expanded over multiple layers.
\item \rev{\textsf{PMM} takes three parameters: the number of communities to return, the number of structural features, and the number of times k-means should be executed as a subroutine, which we set to 5. The number of communities has been set to the number of known communities where available, and to an arbitrary number (10) for Airports and Rattus. The fact that we used knowledge about the expected result to set up the algorithm should be considered when the different methods are compared. We did not find heuristics to set the number of structural features ($\ell$), so we used two settings: low and constant ($\ell = 10$), and high and dependent on the number of actors ($\ell = \#a/2$); these are among the settings returning good results for AUCS and PEP, for which a ground truth compatible with the results that PMM can return exists. However, please notice that the results may vary significantly with this parameter, and we set it based on knowledge of the expected result. This should also be considered when looking at the experimental results.}
\item \rev{\textsf{SCML} takes two parameters: the number of communities, for which the same settings and reflections as for PMM apply, and $\lambda$, set to the default value $0.5$.}
\item \rev{\textsf{EMCD} takes one parameter, $\theta$, for which different settings can lead to significantly different results. The original reference contains an evaluation of appropriate ranges of $\theta$ for datasets with different statistics. We based our settings on these considerations: 0.03 for Airports and Rattus, 0.01 for DKPol, 0.2 for AUCS, and 0.1 for the synthetic networks.}
\item \textsf{ML-CPM}: two main parameters \rev{can influence} the results and the \rev{execution time} of the algorithm, namely, the minimum number of actors that form a multilayer clique (\rev{$k$}), and the minimum number of layers to be considered when counting the multilayer cliques (\rev{$m$}). To be more inclusive, we defined two settings for these parameters, \textsf{ML-CPM$_{(31)}$} with (\rev{$k$}=3, \rev{$m$}=1) which allows single-layer communities but could be computationally very expensive with large networks, and \textsf{ML-CPM$_{(42)}$} with (\rev{$k$}=4, \rev{$m$}=2) which is less expensive computationally, but forces the communities to be expanded over at least two layers.
\item \rev{\textsf{LART} has been executed with default parameter settings: $t = 9$ (number of steps for the random walker to take), $\epsilon = 1$ (for binary matrices this means adding a self-loop to each node on each layer),
$\gamma = 1$ (recommended by the authors), and \textit{linkage} = average (determining the type of hierarchical clustering performed in the algorithm).}
\item \textsf{Infomap} can be used to find both overlapping and non-overlapping communities. Consequently, we included it twice in our experiments, i.e., forcing a non-overlapping community discovery \rev{(}\textsf{Infomap$_{(no)}$}\rev{)}, and accepting overlapping communities \rev{(}\textsf{Infomap$_{(o)}$}\rev{)}.
\item \rev{For} \textsf{GLouvain} we defined two settings, \textsf{GLouvain$_{h}$} to denote high weight assigned to the inter-layer edges ($\omega = 1$), and \textsf{GLouvain$_{l}$} to refer to \rev{a} low value for the inter-layer edge weight ($\omega = 0.1$). The motivation is that high values for $\omega$ favor the identification of pillar communities and may prevent the identification of actor-overlapping communities that the algorithm can retrieve with a low $\omega$.
\item \rev{\textsf{MLink} takes two input parameters leading to different types of results. As we have not analyzed the resulting communities, for which we refer to the original reference, we use the default values from the original implementation for the scalability analysis.}
\item \rev{\textsf{MLP} has no input parameters.}
\item \rev{For} \textsf{ACLcut}\rev{,} two settings were used. One with a classical random walker \textsf{ACLcut$_{(c)}$}, and another with a relaxed random walker \textsf{ACLcut$_{(r)}$}.
\item \rev{For} \textsf{ML-LCD} \rev{we used three} settings \rev{corresponding to different ways} to optimize the $LC$ function during the selection of nodes to join a local community, namely, \textsf{ML-LCD$_{(lwsim)}$}, for the layer-weighted similarity based $LC$, \textsf{ML-LCD$_{(wlsim)}$} for the within-layer similarity based $LC$, and \textsf{ML-LCD$_{(clsim)}$} for the cross-layer similarity based $LC$.
\end{itemize}
\subsection{\rev{Software}}
\rev{The following experiments have been performed using a combination of original code (LART in Python 2.7, EMCD in Java, PMM, SCML, and MLink in MATLAB, Infomap in C++) and the implementations of the other algorithms available in the multinet library (NWF, WF\_EC, ABACUS, CPM, GLouvain, MLP, all written in C++ and also available for R and Python). We also use the multinet library for basic functions, e.g., to read networks and communities and to compute the Omega index. Infomap was also run from inside multinet, but the code is the one from the authors, with minor adaptations to make it compatible with the requirements of the CRAN repository. The implementation of ABACUS uses code from https://borgelt.net/eclat.html for the association rule mining subroutine. All the algorithms are available at https://bitbucket.org/uuinfolab/20csur, except ACLcut, which has not been ported to the latest version of the multinet library. The MATLAB code in this repository is run using Octave; all of it could be executed in Octave except the internal edge clustering subroutine used by MLink. As we did not compare the results of MLink with other algorithms, we skipped that part of the execution, which does not affect our conclusions about its scalability. For scalability tests of the Generalized Louvain algorithm we used the implementation by \citet{Jutla}.}
\subsection{\rev{Assessment criteria}}
In order to measure pairwise similarity between two global community structures, we use the Omega index which is a well known measure \citep{Collins1988} that can be applied to situations where both, one, or neither of the clusterings being compared is overlapping \citep{murray2012omega}. It does so by averaging the number of agreements on both clusterings and then adjusting that by the expected \rev{number of} agreement\rev{s between the two} clusterings in case they were generated at random. An agreement is when a pair of nodes is grouped together \rev{in the same number of clusters} ($j$) in both clusterings. The values of $j$ start from 0, meaning that if a pair \rev{is never assigned to the same cluster} in either clustering, this still counts as an agreement.
Given two clusterings $\mathcal{C}_1$, $\mathcal{C}_2$, the similarity between them using Omega index is given by
\begin{equation}
\text{Omega ($\mathcal{C}_1$,$\mathcal{C}_2$)} = \frac{\text{Observed ($\mathcal{C}_1$,$\mathcal{C}_2$)} - \text{Expected ($\mathcal{C}_1$,$\mathcal{C}_2$)}}{1-\text{Expected ($\mathcal{C}_1$,$\mathcal{C}_2$)}}
\end{equation}
\begin{equation}
\text{Observed ($\mathcal{C}_1$,$\mathcal{C}_2$)} = \frac{1}{N} \sum_{j=0}^{l} A_j
\end{equation}
\begin{equation}
\text{Expected ($\mathcal{C}_1$,$\mathcal{C}_2$)} = \frac{1}{N^2} \sum_{j=0}^{l} N_{(j,1)}N_{(j,2)}
\end{equation}
where Observed ($\mathcal{C}_1$,$\mathcal{C}_2$) refers to the observed agreement, represented by the average number of agreements between $\mathcal{C}_1$ and $\mathcal{C}_2$, $l$ is the maximum number of times a pair appears together in both $\mathcal{C}_1$ and $\mathcal{C}_2$ at the same time, $N$ is the total number of possible pairs, $A_j$ is the number of pairs that are grouped together $j$ times in both clusterings, and $N_{(j,1)}$, $N_{(j,2)}$ \rev{indicate} the number\rev{s} of pairs that have been grouped together $j$ times in $\mathcal{C}_1$ and $\mathcal{C}_2$, respectively. Theoretically, values \rev{of the Omega index} are in the range $[-1,1]$. However, in practice, the Omega index returns 1 for two identical clusterings, and values close to 0 when one of the two input clusterings is a totally random reordering of the other.
\rev{To clarify the formulas above, we provide two examples. First, to understand the meaning of each part of the formulas, consider two equal overlapping clusterings of four elements 1, 2, 3, and 4: $\mathcal{C}_1 = \{\{1,2,3\}, \{2,3,4\}\}$ and $\mathcal{C}_2 = \{\{1,2,3\}, \{2,3,4\}\}$. In this case the number of possible pairs $N$ is 6 ($\{1,2\}, \{1,3\}, \{1,4\}\dots$). $A_0=1$, because only the pair $\{1,4\}$ does not appear inside a same cluster in both clusterings. $A_1=4$, corresponding to pairs $\{1,2\}, \{1,3\}, \{2,4\}$, and $\{3,4\}$, all appearing together once in each clustering. Only the pair $\{2,3\}$ is assigned to two different clusters in each clustering, therefore $A_2=1$. The other values to compute the omega index are $N_{(0,1)}=1, N_{(0,2)}=1, N_{(1,1)}=4, N_{(1,2)}=4, N_{(2,1)}=1, N_{(2,2)}=1$. As a result, we have:
$\text{Observed ($\mathcal{C}_1$,$\mathcal{C}_2$)} = \frac{1}{6} (1+4+1)$ and $\text{Expected ($\mathcal{C}_1$,$\mathcal{C}_2$)} = \frac{1}{36} (1\cdot1+4\cdot4+1\cdot1)$.
The corresponding Omega index is 1, as expected because the two clusterings are identical. Now consider the two clusterings $\mathcal{C}_1 = \{\{1,2\}, \{3,4\}\}$ and $\mathcal{C}_2 = \{\{1,2\}, \{3\}, \{4\}\}$. We now have
$\text{Observed ($\mathcal{C}_1$,$\mathcal{C}_2$)} = \frac{1}{6} (4+1)$ and $\text{Expected ($\mathcal{C}_1$,$\mathcal{C}_2$)} = \frac{1}{36} (4\cdot5 + 2\cdot1)$ with Omega index 0.57.}
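The computation above can be made concrete with a short Python sketch (illustrative only, not the implementation used in our experiments, which comes from the multinet library); each clustering is given as a list of node sets, so overlapping clusterings are supported:

```python
from itertools import combinations

def omega_index(c1, c2):
    """Omega index between two (possibly overlapping) clusterings,
    each given as a list of sets of elements."""
    elements = set().union(*c1, *c2)
    pairs = list(combinations(sorted(elements), 2))
    n = len(pairs)

    def co_counts(clustering):
        # j-value of each pair: number of clusters containing both elements
        return {p: sum(1 for c in clustering if p[0] in c and p[1] in c)
                for p in pairs}

    j1, j2 = co_counts(c1), co_counts(c2)
    max_j = max(max(j1.values()), max(j2.values()))

    # Observed: fraction of pairs co-clustered the same number of times
    observed = sum(1 for p in pairs if j1[p] == j2[p]) / n
    # Expected: chance agreement from the per-j pair counts of each clustering
    expected = sum(
        sum(1 for p in pairs if j1[p] == j) * sum(1 for p in pairs if j2[p] == j)
        for j in range(max_j + 1)
    ) / n ** 2
    return (observed - expected) / (1 - expected)
```

On the two worked examples above, the sketch returns 1 for the identical clusterings and $4/7 \approx 0.57$ for the second pair.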
The reason why we choose the Omega index is that it is, by definition, a valid measure when one, both or none of the two clusterings is \rev{overlapping}, as we discuss in a previous study on community evaluation metrics \cite{hanteer2019_sim}. In addition, the Omega index is an adjusted similarity measure that accounts for the by-chance agreements that might still exist between any two random clusterings over the same node-set.
For measuring similarity between two local communities $s_1$, $s_2$, we use \rev{the} Jaccard coefficient:
\begin{equation}
JC = \frac{N(s_1,s_2)}{N(s_1)+N(s_2)-N(s_1,s_2)}
\end{equation}
where $N(s_1)$ refers to the number of actors in solution $s_1$ and $N(s_1,s_2)$ refers to the number of common actors between two solutions $s_1$, $s_2$. The values of \rev{the} Jaccard coefficient lie in the range [0,1] where 1 means perfect similarity and 0 means perfect dissimilarity.
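As a minimal sketch (illustrative, not the evaluation code itself), the coefficient can be computed directly from the two actor sets:

```python
def jaccard(s1, s2):
    """Jaccard coefficient between two communities, given as actor sets:
    size of the intersection over size of the union."""
    s1, s2 = set(s1), set(s2)
    return len(s1 & s2) / len(s1 | s2)
```

For instance, two communities sharing two of their four distinct actors, e.g. $\{1,2,3\}$ and $\{2,3,4\}$, have a Jaccard coefficient of $2/4 = 0.5$.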
In order to measure the accuracy of the solutions obtained by global methods with respect to a ground truth (Section \ref{sssec:results_glob_acc}), we resort again to \rev{the} Omega index.
The accuracy of local community detection methods (Section \ref{sssec:local_acc_anyalisis}) has been evaluated by comparing pairwise similarities (using \rev{the} Jaccard index) between a given actor (i.e., seed node) and the ground truth community it belongs to. The average Jaccard index over all actors is then used as the final accuracy score.
\subsection{Global Methods}
\label{ssec:results_global}
In this section we report the experimental results of the comparative evaluation of global multiplex community detection methods. The section is structured as follows: Section~\ref{sssec:results_glob_stats} reports on the main properties of the community structures detected by the evaluated methods in different datasets. Section~\ref{sssec:results_glob_acc} presents the results of the accuracy analysis\rev{.} Section~\ref{sssec:results_glob_pw} discusses the results of the pairwise comparison between different methods.
Section~\ref{sssec:results_glob_scal} focuses on scalability.
\subsubsection{Basic descriptive statistics}
\label{sssec:results_glob_stats}
As the first step of our comparative analysis, we analyzed the structural properties of the different community structures identified by the evaluated methods. Table~\ref{table:global_real_stats:2} presents the statistics concerning the community structures obtained on the smallest (AUCS) and largest (Airports) of the real-world multiplex networks taken into account; statistics for the other real-world networks are reported in the Appendix.
In Table~\ref{table:global_real_stats:2}, we denote with $\#c$ the number of communities, with $sc1$ the size of the largest community \rev{(number of nodes)}, with $sc2/sc1$ the ratio of the size of the second largest community to the largest, with $\%n$ the percentage of nodes assigned to at least one community, with $\%p$ the percentage of pillars, with $\%ao$ the percentage of actors in more than one community, with $\%no$ the percentage of nodes in more than one community, and with $\%s$ the percentage of singleton communities.
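The node-level statistics can be computed directly from the list of communities; the following Python sketch is illustrative only (the pillar and actor statistics, $\%p$ and $\%ao$, additionally require the layer structure and are omitted here):

```python
from collections import Counter

def community_stats(communities, all_nodes):
    """Node-level descriptive statistics for a (possibly overlapping)
    community structure. `communities` is a list of node sets, `all_nodes`
    the set of all nodes in the network."""
    sizes = sorted((len(c) for c in communities), reverse=True)
    covered = set().union(*communities)
    memberships = Counter(n for c in communities for n in c)
    return {
        "#c": len(communities),                                   # number of communities
        "sc1": sizes[0],                                          # size of the largest
        "sc2/sc1": sizes[1] / sizes[0] if len(sizes) > 1 else 0,  # 2nd largest / largest
        "%n": len(covered) / len(all_nodes),                      # nodes in >= 1 community
        "%no": sum(1 for n in memberships
                   if memberships[n] > 1) / len(all_nodes),       # overlapping nodes
        "%s": sizes.count(1) / len(communities),                  # singleton communities
    }
```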
\begin{sidewaystable}
\centering
\caption{Statistics about the community structures obtained on the \rev{AUCS} network (results averaged over 10 runs). We denote with \textbf{\#c} the number of communities, with \textbf{sc1} the size of the largest community \rev{(number of nodes)}, with \textbf{sc2/sc1} the ratio of the size of the second largest community to the largest, with \textbf{\%n} the percentage of nodes assigned to at least one community, with \textbf{\%p} the percentage of pillars, with \textbf{\%ao} the percentage of actors in more than one community, with \textbf{\%no} the percentage of nodes in more than one community, and with \textbf{\%s} the percentage of singleton communities.}
\begin{tabular}{l
S[table-format=1.2(2)]
S[table-format=3.2(1)]
S[table-format=3.2(1)]
S[table-format=1.2(2)]
S[table-format=1.2(2)]
S[table-format=1.2(2)]
S[table-format=1.2(2)]
S[table-format=1.2(2)]}
\toprule
method & {$\#c$} & {$sc1$} & {$sc2/sc1$} & {$\%n$} & {$\%p$} & {$\%ao$} & {$\%no$} & {$\%s$} \\
\toprule
\input{exp/comm_summary_aucs.csv}
\bottomrule
\end{tabular}
\subcaption*{AUCS\rev{. \#l = 5, \#a = 61, \#n = 224, \#e = 620}}
\label{table:global_real_stats:2}
\end{sidewaystable}
\begin{sidewaystable}
\centering
\caption{Statistics about the community structures obtained on the Airports network (results averaged over 10 runs). We denote with \textbf{\#c} the number of communities, with \textbf{sc1} the size of the largest community \rev{(number of nodes)}, with \textbf{sc2/sc1} the ratio of the size of the second largest community to the largest, with \textbf{\%n} the percentage of nodes assigned to at least one community, with \textbf{\%p} the percentage of pillars, with \textbf{\%ao} the percentage of actors in more than one community, with \textbf{\%no} the percentage of nodes in more than one community, and with \textbf{\%s} the percentage of singleton communities.}
\begin{tabular}{l
S[table-format=4.2(3)]
S[table-format=5.2(1)]
S[table-format=3.2(1)]
S[table-format=1.2]
S[table-format=1.2(3)]
S[table-format=1.2(3)]
S[table-format=1.2(3)]
S[table-format=1.2(3)]}
\toprule
method & {$\#c$} & {$sc1$} & {$sc2/sc1$} & {$\%n$} & {$\%p$} & {$\%ao$} & {$\%no$} & {$\%s$} \\
\toprule
\input{exp/comm_summary_airports.csv}
\bottomrule
\end{tabular}
\subcaption*{Airports\rev{. \#l = 37, \#a = 417, \#n = 2034, \#e = 3588}}
\label{table:global_real_stats:2}
\end{sidewaystable}
It can be observed how \textsf{LART} generates more communities than most other methods on all real networks. However, a large percentage of these communities appear to be singletons, indicating that this algorithm mostly fails to aggregate nodes into communities.
Other algorithms that generate a relatively high number of communities regardless of the network structure are \rev{\textsf{Infomap\_o}} and \textsf{ABACUS}, both variants. Interestingly, \rev{both retrieve a large number of communities without retrieving any singleton, showing a different behavior from \textsf{LART}. The discovery of many communities by \textsf{Infomap\_o} and \textsf{ABACUS} is associated with a high percentage of node-overlapping.}
As regards the size of the largest community, the highest values correspond to \textsf{PMM\_l} and \textsf{Infomap\_o}. On the other end, \textsf{ABACUS} (both variants) and \textsf{CPM$_{(42)}$} assign a small number of nodes to the largest communities, in both the AUCS and the Airports networks. \rev{This can be explained by the strong requirements that ABACUS and (even more) ML-CPM impose to cluster nodes together.}
Concerning $sc2/sc1$, we can observe how the values tend to be relatively high for both the smallest (\rev{AUCS}) and largest (Airports) network\rev{s}, indicating that in these cases the largest communities of each identified community structure have comparable sizes. \rev{An algorithm grouping most of the nodes together, and thus unable to structure them into separate communities, would have a very low value of $sc2/sc1$.}
\rev{The values found in columns $\%n$, $\%p$, $\%ao$ and $\%no$ can be explained as follows:}
\begin{itemize}
\item With regards to the percentage $\%n$ of nodes assigned to at least one community, as we discussed in Section \ref{sec:mpx_community}, certain methods\footnote{\textsf{NWF}, \textsf{WF\_EC}, \textsf{GLouvain} (both variants), \textsf{LART}, \textsf{Infomap} (both variants)} are forced to provide a community assignment for each node: in these cases the value of $\%n$ will \rev{always} be $1$. As regards the other methods, we can observe how \textsf{ML-CPM$_{(42)}$} and \textsf{ABACUS$_{(42)}$} are unable to detect community assignments for a majority of nodes on almost all networks.
\item Regarding the percentage $\%p$ of pillars, both flattening methods always return pillar communities (since the information about layers is lost during the flattening process). \rev{Infomap and GLouvain can in theory detect non-pillar clusters. The data show how \textsf{Infomap} can return non-pillars in both the overlapping and the non-overlapping version, while, of the two \textsf{GLouvain} variants, only \textsf{GLouvain$_{l}$} returns non-pillar communities.}
\item The percentage of overlapping actors ($\%ao$) and nodes ($\%no$) mainly depends on the properties of the specific methods whether they allow overlapping (on the node level or the actor level) or not.
\item As we have already discussed, the percentage of singleton communities $\%s$ appears to be extremely high in the case of \textsf{LART} and \textsf{EMCD}, and high in the case of \textsf{PMM\_l}. It should be noted that, with the exception of \textsf{Infomap}, which returns a small fraction of singletons in the Airports network, the methods that return singletons in the AUCS network return a larger percentage of singletons in the Airports network, suggesting that this behaviour is not induced by the network but amplified by its complexity.
\end{itemize}
\subsubsection{Accuracy analysis}
\label{sssec:results_glob_acc}
With the aim of answering \rev{\textbf{Q1}} (i.e., ``To what extent are the evaluated methods able to detect ground truth communities?'', cf. Section~\ref{sec:experiments}), we perform here an extensive quantitative analysis about the accuracy obtained by each method with respect to ground truth communities.
Only two of the real-world networks have an available ground truth --- specifically AUCS (i.e., affiliations to research groups) and DKPol (i.e., affiliation to political parties). All synthetic networks naturally come with a controlled ground truth.
\begin{figure}[!htpb]
\centering
\begin{subfigure}[t]{0.4\textwidth}
\centering
\captionsetup{justification=centering}
\includegraphics[width=\textwidth]{exp/PEP-100-3_accuracy.eps}
\caption{Pillar Equal Partitioning (PEP)}
\end{subfigure}
\begin{subfigure}[t]{0.4\textwidth}
\centering
\captionsetup{justification=centering}
\includegraphics[width=\textwidth]{exp/PNP-100-3_accuracy.eps}
\caption{Pillar Non-equal Partitioning (PNP)}
\end{subfigure}
\begin{subfigure}[t]{0.4\textwidth}
\centering
\captionsetup{justification=centering}
\includegraphics[width=\textwidth]{exp/SEP-100-3_accuracy.eps}
\caption{Semi-pillar Equal Partitioning (SEP)}
\end{subfigure}
\begin{subfigure}[t]{0.4\textwidth}
\centering
\captionsetup{justification=centering}
\includegraphics[width=\textwidth]{exp/PEO-100-3_accuracy.eps}
\caption{Pillar Equal Overlapping (PEO)}
\end{subfigure}
\caption{Accuracy with respect to a ground truth, Omega index, selected synthetic networks.}
\label{fig:accuracy_global_syn_1}
\end{figure}
\begin{figure}[!htpb]
\centering
\begin{subfigure}[t]{0.4\textwidth}
\centering
\captionsetup{justification=centering}
\includegraphics[width=\textwidth]{exp/MIX-100-3_accuracy.eps}
\caption{Mixed (MIX)}
\end{subfigure}
\begin{subfigure}[t]{0.4\textwidth}
\centering
\captionsetup{justification=centering}
\includegraphics[width=\textwidth]{exp/HIE-100-3_accuracy.eps}
\caption{Hierarchical (HIE)}
\end{subfigure}
\caption{Accuracy with respect to a ground truth, Omega index, mixed and hierarchical communities.}
\label{fig:accuracy_global_syn_2}
\end{figure}
\rev{The results are reported in detail in Figures~\ref{fig:accuracy_global_syn_1}, \ref{fig:accuracy_global_syn_2} and \ref{fig:accuracy_global_r}.
We organize our analysis along three structural dimensions of the community structure that proved to have a relevant impact on performance: Pillar \emph{vs} Non-Pillar structures, Partitioning \emph{vs} Overlapping structures, and Equal \emph{vs} Non-equal structures.
In the case of Pillar Equal Partitioning (PEP) structures almost all the methods perform very well, with \textsf{WF\_EC}, \textsf{WF\_NW}, \textsf{Infomap} and \textsf{GLouvain} (both versions) reaching perfect accuracy. Overall, only \textsf{ML-CPM} (both versions) and \textsf{LART} score below 0.5. In the first case, the strict rules imposed by its parameters explain the performance; for the latter, as we saw in Table~\ref{table:global_real_stats:2}, \textsf{LART} does not seem able to group a considerable number of nodes into communities. Similar patterns, even if with worse levels of accuracy, are visible for all the Pillar structures (PNP, PEO, PNO). A notable difference is present in the Pillar Non-equal Partitioning structure, where \textsf{Infomap} (both variants) performs better than all the other methods (which also score above $0.8$).
Despite the positive results of many methods, one could reasonably ask whether, in the general context of pillar community structures, proper multilayer methods are necessary, since the same (good) results can be achieved with flattening-based methods. }
\rev{The further we move away from a pillar structure, the more a different picture emerges. Semi-Pillar structures show a general reduction in accuracy for all the methods, with the exception of \textsf{ABACUS} (both variants), which consistently performs best in the group. It is also interesting to observe how, in the context of Hierarchical and Mixed structures, \textsf{CPM$_{(31)}$} performs well, being the best performing method in the case of Hierarchical structures.}
\rev{Looking at the results of the node-partitioning \emph{vs} node-overlapping methods, we can observe a pattern similar to the one observed for Pillar \emph{vs} Semi-Pillar community structures. Within a general picture of worse accuracy, overlapping structures show similar results for flattening-based methods, \textsf{Infomap} and \textsf{GLouvain} on Pillar structures, with a remarkable worsening of the performance on Semi-Pillar networks, with the notable exception of \textsf{ABACUS} (both variants).}
\rev{The reason why some methods have an Omega index around 0 is that in these cases they only find one or two large communities. This is not surprising if we consider the structure of some synthetic datasets: in the overlapping community structures all the communities are kept together by their overlapping parts, and in the semi-pillar structures the well-separated semi-pillar communities spanning a subset of the layers end up connected through the different communities on the remaining layers.}
Different behaviors can be observed when we take into account the node-overlapping methods (Figure~\ref{fig:accuracy_global_syn_2}). Accuracy values are generally much lower than in the node-partitioning case.
On the networks with semi-pillar (SEO) and non-pillar (\rev{MIX}) community structures, \textsf{ABACUS$_{(31)}$} seems to perform generally better than the other methods, confirming the trend we have already observed.
\rev{In summary, we observed that the main element affecting the methods' accuracy is the pillar nature of the community structure. The further the network moves away from a pillar structure (with semi-pillar, mixed and hierarchical structures), the worse the results are for most of the methods. A notable exception is \textsf{ABACUS}, which, regardless of the variant, keeps performing above average on Semi-Pillar and Mixed communities. Hierarchical structures are extremely challenging for all the methods, with the notable exceptions of \textsf{CPM$_{(31)}$} and \textsf{GLouvain$_{l}$}, although \textsf{GLouvain$_{l}$} finds communities on individual layers and is thus not clearly identifying any hierarchy spanning multiple layers.}
\rev{These results may indicate that, even though for simple Pillar Equal Partitioning structures multilayer methods do not seem to provide any real advantage over flattening-based methods, more complex structures show how proper multilayer methods can perform better than flattening-based methods.}
\rev{Figure~\ref{fig:accuracy_global_r} reports on the accuracy obtained by the evaluated methods on real-world networks.
It can be observed how accuracy values are relatively low on both networks for all methods, i.e., with Omega index always below $0.8$ and often below $0.5$. More interestingly, the best performing methods do not entirely overlap with the methods that perform the best with the synthetic data.
On AUCS, the best performing method is \textsf{SCML} ($0.70$), followed by \textsf{EMCD}. }
\rev{The results are even more variable on DKPol, where many methods show low scores.\footnote{Zero values are a result of identifying a clustering constituted of only one giant component (i.e., with \textsf{Infomap$_{no}$}). The result of \textsf{CPM$_{(31)}$} is not reported as the execution took more than 24 hours.} Exceptions to this are the two variants of \textsf{GLouvain}, reaching accuracies of $0.68$ (\textsf{GLouvain$_{h}$}) and $0.43$ (\textsf{GLouvain$_{l}$}), respectively. \textsf{SCML}, \textsf{WF\_NW}, \textsf{WF\_EC} and \textsf{EMCD} also perform relatively well, with scores a little below or slightly above $0.6$. }
As a final remark, the difference in performance between real-world and synthetic networks confirms how the ``ideal'' concept of community, i.e., the one based on topological density that is used to build the synthetic ones and to drive the detection process of the methods, is often far from the ground truth communities observed in real cases (which are, in turn, often questionable and subjective). This is a well known problem in the community detection field, and poses challenges in both ways, i.e., concerning the need to design both more powerful methods and more reliable ground truths.
\begin{figure}[!htpb]
\centering
\begin{subfigure}[t]{0.4\textwidth}
\centering
\includegraphics[width=\textwidth]{exp/aucs_accuracy.eps}
\caption{aucs}
\end{subfigure}
\begin{subfigure}[t]{0.4\textwidth}
\centering
\includegraphics[width=\textwidth]{exp/dkpol_accuracy.eps}
\caption{dkpol}
\end{subfigure}
\caption{Accuracy with respect to a ground truth for real-world networks, measured using Omega index.}
\label{fig:accuracy_global_r}
\end{figure}
\subsubsection{Pairwise comparison analysis}
\label{sssec:results_glob_pw}
In order to answer \rev{\textbf{Q2}} (i.e., ``To what extent do the evaluated methods produce similar community structures?'', cf. Section \ref{sec:experiments}), we performed pairwise comparisons between the selected methods, to determine the similarity between the community structures produced by each \rev{pair} of methods on each network.
Based on the results presented in \rev{S}ection \ref{sssec:results_glob_acc}, we focus on the following selected networks:
\textsf{PEP, SNP, and MIX}.
More specifically, Figure~\ref{fig:pws:omega_n_comparision} reports on the results of pairwise analysis among Pillar Equal Partitioning and Semi-Pillar Non-equal Partitioning, while Figure~\ref{fig:pws:omega_n_comparision_2} reports on the results of pairwise analysis among Mixed networks.\footnote{The executions were stopped if not terminated within 24 hours. These cases are left blank in the reported heatmaps.}
All results are organized as heatmaps reporting the Omega index values for the pairwise similarities.
We show Omega index values in the main paper for the sake of homogeneity, since NMI cannot be applied to overlapping solutions.
\begin{figure}[!htbp]
\centering
\begin{subfigure}[t]{0.7\textwidth}
\centering
\includegraphics[width=\textwidth]{exp/PEP-100-3_hm_omega.eps}
\caption{PEP}
\end{subfigure}
\begin{subfigure}[t]{0.7\textwidth}
\centering
\includegraphics[width=\textwidth]{exp/SNP-100-3_hm_omega.eps}
\caption{SNP}
\end{subfigure}
\caption{Pairwise comparison, Omega index: pillar and semi-pillar partitioning communities}
\label{fig:pws:omega_n_comparision}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{exp/MIX-100-3_hm_omega.eps}
\caption{Pairwise comparison, Omega index: mixed (MIX) communities}
\label{fig:pws:omega_n_comparision_2}
\end{figure}
The results shown in Figure~\ref{fig:pws:omega_n_comparision} confirm and expand the understanding of the methods we have described so far. In the case of Pillar Equal Partitioning networks, almost all the methods produce very similar structures, with the notable exception of \textsf{ML-CPM} and \textsf{LART}.
In the case of Semi-Pillar partitioning communities the similarities are much smaller, with a few notable exceptions: \textsf{Infomap$_{no}$} returns communities extremely similar to those returned by \textsf{GLouvain$_h$}, and both also show a strong similarity ($0.7$) with the communities returned by the flattening-based methods.
\rev{Figure~\ref{fig:pws:omega_n_comparision_2} makes this underlying trend clearly visible showing how flattening-based methods, \textsf{Infomap} (both variations) and \textsf{GLouvain} (both variations) produce highly similar results, although not corresponding to the ground truth, and are largely different from what is returned by the other methods.}
Summing up, we observed that node-partitioning methods may produce similar community structures on specific cases (i.e., depending on the methods and the target network), suggesting that, when multiple community memberships are not allowed, some communities will often be unambiguously recognized in the network topology.
Conversely, the multiple community memberships allowed by overlapping methods result in extremely varied solutions, i.e., relatively low similarities are observed regardless of the selected network and pair of methods.
\subsubsection{Scalability Analysis}
\label{sssec:results_glob_scal}
In order to answer \textbf{Q3} (``To what extent are the evaluated methods scalable?'', cf. Section~\ref{sec:experiments}), we tested the scalability of the selected methods with respect to the number of actors and the number of layers. \rev{The reported results were obtained on a macOS Catalina (version 10.15.5) system with a 2.4 GHz dual-core Intel Core i7 processor and 16 GB of RAM.}
Figures~\ref{fig:global_scalability_na}--\ref{fig:global_scalability_nl} report the scalability of each method with respect to an increase in the number of actors and in the number of layers, respectively. \rev{Note that in both cases the scalability of the flattening algorithms largely depends on that of the community detection method used in the final step, since the computational cost of the flattening process is negligible.
Some methods proved to be extremely scalable, most notably \textsf{EMCD} and \textsf{Infomap}, both of which could run in less than a minute on networks containing up to $8000$ actors. However, \textsf{EMCD} takes single-layer community structures as input, so the time to find these communities is not counted in the plot; considering the whole process, \textsf{EMCD} would be close to the flattening methods.
\textsf{ML-CPM} (both variants), \textsf{Mlink} and \textsf{LART} proved to be much less scalable, with a running time quickly increasing with the number of actors.}
\rev{As regards the scalability in the number of layers (Figure~\ref{fig:global_scalability_nl}), we see that, generally speaking, it affects running times less than the number of actors. Only four methods show a significant increase in execution time: \textsf{ML-CPM} with $m=1$, \textsf{Mlink}, \textsf{LART} and \textsf{MLP}. The behavior of \textsf{MLP} is in accordance with its theoretical time complexity.}
\begin{figure}[!htpb]
\centering
\begin{subfigure}[t]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{exp/n_actors_7.eps}
\caption{}
\end{subfigure}
\begin{subfigure}[t]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{exp/n_actors_15.eps}
\caption{}
\end{subfigure}
\caption{Scalability of different community detection methods with respect to the number of actors}
\label{fig:global_scalability_na}
\end{figure}
\begin{figure}[!htpb]
\centering
\begin{subfigure}[t]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{exp/n_layers_7.eps}
\caption{}
\end{subfigure}
\begin{subfigure}[t]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{exp/n_layers_15.eps}
\caption{}
\end{subfigure}
\caption{Scalability of different community detection methods with respect to the number of layers}
\label{fig:global_scalability_nl}
\end{figure}
\subsection{Local Methods}
\label{ssec:results_local}
In this section we report the experimental results of the comparative evaluation of local multiplex community detection methods. The section is structured as follows: Section~\ref{sssec:local_acc_anyalisis} presents the results of the accuracy analysis, Section~\ref{sssec:local_pw_comp} reports on the results of the pairwise comparison between different methods, while Section~\ref{sssec:local_scal_anyalisis} discusses scalability issues.
\subsubsection{Accuracy analysis}
\label{sssec:local_acc_anyalisis}
We performed an accuracy analysis on the local community detection methods, by comparing the local community of each actor to the one that same actor belongs to in the ground truth. \rev{S}imilarity is computed using \rev{the} Jaccard \rev{index}, while the final accuracy value is the average over all actors.
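This procedure can be sketched in Python as follows (illustrative; the argument names are hypothetical): `local_communities` maps each seed actor to the local community found around it, and `ground_truth_of` maps each actor to the ground-truth community containing it.

```python
def local_accuracy(local_communities, ground_truth_of):
    """Average Jaccard similarity between each actor's local community
    and the ground-truth community that actor belongs to."""
    def jaccard(a, b):
        a, b = set(a), set(b)
        return len(a & b) / len(a | b)
    scores = [jaccard(community, ground_truth_of[actor])
              for actor, community in local_communities.items()]
    return sum(scores) / len(scores)
```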
Figure~\ref{fig:acc_local_real} shows results on real\rev{-}world networks. On AUCS, accuracy is in the range of $0.5$--$0.7$ for $4$ out of $5$ methods, with \textsf{ML-LCD(wlsim)} being the best performer ($0.7$). Much lower accuracy values were obtained on \rev{DKP}ol, where the best performing method was \textsf{ML-LC\rev{D}(lwsim)} ($0.27$).
Concerning synthetic networks,
\rev{we limited our analysis} to networks with \rev{a} pillar partitioning community structure (PEP and PNP), for compatibility with the methods' output (both return actor communities).
\rev{In these cases, we observed that a}ccuracies are much higher than the ones observed for real-world networks, with all values in the range [$0.8$,$1.0$]. \textsf{ML-LCD(clsim)} is the best performing method, since it is able to perfectly identify the ground truth community structure on both networks.
Summarizing, while all methods proved to be able to identify synthetic pillar community structures, their performance was much worse on real\rev{-}world networks. These results confirm the behavior observed for global methods (cf. Section~\ref{sssec:results_glob_acc}).
Moreover, it should be pointed out that
comparing a global community structure (i.e., the ground truth) to a set of local ones (i.e., the results obtained by local methods on all actors) may not be completely fair.
\rev{The ground truth in this case represents a global partitioning of the network, while local communities are actor-centered, query-dependent and, in general, overlap with each other.
Moreover, they may be discovered without complete knowledge of the network graph, which is the case for \textsf{ML-LCD}. Although based on the comparison of conceptually different objects (i.e., global and local communities), our accuracy analysis is still significant, as it quantifies to what extent the local community formed around a certain actor falls inside the community of the global structure that contains that actor.} Unfortunately, no networks with an associated ground truth of multiplex local communities are available at the time of writing.
\begin{figure}
\centering
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[width=.95\textwidth]{Figures/Results/Local_methods/Accuracy_analysis/aucs_jaccard.eps}
\caption{\textbf{Aucs}}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[width=.95\textwidth]{Figures/Results/Local_methods/Accuracy_analysis/dkpol_jaccard.eps}
\caption{\textbf{Dkpol}}
\end{subfigure}
\caption{Average accuracy of the local methods with respect to a ground truth, on real-world networks}
\label{fig:acc_local_real}
\end{figure}
\subsubsection{Pairwise comparison}
\label{sssec:local_pw_comp}
As seen in Section \ref{sssec:results_glob_pw} for global methods, we set up an equivalent evaluation stage based on pairwise comparison between the local methods.
In this case, we resorted to \rev{the} Jaccard index to measure the similarity of the community solutions produced by two local methods. Since these methods are query-dependent (i.e., they return the local community of a given query/seed node), we computed the Jaccard similarity between each pair of communities obtained using the same actor as seed, and then averaged the results over all actors. The standard deviation of these average values is provided in the Appendix.
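The comparison procedure can be sketched as follows (illustrative; names are hypothetical): each method's output maps every seed actor to the local community it found, and both the mean and the standard deviation of the per-seed Jaccard similarities are returned.

```python
import statistics

def pairwise_method_similarity(method_a, method_b):
    """Mean and standard deviation of the Jaccard similarities between the
    local communities found by two methods for the same seed actors
    (assumes at least two seed actors)."""
    def jaccard(a, b):
        a, b = set(a), set(b)
        return len(a & b) / len(a | b)
    scores = [jaccard(method_a[actor], method_b[actor]) for actor in method_a]
    return statistics.mean(scores), statistics.stdev(scores)
```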
Figure~\ref{fig:local_pw_sim_real} reports the results obtained on real-world networks. On most of these networks (\rev{DKP}ol, Airports, and Rattus), we can note that communities identified by different variants of \textsf{ML-LCD} and \textsf{ACLcut} tend to be very different. \rev{L}ooking at AUCS, instead, the communities identified by all variants of both \textsf{ML-LCD} and \textsf{ACLcut} tend to be less different, and a higher similarity can be observed among the three variants of \textsf{ML-LCD}.
For synthetic networks (Figure~\ref{fig:local_pw_sim_syn}), similarities are higher for networks based on pillar community structures. In some cases (i.e., PEP and PNP), all methods are practically interchangeable, with all similarities equal or close to $1.0$.
In the other networks, with pillar (i.e., PEO and PNO), semi-pillar (i.e., SEP and SEO) or both (MIX and HIE) community structures, similarities are stronger between the different variants of each method.
Summing up, we observed some similarities in the behavior of all local methods on some real-world and synthetic networks, with an expected tendency of the variants of the same method to identify similar local communities. Nevertheless, this cannot be taken as a general rule, since we also observed specific cases where all methods behaved differently from each other, both on real-world and synthetic networks.
\begin{figure}[!htpb]
\centering
\begin{subfigure}{0.4\textwidth}
\centering
\includegraphics[width=.95\textwidth]{Figures/Results/Local_methods/Pairwise_analysis/aucs_hm_jacc_mean.eps}
\caption{\textbf{AUCS}}
\end{subfigure}
\begin{subfigure}{0.4\textwidth}
\centering
\includegraphics[width=.95\textwidth]{Figures/Results/Local_methods/Pairwise_analysis/dkpol_hm_jacc_mean.eps}
\caption{\textbf{DKPol}}
\end{subfigure}
\caption{Average pairwise similarity among the different local methods on real-world networks}
\label{fig:local_pw_sim_real}
\end{figure}
\begin{figure}[!htpb]
\centering
\begin{subfigure}{0.4\textwidth}
\centering
\includegraphics[width=.95\textwidth]{Figures/Results/Local_methods/Pairwise_analysis/PEP_hm_jacc_mean.eps}
\caption{\textbf{PEP}}
\end{subfigure}
\begin{subfigure}{0.4\textwidth}
\centering
\includegraphics[width=.95\textwidth]{Figures/Results/Local_methods/Pairwise_analysis/PEO_hm_jacc_mean.eps}
\caption{\textbf{PEO}}
\end{subfigure}
\begin{subfigure}{0.4\textwidth}
\centering
\includegraphics[width=.95\textwidth]{Figures/Results/Local_methods/Pairwise_analysis/NHN_hm_jacc_mean.eps}
\caption{\textbf{HIE}}
\end{subfigure}
\begin{subfigure}{0.4\textwidth}
\centering
\includegraphics[width=.95\textwidth]{Figures/Results/Local_methods/Pairwise_analysis/NNM_hm_jacc_mean.eps}
\caption{\textbf{\rev{MIX}}}
\end{subfigure}
\caption{Average pairwise similarity among the different local methods when the same seed is used as an input, on selected synthetic networks}
\label{fig:local_pw_sim_syn}
\end{figure}
\subsubsection{Scalability analysis}
\label{sssec:local_scal_anyalisis}
We tested the scalability of local community detection methods in terms of the number of actors and the number of layers. To carry out the experiment we used the synthetic networks already used for the global case (Section~\ref{sssec:results_glob_scal}). For each network, we report \rev{median} execution times obtained over 100 random seeds. For each method, we chose the least scalable variant as a representative of that method's scalability.
Figures~\ref{fig:actors_scalability_local}--\ref{fig:layers_scalability_local} show the results related to scalability in terms of the number of actors and of layers, respectively. \rev{Both methods showed similarly good scalability, with \textsf{ML-LCD} exhibiting a higher dispersion depending on the chosen seeds}.
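The timing protocol can be sketched as follows (a hedged Python illustration: `detect` stands in for any local detection routine, and the `actors` attribute is an assumption of this sketch, not an actual API of the evaluated implementations):

```python
import random
import statistics
import time

def median_runtime(detect, network, n_seeds=100, seed=0):
    """Median wall-clock time of a local detection method over
    n_seeds randomly chosen seed actors.

    detect: callable (network, actor) -> community
    network: assumed to expose a list-like `actors` attribute
    """
    rng = random.Random(seed)            # fixed seed for reproducibility
    chosen = rng.sample(list(network.actors), n_seeds)
    times = []
    for actor in chosen:
        start = time.perf_counter()
        detect(network, actor)           # community output is discarded
        times.append(time.perf_counter() - start)
    return statistics.median(times)
```

Using the median rather than the mean reduces the influence of occasional outlier seeds on the reported times.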
\begin{figure}[!htpb]
\centering
\begin{subfigure}[t]{0.4\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Results/Local_methods/Scalability_analysis/actors/n_actors_ACLcut.eps}
\caption{\textbf{ACLcut\rev{\_c}}}
\end{subfigure}
\begin{subfigure}[t]{0.4\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Results/Local_methods/Scalability_analysis/actors/n_actors_ML-LCD.eps}
\caption{\textbf{ML-LCD\rev{(clsim)}}}
\end{subfigure}
\caption{\rev{Median} scalability of local methods with respect to the number of actors in the multiplex network}
\label{fig:actors_scalability_local}
\end{figure}
\begin{figure}[!htpb]
\centering
\begin{subfigure}[t]{0.4\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Results/Local_methods/Scalability_analysis/layers/n_layers_ACLcut.eps}
\caption{\textbf{ACLcut\rev{\_c}}}
\end{subfigure}
\begin{subfigure}[t]{0.4\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/Results/Local_methods/Scalability_analysis/layers/n_layers_ML-LCD.eps}
\caption{\textbf{ML-LCD\rev{(clsim)}}}
\end{subfigure}
\caption{\rev{Median} scalability of local methods with respect to the number of layers in the multiplex network}
\label{fig:layers_scalability_local}
\end{figure}
\section{Discussion}
\label{sec:discussion}
\rev{Our experimental study had two main outcomes. First, it} allowed us to \rev{identify} guidelines \rev{about which methods can be the most appropriate} for the data and the task at hand. \rev{Second,} observing in which cases the \rev{reviewed} methods consistently failed to identify the \rev{expected} communities
allowed us to identify the multiplex community structures that remain challenging for the currently available community detection algorithms. This \rev{indicates} a set of open problems for community detection methods in multiplex networks.
Accuracy analysis on synthetic networks
has revealed that \rev{most of the} methods perform \rev{very well} when the community structure
\rev{is made of disjoint pillars}. \rev{Among the many well-performing methods, Infomap and SCML consistently discover community structures that are close or equal to the ground truth; GLouvain and the methods based on Louvain also perform well, but have some issues with communities of varying size.} Among the local methods, ML-LCD(clsim) appears to be the best choice. \rev{It is worth noticing that simpler flattening methods are also among the best performers.}
With regard to non-pillar community structures, we have observed a considerable reduction in the achieved accuracy scores for almost all methods. This observation raises the following question: what kind of assumptions are considered by different methods when multiplex communities are identified? It is clear that there is a tendency, even if not \rev{always} explicitly declared, to assume that multiplex communities are pillar communities expanding over all the layers of the multiplex network. For instance, multi-slice modularity~\citep{Mucha2010} rewards pillar communities when calculating the modularity score, \rev{and spectral methods assume the existence of a latent community structure at actor level}. While pillar community structures are perfectly reasonable and can be \rev{assumed to exist} in many scenarios, they \rev{are} also \rev{the simplest possible cases we tested in this article}. As multiplex approaches have been developed
to overcome the oversimplification of monoplex networks, relying on a single type of ideal community structure seems, at least, a missed opportunity. Thus, more work has to be done on improving the accuracy of community detection methods for non-pillar community structures.
A second set of considerations can be drawn by looking at the results obtained by the evaluated methods when applied to real-world datasets. Our experiments have shown that, on real-world datasets, the \rev{detected} community structures largely differ from the ground truth. This raises two interesting questions\rev{. First,} to what extent is the assumed ground truth itself a valid assumption? In other words, does the ground truth given for a real dataset always describe the community structures identified by a community detection method, or does it capture only one part of the whole picture? The answer to this question is never trivial, even in monoplex networks. Nevertheless, it is easy to see how adding more layers complicates it further. For example, both \rev{DKPol and AUCS} ground truths group together individuals belonging to the same organization (political parties in one case and research groups in the other). The question then becomes whether it is reasonable to assume that the selected relations, observed in the multiplex networks, will produce a community structure corresponding to this formal grouping, and, to some extent, how different relations (thus different layers) can be more or less aligned with this hypothesis. Will members of the same research group work together, or publish together? Have lunch and fun together? Will members of the same political party retweet each other on Twitter, and reply to each other? Indeed, looking at the accuracy of the community structures identified for the real-world dataset\rev{s}, especially in the case of \rev{DKPol}, one might ask whether we are observing a generalized failure of the community detection methods or, conversely, whether the methods were actually able to observe relevant structures that \rev{were} just different from the community structures assumed in the ground truth.
The second question, which is strongly related to the first one, is whether all the layers included in these datasets positively contribute to an accurate identification of the community structure, or whether some of them add noise that heavily affects the identification process. Indeed, the fact that \rev{most of the} community detection method\rev{s} always give an output --- no matter what layers are included in the input multiplex network --- \rev{makes the inclusion of more input layers potentially problematic}. Layers, besides being defined by a specific internal topology, are also defined by internal logics that might or might not be coherent with those of the other layers. The \rev{DKPol} dataset represents a good example of this problem, since a \rev{detailed} analysis of the three layers composing the multiplex network has shown that retweets and following/follower interactions follow relatively \rev{assortative} dynamics for political parties. The replies, however, \rev{are more frequent between members of different} political parties. Here we think that more effort has to be put into the modelling phase of the multiplex network, and some layer-specific measures should be developed to guide the choice of the layers that contribute to the identification of the communities. Several such \emph{multilayer network simplification} methods exist, and more can be developed, as reviewed in
\citep{DBLP:journals/csr/InterdonatoMPTV20}.
A \rev{separate} consideration should be made about the similarities of the obtained results. Focusing, for the above-mentioned reason, mainly on the results obtained from the synthetic networks, it is possible to observe some general patterns. Global partitioning methods show a remarkable level of similarity in detecting community structures based on a pillar-like model. Semi-pillar and hierarchical community structures show a lower degree of similarity between the retrieved community structures.
\rev{We should also consider that differences in the results of different algorithms may be partially due to the fact that some algorithms use heuristics to optimize an objective function (e.g., generalized Louvain), therefore they might not achieve the optimal value.}
Local methods show a behavior that is, to some extent, similar to that of the global partitioning methods. When tested on pillar communities they show a remarkable similarity between the produced communities, which can easily lead to calling them interchangeable. Nevertheless, the less pillar-like the community structure in the data is, the larger the differences become, first between \textsf{ACLcut} and \textsf{ML-LCD} and then also between \rev{different settings of} the same algorithm.
\rev{Scalability analysis has also provided useful information about specific methods with scalability issues, which can be used to select feasible approaches depending on the data.}
\rev{
We would also like to draw additional remarks that might be considered mainly by practitioners. Community detection remains a challenging task, one that is further complicated in multilayer networks, as testified by the plethora of available approaches and methods, most of which have been studied in our extensive survey, while new ones are under development at the time of this writing. From a practical viewpoint, the core problems are, on the one hand, \textit{i}) to select the most suited algorithm and parameterization for a target application domain, and on the other hand, \textit{ii}) to be clear about what kind of community we are interested in or expect to detect.
Problem \textit{i}) should be addressed by taking into account that community detection methods, especially if belonging to different methodological approaches, will easily discover different patterns in a multilayer network, mainly because every method has its own bias resulting from the optimization of different criteria.
We believe this variety of choice should not be seen as a negative point, but rather as an opportunity to discover communities with different structures and related meanings. Also, if a unified solution built from the different available ones remains a priority, the ensemble-based consensus approach could be considered as the way to go.
Understanding problem \textit{ii}) will nonetheless be crucial in most cases, as it may pose a requirement on the structure of the communities to be discovered, thus possibly impacting the choice of the method to be used. In any case, this will also depend on the actual presence of communities of the desired form in the input network; for instance, any method based on the identification of cliques of a given size will likely fail if such cliques are rare or missing altogether in the input network. Therefore, one suggestion in this regard would be to study as deeply as possible the structural micro/mesoscopic characteristics of the input network, both in its entirety as a complex system and at the level of its constituent layers, to better prepare the subsequent analysis for the community detection task. In this regard,
}
as we have shown, pillar-like communities can be well detected with the methods considered in this survey; however, it comes as no surprise that the more we move away from that ideal model of multiplex community structure, the more the expected accuracy drops and the differences between various algorithms become more pronounced.
\rev{Despite the complexity of the multiplex community detection task emerging from our study, we would like to conclude our discussion on a positive note. There are many cases where we have a good expectation of what type of community structures could be found in the data. One example is the simple case of actor communities that expand over multiple layers, as in the AUCS network where people inside the same research group work together, publish papers together and go to lunch together -- although the multilayer data allows us to appreciate how administrative people are part of the community only on some layers, and not, for example, on the co-authorship one. Another example is hierarchical communities, where the layers represent different organizational levels, e.g., University-level interactions, Department-level interactions, research-group-level interactions, etc. Overlapping communities can also be expected in data describing flexible organizations with people having multiple roles. These examples share the same features as some of our synthetic networks (Pillar, Hierarchical, Overlapping).
Therefore, domain knowledge about what type of communities to expect can be used together with our accuracy (and scalability, in case of larger networks) plots to determine which algorithms to prioritize.}
\section{Results}
\label{sec:results}
In this section we present the experimental results of our comparative evaluation. Results of the comparative evaluation of global methods are reported in Section~\ref{ssec:results_global}, while results related to the evaluation of local methods are reported in Section~\ref{ssec:results_local}.
\input{6.1_glocal_results}
\input{6.2_local_results}
\section{Evaluation datasets}
\label{appendixA}
We selected three types of datasets: (i) \textit{real networks} widely used in the literature, for two of which the ground truth is available, (ii) \emph{community structure-controlled synthetic networks} generated by forcing specific community structures\rev{, and (iii) \emph{networks} with a varying number of actors and layers, generated by the same code used to generate the Pillar Equal Partitioning network of the accuracy analysis (global methods) and by the mLFR benchmark \citep{Brodka16} (local methods)}.
\subsection{Real networks}
\label{sssec:real_datasets}
For real networks, a selection of publicly available multiplex networks has been made so as to cover different properties and domains of multiplex networks. More specifically, we selected the following networks:
\begin{itemize}
\item \textbf{\rev{AUCS}}: In this multiplex network, the multiple layers refer to different relationships between 61 employees/PhD students at a University department. The relationships are \textbf{(\romannum{1})} Being friends on Facebook, \textbf{(\romannum{2})} Having lunch together at the university, \textbf{(\romannum{3})} Co-working, \textbf{(\romannum{4})} Co-authoring, \textbf{(\romannum{5})} Offline friendship~\citep{aucs}. The ground truth of this dataset reports the affiliation of actors to research groups.
\item \textbf{\rev{DKPol}}: This is a Twitter interaction network among 490 Danish politicians during the month leading up to the parliamentary elections of 2015. The three layers model the different Twitter interactions \textit{follow}, \textit{reply}, and \textit{retweet} among these politicians. A ground truth for this dataset is available in the form of pairs $\langle$politician name, political affiliation$\rangle$. The political affiliation is one of the ten major political parties in Denmark (i.e., Alternativet, Radikale Venstre, Enhedslisten, Socialdemokratiet, Socialistisk Folkeparti, Dansk Folkeparti, KristenDemokraterne, Liberal Alliance, Venstre, or Det Konservative Folkeparti).
\item \textbf{Airports}. This multiplex network models the connections between 417 European airports on a certain day. Each of the 37 layers in this dataset models the connections made by one commercial airline~\citep{Cardillo2013}.
\item \textbf{Rattus}. This is a multiplex network that models different types of genetic interactions for \rev{\emph{Rattus norvegicus}, constructed from BioGRID and making} use of the following layers: physical association, direct interaction, colocalization, association, additive genetic interaction defined by inequality, and suppressive genetic interaction defined by inequality~\citep{Stark2006}\rev{.}
\end{itemize}
\rev{The ground truth for AUCS (research groups) and DKPol (parties) approximately corresponds to a pillar partitioning, as indicated in Table~\ref{table:gt_stats}.}
\begin{table}
\centering
\caption{Statistics about the community structures in our networks with ground truth. We denote with \textbf{\#c} the number of communities, with \textbf{sc1} the size of the largest community \rev{(number of nodes)}, with \textbf{sc2/sc1} the ratio of the size of the second largest community to the largest, with \textbf{\%n} the percentage of nodes assigned to at least one community, with \textbf{\%p} the percentage of pillars, with \textbf{\%ao} the percentage of actors in more than one community, with \textbf{\%no} the percentage of nodes in more than one community, and with \textbf{\%s} the percentage of singleton communities}
\begin{tabular}{l
rrrrrrrr}
\toprule
method & {$\#c$} & {$sc1$} & {$sc2/sc1$} & {$\%n$} & {$\%p$} & {$\%ao$} & {$\%no$} & {$\%s$} \\
\toprule
\input{exp/gt_summary.csv}
\bottomrule
\end{tabular}
\label{table:gt_stats}
\end{table}
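To make the caption's quantities concrete, the sketch below computes a subset of them from an overlapping community assignment, under our reading of the definitions (in particular, we take a community to be a pillar when it contains each of its actors on every layer of the network; names and toy data are illustrative, not taken from the evaluation code):

```python
def community_stats(communities, all_nodes):
    """Compute #c, sc1, %n, %p, %ao and %s for a set of communities.

    communities: list of sets of nodes, each node an (actor, layer) pair
    all_nodes: set of all (actor, layer) pairs in the network
    """
    layers = {l for _, l in all_nodes}
    actors = {a for a, _ in all_nodes}
    sizes = sorted((len(c) for c in communities), reverse=True)
    covered = set().union(*communities)

    # a pillar contains each of its actors on every layer
    def is_pillar(c):
        return all((a, l) in c for a, _ in c for l in layers)

    # number of communities each actor belongs to
    memberships = {a: sum(any(x == a for x, _ in c) for c in communities)
                   for a in actors}
    return {
        "#c": len(communities),
        "sc1": sizes[0],
        "%n": 100 * len(covered) / len(all_nodes),
        "%p": 100 * sum(map(is_pillar, communities)) / len(communities),
        "%ao": 100 * sum(m > 1 for m in memberships.values()) / len(actors),
        "%s": 100 * sum(len(c) == 1 for c in communities) / len(communities),
    }

# toy network: actors a, b, c on layers 1 and 2
nodes = {(x, l) for x in "abc" for l in (1, 2)}
comms = [{("a", 1), ("a", 2), ("b", 1), ("b", 2)}, {("c", 1)}]
print(community_stats(comms, nodes))
```

In the toy example, the first community is a pillar (it spans both layers for both of its actors), whereas the second is a single-node community, so \%p and \%s both come out at 50\%.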
\subsection{Community structure-controlled synthetic networks}
\label{sssec:datasets_8mpx}
Different multiplex community detection methods have different underlying assumptions about what a multiplex community is. To shed light on these assumptions, we generated \rev{10} different multiplex networks with \rev{10} different built-in community structures. To keep the focus on the community structure, each of the \rev{10} multiplex networks comprises 3 layers, 100 actors, and 300 nodes (100 per layer). After forcing a specific community structure on each multiplex network, the edges were generated with probability $P_{in} = 0.5$ of being internal (within a community) and probability $P_{ext} = 0.01$ of being external (between communities). The following is a brief description of each multiplex network:
\begin{itemize}
\item \textbf{Pillar Equal Partitioning (PEP)}: The community structure in this multiplex is a set of pillar non-overlapping communities that are approximately equal in size (Figure~\ref{fig:CCSS:pep}).
\item \textbf{Pillar Equal Overlapping (PEO)}: Similar to PEP in terms of the size of the communities and the pillar structure. The communities in PEO are however overlapping (Figure~\ref{fig:CCSS:peo}).
\item \textbf{Pillar Non-Equal Partitioning (PNP)}: The community structure in this multiplex is a set of pillar non-overlapping communities. As to the size distribution of the communities, there are few big pillar communities and many small pillar communities (Figure~\ref{fig:CCSS:pnp}).
\item \textbf{Pillar Non-Equal Overlapping (PNO)}:
Similar to PNP in terms of the community size distribution and the pillar structure. The communities in PNO are however overlapping (Figure~\ref{fig:CCSS:pno}).
\item \textbf{Semi-pillar Equal Partitioning (SEP)}: The community structure in this multiplex is a set of semi-pillar non-overlapping communities that are approximately equal in size and a set of single-layer communities (Figure~\ref{fig:CCSS:sep}).
\item \textbf{Semi-pillar Equal Overlapping (SEO)}: Similar to SEP except that the semi-pillar communities are overlapping (Figure~\ref{fig:CCSS:seo}).
\item \rev{\textbf{Semi-pillar Non-Equal Partitioning (SNP)}: The community structure in this multiplex is a set of semi-pillar non-overlapping communities. As to the size distribution of the communities, there are few big pillar communities and many small pillar communities (Figure~\ref{fig:CCSS:snp}).}
\item \rev{\textbf{Semi-pillar Non-Equal Overlapping (SNO)}:
Similar to SNP in terms of the community size distribution and the pillar structure. The communities in SNO are however overlapping (Figure~\ref{fig:CCSS:sno}).}
\item \textbf{Hierarchical (\rev{HIE})}: The community structure in this multiplex reflects some hierarchy among communities on the actor level. Some big node-level communities (like $C_7$ in Figure~\ref{fig:CCSS:HIE}) on layer $L_3$ are composed of smaller communities on layer $L_2$.
\item \textbf{Mixed (\rev{MIX})}: The community structure in this multiplex is a \rev{small} set of single-layer communities some of which are overlapping (Figure~\ref{fig:CCSS:nnm}).
\end{itemize}
Table \ref{table:gt_stats} provides information about the communities in these multiplex networks and Figure~\ref{fig:CCSS} illustrates the different types of multiplex community structures.
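A minimal Python sketch of the generation procedure for the simplest case, PEP (equal-size pillar communities with independent Bernoulli edges per layer; the function name and defaults are illustrative, not the paper's code):

```python
import random

def pep_multiplex(n_actors=100, n_layers=3, n_comms=4,
                  p_in=0.5, p_ext=0.01, seed=0):
    """Sketch of a Pillar Equal Partitioning (PEP) generator.

    Each actor belongs to one community spanning all layers (a pillar);
    intra-community edges are drawn with probability p_in and
    inter-community edges with p_ext, independently on each layer.
    """
    rng = random.Random(seed)
    comm = {a: a % n_comms for a in range(n_actors)}  # equal-size pillars
    edges = []  # (layer, actor_u, actor_v)
    for layer in range(n_layers):
        for u in range(n_actors):
            for v in range(u + 1, n_actors):
                p = p_in if comm[u] == comm[v] else p_ext
                if rng.random() < p:
                    edges.append((layer, u, v))
    return comm, edges

comm, edges = pep_multiplex()
```

The other structures (overlapping, semi-pillar, hierarchical) would change only the assignment step, not the edge-sampling step.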
\section{Community structure statistics on other real datasets}
\label{appendixC}
Statistics of each method occupy one row in each table (multiple settings of the input parameters for some methods are represented as separate entries for the same method). Since some of the methods are non-deterministic, we ran each such method 10 times and report the \rev{mean and standard deviation}. Here we present the general statistics about the communities detected using multiplex community detection methods on the \rev{DKPol} and Rattus datasets (Table\rev{s \ref{table:global_real_stats:dkpol} and \ref{table:global_real_stats:rattus}}).
\begin{sidewaystable}
\centering
\caption{\rev{Statistics about the community structures obtained on the DKPol network (results averaged over 10 runs). We denote with \textbf{\#c} the number of communities, with \textbf{sc1} the size of the largest community (number of nodes), with \textbf{sc2/sc1} the ratio of the size of the second largest community to that of the largest, with \textbf{\%n} the percentage of nodes assigned to at least one community, with \textbf{\%p} the percentage of pillars, with \textbf{\%ao} the percentage of actors in more than one community, with \textbf{\%no} the percentage of nodes in more than one community and with \textbf{\%s} the percentage of singleton communities. ML-CPM\_{31} is not present because it takes too long to produce a result in all executions.}}
\begin{tabular}{l
S[table-format=1.2(3)]
S[table-format=3.2(1)]
S[table-format=3.2(1)]
S[table-format=1.2]
S[table-format=1.2(3)]
S[table-format=1.2(3)]
S[table-format=1.2(3)]
S[table-format=1.2(3)]}
\toprule
method & {$\#c$} & {$sc1$} & {$sc2/sc1$} & {$\%n$} & {$\%p$} & {$\%ao$} & {$\%no$} & {$\%s$} \\
\midrule
\input{exp/comm_summary_dkpol.csv}
\bottomrule
\end{tabular}
\subcaption*{DKPol\rev{. \#l = 3, \#a = 493, \#n = 839, \#e = 20226}}
\label{table:global_real_stats:dkpol}
\end{sidewaystable}
\begin{sidewaystable}
\centering
\caption{\rev{Statistics about the community structures obtained on the Rattus network (results averaged over 10 runs). We denote with \textbf{\#c} the number of communities, with \textbf{sc1} the size of the largest community (number of nodes), with \textbf{sc2/sc1} the ratio of the size of the second largest community to that of the largest, with \textbf{\%n} the percentage of nodes assigned to at least one community, with \textbf{\%p} the percentage of pillars, with \textbf{\%ao} the percentage of actors in more than one community, with \textbf{\%no} the percentage of nodes in more than one community and with \textbf{\%s} the percentage of singleton communities. ML-CPM\_{42} is not present because it does not find any community.}}
\begin{tabular}{l
S[table-format=1.2(3)]
S[table-format=5.2(1)]
S[table-format=3.2(1)]
S[table-format=1.2]
S[table-format=1.2(3)]
S[table-format=1.2(3)]
S[table-format=1.2(3)]
S[table-format=1.2(3)]}
\toprule
method & {$\#c$} & {$sc1$} & {$sc2/sc1$} & {$\%n$} & {$\%p$} & {$\%ao$} & {$\%no$} & {$\%s$} \\
\midrule
\input{exp/comm_summary_rattus.csv}
\bottomrule
\end{tabular}
\subcaption*{Rattus\rev{. \#l = 6, \#a = 2640, \#n = 3263, \#e = 3956}}
\label{table:global_real_stats:rattus}
\end{sidewaystable}
\section{1. Introduction}
In recent years, the operation of the Atacama Large Millimeter/submillimeter Array (ALMA) has opened a new era of radio interferometry with an unprecedented number of antennas, offering a broad choice of possible configurations. In particular, the availability of long baselines has often favoured high angular resolution configurations to the detriment of short baselines, resulting in significant losses of detected flux away from the target. This so-called short-spacing problem (SSP) is well known and has been the subject of numerous studies \citep{Faridani2018, Braun1985}. In particular, incomplete coverage of the $uv$ plane is known to produce artefacts known as ``ghosts'' or ``invisible distributions'' \citep[][and references therein]{Chandra2012}. The SSP is typically addressed by merging the array data with single-dish observations or, in the case of ALMA, with data collected using the compact array (ACA). However, the high diversity of possible antenna configurations producing short baselines may result in a complex morphology of the map of the array acceptance, the precise knowledge of which is mandatory for a reliable interpretation of the data. In particular, it may not be reducible to a single number, the so-called Maximal Recoverable Scale (MRS). In principle, ALMA users are properly warned of this fact and are encouraged to produce simulations of the imaging process for the specific antenna configuration being used (see for example Chapter 7 of the ALMA technical handbook\footnote{https://almascience.nao.ac.jp/documents-and-tools/cycle7/alma-technical-handbook}). In practice, however, it is easy to underestimate the importance of this measure, and the danger of overlooking significant imaging distortions is real.
We illustrate the argument using archival ALMA observations of the emission of the $^{29}$SiO($\nu$=0, $J$=8-7) line by the circumstellar envelope (CSE) of W Hya, an oxygen-rich AGB star at a distance of only $\sim$104$^{+14}_{-11}$ pc from the Sun \citep{VanLeeuwen2007}. It is a semi-regular variable with a period of $\sim$361 days \citep{Samus2017}, often quoted as a Mira \citep{Lebzelter2005}, belonging to spectral class M7.5e-M9ep. Its mass-loss rate is $\sim$10$^{-7}$ M$_\odot$ yr$^{-1}$ \citep{Maercker2008, Khouri2014a} and its main-sequence mass was between 1 and 1.5 M$_\odot$\ \citep{Khouri2014b, Danilovich2017}.
\section{2. Observations, data reduction and imaging}
The present work uses archival observations of W Hya from project ADS/JAO.ALMA\#2015.1.01446.S (PI: A. Takigawa), which were carried out for a total of $\sim$2 hours on source over three separate blocks between 30 November and 5 December 2015 with ALMA in Cycle 3. In the present work we use mostly blocks 1 and 2, the data of block 3 being of lesser quality, but we have checked that all the arguments made in the article are independent of the blocks being used, 1+2+3 or 1+2 or 2 alone, each block displaying very similar antenna configurations. The antennas, respectively 33, 41 and 31 in number, were configured in such a way that the baseline lengths were distributed in two groups, one covering between $\sim$15 m and $\sim$200 m, the other between $\sim$400 m and $\sim$8 km; the former group included 10 antennas in a circle of 100 m radius (Figure \ref{fig1}). The emission of the $^{29}$SiO (8-7) line, with a frequency of 342.9808 GHz (wavelength $\lambda$$=$0.875 mm), was covered with a frequency resolution of $\sim$977 kHz channel$^{-1}$, corresponding to a channel spacing of 0.854 km s$^{-1}$. The data have been reduced using standard scripts without continuum subtraction, with particular attention to the scale over which the flux is reliably recoverable.
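The quoted channel spacing follows from the radio velocity relation $\Delta v = c\,\Delta\nu/\nu_0$; a one-line check in Python:

```python
C_KMS = 299792.458  # speed of light [km/s]

def channel_width_kms(delta_nu_hz, nu0_hz):
    """Velocity width of a frequency channel: dv = c * dnu / nu0."""
    return C_KMS * delta_nu_hz / nu0_hz

# 977 kHz channels at the 29SiO(8-7) rest frequency of 342.9808 GHz
dv = channel_width_kms(977e3, 342.9808e9)  # ~0.854 km/s
```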
Both imaging codes CASA and GILDAS have been used\footnote{https://casa.nrao.edu and https://imager.oasu.u-bordeaux.fr/}. We have checked the consistency between the results obtained with the two codes and with different parameters of the deconvolution algorithms, including weights, masks, number of iterations, etc. In all cases, a proper behaviour was observed and the arguments developed in the article were found to be valid independently of these choices. As an illustration, we show in Figure \ref{fig1} the dirty beam and a map of residuals obtained with GILDAS using natural weighting. Imaging produces a beam of 63$\times$52 mas$^2$ with PA=79$^\circ$\ with natural weighting and a beam of 52$\times$38 mas$^2$ with PA=99$^\circ$\ with robust weighting (threshold of 1).
\begin{figure*}
\centering
\includegraphics[height=4.15cm,trim=0.2cm -1.5cm -1.cm 1.cm,clip]{fig1a-uvcov.eps}
\includegraphics[height=4.4cm,trim=0.2cm .5cm 0.2cm 1.7cm,clip]{fig1ab-baselines-dist.eps}
\includegraphics[height=4.4cm,trim=-.5cm .5cm 0.2cm 1.5cm,clip]{fig1-dirtybeam.eps}\\
\includegraphics[height=4.4cm,trim=0.2cm .5cm 0.2cm 1.5cm,clip]{fig121.eps}
\includegraphics[height=4.4cm,trim=0.2cm .5cm 0.2cm 1.5cm,clip]{fig122.eps}
\includegraphics[height=4.4cm,trim=0.2cm .5cm 0.2cm 1.5cm,clip]{fig1-residual2.eps}
\caption{Upper panels: $uv$ coverage for blocks 1+2. In the left panel, block 1 is shown in black and block 2 in red. Baseline distributions are shown in the central panels, the rightmost being a zoom on smaller baselines. Right: dirty beam (natural weighting); the colour map is saturated at 0.3 (rather than 1) to show the side lobe of the dirty beam. Lower panels: antenna configuration for blocks 1+2 (42 antennas); the central panel is a zoom on the central array; the red arrow (left) points to the central array, the blue arrow points to a triplet of antennas of the extended array producing short baselines; the circle (centre) has a radius of 100 m; the ellipses define three antenna alignments, A, B and C, that are referred to in Figure \ref{fig7}. Right: map of residuals (natural weighting) of a frequency channel close to systemic velocity; the maximum of the colour scale is $\sim$3 mJy beam$^{-1}$, 1.4 times the $\sigma$ of the noise (2.1 mJy beam$^{-1}$).}
\label{fig1}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[height=5cm,trim=0.0cm .5cm 1.5cm 1.cm,clip]{fig2a-fmap-cont-nat.eps}
\includegraphics[height=5cm,trim=1.5cm .5cm 1.5cm 1.cm,clip]{fig2b-fmap-cont-rob1b.eps}
\caption{Maps of the continuum brightness obtained with natural (left) and robust (right) weighting. The colour scale is in units of mJy\,beam$^{-1}$. The beam is shown in the lower right corner of each panel.}
\label{fig2}
\end{figure*}
Maps of the continuum emission are shown in Figure \ref{fig2}. After beam deconvolution, Gaussian fits to the stellar disc give a diameter of 35$\pm$2 mas FWHM, in good agreement with earlier evaluations \citep{Vlemmings2019}. We use coordinates centred on the continuum emission, $x$ pointing east, $y$ pointing north and $z$ pointing away from Earth. The projected distance to the star is calculated as $R=\sqrt{x^2+y^2}$. Position angles, $\omega$, are measured counter-clockwise from north. The Doppler velocity $V_z$ spectrum is centred on a systemic velocity of 40.4 km s$^{-1}$.
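For reference, the $(R,\omega)$ coordinates used in the figures below follow from the sky offsets as in this minimal Python helper (assuming the conventions just stated: $x$ east, $y$ north, $\omega$ counter-clockwise from north through east):

```python
import math

def sky_polar(x, y):
    """Convert sky offsets (x east, y north, in arcsec) to the
    projected distance R and the position angle omega, measured
    counter-clockwise from north, in degrees in [0, 360)."""
    r = math.hypot(x, y)
    omega = math.degrees(math.atan2(x, y)) % 360.0
    return r, omega
```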
Figure \ref{fig3} displays the brightness distribution of the $^{29}$SiO(8-7) emission in a $R$$<$2 arcsec circle centred at the origin; a Gaussian fit to the noise gives a $\sigma$ of 2.1 mJy\,beam$^{-1}$. Also shown are the Doppler velocity spectrum and the intensity map, the latter displaying a complex pattern that is further illustrated in Figure \ref{fig4}, using polar rather than Cartesian sky coordinates. The observed pattern shows back-to-back outflows and a depression around $R$$\sim$0.45 arcsec, deeper when using natural weighting than when using robust weighting. Overall, it suggests the emission, at intervals of typically 30 years and in a plane close to the plane of the sky, of three successive pairs of back-to-back outflows, each pair differently oriented on the plane. Such a pattern has never been observed in earlier studies of AGB stars; we show in the next section that it is an artefact of the particular antenna configuration.
\begin{figure*}
\centering
\includegraphics[width=5cm,trim=0.0cm .5cm 1.cm 1.5cm,clip]{fig3a-noise-rlt2.eps}
\includegraphics[width=5cm,trim=0.0cm .5cm 1.cm 1.5cm,clip]{fig3b-spect-rlt1.eps}
\includegraphics[width=5cm,trim=0.0cm .5cm 1.cm 1.5cm,clip]{fig3c-fmap-vzlt8.eps}
\caption{ Left: brightness distribution in a $R$$<$2 arcsec circle (natural weighting). Middle: Doppler velocity spectrum integrated over $R$$<$1 arcsec. Right: map of the intensity integrated over $|V_z|$$<$8 km s$^{-1}$\ and multiplied by $R$. The beam is shown in the lower right corner.}
\label{fig3}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=6.cm,trim=0.0cm 1cm 0.cm 1.cm,clip]{fig4a-romega-map-casa-gildas.eps}
\includegraphics[width=6.cm,trim=0.0cm 1.cm 0.cm 1.cm,clip]{fig4-rdist-casa-gildas.eps}
\caption{Comparison between the clean maps obtained using different deconvolution algorithms. Left: clean map in the $\omega$ vs $R$ plane reconstructed using CASA (colour) and using GILDAS (contours). The arrows show a spacing of 180$^\circ$\ in position angle. Right: $R$ distributions integrated over position angle.}
\label{fig4}
\end{figure*}
\section{3. Impact of the SSP}
In order to evaluate the impact of the SSP on the results presented in the preceding section and to estimate the projected distance from the star up to which they can be considered reliable, we simulate the response of the array to the emission of an optically thin isotropic wind of constant velocity, with an intensity decreasing in inverse proportion to the projected distance to the star, $R$. To avoid the singularity of such a wind at $R=0$, we use a constant brightness within the disc $R<$0.1 arcsec. As the observed pattern is seen to evolve smoothly from frequency channel to frequency channel, it is sufficient at this stage to consider a single channel, namely to ignore the dependence on Doppler velocity. The clean image obtained from the visibilities produced by the actual antenna configuration is shown in Figure \ref{fig5} and displays a pattern strikingly similar to that observed for the real data: a same depression around $R$$\sim$0.45 arcsec and a same structure in successive pairs of back-to-back outflows. This unexpected result raises two questions: what is precisely causing the effect? And how far away from the star can the wind morphology be reliably evaluated?
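A minimal sketch of this brightness model (intensity $\propto 1/R$, constant within $R<0.1$ arcsec; units are arbitrary):

```python
def wind_brightness(r_arcsec, r_core=0.1):
    """Optically thin isotropic constant-velocity wind: projected
    brightness falls as 1/R, capped at a constant value inside
    r_core to avoid the singularity at R = 0 (arbitrary units)."""
    return 1.0 / max(r_arcsec, r_core)
```

Sampling this model with the actual $uv$ coverage and cleaning the result is what produces the image of Figure \ref{fig5}.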
\begin{figure*}
\centering
\includegraphics[width=6.cm,trim=0.0cm 1.cm 0.cm 1.cm,clip]{fig5a-omgr-2sets-data-mod-m6vzp6.eps}
\includegraphics[width=6.cm,trim=0.0cm 1.cm 0.cm 1.cm,clip]{fig5b-rdist-2sets-data-mod-m6vzp6.eps}
\caption{Imaging the isotropic wind model of emission with the antenna configuration of data sets 1 and 2. Left: clean map ($\omega$ vs $R$) of the model intensity, contours are from the data. Right: $R$ distributions (red is for the model, black is for the data).}
\label{fig5}
\end{figure*}
Figure \ref{fig6} illustrates the mechanism that causes the depression observed around $R$$\sim$0.45 arcsec. The left panel shows that it is already present when using the whole array, as could be seen in the right panel of Figure \ref{fig5}. But it also shows that widening the gap of missing baselines by progressively removing baselines of the central array causes a very rapid drop of the acceptance at distances from the star smaller than $\sim$1 arcsec. Namely, the depression is caused by the lack of baselines between 200 m and 400 m, and imaging at distances larger than $\sim$0.2 arcsec relies on the baselines of the central array. Indeed, keeping only the extended array, with baselines effectively starting at $\sim$450 m, reduces the acceptance to a small region around the origin, with $R$ not exceeding $\sim$0.2 arcsec. The role of the central array is to recover flux missed by the extended array, as can be seen on the left panel of Figure \ref{fig6}: the intensity of the detected emission covering $<$$\sim$0.2 arcsec decreases rapidly when the $uv$ coverage of the central array gets smaller. Note that the flux obtained for the whole array oscillates about the real value, the depression being in fact accompanied by an increased acceptance at $R$$<$$\sim$0.2 arcsec. Using robust weighting rather than natural weighting gives a stronger weight to the extended array in comparison with the central array. As a result, as shown in Figure \ref{fig4}, the depression is attenuated and less missing flux is recovered. In strong contrast with this behaviour, the removal of long baselines has essentially no effect on the acceptance at distances from the star exceeding some 0.2 arcsec. The right panel of Figure \ref{fig6} shows that keeping only baselines shorter than 600 m, namely just the rising edge of the baseline distribution of the extended array (see Figure \ref{fig1}), produces an acceptance close to that produced by the whole array.
In summary, the absence of $uv$ coverage for baselines between $\sim$200 m and $\sim$400 m results in a distortion of the acceptance producing a depression around $R$$\sim$0.45 arcsec ($=\lambda/400$ m). As a result, the pattern observed within $R$$<$1 arcsec is dominated by baselines shorter than $\sim$700 m, the role of the longer baselines being exclusively to improve the angular resolution. A conservative attitude would therefore be to use as MRS the value associated with the extended array, with the understanding that, at larger distances from the star, imaging is seriously impaired by the gap of missing baselines. Using the definition given in the ALMA Technical Handbook, which relates the MRS to the minimal baseline in the sample of the 95\% larger baselines, we find an MRS of 0.33 arcsec corresponding to an effective minimal baseline of 545 m.
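The quoted value can be reproduced from the handbook definition, MRS $\approx 0.983\,\lambda/L_5$, with $L_5$ the minimal baseline of the sample of the 95\% larger baselines (the 0.983 coefficient is the handbook's value, quoted here as an assumption):

```python
import math

ARCSEC_PER_RAD = math.degrees(1.0) * 3600.0  # ~206265

def mrs_arcsec(baselines_m, wavelength_m, coeff=0.983):
    """Maximum Recoverable Scale a la ALMA Technical Handbook:
    MRS ~ coeff * lambda / L5, with L5 the shortest baseline once
    the shortest 5% of baselines have been discarded."""
    bl = sorted(baselines_m)
    l5 = bl[int(0.05 * len(bl))]  # 5th-percentile baseline
    return coeff * wavelength_m / l5 * ARCSEC_PER_RAD
```

With $\lambda=0.875$ mm and $L_5=545$ m, this gives the 0.33 arcsec quoted above.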
In general, this means that one should use as effective MRS that associated with the extended array. The role of the central array should then be limited to recovering missing flux, in the same way as one would do when using observations from single dish telescopes or from the Atacama Compact Array (ACA).
\begin{figure*}
\centering
\includegraphics[width=6.cm,trim=0.cm 1.cm 0cm 1.cm,clip]{fig6anew-uvgt0-60-120-180.eps}
\includegraphics[width=6.cm,trim=0.cm 1.cm 0cm 1.cm,clip]{fig6bnew-uvall-lt600-800-1000.eps}
\caption{Radial distributions of the intensity integrated over position angle of the isotropic wind model emission (shown as a dotted line) for restricted sets of baselines (BL) as indicated in the inserts.}
\label{fig6}
\end{figure*}
Such a conservative attitude, however, closes the door to any hope for exploring the morpho-kinematics of the CSE up to $R$$\sim$1 arcsec, as was our original intention. A more ambitious attitude is to attempt imaging beyond the conservative MRS of 0.33 arcsec, say up to some 0.6 arcsec from the centre of the star, by correcting for the distorted acceptance. This implies understanding the morphology of the acceptance as a function of both $R$ and $\omega$, no longer simply as a function of $R$ as was done when discussing the radial depression. We expect the image to be governed in this region by the antenna configuration of the central array. Figure \ref{fig7} shows that this is indeed the case by displaying images obtained with selected subsamples of the antennas of the central array. A particularly spectacular illustration is obtained by disregarding the central array altogether. Then short baselines are provided exclusively by the triplet of antennas indicated by a blue arrow in Figure \ref{fig1}. They completely dominate the pattern observed beyond $R$$\sim$0.2 arcsec, which is now a set of fringes oriented at position angle 25$^\circ$\ modulo 180$^\circ$\ and separated by 0.6 arcsec (lower row of Figure \ref{fig7}).
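The 0.6 arcsec fringe spacing implies an effective baseline of roughly $\lambda/\theta\approx300$ m within the triplet (a value inferred here from the spacing, not given in the text); a one-line check:

```python
import math

def fringe_spacing_arcsec(baseline_m, wavelength_m):
    """Fringe spacing lambda/B produced on the sky by a single
    baseline, converted from radians to arcsec."""
    return math.degrees(wavelength_m / baseline_m) * 3600.0

# a ~300 m baseline at 0.875 mm produces ~0.6 arcsec fringes
theta = fringe_spacing_arcsec(300.0, 0.875e-3)
```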
The left and central panels of Figure \ref{fig8} compare the images of the data and of the model in projection on the sky plane, a representation better suited to the study of the $R$$<$0.6 arcsec range than the $\omega$ $vs$ $R$ representation used in Figures \ref{fig4} and \ref{fig5}. The comparison shows strong similarity between the patterns displayed by the two images: qualitatively, it implies that the emission of the W Hya CSE is isotropic up to distances of at least 0.5 arcsec from the centre of the star. However, it would be unrealistic to hope for a reliable evaluation of an upper limit on a possible small anisotropy: the exploration of the morpho-kinematics of the CSE at distances from the star in excess of 0.3-0.4 arcsec cannot be done reliably with the present observations; it requires new observations providing adequate coverage of the $uv$ plane.
The above discussion is of a broader scope than simply applied to the present observations: it shows the importance of ensuring that the detection of back-to-back outflows is not in fact the result of artefacts caused by an improper $uv$ coverage. Such pairs of outflows have been observed earlier in the CSEs of several AGB stars, such as R Dor \citep{Nhung2021}, EP Aqr \citep{Nhung2019}, RS Cnc \citep{Winters2021}, etc. In all these cases a distinctive feature was the observed asymmetry of the Doppler velocities of the members of the pair. This provides indeed a useful criterion against artefacts, but cannot be used if the outflows are in the plane of the sky. The right panel of Figure \ref{fig8} displays the $V_z$ $vs$ $\omega$ map of the emission of the $^{29}$SiO line averaged in the region 0.3$<$$R$$<$0.6 arcsec where the artefacts are apparent. It shows clearly the absence of back-to-back asymmetry in $V_z$. We note the presence of a small blob of enhanced emission centred at $V_z$$\sim$$-$5.5 km s$^{-1}$\ and $\omega$$\sim$290$^\circ$, which we discuss briefly at the end of the next section.
\begin{figure*}
\centering
\includegraphics[width=5.5cm,trim=0.cm 1.6cm 0.2cm 1.9cm,clip]{fig711.eps}
\includegraphics[width=5.cm,trim=-1.cm 1.6cm 0.2cm 1.9cm,clip]{fig712.eps}
\includegraphics[width=5.cm,trim=-1.cm 1.6cm 0.2cm 1.9cm,clip]{fig713.eps}\\
\includegraphics[width=5.5cm,trim=0.cm 1.6cm 0.2cm 1.9cm,clip]{fig721.eps}
\includegraphics[width=5.cm,trim=-1.cm 1.6cm 0.2cm 1.9cm,clip]{fig722.eps}
\includegraphics[width=5.cm,trim=-1.cm 1.6cm 0.2cm 1.9cm,clip]{fig723.eps}\\
\includegraphics[width=5.5cm,trim=0.cm 1.6cm 0.2cm 1.9cm,clip]{fig731.eps}
\includegraphics[width=5.cm,trim=-1.cm 1.6cm 0.2cm 1.9cm,clip]{fig732.eps}
\includegraphics[width=5.cm,trim=-1.cm 1.6cm 0.2cm 1.9cm,clip]{fig733.eps}\\
\includegraphics[width=5.5cm,trim=0.cm 1.6cm 0.2cm 1.9cm,clip]{fig741.eps}
\includegraphics[width=5.cm,trim=-1.cm 1.6cm 0.2cm 1.9cm,clip]{fig742.eps}
\includegraphics[width=5.cm,trim=-1.cm 1.6cm 0.2cm 1.9cm,clip]{fig743.eps}\\
\includegraphics[width=5.5cm,trim=0.cm 1.6cm 0.2cm 1.9cm,clip]{fig751.eps}
\includegraphics[width=5.cm,trim=-1.cm 1.6cm 0.2cm 1.9cm,clip]{fig752.eps}
\includegraphics[width=5.cm,trim=-1.cm 1.6cm 0.2cm 1.9cm,clip]{fig753.eps}\\
\includegraphics[width=5.5cm,trim=0.cm 1.6cm 0.2cm 1.9cm,clip]{fig761.eps}
\includegraphics[width=5.cm,trim=-1.cm 1.6cm 0.2cm 1.9cm,clip]{fig762.eps}
\includegraphics[width=5.cm,trim=-1.cm 1.6cm 0.2cm 1.9cm,clip]{fig763.eps}\\
\includegraphics[width=5.5cm,trim=0.cm 0.5cm 0.2cm 1.9cm,clip]{fig771.eps}
\includegraphics[width=5.cm,trim=-1.cm 0.5cm 0.2cm 1.9cm,clip]{fig772.eps}
\includegraphics[width=5.cm,trim=-1.cm 0.5cm 0.2cm 1.9cm,clip]{fig773.eps}
\caption{Clean maps of the isotropic wind model emission obtained by using the whole extended array but retaining only a subset of antennas in the central array. Intensity maps are shown in the left column in Cartesian coordinates ($y$ vs $x$) and in the middle column in polar coordinates ($\omega$ vs $R$); the radial distributions, integrated over position angle, are shown in the right column. From top to bottom, the subsets are A, B, C, B+C, A+C and A+B, where A, B and C are defined in the central lower panel of Figure \ref{fig1}. The last row ignores all antennas of the central array, the pattern being exclusively defined by the triplet of antennas indicated by a blue arrow in Figure \ref{fig1}.}
\label{fig7}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[height=4.8cm,trim=0.0cm 1.cm 0.cm 0cm,clip]{fig8a-datamodel-compare-2sets-xymap.eps}
\includegraphics[height=4.8cm,trim=0.0cm 1.cm 0.cm 0cm,clip]{fig8b-datamodelrobust-compare-2sets-xymap}
\includegraphics[height=4.8cm,trim=0.0cm 1.cm 0.cm 0cm,clip]{fig8c-data-2sets-vzomega-0.3rxy0.6.eps}
\caption{Left and centre: comparison between the intensity map of the $^{29}$SiO emission averaged between $-$4 and 6 km s$^{-1}$\ (colour scale) and the model (contours). The left panel uses natural weighting and the central panel uses robust weighting. Right: PV map of the $^{29}$SiO emission ($V_z$ $vs$ $\omega$) averaged over the interval 0.3$<$$R$$<$0.6 arcsec.}
\label{fig8}
\end{figure*}
\section{4. W Hya near the stellar disc: a comparison with R Dor}
The arguments developed in the preceding sections have shown that the observed data-cube of $^{29}$SiO(8-7) emission is reliable up to distances of over 15 au (0.15 arcsec) from the star. In this range, W Hya has recently been the object of detailed studies: observations in the visible and near-visible using NACO and SPHERE-ZIMPOL on the VLT and AMBER on the VLTI \citep{Norris2012, Ohnaka2016, Ohnaka2017, Khouri2020, Hadjara2019} have given evidence for a clumpy and dusty layer, displaying important variability at short time scale. Dust grains, mostly aluminium composites, are found to have sizes ranging between 0.1 and 0.5 microns. In the near-IR, using MIDI on the VLTI, \citet{ZhaoGeisler2015} have confirmed this result, as have \citet{Vlemmings2017, Vlemmings2019} using ALMA to observe continuum and CO($\nu$=1, $J$=3-2) emissions close to the star. \citet{Khouri2015} have proposed a simple model that also confirms this result, showing that SiO emits at larger distances. \citet{Khouri2014a,Khouri2014b} have used Herschel in the infrared to measure the $^{12}$C/$^{13}$C ratio as 18$\pm$10, to detect emission from H$_2$O and $^{28}$SiO and to establish that $\sim$1/3 of SiO atoms are locked up in dust. \citet{Takigawa2017}, in their analysis of the present observations, have shown that while the spatial distribution of AlO molecules is confined within $\sim$6 au, $^{29}$SiO molecules extend beyond $\sim$10 au without significant depletion. They argue \citep{Takigawa2019} that transition alumina containing $\sim$10\% of Si is the most plausible source of the dust emission from W Hya and, more generally, from alumina-rich AGB stars. Also using the present observations, \citet{Vlemmings2017} have explored the inner layer of the CSE.
The latter two publications, by \citet{Takigawa2017} and \citet{Vlemmings2017}, using the present observations, have obtained important results concerning the shock-heated atmosphere of the star and the mechanism governing the formation of dust, respectively. This does not leave much room to extract from the present observations further information of relevance. Yet, a few features that have not been mentioned in the published literature deserve being briefly presented and commented upon, which we do in the present section. To do so, we use as a guide ALMA observations of the emission by the CSE of R Dor of the same $^{29}$SiO(8-7) molecular line as observed in W Hya \citep{Nhung2021}. The motivation for doing so is the similarity between the two stars (same mass-loss rate, very similar long periods and spectral types, absence of technetium in their spectra, etc.), which is often used in the published literature as an encouragement for comparing their properties, as is the case in particular in the work of \citet{Vlemmings2017}.
Absorption over and well beyond the stellar disc is illustrated in Figure \ref{fig9} and found to be qualitatively similar to that of R Dor. A narrow absorption peak is seen at $\sim$$-$5.5 km s$^{-1}$, slightly lower than for R Dor, corresponding to absorption in the outer SiO layer, which is expected to extend up to some 200 au from the star \citep{Nhung2021}. The continuum levels measured within 12 au, corresponding to a good coverage of the continuum emission, are in excellent agreement with those measured by \citet{Vlemmings2019}: 420 and 580 mJy for W Hya and R Dor respectively. The larger beam size (when measured in au) of the W Hya data causes significant smearing of the absorption.
Rotation of the R Dor CSE in the vicinity of the star has been studied in detail \citep{Vlemmings2018, Homan2018, Nhung2021} and found to be solid-body-like up to $\sim$6 au from the star, to reach a maximal velocity of $\sim$6 km s$^{-1}$\ at $\sim$8 au, and then to slow down and vanish beyond 15 au. It is illustrated in Figure \ref{fig10}. Instead, the W Hya observations, albeit slightly less sensitive to the presence of a possible rotation because of the larger beam size, show no clear evidence for rotation. From the observed dependence of the mean Doppler velocity on position angle, we infer an upper limit of $\sim$1 km s$^{-1}$\ on a possible rotation velocity averaged between $R$$=$5 au and $R$$=$10 au and divided by the sine of the angle between rotation axis and line of sight. In both cases an offset of $\sim$0.5 km s$^{-1}$\ is observed, probably due, at least in part, to absorption. We note that a tilt of 5$^\circ$\ of the rotation axis with respect to the line of sight is sufficient to prevent detection of a rotation velocity equal to that at stake in R Dor.
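Sine-wave fits of the form $a+b\sin(\omega-\omega_0)$, as quoted in the caption of Figure \ref{fig10}, reduce to linear least squares in the basis $\{1,\sin\omega,\cos\omega\}$; a Python sketch (the test below assumes noiseless, uniformly sampled data):

```python
import numpy as np

def fit_rotation_sine(omega_deg, vz):
    """Fit <Vz>(omega) = a + b*sin(omega - omega0) by linear least
    squares in the basis {1, sin(omega), cos(omega)}."""
    w = np.radians(np.asarray(omega_deg, dtype=float))
    design = np.column_stack([np.ones_like(w), np.sin(w), np.cos(w)])
    a, p, q = np.linalg.lstsq(design, np.asarray(vz, dtype=float),
                              rcond=None)[0]
    # a + p*sin(w) + q*cos(w) == a + b*sin(w - omega0)
    b = float(np.hypot(p, q))
    omega0 = float(np.degrees(np.arctan2(-q, p))) % 360.0
    return float(a), b, omega0
```

The fitted amplitude $b$ is returned non-negative, the sign being absorbed into the phase $\omega_0$.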
The presence of high Doppler velocity wings in the vicinity of the star has been observed in the CSEs of several oxygen-rich AGB stars and is understood as probing their inner layer where shocks from pulsations and convection cell ejections play an important role. Observations of the $^{29}$SiO(8-7) emission from R Dor and W Hya in a ring of radius 5$<$$R$$<$10 au are illustrated in Figure \ref{fig11}. While both stars display high Doppler velocity wings, partly affected by absorption, the effect is less marked in W Hya than in R Dor, with a $\sigma$ of 4.2 km s$^{-1}$\ instead of 5.2 km s$^{-1}$.
Finally, we note (Figure \ref{fig12}) the presence of a blob of emission in the blue hemisphere, some 10 au north of the star. It has not been mentioned earlier \citep{Takigawa2017}. Such a blob was first observed in R Dor by \citet{Decin2018} and later shown by \citet{Nhung2021} to be caused by a stream of gas rather than by a companion as suggested in earlier analyses \citep{Homan2018, Decin2018, Vlemmings2018}. Here, in contrast with R Dor, the distance to the star is independent of Doppler velocity, excluding a sensible interpretation in terms of a gas flow. We show in Figure \ref{fig13} the $\omega$ vs $R$ map integrated over $-$11.7$<$$V_z$$<$$-$4.8 km s$^{-1}$\ and the Doppler velocity spectrum integrated over the blob, significantly notched by absorption. The present data cannot do more than show that the blob covers a well-defined compact region of the data cube. Observations of other molecular line emissions would probably help with understanding its nature.
\begin{figure*}
\centering
\includegraphics[width=4.5cm,trim=0.0cm .5cm 1.cm 0.5cm,clip]{fig11a-spect-r12au.eps}
\includegraphics[width=4.5cm,trim=0.0cm .5cm 1.cm 0.5cm,clip]{fig11b-rdor-pvmap-vz-rxy.eps}
\includegraphics[width=4.5cm,trim=0.0cm .5cm 1.cm 0.5cm,clip]{fig11c-whya-pvmap-vz-rxy.eps}
\caption{Left: Doppler velocity spectra integrated over $R$$<$12 au for R Dor (black) and W Hya (red). The arrows show the continuum levels measured by \citet{Vlemmings2019}. Centre and right: PV maps $V_z$ vs $R$ averaged over position angles, for R Dor (centre) and W Hya (right). The colour scales are in units of Jy beam$^{-1}$. }
\label{fig9}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=4.5cm,trim=0.0cm 1cm 1.5cm 1.5cm,clip]{fig12a-rdor-rot.eps}
\includegraphics[width=4.5cm,trim=0.0cm 1cm 1.5cm 1.5cm,clip]{fig12b-whya-rot1.eps}
\caption{Dependence of $<$$V_z$$>$ on $\omega$ for R Dor (left) and W Hya (right) averaged in rings $R$$=$7.5$\pm$3.0 au and
$R$$=$7.5$\pm$2.5 au, respectively. The sine wave fits (km s$^{-1}$) are $0.5-3.6\sin(\omega-11$$^\circ$) and $0.4+0.5\sin(\omega-48$$^\circ$), respectively.}
\label{fig10}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=4.5cm,trim=0.0cm 1cm 1.5cm 1.5cm,clip]{fig13a-rdor-spect-highv-r.083-.167.eps}
\includegraphics[width=4.5cm,trim=0.0cm 1cm 1.5cm 1.5cm,clip]{fig13b-whya-spect-highv-r.05-.1.eps}
\caption{Doppler velocity spectra (black) integrated in a ring of 5 to 10 au centred on R Dor (left) and W Hya (right). Gaussian profiles, superimposed on a constant flux, are shown in red as references. They have $\sigma$'s of 5.2 (left) and 4.2 (right) km s$^{-1}$.}
\label{fig11}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=14cm,trim=0.0cm .5cm 0.cm .5cm,clip]{fig14-whya1-chmap-blueblob.eps}
\caption{Channel maps of the $^{29}$SiO(8-7) line emission of W Hya in the blue hemisphere showing a blob of enhanced emission. The redmost panel is in the absorption range. Mean Doppler velocities are indicated in each panel. The beam, 52$\times$38 mas$^2$, PA=99$^\circ$\ (robust weighting), is shown in the lower-right corner of the lower-right panel.}
\label{fig12}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[height=5.5cm,trim=0.0cm 1cm 0cm 1.5cm,clip]{fig15a-whya1-blue-blob-omega-rxy.eps}
\includegraphics[height=5.5cm,trim=0.0cm 1cm 1.cm 1.5cm,clip]{fig15b-whya1-blue-blob-spect.eps}
\caption{Left: $\omega$ vs $R$ map of the $^{29}$SiO(8-7) line emission of W Hya integrated over the Doppler velocity interval displayed in Figure \ref{fig12}. Right: Doppler velocity spectrum (black) measured in the rectangle shown in the left panel; the mirror spectrum is shown in red. The beam is the same as in Figure \ref{fig12}.}
\label{fig13}
\end{figure*}
\section{Summary and conclusions}
The present work, initiated with the intention of exploring the morpho-kinematics of the CSE of W Hya up to distances at arcsec scale using ALMA observations of the $^{29}$SiO(8-7) line, has revealed a major shortcoming preventing such an exploration from being reliably performed.
The lack of $uv$ coverage for baselines between 200 m and 400 m has been found to cause
a strong distortion of the radial distribution of the detected flux, with a depression at projected distances centred around 0.45 arcsec ($=\lambda/400$ m). As this can easily be overlooked in other observations using antenna configurations with significant intervals of missing baselines, we devoted the major part of the article to a detailed study of the effect. Antenna configurations combining an extended array with a compact smaller central array are prone to produce baseline distributions made of two families separated by a gap: short baselines not exceeding the diameter of the central array (here $\sim$200 m) and long baselines between either two antennas of the extended array or an antenna of the extended array and one of the central array (here exceeding $\sim$400 m). Beyond the depression, the observed pattern of emission is essentially an artefact reflecting the detailed configuration of antennas in the central array; as such, it is centrally symmetric and takes the form of apparent outflows emitted back-to-back. At variance with real back-to-back outflows, which are emitted at opposite Doppler velocities, these artefacts are independent of the frequency interval being considered, providing a useful discrimination against them. In principle, one should be able to cope with this problem by modelling the morpho-kinematics of the CSE and adjusting the associated parameters to best fit the observed visibilities. In practice, however, the complexity of the physics at stake is such that it would be difficult to obtain a convincingly reliable result: the presence of dust formation, of a complex temperature-dependent gas-dust chemistry and of strong absorption prevents such an enterprise from being credibly successful.
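The scales quoted in this discussion follow from the standard interferometric relation $\theta\approx\lambda/B$ between a baseline length $B$ and the angular scale $\theta$ it samples. The short sketch below is only an illustrative check, not part of the analysis; it assumes a $^{29}$SiO(8-7) rest frequency near 343 GHz (a value not stated in the text) and reproduces both the $\sim$0.45 arcsec depression scale associated with the $\sim$400 m edge of the baseline gap and, with the common $\sim$0.6$\lambda/B_{\rm min}$ rule of thumb, a maximum recoverable scale of about 6 arcsec for a 17 m shortest baseline.

```python
import math

C = 2.998e8                          # speed of light [m/s]
RAD_TO_ARCSEC = 180.0 / math.pi * 3600.0

def angular_scale_arcsec(freq_hz, baseline_m, factor=1.0):
    """Angular scale ~ factor * lambda / B, expressed in arcsec."""
    wavelength = C / freq_hz         # [m]
    return factor * wavelength / baseline_m * RAD_TO_ARCSEC

freq = 342.98e9                      # assumed 29SiO(8-7) rest frequency [Hz]

# Depression scale set by the shortest missing baselines (~400 m):
theta_gap = angular_scale_arcsec(freq, 400.0)

# Maximum recoverable scale for the shortest baseline (~17 m),
# using the ~0.6*lambda/B_min rule of thumb:
theta_mrs = angular_scale_arcsec(freq, 17.0, factor=0.6)

print(f"depression scale ~ {theta_gap:.2f} arcsec, MRS ~ {theta_mrs:.1f} arcsec")
```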
Three earlier publications have used the same ALMA observations as the present article without making explicit reference to the impact of missing baselines. The work of \citet{Vlemmings2017} focuses on the inner part of the CSE, within less than 6 au from the centre of the star, in a region where imaging is perfectly reliable and not at all affected by the missing baselines. Their important conclusions concerning the shock-heated inner atmosphere are therefore fully valid. The work of \citet{Takigawa2017} uses analyses of both AlO and SiO emissions. While the former is confined to the close neighbourhood of the star and is therefore unaffected by the missing baselines, the latter extends further out. Indeed, their Figure 2b clearly shows the presence of the depression. However, the main conclusion of their work rests on the remark that SiO emission extends much further out than AlO emission. This conclusion not only remains valid but is even strengthened when correcting for the missing flux associated with the missing baselines: their impact is therefore expected to be minimal. Finally, the work of \citet{Danilovich2019}, which studies the abundances of the sulfur-bearing molecules CS and SiS, extends well beyond the region where imaging is reliable. It is therefore affected by the depression around 0.45 arcsec, although these authors state in their article that ``W Hya was observed on baselines from 17 m to 11 km, giving sensitivity to angular scales up to about 6 arcsec [...] The full width half maximum of the ALMA primary beam is about 15 arcsec in this frequency range and all of the results presented here are from the inner few arcsec where the reduction in sensitivity is negligible.'' It is not up to us to evaluate precisely the impact of the missing baselines on their result. We remark, however, that it is likely not to be very large, their result being affected only by the distortion of the radial distribution.
The impact of the missing baselines would be considerably more severe on studies that rely on the detailed morphology of the image rather than simply on the radial distribution of the intensity: such studies could be led to claim the presence of back-to-back outflows that are in fact pure artefacts.
Finally, while we were unable to explore reliably the morpho-kinematics of the CSE up to arcsec-scale distances, we have mentioned a few features of lesser importance that were not commented upon in the published literature but are significant enough to deserve a brief presentation. They are presented in the form of a comparison with similar features observed in R Dor and include comments on rotation, absorption and broad line widths. Evidence has been given for the presence of a blob of enhanced emission in the blue-shifted hemisphere, some 10 au north of the star, the present data being, however, insufficient to propose a reliable interpretation.
\section*{Acknowledgements}
We thank Dr. St\'{e}phane Guilloteau for useful discussions and Dr. Aki Takigawa for clarifications on the MRS used in their article. This paper uses ALMA data ADS/JAO.ALMA\#2015.1.01446.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. The data are retrieved from the JVO/NAOJ portal. We are deeply indebted to the ALMA partnership, whose open access policy means invaluable support and encouragement for Vietnamese astrophysics. Financial support from the World Laboratory, the Odon Vallet Foundation and VNSC is gratefully acknowledged. This research is funded by the Vietnam National Foundation for Science and Technology Development (NAFOSTED) under grant number 103.99-2019.368.
\section{Introduction and main results}
Let $0<\alpha<n$, $m\in \mathbb{Z}^+$ and $b\in L_{loc}^{1}(\mathbb{R}^n)$. The fractional integral operator $I_\alpha$ and its higher order commutator $I_{\alpha}^{b,m}$ are defined by
\begin{align*}
I_{\alpha}f(x)=\int_{\mathbb{R}^n}\frac{f(y)}{|x-y|^{n-\alpha}}dy,\quad
I_{\alpha}^{b,m}f(x)=\int_{\mathbb{R}^n}(b(x)-b(y))^m\frac{f(y)}{|x-y|^{n-\alpha}}dy.
\end{align*}
In this paper, we consider two weight estimates for $I_{\alpha}^{b,m}$
\begin{align*}
\Big(\int_{\mathbb{R}^n}|I_{\alpha}^{b,m}f(x)|^q\mu(x)dx\Big)^{1/q}\leq C\Big(\int_{\mathbb{R}^n}|f(x)|^p\nu(x)dx\Big)^{1/p},
\end{align*}
where $(\mu,\nu)$ is a pair of weights. Before stating our results, we recall some background.
Let $1<p<n/\alpha$ and $1/p-1/q=\alpha/n$. It is well known that $I_\alpha$ is bounded from $L^p(\mathbb{R}^n)$ to $L^q(\mathbb{R}^n)$. Given a function $b\in L_{loc}^{1}(\mathbb{R}^n)$, we say that $b\in BMO(\mathbb{R}^n)$ if
\begin{align*}
\|b\|_{BMO(\mathbb{R}^n)}=\sup_Q\frac{1}{|Q|}\int_Q|b(x)-b_Q|dx<\infty,
\end{align*}
where $b_Q=|Q|^{-1}\int_Qb(x)dx$. In 1982, Chanillo \cite{Chan} proved that if $1<p<n/\alpha$, $1/p-1/q=\alpha/n$ and $b\in BMO(\mathbb{R}^n)$, then $I_{\alpha}^{b,1}$ is bounded from $L^p(\mathbb{R}^n)$ to $L^q(\mathbb{R}^n)$. By a weight $\omega$, we mean a nonnegative locally integrable function on $\mathbb{R}^n$. We say that $\omega\in A_{p,q}$ if
\begin{align*}
[\omega]_{A_{p,q}}=\sup_{Q}\Big(\frac{1}{|Q|}\int_Q\omega(x)^qdx\Big)\Big(\frac{1}{|Q|}\int_Q
\omega(x)^{-p'}dx\Big)^{q/p'}<\infty,~1<p<q<\infty.
\end{align*}
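It may help to note (a standard reformulation, recalled here only for orientation) that the $A_{p,q}$ condition is a Muckenhoupt condition in disguise: writing $[\cdot]_{A_r}$ for the classical Muckenhoupt $A_r$ characteristic, a direct comparison of exponents in the display above gives
\begin{align*}
[\omega]_{A_{p,q}}=[\omega^q]_{A_r}\quad\text{with}\quad r=1+\frac{q}{p'},
\end{align*}
since $r-1=q/p'$ and $q(1-r')=-p'$.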
Muckenhoupt and Wheeden \cite{MuW} proved that $I_\alpha$ is bounded from $L^p(\omega^p)$ to $L^q(\omega^q)$, where $0<\alpha<n$, $1<p<n/\alpha$, $1/p-1/q=\alpha/n$ and $\omega\in A_{p,q}$. Under the same conditions as \cite{MuW} with $b\in BMO(\mathbb{R}^n)$, Segovia and Torrea \cite{ST} obtained the weighted $L^p\rightarrow L^q$ boundedness for commutators of fractional integral operators.
Though two weight inequalities for singular integral operators and related operators formally generalize the corresponding one weight inequalities, two weight estimates are considerably more difficult. For instance, it is well known that the $A_p$ condition
\begin{align*}
\sup_Q\Big(\frac{1}{|Q|}\int_Q\omega(x)dx\Big)\Big(\frac{1}{|Q|}\int_Q\omega(x)^{1-p'}dx\Big)^{p-1}
<\infty
\end{align*}
is the sufficient condition for singular integral operators and related operators to be bounded on $L^p(\omega)$. However, in general, the $A_p$ condition for a pair of weights $(\mu,\nu)$
\begin{align}\label{two weight Ap}
\sup_Q\Big(\frac{1}{|Q|}\int_Q\mu(x)dx\Big)\Big(\frac{1}{|Q|}\int_Q\nu(x)^{1-p'}dx\Big)^{p-1}
<\infty
\end{align}
is necessary but never sufficient for operators to be bounded from $L^p(\nu)$ to $L^p(\mu)$, see \cite{CruMP}. To solve this problem, Sawyer \cite{Sawyer} first introduced the following test condition: there is a positive constant $C$ such that for any cube $Q$,
\begin{align*}
\int_QM(\nu^{1-p'}\chi_Q)(x)^p\mu(x)dx\leq C\int_Q\nu(x)^{1-p'}dx<\infty,
\end{align*}
where $M$ is the Hardy-Littlewood maximal operator, and proved that this test condition is necessary and sufficient for $M$ to be bounded from $L^p(\nu)$ to $L^p(\mu)$. However, this condition is very difficult to verify because the operator $M$ itself is involved in it. This drawback has led researchers to search for simpler sufficient conditions that are close to \eqref{two weight Ap} in some sense. Neugebauer \cite{Neu} first proved that for some $r>1$, if a pair of weights
$(\mu,\nu)$ satisfies the following power bump condition:
\begin{align*}
\sup_Q\Big(\frac{1}{|Q|}\int_Q\mu(x)^rdx\Big)^{1/r}\Big(\frac{1}{|Q|}\int_Q\nu(x)^{r(1-p')}dx\Big)
^{(p-1)/r}<\infty,
\end{align*}
then
\begin{align*}
\int_{\mathbb{R}^n}(Mf(x))^p\mu(x)dx\leq C\int_{\mathbb{R}^n}|f(x)|^p\nu(x)dx.
\end{align*}
To put subsequent work on bump conditions sufficient for two weight inequalities of singular integral operators and related operators into context, we recall some facts about Orlicz spaces. We say that $A:[0,\infty)\rightarrow[0,\infty)$ is a Young function if it is increasing, convex, $A(0)=0$ and $A(t)/t\rightarrow\infty$ as $t\rightarrow\infty$. Given a Young function $A$, the associated complementary function $\bar{A}$ is defined by
\begin{align*}
\bar{A}(t)=\sup_{s>0}\{st-A(s)\}.
\end{align*}
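As a basic example (a standard computation, included for illustration), take $A(t)=t^p/p$ with $1<p<\infty$. The supremum defining $\bar{A}(t)$ is attained at $s=t^{1/(p-1)}$, so that
\begin{align*}
\bar{A}(t)=t\cdot t^{1/(p-1)}-\frac{t^{p/(p-1)}}{p}=\Big(1-\frac{1}{p}\Big)t^{p'}=\frac{t^{p'}}{p'},
\end{align*}
recovering the classical H\"{o}lder conjugate pair.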
Let $1<p<\infty$ and let $A$ be a Young function. We say that $A\in B_p$ if
\begin{align*}
\int_{1}^{\infty}\frac{A(t)}{t^p}\frac{dt}{t}<\infty.
\end{align*}
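For orientation we mention two simple examples (not taken from the references): $A(t)=t^r$ belongs to $B_p$ if and only if $r<p$, since $\int_1^\infty t^{r-p-1}dt<\infty$ exactly when $r<p$; and $A(t)=t^p[\log(e+t)]^{-1-\delta}$ with $\delta>0$ also belongs to $B_p$, because
\begin{align*}
\int_1^\infty\frac{A(t)}{t^p}\frac{dt}{t}=\int_1^\infty\frac{dt}{t[\log(e+t)]^{1+\delta}}<\infty,
\end{align*}
so $B_p$ contains functions growing faster than every power $t^r$ with $r<p$.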
Given a Young function $A$, the Orlicz average on a cube $Q$ of a function $f$ is defined by
\begin{align*}
\|f\|_{A,Q}=\inf\Big\{\lambda>0:\frac{1}{|Q|}\int_QA\Big(\frac{|f(x)|}{\lambda}\Big)dx\leq1\Big\}.
\end{align*}
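The key property of these averages, used repeatedly below, is the generalized H\"{o}lder inequality (standard in the Orlicz space literature):
\begin{align*}
\frac{1}{|Q|}\int_Q|f(x)g(x)|dx\leq2\|f\|_{A,Q}\|g\|_{\bar{A},Q},
\end{align*}
which reduces to the usual H\"{o}lder inequality on $Q$ when $A(t)=t^p$.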
In 1995, P\'{e}rez \cite{Perez3} improved Neugebauer's result by eliminating the power bump on the left-hand weight $\mu$ and replacing the power bump on the right-hand weight $\nu$ by an ``Orlicz bump''. Precisely, he proved that if a pair of weights $(\mu,\nu)$ satisfies
\begin{align*}
\sup_Q\|\mu^{1/p}\|_{p,Q}\|\nu^{-1/p}\|_{\Phi,Q}<\infty,~1<p<\infty,
\end{align*}
and $\bar{\Phi}\in B_p$, then $M:L^p(\nu)\rightarrow L^p(\mu)$. For Calder\'{o}n-Zygmund operators $T$, Cruz-Uribe and P\'{e}rez \cite{CruPe} conjectured that if both terms in \eqref{two weight Ap} were bumped, then $T:L^p(\nu)\rightarrow L^p(\mu)$. This conjecture was partially solved in \cite{NaReTV} and completely solved by Lerner \cite{Lerner}, who proved that if a pair of weights $(\mu,\nu)$ satisfies
\begin{align}\label{bump conjcture}
\sup_Q\|\mu^{1/p}\|_{\Psi,Q}\|\nu^{-1/p}\|_{\Phi,Q}<\infty,~1<p<\infty,
\end{align}
and $\bar{\Phi}\in B_p, \bar{\Psi}\in B_{p'}$, then $T:L^p(\nu)\rightarrow L^p(\mu)$. The separated bump conjecture arises from the work of Cruz-Uribe et al. \cite{CruPe0}, who asserted that $T:L^p(\nu)\rightarrow L^p(\mu)$ provided that \eqref{bump conjcture} is replaced by
\begin{align*}
\sup_Q\|\mu^{1/p}\|_{p,Q}\|\nu^{-1/p}\|_{\Phi,Q}<\infty\quad\text{and}\quad \sup_Q\|\mu^{1/p}\|_{\Psi,Q}\|\nu^{-1/p}\|_{p',Q}<\infty.
\end{align*}
In \cite{CruRV}, Cruz-Uribe et al. proved this conjecture only for $\Phi(t)=t^{p'}[\log(e+t)]^{p'-1+\delta}$ and $\Psi(t)=t^p[\log(e+t)]^{p-1+\delta}$ for some $\delta>0$. The conjecture is still open, and we refer readers to \cite{Lacey,LerOR,Li} for more recent work on it. Analogously to the case of singular integral operators, P\'{e}rez \cite{Perez2} gave the following sufficient condition:
\begin{align*}
\sup_Q|Q|^{\alpha/n+1/q-1/p}\|\mu^{1/q}\|_{A,Q}\|\nu^{-1/p}\|_{B,Q}<\infty,~\bar{A}\in B_{q'}, \bar{B}\in B_{p},
\end{align*}
such that $I_\alpha:L^p(\nu)\rightarrow L^q(\mu)$. The conditions $\bar{A}\in B_{q'}, \bar{B}\in B_{p}$ were improved to $\bar{A}\in B_{q',p'}, \bar{B}\in B_{p,q}$ in \cite{CruzM}. Here, we say that $A\in B_{p,q}$ if
$$\int_{1}^{\infty}\frac{A(t)^{q/p}}{t^q}\frac{dt}{t}<\infty.$$
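We remark (an immediate observation) that when $p=q$ the exponent $q/p$ equals $1$, so that
\begin{align*}
\int_1^\infty\frac{A(t)^{q/p}}{t^q}\frac{dt}{t}=\int_1^\infty\frac{A(t)}{t^p}\frac{dt}{t},
\end{align*}
and the class $B_{p,p}$ coincides with $B_p$.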
Recently, Rahm \cite{R} used ``entropy bumps'' and ``direct comparison bumps'' to obtain two weight boundedness for fractional sparse operators.
On the other hand, Cruz-Uribe and Moen \cite{CruzM0} showed that if $b\in BMO(\mathbb{R}^n)$ and a pair of weights $(\mu,\nu)$ satisfies
\begin{align*}
\sup_Q\|\mu^{1/p}\|_{L^p(\log L)^{2p-1+\delta},Q}\|\nu^{-1/p}\|_{L^{p'}(\log L)^{2p'-1+\delta},Q}<\infty,
\end{align*}
then the commutator of Calder\'{o}n-Zygmund operator $T_b$ is bounded from $L^p(\nu)$ to $L^p(\mu)$. This result was recently improved by Lerner et al. \cite{LerOR}, who provided a wider class of weights $(\mu,\nu)$:
\begin{align*}
\sup_Q\|\mu^{1/p}\|_{L^p(\log L)^{(m+1)p-1+\delta},Q}\|\nu^{-1/p}\|_{B,Q}+\sup_Q
\|\mu^{1/p}\|_{A,Q}\|\nu^{-1/p}\|_{L^{p'}(\log L)^{(m+1)p'-1+\delta},Q}<\infty,
\end{align*}
for which $\|T_b^m\|_{L^p(\nu)\rightarrow L^p(\mu)}<\infty$, where $b\in BMO(\mathbb{R}^n)$ and $\bar{A}\in B_{p'},\bar{B}\in B_p$. Very recently, Cruz-Uribe et al. \cite{CruMT} generalized the work in \cite{LerOR} by assuming the Young functions $\bar{A},\bar{C}\in B_{p'}$, $\bar{B},\bar{D}\in B_p$ and $(\mu,\nu)$ satisfies
\begin{align*}
\sup_Q\|\mu^{1/p}\|_{A,Q}\|(b-b_Q)^m\nu^{-1/p}\|_{B,Q}+
\sup_Q\|(b-b_Q)^m\mu^{1/p}\|_{C,Q}\|\nu^{-1/p}\|_{D,Q}<\infty.
\end{align*}
We also refer readers to \cite{IsPT} for the result in the matrix setting. For the commutators of fractional integral operators, Cruz-Uribe \cite{Cru} showed that if a pair of weights $(\mu,\nu)$ satisfies
\begin{align}\label{improve this condition}
&\sup_Q|Q|^{\alpha/n+1/q-1/p}\|\mu^{1/q}\|_{A,Q}\|\nu^{-1/p}\|_{B,Q}<\infty,
\end{align}
with $A(t)=t^q(\log(e+t))^{2q-1+\delta}$, $B(t)=t^{p'}(\log(e+t))^{2p'-1+\delta}$, then $I_\alpha^{b,1}$ is bounded from $L^p(\nu)$ to $L^q(\mu)$. Recently, Cardenas and Isralowitz \cite{CaIs} established a two weight inequality for $I_{\alpha}^{b,1}$ in the matrix setting.
Inspired by the works in \cite{CaIs,CruMT,LerOR}, in this paper, we mainly consider two weight inequalities for $I_{\alpha}^{b,m}$. Our first main result can be formulated as follows.
\begin{theorem}\label{theorem1.1}
Let $1<p\leq q<\infty$, $0<\alpha<n$, $m\in\mathbb{Z}^+$, $b\in L_{loc}^{m}(\mathbb{R}^n)$ and $\mathcal{S}$ be a sparse family.
\begin{itemize}
\item [(1)]Suppose that $A,B,C,D$ are Young functions which satisfy $\bar{A},\bar{C}\in B_{q'}$ and $\bar{B},\bar{D}\in B_{p,q}$. If a pair of weights $(\mu,\nu)$ satisfies
\begin{align*}
&\sup_{Q\in\mathcal{S}}|Q|^{\alpha/n+1/q-1/p}\|\mu^{1/q}
\|_{A,Q}\|(b-b_Q)^m\nu^{-1/p}\|_{B,Q}\\
&\qquad+\sup_{Q\in\mathcal{S}}|Q|^{\alpha/n+1/q-1/p}
\|(b-b_Q)^m\mu^{1/q}\|_{C,Q}\|\nu^{-1/p}\|_{D,Q}<\infty,
\end{align*}
then
\begin{align}\label{new}
\|T_{\mathcal{S},\alpha}^{b,m}f\|_{L^q(\mu)}
+\|(T_{\mathcal{S},\alpha}^{b,m})^{\ast}f\|_{L^q(\mu)}\lesssim \|f\|_{L^p(\nu)}.
\end{align}
\item [(2)]Conversely, if \eqref{new} holds, then
\begin{align*}
&\sup_{Q\in\mathcal{S}}|Q|^{{\alpha}/{n}+{1}/{q}-{1}/{p}}
\|\mu^{1/q}\|_{q,Q}\|(b-b_Q)^m\nu^{-1/p}\|_{p',Q}\\
&\qquad+\sup_{Q\in\mathcal{S}}|Q|^{{\alpha}/{n}+{1}/{q}-{1}/{p}}
\|(b-b_Q)^m\mu^{1/q}\|_{q,Q}\|\nu^{-1/p}\|_{p',Q}<\infty.
\end{align*}
\end{itemize}
Here
\begin{align*}
T_{\mathcal{S},\alpha}^{b,m}f(x)=\sum_{Q\in\mathcal{S}}|Q|^{\alpha/n}\Big(\frac{1}{|Q|}\int_Q
|b(y)-b_Q|^m|f(y)|dy\Big)\chi_Q(x),
\end{align*}
and
\begin{align*}
(T_{\mathcal{S},\alpha}^{b,m})^{\ast}f(x)=\sum_{Q\in\mathcal{S}}|Q|^{\alpha/n}|b(x)-b_Q|^m
\Big(\frac{1}{|Q|}\int_Q|f(y)|dy\Big)\chi_Q(x).
\end{align*}
\end{theorem}
As an application, we obtain the following two weight bump conditions for the iterated commutators $I_\alpha^{b,m}$.
\begin{theorem}\label{theorem1.2}
Let $1<p\leq q<\infty$, $0<\alpha<n$, $m\in\mathbb{Z}^+$, $b\in L_{loc}^{m}(\mathbb{R}^n)$ and let $I_\alpha^{b,m}$ be the $m$-th order commutator of the fractional integral operator. Suppose that $A,B,C,D$ are Young functions which satisfy $\bar{A},\bar{C}\in B_{q'}$ and $\bar{B},\bar{D}\in B_{p,q}$. If a pair of weights $(\mu,\nu)$ satisfies
\begin{align*}
&\sup_{Q}|Q|^{\alpha/n+1/q-1/p}\|\mu^{1/q}
\|_{A,Q}\|(b-b_Q)^m\nu^{-1/p}\|_{B,Q}\\
&\qquad+\sup_{Q}|Q|^{\alpha/n+1/q-1/p}
\|(b-b_Q)^m\mu^{1/q}\|_{C,Q}\|\nu^{-1/p}\|_{D,Q}<\infty,
\end{align*}
then $\|I_{\alpha}^{b,m}f\|_{L^q(\mu)}\lesssim\|f\|_{L^p(\nu)}$.
\end{theorem}
Furthermore, as a consequence of Theorem \ref{theorem1.2}, we can obtain the more traditional bump conditions by assuming that the multiplier $b$ lies in an oscillation class related to $BMO(\mathbb{R}^n)$.
\begin{theorem}\label{theorem1.3}
Let $1<p\leq q<\infty$, $0<\alpha<n$, $m\in\mathbb{Z}^+$ and $I_\alpha^{b,m}$ be commutators of fractional integral operators. Assume that $A,B,C,D,X,Y$ are Young functions which satisfy $\bar{A},\bar{C}\in B_{q'}$, $\bar{B},\bar{D}\in B_{p,q}$ and $X,Y$ satisfy
\begin{align*}
X^{-1}(t)\lesssim\frac{B^{-1}(t)}{\Phi^{-1}(t)^m}\quad\text{and}\quad Y^{-1}(t)\lesssim
\frac{C^{-1}(t)}{\Phi^{-1}(t)^m}
\end{align*}
for large $t$. If $b\in Osc(\Phi)$ and a pair of weights $(\mu,\nu)$ satisfies
\begin{align*}
&\sup_{Q}|Q|^{\alpha/n+1/q-1/p}\|\mu^{1/q}
\|_{A,Q}\|\nu^{-1/p}\|_{X,Q}\\
&\qquad+\sup_{Q}|Q|^{\alpha/n+1/q-1/p}
\|\mu^{1/q}\|_{Y,Q}\|\nu^{-1/p}\|_{D,Q}<\infty,
\end{align*}
then $\|I_{\alpha}^{b,m}f\|_{L^q(\mu)}\lesssim\|b\|_{Osc(\Phi)}^m\|f\|_{L^p(\nu)}$, where $Osc(\Phi)$ is the space of functions $b\in L_{loc}^{1}(\mathbb{R}^n)$ with
$$\|b\|_{Osc(\Phi)}=\sup_Q\|b-b_Q\|_{\Phi,Q}<\infty.$$
\end{theorem}
When $b\in BMO(\mathbb{R}^n)$, we may take $\Phi(t)=\exp t-1$ in Theorem \ref{theorem1.3}. Then we have the following result.
\begin{corollary}\label{corollary1.4}
Let $1<p\leq q<\infty$, $0<\alpha<n$, $m\in\mathbb{Z}^+$ and let $I_\alpha^{b,m}$ be the $m$-th order commutator of the fractional integral operator. Suppose that $A,D$ are Young functions which satisfy $\bar{A}\in B_{q'}$ and $\bar{D}\in B_{p,q}$. If $b\in BMO(\mathbb{R}^n)$ and a pair of weights $(\mu,\nu)$ satisfies
\begin{align*}
&\sup_{Q}|Q|^{\alpha/n+1/q-1/p}\|\mu^{1/q}
\|_{A,Q}\|\nu^{-1/p}\|_{L^{p'}(\log L)^{(m+1)p'-1+\delta},Q}\\
&\qquad+\sup_{Q}|Q|^{\alpha/n+1/q-1/p}
\|\mu^{1/q}\|_{L^{q}(\log L)^{(m+1)q-1+\delta},Q}\|\nu^{-1/p}\|_{D,Q}<\infty,
\end{align*}
then $\|I_{\alpha}^{b,m}f\|_{L^q(\mu)}\lesssim\|b\|_{BMO(\mathbb{R}^n)}^m\|f\|_{L^p(\nu)}$.
\end{corollary}
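To indicate how the exponents in Corollary \ref{corollary1.4} arise from Theorem \ref{theorem1.3} (a sketch of the bookkeeping): with $\Phi(t)=\exp t-1$ one has $\Phi^{-1}(t)=\log(1+t)$, and taking $B(t)=t^{p'}[\log(e+t)]^{p'-1+\delta}$ gives $B^{-1}(t)\sim t^{1/p'}[\log(e+t)]^{-(p'-1+\delta)/p'}$ for large $t$, so that it suffices to choose $X$ with
\begin{align*}
X^{-1}(t)\sim\frac{B^{-1}(t)}{\Phi^{-1}(t)^m}\sim t^{1/p'}[\log(e+t)]^{-m-(p'-1+\delta)/p'},
\end{align*}
that is, $X(t)\sim t^{p'}[\log(e+t)]^{mp'+p'-1+\delta}=t^{p'}[\log(e+t)]^{(m+1)p'-1+\delta}$; the same computation with $q$ in place of $p'$ produces the bump on $\mu$.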
In particular, if we take $A(t)=t^q[\log(e+t)]^{q-1+\delta}$ and $D(t)=t^{p'}[\log(e+t)]^{p'-1+\delta}$, then we obtain the
following two-weight bump conditions for $I_\alpha^{b,m}$, which are more general than \eqref{improve this condition}.
\begin{corollary}\label{corollary1.5}
Let $1<p\leq q<\infty$, $0<\alpha<n$, $m\in\mathbb{Z}^+$ and $I_\alpha^{b,m}$ be commutators of fractional integral operators. If $b\in BMO(\mathbb{R}^n)$ and a pair of weights $(\mu,\nu)$ satisfies
\begin{align}\label{weaker}
&\sup_{Q}|Q|^{\alpha/n+1/q-1/p}\|\mu^{1/q}
\|_{L^{q}(\log L)^{q-1+\delta},Q}\|\nu^{-1/p}\|_{L^{p'}(\log L)^{(m+1)p'-1+\delta},Q}\\
&\qquad+\sup_{Q}|Q|^{\alpha/n+1/q-1/p}
\|\mu^{1/q}\|_{L^{q}(\log L)^{(m+1)q-1+\delta},Q}\|\nu^{-1/p}\|_{L^{p'}(\log L)^{p'-1+\delta},Q}<\infty\nonumber
\end{align}
for some $\delta>0$, then $\|I_{\alpha}^{b,m}f\|_{L^q(\mu)}\lesssim\|b\|_{BMO(\mathbb{R}^n)}^m\|f\|_{L^p(\nu)}$.
\end{corollary}
\begin{remark}It is clear that the bump condition in \eqref{weaker} for $m=1$ is more general than the one in \eqref{improve this condition}. Therefore, Corollary \ref{corollary1.5} is an essential improvement and extension of the corresponding result in \cite{Cru}.
\end{remark}
Next, we turn to the necessity of bump conditions for the two-weight boundedness of $I_{\alpha}^{b,m}$, which is addressed by the following theorem.
\begin{theorem}\label{theorem1.7}
Let $1<p\leq q<\infty$, $0<\alpha<n$, $m\in\mathbb{Z}^+$ and $I_\alpha^{b,m}$ be commutators of fractional integral operators. Suppose that $\mu$ is a doubling weight and for any $b\in BMO(\mathbb{R}^n)$,
$$\|I_{\alpha}^{b,m}f\|_{L^{q,\infty}(\mu)}\lesssim\|b\|_{BMO(\mathbb{R}^n)}^m
\|f\|_{L^p(\nu)}.$$
Then
\begin{align*}
\sup_Q|Q|^{\frac{\alpha}{n}+\frac{1}{q}-\frac{1}{p}}\|\mu^{1/q}\|_{q,Q}
\|\nu^{-1/p}\|_{L^{p'}(\log L)^{mp'},Q}<\infty.
\end{align*}
\end{theorem}
Finally, we consider a converse result related to the Bloom-type estimate for $I_\alpha^{b,m}$. We first recall the relevant definition and background.
Let $\eta$ be a weight. We say that $b\in BMO_\eta$ if
$$\|b\|_{BMO_\eta}:=\sup_Q\frac{1}{\eta(Q)}\int_Q|b(x)-b_Q|dx<\infty.$$
Bloom \cite{Bl0} first characterized $BMO_\eta$ via two weight estimates for the commutator of the Hilbert transform $H$. For the commutator of the fractional integral operator, Accomazzo et al. \cite{AMR} proved that if $\lambda,\mu\in A_{p,q}$ and
$\eta=\big({\mu}{\lambda^{-1}}\big)^{1/m}$, then
$$b\in BMO_\eta\Rightarrow\|I_{\alpha}^{b,m}f\|_{L^q(\lambda^q)}\lesssim\|b\|_{BMO_\eta}^m
\|f\|_{L^p(\mu^p)}$$
and
$$\|I_{\alpha}^{b,m}f\|_{L^q(\lambda^q)}\lesssim\|f\|_{L^p(\mu^p)}\Rightarrow b\in BMO_\eta.$$
The corresponding result for $m=1$ was obtained by Holmes et al. in \cite{HRS}. Our next theorem can be regarded as the converse of the above Bloom type estimate for $I_\alpha^{b,m}$.
\begin{theorem}\label{theorem1.8}
Let $0<\alpha<n$, $1<p<n/\alpha$, $1/p-1/q=\alpha/n$, $m\in\mathbb{Z}^+$, $\lambda,\mu\in A_{p,q}$ and let $I_\alpha^{b,m}$ be the $m$-th order commutator of the fractional integral operator. If $\eta$ is an arbitrary weight which satisfies
\begin{align}\label{boundedness}
b\in BMO_\eta\Rightarrow\|I_{\alpha}^{b,m}f\|_{L^q(\lambda^q)}\lesssim\|b\|_{BMO_\eta}^m
\|f\|_{L^p(\mu^p)}
\end{align}
and
\begin{align}\label{necessity}
\|I_{\alpha}^{b,m}f\|_{L^q(\lambda^q)}\lesssim\|f\|_{L^p(\mu^p)}\Rightarrow b\in BMO_\eta,
\end{align}
then $\eta\sim\big({\mu}{\lambda^{-1}}\big)^{1/m}$ almost everywhere.
\end{theorem}
We organize the rest of the paper as follows. Section 2 is devoted to the proofs of Theorems \ref{theorem1.1}-\ref{theorem1.3} and Corollaries \ref{corollary1.4} and \ref{corollary1.5}. Theorem \ref{theorem1.7} is proved in Section 3, and Theorem \ref{theorem1.8} in Section 4.
We end this section by fixing some notation. We write $f\lesssim g$ if $f\leq Cg$, and $f\thicksim g$ if $f\lesssim g\lesssim f$. For any ball $B:=B(x_0,r)\subset \mathbb{R}^n$, $x_0$ and $r$ denote the center and the radius of $B$, respectively; $f_B$ denotes the mean value of $f$ over $B$, and $\chi_B$ represents the characteristic function of $B$. For any cube $Q\subset\mathbb{R}^n$, the diameter of $Q$ is denoted by ${\rm diam}\,Q$. $C_{c}^{\infty}(\mathbb{R}^n)$ is the space of all smooth functions with compact support.
\section{Two-weight boundedness for $I_\alpha^{b,m}$}
In this section, we prove Theorems \ref{theorem1.1}-\ref{theorem1.3} and Corollaries \ref{corollary1.4} and \ref{corollary1.5}. We begin by recalling some notation, definitions and facts related to sparse families (see \cite{LerNa,Perey} for more details).
Given a cube $Q\subset\mathbb{R}^n$, let $\mathcal{D}(Q)$ be the set of cubes obtained by repeatedly subdividing $Q$ and its descendants into $2^n$ congruent subcubes.
\begin{definition}
A collection of cubes $\mathcal{D}$ is called a dyadic lattice if it satisfies the following properties:\\
$(1)$ if $Q\in\mathcal{D}$, then every child of $Q$ is also in $\mathcal{D}$;\\
$(2)$ for every two cubes $Q_1, Q_2\in\mathcal{D}$, there is a common ancestor $Q\in\mathcal{D}$ such that $Q_1, Q_2\in\mathcal{D}(Q)$;\\
$(3)$ for any compact set $K\subset\mathbb{R}^n$, there is a cube $Q\in\mathcal{D}$ such that $K\subset Q$.
\end{definition}
\begin{definition}
A subset $\mathcal{S}\subset\mathcal{D}$ is called an $\eta$-sparse family with $\eta\in(0,1)$ if for every cube $Q\in\mathcal{S}$, there is a measurable subset $E_Q\subset Q$ such that $\eta|Q|\leq|E_Q|$, and the sets $\{E_Q\}_{Q\in\mathcal{S}}$ are mutually disjoint.
\end{definition}
In \cite{AMR}, Accomazzo et al. proved the following sparse dominations for commutators of fractional integral operators.
\begin{lemma}{\rm(cf. \cite{AMR})}\label{sparse domination}
Let $0<\alpha<n$ and $m\in\mathbb{Z}^+$. For every $f\in C_c^\infty(\mathbb{R}^n)$ and $b\in L_{loc}^{m}(\mathbb{R}^n)$, there exist a family $\{\mathcal{D}_j\}_{j=1}^{3^n}$ of dyadic lattices and a family $\{\mathcal{S}_j\}_{j=1}^{3^n}$ of sparse families such that $\mathcal{S}_j\subset\mathcal{D}_j$, for each $j$, and
$$|I_{\alpha}^{b,m}f(x)|\lesssim\sum_{j=1}^{3^n}\sum_{Q\in\mathcal{S}_j}\sum_{k=0}^{m}
|b(x)-b_Q|^{m-k}|Q|^{\alpha/n}\Big(\frac{1}{|Q|}\int_Q|b(y)-b_Q|^k|f(y)|dy\Big)\chi_Q(x).$$
\end{lemma}
Based on Lemma \ref{sparse domination}, we can prove the following lemma.
\begin{lemma}\label{lemma2.4}
Let $0<\alpha<n$, $m\in\mathbb{Z}^+$, $b\in L_{loc}^{m}(\mathbb{R}^n)$ and $I_\alpha^{b,m}$ be commutators of fractional integral operators. Then for $f\in C_c^\infty(\mathbb{R}^n)$, there exist $3^n$ sparse families $\mathcal{S}_j\subset\mathcal{D}_j$, $j=1,\cdots,3^n$, such that
\begin{align*}
|I_{\alpha}^{b,m}f(x)|\lesssim\sum_{j=1}^{3^n}(T_{\mathcal{S}_j,\alpha}^{b,m}f(x)+
(T_{\mathcal{S}_j,\alpha}^{b,m})^\ast f(x)),
\end{align*}
where $T_{\mathcal{S},\alpha}^{b,m}$ and $(T_{\mathcal{S},\alpha}^{b,m})^\ast$ are defined in Theorem \ref{theorem1.1}.
\end{lemma}
\begin{proof}
Fix a sparse family $\mathcal{S}$, let $Q\in\mathcal{S}$ and $x\in Q$. Then
\begin{align*}
&|Q|^{\alpha/n}\sum_{k=0}^m|b(x)-b_Q|^{m-k}\frac{1}{|Q|}\int_Q|b(y)-b_Q|^k|f(y)|dy\\
&\quad\leq|Q|^{\alpha/n}\frac{1}{|Q|}\int_Q\Big(\sum_{k=0}^m\max\{|b(x)-b_Q|,|b(y)-b_Q|\}^m\Big)|f(y)|dy\\
&\quad=(m+1)|Q|^{\alpha/n}\frac{1}{|Q|}\int_Q\max\{|b(x)-b_Q|^m,|b(y)-b_Q|^m\}|f(y)|dy\\
&\quad\lesssim|Q|^{\alpha/n}|b(x)-b_Q|^m\frac{1}{|Q|}\int_Q|f(y)|dy+
|Q|^{\alpha/n}\frac{1}{|Q|}\int_Q|b(y)-b_Q|^m|f(y)|dy.
\end{align*}
This, together with Lemma \ref{sparse domination}, leads to the desired conclusion and completes the proof of Lemma \ref{lemma2.4}.
\end{proof}
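The crucial elementary step in the proof above is the bound $\sum_{k=0}^m a^{m-k}b^k\leq(m+1)\max\{a,b\}^m$ for $a,b\geq0$, each summand being at most $\max\{a,b\}^m$. As a quick numerical sanity check (purely illustrative, not part of the argument), one may verify it on random inputs:

```python
import random

def lhs(a, b, m):
    # sum_{k=0}^{m} a^(m-k) * b^k
    return sum(a ** (m - k) * b ** k for k in range(m + 1))

def rhs(a, b, m):
    # (m+1) * max(a, b)^m
    return (m + 1) * max(a, b) ** m

random.seed(0)
ok = all(
    lhs(a, b, m) <= rhs(a, b, m) + 1e-9
    for m in range(1, 6)
    for a, b in ((random.uniform(0, 10), random.uniform(0, 10)) for _ in range(200))
)
print(ok)  # True
```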
The proof of Theorem \ref{theorem1.1} is divided into the following two propositions.
\begin{proposition}\label{pro2.5}
Let $0<\alpha<n, m\in\mathbb{Z}^+$, $b\in L_{loc}^{m}(\mathbb{R}^n)$ and $\mathcal{S}$ be a sparse family. Assume that $1<p\leq q<\infty$ and $A,B$ are Young functions that satisfy $\bar{A}\in B_{q'},\bar{B}\in B_{p,q}$. If $(\mu,\nu)$ is a pair of weights that satisfies
\begin{align*}
\sup_{Q\in\mathcal{S}}|Q|^{{\alpha}/{n}+{1}/{q}-{1}/{p}}
\|\mu^{1/q}\|_{A,Q}\|(b-b_Q)^m\nu^{-1/p}\|_{B,Q}<\infty,
\end{align*}
then
\begin{align}\label{2.1}
\|T_{\mathcal{S},\alpha}^{b,m}f\|_{L^q(\mu)}\leq C\|f\|_{L^p(\nu)}.
\end{align}
Conversely, if $T_{\mathcal{S},\alpha}^{b,m}$ satisfies \eqref{2.1}, then the pair of weights $(\mu,\nu)$ satisfies
\begin{align*}
\sup_{Q\in\mathcal{S}}|Q|^{{\alpha}/{n}+{1}/{q}-{1}/{p}}
\|\mu^{1/q}\|_{q,Q}\|(b-b_Q)^m\nu^{-1/p}\|_{p',Q}<\infty.
\end{align*}
\end{proposition}
\begin{proof}
By duality, there exists a nonnegative measurable function $g\in L^{q'}(\mu)$ with $\|g\|_{L^{q'}(\mu)}=1$ such that
\begin{align}\label{2.2}
\|T_{\mathcal{S},\alpha}^{b,m}f\|_{L^q(\mu)}&=\int_{\mathbb{R}^n}T_{\mathcal{S},\alpha}
^{b,m}f(x)g(x)\mu(x)dx\\
&\leq\sum_{Q\in\mathcal{S}}|Q|^{\alpha/n+1}\Big(\frac{1}{|Q|}\int_Q|b(x)-b_Q|^m|f(x)|dx\Big)
\Big(\frac{1}{|Q|}\int_Q|g(x)|\mu(x)dx\Big)\nonumber.
\end{align}
Let $1/p-1/q=\beta/n$; it was proved in \cite{CruzM} that
\begin{align*}
M_{\beta,\bar{B}}: L^p(\mathbb{R}^n)\rightarrow L^q(\mathbb{R}^n).
\end{align*}
From this, \eqref{2.2}, the generalized H\"{o}lder inequality and our assumptions, we deduce that
\begin{align*}
\|T_{\mathcal{S},\alpha}^{b,m}f\|_{L^q(\mu)}&\leq\sum_{Q\in\mathcal{S}}
\|(b-b_Q)^m\nu^{-1/p}\|_{B,Q}\|f\nu^{1/p}\|_{\bar{B},Q}\|\mu^{1/q}\|_{A,Q}
\|g\mu^{1/q'}\|_{\bar{A},Q}|Q|^{1+\frac{\alpha}{n}+\frac{1}{q}-\frac{1}{p}+\frac{\beta}{n}}\\
&\lesssim\sum_{Q\in\mathcal{S}}|E_Q||Q|^{\beta/n}\|f\nu^{1/p}\|_{\bar{B},Q}\|g\mu^{1/q'}\|_{\bar{A},Q}\\
&\leq\int_{\mathbb{R}^n}M_{\bar{A}}(g\mu^{1/q'})(x)M_{\beta,\bar{B}}(f\nu^{1/p})(x)dx\\
&\leq\|M_{\beta,\bar{B}}(f\nu^{1/p})\|_{L^q}\|M_{\bar{A}}(g\mu^{1/q'})\|_{L^{q'}}
\lesssim\|f\|_{L^p(\nu)}.
\end{align*}
Next, we turn to the necessity. Fix $Q\in\mathcal{S}$ and let $f=|b-b_Q|^{m(p'-1)}\nu^{-p'/p}\chi_Q$. For $x\in Q$, it is easy to see that
\begin{align*}
T_{\mathcal{S},\alpha}^{b,m}f(x)\geq
|Q|^{\alpha/n-1}\int_Q|b(y)-b_Q|^{mp'}\nu(y)^{-p'/p}dy,
\end{align*}
which implies that
\begin{align*}
\Big(\int_QT_{\mathcal{S},\alpha}^{b,m}f(x)^q\mu(x)dx\Big)^{1/q}\geq
|Q|^{\alpha/n-1}\int_Q|b(x)-b_Q|^{mp'}\nu(x)^{-p'/p}dx\Big(\int_Q\mu(x)dx\Big)^{1/q}.
\end{align*}
On the other hand,
\begin{align*}
\Big(\int_QT_{\mathcal{S},\alpha}^{b,m}f(x)^q\mu(x)dx\Big)^{1/q}&\leq C\Big(\int_{\mathbb{R}^n}|f(x)|^p\nu(x)dx\Big)^{1/p}\\
&=C\Big(\int_Q|b(x)-b_Q|^{mp'}\nu(x)^{-p'/p}dx\Big)^{1/p}.
\end{align*}
Hence, we conclude that
\begin{align*}
&|Q|^{\alpha/n-1}\int_Q|b(x)-b_Q|^{mp'}\nu(x)^{-p'/p}dx\Big(\int_Q\mu(x)dx\Big)^{1/q}\\
&\quad\leq C\Big(\int_Q|b(x)-b_Q|^{mp'}\nu(x)^{-p'/p}dx\Big)^{1/p}.
\end{align*}
The desired result follows by rearranging the above terms.
\end{proof}
Similarly, we can obtain the following proposition; we leave the details to the interested reader.
\begin{proposition}\label{pro2.6}
Let $0<\alpha<n, m\in\mathbb{Z}^+$, $b\in L_{loc}^{m}(\mathbb{R}^n)$ and $\mathcal{S}$ be a sparse family. Assume that $1<p\leq q<\infty$ and $C,D$ are Young functions that satisfy $\bar{C}\in B_{q'},\bar{D}\in B_{p,q}$. If $(\mu,\nu)$ is a pair of weights that satisfies
\begin{align*}
\sup_{Q\in\mathcal{S}}|Q|^{{\alpha}/{n}+{1}/{q}-{1}/{p}}
\|(b-b_Q)^m\mu^{1/q}\|_{C,Q}\|\nu^{-1/p}\|_{D,Q}<\infty,
\end{align*}
then
\begin{align}\label{2.3}
\|(T_{\mathcal{S},\alpha}^{b,m})^\ast f\|_{L^q(\mu)}\leq C\|f\|_{L^p(\nu)}.
\end{align}
Conversely, if $(T_{\mathcal{S},\alpha}^{b,m})^\ast$ satisfies \eqref{2.3}, then the pair of weights $(\mu,\nu)$ satisfies
\begin{align*}
\sup_{Q\in\mathcal{S}}|Q|^{{\alpha}/{n}+{1}/{q}-{1}/{p}}
\|(b-b_Q)^m\mu^{1/q}\|_{q,Q}\|\nu^{-1/p}\|_{p',Q}<\infty.
\end{align*}
\end{proposition}
\begin{proof}[Proofs of Theorems \ref{theorem1.1} and \ref{theorem1.2}]
Theorem \ref{theorem1.1} follows from Propositions \ref{pro2.5} and \ref{pro2.6}, and Theorem \ref{theorem1.2} follows from Lemma \ref{lemma2.4} and Theorem \ref{theorem1.1}.
\end{proof}
Next, we prove Theorem \ref{theorem1.3}. We first recall the following lemma.
\begin{lemma}{\rm(cf. \cite{CruMP})}\label{general Holder}
Let $A,B$ be continuous and strictly increasing functions on $[0,\infty)$ and let $C$ be a Young function that satisfies $A^{-1}(t)B^{-1}(t)\lesssim C^{-1}(t)$ for $t$ large. Then
$$\|fg\|_{C,Q}\lesssim\|f\|_{A,Q}\|g\|_{B,Q}.$$
\end{lemma}
\begin{proof}[Proof of Theorem \ref{theorem1.3}]
Denote $\Phi_m(t)=\Phi(t^{1/m})$. Since $B,X,\Phi$ satisfy
$$\Phi^{-1}(t)^mX^{-1}(t)\lesssim B^{-1}(t)$$ for $t$ large, by Lemma \ref{general Holder}, we have that
\begin{align*}
\|(b-b_Q)^m\nu^{-1/p}\|_{B,Q}&\lesssim\|(b-b_Q)^m\|_{\Phi_m,Q}\|\nu^{-1/p}\|_{X,Q}\\
&=\|(b-b_Q)\|_{\Phi,Q}^m\|\nu^{-1/p}\|_{X,Q}.
\end{align*}
Therefore,
\begin{align*}
&\sup_Q|Q|^{{\alpha}/{n}+{1}/{q}-{1}/{p}}\|\mu^{1/q}\|_{A,Q}
\|(b-b_Q)^m\nu^{-1/p}\|_{B,Q}\\
&\quad\lesssim\|b\|_{Osc(\Phi)}^m\sup_Q|Q|^{{\alpha}/{n}+{1}/{q}-{1}/{p}}
\|\mu^{1/q}\|_{A,Q}\|\nu^{-1/p}\|_{X,Q}<\infty.
\end{align*}
By Proposition \ref{pro2.5}, we get that $$\|T_{\mathcal{S},\alpha}^{b,m} f\|_{L^q(\mu)}\leq C\|f\|_{L^p(\nu)}.$$ Similarly, we have
\begin{align*}
&\sup_Q|Q|^{{\alpha}/{n}+{1}/{q}-{1}/{p}}\|(b-b_Q)^m\mu^{1/q}
\|_{C,Q}\|\nu^{-1/p}\|_{D,Q}\\
&\quad\lesssim\|b\|_{Osc(\Phi)}^m\sup_Q|Q|^{{\alpha}/{n}+{1}/{q}-{1}/{p}}
\|\mu^{1/q}\|_{Y,Q}\|\nu^{-1/p}\|_{D,Q}<\infty.
\end{align*}
This, together with Proposition \ref{pro2.6}, deduces that $$\|(T_{\mathcal{S},\alpha}^{b,m})^\ast f\|_{L^q(\mu)}\leq C\|f\|_{L^p(\nu)}.$$ Summing up the above estimates with Lemma \ref{lemma2.4}, we obtain the conclusion of Theorem \ref{theorem1.3}.
\end{proof}
To prove Corollary \ref{corollary1.4}, we need to recall the following fact. For $\varphi(t)=t^p(\log(e+t))^q$ with $p>1$ and $q\in\mathbb{R}$, Cruz-Uribe et al. \cite{CruPe00} showed that
\begin{align}\label{Young function}
\varphi^{-1}(t)\sim\frac{t^{1/p}}{(\log(e+t))^{q/p}},\quad \bar{\varphi}(t)\sim\frac{t^{p'}}
{(\log(e+t))^{p'q/p}}.
\end{align}
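As a numerical sanity check of \eqref{Young function} (not part of the proof), one can compute the complementary function directly as the Legendre-type transform $\bar{\varphi}(s)=\sup_{t>0}(st-\varphi(t))$ and compare it with the stated asymptotics; the exponents $p=2$, $q=1$ below are chosen only for illustration:

```python
import numpy as np

def phi(t, p=2.0, q=1.0):
    # phi(t) = t^p (log(e+t))^q
    return t**p * np.log(np.e + t)**q

def phi_bar(s, p=2.0, q=1.0):
    # Complementary Young function as a Legendre-type transform,
    # phi_bar(s) = sup_{t>0} (s t - phi(t)), approximated on a log grid.
    t = np.logspace(-6, 12, 20000)
    return np.max(s * t - phi(t, p, q))

def predicted(s, p=2.0, q=1.0):
    pp = p / (p - 1.0)  # conjugate exponent p'
    return s**pp / np.log(np.e + s)**(pp * q / p)

ratios = [phi_bar(s) / predicted(s) for s in (1e1, 1e2, 1e3, 1e4, 1e5)]
```

The ratio stays bounded above and below over several orders of magnitude, as \eqref{Young function} predicts; the implicit constants there are not specified, so only boundedness of the ratio is meaningful.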
Now, we give the proof of Corollary \ref{corollary1.4}.
\begin{proof}[Proof of Corollary \ref{corollary1.4}]
To prove this corollary, we only need to choose Young functions that satisfy the conditions of Theorem \ref{theorem1.3}. For some $\delta>0$, choose
$$X(t)=t^{p'}[\log(e+t)]^{(m+1)p'-1+\delta},~ Y(t)=t^q[\log(e+t)]^{(m+1)q-1+\delta},$$
$$B(t)=t^{p'}[\log(e+t)]^{p'-1+\delta},~C(t)=t^q[\log(e+t)]^{q-1+\delta},~\Phi(t)=e^t-1.$$
It is not hard to check that
$$\bar{B}(t)\sim\frac{t^{p}}{[\log(e+t)]^{1+{p\delta}/{p'}}}\in B_{p}\subset B_{p,q},~ \bar{C}(t)\sim\frac{t^{q'}}{[\log(e+t)]^{1+{q'\delta}/{q}}}\in B_{q'}$$
and $\Phi^{-1}(t)=\log(1+t)\sim\log(e+t)$. By \eqref{Young function}, we have that
$$X^{-1}(t)\sim\frac{t^{1/p'}}{[\log(e+t)]^{m+1/p+\delta/p'}},~
Y^{-1}(t)\sim\frac{t^{1/q}}{[\log(e+t)]^{m+1/q'+\delta/q}},$$
$$B^{-1}(t)\sim\frac{t^{1/p'}}{[\log(e+t)]^{1/p+\delta/p'}},~
C^{-1}(t)\sim\frac{t^{1/q}}{[\log(e+t)]^{1/q'+\delta/q}}.$$
Then
$$X^{-1}(t)\Phi^{-1}(t)^m\sim\frac{t^{1/p'}}{[\log(e+t)]^{m+1/p+\delta/p'}}[\log(e+t)]^m\sim B^{-1}(t),$$
$$Y^{-1}(t)\Phi^{-1}(t)^m\sim\frac{t^{1/q}}{[\log(e+t)]^{m+1/q'+\delta/q}}[\log(e+t)]^m\sim C^{-1}(t).$$
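These equivalences can also be verified numerically: inverting the strictly increasing functions $X$ and $B$ by bisection for a sample choice of parameters (here $p=2$, $m=1$, $\delta=1/2$, chosen only for illustration), the ratio $X^{-1}(t)\Phi^{-1}(t)^m/B^{-1}(t)$ remains bounded above and below over many orders of magnitude:

```python
import math

def inverse(f, s, lo=1e-12, hi=1e12, iters=200):
    # Invert a strictly increasing f on (0, infinity) by bisection on a log scale.
    for _ in range(iters):
        mid = math.sqrt(lo * hi)
        if f(mid) < s:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

p, m, delta = 2.0, 1, 0.5
pp = p / (p - 1.0)  # p'

X = lambda t: t**pp * math.log(math.e + t)**((m + 1) * pp - 1 + delta)
B = lambda t: t**pp * math.log(math.e + t)**(pp - 1 + delta)
Phi_inv = lambda t: math.log(1.0 + t)  # inverse of Phi(t) = e^t - 1

ratios = [inverse(X, s) * Phi_inv(s)**m / inverse(B, s)
          for s in (1e2, 1e4, 1e6, 1e8)]
```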
Finally, by the John--Nirenberg inequality and $t\lesssim \Phi(t)$, we get $\|b\|_{BMO(\mathbb{R}^n)}\sim\|b\|_{Osc(\Phi)}$. Thus, Theorem \ref{theorem1.3} implies Corollary \ref{corollary1.4}.
\end{proof}
\begin{proof}[Proof of Corollary \ref{corollary1.5}]
Choose $A(t)=t^q[\log(e+t)]^{q-1+\delta}$ and $D(t)=t^{p'}[\log(e+t)]^{p'-1+\delta}$ in Corollary \ref{corollary1.4}. Then
$$\bar{A}(t)\sim\frac{t^{q'}}{[\log(e+t)]^{1+{q'\delta}/{q}}}\in B_{q'},~\bar{D}(t)\sim\frac{t^p}{[\log(e+t)]^{1+{p\delta}/{p'}}}\in B_{p}\subset B_{p,q}.$$
Hence, Corollary \ref{corollary1.5} directly follows from Corollary \ref{corollary1.4}.
\end{proof}
\section{Necessity of two weight inequalities for $I_\alpha^{b,m}$}
In this section, we give the proof of Theorem \ref{theorem1.7}, for which we need the following two lemmas.
\begin{lemma}\label{lm3.1}
Let $K_\alpha(x,y)=\frac{1}{|x-y|^{n-\alpha}}$. Then for each $A\geq4$ and each ball $B:=B(y_0,r)$, there exists a ball $\tilde{B}:=B(x_0,r)$, disjoint from $B$, with dist$(B,\tilde{B})\sim Ar$ that satisfies $|K_\alpha(x_0,y_0)|=\frac{1}{A^{n-\alpha}r^{n-\alpha}}$, and for any $y\in B$ and $x\in\tilde{B}$, there holds
\begin{align*}
|K_\alpha(x,y)-K_\alpha(x_0,y_0)|\lesssim\frac{\epsilon_A}{A^{n-\alpha}r^{n-\alpha}},
\end{align*}
where $\epsilon_A\rightarrow0$ as $A\rightarrow\infty$.
\end{lemma}
\begin{proof}
Fix a ball $B=B(y_0,r)$ and $A\geq4$, and take $x_0=y_0+Ar\theta_0$, where $\theta_0\in\mathbb{S}^{n-1}$. Let $\tilde{B}:=B(x_0,r)$; it is easy to see that dist$(B,\tilde{B})\sim Ar$ and $K_\alpha(x_0,y_0)=\frac{1}{|x_0-y_0|^{n-\alpha}}=\frac{1}{A^{n-\alpha}r^{n-\alpha}}$. For any $y\in B$ and $x\in\tilde{B}$, by the mean value theorem, we have
\begin{align*}
|K_\alpha(x,y)-K_\alpha(x_0,y_0)|&\leq|K_\alpha(x,y)-K_\alpha(x_0,y)|+|K_\alpha(x_0,y)-K_\alpha(x_0,y_0)|\\
&\lesssim\frac{|x-x_0|}{|x_0-y|^{n-\alpha+1}}\lesssim\frac{1/A}{(Ar)^{n-\alpha}}
=:\frac{\epsilon_A}{(Ar)^{n-\alpha}}.
\end{align*}
\end{proof}
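The decay $\epsilon_A\sim 1/A$ in Lemma \ref{lm3.1} can be illustrated numerically. The sketch below takes $n=2$, $\alpha=1/2$, $r=1$ (values chosen only for illustration) and evaluates the kernel difference at the closest approach of the two balls; the normalized quantity $|K_\alpha(x,y)-K_\alpha(x_0,y_0)|\,(Ar)^{n-\alpha}A$ remains bounded as $A$ grows:

```python
import math

n, alpha, r = 2, 0.5, 1.0

def K(x, y):
    # Riesz-type kernel K_alpha(x, y) = |x - y|^{alpha - n}
    return math.hypot(x[0] - y[0], x[1] - y[1]) ** (alpha - n)

def normalized_error(A):
    y0, x0 = (0.0, 0.0), (A * r, 0.0)   # centres with dist(B, B~) ~ A r
    x, y = (A * r - r, 0.0), (r, 0.0)   # closest approach of the two balls
    return abs(K(x, y) - K(x0, y0)) * (A * r) ** (n - alpha) * A

errs = [normalized_error(A) for A in (10, 100, 1000)]
```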
\begin{lemma}{\rm(cf. \cite{LerOR})}\label{lm3.2}
Assume that $f\in BMO(\mathbb{R}^n)$, and let $Q$ be a cube such that $f_Q=0$. Then there exists a function $\varphi$ such that $\varphi=f$ on $Q$, $\varphi=0$ on $\mathbb{R}^n\backslash2Q$ and $\|\varphi\|_{BMO(\mathbb{R}^n)}\lesssim\|f\|_{BMO(\mathbb{R}^n)}$.
\end{lemma}
\begin{proof}[Proof of Theorem \ref{theorem1.7}]
For any cube $Q\subset\mathbb{R}^n$, we define
\begin{align*}
g(x)=\log^+\Big(\frac{M(\nu^{1-p'}\chi_Q)(x)}{(\nu^{1-p'})_Q}\Big).
\end{align*}
It is well known that $g\in BMO(\mathbb{R}^n)$. Moreover, the Kolmogorov inequality yields that
\begin{align*}
\int_Q(M(f\chi_Q))^\delta\lesssim\Big(\frac{1}{|Q|}\int_Q|f|\Big)^\delta|Q|,~0<\delta<1.
\end{align*}
We then have $g_Q\lesssim1$. According to Lemma \ref{lm3.2}, there is a function $\varphi$ satisfying $\varphi=g-g_Q$ on $Q$, $\varphi=0$ outside $2Q$ and $\|\varphi\|_{BMO(\mathbb{R}^n)}\lesssim1$. Choose a ball $B$ whose centre coincides with that of the cube $Q$ and whose radius is $r={\rm diam}\, Q$. Then by Lemma \ref{lm3.1}, there is a ball $\tilde{B}$ of the same radius as $B$ with dist$(B,\tilde{B})\sim Ar$, where $A\geq4$ will be determined later.
Now, we return to prove our theorem. By duality, we find that the condition
\begin{align*}
\|I_\alpha^{b,m}f\|_{L^{q,\infty}(\mu)}\lesssim\|b\|_{BMO(\mathbb{R}^n)}^m\|f\|_{L^p(\nu)}
\end{align*}
is equivalent to the condition
\begin{align}\label{duality}
\|(I_\alpha^{b,m})^\ast f\|_{L^{p'}(\nu^{1-p'})}\lesssim\|b\|_{BMO(\mathbb{R}^n)}^m\|f/\mu\|_{L^{q',1}(\mu)}.
\end{align}
One can check that $(I_\alpha^{b,m})^\ast=(-1)^mI_\alpha^{b,m}$; hence, we can still deal with \eqref{duality} by considering $I_\alpha^{b,m}$. Let $b=\varphi$. Then for $x\in B$ and a non-negative function $f$,
\begin{align*}
I_\alpha^{b,m}(f\chi_{\tilde{B}})(x)=\int_{\tilde{B}}(b(x)-b(y))^m\frac{f(y)}{|x-y|^{n-\alpha}}dy
=\varphi(x)^m\int_{\tilde{B}}\frac{f(y)}{|x-y|^{n-\alpha}}dy.
\end{align*}
By \eqref{duality}, we immediately get that
\begin{align*}
\Big(\int_B\Big(\int_{\tilde{B}}\frac{f(y)}{|x-y|^{n-\alpha}}dy\Big)^{p'}|\varphi(x)|^{mp'}
\nu(x)^{1-p'}dx\Big)^{1/p'}\lesssim\|f\chi_{\tilde{B}}/\mu\|_{L^{q',1}(\mu)}.
\end{align*}
This, combining with Lemma \ref{lm3.1}, yields that
\begin{align*}
&\frac{1}{A^{n-\alpha}}\Big(\int_B|\varphi(x)|^{mp'}\nu(x)^{1-p'}dx\Big)^{1/p'}f_{\tilde{B}}\\
&\quad=\frac{r^{n-\alpha}}{(Ar)^{n-\alpha}}\Big(\int_B|\varphi(x)|^{mp'}\nu(x)^{1-p'}dx\Big)^{1/p'}
f_{\tilde{B}}\\
&\quad=r^{-\alpha}\Big(\int_B\Big(\int_{\tilde{B}}\frac{f(y)}{|x_0-y_0|^{n-\alpha}}dy\Big)^{p'}
|\varphi(x)|^{mp'}\nu(x)^{1-p'}dx\Big)^{1/p'}\\
&\quad\leq r^{-\alpha}\Big(\int_B\Big(\int_{\tilde{B}}\Big|\frac{1}{|x_0-y_0|^{n-\alpha}}-
\frac{1}{|x-y|^{n-\alpha}}\Big|f(y)dy\Big)^{p'}|\varphi(x)|^{mp'}\nu(x)^{1-p'}dx\Big)^{1/p'}\\
&\qquad+r^{-\alpha}\Big(\int_B\Big(\int_{\tilde{B}}
\frac{1}{|x-y|^{n-\alpha}}f(y)dy\Big)^{p'}|\varphi(x)|^{mp'}\nu(x)^{1-p'}dx\Big)^{1/p'}\\
&\quad\lesssim\frac{\epsilon_A}{A^{n-\alpha}}\Big(\int_B|\varphi(x)|^{mp'}\nu(x)^{1-p'}dx\Big)^{1/p'}
f_{\tilde{B}}+r^{-\alpha}\|f\chi_{\tilde{B}}/\mu\|_{L^{q',1}(\mu)}.
\end{align*}
Choosing $A$ large enough, we have
\begin{align*}
\Big(\int_B|\varphi(x)|^{mp'}\nu(x)^{1-p'}dx\Big)^{1/p'}f_{\tilde{B}}\lesssim
r^{-\alpha}\|f\chi_{\tilde{B}}/\mu\|_{L^{q',1}(\mu)}.
\end{align*}
Taking $f=\mu$ and using the fact that $\|\chi_{\tilde{B}}\|_{L^{q',1}(\mu)}\sim(\int_{\tilde{B}}\mu)^{1/q'}$, we obtain
\begin{align}\label{3.2}
r^\alpha|\tilde{B}|^{-1}\Big(\int_B|\varphi(x)|^{mp'}\nu(x)^{1-p'}dx\Big)^{1/p'}
\Big(\int_{\tilde{B}}\mu(x)dx\Big)^{1/q}\lesssim1.
\end{align}
Similarly, taking $b=\chi_B$ and following the arguments leading to \eqref{3.2}, we get that
\begin{align}\label{3.3}
r^\alpha|\tilde{B}|^{-1}\Big(\int_B\nu(x)^{1-p'}dx\Big)^{1/p'}
\Big(\int_{\tilde{B}}\mu(x)dx\Big)^{1/q}\lesssim1.
\end{align}
Observe that $|\tilde{B}|\sim|Q|$ and $Q\subset\theta\tilde{B}$, where $\theta$ depends only on $A$ and $n$. Combining these facts with the doubling property of $\mu$, we can replace \eqref{3.2} and \eqref{3.3} by
\begin{align*}
r^\alpha|Q|^{-1}\Big(\int_Q|g(x)-g_Q|^{mp'}\nu(x)^{1-p'}dx\Big)^{1/p'}
\Big(\int_{Q}\mu(x)dx\Big)^{1/q}\lesssim1
\end{align*}
and
\begin{align*}
r^\alpha|Q|^{-1}\Big(\int_Q\nu(x)^{1-p'}dx\Big)^{1/p'}
\Big(\int_{Q}\mu(x)dx\Big)^{1/q}\lesssim1,
\end{align*}
respectively. Keeping in mind that $g_Q\lesssim1$, we finally have
\begin{align*}
&r^\alpha|Q|^{-1}\Big(\int_Qg(x)^{mp'}\nu(x)^{1-p'}dx\Big)^{1/p'}
\Big(\int_{Q}\mu(x)dx\Big)^{1/q}\\
&\quad\lesssim r^\alpha|Q|^{-1}\Big(\int_Q|g(x)-g_Q|^{mp'}\nu(x)^{1-p'}dx\Big)^{1/p'}
\Big(\int_{Q}\mu(x)dx\Big)^{1/q}\\
&\qquad+r^\alpha|Q|^{-1}\Big(\int_Q\nu(x)^{1-p'}dx\Big)^{1/p'}
\Big(\int_{Q}\mu(x)dx\Big)^{1/q}\lesssim1,
\end{align*}
which implies that
\begin{align*}
&\sup_Q|Q|^{\frac{\alpha}{n}+\frac{1}{q}-\frac{1}{p}}
\Big(\frac{1}{|Q|}\int_Q\mu(x)dx\Big)^{1/q}\\
&\quad\times\Big(\frac{1}{|Q|}\int_Q\nu(x)^{1-p'}\Big[\log\Big(\frac{\nu(x)^{1-p'}}
{(\nu(x)^{1-p'})_Q}+e\Big)\Big]^{mp'}dx\Big)^{1/p'}<\infty.
\end{align*}
Therefore, using the following fact proved in \cite{W},
$$\|f\|_{L(\log L)^\alpha,Q}\sim\frac{1}{|Q|}\int_Q|f(x)|[\log(|f(x)|/|f|_Q+e)]^\alpha dx,$$
we get the desired result. Theorem \ref{theorem1.7} is proved.
\end{proof}
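The norm equivalence from \cite{W} used at the end of the proof admits a quick numerical check: the Luxemburg norm $\|f\|_{L(\log L)^\alpha,Q}$ can be computed by bisection from its definition with $\varphi(t)=t[\log(e+t)]^\alpha$ and compared with the integral expression; the sample function $f(x)=x^{-1/2}$ on $Q=[0,1]$ and the exponent $\alpha=2$ below are chosen only for illustration:

```python
import math

alpha = 2.0
N = 5000
xs = [(i + 0.5) / N for i in range(N)]   # midpoint grid on Q = [0, 1]
f = [x ** -0.5 for x in xs]              # sample f(x) = x^{-1/2}
mean_f = sum(f) / N

def phi(t):
    return t * math.log(math.e + t) ** alpha

def mean_phi(lam):
    # (1/|Q|) \int_Q phi(|f|/lam); decreasing in lam.
    return sum(phi(v / lam) for v in f) / N

# Luxemburg norm: inf{lam > 0 : mean_phi(lam) <= 1}, by log-scale bisection.
lo, hi = 1e-6, 1e6
for _ in range(80):
    mid = math.sqrt(lo * hi)
    if mean_phi(mid) > 1.0:
        lo = mid
    else:
        hi = mid
luxemburg = math.sqrt(lo * hi)

# Integral expression from the cited equivalence.
integral = sum(v * math.log(v / mean_f + math.e) ** alpha for v in f) / N

ratio = luxemburg / integral
```

The two quantities agree up to a moderate constant, which is all the equivalence asserts.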
\section{Converse to Bloom type estimate for $I_\alpha^{b,m}$}
This section is concerned with the proof of Theorem \ref{theorem1.8}. First, we recall and establish some lemmas, which are key to our arguments.
\begin{lemma}{\rm(cf. \cite{LerOR})}\label{lm3.3}
Let $\eta_1,\eta_2$ be weights such that $\eta_1/\eta_2\not\in L^\infty$. Then there exists $b\in BMO_{\eta_1}\backslash BMO_{\eta_2}$.
\end{lemma}
\begin{lemma}\label{lm3.4}
Let $\lambda,\mu$ be arbitrary weights satisfying \eqref{boundedness} and let $p,q,m,\alpha$ be given as in Theorem \ref{theorem1.8}. Then for each ball $B:=B(y_0,r)$, there exists a disjoint ball $\tilde{B}:=B(x_0,r)$ with dist$(B,\tilde{B})\sim Ar$ such that for any non-negative measurable function $f$,
\begin{align*}
\Big(\int_{\tilde{B}}\eta(x)^{mq}\lambda(x)^qdx\Big)^{1/q}f_B\lesssim r^{-\alpha}\Big(\int_Bf(x)^p\mu(x)^pdx\Big)^{1/p}.
\end{align*}
\end{lemma}
\begin{proof}
Let $\tilde{B}$ be as in Lemma \ref{lm3.1} and set $b=\eta\chi_{\tilde{B}}$. Then for $x\in\tilde{B}$,
\begin{align*}
I_{\alpha}^{b,m}(f\chi_B)(x)=\int_B(b(x)-b(y))^m\frac{f(y)}{|x-y|^{n-\alpha}}dy=\eta(x)^m
\int_B\frac{f(y)}{|x-y|^{n-\alpha}}dy.
\end{align*}
By \eqref{boundedness}, we have
\begin{align*}
\Big(\int_{\tilde{B}}\Big(\int_B\frac{f(y)}{|x-y|^{n-\alpha}}dy\Big)^q
\eta(x)^{mq}\lambda(x)^qdx\Big)^{1/q}
\lesssim\Big(\int_Bf(x)^p\mu(x)^pdx\Big)^{1/p}.
\end{align*}
From this and Lemma \ref{lm3.1}, we deduce that
\begin{align*}
&\frac{1}{A^{n-\alpha}}\Big(\int_{\tilde{B}}\eta(x)^{mq}\lambda(x)^qdx\Big)^{1/q}f_B\\
&\quad=\frac{r^{n-\alpha}}{(Ar)^{n-\alpha}}\Big[\int_{\tilde{B}}\eta(x)^{mq}\lambda(x)^q
\Big(\frac{1}{|B|}\int_Bf(y)dy\Big)^qdx\Big]^{1/q}\\
&\quad=r^{-\alpha}\Big[\int_{\tilde{B}}\eta(x)^{mq}\lambda(x)^q
\Big(\int_B\frac{f(y)}{|x_0-y_0|^{n-\alpha}}dy\Big)^qdx\Big]^{1/q}\\
&\quad\leq r^{-\alpha}\Big[\int_{\tilde{B}}\eta(x)^{mq}\lambda(x)^q
\Big(\int_B\Big|\frac{1}{|x-y|^{n-\alpha}}-\frac{1}{|x_0-y_0|^{n-\alpha}}\Big|f(y)dy\Big)^qdx\Big]
^{1/q}\\
&\qquad+r^{-\alpha}\Big[\int_{\tilde{B}}\eta(x)^{mq}\lambda(x)^q
\Big(\int_B\frac{1}{|x-y|^{n-\alpha}}f(y)dy\Big)^qdx\Big]^{1/q}\\
&\quad\lesssim\frac{\epsilon_A}{A^{n-\alpha}}\Big(\int_{\tilde{B}}\eta(x)^{mq}\lambda(x)^qdx\Big)
^{1/q}f_B+r^{-\alpha}\Big(\int_Bf(x)^p\mu(x)^pdx\Big)^{1/p}.
\end{align*}
Then the desired result directly follows by letting $A\rightarrow\infty$.
\end{proof}
\begin{lemma}\label{lm3.5}
Let $\lambda,\mu$ be arbitrary weights satisfying \eqref{boundedness} and let $p,q,m,\alpha$ be given as in Theorem \ref{theorem1.8}. Then
\begin{align*}
\lambda(x)\eta(x)^{m}\lesssim\mu(x).
\end{align*}
\end{lemma}
\begin{proof}
Take $f=1$ in Lemma \ref{lm3.4}. Keeping in mind that $1/p-1/q=\alpha/n$, we obtain
\begin{align*}
\Big(\frac{1}{|\tilde{B}|}\int_{\tilde{B}}\eta(x)^{mq}\lambda(x)^qdx\Big)^{1/q}\lesssim
\Big(\frac{1}{|B|}\int_{B}\mu(x)^pdx\Big)^{1/p}.
\end{align*}
By the Lebesgue differentiation theorem, we get the desired result.
\end{proof}
Now, we are in a position to prove Theorem \ref{theorem1.8}.
\begin{proof}[Proof of Theorem \ref{theorem1.8}]
By Lemma \ref{lm3.5}, it suffices to prove that
\begin{align}\label{3.4}
\mu\lesssim\lambda\eta^m
\end{align}
almost everywhere. Suppose that \eqref{3.4} is not true. Denote $\tilde{\eta}=(\mu/\lambda)^{1/m}$. Then $\tilde{\eta}/\eta\not\in L^\infty$. Note that when $\lambda,\,\mu\in A_{p,q}$, Accomazzo et al. \cite{AMR} proved that for $b\in BMO_{\tilde{\eta}}$,
\begin{align*}
\|I_{\alpha}^{b,m}f\|_{L^q(\lambda^q)}\lesssim\|f\|_{L^p(\mu^p)}.
\end{align*}
This, together with Lemma \ref{lm3.3}, yields a function $b\in BMO_{\tilde{\eta}}\backslash BMO_\eta$, which contradicts \eqref{necessity} and completes the proof of Theorem \ref{theorem1.8}.
\end{proof}
As is well known, violation of the Bell inequality and other Bell-type inequalities\cite{Bell,CH1,FC,CH2,Aspect1,Aspect2,Aspect3,GM,MWZ, Nat} in the quantum world indicates that quantum mechanics cannot be described as a local hidden variable theory, which we call the {\it nonlocality} of quantum mechanics in this study. Because of consistency with special relativity, it is impossible to communicate instantaneously with people who are at a spacelike distance even with the help of this nonlocality. Nevertheless, we should not conclude that we cannot use the transitions of quantum states due to observation as a communication tool; we call these transitions {\it collapse of the wave function}, in accordance with standard texts on quantum mechanics. According to special relativity, it is impossible to communicate by means of the collapse of the wave function if it propagates instantaneously. Moreover, if quantum mechanics is unitary, the no-communication theorem prohibits such communication no matter how slowly the information is passed. Conversely, if quantum mechanics has a nonunitary process\cite{mochi1}, there is no reason to deny the possibility of slow communication by means of the collapse of the wave function. Because the proposition that quantum mechanics is unitary is not certain\cite{Healey}, it is worth examining the possibility of such slow communication.
Some people may insist that the Einstein--Podolsky--Rosen (EPR) experiment\cite{EPR} demonstrates the instantaneous collapse of the wave function. As discussed below, however, the EPR experiment does not establish the instantaneous collapse of the wave function but the nonlocality of the wave function. We should not confuse them. Equally, communication at a finite speed by means of collapse of the wave function does not conflict with the nonlocality of quantum mechanics.
In this study, we show that quantum communication by means of collapse of the wave function is possible. In this context, {\it quantum communication} does not mean quantum teleportation\cite{telepo1,telepo2,telepo3,telepo4,telepo5} or quantum cryptography\cite{cry1,cry2}, but transmission of information itself. This study, however, bears some relation to quantum cryptography. Because of the no-cloning theorem\cite{noclo1,noclo2,noclo3}, the information must change if the communication is tapped by an eavesdropper. This fact is a hint that communication via quantum entanglement is possible; the eavesdropper can transmit some information to the receiver. In other words, we may use the no-cloning theorem for communication.
We consider a thought experiment in which the measurement process is divided into two steps; the first step is a microscopic interaction between the system and the probe of the measuring device, and the second step is the subsequent amplification and output of the result. This thought experiment shows that communication by means of the collapse of the wave function is possible, and no inconsistency with special relativity appears.
This paper is organized as follows. In the second section, we outline our thought experiment. We show in the third section that its inferred results are consistent with special relativity and the nonlocality of quantum mechanics. The EPR--Bohm\cite{Bohm} experiment is discussed in this context in the fourth section. The last section presents our conclusion. Some supplementary explanations of entanglement are given in the Appendix.
\section{Thought experiment}
\subsection{Setting}
We consider an experiment in which the spin of an electron $S$ is observed. $|+\rangle$ and $|-\rangle$ are the eigenstates of its spin in the $z$ direction belonging to its eigenvalues $+\hbar/2$ and $-\hbar/2$, respectively. Then, we define
\begin{equation}
\hat\sigma_z=|+\rangle\langle+|-|-\rangle\langle -|,
\end{equation}
\begin{equation}
\hat\sigma_x=|+\rangle\langle-|+|-\rangle\langle +|.
\end{equation}
$M_z$ and $M_x$ are prepared as macroscopic apparatuses that measure $\hat\sigma_z$ and $\hat \sigma_x$, respectively. The initial state $|r\rangle$ of $S$ is the eigenstate of $\hat\sigma_x$ belonging to its eigenvalue 1:
\begin{equation}
|r\rangle=\frac{1}{\sqrt{2}}\Big(|+\rangle+|-\rangle\Big),\label{eq:appb3}
\end{equation}
\[
\hat\sigma_x|r\rangle=|r\rangle.
\]
$M_z$ includes a microscopic probe that interacts locally with $S$. We define the position operator $\hat\xi_z$ and momentum operator $\hat\pi_z$ of the probe, which satisfy
\begin{equation}
[\hat\pi_z,\ \hat\xi_z]=-i.
\end{equation}
The initial state $|\phi_z\rangle$ of $M_z$ satisfies
\begin{equation}
\langle\phi_z|\hat\xi_z|\phi_z\rangle=\Xi_z,
\end{equation}
\begin{equation}
\langle\phi_z|\hat\pi_z|\phi_z\rangle=0,
\end{equation}
where $\Xi_z$ is a constant.
A microscopic interaction between $S$ and the probe of $M_z$, which does not include amplification and output of the results in $M_z$, is described by the interaction Hamiltonian $\hat H_z$:
\begin{equation}
\hat H_z\equiv g_z\hat\sigma_z\hat\pi_z,\label{eq:Hz}
\end{equation}
where $g_z$ is a coupling constant.
We define some operators and a state for $M_x$ as we did for $M_z$:
\begin{equation}
[\hat\pi_x,\ \hat\xi_x]=-i,
\end{equation}
\begin{equation}
\langle\phi_x|\hat\xi_x|\phi_x\rangle=\Xi_x,
\end{equation}
\begin{equation}
\langle\phi_x|\hat\pi_x|\phi_x\rangle=0,
\end{equation}
\begin{equation}
\hat H_x\equiv g_x\hat\sigma_x\hat\pi_x.\label{eq:Hx}
\end{equation}
Moreover, the pre-initial state $|\Phi\rangle$ of the combined system of $M_z$, $M_x$, and $S$ is defined as
\begin{equation}
|\Phi\rangle\equiv |\phi_x\rangle|\phi_z\rangle|r\rangle.
\end{equation}
First, $S$ interacts with the probe of $M_z$ for $\Delta t$. If we ignore the time development of $M_z$ itself, the combined state $|\Phi(0)\rangle$ and the density matrix $\hat\rho (0)$ after this interaction are
\begin{equation}
|\Phi(0)\rangle=\exp (-i\hat H_z\Delta t)|\Phi\rangle,\label{eq:state0}
\end{equation}
\begin{equation}
\hat\rho (0)=|\Phi(0)\rangle\langle\Phi(0)|.
\end{equation}
Then, keeping the state unchanged, $M_z$ is separated from $S$ by a distance. We regard this state of the combined system as the initial state at the time $t=0$ in our thought experiment. Hereafter, we call the experimenters operating $M_z$ and $M_x$ {\it Alice} and {\it Bob}, respectively.
\begin{figure}
\centering
\includegraphics[width=10cm]{ptep1.jpg}
\caption{The initial state.}
\end{figure}
This experiment consists of the following three operations:\\
1. (the definition of $t_0$)
The local interaction between $S$ and the probe of $M_x$ for $\Delta t$. We call the time at which it ends $t_0\ (t_0>0)$.\\
2. (the definition of $t_z$)
The measurement processes excluding the local interaction of $S$ with $M_z$, i.e., the amplification and output of the result in $M_z$, which are thought to take little time. We call the time at which they end $t_z\ (t_z>0)$.\\
3. (the definition of $t_x$)
The measurement processes excluding the local interaction of $S$ with $M_x$, i.e., the amplification and output of the result in $M_x$, which are thought to take little time. We call the time at which they end $t_x\ (t_x\ge t_0)$.
We consider two time orders in which these operations can be performed.
\subsection{$t_z<t_0$}
The thought experiment in this subsection is similar to the three state cascaded Stern--Gerlach experiment\cite{casSG}.
All measurement processes of $M_z$ have finished and the wave function has collapsed before the interaction between $S$ and $M_x$ begins. Therefore, the expectation value $\langle\xi_z\rangle_1$ of $\hat\xi_z$ at $t_z$ is
\begin{equation}
\begin{array}{rl}
\langle\xi_z\rangle_1&={\rm Tr}\big[\hat\rho(0)\hat\xi_z\big]\\
&=\Xi_z+g_z\Delta t\langle r|\hat\sigma_z|r\rangle\\
&=\Xi_z.\label{eq:41zkekka}
\end{array}
\end{equation}
Then, because of the collapse of the wave function, the density matrix $\hat\rho_1(t_z)$ of $S$ and $M_x$ after $t_z$ becomes
\begin{equation}
\hat\rho_1(t_z)=\frac{1}{2}\Big(|+\rangle\langle +|+|-\rangle\langle -|\Big)\otimes |\phi_x\rangle\langle\phi_x|,\label{eq:deco}
\end{equation}
and the expectation value $\langle\xi_x\rangle_1$ of $\hat\xi_x$ at $t_x$ is
\begin{equation}
\begin{array}{rl}
\langle\xi_x\rangle_1&={\rm Tr}\big[\exp(-i\hat H_x\Delta t)\hat\rho_1(t_z)\exp(+i\hat H_x\Delta t)\hat\xi_x\big]\\
&=\Xi_x+\frac{1}{2}g_x\Delta t\Big(\langle +|\hat\sigma_x|+\rangle+\langle -|\hat\sigma_x|-\rangle\Big)\\
&=\Xi_x.\label{eq:41xkekka}
\end{array}
\end{equation}
\subsection{$t_x<t_z$}
Because no collapse of the wave function has occurred before $t_x$, the state $|\Phi(t_0)\rangle$ of the combined system between $t_0$ and $t_x$ is
\begin{equation}
|\Phi(t_0)\rangle=\exp(-i\hat H_x\Delta t)|\Phi(0)\rangle.\label{eq:phit0}
\end{equation}
Therefore, the expectation value $\langle\xi_x\rangle_2$ of $\hat\xi_x$ at $t_x$ is
\begin{equation}
\begin{array}{rl}
\langle\xi_x\rangle_2&={\rm Tr}\big[|\Phi(t_0)\rangle\langle\Phi(t_0)|\hat\xi_x\big]\\
&=\Xi_x+g_x\Delta t\langle r|\hat\sigma_x|r\rangle\\
&=\Xi_x+g_x\Delta t.\label{eq:44xkekka}
\end{array}
\end{equation}
In the second line of (\ref{eq:44xkekka}), we have neglected the terms of $\mathcal{O}(\hbar)$ and more.
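The spin matrix elements underlying (\ref{eq:41zkekka}), (\ref{eq:41xkekka}) and (\ref{eq:44xkekka}) can be checked numerically. The sketch below represents $\hat\sigma_z$, $\hat\sigma_x$ and $|r\rangle$ in the $\{|+\rangle,|-\rangle\}$ basis and confirms that the pointer shifts are $g_z\Delta t\langle r|\hat\sigma_z|r\rangle=0$, $\frac{1}{2}g_x\Delta t(\langle +|\hat\sigma_x|+\rangle+\langle -|\hat\sigma_x|-\rangle)=0$ after the collapse, and $g_x\Delta t\langle r|\hat\sigma_x|r\rangle=g_x\Delta t$ without it:

```python
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex)  # sigma_z in the {|+>, |->} basis
sx = np.array([[0, 1], [1, 0]], dtype=complex)   # sigma_x
plus = np.array([1, 0], dtype=complex)
minus = np.array([0, 1], dtype=complex)
r = (plus + minus) / np.sqrt(2)                  # |r>, with sigma_x |r> = |r>

def expval(op, psi):
    return np.vdot(psi, op @ psi).real

shift_z = expval(sz, r)                          # <r|sigma_z|r> = 0
shift_x_collapsed = 0.5 * (expval(sx, plus) + expval(sx, minus))  # mixed state: 0
shift_x_entangled = expval(sx, r)                # <r|sigma_x|r> = 1
```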
We associate the thought experiment in this section with the reversible Stern--Gerlach experiment\cite{revSG1,revSG2}, the quantum-eraser experiment\cite{eraser1,eraser2,eraser3} and the delayed-choice experiment\cite{delay1, delay2}. Alice can cancel the interaction and return the state $|\Phi (0)\rangle$ to $|\Phi\rangle$ before Bob does his work, i.e., before $t_0$. Therefore, we conclude that Alice has no information about the state before $t_z$, and (\ref{eq:44xkekka}) does not conflict with the no-cloning theorem.
\begin{figure}
\centering
\includegraphics[width=10cm]{ptep2.jpg}
\caption{Time orderings in Section 2.}
\end{figure}
\subsection{Transmission of information}
Because the two expectation values (\ref{eq:41xkekka}) and (\ref{eq:44xkekka}) are not the same, Alice can change the expectation value of $\hat\xi_x$ and transmit information to Bob. If Alice restarts $M_z$ to amplify and output the results before $t_0$, Bob will obtain the expectation value (\ref{eq:41xkekka}) and learn that {\it the Giants} won. Conversely, if Alice does not restart the measuring process, Bob will obtain (\ref{eq:44xkekka}) and be unhappy to learn that {\it the Giants} lost.
As shown in Appendix B, this result is consistent with the no-communication (no-signaling) theorem.
\section{Special relativity and nonlocality}
\subsection{Requirement of special relativity}
In the previous section, we showed that we can communicate by means of collapse of the wave function. Therefore, the collapse of the wave function should propagate at the speed of light or slower to be consistent with special relativity. On the other hand, the clear violation of the Bell inequality rules out the description of quantum mechanics as a local hidden variable theory. To demonstrate their consistency, we consider other time orderings than those considered in the previous section.
In the following, we assume that the collapse of the wave function propagates at the speed of light, and hence we use the equal sign (=) for the time ordering of two events that are spacelike separated and $<$ or $>$ for the time ordering of timelike-separated events.
\subsection{$t_0\le t_z<t_x$}
The time ordering between $t=0$ and $t_z$ may change and the state of the combined system just before $t_z$ is (\ref{eq:phit0}) or (\ref{eq:state0}) depending on the frame of reference. The expectation values of $\hat\xi_z$ for both states are the same:
\begin{equation}
\langle\xi_z\rangle_3=\Xi_z.\label{eq:42zkekka}
\end{equation}
Therefore, the difference between the frames of reference causes no inconsistency.
\begin{figure}
\centering
\includegraphics[width=10cm]{ptep3.jpg}
\caption{time orderings in the 3rd section}
\end{figure}
\subsection{$t_0<t_z=t_x$}
The state of the combined system just before $t_z$ and $t_x$ is (\ref{eq:phit0}), for which both $M_z$ and $M_x$ are operated. Because $\hat\sigma_z$ and $\hat\sigma_x$ do not commute, the Hilbert space to which the state (\ref{eq:phit0}) belongs is not a direct product of the individual Hilbert spaces of $\hat\sigma_z$ and $\hat\sigma_x$. The states of $S$ and $M_z$ are entangled via the interaction whose Hamiltonian is (\ref{eq:Hz}); similarly, the states of $S$ and $M_x$ are entangled via the interaction whose Hamiltonian is (\ref{eq:Hx}). Therefore, the states of $M_z$ and $M_x$ are also entangled, and the measurement processes of $\hat\xi_z$ and $\hat\xi_x$ depend on each other, even though they commute\cite{Arthurs,moc2}. They are parts of {\it one} measurement process of $\hat\xi_z\hat\xi_x$, whose expectation value $\langle\xi_z\xi_x\rangle_4$ is
\begin{equation}
\begin{array}{rl}
\langle\xi_z\xi_x\rangle_4=&{\rm Tr}\big[|\Phi(t_0)\rangle\langle\Phi(t_0)|\hat\xi_z\hat\xi_x\big]\\
=&\Xi_z\Xi_x+\Xi_zg_x\Delta t\langle r|\hat\sigma_x|r\rangle+\Xi_xg_z\Delta t\langle r|\hat\sigma_z|r\rangle\\
&+\frac{1}{2}g_zg_x(\Delta t)^2\langle r|(\hat\sigma_z\hat\sigma_x+\hat\sigma_x\hat\sigma_z)|r\rangle\\
=&\Xi_z(\Xi_x+g_x\Delta t).\label{eq:43zxkekka}
\end{array}
\end{equation}
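The last step of (\ref{eq:43zxkekka}) relies on $\langle r|\hat\sigma_z|r\rangle=0$ and the anticommutation relation $\hat\sigma_z\hat\sigma_x+\hat\sigma_x\hat\sigma_z=0$; both, and the resulting identity, can be confirmed with a short computation (the numerical values of $\Xi_z,\Xi_x,g_z,g_x,\Delta t$ below are arbitrary placeholders):

```python
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
r = np.array([1, 1], dtype=complex) / np.sqrt(2)

anticomm = sz @ sx + sx @ sz                     # {sigma_z, sigma_x}: vanishes
cross_term = np.vdot(r, anticomm @ r).real

# Placeholder constants (arbitrary illustrative values).
Xi_z, Xi_x, gz, gx, dt = 2.0, 3.0, 0.1, 0.2, 1.0
lhs = (Xi_z * Xi_x
       + Xi_z * gx * dt * np.vdot(r, sx @ r).real
       + Xi_x * gz * dt * np.vdot(r, sz @ r).real
       + 0.5 * gz * gx * dt**2 * cross_term)
rhs = Xi_z * (Xi_x + gx * dt)
```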
The weak value\cite{Aha1,Aha2} is evidence of such entanglement. As confirmed by many experiments\cite{Res,Lun,Yok}, the measured value of weak measurement agrees with the corresponding weak value. Although the first measurement and the post-selection are not spacelike-separated but sequential in the weak measurement, similar entanglement of the apparatuses is observed because of the weakness of the first measurement. See the Appendix for details.
\subsection{Consistency}
The expectation value (\ref{eq:43zxkekka}) is obtained as a necessary consequence of the nonlocality of quantum mechanics and our conclusion in the second section that the collapse of the wave function propagates at the speed of light or slower. For consistency with special relativity, Alice should not be able to know which ordering is correct, $t_z=t_x$ or $t_z<t_x$, and hence the output of $M_z$ described in section 3.3 should not be distinguishable from that in section 3.2. Similarly, Bob should not be able to know which ordering is correct, $t_z=t_x$ or $t_z>t_x$, and hence the output of $M_x$ as described in section 3.3 should not be distinguishable from that of 2.3. As shown in the previous sections, (\ref{eq:44xkekka}), (\ref{eq:42zkekka}), and (\ref{eq:43zxkekka}) are expectation values for the state (\ref{eq:phit0}). Taking account of the commutativity between $\hat\xi_z$ and $\hat\xi_x$ as well, we conclude that it is impossible for Alice or Bob to know about spacelike-separated events despite the nonlocality of quantum mechanics.
In contrast, if we assume that the collapse of the wave function propagates instantaneously, the expectation values of $\hat\xi_x$ for the time ordering $t_z=t_x$ calculated in different frames of reference may be different. That is, this assumption of the instantaneous propagation is inconsistent with special relativity.
\section{Discussion}
In the previous sections, we clarified that communication by means of collapse of the wave function is possible and is consistent with both special relativity and the nonlocality of quantum mechanics. In some textbooks, however, the consistency between special relativity and the nonlocality of quantum mechanics is considered to be guaranteed on the hypothesis that communication by means of quantum correlation is impossible. For example, Redhead claimed this in his famous book\cite{red}, but his proof was inadequate because he ignored entanglement of quantum states. On the other hand, there is an opposite way of explaining things. The impossibility of communication by means of quantum correlation is occasionally regarded not as evidence of consistency but as its necessary consequence. We should be careful not to confuse the nonlocality of quantum mechanics with instantaneous communication by means of quantum correlation\cite{Esp}. Communication by means of collapse of the wave function at the speed of light or slower is consistent with both special relativity and the nonlocality of quantum mechanics, as shown in the previous section.
The main objection to our conclusion may be the assumption that the EPR--Bohm experiment demonstrated instantaneous propagation of the collapse of the wave function. As shown below, however, this objection is wrong. The EPR--Bohm experiment proves only the nonlocality of quantum mechanics. To show this, we consider measurement of the spins in the $z$ direction of an EPR pair of spin 1/2 particles whose initial state $|I\rangle$ is defined as
\begin{equation}
|I\rangle =\frac{1}{\sqrt{2}}\Big( |+\rangle_1|-\rangle_2+|-\rangle_1|+\rangle_2\Big),\label{eq:apb}
\end{equation}
where $|+\rangle_{1(2)}$ and $|-\rangle_{1(2)}$ are the eigenstates of $\hat\sigma_z^{1(2)}$ of particle 1 (2) with eigenvalues $+1$ and $-1$, respectively.
If the two measurement processes for the two particles are timelike separated, there is no mystery: the collapse of the wave function propagates from one particle to the other, and the later measurement is performed on a state whose spin has already been determined.
In contrast, we must carefully observe the experiment if the two measuring processes are spacelike separated. Firstly, we notice that $|I\rangle$ can also be expressed as an eigenstate of the spin-correlation operator $\hat C$:
\begin{equation}
\begin{array}{rl}
\hat C\equiv&\hat\sigma_z^1\otimes\hat\sigma_z^2\\
=&\big(|+\rangle_1|+\rangle_2\langle +|_2\langle +|_1+|-\rangle_1|-\rangle_2\langle -|_2\langle -|_1\\
&-|+\rangle_1|-\rangle_2\langle -|_2\langle +|_1-|-\rangle_1|+\rangle_2\langle +|_2\langle -|_1\big),
\end{array}
\end{equation}
\begin{equation}
\hat C|I\rangle =-|I\rangle.\label{eq:soukan}
\end{equation}
If the two measurement processes are spacelike separated, our measurement of both spins is regarded as {\it one} measurement of $\hat C$ for the entangled state, because the states of particles 1 and 2 are entangled. In this case, what we obtain is the expectation value of $\hat C$, which is similar to the situation in section 3.3\footnote{In the EPR--Bohm case, the expectation value obtained by each observer can also be interpreted as the independent expectation values of the spin of each particle, because $\hat\sigma_z^1$ and $\hat\sigma_z^2$ commute.}. The spin of each particle cannot be thought to have been determined before the measurement, as confirmed by the violation of the Bell inequality. Nevertheless, the expectation value of $\hat C$ for $|I\rangle$ is $-1$ because $|I\rangle$ is the eigenstate of $\hat C$, which has an eigenvalue of $-1$. Therefore, accepting the nonlocality of the wave function is sufficient to understand the consistency between our conclusion and the result of the EPR--Bohm experiment.
Note that what we want to demonstrate here is only that the EPR--Bohm experiment does not disprove our conclusion, although no local theory can explain {\it why} the two outcomes are always correlated in this way.
\section{Conclusion}
We showed that quantum communication by means of collapse of the wave function is possible. Therefore, consistency with special relativity requires the collapse of the wave function to propagate at the speed of light or slower. This fact is also consistent with the nonlocality of quantum mechanics.
\section*{Appendix : Entanglement between the states of $M_z$ and $M_x$ in $|\Phi (t_0)\rangle$}
In this appendix, we show that the states of $M_z$ and $M_x$ in (\ref{eq:phit0}) are entangled and that the weak value is evidence of such entanglement. The discussion of weak measurement in this appendix is based on \cite{moc2}.
If $t_0<t_z=t_x$, the state of the combined system just before $t_z$ and $t_x$ is (\ref{eq:phit0}):
\[
\begin{array}{rl}
|\Phi(t_0)\rangle&=\exp(-i\hat H_x\Delta t)|\Phi(0)\rangle\\
&=\exp(-i\hat H_x\Delta t)\exp(-i\hat H_z\Delta t)|\Phi\rangle.
\end{array}
\]
We define the partial density matrix $\hat\rho^{(m)}(t_0)$ of the two measuring devices as
\begin{equation}
\hat \rho^{(m)}(t_0)={\rm Tr}^{(s)}\big[|\Phi(t_0)\rangle\langle\Phi(t_0)|\big],\label{eq:density}
\end{equation}
where ${\rm Tr}^{(s)}$ is the partial trace of the observed system.
If the ensembles $\mathcal{M}_z$ of $M_z$ and $\mathcal{M}_x$ of $M_x$ after their unitary interaction with the measured system are both separately obtained by combining all the elements of their sub-ensembles, each of them can be described by its own ket: each element of $\mathcal{M}_z$ belongs to one of the sub-ensembles $E_{\alpha},\ \alpha =1,2,\cdots$, described by $|Z_{\alpha}\rangle$, and each element of $\mathcal{M}_x$ belongs to one of the sub-ensembles $E_{\beta},\ \beta =1,2,\cdots$, described by $|X_{\beta}\rangle$. In that case, the sub-ensemble $\varepsilon_{\alpha,\beta}$ of the combined measuring device, whose elements belong to both $E_{\alpha}$ and $E_{\beta}$, is described by the density matrix
\[
\hat\rho_{\alpha ,\beta}=|X_{\beta}\rangle |Z_{\alpha}\rangle\langle Z_{\alpha}|\langle X_{\beta}|,
\]
and the ensemble of the combined measuring device is described as the weighted sum of $\hat\rho_{\alpha,\beta}$:
\begin{equation}
\hat\rho^{\prime\prime}=\sum_{\alpha ,\beta}P_{\alpha ,\beta}\hat\rho_{\alpha ,\beta},\label{eq:simulrhoprime}
\end{equation}
where $P_{\alpha ,\beta}$ are suitable factors. However, (\ref{eq:density}) does not take the form of (\ref{eq:simulrhoprime}). Therefore, the states of $M_z$ and $M_x$ in (\ref{eq:phit0}) are entangled.
A similar entanglement of the apparatuses appears in weak measurement.
Suppose we weakly measure an observable $\hat A$ and then select the final state $|F\rangle$. After the interaction, the state of the unified system, comprising the observed system and the two measuring devices (one weakly measuring $\hat A$, the other selecting the final state), is
\begin{equation}
|\Psi(t)\rangle=\exp(-i\hat H_F\Delta t )\exp(-i\hat H_A\Delta t )|\Psi(0)\rangle,\label{eq:enta}
\end{equation}
where $|\Psi(0)\rangle$ is the initial state of the combined system and $\hat H_F$ and $\hat H_A$ are defined as
\[
\hat H_F=g_F|F\rangle\langle F|\hat\pi_F,
\]\[
\hat H_A=g_A\hat A\hat\pi_A.
\]
where $\hat\pi_F$ and $\hat\pi_A$ are the momenta conjugate to the probe positions $\hat x_F$ and $\hat x_A$ introduced below. Hereafter, we put $g_F\Delta t=1$ for simplicity.
We define the partial density matrix $\hat \rho_w^{(m)}(t)$ of the measuring devices as
\begin{equation}
\hat \rho_w^{(m)}(t)={\rm Tr}^{(s)}\big[|\Psi(t)\rangle\langle\Psi(t)|\big].\label{eq:densitypsi}
\end{equation}
By calculating the expectation value of either the position operator $\hat x_A$ of the probe of the measuring device of $\hat A$ or $\hat x_F$, we can obtain the expectation value of either $\hat A$ or $\hat F=|F\rangle\langle F|$ accurately as follows:
\begin{equation}
\overline x_A= {\rm Tr}\big[\hat\rho_w^{(m)}(t)\hat x_A\big]
=g_A\Delta t\langle I|\hat A|I\rangle,
\end{equation}
\begin{equation}
\overline{x}_F= {\rm Tr}\big[\hat\rho_w^{(m)}(t)\hat x_F\big]=\langle I|\hat F|I\rangle.\label{eq:Fdake}
\end{equation}
For essentially the same reason as in the previous discussion, we cannot know the expectation values of both $\hat A$ and $\hat F$ simultaneously; instead, we should regard the measurement of $\hat x_A$ and $\hat x_F$ as {\it one} manipulation. The measured observable is $\hat x_F\hat x_A$, whose expectation value is
\begin{equation}
\begin{array}{rl}
\overline{x_Fx_A}&={\rm Tr}\big[ \hat x_F\hat x_A\hat\rho_w^{(m)}(t)\big]\\
&={\rm Tr}\big[ \hat x_A\hat x_F\hat\rho_w^{(m)}(t)\big]\\
&=\frac{1}{2}g_A\Delta t\langle I|(\hat F\hat A+\hat A\hat F)|I\rangle.\label{eq:FAAF}
\end{array}
\end{equation}
Because, in the post-selection, we keep only the cases in which the measured value $X_F$ of $\hat x_F$ (which is $1$ or $0$) equals $1$, the average of the measured value $X_A$ of $\hat x_A$ is equal to the average of $X_FX_A$ after the post-selection:
\begin{equation}
\langle X_A\rangle^{(p)}=\langle X_FX_A\rangle^{(p)},
\end{equation}
where $\langle\ \rangle^{(p)}$ stands for the average after post-selection. Without the post-selection,
\begin{equation}
\langle X_FX_A\rangle=\overline{x_Fx_A}.\label{eq:without}
\end{equation}
Moreover, $\langle X_FX_A\rangle^{(p)}$ is the sum of the post-selected $X_FX_A$'s, which equals the sum of all $X_FX_A$'s (the terms with $X_F=0$ vanish), divided by the number of post-selected data; it is therefore boosted by the factor $1/\langle X_F\rangle$:
\begin{equation}
\frac{\langle X_AX_F\rangle^{(p)}}{\langle X_AX_F\rangle}=\frac{1}{\langle X_F\rangle},
\end{equation}
where $\langle X_F\rangle$ is nearly equal to $\overline x_F$, because the first measurement is weak.
Gathering these pieces, we obtain
\begin{equation}
\langle X_A\rangle^{(p)}=\frac{\overline{x_Ax_F}}{\overline x_F}.\label{eq:average}
\end{equation}
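Relation (\ref{eq:average}) holds exactly at the level of sample averages whenever $X_F$ takes only the values $0$ and $1$. The following NumPy sketch, with purely hypothetical distributions for the outcomes, checks the identity:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Hypothetical measurement records: X_F is the 0/1 post-selection outcome,
# X_A the pointer reading of the weak measurement of A.
X_F = rng.integers(0, 2, size=N)
X_A = rng.normal(0.3, 1.0, size=N)

post = X_F == 1                          # post-selected runs only

lhs = X_A[post].mean()                   # <X_A>^(p)
rhs = (X_F * X_A).mean() / X_F.mean()    # bar{x_F x_A} / bar{x_F}
print(np.isclose(lhs, rhs))              # True: terms with X_F = 0 drop out
```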
By means of (\ref{eq:Fdake}) and (\ref{eq:FAAF}), (\ref{eq:average}) becomes
\begin{equation}
\frac{\langle X_A\rangle^{(p)}}{g_A\Delta t}=\frac{\langle I|(\hat F\hat A+\hat A\hat F)|I\rangle}{2\langle I|\hat F|I\rangle}.\label{eq:real}
\end{equation}
The right-hand side of (\ref{eq:real}) is the real part of the weak value
\begin{equation}
\langle \hat A\rangle^w\equiv\frac{\langle F|\hat A|I\rangle}{\langle F|I\rangle}.\label{eq:wv}
\end{equation}
As confirmed by many experiments, the measured value of the weak measurement agrees with the corresponding weak value. We have shown this theoretically by taking into account the entanglement between the states of the two measuring devices. If we ignored this entanglement and regarded (\ref{eq:wv}) as some kind of expectation value of $\hat A$, we would suffer from the anomalous-weak-value problem. Therefore, we assert that the weak value is evidence of the entanglement between the states of the two measuring devices in (\ref{eq:enta}), and thus in (\ref{eq:phit0}).
\section{Data Assimilation algorithms}
\label{sec:DA_algorithms}
\graphicspath{{Cordier_Figs/}}
\subsection{Kalman filter algorithm}
\label{sec:KFalgo}
The Kalman filter algorithm discussed at the beginning of Sec.~\ref{sec:maths} corresponds to Fig.~\ref{fig:KF_Init_analyse}.
\begin{figure}[htbp]
\centering
\includegraphics[width=1\textwidth]{Kalman_Filter_Cordier_Init_Analyse.pdf}
\caption{\label{fig:KF_Init_analyse}Kalman Filter algorithm. The initialization is made with the analysed state.}
\end{figure}
\subsection{Ensemble Kalman filter algorithm}
\label{sec:EnKFalgo}
An efficient implementation of the EnKF relying on anomaly matrices is given in Algo.~\ref{alg:Stoch_EnKF_Anomaly_Carrassi}. We have used the secant method described in \cite{asch_data_2016} to change the definition of the variable $\mathbf{Y}_k^\text{f}$.
\input{Cordier_Algorithm/Stoch_EnKF_Anomaly_Carrassi}
\subsection{Dual Ensemble Kalman filter algorithm}
\label{sec:DualEnKFalgo}
An efficient implementation of the Dual EnKF relying on anomaly matrices is given in Algo.~\ref{alg:Stoch_Dual_EnKF_Moradkhani_Carrassi}. We have slightly adapted this algorithm from \cite{DENKF_MORADKHANI2005}.
\input{Cordier_Algorithm/Stoch_Dual_EnKF_Moradkhani_Carrassi}
\subsection{Multigrid Ensemble Kalman filter algorithm}
\label{sec:MEnKF}
\input{Cordier_Algorithm/Multigrid_EnKF_Cordier}
\input{Cordier_Algorithm/Stoch_Dual_EnKF_Moradkhani_Carrassi_Coarse_Grid}
\input{Cordier_Algorithm/Inner_loop}
Algorithm \ref{alg:algo_KF_corrected_corrected} represents a simplified, ready-to-use application of the conceptual methodology presented in
Sec.~\ref{sec:MGEnKF_Description} and Figs.~\ref{fig:schema_MGENKF} and \ref{fig:schema_inou_loop}.
\section{Introduction}
\label{sec:introduction}
The analysis and control of complex configurations for high-Reynolds-number problems of industrial interest is one of the most distinctive open challenges that the scientific community has to face for fluid mechanics applications in the coming decades. The essential non-linearity of this class of flows is responsible for multiscale interactions and an extreme sensitivity to minimal variations in the set-up of the problem. Under this perspective, applications using data-driven tools from Data Assimilation \cite{Daley1991_cambridge,Simon2006_wiley}, in particular sequential tools such as the Kalman filter (KF) \cite{Kalman1960_jbe} or the ensemble Kalman filter (EnKF) \cite{Evensen2009_Springer,Asch2016_SIAM}, have been recently used to obtain a precise estimation of the physical flow state accounting for bias or uncertainty in the performance of the investigative tool \cite{Rochoux2014_nhess,Kato2015_jcp,Meldi2017_jcp,Meldi2018_ftc,Labahn2019_pci,Zhang2020_cf,Zhang2020_jcp}. Advances in EnKF Data Assimilation for meteorological applications have inspired early studies in computational fluid dynamics (CFD), dealing in particular with the statistical inference of boundary conditions \cite{Mons2016_jcp} or the optimization of the behavior of turbulence / subgrid scale modeling \cite{Wilcox1988_AIAA,Pope2000_cambridge,Durbin2001_Wiley}. However, further advances are needed for a systematic application to complex flows. The first problematic aspect is the computational cost of generating the ensemble. This issue has usually been bypassed by relying on the use of stationary reduced-order numerical simulations such as RANS \cite{Xiao2019_pas}, providing some statistical inference of turbulence modeling \cite{Kato2015_jcp,Xiao2016_jcp,Zhang2020_cf,Zhang2020_jcp}.
Applications to scale-resolving unsteady flows such as direct numerical simulation (DNS) and large eddy simulation (LES) are much rarer in the literature, because of the computational resources required \cite{Labahn2019_pci,Mons2019_jcp} to generate an acceptably large database to perform a converged parametric inference. Alternative strategies recently explored to reduce the computational costs deal with multilevel \cite{Hoel2016_SIAM,Siripatana2019_cg,Fossum2020_cg} / multifidelity \cite{Popov2021_SIAM} ensemble applications.
A second issue to be faced is that the solution at the end of the time step should, as much as possible, comply with the dynamic equations represented by the discretized numerical model (\emph{conservativity}). One may argue that, if the solution at the beginning of the time step is not accurate, then conservativity is not an efficient objective to be targeted. However, it is also true that the correction performed to the state estimation by KF methods may be responsible for non-physical perturbations of the predicted state, which may produce irreversible numerical instabilities for complex flows.
In a recent work, the Authors have presented a multigrid ensemble Kalman filter \cite{Moldovan2021_jcp} (MEnKF, from now on renamed MGEnKF). This strategy manipulates data using multiple meshes with different resolutions, exploiting the natural multilevel structure of multigrid solvers.
\textcolor{Reviewer1}{This algorithm belongs to the class of multilevel methods whose modern treatment was first proposed by Giles \cite{giles_2008, giles_2015}.} The main advantage of the MGEnKF is that a good level of accuracy of the DA procedure (comparable to classical application of the EnKF) is obtained with a significant reduction of the computational resources required. In addition, spurious oscillations of the physical variables due to the state estimation are naturally damped by the iterative resolution procedure of the multigrid solver. In the case of the classical \textcolor{Reviewer2}{Full Approximation Scheme} (FAS) two-grid multigrid algorithm, the sources of information operating in the MGEnKF are the following:
\begin{itemize}
\item \emph{One main simulation} whose final solution at each time step is provided on the fine level of the grid.
\item \emph{An ensemble of low-resolution simulations}, which are performed at the coarse level of the grid.
\item Some \emph{observations} which are provided locally in space and time in the physical domain.
\end{itemize}
The data assimilation procedure, which will be described in detail in Sec.~\ref{sec:maths}, relies on the recursive nature of the multigrid algorithm. The update of the physical state of the flow as well as its parametric description are obtained via two sets of operations:
\begin{itemize}
\item In the \emph{outer loop}, a classical EnKF procedure is performed using the results from an ensemble of simulations and the observation. The EnKF is used to update the physical state and the parametric description of \textit{i)} every member of the ensemble on the coarse grid and \textit{ii)} the main simulation on the fine grid level, via a projection operator.
\item In the \emph{inner loop}, the physical state obtained with the main simulation, which is more precise than the predicted state by the ensemble members, is used as surrogate observation in a new optimization procedure to improve the predictive capabilities of the physical model over the coarse grid.
\end{itemize}
In the first article detailing the MGEnKF \cite{Moldovan2021_jcp}, the focus of the analysis has been over the performance of the \emph{outer loop}. This choice was performed due to the central contribution of this loop to the global data assimilation strategy. For this reason, the \emph{inner loop} was suppressed in order to obtain an unambiguous assessment of this element of the MGEnKF.
In this manuscript, an extensive analysis is performed to assess the effects of the \emph{inner loop} over the global accuracy obtained via the MGEnKF. While the accuracy of the numerical model employed to obtain the predicted states for the ensemble members is directly affected by the \emph{outer loop}, further significant improvement is expected with the application of the \emph{inner loop} for two main reasons. The first one is that the usage of surrogate observation from the main simulation is \emph{consistent} with the numerical model used for time advancement on the different refinement levels of the computational grids. Therefore, biases that can affect data assimilation using very different sources of information (such as experiments and numerical results) are naturally excluded. One can also expect a faster rate of convergence owing to this property. The second valuable feature of the \emph{inner loop} is that, as the whole physical state of the main simulation is known on the fine grid level, the surrogate observation can be sampled everywhere in the physical domain. One of the main problematic aspects when assimilating experimental results in numerical models is that often the placement of sensors is affected by physical limitations which can preclude the sampling in highly sensitive locations. This problem is completely bypassed in the \emph{inner loop}, where the user can arbitrarily select the number and the location of sensors. This also opens perspectives of automatic procedures to determine optimal sensor placement \cite{Mons2017_jfm,Mons2021_jfm}, which are not explored in the present manuscript but will be targeted in future works.
The manuscript is organized as follows. In Sec.~\ref{sec:maths}, the mathematical and numerical ingredients used in the framework of the MGEnKF model are introduced and discussed. An extensive discussion of the strategy used to perform the \emph{inner loop} is provided. In Sec.~\ref{sec:apriorianalysis}, the numerical models optimized in the \emph{inner loop} are discussed. In Sec.~\ref{sec:advection}, the results of the numerical simulations for the analysis of the one-dimensional advection equation are provided and discussed. In Sec.~\ref{sec:Burgers}, the analysis is extended to the one-dimensional Burgers' equation. In this case, the accurate representation of non-linear effects is investigated. For both test cases, comparisons between results obtained via the MGEnKF with or without \emph{inner} loop are performed. In Sec.~\ref{sec:conclusions}, concluding remarks and future developments are discussed.
\section{Model equations \& sequential state estimation}
\section{Ensemble Kalman Filter (EnKF) and multigrid version (MGEnKF)}
\label{sec:maths}
The Kalman Filter (KF) \cite{Kalman1960_jbe} is a sequential data assimilation tool which provides an estimation of a state variable at time $k$ ($\mathbf{x}_k$), combining the initial estimate $\mathbf{x}_0$, a set of observations $\mathbf{y}^\text{o}_k$, and a linear dynamical model $\mathbf{M}_{k:k-1}$. The state estimation is obtained via two successive operations, which are usually referred to as \textit{forecast} step (superscript $f$) and \textit{analysis} step (superscript $a$). The classical version of the algorithm reads as follows:
\begin{subequations}
\begin{align}
\mathbf{x}^\text{f}_{k} & = \mathbf{M}_{k:k-1} \mathbf{x}^\text{a}_{k-1}\label{eq:x_mdl_kf}\\
\mathbf{P}^\text{f}_{k}& = \mathbf{M}_{k:k-1} \mathbf{P}^\text{a}_{k-1} {\mathbf{M}^\top_{k:k-1}}+\mathbf{Q}_k \label{eq:P_mdl_kf} \\
\mathbf{K}_k& = \mathbf{P}^\text{f}_{k}{\mathbf{H}^\top_k}\left(\mathbf{H}_k \mathbf{P}^\text{f}_{k} {\mathbf{H}^\top_k}+\mathbf{R}_k\right)^{-1}\label{eq:k_kf}\\
\mathbf{x}^\text{a}_{k}& = \mathbf{x}^\text{f}_{k} + \mathbf{K}_k\left(\mathbf{y}^\text{o}_k-\mathbf{H}_k\mathbf{x}^\text{f}_{k}\right)\label{eq:x_kf}\\
\mathbf{P}^\text{a}_{k}& = \left(I-\mathbf{K}_k \mathbf{H}_k\right)\mathbf{P}^\text{f}_{k}\label{eq:P_kf}
\end{align}
\end{subequations}
The final state estimation $\mathbf{x}^\text{a}_{k}$ is obtained in \eqref{eq:x_kf} by combining the predictor solution $\mathbf{x}^\text{f}_{k}$ and a correction term accounting for the real observation $\mathbf{y}^\text{o}_k$ and the observation predicted from $\mathbf{x}^\text{f}_{k}$ through the linear observation operator $\mathbf{H}_k$. This correction term is weighted by the matrix $\mathbf{K}_k$, usually referred to as \textit{Kalman gain}. The matrices $\mathbf{Q}_k$ and $\mathbf{R}_k$ measure the level of confidence in the model and in the observation, respectively. The error covariance matrix $\mathbf{P}^{\text{f}/\text{a}}_k$, defined as $
\mathbb{E}\left[
\left(\mathbf{x}_k^{\text{f}/\text{a}} - \mathbb{E}(\mathbf{x}_k^{\text{f}/\text{a}})\right)
\left(\mathbf{x}_k^{\text{f}/\text{a}} - \mathbb{E}(\mathbf{x}_k^{\text{f}/\text{a}})\right)^\top \right]$, is advanced in time in the \textit{forecast} step \eqref{eq:P_mdl_kf} and manipulated in the \textit{analysis} step \eqref{eq:P_kf}.
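As a minimal illustration of \eqref{eq:x_mdl_kf}--\eqref{eq:P_kf}, the sketch below implements one forecast/analysis cycle with NumPy and applies it to a hypothetical constant-velocity tracking problem (all matrices and noise levels are illustrative choices, not taken from the test cases of this work):

```python
import numpy as np

def kalman_step(x_a, P_a, y_o, M, H, Q, R):
    """One forecast + analysis cycle of the Kalman filter."""
    x_f = M @ x_a                                      # forecast state
    P_f = M @ P_a @ M.T + Q                            # forecast covariance
    K = P_f @ H.T @ np.linalg.inv(H @ P_f @ H.T + R)   # Kalman gain
    x_a = x_f + K @ (y_o - H @ x_f)                    # analysis state
    P_a = (np.eye(len(x_f)) - K @ H) @ P_f             # analysis covariance
    return x_a, P_a

# Hypothetical constant-velocity model: state = (position, velocity).
dt = 0.1
M = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])       # only the position is observed
Q = 1e-4 * np.eye(2)             # model-confidence matrix (illustrative)
R = np.array([[1e-2]])           # observation-confidence matrix (illustrative)

rng = np.random.default_rng(2)
x_true = np.array([0.0, 1.0])
x, P = np.array([0.5, 0.0]), np.eye(2)   # deliberately wrong initial estimate
for _ in range(200):
    x_true = M @ x_true
    y = H @ x_true + rng.normal(0.0, 0.1, size=1)  # noisy position measurement
    x, P = kalman_step(x, P, y, M, H, Q, R)

print(abs(x[0] - x_true[0]), abs(x[1] - x_true[1]))  # both errors are small
```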
These last two operations are usually extremely expensive and they may require computational resources that are orders of magnitude larger than the model operation. In order to alleviate such computational issues, one possible alternative is to provide an approximation of $\mathbf{P}$ using stochastic Monte-Carlo sampling. The Ensemble Kalman Filter (EnKF) \cite{Evensen1994,Evensen2009_Springer} provides an approximation of $\mathbf{P}_k$ by means of an ensemble of $N_\text{e}$ states.
Given an ensemble of forecast/analysis states at a given instant $k$, the ensemble matrix collecting the information of the database is defined as:
\begin{equation}
\pmb{\mathscr{E}}_k^{\text{f}/\text{a}}=
\left[\mathbf{x}_k^{\text{f}/\text{a},(1)},\cdots,\mathbf{x}_k^{\text{f}/\text{a},(N_\text{e})}\right]\in\mathbb{R}^{N_x\times N_\text{e}} \label{eq:Ensemble_Matrix}
\end{equation}
Starting from the data assembled in $\pmb{\mathscr{E}}_k^{\text{f}/\text{a}}$, the ensemble mean
\begin{equation}
\overline{\mathbf{x}_k^{\text{f}/\text{a}}}=\frac{1}{N_\text{e}}\sum^{N_\text{e}}_{i=1}\mathbf{x}_k^{\text{f}/\text{a},(i)} \label{eq:Ensemble_Mean}
\end{equation}
is used to obtain the normalized ensemble anomaly matrix:
\begin{equation}
\mathbf{X}_k^{\text{f}/\text{a}}=\frac{\left[\mathbf{x}_k^{\text{f}/\text{a},(1)}-\overline{\mathbf{x}_k^{\text{f}/\text{a}}},\cdots,\mathbf{x}_k^{\text{f}/\text{a},(N_\text{e})}-\overline{\mathbf{x}_k^{\text{f}/\text{a}}}\right]}{\sqrt{N_\text{e}-1}}\in\mathbb{R}^{N_x\times N_\text{e}}. \label{eq:Ensemble_Anomaly}
\end{equation}
The approximated error covariance matrix, hereafter denoted with the superscript $e$, is obtained via the matrix product:
\begin{equation}
\mathbf{P}_k^{\text{f}/\text{a},\text{e}}=\mathbf{X}_k^{\text{f}/\text{a}} \left(\mathbf{X}_k^{\text{f}/\text{a}}\right)^\top\in\mathbb{R}^{N_x\times N_x} \label{eq:Ensemble_P}
\end{equation}
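Equations \eqref{eq:Ensemble_Mean}--\eqref{eq:Ensemble_P} amount to the unbiased sample covariance of the ensemble, as the following NumPy sketch (with a random, purely illustrative ensemble) confirms:

```python
import numpy as np

rng = np.random.default_rng(3)
Nx, Ne = 5, 40
E = rng.normal(size=(Nx, Ne))           # illustrative ensemble matrix

xbar = E.mean(axis=1, keepdims=True)    # ensemble mean
X = (E - xbar) / np.sqrt(Ne - 1)        # normalized anomaly matrix
P_e = X @ X.T                           # ensemble covariance estimate

print(np.allclose(P_e, np.cov(E)))      # True: unbiased sample covariance
```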
The goal of the EnKF is to mimic the BLUE (Best Linear Unbiased Estimator) analysis of the Kalman filter.
For this, Burgers et al. \cite{Burgers1998} showed that the observation must be considered as a random variable with an average corresponding to the observed value and a covariance $\mathbf{R}_k$ (the so-called \emph{data randomization} trick).
Therefore, given the discrete observation vector $\mathbf{y}^\text{o}_k\in \mathbb{R}^{N_y}$ at an instant $k$, the ensemble of perturbed observations is defined as:
\begin{equation}
\mathbf{y}_k^{\text{o},(i)}=\mathbf{y}_k^\text{o}+\mathbf{\epsilon}_k^{\text{o},(i)},
\quad
\text{with}
\quad
i=1,\cdots,N_\text{e}
\quad
\text{and}
\quad
\mathbf{\epsilon}_k^{\text{o},(i)}\sim \mathcal{N}(0,\mathbf{R}_k)
.\label{eq:perturbed_y}
\end{equation}
A normalized anomaly matrix of the observations errors is defined as
\begin{equation}
\mathbf{E}_k^{\text{o}}=
\frac{1}{\sqrt{N_\text{e}-1}}
\left[
\mathbf{\epsilon}_k^{\text{o},(1)}-\overline{\mathbf{\epsilon}_{k}^{\text{o}}},
\mathbf{\epsilon}_k^{\text{o},(2)}-\overline{\mathbf{\epsilon}_{k}^{\text{o}}},
\cdots,
\mathbf{\epsilon}_k^{\text{o},(N_\text{e})}-\overline{\mathbf{\epsilon}_{k}^{\text{o}}},
\right]
\in\mathbb{R}^{N_y\times N_\text{e}}
\end{equation}
where
$\displaystyle\overline{\mathbf{\epsilon}_{k}^{\text{o}}}=\frac{1}{N_\text{e}}\sum_{i=1}^{N_\text{e}}\mathbf{\epsilon}_k^{\text{o},(i)}$.
The covariance matrix of the measurement error can then be estimated as
\begin{equation}
\mathbf{R}_k^\text{e}=\mathbf{E}^\text{o}_k \left(\mathbf{E}^\text{o}_k\right)^\top\in\mathbb{R}^{N_y\times N_y}. \label{eq:observation_matrix_errorcov}
\end{equation}
Combining the previous results, one obtains (see \cite{Asch2016_SIAM}) the standard stochastic EnKF algorithm. The corresponding analysis step consists of updates performed on each of the ensemble members, as given by
\begin{equation}
\mathbf{x}_k^{\text{a},(i)}=
\mathbf{x}_k^{\text{f},(i)}+
\mathbf{K}_k^\text{e}
\left(\mathbf{y}_k^{\text{o},(i)}-\mathbf{\mathcal{H}}_k\left(\mathbf{x}^{\text{f},(i)}_k\right)\right)\label{eq:ensemble_update}
\end{equation}
where $\mathbf{\mathcal{H}}_k$ is the non-linear observation operator. The expression of the Kalman gain is
\begin{equation}
\mathbf{K}_k^\text{e}=
\mathbf{X}_k^\text{f}
\left(\mathbf{Y}_k^\text{f}\right)^\top
\left(
\mathbf{Y}_k^\text{f}
\left(\mathbf{Y}_k^\text{f}\right)^\top
+
\mathbf{E}_k^\text{o} \left(\mathbf{E}_k^\text{o}\right)^\top
\right)^{-1}
\end{equation}
where $\mathbf{Y}_k^\text{f}=\mathbf{H}_k\mathbf{X}_k^\text{f}$.
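A compact NumPy sketch of the analysis step \eqref{eq:perturbed_y}--\eqref{eq:ensemble_update}, restricted to a linear observation operator and written with the anomaly matrices above (the scalar test problem at the end is purely illustrative):

```python
import numpy as np

def enkf_analysis(E, y_o, H, R, rng):
    """Stochastic EnKF analysis step with perturbed observations.

    E: (Nx, Ne) forecast ensemble, y_o: (Ny,) observation,
    H: (Ny, Nx) linear observation operator, R: (Ny, Ny) obs. covariance.
    """
    Nx, Ne = E.shape
    Ny = y_o.size
    # Data randomization: one perturbed observation per member.
    eps = rng.multivariate_normal(np.zeros(Ny), R, size=Ne).T   # (Ny, Ne)
    Y_pert = y_o[:, None] + eps
    # Normalized anomaly matrices.
    X = (E - E.mean(axis=1, keepdims=True)) / np.sqrt(Ne - 1)
    Yf = H @ X
    Eo = (eps - eps.mean(axis=1, keepdims=True)) / np.sqrt(Ne - 1)
    # Ensemble Kalman gain and member-wise update.
    K = X @ Yf.T @ np.linalg.inv(Yf @ Yf.T + Eo @ Eo.T)
    return E + K @ (Y_pert - H @ E)

# Illustrative scalar test: prior ensemble centred at 2, observation at 0.
rng = np.random.default_rng(4)
Ef = rng.normal(2.0, 1.0, size=(1, 50))
Ea = enkf_analysis(Ef, np.array([0.0]), np.eye(1), 0.1 * np.eye(1), rng)
print(abs(Ea.mean()) < abs(Ef.mean()), Ea.std() < Ef.std())
```

The analysis mean is pulled toward the observation and the ensemble spread shrinks, as expected for a reliable observation ($\mathbf{R}$ small compared with the prior variance).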
Applications of EnKF strategies in fluid mechanics and engineering deal with a number of different topics such as wildfire propagation \cite{Rochoux2014_nhess}, combustion \cite{Labahn2019_pci}, turbulence modeling \cite{Xiao2016_jcp,Zhang2020_cf,Zhang2021_cf} and hybrid variational-EnKF methods \cite{Mons2019_jcp}.
The state estimation procedure can be extended to infer the values of free parameters $\theta$ which describe the model. Several proposals have been reported in the literature (see the review work by Asch et al. \cite{Asch2016_SIAM} for an extended discussion). The strategy selected for this work is the \textit{dual estimation} proposed by Moradkhani et al. \cite{DENKF_MORADKHANI2005}.
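The parameter-estimation component of such dual schemes can be illustrated in isolation: the sketch below assimilates observations of a hypothetical linear observable $y_k=\theta\,u_k$ and updates only an ensemble of parameters, via the cross-covariance between the parameters and the predicted observations (the forcing, noise levels and prior are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
Ne = 100
theta_true = 0.5                          # hypothetical "true" parameter
psi = rng.normal(1.0, 0.3, size=Ne)       # prior parameter ensemble
R = 1e-2                                  # observation-error variance

for k in range(50):
    u = np.sin(0.1 * k) + 1.5             # known forcing (illustrative)
    y = theta_true * u + rng.normal(0.0, np.sqrt(R))   # observation
    y_pred = psi * u                      # member-wise predicted observation
    # Cross-covariance gain between parameters and predicted observations.
    a_psi = psi - psi.mean()
    a_y = y_pred - y_pred.mean()
    K = (a_psi @ a_y) / (a_y @ a_y + (Ne - 1) * R)
    # Perturbed observations, one per member (stochastic EnKF).
    y_i = y + rng.normal(0.0, np.sqrt(R), size=Ne)
    psi = psi + K * (y_i - y_pred)        # parameters updated, states untouched

print(abs(psi.mean() - theta_true))       # close to zero
```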
\subsection{Multigrid Ensemble Kalman Filter (MGEnKF)}
While the ensemble approximation $\mathbf{P}^\text{e}$ of the error covariance matrix $\mathbf{P}$ provides a significant reduction of the computational resources required by the Kalman filter, the generation of a database of $40$--$100$ elements may still be out of reach for complex applications in fluid mechanics. For this reason, the team has developed a multigrid ensemble Kalman filter (MGEnKF) \cite{Moldovan2021_jcp} strategy which exploits the numerical features of multigrid solvers. The algorithm is flexible and can be applied to numerical solvers using multiple refinement levels. Here, for the sake of simplicity, it is illustrated on the classical two-grid FAS (Full Approximation Scheme) \cite{Brandt1977_mc,Wesseling1999_jcam} algorithm. This scheme employs a fine grid (solutions indicated with the superscript F) and a coarse grid (superscript C) in a three-step procedure:
\begin{enumerate}
\item A time advancement is performed starting from the solution $\mathbf{x}^\text{\tiny F}_{k-1}$ to obtain the state $(\mathbf{x}^\text{\tiny F}_{k})^{*}$
\item $(\mathbf{x}^\text{\tiny F}_{k})^{*}$ is projected on the coarse grid level and the state $\left(\mathbf{x}^\text{\tiny C}_k\right)^{*}=
\Pi_\text{\tiny C}\left(\left(\mathbf{x}_k^\text{\tiny F}\right)^{*}\right)
$ is obtained. Iterative calculations are used to filter out high-frequency noise and derive the solution $\left(\mathbf{x}^\text{\tiny C}_k\right)^{'}$
\item The difference between $\left(\mathbf{x}^\text{\tiny C}_k\right)^{'}$ and $\left(\mathbf{x}^\text{\tiny C}_k\right)^{*}$ is projected on the fine grid, and the state $(\mathbf{x}^\text{\tiny F}_{k})^{*} + \Pi_\text{\tiny F}\left(\left(\mathbf{x}_k^\text{\tiny C}\right)^{'}-\left(\mathbf{x}_k^\text{\tiny C}\right)^{*}\right)$, where $\Pi_\text{\tiny F}$ is the prolongation operator from the coarse to the fine grid, is used as initial condition for a last set of iterations on the fine grid, which produces the final state $\mathbf{x}^\text{\tiny F}_{k}$
\end{enumerate}
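For a linear model the FAS cycle reduces to the classical two-grid correction scheme, which the following NumPy sketch illustrates on the 1D Poisson problem $-u''=f$ with homogeneous Dirichlet conditions (weighted-Jacobi smoothing, full-weighting restriction, linear-interpolation prolongation; a textbook illustration rather than the solver used in this work):

```python
import numpy as np

def jacobi(u, f, h, iters, w=2.0 / 3.0):
    """Weighted-Jacobi smoothing for -u'' = f, homogeneous Dirichlet BCs."""
    for _ in range(iters):
        left = np.concatenate(([0.0], u[:-1]))
        right = np.concatenate((u[1:], [0.0]))
        u = (1.0 - w) * u + w * 0.5 * (left + right + h * h * f)
    return u

def residual(u, f, h):
    left = np.concatenate(([0.0], u[:-1]))
    right = np.concatenate((u[1:], [0.0]))
    return f - (2.0 * u - left - right) / h**2

def restrict(v):
    """Full weighting: n fine interior points -> (n - 1) // 2 coarse points."""
    return 0.25 * v[:-2:2] + 0.5 * v[1:-1:2] + 0.25 * v[2::2]

def prolong(vc, n):
    """Linear interpolation from the coarse grid back to the fine grid."""
    v = np.zeros(n)
    v[1:-1:2] = vc                                 # coincident points
    padded = np.concatenate(([0.0], vc, [0.0]))
    v[0::2] = 0.5 * (padded[:-1] + padded[1:])     # in-between points
    return v

def two_grid(u, f, h, nu=3):
    """One two-grid cycle: pre-smooth, coarse-grid correction, post-smooth."""
    n = len(u)
    u = jacobi(u, f, h, nu)                        # 1. fine-grid iterations
    r_c = restrict(residual(u, f, h))              # 2. restrict the residual
    m, H = (n - 1) // 2, 2.0 * h
    A = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / H**2
    e_c = np.linalg.solve(A, r_c)                  #    exact coarse solve
    u = u + prolong(e_c, n)                        # 3. prolong the correction
    return jacobi(u, f, h, nu)                     #    and post-smooth

n = 63
h = 1.0 / (n + 1)
x = np.arange(1, n + 1) * h
f = np.pi**2 * np.sin(np.pi * x)
u_exact = np.sin(np.pi * x)
u = np.zeros(n)
for _ in range(5):
    u = two_grid(u, f, h)
print(np.max(np.abs(u - u_exact)))   # reduced to the discretization error
```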
\begin{figure}[htbp]
\includegraphics[width=0.95\textwidth]{figures/schema_MGENKF.pdf}
\caption{\label{fig:schema_MGENKF}
Schematic representation of the Multigrid Ensemble Kalman Filter (MGEnKF) \cite{Moldovan2021_jcp}. Two different levels of representation (fine and coarse grids) are used to obtain an estimation for the main simulation running on the fine grid. The inner and outer loop of the Dual Ensemble Kalman filter procedure are solved using ensemble members generated on the coarse grid.}
\end{figure}
\begin{figure}[htbp]
\includegraphics[width=1\textwidth]{figures/schema_inner_outer_loop.pdf}
\caption{\label{fig:schema_inou_loop}
Schematic representation of the box \textit{Dual EnKF} in Fig.~\ref{fig:schema_MGENKF}. Two optimization procedures, referred to as \textit{inner loop} and \textit{outer loop}, are sequentially performed.}
\end{figure}
A two-level EnKF, shown in Fig.~\ref{fig:schema_MGENKF}, is naturally integrated in the multigrid structure of the algorithm. This reduces the computational costs of the data assimilation procedure and helps render the solutions compatible with the dynamic equations of the model \cite{Moldovan2021_jcp}.
The \textit{Dual EnKF} box shown in Fig.~\ref{fig:schema_MGENKF} consists of two distinct parts: an \textit{outer loop}, where observations are obtained from an external source of information, and an \textit{inner loop}, where fine grid solutions projected onto the coarse grid are used as surrogate observations. The complete general algorithm for MGEnKF is structured in the following operations:
\begin{enumerate}
\item \textbf{Predictor step}. The initial solution on the fine grid $\left(\mathbf{x}^\text{\tiny F}_{k-1}\right)^\text{a}$ is used to calculate a forecast state $\left(\mathbf{x}^\text{\tiny F}_{k}\right)^\text{f}$
$$
\left(\mathbf{x}_k^\text{\tiny F}\right)^\text{f}=
\mathbf{\mathcal{M}}^\text{\tiny F}_{k:k-1}\left(
\left(\mathbf{x}_{k-1}^\text{\tiny F}\right)^\text{a},
\overline{\mathbf{\theta}_{k}^\text{a}}\right)
$$
where $\mathbf{\mathcal{M}}^\text{\tiny F}_{k:k-1}$ is the model used on the fine grid and $\overline{\mathbf{\theta}_{k}^\text{a}}$ is a set of free parameters describing the setup of the model on the fine grid. Each member $i$ of the ensemble calculated on the coarse grid is also advanced in time
$$
\left(\mathbf{x}_k^\text{\tiny C}\right)^\text{f,(i)}=
\mathbf{\mathcal{M}}^\text{\tiny C}_{k:k-1}\left(
\left(\mathbf{x}_{k-1}^\text{\tiny C}\right)^\text{a,(i)},
\mathbf{\theta}_{k}^\text{f,(i)}\right) + \mathbf{\mathcal{C}}_{k:k-1}\left(
\left(\mathbf{x}_{k-1}^\text{\tiny C}\right)^\text{a,(i)},
\mathbf{\psi}_{k}^\text{f,(i)}\right),
$$
where $\mathbf{\mathcal{M}}^\text{\tiny C}_{k:k-1}$ is the coarse grid model parameterized by $\mathbf{\theta}_{k}^\text{f,(i)}$, while $\mathbf{\mathcal{C}}_{k:k-1}$ is an additional correction term included to compensate the loss of resolution due to calculations on the coarse grid \cite{Brajard2020_arxiv}. This additional model, whose structure is usually unknown, is driven by the set of free parameters $\mathbf{\psi}_{k}^\text{f,(i)}$.
\item \textbf{Projection on the coarse grid \& \textit{inner} loop}. $\left(\mathbf{x}^\text{\tiny F}_{k}\right)^\text{f}$ is projected on the coarse grid space via a projection operator $\Pi_\text{\tiny C}$, so that $\left(\mathbf{x}^\text{\tiny C}_{k}\right)^{*}$ is obtained, \textit{i.e.}
$$
\left(\mathbf{x}^\text{\tiny C}_k\right)^{*}=
\Pi_\text{\tiny C}\left(\left(\mathbf{x}_k^\text{\tiny F}\right)^\text{f}\right)
$$
In this step, the flow field obtained on the fine mesh level is sampled to a set of observations $\left(\mathbf{y}_k^\text{\tiny C}\right)^\text{SO}=\left(\mathcal{H}_k^\text{\tiny C}\right)^\text{SO}\left(\mathbf{x}^\text{\tiny C}_k\right)^{*}$ which are used in the \textit{inner} loop as surrogate observations. Here, the EnKF is used as a \textit{parameter estimation only} scheme, \textit{i.e.} the ensemble states $\left(\mathbf{x}_k^\text{\tiny C}\right)^\text{f,(i)}$ are not modified, but the free parameters $\mathbf{\psi}_{k}^\text{f,(i)}$ are optimized to obtain values $\mathbf{\psi}_{k}^\text{a,(i)}$. This optimization targets an improvement of the prediction of the ensemble members simulated on the coarse grid via an update of the term $\mathbf{\mathcal{C}}_{k:k-1}\left(
\left(\mathbf{x}_{k-1}^\text{\tiny C}\right)^\text{a,(i)},
\mathbf{\psi}_{k}^\text{a,(i)}\right)$.
\item \textbf{\textit{Outer} loop}. If external observation $\left(\mathbf{y}_k^\text{\tiny C}\right)^\text{o}$ is available, the ensemble forecast $\left(\mathbf{x}^\text{\tiny C}_{k}\right)^{\text{f},(i)}$ is corrected with the standard Dual EnKF procedure to obtain $\left(\mathbf{x}^\text{\tiny C}_{k}\right)^{\text{a},(i)}$ as well as an update for the parameters $\mathbf{\theta}_{k}^{\text{a},(i)}$.
\item \textbf{Determination of the state variables on the coarse grid}.
In this step, the physical state of the main simulation is updated on the coarse grid. This solution, which will be referred to as $\left(\mathbf{x}^\text{\tiny C}_k\right)^{'}$, is obtained by classical iterative procedures on the coarse grid using the initial solution $\left(\mathbf{x}^\text{\tiny C}_{k}\right)^{*}$ if observations are not available.
On the other hand, if observations are available, the Kalman gain matrix $\left(\mathbf{K}_k^\text{\tiny C}\right)^{x,\text{e}}$ determined in the framework of EnKF for the ensemble members is used to determine the coarse grid solution $\left(\mathbf{x}^\text{\tiny C}_k\right)^{'}$ through a KF operation, \textit{i.e.}
\begin{align*}
\left(\mathbf{x}^\text{\tiny C}_k\right)^{'} & =
\left(\mathbf{x}^\text{\tiny C}_k\right)^{*}+
\left(\mathbf{K}_k^\text{\tiny C}\right)^{x,\text{e}}
\left[
\left(\mathbf{y}_k^\text{\tiny C}\right)^\text{o}-
\left(\mathbf{\mathcal{H}}_k^\text{\tiny C}\right)^\text{o}
\left(\left(\mathbf{x}_k^\text{\tiny C}\right)^{*}\right)
\right]
\end{align*}
\item \textbf{Final iteration on the fine grid}. The fine grid state solution $\left(\mathbf{x}^\text{\tiny F}_k\right)^{'}$ is determined using the results obtained on the coarse space: $\left(\mathbf{x}^\text{\tiny F}_k\right)^{'}=\left(\mathbf{x}^\text{\tiny F}_{k}\right)^\text{f}+\Pi_\text{\tiny F}\left(\left(\mathbf{x}^\text{\tiny C}_k\right)^{'}-\left(\mathbf{x}^\text{\tiny C}_{k}\right)^{*}\right)$. The state $\left(\mathbf{x}^\text{\tiny F}_k\right)^\text{a}$ is obtained from a final iterative procedure starting from $\left(\mathbf{x}^\text{\tiny F}_k\right)^{'}$.
\end{enumerate}
A detailed representation of the \textit{Dual EnKF} box shown in Fig.~\ref{fig:schema_MGENKF} is provided in Fig.~\ref{fig:schema_inou_loop}. This method is reminiscent of multilevel \cite{Hoel2016_SIAM,Siripatana2019_cg,KodyLaw2020,Fossum2020_cg} and multifidelity \cite{Gorodetsky2020_jcp,Popov2021_SIAM} ensemble techniques for data assimilation. However, this strategy does not need to include control variates as most approaches using stochastic Monte-Carlo sampling do. Therefore, only one main simulation on the fine grid is needed, significantly reducing the computational cost associated to the whole procedure.
\section{Ensemble Kalman Filter (EnKF) and multigrid version (MGEnKF)}
\label{sec:maths}
\subsection{Ensemble Kalman Filter (EnKF)}
\label{sec:EnKF_Description}
The Kalman Filter (KF) \cite{Kalman1960_jbe} is a sequential data assimilation tool which provides an estimation of a state variable at time $k$ ($\mathbf{x}_k$), combining the initial estimate $\mathbf{x}_0$, a set of observations $\mathbf{y}^\text{o}_k$, and a linear dynamical model $\mathbf{M}_{k:k-1}$. The state estimation is obtained via two successive operations, which are usually referred to as \emph{forecast} step (superscript $f$) and \emph{analysis} step (superscript $a$). The classical version of the algorithm reads as follows:
\begin{subequations}
\begin{align}
\mathbf{x}^\text{f}_{k} & = \mathbf{M}_{k:k-1} \mathbf{x}^\text{a}_{k-1}\label{eq:x_mdl_kf}\\
\mathbf{P}^\text{f}_{k}& = \mathbf{M}_{k:k-1} \mathbf{P}^\text{a}_{k-1} {\mathbf{M}^\top_{k:k-1}}+\mathbf{Q}_k \label{eq:P_mdl_kf} \\
\mathbf{K}_k& = \mathbf{P}^\text{f}_{k}{\mathbf{H}^\top_k}\left(\mathbf{H}_k \mathbf{P}^\text{f}_{k} {\mathbf{H}^\top_k}+\mathbf{R}_k\right)^{-1}\label{eq:k_kf}\\
\mathbf{x}^\text{a}_{k}& = \mathbf{x}^\text{f}_{k} + \mathbf{K}_k\left(\mathbf{y}^\text{o}_k-\mathbf{H}_k\mathbf{x}^\text{f}_{k}\right)\label{eq:x_kf}\\
\mathbf{P}^\text{a}_{k}& = \left(I-\mathbf{K}_k \mathbf{H}_k\right)\mathbf{P}^\text{f}_{k}\label{eq:P_kf}
\end{align}
\end{subequations}
The final state estimation $\mathbf{x}^\text{a}_{k}$ is obtained in \eqref{eq:x_kf} by combining the predictor solution $\mathbf{x}^\text{f}_{k}$ and a correction term accounting for the real observation $\mathbf{y}^\text{o}_k$ and the observation predicted from $\mathbf{x}^\text{f}_{k}$ through the linear observation operator $\mathbf{H}_k$. This correction term is weighted by the matrix $\mathbf{K}_k$, usually referred to as \emph{Kalman gain}. The matrices $\mathbf{Q}_k$ and $\mathbf{R}_k$ measure the level of confidence in the model and in the observation, respectively. The error covariance matrices $\mathbf{P}^{\text{f}/\text{a}}_k$, defined as $
\mathbb{E}\left[
\left(\mathbf{x}_k^{\text{f}/\text{a}} - \mathbb{E}(\mathbf{x}_k^{\text{f}/\text{a}})\right)
\left(\mathbf{x}_k^{\text{f}/\text{a}} - \mathbb{E}(\mathbf{x}_k^{\text{f}/\text{a}})\right)^\top \right]$, are advanced in time in the \emph{forecast} step \eqref{eq:P_mdl_kf} and manipulated in the \emph{analysis} step \eqref{eq:P_kf}.
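In compact form, one forecast/analysis cycle of Eqs.~\eqref{eq:x_mdl_kf}--\eqref{eq:P_kf} amounts to a few lines of linear algebra. The sketch below (in Python, with function and variable names of our choosing, not taken from any reference implementation) mirrors the notation of the text:

```python
import numpy as np

def kalman_step(x_a, P_a, M, Q, H, R, y_o):
    """One forecast/analysis cycle of the linear Kalman filter.

    Implements the five equations of the classical algorithm: state and
    covariance forecast, Kalman gain, state analysis, covariance analysis.
    All operators are plain NumPy matrices.
    """
    # Forecast step
    x_f = M @ x_a
    P_f = M @ P_a @ M.T + Q
    # Kalman gain
    S = H @ P_f @ H.T + R
    K = P_f @ H.T @ np.linalg.inv(S)
    # Analysis step
    x_a_new = x_f + K @ (y_o - H @ x_f)
    P_a_new = (np.eye(len(x_a)) - K @ H) @ P_f
    return x_a_new, P_a_new
```

As expected from the structure of the gain, when the full state is observed with a very small $\mathbf{R}_k$ the analysis is pulled almost entirely onto the observation.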
These last two operations are usually extremely expensive and may require computational resources orders of magnitude larger than those needed for the model integration. In order to alleviate such computational issues, one possible alternative is to provide an approximation of $\mathbf{P}$ using stochastic Monte-Carlo sampling. The Ensemble Kalman Filter (EnKF) \cite{Evensen1994,Evensen2009_Springer} provides an approximation of $\mathbf{P}_k$ by means of an ensemble of $N_\text{e}$ states. \textcolor{Reviewer1}{One clear advantage of the EnKF over low-rank approximations of the Kalman filter is that it can naturally advance the state in time using non-linear models.}
Given an ensemble of forecast/analysis states at a given instant $k$, the ensemble matrices collecting the information of the database are defined as:
\begin{equation}
\pmb{\mathscr{E}}_k^{\text{f}/\text{a}}=
\left[\mathbf{x}_k^{\text{f}/\text{a},(1)},\cdots,\mathbf{x}_k^{\text{f}/\text{a},(N_\text{e})}\right]\in\mathbb{R}^{N_x\times N_\text{e}}, \label{eq:Ensemble_Matrix}
\end{equation}
where $N_x$ is the size of the state vector. Starting from the data assembled in $\pmb{\mathscr{E}}_k^{\text{f}/\text{a}}$, the ensemble means
\begin{equation}
\overline{\mathbf{x}_k^{\text{f}/\text{a}}}=\frac{1}{N_\text{e}}\sum^{N_\text{e}}_{i=1}\mathbf{x}_k^{\text{f}/\text{a},(i)} \label{eq:Ensemble_Mean}
\end{equation}
are used to obtain the normalized ensemble anomaly matrices:
\begin{equation}
\mathbf{X}_k^{\text{f}/\text{a}}=\frac{\left[\mathbf{x}_k^{\text{f}/\text{a},(1)}-\overline{\mathbf{x}_k^{\text{f}/\text{a}}},\cdots,\mathbf{x}_k^{\text{f}/\text{a},(N_\text{e})}-\overline{\mathbf{x}_k^{\text{f}/\text{a}}}\right]}{\sqrt{N_\text{e}-1}}\in\mathbb{R}^{N_x\times N_\text{e}}. \label{eq:Ensemble_Anomaly}
\end{equation}
The approximated error covariance matrices, hereafter denoted with the superscript $e$, are obtained by the matrix products:
\begin{equation}
\mathbf{P}_k^{\text{f}/\text{a},\text{e}}=\mathbf{X}_k^{\text{f}/\text{a}} \left(\mathbf{X}_k^{\text{f}/\text{a}}\right)^\top\in\mathbb{R}^{N_x\times N_x} \label{eq:Ensemble_P}
\end{equation}
The goal of the EnKF is to mimic the BLUE (Best Linear Unbiased Estimator) analysis of the Kalman filter.
For this, Burgers et al. \cite{Burgers1998} showed that the observations must be considered as random variables with an average corresponding to the observed values and a covariance $\mathbf{R}_k$ (the so-called \emph{data randomization} trick).
Therefore, given the discrete observation vector $\mathbf{y}^\text{o}_k\in \mathbb{R}^{N_y}$ at an instant $k$, the ensemble of perturbed observations is defined as:
\begin{equation}
\mathbf{y}_k^{\text{o},(i)}=\mathbf{y}_k^\text{o}+\mathbf{\epsilon}_k^{\text{o},(i)},
\quad
\text{with}
\quad
i=1,\cdots,N_\text{e}
\quad
\text{and}
\quad
\mathbf{\epsilon}_k^{\text{o},(i)}\sim \mathcal{N}(0,\mathbf{R}_k)
.\label{eq:perturbed_y}
\end{equation}
A normalized anomaly matrix of the observations errors is defined as
\begin{equation}
\mathbf{E}_k^{\text{o}}=
\frac{1}{\sqrt{N_\text{e}-1}}
\left[
\mathbf{\epsilon}_k^{\text{o},(1)}-\overline{\mathbf{\epsilon}_{k}^{\text{o}}},
\mathbf{\epsilon}_k^{\text{o},(2)}-\overline{\mathbf{\epsilon}_{k}^{\text{o}}},
\cdots,
\mathbf{\epsilon}_k^{\text{o},(N_\text{e})}-\overline{\mathbf{\epsilon}_{k}^{\text{o}}}
\right]
\in\mathbb{R}^{N_y\times N_\text{e}},
\end{equation}
where
$\displaystyle\overline{\mathbf{\epsilon}_{k}^{\text{o}}}=\frac{1}{N_\text{e}}\sum_{i=1}^{N_\text{e}}\mathbf{\epsilon}_k^{\text{o},(i)}$.
The ensemble covariance matrix of the measurement error can then be estimated as
\begin{equation}
\mathbf{R}_k^\text{e}=\mathbf{E}^\text{o}_k \left(\mathbf{E}^\text{o}_k\right)^\top\in\mathbb{R}^{N_y\times N_y}. \label{eq:observation_matrix_errorcov}
\end{equation}
Combining the previous results, one obtains (see \cite{Asch2016_SIAM}) the standard stochastic EnKF algorithm. The corresponding analysis step consists of updates performed on each of the ensemble members, as given by
\begin{equation}
\mathbf{x}_k^{\text{a},(i)}=
\mathbf{x}_k^{\text{f},(i)}+
\mathbf{K}_k^\text{e}
\left(\mathbf{y}_k^{\text{o},(i)}-\mathbf{\mathcal{H}}_k\left(\mathbf{x}^{\text{f},(i)}_k\right)\right)\label{eq:ensemble_update}
\end{equation}
where $\mathbf{\mathcal{H}}_k$ is the non-linear observation operator. The expression of the Kalman gain is
\begin{equation}
\mathbf{K}_k^\text{e}=
\mathbf{X}_k^\text{f}
\left(\mathbf{Y}_k^\text{f}\right)^\top
\left(
\mathbf{Y}_k^\text{f}
\left(\mathbf{Y}_k^\text{f}\right)^\top
+
\mathbf{E}_k^\text{o} \left(\mathbf{E}_k^\text{o}\right)^\top
\right)^{-1}
\end{equation}
where $\mathbf{Y}_k^\text{f}=\mathbf{H}_k\mathbf{X}_k^\text{f}$ is the anomaly matrix of the predicted observations, $\mathbf{H}_k$ denoting the (possibly linearized) observation operator.
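A minimal sketch of this stochastic analysis step (perturbed observations, anomaly-based gain, member-wise update) is given below in Python. The function name and interface are ours, and the observation operator is taken linear for simplicity:

```python
import numpy as np

def enkf_analysis(E_f, y_o, H, R, rng):
    """Stochastic EnKF analysis with perturbed observations.

    E_f : (N_x, N_e) forecast ensemble matrix.
    H   : (N_y, N_x) linear observation operator.
    R   : (N_y, N_y) observation-error covariance.
    Builds the normalized anomaly matrices, the perturbed observations
    (data randomization trick of Burgers et al.), the ensemble Kalman
    gain, and updates each member.
    """
    N_x, N_e = E_f.shape
    x_mean = E_f.mean(axis=1, keepdims=True)
    X = (E_f - x_mean) / np.sqrt(N_e - 1)          # state anomalies
    Y = H @ X                                      # predicted-observation anomalies
    # Perturbed observations and their normalized anomaly matrix
    eps = rng.multivariate_normal(np.zeros(len(y_o)), R, size=N_e).T
    E_o = (eps - eps.mean(axis=1, keepdims=True)) / np.sqrt(N_e - 1)
    # Ensemble Kalman gain
    K = X @ Y.T @ np.linalg.inv(Y @ Y.T + E_o @ E_o.T)
    # Member-by-member update
    innov = (y_o[:, None] + eps) - H @ E_f
    return E_f + K @ innov
```

With an accurate observation of one state component, the analysis ensemble mean is drawn toward the observed value while unobserved components are only affected through the sampled cross-covariances.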
Applications of EnKF strategies in fluid mechanics and engineering deal with a number of different topics such as wildfire propagation \cite{Rochoux2014_nhess}, combustion \cite{Labahn2019_pci}, turbulence modeling \cite{Xiao2016_jcp,Zhang2020_cf,Zhang2021_cf} and hybrid variational-EnKF methods \cite{Mons2019_jcp}.
The state estimation procedure can be extended to infer the values of free parameters $\theta$ which describe the model. Several proposals have been reported in the literature (see the review work by Asch et al. \cite{Asch2016_SIAM} for an extended discussion). The strategy selected for this work is the \emph{dual estimation} proposed by Moradkhani et al. \cite{DENKF_MORADKHANI2005}.
\subsection{Multigrid Ensemble Kalman Filter (MGEnKF)}
\label{sec:MGEnKF_Description}
While the ensemble approximation $\mathbf{P}^\text{e}$ of the error covariance matrix $\mathbf{P}$ provides a significant reduction of the computational resources required by the Kalman filter, the generation of an ensemble of $40$--$100$ members may still be out of reach for complex applications in fluid mechanics. For this reason, the team has developed a multigrid ensemble Kalman filter (MGEnKF) \cite{Moldovan2021_jcp} strategy which exploits the numerical features of multigrid solvers. The algorithm is flexible and can be applied to numerical solvers using multiple refinement levels. Here, for the sake of simplicity, it is illustrated on the classical two-grid FAS (Full Approximation Scheme) \cite{Brandt1977_mc,Wesseling1999_jcam} algorithm. This scheme employs a fine grid (solutions indicated with the superscript F) and a coarse grid (superscript C) in a three-step procedure:
\begin{enumerate}
\item A time advancement is performed starting from the solution $\mathbf{x}^\text{\tiny F}_{k-1}$ to obtain the state $(\mathbf{x}^\text{\tiny F}_{k})^{*}$.
\item $(\mathbf{x}^\text{\tiny F}_{k})^{*}$ is projected on the coarse grid level leading to the state $\left(\mathbf{x}^\text{\tiny C}_k\right)^{*}=
\Pi_\text{\tiny C}\left(\left(\mathbf{x}_k^\text{\tiny F}\right)^{*}\right)
$. Iterative calculations are used to filter out high-frequency noise and derive the solution $\left(\mathbf{x}^\text{\tiny C}_k\right)^{'}$.
\item \textcolor{Reviewer2}{The difference between $\left(\mathbf{x}^\text{\tiny C}_k\right)^{'}$ and $\left(\mathbf{x}^\text{\tiny C}_k\right)^{*}$ is projected on the fine grid leading to the state $(\mathbf{x}^\text{\tiny F}_{k})^{'}=(\mathbf{x}^\text{\tiny F}_{k})^{*} + \Pi_\text{\tiny F}\left(\left(\mathbf{x}_k^\text{\tiny C}\right)^{'}-\left(\mathbf{x}_k^\text{\tiny C}\right)^{*}\right)$. Thereafter, $(\mathbf{x}^\text{\tiny F}_{k})^{'}$ is used as initial condition for a last set of iterations on the fine grid, which produces the final state $\mathbf{x}^\text{\tiny F}_{k}$}.
\end{enumerate}
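As a point of reference, the three steps above can be illustrated on a 1D Poisson model problem. The Python sketch below implements the linear two-grid correction scheme (restricting the residual rather than the full approximation, as the true FAS does), with weighted Jacobi as the iterative smoother. All function names are ours and this is only an illustration of the cycle structure, not the solver used in this work:

```python
import numpy as np

def jacobi(u, f, h, sweeps):
    """Weighted-Jacobi smoothing for -u'' = f with homogeneous
    Dirichlet boundary conditions (a 'classical iterative procedure')."""
    u = u.copy()
    w = 2.0 / 3.0
    for _ in range(sweeps):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h**2 * f[1:-1])
    return u

def two_grid_cycle(u, f, h):
    """One two-grid cycle mirroring the three steps of the text:
    (1) smoothing on the fine grid, (2) restriction and coarse solve,
    (3) prolongation of the coarse correction plus final smoothing."""
    u = jacobi(u, f, h, 3)                               # step 1
    r = np.zeros_like(u)                                 # residual of -u'' = f
    r[1:-1] = f[1:-1] + (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2
    # Step 2: full-weighting restriction to the coarse grid (every other node)
    rc = np.zeros((len(u) + 1) // 2)
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    nc, hc = len(rc), 2.0 * h
    A = (2.0 * np.eye(nc - 2) - np.eye(nc - 2, k=1) - np.eye(nc - 2, k=-1)) / hc**2
    ec = np.zeros(nc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])              # exact coarse solve
    # Step 3: linear prolongation of the correction, then post-smoothing
    e = np.interp(np.arange(len(u)), np.arange(0, len(u), 2), ec)
    return jacobi(u + e, f, h, 3)
```

Repeating the cycle drives the algebraic error below the discretization error in a handful of iterations, which is the behavior the MGEnKF exploits.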
\begin{figure}[htbp]
\includegraphics[width=0.95\textwidth]{figures/schema_MGENKF.pdf}
\caption{\label{fig:schema_MGENKF}
Schematic representation of the Multigrid Ensemble Kalman Filter (MGEnKF) \cite{Moldovan2021_jcp}. Two different levels of representation (fine and coarse grids) are used to obtain an estimation for the main simulation running on the fine grid. The inner and outer loop of the Dual Ensemble Kalman filter procedure are solved using ensemble members generated on the coarse grid.}
\end{figure}
\begin{figure}[htbp]
\includegraphics[width=1\textwidth]{figures/schema_inner_outer_loop.pdf}
\caption{\label{fig:schema_inou_loop}
Schematic representation of the boxes \emph{inner} and \emph{outer loop} in Fig.~\ref{fig:schema_MGENKF}. Two optimization procedures based on EnKF are sequentially performed.}
\end{figure}
A two-level EnKF, shown in Fig.~\ref{fig:schema_MGENKF}, is naturally integrated in the multigrid structure of the algorithm. This reduces the computational costs of the data assimilation procedure and contributes to render the solutions compatible with the dynamic equations of the model \cite{Moldovan2021_jcp}.
\textcolor{Reviewer2}{We emphasize here that, in the MGEnKF algorithm, the model used for the time integration of the state does not necessarily need to rely on implicit time-discretisation schemes, within which multigrid strategies are actually applied. Depending on the case, explicit time-integration models could also be used. However, this can have a negative impact on the conservativity of the system, and it is up to the user to determine the most appropriate strategy for the desired application. Notably, in the 1D experiments performed in this article, the numerical stability of the simulations was not an issue, so explicit models were used for simplicity. For 2D and 3D analyses of compressible flows, conservativity can be a real issue, and the use of implicit methods is more robust \cite{Moldovan2021_jcp}. For the sake of simplicity and to keep the description general, we have chosen to present the algorithm assuming that implicit methods are used, so the iterative part of the multigrid method is kept in the description. Whenever we mention classical iterative procedures, we refer, for instance, to the Jacobi or Gauss–Seidel methods. The interested reader is referred to Chapter 10 of \cite{Hirsch2007}, which contains a comprehensive review of iterative methods used for solving algebraic systems of equations related to fluid mechanics.}
\textcolor{Reviewer2}{The sequential estimation by EnKF is performed in}
two distinct parts: an \emph{outer loop}, where observations are obtained from an external source of information, and an \emph{inner loop}, where fine grid solutions projected onto the coarse grid are used as surrogate observations \textcolor{Reviewer2}{(see Fig.~\ref{fig:schema_MGENKF})}.
\textcolor{Reviewer2}{%
Let $\mathbf{\mathcal{M}}^\text{\tiny F}_{k:k-1}\left(
\mathbf{x}_{k-1}^\text{\tiny F},
\mathbf{\theta}_{k}\right)$ be a parametrized discretized model defined on the fine grid where $\mathbf{\theta}_{k}$ is a set of free parameters describing the setup of the model.}
The complete general algorithm for MGEnKF is structured in the following operations \textcolor{Reviewer2}{(see \ref{sec:DA_algorithms} for the pseudo-algorithms)}:
\begin{enumerate}
\item \textbf{Predictor step}. The analyzed solution defined on the fine grid $\left(\mathbf{x}^\text{\tiny F}_{k-1}\right)^\text{a}$ is used to calculate a forecast state $\left(\mathbf{x}^\text{\tiny F}_{k}\right)^\text{f}$
\textcolor{Reviewer2}{
$$
\left(\mathbf{x}_k^\text{\tiny F}\right)^\text{f}=
\mathbf{\mathcal{M}}^\text{\tiny F}_{k:k-1}\left(
\left(\mathbf{x}_{k-1}^\text{\tiny F}\right)^\text{a},
\overline{\mathbf{\theta}}_{k}^\text{a}\right)
$$
}
where
\textcolor{Reviewer2}{$\overline{\mathbf{\theta}}_{k}^\text{a}=\frac{1}{N_\text{e}}\sum_{i=1}^{N_\text{e}}\mathbf{\theta}_{k}^{\text{a},(i)}$ is the ensemble average of coarse-level parameters.} Each member $i$ of the ensemble of states defined on the coarse grid is also advanced in time
$$
\left(\mathbf{x}_k^\text{\tiny C}\right)^\text{f,(i)}=
\mathbf{\mathcal{M}}^\text{\tiny C}_{k:k-1}\left(
\left(\mathbf{x}_{k-1}^\text{\tiny C}\right)^\text{a,(i)},
\mathbf{\theta}_{k}^\text{f,(i)}\right) + \mathbf{\mathcal{C}}_{k:k-1}\left(
\left(\mathbf{x}_{k-1}^\text{\tiny C}\right)^\text{a,(i)},
\mathbf{\psi}_{k}^\text{f,(i)}\right),
$$
where $\mathbf{\mathcal{M}}^\text{\tiny C}_{k:k-1}$ is the coarse grid model parameterized by $\mathbf{\theta}_{k}^\text{f,(i)}$, while $\mathbf{\mathcal{C}}_{k:k-1}$ is an additional correction term included to compensate for the loss of resolution due to calculations on the coarse grid \cite{Brajard2020_arxiv}. This additional model, whose structure is usually unknown, is driven by the set of free parameters $\mathbf{\psi}_{k}^\text{f,(i)}$ \textcolor{Reviewer2}{(see Sec.~\ref{sec:apriorianalysis} for extended discussion about this correction term)}.
\item \textbf{Projection on the coarse grid \& \emph{inner} loop}. $\left(\mathbf{x}^\text{\tiny F}_{k}\right)^\text{f}$ is projected on the coarse grid space via a projection operator $\Pi_\text{\tiny C}$, so that $\left(\mathbf{x}^\text{\tiny C}_{k}\right)^{*}$ is obtained, \textit{i.e.}
\begin{equation*} \label{eq:CG_projection}
\left(\mathbf{x}^\text{\tiny C}_k\right)^{*}=
\Pi_\text{\tiny C}\left(\left(\mathbf{x}_k^\text{\tiny F}\right)^\text{f}\right).
\end{equation*}
\textcolor{Reviewer2}{
At this step, surrogate observations (denoted hereafter with the superscript \enquote{so}) are determined from $\left(\mathbf{x}^\text{\tiny C}_k\right)^{*}$ with an observation operator $\left(\mathcal{H}_k^\text{\tiny C}\right)^\text{so}$:
\begin{equation*} \label{eq:surrogate_observation}
\left(\mathbf{y}_k^\text{\tiny C}\right)^\text{so}=\left(\mathcal{H}_k^\text{\tiny C}\right)^\text{so}\left(\mathbf{x}^\text{\tiny C}_k\right)^{*}
\end{equation*}
These surrogate observations are used exclusively in the \emph{inner} loop for estimating the parameters $\mathbf{\psi}_{k}^\text{f,(i)}$.
Hence, the ensemble states $\left(\mathbf{x}_k^\text{\tiny C}\right)^\text{f,(i)}$ are not modified, but the free parameters $\mathbf{\psi}_{k}^\text{f,(i)}$ are updated by a specific \textit{inner loop} EnKF to obtain the analysed values $\mathbf{\psi}_{k}^\text{a,(i)}$. This parameter optimization targets an improvement of the prediction of the ensemble members simulated on the coarse grid via an update of the correction term $\mathbf{\mathcal{C}}_{k:k-1}\left(
\left(\mathbf{x}_{k-1}^\text{\tiny C}\right)^\text{a,(i)},
\mathbf{\psi}_{k}^\text{f,(i)}\right)$.
}
\item \textbf{\emph{Outer} loop}. If external observations $\left(\mathbf{y}_k^\text{\tiny C}\right)^\text{o}$ are available, the ensemble forecast $\left(\mathbf{x}^\text{\tiny C}_{k}\right)^{\text{f},(i)}$ is corrected with the standard Dual EnKF procedure to obtain $\left(\mathbf{x}^\text{\tiny C}_{k}\right)^{\text{a},(i)}$ as well as an update of the parameters $\mathbf{\theta}_{k}^{\text{a},(i)}$.
\item \textbf{Determination of the state variables on the coarse grid}.
In this step, the coarse grid state is updated. If observations are not available, this updated solution, referred to as $\left(\mathbf{x}^\text{\tiny C}_k\right)^{'}$, is obtained by classical iterative procedures on the coarse grid starting from the initial solution $\left(\mathbf{x}^\text{\tiny C}_{k}\right)^{*}$.
On the other hand, if observations are available, the Kalman gain matrix $\left(\mathbf{K}_k^\text{\tiny C}\right)^{x,\text{e}}$ computed by the EnKF is used to obtain the coarse grid solution $\left(\mathbf{x}^\text{\tiny C}_k\right)^{'}$ through a KF operation, \textit{i.e.}
\begin{align*}
\left(\mathbf{x}^\text{\tiny C}_k\right)^{'} & =
\left(\mathbf{x}^\text{\tiny C}_k\right)^{*}+
\left(\mathbf{K}_k^\text{\tiny C}\right)^{x,\text{e}}
\left[
\left(\mathbf{y}_k^\text{\tiny C}\right)^\text{o}-
\left(\mathbf{\mathcal{H}}_k^\text{\tiny C}\right)^\text{o}
\left(\left(\mathbf{x}_k^\text{\tiny C}\right)^{*}\right)
\right]
\end{align*}
\item \textbf{Estimation on the fine grid}. The fine grid state solution $\left(\mathbf{x}^\text{\tiny F}_k\right)^{'}$ is determined using the results obtained on the coarse space: $\left(\mathbf{x}^\text{\tiny F}_k\right)^{'}=\left(\mathbf{x}^\text{\tiny F}_{k}\right)^\text{f}+\Pi_\text{\tiny F}\left(\left(\mathbf{x}^\text{\tiny C}_k\right)^{'}-\left(\mathbf{x}^\text{\tiny C}_{k}\right)^{*}\right)$. The state $\left(\mathbf{x}^\text{\tiny F}_k\right)^\text{a}$ is obtained from a final iterative procedure starting from $\left(\mathbf{x}^\text{\tiny F}_k\right)^{'}$.
\end{enumerate}
A detailed representation of the \textcolor{Reviewer2}{\emph{inner} and \emph{outer} loops} shown in Fig.~\ref{fig:schema_MGENKF} is provided in Fig.~\ref{fig:schema_inou_loop}. This method is reminiscent of multilevel \cite{Hoel2016_SIAM,Siripatana2019_cg,KodyLaw2020,Fossum2020_cg} and multifidelity \cite{Gorodetsky2020_jcp,Popov2021_SIAM} ensemble techniques for data assimilation.
\textcolor{Reviewer1}{Our approach differs in its use of surrogate observations derived from the fine-grid solution to infer the correction parameters of the numerical integration scheme. In addition, }
only one main simulation on the fine grid is needed, significantly reducing the computational cost associated with the whole procedure.
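To fix ideas, the five operations of the MGEnKF cycle can be condensed into a single-cycle toy sketch in Python. All models, grid-transfer operators, and iterative procedures are user-supplied callables; the inner-loop estimation of $\mathbf{\psi}$ is omitted, the observation operator is assumed linear, and the gain uses $\mathbf{R}$ directly in place of its ensemble estimate (a common variant). Every name here is illustrative and does not correspond to the reference implementation of \cite{Moldovan2021_jcp}:

```python
import numpy as np

def mgenkf_cycle(x_f_fine, ensemble_c, theta, model_f, model_c,
                 restrict, prolong, iterate_f, iterate_c,
                 y_obs=None, H=None, R=None, rng=None):
    """One simplified MGEnKF assimilation cycle (inner loop omitted)."""
    # 1. Predictor step: main simulation on the fine grid with the
    #    ensemble-averaged parameters; ensemble members on the coarse grid.
    x_f = model_f(x_f_fine, theta.mean(axis=0))
    ens = np.array([model_c(xc, th) for xc, th in zip(ensemble_c, theta)])
    # 2. Projection of the fine solution onto the coarse grid.
    x_c_star = restrict(x_f)
    if y_obs is not None:
        # 3. Outer loop: stochastic EnKF update of the coarse ensemble.
        Ne = len(ens)
        X = (ens - ens.mean(axis=0)).T / np.sqrt(Ne - 1)   # state anomalies
        Y = H @ X                                          # observation anomalies
        K = X @ Y.T @ np.linalg.inv(Y @ Y.T + R)
        eps = rng.multivariate_normal(np.zeros(len(y_obs)), R, size=Ne)
        ens = ens + ((y_obs + eps) - ens @ H.T) @ K.T
        # 4. KF-like update of the projected main solution, reusing the gain.
        x_c_prime = x_c_star + K @ (y_obs - H @ x_c_star)
    else:
        # 4'. No observation: classical iterations on the coarse grid.
        x_c_prime = iterate_c(x_c_star)
    # 5. Prolong the coarse correction and finish on the fine grid.
    x_f_prime = x_f + prolong(x_c_prime - x_c_star)
    return iterate_f(x_f_prime), ens
```

With identity models and transfers and an accurate full-state observation, the fine-grid estimate is pulled onto the observation, as expected from step 5.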
\section{Optimization of numerical schemes on a coarse mesh}
\label{sec:apriorianalysis}
During the time integration of solutions in the MGEnKF algorithm, classical numerical schemes unavoidably introduce diffusive and dispersive errors that can excessively degrade the representativeness of the solutions on overly coarse meshes. This may prevent their use for accurately evaluating the error covariance matrix required to update the solution on the fine mesh.
An original feature of the MGEnKF algorithm presented in Sec.~\ref{sec:maths} is that the main solution, advanced at the most refined grid level, can also be used as a surrogate observation to optimize the parameters of the numerical schemes employed to advance the solutions on such coarse meshes. The principles guiding the choice of the numerical scheme, and of the associated parametrization retained to control these numerical errors on coarse meshes, are presented in this section.
Let us consider the prototype 1D linear advection equation for a scalar $u$ transported at constant velocity $c$:
\begin{equation}
\label{eq:1D_adv}
\frac{\partial u}{\partial t}+c \frac{\partial u}{\partial x}=0
\end{equation}
For the sake of clarity and conciseness, the discussion is restricted here to the discretization of this advection equation with explicit finite difference schemes on four-point stencils (second- or third-order accurate) for the case $c>0$, on a Cartesian mesh with a constant mesh size $\Delta x$. The following considerations can quite easily be extended to non-linear systems with time- and space-varying advection velocity, irregular meshes, or higher-order schemes. $\Delta t$ is the time step and $\sigma=c\Delta t / \Delta x$ the CFL number. $u_\text{j}^\text{k}$ represents the discrete numerical solution at the spatial location $x_\text{j}=(j-1)\Delta x$ at time $t=k\Delta t$.
A general one-parameter family of second order accurate schemes (see for example \cite{Hirsch2007} p. 364) may be defined on a backward upwind stencil $(j-2,j-1,j,j+1)$ as:
\small
\begin{equation}
\label{eqschema}
\begin{aligned}
u_\text{j}^\text{k} & = & u_\text{j}^\text{k-1}-\frac{\sigma}{2} \left( u_\text{j+1}^\text{k-1}-u_\text{j-1}^\text{k-1} \right) +
\frac{\sigma^2}{2} \left( u_\text{j+1}^\text{k-1}-2 u_\text{j}^\text{k-1}+u_\text{j-1}^\text{k-1} \right) \\
& & +
\delta \left( -u_\text{j-2}^\text{k-1}+3u_\text{j-1}^\text{k-1}-3u_\text{j}^\text{k-1}+u_\text{j+1}^\text{k-1} \right)
\end{aligned}
\end{equation}
The terms in the first line of Eq.~\eqref{eqschema} (the only ones remaining for $\delta=0$) correspond to the standard centered Lax--Wendroff scheme. The last term, in the second line (pre-multiplied by $\delta$), corresponds to a correction consistent with a dispersion term of the form $\delta \Delta x^3 u_{xxx}$ discretized on the extended backward upwind stencil.
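The update of Eq.~\eqref{eqschema} can be written in a few lines. The Python sketch below (names ours) advances the solution on a periodic grid, with \texttt{np.roll} supplying the backward stencil point:

```python
import numpy as np

def advance(u, sigma, delta):
    """One time step of the one-parameter family of schemes:
    Lax-Wendroff plus the delta-weighted upwind dispersion correction,
    on a periodic 1D grid."""
    up1 = np.roll(u, -1)   # u_{j+1}
    um1 = np.roll(u, 1)    # u_{j-1}
    um2 = np.roll(u, 2)    # u_{j-2}
    return (u - 0.5 * sigma * (up1 - um1)
            + 0.5 * sigma**2 * (up1 - 2.0 * u + um1)
            + delta * (-um2 + 3.0 * um1 - 3.0 * u + up1))
```

For instance, advecting a sine wave over one full period with $\sigma=0.5$, the choice $\delta=\sigma(1-\sigma^2)/6$ yields a much smaller error than $\delta=0$, consistent with the cancellation of the dominant dispersive error term.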
This scheme is consistent with the linear advection equation \eqref{eq:1D_adv}, with a truncation error of order $(\Delta x^2, \Delta t^2)$. If one retains the first two dominant error terms in the combination of Taylor expansions of the discrete terms forming Eq.~\eqref{eqschema}, the following equivalent differential equation is obtained:
\begin{equation}
\label{eqequiv}
\begin{aligned}
\frac{\partial u}{\partial t}+c \frac{\partial u}{\partial x} & = & \frac{c}{6} \left[ 6\delta -
\sigma \left( 1-\sigma^2\right)\right] \Delta_x^2 \frac{\partial ^3 u}{\partial x^3} \\
& & - \frac{c}{8} \left[ \delta \left( 1-2 \sigma \right) + \frac{\sigma^2}{4} \left( 1 - \sigma^2 \right) \right] \Delta_x^3 \frac{\partial ^4 u}{\partial x^4} \\
\end{aligned}
\end{equation}
For arbitrary values of $\delta$, Eq.~\eqref{eqequiv} shows that the dominant numerical error of scheme \eqref{eqschema} is of dispersive nature, with an error proportional to $\Delta x^2$ and a less dominant (third-order) diffusive error term. Their expressions already show that their relative levels vary as functions of both $\sigma$ and $\delta$. Optimizing the choice of $\sigma$ a priori does not appear worthwhile, in order to keep the MGEnKF algorithm sufficiently flexible and general. Therefore, it is chosen to target an optimization of $\delta$ only, with a strategy suitable for any value of $\sigma$. This choice allows the user to set the value of $\sigma$ based on practical constraints, such as synchronizing the simulations with the available observation data.
The level of this error can then be partially controlled through the parameter $\delta$ for a given $\sigma$. By setting in particular $\delta = \sigma(1-\sigma^2)/6$, this dominant error term cancels out and the scheme becomes third-order accurate in space, with a dominant error term of diffusive nature proportional to $\Delta x^3$. Different values of $\delta$ maintain the formal second-order accuracy but can induce significantly different effective evolutions of the numerical errors. An illustration of this possible error behavior (and thus of its potential a priori control) is shown in Fig.~\ref{fig:figerror}. These errors are quantified here through a classical Fourier analysis, from which the expression of the complex gain factor $G = \mathrm{Re}(G) + i \, \mathrm{Im}(G)$ is extracted as a function of the phase angle $\phi = m \Delta x$, with $m$ a spatial wavenumber. It is worth recalling that the phase angle $\phi=m \Delta x = 2\pi \Delta x / \lambda$, where $\lambda$ is the spatial wavelength, can also be written as $\phi = 2\pi / (N-1)$, where $N$ represents the number of points used to discretize the signal over $\lambda$. The modulus of the gain factor, $|G|$, should remain less than 1 over the whole range of $\phi$ present in the advected signal to ensure stability. The quantity $1-|G|$ represents the level of numerical diffusion and should be minimized to avoid an artificial decrease of the amplitude of the wave components. The dispersive error is characterized here by $\epsilon_{\phi} = \arctan\left(-\mathrm{Im}(G)/\mathrm{Re}(G)\right) / (\sigma \phi)$, which corresponds to the spurious multiplicative factor affecting the expected phase velocity of the wave components. A good numerical scheme should keep it as close as possible to unity, in order to limit the phase advance or delay observed for $\epsilon_{\phi}>1$ or $\epsilon_{\phi}<1$, respectively. In Fig.~\ref{fig:figerror}, the scheme responses for various $\delta$, as functions of $\phi$ and $\sigma$, show how different functional forms of $\delta$ may lead to significantly different effective error properties.
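This Fourier analysis is straightforward to reproduce numerically. The Python sketch below (function names ours) evaluates the complex gain factor of scheme \eqref{eqschema}, where each stencil shift $m$ contributes $e^{im\phi}$, together with the diffusive and dispersive error measures $1-|G|$ and $\epsilon_\phi$:

```python
import numpy as np

def gain_factor(phi, sigma, delta):
    """Complex gain G(phi) of the one-parameter scheme: substitute
    u_j = exp(i j phi) into the update, so a shift by m contributes
    exp(i m phi)."""
    e = lambda m: np.exp(1j * m * phi)
    return (1.0 - 0.5 * sigma * (e(1) - e(-1))
            + 0.5 * sigma**2 * (e(1) - 2.0 + e(-1))
            + delta * (-e(-2) + 3.0 * e(-1) - 3.0 + e(1)))

def scheme_errors(phi, sigma, delta):
    """Diffusive error 1-|G| and dispersive error eps_phi, the
    spurious multiplicative factor on the phase velocity."""
    G = gain_factor(phi, sigma, delta)
    eps_phi = np.arctan2(-G.imag, G.real) / (sigma * phi)
    return 1.0 - np.abs(G), eps_phi
```

Evaluating both measures at a moderate phase angle recovers the behavior discussed in the text: the Lax--Wendroff choice $\delta=0$ gives $\epsilon_\phi<1$ (phase lag), while Fromm's choice $\delta=\sigma(1-\sigma)/4$ brings $\epsilon_\phi$ much closer to unity.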
The case $\delta=0$ (Lax--Wendroff scheme) is characterized by a dominant phase lag within the stability bounds $0<\sigma<1$. The choice $\delta=\sigma (1-\sigma)/2$ (not shown) corresponds to the Beam--Warming scheme, which yields a dominant phase-advance error over the same range $0<\sigma<1$. The case $\delta = \sigma (1-\sigma)/4$ corresponds to Fromm's scheme, which compensates to some extent the phase delay and advance errors of the two aforementioned schemes over a wide range of $\sigma$. It can indeed be noticed in Fig.~\ref{fig:figerror} that the isolines of $\epsilon_{\phi}$ for values lower than one are significantly shifted towards higher values of $\phi$, indicating that dispersive error levels can be significantly reduced a priori in the intermediate range of $\phi$ corresponding to practical simulation cases. However, the reduction of the diffusive error, as illustrated by $|G|$, is far less efficient, in particular for high values of $\sigma$. On the contrary, this diffusive error is even seen to increase for lower values of $\sigma$ and high values of $\phi$.
It should be kept in mind that the MGEnKF algorithm employs ensemble members which have to be generated using relatively coarse grids (hence high values of $\phi$ for a given signal to advect), for which both the dispersive and diffusive terms are likely to be important. Both Eq.~\eqref{eqequiv} and the aforementioned observations of the non-monotonic and uncorrelated evolutions of $|G|$ and $\epsilon_{\phi}$ accordingly suggest that adjusting this single parameter $\delta$ is not sufficient to allow a satisfactory control of both kinds of error at the same time. An optimization of $\delta$ that reduces the dispersive error may indeed deteriorate the diffusive behavior of the scheme. This justifies a first important choice retained for the present optimization strategy, which consists in also considering an additional correction to scheme \eqref{eqschema}, of the same order as the dispersive correction term multiplied by $\delta$. This term is chosen to be consistent with $\alpha \Delta x^2 \frac{\sigma^2}{2}\frac{\partial ^2 u}{\partial x^2}$. With negative values of $\alpha$, this term is expected to add an anti-diffusive behavior, counteracting the diffusion error intrinsically associated with scheme \eqref{eqschema}. The combined use of both correction terms is thus expected to allow a more relevant separate control of the dispersion and diffusion errors.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\textwidth, trim=0cm 0cm 0cm 0cm,clip]{figures/errorLWvsFromm.png}
\caption{\label{fig:figerror} Comparison of diffusive (top) and dispersive (bottom) errors for two different a priori parametrizations of $\delta$.}
\end{figure}
As previously observed in Fig. \ref{fig:figerror}, the effective scheme response significantly varies as function of $\phi$.
For a given value of $\sigma$ and a given mesh size, an optimization of $\delta$ and $\alpha$, considered as constant parameters, could indeed only allow a significant reduction of numerical error in a relatively limited range of $\phi$ (thus enabling a control of accuracy for representing a limited range of the spectral content associated to the signal to advect). In view of considering complex (spectrally richer) solutions and extending the use of this scheme to non-linear models, possibly leading to spatially evolving frequency content, it is thus also chosen to consider spatially varying functions for $\delta$ and $\alpha$ instead of constant values. This variability, which will be represented expressing the parameters via spatial expansions of polynomials, will allow for local numerical optimization on coarse meshes.
\paragraph{Summary: numerical model retained to perform coarse-grid simulations}~\\
Following the previous a priori analysis given all along this section, the following numerical scheme is finally retained:
\begin{equation}
\label{eq:1D_adv_oneparameter_parametrized}
\begin{aligned}
u_\text{j}^\text{k} & = & u_\text{j}^\text{k-1}-\frac{\sigma}{2} \left( u_\text{j+1}^\text{k-1}-u_\text{j-1}^\text{k-1} \right) +
\left(1+\alpha\right)\frac{\sigma^2}{2} \left( u_\text{j+1}^\text{k-1}-2 u_\text{j}^\text{k-1}+u_\text{j-1}^\text{k-1} \right) \\
& & +\left(\delta+\gamma\right) \left( -u_\text{j-2}^\text{k-1}+3u_\text{j-1}^\text{k-1}-3u_\text{j}^\text{k-1}+u_\text{j+1}^\text{k-1} \right) \\
& = & \mathbf{\mathcal{M}}_{k:k-1}\left(u, \sigma, \delta \right) + \mathbf{\mathcal{C}}_{k:k-1}\left( u, \sigma, \alpha, \gamma \right)
\end{aligned}
\end{equation}
Here, one can see that the optimization of $\delta$ is not performed directly, but via a parameter $\gamma$ which measures the deviation of the optimized dispersion coefficient from the constant value $\delta= \sigma(1-\sigma^2)/4$ proposed by Fromm. This choice has been performed to provide a clear separation between the dynamic model $\mathbf{\mathcal{M}}$ and the correction model $\mathbf{\mathcal{C}}$ when comparing Eq.~\ref{eqschema} and Eq.~\ref{eq:1D_adv_oneparameter_parametrized}. The variability in space of the coefficients $\gamma(x)$ and $\alpha(x)$ is obtained expressing them in terms of a Legendre Polynomial expansion (truncated to the order $n$):
\begin{equation}
\gamma\left(x\right)=\gamma_\text{0}P_\text{0}\left(x\right)+\gamma_\text{1}P_\text{1}\left(x\right)...+\gamma_\text{n}P_\text{n}\left(x\right)\label{eq:beta_legendre},
\end{equation}
\begin{equation}
\alpha\left(x\right)=\alpha_\text{0}P_\text{0}\left(x\right)+\alpha_\text{1}P_\text{1}\left(x\right)...+\alpha_\text{n}P_\text{n}\left(x\right)\label{eq:alpha_legendre}.
\end{equation}
Preliminary tests showed that $4$-th order representation ($n=4$) is satisfactory for the tests considered in the present study. The inner loop will be used to optimize the expansion coefficients for a total of $10$ parameters (five expansion coefficients $\gamma_i$ and five expansion coefficients $\alpha_i$). The values for these parameters could possibly be constrained during the inner loop optimization in order to accept only values leading to stable solutions. In particular, the extended stability constraint for the present scheme in absence of additional anti-diffusive correction is imposed, which reads as:
\begin{equation}
\label{stab}
\gamma (1-2\sigma) + \frac{1}{4} \sigma^2 (1-\sigma^2) \geq 0
\end{equation}
\section{Optimization of numerical integration schemes on a coarse mesh}
\label{sec:apriorianalysis}
Numerical schemes unavoidably lead to diffusive and dispersive errors that can excessively alter the representativity of solutions on too coarse meshes.
These errors can lead to an imprecise determination of the error covariance matrix, prohibiting the updating of the solution on the fine mesh.
An original feature of the MGEnKF algorithm presented in Sec.~\ref{sec:maths} is that the solution advanced at the most refined level of the grid can also be used as surrogate observation to optimize the parameters of the numerical schemes.
In this section, we present the principles guiding the choice of the numerical scheme and associated parametrization retained to control the numerical errors on coarse meshes.
Let us consider the prototype 1D linear advection equation of a scalar quantity $u$ advected with the constant velocity $c$:
\begin{equation}
\label{eq:1D_adv}
\frac{\partial u}{\partial t}+c \frac{\partial u}{\partial x}=0
\end{equation}
For simplicity of presentation, we restrict ourselves to using an explicit finite difference scheme on four-point stencils (second or third order accurate) for the case $c>0$.
The spatial discretization is performed on a Cartesian mesh with a constant size $\Delta_x$.
$\Delta_t$ is the time step and $\sigma=c\Delta_t / \Delta_x$ the CFL number. $u_\text{j}^\text{k}$ represents the discrete numerical solution at the spatial location $x_\text{j}=(j-1)\Delta_x$ at time $t=k\Delta_t$.
The following considerations can be extended quite easily to non-linear systems with time and space varying advection velocity, irregular mesh or higher-order schemes.
A general one-parameter family of second order accurate schemes (see \cite{Hirsch2007}, p. 362, Eq.~(8.2.37)) may be defined on a backward upwind stencil $(j-2,j-1,j,j+1)$ as:
\begin{equation}
\label{eq:schema}
\begin{aligned}
u_\text{j}^\text{k} & = & u_\text{j}^\text{k-1}-\frac{\sigma}{2} \left( u_\text{j+1}^\text{k-1}-u_\text{j-1}^\text{k-1} \right) +
\frac{\sigma^2}{2} \left( u_\text{j+1}^\text{k-1}-2 u_\text{j}^\text{k-1}+u_\text{j-1}^\text{k-1} \right) \\
& & +
\delta \left( -u_\text{j-2}^\text{k-1}+3u_\text{j-1}^\text{k-1}-3u_\text{j}^\text{k-1}+u_\text{j+1}^\text{k-1} \right)
\end{aligned}
\end{equation}
The first line of \eqref{eq:schema} corresponds to the standard centered Lax Wendroff scheme. The second line can be interpreted as the discretization of an additional dispersion term of the form $\delta \Delta_x^3 u_{xxx}$.
The expression \eqref{eq:schema} is consistent with the linear advection equation \eqref{eq:1D_adv}, with a truncation error of order $\left(\Delta_x^2, \Delta_t^2\right)$. Retaining the first two dominant error terms in the Taylor expansions of the discrete terms forming scheme \eqref{eq:schema},
the following equivalent differential equation is obtained:
\begin{equation}
\label{eqequiv}
\begin{aligned}
\frac{\partial u}{\partial t}+c \frac{\partial u}{\partial x} & = & \frac{c}{6} \left[ 6\delta -
\sigma \left( 1-\sigma^2\right)\right] \Delta_x^2 \frac{\partial ^3 u}{\partial x^3} & \\
& & - \frac{c}{8} \left[ \delta \left( 1-2 \sigma \right) + \frac{\sigma^2}{4} \left( 1 - \sigma^2 \right) \right] \Delta_x^3 \frac{\partial ^4 u}{\partial x^4} & + \mathcal{O}\left(\Delta_x^4\right)\\
\end{aligned}
\end{equation}
Equation \eqref{eqequiv} reveals that the dominant numerical error of \eqref{eq:schema} is dispersive, with an error proportional to $\Delta_x^2$, and that a less dominant (third-order) diffusive error term also occurs. The expressions of these errors show that their relative levels vary as functions of both $\sigma$ and $\delta$.
In order to keep the MGEnKF algorithm sufficiently flexible and general, we will not seek to optimize the parameter $\sigma$.
Instead, it is chosen to optimize only $\delta$, with a strategy suitable for any value of $\sigma$. This choice allows the user to set the value of $\sigma$ based on practical constraints, such as synchronizing simulations with available observation data.
The level of the dominant error can be controlled through the parameter $\delta$. By setting in particular $\delta = \sigma(1-\sigma^2)/6$, the dominant error term cancels out and the scheme becomes third order accurate in space with now a diffusive dominant error term proportional to $\Delta_x^3$. Other values of $\delta$ maintain the formal second order accuracy but can induce significantly different effective evolution of the numerical errors.
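The update rule \eqref{eq:schema} is compact enough to sketch directly in code. The following is an illustrative sketch only: it assumes a periodic domain (not part of the analysis above) so that the stencil shifts can be written with array rolls, and the function name is ours.

```python
import numpy as np

def one_parameter_step(u, sigma, delta):
    """One explicit update of the one-parameter scheme on the backward
    upwind stencil (j-2, j-1, j, j+1); periodicity is assumed here
    only to keep the sketch short."""
    up1 = np.roll(u, -1)   # u_{j+1}^{k-1}
    um1 = np.roll(u, 1)    # u_{j-1}^{k-1}
    um2 = np.roll(u, 2)    # u_{j-2}^{k-1}
    return (u
            - sigma / 2.0 * (up1 - um1)                    # centered advection
            + sigma**2 / 2.0 * (up1 - 2.0 * u + um1)       # Lax Wendroff diffusion
            + delta * (-um2 + 3.0 * um1 - 3.0 * u + up1))  # dispersion term
```

For $\sigma=1$ and $\delta=0$ the update reduces to an exact shift by one cell, which provides a quick sanity check of the stencil.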
\begin{figure}[htbp]
\centering
\includegraphics[width=0.9\textwidth, trim=0cm 0cm 0cm 0cm,clip]{figures/errorLWvsFromm.pdf}
\caption{\label{fig:figerror} Comparison of diffusive ($\epsilon_D=|G|$, top) and dispersive ($\epsilon_{\phi}$, bottom) errors for two different \textit{a priori} parametrizations of $\delta$. $\delta=0$ (left column) corresponds to the Lax Wendroff scheme and $\delta=\sigma\left(1-\sigma\right)/4$ (right column) to Fromm's scheme.}
\end{figure}
An illustration of this is shown in Fig. \ref{fig:figerror}, where the diffusion and dispersion errors are represented. These errors are quantified via a classical Von Neumann stability analysis (see \cite{Hirsch2007}, Sec.~7.4), also known as Fourier stability analysis. It consists in analysing the amplification factor of any harmonic of the signal, defined at any spatial position, as the ratio of the solutions at two successive time instants $G = u^k/u^{k-1}$. Through this procedure, the expression of the complex gain factor $G = \text{Re}(G) + \jmath \, \text{Im}(G)$ is extracted as a function of the phase angle $\phi = m \Delta_x$, with $m$ the spatial wavenumber. It is worth recalling that the phase angle $\phi=m \Delta_x = 2\pi \Delta_x / \lambda$, where $\lambda$ is the spatial wavelength, can also be written as $\phi = 2\pi / (N-1)$, where $N$ represents the number of points used to discretize the signal over $\lambda$.
To ensure stability, the diffusion error $\epsilon_D$, \textit{i.e.} the modulus of the gain factor $|G|$, should remain less than 1 over the whole range of $\phi$ present in the signal being advected. The quantity $1-|G|$, which represents the level of numerical diffusion, must be minimized to avoid an artificial decrease of the wave amplitude components. The dispersive error is here characterized by $\epsilon_{\phi} = \arctan\left(-\text{Im}(G)/\text{Re}(G)\right) / (\sigma \phi)$, which corresponds to the spurious multiplicative factor affecting the expected phase velocity of the wave components.
A good numerical scheme should keep the value of $\epsilon_{\phi}$ as close as possible to unity to limit phase advance or delay observed for $\epsilon_{\phi}>1$ or $\epsilon_{\phi}<1$, respectively.
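These two error measures are straightforward to evaluate numerically. The sketch below, in which the function names are ours, follows directly from substituting a single Fourier mode $u_\text{j}^\text{k} = G^\text{k}\, e^{\jmath j \phi}$ into scheme \eqref{eq:schema}:

```python
import numpy as np

def gain_factor(phi, sigma, delta=0.0):
    """Complex amplification factor G(phi) of the one-parameter scheme,
    obtained from the Fourier mode substitution u_j^k = G^k exp(1j*j*phi)."""
    e = np.exp(1j * phi)
    return (1.0
            - sigma / 2.0 * (e - 1.0 / e)
            + sigma**2 / 2.0 * (e - 2.0 + 1.0 / e)
            + delta * (-1.0 / e**2 + 3.0 / e - 3.0 + e))

def scheme_errors(phi, sigma, delta=0.0):
    """Diffusive error eps_D = |G| and dispersive error
    eps_phi = arctan(-Im(G)/Re(G)) / (sigma*phi)."""
    G = gain_factor(phi, sigma, delta)
    return np.abs(G), np.arctan(-G.imag / G.real) / (sigma * phi)
```

For $\sigma=1$ and $\delta=0$ the Lax Wendroff scheme is exact ($G=e^{-\jmath\phi}$), so both errors are unity; for Fromm's value of $\delta$ at $\sigma=0.5$, $|G|$ stays below one, consistent with Fig. \ref{fig:figerror}.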
Figure \ref{fig:figerror} clearly shows the role played by $\delta$ on the diffusion and dispersion errors.
The case $\delta=0$ (Lax Wendroff scheme) is characterized by a dominant phase lag within the stability bounds $0<\sigma<1$. The choice $\delta=\sigma (1-\sigma)/2$ (not shown) corresponds to the Beam-Warming scheme, which yields a dominant phase advance error for $0<\sigma<1$. The case $\delta = \sigma (1-\sigma)/4$ corresponds to the well-known Fromm scheme, which compensates to some extent the phase errors of the two aforementioned schemes for a wide range of $\sigma$. Indeed, we notice in Fig. \ref{fig:figerror} that isolines of $\epsilon_{\phi}$ lower than unity are significantly shifted towards the higher values of $\phi$, indicating that dispersive error levels can be \textit{a priori} significantly reduced in the intermediate range of $\phi$ corresponding to practical simulation cases. However, the reduction of diffusive errors, as illustrated by $|G|$, is far less efficient, in particular for high values of $\sigma$. The diffusive errors are even seen to increase for lower values of $\sigma$ and high values of $\phi$.
We have to keep in mind that the MGEnKF algorithm employs ensemble members that have to be generated using relatively coarse grids (thus high values of $\phi$), for which both the dispersive and diffusive errors are likely to be important. The aforementioned observations of the non-monotonic and uncorrelated evolutions of $\epsilon_D$ and $\epsilon_{\phi}$ suggest that adjusting only $\delta$ is not sufficient to allow a satisfactory control of both kinds of errors at the same time.
An optimization of $\delta$ made for reducing the dispersive error could undesirably deteriorate the diffusive behavior of the scheme.
This justifies that, in the optimization strategy considered in Sec.~\ref{sec:advection}, we add to \eqref{eq:schema} an additional correction of the same order as the dispersive correction term proportional to $\delta$.
This correction is chosen to be consistent with $\alpha \Delta_x^2 \dfrac{\sigma^2}{2}\dfrac{\partial ^2 u}{\partial x^2}$. With negative values of $\alpha$, this term is expected to have an anti-diffusive behavior, counteracting the diffusion error intrinsically associated with the scheme \eqref{eq:schema}. The combined use of both correction terms is thus expected to allow a separate control of the dispersion and diffusion errors.
As previously observed in Fig. \ref{fig:figerror}, the properties of the numerical scheme vary significantly as a function of $\phi$.
It is therefore far from evident that considering $\delta$ and $\alpha$ as constant optimization parameters is sufficient to reduce the numerical error over a wide range of $\phi$.
In view of considering complex (spectrally richer) solutions and extending the use of this scheme to non-linear models, possibly leading to spatially evolving frequency content, it is thus chosen to consider spatially varying functions $\delta(x)$ and $\alpha(x)$ instead of constant values. This variability, represented by expressing these parameters via spatial polynomial expansions, will allow for local numerical optimization on coarse meshes.
\paragraph{Summary: numerical scheme retained to perform coarse-grid simulations}~\\
Following the \textit{a priori} analysis developed throughout this section, the following numerical scheme is finally retained:
\begin{equation}
\label{eq:1D_adv_oneparameter_parametrized}
\begin{aligned}
u_\text{j}^\text{k} = & \mathbf{\mathcal{M}}_{k:k-1}\left(u;\sigma, \delta \right) + \mathbf{\mathcal{C}}_{k:k-1}\left( u;\sigma, \alpha, \gamma \right)
\end{aligned}
\end{equation}
where
\begin{equation}
\begin{aligned}
\mathbf{\mathcal{M}}_{k:k-1}\left(u;\sigma, \delta \right)& = & u_\text{j}^\text{k-1}-\frac{\sigma}{2} \left( u_\text{j+1}^\text{k-1}-u_\text{j-1}^\text{k-1} \right) + \frac{\sigma^2}{2} \left( u_\text{j+1}^\text{k-1}-2 u_\text{j}^\text{k-1}+u_\text{j-1}^\text{k-1} \right) \\
& & +\delta \left( -u_\text{j-2}^\text{k-1}+3u_\text{j-1}^\text{k-1}-3u_\text{j}^\text{k-1}+u_\text{j+1}^\text{k-1} \right),
\end{aligned}
\label{eq:1D_adv_oneparameter_parametrized_M}
\end{equation}
and
\begin{equation}
\mathbf{\mathcal{C}}_{k:k-1}\left( u;\sigma, \alpha, \gamma \right) =
\alpha\frac{\sigma^2}{2} \left( u_\text{j+1}^\text{k-1}-2 u_\text{j}^\text{k-1}+u_\text{j-1}^\text{k-1} \right)+\gamma \left( -u_\text{j-2}^\text{k-1}+3u_\text{j-1}^\text{k-1}-3u_\text{j}^\text{k-1}+u_\text{j+1}^\text{k-1} \right)
\label{eq:1D_adv_oneparameter_parametrized_C}
\end{equation}
Here, one can see that the optimization of $\delta$ is not performed directly, but via a parameter $\gamma$ which measures the deviation of the optimized dispersion coefficient from the constant value $\delta=\sigma(1-\sigma)/4$ proposed by Fromm. This choice was made to provide a clear separation between the dynamical model $\mathbf{\mathcal{M}}$ and the correction model $\mathbf{\mathcal{C}}$ when comparing \eqref{eq:schema} and \eqref{eq:1D_adv_oneparameter_parametrized}. The spatial variability of the coefficients $\gamma$ and $\alpha$ is obtained by expressing them in terms of Legendre polynomial expansions truncated at order $n$:
\begin{equation}
\gamma\left(x\right)=\gamma_\text{0}P_\text{0}\left(x\right)+\gamma_\text{1}P_\text{1}\left(x\right)+\cdots+\gamma_\text{n}P_\text{n}\left(x\right)\label{eq:beta_legendre},
\end{equation}
and
\begin{equation}
\alpha\left(x\right)=\alpha_\text{0}P_\text{0}\left(x\right)+\alpha_\text{1}P_\text{1}\left(x\right)+\cdots+\alpha_\text{n}P_\text{n}\left(x\right)\label{eq:alpha_legendre}.
\end{equation}
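As an illustration, such truncated expansions can be evaluated with NumPy's Legendre utilities. The coefficient values below are arbitrary placeholders, and the physical coordinate is assumed here to be mapped to $[-1,1]$, the natural interval of the Legendre polynomials (a detail not specified in the text):

```python
import numpy as np
from numpy.polynomial.legendre import legval

def legendre_expansion(coeffs, x):
    """Evaluate sum_i coeffs[i] * P_i(x) for a truncated Legendre series."""
    return legval(x, coeffs)

# gamma(x) with n = 4, i.e. five coefficients gamma_0, ..., gamma_4
gamma_coeffs = [0.01, 0.0, -0.005, 0.0, 0.002]   # placeholder values
x = np.linspace(-1.0, 1.0, 9)                    # coarse-grid coordinates in [-1, 1]
gamma_x = legendre_expansion(gamma_coeffs, x)
```

The same call evaluates $\alpha(x)$ from its own coefficient set; only the coefficient vectors enter the optimization.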
Preliminary tests showed that a fourth-order representation ($n=4$) is satisfactory for the cases considered in this study. The inner loop will be used to optimize the expansion coefficients, for a total of $10$ parameters (five expansion coefficients $\gamma_i$ and five expansion coefficients $\alpha_i$). The values of these parameters may be constrained during the inner-loop optimization in order to accept only values leading to stable solutions. In particular, the extended stability constraint for the present scheme in the absence of the additional anti-diffusive correction is imposed, which reads as:
\begin{equation}
\label{stab}
\gamma (1-2\sigma) + \frac{1}{4} \sigma^2 (1-\sigma^2) \geq 0
\end{equation}
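In practice, this constraint is a simple algebraic test applied to each candidate $\gamma$ value (here evaluated pointwise; the function name is ours):

```python
def satisfies_stability(gamma, sigma):
    """Extended stability constraint of the scheme without the
    anti-diffusive correction:
    gamma*(1 - 2*sigma) + sigma**2 * (1 - sigma**2) / 4 >= 0."""
    return gamma * (1.0 - 2.0 * sigma) + 0.25 * sigma**2 * (1.0 - sigma**2) >= 0.0
```

During the inner loop, candidate coefficient sets violating this inequality would simply be rejected.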
\section{Application: one-dimensional advection equation}
\label{sec:advection}
The sensitivity of the MGEnKF to the performance of the \emph{inner loop} introduced in Sec.~\ref{sec:maths} is now assessed in a practical DA experiment. More precisely, the \emph{inner loop} is here used to optimize the behavior of the numerical models for the one-dimensional linear advection equation discussed in Sec.~\ref{sec:apriorianalysis}. The optimization, which is performed over the polynomial expansion coefficients $\gamma_i$ and $\alpha_i$, aims to reduce the numerical error of the ensemble simulations, which are performed on coarse grids.
\subsection{Set-up of test case and test solutions on coarse meshes\label{sec:adv_coarse_model}}
The set-up of the test case representing the one-dimensional linear advection equation (see \eqref{eq:1D_adv} in Sec. \ref{sec:apriorianalysis}) is now presented. The constant advection velocity is set to $c=1$.
Preliminary numerical tests are carried out with the scheme presented in \eqref{eq:1D_adv_oneparameter_parametrized} and by setting \textit{a priori} $\alpha=0$ and $\gamma=0$ (Fromm scheme). The initial condition is set to $u(x,t=0)=c$ everywhere in the physical domain.
A Dirichlet time-varying condition is imposed at the inlet:
\begin{equation}
u(x=0,t)=c \, \left(1 +\theta \sin\left(2\pi t\right) \right),
\label{eq:inlet-1D-advection}
\end{equation}
where $\theta$ represents the amplitude of a sinusoidal perturbation of period $T = 1$ and is set to $\theta=0.015$. At every time step, the outlet boundary condition is obtained by extrapolating along grid lines from the interior nodes to the boundary point at $x=10$ using fourth-order Lagrange polynomials. This represents a classical outlet condition for simulating advective flows (see Sec. 8.10.2 in \cite{Ferziger2002_springer}), which lets the flow exit the physical domain. The simulations are performed over a computational domain of size $0 \leq x \leq 10$ in $L_0= c T$ units. Three different levels of mesh refinement (moderate to low) are chosen for these tests. The resolution is chosen to be of practical interest for the ensemble members in the MGEnKF algorithm. The mesh size is set to the constant values $\Delta_x=0.0625$, $0.1$ and $0.125$, respectively. This corresponds to $16$, $10$ and $8$ discrete nodes per characteristic length $L_0$ or, equivalently, phase angles around $\phi=0.4$, $0.6$ and $0.8$. According to Fig. \ref{fig:figerror}, relatively moderate error levels can be expected at these resolution levels. However, their accumulation during the signal advection is expected to become significant.
The preliminary simulations are also performed using different CFL numbers $\sigma$.
The results, which are shown in Fig. \ref{fig:1D-advection_coarse}, are compared with the \emph{true} (exact) known solution. One can see that, with the exception of the finest mesh resolution combined with the higher values of $\sigma$, the numerical solutions are rapidly affected by the accumulation of diffusive and dispersive errors. In particular, a significant amplitude reduction and phase advance can be observed. It is worth recalling that these errors could be naturally eliminated in the present case by the specific choice $\sigma=1$. However, this constraint is not necessarily compatible with practical needs associated with the numerical simulations running within the MGEnKF algorithm (reduced $\sigma$ required to ensure stability with more complex boundary conditions, signal synchronization with observations, and so on).
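For illustration, the baseline update used in these preliminary tests ($\alpha=\gamma=0$) can be sketched as follows. Since the parametrized scheme \eqref{eq:1D_adv_oneparameter_parametrized} is defined in an earlier section, the Fromm scheme is written here in its textbook form as the average of the Lax--Wendroff and Beam--Warming updates, on a periodic grid for brevity (the actual test case uses a Dirichlet inlet and an extrapolated outlet); the function name is illustrative.

```python
import numpy as np

def fromm_step(u, sigma):
    """One Fromm update for u_t + c u_x = 0 (c > 0) at CFL number sigma.

    Fromm's scheme is the average of the Lax-Wendroff and Beam-Warming
    updates; np.roll enforces periodicity, used here only for brevity.
    """
    up1 = np.roll(u, -1)   # u_{j+1}
    um1 = np.roll(u, 1)    # u_{j-1}
    um2 = np.roll(u, 2)    # u_{j-2}
    lw = u - 0.5 * sigma * (up1 - um1) + 0.5 * sigma**2 * (up1 - 2.0*u + um1)
    bw = (u - 0.5 * sigma * (3.0*u - 4.0*um1 + um2)
          + 0.5 * sigma**2 * (u - 2.0*um1 + um2))
    return 0.5 * (lw + bw)
```

Advecting a single Fourier mode with $8$ points per wavelength (phase angle $\phi\approx0.8$) damps its amplitude at every step, which is precisely the accumulation of diffusive error discussed above.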
\begin{figure}[htp]
\centering
\includegraphics[width=1\textwidth]{figures/advection_model.pdf}
\caption{\label{fig:1D-advection_coarse}Preliminary simulations for the 1D advection equation. Solutions at $t=10$ for different grid refinement and $\sigma$ values are compared with the \emph{true} state.}
\end{figure}
\subsection{Performance of the MGEnKF algorithm without the \emph{inner loop}\label{sec:adv_coarse_MGENKF}}
The performance of the MGEnKF algorithm without the \emph{inner loop} may be severely degraded by the numerical errors induced by the use of coarse grids for the ensemble members. This can be shown with a simple twin experiment where observations are available relatively far from the inlet and the DA tool attempts to estimate the inlet parameter $\theta$.
Data assimilation is performed with the following conditions:
\begin{itemize}
\item Observations are generated from the analytical solution with $\theta=0.015$ on the space domain $[3, 4]$ and on the time window $[0, 390]$. The sampling frequency is set so that approximately $15$ observation updates per characteristic evolution time $L_0 / c$ are obtained, for a total of $\approx 6000$ DA analysis phases. Also, the time origin for the sampling is shifted by ten characteristic times so that the state at $t=0$ is \emph{fully developed}, \textit{i.e.} the initial condition $u(x,0)=c$ is completely advected outside the computational domain. For simplicity, we assume that the observations and the coarse-grid ensemble are represented on the same space. Therefore, $\mathbf{\mathcal{H}}_\text{k}\equiv\mathbf{\mathcal{H}}$ is a time-independent subsampling operator retaining only the points contained in the coarse space domain $[3,4]$.
The observations are artificially perturbed using a constant-in-time Gaussian noise of diagonal covariance $\mathbf{R}_\text{k}\equiv\mathbf{R}=2.25\cdot 10^{-6}\mathbf{I}$. This choice was made for every test case following the recommendations of \cite{Tandeo_Ailliot_Bocquet_Carrassi_Miyoshi_Pulido_Zhen_MWR_2020}, which extensively investigated the sensitivity of the EnKF to the noise / uncertainty in the model and in the observations.
\item The \emph{model} is represented by \textit{i)} a main simulation with a resolution of $\Delta_x=0.0125$ (\textit{i.e.} $80$ mesh elements per $L_0$) and \textit{ii)} ensemble simulations performed on coarse grids. Multiple runs of the MGEnKF are performed using three different mesh resolutions $\Delta_x$ (equal to $0.125$, $0.1$ and $0.0625$) for the ensemble members, also imposing different values of $\sigma$. The \emph{model} employs fixed parameters $\alpha=0$ and $\gamma=0$ (Fromm scheme) for all cases. The size of the ensemble is set to $N_\text{e}=100$. The amplitude of the sinusoidal inlet perturbation $\theta$ for the ensemble simulations is considered to be unknown. It is initially assumed to be described by a Gaussian distribution $\theta \sim \mathcal{N}(0.025, \mathbf{Q}_{\theta})$, with $\mathbf{Q}_{\theta}(t=0)=2.5 \cdot 10^{-7}\mathbf{I}$. For the main simulation run on the fine grid, the mean value of the Gaussian distribution, \textit{i.e.} $\theta=0.025$, is initially imposed. These values are significantly far from the value $\theta=0.015$ used for the analytical solution. This choice allows us to analyze the rate of convergence of the optimization procedure. The initial condition $u(x,t=0)=c$ is used for the main simulation on the fine grid as well as for the coarse ensemble simulations. It is worth recalling that, in such a case, the true state at $t=0$ exhibits a very different solution. This choice allows us to check the robustness of the algorithm during the transient and to ascertain the correct evolution of the first state-estimation stages, when the solution of the model may be very different from the observations.
\end{itemize}
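The observation generator of this twin experiment can be sketched as follows. The fully developed analytical solution of the advection equation with the sinusoidal inlet \eqref{eq:inlet-1D-advection} is $u(x,t)=c\,(1+\theta\sin(2\pi(t-x/c)))$ for $t > x/c$; the function names below are illustrative, not from the reference implementation.

```python
import numpy as np

C, THETA_TRUE = 1.0, 0.015
R_VAR = 2.25e-6   # diagonal entry of the observation covariance R

def true_state(x, t):
    """Fully developed analytical solution of u_t + c u_x = 0 with
    inlet u(0,t) = c (1 + theta sin(2 pi t)); valid for t > x / c."""
    return C * (1.0 + THETA_TRUE * np.sin(2.0 * np.pi * (t - x / C)))

def observe(x_grid, t, rng):
    """Subsample the true state on [3, 4] (the operator H) and perturb
    it with constant-in-time Gaussian noise of covariance R = R_VAR I."""
    mask = (x_grid >= 3.0) & (x_grid <= 4.0)
    y = true_state(x_grid[mask], t)
    return mask, y + rng.normal(0.0, np.sqrt(R_VAR), size=y.shape)
```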
The time evolution of the estimation of $\theta$ is shown in Fig. \ref{fig:amplitude_adv_OL}. First of all, one can see that the estimation procedure starts at $t=10$. This choice is consistent with the positioning of the sensors for observation, which are located in the middle of the computational domain ($[3, 4]$).
\begin{figure}[htbp]
\includegraphics[width=1\textwidth]{figures/advection_theta_OL.pdf}
\caption{\label{fig:amplitude_adv_OL}
Estimation history of the inlet parameter $\theta$ using the MGEnKF without the \emph{inner loop}. The DA method is performed using three different mesh resolutions for the ensemble members and varying the parameter $\sigma$.}
\end{figure}
The estimation of $\theta$ is progressively degraded as the refinement is decreased, with the most accurate results obtained with the finest mesh refinement $\Delta_x=0.0625$ for any given $\sigma$. Concerning the influence of the CFL number $\sigma$, the model prediction is generally more accurate as $\sigma \to 1$. One can also see that progressively larger deviations are observed when varying $\sigma$ on coarser meshes.
Therefore, the increased dispersive and diffusive errors associated with lower CFL numbers are the cause of the large deviations observed when $\sigma=0.125$. $\theta$ is naturally over-estimated in this case, as the model predictions in the sampling region are dominated by numerical errors. The amplitude of the sinusoidal wave imposed at the inlet is numerically \emph{diffused} and \emph{dispersed} over $0\leq x \leq 3$. However, the optimized value of $\theta$ determined by the MGEnKF compensates for the mismatch between model and reference in the sampling region. This can be clearly observed in Fig. \ref{fig:1D-advection_MGENKF}, where the ensemble coarse-grid estimation is shown.
\begin{figure}[htbp]
\centering
\includegraphics[width=1\textwidth]{figures/advection_MGENKF_ol.pdf}
\caption{\label{fig:1D-advection_MGENKF}State estimation obtained for the ensemble members via MGEnKF without the \emph{inner loop} for the linear advection equation test case. Comparisons with the exact solution are shown for $t=300$ for different grid refinement levels and $\sigma$ values.}
\end{figure}
\subsection{Performance of the MGEnKF with the \emph{inner loop}}\label{sec:adv_coarse_oMGENKF}
In this section, the complete MGEnKF scheme is used to study the same DA problem investigated in Sec. \ref{sec:adv_coarse_MGENKF}. This analysis allows us to unambiguously identify the contribution of the \emph{inner loop} to the optimization of the ensemble members running on the coarse grid level.
The main modification with respect to the previous analysis is that the parameters $\gamma(x)$ and $\alpha(x)$ in \eqref{eq:1D_adv_oneparameter_parametrized} are now considered to be unknown space-varying model parameters, which will be optimized using the \emph{inner loop}. The additional term related to the correction model $\mathcal{C}_{k:k-1}$ can be explicitly written as $\alpha\frac{\sigma^2}{2} \left( u_\text{j+1}^\text{k-1}-2 u_\text{j}^\text{k-1}+u_\text{j-1}^\text{k-1} \right) + \gamma \left( -u_\text{j-2}^\text{k-1}+3u_\text{j-1}^\text{k-1}-3u_\text{j}^\text{k-1}+u_\text{j+1}^\text{k-1} \right)$, see \eqref{eq:1D_adv_oneparameter_parametrized_C}. In particular, the optimization will target the values of the Legendre polynomial expansion coefficients $\gamma_i$ and $\alpha_i$ introduced in \eqref{eq:beta_legendre} and \eqref{eq:alpha_legendre}, using the fine-grid state as surrogate observation.
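The correction term above can be evaluated directly from its stencil. The sketch below assumes, as an illustration, that $\alpha(x)$ and $\gamma(x)$ are expanded in Legendre polynomials on the domain $[0,10]$ mapped to $[-1,1]$, and uses a periodic wrap at the boundary for brevity; the function name and boundary treatment are assumptions, not the reference implementation.

```python
import numpy as np
from numpy.polynomial import legendre

def correction_term(u, sigma, alpha_coef, gamma_coef, x, x_span=(0.0, 10.0)):
    """Correction C_{k:k-1}: alpha(x) * sigma^2/2 * (second difference)
    + gamma(x) * (third difference), with alpha(x) and gamma(x) given by
    Legendre expansions on the domain mapped to [-1, 1]."""
    xi = 2.0 * (x - x_span[0]) / (x_span[1] - x_span[0]) - 1.0
    alpha = legendre.legval(xi, alpha_coef)
    gamma = legendre.legval(xi, gamma_coef)
    up1, um1, um2 = np.roll(u, -1), np.roll(u, 1), np.roll(u, 2)
    return (alpha * 0.5 * sigma**2 * (up1 - 2.0*u + um1)
            + gamma * (-um2 + 3.0*um1 - 3.0*u + up1))
```

Both stencils annihilate constant states, so the correction vanishes for a uniform field regardless of the coefficients.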
The MGEnKF thus performs two optimization procedures within the analysis phase, one in the \emph{inner loop} and a second one in the \emph{outer loop}:
\begin{enumerate}
\item Optimization of the polynomial expansion coefficients $\alpha_i$ and $\gamma_i$ to reduce the discrepancy between the low-fidelity (ensemble members) and high-fidelity (main simulation) models in the \emph{inner loop}.
\item Optimization of the amplitude of the inlet perturbation $\theta$ in the \emph{outer loop}.
\end{enumerate}
The coefficients $\alpha_i$ and $\gamma_i$ are initially described by Gaussian distributions $\alpha_\text{i} \sim \mathcal{N}(0, \mathbf{Q})$, $\gamma_\text{i} \sim \mathcal{N}(0, \mathbf{Q})$ with $\mathbf{Q}(t=0)=9\cdot 10^{-8}\mathbf{I}$ and $i=0, 1, 2, 3, 4$. The information from the entire fine-grid domain is available. Therefore, $\mathbf{\mathcal{H}}_\text{k}^{\text{o}}\equiv\mathbf{\mathcal{H}}^{\text{o}}$ is a time-independent subsampling operator retaining all the points contained in the coarse space domain $[0,10]$. Preliminary tests showed that, in order to improve the performance of the \emph{inner loop}, the surrogate observation from the fine grid should be perturbed using a constant-in-time Gaussian noise of covariance $\mathbf{R}^\text{o}_\text{k}\equiv\mathbf{R}^\text{o}=1\cdot 10^{-8}\mathbf{I}$. The value used here for $\mathbf{R}^\text{o}$ is orders of magnitude lower than the observation covariance matrix $\mathbf{R}$. Therefore, one can consider the surrogate observation as quasi-exact.
It should be noted that the EnKF procedure in the \emph{inner loop} only optimizes the parameters affecting the model term $\mathcal{C}$ included in the low-fidelity model and no state estimation is performed on the coarse ensemble within this phase.
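A parameter-only analysis of this kind can be sketched with a generic stochastic-EnKF update, in which the Kalman gain is built from the cross-covariance between the parameter ensemble and the predicted observations. The implementation below is an illustrative sketch of this standard update, not the authors' code; all names are assumptions.

```python
import numpy as np

def enkf_parameter_update(theta_ens, pred_obs_ens, y, R_var, rng):
    """Stochastic EnKF analysis restricted to parameters.

    theta_ens    : (Ne, Np) parameter ensemble (e.g. alpha_i, gamma_i)
    pred_obs_ens : (Ne, No) model-predicted observations per member
    y            : (No,)    (surrogate) observation vector
    R_var        : scalar   diagonal observation-error variance
    """
    Ne = theta_ens.shape[0]
    Ath = theta_ens - theta_ens.mean(axis=0)      # parameter anomalies
    Ah = pred_obs_ens - pred_obs_ens.mean(axis=0) # predicted-obs anomalies
    P_th_h = Ath.T @ Ah / (Ne - 1)                # cross-covariance (Np, No)
    P_hh = Ah.T @ Ah / (Ne - 1) + R_var * np.eye(y.size)
    K = P_th_h @ np.linalg.inv(P_hh)              # Kalman gain (Np, No)
    y_pert = y + rng.normal(0.0, np.sqrt(R_var), size=(Ne, y.size))
    return theta_ens + (y_pert - pred_obs_ens) @ K.T
```

Because only the parameter anomalies enter the gain, the state carried by each member is left untouched by this step, consistently with the remark above.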
\begin{figure}[htbp]
\includegraphics[width=1\textwidth]{figures/advection_theta_OLIL.pdf}
\caption{\label{fig:amplitude_adv_optimized}
Estimation history of the inlet parameter $\theta$ using the complete MGEnKF. The DA method is performed using three different mesh resolutions for the ensemble members and varying the parameter $\sigma$.}
\end{figure}
The estimation of the parameter $\theta$ using the complete MGEnKF is shown in Fig. \ref{fig:amplitude_adv_optimized}. As in Sec. \ref{sec:adv_coarse_model}, runs have been performed using three levels of mesh refinement for the ensemble members. The first ten characteristic times of the experiment are used to initialize the tuning of the parameters $\alpha$ and $\gamma$ (\textit{i.e.} the \emph{outer loop} is initially deactivated). For $t>10$, both optimization procedures are performed. During the very first phases of the assimilation process ($10<t<20$), the estimated value of $\theta$ falls within a $5\%$ error margin with respect to the \emph{truth}, and no degradation is observed for any mesh refinement / $\sigma$ combination investigated. The convergence is noticeably slower when $\sigma=0.125$ for $\Delta_x=0.100$ and $\Delta_x=0.125$, where the numerical errors in the initial phase are the largest. For the worst-case scenario ($\sigma=0.125$, $\Delta_x=0.125$), an initial phase of $40$ characteristic times was required to obtain converged results for the inner loop, which delayed the start of the outer loop.
Overall, the complete MGEnKF outperforms the version without the \emph{inner loop}. This result highlights the complementary features of the two optimization strategies to obtain a global accurate representation of the flow.
The optimization performed in the inner loop is now analyzed in detail. The parameter $\alpha(x)$ is shown in Fig. \ref{fig:adv_alpha} at $t=300$.
As expected, one can see that $\alpha$ exhibits negative values which approach zero with increasing mesh resolution and higher $\sigma$ values. Since the numerical diffusion of the model increases on coarser meshes, the optimization procedure provides an anti-diffusive contribution.
\begin{figure}[htbp]
\includegraphics[width=1\textwidth]{figures/advection_alpha.pdf}
\caption{\label{fig:adv_alpha}
Values of the parameter $\alpha$ obtained via the \emph{inner loop}. Results are shown for different grids and $\sigma$ values for a simulation time $t=300$.}
\end{figure}
The spatial distribution of the sum of the parameters $\gamma + \delta$, representing the total dispersion of the scheme \eqref{eq:1D_adv_oneparameter_parametrized}, is presented in Fig. \ref{fig:adv_gamma} at $t=300$. For every case analyzed, one can remark that $\gamma + \delta$ tends to converge towards the value for which the scheme \eqref{eq:1D_adv_oneparameter_parametrized} becomes third-order accurate, that is $\delta + \gamma=\sigma\left(1-\sigma^2\right)/6$ ($0.0547$, $0.0625$ and $0.0205$ for $\sigma$ equal to $0.75$, $0.5$ and $0.125$, respectively). This result is expected, since this particular value cancels out the dominant dispersive error in the scheme.
\begin{figure}[htbp]
\includegraphics[width=1\textwidth]{figures/advection_gamma.pdf}
\caption{\label{fig:adv_gamma}
Values of the parameter $\gamma + \delta$ obtained via the \emph{inner loop}. Results are shown for different grids and $\sigma$ values for a simulation time $t=300$.}
\end{figure}
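The target values quoted above can be checked directly from the third-order condition $\delta+\gamma=\sigma(1-\sigma^2)/6$:

```python
def third_order_dispersion_coeff(sigma):
    """(delta + gamma) cancelling the leading dispersive error of the
    scheme, i.e. the third-order condition sigma * (1 - sigma^2) / 6."""
    return sigma * (1.0 - sigma**2) / 6.0

# Values quoted in the text for sigma = 0.75, 0.5 and 0.125:
targets = {0.75: 0.0547, 0.5: 0.0625, 0.125: 0.0205}
```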
Finally, results for the ensemble members are shown in Fig. \ref{fig:1D-advection_OMGENKF} at $t=300$. One can see a marked improvement when these results are compared with the ones shown in Fig. \ref{fig:1D-advection_MGENKF}. The estimation of the parameter $\theta$ is clearly much more accurate ($5\%$ error) and there is virtually no difference in prediction between the \emph{truth} and the model used on the coarse grids. This result has been obtained owing to the suppression of the numerical diffusion and dispersion errors via the \emph{inner loop}, which proved to be efficient for every configuration analyzed (wide range of phase angles / CFL numbers). DA analyses considering more complex inlet conditions (multiple frequencies $\theta_i$) have been performed to assess the method. The results, which are not presented here for the sake of brevity, show similar accuracy.
\begin{figure}[htbp]
\centering
\includegraphics[width=1\textwidth]{figures/advection_MGENKF_ilol.pdf}
\caption{\label{fig:1D-advection_OMGENKF}Solutions provided by the ensemble members in the complete MGEnKF. Results, which are obtained for different meshes and values of $\sigma$, are compared with the \emph{true} state at $t=300$.}
\end{figure}
\section{Application: one-dimensional viscous Burgers' equation}
\label{sec:Burgers}
Let us now consider the non-linear and viscous 1D Burgers' equation:
\begin{equation}
\frac{\partial u}{\partial t}+u\frac{\partial u}{\partial x}=\nu\frac{\partial^2 u}{\partial x^2}\label{eq:1D_burgers}
\end{equation}
where $x$ is the spatial coordinate, $u$ the velocity and $\nu$ the kinematic viscosity. Considering a centered difference scheme for both the convection and the diffusion terms over a uniform grid with mesh size $\Delta_x$, and an explicit first-order forward scheme for the time derivative, one obtains:
\begin{align}
u^{\text{k}}_{\text{j}}=&u^{\text{k-1}}_{\text{j}}-u^{\text{k-1}}_{\text{j}}\frac{\Delta_t}{2\Delta_x}\left(u^{\text{k-1}}_{\text{j+1}}-u^{\text{k-1}}_{\text{j-1}}\right)+\nu\frac{\Delta_t}{{\Delta_x}^2}\left(u^{\text{k-1}}_{\text{j+1}}-2u^{\text{k-1}}_{\text{j}}+u^{\text{k-1}}_{\text{j-1}}\right) \label{eq:1D_burgers_scheme}
\end{align}
where $\Delta_t$ represents the time step.
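The update \eqref{eq:1D_burgers_scheme} translates directly into code; the sketch below uses a periodic wrap at the boundaries for brevity (the test case instead uses a Dirichlet inlet and an extrapolated outlet), and the function name is illustrative.

```python
import numpy as np

def burgers_step(u, dt, dx, nu):
    """One explicit step of the discretized Burgers' equation: centered
    differences for the convective and diffusive terms, forward Euler
    in time; np.roll enforces periodicity, used here only for brevity."""
    up1, um1 = np.roll(u, -1), np.roll(u, 1)
    conv = u * dt / (2.0 * dx) * (up1 - um1)
    diff = nu * dt / dx**2 * (up1 - 2.0*u + um1)
    return u - conv + diff
```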
Similarly to what was done in Sec.~\ref{sec:advection}, the performance of the MGEnKF is studied. However, owing to the non-linearity of \eqref{eq:1D_burgers}, a model of the numerical error associated with the discretization process cannot be derived from the dynamical equation. Thus, for this case, the optimization by the inner loop is performed using the \textcolor{Reviewer2}{same dispersion correction term as in \eqref{eq:schema}}. While this model has not been derived for the dynamical equations \eqref{eq:1D_burgers}-\eqref{eq:1D_burgers_scheme}, one can assess the degree of precision attained in reducing amplitude and phase errors.
A numerical experiment for this test case is first performed using a high-resolution mesh to obtain a reference solution and to generate observation for the MGEnKF application. A Dirichlet time-varying condition is imposed at the inlet:
\begin{equation}
u(x=0,t)=u_0 \left(1+\theta\sin\left(2\pi t\right) \right),
\label{eq:inlet-1D-burgers}
\end{equation}
where $u_0=1$ is the mean characteristic velocity of the flow and $\theta$ represents the amplitude of a sinusoidal signal whose period is $T =1$. The amplitude parameter has been set to $\theta=0.2$ in order to observe significant non-linear effects at the Reynolds number chosen for this application, which is discussed in the following. \textcolor{Reviewer2}{As described in the previous section, the outlet boundary condition is extrapolated along grid lines from the interior nodes to the boundary points at $x=10$ using $4$-th order Lagrange polynomials at every time step.} The initial condition is $u(x,t=0)=u_0$ everywhere in the physical domain. The Reynolds number is set to $Re=\frac{u_0 \, L_0}{\nu}=200$, where $L_0 =u_0 \, T$ is the mean wavelength of the signal and the characteristic length of the system. All physical lengths characterizing the system are normalized by $L_0$. The simulation is performed over a computational domain of size $10$ length units, with the origin set so that $0 \leq x \leq 10$.
The mesh resolution is set to $64$ discrete nodes per $L_0$, for a total of $640$ mesh elements. The constant time step $\Delta_t$ is set so that the mean CFL number is $CFL=\frac{u_0 \Delta_t}{\Delta_x}=0.025$, which is small enough to guarantee a stable numerical evolution of the system.
The predicted solution using this model is referred to as the \emph{true} state of the system. This state is first compared with the prediction obtained via a low-fidelity model, which is identical to the reference simulation but uses only $8$ nodes per length $L_0$, for a total of $80$ mesh elements. The comparison of the two solutions, which is shown in Fig. \ref{fig:1D-burgers_CGmodel}, clearly indicates that the lack of mesh resolution is responsible for significant errors in the amplitude and in the phase of the velocity signal.
\begin{figure}[htbp]
\centering
\includegraphics[width=1\textwidth]{figures/burgers_CGmodel.pdf}
\caption{\label{fig:1D-burgers_CGmodel}Instantaneous solution of the 1D Burgers' equation at $t=100$. Solutions obtained via a very refined simulation (Truth, black line) and using a coarse grid (Low-Fidelity Model, gray line) are compared.}
\end{figure}
In particular, the main source of numerical error appears to be of dispersive nature. More specifically, the time period of a full oscillation of the velocity field is significantly shorter when compared with the reference simulation. Diffusion errors are also visible, although their magnitude is smaller. The combination of these two sources of error severely affects the representation of the non-linear phenomena at play. In fact, in the reference simulation, one can see that the non-linear dynamics are strong enough to noticeably deform the sinusoidal profiles imposed at the inlet. On the other hand, the state predicted via the low-fidelity model does not show marked deformations of the velocity profile, suggesting that non-linear effects are poorly represented.
In order to perform an extensive test of the performance of the MGEnKF strategy, two different runs are performed. The first one includes the outer loop only, while the second one performs the complete inner-loop and outer-loop scheme. This comparison allows us to assess the impact of the inner loop on the optimization of the global coefficients of the simulation.
Similarly to what is proposed in Sec.~\ref{sec:advection}, the numerical scheme given by \eqref{eq:1D_burgers_scheme} is modified to introduce \emph{model correction} terms:
\begin{align}
u^{\text{k}}_{\text{j}}=&u^{\text{k-1}}_{\text{j}}-u^{\text{k-1}}_{\text{j}}\frac{\Delta_t}{2\Delta_x}\left(u^{\text{k-1}}_{\text{j+1}}-u^{\text{k-1}}_{\text{j-1}}\right)+\nu\frac{\Delta_t}{{\Delta_x}^2}\left(u^{\text{k-1}}_{\text{j+1}}-2u^{\text{k-1}}_{\text{j}}+u^{\text{k-1}}_{\text{j-1}}\right)+\mathbf{\mathcal{C}}_{k:k-1}(u;\alpha,\gamma) \nonumber \\=&u^{\text{k-1}}_{\text{j}}-u^{\text{k-1}}_{\text{j}}\frac{\Delta_t}{2\Delta_x}\left(u^{\text{k-1}}_{\text{j+1}}-u^{\text{k-1}}_{\text{j-1}}\right)+\left(1+\alpha\right)\nu\frac{\Delta_t}{{\Delta_x}^2}\left(u^{\text{k-1}}_{\text{j+1}}-2u^{\text{k-1}}_{\text{j}}+u^{\text{k-1}}_{\text{j-1}}\right) \nonumber \\ &+\gamma \left(-u^{\text{k-1}}_{\text{j-2}}+3u^{\text{k-1}}_{\text{j-1}}-3u^{\text{k-1}}_{\text{j}}+u^{\text{k-1}}_{\text{j+1}}\right). \label{eq:1D_burgers_scheme_modified}
\end{align}
The model $\mathbf{\mathcal{C}}$ introduced here is composed of two correction terms, which are driven by the parameters $\alpha(x,t)$ and $\gamma(x,t)$. $\alpha$ and $\gamma$ are identically zero in the main simulation of the MGEnKF, while they are optimized in the inner loop for the ensemble members. First, the $\alpha$ parameter controls a diffusive effect / numerical-viscosity term. For this reason, local values are bounded so as to respect the condition $\alpha(x,t) \geq -1$, \textit{i.e.} non-physical solutions with negative global viscosity are excluded. Figure \ref{fig:1D-burgers_CGmodel} shows that grid coarsening is responsible for an overestimation of diffusive effects. Therefore, one should expect to observe a convergence of the parameter $\alpha(x,t)$ towards negative values. The second correction term $\gamma \left(-u^{\text{k-1}}_{\text{j-2}}+3u^{\text{k-1}}_{\text{j-1}}-3u^{\text{k-1}}_{\text{j}}+u^{\text{k-1}}_{\text{j+1}}\right)$ mimics the effects of a dispersion term of the form $\left(\gamma{\Delta_x}^3u_{xxx}\right)$ \cite{Hirsch2007}. This term was used in the previous section to correct the dispersive errors observed for the advection equation. The time evolution of $\alpha(x,t)$ and $\gamma(x,t)$ is taken into account by the MGEnKF itself, as the parameters are updated at each inner analysis phase. The space variability of the two parameters, on the other hand, is obtained by expressing them in terms of a Legendre polynomial expansion.
Similarly to what was done for the linear advection case presented in Sec.~\ref{sec:advection}, the expansion is truncated to $n=4$, \textit{i.e.} a $4$-th order polynomial. This implies that the optimization performed in the \emph{inner loop} targets the values of the ten model coefficients $\gamma_\text{i}$ and $\alpha_\text{i}$.
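The corrected update \eqref{eq:1D_burgers_scheme_modified} with Legendre-expanded parameters can be sketched as follows. The mapping of $[0,10]$ to $[-1,1]$, the clipping enforcing $\alpha \geq -1$, and the periodic wrap at the boundary are illustrative assumptions; the function name is not from the reference implementation.

```python
import numpy as np
from numpy.polynomial import legendre

def burgers_step_corrected(u, dt, dx, nu, alpha_coef, gamma_coef, x,
                           x_span=(0.0, 10.0)):
    """One step of the corrected scheme: the diffusive term is scaled by
    (1 + alpha(x)) and a third-difference dispersion term weighted by
    gamma(x) is added; alpha(x), gamma(x) are Legendre expansions."""
    xi = 2.0 * (x - x_span[0]) / (x_span[1] - x_span[0]) - 1.0
    alpha = np.maximum(legendre.legval(xi, alpha_coef), -1.0)  # alpha >= -1
    gamma = legendre.legval(xi, gamma_coef)
    up1, um1, um2 = np.roll(u, -1), np.roll(u, 1), np.roll(u, 2)
    conv = u * dt / (2.0 * dx) * (up1 - um1)
    diff = (1.0 + alpha) * nu * dt / dx**2 * (up1 - 2.0*u + um1)
    disp = gamma * (-um2 + 3.0*um1 - 3.0*u + up1)
    return u - conv + diff + disp
```

With all expansion coefficients set to zero the scheme reduces to the plain discretization, and constant states remain fixed points for any choice of coefficients since all three stencils annihilate constants.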
The performance of the estimators (inner and outer loop, outer loop only) is assessed via the following data-assimilation strategy:
\begin{itemize}
\item The observations are sampled every $160$ time steps of the reference simulation on the space domain $[3, 4]$ ($64$ sensors) and on the time window $[0, 240]$. Considering the value of the time step $\Delta_t$ employed for this investigation, this implies that approximately $20$ analysis phases per characteristic evolution time $T$ are performed. The sampling of the reference simulation is performed starting from a \emph{fully developed} state at $t=0$. This is easily done owing to the periodic characteristics of the inlet. For the sake of simplicity, we assume that the observations and the coarse-grid ensemble are represented on the same space. Therefore, $\mathbf{\mathcal{H}}_\text{k}\equiv\mathbf{\mathcal{H}}$ is a time-independent sub-sampling operator retaining only the points contained in the coarse space domain $[3,4]$.
The observations are artificially perturbed adding a constant in time Gaussian noise of diagonal covariance $\mathbf{R}_\text{k}\equiv\mathbf{R}=4\cdot 10^{-4}\mathbf{I}$.
\item The \emph{model} realizations consist of a main simulation (run on the same mesh used for the reference simulation) and an ensemble of $N_\text{e}=100$ coarse simulations ($8$ mesh elements per wavelength $L_0$) which are run using the numerical scheme \eqref{eq:1D_burgers_scheme_modified}. The initial condition $u(x,t=0)=u_0$ is imposed for each simulation. The outer loop provides an optimization for the value of the parameter $\theta$ driving the inlet condition. The initial condition for this parameter for each simulation is provided in the form of a Gaussian distribution $\theta \sim \mathcal{N}(0.15, \mathbf{Q}_{\theta})$, with $\mathbf{Q}_{\theta}(t=0)=6.25\cdot 10^{-4}\mathbf{I}$.
As previously stated, the inner loop of the MGEnKF optimizes the polynomial expansion coefficients $\alpha_\text{i}$ and $\gamma_\text{i}$ controlling the behavior of the model terms introduced in the dynamical equations. Here, the physical state predicted by the main simulation is used as surrogate observation for this step. Initial values of the coefficients are described by Gaussian distributions $\alpha_\text{i} \sim \mathcal{N}(0, \mathbf{Q})$, $\gamma_\text{i} \sim \mathcal{N}(0, \mathbf{Q})$ with $\mathbf{Q}(t=0)=9\cdot 10^{-8}\mathbf{I}$ and $i=0, 1, 2, 3, 4$. The surrogate observation is here represented by the projection of the complete state predicted on the fine-grid by the main simulation over the coarse grid.
This also implies that the operator $\mathbf{\mathcal{H}}_\text{k}^{\text{o}}\equiv\mathbf{\mathcal{H}}^{\text{o}}$ used in the inner loop of the Kalman filter and the multigrid projection operator are the same. However, the surrogate observation sampled from the fine grid is further randomized using a constant-in-time Gaussian noise of covariance $\mathbf{R}^\text{o}_\text{k}\equiv\mathbf{R}^\text{o}=1\cdot 10^{-8}\mathbf{I}$ in order to improve the convergence of the inner-loop optimization procedure. As in Sec. \ref{sec:advection}, one can see that $\mathbf{R}^\text{o}$ is orders of magnitude smaller than the observation covariance matrix $\mathbf{R}$.
The inner loop only optimizes the values of $\alpha_\text{i}$ and $\gamma_\text{i}$, and no update of the state estimation is performed here. Also, the inner loop is not performed at each outer-loop analysis phase, but rather about twice per characteristic time, \textit{i.e.} once every ten outer-loop analyses.
\end{itemize}
\begin{figure}[ht]
\includegraphics[width=1\textwidth]{figures/burgers_theta.pdf}
\caption{\label{fig:amplitude_burgers_ILEL}Evolution in time of the parameter $\theta$ during the outer loop optimization via MGEnKF. Results obtained from the DA complete model (outer plus inner loop, dot-dashed line) and the simplified DA model (outer loop only, dotted line) are compared with the exact result (black line).}
\end{figure}
\begin{figure}[htbp]
\includegraphics[width=1\textwidth]{figures/burgers_CG_DA.pdf}
\caption{\label{fig:burgers_CG_DA}State estimation results for the 1D Burgers' test case, projected on the coarse grid at $t=240$. The projected true state (black line) is compared with results obtained via the MGEnKF complete model (outer plus inner loop, gray dotted line) and the simplified MGEnKF model (outer loop only, gray line).}
\end{figure}
The time evolution of the estimation of the parameter $\theta$ is shown in Fig. \ref{fig:amplitude_burgers_ILEL}. The accuracy of the complete MGEnKF scheme is remarkably good, while a significant error ($\theta=-0.2$) is observed for the simplified MGEnKF using the outer loop only. The phase error due to grid coarsening, observed for the model in Fig. \ref{fig:1D-burgers_CGmodel}, is responsible for this important mismatch. The reason becomes clear when analyzing the instantaneous physical state in Fig. \ref{fig:burgers_CG_DA}. In fact, the cumulative phase loss of the model in the region $3\leq x \leq 4$, which includes the observation, is approximately $\pi$. Thus, this error induces a bias in the estimation of the inlet parameter $\theta$, which compensates for the error in the observation region but produces massive errors outside of it.
On the other hand, the inner-loop optimization of the coefficients $\alpha_\text{i}$ and $\gamma_\text{i}$ allows us to obtain a precise representation of the flow field in the whole physical domain, and not only in the observation region. The physical state obtained in Fig. \ref{fig:burgers_CG_DA} by the complete MGEnKF scheme is in better agreement with the \emph{truth}. In addition, the non-linearity of the flow is adequately captured despite the significant difference in resolution between the reference simulation and the ensemble members.
\begin{figure}[htbp]
\includegraphics[width=1\textwidth]{figures/burgers_alpha.pdf}
\caption{\label{fig:alpha_burgers_ILEL}
Instantaneous space distribution of the model parameter $\alpha$ determined via inner loop optimization. The results shown correspond to a simulation time of $t=240$.}
\end{figure}
\begin{figure}[htbp]
\includegraphics[width=1\textwidth]{figures/burgers_gamma.pdf}
\caption{\label{fig:gamma_burgers_ILEL}
Instantaneous space distribution of the model parameter $\gamma$ determined via inner loop optimization. The results shown correspond to a simulation time of $t=240$.}
\end{figure}
\begin{figure}[htbp]
\includegraphics[width=1\textwidth]{figures/burgers_coeff_alpha.pdf}
\caption{\label{fig:calpha_burgers_ILEL}
Estimation history of the Legendre Polynomial coefficients $\alpha_\text{i}$.}
\end{figure}
\begin{figure}[htbp]
\includegraphics[width=1\textwidth]{figures/burgers_coeff_gamma.pdf}
\caption{\label{fig:cgamma_burgers_ILEL}
Estimation history of the Legendre Polynomial coefficients $\gamma_\text{i}$.}
\end{figure}
The performance of the inner loop is now investigated via the analysis of the model parameters $\alpha$ and $\gamma$. In Figs. \ref{fig:alpha_burgers_ILEL} and \ref{fig:gamma_burgers_ILEL}, spatial distributions of the two parameters are shown at $t=240$. It will also be shown later that the time variations of these quantities are weak, so the results presented can be considered as mean values for $\alpha$ and $\gamma$ as well. As expected, $\alpha$ exhibits negative values to compensate for the higher numerical diffusion due to the coarser grid. Moreover, the condition $\alpha > -1$ is strictly respected in the whole physical domain. The spatial distribution of $\gamma$ is quasi-constant and equal to $\approx 0.004$.
For the linear advection equation, if $\gamma=\sigma\left(1-\sigma^2\right)/6$, one obtains a third-order accurate scheme on the support $j-2,j-1,j,j+1$, namely the Warming--Kutler--Lomax scheme \cite{Hirsch2007}. Considering the values of $u_0$, $\Delta_x$ and $\Delta_t$ used for this analysis, this corresponds to $\gamma=0.025\left(1-0.025^2\right)/6\approx 0.004$. Thus, the optimized value of $\gamma$ obtained via the inner loop is close to the value providing maximum accuracy in the linear advection case.
More information about the numerical models can be drawn from the analysis of the time evolution of the coefficients $\alpha_\text{i}$ and $\gamma_\text{i}$ in Figs. \ref{fig:calpha_burgers_ILEL} and \ref{fig:cgamma_burgers_ILEL}. First of all, one can see that the zero-order contributions $\alpha_\text{0}$ and $\gamma_\text{0}$ are the most important in terms of magnitude. In addition, one can also see a small but non-zero time evolution of these coefficients. This observation is tied to the coupling between the optimization procedures performed in the inner loop and in the outer loop, whose results interact through the non-linear dynamics. This is also the reason why the rate of convergence of $\alpha$, which is closely connected to $\theta$, is slower.
Multiple strategies for the complete MGEnKF scheme have been tested by varying the starting times of the inner and outer loops. It has been observed that the global optimization converges faster and is more robust if a first phase using the inner loop only is followed by a second phase where both the inner loop and the outer loop are applied. This procedure allows us to \emph{train} the model used for the ensemble members to perform similarly to the model employed on the fine grid for the main simulation. Therefore, the subsequent classical DA optimization, represented by the outer loop, converges more rapidly to the targeted behavior provided by the observation. This initial training phase, which has been performed here for $t \in [0, 40]$, is particularly important if the values of the parameters driving the model used for the ensemble members are unknown, as in the present analysis.
\section{Conclusions}
\label{sec:conclusions}
The predictive features of the Multigrid Ensemble Kalman Filter (MGEnKF) recently proposed for Data Assimilation of unsteady fluid flows have been investigated in this article. The analysis focused on the improvement in global performance due to the \emph{inner loop}. This step of the DA strategy targets a systematic improvement of the accuracy of the ensemble members using surrogate information from the main simulation run on the fine grid level.
The method has been tested over two classical one-dimensional problems, namely the linear advection problem and the Burgers' equation. The results indicate the importance of the \emph{inner loop} in improving the performance of the Data Assimilation algorithm. For the linear advection case, the proposed model correction term
$\mathbf{\mathcal{C}}_{k:k-1}$
has been derived from the exact equation in order to compensate for dispersive and diffusive numerical errors. The tests performed showed that this approach can fully correct the numerical errors associated with the coarse grid level where the ensemble members are run. A similar strategy has been used for the non-linear Burgers' case. Part of the correction model derived for the linear advection equation (the dispersive term) has been used here as a correction term to reduce the phase mismatch between fine and coarse grid forecasts. An additional diffusion term parametrized by $\alpha$ was introduced to control the loss of amplitude of the solution. The improvement in the estimation accuracy due to the use of the \emph{inner loop} is remarkable. However, contrary to what was observed in the linear advection case, the model correction term here is not able to fully correct the discrepancies due to the numerical error. This aspect, which is due to the lack of an exact correction model for the Burgers' equation, shows the limitations of the \emph{inner loop}.
These findings open exciting perspectives of application to grid-dependent reduced-order models extensively used in fluid mechanics for complex flows, such as Large Eddy Simulation (LES). \textcolor{Reviewer1}{This method is extensively used in both academic and industrial studies because of its accuracy and reduced computational demands when compared with DNS. However, several research works in the literature have highlighted the extreme sensitivity of LES to variations in the set-up of the problem. Non-linear error dynamics involving different sources (discretization error, implicit/explicit filtering, subgrid-scale modelling, etc.) have been observed \cite{Meyers2006_jfm,Lucor2007_jfm,Meldi2011_pof,Meldi2012_pof,Salvetti2018_DLES}, which may lead to counter-intuitive outcomes such as a degradation of the accuracy with mesh refinement.} An accurate model reconstruction via the \emph{inner loop} may alleviate or even prevent one of the major issues associated with multilevel applications in fluid mechanics, namely the sensitivity of the parametric description of the model (in the form of the set of parameters $\theta$) to the mesh resolution. In this scenario, one may tune the reduced-order model for the most refined grid resolution using the \emph{outer loop}, and compensate the differences emerging on progressively coarser grids by using the \emph{inner loop} to optimize the additional model. Of course, the additional model provided for the coarse grids must be suitable for this task, as seen for the one-dimensional Burgers' equation, and difficulties are expected for applications with scale-resolved turbulent flows. Combined applications of the EnKF with machine learning tools, which have been recently explored for simplified test cases, may prove successful in deriving precise model structures when included in the formalism of the MGEnKF for the \emph{inner loop}.
\textbf{Acknowledgements}: Our research activities are supported by the Direction Générale de l'Armement (DGA) and the Région Nouvelle Aquitaine.
LC would like to acknowledge the supplementary funding and excellent working conditions offered by the French Agence Nationale de la Recherche (ANR) in the framework of the projects "Closed-loop control of the wake of a road vehicle – COWAVE" (ANR-17-CE22-0008) and "Apprentissage automatique pour les récepteurs solaires à haute température – SOLAIRE" (ANR-21-CE50-0031).
\textbf{Conflict of interest}: The authors have no conflict of interest.
\section{Introduction}
The line-ratio method [8] is used for measuring magnetic fields in subtelescopic ($d \leq 10^2$ km) structures on the Sun. The basic idea consists in comparison of magnetograph signals for spectral lines that have the same depth of formation and temperature sensitivity, but have different Lande factors. When regions with low field strengths ($H \leq 30$--50 mT) fall on the entrance slit, such lines give the same values of $H_\parallel$ for the measured longitudinal fields. But if there are areas with strong ($H > 50$--100~mT) fields in the region, the $H_\parallel$ found with different lines will differ, as the relationship between the magnetograph signal and the actual field strength is nonlinear.
The method does not depend on the spatial resolution in direct observations, but it requires precise information about the thermodynamic characteristics of the medium where the subtelescopic magnetic structures are localized. On account of this, the total number of free parameters (both magnetic and non-magnetic) is approximately 10, and this makes some simplifying assumptions necessary. The most frequently used assumptions are: the magnetic field is longitudinal, the radial velocities are the same inside the small-scale flux tubes and outside them, and the contribution from the anomalous dispersion (AD) is insignificant. The last assumption has not been substantiated by rigorous quantitative calculations with reference to the theory and practice of the line-ratio method. The purpose of this study is therefore to find out whether it is permissible (and under what conditions) to neglect the effect of AD when small-scale fields are measured.
\section{Calculation of the theoretical Stokes profiles}
Formation of absorption lines in the presence of a magnetic field is described with the transfer equations for polarized light. The polarized radiation is usually given in a parametric representation. According to the theory of Stokes, who was the first to introduce the parametric representation of the polarized radiation, the intensity and polarization state are defined by four Stokes parameters, $I$, $Q$, $U$, and $V$. The physical meaning of these becomes clear from the equations:
\begin{eqnarray}
I & = & I_0+I_p=I_0 + \sqrt{Q^2+U^2+V^2},\nonumber\\
Q & =&I_{lin}(\varphi=0^{\circ})-I_{lin}(\varphi=90^{\circ}),\nonumber \\
U & =&I_{lin}(\varphi=45^{\circ})-I_{lin}(\varphi=135^{\circ}), \\
V & =&I_{circ, right}-I_{circ, left}\nonumber
\end{eqnarray}
\noindent where $I_0,~I_p$ are the intensities of the unpolarized and polarized components, respectively, $I_{lin}$ and $I_{circ}$ are the intensities of the linearly and circularly polarized components; $\varphi$ is an angle reckoned from the $OX$ direction in the $XOY$ plane perpendicular to the line of sight. The choice of coordinate system and the orientation of the magnetic field vector affect the Stokes parameters. The orientation of the magnetic field is often chosen in the way proposed by Shurcliff [11]: an arbitrary direction of the vector \textbf{H} is determined by the inclination angle $\gamma$, which is reckoned from the $OZ$ axis to the direction of \textbf{H}, and the azimuth $\varphi$, which is reckoned from the $OX$ axis to the projection of the vector \textbf{H} on the plane $XOY$.
The theory of absorption-line formation in a magnetic field was first developed by Unno [16] and was later generalized by many authors: Stepanov, Rachkovskii, Obridko, Stenflo, Staude, Domke, Landi Degl'Innocenti, et al. By now, it has been developed in such detail that it can provide quite a reliable basis for the theoretical interpretation of magnetographic and polarimetric observations. Analytical methods for solving the transfer equations developed rapidly in the late 1960s and in the 1970s, but their application is always based on approximations that restrict the class of astrophysical problems. For example, for realistic model atmospheres it is impossible to indicate where magnetically sensitive lines are formed, what magnetic field and velocity gradients exist, etc.
Progress in high-precision observations of four Stokes parameters with Stokes polarimeters [15] has increased the need for numerical solutions of transfer equations, as well as for the mathematical software for theoretical calculations of the Stokes parameter profiles. In this field of research, the works by Mattig, Beckers, Wittmann, Staude, Landi Degl'Innocenti, van Ballegooijen, et al., are well known. On this basis, Sheminova wrote the SPANSATM program [9,10], which includes all achievements of other authors, offers the necessary service facilities, and is very helpful. The only substantial limitation is the assumption of the local thermodynamic equilibrium (LTE). Non-LTE effects may nevertheless be taken into account empirically, through the coefficients of deviation from LTE.
Magneto-optical effects in the theory of spectral-line formation in the presence of a magnetic field were taken into account for the first time by Rachkovskii [5,6]. When the transfer equations are solved by numerical methods, accounting for these effects does not cause additional difficulties, as the anomalous dispersion coefficients appear in the transfer equations through the anti-symmetric elements of the absorption matrix. Below we shall give the principal formulas in order to follow the contribution of the anomalous dispersion to the transfer equations. These equations for polarized radiation with the AD allowed for have the following vectorial form:
\begin{equation}
\frac{d\bf{I}}{d\tau} = \frac{1}{\mu}[(\bf{\eta_0}
+ \bf{\eta})\bf{I} - (\bf{\eta_0 B}+\bf{\eta S})]
\end{equation}
\noindent where
\begin{equation}
\bf{I}=\left(\begin{array}{c}
I\\ Q\\ U\\ V
\end{array} \right),~~~
\bf{S}=\left(\begin{array}{c} S\\ S\\ S\\ S
\end{array} \right),~~~
\bf{B}=\left(\begin{array}{c} B\\ 0\\ 0\\ 0
\end{array} \right),
\end{equation}
\begin{equation}
\bf{\eta_0}=\left(\begin{array}{cccc}
\eta_0& 0& 0& 0\\ 0& \eta_0& 0& 0\\ 0& 0& \eta_0& 0\\ 0& 0& 0&
\eta_0
\end{array} \right),~~~~~~
\bf{\eta}=\left( \begin{array}{cccc} \eta_I& \eta_Q& \eta_U&
\eta_V\\ \eta_Q& \eta_I& \rho_V&-\rho_U\\ \eta_U& -\rho_V&\eta_I&
\rho_Q\\ \eta_V& \rho_U& -\rho_Q& \eta_I
\end{array} \right).
\end{equation}
Here $\mu= \cos\theta$; $B$ is the Planck function; $S$ is the source function in a line; $\eta_0$ is the ratio of the selective absorption coefficient at the line center $k_{\lambda_0}$ to the continuous absorption coefficient $\kappa_5$ at the wavelength $\lambda = 500$ nm; $\eta_I,~ \eta_Q,~ \eta_U,~\eta_V$ are the ratios of the selective absorption coefficients for the four Stokes parameters to the coefficient $\kappa_5$, and $\rho_Q,~ \rho_U,~ \rho_V$ are the anomalous dispersion coefficients as ratios to the parameter
$\kappa_5$. The quantities $\eta_I,~ \eta_Q,~ \eta_U,~\eta_V$ and $\rho_Q,~ \rho_U,~ \rho_V$ are determined by the direction of the magnetic lines of force, i.e., by the inclination $\gamma$, azimuth $\varphi$, and also by the coefficients of selective absorption and anomalous dispersion for radiation linearly polarized in the direction of the field ($\eta_p,~ \rho_p$), counterclockwise polarized in the plane perpendicular to the field direction ($\eta_b,~\rho_b$), and, finally, clockwise polarized in the same plane ($\eta_r,~\rho_r$). According to [17], the latter coefficients are defined by:
\begin{eqnarray}
\eta_p&= &\frac{k_{\lambda_0}}{\kappa_5}[ H(a,
v) + \frac{1}{2}\frac{1}{\Delta\lambda_D^2}\frac{\partial^2H}{\partial v^2}v_0],
\nonumber\\
\eta_{r,b}&=&\frac{k_{\lambda_0}}{\kappa_5}
[H(a,v \mp v_H) + \frac{1}{2}\frac{1}{\Delta\lambda_D^2}\frac{\partial^2H}{\partial v^2}v_1]
\nonumber\\
\rho_p&=&\frac{k_{\lambda_0}}{\kappa_5}
[F(a,v) + \frac{1}{2}\frac{1}{\Delta\lambda_D^2}\frac{\partial^2F}{\partial v^2}v_0],
\nonumber\\
\rho_{r,b}&=&\frac{k_{\lambda_0}}{\kappa_5} [F(a,v \mp
v_H) + \frac{1}{2}\frac{1}{\Delta\lambda_D^2}\frac{\partial^2F}{\partial v^2}v_1]
\end{eqnarray}
where $H(a, v)$ and $F(a, v)$ are the Voigt and Faraday functions, respectively; $v$ is the distance from the line center; $v_H$ is the Zeeman shift; $v_0$ and $v_1$ are the distances of the displaced $p$-components and of the $b$- and $r$-components, respectively. The Faraday function, which is also called the dispersion function, and the Voigt function are defined by
\begin{equation}
H(a,v)=\frac{a}{\pi}\int\limits_{-\infty}^{\infty}\frac{\exp(-y^2)}{(v-y)^2+
a^2}dy,
\end{equation}
\begin{equation}
F(a,v)=\frac{1}{2\pi}\int\limits_{-\infty}^{\infty}\frac{\exp(-y^2)(v-y)}{(v-y)^2+
a^2}dy .
\end{equation}
Here
\begin{equation}
v=(\lambda-\lambda_0)/ \Delta\lambda_D ,
\end{equation}
\begin{equation}
\Delta\lambda_D=\frac{\lambda_0}{c}\sqrt{2RT/m_i+v^2_{\rm micro}} ,
\end{equation}
\begin{equation}
a= \Gamma \lambda_0^2/(4\pi c\Delta\lambda_D).
\end{equation}
The Faraday function takes the anomalous dispersion in absorption lines into account. Its characteristics are described in [18].
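As an illustrative numerical sketch (an editorial addition, not part of the original computation), the two integrals defining $H(a,v)$ and $F(a,v)$ above can be evaluated by straightforward quadrature; the values below reproduce the exact line-center identity $H(a,0)=e^{a^2}\,\mathrm{erfc}(a)$ and the antisymmetry of the Faraday function in $v$.

```python
import math

def voigt_faraday(a, v, ylim=8.0, n=4000):
    """Trapezoidal-rule evaluation of the Voigt H(a,v) and Faraday F(a,v)
    integrals; the accuracy here is illustrative, not production-grade."""
    h = 2.0 * ylim / n
    H = F = 0.0
    for i in range(n + 1):
        y = -ylim + i * h
        w = 0.5 if i in (0, n) else 1.0   # trapezoid end-point weights
        g = math.exp(-y * y)              # Gaussian factor exp(-y^2)
        d = (v - y) ** 2 + a * a          # shared Lorentzian denominator
        H += w * g * a / d
        F += w * g * (v - y) / d
    return H * h / math.pi, F * h / (2.0 * math.pi)

H0, F0 = voigt_faraday(0.1, 0.0)
print(H0, math.exp(0.01) * math.erfc(0.1))  # both ~0.8965
print(F0)                                   # ~0: F(a,v) is odd in v
```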
Thus, (2) can be written as a system of four first-order differential equations. Using a fifth-order Runge--Kutta--Fehlberg method and the boundary conditions according to [13], we obtained the Stokes parameter profiles for the escaping radiation at the Sun's surface at the center of the disk in units of relative depression (or line depth); they are as follows:
\begin{eqnarray}
R_I(\Delta\lambda) & = & (I_c-I(\Delta\lambda))/I_c = 1-I(\Delta\lambda)/I_c,\nonumber \\
R_Q(\Delta\lambda) & =& (Q_c-Q(\Delta\lambda))/I_c = -Q(\Delta\lambda)/I_c,\nonumber\\
R_U(\Delta\lambda) & = & (U_c-U(\Delta\lambda))/I_c = -U(\Delta\lambda)/I_c,\nonumber \\
~~~R_V(\Delta\lambda) & =& (V_c-V(\Delta\lambda))/I_c = -V(\Delta\lambda)/I_c
\end{eqnarray}
where $I_c,~Q_c,~U_c, V_c$ are the Stokes parameters for the continuous radiation, which is usually considered to be unpolarized, i.e., $Q_c = U_c = V_c = 0$.
To calculate the Stokes parameters for the concrete lines Fe I $\lambda\lambda$ 524.7 and 525.0 nm (multiplet no 1) that have the effective Lande factors $g_{eff}$ equal to 2.0 and 3.0, respectively, we have used the following input data: the HOLMU model atmosphere [12], microturbulent velocity $v_{\rm micro} = 0.8$~km/s, macroturbulent velocity $v_{\rm macro}= 0$, damping constant $\Gamma = 1.5 \Gamma_{\rm WdW}$, iron abundance $A = 7.64$, oscillator strengths $\log gf$ equal to $-5.03$ and $-4.89$, respectively [1].
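For orientation, the Doppler width entering these profile functions can be estimated from the expression for $\Delta\lambda_D$ given above (an editorial sketch; $T = 5000$~K is an assumed representative photospheric temperature, whereas the actual calculation uses the HOLMU model):

```python
import math

# Editorial estimate of the Doppler width for the Fe I 525.0 nm line.
# T = 5000 K is an assumed representative value (not from the HOLMU run).
R = 8.314          # gas constant, J mol^-1 K^-1
m_Fe = 0.055845    # molar mass of iron, kg mol^-1
c = 2.998e8        # speed of light, m/s
lam = 525.0e-9     # wavelength, m
T = 5000.0         # temperature, K (assumption)
v_micro = 0.8e3    # microturbulent velocity, m/s (as in the text)

dld = lam / c * math.sqrt(2.0 * R * T / m_Fe + v_micro**2)
print(dld * 1e12)  # Doppler width in pm, roughly 2.5 pm
```

Under this assumption, the central zone $\Delta\lambda\leq 4$--6 pm discussed below corresponds to roughly two Doppler widths.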
\begin{figure}[h!b]
\centerline{
\includegraphics [scale=1.0]{fig11.eps}
}
\caption {\small
Profiles of the Stokes parameter $R_V$ for the Fe I lines $\lambda$~524.7 nm (curves 1 and 2) and $\lambda$~525.0 nm (curves 3 and 4) with $H$ = 200 mT and $\gamma = 75^\circ$. Profiles 1 and 3 are calculated ignoring the anomalous dispersion, and profiles 2 and 4 take it into account.
}
\end{figure}
Figure 1 shows some of the calculations, performed with an ES-1061 computer. It is clear that the anomalous dispersion has an appreciable effect only on the parts of the line profiles that are close to the line center ($\Delta\lambda\leq 4$--6 pm). As was already noted in [9,10], the wavelength $\lambda$, excitation potential $EP$, equivalent width $W$, and factor $g_{\rm eff}$ have a minor effect on the anomalous dispersion. Nevertheless, the latter increases with increasing $\lambda$, decreasing $EP$, increasing $W$, and increasing $g_{\rm eff}$. The anomalous dispersion is more sensitive to the parameters of the medium. It increases with decreasing $v_{\rm micro}$, with decreasing $\Gamma$, with the rise of $T$, with increasing $H$, and with increasing $\gamma$. The anomalous dispersion is, in fact, proportional to the magnetic gain of these parameters.
\section{Calculation scheme in the line-ratio method}
Diagnostic relations in the line-ratio method were calculated according to the scheme described in detail in [2]. The profiles $R_V(\Delta\lambda)$ found earlier were used for determination of the ratios
\begin{equation}
r=H_{\parallel}(525.0)/H_{\parallel}(524.7)
\end{equation}
as functions of the distance from the center of the line, $\Delta\lambda$, using the following expressions:
\begin{equation}
H_{\parallel}=H_c\delta_{\parallel}/\delta_c
\end{equation}
where $H_c = 2.14 \cdot 10^7 \Delta\lambda_c/ g_{\rm eff} \lambda^2$ ($H_c$ is in units of T, $\lambda$ and $\Delta\lambda$ are in units of nm), and
\begin{equation}
\delta_\parallel = 2\alpha x^{-2}_m \int_0^{x_m} {R_V(\Delta\lambda, x)xdx},
\end{equation}
\begin{equation}
\delta_c = R_I(\lambda+\Delta\lambda_c)-R_I(\lambda-\Delta\lambda_c).
\end{equation}
Here $R_I(\lambda\pm\Delta\lambda_c)$ is the line profile unperturbed by the magnetic field and shifted by $\pm\Delta\lambda_c$ for calibration of the magnetograph signal $\delta_\parallel$; $\alpha$ is the fraction of the aperture area occupied by flux tubes (filling factor); $x = l/l_0$ is the distance from the axis of symmetry of a flux tube expressed in relative units ($l$ is the line distance and $l_0$ is a typical radius of a flux tube); $x_m$ is the distance at which the field strength becomes zero.
The function $R_V(\Delta\lambda, x)$ under the integral depends on the Zeeman splitting
\begin{equation}
\Delta\lambda_H=4.67\cdot 10^{-8}g_{\rm eff}\lambda^2 H
\end{equation}
where $H$ may, in its turn, also depend on $x$, i.e, $H = H(x)$. Later we shall call the $H(x)$ function a field cross profile (or simply a field profile) in flux tubes.
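To fix the scales involved, the Zeeman-splitting formula above can be evaluated for the lines used here (an editorial sketch; the unit convention, $\lambda$ in nm and $H$ in T giving $\Delta\lambda_H$ in nm, is inferred from the constant $4.67\cdot 10^{-8}$):

```python
def zeeman_shift_nm(g_eff, lam_nm, H_T):
    """Zeeman splitting, with lambda in nm and H in tesla, result in nm.
    The unit convention is inferred from the constant 4.67e-8."""
    return 4.67e-8 * g_eff * lam_nm**2 * H_T

# Fe I 525.0 nm (g_eff = 3.0) in a 200 mT field:
dl = zeeman_shift_nm(3.0, 525.0, 0.2)
print(dl * 1e3)  # ~7.7 pm, i.e. beyond the central zone where the
                 # anomalous dispersion matters most (Delta_lambda < 4-6 pm)
```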
Expression (14) corresponds to the case when the total magnetic flux is concentrated within flux tubes. But it may be easily generalized to the case when the contribution of the background field of strength $H_i$ is not zero. Then
\begin{equation}
\delta_\parallel = 2\alpha x_m^{-2} \int_0^{x_m} R_{V,f}(\Delta\lambda, x)xdx+ (1-\alpha)R_{V,i}(\Delta\lambda)
\end{equation}
where the indices $f$ and $i$ refer to flux tubes and the background field, respectively.
\section{Anomalous dispersion effect on the line-ratio method}
According to calculations, the assumption that the magnetic field is longitudinal is quite admissible, particularly for $\Delta\lambda < 8$ pm (Fig. 2). Experimental values of $r$ are determined with an error of approximately 5--10\%, and, therefore, a small actual discrepancy between the relations, for example between those for $\gamma = 0^\circ$ and $\gamma = 75^\circ$ at $\Delta\lambda < 8$ pm, has no practical importance.
\begin{figure}[h!]
\centerline{
\includegraphics [scale=1.0]{fig22.eps}
}
\caption {\small
Ratio $r=H_{\parallel}(525.0)/H_{\parallel}(524.7)$ as a function of $\Delta\lambda$, the distance from the line center, for a uniform magnetic field of $H = 200$~mT when the anomalous dispersion is absent and the values of the angle $\gamma$ are $0^\circ$ (1), $45^\circ$ (2), and $75^\circ$ (3).
}
\end{figure}
\begin{figure}[h!b]
\centerline{
\includegraphics [scale=1.0]{fig33.eps}
}
\caption {\small
The same as in Fig. 2, but taking the anomalous dispersion into account and for $\gamma = 75^\circ$ (1), $45^\circ$ (2), and $0^\circ$ (3).
}
\end{figure}
\begin{figure}[h!b]
\centerline{
\includegraphics [scale=1.0]{fig44.eps}
}
\caption {\small
The same as in Figs. 2 and 3, but for the two-component model $H(x)$ = const = 200 mT, $H_i = 20$ mT, $\alpha= 0.2$, $\gamma = 75^\circ$: ignoring the anomalous dispersion (1), taking it into account (2).
}
\end{figure}
In general, the anomalous dispersion affects $r$ more strongly than the field inclination angle, especially near the line centers ($\Delta\lambda < 4$ pm), as well as at large angles ($\gamma > 20$--$30^\circ$), in an intense field ($H > 100$ mT), and when the field is the same at all points of the aperture (Fig. 3). The contribution of the anomalous dispersion is to be taken into account only when the conditions indicated above are fulfilled simultaneously. If at least one of the conditions is not fulfilled, the anomalous dispersion effects will be within the typical errors of determination of $r$.
In particular, if regions with a strong ($H > 100$ mT) non-longitudinal field and regions with a non-longitudinal field of small or medium strength ($H \leq 30$--50 mT) both fall on the entrance slit, $r$ will be the same within the error limits, whether the anomalous dispersion is taken into account or not, over the whole interval of actual $\Delta\lambda$. Figure 4 illustrates this, showing the relationships for the two-component model that involves a background field of $H_i = 20$ mT and a subtelescopic flux tube with a rectangular field distribution
\begin{equation}
H(x)={\rm const}= 200~ {\rm mT}.
\end{equation}
Here we adopted, according to the models of [4,7], $\alpha =0.2$ and $H_i/\alpha = 100$ mT. The difference between the relationships shown in Fig. 4 represents essentially a maximum effect for models with a background field, because in real flux tubes $H(x) \neq$~const [4].
But if one uses the given method, as in [17], for the study of spatially resolved structures (e.g., pores), when a region with an intense field fills the aperture completely, then neglecting the anomalous dispersion can cause considerable errors, of more than 30\% of the true field strength, for $\Delta\lambda < 4$~pm.
Calculations have shown also that, when the ratios of linear polarization amplitudes, $\delta_\perp = \sqrt{R_Q^2 + R^2_U}$, are used in this method, a situation occurs very similar to the case of circular polarization $R_V$ analyzed above. In particular, the effect of the anomalous dispersion here is also the largest for $\Delta\lambda= 0$--6 pm, whereas it is negligible outside this interval.
\section{Conclusions}
Calculations show that, when the line-ratio method is used, the effect of the anomalous dispersion cannot be neglected if the following conditions are satisfied simultaneously: a) the inclination angle of the lines of force exceeds $20^\circ$, b) the magnetic field exceeds 100 mT, c) the magnetic field profile in the flux tubes is rectangular, and d) parts of the line profiles near the center ($\Delta\lambda < 4$~pm) are used. In practice this means that, when real small-scale flux tubes outside spots or pores are studied, it is not necessary to take the anomalous dispersion into account, not only in the central zone of the solar disk but even at heliocentric angles of approximately 60--$70^\circ$. Although flux tubes in the solar atmosphere appear to be almost vertical [3,15], the magnetic field profile in them is non-rectangular (according to the data of [4], $H(x) \propto 1 - x^4$). Besides, a background field is very probable between the tubes, and this field carries a magnetic flux comparable with the flux in the tubes. The criterion stated above is thus violated. With reference to specific proposed models, this means that neither the model by Rachkovskii and Tsap [17] nor that by Lozitskii and Tsap [4] needs to be revised from this point of view.
The anomalous dispersion should nevertheless be taken into account in the method when solar pores are observed, if their size is not smaller than the effective size of the aperture and the heliocentric angles are greater than 20--$30^\circ$.
\normalsize
\section*{Introduction and conclusions}
The formalism developed by Ferrara, Gibbons and Kallosh (FGK) in
Ref.~\cite{Ferrara:1997tw} has proven a formidable tool in the study of
4-dimensional black holes. For extremal 4-dimensional black holes, it has
solidly established a connection between the entropy and the values of the
scalars on the horizon through the extremization of the so-called black-hole
potential. In the special case of $N\geq 2,d=4$ supergravity theories, the
black-hole potential is just a function of the central charge and its
covariant derivatives, and some of the extrema of the black-hole potential
(the supersymmetric ones) are the extrema of the central charge, whose value
on the horizon determines the entropy. This explains how the attractor
mechanism works in these theories \cite{Ferrara:1995ih}.
These particular (but very important) results of Ref.~\cite{Ferrara:1997tw}
have been used in much of the literature on black holes: the attractor values
of the scalars on the horizon for a given set of charges of a given model or
class of models are determined and the entropy of the corresponding extremal
black holes is computed without ever having to construct the complete
black-hole spacetime metric explicitly. Actually, only in some supergravity
theories is it known how to perform this construction (notably, in $N=2,d=4$
supergravity), even if the generic form of the solutions is known, in
principle, for all 4-dimensional supergravities \cite{Meessen:2010fh}. For
extremal non-supersymmetric solutions the situation is worse: a systematic procedure to
construct the solutions does not exist even for $N=2,d=4$ supergravities,
except in trivial cases. The non-extremal solutions are described by the FGK
effective action as well and their physics is much richer (and the unknown
extremal non-supersymmetric solutions can probably be obtained from them);
the absence of an attractor mechanism makes their construction harder and
their study less attractive but definitely no less rewarding.
Recently, a general ansatz to construct general families of non-extremal
black-hole solutions of $N=2,d=4$ supergravity in combination with the FGK
formalism has been proposed in Ref.~\cite{Galli:2011fq} and new variables that
clarify their structure and their construction have been proposed in
Refs.~\cite{Mohaupt:2011aa,Meessen:2011mu}.
Given the power of this approach, it is natural to try to generalize it to
other cases. In Ref.~\cite{Meessen:2011bd} a generalization of the FGK
formalism for $d$-dimensional black holes was presented and the special
properties of the black-hole potential in the $N=2,d=5$ supergravity case were
studied. New variables, similar to those constructed in
Refs.~\cite{Mohaupt:2011aa,Meessen:2011mu} for the 4-dimensional case are also
known \cite{Mohaupt:2009iq,Mohaupt:2010fk,Meessen:2011mu} and all these
results can be combined with the proof of the attractor mechanism presented in
Ref.~\cite{Ortin:2011vm} to find very general results concerning extremal and
non-extremal black-hole solutions of those theories that will be presented
elsewhere \cite{kn:MOPS}.
In this paper we generalize the formalism to $p$-branes in $d$ dimensions,
determining the general form of the metric of a single, charged, static,
regular, flat $p$-brane in $d$ dimensions and constructing the effective
action for the single independent metric function and for the scalars. We
derive the generalization of the results of Ref.~\cite{Ferrara:1997tw} that
relate the values of the scalars on the horizon of extremal black branes to
the extrema of the \textit{black-brane} potential and the entropy to (some
power of) the value of the black-brane potential on the horizon. We also study
the special properties of the black-string potential in the $N=2,d=5$
supergravity case: just as in the black-hole case the black-hole potential
could be written as a function of the central charge and its derivatives, in
the black-string case the black-string potential can be written as a function
of a dual central charge and its derivatives so that the extrema of the
central charge are also (supersymmetric) extrema of the black-string potential
and the entropy is given in those cases by (a power of) the value of the
dual central charge on the horizon. This case is particularly interesting
because new variables, similar to those used for black holes in
Refs.~\cite{Mohaupt:2009iq,Mohaupt:2010fk,Meessen:2011mu} can also be defined
\cite{kn:MOPS}.
Finally, further generalizations to, for instance, branes with curved
worldvolumes such as those considered in Ref.~\cite{Chemissany:2011gr} are
clearly possible using this formalism\footnote{It seems that the background
transverse metric needs to be modified since, from this point of view, it is
not universal. We thank T.~Van Riet for pointing out this fact to us.}.
This paper is organized as follows: in Section~\ref{sec:branesformalism} we
describe the general actions we are going to deal with and, using the ansatz
that emerges from Appendix~\ref{sec-known}, we perform the dimensional
reduction to find the generalization of the FGK effective action (obtained in
an alternative fashion in Appendix~\ref{sec-dimred}) and of the general
results concerning extremal branes (Section~\ref{sec:FGKtheorems}). In
Section~\ref{sec:N2d5} we apply the general formalism to the special case of
black strings in $N=2,d=5$ supergravity and we solve explicitly a simple
model.
\section{The FGK formalism for black $p$-branes}
\label{sec:branesformalism}
\subsection{Derivation of the effective action}
\label{sec:geodesic action}
We are interested in theories with scalar fields $\phi^{i}$ parametrizing a
non-linear $\sigma$-model with metric $\mathcal{G}_{ij}(\phi)$, and
$(p+1)$-form potentials $A^{\Lambda}_{(p+1)\, \mu_{1}\cdots\mu_{p+2}}$ coupled
to gravity whose actions are of the general form
\begin{equation}
\label{eq:daction}
\mathcal{I}[g,A^{\Lambda}_{(p+1)},\phi^{i}]
=
\int d^{d}x \sqrt{|g|}
\left\{
R + \mathcal{G}_{ij} (\phi)\partial_{\mu} \phi^{i} \partial^{\mu} \phi^{j}
+4 \tfrac{(-1)^{p}}{(p+2)!} I_{\Lambda \Sigma}(\phi) F_{(p+2)}^{\Lambda} \cdot F_{(p+2)}^{\Sigma}
\right\}\, ,
\end{equation}
\noindent
where
\begin{equation}
\begin{array}{rcl}
F^{\Lambda}_{(p+2)\, \mu_{1}\cdots \mu_{p+2}}
& = &
(p+2)\partial_{[\mu_{1}|}A^{\Lambda}{}_{(p+1)\, |\mu_{2}\cdots \mu_{p+2}]}\, ,
\\
& & \\
F_{(p+2)}^{\Lambda} \cdot F_{(p+2)}^{\Sigma}
& \equiv &
F_{(p+2)\, \mu_{1}\cdots \mu_{p+2}}^{\Lambda}
F_{(p+2)}^{\Sigma}{}^{\mu_{1}\cdots \mu_{p+2}}\, ,
\end{array}
\end{equation}
\noindent
are the $(p+2)$-form field strengths and $I_{\Lambda \Sigma}(\phi)$ is a
scalar-dependent, negative-definite matrix that describes the coupling of the
scalar fields to the $(p+1)$-form fields. The normalizations have been chosen
so as to recover the particular cases considered in
Refs.~\cite{Meessen:2011bd,Ferrara:1997tw} for $p=0$, general $d$ and
$p=0,d=4$, respectively, with the original normalizations.
In the particular cases $p=\tilde{p}=(d-4)/2$ (for instance, black holes in
$d=4$, strings in $d=6$, membranes in $d=8$ and 3-branes in $d=10$, to mention
only those which are relevant from the String Theory point of view) one should
consider additional terms of the form
\begin{equation}
\label{eq:additionalterm}
+4\xi^{2} \tfrac{(-1)^{p}}{(p+2)!}R_{\Lambda\Sigma}(\phi)
F_{(p+2)}^{\Lambda} \cdot \star F_{(p+2)}^{\Sigma}\, ,
\end{equation}
\noindent
in the action, where $R_{\Lambda\Sigma}(\phi)$ is a scalar dependent matrix
such that
\begin{equation}
R_{\Lambda\Sigma} = -\xi^{2}R_{\Sigma\Lambda}\, ,
\end{equation}
\noindent
and where\footnote{This constant is associated with the value of the square of
  the Hodge star when it acts on a $(p+2)$-form: $\star^{2} = \xi^{2}$.}
\begin{equation}
\xi^{2} =-(-1)^{d/2} = (-1)^{p+1}\, ,
\end{equation}
\noindent
and the ansatz should take into account that the same brane can also be
magnetically charged with respect to the dual of the $(p+1)$-form potentials,
which are also $(p+1)$-forms, i.e.~they can be dyonic. Furthermore, if $d=
4n+2$ ($p$ odd: strings in $d=6$ and 3-branes in $d=10$) the dyonic branes can
also be self- or anti-self-dual.
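As a quick consistency check of this sign factor (an editorial addition): using $d=p+\tilde{p}+4$ with $p=\tilde{p}$,

```latex
% Editorial check of the sign factor \xi^2 for p = \tilde{p}:
\begin{equation*}
d = p+\tilde{p}+4 = 2p+4
\quad\Longrightarrow\quad
\tfrac{d}{2} = p+2
\quad\Longrightarrow\quad
\xi^{2} = -(-1)^{d/2} = -(-1)^{p+2} = (-1)^{p+1}\, ,
\end{equation*}
```

so that $\xi^{2}=+1$ for odd $p$ (strings in $d=6$, 3-branes in $d=10$), consistent with the possibility of self- or anti-self-dual field strengths quoted above, while $\xi^{2}=-1$ for even $p$ (e.g.~black holes in $d=4$).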
The first ingredient we need is a generic ansatz for the metric of any
electrically charged, static, flat, black $p$-brane in $d=p+\tilde{p}+4$
dimensions, where $\tilde{p}$ is the dimension of the dual (magnetic)
brane, with a transverse radial coordinate $\rho$ such that the event horizon
is at $\rho\rightarrow \infty$.
This generic ansatz can be found by studying the metrics of known families of
solutions of this kind, such as those originally found in
Ref.~\cite{Horowitz:1991cd}\footnote{Here we use the conventions and notation
of Ref.~\cite{Ortin:2004ms}.}. This study is performed in
Appendix~\ref{sec-known} and the ansatz for the metric that emerges from it
is\footnote{This metric has also been derived from the equations of motion in
Refs.~\cite{Janssen:2007rc}, where it has also been shown to be valid for
time-dependent cases. In those references, more general slicings of the
spacetime were also considered.}
\begin{equation}
\label{eq:generalmetric1}
ds_{(d)}^{2}
=
e^{\frac{2}{p+1}\tilde{U}}
\left[
W^{\frac{p}{p+1}} dt^{2}
-W^{-\frac{1}{p+1}}d\vec{y}^{\, 2}_{(p)}
\right]
-e^{-\frac{2}{\tilde{p}+1}\tilde{U}}
\gamma_{(\tilde{p}+3)\, \underline{m}\underline{n}} dx^{m} dx^{n}\, ,
\end{equation}
\noindent
where $\vec{y}_{(p)}\equiv (y^{1},\cdots, y^{p})$ are the brane's $p$
spacelike worldvolume coordinates and where $\gamma_{(\tilde{p}+3)\,
\underline{m}\underline{n}}$ is the background transverse metric given by
\begin{equation}
\label{eq:backgroundtransversemetric}
\gamma_{(\tilde{p}+3)\, \underline{m}\underline{n}} dx^{m} dx^{n}
=
\left(\frac{ \omega/2}{\sinh{\left(\frac{\omega}{2} \rho\right)}} \right)^{\frac{2}{\tilde{p}+1}}
\left[
\left( \frac{\omega/2}{\sinh{\left(\frac{\omega}{2} \rho \right)}}\right)^2
\frac{d\rho^2}{(\tilde{p}+1)^2}
+ d\Omega^{2}_{(\tilde{p}+2)}
\right]\, ,
\end{equation}
\noindent
where, in turn, $d\Omega_{(\tilde{p} +2)}^{2}$ is the metric of the
round $(\tilde{p}+2)$-sphere of unit radius.
The general metric Eq.~(\ref{eq:generalmetric1}), which in the $p=0$ case
reduces to the metrics used for black holes in $d=4$ and in arbitrary $d$ in
Refs.~\cite{Ferrara:1997tw} and \cite{Meessen:2011bd}, respectively (in that
case $W$ disappears), should be capable of describing any non-extremal black
brane for adequate choices of the functions $\tilde{U}(\rho)$ and
$W(\rho)$. In what
follows we will use it as an ansatz in which only $\tilde{U}(\rho)$ and
$W(\rho)$ have to be determined.
Observe that, while it is possible to redefine $\tilde{U}$ and the transverse
metric $\gamma_{(\tilde{p}+3)\, \underline{m}\underline{n}}$ so as to totally
absorb $W$ in some components of the metric, it is not possible to do so
simultaneously in all of them. We do not expect more than one independent
function in a black-brane metric, but nothing prevents us from using the above
metric with \textit{a priori} independent functions $\tilde{U}$ and $W$ as
an ansatz and then letting the equations of motion dictate how they are related
and what is the best definition for the single independent function that we
expect.
If we are to describe electrically charged $p$-branes, an adequate ansatz for
the $(p+1)$-form potentials $A^{\Lambda}_{(p+1)}$ is
\begin{equation}
A^{\Lambda}_{(p+1)\, ty_{1}\cdots y_{p}} = \psi^{\Lambda}(\rho)\, ,
\end{equation}
\noindent
(all the other components vanish). In the special case $p=\tilde{p}=(d-4)/2$,
the branes can also be magnetically charged with respect to the dual
(\textit{magnetic}) $(p+1)$-form potentials. These are defined as follows: the
equations of motion of the \textit{electric} $(p+1)$-form potentials, when we
add the term Eq.~(\ref{eq:additionalterm}) to the action, can be expressed in
the form
\begin{equation}
dG_{(p+2)\, \Lambda}=0\, ,
\hspace{1cm}
G_{(p+2)\,\Lambda} \equiv
R_{\Lambda\Sigma}F_{(p+2)}^{\Sigma}+I_{\Lambda\Sigma}\star F_{(p+2)}^{\Sigma}\, ,
\end{equation}
\noindent
and imply the local existence of the magnetic $(p+1)$-form potentials
$A_{(p+1)\,\Lambda}$ satisfying
\begin{equation}
G_{(p+2)\,\Lambda} = d A_{(p+1)\,\Lambda}\, .
\end{equation}
\noindent
Then, in these particular cases, our ansatz for the magnetic potentials is
\begin{equation}
A_{(p+1)\, \Lambda\, ty_{1}\cdots y_{p}} = \chi_{\Lambda}(\rho)\, .
\end{equation}
The electric and magnetic field $(p+2)$-form strengths can be arranged into a
vector
\begin{equation}
\left(\mathcal{F}^{M}\right)
\equiv
\left(
\begin{array}{c}
F^{\Lambda} \\ G_{\Lambda} \\
\end{array}
\right)\, ,
\hspace{1cm}
\left(\Psi^{M}\right)
\equiv
\left(
\begin{array}{c}
\psi^{\Lambda} \\ \chi_{\Lambda} \\
\end{array}
\right)\, ,
\end{equation}
\noindent
so the Bianchi identities and Maxwell equations can be written in the compact
form
\begin{equation}
d\mathcal{F}^{M}=0\, ,
\end{equation}
\noindent
which is covariant under linear transformations
\begin{equation}
\left(
\begin{array}{c}
F^{\prime} \\ G^{\prime} \\
\end{array}
\right)
=
\left(
\begin{array}{cc}
A & B \\
C & D \\
\end{array}
\right)
\left(
\begin{array}{c}
F \\ G \\
\end{array}
\right)\, .
\end{equation}
\noindent
The consistency of these linear transformations with the definitions of the
magnetic field strengths requires that the matrices $R,I$ transform according
to
\begin{equation}
N^{\prime}
=
\left(C+D N\right)
\left(A+BN\right)^{-1}\, ,
\hspace{1cm}
N \equiv R+\xi I\, .
\end{equation}
On the other hand, the contribution of the $(p+1)$-form potentials to the
energy-momentum tensor can be written in the form
\begin{equation}
\Omega_{MN} \star\mathcal{F}^{M}{}_{\mu\alpha_{1}\cdots \alpha_{p+1}}
\mathcal{F}^{N}{}_{\nu}{}^{\alpha_{1}\cdots \alpha_{p+1}}\, ,
\end{equation}
\noindent
where we have defined the metric
\begin{equation}
\left(\Omega_{MN} \right)
\equiv
\left(
\begin{array}{cc}
0 & \mathbb{I} \\
\xi^{2}\mathbb{I} & 0 \\
\end{array}
\right)\, ,
\end{equation}
\noindent
which will be used to raise and lower $M,N$ indices. This implies that the
linear transformations of the $n$ electric and $n$ magnetic field strengths
must be restricted to $O(n,n)$ when $\xi^{2}=+1$ and to $Sp(2n,\mathbb{R})$
when $\xi^{2}=-1$.
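The following SymPy sketch illustrates, for the minimal case $n=1$, how the metric $\Omega_{MN}$ selects the duality group: the antisymmetric $\Omega$ ($\xi^{2}=-1$) is preserved by unit-determinant (symplectic) transformations, while the symmetric, signature-$(1,1)$ $\Omega$ ($\xi^{2}=+1$) is preserved by $O(1,1)$ boosts. The sample group elements below are our own illustrative choices, not the most general ones:

```python
import sympy as sp

# n = 1 check that Omega_{MN} selects the duality group.
a, b, c, d, t = sp.symbols('a b c d t', real=True)

# xi^2 = -1: Omega is antisymmetric; S^T Omega S = det(S) Omega,
# so unit-determinant (Sp(2, R)) transformations preserve it.
Omega_minus = sp.Matrix([[0, 1], [-1, 0]])
S = sp.Matrix([[a, b], [c, d]])
assert sp.simplify(S.T * Omega_minus * S
                   - (a*d - b*c) * Omega_minus) == sp.zeros(2, 2)

# xi^2 = +1: Omega is symmetric with signature (1, 1);
# an O(1, 1) boost preserves it.
Omega_plus = sp.Matrix([[0, 1], [1, 0]])
B = sp.Matrix([[sp.exp(t), 0], [0, sp.exp(-t)]])
assert sp.simplify(B.T * Omega_plus * B - Omega_plus) == sp.zeros(2, 2)
print("Omega preserved by sample Sp(2,R) and O(1,1) elements")
```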
An alternative expression for this contribution to the energy-momentum tensor
is
\begin{equation}
\mathcal{M}_{MN}\mathcal{F}^{M}{}_{\mu\alpha_{1}\cdots \alpha_{p+1}}
\mathcal{F}^{N}{}_{\nu}{}^{\alpha_{1}\cdots \alpha_{p+1}}\, ,
\end{equation}
\noindent
where the symmetric matrix $\mathcal{M}_{MN}$ is given by
\begin{equation}
\begin{array}{rcl}
\left(\mathcal{M}_{MN} \right)
& \equiv &
\left(
\begin{array}{cc}
I-\xi^{2}RI^{-1}R & \xi^{2} RI^{-1} \\
& \\
-I^{-1}R & I^{-1} \\
\end{array}
\right)\, ,
\\
& & \\
\left(\mathcal{M}^{MN} \right)
& = &
\left(
\begin{array}{cc}
I^{-1} & -\xi^{2} I^{-1}R \\
& \\
RI^{-1} & I-\xi^{2}RI^{-1}R \\
\end{array}
\right)=
(\mathcal{M}_{NP})^{-1}\, .
\end{array}
\end{equation}
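For a single field strength ($n=1$), $I$ and $R$ reduce to numbers and the inverse relation between $\mathcal{M}_{MN}$ and $\mathcal{M}^{MN}$ can be checked symbolically; the following check (ours) holds for generic $\xi^{2}$, denoted $s$:

```python
import sympy as sp

# n = 1 check that the second matrix is the inverse of M_{MN},
# for generic xi^2 = s.
I_, R_, s = sp.symbols('I R s', real=True)

M_lower = sp.Matrix([[I_ - s*R_**2/I_, s*R_/I_],
                     [-R_/I_,          1/I_   ]])
M_upper = sp.Matrix([[1/I_,  -s*R_/I_        ],
                     [R_/I_,  I_ - s*R_**2/I_]])

assert sp.simplify(M_lower * M_upper - sp.eye(2)) == sp.zeros(2, 2)
print("M^{MN} = (M_{MN})^{-1} verified for n = 1")
```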
In what follows we will write the expressions including the additional terms
(matrix $R_{\Lambda\Sigma}$, magnetic charges $p^{\Lambda}$ etc.) in the
understanding that they vanish whenever the condition $p=\tilde{p}=(d-4)/2$ is
not satisfied.
To end the description of our ansatz, we are also going to assume that the
scalars only depend on $\rho$. Plugging this ansatz into the equations of
motion derived from the above action, we get two equations
\begin{eqnarray}
\frac{d^{2}\ln{W}}{d\rho^{2}}
& = &
0\, ,
\\
& & \nonumber \\
\frac{d~}{d\rho}\left[e^{-2\tilde{U}}\, \mathcal{M}_{MN}\, \dot{\Psi}^{N} \right]
& = &
0\, .
\end{eqnarray}
\noindent
(overdots denoting derivatives w.r.t.~$\rho$)
that can be integrated immediately, giving
\begin{eqnarray}
W
& = &
e^{\gamma \rho}\, ,
\\
& & \nonumber \\
\dot{\Psi}^{M}
& = &
\alpha e^{2\tilde{U}}\mathcal{M}^{MN}\mathcal{Q}_{N}\, ,
\end{eqnarray}
\noindent
where we have normalized $W(0)=1$ at spatial infinity and we have introduced
the integration constants $\gamma$ and $\mathcal{Q}_{M}$, $\alpha$ being a
normalization constant. The constants $\mathcal{Q}_{M}$ are, up to global
normalization, just the electric and magnetic charges of the $p$-brane with
respect to the $(p+1)$-form potentials
\begin{equation}
\mathcal{Q}_{M} \sim \int_{S^{\tilde{p}+2}} \star\, \mathcal{M}_{MN}
\mathcal{F}^{N}\, ,
\hspace{1cm}
(\mathcal{Q}^{M}) \equiv
\left(
\begin{array}{c}
p^{\Lambda} \\ q_{\Lambda} \\
\end{array}
\right)\, ,
\hspace{1cm}
\mathcal{Q}_{M} \equiv \Omega_{MN}\mathcal{Q}^{N}\, .
\end{equation}
These first integrals allow us to eliminate from the equations of motion $W$
and $\Psi^{M}$ (which only appears through $\dot{\Psi}^{M}$). The remaining
three equations only involve $\tilde{U}$ and $\phi^{i}$ and take the form
\begin{eqnarray}
\label{eq:1}
\ddot{\tilde{U}} +e^{2\tilde{U}}V_{\rm BB}
& = &
0\, ,
\\
& & \nonumber \\
\ddot{\phi}^{i} +\Gamma_{jk}{}^{i}\dot{\phi}^{j}\dot{\phi}^{k}
+\tfrac{d-2}{2(\tilde{p}+1)(p+1)}e^{2\tilde{U}}\partial^{i} V_{\rm BB}
& = &
0\, ,
\\
& & \nonumber \\
\label{eq:hamiltonianconstraint}
(\dot{\tilde{U}})^{2}
+\tfrac{(p+1)(\tilde{p}+1)}{d-2} \mathcal{G}_{ij} \dot{\phi}^{i} \dot{\phi}^{j}
+e^{2 \tilde{U}} V_{\rm BB}
& = &
\hat{\mathcal{B}}^{2}\, ,
\end{eqnarray}
\noindent
where we have defined the negative semidefinite \textit{black-brane potential}
\begin{equation}
\label{eq:VBB}
V_{\rm BB}(\phi,\mathcal{Q}) \equiv
2\alpha^{2} \tfrac{(p+1)(\tilde{p}+1)}{(d-2)}
\mathcal{M}_{MN} \mathcal{Q}^{M}\mathcal{Q}^{N}\, ,
\end{equation}
\noindent
and the constant
\begin{equation}
\label{eq:B2}
\hat{\mathcal{B}}^{2} \equiv \tfrac{(p+1)(\tilde{p}+2)}{4(d-2)}\, \omega^{2}
-\tfrac{(\tilde{p}+1)p}{4(d-2)}\gamma^{2}\, .
\end{equation}
These equations (up to the constant in Eq.~(\ref{eq:hamiltonianconstraint}),
which arises as the Hamiltonian constraint) can be derived from the
effective action
\begin{equation}
\label{eq:geodesicaction}
\mathcal{I}[\tilde{U},\phi^{i}]=
\int d\rho
\left\{
(\dot{\tilde{U}})^{2}
+\tfrac{(p+1)(\tilde{p}+1)}{d-2} \mathcal{G}_{ij} \dot{\phi}^{i} \dot{\phi}^{j}
-e^{2 \tilde{U}} V_{\rm BB} +\hat{\mathcal{B}}^{2}
\right\}\, .
\end{equation}
Summarizing, we have found that, if we use the ansatz
\begin{equation}
\begin{array}{rcl}
ds_{(d)}^{2}
& = &
e^{\frac{2}{p+1}\tilde{U}}
\left[
e^{\frac{p}{p+1}\gamma\rho} dt^{2}
-e^{-\frac{1}{p+1}\gamma\rho}d\vec{y}^{\, 2}_{(p)}
\right]
-
e^{-\frac{2}{\tilde{p}+1}\tilde{U}}
\gamma_{(\tilde{p}+3)\, \underline{m}\underline{n}} dx^{m} dx^{n}\, ,
\\
& & \\
A^{M}_{(p+1)}
& = &
\Psi^{M}(\rho)\, dt\wedge dy^{1} \wedge \cdots \wedge dy^{p}\, ,
\hspace{1cm}
\dot{\Psi}^{M }
=
\alpha e^{2\tilde{U}}\mathcal{M}^{MN}\mathcal{Q}_{N}\, ,
\\
& & \\
\phi^{i}
& = &
\phi^{i}(\rho)\, ,
\end{array}
\end{equation}
\noindent
where $\tilde{U}$ is a function of $\rho$, $\gamma$ and $\mathcal{Q}_{M}$ are
constants and $\gamma_{(\tilde{p}+3)\, \underline{m}\underline{n}}$ is the
transverse space metric given in
Eq.~(\ref{eq:backgroundtransversemetric}), then, in the theories defined by
the generic family of actions Eq.~(\ref{eq:daction}), these field
configurations are solutions of the equations of motion if
Eqs.~(\ref{eq:1})-(\ref{eq:hamiltonianconstraint}) are satisfied.
The same result is obtained in Appendix~\ref{sec-dimred} by reducing first the
action Eq.~(\ref{eq:daction}) to $(d-p)=(\tilde{p}+4)$ dimensions in
such a way that the action only contains the Einstein-Hilbert term, scalars
and 1-forms and then by using the FGK formalism of Ref.~\cite{Meessen:2011bd}
in a second stage.
In general, the integration constant $\gamma$ will be related to the
non-extremality parameter $\omega$ by requiring the solution to have a regular
event horizon. Indeed, Eq.~(\ref{eq:asympW}) implies that
\begin{equation}
\gamma=\omega
\hspace{1cm}
W=e^{\omega\rho}\, ,
\hspace{1cm}
\hat{\cal B}^{2}= (\omega/2)^{2}\, ,
\end{equation}
\noindent
and, therefore, the general form of regular $p$-branes will be taken to be
\begin{equation}
\label{eq:generalmetric}
ds_{(d)}^{2}
=
e^{\frac{2}{p+1}\tilde{U}}
\left[
e^{\frac{p}{p+1}\omega\rho} dt^{2}
-e^{-\frac{1}{p+1}\omega\rho}d\vec{y}^{\, 2}_{(p)}
\right]
-
e^{-\frac{2}{\tilde{p}+1}\tilde{U}}
\gamma_{(\tilde{p}+3)\, \underline{m}\underline{n}} dx^{m} dx^{n}\, .
\end{equation}
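Using $d=p+\tilde{p}+4$, one can check symbolically that setting $\gamma=\omega$ in Eq.~(\ref{eq:B2}) indeed collapses the constant to $(\omega/2)^{2}$ (an illustrative check of ours):

```python
import sympy as sp

# Check that gamma = omega in Eq. (eq:B2) gives B^2 = (omega/2)^2,
# using d = p + ptilde + 4.
p, pt, w = sp.symbols('p pt omega', positive=True)
d = p + pt + 4

B2 = ((p + 1)*(pt + 2)/(4*(d - 2)) * w**2
      - (pt + 1)*p/(4*(d - 2)) * w**2)
assert sp.simplify(B2 - (w/2)**2) == 0
print("B^2 = (omega/2)^2 when gamma = omega")
```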
\subsection{FGK theorems for static flat branes}
\label{sec:FGKtheorems}
In the same spirit as \cite{Ferrara:1997tw,Meessen:2011bd}, we can use the
formalism presented in the previous section to derive several results about
single, static, flat, black $p$-brane solutions in $d$ dimensions.
Let us first consider extremal black branes $\omega=0$, whose general form
follows from the $\omega\longrightarrow 0$ limit of the general metric
Eq.~(\ref{eq:generalmetric}):
\begin{equation}
\label{eq:extremalgeneralmetric}
ds_{(d)}^{2}
=
e^{\frac{2\tilde{U}}{p+1}} \left[dt^{2}- d\vec{y}_{(p)}^{~2} \right]
-\frac{e^{-\frac{2\tilde{U}}{\tilde{p}+1}}}{\rho^{\frac{2}{\tilde{p}+1}}}
\left[ \frac{1}{\rho^{2}}\frac{d\rho^{2}}{(\tilde{p}+1)^2}
+d\Omega^{2}_{(\tilde{p}+2)}\right] \, .
\end{equation}
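The powers of $\rho$ in this extremal metric follow from the elementary limit $(\omega/2)/\sinh(\omega\rho/2)\rightarrow 1/\rho$, which can be confirmed symbolically (illustrative check of ours):

```python
import sympy as sp

# The omega -> 0 limit of the transverse warp factor in
# Eq. (eq:backgroundtransversemetric) is 1/rho, which produces the
# explicit rho powers of Eq. (eq:extremalgeneralmetric).
w, rho = sp.symbols('omega rho', positive=True)

warp = (w/2) / sp.sinh(w*rho/2)
assert sp.limit(warp, w, 0) == 1/rho
print("omega -> 0 limit of the warp factor is 1/rho")
```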
\noindent
According to the results in Appendix~\ref{sec-extremal}, in the extremal
limit, $\tilde{U}$ must behave as in Eq.~(\ref{eq:Uasymp}), which
we reproduce here for convenience:
\begin{equation}
e^{\tilde{U}}
\sim
\tilde{S}^{-\frac{\tilde{p}+1}{\tilde{p}+2}} \rho^{-1}\, ,
\end{equation}
\noindent
where $\tilde{S}$ is the entropy density per unit worldvolume, defined in the
paragraph above Eq.~(\ref{eq:entropydensity}). Therefore, the near-horizon
limit of Eq.~(\ref{eq:extremalgeneralmetric}) takes the general form
\begin{equation}
\label{eq:metricextremalimit}
ds^{2}_{(d)}
=
\rho^{\frac{-2}{p+1}} \tilde{S}^{-\frac{2(\tilde{p}+1)}{(p+1) (\tilde{p}+2)}}
\left[dt^2- d\vec{y}_{(p)}^{~2} \right]
-\tilde{S}^{\frac{2}{\tilde{p}+2}}\left[
\frac{1}{\rho^2}\frac{d\rho^2}{(\tilde{p}+1)^2}
+d\Omega^{2}_{(\tilde{p}+2)}\right] \, ,
\end{equation}
\noindent
which is the direct product $AdS_{p+2}\times S^{\tilde{p}+2}$, both with radii
proportional to $\tilde{S}^{\frac{1}{\tilde{p}+2}}$.
We impose the following regularity condition on the scalars
\begin{equation}
\label{eq:scalarregularity}
\lim_{\rho\rightarrow \infty} \frac{(p+1)(\tilde{p}+1)}{d-2}
\mathcal{G}_{ij}\dot{\phi}^i \dot{\phi}^j
e^{2\tilde{U}} \rho^{4}\equiv \mathcal{X} < \infty\, .
\end{equation}
\noindent
Then, the near-horizon limit $\rho\rightarrow\infty$ of the Hamiltonian
constraint Eq.~(\ref{eq:hamiltonianconstraint}) is
\begin{equation}
\label{eq:nearhorizonantigravity}
1+\mathcal{X} \tilde{S}^{\frac{2(\tilde{p}+1)}{\tilde{p}+2}}
+\tilde{S}^{-\frac{2(\tilde{p}+1)}{\tilde{p}+2}} V_{\rm BB}(\phi_{\rm h},\mathcal{Q})=0\, .
\end{equation}
\noindent
If we assume that the entropy density $\tilde{S}$ does not vanish and that the
values of the scalars do not diverge on the horizon ($\phi^{i}_{\rm h}<
\infty$), then it can be shown that
\begin{equation}
\label{eq:deduccion1}
\rho\frac{d\phi^i}{d\rho}=0,~~~ \mathcal{X}=0\, ,
\end{equation}
\noindent
and from Eqs.~(\ref{eq:nearhorizonantigravity}) and (\ref{eq:deduccion1}) we
obtain
\begin{equation}
\label{eq:extremalarea}
\tilde{S}
=
\left[-V_{\rm BB}(\phi_{\rm h},\mathcal{Q})
\right]^{\frac{\tilde{p}+2}{2(\tilde{p}+1)}}\, ,
\end{equation}
\noindent
and therefore the entropy of an extremal brane is given by (a power of) the
value of the black-brane potential at the horizon.
On the other hand, if we assume that, again, the entropy density is finite
and, furthermore, that
\begin{equation}
\label{eq:assume2}
\rho\frac{d\phi^i}{d\rho}=0\, ,\,\,\,\,\ \forall i\, ,
\end{equation}
\noindent
we deduce, from the near-horizon limit of the equations of the scalars, that
the value of the scalars on the horizon is fixed in terms of the charges by
\begin{equation}
\label{eq:attractor1}
\mathcal{G}^{ij}(\phi_{\rm h})\partial_i V_{\rm BB}(\phi_{\rm
h},\mathcal{Q})=0\, ,
\end{equation}
\noindent
and does not diverge.
Therefore the condition Eq.~(\ref{eq:assume2}) plus finiteness of the entropy
density imply the regularity of the scalars on the horizon that we assumed
before. If the metric of the scalar manifold $\mathcal{G}_{ij}$ is positive
definite, then Eq.~(\ref{eq:attractor1}) is equivalent to
\begin{equation}
\label{eq:attractor2}
\partial_{i} V_{\rm BB}(\phi_{\rm h},\mathcal{Q})=0\, ,
\end{equation}
\noindent
which generalizes the usual attractor mechanism for static extremal black
holes to the case of static extremal flat branes.
Finally, if we take the spatial infinity limit $\rho\rightarrow 0^{+}$ of the
Hamiltonian constraint Eq.~(\ref{eq:hamiltonianconstraint}), we obtain the
analog for branes of the so-called extremality (or antigravity) bound for
black holes
\begin{equation}
\label{eq:antigravitybound}
\tilde{u}^2
+\frac{(p+1) (\tilde{p}+1)}{d-2} \mathcal{G}_{ij}(\phi_{\infty})\Sigma^i
\Sigma^j
+V_{\rm BB} (\phi_{\infty},\mathcal{Q})=(\omega/2)^2 \, ,
\end{equation}
\noindent
where $\Sigma^i$ are the scalar charges and $\tilde{u}=-\tilde{U}'(0)$ is
given in terms of the black $p$-brane's tension $T_{p}$ and the
non-extremality parameter $\omega$ by Eq.~(\ref{eq:generaltensionformula}):
\begin{equation}
\tilde{u} = -\frac{1}{(d-2)}
\left[
(p+1)(\tilde{p}+2)T_{p} +p(\tilde{p}+1)\omega/2
\right]\, .
\end{equation}
The above formula differs from the black hole's by terms proportional to
$p\omega$, which vanish in the black-hole case $p=0$.
\section{Non-extremal strings in $N=2$, $d=5$ supergravity}
\label{sec:N2d5}
In order to illustrate the formalism developed in the previous sections, we
are going to particularize it for the case of $N=2,d=5$ supergravity, solving
a simple example. The relevant part of the bosonic action of $N=2,d=5$
supergravity theories coupled to $n$ vector multiplets is, using the
conventions of Refs.~\cite{Bellorin:2006yr,Bergshoeff:2004kh},
\begin{equation}
\label{eq:N2d5action}
\mathcal{I}[g_{\mu\nu},A^{I}{}_{\mu},\phi^{x}] = \int d^{5}x
\left\{
R +\tfrac{1}{2}g_{xy}\partial_{\mu}\phi^{x}\partial^{\mu}\phi^{y}
-\tfrac{1}{4}a_{IJ}F^{I}{}_{\mu\nu}F^{J\, \mu\nu}
\right\}\, ,
\end{equation}
\noindent
where $I,J=0,1,\cdots,n$ and $x,y=1,\cdots,n$. The scalar target spaces are
implicitly defined by the existence of $n+1$ functions $h^{I}(\phi)$ of the
$n$ physical scalars, subject to the constraint
\begin{equation}
\label{eq:hypersurface}
C_{IJK}h^{I}h^{J}h^{K}=1\, ,
\end{equation}
\noindent
where $C_{IJK}$ is a completely symmetric constant tensor that determines the
model. Defining
\begin{equation}
h_{I} \equiv C_{IJK}h^{J}h^{K}\, ,\hspace{1cm}\mbox{(so $h_{I}h^{I}\ =\
1$)}\, ,
\end{equation}
\noindent
the positive definite matrix $a_{IJ}$ can be expressed as
\begin{equation}
a_{IJ} = -2C_{IJK}h^{K}+3h_{I}h_{J}\, ,
\end{equation}
\noindent
and can be used to consistently raise and lower the index of the functions
$h^{I}$. We also define
\begin{equation}
\label{eq:9}
h^{I}{}_{x} \ \equiv\ -\sqrt{3} \partial_{x}h^{I}\;\; ,\;\;
h_{I\, x} \ \equiv\ a_{IJ}h^{J}{}_{x} \ =\ +\sqrt{3} \partial_{x}h_{I}\; ,
\end{equation}
\noindent
which are orthogonal to the functions $h^{I}$ with respect to the metric
$a_{IJ}$. Finally, the $\sigma$-model metric is given by
\begin{equation}
\label{eq:a-1}
g_{xy}\ \equiv\ a_{IJ}h^{I}{}_{x} h^{J}{}_{y} \;\;
\longrightarrow
\;\;
a^{IJ} = h^{I}h^{J} +g^{xy}h^{I}{}_{x}h^{J}{}_{y}\, ,
\hspace{.5cm}
a^{IJ}a_{JK}= \delta^{I}{}_{K}\, .
\end{equation}
Since we want to obtain non-extremal strings, it is more convenient to use the
dual 2-form potentials $B_{I\, \mu\nu}$ and their 3-form field strengths
$H_{I\, \mu\nu\rho}= 3\partial_{[\mu|}B_{I\, |\nu\rho]}$, which are related to
the 1-forms by the duality relations
\begin{equation}
\label{eq:dualvariables}
H_{I} = a_{IJ}\star F^{J}\, .
\end{equation}
In terms of these variables the action takes the form\footnote{We have
dualized an incomplete action Eq.~(\ref{eq:N2d5action}), and, therefore,
there are terms missing in this dual action. However, for the kind of
solutions that we want to study, which are only electrically charged with respect to
the 2-forms, the missing terms are irrelevant.}
\begin{equation}
\label{eq:actionsimplemodel}
\mathcal{I}[g_{\mu\nu},B^{I}{}_{\mu\nu},\phi^{x}] = \int d^{5}x
\left\{
R +\tfrac{1}{2}g_{xy}\partial_{\mu}\phi^{x}\partial^{\mu}\phi^{y}
+\tfrac{1}{2\cdot 3!}a^{IJ}H_{I}{}^{\mu\nu\rho}H_{J\, \mu\nu\rho}
\right\}\, .
\end{equation}
Comparing now Eq.~(\ref{eq:actionsimplemodel}) (taking $p=1$, $\tilde{p}=0$,
as appropriate for $d=5$ string solutions) to Eq.~(\ref{eq:daction}), we find
that
\begin{equation}
\label{eq:relaciones}
I_{IJ}= -\tfrac{1}{8} a^{IJ}\, ,
\hspace{1cm}
\mathcal{G}_{xy}=\tfrac{1}{2} g_{xy}\, ,
\end{equation}
\noindent
and, therefore, the effective action for this model is given by
\begin{equation}
\label{eq:effectived5}
\mathcal{I}[\tilde{U},\phi^{x}]
= \int d\rho
\left\{
(\dot{\tilde{U}})^{2}
+\tfrac{1}{3}g_{xy} \dot{\phi}^{x} \dot{\phi}^{y}
-e^{2\tilde{U}}V_{\rm BB}
+(\omega/2)^{2}
\right\}\, ,
\end{equation}
\noindent
where the negative semidefinite black-brane potential, after an adequate
choice of normalization, is given by
\begin{equation}
-V_{\rm BB}(\phi,p) \ =\ a_{IJ}p^{I}p^{J}\, ,
\end{equation}
\noindent
where we denote by $p^{I}$ the electric charges of the string
\begin{equation}
p^{I} \sim \int_{S^{2}_{\infty}}a^{IJ}\star H_{J}\, .
\end{equation}
\noindent
The Hamiltonian constraint (\ref{eq:hamiltonianconstraint}) becomes
\begin{equation}
\label{eq:Effe2a}
(\dot{\tilde{U}})^{2}
+\tfrac{1}{3}g_{xy}\dot{\phi}^{x} \dot{\phi}^{y}
+e^{2\tilde{U}}V_{\rm BB}
=
(\omega/2)^{2}\, .
\end{equation}
If we define the {\em dual central charge} $\tilde{\cal Z}(\phi,p)$
by\footnote{This definition should be compared to that of the standard
central charge $\mathcal{Z}(\phi,q)=h^{I}q_{I}$.}
\begin{equation}
\tilde{\mathcal{Z}}(\phi,p)\equiv h_{I}p^{I}\, ,
\end{equation}
\noindent
it is possible to rewrite the black-brane potential in the form
\begin{equation}
-V_{\rm BB} \ =\
\tilde{\mathcal{Z}}^{2}
+3g^{xy}\partial_{x}\tilde{\mathcal{Z}}\partial_{y}\tilde{\mathcal{Z}}\, ,
\end{equation}
\noindent
where Eq.~(\ref{eq:a-1}) has been used to obtain the last expression. Just as
in the black-hole case, this form of the black-brane potential allows us to
rewrite the effective action Eq.~(\ref{eq:effectived5}) in a BPS form,
\textit{i.e.}~as a sum of squares up to a total derivative
\begin{equation}
\label{eq:effectived5BPS}
\mathcal{I}[\tilde{U},\phi^{x}]
= \int d\rho
\left\{
\left(\dot{\tilde{U}}\pm e^{\tilde{U}}\tilde{\mathcal{Z}} \right)^{2}
+\tfrac{1}{3}g_{xy}
\left( \dot{\phi}^{x} \pm 3 e^{\tilde{U}}\partial^{x}\tilde{\mathcal{Z}}\right)
\left( \dot{\phi}^{y} \pm 3 e^{\tilde{U}}\partial^{y}\tilde{\mathcal{Z}}\right)
\mp \frac{d~}{d\rho}\left(e^{\tilde{U}}\tilde{\mathcal{Z}}\right)
\right\}\, .
\end{equation}
\noindent
The action is, then, extremized, and the second-order equations of motion that
follow from the action are satisfied when the first-order BPS equations
\begin{eqnarray}
\dot{\tilde{U}}
& = &
\mp e^{\tilde{U}}\tilde{\mathcal{Z}}\, ,
\\
& & \nonumber \\
\dot{\phi}^{x}
& = &
\mp 3 e^{\tilde{U}}\partial^{x}\tilde{\mathcal{Z}}\, .
\end{eqnarray}
Observe that the equations of motion that follow from the action do not set
the Hamiltonian to any particular value. Actually, these first-order equations
imply the Hamiltonian constraint for $\omega=0$, \textit{i.e.}~for extremal
strings. It should be possible to show that the extremal strings that satisfy
the above equations are, precisely, the supersymmetric ones.
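The statement that the first-order equations imply the $\omega=0$ Hamiltonian constraint can be checked algebraically; the following SymPy sketch (ours) does so for a single modulus with $g_{\phi\phi}=1$, treating $e^{\tilde{U}}$, $\tilde{\mathcal{Z}}$ and $\partial_{\phi}\tilde{\mathcal{Z}}$ as independent symbols:

```python
import sympy as sp

# Substitute the first-order flow
#   U'   = -e^U Z~,   phi' = -3 e^U dZ~/dphi   (g_{phiphi} = 1)
# into the Hamiltonian constraint, using -V_BB = Z~^2 + 3 (dZ~/dphi)^2:
# the result vanishes, i.e. the omega = 0 constraint is satisfied.
eU, Z, dZ = sp.symbols('eU Ztilde dZtilde', real=True)

Up = -eU * Z                     # U'
phip = -3 * eU * dZ              # phi'
VBB = -(Z**2 + 3*dZ**2)          # black-brane potential

ham = Up**2 + sp.Rational(1, 3)*phip**2 + eU**2 * VBB
assert sp.simplify(ham) == 0
print("first-order flow implies the omega = 0 Hamiltonian constraint")
```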
On the horizon of these solutions the dual central charge reaches a
stationary point
\begin{equation}
\label{eq:extremalcharge}
\left. \partial_{x}\tilde{\mathcal{Z}}\right|_{\phi_{\rm h}}\ =\ 0\, .
\end{equation}
\noindent
The above condition and the properties of real special geometry imply that the
black-brane potential also reaches a stationary point on the horizon. The
converse is not always true: we therefore expect the existence of extremal,
non-supersymmetric black strings, and we are going to construct some solutions
of this kind explicitly in the following sections.
The extremal supersymmetric strings of these theories saturate the
supersymmetric BPS bound, \textit{i.e.}
\begin{equation}
T_{p} =\tfrac{3}{4}| \tilde{\mathcal{Z}}(\phi_{\infty},p) |\, .
\end{equation}
\noindent
On the horizon, the general relation between entropy density and black-brane
potential Eq.~(\ref{eq:extremalarea}) plus the particular property
Eq.~(\ref{eq:extremalcharge}) imply that the entropy density is determined by
the value of the dual central charge on the horizon (here $\tilde{p}=0$)
\begin{equation}
\label{eq:extremalareaN2d5}
\tilde{S}
=
|\tilde{\cal Z}(\phi_{\rm h},p)|^{2}\, .
\end{equation}
There is a well-established procedure to construct all the extremal
supersymmetric strings of an ungauged $N=2,d=5$ supergravity coupled to $n$
vector multiplets \cite{Gauntlett:2002nw}: given $n+1$ spherically-symmetric
real harmonic functions on Euclidean $\mathbb{R}^{3}$
\begin{equation}
\label{eq:harmonicfunctions}
K^{I} = K^{I}_{\infty} +p^{I}\rho\, ,
\end{equation}
\noindent
the fields of the supersymmetric solutions are implicitly given in terms of
these functions by the relations
\begin{equation}
\label{eq:stabil}
e^{-\tilde{U}}\ h^{I}(\phi ) \, =\, K^{I}\, .
\end{equation}
\noindent
We will denote the explicit expressions for the physical fields of the
solutions with the subscript $\mathrm{susy}$: $\tilde{U}_{\rm
susy}=\tilde{U}_{\rm susy}(K)$, $\phi^{i}_{\rm susy}=\phi^{i}_{\rm
susy}(K)$.
\subsection{A one-modulus model}
\label{sec:onemodulus}
In this section we are going to apply the formalism developed in the previous
sections to construct the black-string solutions of the simple model of
$N=2,d=5$ coupled to one vector multiplet whose black-hole solutions were
constructed in Ref.~\cite{Meessen:2011bd}. This model, which can be obtained
by dimensional reduction of minimal $d=6$ $N=(1,0)$ supergravity, is
determined by $C_{011}=1/3$. The hypersurface defined by
Eq.~(\ref{eq:hypersurface}) has to be covered by two coordinate patches that
determine two branches of the theory. We label these two branches by
$\sigma=\pm 1$. The relation between the projective coordinates $h$ and the
physical scalar $\phi$ in both branches is given by
\begin{equation}
\begin{array}{rclrcl}
h_{(\sigma )}^{0} & = & e^{\sqrt{\frac{2}{3}}\phi}\, ,
\hspace{1cm}&
h_{(\sigma )}^{1} & = & \sigma\ e^{-\frac{1}{\sqrt{6}}\phi}\, , \\
& & & & & \\
h_{(\sigma )\, 0} & = & \tfrac{1}{3} e^{-\sqrt{\frac{2}{3}}\phi}\, ,
\hspace{1cm}&
h_{(\sigma )\, 1} & = & \tfrac{2}{3} \sigma\ e^{\frac{1}{\sqrt{6}}\phi}\, . \\
\end{array}
\end{equation}
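These model data can be verified directly from the definitions of Section~\ref{sec:N2d5} (the constraint $C_{IJK}h^{I}h^{J}h^{K}=1$, the dual functions $h_{I}$, the matrix $a_{IJ}$ and $g_{\phi\phi}$); the following SymPy check, ours, runs over both branches $\sigma=\pm 1$:

```python
import sympy as sp

# Symbolic check of the C_{011} = 1/3 model data in both branches.
phi = sp.symbols('phi', real=True)
a = sp.sqrt(sp.Rational(2, 3))           # shorthand for sqrt(2/3)

for sigma in (+1, -1):
    h0 = sp.exp(a*phi)
    h1 = sigma * sp.exp(-phi/sp.sqrt(6))

    # constraint: 3 C_{011} h^0 (h^1)^2 = h^0 (h^1)^2 = 1
    assert sp.simplify(h0 * h1**2 - 1) == 0

    # h_I = C_{IJK} h^J h^K, and h_I h^I = 1
    hd0 = sp.Rational(1, 3) * h1**2          # C_{011} (h^1)^2
    hd1 = sp.Rational(2, 3) * h0 * h1        # 2 C_{011} h^0 h^1
    assert sp.simplify(hd0 - sp.Rational(1, 3)*sp.exp(-a*phi)) == 0
    assert sp.simplify(hd1 - sp.Rational(2, 3)*sigma*sp.exp(phi/sp.sqrt(6))) == 0
    assert sp.simplify(hd0*h0 + hd1*h1 - 1) == 0

    # a_IJ = -2 C_{IJK} h^K + 3 h_I h_J
    a00 = 3*hd0**2
    a11 = -sp.Rational(2, 3)*h0 + 3*hd1**2
    a01 = -sp.Rational(2, 3)*h1 + 3*hd0*hd1
    assert sp.simplify(a00 - sp.Rational(1, 3)*sp.exp(-2*a*phi)) == 0
    assert sp.simplify(a11 - sp.Rational(2, 3)*sp.exp(a*phi)) == 0
    assert sp.simplify(a01) == 0

    # g_{phiphi} = a_IJ h^I_x h^J_y with h^I_x = -sqrt(3) d_x h^I
    h0x = -sp.sqrt(3)*sp.diff(h0, phi)
    h1x = -sp.sqrt(3)*sp.diff(h1, phi)
    g = a00*h0x**2 + 2*a01*h0x*h1x + a11*h1x**2
    assert sp.simplify(g - 1) == 0

print("C_011 = 1/3 model data verified in both branches")
```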
\noindent
The scalar metric $g_{\phi\phi}$ and the vector field strengths metric
$a_{IJ}$ take exactly the same values in both branches:
\begin{equation}
\label{eq:11}
g_{\phi\phi} \ =\ 1 \hspace{.5cm},\hspace{.5cm}
a_{IJ} \ =\ \tfrac{1}{3}\left(
\begin{array}{cc}
e^{-2\sqrt{\frac{2}{3}}\phi} & 0 \\
0 & 2 e^{\sqrt{\frac{2}{3}}\phi} \\
\end{array}
\right)\, ,
\end{equation}
\noindent
and, therefore, the bosonic parts of both models and their classical solutions
are identical. However, since the functions $h_{(\sigma )}^{I}(\phi)$ differ,
the fermionic structure and, therefore, the supersymmetry properties of a
given solution will be different in different branches. In particular, the
dual central charge is different in each branch:
\begin{equation}
\tilde{\mathcal{Z}}_{(\sigma )} = \tfrac{1}{3}\left( p^{0}e^{-\sqrt{\frac{2}{3}}\phi}
+2 \sigma p^{1}e^{\frac{1}{\sqrt{6}}\phi}\right)\, .
\end{equation}
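As a check of ours, $\tilde{\mathcal{Z}}_{(\sigma)}^{2}+3(\partial_{\phi}\tilde{\mathcal{Z}}_{(\sigma)})^{2}$ can be evaluated symbolically in this model: the $\sigma$-dependent cross terms cancel and the result is the same, branch-independent expression (here $g^{\phi\phi}=1$):

```python
import sympy as sp

# Evaluate Z~^2 + 3 (dZ~/dphi)^2 in the C_{011} = 1/3 model for both
# branches; the result is sigma-independent.
phi, p0, p1 = sp.symbols('phi p0 p1', real=True)
a = sp.sqrt(sp.Rational(2, 3))

target = sp.Rational(1, 3)*(p0**2*sp.exp(-2*a*phi) + 2*p1**2*sp.exp(a*phi))
for sigma in (+1, -1):
    Zt = sp.Rational(1, 3)*(p0*sp.exp(-a*phi)
                            + 2*sigma*p1*sp.exp(phi/sp.sqrt(6)))
    assert sp.simplify(Zt**2 + 3*sp.diff(Zt, phi)**2 - target) == 0
print("Z~^2 + 3 (dZ~)^2 is branch-independent")
```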
The black-brane potential is identical in both branches because it is a
property of the bosonic part of the theory. It is given by
\begin{equation}
-V_{\rm BB} =
\tfrac{1}{3}\left[
(p^{0})^{2}e^{-2\sqrt{\frac{2}{3}}\phi}
+2(p^{1})^{2}e^{\sqrt{\frac{2}{3}}\phi}
\right]\, ,
\end{equation}
\noindent
and it is extremized for
\begin{equation}
\phi_{\rm h} =
\sqrt{\tfrac{2}{3}} \log{\left(\pm \sigma \frac{p^0}{p^1}\right)}\, ,
\end{equation}
\noindent
taking the value
\begin{equation}
\label{eq:bbpotentialonthehorizon}
-V_{\rm BB} (\phi_{\rm h},p)
=
\left[|p^{0}| (p^{1})^{2}\right]^{\frac{2}{3}}\, ,
\end{equation}
\noindent
in all cases, while the dual central charge takes the value
\begin{equation}
\label{eq:dualcentralchargeonthehorizon}
\tilde{\cal Z}(\phi_{\rm h}, p)
= \tfrac{1}{3}(1\pm 2)\, \mathrm{sign}(p^{0})
\left[|p^{0}|(p^{1})^{2}\right]^{\frac{1}{3}}\, .
\end{equation}
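These critical values can be confirmed symbolically; the following check (ours) takes the representative sign choice $p^{0},p^{1}>0$, $\sigma=+1$, corresponding to the upper sign:

```python
import sympy as sp

# Check the critical point of V_BB and the values of -V_BB and Z~ there,
# for the sample signs p0 > 0, p1 > 0, sigma = +1.
phi = sp.symbols('phi', real=True)
p0, p1 = sp.symbols('p0 p1', positive=True)
a = sp.sqrt(sp.Rational(2, 3))

mVBB = sp.Rational(1, 3)*(p0**2*sp.exp(-2*a*phi) + 2*p1**2*sp.exp(a*phi))
phi_h = a*sp.log(p0/p1)

# phi_h extremizes the potential ...
assert sp.simplify(sp.diff(-mVBB, phi).subs(phi, phi_h)) == 0
# ... where -V_BB takes the value [ |p0| (p1)^2 ]^(2/3) ...
assert sp.simplify(mVBB.subs(phi, phi_h)
                   - (p0*p1**2)**sp.Rational(2, 3)) == 0
# ... and Z~ equals (1/3)(1 + 2) [ |p0| (p1)^2 ]^(1/3).
Zt = sp.Rational(1, 3)*(p0*sp.exp(-a*phi) + 2*p1*sp.exp(phi/sp.sqrt(6)))
assert sp.simplify(Zt.subs(phi, phi_h)
                   - (p0*p1**2)**sp.Rational(1, 3)) == 0

print("horizon values of V_BB and Z~ verified for these signs")
```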
Since $\pm\sigma p^{0}/p^{1} >0$, the upper sign (which corresponds to the
supersymmetric case in the $\sigma$-branch, because it extremizes the dual
central charge) requires the following relation between the signs of the
charges $p^{I}$
\begin{equation}
\mathrm{sign}(p^{0}) = \sigma \mathrm{sign}(p^{1})\, ,
\end{equation}
\noindent
while the lower sign (which corresponds to non-supersymmetric extremal black
strings in the $\sigma$-branch) requires
\begin{equation}
\mathrm{sign}(p^{0}) = -\sigma \mathrm{sign}(p^{1})\, .
\end{equation}
We are going to construct the supersymmetric solutions of the $\sigma$-branch
next; the non-supersymmetric solutions of the $(-\sigma )$-branch will be
constructed at the same time.
\subsection{Supersymmetric and non-supersymmetric extremal solutions}
\label{sec:susysol}
The general prescription tells us that the extremal supersymmetric solutions
are given by two real harmonic functions of the form
Eq.~(\ref{eq:harmonicfunctions}), and are related to $\tilde{U}_{\rm susy}$
and $\phi_{\rm susy}$ by Eqs.~(\ref{eq:stabil}), which in this case take the
form
\begin{equation}
\label{eq:2}
K^{0}
\ =\
e^{-\tilde{U}_{\rm susy}}\ e^{\sqrt{\frac{2}{3}}\, \phi_{\rm susy}}\, ,
\hspace{1cm}
K^{1}
\ =\
\sigma\ e^{-\tilde{U}_{\rm susy}}\ e^{\frac{-1}{\sqrt{6}}\phi_{\rm susy}} \; .
\end{equation}
\noindent
Then, $\tilde{U}_{\rm susy}$ and $\phi_{\rm susy}$ are given by
\begin{equation}
\label{eq:USUSY}
e^{-\tilde{U}_{\rm susy}}
\ =\
\left[ K^{0}(K^{1})^{2} \right]^{1/3} \, ,
\hspace{1cm}
\phi_{\rm susy}
\ =\
\sqrt{\tfrac{2}{3}} \log\left(\sigma \frac{K^{0}}{K^{1}}\right)\; .
\end{equation}
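That Eqs.~(\ref{eq:USUSY}) indeed invert Eqs.~(\ref{eq:2}) is a two-line computation, which can be confirmed symbolically in both branches (our check):

```python
import sympy as sp

# Check that the stabilization equations invert as claimed:
# K^0 (K^1)^2 = e^{-3U}  and  sqrt(2/3) log(sigma K^0 / K^1) = phi.
U, phi = sp.symbols('U phi', real=True)
a = sp.sqrt(sp.Rational(2, 3))

for sigma in (+1, -1):
    K0 = sp.exp(-U) * sp.exp(a*phi)
    K1 = sigma * sp.exp(-U) * sp.exp(-phi/sp.sqrt(6))

    assert sp.simplify(K0 * K1**2 - sp.exp(-3*U)) == 0
    assert sp.simplify(a * sp.log(sigma * K0 / K1) - phi) == 0

print("stabilization equations invert to U_susy and phi_susy")
```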
\noindent
For these fields to be regular and well-defined, the harmonic functions
$K^{I}$ must satisfy several conditions\footnote{These restrictions can be
read directly from Eqs.~(\ref{eq:2}).}:
\begin{enumerate}
\item[i)] They should not vanish at any finite, positive, value of $\rho$:
this requirement relates the signs of the two constants that enter in each
function $K^{I}$, $p^{I}$ and $K^{I}_{\infty}$:
\begin{equation}
\mathrm{sign}(K^{I}_{\infty})= \mathrm{sign}(p^{I})\, .
\end{equation}
\item[ii)] For $\phi_{\rm susy}$ to be well-defined in the $\sigma$-branch
\begin{equation}
\mathrm{sign}(K^{0})=\sigma\ \mathrm{sign}( K^{1})\, ,
\end{equation}
\noindent
everywhere. This implies, in particular, that $\mathrm{sign}(p^{0})=\sigma
\mathrm{sign}(p^{1})$ which is the relation we found for the supersymmetric
critical points. Thus, there are two supersymmetric cases for each branch
which are disjoint in charge space: $\mathrm{sign}(p^{0})=+1,
\mathrm{sign}(p^{1})=\sigma$ and $\mathrm{sign}(p^{0})=-1,
\mathrm{sign}(p^{1})=-\sigma$.
\item[iii)] For $\tilde{U}_{\rm susy}$ to be well-defined ($e^{-\tilde{U}}>0$)
we must have
\begin{equation}
K^{0}>0\, ,
\hspace{1cm}
\mathrm{sign}( K^{1}) = \sigma\, ,\,\,\, \Rightarrow\,\,\,
p^{0}>0\, ,\,\,\,
\mathrm{sign}(p^{1})\sigma>0\, .
\end{equation}
\end{enumerate}
It is then evident that $K^{0}<0$ corresponds to the non-supersymmetric,
extremal case, which will be given by
\begin{equation}
\label{eq:Unsusy}
e^{-\tilde{U}_{\rm nsusy}}
\ =\
\left[ (-K^{0})(K^{1})^{2} \right]^{1/3}\, ,
\hspace{1cm}
\phi_{\rm nsusy}
\ =\
\sqrt{\tfrac{2}{3}} \log\left[\sigma \frac{(-K^{0})}{K^{1}}\right]\, .
\end{equation}
To summarize: the supersymmetric and the non-supersymmetric extremal solutions
can be written in this unified way:
\begin{equation}
\label{eq:Uex}
e^{-\tilde{U}_{\rm ext}}
\ =\
\left[ |K^{0}|(K^{1})^{2} \right]^{1/3}\, ,
\hspace{1cm}
\phi_{\rm ext}
\ =\
\sqrt{\tfrac{2}{3}} \log\left|\frac{K^{0}}{K^{1}}\right|\, ,
\end{equation}
\noindent
with the harmonic functions given by
\begin{equation}
\label{eq:3}
K^{0}
\ =\
\mathrm{sign}(p^{0})\, \left(e^{\sqrt{\frac{2}{3}}\phi_{\infty}} +|p^{0}|\rho\right)\, ,
\hspace{1cm}
K^{1}
\ =\
\sigma \left(
e^{-\frac{1}{\sqrt{6}}\phi_{\infty}}+ |p^{1}|\rho \right)\, .
\end{equation}
\noindent
The supersymmetric cases correspond to the signs $\mathrm{sign}(p^{0})=+1$,
$\mathrm{sign}(p^{1})=\sigma$, and the non-supersymmetric ones to
$\mathrm{sign}(p^{0})=-1$, $\mathrm{sign}(p^{1})=-\sigma$.
The tension of these extremal solutions, defined in the $\rho\rightarrow 0$
limit by Eq.~(\ref{eq:generaltensionformula}),
is given in all cases by the manifestly positive quantity
\begin{equation}
\label{eq:tensionextremal}
T_{1} = \tfrac{1}{4}\left( |p^{0}|e^{-\sqrt{\frac{2}{3}}\phi}
+2 |p^{1}|e^{\frac{1}{\sqrt{6}}\phi}\right)\, ,
\end{equation}
\noindent
which only equals the absolute value of the central charge when
$\mathrm{sign}(p^{0})= \sigma \mathrm{sign}(p^{1})$, which happens in the
supersymmetric cases. Furthermore, in the supersymmetric cases, as we just
said, $\mathrm{sign}(p^{0})>0$ and
\begin{equation}
T_{1} = \tfrac{3}{4}\mathcal{Z}_{(\sigma )}(\phi_{\infty},p)\, .
\end{equation}
\noindent
In the non-supersymmetric cases, as one should expect, the tension is larger
than the absolute value of the central charge.
The entropy is given by the black-string potential on the horizon according to
the formula Eq.~(\ref{eq:extremalarea}). Then,
Eq.~(\ref{eq:bbpotentialonthehorizon}) tells us that the entropy density is,
in all extremal cases, given by
\begin{equation}
\tilde{S} = \left[|p^{0}| (p^{1})^{2}\right]^{\frac{2}{3}}\, .
\end{equation}
\noindent
Comparing with Eq.~(\ref{eq:dualcentralchargeonthehorizon}), we find that the
relation between the entropy density and the dual central charge on the
horizon Eq.~(\ref{eq:extremalareaN2d5}) only holds in the supersymmetric
cases. In the non-supersymmetric ones
\begin{equation}
\tilde{S} > |\tilde{\cal Z}(\phi_{\rm h}, p)|^{2} = \tfrac{1}{9} \tilde{S}\, .
\end{equation}
\subsection{Non-extremal solutions}
\label{sec:nonextsol}
As in the black-hole case considered in Ref.~\cite{Meessen:2011bd}, the most
general solution can be obtained by direct integration using the fact that the
effective action is separable: defining the new variables
\begin{equation}
\label{eq:7}
x \ \equiv\ \tilde{U} -\sqrt{\tfrac{2}{3}}\, \phi \;\;\; ,\;\;\;
y \ \equiv\ \tilde{U} +\tfrac{1}{\sqrt{6}}\,\phi\, ,
\end{equation}
\noindent
the effective action Eq.~(\ref{eq:effectived5}) takes the form
\begin{equation}
\mathcal{I}[x,y] \; =\;
\tfrac{1}{3}\int d\rho\
\left[\
(\dot{x})^{2}
\ +\ 2(\dot{y})^{2}
\ +\ (p^{0})^{2}e^{2x}
\ +\ 2 (p^{1})^{2}e^{2y}\
\right]\; ,
\end{equation}
\noindent
and the equations of motion that follow from it can be integrated
immediately in full generality, giving
\begin{eqnarray}
\label{eq:Effe10a}
e^{-3\tilde{U}} & =& |p^{0}(p^{1})^{2}| \
\left(\frac{\sinh{(C\rho+D)}}{C}\right)^{2}
\left(\frac{\sinh{(A\rho+B)}}{A}\right)\, , \\
& &\nonumber \\
\phi & =& -\sqrt{\tfrac{2}{3}}
\log{
\left\{
\left|\frac{p^{1}}{p^{0}}\right|
\left(\frac{A}{\sinh{(A\rho+B)}}\right)
\left(\frac{\sinh{(C\rho+D)}}{C}\right)
\right\}
}\, ,
\end{eqnarray}
\noindent
where $A,B,C$ and $D$ are (positive) integration constants. Their values are
related to the non-extremality parameter $\omega$ by the Hamiltonian
constraint Eq.~(\ref{eq:Effe2a})
\begin{equation}
\label{eq:Effe30}
2C^{2}\ +\ A^{2}\; =\; 3 (\omega/2)^{2}\, .
\end{equation}
\noindent
The regularity of the solution imposes $A=C$. This constraint, together with
the Hamiltonian constraint Eq.~(\ref{eq:Effe30}), implies
\begin{equation}
A=C=\omega/2\, .
\end{equation}
\noindent
We are left with two constants, $B$ and $D$, that have to be expressed in
terms of the physical parameters of the solution by requiring $\tilde{U}(0)=0$
(asymptotic flatness) and $\phi(0)=\phi_{\infty}$. These conditions can be
solved, yielding
\begin{eqnarray}
\label{eq:BD}
B
& = &
\log\left(\frac{\omega}{2|p^0|}e^{\sqrt{\frac{2}{3}}\phi_{\infty}}
+
\sqrt{1+\frac{\omega^2}{4|p^0|^2}e^{2\sqrt{\frac{2}{3}}\phi_{\infty}}}\right)\, ,
\\
& & \nonumber \\
D
& = &
\log\left(\frac{\omega}{2|p^1|}e^{-\frac{1}{\sqrt{6}}\phi_{\infty}}
+
\sqrt{1+\frac{\omega^{2}}{4|p^1|^2}e^{-\frac{2}{\sqrt{6}}\phi_{\infty}}}\right)\, .
\end{eqnarray}
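One can check numerically that these values of $B$ and $D$ indeed enforce $\tilde{U}(0)=0$ and $\phi(0)=\phi_{\infty}$ in the general solution Eq.~(\ref{eq:Effe10a}) with $A=C=\omega/2$ (the charges, $\omega$ and $\phi_{\infty}$ below are arbitrary illustrative values):

```python
import math

# Check that B, D of Eq. (eq:BD) enforce the boundary conditions U~(0) = 0
# and phi(0) = phi_inf in Eq. (eq:Effe10a) with A = C = omega/2.
# The values of p0, p1, phi_inf, omega are illustrative only.
s23 = math.sqrt(2.0 / 3.0)
p0, p1, phi_inf, omega = 2.0, 3.0, 0.7, 1.3
A = C = omega / 2.0
# log(u + sqrt(1 + u^2)) = arcsinh(u)
B = math.asinh(omega / (2 * abs(p0)) * math.exp(s23 * phi_inf))
D = math.asinh(omega / (2 * abs(p1)) * math.exp(-phi_inf / math.sqrt(6.0)))

rho = 0.0
e_minus_3U = (abs(p0 * p1**2)
              * (math.sinh(C * rho + D) / C) ** 2
              * (math.sinh(A * rho + B) / A))
phi = -s23 * math.log(abs(p1 / p0)
                      * (A / math.sinh(A * rho + B))
                      * (math.sinh(C * rho + D) / C))
# e_minus_3U -> 1 and phi -> phi_inf, as required
```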
\noindent
The tension is given by
\begin{equation}
\label{eq:mass}
T_{p}
=
-\tfrac{1}{8}\omega +\tfrac{1}{8}\sqrt{\omega^{2}+4\left( p^{0}\right)^{2}
e^{-2\sqrt{\frac{2}{3}}\phi_{\infty}}}
+\tfrac{1}{4}\sqrt{\omega^{2}+4\left(p^{1}\right)^{2} e^{\sqrt{\frac{2}{3}}\phi_{\infty}}}\, .
\end{equation}
\noindent
When the charges vanish we recover the Schwarzschild branes' tension
$T_{p}=|\omega|/2$. Taking $\omega=0$ we obtain the tension of all the
extremal cases, Eq.~(\ref{eq:tensionextremal}). This equation can be inverted
in order to identify explicitly the different extremal limits and the
corresponding tension, but the resulting expression is very involved, so we
will analyze the extremal limits directly from Eq.~(\ref{eq:mass}).
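The $\omega\rightarrow 0$ limit can be confirmed numerically: for any charges, the tension Eq.~(\ref{eq:mass}) at $\omega=0$ reproduces the extremal tension Eq.~(\ref{eq:tensionextremal}). A short sketch with illustrative charge values:

```python
import math

# Check that the omega -> 0 limit of the tension (eq:mass) reproduces the
# extremal tension (eq:tensionextremal). Charge values are illustrative.
s23 = math.sqrt(2.0 / 3.0)

def T_p(omega, p0, p1, phi_inf):
    """Tension of the non-extremal solution, Eq. (eq:mass)."""
    return (-omega / 8.0
            + math.sqrt(omega**2 + 4 * p0**2 * math.exp(-2 * s23 * phi_inf)) / 8.0
            + math.sqrt(omega**2 + 4 * p1**2 * math.exp(s23 * phi_inf)) / 4.0)

def T_extremal(p0, p1, phi_inf):
    """Extremal tension, Eq. (eq:tensionextremal)."""
    return 0.25 * (abs(p0) * math.exp(-s23 * phi_inf)
                   + 2 * abs(p1) * math.exp(phi_inf / math.sqrt(6.0)))
```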
The entropy density is given by
\begin{eqnarray}
\label{eq:S}
\tilde{S} = \left| p^0 (p^1)^2\right|^{\frac{2}{3}}
\left(
\frac{\omega}{2|p^1|}e^{-\frac{1}{\sqrt{6}}\phi_{\infty}}
+\sqrt{1+\frac{\omega^2}{4|p^1|^2}e^{-\frac{2}{\sqrt{6}}\phi_{\infty}}}
\right)^{\frac{4}{3}}
\left(
\frac{\omega}{2|p^0|}e^{\sqrt{\frac{2}{3}}\phi_{\infty}}
+\sqrt{1+\frac{\omega^2}{4|p^0|^2}e^{2\sqrt{\frac{2}{3}}\phi_{\infty}}}
\right)^{\frac{2}{3}}\, .
\end{eqnarray}
\noindent
Taking the extremal limit $\omega\rightarrow 0$, we recover the expression
already found for the extremal case. The Hawking temperature can be found
using the relation between the entropy density, the temperature and the
non-extremality parameter Eq.~(\ref{eq:relation}).
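The extremal limit of the entropy density can likewise be checked directly: at $\omega=0$ both bracketed factors in Eq.~(\ref{eq:S}) reduce to unity, leaving $\tilde{S}=\left|p^{0}(p^{1})^{2}\right|^{2/3}$. A minimal sketch (charge values illustrative):

```python
import math

# Entropy density of the non-extremal solution, Eq. (eq:S); at omega = 0 it
# must reduce to |p0 (p1)^2|^(2/3). Charge values are illustrative.
s23 = math.sqrt(2.0 / 3.0)

def S_tilde(omega, p0, p1, phi_inf):
    u1 = omega / (2 * abs(p1)) * math.exp(-phi_inf / math.sqrt(6.0))
    u0 = omega / (2 * abs(p0)) * math.exp(s23 * phi_inf)
    return (abs(p0 * p1**2) ** (2.0 / 3.0)
            * (u1 + math.sqrt(1 + u1**2)) ** (4.0 / 3.0)
            * (u0 + math.sqrt(1 + u0**2)) ** (2.0 / 3.0))
```

The entropy density grows monotonically with the non-extremality parameter $\omega$, as expected.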
\section*{Acknowledgments}
The authors would like to thank R.~Emparan and P.~Meessen for useful
conversations. This work has been supported in part by the Spanish Ministry of
Science and Education grant FPA2009-07692, the Comunidad de Madrid grant
HEPHACOS S2009ESP-1473 and the Spanish Consolider-Ingenio 2010 program CPAN
CSD2007-00042. The work of CSS has been supported by a JAE-predoc grant JAEPre
2010 00613. TO wishes to thank M.M.~Fern\'andez for her permanent support.
\section{Introduction}
Breakdown of superfluidity at large velocities of the flow is a
fundamentally important problem which has been widely studied in physics of
quantum liquids, such as $^{4}$He and Bose-Einstein condensates (BECs) of
dilute atomic gases (see book \cite{ps2003} and references therein). As is
known, the breakdown is caused by opening of channels of radiation of
elementary excitations in the fluid, as it happens, for instance, with
increase of the velocity of a body (``obstacle'') moving through the
superfluid (in the experiment, a far-blue-detuned laser beam, which produces
a local repulsive force acting on atoms, usually plays the role of the
obstacle \cite{engels07,engels08}). In the BEC, solitary waves and vortices
are readily generated if the size of the obstacle is of the order of (or
greater than) the characteristic healing length of condensate. The
generation of these structures manifests itself as an effective dissipation,
which implies the loss of superfluidity at some critical value of the
obstacle's velocity \cite{hakim,us1}. On the other hand, if the size of the
obstacle is much smaller than the healing length, the main loss channel
corresponds to the Cherenkov radiation of Bogoliubov excitations, and it
opens at supersonic velocities of the obstacle \cite{ap04}. If a large
obstacle moves at a supersonic velocity, then the amplitude of the generated
waves becomes large too. In the latter case, two dispersive shocks, which
start their propagation from the front and the rear parts of the moving
body, are formed. Far from the body, the shock front gradually transforms
into a linear ``ship wave'' located outside the Mach cone \cite%
{carusotto,gegk07,gk07,gsk08}, whereas the rear zone of the shock is
converted into a ``fan'' of oblique dark solitons located inside the Mach
cone \cite{ek06,egk06,egk07b}. Although, as is well known, such dark
solitons are unstable with respect to transverse perturbations, it was shown
in \cite{kp08} that for a flow velocity greater than some critical value
this instability becomes a convective one (rather than being absolute) in
the reference frame moving along with the obstacle. This fact actually means
that the dark solitons are effectively stable in the region around the
obstacle. Some of these structures have already been observed in experiments
\cite{cornell05,carusotto}, and similar non-stationary dispersive shocks
were studied both theoretically and experimentally in BEC \cite%
{damski04,kgk04,simula,hoefer,engels07} (see also review \cite{nonlin}) and
nonlinear optics \cite{fleischer,trillo,fleischer3,fleischer2,el07,khamis08}.
The picture described above corresponds to a single-component BEC whose
mean-field dynamics is governed by the respective Gross-Pitaevskii equation
\cite{ps2003}. At the same time, one of
the focal points of activity in BEC physics has been the study of
multi-component settings, a prototypical one being presented by binary
mixtures \cite{Myatt1997a,Hall1998a,Stamper-Kurn1998b}. These two-component
media exhibit phase-separation phenomena \cite{boris1,boris2,tsubota} due to
the nonlinear interaction between the different atomic species (or possibly
different hyperfine states) that constitute the mixture. The formation of
robust single- and multi-ring patterns \cite{Hall1998a,dshall}, the
evolution of initially coincident triangular vortex lattices through a
turbulent regime into an interlaced square vortex lattice~\cite%
{Schweikhard2004a} in coupled hyperfine states in the $^{87}$Rb condensate,
and the study of the interplay between atomic states at different Zeeman
levels in the $^{23}$Na condensate, forming striated magnetic domains~in
optical traps \cite{Miesner1999a,Stenger1999a}, are only a small subset
among the many possibilities that multi-component BECs can offer. It should
also be noted that mixtures with a higher number of components, namely
spinor condensates~\cite{ket1,cahn}, are known too. They have been realized
with the help of far-off-resonant optical techniques for trapping ultracold
atomic gases~\cite{ket0}, which, in turn, allowed the spin degree of freedom
to be explored (previously, it was frozen in magnetic traps).
Our aim in the present work is to unite these two areas by investigating the
effects generated by the motion of an obstacle in a ``pancake''-shaped, i.e.,
effectively two-dimensional (2D), \emph{two-component} BEC. In an
earlier work \cite{us2}, the critical situation, when the obstacle had the
velocity comparable to the two speeds of sound in the two-component system,
was examined. Here, we extend the analysis to the supercritical case, when
the speed of the moving body may be significantly higher than the sound
speed(s). We demonstrate that two branches of linear, so-called
``ship-wave'' patterns form outside of the two Mach cones (which are
associated with the speeds of sound), while oblique dark solitons are
located inside the wider Mach cone. While these dark solitons are unstable
at relatively low velocities of the motion of the obstacle, they can be
convectively stabilized at sufficiently high values of the velocity.
The presentation is structured as follows. In section II, we outline the
model under consideration. In section III, we study the linear (ship) waves
by means of both analytical and numerical methods. In section IV, we study
oblique dark solitons, again using both analytical and numerical methods.
Finally, we summarize our findings in section V, and discuss potential
directions for future work.
\section{Mathematical Model and Setup}
In the usual mean-field approximation, the 2D flow of a binary condensate
mixture past an obstacle obeys the system of nonlinearly coupled
Gross-Pitaevskii equations (GPEs), with spatial coordinates $\mathbf{r}%
=(x,y) $. In the scaled form, the equations take a well-known form \cite%
{ps2003,nonlin},
\begin{equation}
\begin{split}
i\frac{\partial \psi _{1}}{\partial t}& =-\frac{1}{2}\Delta \psi
_{1}+(g_{11}\left\vert \psi _{1}\right\vert ^{2}+g_{12}\left\vert \psi
_{2}\right\vert ^{2})\psi _{1}+V(\mathbf{r},t)\psi _{1}, \\
i\frac{\partial \psi _{2}}{\partial t}& =-\frac{1}{2}\Delta \psi
_{2}+(g_{12}\left\vert \psi _{1}\right\vert ^{2}+g_{22}\left\vert \psi
_{2}\right\vert ^{2})\psi _{2}+V(\mathbf{r},t)\psi _{2}.
\end{split}
\label{1-2}
\end{equation}
We assume that atoms in both species have the same mass (i.e., they
represent different hyperfine states of the same atom, as is often the case,
see e.g., Ref. \cite{dshall} and references therein), $V(\mathbf{r},t)$
being the potential induced by the moving obstacle, which is identical for
both components.
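A simulation of Eqs.~(\ref{1-2}) in the obstacle frame can be sketched with a standard second-order split-step Fourier (Strang splitting) integrator; the imposed flow enters through the phase of the initial state. The coupling constants, densities, velocity and obstacle potential below are those quoted later for Fig.~2, but the grid, box size and time step are placeholder values, and this is a generic sketch, not the authors' code (for a faithful run, $UL/2\pi$ should be an integer so that the flow phase is periodic on the box):

```python
import numpy as np

# Illustrative split-step Fourier sketch for the coupled GPEs (1-2) in the
# obstacle frame. Physical parameters follow Fig. 2; grid and dt are
# placeholders chosen only to keep the sketch cheap to run.
g11, g22, g12 = 1.5, 1.25, 1.0
n10, n20, U = 1.0, 2.0, 3.5

L, N, dt = 40.0, 64, 1.0e-3
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2

V = 2.0 / np.cosh(np.sqrt(X**2 + Y**2) / 2.0) ** 2   # static obstacle potential
mu1 = g11 * n10 + g12 * n20                          # Eq. (1-7)
mu2 = g12 * n10 + g22 * n20
psi1 = np.sqrt(n10) * np.exp(-1j * U * X)            # uniform flow u = (-U, 0)
psi2 = np.sqrt(n20) * np.exp(-1j * U * X)

half_kin = np.exp(-0.25j * dt * K2)                  # half-step of exp(-i dt k^2/2)

def step(p1, p2):
    """One Strang step: half kinetic, full nonlinear + potential, half kinetic."""
    p1 = np.fft.ifft2(half_kin * np.fft.fft2(p1))
    p2 = np.fft.ifft2(half_kin * np.fft.fft2(p2))
    d1, d2 = np.abs(p1) ** 2, np.abs(p2) ** 2
    p1 = p1 * np.exp(-1j * dt * (g11 * d1 + g12 * d2 + V - mu1))
    p2 = p2 * np.exp(-1j * dt * (g12 * d1 + g22 * d2 + V - mu2))
    p1 = np.fft.ifft2(half_kin * np.fft.fft2(p1))
    p2 = np.fft.ifft2(half_kin * np.fft.fft2(p2))
    return p1, p2

norm0 = np.sum(np.abs(psi1) ** 2 + np.abs(psi2) ** 2)
for _ in range(5):
    psi1, psi2 = step(psi1, psi2)
norm1 = np.sum(np.abs(psi1) ** 2 + np.abs(psi2) ** 2)
# every sub-step is unitary, so the atom number is conserved to rounding error
```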
Small-amplitude waves and solitons correspond to potential flows with zero
vorticity, for which Eqs.~(\ref{1-2}) can be transformed into a hydrodynamic
form by means of substitutions
\begin{equation}
\psi_{1}(\mathbf{r},t)=\sqrt{n_{1}(\mathbf{r},t)}\exp \left( i\int^{{\bf{r}}} \mathbf{u%
}_{1}(\mathbf{r}',t)d\mathbf{r}'-i\mu _{1}t\right),\quad
\psi_{2}(\mathbf{r},t)=%
\sqrt{n_{2}(\mathbf{r},t)}\exp \left( i\int^{{\bf{r}}} \mathbf{u}_{2}(\mathbf{r}',t)d%
\mathbf{r}'-i\mu _{2}t\right) , \label{1-3}
\end{equation}
where $n_{1,2}(\mathbf{r},t)$ are the atom densities of the two BEC
components, $\mathbf{u}_{1,2}(\mathbf{r},t)$ their velocity fields, and $\mu
_{1,2}$ the respective chemical potentials. Substituting Eqs.~(\ref{1-3})
into the Eqs.~(\ref{1-2}) we arrive at the following system,%
\begin{equation}
\begin{split}
n_{1,t}+\nabla \cdot (n_{1}\mathbf{u}_{1}) =0,\qquad
n_{2,t}+\nabla \cdot (n_{2}\mathbf{u}_{2})=0, \\
\mathbf{u}_{1,t}+(\mathbf{u}_{1}\cdot \nabla )\mathbf{u}_{1}+g_{11}\nabla
n_{1}+g_{12}\nabla n_{2}+\nabla \left( \frac{(\nabla n_{1})^{2}}{8n_{1}^{2}}-%
\frac{\Delta n_{1}}{4n_{1}}\right) +\nabla V(\mathbf{r},t)& =0, \\
\mathbf{u}_{2,t}+(\mathbf{u}_{2}\cdot \nabla )\mathbf{u}_{2}+g_{12}\nabla
n_{1}+g_{22}\nabla n_{2}+\nabla \left( \frac{(\nabla n_{2})^{2}}{8n_{2}^{2}}-%
\frac{\Delta n_{2}}{4n_{2}}\right) +\nabla V(\mathbf{r},t)& =0.
\end{split}
\label{1-4}
\end{equation}%
The first pair of the equations represents the conservation of the number of
atoms in each component, and the second pair corresponds to the Euler's
equations for fluid velocities under the action of the pressure induced by
interactions between atoms, the obstacle's potential, and quantum dispersion.
We consider waves generated by the obstacle moving at constant velocity $U$
along the $x$-axis,
\begin{equation}
V(\mathbf{r},t)=V(x-Ut), \label{1-5}
\end{equation}%
through a uniform condensate, so that at $|x|\rightarrow \infty $ both
components have constant densities and vanishing velocities. This setting
implies the absence of a trapping potential in the plane of the quasi-2D BEC
(or, more realistically, a very weak trapping). In the reference frame
moving along with the obstacle, the unperturbed condensate flows at constant
velocity $\mathbf{u}=(-U,0)$, hence the respective boundary conditions for
the densities and velocities are
\begin{equation}
n_{1}\rightarrow n_{10},~n_{2}\rightarrow n_{20},~\mathbf{u}_{1}\rightarrow
\mathbf{u},~\mathbf{u}_{2}\rightarrow \mathbf{u}\quad \text{at}\quad
|x|\rightarrow \infty . \label{1-6}
\end{equation}%
As follows from Eqs.~(\ref{1-6}), chemical potentials $\mu _{1,2}$ are
related with the asymptotic densities $n_{10},n_{20}$ by the relations
\begin{equation}
\mu _{1}=g_{11}n_{10}+g_{12}n_{20},\quad \mu _{2}=g_{12}n_{10}+g_{22}n_{20}.
\label{1-7}
\end{equation}%
In this reference frame, the wave pattern is a stationary one, which is
convenient for analytical considerations.
\section{Linear ``ship waves''}
We first consider linear waves generated by the moving obstacle, assuming
that potential $V(\mathbf{r},t)$ is weak enough to apply the perturbation
theory \cite{ap04} based on the linearized equations. Actually, the
approximation is valid wherever the amplitude of the waves is small%
---in particular, far enough from the obstacle outside the Mach cones
associated with the speeds of sound. Thus, we introduce small deviations
from the uniform state, namely,
\begin{equation}
n_{1}=n_{10}+n_{1}^{\prime },\quad n_{2}=n_{20}+n_{2}^{\prime },\quad\mathbf{u}_{1}=%
\mathbf{u}+\mathbf{u}_{1}^{\prime },\quad\mathbf{u}_{2}=\mathbf{u}+\mathbf{u}%
_{2}^{\prime }, \label{2-1}
\end{equation}%
and accordingly linearize Eqs. (\ref{1-4}):
\begin{equation}
n_{1,t}^{\prime }+n_{10}(\nabla \cdot \mathbf{u}_{1}^{\prime })+(\mathbf{u}%
\cdot \nabla )n_{1}^{\prime }=0,\quad
n_{2,t}^{\prime }+n_{20}(\nabla \cdot
\mathbf{u}_{2}^{\prime })+(\mathbf{u}\cdot \nabla )n_{2}^{\prime }=0,
\end{equation}
\begin{equation}
\begin{split}
\mathbf{u}_{1,t}^{\prime }+(\mathbf{u}\cdot \nabla )\mathbf{u}_{1}^{\prime
}+g_{11}\nabla n_{1}^{\prime }+g_{12}\nabla n_{2}^{\prime }-\frac{1}{4n_{10}}%
\nabla (\Delta n_{1}^{\prime })& =-\nabla V, \\
\mathbf{u}_{2,t}^{\prime }+(\mathbf{u}\cdot \nabla )\mathbf{u}_{2}^{\prime
}+g_{12}\nabla n_{1}^{\prime }+g_{22}\nabla n_{2}^{\prime }-\frac{1}{4n_{20}}%
\nabla (\Delta n_{2}^{\prime })& =-\nabla V.
\end{split}
\label{2-2}
\end{equation}%
In the stationary case, all time derivatives vanish. Further, we apply the
Fourier transform,
\begin{equation}
n_{1}^{\prime }(\mathbf{r},t)=\int \int \tilde{n}_{1}^{\prime }(\mathbf{k}%
,t)e^{i\mathbf{kr}}\frac{d^{2}k}{(2\pi )^{2}}. \label{2-2a}
\end{equation}%
Then, we eliminate the velocity perturbations in Eqs.~(\ref{2-2}) in favor
of the densities, arriving at the following system of two linear equations:
\begin{equation}
\begin{split}
& [-(\mathbf{k}\cdot \mathbf{u})^{2}+k^{2}(g_{11}n_{10}+k^{2}/4)]\tilde{n}%
_{1}^{\prime }+k^{2}g_{12}n_{10}\tilde{n}_{2}^{\prime }=-k^{2}\tilde{V}%
n_{10}, \\
& k^{2}g_{12}n_{20}\tilde{n}_{1}^{\prime }+[-(\mathbf{k}\cdot \mathbf{u}%
)^{2}+k^{2}(g_{22}n_{20}+k^{2}/4)]\tilde{n}_{2}^{\prime }=-k^{2}\tilde{V}%
n_{20},
\end{split}
\label{2-3}
\end{equation}%
where tildes denote the Fourier components. This linear system can be
readily solved for $\tilde{n}_{1,2}^{\prime }$ and the inverse
Fourier transform yields
\begin{equation}
\begin{split}
n_{1}^{\prime }& =-n_{10}\int \int \frac{k^{2}\tilde{V}\left\{ [-(\mathbf{k}%
\cdot \mathbf{u})^{2}+k^{2}(g_{22}n_{20}+k^{2}/4)]-k^{2}g_{12}n_{20}\right\}
}{((\mathbf{k}\cdot \mathbf{u})^{2}-\omega _{+}^{2})((\mathbf{k}\cdot
\mathbf{u})^{2}-\omega _{-}^{2})}\frac{d^{2}k}{(2\pi )^{2}}, \\
n_{2}^{\prime }& =-n_{20}\int \int \frac{k^{2}\tilde{V}\left\{ [-(\mathbf{k}%
\cdot \mathbf{u})^{2}+k^{2}(g_{11}n_{10}+k^{2}/4)]-k^{2}g_{12}n_{10}\right\}
}{((\mathbf{k}\cdot \mathbf{u})^{2}-\omega _{+}^{2})((\mathbf{k}\cdot
\mathbf{u})^{2}-\omega _{-}^{2})}\frac{d^{2}k}{(2\pi )^{2}},
\end{split}
\label{2-4}
\end{equation}%
where the dispersion relations for the linear waves in the binary mixture
are given by
\begin{equation}
\omega _{\pm }^{2}=\tfrac{1}{2}k^{2}\left[ g_{11}n_{10}+g_{22}n_{20}+\tfrac{1%
}{2}k^{2}\pm \sqrt{(g_{11}n_{10}-g_{22}n_{20})^{2}+4g_{12}^{2}n_{10}n_{20}}%
\right] . \label{2-5}
\end{equation}%
In the long-wave limit, Eqs.~(\ref{2-5}) yield the sound velocities \cite%
{us2}, $c_{\pm }=\omega (k)/k$, given by:%
\begin{equation}
c_{\pm }\equiv \lim_{k\to0} \frac{\omega (k)}{k}=\sqrt{\frac{%
g_{11}n_{10}+g_{22}n_{20}\pm \sqrt{%
(g_{11}n_{10}-g_{22}n_{20})^{2}+4g_{12}^{2}n_{10}n_{20}}}{2}}. \label{2-6}
\end{equation}
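As a cross-check (an equivalent standard rewriting, not taken from the text), $c_{\pm}^{2}$ are the eigenvalues of the symmetrized interaction matrix, and $\omega_{\pm}(k)/k$ from Eq.~(\ref{2-5}) tends to $c_{\pm}$ as $k\rightarrow 0$. A sketch with the parameter set used later for Fig.~2:

```python
import numpy as np

# Sound speeds (2-6) versus the eigenvalues of the symmetrized interaction
# matrix, and the k -> 0 limit of the Bogoliubov branches (2-5).
g11, g22, g12 = 1.5, 1.25, 1.0
n10, n20 = 1.0, 2.0

rad = np.sqrt((g11 * n10 - g22 * n20) ** 2 + 4 * g12**2 * n10 * n20)
c2_plus = (g11 * n10 + g22 * n20 + rad) / 2.0    # = 3.5 for these parameters
c2_minus = (g11 * n10 + g22 * n20 - rad) / 2.0   # = 0.5 for these parameters

Mmat = np.array([[g11 * n10, g12 * np.sqrt(n10 * n20)],
                 [g12 * np.sqrt(n10 * n20), g22 * n20]])
lam_minus, lam_plus = np.sort(np.linalg.eigvalsh(Mmat))

def omega2(k, sign):
    """Squared Bogoliubov frequencies, Eq. (2-5); sign = +1 or -1."""
    return 0.5 * k**2 * (g11 * n10 + g22 * n20 + 0.5 * k**2 + sign * rad)
```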
Expressions (\ref{2-4}) can be analyzed following the lines of Ref.~\cite%
{gk07,gsk08}. To this end, we introduce new coordinates (see Fig.~1),
\begin{equation}
\mathbf{r}\equiv (-r\cos \chi ,r\sin \chi ),\quad \mathbf{k}\equiv (k\cos
\eta ,k\sin \eta ), \label{2-7}
\end{equation}%
and assume that the wavelength of the pattern is much greater than the
characteristic size of the obstacle, hence its potential can be approximated
by the form $V(\mathbf{r})=V_{0}\delta (\mathbf{r})$. Then, we obtain
\begin{equation}
\begin{split}
n_{1}^{\prime }& =\frac{4V_{0}n_{10}}{\pi ^{2}}\int_{-\pi }^{\pi
}\int_{0}^{\infty }\frac{k[(g_{22}-g_{12})n_{20}+k^{2}/4-U^{2}\cos ^{2}\eta
]e^{i\mathbf{k}\mathbf{r}}}{(k^{2}-k_{+}^{2}-i0)(k^{2}-k_{-}^{2}-i0)}dkd\eta
, \\
n_{2}^{\prime }& =\frac{4V_{0}n_{20}}{\pi ^{2}}\int_{-\pi }^{\pi
}\int_{0}^{\infty }\frac{k[(g_{11}-g_{12})n_{10}+k^{2}/4-U^{2}\cos ^{2}\eta
]e^{i\mathbf{k}\mathbf{r}}}{(k^{2}-k_{+}^{2}-i0)(k^{2}-k_{-}^{2}-i0)}dkd\eta
,
\end{split}
\label{2-8}
\end{equation}%
where the infinitesimal imaginary parts in the denominators are introduced to
select the correct contributions from the poles, corresponding to
adiabatically slow switching on of the potential, and
\begin{equation}
k_{\pm }\equiv 2\sqrt{U^{2}\cos ^{2}\eta -c_{\pm }^{2}}. \label{2-9}
\end{equation}
\begin{figure}[tb]
\begin{center}
\includegraphics[width=6cm,height=6cm,clip]{fig1.eps}
\end{center}
\caption{(Color online.) Coordinates defining radius-vector $\mathbf{r}$ and
wave vector $\mathbf{k}$. The latter one is normal to the wave front of one
of the ship-wave modes, which is shown schematically by a curve.}
\label{fig1}
\end{figure}
Next, using the $2\pi$-periodicity of the integrand in $\eta$, we split the
integration domain into two parts: $\int_{-\pi /2}^{3\pi /2}d\eta \equiv
\int_{-\pi /2}^{\pi /2}d\eta +\int_{\pi /2}^{3\pi /2}d\eta $. After the
replacement $\eta ^{\prime }=\eta -\pi $ in the second
term, one can notice that the integrand turns into its own complex
conjugate, allowing one to write the integrals in Eq.~(\ref{2-8}) as
\begin{equation}
\begin{split}
n_{1}^{\prime }& =\frac{8V_{0}n_{10}}{\pi ^{2}}\mathrm{Re}\int_{-\pi
/2}^{\pi /2}d\eta \int_{0}^{\infty }\frac{%
k[(g_{22}-g_{12})n_{20}+k^{2}/4-U^{2}\cos ^{2}\eta ]e^{i\mathbf{k}\mathbf{r}}%
}{(k^{2}-k_{+}^{2}-i0)(k^{2}-k_{-}^{2}-i0)}dk, \\
n_{2}^{\prime }& =\frac{8V_{0}n_{20}}{\pi ^{2}}\mathrm{Re}\int_{-\pi
/2}^{\pi /2}d\eta \int_{0}^{\infty }\frac{%
k[(g_{11}-g_{12})n_{10}+k^{2}/4-U^{2}\cos ^{2}\eta ]e^{i\mathbf{k}\mathbf{r}}%
}{(k^{2}-k_{+}^{2}-i0)(k^{2}-k_{-}^{2}-i0)}dk.
\end{split}
\label{2-10}
\end{equation}%
The integration over $k$ should be carried out over the positive real
half-axis. However, we can add to this path a quarter of an infinite circle
and the imaginary half-axis in the complex $k$ plane, to build a closed
integration contour. It is easy to show that the contribution from the
quarter of the infinite circle is zero, and contribution from the imaginary
axis depends on $r$ as $r^{-2}$, decaying at large $r$ much faster than the
contribution of the pole, which is $\sim r^{-1/2}$. Thus, far from the
obstacle, it is sufficient to keep only the contribution from the poles,
which yields ($\nu \equiv \pi -\chi -\eta $)
\begin{equation}
\begin{split}
n_{1}^{\prime }& =-\frac{2V_{0}n_{10}}{\pi (c_{+}^{2}-c_{-}^{2})}\left\{
[c_{+}^{2}-(g_{22}-g_{12})n_{20}]\mathrm{Im}\int_{-\pi /2}^{\pi /2}d\eta
e^{ik_{+}r\cos \nu }-[c_{-}^{2}-(g_{22}-g_{12})n_{20}]\mathrm{Im}\int_{-\pi
/2}^{\pi /2}d\eta e^{ik_{-}r\cos \nu }\right\} , \\
n_{2}^{\prime }& =-\frac{2V_{0}n_{20}}{\pi (c_{+}^{2}-c_{-}^{2})}\left\{
[c_{+}^{2}-(g_{11}-g_{12})n_{10}]\mathrm{Im}\int_{-\pi /2}^{\pi /2}d\eta
e^{ik_{+}r\cos \nu }-[c_{-}^{2}-(g_{11}-g_{12})n_{10}]\mathrm{Im}\int_{-\pi
/2}^{\pi /2}d\eta e^{ik_{-}r\cos \nu }\right\} .
\end{split}
\label{2-11}
\end{equation}
Far from the obstacle, where phases $\mathbf{k}_{\pm }\mathbf{r}=rs_{\pm }$
are large, we are dealing with large values of
\begin{equation}
s_{\pm }(\eta )=k_{\pm }(\eta )\cos (\chi +\eta ), \label{2-12}
\end{equation}%
and the integrals in Eq.~(\ref{2-11}) can be estimated by means of the
standard stationary-phase method. Since calculations of both integrals are
identical, we consider, for definiteness, the integral over $k_{+}$.
Condition $\partial s_{+}/\partial \eta =0$ is an equation for the
stationary-phase point, which can be easily transformed to
\begin{equation}
\tan \nu _{+}=\left( 2U^{2}/k_{+}^{2}\right) \sin 2\eta _{+}, \label{2-13}
\end{equation}%
or, with regard to the definition of $\nu $, we obtain an expression for $%
\chi $,
\begin{equation}
\tan \chi _{+}=\frac{(1+k_{+}^{2}/(2c_{+}^{2}))\tan \eta _{+}}{%
U^{2}/c_{+}^{2}-(1+k_{+}^{2}/(2c_{+}^{2}))}. \label{2-14}
\end{equation}%
Here $\eta _{+}$ takes values in the interval
\begin{equation}
-\arccos \frac{1}{M_{+}}\leq \eta _{+}\leq \arccos \frac{1}{M_{+}},\quad
M_{+}\equiv \frac{U}{c_{+}}, \label{2-15}
\end{equation}%
while the corresponding vector $\left\{ x,y\right\} $, given by the
parametric expressions,
\begin{equation}
x(\eta _{+})=\frac{4\phi c_{+}^{2}}{k_{+}^{3}}(M_{+}^{2}\cos 2\eta
_{+}-1)\cos \eta _{+},\quad y(\eta _{+})=\frac{4\phi c_{+}^{2}}{k_{+}^{3}}%
(2M_{+}^{2}\cos ^{2}\eta _{+}-1)\sin \eta _{+}, \label{2-16}
\end{equation}%
moves along a curve with constant phase $\phi $ (e.g., a crest line).
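The parametric representation (\ref{2-16}) can be verified numerically: reconstructing the phase $k_{+}r\cos \nu$ along the curve should return the constant value $\phi$ for every $\eta_{+}$. A short sketch (the values of $U$, $c_{+}$ and $\phi$ are illustrative, not from the text):

```python
import numpy as np

# Check that the parametric curve (2-16) is a line of constant phase:
# reconstruct k_+ r cos(nu) along the curve and compare with phi.
U, c_plus, phase = 2.0, 1.0, 2.0 * np.pi       # illustrative values
M = U / c_plus
eta = np.linspace(-np.arccos(1 / M) + 1e-3, np.arccos(1 / M) - 1e-3, 400)
k = 2.0 * np.sqrt(U**2 * np.cos(eta) ** 2 - c_plus**2)             # Eq. (2-9)
x = 4 * phase * c_plus**2 / k**3 * (M**2 * np.cos(2 * eta) - 1) * np.cos(eta)
y = 4 * phase * c_plus**2 / k**3 * (2 * M**2 * np.cos(eta) ** 2 - 1) * np.sin(eta)
r = np.hypot(x, y)
chi = np.arctan2(y, -x)          # from r = (-r cos chi, r sin chi), Eq. (2-7)
nu = np.pi - chi - eta
reconstructed = k * r * np.cos(nu)
# the deviation from the nominal phase is at machine-precision level
```

Plotting $(x,y)$ for a sequence of phases $\phi = 2\pi m$ produces the family of crest lines overlaid on Fig.~2.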
As usual in the method of stationary phase, we reduce the integrals in
(\ref{2-11}) to Gaussian ones and, as a final result, obtain
\begin{equation}\label{2-17}
\begin{split}
n_{1}^{\prime }=&-\frac{2V_{0}n_{10}}{\pi (c_{+}^{2}-c_{-}^{2})}\Bigg\{\left[
c_{+}^{2}-(g_{22}-g_{12})n_{20}\right] \sqrt{\frac{2\pi }{k_{+}r}}\\
&\times \frac{\left[ 1+(4U^{2}/k_{+}^{2})\sin ^{2}\left( 2\eta _{+}\right) %
\right] ^{1/4}}{\left[ 1+\left( 4U^{2}/k_{+}^{2}\right) \cos 2\eta
_{+}+\left( 12U^{4}/k_{+}^{4}\right) \sin ^{2}\left( 2\eta _{+}\right) %
\right] ^{1/2}}\cos (k_{+}r\cos \nu _{+}-\pi /4)\\
&-\left[ c_{-}^{2}-(g_{22}-g_{12})n_{20}\right] \sqrt{\frac{2\pi }{k_{-}r}}%
\frac{\left( 1+\left( 4U^{2}/k_{-}^{2}\right) \sin ^{2}2\eta _{-}\right)
^{1/4}}{\left[ 1+\left( 4U^{2}/k_{-}^{2}\right) \cos \left( 2\eta
_{-}\right) +\left( 12U^{4}/k_{-}^{4}\right) \sin ^{2}\left( 2\eta
_{-}\right) \right] ^{1/2}}\\
&\times \cos (k_{-}r\cos \nu _{-}-\pi /4)\Bigg\}
\end{split}
\end{equation}%
where $\nu _{\pm }\equiv \pi -\chi _{\pm }-\eta _{\pm }$, and similar
expressions can be derived for the density oscillations of the second
component of the binary condensate.
An example of wave patterns generated by numerical simulations of
Eqs.~(\ref{1-2}) is displayed in Fig.~2, for parameters $g_{11}=1.5,%
\,g_{22}=1.25,\,g_{12}=1.0,$ and $n_{10}=1.0,\,n_{20}=2.0$ and velocity $%
U=3.5$.
\begin{figure}[tb]
\subfigure{
\begin{minipage}[b]{0.45\textwidth}
\centering
\includegraphics[scale=0.40]{fig2a.eps}
\end{minipage}}
\subfigure{
\begin{minipage}[b]{0.45\textwidth}
\centering
\includegraphics[scale=0.40]{fig2b.eps}
\end{minipage}}
\caption{Spatial contour plots of densities of the two components (left and
right panels, respectively) of the BEC for the flow velocity $U=3.5$, at $%
t=35$. The form of the obstacle's potential used in this case is $V(x,y,t)=2%
\left[ \text{sech}\left( \protect\sqrt{(x-Ut)^{2}+y^{2}}/2\right) \right]
^{2}$. Solid and dashed thin lines correspond to wave crests of two modes
of ship waves.}
\label{fig2}
\end{figure}
Three different types of waves can clearly be distinguished in this picture.
First, one observes ship waves outside the Mach cones, which were analyzed
above. Second, there are oblique dark solitons located inside the outer Mach
cone, which will be discussed in the next Section. Third, one can note that the
oblique solitons are modulated by concentric circular waves, which were
actually generated by the initial introduction (switching on) of the
obstacle's potential. These circular waves are not related to the steady
regime and will not be considered here.
We have compared analytical results (\ref{2-16}) for the crest lines with
the numerical findings. The analytically predicted curves are shown by thin
solid and dashed lines in Fig.~2. To simplify the pattern, we have chosen
the parameters of the binary BEC so that the chemical potentials (\ref{1-7})
of both components are equal to each other. As a result, the smaller sound velocity
satisfies the relation $c_-^2=(g_{11}-g_{12})n_{10}=(g_{22}-g_{12})n_{20}$
and hence the last term in Eq.~(\ref{2-17}), as well as the analogous term in
the expression for $n_2'$, vanishes. Therefore, the ship waves in the region
between the two Mach cones are not visible, in agreement with the results of
the numerical simulations.
Another linear mode describes the ship waves
outside of the outer Mach cone, and the analytically calculated form of the
crest line demonstrates excellent agreement with
the numerically obtained one.
\section{Oblique dark solitons}
As mentioned above, in Fig.~2 one can observe oblique dark
solitons located inside the outer Mach cone. It is worth noting
that they decay into vortices at their end points, which is a result of
the ``snaking'' instability of dark-soliton stripes in 2D
\cite{kp-1970,zakharov-1975,kt-1988} (see also
\cite{mplb}). However, for large velocities $U$ this instability is
convective only \cite{kp08}, which means that the dark solitons
are effectively stable around the obstacle and, hence, the length
of the dark solitons increases with time. The solitons originate
from a depression in the density distribution formed behind the
obstacle by the flow, therefore their depth also varies near the
obstacle. However, upon sufficiently long evolution time, there
exists a region where the oblique dark solitons can be considered
as quasi-stationary structures. In this region, the solitons are
described by the stationary solution of the
GPEs. This allows us to take the stationary GPEs in the hydrodynamic form (%
\ref{1-4}),
\begin{equation}
\begin{split}
\nabla \cdot (n_{1}\mathbf{u}_{1})=0,\quad \nabla \cdot (n_{2}\mathbf{u}_{2})=0,\\
(\mathbf{u}_{1}\cdot \nabla )\mathbf{u}_{1}+g_{11}\nabla n_{1}+g_{12}\nabla
n_{2}+\nabla \left( \frac{(\nabla n_{1})^{2}}{8n_{1}^{2}}-\frac{\Delta n_{1}%
}{4n_{1}}\right) & =0, \\
(\mathbf{u}_{2}\cdot \nabla )\mathbf{u}_{2}+g_{12}\nabla n_{1}+g_{22}\nabla
n_{2}+\nabla \left( \frac{(\nabla n_{2})^{2}}{8n_{2}^{2}}-\frac{\Delta n_{2}%
}{4n_{2}}\right) & =0.
\end{split}
\label{3-1}
\end{equation}%
Here, it is also taken into account that the obstacle's potential is
negligible far from it. Equations (\ref{3-1}) should be solved with the
boundary conditions
\begin{equation}
n_{1}\rightarrow n_{10},\quad n_{2}\rightarrow n_{20},~\mathbf{u}%
_{1}\rightarrow (-U,0),~\mathbf{u}_{2}\rightarrow (-U,0)\quad \text{at}\quad
|x|\rightarrow \infty , \label{3-2}
\end{equation}%
where $-U$ is the common velocity of both components relative to the
obstacle. Under the assumption that the solution depends only on
\begin{equation}
\xi =\frac{x-ay}{\sqrt{1+a^{2}}}, \label{xi}
\end{equation}%
where $a$ determines the slope of the oblique dark soliton, this system can
be readily reduced to equations
\begin{equation}
\begin{split}
\tfrac{1}{8}(n_{1,\xi }^{2}-2n_{1}n_{1,\xi \xi
})+g_{11}n_{1}^{3}+g_{12}n_{1}^{2}n_{2}+\tfrac{1}{2}qn_{10}^{2}-(\tfrac{1}{2}%
q+\mu _{1})n_{1}^{2}& =0, \\
\tfrac{1}{8}(n_{2,\xi }^{2}-2n_{2}n_{2,\xi \xi
})+g_{12}n_{1}n_{2}^{2}+g_{22}n_{2}^{3}+\tfrac{1}{2}qn_{20}^{2}-(\tfrac{1}{2}%
q+\mu _{2})n_{2}^{2}& =0,
\end{split}
\label{3-3}
\end{equation}%
where $\mu _{1}$ and $\mu _{2}$ %
are the chemical potentials defined above in Eq.~(\ref{1-7}), and
\begin{equation}
q\equiv \frac{U^{2}}{1+a^{2}}. \label{3-5}
\end{equation}%
The flow velocities are related to the densities as follows:
\begin{equation}
{\mathbf{u}}_{i}=\left( \frac{(n_{i0}+a^{2}n_{i})U}{(1+a^{2})n_{i}},-\frac{%
aU(n_{i0}-n_{i})}{(1+a^{2})n_{i}}\right) ,\quad i=1,2. \label{3-5a}
\end{equation}
In general, system (\ref{3-3}) has to be solved numerically. However, if the
chemical potentials of the two components are equal,
\begin{equation}
\mu _{1}=\mu _{2}=\mu , \label{3-6}
\end{equation}%
the system admits a simple analytical solution in a closed form. In this
case, we look for the solution as $n_{1}=n_{10}f(\xi ),$ $n_{2}=n_{20}f(\xi
) $, reducing both equations (\ref{3-3}) to a single one,
\begin{equation}
\tfrac{1}{8}(f_{\xi }^{2}-2ff_{\xi \xi })+\mu f^{3}+\tfrac{1}{2}q-(\tfrac{1}{%
2}q+\mu )f^{2}=0. \label{3-7}
\end{equation}%
Dark-soliton solutions to Eq.~(\ref{3-7}) are known \cite{egk06}:
\begin{equation}
n_{1}=n_{1s}=n_{10}f(\xi ),\quad n_{2}=n_{2s}=n_{20}f(\xi ),\quad f(\xi )=1-%
\frac{1-q/c_{+}^{2}}{\cosh ^{2}\left[ \sqrt{c_{+}^{2}-q}\,(x-ay)/\sqrt{%
1+a^{2}}\right] }, \label{3-8}
\end{equation}%
where we have also taken into account that condition (\ref{3-6}) leads to the
following expressions for the sound velocities (for definiteness, we
suppose here that $g_{11},g_{22}>g_{12}$, i.e., the inter-species repulsion
is weaker than the repulsive self-interactions of the two components):
\begin{equation}
c_{-}^{2}=(g_{11}-g_{12})n_{10},\quad c_{+}^{2}=\mu . \label{3-9}
\end{equation}
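As a quick numerical consistency check (an illustrative sketch with arbitrarily chosen values of $\mu$ and $q$, not tied to the simulations reported below), one can verify that the profile $f$ from Eq.~(\ref{3-8}), with $c_{+}^{2}=\mu$ from Eq.~(\ref{3-9}), satisfies the reduced equation (\ref{3-7}):

```python
import numpy as np

# Check that f(s) = 1 - (1 - q/mu) / cosh^2( sqrt(mu - q) s ) solves
#   (1/8)(f'^2 - 2 f f'') + mu f^3 + q/2 - (q/2 + mu) f^2 = 0
# (Eq. (3-7) with c_+^2 = mu). Values of mu and q below are illustrative;
# the soliton requires q < mu.
mu, q = 3.5, 1.2
kappa = np.sqrt(mu - q)
A = 1.0 - q / mu

s = np.linspace(-5, 5, 1001)
sech = 1.0 / np.cosh(kappa * s)
tanh = np.tanh(kappa * s)

f   = 1.0 - A * sech**2
fs  = 2 * A * kappa * sech**2 * tanh                  # f'
fss = 2 * A * kappa**2 * sech**2 * (3 * sech**2 - 2)  # f''

residual = (fs**2 - 2 * f * fss) / 8 + mu * f**3 + q / 2 - (q / 2 + mu) * f**2
print(np.max(np.abs(residual)))   # ~ machine precision
```

The minimum density at the soliton center is $f(0)=q/\mu$, vanishing in the limit $q\to 0$ (a black soliton) and approaching the background for $q\to\mu$.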
Obviously, solution (\ref{3-8}) exists if the condition $q<c_{+}^{2}$ is
satisfied. If we introduce the angle $\theta $ between the direction of the flow
and the orientation of the dark-soliton stripe, so that $a=\cot \theta $,
the latter condition can be transformed into
\begin{equation}
\sin ^{2}\theta <\frac{c_{+}^{2}}{U^{2}}=\frac{1}{M_{+}^{2}},\quad
M_{+}\equiv \frac{U}{c_{+}}. \label{3-12}
\end{equation}%
Thus, the soliton must be located inside the outer Mach cone, which is
defined by the equation
\begin{equation}
\sin \theta _{+}=\frac{1}{M_{+}}. \label{3-13}
\end{equation}
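The equivalence of the condition $q<c_{+}^{2}$ and inequality (\ref{3-12}) rests on the identity $q=U^{2}\sin ^{2}\theta $ for $a=\cot \theta $, which can be confirmed by a one-line symbolic check (an illustrative sketch):

```python
import sympy as sp

# With a = cot(theta), q = U^2/(1 + a^2) from Eq. (3-5) reduces to
# q = U^2 sin^2(theta), so q < c_+^2 is the Mach-cone inequality (3-12).
U, th = sp.symbols('U theta', positive=True)
a = sp.cot(th)
q = U**2 / (1 + a**2)
print(sp.simplify(q - U**2 * sp.sin(th)**2))   # 0
```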
Although we have arrived at this conclusion under assumption (\ref{3-6}), we
conjecture that it also holds in the general case of unequal chemical
potentials; this is supported by the fact that our numerical simulations
always produced oblique solitons confined inside the outer Mach cone. The
respective numerically generated profiles of the densities are shown in Fig.~%
\ref{fig3} as a function of $y$ at a fixed value of $x$; the corresponding
phase profiles are also shown in the figure. It is observed that the oblique
solitons are indeed located inside the outer Mach cone [as defined by Eq.~(%
\ref{3-13})] but outside of the inner cone, which is defined by $\sin
\theta _{-}=1/M_{-}$. Profiles of the solitons' densities are close to the
analytically predicted ones and the jumps of phases are also in agreement
with the expected dark-soliton behavior.
\begin{figure}[tb]
\subfigure{
\begin{minipage}[b]{0.45\textwidth}
\centering
\includegraphics[scale=0.40]{fig3a.eps}
\end{minipage}}
\subfigure{
\begin{minipage}[b]{0.45\textwidth}
\centering
\includegraphics[scale=0.40]{fig3b.eps}
\end{minipage}}
\subfigure{
\begin{minipage}[b]{0.45\textwidth}
\centering
\includegraphics[scale=0.40]{fig3c.eps}
\end{minipage}}
\subfigure{
\begin{minipage}[b]{0.45\textwidth}
\centering
\includegraphics[scale=0.40]{fig3d.eps}
\end{minipage}}
\caption{(Color online.) Plots of densities (top) and phases (bottom) of the
two BEC components for flow velocity $U=4.5$. The positions of the Mach
cones are shown by dashed lines, making it evident that the oblique dark
solitons are located inside the outer Mach cone, in agreement with Eq. (%
\protect\ref{3-12}). The chemical potentials are $\protect\mu _{1}=\protect%
\mu _{2}=3.5$, cf. Eq.~(\protect\ref{3-6}), and the configurations are shown
at $t=30$.}
\label{fig3}
\end{figure}
According to condition (\ref{3-12}), solution (\ref{3-8}) exists for any
supersonic flow with $U>c_{+}$, the same being true for the existence of
numerical solutions to system (\ref{3-3}) in the general case, $\mu _{1}\neq
\mu _{2}$. However, that does not mean that the oblique dark solitons can be
generated by any such flow. While the numerical results presented in Fig.~%
\ref{fig2} show that oblique solitons indeed exist for velocity $U=3.5$, the
opposite situation, when oblique dark solitons do not emerge, is presented
in Fig.~\ref{fig4} by means of density patterns which correspond to a lower
supersonic flow velocity, $U=1.2$.
\begin{figure}[tb]
\subfigure{
\begin{minipage}[b]{0.45\textwidth}
\centering
\includegraphics[scale=0.40]{fig4a.eps}
\end{minipage}}
\subfigure{
\begin{minipage}[b]{0.45\textwidth}
\centering
\includegraphics[scale=0.40]{fig4b.eps}
\end{minipage}}
\caption{Plots of densities of two BEC components for flow velocity $U=1.2$
at $t=55$.}
\label{fig4}
\end{figure}
It is seen that vortex streets are generated in the latter case, rather than
dark solitons. Such behavior is related to the well-known instability of
dark solitons with respect to transverse perturbations \cite%
{kp-1970,zakharov-1975,kt-1988}. The instability splits dark solitons into
vortex-antivortex pairs, hence dark solitons cannot develop from the density
depression behind the obstacle moving at a relatively low velocity. However,
the numerical simulations presented in Fig.~\ref{fig2} indicate that the
oblique solitons become effectively stable if the flow velocity is
sufficiently high, as first was noticed in Ref.~\cite{egk06} for the case of
a one-component BEC. The stabilization was explained in Ref.~\cite{kp08} as
a transition from the absolute instability of dark solitons to their
convective instability, at some critical value of the flow velocity, $U_{%
\mathrm{cr}}\geq c_{+}$, so that the unstable disturbances are carried away
by the flow from the region around the obstacle where, as a result, the dark
solitons appear as effectively stable objects. Here, we aim to consider such a
stabilization transition for the case of the two-component BEC, which
features two unstable modes of perturbations around the dark soliton.
To find the spectrum of small-amplitude linear waves propagating along the
soliton, we now consider this solution in the reference frame in which the
condensate has zero velocity far from the obstacle. To this end, we rotate
the coordinate system by angle $\varphi =\arctan a$ [recall $a$ determines
the orientation of the dark soliton, according to Eq.~(\ref{xi})], and
perform the Galilean transformation to the frame moving relative to the
obstacle at velocity $(U\cos \varphi ,U\sin \varphi )$:
\begin{equation}
\begin{split}
\widetilde{x}& =x\cos \varphi -y\sin \varphi -U\cos \varphi \cdot t, \\
\widetilde{y}& =x\sin \varphi +y\cos \varphi -U\sin \varphi \cdot t.
\end{split}
\label{4-1}
\end{equation}%
After the transformation, velocity fields (\ref{3-5a}) become
\begin{equation}
\widetilde{\mathbf{u}}_{1}=\left( v(n_{10}/n_{1s}-1),0\right) ,\quad \widetilde{%
\mathbf{u}}_{2}=\left( v(n_{20}/n_{2s}-1),0\right) , \label{4-2}
\end{equation}%
and the densities take the form of
\begin{equation}
\begin{split}
\widetilde{n}_{1s}& =n_{10}f(\zeta ),\quad \widetilde{n}_{2s}=n_{20}f(\zeta ), \\
f(\zeta )& =1-\frac{1-v^{2}/c_+^2}{\cosh ^{2}\left[ \sqrt{c_{+}^{2}-v^{2}}\zeta %
\right] },\quad\zeta =\widetilde{x}-vt,
\end{split}
\label{4-3}
\end{equation}%
where the soliton's velocity in the new reference frame is
\begin{equation}
v=\frac{U}{\sqrt{1+a^{2}}}. \label{4-4}
\end{equation}%
Below, we omit tildes attached to the new variables.
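That transformation (\ref{4-1}) indeed reduces the velocity fields (\ref{3-5a}) to the one-dimensional form (\ref{4-2}) can be verified symbolically (an illustrative sketch; the symbols $n_{0}$ and $n$ stand for the background and local density of either component):

```python
import sympy as sp

# Rotate the velocity field (3-5a) by phi = arctan(a) and subtract the frame
# velocity (U cos(phi), U sin(phi)); the result must be (v(n0/n - 1), 0)
# with v = U / sqrt(1 + a^2), as stated in Eq. (4-2).
a, U, n0, n = sp.symbols('a U n0 n', positive=True)
s = sp.sqrt(1 + a**2)
cosp, sinp = 1 / s, a / s
v = U / s

ux = (n0 + a**2 * n) * U / ((1 + a**2) * n)
uy = -a * U * (n0 - n) / ((1 + a**2) * n)

ux_t = ux * cosp - uy * sinp - U * cosp
uy_t = ux * sinp + uy * cosp - U * sinp
print(sp.simplify(ux_t - v * (n0 / n - 1)), sp.simplify(uy_t))   # 0 0
```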
We take small transverse perturbations of the dark-soliton solution as
\begin{equation}
\begin{split}
\psi _{1}& =\psi _{1s}(\zeta )+(\psi _{1}^{\prime }+i\psi _{1}^{\prime
\prime })\exp \left( i\phi _{1s}(\zeta )-{i\mu _{1}t}\right) , \\
\psi _{2}& =\psi _{2s}(\zeta )+(\psi _{2}^{\prime }+i\psi _{2}^{\prime
\prime })\exp \left( i\phi _{2s}(\zeta )-{i\mu _{2}t}\right) ,
\end{split}
\label{4-5}
\end{equation}%
where the unperturbed solution depends only on $\zeta =x-vt$,
\begin{equation}
\psi _{js}=\sqrt{{n}_{js}}\exp \left( i\phi _{js}(\zeta )-{i\mu _{j}t}%
\right) , \label{4-5a}
\end{equation}%
and phases $\phi _{js}$ are related to the densities by equations
\begin{equation}
\frac{\partial \phi _{js}}{\partial \zeta }=v\left( \frac{n_{j0}}{n_{js}}%
-1\right) . \label{4-5b}
\end{equation}%
Perturbations $\psi ^{\prime }$ and $i\psi ^{\prime \prime }$ depend on $y$
and $t$ as $\exp (ipy+\Gamma t)$. Substitution of expressions (\ref{4-5})
into Eqs.~(\ref{1-2}) and the linearization with respect to $\psi ^{\prime }$
and $i\psi ^{\prime \prime }$ lead to an eigenvalue problem,
\begin{equation}
\left(
\begin{array}{cccc}
A_{1} & -L_{I1} & 0 & 0 \\
L_{R1} & A_{1} & B & 0 \\
0 & 0 & A_{2} & -L_{I2} \\
B & 0 & L_{R2} & A_{2}%
\end{array}%
\right) \left(
\begin{array}{c}
\psi _{1}^{\prime } \\
\psi _{1}^{\prime \prime } \\
\psi _{2}^{\prime } \\
\psi _{2}^{\prime \prime }%
\end{array}%
\right) =\Gamma \left(
\begin{array}{c}
\psi _{1}^{\prime } \\
\psi _{1}^{\prime \prime } \\
\psi _{2}^{\prime } \\
\psi _{2}^{\prime \prime }%
\end{array}%
\right) , \label{4-6}
\end{equation}
\begin{equation}
\begin{split}
A_{j}& \equiv \frac{vn_{j0}n_{js,\zeta }}{2n_{js}^{2}}-\frac{vn_{j0}}{n_{js}}%
\frac{\partial }{\partial \zeta }, \\
B& \equiv -2g_{12}\sqrt{n_{1s}n_{2s}}, \\
L_{Ij}& \equiv \frac{1}{2}\frac{\partial ^{2}}{\partial \zeta ^{2}}-\frac{1}{%
2}\frac{n_{j0}v^{2}}{n_{js}^{2}}+\frac{1}{2}%
(v^{2}-p^{2})-g_{jj}n_{js}-g_{lj}n_{ls}+\mu _{j}, \\
L_{Rj}& \equiv \frac{1}{2}\frac{\partial ^{2}}{\partial \zeta ^{2}}-\frac{1}{%
2}\frac{n_{j0}v^{2}}{n_{js}^{2}}+\frac{1}{2}%
(v^{2}-p^{2})-3g_{jj}n_{js}-g_{lj}n_{ls}+\mu _{j}, \\
j& =1,2,~l=1,2,~l\neq j.
\end{split}
\label{4-7}
\end{equation}%
System (\ref{4-6}) determines the growth rates $\Gamma _{1,2}(p)$ of small
perturbations traveling along the dark-soliton's crest. The result is the
presence of two unstable branches, examples of which are shown in Fig.~\ref%
{fig5}. As one can see, both branches indeed feature regions of wave
vector $p$ with $\mathrm{Re}\,\Gamma (p)>0$. The transition to convective
instability should be considered separately for each branch.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=15cm,clip]{fig5.eps}
\end{center}
\caption{(Color online.) Two branches of dispersion curves $\Gamma (p)$ for unstable
disturbances around the dark-soliton solution. Real parts of $\Gamma $ are
shown by crosses, and imaginary parts by dots. Parameters are $g_{11}=1$,
$g_{22}=1.6$, $g_{12}=0.1$, $n_{10}=1$, $n_{20}=0.6$.}
\label{fig5}
\end{figure}
Returning to the reference frame attached to the moving obstacle, we get the
dispersion relations
\begin{equation}
\omega _{1,2}(p)=U_{s}p+i\Gamma (p,v), \label{4-8}
\end{equation}%
where $U_{s}=U\sin \varphi \equiv aU/\sqrt{1+a^{2}}$ is the component of the
flow velocity along the dark soliton. The perturbations may be represented
as Fourier integrals over linear modes obeying dispersion relations (\ref%
{4-8}),
\begin{equation}
\delta n_{1,2}\propto \int_{-\infty }^{\infty }\delta \tilde{n}%
_{1,2}(p)e^{i[py-\omega _{1,2}(p)t]}dp. \label{4-9}
\end{equation}%
This representation implies that the integral is convergent, i.e., the wave
packet is finite along coordinate $y$. Its time dependence at fixed value of
$y$ is determined by dispersion relations $\omega _{1,2}(p)$. Since
expressions (\ref{4-8}) contain imaginary parts, $\delta n_{1,2}$ can grow
exponentially with time, which implies instability of the dark soliton, as
is well known for the zero flow, $U_{s}=0$. However, it may happen that, for
$U_{s}$ large enough, wave packets are carried away so fast that they cannot
grow at fixed value of $y$, which is precisely the transition from absolute
to convective instability \cite{LL10}. Mathematically, the convective
instability means that one can transform integrals over $p$ in wave packets (%
\ref{4-9}) into integrals over $\omega $, because these wave packets have
finite duration. In other words, function $\omega =\omega (p)$ can be
inverted to define single-valued dependence $p=p(\omega )$. Therefore, the
distinction between absolute and convective instabilities depends on
analytical properties of dispersion relations (\ref{4-8}) \cite%
{LL10,sturrock}. Actually, the transition from absolute to convective
instability is determined by critical points $p_{\mathrm{cr}}$ where $%
d\omega /dp=0$, and function $p=p(\omega )$ changes its behavior: at $%
U_{s}<\left( U_{s}\right) _{\mathrm{cr}}$ it is represented in the complex
$p$ plane by disconnected curves, whereas for $U_{s}>\left( U_{s}\right) _{%
\mathrm{cr}}$ these curves are connected with each other. In the latter
case, one can deform the contour of the integration over $p$, with regard to
the single-valuedness of $\omega =\omega (p)$, so as to transform it into an
integral over $\omega $. In other words, the spatial Fourier decomposition
of the perturbation wave packet can be transformed to a temporal form, which
means that the instability is convective.
As is known, the asymptotic behavior of integrals (\ref{4-9}) is determined
by the branch points of the function $p=p(\omega )$, where $d\omega /dp=0$. This
yields the equation
\begin{equation}
U_{s}=-i\frac{d\Gamma }{dp}, \label{5-1}
\end{equation}%
where, as one can see in Fig.~\ref{fig5}, $\Gamma (p,v)$ has either real or purely
imaginary values for real $p$. Therefore, critical values of $U_{s}$, at
which disconnected contours transform into connected ones, correspond to the
appearance of a double root $p_{\mathrm{br}}$ of Eq.~(\ref{5-1}) on the real
axis of $p$. This means $dp_{\mathrm{br}}/dU_{s}=\infty $ at $U_{s}=\left(
U_{s}\right) _{\mathrm{cr}}$. The differentiation of Eq.~(\ref{5-1}) with
respect to $U_{s}$ then leads to equation
\begin{equation}
\left. \frac{d^{2}\Gamma }{dp^{2}}\right\vert _{p=p_{\mathrm{cr}}}=0
\label{5-2}
\end{equation}%
for the corresponding critical value, $p_{\mathrm{cr}}(v)$. The substitution
of that value into Eq.~(\ref{5-1}) yields function $U_{s}(v)$. When this
function is known, we find, with the help of relations
\begin{equation}
v=\frac{U}{\sqrt{1+a^{2}}},\quad U_{s}=U\sin \varphi =\frac{Ua}{\sqrt{1+a^{2}}}%
=av, \label{5-3}
\end{equation}%
the slope,
\begin{equation}
a_{\mathrm{cr}}(v)=\frac{U_{s}(v)}{v}, \label{5-4}
\end{equation}
and velocity,
\begin{equation}
U(v)=v\sqrt{1+a_{\mathrm{cr}}^{2}(v)}, \label{5-5}
\end{equation}%
as functions of $v$ for all values in the interval $0<v<c_{+}$. As a
result, we obtain, in a parametric form, the dependence $U(a)$ for the curve
separating regions of the absolute and convective instabilities. Two such
curves for both branches are shown in Fig.~6, where the region of the
convective instability is located above both curves.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=13cm,clip]{fig6.eps}
\end{center}
\caption{(Color online.) Curves separating regions of the absolute and convective
instabilities of dark solitons in the two-component condensate with
parameters $g_{11}=1$, $g_{22}=1.6$, $g_{12}=0.1$, $n_{10}=1$, $n_{20}=0.6$.
The respective sound velocities are $c_{+}=1.03$, $c_{-}=0.95$.
Squares and circles refer to two different unstable modes.
Below each curve the corresponding mode is absolutely unstable, and dark solitons
cannot be created by the flow with velocity $U$ less than $U_{\mathrm{cr}}\cong 1.5$
for most slopes $a$. Dark solitons become only convectively unstable
(effectively stable) above both curves, i.e., for $U>U_{\mathrm{cr}}\cong 1.5$.}
\end{figure}
It is seen that the dark solitons with large slopes $a$ undergo the
transition to the convective instability at flow velocities greater than
$U_{\mathrm{cr}}\approx 1.5$, as suggested also by the numerical findings reported
above.
\section{Conclusion}
In this work, we have considered the effect of dragging a supercritical
obstacle through a two-component Bose-Einstein condensate, which was
motivated by a number of recent experiments in such settings, predominantly
examining the dynamics of mixtures of hyperfine states of $^{87}$Rb. The
presence of two speeds of sound in the mixture results in the existence of
two Mach cones. The motion of the obstacle, in turn, produces two main
features, namely, the linear ``ship waves" and the oblique dark solitons.
For the former pattern, we have developed a description of their density
oscillations, which occur outside the Mach cones, in good agreement with
numerical findings. On the other hand, for the dark solitons we have
developed an analytical description of their profile, which was also found
to be in good agreement with numerical observations. In particular, it was
predicted that the dark solitons are confined to the area inside the outer
Mach cone, which was confirmed by the simulations.
This work may be a relevant starting point towards a more detailed
understanding of the interplay between the inter-species interactions and
the loss of superfluidity caused by the supercritical motion of defects.
Natural extensions of the present setting include the consideration
of the three-dimensional context, with \textit{vortex rings} being formed as
a result of the motion of the obstacle [which is closest to the experimental
settings reported in Refs.~\cite{engels07,engels08}]. Another possibility
would be to consider the supercritical motion of the obstacle in a
three-component spinor mixture \cite{dabr}, where it would be relevant to
examine the role of the spin-dependent and spin-independent parts of the
interatomic interactions in producing patterns such as those considered
herein. Such studies are currently in progress and will be reported in
future works.
\subsection*{Acknowledgments}
We appreciate a valuable discussion with L. P. Pitaevskii.
Yu.G.G. and A.M.K. thank RFFI (Russia) for financial support.
P.G.K. gratefully acknowledges support from NSF-CAREER,
NSF-DMS-0619492 and NSF-DMS-0806762,
as well as from the Alexander von Humboldt Foundation.
The work of D.J.F. was partially supported by the Special Research Account
of the University of Athens.
\section{Introduction}
Exploration of the holographic duality between planar 4D $\cN=4$ supersymmetric Yang-Mills theory (SYM) and string theory on $\AD$ has led to numerous remarkable results due to integrability discovered on both sides of the duality
\cite{Beisert:2010jr}. Integrability has been particularly successful in application to the problem of computing the
planar spectrum of single trace operator anomalous dimensions/string state energies. In the asymptotically large volume limit the spectrum was found to be captured by a system of nested asymptotic Bethe ansatz (ABA) equations \cite{Beisert:2005fw}. Finite-size corrections \cite{wrapping} were later accounted for via the Thermodynamic Bethe Ansatz (TBA)/Y-system technique
\cite{Gromov:2009tv,Bombardelli:2009ns,Gromov:2009bc,Arutyunov:2009ur, Cavaglia:2010nm, Balog:2012zt}. This approach led to the formulation of an infinite set of integral equations, which are expected to describe the exact spectrum of the theory at any value of the 't Hooft coupling $\lambda$. The main problem of this
approach is that the explicit form of the equations requires case-by-case study and is not known in general except for a
few explicit examples such as Konishi \cite{Gromov:2009zb,Arutyunov:2012tx}. They, however, allowed for a detailed
numerical study of these simplest operators \cite{Gromov:2009zb,Frolov:2010wt,Gromov:2011de,Frolov:2012zv} and led to a prediction for string theory which was confirmed in \cite{Roiban:2011fe,Vallilo:2011fj,Frolov:2013lva}.
Very recently a new set of equations called the quantum spectral curve or the $\bP\mu$-system was proposed \cite{PmuPRL,PmuLong} which generalizes the original TBA equations to all sectors of the
theory and reveals a strikingly simple and concise underlying structure of the spectral problem. It allows one to describe all states of the theory on equal footing\footnote{This is in contrast to the analytic Y-system approach, which requires additional information about the
location of poles and/or zeros.
For simple states, like Konishi, these poles $u^*$ are prescribed to satisfy the
``exact Bethe ansatz equation" $Y_{1,0}^{\rm physical}(u^*)=-1$.
Already for more complicated states
in the $sl_2$ sector there are additional dynamical singularities in the Y-functions
which diverge from those appearing in the asymptotic solution when wrapping effects are taken into account.
The $\bP\mu$-system puts under control all such singularities of the $Y$-functions, including those in even more complicated states.
In particular the BES equations with all types of the Bethe roots are a consequence of the $\bP\mu$-system (see \cite{PmuLong} for details).
The only input information it requires are the integer global R-charges and Lorentz spins entering
through the asymptotics of $\bP$-functions.
}.
The proposal has the form of a nonlinear Riemann-Hilbert problem for a small set of functions.
Due to its remarkably transparent structure, the $\bP\mu$-system should be suitable to attack a variety of open problems
including such a longstanding problem of
AdS/CFT integrability as the description of the BFKL scaling regime.
Despite its novelty the $\bP\mu$-system was already used in various different situations.
One application which provided nontrivial tests of the proposal is the exact computation of
the Bremsstrahlung function \cite{PmuPRL,Gromov:2013qga}.
The new formulation also made it possible to find the 9-loop Konishi anomalous dimension at weak coupling \cite{Volinnew}.
Below in the text we give a short overview of the construction,
but we advise the reader to refer to \cite{PmuLong}, where the quantum spectral curve is described in complete detail.
In this paper we will apply the $\bP\mu$-system to the calculation of twist operator anomalous dimensions in the $sl(2)$ sector of $\cN=4$ SYM. These operators have the form
\beq
\cO=\Tr\(Z^{J-1}\;\cD^S Z\)+\dots
\eeq
where $Z$ denotes one of the scalars of the theory\footnote{Written in terms of two real scalars as $Z=\Phi_1+i\Phi_2$.}, $\cD$ is a lightcone covariant derivative and the dots stand for permutations. The number of derivatives $S$ is called the spin of the operator, while $J$ is called the twist. We will consider a two-cut configuration with a symmetric distribution of Bethe roots, thus for physical states $S$ is even. We will study the small spin limit, in which the scaling dimension of these operators can be written as
\beq
\label{eq:anomalous_dimension_definition}
\Delta = J+S+\gamma(g),\ \ \ \ g=\rl/(4\pi)
\eeq
with the anomalous dimension $\gamma(g)$ given as an expansion
\beq
\label{eq:slope_definition}
\gamma(g)=\gamma^{(1)}(g)S+\gamma^{(2)}(g)S^2+\mathcal{O}(S^3).
\eeq
The first term, $\gamma^{(1)}(g)$, is called the slope function. Remarkably, it can be found exactly at any value of the coupling \cite{Basso:2011rs}
\beq
\label{slopeIn}
\gamma^{(1)}(g)=\frac{4\pi gI_{J+1}(4\pi g)}{JI_J(4\pi g)}\;.
\eeq
This expression was later derived from the ABA equations in two different ways \cite{Basso:2012ex,Gromov:2012eg}
and further studied and extended in \cite{Beccaria:2012kp,Beccaria:2012xm,Beccaria:2012mx,Tirziu:2008fk,Kruczenski:2012aw}.
This quantity is protected from finite-size wrapping corrections and thus the ABA prediction is exact. It is also not sensitive to the dressing phase of the ABA, which contributes only starting from order $S^2$.
Our key observation is that in the small $S$ regime the $\bP\mu$-system can be solved iteratively order by order in the spin. In this paper we first solve it at leading order and reproduce the slope function \eq{slopeIn}. Then we compute the coefficient of the $S^2$ term in the expansion, i.e. the function $\gamma^{(2)}(g)$ which we call the
\textit{curvature function}. For twist $J=2,3,4$ we obtain closed exact expressions for it in the form of a double integral. Unlike the slope function, $\gamma^{(2)}(g)$ is affected by the dressing phase in the ABA and by wrapping corrections, all of which
are incorporated in the exact $\bP\mu$-system.
Furthermore, the strong coupling expansion of our result allows us to find the value of a new coefficient in the anomalous dimension of the Konishi operator (i.e. $\Tr\(\cD^2Z^2\)$) at strong coupling.
Our result for the Konishi dimension reads
\beq
\Delta_{\mathrm{Konishi}}=2\,\lambda^{1/4}+\frac{2}{\lambda^{1/4}}+\frac{-3\,\zeta_3+\frac{1}{2}}{\lambda^{3/4}}+\frac{\frac{15 \, \zeta_5}{2} + 6 \, \zeta_3+\frac{1}{2}}{\lambda^{5/4}}+\dots\ .
\eeq
We have also obtained two new terms in the strong coupling expansion of the BFKL pomeron intercept,
\beqa
j_0 = 2 + \left.S(\Delta)\right|_{\Delta=0} &=& 2 -\frac{2}{\lambda^{1/2}}-\frac{1}{\lambda }+ \frac{1}{4\,\lambda^{3/2}}+\left(6\,\zeta_3+2\right) \frac{1}{\lambda^2} \\
&+& \nn \left(18 \, \zeta_3 + \frac{361}{64} \right) \frac{1}{\lambda^{5/2}} + \left(39 \, \zeta_3 + \frac{511}{32}\right) \frac{1}{\lambda^3} + \mathcal{O}\left(\frac{1}{\lambda^{7/2}}\right),
\eeqa
where the new terms are in the second line.
In addition, we have checked our results against available
results in the literature at weak and strong coupling, and found full agreement.
The paper is organized as follows. First in section \ref{sec:pmu} we review the quantum spectral curve construction in a general setting. In section \ref{sec:exact_slope} we demonstrate its applicability by rederiving the exact slope function of $\mathcal{N}=4$ SYM found
in \cite{Basso:2011rs}. In section \ref{sec:exact_slope_to_slope} we push the calculation further and find the exact expression for the next coefficient in the small spin expansion,
i.e.
the curvature function. In sections \ref{sec:weak} and \ref{sec:SlopeSlopeStrongCoupling} we discuss the weak and strong coupling expansions of our result.
We then use our results to calculate the previously unknown three loop strong coupling coefficient of the Konishi anomalous dimension in subsection \ref{sec:Konishidimension} and two new coefficients for the BFKL intercept at strong coupling in subsection \ref{sec:bfkl}. We finish with conclusions and appendices, which contain detailed calculations left out of the main text for brevity.
\section{$\bP\mu$-system -- an overview}
\label{sec:pmu}
In this section we review the formulation of the $\bP\mu$-system, and also discuss its symmetries which will be useful later.
Below, we will restrict the discussion to states in the $sl(2)$ sector as presented in \cite{PmuPRL}.
Remarkably, the general case is not much more complicated and will appear soon in \cite{PmuLong}.
\subsection{Definitions and notation}
The $\bP\mu$-system is a nonlinear system of functional equations for a four-vector $\bP_a(u)$ and a $4\times4$ antisymmetric matrix $\mu_{ab}(u)$ depending on the spectral parameter $u$.
For full details about the origin of the construction we refer the reader to \cite{PmuLong}. As functions of $u$, both $\bP_a$ and $\mu_{ab}$ have prescribed analyticity properties which play a key role.
First, $\bP_a$ must have only a single branch cut in $u$ going between $-2g$ and $2g$, being analytic in the rest of the complex plane. We call this cut the \textit{short} cut, while the cut on the real
line connecting the same two points through infinity is called the \textit{long} cut. The functions $\mu_{ab}$ have an infinite set of short branch cuts going between $-2g+in$ and $2g+in$ for all $n\in\mZ$ (see Fig. \ref{fig:cuts}). Most importantly, the analytic continuation of $\bP_a$ and $\mu_{ab}$ through these cuts is again expressed in terms of these functions, according to the following equations:
\beq
\tilde \bP_a=-\mu_{ab}\chi^{bc}\bP_c,\; \ \ \ \text{with}\;\ \ \chi^{ab}=\left(
\begin{array}{cccc}
0 & 0 & 0 & -1 \\
0 & 0 & 1 & 0 \\
0 & -1 & 0 & 0 \\
1 & 0 & 0 & 0 \\
\end{array}
\right),
\label{eq:Pmu}
\eeq
and
\beq
\tilde \mu_{ab}-\mu_{ab}=\bP_a \tilde\bP_b- \bP_b \tilde\bP_a\;.
\label{eq:mudisc}
\eeq
Here we denote by $\tilde\bP_a$ and $\tilde\mu_{ab}$ the analytic continuation of $\bP_a$ and $\mu_{ab}$ through the cut on the real axis. In addition, we have a pseudo-periodicity condition
\beq
\label{muper}
\tilde\mu_{ab}(u)=\mu_{ab}(u+i)
\eeq
which, actually, means that $\mu_{ab}(u)$ would be an $i$-periodic function if defined with long cuts instead of the short cuts.
\FIGURE[ht]
{
\label{fig:cuts}
\begin{tabular}{cc}
\includegraphics[scale=0.3]{cuts}\\
\end{tabular}
\caption{\textbf{Cuts in the $u$ plane.} We show the location of branch cuts in $u$ for the functions $\bP_a(u)$ (left) and $\mu_{ab}(u)$ (right). The infinitely many cuts of $\tilde\bP_a$ are shown on the left picture by dotted lines.}
}
The functions $\mu_{ab}$ are also constrained by the relations
\beqa
\label{constraint}
\mu_{12}\mu_{34}-\mu_{13}\mu_{24}+\mu_{14}^2&=&1\;,\\
\label{Pmulast}
\mu_{14}=\mu_{23}\;,
\eeqa
the first of which states that the Pfaffian
of the matrix $\mu_{ab}$ is equal to $1$. Let us also write the equations \eq{eq:Pmu} explicitly:
\beqa
\label{eq:pmuexpanded1}
&&\tilde \bP_1= -\bP_3 \mu_{12}+\bP_2 \mu_{13}-\bP_1 \mu_{14} \\
&&\tilde \bP_2= -\bP_4 \mu_{12}\hspace{16mm}+\bP_2 \mu_{14}-\bP_1 \mu_{24} \\
&&\tilde \bP_3= \hspace{16mm}-\bP_4 \mu_{13}+\bP_3 \mu_{14}\hspace{16mm}-\bP_1 \mu_{34} \\
&&\tilde \bP_4= \hspace{16mm}\hspace{15.5mm}-\bP_4 \mu_{14}+\bP_3 \mu_{24}-\bP_2 \mu_{34}\;.
\label{eq:pmuexpanded}
\eeqa
The above equations ensure that the branch points of $\bP_a$ and $\mu_{ab}$ are of the square root type, i.e.
$\tilde{\tilde{\bP}}_a=\bP_a$ and $\tilde{\tilde{\mu}}_{ab}=\mu_{ab}$.
Finally, we require that $\bP_a$ and $\mu_{ab}$ do not have any singularities except these branch points\footnote{For odd values of $J$ the functions $\bP_a$ may have an additional branch point at infinity. However, it should cancel in any product of two $\bP_a$'s, and therefore it will not appear in
any physically relevant quantity (see \cite{PmuPRL}, \cite{PmuLong}). We will discuss some explicit examples in the text.}.
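The component equations above, as well as the Pfaffian form of constraint (\ref{constraint}), can be checked symbolically (an illustrative sketch):

```python
import sympy as sp

# Verify that tilde(P)_a = -mu_{ab} chi^{bc} P_c, with mu antisymmetric and
# mu_{23} = mu_{14}, reproduces the four component equations, and that
# mu12 mu34 - mu13 mu24 + mu14^2 is the Pfaffian of mu (its square is det mu).
P = sp.symbols('P1:5')
m12, m13, m14, m24, m34 = sp.symbols('mu12 mu13 mu14 mu24 mu34')
mu = sp.Matrix([[0, m12, m13, m14],
                [-m12, 0, m14, m24],     # mu23 = mu14
                [-m13, -m14, 0, m34],
                [-m14, -m24, -m34, 0]])
chi = sp.Matrix([[0, 0, 0, -1],
                 [0, 0, 1, 0],
                 [0, -1, 0, 0],
                 [1, 0, 0, 0]])
Pt = -mu * chi * sp.Matrix(P)

expected = [-P[2] * m12 + P[1] * m13 - P[0] * m14,
            -P[3] * m12 + P[1] * m14 - P[0] * m24,
            -P[3] * m13 + P[2] * m14 - P[0] * m34,
            -P[3] * m14 + P[2] * m24 - P[1] * m34]
print([sp.simplify(Pt[i] - expected[i]) for i in range(4)])   # [0, 0, 0, 0]

pf = m12 * m34 - m13 * m24 + m14**2
print(sp.simplify(pf**2 - mu.det()))   # 0
```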
\subsection{Asymptotics and energy}
The quantum numbers and the energy of the state are encoded in the asymptotics of the functions $\bP_a$ and $\mu_{ab}$ at large real $u$. The generic case is described in \cite{PmuLong}, while here we are interested in the states in the $sl(2)$ sector, for which the relations read \cite{PmuPRL}
\beq
\bP_a\sim(A_1u^{-J/2},A_2u^{-J/2-1},A_3u^{J/2},A_4u^{J/2-1})
\label{eq:asymptotics}
\eeq
\beq
\(\mu_{12},\ \mu_{13},\ \mu_{14},\ \mu_{24},\ \mu_{34}\)\sim
\(u^{\Delta-J},\ u^{\Delta+1},\ u^{\Delta},\ u^{\Delta-1},\ u^{\Delta+J}\)
\label{eq:muasymptotics}
\eeq
where $J$ is the twist of the gauge theory operator, and $\Delta$ is its conformal dimension. With these asymptotics, the equations \eq{eq:Pmu}-\eq{Pmulast} form a closed system which fixes $\bP_a$ and $\mu_{ab}$.
Lastly, the spin $S$ of the operator is related \cite{PmuPRL} to the leading coefficients $A_a$ of the $\bP_a$ functions (see \eq{eq:asymptotics}):
\beqa
&&A_1 A_4=\frac{\((J+S-2)^2-\Delta^2\)\((J-S)^2-\Delta^2\)}{16 i J(J-1)} \label{AA1} \\
&&A_2 A_3=\frac{\((J-S+2)^2-\Delta^2\)\((J+S)^2-\Delta^2\)}{16 i J(J+1)} \label{AA2}\;.
\eeqa
\subsection{Symmetries}
\label{sec:Symmetries}
The $\bP\mu$-system enjoys a symmetry preserving all of its essential features. It has the form of a linear transformation of $\bP_a$ and $\mu_{ab}$ which leaves the system \eqref{eq:Pmu}-\eqref{Pmulast} and the asymptotics \eq{eq:asymptotics}, \eq{eq:muasymptotics} invariant. Indeed, consider a general linear transformation $\bP_a'={R_a}^b \bP_b$ with a non-degenerate constant matrix $R$. In order to preserve the system \eqref{eq:Pmu}, $\mu$ should
at the same time be transformed as
\beq
\mu'=-R \mu \chi R^{-1}\chi.
\label{gammaP}
\eeq
Such a transformation also preserves the form of \eqref{eq:mudisc} if
\beq
R^T\chi R\chi=-1\;,
\label{eq:sxsx}
\eeq
which also automatically ensures antisymmetry of $\mu_{ab}$ and (\ref{constraint}), (\ref{Pmulast}).
In general, this transformation will spoil the asymptotics of $\bP_a$.
These asymptotics are ordered as $|\bP_2|<|\bP_1|<|\bP_4|<|\bP_3|$,
which implies that the matrix $R$ must have the following structure\footnote{This matrix would of course be lower triangular if we ordered $\bP_a$ by their asymptotics.}
\beq
R=\left(
\begin{array}{cccc}
* & * & 0 & 0 \\
0 & * & 0 & 0 \\
* & * & * & * \\
* & * & 0 & * \\
\end{array}
\right).
\eeq
The general form of $R$ which satisfies \eqref{eq:sxsx} and does not spoil the asymptotics generates a 6-parameter family of transformations, which we will call $\gamma$-transformations. The simplest $\gamma$-transformation is the following rescaling:
\beq
\bP_1 \to \alpha \bP_1\;\;,\;\;
\bP_2 \to \beta \bP_2\;\;,\;\;
\bP_3 \to 1/\beta \bP_3\;\;,\;\;
\bP_4 \to 1/\alpha \bP_4\;\;,\;\;
\label{eq:alphabeta}
\eeq
\beq
\mu_{12} \to \alpha\beta\mu_{12}\;\;,\;\;
\mu_{13} \to \frac{\alpha}{\beta}\mu_{13}\;\;,\;\;
\mu_{14} \to \mu_{14}\;\;,\;\;
\mu_{24} \to \frac{\beta}{\alpha}\mu_{24}\;\;,\;\;
\mu_{34} \to \frac{1}{\alpha\beta}\mu_{34}\;\;,\;\;
\eeq
with $\alpha,\beta$ being constants.
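One can quickly confirm that the diagonal matrix $R=\mathrm{diag}(\alpha,\beta,1/\beta,1/\alpha)$ generating this rescaling satisfies condition \eqref{eq:sxsx} (an illustrative symbolic check, which also uses $\chi^{2}=-1$):

```python
import sympy as sp

# Check R^T chi R chi = -1 for the rescaling R = diag(alpha, beta, 1/beta, 1/alpha).
al, be = sp.symbols('alpha beta', nonzero=True)
chi = sp.Matrix([[0, 0, 0, -1],
                 [0, 0, 1, 0],
                 [0, -1, 0, 0],
                 [1, 0, 0, 0]])
R = sp.diag(al, be, 1 / be, 1 / al)
print(sp.simplify(R.T * chi * R * chi))   # = -Identity
```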
In all the solutions we consider in this paper all functions $\bP_a$ turn out to be functions of definite parity, so it makes sense to consider $\gamma$-transformations which preserve parity. $\bP_1$ and $\bP_2$ always have opposite parity (as one can see from \eqref{eq:asymptotics}) and thus should not mix under such transformations; the same is true about $\bP_3$ and $\bP_4$. Thus, depending on the parity of $J$, the parity-preserving $\gamma$-transformations are either
\beqa
\label{gammatransform2}
&\bP_3\rightarrow\bP_3+\gamma_3\bP_2,\ \bP_4\rightarrow\bP_4+\gamma_2\bP_1,\\
\nn&\mu_{13}\rightarrow\mu_{13}+\gamma_3\mu_{12},\ \mu_{24}\rightarrow\mu_{24}-\gamma_2\mu_{12},\ \mu_{34}\rightarrow\mu_{34}+\gamma_3\mu_{24}-\gamma_2\mu_{13}-\gamma_2\gamma_3\mu_{12}
\eeqa
for odd $J$ or
\beqa
\label{gammatransform1}
&\bP_3\rightarrow\bP_3+\gamma_1\bP_1,\ \bP_4\rightarrow\bP_4-\gamma_1\bP_2,\\
\nn&\mu_{14}\rightarrow\mu_{14}-\gamma_1\mu_{12},\ \mu_{34}\rightarrow\mu_{34}+2\gamma_1\mu_{14}-\gamma^2_1\mu_{12}\;,
\eeqa
for even $J$.
\section{Exact slope function from the $\bP\mu$-system}
\label{sec:exact_slope}
In this section we will find the solution of the $\bP\mu$-system \eqref{eq:Pmu}-\eq{Pmulast} corresponding to the $sl(2)$ sector operators at leading order in small $S$. Based on this solution we will compute the slope function $\gamma^{(1)}(g)$ for any value of the coupling.
\subsection{Solving the $\bP\mu$-system in LO}
\label{sec:evenLsol}
The solution of the $\bP\mu$-system is a little simpler for even $J$, because for odd $J$ extra branch points at infinity will appear in $\bP_a$ due to the asymptotics \eq{eq:asymptotics}. Let us first consider the even $J$ case.
The description of the $\bP\mu$-system in the previous section applies to physical operators. Our goal is to take a peculiar limit
in which the (integer) number of covariant derivatives
$S$ goes to zero. As we will see, this requires some extension of the asymptotic requirements on the $\mu$ functions.
In this section we will be guided by principles of naturalness and simplicity to deduce these modifications which
we will summarize in section~\ref{sec:ancont}. There we also give a concrete prescription for analytical continuation in $S$, which we then use to derive the curvature function.
We will start by finding $\mu_{ab}$. Recalling that $\Delta=J+{\cal O}(S)$, from \eq{AA1}, \eq{AA2} we see that $A_1A_4$ and $A_2A_3$ are of order $S$ for small $S$, so we can take the functions $\bP_a$ to be of order $\sqrt{S}$. This is a key simplification,
because now \eq{eq:mudisc} indicates that the discontinuities of $\mu_{ab}$ on the cut are small when $S$ goes to zero. Thus at leading order in $S$ all $\mu_{ab}$ are just periodic entire functions without cuts.
For power-like asymptotics of $\mu_{ab}$ like in \eq{eq:muasymptotics} the only possibility is that they are all constants.
However, we found that in this case there is only a trivial solution, i.e. $\bP_a$ can only be zero.
The reason for this is that for physical states $S$ must be an integer and thus cannot be arbitrarily small; nevertheless, it is a sensible
question how to define an analytical continuation from integer values of $S$.\footnote{Restricting the large positive $S$ behavior
one can achieve uniqueness of the continuation.}
Thus we have to relax the requirement of power-like behavior at infinity. The first possibility is
to allow for $e^{2\pi u}$ asymptotics at $u\to +\infty$.
We should, however, keep in mind the constraints \eq{constraint} and \eq{Pmulast}, which restrict our choice, and the fact that we can also use the $\gamma$-symmetry.
Let us show that by allowing $\mu_{24}$ to have exponential behavior and setting it to
$\mu_{24}=C\sinh(2\pi u)$, with the other $\mu_{ab}$ being constant, we arrive at the correct result. This choice is dictated by our assumptions concerning the analytic continuation of $\mu_{ab}$ to non-integer values of $S$, and this point is discussed in detail in section~\ref{sec:ancont}. We will also see in that section that by using the $\gamma$-transformation (described in section \ref{sec:Symmetries}) and the constraint \eq{constraint} we can set the constant $C$ to $1$ and also $\mu_{12}=1,\;\mu_{13}=0,\;\mu_{14}=-1,\;\mu_{34}=0$ (see \eq{muresan}).
Having fixed all $\mu$'s at leading order we get the following system of equations\footnote{In this section we only consider the leading order of $\bP$'s at small $S$, so the equations involving them are understood to hold at leading order in $S$. In section 4 we will study the next-to-leading order and elaborate the notation for contributions of different orders.} for $\bP_a$:
\beqa
&&\tilde \bP_1= -\bP_3 +\bP_1, \label{eq:P1L2} \\
&&\tilde \bP_2= -\bP_4 -\bP_2 -\bP_1 \sinh(2\pi u), \label{eq:P2L2}\\
&&\tilde \bP_3= \hspace{10mm}-\bP_3,\hspace{16mm} \label{eq:P3L2} \\
&&\tilde \bP_4= \hspace{10mm}+\bP_4+\bP_3 \sinh(2\pi u).\label{eq:P4L2}
\eeqa
Recalling that the functions $\bP_a$ only have a single short cut, we see from these equations that $\tilde{\bP}_a$ also have only this cut! This means that we can take all $\bP_a$ to be infinite Laurent series in the Zhukovsky variable $x(u)$, which rationalizes the Riemann surface with two sheets and one cut. It is defined as
\beq
x+\ofrac{x}=\frac{u}{g}
\eeq
where we pick the solution with a short cut, i.e.
\beq
x(u)=\frac{1}{2}\left(\frac{u}{g}+\sqrt{\frac{u}{g}-2}\;\sqrt{\frac{u}{g}+2}\;\right)\;\;.\;\;
\eeq
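Although elementary, this map underlies everything that follows, so it is worth verifying numerically that the branch above indeed satisfies $x+1/x=u/g$ and stays on the sheet $|x|\ge 1$. The following Python snippet is a minimal illustrative sketch (not part of the derivation; the function name is ours):

```python
import cmath

def zhukovsky(u, g):
    """Zhukovsky variable x(u) on the sheet with a short cut [-2g, 2g], |x| >= 1."""
    return 0.5 * (u / g + cmath.sqrt(u / g - 2) * cmath.sqrt(u / g + 2))

g = 0.8
for u in [1.7 + 0.3j, -2.4 + 0.1j, 0.5 - 1.2j]:
    x = zhukovsky(u, g)
    assert abs(x + 1 / x - u / g) < 1e-12  # defining relation x + 1/x = u/g
    assert abs(x) >= 1                     # physical sheet
```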
Solving the equation \eqref{eq:P3L2} with the asymptotics \eqref{eq:asymptotics} we find
\beq
\label{P3ck}
\bP_3=\epsilon\(x^{-J/2}-x^{+J/2}\)+\sum_{k=1}^{J/2-1}c_k\(x^{-k}-x^k\)
\eeq
where $\epsilon$ and $c_k$ are constants. Now it is useful to rewrite the equation for $\bP_1$ (i.e. \eq{eq:P1L2}) in the form $\tilde\bP_1-\bP_1=-\bP_3$, and we see that due to the asymptotics of $\bP_1$ both sides of this equation must have a gap in the powers of $x$ from $x^{-J/2+1}$ to $x^{J/2-1}$. This means that all coefficients $c_k$ in \eq{P3ck} must vanish and we find
\beq
\bP_1=\epsilon x^{-J/2} \ ,
\eeq
so we are left with one unfixed constant $\epsilon$ (we expect it to be proportional to $\sqrt{S}$).
Thus the equations \eqref{eq:P2L2} and \eqref{eq:P4L2} become
\beqa
\label{eq:P2eq}
\tilde \bP_2+\bP_2&=& -\bP_4 -\epsilon x^{-J/2}\sinh(2\pi u)\;, \\
\label{eq:P4eq}
\tilde \bP_4-\bP_4&=& \epsilon(x^{-J/2}-x^{+J/2}) \sinh(2\pi u)\;.
\eeqa
We will first solve the second equation.
It is useful to introduce operations $[f(x)]_+$ and $[f(x)]_-$, which take parts of Laurent series with positive and negative powers of $x$ respectively. Taking into account that
\beq
\sinh(2\pi u)=\sum\limits_{n=-\infty}^{\infty}I_{2n+1} x^{2n+1},
\eeq
where $I_k\equiv I_{k}(4 \pi g)$ is the modified Bessel function of the first kind, we can write $\sinh(2\pi u)$ as
\beq
\sinh(2\pi u)= \sinh_++\sinh_-,
\eeq
where explicitly
\beqa
&& \sinh_+=[\sinh(2\pi u)]_+=\sum\limits_{n=1}^\infty I_{2n-1}x^{2n-1} \\
\label{defshm}
&& \sinh_-=[\sinh(2\pi u)]_-=\sum\limits_{n=1}^\infty I_{2n-1}x^{-2n+1}\;.
\eeqa
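This decomposition is easy to test numerically: the generating function of the modified Bessel functions guarantees that $\sinh_++\sinh_-$ resums to $\sinh(2\pi u)$ for any $u$ away from the cut. The Python sketch below is purely illustrative (we implement $I_k$ by its Taylor series to keep the snippet self-contained):

```python
import cmath, math

def bessel_i(k, z, terms=40):
    """Modified Bessel function I_k(z) of the first kind via its Taylor series."""
    t = (z / 2) ** k / math.factorial(k)
    s = t
    for m in range(1, terms):
        t *= (z / 2) ** 2 / (m * (m + k))
        s += t
    return s

def zhukovsky(u, g):
    return 0.5 * (u / g + cmath.sqrt(u / g - 2) * cmath.sqrt(u / g + 2))

def sinh_plus_minus(u, g, nmax=40):
    """Positive- and negative-power parts of sinh(2*pi*u) as series in x."""
    x, z = zhukovsky(u, g), 4 * math.pi * g
    sp = sum(bessel_i(2 * n - 1, z) * x ** (2 * n - 1) for n in range(1, nmax))
    sm = sum(bessel_i(2 * n - 1, z) * x ** (1 - 2 * n) for n in range(1, nmax))
    return sp, sm

g, u = 0.25, 0.6 + 0.4j
sp, sm = sinh_plus_minus(u, g)
assert abs(sp + sm - cmath.sinh(2 * math.pi * u)) < 1e-8
```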
In this notation the general solution of Eq. \eq{eq:P4eq} with asymptotics at infinity $\bP_4\sim u^{J/2-1}$ can be written as
\beq
\bP_4=\epsilon(x^{J/2}-x^{-J/2})\sinh_-+Q_{J/2-1}(u),
\eeq
where $Q_{J/2-1}$ is a polynomial of degree $J/2-1$ in $u$.
The polynomial $Q_{J/2-1}$ can be fixed from the equation \eqref{eq:P2eq} for $\bP_2$. Indeed, from the asymptotics of $\bP_2$ we see that the lhs of \eqref{eq:P2eq} does not have powers of $x$ from $-J/2+1$ to $J/2-1$. This fixes
\beq
Q_{J/2-1}(u)=-\epsilon\sum\limits_{k=1}^{J/2}I_{2k-1}\(x^{\frac{J}{2}-2k+1}+x^{-\frac{J}{2}+2k-1}\).
\eeq
Once $Q_{J/2-1}$ is found, we set $\bP_2$ to be the part of the right hand side of \eqref{eq:P2eq} with powers of $x$ less than $-J/2$, which gives
\beq
\bP_2=-\epsilon x^{+J/2} \sum_{n=\frac{J}{2}+1}^\infty I_{2n-1}x^{1-2n}.
\eeq
Thus (for even $J$) we have uniquely fixed all $\bP_a$ with the only unknown parameter being $\epsilon$. We summarize the solution below:
\beqa
\label{eq:musolLOevenL}
&&\mu_{12}=1,\ \mu_{13}=0,\ \mu_{14}=-1,\ \mu_{24}=\sinh(2\pi u),\ \mu_{34}=0,\\
\label{eq:P1solLOevenL}
&&\bP_1=\epsilon x^{-J/2}\\
\label{eq:P2solLOevenL}
&&\bP_2=-\epsilon x^{+J/2} \sum_{n={J/2}+1}^\infty I_{2n-1}x^{1-2n}\\
\label{eq:P3solLOevenL}
&&\bP_3=\epsilon \(x^{-J/2}-x^{+J/2}\)\\
\label{eq:P4solLOevenL}
&&
\bP_4=\epsilon \(x^{J/2}-x^{-J/2}\)\sinh_- -\epsilon \sum\limits_{n=1}^{J/2}I_{2n-1}\(x^{\frac{J}{2}-2n+1}+x^{-\frac{J}{2}+2n-1}\)\;.
\label{solutionevenL}
\eeqa
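One can verify directly that \eqref{eq:musolLOevenL}-\eqref{solutionevenL} solve the leading order equations \eqref{eq:P1L2}-\eqref{eq:P4L2}, using that analytic continuation around the branch point acts as $x\to 1/x$ and hence $\sinh_-\to\sinh_+$. The Python sketch below (illustrative only, not part of the derivation) checks all four equations numerically for $J=2$, setting $\epsilon=1$ since the equations are linear in $\epsilon$:

```python
import cmath, math

def bessel_i(k, z, terms=40):
    t = (z / 2) ** k / math.factorial(k)
    s = t
    for m in range(1, terms):
        t *= (z / 2) ** 2 / (m * (m + k))
        s += t
    return s

g = 0.4
z = 4 * math.pi * g
I1 = bessel_i(1, z)

def x_of_u(u):
    return 0.5 * (u / g + cmath.sqrt(u / g - 2) * cmath.sqrt(u / g + 2))

def sinh_minus(x, nmax=40):           # [sinh(2 pi u)]_- as a function of x
    return sum(bessel_i(2 * n - 1, z) * x ** (1 - 2 * n) for n in range(1, nmax))

# leading-order solution for J = 2, eps = 1; tilde-conjugation is x -> 1/x
P1 = lambda x: 1 / x
P2 = lambda x: I1 - x * sinh_minus(x)
P3 = lambda x: 1 / x - x
P4 = lambda x: -2 * I1 - (1 / x - x) * sinh_minus(x)

u = 0.9 + 0.6j
x, sh = x_of_u(u), cmath.sinh(2 * math.pi * u)
tol = 1e-8
assert abs(P1(1 / x) - (-P3(x) + P1(x))) < tol
assert abs(P2(1 / x) - (-P4(x) - P2(x) - P1(x) * sh)) < tol
assert abs(P3(1 / x) - (-P3(x))) < tol
assert abs(P4(1 / x) - (P4(x) + P3(x) * sh)) < tol
```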
In the next section we fix the remaining parameter $\epsilon$ of the solution in terms of $S$ and find the energy, but now
let us briefly discuss the solution for odd $J$. As we mentioned above the main difference is that the functions $\bP_a$ now have a branch point at $u=\infty$, which is dictated by the asymptotics \eq{eq:asymptotics}. In addition, the parity of $\mu_{ab}$ is different according to the asymptotics of these functions \eq{eq:muasymptotics}. The solution is still very similar to the even $J$ case, and we discuss it in detail in Appendix \ref{sec:oddL}. Let us present the result here:
\beq
\mu_{12}=1,\ \mu_{13}=0,\ \mu_{14}=0,\ \mu_{24}=\cosh(2\pi u),\ \mu_{34}=1
\eeq
\beqa
\label{P1oddL}
&& \bP_1=\epsilon x^{-J/2}, \\
&& \bP_2=-\epsilon x^{J/2}\sum\limits_{k=-\infty}^{-\frac{J+1}{2}}I_{2k}x^{2k},\\
&& \bP_3=-\epsilon x^{J/2}, \\
\label{P4oddL}
&& \bP_4=\epsilon x^{-J/2}\cosh_--\epsilon x^{-J/2}\sum\limits_{k=1}^{\frac{J-1}{2}}I_{2k}x^{2k}-\epsilon I_0 x^{-J/2}.
\eeqa
Note that now $\bP_a$ include half-integer powers of $x$.
\paragraph{Fixing the global charges of the solution.}
\label{sec:LOresultevenL}
Finally, to fix our solution completely we
have to find the value of $\epsilon$ and extract the energy in terms of the spin using \eqref{AA1} and \eq{AA2}.
For this we first extract the coefficients $A_a$ of the leading terms for all $\bP_a$ (see the asymptotics \eqref{eq:asymptotics}).
From \eqref{eq:P1solLOevenL}-\eqref{eq:P4solLOevenL} or \eq{P1oddL}-\eq{P4oddL}
we get
\beqa
\label{Aexp1}
&& A_1= g^{J/2} \epsilon , \\
&& A_2=-g^{J/2+1} \epsilon I_{J+1}, \\
\label{eq:A3LOL3}
&& A_3=-g^{-J/2} \epsilon , \\
\label{Aexplast}
&& A_4=-g^{-J/2+1}\epsilon I_{J-1}.
\eeqa
Expanding \eq{AA1}, \eq{AA2} at small $S$ with $\Delta=J+S+\gamma$, where $\gamma={\cal O}(S)$, we find at linear order
\beqa
&& S+\gamma=i(A_1 A_4-A_2 A_3)\;, \\
&& S=i(A_1A_4+A_2A_3)\;.
\eeqa
Plugging in the coefficients \eq{Aexp1}-\eq{Aexplast} we find that
\beq
\label{epss}
\epsilon=\sqrt{\frac{2\pi i S}{JI_J(\sqrt\lambda)}}
\eeq
and we obtain the anomalous dimension at leading order,
\beq
\gamma=\frac{\sqrt{\lambda}I_{J+1}(\sqrt{\lambda})}{JI_J(\sqrt{\lambda})}S+{\cal O}(S^2),
\label{eq:resultLO}
\eeq
which is precisely the slope function of Basso \cite{Basso:2011rs}.
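The algebra leading from \eq{Aexp1}-\eq{Aexplast} to \eq{epss} and \eq{eq:resultLO} relies on the Bessel recurrence $I_{J-1}(z)-I_{J+1}(z)=\frac{2J}{z}I_J(z)$ and is easy to verify numerically. The Python sketch below (illustrative only) checks that with $\epsilon$ given by \eq{epss} the coefficients satisfy $S=i(A_1A_4+A_2A_3)$ and $S+\gamma=i(A_1A_4-A_2A_3)$, with $\gamma$ the slope \eq{eq:resultLO}, and that for $J=2$ the slope approaches the one-loop value $\lambda/12$ at weak coupling:

```python
import cmath, math

def bessel_i(k, z, terms=60):
    t = (z / 2) ** k / math.factorial(k)
    s = t
    for m in range(1, terms):
        t *= (z / 2) ** 2 / (m * (m + k))
        s += t
    return s

J, S, g = 2, 0.37, 0.3
z = 4 * math.pi * g                      # z = sqrt(lambda)
eps = cmath.sqrt(2j * math.pi * S / (J * bessel_i(J, z)))

A1 = eps * g ** (J / 2)
A2 = -eps * g ** (J / 2 + 1) * bessel_i(J + 1, z)
A3 = -eps * g ** (-J / 2)
A4 = -eps * g ** (1 - J / 2) * bessel_i(J - 1, z)

gamma = z * bessel_i(J + 1, z) / (J * bessel_i(J, z)) * S   # slope function
assert abs(1j * (A1 * A4 + A2 * A3) - S) < 1e-10
assert abs(1j * (A1 * A4 - A2 * A3) - (S + gamma)) < 1e-10

# weak-coupling check for J = 2: slope -> lambda / 12
g0 = 0.01
z0 = 4 * math.pi * g0
slope0 = z0 * bessel_i(3, z0) / (2 * bessel_i(2, z0))
assert abs(slope0 / (z0 ** 2 / 12) - 1) < 1e-2
```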
While the above discussion concerned the ground state, i.e. the $sl(2)$ sector operator with the lowest anomalous dimension at given twist $J$, it can be
generalized to higher mode numbers. In the asymptotic Bethe ansatz for such operators we have two symmetric cuts formed by Bethe roots, with the corresponding mode numbers being $\pm n$ (for the ground state $n=1$). To describe these operators within the $\bP\mu$-system
we found that we should take $\mu_{24}=C\sinh(2\pi n u)$ instead of $\mu_{24}=C\sinh(2\pi u)$ (and for odd $J$ we similarly use $\mu_{24}=C\cosh(2\pi n u)$ instead of $\mu_{24}=C\cosh(2\pi u)$). Then the solution is very similar to the one above, and we find
\beq
\gamma=\frac{n\sqrt{\lambda}I_{J+1}(n\sqrt{\lambda})}{JI_J(n\sqrt{\lambda})}S\;,
\label{slopen}
\eeq
which reproduces the result of \cite{Basso:2011rs} for non-trivial mode number $n$. In Appendix \ref{sec:Sanyn} we also show how using the $\bP\mu$-system one can reproduce the slope function for a configuration of Bethe roots with arbitrary mode numbers and filling fractions.
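As a simple sanity check, \eq{slopen} reduces to \eq{eq:resultLO} at $n=1$, and its weak coupling expansion gives $\gamma\simeq \frac{n^2\lambda}{2J(J+1)}S$. The short Python sketch below (illustrative only) confirms this behavior numerically:

```python
import math

def bessel_i(k, z, terms=60):
    t = (z / 2) ** k / math.factorial(k)
    s = t
    for m in range(1, terms):
        t *= (z / 2) ** 2 / (m * (m + k))
        s += t
    return s

def slope(n, J, g):
    """gamma / S from the mode-n slope function."""
    z = 4 * math.pi * g * n              # n * sqrt(lambda)
    return z * bessel_i(J + 1, z) / (J * bessel_i(J, z))

lam = (4 * math.pi * 0.02) ** 2          # lambda = 16 pi^2 g^2, weak coupling
for n, J in [(1, 2), (2, 2), (3, 4)]:
    weak = n ** 2 * lam / (2 * J * (J + 1))
    assert abs(slope(n, J, 0.02) / weak - 1) < 0.05
```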
In summary, we have shown how the $\bP\mu$-system correctly computes the energy at linear order in $S$. In section \ref{sec:exact_slope_to_slope} we will compute the next, $S^2$ term in the anomalous dimension.
\subsection{Prescription for analytical continuation}
\label{sec:ancont}
To deduce the general prescription for the asymptotics of $\mu_{ab}$ for non-integer $S$ from our analysis,
we first study the possible asymptotics of $\mu_{ab}$
for given $\bP_a$ in more detail. For that we combine \eq{muper}
with \eq{eq:mudisc} and \eq{eq:Pmu} to write a finite difference equation on $\mu_{ab}$:
\beq\la{5bax}
\mu_{ab}(u+i)=\mu_{ab}(u)-\mu_{bc}(u)\chi^{cd}\bP_d\bP_a+\mu_{ac}(u)\chi^{cd}\bP_d\bP_b.
\eeq
As there are $5$ linearly independent components of $\mu_{ab}$, this is a 5th order
finite-difference equation, which has $5$ independent solutions; we denote them $\mu_{ab,A},\;A=1,\dots,5$.
Given the asymptotics of $\bP_a$ \eq{eq:asymptotics} and \eq{AA1}, \eq{AA2},
there are exactly $5$ different asymptotics a solution of \eq{5bax} could have, as discussed in \cite{PmuPRL}.
We summarize the leading asymptotics of these solutions at large $u>0$ in the table below
\beq
\bea{c||l|l|l|l|l}
A=&1&2&3&4&5\\ \hline\hline
\mu_{12,A}\sim & u^{\Delta-J}& C_{1,2} u^{-S+1-J}& C_{1,3} u^{-J}& C_{1,4} u^{S-1-J}& C_{1,5} u^{-\Delta-J}\\
\mu_{13,A}\sim & C_{2,1} u^{\Delta+1}& C_{2,2} u^{-S+2}& C_{2,3} u^{+1}& { u^{S}}& C_{2,5} u^{-\Delta+1}\\
\mu_{14,A}\sim & C_{3,1} u^{\Delta}& C_{3,2} u^{-S+1}& 1 & C_{3,4} u^{S-1}& C_{3,5} u^{-\Delta}\\
\mu_{24,A}\sim & C_{4,1} u^{\Delta-1}& u^{-S}& C_{4,3} u^{-1}& C_{4,4} u^{S-2}& C_{4,5} u^{-\Delta-1}\\
\mu_{34,A}\sim & C_{5,1} u^{\Delta+J}& C_{5,2} u^{-S+1+J}& C_{5,3} u^{+J}& C_{5,4} u^{S-1+J}& { u^{-\Delta+J}}
\label{tablemu}
\eea
\eeq
where we fix the normalization of our solutions so that some coefficients are set to $1$\footnote{The coefficients $C_{a,A}$ are some rational
functions of $S,\Delta,J$ and $A_1,A_2$. In the small $S$ limit all $C_{a,A}\to 0$ in our normalization.}.
As was pointed out in \cite{PmuPRL}, the asymptotics for different values of $A$ are obtained by replacing $\Delta$ in \eq{eq:muasymptotics} by $\pm \Delta,\pm (S-1)$ and $0$.
We label these solutions so that in the small $S$ regime these asymptotics are ordered $\Delta> 1-S>0>S-1>-\Delta$.
Of course any solution of \eq{5bax} multiplied by an $i$-periodic function\footnote{It could be a periodic function with short cuts. In general the set of these coefficients is denoted in
\cite{PmuLong} by $\omega_{ab}$ whereas $\mu_{ab,A}$ is denoted in \cite{PmuLong} as ${\cal Q}_{ab,cd}$.} will still remain a solution of \eq{5bax}.
The true $\mu_{ab}$ is thus a linear combination of the partial solutions $\mu_{ab,A}$ with some constant or periodic coefficients.
This particular combination should in addition satisfy the analyticity condition \eq{muper} which is not guaranteed by \eq{5bax}.
The prescription for analytical continuation in $S$ which we propose here is based on the large $u$ asymptotics of these periodic coefficients.
As we discussed in the previous section the assumption that all these coefficients are asymptotically constant
is too constraining already at the leading order in $S$, and we must assume that at least some of these coefficients grow exponentially as $e^{2\pi u}$.
To get some extra insight into the asymptotic behavior of these coefficients it is very instructive to go to the weak coupling regime.
It is known that at one loop the equation \eq{5bax} reduces to a second order equation. When written as a finite difference equation for $\mu_{12}$ it coincides
exactly with the Baxter equation for the non-compact $sl(2)$ spin chain. For $J=2$ it reads
\beq\la{oneloopbaxter}
\(2u^2-S^2-S-\frac{1}{2}\)Q(u)=(u+\tfrac{i}{2})^2 Q(u+i)
+(u-\tfrac{i}{2})^2 Q(u-i)
\eeq
where $Q(u)=\mu_{12}(u+i/2)$.
This equation is already very well studied and all its solutions are known explicitly \cite{Derkachov:2002wz} -- in particular it is easy to see that one of the solutions
must have $u^S$ asymptotics at infinity, while the other behaves as $1/u^{S+1}$.
It is also known that at one loop and for any integer $S$ \eq{oneloopbaxter} has a polynomial solution which gives the energy as $\Delta=J+S+
\left.2ig^2\d_u\log\frac{Q(u+i/2)}{Q(u-i/2)}\right|_{u=0}=S+J+8g^2 H_S$.
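For instance, for $J=2$ and $S=1$ the polynomial solution is simply $Q(u)=u$, while for $S=2$ one finds $Q(u)=u^2-1/12$; the corresponding energies are $8g^2 H_1=8g^2$ and $8g^2 H_2=12g^2$. The Python sketch below (illustrative only) verifies both statements numerically:

```python
def baxter_residual(Q, S, u):
    """Residual of the one-loop J=2 Baxter equation at the point u."""
    lhs = (2 * u ** 2 - S ** 2 - S - 0.5) * Q(u)
    rhs = (u + 0.5j) ** 2 * Q(u + 1j) + (u - 0.5j) ** 2 * Q(u - 1j)
    return lhs - rhs

def energy(Q, dQ, J, S, g):
    """Delta = J + S + 2 i g^2 d_u log[Q(u+i/2)/Q(u-i/2)] at u = 0."""
    return J + S + 2j * g ** 2 * (dQ(0.5j) / Q(0.5j) - dQ(-0.5j) / Q(-0.5j))

g = 0.35
sols = [(1, lambda u: u, lambda u: 1),
        (2, lambda u: u ** 2 - 1 / 12, lambda u: 2 * u)]
H = {1: 1.0, 2: 1.5}                      # harmonic numbers H_S
for S, Q, dQ in sols:
    for u in [0.3 + 0.7j, -1.1 + 0.2j]:
        assert abs(baxter_residual(Q, S, u)) < 1e-10
    assert abs(energy(Q, dQ, 2, S, g) - (2 + S + 8 * g ** 2 * H[S])) < 1e-12
```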
At the same time, for non-integer $S$ there are of course no polynomial solutions,
and according to \cite{Janik:2013nqa} and \cite{GrKazakov}
the solution which produces the energy $S+J+8g^2 H_S$
cannot even have power-like asymptotics, instead the correct large $u$ behavior must be:
\beq
Q(u)\sim \(u^S+\dots\)+(A+B e^{2\pi u})\(\frac{1}{u^{S+1}}+\dots\)\;\;,\;\;u\to +\infty\;.
\eeq
Furthermore, there is a unique entire $Q$ function with the above asymptotics.
For $S>-1/2$ we can reformulate the prescription by saying that the correct solution
has power-like asymptotics, containing all possible solutions, plus the small (decaying) solution multiplied by an exponentially growing coefficient.
In this form we can try to translate this result to our case. We notice that for $g\to 0$ we have
$\mu_{12,1}\sim u^{S}$ and $\mu_{12,2}\sim u^{-S-1}$, which tells us that at least the second solution
must be allowed to have a non-constant periodic coefficient in the asymptotics. We also assume that the coefficient in front of $\mu_{ab,3}$ tends to a constant\footnote{It could be hard or even impossible to separate $\mu_{ab,3}$ from $\mu_{ab,2}$ in a well defined way.
In these cases $\mu_{ab,2}$ is defined modulo $\mu_{ab,3}$ and other subleading solutions. Our prescription then means that the exponential part of the coefficient in front of $\mu_{ab,3}$ is proportional to that in front of $\mu_{ab,2}$.
}.
This extra condition does not follow from the one-loop analysis; rather, we deduced it from our solution.
We will show how this prescription produces the correct known result for the leading order in $S$.
From our analysis it is hard to make a definite statement about the behavior of the periodic coefficients in front of
$\mu_{12,4}$ and $\mu_{12,5}$, but due to the expected $\Delta \to-\Delta$ symmetry, which interchanges $\mu_{12,5}$ and $\mu_{12,1}$,
one may expect that the coefficient of $\mu_{12,5}$ should also go to a constant. To summarize we should have
\beq\la{prescr}
\mu_{ab}(u)=\sum_{A=1}^5 c_{A}\mu_{ab,A}(u)
+\sum_{A=2,4,5} p_{A}(u)\mu_{ab,A}(u)
\eeq
where $c_A$ are constants whereas $p_{A}(u)$ are some linear combinations of $e^{\pm 2\pi u}$.\footnote{It could be that some of the coefficients of $p_{A}$ should be zero due to the constraint \eq{constraint}.}
\paragraph{Prescription at small $S$.}
In the small $S$ limit, $\bP_a\to 0$ and the equation \eq{5bax} simply tells us that $\mu_{ab}(u+i)=\mu_{ab}(u)$ which implies that our $5$ independent solutions are
just constants at the leading order in $S$.
We begin by noticing that in this limit $\mu_{12}$ must come entirely from $\mu_{12,1}$, as all the other solutions could only produce negative powers and thus cannot
contribute at the leading order. So we start from $\mu_{ab}=C_{ab}+D_{ab}\sinh(2\pi u)+E_{ab}\cosh(2\pi u)$ for some constants $C_{ab},D_{ab},E_{ab}$ such that
$D_{12}=E_{12}=0$. Thus we have $5$ different $C$'s, $4$ different $D$'s and $4$ different $E$'s.
We notice that this general form of $\mu_{ab}$ can be significantly simplified.
First, using the Pfaffian constraint \eqref{constraint} and the \text{$\gamma$-transformation} \eqref{gammaP} any generic $\mu_{ab}$ of this form
can be reduced to one belonging to the following two-parametric family inside the original 13-parametric space:
\beqa
&&\mu_{12}=1,\ \mu_{14}=a^2 \sinh{2\pi u}+\frac{a}{2}\cosh{2\pi u}\;,\\
&&\mu_{24}=\sinh{2\pi u}+b\cosh{2\pi u}\;,\
\mu_{34}=\frac{a^2}{4}\frac{(1-2ab)^2}{b^2-1}+1\;,
\label{mufamilyab}
\eeqa
where $\mu_{13}$ is found from the Pfaffian constraint.
Second, recall that according to our prescription the 1st and 3rd solutions (columns in the table \eqref{tablemu}) cannot contain exponential terms.
Considering $\mu_{14}$ and $\mu_{24}$, we again see that the 4th and 5th solutions
could only contain negative powers of $u$ and thus only the 2nd solution can contribute to the parts of $\mu_{14}$ and $\mu_{24}$ that are non-decaying at infinity.
This means that these components can be represented in the following form
\beqa
\mu_{14}=(a_1\sinh{2\pi u}+ a_2\cosh{2\pi u})\mu_{14,2}(u)+\mathcal{O}\(e^{2\pi u}/u\)\;,\\
\mu_{24}=(a_1\sinh{2\pi u}+a_2\cosh{2\pi u})\mu_{24,2}(u)+\mathcal{O}\(e^{2\pi u}/u\)\;,
\eeqa
for $u\to+\infty$.
The $\mathcal{O}\(e^{2\pi u}/u\)$ terms contain contributions from all of the solutions except for the 2nd. One can see that \eqref{mufamilyab} can be of this form only in two cases: if $a=0$ or if $a=\frac{1}{2b}$.
Both of these cases can be brought to the form
\beq
\label{muresan}
\mu_{12}=1,\ \mu_{13}=0,\ \mu_{14}=0,\ \mu_{24}=d_1\sinh{2\pi u}+d_2\cosh{2\pi u},\ \mu_{34}=1
\eeq
by a suitable $\gamma$-transformation \eqref{gammaP}. However, we found that there is an additional constraint which follows from compatibility of $\mu_{ab}$
with the decaying asymptotics of $\bP_2$. As
we show in appendix \ref{sec:Sanyn} for even $J$ one must set $d_2=0$. For odd $J$ we must set $d_1=0$ as a compatibility
requirement. This justifies the choice of $\mu_{ab}$ used in the previous section.
In the next section we will show how the same prescription can be applied at the next order in $S$ and leads to nontrivial results which
we subject to extensive tests later in the text.
\section{Exact curvature function}
\label{sec:exact_slope_to_slope}
In this section we use the $\bP\mu$-system to compute the $S^2$ correction to the anomalous dimension, which we call the curvature function $\gamma^{(2)}(g)$. First we will discuss the case $J=2$ in detail and then describe the modifications of the solution for the cases $J=3$ and $J=4$, more details on which can be found in appendix \ref{sec:NLOapp}.
\subsection{Iterative procedure for the small $S$ expansion of the $\bP\mu$-system}
\label{sec:SolvingPmuL2}
For convenience let us repeat the leading order solution of the $\bP\mu$-system for $J=2$ (see \eqref{eq:musolLOevenL}-\eqref{eq:P4solLOevenL})
\beqa
{\bf P}^{(0)}_1=\epsilon\frac{1}{x}\;\;&,&\;\;{\bf P}^{(0)}_2=+\epsilon I_1-\epsilon x[\sinh(2\pi u)]_-\;\;,\\
{\bf P}^{(0)}_3=\epsilon\(\frac{1}{x}-x\)\;\;&,&\;\;
{\bf P}^{(0)}_4=
-2\epsilon I_1-
\epsilon \(\frac{1}{x}-x\)[\sinh(2\pi u)]_-.
\label{P10P40}
\eeqa
Here $\epsilon$ is a small parameter, proportional to $\sqrt{S}$ (see \eq{epss}), and by $\bP_a^{(0)}$ we denote the $\bP_a$ functions at leading order in $\epsilon$.
The key observation is that the $\bP\mu$-system can be solved iteratively order by order in $\epsilon$. Let us write $\bP_a$ and $\mu_{ab}$ as an expansion in this small parameter:
\beq
\bP_a=\bP_a^{(0)}+\bP_a^{(1)}+\bP_a^{(2)}+\dots
\eeq
\beq
\mu_{ab}=\mu_{ab}^{(0)}+\mu_{ab}^{(1)}+\mu_{ab}^{(2)}+\dots \;.
\eeq
where $\bP_a^{(0)}=\cO(\eps),\;\bP_a^{(1)}=\cO(\eps^3),\;\bP_a^{(2)}=\cO(\eps^5),\;\dots$, and $\mu_{ab}^{(0)}=\cO(\eps^0),\; \mu_{ab}^{(1)}=\cO(\eps^2),\; \mu_{ab}^{(2)}=\cO(\eps^4),$ etc.
This structure of the expansion is dictated by the equations \eq{eq:Pmu}, \eq{eq:mudisc} of the $\bP\mu$-system (as we will soon see explicitly). Since the leading order $\bP_a$ are of order $\epsilon$, equation \eqref{eq:mudisc} implies that the discontinuity of $\mu_{ab}$ on the cut is of order $\epsilon^2$. Thus to find $\mu_{ab}$ in the next to leading order (NLO) we only need the functions $\bP_a$ at leading order. After this, we can find the NLO correction to $\bP_a$ from equations \eqref{eq:Pmu}. This will be done below, and having thus the full solution of the $\bP\mu$-system at NLO we will find the energy at order $S^2$.
\subsection{Correcting $\mu_{ab}$\dots}
\label{sec:muNLOL2}
In this subsection we find the NLO corrections $\mu^{(1)}_{ab}$ to $\mu_{ab}$. As follows from \eqref{eq:mudisc} and \eq{muper},
they should satisfy the equation
\beq
\mu_{ab}^{(1)}(u+i)-\mu_{ab}^{(1)}(u)=\bP_a^{(0)} \tilde\bP_b^{(0)}- \bP_b^{(0)} \tilde\bP_a^{(0)},
\label{eq:mudiscNLO}
\eeq
in which the right hand side is known explicitly. Let us therefore set up an apparatus for solving equations of this type, i.e.
\beq
f(u+i)-f(u)=h(u).
\label{eqperiod}
\eeq
More precisely, we consider functions $f(u)$ and $h(u)$ with one cut in $u$ between $-2g$ and $2g$, and no poles. Such functions can be represented as infinite Laurent series in the Zhukovsky variable $x(u)$, and we additionally restrict ourselves to the case where for $h(u)$ this expansion does not have a constant term\footnote{The r.h.s. of \eq{eq:mudiscNLO} has the form $F(u)-\tilde F(u)$ and therefore indeed does not have a constant term in its expansion, as the constant in $F$ would cancel in the difference $F(u)-\tilde F(u)$.}.
One can see that the general solution of \eqref{eqperiod} has the form of a particular solution plus an arbitrary $i$-periodic function, which we also call a zero mode.
First we will describe the construction of the particular solution and later deal with zero modes. The linear operator which gives the particular solution of \eqref{eqperiod} described below will be denoted as $\Sigma$.
Notice that given the explicit form \eqref{P10P40} of $\bP^{(0)}_a$, the right hand side of \eqref{eq:mudiscNLO} can be represented in the form
\beq
\alpha(x)\sinh(2\pi u)+\beta(x),
\label{alphabetasinh}
\eeq
where $\alpha(x),\beta(x)$ are power series in $x$ growing at infinity not faster than polynomially. Thus for such $\alpha$ and $\beta$ we define
\beq
\Sigma\cdot\[\alpha(x)\sinh(2\pi u)+\beta(x)\]\equiv \sinh(2\pi u) \Sigma\cdot \alpha(x)+\Sigma\cdot \beta(x).
\eeq
We also define $\Sigma\cdot x^{-n}=\Gamma'\cdot x^{-n}$ for $n>0$, where the integral operator $\Gamma'$ is defined as
\beq
\(\Gamma'\cdot h\)(u)\equiv \oint_{-2g}^{2g}\frac{dv}{{4\pi i}}\d_u \log \frac{\Gamma[i (u-v)+1]}{\Gamma[-i (u-v)]}h(v).
\label{Gammaprime}
\eeq
This requirement is consistent because of the following relation\footnote{Recall that $f_+$ and $f_-$ stand for the parts of the Laurent expansion with, respectively, positive and negative powers of $x$, while $\tilde f$ is the analytic continuation around the branch point at $u=2g$ (which amounts to replacing $x\to\ofrac{x}$)}
\beq
\(\Gamma'\cdot h\)(u+i)-\(\Gamma'\cdot h\)(u)
=
-\frac{1}{2\pi i}\oint_{-2g}^{2g}\frac{h(v)}{u-v}dv=h_-(u)-\widetilde{h_+}(u).
\label{eq:Gammaproperty}
\eeq
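The relation \eq{eq:Gammaproperty} can be tested numerically by parametrizing the contour around the cut as $v=g(z+1/z)$ with $z=e^{i\theta}$, in which case the Zhukovsky variable of the integrand's branch on the contour is $z$ itself. The Python sketch below (illustrative only; the overall sign of the middle expression fixes the orientation convention of $\oint$) checks the equality of the last two expressions for $h=x^{-2}+x^{3}$, for which $h_--\widetilde{h_+}=x^{-2}-x^{-3}$:

```python
import cmath, math

def x_of_u(u, g):
    return 0.5 * (u / g + cmath.sqrt(u / g - 2) * cmath.sqrt(u / g + 2))

def cut_integral(h_of_z, u, g, n=4000):
    """1/(2*pi*i) times the contour integral of h(v)/(u-v) dv around the cut
    [-2g, 2g], with v = g(z + 1/z), z = exp(i*theta), theta from 0 to 2*pi."""
    total = 0.0
    for k in range(n):
        z = cmath.exp(2j * math.pi * k / n)
        v = g * (z + 1 / z)
        dv = g * (1 - z ** -2) * 1j * z * (2 * math.pi / n)
        total += h_of_z(z) / (u - v) * dv
    return total / (2j * math.pi)

g, u = 0.5, 1.1 + 0.7j
x = x_of_u(u, g)
# h = x^{-2} + x^{3}: h_- = x^{-2}, h_+ = x^{3}, tilde(h_+) = x^{-3}
val = cut_integral(lambda z: z ** -2 + z ** 3, u, g)
assert abs(val - (x ** -2 - x ** -3)) < 1e-8
```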
What is left is to define $\Sigma$ on positive powers of $x$. We do it by requiring
\beq
\frac{1}{2}\Sigma\cdot\left[x^a+1/x^a\right]\equiv p_a'(u)
\label{paprime}
\eeq
where $p_a'(u)$ is a polynomial in $u$ of degree $a+1$, which is a solution of
\beq
p_a'(u+i)-p_a'(u)=\frac{1}{2}\(x^a+1/x^a\)
\eeq
and satisfies the following additional properties: $p_a'(0)=0$ for odd $a$ and $p_a'(i/2)=0$ for even $a$. One can check that this definition is consistent and defines $p'_a(u)$ uniquely. The explicit form of the first few $p_a'(u)$, which we call periodized Chebyshev polynomials, can be found in appendix \ref{sec:appPeriodized}.
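For example, the explicit polynomial $p_1'(u)=-\frac{iu(u-i)}{4g}$ quoted below satisfies $p_1'(u+i)-p_1'(u)=\frac{1}{2}(x+1/x)=\frac{u}{2g}$ together with $p_1'(0)=0$, as the following minimal Python sketch (illustrative only) confirms:

```python
def p1_prime(u, g):
    # explicit periodized Chebyshev polynomial quoted in the text
    return -1j * u * (u - 1j) / (4 * g)

g = 0.7
for u in [0.3 + 0.2j, -1.4 + 0.9j]:
    lhs = p1_prime(u + 1j, g) - p1_prime(u, g)
    assert abs(lhs - u / (2 * g)) < 1e-12   # equals (x + 1/x)/2 = u/(2g)
assert p1_prime(0, g) == 0                  # normalization p'_a(0) = 0 for odd a
```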
From this definition of $\Sigma$ one can see that the result of its action on expressions of the form \eqref{alphabetasinh}
can again be represented in this form; what is important for us is that no exponential functions other than $\sinh(2\pi u)$ appear in the result.
A good illustration of how the definitions above work would be the following two simple examples. Suppose one wants to calculate $\Sigma\cdot\(x-\frac{1}{x}\)$, then it is convenient to split the argument of $\Sigma$ in the following way:
\beq
\Sigma\cdot\(x-\frac{1}{x}\)=\Sigma\cdot\(x+\frac{1}{x}\)-2\Sigma\cdot\frac{1}{x}.
\eeq
In the first term we recognize $2p_1'(u)$ with $p_1'(u)=-\frac{i u(u-i)}{4g}$, whereas in the second the argument of $\Sigma$ is decaying at infinity, thus $\Sigma$ is equivalent to $\Gamma'$ in this context. Notice also that $\Gamma'\cdot \frac{1}{x}=-\Gamma'\cdot x$. All together, we get
\beq
\Sigma\cdot\(x-\frac{1}{x}\)=\Sigma\cdot\(x+\frac{1}{x}\)-2\Sigma\cdot\frac{1}{x}=2p_1'(u)+ 2\Gamma'\cdot x
\eeq
In a similar way, in order to calculate $\Sigma\cdot\frac{\sinh_--\sinh_+}{2}$, one can write $\frac{\sinh_--\sinh_+}{2}=\sinh_--\frac{1}{2}\sinh(2\pi u)$.
Notice that since $\sinh_-$ decays at infinity,
\beq
\Sigma\cdot\sinh_-=\Gamma'\cdot\sinh_-.
\eeq
Also, since $i$-periodic functions can be factored out of $\Sigma$,
\beq
\Sigma\cdot\sinh(2\pi u)=\sinh(2\pi u)\,\Sigma\cdot 1=\sinh(2\pi u)\,p_0'(u).
\eeq
Finally,
\beq
\Sigma\cdot\frac{\sinh_--\sinh_+}{2}=\Gamma'\cdot(\sinh_-)-\frac{1}{2}\sinh(2\pi u)p_0'(u).
\eeq
As an example we present the particular solution for two components of $\mu_{ab}$ (below we will argue that $\pi_{12}$ and $\pi_{13}$ can be chosen to be zero, see \eqref{eq:periodicpart})
\beqa
\label{muexpl1}
&&\mu_{13}^{(1)}-\pi_{13}=\Sigma\cdot\({\bf P}_1 \tilde{\bf P}_3-{\bf P}_3 \tilde{\bf P}_1\)=\epsilon^2\Sigma\cdot\(x^2-\frac{1}{x^2}\)=
\epsilon^2\;\;\(\Gamma'\cdot x^2+p_2'(u)\),\\
\nonumber
&&\mu_{12}^{(1)}-\pi_{12}=\Sigma\cdot\({\bf P}_1
\tilde{\bf P}_2-{\bf P}_2
\tilde{\bf P}_1\)= \\&& =
-\epsilon^2\[2 I_1\Gamma'\cdot x-\sinh(2\pi u)\;\Gamma'\cdot x^2-\Gamma'\cdot\(\sinh_-\(x^2+\frac{1}{x^2}\)\)
\].\label{muexpl2}
\eeqa
Now let us apply the operator $\Sigma$ defined above to \eq{eq:mudiscNLO}, writing its general solution as
\beq
\mu^{(1)}_{ab}=\Sigma\cdot(\bP_a^{(0)} \tilde\bP_b^{(0)}- \bP_b^{(0)} \tilde\bP_a^{(0)})+\pi_{ab},
\label{eq:sol13}
\eeq
where the zero mode $\pi_{ab}$ is an arbitrary $i$-periodic entire function, which can be written similarly to the leading order as $c_{1,ab}\cosh{2\pi u}+c_{2,ab}\sinh{2\pi u}+c_{3,ab}$. Again, many of the coefficients $c_{i,ab}$ can be set to zero. First, the prescription from section \ref{sec:ancont} implies that the parts of the coefficients of $\sinh(2\pi u)$ and $\cosh(2\pi u)$ in $\mu_{12}$ which do not vanish at infinity must be zero. As one can see from the explicit form \eqref{muexpl2} of the particular solution which we choose for $\mu_{12}$, it does not contain $\cosh(2\pi u)$ and the coefficient of $\sinh(2\pi u)$ is decaying at infinity. So in order to satisfy the prescription, we have to set $c_{1,12}$ and $c_{2,12}$ to zero. Second, since the coefficients $c_{i,ab}$ are of order $S$, we can remove some of them by making an infinitesimal $\gamma$-transformation, i.e. with $R=1+{\cal O}(S)$ (see section \ref{sec:Symmetries} and Eq. \eqref{gammaP}).
Further, the Pfaffian constraint \eqref{constraint} imposes 5 equations on the remaining coefficients, which leaves the following 2-parametric family of zero modes
\beqa
&& \pi_{12}=0,\ \pi_{13}=0,\ \pi_{14}=\frac{1}{2}c_{1,34}\cosh{2\pi u},\\
&& \pi_{24}= c_{1,24}\cosh{2\pi u},\ \pi_{34}= c_{1,34}\cosh{2\pi u}.
\eeqa
Let us now look closer at the exponential parts of $\mu_{14}$ and $\mu_{24}$. Combining the leading order \eqref{eq:musolLOevenL} and the perturbation \eqref{eq:sol13}, and taking into account the fact that the operator $\Sigma$ does not produce terms proportional to $\cosh{2 \pi u}$, we obtain
\beqa
&&\mu_{14}=\frac{1}{2}c_{1,34}\cosh{2\pi u}+{\cal O}(\epsilon) \sinh{2\pi u}+\mathcal{O}(\epsilon^2)+\dots, \\
&&\mu_{24}=c_{1,24}\cosh{2\pi u}+(1+{\cal O}(\epsilon)) \sinh{2\pi u}+\mathcal{O}(\epsilon^2)+\dots,
\eeqa
where the dots stand for power-like terms or exponential terms suppressed by powers of $u$.
As we remember from section \ref{sec:ancont}, only the 2nd solution of the 5th order Baxter equation \eqref{5bax} can contribute to the exponential part of $\mu_{14}$ and $\mu_{24}$, which means that $\mu_{14}$ and $\mu_{24}$ are proportional to the same linear combination of $\sinh{2\pi u}$ and $\cosh{2\pi u}$. From the second equation one can see that this linear combination can be normalized to be $c_{1,24}\cosh{2\pi u}+(1+{\cal O}(\epsilon)) \sinh{2\pi u}$. Then $\mu_{14}=C\(c_{1,24}\cosh{2\pi u}+(1+{\cal O}(\epsilon)) \sinh{2\pi u}\)$, where $C$ is some constant of order ${\cal O}(\epsilon)$, because the coefficient of $\sinh{2\pi u}$ in the first equation is ${\cal O}(\epsilon)$. Taking into account that $c_{1,24}$ is ${\cal O}(\epsilon)$ itself, we find that $c_{1,34}=\mathcal{O}(\epsilon^2)$, i.e. it does not contribute at the order which we are considering. So the final form of the zero mode in \eqref{eq:sol13} is
\beqa
&& \pi_{12}=0,\ \pi_{13}=0,\ \pi_{14}=0,\\
&& \pi_{24}=c_{1,24}\cosh{2\pi u},\ \pi_{34}=0.
\label{eq:periodicpart}
\eeqa
In this way, using the particular solution given by $\Sigma$ and the form of zero modes \eqref{eq:periodicpart} we have computed all the functions $\mu_{ab}^{(1)}$. The details and the results of the calculation can be found in appendix \ref{sec:appmu2}.
\subsection{Correcting $\bP_{a}$\dots}
\label{sec:CalculationofPa}
In the previous section we found the NLO part of $\mu_{ab}$. Now, according to the iterative procedure described in section \ref{sec:SolvingPmuL2}, we can use it to write a closed system of equations for $\bP_a^{(1)}$.
Indeed, expanding the system \eqref{eq:pmuexpanded} to NLO we get
\beqa
\label{eq:P1eqNLOL2}
&&\tilde \bP^{(1)}_1
- \bP^{(1)}_1
= -\bP^{(1)}_3+r_1, \\
\label{eq:P2eqNLOL2}
&&\tilde \bP^{(1)}_2+\bP_2^{(1)}= -\bP^{(1)}_4 -\bP^{(1)}_1 \sinh(2\pi u)+r_2, \\
\label{eq:P3eqNLOL2}
&&\tilde \bP^{(1)}_3+\bP_3^{(1)}=r_3,\\
\label{eq:P4eqNLOL2}
&&\tilde \bP^{(1)}_4-\bP_4^{(1)}=\bP_3^{(1)} \sinh(2\pi u)+r_4,
\eeqa
where the free terms are given by
\beq
r_a=-\mu_{ab}^{(1)}\chi^{bc}\bP_c^{(0)}.
\label{eq:ra}
\eeq
Notice that $r_a$ does not change if we add a matrix proportional to $\bP_a^{(0)} \tilde\bP_b^{(0)}- \bP_b^{(0)} \tilde\bP_a^{(0)}$ to $\mu^{(1)}_{ab}$, due to the relations
\beq
\bP_a \chi^{ab}\bP_b=0,\;\bP_a\chi^{ab}\tilde\bP_b=0,
\eeq
which follow from the $\bP\mu$-system equations. In particular we can use this property to replace $\mu_{ab}^{(1)}$ in \eq{eq:ra} by $\mu_{ab}^{(1)}+\frac{1}{2}\(\bP_a^{(0)} \tilde\bP_b^{(0)}- \bP_b^{(0)} \tilde\bP_a^{(0)}\)$. This will be convenient for us, since in expressions for $\mu^{(1)}_{ab}$ in terms of $p_a$ and $\Gamma$ (see \eq{muexpl1}, \eq{muexpl2} and appendix \ref{sec:appmu2}) this change amounts to simply replacing $\Gamma'$ by a convolution with a more symmetric kernel:
\beq
\Gamma' \rightarrow \Gamma,
\eeq
\beq
\(\Gamma\cdot h\)(u)\equiv \oint_{-2g}^{2g}\frac{dv}{{4\pi i}}\d_u \log \frac{\Gamma[i (u-v)+1]}{\Gamma[-i (u-v)+1]}h(v),
\label{Gamma}
\eeq
while at the same time replacing
\beq
p_a'(u)\rightarrow p_a(u),\ \
\eeq
\beq
p_a(u)=p_a'(u)+\frac{1}{2}\(x^a(u)+x^{-a}(u)\).
\label{pa}
\eeq
Having made this comment, we will now develop tools for solving the equations \eq{eq:P1eqNLOL2} - \eq{eq:P4eqNLOL2}.
Notice first that if we solve them in the order \eq{eq:P3eqNLOL2}, \eq{eq:P1eqNLOL2}, \eq{eq:P4eqNLOL2}, \eq{eq:P2eqNLOL2}, substituting into each subsequent equation the solutions of all the previous ones, then at each step the problem we have to solve has the form
\beq
\tilde f+f=h\;\; \text{or}\;\; \tilde f-f=h\;\;,
\label{eqs}
\eeq
where $h$ is known, $f$ is unknown and both the right hand side and the left hand side are power series in $x$. It is obvious that equations \eqref{eqs} have solutions only for $h$ such that $h=\tilde h$ and $h=-\tilde h$ respectively.
On the class of such $h$ a particular solution for $f$ can be written as
\beq
f= [h]_-+[h]_0/2\equiv H\cdot h\;\; \Rightarrow\;\; \tilde f+f=h
\label{solfh1}
\eeq
and
\beq
f= -[h]_-\equiv K\cdot h\;\; \Rightarrow\;\; \tilde f-f=h,
\label{solfh2}
\eeq
where $[h]_0$ is the constant part of the Laurent expansion of $h$ (it does not appear in the second equation, because an $h$ such that $h=-\tilde h$ does not have a constant part).
The operators $K$ and $H$ introduced here can also be defined by their integral kernels
\beqa
H(u,v)&=&-\frac{1}{4\pi i}\frac{\sqrt{u-2g}\sqrt{u+2g}}{\sqrt{v-2g}\sqrt{v+2g}}\frac{1}{u-v}, \\
K(u,v)&=&+\frac{1}{4\pi i}\frac{1}{u-v},
\label{eq:HK}
\eeqa
which are equivalent to \eqref{solfh1}, \eqref{solfh2} on the classes of $h$ such that $h=\tilde h$ and $h=-\tilde h$ respectively\footnote{We denote e.g. $K\cdot h=\oint_{-2g}^{2g}K(u,v)h(v)dv$, where the integral goes around the branch cut between $-2g$ and $2g$.}. The particular solution $f=H\cdot h$ of the equation $\tilde f+ f=h$ is unique in the class of functions $f$ decaying at infinity, and the solution $f=K \cdot h$ of $\tilde f- f=h$ is unique for non-growing $f$. In all other cases the general solution includes zero modes, which, in our case, are fixed by the asymptotics of $\bP_a$.
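On the cut the conjugation $f\to\tilde f$ acts on a Laurent mode simply as $x^n\to x^{-n}$, so \eqref{solfh1} and \eqref{solfh2} can be verified directly on truncated Laurent series. The following minimal numerical sketch (for illustration only, not part of the derivation) represents a function by a dictionary of Laurent coefficients:

```python
# Represent h(x) = sum_n c_n x^n by {n: c_n}; on the cut the tilde operation
# acts as x -> 1/x, i.e. it maps the mode x^n to x^{-n}.
def tilde(h):
    return {-n: c for n, c in h.items()}

def add(f, g):
    out = dict(f)
    for n, c in g.items():
        out[n] = out.get(n, 0) + c
    return {n: c for n, c in out.items() if c != 0}

def neg(f):
    return {n: -c for n, c in f.items()}

def H(h):
    # [h]_- + [h]_0 / 2 : negative modes plus half the constant mode
    out = {n: c for n, c in h.items() if n < 0}
    if 0 in h:
        out[0] = h[0] / 2
    return out

def K(h):
    # -[h]_-
    return {n: -c for n, c in h.items() if n < 0}

# symmetric h (h = tilde h): f = H.h solves  f + tilde f = h
h_sym = {-2: 3.0, -1: 1.0, 0: 4.0, 1: 1.0, 2: 3.0}
f = H(h_sym)
assert add(tilde(f), f) == h_sym

# antisymmetric h (h = -tilde h): f = K.h solves  tilde f - f = h
h_asym = {-3: -2.0, -1: 5.0, 1: -5.0, 3: 2.0}
f = K(h_asym)
assert add(tilde(f), neg(f)) == h_asym
```

Note that the antisymmetric example has no $x^0$ mode, reflecting the remark above that an $h$ with $h=-\tilde h$ has no constant part.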
Now it is easy to write the explicit solution of the equations
\eq{eq:P1eqNLOL2}-\eq{eq:P4eqNLOL2}:
\beqa
\bP_3^{(1)}&=&H\cdot r_3,\\
\bP_1^{(1)}&=&\frac{1}{2}\bP^{(1)}_3+K\cdot \(r_1-\frac{1}{2} r_3\),\\
\bP_4^{(1)}&=&K\cdot\(-\frac{1}{2}\(\tilde\bP_3^{(1)}-\bP_3^{(1)}\) \sinh(2\pi u)+
\frac{2r_4+r_3 \sinh(2\pi u)}{2}\)-2\delta,\\
\bP_2^{(1)}&=&H\cdot\(-\frac{1}{2}
\(
{\bf P}^{(1)}_4+\sinh(2\pi u){\bf P}^{(1)}_1+\tilde{\bf P}^{(1)}_4+\sinh(2\pi u)\tilde{\bf P}^{(1)}_1
\)+\right.\\ \nn
&&\left.
+\frac{r_4+\sinh(2\pi u) r_1+2r_2}{2}\)+\delta,
\label{eq:P4solNLOL2}
\eeqa
where $\delta$ is a constant fixed uniquely by requiring $\mathcal{O}(1/u^2)$ asymptotics for $\bP_2$. This asymptotic condition also sets the last remaining coefficient $c_{1,24}$ in $\pi_{24}$ to zero. Thus in the class of functions with asymptotics \eqref{eq:asymptotics} the solution for $\mu_{ab}$ and $\bP_a$ is unique up to a $\gamma$-transformation.
\subsection{Result for $J=2$}
\label{sec:resultL2}
In order to obtain the result for the anomalous dimension, we again use the formulas \eq{AA1}, \eq{AA2} which connect the leading coefficients of $\bP_a$ with $\Delta,\ J\ $ and $S$. After plugging in $A_i$ which we find from our solution, we obtain the result for the $S^2$ correction to the anomalous dimension:
\beqa
\label{gamma2L2}
\gamma^{(2)}_{J=2}&=&\frac{\pi}{g^2(I_1-I_3)^3}\oint \frac{du_x}{2\pi i}\oint \frac{du_y}{2\pi i}\[\frac{8 I_1^2(I_1+I_3) \left(x^3-\left(x^2+1\right) y\right) }{ \left(x^3-x\right) y^2}\right.\\ \nn
&& +\frac{8 \sh_-^x \sh_-^y
\left(x^2 y^2-1\right) \left(I_1 (x^4 y^2+1)-I_3x^2(y^2+1)\right)}{ x^2 \left(x^2-1\right)
y^2}\\ \nn
&&-\frac{4 (\sh_-^y)^2 x^2 \left(y^4-1\right) \left( I_1(2x^2-1)-I_3 \right)}{ \left(x^2-1\right) y^2}\\ \nn
&&+\frac{8
I_1^2 \sh_-^y x \left(2 \(x^3-x\) \left(y^3+y\right)-2 x^2
\left(y^4+y^2+1\right)+y^4+4 y^2+1\right)}{ \left(x^2-1\right) y^2}\\ \nn
&&-\frac{8 (I_1-I_3)
I_1 \sh_-^y x (x-y) (x
y-1)}{ \left(x^2-1\right) y}\\ \nn
&&\left.-\frac{4 (I_1-I_3) (\sh_-^x)^2 \left(x^2+1\right)
y^2}{ \left(x^2-1\right)}\right]
\frac{1}{4\pi i}\d_u \log\frac{\Gamma (i u_x-i u_y+1)}{\Gamma (1-i u_x+i u_y)}\;.
\eeqa
Here the integration contour goes around the branch cut at $(-2g,2g)$. We also denote
$\sh_-^x=\sinh_-(x),\ \sh_-^y=\sinh_-(y)$ (recall that $\sinh_-$ was defined in \eq{defshm}). This is our final result for the $J=2$ curvature function, valid at any coupling.
It is interesting to note that our result contains the combination $\log\frac{\Gamma (i u_x-i u_y+1)}{\Gamma (1-i u_x+i u_y)}$ which plays an essential role in the construction of the BES dressing phase. We will use this identification in section \ref{sec:Konishidimension} to compute the integral in \eq{gamma2L2} numerically with high precision.
In the next subsections we will describe generalizations of the $J=2$ result to operators with $J=3$ and $J=4$.
\subsection{Results for higher $J$}
\label{sec:SolvingPmuL3}
Solving the $\bP\mu$-system for $J=3$ is similar to the $J=2$ case described above, except for several technical complications, which we will describe here, leaving the details for the appendix \ref{sec:appnlo3}.
As in the previous section, the starting point is the LO solution of the $\bP\mu$ system, which for $J=3$ reads
\beq
\bP_1=\epsilon x^{-3/2},\ \bP_3=-\epsilon x^{3/2},
\label{P1P3LOsolL3}
\eeq
\beq
\bP_2=-\epsilon x^{3/2}\cosh_- +\epsilon x^{-1/2}I_2,
\label{P2LOsolL3}
\eeq
\beq
\bP_4=-\epsilon x^{1/2}I_2-\epsilon x^{-3/2}I_0-\epsilon x^{-3/2}\cosh_-,
\label{P4LOsolL3}
\eeq
\beq
\mu_{12}=1,\ \mu_{13}=0,\ \mu_{14}=0,\ \mu_{24}=\cosh(2\pi u),\ \mu_{34}=1\;.
\eeq
The first step is to construct $\mu^{(1)}_{ab}$ from its discontinuity given by the equation \eqref{eq:mudiscNLO}. The full solution consists of a particular solution and a general solution of the corresponding homogeneous equation, i.e. a zero mode $\pi_{ab}$. In our case the zero mode can be an $i$-periodic function, i.e. a linear combination of $\sinh(2\pi u)$, $\cosh(2\pi u)$ and constants. As in the case of $J=2$, we use a combination of the Pfaffian constraint, the prescription from section \ref{sec:ancont} and a $\gamma$-transformation to reduce all the parameters of the zero mode to just one, sitting in $\mu_{24}$:
\beq
\pi_{12}=0,\;\pi_{13}=0,\;\pi_{14}=0,\;\pi_{24}=c_{24,2} \sinh\(2\pi u\),\;\pi_{34}=0.
\label{eq:periodicpartL3}
\eeq
As in the previous section, the next step is to find $\bP_a^{(1)}$ from the $\bP\mu$-system expanded to the first order, namely from
\beqa
\label{P1L3}
&&\tilde \bP_1^{(1)}+\bP_3^{(1)}=r_1,\\
&&\tilde \bP_2^{(1)}+\bP_4^{(1)}+\bP_1^{(1)} \cosh(2\pi u)=r_2,\\
&&\tilde \bP_3^{(1)}+\bP_1^{(1)}=r_3,\\
&&\tilde \bP_4^{(1)}+\bP_2^{(1)}-\bP_3^{(1)}\cosh(2\pi u)=r_4,
\label{P4L3}
\eeqa
where $r_a$ are defined by \eqref{eq:ra} and for $J=3$ are given explicitly in appendix \ref{sec:appnlo3}.
In attempting to solve this system, however, we encounter another technical complication. As one can see from \eqref{P1P3LOsolL3}-\eqref{P4LOsolL3}, the LO solution contains half-integer powers of $x$, meaning that the $\bP_a$ now have an extra branch point at infinity.
However, the operations $H$ and $K$ defined by \eq{eq:HK} work only for functions which have a Laurent expansion in integer powers of $x$. In order to solve equations of the type \eqref{eqs} on the class of functions which admit a Laurent-like expansion in only half-integer powers of $x$, we introduce the operations $H^*,K^*$:
\beqa
&&H^*\cdot f\equiv\frac{x+1}{\sqrt{x}}H\cdot\frac{\sqrt{x}}{x+1} f, \\
&&K^*\cdot f\equiv\frac{x+1}{\sqrt{x}}K\cdot\frac{\sqrt{x}}{x+1} f.
\eeqa
In terms of these operations the solution of the system \eqref{P1L3}-\eqref{P4L3} is
\beqa
\label{P1J3}
\bP_{1}^{(1)}&=&\frac{1}{2}\(H^*(r_1+r_3)+ K^*(r_1-r_3)\)+\bP_1^{\text{zm}},\\
\bP_{3}^{(1)}&=&\frac{1}{2}\(H^*(r_1+r_3)- K^*(r_1-r_3)\)+\bP_3^{\text{zm}},\\
\bP_{2}^{(1)}&=&\frac{1}{2}\(H^*(r_2+r_4)+ K^*(r_2-r_4)-\right.\nonumber\\
&-&\left.H^*\(\cosh(2\pi u)K^*(r_1-r_3)\)- K^*\(\cosh(2\pi u)H^*(r_1+r_3)\)\right)+\bP_2^{\text{zm}},\\
\label{P4J3}
\bP_{4}^{(1)}&=&\frac{1}{2}\(H^*(r_2+r_4)- K^*(r_2-r_4)-\right.\nonumber\\
&-&\left.H^*\(\cosh(2\pi u)K^*(r_1-r_3)\)+ K^*\(\cosh(2\pi u)H^*(r_1+r_3)\)\right)+\bP_4^{\text{zm}},
\eeqa
where $\bP_a^{\text{zm}}$ is a solution of the system \eq{P1L3}-\eq{P4L3} with the right hand side set to zero, whose explicit form is given in appendix \ref{sec:appnlo3} (see \eqref{P1J3zm}-\eqref{P4J3zm}) and which is parametrized by four constants $L_1,L_2,L_3,L_4$, e.g.
\beqa
\bP_1^{\text{zm}}=L_1 x^{-1/2}+L_3x^{1/2}.
\eeqa
These constants are fixed by requiring correct asymptotics of $\bP_a$, which also fixes the parameter $c_{24,2}$ in the zero mode \eq{eq:periodicpartL3} of $\mu_{ab}$ \footnote{Actually in this way $c_{24,2}$ is fixed to be zero.}. Indeed, a priori $\bP_2$ and $\bP_1$ have wrong asymptotics. Imposing a constraint that $\bP_2$ decays as $u^{-5/2}$ and $\bP_1$ decays as $u^{-3/2}$ produces five equations, which fix all the parameters uniquely.
Skipping the details of the intermediate calculations, we present the final result for the anomalous dimension:
\beqa
\label{gamma2L3}
&&\gamma^{(2)}_{J=3}=\oint \frac{du_x}{2\pi i}\oint \frac{du_y}{2\pi i}
i \frac{1}{g^2(I_2-I_4)^3} \left[\frac{2 \left(x^6-1\right) y (\ch_-^y)^2 (I_2-I_4)}{x^3 \left(y^2-1\right)}-\right.\\ \nn
&&-\frac{4 \ch_-^x
\ch_-^y \left(x^3 y^3-1\right) \left(I_2 x^5 y^3+I_2-I_4 x^2 \left(x y^3+1\right)\right)}{x^3
\left(x^2-1\right) y^3}+\\ \nn
&& +\frac{(y^2-1) (\ch_-^y)^2 I_2 \left( (x^8+1) \left(2 y^4+3 y^2+2\right)-(x^6+x^2)
\left(y^2+1\right)^2\right)}{x^3 \left(x^2-1\right) y^3}-
\\ \nn
&& -\frac{(y^2-1) (\ch_-^y)^2 I_4 \left((x^8+1) y^2+(x^6+x^2) \left(y^4+1\right)\right)}{x^3 \left(x^2-1\right) y^3}-
\\ \nn
&&-\frac{4 I_2 \ch_-^y (x-y) (x y-1) \left(I_2
\left(\(x^6+1\) \left(y^3+y\right)+\(x^5+x\) \left(y^4+y^2+1\right)-x^3 \left(y^4+1\right)\right)+I_4 x^3
y^2\right)}{x^3 \left(x^2-1\right) y^3}-\\ \nn
&& \left.-\frac{I_2^2 (y^2-1) (x-y) (x y-1) \left(I_2 \left(\(x^6 +x^4 +x^2 +1\)y+2 x^3
\left(y^2+1\right)\right)+I_4 \left(x^5+x\right) \left(y^2+1\right)\right)}{x^3 \left(x^2-1\right) y^3}\right]\\ \nn
&& \frac{1}{4\pi i}\d_u \log\frac{\Gamma (i u_x-i u_y+1)}{\Gamma (1-i u_x+i u_y)}.
\eeqa
We defined $\ch_-^x=\cosh_-(x)$ and $\ch_-^y=\cosh_-(y)$, where $\cosh_-(x)$ is the part of the Laurent expansion of $\cosh\(g(x+1/x)\)$ vanishing at infinity, i.e.
\beq
\cosh_-(x)=\sum_{k=1}^{\infty} I_{2k}x^{-2k}.
\eeq
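In this normalization the coefficients $I_{2k}$ are modified Bessel functions, $I_{2k}=I_{2k}(2g)$, which follows from the standard generating function $e^{g(x+1/x)}=\sum_{n}I_n(2g)\,x^n$. A short self-contained numerical check of this identification (our own sketch, not part of the derivation; it extracts the Laurent coefficients of $\cosh\(g(x+1/x)\)$ as Fourier modes on the unit circle):

```python
import math

def bessel_I(n, t, terms=40):
    # modified Bessel function of the first kind via its power series
    return sum((t / 2) ** (2 * m + n) / (math.factorial(m) * math.factorial(m + n))
               for m in range(terms))

def laurent_coeff(n, g, steps=4000):
    # coefficient of x^n in cosh(g(x + 1/x)) on |x| = 1, where x = e^{i theta}:
    # c_n = (1/2pi) * integral of cosh(2 g cos(theta)) cos(n theta) dtheta
    total = 0.0
    for k in range(steps):
        th = 2 * math.pi * k / steps
        total += math.cosh(2 * g * math.cos(th)) * math.cos(n * th)
    return total / steps

g = 0.3
# even modes match I_{2k}(2g); odd modes vanish, so only even powers appear
assert abs(laurent_coeff(2, g) - bessel_I(2, 2 * g)) < 1e-10
assert abs(laurent_coeff(4, g) - bessel_I(4, 2 * g)) < 1e-10
assert abs(laurent_coeff(1, g)) < 1e-10
```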
The result for $J=4$ is given in appendix \ref{sec:SolvingPmuL4}.
\section{Weak coupling tests and predictions}
\label{sec:weak}
Our results for the curvature function $\gamma^{(2)}(g)$ at $J=2,3,4$ (Eqs. \eq{gamma2L2}, \eq{gamma2L3}, \eq{gamma2L4}) are straightforward to expand at weak coupling. We give expansions to 10 loops in \text{appendix \ref{sec:weakS3}}. Let us start with the $J=2$ case, for which we found
\beqa
\label{weak22}
\gamma_{J=2}^{(2)}&=&-8 g^2 \zeta_3+g^4 \left(140 \zeta_5-\frac{32 \pi ^2 \zeta_3}{3}\right)+g^6 \left(200 \pi ^2 \zeta_5-2016
\zeta_7\right)
\\ \nn
&+&g^8 \left(-\frac{16 \pi ^6 \zeta_3}{45}-\frac{88 \pi ^4 \zeta_5}{9}-\frac{9296 \pi ^2 \zeta_7}{3}+27720 \zeta_9\right)
\\ \nn
&+&g^{10} \left(\frac{208 \pi ^8 \zeta_3}{405}+\frac{160 \pi ^6 \zeta_5}{27}+144
\pi ^4 \zeta_7+45440 \pi ^2 \zeta_9-377520 \zeta_{11}\right)
+\dots
\eeqa
Remarkably, at each loop order all contributions have the same transcendentality, and only simple zeta values (i.e. $\zeta_n$) appear. This is also true for the $J=3$ and $J=4$ cases.
We can check this expansion against known results, as the anomalous dimensions of twist two operators have been computed up to five loops for arbitrary spin \cite{Kotikov:2001sc,Kotikov:2003fb,Kotikov:2004er,Moch:2004pa,Staudacher:2004tk,Kotikov:2007cy,Bajnok:2008qj,Lukowski:2009ce} (see also \cite{Velizhanin:2013vla} and the review \cite{Freyhult:2010kc}).
To three loops they can be found solely from the ABA equations, while at four and five loops wrapping corrections need to be taken into account, which was done in \cite{Bajnok:2008qj,Lukowski:2009ce} by utilizing generalized L\"uscher formulas. All these results are given by linear combinations of harmonic sums
\beq
S_a(N) = \sum_{n=1}^N\frac{(\mathrm{sign}(a))^n}{n^{|a|}}, \ \
S_{a_1,a_2,a_3,\dots}(N)=\sum_{n=1}^N\frac{(\mathrm{sign}(a_1))^n}{n^{|a_1|}}S_{a_2,a_3,\dots}(n)
\eeq
with argument equal to the spin $S$. To make a comparison with our results we expanded these predictions in the $S\to 0$ limit. For this lengthy computation, as well as to simplify the final expressions, we used the \verb"Mathematica" packages HPL \cite{HPL}, the package \cite{VolinPackage} provided with the paper \cite{Leurent:2013mr}, and the HarmonicSums package \cite{Ablinger}.
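The nested harmonic sums above can be evaluated exactly in rational arithmetic directly from the recursive definition; a minimal sketch of ours, for illustration:

```python
from fractions import Fraction

def harmonic_sum(indices, N):
    # Nested harmonic sum S_{a1,a2,...}(N) following the recursive definition:
    # S_a(N) = sum_{n=1}^N sign(a)^n / n^|a|, with the inner sum evaluated at n.
    if not indices:
        return Fraction(1)
    a, rest = indices[0], indices[1:]
    sign = 1 if a > 0 else -1
    return sum(Fraction(sign ** n, n ** abs(a)) * harmonic_sum(rest, n)
               for n in range(1, N + 1))

# S_1(3) = 1 + 1/2 + 1/3
assert harmonic_sum([1], 3) == Fraction(11, 6)
# S_{-2}(2) = -1 + 1/4
assert harmonic_sum([-2], 2) == Fraction(-3, 4)
# S_{1,1}(N) = (S_1(N)^2 + S_2(N)) / 2, a standard shuffle identity
N = 6
s1, s2, s11 = harmonic_sum([1], N), harmonic_sum([2], N), harmonic_sum([1, 1], N)
assert s11 == (s1 ** 2 + s2) / 2
```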
In this way we have confirmed the coefficients in \eq{weak22} to four loops. Let us note that the expansion of harmonic sums leads to multiple zeta values (MZVs), which however cancel in the final result, leaving only $\zeta_n$.
Importantly, the part of the four-loop coefficient which comes from the wrapping correction is essential for matching with our result. This is a strong confirmation that our calculation based on the $\bP\mu$-system is valid beyond the ABA level. Additional evidence that our result incorporates all finite-size effects is found at strong coupling (see section \ref{sec:SlopeSlopeStrongCoupling}).
For operators with $J=3$, our prediction at weak coupling is
\beqa
\gamma_{J=3}^{(2)}&=&-2g^2\zeta_3+g^4\(12 \zeta_5-\frac{4 \pi ^2 \zeta_3}{3}\)
+g^6\(\frac{2 \pi ^4 \zeta_3}{45}+8 \pi ^2 \zeta_5-28
\zeta_7\)\\ \nn
&+&
g^8\(-\frac{4 \pi ^6 \zeta_3}{45}-\frac{4 \pi ^4 \zeta_5}{15}-528 \zeta_9\)
+\dots
\eeqa
The known results for any spin in this case are available up to six loops, including the wrapping correction which first appears at five loops \cite{Beccaria:2007cn,Beccaria:2009eq,Velizhanin:2010cm}. Expanding them at $S\to 0$, we have checked our calculation to four loops.\footnote{As a further check it would be interesting to expand to order $S^2$ the known results for twist 2 operators at five loops, and for twist 3 operators at five and six loops -- all of which are given by huge expressions.}
For future reference, in appendix \ref{sec:weakS3} we present an expansion of the known results for $J=2,3$ up to order $S^3$ at the first several loop orders. In particular, we found that multiple zeta values appear in this expansion, which did not happen at lower orders in $S$.
\FIGURE[ht]
{
\label{fig:j4plot}
\begin{tabular}{cc}
\includegraphics[scale=0.9]{j4plot}\\
\end{tabular}
\caption{\textbf{One-loop energy at $J=4$ from the Bethe ansatz.} The dashed line shows the result from the $\bP\mu$-system for the coefficient of $S^2$ in the 1-loop energy at $J=4$, i.e. $-\frac{14 \zeta_3}{5}+\frac{48 \zeta_5}{\pi ^2}-\frac{252 \zeta_7}{\pi
^4}\approx-0.931$ (see \eq{gamma42weak}). The dots show the Bethe ansatz prediction \eq{gammaJ} expanded to orders $1/J^3,1/J^4,\dots,1/J^8$ (the order of expansion $n$ corresponds to the horizontal axis), and it appears to converge to the $\bP\mu$-system result.
}
}
Let us now discuss the $J=4$ case. The expansion of our result reads:
\beqa
\label{gamma42weak}
\gamma_{J=4}^{(2)}&=&g^2 \left(-\frac{14 \zeta_3}{5}+\frac{48 \zeta_5}{\pi ^2}-\frac{252 \zeta_7}{\pi
^4}\right) \\ \nn
&+&g^4 \left(-\frac{22 \pi ^2 \zeta_3}{25}+\frac{474 \zeta_5}{5}-\frac{8568 \zeta_7}{5 \pi
^2}+\frac{8316 \zeta_9}{\pi ^4}\right)\\ \nn
&+&g^6 \left(\frac{32 \pi ^4 \zeta_3}{875}+\frac{3656 \pi ^2 \zeta_5}{175}-\frac{56568 \zeta_7}{25}+\frac{196128 \zeta_9}{5 \pi ^2}-\frac{185328 \zeta_{11}}{\pi ^4}\right) \\ \nn
&+&g^8 \left(-\frac{4 \pi ^6 \zeta_3}{175}-\frac{68 \pi ^4 \zeta_5}{75}-\frac{55312 \pi ^2 \zeta_7}{125}+\frac{1113396 \zeta_9}{25}-\frac{3763188 \zeta_{11}}{5 \pi ^2}\right.
\\ \nn
&& \ \ \ \ \ \ + \left.\frac{3513510 \zeta_{13}}{\pi ^4} \right)+\dots
\eeqa
Unlike for the $J=2$ and $J=3$ cases, we could not find in the literature a closed expression for the energy at any spin $S$, even at one loop; however, there is another way to check our result. One can expand the asymptotic Bethe ansatz equations at large $J$ for fixed values of $S=2,4,6,\dots$ and then extract the coefficients in the expansion which are polynomial in $S$. This was done in \cite{Beccaria:2012kp} (see appendix C there), where at one loop the expansion was found up to order $1/J^6$:
\beqa
\label{gammaJ}
\nn
\gamma(S,J) &=& g^2\(\frac{S}{2\,J^{2}}-\Big(\frac{S^{2}}{4}+\frac{S}{2}\Big)\,\frac{1}{ J^{3}}
+\Big[
\frac{3 S^3}{16}+\Big(\frac{1}{8}-\frac{\pi ^2}{12}\Big) S^2+\frac{S}{2}
\Big]\,\frac{1}{ J^{4}} +\dots\)+\mathcal{O}(g^4)
\\
\eeqa
Now taking the part proportional to $S^2$ and substituting $J=4$ one may expect to get a numerical approximation to the 1-loop coefficient in our result \eq{gamma42weak}, i.e. $-\frac{14 \zeta_3}{5}+\frac{48 \zeta_5}{\pi ^2}-\frac{252 \zeta_7}{\pi
^4}$. To increase the precision, we extended the expansion in \eq{gammaJ} to order $1/J^8$. Remarkably, in this way we confirmed the 1-loop part of the $\bP\mu$ prediction \eq{gamma42weak} with about $1\%$ accuracy! In Fig. \ref{fig:j4plot} one can also see that the ABA result converges to our prediction as the order of the expansion in $1/J$ is increased.
Also, in contrast to the $J=2$ and $J=3$ cases, we see that negative powers of $\pi$ appear in \eq{gamma42weak} (although all the contributions at a given loop order still have the same transcendentality). It would be interesting to understand why this happens from the gauge theory perspective, especially since the expansion of the leading $S$ term \eq{slopeIn} has the same structure for all $J$,
\beq
\gamma_{J}^{(1)}=\frac{8 \pi ^2 g^2}{J (J+1)}-\frac{32 \pi ^4 g^4}{J (J+1)^2
(J+2)}+\frac{256 \pi ^6 g^6}{J (J+1)^3 (J+2) (J+3)}
+\dots
\eeq
The change of structure at $J=4$ might be related to the fact that for $J\geq 4$ the ground state anomalous dimension even at one loop is expected to be an irrational number for integer $S>0$ (see \cite{Beccaria:2008pp}, \cite{Belitsky:2008mg}), and thus cannot be written as a linear combination of harmonic sums with integer coefficients.
In the next section we will discuss tests and applications of our results at strong coupling.
\section{Strong coupling tests and predictions}
\label{sec:SlopeSlopeStrongCoupling}
In this section we will present the strong coupling expansion of our results for the curvature function, and link these results to anomalous dimensions of short operators at strong coupling. We will also obtain new predictions for the BFKL pomeron intercept.
\subsection{Expansion of the curvature function for $J=2,3,4$}
To obtain the strong coupling expansion of our exact results for the curvature function, we evaluated them numerically with high precision for a range of values of $g$ and then made a fit to find the expansion coefficients. It would also be interesting to carry out the expansion analytically, and we leave this for the future.
For numerical study it is convenient to write our exact expressions \eq{gamma2L2}, \eq{gamma2L3}, \eq{gamma2L4} for $\gamma^{(2)}(g)$, which have the form
\beq
\label{ints}
\gamma^{(2)}(g)=\oint {du_x}\oint {du_y} f(x,y) \d_{u_x} \log\frac{\Gamma (i u_x-i u_y+1)}{\Gamma (1-i u_x+i u_y)}
\eeq
where the integration goes around the branch cut between $-2g$ and $2g$, in a slightly different way (we remind that we use the notation $x+\ofrac{x}=\frac{u_x}{g}$ and $y+\ofrac{y}=\frac{u_y}{g}$). Namely, by changing the variables of integration to $x,y$ and integrating by parts, one can write the result as
\beq
\label{ints2}
\gamma^{(2)}(g)=\oint {dx}\oint {dy} F(x,y) \log\frac{\Gamma (i u_x-i u_y+1)}{\Gamma (i u_y-i u_x+1)}
\eeq
where $F(x,y)$ is some polynomial in the following variables: $x,\;1/x,\;y,\;1/y,\;\sh_-^x$ and $\sh_-^y$ (for $J=3$ it includes $\ch_-^x,\;\ch_-^y$ instead of the $\sh_-$ functions). The integral in \eq{ints2} is over the unit circle. The advantage of this representation is that plugging in $\sh_-^x$, $\sh_-^y$ as series expansions (truncated to some large order), we see that it only remains to compute integrals of the kind
\beqa
C_{r,s}&=&\frac{1}{i}\oint\frac{dx}{2\pi}\oint\frac{dy}{2\pi}x^r y^s\log\frac{\Gamma(i u_x-iu_y+1)}{\Gamma(i u_y-iu_x+1)}
\eeqa
These are nothing but the coefficients of the BES dressing phase \cite{Beisert:2006ez,Dorey:2007xn,Beisert:2006ib,Vieira:2010kb}. They can be conveniently computed using the strong coupling expansion \cite{Beisert:2006ez}
\beq
C_{r,s}=\sum_{n=0}^\infty\[-\frac{2^{-n-1} (-\pi )^{-n} g^{1-n} \zeta_n
\left(1-(-1)^{r+s+4}\right) \Gamma \left(\frac{1}{2}
(n-r+s-1)\right) \Gamma \left(\frac{1}{2}
(n+r+s+1)\right)}{\Gamma (n-1) \Gamma \left(\frac{1}{2}
(-n-r+s+3)\right) \Gamma \left(\frac{1}{2} (-n+r+s+5)\right)}\]
\eeq
However, this expansion is only asymptotic and does not converge. For fixed $g$ the terms start growing with $n$ once $n$ exceeds some value $N$, and we only summed the terms up to $n=N$, which gives the value of $C_{r,s}$ with very good precision for large \text{enough $g$}.
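Summing up to the smallest term is the standard optimal truncation of an asymptotic series. As a self-contained illustration of the procedure (our own sketch, using the textbook asymptotic series $e^x E_1(x)\sim\sum_k (-1)^k k!/x^{k+1}$ as a stand-in for the $C_{r,s}$ expansion), one can check that the truncated sum reproduces the exact value up to an error of the order of the first omitted term:

```python
import math

def truncated_asymptotic(x):
    # optimal truncation: sum terms of the divergent series
    # e^x E_1(x) ~ sum_k (-1)^k k! / x^{k+1} while |terms| keep decreasing
    total, k = 0.0, 0
    term = 1.0 / x
    while True:
        nxt = term * (-(k + 1)) / x
        total += term
        if abs(nxt) >= abs(term):
            return total, abs(nxt)  # value and size of first omitted term
        term, k = nxt, k + 1

def exact(x, steps=100000, cutoff=60.0):
    # e^x E_1(x) = integral_0^infty e^{-t} / (x + t) dt, via the midpoint rule
    h = cutoff / steps
    return sum(math.exp(-(i + 0.5) * h) / (x + (i + 0.5) * h) for i in range(steps)) * h

val, err_est = truncated_asymptotic(10.0)
# the truncation error is bounded by the first omitted term
assert abs(val - exact(10.0)) < 2 * err_est
```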
Using this approach we computed the curvature function for a range of values of $g$ (typically we took $7\leq g \leq 30$) and then fitted the result as an expansion in $1/g$. This gave us only numerical values of the expansion coefficients, but in fact we found that with very high precision the coefficients are as follows. For $J=2$
\beqa
\label{eq:ssj2}
\gamma^{(2)}_{J=2}&=&-\pi ^2 g^2+\frac{\pi g}{4}+\frac{1}{8}-\ofrac{\pi g}\(\frac{3 \zeta_3}{16}+\frac{3}{512}\)-\frac{1}{\pi^2g^2}\(\frac{9 \zeta_3}{128}+\frac{21}{512}\)
\\ \nn
&+&
\frac{1}{\pi^3g^3}\(\frac{3 \zeta_3}{2048}+\frac{15 \zeta_5}{512}-\frac{3957}{131072}\) + \dots\;,
\eeqa
then for $J=3$
\beqa
\label{eq:ssj3}
\gamma^{(2)}_{J=3}&=&-\frac{8 \pi ^2 g^2}{27}+\frac{2 \pi g}{27}+\frac{1}{12}-
\ofrac{\pi g}\(
\frac{1}{216}
+\frac{\zeta_3}{8}
\)-
\frac{1}{\pi^2g^2}\(\frac{3 \zeta_3}{64}+\frac{743}{13824}\)
\\ \nn
&+&
\frac{1}{\pi^3g^3}\(\frac{41 \zeta_3}{1024}+\frac{35 \zeta_5}{512}-\frac{5519}{147456}\) + \dots\;,
\eeqa
and finally for $J=4$
\beqa
\gamma^{(2)}_{J=4}&=&-\frac{\pi ^2 g^2}{8}+\frac{\pi g}{32}+\frac{1}{16}-\ofrac{\pi g}\(\frac{3 \zeta_3}{32}+\frac{15}{4096}\)-\frac{0.01114622551913}{g^2}
\\ \nn
&+&\frac{0.004697583899}{g^3}+ \dots\;.
\eeqa
To fix the coefficients of the first four terms in the expansion we were guided by known analytic predictions, which will be discussed below, and found that our numerical result matches these predictions with high precision. Then for $J=2$ and $J=3$ we extracted from the fit the numerical values of the coefficients of $1/g^2$ and $1/g^3$, and plugging them into the online calculator EZFace \cite{ezface} we obtained a prediction for their exact values as combinations of $\zeta_3$ and $\zeta_5$. Fitting our numerical results again with these exact values fixed, we found that the precision of the fit at the previous orders in $1/g$ increased. This is a highly nontrivial test of the proposed exact values of the $1/g^2$ and $1/g^3$ terms. For $J=2$ we confirmed the coefficients of these terms with absolute precision $10^{-17}$ and $10^{-15}$ at $1/g^2$ and $1/g^3$ respectively (at previous orders of the expansion the precision is even higher). For $J=3$ the precision was correspondingly $10^{-15}$ and $10^{-13}$.
For $J=4$ we were not able to obtain a stable prediction for the $1/g^2$ and $1/g^3$ coefficients from EZFace, so above we gave their numerical values (with uncertainty in the last digit). However, below we will see that based on the $J=2$ and $J=3$ results one can make a prediction for these coefficients, which we again confirmed by checking that the precision of the fit at the previous orders in $1/g$ increases. The precision of the final fit at orders $1/g^2$ and $1/g^3$ is $10^{-16}$ and $10^{-14}$ respectively.
\subsection{Generalization to any $J$}
Here we will find an analytic expression for the strong coupling expansion of the curvature function which generalizes the formulas \eqref{eq:ssj2} and \eqref{eq:ssj3} to any $J$. To this end it will be useful to consider the structure of the classical expansion of the scaling dimension. A good entry point is the inverse relation $S(\Delta)$, frequently encountered in the context of BFKL. It satisfies a few basic properties: the curve $S(\Delta)$ goes through the points $(\pm J, 0)$ at any coupling, because at $S=0$ the operator is BPS. At the same time, for non-BPS states one should have $\Delta(\lambda)\propto \lambda^{1/4}\to\infty$ \cite{Gubser:1998bc}, which indicates that at fixed $\Delta$ the spin $S$ should go to zero. Combining this with the fixed points $(\pm J, 0)$, we conclude that at infinite coupling $S(\Delta)$ is simply the line $S = 0$. As the coupling becomes finite, $S(\Delta)$ starts bending away from the $S=0$ line and begins to look like a parabola going through the points $\pm J$, see fig. \ref{pic:bfkl}. Based on this qualitative picture and the scaling $\Delta(\lambda)\propto \lambda^{1/4}$ at $\lambda\rightarrow\infty$ with fixed $J$ and $S$, one can write down the following ansatz,
\beqa
\label{eq:sofdelta}
S(\Delta) &=& \left( \Delta^2 - J^2\right)\Bigl( \alpha_1 \frac{1}{\lambda^{1/2}} + \alpha_2 \frac{1}{\lambda} + (\alpha_3 + \beta_3 \Delta^2) \frac{1}{\lambda^{3/2}} + (\alpha_4 + \beta_4 \Delta^2) \frac{1}{\lambda^{2}} \Bigr.\\
\nn
&+&
\Bigl.
(\alpha_5 + \beta_5 \Delta^2+\gamma_5\Delta^4) \frac{1}{\lambda^{5/2}}
+(\alpha_6 + \beta_6 \Delta^2+\gamma_6\Delta^4) \frac{1}{\lambda^{3}}
+\dots
\Bigr).
\eeqa
We omit odd powers of the scaling dimension from the ansatz, as only the square of $\Delta$ enters the $\bP\mu$-system. We can now invert the relation and express $\Delta$ in terms of $S$ at strong coupling, which gives
\beq
\label{eq:delta_squared_basso}
\Delta^2=J^2+S
\(
A_1\sqrt{\lambda}+A_2+\dots
\)
+S^2
\(
B_1+\frac{B_2}{\sqrt\lambda}
+\dots
\)
+S^3
\(
\frac{C_1}{\lambda^{1/2}}
+\frac{C_2}{\lambda}
+\dots
\)
+{\cal O}({ S}^4)\;,
\eeq
where the coefficients $A_i,\;B_i,\;C_i$ are some functions of $J$. There exists a one-to-one mapping between the coefficients $\alpha_i$, $\beta_i$, etc. and $A_i$, $B_i$ etc, which is rather complicated but easy to find. We note that this structure of $\Delta^2$ coincides with Basso's conjecture in \cite{Basso:2011rs} for mode number $n=1$ \footnote{The generalization of \eq{eq:delta_squared_basso} for $n>1$ is not fully clear, as noted in \cite{Gromov:2011bz}, and this case will be discussed in appendix \ref{sec:appN}.}. The pattern in \eq{eq:delta_squared_basso} continues to higher orders in $S$ with further coefficients $D_i$, $E_i$, etc. and powers of $\lambda$ suppressed incrementally. This structure is a nontrivial constraint on $\Delta$ itself as one easily finds from \eq{eq:delta_squared_basso} that
\beqa\la{eq:delta_basso}
\Delta&=&J+\frac{S}{2J}
\(
A_1\sqrt{\lambda}+A_2+\frac{A_3}{\sqrt{\lambda}}+\dots
\)\\
\nn&+&S^2
\(
- \frac{A_1^2}{8J^3} \, \lambda
- \frac{A_1A_2}{4J^3} \, \sqrt{\lambda}
+\[\frac{B_1}{2J}-\frac{A_2^2+2A_1 A_3}{8J^3}\]
+
\[
\frac{B_2}{2J}
-\frac{A_2A_3+A_1A_4}{4J^3}
\] \frac{1}{\sqrt\lambda}
+\dots
\).
\eeqa
By definition the coefficients of $S$ and $S^2$ are the slope and curvature functions respectively, so now we have their expansions at strong coupling in terms of $A_i,\;B_i,\;C_i$, etc. Since the $S$ coefficient only contains the constants $A_i$, we can find all of their values by simply expanding the slope function \eq{eq:resultLO} at strong coupling. We get
\beq
\label{eq:bassos_as}
A_1=2\;\;,\;\;
A_2=-1\;\;,\;\;
A_3=J^2-\frac{1}{4}\;\;,\;\;
A_4=J^2-\frac{1}{4}\dots\;.
\eeq
Note that in this series the power of $J$ increases by two at every second term, which is a direct consequence of omitting odd powers of $\Delta$ from \eq{eq:sofdelta}. We also expect the same pattern to hold for the coefficients $B_i$, $C_i$, etc.
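The passage from \eq{eq:delta_squared_basso} to \eq{eq:delta_basso} is a plain series inversion and is easy to verify numerically. A minimal sketch of ours (the $A_i$ are taken from \eq{eq:bassos_as}, while the values of $B_1$, $B_2$ are arbitrary trial numbers):

```python
import math

# Check that the S^2 coefficient of Delta = sqrt(Delta^2(S)) matches the
# grouped form: with Delta^2 = J^2 + S*a + S^2*b one has
# Delta = J + S*a/(2J) + S^2*(b/(2J) - a^2/(8 J^3)) + O(S^3).
J, lam = 2.0, 50.0
sq = math.sqrt(lam)
A1, A2, A3, A4 = 2.0, -1.0, J**2 - 0.25, J**2 - 0.25   # values of A_i
B1, B2 = 1.5, 0.7                                       # trial values
a = A1 * sq + A2 + A3 / sq + A4 / lam
b = B1 + B2 / sq

# S^2 coefficient obtained directly from the Taylor expansion of the square root
direct = b / (2 * J) - a**2 / (8 * J**3)

# the same coefficient grouped by powers of lambda, keeping terms down to
# 1/sqrt(lambda), as in the expansion of Delta above
grouped = (-A1**2 / (8 * J**3) * lam
           - A1 * A2 / (4 * J**3) * sq
           + (B1 / (2 * J) - (A2**2 + 2 * A1 * A3) / (8 * J**3))
           + (B2 / (2 * J) - (A2 * A3 + A1 * A4) / (4 * J**3)) / sq)

# the two differ only by terms of order 1/lambda and higher
assert abs(direct - grouped) < 1.0 / lam
```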
The curvature function written in terms of $A_i$, $B_i$, etc. is given by
\beqa
\label{eq:ss_abc}
\gamma^{(2)}_{J}(g) &=& -\frac{2 \pi ^2 g^2 A_1^2 }{J^3} - \frac{\pi g A_1 A_2 }{J^3}-\frac{A_2^2+2 A_1 A_3-4 B_1 J^2}{8 J^3} - \frac{A_2 A_3+A_1 A_4-2 B_2 J^2}{16 \pi g J^3} \\
&-& \frac{A_3^2+2 A_2 A_4+2 A_1 A_5-4 B_3 J^2}{128 \pi^2 g^2 J^3} - \frac{A_3 A_4 + A_2 A_5 + A_1 A_6 - 2 B_4 J^2}{256 \pi^3 g^3 J^3} + \mc{O}\left(\frac{1}{g^4}\right). \nonumber
\eeqa
The remaining unknowns here (up to order $1/g^4$) are $B_1$ and $B_2$, which we expect to be constant due to the power pattern noticed above, and $B_3$ and $B_4$, which we expect to have the form $a J^2 + b$ with $a$ and $b$ constant.
These unknowns are immediately fixed by comparing the general curvature expansion \eq{eq:ss_abc} to the two explicit cases that we know for $J=2$ and $J=3$. We find
\beq
\label{eq:b1b2}
B_1=3/2\;\;,\;\;B_2=-3\,\zeta_3+\frac{3}{8},
\eeq
and
\beqa
\label{eq:b3b4}
B_{3} = -\frac{J^2}{2}-\frac{9 \, \zeta_3}{2}+\frac{5}{16} \;\;,\;\; B_{4} = \frac{3}{16} J^2 (16 \, \zeta_3+20 \, \zeta_5-9)-\frac{15 \, \zeta_5}{2}-\frac{93 \, \zeta_3}{8}-\frac{3}{16}.
\eeqa
Having fixed all the unknowns we can write the strong coupling expansion of the curvature function for arbitrary values of $J$ as
\beqa
\gamma^{(2)}_{J}(g) &=& -\frac{8 \pi ^2 g^2}{J^3}+\frac{2 \pi g}{J^3}+\frac{1}{4 J}+\frac{1-J^2 (24 \, \zeta_3 +1)}{64 \pi g J^3} - \frac{8 J^4+J^2 (72 \, \zeta_3 +11)-4}{512 g^2 \left(\pi ^2 J^3\right)} \nonumber \\
&+& \frac{3 \left(8 J^4 (16 \, \zeta_3 +20 \, \zeta_5-7)-16 J^2 (31 \, \zeta_3 +20 \, \zeta_5+7)+25\right)}{16384 \pi ^3 g^3 J^3} + \mc{O}\left(\frac{1}{g^4}\right).
\eeqa
Expanding $\gamma^{(2)}_{J=4}$ defined in \eq{gamma2L4} at strong coupling numerically we were able to confirm the above result with high precision.
\subsection{Anomalous dimension of short operators}
\label{sec:Konishidimension}
In this section we will use the knowledge of the functions $\gamma^{(n)}_J$ at strong coupling to find the strong coupling expansions of scaling dimensions of operators with finite $S$ and $J$; in particular, we will find the three-loop coefficient of the Konishi operator by utilizing the techniques of
\cite{Basso:2011rs,Gromov:2011bz}. What follows is a quick recap of the main ideas in these papers.
We are interested in the coefficients of the strong coupling expansion of $\Delta$, namely
\beq
\Delta = \Delta^{(0)} \lambda^\frac{1}{4} + \Delta^{(1)} \lambda^{-\frac{1}{4}} + \Delta^{(2)} \lambda^{-\frac{3}{4}} + \Delta^{(3)} \lambda^{-\frac{5}{4}} + \dots
\eeq
First, we use Basso's conjecture \eq{eq:delta_squared_basso} and by fixing $S$ and $J$ we re-expand the square root of $\Delta^2$ at strong coupling to find
\beq
\label{eq:delta_abc}
\Delta = \sqrt{A_1 S} \, \sqrt[4]{\lambda} + \frac{\sqrt{A_1} \left( J^2 + A_2 S + B_1 S^2 \right)}{2 A_1 \sqrt{S}} \, \frac{1}{\sqrt[4]{\lambda}} + {\cal O}\(\frac{1}{\lambda^\frac{3}{4}}\).
\eeq
Thus we reformulate the problem entirely in terms of the coefficients $A_i$, $B_i$, $C_i$, etc. For example, the next coefficient in the series, namely the two-loop term is given by
\beq
\label{eq:delta_2loops_abc}
\Delta^{(2)} = -\frac{\left(2 A_2 + 4 B_1+J^2\right)^2-16 A_1 (A_3+2 B_2+4 C_1)}{16 \sqrt{2} A_2^{3/2}}.
\eeq
Further coefficients become more and more complicated, however a very clear pattern can be noticed after looking at these expressions: we see that the term $\Delta^{(n)}$ only contains coefficients with indices up to $n+1$, e.g. the tree level term $\Delta^{(0)}$ only depends on $A_1$, the one-loop term depends on $A_1$, $A_2$, $B_1$, etc. Thus we can associate the index of these coefficients with the loop level. Conversely, from the last section we learned that the letter of $A_i$, $B_i$, etc. can be associated with the order in $S$, i.e. the slope function fixed all $A_i$ coefficients and the curvature function in principle fixes all $B_i$ coefficients.
\subsubsection{Matching with classical and semiclassical results}
Looking at \eq{eq:delta_abc} we see that knowing $A_i$ and $B_i$ only takes us to one loop; in order to proceed we need to know some coefficients in the $C_i$ and $D_i$ series. This is where the next ingredient in this construction comes in, namely the knowledge of the classical energy and its semiclassical correction in the Frolov-Tseytlin limit, i.e. when $\mc S \equiv S/\sqrt\lambda$ and $\mc J \equiv J/\sqrt\lambda$ remain fixed, while $S$, $J$, $\lambda \rightarrow \infty$. Additionally, we will take the limit ${\cal S} \rightarrow 0$ in all of the expressions that follow. In particular, the square of the classical energy has a very nice form in these limits and is given by \cite{Gromov:2011de,Gromov:2011bz}
\beqa
\label{delta_tree}
{\cal D}_{\rm classical}^2&=&{\cal J}^2+2 \, {\cal S} \, \sqrt{{\cal J}^2+1}+{\cal S}^2 \, \frac{2 {\cal J}^2+3}{2
{\cal J}^2+2}-{\cal S}^3 \, \frac{{\cal J}^2+3}{8
\left({\cal J}^2+1\right)^{5/2}}
+{\cal O}\left({\cal S}^4\right)\;,
\eeqa
where ${\cal D}_{\rm classical} \equiv \Delta_{\rm classical} / \sqrt{\lambda}$. The 1-loop correction to the classical energy is given by
\beqa
\label{delta_oneloop}
\Delta_{sc} \simeq
\frac{-{\cal S}}{2 \left({\cal J}^3+{\cal J}\right)}+{\cal S}^2\[\frac{3 {\cal J}^4+11 {\cal J}^2+17
}{16 {\cal J}^3 \left({\cal J}^2+1\right)^{5/2}}
\!-\!\sum_{\substack{m>0 \\ m\neq n}}\frac{n^3m^2 \left(2 m^2+n^2 {\cal J}^2-n^2\right)}{{\cal J}^3 \left(m^2-n^2\right)^2
\left(m^2+n^2 {\cal J}^2\right)^{3/2}}\]
\eeqa
If the parameters ${\cal S}$ and ${\cal J}$ are fixed to some values, the sum can be evaluated explicitly in terms of zeta functions. We now add up the classical and the 1-loop contributions\footnote{Note that they mix various orders of the coupling.}, take $S$ and $J$ fixed at strong coupling and compare the result to \eq{eq:delta_squared_basso}. By requiring consistency we are able to extract the following coefficients,
$$
\label{eq:abcd2}
\begin{array}{rcrlrlrcl}
A_1 &=& &2, &A_2& &=& -&1 \\
B_1 &=& &3/2, &B_2& &=& -&3\,\zeta_3+\frac{3}{8} \\
C_1 &=& -&3/8, &C_2& &=& &\frac{3}{16} \, (20 \, \zeta_3 + 20 \, \zeta_5 - 3) \\
D_1 &=& &31/64, &D_2& &=& & \frac{1}{512} (-4720 \, \zeta_3 - 4160 \, \zeta_5 -2520 \, \zeta_7 +81)
\end{array}
$$
As discussed in the previous section, we can in principle extract all coefficients with indices $1$ and $2$. In order to find e.g. $B_3$ we would need to extend the quantization of the classical solution to the next order. Note that the coefficients $A_1$, $A_2$ and $B_1$, $B_2$ have the same exact values that we extracted from the slope and curvature functions.
\subsubsection{Result for the anomalous dimensions at strong coupling}
\begin{table}[t]
\begin{tabular}{|l||rl|l|l|l|}
\hline
$(S,J)$ & \multicolumn{2}{|l|}{$\lambda^{-5/4}$ prediction} & $\lambda^{-5/4}$ fit & error & fit order\\
\hline
$(2,2)$ & $\frac{15 \, \zeta_5}{2} + 6 \, \zeta_3+\frac{1}{2}$&$= 15.48929958$ & $14.12099034$ & $9.69\%$ & 6\\
$(2,3)$ & $\frac{15 \, \zeta_5}{2} + \frac{63 \, \zeta_3}{8} - \frac{619}{512}$&$= 16.03417190$ & $14.88260078$ & $7.74\%$ & 5 \\
$(2,4)$ & $\frac{21 \, \zeta_3}{2} + \frac{15 \, \zeta_5}{2} - \frac{17}{8}$&$= 18.27355565$ & $16.46106336$ & $11.0\%$ & 7\\
\hline
\end{tabular}
\caption{Comparisons of strong coupling expansion coefficients for $\lambda^{-5/4}$ obtained from fits to TBA data versus our predictions for various operators. The fit order is the order of polynomials used for the rational fit function (see \cite{Gromov:2011bz} for details).}
\label{tab:coefficients}
\end{table}
The key observation in \cite{Gromov:2011bz} was that once written in terms of the coefficients $A_i$, $B_i$, $C_i$, the two-loop term $\Delta^{(2)}$ only depends on $A_{1,2,3}$, $B_{1,2}$, $C_{1}$ as can be seen in \eq{eq:delta_2loops_abc}. As discussed in the last section, the one-loop result fixes all of these constants except $A_3$, which in principle is a contribution from a true two-loop calculation. However we already fixed it from the slope function and thus we are able to find
\beq
\Delta^{(2)} = \frac{-21\,S^4 +(24-96\,\zeta_3) S^3+4 \left(5 J^2-3\right) S^2+8 J^2 S -4 J^4}{64 \sqrt{2}\,S^{3/2}}.
\eeq
Now that we know the strong coupling expansion of the curvature function and thus all the coefficients $B_i$, we can apply the same trick and find the three-loop strong coupling scaling dimension coefficient $\Delta^{(3)}$, which now depends on $A_{1,2,3,4}$, $B_{1,2,3}$, $C_{1,2}$, $D_1$. We find it to be
\beqa
\label{D3anyS}
\Delta^{(3)} &=& \frac{187\,S^6 + 6\,(208\,\zeta_3 + 160\,\zeta_5-43)\,S^5 +\left(-146\,J^2 - 4\,(336\,\zeta_3-41)\right)S^4 }{512 \sqrt{2}\,S^{5/2}} + \nonumber \\
&+& \frac{\left(32\,(6\,\zeta_3+7)\,J^2-88\right)S^3 + \left(-28\,J^4 + 40\,J^2\right) S^2 - 24\,J^4 S + 8\,J^6}{512 \sqrt{2}\,S^{5/2}},
\eeqa
for $S=2$ it simplifies to
\beq
\Delta^{(3)}_{S=2} = \frac{1}{512} \left(J^6-20 J^4+48 J^2 (4 \zeta_3 - 1)+64 (36 \, \zeta_3+60 \, \zeta_5+11)\right)
\eeq
and finally for the Konishi operator, which has $S=2$ and $J=2$ we get\footnote{The $\zeta_3$ and $\zeta_5$ terms are coming from semi-classics and were already known before \cite{Beccaria:2012xm} and match our result.}
\beq
\Delta^{(3)}_{S=2,J=2} = \frac{15 \, \zeta_5}{2} + 6 \, \zeta_3+\frac{1}{2}.
\eeq
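As a numerical consistency check, the $S=2$ formula above can be evaluated at $J=2,3,4$ and compared with the closed forms quoted in Table \ref{tab:coefficients}. The sketch below (the helper name is ours; the zeta values are the standard numerical constants) does exactly that.

```python
ZETA3 = 1.2020569031595943  # zeta(3)
ZETA5 = 1.0369277551433699  # zeta(5)

def delta3_s2(J):
    """Three-loop strong-coupling coefficient Delta^(3) at S=2,
    transcribed from the closed formula in the text."""
    return (J**6 - 20 * J**4 + 48 * J**2 * (4 * ZETA3 - 1)
            + 64 * (36 * ZETA3 + 60 * ZETA5 + 11)) / 512.0
```

For $J=2$ this reproduces $\frac{15\zeta_5}{2}+6\zeta_3+\frac{1}{2}\approx 15.4893$, and the $J=3,4$ values agree with the decimal predictions in the table.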
In order to compare our predictions with data available from TBA calculations \cite{Frolov:2010wt}, we employed Pad\'{e} type fits as explained in \cite{Gromov:2011bz}. The fit results are shown in Table \ref{tab:coefficients}; our predictions agree with the fitted values to within $\sim10\%$, which is rather good agreement. We note, however, that for the $J=3$ and especially the $J=4$ states we had fewer data points than for the $J=2$ state, so those fits are somewhat less reliable.
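The actual fits are rational fits to numerical TBA data; as a self-contained illustration of the underlying Pad\'{e} idea (not the fitting code of \cite{Gromov:2011bz}), the sketch below constructs an $[m/n]$ Pad\'{e} approximant from Taylor coefficients by solving a small linear system, demonstrated here on $e^x$.

```python
def solve_linear(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def pade(c, m, n):
    """[m/n] Pade approximant from Taylor coefficients c[0..m+n].
    Returns numerator and denominator coefficient lists (with q[0] = 1)."""
    cc = lambda i: c[i] if i >= 0 else 0.0
    A = [[cc(m + k - j) for j in range(1, n + 1)] for k in range(1, n + 1)]
    b = [-cc(m + k) for k in range(1, n + 1)]
    q = [1.0] + solve_linear(A, b)
    p = [sum(q[j] * cc(i - j) for j in range(min(i, n) + 1)) for i in range(m + 1)]
    return p, q

def horner(coeffs, x):
    """Evaluate a polynomial given by its coefficient list at x."""
    v = 0.0
    for a in reversed(coeffs):
        v = v * x + a
    return v
```

For $e^x$ with Taylor coefficients $(1,1,\tfrac12,\tfrac16,\tfrac1{24})$, the $[2/2]$ approximant at $x=1$ gives $19/7\approx 2.7143$, noticeably closer to $e$ than the truncated Taylor sum of the same data.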
\subsection{BFKL pomeron intercept}
\label{sec:bfkl}
The gauge theory operators that we consider in this paper are of high importance in high energy scattering amplitude calculations, especially in the Regge limit of high energy and fixed momentum transfer
\cite{ReggeOriginal,Gribov}. In this limit one can approximate the scattering amplitude as an exchange of effective particles, the so-called reggeized gluons, compound states of which are frequently called \emph{pomerons}. When the momentum transfer is large, perturbative computations are possible and the so-called `hard pomeron' appears, the BFKL pomeron \cite{bfkl}. The BFKL pomeron leads to a power law behaviour of scattering amplitudes $s^{j(\Delta)}$, where $j(\Delta)$ is called the Reggeon spin and $s$ is the squared centre-of-mass energy of the process. The remarkable connection between the pomeron and the operators we consider can be symbolically stated as
\beq
\mathrm{pomeron} = \Tr\(Z \; \cD_+^S \; Z \)+\dots
\eeq
where we are now considering twist two operators ($J=2$) and the spin $S$ can take complex values by analytic continuation. The Reggeon spin $j(\Delta)$ (also referred to as a Regge trajectory) is a function of the anomalous dimension of the operator and is related to the spin $S$ as $j(\Delta) = S(\Delta) + 2$. Some of these trajectories are shown in Figure \ref{pic:bfkl}. A very important quantity in this story is the BFKL intercept $j(0)$, which we consider next.
\FIGURE[ht]{
\label{pic:bfkl}
\begin{tabular}{cc}
\includegraphics[scale=0.7]{bfkl}\\
\end{tabular}
\caption{The BFKL trajectories $S(\Delta)$ at various values of the coupling. Blue lines are obtained using the known two loop weak coupling expansion \cite{Brower:2006ea,Kotikov:2002ab} and red lines are obtained using the strong coupling expansion \cite{Costa:2012cb,Kotikov:2013xu,Brower:2013jga}.}
}
\FIGURE[ht]{
\label{pic:intercept}
\begin{tabular}{cc}
\includegraphics[scale=0.7]{intercept}\\
\end{tabular}
\caption{The BFKL intercept $j(0) = 2 + S(0)$ dependence on the coupling constant $g$ at two orders at weak coupling (blue lines), four orders at strong coupling (red lines) and a Pad\'{e} type interpolating function in between (dashed line).}
}
One can also use the same techniques as in the previous section to calculate the strong coupling expansion of the BFKL intercept. As stated before, the intercept of a BFKL trajectory $j(\Delta)$ is simply $j(0)$ and we already wrote down an ansatz for $S(\Delta)$ in \eq{eq:sofdelta}. The coefficients $\alpha_i$, $\beta_i$, etc. are in one-to-one correspondence with the coefficients $A_i$, $B_i$ etc. from \eq{eq:delta_basso}, values of which we found in the previous sections. Plugging in their values we find
\beq
\alpha_1=1/2,\ \alpha_2=1/4,\ \alpha_3=-1/16\ ,\
\alpha_4=-\frac{3\zeta_3}{2}-\frac{1}{2},\
\eeq
\beq
\alpha_5=-\frac{9 \zeta_3}{2}-\frac{361}{256},\
\alpha_6=-\frac{39 \zeta_3}{4}-\frac{511}{128}
\eeq
\beq
\beta_3=-3/16,\ \beta_4=\frac{3\zeta_3}{8}-\frac{21}{64},\
\beta_5=\frac{9 \zeta_3}{8}-\frac{51}{128},\
\beta_6=\frac{45 \zeta_3}{8}+\frac{15 \zeta_5}{16}+\frac{141}{512}
\eeq
\beq
\gamma_5=\frac{21}{128},\ \gamma_6=-\frac{51 \zeta_3}{64}-\frac{15 \zeta_5}{64}+\frac{129}{256}
\eeq
Furthermore, setting $\Delta = 0$ we find the intercept to be
\beqa \nn
j(0) = 2 + S(0) &=& 2 -\frac{2}{\lambda^{1/2}}-\frac{1}{\lambda }+ \frac{1}{4\,\lambda^{3/2}}+\left(6 \zeta_3+2\right) \frac{1}{\lambda^2} \\
&+& \left(18 \, \zeta_3 + \frac{361}{64} \right) \frac{1}{\lambda^{5/2}} + \left(39 \, \zeta_3 + \frac{511}{32}\right) \frac{1}{\lambda^3} + \mathcal{O}\left(\frac{1}{\lambda^{7/2}}\right).
\eeqa
The first four terms successfully reproduce known results \cite{Costa:2012cb,Kotikov:2013xu,Brower:2013jga} and the last two terms of the series are a new prediction (their derivation relies on the knowledge of the constants $B_{3,4;J=2}$ found in the last section). In Figure \ref{pic:intercept} we show
plots of the intercept at weak and at strong coupling.
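For producing such plots, the truncated series is easily tabulated; the following sketch (function name and test values are ours) evaluates the intercept keeping a chosen number of $1/\sqrt{\lambda}$ corrections.

```python
ZETA3 = 1.2020569031595943  # zeta(3)

def bfkl_intercept(lam, order=6):
    """Truncated strong-coupling series for the BFKL intercept j(0),
    transcribed from the expansion in the text; 'order' is the number
    of 1/sqrt(lambda) corrections kept (up to 6)."""
    terms = [2.0,
             -2.0 * lam ** -0.5,
             -1.0 / lam,
             0.25 * lam ** -1.5,
             (6 * ZETA3 + 2) * lam ** -2,
             (18 * ZETA3 + 361.0 / 64) * lam ** -2.5,
             (39 * ZETA3 + 511.0 / 32) * lam ** -3]
    return sum(terms[: order + 1])
```

At large $\lambda$ the intercept approaches $2$ from below, and at moderate $\lambda$ successive truncations shrink, which is a quick check of the transcription.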
\section{Conclusions}
In this paper we applied the recently proposed $\bP\mu$-system of Riemann-Hilbert type equations to study anomalous dimensions in the $sl(2)$ sector of planar $\cN=4$ SYM theory. Our main results are the expressions \eq{gamma2L2}, \eq{gamma2L3} and \eq{gamma2L4} for the curvature function $\gamma_J^{(2)}(g)$, i.e. the coefficient of the $S^2$ term in the anomalous dimension at small spin $S$. These results correspond to operators with twist $J=2,3$ and $4$. Curiously, we found that they involve essential parts of the BES dressing phase in the integral representation.
We derived these results by solving the $\bP\mu$-system to order $S^2$ and they are exact at any coupling. While expansion in small $S$ (but at any coupling)
seems hardly possible to perform in the TBA approach, here it resembles a perturbative expansion -- the $\bP\mu$-system is solved order by order in $S$ and the coupling is kept arbitrary.
For $J=2$ and $J=3$ our calculation perfectly matches known results to four loops at weak coupling. This includes in particular the leading finite-size correction at $J=2$. At strong coupling we obtained the expansion of our results numerically, and also found full agreement with known predictions. This provides yet another check that our result incorporates all wrapping corrections. Going to higher orders in this expansion we were able to use the EZFace calculator \cite{ezface} to fit
the coefficients as linear combinations of $\zeta_3$ and $\zeta_5$ (and confirmed the outcome with high precision). By combining these coefficients with the other known results, we obtained the 3-loop coefficient in the Konishi anomalous dimension at strong coupling. This serves as a highly nontrivial prediction for a direct string theory calculation, which hopefully may be done along the lines of \cite{Vallilo:2011fj, Roiban:2011fe}. Our results also predict the value of two new coefficients for the pomeron intercept at strong coupling.
For the future analysis it would be interesting to build an integral equation which would generate iteratively small $S$ corrections
and to be able to approach finite values of $S$ at any coupling. Furthermore, extension of this approach to the boundary problems,
twisted boundary conditions and even q-deformations \cite{Bajnok:2013wsa, Bajnok:2012xc, betadef, Arutyunov:2012ai, Arutyunov:2012zt} would give a rich set of new analytical results.
Finally, applications of our methods to ABJM model \cite{Lambert,ABJM}
and comparison with the localization results \cite{Kapustin:2009kz, Drukker:2010nc, Drukker:2011zy} would give the unknown interpolation function
$h(\lambda)$, the only ingredient missing in the integrability framework \cite{GVABA,ABJMTBA1,ABJMTBA2}.
As the $\bP\mu$-system \cite{pmuABJM} for ABJM model has various peculiar features compared to ${\cal N}=4$ SYM
it would be especially interesting to study this case.
\section*{Acknowledgements}
We are grateful to M.~Alfimov, A.~Gorsky, V.~Kazakov, A.~Kotikov,
J.~Penedones,
A.~Tseytlin and D.~Volin for useful discussions. The research leading to these results has received funding from the People Programme (Marie Curie Actions) of the European Union's Seventh Framework Programme FP7/2007-2013/ under REA Grant Agreement No 317089 (GATIS).
N.G. is grateful to the Holograv Conference ``Gauge/Gravity Duality", MPI for Physics, Munich, 15-19 July 2013 where a part of this work was done for hospitality.
The work of F.~L.-M. is supported in part by the grants RFBR-12-02-00351-a and PICS-12-02-91052. We also wish to thank the STFC for partial support from the consolidated grant ST/J002798/1.
The role of networks and their structure has proved to be crucial in addressing a wide range of phenomena in complex systems. In economics it has been shown that the structure of the network can influence the fragility of the market \cite{Schweitzer}-\cite{contreras}. It has also been shown that, contrary to the classical view in which fluctuations in a regular network may cancel out, in scale-free networks such fluctuations may contribute to a turnover of the market \cite{Acemoglu}.
From a physical perspective, and in a simplified world, the economy can be viewed as a network of agents which interact with each other. When agents interact in a system and try to maximize a function such as utility, we expect a wide range of local equilibria. The existence of a spectrum of local equilibria then makes it hard to drive the system to a favored equilibrium and results in a hysteresis in the network. In this paper we demonstrate such hysteresis in an economic network in the context of the XY model.
A hot issue in economics, which resurfaced after the collapse of Lehman Brothers, is the role of government in times of crisis and of stimulating policies. Obama's stimulus policies, along with the Federal Reserve's expansionary programs, helped the economy of the United States recover. In the debates concerning the stimulus programs, some economists claimed that only big stimulations could help the economy recover quickly; see for example \cite{{krugmanend},{stiglitzfreefall}}. In other words, there was a suspicion that the market may resist recovery.
The existence of such resistance has been studied in \cite{hosseinyising} in an Ising-based model of the network of firms. Firms and corporations are customers of each other's products: they buy and sell their products to each other as intermediate goods. We can thus consider a network in which nodes are firms or corporations, and two nodes are connected if they have economic interactions; in other words, two firms are connected if they trade.
When two firms are connected, there should be some positive correlation between their levels of activity. If a firm increases/decreases its production, it buys more/less from, and sells more/less to, its neighbors. As a result it pushes its neighbors to work with a higher/lower level of activity. This positive correlation provides a hysteresis in the network.
In the United States, different states faced the crisis of 2008 at different levels, and consequently the rate of recovery after the crisis differed across states. In April 2012, while unemployment was as low as five percent in Iowa, it was close to eleven percent in California. If you provide a service in California, you cannot hire new employees merely because the economy is good in Iowa or other states; you should look at the activities of your neighbors in the network.
As a result of local interactions and positive correlations between neighbors, and similarly to many ferromagnetic models, we may have global order in the system and consequently resistance to change of the global state. The Ising model has been used before to explain the behavior of managers, firms, and corporations; see \cite{William}, \cite{Durlauf}. It has also been utilized to model behaviors in financial markets \cite{Zhou}. In the simplest model we can suppose that firms have the choice to work with either their maximum capacity or their minimum capacity, so they face binary choices. We can then ask what happens when we want to stimulate firms which work with minimum capacity: we should overcome the effect of the neighbors and bypass the wall between the two vacua of the system. Studying this problem, one finds that there is a minimum cost to change the state of an Ising model; in other words, stimulations with a cost below that threshold fail to recover the economy. This is mainly because, in the spectrum of energies of the configurations, there is a hump between the two vacua of the Ising model.
Simulating the network of firms with an Ising model sheds light on the metastable features of networks and the hysteresis against recovery. It has the benefit of being simple to model, analyze, and simulate. It has, however, its own restrictions. The major problem is that firms have much more than a binary choice: basically, they can work with any level of activity within a range. One relevant question, then, is what happens if we consider firms with a continuous choice of activity. Does there still exist a minimum bound for successful stimulations? To answer this question we simulate the network of firms with an XY model. Unlike the Ising model, in the XY model each agent has a continuous choice; as a result, in the configuration space the vacua can change without a hump in energy. It is then interesting to check whether, going from an agent-based model with binary choices to one with a continuous choice, a minimum cost is still required for a successful stimulation.
\section*{Hysteresis in an Ising Model}
Consider two firms or corporations which trade with each other. We connect them in network of firms via an edge. In each firm, managers can rise or reduce working hours. Further, they can employ new or fire old employees. In other words they have choice to decrease or increase the level of their productions or services. This choice however is limited. There is maximum capacity where production above it is impossible, and a minimum capacity where production below it results in loss. In the simplest case we suppose that firms have a binary choice of working with either maximum or minimum level of activity, see \cite{William}.
\begin{figure}[]
\includegraphics[width=.8\columnwidth]{fig1.eps}
\centering
\caption
The favored situation is the one where a firm and its neighbors have the same status, i.e. both working with maximum capacity or both with minimum capacity. In this figure we have considered a two-dimensional lattice where each firm is connected to four other firms. In times of crisis, where all firms work with minimum capacity, if the government compensates for the decline of the neighbors, each firm feels as if its neighbors were working with maximum capacity. In the Ising model, when the neighbors are downward, imposing an exogenous field equal to $8J$ is equivalent to all neighbors being upward.}\label{figising}
\end{figure}
Each manager looks at her neighbors and decides to minimize or maximize production with probabilities
\begin{eqnarray}\begin{split}\label{probabilitiyising}
&P_{\uparrow}\propto \exp{\{\frac{(N_{\uparrow}-N_{\downarrow})J}{T} \}},
\cr&P_{\downarrow}\propto \exp{\{\frac{(N_{\downarrow}-N_{\uparrow})J}{T}\}},
\end{split}\end{eqnarray}
where $N_{\uparrow/\downarrow}$ represents the number of neighbors which work with maximum/minimum capacity and $P_{\uparrow/\downarrow}$ indicates the probability to choose a high or low level of activity.
The parameter $J$ represents the strength of the connection between two firms; it should be related to the purchases of the neighbors from each other, which for now we have supposed to be homogeneous. In the economy, the bigger the trade, the stronger the interaction; in this model, the bigger the $J$, the stronger the interaction. So the strength of trade between firms is encapsulated in $J$. The parameter $T$ controls the stochastic behavior. Letting $T\rightarrow 0$ means that managers have no stochastic behavior; i.e., if the majority of the neighbors of a firm work with maximum/minimum capacity then it definitely works with maximum/minimum capacity. By comparison, if we let $T$ grow compared to $\bar N J$, where $\bar N$ is the average degree of the network, the chances for a firm to work with maximum or minimum capacity become equal; i.e., the behavior becomes random and the correlation between neighbors tends to zero.
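The choice rule above can be sketched in a few lines (a hypothetical helper of ours, with the normalization of the two weights made explicit):

```python
import math

def choice_probabilities(n_up, n_down, J, T):
    """Probability that a manager chooses maximum capacity (up) or minimum
    capacity (down), given the numbers of neighbors at maximum/minimum
    capacity, using the weights exp(+-(N_up - N_down) J / T)."""
    w_up = math.exp((n_up - n_down) * J / T)
    w_down = math.exp((n_down - n_up) * J / T)
    total = w_up + w_down
    return w_up / total, w_down / total
```

With equally split neighbors the two choices are equally likely; at low $T$ with a unanimous neighborhood the majority choice is almost certain, while at high $T$ the behavior becomes nearly random.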
If we aim to encapsulate the positive correlation between the activity of nodes and the uncertainty of behaviors in a single parameter, the best candidate is the temperature. If the interaction between firms is strong, the system resides in the cold phase of the ferromagnetic Ising model. In such a case, if a major portion of firms work with minimum capacity, then without shocks recovery is unlikely. Even if we accept modeling the real network of firms with a homogeneous Ising model, finding the proper temperature is not easy. The long-lasting stagnations after big downturns such as the Great Depression or the Great Recession, however, are a sign that in some sense metastable states can exist in the economy, which for our model suggests that the temperature should be relatively low. The stagnation after the Great Depression lasted more than a decade, leading up to the Second World War. Once the war started and government purchases surged, the economy got back on track; even after the war, when government purchases declined, the economy kept its good shape. That is why in Keynesian economics it is believed that without government stimulation the economy may live in a long-lasting depression. If we accept Keynes's terminology, then we should think of temperatures below the critical temperature.
When the government aims to stimulate the economy through fiscal policy, it places orders with the private sector. In network terms, it tries to compensate for the decline of the orders that nodes place with each other. In our dipole model this means that most of the dipoles are downward. When the government makes extra purchases from the private sector, for firms this new order compensates part of the decline of orders from their neighbors. So, to the strategies in Eq. (\ref{probabilitiyising}) we should add a term, similar to an external field, for the role of government stimulus purchases.
We suppose that in recession the system lives in the vacuum where most firms work with minimum capacity.
Now we impose a stimulus field in the upward direction and track the magnetization. We measure the number of Monte Carlo steps that the stimulus field needs to change the status of at least half of the dipoles to the upward direction, denoted by $\tau$. To state it more clearly, $\tau$ is the number of Monte Carlo steps needed for the stimulation to elevate the magnetization above zero. In the economy, a stimulus bill can be imposed over a few seasons or a couple of years; it is the total bill, however, that matters. So, in our Ising model of the economy we should be interested in $\tau H$.
We can impose a relatively weak field $H$, in which case the value of $\tau$ increases; on the contrary, we can impose a stronger field and decrease the value of $\tau$. Now there is a question: under what circumstances can we decrease $\tau H$? The dynamics of the Ising model and its metastability have been widely studied; see for example \cite{chakrabartih}-\cite{Rikvold} and references therein. Depending on the strength of the stimulating field, the responses of the system are divided into stochastic and deterministic regimes. Our interest is in the deterministic regime, where the system changes its vacuum in a predictable time.
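A minimal Monte Carlo sketch of this measurement, under simplifying assumptions of ours (heat-bath dynamics on a small two-dimensional periodic lattice, parameter values chosen for illustration only), counts the sweeps until the magnetization turns positive:

```python
import math
import random

def stimulation_time(L=16, J=1.0, T=1.5, H=8.0, max_sweeps=500, seed=1):
    """Heat-bath Monte Carlo on an L x L periodic lattice, started from the
    all-down state, under a stimulating field H; returns the number of
    sweeps tau needed for the magnetization to rise above zero, or None."""
    rng = random.Random(seed)
    spins = [[-1] * L for _ in range(L)]
    for sweep in range(1, max_sweeps + 1):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                  + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            h = J * nb + H  # local field from neighbors plus the stimulus
            p_up = 1.0 / (1.0 + math.exp(-2.0 * h / T))
            spins[i][j] = 1 if rng.random() < p_up else -1
        m = sum(sum(row) for row in spins) / (L * L)
        if m > 0:
            return sweep
    return None
```

With a strong stimulus ($H$ comparable to the full neighbor field) the flip happens within a sweep or two, while a weak stimulus leaves the lattice in the metastable down state.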
It can be shown that $\tau H$ has a minimum bound, below which a successful stimulation is impossible. Besides, it has been shown that this value translates into a minimum bound for successful stimulation in the economy as
\begin{eqnarray}
bill_{min}=0.44\Delta GDP
\end{eqnarray}
where $\Delta GDP$ is the gap between GDP in expansion and recession.
It is interesting to note that, despite the simplicities of the model, it made successful predictions for two of the biggest economies in the world: while the US stimulus bill was above this threshold and successful, the EU bill was far below this threshold and unsuccessful \cite{hosseinyising}.
The model is so far too simple to be reliable for application in economics. The important point, however, is that it suggests that local correlations in the network provide a hysteresis, and that to overcome such hysteresis we need a minimum bound for successful stimulation. The major question to be answered is what happens if we go to more realistic models. The Ising model supposes that agents have binary choices.
In the economy, managers can choose any level of activity between the maximum and minimum capacities.
Should we still expect a minimum bound for a successful stimulation in interactions with continuous choices, such as in the XY model?
\section*{The XY model}\label{secproductionfunction}
To bypass the restrictions of the Ising model and to be closer to the real world we consider the XY model, mainly because in the real world managers have much more than binary choices. In this perspective we can suppose that each manager, as an agent, chooses an angle in the XY plane. We then suppose that the cosine of the angle with the positive Y direction determines the level of activity of the firm: a zero angle means the highest level of activity and an angle equal to $\pi$ means the lowest level of activity, see Figure \ref{xy8}. In other words, given the firm $i$ we can denote its level of activity by
\begin{eqnarray}
Y_i=\frac{1}{2}\left(Y_{max}+Y_{min}\right)+\frac{1}{2}\left(Y_{max}-Y_{min}\right)\cos\theta_i\;\;
\end{eqnarray}
where $Y_{min}$ and $Y_{max}$ represent the minimum and maximum capacity of production of the firms.
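The mapping from angle to activity can be sketched as follows (a hypothetical helper of ours; note the factor $1/2$ multiplying the cosine term, required so that $\theta=0$ gives $Y_{max}$ and $\theta=\pi$ gives $Y_{min}$):

```python
import math

def activity(theta, y_min=0.0, y_max=1.0):
    """Level of activity of a firm at angle theta, measured from the
    positive Y direction; theta = 0 gives y_max, theta = pi gives y_min."""
    return 0.5 * (y_max + y_min) + 0.5 * (y_max - y_min) * math.cos(theta)
```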
\begin{figure}[]
\centering
\includegraphics[width=.6\columnwidth]{fig2.eps}
\caption{In the XY model of firms, the projection on the $Y$ direction identifies the level of activity. If $\cos\theta_i\approx 1$ the firm works with maximum capacity, $\cos\theta_i\approx 0$ means a middle level of activity, and $\cos\theta_i\approx -1$ identifies a low level of activity.}
\label{xy8}
\end{figure}
Now the XY model of firms makes sense: if the neighbors of a firm work with a higher level of activity, they push the firm to work at a higher level, and the intensity of their push depends on their angles, i.e. on their levels of activity. In our Monte Carlo simulation we update directions proportionally to the weights
\begin{eqnarray}
P(\theta_i)\propto\exp{\{\frac{\sum_{j} J\cos(\theta_i-\theta_j)}{T}\}}
\end{eqnarray}
where the summation runs over the neighbors of node $i$. This probability encodes one fact: though any direction might be chosen by each node, angles closer to those of the neighbors are more probable.
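A heat-bath update drawing $\theta_i$ from this conditional weight can be sketched by discretizing the circle (the bin count and parameter values are illustrative choices of ours; the positive sign in the exponent is what makes alignment with the neighbors more probable):

```python
import math
import random

def sample_angle(neighbor_angles, J=1.0, T=1.0, n_bins=64, rng=random):
    """Draw a new angle theta_i from the discretized conditional weight
    exp(+ J * sum_j cos(theta_i - theta_j) / T)."""
    thetas = [2 * math.pi * k / n_bins for k in range(n_bins)]
    weights = [math.exp(J * sum(math.cos(t - tj) for tj in neighbor_angles) / T)
               for t in thetas]
    r = rng.random() * sum(weights)
    acc = 0.0
    for t, w in zip(thetas, weights):
        acc += w
        if r <= acc:
            return t
    return thetas[-1]
```

With all neighbors at angle zero and a low temperature, the sampled angles cluster tightly around zero, i.e. around the neighbors' level of activity.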
In economic language this means that managers have stochastic and heterogeneous behavior; it is, however, more likely that they choose a level of activity close to that of their neighbors. Now suppose that in an economic crisis a majority of firms work with a low level of activity. We then ask: if we want to stimulate such a network to work with a higher level of activity, what is the response of the network?
Is there resistance from the network to recovery or, equivalently, is there still a minimum bound for a successful stimulation?
\section*{Results}
We first suppose that in recession all firms work with their minimum capacity; in the XY language, all dipoles point along the negative Y direction. A fiscal stimulation aims to compensate for the decline of orders, which in our XY language can be modeled by a magnetic field along the positive Y direction, driving the dipoles toward this direction. In an experiment we ran a simulation for a Watts-Strogatz network with 512 nodes, average degree $K=8$, and rewiring probability $P=0.1$. Since at the beginning all dipoles were along the negative Y direction, we had $\bar m_y=-1$. We then imposed a magnetic field with intensity $H$ and updated the directions of the dipoles under such a field. We tracked the net magnetization and measured how many Monte Carlo steps were needed for the stimulus field to elevate $\bar m_y$ above zero. The quantity of interest was $\tau H$, which in the economy means the total bill imposed within a number of seasons.
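A minimal sketch of this experiment, under simplifying assumptions of ours (discretized angles, a pure-Python Watts-Strogatz generator, and parameter values chosen for illustration rather than taken from the actual runs), is:

```python
import math
import random

def watts_strogatz(n, k, p, rng):
    """Watts-Strogatz network as an adjacency list: a ring lattice with k
    nearest neighbors per node, each edge rewired with probability p."""
    edges = set()
    for i in range(n):
        for d in range(1, k // 2 + 1):
            a, b = i, (i + d) % n
            if rng.random() < p:
                b = rng.randrange(n)
                while b == a or frozenset((a, b)) in edges:
                    b = rng.randrange(n)
            edges.add(frozenset((a, b)))
    adj = [[] for _ in range(n)]
    for e in edges:
        a, b = tuple(e)
        adj[a].append(b)
        adj[b].append(a)
    return adj

def xy_stimulation_time(n=128, k=8, p=0.1, J=1.0, T=1.0, H=12.0,
                        n_bins=32, max_sweeps=200, seed=3):
    """Start all dipoles along the negative Y direction (theta = pi), apply
    a stimulating field H along +Y, and return the number of heat-bath
    sweeps until the mean alignment sum(cos(theta))/n rises above zero."""
    rng = random.Random(seed)
    adj = watts_strogatz(n, k, p, rng)
    thetas = [math.pi] * n
    grid = [2 * math.pi * b / n_bins for b in range(n_bins)]
    for sweep in range(1, max_sweeps + 1):
        for i in rng.sample(range(n), n):
            weights = [math.exp((J * sum(math.cos(t - thetas[j]) for j in adj[i])
                                 + H * math.cos(t)) / T) for t in grid]
            r = rng.random() * sum(weights)
            acc = 0.0
            for t, w in zip(grid, weights):
                acc += w
                if r <= acc:
                    thetas[i] = t
                    break
        if sum(math.cos(t) for t in thetas) / n > 0:
            return sweep
    return None
```

A field strong enough to overcome the neighbor term flips the network within a few sweeps, while a weak field leaves it in the low-activity state, mirroring the two regimes discussed below.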
The result of our simulation for an ensemble of 1000 runs is graphed in Figure \ref{thh}. As can be seen, there are two regimes: a deterministic regime and a stochastic regime. If we impose a relatively strong field we can expect the magnetization to reach zero after a predictable period. If we impose a weak field, the response of the network is stochastic, in the sense that the standard deviation of $\tau H$ is comparable with its mean value.
In the world of economics, the stochastic regime is not of interest. First of all, neither policy makers nor politicians are interested in stimulations whose outcomes are stochastic. Secondly, if you want to wait that long, some other factors such as technological shocks may after all help the economy recover. Keynes says: ``In the long run we are all dead''. The aim of stimulation is not to leave the economy to stochastic shocks; it is an act aiming to recover the economy within a reasonable time. So, we focus our simulations and analysis on this regime. In this regime we find a minimum around $H=8$, where the minimum of $\tau H$ is $15.7\pm0.72$. So far we have good news, namely the existence of a minimum bound for successful stimulation. The existence of such a bound, however, is not good news if it is not universal; in other words, we should look for the minimum bound in other networks and see whether it is independent of the network characteristics.
\begin{figure}[]
\centering
\includegraphics[width=.8\columnwidth]{fig3.eps}
\caption{The X axis shows the intensity of the stimulus field. The Y axis shows the size of the hit, or $\tau H$. As can be seen, for weak fields the response of the system is stochastic: the standard deviation of successful hits is comparable with the mean value. For strong intensities, however, the response is deterministic, with a minimum in the strong-intensity regime.}\label{thh}
\end{figure}
\section*{Sensitivity Analysis}\label{sectionexamples}
\begin{figure}[]
\centering
\includegraphics[width=.8\columnwidth]{fig4.eps}
\caption{As can be seen, in the deterministic regime, the response of the system does not depend on its size.}\label{size}
\end{figure}
We now need to check whether our minimum bound depends on properties of the network such as size or degree distribution, so we perform several analyses. The first point to check is the dependence of the result on the network size. In Figure \ref{size} we depict the simulation results for different sizes. As can be seen, the minimum bound of successful stimulation is independent of the size. This is a reasonable property: within the regime we study, the stimulating field is strong. As a result, the metastable lifetime is short, namely a few Monte Carlo steps, so the boundary conditions do not affect the result.
The second point to check is the dependence of the result on the average degree. The results are presented in Figure \ref{k}. From the graph it is clear that the minimum bound grows as the average degree grows. In Figure \ref{knormal} we graph the minimum bound versus the average degree. As can be seen, the minimum grows linearly with the average degree. This means that our minimum bound is related to a property of the network. On closer analysis, however, this property turns out to be an advantage.
Consider a crisis in which a major portion of firms work at minimum capacity. In our dipole model this means a majority of nodes align along the negative Y direction. Now, if the stimulus bill is big enough to compensate the decline of orders, the firms are not concerned about any decline of orders from their neighbors: on average, for each firm it looks as if its neighbors were working at maximum capacity. In the XY model on a Watts-Strogatz network with average degree $K$, suppose that all nodes point downward. If a stimulus field has an intensity equal to $2K$ in the positive Y direction, then for each dipole it is equivalent to the case where its neighbors point upward. So, the gap between production in expansion and depression is proportional to the average degree. Then we can write
\begin{eqnarray}\label{timestep}
\Delta GDP\propto 2KJ
\end{eqnarray}
in which $J$ is the interaction strength in the XY model, and $\Delta GDP$ is the gap between expansion and depression. To obtain a quantitative result we need to identify other important parameters.
\begin{figure}[]
\centering
\includegraphics[width=.8\columnwidth]{fig5.eps}
\caption{Different average degrees yield different curves for the response of the system to the external field.}\label{k}
\end{figure}
Actually, the missing piece in our discussion is the link between time in our model and in the real world. What is a Monte Carlo time step in the real world? In fact, different sectors of the economy should have different time steps. For sectors which need unskilled or low-skilled employees it is much easier to reduce or increase production. For sectors with professional employees it is harder to fire and hire employees, since the instability caused by such actions is more expensive. In some other sectors such as construction, once a project is started it is hard to abandon. So, the overall conclusion is that the time step over which a manager can change the level of activity is heterogeneous. For the simplest case, however, we can suppose similar time steps. The majority of contracts between employees and employers are on an annual basis. So, if a manager aims to reduce production to its minimum capacity, on average she should wait six months for contracts to be fulfilled. This means that in the simplest case we can suppose that one Monte Carlo step is six months.
If we suppose a Monte Carlo step in an XY model is six months, then we can rewrite equation (\ref{timestep}) as
\begin{eqnarray}
\Delta GDP\approx4KJ.
\end{eqnarray}
This equation states that imposing, in the XY model, a magnetic field with a strength twice the interaction between nodes for one full update of the nodes is equivalent to stimulating the economy with the gap in aggregate production over six months. Now, if we stimulate our XY model with different strengths and different numbers of Monte Carlo steps, then for the corresponding bill on the economy side we can write
\begin{figure}[]
\centering
\includegraphics[width=.8\columnwidth]{fig6.eps}
\caption{The minimum of $\tau H$ for the different average degrees of Figure \ref{k}. As can be seen, the minimum of $\tau H$ grows linearly with $K$.}\label{knormal}
\end{figure}
\begin{eqnarray}\label{billrelation}
\frac{bill}{\Delta GDP}\approx\frac{\tau H}{4KJ}.
\end{eqnarray}
This equation is quite striking. It states that if the value of $\tau H$ on the XY side grows linearly with $K$, then on the economy side the corresponding stimulus is independent of $K$. So, our minimum bound is independent of the average degree.
\begin{figure}[]
\centering
\includegraphics[width=.8\columnwidth]{fig7.eps}
\caption{As can be seen, the minimum of $\tau H$ is not seriously influenced by temperature.}\label{temperature}
\end{figure}
Another point to check is the relation between the minimum bound and temperature. In another set of runs, for a network with 500 nodes and $K=8$, we performed simulations for a range of temperatures below $T_c$. The result is depicted in Figure \ref{temperature}. As can be seen, the minimum of $\tau H$ is not substantially affected by the temperature. So, within a reasonable range of temperatures, our result does not depend on the temperature itself.
As another analysis, we checked the impact of the rewiring probability $P$. We varied its value in the network and ran the simulation at $T=0.8\,T_c$. The result is depicted in Figure \ref{p}. As can be seen, the minimum bound is again independent of this network characteristic.
\begin{figure}[]
\centering
\includegraphics[width=.8\columnwidth]{fig8.eps}
\caption{For a Watts-Strogatz network with average degree kept equal to 16, the rewiring probability $P$ has been varied from 0.05 to 0.25. As can be seen, the minimum of $\tau H$ is independent of $P$.}\label{p}
\end{figure}
So far we have supposed that a recession can be represented by a network with all nodes pointing downward. One may, however, argue that in a recession not all firms work at their minimum capacity. In this case we need to change our initial conditions, so we perform another experiment.
In this experiment, we supposed that in the bubbles and expansions preceding a crisis, a big portion of firms work at their maximum capacity. A deep crisis is then the situation where a big fraction of firms reduce their production. To simulate such a situation, we first set all dipoles along the positive Y direction. We then imposed a strong downward magnetic field. Under this field some dipoles take random directions, with downward directions preferred. We tracked the magnetization along the Y direction until it reached $m_y=-0.5$. This means that some firms were working at higher capacities, some at lower capacities, and some in between. The overall result, however, represents a serious decline in production. We suppose that it resembles a downturn such as the crash of the economy after the Lehman Brothers bankruptcy.
\begin{figure}[]
\centering
\includegraphics[width=.65\columnwidth]{fig9.eps}
\caption{The response of the system when the initial magnetization is $m_y\approx -0.5$. For this initial condition the minimum value of $\tau H$ is $11.2\pm1.16$.}\label{random}
\end{figure}
We now impose an upward magnetic field and check its influence. The result is depicted in Figure \ref{random}. As can be seen, the minimum bound for stimulation still exists. It has, however, dropped from the previous situation to $\tau H_{min}=11.2\pm1.16$. Note that if we suppose that in a recession the magnetization is around $m_y=-0.5$, then we should rewrite equation (\ref{billrelation}) as
\begin{eqnarray}
\frac{bill}{\Delta GDP}\approx\frac{\tau H}{3KJ}.
\end{eqnarray}
This is because, in this case, the GDP gap corresponds to the interval between $m_y=-0.5$ and $m_y=1$. This equation suggests that the minimum bill for a successful stimulus is $(0.47\pm0.05)\Delta GDP$. Surprisingly, we observe that the minimum bound for successful stimulation is still not changed substantially.
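Plugging the simulated minima into the two bill relations reproduces the bounds quoted here. A small numeric sketch, assuming $J=1$ (the factor 4 corresponds to the full-recession start, the factor 3 to the $m_y=-0.5$ start):

```python
def bill_fraction(tau_h, k, j=1.0, gap_factor=4.0):
    """bill / Delta-GDP ~ tau*H / (gap_factor * K * J)."""
    return tau_h / (gap_factor * k * j)

# Simulated minima of tau*H for K = 8 (J = 1 assumed):
deep = bill_fraction(15.7, k=8)                     # m_y = -1 start: ~0.49
partial = bill_fraction(11.2, k=8, gap_factor=3.0)  # m_y = -0.5 start: ~0.47
```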
\section*{Conclusions}
Physicists have tackled a wide spectrum of interdisciplinary problems, ranging from biology \cite{Peng}-\cite{Mantegna94} to social sciences \cite{Nekovee}-\cite{Yasseri}, econophysics \cite{farmer}-\cite{Aoyama2010}, and many other areas of research.
One of the major concerns in these areas is the matter of time. While in statistical physics time is typically not of interest and phenomena are studied at equilibrium, for a big portion of systems, such as those studied in socioeconomics, time is a crucial parameter.
This difference results in some unexpected effects.
Ergodic theory, for example, claims that if a finite-size system is left isolated, it spans almost the entire phase space and passes through different equilibria. Ergodic theory works for a great range of phenomena in physics, where we usually have enough time for evolution. In nature and in societies, however, time is a matter of survival. So, we may not be able to wait a long time until the various local equilibria of the phase space are visited. In such cases, forcing the system to move to a desired part of the phase space is not costless.
Though theoretically both the Ising and XY models have no hysteresis, when we are concerned with time we observe metastability and resistance. Metastability and dynamic hysteresis of the Ising model have been widely studied in the literature. In this paper, however, we focused on the XY model.
Unlike the Ising model, the XY model below the critical temperature has a continuous range of vacua. Though these vacua share the same level of energy, triggering the system towards a desired vacuum still has a minimum cost, which is an interesting observation. When translated into our agent-based model of the economy, we observe that in a model more realistic than the Ising model there is still a dynamic hysteresis, and to overcome it we need a minimum budget. This minimum is about $0.48\Delta GDP$, where $\Delta GDP$ is the GDP gap between recession and expansion.
It should be noted that, as the major goal of this work, we showed that if agents have the choice to set their production to a continuous level, the minimum bound for stimulation still exists. This means that the minimum bound suggested for successful stimulation is not an artifact of simplifying economic networks to the Ising model. Besides this finding, we showed that the minimum bound is close to our earlier observation. Unlike our primary goal, this second finding is not a concrete result. Actually, to translate our results into economic values, we supposed that one Monte Carlo step means one year in the Ising model and six months in the XY model. One should note that in the real world managers have options such as reducing working hours, and such decisions can be made rather quickly. For a proper interpretation of the economic world we need to take such possibilities into account. In such cases, though the minimum bound exists, its value will be less than the prediction of the Ising model.
In our model we supposed that the interactions for all nodes are positive, while in the real world some interactions are negative. If Ford sells more, then there is a good chance that Chevrolet sells less. So, an improvement of the analysis could be an extension to a spin-glass-like model. Moreover, for more realistic cases a heterogeneous network with different weights should be considered.
Emergence is the most important issue discussed in complex systems. It arises when we study the collective behavior of many-body systems. One of the major goals of econophysicists is to understand collective behaviors in the economy; see for example \cite{hosseinypercolation}-\cite{hosseinyrole}.
Though we were concerned with economic networks, metastability and dynamic hysteresis may arise in many other networks and social systems. An example is the voting model: if people can influence each other's attitudes, then social paradigms may live in local equilibria.
Studying the metastability of such systems will be harder than for economic systems, but more exotic.
Indeed, in any game-theoretic model, dynamic hysteresis may arise when we aim to move from one local equilibrium to another.
\section{Introduction}
Neural natural language processing (NLP) models have attained state-of-the-art performance on classification tasks, including natural language inference, sentiment analysis, and textual similarity~\cite{devlin2019bert,yang2019xlnet}.
What drives this performance? A popular argument is: neural models learn certain linguistic skills for these tasks, and their representations encode linguistic knowledge~\cite{lakretz-etal-2019-emergence,hewitt-manning-2019-structural-probe,Chen2019DiscoEval,tenney-etal-2019-bert,jiang2019evaluating,Zhu2020RSTprobe,ettinger2020bert}.
How can neural models encode this linguistic knowledge? \citet{Alain2017} suggested that, by training on datasets, neural NLP models gradually learn to preserve useful, task-specific information while discarding the rest. In this way, the task-specific information is ``distilled'' into the neural network models.
There are many text-based classification tasks (e.g., \citet{Multi-NLI}), each of which requires some amount of linguistic information to classify, which the neural networks distill along the way.
\begin{figure}[t]
\centering
\includegraphics[width=.8\linewidth]{fig/shortcuts_examples.png}
\caption{Classifiers can rely on ``shortcut features'' to reach the correct predictions, but this strategy is not generalizable, since the classifiers do not learn the real linguistic knowledge. Shortcut features, including the occurrence of punctuation marks (e.g., ``{\color{orange}?}'') and stopwords (e.g., {\color{blue}can, the}, {\color{green}to}, {\color{orange}you}), are prevalent in datasets, but should not be part of the linguistic knowledge required to classify.
We propose a method to quantify how much task-specific, shortcut-irrelevant information remains in the datasets.}
\label{fig:shortcut_example}
\end{figure}
The inquiry into the information regime of models leads to an appealing goal in explainable AI~\cite{doran2018does}: to infer the amount of task-specific, linguistic knowledge required for a given task in information-theoretic terms. With this unified metric, we will be able to compare across text-based classification tasks. Typically, classification accuracy and loss are used for comparison. However, recent research showed that a low cross-entropy loss might result from information that is correlated with, but not causative of, the prediction targets. This is the ``shortcut learning'' problem, and it happens in a wide variety of classification tasks~\cite{ThomasMcCoy2019,Geirhos2020shortcut,Niven2020,Misra2020,Stali2020} -- even in human cognition, where study participants figure out more accessible ways to solve testing tasks~\cite{geirhos2020unintended}.
Figure \ref{fig:shortcut_example} presents two examples of shortcuts, where we could make predictions based on shortcuts that are irrelevant to the linguistic knowledge of the tasks.
Therefore, shortcuts constitute a gap between how much \textit{is learned} and how much \textit{should be learned} to classify the task. Following the motivations of recent causal analysis papers (e.g., \citet{elazar_amnesic_2021,pryzant-etal-2021-causal}), we want to factor out the impact of the shortcuts while still quantifying the amount of information a neural network model needs to learn for a task.
This paper presents a framework to separate the surface-level shortcuts from the deeper information. We quantify the ``task-specific information'' (TSI) that is not part of the spurious correlations. TSI is hard to compute numerically, but we use a method based on a Bayesian formulation to approximate this quantity (\S \ref{sec:task-specific-information}). The computation only requires computing cross-entropy losses on a pair of classification tasks.
We discuss the proper choice of configurations to compute the TSI (Secs. \ref{subsec:model-consistency},\ref{subsec:experiments:Xb-choices}).
Our method is stable across dataset sizes (\S~\ref{subsection:experiments:dataset-size}), and is easier to compute than existing entropy estimators (\S~\ref{subsec:alternative-estimators}).
Overall, the TSI framework quantifies the ``linguistic knowledge'' required to perform text-based classifications and further allows principled comparisons of the degrees of linguistic knowledge across a wide range of classification tasks. For example, the classification task in the MNLI dataset~\cite{Multi-NLI} requires about 0.25 nats more TSI than the sentiment detection task with IMDB movie reviews~\cite{Maas2011IMDB}, and around 0.4 nats more than the textual similarity detection task with the QQP dataset~\cite{Wang2019} (\S~\ref{subsec:full-dataset}), given a fixed set of shortcuts.
\section{Related Work}
Our work is related to prior work in identifying and isolating spurious artifacts (``shortcuts'') in text-based prediction tasks, probing language embeddings for various linguistic phenomena, and analyzing dataset statistics.
\paragraph{Shortcut learning}
Deep neural networks can rely heavily on superficial heuristics, which allows them to perform well on standard benchmarks but prevents generalization to real-world scenarios.
\citet{Geirhos2020shortcut} called this problem ``shortcut learning'' and referred to these heuristics as ``shortcuts". On text-based classification datasets, shortcuts appear in the form of spurious statistical cues. These include the warrants for argument reasoning~\cite{Niven2020}, syntax heuristics and lexical overlaps in natural language inference~\cite{ThomasMcCoy2019}, and relevant words (``semantic priming'')~\cite{Misra2020}. These spurious surface cues do not contribute to task-specific information.
By carefully constructing test sets that do not have these statistical cues and spurious associations, such shortcuts can be diagnosed~\cite{glockner-etal-2018-breaking,gardner2020evaluating}.
\citet{kaushik2019learning} counterfactually augmented text snippets in several sentiment-classification datasets via crowd-sourcing by applying minimal changes to the original text to flip the prediction label. \citet{Rosenman2020} used challenge sets to reveal the ``learning by heuristics'' problem in the relation extraction task. In contrast to our work, none of these prior works formulate the issue of shortcut learning using information theory. Another strategy to factor out known dataset biases is debiasing algorithms, such as the residual fitting algorithm \citep{He2019}.
\paragraph{Probing}
The probing literature inspires our approach to analyzing the information in neural language models. According to \citet{Alain2017}, the task of probing asks, ``is there any information about factor \underline{\hspace{2em}} in this part of the model?'' Following this line, many subsequent papers queried the amount of knowledge from various parts of neural models. These included syntax-related~\cite{lakretz-etal-2019-emergence,hewitt-manning-2019-structural-probe}, semantic-related~\cite{tenney-etal-2019-bert}, and discourse-related information~\cite{Chen2019DiscoEval,koto-etal-2021-discourse}.
Towards developing reliable probing methods, several papers proposed control mechanisms~\cite{pimentel2020information,zhu-rudzicz-2020-information}: with a collection of imperfect classifiers, we can combine them to adjust for potential confounds. Our analyses are motivated by this idea, but we study the classification instead of the probing regime.
\paragraph{Understanding the datasets}
In machine learning and NLP literature, several works studied the ``difficulty'' of datasets \citep{blache2011predicting, Gupta2014, Collins2018, Jain2020DataQuality}, but they did not consider factoring out the impacts of shortcuts.
\citet{Damour2020} framed the shortcut learning issue as an underspecification problem: there is not enough information in the training set to distinguish between spurious artifacts and the inductive biases (or rather, the linguistic knowledge).
Recently, researchers have analyzed the behavior of models on individual samples during training to diagnose datasets~\cite{tu2020empirical,kumar2019topics}. \citet{han-etal-2020-explaining} used influence functions to identify influential training samples and characterize the artifacts in datasets. \citet{Swayamdipta2020} computed metrics of training dynamics of a model, i.e., the prediction confidence and variability, to map a ``cartography'' of the data samples.
\citet{warstadt2020learning} introduced a dataset to study linguistic feature learning versus generalization in the RoBERTa base model and considered a probing setup with a control task to investigate the inductive biases of a pretrained model at the fine-tuning time. \citet{lovering_predicting_2021} found that the extent that a feature influences a model's decisions is affected by the probing extractability and its co-occurrence rate with the label.
These works have a common intuition: we should study the datasets to study the spurious correlation (shortcuts). We follow this line of research and quantify the information of shortcuts in the datasets.
\paragraph{Mutual information}
Our work is related to information theory formulations about machine learning. \citet{voita-titov-2020-information} proposed two approaches to measure the minimum description lengths of probing. \citet{li-eisner-2019-specializing} used a method based on variational information bottleneck to compress word embeddings and improve parser performances. \citet{steinke20ConditionalMI} proposed a formulation of conditional mutual information that can be used to reason about the generalization properties of machine learning models.
Empirically, our proposed method (using the difference of a pair of cross-entropy losses) echoes what \citet{xu2020theory} defined as the ``predictive $\mathcal{V}$-information''. We derive TSI from a different perspective from the $\mathcal{V}$-information. We elaborate in \S \ref{sec:task-specific-information}. There are several contemporaneous works. \citet{o2021context} uses $\mathcal{V}$-information to study the effects of each context feature independently. \citet{ethayarajh-2021-information} uses pointwise $\mathcal{V}$-information to describe the dataset difficulty.
\begin{figure}[t]
\centering
\includegraphics[width=.3\linewidth]{fig/shortcuts_model.png}
\caption{An illustration of the relationships between the text data $X$, containing a shortcut part $X_s$, and an unmeasurable task-specific part $X_t$, as well as the task label $Y$. The solid arrow indicates a causal relationship, while the dashed arrow indicates a spurious correlation. We want to factor out the observable $X_s$ from this graph.}
\label{fig:causal-model}
\end{figure}
\section{Learning Task-Specific Information}
\label{sec:task-specific-information}
This section presents our framework to quantify the task-specific information.
\subsection{Removing the shortcuts}
Consider a dataset of data points $\{(x_i, y_i)\}_{i=1}^{N}$, where $x_i\in \mathbb{R}^m$ is the feature vector, and $y_i$ is the label. Let the random variable $X$ represent all possible input features, and the random variable $Y$ represent the task labels.
In our framework, the input random variable $X$ consists of a shortcut part, denoted by a random variable $X_s$, and a task-specific part, an unmeasurable $X_t$. In other words, $X=f(X_s, X_t)$, where $X_s \perp\!\!\!\!\perp X_t$, and $f(\cdot)$ can be any composition function. Their dependency relationships are described by Figure \ref{fig:causal-model}. This allows us to write the distribution as:
\begin{align}
p(Y\,|\,X) = p(Y\,|\,X_t)p(Y\,|\,X_s)\underbrace{\frac{p(X_t)p(X_s)}{p(X)p(Y)}}_{\text{prior}}
\label{eq:bayesian_Xt}
\end{align}
When $X_s \perp\!\!\!\!\perp X_t$, $p(X)=p(X_t)p(X_s)$, so the prior term degenerates into $\frac{1}{p(Y)}$. Now, the mutual information between $Y$ and $X_t$ is:
\begin{align}
\begin{split}
I(Y;X_t)&=\mathbb{E} \text{ log }\frac{p(Y,X_t)}{p(X_t)p(Y)} = \mathbb{E} \text{ log }\frac{p(Y\,|\,X_t)}{p(Y)} \\
&= \mathbb{E} \text{ log } \frac{1}{p(Y\,|\,X_s)} - \mathbb{E} \text{ log } \frac{1}{p(Y\,|\,X)}\\
&= H(Y\,|\,X_s) - H(Y\,|\,X)
\label{eq:I_eq_diff_entropy}
\end{split}
\end{align}
where the expectations are taken over the distribution implicitly defined by the data $\{x_i, y_i\}_{i=1}^N$. The equality in the second-to-last line is obtained by substituting in Eq. \ref{eq:bayesian_Xt}.
\subsection{Interpreting the model performance}
Empirically, a model learning this task (e.g., a BERT \citep{devlin2019bert} with a fully connected layer on top) approximates the true, unknown distribution $p(Y\,|\,X)$. Let $q(Y\,|\,X)$ describe the learned model, then by definition:
\begin{align}
H(Y\,|\,X) = \text{NLL}_{Y\,|\,X} - \text{KL}(p\,\|\,q)
\end{align}
where NLL denotes the negative log likelihood loss,\footnote{We assume continuous distributions, so $\text{NLL}_{Y\,|\,X}=\Sigma_{x\in \text{data}} -\text{log}q(Y\,|\,X)$ equals the cross entropy $\mathbb{E}_{x} -\text{log}q(Y\,|\,X)$.} KL is the Kullback-Leibler divergence, $p$ and $q$ are the short-hand notations of $p(Y\,|\,X)$ and $q(Y\,|\,X)$ respectively, and
\begin{align}
\text{NLL}_{Y\,|\,X}=\mathbb{E}_{p(X)} \text{ log }\frac{1}{q(Y\,|\,X)}
\end{align} is the cross-entropy loss. In this paper, we will use $\text{NLL}$ to refer to the cross-entropy loss, for clarity.
A well-trained model would have high performance: a high accuracy, a low $\text{KL}(p\,\|\,q)$ divergence, and a low cross-entropy loss. However, as mentioned before, this could result from the model ``taking shortcuts'', predicting the task labels $Y$ from the shortcuts $X_s$.
\subsection{Computing TSI needs a control task}
Here we consider a control task to specify the features that might benefit the classification but do not contribute to the linguistic knowledge required for the models to perform the task correctly. Figure \ref{fig:shortcut_example} describes some shortcuts. We give the details in the Experiments section below.
We refer to the classifier trained only on the shortcuts as the control model. When trained, the control model approximates the unknown distribution $p(Y\,|\,X_s)$ with an empirical distribution, $q(Y\,|\,X_s)$.
\paragraph{Definition 1:} The \textit{task-specific information} (TSI) in the classification task (described by $X,Y$) with respect to the shortcut $X_s$ is quantified by:
\begin{align}
\begin{split}
&I(Y;X_t) = \underbrace{\text{NLL}_{Y\,|\,X_s} - \text{NLL}_{Y|X}}_{\textrm{Known}} + \\
&\underbrace{\text{KL}(p_{Y\,|\,X}\,\|\,q_{Y\,|\,X}) - \text{KL}(p_{Y\,|\,X_s}\,\|\,q_{Y\,|\,X_s})}_{\textrm{Unknown}}
\label{eq:info_y_xg}
\end{split}
\end{align}
Similarly, $\text{NLL}_{Y\,|\,X_s}$ is the cross-entropy loss of the control task. Both cross-entropy terms can be measured empirically, so we mark them as ``known''.
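The ``known'' part of the TSI is simply a difference of two cross-entropy losses. Below is a minimal sketch, assuming scikit-learn logistic regressions stand in for the classifiers (the actual experiments use neural models, and the losses would be measured on a held-out development set rather than in-sample):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

def tsi_known_part(x_full, x_shortcut, y):
    """NLL of the control model (trained on shortcut features only) minus
    NLL of the full model, in nats (sklearn's log_loss uses natural log)."""
    control = LogisticRegression(max_iter=1000).fit(x_shortcut, y)
    full = LogisticRegression(max_iter=1000).fit(x_full, y)
    nll_shortcut = log_loss(y, control.predict_proba(x_shortcut))
    nll_full = log_loss(y, full.predict_proba(x_full))
    return nll_shortcut - nll_full
```

If the shortcut features alone already predict $Y$ well, the two losses are close and the estimate approaches zero; if the shortcuts are uninformative, the estimate approaches the full model's information gain over the label prior.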
\subsection{On the scales of the intractable KLs}
In Eq. \ref{eq:info_y_xg}, the two ``known'' terms constitute the predictive $\mathcal{V}$-information \citep{xu2020theory} from $X_t$ to $Y$. Additionally, $I(Y;X_t)$ contains two intractable KL terms. As a sanity check, we use a collection of synthetic datasets to estimate their scales. The following are the distributions used to generate these toy datasets $\{X,Y\}$:
\begin{align*}
&X_j \sim \text{Bernoulli}(p_x) \text{, where }j\in \{1,2,.., m\} \\
&X = [X_1, X_2, ..., X_m] \\
&Y = g(X_1, ..., X_m) + \epsilon \text{, where }\epsilon\sim \text{Bernoulli}(p_y)
\end{align*}
where $m$ specifies the number of input features, and $g(X_1, ..., X_m)$ is a deterministic function. This construction allows an exact computation of the conditional entropy $H(Y\ |\ X)$. On the other hand, we compute the cross-entropy $\text{NLL}_{Y\ |\ X}$ by training a default scikit-learn MLPClassifier $q(Y\ |\ X)$ on the train portion of $\{X, Y\}$. Then, the difference between the dev loss and the conditional entropy is the KL value resulting from the imperfect classifier.
We generate toy datasets with different values of $m$ ($2\leq m \leq 10$), $p_x$ and $p_y$ (between 0.1 and 0.9). For $g(\cdot)$, we use two options:
\begin{itemize}[nosep]
\item \texttt{sum}: $g(X)=\sum_j X_j$
\item \texttt{and}: $g(X)=X_1 \wedge X_2 \wedge ... \wedge X_m$
\end{itemize}
Figure \ref{fig:KL_bounds} shows the histograms for the two options, respectively. In 99.5\% (1184 of 1190) of configurations, the dev losses are within 0.04 nats of $H(Y\ |\ X)$. In other words, the scales of $\text{KL}(p\ \|\ q)$ are estimated to be one order of magnitude smaller than those of $I(Y;X_t)$. Additionally, a recent paper \citep{pimentel2021bayesian} shows that the difference of a pair of cross-entropies (which they call Bayesian mutual information) converges to the mutual information as the number of data points goes to infinity. In the subsequent analysis, we therefore ignore the intractable KL terms.
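This sanity check is easy to reproduce for the \texttt{sum} option. Since $g$ is deterministic and $\epsilon$ is independent noise, the exact conditional entropy reduces to the binary entropy of $p_y$. A sketch using scikit-learn defaults, as described in the text (parameter values here are one illustrative configuration):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import log_loss

def toy_kl_scale(m=3, p_x=0.5, p_y=0.2, n=20000, seed=0):
    """Return |dev NLL - H(Y|X)|, the estimated scale of KL(p || q)."""
    rng = np.random.default_rng(seed)
    x = rng.binomial(1, p_x, size=(n, m))
    y = x.sum(axis=1) + rng.binomial(1, p_y, size=n)   # the 'sum' option
    # H(Y|X) = H(eps): given X, the only remaining randomness is the noise.
    h_cond = -(p_y * np.log(p_y) + (1 - p_y) * np.log(1 - p_y))
    x_tr, x_dev = x[: n // 2], x[n // 2:]
    y_tr, y_dev = y[: n // 2], y[n // 2:]
    clf = MLPClassifier(max_iter=500, random_state=seed).fit(x_tr, y_tr)
    nll = log_loss(y_dev, clf.predict_proba(x_dev), labels=clf.classes_)
    return abs(nll - h_cond)
```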
\begin{figure}[t]
\centering
\includesvg[width=\linewidth]{fig/KL_bounds_histogram.svg}
\caption{The histograms of $|NLL-H(Y\ |\ X)|$, i.e., the estimated scales of $\text{KL}(p\ \|\ q)$, with the \texttt{sum} and \texttt{and} option respectively.}
\label{fig:KL_bounds}
\end{figure}
\subsection{Understanding TSI}
\label{subsec:understanding-info}
Before moving to the computation, let us first briefly discuss some properties of TSI.
\textit{Lower bound.} $\text{TSI}\geq 0$, where equality is reached when the information from the shortcuts (e.g., the presence of specific tokens) is sufficient for classification, so the model does not have to learn any task-specific knowledge to perform perfectly.
\textit{Upper bound.} $\text{TSI}\leq H(Y)$, where the equality is reached when $H(Y\,|\,X_t)=0$, i.e., the task label $Y$ is a deterministic function of the task-specific variable $X_t$.
Further, for a task with $m$ distinct labels, Jensen's inequality gives us $H(Y)\leq \text{log }m$ nats.\footnote{Throughout this paper, we use nats (instead of bits) as the unit for measuring the information-theoretic terms.} When $m=2$ and $3$, the TSI is correspondingly upper-bounded by $\text{log}2\approx 0.693$ and $\text{log}3\approx 1.097$, respectively. When the number of classes $m$ increases, the upper bound of TSI increases, resembling what \citet{Gupta2014} mentioned about how a larger number of classes contributes to an increased cross-entropy.
\textit{An on-average metric.} TSI is averaged across the dataset samples, allowing comparison across datasets with different sizes. We can compare the TSI scores of a dataset with 50,000 samples (e.g., IMDB \citep{Maas2011IMDB}) to that of a dataset with 400,000 samples (e.g., Quora Question Pairs) to directly compare their ``linguistic informativeness''.
We discuss further about the dataset sizes in \S \ref{subsection:experiments:dataset-size}.
\textit{Quantity but not form.} TSI quantifies the amount rather than describes the actual type of information required to classify a task. The former computes an aggregate metric, while the latter requires a deep understanding of the task knowledge. This paper considers the former.
\section{Experiments}
\label{sec:experiments}
\subsection{Datasets}
We run experiments on several popular benchmarking datasets (in English) that test various linguistic abilities, including sentiment and attitude detection (Yelp and IMDB), entailment recognition (MNLI), and semantic similarity understanding (QQP). The dataset details are in Appendix \ref{sec:dataset-details}.
\subsection{Control task features} The features for the control task need to be scalars. In the experiments, we use the following features to illustrate the application of our framework.
\paragraph{The occurrence of punctuation} We count the punctuation marks in each input text sample and normalize by the number of tokens in the sentence. If a sample consists of a pair of sentences, we concatenate the two sentences. Following is an example.
\begin{tcolorbox}
You have access to the facts \textul{.} The facts are accessible to you \textul{.}
\end{tcolorbox}
There are $N=2$ punctuation marks in the (concatenated) sentence of length $L=14$, so the ``occurrence of punctuation'' feature is $\frac{2}{14}$.
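This computation can be sketched as follows (an illustrative sketch assuming whitespace-pretokenized text and a small punctuation set; the actual tokenizer may differ):

```python
# Illustrative punctuation set; a real implementation might use string.punctuation
PUNCT = {".", ",", "!", "?", ";", ":"}

def punctuation_feature(tokens):
    """Fraction of tokens that are punctuation marks."""
    return sum(t in PUNCT for t in tokens) / len(tokens)

tokens = "You have access to the facts . The facts are accessible to you .".split()
print(punctuation_feature(tokens))  # 2/14 ≈ 0.143
```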
\paragraph{The occurrence of stopwords} We count the stopwords (excluding the negation words, such as ``no'', ``nor'', ``don't'' and ``weren't'') and normalize by the token length of the example. As with the punctuation feature, we concatenate the two sentences for samples consisting of a sentence pair. Following is an example.
\begin{tcolorbox}
\textul{You} \textul{have} access \textul{to} \textul{the} facts . \textul{The} facts \textul{are} accessible \textul{to} \textul{you} .
\end{tcolorbox}
There are $N=8$ stopwords in this sentence of length $L=14$, so the ``occurrence of stopword'' feature is $\frac{8}{14}$. Note that some stopwords do have semantic roles. For example, \texttt{I}, \texttt{you} and \texttt{they} can specify the person(s) in a situation, and the choice between, e.g., \texttt{I} and \texttt{me} could indicate the role of the speaker. One could therefore argue that the occurrence of stopwords is a non-shortcut, depending on the actual task. One can also argue the opposite, since the information provided by these semantic roles seems irrelevant to many classification tasks -- for example, both ``I like this movie'' and ``You like this movie'' indicate a positive movie review. This collection serves as an example that the TSI framework can accommodate a collection of semantically nontrivial words.
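A corresponding sketch for the stopword feature (the stopword set below is an illustrative subset of a standard list, with the negation words excluded):

```python
NEGATIONS = {"no", "nor", "don't", "weren't"}
# Illustrative subset of a stopword list (e.g., NLTK's), minus the negation words
STOPWORDS = {"you", "have", "to", "the", "are", "a", "an", "of"} - NEGATIONS

def stopword_feature(tokens):
    """Fraction of tokens that are (non-negation) stopwords."""
    return sum(t.lower() in STOPWORDS for t in tokens) / len(tokens)

tokens = "You have access to the facts . The facts are accessible to you .".split()
print(stopword_feature(tokens))  # 8/14 ≈ 0.571
```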
\paragraph{The overlap of paired sentences} For each pair of sentences $(s_1, s_2)$, we use the number of overlapping tokens (relative to each of the two sentence lengths) to describe the extent of lexical overlap. Following is an example.
\begin{tcolorbox}
\begin{itemize}[nosep]
\item What \textul{can} \textul{make} \textul{Physics} \textul{easy} \textul{to} \textul{learn} \textul{?}
\item How \textul{can} you \textul{make} \textul{Physics} \textul{easy} \textul{to} \textul{learn} \textul{?}
\end{itemize}
\end{tcolorbox}
The two ``lexical overlap'' features for this sentence pair are overlap\_1=$\frac{8}{9}$, overlap\_2=$\frac{8}{10}$.
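The overlap features can be sketched as below. Note that the counts depend on the tokenizer: with this simple whitespace tokenizer the example pair yields $7/8$ and $7/9$, rather than the tokenization-specific values quoted above.

```python
def overlap_features(s1_tokens, s2_tokens):
    """Number of tokens shared by both sentences, relative to each sentence's length."""
    shared = {t.lower() for t in s1_tokens} & {t.lower() for t in s2_tokens}
    n1 = sum(t.lower() in shared for t in s1_tokens)
    n2 = sum(t.lower() in shared for t in s2_tokens)
    return n1 / len(s1_tokens), n2 / len(s2_tokens)

s1 = "What can make Physics easy to learn ?".split()
s2 = "How can you make Physics easy to learn ?".split()
print(overlap_features(s1, s2))  # (7/8, 7/9) with this tokenizer
```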
\subsection{Classification models} For training $q(Y\,|\,X)$ models, we use BERT \citep{devlin2019bert}, RoBERTa \citep{Liu2019roberta}, and ALBERT \citep{Lan2020ALBERT}, all in the base configuration (12 layers), with a fully connected head. Such transformer-based configurations are the state of the art on classification tasks. We adopt the configuration of \citet{devlin2019bert}: we concatenate the input sentences (for MNLI and QQP) and pass the \texttt{[CLS]} token representation to the fully connected head. The training hyperparameters follow the configurations recommended in the literature \citep{devlin2019bert,Liu2019roberta,Lan2020ALBERT,wolf2019huggingface}. For training $q(Y\,|\,X_s)$ models, we use MLPClassifier from scikit-learn \citep{scikit-learn}. We list the details in Appendix \ref{sec:hyperparameters}.
\begin{figure}[t]
\centering
\includesvg[width=\linewidth]{fig/loss_acc_corr_by_task_x.svg}
\caption{A scatter plot of the accuracy against dev loss of models trained on full datasets.}
\label{fig:loss_acc_corr}
\end{figure}
\begin{figure}[t]
\centering
\includesvg[width=\linewidth]{fig/info_by_xs_choice.svg}
\caption{Estimates of TSI with different choices of shortcut features and the best models. Note that the \texttt{O} (lexical overlapping) heuristics only apply for MNLI and QQP, while the \texttt{P} (punctuation) and \texttt{S} (stopwords) heuristics apply to all four tasks. For each task, as more features are excluded, we can see the estimate decreases. Unless specifically mentioned, we consider TSI$^{\text{P+S}}$ for all tasks henceforth.}
\label{fig:xb_variations}
\end{figure}
\begin{figure*}
\centering
\includesvg[width=.8\linewidth]{fig/subset_plots.svg}
\caption{The $I(Y;X_t)$ estimation when we subsample different sizes of datasets.}
\label{fig:subset_impacts}
\end{figure*}
\section{Discussions}
\subsection{Estimating TSI with a suboptimal model}
\label{subsec:model-consistency}
Each $\{X, X_s, Y\}$ configuration uniquely determines the $I(Y;X_t)$ value.
Ideally, models that perfectly fit the dataset distributions $p(Y\,|\,X)$ and $p(Y\,|\,X_s)$ would estimate $I(Y;X_t)$ exactly. Among all empirical models, the highest-performing ones approximate $I(Y;X_t)$ most closely, since they yield the smallest KL values in Eq. \ref{eq:info_y_xg}. Therefore, we report the results from finetuning the best of BERT, RoBERTa, and ALBERT, and we recommend using the best possible model.
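Given the two dev-set cross-entropy losses, the TSI estimate itself reduces to their difference (ignoring the intractable KL terms, as discussed earlier); a minimal sketch with hypothetical loss values:

```python
def estimate_tsi(nll_shortcut: float, nll_full: float) -> float:
    """TSI ≈ H(Y|X_s) - H(Y|X), with each conditional entropy approximated
    by the dev-set cross-entropy loss of the corresponding classifier."""
    return nll_shortcut - nll_full

# Hypothetical dev losses in nats/sample (not the paper's numbers)
print(round(estimate_tsi(0.65, 0.34), 2))  # 0.31 nats of task-specific information
```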
Empirically, the model at hand might have an accuracy several points lower than the top model on the GLUE leaderboard. How far do the entropy values of such imperfect models differ from those of the SOTA models (for which usually only the accuracies are available)? Figure \ref{fig:loss_acc_corr} plots the cross-entropy losses against the accuracies of the non-degenerate finetuned $q(Y\,|\,X)$ models. Interestingly, except for IMDB, the results show linear trends, with slopes and intercepts varying from task to task. The slopes of the trendlines could be used to interpolate the validation losses of suboptimal models.
\subsection{TSI and the choice of shortcuts}
\label{subsec:experiments:Xb-choices}
To enable cross-task comparisons, our framework considers TSI with respect to the fixed set of shortcuts. For example, apart from lexical overlap, how much linguistic information is there in the classification? The choice of shortcut features affects the cross-entropy losses, hence the TSI.
Figure~\ref{fig:xb_variations} reports the TSI estimates with various choices of shortcut features (additional results are in the Appendix). As we add features to the $X_s$ set, $\text{NLL}_{Y\,|\,X_s}$ decreases, leading to a corresponding decrease in TSI. For MNLI, adding the lexical overlap feature to $X_s$ exacerbates this decrease. This matches our intuition, since syntactic heuristics such as lexical overlap have been identified as fallible heuristics for MNLI in prior work~\cite{ThomasMcCoy2019}, and though lexemes are shortcut features, they do encode semantics.
\textit{On the completeness of shortcuts}. We do not aim at the unrealistic goal of exhausting all possible shortcuts. Instead, we present a framework where the contribution of the shortcuts, once identified, can be factored out. The TSI framework can generalize to additional shortcuts.
\textit{Generalization of features}. We identified some features as ``shortcut features''. Depending on the goal of the analysis, one can apply other features (e.g., the length of sentences). In addition, methods for automatically identifying shortcut features $X_s$ (e.g., approaches similar to those of \citet{wang-culotta-2020-identifying}) may be used as well.
\subsection{How stable is TSI to dataset size?}
\label{subsection:experiments:dataset-size}
To evaluate the effect of dataset size, we reduce the training sets with stratified sampling while evaluating on the same validation set.
As shown in Figure \ref{fig:subset_impacts}, the robustness of the TSI estimates to the subset size differs across datasets. For MNLI, the estimate starts to fluctuate once the training set is reduced to 25\% of its original size. However, the estimates for IMDB, Quora, and Yelp remain relatively stable until we reduce the train set sizes to as little as $\sim5\%$.
For both the $Y|X$ and $Y|X_s$ classifications, the minimum reachable cross-entropy losses increase as we reduce the dataset sizes. A possible reason is that downsampling changes the data distribution and leads to mismatches between the train and validation distributions. Similar effects are described in, e.g., \citet{gardner2020evaluating}. Note that as we reduce the dataset sizes, $H(Y|X)$ rises faster than $H(Y|X_s)$, indicating that the deeper, task-specific knowledge requires more data to capture than the shortcut knowledge does, echoing the finding of \citet{warstadt2020learning}.
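The stratified subsampling used here can be sketched in a few lines (a simplified illustration, not the exact implementation):

```python
import random
from collections import defaultdict

def stratified_subsample(samples, labels, frac, seed=0):
    """Draw a fraction of the data while preserving the label proportions."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for x, y in zip(samples, labels):
        by_label[y].append(x)
    subset = []
    for y, xs in sorted(by_label.items()):
        k = max(1, round(frac * len(xs)))  # keep at least one sample per label
        subset.extend((x, y) for x in rng.sample(xs, k))
    return subset

data = list(range(100))
labels = [i % 2 for i in data]
sub = stratified_subsample(data, labels, frac=0.1)
print(len(sub))  # 10 samples: 5 per label
```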
\subsection{What about alternative estimators?}
\label{subsec:alternative-estimators}
Previous works have proposed several mutual information estimators based on setting up optimization goals, e.g., BA \citep{agakov_im_2004}, DV \citep{donsker1975asymptotic}, NWJ \citep{nguyen2010estimating}, MINE \citep{belghazi2018mutual}, CPC \citep{oord2018representation}, and SMILE \citep{song2020understanding}. We defer to \citet{pmlr-v97-poole19a} and \citet{guo_tight_2021} for summaries. Unfortunately, these variational methods do not directly apply to our problem setting: they involve modeling either the joint distribution $p(X,Y)$ or the generative distribution $p(X\,|\,Y)$, whereas we consider classification tasks for which the state-of-the-art methods finetune pretrained deep networks to model the conditional distribution $p(Y\,|\,X)$. It is possible to model the generative distribution on text classification datasets, but we consider that out of the scope of this paper. A recent paper, \citet{mcallester2020formal}, argues in favor of using (and minimizing) the difference of entropies to estimate terms related to mutual information because, unlike DV, NWJ, MINE, and CPC, this approach is not subject to those estimators' statistical limitations.
How about directly estimating the entropy values $H(Y\,|\,X)$ and $H(Y\,|\,X_s)$ from data? It turns out that the computational effort required by this approach can easily grow prohibitive.
Estimating the conditional entropy from the dataset $\{x_i, y_i\}_{i=1..N}$ involves finding the density, which is usually implemented by finding nearest neighbors. This could require $\mathcal{O}(N\log N)$ computational time with $\mathcal{O}(N)$ memory\footnote{Store all data points using a heap-like data structure, which allows querying in $\mathcal{O}(\log N)$ time for each data point.} -- where the memory requirements would grow prohibitively -- or $\mathcal{O}(N^2)$ computational time with $\mathcal{O}(1)$ memory\footnote{Traverse the dataset to find nearest neighbors.} -- where the computational time would grow prohibitively. In comparison, training two models with stochastic gradient descent requires only $\mathcal{O}(N)$ training time and $\mathcal{O}(1)$ memory. In other words, our method is more realistic under real-world computational constraints.
We run Monte Carlo simulations on a fraction of the data using an off-the-shelf entropy estimator, NPEET \citep{kraskov2004estimating}. The fraction sizes are chosen to lie in the stable regimes identified in \S \ref{subsection:experiments:dataset-size}, i.e., $10^3$ for IMDB and Yelp, $10^4$ for Quora, and $10^5$ for MNLI. We sample the subsets in a stratified manner with ten different random seeds. The conditional entropies $H(Y|X)$ and $H(Y|X_s)$ from the Monte Carlo simulations differ significantly from the cross-entropy losses. Moreover, these simulations sometimes report negative $I(Y;X_t)$ values, indicating prohibitive levels of error. We include the details in the Supplementary Data.
\begin{table}[t]
\begin{center}
\begin{tabular}{c c c c}
\toprule
\textbf{Dataset} &
\textbf{$\text{Acc}_{Y\,|\,X}$} &
\textbf{TSI$^{\text{P+S}}$} &
\textbf{TSI$^{\text{P+S+O}}$}\\
\midrule
MNLI & 0.85 & 0.68 & 0.64 \\
\midrule
IMDB & 0.92 & 0.43 & -- \\
\midrule
Yelp & 0.97 & 0.41 & -- \\
\midrule
QQP & 0.89 & 0.31 & 0.23 \\
\bottomrule
\end{tabular}
\end{center}
\caption{Our best estimates of TSI with P+S and P+S+O shortcut features respectively, and the dev accuracies of the corresponding $Y|X$ classifications.
\label{tab:perf}}
\end{table}
\subsection{TSI required to classify each dataset}
\label{subsec:full-dataset}
Table \ref{tab:perf} contains our best estimates of TSI across datasets. The TSI$^{P+S}$ of IMDB and Yelp are similar. Moreover, both TSI$^{P+S}$ and TSI$^{P+S+O}$ of MNLI are about 0.4 nats larger than those of QQP. Considering that the highest dev accuracies on MNLI and QQP are similar, the contrast in TSI provides an alternative perspective for comparing across tasks: on the QQP dataset, neural models rely more on artifacts, including punctuation and stopwords, than they do on MNLI.
Our method does not directly apply to HANS \citep{ThomasMcCoy2019} yet, since existing high-performing models mostly use HANS as a test set. Instead of directly approximating the TSI of HANS, one can compute that of, e.g., HANS + MNLI.
\subsection{Broader impacts}
While there is a general momentum to develop better models on miscellaneous classification tasks, we call for more systematic comparisons across different datasets and propose developing datasets with higher ``signal-to-noise ratios'', as measured by, e.g., TSI. We also encourage the NLP community to think about several closely related problems:
\textit{Identifying shortcut features.} While the release of a new NLP dataset is often paired with strong baselines for the proposed task, we also encourage future researchers to identify potential shortcuts or spurious associations, which could occur either due to the data collection procedure or due to the nature of the task itself (e.g., as reported by ~\citet{romanov2018lessons} for natural language inference tasks).
\textit{Leaderboard practices.} Currently, leaderboard practices reward high classification performance. We recommend that NLP researchers build leaderboards that additionally incentivize the minimal use of shortcuts. A potential way to do this would be constructing multiple test sets~\cite{glockner-etal-2018-breaking}, testing for different parameters of concern, such as data efficiency and fairness, as identified by \citet{ethayarajh-jurafsky-2020-utility}.
\textit{Metrics for cross-task comparison.} Consider reporting the performance on a unified scale of ``task-specific informativeness", rather than relying on average model performance metrics~\cite{Collins2018}. Designing metrics with grounds in linguistic knowledge is an interesting direction of future work.
\section{Conclusion}
We propose a framework to quantify the task-specific information (TSI) for classifications on text-based datasets. Given a fixed collection of shortcut features, TSI quantifies the linguistic knowledge attributable to the classification target that is \textit{independent} of the shortcut features. The quantification method is computable under limited resources and is relatively robust to the dataset sizes. Further, this framework allows comparison across classification tasks under a standardized setting. For example, apart from the effects of punctuation and the non-negation stopwords, MNLI involves around 2.2 times as much TSI as Quora Question Pairs, in nats per sample.
\section*{Acknowledgements}
We would like to thank the anonymous reviewers at NAACL 2021 and ACL (2022 August) for the feedback. Rudzicz is supported by a CIFAR Chair in artificial intelligence.
\section{Introduction}
\label{sec:intro}
The standard theory of Type~I (low planet mass) migration presents a
challenge for understanding planet formation. According to the core
accretion model, the growth time from planetesimals to gas giant
planets is dominated by a phase that occurs when a newly formed solid
core with mass $\sim 10 M_{\oplus}$ accretes gas (Pollack et al 1996,
Hubickyj et al 2007). The duration of this slow phase, $\sim 10^6 y$,
is determined by a thermal bottleneck that prevents the gaseous
envelope from contracting, until it achieves sufficient mass. On the
other hand, the standard theory of Type~I migration (Tanaka et al
2002) predicts that planets in the slow growth phase would migrate
into the disk center in $\sim10^5 y$ for the minimum-mass solar
nebula. These migration timescales have been confirmed by
multidimensional hydrodynamical simulations (Bate et al 2003, D'Angelo
\& Lubow 2008). Recently, Ida \& Lin (2008) and Schlaufman, Lin, \&
Ida (2008) have studied the effect of the ice line on the disk
surface density profile and consequently on the Type I migration. They
obtained much better agreement with the observed extrasolar planet
mass-semimajor axis distribution if the Type I migration is reduced by
an order-of-magnitude from the linear theory values.
The shortness of the standard migration timescale has motivated
investigations of possible effects to slow or even reverse migration.
These include magnetic fields (Terquem 2003), magneto-rotational
instability (MRI) turbulent fluctuations (Nelson \& Papaloizou 2004),
and density traps (Menou \& Goodman 2004). Recently, protoplanet
migration in the non-isothermal disks has been investigated
(Paardekooper \& Mellema 2006; Baruteau \& Masset 2008; Paardekooper
\& Papaloizou 2008; Kley \& Crida 2008). These simulations show
indications of slowing migration due to coorbital torques for certain
ranges of gas diffusivity and turbulent viscosity. In this Letter we
discuss another mechanism that naturally occurs when the disk
turbulent viscosity is sufficiently small.
The Tanaka et al (2002) Type~I migration rates were derived under the
assumption that the disk density distribution is unaffected by the
presence of the planet. Numerical simulations commonly adopt
turbulent viscosity parameter values $\alpha \ga 10^{-3}$. Such values
are suggested by considering the observationally inferred disk masses
and accretion rates for T Tauri stars (e.g., Hartmann et al 1998).
With such $\alpha$ values, turbulent diffusion suppresses disk
disturbances for planets of mass less than $0.1 M_J$. The numerical
simulations then satisfy the Tanaka et al (2002) assumptions and yield
migration rates that are in close agreement.
The various models of planet formation by core accretion typically
involve lower $\alpha$ values, $\alpha < 10^{-3}$ (see Cuzzi \&
Weidenschilling 2006), which we refer to as nearly laminar values. To
form planetesimals via gravitational instability from small dust
particles (Safronov 1969, Goldreich \& Ward 1973) requires the dust
layer to be very thin $\sim 10^{-4} H$, suggesting $\alpha \ll
10^{-4}$. For the solids to be dynamically decoupled from the gas
requires a dust layer disk of thickness $\la 10^{-2} H$, again
suggesting nearly laminar conditions. Cuzzi \& Weidenschilling (2006)
estimate that $\alpha \la 2 \times 10^{-4}$ in order that meter size
solids avoid destructive effects of collisions due to turbulent
motions.
In the planet formation regions of disks, considerations of the MRI
(Balbus \& Hawley 1991) suggest that the disk may be unstable only in
surface layers, due to the low levels of ionization below these layers
(Gammie 1996). A major uncertainty is the abundance of small grains
that can suppress the instability. The disk may be nearly laminar for
the purposes of planet formation. However, surface layer turbulent
fluctuations may propagate disturbances to the disk midplane. They may
provide some effective turbulence in that region as well (Fleming \&
Stone 2003; Turner \& Sano 2008).
In nearly laminar disks, density waves launched by a planet at various
Lindblad resonances can redistribute disk mass as they damp. The disk
turbulent viscosity needs to be sufficiently small for the density
perturbation to not diffuse away. The redistributed gas slightly
enhances (reduces) the disk density interior (exterior) to the orbit
of an inwardly migrating planet. Some analytic studies have shown
that such density feedback effects could slow and even halt the migration
(Hourigan \& Ward 1984; Ward \& Hourigan 1989; Ward 1997; Rafikov
2002). The critical planet mass at which the feedback becomes
important depends on the efficiency of the damping of waves excited by
the planet. Wave damping may be due to the shock dissipation and will
generally occur nonlocally, at some distance from the radius where the
wave is launched. In this {\em Letter}, we explore the consequences
of nearly laminar disks on planet migration by means of nonlinear 2D
numerical simulations.
\section{Numerical Method and Initial Setup}
\label{sec:init}
We assume that the protoplanetary disk is thin and can be described by
the 2D isothermal Navier-Stokes equations in a cylindrical \{$r,
\phi$\} plane centered on the star with vertically integrated
quantities. The differential equations are the same as given in Kley
(1999). Simulations are carried out using a hydro code developed at
Los Alamos (Li et al. 2005). We also use the local comoving angular
sweep as proposed in the FARGO scheme of Masset (2000) and modified in
Li et al. (2001). The equations of motion of the planets are the same
as given in D'Angelo et al. (2005), which we adapted to the polar
coordinates with a fourth-order Runge-Kutta solver. During each
hydrodynamics time step, the motion of the planet is divided into
several substeps so that the planet always moves within 0.05 local
grid spacing, $\delta = [(\Delta r)^2 + (r_i\Delta \phi)^2]^{1/2}$, in
one substep. The disk gravitational force on the planet is assumed to
evolve linearly with time between two hydrodynamics time steps.
Furthermore, we have implemented a full 2D self-gravity solver on our
uniform disk grid (Li, Buoni, \& Li 2008). This solver uses a mode
cut-off strategy and combines FFT in the azimuthal direction and
direct summation in the radial direction. The algorithm is
sufficiently fast that the self-gravity solver costs less than $10\%$
of the total computation cost in each run. This code has been
extensively tested on a number of problems. With our pseudo-3D
treatment (see Li et al. 2005 for details) and a small (a few grid
size) softening distance in the planet's potential, migration rates from
simulations with sufficient viscosity (dimensionless kinematic
viscosity $\nu \simeq 10^{-6}$) agree well (within a few percent) with
the 3D linear theory results by Tanaka et al. (2002). As the softening
distance increases to $r_H$, the migration rates from such simulations
are $\sim 30\%$ slower than the 3D linear theory result. The runs
presented here use $r_H$ as the softening distance.
The 2-D disk is modeled between $0.4 \leq r \leq 2$. The planet is
initially located at $r=1$, which corresponds to a physical distance
of Jupiter's orbital radius (5.2 AU), and orbits about a $1 M_{\odot}$
star. The unit of time is the initial orbit period $P$ of the planet,
which is about 12 yr. A corotating frame that rotates with the
initial angular velocity of the planet is used. The coordinate plane
is centered on the central star at $(r,\phi)=(0,0)$ (acceleration due
to frame rotation is also included, the so-called indirect term). The
disk is assumed to be isothermal throughout the simulated region,
having a constant sound speed $c_s$. The dimensionless disk thickness
is scaled by the initial orbital radius of the planet
$h=c_s/v_{\phi}(r=1)$, where $v_{\phi}$ is the Keplerian velocity. We
consider values $h = 0.035$ or $0.05$ in the simulations. We have also
made runs using constant disk aspect ratio $H/r$, and found that our
main conclusions are not changed.
Our numerical schemes require two ghost cells in the radial direction
(the angular direction is periodic). Holding these ghost cells at the
initial steady state values produced the weakest boundary reflections
among all boundary conditions we investigated. We choose an initial
surface density profile normalized to the minimum mass solar nebular
model (Hayashi 1981) as $\Sigma(r) = 152\, f (r/5{\rm AU})^{-3/2}$ gm
cm$^{-2}$, where $f$ ranges from $1-5$ in our simulations. The
initial rotational profile of the disk is calculated so that the disk
will be in equilibrium with the disk self-gravity and pressure
(without the planet). The mass ratio between the planet and the
central star is $\mu=M_{p}/M_{\ast}$, which ranges from $3\times
10^{-6}$ to $10^{-4}$. The planet's Hill (Roche) radius is $r_H = r_p
(\mu/3)^{1/3}$. The dimensionless kinematic viscosity $\nu$ (normalized by
$\Omega r^2$ at the planet's initial orbital radius) is taken to
be spatially constant and ranges between $0$ and $10^{-5}$. For
$h=0.05$, the effective Shakura and Sunyaev $\alpha = \nu /h^2$ at the
initial planet radius ranges between $0$ and $4\times10^{-3}$. We have
performed various tests to show that when $\nu=0$, the effective
numerical viscosity in our simulations is $\nu < 10^{-9}$ or $\alpha <
4\times 10^{-7}$. We typically evolve the disk without the planet for
10 $P$. Subsequently, the planet's gravitational potential is gradually
``turned-on'' over a 30-orbit period, allowing the disk to respond to
the planet potential gradually. Note that the time shown in all the
figures in this {\it Letter} starts at the time of the planet release.
Runs are made typically using a radial and azimuthal grid of
$(n_{r}\times n_{\phi})= 800\times 3200$, though we have used higher
resolution to ensure convergence on some runs. Simulations typically
last several thousand orbit periods at $r=1$.
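A few of the quantities above can be checked numerically; the following sketch evaluates the Hill radius and the effective $\alpha$ for representative parameters:

```python
def hill_radius(r_p, mu):
    """Hill radius r_H = r_p * (mu/3)**(1/3) for planet-to-star mass ratio mu."""
    return r_p * (mu / 3.0) ** (1.0 / 3.0)

def alpha_from_nu(nu, h):
    """Effective Shakura-Sunyaev alpha = nu / h**2 at the planet's initial radius."""
    return nu / h ** 2

# A 10 M_Earth planet (mu = 3e-5) at r_p = 1 in code units
print(hill_radius(1.0, 3e-5))     # ≈ 0.0215
# The largest simulated viscosity, nu = 1e-5, with h = 0.05
print(alpha_from_nu(1e-5, 0.05))  # 4e-3
```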
\section{Results}
Figure \ref{fig:vis_all} shows the influence of the imposed
disk viscosity on the
migration for a planet with $\mu = 3\times 10^{-5}$ or $10
M_{\oplus}$, $h = 0.035 $, and $f=5$. For relatively large viscosity
($\nu = 10^{-6}$, $\alpha = 8 \times 10^{-4}$), the migration rates
agree well with the Type I rates given by Tanaka et al. (2002), as
discussed above. At early times the migration rates are largely
independent of the disk viscosity. As viscosity decreases, after about 100
P, the migration is drastically slowed or completely halted. The rapid
oscillations with modest amplitude at $t \sim 800 P$ are due to
the excitation of vortices from a secondary instability (Koller et al.
2003; Li et al. 2005; see also Li et al. 2001), which will be a
subject for future studies. Figure \ref{fig:rhos_vis_all} reveals the
reason for the slow-down. A partial gap in the disk around the planet
has formed at $t=500 P$ and the density profile deviates significantly
from the initial power-law. The asymmetry in the density distribution
interior and exterior to the planet has reduced the contribution from
the outer Lindblad torque so that the net torque is approximately
zero. We verified that the slow-down shown in Fig. \ref{fig:vis_all}
is largely caused by the density redistribution. In principle, the
torque distributions per unit disk mass ($dT/dM(r)$; see Fig. 1 in
D'Angelo \& Lubow 2008) could also be affected.
But we find these changes only slightly
modify (at the few percent level) the net migration torque.
In the case of $\nu = 10^{-6}$ in Fig~\ref{fig:rhos_vis_all}, the
profile is qualitatively similar to the expectations of steady state
theory (Ward 1997; Rafikov 2002). In particular there is a density
peak at $r < r_p$ and a trough at $r > r_p$. The torque that the disk
exerts on the planet is localized to a region of a few times the disk
thickness or about $\pm 4 r_H$ from radius $r_p$. In
Fig~\ref{fig:rhos_vis_all} for $\nu = 10^{-6}$ , we see that the
perturbed density extends over a somewhat greater region of space,
suggesting that nonlocal damping is involved.
In the case of lower viscosities, $\nu = 10^{-7}, 10^{-9} $ in
Fig~\ref{fig:rhos_vis_all}, the density profiles are quite different
from the $\nu = 10^{-6} $ case. In these cases, the planet has
effectively stopped migrating and the planet is opening a gap. We find
that the gap deepens over time, i.e., the system does not reach a
steady state. The large density gradients cause the vortex instability
to develop, as discussed above.
The strong density feedback in nearly inviscid disks and the reduction
in the total torque suggest the existence of critical planet mass
above which migration can be slowed significantly or halted. Fig.
\ref{fig:r_t_crit} shows the transition from the usual Type I
migration to a much slower migration as the planet mass is increased,
for $f=2$, $h= 0.035, 0.05$, and $\nu = 0$. The reduction in
migration rates is gradual, so it is difficult to define a single
value above which the migration will be halted. The values we
determine apply over the time range of several thousand $P$. We have
performed a large number of runs for six different disk properties:
$f=1, 2, 5$, $h = 0.035$ and $0.05$. In Table 1 we give the estimates
of the planet masses in which the migration has significantly slowed
in our simulations. Above these values, migration was found to be
halted.
Previous studies have emphasized the role of gap formation in slowing
down the planet migration (Lin \& Papaloizou 1986; Crida \& Morbidelli
2007). From Fig. 2, we can see that a partial gap is formed at late
time. But the slowing down of the migration has started much earlier
(see also Fig. 3) where the gap is much less deep. This is consistent
with our interpretation that the slowing down of the migration is
primarily caused by the torque resulting from the asymmetric mass
re-distribution, i.e., the density feedback effects discussed at the
end of \S \ref{sec:intro}. With the planet mass above the critical
mass, it is no longer possible to have steady state migration (as
explained in the analytic studies). In this regime, a gap gradually
develops and it deepens with time. But, it is still the asymmetry in
the density distribution (see Fig. 2) that ensures a much reduced (or
zero) net total torque.
\section{Discussions}
\label{sec:diss}
The local wave damping model of Ward \& Hourigan (1989) suggests
critical planet masses of $M_{\rm cr} \sim \Sigma r_p^2 h^3$, which
evaluates to $0.006 f M_{\oplus}$ and $0.02 f M_{\oplus}$ for $h =
0.035$ and $0.05$, respectively. These values differ from Table 1 by a
factor of more than 100. However the scaling of the critical mass with
disk thickness is close to $h^3$ as given by this theory. The scaling
of $M_{\rm cr}$ with surface density in Table 1 is, however, weaker than
that suggested by this theory. In another local wave damping model,
Ward (1997) suggests that $M_{\rm cr} \sim 0.2 \Sigma r_p^2 h$, which
evaluates to $1.1 f M_{\oplus}$ and $1.6 f M_{\oplus}$ for $h = 0.035$
and $0.05$, respectively. Although these values are numerically closer
to the simulation values in Table 1, the predicted linear scaling in
both $f$ and $h$ does not agree with the trends in the Table 1.
The analytic model of Rafikov (2002)
includes the effects of nonlocal damping by means of shocks. In that
case, the critical masses are given by
\begin{equation}
M_{cr} = \frac{2 c_s^3}{3 \Omega G}~ {\rm min}\left[5.2 Q^{-5/7}, \,
3.8 ( Q/h )^{-5/13} \right],
\end{equation}
where $Q = \Omega c_s/(\pi G \Sigma)$.
All the simulated cases correspond to the strong feedback
branch of $M_{\rm cr}$ that is given by the second
argument of the $\min$ function.
Values for these critical
masses are also given in Table 1. We see that the agreement between
the simulation and theory is quite good.
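As a quick numerical cross-check, the criterion above is easy to evaluate. The sketch below (Python, cgs units) uses illustrative disk parameters of roughly minimum-mass solar nebula order at $1$ AU; these specific numerical values are our assumptions for illustration and are not taken from Table 1.

```python
import math

G_CGS = 6.674e-8  # gravitational constant in cgs units


def rafikov_critical_mass(c_s, omega, sigma, h):
    """Critical planet mass of Rafikov (2002) in grams (all inputs in cgs).

    Q is the Toomre parameter; min() selects between the two branches of
    the criterion, the second being the strong feedback branch.
    """
    Q = omega * c_s / (math.pi * G_CGS * sigma)
    branch1 = 5.2 * Q ** (-5.0 / 7.0)
    branch2 = 3.8 * (Q / h) ** (-5.0 / 13.0)
    m_cr = 2.0 * c_s**3 / (3.0 * omega * G_CGS) * min(branch1, branch2)
    return m_cr, branch2 < branch1  # True if on the strong feedback branch
```

With, e.g., $h = 0.035$, $\Omega = 2\times10^{-7}\,{\rm s^{-1}}$, $\Sigma = 1700\,{\rm g\,cm^{-2}}$, and $c_s = h\,\Omega r_p$, the strong feedback branch is selected and the critical mass comes out at a few Earth masses, consistent with the discussion above.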
The critical masses are in the range of the core masses during the
slow phase of gas accretion in the core accretion model of planet
formation. This result suggests that planet migration might not be a
limiting factor in planet formation. The phase of run-away mass
accretion follows the slow evolution phase. Previous studies suggest
that run-away mass accretion to $1 M_J$ in a disk with $\alpha \simeq
0.004$ occurs in about $10^{5} y$ (e.g., D'Angelo \& Lubow 2008).
During this phase, we speculate that
more modest levels of turbulent viscosity $ 10^{-4}
< \alpha < 10^{-3}$ may provide sufficient accretion to form a $1 M_J$
planet within $\sim 10^{6} y$, while remaining in this nearly laminar
disk regime of slow Type I migration.
Although this picture is suggestive of a resolution of the migration
problem, the longer term evolution of nearly laminar disk-planet
systems requires further exploration. The slowly migrating planet
will continue to create a deeper gap over time. The steepening density
gradients should lead to the vortex instability (Koller et al. 2003;
Li et al. 2005). The consequences of the vortex instability should be
explored.
\acknowledgments
The research at LANL is supported by a Laboratory Directed Research
and Development program. S.L. acknowledges support from NASA Origins
grant NNX07AI72G.
\section{Introduction}
\label{sec:intro}
\vspace{-1.5mm}
\gls{ASR} is a key technology for the task of automatic analysis of any kind of spoken speech, e.g., phone calls or meetings.
For scenarios of relatively clean speech, e.g., recordings of telephone speech or audio books, \gls{ASR} technologies have improved drastically over the recent years \cite{Xiong2018_Microsoft2017ConversationalSpeech}.
More realistic scenarios, such as spontaneous speech or meetings with multiple participants, often require the \gls{ASR} system to recognize the speech of multiple speakers simultaneously.
In meeting scenarios for example, the overlap is in the range of \SIrange{5}{10}{\percent}\footnote{Measured on the AMI meeting corpus \cite{ami}.} and can easily exceed \SI{20}{\percent} in informal get-togethers\footnote{Measured on the CHiME-5 database.}.
Thus, there has been a growing interest in source separation systems and multi-speaker \gls{ASR}.
A special focus lies on the processing of single-channel recordings: this is important not only when just a single channel is available (e.g., in telephone conference recordings), but also for multi-channel recordings in which conventional multi-channel processing methods such as beamforming cannot separate the speakers well enough, e.g., because they are spatially too close to each other.
The topic of single-channel source separation has been examined extensively over the last few years, trying to solve the cocktail party problem with techniques such as \gls{DPCL} \cite{Isik2016_Singlechannelmulti}, \gls{PIT} \cite{Yu2016_PermutationInvariantTraining} and TasNet \cite{Luo2017_tasnet,Luo2018_ConvTasNetSurpassing}.
In \gls{DPCL}, a neural network is trained to map each time-frequency bin to an embedding vector in a way that embedding vectors of the same speaker form a cluster in the embedding space.
These clusters can be found by a clustering algorithm and be used for constructing a mask for separation in frequency domain.
Concurrently, \gls{PIT} has been developed which trains a simple neural network with multiple outputs to estimate a mask for each speaker with a permutation invariant training criterion.
The reconstruction loss is calculated for each possible assignment of training targets to estimations for a mixture, and the permutation that minimizes the loss is then used for training.
Both \gls{DPCL} and \gls{PIT} show a good separation performance in time-frequency domain.
The permutation-invariant training scheme was adopted to time domain source separation with a Time domain Audio Separation Network (TasNet) which replaces the commonly used \gls{STFT} with a learnable transformation and directly works on the raw waveform.
TasNet achieves a \gls{SDR} gain of more than \SI{15}{\decibel}, even outperforming oracle masking in frequency domain.
Based on these source separation techniques, multi-speaker \gls{ASR} systems have been constructed.
\gls{DPCL} and \gls{PIT} have been used as frequency domain source separation front-ends for a state-of-the-art single-speaker \gls{ASR} system and extended to jointly trained \gls{E2E} or hybrid systems \cite{Menne2019_AnalysisDeepClustering,Settle2018_Endendmulti,Yu2017_RecognizingMultitalker,Qian2018_Singlechannelmulti}.
They showed that joint (re-)training can improve the performance of these models over a simple cascade system.
The effectiveness of TasNet as a time domain front-end for \gls{ASR} was investigated in \cite{Bahmaninezhad2019_comprehensivestudyspeech}, showing an improvement over frequency domain processing for both source separation and \gls{ASR} results.
However, TasNet was not yet optimized jointly with an \gls{ASR} system, possibly due to the intricacies of dealing with the high memory consumption or the novelty of the TasNet method.
In this paper, we combine a state-of-the-art front-end, i.e., Conv-TasNet \cite{Luo2018_ConvTasNetSurpassing}, with an \gls{E2E} CTC/attention \cite{Kim2017_JointCTCattention,Watanabe2017_HybridCTC/attentionarchitecture,Chan2016_Listenattendspell} \gls{ASR} system to form
an E2E multi-speaker \gls{ASR} system that directly operates on raw
waveform features.
We try to answer the questions whether it is possible to jointly train a time domain source separation system like Conv-TasNet with an \gls{E2E} \gls{ASR} system and whether the performance can be improved by joint fine-tuning.
Going further on the investigations from \cite{Bahmaninezhad2019_comprehensivestudyspeech}, we retrain pre-trained front- and back-end models jointly and show by evaluating on the WSJ0-2mix{} database that a simple combination of an independently trained Conv-TasNet and \gls{ASR} system already provides competitive performance compared to other \gls{E2E} approaches, while a joint fine-tuning of both modules in the style of an \gls{E2E} system can further improve the performance by a large margin.
We enable joint training by distributing the model over multiple GPUs and show that an approximation of truncated back-propagation through time~\cite{werbos1990backpropagation} for convolutional networks enables joint training even on a single GPU by significantly reducing the memory usage while still providing a good performance.
We finally put this work into perspective by providing a compact overview of single-channel multi-speaker \gls{ASR} systems and illustrating the complexity of the design space.
\begin{figure*}[t]
\centering
\begin{tikzpicture}
\node[box,minimum height=2cm,minimum width=10mm,fill=black!10] (convtas) {};
\node[box,minimum height=2cm,minimum width=10mm,right=20mm of convtas] (perm) {};
\coordinate (perm-in-1) at ($(perm.west)+(0,7mm)$);
\coordinate (perm-in-2) at ($(perm.west)+(0,-7mm)$);
\coordinate (perm-out-1) at (perm.east|-perm-in-1);
\coordinate (perm-out-2) at (perm.east|-perm-in-2);
\node[box,right=3cm of perm-out-1,minimum height=1cm,fill=black!10] (asr1) {\begin{tabular}{c}
ASR \\ encoder
\end{tabular}};
\node[box,right=3cm of perm-out-2,minimum height=1cm,fill=black!10] (asr2) {\begin{tabular}{c}
ASR \\ encoder
\end{tabular}};
\node[branch,right=5mm of asr1] (br3) {};
\node[branch,right=5mm of asr2] (br4) {};
\node[box,above right=.5mm and 10mm of asr1.east,minimum height=.5cm,fill=green!10] (ctc1) {CTC};
\node[box,above right=.5mm and 10mm of asr2.east,minimum height=.5cm,fill=green!10] (ctc2) {CTC};
\node[box,below right=.5mm and 10mm of asr1.east,minimum height=.5cm,fill=blue!10] (dec1) {att. decoder};
\node[box,below right=.5mm and 10mm of asr2.east,minimum height=.5cm,fill=blue!10] (dec2) {att. decoder};
\node[box,fill=orange!10, yshift=-17.5mm] (sisnr) at ($(convtas.east)!1/2!(perm.west)$) {\begin{tabular}{c} perm. inv. \\ SI-SNR loss \end{tabular}};
\coordinate (mix) at ($(convtas.west) + (-10mm, 0)$);
\coordinate (sin1) at ($(convtas.west|-sisnr.west) + (-10mm, 3mm)$);
\coordinate (sin2) at ($(convtas.west|-sisnr.west) + (-10mm, -3mm)$);
\node (mix-audio) [anchor=east] at (mix) {\includegraphics[height=1cm]{img/mix.png}};
\node (s1-audio) [anchor=east] at (sin1) {\includegraphics[height=1cm]{img/s1.png}};
\node (s2-audio) [anchor=east] at (sin2) {\includegraphics[height=1cm]{img/s2.png}};
\node[right=1cm of dec1] (l-ctc) {$\ensuremath{\mathcal{L}}^{\mathrm{(CTC)}}$};
\node[] at (ctc2-|l-ctc) (l-att) {$\ensuremath{\mathcal{L}}^{\mathrm{(att)}}$};
\node[right=2cm of sisnr] (l-fe) {\ensuremath{\L^{(\mathrm{FE})}}};
\draw[arrow] (mix) -- node [above] {$\ensuremath{\mathbf}{x}$} (convtas);
\draw[] (perm-in-1) -- (perm-out-1);
\draw[] (perm-in-1) -- (perm-out-2);
\draw[] (perm-in-2) -- (perm-out-2);
\draw[] (perm-in-2) -- (perm-out-1);
\draw[] (convtas.east|-perm-in-1) --node[branch,pos=0.33] (br1) {} (perm-in-1);
\draw[] (convtas.east|-perm-in-2) --node[branch,pos=0.66] (br2) {} (perm-in-2);
\draw[arrow] (br1) -- (br1|-sisnr.north);
\draw[arrow] (br2) -- (br2|-sisnr.north);
\draw[arrow] (sisnr) -|node[right]{$\pi_\mathrm{sig}$} (perm);
\draw[arrow] (perm-out-1) -- node [above,near start] {$\ensuremath{\mathbf}{x}^{\mathrm{(enh)}}_1$} node[above, at end, anchor=south east] {\includegraphics[height=.7cm]{img/s1.png}} (asr1);
\draw[arrow] (perm-out-2) -- node [above,near start] {$\ensuremath{\mathbf}{x}^{\mathrm{(enh)}}_2$} node[above, at end, anchor=south east] {\includegraphics[height=.7cm]{img/s2.png}} (asr2);
\draw[arrow] (sin1) -- node [above] {$\ensuremath{\mathbf}{x}_1$} +(10mm, 0) -- (sisnr.west|-sin1);
\draw[arrow] (sin2) -- node [above] {$\ensuremath{\mathbf}{x}_2$} +(10mm, 0) -- (sisnr.west|-sin2);
\draw (asr1) -- (br3);
\draw (asr2) -- (br4);
\draw[arrow] (br3) |- (ctc1);
\draw[arrow] (br3) |- (dec1);
\draw[arrow] (br4) |- (ctc2);
\draw[arrow] (br4) |- (dec2);
\draw[] (mix-audio) -- (mix);
\draw[] (s1-audio) -- (sin1);
\draw[] (s2-audio) -- (sin2);
\draw[arrow,very thick,green!40,dashed] (ctc1) to[out=0,in=170] (l-ctc);
\draw[arrow,very thick,green!40,dashed] (ctc2) to[out=0,in=190] (l-ctc);
\draw[arrow,very thick,blue!40,dashed] (dec1) to[out=0,in=170] (l-att);
\draw[arrow,very thick,blue!40,dashed] (dec2) to[out=0,in=190] (l-att);
\draw[arrow,very thick,dashed,orange!45] (sisnr) to[out=350,in=200] (l-fe);
\node[fill=white, inner sep=0,rotate=90] at (perm) {perm. assign.};
\node[inner sep=0,rotate=90] at (convtas) {Conv-TasNet};
\end{tikzpicture}
\caption{Architecture of the joint E2E ASR model. Sources are separated by a Conv-TasNet and separated audio streams are processed by a single-speaker ASR system. During training, the permutation problem is solved based on the signal level loss with $\pi_\mathrm{sig}$.}
\label{fig:architecture}
\end{figure*}
\section{Relation to Prior Work}
\label{sec:related}
Other works have already studied the effectiveness of frequency domain source separation techniques as a front-end for \gls{ASR}.
\gls{DPCL} and \gls{PIT} have been efficiently used for this purpose, and it was shown that joint retraining for fine-tuning can improve performance \cite{Menne2019_AnalysisDeepClustering,Settle2018_Endendmulti,Qian2018_Singlechannelmulti}.
\gls{E2E} systems for single-channel multi-speaker \gls{ASR} have been proposed that no longer consist of individual parts dedicated for source separation and speech recognition, but combine these functionalities into one large monolithic neural network.
They extend the encoder of a CTC/attention-based \gls{E2E} \gls{ASR} system to separate the encoded speech features and let one or multiple attention decoders generate an output sequence for each speaker \cite{Seki2018_purelyendend,Chang2019_EndEndMonauralMulti}.
These models show promising performance, but they are not on par with hybrid cascade systems yet.
Drawbacks of these monolithic \gls{E2E} models compared to cascade systems include that they cannot make use of parallel and single-speaker data and that they do not allow pre-training of individual system parts.
The impact of using raw waveform features directly for the task of multi-speaker \gls{ASR} has only been investigated for a combination of TasNet and a single-speaker \gls{ASR} system \cite{Bahmaninezhad2019_comprehensivestudyspeech}, but not yet jointly trained.
\section{Source separation and speech recognition}
\label{sec:proposed}
\subsection{Time domain source separation with Conv-TasNet}
Conv-TasNet \cite{Luo2018_ConvTasNetSurpassing} is a single-channel source separating front-end which can be trained to produce waveforms for a fixed number of speakers from a mixture waveform.
It is a variant of \cite{Yu2016_PermutationInvariantTraining}, replacing the feature extraction by a learnable transformation and the separation network by a convolutional architecture.
It outputs an estimated audio stream in time domain for each speaker present in the input signal $\ensuremath{\mathbf}{x}$:
\begin{equation}
[\ensuremath{\mathbf}{x}^{\mathrm{(enh)}}_1, \ensuremath{\mathbf}{x}^{\mathrm{(enh)}}_2] = \mathrm{ConvTasNet}(\ensuremath{\mathbf}{x}).
\end{equation}
The model directly works on the raw waveform instead of STFT frequency domain features which makes it possible both to easily model and reconstruct phase information and to propagate gradients through the feature extraction and signal reconstruction parts.
Because gradients can be propagated from the raw waveform at the output back to the raw waveform at the input, it is possible to directly optimize a loss on the time domain signals, such as the \gls{SI-SNR} loss, which we call the front-end loss \ensuremath{\L^{(\mathrm{FE})}} here.
This loss is optimized in a permutation-invariant manner by picking the assignment $\pi_\mathrm{sig}$ of estimations to targets that minimizes the loss.
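A minimal sketch of this permutation-invariant criterion is shown below (NumPy; the function names and the simplified zero-mean SI-SNR formulation are ours, not a specific toolkit API):

```python
import itertools

import numpy as np


def si_snr(estimate, target, eps=1e-8):
    """Scale-invariant SNR in dB between one estimate and one target signal."""
    estimate = estimate - estimate.mean()
    target = target - target.mean()
    # Project the estimate onto the target; the residual counts as noise.
    s_target = (estimate @ target) / (target @ target + eps) * target
    e_noise = estimate - s_target
    return 10.0 * np.log10((s_target @ s_target) / (e_noise @ e_noise + eps))


def pit_si_snr_loss(estimates, targets):
    """Negative SI-SNR, minimized over all assignments of estimates to targets.

    Returns the loss and the minimizing permutation (pi_sig in the text).
    """
    best_loss, best_perm = None, None
    for perm in itertools.permutations(range(len(estimates))):
        loss = -np.mean([si_snr(estimates[p], targets[k]) for k, p in enumerate(perm)])
        if best_loss is None or loss < best_loss:
            best_loss, best_perm = loss, perm
    return best_loss, best_perm
```

The exhaustive search over permutations grows factorially with the number of speakers, which is unproblematic for the two-speaker mixtures considered here.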
Since the Conv-TasNet is built upon a convolutional architecture, it can be heavily parallelized on GPUs as compared to RNN-based models, but has a limited receptive field.
When optimized for source separation only, the limited length of the receptive field is actively exploited by training on chunks of \SI{4}{\second} length randomly cut from the training examples; this both increases the variability of the data within one minibatch and simplifies the implementation, since the length of all training examples is then fixed.
\subsection{End-to-end CTC/attention speech recognition}
As a speech recognizer we use a CTC/attention-based \gls{ASR} system.
We use an architecture similar to \cite{Watanabe2017_HybridCTC/attentionarchitecture} with an implementation included in the ESPnet framework \cite{Watanabe2018_ESPnet}, but we replace the original filterbank and pitch feature extraction by log-mel features implemented in a way such that gradients can be propagated through.
This way, gradients can flow from the \gls{ASR} system to the front-end.
The multi-target loss for the ASR system \ensuremath{\L^{(\mathrm{ASR})}} is composed of a CTC and an attention loss
\begin{equation}
\ensuremath{\L^{(\mathrm{ASR})}} = \lambda \ensuremath{\mathcal{L}}^{(\mathrm{CTC})} + (1 - \lambda)\ensuremath{\mathcal{L}}^{(\mathrm{att})}
\end{equation}
with a weight $\lambda$ that controls the interaction of both loss terms.
During training, teacher forcing using the ground truth transcription labels is employed for the attention decoder.
In teacher forcing, the input for the next step of the recurrent attention decoder is calculated using the ground truth label instead of the network output.
\section{Joint end-to-end multi-speaker ASR}
We propose to combine a Conv-TasNet as the source separation front-end with a CTC/attention speech recognizer as displayed in \cref{fig:architecture}.
The input mixture $\ensuremath{\mathbf}{x}$ is separated by the front-end and the separated audio streams are processed by a single-speaker \gls{ASR} back-end.
Although multi-speaker speech recognition can already be performed by combining independently trained front- and back-end systems, the source separator produces artifacts unknown to the \gls{ASR} system which degrade its performance.
According to \cite{Heymann2017_BeamnetEndend}, and as also shown in \cite{Settle2018_Endendmulti,Qian2018_Singlechannelmulti}, such a mismatch can be mitigated by jointly fine-tuning the whole model at once.
We here compare three different variants of joint fine-tuning: (a) fine-tuning just the ASR system on the enhanced signals, (b) fine-tuning just the front-end by propagating gradients through the ASR system but only updating the front-end parameters and (c) jointly fine-tuning both systems.
The losses for the front- and back-end are combined as
\begin{equation}
\ensuremath{\mathcal{L}} = \alpha\ensuremath{\L^{(\mathrm{FE})}} + \beta\ensuremath{\L^{(\mathrm{ASR})}},
\end{equation}
where $\alpha$ and $\beta$ are manually chosen weights for the front-end and \gls{ASR} losses.
For (a) $\alpha$ is set to $0$ and $\beta$ to $1$, for (b) $\alpha$ is set to $1$ and $\beta$ to $0$, and for (c) $\beta$ is set to $1$ and $\alpha$ is set to $0.5$.
In order to choose the transcription for teacher forcing and loss computation a permutation problem needs to be solved.
Two possible options are to use the permutation $\pi_\mathrm{CTC}$ that minimizes the CTC loss as in \cite{Seki2018_purelyendend}, or the permutation $\pi_\mathrm{sig}$ that minimizes the signal level loss, as in \cite{Settle2018_Endendmulti}.
While using $\pi_\mathrm{CTC}$ has the advantage of not requiring parallel data, permutation assignment based on $\pi_\mathrm{sig}$ works more reliably in our experiments and we use $\pi_\mathrm{sig}$ for all fine-tuning experiments even when the front-end is not optimized.
\subsection{Approximated truncated back-propagation through time}
One-dimensional Convolutional Neural Networks (1D-CNNs) over time, as they are used in the Conv-TasNet, can be seen as an alternative to Recurrent Neural Network (RNN) architectures.
Similar to RNNs, this can lead to enormous memory consumption when a sufficiently long time series is used for back-propagation.
For example, we here fine-tune the Conv-TasNet with the E2E \gls{ASR} model jointly on single mixtures.
Although we constrain ourselves to a batch size of one, this requires splitting the model across four GPUs, three for the front-end and one for the back-end, by placing individual layers on different devices.
This memory consumption can be addressed by generalizing truncated back-propagation through time (TBPTT) to 1D-CNN architectures.
TBPTT for 1D-CNNs can in theory be realized by back-propagating the gradients for a part of the output only.
While moving back towards the input, the gradients reaching over the borders of this part are ignored.
In practice, however, this is difficult to implement and we here approximate TBPTT for the Conv-TasNet front-end by ignoring the left and right context of the block the gradients are computed for.
We first compute the forward step on the whole mixture without building the backward graph to obtain an output estimation for the whole signal.
Note that this only requires to store the output signal and no persistent data for the backward computation.
We then compute the forward step again with enabled backward graph construction, but only for a chunk randomly cut from the input signal.
The approximated output for the whole utterance is formed by overwriting the corresponding part of the full forward output with the approximated chunk output.
This full output is passed to the \gls{ASR} back-end and gradients reaching the front-end from the back-end are only back-propagated through the approximated chunk.
This technique allows us to run the joint training on a single GPU in our case; even with larger GPU memory, it permits increasing the batch size, which in general speeds up training and produces a smoother gradient.
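The procedure can be illustrated with a toy stand-in for the front-end, a single same-padded 1-D convolution (NumPy; names are ours). The sketch shows the nature of the approximation: inside the chunk, beyond the receptive field radius from its borders, the context-free chunk output agrees exactly with the full-context output, while a small error remains at the chunk borders. In the real model, the full pass would additionally be computed without building a backward graph, so that only the chunk stores gradient information.

```python
import numpy as np


def toy_separator(x, kernel):
    # Stand-in for the convolutional front-end: one "same"-padded 1-D convolution.
    return np.convolve(x, kernel, mode="same")


def approx_tbptt_forward(x, kernel, start, chunk_len):
    """Chunk-approximated forward pass.

    `full` plays the role of the gradient-free forward over the whole signal;
    `chunk` is recomputed without left/right context and would be the only
    part connected to the backward graph.
    """
    full = toy_separator(x, kernel)
    chunk = toy_separator(x[start:start + chunk_len], kernel)
    out = full.copy()
    out[start:start + chunk_len] = chunk  # overwrite only the chunk region
    return out, full
```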
\section{Experiments}
\label{sec:experiments}
We carry out experiments on the WSJ database and the commonly used WSJ0-2mix{} dataset first proposed in \cite{Isik2016_Singlechannelmulti}.
The data in WSJ0-2mix{} is generated by linearly mixing clean utterances from WSJ0 (si\_tr\_s for training and si\_et\_05 for testing) at ratios randomly chosen from \SIrange{0}{5}{\decibel}.
It consists of two different datasets, namely the min and max datasets.
The min dataset was designed for source separation and is formed by truncating the longer one of the two mixed recordings, so that it only contains fully overlapped speech.
We use this dataset for pre-training the Conv-TasNet, but it is not suitable for joint training where the audio data needs to match the full transcription.
For joint training we therefore use the max dataset, which does not truncate any recordings.
We use a sampling frequency of \SI{8}{kHz} for both the front- and back-end to speed up the training process.
We remove any labels marked as noisy, i.e., special tokens such as ``lip smack'' or ``door slam'', from the training transcriptions, since the front-end cannot assign them to one speaker based on speech information, which makes their estimation ambiguous.
We evaluate our experiments in terms of \gls{WER} and, where applicable, by signal reconstruction performance measured by SDR as supplied by the BSS-EVAL toolbox \cite{Fevotte2005_BSS_EVAL} and SI-SNR \cite{LeRoux2019_SDR}.
For the experiments on mixed speech, the \gls{WER} is computed for all possible combinations of predictions and ground truth transcriptions for one example and the \gls{WER} for the permutation with minimum \gls{WER} is reported.
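This oracle assignment of hypotheses to reference transcriptions can be sketched as follows (plain Python; a simple word-level Levenshtein distance stands in for the actual scoring tool):

```python
import itertools


def edit_distance(ref, hyp):
    """Word-level Levenshtein distance between two token sequences."""
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (r != h))
    return d[-1]


def min_permutation_wer(references, hypotheses):
    """WER minimized over all assignments of hypotheses to references."""
    n_words = sum(len(r) for r in references)
    best_errors = min(
        sum(edit_distance(r, hypotheses[p]) for r, p in zip(references, perm))
        for perm in itertools.permutations(range(len(hypotheses)))
    )
    return best_errors / n_words
```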
\subsection{Conv-TasNet time domain source separation}
We use the best performing architecture according to \cite{Luo2018_ConvTasNetSurpassing} and optimize it with the ADAM optimizer \cite{Kingma2014_Adam}.
In particular, following the hyper-parameter notations in the original paper, we set $N=512$, $L=16$, $B=128$, $H=512$, $P=3$, $X=8$ and $R=3$ with global layer normalization.
For distributing over multiple GPUs, we split between the three repeating convolutional blocks.
\cref{tab:tas} lists SDR and \gls{SI-SNR} performance for our Conv-TasNet model, comparing the min and max subsets of WSJ0-2mix.
It can be seen that our implementation of the Conv-TasNet reaches a comparable performance on the min dataset when compared to the original paper.
There is a slight degradation in performance on the max dataset, caused by the mismatch between training and test data: the model never saw long single-speaker regions during training and learned to always output a speech signal on both outputs, while such regions are present in the max dataset.
\begin{table}[ht]
\vspace{-1em}
\centering
\caption{SDR and SI-SNR in \si{\decibel} for the min and max test (tt) datasets of the WSJ0-2mix{} database.
}
\label{tab:tas}
\begin{tabular}{lSS}
\toprule
Dataset & {SDR} & {SI-SNR} \\
\midrule
WSJ0-2mix{} min \cite{Luo2018_ConvTasNetSurpassing} & 15.6 & 15.3 \\
WSJ0-2mix{} min (ours) & 14.3 & 13.8 \\
WSJ0-2mix{} max (ours) & 13.8 & 13.4 \\
\bottomrule
\end{tabular}
\vspace{-1.5em}
\end{table}
\begin{table*}[t]
\def\ditto{''}
\def\fontseries{b}\selectfont{\fontseries{b}\selectfont}
\robustify\fontseries
\robustify\selectfont
\centering
\caption{CER and WER on max test (tt) set of WSJ0-2mix{} for different variants of fine-tuning. All models are pre-trained.}
\label{tab:fine-tune}
\begin{tabular}{lccccSSSS}
\toprule
\mrow{Model} & \multicolumn{2}{c}{fine-tune} & \mrow{Joint training type} & \mrow{additional \\ SI-SNR loss} & {\mrow{CER}} & {\mrow{WER}} & {\mrow{SDR}} & {\mrow{SI-SNR}} \\
\cmidrule{2-3}
& front-end & back-end & & \\
\midrule
Conv-TasNet + RNN & --- & --- & --- & --- & 13.9 & 22.9 & 13.8 & 13.4 \\
\phantom{+}+ fine-tune ASR & --- & \ding{51} & single GPU & --- & 6.7 & 11.7 & \ditto & \ditto \\
\phantom{+}+ fine-tune TasNet & \ding{51} & --- & multi GPU & --- & 7.7 & 14.2 & 10.5 & 9.5 \\
\phantom{++}+ SI-SNR loss & \ding{51} & --- & multi GPU & \ding{51} & 7.7 & 14.3 & 12.5 & 12.1 \\
\phantom{+}+ fine-tune joint & \ding{51} & \ding{51} & multi GPU & --- & 6.2 & 11.7 & 9.8 & 8.4 \\
\phantom{++}+ SI-SNR loss & \ding{51} & \ding{51} & multi GPU & \ding{51} & \fontseries{b}\selectfont 6.0 & 11.1 & \fontseries{b}\selectfont 13.8 & \fontseries{b}\selectfont 13.5 \\
\phantom{+}+ fine-tune joint TBPTT & \ding{51} & \ding{51} & single GPU (TBPTT) & --- & 6.1 & \fontseries{b}\selectfont 11.0 & 11.7 & 11.5 \\
\phantom{++}+ SI-SNR loss & \ding{51} & \ding{51} & single GPU (TBPTT) & \ding{51} & 18.0 & 23.9 & 12.4 & 12.1 \\
\bottomrule
\end{tabular}
\end{table*}
\newcommand{\mc}[1]{\multicolumn{1}{c}{#1}}
\begin{table*}[ht]
\vspace{-4mm}
\centering
\caption{Comparison of single-channel multi-speaker ASR systems. They differ heavily in their used architecture, training data and technique.}
\label{tab:related}
\begin{tabular}{lcccccllHSHH}
\toprule
\mrow{Model}&\mrow{structure}&\mrow{pre-training}& \mrow{joint \\ training} & \mrow{signal \\ reconstruct.} & \mrow{no parallel\\data required} &\multicolumn{2}{c}{data} && {\mrow{WER}} \\
\cmidrule{7-8}
& & & & & & \mc{train} & \mc{eval} & {CER} & & {SDR} & {SI-SNR}\\
\midrule
DPCL & & & &\ding{51} & --- &WSJ0-2mix & &&& {10.4??} & {--} \\
\phantom{+}+ DNN-HMM \cite{Menne2019_AnalysisDeepClustering} & hybrid &\ding{51} & --- & \ding{51} & --- & WSJ0 & WSJ0-2mix && 16.5 & & \\
\phantom{+}+ CTC/attention \cite{Settle2018_Endendmulti}& E2E & \ding{51} & --- & \ding{51} & --- & WSJ0 & WSJ0-2mix & & 23.1 & & \\
\phantom{++}+ joint fine-tuning \cite{Settle2018_Endendmulti} & E2E & \ding{51} & \ding{51} & \ding{51} & --- & WSJ0-2mix &WSJ0-2mix&& 13.2 & 10.7 & \\
PIT-ASR (best) \cite{Qian2018_Singlechannelmulti,Chang2019_EndEndMonauralMulti} & hybrid & --- & \ding{51} & --- & \ding{51} & WSJ0-2mix & WSJ0-2mix & {--} & 28.2 & & \\
E2E ASR \cite{Seki2018_purelyendend} & E2E & (\ding{51}) & \ding{51} & --- & \ding{51} & WSJ-2mix & WSJ-2mix & 18.4 & 28.2 & {--} & {--} \\
E2E ASR \cite{Chang2019_EndEndMonauralMulti} & E2E & --- & \ding{51} & --- & \ding{51} & WSJ-2mix & WSJ-2mix & 10.9 & 18.4 & {--} & {--} \\
E2E ASR \cite{Chang2019_EndEndMonauralMulti} & E2E & --- & \ding{51} & --- & \ding{51} & WSJ0-2mix & WSJ0-2mix & {--} & 25.4 & & \\
\midrule
joint TasNet (our best) & E2E & \ding{51} & \ding{51} & \ding{51} & --- & \begin{tabular}{@{}l@{}}WSJ \&\\ WSJ0-2mix\end{tabular} & WSJ0-2mix{} & 6.0 & 11.0 & 13.8 & 13.5 \\
\bottomrule
\end{tabular}
\vspace{-4mm}
\end{table*}
\subsection{CTC/attention ASR model}
We use a configuration similar to \cite{Seki2018_purelyendend} without the speaker dependent layers for the speech recognizer.
This results in a model with two CNN layers followed by two BLSTMP layers with $1024$ units each for the encoder, one LSTM layer with 300 units for the decoder and a feature dimension of $80$.
The multi-task learning weight was set to $\lambda=0.2$.
We use a location-aware attention mechanism and ADADELTA \cite{Zeiler2012_ADADELTAadaptivelearning} as optimizer.
All decoding is performed with an additional word-level RNN language model.
Our ASR model achieves a WER of \SI{6.4}{\percent} on the WSJ eval92 set.
\subsection{Joint finetuning}
The results of the different fine-tuning variants are listed for comparison in \cref{tab:fine-tune}.
It is notable that combining the independently trained models (Conv-TasNet + RNN) already gives a competitive performance compared to other methods (see \cref{sec:experiments:results:ind} and \cref{tab:related}).
Fine-tuning just the \gls{ASR} system (+ fine-tune ASR) can further cut the \gls{WER} almost in half from \SI{22.9}{\percent} to \SI{11.7}{\percent}.
Joint fine-tuning without a signal level loss (+ fine-tune joint), where the system is no longer constrained to transport meaningful speech between front- and back-end, cannot improve much over fine-tuning just the ASR system and significantly lowers the source separation performance.
This indicates that the separated signals contain enough information for reliable speech recognition (i.e., retraining of the front-end is not required), but that not all information required to reconstruct speech is needed for \gls{ASR}.
Using a signal-level loss (+ fine-tune joint + SI-SNR loss) can further improve the \gls{WER} to \SI{11.1}{\percent}.
In this case, the source separation performance stays comparable to the separate Conv-TasNet model.
A signal-level loss might help the model to better separate the speech.
The performance for just fine-tuning the front-end (+ fine-tune TasNet) cannot reach the performance of fine-tuning the back-end.
This means that it is easier to mitigate the mismatch for the \gls{ASR} back-end (i.e., learn to ignore the artifacts produced by the front-end) than it is for the front-end (i.e., learn to suppress the artifacts).
Comparing the results of the chunk-based fine-tuning (+ fine-tune joint TBPTT) as an approximation of TBPTT with the full joint fine-tuning (+ fine-tune joint), it can be seen that even though the TBPTT-based approach is just an approximation, its performance is comparable to the full joint model if no signal-level loss is used.
It even performs slightly better, possibly because TBPTT allowed the use of a larger batch size.
The degradation in performance for the case with a signal level loss (+ fine-tune joint TBPTT + SI-SNR loss) might be caused by the signal-level loss penalizing the approximation heavily, while the gradient propagated through the \gls{ASR} system is less harmful to the front-end performance.
\subsection{Comparison with related work}
\label{sec:experiments:results:ind}
This section compares the performance of the different related works presented in \cref{sec:related}.
Their major differences and performance in terms of \gls{WER} are listed in \cref{tab:related}.
While these comparisons are not fair because the presented works differ heavily in their overall model structure, training methods and data, the numbers are meant to give a rough indication of how these methods compare and how complex the design space is.
Keeping in mind that hybrid DNN-HMM models still outperform E2E models in many scenarios, it is notable that the joint fine-tuning of \gls{DPCL} with an E2E model outperforms the independently trained hybrid model.
In the same fashion, the E2E ASR model can outperform the jointly optimized hybrid PIT-ASR on the same dataset.
Although not directly comparable, the best results in this table were produced by cascade models that allow reconstruction of the enhanced separated signals (\gls{DPCL} + joint fine-tuning, joint TasNet), which suggests that having dedicated parts for source separation and speech recognition is helpful, while joint fine-tuning improves the performance.
Our time domain approach gives the best result in this comparison.
\section{Conclusions}
We propose to use a time domain source separation system like Conv-TasNet as a front-end for a single-speaker \gls{E2E} \gls{ASR} system to form a multi-speaker \gls{E2E} speech recognizer.
We show that independently training the front- and back-end already gives a competitive performance and that joint fine-tuning can drastically improve the performance.
Fine-tuning can be performed jointly with the whole model distributed over multiple GPUs, but it can also be sped up roughly by a factor of $2$ on a single GPU by approximating TBPTT for convolutional neural networks, while keeping the performance comparable.
The results suggest that retraining the \gls{ASR} part compensates the mismatch between front-end and back-end much better than fine-tuning the front-end does.
\pagebreak
\balance
\bibliographystyle{IEEEbib}
\section{Introduction}
Despite steady successes in fabrication and measurement techniques, the experimental characterization of multi-qubit systems \cite{PhysTod,PhysCan} remains a challenge due to their complicated level structure. Our goal here is to determine the system's {\em parameters}, as distinct from the more difficult problem of determining its {\em state}, which has to be tackled using quantum state tomography \cite{QST,QST-exp}. For example, neither the strength nor the sign of the qubit-qubit coupling is known a priori. One of several standard approaches studies the resonant response of quantum macroscopic systems to an external coherent signal (see, e.g., \cite{i1,i2,delft1}), allowing one to determine the qubit parameters by scanning the frequency range of the external signal. The difficulty in the straightforward application of this approach, due to the fact that only a few qubits can be actually accessed, and the relation of this problem to the general field of inverse problems, were addressed in \cite{burg1,burg2}.
An alternative approach to the standard spectroscopic methods of characterization would use as a drive a broad-band noise. We call it {\em active noise spectroscopy}, as distinct from the ``passive" noise spectroscopy of Ref.~\cite{i1}, where the response of the noise spectrum to a {\em coherent monochromatic} drive was measured.
Recently, we have shown \cite{om} that classical noise applied to a qubit produces persistent oscillations of the off-diagonal density matrix elements (``coherences") despite finite dephasing and relaxation times. In other words, a moderate amount of external noise {\em enhances} quantum coherence, which manifests in oscillations with a frequency corresponding to quantum transitions between the ground and first excited states. There exists an optimal noise amplitude: at lower noise level, oscillations are suppressed, while as the noise is increased, the oscillations become random and the corresponding spectroscopic peak is eventually smeared away.
Indeed, for zero noise, the oscillations of the off-diagonal elements of the density matrix decay on the time scale of $\tau$, where $1/\tau$ is the dephasing rate. Moderate phase-insensitive noise excites the system from time to time, allowing the qubit to evolve with its own frequency between the relatively rare noise spikes, thus, uncovering quantum dynamics. Strong noise produces strong spikes very often, thus leaving no time for the coherent evolution. This phenomenon is related to both classical and quantum stochastic resonances, which manifest in various physical systems (see, e.g., \cite{han1,han2,han3,han4,hue1,Galve}).
In this paper we investigate how these effects of classical noise can help determine the parameters of a multiqubit system. Specifically, we consider two coupled qubits and analyze the spectrum of the density matrix excited by white Gaussian classical noise. We numerically show that the resulting noise spectra contain four peaks, which correspond to the interlevel transitions in the system. From these, the energy spectrum and all the model parameters of the qubits are readily obtained. In addition, the correlations in the matrix elements corresponding to different qubits can be used to conclude whether the qubits are coupled ferro- or antiferromagnetically.
\section{Model}
Two coupled qubits can be described by the Hamiltonian \cite{app}
\begin{equation}
H = -\frac{1}{2} \sum_{j=1,2} \left[\Delta_j \sigma^j_z + \epsilon_j(t)\sigma^j_x\right] + g \sigma^1_x\sigma^2_x
\label{eq-ham}
\end{equation}
where $\sigma^j_z$ and $\sigma^j_x$ are Pauli matrices corresponding to either the first ($j=1$) or the second ($j=2$) qubits, and the eigenstates of $\sigma^j_z$
are the basis states in the localized representation of the $j$th qubit at zero coupling. Note that the results obtained below do not
qualitatively depend on the type of coupling (e.g., $\sigma_x^1\sigma_x^2$ versus $\sigma_y^1\sigma_y^2$): in any case the noise allows one to determine
the parameters of the two-qubit Hamiltonian. For this reason, and to demonstrate the physical principles of noise-induced spectroscopy, we consider two identical qubits. The tunneling
splitting energies $\Delta_{1,2}$ (in case of the identical qubits we will be investigating here: $\Delta_1=\Delta_2=\Delta$) are determined by the design and fabrication details of the device, while the bias energies $\epsilon_j(t)$ can be controlled externally and, in our case, are only driven by the noise,
\begin{equation}
\epsilon_j(t) = \delta\!\xi_j(t).
\end{equation}
The Gaussian white noise considered here is zero-averaged and delta-correlated:
\begin{equation}
\langle\delta\! \xi_j(t)\rangle = 0,\:\: \langle\delta\! \xi_j(t)\delta\! \xi_{j'}(t')\rangle = 2D\delta_{j,j'}\delta(t-t'),
\label{noise}
\end{equation}
where $D$ is the noise intensity, which should be defined for each particular system (see, e.g., the example of two flux qubits
described below). The uncorrelated noise sources affecting the qubits (``local'' noise) tend to be more detrimental to their quantum coherence than the correlated ones \cite{Storcz,You,wilh,doll,Zhang}, which makes Eq.~(\ref{noise}) the ``worst case scenario''.
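On a discrete time grid the delta-correlated noise of Eq.~(\ref{noise}) is sampled as independent Gaussian variables of variance $2D/\Delta t$. The following minimal sketch of this discretization is our own illustration; the grid and parameter values are illustrative choices, not taken from the text:

```python
import numpy as np

# Discretized white noise: <xi(t) xi(t')> = 2 D delta(t - t')
# becomes Var[xi_i] = 2 D / dt on a grid with step dt.
def sample_white_noise(D, dt, n_steps, seed=0):
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, np.sqrt(2.0 * D / dt), size=n_steps)

D, dt, n = 0.013, 0.01, 200_000   # illustrative values
xi = sample_white_noise(D, dt, n)
print(xi.mean())            # close to 0
print(xi.var() * dt / 2.0)  # close to D
```

Recovering $D$ from the sample variance is a quick consistency check that the discretization reproduces Eq.~(\ref{noise}).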
\subsection{Master equation}
By writing the qubit density matrix $\hat{\rho}$ as
\begin{equation}
\hat{\rho}=\frac{1}{4} \sum_{a,b=0,x,y,z}\Pi_{ab}
\; \sigma^1_a \otimes \sigma^2_b
\end{equation}
we can rewrite the master equation
$$\frac{d\hat{\rho}}{dt} = -i\left[\hat{H}(t),\hat{\rho}\right]+\hat{\Gamma}\hat{\rho}$$
in the form
\begin{eqnarray}
\begin{array}{lll}
\dot{\Pi}_{0x} & = & \Delta_2\Pi_{0y} - \Gamma_{\phi 2}\Pi_{0x} \\
\dot{\Pi}_{0y} & = & -\Delta_2\Pi_{0x} + \epsilon_2(t)\Pi_{0z} - 2g\Pi_{xz} - \Gamma_{\phi 2}\Pi_{0y}\\
\dot{\Pi}_{0z} & = & -\epsilon_2(t)\Pi_{0y} + 2g\Pi_{xy}- \Gamma_{2}(\Pi_{0z}-Z_{T2})\\
& & \\
\dot{\Pi}_{x0} & = & \Delta_1\Pi_{y0} - \Gamma_{\phi 1}\Pi_{x0} \\
\dot{\Pi}_{y0} & = & -\Delta_1\Pi_{x0} + \epsilon_1(t)\Pi_{z0} - 2g\Pi_{zx} - \Gamma_{\phi 1}\Pi_{y0}\\
\dot{\Pi}_{z0} & = & -\epsilon_1(t)\Pi_{y0} + 2g\Pi_{yx}- \Gamma_{1}(\Pi_{z0}-Z_{T1})\\
& & \\
\dot{\Pi}_{xx} & = & \Delta_2\Pi_{xy} + \Delta_1\Pi_{yx} - (\Gamma_{\phi 1} + \Gamma_{\phi 2})\Pi_{xx} \\
& & \\
\dot{\Pi}_{xy} & = & -2g\Pi_{0z} -\Delta_2\Pi_{xx} + \Delta_1\Pi_{yy} + \epsilon_2(t)\Pi_{xz} - (\Gamma_{\phi 1} + \Gamma_{\phi 2})\Pi_{xy}\\
\dot{\Pi}_{yx} & = & -2g\Pi_{z0} -\Delta_1\Pi_{xx} + \Delta_2\Pi_{yy} + \epsilon_1(t)\Pi_{xz} - (\Gamma_{\phi 1} + \Gamma_{\phi 2})\Pi_{yx}\\
\dot{\Pi}_{xz} & = & 2g\Pi_{0y} - \epsilon_2(t)\Pi_{xy} + \Delta_1\Pi_{yz} - (\Gamma_{\phi 1}+\Gamma_{2})\Pi_{xz}\\
\dot{\Pi}_{zx} & = & 2g\Pi_{y0} - \epsilon_1(t)\Pi_{yx} + \Delta_2\Pi_{zy} - (\Gamma_{\phi 2}+\Gamma_{1})\Pi_{zx}\\
& & \\
\dot{\Pi}_{yy} & = & -\Delta_1\Pi_{xy} - \Delta_2\Pi_{yx} + \epsilon_2(t)\Pi_{yz} + \epsilon_1(t)\Pi_{zy} - (\Gamma_{\phi 1} + \Gamma_{\phi 2})\Pi_{yy}\\
& & \\
\dot{\Pi}_{yz} & = & - \Delta_1\Pi_{xz} - \epsilon_2(t)\Pi_{yy} + \epsilon_1(t)\Pi_{zz} - (\Gamma_{\phi 1}+\Gamma_{2})\Pi_{yz}\\
\dot{\Pi}_{zy} & = & - \Delta_2\Pi_{zx} - \epsilon_1(t)\Pi_{yy} + \epsilon_2(t)\Pi_{zz} - (\Gamma_{1}+\Gamma_{\phi 2})\Pi_{zy}\\
& & \\
\dot{\Pi}_{zz} & = & -\epsilon_1(t)\Pi_{yz} -\epsilon_2(t)\Pi_{zy} - (\Gamma_{1} + \Gamma_{2})(\Pi_{zz}-Z_{T1}Z_{T2})
\end{array}
\label{poxy}
\end{eqnarray}
Here we used the standard approximation for the dissipation operator $\hat{\Gamma}$, characterizing the intrinsic noise in the system via the dephasing and relaxation rates. Also, hereafter we assume for simplicity that the dephasing and relaxation rates are the same for both identical qubits, i.e., $\Gamma_{\phi 1} = \Gamma_{\phi 2}=\Gamma_{\phi}$ and $\Gamma_{1}=\Gamma_{2}=\Gamma_{r}$, and that the temperature is low enough, so that the equilibrium values of the diagonal elements of the qubit density matrices are $Z_{T2} = Z_{T1} = 1$. None of the simplifying assumptions (e.g., $\Delta_1=\Delta_2$, $\Gamma_{1}=\Gamma_{2}$, $\Gamma_{\phi 1}=\Gamma_{\phi 2}$, etc.) qualitatively affects the results reported below. For instance, if $\Delta_1\ne \Delta_2$, the spectrum in Fig.~2 will have more peaks, corresponding to the larger number of distinct levels once the artificial degeneracy is lifted.
In the limit of zero coupling $(g = 0)$, there exists a solution of Eqs.~(\ref{poxy}) with no entanglement between the qubits. This solution can be written as a direct product of two single-qubit density matrices expressed through the corresponding Bloch vectors: $\hat{\rho}_j = \frac{1}{2} (1+X_j\hat{\tau}_x+Y_j\hat{\tau}_y+Z_j\hat{\tau}_z)$. The components of what can be called the Bloch tensor $\Pi_{ab}$ are then all zero except for $(\Pi_{ox},\Pi_{oy},\Pi_{oz})= (X_1,Y_1,Z_1)$ and $(\Pi_{xo},\Pi_{yo},\Pi_{zo})=(X_2,Y_2,Z_2)$. If the interaction is not zero, the entanglement between the qubits makes all the components of the Bloch tensor non-zero \cite{Zhang,i3}, and such an entangled state persists on the time scale $1/\Gamma$ after the interaction is switched off [$g(t>t_0)=0$].
This reflects the fact that, in the presence of interactions, the eigenstates of the system are entangled \cite{Zhang,i3}, and the noise terms in the eigenbasis will thus maintain the off-diagonal terms in the density matrix of the two-qubit system.
\subsection{Two flux qubits}
As a specific example of our approach, which can be experimentally implemented, we propose to measure two (almost) identical superconducting flux qubits, each consisting of a superconducting loop interrupted by four Josephson junctions, coupled via a coupler loop \cite{coupl} (see Fig.~1). The state of each qubit is controlled by the applied magnetic flux $\Phi_e^{(j)} = f_e^{(j)}\Phi_0$ through the loop, where $\Phi_0$ is the flux quantum. In the vicinity of $f_e^{(1)} = f_e^{(2)} = 1/2$, the ground state of the system is a symmetric superposition of the states $|L\rangle$ and $|R\rangle$, with a clock- and counterclockwise circulating superconducting current $I_p$, respectively. In the basis $\left\{ |L\rangle, |R\rangle\right\}$ the two-qubit system can be described by the
Hamiltonian (\ref{eq-ham}) with $\epsilon_j = I_p\Phi_0\delta\! f_e^{(j)}$ with classical flux bias fluctuations $\delta\! f_e $ in the qubit loops around 1/2, while the tunneling amplitude $\Delta$ is determined by the fabrication of the loop and the junctions. Note that the components of the density matrix can be measured
{\em directly}, e.g., by monitoring the current fluctuations in the flux qubits: $I_{j}(t) = I_p X_j(t)$. The direct relation of this spectrum to the current/voltage noise spectrum in the resonant tank $(LC)$ circuit coupled to the qubit was
used in Ref.~\onlinecite{i1}.
\section{Simulation results}
Using the dimensionless time $\bar{t} = t \Delta$, we numerically solved the system (\ref{poxy}) by the Ito method for two coupled qubits driven only by white classical noise, choosing the damping parameters $\Gamma_{\phi}/\Delta=\Gamma_{r}/\Delta=0.1$ close to those experimentally found in flux qubits.
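The integration scheme can be illustrated on the single-qubit version of the Bloch equations. The sketch below is our own simplified illustration: the reduction to one qubit, the Euler--Maruyama discretization, and all parameter values are our choices, not the paper's full simulation:

```python
import numpy as np

# Euler-Maruyama integration of noise-driven single-qubit Bloch equations,
# in units of Delta: dX = (Y - G_phi X) dt, dY = (-X - G_phi Y) dt + Z dW,
# dZ = -G_r (Z - 1) dt - Y dW, where dW has variance 2 D dt.
rng = np.random.default_rng(1)
dt, n_steps = 0.01, 20_000
g_phi, g_r, D = 0.1, 0.1, 0.013

X, Y, Z = np.zeros(n_steps), np.zeros(n_steps), np.ones(n_steps)
for i in range(n_steps - 1):
    dW = np.sqrt(2.0 * D * dt) * rng.normal()   # noise increment eps(t) dt
    X[i + 1] = X[i] + (Y[i] - g_phi * X[i]) * dt
    Y[i + 1] = Y[i] + (-X[i] - g_phi * Y[i]) * dt + Z[i] * dW
    Z[i + 1] = Z[i] - g_r * (Z[i] - 1.0) * dt - Y[i] * dW
```

The power spectrum of $X$ then develops a peak near the qubit frequency; the full system (\ref{poxy}) is integrated in the same way, component by component.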
The spectra of $X_1=\Pi_{ox}$ and $Z_1=\Pi_{oz}$, for $g=0.5$, are shown in Fig.~2. Since this two-qubit system is only driven by noise, the spectra of both $X_1$ and $Z_1$ are enhanced by increasing the noise, and the peaks become more pronounced if the noise is not too high. These spectra exhibit four maxima, whose positions agree well with the frequencies of the interlevel transitions (in units of $\Delta$):
\begin{equation}
2\pi\nu_1=\omega_1=2g,\ \ 2\pi\nu_{2,3}=\omega_{2,3}=\sqrt{1+g^2}\pm g,\ \ 2\pi\nu_4=\omega_4=2\sqrt{1+g^2}
\end{equation}
which have values $\nu_1\approx 0.16,\ \nu_2\approx 0.1, \nu_3\approx 0.26,$ and $\nu_4\approx 0.36$.
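These transition frequencies can be verified directly by diagonalizing the noise-free Hamiltonian (\ref{eq-ham}). The following numerical check is our own illustration (in units of $\Delta$) and reproduces the four values quoted above:

```python
import numpy as np

# Levels of H = -(1/2)(sz x 1 + 1 x sz) + g (sx x sx), in units of Delta.
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)

g = 0.5
H = -0.5 * (np.kron(sz, I2) + np.kron(I2, sz)) + g * np.kron(sx, sx)
E = np.linalg.eigvalsh(H)   # -sqrt(1+g^2), -g, +g, +sqrt(1+g^2)

# All distinct interlevel spacings and the frequencies nu = omega / (2 pi), sorted.
omegas = sorted({round(E[j] - E[i], 10) for i in range(4) for j in range(i + 1, 4)})
nus = [w / (2.0 * np.pi) for w in omegas]
print(nus)  # approximately [0.098, 0.159, 0.258, 0.356]
```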
Two of these four peaks are clearly seen in the $S_X$ spectra in Fig.~2, while the other two are better seen in the $S_Z$ spectra. Either of these two spectra is sufficient to {\em measure both the coupling constant $g$ and the tunneling splitting energy $\Delta$}, while the remaining spectrum can be used as a consistency check. Note here that, unlike the single-qubit case \cite{om}, there are peaks in both $S_X$ and $S_Z$ even for small values of the coupling strength $g$, which illustrates our earlier remark on the entangled nature of the eigenstates revealed by classical noise.
To determine whether the coupling is ``ferro-" or ``antiferromagnetic", that is, the sign of the coupling constant $g$, we study the time correlations in the density matrix elements $ \Pi_{ox}(t)= X_1(t)$ and $ \Pi_{xo}(t)= X_2(t)$, for $g=\pm 0.7$ and $g=0$. Numerically solving equations (\ref{poxy}) we obtained the time sequences $X_{j}(t_i)$ shown in Fig. 3, where $t_i$ is the discretized time of the simulation. Correlations and anticorrelations are clearly seen for ferromagnetic and antiferromagnetic coupled qubits, while almost no correlations are seen for the decoupled ones.
A qualitative physical picture of these correlations in the time domain is readily understood. For instance, for ferromagnetic coupling, the Bloch vectors of the two qubits tend to align for weak enough noise. A stronger noise excites partially coherent oscillations in the intervals between two sequential noise spikes, but the qubit-qubit oscillations still tend to preserve the ferromagnetic ordering (which results in the correlations seen in Fig. 3) even for the dynamically evolving qubits. Similarly, the antiferromagnetic coupling tends to produce {\em anti}-correlations in the qubit dynamics, as seen in Fig.~3.
To quantitatively describe these correlations we plot the sample Pearson correlation coefficient
\begin{equation}
r=\frac{n\sum_i\Pi_{ox}(t_i)\Pi_{xo}(t_i)-\sum_i\Pi_{ox}(t_i)
\sum_i\Pi_{xo}(t_i)}{ \sqrt{n\sum_i\Pi_{xo}^2(t_i)-\left( \sum_i\Pi_{xo}(t_i)\right)^2} \sqrt{n\sum_i\Pi_{ox}^2(t_i)-\left( \sum_i\Pi_{ox}(t_i)\right)^2} }
\end{equation}
as a function of the coupling constant $g$ (bottom panel of Fig.~3). Here $n$ is the total number of simulation time steps.
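The expression above is the standard sample Pearson estimator; a minimal numerical sketch follows (with synthetic series of our own choosing rather than the simulation data):

```python
import numpy as np

def pearson_r(x, y):
    # Direct transcription of the sample Pearson formula in the text.
    n = len(x)
    num = n * np.sum(x * y) - np.sum(x) * np.sum(y)
    den = (np.sqrt(n * np.sum(x**2) - np.sum(x)**2)
           * np.sqrt(n * np.sum(y**2) - np.sum(y)**2))
    return num / den

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = 0.7 * x + 0.3 * rng.normal(size=1000)   # positively correlated series
print(pearson_r(x, y))                      # positive, close to 0.92
print(np.corrcoef(x, y)[0, 1])              # agrees with the formula above
```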
The modulus of the correlation coefficient $r$ exhibits a maximum at $|g|\approx 0.7$. At larger $|g|$ the noise-induced oscillations become weaker, since the uncorrelated external noises acting on the two qubits suppress each other via the coupling. The sign of $r$ coincides with the sign of the coupling $g$, which makes it easy to distinguish between ferro- and antiferromagnetic couplings.
\section{Conclusions}
We have demonstrated that quantum correlations in a two-qubit system can be highlighted by the presence of classical noise. As an application of this effect, we suggest the use of noise spectroscopy, namely, the measurement of the fluctuation spectra of the system as a means to determine the relevant parameters of a multiqubit system.
\section{Acknowledgments}
We acknowledge partial support from the National Security
Agency, Laboratory of Physical Sciences, Army Research Office, National Science
Foundation (Grant No. 0726909), JSPS-RFBR (Grant No.
06-02-92114), MEXT Kakenhi on Quantum Cybernetics, FIRST (Finding Program for innovative R\&D on S\&T), and FRSF (Grant No. F28.21019), and EPSRC
(No. EP/D072518/1).
\begin{figure}[btp]
\begin{center}
\includegraphics[width=8.0cm]{fig1.eps}
\end{center}
\caption{Schematic diagram of two coupled flux qubits, each one with four Josephson junctions.
These qubits can be coupled \cite{coupl} via the central coupler loop, allowing one to change the magnitude and sign of the coupling constant $g$.}
\end{figure}
\begin{figure}[btp]
\begin{center}
\includegraphics[width=10.0cm]{fig2.eps}
\end{center}
\caption{(Color online.) Spectral density $S_X(\omega)$ (top panel) and $S_Z(\omega)$ for two values of the noise
($D/\Delta=0.04$ and $D/\Delta=0.013$) and normalized coupling $g/\Delta=0.5$.
Four peaks, two per panel, can be easily distinguished, and these correspond to the four interlevel frequencies
$\nu_1, \nu_2, \nu_3,$ and $ \nu_4$. The insets show their corresponding time sequences $X(t)$ and $Z(t)$.}
\end{figure}
\begin{figure}[btp]
\begin{center}
\includegraphics*[width=10.0cm]{fig3.eps}
\end{center}
\caption{ (Color online.) Time sequences for $X_1(t)=\Pi_{ox}(t)$ (continuous red curve) and $X_2(t)=\Pi_{xo}(t)$ (dot-dashed black curve) for values of the coupling constant $g=\pm 0.7$ (top two panels) and $0$ (third panel). The anticorrelations (top panel, $g>0$) and correlations (second panel, $g<0$) are clearly seen for nonzero coupling ($|g|=0.7$), while there are no correlations for $g=0$. The bottom panel shows the dependence of the correlation coefficient $r$ on the coupling constant $g$, when $D/\Delta=0.013$.}
\end{figure}
|
2,877,628,090,726 | arxiv | \section{Introduction}
\label{sec:intro}
\hspace{.6cm} R-parity is an important symmetry in supersymmetric theories (For a review see \cite{Review}). In supergravity theories~\cite{Chamseddine:1982jx}, over most of the parameter space of models consistent with the radiative breaking of the electroweak symmetry, the lightest neutralino is found to be the lightest supersymmetric particle, and this, along with R-parity
(defined as $R=(-1)^{2S + 3 (B-L)}$, where $S$, $B$ and $L$ stand for the spin, baryon number and lepton number, respectively)
and charge neutrality allows for the lightest neutralino to be a promising candidate for cold dark matter as suggested in~\cite{Gold}.
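As a quick illustration of the R-parity definition, the following trivial check (our own addition; the spin and $B$, $L$ quantum numbers used are the standard ones) shows that ordinary particles carry $R=+1$ while their superpartners carry $R=-1$:

```python
from fractions import Fraction as F

def r_parity(S, B, L):
    # R = (-1)^(2S + 3(B - L)); the exponent is an integer for physical states.
    exponent = 2 * F(S) + 3 * (F(B) - F(L))
    assert exponent.denominator == 1
    return 1 if exponent.numerator % 2 == 0 else -1

print(r_parity(F(1, 2), 0, 1))        # electron:   +1
print(r_parity(F(1, 2), F(1, 3), 0))  # quark:      +1
print(r_parity(0, 0, 1))              # selectron:  -1
print(r_parity(F(1, 2), 0, 0))        # neutralino: -1
```

Since $R$ is multiplicatively conserved, the lightest $R$-odd state (here the neutralino) cannot decay into Standard Model particles alone.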
The question, then, is: if R-parity indeed turns out to be a conserved symmetry of nature, how does such a symmetry come about, and how may one guarantee that it is conserved? It is known that the MSSM with the inclusion of a right-handed neutrino, one for each generation, has an anomaly-free $U(1)_{B-L}$ which can be gauged\footnote{A gauged $U(1)_{B-L}$ arises naturally in GUT models such as $SO(10)$ and $E_6$ and in string models.}. Of course, a $U(1)_{B-L}$ gauge boson must acquire a mass, since otherwise it would produce an undesirable long-range force.
In the analysis that follows it is shown that a gauged $B-L$ symmetry, where the gauge boson develops a mass through the Stueckelberg mechanism extending the Standard Model gauge group~\cite{Kors:2004dx}
\cite{fln1,fln11} preserves R-parity, i.e., R-parity does not undergo spontaneous breaking by renormalization group effects under the assumption of universality of soft scalar masses,
charge conservation and in the absence of a Fayet-Iliopoulos D-term. We will later refer to this model as the \textit{Minimal $B-L$ Stueckelberg Extension of the MSSM.}
The fact that the minimal gauged $B-L$ model proposed in this work preserves R-parity, with mass growth arising from the Stueckelberg mechanism,
is in contrast to models with a gauged $B-L$ where the symmetry is broken spontaneously and thus does not necessarily preserve the R-parity invariance.
Thus the analyses of~\cite{FP2,Barger:2008wn,pfps,Everett,FP1,FileviezPerez:2011kd} show that R-parity symmetry, even if valid at the grand unification scale, could be broken by
renormalization group effects~\footnote{For grand unified models where R-parity symmetry is automatic see~\cite{Martin:1992mq}. For analyses where the spontaneous breaking of $B-L$ occurs see~\cite{Khalil:2007dr,Frank:2001jp}, for early work on the spontaneous breaking of {R-parity} see~\cite{Aulakh:1982yn,Hall,Mohapatra,MasieroValle} .
For early analyses with R-parity and additional gauge fields see~\cite{Ibanez}.}.
We will first discuss the minimal $(B-L)$ Stueckelberg~ extension of the Standard Model and of the minimal supersymmetric Standard Model (MSSM). In these extensions the $Z'$ boson~\footnote{For recent dedicated work on heavy $Z'_{B-L}$ physics see \cite{Accomando,BassoPruna}.} is constrained to be rather heavy, i.e., it lies in the multi-TeV range and
thus a direct detection may be difficult. However, this constraint is overcome in a $U(1)_{B-L}\otimes U(1)_X$ Stueckelberg extension, where $U(1)_X$ is the hidden sector gauge group. Here the Stueckelberg sector generates two extra massive neutral vector bosons, i.e., $Z'$ and $Z''$, one of which would be very narrow and could lie even in the sub-TeV region, and thus would be accessible
at the LHC. Models with massive mediators arise generally via mass mixing and kinetic mixing of Abelian gauge bosons~\cite{darkforce1,darkforce2,darkforceA,rev,darkforce3,darkforce4,darkforce5,darkforce6,darkforce7,darkforce8,darkforce9},~\cite{Nath:2008ch,Liu,darkforcereview}, and the mixings are also the source of the so-called dark forces~\cite{darkforce1,darkforceA}: the mixings allow for a portal between the hidden (dark) sector via massive mediators~\cite{darkforce1,darkforce2,darkforceA,rev,darkforce3,darkforce4} (from which several components of dark matter can arise) and the visible sector where the states charged under the Standard Model reside. Specifically, the class of models that we study here allows for two-component (Majorana and Dirac) dark matter~\cite{Feldman:2010wy}.
Such models with dark forces have received considerable attention in the context of the recent cosmic anomalies~\cite{Feldman,Arkani,FLNN,Feldman:2010wy}; for recent additional works on dark sectors see e.g. \cite{Quevedo,Mam,ANelson,Mam2}.
The organization of this paper is as follows : In sec.~({\ref{1}}) we propose a $U(1)_{B-L}$ extension of the Standard Model via the Stueckelberg~ mechanism.
In sec.~(\ref{2a}) the $B-L$ Stueckelberg~ extension of the MSSM is introduced. In sec.~(\ref{2b}) we outline the conditions under which R-parity is not spontaneously broken.
In sec.~({\ref{3}}) we give a dedicated analysis of a $U(1)_{B-L} \otimes U(1)_{X}$ extension of the MSSM via the Stueckelberg~ mechanism and show that
the model naturally leads to a sharp $Z'$ resonance that can be seen at the LHC, and we
analyze recent constraints from the Tevatron and the LHC.
Here we also analyze the production and decay of new spin-0 particles.
These scalars are the real parts of the complex scalar components of the Stueckelberg chiral superfields, while the imaginary parts are the axions, which are absorbed in giving masses to the $Z'$ and $Z''$.
In sec.~(\ref{4}) we show that the model allows for two-component dark matter, one component consisting of \emph{neutral Dirac} dark matter and the other of Majorana dark matter, which together produce a relic abundance consistent with WMAP~\cite{WMAP}.
We also explore the detection possibility of dark matter with the recent
limits set by the XENON and CDMS collaborations~\cite{Aprile:2011hi,Ahmed:2009zw} which allows for direct detection constraints
to be connected with the corresponding constraints on the $Z'$ production at colliders.
In sec.~(\ref{5}) we give an overview as to how models of spontaneous {R-parity} breaking can be distinguished
from the R-parity preserving $B-L$ extensions. Conclusions are given in sec.~({\ref{6}}).
\section{ $B-L$ Stueckelberg Extension of the Standard Model \label{1}}
The $B-L$ extension of the Standard Model provides a natural framework to understand the origin
of neutrino masses since the three families of right-handed neutrinos, needed to cancel all anomalies,
are used to generate neutrino masses.
We first consider a $U(1)_{B-L}$ Stueckelberg~ extension of the Standard Model with the gauge group
\begin{equation} SU(3)_C\otimes SU(2)_L\otimes U(1)_Y\otimes U(1)_{B-L} ~.\end{equation}
The mass growth for the $U(1)_{B-L}$ occurs via the Stueckelberg~ mechanism
for which the extended Lagrangian is given by
\begin{eqnarray}
{\cal L} &=& {\cal L}^{B-L}_{\rm St}+{\cal L}^{B-L}_{\rm Yuk}+{\cal L}_{\rm SM}, \\
{\cal L}^{B-L}_{\rm St} &=& - \ \frac{1}{4} C_{\mu\nu} C^{\mu\nu}
- \frac{1}{2} (M_{BL}C_{\mu} + \partial_{\mu} \sigma) (M_{BL} C^{\mu} + \partial^{\mu} \sigma), \\
{\cal L}^{B-L}_{\rm Yuk} &=& \ Y_\nu \ \bar{l}_L \tilde{H} \nu_R .
\label{1.1}
\end{eqnarray}
Here ${\cal L}_{\rm SM}$ is the Standard Model ~Lagrangian, $l_L^T=(\nu_L,e_L)$ and $\tilde{H}= i \sigma_2 H^*$.
As usual, the Standard Model Higgs is $H^T=(H^+, H^0)$. The above Lagrangian is invariant under
the $B-L$ transformations
\begin{equation}
\delta C_{\mu}= \partial_{\mu} \lambda, ~~\delta \sigma =- M_{BL} \lambda.
\end{equation}
Added to the above is a gauge fixing term
\begin{equation}
{\cal L}_{\rm gf}= -\frac{1}{2\xi} (\partial_{\mu} C^{\mu} + M_{BL} \xi \sigma)^2,
\end{equation}
so that the vector field becomes massive while the $\sigma$ field decouples.
Additionally the interaction Lagrangian
\begin{equation}
{\cal L}_{\rm St}^{ \rm int} = g_{BL} C_{\mu}J^{\mu}_{BL}, \label{intBL}
\end{equation}
couples the Stueckelberg~ field $C_{\mu}$ to the conserved $B-L$ vector current
$J^{\mu}_{BL}$. We note that the $B-L$ gauge field $C_{\mu}$ has become
massive with a mass $M_{BL}$ while maintaining the $U(1)_{B-L}$ invariance. Since $B-L$ continues to be
a symmetry even after the mass growth of the $Z'$ its properties are rather different
from the model where the $B-L$ gauge symmetry is spontaneously broken through the Higgs mechanism.
We will return to this in a later section.
It is important to mention that in this theory the neutrinos are Dirac fermions since
there is no way to generate Majorana masses for right-handed neutrinos as in the canonical
$B-L$ model. This is a natural consequence of the Stueckelberg mechanism.
In the above, a kinetic mixing term is also possible, leading to generalized mass and kinetic mixings for a massive $U(1)$, which will then generally mix with the SM sector \cite{darkforce1,Feldman:2007wj}; in particular, the hypercharge vector boson $B$ mixes via both mass and kinetic terms~\cite{darkforce1}. One then diagonalizes the Stueckelberg mass and kinetic mixings together \cite{Feldman:2007wj,Ringwald,Zhang,Burgess}.
A further generalization to multiple $U(1)s$ reads
\begin{equation}
\mathcal{L}^{ \rm KM }_{\rm St} =
\frac{1}{2} \sum^{N_V}_{i,j , i \neq j} \frac{ {\epsilon}_{ i j}} {2} V_{i \mu \nu} {V_j}^{ \mu \nu}
-\frac{1}{2} \sum^{N_S}_{n=1}(\partial_{\mu} \sigma_n + \sum^{N_V}_{m=1} M_{nm} V_{\mu m})^2,
\label{gen}
\end{equation}
with $N_V$ Abelian vectors and $N_S$ axions, where $B_{\mu} = V_{\mu 1} $ and the other vector fields
correspond to either hidden
or visible gauge symmetries. Models with multiple additional $U(1)$s have indeed been discussed recently~\cite{Feldman:2007wj,Feldman:2010wy,Heeck:2011md,FLNN,Feldman,Dimopoulos}. Our analysis is restricted to non-anomalous extensions of the Standard Model (for the anomalous case see e.g. \cite{coriano,anastasopoulos,fucito}).
In the analysis that follows we will assume that the kinetic mixing is absent and investigate the pure Stueckelberg~ sector, with no mass mixing of the hypercharge $B$ with the Stueckelberg~ sector. For recent works on the Stueckelberg mechanism see e.g.~\cite{Ahlers:2008qc,Goodsell:2009xc,PT,IB,Marsano,Quevedo,Cvetic:2011iq}, and for early work in the context of strings see \cite{KR}.
\section{$ B-L $ Stueckelberg Extension of the MSSM \label{2a}}
Here we construct the minimal $U(1)_{B-L}$ extension of the MSSM using the Stueckelberg Mechanism.
The supersymmetric extension of Eq. (\ref{1.1}) is
\begin{equation}
{ \cal L}_{\rm St} =
(M_{BL} C + S_{\rm st} + {\bar S}_{\rm st})^2 |_{\theta^2 \bar \theta^2}\ ,
\label{mass}
\end{equation}
where $C=(C_\mu,\lambda_C,D_C)$ is the gauge vector multiplet for $U(1)_{B-L}$,
and the Stueckelberg multiplet is $S_{\rm st} = (\rho + i \sigma , \psi_{\rm st}, F_S)$ where $\rho$ is a scalar while $\sigma$ is the axionic
pseudo-scalar. The supersymmetrized gauge transformations under the $U(1)_{B-L}$ are
\begin{eqnarray}
\delta_{BL} C = \zeta_{BL} + \bar\zeta_{BL} \ , \quad
\delta_{BL} S_{\rm st} = - M_{BL} \zeta_{BL}\ ,
\end{eqnarray}
where $\zeta$ is an infinitesimal chiral superfield.
Next we couple the chiral matter fields $\Phi_m$, consisting of the quarks, leptons and Higgs fields of the MSSM.
These couplings are given by
\begin{eqnarray}
{\cal L}_{\rm matter} ~=~
\bar \Phi_m e^{ 2g_{BL} Q_{BL} C} \Phi_m |_{\theta^2 \bar \theta^2}\
\label{matter}
\end{eqnarray}
where $Q_{BL}\equiv B-L$ and the sum is implicit over the chiral multiplets $m$; the interaction
term of Eq. (\ref{intBL}) couples the $B-L$ vector field to the fermions.
We focus on the bosonic part of the extended Lagrangian which is given by
\begin{eqnarray} \label{stmssm}
{\cal L}_{\rm spin[0,1]} &=&
-\frac14 C_{\mu\nu}C^{\mu\nu}-\frac{1}{2} M^2_{BL} C_{\mu}^{2}
-\frac{1}{2} (\partial_\mu \rho)^2 - \frac12 M^2_{BL} \rho^2\nonumber\\
&&- |D_\mu \tilde f_i|^2
- g_{BL} M_{BL} \ \rho \ {\tilde f}_i^\dagger Q_{
BL} \tilde f_i
-\frac12 \Big[ \sum_i {\tilde f}_i^\dagger g_{BL} Q_{
BL} \tilde f_i \Big]^2 \ .
\end{eqnarray}
The superpotential of the $B-L$ extended theory is simply
\begin{eqnarray}
{\cal W}= \mu H_u H_d + \sum_{\rm gen} [Y_u Q H_u u^c + Y_d Q H_d d^c + Y_e L H_d e^c + Y_{\nu} L H_u \nu^c].
\label{w1}
\end{eqnarray}
Aside from the term $Y_\nu \ L H_u \nu^c$, Eq.~(\ref{w1}) is the superpotential of the MSSM, but without the terms that violate R-parity.
\section{R-parity Conservation \label{2b}}
As pointed out earlier, while the Stueckelberg~ mechanism gives mass to the $B-L$ gauge boson,
the Lagrangian of the theory, after the mass growth, still has a $B-L$ symmetry and hence a
conservation of R-parity ($R=(-1)^{2S + 3 (B-L)}=(-1)^{2S} M$, where $M$ denotes matter parity, which is $+1$ for Higgs and gauge superfields and $-1$ for all matter chiral superfields). This conservation of {R-parity}
in the minimal $B-L$ Stueckelberg~ extensions is in contrast to models where the $B-L$ gauge symmetry
is broken by a Higgs mechanism and where in general the mass growth of the $B-L$ gauge boson could break the $B-L$ symmetry and thus R-parity invariance is also lost.
For example, for the model of Eq.~(\ref{w1}), a VEV growth for the scalar field in the $\nu^c_l$ multiplet will break $B-L$ invariance and generate a mass
for the $B-L$ gauge boson. However, a VEV growth for $\tilde \nu^c_l$ also violates R-parity invariance which then removes the neutralino as a possible candidate for dark matter.
Specifically, for example, in Eq.~(\ref{w1}) the VEV growth of $\tilde \nu^c_l$ generates the term $LH_u$ which breaks R-parity. However, in
the minimal $B-L$ Stueckelberg~ extension of MSSM even after the mass growth of the $B-L$ gauge boson R-parity is maintained and the R-parity
violating interactions such as $LH_u$, $LLe^c$, $QLd^c$, $u^c d^c d^c$ are all forbidden in the superpotential.
\subsection{\it Scalar Potential and R-Parity Conservation}
We wish to show here that with a Stueckelberg~ mechanism for mass generation the $B-L$ symmetry not only remains unbroken at the tree level, but also that this invariance is not violated by radiative breaking in the minimal model.
We now give the derivation of this result, which is rather straightforward. We exhibit below the potential including just one generation of leptonic scalar fields, consisting of $\rho, \tilde \nu, \tilde e, \tilde e^c, \tilde \nu^c$ (the extension to 3 generations is trivial).
Assuming charge conservation so that $\langle \tilde e\rangle=0= \langle \tilde e^c \rangle,~\rm etc.,$
and including soft breaking, the potential that involves $\rho$, $\tilde \nu$ and $\tilde \nu^c$ fields is
%
\begin{eqnarray}
V_{{\rm St}-BL} &=& \frac{1}{2} \left(M_{BL}^2 + m_{\rho}^2 \right) \rho^2
+
M_{\tilde\nu}^2 \tilde \nu^{\dag}\tilde \nu
+
M_{\tilde\nu^c}^2 \tilde{\nu}^{c\dag} \tilde{\nu}^c
\nonumber\\
& +&
\frac{g^2_{BL}}{2} \left(
\tilde\nu^{c\dag} \tilde \nu^c -\tilde \nu^{\dag} \tilde \nu \right)^2
+ g_{BL} \ M_{BL} \ \rho \ \left(
\tilde\nu^{c\dag} \tilde \nu^c
-\tilde \nu^{\dag} \tilde \nu
\right), \nonumber \\
&+& |Y_{\nu}|^2(|H_u^0\tilde\nu^c|^2 + |\tilde\nu H^0_u|^2 + |\tilde\nu\tilde\nu^c|^2) \nonumber\\
&-& \frac{1}{4}(g^2+ g^{\prime 2}) (|H_u^0|^2-|H_d^0|^2)|\tilde \nu|^2+ \frac{1}{8}(g^2+g^{\prime 2})|\tilde\nu|^4\nonumber\\
&+&(-\mu^* Y_{\nu} H_d^{0*}\tilde\nu\tilde\nu^c+ A_{\nu} Y_{\nu} \tilde\nu\tilde\nu^c H_u^0 + h.c.)
\label{V1}
\end{eqnarray}
%
where we have used $Q_{BL}(e)=Q_{BL}({\nu})= -1$
and where $m_{\rho}$, $M_{\tilde \nu}$, and
$M_{\tilde \nu^c}$ are soft masses.
The relevant part of the potential is then
\begin{equation}
V={\sum}_{\rm gen}V_{{\rm St}-BL} + V_{\rm MSSM},
\end{equation}
%
and where as is familiar
\begin{eqnarray}
V_{\rm MSSM} \!&=&\!
(|\mu|^2 + m^2_{H_u}) |H_u^0|^2 + (|\mu|^2 + m^2_{H_d}) |H_d^0|^2
- (B \mu\, H_u^0 H_d^0 + {\rm h.c.})
\nonumber \\ &&
+ {1\over 8} (g^2 + g^{\prime 2}) ( |H_u^0|^2 - |H_d^0|^2 )^2 .
\end{eqnarray}
We begin with universal boundary conditions for the RGEs.
We note that the RG evolutions of $M_{\tilde e}$ and $M_{\tilde \nu}$ are identical since the $SU(2)_L \otimes U(1)_Y$ symmetry is unbroken down to the electroweak scale. If
$M_{\tilde e}^2$ turned tachyonic it would lead to VEV formation for the field
$\tilde e$ violating charge conservation and thus we disallow this possibility.
Since $\tilde\nu$ and $\tilde e$ lie in the same $SU(2)_L$ multiplet the
same holds for the $\tilde\nu$ field, i.e., it too does not develop a VEV.
This can be seen from the one loop RG sum rule connecting the sneutrino $\tilde\nu$ mass and the
selectron mass
\begin{equation}
M_{\tilde\nu}^2 - M_{\tilde e}^2 = \cos(2\beta) M_W^2 +\delta^2_{\nu,e} ,
\label{a8}
\end{equation}
where $\delta^2_{\nu,e}$ is the difference of the mass squares of the fermions
(and is essentially negligible compared to the $W$ mass term:
the largest such difference occurs for $e \to \tau$ and is still negligible).
Thus the right hand side of Eq.(\ref{a8}) is positive definite for any range of $\tan\beta$ in the perturbative domain
in the RG analysis. As a consequence, if the mass square
of $\tilde e$ does not turn tachyonic, this also holds for the mass square of $\tilde \nu$
and $\langle \tilde \nu \rangle=0$.
Thus with $\left<\tilde e\right>=0= \left<\tilde \nu\right> = \left<\tilde e^c\right>$,
and integrating out the $\rho$ field, we get the following potential for $\tilde \nu^c$
\begin{eqnarray}
V_{\tilde{\nu}^c}= M_{\tilde\nu^c}^2 \tilde \nu^{c\dag}\tilde \nu^c +
\frac{ g^2_{BL} m_{\rho}^2} { 2 (M_{BL}^2 + m_{\rho}^2)} (\tilde \nu^{c\dag} \tilde\nu^c)^2
+ |Y_{\nu}|^2 |H_u^0\tilde\nu^c|^2.
\end{eqnarray}
The last term above is negligible in size compared to the other terms since it involves
the Yukawa $Y_{\nu}$. Thus the coupling between this sector and the MSSM sector via the
$H^0_u$ field is negligible.
Now in the RG analysis there are no beta functions to turn $M_{\tilde\nu^c}^2$ negative
and the quartic term is positive definite so the potential is bounded from below.
Consequently the potential cannot support spontaneous breaking to generate a VEV for the field
$\tilde\nu^c$ and thus $\left<\tilde \nu^c\right>=0$.
Further, the extrema equation for $\rho$ gives
%
\begin{eqnarray}
\left< \rho \right> = -\frac{g_{BL} M_{BL}}{M_{BL}^2 \ + \ m_{\rho}^2}
\left<
\tilde {\nu^c}^{\dagger} \tilde {\nu^c} - \tilde \nu^{\dagger} \tilde \nu \right> = 0,
\end{eqnarray}
and since $\left<\tilde \nu\right>=0=\left<\tilde \nu^c\right>$,
one also has $\left<\rho\right>=0$. Thus there is no spontaneous symmetry breaking
in the system, and thus $B-L$, and consequently R-parity, is preserved.
We add that the situation here
is rather different
from the Stueckelberg extensions introduced in \cite{Kors:2004dx,fln1,fln11} where $\rho$ receives a non-vanishing VEV.
In \cite{Kors:2004dx,fln1,fln11}, a non-vanishing VEV for $\rho$ would arise due to the Stueckelberg sector mixing with the $U(1)_Y$ sector of the MSSM.
In contrast in the minimal $B-L$ extension analyzed here there is no mixing with the $U(1)_Y$ sector,
and thus there is no VEV growth for $\rho$.
Thus the entire mass growth in the $U(1)_{B-L}$ sector occurs via the Stueckelberg~ mechanism.
If we include a Fayet-Iliopoulos D term~\cite{Fayet:1974jb} then effectively
the potential for $\tilde \nu^c$ is replaced by
\begin{eqnarray}
V_{\nu^c}= M_{\tilde{\nu}^c}^2 \tilde \nu^{c\dag} \tilde \nu^c +
\frac{ g^2_{BL} m_{\rho}^2} { 2 (M^2_{BL} + m_{\rho}^2)} (\tilde \nu^{c\dag} \tilde \nu^c + \xi)^2+ |Y_{\nu}|^2 |H_u^0\tilde\nu^c|^2.
\end{eqnarray}
For the case when $\xi$ is negative a VEV growth for $\tilde \nu^c$ is possible and R-parity can be broken spontaneously.
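The condition on $\xi$ can be made explicit with a short numerical sketch. Writing $x=|\tilde\nu^c|^2\geq 0$, the potential takes the form $V(x)= M^2 x + \lambda (x+\xi)^2$, whose extremum lies at $x_*=-\xi - M^2/(2\lambda)$, so a VEV forms only for sufficiently negative $\xi$. All numerical values below are assumed sample inputs, not taken from the text:

```python
import math

# Minimization sketch of V_{nu^c} with the FI term, writing x = |snu^c|^2 >= 0:
# V(x) = M2*x + lam*(x + xi)^2 has its extremum at x* = -xi - M2/(2*lam),
# so a VEV forms only when xi < -M2/(2*lam), i.e., for xi sufficiently negative.
M2 = 200.0**2       # soft mass squared M_{snu^c}^2 in GeV^2 (assumed sample value)
lam = 0.05          # quartic coefficient g_BL^2 m_rho^2 / (2 (M_BL^2 + m_rho^2)) (assumed)
for xi in (1.0e5, -1.0e5, -1.0e6):          # FI parameter in GeV^2 (assumed)
    x_star = -xi - M2 / (2.0 * lam)
    vev = math.sqrt(x_star) if x_star > 0 else 0.0
    print(f"xi = {xi:+.1e} GeV^2  ->  <snu^c> = {vev:.1f} GeV")
```

For the values above only the most negative $\xi$ produces a nonzero $\langle\tilde\nu^c\rangle$, illustrating that a mildly negative $\xi$ is not enough: it must overcome $M^2/(2\lambda)$.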
While an FI~D-term naturally arises when the $U(1)$ is anomalous, the inclusion of an
FI term for a non-anomalous $U(1)$, which is the case we discuss, is superfluous, and we exclude it
from the minimal model. Therefore, \textit{it is apparent that R-parity is always conserved
within the minimal Stueckelberg $ B-L $ extension of the MSSM. }
The analysis above follows with (minimal) universal boundary conditions on
the soft scalar masses. However, since the nature of physics at the Planck scale is still largely
unknown one should consider non-universalities as well. In this case one will have
additional contribution to the mass squares of scalar masses~\cite{Martin:1993zk,Nath:1997qm}.
The analysis of ~\cite{Ambroso} considers a contribution to
$M_{\tilde \nu^c}^2$ arising from $Tr(Q_{BL} m^2)$
with
\begin{eqnarray}
S_{BL} \equiv Tr(Q_{BL} m^2)
= 2(M_{\tilde Q}^2 - M_{\tilde L}^2) + (M_{\tilde e^c}^2 -M_{\tilde d^c}^2)
+ (M_{\tilde \nu^c}^2- M_{\tilde u^c}^2),
\label{X}
\end{eqnarray}
under the constraint $Tr(Ym^2)=0$, where
\begin{eqnarray}
S_{Y} \equiv Tr(Ym^2) = M_{H_2}^2 -M_{H_1}^2 + \sum_{\rm gen} (M_{\tilde Q}^2 - 2 M_{\tilde u^c}^2
+ M_{\tilde d^c}^2 - M_{\tilde L}^2 + M_{\tilde e^c}^2).
\end{eqnarray}
With universal boundary conditions within each family one has $S_{BL}=0$.
This can be achieved in {minimal supergravity models} where all scalars have the same soft mass term,
or in $SO(10)$ or $E_6$ scenarios where the boundary conditions tell us that all sfermions of one family should have the same soft mass term.
However, with non-universal boundary
conditions one will have in general $S_{BL}\neq 0$. With inclusion of
$S_{BL}$ one could in principle turn $M_{\tilde \nu^c}^2$ negative. Such a situation is achieved
with inclusion of specific constraints in the analysis of~\cite{Ambroso}.
However, such constraints are not generic, and the positivity of
$M_{\tilde \nu^c}^2$
may still be
broadly valid even with the inclusion of non-universalities of the soft parameters.
Now there are stringent bounds on an extra $B-L$ type gauge boson. One finds~\cite{Carena:2004xs}
\begin{eqnarray}
M_{Z^{'}}/g_{BL} > M_{BL} \sim 6 ~{\rm TeV},
\label{c1}
\end{eqnarray}
which implies that for $g_{BL}\sim 1$ the $B-L$ type $Z'$ boson lies in the several TeV region.
With a $Z'$ of this mass scale, detection at LHC-7 may be difficult, both for reasons of energy reach and of luminosity.
Further, with the constraint as given by Eq.(\ref{c1})
some of the other phenomenological implications of the model associated with the spin 0 and spin $\frac{1}{2}$
sectors will also be difficult to test.
In what follows, we uncover a model which maintains the strict R-parity invariance
of the minimal Stueckelberg~ $B-L$ extensions, even after mass growth of the $B-L$ gauge bosons, but
with testable implications that are far richer.
\section{{\boldmath $U(1)_{B-L} \otimes U(1)_X $} Stueckelberg Model \label{3}}
As indicated in the last section, the $Z'$ boson of the minimal $B-L$ model may be difficult to detect because
of its heavy mass. We consider now an extension of the model of the previous section which
overcomes this constraint and produces a $Z'$ which is much lighter but still has $B-L$ interactions
with matter. This extension includes
a hidden sector $U(1)_X$ which is anomaly free but allows for a mixing between the visible and the hidden sectors.
The extended gauge group reads:
\begin{equation} SU(3)_C\otimes SU(2)_L\otimes U(1)_Y\otimes U(1)_{B-L} \otimes U(1)_{X} ~.\end{equation}
Thus we have Stueckelberg~ mass growth in the Abelian sector via the interaction
\begin{eqnarray}
{ \cal L}_{\rm St} = \int d^2 \theta d^2\bar\theta \
[(M_1 X + M_2' C+ S +\bar S)^2\nonumber \\
+ (M_1' X + M_2 C+ S'+ \bar S' )^2]\ ,
\label{mass}
\end{eqnarray}
where the model is invariant under the extended gauge transformations
\begin{eqnarray}
\delta_X (X,S,S',C) &=&(\epsilon_X + \bar \epsilon_X , -M_1 \epsilon_X,-M_1' \epsilon_X,0) \nonumber \\
\delta_{BL} (C,S,S',X) &=&(\epsilon_{BL} +\bar \epsilon_{BL},-M_2' \epsilon_{BL},-M_2 \epsilon_{BL},0)
\end{eqnarray}
where $\epsilon_{X,BL}$ are infinitesimal chiral superfields.
One can compute the mass matrix for the $U(1)_X$ and the $U(1)_{B-L}$ gauge vector bosons
by going to the unitary gauge which in the basis $X_{\mu}, C_{\mu}$ gives
\begin{eqnarray}
\label{neutrmass}
M^{2}_{\rm [spin ~1]} =
\left[
\begin{matrix}
M_1^2 + M^{\prime 2}_1 & M_1M_2'+ M_1' M_2 \\
M_1M_2'+ M_1' M_2 & M_2^{2} + M^{\prime 2}_2
\end{matrix}
\right] \ .
\label{vectormass}
\end{eqnarray}
Here $M_1', M_2'$ are the mixing parameters and in the limit that $M_1', M_2' \to 0$ we have
that the masses of the $X_{\mu}, C_{\mu}$ bosons are $M_1, M_2$. The diagonalization gives us two
massive vector
bosons which we may call $Z', Z''$ where
\begin{eqnarray}
X_{\mu}&=& \cos \theta_{BL} Z'_{\mu} + \sin \theta_{BL} Z_{\mu}'',\nonumber \\
C_{\mu}&=& -\sin\theta_{BL} Z'_{\mu} + \cos \theta_{BL} Z_{\mu}''.
\label{z1}
\end{eqnarray}
We consider now the case
of small mixing, i.e., $M_1', M_2' \ll M_1, M_2$ which implies
$\tan \theta_{BL} \ll 1$. For small mixings
the $Z'$ boson lies mostly in the hidden sector
with a small component proportional to $\tan\theta_{BL}$ in the $B-L$ sector while the opposite holds
for $Z''$. Here $Z''$ lies mostly in the $B-L$ sector with a small component proportional to $\tan\theta_{BL}$
in the hidden sector.
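The small-mixing behavior described above can be verified directly by diagonalizing Eq.(\ref{vectormass}) numerically. The mass parameters below are illustrative sample values (not taken from the text), chosen with $M_1', M_2' \ll M_1, M_2$:

```python
import numpy as np

# Numerical sketch of the diagonalization of the vector mass-squared matrix,
# Eq. (vectormass), for sample Stueckelberg masses with small mixing.
M1, M2, M1p, M2p = 400.0, 7000.0, 10.0, 20.0          # GeV (assumed sample values)
Msq = np.array([[M1**2 + M1p**2,      M1 * M2p + M1p * M2],
                [M1 * M2p + M1p * M2, M2**2 + M2p**2     ]])
evals, evecs = np.linalg.eigh(Msq)                    # eigenvalues in ascending order
MZp, MZpp = np.sqrt(evals)                            # light Z' and heavy Z'' masses
# mixing angle of Eq. (z1), read off from the light eigenvector's (X_mu, C_mu) components
theta_BL = np.arctan2(abs(evecs[1, 0]), abs(evecs[0, 0]))
print(MZp, MZpp, np.tan(theta_BL))
```

One finds $M_{Z'}\simeq M_1$, $M_{Z''}\simeq M_2$ and $\tan\theta_{BL}\ll 1$, as stated in the text.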
\begin{table}[t!]
\begin{center}
\begin{tabular}{c|cc}
\hline
$f\bar f$ & $\Gamma(Z'\to f\bar f)/\alpha_{BL} M_{Z'}$ & $\Gamma(Z''\to f\bar f)/\alpha_{BL} M_{Z''}$ \\
\hline
$\ell^+_i \ell^-_i$ & $\sin^2\theta_{BL}/{3} $ & $\cos^2\theta_{BL}/{3}$ \\
$\nu_{\ell_i} \nu_{\ell_i}$ & $\sin^2\theta_{BL}/{3} $ & $\cos^2\theta_{BL}/{3}$ \\
$q\bar q (q\neq t)$ & $ f_s \sin^2\theta_{BL}/{9}$ & $f_s \cos^2\theta_{BL}/9 $ \\
$t\bar t$ & $f_s f_{t,Z'} \sin^2\theta_{BL}/9 $ & $f_s f_{t,Z''} \cos^2\theta_{BL}/{9} $ \\
\hline
\end{tabular}
\caption{
The decay widths of the $Z'$ and of the $Z''$ bosons into leptons and into quarks in the $U(1)_X\otimes U(1)_{B-L}$ Stueckelberg~ model
where $\alpha_{BL}\equiv g^2_{BL}/4\pi$ and $ f_s =(1+\frac{\alpha_s}{\pi})$ and for $V =Z',Z''$, one has $ f_{t,V} = (1+2\frac{m_t^2}{M_{V}^2}) (1-\frac{4m_t^2}{M_{V}^2})$. } \label{Tab1}
\end{center}
\end{table}
Since $X_{\mu}$ lies in the hidden sector and has no couplings to the visible
sector matter, the only couplings of $Z', Z''$ to the visible sector arises because of the couplings of
$C_{\mu}$ to the visible sector matter.
Using the couplings of $C_{\mu}$ one finds the
couplings of $Z'$ and $Z''$ to the fermions $(f_i)$ to be of the form
\begin{eqnarray}
{\mathcal L }_{Z',Z''} = (\bar f_{i} \gamma^\mu g_{BL} Q_{BL} f_{i}) [-\sin \theta_{BL} Z_\mu'
+ \cos \theta_{BL} Z_\mu''].
\label{zz}
\end{eqnarray}
In the context of Eq.(\ref{zz}) the constraint of Eq.(\ref{c1}) gives two separate conditions, i.e.,
\begin{eqnarray}
M_{Z^{'}}/g_{BL} > \sin\theta_{BL}\times (6~{\rm TeV}), \nonumber \\
M_{Z^{''}}/g_{BL} > \cos\theta_{BL} \times (6~{\rm TeV}).
\label{c2}
\end{eqnarray}
It is clear that the constraint on the $Z'$ is now considerably weakened relative to the constraint of
Eq.(\ref{c1}) if the mixing angle $\theta_{BL}$ is
small and one can have
\begin{equation}
M_{Z'} \ll {\rm 1 ~TeV}, ~~~~ {\rm Stueckelberg~} ~~U(1)_{B-L} \otimes U(1)_{X}.
\end{equation}
However, $Z''$ is still heavy since $\cos\theta_{BL}\sim 1$ for small $\theta_{BL}$.
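The weakening of the bound is simple arithmetic from Eq.(\ref{c2}), which can be made concrete as follows; the choice $g_{BL}=g_Y\sim 0.36$ is an assumed sample value:

```python
# Illustrative check of how the mixing angle weakens the bound of Eq. (c1):
# Eq. (c2) gives M_Z' > g_BL * sin(theta_BL) * 6 TeV.
g_BL = 0.36                                            # g_BL = g_Y, assumed sample value
bounds = {s: g_BL * s * 6000.0 for s in (0.01, 0.02, 0.05)}   # lower bounds in GeV
for s, b in bounds.items():
    print(f"sin(theta_BL) = {s:.2f}  ->  M_Z' > {b:.0f} GeV")
```

Even for the largest mixing shown, $\sin\theta_{BL}=0.05$, the lower bound is only about $100$ GeV, far below the $6$ TeV scale of Eq.(\ref{c1}).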
\begin{figure*}[t!]\centering
\includegraphics[height=6.6cm]{Tev}
\includegraphics[height=6.6cm]{LHCnew}
\caption{
\label{zprime1}
Upper panel: An exhibition of $\sigma(pp\to Z')\cdot Br(Z'\to e^+e^-)$ vs the mass of the $Z'$ resonance
in the Stueckelberg~ $U(1)_{B-L}\otimes U(1)_X$ extension of MSSM at the Tevatron. Here $g_{BL}= 0.35$ and $\sin\theta_{BL}$ takes on the
values $0.01,0.05$ from the bottom to the top curves in the plot.
The analysis assumes that the
$Z'$ decay into the hidden sector is suppressed.
Lower panel: The same analysis at LHC-7 with
$\sin \theta_{BL}$ taking on the values {(0.01,0.02,0.03,0.05)} from the bottom curve to the top in that order. }
\end{figure*}
\begin{figure*}[t!]\centering
\includegraphics[height=8cm]{chart}
\caption{
Exhibition of a 500~GeV $Z'$ resonance in the Stueckelberg $U(1)_{B-L} \otimes U(1)_X$ model at LHC-7 with
a variable luminosity from 5fb$^{-1}$ to 20fb$^{-1}$ with a $P_T$ cut on leptons of $P_T > 30$~GeV.
Currently the LHC has analyzed $\sim 1~\rm fb^{-1}$ of luminosity. For a $Z'$ resonance of 500~GeV with $\theta_{BL} = 0.05$
and $g_{BL} \sim g_Y$ the LHC would need about 5fb$^{-1}$ to begin to see any $Z'$ effect.
With a very optimistic 20fb$^{-1}$, the $Z'$ signal will be strong and $Z'$ should be visible with the
mixings and masses of the size discussed. }
\label{zp2}
\end{figure*}
In Table(\ref{Tab1}) we give the decay widths of the $Z'$ and $Z''$ bosons into leptons and into quarks.
The relative strength of the $Z'$ decay into quarks and leptons provides a distinctive signal for
this model. Thus, for example, the ratio of the branching ratios of $Z'$ into charged leptons
vs into quarks (except into $t\bar t$) is given by
$BR(Z'\to \ell^+\ell^-)/BR(Z'\to q\bar q) = 6/(5(1+ \alpha_s/\pi))$.
Further, in this model the decay width of the $Z'$ and $Z''$ are related by
\begin{eqnarray}
\frac{\Gamma (Z'\to \sum_i f_i \bar f_i)}{ \Gamma(Z'' \to \sum_i f_i\bar f_i)} = \tan^2\theta_{BL} \frac{M_{Z'}}{M_{Z''}}.
\label{ratiowidths}
\end{eqnarray}
Eq.(\ref{ratiowidths}) implies that for the $Z'$ mass in the sub TeV range, and the $Z''$ mass in the range
above 6~TeV, and $\tan\theta_{BL}\ll 1$ consistent with Eq.(\ref{c1}), the ratio of the decay widths
of $Z'$ vs of $Z''$ can be vastly different, i.e., a decay width of $Z'$ in the MeV range vs the decay width
of $Z''$ in the hundreds of GeV range. Thus while the $Z'$ will be a very narrow resonance,
the $Z''$ will be a very broad resonance.
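The orders of magnitude quoted above can be reproduced by summing the channels of Table(\ref{Tab1}) over three lepton generations, five light quarks and the top. The parameter values below are illustrative choices (consistent with the ranges discussed in the text), not fitted numbers:

```python
import math

# Rough numerical sketch of the total fermionic widths of Z' and Z'' from Table 1,
# and a check of Eq. (ratiowidths).  Parameter values are illustrative only.
g_BL, sin_th = 0.35, 0.05
alpha_BL = g_BL**2 / (4 * math.pi)       # alpha_BL = g_BL^2 / (4 pi), as in Table 1
alpha_s, m_t = 0.118, 173.0              # QCD coupling and top mass (GeV), assumed
MZp, MZpp = 500.0, 7000.0                # Z' and Z'' masses (GeV), assumed
cos2_th = 1.0 - sin_th**2
f_s = 1.0 + alpha_s / math.pi

def total_width(M, mix2):
    # 3 charged leptons + 3 neutrinos (1/3 each), 5 light quarks (f_s/9 each),
    # plus the top channel with its phase-space factor f_t
    f_t = (1 + 2 * m_t**2 / M**2) * (1 - 4 * m_t**2 / M**2)
    return alpha_BL * M * mix2 * (1.0 + 1.0 + 5 * f_s / 9 + f_s * f_t / 9)

G_Zp = total_width(MZp, sin_th**2)                    # tens of MeV
G_Zpp = total_width(MZpp, cos2_th)                    # hundreds of GeV
ratio_formula = (sin_th**2 / cos2_th) * MZp / MZpp    # tan^2(theta_BL) * MZ'/MZ''
print(G_Zp, G_Zpp, G_Zp / G_Zpp, ratio_formula)
```

For these inputs the $Z'$ width comes out in the tens-of-MeV range while the $Z''$ width is of order a hundred GeV, and the directly computed width ratio agrees with Eq.(\ref{ratiowidths}) up to small top-threshold effects.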
It is also instructive to check the contribution of the new interactions to the muon anomalous moment which is
now measured very accurately~\cite{Bennett:2004pv} so that the current error in the determination is given by
$\Delta (g_{\mu}-2)= 1.2\times 10^{-9}$. The contribution of the $Z'$ and of the $Z''$ bosons to the anomalous moment is
given by
\begin{eqnarray}
\Delta (g_{\mu}-2) = \frac{g_{BL}^2 m_{\mu}^2} {24\pi^2} \left[ \frac{\sin^2\theta_{BL}}{M_{Z'}^2}
+\frac{\cos^2\theta_{BL}}{M_{Z''}^2} \right].
\end{eqnarray}
Using the LEP constraint of Eq.(\ref{c2})
one finds that the contributions of the new interactions is
\begin{eqnarray}
\Delta (g_{\mu}-2) \leq \frac{ m_{\mu}^2}{12 \pi^2M_{BL}^2}
\end{eqnarray}
and a substitution of $M_{BL}\sim 6$ TeV gives a rather small contribution, i.e., $\Delta (g_{\mu}-2) \leq O(1) \times 10^{-12}$. Remarkably in this case the LEP constraint of Eq.(\ref{c2}) is stronger than the constraint arising from
the very precise measurement of $g_{\mu}-2$.
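The numerical size of this bound follows immediately from the formula above:

```python
import math

# Check of the bound stated above: Delta(g_mu - 2) <= m_mu^2 / (12 pi^2 M_BL^2).
m_mu = 0.10566     # muon mass in GeV
M_BL = 6000.0      # GeV, the scale of Eq. (c1)
bound = m_mu**2 / (12 * math.pi**2 * M_BL**2)
print(bound)       # a few times 1e-12, far below the 1.2e-9 experimental error
```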
\subsection{\it Production of Vector Resonances}
The fact that the $Z'$ boson could have a low mass has important phenomenological implications.
From Table(\ref{Tab1}) we note that the decay width of the $Z'$ boson is proportional to $\sin^2\theta_{BL}$
and since $\sin\theta_{BL}$ is small, the decay width is relatively small, i.e., with the mass of the
$Z'$ in the sub TeV region, its decay width would be in the MeV range and thus
the Stueckelberg~ $Z'$ is a very narrow resonance.
A narrow resonance of this type
should be testable in collider experiments much like the hypercharge Stueckelberg~ $Z'$
on which D\O\ currently has experimental bounds~\cite{DzeroDY}. Further, the decay of the
Stueckelberg~ $Z'$ into leptonic channels will be substantially larger than into hadronic channels
because the branching ratios are proportional to $(B-L)^2$. Thus one can discriminate
a $ B-L $ Stueckelberg~ $Z'$ boson by a study of its branching ratios. Such a resonance could be
produced in the Drell-Yan process at the LHC and the Tevatron via
\begin{equation}
pp (p \bar p) \to Z'\to \ell\bar \ell, q\bar q.
\end{equation}
In Fig.~(\ref{zprime1}) we show the predictions for the $Z'$ cross section times the branching ratio into $e^+ e^-$
in the $U(1)_{B-L} \otimes U(1)_X $ extension of the Standard Model.
Cross sections and event rates are calculated by implementing the couplings into PYTHIA and PGS~\cite{pythia,cway}.
The bottom panel shows the limits on the production cross section for $\sigma(pp\to Z'\to e \bar e)$ at $\sqrt s = 7 ~\rm TeV$ with the recently released $\sim \rm 1~\rm fb^{-1}$ run~\cite{AtlasDY}.
For these curves we take $g_{BL}$ to
have the same value as the hypercharge gauge coupling $g_{Y}$ and we let $\sin \theta_{BL}$ run from $0.01$ to $0.05$. The cross section for other values
of the product $g_{BL} \sin \theta_{BL}$ can be estimated by noting that the cross section at the $Z'$ resonance scales like $g^2_{BL} \sin^2 \theta_{BL}$.
The top panel gives a similar analysis for the Tevatron using the D\O\ data with $5.4~\rm fb^{-1}$ of integrated luminosity~\cite{DzeroDY}.
From the analysis of Fig.(\ref{zprime1}) we observe that at present the Tevatron bound is about as strong as the
present LHC bound. However, the LHC will
surpass the Tevatron very soon. Indeed, the $Z'$ produced in the model can exist with a much lower
mass~\cite{fln1,fln11,Salvioni:2009mt,Chanowitz:2011ew} than the $Z'$ models presently excluded by ATLAS~\cite{AtlasDY} and CMS~\cite{Chatrchyan:2011wq}.
In Fig.(\ref{zp2}) we display the number of events as a function of the di-lepton invariant mass.
Here one finds that with an optimistic choice of an integrated luminosity of 20fb$^{-1}$ the number of
dileptonic events is in excess of 30 in the peak mass bin and the signal should be visible. Thus a $Z'$ mass of 500 GeV with a mixing angle
$\theta_{BL}=0.05$ and $g_{BL}=g_Y$ is a promising candidate for discovery.
\subsection{\it Production and Decay of the Scalars $\rho$ and $\rho'$}
In addition to the $Z'$ phenomenology there are other sectors where new phenomena can arise.
One of these relates to the scalar components $\rho_X$ and $\rho_{BL}$
of $S+\bar S$ and of $S'+ \bar {S'}$ that remain in the bosonic sector after
$Z'$ and $Z''$ gain mass by the Stueckelberg~ mechanism. These fields mix with the D-terms
so that one has the following terms in the Lagrangian
\begin{eqnarray}
\rho_{BL} (M_1 D_{BL} + M_2' D_X) + \rho_X
(M_1' D_{BL} + M_2 D_X).
\label{dterm}
\end{eqnarray}
Elimination of the D-terms gives the following mass matrix in the $\rho_X$ and $\rho_{BL}$ basis
\begin{eqnarray}
M^{2}_{\rm [spin~ 0]} =
\left[
\begin{matrix}
M_1^2 + M^{\prime 2}_2 + m_{X}^2 & M_1M_1'+ M_2 M_2' \\
M_1M_1'+ M_2 M_2' & M^{\prime 2} _1+ M_2^2 + m_{BL}^2
\end{matrix}
\right] \ ,
\label{scalarmass}
\end{eqnarray}
where we have also included the soft contributions to masses for $\rho_X$ and $\rho_{BL}$.
We note that the structure of the spin zero mass squared matrix given by Eq.(\ref{scalarmass}) is
different compared to the mass squared matrix given by Eq.(\ref{vectormass}). The reason for this
is that while the vector mass squared matrix arises directly from the Stueckelberg term Eq.(\ref{mass}),
the mass squared matrix of Eq.(\ref{scalarmass}) arises from the mixing given by Eq.(\ref{dterm}).
The mass matrix of Eq.(\ref{scalarmass}) gives two mass eigenstates $\rho$ and $\rho'$ with eigenvalues $M_{\rho}$ and
$M_{\rho'}$.
The mass parameters $M_1', M_2'$ define the mixing, and when the mixing is small,
$M_{\rho}^2 \to M^2_1+ m_X^2$ and $M_{\rho'}^2\to M_2^2+ m_{BL}^2$. With
$M_{\rho}$ in the sub-TeV range, $M_{\rho'}$ may have a mass in the several TeV range.
These mass eigenstates are admixtures of $\rho_X$ and $\rho_{BL}$
so that
$
\rho_X= \cos \theta'_{BL} \rho + \sin\theta'_{BL} \rho'$
and $\rho_{BL}= -\sin\theta'_{BL} \rho + \cos\theta'_{BL} \rho'$.
For the case when the soft terms are absent, the eigenvalues of the mass squared matrices
of Eq.(\ref{vectormass}) and Eq.(\ref{scalarmass}) are identical despite the very different
forms of the two matrices.
This can be seen by
the following unitary transformation
\begin{eqnarray}
U^{\dagger} M^2_{\rm [spin~1]} U= M^2_{\rm {[spin~0]}}, \label{cool}
\end{eqnarray}
where the unitary matrix that connects the spin~1 and spin~0 matrix is given by
\begin{eqnarray}
U = \left(
\begin{matrix}
\cos\xi & \sin\xi\\
-\sin\xi & \cos\xi
\end{matrix} \right) \ , ~~ \tan\xi= \frac{M_1'-M_2'}{M_1+ M_2}.
\label{UU}
\end{eqnarray}
This result shows that the eigenvalues for the matrices $M^2_{[\rm spin~1]}$ and $M^2_{\rm[spin ~0]}$ are the same
in the limit of vanishing soft masses for the scalars.
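The unitary equivalence of Eq.(\ref{cool}) can be checked numerically for arbitrary mass parameters; the values below are illustrative samples:

```python
import numpy as np

# Numerical check of Eq. (cool): with vanishing soft masses the spin-1 and spin-0
# mass-squared matrices are related by the rotation U of Eq. (UU).
M1, M2, M1p, M2p = 300.0, 5000.0, 25.0, 40.0          # assumed sample values (GeV)
Msq_spin1 = np.array([[M1**2 + M1p**2,  M1 * M2p + M1p * M2],
                      [M1 * M2p + M1p * M2, M2**2 + M2p**2 ]])
Msq_spin0 = np.array([[M1**2 + M2p**2,  M1 * M1p + M2 * M2p],
                      [M1 * M1p + M2 * M2p, M1p**2 + M2**2 ]])
xi = np.arctan((M1p - M2p) / (M1 + M2))               # tan(xi) of Eq. (UU)
c, s = np.cos(xi), np.sin(xi)
U = np.array([[c, s], [-s, c]])
assert np.allclose(U.T @ Msq_spin1 @ U, Msq_spin0)    # U^T M^2_[1] U = M^2_[0]
print(np.sqrt(np.linalg.eigvalsh(Msq_spin1)),
      np.sqrt(np.linalg.eigvalsh(Msq_spin0)))         # identical spectra
```

The underlying reason is that the spin-1 matrix is $AA^T$ and the spin-0 matrix (without soft terms) is $A^TA$ for $A=\left(\begin{smallmatrix} M_1 & M_1'\\ M_2' & M_2\end{smallmatrix}\right)$, so the two share their eigenvalues.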
Now it is assumed that all the matter fields in the visible sector carry no $U(1)_X$ quantum numbers,
i.e., $Q_X=0$ for quarks, leptons and the Higgs fields. Further, following the analysis of Sec.(4), it is
straightforward to establish that the quartic term $(\tilde\nu^{c\dagger} \tilde\nu^c)^2$ has a positive
co-efficient in the scalar potential. Thus once again since there are no couplings in the model to turn
$M_{\tilde \nu^c}^2$ negative, there is no spontaneous violation of R-parity also in this extended model
while the
$B-L$ gauge boson develops a mass via the Stueckelberg~ mechanism.
From the discussion preceding Eq.(\ref{cool}), it is clear that the field $\rho_X$ has no coupling with the visible
sector while $\rho_{BL}$ has couplings of the form $g_{BL} M \rho_{BL} {\tilde f}_i^{\dagger} Q_{BL} \tilde f_i$.
One then has the following interactions of $\rho$ and $\rho'$ with sfermions
\begin{eqnarray}
{\mathcal L }_{\rho \tilde f^\dagger \tilde f} =- \sin\theta'_{BL} g_{BL} M_1 {\tilde f}_i^\dagger Q_{BL} \tilde f_i \rho
+ \cos\theta'_{BL} g_{BL} M_1 {\tilde f}_i^\dagger Q_{BL} \tilde f_i \rho'.
\label{rho2}
\end{eqnarray}
Eq.(\ref{rho2}) allows the decay of the $\rho (\rho')$ via its couplings to the sfermions.
If kinematically allowed $\rho (\rho')$ will decay into leptons + $E_T^{miss}$ or into jets + $E_T^{miss}$
where $E_T^{miss}$ contains at least two neutralinos $\chi^0$
(here ${\chi^0}$ is
the lightest neutralino (LSP) of the $U(1)_{B-L} \otimes U(1)_X$ combined sector - see sec.~(\ref{4})). However, an interesting situation arises
when the mass of $\rho (\rho')$ is smaller than $2 M_{\chi^0}$. In this case $\rho (\rho')$ cannot decay into the
final states with 2 LSPs and only the decays into the Standard Model particles are allowed.
Such decays can occur via loops and the final states will consist of $gg$, $f_i\bar f_i$,
$WW$, $ZZ$, $\gamma Z$, $\gamma\gamma$. There are many diagrams that contribute.
The dominant ones
relevant to the model we study here with real scalars $\rho$ and $\rho'$
are the gluon fusion
diagrams (see Fig.(3)).
\begin{figure}[hbt]
\begin{center}
\setlength{\unitlength}{1pt}
\scalebox{1.17}{
\begin{picture}(600,100)(80,0)
\Gluon(210,20)(240,20){-3}{4}
\Gluon(210,80)(240,80){3}{4}
\DashLine(210,80)(210,20){5}
\DashLine(210,20)(180,50){5}
\DashLine(180,50)(210,80){5}
\DashLine(150,50)(180,50){5}
\put(125,46){$\rho, \rho'$}
\put(220,46){$\widetilde{q}$}
\put(245,18){$g$}
\put(245,78){$g$}
\DashLine(300,50)(330,50){5}
\DashCArc(345,50)(15,0,180){5}
\DashCArc(345,50)(15,180,360){5}
\Gluon(360,50)(390,80){3}{5}
\Gluon(360,50)(390,20){-3}{5}
\put(275,46){$\rho, \rho'$}
\put(335,70){$\widetilde{q}$}
\put(395,18){$g$}
\put(395,78){$g$}
\end{picture} } \\
\setlength{\unitlength}{1pt}
\caption[ ]{Diagrams giving rise to
{the production of the Stueckelberg~ scalars, $\rho,\rho'$} at the lowest order.}
\end{center}
\end{figure}
From Eq.(\ref{rho2}) the interactions of $\rho$ and $\rho'$ to the mass diagonal squarks
are given by the following interaction
\begin{eqnarray}
{\mathcal L }_{(\rho, \rho')\tilde q ^{\dagger} \tilde q} & =& -g_{\rho} M_1
\cos(2\theta_{\tilde q_i}) \left(\tilde q^{\dagger}_{1i} \tilde q_{1i} \rho -\tilde q^{\dagger}_{2i} \tilde q_{2i} \rho
\right) -
g_{\rho} M_1
\sin(2\theta_{\tilde q_i}) \left(\tilde q^{\dagger}_{1i} \tilde q_{2i} \rho +\tilde q^{\dagger}_{2i} \tilde q_{1i} \rho
\right) \nonumber\\
&+& (\rho \to \rho', -\sin\theta'_{BL}\to \cos\theta'_{BL}),
\label{rhoint}
\end{eqnarray}
with the $B-L$ dependence encoded via \begin{equation} g_{\rho}= \frac{1}{3} g_{BL} \sin\theta_{BL}' ,\end{equation}
and where $i$ runs over the squark flavors.
Now while the $\rho,\rho'$ vertices couple either to the same or to different squark mass
eigenstates, the gluon couples only to pairs
of the same mass eigenstate. Thus in Eq.(\ref{rhoint}) only the interaction terms
proportional to $\cos 2\theta_{\tilde q i}$ enter in the gluon fusion diagram. As such, the
decay width of the $\rho$ to gluons is given by
\begin{eqnarray}
\Gamma(\rho \to gg) = \frac{g_{\rho}^2\alpha_s^2 M_{\rho}^3 M_1^2}{512\pi^3}
\left|\sum_{a=1,2; i} (-1)^{1+a} \cos(2\theta_{\tilde q_i})
\frac{L_1(r_{ai})}{m_{\tilde q_{ai}}^2}\right|^2
\label{rhogg}
\end{eqnarray}
with $r_{ ai}= M_{\rho}^2/(4m_{\tilde q_{ai}}^2)$,
and $L_1(r)$ is a loop function defined by~\cite{Djouadi:1998az}
\begin{eqnarray} L_1(r)= r^{-2} \left[ r- f(r)\right], ~~~~
f(r) = \begin{cases}
\arcsin^2(\sqrt r)
&r\leq 1 \\
-\frac{1}{4} \left( \log \frac{1+\sqrt{1-r^{-1}}}{1-\sqrt{1-r^{-1}}} - i\pi\right)^2,
&r>1~.
\end{cases}
\label{L1}
\end{eqnarray}
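The loop function of Eq.(\ref{L1}) is straightforward to transcribe; in the heavy-squark limit $r\to 0$ it tends to $-1/3$, and above threshold ($r>1$) it develops an imaginary part, which provides a useful check of an implementation:

```python
import cmath
import math

# Direct transcription of the loop function of Eq. (L1) entering Gamma(rho -> gg).
def f(r):
    if r <= 1.0:
        return math.asin(math.sqrt(r)) ** 2
    x = math.sqrt(1.0 - 1.0 / r)
    return -0.25 * (cmath.log((1.0 + x) / (1.0 - x)) - 1j * math.pi) ** 2

def L1(r):
    # L1(r) = r^{-2} (r - f(r)), with r = M_rho^2 / (4 m_squark^2)
    return (r - f(r)) / r**2

print(L1(1e-4))    # heavy-squark limit: approaches -1/3
print(L1(2.0))     # complex above threshold (r > 1)
```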
As a consequence of the symmetry of gauge interactions one also has
\begin{eqnarray}
\Gamma(\rho'\to gg) = \frac{M_{\rho'}^3}{M_{\rho}^3} \cot^2\theta_{BL}'
\Gamma(\rho \to gg).
\end{eqnarray}
Further the partonic production cross section of $\rho$ is
given by
\begin{eqnarray}
\hat\sigma(gg\to \rho) = \frac{ g_{\rho}^2 \alpha_s^2 M_1^2}{256 \pi M_{\rho}^4}
\left | \sum_{a=1,2;i}
(-1)^{1+a} \cos (2\theta_{\tilde q_{i}}) r_{ai} L_1(r_{ai})
\right |^2
\delta(1- M_{\rho}^2/\hat s).
\label{rh011}
\end{eqnarray}
The hadronic production cross section relevant to the search for $\rho$ at the LHC is $\sigma(pp\to \rho)$
and is given by a convolution with the parton distribution functions for the gluon, which at leading
order in the narrow width approximation is given by
\begin{eqnarray}
\sigma(pp\to \rho)(s) =
\tau_{\rho}\frac{d L_{gg}^{pp}}{d\tau_{\rho}}
\hat \sigma(gg\to \rho).
\end{eqnarray}
Here $\sqrt s$ is the $pp$ center-of-mass energy,
$\tau_{\rho} = M_{\rho}^2/s$,
and ${dL_{gg}^{pp}}/{d\tau}$ is given by
\begin{eqnarray}
\frac{d{\it L}_{gg}^{pp}}{d\tau}= \int_{\tau}^1 \frac{dx}{x} f_{g/p}(x,Q) f_{g/p}(\frac{\tau}{x}, Q),
\end{eqnarray}
where $f_{g/p}$ is the parton distribution function for finding the gluon inside a proton with momentum
fraction $x$ at a factorization scale $Q$. A numerical analysis shows that
$ \sigma(pp\to \rho)$ can be as large as O(1000)~fb in the most optimal part of the parameter space
for producing the $\rho$.
\begin{figure*}[t!]\centering
\includegraphics[scale=0.43,angle=0]{rhofigure}
\caption{A display of the production cross section $\sigma(pp\to \rho)$ from gluon fusion as a function of the
mass of the Stueckelberg scalar $\rho$ at the LHC with $\sqrt s= 14$ TeV for several combinations of $\theta'_{BL}$ and $g_{BL}$
for the case which maximizes
the production for the MSSM sector, $|\cos 2 \theta_{\tilde t}| =1$, i.e. $|A_t| \tan \beta = |\mu|$ . From bottom to top the curves have $(\theta'_{BL}, g_{BL}) =(\pi/6,0.65),(\pi/2, 0.65),(\pi/2,1.2)$,
where the top curve is close to the theoretical upper limit on the production.
The kink appears at the point where $M_{\rho}/(2 m_{\tilde t_1})=1$
and the analysis has
the other squarks and the gluino
much heavier than the lighter stop. }
\label{pprho}
\end{figure*}
The final decay modes of the $\rho$ can produce
visible signatures at the LHC, and the branching ratios
will generally be different from those of the
Standard Model Higgs $h_{\rm SM}$. Thus $h_{\rm SM}$ has both tree level decays into the final states
$b\bar b, \tau\bar \tau, c\bar c$ as well as decays via loop diagrams into $gg, WW, ZZ, Z\gamma, \gamma \gamma$.
For a Higgs boson mass of $100$ GeV, the dominant decay modes are the tree level modes,
with the $b\bar b$ branching ratio at almost 80\%. Among the loop decays the dominant mode is $gg$ and the
sub-dominant modes are $WW$ (off shell) and $\gamma\gamma$ at a Higgs mass of 100 GeV.
Now suppose the tree level decays of the Higgs were suppressed; then the decay of the Higgs to
$\gamma\gamma$ would have a branching ratio of $\sim 2.5\times 10^{-2}$.
The decay of the $\rho$ parallels this case since there are no tree level decays of the $\rho$.
In the analysis below we will use the above branching ratio to get an approximate estimate
of the number of $\gamma\gamma$ events from $\rho$ decay.
An analysis of $pp\to \rho$ at the LHC at
$\sqrt s=14$ TeV is given in Fig.(\ref{pprho}). One finds that the cross section at $M_{\rho}=100$ GeV
for the maximal case with $(\theta_{BL}', g_{BL})=(\pi/2, 1.2)$
is $\sim 10^3$~fb. At 200 fb$^{-1}$ of integrated luminosity at the LHC at $\sqrt s=14$ TeV, one will have
$2\times 10^5$ $\rho$ events when $M_{\rho}=100$ GeV. Using $BR(\rho\to \gamma\gamma) = 2.5\times 10^{-2}$
one finds $\sim 5000$ $\gamma\gamma$ events before kinematic and efficiency cuts.
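The event-rate arithmetic above can be reproduced directly; the O($10^3$)~fb peak cross section and the $2.5\times 10^{-2}$ branching-ratio estimate are the figures used in the text, and this is a back-of-the-envelope sketch rather than a simulation:

```python
# Event-rate arithmetic for the diphoton signal at LHC-14.
sigma_fb = 1.0e3      # sigma(pp -> rho) at M_rho = 100 GeV, optimal couplings (fb)
lumi_fb = 200.0       # integrated luminosity (fb^-1) at sqrt(s) = 14 TeV
br_gamgam = 2.5e-2    # approximate BR(rho -> gamma gamma) used in the text
n_rho = sigma_fb * lumi_fb
n_gamgam = n_rho * br_gamgam
print(n_rho, n_gamgam)    # 2e5 rho events, ~5e3 diphoton events before cuts
```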
We note that the photons coming from the
$\gamma\gamma$ signal will be monochromatic carrying roughly half the mass of the decaying particle.
Thus the $\gamma\gamma$ signal arising from the decay of the $\rho$ would be distinguishable from the
$\gamma\gamma$ signal from the Higgs decay if the masses of the two are significantly separated.
A $\rho$ mass of $100$ GeV would imply a $Z'$ mass of 100 GeV as well, assuming no soft terms in
the $\rho$ sector. A $Z'$ mass of 100 GeV is consistent with the current data if either
the mixing angle $\theta_{BL}$ is small or the $Z'$ decays dominantly into the hidden sector
(see Sec.(\ref{DD})). We note also that while the mass of the $Z'$ and the mass of $\rho$
are the same in the absence of soft breaking terms for $\rho$, the couplings of the
$Z'$ to fermions and of $\rho$ to squarks can be of very different sizes. This is apparent from
Eq.(\ref{UU}). Hence the possibility arises of being able to discover both the $\rho$ and the $Z'$.
However it is also quite possible that only one resonance may be visible depending on the overall
size of the Stueckelberg masses and the individual couplings of the two states.
The production cross section for $pp\to \rho, \rho' $ bears resemblance to the
analysis of \cite{FileviezPerez:2011kd} and
is closely related to canonical Higgs production~ (see e.g. \cite{Spira:1997dg,Djouadi:1998az})
but is restricted by the form of the couplings as given in the $B-L$ Stueckelberg~ extension.
We add that recently several models with scalars
have been studied in the literature which can produce large production enhancements
relative to the SM higgs production (see e.g~\cite{Giudice:2000av,DeRujula:2010ys,Low:2011gn,Fox:2011qc}).
The production of $\rho$ does not receive enhancements of the size studied above, but nevertheless does
produce event rates that can be measured at the LHC-14 with larger luminosity as was detailed above.
We note that very recently the LHC has put new constraints
on the allowed mass of the Standard Model Higgs Boson $h_{\rm SM}$. Preliminary analyses based on those reported
at EPS 2011 and at Lepton-Photon 2011~\cite{EPSLP} imply that the SM Higgs boson
has a mass below $\sim$~145~GeV. The above result is compatible with the SUGRA models which typically
indicate a Higgs mass below $\sim 140~\rm GeV$.
Because the production of $\rho$ relative to the $h_{\rm SM}$ differs markedly via their couplings,
as discussed above, the production of the
two fields could be distinguished with sufficient luminosity. This is possible if the $h_{\rm SM}$
resonance and the $\rho$ resonance are sufficiently separated in mass. In addition,
because the production of $\rho$ is weaker than that of $h_{\rm SM}$, the golden channels
such as $ZZ,WW$ remain available in mass regions where $h_{\rm SM}$
has been ruled out. Searches for $M_{\rho} \sim (200-500)~\rm GeV$
will however have to wait for upgraded luminosity at the LHC.
\section{Neutral Dirac and Majorana Components of Dark Matter \label{4}}
\subsection{\it Majorana Dark Matter }
The $U(1)_{B-L} \otimes U(1)_X $ Stueckelberg~ extension of the MSSM has new implications
for the nature of dark matter.
Specifically,
in the neutralino sector we have, in addition to the MSSM neutralinos, extra gauginos and stinos,
where the stino is the analogue of the higgsino.
Thus from the gauge supermultiplets $X=(X_{\mu}, \lambda_X, D_X)$ and
$C=(C_{\mu}, \lambda_C, D_C)$
we can construct two gaugino states which we label as $\Lambda_X, \Lambda_{BL}$. Similarly from
the chiral multiplets $S+\bar S$ and $S'+\bar S'$ we can construct two higgsino states
$\psi_S, \psi_{S'}$. These four neutralino states in the
Stueckelberg~ sector have no mixings with the MSSM neutralinos.
Thus the neutralino mass matrix in the
$U(1)_X\otimes U(1)_{B-L}$ extension of MSSM
has the form
\begin{eqnarray}
{\cal M}_{\rm neutralino} =
\left(
\begin{array}{c|c}
{\cal M}_{st} & 0_{4 \times 4} \\
\hline
0_{4 \times 4} & {\cal M}_{\rm MSSM} \\
\end{array}
\right)
\end{eqnarray}
Specifically the neutralino mass terms in the $U(1)_X\otimes U(1)_{B-L}$ sector are given by
\begin{eqnarray}
{\cal{L}}^{\rm mass} &=& - Z^T {\cal{M}}_{st} \ Z, \\
Z^T &=& (\psi_S, ~ \psi_{S'}, ~ \Lambda_{B-L}, ~ \Lambda_X),
\end{eqnarray}
where the $4\times 4$ sub-block of the ${U(1)_{B-L}\otimes U(1)_X}$ sector has the form
(omitting for simplicity the soft terms)
\begin{eqnarray}
{\cal{M}}_{st} = \left( \begin{array}{cc}
0_{2\times 2}& m \\
m^T & 0_{2\times 2} \\
\end{array} \right)_{4\times 4},
\ m =
\left( \begin{array}{cc}
M_1 & M^{\prime }_2 \\
M^{\prime }_1 & M_2\\
\end{array} \right)_{2\times 2}.
\label{e5}
\end{eqnarray}
We can diagonalize the neutralino mass matrix in the $U(1)_X\otimes U(1)_{B-L}$ sector
by an orthogonal transformation $Z= O X$ so that
\begin{eqnarray}
X^T O^T {\cal{M}}_{st} O X= diag(m_{\chi_5^0}, m_{\chi_6^0}, m_{\chi_7^0}, m_{\chi_8^0}) .
\end{eqnarray}
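As a numerical illustration of this diagonalization (with placeholder mass entries, not values from any fit), the symmetric block structure of ${\cal M}_{st}$ implies that its eigenvalues come in $\pm$ pairs whose magnitudes are the singular values of the $2\times 2$ block $m$; the physical Majorana masses are these magnitudes:

```python
import numpy as np

# Illustrative entries (GeV) for the 2x2 sub-block m of the
# Stueckelberg neutralino mass matrix; values are placeholders.
M1, M2p = 300.0, 50.0
M1p, M2 = 40.0, 400.0
m = np.array([[M1, M2p],
              [M1p, M2]])

# Full 4x4 matrix M_st = [[0, m], [m^T, 0]] (soft terms omitted)
M_st = np.block([[np.zeros((2, 2)), m],
                 [m.T, np.zeros((2, 2))]])

# M_st is real symmetric, so it is diagonalized by an orthogonal matrix O.
evals, O = np.linalg.eigh(M_st)

# Eigenvalues come in +/- pairs; their magnitudes are the singular
# values of m, and the physical masses are |eigenvalues|.
sing = np.linalg.svd(m, compute_uv=False)
print(np.sort(np.abs(evals)))  # each singular value of m appears twice
```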
Now the generalization of the matter Lagrangian reads
\begin{eqnarray}
{\cal L}_{\rm matter} ~=~
\bar \Phi_m e^{ 2g_{BL} Q_{BL} C +2g_{X} Q_{X} X } \Phi_m |_{\theta^2 \bar \theta^2}\ ,
\label{matter}
\end{eqnarray} and
gives a coupling of the type $\bar \Lambda_{B-L} f_L \tilde f_L^*$,
where $(f_L, \tilde f_L)$ are a chiral fermion and a chiral scalar,
which leads to couplings of the Stueckelberg~ sector neutralinos with matter of the form $\bar \chi_{k}^0 f_L \tilde f_L^*$
$(k=5-8)$.
Thus we note that even though the neutralino mass matrix does not have a mixing between
the MSSM and the Stueckelberg~ sectors, the neutralino in the Stueckelberg~ sector can decay into the lightest
supersymmetric particle (LSP) which may lie in the MSSM sector. The way it occurs is as follows: The neutralinos
$\chi_k^0$ $(k=5-8)$ have fermion-sfermion interactions as indicated above,
while the neutralinos in the MSSM also have similar
type interactions. If the mass $m_{\chi_k^0} > m_{\chi_1^0}$ we will have decays
of the type $\chi_k^0 \to \bar{\tilde f}_i f_i \to \bar f_i f_i \chi_1^0 + \cdots$. Thus there is only one stable
Majorana supersymmetric particle in the combined MSSM and Stueckelberg~ system.
On the other hand if, for example, $\chi_5^0$ is the lightest neutralino then the LSP will lie in the Stueckelberg~ sector.
In this case the $\chi_{\rm st}^0 =\chi_5^0$ would
be a dark matter candidate {(the notation st denotes Stueckelberg and does not imply preference of the stino
component over the gaugino component)}. Its properties are expected
to be similar to those of the bino LSP of the MSSM.
For the case of a thermal relic, the annihilation of $\chi_{\rm st}$
will occur via the t-channel squark exchange so that (dropping the superscript 0 from here on)
$\chi_{\rm st} + \chi_{\rm st} \to f_i \bar f_i,$ as well as
$\chi_{\rm st} + \tilde f_{\rm MSSM} \to \rm SM ~SM', $
$\chi_{\rm st} + \chi_{\rm MSSM} \to \rm SM ~SM',$
where the last two cases indicate that coannihilations
will generally occur~\cite{darkforceA,fucito,FLNN} (for a review see \cite{Feldman:1900zz}).
For the direct annihilations, unlike the annihilation of MSSM neutralinos, there are {\em no direct channel $Z$ or Higgs pole} exchange
diagrams and consequently final states such as $WW$, $ZZ$, $ZH$, $HH$ are absent at the tree level. For the case of co-annihilations this is modified. {If $\rho$ is of low mass, as discussed in the previous section, the
stop should be relatively light to accommodate a signal of $\rho$. In this case
the relic density can be satisfied via stop co-annihilations~\cite{Drees}. \\
}
{
Next, we discuss the direct detection of $\chi_{\rm st}$. Specifically there are {\em no t-channel Higgs or Z pole} exchange contributions to the direct detection
rates for this case at the tree level.
As pointed out in Refs.~\cite{Drees:1993bu,Hisano} it is important
to include contributions arising in the spin independent scattering
cross section from the twist-2 operators
\begin{eqnarray}
f_p/m_p&\owns& \sum_{q=u,d,s} f_q f_{Tq}+ \sum_{q=u,d,s,c,b}
\frac{3}{4} \left(q(2)+\bar{q}(2)\right)g_q^{(1)}
+ \ldots
\end{eqnarray}
where the additional terms are suppressed and the matrix elements $q(2)$, $\bar{q}(2)$ are given in \cite{Hisano}.
Specifically, in the limit of massless quarks, $g_q^{(1)}$ is given by
\begin{eqnarray}
g_q^{(1)}\simeq \frac{{M_{\chi_{ \rm st}}}}{(m_{\tilde{q}}^2-M_{\chi_{ \rm st}}^2)^2}\frac{a_q^2+b_q^2}{2}
\end{eqnarray}
where $a_q^2+b_q^2 = g^2_{BL} Q^2_{BL}=g^2_{BL} /9$.
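A quick numerical sketch shows how $g_q^{(1)}$ grows as the squark-LSP mass splitting $\Delta$ shrinks; it uses $g_{BL}=0.65$ as fixed later in the text and an illustrative LSP mass consistent with the LHC limits quoted below:

```python
def g_q1(M_chi, m_squark, g_BL=0.65):
    # Twist-2 coefficient g_q^(1) in the massless-quark limit,
    # with a_q^2 + b_q^2 = g_BL^2 / 9 for the B-L quark charge.
    aq2_plus_bq2 = g_BL**2 / 9.0
    return M_chi * aq2_plus_bq2 / (2.0 * (m_squark**2 - M_chi**2)**2)

M_chi = 600.0  # GeV; illustrative LSP mass
g_small = g_q1(M_chi, M_chi + 30.0)   # Delta = 30 GeV splitting
g_large = g_q1(M_chi, M_chi + 100.0)  # Delta = 100 GeV splitting
print(g_small / g_large)  # roughly an order-of-magnitude enhancement
```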
In addition, there are terms
of size $\sum_{q=u,d,s} f_q f_{Tq}$ (where $f_q,f_{Tq}$ are given in \cite{Drees:1993bu,Chatto,Cor}).
Here terms in $f_q$ that are
proportional to $a_q^2+b_q^2$ are suppressed by a factor of 4 relative
to $g_q^{(1)}$~\cite{Hisano}. Terms in $f_q$ also contain $a^2_q-b^2_q \propto g^2_{BL} Q^2_{BL} \sin 2\theta_{\tilde q}$
and are ultra suppressed by the smallness of the squark mixing angle.
For the case when $M_{\chi_{\rm st}}$ is relatively close in mass to $m_{\tilde q}$, up to corrections
in the light quark masses, there is an enhancement in the
SI cross section~\cite{Hisano}. Utilizing this effect, for mass splittings of order 30-100 GeV,
one easily sees detectable size SI cross sections for squark masses that are in accord with
LHC limits (see Fig.(\ref{stsi})). At even smaller mass splittings, the models are constrained by XENON.
We have verified using micromegas~\cite{micro} that the small mass splitting between the LSP and the squarks
can lead to cross sections of the size we find. In this
case the relic density can be brought in accord with WMAP from co-annihilations. In particular
the squarks in the initial state annihilations play a large role in reducing the relic abundance.
There is also mixing
that derives from rotating between
the gaugino and the chiral fermion in the Stueckelberg multiplet. We consider the optimal case where in
the mass diagonal basis, the lighter of the two mass eigenstates is the one which couples
via the larger mixing. Thus we have taken the mixing in the gaugino-stino sector $\cos\theta_{\chi_{\rm st}} \to 1 $,
and have fixed $g_{BL} =0.65$ in Fig.(\ref{stsi}).
The result of a large scattering cross section does
require an LSP above around (500-600)~GeV to be consistent
with the current limits from the LHC
\cite{LHC1,LHC2,Akula1,Akula2,Buchmueller:2011ki}.
}
\begin{figure*}[t!]\centering
\includegraphics[height=8cm]{replace}
\caption{
\label{stsi}
{Spin independent {$\chi_{\rm st}$} neutralino-proton cross section vs the Stueckelberg neutralino mass for the case when
the Stueckelberg neutralino is the LSP. Exhibited is the spin independent cross section for several combinations
of {$\Delta \simeq m_{\tilde q} -M_{\chi_{ \rm st}}$} (in units of GeV). The current limits from XENON-100 are also exhibited.}}
\end{figure*}
\subsection{\it Dirac Dark Matter \label{DD}}
Additional matter fields in the form of Dirac fermions (and their supersymmetric counterparts, two
chiral scalars) can exist in the $U(1)_X$ sector which have only vectorial couplings to the gauge field
$X^{\mu}$ and
a mass for the Dirac fermions can be generated via terms in the superpotential~\cite{Feldman:2010wy}.
As seen already, after mixing of the $B-L$ gauge field $C^{\mu}$ with the field $X^{\mu}$, two mass eigenstates
$Z'$ and $Z''$ arise in the mass diagonal basis, each of which has $B-L$ type couplings with
the SM fields.
In addition, the interaction of the dark sector Dirac field with the $Z', Z''$ is given by
%
\begin{equation} {\mathcal L}_{D} = \bar D \gamma^{\mu} (C_{Z' D} Z'_{\mu}+ C_{Z'' D}Z''_{\mu})D.
\end{equation}
The interaction vertices of the Dirac particle ($D$)
with the visible sector quarks and leptons enter through the vector mixings so that
%
\begin{eqnarray}
C_{Z' D} &= & g_X Q_X \cos\theta_{BL},~~C_{Z'' D} =
g_X Q_X \sin \theta_{BL}~.
\end{eqnarray}
The dark sector Dirac field can constitute dark matter. It is stable and electrically neutral.
Since the model we consider has two components of dark matter, the total relic density $\Omega h^2$
will be shared by the neutralino and
Dirac particles. In the analysis we assume that the dark matter densities ${ \varrho}_D, { \varrho}_{\chi}$ for the two components in the galaxy
are proportional to their respective relic densities such that their sum is the total cold dark matter (CDM) density
\begin{eqnarray}
\frac{\varrho_D}{\varrho_{\chi}} \simeq \frac{\Omega_D}{\Omega_{\chi}},~~~~~~~
{\Omega_{\rm CDM} h^2 }= \underset{\rm (Majorana)}{\Omega_{\chi} h^2} + ~ \underset{\rm (Dirac)}{ \Omega_{D} h^2} .
\end{eqnarray}
The annihilation cross section of $D \bar{D}$ into quarks and leptons via the
$Z',Z^{\prime \prime}$ poles is given by
\begin{eqnarray}
\sigma_{D \bar D \to f\bar f} &=& A_{D,f} |P_{Z'} - P_{Z''} |^{2},
\end{eqnarray}
where the poles and couplings enter as
\begin{eqnarray}
P_{V} &=&( {s-M_{V}^2+i\Gamma_{V} M_{V}})^{-1} , ~~ V= (Z', Z''),
\\
A_{D,f} &=& \frac{g^2_{D,f} N_f}{48 \pi s} (2 M^2_D+s )(2 m^2_f+s) \sqrt{\frac{4 m_f^2-s}{4
M_D^2-s} } \tilde \Theta,
\\
g_{D,f} &=&
g_{BL} Q_{BL,f} g_X Q_X\sin 2\theta_{BL},
\end{eqnarray}
and where $s = 4 M^2_D/(1-v^2/4)$, $ \tilde \Theta =\Theta( s-4 m_f^2)$, and $N_f = (1,3)$ for (leptons, quarks).
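The resonant structure of this cross section can be illustrated numerically. The sketch below evaluates only the pole factor $|P_{Z'} - P_{Z''}|^2$; the masses and widths are placeholders (the actual partial widths are those of Table 1):

```python
def breit_wigner(s, M, Gamma):
    # Pole factor P_V = (s - M_V^2 + i Gamma_V M_V)^(-1)
    return 1.0 / complex(s - M**2, Gamma * M)

# Illustrative masses and widths in GeV (placeholders)
M_Zp, G_Zp = 250.0, 0.5
M_Zpp, G_Zpp = 4000.0, 40.0

def pole_factor_sq(s):
    return abs(breit_wigner(s, M_Zp, G_Zp) - breit_wigner(s, M_Zpp, G_Zpp)) ** 2

on_res = pole_factor_sq(M_Zp ** 2)           # on the Z' pole
off_res = pole_factor_sq((1.5 * M_Zp) ** 2)  # well off resonance
print(on_res / off_res)  # Breit-Wigner enhancement of several orders of magnitude
```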
The relevant partial $Z', Z''$ decay {widths} were given in Table~(1). In addition the $Z',Z''$ can decay into the Dirac sector:
\begin{eqnarray}
\Gamma_{Z'\to D \bar D}= \Theta \cdot \frac{M_{Z'} g^2_D}{12\pi}
\left(1+\frac{2M_{D}^2}{M^2_{Z^{\prime}}}\right)\left(1-\frac{4M_{D}^2}{M^2_{Z^{\prime}}}\right)^{1/2},
\label{aaa}
\end{eqnarray}
where ${\Theta= \Theta(M_{Z'} - 2 M_{D})}$ and $g_D ={g_XQ_X} \cos \theta_{BL}$.
The partial decay width of the $Z^{\prime \prime}$ is obtained with
$M_{Z^{\prime}} \to M_{Z^{\prime\prime}}$ and
$\cos\theta_{BL} \to \sin\theta_{BL}$ in Eq.(\ref{aaa}).
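A small sketch of Eq.(\ref{aaa}) makes the threshold behavior of the step function explicit; the couplings are those quoted in the figure captions, while the masses are illustrative:

```python
import math

def gamma_Zp_to_DD(M_Zp, M_D, g_X=0.1, Q_X=0.5, theta_BL=0.03):
    # Partial width Gamma(Z' -> D Dbar) of Eq. (aaa); the step
    # function makes it vanish below the 2 M_D threshold.
    if M_Zp <= 2.0 * M_D:
        return 0.0
    g_D = g_X * Q_X * math.cos(theta_BL)
    r = M_D**2 / M_Zp**2
    return (M_Zp * g_D**2 / (12.0 * math.pi)
            * (1.0 + 2.0 * r) * math.sqrt(1.0 - 4.0 * r))

print(gamma_Zp_to_DD(250.0, 100.0))  # open channel: positive width
print(gamma_Zp_to_DD(250.0, 130.0))  # below threshold: exactly zero
```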
The relic density can be calculated by integration over the poles.
For the technique of integrating over a pole see \cite{rd,rd1,rd2}.
The relic density for the 2 components of dark matter can be calculated \cite{Feldman:2010wy}
where for the Dirac component
\begin{eqnarray}
\Omega_D h^2 = C_{D} J^{-1}_{D},~~~~
J_{D} = \int_0^{x_{F,D}}\sum_f {\langle\sigma v\rangle}_{D \bar{D} \to \bar f f } ~dx,
~~~~C_{D} = 2\times \frac{1.07 \times 10^{9}~ {\rm GeV^{-1}}}{ \sqrt{g^*}M_{\rm pl}}.
\label{relicdiracabun}
\end{eqnarray}
In Fig.~(\ref{colorful}) we exhibit the satisfaction of the WMAP relic density constraint so that
\begin{equation} R^{\rm St}_{\rm Dirac} \equiv \frac{M_D}{M_{Z'}}\simeq 1/2,\end{equation} where the black bands
in Fig.~(\ref{colorful}) show
a presumed fraction of the total relic abundance.
\begin{figure}[t]
\begin{center}
\includegraphics[width=14cm]{newDiracU}
\caption{An exhibition of the relic density of the Dirac component of dark matter for various values of
$R^{\rm St}_{\rm Dirac}$ which is the ratio of the Dirac dark matter mass to the Stueckelberg $Z'$ mass. The
black bands represent about half the relic abundance.
For the analysis we fix $g_{BL} = 0.35, g_X = 0.1, Q_X = 0.5$. The (blue/darker) curves have the $Z'$ mass running
in the range 200-500 GeV in steps of 100 GeV. We note that
for fixed couplings,
as $M_{Z'}$ gets heavier the curves become more narrow.
The (magenta/lighter) curves correspond to $M_{Z'} = 250 ~\rm GeV$ with $\theta_{BL} = (0.02 -0.05)$.
Similarly, as $\theta_{BL}$ becomes progressively smaller for otherwise fixed couplings and fixed $Z'$ mass, the curves become more narrow.
The right panel is the case when the $Z'$ decays mostly into the hidden sector Dirac fermions, i.e.,
it is the case where $Z' \to D \bar D $ is kinematically allowed and in this case
the dileptonic signals at the LHC will be depleted.
The left panel is the case where $Z' \to D \bar D $ is kinematically disallowed and in this case
the $Z'$ will decay exclusively into the SM particles and thus
the dileptonic signal from the process $pp\to Z'\to l^+l^-$ will be visible.
\label{colorful}
}
\end{center}
\end{figure}
\begin{figure}[htb]\centering
\includegraphics[height=7cm]{curvesXenon}
\caption{ Illustrative curves. At any given point on this plot there exists a funnel where the relic density can
be satisfied for perturbative size coupling via the relic density invariant $R^{\rm St}_{\rm Dirac}$. The particular
values of the parameters on these curves are $(\theta_{BL}=0.001, g_X =0.1, Q_X=1/2)$, $(\theta_{BL}=0.01, g_X =0.5, Q_X=1)$, and $(\theta_{BL}=0.05, g_X =1/2, Q_X=1)$,
where $g_{BL} = 0.35$ and $R^{\rm St}_{\rm Dirac} \sim 1/2$. }
\label{DDM}
\end{figure}
Now, unlike the cases studied previously with the Stueckelberg mass growth,
here the dark Dirac fermion does not carry a milli-charge and is electrically neutral.
The Dirac fermion is neutral because there is no mixing
of the Stueckelberg~ gauge field with the hypercharge vector boson.
Because of its electrical neutrality, and unlike a milli-charged particle, it cannot be stopped by
the atmosphere or by dirt and rock in the Earth before it reaches an underground detector.
The effective Lagrangian describing the scattering of a dark Dirac fermion from a quark, in the limit of
low momentum transfer, is given by ${\mathcal L}_{\rm eff} = C^D_q \bar D \gamma^{\mu} D \bar q \gamma_{\mu} q$.
The corresponding spin independent D-proton cross section is
\begin{eqnarray}
\sigma_{D p}^{SI} =
\frac{\mu^2_{D p}}{ \pi } G^2
\left( \frac{1}{M^4_{Z'}} + \frac{1}{ M^4_{Z''} } -\frac{2}{ M^2_{Z'} M^2_{Z''} } \right ),
\label{dddd}
\end{eqnarray}
where $G = g_{BL}\sin\theta_{BL} g_X Q_X \cos\theta_{BL}$ and $\mu_{Dp}$ is the reduced mass.
\begin{table}[t!]
\begin{center}
\begin{tabular}{ccc}
\hline
$ M_{Z'}$ & $\sigma_{D p}^{SI} ~(\theta_{BL} = (0.03))$ & $\sigma_{D p}^{SI}~(\theta_{BL} = (0.06))$ \\
$ \rm GeV$ & $\rm cm^2$ & $\rm cm^2$ \\\hline
200 & 1.9$\times 10^{-44}$ & 7.5$ \times 10^{-44} $\\
300 & 3.7$\times 10^{-45}$ & 1.5$ \times 10^{-44} $ \\
400 & 1.2$\times 10^{-45}$ & 4.7$ \times 10^{-45}$ \\
500 & 4.8$ \times 10^{-46}$ & 1.9$ \times 10^{-45}$ \\
\end{tabular}
\caption{ Approximate values of the spin independent scattering cross section for the Dirac component of dark matter for sample models.
The second and third columns have $\theta_{BL} = (0.03,0.06)$ respectively. The first row is on the edge of the discovery limits from both
the XENON and Tevatron data and is being probed by the LHC. For a given dark matter mass, $M_{Z'} \sim 2 M_D$ in order to satisfy $(\Omega h^2)_{\rm WMAP}$.
Model parameters are otherwise fixed as in Figure(4). The middle column of this table corresponds to the blue/dark curves in Fig. (7), while the magenta/light
region is found to be constrained by the XENON data. Models consistent with the relic density constraint and the XENON constraint are therefore favored if the relic density
is satisfied closer to the pole which is obtained for relatively smaller coupling and/or larger $M_{Z'}$.}
\end{center}
\end{table}
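The Table entries can be reproduced from Eq.(\ref{dddd}) with the parameters quoted in the figure caption ($g_{BL}=0.35$, $g_X=0.1$, $Q_X=0.5$); in the sketch below the $Z''$ mass is an illustrative multi-TeV value, to which the result is insensitive:

```python
import math

GEV2_TO_CM2 = 3.894e-28  # conversion: 1 GeV^-2 = 3.894e-28 cm^2
m_p = 0.938              # proton mass in GeV

def sigma_SI(M_Zp, theta_BL, M_Zpp=4000.0,
             g_BL=0.35, g_X=0.1, Q_X=0.5):
    # Spin independent D-proton cross section of Eq. (dddd), in cm^2.
    # M_D is set by the relic condition M_Z' ~ 2 M_D; since M_D >> m_p
    # the reduced mass (hence sigma) barely depends on this choice.
    # M_Zpp is an assumed illustrative value, not a prediction.
    M_D = 0.5 * M_Zp
    mu = M_D * m_p / (M_D + m_p)  # reduced mass, ~ m_p
    G = g_BL * math.sin(theta_BL) * g_X * Q_X * math.cos(theta_BL)
    bracket = (1.0 / M_Zp**4 + 1.0 / M_Zpp**4
               - 2.0 / (M_Zp**2 * M_Zpp**2))
    return mu**2 / math.pi * G**2 * bracket * GEV2_TO_CM2

for M in (200.0, 300.0, 400.0, 500.0):
    print(M, sigma_SI(M, 0.03), sigma_SI(M, 0.06))
```

The printed values match the Table entries to within a few percent.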
Interestingly, for mixing of the size considered in Fig.~(\ref{zprime1}), ($\sin \theta_{BL} \in [0.01,0.05]$)
and for natural size couplings
$g_X =g_{BL} =O( g_Y) $ and $Q_X = \pm 1$, one obtains spin independent cross sections
of the size
\begin{equation}
\sigma_{D p}^{SI} \sim 10^{-45\pm 1} { \rm cm^2}, ~~~M_{Z'} \sim (200-300) ~\rm GeV.
\end{equation}
Since $M_D \gg m_p$, we have $\mu_{Dp}\sim m_p$, and so $\sigma_{D p}^{SI}$ is essentially independent
of $M_D$.
However, compatibility with the WMAP data for the
thermal relic density restricts the ratio $R^{\rm St}_{\rm Dirac}\simeq 1/2$.
Using this constraint the spin independent cross section $\sigma_{D p}^{SI}$ for the case
$M^2_{Z''} \gg M^2_{Z'}$ is given by
\begin{eqnarray}
\sigma_{D p}^{SI} \simeq
\frac{\mu^2_{D p}}{ \pi } G^2 \frac{1}{M^4_{Z'}} \simeq \frac{\mu^2_{D p}}{ 16 \pi } G^2 \frac{1}{M^4_{D}} ~,
\end{eqnarray}
which now has a very strong dependence on the Dirac mass.
The numerical size of $\sigma_{D p}^{SI}$ as a function of the Dirac mass is exhibited in Fig.(5), and the
analysis shows that the $\sigma_{D p}^{SI}$ predicted by the model is accessible in the XENON experiment.
In fact for given values of $g_{BL}, \theta_{BL}, g_X Q_X$ the current limits from XENON100 already
put lower limits on the Dirac mass. We can also use the current upper limit on $\sigma^{SI}$ from the
XENON100 experiment which gives $\sigma^{SI}=7\times 10^{-45}$ cm$^2$ for a WIMP mass of 50 GeV,
to put a general constraint on $|G|/M_D^2$ so that
\begin{eqnarray}
|G|/M_D^2 \lesssim 3 \times 10^{-8}~~~~(M_D~{\rm in~GeV}).
\end{eqnarray}
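The quoted bound follows directly from setting the approximate cross section $\sigma_{D p}^{SI}\simeq \mu^2_{Dp} G^2/(16\pi M_D^4)$, with $\mu_{Dp}\simeq m_p$, equal to the XENON100 limit; a numerical check:

```python
import math

GEV2_TO_CM2 = 3.894e-28  # 1 GeV^-2 in cm^2
m_p = 0.938              # GeV; mu_Dp ~ m_p for M_D >> m_p

sigma_limit = 7e-45      # cm^2, XENON100 limit quoted in the text
sigma_gev = sigma_limit / GEV2_TO_CM2

# sigma_SI ~ mu^2 G^2 / (16 pi M_D^4)  =>  bound on |G| / M_D^2
bound = math.sqrt(16.0 * math.pi * sigma_gev / m_p**2)
print(bound)  # ~ 3 x 10^-8 with M_D in GeV, as quoted
```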
We note again that the preceding analysis is very different from the previous Stueckelberg~ analyses where the
Dirac fermion in the hidden sector develops a milli charge. As already pointed out this arises in models
where one mixes
the Stueckelberg~ gauge boson with the hypercharge gauge field. In this case the scattering of the Dirac
fermion from a quark will have not only the $Z'$ pole in the t-channel but also a $Z$ boson pole and
a photon pole.
In the present model the $Z$ and the photon pole are both absent. The Dirac dark matter candidate
is electrically neutral.\\
As mentioned earlier, for $M_{Z'} \sim 2 M_D$, the relic density will always be satisfied for perturbative size couplings.
For $M_{Z'}< 2 M_D$ but close to $2 M_D$ the $Z'$ signal will manifest at colliders and the relic density can also be satisfied.
However, for the case $M_{Z'} > 2 M_D$, while the relic
density can be satisfied, the $Z'$
signal becomes suppressed due to the branching ratio into the hidden sector overtaking that into the visible sector in the presence of mass and kinetic mixings~\cite{diracdarkFLN}.
In addition, the Breit-Wigner enhancement of the annihilation of Dirac particles
in the halo~\cite{Feldman} can be operative very close to the pole
and the following possibilities become simultaneously observable:
\begin{enumerate}
\item Observation of a very light and narrow $Z'$ vector boson in the dilepton channel at the LHC (see also \cite{fln1}).
\item Observation of the flux of positrons via Satellite data (PAMELA/FERMI)~\cite{PAMELA} from the Breit-Wigner Enhancement in the dark matter annihilations in the galactic halo~\cite{Feldman} consistent with WMAP data~\cite{WMAP}.
\item Relic abundance of dark matter split between a neutralino and a dark Dirac fermion (see also \cite{Feldman:2010wy}).
\item Observational prospects for the corresponding dark Dirac component in direct detection experiments such as XENON (analyzed here for the neutral dark Dirac particle via the Stueckelberg~ mechanism).
\end{enumerate}
Let us add that, just recently, results from 730 kg-days of the CRESST-II Dark Matter Search were
released~\cite{Angloher:2011uu}. Two preferred regions
are reported, and one such region appears close to the CoGeNT preferred
region~\cite{Aalseth:2010vx}. Very low mass
neutralino dark matter with MSSM field content and cross sections
of the size needed to explain the CoGeNT are not
consistent with the collider constraints~\cite{DFZLPNLowmass}.
This result has been confirmed by the LHC with
its updated constraints on the SUSY Higgs sector~\cite{MV}, wherein large $\tan \beta$
and low mass SUSY Higgs of the size needed to explain the spin-independent
scattering are further excluded.
The preferred region reported by
CRESST-II with heavier dark matter mass may be accommodated for a thermal relic with relic density
satisfied via the Z-pole in the MSSM. Such a scenario could arise with non-universal gaugino masses
at the high scale (see~\cite{Akula1}), leading to WIMP masses
close to 45~GeV. The far boundary of the CRESST-II $2\sigma$ region terminating
close to 55~GeV may also be
achieved with relic density satisfied via the Higgs pole~(see the analysis of \cite{Feldman:2011me}).
A dedicated analysis with the new constraints on the SUSY Higgs sector from the LHC~\cite{MV} would be
needed to make a more definitive statement; however,
the CRESST-II results at these potential dark matter masses do not correspond to
reported event rates with CDMS or XENON~\cite{Ahmed:2009zw,Aprile:2011hi}.
The extended model class we
discuss can produce spin independent cross sections
larger than those of the MSSM via the Dirac component of dark matter (see Fig.(\ref{DDM})).
\section{Discriminating Stueckelberg~ from Models with Spontaneous Breaking \label{5}}
One may discriminate between the Stueckelberg~ mass growth for a $B-L$ gauge
boson in the models discussed here and other models where the mass growth for
the $B-L$ gauge boson occurs
by spontaneous breaking. In the above, we have already discussed the mass growth of a $B-L$ gauge
boson by the Stueckelberg~ mechanism. For the case when the mass growth occurs via spontaneous breaking
there are two possibilities: (i) spontaneous symmetry breaking of $U(1)_{B-L}$ occurs violating
R-parity invariance, (ii) spontaneous symmetry breaking of $U(1)_{B-L}$ occurs without violating
R-parity invariance. We discuss these two cases below individually.\\
\subsection{Spontaneous Symmetry Breaking of $B-L$ and R-parity Violation}
The simplest example of this is when we consider the superpotential of
Eq.(\ref{w1}). Let us assume that the potential of the $\tilde \nu^c$ field is such that it develops a
VEV. In this case one will have a spontaneous breaking of not only $B-L$ but also of
R-parity as indicated by the term $LH_u \langle \tilde \nu^c \rangle$ in Eq.(\ref{w1}) after $\tilde \nu^c$ develops a VEV.
In the mass diagonal basis it will lead to other R-parity violating terms, i.e., $LLe^c$
and $QLd^c$. Here the LSP is no longer stable and specifically the neutralino cannot
be a dark matter particle. Further, since the neutralino is not stable, the signals of supersymmetry
for this case will be very different at hadron colliders. Specifically, if the neutralino decays
inside the detector, there will be no missing energy signatures, which are the typical
hallmarks of supersymmetry with R-parity conservation. Further, for the case when there is
a spontaneous breaking of R-parity symmetry via the VEV growth of the right handed sneutrino, there
will be D term contributions to the slepton squared masses proportional to $g_{BL}^2 \left< \tilde \nu^c \right>^2$\cite{FP2}. Such terms are
absent for the case when the mass growth for the $B-L$ gauge boson occurs preserving R-parity
invariance as discussed below. \\
\subsection{$B-L$ Models for R-parity Conservation}
We now consider the possibility that $B-L$ symmetry is broken but a residual R-parity symmetry
still persists. This is indeed possible following the general line of reasoning of
\cite{Krauss:1988zc} (see also \cite{Martin:1996kn}).
Thus consider additional fields in the theory such as a vector like
multiplet which has the $SU(3)_C\otimes SU(2)_L\otimes U(1)_Y\otimes U(1)_{B-L}$ quantum numbers as
follows
\begin{eqnarray}
\Phi\sim (1, 1, 0, -Q_{BL}), ~~\bar \Phi \sim (1,1,0, Q_{BL}).
\end{eqnarray}
Let us suppose that one manufactures a potential so that VEV formation for the fields $\Phi$ and
$\bar \Phi$ occurs. In this case $B-L$ will be broken. However, as long as $3(B-L)$ is an even integer,
R-parity will be preserved. This means that the residual theory will have a $Z_2$ R-parity symmetry.
Thus, for example, the VEV formation of a scalar field
with $3(B-L)=\pm 2$ will violate $B-L$ but preserve R-parity.
In the process of the mass growth of the $B-L$ gauge boson, one combination of the imaginary parts of
$\Phi^0$ and $\bar \Phi^0$ will be absorbed, while there would remain three spin zero Higgs fields: two CP even
and one CP odd (the combination of the imaginary parts of $\Phi^0$ and $\bar\Phi^0$ orthogonal to the one
which is absorbed).
In contrast for the $U(1)_{B-L}\otimes U(1)_{X}$ model discussed here,
one is left with only two additional scalars, $\rho_X, \rho_{BL}$, or $\rho, \rho'$, which are both CP even.
Specifically there is no additional CP odd Higgs boson for the Stueckelberg~ models. So this provides a discrimination
between the two models.
There are several interesting and distinguishing features
between the $U(1)_{B-L}\otimes U(1)_{X}$ model and the
$U(1)_{B-L}$ model. This difference can be seen by comparing Eq.(\ref{c1}) vs
Eq.(\ref{c2}). Thus in Eq.(\ref{c1}) one finds that the mass growth of a $B-L$ gauge boson by
spontaneous breaking or by the Stueckelberg~ mechanism
would require the gauge boson to be very heavy. Thus for $g_{BL} \sim 1$,
the mass of the $B-L$ gauge boson will typically be greater than $\sim 6~\rm TeV$~\cite{Carena:2004xs,Strumia}.
In contrast, from Eqs.(\ref{c2}) we find that in the $U(1)_X\otimes U(1)_{B-L}$ model, there are two extra
massive gauge bosons beyond what one has in the Standard Model. Thus the heavier one, i.e.,
the $Z''$ gauge boson, is indeed several TeV in mass. However, the $Z'$ boson we discuss
can be much lighter, and can lie in the
few hundred GeV range. Thus the observation of a low lying $Z'$ with decay branching ratios
characteristic of a $B-L$ gauge boson will be a clear indication of the Stueckelberg~ model involving mixing
of $U(1)_X$ and $U(1)_{B-L}$ discussed here.
\section{Conclusion \label{6}}
In this work we have proposed the Stueckelberg~ mechanism for the mass growth of a $B-L$
gauge boson. {\textit{It was then shown that, under the constraints of charge conservation and the absence of
a Fayet-Iliopoulos D term, R-parity cannot be spontaneously broken in the minimal model of radiative electroweak symmetry breaking}}.
The above is in contrast to models where the mass of the $B-L$ gauge boson is generated by the Higgs mechanism
through the VEV formation for the field $\tilde \nu^c$ which breaks R-parity.
A comparison to the case where the $B-L$ symmetry is spontaneously broken but the R-parity symmetry is preserved
was also given and its distinguishing features from the Stueckelberg~ mass growth for the $B-L$ gauge boson
are uncovered. Further, we analyzed a $U(1)_X\otimes U(1)_{B-L}$ Stueckelberg~ extension of MSSM where a massive
$Z'$ boson with $B-L$ interactions can lie in the sub TeV region, i.e., $M_{Z'}< 1$~TeV.
The observation of a $Z'$ in the sub TeV region with $B-L$ quantum numbers deduced
via branching ratios into charged leptons will provide a test of the
$U(1)_X\otimes U(1)_{B-L}$ Stueckelberg~ extension discussed here.
Other tests of the proposed Stueckelberg~ models were also discussed. This includes an analysis of the
production and decay of the Stueckelberg~ spin 0 boson $\rho$ which has only loop decays into SM final
states via sfermion loops.
An interesting decay of the $\rho$ into $\gamma\gamma$ was analyzed and shown to have
the possibility of observation at the LHC with $\sqrt s=14$ TeV.
With hidden sector Dirac fermions in the $U(1)_X\otimes U(1)_{B-L}$ Stueckelberg~ extension,
two component dark matter manifests, with one component
being either the MSSM neutralino or the Stueckelberg~ neutralino and the other component being a { {\it neutral} } Dirac
fermion. An analysis of the relic density for the Stueckelberg~ neutralino and of the Stueckelberg~ neutralino-proton spin independent cross section was also presented.
An analysis of the
second dark matter component, the Dirac fermion, was also given and it was shown that
the current XENON100 data already puts constraints on the Dirac fermion mass and mixing angles. The constraints
from the XENON100 data and the LHC data on the couplings of the $Z'$ boson and dark Dirac fermion were shown to
be comparable, both of which limit the mixing of the $B-L$ and dark sector.
Thus the proposed model produces LHC and dark matter signals at mass scales
that are accessible to such experiments and will be tested further as the new data comes in.
\\
\noindent
{\bf Acknowledgments:}
P.F.P. would like to thank Northeastern University for hospitality in the beginning of this project.
The work of D.F. is supported by DOE DE-FG02-95ER40899 and by the Michigan Center for Theoretical Physics.
The work of P. F. P. is supported by the James Arthur Fellowship at CCPP-New York University.
The work of P.N. is supported in part by NSF grant PHY-0757959 and PHY-0704067.
D.F. would like to thank CERN Theory Group for their hospitality while this work was nearing completion.
\clearpage
\section{Introduction}
In this paper we study the problem of constructing extremal K\"{a}hler\
metrics on blow ups at finitely many points of K\"{a}hler\ manifolds which
already carry an extremal metric.
In \cite{ca}, \cite{ca2} Calabi has proposed, as best
representatives of a given K\"{a}hler\ class $[\omega]$ of a complex compact
manifold $(M,J)$, a special type of metrics baptized {\em extremal}.
These metrics are critical points of the squared $L^{2}$-norm of the
scalar curvature ${\bf s}$. The corresponding Euler-Lagrange
equation reduces to the fact that
\[
\Xi_{\bf s} : = J \, \nabla {\bf s} + i \, \nabla {\bf s}
\]
is a holomorphic vector field on $M$. In particular, the set of
extremal metrics contains the set of constant scalar curvature K\"{a}hler \
ones. Calabi's intuition of looking at extremal metrics as canonical
representatives of a given K\"{a}hler\ class has found a number of important
confirmations and also (unfortunately) nontrivial constraints.
Calabi himself proved that an extremal K\"ahler metric must have the
maximal possible symmetry allowed by the complex manifold $M$, and,
as observed by LeBrun and Simanca \cite{ls}, this symmetry group
can be fixed in advance. More precisely, the identity component of
the isometry group of any extremal metric $g$ must be a maximal
compact subgroup of $\mbox{Aut}_0(M,J)$, the identity component of
the group $\mbox{Aut} (M,J)$ of biholomorphic maps of $M$ to itself.
This group thus contains the complexification of the isometry group,
but may be strictly larger (the blow-up of ${\mathbb P}^{2}$ at a
point is the simplest example of such a situation). Moreover LeBrun
and Simanca \cite{ls2} have proved that the set of K\"{a}hler\ classes
having an extremal representative is an open subset of $H^{1,1}(M,
{\mathbb C}) \cap H^2 (M, {\mathbb R})$ and Chen and Tian \cite{ct}
have proved the uniqueness of such metrics in a given K\"{a}hler\ class up
to automorphisms. Also, the important relationship between the
existence of extremal metrics and various stability notions of the
corresponding polarized manifolds (algebraic if the class is
rational, analytic otherwise) has been deeply investigated for
example by Tian \cite{ti}, Mabuchi \cite{ma} and Szekelyhidi
\cite{sz}. Yet, a complete understanding of the existence theory for
extremal metrics is still missing. Given this last fact, the first
two authors have started in \cite{ap} and \cite{ap2} to develop a
perturbation theory for constant scalar curvature K\"{a}hler\ metrics,
giving sufficient conditions for the existence of constant scalar
curvature K\"{a}hler\ metrics on the blow up at finitely many points of a
manifold which already carries a constant scalar curvature K\"{a}hler\
metric. The aim of the present paper is to extend these results to
the framework of extremal metrics.
\section{Statement of the result}
Let $(M,J, \omega)$ be a K\"{a}hler\ manifold with complex structure $J$ and
K\"{a}hler\ form $\omega$ and let $g$ denote the metric associated to the
K\"{a}hler\ form $\omega$, so that
\[
\omega (X,Y) = g (J \, X, Y).
\]
Further assume that $g$ is an extremal metric. Since the
automorphism group of any blow up of $M$ can be identified with a
subgroup of $\mbox{Aut}(M,J)$, and in light of the above mentioned
result of Calabi-LeBrun-Simanca about the isometry group of any
extremal metric, our strategy is to fix {\em a priori} a compact
subgroup $K$ of $\mbox{Isom}(M,g)$ and work $K$-equivariantly. Such
a $K$ will then be contained in the isometry group of the extremal
metric we are seeking on the blow up of $M$ at any set of points
$p_1, \ldots, p_n \in M$ in $\mbox{Fix} \, (K_0)$, the fixed locus
of the identity component $K_0$ of $K$.
We will denote by ${\mathfrak k}$ the Lie algebra associated to the
identity component of $K$. Observe that elements of ${\mathfrak k}$ vanish at
the points $p_1, \ldots, p_n$ to be blown up and hence these vector
fields can be lifted to the blown up manifold.
In order to produce extremal metrics on the blown up manifold, we
have to identify, among all $C^{\infty}$ functions on the blown up
manifold, those which generate real-holomorphic vector fields, since
these can arise as scalar curvatures of extremal metrics.
To this aim, we define ${\mathfrak h}$ to be the vector space of
$K$-invariant hamiltonian real-holomorphic vector fields on $M$
or, equivalently, the Lie algebra of the group $H$ of exact symplectomorphisms
commuting with $K$. The
correspondence between real-holomorphic vector fields and the scalar
functions on $M$ can be encoded in a compact way in a moment map
$\xi_\omega$
\[
\xi_\omega \colon M \rightarrow {\mathfrak h}^*
\]
for the action of $H$, uniquely determined by requiring it to have mean
zero. More explicitly, the function $f : = \ip{\xi_\omega , X}$
associated to the vector field $X \in {\mathfrak h}$ is defined to be the
unique solution of
\[
- \mathrm d f = \omega (X, -)
\]
whose mean value over $M$ is $0$.
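As a simple illustration of this normalization (our example, not taken from the text above; the signs and the overall scale depend on the chosen normalization of the Fubini-Study form), consider the circle acting by rotations $z \mapsto e^{i\theta} z$ on ${\mathbb P}^1$, in an affine chart:

```latex
\omega \;=\; \frac{i\,\mathrm d z\wedge \mathrm d \bar z}{(1+|z|^2)^2}\,,
\qquad
X \;=\; i\left(z\,\partial_z-\bar z\,\partial_{\bar z}\right),
\qquad
\omega(X,-) \;=\; -\,\frac{\mathrm d (|z|^2)}{(1+|z|^2)^2}
            \;=\; \mathrm d\!\left(\frac{1}{1+|z|^2}\right) ,
```

so that $- \mathrm d f = \omega(X,-)$ is solved by $f = \tfrac{|z|^2}{1+|z|^2} + c$; since the isometry $z \mapsto 1/z$ shows that the mean of $\tfrac{|z|^2}{1+|z|^2}$ is $\tfrac{1}{2}$, the mean-zero potential is $\ip{\xi_\omega, X} = \tfrac{|z|^2}{1+|z|^2} - \tfrac{1}{2}$.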
Using this setup, our main result reads~:
\begin{teor}
\label{mainthm-2} Let $(M,J,\omega)$ be a compact $m$-dimensional
K\"{a}hler\ manifold whose associated K\"{a}hler\ metric $g$ is extremal, and let $K$ be
a compact subgroup of $\mbox{Isom}(M,g)$ whose Lie algebra contains
the vector field $J \, \nabla {\bf s}$ as well as any element of
${\mathfrak h}$. Let $K_0$ denote the identity component of $K$.
Given $p_1, \dots ,p_n \in \mbox{Fix}\, (K_0)$ and $a_1, \ldots, a_n
>0$ such that $a_{j_1} = a_{j_2}$ if $p_{j_1}$ and $p_{j_2}$ are
in the same $K$-orbit, there exists $\varepsilon_0>0$ and, for all $\varepsilon \in (0, \varepsilon_0)$, there
exists a $K$-invariant extremal K\"{a}hler\ metric $g_\varepsilon$ on $\tilde M$,
the blow up of $M$ at $p_1, \ldots, p_n$, such that its associated K\"{a}hler\ form $\omega_\varepsilon$
lies in the class
\[
\pi^*[\omega] - \varepsilon^{2} \, \left(a_1^{\tfrac{1}{m-1}} \, PD [E_1] +
\ldots + a_n^{\tfrac{1}{m-1}} \, PD \, [E_n] \right)
\]
where $\pi \colon \tilde M \rightarrow M$ is the standard projection
map, the $PD[E_j]$ are the Poincar\'e duals of the $(2m-2)$-homology
classes of the exceptional divisors of the blow up at $p_j$.
Finally, the sequence of metrics $(g_\varepsilon)_\varepsilon$ converges to $g$ (in
the smooth topology) on compact sets away from the exceptional divisors.
\end{teor}
It is important to stress that our analytical construction does not give {\em one}
extremal metric but {\em a family} converging to the starting metric on the base manifold.
For such a construction to work, it is necessary to have
$p_1, \dots ,p_n \in \mbox{Fix}\, (K_0)$ and
$J \, \nabla {\bf s} \in {\mathfrak k}$. On the other hand the condition ${\mathfrak h} \subset {\mathfrak k}$, while often
satisfied in important examples such as toric manifolds with $K$ giving the torus action, is certainly far from being necessary.
We give a simple geometric sufficient condition for it to hold
in Proposition~\ref{fixx}.
In the general case when ${\mathfrak h}$ is not included in ${\mathfrak k}$, there is a
natural decomposition
\[
{\mathfrak h} = {\mathfrak h}' \oplus {\mathfrak h}'' ,
\]
where ${\mathfrak h}' : = {\mathfrak h} \cap {\mathfrak k}$ is the subspace of $K$-invariant
real-holomorphic vector fields in ${\mathfrak k}$. The previous result then
appears as a special case of the following more general one~:
\begin{teor}
\label{mainthm} Assume that $(M, J, \omega)$ is a compact K\"{a}hler\ manifold whose associated K\"{a}hler\ metric $g$ is extremal, and let $K$ be a
compact subgroup of $\mbox{Isom}(M,g)$ whose Lie algebra contains
the vector field $J \, \nabla {\bf s}$. Let $K_0$ denote the identity component of $K$.
We decompose the space of
$K$-invariant hamiltonian real-holomorphic vector fields ${\mathfrak h} =
{\mathfrak h}' \oplus {\mathfrak h}''$ where ${\mathfrak h}' =({\mathfrak h} \cap {\mathfrak k})$. Given $p_1,
\dots ,p_n \in \mbox{Fix} \, (K_0)$ such that~:
\begin{itemize}
\item[(i)] there exist $a_1, \ldots, a_n >0$ satisfying
\[
\sum_{j} a_j \, \xi_\omega (p_j) \in {\mathfrak h}' \,^* ,
\]
and $a_{j_1} = a_{j_2}$ if $p_{j_1}$ and $p_{j_2}$ are
in the same $K$-orbit,
\item[(ii)] the projections of $\xi_\omega (p_1), \ldots,
\xi_\omega (p_n)$ over $ {\mathfrak h}'' \,^*$ span ${\mathfrak h}'' \,^*$,\\[3mm]
\item[(iii)] there is no nontrivial element of ${\mathfrak h}''$ that vanishes at
$p_1, \ldots, p_n$,
\end{itemize} there exists $\varepsilon_0 > 0$ and, for all $\varepsilon \in
(0, \varepsilon_0)$, there exists a $K$-invariant extremal K\"{a}hler\ metric
$g_\varepsilon$ on $\tilde M$, the blow up of $M$ at $p_1, \ldots, p_n$,
whose associated K\"{a}hler\ form $\omega_\varepsilon$ lies in the class
\[
\pi^*[\omega] - \varepsilon^{2} \, \left( a_1^{\tfrac{1}{m-1}} \, PD [E_1] +
\ldots + a_n^{\tfrac{1}{m-1}} \, PD \, [E_n] \right)
\]
where $\pi \colon \tilde M \rightarrow M$ is the standard projection
map, the $PD[E_j]$ are the Poincar\'e duals of the $(2m-2)$-homology
classes of the exceptional divisors of the blow up at $p_j$.
Finally, the sequence of metrics $(g_\varepsilon)_\varepsilon$ converges to $g$ (in
the smooth topology) on compact sets away from the exceptional divisors.
\end{teor}
When ${\mathfrak h} \subset {\mathfrak k}$, ${\mathfrak h}'= {\mathfrak h}$ and
${\mathfrak h}''=\{0\}$, hence (i), (ii) and (iii) become vacuous and
Theorem~\ref{mainthm} reduces to Theorem~\ref{mainthm-2}.
In \S 4 we explain how conditions (i)-(iii) arise in our analytical approach.
\begin{rmk} Condition (iii) can be removed if we leave some freedom
on the weights of the exceptional divisors on the blown up manifold.
More precisely, Theorem~\ref{mainthm} still holds without assuming
(iii) but in this case, the only information we have about
$[\omega_\varepsilon]$ reads
\[
\omega_\varepsilon \in \pi^*[\omega] - \varepsilon^{2} \, \left( \tilde a_1^{\tfrac{1}{m-1}} \, PD
[E_1] + \ldots + \tilde a_n^{\tfrac{1}{m-1}} \, PD \, [E_n] \right)
\]
where $\tilde a_1, \ldots, \tilde a_n >0$ depend on $\varepsilon$ and satisfy
\[
|\tilde a_j - a_j |\leq c\, \varepsilon^{\tfrac{2}{2m+1}}.
\]
In other words, by removing (iii) we slightly lose control of
the K\"{a}hler\ classes.
\end{rmk}
Theorem~\ref{mainthm} is a generalization of the constructions given
in \cite{ap} and \cite{ap2}. Indeed, \cite{ap} treats the
case where $g$ is a constant scalar curvature K\"{a}hler\ metric, $K =
\{Id\}$, and ${\mathfrak h} = \{0\}$, while \cite{ap2} treats the
case where $g$ is a constant scalar curvature K\"{a}hler\ metric, $K$ is a
discrete subgroup of $\mbox{Isom} \, (M,g)$, ${\mathfrak h}' = \{0\}$, and
${\mathfrak h}''$ is not necessarily trivial.
The choice of the symmetry group $K$ is a delicate problem.
Indeed, given the fact that the blown up points have to be chosen in
$\mbox{Fix} \, (K_0)$, it is rather natural to choose $K_0$ to be fairly
small, so that its fixed-point set is large. However, the smaller
$K_0$ is, the larger ${\mathfrak h}$ is, and hence the harder it is to fulfill
the requirement that ${\mathfrak h} \subset {\mathfrak k}$ in Theorem~\ref{mainthm-2},
or conditions (i) and (ii) in Theorem~\ref{mainthm}. Conditions (i)
and (ii) are of course difficult to check, given the fact that the
moment map $\xi_\omega$ is in general hard to analyze. Nevertheless,
there are large classes of manifolds, notably all toric ones, for
which computations can be done. Concrete examples listed in \S 11
will illustrate how delicate this issue is.
\begin{rmk} Conditions (i) and (ii) should be related to Mabuchi's $T$-stability
\cite{ma} and to Szekelyhidi's relative $K$-stability \cite{sz} in the
same way the analogue conditions for constant scalar curvature
metrics are related to the asymptotic Chow semi-stability along the
line of ideas described by Thomas in \cite{th} (pages 27 and 28).
Indeed, if instead of fixing the group $K$ a priori, we fix the set of points $\{p_1,\dots, p_n\}$
to be blown up, a natural choice of $K$ would be any maximal torus in the subgroup
of ${\mbox{Isom}}_0(M,g)$ fixing each $p_j$. We believe that with these choices,
conditions (i) and (ii) should be equivalent to the relative $K$-stability of the
blown up manifold, when
the resulting K\"{a}hler\ classes are rational (which in turn should be equivalent to a relative GIT
stability of the configurations of points in $M^n$ \cite{dv}).
Moreover let us observe that one can apply our construction to {\em any} extremal representative
of the class $[\omega]$. While losing control of the explicit shape of the metric $g_{\varepsilon}$,
this clearly allows one to find families of extremal representatives in $[\omega_{\varepsilon}]$, and gives
some flexibility on the choice of points and weights for which our construction works
{\em for some} representative in $[\omega]$. This flexibility is indeed connected to
the above mentioned stability question, and will be investigated in detail in \cite{aps}.
A first simple appearance of this freedom will be used below in the case of projective spaces.
\label{stopen}
\end{rmk}
If the initial manifold has constant scalar curvature, it might well
be that the extremal metrics we obtain are in fact constant scalar
curvature metrics. There is a simple criterion involving the points
$p_1, \ldots , p_n$ and the parameters $a_1, \ldots, a_n$, which
ensures that this is not the case.
\begin{prop}
Under the assumptions of Theorem~\ref{mainthm} (or
Theorem~\ref{mainthm-2}), if $ \sum_j a_j \, \xi_\omega (p_j) \neq
0$ then the metrics we obtain on $\tilde M$ are extremal with
nonconstant scalar curvature. \label{mainprop}
\end{prop}
We now emphasize the consequences of the above results for
projective spaces and more generally for toric varieties.
When
$(M,\omega)$ is ${\mathbb P}^{m}$ endowed with the K\"{a}hler\ form
$\omega_{FS}$ associated to a Fubini-Study metric, we let $(z^{1},
\ldots, z^{m+1})$ be complex coordinates in ${\mathbb C}^{m+1}$ and {\em let us fix for
the rest of the paper the convention that $[\omega_{FS}] = PD[{\mathbb P}^{m-1}]$,
where ${\mathbb P}^{m-1}\subset {\mathbb P}^{m}$ is a linear subspace}.
This is particularly relevant when getting quantitative estimates on the K\"{a}hler\ classes
reachable by our constructions.
We
consider the group $K: = S^{1} \times \ldots \times S^{1}$, a
maximal torus of $PGL(m+1, {\mathbb C})$, whose action is given by
\[
\begin{array}{cccclllll}
K \times {\mathbb P}^m
& \longrightarrow & {\mathbb P}^m \\[3mm]
\left( (\alpha_1, \ldots, \alpha_{m+1}), [z^{1}, \ldots , z^{m+1}]
\right) & \longmapsto & [\alpha_1 z^{1}, \ldots, \alpha_{m+1}
z^{m+1} ]
\end{array}
\]
and we consider the set of fixed points of $K$
\[
p_1 : = [1: 0 : \ldots : 0], \quad \ldots , \quad p_{m+1}: = [0 :
\ldots : 0 : 1]
\]
In this case, the space ${\mathfrak h}$ is spanned by vector fields of the
form
\[
\Re \, \left( z^{j} \, \partial_{z^j} - z^k \,\partial_{z^k} \right)
\]
and we have ${\mathfrak k} = {\mathfrak h} = {\mathfrak h}'$ and ${\mathfrak h}'' = \{0\}$. As a
consequence of the result of Theorem~\ref{mainthm-2}, we obtain
extremal K\"{a}hler\ metrics on the blow up of ${\mathbb P}^m$ at the points
$p_1, \ldots, p_n$, for any $n=1, \ldots, m+1$.
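Concretely (our computation, with the overall constant $c>0$ and the signs depending on the chosen normalization of $\omega_{FS}$, and with $X_{jk}$ our notation for the generator of the rotation $(z^j, z^k) \mapsto (e^{i\theta} z^j, e^{-i\theta} z^k)$), the potentials of the torus generators can be written down explicitly:

```latex
\ip{\xi_{\omega_{FS}} , X_{jk}}
\;=\; c\,\frac{|z^j|^2 - |z^k|^2}{\|z\|^2}\,,
\qquad
\|z\|^2 := \sum_{l=1}^{m+1} |z^l|^2 ,
```

which has mean zero by the symmetry exchanging $z^j$ and $z^k$. At the fixed points one finds $\ip{\xi_{\omega_{FS}}(p_l), X_{jk}} = c\,(\delta_{jl} - \delta_{kl})$, so that $\sum_l a_l \, \xi_{\omega_{FS}}(p_l)$ pairs with $X_{jk}$ to give $c\,(a_j - a_k)$; in particular this sum vanishes if and only if $a_1 = \ldots = a_{m+1}$, consistently with the criterion of Proposition~\ref{mainprop}.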
It is worth emphasizing that the special structure of the points
which can be blown up on ${\mathbb P}^m$ has its origin in the fact
that we are starting from a specific choice of a Fubini-Study metric
and hence, away from the blow up points, the extremal K\"{a}hler\ form
$\omega_\varepsilon$ is close to $\omega_{FS}$.
This example illustrates well the {\em riemannian} nature of our results
(see Remark ~\ref{stopen}).
Now, if $q_1, \ldots, q_n \in
{\mathbb P}^m$ are linearly independent one can find extremal
metrics on the blow up of ${\mathbb P}^m$ at $q_1, \ldots, q_n$ but
this time the metric will be close to $\psi^* \omega_{FS}$ away from
the blow up points, where $\psi$ is an automorphism of the
projective space such that
\[
\psi (p_j) = q_j .
\]
Yet, since $[\psi^* \omega_{FS}]$ is independent of $\psi$ and of
the choice of the Fubini-Study metric, we have obtained the following {\em K\"{a}hler ian}
version of Theorem ~\ref{mainthm-2} for ${\mathbb P}^m$~:
\begin{corol}
Fix $1 \leq n\leq m+1$. Given linearly independent points $q_1, \ldots, q_n \in {\mathbb P}^m$
and $a_1, \ldots, a_n
>0$, there exists $\varepsilon_0 >0$ and for all $\varepsilon \in (0, \varepsilon_0)$ there
exists an extremal K\"{a}hler\ metric $g _\varepsilon$ on the blow up of ${\mathbb
P}^m$ at $q_1, \ldots, q_n$ whose associated K\"{a}hler\ form $\omega_\varepsilon$ lies in the class
\[
\pi^*[\omega_{FS}] - \varepsilon^{2} \, \left( a_1^{\tfrac{1}{m-1}} \, PD[E_1]
+ \ldots + a_n^{\tfrac{1}{m-1}} \, PD [E_n] \right)
\]
In addition, the K\"{a}hler\ metrics $g_\varepsilon$ do not have constant scalar
curvature unless $n = m+1$ and $a_1 =\ldots = a_{m+1}$.
\label{coproj}
\end{corol}
The fact that the conditions $n = m+1$ and $a_1 =\ldots = a_{m+1}$ are
necessary and sufficient to get
constant scalar curvature metrics among our family of extremal ones fits exactly with
the more familiar picture of the Futaki invariant. Calabi has in fact proved that
an extremal metric has constant scalar curvature iff its Futaki invariant vanishes \cite{ca2},
and we will show in \S 11, using Mabuchi's result \cite{ma1} relating the
Futaki invariant to the
coordinates of the barycenter of the convex polytope of a toric variety, that the above
conditions are indeed equivalent to the vanishing of the
Futaki invariants for blow ups of ${\mathbb P}^m$.
The case corresponding to $n=1$ in Corollary~\ref{coproj} was
already obtained by Calabi in more generality (i.e. for all K\"{a}hler\
classes) \cite{ca} and the case where ${\mathbb P}^m$ is blown up at
$m+1$ linearly independent points $q_1, \ldots, q_{m+1}$ and $a_1
=\ldots = a_{m+1}$ was already studied in \cite{ap2} where constant
scalar curvature metrics were obtained.
In the case where ${\mathbb P}^m$ is blown up at more than $m+1$
points in general position the resulting manifolds do not have
nonzero holomorphic vector fields, hence extremal metrics are forced
to have constant scalar curvature and the existence of some constant
scalar curvature K\"{a}hler\ metrics follows from \cite{ap2} and
\cite{Rol-Sin}.
The previous Corollary can be understood as a special case of
the existence of extremal metrics on the blow up of toric varieties, which in fact
leads to a more general result, as we will see below, even for ${\mathbb P}^m$.
If $(M,J, \omega )$ is an $m$-dimensional toric variety whose associated metric is extremal, one can take
$K$ to be the torus $T^m$ giving the torus action. It then follows from Proposition
~\ref{fixx}
that ${\mathfrak h} = {\mathfrak k}$, the Lie algebra associated to $K$. One can
apply Theorem~\ref{mainthm-2} to get~:
\begin{corol}
Assume that $(M, J, \omega)$ is a toric variety whose associated metric is extremal,
and let $K$ be the torus $T^m$ giving the torus
action. Given $p_1, \ldots, p_n \in \mbox{Fix} \, (K)$ and $a_1,
\ldots, a_n > 0$, there exists $\varepsilon_0 > 0$ and for all $\varepsilon \in (0,
\varepsilon_0)$ there exists an extremal K\"{a}hler\ metric $g_\varepsilon$ on the blow up
of $M$ at $p_1, \ldots, p_n$ whose associated K\"{a}hler\ form $\omega_\varepsilon$ lies in the class
\[
\pi^*[\omega] - \varepsilon^{2} \, \left( a_1^{\tfrac{1}{m-1}} \, PD[E_1] +
\ldots + a_n^{\tfrac{1}{m-1}} \, PD [E_n] \right)
\]
\label{coproj-2}
\end{corol}
In other words, one can blow up {\em any set of points contained in
the fixed-point set of the torus-action} and the weights $a_j
>0$ can be chosen arbitrarily.
Since blowing up a toric variety at such points preserves the toric
structure, one can apply the last Corollary inductively. Therefore,
we obtain extremal metrics on any such iterated blow up. Besides
these applications, this last Corollary can be
applied to any toric K\"{a}hler-Einstein manifold, the classification of
which has been completed in dimensions $m = 2, 3$ and $4$ in
\cite{ma} and \cite{nak1} (all the symmetric examples having been found
by Batyrev and Selivanova \cite{bs}), and to the
one parameter family of extremal metrics found by Calabi on the blow
up of ${\mathbb P}^m$ at one point, producing then a wealth of open subsets
of classes in the K\"{a}hler\ cone which have extremal representatives.
For example our result applied to ${\mathbb P}^{2}$, $ {\mathbb P}^{1}
\times {\mathbb P}^{1}$ and $Bl_p{\mathbb P}^{2}$ as base
manifolds leads to the following:
\begin{corol}
\begin{enumerate}
\item
If $M=Bl_{p_1,p_2}{\mathbb P}^{2}$ then the following K\"{a}hler\ classes have extremal representatives
\[
\pi^*[\omega_{FS}] - \, \left( a_1 \, PD[E_1] + \varepsilon^{2} a_2 \, PD [E_2] \right),
\qquad a_1 < 1
\]
\[
\pi^*[\omega_{FS}] - \, \tfrac{a_1 - \varepsilon^2}{a_1+a_2-\varepsilon^2}\, PD[E_1] -
\tfrac{a_2 - \varepsilon^2}{a_1+a_2-\varepsilon^2}\, \, PD [E_2],
\]
\item
If $M=Bl_{p_1,p_2,p_3}{\mathbb P}^{2}$ and the points do not lie on a complex line, then the following K\"{a}hler\ classes have extremal representatives
\[
\pi^*[\omega_{FS}] - \, \left( a_1\, PD[E_1] + \varepsilon^{2} a_2 \, PD [E_2]
+ \varepsilon^{2} a_3 \, PD [E_3] \right),
\qquad a_1 < 1
\]
\[
\pi^*[\omega_{FS}] - \, \tfrac{a_1 - \varepsilon^2}{a_1+a_2-\varepsilon^2}\, PD[E_1] -
\tfrac{a_2 - \varepsilon^2}{a_1+a_2-\varepsilon^2}\, \, PD [E_2] - \varepsilon^4 a_3PD[E_3],
\]
\[
\pi^*[\omega_{FS}] - \, \tfrac{1 -\varepsilon^2(a_1 +a_2)}{2-\varepsilon^2(a_1+a_2+a_3)}\, PD[E_1] -
\, \tfrac{1 -\varepsilon^2(a_1 +a_3)}{2-\varepsilon^2(a_1+a_2+a_3)}\, PD[E_2] -
\, \tfrac{1 -\varepsilon^2(a_2 +a_3)}{2-\varepsilon^2(a_1+a_2+a_3)}\, PD[E_3]
\]
where $a_j +a_k < 1$ for all $j, k$, and $a_1+a_2+a_3 <2$.\\
\item
If $M=Bl_{p_1,p_2,p_3}{\mathbb P}^{2}$ and the points lie on a complex line, then the
following K\"{a}hler\ classes have extremal representatives
\[
\pi^*[\omega_{FS}] - \, \left( a_1\, PD[E_1] + \varepsilon^{2} a_2 \, PD [E_2]
+ \varepsilon^{4} a_3 \, PD [E_3] \right),
\qquad a_1 < 1
\]
\[
\pi^*[\omega_{FS}] - \varepsilon^2 \, \left( a\, PD[E_1] + a\, PD [E_2]
+ b \, PD [E_3] \right),
\qquad b < a.
\]
\end{enumerate}
\label{claproj}
\end{corol}
\begin{rmk}
This last family of examples is interesting also because it has been shown
by A. Della Vedova \cite{dv}, building on Szekelyhidi's work, that the above K\"{a}hler\ classes
do not have extremal representatives for $b>2a$, thus giving an explicit upper bound
for the range in which our construction can work.
\end{rmk}
In all the above cases, the first families of classes are immediately obtained by our direct construction applied once or twice to $Bl_p{\mathbb P}^{2}$ with a Calabi metric. The other classes are obtained by applying our result either to $ {\mathbb P}^{1}
\times {\mathbb P}^{1}$ with a product of Fubini-Study metrics, or by using some classical algebraic constructions which will be recalled in \S 11.
We should also recall that when we blow up three non-aligned points, the K\"{a}hler\ classes
$ \pi^*[\omega_{FS}] - \, \left( a_1\, PD[E_1] + a_2 \, PD [E_2]
+ a_3 \, PD [E_3] \right)$,
where all the $a_j$ are sufficiently close to $\tfrac{1}{3}$, also have extremal representatives, thanks to the existence of a K\"{a}hler -Einstein metric on the resulting manifold, as shown by Siu-Tian-Yau, and the deformation theory of LeBrun and Simanca \cite{ls} recalled above.
\section{Notation and conventions}
The following conventions are used throughout. If $(M,J)$ is a
complex manifold, we write $\mbox{Aut}(M,J)$ for the group of
biholomorphic maps $M\to M$. If $(M,\omega)$ is a symplectic
manifold, we write $\mbox{Exact} (M,\omega)$ for the group of exact
symplectomorphisms; that is, those that are generated by hamiltonian
vector fields. Finally if $(M,g)$ is a riemannian manifold, we write
$\mbox{Isom} (M,g)$ for the group of isometries of $(M,g)$. We
denote by a subscript $0$ the identity-component of these groups
(even though the group of exact symplectomorphisms is already
connected).
The metric $g$, K\"ahler form $\omega$ and complex structure $J$ are
related by
\begin{equation}\label{e1.25.11.5}
g(JX,Y) = \omega(X,Y),\qquad \mbox{or equivalently} \qquad
\omega(X,JY) = g(X,Y).
\end{equation}
The action of $J$ commutes with the musical isomorphisms, but if
$\alpha$ is a $1$-form and $X$ is a vector field, we have
\begin{equation}\label{e2.25.11.5}
J \, \alpha(X) = - \alpha (J X).
\end{equation}
Then $T^{1,0}$ corresponds to the $+i$-eigenspace of $J$ while
$\Lambda^{1,0}$ corresponds to the $-i$-eigenspace of $J$. In
particular, we have
\begin{equation}
\bar \partial f = \tfrac{1}{2} (\mathrm d f - i J \mathrm d f),\qquad \qquad J\,
\mathrm d f = i(\bar \partial f - \partial f) \label{e3.25.11.5}
\end{equation} and
so on.
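For the reader's convenience, here is the one-line verification of (\ref{e3.25.11.5}) (our check): writing $\mathrm d f = \partial f + \bar \partial f$ and using that $\Lambda^{1,0}$ is the $(-i)$-eigenspace of $J$, so $J \, \partial f = -i \, \partial f$ and $J \, \bar \partial f = i \, \bar \partial f$,

```latex
J\,\mathrm d f \;=\; J(\partial f + \bar\partial f) \;=\; i\,(\bar\partial f - \partial f),
\qquad
\tfrac{1}{2}\,(\mathrm d f - i\, J\,\mathrm d f)
\;=\; \tfrac{1}{2}\,\big(\partial f + \bar\partial f + \bar\partial f - \partial f\big)
\;=\; \bar\partial f \, .
```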
Recall that a vector field $X$ is said to be a {\em hamiltonian
vector field} if there exists a smooth {\em real valued} function
$f$ satisfying
\begin{equation}
\label{e1.26.11.5} X = J \nabla f \, .
\end{equation}
In this case we will write $X =X_f$. Using (\ref{e1.25.11.5}) we see
that this equation is equivalent to
\begin{equation}\label{e2.26.11.5}
\omega \, ( X_f , - ) = - \mathrm d f,
\end{equation}
or, using (\ref{e3.25.11.5}), to
\begin{equation}
\label{eq:ldld}
\tfrac{1}{2} \, \omega ( \Xi_f, -) = - \bar \partial f.
\end{equation}
where~\footnote{To help the reader connect this notation with the
existing literature, let us remark that $\Xi_f = \, 2 \, i \,
\bar{\partial}^{ \uparrow } f = 2 \, i \, \bar{\partial}^{\#} f$,
the $(1,0)$ part of the gradient of $f$, in the notations used in
\cite{ca2} and \cite{ls}.}
\[
\Xi_f := X_f - i J X_f \in T^{1,0}.
\]
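Let us record why (\ref{e2.26.11.5}) and (\ref{eq:ldld}) are equivalent (a direct check, using $\omega(J X_f, -) = - g(X_f, -)$ together with $(J\,\mathrm d f)(Y) = -\mathrm d f(JY) = \omega(X_f, JY) = g(X_f, Y)$ from (\ref{e1.25.11.5}) and (\ref{e2.25.11.5})):

```latex
\omega(\Xi_f, -)
\;=\; \omega(X_f, -) \;-\; i\,\omega(J X_f, -)
\;=\; -\,\mathrm d f \;+\; i\, J\,\mathrm d f
\;=\; -\,2\,\bar\partial f \, ,
```

where the last equality is (\ref{e3.25.11.5}); dividing by $2$ gives (\ref{eq:ldld}).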
Let us now define the second order operator
\begin{equation}\label{e3.26.11.5}
\begin{array}{rccccllll}
P_\omega : & {\mathcal C}^{\infty}(M) & \longrightarrow & \Lambda^{0,1}(M,T^{1,0}),\\[3mm]
& f & \longmapsto & \tfrac{1}{2} \, \bar \partial \, \Xi_f
\end{array}
\end{equation}
so that the null-space of $P_\omega$ (besides the constant functions)
corresponds to holomorphic vector fields with zeros. Observe that
the operator $P_\omega$ depends on the K\"{a}hler\ metric $\omega$. Also,
with this definition, a metric $\omega$ is extremal if and only if
$P_\omega({\bf s} (\omega)) = 0$.
Clearly, any smooth, complex valued function $f$ solution of
\[
P^*_\omega \, P_\omega \, f = 0
\] on $M$ gives rise to a holomorphic vector field $\Xi_f$ defined
by (\ref{eq:ldld}), since integration over $M$ implies that
$\|\bar \partial \, \Xi_f \|_{L^2(M)} =0$. We recall the following
important result which shows that the converse is also true~:
\begin{prop} \cite{ca2}, \cite{ls}
\label{compoten} $\Xi \in T^{1,0}$ is a holomorphic vector field
with zeros if and only if there exists a {\em complex valued}
function $f$ solution of $P^*_\omega \, P_\omega \, f =0$ such
that $\tfrac{1}{2} \, \omega (\Xi, -) = - \bar \partial f$.
\end{prop}
In addition, we have the following result which follows from a
theorem of Lichnerowicz (see Besse \cite{be}, Corollary~2.125 and
\cite{ls})
\begin{prop}
\cite{be}, \cite{ls} A vector field $X$ is a Killing vector field
with zeros if and only if there exists a {\em real valued} function
$f$ solution of $P^*_\omega \, P_\omega \, f =0$ such that $\omega
(X, -) = - \mathrm d f$. \label{killhamilton}
\end{prop}
In other words, if $\Xi$ is a holomorphic vector field and $f$ the
function given in Proposition \ref{compoten}, then $f$ can be chosen
to be real valued when $X = \Re \,\Xi$ is a Killing vector field.
Also, any Killing vector field is automatically real-holomorphic.
Observe that, in particular, if $X$ is a Killing vector field with
zeros, then $\Xi = X -i J X$ is a holomorphic vector field. We
recall the following definition~:
\begin{defin} A vector field $X$ is real-holomorphic if and only if $X - iJX$ is a {\em holomorphic}
section of $T^{1,0}M$.
\end{defin}
\section{Equivariant set-up}
We fix a compact subgroup $K$ of $\mbox{Isom} (M, g)$ and we assume
that the Lie algebra of $K$ contains $X_{\bf s} = J \, \nabla {\bf
s}$. We do not insist that $K$ be connected.
Let us denote by ${\mathfrak h}$ the Lie algebra of real-holomorphic vector
fields which are $K$-invariant and are hamiltonian. Note that, since
$\omega$ is $K$-invariant, $X_{\bf s}$ certainly lies in ${\mathfrak h}$ for
any choice of $K$.
There is a large flexibility in the choice of $K$, ranging between
two extreme cases: if $X_{\bf s}$ happens to generate a closed
subgroup of $\mbox{Isom}_0 (M,g)$, then this will be a
circle-subgroup $S$ contained in the center of $\mbox{Isom}_0 (M,g)$
and one could choose $K=S$; at the opposite extreme, one can
choose $K = \mbox{Isom}_0 (M,g)$.
With a slight abuse of notation, we will identify elements of ${\mathfrak h}$
with the real-holomorphic vector fields corresponding to the
infinitesimal action of $H$ on $M$. For any K\"{a}hler\ metric $\omega$,
denote by $\xi_ \omega$ the moment map for the action of $H$,
uniquely determined by requiring it to have mean zero
\begin{equation}\label{e2.28.11.5}
\xi_\omega : M \to {\mathfrak h}^*
\end{equation}
Recall that this is defined as follows. If $X \in {\mathfrak h}$, then the
function $f = \ip{\xi_\omega, X}$ on $M$ is a hamiltonian for the
vector field $X$, namely, a solution of~:
\begin{equation}
\omega(X, -) = - \mathrm d \, f
\end{equation}
normalized by
\[
\int_M f \, \omega ^m = 0 .
\]
Observe that according to (\ref{eq:ldld}) we also have
\[
\tfrac{1}{2} \, \omega(\Xi , -) = - \bar \partial \, \ip{\xi_\omega ,
X}
\]
where $\Xi = X - i \, J \, X$ is the holomorphic vector field
associated to $X$.
This is just an invariant way of introducing the potentials
corresponding to hamiltonian, real-holomorphic vector fields with
zeros.
\begin{rmk}
Notice that as the K\"ahler form varies (among $K$-invariant forms)
the moment map varies. In the next section, we will explicitly study
the dependence of $\xi_\omega$ on $\omega$.
\end{rmk}
For the blow-up problem, we note that a vector field $X$ lifts to
$\tilde M$, the blow up of $M$ at $p_1, \ldots, p_n$, if and only if
it vanishes at each of the points $p_j$. If we have fixed the
isometry group to contain $K$, it follows that we only stand a
chance of blowing up points which are fixed by every element of $K$.
So, we suppose that
\begin{equation}
\mbox{For all $j=1, \ldots, n$, $p_j \in \mbox{Fix} \, (K)$}.
\end{equation}
Now, if $\tilde{\omega}$ is a putative extremal K\"ahler metric on
$\tilde{M}$, its scalar curvature must be a sum of $K$-invariant
potentials corresponding to vector fields that vanish at the $p_j$
and are $K$-invariant, hence to vector fields which are in ${\mathfrak h}'$.
Thus we introduce the Lie algebra ${\mathfrak h}'$ given by
\[
{\mathfrak h} ' = {\mathfrak k} \cap {\mathfrak h}
\]
We denote by ${\mathfrak h}''$ the orthogonal complement of ${\mathfrak h}'$ in ${\mathfrak h}$
with respect to the scalar product
\[
(X, \tilde X)_{\mathfrak h} : = \int_M \ip{\xi_\omega , X} \,
\ip{\xi_\omega , \tilde X} \, dvol_g.
\]
Informally, potentials of the form $\ip{ \xi_\omega , X' }$ (for $X'
\in {\mathfrak h}'$) will be the {\em good potentials} corresponding to vector
fields vanishing at the $p_j$; hence, once lifted to $\tilde{M}$,
they can be used to deform the scalar curvature of the K\"{a}hler\ form
$\tilde{\omega}$. The potentials $\ip{\xi_\omega , X''}$ (for $X''$
\in {\mathfrak h}''$) will be the {\em bad potentials} corresponding to
vector fields that do not necessarily lift to $\tilde{M}$ but, in
any case, these are potentials that will not be used in the
deformation of the scalar curvature of $\tilde{\omega}$.
To apply a perturbation argument, as in \cite{ap2}, we shall need to
solve two linear problems. First we need to find a function
$\Gamma$, a constant $\lambda$ and a vector field $Y' \in {\mathfrak h}'$
solutions of
\begin{equation}
\tfrac{1}{2} \, P^*_\omega \, P_\omega \Gamma + \ip{\xi_\omega ,
Y'} + \lambda = c_m \, \sum_{j=1}^n a_j \, \delta_{p_j}
\end{equation}
where the masses $a_j$ are positive and $c_m >0$ is a positive
constant only depending on the dimension $m$. The solvability of
this problem comes down to the {\em relative moment condition}~:
\begin{equation}
\sum_{j=1}^n a_j \, \xi_\omega (p_j) \in {\mathfrak h}' \,^* \mbox{ for some
}a_j>0
\end{equation}
Using this, we consider a first perturbation of $\omega$, away
from the points to be blown up. This perturbed K\"{a}hler\ form
is given explicitly by
\[
\hat \omega_\varepsilon : = \omega + i \, \partial \, \bar \partial ( \varepsilon^{2m-2} \,
\Gamma )
\]
where $\varepsilon >0 $ is a small parameter. This K\"{a}hler\ form is well defined
away from the points $p_j$ (provided $\varepsilon$ is chosen small enough)
and, as will follow from the analysis in the next section, has
scalar curvature given by
\[
{\bf s} (\hat \omega_\varepsilon) = {\bf s} (\omega ) + \varepsilon^{2m-2} \, \left(
\ip{\xi_{\omega + i \, \partial \, \bar \partial ( \varepsilon^{2m-2} \, \Gamma )} ,
Y'} + \lambda \right) + {\mathcal O} (\varepsilon^{4m-2})
\]
The final task will be to perturb this K\"{a}hler\ metric into an extremal
metric. To this aim, given any (smooth) function $f$, we need to be
able to find a function $\phi$, a constant $\nu$, a vector field $Z'
\in {\mathfrak h}'$ and masses $b_j \in {\mathbb R}$ solutions of
\begin{equation}
\tfrac{1}{2} \, P^*_\omega \, P_\omega \, \phi + \nu +
\ip{\xi_\omega , Z'} + c_m \, \sum_{j=1}^n b_j \, \delta_{p_j} = f
\end{equation}
The solvability of this problem is precisely equivalent to the {\em
genericity condition}~:
\begin{equation}
\mbox{The projections of }\xi_\omega (p_1), \ldots, \xi_\omega
(p_n)\mbox{ in }{\mathfrak h}'' \,^* \mbox{ span }{\mathfrak h}'' \,^*.
\end{equation}
The core of the paper is to show that these conditions are indeed
{\em sufficient} conditions to guarantee the existence of extremal
metrics in the appropriate classes.
\section{Linear operators}
\noindent The linearization of the mapping
\begin{equation}
\label{e5.25.11.5} f\longmapsto {\bf s} (\omega + i\partial \bar \partial f)
\end{equation}
is given by the formula
\begin{equation}\label{e6.25.11.5}
{\mathbb L} : = - \tfrac{1}{2} \, \Delta^{2}_g - \mbox{Ric}_g
\cdot \nabla^{2}_g
\end{equation}
where $\mbox{Ric}_g$ stands for the Ricci tensor of the metric $g$
associated to $\omega$. On the other hand,
\begin{equation}\label{e7.25.11.5}
P^*_\omega \, P_\omega = \Delta^{2}_g + 2 \, \mbox{Ric}_g \cdot
\nabla^{2}_g - J \, X_{\bf s} + i X_{\bf s} .
\end{equation}
where $P$ is the operator defined in \eqref{e3.26.11.5} and $X_{\bf
s}$ is the hamiltonian vector field associated with ${\bf s}$.
Hence we have~:
\begin{equation} \label{e12.26.11.5} {\mathbb L} =
-\tfrac{1}{2} P^*_\omega \, P_\omega - \tfrac{1}{2} \, J \,
X_{\bf s} + \tfrac{i}{2} \, X_{\bf s}
\end{equation}
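As a consistency check (our verification), \eqref{e12.26.11.5} follows at once by combining \eqref{e6.25.11.5} and \eqref{e7.25.11.5}:

```latex
-\tfrac{1}{2}\, P^*_\omega \, P_\omega
\;=\; -\tfrac{1}{2}\,\Delta^{2}_g \;-\; \mbox{Ric}_g \cdot \nabla^{2}_g
      \;+\; \tfrac{1}{2}\, J\, X_{\bf s} \;-\; \tfrac{i}{2}\, X_{\bf s}
\;=\; {\mathbb L} \;+\; \tfrac{1}{2}\, J\, X_{\bf s} \;-\; \tfrac{i}{2}\, X_{\bf s}\, ,
```

which rearranges to \eqref{e12.26.11.5}.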
Working equivariantly with respect to a compact group $K$ whose Lie
algebra contains $X_{\bf s}$ has the important effect of making the
last term in \eqref{e12.26.11.5} disappear, leaving a real
operator on $K$-invariant functions.
Consider the map
\begin{equation}\label{e11.26.11.5}
\begin{array}{rcccc}
F : & {\mathfrak h} \times {\mathcal C}^{\infty}(M)^{K} & \longrightarrow & {\mathcal C}^{\infty}(M)^{K}, \\[3mm]
& (X, f) & \longmapsto & {\bf s} (\omega + i \partial \bar \partial f) - \ip{ \xi_{\omega
+i\partial \bar \partial f} , X }.
\end{array}
\end{equation}
Here the superscripts $K$ denote the $K$-invariant part of the
function space.
The following is due to Calabi and LeBrun--Simanca.
\begin{prop}
\label{linear} Assume that $\omega$ is extremal and $ X_{\bf s} \in
{\mathfrak h}$, then $D_f F|_{(X_{\bf s},0)}$, the linearization of $F$ with
respect to $f$ at $(X_{\bf s} ,0)$, is equal to $- \tfrac{1}{2}\,
P^*_\omega \, P_\omega$.
\end{prop}
\begin{proof} We already know the linearization of the scalar
curvature map, so we only need to know the linearization of
\[
f \longmapsto \xi_{\omega + i \partial \bar \partial f}
\]
with respect to $f$. Take any $X \in {\mathfrak h}$. Since $f$ is
$K$-invariant, $X$ is a Killing vector field (with zeros) for the
K\"{a}hler\ form $\omega + i\partial \bar \partial f$. Hence, using the analysis of
\S 4, we can write
\[
\tfrac{1}{2} \, (\omega + i \partial \bar \partial f)(\Xi , -) = - \bar
\partial \ip{\xi_{\omega + i\partial \bar \partial f}, X}
\]
where $\Xi := X - i\, J \, X$, and we see immediately that
$\dot{\xi}$, the first variation of $f \longmapsto \xi_{\omega + i
\partial \bar \partial f}$ with respect to $f$ computed at $f=0$, satisfies
\[
\tfrac{i}{2} \, \partial \bar \partial \, f(\Xi, -) = - \bar \partial \,
\ip{\dot{\xi} , X}.
\]
Working in local coordinates, the left hand side of this expression
is equal to
\[
\tfrac{i}{2}\, \mathrm d \bar z^{j} \tfrac{\partial}{\partial \bar z^j}
\left(\Xi^k\tfrac{\partial f}{\partial z^k} \right)
\]
because $\Xi$ is holomorphic. Hence we see that
\begin{equation}\label{e5.26.11.5}
\ip{\dot{\xi} , X} = - \tfrac{i}{2}\, \Xi \, f.
\end{equation}
Now, we apply this analysis when $\omega$ is extremal, with extremal
vector field $X_{\bf s} \in {\mathfrak h}$. We obtain for any smooth function
$f$
\[
D_f F|_{(X_{\bf s}, 0)}( f ) = {\mathbb L} f + \tfrac{i}{2}\,
\Xi_{\bf s} \, f \qquad \qquad \mbox{ with } \qquad \qquad \Xi_{\bf
s} : = X_{\bf s} - i \, J \, X_{\bf s}.
\]
Hence
\begin{equation}
D_f F|_{(X_{\bf s}, 0)}( f ) = -\tfrac{1}{2} \, P^*_\omega \,
P_\omega f - \tfrac{1}{2} \, J \, X_{\bf s} \, f + \tfrac{i}{2} \,
X_{\bf s} \, f + \tfrac{i}{2}\, \Xi_{\bf s} \, f = -
\tfrac{1}{2} \, P^*_\omega \, P_\omega \,f + i \, X_{\bf s} \, f.
\end{equation}
Remembering that when $f$ is $K$-invariant and $X_{\bf s} \in {\mathfrak k}$,
we have
\[
X_{\bf s} f =0
\]
we conclude that $D_f F|_{(X_{\bf s}, 0)}( f ) = - \tfrac{1}{2} \,
P^*_\omega \, P_\omega \,f $. This completes the proof.
\end{proof}
\section{Burns-Simanca's metric on the blow up of ${\mathbb C}^m$ at the origin}
We describe a scalar flat K\"{a}hler\ form $\eta$ defined on $\tilde{\mathbb
C}^m$, the blow up at the origin of ${\mathbb{C}}^m$. This metric is $U(m)$
invariant and was found by Burns \cite{leb}, when $m=2$, and Simanca \cite{sim}, when $m\geq 3$, following a method introduced in \cite{ca}.
Away from the exceptional divisor, the K\"{a}hler \ form $\eta$
is given by
\[
\eta = i \, \partial \, \bar \partial E_m (v)
\]
where $v = (v^{1}, \ldots, v^m)$ are complex coordinates in
${\mathbb C}^m \setminus \{ 0 \}$ and where the function $E_m$ is
explicitly given, in dimension $m=2$, by
\[
E_2 (v) : = \tfrac{1}{2} \, |v|^2 + \log |v|^2
\]
while in dimension $m \geq 3$, even though there is no explicit
formula for $E_m$, we have the following expansion
\[
E_m (v) = \tfrac{1}{2} \, |v|^2 - |v|^{4-2m} + {\mathcal O} (
|v|^{2-2m} )
\]
as $|v|$ tends to $\infty$. Observe that $E_m$ is defined up to a
constant. Details can be found in \cite{ca}, \cite{sim} or \cite{ap2}.
It is important to stress that the metric $\eta$ is defined in terms
of a choice of coordinates $(v^{1},\dots,v^{m})$, and any choice of
local coordinates around the point $p_j$ gives rise to a preferred
$\eta$. On the other hand the geometry of extremal metrics, and in
particular our choice of the group $K$, points to a preferred choice
as we will see in the next section.
\section{$K$-invariance and extensions on the blow up}
We discuss the crucial question of the lifting of objects (such as
the action of $K$, holomorphic vector fields, associated
potential,\ldots) all of which are defined on $M$, to the blown up
manifold.
Recall that, blowing up a $m$-dimensional complex manifold at a
point can be understood as a connected sum construction which can be
performed by excising a small ball in complex normal coordinates
around the point we want to blow up and replacing it by a large
neighborhood of the exceptional divisor in $\tilde \mathbb{C}^m$, the blow
up of $\mathbb{C}^m$ at the origin, keeping some compatibility between
metrics and complex structures on the different summands.
Now, $M$ is endowed with a $K$ invariant K\"{a}hler\ form $\omega$ and
$\tilde \mathbb{C}^m$ will be equipped with a suitable multiple of the
scalar flat K\"{a}hler\ form $\eta$ defined in the previous section. Since
we want the action of the group $K$ to lift to an isometric action
also for the new metrics we have to impose the condition that $K
\subset U(m)$ on the neighborhood of the point $p$ which will be
blown up, since $U(m)$ is the isometry group of Burns-Simanca's metric
$\eta$. This is accomplished by linearizing on a small neighborhood
of $p$ the action of $K$, which is ensured by the following
classical result \cite{bm}~:
\begin{prop}
Let $D$ be a domain of a complex manifold and $G \subset \mbox{Aut}
\, (D, J)$ be a compact subgroup with a fixed point $p \in D$. In a
neighborhood of $p$, there exist complex coordinates centered at $p$
such that in these coordinates the action of $G$ is given by linear
transformations. \label{prop:link}
\end{prop}
We will refer to these coordinates as {\em $G$-linear
coordinates}. The following proposition, proved in \cite{ap2}, shows that one can find
$G$-linear coordinates which are also normal coordinates about $p$,
a fixed point of $G$.
\begin{prop}
Assume that $G \subset \mbox{Isom} \, (M,\omega)$ is compact. Then
there exist $(z^{1}, \ldots , z^{m})$, $G$-linear coordinates
centered at $p \in \mbox{Fix} \, (G)$ such that
\[
\omega = i \, \partial \bar \partial ( \tfrac{1}{2} \, |z|^2 + \varphi )
\]
where the function $\varphi$ is $G$ invariant and $\varphi =
{\mathcal O}(|z|^{4})$. \label{masterpiece}
\end{prop}
In our construction of extremal K\"{a}hler\ metrics on blow ups, this
proposition will be used in the following way. We apply the previous
result to $G=K$ close to a fixed point $p \in \mbox{Fix} \, (K)$. We
obtain normal coordinates, in $D$ a neighborhood of $p$ for which
the action of $K$ is linear. Given $X \in {\mathfrak k}$ a Killing vector
field vanishing at a point $p$ we can lift this vector field as a
vector field $\tilde X$ on $\tilde D$ the blow up of $D$ at the
point $p$. If $D$ is endowed with $\alpha \, \eta$, a multiple of
Burns-Simanca's metric, then $\tilde X$ will still be a Killing vector
field of $(\tilde D, J, \alpha \, \eta)$. In addition, $\tilde X$
still vanishes at some point on the exceptional divisor over $p$ as
is shown in the following~:
\begin{prop}
\label{vanlift} Let $X$ be a real-holomorphic vector field on $M$,
and let $p$ be any point in $M$ such that $X(p)=0$. We denote by
$\tilde{X}$ the lift of $X$ to the blow up of $M$ at $p$. Then,
there exists a point $q$ on the exceptional divisor over $p$ such
that $\tilde{X}(q) = 0$.
\end{prop}
\begin{proof}
For simplicity, we give the proof in the case where $m=2$. Given $z
: = (z^{1},z^{2})$ complex coordinates centered at $p$, we write
\[
X -iJX = X^{1} \, \partial_{z^{1}} + X^{2} \, \partial_{z^{2}},
\]
with $X^{i}(0)=0$. Let $(u^{1}, u^{2})$ be complex coordinates on
$\tilde \mathbb{C}^{2}$ such that $z^{1} = u^{1} \, u^{2}$ and $z^{2} =
u^{2}$, covering an affine chart of the exceptional divisor over
$p$. Then, $\tilde{X} - iJ\tilde{X}$, the lift of the vector field
$X-iJX$, is given by
\[
\tilde{X} - iJ\tilde{X} = \tfrac{z^{2} \, X^{1} - z^{1}
X^{2}}{(z^{2})^2} \,
\partial_{u^{1}} + X^{2}\partial_{u^{2}}.
\]
We can always write
\[
X^{1} = a \, z^{1} + b \, z^{2} + {\mathcal O} (|z|^2), \qquad
\mbox{and} \qquad X^{2} = c \, z^{1} + d \, z^{2} + {\mathcal O}
(|z|^2)
\]
for $z$ close to $0$.
We consider the point of the exceptional divisor corresponding to
the line $z^{1} = \lambda \, z^{2}$ (i.e. $(u^{1},
u^{2})=(\lambda,0)$). Obviously, we have
\[
\lim_{z^{2}\rightarrow 0} X^{2}(\lambda \, z^{2}, z^{2}) = 0
\]
for all $\lambda \in {\mathbb C}$ and
\[
\lim_{z^{2} \rightarrow 0} \tfrac{z^{2} \, X^{1} - z^{1}
X^{2}}{(z^{2})^2} \,(\lambda \, z^{2}, z^{2}) = - c\lambda^{2} +
(a-d)\lambda + b .
\]
Unless $c=0$, $a=d$ and $b\neq 0$, in which case the point at infinity of
$\mathbb{C} =\{\lambda\}$ annihilates $\tilde{X}$, the equation $ -
c\lambda^{2} + (a-d)\lambda + b =0$ always has a root $\lambda$,
which corresponds to a zero of $\tilde X$.
\end{proof}
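The limit computed at the end of the proof can be double-checked symbolically. The following sketch (using sympy, an assumption external to the paper) restricts the linear parts $X^{1} = a\,z^{1} + b\,z^{2}$ and $X^{2} = c\,z^{1} + d\,z^{2}$ to the line $z^{1} = \lambda \, z^{2}$ and recovers the displayed quadratic in $\lambda$:

```python
import sympy as sp

z2, lam = sp.symbols('z2 lam')
a, b, c, d = sp.symbols('a b c d')

# Restrict to the line z1 = lam * z2 approaching the exceptional divisor
z1 = lam * z2
X1 = a * z1 + b * z2   # linear part of the component X^1
X2 = c * z1 + d * z2   # linear part of the component X^2

# u^1-component of the lifted field: (z2*X1 - z1*X2)/(z2)^2
comp = sp.cancel((z2 * X1 - z1 * X2) / z2**2)

# The z2 dependence cancels, leaving -c*lam^2 + (a - d)*lam + b
assert sp.expand(comp - (-c * lam**2 + (a - d) * lam + b)) == 0
print(sp.expand(comp))
```

Since the result is independent of $z^{2}$, the limit exists for every $\lambda$, as claimed.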
Let us end this section with a simple but useful result. Since the obstruction
to our construction being successful is essentially contained in ${\mathfrak h}''$, it is interesting to
have an efficient geometric criterion which implies that ${\mathfrak h}''=0$.
To state this condition precisely, let us fix $p\in M$ and denote by
$\rho \colon K \rightarrow Gl(T_pM)$
the representation of $K$ induced on $T_pM$ by the action of $K$ on $M$.
\begin{prop}
If there exists $p\in \mbox{Fix} \, (K)$ such that the maximal torus $T_{{\mathfrak k}}$ of
$\rho(K)$ has dimension equal to $\dim_{\mathbb{C}}M$, then ${\mathfrak h}''=0$.
\label{fixx}
\end{prop}
\begin{proof}
We need to show that ${\mathfrak h} \subset {\mathfrak k}$. If $p\in \mbox{Fix} \, (K)$ we can use Proposition
\ref{prop:link} to linearize the action of
$T_{{\mathfrak k}}^{\mathbb{C}}$ near $p$ in such a way that $T_{{\mathfrak k}}$ acts by
$z_j \mapsto e^{i\theta_j} z_j$ in suitable complex coordinates on $T_pM$.
The condition $\dim_{\mathbb{R}}T_{{\mathfrak k}} = \dim_{\mathbb{C}}M$ then implies that $\theta_j \neq \theta_l$ for $j\neq l$. This immediately implies that the elements of ${\mathfrak h}$ are also diagonal, and hence
the result follows.
\end{proof}
Observe that the above condition is easily satisfied by any toric manifold with $K$ the
maximal compact torus giving the torus action.
\section{Mapping properties}
For all $r > 0$, we agree that
\[
B_r : = \{ z \in {\mathbb C}^m \quad : \quad |z| < r \},
\]
denotes the open ball of radius $r >0$ in ${\mathbb C}^m$, $\bar
B_r$ denotes the corresponding closed ball and
\[
\bar B_r^* : = \bar B_r -\{0\}
\]
the punctured closed ball. We will also define
\[
C_r : = {\mathbb C}^m - \bar B_r \qquad \mbox{and} \qquad \bar C_r
: = {\mathbb C}^m - B_r
\]
to be respectively the complements in ${\mathbb C}^m$ of the closed
ball and of the open ball of radius $r >0$.
\subsection{Operators defined on $M - \{p_1, \ldots, p_n\}$}
Assume that we are given $n$ distinct points $p_1, \ldots, p_n \in
M$. For each $j = 1, \ldots, n$, we can choose complex coordinates
$z : = (z^{1}, \ldots, z^m)$ in a neighborhood of $0$ in ${\mathbb
C}^m$, to parameterize a geodesic ball of radius $r$ centered at
$p_{j}$ in $M$. Furthermore, as explained in the previous section,
these coordinates can be chosen to be normal at $p_j$ and to be
$K$-linear. In order to distinguish between the different
neighborhoods and coordinate systems, we agree that, for all $r$
small enough, say $r \in (0, r_0)$, $B_{j , r}$ (resp. $\bar B_{j,
r}$ and $\bar B_{j , r}^*$) denotes the open ball (resp. the closed
ball and the closed punctured ball) of radius $r$ in the coordinates $z$
parameterizing a fixed neighborhood of $p_j$.
We fix $r_0$ small enough so that $\bar B_{j,r}$ are disjoint for
all $r \leq 4 \, r_0$. We set
\[
\bar M_{r} : = M - \cup_{j} B_{j,r}
\]
The weighted space for functions defined on the noncompact manifold
\begin{equation}
M^* : = M - \{ p_{1}, \ldots , p_{n} \} \label{eq:MP0}
\end{equation}
is then defined as the set of functions whose decay or blow up near
any $p_{j}$ is controlled by a power of the distance to $p_{j}$.
More precisely, we have the~:
\begin{dfn}
Given $\ell \in {\mathbb N}$, $\alpha \in (0,1)$ and $\delta \in
{\mathbb R}$, we define the weighted space ${\mathcal C}^{\ell ,
\alpha}_{\delta} (M^*)$ to be the space of functions $f \in
{{\mathcal C}}^{\ell , \alpha}_{loc} (M^*)$ for which the following
norm is finite
\[
\| f \|_{{{\mathcal C}}^{\ell, \alpha}_{\delta} (M^*)} : = \| f
\|_{{{\mathcal C}}^{\ell, \alpha} (\bar M_{r_0})} + \sum_{j=1}^n \,
\left( \sup_{r\leq r_0} \, \left( r^{-\delta} \, \| f \,
_{|_{B_{j,r_0}}} (r\, \cdot \, ) \|_{{{\mathcal C}}^{\ell, \alpha}
(\bar B_2 - B_1 )} \right) \right).
\]
\label{de:MP1}
\end{dfn}
We are interested in the mapping properties of the operator
\[
L_\omega : = - \tfrac{1}{2} P^*_\omega \, P_\omega
\]
which has been defined in \S 6. We define in $\bar B^*_{j,r_0}$ the
function $G_j$ by
\[
G_j (z) = - \log \, |z|^2 \qquad \mbox{ when $m=2$ \qquad and}
\qquad G_j (z) = |z|^{4-2m} \qquad \mbox{ when $m\geq 3$}.
\]
Observe that, unless the metric $\omega$ is the Euclidean metric,
these functions are not solutions of the homogeneous equation
associated to $L_\omega$; however, they can be perturbed into
solutions $\tilde G_j$ of the homogeneous problem $L_\omega \, \tilde G_j
=0$. Indeed, reducing $r_0$ if necessary, we know from
\cite{ap2} that there exist functions $\tilde G_j$ which are
solutions of $L_\omega \, \tilde G_j =0$ in $B_{j,r_0}^*$ and which
are asymptotic to $G_j$ in the sense that $\tilde G_j- G_j \in
{\mathcal C}^{4, \alpha}_{6-2m} (\bar B^*_{j,r_0})$ when $m \geq 4$
and $\tilde G_j - G_j \in {\mathcal C}^{4, \alpha}_{\delta} (\bar
B^*_{j,r_0})$ for any $\delta < 6-2m$ when $ m = 2 ,3$. The rationale
behind these constructions is that the K\"{a}hler\ metric $\omega$ osculates
the Euclidean metric to order $2$ and hence
\[
L_\omega \, G_j \in {\mathcal C}^{0, \alpha}_{2-2m} (\bar
B_{j,r_0}^*).
\]
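For the Euclidean metric, on the other hand, the functions $G_j$ are bi-harmonic, which is the starting point of this estimate. A quick symbolic check, a sketch only using sympy (an assumption external to the paper) and the radial form of the Euclidean Laplacian $\Delta f = f'' + (n-1)\,f'/r$ in real dimension $n = 2m$:

```python
import sympy as sp

r, m = sp.symbols('r m', positive=True)

def radial_laplacian(f, n):
    # Euclidean Laplacian of a radial function f(r) in real dimension n
    return sp.diff(f, r, 2) + (n - 1) * sp.diff(f, r) / r

n = 2 * m                      # real dimension of C^m
G = r**(4 - 2 * m)             # G_j(z) = |z|^{4-2m}, case m >= 3
assert sp.simplify(radial_laplacian(radial_laplacian(G, n), n)) == 0

G2 = -sp.log(r**2)             # G_j(z) = -log|z|^2, case m = 2
assert sp.simplify(radial_laplacian(radial_laplacian(G2, 4), 4)) == 0
print("Delta^2 G_j = 0 for the Euclidean metric")
```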
With the functions $\tilde G_j$ at hand, we define the {\em
deficiency spaces}
\[
{\mathcal D}_0 : = \mbox{Span} \{ \chi_1 , \ldots, \chi_n \} ,
\qquad \mbox{and} \qquad {\mathcal D}_1 : = \mbox{Span} \{ \chi_1 \,
\tilde G_1 , \ldots, \chi_n \, \tilde G_n \} ,
\]
where $\chi_j$ is a cutoff function which is identically equal to
$1$ in $B_{j,r_0/2}$ and identically equal to $0$ in $M-B_{j,r_0 }
$. Furthermore, we assume that $\chi_j$ is $K$-invariant.
When $m \geq 3$, we fix $\delta \in (4-2m, 0)$ and when $m = 2$ we
choose $\delta \in (0,1)$. We define the operator
\[
\begin{array}{rclcllll}
{\mathcal L}_\delta : & ( {{\mathcal C}}^{4, \alpha}_{\delta}
(M^*)^K \oplus {\mathcal D} ) \times {\mathfrak h}' \times {\mathbb
R}
& \longrightarrow & {{\mathcal C}}^{0,\alpha}_{\delta - 4} (M^*)^K \\[3mm]
& (f, X' , \mu ) & \longmapsto & - \tfrac{1}{2} \, P^*_\omega \,
P_\omega f - \ip{\xi_\omega, X'} - \mu ,
\end{array}
\]
where ${\mathcal D} = {\mathcal D}_1$ when $m \geq 3$ and ${\mathcal
D} = {\mathcal D}_0 \oplus {\mathcal D}_1$ when $m=2$. The main
result of this section reads~:
\begin{prop}
Assume that the points $p_1, \ldots, p_n \in M$ are chosen so that
the projections of $\xi_\omega (p_1), \ldots, \xi_\omega (p_n)$ in
${\mathfrak h}'' \,^*$ span ${\mathfrak h}'' \,^*$. Then, the operator ${\mathcal L}_\delta$ defined above is
surjective and has a kernel of dimension $ n + 1 + \mbox{dim} \,
{\mathfrak h}'$. \label{pr:f-5.1}
\end{prop}
\begin{proof} The proof of this result follows from the general
theorem described in \cite{Mel} and \cite{Maz}; nevertheless, we
choose here to describe an (almost) self-contained proof. Recall
that, when working equivariantly with respect to a group $K$, the
kernel of $L$ is spanned by the functions of the form $\ip{\xi, X}
+\mu$ where $X \in {\mathfrak h}$ and $\mu \in {\mathbb R}$. Also
recall that, by assumption $\ip{\xi, X}$ has mean $0$.
Observe that ${\mathcal C}^{0, \alpha}_{\delta-4} (M^*)^K \subset
L^{1} (M)$ precisely when $\delta > 4-2m$. We use the fact that $L$
is self adjoint and hence, for $h \in L^{1} (M)$, the problem
\[
\tfrac{1}{2} \, P^* P \, f + \ip{\xi , X'} + \mu + \sum_{j=1}^n
b_j \, \delta_{p_j} = h
\]
is solvable in $W^{3,q}(M)$ for all $q \in [1, \tfrac{2m}{2m-1})$ if
and only if
\[
\mu \, \mbox{Vol}_g (M) + \sum_{j=1}^n b_j = \int_M h \, dvol_g
\]
(remember that $\ip{\xi, X}$ is normalized to have mean $0$) and, for
all $X'' \in {\mathfrak h}''$
\[
\sum_{j=1}^n b_j \, \ip{ \xi(p_j), X''} = \int_M h \, \ip{\xi, X''}
\, dvol_g
\]
(remember that ${\mathfrak h}'$ and ${\mathfrak h}''$ are constructed so that
\[
(X', X'')_{{\mathfrak h}} : = \int_M \ip{\xi, X'} \, \ip{\xi, X''} \, dvol_M
=0 ,
\]
for all $X' \in {\mathfrak h}'$ and all $X'' \in {\mathfrak h}''$). The first equation
gives the value of $\mu$ in terms of the $b_j$'s and the function
$h$, while the second system, in the $b_j$'s, is solvable since we have
assumed that the projection of $\xi(p_1), \ldots , \xi(p_n)$ over
${\mathfrak h}'' \, ^*$ spans ${\mathfrak h}'' \, ^*$.
To complete the proof, we simply invoke regularity theory which
implies that $f \in {{\mathcal C}}^{4, \alpha}_{\delta} (M^*)^K
\oplus {\mathcal D}$. The estimate of the dimension of the kernel is
left to the reader since it will not be used in the paper.
\end{proof}
\subsection{Operators defined on $\tilde {\mathbb C}^m$}
We choose coordinates $u : = (u^{1}, \ldots, u^m)$ to parameterize
$\tilde {\mathbb C}^m$, the blow up of ${\mathbb C}^m$ at the
origin, away from the exceptional divisor.
We start with the~:
\begin{dfn} Given $\ell \in {\mathbb N}$, $\alpha
\in (0,1)$ and $\delta \in {\mathbb R}$, we define the weighted
space ${{\mathcal C}}^{\ell , \alpha}_{\delta} (\tilde {\mathbb
C}^m)$ to be the space of functions $w \in {{\mathcal C}}^{\ell ,
\alpha}_{loc} (\tilde {\mathbb C}^m)$ for which the following norm
is finite
\[
\| f \|_{{{\mathcal C}}^{\ell , \alpha}_{\delta} (\tilde {\mathbb
C}^m)} : = \| f \|_{{{\mathcal C}}^{\ell , \alpha} (\tilde {\mathbb
C}^m - C_1)} + \sup_{r\geq R_0} \, r^{-\delta} \, \| f_{|_{C_2}} (
r \, \cdot \, ) \|_{{{\mathcal C}}^{\ell , \alpha} (\bar B_2 - B_1)}
.
\]
\label{de:MP2}
\end{dfn}
Given $\delta \in {\mathbb R}$, we define the operator
\[
\begin{array}{rclcllll}
\tilde {\mathcal L}_\delta : & {{\mathcal C}}^{4, \alpha}_{\delta}
(\tilde {\mathbb C}^m)^K & \longrightarrow & {{\mathcal C}}^{0,
\alpha}_{\delta - 4} (\tilde {\mathbb C}^m)^K \\[3mm]
& f & \longmapsto & - \tfrac{1}{2} P^*_\eta P_\eta \, f
\end{array}
\]
and recall the following result which is borrowed from \cite{ap2}
\begin{prop}
Assume that $\delta \in (0,1)$. Then the operator $\tilde {\mathcal
L}_\delta$ defined above is surjective and has a one dimensional
kernel spanned by a constant function. \label{pr:MP3}
\end{prop}
\subsection{Bi-harmonic extensions}
The following results are concerned with bi-harmonic extensions
either on the complement of the unit ball or on the unit ball of
${\mathbb C}^m$ of boundary data defined on the unit sphere. Here
$\Delta$ denotes the Laplacian on ${\mathbb C }^m$.
\begin{prop}
Given $h \in {\mathcal C}^{4, \alpha} (\partial B_1 )$, $k \in {\mathcal
C}^{2, \alpha} (\partial B_1 )$ such that
\[
\int_{\partial B_1} (4 \, m \, h - k) = 0
\]
there exists a function $W^{i} ( = W^{i}_{h,k} )\in {\mathcal C}^{4,
\alpha}_1 (\bar B_1^* )$ such that
\[
\Delta^{2} \, W^{i} = 0 \qquad \mbox{in} \qquad B_1 \qquad \qquad
W^{i}= h \quad \mbox{and} \quad \Delta W^{i} = k \qquad \mbox{on}
\qquad \partial B_1 .
\]
Moreover,
\[
\| W^{i} \|_{{\mathcal C}^{4, \alpha}_1 (\bar B_1^*)} \leq c \,
(\|h\|_{{\mathcal C}^{4, \alpha}(\partial B_1 )} + \|k \|_{{\mathcal
C}^{2, \alpha}(\partial B_1)})
\]
Given $h \in {\mathcal C}^{4, \alpha} (\partial B_1 )$, $k \in {\mathcal
C}^{2, \alpha} (\partial B_1 )$ such that
\[
\int_{\partial B_1 } k =0
\]
there exists a function $W^{o} ( = W^{o}_{h,k} ) \in {\mathcal C}^{4
, \alpha}_{3-2m} ({\mathbb C}^m - B_1)$ such that
\[
\Delta^{2} \, W^{o} = 0, \qquad \mbox{in} \qquad {\mathbb C}^m -B_1
\qquad \qquad W^{o} = h \quad \mbox{and} \quad \Delta W^{o} = k
\qquad \mbox{on} \qquad \partial B_1 .
\]
Moreover,
\[
\| W^{o} \|_{{\mathcal C}^{4, \alpha}_{3-2m} (C_1 ) } \leq c \,
(\|h\|_{{\mathcal C}^{4, \alpha}(\partial B_1 )} + \|k \|_{{\mathcal
C}^{2, \alpha}(\partial B_1)})
\]
\label{pr:f-5.5}
\end{prop}
Let us briefly comment on these assumptions. To this aim, let us
concentrate on the case where both $h$ and $k$ are constant
functions, in which case their bi-harmonic extensions $W^{i}$ and
$W^{o}$ are given explicitly by
\[
W^{i} (z) = a + b \, |z|^2
\]
and
\[
W^{o} (z)= c \, |z|^{4-2m} + d \, |z|^{2-2m}
\]
when $m \geq 3$ and
\[
W^{o} (z) = c \, \log |z| + d \, |z|^{-2}
\]
when $m=2$. It is easy to see that $W^{i} \in {\mathcal C}^{4,
\alpha}_1 (\bar B_1^*)$ if and only if $a=0$, while $W^{o} \in
{\mathcal C}^{4, \alpha}_{3-2m} ({\mathbb C}^m -B_1)$ if and only
if $ c=0$. These conditions lead to the constraints on the functions
$h$ and $k$ as in the statement of the result.
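Spelling this computation out, one only needs that $\Delta \, |z|^2 = 4 \, m$ on ${\mathbb C}^m$ and that $|z|^{2-2m}$ is harmonic away from the origin. In the interior case, the boundary conditions give
\[
W^{i}= h \quad \mbox{and} \quad \Delta W^{i} = 4 \, m \, b = k \quad \mbox{on} \quad \partial B_1
\qquad \Longrightarrow \qquad b = \tfrac{k}{4m} , \qquad a = h - \tfrac{k}{4m} ,
\]
so that $a=0$ if and only if $4 \, m \, h = k$, which for constant data is the condition $\int_{\partial B_1} (4 \, m \, h - k) = 0$. Similarly, in the exterior case with $m \geq 3$, $\Delta \, |z|^{4-2m} = 2\, (4-2m) \, |z|^{2-2m}$, hence $c = - \tfrac{k}{4(m-2)}$ and $c=0$ if and only if $\int_{\partial B_1} k = 0$.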
\section{Nonlinear perturbation results}
\subsection{Perturbation of $\omega$}
The first perturbation we will perform is concerned with the
perturbation of the extremal K\"{a}hler\ form $\omega$ which is defined on
the manifold $M$. We keep the notations which have been introduced
in \S 6.
The K\"{a}hler\ metric $\omega$ being extremal, we have
\begin{equation}
{\bf s} (\omega) = \ip{\xi_{\omega} , X_{\bf s} } + \mu_{\bf s}
\label{eq:solutioninitial}
\end{equation}
for some Killing field
\[
X_{{\bf s}}\in {\mathfrak h}'
\]
and some constant $\mu_{\bf s} \in {\mathbb R}$.
Assume that we are given $n$ points $p_1, \ldots, p_n \in M$ and real parameters $a_1,
\ldots, a_n > 0$. The points $p_j$ are precisely the points where
the manifold $M$ will be blown up and the parameters $a_j >0$ will
be closely related to the K\"{a}hler\ classes and also to the volume of the
exceptional divisors on the blow up manifold, once the construction
is complete.
We recall here the crucial assumption on the choice of the points
$p_j$ and the parameters $a_j$, namely, we assume that
\begin{equation}
\sum a_j \, \xi (p_j) \in {\mathfrak h}' \, ^*
\label{eq:condition-1}
\end{equation}
This condition is precisely the one which allows one to find a
function $\Gamma$, a vector field $Y' \in {\mathfrak h}'$ and a
constant $\lambda \in {\mathbb R} $ such that
\begin{equation}
\tfrac{1}{2} \, P^*_\omega P_\omega \, \Gamma + \ip{\xi_\omega, Y'}
+ \lambda = c_m \, \sum_{j=1}^n a_{j} \, \delta_{p_{j}}
\label{eq:gamma}
\end{equation}
where the constant $c_m$ is defined by
\[
c_m := 4 \, (m-1) \, (m-2) \, |S^{2m-1}| \qquad \mbox{ when $m \geq
3$ \quad and} \qquad c_2 : = 2 \, |S^{3}|
\]
Indeed, the existence of $\Gamma$ depends on the ability to choose
$Y' \in {\mathfrak h}'$ and $\lambda \in {\mathbb R}$ so that
\[
\ip{\xi_\omega , Y' } + \lambda - c_m \, \sum_{j=1}^n a_{j} \,
\delta_{p_{j}}
\]
is ``orthogonal'' to the kernel of $- \tfrac{1}{2} \,P^*_\omega \,
P_\omega$. Since this kernel is precisely spanned by the functions
of the form $\ip{\xi, X} + \alpha$ where $X \in {\mathfrak h}$ and
$\alpha \in {\mathbb R}$, we see that (\ref{eq:condition-1}) is a
necessary and sufficient condition for the existence of $\Gamma$. We
also have the relation
\begin{equation}
\lambda = c_m \, \sum_{j=1}^n a_{j} \label{eq:lambda}
\end{equation}
and we know that the Killing field $Y' \in {\mathfrak h}'$ has to be
chosen so that
\begin{equation}
\int_M \ip{\xi_\omega , Y'} \, \xi_\omega \, dvol_g - \sum_{j=1}^n
a_j \, \xi_\omega (p_{j}) \in {\mathfrak h}'' \, ^*
\label{eq:Xprime}
\end{equation}
It is not hard to check that the function $\Gamma$ has a nice
expansion near each $p_{j}$. Indeed, we have the~:
\begin{lemma}
Near $p_{j'}$, the following expansions hold
\[
\Gamma (z) = - a_{j} \, |z|^{4-2m} + {\mathcal O} (|z|^{6-2m})
\]
when $m\geq 4$,
\[
\Gamma (z) = - a_{j} \, |z|^{-2} + b_{j} \ \log |z| + c_{j} +
{\mathcal O} (|z|)
\]
for some $b_{j} , c_{j} \in {\mathbb R}$ when $m =3$, while
\[
\Gamma (z) = a_{j} \, \log |z| + b_{j} + c_{j} \cdot z + {\mathcal
O} (|z|^2 \, (-\log |z|))
\]
for some $b_{j} \in {\mathbb R}$ and $c_{j} \in {\mathbb C}^m$, when
$m=2$. Here ${\mathcal O} (|z|^q (-\log |z|)^{q'})$ denotes a
smooth function defined away from $0$, whose partial derivatives,
when taken with respect to the vector fields $|z|\, \partial_{z^j}$
and $|z|\, \partial_{\bar z^j}$, are bounded by a constant
(depending on the number of derivatives) times $|z|^q (-\log
|z|)^{q'}$. \label{le:NPR1}
\end{lemma}
We fix
\begin{equation}
r_\varepsilon: = \varepsilon^{\tfrac{2m-1}{2m+1}} \label{eq:NPR1}
\end{equation}
This corresponds to the radius of the balls centered at the points
$p_{j}$ which will be excised from $M$. Recall that, for $r >0$
small enough we have defined
\[
\bar M_r : = M - \cup_{j} B_{j,r}
\]
On each of the boundaries of $\bar M_{r_\varepsilon}$, we will need some
small boundary data. Hence, we assume that we are given \[ h_{j} \in
{\mathcal C}^{4, \alpha}(\partial B_1)^K\quad \mbox{and} \qquad k_{j}
\in {\mathcal C}^{2, \alpha}(\partial B_1)^K \] for $j = 1, \ldots, n$,
satisfying
\begin{equation}
\| h_{j} \|_{{\mathcal C}^{4, \alpha} (\partial B_1)} + \|
k_{j}\|_{{\mathcal C}^{2, \alpha} (\partial B_1)} \leq \kappa \,
r_\varepsilon^{4} ,\label{eq:NPR2}
\end{equation}
where $\kappa >0$ will be fixed later on. The subscript $K$ in the
definition of the spaces is meant to remind the reader that the
functions $h_j$ and $k_j$ are invariant under the action of $K$. We
further assume that
\begin{equation}
\int_{\partial B_1} k_{j} = 0 \label{eq:NPR3}
\end{equation}
so that the second half of the result of Proposition~\ref{pr:f-5.5}
applies. To keep notation short we set
\[
{\bf h} : = (h_1, \ldots, h_n) \qquad \mbox{and} \qquad {\bf k} : =
(k_1, \ldots, k_n).
\]
In particular, we can define the function $W_{\varepsilon, {\bf h}, {\bf k}}$
which is identically equal to $0$ in $\bar M_{2r_0}$ and, for $j=1,
\ldots, n$ is equal to
\begin{equation}
W_{\varepsilon, {\bf h},
{\bf k}} : = \chi_{j} \, W^{o}_{h_{j}, k_{j}} ( \cdot / r_\varepsilon) ,
\label{eq:doublev}
\end{equation}
in $B_{j,2r_0}$. Here $\chi_{j}$ is a cutoff function which is
identically equal to $1$ in $B_{j,r_0}$ and identically equal to $0$
in $M - B_{j,2r_0}$. Observe that the function $W_{\varepsilon, {\bf h}, {\bf
k}}$ depends (linearly) on ${\bf h}$ and ${\bf k}$ and is $K$
invariant.
This being understood, we have the~:
\begin{prop}
Given $\delta \in (4-2m, 5-2m)$ when $m\geq 3$ or $\delta \in (0,
2/3)$ when $m =2$, there exists $\varepsilon_\kappa >0$ and $c_\kappa
>0$ such that for all $\varepsilon \in (0, \varepsilon_\kappa)$ one can find a
function $\phi_{\varepsilon, {\bf h}, {\bf k}} \in {\mathcal C}^{4,
\alpha}(\bar M_{r_\varepsilon})$, a vector field $Y'_{\varepsilon, {\bf h}, {\bf
k}}\in {\mathfrak h}'$ and a constant $\lambda_{\varepsilon, {\bf h}, {\bf k}} \in
{\mathbb R}$ such that the scalar curvature of the K\"{a}hler\ form
\[
\omega_{\varepsilon, {\bf h}, {\bf k}} = \omega + i \, \partial \, \bar \partial
\, ( \varepsilon^{2m-2} \, \Gamma + W_{\varepsilon, {\bf h}, {\bf k}} + \phi_{\varepsilon, {\bf
h}, {\bf k}})
\]
defined on $\bar M_{r_\varepsilon}$, satisfies
\[
-d {\bf s} (\omega_{\varepsilon, {\bf h}, {\bf k}}) = \omega_{\varepsilon, {\bf h},
{\bf k}} (X_{\bf s} + \varepsilon^{2m-2} \, Y' + Y'_{\varepsilon, {\bf h}, {\bf k}} ,
-)
\]
with
\[
\tfrac{1}{{|\partial B_{j, r_\varepsilon}|}} \int_{\partial B_{j, r_\varepsilon}} {\bf s}
(\omega_{\varepsilon, {\bf h}, {\bf k}}) - {\bf s} (\omega_{\varepsilon, {\bf h}, {\bf
k}}) = \lambda_{\varepsilon, {\bf h} , {\bf k}}
\]
In addition, we have the estimate
\[
\begin{array}{rlllll}
\| Y'_{\varepsilon, {\bf h}, {\bf k}} \|_{L^\infty} + r_\varepsilon^{2m-4} \,
\sup_{j=1, \ldots, n} \, \| \phi_{\varepsilon, {\bf h},{\bf k}} \, |_{\bar
B_{j ,2 r_\varepsilon} -B_{j, r_\varepsilon}} (r_\varepsilon \,
\cdot) \|_{{\mathcal C}^{4, \alpha} ( \bar B_2 -B_1)} \qquad \\[3mm]
\hfill \leq c_\kappa \, (r_\varepsilon^{2m+1} + \varepsilon^{4m-4} \, r_\varepsilon^{6 -4m
-\delta}),
\end{array}
\]
\[
|\lambda_{\varepsilon, {\bf h}, {\bf k}} | \leq c \, \varepsilon^{2m-2}
\]
And we also have
\begin{equation}
\begin{array}{llll}
|\lambda_{\varepsilon, {\bf h}^{(1)},{\bf k}^{(1)}} - \lambda_{\varepsilon, {\bf
h}^{(2)},{\bf k}^{(2)}} | + \| Y'_{\varepsilon, {\bf h}^{(1)},{\bf k}^{(1)}}
- Y'_{\varepsilon, {\bf h}^{(2)},{\bf k}^{(2)}} \|_{L^\infty} \\[3mm]
\qquad \qquad \qquad + r_\varepsilon^{2m-4} \, \sup_{j=1, \ldots, n} \, \| (
\phi_{\varepsilon, {\bf h}^{(1)},{\bf k}^{(1)}} -
\phi_{\varepsilon, {\bf h}^{(2)},{\bf k}^{(2)}} )\,
|_{\bar B_{j ,2 r_\varepsilon} - B_{j, r_\varepsilon}} (r_\varepsilon \, \cdot)
\|_{{\mathcal C}^{4, \alpha} (\bar B_{2} - B_{1} )} \qquad \qquad \\[3mm]
\qquad \qquad \qquad \leq c_\kappa \, (r_\varepsilon^{2m - 3} + \varepsilon^{2m-2}
\, r_\varepsilon^{2-2m-\delta}) \, \| ({\bf h}^{(1)}-{\bf h}^{(2)}, {\bf
k}^{(1)}-{\bf k}^{(2)})\|_{( {\mathcal C}^{4, \alpha})^{n} \times
({\mathcal C}^{2, \alpha})^{n} },
\end{array}
\label{eq:6.14}
\end{equation}
provided the components of ${\bf h}, {\bf h}^{(1)}, {\bf h}^{(2)}$,
${\bf k}, {\bf k}^{(1)}, {\bf k}^{(2)}$ satisfy (\ref{eq:NPR2}) and
(\ref{eq:NPR3}). \label{pr:6.22}
\end{prop}
The remainder of this section is devoted to the proof of this
result.
\begin{proof} To begin with, using the analysis of \S 6, we can expand~:
\[
{\bf s} (\omega + i \partial \bar \partial f ) = {\bf s} (\omega)
-\tfrac{1}{2} \, P^*_\omega \, P_\omega \, f - \tfrac{1}{2} \, J
\, X_{\bf s} f + \tfrac{i}{2} \, X_{\bf s} \, f + Q_\omega
(\nabla^{2} \, f)
\]
The structure of the nonlinear operator $Q_\omega$ is quite
complicated but in each $\bar B_{j,\bar r_0}$, this operator enjoys
the following decomposition
\begin{equation}
\begin{array}{rllllll}
Q_\omega ( \nabla^{2} f ) & = & \sum_{q} B_{q,4,2}(\nabla^{4}
f, \nabla^{2} f) \, C_{q,4,2} ( \nabla^{2} f) \\[3mm]
& + & \sum_{q} B_{q,3,3}(\nabla^{3} f, \nabla^{3} f) \,
C_{q,3,3} ( \nabla^{2} f ) \\[3mm]
& + & |z| \, \sum_{q} B_{q,3,2}(\nabla^{3} f, \nabla^{2}
\varphi) \, C_{q,3,2} ( \nabla^{2} f) \\[3mm]
& + & \sum_{q} B_{q,2,2}(\nabla^{2} f , \nabla^{2} f) \, C_{q,2,2} (
\nabla^{2} f)
\end{array}
\label{eq:6.3}
\end{equation}
where the sum over $q$ is finite, the operators $(U,V) \longmapsto
B_{q,a,b} (U,V)$ are bilinear in the entries and have coefficients
which are smooth functions on $\bar B_{j,\bar r_0}$. The nonlinear
operators $W \longmapsto C_{q,a,b} (W)$ have Taylor expansions (with
respect to $W$) whose coefficients are smooth functions on $\bar
B_{j,\bar r_0}$. These facts follow at once from the expression of
the scalar curvature ${\bf s}( \omega_0 + i \, \partial \, \bar \partial
\, f)$ in local coordinates \cite{ap}.
The equation we would like to solve in $\bar M_{r_\varepsilon}$ reads
\[
{\bf s} (\omega + i\partial \bar \partial f ) - \zeta - \mu_{\bf s} - \mu
= 0
\]
where
\[
\zeta = \ip{\xi_\omega, X_{\bf s} + X'} - \tfrac{1}{2} \, J \,
(X_{\bf s} + X') f - \tfrac{i}{2} \, (X_{\bf s} + X') \, f
\]
Observe that, using the analysis of \S 5, we find that
\[
-d \zeta = (\omega + i \partial \, \bar \partial f) \, (X_{\bf s} + X', -)
\]
Using the above expansion together with
(\ref{eq:solutioninitial}), we can rewrite this equation as
\[
- \tfrac{1}{2} \, P^*_\omega \, P_\omega \, f + i \, X_{\bf s} \, f
+ Q_\omega (\nabla^2 f) - \ip{\xi_\omega , X'} + \tfrac{i}{2} \, X'
\, f + \tfrac{1}{2} \, J\, X' \, f -\mu = 0
\]
Assuming that $X' \in {\mathfrak h}'$ and keeping in mind that we work
equivariantly so that the function $f$ is now assumed to be $K$
invariant, this simplifies into
\[
\tfrac{1}{2} \, P^*_\omega \, P_\omega \, f + \ip{\xi_\omega , X'
} + \mu = \tfrac{1}{2} \, J \, X' \, f + Q_\omega (\nabla^{2} \,
f)
\]
We set
\[
\begin{array}{rlllllll}
f & : = & \varepsilon^{2m-2} \, \Gamma + W + \phi \\[3mm]
X' & : = & \varepsilon^{2m-2} \, Y' + Z' \\[3mm]
\mu & : = & \varepsilon^{2m-2} \, \lambda + \nu
\end{array}
\]
where $\Gamma$, $\lambda$ and $Y' \in {\mathfrak h}'$ are defined in
(\ref{eq:gamma}), (\ref{eq:lambda}) and (\ref{eq:Xprime}), and $W =
W_{\varepsilon, {\bf h}, {\bf k}}$ is defined in (\ref{eq:doublev}).
It is easy to see that the system of equations which remains to be
solved on $\bar M_{r_{\varepsilon}}$ can be formally written as
\[
\tfrac{1}{2}\, P^*_\omega \, P_\omega \phi + \ip{\xi_\omega , Z'} +
\nu = {\mathcal Q} ( \phi , Z')
\] where the operator
${\mathcal Q} = {\mathcal Q} (\varepsilon, {\bf h}, {\bf k} ; \cdot , \cdot
)$ is defined by
\[
\begin{array}{rllllll}
{\mathcal Q} (\varepsilon, {\bf h}, {\bf k} ; \phi, Z') & : = & \displaystyle
- \tfrac{1}{2} \, P^*_\omega \, P_\omega \, W + \tfrac{1}{2} \,
J\, (\varepsilon^{2m-2} \, Y' + Z' ) \, (\varepsilon^{2m-2} \, \Gamma + W +
\phi) \\[3mm]
& + & Q_\omega (\nabla^{2} \, (\varepsilon^{2m-2} \, \Gamma + W + \phi ))
\end{array}
\]
We will need the following~:
\begin{defin}
Given $ \bar r \in (0, r_0)$, $\ell \in {\mathbb N}$, $\alpha \in
(0,1)$ and $\delta \in {\mathbb R}$, the weighted space ${\mathcal
C}^{\ell, \alpha}_{\delta} (\bar M_{\bar r })$ is defined to be the
space of functions $f \in {{\mathcal C}}^{\ell , \alpha} (\bar
M_{\bar r})$ endowed with the norm
\[
\| f \|_{{{\mathcal C}}^{\ell, \alpha}_{\delta} (\bar M_{\bar r})} :
= \| f \|_{{{\mathcal C}}^{\ell , \alpha} (\bar M_{r_0})} +
\sum_{j=1}^n \, \sup_{\bar r \leq r \leq r_0} r^{-\delta} \, \| f
|_{(\bar B_{j, r_{0}} -B_{j,\bar r})} (r \, \cdot) \|_{{{\mathcal C}
}^{\ell , \alpha} ( \bar B_{2} - B_{1})}
\]\label{de:6.1}
\end{defin}
Next, we consider a linear extension operator
\[
{\mathcal E}_{\bar r , \delta'} : {\mathcal C}^{0, \alpha}_{\delta'}
(\bar M_{\bar r}) \longrightarrow {\mathcal C}^{0, \alpha}_{\delta'}
(M^*) ,
\]
which is defined as follows~:
\begin{itemize}
\item[(i)] In $M_{\bar r}$, ${\mathcal E}_{\bar r , \delta'} \, (f) = f$. \\
\item[(ii)] In each $\bar B_{j ,\bar r } - B_{j , \bar r/2}$
\[
{\mathcal E}_{\bar r , \delta'} \, (f) (z) = \displaystyle
\tfrac{{2 \, |z| -\bar r }}{{\bar r} }\, f \left( \bar r \,
\tfrac{z}{{|z|}}
\right).
\]
\item[(iii)] In each $\bar B_{j , \bar r/2}$, $ {\mathcal E}_{\bar r , \delta'} \, (
f ) = 0 $.
\end{itemize}
It is easy to check that there exists a constant $c = c (\delta')
>0$, independent of ${\bar r} \in (0, r_0)$, such that
\begin{equation}
\| {\mathcal E}_{\bar r , \delta'} ( f ) \|_{{\mathcal C}^{0,
\alpha}_{\delta'} (M^{*})} \leq \, c \, \| f \|_{{\mathcal C}^{0,
\alpha}_{\delta'} (\bar M_{\bar r})} . \label{eq:6.8}
\end{equation}
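The sup-norm part of this estimate can be checked directly; we only sketch it here (the H\"older seminorm is estimated similarly). For $z \in \bar B_{j, \bar r} - B_{j, \bar r/2}$, the interpolation factor lies in $[0,1]$, hence
\[
| {\mathcal E}_{\bar r , \delta'} \, (f) (z) | \leq \left| f \left( \bar r \, \tfrac{z}{{|z|}} \right) \right| \leq \bar r^{\delta'} \, \| f \|_{{\mathcal C}^{0, \alpha}_{\delta'} (\bar M_{\bar r})} \leq \max (1 , 2^{\delta'}) \, |z|^{\delta'} \, \| f \|_{{\mathcal C}^{0, \alpha}_{\delta'} (\bar M_{\bar r})} ,
\]
where the last inequality uses $\bar r /2 \leq |z| \leq \bar r$.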
The equation we would like to solve can now be written as
\begin{equation}
\tfrac{1}{2}\, P^*_\omega \, P_\omega \, \phi + \ip{\xi_\omega ,
Z' } + \nu = {\mathcal E}_{r_\varepsilon, \delta-4} \circ {\mathcal Q} (
\phi , Z') \label{pac-0701} \end{equation} We fix $\delta \in
(4-2m, 5-2m)$. If ${\mathcal G}_\delta$ denotes a right inverse for
${\mathcal L}_\delta$ which is provided by
Proposition~\ref{pr:f-5.1}, we just need to solve
\[
(\phi, Z' , \nu ) = {\mathcal N} (\varepsilon, {\bf h}, {\bf k}, \phi , Z' )
\]
where $\phi \in {\mathcal C}^{4, \alpha}_\delta (M^*)^K \oplus
{\mathcal D}$, $Z' \in {\mathfrak h}'$, $\nu \in {\mathbb R}$ and
where the nonlinear operator ${\mathcal N} (\varepsilon, {\bf h}, {\bf k} ;
\, \cdot \, , \, \cdot \, )$ is defined by
\[
{\mathcal N} (\varepsilon, {\bf h}, {\bf k} ; \, \cdot \, , \,
\cdot \, ) : = {\mathcal G}_\delta \circ {\mathcal E}_{r_\varepsilon,
\delta-4} \circ {\mathcal Q}
\]
We set
\[
{\mathcal F} : = ( {\mathcal C}^{4, \alpha}_{\delta} (M^*)^K
\oplus {\mathcal D} ) \times {{\mathfrak h}'} \times {\mathbb R}
\]
which is endowed with the product norm.
The existence of a solution to this system is a simple
consequence of the fixed point theorem for contraction mappings once
the following estimates are proved.
\begin{lemma}
There exist $c >0$ and $c_\kappa > 0$, and there exists $\varepsilon_\kappa =
\varepsilon (\kappa) >0$ such that, for all $\varepsilon \in (0, \varepsilon_\kappa)$
\begin{equation}
\| {\mathcal N} (\varepsilon , {\bf h},{\bf k} ; 0 , 0) \|_{{\mathcal F}}
\leq c_\kappa \, (r_\varepsilon^{2m+1} + \varepsilon^{4m-4} \, r_\varepsilon^{6 -4m -\delta }),
\label{eq:6.1000}
\end{equation}
and
\begin{equation}
\begin{array}{llllll}
\| {\mathcal N} (\varepsilon , {\bf h},{\bf k} ; \phi^{(1)} , Z' \, ^{(1)} )
- {\mathcal N} (\varepsilon , {\bf h},{\bf k} ; \phi^{(2)} , Z' \, ^{(2)})
\|_{{\mathcal F}} \\[3mm]
\hspace{40mm}\leq c_\kappa \, \varepsilon^{2m-2} \, r_\varepsilon^{6-4m-\delta} \, \|
(\phi^{(1)} - \phi^{(2)} , Z' \, ^{(1)} - Z' \, ^{(2)}, 0)
\|_{{\mathcal F}}
\end{array}\label{eq:6.1001}
\end{equation}
Finally,
\begin{equation}
\begin{array}{llllll}
\| {\mathcal N} (\varepsilon , {\bf h}^{(1)},{\bf k}^{(1)} ; \phi, Z' ) -
{\mathcal N} (\varepsilon , {\bf h}^{(2)},{\bf k}^{(2)} ; \phi , Z'
) \|_{{\mathcal F}} \\[3mm]
\hspace{30mm}\leq c_\kappa \, (r_\varepsilon^{2m-3} + \varepsilon^{2m-2} \,
r_\varepsilon^{2-2m-\delta}) \, \| ({\bf h}^{(1)} - {\bf h}^{(2)}, {\bf
k}^{(1)} - {\bf k}^{(2)}) \|_{( {\mathcal C}^{4, \alpha})^n \times
({\mathcal C}^{2, \alpha} )^n}
\end{array}
\label{eq:6.1002}
\end{equation}
provided $(\phi, Z',0), (\phi^{(1)}, Z' \, ^{(1)},0) , (\phi^{(2)},
Z' \, ^{(2)},0) \in {\mathcal F}$ satisfy
\[
\| (\phi , Z' ,0) \|_{{\mathcal F}} + \| (\phi^{(1)} , Z' \, ^{(1)}
,0) \|_{{\mathcal F}} +\| (\phi^{(2)} , Z' \, ^{(2)} ,0)
\|_{{\mathcal F}} \leq \, 6 \, c_\kappa \, (r_\varepsilon^{2m+1} + \varepsilon^{4m-4}
\, r_\varepsilon^{6 -4m -\delta }) ,
\]
and the components of ${\bf h}, {\bf h}^{(1)}, {\bf h}^{(2)}$, ${\bf
k}, {\bf k}^{(1)}, {\bf k}^{(2)}$ satisfy (\ref{eq:NPR2}) and
(\ref{eq:NPR3}). \label{le:f-8.1}
\end{lemma}
\begin{proof} The proof of these estimates follows what is already
done in \cite{ap} and \cite{ap2} with minor modifications. We
briefly recall how the proof of the first estimate is obtained and
leave the proof of the second and third estimates to the reader.
First, we use the result of Proposition~\ref{pr:f-5.5} to estimate
\begin{equation}
\| W \|_{{\mathcal C}^{4, \alpha}_{3-2m} (\bar M_{r_\varepsilon})} \leq
c_\kappa \, r_\varepsilon^{2m+1} . \label{eq:jd}
\end{equation}
Now observe that, by construction, $\Delta^{2} \, W = 0$ in each
$\bar B_{j, r_0/2} - B_{j, r_\varepsilon}$ (here $\Delta$ is the Euclidean
Laplacian), hence
\[
L_\omega \, W = (L_\omega + \tfrac{1}{2} \, \Delta^{2} ) \, W
\]
in this set. Making use of the fact that the coordinates near $p_j$
are chosen to be normal, we get the existence of a constant
$c_\kappa > 0$ such that
\[
\| L_\omega \, W \|_{{\mathcal C}^{0, \alpha}_{\delta -4} ( \bar
M_{r_\varepsilon})} \leq c_\kappa \, r_\varepsilon^{2m+1}.
\]
Note that this is where we implicitly use the fact that $\delta <
5-2m$.
Next, we estimate the norm of $\varepsilon^{2m-2} \, J \, Y' \, ( \varepsilon^{2m-2}
\, \Gamma + W )$ by
\[
\| \varepsilon^{2m-2} \, J \, Y' \, ( \varepsilon^{2m-2} \, \Gamma + W )
\|_{{\mathcal C}^{0, \alpha}_{\delta -4} ( \bar M_{r_\varepsilon})} \leq c \,
\varepsilon^{4m -4},
\]
where the constant $c$ does not depend on $\kappa$ provided
$\varepsilon$ is chosen small enough, say $\varepsilon \in (0, \varepsilon_\kappa)$.
Finally, we use the structure of the nonlinear operator $Q_\omega$
as described in (\ref{eq:6.3}) together with the estimate
(\ref{eq:jd}) to get
\[
\| Q_\omega ( \nabla^{2} ( \varepsilon^{2m-2} \, \Gamma + W)) \|_{{\mathcal
C}^{0, \alpha}_{\delta-4} (\bar M_{r_\varepsilon})} \leq c \, \varepsilon^{4m-4} \,
r_\varepsilon^{6-4m-\delta}
\]
for some constant $c >0$ which does not depend on $\kappa$
provided $\varepsilon$ stays small enough. The first estimate then follows at
once. \end{proof}
Reducing $\varepsilon_\kappa >0$ if necessary, we can assume that,
\begin{equation}
c_\kappa \, \varepsilon^{2m-2} \, r_\varepsilon^{6-4m-\delta} \leq \tfrac{1}{2}
\label{eq:6.1400}
\end{equation}
for all $\varepsilon \in (0, \varepsilon_\kappa )$. Then, the estimates
(\ref{eq:6.1000}) and (\ref{eq:6.1001}) in the above Lemma are
enough to show that
\[
(\phi, Z' , \nu ) \longmapsto {\mathcal N} (\varepsilon, {\bf h} , {\bf k} ;
\phi, Z' )
\]
is a contraction from
\[
\{ (\phi, Z' , \nu ) \in {\mathcal F} \quad : \quad
\|(\phi , Z' , \nu )\|_{{\mathcal F}} \leq 2 \, c_\kappa \, (
r_\varepsilon^{2m+1} + \varepsilon^{4m-4} \, r_\varepsilon^{6 -4 m -\delta } ) \} ,
\]
into itself and hence has a unique fixed point $(\phi_{\varepsilon, {\bf h},
{\bf k}}, Y'_{\varepsilon, {\bf h}, {\bf k}} , \nu_{\varepsilon, {\bf h}, {\bf k}})$
in this set. This fixed point yields a solution of (\ref{pac-0701})
in $\bar M_{r_\varepsilon}$ and hence provides an extremal K\"{a}hler\ form on $\bar
M_{r_\varepsilon}$. The estimates in Proposition~\ref{pr:6.22} follow at once
from the estimates in Lemma~\ref{le:f-8.1}, increasing the value of
$c_\kappa$ and reducing $\varepsilon_\kappa$ if this is necessary.
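For completeness, we sketch the standard chain of estimates behind this contraction property. Writing ${\mathcal N} (v)$ for the value of the map at a point $v$ of the above set, the estimates (\ref{eq:6.1000}), (\ref{eq:6.1001}) and (\ref{eq:6.1400}) give
\[
\| {\mathcal N} (v) \|_{{\mathcal F}} \leq \| {\mathcal N} (0) \|_{{\mathcal F}} + \| {\mathcal N} (v) - {\mathcal N} (0) \|_{{\mathcal F}} \leq c_\kappa \, ( r_\varepsilon^{2m+1} + \varepsilon^{4m-4} \, r_\varepsilon^{6 -4 m -\delta } ) + \tfrac{1}{2} \, \| v \|_{{\mathcal F}} \leq 2 \, c_\kappa \, ( r_\varepsilon^{2m+1} + \varepsilon^{4m-4} \, r_\varepsilon^{6 -4 m -\delta } ) ,
\]
while (\ref{eq:6.1001}) together with (\ref{eq:6.1400}) bounds the Lipschitz constant of the map on this set by $\tfrac{1}{2}$.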
\end{proof}
\subsection{Perturbation of $\eta$}
We perform an analysis similar to the one we have done in the
previous section starting with $\tilde {\mathbb C}^m$, the blow up
of ${\mathbb C}^m$ at $0$, endowed with the Burns--Simanca metric $\eta$.
Given $ a >0$, we now consider on $\tilde {\mathbb C}^m$, the
perturbed K\"{a}hler\ form
\[
\tilde \eta = a^2 \, \eta + i \, \partial \, \bar \partial \, f .
\]
Everything we will do will be uniform in $a$ as long as this
parameter remains both bounded from above and bounded away from $0$.
In fact, we will apply our analysis when $a$ satisfies
\begin{equation}
a_{\min} \leq a \leq a_{\max} . \label{eq:aaa}
\end{equation}
Given $\nu \in {\mathbb R}$ and a $K$-invariant Killing field $X \in
{\mathfrak k}$, we would like to solve the equation
\begin{equation} {\bf s} \, (a^{2} \, \eta + i \, \partial \, \bar \partial
\, f) = \varepsilon^{2} \, \zeta \label{eq:6.17}
\end{equation} in $N_{R_\varepsilon/a}$ where
\[
\zeta = \tilde \zeta - \tfrac{1}{2} \, J \, X \, f - \tfrac{i}{2}
\, X \, f
\]
and $\tilde \zeta$ is the solution of
\[
- d\tilde \zeta = a^{2} \, \eta ( X , - )
\]
normalized so that the mean of $\zeta$ over $\partial B_{R_\varepsilon/a}$ is
prescribed by
\[
\tfrac{1}{{|\partial B_{R_\varepsilon/a}|}} \, \int_{\partial B_{R_\varepsilon/a}} \zeta =
\nu
\]
Observe that, using the analysis of section 5, we find that
\[
- d \zeta = (a^2 \, \eta + i \, \partial \, \bar \partial f) \, (X, -)
\]
Also, working with $K$-invariant functions, since $X \in {\mathfrak k}$, we
have $X\, f =0$.
It will be convenient to denote
\[
\bar N_R : = \tilde {\mathbb C}^m - C_R
\]
We fix
\begin{equation}
R_\varepsilon : = \tfrac{r_\varepsilon}{\varepsilon} \label{eq:RRe}
\end{equation}
Assume we are given $h \in {\mathcal C}^{4, \alpha}(\partial B_1)^K$ and
$k \in {\mathcal C}^{2, \alpha}(\partial B_1)^K$ satisfying
\begin{equation}
\| h \|_{{\mathcal C}^{4, \alpha} (\partial B_1)} + \| k \|_{{\mathcal
C}^{2, \alpha} (\partial B_1)} \leq \kappa \, R_\varepsilon^{3-2m},
\label{eq:f-23}
\end{equation}
and such that
\[
\int_{\partial B_1} (4m \, h - k) = 0
\]
where $\kappa >0$ will be fixed later on. We define in $N_{R_\varepsilon/a}$
the function
\begin{equation}
W_{\varepsilon, a , h,k} : = \tilde \chi \, W^i_{h , k } (a \, \cdot / R_\varepsilon)
, \label{eq:f-24}
\end{equation}
where $\tilde \chi$ is a cutoff function which is identically equal
to $1$ in $C_2$ and identically equal to $0$ in $N_1$ and where
$W^{i}_{h,k}$ has been defined in Proposition~\ref{pr:f-5.5}.
We will assume that the parameter $\nu$ and the vector field $X$ are
uniformly bounded in $N_{R_\varepsilon/a}$. To be more specific, we agree
that
\[
|\nu| + \varepsilon^{-1} \, \| X \|_{L^\infty (N_{2R_\varepsilon/a})} \leq c
\]
Following the analysis already performed in the previous section, we
have the following~:
\begin{prop} Given $\delta \in (0,1)$, there exist $c
>0$ (independent of $\kappa$) and $\varepsilon_\kappa = \varepsilon (\kappa) > 0$ such
that, for all $\varepsilon \in (0, \varepsilon_\kappa)$ there exists a function
$\phi_{\varepsilon, a, \nu, X, h,k} \in {\mathcal C}^{4, \alpha} (\bar
N_{R_\varepsilon/a})$ such that the scalar curvature of the K\"{a}hler\ form
\[
\eta_{\varepsilon, a ,\nu, X , h , k } : = a^{2} \, \eta + i \, \partial \,
\bar \partial \, (W_{\varepsilon, a , h , k } + \phi_{\varepsilon, a , \nu , X, h , k } ),
\]
defined on $\bar N_{R_\varepsilon/a}$, is given by
\[
- d {\bf s} (\eta_{\varepsilon, a ,\nu, X , h , k } ) = \varepsilon^2 \, \eta_{\varepsilon, a
,\nu, X , h , k } (X, -)
\]
with
\[
\tfrac{1}{{|\partial B_{R_\varepsilon/a}|}} \, \int_{\partial B_{R_\varepsilon/a}} {\bf s}
(\eta_{\varepsilon, a ,\nu, X , h , k } ) = \varepsilon^2
\, \nu
\] Moreover
\[
\| \phi_{\varepsilon, a , \nu , X, h , k } (R_\varepsilon \, \cdot /a ) \|_{{\mathcal
C}^{4, \alpha} (\bar B_{1} -B_{1/2})} \leq c \, R_\varepsilon^{3-2m} ,
\]
for some constant $c >0$ independent of $\kappa$. In addition, we
have
\begin{equation}
\begin{array}{llll} \| \phi_{\varepsilon, a , \nu , X, h , k } (R_\varepsilon \,
\cdot /a ) - \phi_{\varepsilon, a , \nu' , X', h' , k' } (R_\varepsilon \, \cdot /a'
) \|_{{\mathcal C}^{4, \alpha} ( \bar B_{1} - B_{1/2} )} \\[3mm]
\hspace{20mm} \leq c_\kappa \, ( R_\varepsilon^{\delta -1} \, \| (h - h', k
- k' )\|_{{\mathcal C}^{4, \alpha} \times {\mathcal C}^{2, \alpha} }
+ R_\varepsilon^{3-2m} \,( | \nu - \nu'| + \| X-X' \|_{L^\infty} + |a-a'|)
).
\end{array}
\label{eq:6.29}
\end{equation}
\label{pr:6.2}
\end{prop}
Again the rest of the section is devoted to the proof of this
result.
\begin{proof}
Using the fact that
\[
{\bf s} ( a^{2} \, \eta + i \, \partial \, \bar \partial \, f ) = {\bf s} (
a^{2} \, (\eta + i \, a^{-2} \, \partial \, \bar \partial \, f )) = a^{- 2}
\, {\bf s} ( \eta + i \, a^{- 2} \, \partial \, \bar \partial \, f )
\]
we see that the scalar curvature of $\tilde \eta$ can be expanded as
\begin{equation}
{\bf s} (a^{2} \, \eta + i \, \partial \, \bar \partial \, f) = -
\tfrac{1}{2} \, a^{-2} \, P_{\eta}^* \, P_\eta \, f + a^{- 2} \,
Q_{\eta} (a^{- 2} \, \nabla^{2} f ) , \label{eq:6.1666}
\end{equation}
since the scalar curvature of $\eta$ is identically equal to $0$.
Again, the structure of the nonlinear operator $Q_{\eta}$ is quite
complicated but away from the exceptional divisor, it enjoys a
decomposition similar to the one described in the previous section.
Indeed, we know from \cite{ap2} that we can decompose
\[
\begin{array}{rlllll}
Q_{\eta} ( \nabla^{2} f) & = & \sum_{q} B_{q,4,2}(\nabla^{4}
f, \nabla^{2} f) \, C_{q,4,2} ( \nabla^{2} f) \\[3mm]
& + & \sum_{q} B_{q,3,3}(\nabla^{3} f, \nabla^{3} f) \,
C_{q,3,3} ( \nabla^{2} f) \\[3mm]
& + & \sum_{q} |u|^{1-2m} \, B_{q,3,2}(\nabla^{3} f, \nabla^{2} f)
\, C_{q,3,2} ( \nabla^{2}f)
\\[3mm]
& +& \sum_{q} |u|^{-2m} \, B_{q,2,2}(\nabla^{2} f, \nabla^{2} f) \,
C_{q,2,2} ( \nabla^{2} f)
\end{array}
\]
where the sum over $q$ is finite, the operators $(U,V)
\longmapsto B_{q,j,j'} (U,V)$ are bilinear in the entries and
have coefficients which are bounded functions in ${\mathcal C}^{0,
\alpha} (\bar C_1)$. The nonlinear operators $W \longmapsto
C_{q,j,j'} (W)$ have Taylor expansions (with respect to $W$) whose
coefficients are bounded functions in ${\mathcal C}^{0, \alpha}
(\bar C_1)$.
We set
\[
L_\eta : = - \tfrac{1}{2} \, P^*_\eta \, P_\eta
\]
Replacing in (\ref{eq:6.1666}) the function
\[
f = W + \phi,
\]
where $W = W_{\varepsilon, h,k}$ has been defined in (\ref{eq:f-24}), we see
that (\ref{eq:6.17}) can be written as
\begin{equation}
L_{\eta} \, \phi + Q_{\eta} ( a^{-2} \, \nabla^{2} ( W + \phi ) ) =
\varepsilon^{2} \, a^{2} \, \zeta , \label{eq:6.99}
\end{equation}
which we would like to solve in $\bar N_{R_\varepsilon/a}$. Remember that the
function $\zeta$ solves
\[
- d \zeta = ( a^{2} \, \eta + i \, \partial \, \bar \partial \, ( W+ \phi))
( X , - )
\]
with
\[
\tfrac{1}{{|\partial B_{R_\varepsilon/a}|}} \, \int_{\partial B_{R_\varepsilon/a}} \zeta =
\nu
\]
We will need the following~:
\begin{defin}
Given $ \bar R > 1 $, $\ell \in {\mathbb N}$, $\alpha \in (0,1)$ and
$\delta \in {\mathbb R}$, the weighted space ${\mathcal C}^{\ell,
\alpha}_{\delta} (\bar N_{\bar R})$ is defined to be the space of
functions $f \in {{\mathcal C}}^{\ell, \alpha} (\bar N_{\bar R})$
endowed with the norm
\[
\| f \|_{{{\mathcal C}}^{\ell, \alpha}_{\delta} (\bar N_{\bar R})} :
= \| f \|_{{{\mathcal C}}^{\ell, \alpha} (\bar N_1)} + \sup_{1 \leq
R \leq \bar R} R^{-\delta} \, \| f (R \, \cdot) \|_{{{\mathcal C}
}^{\ell , \alpha} ( \bar B_{1} - B_{1/2})}
\]\label{de:6.2}
\end{defin}
For each $\bar R \geq 1$, it will be convenient to define a
linear ``extension'' operator
\[
\tilde {\mathcal E}_{\bar R} : {\mathcal C}^{0, \alpha}_{\delta'}
(N_{\bar R}) \longrightarrow {\mathcal C}^{0, \alpha}_{\delta'}
(\tilde {\mathbb C}^m ) ,
\]
as follows~:
\begin{itemize}
\item[(i)] In $\bar N_{\bar R}$, $\tilde {\mathcal E}_{\bar R} \, ( f ) =
f$,\\
\item[(ii)] in $C_{2\bar R} - C_{\bar R} $
\[
\tilde {\mathcal E}_{\bar R} \, (f ) (u) = \tfrac{{2 \, {\bar R} -
|u|}}{{\bar R}} \, f \left(\bar R \, \tfrac{u}{{|u|}}\right),
\]
\item[(iii)] in $C_{2 \, \bar R }$, $\tilde {\mathcal E}_{\bar R} \,
( f )=0$.
\end{itemize}
It is easy to check that there exists a constant $c = c( \delta' )
>0$, independent of $\bar R \geq 2$, such that
\begin{equation}
\| \tilde {\mathcal E}_{\bar R} ( f ) \|_{{\mathcal C}^{0,
\alpha}_{\delta'} (\tilde {\mathbb C}^m)} \leq \, c \, \| f
\|_{{\mathcal C}^{0, \alpha}_{\delta'} (N_{\bar R})} ,
\label{eq:6.20}
\end{equation}
and furthermore, one can arrange easily for $\tilde {\mathcal
E}_{\bar R}$ to depend smoothly on $\bar R$.
The equation we will solve can be rewritten as
\begin{equation}
L_{\eta} \, \phi = \tilde {\mathcal E}_{R_\varepsilon/a} \left(
\varepsilon^{2} \, a^{2} \, \zeta - Q_{\eta} (a^{-2} \, \nabla^{2} ( W +
\phi )) - L_{\eta} \, W \right). \label{eq:f-25}
\end{equation}
We fix $\delta \in (0, 1)$ and use the result of
Proposition~\ref{pr:MP3}. This provides a right inverse $\tilde
{\mathcal G}_{\delta}$ for the operator $L_{\eta}$.
We can now rephrase the solvability of (\ref{eq:f-25}) as a fixed
point problem.
\begin{equation}
\phi = \tilde {\mathcal N} (\varepsilon, a, \nu, X, h, k ; \phi )
\label{eq:6.22}
\end{equation}
where the nonlinear operator ${\tilde {\mathcal N}}$ is defined by
\[
\tilde {\mathcal N} (\varepsilon , a, \nu, X , h, k ; \phi ) : = \tilde
{\mathcal G}_{\delta} \circ \tilde {\mathcal E}_{R_\varepsilon/a} \left(
\varepsilon^{2} \, a^{2} \, \zeta - Q_{\eta} ( a^{-2} \, \nabla^{2} (W +
\phi )) - L_{\eta} \, W \right)
\]
To keep notations short, it will be convenient to define
\[
\tilde {\mathcal F} : = {\mathcal C}^{4, \alpha}_\delta (\tilde
{\mathbb C}^m)
\]
The existence of a fixed point for ${\tilde {\mathcal N}}$ will
follow from the following~:
\begin{lemma}
There exist $c >0$ (independent of $\kappa$) and $c_\kappa >0$, and
there exists $\varepsilon_\kappa = \varepsilon (\kappa) >0$ such that, for all $\varepsilon \in
(0, \varepsilon_\kappa)$
\begin{equation}
\| \tilde {\mathcal N} ( \varepsilon, a, \nu, X, h, k ; 0 ) \|_{\tilde
{\mathcal F}} \leq c \, R_\varepsilon^{3-2m -\delta} , \label{eq:6.23}
\end{equation} Moreover, we have
\begin{equation}
\| \tilde {\mathcal N} ( \varepsilon, a, \nu, X , h, k ; \phi ) - \tilde
{\mathcal N} ( \varepsilon, a , \nu, X, h, k ; \phi' ) \|_{\tilde {\mathcal
F}} \leq c_\kappa \, R_\varepsilon^{3-2m -\delta} \, \| \phi - \phi'
\|_{\tilde {\mathcal F}} \label{eq:6.24}
\end{equation} and
\begin{equation}
\begin{array}{rllllll}
\| \tilde {\mathcal N} ( \varepsilon, a , \nu, X, h, k ; \phi ) - \tilde
{\mathcal N} ( \varepsilon, a' , \nu' , X', h', k' ; \phi ) \|_{\tilde
{\mathcal F}} \leq c_\kappa \, ( R_\varepsilon^{-1} \, \| (h-h',
k-k')\|_{\mathcal C^{4, \alpha} \times {\mathcal C}^{2, \alpha}} \\[3mm]
\hspace{40mm} + R_\varepsilon^{3-2m -\delta} \, ( |\nu' - \nu| + \| X
-X'\|_{L^\infty}+ |a' - a| ) )
\end{array}
\label{eq:6.25}
\end{equation} provided $\phi , \phi' \in \tilde {\mathcal F}$, satisfy
\[
\|\phi \|_{\tilde {\mathcal F}} + \|\phi'\|_{\tilde {\mathcal F}}
\leq 4 \, c \, R_\varepsilon^{3-2m -\delta} ,
\]
and $h, h'$ and $k,k'$ satisfy (\ref{eq:f-23}). \label{le:f-8.3}
\end{lemma}
The proofs of these estimates being identical to the ones in
\cite{ap2}, we omit them.
Reducing $\varepsilon_\kappa >0$ if necessary, we can assume that,
\begin{equation}
c_\kappa \, R_\varepsilon^{3-2m-\delta} \leq \tfrac{1}{2} \label{eq:6.28}
\end{equation} for all $\varepsilon \in (0, \varepsilon_\kappa )$. Then, the estimates
(\ref{eq:6.23}) and (\ref{eq:6.24}) in the above Lemma are enough to
show that
\[
\phi \longmapsto \tilde{\mathcal N} (\varepsilon, a , \nu, X, h , k ; \phi )
\]
is a contraction from
\[ \{ \phi \in \tilde {\mathcal F} \quad : \quad \|
\phi \|_{\tilde {\mathcal F}} \leq 2 \, c \, R_\varepsilon^{3-2m-\delta} \} ,
\]
into itself and hence has a unique fixed point $\phi_{ \varepsilon, a , \nu,
X, h, k}$ in this set. This fixed point is a solution of
(\ref{eq:6.99}) in $\bar N_{R_\varepsilon/a}$ and hence provides an extremal
K\"{a}hler\ form on $\bar N_{R_\varepsilon/a}$. The estimates in
Proposition~\ref{pr:6.2} follow at once from the estimates in
Lemma~\ref{le:f-8.3}, increasing the value of $c_\kappa$ and
reducing $\varepsilon_\kappa$ if this is necessary. \end{proof}
\section{Gluing the pieces together}
Building on the analysis of the previous sections we complete the
proof of Theorem~\ref{mainthm}. As far as technicalities are
concerned, the proof is identical to the one in \cite{ap2}; therefore,
we shall only emphasize the differences in the present framework.
Before we proceed, a word about notations. In this section
${\mathcal O}_{{\mathcal C}^{\ell, \alpha}} (A)$ will refer to a
function whose ${\mathcal C}^{\ell, \alpha}$-norm is bounded by $A$
times a constant independent of $\varepsilon$ and also independent of
$\kappa$ provided $\varepsilon$ is chosen small enough (but which might
depend on $m$, $\omega$, the points $p_j$ and the coefficients
$a_j$). In general this function will be a nonlinear operator of the
data.
We first exploit the result of Proposition~\ref{pr:6.22}. Given
boundary data
\[
{\bf h} : = (h_1, \ldots, h_n), \qquad {\bf k} : = (k_1, \ldots,
k_n)
\]
so that
\[
\int_{\partial B_1} k_j =0
\]
we can apply the result of Proposition~\ref{pr:6.22} to define on
$\bar M_{r_\varepsilon}$ a K\"{a}hler\ form $\omega_{\varepsilon, {\bf h}, {\bf k}}$ which
can be written as
\[
\omega_{\varepsilon, {\bf h}, {\bf k}} = i \, \partial \, \bar \partial
(\tfrac{1}{2} \, |z|^2 + \varphi^j(z) + \varepsilon^{2m-2} \, \Gamma_{\varepsilon,
{\bf h}, {\bf k}} + W_{\varepsilon, {\bf h}, {\bf k}} + \phi_{\varepsilon, {\bf h},
{\bf k}} )
\]
in $\bar B_{j, 2r_\varepsilon} -B_{j, r_\varepsilon}$ where the function $\varphi^j
(z) = {\mathcal O} (|z|^4)$ is the one which appears in
Proposition~\ref{masterpiece} so that
\[
\omega = i \, \partial \, \bar \partial (\tfrac{1}{2} \, |z|^2 +
\varphi^j(z))
\]
near $p_j$. We define the function
\[
\psi^{o,j} : = \left( \varphi^j + \varepsilon^{2m-2} \, \Gamma_{\varepsilon, {\bf h},
{\bf k}} + W_{\varepsilon, {\bf h}, {\bf k}} + \phi_{\varepsilon, {\bf h}, {\bf
k}}\right) (r_\varepsilon \, \cdot )
\]
in $\bar B_2 - B_1$. Collecting the result of
Proposition~\ref{pr:6.22}, the definition of $W^o_{\varepsilon, {\bf h}, {\bf
k}}$ given in (\ref{eq:doublev}) and the expansion of $\Gamma$ given
in Lemma~\ref{le:NPR1} we find that the function $\psi^{o,j}$ can be
expanded as
\[
\psi^{o,j} = - \tfrac{1}{{m-2}} \, a_j \, \varepsilon^{2m-2} \,
r_\varepsilon^{4-2m} \, |\cdot|^{4-2m} + W^o_{h_j, k_j} + {\mathcal
O}_{{\mathcal C}^{4, \alpha}} (r_\varepsilon^{4})
\]
in dimension $m\geq 3$ while, in dimension $m=2$, in view of the
expansion of $\Gamma$ in Lemma~\ref{le:NPR1}, we have
\[
\psi^{o,j} - \varepsilon^{2} \, (a_j \, \log r_\varepsilon + \varepsilon^2 \, b_j) = a_j \,
\varepsilon^{2} \, \log |\cdot| + W^o_{h_j, k_j} + {\mathcal O}_{{\mathcal
C}^{4, \alpha}} (r_\varepsilon^{4}).
\]
We will replace $\psi^{o,j}$ by $\psi^{o,j} - \varepsilon^2 \, ( a_j \, \log
r_\varepsilon + b_j)$ and there is no loss of generality in doing so since
changing the potential by some constant function does not alter the
corresponding K\"{a}hler\ forms.
According to Proposition~\ref{pr:6.22}, the scalar curvature of the
K\"{a}hler\ form $\omega_{\varepsilon, {\bf h}, {\bf k}}$ is given by
\[
{\bf s} (\omega_{\varepsilon, {\bf h}, {\bf k}}) = {\bf s}(\omega) +
\ip{\xi_{\omega_{\varepsilon, {\bf h}, {\bf k}}} , X_{\varepsilon, {\bf h}, {\bf k}}}
+ \mu_{\bf s} + \varepsilon^{2m-2} \, \lambda + \lambda_{\varepsilon, {\bf h}, {\bf
k}}
\]
where we have defined
\begin{equation}
X_{\varepsilon, {\bf h}, {\bf k}} : = X_{\bf s} + \varepsilon^{2m-2} \, Y' + Y'_{\varepsilon,
{\bf h}, {\bf k}} \in {\mathfrak h}' \label{hvf}
\end{equation}
We now exploit the result of Proposition~\ref{pr:6.2}. We choose
boundary data
\[
{\bf \tilde h} : = (\tilde h_1 , \ldots, \tilde h_n), \qquad {\bf
\tilde k} : = (\tilde k_1 , \ldots, \tilde k_n)
\]
whose components satisfy
\begin{equation}
\int_{\partial B_1} (4m \tilde h_j - \tilde k_j) = 0 \label{eq:concon}
\end{equation} as well as real positive parameters $\tilde {\bf a} : =
(\tilde a_1, \ldots, \tilde a_n)$. For each $j =1, \ldots, n$, we
apply the result of Proposition~\ref{pr:6.2} and define on $\bar
N_{R_\varepsilon /\hat a_j}$ the K\"{a}hler\ form
\[
\varepsilon^2 \, \eta_{\varepsilon, \hat a_j , \nu_j, X_j , \tilde h_j, \tilde k_j}
\]
where
\[
\hat a_j : = \tilde a_j^{\tfrac{1}{{2(m-1)}}}
\]
and
\[
\nu_j : = \tfrac{1}{{|\partial B_{j, r_\varepsilon}|}} \, \int_{\partial B_{j, r_\varepsilon}}
{\bf s} (\omega_{\varepsilon, {\bf h},{\bf k}}) (r_\varepsilon \cdot)
\]
Moreover the scalar curvature of $\varepsilon^2 \, \eta_{\varepsilon, \hat a_j ,
\nu_j, X_j , \tilde h_j, \tilde k_j}$ satisfies
\[
- d \, {\bf s} (\varepsilon^2 \, \eta_{\varepsilon, \hat a_j , \nu_j, X_j , \tilde
h_j, \tilde k_j}) = \eta_{\varepsilon, \hat a_j , \nu_j, X_j , \tilde h_j,
\tilde k_j} ( X_j, -)
\]
with
\begin{equation}
\int_{\partial B_1} \zeta (R_\varepsilon \cdot / \hat a_j) = \nu_j
\label{eq:nuj}
\end{equation}
and $X_j$ is the lift to $\bar N_{R_\varepsilon/\hat a_j}$ of the holomorphic
vector field $X_{\varepsilon, {\bf h}, {\bf k}}$ defined in $B_{j, r_\varepsilon}$. As
explained at the end of section 7, this lifting can be performed in
normal $K$-linear coordinates so that the vector field $X_j$ is a
$K$-invariant Killing vector field for the metric $\hat a_j^{2} \,
\eta$.
According to the analysis of section 6 and the result of
Proposition~\ref{pr:6.2}, this K\"{a}hler\ form can be written as
\[
\varepsilon^2 \, \eta_{\varepsilon, \hat a_j , \nu_j, X_j , \tilde h_j, \tilde k_j} =
i \, \partial \bar \partial \left( \varepsilon^2 \, \hat a_j^{2} \, E_m (u) + \varepsilon^2 \,
W_{\varepsilon, \hat a_j, \tilde h_j, \tilde k_j} + \varepsilon^2 \, \phi_{\varepsilon, \hat
a_j, \nu_j, X_j, \tilde h_j, \tilde k_j} \right)
\]
in $B_{R_\varepsilon/\hat a_j} - B_{R_\varepsilon/2\hat a_j}$. We define the function
\[
\psi^{i,j} : = \left( \varepsilon^2 \, \hat a_j^{2} \, (E_m - \tfrac{1}{2}
\, |\cdot|^2 ) + \varepsilon^2 \, W_{\varepsilon, \hat a_j, \tilde h_j, \tilde k_j} +
\varepsilon^2 \, \phi_{\varepsilon, \hat a_j, \nu_j, X_j, \tilde h_j, \tilde k_j}
\right) ( R_\varepsilon \cdot / \hat a_j),
\]
defined in $\bar B_{1} - B_{1/2}$. Using the analysis of section 6
as well as the result of Proposition~\ref{pr:6.2}, we have the
expansion
\[
\psi^{i,j} = - \, \tilde a_j \, \varepsilon^{2m-2} \, r_\varepsilon^{4-2m} \, |\cdot
|^{4-2m} + \varepsilon^2 \, W^i_{\tilde h_j, \tilde k_j} + {\mathcal
O}_{{\mathcal C}^{4, \alpha}}(r_\varepsilon^{4})
\]
in $\bar B_1 - B_{1/2}$, when $m \geq 3$ while we have
\[
\psi^{i,j} - \tilde a_j \, \varepsilon^2 \log \varepsilon = \tilde a_j \, \varepsilon^{2} \,
\log |\cdot | + \varepsilon^2 \, W^i_{\tilde h_j, \tilde k_j} + {\mathcal
O}_{{\mathcal C}^{4, \alpha}}(r_\varepsilon^{4})
\]
when $m=2$. Again, in dimension $m=2$ we will replace $\psi^{i,j}$
by $\psi^{i,j} - \tilde a_j \, \varepsilon^2 \log \varepsilon $ since this does not
affect the definition of the corresponding K\"{a}hler\ metric.
The proof now follows {\it verbatim} the proof in \cite{ap2}. We
first describe the connected sum construction. By construction,
\[
M_\varepsilon : = M \sqcup _{{p_{1}, \varepsilon}} N_1 \sqcup_{{p_{2},\varepsilon}} \dots
\sqcup _{{p_n, \varepsilon}} N_n ,
\]
is obtained by connecting $M_{r_\varepsilon}$ with the truncated spaces
$N_{R_\varepsilon/ \hat a_1}, \ldots, N_{R_\varepsilon/ \hat a_n}$. The
identification of the boundary $\partial B_{j , r_\varepsilon}$ in $M_{r_\varepsilon}$
with the boundary $\partial N_{R_\varepsilon/\hat a_j}$ of $N_{R_\varepsilon/ \hat
a_j}$ is performed using the change of variables
\[
(z^{1} , \ldots, z^{m} ) = \varepsilon \, \hat a_j \, (u^{1} , \ldots,
u^{m}) ,
\]
where $(z^{1}, \ldots, z^{m} )$ are the coordinates in $B_{j , r_0}$
and $(u^{1}, \ldots, u^{m})$ are the coordinates in $C_1$.
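With the choice (\ref{eq:RRe}) of $R_\varepsilon$, this change of variables does identify the two boundary spheres; as a simple consistency check,
\[
|z| = r_\varepsilon \qquad \Longleftrightarrow \qquad |u| = \tfrac{r_\varepsilon}{{\varepsilon \, \hat a_j}} = \tfrac{R_\varepsilon}{{\hat a_j}} ,
\]
which is the boundary of $\bar N_{R_\varepsilon / \hat a_j}$.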
The problem is now to determine these boundary data and parameters
in such a way that, for each $j=1, \ldots, n$, the partial
derivatives up to order $3$ of the functions $\psi^{o,j}$ and
$\psi^{i,j}$ coincide on $\partial B_{1}$.
In fact, we shall solve the following system of equations
\begin{equation}
\psi^{o,j} =\psi^{i,j} , \qquad \partial_r \, \psi^{o,j} = \partial_r \,
\psi^{i,j} , \qquad \Delta \, \psi^{o,j} = \Delta \, \psi^{i,j},
\qquad \partial_r \, \Delta \, \psi^{o,j} = \partial_r \, \Delta \,
\psi^{i,j}, \label{eq:6.300}
\end{equation}
on $\partial B_{1}$ where $r =|v|$ and $v= (v^{1}, \ldots, v^{m})$ are
coordinates in ${\mathbb C}^{m}$.
Let us assume that we have already solved this problem. The first
identity in (\ref{eq:6.300}) implies that $\psi^{o,j}$ and
$\psi^{i,j}$ as well as all their $k$-th order partial derivatives
with respect to any vector field tangent to $\partial B_{1}$, with $k \leq
4$, agree on $\partial B_{1}$. The second identity in (\ref{eq:6.300})
then shows that $\partial_r \psi^{o,j}$ and $\partial_r \psi^{i,j}$ as well
as all their $k$-th order partial derivatives with respect to any
vector field tangent to $\partial B_{1}$, with $k \leq 3$, agree on
$\partial B_{1}$. Using the decomposition of the Laplacian in polar
coordinates, it is easy to check that the third identity implies
that $\partial_r^{2} \psi^{o,j}$ and $\partial_r^{2} \psi^{i,j}$ as well as
all their $k$-th order partial derivatives with respect to any vector
field tangent to $\partial B_{1}$, with $k \leq 2$, agree on $\partial
B_{1}$. Finally, the last identity in (\ref{eq:6.300}) implies
that $\partial_r^{3} \psi^{o,j}$ and $\partial_r^{3} \psi^{i,j}$ as well as
all their first order partial derivatives with respect to any vector
field tangent to $\partial B_{1}$, agree on $\partial B_{1}$.
Moreover, the scalar curvature of the K\"{a}hler\ form
\[
\omega^{o,j} : = i \, \partial \, \bar \partial \, ( \tfrac{1}{2} \,
|v|^{2} + \psi^{o,j} ),
\]
defined in $\bar B_{2} - B_{1}$ and the scalar curvature of the K\"{a}hler\
form
\[
\omega^{i,j} : = i \, \partial \, \bar \partial \, ( \tfrac{1}{2} \,
|v|^{2} + \psi^{i,j}),
\]
defined in $\bar B_{1} - B_{1/2}$, match on $\partial B_1$ to produce a
${\mathcal C}^{2}$ function on $\bar B_2-B_{1/2}$. To see this
observe that both scalar curvature functions have the same mean
value on $\partial B_1$ (this was precisely the purpose of
(\ref{eq:nuj})) and they satisfy
\[
-d {\bf s} = \hat \omega (X , -)
\]
for the same vector field $X$. Since the K\"{a}hler\ form is already
${\mathcal C}^1$, we find that the right hand side is ${\mathcal
C}^1$ and hence the scalar curvature function is ${\mathcal C}^2$.
This then implies that any $k$-th order partial derivatives of the
functions $\psi^{o,j}$ and $\psi^{i,j}$, with $k \leq 4$, coincide
on $\partial B_{1}$.
Therefore, we conclude that the function $\psi$ defined by $\psi : =
\psi^{o,j}$ in $\bar B_{2} -B_{1}$ and $\psi : = \psi^{i,j}$ in
$\bar B_{1} - B_{1/2}$ is ${\mathcal C}^{4}$ in $\bar B_{2} -
B_{1/2}$ and is a solution of the nonlinear elliptic partial
differential equation
\[
{\bf s} \,\left( i \, \partial\, \bar \partial ( \tfrac{1}{2} \, |v|^{2}
+ \psi ) \right) = f ,
\]
where $f$ is defined by \[ -df = i \partial \, \bar \partial (
\tfrac{1}{2} \, |v|^2 + \psi) (X, -)
\]
and hence is a nonlocal first order differential operator in $\psi$.
It then follows from elliptic regularity theory together with a
bootstrap argument that the function $\psi$ is in fact smooth.
Hence, by gluing the K\"{a}hler\ metrics $\omega_{\varepsilon, {\bf h},{\bf k}}$
defined on $\bar M_{r_\varepsilon}$ with the metrics $\varepsilon^{2} \, \eta_{\varepsilon,\hat
a_j , \nu_j, X_j, \tilde h_j, \tilde k_j}$ defined on $\bar
N_{R_\varepsilon/\hat a_j}$, we will produce a K\"{a}hler\ metric on $M_{\varepsilon}$ which
has constant scalar curvature. This will end the proof of
Theorem~\ref{mainthm}.
It remains to explain how to find the boundary data
\[
{\bf h} =(h_1, \ldots, h_n), \quad {\bf k} = (k_1, \ldots, k_n),
\quad {\bf \tilde h} = (\tilde h_1, \ldots, \tilde h_n) \qquad
\mbox{and} \qquad {\bf \tilde k} = (\tilde k_1, \ldots, \tilde k_n)
\]
as well as the parameters ${\bf \tilde a} = (\tilde a_1, \ldots,
\tilde a_n)$.
We change the boundary data functions $h_j$ and $k_j$ into $h'_j$
and $k'_j$ defined by
\[
\begin{array}{rllll}
h'_j & : = & (\tilde a_j - a_j) \, r_\varepsilon^{4-2m} \, \varepsilon^{2m-2}
+ h_j \\[3mm]
k'_j & : = & 4 \, (m-2) ( a_j - \tilde a_j) \, \varepsilon^{2m-2} \,
r_\varepsilon^{4-2m} + k_j
\end{array}
\]
when $m\geq 3$ and
\[
\begin{array}{rllll}
h'_j & : = & h_j \\[3mm]
k'_j & : = & 4 \, ( a_j - \tilde a_j) \, \varepsilon^{2} \, + k_j
\end{array}
\]
when $m=2$. Recall that the functions $k_j$ are assumed to have
mean $0$ while the functions $k'_j$ are not assumed to satisfy such
a constraint anymore. The role of the scalar $\tilde a_j - a_j$ is
precisely to recover this lost degree of freedom in the assignment
of the boundary data.
If $k$ is a constant function on $\partial B_1$, we extend the
definition of $W^{o}_{h,k}$ by setting
\[
W^o_{0 , k} : = \tfrac{k}{4 (m-2)} \, (|z|^{2-2m} - |z|^{4-2m})
\]
when $m\geq 3$ and by
\[
W^o_{0 , k} = \tfrac{k}{4} \, \log |z|^{2}
\]
when $m=2$.
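These extensions are consistent with the role of $(h,k)$ as the boundary values of $(W, \Delta W)$ on $\partial B_1$: using the formula $\Delta f = f'' + (2m-1)\, f'/r$ for the Laplacian of a radial function on ${\mathbb R}^{2m}$, one checks that $W^o_{0,k}$ vanishes on $\partial B_1$ while $\Delta W^o_{0,k} = k \, |z|^{2-2m}$, which equals $k$ on the boundary. A minimal numerical verification (a Python sketch; the function names and step sizes are ours):

```python
import math

def laplacian_radial(f, r, n, h=1e-4):
    """Numerical Laplacian of a radial function on R^n:
    Delta f = f'' + (n-1)/r * f', via central differences."""
    fpp = (f(r + h) - 2.0 * f(r) + f(r - h)) / h**2
    fp = (f(r + h) - f(r - h)) / (2.0 * h)
    return fpp + (n - 1) / r * fp

def W_outer(k, m):
    """W^o_{0,k} on the exterior of the unit ball, for m >= 3."""
    return lambda r: k / (4.0 * (m - 2)) * (r**(2 - 2*m) - r**(4 - 2*m))

def W_outer_m2(k):
    """W^o_{0,k} for m = 2."""
    return lambda r: k / 4.0 * math.log(r**2)
```

In both cases the radial profile vanishes at $r=1$ and its Laplacian there equals $k$, as required.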
We also do not assume that $\tilde h_j$ and $\tilde k_j$
satisfy (\ref{eq:concon}) anymore. If $h$ is a constant function on
$\partial B_1$, we extend the definition of $W^{i}_{h,k}$ by setting
\[
W^i_{h , 0} : = h .
\]
Finally, we set
\[
\tilde h_j' := \varepsilon^{2} \, \tilde h_j \qquad \qquad \tilde k_j' :=
\varepsilon^{2} \, \tilde k_j
\]
With these new variables, the expansions for both $\psi^{o,j}$ and
$\psi^{i,j}$ can now be written as
\[
\begin{array}{rllll}
\psi^{o,j} & = & - \tilde a_j \, r_\varepsilon^{4-2m} \, \varepsilon^{2m-2} \,
|\cdot |^{4-2m} + W^o_{h_j', k_j'} + {\mathcal O}_{{\mathcal C}^{4, \alpha}} (r_\varepsilon^{4}) \\[3mm]
\psi^{i,j} & = & - \tilde a_j \, r_\varepsilon^{4-2m} \, \varepsilon^{2m-2} \, |\cdot
|^{4-2m} + \, W^i_{\tilde h'_j , \tilde k'_j } + {\mathcal
O}_{{\mathcal C}^{4, \alpha}} (r_\varepsilon^{4}) .
\end{array}
\]
when $m\geq 3$ and
\[
\begin{array}{rllll}
\psi^{o,j} & = & \tilde a_j \, \varepsilon^2 \, \log |\cdot | + W^o_{h_j', k_j'} + {\mathcal O}_{{\mathcal C}^{4, \alpha}} (r_\varepsilon^{4}) \\[3mm]
\psi^{i,j} & = & \tilde a_j \, \varepsilon^2 \, \log |\cdot | + \,
W^i_{\tilde h'_j , \tilde k'_j } + {\mathcal O}_{{\mathcal C}^{4,
\alpha}} (r_\varepsilon^{4}) .
\end{array}
\]
when $m=2$. Observe that, since $\tilde h_j$ and $\tilde k_j$ are
not assumed to satisfy (\ref{eq:concon}) anymore, this has changed
the value of $\psi^{i,j}$ by some constant which will not be
relevant for the computation of the corresponding K\"{a}hler\ form.
The system (\ref{eq:6.300}) we have to solve can now be written as~:
For all $j=1, \ldots, n$
\begin{equation}
\begin{array}{cccccccrllllll}
W^{o}_{h_j', k_j'} & = & W^{i}_{\tilde h_j', \tilde k_j'} & +&
{\mathcal O}_{{\mathcal C}^{4, \alpha}} (r_\varepsilon^{4}) \\[3mm]
\partial_r W^{o}_{h_j', k_j'} & = & \partial_r W^{i}_{\tilde h_j', \tilde
k_j'} & + & {\mathcal O}_{{\mathcal C}^{3, \alpha}} (r_\varepsilon^{4}) \\[3mm]
\Delta W^{o}_{h_j', k_j'} & = & \Delta W^{i}_{\tilde h_j', \tilde
k_j'} & + & {\mathcal O}_{{\mathcal C}^{2, \alpha}}
(r_\varepsilon^{4}) \\[3mm]
\partial_r \Delta W^{o}_{h_j', k_j'} & = & \partial_r \Delta W^{i}_{\tilde
h_j', \tilde k_j'} & + & {\mathcal O}_{{\mathcal C}^{1, \alpha}}
(r_\varepsilon^{4}) \end{array} \label{eq:6.3000000}
\end{equation}
on $\partial B_{1}$.
By definition of $W^o_{h,k}$ and $W^i_{h,k}$, the first and third
equations reduce to
\begin{equation}
\begin{array}{rllllll}
h_j' & = & \tilde h_j' + {\mathcal
O}_{{\mathcal C}^{4, \alpha}} (r_\varepsilon^{4}) \\[3mm]
k_j' & = & \tilde k_j' + {\mathcal O}_{{\mathcal C}^{2, \alpha}}
(r_\varepsilon^{4}) \end{array} \label{eq:0077}
\end{equation}
Inserting these into the second and fourth sets of equations and
using the linearity of the mappings $(h,k) \longmapsto W^o_{h,k}$ and
$(h,k) \longmapsto W^i_{h,k}$, the second and fourth equations become
\begin{equation}
\begin{array}{cccccccrllllll}
\partial_r W^{o}_{h_j', k_j'} & = & \partial_r W^{i}_{h_j', k_j'} & + &
{\mathcal O}_{{\mathcal C}^{3, \alpha}} (r_\varepsilon^{4}) \\[3mm]
\partial_r \Delta W^{o}_{h_j', k_j'} & = & \partial_r \Delta W^{i}_{h_j',
k_j'} & + & {\mathcal O}_{{\mathcal C}^{1, \alpha}} (r_\varepsilon^{4})
\end{array} \label{eq:007}
\end{equation}
for all $j=1, \ldots, n$. We now make use of the following result
whose proof can be found in \cite{ap}~:
\begin{lemma}
The mapping
\[
\begin{array}{rclclll}
\mathcal P :& {\mathcal C}^{4,\alpha}(\partial B_1 ) \times {\mathcal
C}^{2,\alpha} (\partial B_1) & \longrightarrow & {\mathcal
C}^{3,\alpha}(\partial B_1) \times {\mathcal C}^{1,\alpha}(\partial B_1) \\[3mm]
& (h,k) &\longmapsto & (\partial_{r} \, (W^{i}_{h, k}- W^{o}_{h,
k}), \partial_{r} \, \Delta \, (W^{i}_{h, k}- W^{o}_{h, k})) ,
\end{array}
\]
is an isomorphism. \label{le:6.3}
\end{lemma}
Using Lemma~\ref{le:6.3}, (\ref{eq:007}) reduces to
\begin{equation}
\begin{array}{cccccccrllllll}
h_j'& = & {\mathcal O}_{{\mathcal C}^{4, \alpha}} (r_\varepsilon^{4}) \\[3mm]
k_j'& = & {\mathcal O}_{{\mathcal C}^{2, \alpha}} (r_\varepsilon^{4})
\end{array}
\label{008}
\end{equation}
for all $j=1, \ldots, n$. This, together with (\ref{eq:0077}),
yields a fixed point problem which can be written as
\[
({\bf h'} , {\bf \tilde h'} , {\bf k'}, {\bf \tilde k'} ) = S_\varepsilon (
{\bf h'} , {\bf \tilde h'} , {\bf k'} , {\bf \tilde k'} ) ,
\]
and we know from (\ref{eq:0077}) and (\ref{008}) that the nonlinear
operator $S_\varepsilon$ satisfies
\[
\| S_\varepsilon ({\bf h'} , {\bf \tilde h'} , {\bf k'}, {\bf \tilde k'} )
\|_{({\mathcal C}^{4, \alpha})^{2n} \times ({\mathcal C}^{2, \alpha}
)^{2n}} \leq c_0 \, r_\varepsilon^{4} ,
\]
for some constant $c_0 >0$ which does not depend on $\kappa$,
provided $\varepsilon \in (0, \varepsilon_\kappa)$. We finally choose
\[
\kappa = 2 \, c_0 ,
\]
and $\varepsilon \in (0, \varepsilon_{\kappa})$. We have therefore proved that $S_\varepsilon$
is a map from
\[
A_\varepsilon : = \left\{ ({\bf h'} , {\bf \tilde h'} , {\bf k'}, {\bf
\tilde k'} ) \in ({\mathcal C}^{4, \alpha})^{2n} \times ({\mathcal
C}^{2, \alpha})^{2n} \quad : \quad \| ({\bf h'} , {\bf \tilde h'} ,
{\bf k'}, {\bf \tilde k'} ) \|_{({\mathcal C}^{4, \alpha})^{2n}
\times ({\mathcal C}^{2, \alpha})^{2n} } \leq \kappa \, r_\varepsilon^{4}
\right\} ,
\]
into itself. It follows from (\ref{eq:6.14}) and (\ref{eq:6.29})
that, reducing $\varepsilon_\kappa$ if this is necessary, $S_\varepsilon$ is a
contraction mapping from $A_\varepsilon$ into itself for all $\varepsilon \in (0,
\varepsilon_\kappa)$. Therefore, $S_\varepsilon$ has a fixed point in this set. This
completes the proof of the existence of a solution of
(\ref{eq:6.300}).
The proof of the existence on $M_{\varepsilon}$ of a K\"{a}hler\ form $\omega_\varepsilon$
which has constant scalar curvature is therefore complete. Observe
that the scalar curvature of $\omega$ and $\omega_\varepsilon$ are close
since the estimate
\[
|{\bf s} (\omega_\varepsilon) - {\bf s} (\omega)|\leq c \, \varepsilon^{2m-2}
\]
follows directly from the construction. We also have the estimates
\[
|\tilde a_j - a_j| \leq c \, r_\varepsilon^{2m}\, \varepsilon^{2-2m} = c \,
\varepsilon^{\tfrac{2}{2m+1}}
\]
which is the last estimate appearing in the statement of
Theorem~\ref{mainthm}.
Since the construction of the K\"{a}hler\ form $\omega_\varepsilon$ is performed
using fixed point theorems for contraction mappings, it should be
clear that $\omega_\varepsilon$ depends continuously on the parameters of the
construction (such as the K\"{a}hler\ form $\omega$, the points $p_j$ and
the coefficients $a_j$). In particular, when ${\mathfrak h}'' =\{0\}$,
conditions (i), (ii) and (iii) in the statement of
Theorem~\ref{mainthm} are void and in constructing $\omega_\varepsilon$ one is
free to prescribe the parameters $a_j$. A simple degree argument
then shows that given $A \subset ({\mathbb R}^*_+)^n$ the image of
the mapping
\[
(a_1, \ldots, a_n) \longmapsto (\tilde a_1, \ldots, \tilde a_n)
\]
contains $A$ provided $\varepsilon$ is chosen small enough. This completes
the proof of the last remark at the end of the statement of
Theorem~\ref{mainthm}. In the case where ${\mathfrak h}''\neq \{0\}$, one
needs to apply a modified version of the analysis of \cite{ap2}
which guarantees that the set of weights is open (keeping the
required symmetries) provided no nontrivial element of ${{\mathfrak h}''}$
vanishes at all points we blow up.
The proof of Proposition~\ref{mainprop} also follows from the
construction itself. Indeed, when $\omega$ is a constant scalar
curvature K\"{a}hler\ form, then $X_{\bf s} =0$. However, in the expansion
of Proposition~\ref{pr:6.22} one directly sees that the scalar
curvature of $\omega_\varepsilon$ will not be constant if the vector field
$Y'$ is not zero. Now $Y' =0$ if and only if $\sum_j a_j \,
\xi_{\omega} (p_j) =0$. This completes the proof of
Proposition~\ref{mainprop}.
\section{Examples and Comments}
\subsection{Toric varieties}
If $M^m$ is a toric variety, then we can take $K=T^m$ and ${\mathfrak h} =
{\mathfrak k}$ by virtue of Proposition \ref{fixx}.
Theorem \ref{mainthm-2} asserts that one can blow up {\em any
set of points contained in the fixed-point set of the torus-action}
and the weights $a_j >0$ can be chosen arbitrarily. This is because
the algebra of {\em good vector fields} that extends to the blown up
manifold is precisely the Lie algebra of the torus in this case, so
${\mathfrak h}''=\{0\}$ and conditions (i) and (ii) become vacuous in this
case.
This type of example leads naturally to some observations~:
\begin{enumerate}
\item[(i)]
As mentioned in the introduction, our procedure can be iterated
since blowing up a toric variety at such points preserves the toric
structure. Therefore, we obtain extremal metrics on any such
iterated blow up. Among manifolds of this type, Donaldson \cite{do}
has studied one particular iterated blow up of ${\mathbb P}^{2}$,
where all successive
blow ups take place at fixed points of the torus action at the previous
step. The number of iterations cannot be explicitly determined, yet
these manifolds fall into the category to which our result applies.
Nonetheless, Donaldson's analysis shows that if on these manifolds
we take K\"{a}hler\ classes sufficiently far from the boundary of the K\"{a}hler\ cone,
then no extremal representative exists.
This
shows that even for these deceptively simple examples the
understanding of the maximal range of application of our result
(namely the determination
of the optimal value of $\varepsilon_0$) is far from being trivial. \\[3mm]
\item[(ii)]
It is known that not every manifold admits an extremal metric but
Tian \cite{ti} has conjectured that every K\"{a}hler\ manifold degenerates
in a suitable sense to a manifold with an extremal metric.
\noindent We know two types of manifolds which do not carry any
extremal metric~: The projectivization of unstable rank two vector
bundles over compact Riemann surfaces \cite{bdb} (which verify
Tian's conjecture by their very construction) and some special
iterated blow up of ${\mathbb P}^{2}$ constructed by Levine
\cite{le}.
\noindent Our result can be used to show how to fit these examples
in Tian's conjecture. For example, one observes that in Levine's
examples all the blow ups are of the type allowed by our
construction except the last one where two points which do not
correspond to vertices of the polytope are blown up. Nevertheless,
this last manifold degenerates to the manifold obtained by blowing
up one further vertex and then another point on the last exceptional
divisor and, by our main theorem, extremal metrics do exist on this
limit manifold.
\end{enumerate}
Besides these general results, we now look at the effect of our construction
in specific cases.
Firstly, since ${\mathbb P}^{m}$ is a toric manifold the construction of the new
extremal metrics on its blow ups at $n\leq m+1$ linearly independent points
follows. The fact that the resulting extremal metrics have constant scalar curvature
iff we blow up $m+1$ points with equal weights follows from Proposition \ref{mainprop}
and from our preceding work \cite{ap2}. Nonetheless this last result can be easily
obtained directly.
Let us in fact look at the Futaki invariants of the resulting manifolds.
Let us recall Mabuchi's result \cite{ma1} stating that the Futaki invariant of a K\"{a}hler\
manifold vanishes if and only if the barycenter of the associated polytope lies
in the origin.
It is in general a hard combinatorial task, knowing the polytope, to determine
its barycenter
(as the reader of \cite{nak} can easily see),
and this prevents us from stating general results; yet, restricting ourselves to
projective spaces,
we can get a clear picture.
The polytope associated to ${\mathbb P}^{m}$ with its K\"{a}hler -Einstein metric is
well known to be the simplex in $\mathbb{R}^m$ with vertices
\[
p_1=(1,0,\dots,0), \ \dots, \ p_m=(0,\dots,0,1), \qquad p_{m+1}=(-1,\dots, -1)
\]
and the vertices are exactly the images of the fixed points of the torus action.
Let us recall also the effect of blowing up one of these points. For all the toric
geometry we will use
we refer to \cite{gu}.
Given a vertex $p_j$, the polytope associated to the manifold
$(Bl_{p_j}M, \pi^*[\omega] - aPD[E])$ is the one
of $M$ where the vertex $p_j$ is replaced by the $m$ vertices $q_j^k$, $k=1,\dots, m+1$,
$k\neq j$, where $q_j^k = p_j + a(p_k - p_j)$.
Blowing up a vertex with weight $a>0$ has then the effect of {\em cutting} the starting
polytope removing a simplex of ``size'' $a$.
In general it is possible for the barycenter to stay at the origin even when
blowing up fewer points than the whole set of vertices and with uncontrollable weights.
For example, let
us take as the base manifold the blow up of ${\mathbb P}^2$ at three
points which are not aligned. The polytope associated to the
canonical class $3\pi^*[\omega_{FS}] - \left( PD[E_1] + PD[E_2] + PD
[E_3] \right)$ is the hexagon with vertices $(1,0)$, $(0,1)$,
$(1,1)$, $(-1,0)$, $(0,-1)$, and $(-1,-1)$. The existence of an
Einstein (constant scalar curvature) metric in this class is a
result of Siu \cite{si} and Tian-Yau \cite{ty}. Blowing up two pairs
of vertices symmetric with respect to the origin (hence $4$ points)
with equal pairwise weights, we get new constant scalar curvature
metrics on the blow up.
On the other hand, in the case of ${\mathbb P}^{m}$, plugging in the above
numbers, it is an elementary
calculation to check that the barycenter of the blow up is still at the origin iff $n=m+1$ and
$a_1 = \dots = a_{m+1}$. Calabi's result \cite{ca2}, stating that an extremal metric in
a K\"{a}hler\ class whose Futaki invariant vanishes is of constant scalar curvature,
gives a different proof of our characterization of K\"{a}hler\ constant scalar
curvature metrics among our new extremal ones.
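For $m=2$ the claim can also be checked numerically. The sketch below (Python; the conventions are ours: the corner at $p_j$ is cut by placing the two new vertices $p_j + a_j(p_k - p_j)$ on the adjacent edges, and the barycenter is the area centroid computed from the shoelace formula) confirms that cutting the three corners of the simplex with equal weights keeps the barycenter at the origin, while unequal weights displace it:

```python
def centroid(poly):
    """Area centroid of a simple polygon given by ordered vertices."""
    a = cx = cy = 0.0
    for (x0, y0), (x1, y1) in zip(poly, poly[1:] + poly[:1]):
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return cx / (6 * a), cy / (6 * a)

def cut_corners(tri, weights):
    """Cut each corner p_j of a triangle, replacing it by the two points
    p_j + a_j (p_k - p_j) on the adjacent edges (a hexagon for small a_j)."""
    hexagon = []
    n = len(tri)
    for j, (a, p) in enumerate(zip(weights, tri)):
        prev, nxt = tri[(j - 1) % n], tri[(j + 1) % n]
        hexagon.append((p[0] + a * (prev[0] - p[0]), p[1] + a * (prev[1] - p[1])))
        hexagon.append((p[0] + a * (nxt[0] - p[0]), p[1] + a * (nxt[1] - p[1])))
    return hexagon

# Moment polytope of P^2 in the Kaehler-Einstein normalization used above
SIMPLEX = [(1.0, 0.0), (0.0, 1.0), (-1.0, -1.0)]
```

Equal weights at all three vertices leave the centroid at the origin by symmetry; any asymmetric choice moves it away.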
Once we know that all manifolds obtained by blowing up any number $n$ of points
in general position in ${\mathbb P}^{m}$ admit an extremal metric
(for $n\geq m+2$ these are in fact constant scalar curvature metrics \cite{ap2}),
let us focus on the K\"{a}hler\ classes we can reach for $m=2$.
Recall that ${\mathbb P}^{1} \times {\mathbb P}^{1}$ is itself a toric variety whose polytope corresponding to the K\"{a}hler\ class $\alpha [\omega_{FS}^1] + \beta [\omega_{FS}^2]$,
$\alpha, \beta >0$, where $[\omega_{FS}^j]$ is half the K\"{a}hler -Einstein metric on the $j$-th factor, is the rectangle with vertices $(\alpha, \beta), (-\alpha, \beta),(\alpha, -\beta), (-\alpha, -\beta)$. Let us also briefly recall the following classical construction~: take $p_1, p_2 \in {\mathbb P}^{2}$ and consider $M = Bl_{p_1,p_2}{\mathbb P}^{2}$.
$M$ contains three $(-1)$-curves, the two exceptional divisors $E_1$, $E_2$ and the proper transform $L$ of the line in ${\mathbb P}^{2}$ passing through $p_1$ and $p_2$.
$M$ is in fact biholomorphic to
$Bl_q({\mathbb P}^{1} \times {\mathbb P}^{1})$ for some (hence {\em any}) choice of $q\in {\mathbb P}^{1} \times {\mathbb P}^{1}$. In fact, contracting (``blowing down") $L$
we get a manifold biholomorphic to ${\mathbb P}^{1} \times {\mathbb P}^{1}$ where the rulings correspond to the pencils of lines through $p_1$ and $p_2$.
Now, having called $A_1 = [{\mathbb P}^{1} \times \{pt\}]$,
$A_2 = [\{pt\} \times {\mathbb P}^{1}]$, and $E$ the exceptional divisor in
$Bl_q({\mathbb P}^{1} \times {\mathbb P}^{1})$, it is easy to check the
correspondence of classes
\[
\alpha PD[A_1] + \beta PD[A_2] -\lambda PD[E] \leftrightarrow
(\alpha+\beta-\lambda)\pi^*[\omega_{FS}] - (\alpha-\lambda) PD[E_1]
-(\beta -\lambda)PD[E_2]\, .
\]
Hence our new extremal metrics on $Bl_q({\mathbb P}^{1} \times {\mathbb P}^{1})$,
which lie in the classes $\alpha PD[A_1] + \beta PD[A_2] - \varepsilon^2 \lambda PD[E]$,
give extremal metrics in the classes of $Bl_{p_1,p_2}{\mathbb P}^{2}$
\[
\pi^*[\omega_{FS}] - \, \tfrac{\alpha - \varepsilon^2 \lambda}{\alpha+\beta -\varepsilon^2\lambda}\, PD[E_1] -
\tfrac{\beta - \varepsilon^2\lambda}{\alpha+\beta-\varepsilon^2\lambda}\, \, PD [E_2].
\]
Hence we reach a whole neighborhood of the boundary line $a+b=1$ in the set of
K\"{a}hler\ classes $\pi^*[\omega_{FS}] - \, a\, PD[E_1] -
b \, PD [E_2]$ of $Bl_{p_1,p_2}{\mathbb P}^{2}$, with $a+b \leq 1$, $a,b >0$.
This proves Corollary \ref{claproj}, $(1)$.
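As a sanity check of this normalization step, dividing the class $(\alpha+\beta-\varepsilon^2\lambda)\,\pi^*[\omega_{FS}] - (\alpha-\varepsilon^2\lambda)\, PD[E_1] - (\beta - \varepsilon^2\lambda)\, PD[E_2]$ by the coefficient of $\pi^*[\omega_{FS}]$ gives coefficients $(a,b)$ with $a+b = 1 - \varepsilon^2\lambda/(\alpha+\beta-\varepsilon^2\lambda) < 1$, so the classes indeed approach the line $a+b=1$ as $\varepsilon \to 0$. A short exact-arithmetic sketch (Python; \texttt{lam} stands for $\varepsilon^2\lambda$):

```python
from fractions import Fraction

def normalized_classes(alpha, beta, lam):
    """Coefficients (a, b) of PD[E_1], PD[E_2] after rescaling the class
    (alpha+beta-lam) H - (alpha-lam) E_1 - (beta-lam) E_2
    so that the coefficient of H = pi^*[omega_FS] is 1."""
    total = alpha + beta - lam
    return (alpha - lam) / total, (beta - lam) / total
```

The defect $1-(a+b)$ equals $\lambda\varepsilon^2$ divided by the total, so it shrinks to zero with $\varepsilon$.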
Another similar construction, known as the Cremona transformation (see e.g. \cite{har}, pages 397--399), allows us to prove Corollary \ref{claproj}, $(2)$. We wish to thank M. Abreu for bringing it to our attention.
This time we construct an automorphism of $Bl_{p_1,p_2,p_3}{\mathbb P}^{2}$,
when the points do not lie on a complex line, in the following way~:
call $l_{jk}$ the lines in ${\mathbb P}^{2}$ through $p_j$ and $p_k$ and $L_{jk}$ their proper transforms. The key observation this time is that blowing down $L_{jk}$
we are left with a new copy of ${\mathbb P}^{2}$, where the new coordinate lines
are the exceptional divisors of the original blow up. The resulting automorphism of
$Bl_{p_1,p_2,p_3}{\mathbb P}^{2}$ has the following action in cohomology, where we indicate by $F_j$ the exceptional divisors in the new copy of
$Bl_{p_1,p_2,p_3}{\mathbb P}^{2}$~:
\[
\pi^*[\omega_{FS}] - PD[E_1] - PD[E_2] \leftrightarrow PD[F_1] ,
\]
\[
\pi^*[\omega_{FS}] - PD[E_1] - PD[E_3] \leftrightarrow PD[F_2] ,
\]
\[
\pi^*[\omega_{FS}] - PD[E_2] - PD[E_3] \leftrightarrow PD[F_3] ,
\]
\[
\pi^*[\omega_{FS}] \leftrightarrow 2\pi^*[\omega_{FS}] - PD[F_1] - PD[F_2] - PD[F_3].
\]
Hence
\[
\pi^*[\omega_{FS}] -a PD[E_1] - b PD[E_2] - c PD[E_3]
\]
corresponds to
\[
(2-a-b-c)\pi^*[\omega_{FS}] - (1-a-b) PD[F_1] - (1-a-c) PD[F_2] - (1-b-c) PD[F_3].
\]
This gives the sought K\"{a}hler\ classes as claimed.
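These cohomology identities can be verified mechanically. Encoding a class $d\,\pi^*[\omega_{FS}] - x_1\, PD[E_1] - x_2 \, PD[E_2] - x_3\, PD[E_3]$ as the vector $(d, x_1, x_2, x_3)$, the four correspondences above define a linear map which squares to the identity (the Cremona transformation is an involution) and reproduces the displayed formula. A small sketch (Python, plain lists; the matrix is read off from the displayed relations):

```python
# Class d*H - x1*E1 - x2*E2 - x3*E3 encoded as (d, x1, x2, x3);
# each row gives one coordinate of the image under the Cremona transformation.
CREMONA = [
    [2, -1, -1, -1],
    [1, -1, -1,  0],
    [1, -1,  0, -1],
    [1,  0, -1, -1],
]

def apply_map(mat, vec):
    """Matrix-vector product over the integers."""
    return [sum(m * v for m, v in zip(row, vec)) for row in mat]
```

Applying the map twice returns every class to itself, and the image of $\pi^*[\omega_{FS}] - a\,PD[E_1] - b\,PD[E_2] - c\,PD[E_3]$ matches the display above.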
\subsection{Sporadic examples}
In the case where the manifold we study is the projective space, we
can also study the existence of extremal metrics on the
blown up manifold when the position of the blown up points is not
generic, hence leaving the toric world.
For example in \cite{ap2} it was shown how to construct K\"{a}hler\
constant scalar curvature metrics on ${\mathbb P}^2$ blown up at $4$
points, $3$ of which were lying on a line. Considering extremal
metrics instead of constant scalar curvature metrics, one can do
more~: for example, consider the points
\[
p_1= [0 : \tfrac{1}{\sqrt{2}} : \tfrac{1}{\sqrt{2}}], \, \,
p_2 = [0 : \tfrac{\alpha}{\sqrt{|\alpha|^2 + |\beta|^2}} : \tfrac{\beta}{\sqrt{|\alpha|^2 + |\beta|^2}}], \,\,
p_3 = [0 : \tfrac{\beta}{\sqrt{|\alpha|^2 + |\beta|^2}} : \tfrac{\alpha}{\sqrt{|\alpha|^2 + |\beta|^2}}]
\]
and the group $K = S^{1}$ whose action on ${\mathbb P}^2$ is given
by
\[
\begin{array}{crcllllll}
S^1 \times {\mathbb P}^2 & \longrightarrow & {\mathbb P}^2 \\[3mm]
(\theta , [z^{1} : z^{2}: z^{3}]) & \longmapsto & [\theta^{-2} \,
z^{1} : \theta \, z^{2} : \theta \, z^{3} ]
\end{array}
\]
Of course $p_1, p_2$ and $p_3$ are fixed under the action of $K$,
but we want to impose more symmetries working equivariantly with
respect to a discrete group $A$ of permutations of the last two
coordinates. It is easy to check that the space of vector fields of
${\mathfrak h}$ which are invariant under the action of $A$ is given by
\[
{\mathfrak h} _{A} = \mbox{Span} \{ \Re(z^{2} \, \partial _{z^{3}} + z^{3} \,
\partial _{z^{2}}) , \Re(2 \, z^{1} \, \partial_{z^{1}}
- z^{2} \, \partial_{z^{2}} - z^{3} \, \partial_{z^{3}}) \}
\]
It is immediate to see that
\[ {\mathfrak h}' _{A} = \mbox{Span} \{\Re(2 \, z^{1} \, \partial_{z^{1}}
- z^{2} \, \partial_{z^{2}} - z^{3} \, \partial_{z^{3}} )\}
\]
Observe that all points belong to the zero set of the vector fields
in ${\mathfrak h}'$. Condition (ii) in Theorem~\ref{mainthm} is fulfilled
provided
\[
\Re \, (\alpha \, \bar \beta) < 0
\]
with $a_1 = - \tfrac{2 \Re \, (\alpha \, \bar \beta)}{|\alpha|^2 + |\beta|^2}$ and
$a_2=a_3 =1$, while condition
(iii) holds since $ \Re(z^{2} \, \partial _{z^{3}} + z^{3} \,
\partial _{z^{2}}) $ does not vanish at $p_2$ and at $p_3$.
It is important to observe that $a_1$ automatically satisfies the upper bound $a_1\leq 1$.
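Indeed, this bound is elementary: since $-\Re \, (\alpha \, \bar \beta) \leq |\alpha| \, |\beta|$, the arithmetic--geometric mean inequality gives
\[
a_1 = - \frac{2 \, \Re \, (\alpha \, \bar \beta)}{|\alpha|^2 + |\beta|^2} \leq
\frac{2 \, |\alpha| \, |\beta|}{|\alpha|^2 + |\beta|^2} \leq 1 .
\]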
In fact, it has been shown by A. Della Vedova \cite{dv} that for $a_1 >2$
the corresponding polarized manifold is not relative K-stable, hence forbidding the existence of an extremal metric in the corresponding K\"{a}hler\ class
thanks to Sz\'ekelyhidi's result \cite{sz}.
This
gives the construction of extremal K\"{a}hler\ metrics on the blow up of
${\mathbb P}^2$ at $p_1, p_2$ and $p_3$.
The fact that the corresponding
metrics do not have constant scalar curvature follows directly from Proposition~\ref{mainprop},
or implicitly, by observing that the resulting manifold does not satisfy the Matsushima--Lichnerowicz
obstruction, so in fact it does not admit constant scalar curvature metrics in {\em any} K\"{a}hler\ class.
The same remark we observed after Corollary \ref{coproj} also holds
for this example. Indeed, the position of three (even aligned)
points in ${\mathbb P}^2$ can be {\em a priori} prescribed (just
leaving them on some line) with the use of an appropriate
automorphism. Therefore the above calculation implies that {\em any}
set of three aligned points can be blown up and extremal metrics on
the blown up manifold can be found even if in the initial
coordinates the symmetries of the above example are not present. Of
course the change of coordinates required to put the initial set of
points into the above one might change the base Fubini-Study metric
we work with, but this does not change its K\"{a}hler\ class. This
discussion can be summarized in the following~:
\begin{corol}
Given three aligned points $p_1, p_2, p_3$ in ${\mathbb P}^2$ and
weights $a_1, a_2, a_3$ satisfying $a_j=a_k$ for some $j \neq k$ and
$a_l \leq a_j$ for all $l$, there
exists $\varepsilon_0 >0$ and for all $\varepsilon \in (0, \varepsilon_0)$ there exists an
extremal K\"{a}hler\ form of non constant scalar curvature $\omega_\varepsilon$ on
the blow up of ${\mathbb P}^2$ at $p_1, p_2, p_3$ with
\[
\omega_\varepsilon \in \pi^*[\omega_{FS}] - \varepsilon^2 \, \left( a_1\, PD[E_1] +
a_2\, PD[E_2] + a_3 \, PD [E_3] \right)
\]
\label{co:align}
\end{corol}
\noindent As expected, adding points to be blown up, also on the
same line, makes things even simpler. For example let us work out
the situation where $4$ aligned points are to be blown up. In this
case we can avoid using extra symmetries and we can work directly
with a connected group of isometries. Therefore, we now consider the
points
\[
p_1 = [0: 1 : 0], \quad p_2=[0: \tfrac{1}{\sqrt{3}} : \tfrac{1+i}{\sqrt{3}}], \qquad p_3 = [0 : \tfrac{1}{\sqrt{5}} : \tfrac{-2-i}{\sqrt{5}}],
\qquad \mbox{and} \qquad p_4=[0 : \tfrac{1}{\sqrt{2}} : \tfrac{1}{\sqrt{2}}]
\]
and the group $K = S^{1}$, whose action on ${\mathbb P}^2$ is given by
\[
\begin{array}{crcllllll}
S^1 \times {\mathbb P}^2 & \longrightarrow & {\mathbb P}^2 \\[3mm]
(\alpha , [z^{1} : z^{2} : z^{3}]) & \longmapsto & [\alpha \, z^{1} :
z^{2} : z^{3} ]
\end{array}
\]
Of course $p_1, \dots, p_4$ are fixed by the action of $K$. It is
easy to check that ${\mathfrak h}$ is now given by
\[
{\mathfrak h} = \mbox{Span} \{ \Re(z^{2} \partial _{z^{3}} + z^{3} \partial
_{z^{2}}) ,\Re( i \, (z^{2} \partial _{z^{3}} - z^{3} \partial _{z^{2}}))
, \Re(2 z^{1} \partial_{z^{1}} - z^{2}\partial_{z^{2}} -
z^{3}\partial_{z^{3}}), \Re( z^{2}\partial_{z^{2}} - z^{3}\partial_{z^{3}})
\}
\]
We choose
\[
{\mathfrak h}' = \mbox{Span} \{ \Re(2 z^{1} \partial_{z^{1}}
- z^{2}\partial_{z^{2}} - z^{3}\partial_{z^{3}} )\}
\]
Observe that all points belong to the zero set of the vector fields
in ${\mathfrak h}'$ and that none of the nontrivial elements
of ${\mathfrak h}''$ vanishes at all the $p_j$. Our construction then gives extremal (non constant
scalar curvature) K\"{a}hler\ metrics on the blow up of ${\mathbb P}^2$ at
$p_1, \dots, p_4$ with weights $a_1 =1$, $a_2 = 3$, $a_3 = 5$
and $a_4= 2$.
\noindent These examples can be easily extended to projective spaces
of any dimension.
\section{Supplemental Material}
\noindent\emph{Experimental details}
The atoms were collected using a magneto-optical trap and were depumped to the $F=1$ hyperfine state prior to their release (Fig.~\ref{fig:FigSupp}a).
The repelling light sheet was a $\sim 400\,\mathrm{mW}$ elliptical beam with vertical waist of $40\,\mathrm{\mu m}$ and horizontal waist of $1\,\mathrm{cm}$ (full width, $1/e^2$), and was blue-detuned from the $F=1 \rightarrow F'=3$ resonance by $13\,\mathrm{GHz}$ in the fold experiment, and by $5\,\mathrm{GHz}$ in the cusp experiment.
Reducing the initial vertical distribution of the cloud was accomplished by using an elliptical beam to pump the atoms from $F=2$ to $F=1$ only at a thin slice at the center of the atomic cloud, and then shining a removal beam resonant with the $F=2 \rightarrow F'=3$ cycling transition to blow away the rest of the atoms (Fig.~\ref{fig:FigSupp}b). If the temperature of the cloud is kept low enough, the motion of the atoms is dominated by gravity and not by their initial velocities, and the cloud bounces like a rigid ball (Fig.~\ref{fig:FigSupp}c), tracing the expected parabolic path determined by Earth's gravitational acceleration $g$.\\\\
\noindent\emph{Imaging setup}
Fluorescence images of the falling atoms (as in Fig.~\ref{fig:FigSupp}b) were captured by a CCD camera using $1\,\mathrm{ms}$ long pulses of resonant light. Since this process scatters the atoms, every measurement required a new loading and dropping cycle, with varying fall durations before the imaging. To present the vertical density as a function of time (as in Fig.~\ref{fig:FigSupp}c) we integrated consecutive fluorescence images horizontally, and cascaded the resulting column vectors.\\\\
\begin{figure*}[h!]
\begin{center}
\includegraphics[width=0.7\linewidth]{FigSupp.eps}
\caption{\label{fig:FigSupp} \textrm{(a)} Energy levels of the $^{87}$Rb $D_2$ transition. The heating beam, used for tuning the cloud's vertical velocity distribution, as well as the removal beam, are resonant with the cycling transition.
\textrm{(b)} The initially round atomic cloud (left fluorescence image) can be brought to a thin pancake shape by irradiating only its middle section with the depump beam, which pumps the atoms to the $F=1$ ground state. The removal beam is then used to blow away the rest of the atoms that remained in $F=2$ (middle and right images).
\textrm{(c)} Measurement of the atomic density along the $z$-axis as a function of time for a bouncing atomic cloud with small initial spatial spread (standard deviation $\sim0.1\,\mathrm{mm}$), and small vertical velocity spread (standard deviation $\sim0.03\,\mathrm{m/s}$). The cloud follows the same parabolic trajectory expected of a rigid ball (dashed line).\vspace{-0.9cm}}
\label{experiment}
\end{center}
\end{figure*}
\newpage
\noindent\emph{Simulations}
The simulations were performed by propagating the position of $10^4$ atoms with given initial position and velocity distributions, and then blurring the image with a convolution kernel whose width equals the spatial resolution.
The solution of Hamilton's equations for an atom that accelerates downward at $g$ and bounces from a perfect barrier, given the bounce number $k$ is:
\begin{eqnarray}
z(t) &=& (1-4k^2)z_0 + v_0 t - \frac{g t^2}{2}\nonumber\\
&&+\, 2k(t-v_0/g) \sqrt{v_0^2 + 2z_0 g}- \frac{2 v_0^2}{g}k^2 ,\label{zt}\\
v(t) &=& v_0 - g t + 2k\sqrt{v_0^2 + 2z_0 g}.\label{vt}
\end{eqnarray}\label{density}
While the bounce number $k$ can be trivially calculated from $z_0,v_0,t$, it is more easily defined as the only $k \geq 0$ for which $z(z_0,v_0,t,k)$ is nonnegative.
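These closed-form expressions can be evaluated directly; the short Python sketch below (the value of $g$ and the function names are ours) evaluates the solution and recovers the bounce number as the smallest $k \geq 0$ giving a nonnegative height:

```python
import math

G = 9.81  # m/s^2 (assumed value of the gravitational acceleration)

def bounce_state(z0, v0, t, k):
    """Position and velocity from the closed-form solution above, for an atom
    released at height z0 > 0 with velocity v0, after k bounces off z = 0."""
    s = math.sqrt(v0**2 + 2.0 * z0 * G)
    z = ((1.0 - 4.0 * k**2) * z0 + v0 * t - 0.5 * G * t**2
         + 2.0 * k * (t - v0 / G) * s - 2.0 * v0**2 / G * k**2)
    v = v0 - G * t + 2.0 * k * s
    return z, v

def bounce_number(z0, v0, t):
    """Smallest k >= 0 for which the closed-form height is nonnegative."""
    k = 0
    while bounce_state(z0, v0, t, k)[0] < 0.0:
        k += 1
    return k
```

For an atom dropped from rest the trajectory is a sequence of identical parabolic arcs, so the height at the midpoint of every arc is three quarters of the release height.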
\end{document}
\section{Introduction}
\label{intro}
Atom interferometers have demonstrated excellent performances for precision acceleration and rotation
measurements \cite{borde,precision}. Many applications of these sensors, such as tests of fundamental physics in space \cite{fundamental} or
gravimetry \cite{girafe}, require the setup to be compact, transportable and robust in order to operate in relevant
environments (satellite, planes, boats). Intensive research has been carried out over the last few years to develop transportable laser systems
\cite{carraz,tino,syrte,laserice} and, in particular, laser systems as compact and immune to perturbations as possible.
Atom interferometers usually operate with alkali atoms by driving transitions in the near-IR spectrum (852 nm for
Cs, 780 nm for Rb, and 767 nm for K). A light pulse atom interferometer sequence usually consists of three stages. First, a gas of
atoms is cooled, trapped and selected in a magnetically insensitive state. Second, these cold atoms are illuminated by a sequence of
three light pulses driving stimulated Raman transitions performing a Mach-Zehnder type atom interferometer \cite{raman}. Finally,
the phase shift of the interferometer is deduced from fluorescence measurements. In these experiments, we need to use several stable laser
frequencies relative to the atomic transitions of the alkali species considered: one cooling and trapping frequency, one repumping frequency, two Raman
frequencies and one detection frequency. It is noted that the spectral linewidth of the laser needs to be smaller than the linewidth of the atomic transition
for the cooling stage (6 MHz for Rb). Additionally, in order to realize stimulated Raman transitions, an even narrower linewidth laser is required because the frequency
noise of the laser induces noise on the atom interferometer measurement \cite{sensitivity}. This aspect is very important for gravity
gradiometers for which the sensitivity is not limited by vibrations.
\section{State of the art}
Different technologies of laser sources are available for addressing alkali atoms. For example, laser sources emitting directly at the same wavelength as the
atomic transition can be used: Distributed FeedBack lasers (DFB), Distributed Bragg Reflector lasers (DBR) and Extended-Cavity Diode Lasers
(ECDL) \cite{diode}. However, the disadvantage of these technologies is that large efforts are required to obtain robust systems, immune to
mechanical misalignments caused by vibrations, for onboard applications. Another appropriate solution for Rb and K
is to use frequency-doubled telecom lasers operating around 1.5 $\mu$m \cite{lienhart,potassium}. This technique is based on the
maturity of the fiber components in the telecom C-band to reduce the amount of free-space optics and to make the setup more compact
and less sensitive to misalignments. Moreover, many types of narrow-linewidth laser sources are commercially available, such as DFB lasers,
DFBs with whispering-gallery-mode resonators \cite{WGM}, integrated ECDL diodes \cite{RIO} and the Erbium Fiber DFB Laser (EFL).
In this article, we will present results obtained with an EFL source which has already been used for atom interferometry \cite{carraz}.
Different architectures are possible to obtain all the laser frequencies needed for an atom interferometer experiment.
The most common one uses at least two lasers: a master laser and a slave laser \cite{carraz,tino,syrte}. The master laser
has a fixed frequency and is locked to an atomic transition. The slave laser is locked relative to the master laser via
a beat note. By changing the set point of the beat-note lock, it is possible to dynamically change the frequency of the
slave laser and to address all the functions needed for atom interferometry. However, for onboard applications where a
compact and robust laser system is needed, the use of two laser sources is not optimal. Indeed, by using only
one laser source, the size of the laser system is limited, the risk of failure of the system due to laser source breakdown is
reduced and the electrical consumption is lower.
\section{Description of the laser system}
\subsection{Laser system}
In this article, we present a laser system for Rubidium 87 atom interferometry using only one laser source based on a
frequency-doubled telecom fiber bench. A laser system tunable within a few ms over a frequency range of typically
1 GHz can generate the cooling, detection and first Raman frequencies. The repumping frequency and the second Raman frequency
can be obtained by creating sidebands on the laser source. In our system, the laser source is a narrow-linewidth EFL (IDIL fiber laser, output power: 20 mW, linewidth $<$ 2 kHz), which can be frequency tuned (20 MHz/V) thanks to a piezoelectric actuator (PZT) (Fig.~\ref{laser}). In order to frequency stabilize the laser, part of the laser output goes through a phase modulator (PM1, Photline, RF level: 23 dBm) which generates sidebands, then through a PPLN waveguide crystal (NTT Electronics, conversion efficiency: 225 \%/W) which performs second-harmonic generation, and finally enters a Rubidium saturated absorption setup \cite{absorption} where the 1$^{st}$ order sideband of the laser spectrum is locked
to the cross over $F=3\rightarrow{}F'=3 \ c.o.\ 4$ of the $^{85}$Rb-D$_2$ line (Fig.~\ref{absorption}). With a modulation at $\nu_{1} =$ 1070 MHz,
the laser carrier (0$^{th}$ order sideband) is at resonance with the detection transition $F=2\rightarrow{}F'=3$ of the $^{87}$Rb-D$_2$ line. By changing
the frequency modulation $\nu_{1}$ on PM1, the 1$^{st}$ order sideband of the laser remains locked on the atomic transition whereas the frequency of the carrier is varied. In that case, the new carrier is detuned relative to the detection transition by 1070 MHz - $\nu_{1}$. In summary, the 1$^{st}$ order sideband is locked on the deepest saturated absorption peak of the Rubidium 85, while the carrier is tuned to the Rubidium 87 transitions.
With this technique, a frequency tuning range of at least 1 GHz at 780 nm can be obtained, which therefore addresses all the functions needed for atom interferometry.
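As an aside for readers less familiar with phase modulators, the sideband generation used here (PM1, PM2) follows the standard Jacobi--Anger picture: a phase-modulated field $e^{i\beta\sin\Omega t}$ carries a fraction $J_n(\beta)^2$ of the total power in its $n$th sideband. The short numerical sketch below (illustrative only; the modulation depth $\beta$ is an arbitrary example, not an experimental parameter of this setup) recovers this distribution from an FFT of the simulated field:

```python
import numpy as np

# Illustrative sketch: power spectrum of a phase-modulated optical field.
# A phase modulator driven at frequency nu_1 with depth beta puts power
# J_n(beta)^2 into the n-th sideband; we check this numerically via an FFT.
beta = 1.2            # modulation depth in radians (arbitrary example value)
n_cycles = 64         # number of modulation periods simulated
samples = 4096        # 64 samples per modulation period
t = np.arange(samples) / samples * n_cycles          # time in units of 1/nu_1
field = np.exp(1j * beta * np.sin(2 * np.pi * t))    # baseband field (carrier removed)
spectrum = np.fft.fft(field) / samples
power = np.abs(spectrum) ** 2
# Sideband n sits n_cycles bins away from DC in this sampling scheme.
carrier = power[0]                # expected: J_0(beta)^2 ~ 0.4504 for beta = 1.2
first_upper = power[n_cycles]     # expected: J_1(beta)^2 ~ 0.2483 for beta = 1.2
total = power.sum()               # phase modulation conserves total power: 1
```

The same picture explains why the RF power on PM1 and PM2 sets the fraction of optical power in the locked (or repumping/Raman) sideband versus the carrier.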
\begin{figure}[h!]
\centerline{\scalebox{0.30}{\includegraphics{laser.eps}}}
\caption{Diagram of the laser system and the electronic lock: EFL, Erbium Fiber DFB Laser; OI, Optical Isolator; PM, Phase Modulator; C, fiber
Coupler; PPLN-WG, Periodically Poled Lithium Niobate crystal - Wave Guide; VCO, Voltage Controlled Oscillator; LPF, Low Pass Filter; LS, Lock
Switch (open during the frequency step); $\nu_{0}$, optical EFL frequency; $\nu_{1}$, modulation frequency of the PM1.}
\label{laser}
\end{figure}
\begin{figure}[h!]
\centerline{\scalebox{0.40}{\includegraphics{abs_sat1.eps}}}
\caption{Saturated absorption peaks of the D$_2$ Rubidium transition and laser frequencies generated for atom
interferometry (laser frequencies for the cooling and detection stages in blue, and for the interferometric stage in red).}
\label{absorption}
\end{figure}
\subsection{Frequency stabilization}
Frequency stabilization is achieved by modulating at 50 kHz the frequency $\nu_{1}$ driving PM1, and by collecting the saturated absorption signal with a photodiode. The laser beam used for this saturated absorption has a power of 28 $\mu$W @780 nm (3.56 mW @1560 nm before frequency doubling) with a beam diameter of 1.8 mm. This signal is then amplified through a transimpedance amplifier, demodulated at 50 kHz and low-pass filtered, hence providing at this point the dispersive error signal of the lock system, proportional to the frequency difference between the 1$^{st}$ order sideband of the laser spectrum and the atomic transition frequency.
Finally, it is integrated and amplified by a high-voltage amplifier, and sent to the PZT of the EFL with a feedback bandwidth of 3 kHz
(Fig.~\ref{laser}). The amplitude of the 50 kHz modulation is adjusted in order to have
the steepest slope (0.218 V/MHz).
With this architecture, a peak-to-peak deviation of 7.8 MHz and a capture range of 45 MHz are obtained on the error signal (Fig.~\ref{error}). This finite range is due to the presence of absorption peaks located before and after the peak on which we are locked.
\begin{figure}[h!]
\centerline{\scalebox{0.30}{\includegraphics{error.eps}}}
\caption{Error signal as a function of the frequency scan of the laser (@ 780 nm).}
\label{error}
\end{figure}
\begin{figure}[h!]
\centerline{\scalebox{0.30}{\includegraphics{jump2.eps}}}
\caption{Behavior of the locked laser system during a frequency step: voltage applied to the PZT of the EFL (black);
error signal (blue). Voltage step of 46.4 V on the PZT, i.e. a frequency step of 965.12 MHz on the laser at 780 nm.}
\label{raman}
\end{figure}
In order to improve the stability and reduce the response time of the lock system during frequency steps, a feedforward (Fig.~\ref{laser}) proportional to the frequency of the VCO is added to the correction signal driving the PZT. During the maximum frequency step of 1 GHz, the frequency deviation is too large compared to the lock range for the laser to remain locked. Therefore, a lock switch (ADG201A) placed upstream of the integrator opens the feedback loop for 1 ms after the frequency step. A second
integrator stage is also added to reduce the stabilization time. As a result, after a frequency step of 1 GHz, the laser frequency is stabilized with an error below 100 kHz
after only 3 ms (Fig.~\ref{raman}).
With this laser architecture, to obtain the second Raman frequency and the repumper required to realize the atom interferometry experiment
(Fig.~\ref{absorption}), the laser is modulated at a frequency of 6.5 or
6.8 GHz with PM2 (Photline, RF level from -1 to 23 dBm) (Fig.~\ref{laser}) \cite{parasite}. Finally, the laser is
amplified in an Erbium Doped Fiber Amplifier (IPG Photonics, input power: 3 mW, output power: 5 W) and sent to a frequency-doubling unit. The frequency doubling can be implemented either with a PPLN waveguide \cite{CNES} or with free-space doubling in a bulk PPLN \cite{carraz}.
\section{Frequency noise and influence on the atom interferometer}
\subsection{Estimation of the frequency noise of the laser}
In order to determine the frequency noise of the laser, it is necessary to analyse the noise of the error signal (Fig.~\ref{model}), which is
proportional to the laser frequency within a bandwidth of 10 kHz (cut-off frequency of the LPF in Fig.~\ref{laser}). First, this noise is measured when the laser is unlocked
and "out of resonance" from the atomic transition (in blue). In this configuration only the noise of our lock system (i.e. the noise
on the error signal which does not come from the frequency noise of the laser) is measured. When the laser is "locked" (in green), the noise on the error signal
is much lower than the noise of the lock. As a result, the frequency noise of the laser is given by the frequency noise of the lock system up to a frequency of 3 kHz.
We then investigate the origin of the noise of the lock system, which determines the frequency noise of the laser. The noise of the lock system can come mainly from electronic noise, from intensity noise of the laser at 50 kHz, and from etalon effects in the saturated absorption setup which lead to a temporal fluctuation of the error signal offset.
The origin of the noise can be determined by analyzing the noise of the error signal for different configurations (Fig.~\ref{bruit_laser}). In the configuration
"laser off" (in red), the only contribution comes from the electronic noise. In the configuration laser on, out of resonance and PM1 off ("unmod laser" in black), both the electronic noise and the intensity noise of the laser are present. In the configuration laser "out of resonance" (in blue), all the noise sources are taken into account. Comparing these
configurations shows that the noise of the lock system comes mainly from the intensity noise of the laser between 1 Hz and 10 kHz, whereas the noise comes from etalon effects below 1 Hz. In summary, the frequency noise of our laser below 10 kHz comes from the noise of the lock, which is converted into frequency noise by the feedback loop.
The noise of the lock is mainly due to intensity noise of the laser and etalon effects in the saturated absorption setup.
\begin{figure}[h!]
\centerline{\scalebox{0.30}{\includegraphics{model1.eps}}}
\caption{Square root of the Power Spectral Density (PSD) of the error signal noise: when the laser is locked (green) and when the laser is out of resonance (blue). The noise of the error signal when the laser is out of resonance (blue) determines the low frequency part ($f <$ 10 kHz) of the frequency noise. The red dashed line represents the model used to determine the atom interferometry sensitivity.}
\label{model}
\end{figure}
\begin{figure}[h!]
\centerline{\scalebox{0.30}{\includegraphics{noise1.eps}}}
\caption{Square root of the PSD of the error signal noise: when the laser is off (red), when the laser is unmodulated (black) and when the laser is out of resonance (blue).}
\label{bruit_laser}
\end{figure}
\begin{figure}[h!]
\centerline{\scalebox{0.30}{\includegraphics{beatnote.eps}}}
\caption{Beat note between the EFL and an integrated ECDL with a linewidth lower than 10 kHz. The beat note determines the high frequency part ($f >$ 10 kHz) of the frequency noise.}
\label{beatnote}
\end{figure}
In order to estimate the frequency noise of our laser at frequencies higher than 10 kHz, we perform a beat note measurement between our locked EFL and an integrated
ECDL (RIO ORION laser source) whose linewidth is below 10 kHz according to the manufacturer.
Because the linewidth of the ECDL is not infinitely narrow, the analysis of the beat note gives an upper limit on the frequency noise of the EFL. We notice that the wings of the
beat note fit well with a Lorentzian function with a FWHM of $\Delta\nu$ = 2.5 kHz (Fig.~\ref{beatnote}). As a white frequency noise leads to a Lorentzian spectrum with a FWHM equal to $\pi$S$_{\nu}^{0}$, we infer that the frequency-noise PSD of our laser at high frequencies is below $\Delta\nu/\pi$. Thus, for the determination of the atom interferometry noise, we will consider that the frequency noise of our laser is equal to (S$_{\nu}^{0})^{1/2} = 28$ Hz/Hz$^{1/2}$ for frequencies above 10 kHz.
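The 28 Hz/Hz$^{1/2}$ figure follows directly from the quoted white-noise/Lorentzian relation; the one-line check below is our own arithmetic on the numbers given in the text:

```python
import math

# Back-of-the-envelope check: a white frequency noise of PSD S_nu^0 produces
# a Lorentzian line of FWHM = pi * S_nu^0, so the 2.5 kHz Lorentzian wings of
# the beat note bound the high-frequency noise floor of the EFL.
fwhm = 2.5e3                  # Hz, fitted Lorentzian FWHM of the beat note
s_nu0 = fwhm / math.pi        # Hz^2/Hz, upper bound on the white-noise PSD
floor = math.sqrt(s_nu0)      # Hz/Hz^(1/2); comes out just above 28
```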
\subsection{Estimation of atom interferometry noise induced by the laser frequency noise}
From the two previous measurements,
we can model our laser frequency-noise spectrum by (S$_{\nu}^{0})^{1/2}$ = 10$^{4}$ Hz/Hz$^{1/2}$ for $f <$ 1 Hz, coming from etalon effects, (S$_{\nu}^{0})^{1/2} = 400$ Hz/Hz$^{1/2}$
for 1 Hz $< f <$ 10 kHz, due to intensity noise, and (S$_{\nu}^{0})^{1/2} = 28$ Hz/Hz$^{1/2}$ for $f >$ 10 kHz (red dashed line in Fig.~\ref{model}).
Therefore we can estimate the noise on the atom interferometer measurement induced by the frequency noise of the laser.
From the results of \cite{sensitivity} and considering typical experimental parameters for a vertical
atom accelerometer (atom--mirror distance for gravimetry, or distance between the two atom clouds for gradiometry: L = 1 m; Raman pulse duration: $\tau_{R}$ = 10 $\mu$s; time between two pulses: T = 100 ms), a single-shot rms noise of $\sigma_{a} = 2.6 \times 10^{-9}$ g is obtained, the main contribution coming from low-frequency noise between 1 Hz and 10 kHz.
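For reference, the three-segment noise model used in this estimate can be transcribed directly; this is our hedged reading of the red dashed line (the quoted plateau values are from the text, the exact segment boundaries are approximate):

```python
# Hedged transcription of the piecewise laser frequency-noise model used for
# the sensitivity estimate (values quoted in the text; boundaries approximate).
def sqrt_psd(f):
    """Square root of the laser frequency-noise PSD, in Hz/Hz^(1/2)."""
    if f < 1.0:          # etalon effects in the saturated absorption setup
        return 1e4
    elif f < 1e4:        # intensity noise of the laser
        return 400.0
    else:                # white-noise floor inferred from the beat note
        return 28.0
```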
\section{Improvements}
For more demanding applications, the frequency noise could be decreased by improving the saturated absorption setup (i.e. removing the intensity noise of
the laser and the etalon effects coming from the fiber \cite{abs_diff}). The electronic noise could also be decreased by using a higher laser power or a more efficient
lock system. With these improvements, it should be possible
to obtain a frequency noise of 28 Hz/Hz$^{1/2}$ over the whole frequency range of the laser, leading to a single-shot rms noise of $\sigma_{a} = 6.9 \times 10^{-10}$ g for typical atom accelerometers.
In our laser architecture, the EFL could also be replaced by an integrated ECDL, which is more compact and has a comparable linewidth. Finally, a very compact
system, immune to external disturbances, could be built from an all-fibered bench with a fibered amplifier and a waveguide PPLN after PM2 \cite{CNES}.
\section{Conclusion}
We have developed a tunable, narrow-linewidth, single-laser-source system for atom interferometry.
This system combines the reliability of fiber components with the agility allowed by phase modulators. These features can lead to
plug-and-play laser sources for laboratories developing cold atom experiments. Such sources could be developed for
commercial devices, onboard systems or space missions. Finally, the use of a single laser source significantly
reduces failure risks and the amount of additional components needed for redundancy, which is particularly critical for space projects.\\
\textbf{\scriptsize{ACKNOWLEDGEMENTS}} \, We thank F. Nez, from the Laboratoire Kastler Brossel (LKB), for his help on the project.
We acknowledge funding support from the Direction Scientifique G\'en\'erale of ONERA, the Direction G\'en\'erale de l'Armement (DGA), and the Centre National d'Etudes Spatiales (CNES).
\section{Introduction}
In recent decades, many quantum algorithms have been proposed for problems that are inefficient to solve on classical computers. The quantum phase estimation algorithm \cite{Abrams,Kitaev} is a leading illustration of such algorithms. It is used in quantum simulations to estimate the eigenenergy corresponding to a given approximate eigenvector of
the unitary evolution operator of a quantum Hamiltonian. Furthermore,
with different settings, it has been adapted as a subroutine of many quantum algorithms applied to a wide variety of applications in different fields (see the review article ref.\cite{Georgescu2014quantum} and the references therein).
However, the success of the algorithm partly depends on the pre-existing approximate eigenvector. Thus, it may fail to output the right eigenvalue in cases where the approximation to the corresponding eigenvector is not good enough. This hinders the use of the algorithm when there is not enough prior knowledge to procure a good approximation to the eigenvector.
A symmetric matrix is called non-negative if all of its elements are greater than or equal to zero. It is irreducible if the corresponding graph is strongly connected or if it is not reducible to a block-diagonal form by any column-row permutation. Due to the Perron-Frobenius theorem \cite{Meyer2000}, an irreducible non-negative matrix has a positive eigenvector (all elements are positive) with an associated positive principal eigenvalue whose magnitude is greater than that of the rest of the eigenvalues. These matrices have been studied extensively \cite{Bapat1997nonnegative,Berman1979nonnegative}, the distribution of the coefficients of their principal eigenvectors has been related to the matrix elements \cite{Minc1970maximal}, and the sum of the coefficients has been shown to be related to the number of walks in molecular graphs \cite{Gutman2001}.
The phase estimation algorithm (PEA) mainly uses two quantum registers: $\ket{reg_1}$, initially set to the zero state, and $\ket{reg_2}$, holding an initial approximate eigenvector.
In this article, we show that for irreducible non-negative matrices, one can obtain the principal eigenvalue by using an equal superposition input state in PEA:
i.e., instead of an approximate eigenvector, the initial value of \ket{reg_2} is set to \ket{\bf 0} and then put into the equal superposition state by applying Hadamard gates. In the output of the algorithm, this generates each eigenvalue with a probability determined by the normalized sum of the coefficients of the associated eigenvector.
In addition, because all eigenvectors but the principal one include both positive and negative elements, we show that in most random cases the probability to see the principal eigenvalue in the output is much larger than the others.
We also show that, in some cases, applying Hadamard gates to the second register in the output (or, equivalently, measuring the second register in the Hadamard basis) and then measuring the first register when the measurement outcome of the second register is the \ket{0\dots 0} state further increases the success probability.
The coefficients of the eigenvectors of a stochastic matrix sum to zero for all but the principal eigenvector. Therefore, for these matrices one can generate the principal eigenvalue in the phase estimation algorithm with probability one \cite{Daskin2014}. Since any given symmetric irreducible non-negative matrix can be converted into a stochastic matrix by a diagonal scaling matrix, using the closeness of the matrix to a stochastic matrix, we also show that the success probability of the algorithm can be predicted beforehand.
We finally compare the estimated success probabilities with the computed ones for random symmetric matrices and 3-local Hamiltonians of different dimensions with non-negative off-diagonal elements.
In the following sections, we shall first discuss PEA in detailed steps, then draw the estimates for the success probability and finally show the possible applications of the algorithm.
\section{Quantum Phase Estimation Algorithm}
The phase estimation algorithm (PEA) in its general form finds the value of $\phi_j$ for a given approximate eigenvector $\ket{\mu_j}$ in the eigenvalue equation $U\ket{\mu_j}=e^{i\phi_j}\ket{\mu_j}$.
The algorithm mainly uses two quantum registers: viz., $\ket{reg_1}$, initially set to the zero state, and $\ket{reg_2}$, holding an approximate eigenstate of
a unitary matrix $U$. The first operation in the algorithm puts \ket{reg1} into the equal superposition state. In this setting,
a sequence of operators, $U^{2^j}$, controlled by the $j$th qubit
of $\ket{reg_1}$ is applied to $\ket{reg_2}$. Here, $j=0 \dots m-1$, and $m$ is the number of qubits in \ket{reg_1}, which also determines the precision of the output. These sequential operations generate the quantum Fourier transform $(QFT)$ of the
phase on $\ket{reg1}$. Therefore, the application of the inverse quantum Fourier transform $(QFT^\dagger)$ turns the value of $\ket{reg1}$ into the binary value of the phase. Consequently, one measures \ket{reg1} to obtain the phase. Here, if the unitary operator $U$ is the time evolution operator of a quantum Hamiltonian $\mathcal{H}$, i.e. $U=e^{i\mathcal{H}}$; then one also obtains the eigenenergy of that Hamiltonian.
For a symmetric irreducible non-negative matrix $\mathcal{H}$ of order $2^n$ with ordered eigenvalues $\phi_1\geq \dots \geq \phi_{2^n}$ and associated eigenvectors $\ket{\mu_1} \dots \ket{\mu_{2^n}}$, it is known that $\ket{\mu_1}$ is the only eigenvector with all positive coefficients. Assume the unitary operator $U=e^{i\mathcal{H}}$ and its powers $U^{2^j}$ with $j=0 \dots m-1$ are readily available for PEA. To estimate the value of $\phi_1$ on the first register, in our setting, we simply modify the conventional phase estimation algorithm in the following way:
\begin{itemize}
\item Instead of an approximate eigenvector, initial value of \ket{reg_2} is set to \ket{\bf 0} and then put into equal superposition state by applying Hadamard gates.
\item (\textbf{optional}) In the output, we change the basis of the second register by applying Hadamard gates. Then, if the measurement of the second register is equal to the \ket{\bf 0} state, the principal eigenvalue is estimated on the first register. This modification further increases the success probability of the algorithm when the probability of measuring the principal eigenvalue is higher than that of the other eigenvalues.
\end{itemize}
In the following subsection, we shall describe the phase estimation algorithm in steps, also shown in Fig.\ref{fig:peageneral}, to show how the above modifications have an impact on the algorithm.
\subsection{Steps of the Algorithm}
Here, the algorithm is described in details by showing how the quantum state changes in each step:
\begin{itemize}
\item The system is prepared so that it includes two registers: viz., \ket{reg_1} and \ket{reg_2}, with $m$ and $n$ qubits, respectively.
\item Both registers are initialized into zero state:
\ket{\psi_0}=\ket{reg_1}\ket{reg_2}=\ket{\bf 0}\ket{\bf 0}.
\item Then, the Hadamard operators are applied to both registers:
\begin{equation}
\ket{\psi_1}=(H^{\otimes m} \otimes H^{\otimes n}) \ket{\bf 0} \ket{\bf 0}
=\frac{1}{\sqrt{2^{n+m}}}\sum_{x=0}^{2^{m+n}-1}\ket{\bf x}
\end{equation}
\item As in the customary phase estimation algorithm,
the unitary evolution operators $U^{2^j}$ controlled by the $j$th qubit of \ket{reg_1} are applied to \ket{reg_2},
and finally the inverse quantum Fourier transform ($QFT^{\dagger}$) is applied to \ket{reg_1}.
At this point of the algorithm, we have a quantum state in which \ket{reg_1} and \ket{reg_2} hold the superposition of the eigenvalues and the associated eigenvectors, respectively:
\begin{equation}
\ket{\psi_2}=\sum_{j=1}^{2^n} \alpha_j \ket{\phi_j}\ket{\mu_j}.
\end{equation}
Here, the amplitude $\alpha_j$ is related to the angle between the equal superposition state and the eigenvector \ket{\mu_j} and can be described as the normalized sum of the coefficients of the eigenvectors since \ket{reg_2} was $\frac{1}{\sqrt{2^n}}\sum_{x=0}^{2^{n}-1}\ket{\bf x}$ at the beginning:
\begin{equation}
\label{eq:normalizedsum}
\alpha_j=\frac{1}{\sqrt{2^n}}\sum_{i=1}^{2^n}\mu_{ji}.
\end{equation}
If \ket{reg_2} is written in the Hadamard basis,
$\ket{+\dots ++} +
\ket{+\dots +-} +\dots
+\ket{-\dots --}$, where $\ket{\pm}=\frac{1}{\sqrt{2}}(\ket{0}\pm\ket{1})$;
then the following quantum state is obtained:
\begin{equation}
\begin{split}
\label{EqH}
\ket{\psi_{2}}& = \sum_{j=1}^{2^{n}}\beta_{1j}\ket{ \phi_j}\ket{+\dots +}
\\ &
+ \dots +\sum_{j=1}^{2^{n}}\beta_{2^nj}\ket{ \phi_j}\ket{-\dots -}
\end{split}
\end{equation}
where $\beta_{ij}$s are new coefficients.
Now, since \ket{\mu_1} is the only eigenvector with all positive real elements, we can expect it to be the closest state to \ket{+\dots +} in the Hadamard basis.
\item
To revert the above state back to the standard basis, the Hadamard operator $H^{\otimes n}$ is again applied to \ket{reg_2}:
\begin{equation}
\begin{split}
\label{EqFinal}
\ket{\psi_3}& =(I\otimes H^{\otimes n})\ket{\psi_2}\\
& = \sum_{j=1}^{2^{n}}\beta_{1j}\ket{ \phi_j}\ket{0\dots 0}
+ \dots +\sum_{j=1}^{2^n}\beta_{2^nj}\ket{ \phi_j}\ket{1\dots 1}
\end{split}
\end{equation}
\item \ket{reg_2} is measured in the standard basis. As a result, for \ket{reg_2}= \ket{0\dots 00}, the system collapses to the state where the phase associated with the positive eigenvector is expected to be highly dominant in the first register.
\item In the final step, the first register is measured to obtain the phase $\phi_1$ and hence the eigenvalue of $\mathcal{H}$. These steps are also drawn in Fig.\ref{fig:peageneral}.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{circuit}
\caption{Circuit design for the modified phase estimation algorithm. In the end, note that the application of the Hadamard gates to the second register and the measurement on this register is optional.
}
\label{fig:peageneral}
\end{figure}
\end{itemize}
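The output statistics described by these steps can be checked classically for small instances. The sketch below (our illustration, not the authors' code; it assumes a random symmetric matrix with strictly positive entries, which is automatically irreducible) computes the amplitudes $\alpha_j$ directly from the eigenvectors:

```python
import numpy as np

# Illustrative check: with an equal-superposition input on reg2, the
# probability of reading eigenphase phi_j on reg1 is alpha_j^2, the squared
# normalized sum of the coefficients of the j-th eigenvector.
rng = np.random.default_rng(7)
n = 3                              # qubits in reg2
N = 2 ** n
A = rng.random((N, N))
H = (A + A.T) / 2                  # symmetric, all entries positive
vals, vecs = np.linalg.eigh(H)     # ascending eigenvalues; columns are eigenvectors
s = np.full(N, 1.0 / np.sqrt(N))   # equal-superposition input state
alpha = vecs.T @ s                 # = (1/sqrt(N)) * column sums of the eigenvectors
probs = alpha ** 2                 # output distribution over eigenphases
# The principal eigenvector (last column) has all same-sign entries (Perron),
# so alpha_1^2 >= 1/N and the principal eigenphase tends to dominate.
```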
\subsection{The Success Probability}
It is easy to see that the success probability of the algorithm without the optional Hadamard gates on \ket{reg_2} in the output is determined by the normalized sums of the coefficients of the eigenvectors given in Eq.(\ref{eq:normalizedsum}). Therefore, the success probability for the dominant eigenvalue is $\alpha_1^2$ (note that the coefficients of \ket{\mu_1} are all positive, hence $\alpha_1^2=|\alpha_1|^2$).
With the Hadamard gates in the output, we first find the probability to measure \ket{reg_2} in \ket{\bf 0} state and then find the probability to see the dominant eigenvalue in \ket{reg_1} knowing $\ket{reg_2}=\ket{\bf 0}$:
As is apparent from Eq.(\ref{EqFinal}), the probability of measuring \ket{\bf 0} is $\sum_{j=1}^{2^n}|\beta_{1j}|^2$. In addition, after measuring \ket{\bf 0} on \ket{reg_2}, the probability to measure $\ket{\phi_1}$ on \ket{reg_1} is
$|\beta_{11}|^2/\sum_{j=1}^{2^n}|\beta_{1j}|^2$.
Since we apply an equal superposition state as input, the component of an eigenvector in this direction determines the probability to obtain the corresponding eigenvalue on \ket{reg_1}. More formally, after the application of $QFT^\dagger$, we get $\ket{\psi_2}=\sum_{j=1}^{2^n} \alpha_j \ket{\phi_j}\ket{\mu_j}$, where $\alpha_j=\frac{1}{\sqrt{2^n}}\sum_{i=1}^{2^n}\mu_{ji}$.
If \ket{reg_2} of the state in Eq.(\ref{EqH}) is measured in the Hadamard basis, the probability to see \ket{+\dots+} is given by the components of the eigenvectors along the direction of the initial vector:
\begin{equation}
P_{reg_2}=\sum_{j=1}^{2^n}|\alpha_j|^4\geq \frac{1}{2^n}
\end{equation}
This is also equal to $\sum_{j=1}^{2^n}|\beta_{1j}|^2$, i.e. the probability of measuring \ket{reg_2} in the \ket{0\dots0} state at the end. $P_{reg_2}$ takes its smallest value only when all the $|\alpha_j|$s are equal to $1/\sqrt{2^n}$. Moreover, when \ket{reg_2}=\ket{0\dots 0}, the probability to see $\ket{\phi_1}$ on \ket{reg_1} is:
\begin{equation}
P_{reg_1}=\frac{|\beta_{11}|^2}{P_{reg_2}}=\frac{\alpha_1^4}{P_{reg_2}}.
\end{equation}
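For a concrete random instance, this probability bookkeeping can be tabulated directly (our sketch; note that `numpy`'s `eigh` returns eigenvalues in ascending order, so the principal pair sits last):

```python
import numpy as np

# Sketch of the probability bookkeeping for one random instance:
# p_no_filter : P(phi_1) without the final Hadamard/post-selection step
# p_reg2      : P(reg2 measured in |0...0>) after the final Hadamards
# p_reg1      : P(phi_1 | reg2 = |0...0>)
rng = np.random.default_rng(11)
N = 8
A = rng.random((N, N))
H = (A + A.T) / 2                          # symmetric non-negative instance
vals, vecs = np.linalg.eigh(H)
alpha = vecs.sum(axis=0) / np.sqrt(N)      # normalized eigenvector sums
p_no_filter = alpha[-1] ** 2
p_reg2 = float(np.sum(alpha ** 4))         # >= 1/N, with equality iff all equal
p_reg1 = alpha[-1] ** 4 / p_reg2
# When alpha_1^2 is the largest of the alpha_j^2, the post-selection boosts
# the principal eigenphase: alpha_1^4 / sum_j alpha_j^4 >= alpha_1^2.
```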
Although we have drawn the equalities for the probabilities, without knowing the eigenvectors it is not possible to compute the exact $|\alpha_j|$s, and so $P_{reg_1}$ and $P_{reg_2}$.
Therefore, we shall try to estimate them: since $\alpha_1$ is the normalized sum of the coefficients of a positive unit vector, it is easy to see that $\alpha_1\geq 1/\sqrt{2^n}$, where equality occurs only when the principal eigenvector is a column of the identity matrix. A similar observation is also made in ref.\cite{Daskin2014}, where the principal eigenvector is found for a given eigenvalue equal to 1.
Here, we will relate the matrix $\mathcal{H}$ to a stochastic matrix in order to develop some intuition for the estimation of the probabilities. It is known that stochastic matrices have only one eigenvector with coefficients summing to a nonzero value.
A matrix can be made stochastic by a column-wise scaling $\mathcal{HD}$, where $\mathcal{D}$ is a diagonal matrix whose diagonal elements are the inverses of the column sums of $\mathcal{H}$. The closeness of the matrix $\mathcal{H}$ to $\mathcal{HD}$ may provide a prediction of the success behavior of the phase estimation algorithm with an initial superposition state.
The closeness of $\mathcal{H}$ to $\mathcal{HD}$ can be defined by $||\mathcal{H}-\mathcal{HD}||$ where $||\cdot||$ is any matrix norm. This also defines a perturbation error. If we normalize this error term by $||\mathcal{H}||$, we get the relative error:
\begin{equation}
\epsilon_1=\frac{||\mathcal{H}-\mathcal{HD}||}{||\mathcal{H}||}\leq ||\mathcal{I}-\mathcal{D}||,
\end{equation}
where $\mathcal{I}$ is an identity matrix. We can also look at the relative error in the inverses:
\begin{equation}
\epsilon_2=\frac{||\mathcal{H}^{-1}-\mathcal{D}^{-1}\mathcal{H}^{-1}||}{||\mathcal{H}^{-1}||}\leq ||\mathcal{I}-\mathcal{D}^{-1}||.
\end{equation}
Since the matrix $\mathcal{D}$ also changes the eigenvector elements, when the variance of the diagonal elements of $\mathcal{D}$ is small we can expect the eigenvectors of $\mathcal{H}$ to behave similarly to those of $\mathcal{HD}$ and to have one eigenvector whose elements sum to a much greater value than the others. When $\mathcal{H}=\mathcal{HD}$, i.e. $\mathcal{D=I}$, the left and right principal eigenvectors of the symmetric $\mathcal{H}$ coincide and have elements equal to $\frac{1}{\sqrt{N}}$ after normalization. Therefore, the variances $\sigma_1$ and $\sigma_2$ of the diagonal elements of $\mathcal{D}^{-1}$ and $\mathcal{D}$ (the column sums of $\mathcal{H}$ and their inverses, respectively) give an indication of how much the elements of the principal eigenvector deviate from $\frac{1}{\sqrt{N}}$, which is also the expectation value for $N$ randomly generated normalized uniform numbers. Using these intuitions, we define the estimate of $P_{reg_2}$ as:
\begin{equation}
\widetilde{P}_{reg_2}=\frac{\Lambda_1+\Lambda_2}{2}
\end{equation}
with
\begin{equation}
\Lambda_1=\frac{\frac{1}{N}-\sigma_1}{\frac{1}{N}+\sigma_1},\
\Lambda_2=\frac{\frac{1}{N}-\sigma_2}{\frac{1}{N}+\sigma_2}.
\end{equation}
In addition, $\alpha_1^2=1$ for stochastic matrices, and so for $\mathcal{H}$ when $\mathcal{H}=\mathcal{HD}$. Since any deviation of the elements of the principal eigenvector from $\frac{1}{\sqrt{N}}$ would break the equality $\alpha_1^2=1$, we multiply the variance by $N$ to obtain the total deviation from 1. Therefore, we define the estimate of $\alpha_1^2$ as:
\begin{equation}
\widetilde{\alpha}_1^2=\frac{(1-N\sigma_1)+(1-N\sigma_2)}{2}.
\end{equation}
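Both estimates depend only on the column sums of $\mathcal{H}$. The sketch below is one plausible reading (our assumption, not spelled out in the text: the variances are taken over the column sums and their inverses after normalizing each set to unit total), compared against the exact $\alpha_1^2$:

```python
import numpy as np

# One possible transcription of the estimates; the normalization of the
# variances sigma_1, sigma_2 is our assumption (normalized to unit total).
rng = np.random.default_rng(3)
N = 8
A = rng.random((N, N))
H = (A + A.T) / 2
col = H.sum(axis=0)                          # diagonal elements of D^{-1}
inv = 1.0 / col                              # diagonal elements of D
sigma1 = np.var(col / col.sum())             # variance of normalized column sums
sigma2 = np.var(inv / inv.sum())             # variance of normalized inverses
lam1 = (1 / N - sigma1) / (1 / N + sigma1)
lam2 = (1 / N - sigma2) / (1 / N + sigma2)
est_p_reg2 = (lam1 + lam2) / 2               # estimate of P_reg2
est_alpha1_sq = ((1 - N * sigma1) + (1 - N * sigma2)) / 2   # estimate of alpha_1^2
vals, vecs = np.linalg.eigh(H)
exact_alpha1_sq = (vecs[:, -1].sum() / np.sqrt(N)) ** 2     # exact value
```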
Note that $\widetilde{\alpha}_1^2$ and $\widetilde{P}_{reg_2}$ are only predictions and may fail to provide good estimates for ${\alpha}_1^2$ and ${P}_{reg_2}$, respectively, particularly for some structured matrices and for matrices whose eigenvectors are close to the columns of an identity matrix. For instance, consider the following matrix:
\begin{equation}
\left(\begin{matrix}
21.8214 & 0& 0.6118& 0.4983\\
0 &14.2944 & 0.4983 &0.6118\\
0.6118 & 0.4983 & 12.1626& 0\\
0.4983 & 0.6118 & 0 &5.4111
\end{matrix}\right).
\end{equation}
The eigenvalues of this matrix are
$5.3537, 12.0193,14.4411,$ and $21.8753$, respectively, associated with the following eigenvectors:
\begin{equation}
\left(
\begin{matrix}
0.0304613& 0.0597207& 0.0215934& 0.9975166\\
0.0686662& 0.2074209 & -0.9758165 & 0.0066086\\
-0.0077623& -0.9761393 &-0.2076080 & 0.0631720\\
-0.9971443 & 0.0237068 &-0.0649217 &0.0304360
\end{matrix}\right).
\end{equation}
As computed from the above, the magnitudes of the sums of the components of the eigenvectors are very close to each other: $|-0.90578|, |-0.68529|, |-1.22675|,$ and $|1.09773|$, respectively. In addition, the eigenvector associated with the second largest eigenvalue, not with the principal eigenvalue, has the largest sum.
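This counterexample can be verified numerically (using the printed matrix entries, so the eigenvalues agree only to the printed precision):

```python
import numpy as np

# Reproducing the 4x4 counterexample from the text: the eigenvector sums are
# comparable in magnitude, and the largest |sum| does not belong to the
# principal eigenvalue (~21.88) but to the second largest one (~14.44).
H = np.array([
    [21.8214,  0.0,     0.6118,  0.4983],
    [ 0.0,    14.2944,  0.4983,  0.6118],
    [ 0.6118,  0.4983, 12.1626,  0.0   ],
    [ 0.4983,  0.6118,  0.0,     5.4111],
])
vals, vecs = np.linalg.eigh(H)       # ascending: ~5.354, 12.019, 14.441, 21.875
sums = np.abs(vecs.sum(axis=0))      # |column sums| of the eigenvectors
# np.argmax(sums) picks the eigenvector of the 14.44 eigenvalue, not vals[-1]
```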
Note that one can also predict ${\alpha}_1^2$ from the distribution of the matrix elements. For instance \cite{Minc1970maximal,Cioaba2007principal},
\begin{equation}
\label{eq:ratioofelements}
\max_{i,j}\frac{\mu_{1i}}{\mu_{1j}} \leq \frac{\max_{i,j}h_{ij}}{\min_{i,j}h_{ij}},
\end{equation}
where $h_{ij}$s are the nonzero matrix elements of $\mathcal{H}$. However, these bounds are generally not tight enough to give good estimates in many cases.
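The bound can be illustrated numerically for a matrix with strictly positive entries (our sketch; the entry range $[0.5, 2.0]$ is an arbitrary example, giving a right-hand side of at most 4):

```python
import numpy as np

# Illustration of the ratio bound: for a matrix with all entries positive,
# the spread of the principal-eigenvector coefficients is controlled by the
# spread of the matrix elements (a classical Perron-vector bound).
rng = np.random.default_rng(5)
N = 8
A = rng.uniform(0.5, 2.0, (N, N))
H = (A + A.T) / 2                    # symmetric, entries stay in [0.5, 2.0]
vals, vecs = np.linalg.eigh(H)
mu = np.abs(vecs[:, -1])             # principal eigenvector (sign-normalized)
lhs = mu.max() / mu.min()            # max_{i,j} mu_1i / mu_1j
rhs = H.max() / H.min()              # max_{i,j} h_ij / min_{i,j} h_ij
```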
In Sec.~\ref{Sec:NumericSimulations}, we shall compare these estimates with the computed probabilities for random matrices.
\section{Possible Applications}
In the previous sections, we have shown that the positive eigenpair of an irreducible symmetric non-negative matrix can be obtained by using an initial equal-superposition state when the estimated probabilities are high. This eliminates the necessity of knowing an initial approximate eigenvector in applications of the phase estimation algorithm to problems involving non-negative matrices. Irreducible non-negative matrices are known to have positive eigenpairs due to the Perron-Frobenius theorem; the same statement holds in a more general form for compact operators \cite{Du2006order}. One may also apply permutation matrices or splitting techniques to convert the matrix of interest into a non-negative one, or to increase the expected probability when it is lower than an acceptable value.
Since a wide variety of problems in science can be represented by non-negative matrices, the phase estimation algorithm can be applied to these problems without an initial approximate eigenvector. In the following subsections, we consider two important classes among them: the $k$-local Hamiltonian problem and stochastic processes.
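As a minimal illustration of why this works (a NumPy sketch with a toy matrix of our choosing, not from the paper): for an irreducible symmetric non-negative matrix, the Perron-Frobenius eigenvector is entrywise positive, so the equal-superposition state always has a nonzero overlap with it.

```python
import numpy as np

# Irreducible symmetric non-negative matrix: cycle-graph adjacency
# plus a random non-negative diagonal (the cycle makes it irreducible).
rng = np.random.default_rng(0)
N = 6
H = np.zeros((N, N))
for i in range(N):
    H[i, (i + 1) % N] = H[(i + 1) % N, i] = 1.0
H += np.diag(rng.uniform(0.0, 2.0, N))

vals, vecs = np.linalg.eigh(H)
v1 = vecs[:, -1]                           # principal eigenvector
v1 *= np.sign(v1[np.argmax(np.abs(v1))])   # fix the arbitrary overall sign

e = np.full(N, 1.0 / np.sqrt(N))           # equal-superposition state |e>
alpha1 = e @ v1                            # overlap with the principal eigenvector
print(np.all(v1 > 0), alpha1 > 0)
```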
\subsection{Applications to Local Hamiltonian Problems}
One application of non-negative matrices in quantum mechanics arises through the so-called stoquastic Hamiltonians \cite{Bravyi2008,Bravyi2010}: i.e., Hamiltonians whose off-diagonal elements in the standard basis are real and non-positive.
If a Hamiltonian $\mathcal{H}$, a Hermitian operator acting on $(C^2)^{\otimes n}$, can be expressed as a sum of local Hamiltonians, each representing the interactions between at most $k$ qubits, then it is called a $k$-local $n$-qubit Hamiltonian; more formally, $\mathcal{H}=\sum_sH_s$. Finding the lowest eigenvalue of $\mathcal{H}$ defines an eigenvalue problem known as the local Hamiltonian problem. This eigenvalue problem can also be described as a decision problem: decide whether the ground state energy of $\mathcal{H}$
is at most $a$ or at least
$b$ for given constants $a$ and $b$ such that $a\leq b$ and $b-a\geq 1/poly(n)$. This problem has been shown to be $QMA$-complete for $k\geq 2$ \cite{Kempe2006complexity} (for $k=2$, it is QMA-complete only when both negative and positive signs in the Hamiltonian are allowed \cite{Jordan2010}).
For local Hamiltonians with non-positive off-diagonal matrix elements in the standard basis, the matrix elements of the corresponding Gibbs density matrix, $\rho = e^{-\beta\mathcal{H}}/Tr(e^{-\beta\mathcal{H}})$, are all non-negative for any $\beta \geq 0$. Due to the theorem given above, the ground state energy of such a Hamiltonian can therefore be associated with an eigenvector whose coefficients are all non-negative.
Because one can associate a probability distribution with this ground state, the nature of these Hamiltonians is considered similar to that of stochastic processes; hence, Hamiltonians of this type are called \textit{stoquastic} \cite{Bravyi2008}. (Note that such a Hamiltonian is not a stochastic matrix, whose rows or columns sum to one and whose principal eigenvalue is one, associated with an eigenvector of all ones.)
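Following the convention of \cite{Bravyi2008}, where the off-diagonal elements are real and non-positive in the computational basis, this connection can be checked numerically; a NumPy sketch with a toy random Hamiltonian of our choosing:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
O = -rng.uniform(0.1, 1.0, size=(n, n))  # strictly negative off-diagonal part
O = (O + O.T) / 2.0
np.fill_diagonal(O, 0.0)
H = O + np.diag(rng.normal(size=n))      # stoquastic: real, off-diagonal <= 0

beta = 2.0
vals, vecs = np.linalg.eigh(H)
rho = (vecs * np.exp(-beta * vals)) @ vecs.T   # e^{-beta H} via spectral decomposition
rho /= np.trace(rho)                           # Gibbs density matrix

g = vecs[:, 0]                            # ground state of H (lowest eigenvalue)
g *= np.sign(g[np.argmax(np.abs(g))])     # fix the arbitrary overall sign
print(np.all(rho > 0), np.all(g > 0))
```

Here the Gibbs matrix is entrywise positive and the ground state has all-positive amplitudes, as the text states.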
\subsection{Applications to Stochastic Processes}
The phase estimation algorithm can find the principal eigenpair of a stochastic matrix with probability one \cite{Daskin2014}, since the sum of the coefficients is one for the principal eigenvector and zero for the rest of the eigenvectors. However, stochastic processes are generally defined by non-Hermitian matrices, which are also widely seen in quantum physics: the Hamiltonian of a closed system in equilibrium must be Hermitian to have real energy values, whereas nonequilibrium processes and open systems connected to reservoirs can only be described by non-Hermitian models \cite{Efetov1997directed}.
\subsubsection{Simulation of Non-Hermitian Operators}
Any non-Hermitian matrix $\mathcal{H}$ can be decomposed into Hermitian and skew-Hermitian parts as:
\begin{equation}
\label{EqHSH}
\mathcal{H}=H+S=\frac{1}{2}(\mathcal{H}+\mathcal{H}^\dagger)+\frac{1}{2}(\mathcal{H}-\mathcal{H}^\dagger)
\end{equation}
where $\mathcal{H}^\dagger$ describes the conjugate transpose of $\mathcal{H}$. Here,
$H=\frac{1}{2}(\mathcal{H}+\mathcal{H}^\dagger)$ and
$S=\frac{1}{2}(\mathcal{H}-\mathcal{H}^\dagger)$ define the nearest Hermitian and skew-Hermitian matrices to $\mathcal{H}$, respectively \cite{ClosestHermitian1975}. Therefore, the matrix $H$ can be used as an approximation to $\mathcal{H}$ in the simulation.
A matrix $\mathcal{H}$ is called normal if $\mathcal{H}^\dagger \mathcal{H}-\mathcal{H}\mathcal{H}^\dagger=0$.
When $\mathcal{H}$ is normal, it is easy to see that $H$ and $S$ commute: $[H,S]=HS-SH=0$. Moreover, because of Hermiticity, all the eigenvalues of $H$ are real; and because of skew-Hermiticity, all the eigenvalues of $S$ are purely imaginary.
Since $\mathcal{H}\mathcal{H}^\dagger=\mathcal{H}^\dagger \mathcal{H}$, the eigenvectors of $H$ and $S$ are the same. In addition, the imaginary parts of the eigenvalues of $\mathcal{H}$ are equal to the eigenvalues of $S$, and the real parts to the eigenvalues of $H$.
Hence, using $U_1=e^{iH}$ and $U_2=e^{S}$ in the simulation, one can simulate the non-Hermitian operator $\mathcal{H}$ on quantum computers by using two separate registers to obtain the imaginary and real parts of the eigenvalue.
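This eigenvalue splitting for normal matrices can be verified numerically; a minimal NumPy sketch using a circulant (hence normal) test matrix of our choosing:

```python
import numpy as np

# Circulant matrices are normal; build one from an arbitrary first row.
c = np.array([0.0, 1.0, 2.0, 0.5])
n = len(c)
Hc = np.array([[c[(j - i) % n] for j in range(n)] for i in range(n)])
assert np.allclose(Hc @ Hc.T, Hc.T @ Hc)  # normality check

H = (Hc + Hc.T) / 2.0                     # Hermitian part
S = (Hc - Hc.T) / 2.0                     # skew-Hermitian part

lam = np.linalg.eigvals(Hc)
# Real parts of eig(Hc) match eig(H); imaginary parts match eig(S).
print(np.allclose(np.sort(lam.real), np.sort(np.linalg.eigvalsh(H))))
print(np.allclose(np.sort(lam.imag), np.sort(np.linalg.eigvals(S).imag)))
```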
However, in the stochastic case, if $\mathcal{H}$ is normal, it must also be doubly stochastic: i.e., its left and right principal eigenvectors are already known to be the vector of all ones with eigenvalue one. Therefore, instead of an approximate normal matrix, we can use the closest Hermitian matrix defined above, $H=\frac{1}{2}(\mathcal{H}+\mathcal{H}^\dagger)$.
In the non-stochastic case, one can instead try to find the closest normal matrix by following the Jacobi algorithm, which attempts to diagonalize the matrix by using rotation matrices (Givens rotations) and converges to a matrix denoted $\Delta H$.
The closest normal matrix in the Frobenius norm is then defined by the diagonal elements of $\Delta H$ together with the rotation matrices used in the algorithm \cite{Higham89MatrixNearness,Ruhe1987closest}. One can also procure a quantum circuit implementing this closest matrix by mapping the rotation matrices to quantum gates as done in \cite{Vartiainen2004efficient}.
Moreover, since this algorithm is based on the eigenvalue decomposition, the eigenvalues (the diagonal elements of $\Delta H$) and the eigenvectors (the combination of the rotation matrices) of the found normal matrix are already generated by the algorithm.
\section{Numerical Results}
\label{Sec:NumericSimulations}
In this section, we compare the estimated values of $\alpha_1^2$, $P_{reg_1}$, and $P_{reg_2}$ (i.e., $\widetilde{\alpha}_1^2$, $\widetilde{P}_{reg_1}$, and $\widetilde{P}_{reg_2}$, respectively) with the computed probabilities $\alpha_1^2$, $P_{reg_1}$, $P_{reg_2}$ for the randomly generated matrices described below.
\subsection{Random Symmetric Matrices with Non-Negative Off-Diagonal Elements}
In MATLAB, we generate a random symmetric matrix with non-negative off-diagonal elements as follows: first, a random diagonal and a strictly upper triangular sparse non-negative matrix are generated; these are then combined into a symmetric matrix with non-negative off-diagonal elements:
\begin{center}
\begin{verbatim}
R = triu(sprand(N,N,0.5),1); % strictly upper triangular sparse non-negative part
L = randn(N,1);              % random diagonal entries
H = R + R' + diag(L);        % symmetric, non-negative off-diagonal
\end{verbatim}
\end{center}
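For reference, an equivalent NumPy sketch (the helper name and the way sparsity is emulated are our assumptions; \texttt{sprand}'s exact density semantics differ slightly):

```python
import numpy as np

def random_offdiag_nonneg(N, density=0.5, seed=None):
    """Random symmetric matrix with non-negative off-diagonal elements,
    mirroring the MATLAB snippet above."""
    rng = np.random.default_rng(seed)
    # Strictly upper triangular sparse non-negative part, like triu(sprand(...),1).
    R = rng.random((N, N)) * (rng.random((N, N)) < density)
    R = np.triu(R, k=1)
    L = rng.normal(size=N)                # random diagonal, like randn(N,1)
    return R + R.T + np.diag(L)

H = random_offdiag_nonneg(16, seed=3)
print(np.allclose(H, H.T))
```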
Fig.~\ref{fig:randomMatrices} shows the comparison of the estimated and computed probabilities for matrices of different dimensions generated by the above code. As shown in the figure, the estimated and computed probabilities are almost the same and very high (close to 1). Furthermore, the success probability $P_{reg_1}$, with the Hadamard gates applied to $\ket{reg_2}$ at the end, is almost one and higher than the probability $\alpha_1^2$ obtained without the Hadamard gates.
\subsection{Random 3-Local Hamiltonians}
As our first example, the following Hamiltonian is employed:
\begin{equation}
\label{Eq:LocalH1}
\mathcal{H}_{XZ}=\sum_{i,j,k}K_{ijk}X_iX_jX_k+J_{ijk}Z_iZ_jZ_k
\end{equation}
with $0 \leq K_{ijk}\leq 1$ and $-1 \leq J_{ijk} \leq 1$. Here, $X$ and $Z$ are the Pauli spin operators. Choosing $K_{ijk}$ and $J_{ijk}$ randomly, 50 random matrices are generated for each of the 9-, 10-, 11-, 12-, and 13-qubit systems; the estimated and computed probabilities are shown in Fig.~\ref{fig:randomLocalH1}.
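The Hamiltonian in Eq.(\ref{Eq:LocalH1}) can be assembled as an explicit matrix for a small number of qubits; a NumPy sketch (illustrative only; the function names and the seed are ours):

```python
import numpy as np
from itertools import combinations

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)

def op_on(sites, P, n):
    """Tensor product with operator P on the given qubits, identity elsewhere."""
    out = np.array([[1.0]])
    for q in range(n):
        out = np.kron(out, P if q in sites else I2)
    return out

def random_H_xz(n, seed=None):
    rng = np.random.default_rng(seed)
    H = np.zeros((2**n, 2**n))
    for ijk in combinations(range(n), 3):
        K = rng.uniform(0.0, 1.0)        # 0 <= K_ijk <= 1
        J = rng.uniform(-1.0, 1.0)       # -1 <= J_ijk <= 1
        H += K * op_on(ijk, X, n) + J * op_on(ijk, Z, n)
    return H

H = random_H_xz(4, seed=5)
# Z-terms are diagonal; off-diagonal elements come only from the K >= 0 X-terms,
# so H has non-negative off-diagonal elements.
off = H - np.diag(np.diag(H))
print(H.shape, np.all(off >= 0))
```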
In the second example, 1-body and 2-body interaction terms are also included separately in the Hamiltonian:
\begin{equation}
\label{Eq:LocalH2}
\begin{split}
\mathcal{H}_{XZ}& =\sum_{i}d_iX_i+h_iZ_i+\sum_{i,j}{K_1}_{ij}X_iX_j+
{J_1}_{ij}Z_iZ_j\\ &+\sum_{i,j,k}{K_2}_{ijk}X_iX_jX_k+
{J_2}_{ijk}Z_iZ_jZ_k
\end{split}
\end{equation}
with $0 \leq d_i,{K_1}_{ij},{K_2}_{ijk}\leq 1$ and $-1 \leq h_i,{J_1}_{ij},{J_2}_{ijk} \leq 1$.
As done for Eq.(\ref{Eq:LocalH1}), choosing the coefficients randomly, Hamiltonians of different sizes are generated. The results are shown in Fig.~\ref{fig:randomLocalH2}. As seen in the figure, the estimation is not as good as in the cases of Fig.~\ref{fig:randomMatrices} and Fig.~\ref{fig:randomLocalH1}. This is because the ratio defined in Eq.(\ref{eq:ratioofelements}) is generally higher for the matrices generated from Eq.(\ref{Eq:LocalH2}) than for those from Eq.(\ref{Eq:LocalH1}).
\section{Conclusion}
In this paper, we have shown that the positive eigenpair of an irreducible symmetric non-negative matrix can be obtained through the phase estimation algorithm when the estimated probabilities are high. This eliminates the necessity of knowing an initial approximate eigenvector in applications of the phase estimation algorithm to a wide variety of problems involving non-negative matrices. We have also discussed how to apply this approach to local Hamiltonian problems and stochastic processes, and compared the estimated success probabilities with the computed ones for random symmetric matrices and 3-local Hamiltonians.
\section{Introduction}
Landau continuous phase transitions, classified by spontaneous symmetry breaking of local order parameters~\cite{Landau1937}, are important concepts in condensed matter physics that lie at the heart of our understanding of various phenomena such as quantum magnetism and superconductivity. However, topological phase transitions among topological phases of matter, which are generally indistinguishable by any local order parameters~\cite{Wen1990}, require a change in a topological invariant. For a symmetry-protected topological phase, the topological characterization is only well-defined in the presence of the symmetry; for example, the fermionic QSHE with time-reversal symmetry can be characterized by a $Z_2$ topological index, which is the main concern of our paper. In interacting systems with a topologically non-trivial structure, the interplay between topology, symmetry, and interaction may lead to a complex nature for the quantum phase transition, possibly a first-order transition~\cite{Varney2011,Amaricci2015,Roy2016}. Nevertheless, whether or not a continuous phase transition can be accompanied by a change of topological order is an intricate open question~\cite{Samkharadze2016,Lee2015,Castelnovo2008,Hamma2008}, which motivates us to reinvestigate strong correlation effects on interacting topological insulators with a $Z_2$ topological index.
Recent studies on topological insulators have indicated such a concrete example of interaction-driven continuous quantum phase transition (CQPT) from $Z_2$ topological order to antiferromagnetism, where the universal continuous evolutions of physical quantities are expected~\cite{Ran2006,Moon2012,Whitsitt2016}. Within the Kane-Mele-Hubbard (KMH) model~\cite{Kane2005a,Kane2005b}, Xu and Moore proposed a CQPT from the QSHE to the trivial Mott insulator driven by interactions~\cite{Xu2006}. In the strong coupling limit, Rachel and Le Hur first derived its effective spin Hamiltonian up to second order perturbation, and concluded that the Mott antiferromagnetism (AFM) is in the transverse $xy$-plane, instead of in the longitudinal $z$-direction~\cite{Rachel2010}. And this scenario including the CQPT from the QSHE to the trivial $xy$-AFM was supported by numerical studies including quantum Monte Carlo (QMC) methods~\cite{Hohenadler2011,Zheng2011,Hohenadler2012,Assaad2013a,Assaad2013b,Hohenadler2014}, the variational cluster approach~\cite{Yu2011,Budich2012}, and the mean field theory~\cite{Pesin2010,Murakami2007,Rachel2010,Vaezi2012,Chen2015,Fiete2012,Reuther2012,Wu2012,Liu2013,Laubach2014}. In QMC simulations of spin order, a finite size analysis shows that the transverse long-range spin correlation $\langle S_{\rr}^xS_{\rr'}^x\rangle$ remains a robust finite value as the distance $|\rr-\rr'|$ increases in the antiferromagnetic regime, while the longitudinal long-range spin correlation $\langle S_{\rr}^zS_{\rr'}^z\rangle$ vanishes as the system size increases~\cite{Zheng2011,Hohenadler2012,Assaad2013a,Hohenadler2014}. In Ref.~\cite{Assaad2013a}, a Curie-law signature in the magnetic susceptibility is identified by adiabatically inserting a $\pi$ flux. 
Early studies based on mean-field theory~\cite{Pesin2010,Murakami2007,Rachel2010,Vaezi2012,Chen2015} predicted the existence of intermediate topological antiferromagnetic phases at certain moderate Hubbard repulsion, making the nature of the CQPT more {\it intricate}. However, no signal of an intermediate topological phase has been detected by recent numerical QMC simulations of the $Z_2$ invariant~\cite{Lang2013,Hung2014}. Furthermore, as to the phase diagram of the KMH model, the transition from the QSHE to the antiferromagnetic Mott insulator has been theoretically predicted to belong to the three-dimensional $XY$ universality class~\cite{Lee2011,Griset2012,Hohenadler2013}, and this continuous transition nature, with the universal critical exponents $\beta=0.3486,\nu=0.6717$, is rigorously demonstrated by the finite-size scaling of the $xy$-transverse spin structure factor in QMC numerical simulations~\cite{Hohenadler2011,Hohenadler2012,Assaad2013a,Hohenadler2014}. Taking into account the rich class of CQPTs, it is natural and important to ask whether the CQPT nature is common to different time-reversal symmetric quantum spin Hall systems realized on different lattice geometries.
In this work, we study this interaction-driven transition nature in two representative topological lattice models with time-reversal symmetry through state-of-the-art density-matrix renormalization group (DMRG) and exact diagonalization (ED) techniques. In Sec.~\ref{model}, we introduce the time-reversal symmetric spinful fermionic Hamiltonians in the typical $\pi$-flux checkerboard and Haldane-honeycomb lattices. In Sec.~\ref{ground}, by tuning the Hubbard repulsion, we demonstrate a CQPT from the $Z_2$ QSHE at weak interactions to a trivial antiferromagnetic Mott insulator at strong interactions, with evidence from the Chern number matrix and spin structure factors. In particular, we identify that the universality class of the CQPT is not unique: the transition matches the three-dimensional XY universality class in the Haldane-honeycomb lattice, while for the typical $\pi$-flux checkerboard lattice the transition is possibly in the universality class of the 2D Ising model. Finally, in Sec.~\ref{summary}, we summarize our results and compare the difference between the QSHE and the integer quantum Hall effect.
\section{Theoretical Models}\label{model}
We consider the spinful fermions in two representative topological lattice models with time-reversal symmetry: (i) the Haldane-honeycomb (HC) lattice~\cite{Wang2011}
\begin{align}
&H_{HC}^{\uparrow}=-t'\sum_{\langle\langle\rr,\rr'\rangle\rangle}[c_{\rr',\uparrow}^{\dag}c_{\rr,\uparrow}\exp(i\phi_{\rr'\rr})+H.c.]\nonumber\\
&-t\!\sum_{\langle\rr,\rr'\rangle}\!\!c_{\rr',\uparrow}^{\dag}c_{\rr,\uparrow}
-t''\!\sum_{\langle\langle\langle\rr,\rr'\rangle\rangle\rangle}\!\!\!\! c_{\rr',\uparrow}^{\dag}c_{\rr,\uparrow}+H.c.,
\end{align}
and (ii) the $\pi$-flux checkerboard (CB) lattice~\cite{Sun2011}
\begin{align}
&H_{CB}^{\uparrow}=-t\!\sum_{\langle\rr,\rr'\rangle}\!\big[c_{\rr',\uparrow}^{\dag}c_{\rr,\uparrow}\exp(i\phi_{\rr'\rr})+H.c.\big]\nonumber\\
&-\!\sum_{\langle\langle\rr,\rr'\rangle\rangle}\!\! t_{\rr,\rr'}'c_{\rr',\uparrow}^{\dag}c_{\rr,\uparrow}
-t''\!\sum_{\langle\langle\langle\rr,\rr'\rangle\rangle\rangle}\!\!\!\! c_{\rr',\uparrow}^{\dag}c_{\rr,\uparrow}+H.c.
\end{align}
Due to time-reversal symmetry, we take $H_{CB}^{\downarrow}=\mathcal{T}H_{CB}^{\uparrow}\mathcal{T}^{-1}$ and $H_{HC}^{\downarrow}=\mathcal{T}H_{HC}^{\uparrow}\mathcal{T}^{-1}$ with $\mathcal{T}$ the time-reversal operation. Here $c_{\rr,\sigma}^{\dag}$ is the particle creation operator of spin $\sigma=\uparrow,\downarrow$ at site $\rr$, $\langle\ldots\rangle$,$\langle\langle\ldots\rangle\rangle$ and $\langle\langle\langle\ldots\rangle\rangle\rangle$ denote the nearest-neighbor, the next-nearest-neighbor, and the next-next-nearest-neighbor pairs of sites, respectively.
Typically, we choose $t''=0,\phi=\pi/2$ for honeycomb lattice which reduces to the famous Kane-Mele (KM) model~\cite{Kane2005a,Kane2005b}, and $t''=0,\phi=\pi/4$ for checkerboard lattice~\cite{Sun2011}. In the flat band limit, we take the parameters $t'=0.6t,t''=-0.58t,\phi=2\pi/5$ for honeycomb lattice and $t'=0.3t,t''=-0.2t,\phi=\pi/4$ for checkerboard lattice.
Taking into account the on-site Hubbard repulsion $V_{int}=U\sum_{\rr}n_{\rr,\uparrow}n_{\rr,\downarrow}$, where $n_{\rr,\sigma}$ is the particle number operator of spin-$\sigma$ at site $\rr$, the model Hamiltonian becomes $H=H_{CB}^{\downarrow}+H_{CB}^{\uparrow}+V_{int}$ ($H=H_{HC}^{\downarrow}+H_{HC}^{\uparrow}+V_{int}$). In the following we explore the many-body ground state of $H$ at half-filling $N_{\uparrow}/N_s=N_{\downarrow}/N_s=1/2$ in a finite system of $N_x\times N_y$ unit cells (the total number of sites is $N_s=2\times N_x\times N_y$) with particle-conserving $U(1)\times U(1)$ symmetry. In the ED study, with translational symmetry, the energy states are labeled by the total momentum $K=(K_x,K_y)$ in units of $(2\pi/N_x,2\pi/N_y)$ in the Brillouin zone. For larger systems we exploit DMRG on the cylindrical geometry, and keep up to 3000 basis states to obtain accurate results.
\section{Interaction-driven phase transitions}\label{ground}
In this section, we present the numerical analysis of the interaction-driven phase transition from two-component QSHE to antiferromagnetism at half-filling. The two-component QSHE can be identified by the Chern number matrix with featureless spin structure factors, and the corresponding charge (spin) pumpings are complementary to and consistent with the Chern number matrix.
\subsection{ED analysis}
\begin{figure}[t]
\includegraphics[height=2.25in,width=3.4in]{energy1.eps}
\caption{\label{energyhc}(Color online) Numerical ED results for two-component fermions at half-filling $N_s=2\times2\times4=16,N_{\uparrow}=N_{\downarrow}=8$ in the Haldane-honeycomb lattice with $t'=0.3t,t''=0,\phi=\pi/2$. (a) The low energy spectrum as a function of onsite repulsion $U$. (b) The energy spectrum gap for the lowest two energy states in the whole parameter plane $(\theta^{x}_{\uparrow}=\theta^{x}_{\downarrow},\theta^{y}_{\uparrow}=\theta^{y}_{\downarrow})$, keeping $Z_2$ symmetry. (c) The antiferromagnetic spin structure factors $S_{AF}^{zz},S_{AF}^{xy}$ of the ground state as a function of $U$. (d) The topological transition signature of the ground state obtained from its many-body Chern number $C_{\uparrow,\uparrow}$ and the standard deviations of Berry curvature as a function of $U$.
}
\end{figure}
We first present an ED study of the ground state properties for HC lattice with two different lattice sizes $N_s=16,12$.
In Fig.~\ref{energyhc}(a), we plot the low energy evolution as a function of on-site repulsion $U$.
For weak interactions, there always exists a stable unique ground state at $K=(0,0)$ with a large gap separating it from higher levels. By tuning $U$ from weak to strong, there is no level crossing between the ground state and the high-level excited states. This ground state also does not undergo any level crossing with excited levels in the $Z_2$-symmetric parameter plane $(\theta^{x}_{\uparrow}=\theta^{x}_{\downarrow},\theta^{y}_{\uparrow}=\theta^{y}_{\downarrow})$, as indicated in Fig.~\ref{energyhc}(b), signaling a continuous phase transition with $Z_2$ symmetry. (Here $\theta_{\sigma}^{\alpha}$ is the twisted angle for spin-$\sigma$ particles in the $\alpha$-direction, which shifts the particle crystal momentum $\kk_{\alpha}\rightarrow\kk_{\alpha}+\theta_{\sigma}^{\alpha}/N_{\alpha}$; see the definition below.) We emphasize that fully establishing the continuous ground-state energy evolution without level crossing would require a scaling analysis of the system size, which is beyond our current ED limit. Instead, we demonstrate the continuous nature of the transition for larger system sizes via the DMRG calculation of the ground-state wavefunction fidelity and the antiferromagnetic order parameters, as shown in Sec.~\ref{dmrg}.
Alternatively, the topological index obtained in the ED calculation can help us locate the
phase transition boundary. The topological nature of quantum spin-Hall state is characterized by the Chern number matrix by introducing twisted boundary conditions~\cite{Sheng2003,Sheng2006} $\psi(\cdots,\rr_{\sigma}^{i}+N_{\alpha},\cdots)=\psi(\cdots,\rr_{\sigma}^{i},\cdots)\exp(i\theta_{\sigma}^{\alpha})$. The system is periodic when one flux quantum $\theta_{\sigma}^{\alpha}=0\rightarrow2\pi$ is inserted. Meanwhile, the many-body Chern number of the ground state wavefunction $\psi$ is defined as $C_{\sigma,\sigma'}=\frac{1}{2\pi}\int d\theta_{\sigma}^{x}d\theta_{\sigma'}^{y}F_{\sigma,\sigma'}^{xy}$ with the Berry curvature
\begin{align}
F_{\sigma,\sigma'}^{xy}=\mathbf{Im}\left(\langle{\frac{\partial\psi}{\partial\theta_{\sigma}^x}}|{\frac{\partial\psi}{\partial\theta_{\sigma'}^y}}\rangle
-\langle{\frac{\partial\psi}{\partial\theta_{\sigma'}^y}}|{\frac{\partial\psi}{\partial\theta_{\sigma}^x}}\rangle\right).\nonumber
\end{align}
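In practice such Chern numbers are evaluated on a discretized grid of boundary angles. As a self-contained illustration of the lattice-gauge recipe (a single-particle two-band analogue computed over the Brillouin zone, not the many-body calculation of this work; the Qi-Wu-Zhang model is chosen purely as a test case):

```python
import numpy as np

def chern_lower_band(hk, nk=40):
    """Discretized Berry-curvature (Fukui-Hatsugai) Chern number of the lower band."""
    ks = np.linspace(0.0, 2.0 * np.pi, nk, endpoint=False)
    u = np.empty((nk, nk, 2), dtype=complex)
    for a, kx in enumerate(ks):
        for b, ky in enumerate(ks):
            _, vecs = np.linalg.eigh(hk(kx, ky))
            u[a, b] = vecs[:, 0]                     # lower-band eigenvector
    link = lambda v, w: np.vdot(v, w) / abs(np.vdot(v, w))  # U(1) link variable
    F = 0.0
    for a in range(nk):
        for b in range(nk):
            u00 = u[a, b]
            u10 = u[(a + 1) % nk, b]
            u11 = u[(a + 1) % nk, (b + 1) % nk]
            u01 = u[a, (b + 1) % nk]
            # Gauge-invariant Berry phase around one plaquette.
            F += np.angle(link(u00, u10) * link(u10, u11)
                          * link(u11, u01) * link(u01, u00))
    return int(round(F / (2.0 * np.pi)))

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def qwz(m):
    # Two-band test model: topological for 0 < |m| < 2, trivial for |m| > 2.
    return lambda kx, ky: (np.sin(kx) * sx + np.sin(ky) * sy
                           + (m + np.cos(kx) + np.cos(ky)) * sz)

print(abs(chern_lower_band(qwz(1.0))), chern_lower_band(qwz(3.0)))
```

The many-body $C_{\sigma,\sigma'}$ follows the same plaquette summation, with $(\theta_{\sigma}^x,\theta_{\sigma'}^y)$ playing the role of the momenta.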
Due to time-reversal symmetry, for any ground state and interaction, one has the antisymmetric properties $C_{\uparrow,\downarrow}=-C_{\downarrow,\uparrow},C_{\uparrow,\uparrow}=-C_{\downarrow,\downarrow}$ in the spanned Hilbert space, and the total Chern number related to the charge Hall conductance is equal to zero $C_q=\sum_{\sigma,\sigma'}C_{\sigma,\sigma'}=0$ for any interaction strength. Therefore we always have an antisymmetric $C$-matrix~\cite{Sheng2006}
\begin{align}
\mathbf{C}=\begin{pmatrix}
C_{\uparrow,\uparrow} & C_{\uparrow,\downarrow}\\
C_{\downarrow,\uparrow} & C_{\downarrow,\downarrow}\\
\end{pmatrix}
\end{align}
For decoupled QSHE at weak interactions, we obtain $C_{\uparrow,\uparrow}=1$, and $C_{\uparrow,\downarrow}=0$. However, for strong interactions, the off-diagonal element $C_{\uparrow,\downarrow}$ related to the drag Hall conductance arising from interspecies correlation may be nonzero for two-component quantum Hall effects~\cite{Sheng2003,Sheng2005,Zeng2017,Nakagawa2017}.
\begin{figure}[t]
\includegraphics[height=1.4in,width=3.4in]{energy2.eps}
\caption{\label{energycb}(Color online) Numerical ED results for two-component fermions at half-filling $N_s=2\times2\times4=16,N_{\uparrow}=N_{\downarrow}=8$ in the $\pi$-flux checkerboard lattice with $t'=0.3t,t''=0,\phi=\pi/4$. (a) The low energy spectrum as a function of onsite repulsion $U$. (b) The antiferromagnetic spin structure factors $S_{AF}^{zz},S_{AF}^{xy}$ of the lowest ground state and its many-body Chern number $C_{\uparrow,\uparrow}$ as a function of $U$. }
\end{figure}
To clarify the interaction-driven topological transition, we calculate the evolution of $C_{\uparrow,\uparrow}$ as a function of $U$. In Fig.~\ref{energyhc}(d), $C_{\uparrow,\uparrow}$ experiences a fast drop as the interaction $U$ increases across the critical threshold $U_c$, where the distribution of Berry curvature exhibits a singular behavior, signalling the topological phase transition of a many-body system~\cite{Carollo2005,Zhu2006}. As a quantitative measure of the fluctuation of the Berry curvature, we take $\Delta F_{\sigma,\sigma'}=\sqrt{\int d\theta_{\sigma}^{x}d\theta_{\sigma'}^{y}[F_{\sigma,\sigma'}^{xy}-\overline{F}]^2}$ where $\overline{F}$ is the average value. Both $\Delta F_{\uparrow,\uparrow}$ and $\Delta F_{\uparrow,\downarrow}$ show a peak at the critical point where topological invariant changes, resulting from the energy level crossing at $(\theta^{x}_{\uparrow},\theta^{y}_{\uparrow})=(\pi,0)$ (see Fig.~\ref{level} for details). Physically, the sudden jump of $C_{\uparrow,\uparrow}$ and the singularity of $\Delta F_{\uparrow,\uparrow}$ mark the quantum phase transition from QSHE to Mott insulator, while the latter is characterized by gapless spin excitations as shown in Fig.~\ref{dmrghc}(d).
In addition, to get a picture of the Mott insulator in the large-$U$ limit, we calculate the antiferromagnetic spin structure factors
\begin{align}
S_{AF}^{zz}=\sum_{\alpha,\beta}[S_{AF}^{zz}]^{\alpha,\beta}\label{zafm}
\end{align}
and
\begin{align}
S_{AF}^{xy}=\sum_{\alpha,\beta}[S_{AF}^{xy}]^{\alpha,\beta},\label{xafm}
\end{align}
with the inner functions defined by
\begin{align}
[S_{AF}^{zz}]^{\alpha,\beta}&=\frac{1}{N_s}\sum_{i,j}(-1)^{\alpha}(-1)^{\beta}\langle S_{i\alpha}^zS_{j\beta}^z\rangle,\label{szz}\\
[S_{AF}^{xy}]^{\alpha,\beta}&=\frac{1}{N_s}\sum_{i,j}(-1)^{\alpha}(-1)^{\beta}\langle S_{i\alpha}^{+}S_{j\beta}^{-}+S_{i\alpha}^{-}S_{j\beta}^{+}\rangle,\label{sxy}
\end{align}
where $i,j$ denote unit cells, $\alpha,\beta\in\{A,B\}$ are sublattice indices, and $(-1)^{\alpha}=1\,(-1)$ for $\alpha=A\,(B)$. As indicated in Fig.~\ref{energyhc}(c), both $S_{AF}^{zz}$ and $S_{AF}^{xy}$ undergo a smooth evolution, implying a continuous transition. In the Mott regime of the Haldane-honeycomb lattice, the transverse $xy$-antiferromagnetism $S_{AF}^{xy}$ dominates.
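The normalization of Eq.~(\ref{zafm}) can be sanity-checked on a classical N\'eel pattern, for which the correlator factorizes (a NumPy sketch; the product-state assumption $\langle S_{i\alpha}^zS_{j\beta}^z\rangle=S_{i\alpha}^zS_{j\beta}^z$ is ours):

```python
import numpy as np

# Classical Neel pattern on Nx x Ny unit cells with A/B sublattices:
# S^z = +1/2 on sublattice A, -1/2 on sublattice B.
Nx, Ny = 4, 4
Ns = 2 * Nx * Ny
sz = np.empty((Nx * Ny, 2))
sz[:, 0], sz[:, 1] = 0.5, -0.5            # columns: alpha = A, B

# S_AF^zz = (1/Ns) * sum_{ij,ab} (-1)^a (-1)^b <S_{ia}^z S_{jb}^z>
sign = np.array([1.0, -1.0])              # (-1)^alpha
staggered = (sz * sign).sum()             # sum_{i,alpha} (-1)^alpha S_{i alpha}^z
S_zz = staggered**2 / Ns                  # product correlator factorizes
print(S_zz, Ns / 4)
```

The perfectly ordered pattern saturates the structure factor at $N_s/4$, i.e., $S_{AF}^{zz}$ is extensive in the ordered phase.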
\begin{figure}[t]
\includegraphics[height=1.4in,width=3.4in]{level.eps}
\caption{\label{level}(Color online) Numerical ED results for the lowest energy levels at half-filling $N_s=16,N_{\uparrow}=N_{\downarrow}=8$ as a function of Hubbard repulsion under the insertion of half flux quantum $\theta^{x}_{\uparrow}=\pi,\theta^x_{\downarrow}=0$ for (a) the Haldane-honeycomb lattice and (b) the $\pi$-flux checkerboard lattice, respectively. The parameters $t'=0.3t,t''=0$.}
\end{figure}
Similar results have also been obtained for the $\pi$-flux CB lattice. As shown in Fig.~\ref{energycb} for the $\pi$-flux checkerboard model, the ground energy level is continuously connected between the quantum spin-Hall state at weak interactions and a trivial Mott antiferromagnet at strong interactions as the onsite repulsion is varied. However, in the Mott regime, the dominant antiferromagnetic order is aligned along the $z$-direction, given by $S_{AF}^{zz}$, instead of the transverse antiferromagnetism $S_{AF}^{xy}$ in the $xy$-plane. As we will show below, the different nature of the antiferromagnetic orders leads to distinct finite-size scaling behaviors of the CQPT, depending on the lattice details. Similarly, a second-order transition from the QSHE to Ising antiferromagnetism has also been predicted in the correlated Bernevig-Hughes-Zhang model using dynamical mean field theory~\cite{Hohenadler2013,Yoshida2012,Miyakoshi2013}.
However, under the insertion of flux quantum $\theta^{x}_{\uparrow}=\pi,\theta^x_{\downarrow}=0$ which breaks the $Z_2$ symmetry between spin-up and spin-down particles, the lowest two energy levels indeed cross with each other at the critical point where the Berry curvature becomes singular and topological invariant changes, as indicated in Figs.~\ref{level}(a) and~\ref{level}(b).
\subsection{DMRG results}\label{dmrg}
\begin{figure}[t]
\includegraphics[height=2.6in,width=3.3in]{dmrghc.eps}
\caption{\label{dmrghc}(Color online) Numerical DMRG results on a cylinder Haldane-honeycomb lattice with width $L_y=2N_y$ and length $L_x=N_x=18$ at half-filling. The evolutions of the entanglement entropy $S_{L}$, charge and spin pumpings $\Delta Q,\Delta S$, spin structure factors $S_{AF}^{zz},S_{AF}^{xy}$ as a function of $U$ on a cylinder with width (a) $L_y=6$ and (b) $L_y=8$, respectively; (c) The absolute wavefunction overlap $F(U)=|\langle\psi(U)|\psi(U+\delta U)\rangle|$, $F_i(U)=|\langle\psi(U=3t)|\psi(U)\rangle|$, and $F_f(U)=|\langle\psi(U=9t)|\psi(U)\rangle|$ with different cylinder lengths $L_x=N_x=18,15$ respectively; (d) The spin excitation gap $\Delta_s$ as a function of $U$ for different lengths. The inset shows the ground state energy derivative $\partial E/\partial U$ per site. The smooth transition is characterized by the continuous behavior of these physical quantities. There are no signs of a first-order transition. The parameters $t'=0.3t,t''=0,\phi=\pi/2$.}
\end{figure}
To further verify the continuous interaction-driven transition, we exploit an unbiased DMRG approach for larger system sizes, using a cylindrical geometry up to a maximum width $L_y=8$ ($N_y=4$). As shown in Fig.~\ref{dmrghc} for the HC lattice, we measure three different quantities as a function of $U$: the ground state wavefunction overlap $F(U)=|\langle\psi(U)|\psi(U+\delta U)\rangle|$ ($\delta U$ is as small as $0.1t$), the ground state entanglement entropy $S_L$, and the ground state energy derivative. We also check the overlaps $F_i(U)=|\langle\psi(U=3t)|\psi(U)\rangle|$ between the ground state $\psi(U)$ and the QSHE at $U=3t$, and $F_f(U)=|\langle\psi(U=9t)|\psi(U)\rangle|$ between the ground state $\psi(U)$ and the AFM at $U=9t$. All these physical quantities exhibit continuous evolutions from weak to strong interactions, such that we can exclude the possibility of a first-order phase transition. The spin excitation gap $\Delta_s=E_0(S^z=1)-E_0(S^z=0)$ tends to diminish continuously in the Mott regime.
\begin{figure}[t]
\includegraphics[height=1.65in,width=3.4in]{corre.eps}
\caption{\label{corre}(Color online) Numerical DMRG results of the long-range antiferromagnetic spin correlation functions $|\langle S_{i,A}^{+}S_{j,B}^{-}\rangle|$,$|\langle S_{i,A}^zS_{j,B}^z\rangle|$ as a distance $|j-i|$ in the $x$-direction between sublattice A and sublattice B on a cylinder Haldane-honeycomb lattice with finite width $L_y=2N_y=6$ and fixed length $L_x=N_x=18$ at half-filling for different Hubbard repulsions: (a) $U=3.6t<U_c$ and (b) $U=7.0t>U_c$, respectively. The blue/red dashed lines are the exponential fit to the decaying behaviors of these correlation functions. The parameters $t'=0.3t,t''=0,\phi=\pi/2$.}
\end{figure}
Second, we characterize the topological nature of the ground state from its topological charge pumping by inserting one flux quantum $\theta_{\uparrow}^{y}=\theta,\theta_{\downarrow}^{y}=0$ from $\theta=0$ to $\theta=2\pi$ on cylinder systems, based on the newly developed adiabatic DMRG~\cite{Gong2014}, in connection to the quantized Hall conductance. The net transfer of the total charge from the right side to the left side is encoded in the expectation value $Q(\theta)=N_{\uparrow}^{L}+N_{\downarrow}^{L}=tr[\widehat{\rho}_L(\theta)\widehat{Q}]$. Here we partition the lattice system on the cylinder along the $y$-direction into two halves with equal numbers of lattice sites. $N_{\sigma}^{L}$ is the particle number of spin-$\sigma$ in the left cylinder part, and $\widehat{\rho}_L$ is the reduced density matrix of the corresponding left part~\cite{Zaletel2014}. Under the insertion of the flux $\theta_{\uparrow}^{y}=\theta,\theta_{\downarrow}^{y}=0$ in the $y$-direction, the change of $N_{\uparrow}^{L}+N_{\downarrow}^{L}$ indicates the transverse charge transfer from the right side to the left side in the $x$-direction, induced by both the diagonal Hall conductance $C_{\uparrow,\uparrow}$ and the drag Hall conductance $C_{\downarrow,\uparrow}$. From the Chern number matrix of two-component quantum Hall effects, in each cycle we obtain~\cite{Zeng2017}
\begin{align}
\Delta Q=Q(2\pi)-Q(0)=C_{\uparrow,\uparrow}+C_{\downarrow,\uparrow}.
\end{align}
In order to quantify the spin-Hall conductance, we also calculate the spin pumping by inserting one flux quantum $\theta_{\uparrow}^{y}=\theta_{\downarrow}^{y}=\theta$ from $\theta=0$ to $\theta=2\pi$ in the $y$-direction, and define the $Z_2$ spin transfer $\Delta S$ from the right side to the left side in the $x$-direction by the physical quantity $S(\theta)=N_{\uparrow}^{L}-N_{\downarrow}^{L}=tr[\widehat{\rho}_L(\theta)\widehat{S}]$ in analogy to the charge transfer. Similarly, we obtain~\cite{Zeng2017}
\begin{align}
\Delta S=S(2\pi)-S(0)=C_{\uparrow,\uparrow}-C_{\downarrow,\uparrow}+C_{\uparrow,\downarrow}-C_{\downarrow,\downarrow}.
\end{align}
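As a quick consistency check on the two pumping formulas, note that for the QSHE the Chern number matrix has $C_{\uparrow,\uparrow}=1$, $C_{\downarrow,\downarrow}=-1$, and vanishing drag components, giving $\Delta Q=1$ and $\Delta S=2$, while a trivial Mott insulator gives zero for both. A minimal Python sketch of this bookkeeping (the helper name is ours, not part of any DMRG code):

```python
import numpy as np

def pump_values(C):
    """Charge and Z2 spin transfer per flux cycle from the 2x2 Chern
    number matrix C[sigma, sigma'], with rows/columns ordered (up, down)."""
    dQ = C[0, 0] + C[1, 0]                      # Delta Q = C_uu + C_du
    dS = C[0, 0] - C[1, 0] + C[0, 1] - C[1, 1]  # Delta S = C_uu - C_du + C_ud - C_dd
    return dQ, dS

# Quantum spin-Hall state: opposite Chern numbers for the two spin species
C_qshe = np.array([[1, 0], [0, -1]])
# Trivial Mott insulator: all Chern numbers vanish
C_mott = np.zeros((2, 2))
```

Applied to these two matrices, the sketch reproduces the quantized values $(\Delta Q,\Delta S)=(1,2)$ and $(0,0)$ quoted below.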
For each flux cycle, we obtain both $\Delta Q\simeq1$ and $\Delta S\simeq2$ for the QSHE in the weakly interacting regime. In the strongly interacting regime, however, $\Delta Q\simeq0$ and $\Delta S\simeq0$. The change of the charge pumping is shown in Fig.~\ref{dmrghc}(a), where the critical value is $U_c\simeq4.8t$, while the $Z_2$ spin pumping retains a finite value, deviating from the quantized integer value $2$, up to $U_{c}'>U_c$. However, upon increasing $L_y$ from $6$ to $8$, we find that the difference between $U_c$ and $U_c'$ becomes substantially reduced, as shown in Figs.~\ref{dmrghc}(a-b), which may be consistent with a direct transition from the QSHE to the Mott insulator.
\begin{figure}[t]
\includegraphics[height=1.5in,width=3.4in]{dmrgcb.eps}
\caption{\label{dmrgcb}(Color online) Numerical DMRG results on a cylinder $\pi$-flux checkerboard lattice with width $L_y=2N_y=6$ at half-filling with parameters $t'=0.3t,t''=0$. (a) The evolution of the entanglement entropy $S_{L}$, charge and spin pumpings $\Delta Q,\Delta S$, and spin structure factor $S_{AF}^{zz}$ as a function of $U$ on a cylinder with length $L_x=N_x=18$. (b) The absolute wavefunction overlap $F(U)=|\langle\psi(U)|\psi(U+\delta U)\rangle|$, $F_i(U)=|\langle\psi(U=3t)|\psi(U)\rangle|$, and $F_f(U)=|\langle\psi(U=9t)|\psi(U)\rangle|$ with different cylinder lengths $L_x=N_x=18,12$, respectively.}
\end{figure}
Third, we measure the antiferromagnetic order through the spin structure factors $S_{AF}^{zz}$ and $S_{AF}^{xy}$. Both exhibit a continuous evolution near the critical point for different system sizes, as shown in Figs.~\ref{dmrghc}(a) and~\ref{dmrghc}(b), similar to our ED analysis. In the thermodynamic limit, $S_{AF}^{zz}$ should be vanishingly small in the strong-coupling limit $U\gg t$. In Figs.~\ref{corre}(a) and~\ref{corre}(b), our DMRG results show that for $U<U_c$ both antiferromagnetic spin correlations $\langle S_{i,A}^{+}S_{j,B}^{-}\rangle$ and $\langle S_{i,A}^zS_{j,B}^z\rangle$ decay exponentially with the distance $|j-i|$, while for $U>U_c$ only the longitudinal order parameter $\langle S_{i,A}^zS_{j,B}^z\rangle$ decays exponentially with the distance $|j-i|$; the transverse order parameter $\langle S_{i,A}^{+}S_{j,B}^{-}+S_{i,A}^{-}S_{j,B}^{+}\rangle$ remains at a robust finite value of order 0.01, which determines the square of the transverse XY spontaneous magnetization $m_{xy}^2=\lim_{|j-i|\rightarrow\infty}|\langle S_{i,A}^{+}S_{j,B}^{-}+S_{i,A}^{-}S_{j,B}^{+}\rangle|$. For $|j-i|>6$, $\langle S_{i\alpha}^zS_{j\beta}^z\rangle$ is already smaller than $10^{-6}$. These results are in good agreement with the physical picture proposed in Refs.~\cite{Rachel2010,Hohenadler2012,Assaad2013a,Hohenadler2014}. In our study, the very small value of the spin structure factor $S_{AF}^{zz}$ in both ED and DMRG is likely due to finite-width effects in the $y$-direction. Figures~\ref{dmrgcb}(a) and~\ref{dmrgcb}(b) show the continuous phase transition driven by the tunable repulsion $U$ on a cylinder $\pi$-flux checkerboard lattice for large system sizes. The topological phase transition is characterized by the charge and spin pumpings upon inserting one flux quantum.
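The exponential fits shown as dashed lines in Fig.~\ref{corre} amount to a least-squares fit of the form $A\,e^{-|j-i|/\xi}$; a sketch using a linear fit in log space on synthetic data (the amplitude and correlation length below are illustrative, not the measured values):

```python
import numpy as np

# Synthetic correlation data standing in for the DMRG output:
# |<S^z_i S^z_j>| = A * exp(-|j-i| / xi) with A = 0.5, xi = 2.0
r = np.arange(1, 10)
corr = 0.5 * np.exp(-r / 2.0)

# Fit in log space: log|corr| = log(A) - r / xi is linear in r,
# so an ordinary degree-1 polynomial fit recovers A and xi
slope, intercept = np.polyfit(r, np.log(corr), 1)
xi = -1.0 / slope
A = np.exp(intercept)
```

On real DMRG data one would restrict the fit window to separations where the correlation is still well above the truncation-error floor.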
All the physical order parameters (the spin structure factor, entanglement entropy, and wavefunction fidelity) exhibit a continuous evolution from weak to strong interactions.
As shown in Figs.~\ref{dmrghc} and~\ref{dmrgcb}, for the HC lattice only $S_{AF}^{xy}$ shows a rapid increase, signaling an antiferromagnetic order in the transverse $xy$-plane for $U>U_c$. In contrast, for the $\pi$-flux CB lattice only $S_{AF}^{zz}$ shows a rapid increase near the critical point, signaling an antiferromagnetic order in the longitudinal $z$-direction for $U>U_c$. To understand this, let us consider the antiferromagnetic long-range-ordered phase in the large-$U$ limit. For $U\gg t$, similar to the usual Hubbard model, we expand the Hamiltonian in powers of $t/U$ up to second order, arriving at the effective spin models $J\sum_{\langle\rr,\rr'\rangle}[S_{\rr}^{z}S_{\rr'}^{z}+i(S_{\rr}^{+}S_{\rr'}^{-}-S_{\rr}^{-}S_{\rr'}^{+})/2]
+J'\!\sum_{\langle\langle\rr,\rr'\rangle\rangle}\mathbf{S}_{\rr}\cdot\mathbf{S}_{\rr'}
+J''\!\sum_{\langle\langle\langle\rr,\rr'\rangle\rangle\rangle}\mathbf{S}_{\rr}\cdot\mathbf{S}_{\rr'}$ for $\pi$-flux CB lattice and $J\sum_{\langle\rr,\rr'\rangle}\mathbf{S}_{\rr}\cdot\mathbf{S}_{\rr'}
+J'\!\sum_{\langle\langle\rr,\rr'\rangle\rangle}[S_{\rr}^{z}S_{\rr'}^{z}+(e^{2i\phi}S_{\rr}^{+}S_{\rr'}^{-}+e^{-2i\phi}S_{\rr}^{-}S_{\rr'}^{+})/2]
+J''\!\sum_{\langle\langle\langle\rr,\rr'\rangle\rangle\rangle}\mathbf{S}_{\rr}\cdot\mathbf{S}_{\rr'}$ for Haldane HC lattice, where $J=4t^2/U, J'=4(t')^2/U,J''=4(t'')^2/U$~(see also the related effective spin Hamiltonian for HC lattice in Refs.~\cite{Rachel2010,Reuther2012}).
\begin{figure}[t]
\includegraphics[height=2.4in,width=3.4in]{scale.eps}
\caption{\label{scalexy}(Color online) Numerical DMRG results for the spin structure factor on a cylinder with width $L_y=2N_y=6$ at half-filling with parameters $t''=0$. (a-b) Finite-size dependence of the structure factor $S_{AF}^{xy}$ as a function of $U$ for HC lattice with critical exponents $\beta\simeq0.31,\nu\simeq0.7$ at $t'=0.3t$. (c-d) Finite-size scaling of the structure factor $S_{AF}^{zz}$ as a function of $U$ for CB lattice with critical exponents $\beta\simeq1/8,\nu\simeq1.0$ at $t'=0.3t$.}
\end{figure}
When $t''=0$, for the $\pi$-flux CB lattice the nearest-neighbor term is an Ising exchange, while the next-nearest-neighbor term is an isotropic antiferromagnetic Heisenberg exchange. For the Haldane HC lattice, in contrast, the nearest-neighbor term is an isotropic antiferromagnetic Heisenberg exchange, while the next-nearest-neighbor term is antiferromagnetic in the longitudinal direction but ferromagnetic in the transverse direction when $\phi$ is close to $\pi/2$. Our typical parameters give $J'/J\lesssim0.3$, away from the possible spin-liquid regime~\cite{Gong2014a,Gong2013}; combining all the exchange terms, we expect an antiferromagnetic order in the $z$-direction for the $\pi$-flux checkerboard lattice, but in the $xy$-plane for the Haldane-honeycomb lattice, due to the next-nearest-neighbor frustration term. As a result, the scaling behavior around the critical point is different for the HC and CB lattices.
\begin{figure}[t]
\includegraphics[height=1.35in,width=3.3in]{phase.eps}
\caption{\label{phase}(Color online) Numerical DMRG results of the phase diagram for (a) Haldane-honeycomb lattice and (b) $\pi$-flux checkerboard lattice models on a cylinder with finite width $L_y=2N_y=6$ and fixed length $L_x=N_x=18$ at half-filling for $t''=0$.}
\end{figure}
Due to the numerical difficulty of well-controlled DMRG convergence for two-component particles on cylinders of width $L_y>8$ ($N_y>4$), we cannot perform a finite-size scaling in the $y$-direction, and therefore focus on the quasi-one-dimensional scaling of the cylinder length $L_x$. This is different from QMC methods, where the finite-size scaling is done simultaneously in both the $x$- and $y$-directions. Despite this limitation, we show that it still sheds some light on the critical scaling exponents. For the HC lattice, in Figs.~\ref{scalexy}(a) and~\ref{scalexy}(b), a finite-size scaling of $S_{AF}^{xy}$ using the scaling function $S_{AF}^{xy}/N_s\propto L_{x}^{-2\beta/\nu}f(L_{x}^{1/\nu}(U-U_c))$ gives the critical exponents $\beta=0.31,\nu=0.70$. QMC simulations in Refs.~\cite{Hohenadler2012,Assaad2013a,Assaad2013b} extract the exponents $\beta=0.3486,\nu=0.6717$, in full agreement with those of the 3D XY model. In comparison, our DMRG results are in reasonable agreement with the 3D XY universality class. For the CB lattice, in Figs.~\ref{scalexy}(c) and~\ref{scalexy}(d), $S_{AF}^{zz}$ from different sizes can be merged together using the scaling function $S_{AF}^{zz}/N_s\propto L_{x}^{-2\beta/\nu}f(L_{x}^{1/\nu}(U-U_c))$ with the critical exponents $\beta=1/8,\nu=1$, which indicates that the phase transition falls into the 2D Ising universality class~\cite{Pelissettoa2002,Moukouri2012}. When $U$ approaches the critical value $U_c$, $F(U)$ shows a small bump, implying a peak of the fidelity susceptibility $\chi_F=(1-F(U))/(\delta U)^2$, which is a signature of a phase transition~\cite{Gu2010,Saadatmand2017}. We obtain a similar picture for $\chi_F\propto L_{x}^{2/\nu}f(L_{x}^{1/\nu}(U-U_c))$. Thus we conjecture that this phase transition may belong to the 2D Ising universality class, different from that in the HC lattice, although stronger evidence from finite-size scaling in the cylinder width is necessary.
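The data collapse underlying the quoted exponents rescales each curve as $y=L_x^{2\beta/\nu}\,S_{AF}/N_s$ against $x=L_x^{1/\nu}(U-U_c)$, so that different lengths fall on a single scaling function $f$; a sketch with synthetic data (the stand-in $f$ and the data are made up purely for illustration):

```python
import numpy as np

def rescale(U, S_over_Ns, Lx, Uc, beta, nu):
    """Map (U, S/N_s) onto the scaling axes
    x = Lx^(1/nu) * (U - Uc),  y = Lx^(2*beta/nu) * S/N_s,
    so that curves from different Lx collapse onto one function f(x)."""
    x = Lx ** (1.0 / nu) * (U - Uc)
    y = Lx ** (2.0 * beta / nu) * S_over_Ns
    return x, y

# Synthetic curves obeying the scaling form with a known stand-in f,
# using the 2D Ising values beta = 1/8, nu = 1 quoted in the text
beta, nu, Uc = 0.125, 1.0, 4.8
U = np.linspace(4.0, 5.6, 9)
f = lambda x: np.tanh(x) + 1.1   # stand-in scaling function
curves = {Lx: Lx ** (-2 * beta / nu) * f(Lx ** (1 / nu) * (U - Uc))
          for Lx in (12, 18)}
```

After rescaling, both synthetic curves sit exactly on $f$; in practice one tunes $(U_c,\beta,\nu)$ to minimize the spread between the rescaled curves.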
\begin{figure}[t]
\includegraphics[height=1.5in,width=3.2in]{flat.eps}
\caption{\label{flat}(Color online) Numerical DMRG results on a cylinder lattice in the flat-band limit with width $L_y=2N_y$ and length $L_x=N_x=18$ at half-filling. The evolution of the absolute wavefunction overlap $F(U)$, spin structure factors $S_{AF}^{zz},S_{AF}^{xy}$, and entanglement entropy $S_{L}$ is shown for (a) the Haldane-honeycomb lattice with parameters $t'=0.6t,t''=0.58t,\phi=2\pi/5$ and (b) the checkerboard lattice with parameters $t'=0.3t,t''=-0.2t,\phi=\pi/4$, respectively. The smooth transition is characterized by the continuous behavior of these physical quantities.}
\end{figure}
Finally, we present our DMRG results for the phase diagram in the parameter plane $(U,t')$ without $t''$, as shown in Figs.~\ref{phase}(a) and~\ref{phase}(b). First of all, we identify a CQPT separating the QSHE from the antiferromagnetic ground state on both HC and CB lattices, with no evidence of an intermediate phase in between. The apparent non-vanishing spin pumping is due to the fluctuating off-diagonal Berry curvature driven by interspecies correlation, as also identified from the ED analysis in Fig.~\ref{energyhc}(d). Here we do not consider the limit $t'\rightarrow0$, where the system is a gapless Dirac semimetal for both models; the transition from such a Dirac semimetal to the AFM has been claimed to be of Gross-Neveu universality class in several QMC simulations~\cite{Assaad2013a,Assaad2013b}, which we leave for future study.
Moreover, by including the next-next-nearest-neighbor hopping in the flat-band limit, we obtain a similar physical picture of a continuous phase transition. As shown in Figs.~\ref{flat}(a) and~\ref{flat}(b), for both the Haldane-honeycomb and $\pi$-flux checkerboard lattices, the quantum phase transition is continuous, identified from three physical quantities: the absolute wavefunction overlap $F(U)$, the spin structure factors $S_{AF}^{zz},S_{AF}^{xy}$, and the entanglement entropy $S_{L}$.
\section{Summary and Discussions}\label{summary}
In summary, using both ED and DMRG calculations, we have demonstrated a continuous phase transition from a quantum spin-Hall state to an antiferromagnetic Mott insulator driven by onsite Hubbard repulsion at half-filling, characterized by the continuous evolution of physical quantities including the wavefunction fidelity, spin structure factors, and entanglement entropy. The topological nature of the transition is encoded in the singular behavior of the Berry curvature driven by strong interspecies correlation, while the total charge Hall conductance remains unchanged. In close comparison, for an integer quantum Hall state with a symmetric $C$-matrix $\mathbf{C}=\begin{pmatrix}
1 & 0\\
0 & 1\\
\end{pmatrix}$ in both $\pi$-flux checkerboard and Haldane-honeycomb lattices with broken time-reversal symmetry, recent ED and DMRG studies show that a direct first-order level crossing occurs at one of the high-symmetry twisted boundary conditions~\cite{Zeng2017,Vanhala2016} in going from the integer quantum Hall effect (IQHE) to a trivial Mott insulator. Physically, these two transitions can indeed belong to different classes: the transition from the IQHE to a Mott insulator involves a quantized jump of the charge Chern number between two charge insulators, whereas the transition between the QSHE and a Mott insulator only involves a change of the spin Chern number, from a spin insulator to a gapless spin system.
We believe our current work may provide new insight into the nature of interaction-driven topological transitions. As one intriguing direction of our study, it would be interesting and important to investigate the role of broken U(1) spin symmetry, introduced by adding spin-orbit coupling, in the transition between the QSHE and the magnetic phase, which is very relevant to the transition-metal oxide Na$_{2}$IrO$_{3}$ materials~\cite{Shitade2009,Jackeli2009}; we leave this for a future follow-up project.
\begin{acknowledgements}
W.Z. thanks Kai Sun for stimulating discussions.
This work is supported by National Science Foundation Grants PREM DMR-1205734 (T.S.Z.) and DMR-1408560 (D.N.S.).
The work at Los Alamos was supported by the U.S. DOE Contract No. DE-AC52-06NA25396
through the LDRD Program (W.Z. \& J.-X.Z.),
and supported in part by the Center for Integrated Nanotechnologies,
a U.S. DOE Office of Basic Energy Sciences user facility.
\end{acknowledgements}
\titlespacing\subsection{0pt}{2pt plus 1pt minus 1pt}{0pt plus 1pt minus 1pt}
\usepackage{verbatim}
\usetikzlibrary{decorations.pathreplacing}
\newcommand{|c|c|c|}{|c|c|c|}
\newcommand{ & 0.03 & 0.06 & mean}{ & 0.03 & 0.06 & mean}
\newcommand{\results}[3]{}
\newcommand{\gripMicrodesc}{gripper+micro }
\newcommand{\gripMicroA}{\results{0.86}{0.74}{0.80}}
\newcommand{\gripMicroB}{\results{0.80}{0.65}{0.72}}
\newcommand{\gripMicroC}{\results{0.83}{0.70}{0.76}}
\newcommand{0.96}{0.96}
\newcommand{gripper+reacher }{gripper+reacher }
\newcommand{\gripReacherA}{\results{0.17}{0.08}{0.13}}
\newcommand{\gripReacherB}{\results{0.32}{0.15}{0.23}}
\newcommand{\gripReacherC}{\results{0.24}{0.11}{0.18}}
\newcommand{0.64}{0.64}
\newcommand{gripper+reacher+micro }{gripper+reacher+micro }
\newcommand{\gripReacherMicroA}{\results{0.92}{0.01}{0.46}}
\newcommand{\gripReacherMicroB}{\results{0.75}{0.01}{0.40}}
\newcommand{\gripReacherMicroC}{\results{0.83}{0.01}{0.43}}
\newcommand{0.99}{0.99}
\newcommand{gripper+s-reacher }{gripper+s-reacher }
\newcommand{\gripReacherStabA}{\results{0.88}{0.69}{0.81}}
\newcommand{\gripReacherStabB}{\results{0.87}{0.70}{0.79}}
\newcommand{\gripReacherStabC}{\results{0.88}{0.70}{0.80}}
\newcommand{0.99}{0.99}
\DeclareMathOperator*{\argmax}{arg\,max}
\section{Introduction}
Recent advancements in material science and device technology have increased interest in creating prosthetics for improving human movement. Designing these devices, however, is difficult, as it is costly and time-consuming to iterate through many designs. This is further complicated by the large variability in response among individuals. One key reason for this is that the interactions between humans and prostheses are not well understood, which limits our ability to predict how a human will adapt his or her movement. Physics-based biomechanical simulations are well-positioned to advance this field, as they allow many experiments to be run at low cost. Recent developments in using reinforcement learning techniques to train realistic biomechanical models will be key in increasing our understanding of the human-prosthesis interaction, which will help to accelerate development of this field.
In this competition, participants were tasked with developing a controller to enable a physiologically-based human model with a prosthetic leg to move at a specified direction and speed. Participants were provided with a human musculoskeletal model and a physics-based simulation environment (OpenSim \cite{delp2007opensim,seth2018opensim}) in which they synthesized physically and physiologically accurate motion (Figure \ref{fig:running}). Entrants were scored based on how well the model moved according to the requested speed and direction of walking. We provided competitors with a parameterized training environment to help build the controllers, and competitors' scores were based on a final environment with unknown parameters.
This competition advanced and popularized an important class of reinforcement learning problems, characterized by a large set of output parameters (human muscle controls) and a comparatively small dimensionality of the input (state of a dynamic system). Our challenge attracted over 425 teams from the computer science, biomechanics, and neuroscience communities, submitting 4,575 solutions. Algorithms developed in this complex biomechanical environment generalize to other reinforcement learning settings with highly-dimensional decisions, such as robotics, multivariate decision making (corporate decisions, drug quantities), and the stock exchange.
In this introduction, we first discuss state-of-the-art research in motor control modeling and simulations as a tool for solving problems in biomechanics (Section \ref{ss:scope}). Next, we specify the details of the task and performance metrics used in the challenge (Section \ref{ss:task}). Finally, we discuss results of the challenge and provide a summary of the common strategies that teams used to be successful in the challenge (Section \ref{ss:solutions}). In the following sections, top teams describe their approaches in more detail.
\subsection{Background and scope}\label{ss:scope}
Using biomechanical simulations to analyze experimental data has led to novel insights about human-device interaction. For example, one group used simulations to study a device that decreased the force that the ankle needed to produce during hopping but, paradoxically, did not reduce energy expenditure. Simulations that included a sophisticated model of muscle-tendon dynamics revealed that this paradox occurred because the muscles were at a length that was less efficient for force production \cite{farris2014exo}. Another study used simulations of running to calculate ideal device torques needed to reduce energy expenditure during running \cite{uchida2016device}, and insights gained from that study were used to decrease the metabolic cost in experimental studies \cite{lee2017exosuit}.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.72 \linewidth]{img2018/00_intro/prost.png}
\caption{A patient with a prosthetic leg (left); musculoskeletal simulation of a patient with a prosthetic leg as modeled in Stanford's OpenSim software (right).}
\label{fig:running}
\end{figure}
A limitation of the previous studies, however, is that they used the kinematics from the experiments as a constraint. Since assistive devices may greatly change one's kinematics, these analyses cannot predict the neural and kinematic adaptations induced by these devices on the human. Instead, we require a framework that can generate simulations of gait that can adapt realistically to various perturbations. This framework, for instance, would help us understand the complex motor control and adaptations necessary to control both a healthy human and one with a prosthetic.
Recent advances in reinforcement learning, biomechanics, and neuroscience enable us to build a framework that tackles these limitations. The biomechanics community has developed models and controllers for different movement tasks, two of which are particularly relevant for this challenge. The first study developed a model and controller that could walk, turn, and avoid obstacles \cite{song2015neural}. The second study generated simulated walking patterns at a wide range of target speeds that reproduced many experimental measures, including energy expenditure \cite{ong2017walking}. These controllers, however, are limited to generating walking and running and need domain expertise to design.
Modern reinforcement learning techniques have been used recently to train more general controllers in locomotion. Controllers generated using these techniques have the advantage that, compared to the gait controllers previously described, less user input is needed to hand-tune the controllers, and they are more flexible in their ability to learn additional, novel tasks. For example, reinforcement learning has been used to train controllers for locomotion of complicated humanoid models \cite{lillicrap2015continuous,schulman2015trust}. Although these methods find solutions without domain specific knowledge, the resulting motions are not realistic, possibly because these models do not use biologically accurate actuators.
Through the challenge, we aimed to investigate if deep reinforcement learning methods can yield more realistic results with biologically accurate models of the human musculoskeletal system. We designed the challenge so that it stimulates research at the intersection of reinforcement learning, biomechanics, and neuroscience, encouraging development of methods appropriate for environments characterized by the following: 1) a high-dimensional action space, 2) a complex biological system, including delayed actuation and complex muscle-tendon interactions, 3) a need for a flexible controller for an unseen environment, and 4) an abundance of experimental data that can be leveraged to speed up the training process.
\subsection{Task}\label{ss:task}
Competitors were tasked with building a real-time controller for a simulated agent to walk or run at a requested speed and direction. The task was designed in a typical reinforcement learning setting \cite{Sutton1999} in which an agent (musculoskeletal model) interacts with an environment (physics simulator) by taking actions (muscle excitations) based on observations (a function of the internal state of the model) in order to maximize the reward.
\textbf{Agent and observations.} The simulated agent was a musculoskeletal model of a human with a prosthetic leg. The model of the agent included $19$ muscles to control $14$ degrees-of-freedom (DOF). At every iteration the agent received the current observed state, a vector consisting of $406$ values: ground reaction forces, muscle activities, muscle fiber lengths, muscle velocities, tendon forces, and positions, velocities, and accelerations of joint angles and body segments.
Compared to the original model from \cite{ong2017walking}, we allowed the hip to abduct and adduct, and the pelvis to move freely in space. To control the extra hip degrees of freedom, we added a hip abductor and adductor to each leg, which added 4 muscles total. The prosthesis replaced the right tibia and foot, and we removed the 3 muscles that cross the ankle for that leg. Table \ref{tbl:action} lists all muscles in the model.
\begin{table}[h!]
\setlength\tabcolsep{0.25cm}
\setlength{\extrarowheight}{.2em}
\centering
\begin{tabular}{r l l l }
Name & Side & Description & Primary function(s)\\
\hline
abd & both & Hip abductors & Hip abduction (away from body’s vertical midline)\\
add & both & Hip adductors & Hip adduction (toward body’s vertical midline)\\
bifemsh & both & Short head of the biceps femoris & Knee flexion\\
gastroc & left & Gastrocnemius & Knee flexion and ankle extension (plantarflexion)\\
glut\_max & both & Gluteus maximus & Hip extension\\
hamstrings & both & Biarticular hamstrings & Hip extension and knee flexion\\
iliopsoas & both & Iliopsoas & Hip flexion\\
rect\_fem & both & Rectus femoris & Hip flexion and knee extension\\
soleus & left & Soleus & Ankle extension (plantarflexion)\\
tib\_ant & left & Tibialis anterior & Ankle flexion (dorsiflexion)\\
vasti & both & Vasti & Knee extension\\
\end{tabular}
\caption{A list of muscles that describe the action space in our physics environment. Note that some muscles are missing in the amputated leg (right).}
\label{tbl:action}
\end{table}
\textbf{Actions and environment dynamics.} Based on the observation vector of internal states, each participant's controller would output a vector of muscle excitations (see Table \ref{tbl:action} for the list of all muscles). The physics simulator, OpenSim, calculated muscle activations from excitations using first-order dynamics equations. Muscle activations generate movement as a function of muscle properties such as strength, and muscle states such as current length, velocity, and moment arm. An overall estimate of muscle effort was calculated using the sum of muscle activations squared, a commonly used metric in biomechanical studies \cite{crowninshield1981, thelen2003cmc}. Participants were evaluated by overall muscle effort and the distance between the requested and observed velocities.
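The excitation-to-activation step and the effort metric described above can be sketched in a few lines; the first-order form and explicit Euler step below are illustrative simplifications, and the time constant is a generic placeholder rather than the exact OpenSim value:

```python
import numpy as np

def step_activation(a, u, dt=0.01, tau=0.05):
    """One Euler step of first-order activation dynamics
    da/dt = (u - a) / tau, with excitation u and activation a in [0, 1]."""
    a_next = a + dt * (u - a) / tau
    return np.clip(a_next, 0.0, 1.0)

def effort(a):
    """Muscle effort metric: sum of squared activations."""
    return float(np.sum(np.asarray(a) ** 2))
```

Driving a muscle with full excitation makes its activation rise smoothly toward 1; the squared-activation effort term then penalizes sustained high activation across all muscles.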
\textbf{Reward function and evaluation.} Submissions were evaluated automatically. In Round 1, participants interacted directly with a remote environment. The overall goal of this round was to generate controls such that the model would move forward at 3 m/s. The total reward was calculated as
$$ \sum_{t=1}^{T} \left( 9 - |v_x(s_t) - 3|^2 \right), $$
where \(s_t\) is the state of the model at time \(t\), \(v_x(s)\) is the horizontal velocity of the pelvis in the state \(s\), and \(s_t = M(s_{t-1}, a(s_{t-1}))\), i.e. states follow the simulation given by model \(M\). $T$ is the episode termination time step, which is equal to $1000$ if the model did not fall for the full 10-second duration, or to the first time point at which the pelvis of the model falls below $0.6$ m, to penalize falling.
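The Round 1 scoring above can be written as a couple of lines of Python; a sketch with illustrative function names (the early-termination rule is only noted in a comment):

```python
def round1_reward(vx):
    """Per-step Round 1 reward for pelvis forward velocity vx (m/s),
    targeting 3 m/s: r = 9 - |vx - 3|^2."""
    return 9.0 - abs(vx - 3.0) ** 2

def episode_return(velocities):
    """Total reward over an episode; in the actual environment the episode
    (and hence this sum) is truncated if pelvis height drops below 0.6 m."""
    return sum(round1_reward(v) for v in velocities)
```

At the target speed each step contributes the maximum of 9, so a full 1000-step episode at exactly 3 m/s scores 9000.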
In Round 2, in order to mitigate the risk of overfitting, participants submitted Docker containers so that they could not infer the specifics of the environment by interacting with it. The objective was to move the model according to requested speeds and directions. This objective was measured as a distance between the requested and observed velocity vector.
The requested velocity vector varied during the simulation. We commanded approximately $3$ changes in direction and speed during the simulation. More precisely, let $q_0 = 1.25$, $r_0 = 1$, and let $N_t$ be a Poisson process with $\lambda = 200$. We define
\[
q_t = q_{t-1} + \mathbf{1}_{(N_t \neq N_{t-1})} u_{1,t} \text{ and }
r_t = r_{t-1} + \mathbf{1}_{(N_t \neq N_{t-1})} u_{2,t},
\]
where $\mathbf{1}_{(A)}$ is the indicator function of the event $A$ (here, a jump in the Poisson process). We define $u_{1,t} \sim \mathcal{U}([-0.5,0.5])$ and $u_{2,t} \sim \mathcal{U}([-\pi/8,\pi/8])$, where $\mathcal{U}([a,b])$ denotes a uniform distribution on $[a,b]$ interval.
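The target process $q_t, r_t$ above can be simulated by discretizing the Poisson process so that a jump occurs at each time step with small probability; a sketch (we take the per-step jump probability to be $1/\lambda$, which is our reading of the rate parameter, and all names are ours):

```python
import numpy as np

def target_sequence(T=1000, lam=200, q0=1.25, r0=1.0, seed=0):
    """Piecewise-constant target speed q_t and heading r_t: at each step a
    jump occurs with probability 1/lam (discretized Poisson process); on a
    jump the speed moves by U(-0.5, 0.5) and the heading by U(-pi/8, pi/8)."""
    rng = np.random.default_rng(seed)
    q, r = np.empty(T), np.empty(T)
    q_cur, r_cur = q0, r0
    for t in range(T):
        if rng.random() < 1.0 / lam:
            q_cur += rng.uniform(-0.5, 0.5)
            r_cur += rng.uniform(-np.pi / 8, np.pi / 8)
        q[t], r[t] = q_cur, r_cur
    return q, r
```

With $T=1000$ and $\lambda=200$ this yields a handful of jumps per episode, matching the "approximately 3 changes" described above.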
In order to keep the locomotion as natural as possible, we also added a penalty for overall muscle effort, as validated in previous work \cite{crowninshield1981, thelen2003cmc}. The final reward function took the form
\[
\sum_{t=1}^{T} \left( 10 - |v_x(s_t) - w_{t,x} |^2 - |v_z(s_t) - w_{t,z} |^2 - 0.001\sum_{i=1}^{d}a_{t,i}^2 \right),
\]
where $w_{t,x}$ and $w_{t,z}$ correspond to $q_t$ and $r_t$ expressed in Cartesian coordinates, $T$ is termination step of the episode as described previously, $a_{t,i}$ is the activation of muscle $i$ at time $t$, and $d$ is the number of muscles.
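The per-step Round 2 reward defined above can be sketched directly (function and variable names are ours):

```python
import numpy as np

def round2_step_reward(vx, vz, wx, wz, activations):
    """Per-step Round 2 reward: velocity tracking in both horizontal
    directions plus a small squared-activation effort penalty."""
    track = abs(vx - wx) ** 2 + abs(vz - wz) ** 2
    effort = 0.001 * float(np.sum(np.asarray(activations) ** 2))
    return 10.0 - track - effort
```

Perfect tracking with fully relaxed muscles yields the per-step maximum of 10, so scores near 10000 on the leaderboard correspond to near-perfect tracking over a full episode.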
Since the environment is subject to random noise, submissions were tested over ten trials and the final score was the average from these trials.
\subsection{Solutions}\label{ss:solutions}
Our challenge attracted 425 teams who submitted 4,575 solutions. The top 50 teams from Round~1 qualified for Round~2. In Table \ref{tbl:leaderboard} we list the top teams from Round~2. Detailed descriptions from each team are given in the subsequent sections of this article. Teams that achieved 1st through 10th place described their solutions in sections 2 through 11. Two other teams submitted their solutions as well (Sections 12 and 13).
\begin{table}[h!]
\setlength\tabcolsep{0.25cm}
\setlength{\extrarowheight}{.2em}
\centering
\begin{tabular}{r | l l l l }
& Team & Score & \# entries & Base algorithm\\
\hline
1 & Firework & 9981 & 10 & DDPG \\
2 & NNAISENSE & 9950 & 10 & PPO \\
3 & Jolly Roger & 9947 & 10 & DDPG \\
4 & Mattias & 9939 & 10 & DDPG \\
5 & ItsHighNoonBangBangBang & 9901 & 3 & DDPG \\
6 & jbr & 9865 & 9 & DDPG \\
7 & Lance & 9853 & 4 & PPO \\
8 & AdityaBee & 9852 & 10 & DDPG \\
9 & wangzhengfei & 9813 & 10 & PPO \\
10 & Rukia & 9809 & 10 & PPO \\
\end{tabular}
\caption{Final leaderboard (Round 2).}
\label{tbl:leaderboard}
\end{table}
In this section we highlight similarities and differences in the approaches taken by the teams. Of particular note was the amount of compute resources used by the top participants. Among the top ten submissions, the highest amount of resources reported for training the top model was 130,000 CPU-hours, while the most compute-efficient solution leveraged experimental data and imitation learning (Section~\ref{s:lance}) and took only 100 CPU-hours to achieve 7th place in the challenge (CPU-hours were self-reported). Even though usage of experimental data was allowed in this challenge, most participants did not use it and focused only on reinforcement learning with a random starting point. While such methods are robust, they require very large compute resources.
While most of the teams used variations of well-established algorithms such as DDPG and PPO, each team used a combination of other strategies to improve performance. We identify some of the key strategies used by teams below.\\
\ \newline
\noindent\textbf{Leveraging the model.} Participants used various methods to encourage the model to move in a realistic way based on observing how humans walk. This yielded good results, likely due to the realistic underlying physics and biomechanical models. Specific strategies to leverage the model include the following:
\begin{itemize}
\item Reward shaping: Participants modified the reward used during training in such a way that the model learns faster while still improving on the actual competition reward (see, for example, Sections \ref{sss:reward-jolly}, \ref{sss:reward-mattias}, or \ref{sss:reward-itshigh}).
\item Feature engineering: Some of the information in the state vector might add little value to the controller, while other information can give a stronger signal if a non-linear mapping based on expert knowledge is applied first (see, for example, Sections \ref{sss:reward-jolly}, \ref{sss:observation-mattias}, or \ref{sss:observation-jbr}). Interestingly, one team achieved a high score without feature engineering (Section \ref{sss:large-space}).
\item Human-in-the-loop optimization: Some teams first trained a batch of agents, then hand picked a few agents that performed well for further training (Section \ref{sss:human-in-the-loop}).
\item Imitation learning: One solution used experimental data to quickly find an initial solution and to guide the controller towards typical human movement patterns. This resulted in training that was quicker by a few orders of magnitude (Section \ref{s:lance}).
\end{itemize}
\ \newline
\noindent\textbf{Speeding up exploration.} In the early phase of training, participants reduced the search space or modified the environment to speed up exploration using the following techniques:
\begin{itemize}
\item Frameskip: Instead of sending signals every $1/100$ of a second (i.e., each frame), participants sent the same control for, for example, 5 consecutive frames. Most of the teams used some variation of this technique (see, for example, Section \ref{sss:observation-mattias}).
\item Sample-efficient algorithms: All of the top teams used algorithms that are known to be sample-efficient, such as PPO and DDPG.
\item Exploration noise: Two main exploration strategies involved adding Gaussian or Ornstein--Uhlenbeck noise to actions (see Section \ref{sss:parameter-jolly}) or parameter noise in the policy (see Section \ref{s:nnaisense} or \ref{sss:parameter-itshigh}).
\item Binary actions: Some participants only used muscle excitations of exactly 0 or 1 instead of values in the interval $[0,1]$ (``bang-bang'' control) to reduce the search space (Section \ref{sss:large-space}).
\item Time horizon correction: An abrupt end of the episode due to a time limit can potentially mislead the agent. To correct for this effect, some teams used an estimate of the value behind the horizon from the value function (see Section \ref{sss:time-horizon}).
\item Concatenating actions: In order to embed history in the observation, some teams concatenated observations before feeding them to the policy (Section \ref{sss:concatenation}).
\item Curriculum learning: Since learning the entire task from scratch is difficult, it might be advantageous to learn low-level tasks first (e.g., bending the knee) and then learn high-level tasks (e.g., coordinating muscles to swing a leg) (Section \ref{ss:methodology}).
\item Transfer learning: One can consider walking at different speeds as different subtasks of the challenge. These subtasks may share control structure, so the model trained for walking at 1.25 m/s may be retrained for walking at 1.5 m/s, and so on (Section \ref{ss:methodology}).
\end{itemize}
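The frameskip idea above amounts to a thin wrapper around the environment. The sketch below assumes a gym-style \texttt{step} interface, with \texttt{DummyEnv} standing in for the (much slower) OpenSim environment:

```python
class FrameskipWrapper:
    # Repeat each chosen action for `skip` frames, summing the rewards.
    # `env` is assumed to expose step(action) -> (obs, reward, done, info).
    def __init__(self, env, skip=5):
        self.env = env
        self.skip = skip

    def reset(self):
        return self.env.reset()

    def step(self, action):
        total_reward, done, info, obs = 0.0, False, {}, None
        for _ in range(self.skip):
            obs, reward, done, info = self.env.step(action)
            total_reward += reward
            if done:
                break
        return obs, total_reward, done, info


class DummyEnv:
    # Toy environment: reward 1 per frame, episode ends after 7 frames.
    def __init__(self):
        self.t = 0
    def reset(self):
        self.t = 0
        return 0.0
    def step(self, action):
        self.t += 1
        return float(self.t), 1.0, self.t >= 7, {}

env = FrameskipWrapper(DummyEnv(), skip=5)
env.reset()
obs, r, done, _ = env.step(0)  # one agent decision covers 5 simulator frames
```

The agent then makes a decision every \texttt{skip} frames, cutting both the policy's effective horizon and the number of forward passes needed.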
\ \newline
\noindent\textbf{Speeding up simulations.} Physics simulations of muscles and ground reaction forces are computationally intensive. Participants used the following techniques to mitigate this issue:
\begin{itemize}
\item Parallelization: Participants ran agents on multiple CPUs. A master node, typically with a GPU, collected these experiences and updated the weights of the policies. This strategy was indispensable for success and was used by all teams (see, for example, Sections \ref{sss:distributed-jolly}, \ref{sss:distributed-apex}, \ref{sss:distributed-jbr} or \ref{sss:distributed-shawk}).
\item Reduced accuracy: In OpenSim, the accuracy of the integrator is parameterized and can be manually set before the simulation. In the early stage of training, participants reduced accuracy to speed up simulations to train their models more quickly. Later, they fine-tuned the model by switching the accuracy to the same one used for the competition \cite{kidzinski2018learning}.
\end{itemize}
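A minimal sketch of the master-worker pattern, using Python's standard \texttt{multiprocessing} pool; the \texttt{rollout} stand-in replaces the slow OpenSim episode a real worker would run, and the function names are illustrative:

```python
import multiprocessing as mp

def rollout(seed):
    # Stand-in for one environment episode run on a worker CPU. A real worker
    # would step the OpenSim environment with the current policy; here it just
    # returns a deterministic fake trajectory of (observation, reward) pairs.
    return [(seed + t, 1.0) for t in range(3)]

def collect_parallel(num_workers, episodes_per_worker=1):
    # Master side: farm episodes out to worker processes and pool the samples.
    # The master node (typically holding the GPU) would then run SGD updates.
    seeds = range(num_workers * episodes_per_worker)
    with mp.Pool(num_workers) as pool:
        trajectories = pool.map(rollout, seeds)
    return [step for traj in trajectories for step in traj]

if __name__ == "__main__":
    batch = collect_parallel(num_workers=2)  # 2 episodes, 3 samples each
```

Because simulation dominates wall-clock time, this embarrassingly parallel collection step is where most of the speedup comes from.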
\ \newline
\noindent\textbf{Fine-tuning.} A common statistical technique for increasing the accuracy of models is to output a weighted sum of multiple predictions. This technique also applies to policies in reinforcement learning, and many teams used some variation of this approach: an ensemble of different checkpoints of a model (Section \ref{sss:ensemble}), training multiple agents simultaneously (Section \ref{s:mattias}), or training agents with different seeds (Section \ref{s:jbr}).\\
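Averaging the actions proposed by several policy snapshots can be sketched as follows; the \texttt{ensemble\_action} helper and the toy checkpoints are illustrative, not taken from any team's code:

```python
def ensemble_action(policies, obs, weights=None):
    # Weighted average of the actions proposed by several policy snapshots.
    # `policies` are assumed to be callables obs -> list of muscle excitations.
    if weights is None:
        weights = [1.0 / len(policies)] * len(policies)
    actions = [p(obs) for p in policies]
    n = len(actions[0])
    return [sum(w * a[i] for w, a in zip(weights, actions)) for i in range(n)]

# Two toy "checkpoints" that disagree slightly on a 3-muscle action.
ckpt_a = lambda obs: [0.2, 0.8, 0.5]
ckpt_b = lambda obs: [0.4, 0.6, 0.5]
action = ensemble_action([ckpt_a, ckpt_b], obs=None)  # ~[0.3, 0.7, 0.5]
```

Non-uniform weights allow favouring later (usually better) checkpoints.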
\ \newline
While this list covers many of the commonly used strategies, each team's approach is described in more detail in the following sections.
\input{sections2018/01_firework}
\input{sections2018/02_nnaisense}
\input{sections2018/03_jolly}
\input{sections2018/04_mattias}
\input{sections2018/05_itshigh}
\input{sections2018/06_jbr}
\input{sections2018/07_lance}
\input{sections2018/08_aditya}
\input{sections2018/09_wangzhengfei}
\input{sections2018/10_rukia}
\input{sections2018/16_shawk91}
\input{sections2018/90_jeremy}
\section{Affiliations and acknowledgments}\label{s:acknowledgements}
\textbf{Organizers:} {\L}ukasz Kidzi\'nski, Carmichael Ong, Jennifer Hicks and Scott Delp are affiliated with Department of Bioengineering, Stanford University. Sharada Prasanna Mohanty, Sean Francis and Marcel Salathé are affiliated with Ecole Polytechnique Federale de Lausanne. Sergey Levine is affiliated with University of California, Berkeley.
\textbf{Team Firework, 1st Place, Section~\ref{s:BaiDu_NLP}}. Bo Zhou, Honghsheng Zeng, Fan Wang, and Rongzhong Lian are affiliated with Baidu, Shenzhen, China. Hao Tian is affiliated with Baidu US.
\textbf{Team NNAISENSE, 2nd Place, Section~\ref{s:nnaisense}}. Wojciech Jaśkowski, Garrett Andersen, Odd Rune Lykkebø, Nihat Engin Toklu, Pranav Shyam, and Rupesh Kumar Srivastava are affiliated with NNAISENSE, Lugano, Switzerland.
\textbf{Team JollyRoger, 3rd Place, Section~\ref{s:dqec}}. Sergey Kolesnikov is affiliated with DBrain, Moscow, Russia; Oleksii Hrinchuk is affiliated with Skolkovo Institute of Science and Technology, Moscow, Russia; Anton Pechenko is affiliated with GiantAI.
\textbf{Team Mattias, 4th Place, Section~\ref{s:mattias}}. Mattias Ljungström is affiliated with Spaces of Play UG, Berlin, Germany.
\textbf{Team ItsHighNoonBangBangBang, 5th Place, Section~\ref{s:apexddpg}}. Zhen Wang, Xu Hu, Zehong Hu, Minghui Qiu, Jun Huang are affiliated with Alibaba Group, HangZhou, China.
\textbf{Team jbr, 6th Place, Section~\ref{s:jbr}}. Aleksei Shpilman, Ivan Sosin, Oleg Svidchenko and Aleksandra Malysheva are affiliated with JetBrains Research and National Research University Higher School of Economics, St.~Petersburg, Russia. Daniel Kudenko is affiliated with JetBrains Research and University of York, York, UK.
\textbf{Team lance, 7th Place, Section~\ref{s:lance}}. Lance Rane is affiliated with Imperial College London, London, UK. His work was supported by the NCSRR Visiting Scholars' Program at Stanford University (NIH grant P2C HD065690) and the Imperial College CX compute cluster.
\textbf{Team AdityaBee, 8th Place, Section~\ref{s:aditya}}. Aditya Bhatt is affiliated with University of Freiburg, Germany.
\textbf{Team wangzhengfei, 9th Place, Section~\ref{s:wangzhengfei}}. Zhengfei Wang is affiliated with inspir.ai and Peking University; Penghui Qi, Peng Peng and Quan Yuan are affiliated with inspir.ai; Zeyang Yu is affiliated with inspir.ai and Jilin University; Wenxin Li is affiliated with Peking University.
\textbf{Team Rukia, 10th Place, Section~\ref{s:rukia}}. Yunsheng Tian, Ruihan Yang and Pingchuan Ma are affiliated with Nankai University, Tianjin, China.
\textbf{Team shawk91, 16th Place, Section~\ref{s:shawk91}}. Shauharda Khadka, Somdeb Majumdar, Zach Dwiel, Yinyin Liu, and Evren Tumer are affiliated with Intel AI, San Diego, CA, USA.
The challenge was co-organized by the Mobilize Center, a National Institutes of Health Big Data to Knowledge (BD2K) Center of Excellence supported through Grant U54EB020405. The challenge was partially sponsored by Nvidia, who provided four Titan V GPUs for top solutions; by Google Cloud Services, who provided 70,000 USD in cloud credits for participants; and by Toyota Research Institute, who funded one travel grant.
\section{Asynchronous PPO}\label{s:HP}
\sectionauthor{Jeremy Watson}
\graphicspath{{../img2018/90_jeremy/}}
We used a vanilla implementation of PPO in the open-source framework Ray RLlib \citep{ppo,moritz2018ray}. The highest position achieved was a transitory top-10 in an intermediate round (subsequently falling to 17th), but no final submission was made due to issues with the action space. Some minor reward shaping was used, along with a substantial amount of hardware: typically 192 CPUs, but no GPUs, over several days per experiment. The model achieved comparatively poor sample efficiency: the best run took approximately 120 million steps in training, somewhat more than the Roboschool Humanoid environments, which were trained on 50--100 million timesteps in \citep{ppo}. Initially we used the Baselines \citep{baselines} PPO implementation and verified training with Roboschool, which is much faster than OpenSim \citep{seth2018opensim} but lacks physiologically realistic features like muscles. Source code for the Ray version is available on GitHub\footnote{\url{https://github.com/hagrid67/prosthetics_public}}.
\subsection{Methods}
\subsubsection{Vanilla PPO with continuous actions}
Action $a_i$ is a weighted sum of the final hidden layer's $\tanh{}$ activations $x_j$, plus an independent Normal random variable with trainable log standard deviation $\sigma_i$:
$$ a_i := \sum_j W_{ij} x_j + b_i + Y_i, \ \ Y_i \sim N(0,\exp(\sigma_i))
$$
For submission and testing we set $\sigma_i=-10$ (typical trained values were around $-1$) to reduce exploration noise. (In the baselines implementation a \texttt{stochastic} flag is available to remove the noise but this is not yet implemented in Ray.)
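A small sketch of the sampling described above; the weights, inputs, and the \texttt{gaussian\_action} helper are illustrative:

```python
import math
import random

def gaussian_action(x, W, b, log_std, rng):
    # a_i = sum_j W[i][j] * x[j] + b[i] + Y_i, with Y_i ~ N(0, exp(log_std[i])).
    # x holds the tanh activations of the last hidden layer.
    actions = []
    for i in range(len(b)):
        mean = sum(W[i][j] * x[j] for j in range(len(x))) + b[i]
        actions.append(mean + rng.gauss(0.0, math.exp(log_std[i])))
    return actions

rng = random.Random(0)
x = [math.tanh(v) for v in [0.5, -1.0]]
W, b = [[1.0, 0.0], [0.0, 1.0]], [0.1, -0.1]
# With log_std = -10 (as at submission time) the noise std is exp(-10) ~ 4.5e-5,
# so the sampled action is essentially the deterministic mean.
a = gaussian_action(x, W, b, [-10.0, -10.0], rng)
```

Setting the log standard deviation very low at evaluation time is the manual substitute for the missing \texttt{stochastic} flag.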
\subsubsection{Unfair advantage from actions outside the permitted space}
Initially an unbounded action space was used, as per vanilla PPO above. It appears the model exploited this, using actions (muscle activations) greater than one. This gave unfair advantage in the environment, achieving a score of 9,790 out of 10,000, which was briefly in the top 10, without any significant innovation. Subsequently with bounded actions the model trained poorly. This was not resolved and hence no final submission was made.
We conjecture that the slow timesteps are perhaps associated with the out-of-bounds actions, i.e., muscle activations $> 1$.
\subsubsection{Asynchronous Sampling}
The options \texttt{sample\_async} and \texttt{truncate\_episodes} within Ray allow episodes to continue running from one SGD epoch to the next, so that SGD does not need to wait for longer-running episodes. This keeps all the environments busy generating samples.
\subsubsection{Change of OpenSim Integrator to EulerExplicit}
With the default Runge--Kutta--Merson integrator, a single step would occasionally take tens of minutes, while most took less than a second. The OpenSim C++ package was recompiled with the alternative integrator to give a performance boost, as described elsewhere. Along with a reduced accuracy setting, from $5\times10^{-5}$ to $10^{-3}$, this also reduced the huge variance in the step time.
The Ray asynchronous setting did not seem able to cope with long individual timesteps. Attempts were made to accommodate them but this was abandoned.
\begin{figure*}[ht!]%
\centering
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=1\textwidth]{img2018/90_jeremy/gappy.jpg}
\label{fig:episode_gaps}
\end{subfigure}
\caption{Sporadic pauses in episode completions due to occasional slow timesteps under the Runge--Kutta--Merson integrator. Blue dots are episodes; the x-axis is the wall clock formatted as Month-Day-Hour; green is completed timesteps; rolling means are in yellow and orange. Taken from round 1; this implementation is based on Baselines, not Ray. Here a single long-running episode holds up all the environments: the large gap represents about 3 hours.}
\label{fig:perf1}
\end{figure*}
\begin{figure*}[ht!]%
\centering
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=1\textwidth]{img2018/90_jeremy/rewmean-clipped.jpg}
\end{subfigure}
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=1\textwidth]{img2018/90_jeremy/rewmean-9790.jpg}
\end{subfigure}
\caption{Performance with bounded vs.\ unbounded action space (modified reward scales). Although the unbounded chart is incomplete, it indicates that there was still improvement after 100 million steps.}
\label{fig:perf2}
\end{figure*}
\subsubsection{Binary Actions, Logistic function }
To confine the action space, ``binary actions'' $ a_i \in \{0,1\} $ were tried. In Ray this was implemented as a ``Tuple of Discrete spaces''. In our attempts it was not clear that this approach learned any faster than continuous actions (as other competitors also found).
We also faced the difficulty that using binary actions without exploration noise would require a code change in Ray: un-normalised log-probabilities are used for the different discrete actions, and there is no explicit field for the log standard deviation, which, in the continuous-action case, we were able to edit in the model snapshot file.
We also tried applying the logistic function (with a factor of 10), $a_i=\frac{1}{1+e^{-10x_i}}$, to give similar behaviour while using the established continuous action space output. The factor steepens the gradient between 0 and 1 compared to the standard logistic. By using a continuous action space we could use the existing ``DiagGaussian'' distribution for the output space in training and then reduce the variance for submission. (OpenAI Baselines \citep{baselines} PPO has a \texttt{stochastic} flag, but this is not implemented in Ray.)
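The steepened logistic is straightforward; the name \texttt{steep\_logistic} is illustrative:

```python
import math

def steep_logistic(x, k=10.0):
    # Logistic squashing with factor k = 10 to steepen the 0-to-1 transition,
    # pushing continuous network outputs toward near-binary muscle excitations.
    return 1.0 / (1.0 + math.exp(-k * x))
```

With $k=10$, an output of $\pm 1$ already maps to within $5\times10^{-5}$ of the bounds, approximating bang-bang control while keeping gradients usable near zero.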
\begin{figure*}[ht!]%
\centering
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=1\textwidth]{img2018/90_jeremy/logistic.jpg}
\end{subfigure}
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=1\textwidth]{img2018/90_jeremy/rewards2.jpg}
\end{subfigure}
\caption{Logistic function applied to actions, and some reward/penalty functions tried for reward shaping.}
\label{fig:logistic}
\end{figure*}
\subsubsection{Observation Space}
We used all 350 available observation scalars (not all of which necessarily vary) plus the two target velocity values $\tilde{v}_{x,z}$. We did not repeat the previous observation, as velocities and accelerations were already available. We made all $x,z$ positions relative to the pelvis but left $y$ positions untouched. We did not adjust velocities, accelerations, orientations, or joint angles. Although muscle activations, fibre forces, etc.\ seem to correspond closely with the action outputs of the policy, we found slightly worse performance without them, so we kept them in.
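The pelvis-relative pre-processing can be sketched as follows; the body-part names are illustrative, not the actual observation keys:

```python
def make_pelvis_relative(positions, pelvis):
    # Shift all x and z body-part positions relative to the pelvis, leaving y
    # (height) untouched. `positions` maps part names to (x, y, z) tuples.
    px, _, pz = pelvis
    return {name: (x - px, y, z - pz) for name, (x, y, z) in positions.items()}

obs = {"head": (1.2, 1.6, 0.1), "foot_l": (0.9, 0.05, -0.1)}
rel = make_pelvis_relative(obs, pelvis=(1.0, 0.95, 0.0))
```

This makes the policy invariant to where along the track the agent currently is, while keeping absolute height information for balance.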
\subsubsection{Basic reward shaping}
The default reward was $ b - ((v_x-\tilde{v}_x)^2 + (v_z-\tilde{v}_z)^2) $, where the target velocity $\tilde{v}_{x,z}$ is initially $(1.25, 0)$ and subsequently varies. With the default $b=10$ the agent tended to learn to simply stand still, easily earning the reward for prolonging the episode, so we reduced $b$ to $2$ so that the velocity penalty could outweigh the base reward, leading to negative reward for standing still.
To discourage sub-optimal gaits like walking sideways, we applied a penalty on pelvis orientation, $k_{pelvis}(r_x^2+r_y^2+r_z^2)$. This was successful in causing the agent to walk facing forward. (The angular orientation $(0,0,0)$ corresponded to facing forward; we did not get as far as adapting this penalty to the changing target velocity.)
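Putting the reduced base reward and the orientation penalty together gives a shaped reward of roughly this form; \texttt{k\_pelvis} and the example velocities are illustrative values, not the tuned ones:

```python
def shaped_reward(v, v_target, pelvis_rot, b=2.0, k_pelvis=1.0):
    # Shaped per-step reward: reduced base b (default 10 -> 2) minus the
    # velocity-tracking penalty and the pelvis-orientation penalty.
    vx, vz = v
    tx, tz = v_target
    rx, ry, rz = pelvis_rot
    velocity_penalty = (vx - tx) ** 2 + (vz - tz) ** 2
    orientation_penalty = k_pelvis * (rx ** 2 + ry ** 2 + rz ** 2)
    return b - velocity_penalty - orientation_penalty

# With a 1.5 m/s target, standing still now scores negatively:
standing = shaped_reward((0.0, 0.0), (1.5, 0.0), (0.0, 0.0, 0.0))
```

Matching the target velocity while facing forward still earns the full base reward $b$ per step.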
\subsection{Unsuccessful experiments}
\subsubsection{Leg-crossing penalty}
The agent developed a gait where its legs crossed in a physically impossible configuration. (The environment did not implement collision detection for the legs so they pass through each other.) To discourage this we implemented a penalty on "hip adduction" - essentially the left-right angle\footnote{joint\_pos hip\_l [1] in the observation dictionary } of the thigh in relation to the hip. A positive value means the thigh has crossed the centre line and is angled toward the other leg; a negative value means the thigh is pointing out to the side (we ignored this). (The rotation of the thigh joint did not vary in the prosthetics model.)
The penalty was $k_{hip} (\theta_{hip} + 0.2)^+$ with $k_{hip} \in \{0, 0.5, 1\}$; this failed to cure the leg-crossing.
\subsubsection{Straight knee penalty}
The agent walked with straight legs. Intuitively this would make it harder for the agent to respond to changes of target velocity. We applied a penalty of $k_{knees} (\theta_{knee} + 0.2)^+$ with $k_{knees} \in \{0, 1, 2\}$ but were not able to detect any meaningful effect on the gait.
\subsubsection{Kinetics Reward / Imitation Learning}
An approach similar to DeepMimic \citep{peng2018deepmimic} was attempted at a late stage, after the bounds were imposed on the action space, with credit to Lance Rane (see Section~\ref{s:lance}), who suggested it on the public forum. Unfortunately we were not able to train a walking agent using this approach.
\subsubsection{Wider network}
We found that increasing the hidden-layer width from 128 to 256 slowed down learning so we kept a width of 128 (despite an observation size of 352).
\subsubsection{Frameskip}
We found no improvement in learning from repeating the actions for a few timesteps (2 or 3).
\subsection{Experiments and results}
\begin{table}
\caption{Hyper-parameters used in the experiments}
\begin{tabularx}{\columnwidth}{r|X}
\toprule
Actor network architecture & $[FC128, FC128]$, Tanh for hidden layers \\
Action noise & Normal / DiagGaussian with trainable variance (as log standard deviation); PPO Baselines standard for continuous actions. \\
Value network architecture & $[FC128, FC128]$, Tanh for hidden layers and linear for output layer\\
Learning rate (policy and value networks) & 5e-5 (Ray PPO default), sometimes 1e-4 \\
SGD epochs & 30 (Ray PPO default), sometimes 20 \\
Batch size & 128 \\
$\gamma$ & 0.99 \\
\bottomrule
\end{tabularx}
\label{tab:hyperopt}
\end{table}
\section{Collaborative Evolutionary Reinforcement Learning}\label{s:shawk91}
\sectionauthor{Shauharda Khadka, Somdeb Majumdar, Zach Dwiel, Yinyin Liu, Evren Tumer}
We trained our controllers using Collaborative Evolutionary Reinforcement Learning (CERL), a research thread actively being developed at Intel AI. A primary reason for the development and use of CERL was to scale experimentation for Deep Reinforcement Learning (DRL) settings where interacting with the environment is very slow, as was the case with the OpenSim engine used for the AI for Prosthetics challenge. CERL is designed to be massively parallel and can leverage large CPU clusters for distributed learning. We used Intel Xeon servers to deploy the CERL algorithm for learning in the osim environment.
\begin{figure}[h]
\centering
\includegraphics[width=0.75\textwidth]{img/CERL.png}
\caption{Illustration of the Collaborative Evolutionary Reinforcement Learning framework}
\label{cerl}
\vspace{-2em}
\end{figure}
\subsection{Methods}\label{sss:distributed-shawk}
Collaborative Evolutionary Reinforcement Learning (CERL) is a multilevel optimization framework that leverages a collection of learners to solve a Reinforcement Learning (RL) problem. Each learner explores independently, while exploiting their collective experiences jointly through the sharing of data and policies during learning. A central integrator (neuroevolution) serves to drive this process by adaptively distributing resources, imposing a strong selection pressure towards good learners. The ``collective learner'' that emerges from this tightly integrated collaborative learning process combines the best of its composite approaches. There are five core components that jointly describe the CERL framework:
\begin{enumerate}
\item Data Generators - Generate experience by rolling out in the environment.
\item Data Repository - Store collective experiences.
\item Data Consumers - Exploit collective data to learn policies.
\item Policy Repository - Store policies.
\item Integrator - Exploit decomposability of policy parameters to synergistically combine a diverse set of policies, and adaptively redistribute resources among learners.
\end{enumerate}
A general flow of learning proceeds as follows: a group of data generators runs parallel rollouts to generate experiences. These experiences are periodically pushed to the data repository. A group of data consumers continuously pulls data from this repository to train policies. The best policy learned by each data consumer is then periodically written to the policy repository. The data generators and consumers periodically read from this policy repository to update their populations and policies, thereby closing the learning loop. The integrator runs concurrently, exploiting any decomposability within the policy parameter space to combine the diverse set of behaviors learned. The integrator also acts as a ``meta-learner'' that adaptively distributes resources across the learners, effectively adjusting the search frontier.
Figure \ref{cerl} depicts the organization of the CERL framework. A crucial feature of the CERL framework is that a wide diversity of learning algorithms spanning the off-policy, on-policy, model-free and model-based axes of variation can work collaboratively, both as data generators and consumers. Collaboration between algorithms is achieved by the exchange of data and policies mediated by the data and policy repository, respectively. The \textbf{``collective learner''} that emerges from this collaboration inherits the best features of its composite algorithms, exploring jointly and exploiting diversely.
\vspace{-1em}
\subsection{Experiments and Results}
\textbf{Reward Shaping: } A big advantage of using neuroevolution as our integrator was that we could perform dynamic reward shaping on the entire trajectory originating from a controller. This allowed us to shape behaviors at the level of gaits rather than static states. Some target behaviors we shaped for were bending the knee (static) and maximizing the swing of the thighs (dynamic). This helped train our controllers to run and match the 3 m/s speed in the first round with a semi-realistic looking gait.
We started the second round (adaptive walking) by seeding the best policy from the first round. This was perhaps our biggest error. Unlike the first round, the second round required our agent to move at a much slower speed, and the gait with bent knees and a hurling movement pattern that our agent learned in the first round was not ideal for the second round. Our bootstrapped agent learned quickly at first but hit a ceiling at approximately 9783 (local evaluation). The agile and rather energetic movement pattern carried over from Round 1 was counterproductive for Round 2: it led to high overshoot and twitching (jerk) of the waist, which was exacerbated at slower target speeds and led to large losses, especially for our 20 Hz controller (frameskip of 5).
\textbf{Hardware:} The biggest bottleneck for learning was experience generation, stemming from the high fidelity of the OpenSim engine. To address this issue we leveraged the parallelizability of CERL and used a Xeon server (112 CPU cores) in addition to a GPU to train our controller.
\vspace{-1em}
\subsection{Discussion}
We used CERL to develop controllers for the AI for Prosthetics challenge. The core idea behind the implementation of CERL was to leverage large CPU nodes in scaling DRL to settings where interacting with the environment is slow and laborious. We leveraged a diversity of reinforcement learning algorithms to define a collaborative learner closely mediated by the core integrator (neuroevolution). The challenge provided a very good platform to test some exploratory ideas, and served to accelerate the development of the CERL framework, which is an active research effort at Intel AI. Future work will continue to develop and expand CERL as a general tool for solving deep reinforcement learning problems where interactions with the environment are extremely slow.
\section{Asynchronous DDPG with multiple actor-critics}\label{s:mattias}
\sectionauthor{Mattias Ljungström}
An asynchronous DDPG \citep{lillicrap2015continuous,silver2014deterministic} algorithm is set up with multiple actor-critic pairs. Each pair is trained with a different discount factor on the same replay memory. Experience is collected asynchronously using each pair on a different thread. The goal of the setup is to balance time spent on training versus simulation of the environment. The final agent scores 9938 on average over 60 test seeds, and placed fourth in the NeurIPS 2018 AI for Prosthetics competition.
\subsection{Methods}
\subsubsection{Multiple Actor-Critic pairs}
Simulation of the given environment is extremely costly in terms of CPU: each step can require between 0.1 seconds and 20 minutes to complete. To optimize utilization of CPU during training, an asynchronous approach is beneficial. During round 1 of the competition, a DDPG algorithm was used in combination with asynchronous experience collection on 16 threads. Training was done on a physical computer with 16 cores. Analysis of CPU utilization during training showed that only a fraction of CPU time was used to train the network, and most (>95\%) was spent on environment simulation.
To shift this balance in favor of network training, multiple actor-critic (AC) pairs were trained on the same collected experience. Each AC pair has a different discount factor and is trained with unique mini-batches. After a set number of steps, actors are shifted in a circular way to the next critic. During training, each AC pair takes turns running episodes on 16 threads. To support this larger network setup, training is done on GPU instead of CPU. All experience is stored in the same replay memory buffer. For the final model, 8 AC pairs were used, to balance CPU load but also due to GPU memory limits.
At inference, each actor produces an action based on the current observation. Further new actions are created from averages of 2 and 3 of the initial actions, and are added to a list of potential actions. Potential actions are evaluated by all critics, and the action with the maximum value is picked (Fig.~\ref{img:ac-pairs}).
Complete source code for solution is available at \url{https://github.com/mattiasljungstrom/learningtorun_2018}.
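The inference scheme can be sketched as follows; the toy actors and critic, and the choice of taking the per-candidate maximum over critics, are illustrative assumptions rather than the exact implementation:

```python
from itertools import combinations

def pick_action(actors, critics, obs):
    # Each actor proposes an action; pairwise and triple-wise averages are
    # added as extra candidates; all candidates are scored by all critics and
    # the highest-valued candidate is returned. Actors and critics are plain
    # callables here; the real ones are neural networks.
    base = [a(obs) for a in actors]
    candidates = list(base)
    for r in (2, 3):
        for group in combinations(base, r):
            n = len(group[0])
            candidates.append([sum(g[i] for g in group) / r for i in range(n)])
    return max(candidates, key=lambda act: max(c(obs, act) for c in critics))

# Toy example: two actors that disagree, one critic preferring 0.5.
a1 = lambda obs: [0.0]
a2 = lambda obs: [1.0]
critic = lambda obs, act: -(act[0] - 0.5) ** 2
best = pick_action([a1, a2], [critic], None)
```

Averaging candidates lets the ensemble interpolate between actors, while the critics arbitrate which proposal to execute.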
\begin{figure}[ht!]
\centering
\begin{subfigure}[t]{0.6\textwidth}
\centering
\includegraphics[width=1\textwidth]{img2018/04_mattias/actor_critic_pairs.pdf}
\caption{Inference with Actor-Critic pairs}
\label{img:ac-pairs}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.35\textwidth}
\centering
\includegraphics[width=1\textwidth]{img2018/04_mattias/rewards_penalty_scaling_150_sq2.pdf}
\caption{Effect of penalty scaling}
\label{img:penalty-scaling}
\end{subfigure}
\caption{Actor-Critic pairs and reward penalties}
\label{fig:setup-penalty}
\end{figure}
\subsubsection{Reward Shaping}\label{sss:reward-mattias}
The original reward has a very low penalty for not moving, which resulted in agents that were satisfied with standing still. To counter this, the original penalty was multiplied by 8, but capped at a maximum of 12. The purpose was to give standing still a slightly negative reward; moving at the correct velocity would still yield a reward of 10. See Fig.~\ref{img:penalty-scaling}.
Furthermore, additional penalties were added to the reward. A penalty for not bending the knees turned out to be helpful in making the agent learn to walk in a more natural way. A penalty was added to ensure that each foot was below the upper part of the respective leg of the skeleton; this helped the agent avoid states where legs were pointing sideways. The pelvis orientation was penalized for not pointing in the velocity direction, which helped the agent to turn correctly. To further encourage correct turns, a penalty for not keeping the feet in the target velocity direction was added. Finally, a penalty was added to avoid crossed legs and feet positions, as this usually meant the skeleton would trip over itself.
Only penalties were used, since trials with positive rewards showed that the agent would optimize towards fake rewards. With pure penalties, the agent is only rewarded for actions that would also lead to good scores in a penalty free setup. The total sum of penalties was capped at -9 to keep the final reward in the range of [-11, 10].
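The capping scheme described above can be sketched as follows; the helper name and argument shapes are illustrative:

```python
def capped_reward(velocity_error_sq, shaping_penalties):
    # The original velocity penalty is scaled by 8 and capped at 12 (so
    # standing still is slightly negative), and the extra shaping penalties
    # are capped at a total of 9, keeping the final reward in [-11, 10].
    base_penalty = min(8.0 * velocity_error_sq, 12.0)
    shaping = min(sum(shaping_penalties), 9.0)
    return (10.0 - base_penalty) - shaping

perfect = capped_reward(0.0, [])          # right speed, no penalties -> 10
standing = capped_reward(1.25 ** 2, [])   # velocity penalty caps at 12 -> -2
```

The cap on shaping penalties keeps them from dominating the learning signal in pathological states.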
\subsubsection{Replay Memory Optimization}
During initial tests it became clear that too small a replay memory would lead to deteriorating performance of the agent once early experience was overwritten. To counter this, a very large memory buffer was used. Development time was spent optimizing the performance of this memory buffer; changing from a dynamic buffer to a static buffer improved training performance by a factor of 10 for large buffer sizes. The size of the buffer used in final training was 4 million experiences.
\subsubsection{Hyper parameters, Environment Changes and Observation Shaping}\label{sss:observation-mattias}
Due to limited compute resources, a full hyperparameter search was not feasible; a few parameters were each evaluated during training for 24 hours. Discount factors between 0.95 and 0.99 were evaluated, and trials showed the agent learned to walk faster using values in the range [0.96, 0.976]. Different learning rate schedules were evaluated. Evaluating mini-batch sizes from 32 to 128 showed that a higher value was more beneficial.
During training the environment was tweaked so that the agent would be forced to turn 6 times instead of 3. The agent would also skip every other step during training, so that experience was collected at 50Hz instead of 100Hz.
All observation values were manually re-scaled to be in an approximate range of [-5, 5]. All positions, rotations, velocities and accelerations were made to be relative towards pelvis. The target velocity was reshaped into a speed and relative direction change.
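The target-velocity reshaping can be sketched as follows; the yaw-wrapping convention is an assumption, not the team's exact encoding:

```python
import math

def reshape_target(v_target, heading):
    # Re-encode the (x, z) target velocity as a speed plus the relative
    # heading change the agent must make; `heading` is the current yaw in
    # radians. The delta is wrapped into (-pi, pi].
    tx, tz = v_target
    speed = math.hypot(tx, tz)
    target_heading = math.atan2(tz, tx)
    delta = (target_heading - heading + math.pi) % (2 * math.pi) - math.pi
    return speed, delta

speed, delta = reshape_target((1.25, 0.0), heading=0.0)
```

Separating speed from direction gives the policy two slowly varying scalars instead of a rotating 2D vector.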
\subsection{Experiments and results}
\subsubsection{Refinement Training}
After training the final model for 22000 initial episodes, the average score during testing was 9888. At this point a series of evaluations were done on 60 test seeds. It was discovered that the agent would still fall in certain situations.
\begin{figure}[ht!]
\centering
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=1\textwidth]{img2018/04_mattias/training_initial_300dpi_3.png}
\caption{Rewards during initial 22000 episodes}
\label{fig:training-rewards}
\end{subfigure}
\begin{subfigure}[t]{0.47\textwidth}
\includegraphics[width=1\textwidth]{img2018/04_mattias/test_initial_60.png}
\caption{Path plot of initial model tested on 60 seeds. Color [red, green] maps to rewards [9800, 9950]}
\label{fig:test-initial-60}
\end{subfigure}
\caption{Training statistics}
\label{fig:training}
\end{figure}
From these 60 tests the worst performing seeds were selected. The agent was run on these to collect experience without training for a thousand episodes. Training was then continued using only seeds with low scores. In addition, learning rates for both actor and critic were lowered to allow fine-tuning. After another 9000 episodes of training in this way, the average score of the agent was 9930.
At this point training was switched from turning 6 times per episode to 3 times, as in the final evaluation. Again, new experience was collected without training, and training was then continued using 3 turns. After approximately 4000 more training episodes, the final agent\footnote{\url{https://mljx.io/x/neurips_walk_2018.gif}} scored 9938.7 on average over 60 test seeds.
\subsection{Discussion}
Using multiple actor-critic pairs allows the agent to learn more efficiently from the experience collected. Refinement training with human-assisted adjustments enabled the agent to go from an average score of 9888 to 9938. Reward penalties allowed the agent to learn to walk faster by excluding known bad states, but probably limited exploration, which could have generated better rewards.
\section{Model-guided PPO for sample-efficient motor learning}\label{s:lance}
\sectionauthor{Lance Rane}
Proximal policy optimisation (PPO) \cite{schulman2017proximal} has become the preferred reinforcement learning algorithm for many due to its stability and ease of tuning, but it can be slow relative to off-policy algorithms. Leveraging a model of the system’s dynamics can bring significant improvements in sample efficiency.
We used human motion data in combination with inverse dynamics and neuromusculoskeletal modelling to derive, at low computational cost, guiding state trajectories for learning. The resulting policies, trained using PPO, were capable of producing flexible, lifelike behaviours with fewer than 3 million samples.
\subsection{Methods}
We describe methods and results for the final round of the competition only, where the task was to train an agent to match a dynamically changing velocity vector.
\subsubsection{State pre-processing and policy structure}
Positional variables were re-defined relative to a pelvic-centric coordinate system, to induce invariance to the absolute position and heading of the agent. The state was augmented with segment and centre of mass positions, velocities and accelerations, muscle fibre lengths and activation levels, and ground contact forces prior to input to the policy network. \textit{x} and \textit{z} components of the target velocity were provided in both unprocessed form and also after subtracting corresponding components of the current translational velocity of the agent. All values were normalized using running values for means and standard deviations.
The policy was a feedforward neural network with 2 layers of 312 neurons each, with \textit{tanh} activation functions. The action space was discretized such that the output of the network comprised a multicategorical probability distribution over excitations for each of the 18 muscles.
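As an illustration of the discretized action space, the following sketch samples one excitation level per muscle from a multicategorical distribution; the number of bins per muscle is an assumption, since the text does not state it:

```python
import numpy as np

N_MUSCLES = 18   # actuated muscles in the prosthetic model
N_BINS = 11      # hypothetical bin count; the text does not state the discretization level

def sample_excitations(logits, rng):
    """Sample one excitation level per muscle from a multicategorical distribution.

    logits: (N_MUSCLES, N_BINS) array, the policy network output.
    Returns N_MUSCLES excitations mapped into [0, 1].
    """
    # Numerically stable softmax over the bin axis.
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    # One independent categorical draw per muscle, then bin index -> excitation.
    bins = np.array([rng.choice(N_BINS, p=p) for p in probs])
    return bins / (N_BINS - 1)

rng = np.random.default_rng(0)
actions = sample_excitations(rng.normal(size=(N_MUSCLES, N_BINS)), rng)
```

Discretizing per muscle keeps the output head small (18 independent categorical distributions) while still covering the full excitation range.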
\subsubsection{Phase 1: Policy initialisation}
During phase 1, training was supported by the use of guiding state trajectories. Motion data describing multiple gait cycles of human non-amputee straight-line walking at 1.36 m/s \cite{john2013stabilisation} were processed by resampling to generate three distinct sets of marker trajectories corresponding to average pelvic translational velocities of 0.75, 1.25 and 1.75 m/s. For each of these three datasets the following pipeline was executed:
\begin{itemize}
\item Virtual markers corresponding to the experimental markers were assigned to the OpenSim prosthetic model, except those for which no parent anatomy existed in the model.
\item Inverse kinematic analysis was performed to compute a trajectory of poses (and resultant joint angles) of the model that closely matched the experimental data.
\item Computed muscle control (CMC) \cite{thelen2003generating} was used to derive trajectories of states and muscle excitations consistent with the motion.
\end{itemize}
CMC performs inverse dynamic analysis followed by static optimization with feedforward and feedback control to drive a model towards experimental kinematics. As a method for finding controls consistent with a given motion, it may be viewed as a compromise between the simplicity of pure static optimization and the rigour of optimal control methods such as direct collocation, which can provide more robust solutions at greater computational expense. CMC was favoured here for its speed and simplicity, and the existence of an established implementation in OpenSim.
Paired trajectories of states and excitations may be used to initialize policies, for example by imitation learning and DAgger \cite{ross2011reduction}, but this approach failed here, possibly due to a reliance of CMC upon additional ‘residual’ ideal torque actuators to guarantee the success of optimization. However, the trajectory of states derived from CMC, which includes muscle fibre lengths and activation levels, was found to contain useful information for policy learning. Following \cite{peng2018deepmimic}, we used two methods to convey this information to the agent.\bigskip
\textbf{1. Reward shaping}. An imitation term was incorporated into the reward, describing the closeness of the agent’s kinematics (joint angles and speeds) to those of the reference kinematic trajectory at a given time step. The full reward function is described by:
\begin{equation}
r_{t} = w_{1}\,r_{t}^{imitation} + w_{2}\,r_{t}^{goal}
\end{equation} where $r_{t}^{imitation}$ is the imitation term and $r_{t}^{goal}$ describes the agent’s concordance with the target velocity vector at time \textit{t}. At each step, the motion clip used to compute the imitation objective was selected from the three available clips on the basis of minimum Euclidean distance between the clip’s speed and the magnitude of the desired velocity vector. The choice of the coefficients $w_{1}$ and $w_{2}$, which weight these terms in the overall reward, was found to be an important determinant of learning progress. Values of 0.7 and 0.3 for the imitation and goal terms respectively were used in the final model. \bigskip
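A minimal sketch of this weighted reward, assuming DeepMimic-style exponentiated tracking errors for both terms (the exact kernels and scales are not given in the text); `select_clip` implements the minimum-distance clip selection:

```python
import numpy as np

W_IMITATION, W_GOAL = 0.7, 0.3   # weights used in the final model

def imitation_reward(joints, ref_joints, scale=2.0):
    """Exponentiated tracking error; the kernel and scale are assumptions."""
    return float(np.exp(-scale * np.sum((joints - ref_joints) ** 2)))

def goal_reward(v_pelvis, v_target, scale=1.0):
    """Exponentiated velocity error against the target vector."""
    return float(np.exp(-scale * np.sum((v_pelvis - v_target) ** 2)))

def select_clip(clip_speeds, v_target):
    """Pick the clip whose speed is closest to the magnitude of the target velocity."""
    return int(np.argmin(np.abs(np.asarray(clip_speeds) - np.linalg.norm(v_target))))

def step_reward(joints, ref_joints, v_pelvis, v_target):
    return (W_IMITATION * imitation_reward(joints, ref_joints)
            + W_GOAL * goal_reward(v_pelvis, v_target))
```

With perfect tracking both terms are 1, so the maximum per-step reward is 1.0.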
\textbf{2. Reference state initialization (RSI)}. At the start of each episode, a motion clip was selected at random and a single state was sampled and used for initialization of the agent. The sampled state index determined the reference kinematics used to compute imitation rewards, which were incremented in subsequent time steps.
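RSI can be sketched as:

```python
import random

def reference_state_init(clips, rng=random):
    """RSI: choose a random motion clip and a random frame within it.

    clips: list of state trajectories (one list of states per clip).
    The returned frame index also fixes which reference kinematics are used
    for imitation rewards in the following time steps.
    """
    clip_idx = rng.randrange(len(clips))
    frame_idx = rng.randrange(len(clips[clip_idx]))
    return clip_idx, frame_idx, clips[clip_idx][frame_idx]
```

Starting episodes from states scattered along the reference motion exposes the agent to mid-gait configurations it would otherwise rarely visit.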
\subsubsection{Phase 2: Policy fine-tuning}
Following a period of training, the imitation objective was removed from the reward function, leaving a sole goal term for further training.
\subsection{Experiments and results}
Test scores in excess of 9700 were achieved after training for 10 hours (approximately 2.5 million samples) on a 4-core machine (Intel Core i7 6700). Further training improved the final score to 9853.
\begin{figure}
\begin{center}
\includegraphics[width=0.85\textwidth]{img2018/07_lance/ablations_big.png}
\caption{Performance impact of key feature ablations. Each line represents the average score over three independent runs. \textit{no multi-clip} refers to models trained using only a single motion clip at 1.25 m/s. Agents trained without RSI scored relatively well, but did so by learning a policy that favoured remaining still.}
\label{img:lance-ablations}
\end{center}
\end{figure}
\subsection{Discussion}
Learned policies demonstrated natural walking behaviours and were capable of adapting both speed and direction on demand. Despite the restriction of motion data to just three discrete speeds during training, agents learned to generalize to a continuous range of walking speeds, within and beyond the range of these clips, and were able to combine changes in speed with changes of direction, for which no motion data were provided. Both reward shaping and reference state initialization proved critical for effective learning, with the absence of either leading to a collapse in performance. In a boost to the potential flexibility of the method, and unlike in \cite{peng2018deepmimic}, training was not dependent on the use of a synchronizing phase variable.
To some extent, imitation resulted in suboptimal learning: for example, the goal term was based on a constant pelvic velocity, but pelvic velocity fluctuates considerably during normal human walking. This may explain why a period of fine-tuning without imitation boosted scores slightly. Further improvements may have been possible with the use of data from turns during gait, which unfortunately were not available during the competition. Nevertheless, the techniques described here may find use in the rapid initialization of policies to serve as models of motor control or as the basis for the learning of more complex skills. Code, detailed design choices and hyperparameters may be viewed at \url{https://github.com/lancerane/NIPS-2018-AI-for-Prosthetics}.
\section{Deep Reinforcement Learning with GPU-CPU Multiprocessing}\label{s:jbr}
\sectionauthor{Aleksei Shpilman, Ivan Sosin, Oleg Svidchenko, Aleksandra Malysheva, Daniel Kudenko}
One of the main challenges we faced is that running the simulation is very CPU-heavy, while the optimal computing device for training neural networks is a GPU. One way to overcome this problem is to build a custom machine with GPU-to-CPU proportions that avoid bottlenecking one or the other. Another is to have a GPU machine (such as an AWS accelerated computing instance) work together with a CPU machine (such as an AWS compute-optimized instance). We designed a framework for such a tandem interaction \cite{ivan_sosin_2018_1938263}.
For the AI in Prosthetics competition we used the DDPG algorithm \cite{lillicrap2015continuous} with 4 layers of 512 neurons in the actor network and 4 layers of 1024 neurons in the critic network. We also performed additional feature engineering, two-stage reward shaping, and ensembling through SGDR \cite{loshchilov2017SGDR}.
\subsection{Methods}
\subsubsection{GPU-CPU Multiprocessing}\label{sss:distributed-jbr}
\begin{figure}
\caption{Framework for running processes on a tandem GPU (client) and CPU (server) machines.}
\label{img:jbr-framework}
\begin{center}
\includegraphics[width=0.7\textwidth]{img2018/06_jbr/jbr-framework.png}
\end{center}
\end{figure}
Figure \ref{img:jbr-framework} shows our training framework. We divide it into a client and a server side. The client (GPU instance) trains the model based on data received from the server (CPU instance). On the server side we launch a number of real environments, wrapped in an HTTP server, to run the physical simulation. On the client side we launch a corresponding number of virtual environments that redirect requests to the OpenSim environments. These virtual environments transmit states (via a queue) to model workers that process them and output actions. The model workers' networks are constantly updated by the trainer via shared memory.
Samplers handle complete episodes and produce a batch for trainers to train the actor and critic networks on.
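The request-reply pattern between environments and model workers can be sketched with in-process queues; this toy version uses threads and a scalar "simulator" in place of the HTTP server and OpenSim processes:

```python
import queue
import threading

def env_worker(state_q, action_q, n_steps):
    """Stands in for one simulation environment: send states, wait for actions."""
    state = 0.0
    for _ in range(n_steps):
        state_q.put(state)
        state += action_q.get()      # toy dynamics in place of OpenSim
    state_q.put(None)                # episode-finished sentinel

def run_tandem(n_steps):
    """Model-worker loop: answer every incoming state until the sentinel arrives."""
    state_q, action_q = queue.Queue(), queue.Queue()
    env = threading.Thread(target=env_worker, args=(state_q, action_q, n_steps))
    env.start()
    served = 0
    while True:
        if state_q.get() is None:    # sentinel: episode finished
            break
        action_q.put(1.0)            # a real worker would run the policy network here
        served += 1
    env.join()
    return served
```

In the actual framework the two sides live on separate machines and communicate over HTTP, with several environments multiplexed per model worker.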
\subsubsection{Additional tricks}\label{sss:observation-jbr}
We used the DDPG algorithm \cite{lillicrap2015continuous} with the following methodologies that seem to improve the final result:
\textbf{Feature engineering}. In addition to the default features, we have engineered the following additional features:
\begin{itemize}
\item XYZ coordinates, velocity, and acceleration relevant to the pelvis center point, body point, and head point ($10\times3\times3\times3=270$ features).
\item XYZ rotations, rotational velocities and rotational accelerations relevant to the pelvis center point ($10\times3\times3=90$ features).
\item XYZ coordinates, velocities, and accelerations of center of mass relevant to the pelvis center point ($3\times3=9$ features).
\end{itemize}
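A sketch of the pelvis-relative construction for one reference point (the actual features repeat this for the body and head points as well):

```python
import numpy as np

N_BODIES = 10  # body points in the skeleton, matching the feature counts above

def pelvis_relative(q, pelvis_index=0):
    """Make a (N_BODIES, 3) XYZ quantity relative to the pelvis row."""
    return q - q[pelvis_index]

def engineered_features(pos, vel, acc):
    """Pelvis-relative position, velocity and acceleration:
    10 bodies x 3 axes x 3 derivatives = 90 features for this reference point."""
    return np.concatenate([pelvis_relative(q).ravel() for q in (pos, vel, acc)])
```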
The size of the resulting feature vector was 510. Figure \ref{img:jbr-fe} shows the result of adding reward shaping and new features to the baseline DDPG algorithm.
\begin{figure}
\caption{Improvement of training speed and performance after engineering additional features.}
\label{img:jbr-fe}
\begin{center}
\includegraphics[width=0.7\textwidth]{img2018/06_jbr/jbr-RS-FE.png}
\end{center}
\end{figure}
\textbf{Reward shaping}. We used two-stage reward shaping. For the first 7 hours we used the following reward function, which is much easier to train with than the original Round 2 reward function because it is less punishing at the beginning of training:
\begin{equation}
r = 1 - \frac{||v_{target} - v|| ^ 2}{||v_{target}||^2},
\end{equation}
where $v_{target}$ and $v$ are the target velocity and the actual velocity, respectively. After that, we used a modified and clipped Round 2 reward function:
\begin{equation}
r =
\begin{cases}
2 \cdot r_{origin} - 19&\text{if }\ r_{origin} \in (9.5, 10)\\
-1 & \text{otherwise,}
\end{cases}
\end{equation}
where $r_{origin}$ is the original Round 2 reward function. This reward function awards the model for a high score and penalizes any score below 9.5.
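Both stages can be written directly from the equations above:

```python
import numpy as np

def stage1_reward(v, v_target):
    """First-stage shaped reward: 1 - ||v_target - v||^2 / ||v_target||^2."""
    v, v_target = np.asarray(v, float), np.asarray(v_target, float)
    return 1.0 - np.sum((v_target - v) ** 2) / np.sum(v_target ** 2)

def stage2_reward(r_origin):
    """Second-stage reward: amplify near-perfect steps, flat penalty otherwise."""
    return 2.0 * r_origin - 19.0 if 9.5 < r_origin < 10.0 else -1.0
```

Stage 1 peaks at 1 when the velocity matches the target exactly; stage 2 maps the original reward's (9.5, 10) band onto (0, 1) and penalizes everything else.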
\textbf{Ensembles}. We used Stochastic Gradient Descent with Warm Restarts (SGDR \cite{loshchilov2017SGDR}) to produce an ensemble of 10 networks, and then we chose the best combination of 4 networks by grid-search. The final action was calculated as an average output vector of those 4 networks.
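A sketch of the ensembling step, with `networks` standing in for the SGDR snapshots (callables mapping a state to an action vector):

```python
import numpy as np
from itertools import combinations

def ensemble_action(networks, state):
    """Final action: average output vector of the selected networks."""
    return np.mean([net(state) for net in networks], axis=0)

def best_combination(networks, score_fn, k=4):
    """Grid-search every k-subset of the SGDR snapshots, keep the best scorer."""
    return max(combinations(networks, k), key=score_fn)
```

In practice `score_fn` would evaluate each candidate subset on held-out episodes, which is the expensive part of the grid search.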
\subsection{Experiments and results}
We used an AWS p3.2xlarge instance with an Nvidia Tesla V100 GPU and 8 Intel Xeon CPU cores for training the network, in tandem with a c5.18xlarge instance with 72 CPU cores (50 were utilized) for running the simulations. We used 50 environments (real and virtual) and 4 model workers, samplers, and trainers. We trained models for approximately 20 hours, with an additional 10 hours or so used for SGDR ensembling.
The results presented in Table \ref{tbl:jbr-tricks} show confidence intervals for the score, calculated on 100 different random seeds for the Round 2 reward. The final score that was achieved on the ten seeds used by the organizers was 9865.
\begin{table}
\centering
\begin{tabular}{|c|c|c|}
\hline
Model & Score & Frequency of falling \\
\hline
Baseline & 4041.10 $\pm$ 539.23 & N/A \\
\hline
Feature Engineering & 8354.70 $\pm$ 439.3 & 0.36 \\
\hline
Feature engineering + Reward Shaping & 9097.65 $\pm$ 344 & 0.21 \\
\hline
Feature engineering + Reward Shaping + Ensembles & 9846.72 $\pm$ 29.6 & 0.02 \\
\hline
\end{tabular}
\caption{Performance of models with different modules on 100 random seeds for Round 2 rewards}
\label{tbl:jbr-tricks}
\end{table}
\subsection{Discussion}
Our framework allows for the utilization of GPU and CPU machines in tandem for training a neural network. Using this approach we were able to train an ensemble of networks that achieved a score of 9865 (6th place) in only 20 hours (plus 10 hours for ensembling with SGDR) on tandem p3.2xlarge-c5.18xlarge instances.
Our code is available at \url{https://github.com/iasawseen/MultiServerRL}.
\section{Ensemble of PPO Agents with Residual Blocks and Soft Target Update}\label{s:rukia}
\sectionauthor{Yunsheng Tian, Ruihan Yang, Pingchuan Ma}
Our solution was based on distributed Proximal Policy Optimization \cite{schulman2017proximal}, chosen for its stable convergence and robustness to hyper-parameters. In addition to careful observation engineering and reward shaping, we added residual blocks to both the policy and value networks and observed faster convergence. To address the instability of gait when the target speed changes abruptly in Round 2, we introduced a soft target update for a smoother transition in the observation. We also found that Layer Normalization helps learning, and that SELU outperforms other activation functions. Our best result was built on multiple agents fine-tuned at different target speeds, between which we switched dynamically during evaluation. We scored 9809 and placed 10th in the NeurIPS 2018 AI for Prosthetics competition.
\subsection{Methods}
\subsubsection{Observation Engineering}
In our task, the full state of the dynamic system is determined by hundreds of physical quantities, which are complex and inefficient for the agent to learn from. We therefore applied several observation engineering techniques to alleviate this issue.
\textbf{Dimension Reduction}. Among the quantities in the original observation, we carefully examined each kind of physical quantity of the skeleton model and found that acceleration-related values have a much larger variation range than the others and appear unstable during simulation. Since position and velocity quantities are sufficient to represent the model dynamics, we removed acceleration values from the observation. The removal did not hurt performance and even sped up convergence, owing to the reduction of nearly 100 input dimensions.
\textbf{Observation Generalization}. Because the ultimate goal is to walk at a consistent speed, the agent does not need to know its absolute location; its observation should be as similar as possible when walking in the same pose at different places. We therefore subtract the pelvis position from the positions of all bodies except the pelvis itself. This prevents many inputs to the policy and value networks from growing without bound as the agent travels farther, while still letting the agent know its current distance from the starting point.
\textbf{Soft Target Update}. We observed that our trained agents are more likely to fall during abrupt changes of the target velocity in the observation. The lack of generalization of our policy network could be the reason for this falling behavior, but it is also possible that the agent reached a local optimum by falling in the direction of the changed target velocity. Denoting the target velocity fed into the observation as $v_{curr}$ and the real target velocity as $v_{real}$, we smoothed the change of $v_{curr}$ by linear interpolation at each step: $v_{curr}=\tau\,v_{curr}+(1-\tau)\,v_{real}$, where $\tau$ is a coefficient between 0 and 1 controlling the changing rate of the target velocity. In practice we chose $\tau=0.8$, which guarantees $v_{curr}\approx v_{real}$ within 20 steps.
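The smoothing rule is a one-line update; the short loop below illustrates why $\tau=0.8$ closes most of the gap within 20 steps:

```python
def soft_target_update(v_curr, v_real, tau=0.8):
    """One smoothing step: v_curr <- tau * v_curr + (1 - tau) * v_real."""
    return tau * v_curr + (1.0 - tau) * v_real

# The gap to v_real shrinks by a factor tau per step, so after 20 steps
# only 0.8**20, roughly 1.2%, of the original gap remains.
v_curr, v_real = 0.0, 1.0
for _ in range(20):
    v_curr = soft_target_update(v_curr, v_real)
```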
\subsubsection{Reward Shaping}
Our reward function consists of three parts, shown in equation \ref{eq:rukia_reward}.
\begin{equation}
\begin{split}
Reward & =r_{speed}+r_{straight}+r_{bend}\\
& =w_{speed}*||\max(\sqrt{|\Vec{v}_{target}|}-\sqrt{|\Vec{v}_{pelvis}-\Vec{v}_{target}|},\Vec{0})||_1\\
& +w_{straight}*\sum_{i=head,torso}{(\frac{||\Vec{v}_{target}\times\Vec{v}_{i}||_2}{||\Vec{v}_{target}||_2})^2}\\
& +w_{bend}*\sum_{i=left\_knee,right\_knee}\min(\max(\theta_{i},-0.4),0),
\end{split}
\label{eq:rukia_reward}
\end{equation}
where $w_{speed},w_{straight},w_{bend}$ are weights chosen as $5,4,2$ respectively, $\Vec{v}$ represents 2-dimension velocity vector on the X-Z plane, and $\theta$ is a negative value that accounts for the bending angle of a knee. The detailed meaning of each part of the reward is discussed below.
\begin{figure}
\centering
\begin{subfigure}{0.45\textwidth}
\includegraphics[scale=0.45]{img2018/10_rukia/rew_comp.png}
\caption{The reward curve comparison between the original reward function and our version.}
\label{fig:rukia_rew_speed}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[scale=0.45]{img2018/10_rukia/result.png}
\caption{An example learning curve from final agents.}
\label{fig:rukia_result}
\end{subfigure}
\caption{Modified reward function comparison and an example overall learning curve}
\end{figure}
\textbf{Speed matching}. It is most important that the velocity of the pelvis matches the target velocity. In practice, we found it easy for the agent to speed up but hard for it to control its speed around the target value. Thus, instead of the speed term in the original reward, $-||\Vec{v}_{pelvis}-\Vec{v}_{target}||_2$, we changed the L2-norm to a square root for more sensitivity near the target, which converged faster than the original reward (see figure \ref{fig:rukia_rew_speed}). However, the speed-matching reward alone seemed insufficient for this task due to local optima in learning gaits, so we introduced auxiliary reward terms that helped the agent behave more reasonably.
\textbf{Going straight}. Because the speed matching reward only pays attention to the agent's pelvis, sometimes the agent cannot keep walking straight even though the movement of its pelvis point nearly matches the target value. Thus, we also encourage its head and torso to move at the target speed, which further ensures that the skeleton body keeps vertically straight.
\textbf{Bending knees}. Our agents could hardly learn to bend their knees before we added this reward term, and keeping the legs straight makes the agent more likely to fall. This term encourages the agent to bend its knees to a small angle, which improves the stability of walking at a consistent speed.
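For illustration, the speed and knee-bending terms of equation \ref{eq:rukia_reward} can be sketched as follows (the going-straight term is omitted for brevity):

```python
import numpy as np

W_SPEED, W_BEND = 5.0, 2.0

def speed_term(v_pelvis, v_target):
    """Square-root kernel of the speed-matching term: more sensitive near the target."""
    gap = np.sqrt(np.abs(v_target)) - np.sqrt(np.abs(v_pelvis - v_target))
    return float(np.linalg.norm(np.maximum(gap, 0.0), ord=1))

def bend_term(knee_angles):
    """Knee angles are negative when bent; clamp each to [-0.4, 0] as in the equation."""
    return sum(min(max(th, -0.4), 0.0) for th in knee_angles)
```

The speed term is maximized when the pelvis velocity equals the target, while the bend term is a bounded penalty that vanishes once both knees are bent by at least 0.4 rad.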
\subsubsection{Residual Blocks}
\begin{wrapfigure}{r}{5cm}
\centering
\includegraphics[width=6cm]{img2018/10_rukia/network.png}
\caption{Policy Network Overview}
\label{fig:rukia_network}
\end{wrapfigure}
We applied the idea of residual blocks to our policy network and value network, i.e., we added shortcut connections on the basis of 4 fully connected layers, as illustrated in figure \ref{fig:rukia_network}. Consequently, it further improved about 10\% on our shaped reward and sped up convergence compared to results based on networks without shortcut connections.
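An illustrative NumPy sketch of one such block (the real networks have four learned layers; widths must match for the shortcut sum):

```python
import numpy as np

def dense_tanh(x, W, b):
    """One fully connected layer with tanh activation."""
    return np.tanh(x @ W + b)

def residual_block(x, W1, b1, W2, b2):
    """Two dense layers plus an identity shortcut connection."""
    h = dense_tanh(dense_tanh(x, W1, b1), W2, b2)
    return h + x   # the shortcut connection
```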
\subsubsection{Ensemble of Agents}
Limited by time and computing resources, we found it hard to train a single agent that adapts automatically to different speeds, so we trained several agents that each specialize in one particular target velocity and combined them at evaluation time. To select target velocities for training, we generated thousands of targets using the original generation algorithm and observed that target velocities approximately conform to the Gaussian distributions $v_{x}\sim\mathcal{N}(1.27,0.32)$ and $v_{z}\sim\mathcal{N}(0, 0.36)$. We then picked $v_{x}$ from \{0.7, 0.9, 1.25, 1.4, 1.6, 1.8\}, which lie within the $2\sigma$ range of the distribution, and simply used $v_{z}=0$ to train the agents. This gave a clear boost in performance, though it is a less elegant and scalable approach than a true multi-speed agent.
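The evaluation-time switching rule can be sketched as picking the specialist trained nearest to the current target speed:

```python
import numpy as np

TRAINED_SPEEDS = [0.7, 0.9, 1.25, 1.4, 1.6, 1.8]  # v_x values of the specialists

def select_agent(v_target):
    """Switch to the specialist whose training speed is nearest the target magnitude."""
    speed = float(np.linalg.norm(v_target))
    return int(np.argmin([abs(s - speed) for s in TRAINED_SPEEDS]))
```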
\subsubsection{Additional Tricks}
Based on our benchmark experiments, we found that adding Layer Normalization before each activation stabilizes convergence and that SELU performs better than other activation functions such as ReLU and ELU. We also tried several scales of parameter noise, but it turned out to have only a minor effect on results, so we did not add noise.
\subsection{Experiments and results}
We ran experiments mainly on Google Compute Engine and the Tianhe-1 supercomputer (CPU: Intel Xeon X5670). Our final result took one day of training on 240 CPUs on Tianhe-1. Our PPO implementation was based on OpenAI Baselines \citep{baselines} and achieved parallelization by distributed sampling on each process.
The hyper-parameters for our reward function and tricks are stated above. For the RL algorithm and optimizer, we mainly used the default values from OpenAI's PPO implementation and the Adam optimizer, with some minor changes: we sampled 16000 steps per iteration and used a batch size of 64, a step size of $3\times10^{-4}$, 10 optimization epochs per iteration, an annealed clipping $\epsilon$ of 0.2, and a policy entropy penalty of 0.001. For more detailed information, please see our source code at \url{https://github.com/Knoxantropicen/AI-for-Prosthetics}.
Figure \ref{fig:rukia_result} shows an example learning curve of an agent adapted to running at 1.6 m/s. The Y-axis of the figure represents mean original reward per step (which is not our shaped reward, and 10 is the maximum). It took more than 16M samples to get this result. The cumulative reward of a whole trajectory usually varies between 9800 and 9900 according to different random seeds. We achieved 9809 in the official evaluation.
\subsection{Discussion}
Thanks to all of the team members' effort, we got a pretty good result and a satisfying rank. However, our solution is relatively brute-force and naive compared with those winning solutions and could be improved in multiple ways. So, this section focuses on potential improvements.
\subsubsection{Better Sample Efficiency using Better Off-policy Methods}
Our solution is based on Proximal Policy Optimization (PPO), whose sample efficiency is much better than that of previous on-policy methods. Nevertheless, recent advances in off-policy methods such as DDPG, SAC, and TD3 have shown that off-policy methods can sometimes perform better in continuous control tasks. Given the time-consuming simulation in this challenge, methods with better sample efficiency could be a better choice.
\subsubsection{Special Treatment to Crucial Points}
According to our reward analysis, most of our reward loss comes from two parts:
\begin{enumerate}
\item [-] \textbf{Starting stage}.
Our agent suffers from a slow start. Perhaps a specialized starting model could improve the agent's performance in the starting stage.
\item [-] \textbf{Significant direction change}. Currently, we ensemble multiple models and use the soft update of the target velocity to deal with velocity changes, but an ensemble of models trained at different target velocities is likely to reach a sub-optimal solution. An ensemble of models for different specialized tasks, such as changing direction and running forward or backward, could be a better solution.
\end{enumerate}
Moreover, our agent performs extremely poorly in some rare situations. For instance, if the target velocity is extremely slow, our agent is still likely to move forward at high speed and is unable to remain still. Special treatment of these corner cases could also help our agent.
\subsubsection{Model-based Solution}
In this challenge, complex physical simulation in high dimensional continuous space makes sampling pretty time-consuming. Another alternative is to use model-based methods to get more imaginary samples for training, and a known reward function in this challenge makes model-based methods feasible.
\subsubsection{Observation with False Ending Signal}
In infinite-horizon tasks of this kind, it is natural to set a maximum time-step limit for a simulation trajectory. The sample from the last step is therefore associated with an ending signal, while the preceding samples are not, even if their states are quite similar. When the RL algorithm uses this ending signal to compute and predict Q/V values, e.g., in 1-step TD estimation, the discrepancy can cause a significant error in value prediction, which destabilizes training. Our implementation did not treat this situation, though bootstrapping the value at the time limit would solve it.
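A sketch of the one-step bootstrap mentioned above: treat a time-limit cutoff differently from a genuine termination when forming the TD target (the $\gamma$ value is illustrative):

```python
def td_target(reward, next_value, done, timeout, gamma=0.99):
    """1-step TD target that separates true terminations from time-limit cutoffs.

    On a genuine termination the future return is zero; at a time-limit cutoff
    the episode would have continued, so we bootstrap from the next state value.
    """
    if done and not timeout:
        return reward
    return reward + gamma * next_value
```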
\section{Efficient and Robust Learning on Elaborated Gaits with Curriculum Learning}\label{s:BaiDu_NLP}
\sectionauthor{Bo Zhou, Hongsheng Zeng, Fan Wang, Rongzhong Lian, Hao Tian}
We introduce a new framework for learning complex gaits with musculoskeletal models. Our framework combines Reinforcement Learning with Curriculum Learning \cite{Yoshua2009Curriculum}. We used Deep Deterministic Policy Gradient (DDPG) \cite{lillicrap2015continuous}, driven by the external control command. We accelerated the learning process with large-scale distributed training and the bootstrapped deep exploration paradigm \cite{Ian2016Deep}. As a result, our approach\footnote{Find open-source code at: https://github.com/PaddlePaddle/PARL} won the NeurIPS 2018: AI for Prosthetics competition, scoring more than 30 points higher than the second-placed solution.
\subsection{Challenges}
Compared with the 2017 Learning to Run competition, there are several changes in the 2018 AI for Prosthetics competition. Firstly, the model is no longer restricted to 2D movement; it moves in 3D, including the lateral direction. Secondly, a prosthetic leg without muscles replaces the intact right leg. Thirdly, an external random velocity command is provided, requiring the model to run at a specified direction and speed instead of running as fast as possible. These changes pose a more functional challenge for human rehabilitation. We believe this problem raises several distinct challenges.
\textbf{High-Dimensional Non-Linear System.}
There are 185 dimensions of observation, with 7 joints and 11 body parts in the whole system. The action space consists of 19 continuous control signals for the muscles. Though the number of observation dimensions is not extremely large, the system is highly non-linear, and the action space is relatively large compared with many other problems. Moreover, as shown in Fig.~\ref{fig:velocity}, the agent is required to walk at different speeds and directions, which further expands the observation and transition spaces. The curse of dimensionality \cite{Richard1961Adaptive} raises the core issues of slow convergence, local optima, and instability.
\begin{figure*}[ht!]%
\centering
\includegraphics[width=1\textwidth]{img2018/01_firework/velocity_distribution.png}
\caption{\textbf{Target velocity distribution.} Each distribution is based on $10^7$ samplings of the target velocity after each change.}
\label{fig:velocity}
\end{figure*}
\textbf{Local optimum \& Stability.}
Though local optima are a common problem in most dynamic systems, low-speed walking is especially problematic for the current model. According to Fig.~\ref{fig:velocity}, the required speed is relatively low, around 1.25 m/s. When learning from scratch to achieve a specific speed, our early investigation revealed that the skeleton walks with a variety of gestures that yield nearly the same reward: the agent walks laterally (crab-like walking), bumps along, or drags one of its legs. While none of these gaits is natural, they are nearly indistinguishable in reward. Although these unrealistic gaits can reasonably produce walking at a constant velocity, they perform very poorly with respect to stability. Transferring the model to other specified velocities becomes a problem, and the system is prone to fall, especially at the moment of switching velocities.
\subsection{Methodology}\label{ss:methodology}
To deal with the challenges mentioned above, we tried several main ideas. Among the variety of basic RL algorithms, we chose DDPG \cite{lillicrap2015continuous}, an efficient off-policy solver for continuous control; PPO, as an on-policy solver, often suffers from larger variance and lower efficiency. To further increase efficiency, we applied deep exploration with multi-head bootstrapping \cite{Ian2016Deep}, which has been shown to converge much faster than $\epsilon$-greedy exploration. To allow the policy to closely follow the velocity target, we inject the velocity as a feature into the policy and value networks. Finally, to address the core issue of local optima, we applied curriculum learning to transfer efficient and stable gaits across the velocity range.
\textbf{Model architecture.} As shown in Fig.~\ref{fig:target-driven}, we used 4 fully connected layers for both the actor and critic networks in the DDPG algorithm. Compared to typical DDPG architectures, ours has two distinct features. We inject the target velocity at the bottom of both networks, as the value function needs to evaluate the current state with respect to the target velocity, and the policy needs to take the corresponding action to reach it. This is similar to adding the target velocity to the observation. Though it introduces some noise when the velocity switches, it benefits more from automatically sharing knowledge across velocities.
We also use multiple heads for the value and policy networks. This architecture is similar to deep exploration \cite{Ian2016Deep}, which simulates an ensemble of neural networks at lower cost by sharing the bottom layers.
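A minimal sketch of the shared-trunk, multi-head idea (dimensions and initialization are illustrative):

```python
import numpy as np

class MultiHeadNet:
    """Shared bottom layers with K heads, approximating an ensemble at lower cost."""
    def __init__(self, in_dim, hidden, out_dim, k, rng):
        self.trunk = rng.normal(scale=0.1, size=(in_dim, hidden))
        self.heads = [rng.normal(scale=0.1, size=(hidden, out_dim)) for _ in range(k)]

    def forward(self, x, head):
        h = np.tanh(x @ self.trunk)   # shared representation
        return h @ self.heads[head]   # head-specific output
```

During bootstrapped exploration, each episode is run with one randomly chosen head, so the heads diverge and provide ensemble-like diversity.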
\begin{figure}
\centering
\includegraphics[width=0.25\linewidth]{img2018/01_firework/target-driven.png}
\caption{Architecture of target-driven DDPG.}
\label{fig:target-driven}
\vspace*{-.2in}
\end{figure}
\textbf{Transfer Learning.} We propose that by sharing knowledge of walking or running at different speeds, the agent can learn more robust and efficient walking patterns. We found in our experiments that the unstable gaits learned from scratch for low-speed walking do not work well for high-speed running. We therefore investigated running as fast as possible instead of at a specified speed, and obtained an agent that runs very fast with reasonable, natural gaits, much like a human. Starting from the trained policy for fast running, we switched the target to lower-speed walking. This process resembles transfer learning: we want the ``knowledge'' of the gait to be kept, but at a slower speed. Our fastest running reaches velocities over 4.0 m/s. Transferring the policy directly to 1.25 m/s still resulted in gestures that were not natural enough and were prone to falling, but transferring from a higher speed was nonetheless progress, as the fall rate dropped substantially.
\textbf{Curriculum Learning.} Curriculum learning \cite{Yoshua2009Curriculum} tackles a difficult task progressively by constructing a series of tasks of gradually increasing difficulty. Recently it has been used to solve complex video game challenges \cite{Yuxin2017Training}. As the direct transfer of a higher-speed running policy to lower speeds did not work well, we devised 5 tasks that decrease the velocity linearly, each starting from the trained policy of the previous one. In the end, we obtained a policy running at a target of 1.25 m/s, with natural, human-like gaits and a low falling rate.
\textbf{Fine-tuning.}
Based on the pretrained walking model targeted at 1.25 m/s, we fine-tune the model in the random-velocity environment. First, we force the policy to walk at 1.25 m/s given any target velocity between -0.5 m/s and 3.0 m/s, which provides a good starting point for target velocities other than 1.25 m/s. To do so, we collect walking trajectories at 1.25 m/s but overwrite the target-velocity and direction features with random values, then use the collected trajectories to re-train the policy with supervised learning. Second, we use the re-trained model as the starting point and fine-tune it in the randomized target-velocity environment using target-driven DDPG, which gives our final policy.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{img2018/01_firework/curriculum-learning.png}
\caption{Learning curves comparing curriculum-learning to learning from scratch. Average scores are computed from 50 episodes.}
\label{fig:curriculum-learning}
\end{figure}
\subsection{Experiments}
Our experiments compared curriculum learning with learning from scratch in the fine-tuning stage, using the same model architecture for both. For the actor model, we use tanh as the activation function in each layer. For the critic model, selu \cite{klambauer2017self} is used as the activation function in each layer except the last. The discount factor for the cumulative reward is 0.96. We also use the frame-skip trick: each step of the agent corresponds to 4 simulation steps in the environment with the same action. Twelve heads are used for bootstrapped deep exploration, a number chosen in practice as a trade-off between the diversity of the heads and the computational cost.
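The frame-skip trick can be sketched as a thin environment wrapper (a gym-style step interface and the dummy environment are assumptions for illustration):

```python
class FrameSkip:
    """Repeat each agent action for `skip` simulator steps, summing rewards."""
    def __init__(self, env, skip=4):
        self.env, self.skip = env, skip

    def step(self, action):
        total_reward, done = 0.0, False
        for _ in range(self.skip):
            obs, reward, done, info = self.env.step(action)
            total_reward += reward
            if done:
                break
        return obs, total_reward, done, info

# Minimal dummy environment to illustrate the wrapper.
class DummyEnv:
    def __init__(self):
        self.t = 0
    def step(self, action):
        self.t += 1
        return self.t, 1.0, self.t >= 10, {}

env = FrameSkip(DummyEnv(), skip=4)
obs, r, done, _ = env.step(0.0)
print(obs, r, done)  # 4 simulator steps taken, reward 4.0, not done
```

This reduces the decision frequency of the agent without changing the underlying simulation rate.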
Figure~\ref{fig:curriculum-learning} compares learning from scratch with starting from a policy learned via curriculum learning. Each curve is averaged over 3 independent experiments. Curriculum learning shows significant improvements in both performance and stability. Inspecting the walking gaits further shows that curriculum learning produces a more natural walking gesture, as shown in Figure~\ref{fig:learned-gaits}, while learning from scratch results in crab-like walking.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{img2018/01_firework/learned-gaits.png}
\caption{Learned gaits. (a) The agent walks forward while heading at strange directions. (b) The skeleton walks naturally with small steps.}
\label{fig:learned-gaits}
\end{figure}
\subsection{Discussion}
We have shown that curriculum learning is able to acquire a sensible and stable gait. First, we train the policy model to run as fast as possible in order to learn human-like running gestures. We then diminish the target velocity gradually in later training. Finally, we fine-tune the policy in the randomized-velocity environment. Despite the state-of-the-art performance, there are still open questions. Applying reinforcement learning in large, non-linear state and action spaces remains challenging. For this problem, we show that a good starting policy is very important; however, why running as fast as possible learns a better gait is not yet fully understood. Moreover, the curriculum courses are hand-designed, and devising universal learning metrics for such problems can be very difficult. We look forward to further progress in this area.
\section{ApeX-DDPG with Reward Shaping and Parameter Space Noise}\label{s:apexddpg}
\sectionauthor{Zhen Wang, Xu Hu, Zehong Hu, Minghui Qiu, Jun Huang}
We leverage ApeX~\citep{horgan2018distributed}, an actor-critic architecture, to increase the throughput of sample generation and thus accelerate the convergence of the DDPG~\citep{lillicrap2015continuous,silver2014deterministic} algorithm with respect to wall clock time.
In this way, a competitive policy, which achieved a $9900.547$ mean reward in the final round, can be learned within three days.
We released our implementation\footnote{\url{https://github.com/joneswong/rl_stadium}} which reuses some modules from Ray~\citep{moritz2018ray}.
Based on a standard ApeX-DDPG, we exploited reward shaping and parameter space noise in our solution, both of which bring in remarkable improvements.
We will describe these tricks thoroughly in this section.
\subsection{Methods}
\subsubsection{ApeX}\label{sss:distributed-apex}
In the ApeX architecture, each actor interacts with its corresponding environment(s) and, once a batch of samples has been generated, sends the samples to the learner.
Meanwhile, each actor periodically pulls the latest model parameters from the learner.
The learner maintains collected samples in a prioritized replay buffer~\citep{schaul2015prioritized} and continuously updates model parameters based on mini-batches sampled from the buffer.
osim-rl~\citep{kidzinski2018learningtorun} takes around $0.22$ s to simulate one step, so the actor side is the bottleneck; the throughput (time steps per second) as well as the convergence speed increases significantly as we add more actors (see Fig.~\ref{fig:cmpnumactor}).
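A simplified sketch of the proportional prioritized replay maintained by the learner (O(n) sampling for clarity; practical implementations use a sum-tree, and all parameter values here are illustrative):

```python
import numpy as np

class PrioritizedReplay:
    """Proportional prioritized replay (simplified). Transitions are
    sampled with probability proportional to priority^alpha."""
    def __init__(self, capacity=10000, alpha=0.6, seed=0):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.prios = [], []
        self.rng = np.random.default_rng(seed)

    def add(self, transition, priority=1.0):
        if len(self.data) >= self.capacity:
            self.data.pop(0); self.prios.pop(0)
        self.data.append(transition)
        self.prios.append(priority)

    def sample(self, batch_size):
        p = np.asarray(self.prios) ** self.alpha
        p /= p.sum()
        idx = self.rng.choice(len(self.data), size=batch_size, p=p)
        return idx, [self.data[i] for i in idx]

buf = PrioritizedReplay()
for i in range(100):
    buf.add(("s", "a", float(i)), priority=1.0 + i)
idx, batch = buf.sample(32)
print(len(batch))  # 32
```

After each learner update, the sampled transitions' priorities would be refreshed with their new TD errors.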
\begin{figure}[ht!]
\centering
\begin{subfigure}[t]{0.49\textwidth}
\centering
\includegraphics[width=1\textwidth]{img/cmpnumactor.png}
\caption{Comparison of different numbers of actors}
\label{fig:cmpnumactor}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.49\textwidth}
\centering
\includegraphics[width=1\textwidth]{img/cmpexplorations.png}
\caption{Comparison of exploration strategies}
\label{fig:cmpexplorations}
\end{subfigure}
\caption{Effectiveness of ApeX and parameter space noise}
\label{fig:cmp}
\end{figure}
\subsubsection{Reward Shaping}\label{sss:reward-itshigh}
osim-rl favors agents that walk at a target velocity, regardless of whether the gait is natural.
Nonetheless, we assume that the optimal policy does not use an unnatural gait and thus, in order to trim the search space, we shape the original reward for encouraging our agent to walk in a natural way.
First, we noticed that our agent was inclined to walk with scissor legs (see Fig.~\ref{fig:scissors}).
With this gait, agents become extremely brittle when the target orientation changes substantially.
We remedy scissor legs by adding a penalty to the original reward:
\begin{equation}
p^{\text{scissors}}=[x^{\text{calcn\_l}}\sin\theta+z^{\text{calcn\_l}}\cos\theta]_{+}+[-(x^{\text{foot\_r}}\sin\theta+z^{\text{foot\_r}}\cos\theta)]_{+}
\end{equation}
where $\theta$ is the rotational position of pelvis about the $y$-axis, $[x]_{+}\triangleq\max(x,0)$, and all positions are measured in a relative way with respect to pelvis positions.
We show the geometric intuition of this penalty in Fig.~\ref{fig:pgeometric}.
Some case studies confirmed its effectiveness (see Fig.~\ref{fig:counterscissors}).
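A direct numpy transcription of the scissors penalty (the function name and toy positions are ours; with $\theta=0$ the penalty reduces to $[z^{\text{calcn\_l}}]_{+}+[-z^{\text{foot\_r}}]_{+}$):

```python
import numpy as np

def scissors_penalty(x_l, z_l, x_r, z_r, theta):
    """Penalty discouraging crossed ('scissor') legs. Positions of the
    left heel (calcn_l) and right foot are relative to the pelvis;
    theta is the pelvis rotation about the y-axis."""
    left = max(x_l * np.sin(theta) + z_l * np.cos(theta), 0.0)
    right = max(-(x_r * np.sin(theta) + z_r * np.cos(theta)), 0.0)
    return left + right

# With theta = 0, the penalty fires when the left foot's z-coordinate
# is positive or the right foot's z is negative (crossed under this
# sign convention).
print(scissors_penalty(0.0, 0.1, 0.0, -0.1, theta=0.0))   # 0.2 (crossed)
print(scissors_penalty(0.0, -0.1, 0.0, 0.1, theta=0.0))   # 0.0 (apart)
```

The rotation by $\theta$ simply expresses the foot positions in the pelvis's own heading frame before checking which side each foot is on.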
Another important observation is that, at the early stage of training, the subject follows the target velocity by walking sideways, leaving a residual between the current heading and the target direction.
This residual may accumulate each time the target velocity changes, e.g., when the subject keeps heading along the $x$-axis and consecutively encounters changes that all increment the target along the positive $z$-axis.
Intuitively, heading changes that exceed the upper bound of osim-rl (i.e., $\frac{\pi}{8}$) are intractable.
Thus, we define a penalty as below to avoid a crab walk:
\begin{equation}
p^{\text{sideways}}=1-\frac{(v_{x}\cos\theta-v_{z}\sin\theta)}{\sqrt{v_{x}^{2}+v_{z}^{2}}}
\label{eq:sideways}
\end{equation}
where $v_x,v_z$ stand for the target velocity in $x$ and $z$ axes respectively.
The RHS of \eqref{eq:sideways} is the cosine distance between the target velocity and the pelvis orientation in the $x,z$-plane.
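As a concrete check of Eq.~\eqref{eq:sideways}, a minimal numpy implementation (the function name is ours):

```python
import numpy as np

def sideways_penalty(v_x, v_z, theta):
    """One minus the cosine similarity between the target velocity
    (v_x, v_z) and the pelvis heading theta in the x,z-plane."""
    speed = np.sqrt(v_x ** 2 + v_z ** 2)
    return 1.0 - (v_x * np.cos(theta) - v_z * np.sin(theta)) / speed

print(sideways_penalty(1.25, 0.0, theta=0.0))  # 0.0: heading matches target
print(sideways_penalty(0.0, 1.25, theta=0.0))  # 1.0: walking fully sideways
```

The penalty is zero when the pelvis faces the target direction and grows toward 2 as the heading turns away from it.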
Our solution obtained a reward of $9828.722$ on the most difficult episode of the final round (i.e., episode 3), which potentially contains large heading changes.
To the best of our knowledge, the gap between this reward and our mean reward is smaller than that of many other competitors, which strongly indicates the usefulness of this penalty.
\begin{figure}[ht!]%
\centering
\begin{subfigure}[t]{0.23\textwidth}
\centering
\includegraphics[width=1\textwidth]{img/scissors.png}
\caption{Without Penalty}
\label{fig:scissors}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=1\textwidth]{img/geometric_intuition.png}
\caption{Geometric Intuition}
\label{fig:pgeometric}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.195\textwidth}
\centering
\includegraphics[width=1\textwidth]{img/counter_scissors.png}
\caption{With Penalty}
\label{fig:counterscissors}
\end{subfigure}
\caption{Eliminating scissor legs via reward shaping}
\end{figure}
\subsubsection{Parameter Space Noise}\label{sss:parameter-itshigh}
In our solution, the actors explore by either posing Ornstein--Uhlenbeck (OU) noise~\citep{lillicrap2015continuous} upon the actions or adding a Gaussian perturbation to the model parameters~\citep{plappert2017parameter}, with a fifty-fifty chance~\citep{pavlov}.
For each actor, the OU samples are multiplied by an actor-specific coefficient in analogy to taking $\epsilon$-greedy exploration with various $\epsilon$ at the same time.
On the other hand, the parameter space noise enriches the behavior policies; without it, the policies of different actors would be close to each other at any given moment.
We present the advantages of such a hybrid exploration strategy in Fig.~\ref{fig:cmpexplorations}.
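The two noise sources can be sketched as follows (all hyperparameter values are illustrative, not the ones used in training):

```python
import numpy as np

class OUNoise:
    """Ornstein-Uhlenbeck action noise; `coef` plays the role of the
    actor-specific scaling coefficient mentioned in the text."""
    def __init__(self, dim, theta=0.15, sigma=0.2, coef=1.0, seed=0):
        self.theta, self.sigma, self.coef = theta, sigma, coef
        self.state = np.zeros(dim)
        self.rng = np.random.default_rng(seed)

    def sample(self):
        dx = -self.theta * self.state + self.sigma * self.rng.normal(size=self.state.shape)
        self.state = self.state + dx
        return self.coef * self.state

def perturb_params(params, stddev, rng):
    """Gaussian parameter-space noise on a flat parameter vector."""
    return params + rng.normal(0.0, stddev, size=params.shape)

rng = np.random.default_rng(0)
noise = OUNoise(dim=19, coef=0.5)             # 19 muscles in the action space
action = np.clip(0.5 + noise.sample(), 0.0, 1.0)
params = np.zeros(5)
noisy = perturb_params(params, stddev=0.1, rng=rng)
print(action.shape)  # (19,)
```

Each actor would flip a fair coin per episode to choose between the two schemes, with its own OU coefficient held fixed.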
\subsection{Experiments and results}
We trained our model on the Alibaba Cloud PAI platform~\footnote{\url{https://www.alibabacloud.com/press-room/alibaba-cloud-announces-machine-learning-platform-pai}}.
For ApeX, we used 1 learner (x1 P100 GPU, x6 CPU) and 128 actors (x1 CPU).
For DDPG, our configuration can be found at \textit{/examples/r2\_v4dot1b.json} in our repository.
There are mainly two peculiarities that need to be clarified.
First, distinguishing from the critic network used in original DDPG, we apply a fully-connected layer (128 neurons) to the action before concatenating the action and state channels.
We argue that, in high-dimensional continuous control like osim-rl, the action also needs such a feature extraction procedure for better approximating the Q function.
Empirical evaluation confirmed our point.
Second, we noted that there are, on average, three target-velocity changes within 1000 time steps.
The time steps at which such changes occur are accidental observations to the agent.
Such ``noisy'' samples are likely to introduce oscillations, which can often be alleviated by increasing the batch size.
We made rough comparisons among batch sizes of $\{256, 512, 1024\}$, and the results support using the largest one.
\subsection{Discussion}
In addition to the posture features, we lumped a normalized time step into our state representations.
At first glance, this seems redundant, as the principles of running do not change regardless of which step the subject is at.
However, the time step feature advanced the mean reward of our solution from around $9800$ to $9900$.
We regard its contribution as variance reduction of V/Q-values.
Without this feature, the V/Q-values of the same state (i.e., posture) decrease along time step, since this challenge considers a finite horizon.
We think distributional Q-learning is an alternative, and how to combine it with deterministic policy gradient deserves further investigation.
\section{Distributed Quantile Ensemble Critic with Attention Actor}\label{s:dqec}
\sectionauthor{Sergey Kolesnikov, Oleksii Hrinchuk, Anton Pechenko}
Our method combines recent advances in off-policy deep reinforcement learning algorithms for continuous control, namely Twin Delayed Deep Deterministic Policy Gradient (TD3)~\cite{fujimoto2018addressing}, quantile value distribution approximation (QR-DQN)~\cite{dabney2017distributional}, a distributed training framework~\cite{horgan2018distributed,barth2018distributed}, and parameter space noise with LayerNorm for exploration~\cite{plappert2017parameter}. We also introduce LAMA (last, average, max, attention~\cite{bahdanau2014neural}) pooling to take several temporal observations into account effectively. The resulting algorithm scored a mean reward of $9947.096$ in the final round and took \textbf{3rd place} in the NeurIPS'18 AI for Prosthetics competition. We describe our approach in more detail below and then discuss the contributions of its various components. Full source code is available at \url{https://github.com/scitator/neurips-18-prosthetics-challenge}.
\subsection{Methods}
\subsubsection{Twin Delayed Deep Deterministic Policy Gradient (TD3)}
The TD3 algorithm is a recent improvement over DDPG which adopts the Double Q-learning technique to alleviate overestimation bias in actor-critic methods. The differences between TD3 and DDPG are threefold. Firstly, TD3 uses a pair of critics, which provides pessimistic estimates of Q-values in TD-targets (equation $10$ in~\cite{fujimoto2018addressing}).
Secondly, TD3 introduces a novel regularization strategy, target policy smoothing, which proposes to fit the value of a small area around the target action (equation $14$ in~\cite{fujimoto2018addressing}).
Thirdly, TD3 updates an actor network less frequently than a critic network (for example, one actor update for two critic updates).
In our experiments, the application of the first two modifications led to much more stable and robust learning. Updating the actor less often did not result in better performance, thus, this modification was omitted in our final model.
\subsubsection{Quantile value distribution approximation}
Distributional perspective on reinforcement learning~\cite{bellemare2017distributional} advocates for learning the true return (reward-to-go) $Z_\theta$ distribution instead of learning a value function $Q_\theta$ only.
This approach outperforms traditional value fitting methods in a number of benchmarks with both discrete~\cite{bellemare2017distributional, dabney2017distributional} and continuous~\cite{barth2018distributed} action spaces.
To parametrize the value distribution, we use the quantile approach~\cite{dabney2017distributional}, which learns $N$ variable locations and assigns a probability mass of $\frac{1}{N}$ to each of them. Combining quantile value distribution approximation with the TD3 algorithm is straightforward: first, we choose the critic network with the minimum Q-value, and second, we use its value distribution to calculate the loss function and perform an update step:
\[
i^* = \arg\min_{i=1,2} Q_{\theta_i} (\mathbf{s}_{t+1},\mathbf{a}_{t+1}),\quad \mathcal{L}_{\theta_i} = \mathcal{L}^\text{quantile} \left( Z_{\theta_i}(\mathbf{s}_t,\mathbf{a}_t),r_t+\gamma Z_{\theta_{i^*}}(\mathbf{s}_{t+1},\mathbf{a}_{t+1}) \right),
\]
where $\mathbf{s}_t$ is the state and $\mathbf{a}_t$ is the action at the step $t$.
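A numpy sketch of the two ingredients, the quantile (Huber) regression loss of QR-DQN and the pessimistic critic selection (the quantile locations below are toy values, not learned ones):

```python
import numpy as np

def quantile_huber_loss(pred, target, kappa=1.0):
    """Quantile regression (Huber) loss between two arrays of N
    quantile locations, as in QR-DQN."""
    n = len(pred)
    taus = (np.arange(n) + 0.5) / n                      # quantile midpoints
    u = target[None, :] - pred[:, None]                  # pairwise TD errors
    huber = np.where(np.abs(u) <= kappa,
                     0.5 * u ** 2,
                     kappa * (np.abs(u) - 0.5 * kappa))
    weight = np.abs(taus[:, None] - (u < 0.0).astype(float))
    return float((weight * huber).sum(axis=1).mean())

# Pessimistic critic selection: build the TD target from the critic
# whose quantile distribution has the lower mean.
z1 = np.array([0.0, 1.0, 2.0, 3.0])
z2 = np.array([0.5, 1.5, 2.5, 3.5])
z_target = [z1, z2][int(np.argmin([z1.mean(), z2.mean()]))]
loss = quantile_huber_loss(z1, z_target)
print(z_target.mean())  # 1.5: the pessimistic (lower-valued) critic
```

In training, both critics' distributions would be fit against the target built from the pessimistic one, mirroring the TD3 min-over-critics rule.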
\subsubsection{Distributed training framework}\label{sss:distributed-jolly}
We propose the asynchronous distributed training framework\footnote{\url{https://github.com/scitator/catalyst}} which consists of training algorithms (trainers), agents interacting with the environment (samplers), and a central parameter and data sharing server implemented as a Redis database. Unlike previous methods~\cite{horgan2018distributed,barth2018distributed}, which use a single learning algorithm and many data-collecting agents, we propose training several learning algorithms simultaneously with a shared replay buffer. First of all, this leads to more diverse data, as several conceptually different actors participate in the data collection process (for example, we can simultaneously train DDPG, TD3, and SAC~\cite{haarnoja2018soft}). Secondly, we can run several instances of the same algorithm with different sets of hyperparameters to accelerate the hyperparameter selection process, which may be crucial in the case of limited resources.
\subsubsection{LAMA pooling}\label{sss:concatenation}
Sometimes the information from only one observation is insufficient to determine the best action in a particular situation (especially when dealing with partially observable MDP). Thus, it is common practice to combine several successive observations and declare a state using simple concatenation~\cite{mnih2015human} or more involved recurrent architecture~\cite{recurrent2018}. We introduce LAMA which stands for last-average-max-attention pooling -- an efficient way to combine several temporal observations into a single state with soft attention~\cite{bahdanau2014neural} in its core:
\[
\mathbf{H}_t = \{\mathbf{h}_{t-k},\dots,\mathbf{h}_t\},\quad \mathbf{h}^\text{lama}=[\mathbf{h}_t,\;\text{avgpool}(\mathbf{H}_t),\;\text{maxpool}(\mathbf{H}_t),\;\text{attnpool}(\mathbf{H}_t)].
\]
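A direct numpy transcription of LAMA pooling (the attention vector $w$ would normally be learned; here it is fixed for illustration):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def lama_pool(H, w):
    """LAMA pooling: concatenate last, average, max, and soft-attention
    pooling over k temporal observations H (shape [k, d])."""
    last = H[-1]
    avg = H.mean(axis=0)
    mx = H.max(axis=0)
    attn = softmax(H @ w) @ H            # attention weights over time steps
    return np.concatenate([last, avg, mx, attn])

H = np.arange(12, dtype=float).reshape(4, 3)   # k=4 observations, d=3
state = lama_pool(H, w=np.ones(3))
print(state.shape)  # (12,): four d-dimensional poolings concatenated
```

The resulting state is four times the observation dimension, giving the actor a summary of the recent history at negligible cost.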
\subsubsection{Hybrid exploration}\label{sss:parameter-jolly}
We employ a hybrid exploration scheme which combines several heterogeneous types of exploration.
With $70\%$ probability, we add random noise from $\mathcal{N}(0,\sigma I)$ to the action produced by the actor where $\sigma$ changes linearly from $0$ to $0.3$ for different sampler instances. With $20\%$ probability we apply parameter space noise~\cite{plappert2017parameter} with adaptive noise scaling, and we do not use any exploration otherwise. The decision of which exploration scheme to choose is made at the beginning of the episode and is not changed till its end.
\subsubsection{Observation and reward shaping}\label{sss:reward-jolly}
We have changed the initial frame of reference to be relative to the pelvis by subtracting its coordinates $(x,y,z)$ from all positional variables. In order to reduce inherent variance, we have standardized input observations with the sample mean and variance of approximately $10^7$ samples collected during early stages of our experiments. We have also rescaled the time step index into a real number in $[-1,1]$ and included it in the observation vector, as recently proposed by~\cite{pardo2017time}.
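These three shaping steps can be sketched together (the array shapes, horizon, and statistics below are illustrative):

```python
import numpy as np

def shape_observation(positions, pelvis_xyz, t, horizon, mean, std):
    """Observation shaping: pelvis-relative positions, standardization
    with precomputed statistics, and a time-step index rescaled to [-1, 1]."""
    rel = (positions - pelvis_xyz - mean) / std   # pelvis-centered, standardized
    t_feat = 2.0 * t / horizon - 1.0              # step 0..horizon -> [-1, 1]
    return np.append(rel, t_feat)

obs = shape_observation(np.array([1.0, 2.0, 3.0]),
                        pelvis_xyz=np.array([1.0, 1.0, 1.0]),
                        t=500, horizon=1000,
                        mean=np.zeros(3), std=np.ones(3))
print(obs)  # [0. 1. 2. 0.]: mid-episode maps to time feature 0
```

In practice the mean and standard deviation would come from the $\sim 10^7$ collected samples rather than the placeholders used here.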
At the early stages of the competition, we noticed that the learned agent sometimes tended to cross its legs, as the simulator allowed one leg to pass through the other. We assumed that such behavior led to a suboptimal policy and excluded it by introducing an additional ``crossing legs'' penalty.
Specifically, we have computed the scalar triple product of three vectors starting at pelvis and ending at head, left toe, and right prosthetic foot, respectively, which resulted in a penalty of the following form ($\vec{r}$ is a radius vector):
\[
p^\text{crossing legs} = 10 \cdot \min \left\{(\vec{r}^\text{head}-\vec{r}^\text{pelvis}, \vec{r}^\text{left}-\vec{r}^\text{pelvis}, \vec{r}^\text{right}-\vec{r}^\text{pelvis}), 0 \right\}.
\]
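A direct numpy transcription of this penalty (the toy foot positions are ours; which configuration counts as crossed depends on the simulator's axis conventions):

```python
import numpy as np

def crossing_legs_penalty(r_head, r_pelvis, r_left, r_right):
    """Scalar-triple-product penalty: a negative signed volume of the
    pelvis-to-head, pelvis-to-left-toe, and pelvis-to-right-foot
    vectors is penalized."""
    a = r_head - r_pelvis
    b = r_left - r_pelvis
    c = r_right - r_pelvis
    triple = float(np.dot(a, np.cross(b, c)))
    return 10.0 * min(triple, 0.0)

pelvis = np.zeros(3)
head = np.array([0.1, 1.0, 0.0])
left_open = np.array([0.0, -1.0, 0.2])    # toy foot positions
right_open = np.array([0.0, -1.0, -0.2])
print(crossing_legs_penalty(head, pelvis, left_open, right_open))  # 0.0: no penalty
print(crossing_legs_penalty(head, pelvis, right_open, left_open))  # negative when the feet swap sides
```

The triple product changes sign when the feet swap sides relative to the head direction, so only the crossed configuration is penalized.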
We have also rescaled the original reward with the formula $r \leftarrow 0.1 \cdot (r - 8)$ so that the agent experiences both positive and negative rewards; without this transformation the agent always received a positive reward (in the range of $\sim[7,10]$), which slowed learning significantly.
\subsubsection{Submit tricks}\label{sss:ensemble}
In order to construct the best agent from several learned actor-critic pairs we employ a number of tricks.
\textbf{Task-specific models}. Our experiments revealed that most points are lost at the beginning of the episode (when the agent needs to accelerate from zero speed to $1.25$ m/s) and when the target velocity has a large component along the $z$-axis. Thus, we trained two additional models for these specific tasks, namely ``start'' (active during the first $50$ steps of the episode) and ``side'' (which becomes active if the $z$-component of the target velocity exceeds $1$ m/s).
\textbf{Checkpoints ensemble}. Adapting the ideas from~\cite{huang2017learning,dietterich2000ensemble,huang2017snapshot} and capitalizing on our distributed framework, we simultaneously train several instances of our algorithm with different sets of hyperparameters, and then pick the best checkpoints in accordance with validation runs on $64$ random seeds. Given an ensemble of actors and critics, each actor proposes an action which is then evaluated by all critics. After that, the action with the highest average Q-value is chosen.
\textbf{Action mixtures}. In order to extend our action search space, we also evaluate various linear combinations of the actions proposed by the actors. This trick slightly improves the resulting performance at no additional cost as all extra actions are evaluated together in a single forward pass.
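The checkpoint-ensemble selection and action mixtures can be sketched together (actors and critics are toy callables here; the number of mixtures is illustrative):

```python
import numpy as np

def ensemble_act(actors, critics, state, rng, n_mixtures=4):
    """Each actor proposes an action, random convex combinations extend
    the candidate set, and the candidate with the highest average
    Q-value across all critics is executed."""
    base = [actor(state) for actor in actors]
    candidates = list(base)
    for _ in range(n_mixtures):
        w = rng.dirichlet(np.ones(len(base)))            # convex weights
        candidates.append(sum(wi * a for wi, a in zip(w, base)))
    scores = [np.mean([critic(state, a) for critic in critics]) for a in candidates]
    return candidates[int(np.argmax(scores))]

rng = np.random.default_rng(0)
actors = [lambda s: np.array([0.2]), lambda s: np.array([0.8])]
critics = [lambda s, a: -abs(a[0] - 0.5)]   # toy critic preferring actions near 0.5
best = ensemble_act(actors, critics, state=None, rng=rng)
print(best)  # a mixture between 0.2 and 0.8, closer to 0.5 than either actor
```

In the real submission all candidates would be evaluated in a single batched forward pass of the critic networks, so the mixtures come at no extra cost.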
\subsection{Experiments and results}
During our experiments, we evaluated the training performance of different models. For the complete list of hyperparameters and their values we refer the reader to our GitHub repository.
\textbf{Model training performance}. Figure~\ref{fig:td3_vs_ddpg} shows a comparison of distributed TD3~\cite{fujimoto2018addressing} (without updating the actor less often) and distributed DDPG with categorical value distribution approximation (also known as D4PG~\cite{barth2018distributed}). As we can see, TD3 exhibits much more stable performance, which advocates for the use of two critics and for fitting the value of a small area around the target action in continuous control. Figure~\ref{fig:jolly_roger_final} shows the learning curve for the final models used in our solution. Although training to convergence takes quite a long time ($4$ days), our method exhibits remarkable sample efficiency, exceeding a score of $9900$ after just $10$ hours of training with $24$ parallel CPU samplers.
\begin{figure}[ht!]
\centering
\begin{subfigure}[t]{0.49\textwidth}
\centering
\includegraphics[width=1\textwidth]{img2018/03_jolly/td3_vs_ddpg.png}
\caption{Comparison between TD3 and DDPG with categorical value distribution approximation (also known as D4PG~\cite{barth2018distributed}).}
\label{fig:td3_vs_ddpg}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.49\textwidth}
\centering
\includegraphics[width=1\textwidth]{img2018/03_jolly/final.png}
\caption{Learning curve for final model in the original scale (gray) and its rescaled version (blue).}
\label{fig:jolly_roger_final}
\end{subfigure}
\caption{Learning curves for different approaches.}
\label{fig:jolly_roger}
\end{figure}
\textbf{Submit trick results}. Figure~\ref{fig:jolly_roger2} depicts the performance of our models with different submit tricks applied. The combination of all tricks allows us to squeeze an additional $11$ points from a single actor-critic model.
\begin{figure}[ht!]
\centering
\begin{subfigure}[t]{0.49\textwidth}
\centering
\includegraphics[width=1\textwidth]{img2018/03_jolly/jolly_boxplot.png}
\caption{Performance on local evaluation, $64$ random seeds.}
\label{fig:jolly_boxplot}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.49\textwidth}
\centering
\includegraphics[width=1\textwidth]{img2018/03_jolly/jolly_barplot.png}
\caption{Performance on the final submit, $10$ seeds.}
\label{fig:jolly_barplot}
\end{subfigure}
\caption{Performance of different submit tricks.}
\label{fig:jolly_roger2}
\end{figure}
\subsection{Discussion}
We proposed the Distributed Quantile Ensemble Critic (DQEC), an off-policy RL algorithm for continuous control, which combines a number of recent advances in deep RL. Here we briefly summarize the key features of our algorithm and discuss the open questions from NeurIPS'18 AI for Prosthetics challenge.
\textbf{Key features.} Twin Delayed Deep Deterministic Policy Gradient (TD3)~\cite{fujimoto2018addressing}, quantile value distribution approximation~\cite{dabney2017distributional}, distributed framework~\cite{barth2018distributed} with an arbitrary number of trainers and samplers, LAMA (last, average, max, attention~\cite{bahdanau2014neural}) pooling, actor-critic ensemble~\cite{huang2017learning}.
\textbf{What could we do better?} First of all, we should analyze the particular features of the problem at hand instead of working on a more general and widely applicable approach. Specifically, we discovered that our agents fail on episodes with a high target velocity component on the z-axis only three days before the deadline and no time was left to retrain our models. If we found this earlier we could have updated our pipeline to train on less imbalanced data by repeating such episodes more often.
\textbf{What to do next?} Our model comprises a number of various building blocks. Although we consider all of them important for the final performance, a careful ablation study is required to evaluate contribution of each particular component. We leave this analysis for future work.
\section{Introduction}
Recent advancements in material science and device technology have increased interest in creating prosthetics for improving human movement. Designing these devices, however, is difficult, as it is costly and time-consuming to iterate through many designs. This is further complicated by the large variability in response among individuals. One key reason is that the interactions between humans and prostheses are not well understood, which limits our ability to predict how a human will adapt his or her movement. Physics-based, biomechanical simulations are well-positioned to advance this field, as they allow many experiments to be run at low cost. Recent developments in using reinforcement learning techniques to train realistic biomechanical models will be key to increasing our understanding of the human-prosthesis interaction, helping to accelerate development of this field.
In this competition, participants were tasked with developing a controller to enable a physiologically-based human model with a prosthetic leg to move at a specified direction and speed. Participants were provided with a human musculoskeletal model and a physics-based simulation environment (OpenSim \cite{delp2007opensim,seth2018opensim}) in which they synthesized physically and physiologically accurate motion (Figure \ref{fig:running}). Entrants were scored based on how well the model moved according to the requested speed and direction of walking. We provided competitors with a parameterized training environment to help build the controllers, and competitors' scores were based on a final environment with unknown parameters.
This competition advanced and popularized an important class of reinforcement learning problems, characterized by a large set of output parameters (human muscle controls) and a comparatively small dimensionality of the input (state of a dynamic system). Our challenge attracted over 425 teams from the computer science, biomechanics, and neuroscience communities, submitting 4,575 solutions. Algorithms developed in this complex biomechanical environment generalize to other reinforcement learning settings with highly-dimensional decisions, such as robotics, multivariate decision making (corporate decisions, drug quantities), and the stock exchange.
In this introduction, we first discuss state-of-the-art research in motor control modeling and simulations as a tool for solving problems in biomechanics (Section \ref{ss:scope}). Next, we specify the details of the task and performance metrics used in the challenge (Section \ref{ss:task}). Finally, we discuss results of the challenge and provide a summary of the common strategies that teams used to be successful in the challenge (Section \ref{ss:solutions}). In the following sections, top teams describe their approaches in more detail.
\subsection{Background and scope}\label{ss:scope}
Using biomechanical simulations to analyze experimental data has led to novel insights about human-device interaction. For example, one group used simulations to study a device that decreased the force that the ankle needed to produce during hopping but, paradoxically, did not reduce energy expenditure. Simulations that included a sophisticated model of muscle-tendon dynamics revealed that this paradox occurred because the muscles were at a length that was less efficient for force production \cite{farris2014exo}. Another study used simulations of running to calculate the ideal device torques needed to reduce energy expenditure during running \cite{uchida2016device}, and insights gained from that study were used to decrease the metabolic cost in experimental studies \cite{lee2017exosuit}.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.72 \linewidth]{img2018/00_intro/prost.png}
\caption{A patient with a prosthetic leg (left); musculoskeletal simulation of a patient with a prosthetic leg as modeled in Stanford's OpenSim software (right).}
\label{fig:running}
\end{figure}
A limitation of these previous studies, however, is that they used the kinematics from the experiments as a constraint. Since assistive devices may greatly change one's kinematics, such analyses cannot predict the neural and kinematic adaptations induced by these devices on the human. Instead, we require a framework that can generate simulations of gait that adapt realistically to various perturbations. This framework, for instance, would help us understand the complex motor control and adaptations necessary to control both a healthy human and one with a prosthesis.
Recent advances in reinforcement learning, biomechanics, and neuroscience enable us to build a framework that tackles these limitations. The biomechanics community has developed models and controllers for different movement tasks, two of which are particularly relevant for this challenge. The first study developed a model and controller that could walk, turn, and avoid obstacles \cite{song2015neural}. The second study generated simulated walking patterns at a wide range of target speeds that reproduced many experimental measures, including energy expenditure \cite{ong2017walking}. These controllers, however, are limited to generating walking and running and need domain expertise to design.
Modern reinforcement learning techniques have been used recently to train more general controllers in locomotion. Controllers generated using these techniques have the advantage that, compared to the gait controllers previously described, less user input is needed to hand-tune the controllers, and they are more flexible in their ability to learn additional, novel tasks. For example, reinforcement learning has been used to train controllers for locomotion of complicated humanoid models \cite{lillicrap2015continuous,schulman2015trust}. Although these methods find solutions without domain specific knowledge, the resulting motions are not realistic, possibly because these models do not use biologically accurate actuators.
Through the challenge, we aimed to investigate if deep reinforcement learning methods can yield more realistic results with biologically accurate models of the human musculoskeletal system. We designed the challenge so that it stimulates research at the intersection of reinforcement learning, biomechanics, and neuroscience, encouraging development of methods appropriate for environments characterized by the following: 1) a high-dimensional action space, 2) a complex biological system, including delayed actuation and complex muscle-tendon interactions, 3) a need for a flexible controller for an unseen environment, and 4) an abundance of experimental data that can be leveraged to speed up the training process.
\subsection{Task}\label{ss:task}
Competitors were tasked with building a real-time controller for a simulated agent to walk or run at a requested speed and direction. The task was designed in a typical reinforcement learning setting \cite{Sutton1999} in which an agent (musculoskeletal model) interacts with an environment (physics simulator) by taking actions (muscle excitations) based on observations (a function of the internal state of the model) in order to maximize the reward.
\textbf{Agent and observations.} The simulated agent was a musculoskeletal model of a human with a prosthetic leg. The model of the agent included $19$ muscles to control $14$ degrees-of-freedom (DOF). At every iteration the agent received the current observed state, a vector consisting of $406$ values: ground reaction forces, muscle activities, muscle fiber lengths, muscle velocities, tendon forces, and positions, velocities, and accelerations of joint angles and body segments.
Compared to the original model from \cite{ong2017walking} we allowed the hip to abduct and adduct, and the pelvis to move freely in space. To control the extra hip degrees of freedom, we added a hip abductor and adductor to each leg, which added 4 muscles total. The prosthesis replaced the right tibia and foot, and we removed the 3 muscles that cross the ankle for that leg. Table \ref{tbl:action} lists all muscles in the model.
\begin{table}[h!]
\setlength\tabcolsep{0.25cm}
\setlength{\extrarowheight}{.2em}
\centering
\begin{tabular}{r l l l }
Name & Side & Description & Primary function(s)\\
\hline
abd & both & Hip abductors & Hip abduction (away from body’s vertical midline)\\
add & both & Hip adductors & Hip adduction (toward body’s vertical midline)\\
bifemsh & both & Short head of the biceps femoris & Knee flexion\\
gastroc & left & Gastrocnemius & Knee flexion and ankle extension (plantarflexion)\\
glut\_max & both & Gluteus maximus & Hip extension\\
hamstrings & both & Biarticular hamstrings & Hip extension and knee flexion\\
iliopsoas & both & Iliopsoas & Hip flexion\\
rect\_fem & both & Rectus femoris & Hip flexion and knee extension\\
soleus & left & Soleus & Ankle extension (plantarflexion)\\
tib\_ant & left & Tibialis anterior & Ankle flexion (dorsiflexion)\\
vasti & both & Vasti & Knee extension\\
\end{tabular}
\caption{A list of muscles that describe the action space in our physics environment. Note that some muscles are missing in the amputated leg (right).}
\label{tbl:action}
\end{table}
\textbf{Actions and environment dynamics.} Based on the observation vector of internal states, each participant's controller would output a vector of muscle excitations (see Table \ref{tbl:action} for the list of all muscles). The physics simulator, OpenSim, calculated muscle activations from excitations using first-order dynamics equations. Muscle activations generate movement as a function of muscle properties such as strength, and muscle states such as current length, velocity, and moment arm. An overall estimate of muscle effort was calculated using the sum of muscle activations squared, a commonly used metric in biomechanical studies \cite{crowninshield1981, thelen2003cmc}. Participants were evaluated by overall muscle effort and the distance between the requested and observed velocities.
\textbf{Reward function and evaluation.} Submissions were evaluated automatically. In Round 1, participants interacted directly with a remote environment. The overall goal of this round was to generate controls such that the model would move forward at 3 m/s. The total reward was calculated as
$$ \sum_{t=1}^{T} \left( 9 - |v_x(s_t) - 3|^2 \right), $$
where \(s_t\) is the state of the model at time \(t\), \(v_x(s)\) is the horizontal velocity vector of the pelvis in the state \(s\), and \(s_t = M(s_{t-1}, a(s_{t-1}))\), i.e. states follow the simulation given by model \(M\). $T$ is the episode termination time step, which is equal to $1000$ if the model did not fall for the full 10-second duration, or to the first time point at which the pelvis of the model falls below $0.6$ m, penalizing falls.
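As a concrete illustration, the Round 1 scoring rule can be sketched in a few lines of Python (a minimal sketch; `pelvis_vx_trace` and `pelvis_y_trace` are hypothetical per-step traces of the horizontal pelvis velocity and pelvis height, not part of the challenge API):

```python
def round1_reward(pelvis_vx_trace, pelvis_y_trace, target_vx=3.0, fall_height=0.6):
    """Accumulate the Round 1 reward: 9 - |v_x - 3|^2 per step,
    terminating early once the pelvis drops below 0.6 m (a fall)."""
    total = 0.0
    for vx, y in zip(pelvis_vx_trace, pelvis_y_trace):
        if y < fall_height:  # fall detected: episode ends, no further reward
            break
        total += 9.0 - abs(vx - target_vx) ** 2
    return total
```

A model that walks at exactly 3 m/s for the full 1000 steps would therefore score $9 \times 1000 = 9000$.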
In Round 2, in order to mitigate the risk of overfitting, participants submitted Docker containers so that they could not infer the specifics of the environment by interacting with it. The objective was to move the model according to requested speeds and directions. This objective was measured as a distance between the requested and observed velocity vector.
The requested velocity vector varied during the simulation. We commanded approximately $3$ changes in direction and speed during the simulation. More precisely, let $q_0 = 1.25$, $r_0 = 1$, and let $N_t$ be a Poisson process with $\lambda = 200$. We define
\[
q_t = q_{t-1} + \mathbf{1}_{(N_t \neq N_{t-1})} u_{1,t} \text{ and }
r_t = r_{t-1} + \mathbf{1}_{(N_t \neq N_{t-1})} u_{2,t},
\]
where $\mathbf{1}_{(A)}$ is the indicator function of the event $A$ (here, a jump in the Poisson process). We define $u_{1,t} \sim \mathcal{U}([-0.5,0.5])$ and $u_{2,t} \sim \mathcal{U}([-\pi/8,\pi/8])$, where $\mathcal{U}([a,b])$ denotes the uniform distribution on the interval $[a,b]$.
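This target process can be sketched as follows (a sketch under the assumption that a jump of the Poisson process occurs with probability $1/\lambda$ per timestep; the function name and defaults are illustrative only):

```python
import math
import random

def target_velocity_process(steps=1000, jump_prob=1 / 200, q0=1.25, r0=1.0, seed=0):
    """Sketch of the requested-velocity process: at each jump of the
    Poisson process, speed q and heading r receive uniform increments."""
    rng = random.Random(seed)
    q, r = q0, r0
    trace = [(q, r)]
    for _ in range(steps - 1):
        if rng.random() < jump_prob:                      # N_t jumped at this step
            q += rng.uniform(-0.5, 0.5)                   # u_{1,t} ~ U([-0.5, 0.5])
            r += rng.uniform(-math.pi / 8, math.pi / 8)   # u_{2,t} ~ U([-pi/8, pi/8])
        trace.append((q, r))
    return trace
```

Between jumps the requested speed and heading stay constant, which matches the piecewise-constant target the competitors had to track.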
In order to keep the locomotion as natural as possible, we also added a penalty for overall muscle effort, as validated in previous work \cite{crowninshield1981, thelen2003cmc}. The final reward function took the form
\[
\sum_{t=1}^{T} \left( 10 - |v_x(s_t) - w_{t,x} |^2 - |v_z(s_t) - w_{t,z} |^2 - 0.001\sum_{i=1}^{d}a_{t,i}^2 \right),
\]
where $w_{t,x}$ and $w_{t,z}$ correspond to $q_t$ and $r_t$ expressed in Cartesian coordinates, $T$ is termination step of the episode as described previously, $a_{t,i}$ is the activation of muscle $i$ at time $t$, and $d$ is the number of muscles.
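Putting the pieces together, the per-step Round 2 reward can be sketched as (a minimal illustration; argument names are ours):

```python
def round2_step_reward(vx, vz, wx, wz, activations, effort_coef=0.001):
    """Per-step Round 2 reward: velocity tracking in x and z minus a
    small muscle-effort penalty (sum of squared activations)."""
    effort = sum(a * a for a in activations)
    return 10.0 - abs(vx - wx) ** 2 - abs(vz - wz) ** 2 - effort_coef * effort
```

Perfect velocity tracking with zero muscle activation yields the per-step maximum of 10, so a full 1000-step episode is bounded by 10,000 points, consistent with the leaderboard scores near 9,900.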
Since the environment is subject to random noise, submissions were tested over ten trials and the final score was the average from these trials.
\subsection{Solutions}\label{ss:solutions}
Our challenge attracted 425 teams who submitted 4,575 solutions. The top 50 teams from Round~1 qualified for Round~2. In Table \ref{tbl:leaderboard} we list the top teams from Round~2. Detailed descriptions from each team are given in the subsequent sections of this article. Teams that achieved 1st through 10th place described their solutions in Sections 2 through 11. Two other teams submitted their solutions as well (Sections 12 and 13).
\begin{table}[h!]
\setlength\tabcolsep{0.25cm}
\setlength{\extrarowheight}{.2em}
\centering
\begin{tabular}{r | l l l l l }
& Team & Score & \# entries & Base algorithm\\
\hline
1 & Firework & 9981 & 10 & DDPG \\
2 & NNAISENSE & 9950 & 10 & PPO \\
3 & Jolly Roger & 9947 & 10 & DDPG \\
4 & Mattias & 9939 & 10 & DDPG \\
5 & ItsHighNoonBangBangBang & 9901 & 3 & DDPG \\
6 & jbr & 9865 & 9 & DDPG \\
7 & Lance & 9853 & 4 & PPO \\
8 & AdityaBee & 9852 & 10 & DDPG \\
9 & wangzhengfei & 9813 & 10 & PPO \\
10 & Rukia & 9809 & 10 & PPO \\
\end{tabular}
\caption{Final leaderboard (Round 2).}
\label{tbl:leaderboard}
\end{table}
In this section we highlight similarities and differences in the approaches taken by the teams. Of particular note was the amount of compute resources used by the top participants. Among the top ten submissions, the highest amount of resources reported for training the top model was 130,000 CPU-hours, while the most compute-efficient solution leveraged experimental data and imitation learning (Section~\ref{s:lance}) and took only 100 CPU-hours to achieve 7th place in the challenge (CPU-hours were self-reported). Even though usage of experimental data was allowed in this challenge, most participants did not use it and focused only on reinforcement learning with a random starting point. While such methods are robust, they require very large compute resources.
While most of the teams used variations of well-established algorithms such as DDPG and PPO, each team used a combination of other strategies to improve performance. We identify some of the key strategies used by teams below.\\
\ \newline
\noindent\textbf{Leveraging the model.} Participants used various methods to encourage the model to move in a realistic way based on observing how humans walk. This yielded good results, likely due to the realistic underlying physics and biomechanical models. Specific strategies to leverage the model include the following:
\begin{itemize}
\item Reward shaping: Participants modified the reward used during training so that learning progressed faster while the policy still improved with respect to the original objective (see, for example, Sections \ref{sss:reward-jolly}, \ref{sss:reward-mattias}, or \ref{sss:reward-itshigh}).
\item Feature engineering: Some of the information in the state vector might add little value to the controller, while other information can give a stronger signal if a non-linear mapping based on expert knowledge is applied first (see, for example, Sections \ref{sss:reward-jolly}, \ref{sss:observation-mattias}, or \ref{sss:observation-jbr}). Interestingly, one team achieved a high score without feature engineering (Section \ref{sss:large-space}).
\item Human-in-the-loop optimization: Some teams first trained a batch of agents, then hand picked a few agents that performed well for further training (Section \ref{sss:human-in-the-loop}).
\item Imitation learning: One solution used experimental data to quickly find an initial solution and to guide the controller towards typical human movement patterns. This resulted in training that was quicker by a few orders of magnitude (Section \ref{s:lance}).
\end{itemize}
\ \newline
\noindent\textbf{Speeding up exploration.} In the early phase of training, participants reduced the search space or modified the environment to speed up exploration using the following techniques:
\begin{itemize}
\item Frameskip: Instead of sending signals every $1/100$ of a second (i.e., each frame), participants repeated the same control for several frames at a time (e.g., 5 frames). Most of the teams used some variation of this technique (see, for example, Section \ref{sss:observation-mattias}).
\item Sample-efficient algorithms: All of the top teams used algorithms that are known to be sample-efficient, such as PPO and DDPG.
\item Exploration noise: Two main exploration strategies involved adding Gaussian or Ornstein--Uhlenbeck noise to actions (see Section \ref{sss:parameter-jolly}) or parameter noise in the policy (see Section \ref{s:nnaisense} or \ref{sss:parameter-itshigh}).
\item Binary actions: Some participants only used muscle excitations of exactly 0 or 1 instead of values in the interval $[0,1]$ (``bang-bang'' control) to reduce the search space (Section \ref{sss:large-space}).
\item Time horizon correction: An abrupt end of the episode due to a time limit can potentially mislead the agent. To correct for this effect, some teams used an estimate of the value behind the horizon from the value function (see Section \ref{sss:time-horizon}).
\item Concatenating observations: In order to embed history in the observation, some teams concatenated consecutive observations before feeding them to the policy (Section \ref{sss:concatenation}).
\item Curriculum learning: Since learning the entire task from scratch is difficult, it might be advantageous to learn low-level tasks first (e.g., bending the knee) and then high-level tasks (e.g., coordinating muscles to swing a leg) (Section \ref{ss:methodology}).
\item Transfer learning: One can consider walking at different speeds as different subtasks of the challenge. These subtasks may share control structure, so a model trained for walking at 1.25 m/s may be retrained for walking at 1.5 m/s, and so on (Section \ref{ss:methodology}).
\end{itemize}
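The frameskip technique from the list above can be sketched as a simple environment wrapper (a minimal sketch assuming a gym-style `step(action) -> (obs, reward, done, info)` interface; this is not any particular team's implementation):

```python
class FrameskipWrapper:
    """Repeat each action for `skip` environment steps and accumulate
    the reward, reducing the effective decision frequency."""
    def __init__(self, env, skip=5):
        self.env, self.skip = env, skip

    def step(self, action):
        total_reward, obs, done, info = 0.0, None, False, {}
        for _ in range(self.skip):
            obs, reward, done, info = self.env.step(action)
            total_reward += reward
            if done:  # stop repeating if the episode ends mid-skip
                break
        return obs, total_reward, done, info
```

Because the policy now makes one decision per five simulator frames, the effective horizon (and thus the exploration problem) shrinks by the same factor.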
\ \newline
\noindent\textbf{Speeding up simulations.} Physics simulations of muscles and ground reaction forces are computationally intensive. Participants used the following techniques to mitigate this issue:
\begin{itemize}
\item Parallelization: Participants ran agents on multiple CPUs. A master node, typically with a GPU, collected these experiences and updated the policy weights. This strategy was indispensable for success and was used by all teams (see, for example, Sections \ref{sss:distributed-jolly}, \ref{sss:distributed-apex}, \ref{sss:distributed-jbr}, or \ref{sss:distributed-shawk}).
\item Reduced accuracy: In OpenSim, the accuracy of the integrator is parameterized and can be manually set before the simulation. In the early stage of training, participants reduced accuracy to speed up simulations to train their models more quickly. Later, they fine-tuned the model by switching the accuracy to the same one used for the competition \cite{kidzinski2018learning}.
\end{itemize}
\ \newline
\noindent\textbf{Fine-tuning.} A common statistical technique for increasing the accuracy of models is to output a weighted sum of multiple predictions. This technique also applies to policies in reinforcement learning, and many teams used some variation of this approach: ensemble of different checkpoints of models (Section \ref{sss:ensemble}), training multiple agents simultaneously (Section \ref{s:mattias}), or training agents with different seeds (Section \ref{s:jbr}).\\
\ \newline
While this list covers many of the commonly used strategies, a more detailed discussion of each team's approach is detailed in the following sections.
\section{Proximal Policy Optimization with improvements}\label{s:wangzhengfei}
\sectionauthor{Zhengfei Wang, Penghui Qi, Zeyang Yu, Peng Peng, Quan Yuan, Wenxin Li}
We apply Proximal Policy Optimization (PPO) \cite{schulman2017proximal} in the
NeurIPS 2018: AI for Prosthetics Challenge. To improve performance further,
we propose several improvements, including reward shaping, feature engineering,
and clipped expectation. Our team placed 9th in the competition.
\subsection{Methods}
\subsubsection{Reward Shaping}
After substantial experimentation with various combinations of observations and rewards,
we found it hard to train the model successfully with a single reward.
As in real human walking, we divide the whole walking procedure into
phases and describe each phase with its own reward function. We call these reward functions
courses, and our model is trained course by course. Details about the courses
are shown in Table~\ref{tb:wangzhengfei-courses}.
\begin{table}[h]
\centering
\caption{Courses for walking in AI for Prosthetics Challenge.}
\label{tb:wangzhengfei-courses}
\begin{tabular}{c | c | c}
\hline
Courses & Reward Function Changes & Intuition \\
\hline
Penalty & lean back and low pelvis height & typical falling pattern, avoid early termination \\
Init & pelvis velocity and survival reward & motivate agent to extend his leg to move forward \\
Cycle & velocity of two feet & move both legs in turns \\
Stable & distance between feet (minus, as punishment) & avoid too much distance between two feet \\
Finetune & official evaluation (replace pelvis velocity) & adapt requested velocity for competition \\
\hline
\end{tabular}
\end{table}
Near the end of the competition, we proposed a new reward function based on
the exponential function, shown below:
\begin{equation}
r_t = e^{-|v_x(t) - tv_x(t)|} + e^{-|v_z(t) - tv_z(t)|}
\label{eq:wangzhengfei-reward}
\end{equation}
where $v(t)$ and $tv(t)$ denote the current velocity and the target (requested)
velocity at step $t$. This function is smoother and provides a larger gradient
when the distance between the current and requested velocities is small.
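The shaping idea can be sketched as follows (an illustrative sketch; note the negative exponents, which are required for the reward to peak when the current and requested velocities match):

```python
import math

def exp_reward(vx, vz, tvx, tvz):
    """Exponential reward shaping: maximal (equal to 2) when both
    velocity components match the target, with a steeper slope near
    the target than a quadratic penalty provides."""
    return math.exp(-abs(vx - tvx)) + math.exp(-abs(vz - tvz))
```

Near the target, the derivative of $e^{-|d|}$ with respect to the deviation $d$ has magnitude close to 1, whereas the quadratic penalty's gradient $2|d|$ vanishes, which is what gives the agent a stronger signal to close the last gap.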
\subsubsection{Clipped Expectation}
The requested velocity introduces stochasticity to the environment, and the agent
always tries to adapt to it to obtain a higher reward. However, we found that
when the difference between the current velocity and the requested velocity is
large enough, the agent becomes unstable and performs worse. The episode may
also terminate early, resulting in a score loss. To handle this problem,
we manually set a threshold $Th$ for the agent whose current velocity is
$v$: we clip the requested velocity into the range $[v - Th, v + Th]$ before
passing it to the agent.
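The clipping step can be sketched as (a minimal illustration; names are ours):

```python
def clip_requested_velocity(v_current, v_requested, threshold):
    """Clip the requested velocity into [v - Th, v + Th] before the
    agent sees it, so large velocity gaps do not destabilize the policy."""
    lo, hi = v_current - threshold, v_current + threshold
    return max(lo, min(hi, v_requested))
```

With the thresholds reported below (0.3 for the x axis and 0.15 for the z axis), a sudden jump in the requested velocity is revealed to the agent gradually, one clipped step at a time.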
\subsection{Experiments and results}
\subsubsection{Baseline Implementation}
We use PPO as our baseline. To describe the state of the agent, we apply
feature engineering based on last year's solutions. To stress the importance
of the requested velocity, we duplicate it twice in the state.
To improve the performance of parallel computing, we replace the default
subprocess-based sampling environments with Ray \cite{moritz2018ray}. This
makes our PPO scalable across servers and clusters. Furthermore, inspired by
the Cannikin Law, we launch extra sampling environments to speed up data collection.
We have open-sourced the code
\footnote{\url{https://github.com/wangzhengfei0730/NIPS2018-AIforProsthetics}}
and details about the states and parallel computing can be found online.
\subsubsection{Clipped Expectation}
During the competition's Round 2 evaluation, there was one episode in which our model
consistently failed to walk for the complete episode. Our best model without
clipped expectation reached about 800 steps and scored fewer than 8000 points.
We set thresholds of 0.3 for the x axis and 0.15 for the z axis. This modification
helped our model complete that episode and improved its score by nearly 1500 points.
\subsubsection{Slow Start}
We plotted the velocity distribution along the episode in Round 1 for
analysis; Round 1's requested velocity can be regarded as
$[3.0, 0.0, 0.0]$. As the plot in Fig.~\ref{fig:wangzhengfei-vd} shows, our model's
velocity during the first 50 steps is extremely slow, and even negative at times. We tried
several methods to fix this problem during both Rounds 1 and 2. We suspect
it may be a kind of overfitting and eventually gave up on trying to fix it.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.7\textwidth]{img2018/09_wangzhengfei/v_distribution}
\end{center}
\caption{Velocity distribution in round 1.}
\label{fig:wangzhengfei-vd}
\end{figure}
\subsection{Discussion}
We apply PPO to solve the AI for Prosthetics Challenge and implement several
modifications to improve its performance further. As in
real human walking, we divide the whole task into courses and train the
agent course by course. We also propose several minor improvements
to speed up training and adapt better to the requested velocity.
Our model moves very slowly during the first steps, which results
in a loss of nearly 100 points. However, we could not afford to retrain a new
model and gave up on fixing this issue. In addition, our clipped expectation
may hurt performance when clipping is unnecessary. Both issues could have
been improved to some degree.
\section{Bang-Bang Control with Interactive Exploration to Walk with a Prosthetic}\label{s:nnaisense}
\sectionauthor{Wojciech Jaśkowski, Garrett Andersen, Odd Rune Lykkebø, Nihat Engin Toklu, Pranav Shyam, Rupesh Kumar Srivastava}
Following the success \cite{jaskowski2018rltorunfast} in the NeurIPS 17 ``Learning to Run'' challenge, the NNAISENSE Intelligent Automation team used a similar approach for the new NeurIPS 18 ``AI for Prosthetics'' contest. The algorithm used Proximal Policy Optimization to learn a ``bang-bang'' control policy, combined with human-assisted behavioural exploration. Other important techniques were i) a large number of features, ii) time-horizon correction, and iii) parameter noise. Although the approach required a huge number of samples for fine-tuning the policy, the learning process was robust, which led NNAISENSE to win Round 1 and to place second in Round 2 of the competition.
\subsection{Methods}
\subsubsection{Policy Representation}\label{sss:large-space}
A stochastic policy was used. Both the policy function $\pi_\theta$, and the value function,
$V_\phi$, were implemented using feed-forward neural networks with two
hidden layers, with $256$ $\tanh$ units each.
The network input consisted of all $406$ features provided by the environment. The joint positions (the $x$ and $z$ coordinates) were made relative to the position of the pelvis. In addition, the coordinate system was rotated around the vertical axis to zero out the $z$ component of the target velocity vector. All the features were standardized with a running mean and variance.
Instead of the typical Gaussian policy, which gives samples in $[0,1]^d$, our network outputs a Bernoulli policy, which gives samples from $\{0,1\}^d$. Previously \cite{jaskowski2018rltorunfast}, it was found that restricting control in this way leads to better results, presumably due to reducing the policy space, more efficient exploration, and biasing the policy toward action sequences that are likely to activate the muscles enough to actually generate movement.
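Sampling from such a Bernoulli (``bang-bang'') policy can be sketched as follows (an illustrative sketch; the sigmoid-over-logits parameterization is an assumption made for the example, not necessarily NNAISENSE's exact implementation):

```python
import math
import random

def sample_bang_bang(logits, rng):
    """Sample a binary muscle-excitation vector: a sigmoid turns each
    network logit into a Bernoulli probability, then each muscle is
    switched either fully on (1) or fully off (0)."""
    probs = [1.0 / (1.0 + math.exp(-z)) for z in logits]
    return [1 if rng.random() < p else 0 for p in probs]
```

Compared with a continuous policy over $[0,1]^d$, the search space per decision shrinks from a $d$-dimensional hypercube to its $2^d$ corners, and every sampled action is strong enough to actually move a muscle.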
To further improve exploration and reduce the search space, in the first part of training, each action was executed for $4$ consecutive environment steps.
Our policy network utilized parameter noise \cite{fortunato2017noisy}, where the network weights are perturbed by Gaussian noise. Parameter noise is implemented slightly differently for on-policy and off-policy methods; interestingly, we found that our on-policy method benefited most from using the off-policy version of parameter noise.
\subsubsection{Policy Training}\label{sss:time-horizon}
The policy parameters $\theta$, $\phi$, and $\psi$ were learned with Proximal Policy Optimization (PPO, \cite{ppo}) with the Generalized Advantage Estimator (GAE, \cite{schulman2015high}) as the target for the advantage function.
A target advantage correction was applied in order to deal with the non-stationarity of the environment caused by the timestep limit. The correction, described in detail in \cite{jaskowski2018rltorunfast}, hides from the agent the termination of the episode caused by the timestep limit by bootstrapping with the value estimate. As a result, it improves the precision of the value function, thus reducing the variance of the gradient estimator.
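The essence of the correction can be sketched as follows (a simplified one-step TD sketch; the actual method applies the idea within GAE as described in \cite{jaskowski2018rltorunfast}):

```python
def corrected_td_target(reward, value_next, done, timeout, gamma=0.99):
    """Time-horizon correction: when an episode ends only because of
    the step limit (timeout), bootstrap with the value estimate instead
    of treating the final state as truly terminal."""
    if done and not timeout:
        return reward                    # true termination (e.g., a fall)
    return reward + gamma * value_next   # ongoing, or merely cut off by the limit
```

Without this correction, states near the 1000-step horizon look artificially unrewarding, which biases the value function and increases the variance of the policy gradient.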
\subsubsection{Training Regime}\label{sss:human-in-the-loop}
The methodology applied to the NeurIPS competition task consisted of three stages: i) global initialization, ii) policy refinement, and iii) policy fine-tuning.
PPO is susceptible to local minima. Being an on-policy algorithm, each iteration improves the policy only slightly. As a result, it is unlikely to make large behavioural changes to the agent. Once the agent starts to exhibit a certain gait, PPO is unable to switch to a completely different way of walking later. To alleviate this problem, during the global initialization stage, $50$ runs were executed in parallel. After around $1000$ iterations, two gaits were selected, based on their performance and behavioural dissimilarity, to be improved in the subsequent stages.
The second stage, policy refinement, involved a larger number of samples per run and lasted until convergence was observed. Afterwards, the steps-per-action parameter was reduced to the default $1$.
In the final stage, policy fine-tuning, all the exploration incentives were eventually turned off and the policy was specialized into two sub-policies, one for each task mode: i) \textit{ready-set-go}, used for the first $100$ timesteps; and ii) \textit{normal operation}, used for the rest of the episode. For the last day of training, $576$ parallel workers were used (see Table~\ref{tab:nnhyperparams} for details).
\begin{table}[ht]
\centering
\caption{The hyper-parameters and statistics for the subsequent learning stages.}
\label{tab:nnhyperparams}
\setlength\tabcolsep{0.25cm}
\setlength{\extrarowheight}{.1em}
\begin{tabular}{l c cc cc c}
\toprule
& \multicolumn{5}{c}{\textbf{Training stage}}
& \textbf{} \\
\cmidrule(l){2-6}
& \textbf{Global initialization} &
\multicolumn{2}{c}{\textbf{Refinement}} &
\multicolumn{2}{c}{\textbf{Specialization}} &
\textbf{Total} \\
\cmidrule(l){2-6}
\textbf{} & \textbf{} & \textbf{I} & \textbf{II} & \textbf{I} & \textbf{II} & \\
\cmidrule(l){1-7}
Parallel runs & 50 & 2 & 2 & 2 & 2 & \\
Iterations $[\times1000]$ & 1.2 & 1.3 & 1.7 & 1.6 & 0.3 & 6.2 \\
Episodes $[\times10^{5}]$ & 0.5 & 7.8 & 2.6 & 2.8 & 3.6 & 17.3 \\
Steps $[\times10^{8}]$ & 0.4 & 7.8 & 2.6 & 2.8 & 3.6 & 17.1 \\
Training time $[\text{days}]$ & 3.8 & 3.9 & 2.7 & 3.5 & 0.9 & 14.5 \\
Resources used $[\text{CPU hours}\times1000]$ & 36 & 27 & 16 & 24 & 26 & 130 \\
\cmidrule(l){1-7}
Workers & 8 & 144 & 144 & 144 & 576 & \\
Steps per worker & 1,024 & 1,024 & 1,024 & 1,024 & 2,048 & \\
Steps per action & 4 & 4 & 1 & 1 & 1 & \\
Entropy Coeff & 0.01 & 0.01 & 0.01 & 0.01 & 0 & \\
Parameter noise & yes & yes & yes & yes & no & \\
Policy networks & 1 & 1 & 1 & 2 & 2 & \\
\cmidrule(l){1-7}
PPO learning rate & \multicolumn{6}{c}{$3\times10^{-3}$} \\
PPO clip parameter ($\epsilon$) & \multicolumn{6}{c}{0.2} \\
PPO batch size & \multicolumn{6}{c}{256} \\
PPO optimizations per epoch & \multicolumn{6}{c}{10} \\
PPO input normalization clip & \multicolumn{6}{c}{5 SD} \\
PPO entropy coefficient & \multicolumn{6}{c}{0} \\
GAE $\lambda$ & \multicolumn{6}{c}{0.9} \\
GAE $\gamma$ & \multicolumn{6}{c}{0.99} \\
\cmidrule(l){1-7}
\textbf{Final avg. score during training} & \textbf{9796} & \textbf{9922} & \textbf{9941} & \textbf{9943} & \textbf{9952} & \\
\bottomrule
\end{tabular}
\end{table}
\subsubsection{Results}
The described training method resulted in two distinct gaits of similar average performance (see Table~\ref{tab:nnresults}). Both gaits have interesting characteristics. The slightly better policy (the ``Dancer''\footnote{\url{https://youtu.be/ckPSJYLAWy0}}), starts forward with his prosthetic leg, then turns around during his first few steps, and finally continues his walk backward from this time on. It seems the training found that walking backwards was a more efficient way to deal with the changes of the target velocity vector.
The other policy (the ``Jumper''\footnote{\url{https://youtu.be/mw9cVvaM0vQ}}) starts by lifting his prosthetic leg; he then hops on the healthy leg for the whole episode, using the prosthetic leg to keep balance. This is definitely not the most natural way of walking, but keeping balance with the prosthetic leg looks surprisingly natural.
The training curve for the ``Dancer'' policy is shown in Fig.~\ref{fig:nntraining}.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{img2018/nntraining.pdf}
\caption{The training curve for the ``Dancer'' policy. The curve in the top figure is noisier since it shows the average score for all training episodes whereas the bottom one shows the score only for the completed episodes. Note that the performance during training is always worse than the performance during testing when the action noise is turned off.}
\label{fig:nntraining}
\end{figure}
\begin{table}[ht]
\centering
\caption{The performance of the trained policies}
\label{tab:nnresults}
\setlength\tabcolsep{0.15cm}
\setlength{\extrarowheight}{.1em}
\begin{tabular}{lr@{.}lr@{.}lr@{.}lr@{.} lr@{.}l}
\toprule
\multirow{2}{*}{\bf Policy} & \multicolumn{4}{c}{\bf Score} & \multicolumn{6}{c}{\bf Avg. penalties} \\
\cmidrule(l){2-11}
& \multicolumn{2}{l}{\bf mean} & \multicolumn{2}{l}{\bf stdev} & \multicolumn{2}{l}{\bf velocity x} & \multicolumn{2}{l}{\bf velocity y} & \multicolumn{2}{l}{\bf activation} \\
\cmidrule(l){1-11}
Dancer & 9954&5 & 6&7 & 30&9 & 7&6 & 6&9 \\
Jumper & 9949&0 & 15&1 & 32&3 & 12&9 & 5&8 \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Discussion}
The policy gradient-based method used by NNAISENSE to obtain efficient policies for this task is not particularly sample-efficient. However, it requires only a little tuning and robustly leads to well-performing policies, as evidenced in the ``Learning to Run'' 2017 and ``AI for Prosthetics'' 2018 NeurIPS Challenges.
The peculiarity of the gaits obtained in this work is probably due to the unnatural character of the reward function used for this task. In the real world, humans are not rewarded for keeping a constant speed. Designing a reward function that leads to the desired behaviour is known to be a challenging task in control and reinforcement learning.
\section{Accelerated DDPG with synthetic goals}\label{s:aditya}
\sectionauthor{Aditya Bhatt}
A Deep Deterministic Policy Gradient (DDPG)\citep{lillicrap2015continuous} agent is trained using an algorithmic trick to improve learning speed, and with the Clipped Double-Q modification from TD3 \cite{fujimoto2018addressing}. Transitions sampled from the experience buffer are modified with randomly generated goal velocities to improve generalization. Due to the extreme slowness of the simulator, data is gathered by many worker agents in parallel simulator processes, while training happens on a single core. With very few task-specific adjustments, the trained agent ultimately gets 8th place in the NeurIPS 2018 AI for Prosthetics challenge.
\subsection{Methods}
\subsubsection{Faster experience gathering}
Because running the simulator is very slow, between 16 and 28 CPU cores (depending on the machines available) were used to run parallel instances of the simulator for the purpose of gathering data. The transitions were encoded as \((s,a,r,s')\) tuples with values corresponding to the state, the action, the reward, and the next state. Transitions were sent to a single training thread's experience dataset. The simulator was configured to use a lower precision, which helped speed up execution. This risks producing biased data, which could hurt agent performance, but no significant adverse impact was observed.
\subsubsection{Pre-processing}
The only augmentation done to the sensed observations was to change all absolute body and joint positions to be relative to the pelvis's 3-dimensional coordinates. As a form of prior knowledge, the relative coordinates ensure that the agent does not spend training time trying to learn that the reward is invariant to any absolute position. The components of the 353-dimensional\footnote{Each observation provided by the simulator was a python dict, so it had to be flattened into an array of floats for the agent's consumption. This flattening was done using a function from the helper library \cite{seungjaeryanlee}. Due to an accident in using this code, some of the coordinates were replicated several times, thus the actual vector size used in the training is 417.} state vector had very diverse numerical scales and ranges; however, no problem-specific adjustment was done to these.
\subsubsection{Algorithm}
Because the simulator was slow to run, on-policy algorithms like PPO were impractical on a limited computational budget. This necessitated using an off-policy algorithm like DDPG. The same neural network architecture as in the original DDPG paper was used, with the two hidden layers widened to $1024$ units. Batch normalization was applied to all layers, including the inputs; this ensured that no manual tuning of observation scales was needed.
DDPG and its variants can be very sample-efficient; however, their sample complexity is raised by artificially slowing down learning with target networks (which are considered necessary to avoid divergence). To alleviate this problem, a new stabilizing technique\footnote{Unpublished ongoing research by the author, to be made public soon.} was used to accelerate the convergence of off-policy TD learning. This resulted in much faster learning than is usual.
A problem with doing policy improvement using Q functions is that of Q-value overestimation, which can cause frequent collapses in learning curves and sub-optimal policies. The Clipped Double Q technique was used to avoid this problem; this produced an \textit{underestimation} bias in the twin critics, but gave almost monotonically improving agent performance.
\subsubsection{Exploration}
Gaussian action noise caused very little displacement in position. It was also not possible to produce the desired amount of knee-bending in a standing skeleton with DDPG's prescribed Ornstein--Uhlenbeck noise, so another temporally correlated \textit{sticky Gaussian} action noise scheme was tried instead: a noise vector was sampled from $\mathcal{N}(0, 0.2)$ and added to the actions for a duration of 15 timesteps.
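The sticky Gaussian scheme can be sketched as (a minimal illustration; the class and parameter names are ours):

```python
import random

class StickyGaussianNoise:
    """Temporally correlated exploration: resample a Gaussian noise
    vector every `hold` timesteps and add it to the actions in between."""
    def __init__(self, dim, sigma=0.2, hold=15, seed=0):
        self.dim, self.sigma, self.hold = dim, sigma, hold
        self.rng = random.Random(seed)
        self.t, self.noise = 0, [0.0] * dim

    def __call__(self, action):
        if self.t % self.hold == 0:  # resample once per 15-step window
            self.noise = [self.rng.gauss(0.0, self.sigma) for _ in range(self.dim)]
        self.t += 1
        return [a + n for a, n in zip(action, self.noise)]
```

Holding the perturbation fixed for 15 steps lets the same push act long enough to visibly bend a joint, which per-step Gaussian noise cannot achieve.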
\subsubsection{Reward shaping}
Aside from a small action penalty, the original reward function uses the deviation between the current and target velocities: $$r=10 - ||v_{current} - v_{target}||^2.$$ Because this reward penalizes small deviations only mildly, it lets the agent collect most of the reward while remaining standing in the same spot. To provide a stronger learning signal, an alternative reward function was employed: $$r=\frac{10}{1+||v_{current} - v_{target}||^2}.$$ This change produces a stronger slope in the reward, with a sharp peak at the desired velocity, ideally encouraging the agent to aim for the exact target velocity and not settle for nearby values.
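The difference between the two rewards can be seen in a short sketch (here `dev_sq` stands for the squared deviation $||v_{current} - v_{target}||^2$):

```python
def original_reward(dev_sq):
    """Challenge reward: flat near the target velocity."""
    return 10.0 - dev_sq

def shaped_reward(dev_sq):
    """Alternative reward: sharp peak at the target velocity."""
    return 10.0 / (1.0 + dev_sq)
```

At a small deviation of `dev_sq = 0.1` the original reward loses only 0.1 points, while the shaped reward loses roughly 0.9, which is exactly the stronger slope the text describes.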
\subsubsection{Synthetic goals}
Despite having a shaped reward, the task is similar in nature to goal-conditioned RL problems. For any transition triplet $(s,a,s')$, the reward $r$ can be directly inferred using the aforementioned function, because $s$ contains $v_{target}$ and $s'$ contains $v_{current}$. Then, in a similar spirit to Hindsight Experience Replay \cite{Andrychowicz2017HindsightER}, whenever a batch of transitions is sampled, a batch of synthetic $v_{target}$ vectors with entries from $\mathcal{U}(-2.5, 2.5)$ is transplanted into $s$. The new $r$ is easily computed, and the actual training therefore happens on these synthetic transitions.
The point of synthetic goals is that the agent can reuse knowledge of any previously attempted walking gait by easily predicting correct returns for completely different hypothetical goal velocities.
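The relabeling step can be sketched as below. The array layout, the index arguments, and the use of the shaped reward are assumptions for illustration; the actual state encoding in the competition environment may differ.

```python
import numpy as np

def relabel_batch(s, s2, tgt_idx, cur_idx, rng=None):
    """HER-like relabeling: transplant synthetic target velocities into a
    sampled batch and recompute rewards.

    `s`/`s2` are (batch, state_dim) arrays for states and next states;
    `tgt_idx` and `cur_idx` index the target- and current-velocity
    entries of the state vector."""
    rng = rng or np.random.default_rng()
    v_tgt = rng.uniform(-2.5, 2.5, size=(len(s), len(tgt_idx)))
    s = s.copy()
    s2 = s2.copy()
    s[:, tgt_idx] = v_tgt   # overwrite the goal in s and s2 so the goal
    s2[:, tgt_idx] = v_tgt  # is consistent across the whole transition
    dev2 = np.sum((s2[:, cur_idx] - v_tgt) ** 2, axis=1)
    r = 10.0 / (1.0 + dev2)  # reward recomputed from the shaped function
    return s, s2, r
```

Since the reward is a known function of the state, no environment interaction is needed to relabel: every stored transition becomes a training example for arbitrarily many hypothetical goals.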
\subsection{Experiments}
In terms of hyperparameters, there was no frame-skipping. The optimizer was RMSprop with learning rates of $10^{-4}$ and $10^{-3}$ for the actor and critic respectively. The critic used a weight decay of strength $10^{-2}$. The batch size was 256 for the main training runs (1024 for the fine-tuning run described later). A discount factor of 0.98 was used.
A single training run contained 2 million simulator steps, with the same number of transitions stored in the experience replay memory.
The wall-clock time for a single training run was approximately 18 hours, by which point the total reward would have stabilized around 9850.
\begin{figure}[!h]
\centering
\includegraphics[width=0.8\textwidth]{img2018/08_aditya/nips-contest.png}
\caption{An example training run (with smoothing).}
\label{fig:setup-penalty}
\end{figure}
Shortly before the competition's deadline, it was noticed that the most difficult part of the episode, and also the part where improvement would gain the most reward points, was the first 150 steps, when the skeleton goes from standing to walking. To this end, a separate training run was launched to fine-tune the agent on the first phase of $v_{target}^x=1.25$, with an extra coordinate for the episode timestep added to the state vector. To save time, this agent started training with a union of the experience memories of three different good training runs, in addition to its own growing memory. Before long, with a batch size of 1024, this agent performed better on average than all three of the source agents. The performance on the start phase improved, with only a minor degradation in the post-start phase.
Even better test-time performance was then extracted by using the critics, at each episode step, to choose the best action from 2000 candidates sampled from a Gaussian (of variance $0.2$) centered at the actor-produced action vector.
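This test-time procedure might look as follows. The critic interface `q_fn` and the inclusion of the actor's own action among the candidates are assumptions made for this sketch.

```python
import numpy as np

def best_of_n_action(actor_action, q_fn, n=2000, var=0.2, rng=None):
    """At test time, sample `n` candidate actions from a Gaussian (of
    variance `var`) centered at the actor's action, and let the critic
    `q_fn` pick the highest-valued one."""
    rng = rng or np.random.default_rng()
    a = np.asarray(actor_action)
    cand = a + rng.normal(0.0, np.sqrt(var), size=(n, a.size))
    cand = np.vstack([a, cand])  # keep the actor's action as a fallback
    scores = np.array([q_fn(c) for c in cand])
    return cand[np.argmax(scores)]
```

Because the actor's own action is kept among the candidates, the selected action is never worse than the raw policy output according to the critic.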
The final agent contained two network models: a start-phase fine-tuned network and a network from another good training run. This brought the average score up to 9912, with the exception of rare episode seeds with very difficult target velocities which caused the agent to fall down.
During round 2, one of the hidden seeds corresponded to a particularly tricky velocity and the agent stumbled, bringing down the 10-seed average to 9852.
\subsection{Conclusion}
Even better performance could have been attained if the start-phase fine-tuned agent had simply been trained for longer and if problematic velocities had been emphasized in the training. That said, it is encouraging that a strong combination of algorithmic ingredients can competitively solve such a complex reinforcement learning problem with very little problem-specific tailoring.
\section{Introduction}
In the ongoing quest to classify \mbox{$C^*$}-algebras of real rank zero by
algebraic invariants much progress has been made on classes of
\mbox{$C^*$}-algebras defined from a narrow class of {\em building blocks}\/
as the smallest collection, containing these, which is closed under
taking matrices, finite direct sums, and countable inductive limits.
Considering the single building block $C(S^1)$ one gets the $AT$
algebras, whereas the more extensive $AH$ class is obtained by
allowing every commutative \mbox{$C^*$}-algebra $C(Y)$ for $Y$
a finite $CW$-complex. The $AD$ algebras are derived from the building
blocks $C(S^1)$ and the dimension drop intervals $\Bbb I_n^{\sim}$, and the
$ASH$ algebras are defined allowing both all finite $CW$ complexes and
the dimension drop intervals. In the cases of classes where the
dimension of the topological spaces is allowed to vary ($AH$, $ASH$)
one must often impose the {\it slow dimension growth} condition
introduced in \cite{bbmdmr:rrilca}.
In recent years, examples have appeared to
demonstrate that ordered, graded $K$-theory is {\em not}\/ a complete
invariant for any class of stable rank one, real rank zero
\mbox{$C^*$}-algebras with slow dimension growth that extends much beyond the $AT$ class, unless one
imposes restrictions on the ideal structure of the algebras in
question. The first such example -- a pair of nonisomorphic
$AH$ algebras with slow dimension growth having isomorphic ordered, graded $K$-theory -- was found by Gong
in \cite{gg:ccrrzuet}. Subsequently, Elliott-Gong-Su in
\cite[2.19]{egs:ccrrzrlsdt} provided similar examples involving algebras which
were simultaneously $AH$ and $AD$, and
D{\u{a}}d{\u{a}}rlat-Loring (\cite{mdtal:ccomk}) found a pair of $AD$ algebras of real rank
zero of which one was $AH$ with slow dimension growth and the other
was not (cf.\ \DG{10.21}).
These examples extinguished the hope, lit by results of Zhang, that
many stable rank one,
real rank zero \mbox{$C^*$}-algebras would be classified by this invariant,
regardless of
ideals. Indeed, it was proven in \cite{sz:rdpisma}
that the ideal lattice of a \mbox{$C^*$}-algebra $A$ in this class is
reflected by the ideal lattice of $K_0(A)$, so that an order isomorphism of
$K_0$-groups preserves ideals. It was therefore believed that
by extending this order to the graded group $K_0(A)\oplus K_1(A)$, one
could achieve that an order isomorphism $(\varphi_0,\varphi_1)$ was induced
by a $*$-isomorphism respecting the pairing of ideals given by
$\varphi_0$. This turned out not to be the case in general.
Several augmented invariants have been introduced recently to remedy
this situation (\cite{se:ciabtk}, \cite{mdtal:umctkg}, \cite{mdgg:crahcrrz}), all
based on the
ingenious definition of an order structure on $K$-theory with
coefficients introduced in \cite{mdtal:ccomk}. That order structure was defined
by $K\!K$-theoretical means, but it has subsequently proven useful to
work instead with the classical order structures on certain tensor
products as suggested already in \cite[5.6]{bb:stc}.
These invariants are complete for
large classes of stably finite, real rank zero \mbox{$C^*$}-algebras which
may have arbitrary ideal lattices. It is the purpose of this note to
show how, in the case that the ideal lattices of the \mbox{$C^*$}-algebras to
be classified are {\em finite}, one may induce from an order isomorphism
of the graded $K$-groups an isomorphism of the larger
invariants, hence proving completeness of the classical invariant in
this case.
More specifically, we shall prove:
\begin{theorem}\label{finite}
The invariant
\[
[K_\star(-),K_\star(-)^+,\Sigma(-)]
\]
is complete for the class of $AD$ algebras with real rank zero and
finitely many ideals.
\end{theorem}
The first result on this form was given by Gong in \cite{gg:ccrrzuet},
and the idea of the proof given there surely may be employed in our
setting also. Conversely, with a little more work our line of proof
carries over to the invariants considered in \cite{mdgg:crahcrrz}, and hence covers
the full
case of $ASH$ algebras of real rank zero and slow dimension growth.
Rather than giving the most general classification result, it is our prime
objective to display a fundamental phenomenon that accounts
for the fact that
the generalized $K$-groups are irrelevant in the finite ideal lattice
case.
This comes out most clearly when one works with one of the subclasses
of the $ASH$ algebras which are classified by a finite number of
$K$-groups. We have chosen to work with the $AD$ algebras, which are
classified by a single short exact sequence of (ordered) $K$-groups,
and are rewarded for making this choice by a number of shortcuts along
the way. To substantiate our claims about the general case we briefly
indicate at the end of the paper what must be done here.
\section{Invariants for $AD$ algebras}
The invariant $\mathbf K(-;n)$ defined in \cite{se:ciabtk} consists of
the ordered groups $K_\star(A)$ and $\pKu{n}{A}$ (see \cite{mdtal:ccomk}) along with the maps at
the center of
the exact sequence
$$
\xymatrix{
{K_0(A)}\ar[r]^-{\times n}&{K_0(A)}\ar[r]^-{\rho_n}&{\pK{n}{A}}\ar[r]^-{\beta_n}&
{K_1(A)}\ar[r]^-{\times n}&{K_1(A)}.}
$$
There are also natural maps
$$
\kmapk{m}{n}:\pK{n}{A}\rightarrow \pK{m}{A},
$$
satisfying
\begin{eqnarray}\label{bkcoh}
\beta_m\kmapk{m}{n}&=&\tfrac{n}{(n,m)}\beta_n\\
\label{rkcoh}\kmapk{m}{n}\rho_n&=&\tfrac{m}{(n,m)}\rho_m\\
\label{kkcoh}\kmapk{k}{m}\kmapk{m}{n}&=&\tfrac{m(k,n)}{(k,m)(m,n)}\kmapk{k}{n}.
\end{eqnarray}
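As a quick sanity check (an addition, not part of the original argument), the scalar factors in these relations are mutually consistent: computing $\beta_k\kmapk{k}{m}\kmapk{m}{n}$ either via \eqref{kkcoh} followed by \eqref{bkcoh}, or by applying \eqref{bkcoh} twice, must yield the same multiple of $\beta_n$, namely $\tfrac{mn}{(k,m)(m,n)}$. A short numerical verification:

```python
from fractions import Fraction
from itertools import product
from math import gcd

# Scalar factor of beta_k . kappa_k^m . kappa_m^n computed two ways:
# via relation (3) followed by (1), and via relation (1) applied twice.
for k, m, n in product(range(1, 25), repeat=3):
    via_kk = Fraction(m * gcd(k, n), gcd(k, m) * gcd(m, n)) * Fraction(n, gcd(k, n))
    via_bk = Fraction(m, gcd(k, m)) * Fraction(n, gcd(m, n))
    assert via_kk == via_bk  # both equal mn / ((k,m)(m,n))
```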
After taking inductive limits over ${\Bbb{N}}$ ordered by
$$
n\leq m\Longleftrightarrow n\mid m
$$
we get
$$
\xymatrix{
{K_0(A)}\ar[r]^-{\operatorname{id}\otimes 1}&{K_0(A)\otimes{\Bbb{Q}}}\ar[r]^-{\rho}&{\pKz{A}}\ar[r]^-{\beta}&
{K_1(A)}\ar[r]^-{\operatorname{id}\otimes 1}&{K_1(A)\otimes{\Bbb{Q}}}.}
$$
where
$$
\pKz{A}={\displaystyle\lim_{\longrightarrow}}{(\pK{n}{A},\kmapk{kn}{n})}.
$$
The orders on $\pKu{n}{A}$ induce one on $K_0(A)\otimes{\Bbb{Q}}\oplus\pKz
{A}$, and we get an invariant $\PK{A}$. We then have
(\cite[5.3]{se:ciabtk}, \mythesis{mainGENclas}):
\begin{theorem}\label{clas}
$\PK{-}$ is a complete
invariant for the class of $AD$ algebras of real rank zero.
$\mathbf{K}(-;n)$ is a complete invariant for the subclass where $n$
annihilates the torsion of $K_1(-)$.
\end{theorem}
As these invariants include the maps $\rho,\beta$, isomorphisms of
the invariants are triples of positive isomorphisms which are also
complex homomorphisms, i.e., commute with the maps in the complex.
In the case of $\PK{-}$, we shall require that the map defined on
$K_0(-)\otimes {\Bbb{Q}}$ is the one induced by the map $\varphi_0$ on
$K_0(-)$, i.e., of the form $\varphi_0\otimes\operatorname{id}$. As a consequence of
torsion freeness, this is equivalent to requiring the maps to commute
with $\operatorname{id}\otimes -$.
\section{Ideal K\"unneth splittings}
Unsplicing the complexes discussed above, we get {\em K\"unneth
sequences}
\[
\xymatrix{
{0}\ar[r]&{K_0(A)\otimes\ZZp{n}}\ar[r]^-{\tilde{\rho}_n}&{\pK{n}
{A}}\ar[r]^-{\tilde{\beta}_n}& {K_1(A)[n]}\ar[r]& 0\\
{0}\ar[r]&{K_0(A)\otimes{\Bbb{Q}}/{\Bbb{Z}}}\ar[r]^-{\tilde{\rho}}&
{\pKz
{A}}\ar[r]^-{\tilde{\beta}}& {\operatorname{tor}(K_1(A))}\ar[r]& 0. }
\]
It is clear by injectivity of
$K_0(A)\otimes {\Bbb{Q}}/{\Bbb{Z}}$ that the second sequence splits,
and by results of B\"odigheimer (\cite{cfb:skskI}), so does the first.
In the second case, we shall refer to a splitting map
\[
\sigma:\operatorname{tor}(K_1(A))\rightarrow \pKz{A}
\]
as a {\em K\"unneth splitting}. See Remark \ref{ASH} for what should
be understood as a K\"unneth splitting in the first case.
Let $A$ be an $AD$ algebra of real rank zero and let $I$ be an ideal
of $A$. It is well known (\cite{mdtal:ecrrzc}) that both $I$ and $A/I$
are $AD$ algebras of real rank zero and that the six term exact
sequence in $K$-theory then becomes \[ \xymatrix{
0\longrightarrow{K_\star(I)}\longrightarrow{K_\star(A)}\longrightarrow{K_\star(A/I)}\longrightarrow 0} \] Furthermore,
both sequences are {\em pure exact} by \cite[8,11]{lgbmd:ecq}, in the
case of $K_0$, simply because all groups are torsion free. We get that
the vertical maps in
\begin{flushleft}
\begin{picture}(0,90)
\put(00,70){$0$}
\put(10,70){$\vector(1,0){30}$}
\put(45,70){${K_0(I)\otimes{\Bbb{Q}}/{\Bbb{Z}}}$}
\put(110,70){$\vector(1,0){30}$}
\put(120,75){$\tilde{\rho}$}
\put(70,60){$\vector(0,-1){30}$}
\put(145,70){${\pKz {I}}$}
\put(200,70){$\vector(1,0){30}$}
\put(210,75){${\tilde{\beta}}$}
\put(170,60){$\vector(0,-1){30}$}
\put(240,70){${\operatorname{tor}(K_1(I))}$}
\put(300,70){$\vector(1,0){30}$}
\put(340,70){$0$}
\put(00,10){$0$}
\put(10,10){$\vector(1,0){30}$}
\put(45,10){${K_0(A)\otimes{\Bbb{Q}}/{\Bbb{Z}}}$}
\put(110,10){$\vector(1,0){30}$}
\put(120,15){${\tilde{\rho}}$}
\put(145,10){${\pKz{A}}$}
\put(200,10){$\vector(1,0){30}$}
\put(260,60){$\vector(0,-1){30}$}
\put(210,15){${\tilde{\beta}}$}
\put(240,10){${\operatorname{tor}(K_1(A))}$}
\put(300,10){$\vector(1,0){30}$}
\put(340,10){$0$}
\end{picture}
\end{flushleft}
are all
embeddings. This is a consequence of torsion freeness of $K_0(A/I)$ to
the left, and in the middle then follows from exactness. We will
identify the $K$-groups derived from $I$ with their images in the
$K$-groups derived from $A$.
We say that a complex homomorphism
$\Phi=(\varphi_0\otimes\operatorname{id},\varphi,\varphi_1)$ from $\PK{A}$ to $\PK{B}$ is {\em
ideal-preserving} when $\varphi_0
K_0(I)\subseteq K_0(J)$ implies $\Phi\PK{I}\subseteq
\PK{J}$, i.e.\
\[
\varphi\pKz{I}\subseteq \pKz{J}\qquad
\varphi_1K_1(I)\subseteq K_1(J).
\]
An {\em ideal isomorphism}\/ is an isomorphism which is
ideal-preserving and has ideal-preserving inverse. This concept
replaces a similar notion for $K\!K$-theory or $E$-theory introduced in
\cite{gg:ccrrzuet}.
The following
result, explaining the relevance of the order structures when working
with ideals, is essentially contained in \DG{4.12,9.2}. A detailed
alternative proof will appear in \cite{mdse:ccpikc}.
\begin{proposition}\label{kuidealprespos}
Let $A,B$ be real rank zero $AD$ algebras and assume that
$\Phi=(\varphi_0\otimes\operatorname{id},\varphi,\varphi_1)$ is an isomorphism of the
complexes $\PK{A}$ and $\PK{B}$. Then the following conditions are
equivalent:
\begin{itemize}
\item[(i)] $\Phi$ is an order isomorphism
\item[(ii)] $\varphi_0$ is an order isomorphism and $\Phi$ is an ideal
isomorphism.
\end{itemize}
\end{proposition}
Because $\rho,\beta$ are natural maps, they always preserve ideals;
for instance, $\rho(K_0(I))\subseteq \pK{n}{I}$. K\"unneth
splittings are not natural, and so we need the following definition.
\begin{definition}\label{idspli}
A \mbox{$C^*$}-algebra $A$ is {\em ideally split}\/ if there is a K\"unneth
splitting $\sigma$ preserving ideals, i.e., \[ \sigma
\operatorname{tor}(K_1(I))\subseteq \pKz{I} \] for all ideals $I$ of $A$.
\end{definition}
\begin{remark}\label{exx}\mbox{}
\begin{gralist}
\item A simple $AD$ algebra is ideally split.
\item An $AD$ algebra with torsion free $K_1$ is ideally split.
\item An $AD$ algebra with divisible $K_0$ is ideally split. For
then $\tilde{\beta}$ is invertible, and $\tilde{\beta}^{-1}$ provides a natural,
hence ideal-preserving, splitting map.
\item An $AD$ algebra for which every proper ideal has torsion
free $K_1$ is ideally split. For as the map from $K_1(I)$ to
$K_1(A)$ is an embedding, the image of $K_1(I)$ misses $\operatorname{tor}
K_1(A)$.
\item Not all $AD$ algebras of real rank zero are ideally split.
Consider for instance the algebra $D_p$ in \cite[3.3]{mdtal:ccomk}.
We have $K_1(D_p)=\ZZp{p}$, and \[
\pK{p}{D_p}=\left\{(\overline{a},\overline{b},\overline{c}_i)
\in\ZZp{p}\oplus\ZZp{p}\oplus \prod_{{\Bbb{Z}}}{\ZZp{p}} \left|
\begin{array}{l}
\overline{c}_i=\overline{a}\text{ as }i\rightarrow
\infty\\ \overline{c}_i=\overline{b}\text{ as }i\rightarrow
-\infty
\end{array}\right\}\right.
\] There are ideals $I_n$ with $K_1(I_n)=\ZZp{p}$ and
$\pK{p}{I_n}=\{(a,b,c_i)\mid c_i=0, |i|\leq n\}$, so an ideal
splitting must map into $\ZZp{p}\oplus\ZZp{p}\oplus 0$. As this
set intersects trivially with $\pK{p}{D_p}$, there is no ideal
splitting. (In fact, the algebra $C_p$ considered in \DL{3.3}
{\em is}\/ ideally split).
\end{gralist}
\end{remark}
The reader has probably already guessed why we are interested in
ideally split algebras. Here is the accurate statement.
\begin{proposition}\label{idealsplit}
If $A$ and $B$ are ideally split $AD$ algebras of real rank zero, and
\[ [K_\star(A),K_\star(A)^+,\Sigma(A)]\simeq [K_\star(B),K_\star(B)^+,\Sigma(B)] \]
then $A\simeq B$.
\end{proposition}
\begin{pf}
Let $A,B$ be ideally split $AD$ algebras of real rank zero, and let
$(\varphi_0,\varphi_1)$ be an order isomorphism of $K_\star(A)$ with $K_\star(B)$.
There is a diagram
\begin{flushleft}
\begin{picture}(0,90)
\put(00,70){$0$}
\put(10,70){$\vector(1,0){30}$}
\put(45,70){${K_0(A)\otimes{\Bbb{Q}}/{\Bbb{Z}}}$}
\put(110,70){$\vector(1,0){30}$}
\put(120,75){${\tilde{\rho}}$}
\put(70,60){$\vector(0,-1){30}$}
\put(75,50){${\varphi_0}$}
\put(145,70){${K_0(A,{\Bbb{Q}}/{\Bbb{Z}})}$}
\put(200,70){$\vector(1,0){30}$}
\put(200,63){$\prec\ldots\ldots$}
\put(210,75){${\tilde{\beta}}$}
\put(210,55){${\sigma}$}
\put(170,60){$\vector(0,-1){30}$}
\put(175,50){$\varphi$}
\put(240,70){${\operatorname{tor}(K_1(A))}$}
\put(260,60){$\vector(0,-1){30}$}
\put(265,50){$\varphi_1$}
\put(300,70){$\vector(1,0){30}$}
\put(340,70){$0$}
\put(00,10){$0$}
\put(10,10){$\vector(1,0){30}$}
\put(45,10){${K_0(B)\otimes{\Bbb{Q}}/{\Bbb{Z}}}$}
\put(110,10){$\vector(1,0){30}$}
\put(120,15){${\tilde{\rho}}$}
\put(145,10){${K_0(B,{\Bbb{Q}}/{\Bbb{Z}})}$}
\put(200,10){$\vector(1,0){30}$}
\put(200,05){$\prec\ldots\ldots$}
\put(210,15){${\tilde{\beta}}$}
\put(210,00){$\tau$}
\put(240,10){${\operatorname{tor}(K_1(B))}$}
\put(300,10){$\vector(1,0){30}$}
\put(340,10){$0$}
\end{picture}
\end{flushleft}
in which
$\sigma$ and $\tau$ are ideal splittings and $\varphi$ is induced by \[
\varphi(\tilde{\rho}(x)+\sigma(y))=\tilde{\rho}(\varphi_0(x))+\tau(\varphi_1(y)).
\] As when proving Proposition \ref{kuidealprespos}, one gets that since $(\varphi_0,\varphi_1)$ is an order
isomorphism, we have \[ \varphi_0K_0(I)\subseteq K_0(J)\mbox{$\Longrightarrow$}
\varphi_1K_1(I)\subseteq K_1(J) \] (and similarly for
$(\varphi_0^{-1},\varphi_1^{-1})$). We hence only need to prove that $\varphi$
(and $\varphi^{-1}$)
preserves ideals, and by its definition, this follows by the
ideal-preserving properties of $\varphi_1,\sigma,$ and $\tau$. The
triple
$(\varphi_0,\varphi,\varphi_1)$ will be an order isomorphism by
Proposition \ref{kuidealprespos}, and we may apply Theorem \ref{clas}.
\end{pf}
\begin{remark}\label{someclas}
Combining Proposition \ref{idealsplit} with Remark \ref{exx} $1^\circ$--$2^\circ$ we
regain the well-known classification results, essentially contained
in \cite{gae:ccrrz}, that $AD$ algebras which are either simple or
have torsion free $K_1$ are classified by their ordered, graded
$K$-groups. See \DL{4.2} and \cite[5.4]{se:ciabtk}.
\end{remark}
\section{Building ideal splittings}
In this section, we shall prove
\begin{proposition}\label{existidspli}
Any $AD$ algebra of real rank zero with finitely many ideals is
ideally split.
\end{proposition}
Combining this with Proposition \ref{idealsplit}, we get Theorem
\ref{finite}. Note how the arguments are predominantly algebraic.
We only use \mbox{$C^*$}-algebra results to get purity of certain exact
sequences as mentioned above, and to conclude that a certain lattice of
subgroups is distributive, owing to the fact that the lattice of
ideals of a \mbox{$C^*$}-algebra has this property. Such a result can also,
as in \cite{gg:ccrrzuet}, be obtained by appealing to the inductive
limit structure of the \mbox{$C^*$}-algebras in question.
\begin{lemma}\label{maximalcase}
Let $I$ be an ideal of a real rank zero $AD$ algebra $A$. Suppose a
splitting map
$$\xymatrix{
0\ar[r]&
{K_0(I)\otimes{\Bbb{Q}}/{\Bbb{Z}}}\ar[r]^-{\tilde{\rho}_I}&
{\pKzm{I}}\ar[r]^-{\tilde{\beta}_I}&
{\operatorname{tor}(K_1(I))}\ar[r]\ar@{..>}@/_5mm/[l]_{\tau} &0}
$$
is given. Then there is a splitting map
$$\xymatrix{
0\ar[r]&
{K_0(A)\otimes{\Bbb{Q}}/{\Bbb{Z}}}\ar[r]^-{\tilde{\rho}_A}&
{\pKzm{A}}\ar[r]^-{\tilde{\beta}_A}&
{\operatorname{tor}(K_1(A))}\ar[r]\ar@{..>}@/_5mm/[l]_{\sigma} &0}
$$
extending $\tau$.
\end{lemma}
\begin{pf}
Choose, by Zorn's lemma, a subgroup $D$ of $\pKz{A}$ maximal with
respect to the properties
$$
D\cap {\operatorname{im}}\tilde{\rho}_A=0\qquad {\operatorname{im}}\tau\subseteq D.
$$
By maximality, $({\operatorname{im}}\tilde{\rho}_A+D)/D$ is an essential subgroup in $\pKz
{A}/D$, and we have
$$
0\longrightarrow {\operatorname{im}}\tilde{\rho}_A\longrightarrow \pKz{A}/D\longrightarrow
\dfrac{\pKz{A}/D}{({\operatorname{im}}\tilde{\rho}_A+D)/D}\longrightarrow 0
$$
where injectivity to the left is a consequence of
$D\cap{\operatorname{im}}\tilde{\rho}_A=0$. This sequence splits by divisibility of
${\operatorname{im}}\tilde{\rho}_A\simeq K_0(A)\otimes{\Bbb{Q}}/{\Bbb{Z}}$, and hence the quotient must
vanish. Consequently, $D+{\operatorname{im}}\tilde{\rho}_A=\pKz{A}$.
As $\ker\tilde{\beta}_A={\operatorname{im}}\tilde{\rho}_A$, we infer that $\tilde{\beta}_A\restr{D}$ is an
isomorphism, and we let $\sigma=(\tilde{\beta}_A\restr{D})^{-1}$. As
$\tilde{\beta}_A\tau=\tilde{\beta}_I\tau=\operatorname{id}$ and ${\operatorname{im}}\tau\subseteq D$, $\sigma$ does
extend $\tau$.
\end{pf}
Combining 6.1 and 8.1 of \cite{gae:dgt}, we get that the lattice
isomorphism between ideals of $A$ and order ideals of $K_0(A)$
extends to a lattice isomorphism
\[
I\mapsto K_0(I)\oplus K_1(I)
\]
into the order ideals of $K_0(A)\oplus K_1(A)$ for the
\mbox{$C^*$}-algebras we consider. As this is also a Riesz group, sums and
intersections of order ideals are again order ideals. We conclude, in
particular, that
\[
K_1(I\cap J)=K_1(I)\cap K_1(J)\qquad
K_1(I + J)=K_1(I)+ K_1(J)
\]
Let $G_1,\dots,G_n$ be a set of subgroups of a group $H$. There are
maps
\[
\Gamma^1:\bigoplus_{i<j}{G_i\cap G_j}\rightarrow \bigoplus_i{G_i}
\qquad
\Gamma^0: \bigoplus_i{G_i}\rightarrow H
\]
given on each summand by $\Gamma^0(g_i)=g_i$ and
\[
\Gamma^1(g_{ij})_k=\left\{\begin{array}{ll}
g_{ij}&k=i\\-g_{ij}&k=j\\0&\text{other }k
\end{array}\right.
\]
Note that $\Gamma^0\Gamma^1=0$.
\begin{lemma}\label{maxideals}
Suppose $I$ and $J$ are ideals in a real rank
zero $AD$ algebra $A$ such that $I+J=A$. Then
$$
0\longrightarrow
{K_1(I\cap J)}\stackrel{\Gamma^1}{\longrightarrow}
{K_1(I)\oplus K_1(J)}\stackrel{\Gamma^0}{\longrightarrow}
{K_1(A)}\longrightarrow 0
$$
is a pure exact extension.
\end{lemma}
\begin{pf}
Assume that $n(x_1,x_2)$ lies in the image of $\Gamma^1$. There is
then $x\in K_1(I\cap J)$ with $x=nx_1$. As $I\cap
J$ is an ideal of $I$, the subgroup $K_1(I\cap J)$ is pure in
$K_1(I)$, which implies that $x=ny$ with $y\in K_1(I\cap J)$. Clearly
$n(x_1,x_2)=n\Gamma^1(y)$.
\end{pf}
The following result is very similar to \cite[4.11]{gg:ccrrzuet} and is
proved by the exact same argument. We include it for the convenience
of the reader.
\begin{lemma}\label{exact}
When $(I_j)$ is a finite family of comaximal ideals of a real rank
zero $AD$ algebra, the complex
$$
{\bigoplus_{i<j}{\operatorname{tor} K_1(I_i\cap I_j)}}\stackrel{\Gamma^1}{\longrightarrow}
{\bigoplus_i{\operatorname{tor} K_1(I_i)}}\stackrel{\Gamma^0}{\longrightarrow}
{\operatorname{tor} K_1(A)}\longrightarrow 0
$$
is exact.
\end{lemma}
\begin{pf}
Surjectivity of $\Gamma^0$ follows from Lemma \ref{maxideals}.
We prove exactness at $\bigoplus_i{\operatorname{tor} K_1(I_i)}$ for $n=2$ and
$n=3$. A straightforward induction argument will then prove the
general claim. The case $n=2$ is obvious from Lemma \ref{maxideals}.
For the case $n=3$, assume that
$$
y_1+y_2+y_3=0
$$
with $y_i\in\operatorname{tor} K_1(I_i)$. As the lattice of ideals in $A$ is
distributive (since $I^2=I$ for every ideal $I$), we have
$$
I_1=I_1\cap(I_2+I_3)=(I_1\cap I_2)+(I_1\cap I_3),
$$
and we apply Lemma \ref{maxideals} to $I_1$ to get that
$y_1=z_2+z_3$
for some $z_i\in\operatorname{tor} K_1(I_1\cap I_i)$. From the claim for $n=2$ we get
that
$$
(0,y_2+z_2,y_3+z_3)=(0,w,-w)
$$
for some $w\in\operatorname{tor} K_1(I_2\cap I_3)$. And then
$$
(y_1,y_2,y_3)=\Gamma^1(z_2,z_3,w).
$$
\end{pf}
{\bf Proof of Proposition \ref{existidspli}:}\label{existidsplit}
Let $\Omega$ be a proper, hereditary subset of the lattice of ideals
in $A$. We get from the finiteness assumption that there is an ideal
$I\not\in\Omega$ with $\min(I)$, the set of proper ideals of $I$,
contained in $\Omega$. This means that $\Omega\cup\{I\}$ is again
hereditary, and we can prove the claim inductively by showing that
when $\Omega$ is such a set, and K\"unneth splittings $\sigma_J$ are
given for all $J\in\Omega$ with
\begin{eqnarray}\label{cohere}
J_1\subseteq J_2, J_i\in \Omega\mbox{$\Longrightarrow$} \sigma_{J_2}\restr{\operatorname{tor}
K_1(J_1)}=\sigma_{J_1},
\end{eqnarray}
and $I\not\in\Omega$ is given with $\min(I)\subseteq \Omega$, then we
can define $\sigma_I$ such that \eqref{cohere} holds true for
$\Omega\cup\{I\}$. Let $N$ be the number of maximal elements of
$\min(I)$. We must deal with the cases $N=1$ and $N>1$ separately.
\gritem{1} $N=1$: Denote the single maximal ideal by $J$ ($J$ could be
the zero ideal). By Lemma \ref{maximalcase}, $\sigma_J$ extends to a
splitting map $\sigma_I$, and \eqref{cohere} follows from
\[
\sigma_{I}\restr{\operatorname{tor} K_1(J_1)}=
\sigma_{J}\restr{\operatorname{tor} K_1(J_1)}=\sigma_{J_1}.
\]
\gritem{2} $N>1$: Let $I_1,\dots,I_N$ be the set of different maximal
ideals. Clearly $I_i+I_j=I$ for all $i\not=j$, and we get a diagram
\begin{flushleft}
\begin{picture}(00,140)
\put(100,130){$0$}
\put(220,130){$0$}
\put(100,105){$\vector(-0,1){20}$}
\put(220,105){$\vector(-0,1){20}$}
\put(80,90){${\pKz{I}}$}
\put(210,90){${\operatorname{tor} K_1(I)}$}
\put(200,90){$\vector(-1,0){50}$}
\put(70,50){${\displaystyle\bigoplus_i{\pKz{I_i}}}$}
\put(100,60){$\vector(-0,1){20}$}
\put(220,60){$\vector(-0,1){20}$}
\put(230,70){${\Gamma}_0$}
\put(85,70){${\Gamma}_0$}
\put(200,50){${\displaystyle\bigoplus_i{\operatorname{tor} K_1(I_i)}}$}
\put(190,55){$\vector(-1,0){40}$}
\put(165,58){${\oplus{\sigma_{i}}}$}
\put(60,10){${\displaystyle\bigoplus_{i<j}{\pKz{I_i\cap I_j}}}$}
\put(100,20){$\vector(-0,1){20}$}
\put(220,20){$\vector(-0,1){20}$}
\put(85,30){${\Gamma}_1$}
\put(200,10){${\displaystyle\bigoplus_{i<j}{\operatorname{tor} K_1(I_i\cap I_j)}}$}
\put(230,30){${\Gamma}_1$}
\put(190,10){$\vector(-1,0){30}$}
\put(165,15){${\oplus{\sigma_{ij}}}$}
\end{picture}
\end{flushleft}
\noindent
in which the vertical maps form a complex and the solid square is
commutative by \eqref{cohere}. By Lemma \ref{exact},
the horizontal maps induce a map $\sigma_I:\operatorname{tor} K_1(I)\rightarrow \pKz
{I}$, such that the other square commutes. This implies that
\eqref{cohere} holds for $\Omega\cup\{I\}$ also.
\begin{flushright}$\Box$\end{flushright}
\begin{remark}\label{finalwords}
The method of proof given here differs from the one given in
\cite{gg:ccrrzuet} in that we are able to argue by purely algebraic
means, without going back into the inductive limit presentation of the \mbox{$C^*$}-algebra
in question. That is of course only possible because we have a
complete algebraic invariant at our service.
On the other hand, the reader will note that the overall structure of
the two proofs is the same, in particular in the way we deal with the
lattice of ideals. We have certainly been inspired by \cite{gg:ccrrzuet}.
\end{remark}
\begin{remark}\label{ASH}
When working with general $ASH$ algebras of real rank zero and with
slow dimension growth, the invariant consists of
the full collection of groups $\pKsubi{n}{A}$ along with order
structures and coherence maps as described in \linebreak \DG{4}.
As mentioned above, every unspliced sequence
\[
0\longrightarrow{K_i(A)\otimes\ZZp{n}}\stackrel{\tilde{\rho}_n^i}{\longrightarrow}{\pKsubi{n}{A}}\stackrel{\tilde{\beta}_n^i}{\longrightarrow}
{K_{i+1}(A)[n]}\longrightarrow 0
\]
is split. In fact, as in
\cite{cfb:skskI},\cite{cfb:skskII} one can choose an entire family
of splitting maps which is {\em coherent}\/ in the coefficient. This
means, when we denote by $\lmapk{m}{n}^i$ the natural
map
\[
\lmapk{m}{n}^i:K_i(A)[n]\rightarrow K_i(A)[m], \] that
$\sigma_m^i\lmapk{m}{n}^{i+1}=\kmapk{m}{n}^i\sigma_n^i$. Such a
coherent family is what we understand by a K\"unneth splitting in this
setting.
We define {\it ideal} K\"unneth splittings as above, and get the natural analogue of
Proposition \ref{idealsplit} from \DG{9.1-2}. We can prove that $ASH$
algebras of real rank zero with
finitely many ideals are ideally split by proceeding as in the proof
of Proposition \ref{existidspli}. Here, our approach in case $2^\circ$ easily generalizes
because
\[
{\bigoplus_{j<k}{K_i(I_j\cap I_k)[n]}}\stackrel{\Gamma^1}{\longrightarrow}
{\bigoplus_j{K_i(I_j)[n]}}\stackrel{\Gamma^0}{\longrightarrow}
{K_i(A)[n]}\longrightarrow 0
\]
is exact as a consequence of purity, but we only know of a fairly
laborious way to generalize our approach in case $1^\circ$. A possible
method is to achieve an analogue of
Lemma \ref{maximalcase}, by hands-on extending a given coherent
family defined on the $K$-theory of an ideal. This is seen by
inspection of the proofs of
\cite[2.8]{cfb:skskI}, \cite[2]{cfb:skskII}, using at a crucial point
that if a basis (cf. \cite[17.2]{lf:iagI}) for $K_1(I)[p^r]$ is given, it
can be augmented to a basis for $K_1(A)[p^r]$ by purity arguments.
\end{remark}
\begin{center}
{\bf Acknowledgements}
\end{center}
I am grateful to Peter Friis and Niels Peter J{\o}rgensen for helpful discussions, and to Ken Goodearl for comments on the first version of the paper which led to a substantial increase in clarity and a dramatic reduction of the proof of Lemma 4.2.
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{%
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}
}
\providecommand{\href}[2]{#2}
\section{Introduction}
\label{sec:introduction}
Topological insulators (TIs) are a new class of quantum matter, where strong spin-orbit coupling results in a bulk energy gap but gapless metallic surface states.\cite{Hasan10, Qi11} In strong TIs, a topological invariant associated with the bulk band structure guarantees the existence of a single (or odd number) surface state with characteristic linear Dirac energy dispersion, where the electron spin is locked to the momentum.\cite{Fu07}
The surface state is topologically protected against any time-reversal invariant perturbations. This is intimately connected with the absence of backscattering for nonmagnetic impurities, since a spin-flip is required for 180$^\circ$ backscattering. The lack of backscattering was established theoretically early on within a two-dimensional (2D) continuum model for the surface state\cite{Lee09, Zhou09, Guo10} and later also confirmed in experiments.\cite{Roushan09, Zhang09, Alpichshev10} The same 2D surface continuum model finds that, while a local impurity-induced resonance state exists for a potential impurity, its weight diminish as the energy approaches the Dirac point for unitary scatterers and the Dirac point is left unperturbed.\cite{Biswas10}
Surface-only models, however, ignore the finite bulk gap, thus neglecting bulk-assisted processes.
Using a microscopic 3D lattice model for a strong TI we recently established that a strong impurity on the surface gives rise to a large resonance peak in the local density of states (LDOS) at and around the Dirac point.\cite{Black-Schaffer11TI} Consequently, the topological protection of the Dirac point is destroyed close to the impurity and it splits into two nodes that move off-center.
Recent scanning tunneling spectroscopy (STS) results\cite{Teague12} on Bi$_2$Se$_3$ have confirmed the existence of such strong resonance peaks at and around the Dirac point. Other experimental data have also shown that localized bound states at defects\cite{Alpichshev11b} and steps\cite{Alpichshev11} do not agree with results from a purely 2D surface continuum model.
These recent experiments warrant a close investigation of impurities which might give rise to surface resonance states. In particular, since the surface state extends many layers into the material,\cite{Black-Schaffer11TI} even subsurface impurities might significantly affect the LDOS measured on the surface.
In the opposite limit, the properties of deep subsurface impurities ought to be closely connected to those of bulk impurities. Both potential impurities\cite{Lu11} and finite sized holes\cite{Shan11} in the bulk of a 3D TI have previously been treated within a continuum theory focusing on in-gap bound states. In the case of a finite sized hole, it constitutes an interior surface and will thus necessarily host a surface state in a TI. As the hole radius shrinks, the surface state is transformed into bound states, which are expelled towards the bulk bands due to the finite hole size.
This is in striking contrast to surface single-site vacancies which produce impurity-bound states at the Dirac point.
In this article we present a comprehensive microscopic study of impurities positioned all the way from the surface to the bulk. In particular, we address the influence of subsurface impurities on the surface LDOS, how bulk impurities behave on a microscopic scale, and we show how the behavior of impurities in these two opposite limits are intimately connected.
More specifically we find that:
i) Subsurface impurities and vacancies as far as 15 layers into the material create a non-dispersive resonance peak in the surface LDOS. Thus, even deep subsurface impurities will affect the low-energy region of the surface state spectrum and be visible in STS measurements.
ii) The resonance energy $E_{\rm res}$ is always inversely proportional to the impurity strength $U$.
However, for the resonance state to enter the low-energy region, the impurity strength needs to be stronger the deeper down the impurity is buried.
iii) Both impurities and vacancies in the bulk produce in-gap resonance states, connecting smoothly with the behavior of surface impurities and vacancies. These low-energy states will give rise to non-insulating bulk transport.
iv) Fully symmetric multiple-site vacancy clusters have no in-gap resonance peaks, in agreement with continuum results.\cite{Shan11} However, any small deviation from full symmetry produces low-lying resonance peaks. Any realistic microscopically created hole in a 3D TI will therefore have a resonance peak around $E = 0$, mimicking the results of a single vacancy instead of that of a finite size continuum hole.
The rest of the article is organized as follows. In Sec.~\ref{sec:model} we introduce a general microscopic lattice model for studying defects and vacancies in a strong 3D TI. In Sec.~\ref{sec:resultsA} we discuss the surface LDOS and impurity-induced resonance peaks for surface and subsurface impurities and vacancies. In particular, we focus on the dependence of the resonance energy on layer position and impurity strength.
In Sec.~\ref{sec:resultsB} we discuss multiple-site bulk vacancy clusters. We conclude in Sec.~\ref{sec:conclusion} by summarizing our results and discussing experimental consequences.
\section{Model}
\label{sec:model}
We create a strong TI by using a four band $s$-orbital tight-binding scheme on the diamond lattice with spin-orbit coupling:\cite{Fu07}
\begin{align}
\label{eq:H0}
H_0 = & \ t \sum_{\langle i,j\rangle} c^\dagger_{i}c_{j} + \mu \sum_i c^\dagger_i c_i \\ \nonumber
& + \frac{4i\lambda}{a^2} \sum_{\langle \langle i,j\rangle \rangle} c^\dagger_{i} {\bf s \cdot (d}^1_{ij}\times {\bf d}^2_{ij}) c_{j}.
\end{align}
Here $c_{i}$ is the annihilation operator on site $i$ where we, for simplicity, have suppressed the spin-index. Furthermore, $t$ is the nearest neighbor hopping, $\mu = 0$ the chemical potential, $\lambda = 0.3t$ the next-nearest neighbor spin-orbit coupling, $\sqrt{2}a$ the cubic cell size, ${\bf s}$ the Pauli spin matrices, and ${\bf d}_{ij}^{1,2}$ the two bond vectors connecting next-nearest neighbor sites $i$ and $j$.
By further distorting the hopping amplitude to $1.25t$ along one of the nearest neighbor directions not parallel to the (111) direction, this system becomes a strong TI, with a single surface Dirac cone.\cite{Fu07}
In order to access a surface we create a slab of Eq.~(\ref{eq:H0}) along the (111) direction, see Fig.~\ref{fig:lattice}(a). We are mainly studying slabs with ABBCC...AABBC stacking terminations, hereafter labeled AB termination, but will also compare these results with AABBCC...AABBCC terminated slabs, labeled AA termination, in order to generalize our results.
\begin{figure}[htb]
\includegraphics[scale = 0.65]{FIG1_lattice}
\caption{\label{fig:lattice} (Color online) (a) Stacking structure for the (111) direction in the diamond lattice. 1st and 2nd A layers (filled circles), 3rd and 4th B layers (crosses), 5th and 6th C layers (squares). In-plane nearest neighbor distance is $a = 1$. Layer separations are $\sqrt{3}a/\sqrt{8}$ for AA layers and $a/\sqrt{24}$ for AB layers. (b) 5-site nearest neighbor cluster with center site (black) and nearest neighbor sites (blue). (c) 11-site nearest neighbor and in-plane next-nearest neighbor cluster with next-nearest neighbor sites (cyan). (d) 17-site next-nearest neighbor cluster.}
\end{figure}
We choose an energy scale such that the slope of the surface Dirac cone $\hbar v_F\approxeq 1$ for an AB slab, which is achieved by setting $t = 2$ throughout this work.
We find that for slabs with $r \gtrsim 5$ lateral unit cells, where each lateral cell contains six atomic layers, there is only a minimal amount of cross-talk between the two slab surfaces, resulting in a negligible surface energy gap.
We label the different layers in the slab starting with layer 1 for the surface layer. Around layer 15, the remnant DOS of the surface state is becoming negligible and also located close to the bulk gap\cite{Black-Schaffer11TI} and we are thus approaching bulk conditions at this depth.
In order to study the effect of potential impurities we create a rectangular-shaped surface supercell with $n$ sites along each direction. This gives a supercell surface area of $\sqrt{3}n^2a^2/2$ where we use $a=1$ (the nearest neighbor distance on the surface) as the unit of length. We add impurities to our model by adding the term
\begin{align}
\label{eq:Himp}
H_{\rm imp} = U \sum_i c_i^\dagger c_i
\end{align}
to the Hamiltonian in Eq.~(\ref{eq:H0}). Here $U\geq 0$ is the impurity strength and the summation is over all impurity sites in the supercell. We note that by adding $H_{\rm imp}$ we break particle-hole symmetry and thus our model, even with $\mu =0$, corresponds to a rather general situation.
We solve $H= H_0 + H_{\rm imp} = X^\dagger \mathcal{H}X$, where $X^T =(c_{i\uparrow}, c_{i\downarrow})$,
in the supercell using exact diagonalization. From the eigenvalues $E^\nu_k$ and eigenvectors $U^\nu_k$ of $\mathcal{H}$, the LDOS resolved at every site $i$ is calculated as
%
\begin{align}
\label{eq:LDOS}
D_i(E) = \sum_{k,\nu} (|U^\nu_k(i)|^2 + |U^\nu_k(N+i)|^2)\delta(E-E^\nu_k),
\end{align}
where $N$ is the total number of sites and the summation is over all $k$-points in the supercell Brillouin zone and all eigenvalues indexed by $\nu$. The two different terms in Eq.~(\ref{eq:LDOS}) are for spin-up and spin-down electrons, respectively. We will mainly be concerned with the layer-resolved LDOS on nearest neighbor sites to the impurity, which is the average of the site-resolved LDOS in Eq.~(\ref{eq:LDOS}) on nearest neighbor sites in each layer.
We find that a $50 \times 50$ supercell $k$-point grid gives sufficient resolution when combined with a Gaussian broadening of $\sigma = 0.005$ in the LDOS calculation.
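As a minimal illustration of how Eq.~(\ref{eq:LDOS}) is evaluated in practice, the sketch below diagonalizes a toy Hamiltonian and broadens the spectral weight with a Gaussian of width $\sigma$. It is a schematic one-dimensional stand-in, not the 3D slab of Eq.~(\ref{eq:H0}); the chain length, hopping, impurity strength, and function names are illustrative assumptions.

```python
import numpy as np

def ldos(H, sites, energies, sigma=0.005):
    """D(E) = sum_nu |U^nu(i)|^2 delta(E - E^nu), with the delta function
    replaced by a normalized Gaussian of width sigma (cf. Eq. (3))."""
    evals, evecs = np.linalg.eigh(H)                  # exact diagonalization
    w = (np.abs(evecs[sites, :]) ** 2).sum(axis=0)    # weight on the chosen sites
    gauss = np.exp(-(energies[:, None] - evals[None, :]) ** 2 / (2 * sigma ** 2))
    gauss /= np.sqrt(2 * np.pi) * sigma
    return gauss @ w

# toy model: open 1D chain, hopping t, one strong impurity U on site 0
N, t, U = 60, 1.0, 40.0
H = -t * (np.eye(N, k=1) + np.eye(N, k=-1))
H[0, 0] = U
E = np.linspace(-3.0, 3.0, 601)
D = ldos(H, [1], E, sigma=0.05)  # LDOS on the impurity's nearest neighbor
```

The spectral weight on the chosen site integrates to one per orbital, up to the small weight carried by the impurity-bound state far outside the plotted window.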
\section{Results}
\subsection{Subsurface impurities}
\label{sec:resultsA}
We start by studying single-site, isolated, potential impurities with varying impurity strength $U$ including the case of a single vacancy ($U\rightarrow \infty$), which represents the unitary scattering limit. Here we study impurities located from the surface all the way down to the bulk.
\begin{figure}[htb]
\includegraphics[scale = 0.98]{FIG2_LDOSbulkdep}
\caption{\label{fig:LDOSbulk} (Color online) Layer-resolved LDOS averaged over in-plane nearest neighbor sites to a vacancy (a-f) and an $U = 40$ impurity (g-l) positioned in layer 1, 2, 3, 4, 7, 20 (counted from the top) plotted for each layer across a $r =7$ lateral unit cell wide slab with AB termination with a supercell size of $n = 10$. Zero (white), 0.1 (black) states per energy and area unit. Red/Grey dashed vertical lines mark impurity layer, whereas horizontal dotted lines mark $E = 0$.}
\end{figure}
Figure~\ref{fig:LDOSbulk} shows nearest neighbor layer-resolved LDOS for both a vacancy (left column) and a $U = 40$ impurity (right column), positioned in different layers. Starting with a surface layer vacancy (topmost left), there is a wide, double-peak resonance roughly centered at the Dirac point at $E = 0$. As carefully analyzed in Ref.~\onlinecite{Black-Schaffer11TI}, a surface vacancy creates a resonance peak firmly situated on top of the original Dirac point, which splits into two Dirac points situated on either side of the resonance peak. These two Dirac points are the termination points of the valence and conduction Dirac surface states, respectively.
The local destruction of the topologically protected low-energy Dirac surface state spectrum, and its Dirac point, is due to surface-bulk interaction always present in TIs with a finite bulk band gap.
The width of the resonance peak decreases as the impurity-impurity distance increases with supercell size $n$. However, the total weight of the peak approaches a constant value as $n$ increases,\cite{Black-Schaffer11TI} corroborating the existence of a finite resonance peak even in the limit of a fully isolated impurity. The double-peak structure is also less visible as the impurity-impurity overlap decreases and the center of the peak remains fixed. In fact, the resonance peak is non-dispersive throughout the whole slab for all impurity concentrations and positions.
Since the surface state penetrates relatively deep into the material, by a reciprocity argument, impurities positioned in subsurface layers might also have a profound effect on the surface LDOS.
Figures~\ref{fig:LDOSbulk}(b-f) show single vacancies positioned in layers 2, 3, 4, 7, and 20, respectively. There is some oscillation in the energy of the peak as a function of layer position, but for all subsurface layer positions $\lesssim 15$, there is still a finite sized resonance peak located at or around $E = 0$ in the surface LDOS. Thus, the original single Dirac point on the surface is destroyed even for vacancies positioned deep into the TI.
We also find a single-double peak oscillation where double peaks only appear for vacancies in every other layer, but this layer position difference diminishes with increasing impurity-impurity distance.
When approaching the bulk layers, the resonance peak centers firmly at $E = 0$, and its impact on the surface state diminishes as the distance to the surface increases.
In Fig.~\ref{fig:LDOSbulk}(f), the vacancy is positioned deep within the bulk, and there is a narrow, but tall, impurity resonance peak at $E = 0$, but it does not penetrate to the surface.
This result can be understood rather straightforwardly by applying the $T$-matrix formalism to an idealized, but normal, insulator.
In the presence of a scattering potential $\hat{V}$, the Green's function $\hat{G}$ is determined by
\begin{align}
\label{eq:G}
\hat{G} = \hat{G}^0 + \hat{G}^0\hat{T}\hat{G}^0,
\end{align}
where $\hat{G}^0$ is the bare Green's function and the $T$-matrix is given by
\begin{align}
\label{eq:T}
\hat{T} = (1-\hat{V}\hat{G}^0)^{-1}\hat{V}.
\end{align}
Since the poles of the Green's function give the energy spectrum for single-particle excitations, we can find the energy $E_{\rm res}$ of any impurity-induced resonance state by searching for poles in the $T$-matrix.
For an atomically sharp impurity, described by the $\delta$-function potential $\langle x| \hat{V}|x\rangle = U\delta(x)$, the resonance energy is given by
\begin{align}
\label{eq:res}
\frac{1}{U} = {\rm Re} [G^0(E_{\rm res})],
\end{align}
as long as ${\rm Im}[G^0(E_{\rm res})]$ is sufficiently small.\cite{Balatsky06}
Using an idealized insulator with $k$-independent valence and conduction bands separated by a band gap $E_g$, the bare Green's function is $G^0(\omega, {\bf k}) = (\omega - E_g/2 + i\eta)^{-1} + (\omega + E_g/2 - i \eta)^{-1}$, with $\eta$ infinitesimally small. Thus Eq.~(\ref{eq:res}) gives $E_{\rm res} \rightarrow 0$ as $U \rightarrow \infty$.
If, on the other hand, the TI has a finite doping such that the Fermi energy $E_F = E_D + x \equiv 0$, where $E_D$ is the energy of the Dirac point and $|x|<E_g/2$ for $E_F$ to still be inside the bulk gap, the same argument gives $E_{\rm res} = -x = E_D$. That is, the resonance will always be situated at the Dirac point for a unitary impurity, independent of the doping of the system. We have confirmed this result numerically by including a finite chemical potential in Eq.~(\ref{eq:H0}).
The above derivation depends on the valence and conduction bands being mirror-symmetric with respect to $E_D$ for all $k$-values. While this is true in our model TI, it is in general not true in a real material. However, as long as the valence and conduction bands are approximately mirror-symmetric in $E_D$ in the part of the Brillouin zone where the band gap is smallest, we expect our results to still be qualitatively correct. If, on the other hand, the energy differences $E_c$ between the conduction band and the Dirac point and $E_v$ between the valence band and the Dirac point are different, the resonance energy is instead $E_{\rm res} = -x + (E_c-E_v)/2$, which is located away from the Dirac point.
This $T$-matrix calculation is also important as it shows that our results are independent of the particular lattice model.
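For the idealized two-band insulator used above, the pole condition of Eq.~(\ref{eq:res}) can be solved in closed form: $1/U = 2E/(E^2 - E_g^2/4)$ gives the in-gap root $E_{\rm res} = U - \sqrt{U^2 + E_g^2/4} \approx -E_g^2/(8U)$. The short numerical check below is illustrative only; the gap value is the one quoted for our model.

```python
import numpy as np

def g0_real(E, Eg):
    """Re G^0(E) for a k-independent two-band insulator with gap Eg."""
    return 1.0 / (E - Eg / 2) + 1.0 / (E + Eg / 2)

def e_res(U, Eg):
    """In-gap root of 1/U = Re G^0(E): E_res = U - sqrt(U^2 + Eg^2/4)."""
    return U - np.sqrt(U ** 2 + Eg ** 2 / 4)

Eg = 0.6  # bulk gap of the model, in the units used in the text
U_vals = np.array([20.0, 40.0, 80.0, 1e6])
E_vals = e_res(U_vals, Eg)
```

As $U$ grows, the root approaches the Dirac point from below, reproducing both the $1/U$ dependence and the pinning of the unitary-limit resonance to $E = 0$.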
The LDOS for a finite $U$-impurity is very similar to that of a single vacancy. The main difference is that the resonance peak in general does not appear at or around $E = 0$, unless $U$ is large, and thus does not destroy the low-energy features of the Dirac surface state. There is also a clear trend that the deeper the impurity, the larger the $U$ needed for a low-energy resonance peak. This is clearly seen in Figs.~\ref{fig:LDOSbulk}(g-l), where a $U = 40$ surface impurity is seen to destroy the original Dirac point, but where the same impurity in subsurface layers produces an impurity-induced resonance away from $E = 0$.
Figure~\ref{fig:LDOSbulk}(l) shows how a bulk $U = 40$ impurity clearly produces an in-gap resonance peak, associated with a state tightly bound to the impurity site. It was recently argued, based on results from a continuum model, that a non-magnetic $\delta$-function impurity cannot produce in-gap bound states in a 3D TI.\cite{Lu11} Our results, however, show that the closest lattice equivalent of a $\delta$-function, i.e.~the single-site impurity, clearly produces in-gap bound states. This result is true as long as the impurity strength is large enough to put the resonance peak within the bulk gap. In our model system that means $U \gtrsim 20$. For smaller $U$ there is still a resonance state but it is located at energies above the bulk gap.
In Fig.~\ref{fig:peakpos} we analyze in more detail the resonance energy peak position $E_{\rm res}$, extracted from the layer-resolved LDOS surface spectra, as function of both impurity layer position (a) and impurity strength (b). Since the resonance peak is non-dispersive, the peak energy position is the same in all layers.
\begin{figure}[htb]
\includegraphics[scale = 0.98]{FIG3_peakpos}
\caption{\label{fig:peakpos} (Color online) (a) Impurity resonance peak position as function of layer position for AB surface termination (thick lines, $\times$) and AA surface termination (thin lines, $\circ$) for a vacancy ($U = \infty$) (solid black), $U = 80$ (solid red), $U = 30$ (dashed black), and $U = 14$ (dashed red) impurities, where the last set of results are only displayed within the bulk gap $E_g \approx 0.6$.
(b) Impurity resonance peak position as function of the inverse impurity strength $1/U$ for AB surface termination (thick lines, $\times$) and AA surface termination (thin lines, $\circ$) for impurity layer position 1 (solid black), 2 (solid red), 3 (dashed black), 4 (dashed red), and bulk (thickest black).}
\end{figure}
As clearly seen in Fig.~\ref{fig:peakpos}(a), the resonance peak appears at larger (negative) energies, i.e.~farther from the low-energy region, for subsurface impurity positions. Thus, for an impurity to influence the low-energy region of the surface Dirac spectrum, it needs to be stronger the farther it is from the surface. It is also clear that the resonance peak moves toward the low-energy region from larger (negative) energies as $U$ increases. This is equally true for both surface and subsurface positions.
Apart from these trends, there is also a layer oscillation in the peak position but it quickly dies out as the impurity position approaches the bulk.
We have here also included results for an AA terminated surface (thin lines, $\circ$) alongside the AB surface results (thick lines, $\times$). We note that the specifics of the layer oscillations are somewhat surface dependent as the AA surface termination produces slightly different results, but, in general, both surface terminations display remarkably similar results.
In Fig.~\ref{fig:peakpos}(b) we plot the peak position as function of the inverse impurity strength $1/U$. For all impurity layer positions, including both the surface and the bulk, the peak position is proportional to $1/U$, with $E_{\rm res} = k/U + m$.
For AB surface termination, the slope $k$ is approximately constant between different impurity layer positions but the off-set $m$ varies for impurities close to the surface. For AA surface termination there is also some variation of the slope $k$ between layer positions. However, already for impurities in layer 4, the peak position is largely set by the bulk behavior (thickest line). The $1/U$-dependence of the resonance peak position in the bulk follows directly from the same $T$-matrix argument given above, and the $1/U$-dependence for surface impurities has been established using a 2D continuum model for the surface state.\cite{Biswas10} However, the resonance peak was in the latter case found to disappear at unitary scattering, something we most notably do not see in our microscopic lattice model.
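The slope $k$ and off-set $m$ can be extracted from a set of peak positions by an ordinary least-squares fit in the variable $1/U$. The sketch below uses synthetic peak positions generated from the idealized-insulator pole condition rather than the actual slab data, so the fitted numbers are illustrative only.

```python
import numpy as np

Eg = 0.6
U = np.linspace(20.0, 100.0, 9)
# synthetic peak positions: in-gap T-matrix pole of the idealized insulator,
# standing in for peak positions read off the layer-resolved LDOS
E_res = U - np.sqrt(U ** 2 + Eg ** 2 / 4)

# linear fit E_res = k * (1/U) + m
k, m = np.polyfit(1.0 / U, E_res, 1)
```

For this toy data the fit recovers the analytic leading behavior, $k \approx -E_g^2/8$ with a vanishing off-set.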
To summarize this section, we conclude that subsurface and bulk impurities behave very similarly to surface impurities, with a $1/U$ resonance peak energy dependence, although a stronger impurity is needed in subsurface positions in order to observe in-gap resonances. The non-dispersiveness of the resonance peak means that for any finite impurity-surface coupling, a resonance peak will also be present in the surface energy spectrum. We find that resonance peak traces are clearly present in the surface LDOS for impurities as far down as $\sim 15$ layers below the surface. Moreover, both finite strength impurities and vacancies in the bulk produce low-energy resonance states, a result which connects smoothly with the behavior of impurities close to the surface.
\subsection{Bulk vacancy clusters}
\label{sec:resultsB}
The $E = 0$ resonance peak present for a single-site bulk vacancy is associated with a very tightly bound state around the vacancy site. As Eq.~(\ref{eq:res}) showed, the $E_{\rm res}=0$ peak is the same as that of a vacancy in an idealized normal insulator, and is thus a very robust result.
On the other hand, Shan {\it et al.}\cite{Shan11}~recently used a continuum model to demonstrate the existence of bound states for a finite sized hole in a TI. In that case the bound states are simply a manifestation of the fact that a finite sized hole creates an interior surface in the TI. Holes with a very large radius $R$ possess a surface state very similar to that of a planar surface, although, technically, the surface state will have to obey periodic boundary conditions around the hole. As the radius $R$ becomes finite, the surface state turns into bound states with an energy separation which gets larger with decreasing $R$. Finally, for small enough holes the bound states are expelled to the bulk bands. Most notably, this continuum model does {\it not} produce $E = 0$ bound states for any size holes, unless $R \rightarrow \infty$. Clearly this result is at odds with our microscopic result for a single-site vacancy. To further shed light on this discrepancy we have studied highly-symmetric bulk vacancy clusters, involving as many as 17 sites, in order to increase the effective radius of our microscopically created hole.
\begin{figure}[htb]
\includegraphics[scale = 0.98]{FIG4_vacclusters}
\caption{\label{fig:vacclusters} (Color online) LDOS averaged over in-plane nearest neighbor sites in layer 17 (a, b) and layer 1 (c, d) for different highly symmetric vacancy clusters centered at layer 17. (a, c): 5-site nearest neighbor vacancy cluster with radius $\sqrt{3}a/\sqrt{8}$ (thick black), 5-site nearest neighbor cluster with 1 next-nearest neighbor substitution (thin black), and 2 next-nearest neighbor substitutions (red), 1-site single impurity (dashed). (b, d): 17-site next-nearest neighbor vacancy cluster with radius $a$ (thick black), 17-site next-nearest neighbor cluster with two different 1 next-next-nearest neighbor substitutions (thin black and red), 11-site cluster consisting of the 4 nearest neighbors and the 6 in-plane next-nearest neighbors (dashed). Small finite gap at $E = 0$ in the surface state is due to the finite width of the slab ($r = 6$). Surface termination is AB and the supercell size is $n = 10$.}
\end{figure}
Figure~\ref{fig:vacclusters}(a) shows the LDOS on nearest neighbor sites to both a single-site vacancy (dashed line) and three different 5-site vacancy clusters. The diamond lattice has four nearest neighbors situated at the corners of a tetrahedron, a distance $\sqrt{3}a/\sqrt{8} \approx 0.6a$ from the center site, see Fig.~\ref{fig:lattice}(b). For such a 5-site nearest neighbor vacancy cluster, the resonance peak moves up close to the bulk band gap at $E_g \approx 0.6$ (thick line). However, if we replace one of the nearest neighbors with a next-nearest neighbor, the impurity-bound state reappears close to $E = 0$ (black line). Further distortion by replacing two nearest neighbors with next-nearest neighbors creates a resonance state at $E = 0$ (red/grey line), the same result as for the single-site vacancy. We see in Fig.~\ref{fig:vacclusters}(c) how these peaks also show up as extremely small impurity resonances in the surface LDOS at the same energies when these vacancies are centered around layer 17.
In Fig.~\ref{fig:vacclusters}(b, d) we show the same result for even larger vacancy clusters. The diamond (111) slab has six in-plane next-nearest neighbors and an additional six next-nearest neighbors out-of-plane, situated a distance $a$ from the center site, see Figs.~\ref{fig:lattice}(c,d). An 11-site cluster including the four nearest neighbors and the six in-plane next-nearest neighbors creates a resonance around $E = 0$ (dashed line). When including all next-nearest neighbors into a fully-symmetric 17-site cluster, the resonance peaks move up to around $E = 0.4$ (thick line). However, distorting this 17-site cluster by exchanging only one next-nearest neighbor for a next-next-nearest neighbor again produces peaks in the very low-energy part of the spectrum (black and red/grey lines).
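The cluster radii and site counts quoted above follow directly from the diamond lattice geometry: with the in-plane nearest neighbor distance $a = 1$ the cubic cell size is $\sqrt{2}a$, giving four nearest neighbors at distance $\sqrt{3}a/\sqrt{8} \approx 0.61a$ and twelve next-nearest neighbors at distance $a$, so the center site plus both shells makes up the fully-symmetric 17-site cluster. A quick numerical check:

```python
import numpy as np

c = np.sqrt(2.0)  # cubic cell size for in-plane nn distance a = 1 (cf. Sec. II)

# four nearest neighbors: the tetrahedral bond vectors of the diamond basis
nn = (c / 4) * np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], float)

# twelve next-nearest neighbors: the fcc translation vectors
nnn = (c / 2) * np.array([[s1, s2, 0] for s1 in (1, -1) for s2 in (1, -1)]
                         + [[s1, 0, s2] for s1 in (1, -1) for s2 in (1, -1)]
                         + [[0, s1, s2] for s1 in (1, -1) for s2 in (1, -1)], float)

d_nn = np.linalg.norm(nn, axis=1)    # -> sqrt(3/8) ~ 0.612
d_nnn = np.linalg.norm(nnn, axis=1)  # -> 1.0
```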
We thus find that fully-symmetric vacancy clusters involving all nearest and next-nearest neighbor sites expel the impurity-bound states to high energies, in accordance with earlier continuum model results. Also, when increasing the radius from $0.6a$ for the nearest neighbor cluster to $a$ for the next-nearest neighbor cluster, the resonance peak moves to slightly lower energies, in agreement with the continuum results. However, even the smallest possible distortion of either of these two clusters produces results more resembling those of a single-site vacancy, where the resonance peak sits firmly at $E = 0$. Thus, despite the topological origin of the surface state in a TI, there is a surprisingly large sensitivity to small deviations in the cluster shape.
Since any microscopically sized hole in a TI will likely have some asymmetry, we conclude that even for fairly large such holes, the continuum limit will not be reached, but a resonance peak will be present at or around $E = 0$.
\section{Concluding remarks}
\label{sec:conclusion}
Using a 3D microscopic lattice model of a strong TI we have shown that strong potential impurities and vacancies create low-lying impurity-bound resonance peaks, with a $1/U$-dependence of the resonance peak energy for impurities in any layer, including the bulk. Impurities as far as 15 layers below the surface have resonance peaks visible in the surface LDOS. This is also approximately the penetration depth of the surface state into the interior of the TI. Thus any vacancy or unitary impurity within the penetration depth of the TI surface state produces a peak in the LDOS at or very near the Dirac point, which is subsequently destroyed and split into two nodes that move off-center.
Recent STS data\cite{Teague12} on nonmagnetic unitary impurities in Bi$_2$Se$_3$ has shown sharp energy resonance peaks at the Dirac point, with diverging strength as the Fermi level approaches the Dirac point.
Our results show that the impurities do not necessarily have to be located on the surface; subsurface impurities can also generate such surface resonance peaks.
The experimental presence of strong resonance states at the Dirac point confirms the need for a 3D model, which explicitly includes bulk states, since 2D continuum results do not find any strong resonance peaks near the Dirac point.\cite{Biswas10}
For surface impurities the resonance peak decays quickly, approximately as $1/R^3$ with distance on the surface,\cite{Black-Schaffer11TI} and we find a similar dependence for subsurface impurities in the surface LDOS. This fast decay should be contrasted with a rather extended spread perpendicular to the surface.
Experimentally, the resonance peaks were found to decay within as little as 2\AA, which is in qualitative agreement with our results. Such fast decay signals a quick healing of the single Dirac point spectrum, as would be expected for a topologically protected surface.
The impurity-induced resonance peaks in the surface state in a TI are, in fact, similar to impurity resonances in graphene \cite{Ugeda10} and $d$-wave high-temperature superconductors,\cite{Balatsky06} two other materials with Dirac-like low-energy spectra. Thus, once any topological protection is lost due to strong scattering, there is a strong argument for a unified local response to impurities for all ``Dirac'' materials.\cite{Wehling11} This unified response corroborates the model-independence of our numerical results, as it is only the Dirac-like surface state, in combination with a finite bulk gap, that is important.
Closely connected to the behavior of near-surface impurities is that of bulk impurities, where we find $E_{\rm res} = 0$ peaks for single-site vacancies in the bulk. This result does not agree with continuum model results for finite holes in a TI.\cite{Shan11} To expand on this discrepancy we have studied extended bulk vacancy clusters. We find that, while fully-symmetric 5- and 17-site clusters do not have any low-energy resonance states, in agreement with continuum results, any asymmetry in the clusters produces $E_{\rm res} \approx 0$ resonance peaks. Since any vacancy cluster of microscopic origin is unlikely to be fully symmetric, we conclude that a microscopic approach is required for such holes.
For a finite strength bulk impurity, we similarly find contradictions with continuum model results. The $1/U$-dependence for the resonance energy produces in-gap resonances for strong impurities, in contrast to the absence of in-gap states for $\delta$-potential impurities in continuum models. \cite{Lu11}
In fact, both the bulk vacancy $E_{\rm res} = 0$ resonance state and the $1/U$ bulk impurity energy dependence are independent of the topological index and also present in a trivial band insulator, as we show by a simple $T$-matrix calculation. As a consequence, these conclusions for bulk impurities are independent of the specifics of the lattice model.
The low-energy resonance peaks for deep subsurface and bulk impurities can have a profound effect on the conductivity, as they can give rise to gapless bulk conductivity, thus masking the surface transport properties. Moreover, in the presence of a finite overlap between surface and bulk vacancy states, the surface electrons can be scattered by these zero-energy resonance states. In the limit of dense vacancy concentration, vacancy-band formation will allow edge-edge transitions, thus opening a gap in the topologically protected surface state.
\begin{acknowledgments}
We are grateful to R.~Biswas, Z.~Hasan, D.-H.~Lee, H.~Manoharan, N.~Nagaosa, A.~Wray, S.-C.~Zhang for discussions. AMBS acknowledges support from the Swedish research council (VR). Work at Los Alamos was supported by US DoE Basic Energy Sciences and in part by the Center for Integrated Nanotechnologies, operated by LANS, LLC, for the National Nuclear Security Administration of the U.S. Department of Energy under contract DE-AC52-06NA25396 and by UCOP.
\end{acknowledgments}
\section{\label{sec-intro}Introduction}
Resonances in electron-atom or electron-molecule scattering, also referred to as transient
negative ions, have attracted attention over the last decades. This is because these
temporary states provide a pathway for electron-driven chemistry via dissociative
electron attachment (DEA) and therefore, applications can be found in chemistry of
the planetary atmospheres
\cite{Carelli_FAG_2013},
nanolithography in microelectronic device fabrication
\cite{Dorp_PCCP_2012,Thorman_Oddur_BJN_2015},
and in cancer research where these states provide a mechanism for the DNA damage by
low-energy electrons
\cite{Sanche_PRL_2003,Sanche_PRL_2006}.
Accurate calculation of energies and lifetimes of the resonances represents a
challenging task that is more complicated than the determination of energies
of the bound atomic or molecular states. Temporary negative ions differ from the bound
states in two important respects: (i) they are not stable and decay into various
continua, and (ii) the corresponding poles of the $S$-matrix are complex and are
expressed by
$E=E_r - i\Gamma/2$.
There have been numerous studies published using several methods for determination
of the resonance energies and widths. Stabilization methods
\cite{Taylor_Hazi_PRA_1976,Hazi_Taylor_PRA_1970,Hazi_Kurilla_PRA_1981,Frey_Simons_JCP_1986}
search for a region of stability of the energies with respect to different confining
parameters. The Stieltjes imaging technique
\cite{Hazi_Kurilla_PRA_1981} allows one to represent the resonant state in a square-integrable
basis and the width is defined by the resonance-continuum coupling.
Complex rotation methods
\cite{Moiseyev_PR_1998,McCurdy_Rescigno_PRL_1978,Reinhardt_ARPC_1982}
and the methods employing complex absorbing potential
\cite{Riss_Meyer_JPB_1993,Feurbacher_Cederbaum_JCP_2003}
compute complex resonant energy as an eigenvalue of a complex, non-Hermitian
Hamiltonian.
Recently the method of analytic continuation in coupling constant (ACCC)
\cite{Kukulin_Krasnopolsky_JPA_1977,Krasnopolsky_Kukulin_PL_1978,KKH_book}
has been applied to several molecular targets, such as N$_2$
\cite{Horacek_MU_PRA_2010,White_HGMC_JCP_2017}, ethylene
\cite{JIR_JPCA_2014,Sommerfeld_MHPEM_JCTC_2017},
and amino acids \cite{Papp_Horacek_CP_2013}.
Furthermore, the known low-energy analytic structure of the resonance
was incorporated into the inverse ACCC (IACCC) method, providing the so-called
regularized analytic continuation (RAC) method. The RAC method was
successfully employed for determination of $\pi^*$ resonances of
acetylene \cite{JIR_EPJD_2016} and diacetylene \cite{JIR_JCP_2015} anions,
proving that the ACCC method can yield accurate resonance energies and
widths for various molecular systems using data obtained with standard
quantum chemistry codes.
A common feature of all methods of analytic continuation is the application of a
perturbation potential $\lambda V$ to the multi-electron Hamiltonian $H$,
i.e. $H \rightarrow H + \lambda V$.
The role of this attractive perturbation is to transform the resonant
state into a bound state.
Although the RAC method was developed
for a strictly short-range perturbation $V$, the authors were able to
successfully use the Coulomb potential in its stead
\cite{JIR_EPJD_2016,JIR_JCP_2015}. This obvious inconsistency can
yield reasonable results because, in practical applications, the
perturbation potential is often projected onto a finite set of short-range
basis functions, e.g., the Gaussian functions used by quantum chemistry
software. However, the weakly bound states obtained in this way need to be
examined carefully, because they may, in fact, be Rydberg states supported by
the basis and the long-range tail of the Coulomb perturbation $V$
\cite{JIR_JCP_2015}.
Such states need to be excluded from the continuation procedure, as they
do not represent a resonance transferred to a bound state.
In order to avoid such complications, in the present study we adopt
a short-range perturbation potential in the form of a Gaussian function
\begin{equation}
\label{eq-Gauss1}
V(r) = -\lambda e^{-\alpha r^2}\;.
\end{equation}
This choice of the perturbation was recently evaluated
by \citeauthor{White_HGMC_JCP_2017} \cite{White_HGMC_JCP_2017}
and applied to the well-known $^2\Pi_g$ resonance of N$_2^-$.
Furthermore, \citeauthor{Sommerfeld_Ehara_JCP_2015}
\cite{Sommerfeld_Ehara_JCP_2015}
introduced another short-range potential, termed the Voronoi
soft-core potential, which they successfully used to analyze the
$^2\Pi_u$ resonance of CO$_2^-$.
The present analysis of the Gaussian perturbation potential
(\ref{eq-Gauss1}) is carried out for presumably simpler
problems: atomic shape resonances of beryllium and magnesium.
Both atoms are known to possess a $p$-wave shape resonance
very close to the elastic threshold.
While in the case of the Mg$^-$ the agreement between the
available computed resonance parameters
\cite{Hunt_Moiseiwitsch_JPB_1970,Krylstedt_REB_JPB_1987,Kim_Greene_JPB_1989}
and the experimental data
\cite{Burrow_MC_JPB_1976,Buckman_Clark_RMP_1994} is quite good,
the situation is very different for the beryllium atom.
There have been a great number of theoretical studies
\cite{Kurtz_Ohrn_1979,Kurtz_Jordan_JPB_1981,McCurdy_RDL_JCP_1980,Rescigno_MO_PRA_1978,McNutt_McCurdy_PRA_1983,Krylstedt_EB_JPB_1988,Zhou_Ernzerhof_JPCL_2012,Venka_MJ_TCA_2000,Samanta_Yeager_JPCB_2008,Samanta_Yeager_IJQC_2010,Tsednee_LY_PRA_2015,Zatsarinny_BFB_JPB_2016}
aiming to numerically characterize the Be$^-$ 2$s^2\varepsilon p\; ^2P$
resonance, with various levels of success. Table III of Ref.
\cite{Tsednee_LY_PRA_2015} clearly shows that the theory of the last
four decades places the resonance position between 0.1 and 1.2 eV
and the resonance width between 0.1 and 1.7 eV. Even the most recent
calculations differ by about a factor of 3 for the two resonance parameters.
Moreover, there are no experimental data available for the Be$^-$ resonance
that could narrow the spread of all the available theoretical predictions.
Convergence patterns shown in Refs.
\cite{Tsednee_LY_PRA_2015,Zatsarinny_BFB_JPB_2016} demonstrate that the
Be$^-$ resonance may be very sensitive to an accurate description of the
electronic correlation energy. Therefore, in the present study we employ
the coupled-cluster (CCSD-T) and full configuration interaction (FCI) methods
for the perturbed Be$^-$ electron affinities, which are then continued into
the complex plane by the RAC method. The basic ideas of the RAC method
are given in Sec.~\ref{sec-rac}. A quick summary of the quantum chemistry
details is presented in Sec.~\ref{sec-qchem}.
In Sec.~\ref{sec-res} we analyze the stability
and accuracy of the RAC method with the Gaussian perturbation potential
(\ref{eq-Gauss1}). Conclusions then follow.
\section{\label{sec-rac}RAC method}
The RAC method is a very simple method for the calculation of resonance
energies and widths that embraces all known analytical features of the coupling
constant $\lambda(\kappa)$ near zero energy \cite{JIR_JCP_2015}.
The method works as follows:
\begin{itemize}
\item The atom or molecule is perturbed by an attractive interaction $V$
multiplied by a real constant $\lambda$
\begin{equation}
\rm H_{neutral} \rightarrow H_{neutral} + \lambda V\;,
\end{equation}
and the bound-state energies $E^N_i$ of the neutral state are calculated for
a set of values $\lambda_i$.
\item The same procedure is carried out for the corresponding negative ion
\begin{equation}
\rm H_{ion} \rightarrow H_{ion} + \lambda V\;,
\end{equation}
where the bound state energies $E^I_i$ are calculated for the same values of $\lambda_i$.
\item Both energies are subtracted forming the electron affinity
in the presence of the perturbation potential $V$
\begin{equation}
E^N_i - E^I_i = E_i = \kappa_i^2 \;.
\end{equation}
\end{itemize}
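Given the perturbed total energies of the neutral and the anion computed at a common set of coupling strengths, the three steps above reduce to a simple subtraction; a minimal sketch with made-up numbers (all values and variable names are illustrative only, not taken from the actual calculations) is:

```python
import numpy as np

# Hypothetical perturbed total energies (hartree) for a common set of
# coupling strengths lambda_i; in practice these come from CCSD-T/FCI runs.
lambdas = np.array([0.8, 1.0, 1.2, 1.4])
E_neutral = np.array([-14.6674, -14.6721, -14.6770, -14.6821])
E_ion = np.array([-14.6680, -14.6745, -14.6815, -14.6890])

# Electron affinity in the presence of the perturbation: E_i = E^N_i - E^I_i
E = E_neutral - E_ion

# Only bound anion states (E_i > 0) enter the continuation; kappa_i = sqrt(E_i)
mask = E > 0
kappa = np.sqrt(E[mask])
data = list(zip(kappa, lambdas[mask]))  # the set {kappa_i, lambda_i} to be fitted
```

The resulting pairs $\{\kappa_i,\lambda_i\}$ are the only input the continuation step needs.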
The new set of data points $\{\kappa_i,\lambda_i\}$ is then used to fit the function
\begin{equation}
\label{eq-PA31}
\lambda(\kappa)=\lambda_0\frac{(\kappa^2+2\alpha^2\kappa+\alpha^4+\beta^2)
(1+\delta^2\kappa)}{\alpha^4+\beta^2+
\kappa(2\alpha^2+\delta^2(\alpha^4+\beta^2))}.
\end{equation}
It is a Pad\'{e} [3/1] function, and it defines the level of complexity of
the pole behavior at low bound or continuum energies. We term it the
RAC [3/1] method. The origin of this form and the fit formulae for the
[2/1], [3/2], and [4/2] methods can
be found in Ref.~\cite{JIR_EPJD_2016}.
The parameters of the [3/1] fit, namely $\alpha$, $\beta$, $\delta$, and $\lambda_0$,
are found by minimizing the $\chi^2$ functional
\begin{equation}
\label{eq-chisqr}
\chi^2=\frac{1}{N}\sum_{i=1}^N\frac{1}{\varepsilon_i^2}\left|
\lambda(\kappa_i)-\lambda_i\right|^2,
\end{equation}
where $N$ denotes the number of points used, while $\kappa_i$ and $\lambda_i$
are the input data.
Once an accurate fit is found, only the parameters $\alpha$ and $\beta$
determine the resonance energy
\begin{equation}
E_r=\beta^2-\alpha^4\;,
\end{equation}
and the resonance width
\begin{equation}
\Gamma=4\beta\alpha^2\;.
\end{equation}
The role of the parameter $\delta$ is to describe a virtual state with $E_v=-1/\delta^4$.
Even if the studied system does not possess a virtual state, this parameter
represents a cumulative effect of the other resonances and
other poles not explicitly included in the model.
The weights $\varepsilon_i$ (the accuracy of the data) in Eq.~(\ref{eq-chisqr}) are generally unknown.
The calculation can be routinely performed with a constant $\varepsilon_i = 1$
or, if the importance of the data points closest to the origin needs to be stressed,
an increasing weight sequence (e.g., $\varepsilon_i = i$) can be used.
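The [3/1] fit and the subsequent extraction of $E_r$ and $\Gamma$ can be sketched in a few lines. The example below fits synthetic data generated from known Pad\'{e} parameters; all numbers are illustrative, and the variables \texttt{a}, \texttt{b}, \texttt{d}, \texttt{l0} stand for the Pad\'{e} parameters $\alpha$, $\beta$, $\delta$, $\lambda_0$ (not the Gaussian exponent):

```python
import numpy as np
from scipy.optimize import least_squares

def lam(kappa, p):
    # Pade [3/1] form of the RAC fit; p = (a, b, d, l0) stands for
    # (alpha, beta, delta, lambda_0) of the text
    a, b, d, l0 = p
    num = (kappa**2 + 2*a**2*kappa + a**4 + b**2) * (1 + d**2*kappa)
    den = a**4 + b**2 + kappa*(2*a**2 + d**2*(a**4 + b**2))
    return l0 * num / den

# Synthetic input data {kappa_i, lambda_i} generated from known parameters
p_true = (0.7, 0.5, 0.9, 1.0)
kappa_i = np.linspace(0.1, 1.0, 20)
lambda_i = lam(kappa_i, p_true)

# Weighted residuals of the chi^2 functional with increasing weights eps_i = i
eps = np.arange(1, kappa_i.size + 1)
res = lambda p: (lam(kappa_i, p) - lambda_i) / eps
fit = least_squares(res, x0=[0.6, 0.6, 1.0, 1.1])
a, b, d, l0 = fit.x

E_r = b**2 - a**4          # resonance position
Gamma = 4 * abs(b) * a**2  # resonance width
```

Since the synthetic data follow the model exactly, the fit recovers the generating parameters and hence the exact $E_r$ and $\Gamma$.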
The RAC method has recently been critically evaluated by
\citeauthor{White_HGMC_JCP_2017} \cite{White_HGMC_JCP_2017}. The authors tested
three types of perturbation potential
\begin{eqnarray}
V(r) &=& -\frac{\lambda}{r},\\
\label{eq-cg-pot}
V(r) &=& -\lambda \frac{e^{-\alpha r^2}}{r},\\
\label{eq-g-pot}
V(r) &=& -\lambda e^{-\alpha r^2}\;,
\end{eqnarray}
and they suggested that the attenuated Coulomb potential (\ref{eq-cg-pot}) is
the best choice of the three options, while the Gaussian potential
(\ref{eq-g-pot}) does not represent a good choice for the RAC method.
All these potentials are
easily implemented in standard quantum chemistry codes. The aim of the
present contribution is twofold:
\begin{itemize}
\item to explore the application of the Gaussian-type perturbation
and to find its parameters that allow an accurate extraction of the resonance
data with the RAC method;
\item to demonstrate that the RAC method can be applied with success to
low-lying atomic shape resonances.
\end{itemize}
Before applying the RAC method one must consider two important issues.
\begin{itemize}
\item [1.] The first is the choice of the perturbation potential, i.e., in the present
context, the choice of the exponent $\alpha$ in Eq.~(\ref{eq-g-pot}).
At present there exists no general rule or guide that helps us to choose
the perturbation potential. Therefore, it is necessary to perform
calculations for a set of values of the parameter $\alpha$ to find
an optimal choice. If an optimal range of values exists, it is
reasonable to expect that the obtained resonance data should stabilize
in such a range, because the exact function $\lambda(\kappa)$ gives
the same resonance data for every choice of the perturbation potential.
Since the present [3/1] RAC function is only approximate, one
can only expect the existence of a plateau that gives approximate values
of the resonance parameters.
\item [2.] The RAC method represents essentially a low-energy approximation
to the exact function $\lambda(\kappa)$. It is therefore obvious that
the method should be used in a range of energies (or momenta)
limited by some maximal energy $E_M$. Our empirical experience shows that
$E_M \sim 8 E_r$ (where $E_r$ is the sought resonance energy) gives a reasonable
estimate for this range of energies.
\end{itemize}
\section{\label{sec-qchem}Electron affinities}
{\it Ab initio} calculations for the electron affinities $E_i(\lambda_i)$
in the presence of the external Gaussian field (\ref{eq-g-pot}) were carried
out using the CCSD-T
\cite{Knowles_HW_CC_JCP_1993,Deegan_Knowles_CPL_1994} and FCI methods
as implemented in the MOLPRO 10 package of
quantum-chemistry programs \cite{MOLPRO10}. The core of the basis set is
Dunning's augmented correlation-consistent basis of quadruple-zeta quality,
aug-cc-pVQZ \cite{Prascher_WOKDW_TCA_2011}, for both atoms, Be and Mg. This basis set
was additionally extended, in an even-tempered fashion, by 2
($s$, $d$, $f$, $g$)-type functions and 6 $p$-type functions.
\begin{figure}[thb]
\includegraphics[width=0.8\textwidth]{eas.pdf}
\caption{\label{fig-eas}
(Color online) Electron affinities of Be$^-$ and Mg$^-$ ions under the
influence of the perturbation potential (\ref{eq-g-pot}). Full lines
are shown for the exponent $\alpha$ = 0.025, while the broken lines
are for $\alpha$ = 0.035. Red color (light gray) describes the Be$^-$
ion, and the black color is for the Mg$^-$ ion.
}
\end{figure}
Calculations
for the neutral atoms and the corresponding negative ions used the same
basis sets and the same correlation methods (CCSD-T or FCI).
The typical dependence of the electron affinities on the
external field (\ref{eq-g-pot}) is shown in Fig.~\ref{fig-eas}
for both negative ions, Be$^-$ and Mg$^-$, in the range of energies
used for the present analytic continuation. Fig.~\ref{fig-eas} yields
the following observations:
\begin{itemize}
\item As expected, the weaker perturbation potential with
$\alpha$ = 0.035 requires
a stronger scaling parameter $\lambda$ to achieve the same
negative-ion binding energies as the perturbation with
$\alpha$ = 0.025.
\item Surprisingly, a larger scaling parameter (a stronger perturbation)
is necessary to bind the Mg$^-$ resonance, even though it lies closer
to zero than the Be$^-$ resonance (as will be seen
below). Such behavior may be caused by the larger spatial extent of the Mg$^-$
3$p$ resonant wave function when compared to the reach of the
2$p$ wave function of the Be$^-$ ion.
\item The lowest binding energies are not included in the continuation
input data because of difficulties we encountered with
the quantum chemistry software. The Hartree-Fock method
is known to become unstable in very diffuse basis sets, while the
low binding energies are inaccurate if a more compact
basis is used.
\end{itemize}
Most of the present results were obtained with the CCSD-T method. However, once
the optimal exponent $\alpha$ (see Sec.~\ref{sec-res}) was found
for the beryllium atom, the affinity curve shown in Fig.~\ref{fig-eas} was
also recomputed with the expensive FCI method and the basis described above.
\section{\label{sec-res}Results}
As discussed in Sec.~\ref{sec-rac}, our goal is to search for regions of stable
results with respect to two optimization parameters. The first is the range of the
input electron affinities, defined by the maximal affinity $E_M$. The second parameter,
the exponent $\alpha$ in Eq.~(\ref{eq-g-pot}), defines the shape of the perturbation
potential.
\begin{figure}[thb]
\includegraphics[width=0.8\textwidth]{emax.pdf}
\caption{\label{fig-emax}
(Color online) Resonance energy (shown as circles) and width (displayed as diamonds)
calculated for Be$^-$ and Mg$^-$ as functions of the energy extent defined by
the maximal energy $E_M$. The exponents $\alpha$ are fixed at $\alpha$ = 0.035
for Be$^-$ and $\alpha$ = 0.025 for Mg$^-$.
}
\end{figure}
\begin{figure}[thb]
\includegraphics[width=0.8\textwidth]{chi2.pdf}
\caption{\label{fig-chi2}
(Color online) Quality of the RAC fit for the resonance of Be$^-$
as a function of the maximal energy. Exponent $\alpha$ is fixed at 0.035
and the increasing weights set $\varepsilon$ = $i$ are used.
}
\end{figure}
The typical dependence of the resonance parameters on the maximal energy is shown
in Fig.~\ref{fig-emax} for the fixed $\alpha$ parameters.
It is clear that the stability is a little worse for the Be$^-$ ion when compared
to the Mg$^-$ ion. However, it is possible to narrow the spread of the obtained
resonance data by considering the value of $\chi^2$ defined by
Eq.~(\ref{eq-chisqr}).
Fig.~\ref{fig-chi2} shows the dependence of the $\chi^2$ quantity on the
maximal energy $E_M$. A pronounced minimum at $E_M$ = 1.92 eV is clearly
visible. This allows the application of a best-fit condition. Such a
restriction leads to a well-defined $E_M$ for each choice of the perturbation
parameter $\alpha$, producing the data sets shown in Fig.~\ref{fig-alpha}.
\begin{figure}[thb]
\includegraphics[width=0.8\textwidth]{alpha.pdf}
\caption{\label{fig-alpha}
(Color online) Resonance energy $E_r$ (circles connected full lines) and the
resonance width $\Gamma$ (diamonds connected with dashed lines) as functions
of $\alpha$ parameter of the perturbation potential.
}
\end{figure}
For beryllium, the resonance position and width stabilize for
$\alpha >$~0.02. The best fit is obtained for $\alpha$ = 0.035, resulting in
$E_r$ = 0.323 eV and $\Gamma$ = 0.317 eV. In order to estimate the accuracy
of the correlation energy provided by the CCSD-T method, we also recomputed
these final results with the FCI method. The FCI affinities yield
$E_r$ = 0.282 eV and $\Gamma$ = 0.316 eV.
A detailed summary of the available theoretical results
for the Be$^-$ resonance was presented in
Tab.~III of Ref.~\cite{Tsednee_LY_PRA_2015}.
A comparison with the most recent computations will be given in
Sec.~\ref{sec-conc}.
In the case of the magnesium ion, the resonance energy is very stable over the whole
range of examined perturbation parameters $\alpha$. However, the width exhibits
a weak dependence on the exponent $\alpha$. This feature may indicate that the
low-order RAC method is inadequate for the Mg$^-$ resonance. Nonetheless, the
best fit is obtained for $\alpha$ = 0.025, giving $E_r$ = 0.188 eV and
$\Gamma$ = 0.167 eV. The available data for the Mg$^-$ resonance are summarized
in Tab.~\ref{tab-Mg}.
\begin{table}
\caption{\label{tab-Mg}
Comparison of the available data for the resonance energy
$E_r$ and the resonance width $\Gamma$ for the
3s$^2\varepsilon$p $^2$P state of atomic magnesium.
}
\begin{center}
\begin{tabular}{lcc}
\hline
Method & Resonance energy $E_r$(eV) & Resonance width $\Gamma$(eV) \\
\hline\hline
Model potential \cite{Hunt_Moiseiwitsch_JPB_1970} & 0.37 & 0.10\\
Model potential \cite{Kim_Greene_JPB_1989} & 0.161 & 0.160\\
Complex rotation \cite{Krylstedt_REB_JPB_1987} & 0.08 & 0.17\\
Stabilization \cite{Chao_FY_JCP_1990} & 0.14 & 0.08\\
Complex SCF \cite{McCurdy_LM_JCP_1981} & 0.50 & 0.54\\
Finite elements \cite{Gallup_PRA_2011} & 0.159 & 0.12\\
Experiment \cite{Burrow_MC_JPB_1976} & 0.15$\pm$0.03 & $\sim$0.14\\
Recommended value \cite{Buckman_Clark_RMP_1994} & 0.15 & 0.16\\
Present RAC & 0.19 & 0.16\\
\hline
\end{tabular}
\end{center}
\end{table}
The presently computed resonance energy is about 40 meV higher than the experimental
value of \citeauthor{Burrow_MC_JPB_1976}
\cite{Burrow_Comer_JPB_1985,Burrow_MC_JPB_1976}. Such a discrepancy may have several
possible reasons:
\begin{itemize}
\item[1.] The experimental resolution is about 30--40 meV \cite{Burrow_MC_JPB_1976}.
\item[2.] The discrepancy between the correlation energies of the CCSD-T and FCI methods
in the present basis set is about 41 meV for the electron affinity of
the beryllium atom. A similar difference can also be expected for magnesium.
Moreover, the
weaker stability of $\Gamma$ with respect to the perturbation potential
(shown in Fig.~\ref{fig-alpha})
indicates that a higher-order continuation may be necessary.
\item[3.] The experimental resonance energy \cite{Burrow_MC_JPB_1976} was determined
from the maximum of the measured cross section, whereas the present method
defines the resonance energy from a pole of the $S$-matrix. The two definitions
give similar results for a narrow resonance ($\Gamma < E_r$), but for
a broader resonance ($\Gamma \geq E_r$), as in the present case,
the results may differ.
\end{itemize}
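The difference between the cross-section-maximum and $S$-matrix-pole definitions can be illustrated with a toy model of our own construction (not the actual Mg$^-$ cross section): a Breit-Wigner profile with the p-wave threshold-law width $\Gamma(E) \propto E^{3/2}$ and a $1/E$ flux factor. For a resonance as broad as the present one, the cross-section maximum falls visibly below the pole position $E_r$:

```python
import numpy as np

E_r, Gamma_r = 0.19, 0.17     # pole parameters (eV); broad: Gamma ~ E_r
E = np.linspace(0.01, 0.6, 5001)

# p-wave threshold law for the width and a 1/E flux factor (toy model)
Gamma_E = Gamma_r * (E / E_r)**1.5
sigma = (1.0 / E) * (Gamma_E / 2)**2 / ((E - E_r)**2 + (Gamma_E / 2)**2)

E_peak = E[np.argmax(sigma)]  # cross-section maximum, below E_r in this model
```

With a constant width and no prefactor the Lorentzian would peak exactly at $E_r$; the shift here comes entirely from the energy dependence introduced by the toy model.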
\section{\label{sec-conc}Conclusions}
The present study confirms the observations of \citeauthor{White_HGMC_JCP_2017}
\cite{White_HGMC_JCP_2017}, in which the authors state that the Gaussian
perturbation potential is more difficult to apply than potentials possessing
the Coulomb singularity. It has been shown in the case of a model potential
\cite{White_HGMC_JCP_2017} that the trajectory of the resonant pole is more
complicated for the Gaussian perturbation. In the present study we have shown
that, in order to obtain stable results, the RAC method must be restricted
to fairly low electron affinities, and a careful analysis of the results with
respect to the width of the perturbation potential must be carried out.
Such a procedure allowed us to apply the RAC method to one of the remaining
enigmas among the shape resonances of small atoms, the 2$s^2\varepsilon p\; ^2P$
resonance of Be$^-$. To the best of our knowledge, there are no experimental
data available for this resonance. The important role of the correlation energy
in this system creates a challenging task for theory, despite the fact
that Be$^-$ possesses only 5 electrons. Consequently, the roughly two dozen
theoretical predictions (found in Refs.~\cite{Kurtz_Ohrn_1979,Kurtz_Jordan_JPB_1981,McCurdy_RDL_JCP_1980,Rescigno_MO_PRA_1978,McNutt_McCurdy_PRA_1983,Krylstedt_EB_JPB_1988,Zhou_Ernzerhof_JPCL_2012,Venka_MJ_TCA_2000,Samanta_Yeager_JPCB_2008,Samanta_Yeager_IJQC_2010,Tsednee_LY_PRA_2015,Zatsarinny_BFB_JPB_2016})
do not result in any kind of consensus.
Two methods with a high level of correlation description, the CCSD-T and FCI methods,
were applied
in the present study. While the position of the resonance shifts to lower energies
by about 41 meV for the more accurate FCI method, the resonance width was found
to be insensitive to the correlation treatment.
The presently calculated FCI
resonance energy $E_r$ = 0.282 eV and width $\Gamma$ = 0.316 eV are in
good agreement with the complex CI results of
\citeauthor{McNutt_McCurdy_PRA_1983} \cite{McNutt_McCurdy_PRA_1983}, which
predict $E_r$ = 0.323 eV and $\Gamma$ = 0.296 eV. Recent scattering
calculations
\cite{Zatsarinny_BFB_JPB_2016} determined the resonance with $E_r$ =
$0.31 \pm 0.04$ eV and $\Gamma$ = $0.40 \pm 0.06$ eV, again in good
agreement with the present results.
However, another set of recent calculations, by
\citeauthor{Tsednee_LY_PRA_2015} \cite{Tsednee_LY_PRA_2015}, places the
resonance at $E_r$ = 0.756 eV and $\Gamma$ = 0.874 eV.
In the case of the 3$s^2\varepsilon p\; ^2P$ resonance of Mg$^-$, a comparison
with experiment is available. Although the present calculations place
the resonance about 40~meV higher than the experiment
\cite{Burrow_MC_JPB_1976}, they still exhibit the best agreement with the
experimental data among the \textit{ab initio} methods.
\begin{acknowledgments}
The contributions of R\v{C} were supported by
the Grant Agency of the Czech Republic (Grant No. GACR 18-02098S).
JH conducted this work with support of the Grant Agency of Czech Republic
(Grant No. GACR 16-17230S).
IP acknowledges support from the Grant Agency of the Czech Republic
(Grant No. GACR 17-14200S).
\end{acknowledgments}
\bibliographystyle{apsrev}
\section{Introduction}
\label{sec:introduction}
In several processes of different natures, the inherent complexity of obtaining a detailed model sometimes requires the designer to simplify the modeling in order to be able to control the plant \cite{Nise2000}. In some cases, as for power systems \cite{Chaudhuri2012,Xie2021}, a precise model of the grid is required to design a satisfactory controller. For dc-dc converters, for example, the majority of the control techniques assume the existence of an accurate model \cite{Kazimierczuk2008,Kobaku2017}, presenting a challenge to the designer since power converters have nonlinear dynamics. Another situation that presents difficulty to the designer is the obtention of robust low-order controllers, such as PI and PID controllers, which are simpler to implement and are vastly applied in industry \cite{Aguiar2018,Tharanidharan2022,Tudon-Martinez2022,vanTan20222}. This difficulty can originate from poor modeling due to the process complexity, and/or from the limited performance of the chosen controller structure \cite{Keel2008}.
On the other hand, data-driven control design techniques are used to overcome common problems related to models, such as the dilemma between representativity and complexity, or even their unavailability \cite{Remes2021,Zenelis2022,Huang2022}. Some of the data-driven approaches require several plant experiments and the iterative acquisition of data, such as Iterative Feedback Tuning (IFT) \cite{Hjalmarsson1998} and Correlation-based Tuning (CbT) \cite{Karimi2004}, whilst others, such as Virtual Reference Feedback Tuning (VRFT) \cite{Campi2002}, Direct Iterative Tuning (DIT) \cite{Kammer2000}, Optimal Controller Identification (OCI) \cite{Campestrini2017}, Virtual Disturbance Feedback Tuning (VDFT) \cite{Eckhard2018}, and the Data-Driven Linear Quadratic Regulator (DD-LQR) \cite{Goncalves2019}, only require a single batch of data in order to tune the controller parameters. Having a one-shot method can be a desirable feature in data-driven control design because it requires simpler experimentation and less memory, and results in a less tedious process.
Robustness considering low-order controllers is a frequent topic of discussion \cite{Perez2018,Alcantara2013}, since certain processes can present uncertainties, as well as disturbances that might occur over time. Another point to be observed is that a poor choice of reference model or a limited controller class in, e.g., the VRFT design may result in poor performance or robustness \cite{Bazanella2014}. Since robustness can be measured by the $\mathcal{H}_{\infty}$ norm of the sensitivity transfer function $S(z)$ of a closed-loop system \cite{Skogestad2005}, its inclusion in the data-driven design of controllers could be considered, allowing for a more robust design when necessary. A recent methodology proposed in \cite{Chiluka2021} has suggested the inclusion of robustness criteria in the VRFT design, at the expense of: i) more experiments, since the proposed design procedure iterates in a trial-and-error fashion until the desired robustness is achieved, essentially removing one of the greatest advantages of the VRFT, namely being a one-shot method; and ii) this type of iterative procedure usually requires more background knowledge from the designer for choosing reference models and specifying requirements. A data-driven one-shot approach for multivariable systems regarding robust solutions of $\mathcal{H}_{2}$ and $\mathcal{H}_{\infty}$ criteria and loop-shaping specifications has been recently addressed \cite{Karimi2017}; it relies on an initial solution that influences the final one, since the problem is convexified by linearization around the initial stabilizing controller. In the case of data-driven one-shot approaches for SISO systems considering an $\mathcal{H}_{\infty}$ robust performance criterion, a recent work \cite{Nicoletti2019} presents a solution that increases the controller order as the solution converges, dealing only with noise-free data. Both methods \cite{Karimi2017, Nicoletti2019} use only frequency-domain data. 
Other data-driven robust solutions are achieved in an online fashion and require higher computational processing than offline techniques, in addition to the need to measure signals in real time. Some of these techniques are the use of the modified Riccati equation with online data-driven learning \cite{Na2021} and the application of a data-driven Model Predictive Control method with robustness guarantees \cite{Berberich2021}.
Among the aforementioned data-driven design techniques, the VRFT has been applied most broadly, to several classes of problems, a fact that can be useful to attest to the feasibility of the proposed method of this paper; it also requires a lower computational cost than, e.g., OCI and online methods. Therefore, this work is based on the VRFT technique, as presented in \cite{Campi2002} and \cite{Campestrini2011}, proposing the inclusion of the $\norm{S(z)}_{\infty}$ norm criterion in the VRFT design while maintaining one of its most attractive features, which is the necessity of only a single batch of data. In order to achieve such robustness requirements, the robustness criterion is inserted as a constraint in the VRFT optimization problem, in the form of a penalty \cite{Luenberger2015}, which compromises the convex behavior of the VRFT cost function. To deal with this non-convex optimization problem, the proposed method is tackled in two main steps:
i) design, in a data-driven fashion, a controller using the VRFT approach, if a previous controller does not exist; and ii) considering the controller from the previous step as the initial solution and using the same batch of data, apply a metaheuristic optimization algorithm to minimize the cost function considering an $\norm{S(z)}_{\infty}$ norm constraint.
Since metaheuristic algorithms may do well for one class of problems but worse for other classes, according to the No Free Lunch (NFL) theorems \cite{Wolpert:NFL:1997}, more than a single metaheuristic optimization algorithm must be evaluated. Looking over the available types of metaheuristics, three can be highlighted: evolutionary algorithms \cite{Mirjalili2019}; physics-based algorithms \cite{Alatas2015}; and swarm intelligence algorithms. Although some authors group evolutionary algorithms with multiple agents and swarm algorithms together \cite{Wahab2015}, this paper considers the two classes separately, as done by other authors on the metaheuristic optimization subject \cite{Mirjalili:GWO:2014}. This work focuses on swarm intelligence algorithms, since they usually have fewer parameters to be tuned by the user or designer \cite{Wahab2015}. Four swarm intelligence algorithms are considered: Particle Swarm Optimization (PSO) \cite{Kennedy:PSO:1995} and Artificial Bee Colony (ABC) \cite{Karabog:ABC:2005}, since these are two of the most used in the literature; and the two swarm intelligence algorithms with the fewest hyperparameters, the Grey Wolf Optimizer (GWO) \cite{Mirjalili:GWO:2014} and its most recent version, the Improved Grey Wolf Optimizer (I-GWO) \cite{Nadimi:IGWO:2021}.
This paper is structured as follows: Section~\ref{sec:preliminaries} describes the system that is considered in this paper for the theoretical formulation; Section~\ref{sec:ddc} presents the basis of data-driven controller design and details the basic VRFT design procedure; Section~\ref{sec:Ms} explains the method for $\mathcal{H}_{\infty}$ norm estimation of the sensitivity transfer function; Section~\ref{sec:swarm} details the four used swarm intelligence algorithms; Section~\ref{sec:method} presents the proposed method; Section~\ref{sec:examples} illustrates and validates the method by showing its application in two real-world inspired examples; and finally, Section~\ref{sec:conclusion} concludes this work.
\section{Preliminaries: description of the system}
\label{sec:preliminaries}
The system considered in this paper for the theoretical formulation is a discrete-time, causal, linear time-invariant, and Single-Input Single-Output (SISO) system $G(z)$. It is considered that $z$ is the forward discrete time-shift operator such that $zx(k) = x(k+1)$. The output $y(k)$ of this system can be described as
\begin{equation}
\label{c4:eq:y1}
y(k) = G(z) u(k) + v(k),
\end{equation}
where $u(k)$ is the input signal and $v(k)$ is the process noise, i.e., stochastic effects that are not represented by $G(z)$ and not captured by the input-output relation between $u(k)$ and $y(k)$.
The closed-loop system considered in this work comprises a controller $C(z)$, the process $G(z)$, and a unit-gain feedback, as shown in Figure~\ref{fig:block_casestudy}, where $r(k)$ is the reference signal and $e(k)$ is the error signal. The closed-loop control law is
\begin{equation}
u(k) = C(z,\rho) (r(k) - y(k)),
\end{equation}
with
\begin{equation}
\label{eq:c-rhobar}
C(z,\rho) = \rho' \bar{C}(z)
\end{equation}
being a controller with parameter $\rho \in \mathbb{R}^p$, $\bar{C}(z)$ a vector of transfer functions belonging to the controller class $\mathcal{C}$ (e.g., PI or PID controller classes) and $r(k)$ the reference signal.
The output of the closed-loop system is given as
\begin{equation}
y(k) = T(z) r(k) + S(z)v(k),
\end{equation}
where the reference signal $r(k)$ is applied to the transfer function from the reference $r(k)$ to the output $y(k)$, $T(z)$, with
\begin{equation}
\label{eq:T}
T(z) = \frac{C(z)G(z)}{1 + C(z)G(z)},
\end{equation}
and $S(z)$ is the sensitivity transfer function such that $S(z) + T(z) = 1$ and
\begin{equation}
\label{eq:sensitivity}
S(z) = \frac{1}{1 + C(z)G(z)}.
\end{equation}
\begin{figure}[!htb]
\centering
\label{fig:block_casestudy}
\begin{tikzpicture}[auto, node distance=1.5cm]
\node [input, name=input] {};
\node [sum, right of=input] (sum) {};
\node [block, right of=sum, node distance=2cm] (controller) {$C(z)$};
\node [block, right of=controller, node distance=3cm] (system) {$G(z)$};
\draw [->] (controller) -- node[name=u] {$u(k)$} (system);
\node [sum, right of=system, node distance = 2cm] (sum2) {};
\node [output, right of=sum2, node distance = 2cm] (output) {};
\draw [draw,->] (input) -- node[pos=0.2] {$r(k) \ \ +$} (sum);
\node [input, name=noise_input, above of=sum2, node distance = 1cm] {};
\draw [draw,->] (noise_input) node[above of=sum2,node distance=1.3cm] {$v(k)$} -- node[near end] {$+$} (sum2);
\draw [->] (sum) -- node {$e(k)$} (controller);
\draw [->] (system) -- node [near end] {$+$} (sum2);
\draw [->] (sum2) -- node [name=y] {$y(k)$}(output);
\node [tmp, below of=u] (tmp1){};
\draw [->] (y) |- (tmp1)-| node[pos=0.99] {$-$} (sum);
\end{tikzpicture}
\caption{Block diagram of the considered closed-loop system structure for this paper.}
\end{figure}
In the next section, the Model Reference Control (MRC), which is the basis for the VRFT, is introduced. The VRFT method is described for the cases where the process has a minimum and non-minimum phase.
\section{Data-driven controller design}
\label{sec:ddc}
The Model Reference Control (MRC), which is the basis for the VRFT and is more generally called model matching control \cite{Goodwin2000}, concerns the tracking of the reference by the closed-loop response, disregarding the effects of noise at the output \cite{Bazanella2014}. %
In order to obtain a controller, the MRC requires the designer to specify a target transfer function for the controlled closed-loop system, called the reference model, $T_d(z)$, which generates the desired output $y_d(k) = T_d(z) r(k)$. A reference tracking performance criterion, evaluated through the two-norm of the tracking error, is then optimized by solving the problem
\begin{mini}|l|
{\rho}{J^{MR}(\rho) = \norm{(T(z,\rho) - T_d(z)) r(k)}_2^2}{}{}
\label{eq:mrc_J}
\end{mini}
which can be solved considering \eqref{eq:T}, resulting in the solution controller for the MRC, called the ideal controller $C_d(z)$.
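Substituting \eqref{eq:T} into \eqref{eq:mrc_J} and imposing $T(z,\rho) = T_d(z)$ yields the well-known expression for the ideal controller,
\begin{equation}
C_d(z) = \frac{T_d(z)}{G(z)\left(1 - T_d(z)\right)},
\end{equation}
which achieves $J^{MR}(\rho) = 0$ but, in general, does not belong to the chosen controller class $\mathcal{C}$.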
\subsection{Virtual Reference Feedback Tuning}
\label{ssec:vrft}
The VRFT is a one-shot data-driven controller design technique based on the MRC. It is called one-shot because a single batch of input-output data suffices to solve the model reference control problem \eqref{eq:mrc_J}, which can be done by least squares when the controller is linearly parametrized as in \eqref{eq:c-rhobar}, yielding the parameter vector $\rho$ of a controller of predefined class. The VRFT design depicted in this paper follows the procedures of \cite{Bazanella2014,Remes2021}.
Consider an experiment, in open-loop or closed-loop, that results in a batch of collected data $\{ u,y\}_{k=1}^{N}$. A \textit{virtual reference} signal $\bar{r}(k)$ is defined such that $T_d(z)\bar{r}(k) = y(k)$. A virtual error can be obtained as $\bar{e}(k) = \bar{r}(k) - y(k) = (T_d^{-1}(z) - 1) y(k)$. In summary, a controller $C(z,\rho) = \rho' \bar{C}(z)$ is considered satisfactory if it generates $u(k)$ when fed by $\bar{e}(k)$. The closed-loop block diagram for the VRFT controller design is illustrated in Figure~\ref{fig:vrft}.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.7\linewidth]{Figures/virtual_system.pdf}
\caption{Closed-loop block diagram for the VRFT controller design.}
\label{fig:vrft}
\end{figure}
The VRFT solves the optimization problem
\begin{mini}|l|
{\rho}{J^{VR}(\rho) = \norm{u(k) - C(z,\rho) (T_d^{-1}(z) - 1) y(k)}_{2}^{2}}{}{}
\label{eq:vrft_j}
\end{mini}
which has the same minimum as \eqref{eq:mrc_J} if the ideal controller $C_d(z)$ in \eqref{eq:T} belongs to the controller class $\mathcal{C} = \{ C(z,\rho), \rho \in \mathbb{R}^p \}$. To compensate for the fact that the ideal controller rarely belongs to the chosen class, a filter $L(z)$ is applied to the data so that the minimum of $J^{VR}$ approximates the minimum of $J^{MR}$; its magnitude should satisfy \cite{Bazanella2014}:
\begin{equation}
\label{eq:filter}
|L(e^{j\Omega})|^2 = |T_d(e^{j\Omega})|^2 |1 - T_d(e^{j\Omega})|^2 \frac{\Phi_r(e^{j\Omega})}{\Phi_u (e^{j\Omega})}, \quad \forall \Omega \in [-\pi,\pi],
\end{equation}
where, for any signal or system $x$, $x(e^{j\Omega})$ denotes the Discrete Fourier Transform of $x(k)$, and $\Phi_r(e^{j\Omega})$ and $\Phi_u (e^{j\Omega})$ are the power spectra of the signals $r(k)$ and $u(k)$, respectively.
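As an illustration of the one-shot solution, the least-squares step can be sketched as follows. The plant, the first-order reference model $T_d(z) = (1-a)/(z-a)$, and the PI controller class used below are hypothetical choices, and the ideal controller is assumed to belong to the class, so that the filter $L(z)$ can be taken as unity.

```python
import numpy as np
from scipy import signal

def vrft_pi(u, y, a):
    """One-shot VRFT for a PI controller C(z, rho) = rho_1 + rho_2 z/(z-1),
    with reference model T_d(z) = (1-a)/(z-a) (hypothetical choices, L(z)=1).
    """
    # Virtual error e_bar = (T_d^{-1}(z) - 1) y = ((z-1)/(1-a)) y; the
    # one-sample advance is allowed since the virtual signals are built offline.
    e_bar = (y[1:] - y[:-1]) / (1.0 - a)
    # Regressor columns: each controller basis function applied to e_bar.
    phi_p = e_bar                                            # term 1
    phi_i = signal.lfilter([1.0, 0.0], [1.0, -1.0], e_bar)   # term z/(z-1)
    Phi = np.column_stack([phi_p, phi_i])
    # Least squares on J^VR: rho = argmin || u - Phi rho ||_2^2.
    rho, *_ = np.linalg.lstsq(Phi, u[:-1], rcond=None)
    return rho
```

For a first-order plant $G(z) = b_0/(z-p)$, the ideal controller $C_d(z) = (1-a)(z-p)/(b_0(z-1))$ is itself a PI controller, so on noise-free data the procedure recovers it exactly.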
Instrumental variables can be used to suppress the estimation bias caused by noise in the data \cite{Ljung1999}, at the cost of a second data batch. In practice, if memory restrictions are not a concern, the input signal can be formed by two identical sequences within the same experiment; the collected signals are then split accordingly, yielding two batches of data from a single experiment.
In the presence of a Non-Minimum Phase (NMP) zero in the process, a flexible reference model can be used, as presented in \cite{Bazanella2014}. The optimization problem \eqref{eq:vrft_j} then becomes
\begin{mini}|l|
{\rho}{J^{VR}(\rho) = \norm{\eta' F(z) (u(k) + \rho' \bar{C}(z) y(k)) - \rho' \bar{C}(z) y(k)}_{2}^{2}}{}{}
\label{eq:vrft_j_f}
\end{mini}
where $\eta \in \mathbb{R}^{m}$, and $F(z)$ is a vector of transfer functions such that $T_d(z,\eta) = \eta' F(z)$. The step-by-step design for the VRFT with flexible reference model, from data collection to the algorithm design, is detailed in \cite{Remes2021}.
In order to include a robustness criterion in the VRFT cost function, a means of evaluating such an index from data is required; this is addressed in the next section.
\section{Robustness index estimation}
\label{sec:Ms}
Depending on the choice of $T_d(z)$ or the controller class $\mathcal{C}$, as well as the response of the plant $G(z)$, the VRFT-designed controller can result in poor robustness of the controlled process. For such cases, a robustness constraint can be included in the form of the $\mathcal{H}_{\infty}$ norm of $S$, here called $M_S$, which can be used as a measure of robustness \cite{Skogestad2005}.
Typically, a system that presents $M_S > 2$ is considered to have poor robustness \cite{Skogestad2005}. In this context, and considering the use of data-driven design approaches, the value of $\norm{S(z)}_{\infty}$ must be estimated from data alone, since no plant model is assumed to be available to the designer. The estimation of $M_S$ is explained in the following subsection.
\subsection{Estimation of \texorpdfstring{$M_S$}{Ms}}
\label{ssec:ms_est}
The $\mathcal{H}_{\infty}$ norm estimation procedure developed in this work is based on the Impulse Response (IR) of the system, as presented in \cite{Fiorio2022}, which adapts \cite{Goncalves2020} to a SISO impulse response identification procedure and allows for regularized estimation according to the existing literature \cite{Chen2012}. In order to maintain the one-shot characteristic of the VRFT, the estimation of the $\mathcal{H}_{\infty}$ norm of $S$ ($M_S$), based on its impulse response and on a single batch of data, is addressed as follows. Consider the linear, causal, discrete-time SISO system $S$, represented by its transfer function $S(z,\rho)$, such that its output signal $\psi(k)$ is given by the convolution
\begin{equation}
S : \psi(k) = s(k)\ast \zeta(k) = \sum_{n=0}^{\infty}s(k-n)\zeta(n),
\label{eq:sk}
\end{equation}
where $\zeta(k)$ is the input signal of $S$ and $s(k)$ is its impulse response.
Since \eqref{eq:sk} requires infinite data, an order $M$ is defined such that any IR term beyond $M$ is assumed negligible, which is valid for stable systems, since $\lim_{k\to\infty}s(k) = 0$. Hence, the convolution in \eqref{eq:sk} can be truncated to $M$ terms, leading to:
\begin{equation}
S : \psi(k) = \sum_{n=0}^{\infty}s(k-n)\zeta(n) \approx \underbrace{\sum_{n=0}^{M}s(k-n)\zeta(n)}_{|s(M+1)|< \epsilon\text{, with }\epsilon\to 0^+}.
\label{eq:sk_approx}
\end{equation}
On the other hand, the definition of $\mathcal{H}_{\infty}$ norm, when applied to the system $S$, can be written as \cite{Skogestad2005}
\begin{equation}
\label{eq:sinf_def}
\mathcal{H}_{\infty} : \norm{S}_{\infty} = \max_{\zeta(k)\neq 0}\dfrac{\norm{s(k)\ast \zeta(k)}_{2}}{\norm{\zeta(k)}_{2}},
\end{equation}
which requires the whole set of possible inputs $\{\zeta(k) \neq 0\}$; therefore, expression \eqref{eq:sinf_def} cannot be calculated directly. An alternative strategy is to obtain a matrix relation for $S$, which allows for the use of induced norm properties. Expanding \eqref{eq:sk_approx} for the first $M+1$ samples,
\begin{equation}
\begin{cases}
& \psi(0) = s(0) \zeta(0)\\
& \psi(1) = s(1) \zeta(0) + s(0) \zeta(1)\\
& \vdots \\
& \psi(M) = s(M)\zeta(0) + \cdots + s(0) \zeta(M),
\end{cases}
\label{eq:S_expanded}
\end{equation}
the following matrix relation, truncated at order $M$, is obtained:
\begin{equation}
\underbrace{
\begin{bmatrix}
\psi(0) \\ \psi(1) \\ \vdots \\ \psi(M)
\end{bmatrix}
}_{\Psi_M}
=
\underbrace{
\begin{bmatrix}
s(0) & 0 & \cdots & 0 \\
s(1) & s(0) & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
s(M) & s(M-1) & \cdots & s(0)
\end{bmatrix}
}_{S_M}
\underbrace{
\begin{bmatrix}
\zeta(0) \\ \zeta(1) \\ \vdots \\ \zeta(M)
\end{bmatrix}
}_{Z_M}
.
\label{eq:SM}
\end{equation}
Under the assumption that the order $M$ is sufficiently high, the matrix $S_M$ characterizes the IR $s(k)$ of $S$.
A useful matrix property is the induced norm \cite[A.5]{Skogestad2005}, which can be applied to \eqref{eq:SM}, such that:
\begin{equation}
\norm{S_M}_{ip} = \max_{Z_M\neq 0}\dfrac{\norm{S_M Z_M}_{p}}{\norm{Z_M}_{p}},
\label{eq:ind_norm}
\end{equation}
where the subscript $i$ stands for induced. In short, \eqref{eq:ind_norm} is a matrix representation of the system gain over a set of possible input signals $Z_M$. For the induced 2-norm, it holds that
\begin{equation}
\label{eq:gmi2}
\norm{S_M}_{i2} = \bar{\sigma}(S_M) = \sqrt{\lambda_{max}\left(S_M' S_M\right)},
\end{equation}
where $\bar{\sigma}$ and $\lambda_{max}$ stand for the largest singular value and the largest eigenvalue, respectively. Comparing \eqref{eq:sinf_def} with \eqref{eq:ind_norm}, it can be seen that
\begin{equation}
\label{eq:norm_Sinf}
\norm{S}_\infty \approx \max_{Z_M \neq 0}\frac{\norm{S_M Z_M}_{2}}{\norm{Z_M}_{2}} = \sqrt{\lambda_{\max}(S_M' S_M)}.
\end{equation}
Since the norm $\norm{S(z,\rho)}_{\infty}$ can be estimated from the IR via \eqref{eq:norm_Sinf}, expressions for the input signal $\zeta(k)$ and the output signal $\psi(k)$ of $S(z,\rho)$ must be obtained in order to estimate this impulse response in the VRFT design context.
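As a numerical illustration of \eqref{eq:SM}--\eqref{eq:norm_Sinf}, the sketch below (with a hypothetical stable transfer function in place of the closed-loop sensitivity) builds the truncated Toeplitz matrix from an impulse response and takes its largest singular value; the chosen system has true peak gain $4/3$, attained at $\Omega = \pi$.

```python
import numpy as np
from scipy import linalg, signal

def hinf_from_ir(s):
    """||S||_inf estimate: largest singular value of the lower-triangular
    Toeplitz matrix S_M built from the truncated impulse response s(0..M)."""
    row = np.zeros_like(s)
    row[0] = s[0]
    S_M = linalg.toeplitz(s, row)        # s down the first column, zeros above
    return np.linalg.norm(S_M, 2)        # induced 2-norm = max singular value

# Hypothetical sensitivity S(z) = (z-1)/(z-0.5); its true peak gain is
# |S(e^{j*pi})| = 2/1.5 = 4/3.
M = 1000
delta = np.zeros(M + 1)
delta[0] = 1.0
s = signal.lfilter([1.0, -1.0], [1.0, -0.5], delta)  # truncated IR
ms_hat = hinf_from_ir(s)
```

Since $S_M$ is a compression of the full convolution operator, the estimate approaches the true norm from below as $M$ grows.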
\subsubsection{Input-output signals of the sensitivity transfer function}
Considering the system presented in Figure~\ref{fig:vrft}, its sensitivity transfer function in \eqref{eq:sensitivity} can be rewritten as
\begin{equation}
\label{eq:S_dif}
1 + C(z,\rho) G(z) = S^{-1}(z,\rho).
\end{equation}
Assuming that $u(k)$ is sufficiently informative to capture all relevant characteristics of $S(z,\rho)$, and multiplying both sides of \eqref{eq:S_dif} by $u(k)$, it is obtained that
\begin{equation}
\label{eq:S_halfway}
u(k) + C(z,\rho) G(z) u(k) = S^{-1}(z,\rho) u(k).
\end{equation}
Since $G(z) u(k) = y(k)$, substituting this relation in \eqref{eq:S_halfway} yields:
\begin{equation}
u(k) + C(z,\rho) y(k) = S^{-1}(z,\rho) u(k).
\end{equation}
Finally, the signals
\begin{equation}
\label{eq:S_signals}
\psi(k) = u(k), \quad \zeta(k) = u(k) + C(z,\rho) y(k),
\end{equation}
can be defined, which means that when a signal $\zeta(k)$ formed by $u(k) + C(z,\rho) y(k)$ is applied to $S(z,\rho)$, the output $\psi(k) = u(k)$ is obtained. Therefore, the impulse response of $S(z,\rho)$ can be estimated from the data set $\{ \psi,\zeta\}_{k=1}^{N}$ defined in \eqref{eq:S_signals}.
In this work, the IR estimation is carried out with regularization techniques, since: i) the variance of the estimates increases with $M$, which is suppressed by regularization \cite{Chen2012}; and ii) for sufficiently high $M$ the IR is a sparse signal, and regularization is known to improve sparse signal estimates \cite{Brunton2019}. The algorithm for regularized impulse response estimation is described in \cite{Chen2012,Chen2013}, and implementations are available in MATLAB\textregistered~\cite{Matlab2017}, Python~\cite{Fiorio2021}, and R~\cite{Yerramilli2017}.
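A minimal sketch of the regularized IR estimation step is given below, using plain ridge (Tikhonov) regularization as a simple stand-in for the kernel-based methods of the cited references; the test system, the order $M$, and the regularization weight are illustrative choices.

```python
import numpy as np
from scipy import linalg, signal

def ir_ridge(zeta, psi, M, lam=1e-2):
    """Estimate s(0..M) from input zeta and output psi of S by
    ridge-regularized least squares on the convolution model
    psi(k) ~ sum_n s(n) zeta(k-n)."""
    # Regression matrix: column n holds the input delayed by n samples.
    Z = linalg.toeplitz(zeta, np.r_[zeta[0], np.zeros(M)])
    # Normal equations with Tikhonov term: (Z'Z + lam*I) s = Z' psi.
    return np.linalg.solve(Z.T @ Z + lam * np.eye(M + 1), Z.T @ psi)

# Check on a known first-order system with IR s(k) = 0.8^k, no noise.
rng = np.random.default_rng(1)
zeta = rng.standard_normal(2000)
psi = signal.lfilter([1.0], [1.0, -0.8], zeta)
s_hat = ir_ridge(zeta, psi, M=50)
```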
The inclusion of an $\norm{S(z,\rho)}_{\infty}$ constraint in the VRFT cost function \eqref{eq:vrft_j} spoils the convexity of the VRFT problem, so the solution can no longer be obtained through the least squares algorithm. A strategy to deal with local minima and other characteristics that arise from a non-convex cost function is metaheuristic optimization \cite{Du2016}. Hence, this work addresses the use of swarm intelligence algorithms to solve the proposed problem, which are described in the following section.
\section{Swarm intelligence algorithms}
\label{sec:swarm}
Swarm intelligence algorithms are based on the collective intelligence of groups of simple agents, usually inspired by the behavior of animals in nature \cite{Bonabeau2001}.
In order to cope with the No Free Lunch (NFL) theorems \cite{Wolpert:NFL:1997}, four algorithms are chosen to be used:
\begin{enumerate}
\item Particle Swarm Optimization (PSO) \cite{Kennedy:PSO:1995};
\item Artificial Bee Colony (ABC) \cite{Karaboga2007};
\item Grey Wolf Optimizer (GWO) \cite{Mirjalili:GWO:2014};
\item Improved Grey Wolf Optimizer (I-GWO) \cite{Nadimi:IGWO:2021}.
\end{enumerate}
The PSO and ABC algorithms are well known and widely used in swarm-based metaheuristic optimization \cite{Du2016,Talbi2009}. The GWO algorithm is more recent and has presented interesting results, besides having fewer hyperparameters than the aforementioned algorithms, which is a desirable feature. Finally, the I-GWO algorithm is the most recent one, improving the GWO so as to avoid local minima. The four algorithms are briefly described in the subsections below.
\subsection{Particle Swarm Optimization}
\label{ssec:pso}
Particle swarm optimization is a stochastic optimization technique that mimics the social behavior of flocking, schooling, and swarming animals such as birds, fish, and bees. It operates on populations (or swarms) whose elements are called \textit{particles}. Each particle follows a social behavior within the swarm, representing a form of directed mutation, and the population size remains constant during the whole optimization procedure \cite{Talbi2009,Kennedy:PSO:1995}.
The swarm is composed of $\ell$ particles searching a $D$-dimensional space, initialized at random locations with random velocities within the search space. Each particle has its own position ($\overrightarrow{X}_i$) and velocity ($\overrightarrow{V}_i$) and represents a candidate solution to the problem. The best solution found so far by particle $i$ is $\overrightarrow{P}_i = \{ P_{i1},P_{i2},...,P_{iD} \}$, while $\overrightarrow{G} = \{ G_1,G_2,...,G_D \}$ is the best solution found by the whole swarm, i.e., globally.
The new position and velocity of each particle depend on their previous values and on the particle's neighbourhood. At each iteration, the velocity is updated as
\begin{equation}
\overrightarrow{V}_i(n) = w_1 \overrightarrow{V}_i(n-1) + w_2 C_1 (\overrightarrow{P}_i - \overrightarrow{X}_i(n-1)) + w_3 C_2 (\overrightarrow{G} - \overrightarrow{X}_i(n-1)),
\end{equation}
where $w_1$ is an inertia weight, and $w_2$ and $w_3$ are random variables such that $w_2,w_3 \sim U(0,1)$. The constant $C_1$ is the cognitive learning factor, whilst $C_2$ is the social learning factor.
After the velocity is updated, the algorithm updates the position of each particle according to
\begin{equation}
\overrightarrow{X}_i(n) = \overrightarrow{X}_i(n-1) + \overrightarrow{V}_i(n).
\end{equation}
The global best solution at the stopping iteration is taken as the solution of the optimization problem.
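The updates above can be sketched as a minimal PSO implementation; the bounds, hyperparameter values, and the quadratic test function are illustrative.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, lb=-5.0, ub=5.0,
        w1=0.7, C1=1.49, C2=1.49, seed=0):
    """Minimal particle swarm optimization for minimizing f."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_particles, dim))        # random positions
    V = np.zeros((n_particles, dim))                   # initial velocities
    P, Pf = X.copy(), np.array([f(x) for x in X])      # personal bests
    g, gf = P[Pf.argmin()].copy(), Pf.min()            # global best
    for _ in range(iters):
        w2 = rng.random((n_particles, dim))            # w2, w3 ~ U(0,1)
        w3 = rng.random((n_particles, dim))
        V = w1 * V + w2 * C1 * (P - X) + w3 * C2 * (g - X)
        X = np.clip(X + V, lb, ub)
        F = np.array([f(x) for x in X])
        better = F < Pf                                # update personal bests
        P[better], Pf[better] = X[better], F[better]
        if Pf.min() < gf:                              # update global best
            gf, g = Pf.min(), P[Pf.argmin()].copy()
    return g, gf
```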
\subsection{Artificial Bee Colony}
\label{ssec:abc}
The artificial bee colony algorithm simulates the foraging behavior of bees, conducting local search at each iteration. Possible solutions are represented by food sources, whilst the quality of each solution is proportional to the nectar amount of its source \cite{Du2016,Karaboga2007}.
There are three types of bees: scout, employed, and onlooker. At initialization, the scout bees randomly find possible food sources (solutions), and each food source receives an employed bee. By roulette wheel selection, onlooker bees choose food sources to be exploited based on their quality; both types perform local search in the neighbourhood of their sources.
During the execution of the algorithm, the four phases that occur at each iteration can be detailed as follows:
\begin{enumerate}
\item Initialization phase: the $\ell$ food sources $\overrightarrow{X}_i = \{ X_{i1}, X_{i2}, ... , X_{iD} \}$ with dimension $D$, where $i=1,...,\ell$, are initialized randomly;
\item Employed bees phase: employed bees search for new food sources with more nectar within their neighbourhood, defined as $\overrightarrow{X}_i (n+1) = \overrightarrow{X}_i(n) + r_a (\overrightarrow{X}_i(n) - \overrightarrow{X}_{r}(n))$, where $\overrightarrow{X}_i(n+1)$ and $\overrightarrow{X}_i(n)$ are the food source at iterations $n+1$ and $n$, respectively, $r_a$ is a random number such that $r_a \sim U(-a,a)$, with $a$ the acceleration coefficient, and $\overrightarrow{X}_{r}(n)$ is a randomly selected food source at iteration $n$. A greedy selection is applied to the fitness of each food source. The information is then shared with the onlooker bees, which are waiting in the hive;
\item Onlooker bees phase: having received the information about the food sources from the employed bees, the onlooker bees randomly select a food source $i$. After an onlooker bee has chosen a food source, a greedy selection is applied between two sources in its neighbourhood. When there are no more spare food sources, this phase ends;
\item Scout bees phase: when an employed bee's solution cannot be improved anymore, the bee abandons its food source and becomes a scout, randomly choosing a new food source $\overrightarrow{X}_i$ at which it will be employed. If a user-defined limit $L$ on the maximum number of food sources is surpassed, the employed bees on the sources with less food (greater fitness value) abandon their current food sources and become scouts.
\end{enumerate}
Phases 2, 3, and 4 are repeated until a user-defined stopping criterion is met. The food source with the most food (lowest cost) at the stopping iteration is taken as the best solution of the problem.
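The four phases can be sketched as follows; the implementation details (single-dimension perturbation, roulette weights, and the test setup) follow common practice and are illustrative rather than a reproduction of any specific reference implementation.

```python
import numpy as np

def abc(f, dim, n_sources=25, iters=300, lb=-5.0, ub=5.0, limit=50, seed=0):
    """Minimal artificial bee colony for minimizing f."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_sources, dim))      # food sources
    F = np.array([f(x) for x in X])
    trials = np.zeros(n_sources, dtype=int)        # stagnation counters
    best_x, best_f = X[F.argmin()].copy(), F.min()

    def local_search(i):
        # Perturb one random dimension towards/away from a random partner.
        r = int(rng.integers(n_sources - 1))
        r += r >= i                                # partner r != i
        v = X[i].copy()
        j = int(rng.integers(dim))
        v[j] += rng.uniform(-1.0, 1.0) * (X[i, j] - X[r, j])
        v = np.clip(v, lb, ub)
        fv = f(v)
        if fv < F[i]:                              # greedy selection
            X[i], F[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_sources):                 # employed bees phase
            local_search(i)
        weights = 1.0 / (1.0 + F - F.min())        # better sources more likely
        for i in rng.choice(n_sources, n_sources, p=weights / weights.sum()):
            local_search(i)                        # onlooker bees phase
        worst = int(trials.argmax())               # scout bees phase
        if trials[worst] > limit:
            X[worst] = rng.uniform(lb, ub, dim)
            F[worst] = f(X[worst])
            trials[worst] = 0
        if F.min() < best_f:                       # memorize best-so-far
            best_f = F.min()
            best_x = X[F.argmin()].copy()
    return best_x, best_f
```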
\subsection{Grey Wolf Optimizer}
\label{ssec:gwo}
The GWO is an algorithm based on the hunting behavior of grey wolves, which have a strict social dominant hierarchy. The leaders are the alphas, responsible for making decisions. At the second level are the betas, subordinates to the alphas that help in decision-making and other pack activities. The third level wolves are the deltas, representing scouts, sentinels, elders, hunters, and caretakers. The rest of the pack is called omega, which must submit to the higher ranking wolves \cite{Nadimi:IGWO:2021}.
Following the social behavior of the wolves, the fittest solution is considered to be the alpha (position vector $\overrightarrow{X}_{\alpha}$), the second fittest the beta (position vector $\overrightarrow{X}_{\beta}$), and the third the delta (position vector $\overrightarrow{X}_{\delta}$). The remaining wolves are assumed to be omegas, with positions $\overrightarrow{X}_{\omega,k}$, where $k \in \{ 1,...,\ell-3 \}$ indexes a specific omega wolf; they follow the mean position of the three best wolves. The wolves encircle the prey, reducing the radius as the algorithm advances.
At iteration $(n+1)$, the three wolves with the lowest (best) fitness are considered the new $\alpha$, $\beta$, and $\delta$. Notice that the wolves move towards the average of the three best solutions, since they have an encircling behavior for hunting.
The optimization procedure continues until a user-defined stopping criterion is met. The best solution found at the stopping iteration, the position $\overrightarrow{X}_\alpha$ of the alpha, is taken as the solution of the minimization procedure.
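A minimal sketch of the GWO loop is given below, using the standard update equations of \cite{Mirjalili:GWO:2014} (coefficient vectors $A = 2a r_1 - a$ and $C = 2 r_2$, with $a$ decreasing linearly from 2 to 0, which are not restated in the text above); the test setup is illustrative.

```python
import numpy as np

def gwo(f, dim, n_wolves=30, iters=200, lb=-5.0, ub=5.0, seed=0):
    """Minimal grey wolf optimizer for minimizing f."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_wolves, dim))
    best_x, best_f = None, np.inf
    for n in range(iters):
        F = np.array([f(x) for x in X])
        order = np.argsort(F)
        if F[order[0]] < best_f:                 # memorize best-so-far
            best_f, best_x = F[order[0]], X[order[0]].copy()
        alpha, beta, delta = X[order[:3]]        # three fittest wolves
        a = 2.0 * (1.0 - n / iters)              # decreases linearly 2 -> 0
        X_new = np.empty_like(X)
        for i in range(n_wolves):
            moves = []
            for leader in (alpha, beta, delta):
                A = 2.0 * a * rng.random(dim) - a
                C = 2.0 * rng.random(dim)
                D = np.abs(C * leader - X[i])    # encircling distance
                moves.append(leader - A * D)
            # Wolves move towards the mean of the three best solutions.
            X_new[i] = np.clip(np.mean(moves, axis=0), lb, ub)
        X = X_new
    return best_x, best_f
```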
\subsection{Improved Grey Wolf Optimizer}
\label{ssec:igwo}
There are three main problems reported in the literature about the GWO algorithm \cite{Nadimi:IGWO:2021}: i) lack of population diversity; ii) imbalance between exploitation and exploration; and iii) premature convergence. The improved grey wolf optimizer changes the search strategy of the GWO, dividing it into three phases (initializing; movement; selecting and updating), which are described below according to \cite{Nadimi:IGWO:2021}.
\begin{enumerate}
\item Initializing phase: $\ell$ wolves are randomly distributed in the search space;
\item Movement phase: in addition to the base GWO behavior, individual hunting by each wolf is included through a strategy named Dimension Learning-based Hunting (DLH). A radius is defined as the Euclidean distance between the current position $\overrightarrow{X}_i(n)$ and the candidate position $\overrightarrow{X}_{i,GWO}(n+1)$, which is calculated exactly as in the standard GWO.
Then, multi-neighbour learning is performed, resulting in the DLH solution $\overrightarrow{X}_{i,DLH}(n+1) = \overrightarrow{X}_{i} (n) + r_i(\overrightarrow{X}_{n}(n) - \overrightarrow{X}_{r}(n))$, with $\overrightarrow{X}_{n}(n)$ being a random neighbour, $\overrightarrow{X}_{r}(n)$ a random wolf, and $r_i$ a random vector;
\item Selecting and updating phase: the fitness values of the candidate solutions are compared and the best one is selected. If the best fitness value found up to the current iteration, attained at $\overrightarrow{X}_i(n)$, is greater than $f(\overrightarrow{X}_i(n+1))$, the best solution is updated; otherwise, it remains the same.
\end{enumerate}
The I-GWO algorithm runs until a user-defined stopping criterion is met. The position of the alpha wolf $\overrightarrow{X}_\alpha$ at the stopping iteration is taken as the best solution of the optimization procedure.
The next section details the method proposed in this work, which allows for the inclusion of a robustness criterion in the VRFT cost function while maintaining its one-shot characteristic.
\section{VRFT with robustness restrictions}
\label{sec:method}
The proposed method is a two-step procedure. The first step follows the design of a controller with the VRFT, as described in Subsection~\ref{ssec:vrft}. In the second step, the same data of the first step are used, avoiding the need for a second experiment, since the estimation of $M_S$, represented in this context by $\hat{M}_S (\rho)$ and proposed in Subsection~\ref{ssec:ms_est}, was developed to avoid the need for new data. In this step, the cost function $J^{VR}$ is modified by the addition of a robustness restriction on the value of the $\norm{S(z,\rho)}_{\infty}$ norm, leading to a new optimization problem:
\begin{mini}|l|
{\rho}{J^{VR}(\rho)}{}{}
\addConstraint{\hat{M}_S(\rho) \leq M_{Sd}}{}{}
,
\end{mini}
which can be applied directly to the cost function as a penalty \cite{Luenberger2015}, resulting in the \textit{Swarm Intelligence} optimization cost function:
\begin{mini}|l|
{\rho}{J^{SI} (\rho) = \norm{u(k) - C(z,\rho) (T_d^{-1}(z) - 1) y(k)}_{2}^{2} + c H(\rho)}{}{}
\label{eq:opt_si}
\end{mini}
where $c$ is a positive constant, usually with $c \gg 1$, and the penalty term relating the estimated ($\hat{M}_S (\rho)$) and desired ($M_{Sd}$) $\mathcal{H}_{\infty}$ norm values of $S(z,\rho)$ is given by
\begin{equation}
\label{eq:P_leq}
H(\rho) = \frac{1}{2} \left(\max[0,\hat{M}_S(\rho) - M_{Sd}]\right)^2.
\end{equation}
The robustness index $\hat{M}_S(\rho)$ is estimated at each iteration of the swarm algorithm optimization following the procedure described in Subsection~\ref{ssec:ms_est}.
Considering a search space $\mathcal{O} = [l_b,u_b]^p$, with $l_b,u_b \in \mathbb{R}$, the initialization of the metaheuristic agents can inherit the first-step solution $\rho_0 \in \mathbb{R}^p$ as a central point in order to accelerate convergence, as expressed in
\begin{equation}
\label{eq:swarm_init}
\overrightarrow{X}_{b}(0) = R \cdot \overrightarrow{X}(0) + \rho_0, \quad R = \frac{|\max \{ l_b, u_b \}|}{2},
\end{equation}
with $R$ being the initial population spawn radius, and $\overrightarrow{X}(0) \in \mathbb{R}^p$ a random position vector such that $\overrightarrow{X}(0) \sim U(0,1)$.
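The penalty \eqref{eq:P_leq} and the initialization rule \eqref{eq:swarm_init} can be sketched as follows; the value of $c$ and the bounds are illustrative.

```python
import numpy as np

def penalty(ms_hat, ms_d, c=1e6):
    """Exterior quadratic penalty c*H(rho): zero when the robustness
    constraint Ms_hat <= Ms_d holds, quadratic in the violation otherwise."""
    return c * 0.5 * max(0.0, ms_hat - ms_d) ** 2

def init_population(rho0, n_agents, lb, ub, seed=0):
    """Spawn the initial agents around the first-step VRFT solution rho0,
    with radius R = |max(lb, ub)| / 2."""
    rng = np.random.default_rng(seed)
    R = abs(max(lb, ub)) / 2.0
    X0 = R * rng.random((n_agents, len(rho0))) + np.asarray(rho0)
    return np.clip(X0, lb, ub)        # keep agents inside the search space
```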
An inherent step of the method is to collect input-output data from the process, as suggested in \cite{Bazanella2014,Remes2021}. System identification theory \cite{Ljung1999} should be taken into account so that the data are sufficiently informative. Then, the two proposed design steps can be applied:
\begin{enumerate}
\item Use the VRFT to design a controller for the process, with a flexible reference model if the plant is NMP, as presented in \cite{Campestrini2011}. Check the obtained robustness index and proceed to the second step if it does not satisfy $\hat{M}_S \leq M_{Sd}$. This main step can be divided into the following specific steps:
%
\begin{itemize}
\item acquire a data set $\{ u,y\}_{k=1}^{N}$ from the closed-loop system with an initial stabilizing controller;
\item use the data set to design a controller with the VRFT method, as detailed in Subsection~\ref{ssec:vrft}. Controller parameters $\rho$ are obtained after the minimization procedure of the VRFT method. In the NMP case, parameters for the reference model $\hat{\eta}$ are also obtained;
\item estimate the robustness index according to the method in Subsection~\ref{ssec:ms_est}. If $\hat{M}_S > M_{Sd}$, proceed to the second step; otherwise, use the VRFT-obtained controller with no further modification.
\end{itemize}
\item Apply a swarm intelligence algorithm to the optimization problem described in \eqref{eq:opt_si}, according to a desired value of $M_S$, with the restriction applied in the form of the penalty \eqref{eq:P_leq} and with the initial spawn of agents following the recommendation of \eqref{eq:swarm_init}. The second step can be divided into:
%
\begin{itemize}
\item implement the VRFT cost function with the penalty as in \eqref{eq:P_leq} regarding the desired maximum value of $M_S$;
\item change the initialization procedure of the chosen swarm intelligence algorithm to spawn agents around the center $\rho_0$, i.e., the VRFT-obtained solution of the first step, with the spawn radius suggested in \eqref{eq:swarm_init}, in order to accelerate convergence;
\item run the algorithm and obtain controller parameters that satisfy the robustness restriction.
\end{itemize}
\end{enumerate}
\section{Validation results}
\label{sec:examples}
In order to validate and illustrate the proposed method, two examples inspired by real-world systems are considered. The method is applied as suggested in Section~\ref{sec:method} with all four swarm intelligence algorithms described in Section~\ref{sec:swarm}. The results are compared in terms of: i) fitness value of the best solution (best fitness); ii) $\norm{S(z,\hat{\rho})}_{\infty}$ value of the best solution; iii) convergence speed. It is worth mentioning that the system model is only used to generate data in simulation; the model is not used at any stage of the design, preserving the purely data-driven character of the method.
\subsection{Example 1: a second-order plant}
\label{ssec:g1}
The first system to be considered is
\begin{equation}
\label{eq:g1}
G_1(z) = \frac{-0.05(z-1.4)}{z^2 - 1.7z + 0.7325}
\end{equation}
with a sampling time of 1 second, which is similar to the discrete-time model of a Boost/Buck-Boost converter operating in Continuous Conduction Mode (CCM) \cite{Erickson2001}. The presence of a non-minimum phase zero makes it necessary to use the VRFT method with the flexible criterion \cite{Bazanella2014} in the first step of the proposed method.
Since the system model \eqref{eq:g1} is assumed unknown, there is no prior knowledge that its zero is NMP. It is, however, possible to analyze the estimated IR, since the IR of an NMP system initially moves in the direction opposite (downwards) to its steady-state value \cite{Brunton2019}. Therefore, a Pseudo-Random Binary Signal (PRBS), which is persistently exciting of high order \cite{Ljung1999}, containing $N=2000$ samples, is applied to $G_1(z)$ in simulation, generating an output signal. Additive white Gaussian noise with a Signal-to-Noise Ratio (SNR) of 20~dB was added at the output, representing measurement noise. With this input-output data set, the IR of $G_1(z)$ can be identified with an IR identification algorithm available in the literature \cite{Chen2013,Fiorio2021,Matlab2017,Yerramilli2017}, resulting in the signal presented in Figure~\ref{fig:g1_ir}. Clearly, the IR initially goes downwards, indicating the presence of an NMP zero and justifying the VRFT with the flexible reference criterion \cite{Bazanella2014}.
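This directional check can be reproduced numerically. In the sketch below, the true model \eqref{eq:g1} is used only to generate a noise-free IR for verification; in practice, the identified IR takes its place.

```python
import numpy as np
from scipy import signal

# Impulse response of G1(z) = -0.05(z - 1.4) / (z^2 - 1.7z + 0.7325).
delta = np.zeros(100)
delta[0] = 1.0
s = signal.lfilter([0.0, -0.05, 0.07], [1.0, -1.7, 0.7325], delta)
dc_gain = s.sum()                      # ~ G1(1), the steady-state step value
first = s[np.flatnonzero(s)[0]]        # first nonzero IR sample
# NMP indication: the IR starts in the direction opposite to the DC gain.
is_nmp_candidate = np.sign(first) != np.sign(dc_gain)
```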
\begin{figure} [!htb]
\centering
\includegraphics[width=0.7\linewidth]{Figures/g1_ir.pdf} \\
\caption{Identified impulse response of $G_1(z)$.}
\label{fig:g1_ir}
\end{figure}
\subsubsection{Data collection}
\label{sssec:g1_datacollection}
The data for estimation are obtained in closed loop with a proportional stabilizing controller \cite{Remes2021}, whose presence in the loop avoids signal divergence. By the small gain theorem \cite{Skogestad2005}, a stabilizing proportional gain can be obtained as
\begin{equation}
k_p < \frac{1}{\norm{G}_{\infty}}.
\end{equation}
Therefore, the stabilizing controller $k_p$ is chosen as
\begin{equation}
\label{eq:kp}
k_p = \frac{0.5}{\norm{G_1(z)}_{\infty}} = 0.8039.
\end{equation}
In order to obtain the $\mathcal{H}_{\infty}$ norm of $G_1(z)$, its impulse response is estimated according to \cite{Fiorio2021} and the norm is calculated as proposed in Subsection~\ref{ssec:ms_est}.
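As a sanity check of the small-gain choice, the sketch below computes $k_p$ from the frequency response of the true model and verifies closed-loop stability. In the data-driven setting the norm is estimated from the identified IR, so the gain actually used may differ slightly from the model-based value computed here.

```python
import numpy as np
from scipy import signal

b, a = [-0.05, 0.07], [1.0, -1.7, 0.7325]       # G1: -0.05(z-1.4) expanded
w, h = signal.freqz(b, a, worN=8192)
g1_inf = np.abs(h).max()                         # ||G1||_inf on a dense grid
kp = 0.5 / g1_inf                                # small-gain based gain
# Closed-loop characteristic polynomial of 1 + kp*G1(z) = 0:
char_poly = np.polyadd(a, kp * np.array(b))
poles = np.roots(char_poly)
```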
The input signal considered for the VRFT is a PRBS with $N=2000$ samples. It is applied to the control reference of the closed-loop formed by $G_1(z)$ with stabilizing controller $k_p$. The control output signal $u(k)$ and the system output signal $y(k)$ are acquired, forming the input-output set $\{ u,y\}_{k=1}^{N}$.
\subsubsection{Step 1 - VRFT with flexible criterion}
\label{sssec:vrft_g1}
Assume a situation where the control requirements are: i) null steady-state error; ii) settling time approximately 2.5 times shorter than that of the closed loop with the stabilizing controller $k_p$; iii) null overshoot for a step reference. A reference model that fits these requirements, chosen as proposed in \cite{Remes2021}, is
\begin{equation}
T_d(z,\hat{\eta_0}) = \frac{-21 (z-1.01)}{(z-0.7)(z-0.3)}.
\end{equation}
Notice that the zero of the initial $T_d(z)$ is set greater than 1, as suggested in \cite{Bazanella2014}, allowing the VRFT with flexible criterion to identify the plant's NMP zero. The chosen controller class is the PID class, which gives
\begin{equation}
\bar{C}(z) =
\left[
1 \quad \frac{z}{z-1} \quad \frac{z-1}{z}
\right]'
.
\end{equation}
After minimizing the cost function \eqref{eq:vrft_j_f} according to the VRFT method with flexible criterion, the following solution pair $(\hat{\eta},\hat{\rho})$ is obtained:
\begin{subequations}
\begin{equation}
\hat{\eta} =
\begin{bmatrix}
-0.4793 & 0.6377
\end{bmatrix}'
\end{equation}
\begin{equation}
\hat{\rho} =
\begin{bmatrix}
1.1246 & 0.3124 & 6.9713
\end{bmatrix}',
\label{eq:rho0}
\end{equation}
\end{subequations}
resulting in a new reference model $T_d(z,\hat{\eta})$, and in the controller $C(z,\hat{\rho})$, respectively:
\begin{subequations}
\label{eq:vrft_sol_g1}
\begin{equation}
T_d(z,\hat{\eta}) = \hat{\eta} F(z) = \frac{-0.6899 (z - 1.33)}{(z-0.7)(z-0.2401)},
\end{equation}
\begin{equation}
C(z,\hat{\rho}) = \hat{\rho}' \bar{C}(z) = \frac{8.4083 (z^2 - 1.792z + 0.8291)}{z (z-1)}.
\end{equation}
\end{subequations}
The non-dominant pole of the reference model, now $T_d(z,\hat{\eta})$, is updated along with the minimization over $\eta$ and $\rho$, as suggested in \cite{Remes2021}.
By estimating the $\mathcal{H}_{\infty}$ norm of $S(z,\hat{\rho})$ for the closed loop given by the solution \eqref{eq:vrft_sol_g1}, $\hat{M}_S = 2.1952$ is obtained, which, being greater than $2$ \cite{Skogestad2005}, may be too high for applications that require better robustness. The next subsection presents the application of the second step of the proposed method to reduce $M_S$ for the obtained VRFT solution.
\subsubsection{Step 2 - Swarm intelligence algorithm}
\label{sssec:swarm_g1}
Four swarm intelligence algorithms (PSO, ABC, GWO, and I-GWO) were used to solve problem \eqref{eq:opt_si} starting from the initial solution $\rho_0$ in \eqref{eq:rho0}. The upper search bound was set to $u_b = 10$, which should suffice considering that the largest parameter of the VRFT-obtained controller is $6.9713$ and that, for a choice of $M_{Sd}$ that is not too ambitious, the parameter values should not differ much from the initial ones. The lower search bound was chosen as $l_b = 0$ to avoid negative controller gains, making the obtained controller passive \cite{Bao2007}. The reference model for the swarm intelligence minimization is the one obtained through the VRFT with flexible criterion, $T_d(z,\hat{\eta})$. An initial population spawn radius of $R = u_b/2 = 5$ is considered, as suggested in \eqref{eq:swarm_init}. The desired robustness index $M_{Sd}$ was set to 1.8, which is sufficient in terms of robustness, satisfying $M_{Sd} \leq 2$, and should not substantially compromise the performance of the system.
To allow a fair comparison between algorithms, the number of agents was fixed at 50 and the number of iterations was limited to 100. In order to obtain a satisfactory number of realizations for the analysis of the results, each algorithm was run 50 times. The designer-chosen parameter settings for the PSO and ABC algorithms, apart from the number of agents and the maximum number of iterations, are presented in Table~\ref{tab:parameters}. The PSO parameters were set to the MATLAB\textregistered\ defaults of the Global Optimization Toolbox \cite{Matlab2017}, whilst the ABC parameters follow the algorithm implementation of \cite{Heris2015}. GWO and I-GWO have no user-set hyperparameters aside from the number of agents and the maximum number of iterations.
\begin{table}[!htb]
\centering
\begin{tabular}{c c c}
\hline
Algorithm & Parameter settings & Value \\
\hline
\multirow{3}{*}{PSO} & Cognitive learning factor ($C_1$) & 1.49 \\
& Social learning factor ($C_2$) & 1.49 \\
& Inertia range (range of $w_1$) & [0.1,1.1] \\
\cline{2-3}
\multirow{2}{*}{ABC} & Limit of food sources ($L$) & 90 \\
& Acceleration coefficient ($a$) & 1 \\
\hline
\end{tabular}
\caption{Parameter settings for PSO and ABC.}
\label{tab:parameters}
\end{table}
Figure~\ref{fig:conv_g1} shows the average convergence curves of all algorithms over 50 runs, considering system $G_1(z)$ as aforementioned. Table~\ref{tab:timing_g1} presents the time taken by each iteration and the number of iterations needed to converge, considering the average over all 50 realizations and a convergence criterion of a variation of $\delta = 1 \times 10^{-3}$ between consecutive samples. The results were obtained with an Intel Core i5 4670 3.40 GHz processor with 8 GB of DDR3 1600 MHz RAM. The I-GWO algorithm took the longest time to converge, followed by ABC, PSO, and finally GWO. Considering all 50 runs, Figure~\ref{fig:fitness_box_g1} presents the best-fitness statistics of all algorithms in the form of a box plot. Clearly, I-GWO had the most desirable performance in terms of fitness, since it produces fewer outliers and very low dispersion compared with the other algorithms' solutions. PSO, ABC and GWO, in general, resulted in higher fitness values than I-GWO for the considered cost function. Table~\ref{tab:fitness_quant_g1} shows the quantitative values of the best fitness of all algorithms at each run, confirming the conclusions drawn from Figure~\ref{fig:fitness_box_g1}.
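The convergence criterion can be implemented as a small helper that scans the average convergence curve; the sketch below is an assumption about the exact handling (here, convergence is declared at the first iteration after which every consecutive variation stays below $\delta$), and the geometric test curve is illustrative.

```python
def iterations_to_converge(curve, delta=1e-3):
    """Return the first iteration index after which the convergence
    curve varies by less than `delta` between every pair of consecutive
    samples.  The exact handling of the criterion is an assumption made
    for this sketch."""
    for k in range(1, len(curve)):
        if all(abs(curve[i] - curve[i - 1]) < delta
               for i in range(k, len(curve))):
            return k
    return len(curve)

# Illustrative geometrically decaying fitness curve: 2 * 0.8**k.
curve = [2.0 * 0.8 ** k for k in range(100)]
```

Applied to each algorithm's average curve, this yields the "iterations to converge" column of the timing tables.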
\begin{figure} [!htb]
\centering
\includegraphics[width=0.7\linewidth]{Figures/g1_convergence_curves.pdf} \\
\caption{Average convergence curves for all algorithms considering a Monte Carlo experiment of 50 runs for example 1.}
\label{fig:conv_g1}
\end{figure}
\begin{table}[!htb]
\centering
\begin{tabular}{c c c c}
\hline
Algorithm & 1-it. time (s) & It. to converge & Time to converge (s) \\
\hline
PSO & $\ 8.04$ & $41$ & $329.75$ \\
ABC & $22.63$ & $19$ & $430.02$ \\
GWO & $11.25$ & $14$ & $157.53$ \\
I-GWO & $24.18$ & $20$ & $483.66$ \\
\hline
\end{tabular}\\
\caption{Time for convergence of all algorithms for example 1.}
\label{tab:timing_g1}
\end{table}
\begin{figure} [!htb]
\centering
\includegraphics[width=0.7\linewidth]{Figures/g1_fitness_boxplot.pdf} \\
\caption{Box plot of a Monte Carlo experiment with 50 runs for all algorithms in terms of best fitness value obtained for example 1.}
\label{fig:fitness_box_g1}
\end{figure}
\begin{table}[!htb]
\centering
\begin{tabular}{c c c c c}
\hline
Algorithm & median & $\sigma$ & min & max \\
\hline
PSO & $0.2032$ & $0.2782$ & $0.2017$ & $2.1710$ \\
ABC & $0.2334$ & $0.0402$ & $0.2025$ & $0.4009$ \\
GWO & $0.2489$ & $0.0858$ & $0.2017$ & $0.4764$ \\
I-GWO & $0.2017$ & $5.0516 \times 10^{-5}$ & $0.2017$ & $0.2018$ \\
\hline
\end{tabular}
\caption{Quantitative results from the box plot in terms of best fitness for example 1.}
\label{tab:fitness_quant_g1}
\end{table}
Finally, Figure~\ref{fig:sinf_box_g1} presents the box plot of the $\norm{S(z,\hat{\rho})}_{\infty}$ values obtained by the best solution of each algorithm at each run, in closed loop with $G_1(z)$. I-GWO obtained the most desirable result in terms of $\hat{M}_S$, given the absence of outliers and its low dispersion. PSO had one outlier with $\hat{M}_S > 2$, whilst ABC produced three outliers with higher $\hat{M}_S$; the performance of the GWO algorithm on this problem was not satisfactory, since many of its solutions reached an $\hat{M}_S$ higher than 2. Table~\ref{tab:sinf_quant_g1} shows the quantitative data of the box plot presented in Figure~\ref{fig:sinf_box_g1}, consistent with these observations.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.7\linewidth]{Figures/g1_sinf_boxplot.pdf} \\
\caption{Box plot of a Monte Carlo experiment with 50 runs for all algorithms in terms of the $\norm{S(z,\hat{\rho})}_{\infty}$ value obtained for example 1.}
\label{fig:sinf_box_g1}
\end{figure}
\begin{table}[!htb]
\centering
\begin{tabular}{c c c c c}
\hline
Algorithm & median & $\sigma$ & min & max \\
\hline
PSO & $1.8014$ & $0.1130$ & $1.7509$ & $2.5988$ \\
ABC & $1.8091$ & $0.4740$ & $9.9798 \times 10^{-5}$ & $2.8496$ \\
GWO & $1.8087$ & $0.2923$ & $ 1.7995$ & $2.5760$ \\
I-GWO & $1.8020$ & $4.5561 \times 10^{-4}$ & $1.8008$ & $1.8030$ \\
\hline
\end{tabular}
\caption{Quantitative results from the box plot in terms of $\norm{S(z,\hat{\rho})}_{\infty}$ for example 1.}
\label{tab:sinf_quant_g1}
\end{table}
\subsection{Example 2: fourth-order plant}
\label{ssec:g2}
The fourth-order plant consists of
\begin{equation}
G_2(z) = \frac{0.1381 (z-0.95) (z^2 - 1.62 z + 0.6586)} {(z^2 - 1.7z + 0.7325) (z^2 - 1.84z + 0.8564)},
\end{equation}
with a time step of 1 second; this plant has the same structure as the model of a SEPIC converter \cite{Kassick2011}. Since the plant is minimum phase (all of its zeros lie inside the unit circle), which can be verified from data as mentioned in Subsection~\ref{ssec:g1}, the VRFT method is used without the flexible model reference criterion \cite{Bazanella2014}.
\subsubsection{Data collection}
\label{sssec:g2_datacollection}
For plant $G_2(z)$, the data is obtained in the same way as described for example 1 in Subsection~\ref{ssec:g1}, with a PRBS signal of $N=2000$ samples applied to the closed-loop system with the stabilizing controller
\begin{equation}
k_p = \frac{0.5}{\norm{G_2(z)}_{\infty}} = 0.3828,
\end{equation}
considering additive white Gaussian noise with an SNR of 20~dB to represent measurement noise. The input-output set is formed by $\{ u,y\}_{k=1}^{N}$.
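The value of $k_p$ can be reproduced by sweeping the frequency response of $G_2(z)$ on a dense grid of the unit circle; the sketch below is illustrative (the grid density is an arbitrary choice) and recovers $\norm{G_2(z)}_{\infty}$ and hence $k_p = 0.5/\norm{G_2(z)}_{\infty}$.

```python
import cmath
import math

def g2(z):
    """Fourth-order plant G_2(z) of example 2."""
    num = 0.1381 * (z - 0.95) * (z * z - 1.62 * z + 0.6586)
    den = (z * z - 1.7 * z + 0.7325) * (z * z - 1.84 * z + 0.8564)
    return num / den

# ||G_2||_inf via a dense sweep of w in [0, pi] on the unit circle.
norm = max(abs(g2(cmath.exp(1j * math.pi * i / 20000)))
           for i in range(20001))
kp = 0.5 / norm
```

The peak occurs near the lightly damped pole pairs of $G_2(z)$, and the resulting gain matches the value used above to within the grid resolution.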
\subsubsection{Step 1 - VRFT}
After the data is acquired, the next step is to use the VRFT to design a controller by solving the cost function \eqref{eq:vrft_j}. For this example, the following control requirements are assumed: i) null steady-state error; ii) a settling time approximately 6.5 times faster than the closed-loop settling time with the stabilizing controller $k_p$; iii) null overshoot for a step reference. Considering such requirements, the reference model is chosen as suggested in \cite{Remes2021}, obtaining
\begin{equation}
T_d(z) = \frac{1.4 (z-0.6)}{(z-0.3) (z-0.2)}.
\end{equation}
Suppose a constrained situation in which only a PI controller is available, e.g., due to hardware limitations of a certain product. The controller class to be considered is therefore the class of PI controllers, resulting in
\begin{equation}
\bar{C}(z) = \left[ 1 \quad \frac{z}{z-1} \right]'.
\end{equation}
The obtained VRFT solution for the problem results in the controller parameters
\begin{equation}
\label{eq:vrft_sol_ex2}
\hat{\rho} = [6.6568 \quad 3.3728],
\end{equation}
which, via \eqref{eq:c-rhobar}, results in the controller
\begin{equation}
C(z,\hat{\rho}) = \hat{\rho}' \bar{C}(z) = \frac{10.03 (z-0.6637)}{(z-1)}.
\end{equation}
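The pole-zero form above follows from combining the two PI terms over the common denominator $(z-1)$, which gives a gain of $\rho_1+\rho_2$ and a zero at $\rho_1/(\rho_1+\rho_2)$; a quick numerical check:

```python
# rho' * Cbar(z) = rho1 + rho2 * z/(z - 1)
#                = ((rho1 + rho2) * z - rho1) / (z - 1)
#                = (rho1 + rho2) * (z - rho1/(rho1 + rho2)) / (z - 1)
rho1, rho2 = 6.6568, 3.3728
gain = rho1 + rho2              # expected ~ 10.03
zero = rho1 / (rho1 + rho2)     # expected ~ 0.6637
```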
Considering the VRFT-obtained solution \eqref{eq:vrft_sol_ex2}, the robustness index of the system can be estimated as described in Subsection~\ref{sec:Ms}, yielding $\hat{M}_S = 2.2767$. As previously mentioned, $M_S \leq 2$ is desired to ensure sufficient robustness \cite{Skogestad2005}, which motivates the application of the proposed method.
\subsubsection{Step 2 - Swarm intelligence algorithm}
The swarm intelligence algorithms PSO, ABC, GWO, and I-GWO are applied to the problem \eqref{eq:opt_si} for the fourth-order plant. The upper search bound is kept at $u_b = 10$ and the lower bound at $l_b = 0$, in order to preserve the passivity of the controller as mentioned in Subsection~\ref{ssec:g1}. An upper bound of $10$ should be sufficient, considering that the maximum desired robustness is not too far from the robustness index estimated at the end of step 1. The initial population spawn radius follows \eqref{eq:swarm_init}, $R = (|u_b|+|l_b|)/2 = 5$. The desired value of $\norm{S(z,\hat{\rho})}_{\infty}$ is set to $M_{Sd} = 1.5$, satisfying $M_{Sd} \leq 2$.
The number of agents of all algorithms is set to 50, with a maximum of 100 iterations per run. Each algorithm is run 50 times with different noise realizations, so that a proper analysis of the results can be made. For the PSO and ABC algorithms, the parameters are set as presented in Table~\ref{tab:parameters}. Aside from the number of agents and the maximum number of iterations, no other parameter is set by the user for the GWO and I-GWO algorithms.
The average convergence curves of all algorithms for this case are presented in Figure~\ref{fig:conv_g2}. Table~\ref{tab:timing_g2} presents the time per iteration and the number of iterations each algorithm took to converge. The hardware configuration is the same as in example 1, as is the convergence criterion $\delta = 1 \times 10^{-3}$. ABC took the longest time to converge, followed by I-GWO, PSO, and finally GWO. Figure~\ref{fig:fitness_box_g2} shows the box plot of the best fitness value of each algorithm over all runs. PSO, ABC, and I-GWO did not present far outliers such as those seen for GWO. The quantitative values of the box plot are shown in Table~\ref{tab:fitness_quant_g2}, in agreement with these observations.
\begin{figure} [!htb]
\centering
\includegraphics[width=0.7\linewidth]{Figures/NEWg2_convergence_curves.pdf} \\
\caption{Average convergence curves for all algorithms considering a Monte Carlo experiment of 50 runs for example 2.}
\label{fig:conv_g2}
\end{figure}
\begin{table}[!htb]
\centering
\begin{tabular}{c c c c}
\hline
Algorithm & 1-it. time (s) & It. to converge & Time to converge (s) \\
\hline
PSO & $7.24$ & $20$ & $144.72$ \\
ABC & $22.93$ & $10$ & $229.32$ \\
GWO & $7.55$ & $9$ & $68.00$ \\
I-GWO & $22.34$ & $9$ & $201.10$ \\
\hline
\end{tabular}\\
\caption{Time for convergence of all algorithms for example 2.}
\label{tab:timing_g2}
\end{table}
\begin{figure} [!htb]
\centering
\includegraphics[width=0.7\linewidth]{Figures/veryNEWg2_fitness_boxplot.pdf} \\
\caption{Box plot of a Monte Carlo experiment with 50 runs for all algorithms in terms of best fitness value obtained for example 2.}
\label{fig:fitness_box_g2}
\end{figure}
\begin{table}[!htb]
\centering
\begin{tabular}{c c c c c}
\hline
Algorithm & median & $\sigma$ & min & max \\
\hline
PSO & $0.49284$ & $2.2270 \times 10^{-9}$ & $0.49284$ & $0.49284$ \\
ABC & $0.49287$ & $4.7174 \times 10^{-5}$ & $0.49284$ & $0.49303$ \\
GWO & $0.49290$ & $1.0414 \times 10^{-2}$ & $0.49284$ & $0.53568$ \\
I-GWO & $0.49285$ & $6.4636 \times 10^{-6}$ & $0.49284$ & $0.49286$ \\
\hline
\end{tabular}
\caption{Quantitative results from the box plot in terms of best fitness for example 2.}
\label{tab:fitness_quant_g2}
\end{table}
The $\norm{S(z,\hat{\rho})}_{\infty}$ norm obtained by the best solution at each run is shown in Figure~\ref{fig:sinf_box_g2}, with its quantitative values presented in Table~\ref{tab:sinf_quant_g2}. All algorithms presented similar medians, close to the desired $M_S$ value, with considerably low dispersion; the algorithms that stand out are PSO and I-GWO, as they exhibit the fewest outliers and the lowest dispersion, which are usually the desired properties.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.7\linewidth]{Figures/NEWg2_sinf_boxplot.pdf} \\
\caption{Box plot of a Monte Carlo experiment with 50 runs for all algorithms in terms of $\norm{S(z,\hat{\rho})}_{\infty}$ for example 2.}
\label{fig:sinf_box_g2}
\end{figure}
\begin{table}[!htb]
\centering
\begin{tabular}{c c c c c}
\hline
Algorithm & median & $\sigma$ & min & max \\
\hline
PSO & $1.5169$ & $1.4558 \times 10^{-6}$ & $1.5169$ & $1.5169$ \\
ABC & $1.5165$ & $4.4060 \times 10^{-4}$ & $1.5148$ & $1.5172$ \\
GWO & $1.5133$ & $8.4447 \times 10^{-3}$ & $1.4825$ & $1.5173$ \\
I-GWO & $1.5168$ & $1.2216 \times 10^{-4}$ & $1.5165$ & $1.5171$ \\
\hline
\end{tabular}
\caption{Quantitative results from the box plot in terms of $\norm{S(z,\hat{\rho})}_{\infty}$ for example 2.}
\label{tab:sinf_quant_g2}
\end{table}
\newpage
\section{Conclusion}
\label{sec:conclusion}
This work proposed a data-driven one-shot technique to increase the robustness of a closed-loop discrete-time system by adjusting the controller parameters using swarm intelligence algorithms. The considered optimization problem \eqref{eq:opt_si} is the VRFT cost function with a penalty on the value of the $\norm{S(z,\rho)}_{\infty}$ norm, which can be used directly as a measure of robustness. This value is estimated via the impulse response at each iteration of the metaheuristic algorithm.
Four swarm intelligence algorithms (PSO, ABC, GWO, and I-GWO) have been considered to illustrate the proposed technique on two plants inspired by real-world systems. In both examples, I-GWO obtained satisfactory results, presenting lower dispersion than the other algorithms, fewer outliers, lower (best) fitness, and acceptable values of $\norm{S(z,\rho)}_{\infty}$. In the first example, however, PSO also achieved results similar to those of I-GWO. The ABC and GWO algorithms performed worst in terms of dispersion, outliers, and fitness. Additionally, I-GWO and ABC were the slowest algorithms to converge, and GWO the fastest.
As future work, we suggest: the inclusion of other constraints (e.g., on control effort) alongside the robustness constraint; the use of other types of metaheuristics, such as evolutionary or physics-based algorithms; the inclusion of a robustness constraint in the OCI, VDFT, or DD-LQR methods; and the extension of the current work to MIMO systems.
\section{Acknowledgments}
\label{sec:acknowledgments}
This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001, and partly by the Fundação de Amparo à Pesquisa e Inovação do Estado de Santa Catarina (FAPESC) - Grant number 288/2021.
\bibliographystyle{elsarticle-num}
\section{Introduction}
\label{sect_intro}
\vspace{2mm}
Theorem 1 in \cite{TAC2008-Z-2008} provides in (19) the set of the admissible solutions $(x_k,p_k,u_k)$
of the singular Hamiltonian system (10)
defined on the discrete-time interval $0\le k \le k_f-1$. The proof therein presented is twofold: sufficiency is shown by direct replacement of
(19) in (10); necessity relies on maximality of the involved structural invariant subspaces, as it is deducible from Properties~1 and 2.
In the following, it
will be shown that a direct proof, which does not distinguish between the {\em if} and the {\em only-if} part, but extensively uses relations
pointed out in \cite{TAC2008-Z-2008}, is also feasible. The main point of the direct proof is replacing the coupled difference equations
(10) in \cite{TAC2008-Z-2008} with two decoupled difference equations by means of a suitable state-space basis transformation. The direct proof herein presented can also be used to prove (20) in \cite{TAC2008-Z-2008}, which
expresses the set of the admissible solutions $(x_k,p_k)$ of the same Hamiltonian system in the extended time interval $0\le k \le k_f$.
\section{Direct Proof of Relation (20) and Theorem 1 in~\cite{TAC2008-Z-2008}}
\label{sect_dirproof}
\vspace{2mm}
The direct proof is based on the following lemma.
\vspace{2mm}
\begin{lemma}
\label{lem1}
{\em \ \ The problem of finding the sequences $x_k$, $p_k$, and $u_k$, with $0\,{\le}\,k\,{\le}\,k_f\,{-}\,1$, that solve the
equations (10) of \cite{TAC2008-Z-2008}, or, equivalently,
\bea
x_{k+1} \ns&\ns = \ns&\ns A\,x_k+B\,u_k,\label{10a} \\
-A^\top p_{k+1} \ns&\ns = \ns&\ns Q\,x_k-p_k+S\,u_k, \label{10b} \\
-B^\top p_{k+1} \ns&\ns = \ns&\ns S^\top x_k+R\,u_k, \label{10c}
\eea
with $0\,{\le}\,k\,{\le}\,k_f\,{-}\,1$,
can be reduced to that of finding the sequences $v_k$ and $w_k$
that solve the pair of decoupled difference equations
\bea
v_{k+1}\ns&\ns = \ns&\ns A_+\,v_k, \label{20a} \\
A_+^\top\,w_{k+1}\ns&\ns = \ns&\ns w_k, \label{20b}
\eea
with $0\,{\le}\,k\,{\le}\,k_f\,{-}\,1$,
where $A_+$ is defined by (14) in \cite{TAC2008-Z-2008},
provided that the following correspondences are set up
\bea
x_k \ns&\ns = \ns&\ns v_k+W\,w_k, \label{30a} \\
p_k \ns&\ns = \ns&\ns P_+\,v_k+(-I+P_+W)\,w_k, \label{30b} \\
u_k \ns&\ns = \ns&\ns -K_+v_k + \bar{K}_+w_{k+1} \label{30c},
\eea
where $P_+$ is the positive semidefinite symmetric solution of (11)--(12) in~\cite{TAC2008-Z-2008},
$W$ is the solution of the symmetric discrete Lyapunov equation (15),
and $K_+$ and $\bar K_+$ are defined by (13) and (17), respectively.}
\end{lemma}
\vspace{2mm}
\IEEEproof
First, the following relation will be shown:
\be
\label{eqW}
-W+B\bar{K}_+ = -A\,W\,A_+^\top.
\ee
\par\noindent
Use of (17) in \cite{TAC2008-Z-2008} yields the identity
\[
-W+B\bar{K}_+ = -W+ B\,(R+B^\top P_+B)^{-1}(B^\top - B^\top P_+ A\,W A_+^\top - S^\top W A_+^\top)\,=
\]
and, by applying distributivity of the product with respect to the sum,
\[
=-W + B\,(R+B^\top P_+B)^{-1}B^\top - B\,(R+B^\top P_+B)^{-1} B^\top P_+ A \,W A_+^\top
\]
\[ - B\,(R+B^\top P_+B)^{-1} S^\top W A_+^\top\,=
\]
and, by collecting $W A_+^\top$ in the last two terms,
\[
=-W + B\,(R+B^\top P_+B)^{-1} B^\top - B\,(R+B^\top P_+B)^{-1} (B^\top P_+ A + S^\top)\,W A_+^\top\,=
\]
and, by the definition (13) of $K_+$ in \cite{TAC2008-Z-2008}, adding and subtracting the term $A\,WA_+^\top$
\[
=-W + B\,(R+B^\top P_+B)^{-1} B^\top - B\,K_+ W A_+^\top + A\,WA_+^\top - A\,WA_+^\top\,=
\]
and, by reordering,
\[
=(A-BK_+)\,W A_+^\top - W + B\,(R+B^\top P_+B)^{-1} B^\top - A\,WA_+^\top\,=
\]
and, by using (14) in \cite{TAC2008-Z-2008},
\[
=A_+ W A_+^\top - W + B\,(R+B^\top P_+B)^{-1} B^\top - A\,WA_+^\top\,=
\]
and, eventually, taking (15) in \cite{TAC2008-Z-2008} into account,
\[
=- A\,WA_+^\top\,.
\]
\par
Thus, (\ref{eqW}) is proven. Now we are ready to obtain the difference equations in the unknowns $v_k$ and $w_k$.
By using (\ref{30a}) and (\ref{30c}) in (\ref{10a}), it follows that:
\[
v_{k+1}+Ww_{k+1}=Av_k+A\,Ww_k-B\,K_+v_k+B\,\bar{K}_+w_{k+1}\,,
\]
or also
\[
v_{k+1}=(A-BK_+)\,v_k+(-W+B\bar{K}_+)\,w_{k+1}+A\,Ww_k\,,
\]
or, by the definition (14) in \cite{TAC2008-Z-2008},
\begin{equation}
\label{eqrefx}
v_{k+1}=A_+v_k+(-W+B\bar{K}_+)\,w_{k+1}+A\,Ww_k\,,
\end{equation}
or, equivalently because of (\ref{eqW}),
\begin{equation}
\label{eqref}
v_{k+1}=A_+v_k-A\,WA_+^\top w_{k+1}+A\,Ww_k\,.
\end{equation}
\par
Similarly, by using (\ref{30a})--(\ref{30c}) in (\ref{10b}), the following is obtained:
\[
-A^\top\bigl(P_+ v_{k+1} + (P_+ W - I)\,w_{k+1}\bigr) =
\]
\[
=Q (v_k + W w_k) - \bigl(P_+v_k + (P_+ W - I)\, w_k\bigr)
+ S (-K_+ v_k + \bar{K}_+ w_{k+1})\,,
\]
or
\[
-A^\top P_+ v_{k+1} - A^\top (P_+ W - I)\,w_{k+1} =
\]
\[= Q v_k +QWw_k - P_+ v_k - (P_+ W - I)w_k -S K_+ v_k + S \bar{K}_+ w_{k+1}\,.
\]
\par
By the identity $- A^\top (P_+ W - I) = Q W A_+^\top - (P_+ W - I) A_+^\top + S \bar{K}_+$ (see the proof of Property 2 in \cite{TAC2008-Z-2008} -- second row block), the following holds:
\[
-A^\top P_+ v_{k+1} + \bigl(QWA_+^\top - (P_+ W - I) A_+^\top + S \bar{K}_+\bigr)w_{k+1}=
\]
\[
= Qv_k + QWw_k -P_+v_k - (P_+ W - I) w_k - SK_+ v_k + S \bar{K}_+ w_{k+1}\,,
\]
and, by canceling the term $S \bar{K}_+ w_{k+1}$ appearing on both sides,
\[
-A^\top P_+ v_{k+1} + \bigl( QW - (P_+ W - I) \bigr) A_+^\top w_{k+1}=
\]
\[ =(Q-P_+-SK_+)v_k +\bigl(QW - (P_+ W - I)\bigr) w_k\,.
\]
\par
Recalling the identity $Q-P_+-SK_+ = -A^\top P_+ A_+$ (see the proof of Property 1 in \cite{TAC2008-Z-2008} -- second row block), the following is obtained:
\be
\label{eqdot}
-A^\top P_+ v_{k+1} + \bigl(QW - (P_+W-I) \bigr) A_+^\top w_{k+1}=
-A^\top P_+A_+ v_k + \bigl( QW - (P_+ W - I) \bigr) w_k\,.
\ee
Let us multiply both members of (\ref{eqref}) by $A^\top P_+$, thus obtaining
\be
\label{eqtridot}
A^\top P_+ v_{k+1}=A^\top P_+ A_+ v_k-A^\top P_+ A\,WA_+^\top w_{k+1}+A^\top P_+A\,Ww_k\,,
\ee
and, by summing both members of (\ref{eqdot}) and (\ref{eqtridot}), it follows that
\[
\bigl(QW - (P_+ W - I) \bigr) A_+^\top w_{k+1} =
\]
\[ = \bigl(QW - (P_+ W - I) \bigr) w_k - A^\top P_+ A W A_+^\top w_{k+1} + A^\top P_+ A W w_k\,.
\]
By collecting $w_{k+1}$ on the left and $w_k$ on the right, it follows that
\[
\bigl(QW - (P_+ W - I) + A^\top P_+ AW \bigr) A_+^\top w_{k+1} = \bigl(QW - (P_+ W - I) + A^\top P_+ AW \bigr) w_k\,,
\]
or $A_+^\top w_{k+1} = w_k$, that is (\ref{20b}).
Substituting this latter equation into (\ref{eqref}) yields
\[
v_{k+1} = A_+ v_k - AWw_k + AWw_k\,,
\]
or $v_{k+1}=A_+v_k$, that is (\ref{20a}). \endproof
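Although not part of the formal argument, the correspondences (\ref{30a})--(\ref{30c}) can be checked numerically. In the scalar case all quantities reduce to fixed-point computations, and the Python sketch below (the plant and weight values are arbitrary illustrative choices) verifies both the identity (\ref{eqW}) and that sequences built from the decoupled equations (\ref{20a})--(\ref{20b}) satisfy the Hamiltonian system (\ref{10a})--(\ref{10c}):

```python
# Arbitrary scalar plant and weights (illustrative choices only).
a, b, Q, S, R = 0.9, 1.0, 1.0, 0.0, 1.0

# Stabilizing solution P_+ of the DARE (11)-(12) by fixed-point iteration.
P = 0.0
for _ in range(5000):
    P = Q + a * P * a - (a * P * b + S) ** 2 / (R + b * P * b)

K = (b * P * a + S) / (R + b * P * b)   # K_+ from (13)
Ap = a - b * K                          # A_+ from (14)

# W from the discrete Lyapunov equation (15), and \bar{K}_+ from (17).
W = (b * b / (R + b * P * b)) / (1.0 - Ap * Ap)
Kbar = (b - b * P * a * W * Ap - S * W * Ap) / (R + b * P * b)

# Identity (eqW): -W + B*Kbar = -A*W*A_+^T.
assert abs((-W + b * Kbar) - (-a * W * Ap)) < 1e-9

# Solutions of (20a)-(20b) mapped through the correspondences (30a)-(30c).
kf, alpha, beta = 12, 1.0, 1.0
v = [alpha * Ap ** k for k in range(kf + 1)]
w = [beta * Ap ** (kf - k) for k in range(kf + 1)]
x = [v[k] + W * w[k] for k in range(kf + 1)]
p = [P * v[k] + (P * W - 1.0) * w[k] for k in range(kf + 1)]
u = [-K * v[k] + Kbar * w[k + 1] for k in range(kf)]

# The mapped sequences satisfy the Hamiltonian system (10a)-(10c).
for k in range(kf):
    assert abs(x[k + 1] - (a * x[k] + b * u[k])) < 1e-9
    assert abs(-a * p[k + 1] - (Q * x[k] - p[k] + S * u[k])) < 1e-9
    assert abs(-b * p[k + 1] - (S * x[k] + R * u[k])) < 1e-9
```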
\vspace{4mm}
Now we are ready to conclude the direct proof of both (20) and (19) in \cite{TAC2008-Z-2008}.
Refer to the pair of decoupled difference equations (\ref{20a}), (\ref{20b}), defined in the
time interval $0\le k \le k_f-1$. Their solutions can be expressed as
\be
\label{eqf1}
\begin{array}{rcl}
v_k \ns&\ns = \ns&\ns A_+^k \alpha\,, \\ [2mm]
w_k \ns&\ns = \ns&\ns (A_+^\top)^{k_f-k} \beta\,,
\end{array}
\quad 0 \le k \le k_f\,,
\ee
where $\alpha,\beta\in\real^n$ are parameters. Substitution of (\ref{eqf1}) in (\ref{30a}),
(\ref{30b}) yields
\[
\begin{array}{rcl}
x_k \ns&\ns = \ns&\ns A_+^k \alpha+W\,(A_+^\top)^{k_f-k} \beta\,, \\ [2mm]
p_k \ns&\ns = \ns&\ns P_+ A_+^k \alpha +(P_+W-I)(A_+^\top)^{k_f-k} \beta\,,
\end{array}
\quad 0 \le k \le k_f\,,
\]
that, re-written in compact notation as
\[
\left[\begin{array}{c} x_k \\ p_k \end{array}\right]=\left[\begin{array}{c}
I \\ P_+ \end{array} \right]\, A_+^k \alpha + \left[\begin{array}{c} W \\
P_+W-I \end{array}\right] (A_+^\top)^{k_f-k}\beta\,, \quad 0\le k \le k_f\,,
\]
coincides with equation (20) in \cite{TAC2008-Z-2008}. \\
\par
To prove equation (19) in \cite{TAC2008-Z-2008}, let us substitute (\ref{20b}), i.e.,
\[
w_k=A_+^\top\,w_{k+1}\,, \quad 0\le k \le k_f-1\,,
\]
in (\ref{30a}), (\ref{30b}), thus obtaining
\be
\label{eqxx}
\begin{array}{rcl}
x_k \ns&\ns = \ns&\ns v_k + W\,A_+^\top w_{k+1}\,, \\ [2mm]
p_k \ns&\ns = \ns&\ns P_+\,v_k + (P_+W-I)\,A_+^\top w_{k+1}\,,
\end{array}
\quad 0\le k \le k_f-1\,.
\ee
Using (\ref{eqf1}) in (\ref{eqxx}) yields
\be
\label{eqn45}
\begin{array}{rcl}
x_k \ns&\ns = \ns&\ns A_+^k\,\alpha + W A_+^\top (A_+^\top)^{k_f-k-1}\beta\,, \\ [2mm]
p_k \ns&\ns = \ns&\ns P_+\,A_+^k\,\alpha + (P_+W-I) A_+^\top (A_+^\top)^{k_f-k-1}\beta\,,
\end{array}
\quad 0 \le k \le k_f-1\,,
\ee
while using (\ref{eqf1}) in (\ref{30c}) provides
\be
\label{eqn6}
u_k=-K_+ A_+^k\,\alpha + \bar{K}_+(A_+^\top)^{k_f-k-1}\beta\,,
\quad 0 \le k \le k_f-1\,.
\ee
Equations (\ref{eqn45}), (\ref{eqn6}) can be re-written in compact form as
\[
\left[\begin{array}{c} x_k \\ p_k \\ u_k \end{array}\right] =
\left[\begin{array}{c} I \\ P_+ \\ -K_+ \end{array}\right] A_+^k\, \alpha +
\left[\begin{array}{c} W\,A_+^\top \\ (P_+W-I)A_+^\top \\ \bar{K}_+ \end{array}\right]
(A_+^\top)^{k_f-k-1}\beta\,,
\quad 0 \le k \le k_f-1\,,
\]
that coincides with (19) in \cite{TAC2008-Z-2008}. Thus, Theorem 1 in \cite{TAC2008-Z-2008} has
been directly proven by using the correspondences stated in Lemma~\ref{lem1}.
\vspace{2mm}
\bibliographystyle{IEEEtran}
\section{Introduction}\label{sec:intro}
{Gamma-ray astronomy with ground-based instruments was enabled by the pioneering efforts of the Whipple collaboration \citep{1989ApJ...342..379W}, which demonstrated the excellent inherent sensitivity of imaging atmospheric Cherenkov telescopes (IACTs) for detecting astrophysical TeV photons.}
{The detection of several astrophysical sources (the Crab Nebula and several blazars including Mrk 421, Mrk 501, 1ES 2344+524, H 1426+428 and 1ES 1959+650) with the Whipple 10~m, HEGRA and CAT instruments in the mid-1990s motivated the construction of the current generation of telescopes (H.E.S.S. \citep{2006A&A...457..899A}, MAGIC-II \citep{2012APh....35..435A} and VERITAS \citep{2006APh....25..391H}) in the early twenty-first century. These modern instruments employ the stereoscopic imaging technique and exceed the sensitivity of any previous TeV gamma-ray telescope by an order of magnitude. The future potential of TeV astronomy to explore high-energy astrophysics and particle astrophysics is amply demonstrated by a collective catalog of over two hundred TeV $\gamma$-ray sources\footnote{TeVCat: \url{http://tevcat.uchicago.edu}}.}
{Observations of celestial photons with energies in the interval 20 GeV - 300 TeV address important questions of modern astrophysics and particle physics. Indeed, $\gamma$-ray\ astronomy promises key insights for a diverse range of topics including: the origin of cosmic rays; particle acceleration and propagation; cosmological radiation and magnetic fields; and even the composition and nature of dark matter \citep[see e.g.][]{Acharya20133}. Future IACTs will also provide important follow-up observations of sources that are identified at energies above 0.1 GeV by the \textit{Fermi} space telescope's all-sky survey. To date, \textit{Fermi} has delivered a catalog containing over 3000 distinct $\gamma$-ray\ sources \citep{2015ApJS..218...23A}, of which only a small fraction have been studied at multi-TeV energies. The forthcoming Cherenkov Telescope Array \citep[CTA;][]{2013APh....43....3A} will extend the energy coverage of the \textit{Fermi} survey to span six orders of magnitude in energy, with sufficient sensitivity to perform statistically rich population studies for source classes in the 100~TeV regime. CTA will exhibit substantially improved angular resolution and reduced background contamination with respect to current-generation instruments. These enhancements are expected to permit the establishment of multiwavelength associations for over 1000 sources that were discovered by \textit{Fermi} but lack a plausible counterpart at other wavelengths.}
{Building on the nascent successes of this new research field, the CTA project has been undertaken by a world-wide collaboration of scientists who have coalesced around the shared goal of constructing and operating a next-generation IACT array. The basic concept of CTA \citep[see e.g.][]{Acharya20133} envisions heterogeneous arrays comprising telescopes with differing sizes and capabilities. The prevalent designs include a large number of small-sized telescopes (SSTs, $\rm \varnothing_{mirror} \sim 4~m$), several tens of medium-sized telescopes (MSTs, $\rm \varnothing_{mirror} \sim 10 - 12~m$) and a small number of large-sized telescopes (LSTs, $\rm \varnothing_{mirror} \sim 23~m$), which combine to provide wide energy coverage spanning 20~GeV - 100~TeV.}
{The MSTs probe the sub-TeV to multi-TeV energy band, which is a regime for which the IACT technique achieves maximal sensitivity and excellent angular resolution. At higher energies, the Cherenkov light intensity for $\gamma$-ray\ showers is substantially increased, such that detection and imaging become feasible using the smaller mirror areas provided by SSTs. Conversely, the large mirror areas provided by the LSTs are required in order to access the 20 - 100 GeV regime, for which the Cherenkov light intensity is faint.}
{The energy ranges to which each telescope type is sensitive exhibit substantial overlap, but the effective collection areas \citep[see e.g.][]{Acharya20133} for sub-arrays comprising the MSTs and LSTs are markedly disparate ($\rm \sim 1 km^{2}$ and $\rm \sim 0.1 km^{2}$, respectively). A substantial increase in low-energy event statistics could be achieved by enabling detection of $\lesssim100\;\mathrm{GeV}$ $\gamma$-rays\ using the MST sub-array, thereby augmenting the much smaller effective collection area of the LSTs in this energy regime. Accordingly, it is important to consider how improvements in their electronics designs could enhance the low-energy response of MSTs.}
{Compelling scientific motivation for improving the sensitivity in the sub-100~GeV regime is provided by a renewed interest in $\gamma$-ray\ emission from pulsars, especially examples that exhibit an unexpected emission component in the 10 - 100 GeV regime. A further motivation is the potential to expand the size of the observable Universe for IACTs. On cosmological distance scales, the extragalactic background light strongly attenuates photons with energies exceeding $\sim10$s of GeV, with an opacity that increases with $\gamma$-ray\ energy \citep[e.g.][]{Dwek2013112}.}
{In this paper we describe a \textit{Decentralized Intelligent Array Trigger} (DIAT) system that is specifically designed to enable operation of IACTs with a low energy threshold, while maintaining stable and manageable data rates in the presence of varying observing conditions and ambient illumination. Transient brightening of the night-sky background light (NSB) can occur for several reasons, including partial cloud coverage during moonless nights, observation of bright regions in the galactic plane, or observation with partial moonlight. Stable telescope operation across these regimes would require adjustment of the \textit{single-telescope} trigger threshold if the rates were not moderated by a hardware \textit{array} trigger. Telescope and array triggering strategies are discussed further in $\S$\ref{sec:tel_trig}.}
{In addition to mitigating the effect of bright NSB, the DIAT system we present is also capable of substantially reducing the background trigger rate produced by cosmic-ray-induced air showers. Configurable firmware can selectively tune the background-rate suppression from factors of a few up to two orders of magnitude, while maintaining an acceptably high acceptance for $\gamma$-ray events.}
{The capabilities of the DIAT system are particularly well suited for a highly innovative telescope design \citep[e.g.][]{2008ICRC....3.1445V}, which uses a dual-mirror \textit{Schwarzschild Couder} (SC) configuration, and incorporates a finely pixelated camera comprising 11,328 independent silicon photomultiplier-based readout channels \citep{2015arXiv150902345O}.}
{This SC telescope (SCT) design provides a wide field of view ($\sim8^{\circ}$) and high-resolution imaging of air showers, with excellent angular and energy resolution. However, the large number of pixels, combined with a nominal camera trigger rate of 10~kHz per telescope, implies a substantial cost in terms of data transfer, storage and processing requirements. Reduction in the overall data volume can be achieved at the trigger level using front-end electronics, and potentially further by off-line post-processing.}
\section{Telescope and Array-Level Triggering}\label{sec:tel_trig}
IACTs operate in a strongly background-dominated regime. \textit{Array} triggering schemes use information from multiple telescopes to veto many background events \textit{before} camera readout, with the goal of stabilizing the array's energy threshold and dead time under variable ambient illumination. For the densely pixelated SCT camera, control over the array trigger rate is also desirable to guarantee that data-transfer rates remain tractable at the extremes of normal observing conditions.
{The overwhelming majority of individual \textit{pixel} triggers are associated with low-level illumination from the ambient night-sky background light. The NSB induces a rate of random pixel triggers that is often sufficient to generate a large rate of spurious individual \textit{telescope} triggers.}
{{To address this issue, current-generation IACTs implement \textit{multi-level} hardware trigger systems that require sequential fulfillment of criteria that involve signals from an increasing number of imaging elements \citep[See e.g.][for more details regarding the trigger systems of H.E.S.S., MAGIC-II and VERITAS]{2004APh....22..285F,2011ITNS...58.1685M,2008ICRC....3.1539W,2013arXiv1307.8360Z,2007ITNS...54..404P,2016JInst..11P4005L}}. Triggering of a single \textit{telescope} typically requires a cluster of adjacent camera pixels to trigger within a temporal coincidence window lasting a few nanoseconds. This precaution substantially reduces the rate of telescope triggers caused by the accidental pileup of night-sky background photons in a single pixel.}
{For typical sky brightnesses, the intensity of NSB photons is such that the rate of NSB-induced triggers becomes negligible for per-pixel signals exceeding $\sim5$ photons.} For larger {per-pixel} photon intensities, another background component comprising \textit{temporally correlated} Cherenkov light from cosmic-ray (CR) initiated air showers becomes dominant.
{Modern multi-telescope IACT arrays reduce contamination from CRs using some variant of a \textit{multiplicity} trigger that requires a minimum of two telescopes to trigger within 50--100~ns. Basic multiplicity array trigger systems help to stabilize the data rates at which these telescopes must operate, primarily by rejecting a faint subset of cosmic-ray- and single-muon-induced showers that trigger only a single telescope. CR-initiated air-showers that trigger multiple telescopes often exhibit temporal correlation between individual telescope triggers that is sufficient to impair the efficacy of traditional multiplicity array triggers.}
{The DIAT concept represents a significant extension to the functionality of existing array trigger schemes. It is the first system to use imaging information to discriminate between $\gamma$-ray- and cosmic-ray-induced shower images in real time at the \textit{hardware trigger} level. This has become possible thanks to the recent emergence of fast Field Programmable Gate Arrays (FPGAs) that can evaluate and apply individual camera trigger criteria while simultaneously performing reduction of the camera pixel hit pattern into summary image parameters in real-time. Rapid computation using the image parameters that are generated by neighbouring telescopes within the array permits near-real-time stereo analysis of the event.}
Subsequent sections demonstrate that spurious CR-induced triggers \textit{can} be identified and vetoed in near-real-time by processing reduced image data using modern FPGAs. In section \ref{sec:pwidth_def} we define the \textit{parallax width} event discriminator and mathematically describe the algorithm that is used to compute it. Section \ref{sec:passthrough} outlines how excessive trigger suppression for high-energy $\gamma$-ray\ events can be ameliorated by indiscriminately accepting events that surpass a combined image brightness threshold. In section \ref{sec:hw_implementation}, we provide a brief description of a distributed hardware array trigger that could be deployed to compute the \textit{parallax width} in near real-time for large IACT arrays. Section \ref{sec:results} presents and examines our results, which were obtained by applying the \textit{parallax width} algorithm to discriminate between simulated $\gamma$-ray- and proton-like events for a variety of $\gamma$-ray\ source configurations. We summarize our results in section \ref{sec:conclusions}. Overall, we explicitly show that image parameters involving only the first moments of each telescope image are sufficient to discriminate between hadronic and $\gamma$-ray-induced showers. If the rate of spurious telescope triggers can be effectively controlled, then the sensitivity of an IACT array to low energy $\gamma$-rays\ can be enhanced by reducing the trigger threshold of individual camera pixels.
{We note that there exist alternative proposals for data rate suppression that do not use multi-telescope information, and do not veto events in their entirety. Instead, such schemes implement strategies for lossy compression of event data by discarding segments of the Cherenkov image that are signal-free or deemed unlikely to contain signals produced by Cherenkov light \citep[e.g.][]{2015arXiv150807584C,2013JInst...8P6011N,2016JInst..11P4005L}. }
\section{The Parallax Width Discriminator}\label{sec:pwidth_def}
Electromagnetic air-showers initiated by $\gamma$-rays\ are characterized by a single shower axis, and triggered telescopes capture coherent elliptical images with a well defined nucleus-within-coma structure. In contrast, the hadronic component of cosmic-ray showers produces multiple sub-showers and typically yields a more fragmentary distribution of Cherenkov light \citep[see e.g.][]{1997JPhG...23.1013F}. The \textit{parallax width} \citep{1995ExpAstro...6...285,10.1063/1.3076821,2009arXiv0908.0179S} discriminator ($P$) leverages the difference between CR and $\gamma$-ray\ shower images to rapidly distinguish between these event categories at the \textit{hardware} level. The computation of $P$ comprises two distinct algorithmic stages: camera image preprocessing and computation of the parallax width discriminator, which are described in $\S$\ref{sec:im_preproc} and $\S$\ref{sec:pwidth_comp}, respectively.
{\subsection{Telescope Simulation Framework}\label{sec:simulations}
To investigate the efficacy of $P$ in distinguishing $\gamma$-ray- and cosmic-ray-initiated air-showers, simulations of the SCT array configuration
were produced using the \texttt{sim\_telarray} software package \citep{2008APh....30..149B}. Simulations of $\gamma$-ray\ emission from point-like and diffuse\footnote{$\gamma$-ray\ events were simulated with randomly distributed arrival directions, distributed within a cone with a $10^{\circ}$ opening angle around the telescope pointing axis. Such an event distribution appears genuinely diffuse for the SCT, which has an $8^{\circ}$ field of view.} sources, as well as simulations of a diffuse proton background, were used. The energy spectra of the simulated $\gamma$-rays\ and protons followed falling power-law distributions ($dN/dE\propto E^{-\Gamma_{x}}\;:x\in\{\gamma, p\}$) with spectral indices $\Gamma_{\gamma}=2$ and $\Gamma_{p}=2.7$, respectively\footnote{Although $\gamma$-ray\ spectra with indices as hard as 2 are seldom observed in nature, using simulated events with $\Gamma_{\gamma}=2$ enables the performance of the parallax width trigger to be evaluated with good number statistics at high $\gamma$-ray\ energies.}. Unless stated otherwise, $\gamma$-ray\ events model astrophysical sources located centrally in the field of view at an altitude of $70^{\circ}$ and an azimuth of $180^{\circ}$. The raw simulated camera images were provided as input to a computer simulation of the DIAT array triggering scheme.}
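For concreteness, energies following the simulated $dN/dE\propto E^{-\Gamma}$ spectral shapes can be drawn with a standard inverse-CDF sampler. The short Python sketch below is purely illustrative and is not part of \texttt{sim\_telarray}; the function name and the energy bounds are our own assumptions.

```python
import random

def sample_power_law(index, e_min, e_max, n, seed=0):
    """Draw n energies from dN/dE ~ E**(-index) between e_min and e_max
    using inverse-CDF sampling (valid for index != 1)."""
    rng = random.Random(seed)
    g = 1.0 - index                    # exponent of the integrated spectrum
    a, b = e_min ** g, e_max ** g
    return [(a + rng.random() * (b - a)) ** (1.0 / g) for _ in range(n)]

# A proton-like spectrum with index 2.7 between 0.1 and 100 TeV:
energies = sample_power_law(2.7, 0.1, 100.0, 10000)
```

For $\Gamma_{p}=2.7$ the sample is strongly weighted toward the lowest energies, mirroring the dominance of low-energy cosmic-ray triggers discussed in the text.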
\subsection{Camera Image Preprocessing}\label{sec:im_preproc}
Before $P$ is computed, preprocessing algorithms are applied to each camera image at the single telescope level. Figure \ref{fig:P_computation_Camera} illustrates the manner in which telescope-specific camera data are processed to derive a compact geometrical representation of the captured shower image.
To mitigate the effect of NSB photons randomly triggering each of the 11,328 individual readout channels, and thereby enable a reduction of the trigger threshold, 2,832 lower-resolution \textit{super-pixels}\footnote{{Throughout this work the terms \textit{super-pixel} and \textit{trigger pixel} are considered to be synonymous and are used interchangeably.}} are formed from four adjacent imaging pixels. The combined signal amplitudes of the super-pixels are used to form a Boolean-valued \textit{trigger image}, with a predefined threshold on the summed output level ($n_{\rm pe}$, typically expressed in photoelectron-equivalent units, hereafter \textit{p.e.}) segregating true (triggered) and false values. An \textit{individual telescope} is deemed to have triggered if its Boolean trigger image includes 3 or more adjacent triggered super-pixels\footnote{Super-pixels are considered to be adjacent if they share at least one common vertex.}.
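As a concrete illustration of this trigger logic, the sketch below forms super-pixels and applies the adjacency criterion on an idealized rectangular pixel grid. This is a simplified stand-in for the real, modular SCT camera geometry, and all function names are our own.

```python
def trigger_image(image, n_pe=2.0):
    """Sum 2x2 blocks of imaging pixels into super-pixels and threshold.

    `image` is a 2D list (rows x cols, even dimensions) of p.e. amplitudes.
    Returns a Boolean-valued super-pixel grid."""
    rows, cols = len(image) // 2, len(image[0]) // 2
    return [[image[2*r][2*c] + image[2*r][2*c+1]
             + image[2*r+1][2*c] + image[2*r+1][2*c+1] > n_pe
             for c in range(cols)] for r in range(rows)]

def telescope_triggered(trig, min_cluster=3):
    """True if any vertex-adjacent cluster of triggered super-pixels
    contains at least `min_cluster` members (depth-first flood fill)."""
    rows, cols = len(trig), len(trig[0])
    seen = set()
    for r0 in range(rows):
        for c0 in range(cols):
            if not trig[r0][c0] or (r0, c0) in seen:
                continue
            stack, size = [(r0, c0)], 0
            seen.add((r0, c0))
            while stack:
                r, c = stack.pop()
                size += 1
                for dr in (-1, 0, 1):
                    for dc in (-1, 0, 1):
                        rr, cc = r + dr, c + dc
                        if (0 <= rr < rows and 0 <= cc < cols
                                and trig[rr][cc] and (rr, cc) not in seen):
                            seen.add((rr, cc))
                            stack.append((rr, cc))
            if size >= min_cluster:
                return True
    return False
```

Note that vertex adjacency (sharing at least one corner) is implemented by scanning all eight neighbours of each super-pixel.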
\begin{figure*}
\centering
\includegraphics[width=0.5\textwidth]{unprocessed_noTelNum-eps-converted-to.pdf}%
\includegraphics[width=0.5\textwidth]{triggerWithAmp_noTelNum-eps-converted-to.pdf}\\%{event_87_trigPixPE}
\includegraphics[width=0.5\textwidth]{trigger_noTelNum-eps-converted-to.pdf}
\caption{\small The \textit{upper-left-hand} panel illustrates an unprocessed, finely pixelated image corresponding to a $\gamma$-ray-initiated air-shower. The \textit{upper-right-hand} panel illustrates the intermediate, coarsely pixelated image that is derived by combining the signals from sets of 4 neighbouring imaging pixels. The \textit{lower} panel illustrates the Boolean-valued trigger image that is required by the algorithm that computes $P$. The \textit{orange} arrows correspond to the vector $\mathbf{r}_{F}$, which connects the implicitly \textit{unweighted} trigger-image centroid with the \textit{fiducial} camera-plane coordinate $\mathbf{r}^{\star} = (0,0)$ at the camera centre. For comparison, the \textit{white} arrow (\textit{top-left-hand} panel) connects the \textit{signal-amplitude-weighted} mean position of all \textit{imaging} pixels with $\mathbf{r}^{\star}$.}\label{fig:P_computation_Camera}
\end{figure*}
A two-level image cleaning algorithm is applied to each trigger image to remove small clusters of NSB-induced pixel triggers that can incorrectly trigger the telescope or bias subsequent computation of $P$ by contaminating valid telescope images with noise. Nominally triggered pixels are retained in the cleaned trigger image if valid triggers were generated by \textit{at least} $n_{1}$ immediately adjacent pixels, \textit{at least one} of which \textit{itself} has $n_{2}$ triggered neighbours\footnote{The multilevel neighbour multiplicities for the cleaning algorithm were fixed to $n_{1}=3$ and $n_{2}=5$ after empirical investigation indicated good performance using these values.}. This cleaning algorithm is designed to retain extended contiguous groups of trigger pixels that are consistent with genuine air-shower images. Figures \ref{fig:two_level_cleaning_gamma} and \ref{fig:two_level_cleaning_noisy} demonstrate the algorithm for an image of a $\gamma$-ray-initiated air shower event and an image that contains
a large number of randomly triggered super-pixels, respectively. Most triggered pixels in the genuine $\gamma$-ray\ event are retained, while the sparse pixels that are randomly triggered are removed.
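The cleaning rule can be stated compactly in code. The following sketch again assumes an idealized rectangular super-pixel grid with vertex adjacency and the values $n_1=3$, $n_2=5$ used in the text; names and grid layout are illustrative only.

```python
def neighbour_counts(trig):
    """Number of triggered vertex-adjacent neighbours per super-pixel."""
    rows, cols = len(trig), len(trig[0])
    return [[sum(trig[r + dr][c + dc]
                 for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                 if (dr, dc) != (0, 0)
                 and 0 <= r + dr < rows and 0 <= c + dc < cols)
             for c in range(cols)] for r in range(rows)]

def two_level_clean(trig, n1=3, n2=5):
    """Retain a triggered super-pixel only if it has >= n1 triggered
    neighbours, at least one of which itself has >= n2 triggered
    neighbours."""
    rows, cols = len(trig), len(trig[0])
    counts = neighbour_counts(trig)

    def keeps(r, c):
        if not trig[r][c] or counts[r][c] < n1:
            return False
        return any(trig[r + dr][c + dc] and counts[r + dr][c + dc] >= n2
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0)
                   and 0 <= r + dr < rows and 0 <= c + dc < cols)

    return [[keeps(r, c) for c in range(cols)] for r in range(rows)]
```

A compact triggered block easily satisfies both levels, whereas sparse accidental triggers fail the first-level count and are removed.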
\begin{figure*}[p]
\centering
\includegraphics[width=0.5\textwidth]{telescope_80_trigPixPE_noTelNum-eps-converted-to.pdf}%
\includegraphics[width=0.5\textwidth]{telescope_80_trigPixNAdj_noTelNum-eps-converted-to.pdf}\\%{event_87_trigPixPE}
\includegraphics[width=0.5\textwidth]{telescope_80_trigPixBoolCleaned_noTelNum-eps-converted-to.pdf}
\caption{\small Demonstration of the two-level cleaning algorithm for $n_{1} = 3$, $n_{2} = 5$ and a pixel threshold $n_{\rm pe} = 2\,{\rm p.e.}$ when applied to a genuine $\gamma$-ray\ image. The \textit{upper-left} panel shows the signals registered by each triggered super-pixel in photoelectron-equivalent counts. The number of triggered neighbours for each super-pixel is illustrated in the \textit{upper-right} panel. The \textit{lower} panel reveals the result of applying the two-level cleaning algorithm to the original trigger image.}\label{fig:two_level_cleaning_gamma}
\end{figure*}
\begin{figure*}[p]
\centering
\includegraphics[width=0.5\textwidth]{telescope_95_trigPixPE_noTelNum-eps-converted-to.pdf}%
\includegraphics[width=0.5\textwidth]{telescope_95_trigPixNAdj_noTelNum-eps-converted-to.pdf}\\%{event_87_trigPixPE}
\includegraphics[width=0.5\textwidth]{telescope_95_trigPixBoolCleaned_noTelNum-eps-converted-to.pdf}
\caption{\small Demonstration of the two-level cleaning algorithm for $n_{1} = 3$, $n_{2} = 5$ and a pixel threshold $n_{\rm pe} = 2\,{\rm p.e.}$ when applied to an image comprising randomly triggered super-pixels that would otherwise be sufficiently numerous to fulfill the pass-through criterion ($n_{\rm TP} > 16$). The panels correspond to their counterparts in Figure \ref{fig:two_level_cleaning_gamma}.}\label{fig:two_level_cleaning_noisy}
\end{figure*}
\subsection{Computation of the Parallax Width Discriminator}\label{sec:pwidth_comp}
\begin{table*}
\begin{tabular}{lcm{0.7\textwidth}}
\hline
Coordinate System & Symbol & Description \\
\hline
\multirow{2}{*}{Camera Plane}
& $\mathbf{r}_{C,i}$ & The coordinates of the centroid of the Boolean-valued \textit{trigger image} for the $i$th telescope. \\
& $\mathbf{r}_{F,i}$ & The vector $\mathbf{r}_{C,i} - \mathbf{r}^{\star}$, pointing \textbf{from} the fiducial camera-plane coordinate $\mathbf{r}^{\star}$ \textbf{to} the trigger-image centroid of the $i$th telescope. \\
\hline
\multirow{2}{*}{Mirror Plane}
& $\mathbf{r}^{\prime}_{F,i}$ & The projection of $\mathbf{r}_{F,i}$ \textbf{from} the camera plane \textbf{into} the array mirror plane for each telescope. \\
& $\mathbf{r}_{\times,j}^{\prime}$ & The mirror-plane coordinates of a forward intersection between the $\mathbf{r}^{\prime}_{F,i}$ for two separate telescopes. \\
\hline
\end{tabular}\caption{Notation used in the derivation of $P$.}\label{tab:p_deriv_vars}
\end{table*}
\begin{figure*}[p]
\center
\includegraphics[width=0.68\textwidth]{mpProjGamma_labelFix-eps-converted-to.pdf}\\
\includegraphics[width=0.68\textwidth]{mpProjGammaDiff_labelFix-eps-converted-to.pdf}
\caption{\small Array mirror plane projections illustrating $P$ computation for an on-axis $\gamma$-ray\ event (\textit{top}), and an off-axis $\gamma$-ray\ event with $E_{\gamma}\lesssim2$ TeV (\textit{bottom}). \textit{Red circles} identify SCT telescopes (\textit{magenta circles}) that triggered. \textit{Red dashed} arrows indicate the projected direction of $\mathbf{r}_{F,i}^{\prime}$ for each telescope in the array that generated a valid trigger-image. \textit{Red crosses} are used to indicate the set of valid \textit{forward} intersections $\{\mathbf{r}_{\times,j}^{\prime}\}$ between the $\{\mathbf{r}_{F,i}^{\prime}\}$. The green marker indicates the coordinates at which the shower core intersects the mirror plane. \textit{Grey panels} list the computed values of $P$, the number of {\textit{forward}} intersections {for which $20^{\circ} < \theta_{\times} < 160^{\circ}$} ($n_{\times}$) that were used in the computation, the \textit{true} energy ($E_{\rm MC}$) of the simulated event, and the \textit{true} angular offset ($\theta_{\rm MC}$) between the incident particle direction and the array pointing.}\label{fig:P_computation_MP1}
\end{figure*}
\begin{figure*}[p]
\center
\includegraphics[width=0.68\textwidth]{mpProjGammaDiffHE2_labelFix-eps-converted-to.pdf}\\
\includegraphics[width=0.68\textwidth]{mpProjProton_labelFix-eps-converted-to.pdf}
\caption{\small Array mirror plane projections illustrating $P$ computation for an off-axis $\gamma$-ray\ event for $E_{\gamma}\gtrsim2$ TeV (\textit{top}), and an off-axis cosmic-ray proton event (\textit{bottom}). See Figure \ref{fig:P_computation_MP1} for an explanation of the event-specific quantities that each marker is used to indicate.}\label{fig:P_computation_MP2}
\end{figure*}
Table \ref{tab:p_deriv_vars} summarizes the notation used in the subsequent derivation which also uses the index variable $i$ to enumerate the set of all triggered telescopes.
\begin{enumerate}
\item For each telescope that triggers, the \textit{centroid}-vector $\mathbf{r}_{C,i}$ of the coarsely-sampled trigger image is defined as the mean camera-coordinates of the set of triggered super-pixels.
\item For each centroid vector, a second vector $\mathbf{r}_{F,i} = \mathbf{r}_{C,i} - \mathbf{r}^{\star}$ is defined where $\mathbf{r}^{\star}$ is an arbitrarily selected \textit{fiducial} camera-plane coordinate $\mathbf{r}^{\star} = (x_{C}^{\star}, y_{C}^{\star})$.\footnote{The \textit{direction} of this vector is hereafter described as the \textit{forward} direction and intersections between pairs of vectors along their forward directions are described as \textit{forward intersections}.} For real telescope arrays, the introduction of $\mathbf{r}^{\star}$ provides the flexibility to correct for non-uniform mechanical deformations of individual telescope structures, or non-parallel telescope pointing modes, and ensure that $\mathbf{r}_{F,i}$ corresponds to the projection of identical celestial coordinates in all telescope cameras. This study simulates all telescopes identically and computation of $P$ is simplified by defining $\mathbf{r}^{\star}$ to be the camera centre $(0,0)$.
\end{enumerate}
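Steps 1 and 2 amount to simple vector arithmetic; the following is a minimal sketch, with the fiducial point fixed to the camera centre as in this study (function names are ours).

```python
def centroid(triggered_coords):
    """Unweighted centroid of the triggered super-pixel camera-plane
    (x, y) coordinates; assumes at least one triggered super-pixel."""
    n = len(triggered_coords)
    sx = sum(x for x, _ in triggered_coords)
    sy = sum(y for _, y in triggered_coords)
    return (sx / n, sy / n)

def forward_vector(triggered_coords, r_star=(0.0, 0.0)):
    """r_F = r_C - r_star, with the fiducial point r_star defaulting to
    the camera centre (0, 0)."""
    cx, cy = centroid(triggered_coords)
    return (cx - r_star[0], cy - r_star[1])
```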
The events shown in each panel of Figures \ref{fig:P_computation_MP1} and \ref{fig:P_computation_MP2} provide representative examples of $\gamma$-ray\ and proton-initiated events with various simulated characteristics. They illustrate how the subsequent algorithmic operations are used to compute $P$ for each event class.
\begin{enumerate}
\item The direction of each computed $\mathbf{r}_{F,i}$ vector is projected from the independent coordinate systems of each telescope camera plane into a unified coordinate system spanning the \textit{mirror plane} of the telescope array, defining a set of \textit{projected} vectors $\{\mathbf{r}_{F,i}^{\prime}\}$. The mirror plane is defined to intersect the mean of the geographical coordinates of all telescopes comprising the complete array and to lie perpendicular to their common pointing axes.\footnote{{Accordingly, \textit{if} the telescopes of the array were coaligned to point vertically upwards, then the mirror plane and ground plane would be identical.}}
\item A set of $n_{\times}$ mirror-plane coordinates $\{\mathbf{r}_{\times,j}^{\prime}\}$ is computed that corresponds to the \textit{forward} intersections between the projected $\{\mathbf{r}_{F,i}^{\prime}\}$ vectors. {Only intersections that subtend angles $20^{\circ} < \theta_{\times} < 160^{\circ}$ are used for the computation of $P$\footnote{Intersections that subtend angles outside of this range are excluded since discretization of the camera image induces small errors affecting the computation of the $\{\mathbf{r}_{F,i}^{\prime}\}$ which can amplify to produce large, spurious offsets when the $\{\mathbf{r}_{\times,j}^{\prime}\}$ are computed.}.} The index variable $j$ is used to enumerate the set of all forward intersections.
\item Finally, the \textit{parallax width} is defined as the dispersion of the intersection coordinates
\begin{equation}\label{eq:parwidth_expression}
P = \sqrt{\frac{\sum_{j}\left| \mathbf{r}_{\times,j}^{\prime} - \langle \mathbf{r}_{\times,j}^{\prime} \rangle \right|^{2}}{n_{\times}}}.
\end{equation}
\end{enumerate}
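The projection and intersection steps above can be sketched as follows, under the simplifying assumption of identical, parallel-pointing telescopes, so that each camera-plane direction maps directly onto the array mirror plane, anchored at the corresponding telescope's mirror-plane position. All names are illustrative.

```python
import math

def forward_intersections(positions, directions):
    """Pairwise forward intersections of rays anchored at the telescope
    mirror-plane positions, keeping only crossing angles in (20, 160) deg."""
    pts = []
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            (xi, yi), (xj, yj) = positions[i], positions[j]
            (ai, bi), (aj, bj) = directions[i], directions[j]
            denom = ai * bj - bi * aj          # 2D cross product
            if denom == 0.0:                   # parallel rays never cross
                continue
            dot = ai * aj + bi * bj
            norm = math.hypot(ai, bi) * math.hypot(aj, bj)
            theta = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
            if not 20.0 < theta < 160.0:       # reject shallow crossings
                continue
            dx, dy = xj - xi, yj - yi
            s = (dx * bj - dy * aj) / denom    # distance along ray i
            u = (dx * bi - dy * ai) / denom    # distance along ray j
            if s > 0.0 and u > 0.0:            # forward direction only
                pts.append((xi + s * ai, yi + s * bi))
    return pts

def parallax_width(pts):
    """Dispersion of the intersection coordinates (Eq. 1); assumes at
    least one valid forward intersection."""
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    return math.sqrt(sum((x - mx) ** 2 + (y - my) ** 2 for x, y in pts) / n)
```

For $\gamma$-ray-like geometry, where all rays aim at a common mirror-plane point, the intersections pile up and $P$ approaches zero; misaligned rays, as produced by hadronic sub-showers, disperse the intersections and inflate $P$.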
The \textit{upper} and \textit{lower} panels of Figure \ref{fig:P_computation_MP1} and the \textit{upper} panel of Figure \ref{fig:P_computation_MP2} all correspond to $\gamma$-ray-initiated events. For such events, the computed trigger-image centroids $\mathbf{r}_{C,i}$ correspond closely with the camera-plane projection of a single point in 3-dimensional space. This point lies close to the air-shower axis at the height of maximal Cherenkov emission. The fiducial camera coordinate $\mathbf{r}^{\star} = (0,0)$ is also the camera-plane projection of a second, infinitely distant point that is identical for all telescopes. For $\gamma$-ray-initiated events, each $\mathbf{r}_{F,i}$ connects projected points that are effectively identical for all telescopes. Accordingly, the mapping $\{\mathbf{r}_{F,i}\}\rightarrow\{\mathbf{r}_{F,i}^{\prime}\}$ yields a set of vectors which intersect in a tightly clustered region of the array mirror plane and the computed value of $P$ is small.
Figure \ref{fig:P_computation_MP1} (\textit{top}) corresponds to a $\gamma$-ray-initiated event aligned with the telescopes' optical axes (\textit{on-axis}). The arrival directions of on-axis events correspond precisely with the fiducial camera-plane coordinate $\mathbf{r}^{\star} = (0,0)$, when projected into the camera plane's coordinate system\footnote{All points along the event's arrival direction-vector are implicitly \textit{on the air-shower axis}, and $\mathbf{r}^{\star}$ corresponds to the projection of the point on that vector that is located at infinity.}. Consequently, the derived $\{\mathbf{r}_{\times,j}^{\prime}\}$ coincide closely with the projection of the axis into the array mirror plane.
The \textit{lower} panel of Figure \ref{fig:P_computation_MP1} is representative of an \textit{off-axis} $\gamma$-ray\ event with energy \textit{below} 2 TeV. Low energy events typically produce compact camera images with small offsets between the true image centroid and that of the Boolean-valued trigger image. Accordingly, the projected point pairs connected by each of the $\mathbf{r}_{F,i}$ remain \textit{almost} identical for all telescopes and the tight clustering of the $\{\mathbf{r}_{\times,j}^{\prime}\}$ is preserved. The arbitrary, \textit{a priori} definition of $\mathbf{r}^{\star}$ implies that the single, infinitely distant point that is projected to those camera coordinates for \textit{all} telescopes will \textit{generally not} lie on the air-shower axis. Accordingly, the coincidence between the $\{\mathbf{r}_{\times,j}^{\prime}\}$ and the mirror-plane projection of the air-shower axis is lost. A more detailed discussion of this effect, as well as some additional considerations that apply to higher energy off-axis $\gamma$-ray\ events, is presented in $\S$\ref{sec:passthrough}.
Finally, the \textit{lower} panel of Figure \ref{fig:P_computation_MP2} illustrates a proton-initiated event. There is now no guarantee that $\mathbf{r}_{C,i}$ for each telescope corresponds with the projection of a single 3-dimensional point, since different telescopes may image multiple, different sub-showers of the hadronic cascade. Accordingly, the tight clustering of the $\{\mathbf{r}_{\times,j}^{\prime}\}$ is lost and the computed value of $P$ is large.
\section{Indiscriminate Acceptance for High-Energy Events}\label{sec:passthrough}
The parallax width algorithm assumes that the centroids of \textit{trigger} images reliably encode the geometry of the air-showers.
If the intensity profile of the Cherenkov light image is substantially asymmetric, the validity of this assumption may degrade. Ideally, the trigger-image centroid $\mathbf{r}_{C,i}$ should correspond closely with the \textit{full image centroid} $\mathbf{r}_{C^{\dag},i}$, defined as the signal-amplitude-weighted mean position of all \textit{imaging} pixels that trigger in response to incident Cherenkov photons. Without access to the information provided by the individual pixel \textit{amplitudes}, the Boolean-valued \textit{trigger} images appear more symmetric and the $\mathbf{r}_{C,i}$ that are used to compute $P$ may not accurately represent the air-shower geometry.
The \textit{left} and \textit{right-hand} panels of Figure \ref{fig:P_computation_Camera} illustrate schematically how any camera-coordinate offsets $\epsilon_{C,i} = \mathbf{r}_{C,i} - \mathbf{r}_{C^{\dag},i}$ between the two centroid definitions produce corresponding directional perturbations of each $\mathbf{r}_{F,i}$. A representative mirror-plane projection of these misaligned vectors is illustrated in the \textit{top panel} of Figure \ref{fig:P_computation_MP2}, and yields a set $\{\mathbf{r}_{F,i}^{\prime}\}$ that typically increases the dispersion between the $\{\mathbf{r}_{\times,j}^{\prime}\}$ intersection coordinates, and inflates the computed value of $P$. The potential magnitude of $\epsilon_{C,i}$ increases for high-energy $\gamma$-ray-initiated air showers, which typically produce extensive images that comprise a large number ($n_{\rm TP}$) of triggered super-pixels.
To prevent spurious rejection of \textit{genuine} high-energy, $\gamma$-ray-initiated events, a pre-calibrated multiplicity threshold $n^{\rm pass}_{\rm TP}$ is used to unconditionally accept (or \textit{pass through}) events for which \textbf{any} telescope in the array captures a trigger image comprising $n^{\rm pass}_{\rm TP}$ or more super-pixels. As illustrated by Figure \ref{fig:passthrough_comp}, the expected \textit{single-telescope} trigger rate $R_{p}$ for simulated, \textit{proton}-initiated air-showers is used to calibrate an appropriate value for $n^{\rm pass}_{\rm TP}$. To retain effective suppression of the most frequent cosmic-ray triggers, a threshold corresponding to the typical super-pixel multiplicity for \textit{proton}-initiated events that trigger at 3\% of the peak single-telescope rate is adopted. Data will be serialized and readout will occur for any telescope that records an image with $n_{\mathrm{TP}} > n_{\mathrm{TP}}^{\mathrm{pass}}$. Accordingly, in order to select a threshold that effectively controls the single-telescope dead time induced by cosmic-ray events that fulfill the pass-through criterion, it is important to calibrate $n_{\mathrm{TP}}^{\mathrm{pass}}$ using the differential trigger rate for a \textit{single} telescope.
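The rate-based calibration step can be illustrated with a toy helper that reads off the energy at which a tabulated differential single-telescope trigger-rate curve falls to a given fraction of its peak. The function name is our own and the tabulated values in the usage below are invented for illustration; the real calibration folds simulated effective areas with the cosmic-ray spectrum.

```python
def passthrough_energy(energies, rates, fraction=0.03):
    """Return the lowest tabulated energy at or above the rate peak at
    which the differential trigger rate drops below `fraction` of the
    peak rate; falls back to the last bin if the curve never does."""
    peak = max(rates)
    start = rates.index(peak)
    for energy, rate in zip(energies[start:], rates[start:]):
        if rate < fraction * peak:
            return energy
    return energies[-1]
```

The resulting energy is then mapped onto a super-pixel multiplicity via the energy-binned mean multiplicity distribution (Figure \ref{fig:passthrough_comp}, \textit{bottom}).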
\begin{figure*}[p]
\centering
\includegraphics[width=0.7\textwidth]{passThroughCompRToE_2_5_bold-eps-converted-to.pdf}\\
\includegraphics[width=0.7\textwidth]{passThroughCompEToN_2_5_bold-eps-converted-to.pdf}
\caption{\small Computation of the super-pixel multiplicity $n^{\rm pass}_{\rm TP}$ that is required to satisfy the pass-through criterion. In the \textit{top} panel, the energy-binned effective collection area for \textit{proton}-initiated events that trigger \textit{at least one} telescope is folded with the expected cosmic-ray spectral shape ($\propto E_{p}^{-2.7}$) and used to determine the typical proton energy $E_{p,3\%}$ at which the expected rate of \textit{single} telescope triggers falls below 3\% of its peak value. Note that $\gamma$-ray-like images of a hadronic sub-shower typically sample a fraction of the energy of the incident proton. Accordingly, genuine $\gamma$-rays\ that produce images comprising $n^{\rm pass}_{\rm TP}$ trigger pixels have energies that are typically $\sim30\%$ of those of protons that do so. In the \textit{bottom} panel, the energy-binned distribution of mean trigger image super-pixel multiplicities is used to derive the value of $n^{\rm pass}_{\rm TP}$ that corresponds to events for which $E_{p}\sim E_{p,3\%}$.}\label{fig:passthrough_comp}
\end{figure*}
\section{Hardware Implementation of the Parallax Width Algorithm}\label{sec:hw_implementation}
The algorithm that calculates $P$ was intentionally designed to be simple, in order to facilitate rapid, FPGA-based computation \citep[See e.g.][for examples of existing VHE $\gamma$-ray\ telescopes that use similar technology]{4774947, 2013arXiv1307.8360Z}. Rapid exchange of locally synthesized trigger-images and high-resolution timing data between nearby telescopes can be used to derive independent, telescope-specific values of $P$ that can inform the decision to initiate or veto readout of finely pixelated image data. The ability to intelligently filter events before they are fully digitized, transmitted and stored substantially reduces the system dead time that would be associated with those processes. We note that the parallax width veto does not completely eliminate system dead time, since events that are \textit{accepted} by the array trigger will still require digitization and readout.
The design of the SCT prototype includes a $16\,\mu{\rm s}$ pixel readout buffer \citep{2015arXiv150902345O,2015arXiv150806296T}, which exceeds the combined time required to exchange data between telescopes and compute $P$, enabling high-resolution signal data to be temporarily stored by each telescope pending a trigger decision. Hardware and firmware that implement the DIAT triggering scheme have also been developed and will be deployed concurrently with the SCT prototype. {Hardware-based computation of $P$ by direct evaluation of \eqref{eq:parwidth_expression} is not practical. A proven alternative approach entails straightforward combination of values that are extracted from precomputed lookup tables \citep[e.g.][]{4774947,2013arXiv1307.8360Z}. Moreover, provided with appropriately parameterised tables, factors that complicate the computation, such as telescope field rotation or structural deformation under slewing, can be effectively addressed.}
\section{Results and Discussion}\label{sec:results}
Figure \ref{fig:pwidth_dists} (\textit{top}) shows the distributions of $P$ values computed for each of the three investigated datasets, while the \textit{bottom} panel displays the corresponding cumulative distributions of $P$. For the point-like $\gamma$-ray\ source, all the simulated photons are incident on-axis, and 90\% of computed $P$ values are $<11$ m, in accordance with expectation. For extended and diffuse $\gamma$-ray\ sources, the majority of incident photons are incident off-axis. Nonetheless, the expected prevalence of small $P$ values is realized, with 90\% of events that do \textit{not} fulfil the pass-through criterion having $P<40\,{\rm m}$. In contrast, $\sim93\%$ of all proton-initiated events yield $P>40\,{\rm m}$, confirming the expectation that $P$ is a highly efficient discriminator between $\gamma$-ray- and proton-initiated events.
Figure \ref{fig:pw_thresh_vs_offset} illustrates the evolution of $\gamma$-ray\ event retention with increasing angular offset $\theta_{\rm off}$ between the assumed $\gamma$-ray\ incidence direction and the telescope optical axis. A point-like $\gamma$-ray\ source was simulated at various offsets from the camera centre and the resultant data were used to derive the threshold parallax width $P_{\rm th}$ that \textit{retains} 90\% of events which yield three valid trigger images. The expected energy dependence of $P_{\rm th}$ (see $\S$\ref{sec:passthrough} for details) is evident, with event retention degrading most rapidly at large offsets for $\gamma$-ray\ energies exceeding 10 TeV but remaining $\lesssim40\,{\rm m}$ for $E_{\gamma} < 1$ TeV. {Critically, the lowest energy event subset is characterized by the faintest Cherenkov images and therefore overlaps significantly with the set of events for which \textit{none} of the triggering telescopes fulfill the pass-through criterion ($n^{\rm pass}_{\rm TP} = 16$).} {An important corollary is that those higher energy, large-offset events that \textit{would} have been rejected on the basis of $P$ alone are retained if the pass-through criterion is respected.}
{For the simulated array, indiscriminately accepting all events that fulfill the pass-through criterion, and adopting $P_{\rm th} = 40\,{\rm m}$ for the remainder, results in retention of $\gtrsim 90\%$ of genuine $\gamma$-ray\ events for $\theta_{\rm off} \lesssim 3.5^{\circ}$.}
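Combining the two acceptance paths, the overall per-event array-trigger decision reduces to a few lines. The sketch below uses the thresholds quoted above ($n^{\rm pass}_{\rm TP}=16$, $P_{\rm th}=40\,{\rm m}$); the function name is our own.

```python
def array_trigger(n_tp_per_tel, p_value, n_tp_pass=16, p_th=40.0):
    """Accept an event if any telescope fulfils the pass-through
    multiplicity criterion; otherwise require a small parallax width.
    `p_value` may be None when too few valid intersections exist."""
    if max(n_tp_per_tel) >= n_tp_pass:
        return True                     # indiscriminate pass-through
    return p_value is not None and p_value < p_th
```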
\begin{figure*}
\centering
\includegraphics[width=0.7\textwidth]{pwidthDists_bold-eps-converted-to.pdf}\\
\includegraphics[width=0.7\textwidth]{pwidthCDFs_bold-eps-converted-to.pdf}
\caption{\small The computed distributions (\textit{top}) and CDFs (\textit{bottom}) of $P$ values corresponding to point-origin (on-axis, \textit{red}) and diffuse (\textit{blue}) $\gamma$-ray\ events, and diffuse cosmic-ray background events (\textit{green}).}\label{fig:pwidth_dists}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.7\textwidth]{pwThresh90thPercentileVsOffset-eps-converted-to.pdf}
\caption{\small Evolution of $\gamma$-ray\ retention with increasing angular offset $\theta_{\rm off}$ between the common optical axes of all telescopes in the array and the true air-shower axis. The various curves represent different subsets of the simulated event ensemble and illustrate the threshold value $P_{\rm th}(\theta_{\rm off})$ \textit{below} which 90\% of $\gamma$-rays\ that trigger \textit{at least three} telescopes in the array are retained by the selection algorithm. The \textit{red} curve corresponds to predominantly low-energy $\gamma$-ray\ events that would \textbf{not} fulfill the pass-through array trigger criterion. The \textit{blue}, \textit{yellow} and \textit{magenta} curves represent energy-selected subsets and illustrate that $\gamma$-ray\ retention at large offsets degrades with increasing $\gamma$-ray\ energy ($E_{\gamma}$). Finally, the \textit{green} curve represents the union of the energy-selected subsets. Reliable reconstruction of events that are incident at offsets in excess of $\theta_{\max}^{\rm ana.} = 3.5^{\circ}$ (\textit{dashed vertical} line) is typically problematic and such events would be discarded by subsequent data analysis. The \textit{dot-dashed vertical} line indicates the angular extent ($\theta_{\max}^{\rm FoV} = 4^{\circ}$) of the SCT camera.}\label{fig:pw_thresh_vs_offset}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.7\textwidth]{multiRateNsbBiasCurves-eps-converted-to.pdf}
\caption{\small Projected \textit{single telescope} readout rates following cosmic-ray or NSB-induced array triggers for various simulated super-pixel trigger thresholds. The expected array trigger rate in the absence of a multi-telescope trigger criterion is illustrated by the \textit{black crosses}. The \textit{blue squares} are derived by requiring that \textit{at least} two telescopes in the array trigger within $25\,{\rm ns}$, while \textit{magenta triangles} illustrate the effect of increasing the telescope multiplicity requirement from two to three. \textit{Orange diamonds} illustrate rates that result when the requirement that \textit{at least} three telescopes trigger within $25\,{\rm ns}$ is retained before application of the two-level cleaning algorithm and subsequent indiscriminate acceptance of events with $n_{\rm TP} \geq 16$.
\textit{Thick, solid curves} indicate the expected readout rate of cosmic-ray\revision{}{-proton}-induced triggers. The remaining curves illustrate the expected readout rate of NSB-induced triggers for nominal (\textit{thin solid lines}), two-times-nominal (\textit{dashed lines}) and four-times-nominal (\textit{dotted lines}) NSB intensity levels. The nominal NSB level \revision{}{implies production of 11.96 photoelectrons per super-pixel, per microsecond, which} is consistent with a typical extragalactic field of view.}\label{fig:crAndNsbBiasCurves}
\end{figure*}
Figure \ref{fig:crAndNsbBiasCurves} illustrates how the rate of background-induced telescope triggers increases as the super-pixel trigger threshold is reduced. The rates were derived using simulations of proton-initiated air showers, with the random incidence of NSB photons upon each camera pixel modelled by the detector simulation software. Individual curves represent the frequency of single-telescope triggers that are subsequently retained by a particular array triggering strategy. The requirement for retention by an array trigger implies a direct correspondence between the derived rates and the average frequency with which readout of finely pixelated image data would be initiated for each telescope.
The upper four curves in Figure \ref{fig:crAndNsbBiasCurves} share a number of distinctive characteristics. For small $n_{\rm pe}$, the majority of single telescope triggers are induced by random triggering of camera pixels by NSB photons. The probability that random pixel triggers will generate a valid telescope trigger decreases rapidly with increasing $n_{\rm pe}$ and the resultant rate curve is characterized by a steeply falling power law. An \textit{inflection point}, marked by a reduction of the absolute power-law index at a particular threshold ($n^{\star}_{\rm pe}$), represents the transition to a regime in which Cherenkov photons from cosmic-ray-induced air showers dominate the single telescope trigger rates. Whereas the incident flux of cosmic rays is typically stable and isotropic for a particular telescope location, the NSB intensity can vary markedly between different astronomical fields, and sporadic ambient light sources may also illuminate telescopes without warning. Figure \ref{fig:crAndNsbBiasCurves} illustrates that \revision{}{without an array trigger, reliably stable operation would require $n^{\star,\mathrm{1-fold}}_{\rm pe} \gtrsim 3.5\,\mathrm{p.e.}$. Relative to this threshold, \textit{any} of the array trigger schemes enables reduction of $n^{\star}_{\rm pe}$ and thereby reduces the threshold energy of $\gamma$-rays\ that the instrument can detect. Moreover, while traditional 2- and 3-fold multiplicity array triggers yield $n^{\star,\mathrm{2-fold}}_{\rm pe}\approx{3.0}\,\mathrm{p.e.}$ and $n^{\star,\mathrm{3-fold}}_{\rm pe}\approx{2.7}\,\mathrm{p.e.}$}, a simulated array trigger that implements the parallax width algorithm and unconditionally accepts events that fulfill $n_{\rm TP} \geq 16$ lowers the threshold at which the inflection point occurs to $n^{\star}_{\rm pe}\approx{2.5}\,{\rm p.e.}$
Specifying a pixel threshold \revision{$n_{\rm pe}\gg n^{\star}_{\rm pe}$}{$n_{\rm pe}$ that exceeds $n^{\star}_{\rm pe}$ by factors of at least a few} enables stable array operation in all but the most extreme observational conditions, but sacrifices sensitivity to intrinsically faint, low-energy $\gamma$-ray\ events. By substantially reducing $n^{\star}_{\rm pe}$, the parallax width trigger algorithm also lowers the overall energy threshold of the instrument.
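To see why a multi-telescope requirement suppresses NSB-induced array triggers so effectively, the standard accidental-coincidence estimate can be evaluated numerically. The single-telescope rate and array size below are illustrative assumptions, not values taken from the simulations described above:

```python
# Illustrative accidental-coincidence estimate for NSB-induced triggers.
# R1 (single-telescope NSB rate) and N (number of telescopes) are assumed
# values for illustration only; tau is the 25 ns window quoted in the text.

R1 = 1.0e3      # assumed single-telescope NSB trigger rate [Hz]
N = 9           # assumed number of telescopes in the subarray
tau = 25e-9     # coincidence window [s]

# Each of the N*(N-1)/2 telescope pairs produces an accidental 2-fold
# coincidence when two uncorrelated triggers fall within 2*tau of each other.
pairs = N * (N - 1) / 2
R2 = pairs * R1**2 * 2 * tau

print(f"accidental 2-fold rate: {R2:.2f} Hz")  # ~1.8 Hz vs. R1 = 1 kHz
```

Even under these rough assumptions, a 2-fold coincidence requirement reduces the spurious rate by almost three orders of magnitude, which is why array-level vetoing permits lower per-pixel thresholds.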
\begin{figure*}[p]
\centering
\includegraphics[width=0.7\textwidth]{aEffGamma_2_5_bold-eps-converted-to.pdf}\\
\includegraphics[width=0.7\textwidth]{aEffGammaPT_2_5_bold-eps-converted-to.pdf}
\caption{\small Collection areas for a point-origin $\gamma$-ray\ source and a super-pixel trigger threshold of {2.5}$\,{\rm p.e.}$ assuming $n^{\rm pass}_{\rm TP}\rightarrow\infty$ (\textit{top}), and $n^{\rm pass}_{\rm TP}=16$ (\textit{bottom}). The \textit{black} curve represents a single telescope trigger requirement (i.e. no array trigger). The \textit{green} curve plots the collection area for events that trigger at least two telescopes and are subsequently found to yield two Cherenkov images that are useable for parameterization and geometric reconstruction of the corresponding air shower and the properties of its progenitor $\gamma$-ray.
The \textit{violet} curve illustrates the collection area for a more desirable subset of events that yield \textit{at least three} high-quality Cherenkov images. The \textit{red} curve represents the collection area \revision{}{for events that are accepted} after application of the parallax width criterion, rejecting any events for which $P>40\,{\rm m}$.}\label{fig:gamma_aeffs}
\end{figure*}
In addition to providing efficient rejection of unwanted background events, the parallax width algorithm must also retain a large fraction of genuine $\gamma$-ray\ events. The \textit{top} panel of Figure \ref{fig:gamma_aeffs} illustrates the degree to which this requirement is fulfilled by plotting the energy-binned effective collection areas for the simulated array configuration, assuming a point-like $\gamma$-ray\ source, \textit{without} application of a pass-through for events yielding extensive high-multiplicity trigger images. {The parallax width algorithm requires at least two intersections (and therefore three triggering telescopes) to compute $P$. Air showers with a lower overall energy content produce less Cherenkov light and consequently, they trigger fewer telescopes. Accordingly, although the parallax-width algorithm operates effectively for images that are not minimally reconstructible, the effective collection area for a two-telescope, minimally reconstructible image criterion (\textit{green dashed} curves in Figure \ref{fig:gamma_aeffs} and the \textit{top} panel of Figure \ref{fig:gamma-diff_and_proton_aeffs}) nonetheless exceeds that of the parallax-width algorithm at energies below $\sim300\;\mathrm{GeV}$. However, it should be noted that the ability to exploit images that are not minimally reconstructible enables the parallax width algorithm to retain \textit{all} fully reconstructible three-telescope triggers at \textit{all} energies.}
The \textit{bottom} panel of Figure \ref{fig:gamma_aeffs} demonstrates that adoption of a pass-through threshold $n^{\rm pass}_{\rm TP} = 16$ further enhances the retention of minimally reconstructible $\gamma$-ray\ events by the DIAT. {We use \textit{minimally reconstructible} to describe those events that trigger \revision{}{at} least two telescopes and are subsequently found to yield at least two Cherenkov images that are useable for parameterization and geometric reconstruction of the corresponding air shower and the properties of its progenitor $\gamma$-ray.}
The \textit{top} panel of Figure \ref{fig:gamma-diff_and_proton_aeffs} uses simulated effective collection areas to demonstrate the response of the DIAT that implements the parallax width algorithm for \revision{an extended}{a diffuse} astrophysical $\gamma$-ray\ source, adopting $n^{\rm pass}_{\rm TP} = 16$. {Hereafter, we describe images that can be used for parameterization and geometric reconstruction of the corresponding air shower and the properties of its progenitor $\gamma$-ray\ as \textit{high quality}}. At low energies the rejection of events yielding $P>40\,{\rm m}$ retains a large fraction of events that yield three high-quality Cherenkov images, while at energies $\gtrsim1$ TeV, the pass-through trigger results in retention of \textit{all} minimally reconstructible events. The \textit{bottom} panel of Figure \ref{fig:gamma-diff_and_proton_aeffs} illustrates the power of $P$ to effectively reject cosmic-ray proton-initiated events. The \textit{blue} curve illustrates the {expected camera readout rate as a function of proton energy} for a traditional two-telescope multiplicity array trigger, which accepts events for which at least two neighbouring telescopes trigger. The parallax width trigger reduces the expected rate of incorrectly accepted CR events by a factor of $\sim4$, which is almost double the suppression that is achieved by the two-telescope multiplicity requirement. Moreover, the contrast in CR suppression efficacy is maximized at lower energies where the incident cosmic-ray rate is largest.
Figure \ref{fig:event_recovery} illustrates how a DIAT using the parallax width algorithm can be used to improve the low-energy sensitivity of the SCT subarray. Field-to-field variability of the NSB intensity between observations can induce unpredictable spikes in the array trigger rate if accidental coincidences between spurious single telescope triggers cannot be effectively suppressed. Typically, the required suppression is achieved by requiring a higher single-pixel trigger threshold, which increases the overall low-energy threshold of the array. The ability of the DIAT to perform real-time multi-telescope event vetoing allows lower pixel thresholds to be used while maintaining tractable array trigger rates. In Figure \ref{fig:event_recovery} (\textit{top panel}), the differential trigger rates for a Crab-pulsar-like $\gamma$-ray\ source
are compared for minimally reconstructible events that satisfy $P<40\,{\rm m}$ assuming a trigger pixel threshold of {2.5}$\,{\rm p.e.}$, with the set of minimally reconstructible events that
fulfill a traditional two-telescope multiplicity array trigger with $n_{\rm pe} = 3.5\,{\rm p.e.}$.
The lower pixel threshold made feasible by a hardware-level DIAT yields a substantial improvement in sensitivity below 200 GeV, achieving a factor of $\sim4$ enhancement in collection area at the lowest energies.
\begin{figure*}[p]
\centering
\includegraphics[width=0.7\textwidth]{aEffGammaDiffPT_labelFix_bold-eps-converted-to.pdf}\\
\includegraphics[width=0.7\textwidth]{aEffProtonPTFoldedRebin_bold-eps-converted-to.pdf}
\caption{\small \textit{Top panel:} Collection areas for diffuse $\gamma$-ray\ emission that correspond to different array triggering strategies\revision{}{, assuming a super-pixel trigger threshold of {2.5}$\,{\rm p.e.}$ and $n^{\rm pass}_{\rm TP}=16$}. See Figure \ref{fig:gamma_aeffs} for a detailed description of the criteria used to generate each curve. \textit{Bottom panel:} Differential readout rates induced by diffuse proton events for different array-triggering scenarios. The \textit{black dashed} curve illustrates the single telescope readout rate, in the absence of any array trigger. The \textit{blue dotted} curve represents the rate at which two telescopes trigger coincidentally and initiate camera readout. \revision{}{The \textit{solid red} curve illustrates the readout rate for events that are accepted by an array trigger implementing the parallax width algorithm with $P < 40\,{\rm m}${, assuming a super-pixel trigger threshold of 2.5$\,{\rm p.e.}$} and} reveals the additional vetoing of proton initiated events that is achieved. Each curve is annotated with the relative \textit{integral} readout rates, using the parallax width array trigger as the fiducial case. \textit{Both panels} correspond to a pass-through super-pixel multiplicity threshold $n^{\rm pass}_{\rm TP}=16$.}\label{fig:gamma-diff_and_proton_aeffs}
\end{figure*}
\begin{figure*}[p]
\centering
\includegraphics[width=0.7\textwidth]{recoveryFoldedAeffs_bold-eps-converted-to.pdf}
\includegraphics[width=0.7\textwidth]{recoveryRatio_bold-eps-converted-to.pdf}
\caption{\small The expected array trigger rates for on-axis $\gamma$-ray-initiated events are shown in the \textit{top panel} for a Crab-pulsar-like spectral shape. The red curve corresponds to minimally reconstructible events that are retained after application of a trigger threshold corresponding to 2.5$\,{\rm p.e.}$ per super-pixel, and subsequent filtering using the parallax width algorithm with $P<40\,{\rm m}$, and a pass-through threshold $n^{\rm pass}_{\rm TP}=16$. For comparison, the \textit{blue} curve plots the expected array rate for minimally reconstructible events, assuming a higher super-pixel threshold of 3.5$\,{\rm p.e.}$, which is typically required to ensure stable operation under variable NSB illumination without an array trigger.
The ratio of the \textit{red} and \textit{blue} curves is shown in the \textit{bottom} panel and illustrates that application of the parallax width discriminator enables effective recovery of low energy events by rendering a lower super-pixel threshold feasible.}\label{fig:event_recovery}
\end{figure*}
\section{Summary}\label{sec:conclusions}
Monte Carlo simulations that model an array of SC Cherenkov Telescopes have been used to demonstrate that efficient rejection of cosmic-ray-initiated events is possible using an innovative, distributed, intelligent array trigger. The simulated trigger implements an algorithm (designated \textit{Parallax Width}) that performs hardware-level analysis using computed moments of the Cherenkov light distributions that are imaged by multiple telescopes. It successfully discriminates between background and genuine $\gamma$-ray\ triggers, while retaining a large majority of reconstructible events.
Simulated application of \revision{the}{} a DIAT that implements the \textit{Parallax Width} algorithm demonstrated several advantages over arrays that utilize traditional \textit{telescope multiplicity} triggers or operate without an array trigger.
\begin{itemize}
\item By vetoing spurious single and multiple telescope triggers before data are read out, the algorithm reduces the array trigger rate by a factor of $\sim7$. This enables finely sampled events with heavy data payloads to be generated by participating SCTs without overwhelming the array data transfer infrastructure.
\item Real-time consideration of data from multiple telescopes also allows the rate of NSB-induced array triggers to be controlled without increasing the super-pixel trigger thresholds. Indeed, a reduction in super-pixel trigger threshold corresponding to an additional photoelectron-equivalent count is rendered feasible.
\item The resultant enhancement of low-energy sensitivity increases the number of reconstructible events with energies between $\sim 100$ GeV and $\sim 200$ GeV, which may be particularly useful for studies of spectrally soft targets like the Crab pulsar.
\end{itemize}
The configuration of the simulated telescope array represents a realistic scenario but was \textit{not} tailored to maximize the efficacy of the parallax width trigger algorithm. Accordingly, a suitably optimized telescope array is likely to realize additional improvements in overall performance.
\section{Acknowledgements}
The authors thank Dr Michael Punch, Dr John Ward, and Dr Matthew Wood for the valuable advice and insight they provided with regard to this study. This paper has gone through internal review by the CTA Consortium.
\section{Introduction}
Muons have played an important role in the history of particle physics since their discovery.
The absence of the radiative muon decay, $\mu \to e \gamma$, during the early days~\cite{Hincks47} implied that the muon is a distinct elementary lepton rather than an excited electron. Today, muons are recognized as the charged lepton in the second {\it generation}.
We know that a muon decays into an electron with two neutrinos as $\mu \to e \nu_{\mu} \bar{\nu}_{e}$.
The two neutrinos from a muon decay are unable to annihilate and thus should have different {\it flavor}; this is known as Lepton Flavor Conservation in the Standard Model (SM) of particle physics.
Nowadays, muons are again attracting a great deal of attention to search for physics beyond the SM.
This article describes an experiment to search for muon-to-electron ($\mu$-$e$) conversion, which is one of the lepton-flavor-violating processes in the charged lepton sector.
The experimental sensitivity is expected to be significantly improved in the coming decade with planned experiments.
An introduction to muon-to-electron conversion is shown below in this section.
The experimental methods are described in Section~\ref{sec:experiment}, followed by the sensitivity and background estimation in Section~\ref{sec:sensitivity}.
We discuss expandability and comparison with another experiment in Section~\ref{sec:discuss}, and finally describe the prospects in Section~\ref{sec:prospect}.
\subsection{Theoretical and Phenomenological Aspects}
Lepton flavor-violating muon decays are extremely suppressed in the SM even when neutrino oscillation effects are taken into account; e.g., the branching ratio of $\mu \to e \gamma$ can be calculated as~\cite{Petkov77, Marciano77, Lee77a, Lee77b}
\begin{equation}
B(\mu \to e \gamma) = \frac{3\alpha}{32\pi} \left|\sum_{i=2,3} U^{*}_{\mu i} U_{ei} \frac{\Delta m_{i1}^2}{{M_W}^2} \right| ^2
\sim O(10^{-54}) ,
\label{eq:mueg}
\end{equation}
where $U_{ij}$ are elements of the Pontecorvo-Maki-Nakagawa-Sakata matrix, and $\Delta m_{ij}^2$ represents the squared-mass differences between neutrinos.
Using experimental values from neutrino oscillations, the branching ratio is found to be of the order of $10^{-54}$ due to the huge suppression factor of $(\Delta m_{ij}^2 / {M_W}^2)^2$.
For coherent muon-to-electron conversion in the field of a nucleus, $\mu^- N \to e^- N$, where $N$ denotes a nucleus, the rate is found to be of a similar order.
Note that the SM predictions are negligibly small from an experimental point of view.
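The enormous suppression in Equation~(\ref{eq:mueg}) can be checked numerically using representative oscillation parameters (magnitudes only; CP phases are neglected). The PMNS magnitudes and mass splittings below are typical global-fit values quoted purely for illustration:

```python
import math

# Order-of-magnitude evaluation of B(mu -> e gamma) in the SM with
# neutrino oscillations.  PMNS magnitudes and mass splittings are
# representative global-fit values, used here only for illustration.

alpha = 1.0 / 137.036
M_W = 80.4e9                           # W boson mass [eV]
U = {('mu', 2): 0.48, ('e', 2): 0.55,  # |U_{mu2}|, |U_{e2}|
     ('mu', 3): 0.70, ('e', 3): 0.15}  # |U_{mu3}|, |U_{e3}|
dm2 = {2: 7.4e-5, 3: 2.5e-3}           # Delta m^2_{i1} [eV^2]

amp = sum(U[('mu', i)] * U[('e', i)] * dm2[i] / M_W**2 for i in (2, 3))
B = 3 * alpha / (32 * math.pi) * amp**2

print(f"B(mu -> e gamma) ~ {B:.1e}")   # of order 1e-54: hopelessly unobservable
```

The dominant suppression comes from the $(\Delta m^2 / M_W^2)^2$ factor, which is why any observed signal would unambiguously indicate new physics.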
On the other hand, many well-motivated physics models beyond the SM predict sizable branching ratios within reach of experimental sensitivities in the near future~\cite{Kuno01, Merciano08, Mihara13, Bernstein13}.
Hence, an observation of charged lepton flavor violation would provide clear evidence of new physics.
From a simplified model-independent perspective, it is known that the $\mu^+ \to e^+ \gamma$ process is strongly sensitive to the photon-mediated dipole interaction, whereas the $\mu^- N \to e^- N$ process is sensitive to not only the dipole term but also contact terms~\cite{Gouvea09};
here, the contact terms indicate tree-level or box-diagram interactions mediated by massive unknown particles.
Note that the situation is not so simple in reality; for a more detailed approach, see~\cite{Crivellin17}.
Which terms are dominant depends highly on the models of new physics.
Therefore, the ratio of strengths of $\mu^+ \to e^+ \gamma$ and $\mu^- N \to e^- N$ can be a powerful discriminator among models.
For instance, supersymmetric (SUSY) loops favor the dipole terms and thus enhance $\mu^+ \to e^+ \gamma$, whereas models with extra U(1) symmetry, leptoquarks, or R-parity violating SUSY induce contact terms in a tree level and thus enhance $\mu^- N \to e^- N$.
For $\mu$-$e$ conversion, since a muon interacts with a quark in protons or neutrons in a nucleus, the conversion rate depends on the target nucleus, and the dependence varies with different models~\cite{Kitano02, Cirigliano09, Davidson19, Davidson20}.
Once $\mu$-$e$ conversion is discovered, a comparison of the rates for different target nuclei may help distinguish among models.
\subsection{Experimental Aspects}
\label{sec:expaspect}
Since $\mu$-$e$ conversion is an interaction with a nucleus, it is natural to use a negative muon beam so that the muon can be captured and undergo the lepton-flavor-violating interaction with the nucleus.
Negative muons are stopped in a target material, form a muonic atom with a nucleus, and cascade down to the $1s$ ground state.
Within the SM, the fate of a muon is either to decay in orbit (DIO) or undergo nuclear muon capture.
Muon DIO is the $\mu^- \to e^- \nu_{\mu} \bar{\nu}_e$ decay of the bound-state muon, and nuclear muon capture is expressed as $\mu^- + (Z,A) \to \nu_{\mu} + (Z-1,A)$.
For the $^{27}$Al nucleus case, the muon lifetime of 864~ns implies a fraction of 39\% for the former and 61\% for the latter~\cite{Suzuki87}.
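The quoted 39\%/61\% split follows directly from the measured lifetimes: the bound-muon decay rate is close to the free-muon rate (bound-state corrections at the percent level are neglected in this sketch), so the DIO fraction is simply the ratio of the bound lifetime to the free lifetime:

```python
# Back-of-envelope check of the 39%/61% DIO/capture split for muonic 27Al.
# Bound-state corrections to the decay rate (~1%) are neglected.

tau_free = 2196.98   # free-muon lifetime [ns]
tau_Al = 864.0       # muonic-Al lifetime [ns], from the text

f_dio = tau_Al / tau_free     # fraction that decays in orbit
f_capture = 1.0 - f_dio       # fraction undergoing nuclear muon capture

print(f"DIO: {f_dio:.0%}, capture: {f_capture:.0%}")  # DIO: 39%, capture: 61%
```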
Then, if $\mu$-$e$ conversion happens, we would observe an electron without neutrinos as $\mu^- + (Z,A) \to e^- + (Z,A)$.
Since the process is coherent, i.e., does not change the nuclear state, the observed electron has a specific energy,
\begin{equation}
E_{\mu e} = m_{\mu} - B_{\mu} - E_{\mathrm{rec}} ,
\label{eq:energy}
\end{equation}
where $B_{\mu}$ and $E_{\mathrm{rec}}$ are the muon binding energy and the recoil energy of the nucleus, respectively.
The converted electron energy, $E_{\mu e}$, is 104.97~MeV for the $^{27}$Al case.
This single monoenergetic electron signature is an experimental feature of $\mu$-$e$ conversion.
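Equation~(\ref{eq:energy}) can be evaluated approximately for $^{27}$Al using the point-nucleus Bohr formula with the reduced mass for $B_{\mu}$; finite nuclear size and radiative corrections shift the binding energy at the $\sim$10~keV level, so this is an estimate rather than the quoted value itself:

```python
# Approximate reconstruction of E_mue = m_mu - B_mu - E_rec for 27Al.
# B_mu uses the point-nucleus Bohr formula (reduced mass); the exact
# value differs at the ~10 keV level.

alpha = 1.0 / 137.036
m_mu = 105.658                     # muon mass [MeV]
Z = 13
M_N = 26.982 * 931.494             # 27Al mass [MeV] (atomic-mass estimate)

m_red = m_mu * M_N / (m_mu + M_N)          # reduced mass of the muon
B_mu = 0.5 * (Z * alpha)**2 * m_red        # 1s binding energy, ~0.47 MeV
E_rec = (m_mu - B_mu)**2 / (2.0 * M_N)     # nuclear recoil, ~0.22 MeV

E_mue = m_mu - B_mu - E_rec
print(f"E_mue ~ {E_mue:.2f} MeV")          # close to the quoted 104.97 MeV
```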
The branching ratio of $\mu$-$e$ conversion is defined by the ratio of the $\mu$-$e$ conversion rate to the total muon capture rate, namely,
\begin{equation}
B(\mu^- N \to e^- N) = \frac{\Gamma(\mu^- + N \to e^- + N)}{\Gamma(\mu^- + N \to \mathrm{all} \ \mathrm{captures})}.
\label{eq:def}
\end{equation}
The normalization to captures has an advantage in calculation since many details of the nuclear wavefunction cancel in the
ratio.
The current experimental upper limit is $7 \times 10^{-13}$ (90\% C.L.) for a gold target obtained by the SINDRUM-II experiment at Paul Scherrer Institute (PSI)~\cite{Bertl06}.
\section{Materials and Methods}
\label{sec:experiment}
The COMET experiment~\cite{Kuno13, COMET20, Moritsu21} will search for $\mu$-$e$ conversion with a sensitivity of $O(10^{-17})$, which is four orders of magnitude better than the current limit.
The experiment will take place at the Japan Proton Accelerator Research Complex (J-PARC).
The construction of the dedicated beam line, muon source and detectors is in progress.
Schematic layouts of the COMET experiment are shown in Figure~\ref{fig:layout}.
The experiment adopts a staged approach, Phase-I and Phase-II, which will be described in Section~\ref{sec:staging}.
In this section, we explain the conceptual design of the experiment; accelerator, beams and facility; and experimental apparatus.
\begin{figure}[H]
\includegraphics[width=0.7\linewidth]{fig/COMET-layout2.pdf}
\caption{Schematic layouts of the COMET experiment Phase-I (\textbf{a}) and Phase-II (\textbf{b}). Simulated particles for a single beam bunch are shown for Phase-I.}
\label{fig:layout}
\end{figure}
\subsection{Concept of the COMET Experiment}
\subsubsection{Highly-Intense Muon Source}
\label{sec:muonsource}
Going back in history, an original idea for modern $\mu$-$e$ conversion searches was proposed by Dzhilkibaev and Lobashev in 1989~\cite{MELC89} for the MELC experiment at INR, Russia, followed by the MECO experiment~\cite{MECO97} at BNL.
Although neither of the experiments was realized, after 30 years, the concept has been inherited by the COMET and Mu2e~\cite{Pezzullo, Mu2e15} experiments today.
One of the key concepts is an unprecedentedly powerful muon source.
Instead of a conventional muon beam line using dipole and quadrupole magnets, all the sections related to pion production and transport use superconducting solenoid magnets.
A proton beam impinges on a long target placed inside the production solenoid, and backward-generated low-energy pions are captured by a gradient solenoidal field which varies from 5~T at the target position to 3~T in the transport section.
Note that the gradually-decreasing solenoidal field adiabatically converts part of the transverse momentum ($p_{\mathrm T}$) into the longitudinal component ($p_{\mathrm L}$), making the particle trajectories more parallel, which helps increase the muon yield.
Muons, as the decay product of pions, are efficiently delivered through a curved transport solenoid, where the center of their helical trajectories drifts vertically according to the particle charge ($q$) and momentum ($p$).
The magnitude of the drift is given by
\begin{eqnarray}
D & = & \frac{1}{qB} \left( \frac{s}{R} \right) \frac{p_{\mathrm L}^2 + \frac{1}{2} p_{\mathrm T}^2}{p_{\mathrm L}} \\
& = & \frac{1}{qB} \left( \frac{s}{R} \right) \frac{p}{2} \left( \cos \theta + \frac{1}{\cos \theta} \right) ,
\label{eq:drift}
\end{eqnarray}
where $B$ is the magnetic field along the axis; $s/R$ is a ratio of the path length to the curvature radius of the curved solenoid, i.e., the total bending angle; and $\theta$ is the pitch angle of the helical trajectory.
In the COMET experiment, a compensating dipole field parallel to the drift direction is applied to keep the center of the helical trajectories of negative muons with momenta of 40 MeV/$c$ on the bending plane.
The magnitude of the compensating field is given by
\begin{eqnarray}
B_{\mathrm{comp}} = B \frac{D}{s} = \frac{1}{qR} \frac{p}{2} \left( \cos \theta + \frac{1}{\cos \theta} \right) ,
\label{eq:bcomp}
\end{eqnarray}
for charged particles with momenta $p$ and pitch angles $\theta$.
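Equations~(\ref{eq:drift}) and (\ref{eq:bcomp}) can be evaluated for the reference muons quoted above ($p = 40$~MeV/$c$ in a 3~T transport field). The bend radius and pitch angle used below are illustrative assumptions, not design values:

```python
import math

# Numerical illustration of the drift D and compensating field B_comp.
# R (bend radius), the 90-degree total bend, and theta (pitch angle)
# are assumed values for illustration.

B = 3.0                    # transport-solenoid field [T]
R = 3.0                    # assumed bend radius of the curved solenoid [m]
bend = math.pi / 2         # assumed total bending angle s/R (90 degrees)
p = 0.040                  # reference muon momentum [GeV/c]
theta = math.radians(30)   # assumed pitch angle

# p/(qB) in metres is p[GeV/c] / (0.2998 * B[T]) for a unit-charge particle.
gyro = p / (0.2998 * B)
D = gyro * bend * 0.5 * (math.cos(theta) + 1.0 / math.cos(theta))

s = R * bend               # path length along the solenoid axis
B_comp = B * D / s         # compensating dipole field

print(f"D ~ {100 * D:.1f} cm, B_comp ~ {1e3 * B_comp:.0f} mT")
```

Under these assumptions the vertical drift over the bend is of order several centimetres, and the compensating dipole field needed to centre 40-MeV/$c$ negative muons is a few tens of millitesla.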
Positive or high-momentum muons are eliminated by a collimator placed after the curved solenoid.
This type of pion-capture and muon-transport system has been demonstrated in the MuSIC beam line at Research Center for Nuclear Physics, Osaka University~\cite{Cook17}.
The muon yield per unit beam power was found to be 1000 times higher than that of conventional muon beam lines at existing facilities.
\subsubsection{Beam-Related Backgrounds and High-Quality Pulsed Beam}
\label{sec:beambg}
The past SINDRUM-II experiment~\cite{Bertl06} was performed by using a continuous beam at PSI.
Their results suggested a limitation on future experimental sensitivities from beam-related backgrounds caused by the continuous beam structure.
The pulsed beam is therefore an essential requirement for next-generation $\mu$-$e$ conversion experiments.
The COMET experiment utilizes a pulsed proton beam at J-PARC.
In this subsection, we explain the beam-related backgrounds and measures to cope with them.
Since the negative muon beam is generated by pion decay in flight, the muon beam is contaminated by pions, and its momentum spread is quite broad.
Beam-related backgrounds are mainly caused by pions remaining in the muon beam.
Radiative pion capture, $\pi^- + (Z,A) \to \gamma + (Z-1,A)$, followed by $\gamma \to e^+ + e^-$ conversion, is capable of generating a 105-MeV electron that mimics the $\mu$-$e$ conversion signal.
Muon decays in flight may also emit a 105-MeV electron if the muon momentum is greater than 75~MeV/$c$.
Antiprotons are negatively charged and therefore transported together with the negative muon beam.
The antiprotons that annihilate in the stopping target could also produce \mbox{105-MeV electrons}.
Notice that the above beam-related backgrounds (except for delayed antiprotons) occur promptly, i.e., just after the primary proton beam timing, whereas $\mu$-$e$ conversion events are distributed in time according to the bound-muon lifetime.
As illustrated in Figure~\ref{fig:extinct}, the prompt backgrounds fade out in a few hundreds nanoseconds after the primary proton pulse.
We therefore open the data-taking window from a few hundred nanoseconds after the proton pulse.
Aluminum was selected as the target nucleus because the muon lifetime of muonic $^{27}$Al is 864~ns~\cite{Suzuki87}, long enough to perform the measurement.
The data-taking window is closed before the next proton pulse, 1.2~$\upmu$s after the initial pulse.
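To get a feel for the timing acceptance, the fraction of bound-muon decays falling inside an assumed data-taking window can be computed from the muonic-aluminum lifetime. The window edges below are illustrative guesses consistent with the description above, and the spread of muon stopping times is neglected (in practice it increases the acceptance):

```python
import math

# Illustrative in-window fraction of muonic-Al decays, assuming all muons
# stop at t = 0.  The window edges t_open and t_close are assumptions.

tau = 864.0          # muonic-Al lifetime [ns]
t_open = 700.0       # assumed window opening after the proton pulse [ns]
t_close = 1170.0     # assumed window closing before the next pulse [ns]

frac = math.exp(-t_open / tau) - math.exp(-t_close / tau)
print(f"in-window fraction ~ {frac:.2f}")   # ~0.19 under these assumptions
```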
\begin{figure}[H]
\includegraphics[width=0.8\linewidth]{fig/extinction.pdf}
\caption{Time relation among the proton beam pulses, prompt background, stopped muon decay, and a signal event in the data-taking window.}
\label{fig:extinct}
\end{figure}
For the above scheme to work, cleanliness of the proton pulse structure is crucial.
If protons leaked out in between the primary proton pulses, they could produce prompt background in the data-taking window.
The fraction of leaked protons in between the pulses, the so-called {\it extinction} factor, is required to be below $10^{-10}$ to achieve our sensitivity.
\subsubsection{Intrinsic Physics Background}
As described in Section~\ref{sec:expaspect}, 39\% of stopped muons decay in orbit in the muonic aluminum.
The DIO electron energy distribution has a high-energy tail beyond the kinematic endpoint energy of normal muon three-body decay in free space, i.e., 52.8~MeV.
It can be an intrinsic physics background.
The energy tail extends in principle up to the $\mu$-$e$ conversion signal energy of 105~MeV, but falls steeply as $(E_{\mu e} - E_{\mathrm{DIO}})^5$~\cite{Czarnecki11}.
Therefore, in order to distinguish the signal from the DIO background, a momentum resolution of 0.2\% is required for the electron detectors.
Figure~\ref{fig:spectra} shows the expected momentum spectra for the signal and the DIO background.
It should be noted that the DIO tail endpoint energy depends on the atomic nucleus.
Since light elements from boron ($Z=5$) to magnesium ($Z=12$) have higher endpoints than aluminum ($Z=13$), we should avoid using these materials in the detector region.
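The required 0.2\% momentum resolution can be motivated quantitatively: integrating the $(E_{\mu e} - E_{\mathrm{DIO}})^5$ tail over a signal window of width $\delta$ gives an accepted DIO background that scales as $\delta^6$, so even a modest resolution improvement buys orders of magnitude in suppression. The window widths below are illustrative:

```python
# DIO leakage scaling near the endpoint: the spectrum falls as
# (E_mue - E)^5, so the background integrated over a signal window of
# width delta scales as delta^6 / 6.  Compare two illustrative windows.

delta_good = 0.2    # [MeV], roughly 0.2% of the ~105 MeV signal energy
delta_poor = 1.0    # [MeV], a ten-times-worse resolution

ratio = (delta_good / delta_poor) ** 6
print(f"relative DIO leakage: {ratio:.1e}")   # 6.4e-05
```

Hence a factor-of-five better resolution suppresses the accepted DIO background by more than four orders of magnitude.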
\begin{figure}[H]
\includegraphics[width=0.7\linewidth]{fig/CDC-momentumsignaldio.pdf}
\caption{Simulated electron momentum distributions of the $\mu$-$e$ conversion signal (red) and the decay-in-orbit events (blue), assuming a branching ratio of $3 \times 10^{-15}$ in COMET Phase-I. The vertical scale is normalized such that the integral of the signal is equal to one event.}
\label{fig:spectra}
\end{figure}
\subsubsection{Cosmic-Ray Background}
\label{sec:crb}
Cosmic-ray induced backgrounds are not negligible in highly-sensitive $\mu$-$e$ conversion experiments.
Cosmic-ray muons hitting the stopping target or the transport beam line may create 105-MeV/$c$ electrons, which mimic the signal of $\mu$-$e$ conversion.
In addition, cosmic-ray muons scattered at the stopping target or surrounding materials with momenta near 105 MeV/$c$ are also hard to distinguish from the signal events.
In order to suppress the cosmic-ray induced background, the detector region should be covered by veto counter arrays with a detection efficiency of 99.99\%.
Note that cosmic-ray background events are proportional to the data-taking time, and therefore a shorter running time with higher beam intensity is preferable.
\subsubsection{Staging Scenario}
\label{sec:staging}
The COMET experiment adopts a staged approach to accomplish the physics goal in a timely manner.
Experimental layouts of Phase-I and Phase-II are shown in Figure~\ref{fig:layout}.
In Phase-I, the Pion Capture Solenoid (CS), the Muon Transport Solenoid (TS), and the Detector Solenoid (DS) are constructed.
The TS is constructed up to the end of the first $90^{\circ}$ bend.
There are two goals in Phase-I: (1) muon beam measurement, and (2) $\mu$-$e$ conversion search with an intermediate sensitivity.
(1) In the muon beam measurement, the beam quality can be measured directly at the exit of the first $90^{\circ}$ bend.
This will help us understand the beam-related background more reliably based on real data instead of simulations.
The muon beam measurement will be carried out with the {\it StrECAL} detector system, composed of straw-tube trackers and an electron calorimeter, as shown in Figure~\ref{fig:strecal}.
The beam momentum, energy, and arrival time are measured.
Details of the StrECAL detector system will be given in Section~\ref{sec:strecal}.
(2) We will also conduct the $\mu$-$e$ conversion search with an intermediate sensitivity by using a primary proton beam power of 3.2~kW in Phase-I.
The sensitivity goal is $3 \times 10^{-15}$, which is 100 times better than the current limit.
This $\mu$-$e$ conversion search will be performed with the {\it CyDet} system, which consists of a cylindrical drift chamber (CDC) and cylindrical trigger hodoscopes (CTH) with a muon stopping target, as shown in Figure~\ref{fig:cydet}.
The StrECAL system will be replaced by the CyDet system once the muon beam measurement is done.
Muons stop in the aluminum target discs placed at the center of the DS.
The electrons emitted transversely from the muon stopping target are tracked by the CDC and hit the CTH.
Details of the CyDet system will be given later in Section~\ref{sec:cydet}.
\begin{figure}[H]
\includegraphics[width=0.7\linewidth]{fig/StrEcal.pdf}
\caption{Schematic layout of the StrECAL detector system for the muon beam measurement during Phase-I.}
\label{fig:strecal}
\end{figure}
\vspace{-6pt}
\begin{figure}[H]
\includegraphics[width=0.8\linewidth]{fig/CyDet_3DLayout_151113-TDR_mod2.pdf}
\caption{Schematic layout of the CyDet system for the $\mu$-$e$ conversion search at Phase-I.}
\label{fig:cydet}
\end{figure}
In Phase-II, the beam power is increased to 56 kW.
The TS is extended to the full $180^{\circ}$ bend, and the Electron Spectrometer Solenoid (ES) will be constructed.
The ES is a $180^{\circ}$-bend curved solenoid installed downstream of the muon stopping section.
It selects the charge and momentum of the emitted electrons with its curved solenoidal structure as shown in Figure~\ref{fig:spec-solenoid}.
Low momentum electrons are eliminated by the DIO blocker installed at the bottom of the ES.
Neutral particles cannot directly reach the detector section.
These features decrease the hit rate of the downstream detectors.
The StrECAL system will be upgraded and used as the detector system in Phase-II.
In this phase, we can improve the $\mu$-$e$ conversion sensitivity down to $2 \times 10^{-17}$, which is 10,000 times better than the current limit.
\begin{figure}[H]
\includegraphics[width=\linewidth]{fig/ElectronSpectrometerSolenoid.pdf}
\caption{Schematic layout of the downstream section of the Phase-II setup, which includes the muon stopping target, the Electron Spectrometer Solenoid, and the Detector Solenoid.}
\label{fig:spec-solenoid}
\end{figure}
\subsection{Accelerator, Beams and Facility}
\subsubsection{Accelerator and Proton Beam}
\label{sec:acc}
The accelerator must provide the highest possible proton intensity while suppressing beam-related backgrounds, as explained in Section~\ref{sec:beambg}.
The COMET experiment requires a dedicated operation of the J-PARC accelerator.
The proton beam energy is reduced from the normal Main Ring operation of 30 GeV to 8 GeV, which is sufficiently high to produce a large number of pions but low enough to minimize antiproton production.
The beam power is adjusted to 3.2 kW in Phase-I, and will be upgraded up to 56 kW in Phase-II.
We fill four out of nine buckets of the Main Ring to realize the required pulse structure with an interval of 1.17 $\upmu$s as shown in Figure~\ref{fig:jparc-acc}.
The proton beam is slowly extracted over \mbox{0.5 s} while keeping the bunch structure, and is delivered through a new beam line to the COMET experimental hall.
Most of the beam line components have already been constructed and will be ready to transport the beam in 2022.
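The 1.17-$\upmu$s pulse interval can be understood from the Main Ring parameters. The following back-of-the-envelope sketch assumes a ring circumference of about 1567.5~m and harmonic number 9 (values not quoted in this article), with every other bucket filled:

```python
# Back-of-the-envelope check with assumed Main Ring parameters
# (circumference ~1567.5 m, harmonic number 9); not official values.
C_MR = 1567.5        # m, assumed circumference of the Main Ring
c = 2.9979e8         # m/s; the protons travel slightly below light speed
T_rev = C_MR / c     # revolution period, ~5.2 us
bucket_spacing = T_rev / 9
pulse_interval = 2 * bucket_spacing  # alternate buckets filled (4 of 9)
print(round(pulse_interval * 1e6, 2), "us")  # ~1.16 us, close to 1.17 us
```

The small difference from 1.17~$\upmu$s comes from the protons circulating slightly below the speed of light.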
\begin{figure}[H]
\includegraphics[width=\linewidth]{fig/J-PARC_photo20210714_overlay3.pdf}
\caption{A bird's eye view of the J-PARC accelerator facility. COMET takes place in the Hadron Experimental Facility. The orange ovals shown on RCS and Main Ring represent the proton bunch structure. The solid and open ovals indicate filled and empty proton bunches, respectively, in the COMET 8-GeV operation. (The photo was provided by the J-PARC Center).}
\label{fig:jparc-acc}
\end{figure}
A series of acceleration tests of 8-GeV proton beam and extinction measurements were carried out at J-PARC.
The accelerator operation was successful with $7.3 \times 10^{12}$ protons per bunch, equivalent to the Phase-I design value.
It was confirmed that the extinction factor satisfies the requirement of $10^{-10}$, and is expected to be reduced by improving the beam injection scheme to the Main Ring~\cite{Tomizawa19, Nishiguchi19}.
In 2021, another test was carried out to confirm the improved extinction scheme, and the results will be published soon~\cite{Noguchi21}.
During physics data taking periods, online monitor devices installed in the proton beam line will measure the spill-by-spill extinction.
In order to withstand the enormous radiation dose, e.g., $O(10^{19})$ total protons or higher, we have been developing wide-bandgap semiconductor detectors, such as chemical-vapor-deposition diamond~\cite{Fujii19} and silicon carbide.
\subsubsection{Production Target and Superconducting Magnets}
A pion production target is enclosed in the superconducting solenoid magnet.
A graphite target is used in Phase-I, and a tungsten target in Phase-II to increase the production rate.
To mitigate severe radiation from the production target, the inner surface of the production region is protected with a radiation shield, and aluminum-stabilized NbTi wires are adopted as the superconducting coils~\cite{Yoshida13}.
As described in Section~\ref{sec:muonsource}, the superconducting solenoid magnet system consists of the Pion Capture Solenoid, the Muon Transport Solenoid, and the Detector Solenoid as well as the Electron Spectrometer Solenoid for Phase-II, resulting in a total length of 15~m at Phase-I and 30~m at Phase-II.
Manufacturing of the solenoid magnets~\cite{Yoshida15} including their cooling systems~\cite{Okamura20} is in progress, and will be completed in 2023.
\subsection{Experimental Apparatus}
\subsubsection{Straw-Tube Tracker and Electron Calorimeter}
\label{sec:strecal}
The StrECAL system is being developed for both the beam measurement program in Phase-I, and the $\mu$-$e$ conversion search in Phase-II (Figure~\ref{fig:strecal}).
The planar tracker, built from extremely light, thin-wall straw tubes, operates inside a vacuum~\cite{Nishiguchi17, Volkov21}.
The straw tubes are made of a 20-$\upmu$m thick Mylar foil with a 70-nm thick aluminum layer, and have a diameter of 9.8~mm and a length of more than 1~m.
Straw tubes with even thinner walls of 12~$\upmu$m and a diameter of 5~mm are being developed for Phase-II, making use of ultrasonic welding techniques.
The production of the 3400 straw tubes for Phase-I has been completed, and assembly of the tracker station is in progress as shown in Figure~\ref{fig:strawphoto}.
Five stations of straw-tube trackers are used for tracking in the Detector Solenoid at 1~T.
Each station consists of two horizontal and two vertical staggered pairs of planes.
\begin{figure}[H]
\includegraphics[width=0.8\linewidth]{fig/photo_straw2021.jpeg}
\caption{Photograph of the straw tracker being assembled (first station).}
\label{fig:strawphoto}
\end{figure}
The electron calorimeter (ECAL) is placed downstream of the straw trackers to measure the energy of electrons and provide a trigger signal.
The ECAL adds redundancy to electron energy and hit position measurements, and provides initial parameters for precise tracking with the straw trackers.
The ECAL consists of 1920 LYSO (Lu$_{2(1-x)}$Y$_{2x}$SiO$_{5}$) scintillating crystals with a cross section of 2 $\times$ 2 cm$^2$ and a depth of 12~cm corresponding to \mbox{10.5 radiation} lengths.
The crystals are read out by avalanche photodiodes.
A prototype beam test demonstrated an energy resolution better than 5\% and a fast decay time of 40~ns~\cite{Oishi18}.
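As a small sanity check, the quoted crystal depth and number of radiation lengths imply a radiation length of about 1.14~cm, consistent with LYSO (a sketch, not part of the experiment's software):

```python
# Consistency check of the quoted ECAL crystal geometry: a 12-cm depth
# corresponding to 10.5 radiation lengths implies X0 ~ 1.14 cm,
# consistent with the known radiation length of LYSO.
depth_cm = 12.0
n_rad_lengths = 10.5
x0_cm = depth_cm / n_rad_lengths
print(round(x0_cm, 2))  # -> 1.14
```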
Signals from both the straw tracker and the ECAL are processed with front-end readout electronics, which perform preamplification, pulse shaping, discrimination, and digitization~\cite{Ueno19}.
All the functionalities are controlled by FPGAs.
The straw-tracker front-end boards are installed inside a gas manifold in vacuum.
To minimize the number of vacuum feedthrough connectors, we implemented a system to connect several boards with a daisy-chain of Gigabit Ethernet~\cite{Hamada21}.
\subsubsection{Cylindrical Detector System}
\label{sec:cydet}
As introduced in Section~\ref{sec:staging}, the $\mu$-$e$ conversion search in Phase-I is conducted with the CyDet system (Figure~\ref{fig:cydet}).
The momenta of transversely-emitted electrons are measured with the cylindrical drift chamber (CDC) in a magnetic field of 1 T.
The CDC design was optimized to measure 105-MeV/$c$ electrons while suppressing unwanted low-energy particles.
In order to achieve the required momentum resolution of 0.2\%, the chamber is operated with a low-$Z$ gas mixture of He (90\%) and iC$_{4}$H$_{10}$ (10\%) to minimize multiple scattering effects.
The inner cylinder of the CDC is made of carbon-fiber reinforced plastic with a thickness of 0.5~mm.
The 4986 sense wires are made of gold-plated tungsten with a diameter of 25~$\upmu$m, while the 14562 field wires are made of unplated aluminum with a diameter of 126~$\upmu$m.
An alternating all-stereo layer configuration is adopted to enhance the position resolution in the longitudinal direction.
We tested a small CDC prototype with an electron beam, and demonstrated an average spatial resolution of 150 $\upmu$m with a hit efficiency of 99\%~\cite{Wu21}.
Construction of the CDC has already been completed and performance tests with cosmic rays are in progress~\cite{Moritsu19, Moritsu20} as shown in Figure~\ref{fig:cdcphoto}.
\begin{figure}[H]
\includegraphics[width=0.8\linewidth]{fig/DSF4482_reduced.jpg}
\caption{Photograph of the CDC being tested with cosmic rays.}
\label{fig:cdcphoto}
\end{figure}
The first level trigger is issued by the cylindrical trigger hodoscope, which consists of a combination of plastic scintillation and acrylic Cherenkov counters.
The high-level trigger is generated online from CDC hit information, reducing the overall trigger rate to about 10 kHz~\cite{Nakazawa21}.
The muon stopping target is made of 17 aluminum discs with a thickness of 200~$\upmu$m, a radius of 100~mm, and a spacing of 50~mm.
When muonic atoms are formed in the muon stopping target, a cascade of X-rays is emitted as the muons drop down to the $1s$ state.
The X-rays from muonic aluminum can be used to estimate the number of stopped muons, which leads to the denominator of the $\mu$-$e$ conversion branching ratio (Equation~(\ref{eq:def})).
The main transition to be used is $2p \to 1s$, which produces 347-keV X-rays, since it has the highest emission rate (80\%) and a relatively low background.
The X-rays are counted with a high-efficiency germanium detector installed downstream of the Detector Solenoid, off the beam axis, behind a radiation shield.
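Schematically, the stopped-muon count is obtained by dividing the observed 347-keV X-ray yield by the emission rate and the detector acceptance. In the sketch below, the X-ray count and the combined germanium acceptance $\times$ efficiency are hypothetical placeholders; only the 80\% emission rate comes from the text:

```python
# Schematic normalization only: converting the 347-keV X-ray yield to the
# number of stopped muons. The 80% emission rate is from the text; the
# X-ray count and acceptance x efficiency are hypothetical placeholders.
n_xray_observed = 8.0e5   # hypothetical 2p -> 1s X-ray count
emission_rate = 0.80      # per stopped muon (from the text)
acc_times_eff = 1.0e-4    # assumed germanium acceptance x efficiency
n_stopped_muons = n_xray_observed / (emission_rate * acc_times_eff)
print(f"{n_stopped_muons:.2e}")  # -> 1.00e+10
```

This quantity enters the denominator of the measured branching ratio.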
\subsubsection{Cosmic-Ray Veto Counters}
As described in Section~\ref{sec:crb}, the cosmic-ray veto (CRV) counter array is installed to cover the entire Detector Solenoid.
The main part of the CRV is composed of four layers of plastic scintillation counters read out by wave-length-shifting fibers and SiPMs.
Note that the CRV counters suffer from a large neutron flux, especially in the upstream area.
Since the hit rate of the scintillators is predicted to be very high and radiation damage to the SiPMs from neutrons is a concern, the upstream connection area between the Transport and Detector Solenoids is planned to be covered by glass resistive plate chambers.
\subsubsection{Trigger and Data Acquisition}
In the central trigger and timing system, the master trigger processor board collects the primary triggers, makes the final trigger decision, and distributes the control clock to the readout boards.
To keep the connections between the master trigger board and each frontend readout or trigger board uniform over an optical serial link, a custom FPGA board connected to the frontend boards has been developed.
The high-level trigger system for CyDet is already described in Section~\ref{sec:cydet}.
The data acquisition system covers the data transfer from the frontend readout electronics to the data storage through an event builder.
The system is being constructed with standard Ethernet network with the online software based on the MIDAS framework~\cite{Midas99}.
Event-building throughputs of more than 300~MiB/s and 1~GiB/s have been achieved for the frontend and backend networks, respectively~\cite{Igarashi21}.
\subsubsection{Radiation Tolerance}
Radiation damage on frontend readout and trigger electronics is also an important issue in the experiment.
The requirements for the radiation tolerance are a total dose of \mbox{1.0 kGy} and a 1-MeV-equivalent neutron fluence of $1.0 \times 10^{12}$ cm$^{-2}$ in the severest case.
We performed irradiation tests and selected electronics components that meet our requirements~\cite{Nakazawa19,Nakazawa20}.
\subsubsection{Offline Software}
To treat real and simulated data in the same way, we have developed an offline software framework called ICEDUST (Integrated Comet Experimental Data User Software Toolkit).
A series of large-scale Monte Carlo simulations has been performed with each major software release to refine the experimental design.
Since precise and clean track reconstruction is a challenging subject in the COMET experiment, several modern techniques are being developed, such as machine learning techniques for track reconstruction~\cite{Gillies18}, multi-turn track fitting algorithm~\cite{Zhang19}, and GPU-accelerated event reconstruction~\cite{Yeo21}.
\section{Results---Sensitivity and Backgrounds}
\label{sec:sensitivity}
A summary of specifications of the COMET experiment is given in Table~\ref{tab:spec}.
The single event sensitivity is given by
\begin{equation}
B(\mu^- + \mathrm{Al} \to e^- + \mathrm{Al}) = \frac{1}{N_{\mu} \cdot f_{\mathrm{cap}} \cdot f_{\mathrm{gnd}} \cdot A_{\mu e}} ,
\label{eq:ses}
\end{equation}
where $N_{\mu}$ is the number of stopped muons, $f_{\mathrm{cap}}$ is the fraction which undergoes the nuclear muon capture, $f_{\mathrm{gnd}}$ is the fraction for which the final state of the nucleus is the ground state, and $A_{\mu e}$ is the overall acceptance including the efficiency of the detector system.
In Phase-I (Phase-II), $1.5 \times 10^{16}$ ($1.6 \times 10^{18}$) stopped muons are accumulated in 150 days (\mbox{$\sim$1 year}), resulting in a single event sensitivity of $3.0 \times 10^{-15}$ ($2.0 \times 10^{-17}$).
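For illustration, the single event sensitivity can be evaluated numerically. In the sketch below, $f_{\mathrm{cap}} \approx 0.61$ and $f_{\mathrm{gnd}} \approx 0.9$ are assumed typical values for aluminum (not quoted in this article), while $N_{\mu}$ and $A_{\mu e}$ are taken from the text and the summary table:

```python
# Numerical sketch of the single-event-sensitivity formula; f_cap and
# f_gnd are assumed typical values for aluminum, not quoted in the text.
N_mu  = 1.5e16   # stopped muons in Phase-I (from the text)
f_cap = 0.61     # nuclear muon-capture fraction for Al (assumed)
f_gnd = 0.9      # ground-state fraction (assumed)
A_mue = 0.041    # acceptance x efficiency for Phase-I (from the table)
ses = 1.0 / (N_mu * f_cap * f_gnd * A_mue)
print(f"{ses:.1e}")  # ~3e-15, matching the quoted Phase-I sensitivity
```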
\begin{table}[H]
\setlength{\tabcolsep}{8.9mm}
\caption{Summary of the characteristics of Phase-I and Phase-II of the COMET experiment.}
\label{tab:spec}
\begin{tabular}{lll}
\toprule
& \bf{Phase-I} & \bf{Phase-II} \\
\midrule
Proton beam energy & 8 GeV & 8 GeV \\
Proton beam power & 3.2 kW & 56 kW \\
Total protons on target & $3.2 \times 10^{19}$ & $1.0 \times 10^{21}$ \\
Total stopped muons & $1.5 \times 10^{16}$ & $1.6 \times 10^{18}$ \\
Detector acceptance $\times$ efficiency & 0.041 & 0.057 \\
DAQ time & 150 days & 260 days \\
Single event sensitivity & $3.0 \times 10^{-15}$ & $2.0 \times 10^{-17}$ \\
Estimated background events & 0.032 & 0.66 \\
\bottomrule
\end{tabular}
\end{table}
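As a rough consistency check, the total protons on target in Table~\ref{tab:spec} follow from the beam power, run time, and proton energy; the sketch below assumes a 100\% duty factor, which is only approximate for slow extraction:

```python
# Cross-check of the protons-on-target entries from beam power, run time,
# and proton energy (assumes a 100% duty factor, an approximation).
EV = 1.602e-19  # joules per electron-volt

def protons_on_target(power_kw, days, energy_gev=8.0):
    """Total protons delivered for a given beam power and run time."""
    delivered_joules = power_kw * 1e3 * days * 86400
    return delivered_joules / (energy_gev * 1e9 * EV)

print(f"Phase-I : {protons_on_target(3.2, 150):.1e}")  # ~3.2e19
print(f"Phase-II: {protons_on_target(56, 260):.1e}")   # ~1e21
```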
Table~\ref{tab:bg} shows a summary of the estimated background events in the COMET experiment.
In Phase-I, assuming a realistically-achievable extinction factor of $3 \times 10^{-11}$ and a time window from 700~ns after the beam bunch timing, the beam-related background can be suppressed to less than 0.01 events.
Physics backgrounds such as DIO are estimated to be 0.01 events by selecting a momentum window from 103.6 to 106.0~MeV/$c$, while keeping a signal efficiency of 93\% (Figure~\ref{fig:spectra}).
The estimated cosmic-ray background of less than 0.01 events is currently limited by the uncertainty on the simulation statistics.
In total, the background is estimated to be 0.032 events in Phase-I.
\begin{table}[H]
\setlength{\tabcolsep}{5.5mm}
\caption{Summary of estimated background events for the COMET experiment~\cite{COMET20,Krikler16}.}
\label{tab:bg}
\begin{tabular}{llll}
\toprule
\multirow{2}{*}{\bf{Type}} & \multirow{2}{*}{\bf{Background}} & \multicolumn{2}{c}{\bf{Estimated Events}} \\
& & \bf{Phase-I} & \bf{Phase-II} \\
\midrule
\multirow{3}{*}{Beam} & Radiative pion capture & 0.0028 & 0.001 \\
& Other prompt events & $<$0.0038 & 0.002 \\
& Delayed antiproton & 0.0012 & 0.296 \\
\midrule
\multirow{3}{*}{Physics} & Muon decay in orbit & 0.01 & 0.068 \\
& Radiative muon capture & 0.0019 & $\sim$0 \\
& Particle emission after muon capture & $<$0.001 & --- \\
\midrule
Others & Cosmic rays & $\lesssim$0.01 & 0.294 \\
\midrule
Total & & 0.032 & 0.662 \\
\bottomrule
\end{tabular}
\end{table}
The Phase-II sensitivity and background can be improved by tuning experimental parameters based on the Phase-I measurement results.
The background is currently expected to be as small as 0.66 events~\cite{Krikler16}.
In Phase-II, one dominant background source will be cosmic-ray events, because a larger area must be covered by veto counters, including not only the Detector Solenoid but also the Electron Spectrometer Solenoid and the stopping-target solenoid.
The other dominant background source will be delayed-antiproton events, which in principle increase with the muon beam intensity.
These backgrounds should be refined using information from the Phase-I experiment.
\section{Discussions}
\label{sec:discuss}
\subsection{Further Improvements}
Further improvements towards $O(10^{-18})$ are being considered in the COMET Collaboration.
Such improvements are likely to arise from refinements to the experimental design in Phase-II.
We found that there is still room to optimize the position and size of the production target based on the latest estimates of the proton beam size.
There is also room to increase the number of muon stopping target discs based on recent simulations.
These improvements potentially increase the number of stopped muons by a factor of 3.
An optimization of the height of the DIO blocker in the Electron Spectrometer Solenoid (Figure~\ref{fig:spec-solenoid}) has also been considered.
The improvement potentially increases the signal acceptance by a factor of 2.
If we assume a 50\% longer running time than the nominal Phase-II and implement these optimizations, we can expect to improve the experimental sensitivity by one order of magnitude, towards $O(10^{-18})$~\cite{COMET18}.
It should be noted that the study is still preliminary and we have to carefully assess the backgrounds as well.
\subsection{Byproduct Experiments}
The COMET experimental setup is capable of accommodating byproduct experiments.
Lepton-number-violating muon-to-positron conversion, $\mu^- + (Z,A) \to e^+ + (Z-2,A)$ (hereafter $\mu^{-}$-$e^{+}$ conversion), is a promising candidate; see also an article in this issue by Lee and MacKenzie~\cite{MJLee}.
It could provide insight into the Majorana property of neutrinos.
In the $\mu^{-}$-$e^{+}$ conversion search, the radiative muon capture (RMC) followed by $\gamma \to e^+ e^-$ could be the most significant background.
The kinematic endpoint of the positron energy from RMC is higher than the $\mu^{-}$-$e^{+}$ conversion signal energy in the case of an Al target.
Therefore, we might need to consider other isotopes, e.g., $^{40}$Ca, $^{32}$S, or $^{48}$Ti, as an alternative stopping target~\cite{Yeo17}.
By using an optimal target, the sensitivity is expected to be improved by four orders of magnitude for a number of stopped muons equivalent to Phase-II.
It should be noted that we have to flip the polarity of the compensating dipole field of the Electron Spectrometer Solenoid so that positive particles are accepted.
Another candidate is $\mu^- + e^- \to e^- + e^-$ in a muonic atom~\cite{Koike10, Uesaka16, Uesaka18}.
The Coulomb attraction from the nucleus in a heavy muonic atom leads to a significant enhancement in its rate.
Since the experimental signature is two nearly back-to-back electrons with momenta close to 50 MeV/$c$, we need to decrease the magnetic field of the Detector Solenoid in Phase-I, making this measurement presumably difficult to conduct in Phase-II.
In addition, $\mu^- \to e^- + a$ in a muonic atom is being considered, where $a$ can be an axion-like particle, familon, or majoron with a lepton-flavor-violating coupling to leptons.
Since the emission of the neutral particle $a$ distorts the DIO electron spectrum, we need to detect the subtle distortion with high precision~\cite{Wu20}.
\subsection{Comparison with the Mu2e Experiment}
COMET and Mu2e are competitive experiments to search for $\mu$-$e$ conversion.
The final experimental sensitivity is at a similar level of $2 \times 10^{-17}$.
Here, we briefly summarize differences between COMET (Phase-II) and Mu2e.
\begin{itemize}
\item Their strategies to realize the beam extinction are different.
COMET takes place at J-PARC, whereas Mu2e is performed at FNAL.
As described in Section~\ref{sec:acc}, COMET utilizes four of the existing nine buckets of the Main Ring to realize the required pulse timing structure.
Therefore, in terms of the extinction, the main concern is residual protons in the empty buckets.
On the other hand, Mu2e manipulates the beam bunch structure from two to four bunches during beam delivery.
Therefore, stray protons in between bunches need to be eliminated with an additional extinction device, such as a dipole magnet with a time-varying field (AC dipole) that sweeps out-of-time protons out of the beam.
\item The shapes of the curved Transport Solenoids (TS) are different.
Mu2e adopted an ``S''-shaped TS with a $90^{\circ}$ bend followed by a $-90^{\circ}$ bend to compensate for the vertical drift in Equation~(\ref{eq:drift}).
As described in Section~\ref{sec:muonsource}, COMET adopted a ``C''-shaped TS with a $180^{\circ}$ bend, compensating for the vertical drift with a dipole field embedded in the TS.
Since the magnitude of the vertical drift is proportional to the bending angle, we expect good separation of particle charge and momentum.
\item The shapes of the Electron Spectrometer Solenoids are different.
Mu2e adopted a conventional straight solenoid which accommodates straw-tube trackers and electron calorimeters.
The detectors have a hole in the central axis region to avoid hits from low-momentum particles.
As shown in Figure~\ref{fig:layout}b, COMET adopted another ``C''-shaped curved solenoid with a $180^{\circ}$ bend.
Unwanted low-momentum particles can be eliminated by the vertical drift of the curved solenoid with a compensation dipole field before reaching the tracker.
\item The primary beam intensities are different.
COMET utilizes a proton beam of \mbox{56 kW} with a data-taking period of about one year, whereas Mu2e uses an 8-kW beam with about three years of data taking.
Higher beam power shortens the data-taking time, although we then need to handle a more severe radiation environment and a higher particle rate.
It should be noted that the data taking time does not take into account time sharing with other experiments in the same facility.
COMET needs to share the J-PARC machine time with other neutrino or hadron experiments, while Mu2e can run in parallel with a neutrino experiment at FNAL.
\item COMET adopted a two-staged approach as described in Section~\ref{sec:staging}, while Mu2e plans on a single stage.
Since the final goal is to improve the sensitivity by a factor of 10,000 with respect to the current limit, COMET has chosen to climb step by step even if it costs time.
The Phase-I results will improve the detailed design of Phase-II to mitigate the risks.
\end{itemize}
We hope that the two experiments will be competitive yet collaborative in some aspects, e.g., on components critical for radiation damage, accelerator operation expertise, etc.
\section{Prospects}
\label{sec:prospect}
We plan to conduct the first beam transport to the COMET experimental facility in the beginning of 2023.
The proton beam commissioning and a muon beam yield and profile measurement will be performed with a simplified setup without the Pion Capture Solenoid: this is called {\it Phase-$\alpha$}~\cite{Phase-a}.
Figure~\ref{fig:phase-a} shows the setup around the production target in Phase-$\alpha$.
We use a stainless steel L-shape target to measure the horizontal and vertical positions of the proton beam.
Secondary particles from the proton beam interaction with the L-shape target will be measured by a beam-loss monitor through a side hole in a radiation shield.
We also use a graphite plate target with a thickness of 1 mm for muon yield and profile measurements.
Since the Pion Capture Solenoid will not be installed at that time, muons that happen to enter the Transport Solenoid (TS) are delivered to the downstream detector region.
We expect about 5~kHz of negative muons at the exit of the TS with 200-W beam power.
Since the Detector Solenoid will not be installed at that time either, we plan to install scintillating-fiber and hodoscope counters with a drift chamber downstream of the TS to measure the beam profile.
A range counter composed of several layers of scintillators and absorbers will also be installed.
The energy deposit, range, and time of flight information are used for particle identification.
The Phase-$\alpha$ measurement will provide the muon yield and demonstrate the charge and momentum separation in \mbox{the TS}.
After Phase-$\alpha$, we will install the Pion Capture and Detector Solenoids and move into Phase-I as described in Section~\ref{sec:staging}.
The Phase-I experiment will start in 2024.
\begin{figure}[H]
\includegraphics[width=\linewidth]{fig/PhaseAlpha.pdf}
\caption{Setup for the first beam commissioning phase (Phase-$\alpha$). A picture of the production target holder is also shown on the right.}
\label{fig:phase-a}
\end{figure}
\section{Conclusions}
The COMET experiment, searching for $\mu$-$e$ conversion at J-PARC, has been described.
The dedicated accelerator operation with 8-GeV protons is ready, and construction of the beam line, superconducting magnets, and detector system is progressing.
The first beam commissioning is about to start, and the Phase-I experiment will follow.
Phase-I encompasses both the muon beam measurement and the $\mu$-$e$ conversion search with an intermediate sensitivity of $O(10^{-15})$, which is 100 times better than the current limit.
The muon beam measurement will provide us with a deeper understanding of the backgrounds, which will be incorporated into Phase-II.
By increasing the beam power and improving the experimental apparatus, the Phase-II experiment will further improve the sensitivity, reaching $O(10^{-17})$.
A possible discovery would open the door leading us to physics beyond the Standard Model.
\vspace{6pt}
\funding{The author was supported in part by the Japan Society for the Promotion of Science (JSPS) KAKENHI Grant No.~19K14747, and the Chinese Academy of Sciences President's International Fellowship Initiative.}
\acknowledgments{The author acknowledges strong support from the COMET Collaboration. This article was written on behalf of the COMET Collaboration. We acknowledge support from JSPS, Japan; Belarus; NSFC, China; IHEP, China; IN2P3-CNRS, France; CC-IN2P3, France; SRNSF, Georgia; DFG, Germany; JINR; IBS, Korea; RFBR, Russia; STFC, United Kingdom; and Royal Society, \mbox{United Kingdom}.}
\conflictsofinterest{The author declares no conflict of interest.}
\printendnotes[custom]
\reftitle{References}
\begin{adjustwidth}{-\extralength}{0cm}
\section{Introduction}
The axioms of vertex algebras (VA), including the Jacobi identity \cite{B86, FLM}, make sense for any commutative ring, thus VA over $\mathbb Z$ can be defined naturally. An integral form of a vertex operator algebra (VOA) has been studied in \cite{BR, B98} for the Monster VOA and
in general in \cite{DG1}. As a precursor it was treated as the Kostant $\mathbb Z$-form of the enveloping algebra \cite{S} in the case of affine Lie algebras \cite{G} and their level one modules \cite{P}. For the most important examples of the lattice VOA and affine VA, a special integral form is spanned by monomials in Schur polynomials \cite{M} indexed either by lattice elements or by simple roots of the finite dimensional Lie algebra.
Modular Virasoro VAs and affine VAs have been shown to carry finitely many irreducible modules over fields of finite characteristic \cite{JLM}, and other important properties of modular VOAs have been studied in \cite{AW, GL, DR, LM, Mu}.
In the classical approach to finite algebraic groups, an important lattice property plays a crucial role: all divided powers of integral Chevalley basis elements preserve the integral form of the underlying enveloping algebra, which enables the construction of Lie groups of finite type over fields of characteristic $p$.
and then was generalized to the basic module of the
affine Lie algebra of simply laced type \cite{P} (see also \cite{Mc}).
Lusztig has developed the $U_{\mathbf A}(\mathfrak g)$-lattice structure for studying quantum groups at roots of unity \cite{L}, where $\mathbf{A}=\mathbb{C}[q,q^{-1}]$.
First in \cite{J2} for affine $sl(2)$, and then in general \cite{CJ} for affine ADE types, it was proved that the level one modules of the quantum affine algebra $\mathbf{U}_q(\hat{\mathfrak g})$ admit an integral form as a $\mathbf{U}_{\mathbf{A}}(\hat{\mathfrak g})$-lattice in the sense of Lusztig.
In particular, the lattice is closed under quantum divided powers of basis elements of $\mathbf{U}_q(\hat{\mathfrak g})$, which then facilitates the study of the modules at roots of unity.
The {\it goal} of the paper is to show that there is a similar lattice structure on a large class of
vertex operator algebras and their irreducible modules.
Let $V=S(\hat{\mathfrak h}_{L})\otimes \mathbb C\{L\}$ be the lattice VOA associated with an integral lattice $L$
of rank $r$. Suppose $L$ has an integral basis $\{\alpha_i|i=1, \ldots, r\}$. This also includes the case of super-VOAs when
$L$ is not even. The symmetric algebra $S(\hat{\mathfrak h}_{L})$ can be viewed as a ring of symmetric functions
in the variables $\alpha_i(-n)$, $n\in\mathbb N$, $i\in\{1, \cdots, r\}$. Let $s_{\lambda}(\alpha_k)$ be the Schur function in the $\alpha_k(-n)$ \cite{IJS}.
Then the algebra $V$ admits an integral form $V_{\mathbb Z}=S_{\mathbb Z}(\hat{\mathfrak h}_{L})\otimes \mathbb Z\{L\}$ (see \cite{DG1}), where
$S_{\mathbb Z}(\hat{\mathfrak h}_{L})$ consists of the $\mathbb Z$-span of Schur functions indexed by partition-valued function $\underline{\lambda}=(\lambda^{(1)}(\alpha_1), \ldots, \lambda^{(r)}(\alpha_r))\in\mathcal P^r$.
The algebra naturally decomposes as
$$ V=\bigoplus_{n\in\mathbb Z_+, \alpha\in L}V_{n, \alpha}
$$
where $V_{n, \alpha}$ is spanned by the $s_{\underline{\lambda}}e^{\alpha}$ with $e^{\alpha}\in\mathbb Z\{L\}$, the integral group algebra of $L$. Thus the general elements of the vertex operator algebra $V$ fall into two classes: the components of $Y(v, z)$, $v\in V_{n, \alpha}, \alpha\neq 0$, and the components of
$Y(v, z)$, $v\in V_{n, 0}$. It is known that the operators of the first kind can generate the second kind \cite{Mc} in the vertex operator algebra.
For the {\it elements of the first kind} $v\in V_{n, \alpha}$, $\alpha\neq 0$, we will prove in great generality that the divided powers of the components of $Y(v, z)$
preserve the integral form of the lattice VOA, as long as the lattice is integral.
We will also show that the integral form of any irreducible module of the lattice VOA is preserved by the integral form of the VOA
(the latter lattice structure is in Lusztig's sense).
As a consequence, we recover Garland's result for the affine Lie algebras \cite{G} and their irreducible modules \cite{P}.
Then for any $v\in V_{n, \alpha}$ ($\alpha\neq 0$), $Y(v, z)=\sum_{n\in\mathbb Z}v_nz^{-n-1}$, we can define the invertible map:
\begin{equation}
\exp(v_nt)=\sum_{i=0}^{\infty}\frac{(v_n)^i}{i!}t^i
\end{equation}
which preserves the integral form $V_{\mathbb Z}$ as well as its irreducible modules as $\mathbb Z$-spaces. Note that this automorphism is understood in a general sense, as
it clearly does not preserve the grading.
The {\it elements of the second kind} $v\in V_{n, 0}$ are analogs of the Cartan subalgebra in the affine Lie algebra. By generalizing
Garland's operator \cite{G} we introduce the following symmetric functions: for any $k\in\mathbb N$ and $\alpha$ in a basis of $L$,
\begin{equation}
\exp\bigg(\sum_{n=1}^{\infty}\frac{\alpha(-kn)}{n}z^n\bigg)=\sum_{n=0}^{\infty}h_{\alpha, -n}^{[k]}z^n
\end{equation}
and we will show that $h_{\alpha, -n}^{[k]}$ preserves the integral lattice $(V_L)_{\mathbb Z}$.
When $L$ is the root lattice of ADE type, $V_L$ reduces to the basic module of the affine Lie algebra \cite{FLM}. In such a situation,
our proof also offers a different and simpler proof of Garland's results.
The paper is organized as follows. In Section 2, we discuss vertex operator algebras and symmetric functions,
and construct the integral forms
spanned by Schur functions. In Section 3, we derive some basic relations about vertex algebras and study the integral forms based on
integral lattices. An important technical lemma will be given, which generalizes a result of \cite{BFJ} and \cite{CJ}.
In Section 4, we will show that the divided powers of general vertex operators preserve the integral form
of the lattice vertex operator algebra as well as their irreducible modules $M_{\mathbb Z}$ over $\mathbb Z$. Moreover
the maps $\exp(v_nt)$ are in $\mathrm{GL}_{\mathbb Z}(V_{\mathbb Z})$ and $\mathrm{GL}_{\mathbb Z}(M_{\mathbb Z})$, and one can study modular vertex algebras
of lattice type and their irreducible modules.
\section{Vertex algebras and symmetric functions}
We first recall the notion of the vertex algebra (VA).
A vertex algebra $V$ is a graded vector space $V=\oplus_{n\in\mathbb Z} V_n$ equipped with the state-field correspondence
map $Y_V(\cdot, z): V\longrightarrow \mathrm{End}(V)[[z, z^{-1}]]$, and $Y_V(u, z)=\sum_n u_nz^{-n-1}$ which defines
a sequence of vertex algebra products $u_nv$ for any $u, v\in V, n\in\mathbb Z$ that satisfies several defining
properties of the vertex algebras, see \cite{FLM, FHL} or \cite{LL}
for details. In short, the vertex algebra $V$ is a non-associative
algebra equipped with
infinitely many products $u_nv$ that enjoy the local commutativity and associativity of the
vertex algebra.
An integral form $W_{\mathbb{Z}}$ of a finite-dimensional vector space $W$ is the $\mathbb Z$-span of one of its bases. If $V=\oplus_{n\in\mathbb Z} V_n$
is a graded algebra with finite-dimensional homogeneous subspaces, then an integral form of $V$ is a $\mathbb Z$-subspace $V_{\mathbb Z}$ such that each $V_{\mathbb Z}\cap V_n$ is an integral form of $V_n$ and $V_{\mathbb Z}$ is closed under the product.
For any nonnegative integer $r$ and an operator $x$, we define $x^{(r)}=\frac{x^r}{r!}$. Now let $U_{\mathbb Z}(V)$ be the $\mathbb Z$-linear span
of $(u_1)_{n_1}^{(r_1)}(u_2)_{n_2}^{(r_2)}\cdots (u_k)_{n_k}^{(r_k)}$, where $u_1, u_2, \cdots, u_k\in V_{\mathbb Z}$ and $n_i\in \mathbb Z_+$.
In general, $U_{\mathbb Z}(V)$ may belong to the completion of the algebra and is the integral form of the vertex enveloping algebra $U(V)$ defined in \cite{FZ}.
\begin{defn} \rm
An integral form $V_{\mathbb{Z}}$ of a vertex algebra $V$ is an integral basis of $V$
as a graded vector space such that it contains the vacuum vector $\textbf{1}$ and is closed under vertex algebra products.
\end{defn}
Here closedness means that the product of any two basis elements is a $\mathbb Z$-linear combination of the basis elements.
If the vertex algebra $V$ has a conformal element $\omega$, we will not require an integral form of $V$ to contain $\omega$ as it will
spoil many properties of the integral form.
Let $\mu=\{\mu_1, \mu_2, \cdots ,\mu_k\}$ be a sequence or composition of integers. A sequence $\lambda=(\lambda_1,\lambda_2,\cdots, \lambda_s)$ is called a subsequence of $\mu$ if its coordinates are part of those of $\{\mu_1, \mu_2, \cdots ,\mu_k\}$ (in the same order); the remaining part of
the sequence is denoted by $^c{\lambda}$ and called
the complementary subsequence. When the $\mu_i$ form a non-increasing sequence of integers $\mu_1\geq \ldots\geq \mu_l>0$, then $\mu=(\mu_1, \ldots, \mu_l)$
is called a partition of length $l$. The weight of $\mu$ is defined to be $\sum_i\mu_i$. Sometimes we also arrange the parts of $\mu$ in ascending order: $\mu=(1^{m_1}2^{m_2}\cdots)$, where
$m_i$ is the multiplicity of the part $i$ in $\mu$.
Let $L$ be an even integral lattice of rank $d$ equipped with a nondegenerate symmetric $\mathbb Z$-bilinear form $\langle\cdot,\cdot\rangle$, such that the set
$\Delta=\{ \epsilon_1,\epsilon_2,\cdots,\epsilon_d\}$ is a $\mathbb Z$-basis of $L$.
Let
$\mathfrak{h}=\mathfrak{h}_L$ be the complexified abelian Lie algebra $\mathbb C\otimes L$, with the bilinear form
naturally extended to $\mathfrak h$. The Heisenberg algebra is the infinite-dimensional Lie algebra
$$\hat{\mathfrak h}_L=\coprod \limits_{n\in \mathbb{Z}^{\times}}\mathfrak{h}\otimes t^n\oplus \mathbb{C}k$$
with the commutation relation
\begin{equation}
[h_1(m), h_2(n)]=m\langle h_1, h_2\rangle\delta_{m, -n}k
\end{equation}
where we set $h(m)=h\otimes t^m$.
Let the twisted group algebra $\mathbb{C}\{L\}$ be the space generated by $e^{\alpha}$, $\alpha\in L$
with the multiplication
$$
e^{\alpha}e^{\beta}=\varepsilon(\alpha, \beta)e^{\alpha+\beta}
$$
where $\varepsilon( \ , \ )$ is the cocycle on $L$ defined by the central extension
\begin{equation*}
1\rightarrow \langle \kappa | \kappa^2=1\rangle \hookrightarrow \hat{L} \rightarrow L \rightarrow 0,
\end{equation*}
so that $e^{\alpha}e^{\beta}=(-1)^{\langle \alpha, \beta\rangle}e^{\beta}e^{\alpha}$.
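Such a cocycle can be realized bimultiplicatively (cf. \cite{FLM}): for $\alpha=\sum_i a_i\epsilon_i$ and $\beta=\sum_j b_j\epsilon_j$, set $\varepsilon(\alpha, \beta)=(-1)^{\sum_{i>j}a_ib_j\langle\epsilon_i,\epsilon_j\rangle}$; evenness of the diagonal entries $\langle\epsilon_i,\epsilon_i\rangle$ is exactly what makes the commutator condition hold. A quick numerical sanity check, where the $A_2$ Gram matrix is only an illustrative choice:

```python
from itertools import product

# Gram matrix of an even integral lattice; the A_2 root lattice here is
# only an illustrative choice
gram = [[2, -1], [-1, 2]]
d = len(gram)

def form(a, b):
    # <a, b> for integer coordinate vectors in the basis {epsilon_i}
    return sum(a[i] * gram[i][j] * b[j] for i in range(d) for j in range(d))

def eps(a, b):
    # bimultiplicative 2-cocycle: epsilon(a, b) = (-1)^{sum_{i>j} a_i b_j <e_i, e_j>}
    s = sum(a[i] * b[j] * gram[i][j] for i in range(d) for j in range(d) if i > j)
    return (-1) ** (s % 2)

rng = range(-2, 3)
for a in product(rng, repeat=d):
    for b in product(rng, repeat=d):
        # commutator condition e^a e^b = (-1)^{<a, b>} e^b e^a
        assert eps(a, b) * eps(b, a) == (-1) ** (form(a, b) % 2)
```

The check works verbatim for any even Gram matrix; for odd lattices the modified sign in the remark below would be used instead.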
We remark that if $L$ is not even, one can choose a central extension of $L$ by the cyclic group
$\langle \kappa | \kappa^s=1\rangle$ so that $e^{\alpha}e^{\beta}=(-1)^{\langle \alpha, \beta\rangle+\langle \alpha, \alpha\rangle\langle \beta, \beta\rangle}e^{\beta}e^{\alpha}$. In the sequel, we will mainly work in the even case though many results
also hold in the super case.
View $\mathbb{C}\{L\}$ as a trivial $\hat{\mathfrak{h}}^{+}$-module, and for $\alpha \in \mathfrak{h}$ define an action of $\mathfrak{h}$ on $\mathbb{C}\{L\}$ by
$\alpha(0)\cdot e^\eta=\langle \alpha,\eta\rangle e^\eta$ for $\eta \in L$.
Let $\hat{\mathfrak{h}}^{\pm}=\mathfrak{h}\otimes t^{\pm 1}\mathbb{C}[t^{\pm 1}]$
be the subalgebras of $\hat{\mathfrak h}$,
then $\hat{\mathfrak{h}}$ has the triangular decomposition:
$\hat{\mathfrak{h}}=\hat{\mathfrak{h}} ^+\oplus \hat{\mathfrak{h}}^-\oplus \mathbb{C} k$.
Let $S(\hat{\mathfrak{h}}^-)=M(1)$ be the symmetric algebra on $\hat{\mathfrak{h}}^-$, which is naturally a
$\hat{\mathfrak{h}}$-module with $k=1$. Set
\begin{equation}
V_L=M(1)\otimes \mathbb{C}\{L\},
\end{equation}
and let $\mathbf{1}=1\otimes 1$ be the vacuum vector.
Since both $S(\hat{\mathfrak{h}}^-)$ and $\mathbb{C}\{L\}$ are $\hat{\mathfrak{h}}$-modules, $V_L$ is an $\hat{\mathfrak{h}}$-module via the tensor product action.
The lattice vertex operator algebra $V_L$ \cite{FLM, LL}
is specified by the state-field correspondence $v\mapsto Y(v, z)$.
For $\alpha \in \mathfrak{h}$, we define the operator $z^{\alpha(0)}$ on the space $V_L$ by
$z^{\alpha(0)} \cdot (x\otimes e^\eta)=z^{\langle\alpha, \eta\rangle}(x\otimes e^\eta)$
for $x\in S(\hat{\mathfrak{h}}^-), \eta\in L$.
Define
\begin{equation}\label{e:st-field}
Y(e^\alpha,z)=E^-(-\alpha,z)E^+(-\alpha,z)e^\alpha z^{\alpha(0)}
\end{equation}
where
\begin{align}
E^+(-\alpha,z)&=\exp\bigg(-\sum\limits_{n\in {\mathbb{Z}_+}}\frac{\alpha(n)}{n}z^{-n}\bigg) , \\ \label{e:hom}
E^-(-\alpha,z)&=\exp\bigg( \sum\limits_{n\in {\mathbb{Z}_+}}\frac{\alpha(-n)}{n}z^n\bigg)=
\sum\limits_{n\geq0}h_{\alpha,-n}z^n.
\end{align}
For $\alpha \in \mathfrak{h}$ we set
$$\alpha(z)=\sum\limits_{n\in \mathbb{Z}}\alpha(n)z^{-n-1}, $$
and for the general basis element
$v=\alpha_1(-n_1)\cdots \alpha_k(-n_k)e^{\gamma} \in V_L$,
$\gamma, \alpha_i \in \mathfrak{h},n_i,k\geq 1$, we define
\begin{align*}
Y(v,z)&=\sum \limits_{n\in \mathbb{Z}}v_nz^{-n-1} \\
&=\,: \left(\partial^{(n_1-1)}\alpha_1(z)
\cdots
\partial^{(n_k-1)}\alpha_k(z)\right)
Y(e^{\gamma},z) :,
\end{align*}
where $\partial^{(n)}=\frac1{n!}(\frac{\partial}{\partial z})^n$ and $:\ \ :$ is the normal ordered product.
Let $L^{\circ}$ be the dual lattice of $L$, so the space
\begin{equation}
V_{L^{\circ}}=S(\hat{\mathfrak h}^-)\otimes \mathbb{C}\{L^{\circ}\}
\end{equation}
is similarly defined as in $V_L$ (see \cite{LL}) and
is naturally a $V_L$-module with the action given by the state-field map \eqref{e:st-field}.
Let $\{\gamma_i\}$ be the set of coset representatives of $L^{\circ}/L$. It is known \cite{D, LL} that $V_{L^{\circ}}$ decomposes into irreducible $V_{L}$-modules:
\begin{equation}
V_{L^{\circ}}=\bigoplus_{i=1}^{|L^{\circ}/L|}S(\hat{\mathfrak h}^-)\otimes \mathbb{C}\{L+\gamma_i\},
\end{equation}
and the irreducible components exhaust all irreducible $V_L$-modules.
For a partition $\lambda=(\lambda_1,\lambda_2,\cdots, \lambda_k)$,
define the homogeneous symmetric function $h_{\alpha, -\lambda}$ by
\begin{equation*}
h_{\alpha,-\lambda}=h_{\alpha,-\lambda_1}h_{\alpha,-\lambda_2}\cdots h_{\alpha,-\lambda_k},
\end{equation*}
whose generating function is $E^-(-\alpha, z_1)\cdots E^-(-\alpha, z_k)$.
For a partition $\lambda$ of length $l$, the Schur function $s_{\alpha, -\lambda}$ is defined by the Jacobi-Trudi formula
\begin{equation}
s_{\alpha, -\lambda}=\det(h_{\alpha, -\lambda_i+i-j})_{l\times l}.
\end{equation}
It is well known that $\alpha(-n)$ is an integral linear combination of the $h_{\alpha, -\lambda}$ with $|\lambda|=n$.
The homogeneous function $h_{\alpha, -n}$ is an integral linear combination of the Schur functions $s_{\alpha, -\mu}$ with
$|\mu|=n$, and vice versa.
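Under the identification $h_{\alpha,-n}\leftrightarrow h_n$, the formula above is the classical Jacobi-Trudi identity $s_{\lambda}=\det(h_{\lambda_i-i+j})$. As a quick computational sanity check (not used in the arguments), it can be compared in \verb|sympy| with the bialternant definition of Schur polynomials in three variables:

```python
from itertools import combinations_with_replacement
from sympy import symbols, Matrix, Mul, cancel, expand

x = symbols('x1 x2 x3')

def h(n):
    # complete homogeneous symmetric polynomial h_n(x1, x2, x3); h_0 = 1, h_{<0} = 0
    if n < 0:
        return 0
    return sum(Mul(*c) for c in combinations_with_replacement(x, n))

def schur_bialternant(lam):
    # s_lambda = det(x_i^{lambda_j + n - j}) / det(x_i^{n - j})
    n = len(x)
    full = list(lam) + [0] * (n - len(lam))
    num = Matrix(n, n, lambda i, j: x[i] ** (full[j] + n - 1 - j)).det()
    den = Matrix(n, n, lambda i, j: x[i] ** (n - 1 - j)).det()
    return cancel(num / den)

def schur_jacobi_trudi(lam):
    # Jacobi-Trudi determinant det(h_{lambda_i - i + j})
    l = len(lam)
    return Matrix(l, l, lambda i, j: h(lam[i] - i + j)).det()

for lam in [(2, 1), (3,), (2, 2), (3, 1)]:
    assert expand(schur_jacobi_trudi(lam) - schur_bialternant(lam)) == 0
```

For instance, $\lambda=(2,1)$ gives $s_{(2,1)}=h_2h_1-h_3$, matching the determinant above.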
Moreover, Schur functions can be created by vertex operators.
For $\alpha\in L$ we introduce the following vertex operator \cite{J2}
\begin{equation}
S(\alpha, z)=E^-(-\alpha, z)E^+(-\frac{\alpha}2, z)=\sum_{n\in\mathbb Z} S(\alpha)_n z^{-n}
\end{equation}
\begin{prop} \cite{J1} For each partition $\lambda=(\lambda_1, \ldots, \lambda_l)$, the
Schur function $s_{\lambda}$ in the variables $\alpha(-n)$, with $\alpha(-n)$ viewed as the power sum $p_n=\sum_i x_i^n$, is given by
\begin{equation}
s_{\alpha, -\lambda}=S(\alpha)_{-\lambda_1}S(\alpha)_{-\lambda_2}\cdots S(\alpha)_{-\lambda_l}\cdot 1.
\end{equation}
\end{prop}
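When $\langle\alpha,\alpha\rangle=2$, the Fock space identification $\alpha(-n)\mapsto$ multiplication by $p_n$ and $\alpha(n)\mapsto 2n\,\partial/\partial p_n$ turns $S(\alpha,z)$ into the operator $\exp(\sum_m p_m z^m/m)\exp(-\sum_m \partial_{p_m} z^{-m})$. The following truncated \verb|sympy| check of the proposition for $\lambda=(2,1)$, where $s_{(2,1)}=(p_1^3-p_3)/3$, is only an illustration (the truncation orders are ad hoc choices, sufficient for this case):

```python
from sympy import symbols, Rational, diff, expand, factorial, S

z = symbols('z')
M = 4                          # truncation order in z (enough for this check)
p = list(symbols('p1:5'))      # power sums p_1, ..., p_4

def S_op(f, n):
    """Coefficient of z^n in S(alpha, z) f, i.e. S(alpha)_{-n} f, for a polynomial f."""
    # annihilation part exp(-sum_m d/dp_m z^{-m}): locally nilpotent, finite sum
    g, term, k = S.Zero, f, 0
    while term != 0:
        g += term
        term = expand(-sum(diff(term, pm) * z ** (-m)
                           for m, pm in enumerate(p, 1)) / (k + 1))
        k += 1
    # creation part exp(sum_m p_m z^m / m), truncated past z^M
    expo = sum(pm * z ** m / Rational(m) for m, pm in enumerate(p, 1))
    Em = expand(sum(expo ** j / factorial(j) for j in range(M + 1)))
    return expand(Em * g).coeff(z, n)

# S(alpha)_{-2} S(alpha)_{-1} . 1 should equal s_{(2,1)} = (p1^3 - p3)/3
v = S_op(S_op(S.One, 1), 2)
assert expand(v - (p[0] ** 3 - p[2]) / 3) == 0
```

In particular $S(\alpha)_{-n}\cdot 1=h_n$, recovering the one-row case directly.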
Let $\underline{\lambda}=(\lambda^{(1)}, \ldots, \lambda^{(d)})$ be a multi-partition with weight $n$. We define the
multivariate or tensor product Schur function $s_{\underline{\lambda}}$ \cite{IJS} as
$$
s_{\beta_1, -\lambda^{(1)}}s_{\beta_2, -\lambda^{(2)}}\cdots s_{\beta_d, -\lambda^{(d)}},
$$
whose generating function is
\begin{equation}\label{e:genSchur1}
S(\beta_1, z_1)\cdots S(\beta_1, z_{l(\lambda^{(1)})})S(\beta_2, w_1)\cdots S(\beta_2, w_{l(\lambda^{(2)})})\cdots S(\beta_d, t_1)\cdots S(\beta_d, t_{l(\lambda^{(d)})})\cdot 1.
\end{equation}
According to \cite{IJS}, the above is a $\mathbb Z$-linear combination of tensor products of homogeneous symmetric functions. Therefore we can use the following generating series for the tensor product Schur function:
\begin{equation}\label{e:genSchur2}
E^-(-\gamma_1, z_1)E^-(-\gamma_2, z_2)\cdots E^-(-\gamma_l, z_l)
\end{equation}
where the $\gamma_i$ form a sequence of vectors (with multiplicity) in $L$.
\section{Integral forms of lattice vertex operator algebras}
Let $\underline{\lambda}=(\lambda^{(1)}, \ldots, \lambda^{(d)})$ be a multi-partition with weight $n$. The $\mathbb Z$-span
of Schur functions
$$s_{\beta_1, -\lambda^{(1)}}s_{\beta_2, -\lambda^{(2)}}\cdots s_{\beta_d, -\lambda^{(d)}}e^{\eta},$$
where $\beta_i\in\Delta$,
$\eta \in L$
can be simply written as a $\mathbb Z$-span of the following elements:
\begin{equation}\label{e:wrSchur}
s_{\beta_1',-n_1}s_{\beta_2',-n_2}\cdots s_{\beta_k',-n_k}\otimes e^\eta,
\end{equation}
where $\beta_i'\in \Delta$, $\eta\in L$, and $n=\sum_i{n_i}$ with $n_1\geq n_2\geq \cdots \geq n_k\geq 0$. For convenience, we will stick to this type of Schur elements
and note that the generating series is also given by appropriate products of the exponential operators
given in \eqref{e:genSchur2}.
Let $(V_L)_{\mathbb{Z}}$ be the $\mathbb Z$-span of
the elements in \eqref{e:wrSchur}. We also define $(V_{L^{\circ}})_{\mathbb Z}$ and $(V_{L+\gamma_i})_{\mathbb Z}$ similarly by the
same span (with $e^{\eta}$ being in $\mathbb C\{L^{\circ}\}$ or $\mathbb C\{L+\gamma_i\}$ respectively).
The following theorem is essentially from \cite{DG1} (see also \cite{Mc}).
\begin{thm}
The space $(V_L)_{\mathbb{Z}}$
is an integral form of $V_L$ generated by $e^{\pm \alpha_i}$, $\alpha_i\in \Delta$.
Moreover, the space $(V_{L+\gamma_i})_{\mathbb Z}$
is an integral form of $V_{L+\gamma_i}$ generated by the action of $Y(e^{\pm \alpha_i}, z)$, $\alpha_i\in \Delta$
on the vacuum vector $e^{\gamma_i}$.
\end{thm}
\begin{proof} It is enough to show the theorem for $(V_L)_{\mathbb{Z}}$.
By the relation between Schur functions and the homogeneous symmetric functions, we see that the basis
elements $s_{\beta_1,-n_1}s_{\beta_2,-n_2}\cdots s_{\beta_k,-n_k}\otimes e^\eta$, $\beta_i\in\Delta$
can be expressed as integral linear combinations of $h_{\beta_1',-m_1}h_{\beta_2',-m_2}\cdots h_{\beta_l',-m_l}\otimes e^\eta$,
$\beta_i'\in\Delta$, where $(n_1, \ldots, n_k)$ and
$(m_1, \ldots, m_l)$ are partitions of the same weight.
The latter are the coefficients of $E^-(-\beta'_1,w_1)\cdots$ $E^-(-\beta'_l,w_l)\otimes e^\eta$.
Then the result follows from \cite{DG1}.
\end{proof}
Let $\delta=(\delta_1,\delta_2,\cdots,\delta_r)=(r-1,r-2,\cdots,0)
\in \mathbb{Z}^r$ be the special partition, and let
$l(\sigma)$ denote the number of inversions of a permutation $\sigma$. For $r$-tuples $t=(t_1,t_2,\cdots, t_r)$, $t_i\in \mathbb{Z}$,
let $R$ be a commutative ring, and let the space of truncated formal Laurent series be
$$R((z_1,z_2,\cdots,z_r))=
\bigg\{ \sum \limits_{t}a_tz^t=\sum \limits_{t}a_{t_1t_2\cdots t_r}z_1^{t_1}z_2^{t_2}\cdots z_r^{t_r}\Big|
a_{t_1t_2\cdots t_r}\in R, a_{t_1t_2\cdots t_r}=0 \ \mbox{for $t_i\ll 0$}
\bigg\}.$$
The symmetric group $S_r$ acts on the formal Laurent series by permuting their variables, i.e. for $\sigma\in S_r$,
$$\sigma.f(z_1, \cdots, z_r)=f(z_{\sigma^{-1}(1)}, \cdots, z_{\sigma^{-1}(r)}), \qquad\quad f\in R((z_1,z_2,\cdots,z_r)).$$
\begin{lemma}\label{r!} Suppose
$G\in R((z_1,z_2,\cdots,z_r))$
is invariant under the action of the symmetric group $S_r$. Then for all $n \in \mathbb{Z}, k\in \mathbb{N}$, the coefficient of
$(z_1z_2\cdots z_r)^n$ in $\prod \limits_{1\leq i<j\leq r}(z_i-z_j)^kG$ is divisible by $r!$.
\end{lemma}
\begin{proof} If $k>2$, write $k=2q+k_0$ with $k_0\in\{1, 2\}$ and $q\geq 1$. Then
$\prod \limits_{1\leq i<j\leq r}(z_i-z_j)^{2q}G$ is clearly invariant under $S_r$, so it is enough to show the result for $k=1, 2$.
First of all,
\begin{align}\label{e:Van}
\prod\limits_{1\leq i<j\leq r}(z_i-z_j)=\sum\limits_{\sigma\in S_r}(-1)^{l(\sigma)}z^{\sigma(\delta)}
\end{align}
where $\sigma$ runs through all permutations of $S_r$, $\delta=(r-1, \ldots, 1, 0)$ and $z^{\mu}=z_1^{\mu_1}\cdots z_r^{\mu_r}$.
Then
\begin{align}\notag
\prod\limits_{1\leq i<j\leq r}(z_i-z_j)^2&=\sum\limits_{\sigma\in S_r}(-1)^{l(\sigma)}z^{\sigma(\delta)}\prod\limits_{1\leq i<j\leq r}(z_i-z_j)\\ \label{e:sq}
&=\sum\limits_{\sigma\in S_r}\sigma.\left(z^{\delta}\prod\limits_{1\leq i<j\leq r}(z_i-z_j)\right)
\end{align}
Note that if $(z_1\cdots z_r)^n$ appears in $\prod\limits_{1\leq i<j\leq r}(z_i-z_j)^2$, it must appear
inside $z^{\delta}\prod\limits_{1\leq i<j\leq r}(z_i-z_j)$, i.e. we can write
\begin{equation*}
z^{\delta}\prod\limits_{1\leq i<j\leq r}(z_i-z_j)=c_n(z_1\cdots z_r)^n+\sum_{\mu\neq (n, \cdots, n)}c_{\mu}z^{\mu}.
\end{equation*}
It follows from \eqref{e:sq} that
\begin{align*}
\prod\limits_{1\leq i<j\leq r}(z_i-z_j)^2&=\sum\limits_{\sigma\in S_r}\sigma.\left(c_n(z_1\cdots z_r)^n+\sum_{\mu\neq (n, \cdots, n)}c_{\mu}z^{\mu}\right)\\
&=r!c_n(z_1\cdots z_r)^n+\sum_{\mu\neq (n, \cdots, n), \sigma\in S_r}c_{\sigma^{-1}(\mu)}z^{\mu}.
\end{align*}
So the lemma is proved for $k=2$.
Now assume $k=1$. Suppose $(z_1\cdots z_r)^n$ appears inside $\prod \limits_{i<j}(z_i-z_j)G=\sum\limits_{\sigma\in S_r}(-1)^{l(\sigma)}z^{\sigma(\delta)}G$,
then the coefficient is determined by that of $(z_1\cdots z_r)^n$ in $z^{\delta}G$. We can write
\begin{equation}\label{e:sep}
z^{\delta}G=a_n(z_1\cdots z_r)^n+\sum_{\mu\neq (n, \cdots, n)}a_{\mu}z^{\mu}.
\end{equation}
where $a_{\mu}\neq 0$ for finitely many $\mu\in\mathbb Z^{r}$ with any fixed weight $|\mu|=\sum_i\mu_i$. Using \eqref{e:Van} and
\eqref{e:sep}, we have
\begin{align*}
\prod\limits_{1\leq i<j\leq r}(z_i-z_j)G&=\sum\limits_{\sigma\in S_r}(-1)^{l(\sigma)}\sigma.\left(a_n(z_1\cdots z_r)^n+\sum_{\mu\neq (n, \cdots, n)}a_{\mu}z^{\mu}\right)\\
&=\sum\limits_{\mu\neq (n, \cdots, n), \sigma\in S_r}(-1)^{l(\sigma)}a_{\sigma^{-1}(\mu)}z^{\mu},
\end{align*}
where the first summand vanishes due to anti-symmetry and the second summand contains no term $(z_1\cdots z_r)^n$, i.e.
the coefficient of $(z_1\cdots z_r)^n$ in $\prod\limits_{1\leq i<j\leq r}(z_i-z_j)G$ is zero. This completes the proof.
\end{proof}
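Lemma \ref{r!} can be sanity checked in \verb|sympy| for $r=3$; the particular symmetric $G$ below is an arbitrary illustrative choice, not dictated by the text:

```python
from sympy import symbols, expand

z1, z2, z3 = symbols('z1 z2 z3')
V = (z1 - z2) * (z1 - z3) * (z2 - z3)       # the product over 1 <= i < j <= 3
G = (z1 + z2 + z3) ** 3 + z1 * z2 * z3      # an S_3-invariant polynomial

def diag_coeff(F, n):
    # coefficient of (z1 z2 z3)^n in F
    return expand(F).coeff(z1, n).coeff(z2, n).coeff(z3, n)

for n in range(4):
    # k = 2: the diagonal coefficient is divisible by 3! = 6
    assert diag_coeff(V ** 2 * G, n) % 6 == 0
    # k = 1: the proof shows the diagonal coefficient actually vanishes
    assert diag_coeff(V * G, n) == 0
```

The $k=1$ assertions illustrate the stronger conclusion established in the proof: for odd powers of the Vandermonde factor the diagonal coefficient is zero, not merely divisible by $r!$.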
\begin{thm}\label{CJ}
For $\alpha\in L$, let $Y(e^{\alpha},z)=\sum\limits_{n\in \mathbb{Z}}y_nz^{-n-1}$, then
$(V_L)_\mathbb{Z}$ (or $(V_{L+\gamma_i})_\mathbb{Z}$) is preserved by $y^{(r)}_n=\frac{y^r_n}{r!}$.
\end{thm}
\begin{proof} Recall that for $\alpha,\beta \in L$ \cite{FLM}
\begin{equation*}
E^+(-\alpha,z)E^-(-\beta,w)=E^-(-\beta,w)E^+(-\alpha,z)(1-\frac{w}{z})^{\langle \alpha, \beta \rangle}
\end{equation*}
So we have,
\begin{align*}
Y(e^{\alpha},z_r)& \Big(E^-(-\beta_1,w_1)\cdots E^-(-\beta_l,w_l)\otimes e^\eta \Big)\\
=&\epsilon (\alpha, \eta)z_r^{\langle \alpha, \eta \rangle}
\prod\limits_{s=1}\limits^l(1-\frac{w_s}{z_r})^{\langle \alpha, \beta_s \rangle}
E^-(-\alpha,z_r)E^-(-\beta_1,w_1)\cdots E^-(-\beta_l,w_l)\otimes e^{\alpha+\eta}.
\end{align*}
Then
\begin{align}\label{ynr}
Y(e^{\alpha},z_1)&Y(e^{\alpha},z_2)\cdots Y(e^{\alpha},z_r)
E^-(-\beta_1,w_1)\cdots E^-(-\beta_l,w_l)\otimes e^\eta \nonumber\\ \nonumber
&=\epsilon \cdot (z_1z_2\cdots z_r)^{\langle \alpha, \eta \rangle}
\prod\limits_{1\leq k< s\leq r}(z_k-z_s)^{\langle \alpha, \alpha \rangle}
\prod\limits_{1\leq k\leq r,1\leq s\leq l}(1-\frac{w_s}{z_k})^{\langle \alpha, \beta_s \rangle}\\
&\quad \cdot E^-(-\alpha,z_1) \cdots E^-(-\alpha,z_r)E^-(-\beta_1,w_1)\cdots E^-(-\beta_l,w_l)\otimes e^{r\alpha+\eta},
\end{align}
where $\epsilon=\epsilon(\alpha,\eta)\epsilon(\alpha,\alpha+\eta)\cdots \epsilon(\alpha,(r-1)\alpha+\eta)$.
Subsequently using Lemma \ref{r!} and the fact that $\langle \alpha, \alpha \rangle \in \mathbb{Z}_+$ for $\alpha \in L$, we have that
\begin{equation*}
y_n^{(r)}\cdot \bigg(E^-(-\beta_1,w_1)\cdots E^-(-\beta_l,w_l)\bigg)
w_1^{\mu_1}w_2^{\mu_2}\cdots w_l^{\mu_l}\otimes e^{\eta}\in (V_L)_\mathbb{Z}
\end{equation*}
where $\mu_1,\mu_2,\cdots,\mu_l\in \mathbb{Z}, \eta \in L$, which means that
$y_n^{(r)}(V_L)_\mathbb{Z} \subset (V_L)_\mathbb{Z}$.
\end{proof}
\begin{corollary}\rm
For $\alpha_i \in L, k_i \in \mathbb{Z} $, let $v=\sum\limits_{i=1}\limits^m k_ie^{\alpha_i}$ such that
$\langle\alpha_i, \alpha_j\rangle\geq 0$, and $Y(v,z)=\sum\limits_{n\in \mathbb{Z}}v_nz^{-n-1}$, then
$(V_L)_\mathbb{Z}$ is preserved by $v^{(r)}_n=\frac{v^r_n}{r!}$.
\end{corollary}
\begin{proof} By vertex operator calculus it is well-known that if $\langle\alpha_i, \alpha_j\rangle\geq 0$, then
\begin{equation*}
Y(e^{\alpha_1},z_1)Y(e^{\alpha_2},z_2)=Y(e^{\alpha_2},z_2)Y(e^{\alpha_1},z_1).
\end{equation*}
Also, when the space $V$ is preserved under the action of the divided powers of commuting operators $A_1,\cdots, A_n$,
it is preserved under the divided powers of $A=\sum_i A_i$, since these are $\mathbb Z$-linear combinations of products of divided powers of the $A_i$.
In fact, this follows easily from $(A+B)^{(n)}=\sum_{i=0}^nA^{(i)}B^{(n-i)}$ for commuting $A, B$.
\end{proof}
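The divided-power binomial identity $(A+B)^{(n)}=\sum_{i=0}^{n}A^{(i)}B^{(n-i)}$ for commuting $A, B$ can be checked with exact arithmetic; the matrices below are an arbitrary illustrative choice of commuting operators (powers of a single matrix):

```python
from math import factorial
from sympy import Matrix, zeros

M = Matrix([[1, 2], [3, 4]])
A, B = M, M ** 2                    # polynomials in the same matrix commute

def divided_power(X, r):
    return X ** r / factorial(r)    # X^{(r)} = X^r / r!

n = 5
lhs = divided_power(A + B, n)
rhs = zeros(2, 2)
for i in range(n + 1):
    rhs += divided_power(A, i) * divided_power(B, n - i)
assert lhs == rhs
```

Note that the identity fails for non-commuting operators, which is why the corollary requires $\langle\alpha_i, \alpha_j\rangle\geq 0$ so that the vertex operators commute.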
Before discussing the general vertex operator, let us consider the case of the Heisenberg algebra.
Recall that
$E^-(-\alpha,z)=\exp\bigg( \sum\limits_{n\in {\mathbb{Z}_+}}\frac{\alpha(-n)}{n}z^n\bigg)=
\sum\limits_{n\geq0}h_{\alpha, -n}z^n$. The right analog for the divided power in this case is
the following operator.
For any $k\in\mathbb N$, we introduce the Garland operators $h_{\alpha, -n}^{[k]}$ by
\begin{equation}
\exp\bigg( \sum\limits_{n\in {\mathbb{Z}_+}}\frac{\alpha(-kn)}{n}z^n\bigg)=\sum\limits_{n\geq0}h_{\alpha, -n}^{[k]}z^n.
\end{equation}
A special case of the operators $h_{\alpha, -n}^{[k]}$ was considered by Garland for affine Lie algebras \cite{G}.
\begin{thm}
For any $k, n\in \mathbb N$, the elements
$h_{\alpha, -n}^{[k]}\in \mathbb{Z}[h_{\alpha, -1},h_{\alpha, -2}, \cdots, h_{\alpha, -kn}]$.
In particular, as an operator $h_{\alpha, -n}^{[k]}$ preserves $(V_L)_\mathbb{Z}$.
\end{thm}
\begin{proof}
Let $\omega $ be a primitive $k$th root of unity: $\omega^k=1$. Then
$$E^-(-\alpha, z)E^-(-\alpha, z\omega)\cdots E^-(-\alpha, z\omega^{k-1})=\exp\bigg( \sum\limits_{n\in {\mathbb{Z}_+}}\frac{\alpha(-kn)}{n}z^{kn}\bigg)
=\sum\limits_{n\geq0}h_{\alpha, -n}^{[k]}z^{kn}, $$
since $1+\omega^j+\omega^{2j}+\cdots+\omega^{(k-1)j}=0$ for $k\nmid j$.
Taking the coefficients of $z^{kn}$ ($n>0$), we have
$$h_{\alpha, -n}^{[k]}=\sum\limits_{i_1+i_2+\cdots+i_k=kn}h_{\alpha, -i_1}h_{\alpha, -i_2}\cdots h_{\alpha, -i_k}\omega^{i_2+2i_3+\cdots+(k-1)i_k}
\in \mathbb{Z}[\omega][h_{\alpha, -1},h_{\alpha, -2}, \cdots, h_{\alpha, -kn}].$$
It is obvious that $h_{\alpha, -n}^{[k]} \in \mathbb{Q}[\alpha(-1),\cdots, \alpha(-kn)] = \mathbb{Q}[h_{\alpha, -1},h_{\alpha, -2}, \cdots, h_{\alpha, -kn}]$.
Therefore
$$h_{\alpha, -n}^{[k]} \in (\mathbb{Q}\cap \mathbb{Z}[\omega])[h_{\alpha, -1},h_{\alpha, -2}, \cdots, h_{\alpha, -kn}]=\mathbb{Z}[h_{\alpha, -1},h_{\alpha, -2}, \cdots, h_{\alpha, -kn}].$$
\end{proof}
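The theorem admits a direct computational check: expressing the power sums $p_n$ through the $h_n$ via Newton's identities and expanding the generating function, all coefficients of $h^{[k]}_{\alpha,-n}$ in the $h$'s come out integral. In the sketch below the subscript $\alpha$ is dropped, and $k=2$ with a small truncation order are illustrative choices:

```python
from sympy import symbols, Rational, expand, factorial, Poly

N = 6
h = list(symbols('h1:7'))            # h_1, ..., h_6
# Newton's identities: n h_n = sum_{i=1}^n p_i h_{n-i}, with h_0 = 1
pe = []                              # pe[n-1] = p_n written in the h's
for n in range(1, N + 1):
    pe.append(expand(n * h[n - 1]
                     - sum(pe[i - 1] * h[n - i - 1] for i in range(1, n))))

z = symbols('z')
k, nmax = 2, 3                       # check h^{[2]}_n for n = 1, 2, 3 (uses p_2, p_4, p_6)
expo = sum(pe[k * m - 1] * z ** m / Rational(m) for m in range(1, nmax + 1))
ser = expand(sum(expo ** j / factorial(j) for j in range(nmax + 1)))
for n in range(1, nmax + 1):
    coeffs = Poly(expand(ser.coeff(z, n)), *h).coeffs()
    assert all(q.is_integer for q in coeffs)   # integral in the h's
# e.g. h^{[2]}_1 = p_2 = 2 h_2 - h_1^2
assert expand(ser.coeff(z, 1)) == expand(2 * h[1] - h[0] ** 2)
```

For instance, $h^{[2]}_2 = 2h_4 - 2h_1h_3 + h_2^2$: the half-integral coefficients of $p_4/2 + p_2^2/2$ cancel exactly as the proof predicts.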
\section{$\mathbb Z$-lattice structure of the lattice vertex algebra}
Now we consider the action of general homogeneous vertex operators on $(V_L)_{\mathbb{Z}}$.
Let $v=\alpha_1(-\lambda_1)$ $\cdots$ $\alpha_k(-\lambda_k)e^{\gamma} \in V_L$,
$\alpha_i, \gamma\in L^{\times}$, where $\lambda_i\geq 1$ and $k\geq 0$; here $k=0$ means $v=e^{\gamma}$.
For $\beta\in L$, we write $\beta(z)=\beta^+(z)+\beta^-(z)$, where $\beta^{\pm}(z)$ denote the annihilation and creation parts, respectively. Then for
$m\geq 1$
\begin{align*}
\partial^{(m-1)}\beta(z)
&=A^-_{\beta, m}(z)+A^+_{\beta, m}(z)
\end{align*}
where $A^+_{\beta, m}(z)=\sum\limits_{n\geq 0}(-1)^{m-1}\binom{n+m-1}{m-1}\beta(n)z^{-n-m}$
and
$A^-_{\beta, m}(z)=\sum\limits_{n>0}\binom{n-1}{m-1}\beta(-n)z^{n-m}$
are the annihilation and creation parts of the operator, respectively.
We omit the subscript $\beta$ if no confusion arises.
For $\alpha(\lambda)=\alpha_1(-\lambda_1)\cdots \alpha_k(-\lambda_k)$,
let
\begin{align*}
A^+_{\alpha(\lambda)}(z):&=A^+_{\alpha_1, \lambda_1}(z)\cdots A^+_{\alpha_k, \lambda_k}(z),\\
A^-_{\alpha(\lambda)}(z):&=A^-_{\alpha_1, \lambda_1}(z)\cdots A^-_{\alpha_k, \lambda_k}(z).
\end{align*}
where the index of $\lambda_i$ matches that of $\alpha_i$. We can view $\lambda=(\lambda_1, \ldots, \lambda_k)$ as
a partition, and also view $(\lambda_1, \ldots, \lambda_k)$ as a decomposition of $\sum_i\alpha_i$, so the notation
$|\alpha|=\sum_i\alpha_i$ will also be adopted. When the $\alpha_i$ are fixed and omitted, we will use the $\lambda_i$ to indicate
the dependence on the $\alpha_i$.
Then for $v=\alpha_1(-n_1)\cdots \alpha_k(-n_k)e^{\gamma}$ we can write
\begin{align}\label{Yvz}
Y(v,z)&=\sum \limits_{n\in \mathbb{Z}}v_nz^{-n-1} \nonumber \\
&=\, : \partial^{(n_1-1)}\alpha_1(z)
\cdots
\partial^{(n_k-1)}\alpha_k(z)\cdot
Y(e^{\gamma},z) : \nonumber \\
&=\sum \limits_\lambda A^-_{\alpha(\lambda)}(z)Y(e^{\gamma},z)A^+_{\alpha(^c{\lambda})}(z),
\end{align}
summed over $2^k$ subpartitions $\lambda=(\lambda_1,\cdots, \lambda_s)$ of $(n_1, \cdots ,n_k)$, and $^c{\lambda}$ is the complementary subpartition of $\lambda$ inside $(n_1, \cdots ,n_k)$. i.e. if $\lambda=(n_{i_1}, \cdots, n_{i_s})$, then
$\alpha(\lambda)=\alpha_{i_1}(-n_{i_1})\cdots \alpha_{i_s}(-n_{i_s})$.
Fix $\alpha_1, \ldots, \alpha_k\in L$. For a sequence $\underline{\beta}=(\beta_1,\beta_2,\cdots,\beta_l)$, $\beta_i\in L$
and $\alpha(-m)$, $m\in\mathbb Z_+$, we define
\begin{equation}\label{fz1}
f_{\alpha, m}(z; w):=f_{\alpha, m}(\underline{\beta}, z; w)=
\sum_{i=1}^l\frac{\langle\alpha, \beta_i\rangle(-1)^{m-1}}{(z-w_i)^{m}}
\end{equation}
where $w=(w_1, \cdots, w_l)$ and the rational functions are expanded as power series in the $w_i$.
Also for fixed $\underline{\beta}$ and $\alpha(\lambda)=\alpha_1(-\lambda_1)\cdots \alpha_k(-\lambda_k)$ we define the formal series:
\begin{equation}\label{fzz}
f_{\alpha(\lambda)}(z,w):=f_{\alpha(\lambda)}(\underline{\beta}, z; w)=f_{\alpha_1, \lambda_1}(\underline{\beta}, z; w)\cdots f_{\alpha_k, \lambda_k}(\underline{\beta}, z; w)
\end{equation}
\begin{lemma} \label{A1E}
Let $\alpha(\lambda)=\alpha_1(-\lambda_1)\alpha_2(-\lambda_2)\cdots \alpha_k(-\lambda_k)$. Then for any sequence $(\beta_1,\beta_2,\cdots,\beta_l), \beta_q\in L, 1\leq q\leq l$, and $|\beta|=\sum_{i=1}^{l}\beta_i$, one has that
\begin{align}\notag
A^+_{\alpha(\lambda)}(z)\prod_{i=1}^lE^-(-\beta_i,w_i)e^{|\beta|}
&=\prod_{j=1}^k\left(\sum_{i=1}^l\frac{\langle\alpha_j, \beta_i\rangle(-1)^{\lambda_j-1}}{(z-w_i)^{\lambda_j}}\right)\prod_{i=1}^lE^-(-\beta_i,w_i)e^{|\beta|}\\ \label{e:AlE}
&=f_{\alpha(\lambda)}(\beta,z; w)\prod_{i=1}^lE^-(-\beta_i,w_i)e^{|\beta|}.
\end{align}
\end{lemma}
\begin{proof} Note that
$[\alpha(n),E^-(-\beta,w)e^{\beta}]=\langle \alpha, \beta \rangle w^nE^-(-\beta,w)e^{\beta}$ ($n\geq 0$) and
$[\alpha(0), e^{\eta}]=\langle\alpha, \eta\rangle e^{\eta}$.
It follows that
\begin{flalign}\label{AE}
[A^+_{\alpha, m}(z),E^-(-\beta,w)e^{\beta}]&=\langle \alpha, \beta \rangle \bigg(\sum\limits_{n\geq 0}\binom{n+m-1}{m-1}(-1)^{m-1}z^{-n-m} w^n \bigg) E^-(-\beta,w)e^{\beta} \nonumber\\
&=(-1)^{m-1}\langle \alpha, \beta\rangle(z-w)^{-m}E^-(-\beta,w)e^{\beta}
\end{flalign}
and $A^+_{\alpha, m}(z)e^{\eta}=\langle\alpha, \eta\rangle(-1)^{m-1}z^{-m}e^{\eta}$.
Using these relations we get that
\begin{align*}
&A^+_{\alpha, m}(z)E^-(-\beta_1,w_1)\cdots E^-(-\beta_l,w_l).e^{\eta}=\epsilon A^+_{\alpha, m}(z)E^-(-\beta_1,w_1)\cdots E^-(-\beta_l,w_l)e^{\sum_i\beta_i}.e^{\eta-\sum_i\beta_i}\\
&=\epsilon \prod_{i=1}^l(E^-(-\beta_i,w_i)e^{\beta_i})
A^+_{\alpha, m}(z).e^{\eta-\sum_i\beta_i}+\epsilon\left(\sum_{i=1}^l\frac{\langle\alpha, \beta_i\rangle(-1)^{m-1}}{(z-w_i)^{m}}\right)\prod_{i=1}^lE^-(-\beta_i,w_i)e^{\beta_i}.e^{\eta-\sum_i\beta_i}\\
&=\left((-1)^{m-1}z^{-m}\langle\alpha, \eta-\sum_{i=1}^l\beta_i\rangle+\sum_{i=1}^l\frac{\langle\alpha, \beta_i\rangle(-1)^{m-1}}{(z-w_i)^{m}}\right)
\prod_{i=1}^lE^-(-\beta_i,w_i)e^{\eta},
\end{align*}
where $\epsilon=\varepsilon(|\beta|, \eta-|\beta|)^{-1}$.
Successively applying the operators $A^+_{\alpha_j, \lambda_j}(z)$, we obtain that
\begin{align*}
A^+_{\alpha(\lambda)}(z)&\prod_{i=1}^lE^-(-\beta_i,w_i)e^\eta=\bigg(A^+_{\alpha_1, \lambda_1}(z)\cdots A^+_{\alpha_k,\lambda_k}(z)\bigg)\prod_{i=1}^lE^-(-\beta_i,w_i)e^\eta\\
&=\prod_{j=1}^k\left((-1)^{\lambda_j-1}z^{-\lambda_j}\langle\alpha_j, \eta-|\beta|\rangle+\sum_{i=1}^l\frac{\langle\alpha_j, \beta_i\rangle(-1)^{\lambda_j-1}}{(z-w_i)^{\lambda_j}}\right)\prod_{i=1}^lE^-(-\beta_i,w_i)e^\eta.
\end{align*}
In particular, \eqref{e:AlE} follows by taking $\eta=|\beta|=\sum_i\beta_i$.
\end{proof}
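The binomial expansion underlying \eqref{AE}, namely $(z-w)^{-m}=\sum_{n\geq0}\binom{n+m-1}{m-1}z^{-n-m}w^n$ when expanded in $w$, can be confirmed order by order in \verb|sympy| (the truncation order $T$ is an arbitrary choice):

```python
from sympy import symbols, binomial, expand, simplify

z, w = symbols('z w')
T = 8
for m in [1, 2, 3]:
    # partial sum of the claimed expansion of (z - w)^{-m} in powers of w
    lhs = sum(binomial(n + m - 1, m - 1) * z ** (-n - m) * w ** n
              for n in range(T))
    # Taylor expansion of (z - w)^{-m} around w = 0, truncated at order T
    rhs_ser = ((z - w) ** (-m)).series(w, 0, T).removeO()
    assert simplify(expand(lhs - rhs_ser)) == 0
```

This is the same expansion convention used for the contraction functions in \eqref{e:C1} below.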
This lemma allows us to move the annihilation operator $A^+_{\alpha(\lambda)}(z)$ to the right. Next we consider how to move it
across the creation operator $A^-_{\alpha(\xi)}(z)$ and $Y(e^{\beta},w)$.
\begin{lemma} \label{A1A0}
Let $\lambda=(\lambda_1,\ldots, \lambda_k), \mu=(\mu_1,\ldots, \mu_l)$ be two compositions of nonnegative integers, and
$\alpha_1, \ldots, \alpha_k, \beta_1, \ldots, \beta_l\in L$, then
\begin{equation}\label{a+a-}
A^+_{\alpha(\lambda)}(z)A^-_{\beta(\mu)}(w)= \sum\limits_{\lambda^\ast, \mu^\ast}C_{\lambda^\ast, \mu^\ast}(z,w)
A^-_{\beta(^c{\mu^\ast})}(w)A^+_{\alpha(^c{\lambda^\ast})}(z),
\end{equation}
where $\lambda^\ast, \mu^\ast$ run through all possible pairs of equal length subcompositions of $\lambda$ and permutations of subcompositions of $\mu$ respectively, $^c{\tau}$ denotes the complementary subcomposition of $\tau$, and
$C_{\lambda^\ast, \mu^\ast}(z,w) \in \mathbb Z[[z,z^{-1},w,w^{-1}]]$.
\end{lemma}
\begin{proof} For $\alpha, \beta\in L$, it is easy to see that for $m, n\geq 1$
\begin{equation*}
[A^+_{\alpha, m}(z), A^-_{\beta, n}(w)]=(-1)^{m-1}\langle\alpha, \beta\rangle\sum_{k=1}^{\infty}k\binom{k+m-1}{m-1}\binom{k-1}{n-1}z^{-k-m}w^{k-n}\in\mathbb
Z[[z^{-1}, w]]w^{-n+1}.
\end{equation*}
So the contraction function between the annihilation field $A^+_{\alpha, m}(z)$ and creation
field $A^-_{\beta, n}(w)$ is
\begin{equation}\label{e:C1}
C(A^+_{\alpha, m}(z), A^-_{\beta, n}(w))=\frac{(-1)^{m-1}\langle\alpha, \beta\rangle}{z^{m}w^{n}}\sum_{i=1}^{\infty}i\binom{i+m-1}{m-1}\binom{i-1}{n-1}\left(\frac{w}{z}\right)^i.
\end{equation}
Now for $\alpha(\lambda)=\alpha_1(-\lambda_1)\cdots \alpha_k(-\lambda_k)$ and $\beta(\mu)=\beta_1(-\mu_1)\cdots \beta_l(-\mu_l)$, the function
\begin{equation}\label{e:C2}
C(A^+_{\alpha(\lambda)}(z), A^-_{\beta(\mu)}(w))=\prod_{i=1}^kC(A^+_{\alpha_i, \lambda_i}(z), A^-_{\beta_i, \mu_i}(w))
\end{equation}
is the product of contraction functions between the two ordered sets of annihilation operators $\{A^+_{\alpha_i, \lambda_i}(z)\}_{i=1}^k$ and the
creation operators $\{A^-_{\beta_j, \mu_j}(w)\}_{j=1}^l$. Using Wick's theorem (cf. \cite{K}) we have that
\begin{align*}
A^+_{\alpha(\lambda)}(z)A^-_{\beta(\mu)}(w)&=A^+_{\alpha_1,\lambda_1}(z)\cdots
A^+_{\alpha_k,\lambda_k}(z)A^-_{\beta_1, \mu_1}(w)\cdots A^-_{\beta_l, \mu_l}(w)\\
&=\sum_{\lambda^*, \mu^*}C(A^+_{\alpha(\lambda^*)}(z), A^-_{\beta(\mu^*)}(w))A^-_{\beta(^c{\mu^*})}(w)A^+_{\alpha(^c{\lambda^*})}(z)
\end{align*}
summed over all possible paired subcompositions $\lambda^*$ of $\lambda$ and permutations $\mu^*$ of subcompositions of $\lambda$
with the same length, and $C(\ \ , \ \ )$ is defined in \eqref{e:C1}-\eqref{e:C2}. Explicitly one first selects a subcomposition $\lambda^*$ of $i$ parts and then pairs it with permutations of
any $i$-part subcomposition $\mu^*$, i.e., there are $\sum_{i\geq 0}\binom{k}{i}\binom{l}{i}i!$ such pairings. Also $\overline{\tau}$ (written $^c{\tau}$ below) denotes the complementary subcomposition of $\tau$ in the given composition. In particular, when $\lambda^*=\mu^*=\emptyset$, the summand is
$A^-_{\beta(\mu)}(w)A^+_{\alpha(\lambda)}(z)$.
\end{proof}
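To illustrate the expansion, consider the simplest case $k=l=1$: since the contraction function \eqref{e:C1} is exactly the commutator computed above, Wick's theorem reduces to the identity
\begin{equation*}
A^+_{\alpha, m}(z)A^-_{\beta, n}(w)=A^-_{\beta, n}(w)A^+_{\alpha, m}(z)+C(A^+_{\alpha, m}(z), A^-_{\beta, n}(w)),
\end{equation*}
whose two summands correspond to the pairings $\lambda^*=\mu^*=\emptyset$ and $\lambda^*=(m), \mu^*=(n)$, in agreement with the count $\sum_{i\geq 0}\binom{1}{i}^2i!=2$.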
The next lemma considers the commutation relation between $A^+_{\alpha(\lambda)}(z)$ with $Y(e^{\beta},w)$.
\begin{lemma} \label{A1Y}
Let $\lambda=(\lambda_1,\ldots \lambda_k) \in (\mathbb{Z}^+)^k, \alpha_i, \beta\in L$, then
\begin{align}\label{AY1}
A^+_{\alpha(\lambda)}(z)Y(e^{\beta},w)&=Y(e^{\beta}, w)
\prod \limits_{p=1}\limits^k \left(A^+_{\alpha_p, \lambda_p }(z)
+(-1)^{\lambda_p-1}\langle \alpha_p, \beta \rangle(z-w)^{-\lambda_p}
\right),\\ \label{AY2}
Y(e^{\beta},w)A^-_{\alpha(\lambda)}(z)&=\prod \limits_{p=1}\limits^k \bigg(A^-_{\alpha_p,\lambda_p}(z)-\langle\alpha_p, \beta \rangle
(w-z)^{-\lambda_p}\bigg)Y(e^{\beta},w),
\end{align}
where the rational functions refer to power series in $w$ in the first relation and $z$ in the second one.
\end{lemma}
\begin{proof} It follows from \eqref{AE} that
\begin{align*}
A^+_{\alpha_p, \lambda_p }(z)E^-(-\beta, w)e^{\beta}&
=E^-(-\beta,w)e^{\beta}\bigg(A^+_{\alpha_p, \lambda_p }(z)
+(-1)^{\lambda_p-1}\langle \alpha_p, \beta \rangle(z-w)^{-\lambda_p}\bigg),
\end{align*}
where the rational function is expanded at $w$. Similarly we also have
\begin{equation*}
E^+(-\beta, w)A^-_{\alpha_p,\lambda_p}(z)=\left(A^-_{\alpha_p,\lambda_p}(z)-\frac{\langle \alpha_p, \beta \rangle}{(w-z)^{\lambda_p}}\right)E^+(-\beta,w),
\end{equation*}
where the rational function $(w-z)^{-\lambda_p}$ is expanded as a power series in $z$.
The lemma is then proved by repeatedly applying $A^+_{\alpha_p, \lambda_p }(z)$ or $A^-_{\alpha_p, \lambda_p }(z)$ as above.
\end{proof}
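For a single factor ($k=1$), relation \eqref{AY1} just says that moving $A^+_{\alpha_1, \lambda_1}(z)$ past $Y(e^{\beta},w)$ produces the extra scalar term $(-1)^{\lambda_1-1}\langle\alpha_1,\beta\rangle(z-w)^{-\lambda_1}$, where the relevant expansion in powers of $w$ is
\begin{equation*}
(z-w)^{-\lambda_1}=\sum_{j\geq 0}\binom{\lambda_1+j-1}{j}w^jz^{-\lambda_1-j}\in \mathbb Z[[z^{-1}, w]],
\end{equation*}
so all coefficients appearing in these commutation relations are integral.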
We now prove our first main result of this paper.
\begin{theorem} The integral form $(V_L)_{\mathbb Z}$ and all of its irreducible modules $(V_{L+\gamma_i})_{\mathbb Z}$
associated with the vertex operator algebra $V_L$ are preserved by the divided powers of the general vertex operator $Y(v, z)$, where
$v=\alpha_1(-n_1)\alpha_2(-n_2)\cdots \alpha_k(-n_k)e^{\gamma} \in V_L, \gamma\neq 0$.
In particular, $s_{\alpha(\lambda)}e^{\beta}$ (resp. $s_{\alpha(\lambda)}e^{\beta+\gamma_i}$)
span a $\mathbb Z$-lattice for the vertex operator algebra $(V_L)_{\mathbb Z}$ (resp. its irreducible module $(V_{L+\gamma_i})_{\mathbb Z}$).
\end{theorem}
\begin{proof}
It suffices to consider the case of $(V_L)_{\mathbb Z}$. Write $v=\alpha_1(-\lambda_1)\cdots \alpha_k(-\lambda_k)e^{\gamma}=\alpha(\lambda)e^{\gamma}$; then
\begin{align}\notag
Y(v, z)&=\sum_nv_nz^{-n-1}=:\partial^{(\lambda_1-1)}\alpha_1(z)\cdots \partial^{(\lambda_k-1)}\alpha_k(z)Y(e^{\gamma}, z):\\ \label{e:Yv}
&=\sum_{\lambda^*}A^-_{\alpha(\lambda^*)}(z)Y(e^{\gamma}, z)A^+_{\alpha(\bar{\lambda}^*)}(z)
\end{align}
summed over all subpartitions $\lambda^*$ of $\lambda$.
Let $E(w)=E^-(-\beta_1,w_1)\cdots E^-(-\beta_l,w_l)\otimes e^{\eta} $. It is enough to consider $\eta=\sum_i\beta_i=|\beta|$,
so $E(w)=E^-(-\beta,w)e^{|\beta|}$, where $\beta=(\beta_1, \ldots, \beta_l), w=(w_1, \ldots, w_l)$.
By Lemma \ref{A1E} and \eqref{e:Yv}
\begin{align}\label{e:YE1}\notag
Y(v,z)E(w)
&=\sum \limits_{\lambda^*} A^-_{\alpha(\lambda^*)}(z)Y(e^{\gamma},z)A^+_{\alpha(\bar{\lambda}^*)}(z)E(w) \\ \notag
&=\sum \limits_{\lambda^*} f_{\alpha(\bar{\lambda}^*)}(\beta, z; w)A^-_{\alpha(\lambda^*)}(z)Y(e^{\gamma},z)E(w)\\
&=\epsilon(\gamma, |\beta|)\sum_{\lambda^*}f_{\alpha(\bar{\lambda}^*)}(\beta, z; w)A^-_{\alpha(\lambda^*)}(z)\prod_{i}(z-w_i)^{\langle\gamma, \beta_i\rangle}E^-(-(\beta,\gamma), w, z)e^{|\beta|+\gamma}\\ \notag
&=\sum_{\lambda^*}F_{\alpha(\lambda^*)}(z, w)A^-_{\alpha(\lambda^*)}(z)E^-(-(\beta,\gamma), w, z)e^{|\beta|+\gamma}
\end{align}
where $F_{\alpha(\lambda^*)}(z, w)\in \mathbb Z[[z, z^{-1}, w_i, w_i^{-1}]]$
as $f_{\alpha(\bar{\lambda}^*)}(\beta, z; w)$ is defined in \eqref{fzz}.
Note that
\begin{align*}
&Y(v, z_2)A^-_{\alpha(\lambda^*)}(z_1)E^-(-(\beta,\gamma), w,z_1)e^{|\beta|+\gamma}\\
&=\sum_{{\lambda^{(1)}}^*}\left(A^-_{\alpha({\lambda^{(1)}}^*)}(z_2)Y(e^{\gamma},z_2)A^+_{\alpha({^c\lambda^{(1)}}^*)}(z_2)\right)
A^-_{\alpha(\lambda^*)}(z_1)E^-(-(\beta, \gamma), w, z_1)e^{|\beta|+\gamma}\\
&=\sum_{{\lambda^{(1)}}^*}C_{\alpha({^c\lambda^{(1)}}^*), \alpha(\lambda^*)}(z_2, z_1)A^-_{\alpha({\lambda^{(1)}}^*)}(z_2)Y(e^{\gamma},z_2)
A^-_{\alpha(\lambda^*)}(z_1)A^+_{\alpha({^c\lambda^{(1)}}^*)}(z_2)E^-(-(\beta,\gamma), z_1\cup w)e^{|\beta|+\gamma}\\
&=\sum_{{\lambda^{(1)}}^*}C_{\alpha({^c\lambda^{(1)}}^*), \alpha(\lambda^*)}(z_2, z_1)A^-_{\alpha({\lambda^{(1)}}^*)}(z_2)Y(e^{\gamma},z_2)
A^-_{\alpha(\lambda^*)}(z_1)\\
&\hskip 5cm \cdot f_{\alpha({^c\lambda^{(1)}}^*)}((\beta, \gamma), z_2; w, z_1)E^-(-(\beta,\gamma), w, z_1)e^{|\beta|+\gamma}.
\end{align*}
Recalling Lemma \ref{A1Y} and \eqref{e:YE1}, the above can be written as:
\begin{align*}
&Y(v, z_2)Y(v,z_1)E\\
&=\sum_{\lambda^*, {\lambda^{(1)}}^*}F_{\lambda^*, {\lambda^{(1)}}^*}(z_1, z_2, w)(z_1-z_2)^{\langle \gamma, \gamma\rangle}A^-_{\alpha({\lambda^{(1)}}^*),\alpha(\lambda^*)}(z_1,z_2)
E^-(-(\beta,\gamma,\gamma), w, z_1, z_2)e^{|\beta|+2\gamma}
\end{align*}
for some series $F_{\lambda^*, {\lambda^{(1)}}^*}(z_1, z_2, w)\in\mathbb Z[[z_i, z_i^{-1}, w_j, w_j^{-1}]]$.
Continuing in this way, we have
\begin{align*}
&Y(v, z_r)\cdots Y(v, z_2)Y(v,z_1)E\\
&=\sum_{\lambda^*, {\lambda^{(1)}}^*, \ldots, {\lambda^{(r-1)}}^*}F_{\lambda^*, \ldots, {\lambda^{(r-1)}}^*}(z, w)\prod_{1\leq i<j\leq r}(z_i-z_j)^{\langle \gamma, \gamma\rangle}A^-_{\alpha({\lambda^{(r-1)}}^*),\ldots, \alpha(\lambda^*)}(z)
E^-(-(\beta,\gamma^r), w, z)e^{|\beta|+r\gamma}
\end{align*}
where $z=(z_1, \ldots, z_r), w=(w_1, \dots, w_l)$, and $F_{\lambda^*, \ldots, {\lambda^{(r-1)}}^*}(z, w)$ are some series in $\mathbb Z[[z_i, z_i^{-1}, w_j, w_j^{-1}]]$. The sum
runs through sequences of subpartitions $\lambda^*, {\lambda^{(1)}}^*, \ldots, {\lambda^{(r-1)}}^*$.
By a result in \cite{IJS}, for any vector $\alpha(\lambda)=\alpha_1(-\lambda_1)\cdots \alpha_k(-\lambda_k)$, the product $\alpha(\lambda)s_{\beta, -\mu}$
is still a $\mathbb Z$-linear combination of tensor product Schur functions. It follows that
$$A^-_{\alpha({\lambda^{(r-1)}}^*),\ldots, \alpha(\lambda^*)}(z)
E^-(-(\beta,\gamma^r), w, z)e^{|\beta|+r\gamma}$$
is also a $\mathbb Z$-linear combination of tensor product Schur functions
in view of our vertex operator realization \eqref{e:genSchur1}-\eqref{e:genSchur2}.
Finally it follows from Lemma \ref{r!} that the coefficient of $\frac{(z_1\cdots z_r)^{n+1}}{r!}$ in
$$Y(v, z_r)\cdots Y(v, z_2)Y(v,z_1)E$$
is always a $\mathbb Z$-linear combination of wreath product Schur functions, i.e. an element in the
$\mathbb Z$-form of the vertex operator algebra $V_L$.
\end{proof}
As $V_L$ is locally finite, for each vertex operator $Y(v, z)=\sum_{n\in\mathbb Z}v_nz^{-n-1}$, we can define
\begin{equation}
\exp(tv_n)=\sum_{r=0}^{\infty} v_n^{(r)}t^r
\end{equation}
as an element of the general linear group $\mathrm{GL}((V_L)_{\mathbb Z})$ or $\mathrm{GL}((V_{L+\gamma_i})_{\mathbb Z})$. By our main theorem, the element
$\exp(tv_n)u$ is in $(V_L)_{\mathbb Z}$ or $(V_{L+\gamma_i})_{\mathbb Z}$. So we can summarize:
\begin{theorem} Let $L$ be an integral lattice over $\mathbb Z$.
For any homogeneous $v\in (V_L)_{\mathbb Z}$ with nontrivial lattice part
and $Y(v, z)=\sum_{n\in\mathbb Z}v_nz^{-n-1}$, the operator $\exp(tv_n)$
defines an element of the group $\mathrm{GL}((V_L)_{\mathbb Z})$ (resp. $\mathrm{GL}((V_{L+\gamma_i})_{\mathbb Z})$).
\end{theorem}
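The invertibility of $\exp(tv_n)$, implicit in the statement, can be checked by a one-line divided-power computation; here we use only the identity $v_n^{(i)}v_n^{(j)}=\binom{i+j}{i}v_n^{(i+j)}$, which holds since $v_n^{(r)}=v_n^{r}/r!$:
\begin{equation*}
\exp(tv_n)\exp(sv_n)=\sum_{i,j\geq 0}t^is^jv_n^{(i)}v_n^{(j)}=\sum_{r\geq 0}\Big(\sum_{i+j=r}\binom{r}{i}t^is^j\Big)v_n^{(r)}=\exp((t+s)v_n).
\end{equation*}
In particular $\exp(tv_n)^{-1}=\exp(-tv_n)$, so $\exp(tv_n)$ indeed lies in $\mathrm{GL}((V_L)_{\mathbb Z})$ (resp. $\mathrm{GL}((V_{L+\gamma_i})_{\mathbb Z})$).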
Let $\mathbb F_q$ be a fixed finite field of characteristic $p$. The lattice vertex algebra $V_q$ over $\mathbb F_q$ associated with $V_L$ is
usually defined as $
V_q=\mathbb F_q\otimes (V_L)_{\mathbb Z},
$
which is known to be a simple vertex algebra when $\det(L)\neq 0$ in $\mathbb F_q$ (cf. \cite{Mu, DG2}).
\vskip30pt \centerline{\bf Acknowledgments}
The work is partially supported by
Simons Foundation grant No. 523868.
\bigskip
\section{Introduction}
Stochastic analysis on the path space over a complete Riemannian
manifold without boundary has been well developed since 1992 when B.
K. Driver \cite{D} proved the quasi-invariance theorem for the
Brownain motion on compact Riemannian manifolds. A key point of the
study is to first establish an integration by parts formula for the
associated gradient operator induced by the quasi-invariant flow,
then prove functional inequalities for the corresponding Dirichlet
form (see e.g. \cite{F,H,CHL} and references therein). Moreover, some
efforts have been made for the study of geometry and topology on
Riemannian path or loop spaces (see e.g. \cite{EL} and references
therein).
On the other hand, however, the analysis on the path space over a
manifold with boundary is still very open. To see this, let us
mention \cite{Z} where an integration by parts formula was
established on the path space of the one-dimensional reflecting
Brownian motion. Let e.g. $X_t=|b_t|$, where $b_t$ is the
one-dimensional Brownian motion. For $h\in C([0,T];\R)$ with
$h_0=0$ and $\int_0^T |\dot h_t|^2\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t<\infty$, let $\pp_h$ be the
derivative operator induced by the flow $X+\vv h$, i.e.
$$\pp_h F = \sum_{i=1}^n h_{t_i} \nabla} \def\pp{\partial} \def\EE{\scr E_i f(X_{t_1},\cdots, X_{t_n}),$$
where $n\in \mathbb N, 0<t_1<\cdots<t_n\le T$ and $F(X)=
f(X_{t_1},\cdots, X_{t_n})$ for some $f\in C^\infty(M^n).$ As the
main result of \cite{Z}, when $h\in C_0^2(0,T)$, \cite[Theorem
2.3]{Z} provides an integration by parts formula for $\pp_h$ by
using an infinite-dimensional generalized functional in the sense of
Schwartz. Since for non-trivial $h$ the flow is not
quasi-invariant, this integration by parts formula cannot be
formulated by using the distribution of $X$ with a density function,
and the induced gradient operator does not provide a Dirichlet form
on the $L^2$-space of the distribution of $X$.
In this paper, we shall define quasi-invariant flows on a
$d$-dimensional Riemannian manifold with boundary for all $h\in
\H$ in an intrinsic way, where
$$\H:= \bigg\{h\in C( [0,T]; \R^d):\ h_0=0, \int_0^T |\dot h_t|^2\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t<\infty\bigg\}$$
is the Cameron-Martin space. When $M$ is a half-space of $\R^d$,
which essentially reduces to the one-dimensional setting,
quasi-invariant flows have been constructed in \cite[\S 4(a)]{B} by
solving SDEs with reflecting boundary. We shall modify the idea to
the reflecting Brownian motion on a manifold with boundary. By
establishing integration by parts formula, these flows will be
linked to a damped gradient operator defined by using Hsu's
multiplicative functionals constructed in \cite{H}. From this we
will derive the Gross log-Sobolev inequality for the associated
Dirichlet form.
To explain the idea of the study in a simple way, we first
consider the one-dimensional situation. Let $l_t$ be the local time
of $X_t:= |b_t|$ at point $0$. We have
$$\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D X_t = \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D b_t + \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D l_t,\ \ X_0=0.$$ Now, for any $h\in \H$ and
$\vv>0$, let $X_t^{\vv,h}$ and its local time $l_t^{\vv,h}$ at $0$
solve the equation
$$ \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D X_t^{\vv,h}= \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D b_t +\vv \dot h_t\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t +\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D l_t^{\vv,h},\ \
X_0^{\vv,h}=0.$$ By the Girsanov theorem $\{b_t+\vv h_t:\ 0\le t\le
T\}$ is a Brownian motion under the probability $R_\vv \P$, where
$$R_\vv =\exp\bigg[-\vv \int_0^T \dot h_t \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D b_t -\ff {\vv^2} 2
\int_0^T |\dot h_t|^2 \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t\bigg]$$ is a functional of $X$ since $\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D
b_t= \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D X_t -\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D l_t$. Thus, the distribution of $X^{\vv,h}$ under
$R_\vv \P$ coincides with that of $X$ under $\P$. Therefore, the
flow $X^{\vv,h}$ is quasi-invariant. Moreover, it is easy to see
that
$X_t^{\vv,h}= |b_t+\vv h_t|.$ So, for a cylindrical
function $\gg\mapsto F(\gg)= f(\gg_{t_1},\cdots, \gg_{t_n})$, where $n\ge 1,
0< t_1<\cdots <t_n\le T,$
$f\in C_0^\infty(\R_+^n)$ and $\gg\in C([0,T]; [0,\infty))$ with
$\gg_0=0,$ one has
\begin} \def\beq{\begin{equation}} \def\F{\scr F{equation*}\begin} \def\beq{\begin{equation}} \def\F{\scr F{split} &\lim_{\vv\downarrow 0} \ff{F(X^{\vv,h})-
F(X)}\vv =\sum_{i=1}^n \text{sgn} (b_{t_i}) h_{t_i} \pp_i
f(X_{t_1},\cdots, X_{t_n})\\
&=\sum_{i=1}^n \text{sgn}(X_{t_i}-
l_{t_i}) h_{t_i} \pp_i f(X_{t_1},\cdots,
X_{t_n}),\end{split}\end{equation*} which is a functional of $X$.
Let $\tilde} \def\Ric{\text{\rm{Ric}} f(x_1,\cdots, x_n)= f(|x_1|, \cdots, |x_n|)$. We have
\beq\label{1.0} D_h^0 F:=\lim_{\vv\downarrow 0} \ff{F(X^{\vv,h})-
F(X)}\vv = \sum_{i=1}^n h_{t_i} \pp_i \tilde} \def\Ric{\text{\rm{Ric}} f(b_{t_1}, \cdots,
b_{t_n}).\end{equation} Combining this with the known integration by parts formula
for the Brownian motion, we obtain
\beq\label{00}\E D_h^0 F= \E\bigg\{F(X) \int_0^T \dot h_t \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D
b_t\bigg\}.\end{equation} Furthermore,
let the gradient of $F$ be fixed as an $\H$-valued random variable such
that $\<D^0F,h\>_\H= D_h^0 F, h\in \H.$ So,
$$(D^0F)_t= \sum_{i=1}^n (t_i\land t)\, \text{sgn}(X_{t_i}-l_{t_i})\, \pp_i
f(X_{t_1},\cdots, X_{t_n}).$$ Let $\mu$ be the distribution of
$X$. By (\ref{00}), the form
$$\EE(F,G)=\E\<D^0F, D^0G\>_\H$$ defined for cylindrical functions $F$ and $G$
is closable in $L^2(\mu)$, and the closure $(\EE,\D(\EE))$ is a conservative
Dirichlet form. Finally, by the known log-Sobolev inequality on the path space
of the Brownian motion, this Dirichlet form satisfies the log-Sobolev
inequality
$$\mu(F^2\log F^2)\le 2\EE(F,F),\ \ F\in \D(\EE).$$
The main purpose of this paper is to realize the above idea on the path space of the reflecting
Brownian motion on a Riemannian manifold with boundary. In this
case we no longer have an explicit expression of $D^0$. But in Section
2 we shall present an integration by parts formula, which identifies
the adapted projection of $D^0$ and that of the damped gradient
operator induced by Hsu's multiplicative functional constructed in
\cite{H}. This
integration by parts formula will be proved in Section 3. Finally, using the resulting integration by
parts formula, the standard log-Sobolev
inequality will be addressed in Section 4.
\section{Damped Gradient and Integration by Parts}
Let $M$ be a $d$-dimensional compact connected Riemannian manifold
with boundary $\pp M$. Let $o\in M$ and $T>0$ be fixed. Then the
path space for the reflecting Brownian motion on $M$ starting at $o$
is
$$W= \{\gg\in C([0,T]; M):\ \gg_0=o\}.$$ Let $B_t$ be the
$d$-dimensional Brownian motion on a complete probability space
$(\OO,\F,\P)$ with natural filtration $\{\F_t\}_{t\ge 0}.$ For any
$x\in M$, let $O_{x}(M)$ be the set of all orthonormal bases for the
tangent space $T_{x}M$ at point $x$, and let $O(M):= \cup_{x\in M}
O_x(M)$ be the frame bundle. Then for any $X_0\in M$, the reflecting
Brownian motion can be constructed by solving the SDE
\beq\label{2.1} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D X_t= u_t\circ \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D B_t +N(X_t)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D l_t,\end{equation}
where $u_t\in O_{X_t}(M)$ is the horizontal lift of $X_t$ on the
frame bundle $O(M)$, and $l_t$ is the local time of $X_t$ on the
boundary $\pp M.$ Let $\mu$ be the distribution of $X:=\{X_t:\ 0\le
t\le T\}$ for $X_0=o$. Then $\mu$ is a probability measure on the
path space $W$.
To define the damped gradient operator, let us introduce the
multiplicative functional constructed in \cite{H}. To this end, we
need to introduce some $\R^d\bigotimes\R^d$-valued functionals on
the frame bundle. Let $\text{\rm{Ric}}} \def\Hess{\text{\rm{Hess}}$ be the Ricci curvature on $M$ and $\II$
the second fundamental form on $\pp M$. For any $u\in O(M)$, let
$$R_u (a,b)= \text{\rm{Ric}}} \def\Hess{\text{\rm{Hess}}(au, bu),\ \ a,b\in \R^d.$$ Let $\pi_\pp: TM\to T\pp M$ be the orthogonal projection
at points on $\pp M$, and let $\pi: O(M)\to M$ be the canonical
projection. For any $u\in O(M)$ with $\pi u\in\pp M$, let
$$\II_u(a,b)= \II(\pi_\pp au, \pi_\pp bu),\ \ a,b \in \R^d.$$ Finally,
let $N$ be the inward unit normal vector field on $\pp M$. For $\pi
u\in \pp M$, let
$$P_u(a,b)= \<ua, N\>\<ub, N\>,\ \ a,b\in \R^d.$$
For any $u_0\in O(M)$, let $X_t$ be the reflecting Brownian motion
on $M$ with horizontal lift $u_t$. For any $\vv>0$, let $ Q_t^\vv$
solve the following SDE on $\R^d\otimes\R^d$:
\beq\label{PP1} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D Q_t^\vv =- Q_t^\vv \Big\{\ff 1 2 R_{u_t} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t +
\big(\vv^{-1}P_{u_t} +\II_{u_t}\big)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D l_t\Big\},\ \ \ Q_0^\vv =
I.\end{equation} According to \cite[Theorem 3.4]{H}, when
$\vv\downarrow 0$ the process $Q_t^\vv$ converges in $L^2$ to an
adapted right-continuous process $Q_t$ with left limit, such that
$Q_tP_{u_t}=0$ if $X_t=\pi u_t\in \pp M.$ Consequently, if $\text{\rm{Ric}}} \def\Hess{\text{\rm{Hess}}\ge
-K$ and $\II\ge -\sigma} \def\ess{\text{\rm{ess}}$ for some continuous functions $K$ and $\sigma} \def\ess{\text{\rm{ess}}$ on
$M$, then
$$\|Q_t\|\le \exp\bigg\{\ff 1 2 \int_0^t K(X_s)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D s + \int_0^t
\sigma} \def\ess{\text{\rm{ess}}(X_s)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D l_s\bigg\},\ \ t\ge 0,$$ where $\|\cdot\|$ is the
operator norm on $\R^d$. In particular, $\E\|Q_t\|^p<\infty$ holds
for any $p>1$. For $f\in C(M)$, let
$$P_t f(x)= \E f(X_t^x),\ \ \ x\in M,$$ where and in the sequel, $X_t^x$ denotes the solution to (\ref{2.1}) for
$X_0=x$. Then $P_t$ is the Neumann semigroup.
By \cite[Theorem 4.2]{H}
(see also the last display in the proof of \cite[Theorem 5.1]{H}),
$s\mapsto Q_s u_s^{-1} \nabla} \def\pp{\partial} \def\EE{\scr E P_{t-s} f (X_s)$ is a martingale. So,
\beq\label{2.2} u_0^{-1} \nabla} \def\pp{\partial} \def\EE{\scr E P_t f(x) = \E \big\{ Q_t^x
(u_t^x)^{-1} \nabla} \def\pp{\partial} \def\EE{\scr E f(X_t^x)\big\},\ \ x\in M, u_0\in
O_x(M),\end{equation} where $Q^x_t$ and $u^x_t$ are the
multiplicative functional and horizontal lift of $X_t^x$.
In general, for $s\ge 0$, let $( Q_{s, t+s})_{t\ge 0}$ be the
associated multiplicative functional for the process
$(X_{t+s})_{t\ge 0}.$ We have
\beq\label{Q} Q_{s,t} Q_{t,r}= Q_{s,r},\ \ 0\le s\le t\le
r.\end{equation} We shall use these multiplicative functionals to
define the damped gradient operator (see \cite{FM} for the damped
gradient operator for manifolds without boundary).
Let
$$ \scr FC^\infty=\big\{ W\ni \gg\mapsto f(\gg_{t_1},\cdots, \gg_{t_n}): n\ge 1,
0<t_1<\cdots< t_n\le T, f\in C^\infty(M^n)\big\}$$ be the class of
smooth cylindrical functions on $W$. For any $F\in
\scr FC^\infty$ with $F(\gg)=f(\gg_{t_1},\cdots, \gg_{t_n}),$ define
the damped gradient $DF$ as an $\H$-valued random variable by
setting $(DF)_0=0$ and
$$\ff{\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D}{\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t}(DF)_t= \sum_{i=1}^n 1_{\{t<t_i\}} Q_{t, t_i} u_{t_i}^{-1}
\nabla} \def\pp{\partial} \def\EE{\scr E_i f(X_{t_1},\cdots, X_{t_n}),\ \ t\in [0,T],$$ where $\nabla} \def\pp{\partial} \def\EE{\scr E_i$
denotes the gradient operator w.r.t. the $i$-th component. Then, for
any $\H$-valued random variable $h$, we have
\beq\label{Dh} D_h F:= \<DF, h\>_\H= \sum_{i=1}^n \int_0^{t_i}
\<u_{t_i}^{-1} \nabla} \def\pp{\partial} \def\EE{\scr E_i f(X_{t_1},\cdots, X_{t_n}), Q_{t, t_i}^* \dot
h_t\>\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t.\end{equation}
Next, let $\tilde} \def\Ric{\text{\rm{Ric}} \H$ denote the set of all square-integrable
$\H$-valued adapted random variables, i.e.
$$\tilde} \def\Ric{\text{\rm{Ric}} \H=\big\{h\in L^2(\OO\to \H;\P):\ h_t \ \text{is}\ \F_t\text{-measurable},\ t\in [0,T]\big\}.$$
Then $\tilde} \def\Ric{\text{\rm{Ric}}\H$ is a Hilbert
space with inner product
$$\<h,\tilde} \def\Ric{\text{\rm{Ric}} h\>_{\tilde} \def\Ric{\text{\rm{Ric}} \H}:= \E \int_0^T \<\dot h_t, \dot{\tilde} \def\Ric{\text{\rm{Ric}} h}_t\>\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D
t=\E\<h,\tilde} \def\Ric{\text{\rm{Ric}} h\>_\H,\ \ h,\tilde} \def\Ric{\text{\rm{Ric}} h\in \tilde} \def\Ric{\text{\rm{Ric}} \H.$$
To describe $DF$ by using a quasi-invariant flow, for $h\in\tilde} \def\Ric{\text{\rm{Ric}}\H$
and $\vv>0$ let $X_t^{\vv,h}$ solve the SDE
\beq\label{2.3} \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D X_t^{\vv,h} = u_t^{\vv,h} \circ \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D B_t
+N(X_t^{\vv,h})\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D l_t^{\vv,h} +\vv \dot h_t u_t^{\vv,h}\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t,\ \
X_0^{\vv,h}=X_0=\pi u_0,\end{equation} where $l_t^{\vv,h}$ and
$u_t^{\vv,h}$ are, respectively, the local time on $\pp M$ and the
horizontal lift on $O(M)$ for $X_t^{\vv,h}$. To see that
$\{X^{\vv,h}\}_{\vv\ge 0}$ has the flow property, let
$$\Theta: W_0:=\{\oo\in C([0,T];\R^d):\ \oo_0=0\}\to W$$ be measurable such
that $X=\Theta(B)$. For any $\vv>0$ and a function $\Phi: W_0\to W$,
let $(\theta_\vv^h \Phi)(\oo)= \Phi(\oo+\vv h).$ Then $X^{\vv,h}=
(\theta_\vv^h \Theta)(B), \vv\ge 0$. Hence,
$$X^{\vv_1+\vv_2,h}= \theta_{\vv_1}^h X^{\vv_2,h},\ \ \
\vv_1,\vv_2\ge 0.$$ We shall try to link the multiplicative
functional $ Q$ to the vector field (if it exists) generating the flow
$X^{\vv,h}.$
First of all, let us explain that the flow $X^{\vv,h}$ is
quasi-invariant. Let
$$R^{\vv,h}= \exp\bigg[\vv \int_0^T\<\dot h_t, \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D B_t\> -\ff {\vv^2}
2 \int_0^T |\dot h_t|^2\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t\bigg].$$ By the Girsanov theorem,
$$B_t^{\vv,h}:= B_t-\vv h_t$$ is the $d$-dimensional Brownian motion
under the probability $R^{\vv,h}\P.$ Thus, the distribution of $X$
under $R^{\vv,h}\P$ coincides with that of $X^{\vv,h}$ under $\P.$
Therefore, $X^{\vv,h}$ is quasi-invariant.
The following integration by parts formula provides a link between
the damped gradient $D$ and the flow $X^{\vv,h}$.
\begin} \def\beq{\begin{equation}} \def\F{\scr F{thm}\label{T2.1} For any $u_0\in O(M)$ and $F\in \scr
FC^\infty$,
$$ \E\big\{D_h F\big\}= \lim_{\vv\downarrow 0} \E
\ff{F(X^{\vv,h})-F(X)}\vv =\E \bigg\{F(X)\int_0^T \<\dot h_t, \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D
B_t\>\bigg\}$$ holds for all $h\in \tilde} \def\Ric{\text{\rm{Ric}} \H_b$, the set of all
elements in $\tilde} \def\Ric{\text{\rm{Ric}} \H$ with bounded $\|h\|_\H.$ \end{thm}
\paragraph{Remark 2.1.} Since $\tilde} \def\Ric{\text{\rm{Ric}} \H_b$ is dense in $\tilde} \def\Ric{\text{\rm{Ric}} \H$, the above result implies
that the projection of $D$ onto $\tilde} \def\Ric{\text{\rm{Ric}}\H$ can be determined by the
flows $X^{\vv,h}, h\in \tilde} \def\Ric{\text{\rm{Ric}}\H_b.$ But in general
\beq\label{Q*}D_h F= \lim_{\vv\downarrow 0} \ff{F(X^{\vv,h})-F(X)}
\vv,\ \ h\in\tilde} \def\Ric{\text{\rm{Ric}} \H\end{equation} does not hold, so that the flow
$\{X^{\vv,h}\}$ is not generated by the vector field
$$W\ni \gg\mapsto \bigg\{u_t(\gg) \int_0^t Q_{s,t}^*(\gg) \dot h_s \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D s\in T_{\gg_t}M: 0\le t\le T\bigg\},$$
where $u_s(\gg)$ and $ Q_s(\gg)$ are the horizontal lift and the
multiplicative functional of $\gg$ respectively.
To disprove (\ref{Q*}), let us consider $M=[0,1]\subset \R$ and
$X_0=0$. Let $h_t=t$ and $F(\gg)= \gg_1$, i.e. $f(x)=x$. By (\ref{1.0}) we have
\beq\label{P1} \lim_{\vv\downarrow 0} \ff{F(X^{\vv,h})- F(X)}\vv
=f'(X_1)\,\text{sgn}(X_1-l_1)= \text{sgn}(X_1-l_1),\end{equation}
provided
$$\tau_1:= \inf\{t>0:\ X_t=1\} >1.$$ On the other hand, for the one-dimensional case we
have $R_u=\II_u=0$ and $P_u=1.$ Then by (\ref{PP1})
$$Q_{s,t}^\vv= \exp\big[-\vv^{-1} (l_t-l_s)\big]>0,\ \ s\le t.$$
This implies that $Q_{t,1}\ge 0$ for all $t\in [0,1]$. Combining
this with (\ref{Dh}) we obtain
\beq\label{P2} D_h F= f'(X_1) \int_0^1 Q_{t,1}\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t=\int_0^1
Q_{t,1}\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t\ge 0.\end{equation} On the other hand,
$$\P\big( X_1-l_1<0,\tau_1>1\big)= \P\Big(\inf_{s\in [0,1]} B_s<0,
\sup_{s\in [0,1]}B_s<1\Big)>0,$$ where $B_s$ is now the
one-dimensional Brownian motion. Combining this with (\ref{P1})
and (\ref{P2}), we see that $(\ref{Q*})$ does not hold.
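In fact, the formula for $Q^\vv_{s,t}$ above makes the obstruction transparent: letting $\vv\downarrow 0$ gives, in this one-dimensional example,
\begin{equation*}
Q_{t,1}=\lim_{\vv\downarrow 0}\exp\big[-\vv^{-1}(l_1-l_t)\big]=1_{\{l_1=l_t\}},
\end{equation*}
so that $D_hF=\int_0^1 1_{\{l_1=l_t\}}\,dt$ is the length of the time interval after the last zero of $X$ before time $1$, which is strictly positive a.s., while the pathwise derivative in (\ref{P1}) takes the value $-1$ with positive probability.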
\
To prove Theorem \ref{T2.1}, we need some preparations. In
particular, we shall use (\ref{2.2}) and an induction argument as in
\cite{H2} for the case without boundary.
\section{Proof of Theorem \ref{T2.1}}
\begin} \def\beq{\begin{equation}} \def\F{\scr F{lem}\label{L3.1} Let $u_0\in O(M)$ and $F\in \F C^\infty.$ Then
$$\lim_{\vv\downarrow 0} \E \ff{F(X^{\vv,h})-F(X)}\vv =\E
\bigg\{F(X)\int_0^T \<\dot h_t, \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D B_t\>\bigg\}$$ holds for all
$h\in \tilde} \def\Ric{\text{\rm{Ric}} \H_b$.\end{lem}
\begin} \def\beq{\begin{equation}} \def\F{\scr F{proof} Let $B_t^{\vv,h}= B_t-\vv h_t,$ which is the
$d$-dimensional Brownian motion under $R^{\vv,h}\P.$ Reformulating
(\ref{2.1}) as
$$\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D X_t= u_t\circ \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D B_t^{\vv,h} + N(X_t)\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D l_t +\vv \dot h_t
u_t\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D t,\ \ X_0=\pi u_0.$$ By the weak uniqueness of (\ref{2.3}),
we conclude that the distribution of $X$ under $R^{\vv,h}\P$
coincides with that of $X^{\vv,h}$ under $\P$. In particular, $\E
F(X^{\vv,h})= \E [R^{\vv,h} F(X)]$. Thus,
\begin} \def\beq{\begin{equation}} \def\F{\scr F{equation*}\begin} \def\beq{\begin{equation}} \def\F{\scr F{split} &\lim_{\vv\downarrow 0} \E
\ff{F(X^{\vv,h})-F(X)}\vv
=\lim_{\vv\downarrow 0} \E\Big\{F(X)
\cdot\ff{R^{\vv,h}-1}\vv\Big\}\\
&= \E\bigg\{F(X) \int_0^T \<\dot h_t, \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D
B_t\>\bigg\},\end{split}\end{equation*} where the last step is due
to the dominated convergence theorem since $\{R^{\vv,h}\}_{\vv\in
[0,1]}$ is uniformly integrable for $h\in \tilde} \def\Ric{\text{\rm{Ric}}\H_b$. \end{proof}
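The pointwise limit used in the last step is simply the derivative of the exponential density at $\vv=0$: since $\log R^{\vv,h}$ is a quadratic polynomial in $\vv$ vanishing at $\vv=0$,
\begin{equation*}
\lim_{\vv\downarrow 0}\ff{R^{\vv,h}-1}\vv=\ff{\text{\rm{d}}}{\text{\rm{d}}\vv}\bigg|_{\vv=0}\exp\bigg[\vv \int_0^T\<\dot h_t, \text{\rm{d}} B_t\> -\ff {\vv^2} 2 \int_0^T |\dot h_t|^2\text{\rm{d}} t\bigg]=\int_0^T\<\dot h_t, \text{\rm{d}} B_t\>,
\end{equation*}
and the uniform integrability upgrades this a.s. convergence to convergence of expectations.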
\begin} \def\beq{\begin{equation}} \def\F{\scr F{lem}\label{L3.2} For any $n\ge 1, 0<t_1<\cdots<t_n\le T$, and
$f\in C^\infty(M^n)$,
$$u_0^{-1} \nabla} \def\pp{\partial} \def\EE{\scr E_x \E f(X_{t_1}^x,\cdots, X_{t_n}^x)
= \sum_{i=1}^n \E \big\{Q^x_{t_i} (u_{t_i}^x)^{-1} \nabla} \def\pp{\partial} \def\EE{\scr E_i
f(X_{t_1}^x,\cdots, X_{t_n}^x)\big\} $$ holds for all $x\in M$ and
$u_0\in O_x(M),$ where $\nabla} \def\pp{\partial} \def\EE{\scr E_x$ denotes the gradient w.r.t.
$x$.\end{lem}
\begin} \def\beq{\begin{equation}} \def\F{\scr F{proof} By (\ref{2.2}), the desired assertion holds for $n=1$.
Assume that it holds for $n=k$ for some natural number $k\ge 1$. It
remains to prove the assertion for $n=k+1.$ To this end, set
$$g(x)= \E f(x, X_{t_2-t_1}^x,\cdots, X_{t_{k+1}-t_1}^x),\ \ x\in M.$$
By the assumption for $n=k$ we have
$$u_0^{-1} \nabla} \def\pp{\partial} \def\EE{\scr E g(x) = \sum_{i=1}^{k+1}
\E \big\{Q^x_{t_i-t_1}(u_{t_i-t_1}^x)^{-1} \nabla} \def\pp{\partial} \def\EE{\scr E_i f(x,
X_{t_2-t_1}^x,\cdots, X_{t_{k+1}-t_1}^x)\big\}$$ for all $ x\in M,
u_0\in O_x(M).$ Combining this with the assertion for $n=1$ and
using the Markov property, we obtain
\begin} \def\beq{\begin{equation}} \def\F{\scr F{equation*}\begin} \def\beq{\begin{equation}} \def\F{\scr F{split}& u_0^{-1} \nabla} \def\pp{\partial} \def\EE{\scr E_x \E f(X_{t_1}^x,\cdots,
X_{t_{k+1}}^x) =u_0^{-1} \nabla} \def\pp{\partial} \def\EE{\scr E_x
\E g(X_{t_1}^x) \\
&= \E \big\{Q_{t_1}^x (u_{t_1}^x)^{-1} \nabla} \def\pp{\partial} \def\EE{\scr E g(X_{t_1}^x)\big\} =
\sum_{i=1}^{k+1} \E \big\{ Q_{t_i}^x (u_{t_i}^x)^{-1} \nabla} \def\pp{\partial} \def\EE{\scr E_i
f(X_{t_1}^x,\cdots, X_{t_{k+1}}^x)\big\}.\end{split}\end{equation*}
\end{proof}
\begin} \def\beq{\begin{equation}} \def\F{\scr F{lem}\label{L3.3} Let $f\in C^\infty(M)$. Then for any
$u_0\in O(M)$ and $t>0$,
$$\E\bigg\{f(X_{t})\int_0^{t} \<\dot h_s, \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D B_s\>\bigg\}= \E \int_0^{t}\<
u_{t}^{-1} \nabla} \def\pp{\partial} \def\EE{\scr E f(X_{t}), Q_{s,t}^*\dot h_{s}\>\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D s,\ \
h\in\tilde} \def\Ric{\text{\rm{Ric}}\H.$$\end{lem}
\begin} \def\beq{\begin{equation}} \def\F{\scr F{proof} Noting that
$$\ff{\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D}{\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D s} P_s f= \ff 1 2\DD P_s f,\ \ NP_sf|_{\pp M}=0,\ \
s>0,$$ by (\ref{2.1}) and the
It\^o formula we obtain
$$\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D (P_{t-s}f)(X_s)= \<\nabla} \def\pp{\partial} \def\EE{\scr E P_{t-s} f(X_s), u_s\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D B_s\>,\ \ s\in
[0,t).$$ This implies
$$f(X_{t})= P_t f(X_0) +\int_0^{t} \<u_s^{-1}\nabla} \def\pp{\partial} \def\EE{\scr E P_{t-s}f(X_s), \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D
B_s\>.$$ Therefore,
\begin} \def\beq{\begin{equation}} \def\F{\scr F{equation}\label{BB} \E\bigg\{f(X_{t}) \int_0^{t} \<\dot h_s,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D
B_s\>\bigg\}= \E\int_0^{t}\<u_s^{-1} \nabla} \def\pp{\partial} \def\EE{\scr E P_{t-s} f(X_s), \dot
h_s\>\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D s.\end{equation} By (\ref{2.2}) and the Markov property we
have
$$u_s^{-1} \nabla} \def\pp{\partial} \def\EE{\scr E P_{t-s} f(X_s)
=\E\big( Q_{s,t}u_{t}^{-1}\nabla} \def\pp{\partial} \def\EE{\scr E f(X_{t})\big|\F_{s}\big).$$ So, the
desired formula follows from (\ref{BB}) since $\dot h_s$ is
$\F_s$-measurable.
\end{proof}
As a consequence of (\ref{2.2}) and Lemma \ref{L3.3}, we have the
following Bismut formula.
\begin} \def\beq{\begin{equation}} \def\F{\scr F{cor} Let $P_t f(x)= \E^x f(X_t),\ t\ge 0, x\in M, f\in C(M).$
Then for any $v\in T_x M$ and any $h\in \tilde} \def\Ric{\text{\rm{Ric}} \H$ with $h_t= u_0^{-1}
v$,
$$\<v, \nabla} \def\pp{\partial} \def\EE{\scr E P_t f(x)\>= \E^x \bigg\{f(X_t) \int_0^t
\< Q_s^*\dot h_s,\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D B_s\>\bigg\}.$$\end{cor}
\begin} \def\beq{\begin{equation}} \def\F{\scr F{proof} By (\ref{Q}) and applying Lemma \ref{L3.3} to $\tilde} \def\Ric{\text{\rm{Ric}}
h\in\tilde} \def\Ric{\text{\rm{Ric}} \H$ in place of $h$, where $\dot{\tilde} \def\Ric{\text{\rm{Ric}} h}_s= Q_s^* \dot h_s,$
we obtain
$$\E\bigg\{f(X_t) \int_0^t \<Q_s^* \dot h_s, \text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D B_s\>\bigg\} =\E \int_0^t
\<u_t^{-1} \nabla} \def\pp{\partial} \def\EE{\scr E f(X_t), Q_t^* \dot h_s\>\text{\rm{d}}} \def\bb{\beta} \def\aa{\alpha} \def\D{\scr D s= \E \<Q_t u_t^{-1}\nabla} \def\pp{\partial} \def\EE{\scr E
f(X_t), u_0^{-1}v\>.$$ Then the proof is completed by combining this
with (\ref{2.2}). \end{proof}
\ \newline \emph{Proof of Theorem \ref{T2.1}.} By Lemma \ref{L3.1},
it suffices to prove
\beq\label{3.*} \E\{D_h F\} =\E \bigg\{F(X)\int_0^T\<\dot h_t, \text{\rm{d}}
B_t\>\bigg\},\ \ h\in\tilde \H\end{equation} for $F = f(X_{t_1},
\cdots, X_{t_n})$ with $f\in C^\infty(M^n),$ where $n\ge 1,
0<t_1<\cdots<t_n\le T$. According to Lemma \ref{L3.3}, (\ref{3.*})
holds for $n=1.$ Assuming (\ref{3.*}) holds for $n=k$ for some $k\ge
1$, we aim to prove it for $n=k+1.$ To this end, let
$$g(x)= \E f(x, X_{t_2-t_1}^x,\cdots, X_{t_{k+1}-t_1}^x),\ \ x\in M.$$
By the result for $n=1$ and the Markov property,
\beq\label{2.4} \begin{split} \int_0^{t_1} \E\< u_{t_1}^{-1}\nabla
g(X_{t_1}), Q_{t,t_1}^*\dot h_{t}\>\text{\rm{d}} t &= \E\bigg\{\E(F(X)|\scr
F_{t_1})\int_0^{t_1}\<\dot h_t,\text{\rm{d}}
B_t\>\bigg\}\\
&= \E\bigg\{F(X)\int_0^{t_1} \<\dot h_t, \text{\rm{d}}
B_t\>\bigg\}.\end{split}\end{equation} On the other hand, by
(\ref{Q}), Lemma \ref{L3.2} and the Markov property,
\begin{equation*}\begin{split} &\int_0^{t_1}\E\< u_{t_1}^{-1}\nabla
g(X_{t_1}), Q_{t,t_1}^* \dot h_{t}\>\text{\rm{d}} t\\ &=
\int_0^{t_1}\E\Big\<\E\Big(\sum_{i=1}^{k+1} Q_{t_1,t_i}
u_{t_i}^{-1} \nabla_i f(X_{t_1},\cdots, X_{t_{k+1}})\Big|\scr
F_{t_1}\Big),Q_{t,t_1}^*\dot h_t \Big\>\text{\rm{d}} t\\
&= \sum_{i=1}^{k+1}\int_0^{t_1}\E\< u_{t_i}^{-1} \nabla_i
f(X_{t_1},\cdots, X_{t_{k+1}}),
Q_{t, t_i}^*\dot h_t \>\text{\rm{d}} t.\end{split}\end{equation*} Combining this with
(\ref{Dh}) and (\ref{2.4}) we obtain
\beq\label{2.5} \begin{split} \E\big\{D_h F\big\} =
&\E\bigg\{F(X)\int_0^{t_1} \<\dot h_t, \text{\rm{d}} B_t\> \bigg\}\\& +\E
\sum_{i=2}^{k+1} \int_{t_1}^{t_i} \< u_{t_i}^{-1} \nabla_i f(X_{t_1},
\cdots, X_{t_{k+1}}), Q_{t, t_i}^*\dot h_t \>\text{\rm{d}} t
.\end{split}\end{equation} By the Markov property and the induction
assumption for $n=k$, we have
$$\sum_{i=2}^{k+1} \E\int_{t_1}^{t_i} \< u_{t_i}^{-1} \nabla_i
f(X_{t_1}, \cdots, X_{t_{k+1}}), Q_{t, t_i}^*\dot h_t \>\text{\rm{d}} t =\E
\bigg\{F(X)\int_{t_1}^{T}\<\dot h_t,\text{\rm{d}} B_t\>\bigg\}.$$ Combining this
with (\ref{2.5}) we complete the proof.\qed
\section{The Log-Sobolev Inequality}
Let $\mu$ be the distribution of $X$ with $X_0=o$, and let
$$\EE (F,G)= \E\<DF, DG\>_\H,\ \ F,G\in \scr FC^\infty.$$
Since both $DF$ and $DG$ are functionals of $X$,
$(\EE,\scr FC^\infty)$ is a positive bilinear form on $L^2(W;\mu)$.
It is standard that the integration by parts formula (\ref{3.*})
implies the closability of the form (see Lemma \ref{L4.0}). We
shall use $(\EE,\D(\EE))$ to denote the closure of $(\EE,\scr
FC^\infty)$. Moreover, (\ref{3.*}) also implies the Clark-Ocone
type martingale representation formula (see Lemma \ref{L4.1}), which
leads to the standard Gross \cite{G} log-Sobolev inequality. It is
well known that the log-Sobolev inequality implies that the
associated Markov semigroup is hypercontractive and converges
exponentially to $\mu$ in the sense of relative entropy.
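For the reader's convenience we record the quantitative form of these two consequences; this is a standard application of Gross' theorem with the constant $2$ appearing in the theorem below, and the explicit constants are not computed in the original text.

```latex
% Writing (T_t) for the Markov semigroup associated with (\EE,\D(\EE)):
% hypercontractivity reads
$$\|T_t F\|_{L^q(\mu)} \le \|F\|_{L^p(\mu)}\quad \text{whenever } \mathrm{e}^{2t}\ge \ff{q-1}{p-1},\ 1<p<q<\infty,$$
% and the relative entropy decays exponentially:
$$\mathrm{Ent}_\mu(T_t F) \le \mathrm{e}^{-2t}\,\mathrm{Ent}_\mu(F),\qquad \mathrm{Ent}_\mu(F):= \mu(F\log F)-\mu(F)\log\mu(F),\ \ F>0.$$
```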
\begin{lem}\label{L4.0} $(\EE,\scr FC^\infty)$ is closable in
$L^2(W;\mu)$.
\end{lem}
\begin{proof} Although the proof is standard, relying on the integration
by parts formula, we include it here for completeness. Let
$\{F_n\}_{n\ge 1}\subset \scr FC^\infty$ be such that $\EE(F_n,F_n)\le
1$ for all $n\ge 1$ and $\mu(F_n^2)+\EE(F_n-F_m,F_n-F_m)\to 0$ as
$n,m\to\infty.$ We aim to prove that $\EE(F_n,F_n)\to 0$ as
$n\to\infty.$ Since
$$\EE(F_n,F_n)= \EE(F_n,F_n-F_m)+ \EE(F_n,F_m)\le \ss{\EE(F_n-F_m,
F_n-F_m)} +\EE(F_n,F_m),$$ it suffices to show that for any $G\in
\scr FC^\infty$, one has $\EE(F_n,G)\to 0$ as $n\to\infty$. To this
end, let $\{h^i\}_{i\ge 1}$ be an ONB on $\H$. For any $\vv>0$ there
exists $k\ge 1$ such that
$$\Big|\EE(F_n,G)-\sum_{i=1}^k \E(D_{h^i} F_n)(D_{h^i}G) \Big|<\vv,$$ where
$D_hF:= \<DF,h\>_{\H}$ for $F\in \scr FC^\infty$ and $h\in\H.$ Since
$\scr FC^\infty$ is dense in $L^2(W;\mu)$, there exists $G_i\in \scr
FC^\infty$ such that
$$\E|D_{h^i} G- G_i|^2<\vv,\ \ 1\le i\le k.$$ Therefore,
$$|\EE(F_n,G)|\le 2\vv + \sum_{i=1}^k \big|\E \<G_iDF_n, h^i\>_\H\big|.$$
Noting that $G_i DF_n= D(F_nG_i)- F_nDG_i$, by (\ref{3.*}) we obtain
$$|\EE(F_n,G)| \le 2\vv +\sum_{i=1}^k \bigg|\E
\bigg[F_n(X)\bigg\{G_i(X)\int_0^T \<\dot{h}^i_t, \text{\rm{d}} B_t\> -D_{h^i}
G_i\bigg\}\bigg]\bigg|.$$ Since $\mu(F_n^2)\to 0$ as $n\to\infty$,
by letting first $n\to\infty$ then $\vv\to 0$ we complete the proof.
\end{proof}
\begin{lem}\label{L4.1} For any $F\in \scr FC^\infty$, let $\tilde D F$
be the projection of $DF$ on $\tilde\H$, i.e.
$$\ff{\text{\rm{d}}}{\text{\rm{d}} t} (\tilde DF)_t = \E\Big(\ff{\text{\rm{d}} }{\text{\rm{d}} t} (DF)_t\Big|\F_t\Big),\ \ t\in [0,T],\quad (\tilde DF)_0=0.$$
Then
$$F(X)= \E F(X) +\int_0^T \Big\<\ff{\text{\rm{d}} }{\text{\rm{d}} t} (\tilde DF)_t, \text{\rm{d}}
B_t\Big\>.$$\end{lem}
\begin{proof} By Theorem \ref{T2.1}, we have
\beq\label{4.1} \E\<h, \tilde DF\>_\H= \E\bigg\{F(X)\int_0^T \<\dot
h_t, \text{\rm{d}} B_t\>\bigg\},\ \ h\in\tilde\H.\end{equation} On the other
hand, by the martingale representation, there exists a predictable
process $\bb_t$ such that
\beq\label{4.2} \E(F(X)|\F_t) = \E F(X) + \int_0^t \<\bb_s, \text{\rm{d}}
B_s\>,\ \ \ t\in [0,T].\end{equation} Let
$$\varphi_t= \int_0^t \bb_s\text{\rm{d}} s,\ \ \ t\in [0,T].$$ We have $\varphi\in
\tilde\H$ and by (\ref{4.2}),
$$\E \<h,\varphi\>_\H= \E\int_0^T \<\dot h_t,\bb_t\>\text{\rm{d}} t=
\E\bigg\{F(X)\int_0^T \<\dot h_t,\text{\rm{d}} B_t\>\bigg\}$$ holds for all $h\in
\tilde\H$. Combining this with (\ref{4.1}) we conclude that $\tilde
DF=\varphi$. Therefore, the desired formula follows from (\ref{4.2}).
\end{proof}
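In the flat case Lemma \ref{L4.1} reduces to the classical Clark--Ocone formula; this standard specialization is included only as an illustration and is not part of the original text.

```latex
% Flat case: M = \mathbb{R}^d, X_t = B_t, u_t = \mathrm{id}, Q_{s,t} = \mathrm{id}.
% For F = f(X_T) one has (d/dt)(DF)_t = \nabla f(B_T) for t \le T, so that
% (d/dt)(\tilde{D}F)_t = \E(\nabla f(B_T)|\F_t) and the lemma reads
$$f(B_T) = \E f(B_T) + \int_0^T \<\E\big(\nabla f(B_T)\big|\F_t\big), \text{\rm{d}} B_t\>,$$
% the classical Clark--Ocone representation on Wiener space.
```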
\begin{thm} For any $T>0$ and $o\in M$, $(\EE,\D(\EE))$ satisfies
the following log-Sobolev inequality
$$\mu(F^2\log F^2) \le 2 \EE(F,F),\ \ F\in \D(\EE),\
\mu(F^2)=1.$$\end{thm}
\begin{proof} It suffices to prove the inequality for $F\in \scr FC^\infty$. Let $m_t=\E(F(X)^2|\F_t),\ t\in [0,T].$
By Lemma \ref{L4.1} and the It\^o formula,
$$\text{\rm{d}} m_t\log m_t= (1+\log m_t)\text{\rm{d}} m_t +\ff{|\ff{\text{\rm{d}}}{\text{\rm{d}} t} (\tilde DF^2)_t|^2}{2m_t}\text{\rm{d}} t.$$ Thus, since $m_0=\mu(F^2)=1$,
\begin{equation*}\begin{split} \mu(F^2\log F^2) &= \E m_T\log m_T
=\E\int_0^T \ff{2 \E(F(X)
\ff{\text{\rm{d}} }{\text{\rm{d}} t} (DF)_t|\F_t)^2}{\E(F(X)^2|\F_t)}\text{\rm{d}} t\\
&\le 2 \int_0^T \E\Big| \ff{\text{\rm{d}}}{\text{\rm{d}} t} (DF)_t\Big|^2\text{\rm{d}} t= 2
\E\|DF\|_\H^2= 2\EE(F,F),\end{split}\end{equation*}
where the inequality follows from the conditional Cauchy--Schwarz inequality. \end{proof}
\begin{thebibliography}{99}
\bibitem{B} J.-M. Bismut, \emph{The calculus of boundary processes,}
Ann. Sci. \'Ecole Norm. Sup. 17(1984), 507--622.
\bibitem{CHL} M. Capitaine, E. P. Hsu and M. Ledoux, \emph{Martingale
representation and a simple proof of logarithmic Sobolev
inequalities on path spaces,} Electron. Comm. Probab. 2(1997), 71--81.
\bibitem{D} B. K. Driver, \emph{A Cameron-Martin type
quasi-invariance theorem for Brownian motion on a compact Riemannian
manifold,} J. Funct. Anal. 110(1992), 272--376.
\bibitem{EL} K. D. Elworthy and X.-M. Li, \emph{An $L^2$ theory for differential forms on path
spaces I,} J. Funct. Anal. 254(2008), 196--245.
\bibitem{F} S. Fang, \emph{In\'egalit\'e du type de Poincar\'e sur
l'espace des chemins riemanniens,} C. R. Acad. Sci. Paris 318(1994),
257--260.
\bibitem{G} L. Gross, \emph{Logarithmic Sobolev inequalities,} Amer. J. Math.
97(1975), 1061--1083.
\bibitem{FM} S. Fang and P. Malliavin, \emph{Stochastic analysis on the path
space of a Riemannian manifold,} J. Funct. Anal. 118(1993),
249--274.
\bibitem{H2} E. P. Hsu, \emph{Logarithmic Sobolev inequalities on
path spaces over Riemannian manifolds,} Comm. Math. Phys. 189(1997),
9--16.
\bibitem{H} E. P. Hsu, \emph{Multiplicative functional for the heat
equation on manifolds with boundary,} Michigan Math. J. 50(2002),
351--367.
\bibitem{Z} L. Zambotti, \emph{Integration by parts on the law of the reflecting
Brownian motion,} J. Funct. Anal. 223(2005), 147--178.
\end{thebibliography}
\end{document}
\section{Introduction}
In this paper we give a simplified version of our proof of the following theorem:
\begin{theorem} {\bf The Weak Factorization Theorem}\label{th:1}
\begin{enumerate}
\item Let $f:X\dashrightarrow Y$ be a
birational map of
smooth complete varieties
over a field of characteristic zero, which is an
isomorphism over an open set $U$. Then $f$ can be factored
as
$$X=X_0\buildrel f_0 \over \dashrightarrow X_1
\buildrel f_1 \over \dashrightarrow \ldots \buildrel f_{n-1} \over
\dashrightarrow X_n=Y ,$$
where each $X_i$ is a smooth complete variety and $f_i$ is a blow-up
or blow-down at a smooth center which is an isomorphism
over $U$.
\item Moreover, if $X\setminus U$ and $Y\setminus U$ are divisors
with simple normal crossings, then each $D_i:=X_i\setminus U$
is a divisor with simple normal crossings and $f_i$ is a blow-up
or blow-down at a smooth center which has normal
crossings with components of $D_i$.
\item There is an index $1\leq r\leq n$ such that for all $i\leq r$ the induced birational map $X_i\dashrightarrow X$ is a projective morphism and for all $r\leq i \leq n$ the induced birational map $X_i \dashrightarrow Y$ is a projective morphism.
\item The above factorization is functorial in the following sense:
\noindent Let $\phi_X$, $\phi_Y$ and $\phi_K$ be automorphisms of $X$, $Y$ and $\operatorname{Spec}(K)$ such that $f\circ \phi_X=\phi_Y\circ f$ and $j_X\circ\phi_X=j_Y\circ\phi_Y=\phi_K$, where $j_X: X\to\operatorname{Spec}(K)$ and $j_Y:Y\to \operatorname{Spec}(K)$ are the natural morphisms. Then
the induced birational transformations $\phi_i: X_i\dashrightarrow X_i$ are automorphisms of $X_i$ commuting with $f_i:X_i\to X_{i+1}$ and $j_{X_i}:X_i\to \operatorname{Spec}(K)$.
Moreover if $\phi_X(D_X)=D_X$ and $\phi_Y(D_Y)=D_Y$ then for all $i$, we have $\phi_i(D_i)=D_i$.
\item The factorization commutes with field extensions $K\subset L$.
\end{enumerate}
\end{theorem}
The theorem was proven in \cite{Wlodarczyk3} and in \cite{AKMW} in a more general version. The above formulation essentially reflects the statement of the Theorem in \cite{AKMW}.
The weak factorization theorem extends a theorem of Zariski, which states that
any birational map between two smooth complete
surfaces can be factored into a succession of blow-ups at
points followed by a succession of blow-downs at points.
A stronger version of the above theorem, called the strong factorization
conjecture, remains open.
\begin{conjecture} {\bf Strong Factorization Conjecture}.
Any birational map $f:X\dashrightarrow Y$ of smooth
complete varieties can be factored into a succession of blow-ups
at
smooth centers followed by a succession of
blow-downs at
smooth centers.
\end{conjecture}
Note that both statements are equivalent in dimension 2.
One can find the formulation of the relevant conjectures in many papers. Hironaka \cite{Hironaka1} formulated the strong
factorization conjecture. The weak factorization
problem was stated by Miyake and Oda \cite{Miyake-Oda}. The toric versions of the
strong and weak factorizations were also conjectured by
Miyake and Oda \cite{Miyake-Oda} and are called the strong and weak Oda
conjectures. The $3$-dimensional toric version of the weak form was
established by Danilov \cite{Danilov2} (see also Ewald \cite{Ewald}).
The weak toric conjecture
in arbitrary dimensions was proved in \cite{Wlodarczyk1} and later
independently by Morelli \cite{Morelli1}, who also claimed to have a proof of the
strong factorization conjecture
(see also Morelli \cite{Morelli2}).
Morelli's
proof of the weak Oda conjecture was completed, revised and
generalized to the toroidal case by Abramovich, Matsuki and Rashid
in \cite{Abramovich-Matsuki-Rashid}. A gap in Morelli's proof of the strong Oda conjecture,
which went
unnoticed in \cite{Abramovich-Matsuki-Rashid}, was later
found by K. Karu.
The local version of the strong factorization problem was posed
by Abhyankar in dimension 2 and by Christensen in general; Christensen has solved it for 3-dimensional toric varieties \cite{Christensen}.
The local version of the weak factorization problem (in
characteristic 0) was
solved by Cutkosky \cite{Cutkosky1}, who also
showed that Oda's strong conjecture implies the local
version of the strong conjecture for proper birational
morphisms \cite{Cutkosky2} and proved the local strong factorization conjecture
in dimension 3 \cite{Cutkosky2} via Christensen's theorem.
Finally Karu generalized Christensen's result to any dimension and completed the argument for the local strong factorization \cite{Karu}.
The proofs in \cite{Wlodarczyk3} and \cite{AKMW} are both built upon the idea of cobordisms which was developed in
\cite{Wlodarczyk2} and was inspired by Morelli's theory of polyhedral cobordisms \cite{Morelli1}. The main idea of \cite{Wlodarczyk2} is to construct a space with a $K^*$-action for a given birational map.
The space, called a birational cobordism, resembles a cobordism in Morse theory and determines a decomposition of the birational map into elementary transformations (see Remark \ref{cobordism}).
This gives a factorization into a sequence of
weighted blow-ups and blow-downs. One can also view the factorization determined by the cobordism in terms of VGIT, as developed in papers of Thaddeus and Dolgachev-Hu.
As shown in \cite{Wlodarczyk2}, the weighted blow-ups which occur in the factorization have a
natural local toric description which is crucial for their further regularization.
The two existing methods of regularizing the centers of this factorization are the $\pi$-desingularization of cobordisms as in \cite{Wlodarczyk3} and the local torification of the action as in \cite{AKMW}.
The present proof is essentially the same as in \cite{Wlodarczyk3}.
Instead of working in full generality and developing the suitable language for toroidal varieties we focus on
applying the general ideas to a particular construction of a smooth cobordism.
The $\pi$-desingularization is a desingularization of geometric quotients
of a $K^*$-action. This can be done locally, and the procedure can be globalized in a functorial and even canonical way. The $\pi$-desingularization makes all the intermediate varieties (which are geometric quotients) smooth, and also the connecting blow-ups have smooth centers.
The proof of Abramovich, Karu, Matsuki and the author \cite{AKMW}
relies on a subtle analysis of differences between locally toric and toroidal structures defined by the action of $K^*$. The Abramovich-de Jong idea of torification is roughly speaking to construct the ideal sheaves whose blow-ups (or principalizations) introduce the structure of toroidal varieties in neighborhoods of fixed points of the action. This allows one to pass from birational maps between intermediate varieties in the neighborhood of fixed points to toroidal maps. The latter can be factored into a sequence of smooth blow-ups by using the same combinatorial methods as for toric varieties. Combining all the local factorizations together we get a global factorization.
The presentation of birational cobordisms below is based on \cite{Wlodarczyk2}, with some improvements from \cite{AKMW}. In particular we use Hironaka flattening for the factorization into projective morphisms, and elements of GIT to show the existence of quotients.
The presentation of the paper is self-contained. In particular, the toric version of the weak factorization is proven in Section \ref{toric} to illustrate some of the ideas of the proof.
\section{Birational cobordisms}
\subsection{Definition of a birational cobordism}
Recall some basic definitions from Mumford's GIT theory.
\noindent \begin{definition} Let $K^*$ act on $X$. By a {\it good
quotient} we mean a variety $Y=X//K^*$ together with
a morphism $\pi:X\rightarrow Y$ which is constant on
$G$-orbits such that for any affine open subset
$U\subset Y$ the inverse image $\pi^{-1}(U)$ is affine and
$\pi^*:O_Y(U)\rightarrow O_X(\pi^{-1}(U))^{K^*}$ is an isomorphism.
If additionally for any closed point $y\in Y$ its inverse image $\pi^{-1}(y)$
is a single orbit, we call $Y:=X/K^*$ together with
$\pi:X\rightarrow Y$ a {\it geometric quotient}.
\end{definition}
\begin{remark} A geometric quotient is a space of orbits while a good quotient is a space of equivalence classes of orbits generated by the relation that two
orbits
are equivalent if their closures intersect.
\end{remark}
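The distinction made in the remark is already visible in the simplest linear example, spelled out here for illustration (it is not part of the original text).

```latex
% Let K^* act on \mathbb{A}^2 by t(x,y) = (tx, t^{-1}y). Then
% \mathbb{A}^2//K^* = \operatorname{Spec} K[xy] \simeq \mathbb{A}^1
% is a good quotient but not a geometric one: the three orbits
% \{y = 0, x \neq 0\}, \{x = 0, y \neq 0\} and \{(0,0)\} all map to
% 0 \in \mathbb{A}^1, since their closures meet at the origin.
```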
\begin{definition}
Let $K^*$ act on $X$. We say that $\lim_{{t}\to 0} \, {t}x$ exists (respectively $\lim_{{t}\to \infty} \, {t}x$ exists) if the morphism
$K^*\to X$ given by $t\mapsto {t}x$ extends to a morphism ${\mathbb{A}}^1\to X$ (respectively $\mathbb P^1\setminus\{0\}\to X$).
\end{definition}
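An elementary example of the definition, added for illustration:

```latex
% Let K^* act on \mathbb{A}^1 by t \cdot x = tx. Then \lim_{t\to 0} tx = 0
% exists for every x, while \lim_{t\to\infty} tx exists only for x = 0.
% Consequently B_- = \emptyset and B_+ = \mathbb{A}^1 \setminus \{0\},
% so \mathbb{A}^1 with this action is not a cobordism.
```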
\begin{definition} (\cite{Wlodarczyk2})
Let $X_1 $ and $X_2$ be two birationally equivalent
normal varieties.
A {\it birational cobordism} or simply a {\it cobordism}
$B:=B(X_1,X_2)$ between them is a
normal variety $B$ with an algebraic action of $K^*$ such that the
sets
\[\begin{array}{rccc}
&B_-:=\{x \in B\mid \, \lim_{{ t}\to 0} \, { t}x \,\, \mbox{does
not exist}\}& \, \mbox{and}& \\
&B_+:=\{x \in B\mid \, \lim_{{ t}\to\infty} \, { t}x \,\,
\mbox{does not exist}\} &&
\end{array}\]
are nonempty and open and there exist
geometric quotients
$B_-/K^*$ and $B_+/K^*$ such that $B_+/K^*\simeq X_1$ and $B_-/K^*\simeq X_2$ and the birational map $X_1 \dashrightarrow
X_2$ is given by the above
isomorphisms and the open embeddings of
$(B_+ \cap B_-)/K^*$ into
$B_+/K^*$ and $B_-/K^*$ respectively.
\end{definition}
\begin{remark} An analogous notion of cobordism of fans of
toric varieties was introduced by Morelli in \cite{Morelli1}.
\end{remark}
\begin{remark}\label{cobordism}
The above definition can also be considered as an analog of the notion of
cobordism in Morse theory. Let $W_0$ be a cobordism in Morse
theory of two
differentiable manifolds $X$ and $X'$ and $f:W_0\rightarrow
[a,b]\subset \mathbb R$ be a
Morse function such that $f^{-1}(a)=X$ and $f^{-1}(b)=X'$.
Then $X$ and $X'$ have open neighborhoods $X\subseteq
V\subseteq W_0$ and $X'\subseteq
V'\subseteq W$ such that $V\simeq X\times
[a,a+\epsilon )$ and $V'\simeq X'\times (b-\epsilon ,b]$ for which
$f_{\mid V}:V\simeq X \times [a,a+\epsilon)\rightarrow [a,b]$ and
$f_{\mid V'}:V'\simeq X' \times (b-\epsilon ,b] \rightarrow [a,b]$ are
the natural projections on the second coordinate. Let
$W:=W_0\cup_V X\times
(-\infty,a+\varepsilon) \cup_{V'}
X'\times (b-\varepsilon,+\infty )$. One can easily see
that $W$ is isomorphic to $W_0\setminus X\setminus
X'=\{x\in W_0\mid a < f(x) < b\}$. Let $f':W\rightarrow \mathbb R$ be the
map defined by glueing the function $f$ and the natural projection on the second
coordinate. Then ${\operatorname{grad}} (f')$ defines an action on $W$ of
a $1$-parameter group
$T\simeq {\mathbb R}\simeq {\mathbb R}_{>0}$ of diffeomorphisms, where the last group isomorphism is given by the exponential.
Then one can see that $W_-:=\{x \in W\mid \lim_{t\to 0}
\, tx \ \mbox{does not exist}\}$ and $W_+:=\{x
\in W\mid \lim_{t\rightarrow\infty} \, tx \ \mbox{does
not exist}\}$ are open and $X$
and $X'$ can be considered as quotients of these sets by $T$.
The critical points of the Morse function are $T$-fixed points. ``Passing through the fixed points'' of the action
induces a simple birational transformation similar to spherical modification in Morse theory (see Example \ref{main}).
\end{remark}
\begin{figure}[ht]
\epsfysize=1.5in
\epsffile{figure1.eps}
\caption{Cobordism in Morse theory}\label{Fi:1}
\end{figure}
\begin{example}
\label{main}
Let $K^*$ act on $B:={\mathbb{A}}^{l+m+r}_K$ by
$$ t(x_1,\ldots,x_l,y_1,\ldots,y_m,z_1,\ldots,z_r)=(t^{a_1}\cdot x_1,\ldots,t^{a_l} \cdot
x_l,t^{-b_1}\cdot y_1,\ldots,t^{-b_m}\cdot y_m, z_1,\ldots,z_r),$$
where $a_1,\ldots,a_l,b_1,\ldots,b_m>0$.
Set $\overline{x}=(x_1,\ldots,x_l) , \overline{y}=(y_1,\ldots,y_m) ,
\, \overline{z}=( z_1,\ldots,z_r)$.
Then
$$\displaylines{ B_-= \{p=(\overline{x},\overline{y},\overline{z})\in
{\mathbb{A}}^{l+m+r}_K \mid \, \overline{y}\neq 0\},\cr
B_+= \{p=(\overline{x}, \overline{y},\overline{z})\in
{\mathbb{A}}^{l+m+r}_K \mid \, \overline{x}\neq 0 \}.\cr}$$
\noindent{\bf Case 1.} $a_i=b_i=1$, $r=0$ (Atiyah, Reid).
One can easily see that $B//K^*$ is the affine cone over
the Segre embedding ${{\mathbb{P}}}^{l-1}\times {{\mathbb{P}}}^{m-1}\rightarrow
{{\mathbb{P}}}^{l\cdot m -1}$, and $B_+/K^*$ and $B_-/K^*$ are smooth.
The relevant birational map $\phi : B_-/K^* \dashrightarrow B_+/K^*$ is a
flip
for $l, m\geq 2$ replacing ${{\mathbb{P}}}^{m-1}\subset B_-/K^*$
with ${{\mathbb{P}}}^{l-1}\subset B_+/K^*$.
For $l=1, m\geq2$, $\phi$ is a blow-down, and for
$l\geq2, m=1$ it is a blow-up. If $l=m=1$ then $\phi$ is the identity.
One can show that $\phi : B_-/K^* \dashrightarrow B_+/K^*$ factors into the blow-up of
${{\mathbb{P}}}^{m-1}\subset B_-/K^*$ followed by the blow-down of ${{\mathbb{P}}}^{l-1}\subset B_+/K^*$.
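For concreteness, the smallest flip case of Case 1 can be computed explicitly; this standard calculation is added here and is not part of the original text.

```latex
% l = m = 2, r = 0: the invariant ring is
% K[B]^{K^*} = K[x_1y_1, x_1y_2, x_2y_1, x_2y_2] \simeq K[u,v,w,z]/(uz - vw),
% so B//K^* is the affine cone over the Segre quadric
% \mathbb{P}^1 \times \mathbb{P}^1 (the threefold ordinary double point),
% and \phi : B_-/K^* \dashrightarrow B_+/K^* is the Atiyah flop,
% exchanging the two small resolutions of the cone.
```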
%
\noindent {\bf Case 2}. General case.
For $l=1$, $m\geq2$, $\phi$ is a toric blow-down whose
exceptional fiber is a weighted projective space. For
$l\geq2$, $m=1$, $\phi$ is a toric blow-up. If $l=m=1$ then
$\phi$ is the identity. The birational map $\phi : B_-/K^* \dashrightarrow B_+/K^*$ factors into
a weighted blow-up followed by a weighted blow-down.
\noindent {\bf Case 3}. $l=0$ and $m\neq 0$ (or $l\neq 0$ and $m=0$).
In this case we have only negative and zero weights (respectively positive and zero weights.)
Then $B={\mathbb{A}}^{l+m+r}$ is not a cobordism.
In particular $B_+=\emptyset$. The morphism $B_-/K^*=\mathbb P({\mathbb{A}}^m)\times {\mathbb{A}}^r\to B//K^*={\mathbb{A}}^r$ is the standard projection, where $\mathbb P({\mathbb{A}}^m)$ is the weighted projective space defined by the action of $K^*$ on ${\mathbb{A}}^m$.
\end{example}
\begin{figure}[ht]
\epsfysize=.8in
\epsffile{figure2.eps}
\caption{Affine Cobordism}\label{Fi:2 }
\end{figure}
\begin{remark} In Morse theory we have an
analogous situation. In cobordisms with one critical point we replace
$S^{l-1}$ by $S^{m-1}$. (See Figure \ref{Fi:3})
\end{remark}
\begin{figure}[ht]
\epsfysize=.8in
\epsffile{figure3.eps}
\caption{Spherical modifications}\label{Fi:3}
\end{figure}
\subsection{Fixed points of the action}
Let $X$ be a variety with an action of $K^*$. Denote by $X^{K^*}$ the set of fixed points of the action and by ${\mathcal C}(X^{K^*})$ the set of its irreducible fixed components. For any $F\in {\mathcal C}(X^{K^*})$ set $$F^+(X)=F^+=\{x\in X\mid
\, \lim_{{t}\to 0} {t}x \in F\}, \quad F^-(X)=F^-=\{x\in X\mid
\, \lim_{{t}\to \infty} {t}x \in F\}.$$
\begin{example} In Example \ref{main}, $$F=\{p \in B\mid \overline{x}=\overline{y}= 0\}, \quad\quad F^-=\{p\in B\mid \overline{x}= 0\}, \quad\quad F^+=\{p\in B\mid \overline{y}= 0\}.$$
\end{example}
\begin{lemma}\label{fix} If $F$ is the fixed point set of an affine $K^*$-variety $U$ then $F$, $F^+$ and $F^-$ are closed in $U$. Moreover the ideals $I_{F^+}, I_{F^-}\subset K[U]$ are generated by all semi-invariant functions with positive (respectively negative) weights.
\end{lemma}
\begin{proof} Embed $U$ equivariantly into affine space ${\mathbb{A}}^n$ with linear action and use the example above.
\end{proof}
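In the linear model of Example \ref{main} the generators of these ideals can be written down directly; this is spelled out here only for illustration.

```latex
% In Example \ref{main}: F^+ = \{\overline{y} = 0\} and F^- = \{\overline{x} = 0\},
% so I_{F^+} = (y_1,\ldots,y_m) and I_{F^-} = (x_1,\ldots,x_l).
% Under the induced K^*-action on K[B] the function y_j is semi-invariant of
% weight b_j > 0 and x_i of weight -a_i < 0, in agreement with the lemma;
% the general case follows via an equivariant closed embedding into a linear action.
```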
\subsection{Existence of a smooth birational cobordism}\label{se: construction}
The following result is a consequence of the Hironaka flattening theorem \cite{Hironaka4}.
\begin{proposition}\label{fact}
Let $\phi:X\dashrightarrow Y$ be a birational map between smooth complete
varieties. Then $\phi$ factors as $X\leftarrow Z\to Y$, where $Z\to X$ and $Z\to Y$ are projective birational morphisms from a smooth complete variety $Z$.
The above factorization is functorial. Moreover there exist functorial divisors $D_X$ and $D_Y$ on $Z$ which are relatively ample over $X$ and $Y$ respectively.
If $\phi$ is an isomorphism over $U$ and the complements $X\setminus U$ and $Y\setminus U$ are simple normal crossing divisors then $Z\setminus U$ is a simple normal crossing divisor.
\end{proposition}
\begin{proof} Let $\Gamma(X,Y)\subset X\times Y$ be the graph of $\phi$ and $Z_0$ be its canonical resolution of singularities \cite{Hironaka3}.
If $X$ and $Y$ are projective we take simply $Z=Z_0$. If $X$ and $Y$ are arbitrary we can apply Hironaka flattening to $Z_0\to Y$ to find a projective factorization
$\phi: Z_0\leftarrow Z_Y\to Y$, where $Z_Y\to Y$ is a composition of blow-ups at smooth centers and $Z_Y\to Z_0$ is a composition of blow-ups which are pull-backs of these blow-ups (\cite{Hironaka4},\cite{Raynaud-Gruson}). Next we apply Hironaka flattening
to $Z_Y\to X$ to obtain a factorization $Z_Y\leftarrow Z_X\to X$.
Finally, $Z\to Z_X$ is a canonical principalization of ${\mathcal I}_D$, where $D$ is the complement of $U$ in $Z_X$. The divisors $D_X$ and $D_Y$ are constructed as combinations of components of the exceptional divisors of $Z\to X$ and $Z\to Y$ respectively.
\end{proof}
It suffices to construct the cobordism and factorization for the projective morphism $Z\to X$.
\begin{proposition} (\cite{Wlodarczyk2},\cite{AKMW}) \label{construction} Let $\varphi: Z\to X$ be a birational projective morphism of smooth complete varieties with the exceptional divisor $D$. Let $U\subset X, Z$ be
an open subset where $\varphi$ is an isomorphism.
There exists a smooth complete variety $\overline{B}$ with a $K^*$-action, which contains fixed point components isomorphic to $X$ and $Z$ such that
\begin{enumerate}
\item
\begin{itemize}
\item $B=B(X,Z):= \bar{B} \setminus(X \cup Z)$ is a cobordism between $X$ and $Z$.
\item $U\times K^*\subset B_-\cap B_+\subset B$.
\item There are $K^*$-equivariant isomorphisms $X^-\simeq X\times ({\mathbb P^1\setminus\{0\}})$ and $Z^+\simeq{\mathcal O}_Z(D)$.
\item $X^-\setminus X=B_+$ and $Z^+\setminus Z=B_-$
\item There exists a $K^*$-equivariant projective morphism $\pi_B:\overline{B}\to X$ such that $i_X\pi_B={\operatorname{id}}_X$ and $i_Z\pi_B=f$, where $i_X: X\hookrightarrow \overline{B}$ and $i_Z: Z\hookrightarrow \overline{B}$ are embeddings of $X$ and $Z$.
Here the action of $K^*$ on $X$ is trivial.
\item There is a relatively ample divisor for $\pi_B$ which is functorial and in particular $K^*$-invariant.
\end{itemize}
\item If $D_X:=X\setminus U$ and $D_Z:=Z\setminus U$ are divisors
with simple normal crossings then there exists
a smooth cobordism $\tilde{B}\subset\overline{\tilde{B}}$ between $\tilde{X}$ and
$\tilde{Z}$ as in (1) such that
\begin{itemize}
\item $\tilde{X}$ and $\tilde{Z}$ are obtained from $X$ and $Z$ by a sequence of
blow-ups at centers which have normal crossings with
components of the total transforms of $D_X$ and $D_Z$ respectively.
\item $U\times \mathbb P^1 \subset \tilde{B} $ and
$\overline{\tilde{B}}\setminus (U\times \mathbb P^1)$ is a
divisor with simple normal crossings.
\end{itemize}
\end{enumerate}
\end{proposition}
In further considerations we shall refer to $\bar{B}$ as a {\it compactified cobordism.}
\begin{figure}[ht]
\epsfysize=1in
\epsffile{figure4.eps}
\caption{Compactified cobordism}\label{Fi:4}
\end{figure}
\begin{proof} (1) We follow here the Abramovich construction of the cobordism.
Let ${\mathcal I}\subset {\mathcal O}_X$ be a sheaf of ideals such that
$Z=Bl_{\mathcal I}{X}$ is obtained from $X$ by blowing up of ${\mathcal I}$. Let $z$ denote the standard coordinate on ${{\mathbb{P}}}^1$ and
let ${\mathcal I}_0$ be the ideal of the point $z=0$ on ${{\mathbb{P}}}^1$.
Set
$W:=X\times {{\mathbb{P}}}^1$ and denote by $\pi_1:W\rightarrow X$,
$\pi_2:W\rightarrow {{\mathbb{P}}}^1$ the standard projections.
Then
${\mathcal J}:={\pi_1}^*({\mathcal I})+{\pi_2}^*({\mathcal I}_0)$ is an ideal supported on $X\times \{0\}$.
Set $W':=Bl_{{\mathcal J}}W$.
The proper transform
of $X\times
\{0\}$ is isomorphic to $Z$
and we identify it with $Z$. Let us describe $Z$ locally.
Let $f_1,\ldots,f_k$ generate the ideal ${\mathcal I}$ on some open affine set $U\subset X$. Then after the blow-up $Z\to X$ at ${\mathcal I}$ the inverse image of $U$ is a union of open charts $U_i\subset Z$, where $$K[U_i]=K[U][f_i,f_1/f_i,\ldots, f_k/f_i].$$
Now the functions $f_1,\ldots,f_k,z$ generate the ideal ${\mathcal J}$ on
$U\times {\mathbb{A}}^1\subset W$. After the blow-up $W'\to W$ at ${\mathcal J}$, the
inverse image of $U\times {\mathbb{A}}^1$ is a union of open charts $V_i\supset Z$, where $$K[V_i]=K[U][f_i, f_1/f_i,\ldots, f_k/f_i,z/f_i]=K[U_i][z/f_i],$$ together with the chart $V_z$, which does not intersect $Z$. Then $V_i=U_i^+\simeq U_i\times {\mathbb{A}}^1$, where $z':=z/f_i$ is the standard coordinate on ${\mathbb{A}}^1$. The action of $K^*$
on the factor $U_i$ is trivial, while on ${\mathbb{A}}^1$ it is the standard one, given by $t(z')=tz'$.
Thus the open subset $Z^+=\bigcup U_i^+=\bigcup V_i\subset W'$ is a line bundle over $Z$ with the standard action of $K^*$. On the other hand the neighborhood $X^-:=X\times (\mathbb P^1\setminus\{0\})$ of $X\subset W$ remains unchanged after the blow-up of ${\mathcal J}$. We identify $X$ with $X\times\{\infty\}$.
We define $\overline{B}$ to be the canonical desingularization of $W'$.
Then $B:=\overline{B}\setminus (X\cup Z)$. We get $B_-/K^*=
(Z^+\setminus Z)/K^*=Z$, while $B_+/K^*=(X^-\setminus X)/K^*=X$.
The relatively very ample divisor is the relevant combination of the divisor $X$ and the exceptional divisors with negative coefficients.
(2) The sets $Z^+$ and $X^-$ are line bundles with projections $\pi_+:
Z^+ \to Z$ and $\pi_-: X^-\to X$. Let $Z:=\overline{\tilde{B}}\setminus (U\times \mathbb P^1)$. Then $Z\cap Z^+$ and
$Z\cap X^-$ are simple normal crossing divisors, $\pi_+(Z\cap
Z^+)=D_Z$ and $\pi_-(Z\cap
X^-)=D_X$. Let $f:\overline{\tilde{B}}\to \bar{B}$ be a canonical principalization of
${\mathcal I}_Z$ (see Hironaka \cite{Hironaka3},
Villamayor \cite{Villamayor} and Bierstone-Milman \cite{Bierstone-Milman}). Let $f_+:f^{-1}(Z^+)\to Z^+$ be the restriction of $f$. By functoriality
$f_+$ is a canonical
principalization of ${\mathcal I}_{Z|Z^+}=\pi_+^*({\mathcal I}_{D_Z})$ on $Z^+$ (resp. $X^-$) which commutes with $\pi_+$.
Then $f_+$ is a pull-back of the canonical principalization $\tilde{Z}\to Z$ of ${\mathcal I}_{D_Z}$ on $Z$.
In particular $f^{-1}(Z^+)=\tilde{Z}^+$ and all centers of blow-ups are $K^*$-invariant and of the form $\pi_+^{-1}(C)$, where $C$ has
normal crossings with components of the total transform of $D_Z$. Analogously for $X^-$.
\end{proof}
\begin{remark} The Abramovich construction can be considered as a generalization of the Fulton-MacPherson example of the deformation to the normal cone. If we let ${\mathcal I}={\mathcal I}_C$ be the ideal sheaf of a smooth center $C$ then the relevant blow-up is already smooth.
On the other hand this is a particular case of the very first construction of a cobordism in \cite{Wlodarczyk2} (Proposition 2, p.~438),
which is a $K^*$-equivariant completion of the space
$$L(Z,D;X,0):=O_Z(D)\cup_{U\times K^*} X\times (\mathbb P^1\setminus \{0\}).$$
Another variant of our construction is given by
Hu and Keel in \cite{Hu-Keel}.
\end{remark}
\subsection{Collapsibility}
In the following $\bar B\supset B$ denotes a compactified cobordism between $X$ and $Z$ subject to the conditions of Proposition \ref{construction}, but not necessarily smooth.
\begin{definition} (\cite{Wlodarczyk2}).\label{de: order} Let $X$ be a
cobordism or any variety with a $K^*$-action.
\begin{enumerate}
\item We say that $F\in{\mathcal C}(X^{K^*})$
is an {\it immediate predecessor} of
$F'\in {\mathcal C}(X^{K^*})$ if there exists a nonfixed point $x$ such that
$\lim_{{\bf t}\to 0} {\bf t}x\in F$ and
$\lim_{{\bf t}\to
\infty} {\bf t}x\in F'$.
\item
We say that $F$ {\it precedes} $F'$ and
write $F<F'$ if there exists a sequence of connected fixed
point set components $F_0=F ,F_1,\ldots,F_l=F'$ such that
$F_{i-1}$ is an immediate predecessor of $F_i$
(see \cite{BB-S}).
\item
We call a cobordism (or a variety with a $K^*$-action) {\it collapsible} (see
also Morelli \cite{Morelli1}) if the relation $<$ on its set of
connected components of the fixed point set
is an order. (Here an
order is just required to be transitive.)
\end{enumerate}
\end{definition}
\begin{remark} One can show (\cite{Wlodarczyk2}) that a projective cobordism is collapsible. The collapsibility follows from the existence of a $K^*$-equivariant embedding into a projective space and direct computations
for the projective space. A similar technique works for a relatively projective cobordism.
\end{remark}
\begin{definition} A function $\chi:{\mathcal C}(X^{K^*})\to {\mathbb{Z}}$ is {\it strictly increasing} if $\chi(F)<\chi(F')$ whenever $F< F'$.
\end{definition}
\subsection{Existence of a strictly increasing function for $\mathbb P^k$}
The space $\mathbb P^k=\mathbb P({\mathbb{A}}^{k+1})$ splits according to the weights as
\[\mathbb P^k=\mathbb P({\mathbb{A}}^{k+1})=\mathbb P({\mathbb{A}}_{a_1}\oplus\cdots\oplus {\mathbb{A}}_{a_r}) \]
where $K^*$ acts on ${\mathbb{A}}_{a_i}$
with the weight $a_i$. Assume that $a_1<\cdots< a_r$. Let $\overline x_{a_i}=[x_{i,1},\dots, x_{i,{r_i}}]$
be the coordinates on ${\mathbb{A}}_{a_i}$. The action of $K^*$ is given by
\[t[\overline x_{a_1},\dots,\overline x_{a_r}]=[t^{a_1}\overline x_{a_1},\dots,t^{a_r}\overline x_{a_r}].\]
It follows that the fixed point components of $(\mathbb P^k)^{K^*}$ are $\mathbb P({\mathbb{A}}_{a_i})$. We define a strictly increasing function $\chi_\mathbb P:{\mathcal C}((\mathbb P^k)^{K^*})\to\mathbb Z$ by $$\chi_{\mathbb P}(\mathbb P({\mathbb{A}}_{a_i}))=a_i.$$
We see that for $x=[\overline x_{a_1},\dots, \overline x_{a_r}]$, $\underset{t\to 0}{\lim} tx\in \mathbb P({\mathbb{A}}_{a_{\min}})$ and $\underset{t\to \infty}{\lim} tx \in \mathbb P({\mathbb{A}}_{a_{\max}})$, where
$${}a_{\max}=\max\{a\mid \overline x_a\neq 0\},\quad
a_{\min}=\min\{a\mid\overline x_a\neq 0\}.$$
Then $\mathbb P({\mathbb{A}}_{a_i})<\mathbb P({\mathbb{A}}_{a_j})$ iff $a_i<a_j$.
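As a concrete illustration of $\chi_{\mathbb P}$ (our own example, not from the source text), take $k=2$ with weights $0,1,2$:

```latex
% K^* acting on P^2 with weights 0, 1, 2:
t[x_0:x_1:x_2]=[x_0:t\,x_1:t^2x_2],\qquad
{\mathcal C}\big((\mathbb P^2)^{K^*}\big)=\{[1:0:0],\,[0:1:0],\,[0:0:1]\},
% with chi_P taking the values 0, 1, 2 on the three coordinate points.
% For x = [1:1:1] we have a_min = 0 and a_max = 2, hence
\lim_{t\to 0}tx=[1:0:0]\in\mathbb P({\mathbb{A}}_0),\qquad
\lim_{t\to\infty}tx=\lim_{t\to\infty}[t^{-2}:t^{-1}:1]=[0:0:1]\in\mathbb P({\mathbb{A}}_2).
```

So $\mathbb P({\mathbb{A}}_0)<\mathbb P({\mathbb{A}}_2)$ and $\chi_{\mathbb P}$ is indeed strictly increasing along the flow.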
\subsection{Existence of a strictly increasing function for a compactified cobordism $\overline{B}$.}
Let $E$ be a $K^*$-invariant relatively very ample divisor for $\pi_B:\overline{B}\to X$.
For any $x\in F$, where $F\in{\mathcal C}(\bar{B}^{K^*})$, we find a semiinvariant function $f$ describing $-E=(f)$ in a neighborhood of $x\in B$. Then we put $\chi_E(F)=a$, the weight of the action $t(f)=t^af$ of $K^*$ on $f$. Note that $\chi_E:{\mathcal C}(\bar{B}^{K^*})\to{\mathbb{Z}}$ is locally constant, so it is independent of the choice of $x\in F$. Indeed, let $f'$ be another function describing $-E$ at $x$, with weight $a'$. Then the function $f'/f$ is semiinvariant of weight $a'-a$ and invertible at $x$. Since $x$ is a fixed point, $(f'/f)(x)=(f'/f)(tx)=t^{a'-a}(f'/f)(x)\neq 0$, hence $a'-a=0$ and $a'=a$.
For any open affine set $U\subset X$ there exist $K^*$-semiinvariant sections $s_0,\dots,s_k\in\Gamma({\mathcal O}_{\overline B_U}(E))$ corresponding to rational $K^*$-semiinvariant functions $f_i$ (with the same weight) such that $(f_i)+E\geq 0$, which define a closed embedding $$\varphi_U: \overline B_U\hookrightarrow \mathbb P^k_U=\mathbb P^k\times U,$$ where $\overline B_U=\pi_B^{-1}(U)$.
Every fixed point component $F$ on $\bar{B}_U$ is contained in some $\mathbb P({\mathbb{A}}_{a_i})\times U$.
For any $x\in F$ there exists a section $s_i$ such that $(s_i)=(f_i)+E=0$ in a neighborhood of $x$. Thus the section $s_i$, of weight $a_i$, is invertible at $x$. This implies that $F\cap B_U\subset \mathbb P({\mathbb{A}}_{a_i})\times U$. On the other hand, $(f_i)=-E$ and the weight of $f_i$ is $a_i$. Thus we get
$\chi_E(F)=\chi_\mathbb P(\mathbb P({\mathbb{A}}_{a_i}))=a_i.$
The function $\chi_{\mathbb{P}}$ is strictly increasing, and the intersection of every component $F\in {\mathcal C}(\bar{B}^{K^*})$ with $\bar{B}_U$ is contained in $\mathbb P({\mathbb{A}}_{a})\times U$, where $\chi_E(F)=\chi_\mathbb P(\mathbb P({\mathbb{A}}_{a}))=a$. In particular we get
$\chi_E(F)<\chi_E(F')$ if $F<F'$ so $\chi_E$ is a strictly increasing function on $\bar{B}$. This implies
\begin{lemma} A compactified cobordism $\overline{B}$ is collapsible.
\end{lemma}
\subsection{Decomposition of a birational cobordism}
\begin{definition}(\cite{AKMW}, \cite{Wlodarczyk2}) A cobordism $B$ is {\it elementary} if for any $F\in {\mathcal C}(B^{K^*})$ the sets $F^+$ and $F^-$ are closed. (In particular any two distinct components $F, F'\in {\mathcal C}(B^{K^*})$ are incomparable with respect to $<$.)\end{definition}
The strictly increasing function $\chi_B:=\chi_E$ defines a decomposition of $B$ into elementary cobordisms
$$B_{a_i}:=B\setminus (\bigcup_{\chi_B(F)<a_i}F^-\cup \bigcup_{\chi_B(F)>a_i}F^+),$$
where $a_1<\cdots<a_r$ are the values of $\chi_B$.
This yields
\begin{lemma}
\begin{enumerate}
\item
$(B_{a_1})_-=B_-$, $(B_{a_r})_+=B_+$.
\item $(B_{a_{i+1}})_-=(B_{a_i})_+= B\setminus (\bigcup_{\chi_B(F)\leq a_i}F^-\cup \bigcup_{\chi_B(F)\geq a_{i+1}}F^+)$.
\item $\chi_B(F)=a_i$ for any $F\in {\mathcal C}(B_{a_i}^{K^*})$.
\item $(B_{a_i})_-=B_{a_i}\setminus (\bigcup_{\chi_B(F)= a_i}F^+)$, $(B_{a_i})_+=B_{a_i}\setminus (\bigcup_{\chi_B(F)= a_i}F^-)$.
\end{enumerate}
\end{lemma}
\begin{figure}[ht]
\epsfysize=1in
\epsffile{figure5.eps}
\caption{Elementary birational cobordism}\label{Fi:5}
\end{figure}
\begin{figure}[ht]
\epsfysize=1in
\epsffile{figure6.eps}
\caption{``Handle'': an elementary cobordism in Morse theory}\label{Fi:6}
\end{figure}
\subsection{Decomposition of $\mathbb P^k$}
Set ${\mathbb{A}}_{\geq a_i}:={\mathbb{A}}_{a_i}\oplus\cdots\oplus {\mathbb{A}}_{a_r}$,
${\mathbb{A}}_{>a_i}:={\mathbb{A}}_{a_{i+1}}\oplus\cdots\oplus {\mathbb{A}}_{a_r}$, and define ${\mathbb{A}}_{< a_i}$, ${\mathbb{A}}_{\leq a_i}$ analogously.
\begin{lemma} $\mathbb P({\mathbb{A}}_{a_i})^+=\mathbb P({\mathbb{A}}_{\geq a_i})$ and
$\mathbb P({\mathbb{A}}_{a_i})^-=\mathbb P({\mathbb{A}}_{\leq a_i})$. Moreover if $F=\mathbb P(F_A)\subset \mathbb P({\mathbb{A}}_{a_i})$ is a closed subset then $F^+=\mathbb P( F_A\oplus {\mathbb{A}}_{a_{i+1}}\oplus\cdots\oplus {\mathbb{A}}_{a_r})$ is closed.
\end{lemma}
\begin{lemma} Set $\mathbb P_{a_i}:=\mathbb P^k\setminus (\bigcup_{\chi_\mathbb P(F)<a_i}F^-\cup \bigcup_{\chi_\mathbb P(F)>a_i}F^+).$
Then
${}\mathbb P_{a_i}=\mathbb P^k \setminus \mathbb P({\mathbb{A}}_{> a_i}) \setminus \mathbb P({\mathbb{A}}_{< a_i}),\quad\quad
(\mathbb P_{a_i})_+=\mathbb P^k \setminus \mathbb P({\mathbb{A}}_{> a_i}) \setminus \mathbb P({\mathbb{A}}_{\leq a_i}),\quad\quad
(\mathbb P_{a_i})_-=\mathbb P^k \setminus \mathbb P({\mathbb{A}}_{\geq a_i}) \setminus \mathbb P({\mathbb{A}}_{< a_i})$.
\end{lemma}
\begin{lemma}$ \varphi^{-1}_U(\mathbb P_{a_i}\times U)=(B_U)_{a_i}$,\quad $ \varphi^{-1}_U((\mathbb P_{a_i})_+ \times U)=((B_U)_{a_i})_+$, \quad $ \varphi^{-1}_U((\mathbb P_{a_i})_- \times U)=((B_U)_{a_i})_-$.
\end{lemma}
Combining these results gives us
\begin{lemma}
\label{open}
The sets $(B_a)_-$, $(B_a)_+$ and $B_a$ are open in $B$. For any $F\in {\mathcal C}(B_a^{K^*})$, the sets $F^+, F^-$ are closed.
\end{lemma}
\subsection{ GIT and existence of quotients for $\mathbb P^k$.}
The sets $\mathbb P_{a_i}$ can be interpreted in terms of Mumford's GIT theory. Any lifting of the action of $K^*$ on $\mathbb P^k$ to ${\mathbb{A}}^{k+1}$ is called a {\it linearization}. Consider the twisted action on ${\mathbb{A}}^{k+1}$,
\[t_r(x)=t^{-r}\cdot t(x).\]
The twisting does not change the action on $\mathbb P({\mathbb{A}}^{k+1})$ and defines different linearizations.
If we compose the action with a group monomorphism $t\mapsto t^d$ then the weights of the new action are multiplied by $d$. The good and geometric quotients for the original and the composed action are the same.
Keeping this in mind it is convenient to allow linearizations with rational
weights.
\begin{definition} The point $x\in\mathbb P^{k}$ is {\it semistable} with respect to $t_r$, written $x\in(\mathbb P^{k},t_r)^{ss}$,
if there exists an invariant section $s\in \Gamma({\mathcal O}_{{\mathbb P}^{k}}(n)^{t_r})$, for some $n>0$, such that $s(x)\neq 0$.
\end{definition}
\begin{lemma}(\cite{AKMW}) $\mathbb P_{a_i}=(\mathbb P^k, t_{a_i})^{ss}$, $(\mathbb P_{a_i})_-=(\mathbb P^k, t_{a_i-\frac{1}{2}})^{ss}$, $(\mathbb P_{a_i})_+=(\mathbb P^k, t_{a_i+\frac{1}{2}})^{ss}$.
\end{lemma}
\begin{proof} $x\in \mathbb P_{a_i}$ iff either $\overline x_{a_i}\neq 0$ or $\overline x_{a_{j_1}}\neq 0$ and $\overline x_{a_{j_2}}\neq 0$ for $a_{j_1}<r=a_i<a_{j_2}$.
In both situations we find a nonzero $t_r$-invariant section $s_i=x_{i}$ or $s_{j_1j_2}=x_{j_1}^{b_1}x_{j_2}^{b_2}$ for suitable coprime $b_1$ and $b_2$.
$x\in (\mathbb P_{a_i})_-$ iff $\overline x_{a_{j_1}}\neq 0$ and $\overline x_{a_{j_2}}\neq 0$ for $a_{j_1}<a_i\leq a_{j_2}$ (or equivalently $a_{j_1}<r=a_i-1/2< a_{j_2}$). As before there is a nonzero $t_r$-invariant section $x_{j_1}^{b_1}x_{j_2}^{b_2}$ for suitable coprime $b_1$ and $b_2$.
\end{proof}
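To make the lemma concrete, here is a minimal worked instance (our own illustration, with $k=1$ and weights $a_1=0<a_2=1$):

```latex
% On P^1 with t[x_0:x_1] = [x_0 : t x_1], a monomial x_0^{c_0} x_1^{c_1}
% in Gamma(O(n)), n = c_0 + c_1, has t_r-weight c_1 - nr.
% r = 1/2: invariance forces c_1 = n/2, e.g. x_0 x_1 in O(2), so
(\mathbb P^1,t_{1/2})^{ss}=\{x_0x_1\neq 0\}\simeq K^*=(\mathbb P_{1})_-=(\mathbb P_{0})_+.
% r = 0: invariance forces c_1 = 0, i.e. the sections x_0^n, so
(\mathbb P^1,t_{0})^{ss}=\{x_0\neq 0\}=\mathbb P^1\setminus\{[0:1]\}=\mathbb P_{0}.
```

In the first case the good quotient $K^*//K^*$ is a point, in agreement with the fact that $(\mathbb P_1)_-$ carries a geometric quotient.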
It follows from GIT theory that $(\mathbb P^k,t_r)^{ss}//K^*$ exists and is a projective variety. Moreover we get
\begin{lemma} \label{invariant} Let $X_r:=(\mathbb P^k,t_r)^{ss}\subset \mathbb P^k$. For a sufficiently divisible $n\in\mathbb N$, the invariant sections $x^{\alpha_0},\dots,x^{\alpha_\ell}\in \Gamma({\mathcal O}_{X_r}(n)^{t_r})$ define the morphism $\psi: X_r\to\mathbb P^\ell=\operatorname{Proj}(K[s_0,\ldots,s_\ell])$ by putting $s_i\mapsto x^{\alpha_i}$. Then
$X_r//K^*\cong\psi(X_r)\subset \mathbb P^\ell$. Moreover if $\pi: X_r\to X_r//K^*$ is the quotient morphism (determined by $\psi$) then the push-forward $\pi_*({\mathcal O}_{X_r}(n)^{t_r})$ of the sheaf ${\mathcal O}_{X_r}(n)^{t_r}$ of ${\mathcal O}_{X_r}^{K^*}$-modules on $X_r$ is a very ample line bundle on $X_r//K^*$.
\end{lemma}
\begin{proof}
{(1)} Let $x_{i}$ be a coordinate on $\mathbb P^k$ with $t_r$-weight $0$. The section $x_i^n\in\Gamma({\mathcal O}(n)^{t_r})$ corresponds to a coordinate $s$ on $\mathbb P^\ell$; let ${\mathbb{A}}^\ell_s:=\{p\in \mathbb P^\ell\mid s(p)\neq 0\}\subset \mathbb P^\ell$ be the corresponding open affine subset. Thus the inverse image $\psi^{-1}({\mathbb{A}}^\ell_s)$ equals $U_i:=\{x\mid x_{i}\neq 0\}$ and the morphism
$\psi_{|U_i}: U_i\to {\mathbb{A}}^\ell_s$ is given by $s_j/s\mapsto x^{\alpha_j}/x_i^n$. If $n\in\mathbb N$ is sufficiently divisible then $K[U_i]^{t_r}$ is generated by all the monomials $x^{\alpha_j}/x_i^n$. We have a surjection $K[ {\mathbb{A}}^\ell_s]\to K[U_i]^{t_r}\subset K[U_i]$ corresponding to the embedding of the quotient $U_i\to U_i//{K^*}\subset {\mathbb{A}}^\ell_s$.
{(2)} Let $x_{j_1}, x_{j_2}$ be two coordinates on $\mathbb P^k$ whose $t_r$-weights have opposite signs. Then the $t_r$-invariant section $(x_{j_1}^{b_1}\cdot x_{j_2}^{b_2})^{n/{(b_1+b_2)}}\in \Gamma({\mathcal O}(n)^{t_r})$ for suitable coprime $b_1,b_2$ corresponds to a coordinate $s$ on $\mathbb P^\ell$. The inverse image $\psi^{-1}({\mathbb{A}}^\ell_s)$ is given by $U_{j_1j_2}:=\{x\mid x_{j_1}\cdot x_{j_2}\neq 0\}$. We get the quotient morphism again:
$\psi_{|U_{j_1j_2}}: U_{j_1j_2}\to U_{j_1j_2}//{K^*}=\operatorname{Spec} K[U_{j_1j_2}]^{t_r}\subset {\mathbb{A}}^\ell_s.$
Note that the monomials $x^{\alpha_i}\in \Gamma({\mathcal O}(n)^{t_r})$ are the products of $x_i$ as in (1) and $x_{j_1}^{b_1}\cdot x_{j_2}^{b_2}$ as in (2). Thus the sets $U_i$ and $U_{j_1j_2}$ cover $(\mathbb P^k, t_{r})^{ss}$, so $\psi((\mathbb P^k, t_{r})^{ss})\subset \mathbb P^\ell$ is a closed subset and $\psi$ defines a quotient morphism.
\end{proof}
\begin{corollary}
There exist quotients $\pi_{a_i}:\mathbb P_{a_i}\to \mathbb P_{a_i}//{K^*}$, $\pi_{a_{i-}}:(\mathbb P_{a_{i}})_-\to (\mathbb P_{a_{i}})_-/{K^*}$ and $\pi_{a_{i+}}:(\mathbb P_{a_{i}})_+\to (\mathbb P_{a_{i}})_+/{K^*}$.
\end{corollary}
\subsection{Existence of quotients for $\bar{B}$}
\begin{lemma} There exist quotients $\pi_{a_i}:B_{a_i}\to B_{a_i}//{K^*}$, $\pi_{a_{i-}}:(B_{a_{i}})_-\to (B_{a_{i}})_-/{K^*}$ and $\pi_{a_{i+}}:(B_{a_{i}})_+\to (B_{a_{i}})_+/{K^*}$. Moreover the induced morphisms $B_{a_i}//{K^*}\to X$, $(B_{a_{i}})_-/{K^*}\to X$ and $(B_{a_{i}})_+/{K^*}\to X$ are projective.
\end{lemma}
\begin{proof} Since $B_U\subset\mathbb P^k\times U$ and $(B_U)_{a_i}\subset \mathbb P_{a_i}\times U$ are closed subvarieties, the quotients $(B_U)_{a_i}//K^*$, $((B_U)_{a_i})_+/{K^*}$ and $((B_U)_{a_i})_-/{K^*}$ exist for any open affine $U\subset X$. Gluing these together defines the global quotients $B_{a_i}//K^*$, $(B_{a_i})_+/{K^*}$ and $(B_{a_i})_-/{K^*}$.
Consider the twisted action $t_r$ on the line bundle ${\mathcal O}_{\bar B}(E)$.
By Lemma \ref{invariant}, $(\pi_a)_*({\mathcal O}_{B_a}(nE)^{t_a})$ is relatively very ample on $B_{a}// {K^*} \to X$. Analogously for
$B_{a_+}// {K^*}$ and $B_{a_-}// {K^*}$.
\end{proof}
\begin{lemma} The open embeddings $(B_a)_-,(B_a)_+\subset B_a$ define the factorization $$ \begin{array}{ccccc}
({B}_{a_i})_-/K^* & & \stackrel{\varphi_i}{\dashrightarrow} & &({B}_{a_i})_+/K^* \\
& \searrow & & \swarrow & \\
& & {B}_{a_i} /\!/ K^* & & \end{array}$$
which is an isomorphism over the complement of $\pi_a(B_a^{K^*})\,(\simeq B_a^{K^*})\subset B_a//K^*$.
\end{lemma}
As a corollary from the above we get
\begin{proposition} \cite{Wlodarczyk2} \,\label{deco} There is a factorization of the projective morphism $\phi: Z\to X$ given by
$$Z=(B_{a_1})_-/K^* \dashrightarrow (B_{a_1})_+/K^*= (B_{a_2})_-/K^* \dashrightarrow
\ldots \dashrightarrow (B_{a_{k-1}})_+/K^*= (B_{a_k})_-/K^* \dashrightarrow
({B_{a_k}})_+/K^*=X.$$
\end{proposition}
\subsection{Local description of elementary cobordisms}
\begin{proposition} (\cite{Wlodarczyk2}) \label{local}
Let $B_{a}$ be a smooth elementary cobordism. Then for
any $x\in B^{K^*}_a$ there exists an invariant neighborhood
$V_x$ of $x$ and a
$K^*$-equivariant \'etale morphism (i.e. locally analytic isomorphism) $\phi :V_x\rightarrow
{\operatorname{Tan}}_{x,B} $,
where ${\operatorname{Tan}}_{x,B}\simeq {\mathbb{A}}_K^n$
is the tangent space with the
induced linear $K^*$-action, such that in the diagram
\[\begin{array}{rccccccc}
&{(B_a)}_-/K^*& \supset &V_x//K^* \times_{ {\operatorname{Tan}}_{x,B}//K^*}
{({\operatorname{Tan}}_{x,B})}_-/K^*& \simeq &{V_x}_-/K^*& \rightarrow
&{({\operatorname{Tan}}_{x,B})}_-/K^* \\
& \downarrow&&&& \downarrow & &\downarrow \\
&B_a//K^* &&\supset&&V_x//K^* & \rightarrow &{\operatorname{Tan}}_{x,B}//K^* \\
&\uparrow&&&&\uparrow& &\uparrow \\
&{(B_a)}_+/K^*&\supset&V_x//K^* \times_ { {\operatorname{Tan}}_{x,B}//K^*}
{({\operatorname{Tan}}_{x,B})}_+/K^* & \simeq &{V_x}_+/K^*&\rightarrow & {({\operatorname{Tan}}_{x,B})}_+/K^*\\
\end{array}\]
the vertical arrows are defined by open embeddings
and the horizontal morphisms are defined by $\phi$ and are \'etale.
\end{proposition}
\begin{proof} By Lemma \ref{open}, for any component $F\in {\mathcal C}(B_a^{K^*})$ the sets $F^+$ and $F^-$ are closed. Let $U$ be an open $K^*$-equivariant neighborhood of $x\in B_a^{K^*}$ disjoint from the closed sets $F^+$ and $F^-$ for all $F\in {\mathcal C}(B_a^{K^*})$ which do not pass through $x$.
By taking local semiinvariant parameters at the point $x$ one
can construct an equivariant morphism $\phi:U_x\rightarrow
{\operatorname{Tan}}_{x,B}\simeq {\mathbb{A}}^n_K$ from some
open affine invariant neighborhood $U_x\subset U$ such that $\phi$ is \'etale
at $x$.
By Luna's Lemma (see [Lu], Lemme 3 (Lemme Fondamental))
there exists an invariant affine
neighborhood $V_x\subseteq U_x$ of the point $x$ such that
$\phi_{\mid V_x}$ is \'etale, the induced map
$\phi_{\mid V_x}//K^*:V_x//K^* \rightarrow {\operatorname{Tan}}_{x,B}//K^*$ is \'etale
and $V_x\simeq V_x//K^* \times_{{\operatorname{Tan}}_{x,B}//K^*} {\operatorname{Tan}}_{x,B}$.
This defines the isomorphisms $V_x//K^* \times_{ {\operatorname{Tan}}_{x,B}//K^*}
{({\operatorname{Tan}}_{x,B})}_-/K^* \simeq {V_x}_-/K^*$.
It follows that the irreducible components of $B_a^{K^*}$ are smooth and disjoint and the sets $F^+$ and $F^-$, where $F\in {\mathcal C}(B_a^{K^*})$, are irreducible. Let $F_0\in {\mathcal C}(B_a^{K^*})$ be the component through $x$.
Note that $V_x\cap (\bigcup_{F\in {\mathcal C}(B_a^{K^*})}F^+)=V_x\cap F_0^+= (V_x\cap F_0)^+$ (both sets are closed and irreducible by Lemma \ref{fix}). Thus $(V_x)_-=V_x\cap (B_{a})_-$ and we get the horizontal inclusions.
\end{proof}
\begin{remark} It follows from the above that the birational maps $(B_a)_-/K^*\dashrightarrow (B_a)_+/K^*$ are
locally described by Example \ref{main}. Both spaces have cyclic quotient singularities and differ by the composite of a weighted blow-up and a weighted
blow-down. To achieve the factorization we need to desingularize the quotients, as for instance in case 1 of the example. It is hopeless to modify the weights
by birational modifications of smooth varieties. Instead we want to view Example \ref{main} from the perspective of toric varieties.
\end{remark}
\section{Toric varieties}
\subsection{Fans and toric varieties}
Let $N\simeq
{{\mathbb{Z}}}^k$ be a lattice contained in
the vector space $N^{{\mathbb{Q}}}:=N\otimes {{\mathbb{Q}}}\supset N$.
\begin{definition} (\cite{Danilov1},
\cite{Oda})\label{de: fan} By a {\it fan}
$\Sigma $ in
$N^{{\mathbb{Q}}}$
we mean a finite collection of finitely
generated strictly convex cones $\sigma$ in $N^{{\mathbb{Q}}}$ such
that
$\bullet$ any face of a cone in $\Sigma $ belongs to $\Sigma$,
$\bullet$ any two cones of $\Sigma $ intersect in a common face.
\end{definition}
If $\sigma$ is a face of $\sigma'$ we shall write $\sigma\preceq\sigma'$.
We say that a cone
$\sigma$ in $N^{{\mathbb{Q}}}$ is {\it nonsingular} if it is generated by a part of a basis of the lattice
$e_1,\ldots,e_k\in N$, written $\sigma=\langle e_1,\ldots,e_k\rangle$.
A cone $\sigma$ is {\it simplicial} if it
is generated over $\mathbb Q$ by linearly
independent integral vectors $v_1,\ldots,v_k$, written $\sigma =\langle v_1,\ldots,v_k\rangle $.
\begin{definition}\label{de: star} Let $\Sigma$ be a fan
and $\tau \in \Sigma$. The {\it star} of the
cone $\tau$ and the {\it closed star} of $\tau$ are
defined as follows:
$${\rm Star}(\tau ,\Sigma):=\{\sigma \in \Sigma\mid
\tau\preceq \sigma\},$$
$$\overline{{\rm Star}}(\tau ,\Sigma):=\{\sigma \in
\Sigma\mid \sigma'\preceq \sigma \mbox{ for some } \sigma'\in
{\rm Star}(\tau ,\Sigma)\}.$$
\end{definition}
To a fan $\Sigma $ there is associated a toric variety
$X_{\Sigma}\supset T$, i.e. a normal variety on which a torus
$T$ acts effectively with an open dense orbit
(see \cite{KKMS}, \cite{Danilov2}, \cite{Oda}, \cite{Fulton}). To
each cone $\sigma\in \Sigma$
corresponds an open affine invariant subset
$X_{\sigma}$ and its unique closed orbit $O_{\sigma}$. The
orbits in the
closure of the orbit $O_\sigma$ correspond to the cones of
${\rm Star}(\sigma ,\Sigma)$. In particular, $\tau\preceq\sigma$ iff $\overline{O_{\tau}}\supset O_\sigma$. (We shall also denote the closure of $O_\sigma$ by ${\operatorname{cl}}(O_\sigma)$.)
The fan $\Sigma $ is {\it nonsingular} (resp. {\it simplicial}) if all its cones are nonsingular (resp. simplicial). Nonsingular fans correspond to nonsingular varieties.
Denote by $$M:={\rm Hom}_{alg.gr.}(T,K^*)$$ the lattice of
group homomorphisms to $K^*$, i.e. characters of $T$.
The dual lattice $
{\rm Hom}_{alg.gr.}(K^*,T)$ of $1$-parameter subgroups of $T$ can be
identified with the lattice $N$.
Then the
vector space $M^{{\mathbb{Q}}}:=M\otimes{{\mathbb{Q}}}$ is dual to $N^{{\mathbb{Q}}}=
N\otimes{{\mathbb{Q}}}$.
The elements $F\in M=N^*$ are functionals on $N$ and integral functionals on $N^{{\mathbb{Q}}}$.
For any $\sigma\subset N^{{\mathbb{Q}}}$ we denote by
$$\sigma^\vee:=\{F\in M \mid F(v) \geq 0 \,\, {\rm
for\,\,
any} \,\,\, v\in \sigma\}$$
the set of integral vectors of the dual cone to $\sigma$. Then the
ring of regular functions $K[X_\sigma]$ is $K[\sigma^\vee]$.
We call a vector $v\in N$ {\it primitive} if it generates the
sublattice ${{\mathbb{Q}}}_{\geq 0}v\cap N$. Primitive vectors correspond to $1$-parameter monomorphisms.
For any $\sigma\subset N^{{\mathbb{Q}}}$ set
$$\sigma^\perp:=\{m\in M \mid (v,m) = 0 \,\, {\rm
for\,\,
any} \,\,\, v\in \sigma\}.$$
The latter set represents all invertible characters
on $X_\sigma$. All noninvertible characters are in $\sigma^\vee\setminus\sigma^\perp $ and vanish on $O_\sigma$.
The ring of regular functions on $O_\sigma\subset X_\sigma$ can be
written as $K[O_\sigma]=K[\sigma^\perp]\subset
K[\sigma^\vee]$.
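A standard example, spelled out in the above notation (our illustration):

```latex
% N = Z^2 and sigma = <e_1, e_2>. The dual cone sigma^vee is spanned
% by e_1^*, e_2^*, so
\sigma=\langle e_1,e_2\rangle,\qquad K[X_\sigma]=K[\sigma^\vee]=K[x,y],\qquad
X_\sigma={\mathbb{A}}^2.
% Here sigma^perp = {0}, so K[O_sigma] = K and the closed orbit
% O_sigma is the origin. For the face tau = <e_1> we have
% tau^perp = Z e_2^*, hence
O_\tau=\{x=0,\ y\neq 0\},\qquad \overline{O_\tau}=\{x=0\}\supset O_\sigma,
% illustrating that tau <= sigma iff cl(O_tau) contains O_sigma.
```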
\subsection{Star subdivisions and blow-ups}
\begin{definition}(\cite{KKMS}, \cite{Oda},
\cite{Danilov2}, \cite{Fulton}) A
{\it birational toric morphism} or simply a {\it toric morphism} of toric
varieties $X_\Sigma \to X_{\Sigma'}$ is a
morphism identical on $T\subset X_\Sigma, X_{\Sigma'}$.
\end{definition}
By the {\it support} of a fan $\Sigma$ we mean the union of all
its faces,
$|\Sigma|=\bigcup_{\sigma\in \Sigma}\sigma$.
\begin{definition} (\cite{KKMS}, \cite{Oda},
\cite{Danilov2}, \cite{Fulton})
A {\it subdivision} of a fan
$\Sigma$ is a fan $\Delta$ such that $|\Delta|=|\Sigma|$
and any cone $\sigma\in
\Sigma $ is a union of cones $\delta\in
\Delta$.
\end{definition}
\begin{definition}\label{de: star subdivision} Let $\Sigma$ be a fan and
$\varrho$ a ray passing through the
relative interior of $\tau\in\Sigma$. Then the {\it star
subdivision} $\varrho\cdot\Sigma$ of $\Sigma$ with respect to
$\varrho$ is defined to be
$$\varrho\cdot\Sigma=(\Sigma\setminus {\rm Star}(\tau ,\Sigma) )\cup
\{\varrho+\sigma\mid \sigma\in \overline{\rm Star}(\tau
,\Sigma)\setminus {\rm Star}(\tau
,\Sigma)\}.$$ If $\Sigma$ is nonsingular, i.e. all its cones are
nonsingular, $\tau=\langle v_1,\ldots,v_l\rangle $ and
$\varrho=\langle v_1+\cdots+v_l\rangle $
then we call the star
subdivision $\varrho\cdot\Sigma$ {\it nonsingular}.
\end{definition}
\begin{proposition} (\cite{KKMS}, \cite{Danilov2},
\cite{Oda}, \cite{Fulton}) Let
$X_\Sigma$ be a toric variety. There is a 1-1 correspondence
between subdivisions of the fan $\Sigma$ and proper toric
morphisms $X_{\Sigma'} \to X_{\Sigma}$.
\end{proposition}
\begin{remark} Nonsingular star subdivisions from
Definition \ref{de: star subdivision} correspond to blow-ups of smooth varieties
at closures of orbits (\cite{Oda}, \cite{Fulton}). Arbitrary
star subdivisions correspond to blow-ups of some ideals
associated to valuations (see Lemma \ref{blow-up}).
\end{remark}
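For instance (our illustration), the nonsingular star subdivision of the quadrant recovers the blow-up of ${\mathbb{A}}^2$ at the origin:

```latex
% Sigma = faces of sigma = <e_1, e_2>, tau = sigma, rho = <e_1 + e_2>.
% The star subdivision rho . Sigma replaces sigma by the two
% nonsingular cones
\sigma_1=\langle e_1,\,e_1+e_2\rangle,\qquad \sigma_2=\langle e_1+e_2,\,e_2\rangle,
% whose dual cones give the charts
K[X_{\sigma_1}]=K[y,\;x/y],\qquad K[X_{\sigma_2}]=K[x,\;y/x].
% These are exactly the two charts of Bl_0(A^2); the toric morphism
% X_{rho.Sigma} -> X_Sigma = A^2 contracts cl(O_rho) to the origin.
```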
\section{Polyhedral cobordisms of Morelli}
\subsection{Preliminaries}
By $N^{{\mathbb{Q}}+}$
we shall denote a vector space $N^{{\mathbb{Q}}+}\approx {\mathbb{Q}}^k$ containing a
lattice $N^+\simeq \mathbb Z^k$, together with a primitive vector ${v_0}\in N^+$ and the
canonical projection
$$\pi:N^{{\mathbb{Q}}+}\to N^{{\mathbb{Q}}}\simeq N^{{\mathbb{Q}}+} / \mathbb Q\cdot{v_0}.$$
\begin{definition}(\cite{Morelli1})
A cone $\sigma\subset N^{{\mathbb{Q}}+}$ is {\it{$\pi$-strictly convex}} if $\pi(\sigma)$ is strictly convex (contains no line). A fan $\Sigma$ is $\pi$-strictly convex if it consists of $\pi$-strictly convex cones.
\end{definition}
In the following all the cones in $N^{{\mathbb{Q}}+}$ are assumed to be $\pi$-strictly convex and simplicial. The
$\pi$-strictly convex cones $\sigma$ in $N^{{\mathbb{Q}}+}$ split into two categories.
\begin{definition}
A cone $\sigma\subset N^{{\mathbb{Q}}+}$ is called {\it independent}
if the restriction of $\pi$ to $\sigma$ is a linear isomorphism
(equivalently $v_0\not\in{\rm span}(\sigma)$).
A cone $\sigma\subset N^{{\mathbb{Q}}+}$ is called {\it
dependent} if the restriction of $\pi$ to $\sigma$ is a lattice submersion which is not an isomorphism
(equivalently $v_0\in{\rm span}(\sigma)$).
A dependent cone is called a
{\it circuit} if all its proper faces are independent.
\end{definition}
\begin{lemma}\label{circuit} Any dependent cone $\sigma$ contains a unique circuit $\delta$.
\end{lemma}
\subsection{$K^*$-actions and $N^{{\mathbb{Q}}+}$}
The vector ${v_0}=(a_1,\dots,a_k)\in N^{{\mathbb{Q}}+}$ defines a 1-parameter subgroup $t^{v_0}:=t^{a_1}_1\cdots t_k^{a_k}$
acting on $T$ and all toric varieties $X\supset T$. Denote by $M^+$ the lattice dual to $N^+$. Then the lattice $N:={N^+}/\mathbb Z\cdot{v_0}\subset N^{\mathbb{Q}}:=N\otimes \mathbb Q$
is dual to the lattice $M:=\{a\in M^+\mid(a,{v_0})=0\}\subset M^{\mathbb{Q}}:=M\otimes\mathbb Q$ of all the characters invariant with respect to the group action. The natural projection of cones $\pi :\sigma\to \pi(\sigma)\subset N^{\mathbb{Q}}$ defines the good quotient morphism
$$X_\sigma=\operatorname{Spec} K[\sigma^\vee]\to X_\sigma//K^*
=\operatorname{Spec} K[\sigma^\vee\cap M]
=\operatorname{Spec} K[{\pi(\sigma)}^\vee]=X_{\pi(\sigma)}.$$
\begin{lemma}\label{fixed3}
A cone $\sigma$ is independent iff the geometric quotient $X_\sigma\to X_{\sigma}/K^*$ exists, or equivalently iff $X_\sigma$ contains no fixed points. The cone $\sigma$ is dependent iff $O_\sigma$ consists of fixed points.
\end{lemma}
\begin{proof} Note that the set $X_\sigma^{K^*}$ is closed and if it is nonempty then it contains $O_\sigma$. A point $p\in O_\sigma$ is fixed, i.e. $t^{v_0} p=p$, iff for all functionals $F\in \sigma^\perp$ (precisely those with $x^F(p)\neq 0$) we have $x^F(p)=x^F (t^{v_0}p)=t^{F(v_0)}x^F(p)$.
Then for all $F\in\sigma^\perp\subset\rm{span}(\sigma)^\perp$ we have
$F({v_0})=0$ so ${v_0}\in \rm{span} (\sigma)$.
\end{proof}
\begin{remark} In Example \ref{main} $X_\sigma={\mathbb{A}}^n$ is a cobordism iff $\sigma$ is $\pi$-strictly convex.
\end{remark}
\begin{corollary}\label{fixed2} A cone $\delta\in \Sigma$ is a circuit if and only if $O_\delta$ is the generic orbit of some $F\in {\mathcal C}(X_\Sigma^{K^*})$. \end{corollary}
\begin{proof}$O_\sigma$ is fixed with respect to the action of $K^*$ if $\sigma$ is dependent. Thus $O_\sigma\subset \overline{ O_\delta}$ where $\delta$ is the unique circuit in $\sigma$ (Lemma \ref{circuit}).
\end{proof}
\subsection{Morelli cobordisms}
\begin{definition}(Morelli \cite{Morelli1},
\cite{Abramovich-Matsuki-Rashid})\label{de: Morelli}
A fan $\Sigma$ in $N^{{{\mathbb{Q}}}+}\supset N^+$
is
called a {\it polyhedral cobordism} or simply a cobordism if
the sets of cones
$$\partial_-(\Sigma): =\{\sigma \in\Sigma\mid \mbox{there exists } p\in {\operatorname{int}}(\sigma)\mbox{ such that }
p-\epsilon\cdot v_0\not\in\ |\Sigma| \ \mbox{for all small} \ \epsilon>0\},
$$ $$\partial_+(\Sigma): =\{\sigma \in\Sigma\mid \mbox{there exists } p\in {\operatorname{int}}(\sigma)\mbox{ such that }
p+\epsilon\cdot v_0\not\in\ |\Sigma| \ \mbox{for all small} \ \epsilon>0\} \quad\quad\quad{}$$ are
subfans of $\Sigma$ and
$\pi(\partial_-(\Sigma)):=\{\pi(\tau)\mid\tau\in \partial_-(\Sigma)\}$ and
$\pi(\partial_+(\Sigma)):=\{\pi(\tau)\mid\tau\in \partial_+(\Sigma)\}$
are fans in $N^{{\mathbb{Q}}}$.
\end{definition}
\bigskip
\subsection{Dependence relation}\label{dep}
Let $\sigma=\langle v_1,\dots,v_k\rangle$ be a dependent (simplicial) cone. Then, by definition, $v_0\in{\rm{span}} (v_1,\dots,v_k)$,
where $v_1,\dots,v_k$ are linearly independent. There exists an integral relation, unique up to rescaling,
\[r_1v_1+\dots +r_kv_k=av_0,\quad \mbox{where}\quad a>0.\quad (*) \]
\begin{definition}(\cite{Morelli1}) The ray generators $v_i$ of $\sigma$ are called {\it positive, negative} and {\it null} vectors, according to the sign of their coefficients $r_i$ in the defining relation $(*)$.
\end{definition}
\begin{remark}
Note that the relation $(*)$ defines a unique relation $$r'_1w_1+\dots+r'_kw_k=0\quad\quad (**)$$ where $w_i$ are generating vectors in the rays $\pi\left(\langle v_i\rangle\right)$, $r'_iw_i=r_i\pi(v_i)$.
In particular $r'_i/r_i>0$.
\end{remark}
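A small explicit instance of the relations $(*)$ and $(**)$ (our own example):

```latex
% N^+ = Z^3, v_0 = (0,0,1), and the dependent simplicial cone
% sigma = <v_1, v_2, v_3> with
v_1=(1,0,1),\quad v_2=(0,1,1),\quad v_3=(1,1,0),\qquad
v_1+v_2-v_3=2v_0, % so (r_1,r_2,r_3) = (1,1,-1) and a = 2:
% v_1, v_2 are positive, v_3 is negative, and there are no null vectors.
% Projecting along v_0 gives w_1 = (1,0), w_2 = (0,1), w_3 = (1,1) and
w_1+w_2-w_3=0, % the induced relation (**), here with r_i' = r_i.
% pi(sigma) is the quadrant spanned by (1,0), (0,1), so sigma is
% pi-strictly convex.
```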
\begin{lemma}\label{sign} Let $\sigma=\langle v_1,\dots, v_k\rangle$ be a dependent cone. Then an independent face $\tau$ is in $\partial_+(\sigma)$ (resp. in $\partial_-(\sigma)$) if
$\tau$ is a face of $\langle v_1,\ldots, \check{v_i},\ldots, v_k \rangle$ for some index $i$ such that $r_i<0$ (resp. $r_i>0$).
\end{lemma}
\begin{proof} By definition, $\tau\in\partial_+(\sigma)$ means that there exists $p\in\rm{int} (\tau)$ such that for any sufficiently small $\epsilon>0$,
$p+\epsilon {v_0}\notin\sigma$.
Write $p=\sum\alpha_iv_i=\sum_{r_i>0}\alpha_iv_i+\sum_{r_i<0}\alpha_iv_i+\sum_{r_i=0}\alpha_iv_i$, where $\alpha_i\geq 0$. Then one of the coefficients in $$p+\epsilon {v_0}=\sum_{r_i>0}(\alpha_i+r_i\epsilon)v_i+\sum_{r_i<0}(\alpha_i+r_i\epsilon)v_i+\sum_{r_i=0} (\alpha_i+r_i\epsilon)v_i$$
is negative for small $\epsilon>0$. This is possible only if $\alpha_i=0$ for some index $i$ with $r_i<0$.
\end{proof}
\begin{lemma} A cone $\tau$ is in $\partial_+(\sigma)$ iff there exists ${F\in\sigma^\vee\cap \tau^\perp}$ such that $F({v_0})<0$.
\end{lemma}
\begin{proof} If $\tau\in\partial_+(\sigma)$ then there exists $p\in\rm{int} (\tau)$ for which $ p+\epsilon{v_0}\notin\sigma$ for all small $\epsilon>0$.
Hence there exists $F\in\sigma^\vee$ such that $F(p+\epsilon{v_0})<0$ for small $\epsilon>0$. Then $F(p)=0$ and $F({v_0})<0$. Since $p\in\rm{int} (\tau)$ we have $F_{|\tau}=0$.
\end{proof}
\begin{corollary}$\partial_+(\sigma)$ (resp. $\partial_-(\sigma)$) is a fan.
\end{corollary}
\begin{proof} By the lemma above, if $\tau\in\partial_+(\sigma)$ then every face
$\tau'$ of $\tau$ is in $\partial_+(\sigma)$.
\end{proof}
\begin{lemma}\label{sigma2}Let $\sigma$ be a dependent cone in $N^{{\mathbb{Q}}+}$. Then $B:=X_\sigma$ is a birational cobordism such that
\item{$\bullet$} $(X_\sigma)_+=X_{\partial_-(\sigma)}$, $(X_\sigma)_-=X_{\partial_+(\sigma)}$.
\item{$\bullet$} $(X_\sigma)_+/K^*\cong X_{\pi(\partial_-(\sigma))}$, $(X_\sigma)_-/K^*\cong X_{\pi(\partial_+(\sigma))}$.
\item{$\bullet$} $\pi(\partial_-(\sigma))$ and $\pi(\partial_+(\sigma))$ are both decompositions of $\pi(\sigma)$.
\item{$\bullet$} There is a factorization into a sequence of proper morphisms $(X_\sigma)_+/K^*\to(X_\sigma)//K^*\leftarrow (X_\sigma)_-/K^*$.
\end{lemma}
\begin{proof} We have $p\in O_\tau$ where $O_\tau\subset(X_\sigma)_-$ iff $ \lim t^{v_0} p\notin X_\sigma$. This is equivalent to existence of a functional ${F\in\sigma^{\vee}}$ for which $ x^F(t^{v_0} p)=t^{F({v_0})}x^F(p)$ has a pole at $t=0$. This means exactly that $x^F(p)\neq 0$ and $F({v_0})<0$. The last condition says $ F_{|\tau}=0$ and $F({v_0})<0$, which is equivalent to $\tau\in\partial_+(\sigma)$.
Let $x\in\pi(\sigma)$. Then $\pi^{-1}(x)\cap\sigma$ is a line segment or a point. Let $y=\sup \{\pi^{-1}(x)\cap\sigma\}$. Then $y\in{\operatorname{int}}(\tau)$, where $\tau\prec\sigma$ and $y+\epsilon{v_0}\notin\sigma$, which implies that $\tau\in\partial_+(\sigma)$. Thus every point in $\pi(\sigma)$ belongs to the relative interior of a unique cone $\pi(\tau)\in \pi(\partial_+(\sigma))$. Since $\pi_{|\tau}$ is a linear isomorphism and $\partial_+(\sigma)$ is a fan, all faces of $\pi(\tau)$ are in $\pi(\partial_+(\sigma))$; analogously for $\partial_-(\sigma)$. Finally, $\pi(\partial_+(\sigma))$ and $\pi(\partial_-(\sigma))$ are both decompositions of $\pi(\sigma)$, corresponding to the toric varieties $(X_\sigma)_-/K^*= X_{\pi(\partial_+(\sigma))}$ and $(X_\sigma)_+/K^*= X_{\pi(\partial_-(\sigma))}$.
\end{proof}
Lemmas \ref{sign} and \ref{sigma2} yield
\begin{lemma} \label{sigma} $B=X_\sigma$ is an elementary cobordism with a single
fixed point component $F:=\overline{O_\delta}$, where $\del=\langle v_i\mid r_i\neq 0\rangle $ is a circuit. Moreover $(X_\sigma)_+=X_{\partial_-(\sigma)}=X_\sigma\setminus \overline{O_{\sigma_+}}$, where $$\sigma_+:=\langle v_i\mid r_i> 0\rangle, \quad \sigma_-:=\langle v_i\mid r_i< 0\rangle.$$ In particular
$F^+=(\overline{O_\delta})^+=\overline{O_{\sigma_+}}$,\quad $F^-=(\overline{O_\delta})^-=\overline{O_{\sigma_-}}$.
\end{lemma}
\subsection{Example \ref{main} revisited}
The cobordism $X_\sigma$ from the lemma generalizes the cobordism $B= {\mathbb{A}}^{l+m+r}_K\supset T=(K^*)^{l+m+r}$ from Example \ref{main}.
The action of $K^*$ determines
a $1$-parameter subgroup of $T$ which corresponds to a vector $v_0=[a_1,\ldots,a_l,-b_1,\ldots,-b_m,0,\ldots,0].$
The cobordism $B$ is associated with a nonsingular
cone $\Delta\subset N_{{\mathbb{Q}}}$, while $B_-$ and $B_+$ correspond to the fans $\partial_+(\Delta)$ and $\partial_-(\Delta)$ consisting of the
faces of $\Delta$ visible from above and
below respectively.
The quotients $B_+/K^*$ , $B_-/K^*$ and $B//K^*$
are toric varieties corresponding to the fans
$\pi(\partial_+(\Delta))=\{\pi(\sigma)\mid \sigma\in\partial_+(\Delta)\}$,
$\pi(\partial_-(\Delta))=\{\pi(\sigma)\mid \sigma\in\partial_-(\Delta)\}$ and $\pi(\Delta)$
respectively, where $\pi$ is the projection defined by $v_0$.
The relevant birational map $\phi : B_-/K^* \dashrightarrow B_+/K^*$
for $l,m\geq 2$ is a
toric flip associated with a bistellar operation replacing
the triangulation $\pi(\partial_-(\Delta))$ of the cone $\pi(\Delta)$
with $\pi(\partial_+(\Delta))$.
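For instance, for $l=m=2$, $r=0$ and $a_1=a_2=b_1=b_2=1$ we recover a standard example: $B={\mathbb{A}}^4_K$ with the action $t(x_1,x_2,x_3,x_4)=(tx_1,tx_2,t^{-1}x_3,t^{-1}x_4)$, so that $v_0=[1,1,-1,-1]$. The images $w_i:=\pi(e_i)$ of the standard basis vectors satisfy the single relation
$$w_1+w_2=w_3+w_4,$$
so $\pi(\Delta)$ is the cone over a quadric, $\pi(\partial_-(\Delta))$ and $\pi(\partial_+(\Delta))$ are its two triangulations, $B//K^*={\operatorname{Spec}}\, K[x_1x_3,x_1x_4,x_2x_3,x_2x_4]$ is the affine quadric cone, and $\phi$ is the Atiyah flop.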
\begin{figure}[ht]
\epsfysize=1.2in
\epsffile{figure7.eps}
\caption{Morelli cobordism}\label{Fi:7}
\end{figure}
\subsection{$\pi$-nonsingular cones}
\begin{definition}(Morelli)\label{mo}
An independent cone $\tau$ is $\pi$-{\it nonsingular} if
$\pi(\tau)$ is nonsingular. A fan $\Sigma$ is
$\pi$-{\it nonsingular} if all independent cones in $\Sigma$
are $\pi$-nonsingular. In particular a dependent cone $\sigma$ is
$\pi$-{\it nonsingular} if all its independent faces are $\pi$-nonsingular.
\end{definition}
\begin{lemma}
Let $\sigma=\langle v_1,\ldots,v_k\rangle$ be a dependent cone and let $w_i$ be the primitive generators of the rays $\pi(v_i)$. Let $\sum r'_i w_i=0$ be the unique relation (**) among the vectors $w_i$.
Then the ray $\varrho:=\pi(\sigma_+)\cap \pi(\sigma_-)$ is generated by the vector $\sum\limits_{r'_i>0}r'_i w_i=\sum\limits_{r'_i<0}-r'_iw_i$ and $\varrho\cdot\pi(\partial_+(\sigma))=\varrho\cdot\pi(\partial_-(\sigma))$.
If $\sigma$ is a $\pi$-nonsingular dependent cone then the ray $\varrho$ defines nonsingular star subdivisions of $\pi(\partial_+(\sigma))$ and $\pi(\partial_-(\sigma))$.
\end{lemma}
\begin{proof}
Note that the cones in $\pi(\partial_+(\sigma))\setminus \pi(\partial_-(\sigma))$ are exactly the cones containing $\pi(\sigma_+)$. That is, $\pi(\partial_+(\sigma)) \setminus \pi(\partial_-(\sigma))=\operatorname{Star} (\pi(\sigma_+),\pi(\partial_+(\sigma)))$.
This gives $\varrho\cdot\pi(\partial_+(\sigma))=(\pi(\partial_+(\sigma))\cap\pi(\partial_-(\sigma)))\cup \{\varrho+\tau \mid \tau\in\pi(\partial_+(\sigma))\cap\pi(\partial_-(\sigma))\}=\varrho\cdot\pi(\partial_-(\sigma))$. Assume now that $\sigma$ is $\pi$-nonsingular and that all the coefficients $r'_i$ are coprime. By Lemma \ref{sign} and the $\pi$-nonsingularity, the set of vectors $w_1,\ldots,\check{w_i},\ldots,w_k$, where $r'_i\neq 0$, is a basis of the lattice $\pi(\sigma)\cap N$. Thus
every vector $w_i$, where $r'_i\neq 0$, can be written as an integral combination of others. Since the relation (**) is unique it follows that the coefficient $r'_i$ is equal to $\pm 1$.
Thus $\varrho$ is generated by the vector $\sum\limits_{r'_i>0} w_i=\sum\limits_{r'_i<0}w_i$ and determines nonsingular star subdivisions.
\end{proof}
\begin{corollary} \label{fact} If $\sigma$ is dependent then
there exists a factorization
$$(X_\sigma)_-/K^* \buildrel \phi_-\over\longleftarrow \Gamma((X_\sigma)_-/K^*,(X_\sigma)_+/K^*)\buildrel \phi_+\over\longrightarrow (X_\sigma)_+/K^*,$$
where $\Gamma((X_\sigma)_-/K^*,(X_\sigma)_+/K^*)$ is the normalization of the graph of $(X_\sigma)_-/K^*\to (X_\sigma)_+/K^*$. If $\sigma$ is $\pi$-nonsingular the morphisms
$ \phi_-, \phi_+$ are blow-ups of smooth centers.
\end{corollary}
\begin{proof} By definition $\Gamma((X_\sigma)_-/K^*,(X_\sigma)_+/K^*)$ is a toric variety. By the universal property of the graph (the dominating component of the fiber product) it corresponds to the coarsest simultaneous subdivision of both $\pi(\partial_+(\sigma))$ and $\pi(\partial_-(\sigma))$, that is, to the fan $\{\tau_1\cap\tau_2\mid\tau_1\in \pi(\partial_+(\sigma)),\ \tau_2\in\pi(\partial_-(\sigma))\}=\varrho\cdot\pi(\partial_+(\sigma))=\varrho\cdot\pi(\partial_-(\sigma))$.
\end{proof}
\subsection{The $\pi$-desingularization lemma of Morelli and centers of blow-ups}
For any simplicial cone
$\sigma =\langle v_1,\ldots,v_k\rangle $ in $N$ set $${\rm par}(\sigma ):=\{ v\in
\sigma\cap N_{\sigma}\mid v=\alpha_1v_1+\cdots+\alpha_kv_k,
\mbox{where}\,\, 0\leq\alpha_i< 1\},$$
$$\overline{{\rm par}(\sigma )}:=\{ v\in
\sigma\cap N_{\sigma}\mid v=\alpha_1v_1+\cdots+\alpha_kv_k,
\mbox{where}\,\, 0\leq\alpha_i\leq 1\}.$$
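For example, for the singular two-dimensional cone $\sigma=\langle (1,0),(1,2)\rangle$ in $N={\mathbb{Z}}^2$ a direct computation gives
$${\rm par}(\sigma)=\{0,(1,1)\},$$
since $\alpha_1(1,0)+\alpha_2(1,2)$ is integral for $0\leq\alpha_i<1$ only when $\alpha_1=\alpha_2=0$ or $\alpha_1=\alpha_2=1/2$; the star subdivision at $\langle (1,1)\rangle$ resolves the corresponding $A_1$-singularity.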
We associate with a dependent cone $\sigma$ and an integral vector
$ v \in
\pi(\sigma)$ a vector ${\rm Mid}(v,\sigma):=\pi_{|\partial_-(\sigma)}^{-1}(v)+\pi_{|\partial_+(\sigma)}^{-1}(v)\in \sigma$ (\cite{Morelli1}), where
$\pi_{|\partial_-(\sigma)}$ and
$\pi_{|\partial_+(\sigma)}$ are the
restrictions of $\pi$ to $\partial_-(\sigma)$ and
$\partial_+(\sigma)$.
We also set ${\rm Ctr}_-(\sigma):=\sum_{r_i<0}w_i$, ${\rm Ctr}_+(\sigma):=\sum_{r_i>0}w_i$.
\begin{lemma}(Morelli \cite{Morelli1}, \cite{Morelli2},
\cite{Abramovich-Matsuki-Rashid}) \label{le: Morelli}
Let $\Sigma$ be a simplicial cobordism in $N^+$. Then there exists a
simplicial cobordism $\Delta$ obtained from $\Sigma$ by a sequence of star
subdivisions such that $\Delta$ is $\pi$-nonsingular.
Moreover, the sequence can be taken so that any independent
and already $\pi$-nonsingular face of $\Sigma$ remains
unaffected during the process.
All the centers of the star
subdivisions are of
the form $\pi_{|\tau}^{-1}({\rm par}(\pi(\tau)))$
where $\tau$ is independent, and ${\rm Mid}({\rm Ctr}_\pm(\sigma),\sigma)$, where $\sigma$ is dependent.
\end{lemma}
\begin{remark} It follows from Lemma \ref{le: Morelli} that the $\pi$-desingularization can be done for an open affine neighborhood of a point $x$ of $F\in {\mathcal C}(B^{K^*})$ on the smooth cobordism $B$ which is \'etale isomorphic to the tangent space ${\operatorname{Tan}}_{x,B}$.
We need to show how to globalize this procedure in a coherent and possibly canonical way. This will replace the tangent space ${\operatorname{Tan}}_{x,B}$ in the local description of flips defined by elementary cobordisms (as in Proposition \ref{local}) with $\pi$-nonsingular $X_\sigma$.
By Corollary \ref{fact} we get a factorization into a blow-up and a blow-down at smooth centers: $(B_a)_-/K^* \buildrel \phi_-\over\longleftarrow \Gamma((B_a)_-/K^*,(B_a)_+/K^*)\buildrel \phi_+\over\longrightarrow (B_a)_+/K^*$.
\end{remark}
\section{Proof of the $\pi$-desingularization lemma}
\subsection{Dependence relation revisited}
\begin{lemma} \label{le: normal} (\cite{Wlodarczyk1}, Lemma
10, \cite{Morelli1}) Let $w_1,\ldots,w_{k+1}$ be integral vectors in
${\bf Z}^{k}\subset {\bf Q}^{k}$
which are not contained in a proper vector subspace of ${\bf Q}^{k}$.
Then $$\sum_{i=1}^{k+1}(-1)^i\det(w_1,\ldots,\check{w_i},\ldots,w_{k+1})\cdot w_i=0 $$ is the
unique (up to proportionality) linear relation between $w_1,\ldots,w_{k+1}$.
\end{lemma}
\begin{proof}
Let $v:=
\sum_{i=1}^{k+1} (-1)^i\det(w_1,\ldots,\check{w_i},\ldots,w_{k+1})\cdot w_i$.
Then for any $i<j$,
\[\begin{array}{rc}
&\det(w_1,\ldots,\check{w_i},\ldots,\check{w_j},\ldots,w_{k+1},v)=
\det(w_1,\ldots,\check{w_i},\ldots,w_{k+1})\cdot
\det(w_1,\ldots,\check{w_i},\ldots,\check{w_j},\ldots,w_{k+1},w_i)+
\\
&\det(w_1,\ldots,\check{w_j},\ldots,w_{k+1})\cdot
\det(w_1,\ldots,\check{w_i},\ldots,\check{w_j},\ldots,w_{k+1},w_j)=\\
&(-1)^i(-1)^{k-i}\det(w_1,\ldots,\check{w_j},\ldots,w_{k+1})
\cdot\det(w_1,\ldots,\check{w_i},\ldots,w_{k+1})+\\
&(-1)^j(-1)^{k-j+1}\det(w_1,\ldots,\check{w_i},\ldots,w_{k+1})
\cdot\det(w_1,\ldots,\check{w_j},\ldots,w_{k+1})=0.
\end{array}\]
Therefore $v\in \bigcap_{i,j}
\operatorname{lin}\{w_1,\ldots,\check{w_i},\ldots,\check{w_j},\ldots,w_{k+1}\}=\{0\}$.
\end{proof}
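To illustrate Lemma \ref{le: normal}, take $k=2$ and $w_1=(1,0)$, $w_2=(0,1)$, $w_3=(1,1)$. The formula gives
$$-\det(w_2,w_3)w_1+\det(w_1,w_3)w_2-\det(w_1,w_2)w_3=w_1+w_2-w_3=0,$$
which is indeed, up to proportionality, the only linear relation between these three vectors.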
\begin{definition} Let $\delta=\langle v_1,\ldots,v_k\rangle$ be a
dependent cone and $w_i:={\operatorname{prim}}(\pi(\langle v_i \rangle))$.
Then we shall call a relation $\sum_{i=1}^k r_iw_i=0$ {\it
normal} if it is a positive multiple of the relation (**) from Section \ref{dep} and
$|r_i|=|\det(w_1,\ldots,\check{w_i},\ldots,w_{k})|$ for $i=1,\ldots,k$.
\end{definition}
\subsection{Dependent $n$-cones}
Assume for simplicity that the normal relation is of the form
$$r_1w_1+\ldots +r_kw_k+r_{k+1}w_{k+1}+\ldots +r_{k+l}w_{k+l}+0\cdot w_{k+l+1}+0\cdot w_{k+l+2}+\ldots=0, \eqno(0)$$
where $r_1\geq r_2\geq\ldots\geq r_k>0$ and $-r_{k+1}\geq -r_{k+2}\geq\ldots\geq -r_{k+l}>0$.
We can represent it by two decreasing sequences of positive numbers:
$$r(\sigma)=(r_1,r_2,\ldots,r_k;-r_{k+1},-r_{k+2},\ldots,-r_{k+l}).$$
Set ${\operatorname{sgn}}(\sigma)=+$ if either $r_1>-r_{k+1}$, or $r_1=-r_{k+1}$ and $l\geq 2$; set ${\operatorname{sgn}}(\sigma)=-$ if either $r_1<-r_{k+1}$, or $r_1=-r_{k+1}$ and $l=1$.
\begin{definition} An independent cone $\sigma$ is called an {\it $n$-cone}
if $|\det(\pi(\sigma))|=n$. A dependent cone $\sigma$
is called an {\it $n$-cone} if one of its independent faces
is an $n$-cone and the others are $m$-cones, where $m\leq n$.
We shall distinguish 5 types of dependent $n$-cones.
\begin{enumerate}
\item $(n,*;n,*)$, if $r_1=-r_{k+1}=n$ and $k,l\geq 2$.
\item $(n;n,*)$, if $r_1=-r_{k+1}=n$ and either $k=1$ and $l\geq 2$ (${\operatorname{sgn}}(\sigma)=+$), or by symmetry $l=1$ and $k\geq 2$ (${\operatorname{sgn}}(\sigma)=-$).
\item $(n,*;*)$ if $r_1=n>-r_{k+1}$ and $k\geq 2$ (${\operatorname{sgn}}(\sigma)=+$) or by symmetry $r_1<-r_{k+1}=n$ and $l\geq 2$ (${\operatorname{sgn}}(\sigma)=-$)
\item $(n;n)$ if $r_1=-r_{k+1}=n$ and $k=l=1$
\item $(n;*)$ if $r_1=n>-r_{k+1}$, $k=1$, $l\geq 2$ (${\operatorname{sgn}}(\sigma)=+$), or by symmetry $r_1<-r_{k+1}=n$, $k\geq1$ and $l=1$ (${\operatorname{sgn}}(\sigma)=-$).
\end{enumerate}
\end{definition}
To every dependent $n$-cone $\sigma$ of type $(i)$, where $i=1,\ldots,5$, we assign the invariant $${\operatorname{inv}}(\sigma):=(n,-i).$$
These invariants are ordered lexicographically.
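For example, if the normal relation of a dependent cone $\sigma$ is
$$2w_1+w_2-2w_3-w_4=0,$$
then $r(\sigma)=(2,1;2,1)$, $r_1=-r_3=2$ and $k=l=2$, so ${\operatorname{sgn}}(\sigma)=+$ and, provided no independent face of $\sigma$ has projected determinant greater than $2$, $\sigma$ is a $2$-cone of type $(1)$ with ${\operatorname{inv}}(\sigma)=(2,-1)$.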
\subsection{Star subdivision at ${\operatorname{Mid}}({\operatorname{Ctr}}_{{\operatorname{sgn}}(\sigma)}(\sigma),\sigma)$}
\begin{lemma} Let $\del$ be a maximal dependent cone with a normal relation $(0)$. Then $v={\operatorname{Mid}}({\operatorname{Ctr}}_+(\delta),\delta)$ is in the relative interior of the circuit $\del_0:=
\langle v_i\mid r_i\neq 0\rangle$. The star subdivision at $\langle v\rangle$ affects the cones $\del'\in\operatorname{Star}(\del_0,\Sigma)$ only. All the normal relations for the cones $\del'$ are proportional to the normal relation for $\del$.
\end{lemma}
\begin{proof}
$w_1+\ldots+w_k=w_1+\ldots+w_k-\epsilon(r_1w_1+\ldots+r_kw_k+r_{k+1}w_{k+1}+\ldots+r_{k+l}w_{k+l})$ is a combination of $w_1,\ldots,w_{k+l}$ with positive coefficients for a sufficiently small $\epsilon>0$. Hence ${\operatorname{Ctr}}_+(\delta)$ lies in the relative interior of $\pi(\del_0)$, and therefore $v={\operatorname{Mid}}({\operatorname{Ctr}}_+(\delta),\delta)$ lies in the relative interior of $\del_0$.
\end{proof}
\begin{lemma} \label{le: normal2} Let $\delta=\langle v_1,\ldots,v_{k+l},\ldots,v_r\rangle$ be a maximal dependent cone with a normal relation $(0)$.
Let $v={\operatorname{Mid}}({\operatorname{Ctr}}_+(\delta),\delta)\in{\operatorname{int}}
\langle v_1,\ldots,v_{k+l}\rangle$. Let $m_w\geq 1$ be an integer
such that the vector
$$w=\frac{1}{m_w}(w_1+\ldots+w_k)$$ is primitive. Then the maximal
dependent cones in $\langle v \rangle\cdot\delta$ are of the
form $\delta_{i_0}=\langle
v_1,\ldots,\check{v}_{i_0},\ldots,v_{k+l},\ldots, v_r,v\rangle$, where $i_0\leq k+l$.
\begin{enumerate}
\item Let $r_{i_0}>0$, i.e.\ $1\leq i_0\leq k$. Then for the maximal dependent cone
$\delta_{i_0}=\langle
v_1,\ldots,\check{v_{i_0}},\ldots,v_k,v\rangle$ in
$\langle v \rangle\cdot\delta$, the normal relation
is given (up to sign) by $$\sum_{r_i> 0, i\neq i_0}\frac{r_i-r_{i_0}}{m_w}w_i+
\sum_{r_i< 0} \frac{r_i}{m_w}w_i+r_{i_0}w=0. \eqno(1a)$$
\item Let $r_{i_0}<0$, i.e.\ $k+1\leq i_0\leq k+l$. Then for the maximal dependent cone
$\delta_{i_0}=\langle
v_1,\ldots,\check{v}_{i_0},\ldots,v_k,v\rangle$ in $\langle
v \rangle\cdot\delta$, the normal relation
is given (up to sign) by $$\sum_{r_i>0}-\frac{r_{i_0}}{m_w}w_i+
r_{i_0}w=0. \eqno(1b)$$
\end{enumerate}
\end{lemma}
\begin{proof} It is straightforward to see that the above equalities hold. We only
need to show that the relations considered are normal. For that
it suffices to show that one of the coefficients is equal
(up to sign) to the corresponding determinant.
Comparing the coefficients of $w$ in the above
relations with the normal
relations from Lemma \ref{le: normal} we get
(1a) $\det(w_1,\ldots,\check{w}_{i_0},\ldots,w_k)=
r_{i_0}$.

(1b) The coefficient of $w$ is equal to
$\det(w_1,\ldots,\check{w}_{i_0},\ldots,w_k)=r_{i_0}$.
\end{proof}
\begin{corollary} Let $\del$ be a dependent $n$-cone and $v={\operatorname{Mid}}({\operatorname{Ctr}}_{{\operatorname{sgn}}(\del)}(\del),\del)$.
\begin{enumerate}
\item If $\del$ is of type $(1)$ or $(3)$ then $\langle v\rangle\cdot\del$ consists of $n$-cones of smaller type.
\item If $\del$ is of type $(2)$ or $(5)$ then $\langle v\rangle\cdot\del$ consists of one $n$-cone of the same type as $\del$ and cones of smaller type.
\end{enumerate}
\end{corollary}
\begin{proof} Without loss of generality assume that ${\operatorname{sgn}}(\del)=+$. Then $r_1=n$.
(1) In the case when $\del$ is of type (1) or $(3)$, the index $k\geq 2$.
If $r_{i_0}=n$ then in the relation $(1a)$ all the coefficients $r_i-r_{i_0}\leq 0$, and only the coefficient $r_{i_0}=n$ is positive. The cone $\del_{i_0}$ is an
$n$-cone of type $(n;n,*)$ (in case $\del$ is of type (1)) and of type $(n;*)$ (in case $\del$ is of type (3)).
If $n>r_{i_0}>0$ then in the relation $(1a)$ all the positive coefficients among the $r_i-r_{i_0}$ and $r_{i_0}$ are smaller than $n$. The cone $\del_{i_0}$ is an
$n$-cone of type $(n;*)$ (in case $\del$ is of type (1)) and of type $(*;*)$ (in case $\del$ is of type (3)).
If $r_{i_0}=-n$ then $\del$ is of type $(1)$ and $\del_{i_0}$ is an
$n$-cone $(n;n)$ of type (4).
If $-n<r_{i_0}<0$ then $\del_{i_0}$ is an
$m$-cone $(m;m)$, $m<n$.
(2) In the case when $\del$ is of type (2) or $(5)$, the index $k=1$ and we have only one positive ray, with $r_1=n$.
If $r_{i_0}=r_1=n$ then in the relation $(1a)$ only the coefficient $r_{i_0}=n$ is positive. The cone $\del_{i_0}$ is an
$n$-cone of type $(n;n,*)$ (in case $\del$ is of type (2)) and of type $(n;*)$ (in case $\del$ is of type (5)).
If $r_{i_0}=-n$ then $\del$ is of type $(2)$ and $\del_{i_0}$ is an
$n$-cone $(n;n)$.
If $-n<r_{i_0}<0$ then $\del_{i_0}$ is an
$m$-cone $(m;m)$, $m<n$.
\end{proof}
A direct consequence of the above is the following
\begin{corollary} \label{def} Let $n=\max\{|\det(\pi(\tau))|\mid \tau\quad \mbox{is independent in} \quad \Sigma\}$.
Let $\del$ be a dependent $n$-cone with a circuit $\del_0$ and $v={\operatorname{Mid}}({\operatorname{Ctr}}_{{\operatorname{sgn}}(\del)}(\del),\del)$. If $\del$ is of type
$(i)$ then all the dependent cones $\del'\in\operatorname{Star}(\del_0,\Sigma)$ are $m$-cones of type $(i)$ with $m\leq n$. Moreover:
\begin{enumerate}
\item If $\del$ is of type $(1)$ or $(3)$ then either $\langle v\rangle\cdot\Sigma$ contains
a smaller number of dependent cones with maximal invariant ${\operatorname{inv}}(\sigma)$ or
the maximal invariant ${\operatorname{inv}}(\sigma)$ drops.
\item If $\del$ is of type $(2)$ or $(5)$ then $\langle v\rangle\cdot\Sigma$ contains
an unchanged number of dependent cones with maximal invariant ${\operatorname{inv}}$.
\end{enumerate}
\end{corollary}
\subsection{Codefinite faces}
\begin{definition} \label{def0}(\cite{Morelli1}) An independent face $\tau$ of a dependent cone $\del$ is called {\it codefinite} iff it does not contain both negative and positive rays.
\end{definition}
Every dependent cone $\del$ contains two maximal codefinite faces $$\del^+:=\langle v_i\mid r_i\geq 0\rangle \quad\quad \del^-:=\langle v_i\mid r_i\leq 0 \rangle.$$
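For instance, if the normal relation of $\del=\langle v_1,\ldots,v_5\rangle$ is
$$2w_1+w_2-2w_3-w_4+0\cdot w_5=0,$$
then $\del^+=\langle v_1,v_2,v_5\rangle$ and $\del^-=\langle v_3,v_4,v_5\rangle$ are the maximal codefinite faces, while a face such as $\langle v_1,v_3\rangle$, containing both a positive and a negative ray, is not codefinite.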
\begin{corollary} Any independent cone $\tau\in\Sigma$ can be made a codefinite face of all dependent cones containing it. The process uses star subdivisions at ${\operatorname{Mid}}({\operatorname{Ctr}}_{{\operatorname{sgn}}(\sigma)}(\sigma),\sigma)$ applied to the dependent cones for which $\tau$ is not codefinite. Moreover, the procedure does not increase the number of cones with maximal invariant.
\end{corollary}
\begin{proof}
First apply the procedure to all dependent cones of types (1) and (3) for which $\tau$ is not codefinite. This procedure terminates, since by the previous lemma the invariant drops, until we arrive at the situation where all cones for which $\tau$ is not codefinite are of type
(2) or (5). Note also that cones of type (4) have only one positive and one negative ray, and thus all their independent faces are codefinite.
Next apply the star subdivision at $v={\operatorname{Mid}}({\operatorname{Ctr}}_{{\operatorname{sgn}}(\sigma)}(\sigma),\sigma)$ to all cones of type (2) or (5).
After the star subdivision of a cone $\del=\langle v_1,\ldots, v_k\rangle $ with the normal relation
$$r_1w_1+r_{k+1}w_{k+1}+\ldots +r_{k+l}w_{k+l}=0,$$ where $k=1$, we create a cone
$\del_1=\langle v,v_2,\ldots, v_k\rangle $ with exactly the same relation
$$r_1w'+r_{k+1}w_{k+1}+\ldots +r_{k+l}w_{k+l}=0,$$ where $w'=\pi(v)$.
Since $v_1$ was the only positive ray and it was replaced with the center of subdivision $v$, the cone $\tau$ contains only negative rays of $\del_1$. The other cones $\del_j$, where $j\geq 2$, are of type (3), and again $\tau$ is not their codefinite face.
\end{proof}
\subsection{Star subdivisions at $v\in\pi_{|\tau}^{-1}({\operatorname{par}}(\pi(\tau)))$}
\begin{lemma}
Let
$w=\sum_{i\in I}\alpha_iw_i\in
{\operatorname{par}}(\pi(\del^+))$. Let $\tau=\langle v_i\mid i\in I\rangle\preceq \del^+$ be a
codefinite face of
$\delta$ containing $v=\pi_{|\del^+}^{-1}(w)$ in its relative interior. Then the maximal dependent cones in
$\langle v \rangle\cdot\delta$
are of the form $\delta_{i_0}=
\langle v_1,\ldots,\check{v}_{i_0},\ldots,v_k,v\rangle$,
where $i_0\in I$.
2a. Let $i_0\in I$ and $r_{i_0}>0$. Then for the maximal
dependent cone $\delta_{i_0}=
\langle v_1,\ldots,\check{v}_{i_0},\ldots,v_k,v\rangle$ in
$\langle v \rangle\cdot\delta$,
the normal relation
is given (up to sign) by
$$\sum_{i\in I\setminus \{i_0\},r_i>0}
(\alpha_{i_0}r_{i}-
\alpha_{i}r_{i_0})w_{i}+\sum_{i\not\in I, r_i>0}
\alpha_{i_0}r_{i}w_i+\sum_{
i\in I,r_i=0} -\alpha_{i}r_{i_0}w_{i}+
\sum_{r_i<0}\alpha_{i_0} r_iw_{i}+r_{i_0}w=0. \eqno(2a)$$
2b. Let $i_0\in I$ and $r_{i_0}=0$. For the maximal dependent cone
$\delta_{i_0}=\langle
v_1,\ldots,\check{v}_{i_0},\ldots,v_k,v\rangle$,
the normal relation
is given (up to sign) by
$$\sum_{i\neq i_0} \,\,\alpha_{i_0}r_iw_i+0w=0. \eqno(2b)$$
\end{lemma}
\begin{proof}
2a. The coefficient of $w$ is equal to
$\det(w_1,\ldots,\check{w}_{i_0},\ldots,w_k)=r_{i_0}$.
2b. The coefficient of $w_i$, where $r_i>0$, is equal to \\
$\det(w_1,\ldots,
\check{w}_{i_0},\ldots,\check{w_{i}},\cdots,w_k,w)=\alpha_{i_0}
\det(w_1,\ldots,
\check{w}_{i_0},\ldots,\check{w_{i}},\cdots,w_k,w_{i_0})=\\
(-1)^{k-i_0}\alpha_{i_0}
\det(w_1,\ldots,\check{w}_{i},\cdots,w_k,w)=
(-1)^{k-i_0} \alpha_{i_0}r_i$.
\end{proof}
\begin{lemma}\label{def1} Let $\del$ be a dependent $n$-cone,
$w\in {\operatorname{par}}(\pi(\del^+))$ and $v=\pi_{|\del^+}^{-1}(w)\in\del^{\pm}$.
Then
\begin{enumerate}
\item If $\del$ is of type (1) or (2) or (4) then $\langle v\rangle\cdot\del$ contains $n$-cones of smaller type.
\item If $\del$ is of type (3) then $\langle v\rangle\cdot\del$ may contain $n$-cones of type (3) and smaller type.
\item If $\del$ is of type (5) then $\langle v\rangle\cdot\del$ may contain at most one $n$-cone of type (5) and smaller types.
\end{enumerate}
\end{lemma}
\begin{proof}
(1) If $\del$ is of type (1) or (2), that is, $(n,*;n,*)$ or $(n;n,*)$, then after the star subdivision we create $n$-cones for $r_{i_0}=n$ with only one coefficient $n$. These are $n$-cones of type $(3)$ or $(5)$.
If we subdivide a cone of type $(4)$, that is, $(n;n)$, we create only one $n$-cone of type $(5)$.
(2) If $\del$ is of type (3), that is, $(n,*;*)$, then after the star subdivision we create $n$-cones for $r_{i_0}=n$ with only one coefficient $n$. These can be $n$-cones of type $(3)$ or $(5)$.
(3) If $\del$ is of type (5), that is, $(n;*)$, then after the star subdivision we create an $n$-cone of type (5) for $r_{i_0}=n$. It has only one positive coefficient $n$ and the other coefficients are negative and $>-n$. \end{proof}
\begin{lemma}\label{def2} Let $n>1$ and let $\del$ be a dependent $n$-cone of type $(2)$, $(4)$ or $(5)$. Then $\del^{-{\operatorname{sgn}}(\del)}$ is a maximal independent face and $|\det(\pi(\del^{-{\operatorname{sgn}}(\del)}))|=n$. There exist
$w\in
{\operatorname{par}}(\pi(\del^{-{\operatorname{sgn}}(\del)}))$ and the corresponding $v=\pi_{|\del^{-{\operatorname{sgn}}(\del)}}^{-1}(w)\in\del^{-{\operatorname{sgn}}(\del)}$.
The subdivision $\langle v\rangle\cdot\del$ contains $n$-cones of smaller type.
\end{lemma}
\begin{proof} Without loss of generality we may assume that ${\operatorname{sgn}}(\del)$ is negative, so we take a star subdivision at
$v\in\del^+$.
If $\del$ is of type $(2)$ then $r_1=-r_{k+1}=n$, so we have one negative ray with coefficient $-n$ and $k\geq 2$ positive rays with coefficients $\leq n$.
If $\del$ is of type $(4)$ then we have one positive and one negative ray with coefficients $n$ and $-n$.
If $\del$ is of type $(5)$ then $r_1<-r_{k+1}=n$, so we have one negative ray with coefficient $-n$ and $k\geq 2$ positive rays.
After the subdivision at $\langle v\rangle$ we create cones with negative coefficients $>-n$ and positive rays with coefficients $\leq r_1$.
If $\del$ is of type $(2)$ the new dependent $n$-cones are of type (3), (4) or (5).
If $\del$ is of type $(4)$ the new dependent $n$-cones are of type (5):
in the normal relation for a new cone $\del_{i_0}$, where $r_{i_0}=r_1=n$, there is one positive ray with coefficient $n$ and negative coefficients $>-n$.
If $\del$ is of type $(5)$ we create only dependent $m$-cones with $m<n$.
\end{proof}
\subsection{Resolution algorithm}
The $\pi$-desingularization algorithm consists of
eliminating all dependent $n$-cones, where $n>1$ in the following order.
{\bf Step 1}. {\it Eliminating all dependent $n$-cones $\del$ of type $(1)$} by applying the
star subdivision at
$\langle {\operatorname{Mid}}({\operatorname{Ctr}}_{{\operatorname{sgn}}(\del)}(\del),\del)\rangle$. (Corollary
\ref{def}.)
\bigskip
{\bf Step 2}. {\it Eliminating all dependent $n$-cones $\del$ of type $(2)$.}
{\bf Step 2a}. By definition $\del^{-{\operatorname{sgn}}(\del)}$ is a maximal independent face and $|\det(\pi(\del^{-{\operatorname{sgn}}(\del)}))|=n$.
Let $v\in\pi^{-1}_{|\del^{-{\operatorname{sgn}}(\del)}}({\operatorname{par}}(\pi(\del^{-{\operatorname{sgn}}(\del)})))$. Then $v\in{\operatorname{int}}(\tau)$ for some independent face $\tau\preceq \del^{-{\operatorname{sgn}}(\del)}$. We
make $\tau$ codefinite with respect to all
dependent cones containing it. By Lemma \ref{def0}, this process does not increase the number of $n$-cones of type (2).
{\bf Step 2b}. Apply the star subdivision at $\langle v\rangle$. We change all the cones in $\operatorname{Star}(\tau,\Sigma)$. The cone $\tau$ is codefinite with respect to all faces from $\operatorname{Star}(\tau,\Sigma)$. Moreover
by definition $\tau\preceq \del^{-{\operatorname{sgn}}(\del)}$.
By Lemmas \ref{def1}, \ref{def2},
the process will decrease the number of $n$-cones of type (2).
\bigskip
{\bf Step 3}. {\it Eliminating all dependent $n$-cones $\del$ of type $(3)$} by applying
star subdivision at $\langle {\operatorname{Mid}}({\operatorname{Ctr}}_{{\operatorname{sgn}}(\del)}(\del),\del)\rangle$.
\bigskip
{\bf Step 4}. {\it Eliminating all dependent $n$-cones of type (4) by using the two steps procedure as in Step 2.}
\bigskip
{\bf Step 5}. {\it Eliminating all dependent $n$-cones of type (5) by using the two steps procedure as in Step 2.}
\bigskip
{\bf Step 6}. {\it Eliminating all independent $n$-cones $\tau$ which are not faces of some dependent cones}.
{\bf Step 6a}.
Let $v\in\pi^{-1}_{|\tau}({\operatorname{par}}(\pi(\tau)))$. Then $v\in{\operatorname{int}}(\tau_0)$ for some independent face $\tau_0\preceq \tau$. We
make $\tau_0$ codefinite with respect to all
dependent cones containing it.
{\bf Step 6b}. Apply the star subdivision at $\langle v\rangle$. The determinant of every independent face $\tau'$ containing $\tau_0$ drops:
$$|\det(\pi(\tau'))|=|\det(w_1,\ldots,\check{w_i},\ldots,w_k,w)|=\alpha_i |\det(w_1,\ldots,w_k)|<
|\det(w_1,\ldots,w_k)|,$$
where $\pi(\tau')=\langle w_1,\ldots,w_k\rangle$, $w=\sum_i\alpha_iw_i$, $0\leq\alpha_i<1$, $v=\pi_{|\tau_0}^{-1}(w)$.
\begin{remark}
The strategy of this algorithm, using the above centers for the corresponding $n$-cones, was first applied in \cite{Wlodarczyk1} in the proof of regularization of toric factorization (see \cite{Wlodarczyk1}, Lemmas 11-12, pages 403-410). Then it was used directly in the context of $\pi$-desingularization in \cite{Abramovich-Matsuki-Rashid} and in the revision of Morelli's original algorithm in \cite{Morelli2}.
\end{remark}
\subsection{The Weak Factorization of toric morphisms}\label{toric}
\begin{theorem}(\cite{Wlodarczyk1},\cite{Morelli1}) Let $f:X\dashrightarrow Y$ be a
birational toric map of
smooth complete toric varieties.
Then $f$ can be factored
as
$X=X_0\buildrel f_0 \over \dashrightarrow X_1
\buildrel f_1 \over \dashrightarrow \ldots \buildrel f_{n-1} \over
\dashrightarrow X_n=Y ,$
where each $X_i$ is a smooth complete toric variety and $f_i$ is a blow-up
or blow-down at a smooth invariant center.
\end{theorem}
\begin{proof} By Proposition \ref{fact}, there is a smooth toric variety $Z$ and a factorization of $f$ into $X\leftarrow Z\to Y$, where $Z\to X$ and $Z\to Y$ are projective toric morphisms. By Proposition \ref{construction} there is a toric variety $\overline{B}\supset B\supset T\times K^*$ which is a compactified cobordism defined for the projective toric morphism $Z\to X$. The variety $B$ corresponds to a strictly $\pi$-convex nonsingular fan $\Del$. Its $\pi$-desingularization determines a $\pi$-nonsingular fan $\Del^\pi$ corresponding to a toric variety $B^\pi$ projective over $B$. The open subsets $B_-$ and $B_+$ have smooth quotients $B_-/K^*$ and $B_+/K^*$. They correspond to $\pi$-nonsingular subfans $\Del_+$ and $\Del_-$ of $\Del$ and are not affected by the $\pi$-desingularization.
That is $B_-=B^\pi_-$ and $B_+=B^\pi_+$ and $B^\pi$ is a cobordism between $X$ and $Z$ and admits a compactification $\bar{B}^\pi=B^\pi\cup X\cup Z=B^\pi\cup {\mathcal O}(Z)\cup X\times (\mathbb P^1\setminus\{0\})$ (see Proposition \ref{construction}). By Proposition \ref{deco}, the compactified cobordism
$\bar{B}^\pi\supset B^\pi$ determines a
decomposition into elementary cobordisms $B^\pi_a$ and the toric factorization into maps $(B^\pi_a)_-/K^*\dashrightarrow (B^\pi_a)_+/K^*$.
If $B^\pi_a$ is an elementary cobordism corresponding to the fan $\Del_a$ then $(B^\pi_a)_-$, $(B^\pi_a)_+$ and $(B^\pi_a)_-\cap (B^\pi_a)_+\subset B^\pi_a$ correspond to the subfans $\Del_-$, $\Del_+$ and $\Del_0:=\Del_+\cap\Del_-$ respectively, consisting of independent cones. Every toric orbit in $B^\pi_a\setminus ((B^\pi_a)_-\cap (B^\pi_a)_+)=F^+\cup F^-$ contains a fixed point orbit in its closure corresponding to a dependent cone. Thus $\Del_a\setminus \Del_0$ is a collection of dependent cones and some of their independent faces.
If $\sigma\in \Del_a$ is a dependent cone then $X_\sigma$ intersects a unique fixed
point component $\bar{O}_\del$, where $\del$ is the unique circuit in $\sigma$. All orbits in $X_\sigma$ contain in their closures a fixed point orbit $O_\tau\subset \bar{O}_\del$. Thus $X_\sigma$ is disjoint from the other closed sets
$F^+$ and $F^-$, where $F\neq \bar{O}_\del\in{\mathcal C}(B_a^\pi)^{K^*}$. We get that $(X_\sigma)_-=X_\sigma\setminus (\bar{O}_\del)^+=X_{\partial_+(\sigma)}\subset B^\pi_-$ and $(X_\sigma)_+=X_{\partial_-(\sigma)} \subset B^\pi_+$. It follows from the above that
$\pi(\Del_+)$ and $\pi(\Del_-)$ are two nonsingular subdivisions of the fan $\pi(\Del_a)$ which coincide on $\pi(\Del_0)$ and which define two different decompositions of all projections $\pi(\sigma)$ of dependent cones: $\pi(\Del_+)_{|\pi(\sigma)}=\pi(\partial_-(\sigma))$ and $\pi(\Del_-)_{|\pi(\sigma)}=\pi(\partial_+(\sigma))$. If $\del$ is a circuit in $\sigma$ such that $\pi(\del)=\langle w_1,\ldots,w_k\rangle$ then, by Lemma \ref{mo}, the unique relation is given by $\sum r_iw_i=0$ where $r_i=\pm 1$. Let $w_\del=\sum_{r_i=1} w_i=\sum_{r_i=-1} w_i$. Then
the ray $\langle w_\del\rangle$ determines nonsingular star subdivisions of
$\pi(\Del_+)$ and $\pi(\Del_-)$, with
$\langle w_\del\rangle\cdot\pi(\Del_+)_{|\pi(\sigma)}=\langle w_\del\rangle\cdot\pi(\Del_-)_{|\pi(\sigma)}$. If $\del_1,\ldots,\del_r$ are all the circuits in $\Del_a$ then the stars $\operatorname{Star}(\pi(\del_i),\pi(\Del_a))$ are disjoint and $\langle w_{\del_1}\rangle \cdot\dots\cdot \langle w_{\del_r}\rangle\cdot \pi(\Del_+)=\langle w_{\del_1}\rangle \cdot\dots\cdot \langle w_{\del_r}\rangle\cdot\pi(\Del_-)$, and consequently
$(B^\pi_a)_-/K^*\dashrightarrow (B^\pi_a)_+/K^*$ factors into a sequence of blow-ups at smooth toric centers followed by a sequence of blow-downs at smooth toric centers.
\end{proof}
\section{$\pi$-desingularization of birational cobordisms}
\subsection{Stratification by isotropy groups on a smooth cobordism}
\label{stratification}
\medskip\newcommand{\overline s}{\overline s}
\noindent
Let $B$ be a smooth cobordism of dimension $n$. Denote by $\Gamma_x$ the isotropy group of a point $x\in B$. Let $D$ be a $K^*$-invariant divisor on $B$ with simple normal crossings.
Define the stratum $s=s_x$ through $x$ to be the irreducible component of the set $\{p\in B\mid\Gamma_x=\Gamma_p\}$ containing $x$.
We can find $\Gamma_x$-semiinvariant parameters in the affine open neighborhood $U$ of $x$ such that
\begin{enumerate}
\item$\Gamma_x$ acts nontrivially on
$u_1,\dots, u_k$ and trivially on $u_{k+1},\dots,u_n$.
\item Any component of $D$ through $x$ is described by a parameter $u_i$ for some $i$.
\end{enumerate}
After suitable shrinking of $U$ the parameters define an \'etale $\Gamma_x$-equivariant
morphism $\varphi: U\to {\operatorname{Tan}}_{x,B}={\mathbb{A}}^n$. By definition the stratum $s$ is locally described by
$u_1=\ldots =u_k=0$. The parameters $u_1, \dots,u_k$ determine a $\Gamma_x$-equivariant
smooth morphism $$\psi: U \to {\operatorname{Tan}}_{x,B}/{{\operatorname{Tan}}_{x,s}}={\mathbb{A}}^k.$$ We shall view
${\mathbb{A}}^k=X_\sigma$ as a toric variety with a torus $T_\sigma$ and refer to $\psi$ as
a {\it toric chart}.
This assigns to a stratum $s$ the cone $\sigma$ and the relevant group $\Gamma_\sigma$ acting on $X_\sigma$.
Then Luna's fundamental lemma \cite{Luna} implies that the morphisms $\varphi$ and $\psi$
preserve stabilizers, the induced morphism $\psi_\Gamma:U//\Gamma_x\to X_\sigma//\Gamma_\sigma$ is smooth and $U\simeq U//{\Gamma_x}\times_{{\mathbb{A}}^k//\Gamma_x}{\mathbb{A}}^k$. Note that for toric charts on $B$ we require that the inverse images of toric divisors have simple normal crossings with the components of $D$. We refer to this property as {\it compatibility with $D$}.
The invariants $\Gamma_x$ can be defined for $X_\sigma={\mathbb{A}}^k$ and determine
the relevant $T_\sigma$-invariant stratification $S_\sigma$ on $X_\sigma$.
By shrinking $U$ we may assume that the strata on $U$ are inverse images of the
strata on $X_\sigma$. After a suitable rearrangement of $u_1,\ldots, u_k$, any stratum $s_y$ on $U$ through $y$ is described in a neighborhood $U'\subset U$ of $y$ by $u_1=\ldots =u_\ell=0$,
where $\Gamma_y\leq \Gamma_x$ acts nontrivially on
$u_1,\dots,u_\ell$ and trivially on $u_{\ell+1},\ldots,u_k,u_{k+1},\ldots,u_n$. The remaining $\Gamma_y$-invariant parameters at $y$ are
$u_{\ell+1}-u_{\ell+1}(y),\dots,u_{n}- u_{n}(y)$. Then the closure
$\overline{s}_y$
is described on $U$ by $u_1=\ldots =u_\ell=0$ and contains $s_x$. This shows:
\begin{lemma} The
closure of any stratum is a union of strata.
\end{lemma}
We can introduce an order on the strata by setting
$$s'\leq s\quad\quad \mbox{iff} \quad\quad s\subseteq\overline {s'}.$$
\begin{lemma} \label{inclusion} If $s'\leq s$ then there exists an inclusion $i_{\sigma'\sigma}:\sigma'\hookrightarrow\sigma$ onto a face of $\sigma$.
The inclusion $i_{\sigma'\sigma}$ defines a $\Gamma_{\sigma'}$-equivariant morphism of toric varieties
$X_{\sigma'}\to X_{\sigma'}\times 1\hookrightarrow X_{\sigma'}\times T\subset X_{\sigma},$
where $T_{\sigma'}\times T= T_{\sigma}$ and $\Gamma_{\sigma'}\subset T_{\sigma'}$.
Moreover we
can write $X_\sigma\cong X_{\sigma'}\times {\mathbb{A}}^r$ where $\Gamma_{\sigma'}$ acts trivially on
${\mathbb{A}}^r$ and nontrivially on all coordinates of $X_{\sigma'}\simeq{\mathbb{A}}^\ell$.
\end{lemma}
\noindent
In the above situation we shall write
$$ \sigma'\leq \sigma.$$
The lemma above immediately implies:
\begin{lemma} \label{maximal} If $\tau<\sigma$ (that is, $\tau\leq\sigma$, $\tau\neq\sigma$) then $\Gamma_\tau\subsetneq \Gamma_\sigma$.
\end{lemma}
Consider the stratification $S_\sigma$ on $X_\sigma$. Every stratum $s_\tau\in S_\sigma$, where $\tau\leq \sigma$, is a union of orbits $O_{\tau'}$. Set $$\bar\tau:=\{\tau'\mid O_{\tau'}\subset s_\tau\}.$$
\begin{lemma} Any cone $\tau'\in \overline{\tau}$ can be expressed as $\tau'\simeq \tau\times\langle e_1,\ldots,e_r\rangle\subset\sigma$, and $X_{\tau'}=X_\tau\times {\mathbb{A}}^s\times T^{r-s}$, where $\Gamma_\tau$ acts trivially on ${\mathbb{A}}^s\times T^{r-s}$.
\end{lemma}
\begin{lemma} For any $\tau'\in \overline{\tau}$, we have
$\Gamma_\tau=\Gamma_{\tau'}:=\{g\in\Gamma_\sigma\mid \underset{x\in O_{\tau'}}{\forall}\ gx=x\}$.
\end{lemma}
\subsection{Local projections}
\begin{definition} A cone $\sigma$ in $N^{{\mathbb{Q}}}$ is \textit{of maximal dimension} if $\dim \sigma=\dim N^{{\mathbb{Q}}}$.
\end{definition}
Every cone $\sigma$ in $N^{{\mathbb{Q}}}$ defines a cone of maximal dimension in $\operatorname{span}(\sigma)\subset N^{{\mathbb{Q}}}$ with lattice $N\cap\operatorname{span}(\sigma)$. We denote it by $\underline\sigma$. There is a noncanonical isomorphism
$$X_\sigma=X_{\underline\sigma}\times O_\sigma.$$
The vector space $\operatorname{span}(\sigma)\subset N^{{\mathbb{Q}}}$ corresponds to a subtorus $T_{\underline\sigma}\subset T_\sigma$ defined as $T_{\underline\sigma}:=\{t\in T_\sigma\mid tx=x\ \mbox{for}\ x\in O_\sigma\}.$ Then $O_\sigma$ is isomorphic to the torus $T_{\sigma}/T_{\underline\sigma}$ with dual lattice $\sigma^\perp\subset M^{{\mathbb{Q}}}$.
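\noindent For instance, take $\sigma=\langle e_1\rangle$ in $N={\mathbb{Z}}^2$. Then $\operatorname{span}(\sigma)={\mathbb{Q}}\cdot e_1$, $\underline\sigma=\langle e_1\rangle$ with lattice ${\mathbb{Z}}\cdot e_1$, and
$$X_\sigma=\operatorname{Spec} K[x_1,x_2^{\pm 1}]={\mathbb{A}}^1\times K^*=X_{\underline\sigma}\times O_\sigma,$$
where $T_{\underline\sigma}=K^*\times\{1\}$ fixes $O_\sigma=\{0\}\times K^*$ pointwise and $\sigma^\perp={\mathbb{Z}}\cdot e_2^*$.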
\begin{lemma} \label{le: p}If $\Gamma\subset T_\sigma$ acts freely on $X_\sigma=X_{\underline\sigma}\times O_\sigma$ then $$X_{\sigma}{/\Gamma}=X_{\underline\sigma}\times O_\sigma/\Gamma,$$ where $O_\sigma\simeq O_\sigma/\Gamma$ if $\Gamma$ is finite, while $O_\sigma/\Gamma$ is isomorphic to a torus of dimension $\dim O_\sigma-1$ if $\Gamma=K^*$.
\end{lemma}
\begin{proof} By assumption $\Gamma\cap T_{\underline\sigma}$ is trivial, so $\Gamma$ embeds into $T_\sigma/T_{\underline\sigma}$. Hence, for a suitable choice of the splitting $X_\sigma=X_{\underline\sigma}\times O_\sigma$, the group $\Gamma$ acts trivially on $X_{\underline\sigma}$ and $X_\sigma/\Gamma=X_{\underline\sigma}\times O_\sigma/\Gamma$.
\end{proof}
Let $\pi_\sigma: (\sigma,N_\sigma)\to (\sigma^{\Gamma},N_\sigma^{\Gamma})$ denote the projection corresponding to the quotient map $X_\sigma\to X_{\sigma}//{\Gamma_\sigma}$.
\begin{lemma}\label{pro}
If $\tau\leq \sigma$ then $\pi_\tau(\tau)\simeq\pi_\sigma(\tau).$
\end{lemma}
\begin{proof}$X_{\underline\tau}\times O_\tau$ is an open subvariety in $X_\sigma$ and $\Gamma_\tau$ acts trivially on $O_\tau$. We have
\[(X_{\underline{\tau}}\times O_\tau)/{\Gamma_\tau}=X_{\underline{\tau}}/{\Gamma_\tau}\times O_\tau=X_{\pi_\tau(\tau)}\times O_\tau.\]
$\Gamma_{\sigma}/{\Gamma_\tau}$ acts freely on $(X_{\underline{\tau}}\times O_\tau)/{\Gamma_\tau}=X_{\pi_\tau(\tau)}\times O_\tau$. Thus by the previous lemma
$X_{\pi_\sigma(\tau)}\cong X_{\pi_\tau(\tau)}\times O_{\tau}/{\Gamma_\sigma}.$
\end{proof}
For any $\tau\in\Del^\sigma$, set $\Gamma_\tau:=\{g\in\Gamma_\sigma\mid \underset{x\in O_{\tau}}{\forall} gx=x\}$.
Similarly one proves:
\begin{lemma}\label{pro2} Let $\Gamma\subset \Gamma_\sigma$ be a group
containing $\Gamma_\tau$, where $\tau\in\Del^\sigma$. Let $\pi_\Gamma: \sigma\to\sigma^\Gamma$ be the projection corresponding to the quotient $X_\sigma\to X_\sigma/\Gamma$. Then $\pi_\Gamma(\tau)\simeq\pi_\sigma(\tau).$
\end{lemma}
\begin{lemma} \label{pr} Let $\Gamma$ be a subgroup of $\Gamma_\sigma$, and $\pi_\Gamma: \sigma\to\sigma^\Gamma$ be the projection corresponding to the quotient $X_\sigma\to X_\sigma/\Gamma$. For any $\tau\leq \sigma$ and $\tau'\in\overline\tau$ we have
$\tau'=\tau\oplus \langle e_1,\ldots,e_k\rangle$ where $\langle e_1,\ldots,e_k\rangle$ is nonsingular and $\pi_\Gamma(\tau')=\pi_\Gamma(\tau)\oplus \langle e_1,\ldots,e_k\rangle$.
\end{lemma}
\begin{proof} $X_{\tau'}=X_\tau\times {\mathbb{A}}^k\times O_{\tau'}$, where the action of $\Gamma_\tau\cap \Gamma$ on ${\mathbb{A}}^k\times O_{\tau'}$ is trivial. Thus $X_{\tau'}/({\Gamma_\tau}\cap\Gamma)=X_{\tau}/({\Gamma_\tau}\cap\Gamma)\times {\mathbb{A}}^k\times O_{\tau'}$. Now $\Gamma/({\Gamma_\tau}\cap \Gamma)$ acts freely on $O_{\tau'}\subset s_\tau$ and we use Lemma \ref{le: p}.
\end{proof}
\subsection{Independent and dependent cones}
By Lemma \ref{pro} there exists a lattice isomorphism $j_{\tau\sigma}:\pi_\tau(\tau)\to\pi_\sigma(\tau)$, where $\tau\leq\sigma$. Thus the projections $\pi_\tau$ and $\pi_{\sigma}$ are compatible: $j_{\tau\sigma}\pi_\tau=\pi_{\sigma|\tau}$.
\noindent {\bf Case 1}: $\Gamma_\sigma=K^*$. The action of $K^*$ on $X_\sigma$ corresponds to a primitive vector $v_\sigma\in N_\sigma$. The invariant characters $M^\Gamma_\sigma\subset M_\sigma$ are precisely those $F\in M_\sigma$ such that $F(v_\sigma)=0$. The dual morphism is the projection $\pi_\sigma :N_\sigma\to N_{\sigma}/{\mathbb Z\cdot v_\sigma}=N^\Gamma_\sigma$.
\noindent
The quotient morphism of toric varieties $X_\sigma\to X_{\sigma}/\Gamma_\sigma$ corresponds to the projection $\sigma\to \pi_\sigma(\sigma)$.
\bigskip\noindent
{\bf Case 2}: $\Gamma_\sigma\cong {\mathbb{Z}}_n$. The invariant characters $M^\Gamma_\sigma\subset M_\sigma$ form a sublattice of full rank, $\dim(M^\Gamma_\sigma)=\dim(M_\sigma)$, with $M_{\sigma}/{M_\sigma^\Gamma}\simeq {\mathbb{Z}}_n$.
The dual morphism defines an inclusion $\pi:N_\sigma\hookrightarrow N^\Gamma_\sigma$. The projection $\sigma\to \pi_\sigma(\sigma)$ is a linear isomorphism which does not preserve lattices.
\begin{definition} Let $\Del^\sigma$ be a subdivision of a cone $\sigma\in\Sigma$. A cone $\tau\in\Del^\sigma$ is {\it independent} if $\pi_{\sigma{|\tau}}$ is a linear isomorphism, and {\it dependent} otherwise.\end{definition}
The two cases above give:
\begin{lemma}
A cone $\tau$ is independent iff $\Gamma_\tau$ is finite, and $\sigma$ is dependent iff $\Gamma_\sigma=K^*$.
\end{lemma}
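\noindent For instance, let $\sigma=\langle e_1,e_2\rangle\subset N_\sigma^{{\mathbb{Q}}}={\mathbb{Q}}^2$, so $X_\sigma={\mathbb{A}}^2$. If $\Gamma_\sigma=K^*$ acts by $t(x,y)=(tx,t^{-1}y)$ then $v_\sigma=e_1-e_2$ and
$$\pi_\sigma(e_1)=\pi_\sigma(e_2)\in N_\sigma/{\mathbb{Z}}\cdot v_\sigma\simeq{\mathbb{Z}},$$
so $\pi_{\sigma|\sigma}$ drops dimension and $\sigma$ is dependent. If instead $\Gamma_\sigma={\mathbb{Z}}_2$ acts by $(x,y)\mapsto (-x,-y)$ then $M^\Gamma_\sigma=\{(a,b)\mid a+b\ \mbox{even}\}$ has full rank, $\pi_{\sigma|\sigma}$ is a linear isomorphism which does not preserve lattices, and $\sigma$ is independent.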
\medskip
\subsection{Semicomplexes and birational modifications of cobordisms}
By glueing cones $\sigma$ corresponding to strata along their faces we construct a {\it semicomplex} $\Sigma$, that is, a partially ordered set of cones such that for $\sigma\leq \sigma'$ there exists a face inclusion $i_{\sigma\sig'}:\sigma\to\sigma'$.
\begin{remark} The glueing need not be transitive: for $\sigma\leq\sigma'\leq\sigma''$ we have $i_{\sigma'\sigma''}i_{\sigma\sig'}\neq i_{\sigma\sig''}$. Instead, there exists an automorphism $\alpha_\sigma$ of $\sigma$ such that $i_{\sigma'\sigma''}i_{\sigma\sig'}= i_{\sigma\sig''}\alpha_\sigma$.
\end{remark}
For any fan $\Sigma$ denote by $\operatorname{Vert}(\Sigma)$ the set of all one-dimensional faces (rays) in $\Sigma$. Denote by $\operatorname{Aut}(\sigma)$ the group of automorphisms of $\sigma$ inducing $\Gamma_\sigma$-equivariant automorphisms of $X_\sigma$.
\medskip
\begin{definition}\label{de:sub} By a {\it subdivision} of $\Sigma$ we mean a collection $\Del=\{\Del^\sigma\mid \sigma\in\Sigma\}$ of subdivisions $\Del^\sigma$ of $\sigma$ such that:
\begin{enumerate}
\item[1$^\circ$] If $\tau\leq \sigma$ then the restriction $\Del^\sigma_{|\tau}$ of $\Del^\sigma$ to $\tau$ is equal to $\Del^\tau$.
\item[2$^\circ$] All rays in $\operatorname{Vert}(\Del^\sigma)\setminus \operatorname{Vert}(\sigma)$ are contained in $\underset{\tau\leq\sigma}{\bigcup}{\operatorname{int}}(\tau)$.
\item[3$^\circ$] $\Del^\sigma$ is $\operatorname{Aut}(\sigma)$-invariant.
\end{enumerate}
\end{definition}
\begin{remark} Condition $3^\circ$ is replaced with a stronger one in the following proposition.
\end{remark}
\begin{lemma} \label{sub} If $\tau'\in\overline{\tau}$, $\tau'\prec \sigma\in\Sigma$ then $\rm{Vert} (\Del^\sigma_{|\tau'})\setminus\rm{Vert}(\tau')\subset\tau$ and thus
\[\Del^\sigma_{|\tau'}=\Del^\sigma_{|\tau}\oplus\langle e_1,\ldots,e_k\rangle=\Del^\tau\times \langle e_1,\ldots,e_k\rangle.\]
\end{lemma}
\begin{lemma} \label{maximal2}
For every point $x\in B\setminus ({B_+\cap B_-})$ with $x\in s'$ there exists a toric chart $U_\sigma\to X_\sigma$ with $x\in U_\sigma$ and $\Gamma_\sigma=K^*$, corresponding to a stratum $s\subset \overline{s'}$. In particular the maximal cones of $\Sigma$ are circuits.
\end{lemma}
\begin{proof}
Let $\tau$ correspond to the stratum $s'\ni x$. By the definition of a cobordism the limit $x_0=\displaystyle\lim_{t\to 0}tx$ or $x_0=\displaystyle\lim_{t\to \infty}tx$ exists. The point $x_0$ is $K^*$-fixed and belongs to a stratum $s$ with $\Gamma_s=\Gamma_\sigma=K^*$. Since $U_\sigma$ is a $K^*$-invariant neighborhood of $x_0$, it contains the orbit $K^*\cdot x$ and hence the point $x$. Moreover $\overline{s'}\supset s $ and $\tau\leq \sigma$.
\end{proof}
\begin{lemma} Let $\sigma$ be the cone corresponding to a stratum $s$ on $B$ and let $x\in s$. Then $\widehat X_x=\operatorname{Spec} \widehat{{\mathcal O}}_{x,B}\simeq (X_\sigma\times {\mathbb{A}}^{\dim(s)})^\wedge\cong\operatorname{Spec} K [[x_1,\ldots,x_k,\ldots, x_n]].$
\end{lemma}
Set $\widetilde X_\sigma:=(X_\sigma\times {\mathbb{A}}^{\dim(s)})^\wedge$ and let $G_\sigma$ denote the group of all $\Gamma_\sigma$-equivariant automorphisms of $\widetilde X_\sigma$.
The subdivision $\Del^\sigma$ of $\sigma$ defines a toric morphism $X_{\Del^\sigma}\to X_\sigma$ and induces a proper birational $\Gamma_\sigma$-equivariant morphism
\[\widetilde X_{\Del^\sigma}:=X_{\Del^\sigma} \times_{X_\sigma}\widetilde X_\sigma\to\widetilde X_\sigma.\]
\noindent
\begin{proposition} \label{correspondence} Let $\Del=\{\Del^\sigma\mid\sigma\in\Sigma\}$ be a subdivision of $\Sigma$ such that:
\begin{enumerate}
\item {For every $\sigma\in\Sigma$ the morphism $\widetilde X_{\Del^\sigma}\to\widetilde X_\sigma$ is $G_\sigma$-equivariant}.
\end{enumerate}
Then $\Del$ defines a $K^*$-equivariant birational modification $f:B'\to B$ such that for every toric chart $\varphi_\sigma:U\to X_\sigma$ there exists a $\Gamma_\sigma$-equivariant fiber square
\[\begin{array}{rcccccccc}
U_\sigma \times_{X_{\sigma}}
X_{\Delta^\sigma} &&\simeq& f^{-1}(U_\sigma)
& \rightarrow &
X_{\Delta^\sigma} &&& \\
&&&\downarrow {\scriptstyle f} & & \downarrow &&&\\
&& & U_\sigma & \rightarrow & X_{\sigma}&&&(2)\\
\end{array}\]
\end{proposition}
\begin{definition} A subdivision $\Del$ of $\Sigma$ is {\it canonical} if it satisfies condition (1) of Proposition \ref{correspondence}.
\end{definition}
\begin{proof} The above diagrams define open subsets $f^{-1}_\sigma(U_\sigma)$ together with proper birational $\Gamma_\sigma$-equivariant morphisms $f^{-1}_\sigma(U_\sigma)\to U_\sigma$. Let $ s'\leq s$ be a stratum corresponding to the cone $\tau\leq\sigma$. By Lemma \ref{sub}, the restriction of the diagram (2) defined by $U_\sigma\to X_\sigma$ to a neighborhood $U_\tau$ of $y\in s'$ determines a diagram defined by the induced toric chart $U_\tau\to X_\tau$ and the subdivision $\Del^\tau$ of $\tau$.
In order to show that the $f^{-1}_\sigma(U)$ glue together we need to prove that for $x\in s$ and two different charts of the form $\varphi_{1,\sigma}:U_{1,\sigma}\to X_\sigma$ and $\varphi_{2,\sigma}:U_{2,\sigma}\to X_\sigma$ where $x\in U_{1,\sigma}, U_{2,\sigma}$ the induced varieties $V_1:=f^{-1}_{1,\sigma}(U_{1,\sigma})$ and $V_2:=f^{-1}_{2,\sigma}(U_{2,\sigma})$ are isomorphic over $U_{1,\sigma}\cap U_{2,\sigma}$.
\noindent
For simplicity assume that $U_{1,\sigma}=U_{2,\sigma}=U$ by shrinking $U_{1,\sigma}$ and $U_{2,\sigma}$ if necessary. The charts $\varphi_{1,\sigma}, \varphi_{2,\sigma}:U\to X_\sigma$ are defined by the two sets of semiinvariant parameters, $u^1_1,\ldots,u^1_k$ and $u^2_1,\ldots,u^2_k$
with a nontrivial action of $\Gamma_\sigma$. These sets can be extended to full sets of parameters $u^1_1,\ldots,u^1_k,u_{k+1},\ldots,u_n$ and $u^2_1,\ldots,u^2_k, u_{k+1},\ldots,u_n$ where $\Gamma_\sigma$ acts trivially on $u_{k+1},\ldots,u_n$, and $u_{k+1},\ldots,u_n$ define parameters on the stratum $s$ at $x$.
These two sets of parameters define \'etale morphisms $\varphi_{1,\sigma},\varphi_{2,\sigma}:U\to X_\sigma\times {\mathbb{A}}^{n-k}$ and fiber squares
\[\begin{array}{rcccccc}
\overline{\varphi_{i,\sigma}}:& V_i
& \rightarrow &X_{\Del^\sigma}\times {\mathbb{A}}^{n-k}
&&& \\
&\downarrow & & \downarrow &&&\\
\varphi_{i,\sigma}:& U & \rightarrow & X_\sigma \times {\mathbb{A}}^{n-k}&&&\\
\end{array}\]
\noindent
Suppose the induced $\Gamma_\sigma$-equivariant birational map $f:V_1\dashrightarrow V_2$ is not an isomorphism over $U$.
Let $V$ be the graph of $f$ which is a dominating component of the fiber product $V_1\times_U V_2$. Then either $V\to V_1$ or $V\to V_2$ is not an isomorphism (i.e.~collapses a curve to a point)
over some $x\in s\cap U$. Consider an \'etale $\Gamma_\sigma$-equivariant morphism $e:\whx_x\to U$. Pull-backs of the morphisms $V_i\to U$ via $e$ define
two nonisomorphic $\Gamma_\sigma$-equivariant
liftings $Y_i\to {\whx_x}$, since the graph $Y$ of $Y_1 \dashrightarrow Y_2$ (which is a pull-back of $V$) is not isomorphic to at least one $Y_i$. But these two liftings are defined by two isomorphisms $\widehat\varphi_1,\widehat\varphi_2: {\whx_x}\simeq \widetilde X_{\sigma}$.
These isomorphisms differ by some automorphism $g\in G_\sigma$, so we have
$\widehat\varphi_1=g\circ \widehat\varphi_2$. Since $g$ lifts to the automorphism of $ \widetilde X_{\Del^\sigma} $ we get $Y_1\simeq Y_2\simeq \widetilde X_{\Del^\sigma}$, which contradicts the choice of $Y_i$.
Thus $V_1$ and $V_2$ are isomorphic over any $x\in s$ and $B'$ is well defined by glueing pieces $f^{-1}_\sigma(U)$ together. We need to show that the action of $K^*$ on $B$ lifts to the action of $K^*$ on $B'$.
Note that $B'$ is isomorphic to $B$ over the open generic stratum $U\supset B_+\cup B_-$ of points $x$ with $\Gamma_x=\{e\}$. By Lemma \ref{maximal2} every point $x\in B\setminus ({B_+\cap B_-})$ is in $U_\sigma$, with $\Gamma_\sigma=K^*$.
Then the diagram (2) defines the action of $K^*$ on $f^{-1}(U_\sigma)$.
\end{proof}
\subsection{Simple properties of $\widetilde X_{\Del^\sigma}$}
Recall that $\widetilde X_\sigma=\operatorname{Spec}(K[[x_1,\ldots,x_k,x_{k+1},\ldots,x_n]])$,
where \\$X_\sigma=\operatorname{Spec}(K[x_1,\ldots,x_k])$ and $\Gamma_\sigma$ acts trivially on $x_{k+1},\ldots,x_n$. This gives us $$X_{\widetilde{\sigma}}:=\operatorname{Spec}(K[x_1,\ldots,x_k,x_{k+1},\ldots,x_n])=\operatorname{Spec}(K[x_1,\ldots,x_k])\times \operatorname{Spec}(K[x_{k+1},\ldots,x_n])=X_\sigma\times X_{\operatorname{reg}}(\sigma)$$
where ${\widetilde{\sigma}}$ and ${\operatorname{reg}}(\sigma)$ correspond to $\operatorname{Spec}(K[x_1,\ldots,x_k,x_{k+1},\ldots,x_n])$ and $\operatorname{Spec}(K[x_{k+1},\ldots,x_n])$
respectively.
We can write $\widetilde{\sigma}=\sigma\times{\operatorname{reg}}(\sigma)$,
$\widetilde{\sigma}^\vee=\sigma^\vee\times{\operatorname{reg}}(\sigma)^\vee$,
$N_{\widetilde{\sigma}}=N_\sigma\times N_{{\operatorname{reg}}(\sigma)}$, and $M_{\widetilde{\sigma}}=M_\sigma\times M_{{\operatorname{reg}}(\sigma)}$.
Let $\Del^\sigma$ be a subdivision of $\sigma$. There is a natural morphism
$$j_{\Del^\sigma}:\widetilde X_{\Del^\sigma}\to X_{\Del^\sigma}.$$
\begin{lemma}
\begin{enumerate}
\item The open cover $\{X_\tau\mid \tau \in \Del^\sigma\}$ of $X_{\Del^\sigma}$ defines the open cover $\{\widetilde X_\tau\mid \tau \in \Del^\sigma\}$ of $\widetilde X_{\Del^\sigma}$, where $\widetilde X_\tau:=X_\tau\times_{X_\sigma}\widetilde X_\sigma=j_{\Del^\sigma}^{-1}(X_\tau)$ and
$K[\widetilde X_\tau]=K[\tau^\vee]\otimes_{K[\sigma^\vee]}K[[\widetilde{\sigma}^\vee]]$
\item The closed orbits $O_\tau\subset X_\tau$ define closed subschemes $\widetilde{O_\tau}:=j_{\Del^\sigma}^{-1}(O_\tau)$ of $\widetilde X_\tau\subset \widetilde X_{\Del^\sigma},$
\noindent where
$K[\widetilde{O_\tau}]={K[\tau^\perp]}\otimes_{K[\sigma^\vee\cap\tau^\perp]}K[[({\sigma}^\vee\cap\tau^\perp)\times {\operatorname{reg}}(\sigma)^\vee]].$
\item The local ring ${\mathcal O}_{\widetilde X_{\Del^\sigma}, \widetilde{O_\tau}}$ at the generic point of $\widetilde{O_\tau}$ contains the residue field $ K(\widetilde{O_\tau})$ (which is a quotient of $K[\widetilde{O_\tau}]$).
The completion of ${\mathcal O}_{\widetilde X_{\Del^\sigma}, \widetilde{O_\tau}}$ is of the form $$\widehat {{\mathcal O}_{\widetilde X_{\Del^\sigma}, \widetilde{O_\tau}}} \buildrel{\varphi}\over{\simeq} K(\widetilde{O_\tau})
[[{\underline\tau}^\vee]]\supset {\mathcal O}_{\widetilde X_{\Del^\sigma}, \widetilde{O_\tau}}\supset K(\widetilde{O_\tau})[{\underline\tau}^\vee].$$
\item The group $\Gamma_\tau\subset \Gamma_\sigma$ acts trivially on $K(\widetilde{O_\tau})$. The action of $\Gamma_\tau$ on the characters $\tau^\vee$ descends to $\underline{\tau}^\vee=\tau^\vee/\tau^\perp$.
In particular if $\tau$ is dependent, the action of $K^*$ on $K(\widetilde{O_\tau})$ is trivial. \end{enumerate}
\end{lemma}
\begin{proof} (1) follows from the definition. The elements of $K[\tau^\vee]\otimes_{K[\sigma^\vee]}K[[\widetilde{\sigma}^\vee]]$ are the finite sums of the form $\sum x_if_i$, where $x_i\in\tau^\vee$ and $f_i\in K[[\widetilde{\sigma}^\vee]]=K[[{\sigma}^\vee\times{\operatorname{reg}}(\sigma)^\vee]]$ is an infinite power series. Note also that $\sigma^\vee\subset \tau^\vee$.
(2) The ideal $I_{O_\tau}\subset K[X_\tau]$ is generated by all characters $x^F$, where $F\in \tau^\vee\setminus\tau^\perp$.
These characters generate the ideal $I_{\widetilde{O_\tau}}\subset K[\widetilde X_\tau]$.
Then the elements of
$K[\widetilde{O_\tau}]=K[\widetilde X_\tau]/{I_{\widetilde{O_\tau}}}$ are the finite sums of the form $\sum x_if_i$, where $x_i\in\tau^\perp$ and $f_i\in K[[(\sigma^\vee\cap\tau^\perp)\times{\operatorname{reg}}(\sigma)^\vee]]$ is an infinite power series. We get $K[\widetilde X_\tau]/{I_{\widetilde{O_\tau}}}=
K[\tau^\perp]\otimes_{K[\sigma^\vee\cap\tau^\perp]}K[[(\sigma^\vee\cap\tau^\perp)\times{\operatorname{reg}}(\sigma)^\vee]]$.
(3) Note that $K[\widetilde{O_\tau}]$ is a subring of $K[\widetilde X_\tau]$. The subalgebra generated by $\tau^\vee$ over $K[\widetilde{O_\tau}]$ is equal to $K[\widetilde{O_\tau}][\tau^\vee]=K[\widetilde{O_\tau}][\underline{\tau}^\vee]\subset K[\widetilde X_\tau]\subset K[\widetilde{O_\tau}][[\underline{\tau}^\vee]]$. Passing to the localizations at $I_{\widetilde{O_\tau}}$ we get inclusions $K(\widetilde{O_\tau})[\underline\tau^\vee]\subset(K[\widetilde X_\tau])_{\widetilde{O_\tau}}={{\mathcal O}_{\widetilde X_{\Del^\sigma},\widetilde{O_\tau}}} \subset K(\widetilde{O_\tau})[[\underline\tau^\vee]]=\widehat {{\mathcal O}_{\widetilde X_{\Del^\sigma},\widetilde{O_\tau}}} $.
(4) The action of $\Gamma_\tau$ on $K(O_\tau)=K[\tau^\perp]$ is trivial. Then
$\Gamma_\tau$ acts trivially on all characters in $\tau^\perp$ and on $K[\widetilde{O_\tau}]=
K[\tau^\perp]\otimes_{K[\sigma^\vee\cap\tau^\perp]}K[[(\sigma^\vee\cap\tau^\perp)\times{\operatorname{reg}}(\sigma)^\vee]]$.
\end{proof}
\bigskip
\subsection{Basic properties of valuations}
Let $K(X)$ be the field of rational functions on an algebraic variety or an integral scheme $X$.
A {\it valuation} on $K(X)$ is a group homomorphism $\mu:K(X)^*\to G$ from the multiplicative group $K(X)^*$ to a totally ordered group $G$ such that $\mu(a+b)\geq\min(\mu(a),\mu(b))$.
By the {\it center} of a valuation $\mu$ on $X$ we mean an irreducible closed subvariety $Z(\mu)\subset X$ such that for any open affine $V\subset X$ intersecting $Z(\mu)$ we have $\mu(f)\geq 0$ for all $f\in K[V]$, and the ideal $I_{Z(\mu)\cap V}\subset K[V]$ is generated by all $f\in K[V]$ with $\mu(f)>0$.
Each vector $v\in N^{{\mathbb{Q}}}$ defines a linear function on $M$ which
determines a
valuation ${\operatorname{val}}(v)$ on a toric variety
$X_{\Del}\supset T$.
For any regular function $f=\sum_{w\in M} a_wx^w\in K[T]$ set
$${\operatorname{val}}(v)(f):=\min\{(v,w)\mid a_w\neq 0\}.$$
If $v\in{\operatorname{int}}(\sigma)$, where $\sigma\in\Del$, then ${\operatorname{val}}(v)$ is positive for all $x^F$, where $F\in\sigma^\vee\setminus\sigma^\perp$. In particular we get
$$Z({\operatorname{val}}(v))=\overline{O_\sigma}\quad\quad\mbox{iff}\quad\quad v\in{\operatorname{int}}\sigma.$$
\noindent
If $v\in\sigma$ then ${\operatorname{val}}(v)$ is a valuation on $R=K[X_\sigma]=K[\sigma^\vee]$, that is, ${\operatorname{val}}(v)(f)\geq 0$ for all $f\in K[\sigma^\vee]\setminus\{0\}$.
We construct ideals for all $a\in \mathbb N$ which uniquely determine ${\operatorname{val}}(v)$:
\[I_{{\operatorname{val}}(v),a}=\{f\in R\mid {\operatorname{val}}(v)(f) \geq a\}=(x^F\mid F\in\sigma^\vee, F(v)\geq a)\subset R.\]
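\noindent For instance, let $\sigma=\langle e_1,e_2\rangle\subset{\mathbb{Z}}^2$, so $X_\sigma={\mathbb{A}}^2=\operatorname{Spec} K[x,y]$, and let $v=(1,2)\in{\operatorname{int}}(\sigma)$. For $f=x+xy+y^3$ we get ${\operatorname{val}}(v)(f)=\min(1,3,6)=1$ and $Z({\operatorname{val}}(v))=\overline{O_\sigma}=\{0\}$. A monomial $x^ay^b$ lies in $I_{{\operatorname{val}}(v),2}$ iff $a+2b\geq 2$, so
$$I_{{\operatorname{val}}(v),2}=(x^2,y).$$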
By glueing the ideals $I_{{\operatorname{val}}(v),a}$
over all $\sigma\in \Del$ containing $v$, and putting
${\mathcal I}_{{\operatorname{val}}(v),a|X_\sigma}={\mathcal O}_{X_\sigma}$ if $v\notin\sigma$, we construct a coherent sheaf of ideals ${\mathcal I}_{{\operatorname{val}}(v),a}$
on $X_\Del$.
Let $\sigma\in\Sigma$ be a cone of the semicomplex $\Sigma$ and $v\in\sigma\subset\widetilde{\sigma}$. The valuation ${\operatorname{val}}(v)$ on $K[\sigma^\vee]$ extends to a valuation on $K[[\widetilde{\sigma}^\vee]]$. Thus it determines a valuation on $\widetilde X_{\Del^\sigma}$, where $\Del^\sigma $ is a subdivision of $\sigma$. As before we have
\begin{lemma}
\begin{enumerate}
\item
$Z({\operatorname{val}}(v),\widetilde X_{\Del^\sigma})={\operatorname{cl}}{(\widetilde{O_\tau})}\subset \widetilde X_{\Del^\sigma}$, where $\tau\in\Del^\sigma$ and $v\in{\operatorname{int}}(\tau)$.
\item There exists a coherent sheaf of ideals ${\mathcal I}_{{\operatorname{val}}(v),a,\widetilde X_{\Del^\sigma}}=j_{\Del^\sigma}^*({\mathcal I}_{{\operatorname{val}}(v),a,X_{\Del^\sigma}})$ on $\widetilde X_{\Del^\sigma}$ such that for every $\del\in\Del^\sigma$ containing $v$ and $R=K[\widetilde X_\del]=K[\del^\vee]\otimes_{K[\sigma^\vee]}K[[\widetilde{\sigma}^\vee]]$ we have \[I_{{\operatorname{val}}(v),a}=\{f\in R\mid {\operatorname{val}}(v)(f) \geq a\}=(x^F\mid F\in\del^\vee, F(v)\geq a)\subset R.\]
\item The valuation ${\operatorname{val}}(v)$ on the local ring ${\mathcal O}_{\whx_{\Del^\sigma},\widetilde{O_\tau}}$, where $v\in\tau$, extends to its completion $\widehat{{\mathcal O}}_{\whx_{\Del^\sigma},\widetilde{O_\tau}}=K(\widetilde{O_\tau})[[\underline{\tau}^\vee]]$. Moreover ${\operatorname{val}}(v)_{| K(\widetilde{O_\tau})^*}=0$. \end{enumerate}
\end{lemma}
\begin{lemma} \label{closed}
If ${\operatorname{cl}}{(\widetilde{O_\tau})}\subset \widetilde X_{\Del^\sigma}$ is $G_\sigma$-invariant
then ${\operatorname{val}}(v)$ is $G_\sigma$-invariant on $\widetilde X_{\Del^\sigma}$ iff it is $G_\sigma$-invariant on $\whx_\tau:=\operatorname{Spec}(K(\widetilde{O_\tau})[[\underline{\tau}^\vee]])$.
\end{lemma}
\begin{proof}
$K(\widetilde{O_\tau})[[\underline{\tau}^\vee]]$ is faithfully flat over ${\mathcal O}_{\whx_{\Del^\sigma},\widetilde{O_\tau}}$ and we have a one-to-one correspondence between the
ideals $g^*(I_{{\operatorname{val}}(v),a})$, $g\in G_\sigma$, in ${\mathcal O}_{\widetilde X_{\Del^\sigma},\widetilde{O_\tau}}$ and in $K(\widetilde{O_\tau})[[\underline{\tau}^\vee]]$.
\end{proof}
\subsection{Blow-ups of toric ideal sheaves}
The sheaf ${\mathcal I}_{{\operatorname{val}}(v),a}$ is an example of a $T$-invariant sheaf of ideals on a toric variety $X_\Del$. It is locally defined by monomial ideals $I_\sigma\subset K[X_\sigma]$. Any $T$-invariant sheaf of ideals ${\mathcal I}$ on $X_\Del$ defines a function ${\operatorname{ord}}_{{\mathcal I}}:|\Del|\to {\mathbb{Q}}$ (see \cite{KKMS}) such that for any $p\in\sigma$
\[{\operatorname{ord}}_{{\mathcal I}}(p)=\min\{F(p)\mid x^F\in I_\sigma\}.\]
\noindent
The function ${\operatorname{ord}}_{{\mathcal I}}$ is concave and piecewise linear on every cone $\sigma\in\Del$. If $I_\sigma=(x^{F_1},\dots,x^{F_k})$ for $F_1,\ldots,F_k\in\sigma^\vee$ then ${\operatorname{ord}}_{{\mathcal I}}(p)=\min(F_1(p),\dots, F_k(p))$. The cones $\sigma_{F_i}:=\{p\in\sigma\mid{\operatorname{ord}}_{{\mathcal I}}(p)=F_i(p)\}$ define a subdivision of $\sigma$, and by combining these subdivisions for all $\sigma\in \Del$ we get a subdivision $\Del_{{\operatorname{ord}}_{{\mathcal I}}}$ of $\Del$. This is the coarsest subdivision of $\Del$ for which ${\operatorname{ord}}_{{\mathcal I}}$ is linear on every cone.
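\noindent For instance, for ${\mathcal I}=(x,y^2)$ on $X_\sigma={\mathbb{A}}^2$ with $\sigma=\langle e_1,e_2\rangle$ we have $F_1=e_1^*$, $F_2=2e_2^*$ and ${\operatorname{ord}}_{{\mathcal I}}(p)=\min(p_1,2p_2)$. The cones
$$\sigma_{F_1}=\langle e_2,(2,1)\rangle,\quad\quad \sigma_{F_2}=\langle e_1,(2,1)\rangle$$
form the subdivision $\Del_{{\operatorname{ord}}_{{\mathcal I}}}$ of $\sigma$ obtained by inserting the ray $\langle (2,1)\rangle$.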
\begin{lemma}\label{le:kk}
\cite{KKMS} If ${\mathcal I}$ is an invariant sheaf of ideals on $X_\Del$ then the normalization of the blow--up of ${\mathcal I}$ on $X_\Del$ is the toric variety $X_{\Del_{{\operatorname{ord}}_{{\mathcal I}}}}$ corresponding to the subdivision $\Del_{{\operatorname{ord}}_{{\mathcal I}}}$ of $\Del$.
\end{lemma}
\begin{proof} Let $f:X'\to X_\Del$ be the normalized blow--up of ${\mathcal I}$. Then $X'$ is a toric variety on which $f^*({\mathcal I})$ is locally invertible. Thus $X'$ corresponds to a subdivision $\Del'$ of $\Del$ such that ${\operatorname{ord}}_{f^*({\mathcal I})}={\operatorname{ord}}_{{\mathcal I}}$ is linear on every cone of $\Del'$. From the universal property of the blow--up we conclude that $\Del'$ is the coarsest subdivision with this property. Thus $\Del'=\Del_{{\operatorname{ord}}_{{\mathcal I}}}$.
\end{proof}
\noindent
\begin{lemma}\label{blow-up}\cite{KKMS} Given a simplicial fan $\Del$ and an integral vector $v\in|\Del|$, there exists a sufficiently divisible natural number $a$ such that
$\Del_{{\operatorname{ord}}_{{\mathcal I}_{{\operatorname{val}}(v),a}}}=\langle v\rangle\cdot\Del. $
\end{lemma}
\begin{proof} Let $\sigma=\langle e_1,\dots,e_k\rangle$ be a cone containing $v$ and assume that $v\in {\operatorname{int}} \langle e_1,\dots,e_\ell\rangle\leq \sigma$, for some $\ell\leq k$. Let $F_j\in\sigma^\vee$, for $1\leq j\leq\ell$, be the functional such that $F_j(e_i)=0$ for $i\neq j$ and $F_j(v)=a$. If $a$ is sufficiently divisible then $F_j$ is integral and $x^{F_j}\in I_{{\operatorname{val}}(v),a}$ for all $1\leq j\leq \ell$. Note that for any $x^F\in I_{{\operatorname{val}}(v),a}$ we have $F(v)\geq a$ and $F(e_i)\geq 0$. This gives $F\geq F_j$ on $\langle e_1,\dots,\check e_j,\dots,e_k,v\rangle$ and finally ${\operatorname{ord}}_{{\mathcal I}_{{\operatorname{val}}(v),a}}=F_j$ on $\langle e_1,\dots,\check e_j,\dots,e_{k},v\rangle$. Since $F_j(e_j)>0$, we have $F_j(p)>F_i(p)$ for $p\in \langle e_1,\dots,\check e_i,\dots,e_k,v\rangle\setminus\langle e_1,\dots,\check e_j,\dots,e_k,v\rangle$, so ${\operatorname{ord}}_{{\mathcal I}_{{\operatorname{val}}(v),a}}=F_j$ exactly on $\langle e_1,\dots,\check e_j,\dots,e_k,v\rangle$, and these cones form the star subdivision $\langle v\rangle\cdot \sigma=\sigma_{{\operatorname{ord}}_{{\mathcal I}_{{\operatorname{val}}(v),a}}}$.
\end{proof}
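\noindent For instance, let $\sigma=\langle e_1,e_2\rangle$, $v=e_1+e_2$ and $a=2$. Then $F_1=2e_1^*$, $F_2=2e_2^*$, $I_{{\operatorname{val}}(v),2}=(x^2,xy,y^2)$ and
$${\operatorname{ord}}_{{\mathcal I}_{{\operatorname{val}}(v),2}}(p)=\min(2p_1,p_1+p_2,2p_2)=\min(2p_1,2p_2),$$
which is linear exactly on the cones $\langle e_1,v\rangle$ and $\langle e_2,v\rangle$ of the star subdivision $\langle v\rangle\cdot\sigma$.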
\subsection{Stable vectors}
Let $g:X\to Y$ be any dominant morphism of integral schemes (that is,
$\overline{g(X)}=Y$) and let $\mu$ be a valuation of $K(X)$. Then $g$ induces a valuation $g_*(\mu)$ on $K(Y)\simeq g^*(K(Y))\subset K(X)$ by
$g_*\mu(f)=\mu(f\circ g)$.
\begin{definition} Let $\Sigma$ be the semicomplex defined for the cobordism $B$. A vector $v\in {\operatorname{int}}(\sigma)$, where $\sigma\in\Sigma$, is called \textit{stable} if for every $\sigma\leq\sigma'$ the valuation ${\operatorname{val}}(v)$ is $G_{\sigma'}$-invariant on $\widetilde X_{\sigma'}$.
\end{definition}
\begin{lemma} \label{G} If $\widetilde X_{\Del^\sigma}\to\widetilde X_{\sigma}$ is $G_\sigma$-equivariant and ${\operatorname{val}}(v)$ is $G_\sigma$-invariant then $\widetilde X_{\langle v\rangle\cdot \Del^\sigma}\to\widetilde X_{\sigma}$
is $G_\sigma$-equivariant.
\end{lemma}
\begin{proof} The morphism $\widetilde X_{\langle v\rangle\cdot\Del^\sigma}\to\widetilde X_{\Del^\sigma}$ is a pull-back of the morphism $X_{\langle v\rangle\cdot\Del^\sigma}\to X_{\Del^\sigma}$. Thus, by Lemma \ref{blow-up}, $\widetilde X_{\langle v\rangle\cdot\Del^\sigma}\to\widetilde X_{\Del^\sigma}$ is a normalized blow-up of ${\mathcal I}_{{\operatorname{val}}(v),a}$ on $\widetilde X_{\Del^\sigma}$. But the latter sheaf is $G_\sigma$-invariant.
\end{proof}
\begin{proposition}\label{blowup} Let $\Del=\{\Del^\sigma\mid \sigma\in\Sigma\}$ be a canonical subdivision of $\Sigma$ and let $v$ be a stable vector on $\Sigma$. Then $\langle v\rangle\cdot\Del:=\{\langle v\rangle\cdot\Del^\sigma\mid \sigma\in\Sigma\}$ is a canonical subdivision of $\Sigma$.
\end{proposition}
\subsection{Convexity}
\begin{lemma}\label{convex} Let ${\operatorname{val}}(v_1)$ and ${\operatorname{val}}(v_2)$ be $G_\sigma$-invariant valuations on $\widetilde X_\sigma$. Then all valuations ${\operatorname{val}}(v)$, where $v=av_1+bv_2$, $a,b\geq 0$, $a,b\in {\mathbb{Q}}$, are $G_\sigma$-invariant.
\end{lemma}
\begin{proof} Let $\Del=\langle v_1\rangle\cdot\langle v_2\rangle\cdot\sigma$ be a subdivision of $\sigma$. Then by Lemma \ref{G} the morphism $\widetilde X_{\Del}\to\widetilde X_\sigma$ is $G_\sigma$-equivariant. The exceptional divisors $D_1$ and $D_2$ are $G_\sigma$-invariant and correspond to the one-dimensional cones (rays) $\langle v_1\rangle,\langle v_2\rangle\in\Del$. The cone $\tau=\langle v_1,v_2\rangle\in \Del$ corresponds to the orbit $\widetilde{O_\tau}$ whose closure is $D_1\cap D_2$, and thus its generic point is $G_\sigma$-invariant. The action of $G_\sigma$ on $\widetilde X_\sigma$ induces an action on the local ring ${\mathcal O}_{\widetilde X_{\Del},\widetilde{O_\tau}}$ at the generic point of $\widetilde{O_\tau}$ and on its completion $K(\widetilde{O_\tau})[[\underline\tau^\vee]]$. Note that for any $v\in\tau$, ${\operatorname{val}}(v)_{|K(\widetilde{O_\tau})}=0$. For any $F\in\underline\tau^\vee=\displaystyle{\tau^\vee\over\tau^\perp}$ the divisor $(x^F)$ of the character $x^F$ on $\whx_\tau:=\operatorname{Spec} K(\widetilde{O_\tau})[[\underline\tau^\vee]]$ is a combination $n_1D_1+n_2D_2$ with $n_1,n_2\in\mathbb Z$. Since $D_1$ and $D_2$ are $G_\sigma$-invariant, the divisor $(x^F)=n_1D_1+n_2D_2$ is $G_\sigma$-invariant, that is, for any $g\in G_\sigma$ we have $g x^F=u_{g,F}\cdot x^F$, where $u_{g,F}$ is invertible in $K(\widetilde{O_\tau})[[\underline\tau^\vee]]$.
Thus for every $v\in\tau$ and $g\in G_\sigma$ we have
\begin{equation*}
\begin{aligned}
g^*(I_{{\operatorname{val}}(v),a})&=g^*(x^F \mid F\in\underline\tau^\vee, F(v)\geq a)
=(u_{g,F}x^F \mid F\in\underline\tau^\vee, F(v)\geq a)=I_{{\operatorname{val}}(v),a}.
\end{aligned}
\end{equation*}
Thus ${\operatorname{val}}(v)$ is $G_\sigma$-invariant on $K(\widetilde{O_\tau})[[\underline\tau^\vee]]$ and on its subring ${\mathcal O}_{\widetilde X_{\Del},\widetilde{O_\tau}}$. The latter ring has the same field of fractions as $\widetilde X_\sigma$, so ${\operatorname{val}}(v)$ is $G_\sigma$-invariant on $\widetilde X_\sigma$.
\end{proof}
\begin{lemma}\label{convex2}
Let $\sigma\in\Sigma$ and $v_1,v_2\in\sigma$ be stable vectors. Then all vectors $v=av_1+bv_2\in\sigma$, where $a,b\in {\mathbb{Q}}_{>0}$, are stable.
\end{lemma}
\subsection{Existence of quotients}
\begin{lemma}\label{le:q}
Let $\Gamma\subset \Gamma_\sigma$ be a finite subgroup and $\tau\in\Del^\sigma$.
Then $\widetilde X_{\tau}/\Gamma=X_{\tau}/{\Gamma}\times_{X_{\sigma}/\Gamma}\widetilde X_\sigma/\Gamma$.
\end{lemma}
\begin{proof} The group $\Gamma\simeq {\mathbb{Z}}_n$ acts on characters $x^F$, $F\in M_\sigma$, with weights $a_F$: $t(x^F)=t^{a_F}x^F$ where $t\in\Gamma$, and $a_F\in \mathbb Z_n$. The elements of the ring
\[K[\widetilde X_\tau]=K[X_\tau]\otimes_{K[X_\sigma]}K[\widetilde X_\sigma]=K[\tau^\vee]\otimes_{K[\sigma^\vee]}K[[\widetilde{\sigma}^\vee]]\]
are finite sums $\sum x_if_i$ where $f_i\in K[\widetilde X_\sigma]$ is a formal power series and $x_i\in\tau^\vee$ is a character. (Note that $\sigma^\vee\subseteq\tau^\vee$ since $\tau\subset \sigma$.)
The elements of the ring $K[\widetilde X_{\tau}/\Gamma]=K[\widetilde X_\tau]^\Gamma$ are finite sums $\sum x_if_i$ of weight zero, that is, every $f_i\in K[[\widetilde{\sigma}^\vee]]$ is a quasihomogeneous power series of weight $a_{f_i}=-a_{x_i}$. The elements of the ring $K[X_\tau]^\Gamma\otimes_{K[X_\sigma]^\Gamma}K[\widetilde X_\sigma]^\Gamma$ are of the form $\sum x_if_i$ where $x_i$ and $f_i$ each have weight zero. We have to prove
\begin{lemma} \label{le:10} Let $K[\widetilde{\sigma}^\vee]=\underset{a\in {\mathbb{Z}}_n}{\oplus}K[\widetilde{\sigma}^\vee]^a$ be
a decomposition according to weights. Then $K[\widetilde{\sigma}^\vee]^a$ is generated over $K[\widetilde{\sigma}^\vee]^0$ by
finitely many monomials.
\end{lemma}
\begin{proof} Note that for any $F\in \widetilde{\sigma}^\vee$, the element $nF\in (\widetilde{\sigma}^\vee)^0$.
Let $x^{F_1},\dots,x^{F_k}$
generate $K[\widetilde{\sigma}^\vee]$.
Then the elements
$x^{\alpha_1F_1+\cdots+\alpha_kF_k}$, where $$\alpha_1F_1(v_\sigma)+\cdots
+\alpha_kF_k(v_\sigma)\equiv a \pmod n\quad \mbox{and}\quad 0\leq\alpha_i\leq n,$$ generate
$K[\widetilde{\sigma}^\vee]^a$ over $K[\widetilde{\sigma}^\vee]^0$.
\end{proof}
It follows from Lemma \ref{le:10} that every $f_i\in K[[\widetilde{\sigma}^\vee]]^a$ decomposes as a finite sum $f_i=\sum x_{ij}f_{ij}$, where $f_{ij}\in K[[\widetilde{\sigma}^\vee]]^0$ and $x_{ij}\in(\sigma^\vee)^a$ and
$f=\underset{i}{\sum}x_if_i=\underset{ij}{\sum}x_ix_{ij}f_{ij}\in K[\tau^\vee]^\Gamma\otimes_{K[\sigma^\vee]^\Gamma}K[[\widetilde{\sigma}^\vee]]^\Gamma $.
\end{proof}
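\medskip\noindent
To illustrate Lemma \ref{le:10} in the simplest case, let $\Gamma={\mathbb{Z}}_2$ act on $K[x,y]$ with both weights equal to $1$. Then
\[K[x,y]^0=K[x^2,xy,y^2],\qquad K[x,y]^1=x\cdot K[x,y]^0+y\cdot K[x,y]^0,\]
since every monomial $x^iy^j$ of odd total degree is divisible by $x$ or by $y$ with a quotient of even degree. Thus the weight-one part is generated over the invariants by the two monomials $x$ and $y$, and the analogous decomposition applies to quasihomogeneous power series as in the proof above.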
\begin{corollary}\label{quo1}
Let a finite group $\Gamma\subset \Gamma_\sigma$ act on $\widetilde X_\sigma$. Then for a decomposition $\Del^\sigma$ of $\sigma$ the following quotient exists:
\[\widetilde X_{\Del^\sigma}/\Gamma=X_{\Del^\sigma}/\Gamma\times_{X_{\sigma}/\Gamma}\widetilde X_\sigma/\Gamma=X_{\pi({\Del^\sigma})}\times_{X_{\pi(\sigma)}}\widetilde X_{\pi(\sigma)}\]
where $\pi:\sigma\to \pi(\sigma)$ corresponds to the quotient $X_\sigma\to X_\sigma/\Gamma$ and $\widetilde X_{\pi(\sigma)}:=(X_{\pi(\sigma)}\times X_{{\operatorname{reg}}(\sigma)})^\wedge=\widetilde X_{\sigma}/\Gamma$.
\end{corollary}
As before
there is a natural morphism
$$j_{\pi(\Del^\sigma)}:\widetilde X_{\pi(\Del^\sigma)}\to X_{\pi(\Del^\sigma)}.$$
\begin{lemma}
\begin{enumerate}
\item The open cover $\{X_{\pi(\tau)}\mid \pi(\tau)\in\pi(\Del^\sigma)\}$ of $X_{\pi(\Del^\sigma)}$ defines the open cover \\ $\{\widetilde X_{\pi(\tau)}\mid \pi(\tau)\in\pi(\Del^\sigma)\}$ of $\widetilde X_{\pi(\Del^\sigma)}$, where $\widetilde X_{\pi(\tau)}:= \widetilde X_{\tau}/\Gamma=
X_{\pi(\tau)} \times_{X_{\pi(\sigma)}}\widetilde X_{\pi(\sigma)}=j_{\pi(\Del^\sigma)}^{-1}(X_{\pi(\tau)})$.
\item The closed orbits $O_{\pi(\tau)}\subset X_{\pi(\tau)}$ define closed subschemes $\widetilde{O}_{\pi(\tau)}:=j_{\pi(\Del^\sigma)}^{-1}(O_{\pi(\tau)})$ of $\widetilde X_{\pi(\tau)}\subset \widetilde X_{\pi(\Del^\sigma)},$
\noindent where
$K[\widetilde{O}_{\pi(\tau)}]=K[\widetilde{O}_{\tau}]/\Gamma.$
\item The completion of the local ring ${\mathcal O}_{\widetilde X_{\pi(\Del^\sigma)}, \widetilde{O}_{\pi(\tau)}}$ at the generic point of $\widetilde{O}_{\pi(\tau)}$ is of the form $$\widehat {{\mathcal O}}_{\widetilde X_{\pi(\Del^\sigma)}, \widetilde{O}_{\pi(\tau)}}{\simeq} K(\widetilde{O}_{\pi(\tau)})
[[{{\pi(\underline\tau)}}^\vee]].$$
\end{enumerate}
\end{lemma}
\subsection{Descent of the group action of $G_\sigma$}
\begin{lemma}\label{21}
If $V\subset \widetilde X_{\Del^\sigma}$ is an open affine $\Gamma$--invariant subscheme then for any open affine $\Gamma$--invariant subscheme $U\subset V$, we have an open inclusion of schemes $
U/\Gamma\subset V/\Gamma$.
\end{lemma}
\begin{proof} Let $Z=V\setminus U$ be a closed subscheme. Then $I_Z\subset K[V]$ is $\Gamma\simeq{\mathbb{Z}}_n$-invariant and generated by a finite number of semiinvariant functions $f_1,\dots, f_k\in I_Z$. Then the functions $g_1=f^n_1,\dots,g_k=f^n_k$ are invariant. Write $U=\underset{i}{\bigcup} V_{g_i}$ as the union of open subschemes $V_{g_i}=U_{g_i}$. The algebra $K[V_{g_i}]^\Gamma=(K[V]^\Gamma)_{g_i}$ is a localization of $K[V]^\Gamma$, so there are open inclusions $V_{g_i}/\Gamma\subset U/\Gamma$ and $V_{g_i}/\Gamma\subset V/\Gamma$. It follows that $U/\Gamma=\underset{i}{\bigcup}V_{g_i}/\Gamma\subset V/\Gamma$.
\end{proof}
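\medskip\noindent
For instance, for $\Gamma={\mathbb{Z}}_2$ acting by $x\mapsto -x$, the semiinvariant $f=x$ yields the invariant $g=f^2=x^2$, and $V_f=V_g$ since $K[V]_f=K[V]_{f^2}$; this is why replacing the semiinvariant generators $f_i$ by the invariants $g_i=f_i^n$ does not change the open sets covering $U$.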
\begin{lemma}\label{22}
Any open $\Gamma$--equivariant embedding of an open affine $\Gamma$--invariant subscheme $V$ into $\widetilde X_{\Del^\sigma}$ determines an open
embedding $V/\Gamma\subset \widetilde X_{\Del^\sigma}/\Gamma$.
\end{lemma}
\begin{proof} Let $V_\tau=V\cap\widetilde X_\tau$, where $\tau\in{\Del^\sigma}$. Then $V=\underset{\tau\in{\Del^\sigma}}{\bigcup}V_\tau$. By the previous lemma $V_\tau/\Gamma\subset \widetilde X_\tau/\Gamma$ and $V_\tau/\Gamma\subset V/\Gamma$ are open embeddings defining an open inclusion $V/\Gamma=\underset{\tau\in{\Del^\sigma}}{\bigcup}V_{\tau}/\Gamma\subset \widetilde X_{{\Del^\sigma}}/\Gamma$.
\end{proof}
\begin{lemma}\label{2}
The action of $G_\sigma$ descends to $\widetilde X_{\Del^\sigma}/\Gamma$ and we have a $G_\sigma$--equivariant morphism $\widetilde X_{\Del^\sigma} \to \widetilde X_{\Del^\sigma}/\Gamma$.
\end{lemma}
\begin{proof} The lemma is an immediate consequence of Lemma \ref{22}. For any $g\in G_\sigma$, the morphism $g: \widetilde X_{{\Del^\sigma}}/\Gamma\to \widetilde X_{{\Del^\sigma}}/\Gamma$ is defined locally by $g: \widetilde X_{{\Del^\sigma}}/\Gamma\supset V/\Gamma\to gV/\Gamma\subset \widetilde X_{{\Del^\sigma}}/\Gamma$.
\end{proof}
\subsection{Basic properties of stable vectors}
\begin{lemma} Let ${\operatorname{Tan}}_0={\mathbb{A}}^n={\operatorname{Tan}}_0^{a_0}\oplus {\operatorname{Tan}}_0^{a_1}\oplus\cdots\oplus\ {\operatorname{Tan}}_0^{a_k}$ denote the tangent space of $\widetilde X_\sigma=\operatorname{Spec} K[[u_1,\dots,u_n]]$ at $0$ and its decomposition according to the weight distribution. Let $d:G_\sigma\to \operatorname{Gl}({\operatorname{Tan}}_0)$ be the differential morphism defined as $g\mapsto dg$. Then $d(G_\sigma)=\operatorname{Gl}({\operatorname{Tan}}_0^{a_1})\times\cdots\times \operatorname{Gl}({\operatorname{Tan}}_0^{a_k})$.
\end{lemma}
\begin{proof} The elements of the group $g\in G_\sigma$ are defined by $(u_1,\dots,u_n)\mapsto(g_1,\dots,g_n)$ where $g_i=g^*(u_i)$ are quasihomogeneous power series of $\Gamma_\sigma$-weights $a(g_i)=a(u_i)$.
\end{proof}
\begin{lemma}\label{le:semiinv}
Let $v\in\sigma$, where $\sigma\in\Sigma$, be an integral vector such that for any $g\in G_\sigma$, there exists an integral vector $v_g\in \sigma$ such that
$g_*({\operatorname{val}}(v))={\operatorname{val}}(v_g)$. Then ${\operatorname{val}}(v)$ is $G_\sigma$-invariant on $\widetilde{X}_\sigma$.
\end{lemma}
\begin{proof} Set $W=\{v_g\mid g\in G_\sigma\}$. For any $a\in {\mathbb{N}}$, the ideals $I_{{\operatorname{val}}(v_g), a}$ are generated by monomials. They define the same Hilbert--Samuel function $k\mapsto\dim_K(K[\widetilde X_\sigma]/(I_{{\operatorname{val}}(v_g),a}+m^k))$, where $m\subset K[\widetilde X_\sigma]$ denotes the maximal ideal. It follows that the set $W$ is finite. On the other hand, since the $I_{{\operatorname{val}}(v_g),a}$ are generated by monomials they are uniquely determined by the ideals ${\operatorname{gr}}(I_{{\operatorname{val}}(v_g),a})$ in the graded ring
\[{\operatorname{gr}}(O_{\widetilde X_\sigma})=O_{\widetilde X_\sigma}/{m}\oplus m/m^2\oplus\ldots\]
The connected group $d(G_\sigma)$ acts algebraically on ${\operatorname{gr}}(O_{\widetilde X_\sigma})$ and on the connected component of the Hilbert scheme with fixed Hilbert polynomial. In particular it acts trivially on its finite subset $W$ and consequently $d(G_\sigma)$ preserves ${\operatorname{gr}}(I_{{\operatorname{val}}(v_g),a})$ and $G_\sigma$ preserves $I_{{\operatorname{val}}(v_g), a}$.
\end{proof}
Let $R\subset K$ be a subring of a field $K$. We can order valuations by writing $$\mu_1>\mu_2\quad{\rm if}\quad\underset{a\in R}{\forall} \quad \mu_1(a)\geq \mu_2(a)\quad{\rm and }\quad\mu_1\neq \mu_2.$$ A cone $\sigma$ defines a partial ordering: $\quad \quad v_1>v_2 \quad {\rm if}\quad v_1-v_2\in\sigma\setminus \{0\}.$
\noindent Both orders coincide for $K[X_\sigma]\subset K(X_\sigma)$: $v_1>v_2$ iff ${\operatorname{val}}(v_1)>{\operatorname{val}}(v_2)$.
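\noindent
Indeed, if $v_1-v_2\in\sigma$ then $F(v_1)\geq F(v_2)$ for every $F\in\sigma^\vee$, so ${\operatorname{val}}(v_1)(f)=\min\{F(v_1)\mid x^F\ \mbox{occurs in}\ f\}\geq{\operatorname{val}}(v_2)(f)$ for all $f\in K[X_\sigma]$; conversely, the inequality ${\operatorname{val}}(v_1)\geq{\operatorname{val}}(v_2)$ applied to the characters $x^F$ gives $F(v_1-v_2)\geq 0$ for all $F\in\sigma^\vee$, that is, $v_1-v_2\in(\sigma^\vee)^\vee=\sigma$.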
\begin{lemma}\label{equiv2}
Let $\sigma$ be a cone in $N^{{\mathbb{Q}}}_\sigma$ with the lattice of 1-parameter subgroups $N_\sigma\subset N_\sigma^{{\mathbb{Q}}}$ and the dual lattice of characters $M_\sigma$. Let $\mu$ be any integral (or rational) valuation centered on ${\operatorname{cl}}(\widetilde{O_\tau})$, where $\tau\preceq\sigma$. Then the restriction of $\mu$ to $M_\sigma\subset M_\sigma\times M_{{\operatorname{reg}}(\sigma)}\subset K(\widetilde X_\sigma)^*$ defines a functional on $\tau^\vee\subseteq M_\sigma^{{\mathbb{Q}}}$ corresponding to a vector $v_\mu\in{\operatorname{int}}\tau$ such that
$F(v_\mu)=\mu(x^F)$ for $F\in M_\sigma$ and
$\mu\geq{\operatorname{val}}(v_\mu)$ on $\widetilde X_\sigma$.
\end{lemma}
\begin{proof} $I_{\mu,a}\supseteq (x^F\mid \mu(x^F)\geq a)=(x^F \mid F(v_\mu)\geq a)=I_{{\operatorname{val}}(v_\mu),a}$.
\end{proof}
\begin{lemma} \label{group} Let $\Gamma\subset \Gamma_\sigma$ be a finite group acting on $\widetilde X_\sigma$. Let $\pi:N^{{\mathbb{Q}}}\to (N^\Gamma)^{\mathbb{Q}}$ denote the projection corresponding to the geometric quotient $\widetilde X_\sigma\to\widetilde X_{\pi(\sigma)}=\widetilde X_{\sigma}/\Gamma$. Then ${\operatorname{val}}(v)$ is $G_\sigma$-invariant on $\widetilde X_\sigma$ iff ${\operatorname{val}}(\pi(v))$ is $G_\sigma$-invariant on $\widetilde X_{\pi(\sigma)}.$
\end{lemma}
\begin{proof} ($\Rightarrow$) ${\operatorname{val}}(v)$ is $G_\sigma$-invariant on $K[\widetilde X_\sigma]$,
hence also on the subring $K[\widetilde X_\sigma]^\Gamma$.
\noindent
($\Leftarrow$) Note that $\pi$ defines inclusions of lattices of the same dimension $N\hookrightarrow N^\Gamma$ and $M^\Gamma\hookrightarrow M$.
Assume that ${\operatorname{val}}(\pi(v))$ is $G_\sigma$-invariant. It defines a functional on the lattice $M^\Gamma$ whose unique extension to $M\supset M^\Gamma$ corresponds to ${\operatorname{val}}(v)$. Since $g_*({\operatorname{val}}(\pi(v)))={\operatorname{val}}(\pi(v))$, we have $g_*({\operatorname{val}}(v))_{|M^\Gamma}={\operatorname{val}}(v)_{|M^\Gamma}$ and consequently
$g_*({\operatorname{val}}(v))_{|M}={\operatorname{val}}(v)_{|M}. $
By Lemma \ref{equiv2}, $g_*({\operatorname{val}}(v))\geq {\operatorname{val}}(v)$ for all $g\in G_\sigma$. Thus ${\operatorname{val}}(v)\geq g_*^{-1}({\operatorname{val}}(v))$ for all $g\in G_\sigma$. Finally $g_*({\operatorname{val}}(v))= {\operatorname{val}}(v)$.
\end{proof}
\subsection{Stability of centers from ${\operatorname{par}}(\pi(\tau))$}
In the following let $\Del^\sigma$ be a decomposition of $\sigma\in\Sigma$ such that $\widetilde X_{\Del^\sigma}\to \widetilde X_\sigma$ is $G_\sigma$-equivariant, $\tau\in \Del^\sigma$ be its face and
$\Gamma$ be a finite subgroup of $\Gamma_\sigma$. Denote by $\pi :(\sigma, N_\sigma)\to (\sigma^\Gamma, N^\Gamma_\sigma)$ the linear isomorphism and the lattice inclusion corresponding to the quotient $X_\sigma\to X_{\sigma}/\Gamma=X_{\pi(\sigma)} $.
\begin{lemma}\label{sp}
Assume that for any $g\in G_\sigma$, there exists a cone $\tau_g\in\Del^\sigma$ such that $g\cdot({\operatorname{cl}}(\widetilde{O_\tau}))={\operatorname{cl}}(\widetilde{O}_{\tau_g})$.
Let $v\in{\operatorname{int}}(\pi(\tau))\subset N^\Gamma_\sigma\subset N_\sigma$ be an integral vector such that ${\operatorname{val}}(v)$ is not $G_\sigma$-invariant on $\widetilde X_{\sigma}{/\Gamma}$. Then there exist integral vectors $v_1\in{\operatorname{int}}(\pi(\tau))$ and $v_2\in\pi(\tau)$ such that $$v=v_1+v_2.$$ Moreover if there exists $v_0\in \pi(\sigma)$ (not necessarily integral) such that ${\operatorname{val}}(v_0)$ is $G_\sigma$-invariant and $v>v_0$ on $\pi(\sigma)$ then $v_1>v_0$ on $\pi(\sigma)$.
\end{lemma}
\begin{proof}
If ${\operatorname{val}}(v)$ is not $G_\sigma$-invariant on $\widetilde X_\sigma/\Gamma$ then by Lemma \ref{le:semiinv}
there exists an element $g\in G_\sigma$ such that $\mu_g=g_*({\operatorname{val}}(v))$ is not a toric valuation. By the assumption $\mu_g$ is centered on ${\operatorname{cl}}(\widetilde{O}_{\pi(\tau_g)})$. Then by Lemma \ref{equiv2} it defines $v_g\in {\operatorname{int}}(\pi(\tau_g))$ such that $\mu_g(x^F)=F(v_g)$ for $F\in\sigma^\vee$. Moreover $\mu_g>{\operatorname{val}}(v_g)$. Then
the valuation $g_*^{-1}({\operatorname{val}}(v_g))$ is centered on ${\operatorname{cl}}(\widetilde{O}_{\pi(\tau)})$.
Thus it defines an integral $v_1\in {\operatorname{int}}(\pi(\tau))$ such that $v> v_1$ on $\pi(\tau)$; set $v_2:=v-v_1$.
Then
\[{\operatorname{val}}(v)=g_*^{-1}(\mu_g)>g_*^{-1}({\operatorname{val}}(v_g))\geq{\operatorname{val}}(v_1). \]
\noindent
Note also that if $v\geq v_0$ then $\mu_g=g_*({\operatorname{val}}(v))\geq {\operatorname{val}}(v_0)$ and ${\operatorname{val}}(v_g)\geq {\operatorname{val}}(v_0)$. Thus also ${\operatorname{val}}(v_1)\geq {\operatorname{val}}(v_0)$.
\end{proof}
\begin{lemma}\label{equiv} All valuations ${\operatorname{val}}(v)$, where $v\in\varrho$, $\varrho\in {\operatorname{Vert}} (\Del^\sigma)\setminus {\operatorname{Vert}} (\sigma)$, are $G_\sigma$-invariant.
\end{lemma}
\begin{proof} Let $v_\varrho$ be the primitive generator of $\varrho\in {\operatorname{Vert}} (\Del^\sigma)\setminus {\operatorname{Vert}} (\sigma)$. The ray $\varrho$ corresponds to an exceptional divisor $D_\varrho$. By definition there is no decomposition $v_\varrho=v_1+v_2$ with integral $v_1\in{\operatorname{int}}(\varrho)$ and $0\neq v_2\in\varrho$. Thus by the previous lemma (for $\Gamma=\{e\}$), ${\operatorname{val}}(v_\varrho)$ is $G_\sigma$-invariant.
\end{proof}
\begin{lemma} For any $\tau\subset\sigma$, the closure of the orbit ${\operatorname{cl}}{(\widetilde{O_\tau})}\subset \widetilde X_{\sigma}$ is $G_\sigma$-invariant.
\end{lemma}
\begin{proof} By Lemma \ref{inclusion}, the ideal of ${\operatorname{cl}}{(\widetilde{O_\tau})}\subset \widetilde X_\sigma$ is generated by all functions with nontrivial $\Gamma_\sigma$-weights.
\end{proof}
\begin{lemma} \label{stab} The valuations ${\operatorname{val}}(v)$, where $v\in {\operatorname{par}}(\pi(\tau))$, are $G_\sigma$-invariant on $\widetilde X_{\Del^\sigma}$. Moreover $v\in{\operatorname{int}}(\pi(\sigma_0))$, for some $\sigma_0\leq\sigma$.
\end{lemma}
\begin{proof} Let $v\in{\operatorname{par}}(\pi(\tau))$, where $\pi(\tau)\in\pi({\Del^\sigma})$, be a minimal integral vector such that ${\operatorname{val}}(v)$ is not $G_\sigma$-invariant. We may assume that $v\in{\operatorname{int}}(\pi(\tau))$, passing to a face if necessary. Let $\sigma'\preceq\sigma$ be a face of $\sigma$ such that $v\in{\operatorname{int}}(\pi(\sigma'))$. In particular $\pi(\sigma') \supset \pi(\tau) $.
Then $\pi(\Del^\sigma)_{|\pi(\sigma')}=\pi(\Del^\sigma)_{|\pi(\sigma_0)}\oplus\langle e_1,\dots,e_k\rangle $ by Lemmas \ref{pr} and \ref{sub} and
$ v \in {\rm par}{(\pi(\tau))}\subset \pi(\sigma_0) $.
Thus $\sigma'=\sigma_0$ and $v\in{\operatorname{int}} (\pi(\sigma_0))$.
Let $$\pi(\tau)=\langle v_1,\dots,v_k,w_1,\dots,w_\ell\rangle,$$\noindent where $v_1,\dots,v_k\in{\operatorname{Vert}} (\pi(\tau))$ and $w_1,\dots,w_\ell\in{\operatorname{Vert}} (\pi({\Del^\sigma}))\setminus {\operatorname{Vert}} (\pi(\sigma))$. By Lemma \ref{equiv}, ${\operatorname{val}}(w_1),\dots,{\operatorname{val}}(w_\ell)$ are $G_\sigma$-invariant. Write $$v=\alpha_1v_1+\cdots+\alpha_kv_k+\alpha_{k+1}w_1+\cdots+\alpha_{k+\ell}w_\ell,$$\noindent where $0<\alpha_i<1$. Note that $$v\geq v_0=\alpha_{k+1}w_1+\cdots+\alpha_{k+\ell}w_\ell$$ and ${\operatorname{cl}}(\widetilde{O}_{\pi(\sigma_0)})\subset\widetilde X_{\pi(\sigma)}$ is $G_\sigma$-invariant. By Lemma \ref{sp} for $v\in\pi(\sigma_0)\leq\pi(\sigma)$ and $v>v_0$ we can find integral vectors $v', v''\in\pi(\sigma)$ such that $v=v'+v''$, $v'\geq v_0$. Then $$v'':=v-v'\leq v-v_0=\alpha_1v_1+\cdots+\alpha_kv_k.$$ Thus $v''\in {\operatorname{par}}\langle v_1,\ldots,v_k\rangle\subseteq {\operatorname{par}}(\pi(\tau))$.
Write $v'' :=\beta_1v_1+\cdots+\beta_kv_k$, where $\beta_i\leq\alpha_i$. Then $$v'=v-v''=(\alpha_1-\beta_1){v_1}+\cdots+(\alpha_k-\beta_k){v_k}+\alpha_{k+1}w_1+\cdots+\alpha_{k+\ell}w_\ell\in{\operatorname{par}}(\pi(\tau)).$$ By the minimality assumption, ${\operatorname{val}}(v')$ and ${\operatorname{val}}(v'')$ are $G_\sigma$-invariant and by Lemma \ref{convex}, ${\operatorname{val}}(v)={\operatorname{val}}(v'+v'')$ is $G_\sigma$-invariant.
\end{proof}
\begin{corollary} \label{stab2} Let $\Del=\{\Del^\sigma\mid\sigma\in\Sigma\}$ be a decomposition of $\Sigma$. Let $\tau\in\Del^{\sigma}$ be an independent face. Then the vectors in $(\pi_{\sigma_{|\tau}})^{-1}({\operatorname{par}}(\pi_\sigma(\tau)))$ are stable.
\end{corollary}
\begin{proof} Put $\Gamma=\Gamma_\tau$. Let $\pi: (\sigma,N_\sigma)\to (\pi(\sigma), N^\Gamma_\sigma)$ be the linear isomorphism and the lattice inclusion corresponding to the quotient $X_\sigma\to X_\sigma/\Gamma$.
Then by Lemma \ref{pro2}, $\pi(\tau)\simeq\pi_\tau(\tau)\simeq\pi_\sigma(\tau)$ and by Lemma \ref{stab} vectors in $(\pi_{\sigma_{|\tau}})^{-1}({\operatorname{par}}(\pi_\sigma(\tau)))=\pi^{-1}({\operatorname{par}}(\pi(\tau)))$ are stable.
\end{proof}
\begin{corollary} \label{para}\begin{enumerate}
\item Assume that for any $g\in G_\sigma$, there exists $\tau_g\in \Del^\sigma$ such that $g({\operatorname{cl}}(\widetilde{O_\tau}))={\operatorname{cl}}(\widetilde{O}_{\tau_g})$. Then ${\operatorname{cl}}(\widetilde{O_\tau})$ is $G_\sigma$-invariant.
Moreover all valuations ${\operatorname{val}}(v)$, where $v\in \overline{\rm{par}}\,(\tau)\cap {\operatorname{int}}(\tau)$, are $G_\sigma$-invariant.
\item Let $\tau\in \Del^\sigma$ be an independent cone such that ${\operatorname{cl}}(\widetilde{O_\tau})$
is $G_\sigma$-invariant. Then for any $v\in\pi^{-1}_\sigma(\overline{\rm{par}}\,(\pi(\tau))\cap {\operatorname{int}}(\pi(\tau)))$ the valuation ${\operatorname{val}}(v)$ is $G_\sigma$-invariant.
\end{enumerate}
\end{corollary}
\begin{proof}
1. Let $\tau=\langle v_1,\dots,v_k\rangle$ and $v=\alpha_1v_1+\cdots+\alpha_kv_k$, where $0<\alpha_i\leq 1$, be a minimal vector in ${\operatorname{int}}(\tau)\cap\overline{\rm{par}}\,(\tau)$ such that ${\operatorname{val}}(v)$ is not $G_\sigma$-invariant. Then by Lemma \ref{sp}, the vector $v$ can be written as $v=v'+v''$, where $v',v''<v$, $v'\in {\operatorname{int}}(\tau)$, $v''\in\tau$. Thus $v'=\alpha'_1v_1+\cdots+\alpha'_kv_k$ where $0<\alpha'_i\leq\alpha_i\leq 1$
and $v''=\alpha''_1v_1+\cdots+\alpha''_kv_k$, where $0\leq\alpha''_i=\alpha_i-\alpha'_i< 1$. Then $v'\in {\operatorname{int}}(\tau)\cap\overline{\rm{par}}\,(\tau)$ and $v''\in {\operatorname{par}}(\tau)$. By Corollary \ref{stab2}, ${\operatorname{val}}(v'')$ is $G_\sigma$-invariant on $\widetilde X_\sigma$. By the minimality assumption ${\operatorname{val}}(v')$ is $G_\sigma$-invariant. Since $v=v'+v''$, the valuation ${\operatorname{val}}(v)$ is $G_\sigma$-invariant on $\widetilde X_\sigma$ and its center $Z({\operatorname{val}}(v))$ equals ${\operatorname{cl}}(\widetilde{O}_\tau)$.
2. Let $\pi:N\to N^\Gamma$ be the projection corresponding to the quotient $X_\sigma\to X_{\sigma}/\Gamma_\tau$. Then by Lemma \ref{pro2}, we have $\pi(\tau)\simeq\pi_\sigma(\tau)$. The proof is now exactly the same as the proof of (1), except that we replace $\widetilde X_{\Del^\sigma}$ with $\widetilde X_{\Del^\sigma}/{\Gamma_\tau}$.
\end{proof}
\subsection{Fixed points of the action}
We shall carry over the concept of fixed point set of the action of $K^*$ to the scheme $\widetilde X_{\Del^\sigma}$. The problem is that $\widetilde X_{\Del^\sigma}$ does not contain enough closed points.
\begin{definition} A point $p\in\widetilde X_{\Del^\sigma}$ is a {\it fixed point} of the action of $K^*$ if $K^*\cdot p=p$ and
$K^*$ acts trivially on the residue field $K_p$ of $p$.
\end{definition}
\noindent
\begin{lemma} \label{fixed} The set of all fixed points ${\widetilde X_{\Del^\sigma}}^{K^*}$ of the action of $K^*$ is given by the union of the closures of the orbits ${\operatorname{cl}}(\widetilde{O}_\del)$ defined by circuits $\del\in\Del^\sigma$. The ${\operatorname{cl}}(\widetilde{O}_\del)$ are maximal irreducible components of the fixed point set.
\end{lemma}
\begin{proof} A point $p$ of $\widetilde X_{\Del^\sigma}$ lies in the locally closed subscheme defined by a unique orbit $\widetilde{O}_\tau$, where $\tau\in\Del^\sigma$.
If $\tau$ is independent then there exists an invertible character $x^F$, where $F\in\tau^\perp$, on which $K^*$ acts nontrivially.
Then the action on $K_p\ni x^F$ is nontrivial. If $\tau$ is dependent then
the action on $K[\widetilde{O}_\tau]$ and on $K_p$ is trivial so $p\in \widetilde{O}_\tau$ is a fixed point and $p\in {\operatorname{cl}}({\widetilde{O}_\delta})$, where $\delta\preceq\tau$ is a circuit.
\end{proof}
\begin{corollary} \label{g} Let $\delta\in \Del^\sigma$ be a circuit. Then ${\operatorname{cl}}(\widetilde{O}_\delta)$ is $G_\sigma$-invariant.
\end{corollary}
\begin{proof} By Corollary \ref{fixed2},
${\operatorname{cl}}(\widetilde{O}_\delta)$ is an irreducible component of a $G_\sigma$-invariant closed subscheme $\widetilde X_{\Del^\sigma}^{K^*}$. Thus by Corollary \ref{para}(1) it is $G_\sigma$-invariant.
\end{proof}
\subsection{Stability of ${\operatorname{Ctr}}_+(\sigma)$}
In the sequel $\del=\langle v_1,\ldots,v_k\rangle\in\Del^\sigma$ is a circuit.
Let $\Gamma\subset\Gamma_\sigma=K^*$ be a finite group.
Denote by $\pi$ (resp. $\pi_\Gamma$) the projection corresponding to the quotient $X_\del\to X_\del//K^*$ (resp. $X_\del\to X_\del/\Gamma$).
In particular $\pi_\sigma(\del)\simeq\pi(\del)$.
Write $\pi_\sigma(\del)=\langle w_1,\ldots,w_k\rangle$ and let $\sum_{i}r'_iw_i =0$ be the unique relation between the vectors (**) as in Section \ref{dep}. Set
${\rm Ctr_+}(\del)=\sum_{r'_i>0}w_i \in \overline{\rm{par}}(\pi_\sigma(\del_+))\cap{\operatorname{int}}(\pi_\sigma(\del_+))$, where
${\del_+}=\langle v_i\mid r'_i>0\rangle$.
Denote by $\whx_\del$ the completion of $\widetilde X_{\Del^\sigma}$ at $\widetilde{O}_\del$. By Corollary \ref{g}, the generic point $\widetilde{O}_\del\in\widetilde X_{\Del^\sigma}$ is $G_\sigma$-invariant and thus $G_\sigma$ acts on $\whx_\del$. Moreover
$K[\whx_\del]=K(\widetilde{O}_\del)[[\underline{\del}^\vee]]$ is faithfully flat over ${\mathcal O}_{\widetilde X_{\Del^\sigma},\widetilde{O}_\del}$.
Also, $\widehat{{\mathcal O}}_{X_{\pi_\Gamma(\Del)},\widetilde{O}_{\pi_\Gamma(\del)}}=K(\widetilde{O}_{\pi_\Gamma(\del)})[[\underline{\pi_\Gamma(\del)}^\vee]]$ is faithfully flat over ${{\mathcal O}}_{X_{\pi_\Gamma(\Del)},\widetilde{O}_{\pi_\Gamma(\del)}}$.
The valuation ${\operatorname{val}}(v)$, where $v\in\del$, on the local ring ${\mathcal O}_{\widetilde X_{\Del^\sigma},\widetilde{O}_\del}$ (or ${{\mathcal O}}_{X_{\pi_\Gamma(\Del)},\widetilde{O}_{\pi_\Gamma(\del)}}$) extends to its completion $\widehat{{\mathcal O}}_{\widetilde X_{\Del^\sigma},\widetilde{O}_\del}=K(\widetilde{O}_\del)[[\underline{\del}^\vee]]$ (respectively $K(\widetilde{O}_{\pi_\Gamma(\del)})[[\underline{\pi_\Gamma(\del)}^\vee]]$). Moreover ${\operatorname{val}}(v)_{| K(\widetilde{O}_\del)^*}=0$ and the action of $K^*$ on $K(\widetilde{O}_\del)$ is trivial.
As in Lemma \ref{closed} we get
\begin{lemma} \label{compa} The valuation ${\operatorname{val}}(v)$, where $v\in\pi_\Gamma(\del)$, is $G_\sigma$-invariant on $\whx_\del/\Gamma$ iff it is $G_\sigma$-invariant on $\widetilde X_{\Del^\sigma}/\Gamma$.
\end{lemma}
\begin{lemma} \label{inva}\begin{enumerate}
\item ${\operatorname{cl}}( \widetilde{O}_{\del_-}), {\operatorname{cl}}(\widetilde{O}_{\del_+})\subset\whx_\del$ are $G_\del$-invariant.
\item ${\operatorname{cl}}(\widetilde{O}_{\del_-}),{\operatorname{cl}}(\widetilde{O}_{\del_+})\subset\widetilde X_{\Del^\sigma}$ are $G_\del$-invariant.
\end{enumerate}
\end{lemma}
\begin{proof}(1) By Lemmas \ref{sigma} and \ref{fix}, the ideal $I_{{\operatorname{cl}}{(\widetilde{O}_{\del_+})}}\subset K[\whx_\del]$ of ${\operatorname{cl}}(\widetilde{O}_{\del_+})=(\widetilde{O}_\del)^+$ is generated by functions with positive weights. (2) Consider the morphisms $\whx_\del\buildrel e\over\to \widetilde X_{\Del^\sigma}\to X_{\Del^\sigma}$. The morphism $e$ is $G_\sigma$-equivariant and maps the generic points of the orbits $\widetilde{O}_{\del_\pm}$ on $\whx_\del$ onto the generic points of the corresponding orbits on $ \widetilde X_{\Del^\sigma}$.
\end{proof}
\begin{corollary}\label{quo} There are open $K^*$-equivariant embeddings of schemes $(\whx_\del)_-:=\whx_\del\times_{X_\del}(X_\del)_-\subset \whx_\del$ and $
(\whx_\del)_+:=\whx_\del\times_{X_\del}(X_\del)_+\subset \whx_\del$.
\end{corollary}
\begin{lemma} There exist quotients
$$(\whx_\del)_{-}/K^*=\whx_\del/K^*\times_{X_{\del}{/K^*}}(X_\del)_{-}/K^*, \quad (\whx_\del)_+/K^*=\whx_\del/K^*\times_{X_{\del}{/K^*}}(X_\del)_+/K^*.$$
\end{lemma}
\begin{proof} The proof is identical to the proof of Lemma \ref{le:q}, except that we use Lemma \ref{le:100} below instead of Lemma \ref{le:10}.
\end{proof}
\begin{lemma} \label{le:100} Let $K(\widetilde{O}_{\del})[\underline{\del}^\vee]=\underset{a\in {\mathbb{Z}}}{\oplus}K(\widetilde{O}_{\del})[\underline{\del}^\vee]^a$ be
a decomposition according to weights. Then $K(\widetilde{O}_{\del})[\underline{\del}^\vee]^a$ is generated over $K(\widetilde{O}_{\del})[\underline{\del}^\vee]^0$
by
finitely many monomials.
\end{lemma}
\begin{proof} Let $x^{F_1},\dots,x^{F_k}$
generate $K(\widetilde{O}_{\del})[\underline{\del}^\vee]$. Set
$b:=\max\{|F_1(v_\del)|,\dots,|F_k(v_\del)|\}$. We show that all the elements
$x^{\alpha_1F_1+\cdots+\alpha_kF_k}$, where $$\alpha_1F_1(v_\del)+\cdots
+\alpha_kF_k(v_\del)=a\quad \mbox{and}\quad 0\leq\alpha_i\leq k\cdot b^2+|a|,$$ generate
$K(\widetilde{O}_{\del})[\underline{\del}^\vee]^a$ over $K(\widetilde{O}_{\del})[\underline{\del}^\vee]^0$. Without loss of generality assume that $\alpha_i>k\cdot b^2+|a|$ for some $i$ with
$F_i(v_\del)>0$. Then
$$k\cdot b\cdot\max\{\alpha_i\mid F_i(v_\del)<0\}\geq
-\underset{F_i(v_\del)<0}{\sum}\alpha_iF_i(v_\del)
=\underset{F_i(v_\del)>0}{\sum}\alpha_iF_i(v_\del)-a\geq kb^2+|a|-a\geq kb^2.$$
Thus there exists $j$ such that $\alpha_j\geq\displaystyle{kb^2\over kb}=b$ and
$F_j(v_\del)<0$. But then
$$x^{\alpha_1F_1+\cdots+\alpha_kF_k}=x^{F_i(v_\del)F_j-F_j(v_\del)F_i}\cdot
x^{\alpha_1F_1+\cdots+\left(\alpha_i+F_j(v_\del)\right)F_i+\cdots+\left(\alpha_j-F_i(v_\del)\right)F_j+\cdots +\alpha_kF_k},$$
\noindent where $x^{F_i(v_\del)F_j-F_j(v_\del)F_i}\in K(\widetilde{O}_{\del})[\underline{\del}^\vee]^0$ and $x^{\alpha_1F_1+\cdots+\left(\alpha_i+F_j(v_\del)\right)F_i+\cdots+\left(\alpha_j-F_i(v_\del)\right)F_j+\cdots +\alpha_kF_k}\in K(\widetilde{O}_{\del})[\underline{\del}^\vee]^a$ has smaller exponents.
\end{proof}
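\medskip\noindent
For example, if $k=2$ and $F_1(v_\del)=1$, $F_2(v_\del)=-1$ (so $b=1$), then $x^{F_1+F_2}$ has weight $0$ and
\[x^{\alpha_1F_1+\alpha_2F_2}=x^{F_1+F_2}\cdot x^{(\alpha_1-1)F_1+(\alpha_2-1)F_2},\]
so every monomial of weight $a=\alpha_1-\alpha_2$ reduces, after finitely many such steps, to $x^{aF_1}$ (if $a\geq 0$) or $x^{-aF_2}$ (if $a<0$) times a monomial of weight zero.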
\begin{lemma}\label{3}
The action of $G_\sigma$ on $(\whx_\del)_-$ and $(\whx_\del)_+$ descends to $(\whx_\del)_-/K^*$ and $(\whx_\del)_+/K^*$.
\end{lemma}
\begin{proof}
The proof is almost identical to the proof of Lemma \ref{2}, except that Lemma \ref{21} is replaced with Lemma \ref{31}.
We replace open affine subsets $V$ with open affine subsets satisfying the condition (***) below.
\end{proof}
\begin{lemma}\label{31} Let $V$ be an open affine $\Gamma$--invariant subscheme of $\whx_\tau=X_\tau\times_{X_\del}\whx_\del\subset (\whx_\del)_-$. Then $V$ satisfies the following condition.
\noindent
(***) For any open affine $\Gamma$--invariant subscheme $U\subset V$ there is an inclusion of open affine subschemes $U/\Gamma\subset V/\Gamma$.
\end{lemma}
\begin{proof} Let $Z\subset\whx_\del \setminus U$ be a closed subscheme. Then the ideal $I_Z\subset K[\whx_\del]$ is $K^*$--invariant. Let $f=\sum_{F\in\del^\vee}\alpha_Fx^F\in I_Z$ and let $f_a:=\sum_{F\in(\del^\vee)^a}\alpha_Fx^F$ be the part of $f$ of weight $a$. Let $m_\del\subset K[\whx_\del]$ be the maximal ideal. Then for any $m^k_\del$, the decomposition $[f]:=f+m^k_\del=\sum f_a+m^k_\del=\sum [f_a]\in K[\whx_\del]/m^k_\del$ is finite. Moreover $t[f]=\sum t^a [f_a]$. It follows that $[f_a]=f_a+m^k_\del\in I_Z+m^k_\del$ and $f_a\in I_Z$. Thus $I_Z$ is generated by semiinvariant generators $f_1,\dots,f_k$ with weights $a_1,\dots,a_k$. Note that since $\tau$ is independent, we have $v_\sigma\not\in{\rm span}\,(\tau)$, so $v_\sigma$ is not orthogonal to $({\rm span}\,(\tau))^\perp=\tau^\perp\otimes_{\mathbb{Z}}{\mathbb{Q}}$. Thus there exists $F\in \tau^\perp$ such that $F(v_\sigma)=a\neq 0$. The corresponding character $x^F$ is invertible on $\whx_\tau$. The functions $g_i=f^a_i (x^F)^{-a_i}$ are invariant (of weight $aa_i-a_ia=0$) and
$(\whx_\tau)_{g_i}=(\whx_\tau)_{f_i}=V_{f_i}$.
Then $U=\underset{i}{\bigcup}V_{g_i}=\underset{i}{\bigcup}(\whx_\tau)_{g_i}$
and $U_{g_i}/K^*=V_{g_i}/K^*$ is open in $U/K^*$ and in $V/K^*$. It follows that $U/K^*\subset V/K^*$.
\end{proof}
Proposition \ref{sigma2}, Lemma \ref{sigma} and the above imply:
\begin{corollary}\label{bira}
The morphisms $\widehat{\phi}_-: (\whx_\del)_-/K^*\to \whx_\del/K^*$ and
$\widehat{\phi}_+: (\whx_\del)_+/K^*\to \whx_\del/K^*$
are $G_\sigma$-equivariant, proper and birational.
\end{corollary}
\begin{lemma} \label{stable} The vector $v:={\rm Mid}\,({\rm Ctr_+}(\del),\del)=\pi_{\sigma_{|\del_-}}^{-1}({\rm Ctr_+}(\del))+\pi_{\sigma_{|\del_+}}^{-1}({\rm Ctr_+}(\del))$
is stable.
\end{lemma}
\begin{proof} Set $ v_-:=\pi_{\sigma_{|\del_-}}^{-1}({\rm Ctr_+}(\del)) $ and
$ v_+:=\pi_{\sigma_{|\del_+}}^{-1}({\rm Ctr_+}(\del))$.
By Lemma \ref{inva}, ${\operatorname{cl}}(\widetilde{O}_{\del_+})\subset \widetilde X_{\Del^\sigma}$ is $G_\sigma$-invariant and, by Corollary \ref{para}(2) and Lemma \ref{compa}, ${\operatorname{val}}(v_+)$ is $G_\sigma$-invariant on $\widetilde X_{\Del^\sigma}$ and on $\whx_{\del}$. Hence the valuation ${\operatorname{val}}(v_+)$ descends to a $G_\sigma$-invariant valuation ${\operatorname{val}}(\pi(v_+))$ on $\whx_{\del}{//K^*}=\operatorname{Spec} K(\widetilde{O}_\del)[[\underline{\del}^\vee]]^{K^*}$.
By Corollary \ref{bira}, ${\operatorname{val}}(\pi(v_-))={\operatorname{val}}(\pi(v_+))$
is
$G_\sigma$-invariant on
$(\whx_{\del})_+/K^*=\whx_{\partial_-(\del)}/K^*=\whx_{\pi(\partial_-(\del))}$.
Let $\Gamma\subset K^*$ be the subgroup generated by all subgroups $\Gamma_\tau\subset K^*$, where $\tau\in\partial_-(\del)$.
Then $K^*/\Gamma$ acts freely on $X_{\partial_-(\del)}/\Gamma=(X_\del)_+/\Gamma$. Let $j: (X_\del)_+/\Gamma\to (X_\del)_+/K^*$ be the natural morphism.
Let $\pi_\Gamma:\del\to \pi_\Gamma(\del)$ be the projection corresponding to the quotient $X_\del \to X_\del/\Gamma$.
By Lemma \ref{le: p},
for any $\tau\in\partial_-(\del)$, the restriction of $j$ to $X_\tau/\Gamma\subset (X_\del)_+/\Gamma$ is given by $$j:X_\tau/\Gamma=
X_{\underline\tau}/\Gamma\times O_\tau/\Gamma\to X_\tau/K^*=X_{\underline\tau}/\Gamma\times O_\tau/K^*.$$ Thus ${j}^*({\mathcal I}_{{\operatorname{val}}(\pi_\Gamma(v_-)),a})={\mathcal I}_{{\operatorname{val}}(\pi(v_-)),a}$.
Consider the natural morphisms $i_\Gamma:(\whx_\del)_+/\Gamma\to(X_\del)_+/\Gamma$ and $i_{K^*}:(\whx_\del)_+/K^*\to(X_\del)_+/K^*$. Then
${i_\Gamma}^*({\mathcal I}_{{\operatorname{val}}(\pi_\Gamma(v_-)),a,X_\del/\Gamma})={\mathcal I}_{{\operatorname{val}}(\pi_\Gamma(v_-)),a,\whx_\del/\Gamma}$ and $({i_{K^*}})^*({\mathcal I}_{{\operatorname{val}}(\pi(v_-)),a,X_\del/K^*})={\mathcal I}_{{\operatorname{val}}(\pi(v_-)),a,\whx_\del/K^*}$. Let $\hat{j}:(\whx_\del)_+/\Gamma\to (\whx_\del)_+/K^*$ be the natural morphism induced by $j$. The following diagram commutes.
\[\begin{array}{rcccccccc}
&&& (\whx_\del)_+/\Gamma
& \buildrel\widehat{j}\over\rightarrow &(\whx_\del)_+{/K^*}
&&& \\
&&&\downarrow {i_\Gamma} & & \downarrow i_{K^*} &&&\\
&& & (X_\del)_+/\Gamma & \rightarrow & (X_\del)_+{/K^*}.&&&\\
\end{array}\]
Thus we get $\hat{j}^*({\mathcal I}_{{\operatorname{val}}(\pi(v_-)),a,\whx_\del/K^*})={\mathcal I}_{{\operatorname{val}}(\pi_\Gamma(v_-)),a,\whx_\del/\Gamma}$.
Since the morphism $\widehat j$ is $G_\sigma$-equivariant it follows that ${\operatorname{val}}(\pi_\Gamma(v_-))$ is $G_\sigma$-equivariant on $(\whx_\del)_+/\Gamma$. Since $(\whx_\del)_+\subset \whx_\del$ is an open $G_\sigma$-equivariant inclusion and $\Gamma$ is finite we get that the morphism $(\whx_\del)_+/\Gamma\subset (\whx_\del)/\Gamma$
is an open $G_\sigma$-equivariant inclusion.
Thus the valuation ${\operatorname{val}}(\pi_\Gamma(v_-))$ is $G_\sigma$-equivariant on $\whx_\del{/\Gamma}$ and on $\widetilde X_{\Del^\sigma}/\Gamma$ (Lemma \ref{compa}). Finally, by Lemma \ref{group}, ${\operatorname{val}}(v_-)$ is $G_\sigma$-equivariant on $\widetilde X_{\Del^\sigma}$. By convexity, ${\operatorname{val}}(v)={\operatorname{val}}(v_++v_-)$ is $G_\sigma$-equivariant on $\widetilde X_{\Del^\sigma}$.
\end{proof}
\subsection{Canonical coordinates on $\Sigma$}
\medskip\noindent
Note that for any $\sigma\in\Sigma$ we can order the coordinates according to the weights
\[X_\sigma\simeq {\mathbb{A}}^k={\mathbb{A}}_{a_1}\oplus {\mathbb{A}}_{a_2}\oplus\cdots\oplus {\mathbb{A}}_{a_\ell},\]
where $a_1< a_2<\ldots< a_\ell$ and $\Gamma_\sigma$ acts on ${\mathbb{A}}_{a_i}$ with character $t\to t^{a_i}$, where $t\in\Gamma$ and $a_i\in\mathbb Z_n$ if $\Gamma_\sigma\simeq \mathbb Z_n$ or $a_i\in\mathbb Z$ if $\Gamma_\sigma\simeq K^*$. (In the first case the $a_i$ are represented by integers from $[0,n-1]$.) Let us call these coordinates \textit{canonical}. The canonical coordinates are preserved by the group $\rm{Aut}\, (\sigma)$ of all automorphisms of $\sigma$ defining $K^*$-equivariant automorphisms of $X_\sigma$. Since all stable vectors $v\in\sigma$ define $G_\sigma$-invariant valuations $\rm{val}\,(v)$ on $\widetilde X_\sigma$, they are in particular $\rm{Aut}\,(\sigma)$-invariant. Thus all stable vectors $v\in\sigma$ can be assigned the canonical coordinates in a unique way.
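To illustrate the definition (an example only, not used in the sequel): suppose $\Gamma_\sigma\simeq \mathbb Z_2$ acts on $X_\sigma\simeq{\mathbb{A}}^3$ with weights $1,0,1$ on some initial coordinates $(x_1,x_2,x_3)$. Reordering the coordinates by weight gives the canonical form
\[X_\sigma\simeq {\mathbb{A}}_{a_1}\oplus {\mathbb{A}}_{a_2},\qquad a_1=0<a_2=1,\]
where ${\mathbb{A}}_{a_1}$ is spanned by $x_2$ and ${\mathbb{A}}_{a_2}$ by $x_1,x_3$. The weight spaces ${\mathbb{A}}_{a_i}$, though not the individual coordinates within them, are intrinsic to the action.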
\subsection{Canonical $\pi$-desingularization of cones $\sigma$ in $\Sigma$}
Given the canonical coordinates we are in a position to construct a canonical $\pi$-desingularization of $\sigma$ or its subdivision $\Del^\sigma$. We eliminate all choices of centers of star subdivisions in the
$\pi$-desingularization algorithm by choosing the center with the smallest canonical coordinates (ordered lexicographically).
\subsection{Canonical $\pi$-desingularization of $\Sigma$}
\medskip
\noindent
Note that the $\pi$-desingularization of an independent $\tau\in\Sigma$ is nothing but a desingularization of $\pi(\tau)$.
\noindent
For any cone $\sigma=\langle v_1,\ldots,v_k\rangle \in\Sigma$ the vector $v_\sigma:=v_1+\ldots+v_k\in\overline{{\operatorname{par}}}(\sigma)$ is stable (Lemma \ref{para}).
Order all cones $\sigma\in\Sigma$ by their dimension and apply the star subdivision at $\langle v_\sigma\rangle\in\sigma$ starting from the highest dimension to the lowest.
Note that the result of this subdivision does not depend on the order of cones of the same dimension. This is because no two cones of
$\Sigma$ of the same dimension are faces of the same cone. The cones of higher dimension were already subdivided, and all their proper faces occur in different cones.
Let $\Del=\{\Del^\sigma\mid \sigma\in\Sigma\}$ denote the resulting subdivision.
Now we apply the canonical subdivision $\Del^\pi_\sigma$ to the subdivided cones $\Del^\sigma$ starting from the lowest dimension to the highest.
The subdivisions $\Del^\pi_*$ applied to any two (subdivided) cones of the same dimension in $\Sigma$ commute, since their faces (of lower dimension) are already $\pi$-nonsingular and thus not affected by further subdivisions. Also, as before, no two cones of the same dimension
lying in different faces of $\Sigma$ are faces of the same cone. Note also that $\Del^\pi_\sigma$ depends only on $\sigma$ and is independent of the other faces of $\Sigma$.
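In the simplest situation both passes can be written out explicitly (an illustration only): for a single two-dimensional cone $\sigma=\langle v_1,v_2\rangle\in\Sigma$, the first pass inserts the ray $\langle v_\sigma\rangle=\langle v_1+v_2\rangle$, giving
\[\Del^\sigma=\{\langle v_1,v_1+v_2\rangle,\ \langle v_2,v_1+v_2\rangle\}\]
together with their faces, while the star subdivision of a ray $\langle v_i\rangle$ at its own center $v_{\langle v_i\rangle}=v_i$ is trivial. The second pass then applies $\Del^\pi_*$ to the rays first and to the two-dimensional cones afterwards.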
\subsection{Proof of the Weak Factorization Theorem}
The canonical $\pi$-desingularization $\Delta^\pi$ of $\Sigma$ is obtained by a sequence of star subdivisions at stable centers (Lemmas \ref{stable}, \ref{stab2}). By Propositions \ref{blowup} and \ref{correspondence}, $\Del^\pi$ defines a birational projective modification $f:{B}^\pi\to B$. The modification does not affect points with trivial stabilizers $B_-=X^-\setminus X$ and $Z^+\setminus Z$ (see Proposition \ref{construction}). This means that $(B^\pi)_-=B_-$ and $(B^\pi)_+=B_+$, and $B^\pi$ is a cobordism between $X$ and $Z$. Moreover $B^\pi$ admits a projective compactification $\overline{B^\pi}=B^\pi \cup X \cup Z$. The cobordism $B^\pi\subset \overline{B^\pi}$ admits a decomposition into elementary cobordisms $B^\pi_a$, defined by the strictly increasing function
$\chi_B$. Let $F\in {\mathcal C}((B_a^\pi)^{K^*})$ be a fixed point component
and $x\in F$ be a point.
By Proposition \ref{correspondence} the modification $f:{B^\pi}\to B$
is locally described
for a toric chart $\phi_\sigma:U\to X_\sigma$ by a smooth $\Gamma_\sigma $-equivariant morphism $\phi_{\Del^\sigma}: f^{-1}(U)\to X_{\Del^\sigma}$. Then by Lemma \ref{fixed3}, $\phi_{\Del^\sigma}(x)$ is in ${O}_\del$, where $\del\in\Del^\sigma$ is dependent and $\pi$-nonsingular.
In particular the cone $\sigma\in\Sigma$ is also dependent and $\Gamma_\sigma=K^*$. The toric chart $\phi_\sigma:U\to X_\sigma$ can be extended to a $K^*$-equivariant \'etale morphism $\psi_\sigma:U\to X_\sigma\times {\mathbb{A}}^r$, where the action of $\Gamma_\sigma=K^*$ on ${\mathbb{A}}^r$ is trivial. Moreover,
since the toric chart $\phi_\sigma$ is compatible with a divisor $D$, we can assume that all components of $D$ are described by some coordinates on $X_\sigma\times {\mathbb{A}}^r\simeq {\mathbb{A}}^n$ (see also Section \ref{stratification}). The morphism $\phi_{\Del^\sigma}$ determines
a $K^*$-equivariant \'etale morphism $\psi_{\Del^\sigma}: f^{-1}(U)\to X_{\Del^\sigma}\times {\mathbb{A}}^r$.
So we locally have a
$K^*$-equivariant \'etale morphism
$\psi_\del: V\to X_\del\times {\mathbb{A}}^r,$ where $V\subset \phi_{\Del^\sigma}^{-1}$ is an affine $K^*$-invariant subset of $B^\pi_a$. Similarly to Proposition \ref{local}, we get a diagram
\[\begin{array}{rccccc}
&{(B^\pi_a)}_-/K^*& \supset &{V_x}_-/K^*& \rightarrow
&{X_\del}_-/K^*\times {\mathbb{A}}^r\\
&\uparrow\psi_-&&\uparrow& &\uparrow \phi_{-}\\
&\Gamma({(B^\pi_a)}_-/K^*,{(B^\pi_a)}_+/K^*) &\supset &\Gamma({V_x}_-/K^*,{V_x}_+/K^*) & \rightarrow &\Gamma({X_\del}_-/K^*,{X_\del}_+/K^*) \times {\mathbb{A}}^r\\
& \downarrow\psi_+&& \downarrow & &\downarrow \phi_{+}\\
&{(B^\pi_a)}_+/K^*&\supset&{V_x}_+/K^*&\rightarrow & {X_\del}_+/K^*\times {\mathbb{A}}^r\\
\end{array}\]
with horizontal arrows \'etale induced by
$${(B^\pi_a)}//K^*\supset{V_x}//K^*\rightarrow {X_\del}//K^*\times {\mathbb{A}}^r.$$
Here $\Gamma(X_-/K^*,X_+/K^*)$ denotes the normalization of the graph of a birational map $X_-/K^*\dashrightarrow X_+/K^*$ for a relevant cobordism $X$.
We use functoriality of the graph (the dominating component of the fiber product $X_-/K^*\times_{X//K^*}X_+/K^*$).
By Lemma \ref{local} the morphisms $\phi_-$ and $\phi_+$ are blow-ups at smooth centers.
Thus $\psi_-$ and $\psi_+$ are locally blow-ups at smooth centers, so they are globally blow-ups at smooth centers.
The components of $D^\pi:=B^\pi\setminus (U\times K^*)$ are
either the strict transforms of components of $D$ on $B$ or the exceptional divisors of $B^\pi\to B$. In either case they correspond to toric divisors on $X_{\Del^\sigma}\times {\mathbb{A}}^r$ because of the compatibility of charts. Thus the divisor
$D_{a-}: =(D\cap (B_a)_-)/K^*=((B_a)_-/K^*)\,\setminus\, U$ corresponds to a toric divisor on a smooth toric variety $(X_\del)_-/K^*\times {\mathbb{A}}^r\simeq {\mathbb{A}}^{n-1}$. The center of the blow-up corresponds to a toric subvariety $O_\del=\{0\}\times {\mathbb{A}}^r\subset (X_\del)_-/K^*\times {\mathbb{A}}^r\simeq {\mathbb{A}}^{n-1}$.
This shows that the centers of the blow-ups have SNC with the complements of $U$.
Note that every $K^*$-equivariant automorphism of $B$ preserving the divisor $D=B\setminus U$ isomorphically transforms the strata, the relevant toric charts and the corresponding cones. This induces an automorphism of $\Sigma$ preserving canonical coordinates on the cones, and it lifts to the $\pi$-desingularization of $\Sigma$ and to the corresponding cobordism $B^\pi\subset\overline{B^\pi}$ constructed via diagrams (2) as in Proposition \ref{correspondence}. The relatively ample divisor for $\overline{B^\pi} \to X$ is a combination of the divisor $X\times\{\infty\}\subset \overline{B^\pi}$ and the exceptional divisors of the morphism $\overline{B^\pi} \to X\times \mathbb P^1$. Thus it is functorial, i.e., invariant with respect to the liftings of automorphisms of $X$ commuting with $X\dashrightarrow Y$, and defines a decomposition into open invariant subsets $B_a$ and the induced equivariant factorization.
\subsection{The Weak Factorization over an algebraically nonclosed base field.}
For any proper birational map $\phi: X\dashrightarrow Y$ over a field $K$ of characteristic zero consider the induced birational map $\phi_{\overline{K}}\colon X^{\overline{K}}:=X\times_{\operatorname{Spec}{K}}\operatorname{Spec}{\overline{K}}\dashrightarrow Y^{\overline{K}}:=Y\times_{\operatorname{Spec}{K}}\operatorname{Spec}{\overline{K}}$ over the algebraic closure $\overline{K}$ of $K$. The weak factorization
of $\phi_{\overline{K}}$ over $\overline{K}$ is ${\operatorname{Gal}}(\overline{K}/K)$-equivariant and defines the relevant weak factorization of $\phi$ over $K$.
\section{Introduction}\label{sec:intro}
\subsection*{Overview}
Let $X$ be a projective K3 surface, and let $\ensuremath{\mathbf v}$ be a primitive algebraic class in the Mukai lattice
with positive self-intersection $\ensuremath{\mathbf v}^2 > 0$ with respect to the Mukai pairing. For a generic
polarization $H$, the moduli space $M_H(\ensuremath{\mathbf v})$
of $H$-Gieseker stable sheaves is a projective holomorphic symplectic manifold (hyperk\"ahler
variety) deformation equivalent to Hilbert schemes of points on K3 surfaces.
The cone theorem and the minimal model program (MMP) induce a locally polyhedral chamber
decomposition of the movable cone of $M_H(\ensuremath{\mathbf v})$ (see \cite{HassettTschinkel:MovingCone}):
\begin{itemize*}
\item chambers correspond one-to-one to smooth $K$-trivial
birational models $\widetilde M \dashrightarrow M_H(\ensuremath{\mathbf v})$ of the moduli space, as the minimal model of the
pair $(M_H(\ensuremath{\mathbf v}), D)$ for any $D$ in the corresponding chamber, and
\item walls correspond to
extremal Mori contractions, as the canonical model of $(M_H(\ensuremath{\mathbf v}), D)$.
\end{itemize*}
It is a very interesting question to understand this chamber decomposition for general
hyperk\"ahler varieties \cite{HassettTschinkel:RationalCurves, HassettTschinkel:ExtremalRays,
HassettTschinkel:MovingCone}. It has arguably become even more important in light of
Verbitsky's recent proof \cite{Verbitsky:torelli} of a global Torelli statement:
two hyperk\"ahler varieties $X_1, X_2$ are isomorphic if and only if there exists an
isomorphism of integral Hodge structures $H^2(X_1) \to H^2(X_2)$ that is
induced by parallel transport in a family, and that maps the nef cone of $X_1$ to
the nef cone of $X_2$ (see also \cite{Huybrechts:torrelliafterVerbitsky, Eyal:survey}).
In addition, following the recent success \cite{BCHM} of MMP for the log-general case,
there has been enormous interest to relate MMPs for moduli spaces
to the underlying moduli problem; we refer to \cite{Maksym-David:survey} for a survey of the case of the
moduli space $\overline{M}_{g, n}$ of stable curves, known as the Hassett-Keel program. Ideally, one
would like a moduli interpretation for every chamber of the base locus decomposition of the movable
or effective cone.
On the other hand, in \cite{Bridgeland:K3} Bridgeland described a connected component
$\mathop{\mathrm{Stab}}\nolimits^\dag(X)$ of the
space of stability conditions on the derived category of $X$. He showed that
$M_H(\ensuremath{\mathbf v})$ can be recovered as the moduli space $M_\sigma(\ensuremath{\mathbf v})$ of $\sigma$-stable objects for
$\sigma \in \mathop{\mathrm{Stab}}\nolimits^\dag(X)$ near the ``large-volume limit''. The manifold
$\mathop{\mathrm{Stab}}\nolimits^\dag(X)$ admits a chamber decomposition, depending on $\ensuremath{\mathbf v}$, such that
\begin{itemize*}
\item for a chamber $\ensuremath{\mathcal C}$, the moduli space $M_\sigma(\ensuremath{\mathbf v}) =:M_\ensuremath{\mathcal C}(\ensuremath{\mathbf v})$ is independent of the choice of
$\sigma \in \ensuremath{\mathcal C}$, and
\item walls consist of stability conditions with strictly semistable objects of class $\ensuremath{\mathbf v}$.
\end{itemize*}
The main result of our article, Theorem \ref{thm:MAP}, relates these two
pictures directly. It shows that any MMP for the Gieseker moduli space (with movable boundary) can be
induced by wall-crossing for Bridgeland stability conditions, and so any minimal model has an
interpretation as a moduli space of Bridgeland-stable objects for some chamber.
In Theorem \ref{thm:nefcone}, we deduce the chamber decomposition of the movable
cone of $M_H(\ensuremath{\mathbf v})$ in terms of the Mukai lattice of $X$ from a description of the chamber
decomposition of $\mathop{\mathrm{Stab}}\nolimits^\dag(X)$, given by Theorem \ref{thm:walls}.
We also obtain the proof of a long-standing conjecture: the existence of a
birational Lagrangian fibration $M_H(\ensuremath{\mathbf v}) \dashrightarrow \ensuremath{\mathbb{P}}^n$ is equivalent to the existence of an integral
divisor class $D$ of square zero with respect to the Beauville-Bogomolov form, see Theorem
\ref{thm:SYZ}. We use birationality of wall-crossing and a Fourier-Mukai transform to reduce the conjecture to
the well-known case of a moduli space of torsion sheaves, studied in
\cite{Beauville:ACIS}. Further applications are mentioned below.
\subsection*{Birationality of wall-crossing and the map to the movable cone}
Let $\sigma, \tau \in \mathop{\mathrm{Stab}}\nolimits^\dag(X)$ be two stability conditions, and assume that they are
\emph{generic} with respect to $\ensuremath{\mathbf v}$. By \cite[Theorem 1.3]{BM:projectivity}, the moduli spaces
$M_{\sigma}(\ensuremath{\mathbf v})$ and $M_{\tau}(\ensuremath{\mathbf v})$ of stable objects $\ensuremath{\mathcal E} \in \mathrm{D}^{b}(X)$ with Mukai vector $\ensuremath{\mathbf v}(\ensuremath{\mathcal E})
= \ensuremath{\mathbf v}$ exist as smooth projective varieties. Choosing a path from $\sigma$ to $\tau$ in $\mathop{\mathrm{Stab}}\nolimits^\dag(X)$
relates them by a series of wall-crossings. Based on a detailed analysis of all
possible wall-crossings, we prove:
\begin{Thm} \label{thm:birational-WC}
Let $\sigma,\tau$ be generic stability conditions with respect to $\ensuremath{\mathbf v}$.
\begin{enumerate}
\item The two moduli spaces $M_\sigma(\ensuremath{\mathbf v})$ and $M_\tau(\ensuremath{\mathbf v})$ of Bridgeland-stable objects are
birational to each other.
\item \label{enum:birationalautoequivalence}
More precisely, there is a birational map induced by a derived (anti-)autoequivalence
$\Phi$ of $\mathrm{D}^{b}(X)$ in the following
sense: there exists a common open subset $U \subset M_\sigma(\ensuremath{\mathbf v})$, $U \subset M_\tau(\ensuremath{\mathbf v})$,
with complements of codimension at least two, such that
for any $u \in U$, the corresponding objects $\ensuremath{\mathcal E}_u \in M_\sigma(\ensuremath{\mathbf v})$ and
$\ensuremath{\mathcal F}_u \in M_\tau(\ensuremath{\mathbf v})$ are related via
$\ensuremath{\mathcal F}_u = \Phi(\ensuremath{\mathcal E}_u)$.
\end{enumerate}
\end{Thm}
An anti-autoequivalence is an equivalence from the opposite category $\mathrm{D}^{b}(X)^{\mathrm{op}}$ to
$\mathrm{D}^{b}(X)$, for example given by the local dualizing functor $\mathop{\mathbf{R}\mathcal Hom}\nolimits(\underline{\hphantom{A}}, \ensuremath{\mathcal O}_X)$.
As a consequence, we can canonically identify the N\'eron-Severi groups of $M_{\sigma}(\ensuremath{\mathbf v})$
and $M_\tau(\ensuremath{\mathbf v})$.
Now consider the chamber decomposition of $\mathop{\mathrm{Stab}}\nolimits^\dag(X)$
with respect to $\ensuremath{\mathbf v}$ as above, and let $\ensuremath{\mathcal C}$ be a chamber. The main result of \cite{BM:projectivity} gives a natural map
\begin{equation} \label{eq:ellCC}
\ell_\ensuremath{\mathcal C} \colon \ensuremath{\mathcal C} \to \mathop{\mathrm{NS}}\nolimits\left(M_{\ensuremath{\mathcal C}}(\ensuremath{\mathbf v})\right)
\end{equation}
to the N\'eron-Severi group of the moduli space, whose image is contained in the ample cone of
$M_{\ensuremath{\mathcal C}}(\ensuremath{\mathbf v})$. More technically stated, our main result describes the global behavior of this map:
\begin{Thm} \label{thm:MAP}
Fix a base point $\sigma \in \mathop{\mathrm{Stab}}\nolimits^\dag(X)$.
\begin{enumerate}
\item \label{enum:piecewise}
Under the identification of the N\'eron-Severi groups induced by the birational maps
of Theorem \ref{thm:birational-WC}, the maps $\ell_\ensuremath{\mathcal C}$ of \eqref{eq:ellCC} glue to a piecewise
analytic continuous map
\begin{equation} \label{eq:ell}
\ell \colon \mathop{\mathrm{Stab}}\nolimits^\dag(X) \to \mathop{\mathrm{NS}}\nolimits \left(M_\sigma(\ensuremath{\mathbf v})\right).
\end{equation}
\item \label{enum:imagemovable}
The image of $\ell$ is the intersection of the movable cone with the big cone of
$M_\sigma(\ensuremath{\mathbf v})$.
\item \label{enum:allMMP}
The map $\ell$ is compatible, in the sense that for any generic $\sigma' \in \mathop{\mathrm{Stab}}\nolimits^\dag(X)$, the
moduli space $M_{\sigma'}(\ensuremath{\mathbf v})$ is the birational model corresponding to $\ell(\sigma')$.
In particular, every smooth $K$-trivial birational model of $M_{\sigma}(\ensuremath{\mathbf v})$ appears as a moduli space $M_\ensuremath{\mathcal C}(\ensuremath{\mathbf v})$ of Bridgeland stable
objects for some chamber $\ensuremath{\mathcal C} \subset \mathop{\mathrm{Stab}}\nolimits^\dag(X)$.
\item \label{enum:AmpleCone} For a chamber $\ensuremath{\mathcal C} \subset \mathop{\mathrm{Stab}}\nolimits^\dag(X)$, we have $\ell(\ensuremath{\mathcal C})=\mathrm{Amp}(M_\ensuremath{\mathcal C}(\ensuremath{\mathbf v}))$.
\end{enumerate}
\end{Thm}
The image $\ell(\tau)$ of a stability condition $\tau$ is determined by its central charge; see
Theorem \ref{thm:ellandgroupaction} for a precise statement.
Claims \eqref{enum:imagemovable} and \eqref{enum:allMMP} are the precise version of our claim above
that MMP can be run via wall-crossing: any minimal model can be reached after wall-crossing as a
moduli space of stable objects. Extremal contractions arising as canonical models are given as
coarse moduli spaces for stability conditions on a wall.
\subsection*{Wall-crossing transformation}
Our second main result is Theorem \ref{thm:walls}. It determines
the location of walls in $\mathop{\mathrm{Stab}}\nolimits^\dag(X)$, and for each wall $\ensuremath{\mathcal W}$ it describes the associated
birational modification of the moduli space precisely. These descriptions are given purely
in terms of the algebraic Mukai lattice $H^*_\mathrm{alg}(X, \ensuremath{\mathbb{Z}})$ of $X$:
To each wall $\ensuremath{\mathcal W}$ we associate a rank two lattice $\ensuremath{\mathcal H}_\ensuremath{\mathcal W} \subset H^*_\mathrm{alg}(X, \ensuremath{\mathbb{Z}})$,
consisting of Mukai vectors whose central charges align for stability conditions on $\ensuremath{\mathcal W}$. Theorem
\ref{thm:walls} determines the birational wall-crossing behavior of $\ensuremath{\mathcal W}$ completely in terms of the
pair $(\ensuremath{\mathbf v}, \ensuremath{\mathcal H}_\ensuremath{\mathcal W})$. Rather than setting up the necessary notation here, we invite the
reader to jump directly to Section \ref{sec:hyperbolic} for the full statement.
The proof of Theorem \ref{thm:walls} takes up Sections \ref{sec:hyperbolic} to \ref{sec:flopping},
and can be considered the heart of this paper. The ingredients in the proof include
Harder-Narasimhan filtrations in families, a priori constraints on the geometry of birational
contractions of hyperk\"ahler varieties, and the essential fact that every
moduli space of stable objects on a K3 surface has expected dimension.
\subsection*{Fourier-Mukai transforms and birational moduli spaces}
The following result is a consequence of
Mukai-Orlov's Derived Torelli Theorem for K3 surfaces, a crucial Hodge-theoretic result by
Markman, and Theorem \ref{thm:birational-WC}. It
completes Mukai's program, started in \cite{Mukai:duality-Picard,Mukai:Fourier-moduli}, to understand birational maps between
moduli spaces of sheaves via Fourier-Mukai transforms. Following Mukai, consider
$H^*(X,\ensuremath{\mathbb{Z}})$ equipped with its weight two Hodge structure, polarized by the \emph{Mukai pairing}.
We write $\ensuremath{\mathbf v}^{\perp, \mathop{\mathrm{tr}}\nolimits} \subset H^*(X, \ensuremath{\mathbb{Z}})$ for the orthogonal complement of $\ensuremath{\mathbf v}$. By a result
of Yoshioka \cite{Yoshioka:Abelian}, $\ensuremath{\mathbf v}^{\perp, \mathop{\mathrm{tr}}\nolimits}$ and $H^2(M_H(\ensuremath{\mathbf v}), \ensuremath{\mathbb{Z}})$ are isomorphic as
Hodge structures; the Mukai pairing on $H^*(X, \ensuremath{\mathbb{Z}})$ gets identified with the
\emph{Beauville-Bogomolov} pairing on
$H^2(M_H(\ensuremath{\mathbf v}),\ensuremath{\mathbb{Z}})$.
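For concreteness, let us recall the standard conventions implicit above: the Mukai vector of an object $\ensuremath{\mathcal E}\in\mathrm{D}^{b}(X)$ is $\ensuremath{\mathbf v}(\ensuremath{\mathcal E})=\mathrm{ch}(\ensuremath{\mathcal E})\sqrt{\mathrm{td}(X)}=(\mathrm{rk}(\ensuremath{\mathcal E}),c_1(\ensuremath{\mathcal E}),\mathrm{ch}_2(\ensuremath{\mathcal E})+\mathrm{rk}(\ensuremath{\mathcal E}))$, and the Mukai pairing is given by
\[\bigl((r,c,s),(r',c',s')\bigr)=c\cdot c'-rs'-r's.\]
For example, the ideal sheaf $I_Z$ of a length-$n$ subscheme $Z\subset X$ has $\ensuremath{\mathbf v}(I_Z)=(1,0,1-n)$, so $\ensuremath{\mathbf v}^2=2n-2$ and $\mathop{\mathrm{dim}}\nolimits M_H(\ensuremath{\mathbf v})=\ensuremath{\mathbf v}^2+2=2n$, in accordance with $M_H(\ensuremath{\mathbf v})$ being the Hilbert scheme of $n$ points on $X$.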
\begin{Cor}\footnote{We will prove this and the following results more generally for moduli spaces of
Bridgeland-stable complexes.}\label{cor:Mukaifull}
Let $X$ and $X'$ be smooth projective K3 surfaces.
Let $\ensuremath{\mathbf v}\in H^*_{\mathrm{alg}}(X,\ensuremath{\mathbb{Z}})$ and $\ensuremath{\mathbf v}'\in H^*_{\mathrm{alg}}(X',\ensuremath{\mathbb{Z}})$ be primitive Mukai vectors.
Let $H$ (resp., $H'$) be a generic polarization with respect to $\ensuremath{\mathbf v}$ (resp., $\ensuremath{\mathbf v}'$).
Then the following statements are equivalent:
\begin{enumerate}
\item $M_H(\ensuremath{\mathbf v})$ is birational to $M_{H'}(\ensuremath{\mathbf v}')$. \label{enum:birationalmoduli}
\item The embedding $\ensuremath{\mathbf v}^{\perp, \mathop{\mathrm{tr}}\nolimits} \subset H^*(X, \ensuremath{\mathbb{Z}})$ of integral weight-two Hodge structures is
isomorphic to the embedding $\ensuremath{\mathbf v}'^{\perp, \mathop{\mathrm{tr}}\nolimits} \subset H^*(X', \ensuremath{\mathbb{Z}})$. \label{enum:Markmancrit}
\item There is an
(anti-)equivalence $\Phi$ from $\mathrm{D}^{b}(X)$ to $\mathrm{D}^{b}(X')$ with $\Phi_*(\ensuremath{\mathbf v})=\ensuremath{\mathbf v}'$.
\label{enum:equivnumerical}
\item There is an (anti-)equivalence $\Psi$ from $\mathrm{D}^{b}(X)$ to $\mathrm{D}^{b}(X')$ with $\Psi_*(\ensuremath{\mathbf v}) = \ensuremath{\mathbf v}'$ that
maps a generic object $E \in M_H(\ensuremath{\mathbf v})$ to an object $\Psi(E) \in M_{H'}(\ensuremath{\mathbf v}')$.
\label{enum:equivbirational}
\end{enumerate}
\end{Cor}
The equivalence $\eqref{enum:birationalmoduli} \Leftrightarrow \eqref{enum:Markmancrit}$ is
a special case of \cite[Corollary 9.9]{Eyal:survey}, which is based on Markman's description of the monodromy group
and Verbitsky's global Torelli theorem. We will only need the implication
$\eqref{enum:birationalmoduli} \Rightarrow \eqref{enum:Markmancrit}$, which is part of
earlier work by Markman: \cite[Theorem 1.10 and Theorem 1.14]{Eyal:integral} (when combined with the
fundamental result \cite[Corollary 2.7]{Huybrechts:Kaehlercone} that birational hyperk\"ahler
varieties have isomorphic cohomology).
By \cite{Toda:K3}, stability is an open property in families; thus $\Psi$ as in
\eqref{enum:equivbirational} directly induces a birational map
$M_H(\ensuremath{\mathbf v}) \dashrightarrow M_{H'}(\ensuremath{\mathbf v}')$; in particular, $\eqref{enum:equivbirational} \Rightarrow
\eqref{enum:birationalmoduli}$.
We will prove at the end of Section \ref{sec:MainThms} that
derived Torelli for K3 surfaces \cite{Orlov:representability} gives
$\eqref{enum:Markmancrit} \Rightarrow \eqref{enum:equivnumerical}$, and
that Theorem \ref{thm:birational-WC}
provides the missing implication $\eqref{enum:equivnumerical} \Rightarrow \eqref{enum:equivbirational}$.
Thus, in the case of moduli spaces of sheaves, we obtain a proof of Markman's version
\cite[Corollary 9.9]{Eyal:survey} of global Torelli independent of \cite{Verbitsky:torelli}.
\subsection*{Cones of curves and divisors}
As an application, we can use Theorems \ref{thm:MAP} and \ref{thm:walls} to determine the cones
of effective, movable, and nef divisors (and thus dually the Mori cone of curves) of
the moduli space $M_H(\ensuremath{\mathbf v})$ of $H$-Gieseker stable sheaves completely in terms of the algebraic Mukai lattice of $X$;
as an example we will state here our description of the nef cone.
Recall that we assume $\ensuremath{\mathbf v}$ primitive and $H$ generic; in particular, $M_H(\ensuremath{\mathbf v})$ is smooth.
Restricting the Hodge isomorphism of \cite{Yoshioka:Abelian} mentioned previously to the algebraic
part, we get an isometry
$\theta \colon \ensuremath{\mathbf v}^\perp \to \mathop{\mathrm{NS}}\nolimits(M_H(\ensuremath{\mathbf v}))$ of lattices, where $\ensuremath{\mathbf v}^\perp$ denotes the orthogonal complement
of $\ensuremath{\mathbf v}$ inside the algebraic Mukai lattice $H^*_{\alg}(X, \Z)$. (Equivalently,
$\ensuremath{\mathbf v}^{\perp} \subset \ensuremath{\mathbf v}^{\perp, \mathop{\mathrm{tr}}\nolimits}$ is the sublattice of $(1,1)$-classes with respect to the
induced Hodge structure on $\ensuremath{\mathbf v}^{\perp, \mathop{\mathrm{tr}}\nolimits}$.)
Let $\mathop{\mathrm{Pos}}(M_H(\ensuremath{\mathbf v}))$ denote the cone of
strictly positive classes $D$ with respect to the Beauville-Bogomolov pairing, satisfying $(D, D) > 0$
and $(A, D) > 0$ for a fixed ample class $A \in \mathop{\mathrm{NS}}\nolimits(M_H(\ensuremath{\mathbf v}))$.
We let $\overline{\mathop{\mathrm{Pos}}}(M_H(\ensuremath{\mathbf v}))$ denote its closure, and by abuse of language we call it the \emph{positive cone}.
\begin{repThm}{thm:nefcone}
Consider the chamber decomposition of the closed positive cone $\overline{\mathop{\mathrm{Pos}}}(M_H(\ensuremath{\mathbf v}))$ whose walls are given
by linear subspaces of the form
\[
\theta(\ensuremath{\mathbf v}^\perp \cap \ensuremath{\mathbf a}^\perp),
\]
for all $\ensuremath{\mathbf a} \in H^*_{\alg}(X, \Z)$ satisfying $\ensuremath{\mathbf a}^2 \ge -2$
and $0 \le (\ensuremath{\mathbf v}, \ensuremath{\mathbf a}) \le \frac{\ensuremath{\mathbf v}^2}2$. Then the nef cone of $M_H(\ensuremath{\mathbf v})$ is one of the chambers
of this chamber decomposition.
In other words, given an ample class $A \in \mathop{\mathrm{NS}}\nolimits(M_H(\ensuremath{\mathbf v}))$, a class $D \in \overline{\mathop{\mathrm{Pos}}}(M_H(\ensuremath{\mathbf v}))$ is
nef if and only if
$(D, \theta(\pm\ensuremath{\mathbf a})) \ge 0$ for all classes $\ensuremath{\mathbf a}$ as above and a choice of sign such that
$(A, \theta(\pm\ensuremath{\mathbf a})) > 0$.
\end{repThm}
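As a purely numerical illustration (we do not identify the induced contractions here): take $\ensuremath{\mathbf v}=(1,0,1-n)$, the Mukai vector of the Hilbert scheme of $n\ge2$ points, so that $\frac{\ensuremath{\mathbf v}^2}2=n-1$. Using the Mukai pairing $\bigl((r,c,s),(r',c',s')\bigr)=c\cdot c'-rs'-r's$, the class $\ensuremath{\mathbf a}=(0,0,-1)$ satisfies $\ensuremath{\mathbf a}^2=0\ge-2$ and $(\ensuremath{\mathbf v},\ensuremath{\mathbf a})=1$, and the class $\ensuremath{\mathbf a}=(1,0,0)$ satisfies $\ensuremath{\mathbf a}^2=0$ and $(\ensuremath{\mathbf v},\ensuremath{\mathbf a})=n-1$; both pairings lie in the allowed range $[0,n-1]$, so each class contributes a potential wall $\theta(\ensuremath{\mathbf v}^\perp\cap\ensuremath{\mathbf a}^\perp)$ of the chamber decomposition.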
We obtain similar descriptions of the movable and effective cone, see Section \ref{sec:cones}.
The intersection of the movable cone with the strictly positive cone has been described
by Markman for any hyperk\"ahler variety \cite[Lemma 6.22]{Eyal:survey};
the pseudo-effective cone can also easily be deduced from his results.
Our method gives an alternative wall-crossing proof, and in
addition a description of the boundary, based on the proof of the Lagrangian fibration
conjecture discussed below.
However, there was no known description of the nef cone except for specific examples, even in
the case of the Hilbert scheme of points.
A general conjecture by Hassett and Tschinkel, \cite[Thesis 1.1]{HassettTschinkel:ExtremalRays},
suggested that the nef cone (or dually, its Mori cone) of a hyperk\"ahler variety $M$
depends only on the lattice of algebraic cycles in $H_2(M,\ensuremath{\mathbb{Z}})$. In small dimension, their conjecture
has been verified in \cite{HassettTschinkel:RationalCurves, HassettTschinkel:MovingCone, HassettTschinkel:ExtremalRays, HHT:ProjectiveSpaces,BakkerJorza:LagrangianK34}.
The original conjecture turned out to be incorrect, already for Hilbert schemes (see \cite[Remark 10.4]{BM:projectivity} and \cite[Remark 8.10]{KnutsenCiliberto}).
However, Theorem \ref{thm:nefcone} is in fact very closely related to the Hassett-Tschinkel Conjecture: we will explain this precisely in Section \ref{sec:cones}, in particular Proposition \ref{prop:RelationHT} and Remark \ref{rmk:RelationHT}.
In Section \ref{sec:examples}, we give many explicit examples of nef and movable cones.
Using deformation techniques, Theorem \ref{thm:nefcone} and Proposition \ref{prop:RelationHT}
have now been extended to all hyperk\"ahler varieties of the same deformation
type, see \cite{Mori-cones, Mongardi:note}.
\subsection*{Existence of Lagrangian fibrations}
The geometry of a hyperk\"ahler variety $M$ is particularly rigid. For example,
Matsushita proved in \cite{Matsushita:Addendum} that any map $f \colon M \to Y$ with connected
fibers and $\mathop{\mathrm{dim}}\nolimits(Y) < \mathop{\mathrm{dim}}\nolimits(M)$ is a Lagrangian fibration; further, Hwang proved in
\cite{Hwang:fibrationsbase} that if $Y$ is smooth, it must be isomorphic to a projective space.
It becomes a natural question to ask when such a fibration exists, or when
it exists birationally.
According to a long-standing conjecture, this can be detected purely in terms of the quadratic
Beauville-Bogomolov form on the N\'eron-Severi group of $M$:
\begin{Con}[Tyurin-Bogomolov-Hassett-Tschinkel-Huybrechts-Sawon]\label{conj:SYZ}
Let $M$ be a compact hyperk\"ahler manifold of dimension $2m$, and let $q$ denote its
Beauville-Bogomolov form.
\begin{enumerate}[label={(\alph*)}]
\item \label{enum:birational}
There exists an integral divisor class $D$ with $q(D) = 0$ if and only if
there exists a birational hyperk\"ahler manifold $M'$ admitting a Lagrangian fibration.
\item \label{enum:nef}
If in addition, $M$ admits a \emph{nef} integral primitive divisor class $D$ with $q(D) = 0$, then
there exists a Lagrangian fibration $f \colon M \to \ensuremath{\mathbb{P}}^m$ induced by the complete linear system of $D$.
\end{enumerate}
\end{Con}
In the literature, it was first suggested by Hassett-Tschinkel in
\cite{HassettTschinkel:RationalCurves} for symplectic
fourfolds, and, independently, by Huybrechts \cite{GrossHuybrechtsJoyce} and Sawon
\cite{Sawon:AbelianFibred} in general; see \cite{Verbitsky:HyperkaehlerSYZ} for more remarks on the
history of the Conjecture.
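The two-dimensional case $m=1$ serves as a classical sanity check: a hyperk\"ahler manifold of dimension $2$ is a K3 surface, $q$ is the intersection form, and by classical K3 theory the existence of a nonzero integral class $D$ with $D^2=0$ is equivalent to the existence of an elliptic fibration $M\to\ensuremath{\mathbb{P}}^1$, i.e., a Lagrangian fibration: a square-zero class can be made nef after reflections in $(-2)$-classes and a change of sign, and the associated primitive nef class then induces an elliptic pencil.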
\begin{Thm}\label{thm:SYZ}
Let $X$ be a smooth projective K3 surface.
Let $\ensuremath{\mathbf v}\in H_{\mathrm{alg}}^*(X,\ensuremath{\mathbb{Z}})$ be a primitive Mukai vector with $\ensuremath{\mathbf v}^2>0$ and let
$H$ be a generic polarization with respect to $\ensuremath{\mathbf v}$.
Then Conjecture \ref{conj:SYZ} holds for the moduli space $M_H(\ensuremath{\mathbf v})$ of $H$-Gieseker stable sheaves.
\end{Thm}
The basic idea of our proof is the following: as we recalled above, the
N\'eron-Severi group of $M_H(\ensuremath{\mathbf v})$, along with its Beauville-Bogomolov form, is isomorphic to the
orthogonal
complement $\ensuremath{\mathbf v}^\perp \subset H^*_{\alg}(X, \Z)$ of $\ensuremath{\mathbf v}$ in the algebraic Mukai lattice of $X$, along with the restriction
of the Mukai pairing. The existence of an integral divisor $D = c_1(L)$ with $q(D) = 0$ is thus
equivalent to the existence of an isotropic class $\ensuremath{\mathbf w} \in \ensuremath{\mathbf v}^\perp$: a class with $(\ensuremath{\mathbf w}, \ensuremath{\mathbf w}) = 0$
and $(\ensuremath{\mathbf v}, \ensuremath{\mathbf w}) = 0$. The moduli space $Y = M_H(\ensuremath{\mathbf w})$ is a smooth K3 surface, and the associated Fourier-Mukai
transform $\Phi$ sends sheaves of class $\ensuremath{\mathbf v}$ on $X$ to complexes of rank 0 on $Y$. While these
complexes on $Y$ are typically not sheaves---not even for a generic object in $M_H(\ensuremath{\mathbf v})$---
we can arrange them to be Bridgeland-stable complexes with respect to a Bridgeland-stability
condition $\tau$ on $\mathrm{D}^{b}(Y)$. We then deform $\tau$ along a path with endpoint $\tau'$, such that
$\tau'$-stable complexes of class $\Phi_*(\ensuremath{\mathbf v})$ are Gieseker stable sheaves, necessarily of rank zero.
In other words, the Bridgeland-moduli space $M_{\tau'}(\Phi_*(\ensuremath{\mathbf v}))$ is a moduli space of sheaves
$\ensuremath{\mathcal F}$ with support $\abs{\ensuremath{\mathcal F}}$ on a curve of fixed degree. The map $\ensuremath{\mathcal F} \mapsto
\abs{\ensuremath{\mathcal F}}$ defines a map from $M_{\tau'}(\Phi_*(\ensuremath{\mathbf v}))$ to the linear system of the associated curve;
this map is a Lagrangian fibration, known as the \emph{Beauville integrable system}. On the other hand, birationality of wall-crossing shows that
$M_{\tau}(\Phi_*(\ensuremath{\mathbf v})) = M_H(\ensuremath{\mathbf v})$ is birational to $M_{\tau'}(\Phi_*(\ensuremath{\mathbf v}))$.
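To make the lattice-theoretic step concrete, here is a standard example, included purely for illustration. Let $X$ be a K3 surface with $\mathop{\mathrm{NS}}\nolimits(X)=\ensuremath{\mathbb{Z}}\cdot H$ and $H^2 = 2d$, and let $\ensuremath{\mathbf v} = (1,0,1-n)$, so that $M_H(\ensuremath{\mathbf v})$ is the Hilbert scheme of $n$ points on $X$. If $n = d+1$, then the class $\ensuremath{\mathbf w} = (1, H, d)$ satisfies
\[
\ensuremath{\mathbf w}^2 = H^2 - 2d = 0
\quad \text{and} \quad
(\ensuremath{\mathbf v}, \ensuremath{\mathbf w}) = -d - (1-n) = 0,
\]
so $\ensuremath{\mathbf w}$ is an isotropic class in $\ensuremath{\mathbf v}^\perp$, and the construction sketched above applies with $Y = M_H(\ensuremath{\mathbf w})$.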
The idea to use a Fourier-Mukai transform to prove Conjecture \ref{conj:SYZ} was used previously by
Markushevich \cite{Markushevich:Lagrangian} and Sawon \cite{Sawon:LagrangianFibrations} for a specific
family of Hilbert schemes on K3 surfaces of Picard rank one. Under their assumptions,
the Fourier-Mukai transform of an ideal sheaf is a stable torsion sheaf;
in our approach, the birationality of wall-crossing makes such a claim unnecessary.
\begin{Rem}
By \cite{EyalSukhendu:Density}, Hilbert schemes of $n$ points on projective K3 surfaces are dense in the moduli space of hyperk\"ahler varieties of $K3^{[n]}$-type.
Conjecture \ref{conj:SYZ} has been proved independently by Markman
\cite{Eyal:LagrangianFibration} for a very general hyperk\"ahler variety $M$ of $K3^{[n]}$-type;
more specifically, under
the assumption that $H^{2,0}(M)\oplus H^{0,2}(M)$ does not contain any integral class.
His proof is completely different from ours: it is based on Verbitsky's
Torelli Theorem and on a purely lattice-theoretic way to associate a K3 surface to such
hyperk\"ahler manifolds with a square-zero divisor class.
These results have been extended by
Matsushita to any variety of $K3^{[n]}$-type \cite{Matsushita:isotropic}.
\end{Rem}
\subsection*{Geometry of flopping contractions}
As mentioned previously, every extremal contraction of $M_H(\ensuremath{\mathbf v})$ is induced by a wall
in the space of Bridgeland stability conditions. In Section
\ref{sec:floppingeometry}, we explain how basic geometric properties of
flopping contractions are also determined via the associated lattice-theoretic wall-crossing data;
this adds geometric content to Theorem \ref{thm:walls}.
We obtain examples where the exceptional locus
has either arbitrarily many connected components, or arbitrarily many irreducible components all
intersecting in one point.
\subsection*{Strange Duality}
In Section \ref{sec:SD} we apply Theorem \ref{thm:SYZ} to study Le
Potier's Strange Duality, in the case where one of the two classes involved has square zero.
We give sufficient criteria for strange duality to hold, which are determined by wall-crossing, and
which are necessary in examples.
\subsection*{Generality}
In the introduction, we have stated most results for Gieseker moduli spaces $M_H(\ensuremath{\mathbf v})$.
In fact, we will work throughout more generally with moduli spaces $M_\sigma(\ensuremath{\mathbf v})$ of Bridgeland stable objects
on a twisted K3 surface $(X, \alpha)$, where $\alpha$ is a Brauer class, and all results will be proved
in that generality.
\subsection*{Relation to previous work on wall-crossing}
Various authors have previously studied examples of the relation between wall-crossing and the
birational geometry of the moduli space induced by the chamber decomposition of its cone of movable
divisors: the first examples (for moduli of torsion sheaves on $K$-trivial surfaces) were studied in
\cite{Aaron-Daniele}, and moduli on abelian surfaces were considered (in varying generality) in
\cite{MaciociaMeachan, Maciocia:walls, Minamide-Yanagida-Yoshioka:wall-crossing, MYY2,
YY:abeliansurfaces, Yoshioka:cones}.
Several of our results have analogues for abelian surfaces that have been obtained
previously by Yoshioka, or by Minamide, Yanagida, and Yoshioka:
the birationality of wall-crossing has been established in
\cite[Theorem 4.3.1]{Minamide-Yanagida-Yoshioka:wall-crossing};
the ample cone of the moduli spaces is described in \cite[Section 4.3]{MYY2};
statements related to Theorem \ref{thm:MAP} can be found in \cite{Yoshioka:cones};
an analogue of Corollary \ref{cor:Mukaifull} is contained in \cite[Theorem
0.1]{Yoshioka:FM_abelian_surfaces}; and Conjecture \ref{conj:SYZ} is proved in
\cite[Proposition 3.4 and Corollary 3.5]{Yoshioka:FM_abelian_surfaces} with the same basic
approach.
The crucial difference between abelian surfaces and K3 surfaces is the existence of spherical objects
on the latter. They are responsible for the existence of \emph{totally semistable walls} (walls for
which there are no strictly stable objects) that are harder to control; in particular, these can
correspond to any possible type of birational transformation (isomorphism, divisorial contraction,
flop). The spherical classes are the main reason our wall-crossing analysis in Sections
\ref{sec:hyperbolic}--\ref{sec:flopping} is fairly involved.
A somewhat different behavior was established in \cite{ABCH:MMP} in many cases for the Hilbert
scheme of points on $\ensuremath{\mathbb{P}}^2$ (extended to torsion-free sheaves in \cite{Huizenga:P2, AaronAndStudents:P2}, and to
Hirzebruch surfaces in \cite{Aaron-Izzet:pointsonsurfaces}): the authors show that the chamber
decomposition in the space of stability conditions corresponds to the base locus decomposition of
the \emph{effective} cone. In particular, while the map $\ell_\ensuremath{\mathcal C}$ of equation \eqref{eq:ellCC}
exists similarly in their situation, it will behave differently across walls corresponding to a
divisorial contraction: in our case, the map ``bounces back'' into the ample cone, while in their
case, it will extend across the wall.
\subsection*{Acknowledgments}
Conversations with Ralf Schiffler dissuaded us from pursuing a failed approach to the birationality
of wall-crossing, and we had extremely useful discussions with Daniel Huybrechts. Tom Bridgeland
pointed us towards Corollary \ref{cor:Mukaifull}, and Dragos Oprea towards the results in Section
\ref{sec:SD}. We also received helpful comments from Daniel Greb, Antony Maciocia, Alina Marian,
Eyal Markman, Dimitri Markushevich, Daisuke Matsushita, Ciaran Meachan, Misha Verbitsky, K\=ota
Yoshioka, and Ziyu Zhang; we would like to thank all of them. We would also like to thank the
referee very much for an extremely careful reading of the paper, and for many useful suggestions.
The authors were visiting the Max-Planck-Institut Bonn, respectively the
Hausdorff Center for Mathematics in Bonn, while working on this paper, and would like to thank
both institutes for their hospitality and stimulating atmosphere.
A.~B.~ is partially supported by NSF grant DMS-1101377.
E.~M.~ is partially supported by NSF grant DMS-1001482/DMS-1160466, Hausdorff Center for
Mathematics, Bonn, and by SFB/TR 45.
\subsection*{Notation and Convention}
For an abelian group $G$ and a field $k(=\ensuremath{\mathbb{Q}},\ensuremath{\mathbb{R}},\ensuremath{\mathbb{C}})$, we denote by $G_k$ the $k$-vector space $G\otimes k$.
Throughout the paper, $X$ will be a smooth projective K3 surface over the complex numbers. We refer
to Section \ref{sec:ReviewK3s} for all notations specific to K3 surfaces.
We will abuse notation and usually denote all derived functors as if they were underived. We write the
dualizing functor as $(\underline{\hphantom{A}})^\vee = \mathop{\mathbf{R}\mathcal Hom}\nolimits(\underline{\hphantom{A}}, \ensuremath{\mathcal O}_X)$.
The skyscraper sheaf at a point $x\in X$ is denoted by $k(x)$.
For a complex number $z\in\mathbb{C}$, we denote its real and imaginary part by $\Re z$ and $\Im z$, respectively.
By a \emph{simple object} in an abelian category we mean an object that has no non-trivial subobjects.
Recall that an object $S$ in a K3 category is spherical if $\mathop{\mathrm{Hom}}\nolimits^\bullet(S, S) = \ensuremath{\mathbb{C}} \oplus \ensuremath{\mathbb{C}}[-2]$.
We denote the associated spherical twist at $S$ by $\mathop{\mathrm{ST}}\nolimits_S(\underline{\hphantom{A}})$; it is defined
\cite{Mukai:BundlesK3, Seidel-Thomas:braid} by the exact triangle
\[
\mathop{\mathrm{Hom}}\nolimits^\bullet(S, E) \otimes S \to E \to \mathop{\mathrm{ST}}\nolimits_S(E).
\]
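As a standard example: $\ensuremath{\mathcal O}_X$ is spherical, since $\mathop{\mathrm{Hom}}\nolimits^\bullet(\ensuremath{\mathcal O}_X, \ensuremath{\mathcal O}_X) = H^\bullet(X, \ensuremath{\mathcal O}_X) = \ensuremath{\mathbb{C}} \oplus \ensuremath{\mathbb{C}}[-2]$. For a skyscraper sheaf $k(x)$ we have $\mathop{\mathrm{Hom}}\nolimits^\bullet(\ensuremath{\mathcal O}_X, k(x)) = \ensuremath{\mathbb{C}}$, so the defining triangle becomes
\[
\ensuremath{\mathcal O}_X \to k(x) \to \mathop{\mathrm{ST}}\nolimits_{\ensuremath{\mathcal O}_X}(k(x)),
\]
and hence $\mathop{\mathrm{ST}}\nolimits_{\ensuremath{\mathcal O}_X}(k(x)) \cong I_x[1]$, the shifted ideal sheaf of the point $x$.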
We will write \emph{stable} (in italics) whenever we are considering strictly stable objects in a
context allowing strictly semistable objects: for a non-generic stability condition, or for
objects with non-primitive Mukai vector.
\section{Review: derived categories of K3 surfaces, stability conditions, moduli spaces}\label{sec:ReviewK3s}
In this section, we give a review of stability conditions on K3 surfaces and their moduli spaces of
stable complexes. The main references are \cite{Bridgeland:Stab, Bridgeland:K3, Toda:K3, Yoshioka:Abelian,
BM:projectivity}.
\subsection*{Bridgeland stability conditions}
Let $\ensuremath{\mathcal D}$ be a triangulated category.
\begin{Def}\label{def:slicing}
A slicing $\ensuremath{\mathcal P}$ of the category $\ensuremath{\mathcal D}$ is a collection of full extension-closed subcategories $\ensuremath{\mathcal P}(\phi)$ for $\phi \in \ensuremath{\mathbb{R}}$ with the following properties:
\begin{enumerate}
\item $\ensuremath{\mathcal P}(\phi + 1) = \ensuremath{\mathcal P}(\phi)[1]$.
\item If $\phi_1 > \phi_2$, then $\mathop{\mathrm{Hom}}\nolimits(\ensuremath{\mathcal P}(\phi_1), \ensuremath{\mathcal P}(\phi_2)) = 0$.
\item For any $E \in \ensuremath{\mathcal D}$, there exists a collection of real numbers $\phi_1 > \phi_2 > \dots >
\phi_n$ and a sequence of triangles
\begin{equation} \label{eq:HN-filt}
\TFILTB E A n
\end{equation}
with $A_i \in \ensuremath{\mathcal P}(\phi_i)$.
\end{enumerate}
\end{Def}
The collection of exact triangles in \eqref{eq:HN-filt} is called the \emph{Harder-Narasimhan (HN) filtration}
of $E$.
Each subcategory $\ensuremath{\mathcal P}(\phi)$ is extension-closed and abelian.
Its nonzero objects are called semistable of phase $\phi$, and its simple objects are called stable.
We will write $\phi_{\mathop{\mathrm{min}}\nolimits}(E) := \phi_n$ and $\phi_{\max}(E) := \phi_1$.
By $\ensuremath{\mathcal P}(\phi -1, \phi]$ we denote the full subcategory of objects with $\phi_{\mathop{\mathrm{min}}\nolimits}(E) > \phi -1$ and $\phi_{\max}(E) \le \phi$.
This is the heart of a bounded t-structure $(\ensuremath{\mathcal D}^{\le 0}, \ensuremath{\mathcal D}^{\ge 0})$ given by
\[ \ensuremath{\mathcal D}^{\le 0} = \ensuremath{\mathcal P}(>\phi-1) = \{E \in \ensuremath{\mathcal D}\colon \phi_{\mathop{\mathrm{min}}\nolimits} > \phi-1\}
\quad \text{and} \quad
\ensuremath{\mathcal D}^{\ge 0} = \ensuremath{\mathcal P}(\le\phi) = \{E \in \ensuremath{\mathcal D}\colon \phi_{\max} \le \phi\}.\]
Let us fix a lattice of finite rank $\Lambda$ and a surjective map $\ensuremath{\mathbf v}\colon K(\ensuremath{\mathcal D})\ensuremath{\twoheadrightarrow}\Lambda$.
\begin{Def}[{\cite{Bridgeland:Stab, Kontsevich-Soibelman:stability}}]
\label{def:Bridgeland}
A \emph{Bridgeland stability condition} on $\ensuremath{\mathcal D}$ is a pair $(Z,\ensuremath{\mathcal P})$, where
\begin{itemize}
\item $Z\colon\Lambda\to\ensuremath{\mathbb{C}}$ is a group homomorphism, and
\item $\ensuremath{\mathcal P}$ is a slicing of $\ensuremath{\mathcal D}$,
\end{itemize}
satisfying the following compatibilities:
\begin{enumerate}
\item $\frac{1}{\pi}\arg Z(\ensuremath{\mathbf v}(E))=\phi$, for all non-zero $E\in\ensuremath{\mathcal P}(\phi)$;
\item given a norm $\|\underline{\hphantom{A}}\|$ on $\Lambda_\ensuremath{\mathbb{R}}$, there exists a constant $C>0$ such that
\[
|Z(\ensuremath{\mathbf v}(E))| \geq C \| \ensuremath{\mathbf v}(E) \|,
\]
for all $E$ that are semistable with respect to $\ensuremath{\mathcal P}$.
\end{enumerate}
\end{Def}
We will write $Z(E)$ instead of $Z(\ensuremath{\mathbf v}(E))$ from now on.
A stability condition is called \emph{algebraic} if $\mathop{\mathrm{Im}}\nolimits(Z)\subset\ensuremath{\mathbb{Q}}\oplus\ensuremath{\mathbb{Q}}\sqrt{-1}$.
The main theorem in \cite{Bridgeland:Stab} shows that the set $\mathop{\mathrm{Stab}}\nolimits(\ensuremath{\mathcal D})$ of stability conditions
on $\ensuremath{\mathcal D}$ is a complex manifold; its dimension equals the rank of $\Lambda$.
\begin{Rem}[{\cite[Lemma 8.2]{Bridgeland:Stab}}]
\label{rmk:GroupAction}
There are two group actions on $\mathop{\mathrm{Stab}}\nolimits(\ensuremath{\mathcal D})$.
The group $\mathop{\mathrm{Aut}}\nolimits(\ensuremath{\mathcal D})$ of autoequivalences acts on the left by
$\Phi_*(Z,\ensuremath{\mathcal P})=(Z\circ\Phi_*^{-1},\Phi(\ensuremath{\mathcal P}))$, where $\Phi \in \mathop{\mathrm{Aut}}\nolimits(\ensuremath{\mathcal D})$ and $\Phi_*$ also denotes the push-forward on the
K-group. The universal cover $\widetilde{\mathop{\mathrm{GL}}}^+_2(\ensuremath{\mathbb{R}})$ of
matrices in $\mathop{\mathrm{GL}}_2(\ensuremath{\mathbb{R}})$ with positive determinant acts on the right, lifting the action of
$\mathop{\mathrm{GL}}_2(\ensuremath{\mathbb{R}})$ on $\mathop{\mathrm{Hom}}\nolimits(K(\ensuremath{\mathcal D}), \ensuremath{\mathbb{C}}) = \mathop{\mathrm{Hom}}\nolimits(K(\ensuremath{\mathcal D}), \ensuremath{\mathbb{R}}^2)$.
\end{Rem}
\subsection*{Twisted K3 surfaces}
Let $X$ be a smooth K3 surface.
The (cohomological) \emph{Brauer group} $\mathrm{Br}(X)$ is the torsion part of the cohomology group $H^2(X,\ensuremath{\mathcal O}_X^*)$ in the analytic topology.
\begin{Def}\label{def:twisted}
Let $\alpha\in\mathrm{Br}(X)$.
The pair $(X,\alpha)$ is called a \emph{twisted K3 surface}.
\end{Def}
Since $H^3(X,\ensuremath{\mathbb{Z}})=0$, there exists a \emph{B-field lift} $\beta_0\in H^2(X,\ensuremath{\mathbb{Q}})$ such that $\alpha = e^{\beta_0}$.
We will always tacitly fix such a B-field lift, as well as a \v{C}ech representative $\alpha_{ijk}\in\Gamma(U_i\cap U_j\cap U_k,\mathcal{O}^*_X)$ on an open analytic cover $\{U_i\}$ of $X$;
see \cite[Section 1]{HuybrechtsStellari:Twisted} for a discussion about these issues.
\begin{Def}\label{def:TwistedSheaf}
An \emph{$\alpha$-twisted coherent sheaf}
$F$ consists of a collection $(\{F_i\},\{\varphi_{ij}\})$, where $F_i$ is a coherent sheaf on $U_i$
and $\varphi_{ij}\colon F_j|_{U_i\cap U_j}\to F_i|_{U_i\cap U_j}$ is an isomorphism, such that:
\[
\varphi_{ii}=\mathrm{id};\quad
\varphi_{ji}=\varphi_{ij}^{-1};\quad
\varphi_{ij}\circ\varphi_{jk}\circ\varphi_{ki}=\alpha_{ijk}\cdot\mathrm{id}.
\]
\end{Def}
We denote by $\mathop{\mathrm{Coh}}\nolimits(X,\alpha)$ the category of $\alpha$-twisted coherent sheaves on $X$, and by $\mathrm{D}^{b}(X,\alpha)$ its bounded derived category.
We refer to \cite{Caldararu:Thesis, HuybrechtsStellari:Twisted, Yoshioka:TwistedStability, Lieblich:Twisted} for basic facts about twisted sheaves on K3 surfaces.
In \cite[Section 1]{HuybrechtsStellari:Twisted}, the authors define a twisted Chern character by
\[
\mathop{\mathrm{ch}}\nolimits\colon K(\mathrm{D}^{b}(X,\alpha)) \to H^*(X,\ensuremath{\mathbb{Q}}), \quad \mathop{\mathrm{ch}}\nolimits(\underline{\hphantom{A}}) = e^{\beta_0} \cdot
\mathop{\mathrm{ch}}\nolimits^{\mathrm{top}}(\underline{\hphantom{A}}),
\]
where $\mathop{\mathrm{ch}}\nolimits^{\mathrm{top}}$ is the topological Chern character.
By \cite[Proposition 1.2]{HuybrechtsStellari:Twisted}, we have
\[
\mathop{\mathrm{ch}}\nolimits(\underline{\hphantom{A}}) \in \left[ e^{\beta_0} \cdot \left( H^0(X,\ensuremath{\mathbb{Q}})\oplus\mathrm{NS}(X)_\ensuremath{\mathbb{Q}}\oplus H^4(X,\ensuremath{\mathbb{Q}}) \right)\right] \cap H^*(X,\ensuremath{\mathbb{Z}}).
\]
\begin{Rem}\label{rmk:TwistedHodgeStructure}
Let $H^*(X,\alpha,\ensuremath{\mathbb{Z}}):= H^*(X,\ensuremath{\mathbb{Z}})$.
In \cite{HuybrechtsStellari:Twisted}, the authors define a weight-2 Hodge structure on the whole cohomology $H^*(X,\alpha,\ensuremath{\mathbb{Z}})$
with
\[
H^{2,0}(X, \alpha, \ensuremath{\mathbb{C}}) := e^{\beta_0} \cdot H^{2,0}(X, \ensuremath{\mathbb{C}}).
\]
We denote by
\[
H^*_{\alg}(X,\alpha, \Z) := H^{1,1}(X,\alpha,\ensuremath{\mathbb{C}}) \cap H^*(X,\ensuremath{\mathbb{Z}})
\]
its $(1,1)$-integral part.
It coincides with the image of the twisted Chern character.
When $\alpha=1$, this reduces to the familiar definition
$H^*_{\mathrm{alg}}(X,\ensuremath{\mathbb{Z}}) = H^0(X,\ensuremath{\mathbb{Z}})\oplus\mathrm{NS}(X)\oplus H^4(X,\ensuremath{\mathbb{Z}}).$
\end{Rem}
\subsection*{The algebraic Mukai lattice}
Let $(X,\alpha)$ be a twisted K3 surface.
\begin{Def}\label{def:MukaiLattice}
\begin{enumerate*}
\item We denote by $\ensuremath{\mathbf v} \colon K(\mathrm{D}^{b}(X,\alpha)) \to H^*_{\mathrm{alg}}(X, \alpha, \ensuremath{\mathbb{Z}})$ the \emph{Mukai vector}
\[
\ensuremath{\mathbf v}(E) := \mathop{\mathrm{ch}}\nolimits(E) \sqrt{\mathop{\mathrm{td}}\nolimits(X)}.
\]
\item The \emph{Mukai pairing} $(\underline{\hphantom{A}}, \underline{\hphantom{A}})$ is defined on $H^*_{\mathrm{alg}}(X, \alpha, \ensuremath{\mathbb{Z}})$ by
\[
((r,c,s),(r',c',s')) := cc' - rs'-sr' \in \ensuremath{\mathbb{Z}}.
\]
It is an even pairing of signature $(2,\rho(X))$, satisfying $-(\ensuremath{\mathbf v}(E), \ensuremath{\mathbf v}(F)) = \chi(E, F) =
\sum_i (-1)^i \mathop{\mathrm{ext}}\nolimits^i(E, F)$ for all $E,F\in \mathrm{D}^{b}(X,\alpha)$.
\item The \emph{algebraic Mukai lattice} is defined to be the pair $\left(H^*_{\mathrm{alg}}(X, \alpha, \ensuremath{\mathbb{Z}}), (\underline{\hphantom{A}},\underline{\hphantom{A}})\right)$.
\end{enumerate*}
\end{Def}
Recall that an embedding $i\colon V\to L$ of a lattice $V$ into a lattice $L$ is \emph{primitive} if $L/i(V)$ is a free abelian group.
In particular, we call a non-zero vector $\ensuremath{\mathbf v}\in H^*_{\mathrm{alg}}(X, \alpha, \ensuremath{\mathbb{Z}})$ \emph{primitive} if it is not divisible in $H^*_{\mathrm{alg}}(X, \alpha, \ensuremath{\mathbb{Z}})$.
Throughout the paper $\ensuremath{\mathbf v}$ will often denote a primitive class with $\ensuremath{\mathbf v}^2 > 0$.
Given a Mukai vector $\ensuremath{\mathbf v}\in H^*_{\mathrm{alg}}(X, \alpha, \ensuremath{\mathbb{Z}})$, we denote its orthogonal complement by
$\ensuremath{\mathbf v}^\perp$.
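As a standard illustration of these definitions in the untwisted case $\alpha = 1$: since $\sqrt{\mathop{\mathrm{td}}\nolimits(X)} = (1,0,1)$, we have $\ensuremath{\mathbf v}(\ensuremath{\mathcal O}_X) = (1,0,1)$ and $\ensuremath{\mathbf v}(k(x)) = (0,0,1)$ for a point $x\in X$, while the ideal sheaf $I_Z$ of a length-$n$ subscheme $Z\subset X$ has
\[
\ensuremath{\mathbf v}(I_Z) = (1,0,1-n), \qquad \ensuremath{\mathbf v}(I_Z)^2 = -2\cdot 1\cdot(1-n) = 2n-2.
\]
In particular, $\ensuremath{\mathbf v}(I_Z)$ is a primitive class with $\ensuremath{\mathbf v}^2>0$ as soon as $n\geq 2$.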
\subsection*{Stability conditions on K3 surfaces}
Let $(X,\alpha)$ be a twisted K3 surface. We remind the reader that this includes
fixing a B-field lift $\beta_0$ of the Brauer class $\alpha$.
\begin{Def}
A (full, numerical) \emph{stability condition} on $(X,\alpha)$ is a Bridgeland stability condition
on $\mathrm{D}^{b}(X,\alpha)$, whose lattice $\Lambda$ is given by the Mukai lattice $H^*_{\alg}(X,\alpha, \Z)$.
\end{Def}
In \cite{Bridgeland:K3}, Bridgeland describes a connected component of the space of numerical stability
conditions on $X$. These results have been extended to twisted K3 surfaces in \cite{HMS:generic_K3s}.
In the following, we briefly summarize the main results.
Let $\beta, \omega \in \mathop{\mathrm{NS}}\nolimits(X)_\ensuremath{\mathbb{R}}$ be two real divisor classes, with $\omega$ being ample.
For $E\in \mathrm{D}^{b}(X,\alpha)$, define
\[
Z_{\omega,\beta}(E) := \left(e^{i\omega + \beta + \beta_0}, \ensuremath{\mathbf v}(E)\right).
\]
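Concretely, in the untwisted case (so we may take $\beta_0 = 0$), writing $\ensuremath{\mathbf v}(E) = (r,c,s)$ and expanding $e^{i\omega+\beta} = \left(1,\, \beta + i\omega,\, \tfrac{(\beta+i\omega)^2}{2}\right)$, the definition unwinds to
\[
Z_{\omega,\beta}(E) = c\cdot(\beta+i\omega) - \frac{(\beta+i\omega)^2}{2}\, r - s;
\]
in particular, $Z_{\omega,\beta}(k(x)) = -1$ for every skyscraper sheaf $k(x)$.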
In \cite[Lemma 6.1]{Bridgeland:K3}
Bridgeland constructs a heart $\ensuremath{\mathcal A}_{\omega, \beta}$ by tilting
at a torsion pair (see \cite[Section 3.1]{HMS:generic_K3s} for the case $\alpha \neq 1$).
Its objects are two-term complexes $E^{-1} \xrightarrow{d} E^0$ with the property:
\begin{itemize}
\item $\mathop{\mathrm{Ker}}\nolimits d$ is a torsion-free $\alpha$-twisted sheaf such that, for every non-zero subsheaf $E'\subset \mathop{\mathrm{Ker}}\nolimits d$, we have $\Im Z_{\omega,\beta}(E')\leq0$;
\item the torsion-free part of $\mathop{\mathrm{Cok}}\nolimits d$ is such that, for every non-zero torsion free quotient $\mathop{\mathrm{Cok}}\nolimits d \ensuremath{\twoheadrightarrow} E''$, we have $\Im Z_{\omega,\beta}(E'')>0$.
\end{itemize}
\begin{Thm}[{\cite[Sections 10, 11]{Bridgeland:K3}, \cite[Proposition 3.8]{HMS:generic_K3s}}] \label{thm:BridgelandK3geometric}
Let $\sigma = (Z, \ensuremath{\mathcal P})$ be a stability condition such that all
skyscraper sheaves $k(x)$ of points are $\sigma$-stable. Then
there are real divisor classes $\omega, \beta \in \mathop{\mathrm{NS}}\nolimits(X)_\ensuremath{\mathbb{R}}$ with $\omega$ ample, such that,
up to the $\widetilde{\mathop{\mathrm{GL}}}^+_2(\ensuremath{\mathbb{R}})$-action, $\sigma$ is equal to the stability condition
$\sigma_{\omega, \beta}$
determined by $\ensuremath{\mathcal P}((0, 1]) = \ensuremath{\mathcal A}_{\omega, \beta}$ and $Z=Z_{\omega,\beta}$.
\end{Thm}
We will call such stability conditions \emph{geometric}, and write
$U(X, \alpha) \subset \mathop{\mathrm{Stab}}\nolimits(X, \alpha)$ for the
open subset of geometric stability conditions.
Using the Mukai pairing, we identify any central charge
$Z \in \mathop{\mathrm{Hom}}\nolimits(H^*_{\alg}(X,\alpha, \Z), \ensuremath{\mathbb{C}})$ with a vector $\Omega_Z$ in $H^*_{\alg}(X,\alpha, \Z) \otimes \ensuremath{\mathbb{C}}$ such that
\[
Z (\underline{\hphantom{A}}) = \left( \Omega_Z,\underline{\hphantom{A}} \right).
\]
The vector $\Omega_Z$ belongs to the domain $\ensuremath{\mathcal P}_0^+(X,\alpha)$, which we now describe.
Let
\[
\ensuremath{\mathcal P}(X,\alpha) \subset H^*_{\alg}(X,\alpha, \Z) \otimes \ensuremath{\mathbb{C}}
\]
be the set of vectors $\Omega$ such that
$\Im \Omega, \Re \Omega$ span a positive definite 2-plane in $H^*_{\alg}(X,\alpha, \Z) \otimes \ensuremath{\mathbb{R}}$.
The subset $\ensuremath{\mathcal P}_0(X,\alpha)$ is the set of vectors not orthogonal to any spherical class:
\[ \ensuremath{\mathcal P}_0(X,\alpha) =
\stv{\Omega \in \ensuremath{\mathcal P}(X,\alpha)}{ (\Omega, \ensuremath{\mathbf s}) \neq 0,\,
\text{for all $\ensuremath{\mathbf s} \in H^*_{\alg}(X,\alpha, \Z) \text{ with } \ensuremath{\mathbf s}^2 = -2$}}.
\]
Finally, $\ensuremath{\mathcal P}_0(X,\alpha)$ has two connected components, corresponding to the orientation induced on the
plane spanned by $\Im \Omega, \Re \Omega$; we let
$\ensuremath{\mathcal P}_0^+(X,\alpha)$ be the component containing vectors of the form $e^{i\omega + \beta + \beta_0}$ for
$\omega$ ample.
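For instance, in the untwisted case the central charge of a geometric stability condition $\sigma_{\omega,\beta}$ corresponds to $\Omega = e^{i\omega+\beta}$, with
\[
\Re\Omega = \left(1,\, \beta,\, \tfrac{\beta^2-\omega^2}{2}\right), \qquad \Im\Omega = (0,\, \omega,\, \beta\cdot\omega),
\]
and a direct computation gives $(\Re\Omega)^2 = (\Im\Omega)^2 = \omega^2 > 0$ and $(\Re\Omega,\Im\Omega) = 0$: these two vectors indeed span a positive definite 2-plane, as required.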
\begin{Thm}[{\cite[Section 8]{Bridgeland:K3}, \cite[Proposition 3.10]{HMS:generic_K3s}}] \label{thm:Bridgeland_coveringmap}
Let $\mathop{\mathrm{Stab}}\nolimits^\dag(X,\alpha)$ be the connected component of the space of stability conditions
containing geometric stability conditions $U(X,\alpha)$.
Let $\ensuremath{\mathcal Z} \colon \mathop{\mathrm{Stab}}\nolimits^\dag(X,\alpha) \to H^*_{\alg}(X,\alpha, \Z) \otimes \ensuremath{\mathbb{C}}$ be the map sending
a stability condition $(Z, \ensuremath{\mathcal P})$ to $\Omega_Z$, where $Z(\underline{\hphantom{A}}) = (\Omega_Z, \underline{\hphantom{A}})$.
Then $\ensuremath{\mathcal Z}$ is a covering map of $\ensuremath{\mathcal P}_0^+(X,\alpha)$.
\end{Thm}
We will need the following observation:
\begin{Prop} \label{prop:dualstability}
The stability conditions $\sigma_{\omega, \beta}$ on $U(X, \alpha)$ and $\sigma_{\omega, -\beta}$
on $U(X, \alpha^{-1})$ are dual to each
other in the following sense: An object $E \in \mathrm{D}^{b}(X, \alpha)$ is $\sigma_{\omega, \beta}$-(semi)stable
of phase $\phi$
if and only if its shifted derived dual $E^\vee[2] \in \mathrm{D}^{b}(X, \alpha^{-1})$ is $\sigma_{\omega,-\beta}$-(semi)stable of phase $-\phi$.
\end{Prop}
\begin{proof}
By \cite[Propositions 3.3.1 \& 4.2]{large-volume}, this follows as in \cite[Proposition 4.3.6]{BMT:3folds-BG}.
\end{proof}
\subsection*{Derived Torelli}
Any positive definite 4-plane in $H^*(X, \alpha, \ensuremath{\mathbb{R}})$ comes equipped with a canonical orientation,
induced by the K\"ahler cone. A Hodge-isometry $\phi \colon H^*(X, \alpha, \ensuremath{\mathbb{Z}}) \to H^*(X', \alpha',
\ensuremath{\mathbb{Z}})$ is called orientation-preserving if it is compatible with this orientation data.
\begin{Thm}[Mukai-Orlov] \label{thm:derivedtorelli}
Given an orientation-preserving Hodge isometry $\phi$ between the Mukai lattice of twisted K3
surfaces $(X, \alpha)$ and $(X', \alpha')$, there exists a derived equivalence
$\Phi \colon \mathrm{D}^{b}(X, \alpha) \to \mathrm{D}^{b}(X', \alpha')$ with $\Phi_* = \phi$.
Moreover, $\Phi$ may be chosen such that it sends the distinguished component $\mathop{\mathrm{Stab}}\nolimits^\dag(X,\alpha)$ to
$\mathop{\mathrm{Stab}}\nolimits^\dag(X',\alpha')$.
\end{Thm}
\begin{proof}
The case $\alpha=1$ follows from Orlov's representability result \cite{Orlov:representability}
(based on \cite{Mukai:BundlesK3}), see \cite{HLOY:autoequivalences, Ploog:thesis, HMS:Orientation}. The
twisted case was treated in \cite{HuybrechtsStellari:CaldararuConj}.
The second statement follows identically to the case $X =X'$ treated in
\cite[Proposition 7.9]{Hartmann:cusps}; see also \cite{Huybrechts:derived-abelian}.
\end{proof}
\subsection*{Walls}
One of the main properties of the space of Bridgeland stability conditions is that it admits a well-behaved wall and chamber structure.
This is due to Bridgeland and Toda (the precise statement is \cite[Proposition 2.3]{BM:projectivity}).
Let $(X,\alpha)$ be a twisted K3 surface and let $\ensuremath{\mathbf v}\in H^*_{\mathrm{alg}}(X,\alpha,\ensuremath{\mathbb{Z}})$ be a Mukai vector.
Then there exists a locally finite set of \emph{walls} (real codimension one submanifolds with boundary)
in $\mathop{\mathrm{Stab}}\nolimits^\dag(X,\alpha)$, depending only on $\ensuremath{\mathbf v}$, with the following properties:
\begin{enumerate}
\item When $\sigma$ varies within a chamber, the sets of $\sigma$-semistable and
$\sigma$-stable objects of class $\ensuremath{\mathbf v}$ do not change.
\item When $\sigma$ lies on a single wall $\ensuremath{\mathcal W} \subset \mathop{\mathrm{Stab}}\nolimits^\dag(X,\alpha)$,
then there is a $\sigma$-semistable object
that is unstable in one of the adjacent chambers, and semistable in the other adjacent chamber.
\item When we restrict to an intersection of finitely many walls $\ensuremath{\mathcal W}_1, \dots, \ensuremath{\mathcal W}_k$, we
obtain a wall-and-chamber decomposition on $\ensuremath{\mathcal W}_1 \cap \dots \cap \ensuremath{\mathcal W}_k$ with the same properties,
where the walls are given by the intersections $\ensuremath{\mathcal W} \cap \ensuremath{\mathcal W}_1 \cap \dots \cap \ensuremath{\mathcal W}_k$ for any
of the walls $\ensuremath{\mathcal W} \subset \mathop{\mathrm{Stab}}\nolimits^\dag(X,\alpha)$ with respect to $\ensuremath{\mathbf v}$.
\end{enumerate}
Moreover, if $\ensuremath{\mathbf v}$ is primitive, then $\sigma$ lies on a wall if and only if there exists a
strictly $\sigma$-semistable object of class $\ensuremath{\mathbf v}$.
The Jordan-H\"older filtration of $\sigma$-semistable objects
does not change when $\sigma$ varies within a chamber.
\begin{Def}\label{def:generic}
Let $\ensuremath{\mathbf v}\in H^*_{\mathrm{alg}}(X,\alpha,\ensuremath{\mathbb{Z}})$.
A stability condition is called \emph{generic} with respect to $\ensuremath{\mathbf v}$ if it does not lie on a wall.
\end{Def}
\begin{Rem} \label{rem:Giesekerchamber}
Given a polarization $H$ that is generic with respect to $\ensuremath{\mathbf v}$, there is always a Gieseker
chamber $\ensuremath{\mathcal C}$: for $\sigma \in \ensuremath{\mathcal C}$, the moduli space
$M_{\sigma}(\ensuremath{\mathbf v})$ of Bridgeland stable objects is exactly the moduli space
of $H$-Gieseker stable sheaves; see \cite[Proposition 14.2]{Bridgeland:K3}.
\end{Rem}
\subsection*{Moduli spaces and projectivity}
Let $(X,\alpha)$ be a twisted K3 surface and let $\ensuremath{\mathbf v}\in H^*_{\mathrm{alg}}(X,\alpha,\ensuremath{\mathbb{Z}})$.
Given $\sigma=(Z,\ensuremath{\mathcal P})\in\mathop{\mathrm{Stab}}\nolimits^\dagger(X,\alpha)$ and $\phi\in\ensuremath{\mathbb{R}}$ such that $Z(\ensuremath{\mathbf v})\in\ensuremath{\mathbb{R}}_{>0}\cdot
e^{\pi \phi \sqrt{-1}}$, let $ \mathfrak M_{\sigma}(\ensuremath{\mathbf v},\phi)$ and $\mathfrak M_{\sigma}^{st}(\ensuremath{\mathbf v},\phi)$
be the moduli stack of $\sigma$-semistable and $\sigma$-stable objects with phase $\phi$ and Mukai
vector $\ensuremath{\mathbf v}$, respectively.
We will omit $\phi$ from the notation from now on.
If $\sigma\in\mathop{\mathrm{Stab}}\nolimits^\dagger(X,\alpha)$ is generic with respect to $\ensuremath{\mathbf v}$, then $\mathfrak M_{\sigma}(\ensuremath{\mathbf v})$
has a coarse moduli space $M_{\sigma}(\ensuremath{\mathbf v})$ of $\sigma$-semistable objects with Mukai vector $\ensuremath{\mathbf v}$
(\cite[Theorem 1.3(a)]{BM:projectivity}, which generalizes \cite[Theorem 0.0.2]{MYY2}).
It is a normal projective irreducible variety with $\ensuremath{\mathbb{Q}}$-factorial singularities.
If $\ensuremath{\mathbf v}$ is primitive, then $M_{\sigma}(\ensuremath{\mathbf v})=M^{st}_{\sigma}(\ensuremath{\mathbf v})$ is a smooth projective hyperk\"ahler manifold (see Section \ref{sec:ReviewHK}).
By results of Yoshioka and Toda, there is a very precise criterion for the non-emptiness of a moduli
space, and it always has the expected dimension:
\begin{Thm}\label{thm:nonempty}
Let $\ensuremath{\mathbf v} = m\ensuremath{\mathbf v}_0 \in H^*_\mathrm{alg}(X,\alpha,\ensuremath{\mathbb{Z}})$ be a vector with $\ensuremath{\mathbf v}_0$ primitive and $m>0$,
and let $\sigma\in\mathop{\mathrm{Stab}}\nolimits^\dagger(X,\alpha)$ be a generic stability condition with respect
to $\ensuremath{\mathbf v}$.
\begin{enumerate}
\item \label{enum:nonempty}
The coarse moduli space $M_{\sigma}(\ensuremath{\mathbf v})$ is non-empty if and only if $\ensuremath{\mathbf v}_0^2 \geq -2$.
\item \label{enum:dimandsquare}
Either $\mathop{\mathrm{dim}}\nolimits M_\sigma(\ensuremath{\mathbf v}) = \ensuremath{\mathbf v}^2 + 2$ and $M^{st}_\sigma(\ensuremath{\mathbf v})\neq\emptyset$, or $m > 1$ and $\ensuremath{\mathbf v}_0^2 \le 0$.
\end{enumerate}
\end{Thm}
In other words, when $\ensuremath{\mathbf v}^2 \neq 0$ and the dimension of the moduli space is positive,
then it is given by $\mathop{\mathrm{dim}}\nolimits M_\sigma(\ensuremath{\mathbf v}) = \ensuremath{\mathbf v}^2 + 2$.
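For instance, for $\ensuremath{\mathbf v} = (1,0,1-n)$ with $n\geq 1$, this recovers $\mathop{\mathrm{dim}}\nolimits M_\sigma(\ensuremath{\mathbf v}) = \ensuremath{\mathbf v}^2+2 = 2n$, the dimension of the Hilbert scheme of $n$ points, while a primitive isotropic vector $\ensuremath{\mathbf w}$ yields a moduli space of dimension $\ensuremath{\mathbf w}^2 + 2 = 2$, as for the K3 surface $Y=M_H(\ensuremath{\mathbf w})$ appearing in the proof sketch of Theorem \ref{thm:SYZ}.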
\begin{proof}
This is well-known: we provide a proof for completeness.
First of all, claim \eqref{enum:nonempty} follows from results of Yoshioka and Toda
(see \cite[Theorem 6.8]{BM:projectivity}).
Since $\sigma$ is generic with respect to $\ensuremath{\mathbf v}$, we know that $M_{\sigma}(\ensuremath{\mathbf v})$ exists as a projective variety, parameterizing S-equivalence classes of semistable objects.
Moreover, if $E\in M_\sigma(\ensuremath{\mathbf v})$ and $F\ensuremath{\hookrightarrow} E$ is a subobject with $\phi_{\sigma}(F)=\phi_{\sigma}(E)$, then $\ensuremath{\mathbf v}(F)=m' \ensuremath{\mathbf v}_0$ for some $m'>0$.
Hence, the locus of strictly semistable objects in $M_\sigma(\ensuremath{\mathbf v})$ coincides with the image of the natural map
\[
\mathrm{SSL}\colon \coprod_{m_1+m_2=m} M_\sigma(m_1\ensuremath{\mathbf v}_0)\times M_{\sigma}(m_2\ensuremath{\mathbf v}_0) \longrightarrow M_\sigma(\ensuremath{\mathbf v}),
\quad
\mathrm{SSL}\bigl((E_1, E_2)\bigr) = E_1\oplus E_2.
\]
If we assume $\ensuremath{\mathbf v}_0^2>0$ (and so $\geq2$), then we can proceed by induction on $m$.
For $m=1$, $M_\sigma^{st}(\ensuremath{\mathbf v}_0)=M_{\sigma}(\ensuremath{\mathbf v}_0)$ and the conclusion follows from the Riemann-Roch Theorem and \cite{Mukai:BundlesK3}.
If $m>1$, then we deduce from the inductive assumption that the image of the map $\mathrm{SSL}$ has dimension equal to the maximum of $(m_1^2+m_2^2)\ensuremath{\mathbf v}_0^2+4$, for $m_1+m_2=m$.
We claim that we can construct a semistable object $E$ with vector $\ensuremath{\mathbf v}$ which is also a Schur
object, i.e. $\mathop{\mathrm{Hom}}\nolimits(E, E) = \ensuremath{\mathbb{C}}$.
Indeed, again by the inductive assumption, we can consider a $\sigma$-\emph{stable} object $F_{m-1}$ with vector $(m-1)\ensuremath{\mathbf v}_0$.
Let $F\in M_\sigma(\ensuremath{\mathbf v}_0)$.
Then, again by the Riemann-Roch Theorem, $\mathop{\mathrm{Ext}}\nolimits^1(F,F_{m-1})\neq0$.
We can take any non-trivial extension
\[
0\to F_{m-1} \to F_m \to F \to 0.
\]
Since both $F_{m-1}$ and $F$ are Schur objects, and there are no non-zero morphisms between them, $F_m$
is also a Schur object.
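This step can be made explicit; the following is a sketch of one standard argument, which the text leaves implicit (it uses only the non-triviality of the extension):

```latex
Let $f \in \mathop{\mathrm{Hom}}\nolimits(F_m, F_m)$. Since $\mathop{\mathrm{Hom}}\nolimits(F_{m-1}, F) = 0$, the composition
$F_{m-1} \ensuremath{\hookrightarrow} F_m \xrightarrow{\,f\,} F_m \ensuremath{\twoheadrightarrow} F$ vanishes, so $f$ restricts to an
endomorphism of $F_{m-1}$ and induces an endomorphism of $F$; as both are Schur objects,
these are scalars $\lambda$ and $\mu$. If $\lambda \neq \mu$, then $f - \lambda\cdot\mathrm{id}$
vanishes on $F_{m-1}$, hence factors as $F_m \ensuremath{\twoheadrightarrow} F \xrightarrow{\,g\,} F_m$ with
$(F_m \ensuremath{\twoheadrightarrow} F) \circ g = (\mu - \lambda)\cdot \mathrm{id}_F$; then $(\mu-\lambda)^{-1} g$
splits the extension, a contradiction. Hence $f = \lambda \cdot \mathrm{id}$ and
$\mathop{\mathrm{Hom}}\nolimits(F_m, F_m) = \ensuremath{\mathbb{C}}$.
```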
Again by the Riemann-Roch Theorem and \cite{Mukai:Symplectic}, we deduce that the dimension of $M_\sigma(\ensuremath{\mathbf v})$ is equal to $\mathrm{ext}^1(F_m,F_m)=m^2 \ensuremath{\mathbf v}_0^2+2$.
Since, for all $m_1,m_2>0$ with $m_1+m_2=m$, we have
\[
(m_1^2+m_2^2)\ensuremath{\mathbf v}_0^2 + 4 < m^2\ensuremath{\mathbf v}_0^2 +2
\]
(indeed, $m^2\ensuremath{\mathbf v}_0^2 - (m_1^2+m_2^2)\ensuremath{\mathbf v}_0^2 = 2m_1m_2\ensuremath{\mathbf v}_0^2 \geq 4 > 2$), the image of $\mathrm{SSL}$ has strictly smaller dimension than $M_\sigma(\ensuremath{\mathbf v})$;
this shows that $M^{st}_\sigma(\ensuremath{\mathbf v})\neq\emptyset$, as claimed.
For the case $\ensuremath{\mathbf v}_0^2 \le 0$, see \cite[Lemma 7.1 and Lemma 7.2]{BM:projectivity}.
\end{proof}
Let us also point out that the proof shows a stronger statement:
\begin{Lem} \label{lem:nonprimitive}
Let $\ensuremath{\mathbf v} = m \ensuremath{\mathbf v}_0$ with $\ensuremath{\mathbf v}_0^2 > 0$, and $\sigma \in \mathop{\mathrm{Stab}}\nolimits^\dag(X,\alpha)$, not necessarily generic with
respect to $\ensuremath{\mathbf v}$. If there exist $\sigma$-\emph{stable} objects of class $\ensuremath{\mathbf v}_0$, then the same
holds for $\ensuremath{\mathbf v}$.
\end{Lem}
\begin{proof}
Let $F'$ be a generic deformation of $F_m$, and assume that it is strictly semistable; let
$E \ensuremath{\hookrightarrow} F'$ be a semistable subobject of the same phase. The above proof shows that
the Mukai vector $\ensuremath{\mathbf v}(E)$ cannot be a multiple of $\ensuremath{\mathbf v}_0$.
Using the universal closedness of moduli spaces of semistable objects, it follows as in
\cite[Theorem 3.20]{Toda:K3} that $F_m$ also has a semistable subobject with Mukai vector
equal to $\ensuremath{\mathbf v}(E)$. This is not possible by construction.
\end{proof}
\subsection*{Line bundles on moduli spaces}
In this section we recall the main result of \cite{BM:projectivity}. It shows that every
moduli space of Bridgeland-stable objects comes equipped with a numerically positive line bundle,
naturally associated to the stability condition.
Let $(X,\alpha)$ be a twisted K3 surface.
Let $S$ be a proper algebraic space of finite type over $\ensuremath{\mathbb{C}}$, let $\sigma = (Z, \ensuremath{\mathcal P})
\in\mathop{\mathrm{Stab}}\nolimits^\dag(X,\alpha)$, and let $\ensuremath{\mathcal E}\in\mathrm{D}^{b}(S\times (X,\alpha))$ be a family of $\sigma$-semistable objects
of class $\ensuremath{\mathbf v}$ and phase $\phi$: for all closed points $s\in S$, $\ensuremath{\mathcal E}_s\in\ensuremath{\mathcal P}(\phi)$ with
$\ensuremath{\mathbf v}(\ensuremath{\mathcal E}_s)=\ensuremath{\mathbf v}$. We write $\Phi_\ensuremath{\mathcal E} \colon \mathrm{D}^{b}(S) \to \mathrm{D}^{b}(X,\alpha)$ for the Fourier-Mukai transform
associated to $\ensuremath{\mathcal E}$.
We construct a class $\ell_{\sigma}\in\mathop{\mathrm{NS}}\nolimits(S)_\ensuremath{\mathbb{R}}$ on $S$ as follows: to every curve $C\subset S$, we associate
\[
C\mapsto \ell_{\sigma}.C := \Im \left(-\frac{Z(\ensuremath{\mathbf v}(\Phi_{\ensuremath{\mathcal E}}(\ensuremath{\mathcal O}_C)))}{Z(\ensuremath{\mathbf v})}\right).
\]
This defines a numerical Cartier divisor class on $S$; see \cite[Section
4]{BM:projectivity}.
\begin{Rem} \label{rmk:comparison}
The classical construction of determinant line bundles (see \cite[Section 8.1]{HL:Moduli}) induces,
up to duality, the so-called \emph{Mukai morphism}
$\theta_{\ensuremath{\mathcal E}} \colon \ensuremath{\mathbf v}^\perp \to \mathop{\mathrm{NS}}\nolimits(S)$. It can also be defined by
\begin{equation} \label{eq:definetheta}
\theta_{\ensuremath{\mathcal E}}(\ensuremath{\mathbf w}).C:= \bigl(\ensuremath{\mathbf w}, \ensuremath{\mathbf v}(\Phi_\ensuremath{\mathcal E}(\ensuremath{\mathcal O}_C))\bigr).
\end{equation}
If we assume $Z(\ensuremath{\mathbf v})=-1$, and write $Z(\underline{\hphantom{A}})=(\Omega_Z, \underline{\hphantom{A}})$ as above, we can also write
\begin{equation} \label{eq:ellandtheta}
\ell_\sigma = \theta_{\ensuremath{\mathcal E}}(\Im \Omega_Z).
\end{equation}
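For completeness, identity \eqref{eq:ellandtheta} can be verified directly from \eqref{eq:definetheta} and the definition of $\ell_\sigma$; this short check is not carried out in the text. Note first that $\Im\Omega_Z\in\ensuremath{\mathbf v}^\perp$, since $(\Im\Omega_Z,\ensuremath{\mathbf v})=\Im Z(\ensuremath{\mathbf v})=\Im(-1)=0$. Then, for any curve $C\subset S$:

```latex
\[
\ell_\sigma.C
= \Im \left(-\frac{Z(\ensuremath{\mathbf v}(\Phi_{\ensuremath{\mathcal E}}(\ensuremath{\mathcal O}_C)))}{Z(\ensuremath{\mathbf v})}\right)
= \Im\, Z(\ensuremath{\mathbf v}(\Phi_{\ensuremath{\mathcal E}}(\ensuremath{\mathcal O}_C)))
= \bigl(\Im \Omega_Z,\, \ensuremath{\mathbf v}(\Phi_{\ensuremath{\mathcal E}}(\ensuremath{\mathcal O}_C))\bigr)
= \theta_{\ensuremath{\mathcal E}}(\Im \Omega_Z).C,
\]
```

using $Z(\ensuremath{\mathbf v})=-1$ in the second equality and $Z(\underline{\hphantom{A}})=(\Omega_Z, \underline{\hphantom{A}})$ in the third.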
\end{Rem}
\begin{Thm}[{\cite[Theorem 4.1 \& Remark 4.6]{BM:projectivity}}] \label{thm:ampleness}
The main properties of $\ell_{\sigma}$ are:
\begin{enumerate}
\item $\ell_{\sigma}$ is a nef divisor class on $S$. Additionally, for a curve $C\subset S$, we have $\ell_\sigma.C = 0$ if and only
if, for two general closed points $c, c' \in C$, the corresponding objects $\ensuremath{\mathcal E}_c, \ensuremath{\mathcal E}_{c'}\in\mathrm{D}^{b}(X,\alpha)$ are S-equivalent.
\item For any Mukai vector $\ensuremath{\mathbf v}\in H^*_\mathrm{alg}(X,\alpha,\ensuremath{\mathbb{Z}})$ and any stability condition
$\sigma \in \mathop{\mathrm{Stab}}\nolimits^\dag(X, \alpha)$ that is generic with respect to $\ensuremath{\mathbf v}$,
$\ell_{\sigma}$ induces an ample divisor class on the coarse moduli space $M_{\sigma}(\ensuremath{\mathbf v})$.
\end{enumerate}
\end{Thm}
For any chamber $\ensuremath{\mathcal C}\subset\mathop{\mathrm{Stab}}\nolimits^\dagger(X,\alpha)$, we thus get a map
\begin{equation} \label{eq:elltoAmp}
\ell_\ensuremath{\mathcal C} \colon \ensuremath{\mathcal C} \to \mathrm{Amp}(M_{\ensuremath{\mathcal C}}(\ensuremath{\mathbf v})),
\end{equation}
where $M_{\ensuremath{\mathcal C}}(\ensuremath{\mathbf v})$ denotes the coarse moduli space $M_{\sigma}(\ensuremath{\mathbf v})$, which is
independent of the choice of $\sigma\in\ensuremath{\mathcal C}$.
The main goal of this paper is to understand the global behavior of this map.
We recall one more result from \cite{BM:projectivity}, which will be crucial for our wall-crossing
analysis.
Let $\ensuremath{\mathbf v}\in H^*_{\mathrm{alg}}(X,\alpha,\ensuremath{\mathbb{Z}})$ be a \emph{primitive} vector with $\ensuremath{\mathbf v}^2\geq -2$.
Let $\ensuremath{\mathcal W}$ be a wall for $\ensuremath{\mathbf v}$ and let $\sigma_0\in \ensuremath{\mathcal W}$ be a generic stability condition on the
wall, i.e., one that does not belong to any other wall.
We denote by $\sigma_+$ and $\sigma_-$ two generic stability conditions close to $\ensuremath{\mathcal W}$, in the two adjacent chambers on opposite sides.
Then all $\sigma_{\pm}$-semistable objects are also $\sigma_0$-semistable.
Hence, $\ell_{\sigma_0}$ induces two nef divisors $\ell_{\sigma_0,+}$ and $\ell_{\sigma_0,-}$ on $M_{\sigma_+}(\ensuremath{\mathbf v})$ and $M_{\sigma_-}(\ensuremath{\mathbf v})$ respectively.
\begin{Thm}[{\cite[Theorem 1.4(a)]{BM:projectivity}}]\label{thm:contraction}
The divisors $\ell_{\sigma_0,\pm}$ are big and nef on $M_{\sigma_{\pm}}(\ensuremath{\mathbf v})$.
In particular, they are semi-ample, and induce birational contractions
\[
\pi^{\pm}\colon M_{\sigma_{\pm}}(\ensuremath{\mathbf v})\to \overline{M}_{\pm},
\]
where $\overline{M}_{\pm}$ are normal irreducible projective varieties.
The curves contracted by $\pi^{\pm}$ are precisely the curves of objects that are
S-equivalent with respect to $\sigma_0$.
\end{Thm}
\begin{Def}\label{def:TypeOfWalls}
We call a wall $\ensuremath{\mathcal W}$:
\begin{enumerate}
\item a \emph{fake wall}, if there are no curves in $M_{\sigma_{\pm}}(\ensuremath{\mathbf v})$ of objects that are S-equivalent
to each other with respect to $\sigma_0$;
\item a \emph{totally semistable wall}, if $M^{st}_{\sigma_0}(\ensuremath{\mathbf v})=\emptyset$;
\item a \emph{flopping wall}, if we can identify $\overline{M}_+=\overline{M}_-$ and the induced
map $M_{\sigma_+}(\ensuremath{\mathbf v})\dashrightarrow M_{\sigma_-}(\ensuremath{\mathbf v})$ induces a flopping contraction;
\item a \emph{divisorial wall}, if the morphisms $\pi^{\pm}\colon M_{\sigma_{\pm}}(\ensuremath{\mathbf v})\to \overline{M}_{\pm}$ are both divisorial contractions.
\end{enumerate}
\end{Def}
By \cite[Theorem 1.4(b)]{BM:projectivity}, if $\ensuremath{\mathcal W}$ is not a fake wall and $M^{st}_{\sigma_0}(\ensuremath{\mathbf v})\subset M_{\sigma_{\pm}}(\ensuremath{\mathbf v})$ has complement of codimension at least two, then $\ensuremath{\mathcal W}$ is a flopping wall.
We will classify walls in Theorem \ref{thm:walls}.
\section{Review: basic facts about hyperk\"ahler varieties}\label{sec:ReviewHK}
In this section we give a short review of hyperk\"ahler manifolds.
The main references are \cite{Beauville:HK,GrossHuybrechtsJoyce,Eyal:survey}.
\begin{Def}\label{def:HK}
A \emph{projective hyperk\"ahler manifold} is a simply connected smooth projective variety $M$ such that $H^0(M,\Omega^2_M)$ is one-dimensional, spanned by an everywhere non-degenerate holomorphic $2$-form.
\end{Def}
The N\'eron-Severi group of a hyperk\"ahler manifold carries a natural bilinear form, called the \emph{Fujiki-Beauville-Bogomolov form}.
It is induced by a quadratic form on the whole second cohomology group $q:H^2(M,\ensuremath{\mathbb{Z}})\to\ensuremath{\mathbb{Z}}$, which is
primitive of signature $(3,b_2(M)-3)$. It satisfies the Fujiki relation
\begin{equation}\label{eq:BBform}
\int_M \alpha^{2n} = F_M \cdot q(\alpha)^n,\qquad \alpha\in H^2(M,\ensuremath{\mathbb{Z}}),
\end{equation}
where $2n=\mathop{\mathrm{dim}}\nolimits M$ and $F_M$ is the \emph{Fujiki constant}, which depends only on the deformation
type of $M$.
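Two standard examples, which are well known but not stated in the text: for a K3 surface ($n=1$) the Fujiki constant equals $1$ and $q$ is the intersection form, while for the Hilbert scheme of $n$ points on a K3 surface $S$ it equals $(2n-1)!!$:

```latex
\[
\int_S \alpha^2 = q(\alpha) \quad (S \text{ a K3 surface}),
\qquad
F_{\mathrm{Hilb}^n(S)} = \frac{(2n)!}{2^n\, n!} = (2n-1)!!.
\]
```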
We will mostly use the notation $(\underline{\hphantom{A}},\underline{\hphantom{A}}):=q(\underline{\hphantom{A}},\underline{\hphantom{A}})$ for the induced bilinear form on $\mathrm{NS}(M)$.
The Hodge structure $\left(H^2(M,\ensuremath{\mathbb{Z}}),q\right)$ behaves similarly to the case of a K3 surface.
For example, by \cite{Verbitsky:torelli}, there is a weak global Hodge theoretic Torelli theorem for
(deformation equivalent) hyperk\"ahler manifolds.
Moreover, some positivity properties of divisors on $M$ can be rephrased in terms of $q$.
We first recall a few basic definitions on cones of divisors.
\begin{Def}\label{def:MoveableBigPositiveEffDivisors}
An integral divisor $D\in\mathop{\mathrm{NS}}\nolimits(M)$ is called
\begin{itemize}
\item \emph{big}, if its Iitaka dimension is maximal;
\item \emph{movable}, if its stable base-locus has codimension $\geq 2$;
\item \emph{strictly positive}, if $(D,D) > 0$ and $(D, A) > 0$ for a fixed ample class $A$ on $M$.
\end{itemize}
\end{Def}
The real (not necessarily closed) cone generated by big (resp., movable, strictly positive, effective) integral divisors will be denoted by $\mathop{\mathrm{Big}}(M)$ (resp., $\mathop{\mathrm{Mov}}\nolimits(M)$, $\mathop{\mathrm{Pos}}(M)$, $\mathrm{Eff}(M)$).
We have the following inclusions:
\begin{align*}
&\mathop{\mathrm{Pos}}(M)\subset \mathop{\mathrm{Big}}(M)\subset\mathrm{Eff}(M)\\
&\mathop{\mathrm{Nef}}\nolimits(M)\subset\overline{\mathop{\mathrm{Mov}}\nolimits}(M)\subset \overline{\mathop{\mathrm{Pos}}}(M)\subset\overline{\mathop{\mathrm{Big}}}(M)=\overline{\mathrm{Eff}}(M).
\end{align*}
The only non-trivial inclusion is $\mathop{\mathrm{Pos}}(M)\subset \mathop{\mathrm{Big}}(M)$, which follows from \cite[Corollary 3.10]{Huybrechts:compactHyperkaehlerbasic}.
Divisors in $\overline{\mathop{\mathrm{Pos}}}(M)$ are called \emph{positive}.
We say that an irreducible divisor $D \subset M$ is \emph{exceptional} if there is a birational map $\pi \colon M
\dashrightarrow M'$ contracting $D$. Using the Fujiki relations, one proves $D^2 < 0$ and $(D,
E) \ge 0$ for every movable divisor $E$ \cite[Section 1]{Huybrechts:compactHyperkaehlerbasic}.
We let $\rho_D$ be the reflection at $D$, i.e., the linear involution of $\mathop{\mathrm{NS}}\nolimits(M)_\ensuremath{\mathbb{Q}}$ fixing $D^\perp$ and sending $D$ to $-D$.
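Explicitly, $\rho_D$ is given by the standard reflection formula in a quadratic space (well defined since $(D,D)=D^2<0$):

```latex
\[
\rho_D(x) = x - \frac{2\,(x,D)}{(D,D)}\, D, \qquad x \in \mathop{\mathrm{NS}}\nolimits(M)_\ensuremath{\mathbb{Q}}.
\]
```

One checks immediately that $\rho_D|_{D^\perp}=\mathrm{id}$ and $\rho_D(D)=-D$; the non-trivial point is the integrality of $\rho_D$ on $\mathop{\mathrm{NS}}\nolimits(M)$, which is part of the proposition below.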
\begin{Prop}[\cite{Eyal:prime-exceptional}] \label{prop:Weylmovablecone}
The reflection $\rho_D$ at an irreducible exceptional divisor is an integral involution of $\mathop{\mathrm{NS}}\nolimits(M)$.
Let $W_{\mathrm{Exc}}$ be the Weyl group generated by such reflections $\rho_D$.
The cone $\mathop{\mathrm{Mov}}\nolimits(M) \cap \mathop{\mathrm{Pos}}(M)$ of big movable divisors is the fundamental chamber for
the action of $W_{\mathrm{Exc}}$ on $\mathop{\mathrm{Pos}}(M)$, cut out by $(D, \underline{\hphantom{A}}) \ge 0$ for every irreducible exceptional divisor $D$.
\end{Prop}
The difficult claim is the integrality of $\rho_D$; in our case, we could also deduce it from our
classification of divisorial contractions in Theorem \ref{thm:walls}.
As explained in \cite[Section 6]{Eyal:survey}, the remaining statements follow
from Zariski decomposition for divisors \cite{Boucksom:Zariskidecomposition} and standard results
about Weyl group actions on hyperbolic lattices.
\begin{Def}\label{def:LagrangianFibration}
Let $M$ be a projective hyperk\"ahler manifold of dimension $2n$.
A \emph{Lagrangian fibration} is a surjective morphism with connected fibers $h\colon M\to B$, where $B$
is a smooth projective variety, such that the generic fiber is Lagrangian with respect to the
symplectic form $\omega\in H^0(M,\Omega^2_M)$.
\end{Def}
By the Arnold-Liouville Theorem, any smooth fiber of a Lagrangian fibration is an abelian variety of dimension $n$.
Moreover:
\begin{Thm}[{\cite{Matsushita:Fibrations, Matsushita:Addendum} and \cite{Hwang:fibrationsbase}}]\label{thm:MatsushitaHwang}
Let $M$ be a projective hyperk\"ahler manifold of dimension $2n$.
Let $B$ be a smooth projective variety of dimension $0<\mathop{\mathrm{dim}}\nolimits B < 2n$ and let $h\colon M\to B$ be a surjective morphism with connected fibers.
Then $h$ is a Lagrangian fibration, and $B\cong \ensuremath{\mathbb{P}}^n$.
\end{Thm}
This result explains the importance of Conjecture \ref{conj:SYZ}. In addition,
the existence of a Lagrangian fibration is equivalent to the existence of a single Lagrangian
torus in $M$ (see \cite{GLR:Lagrangian1, HW:Lagrangian,Matsushita:BeauvilleConj}, based on previous results in
\cite{Amerik:BeauvilleConjDim4,GLR:Lagrangian2}).
The examples of hyperk\"ahler manifolds we will consider are moduli spaces of stable complexes,
as explained by the theorem below. It has been proven for moduli of sheaves
in \cite[Sections 7 \& 8]{Yoshioka:Abelian}, and generalized to Bridgeland stability
conditions in \cite[Theorem 6.10 \& Section 7]{BM:projectivity}:
\begin{Thm}[Huybrechts-O'Grady-Yoshioka]\label{thm:ModuliSpacesAreHK}
Let $(X,\alpha)$ be a twisted K3 surface and let $\ensuremath{\mathbf v}\in H^*_{\mathrm{alg}}(X,\alpha,\ensuremath{\mathbb{Z}})$ be a primitive vector with $\ensuremath{\mathbf v}^2\geq -2$.
Let $\sigma\in\mathop{\mathrm{Stab}}\nolimits^\dagger(X,\alpha)$ be a generic stability condition with respect to $\ensuremath{\mathbf v}$.
Then:
\begin{enumerate}
\item $M_{\sigma}(\ensuremath{\mathbf v})$ is a projective hyperk\"ahler manifold, deformation-equivalent to the
Hilbert scheme of points on a K3 surface.
\item The Mukai morphism induces an isomorphism
\begin{itemize}
\item $\theta_{\sigma, \ensuremath{\mathbf v}}\colon \ensuremath{\mathbf v}^\perp \xrightarrow{\sim} \mathrm{NS}(M_{\sigma}(\ensuremath{\mathbf v}))$, if $\ensuremath{\mathbf v}^2>0$;
\item $\theta_{\sigma, \ensuremath{\mathbf v}}\colon \ensuremath{\mathbf v}^\perp/\ensuremath{\mathbf v} \xrightarrow{\sim} \mathrm{NS}(M_{\sigma}(\ensuremath{\mathbf v}))$, if $\ensuremath{\mathbf v}^2=0$.
\end{itemize}
Under this isomorphism, the quadratic Beauville-Bogomolov form for $\mathrm{NS}(M_{\sigma}(\ensuremath{\mathbf v}))$
coincides with the quadratic form of the Mukai pairing on $(X,\alpha)$.
\end{enumerate}
\end{Thm}
Here $\theta_{\sigma, \ensuremath{\mathbf v}}$ is the Mukai morphism as in Remark \ref{rmk:comparison}, induced
by a (quasi-)universal family. We will often drop $\sigma$ or $\ensuremath{\mathbf v}$ from the notation.
It extends to an isomorphism of Hodge structures, identifying
the orthogonal complement $\ensuremath{\mathbf v}^{\perp, \mathop{\mathrm{tr}}\nolimits}$ inside the
whole cohomology $H^*(X,\alpha,\ensuremath{\mathbb{Z}})$ (rather than its algebraic part) with $H^2(M_{\sigma}(\ensuremath{\mathbf v}), \ensuremath{\mathbb{Z}})$.
The following result is Corollary 9.9 in \cite{Eyal:survey} for the untwisted case $\alpha = 1$; by
deformation techniques, the result also holds in the twisted case:
\begin{Thm}[{\cite{Verbitsky:torelli}, \cite{Eyal:survey}}]
For $\ensuremath{\mathbf v}$ primitive and $\ensuremath{\mathbf v}^2>0$, the embedding $H^2(M_{\sigma}(\ensuremath{\mathbf v}), \ensuremath{\mathbb{Z}}) \cong \ensuremath{\mathbf v}^{\perp, \mathop{\mathrm{tr}}\nolimits}\ensuremath{\hookrightarrow} H^*(X,\alpha, \ensuremath{\mathbb{Z}})$ of integral Hodge
structures determines the birational class of $M_{\sigma}(\ensuremath{\mathbf v})$.
\end{Thm}
However, as indicated in the introduction, we only need the implication that
birational moduli spaces have isomorphic extended Hodge structures.
We will also need the following special case of a result by Namikawa and Wierzba:
\begin{Thm}[{\cite[Theorem 1.2 (ii)]{Wierzba:contractions} and \cite[Proposition
1.4]{Namikawa:deformation}}] \label{thm:NamikawaWierzba}
Let $M$ be a projective hyperk\"ahler manifold of dimension $2n$, and let $\overline{M}$
be a projective normal variety. Let $\pi \colon M\to\overline{M}$ be a birational projective morphism.
We denote by $S_i$ the set of points $p\in \overline{M}$ such that $\mathop{\mathrm{dim}}\nolimits \pi^{-1}(p)=i$.
Then $\mathop{\mathrm{dim}}\nolimits S_i \leq 2n-2i$.
In particular, if $\pi$ contracts a divisor $D\subset M$, we must have $\mathop{\mathrm{dim}}\nolimits \pi(D) = 2n-2$.
\end{Thm}
Consider a non-primitive vector $\ensuremath{\mathbf v}$. As shown by O'Grady and Kaledin-Lehn-Sorger, the moduli space
$M_{\sigma}(\ensuremath{\mathbf v})$ can still be thought of as a singular hyperk\"ahler manifold, in the following
sense:
\begin{Def}\label{def:SingularHK}
A normal projective variety $M$ is said to have \emph{symplectic singularities} if
\begin{itemize}
\item the smooth part $M_\mathrm{reg}\subset M$ admits a symplectic 2-form $\omega$, such that
\item for any resolution $f\colon \widetilde M\to M$, the pull-back of $\omega$ to
$f^{-1}(M_\mathrm{reg})$ extends to a holomorphic form on $\widetilde M$.
\end{itemize}
\end{Def}
Given a hyperk\"ahler manifold $M$ and a dominant rational map $M\dashrightarrow \overline{M}$, where $\overline{M}$ is a normal projective variety with symplectic singularities,
it follows from the definitions that $\mathop{\mathrm{dim}}\nolimits(M)=\mathop{\mathrm{dim}}\nolimits(\overline{M})$. This explains the relevance of
the following theorem; our results in \cite{BM:projectivity} reduce it to the case of moduli of sheaves:
\begin{Thm}[{\cite{OGrady} and \cite{KLS:SingSymplecticModuliSpaces}}]
\label{thm:KLS}
Let $(X,\alpha)$ be a twisted K3 surface and let $\ensuremath{\mathbf v}=m\ensuremath{\mathbf v}_0\in H^*_{\mathrm{alg}}(X,\alpha,\ensuremath{\mathbb{Z}})$ be a Mukai vector with $\ensuremath{\mathbf v}_0$ primitive and $\ensuremath{\mathbf v}_0^2\geq 2$.
Let $\sigma\in\mathop{\mathrm{Stab}}\nolimits^\dagger(X,\alpha)$ be a generic stability condition with respect to $\ensuremath{\mathbf v}$.
Then $M_{\sigma}(\ensuremath{\mathbf v})$ has symplectic singularities.
\end{Thm}
\section{Harder-Narasimhan filtrations in families}\label{sec:HNfamily}
In this section, we will show that results by Abramovich-Polishchuk and Toda imply the existence
of HN filtrations in families, see Theorem \ref{thm:HNfamily}.
The results we present work equally well in the twisted context; to simplify notation, we only state
the untwisted case.
Let $Y$ be a smooth projective variety over $\ensuremath{\mathbb{C}}$.
We will write $\mathrm{D}_{\mathrm{qc}}(Y)$ for the unbounded derived category of quasi-coherent sheaves. Pick a lattice
$\Lambda$ and a map $\ensuremath{\mathbf v}$ for the bounded derived category $\mathrm{D}^{b}(Y)$ as in Definition \ref{def:Bridgeland},
and let $\sigma$ be a Bridgeland stability condition on $\mathrm{D}^{b}(Y)$.
\begin{Def} \label{def:openness}
We say $\sigma$ satisfies \emph{openness of stability} if the following condition holds: for any
scheme $S$ of finite type over $\ensuremath{\mathbb{C}}$, and for any $\ensuremath{\mathcal E} \in \mathrm{D}^{b}(S\times Y)$ such that its derived
restriction $\ensuremath{\mathcal E}_s$ is a $\sigma$-semistable object of $\mathrm{D}^{b}(Y)$ for some $s \in S$,
there exists an open neighborhood
$U \subset S$ of $s$ such that $\ensuremath{\mathcal E}_{s'}$ is $\sigma$-semistable for all $s' \in U$.
\end{Def}
\begin{Thm}[{\cite[Section 3]{Toda:K3}}] \label{thm:Todaopenness}
Openness of stability holds when $Y$ is a K3 surface and $\sigma$ is
a stability condition in the connected component $\mathop{\mathrm{Stab}}\nolimits^\dag(Y)$.\footnote{In \cite[Section 3]{Toda:K3}, this Theorem is only stated for
families $\ensuremath{\mathcal E}$ satisfying $\mathop{\mathrm{Ext}}\nolimits^{<0}(\ensuremath{\mathcal E}_s, \ensuremath{\mathcal E}_s) = 0$ for all $s \in S$. However, Toda's proof
in Lemma 3.13 and Proposition 3.18 never uses that assumption.}
\end{Thm}
\begin{Thm} \label{thm:HNfamily}
Let $\sigma = (Z, \ensuremath{\mathcal A}) \in \mathop{\mathrm{Stab}}\nolimits(Y)$ be an algebraic stability condition satisfying openness of stability.
Assume we are given an irreducible variety $S$ over $\ensuremath{\mathbb{C}}$, and an object
$\ensuremath{\mathcal E} \in \mathrm{D}^{b}(S\times Y)$. Then there exists a system of maps
\begin{equation} \label{eq:HNfamily}
0 = \ensuremath{\mathcal E}^0 \to \ensuremath{\mathcal E}^1 \to \ensuremath{\mathcal E}^2 \to \dots \to \ensuremath{\mathcal E}^m = \ensuremath{\mathcal E}
\end{equation}
in $\mathrm{D}^{b}(S \times Y)$, and an open subset $U \subset S$ with the following property:
for any $s \in U$, the derived restriction of the system of maps \eqref{eq:HNfamily}
\[
0 = \ensuremath{\mathcal E}_s^0 \to \ensuremath{\mathcal E}_s^1 \to \ensuremath{\mathcal E}_s^2 \to \dots \to \ensuremath{\mathcal E}_s^m = \ensuremath{\mathcal E}_s
\]
is the HN filtration of $\ensuremath{\mathcal E}_s$.
\end{Thm}
The proof is based on the notion of constant family of t-structures due to Abramovich and Polishchuk,
constructed in \cite{Abramovich-Polishchuk:t-structures} (in case $S$ is smooth) and
\cite{Polishchuk:families-of-t-structures} (in general).
Throughout the remainder of this section, we will assume that $\sigma$ and $S$ satisfy the
assumptions of Theorem \ref{thm:HNfamily}.
A t-structure is called \emph{close to Noetherian} if it can be obtained via tilting from a
t-structure whose heart is Noetherian.
For $\phi \in \ensuremath{\mathbb{R}}$, the category
$\ensuremath{\mathcal P}((\phi-1, \phi]) \subset \mathrm{D}^{b}(Y)$ is the heart of a close to Noetherian bounded t-structure on $\mathrm{D}^{b}(Y)$ given by
$\ensuremath{\mathcal D}^{\le 0} = \ensuremath{\mathcal P}((\phi -1, +\infty))$ and $\ensuremath{\mathcal D}^{\ge 0} = \ensuremath{\mathcal P}((-\infty, \phi])$ (see the example
discussed at the end of \cite[Section 1]{Polishchuk:families-of-t-structures}).
In this situation, Abramovich and Polishchuk's work
induces a bounded t-structure $(\ensuremath{\mathcal D}^{\le 0}_S, \ensuremath{\mathcal D}^{\ge 0}_S)$ on $\mathrm{D}^{b}(S \times Y)$; we
paraphrase their main results as follows:
\begin{Thm}[{\cite{Abramovich-Polishchuk:t-structures, Polishchuk:families-of-t-structures}}] \label{thm:APP}
Let $\ensuremath{\mathcal A}$ be the heart of a close to Noetherian bounded t-structure $(\ensuremath{\mathcal D}^{\le 0}, \ensuremath{\mathcal D}^{\ge 0})$ on $\mathrm{D}^{b}(Y)$.
Denote by $\ensuremath{\mathcal A}_{qc} \subset \mathrm{D}_{\mathrm{qc}}(Y)$ the closure of $\ensuremath{\mathcal A}$ under infinite coproducts
in the derived category of quasi-coherent sheaves.
\begin{enumerate}
\item \label{enum:deftstruct}
For any scheme $S$ of finite type over $\ensuremath{\mathbb{C}}$ there is a close to Noetherian bounded t-structure
$(\ensuremath{\mathcal D}_S^{\le 0}, \ensuremath{\mathcal D}_S^{\ge 0})$
on $\mathrm{D}^{b}(S\times Y)$, whose heart $\ensuremath{\mathcal A}_S$ is characterized by
\[ \ensuremath{\mathcal E} \in \ensuremath{\mathcal A}_S \Leftrightarrow (p_Y)_* \left(\ensuremath{\mathcal E} |_{Y \times U}\right) \in \ensuremath{\mathcal A}_{qc}
\quad \text{for every open affine $U \subset S$}.
\]
\item \label{enum:sheaf}
The above construction defines a sheaf of t-structures over $S$: when $S = \bigcup_i U_i$
is an open covering of $S$, then $\ensuremath{\mathcal E} \in \ensuremath{\mathcal A}_S$ if and only if
$\ensuremath{\mathcal E} |_{Y \times U_i} \in \ensuremath{\mathcal A}_{U_i}$ for every $i$. In particular, for $i \colon U \subset S$ open, the
restriction functor $i^*$ is t-exact.
\item \label{enum:pushforwardexact}
When $i \colon S' \subset S$ is a closed subscheme, then $i_*$ is t-exact, and $i^*$ is t-right exact.
\end{enumerate}
\end{Thm}
We briefly comment on the statements that are not explicitly mentioned in
\cite[Theorem 3.3.6]{Polishchuk:families-of-t-structures}: From part (i) of
\cite[Theorem 3.3.6]{Polishchuk:families-of-t-structures}, it follows that the t-structure
constructed there on $\mathrm{D}(S \times Y)$ descends to a bounded t-structure on
$\mathrm{D}^{b}(S \times Y)$. To prove that the push-forward in claim \eqref{enum:pushforwardexact} is t-exact,
we first use the sheaf property to reduce to the case where $S$ is affine; in this case, the claim
follows by construction. By adjointness, it follows that $i^*$ is t-right exact.
For an algebraic stability condition $\sigma = (Z, \ensuremath{\mathcal P})$ on $\mathrm{D}^{b}(Y)$ and a phase
$\phi \in \ensuremath{\mathbb{R}}$, we will from now on
denote its associated t-structure by $\ensuremath{\mathcal P}(>\phi) = \ensuremath{\mathcal D}^{\le -1}$, $\ensuremath{\mathcal P}(\le \phi) = \ensuremath{\mathcal D}^{\ge 0}$, and
the associated truncation functors by $\tau^{>\phi}, \tau^{\le \phi}$.
By \cite[Lemma 2.1.1]{Polishchuk:families-of-t-structures}, it induces a t-structure on
$\mathrm{D}_{\mathrm{qc}}(Y)$, which we denote by $\ensuremath{\mathcal P}_{qc}(>\phi), \ensuremath{\mathcal P}_{qc}(\le \phi)$.
For the t-structure on $\mathrm{D}^{b}(S \times Y)$ induced via Theorem \ref{thm:APP}, we will similarly write
$\ensuremath{\mathcal P}_S(>\phi), \ensuremath{\mathcal P}_S(\le \phi)$, and $\tau_S^{>\phi}, \tau_S^{\le \phi}$.
We start with a technical observation:
\begin{Lem} \label{lem:trivialbuttechnical}
The t-structures on $\mathrm{D}^{b}(S \times Y)$ constructed via Theorem \ref{thm:APP} satisfy the following
compatibility relation:
\begin{equation} \label{eq:intersect}
\bigcap_{\epsilon > 0} \ensuremath{\mathcal P}_S(\le \phi + \epsilon) = \ensuremath{\mathcal P}_S(\le \phi).\end{equation}
\end{Lem}
\begin{proof}
Assume $\ensuremath{\mathcal E}$ lies in the intersection on the left-hand side of \eqref{eq:intersect}.
By the sheaf property, we may assume that $S$ is affine.
The assumption implies
$(p_Y)_* \ensuremath{\mathcal E} \in \ensuremath{\mathcal P}_{qc}(\le \phi + \epsilon)$ for all $\epsilon > 0$.
By \cite[Lemma 2.1.1]{Polishchuk:families-of-t-structures}, we can
describe $\ensuremath{\mathcal P}_{qc}(\le \phi + \epsilon) \subset \mathrm{D}_{\mathrm{qc}}(Y)$ as the right orthogonal complement of
$\ensuremath{\mathcal P}(>\phi + \epsilon) \subset \mathrm{D}^{b}(Y)$ inside $\mathrm{D}_{\mathrm{qc}}(Y)$; thus we obtain
\begin{align*}
\bigcap_{\epsilon > 0} \ensuremath{\mathcal P}_{qc}(\le \phi + \epsilon)
= \bigcap_{\epsilon > 0} \bigl(\ensuremath{\mathcal P}(> \phi + \epsilon) \bigr)^\perp
= \Bigl( \bigcup_{\epsilon > 0} \ensuremath{\mathcal P}(>\phi + \epsilon) \Bigr)^\perp
= \Bigl( \ensuremath{\mathcal P}(>\phi) \Bigr)^\perp
= \ensuremath{\mathcal P}_{qc}(\le \phi).
\end{align*}
Hence $(p_Y)_* \ensuremath{\mathcal E} \in \ensuremath{\mathcal P}_{qc}(\le \phi)$, proving the lemma.
\end{proof}
We next observe that the truncation functors $\tau_S^{> \phi}, \tau_S^{\le \phi}$ induce
a slicing on $\mathrm{D}^{b}(S\times Y)$. (See Definition \ref{def:slicing} for the notion of slicing on
a triangulated category.)
\begin{Lem} \label{lem:slicingS}
Assume that $\sigma = (Z, \ensuremath{\mathcal P})$ is an algebraic stability condition, and $\ensuremath{\mathcal P}_S(> \phi),
\ensuremath{\mathcal P}_S(\le \phi)$ are as defined above.
There is a slicing $\ensuremath{\mathcal P}_S$ on $\mathrm{D}^{b}(S\times Y)$ defined by
\[
\ensuremath{\mathcal P}_S(\phi) = \ensuremath{\mathcal P}_S(\le \phi) \cap \bigcap_{\epsilon > 0} \ensuremath{\mathcal P}_S(> \phi - \epsilon).
\]
\end{Lem}
Note that $\ensuremath{\mathcal P}_S(\phi)$ cannot be characterized by the analogue of Theorem
\ref{thm:APP}, part \eqref{enum:deftstruct}.
For example, consider the case where $Y$ is a curve and $(Z, \ensuremath{\mathcal P})$ the standard
stability condition corresponding to classical slope-stability in $\mathop{\mathrm{Coh}}\nolimits Y$.
Then $\ensuremath{\mathcal P}(1) \subset \mathop{\mathrm{Coh}}\nolimits Y$ is the category of
torsion sheaves, and $\ensuremath{\mathcal P}_S(1) \subset \mathop{\mathrm{Coh}}\nolimits (S \times Y)$ is the category of sheaves $\ensuremath{\mathcal F}$ that are torsion
relative to $S$. However, for $U \subset S$ affine and a non-trivial family $\ensuremath{\mathcal F}$, the
push-forward $(p_Y)_* \ensuremath{\mathcal F}|_U$ is never a torsion sheaf.
\begin{proof}
By standard arguments, it is sufficient to construct a HN filtration for any object $\ensuremath{\mathcal E} \in \ensuremath{\mathcal A}_S := \ensuremath{\mathcal P}_S(0, 1]$.
In particular, since $\sigma$ is algebraic, we can assume that both $\ensuremath{\mathcal A}:=\ensuremath{\mathcal P}(0,1]$ and $\ensuremath{\mathcal A}_S$ are Noetherian.
For any $\phi \in (0, 1]$, we have $\ensuremath{\mathcal P}_S(\phi, \phi + 1] \subset \langle \ensuremath{\mathcal A}_S, \ensuremath{\mathcal A}_S[1] \rangle$.
By \cite[Lemma 1.1.2]{Polishchuk:families-of-t-structures}, this induces a torsion pair
$(\ensuremath{\mathcal T}_\phi, \ensuremath{\mathcal F}_\phi)$ on $\ensuremath{\mathcal A}_S$ with
\[
\ensuremath{\mathcal T}_\phi = \ensuremath{\mathcal A}_S \cap \ensuremath{\mathcal P}_S(\phi, \phi + 1] \quad \text{and} \quad
\ensuremath{\mathcal F}_\phi = \ensuremath{\mathcal A}_S \cap \ensuremath{\mathcal P}_S(\phi-1, \phi].\]
Let $T_\phi \ensuremath{\hookrightarrow} \ensuremath{\mathcal E} \ensuremath{\twoheadrightarrow} F_\phi$ be the induced short exact sequence in $\ensuremath{\mathcal A}_S$.
Assume $\phi < \phi'$; since $\ensuremath{\mathcal F}_{\phi} \subset \ensuremath{\mathcal F}_{\phi'}$, the surjection
$\ensuremath{\mathcal E} \ensuremath{\twoheadrightarrow} F_\phi$ factors via
$\ensuremath{\mathcal E} \ensuremath{\twoheadrightarrow} F_{\phi'} \ensuremath{\twoheadrightarrow} F_\phi$. Since $\ensuremath{\mathcal A}_S$ is Noetherian, the
set of induced quotients $\stv{F_\phi}{\phi \in (0, 1]}$ of $\ensuremath{\mathcal E}$ must be finite.
In addition, if $F_{\phi} \cong F_{\phi'}$, we must also have
$F_{\phi''} \cong F_{\phi}$ for any $\phi'' \in (\phi, \phi')$.
Thus, there exist real numbers $\phi_0 = 1 > \phi_1 > \phi_2 > \dots > \phi_l > \phi_{l+1} = 0$ such that
$F_\phi$ is constant for $\phi \in (\phi_{i+1}, \phi_i)$, but such that
$F_{\phi_i - \epsilon} \neq F_{\phi_i + \epsilon}$ for $0 < \epsilon \ll 1$. Let us assume for simplicity that
$F_{\phi_1 + \epsilon} \cong \ensuremath{\mathcal E}$; the other case is treated similarly by setting $F^1 = F_{\phi_1 +
\epsilon}$, and shifting all other indices by one.
For $i = 1, \dots, l$ we set
\begin{itemize}
\item $F^i := F_{\phi_i - \epsilon}$,
\item $\ensuremath{\mathcal E}^i := \mathop{\mathrm{Ker}}\nolimits (\ensuremath{\mathcal E} \ensuremath{\twoheadrightarrow} F^i)$, and
\item $A^i = \ensuremath{\mathcal E}^i/\ensuremath{\mathcal E}^{i-1}$.
\end{itemize}
We have $\ensuremath{\mathcal E}^i \in \ensuremath{\mathcal P}_S(> \phi_i - \epsilon)$ and
$\ensuremath{\mathcal E}^{i-1} = \tau_S^{>\phi_i + \epsilon} \ensuremath{\mathcal E}^i$ for all sufficiently small $\epsilon > 0$. Hence the quotient
$A^i$ satisfies, for all $\epsilon > 0$,
\begin{itemize}
\item $A^i \in \ensuremath{\mathcal P}_S(>\phi_i - \epsilon)$,
\item $A^i \in \ensuremath{\mathcal P}_S(\le \phi_i + \epsilon)$.
\end{itemize}
The latter implies $A^i \in \ensuremath{\mathcal P}_S(\le \phi_i)$ by Lemma \ref{lem:trivialbuttechnical}.
By definition, we obtain $A^i \in \ensuremath{\mathcal P}_S(\phi_i)$.
Finally, we have $F^l \in \ensuremath{\mathcal P}_S(0, 1] \cap \ensuremath{\mathcal P}_S(\le \epsilon)$ for all $\epsilon > 0$. Using
Lemma \ref{lem:trivialbuttechnical} again, we obtain $F^l = 0$, and thus $\ensuremath{\mathcal E}^l = \ensuremath{\mathcal E}$.
Thus the $\ensuremath{\mathcal E}^i$ induce a HN filtration as claimed.
\end{proof}
The following lemma is an immediate extension of
\cite[Proposition 3.5.3]{Abramovich-Polishchuk:t-structures}:
\begin{Lem}\label{lem:densestable}
Assume that $\ensuremath{\mathcal E} \in \ensuremath{\mathcal P}_S(\phi)$ for some $\phi \in \ensuremath{\mathbb{R}}$,
and that $\ensuremath{\mathcal E}_s \neq 0$ for $s \in S$ generic.
Then there exists a dense subset $Z \subset S$, such that
$\ensuremath{\mathcal E}_s$ is semistable of phase $\phi$ for all $s \in Z$.
\end{Lem}
\begin{proof}
By \cite[Proposition 3.5.3]{Abramovich-Polishchuk:t-structures}, applied to the smooth locus
of $S$, there exists a dense
subset $Z \subset S$ such that $\ensuremath{\mathcal E}_s \in \ensuremath{\mathcal P}((\phi-1, \phi])$. Since
$\ensuremath{\mathcal E} \in \ensuremath{\mathcal P}_S(>\phi - \epsilon)$ for all $\epsilon > 0$, and since $i_s^*$ is t-right exact, we also have
$\ensuremath{\mathcal E}_s \in \ensuremath{\mathcal P}(>\phi - \epsilon)$ for all $\epsilon > 0$.
Considering the HN filtration of $\ensuremath{\mathcal E}_s$, this shows that $\ensuremath{\mathcal E}_s \in \ensuremath{\mathcal P}(\phi)$ for all $s \in Z$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:HNfamily}]
The statement now follows easily from the above two lemmas. First of all, under
the assumption of openness of stability, the dense subset $Z$ of Lemma \ref{lem:densestable} may of
course be taken to be open.
Given any $\ensuremath{\mathcal E} \in \mathrm{D}^{b}(S\times Y)$, let
\begin{equation}
0 = \ensuremath{\mathcal E}^0 \to \ensuremath{\mathcal E}^1 \to \dots \to \ensuremath{\mathcal E}^m = \ensuremath{\mathcal E}
\end{equation}
be the HN filtration with respect to the slicing of Lemma \ref{lem:slicingS}, and
let $A^j$ be the HN filtration quotients fitting in the exact triangle
$\ensuremath{\mathcal E}^{j-1} \to \ensuremath{\mathcal E}^j \to A^j$. Let
$j_1, \dots, j_l$ be the indices for which the generic fiber $i_s^* A^j$ does not vanish,
and let $\phi_i$ be the phase of $A^{j_i}$.
Then we claim that
\begin{equation} \label{eq:EHNslicing}
0 = \ensuremath{\mathcal E}^0 \to \ensuremath{\mathcal E}^{j_1} \to \ensuremath{\mathcal E}^{j_2} \to \dots \to \ensuremath{\mathcal E}^m = \ensuremath{\mathcal E} \end{equation}
has
the desired property. Indeed, there is an open subset $U$ such that for all $s \in U$,
the fibers
$A^{j_i}_s$ are semistable for all $i = 1, \dots, l$, and such that $A^j_s = 0$ for all
$j \notin \{j_1, \dots, j_l\}$. Then, for each such $s$, the restriction
of the sequence of maps \eqref{eq:EHNslicing}
via $i_s^*$ induces a sequence of maps that satisfies all properties of a HN filtration.
\end{proof}
\section{The hyperbolic lattice associated to a wall}\label{sec:hyperbolic}
Our second main tool will be a rank two hyperbolic lattice associated to any wall.
Let $(X,\alpha)$ be a twisted K3 surface.
Fix a primitive vector $\ensuremath{\mathbf v} \in H^*_\mathrm{alg}(X,\alpha,\ensuremath{\mathbb{Z}})$ with $\ensuremath{\mathbf v}^2 > 0$, and a wall $\ensuremath{\mathcal W}$ of the chamber
decomposition with respect to $\ensuremath{\mathbf v}$.
\begin{Prop} \label{prop:HW}
For each such wall $\ensuremath{\mathcal W}$, let $\ensuremath{\mathcal H}_\ensuremath{\mathcal W} \subset H^*_\mathrm{alg}(X,\alpha,\ensuremath{\mathbb{Z}})$ be the set of classes defined by
\[ \ensuremath{\mathbf w} \in \ensuremath{\mathcal H}_\ensuremath{\mathcal W} \quad \Leftrightarrow \quad \Im \frac{Z(\ensuremath{\mathbf w})}{Z(\ensuremath{\mathbf v})} = 0 \quad
\text{for all $\sigma = (Z, \ensuremath{\mathcal P}) \in \ensuremath{\mathcal W}$.} \]
Then $\ensuremath{\mathcal H}_\ensuremath{\mathcal W}$ has the following properties:
\begin{enumerate}
\item \label{enum:signature}
It is a primitive sublattice of rank two and of signature $(1, -1)$ (with respect to the restriction
of the Mukai form).
\item \label{enum:HNfactorsinHW}
Let $\sigma_+, \sigma_-$ be two sufficiently close and generic stability conditions on opposite sides
of the wall $\ensuremath{\mathcal W}$, and consider any $\sigma_+$-stable object $E \in M_{\sigma_+}(\ensuremath{\mathbf v})$.
Then any HN filtration factor $A_i$ of $E$ with respect to $\sigma_-$ has
Mukai vector $\ensuremath{\mathbf v}(A_i)$ contained in $\ensuremath{\mathcal H}_\ensuremath{\mathcal W}$.
\item \label{enum:semistablehasfactorsinHW}
If $\sigma_0$ is a generic stability condition on the wall $\ensuremath{\mathcal W}$, the conclusion of the
previous claim also holds for any $\sigma_0$-semistable object $E$ of class $\ensuremath{\mathbf v}$.
\item \label{enum:JHfactorsinHW}
Similarly, let $E$ be any object with $\ensuremath{\mathbf v}(E) \in \ensuremath{\mathcal H}_\ensuremath{\mathcal W}$, and assume that it is
$\sigma_0$-stable for a generic stability condition $\sigma_0 \in \ensuremath{\mathcal W}$.
Then every Jordan-H\"older factor of $E$ with respect to $\sigma_0$ will have Mukai vector contained
in $\ensuremath{\mathcal H}_\ensuremath{\mathcal W}$.
\end{enumerate}
\end{Prop}
The precise meaning of ``sufficiently close'' will become apparent in the proof.
\begin{proof}
The first two claims of \eqref{enum:signature} are evident. To verify the claim on the signature,
first note that by the assumption $\ensuremath{\mathbf v}^2 > 0$, the lattice $\ensuremath{\mathcal H}_\ensuremath{\mathcal W}$ is either hyperbolic or
positive (semi-)definite. On the other hand,
consider a stability condition $\sigma = (Z, \ensuremath{\mathcal A})$ with $Z(\ensuremath{\mathbf v}) = -1$.
Since $(\Im Z)^2 > 0$ by Theorem \ref{thm:Bridgeland_coveringmap}, since
$\ensuremath{\mathcal H}_\ensuremath{\mathcal W}$ is contained in the orthogonal complement of $\Im Z$, and since
the algebraic Mukai lattice has signature $(2, \rho(X))$, this leaves the hyperbolic
case as the only possibility.
In order to prove the remaining claims, consider an $\epsilon$-neighborhood $B_\epsilon(\tau)$
of a generic stability condition $\tau \in \ensuremath{\mathcal W}$, with $0 < \epsilon \ll 1$.
Let $\mathfrak S_{\ensuremath{\mathbf v}}$ be the set of objects $E$ with $\ensuremath{\mathbf v}(E) = \ensuremath{\mathbf v}$, and that are semistable for
some stability condition in $B_\epsilon(\tau)$.
Let $\mathfrak U_{\ensuremath{\mathbf v}}$ be the set of classes $\ensuremath{\mathbf u} \in H^*_\mathrm{alg}(X,\alpha,\ensuremath{\mathbb{Z}})$ that can appear as Mukai
vectors of Jordan-H\"older factors of $E \in \mathfrak S_{\ensuremath{\mathbf v}}$, for any stability condition
$(Z', \ensuremath{\mathcal A}') \in B_\epsilon(\tau)$. As shown in the
proof of local finiteness of walls (see \cite[Proposition 9.3]{Bridgeland:K3} or \cite[Proposition
3.3]{localP2}), the set $\mathfrak U_\ensuremath{\mathbf v}$ is finite; indeed, such a class would have to satisfy
$\abs{Z'(\ensuremath{\mathbf u})} < \abs{Z'(\ensuremath{\mathbf v})}$. Hence, the union of all walls for all classes in $\mathfrak U_\ensuremath{\mathbf v}$ is still
locally finite.
To prove claim \eqref{enum:HNfactorsinHW}, we may
assume that $\ensuremath{\mathcal W}$ is the only wall separating $\sigma_+$ and $\sigma_-$, among all walls for classes
in $\mathfrak U_{\ensuremath{\mathbf v}}$. Let $\sigma_0 = (Z_0, \ensuremath{\mathcal P}_0) \in \ensuremath{\mathcal W}$ be a generic stability condition in the wall separating
the chambers of $\sigma_+, \sigma_-$. It follows that $E$ and all $A_i$ are
$\sigma_0$-semistable of the same phase, i.e.
$\Im \frac{Z_0(\ensuremath{\mathbf v}(A_i))}{Z_0(\ensuremath{\mathbf v})} = 0$.
Since this argument works for generic $\sigma_0$, we must have
$\ensuremath{\mathbf v}(A_i) \in \ensuremath{\mathcal H}_\ensuremath{\mathcal W}$ by the definition of $\ensuremath{\mathcal H}_\ensuremath{\mathcal W}$.
Claim \eqref{enum:semistablehasfactorsinHW} follows from the same discussion, and
\eqref{enum:JHfactorsinHW} similarly by considering the set of all walls for the classes
$\mathfrak U_{\ensuremath{\mathbf v}(E)}$ instead of $\mathfrak U_{\ensuremath{\mathbf v}}$.
\end{proof}
Our main approach is to characterize which hyperbolic lattices $\ensuremath{\mathcal H} \subset H^*_\mathrm{alg}(X,\alpha,\ensuremath{\mathbb{Z}})$ correspond to a wall, and
to determine the type of wall purely in terms of $\ensuremath{\mathcal H}$. We start by making the following definition:
\begin{Def} \label{def:potentialwall}
Let $\ensuremath{\mathcal H} \subset H^*_\mathrm{alg}(X,\alpha,\ensuremath{\mathbb{Z}})$ be a primitive rank two hyperbolic sublattice containing $\ensuremath{\mathbf v}$. A
\emph{potential wall} $\ensuremath{\mathcal W}$ associated to $\ensuremath{\mathcal H}$ is a
connected component of the real codimension one submanifold of stability conditions
$\sigma = (Z, \ensuremath{\mathcal P})$ which satisfy the condition that $Z(\ensuremath{\mathcal H})$ is contained in a line.
\end{Def}
\begin{Rem} \label{rem:HW}
The statements of Proposition \ref{prop:HW} are still valid when $\ensuremath{\mathcal W}$ is a potential
wall as in the previous definition.
\end{Rem}
\begin{Def} \label{def:PW}
Given any hyperbolic lattice $\ensuremath{\mathcal H} \subset H^*_\mathrm{alg}(X,\alpha,\ensuremath{\mathbb{Z}})$ of rank two containing $\ensuremath{\mathbf v}$, we denote
by $P_\ensuremath{\mathcal H} \subset \ensuremath{\mathcal H} \otimes \ensuremath{\mathbb{R}}$ the cone generated by integral classes $\ensuremath{\mathbf u} \in \ensuremath{\mathcal H}$ with
$\ensuremath{\mathbf u}^2 \ge 0$ and $(\ensuremath{\mathbf v}, \ensuremath{\mathbf u}) > 0$. We call $P_\ensuremath{\mathcal H}$ the \emph{positive cone} of $\ensuremath{\mathcal H}$, and
a class in $P_\ensuremath{\mathcal H} \cap \ensuremath{\mathcal H}$ is called a \emph{positive class}.
\end{Def}
The condition $(\ensuremath{\mathbf v}, \ensuremath{\mathbf u}) > 0$ just picks out one of the two components of the set of real
classes with $\ensuremath{\mathbf u}^2 > 0$. Observe that $P_\ensuremath{\mathcal H}$ can be an open or closed cone, depending on whether
the lattice contains integral classes $\ensuremath{\mathbf w}$ that are isotropic: $\ensuremath{\mathbf w}^2 = 0$.
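For example, if $\ensuremath{\mathcal H}$ is the hyperbolic plane $U$, spanned by isotropic classes $\ensuremath{\mathbf w}_1, \ensuremath{\mathbf w}_2$ with $(\ensuremath{\mathbf w}_1, \ensuremath{\mathbf w}_2) = 1$, and $\ensuremath{\mathbf v} = \ensuremath{\mathbf w}_1 + \ensuremath{\mathbf w}_2$, then for $\ensuremath{\mathbf u} = x \ensuremath{\mathbf w}_1 + y \ensuremath{\mathbf w}_2$ we have $\ensuremath{\mathbf u}^2 = 2xy$ and $(\ensuremath{\mathbf v}, \ensuremath{\mathbf u}) = x + y$, so $P_\ensuremath{\mathcal H}$ is the closed quadrant $x, y \ge 0$ (minus the origin), with the isotropic classes $\ensuremath{\mathbf w}_1, \ensuremath{\mathbf w}_2$ generating its boundary rays. If instead $\ensuremath{\mathcal H}$ contains no isotropic class, the boundary rays of the positive cone have irrational slope and contain no integral classes, so $P_\ensuremath{\mathcal H}$ is open.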
\begin{Prop} \label{prop:CW}
Let $\ensuremath{\mathcal W}$ be a potential wall associated to a hyperbolic rank two sublattice
$\ensuremath{\mathcal H} \subset H^*_\mathrm{alg}(X,\alpha,\ensuremath{\mathbb{Z}})$.
For any $\sigma = (Z, \ensuremath{\mathcal P}) \in \ensuremath{\mathcal W}$, let $C_\sigma \subset \ensuremath{\mathcal H} \otimes \ensuremath{\mathbb{R}}$ be the cone generated by classes
$\ensuremath{\mathbf u} \in \ensuremath{\mathcal H}$ satisfying the two conditions
\[ \ensuremath{\mathbf u}^2 \ge -2 \quad \text{and} \quad \Re \frac{Z(\ensuremath{\mathbf u})}{Z(\ensuremath{\mathbf v})} > 0.
\]
This cone does not depend on the choice of $\sigma \in \ensuremath{\mathcal W}$, and it contains $P_\ensuremath{\mathcal H}$.
If $\ensuremath{\mathbf u} \in C_\sigma$, then there exists a semistable object of class $\ensuremath{\mathbf u}$ for every
$\sigma' \in \ensuremath{\mathcal W}$. If $\ensuremath{\mathbf u} \notin C_\sigma$, then there does not exist a semistable object of class
$\ensuremath{\mathbf u}$ for generic $\sigma' \in \ensuremath{\mathcal W}$.
\end{Prop}
From here on, we will write $C_\ensuremath{\mathcal W}$ instead of $C_\sigma$, and call it the cone of effective classes
in $\ensuremath{\mathcal H}$. Given two different walls $\ensuremath{\mathcal W}_1$, $\ensuremath{\mathcal W}_2$, the corresponding effective cones
$C_{\ensuremath{\mathcal W}_1}, C_{\ensuremath{\mathcal W}_2}$ will only differ by spherical classes.
\begin{proof}
If $\ensuremath{\mathbf u}^2 \ge -2$, then by Theorem \ref{thm:nonempty} there exists a $\sigma$-semistable object
of class $\ensuremath{\mathbf u}$ for every $\sigma = (Z, \ensuremath{\mathcal P}) \in \ensuremath{\mathcal W}$. Hence $Z(\ensuremath{\mathbf u}) \neq 0$, i.e., we cannot
simultaneously have $\ensuremath{\mathbf u} \in \ensuremath{\mathcal H}$ (which implies $\Im \frac{Z(\ensuremath{\mathbf u})}{Z(\ensuremath{\mathbf v})} = 0$) and
$\Re \frac{Z(\ensuremath{\mathbf u})}{Z(\ensuremath{\mathbf v})} = 0$. Therefore, the condition $\Re \frac{Z(\ensuremath{\mathbf u})}{Z(\ensuremath{\mathbf v})} > 0$ is invariant
under deforming a stability condition inside $\ensuremath{\mathcal W}$, and $C_\sigma$ does not depend on
the choice of $\sigma \in \ensuremath{\mathcal W}$.
Now assume for contradiction that $P_\ensuremath{\mathcal H}$ is not contained in $C_\ensuremath{\mathcal W}$. Since $\ensuremath{\mathbf v} \in C_\ensuremath{\mathcal W}$,
this is only possible if there is a real class
$\ensuremath{\mathbf u} \in P_\ensuremath{\mathcal H}$ with $\Re \frac{Z(\ensuremath{\mathbf u})}{Z(\ensuremath{\mathbf v})} = 0$; after deforming $\sigma \in \ensuremath{\mathcal W}$ slightly, we may
assume $\ensuremath{\mathbf u}$ to be integral. As above, this implies $Z(\ensuremath{\mathbf u}) = 0$, in contradiction to the existence
of a $\sigma$-semistable object of class $\ensuremath{\mathbf u}$.
The statements about existence of semistable objects follow directly from
Theorem \ref{thm:nonempty}.
\end{proof}
\begin{Rem}\label{rmk:GenericOnTheWall}
Note that by construction, $C_\ensuremath{\mathcal W} \subset \ensuremath{\mathcal H} \otimes \ensuremath{\mathbb{R}}$ is strictly contained in a half-plane. In
particular, there are only finitely many classes in $C_\ensuremath{\mathcal W} \cap \bigl(\ensuremath{\mathbf v} - C_\ensuremath{\mathcal W}\bigr) \cap \ensuremath{\mathcal H}$ (in other
words, effective classes $\ensuremath{\mathbf u}$ such that
$\ensuremath{\mathbf v} - \ensuremath{\mathbf u}$ is also effective).
We will use this observation throughout in order to freely make
genericity assumptions: a generic stability condition $\sigma_0 \in \ensuremath{\mathcal W}$ will be a stability condition
that does not lie on any additional wall (other than $\ensuremath{\mathcal W}$) for any of the above-mentioned classes.
Similarly, by stability conditions $\sigma_+, \sigma_-$ \emph{nearby $\sigma_0$} we will mean
stability conditions that lie in the two chambers adjacent to $\sigma_0$ for the wall-and-chamber decompositions
with respect to any of the classes in $C_\ensuremath{\mathcal W} \cap \bigl(\ensuremath{\mathbf v} - C_\ensuremath{\mathcal W}\bigr) \cap \ensuremath{\mathcal H}$.
\end{Rem}
The behavior of the potential wall $\ensuremath{\mathcal W}$ is completely determined by $\ensuremath{\mathcal H}$ and its
effective cone $C_\ensuremath{\mathcal W}$:
\begin{Thm} \label{thm:walls}
Let $\ensuremath{\mathcal H} \subset H^*_\mathrm{alg}(X,\alpha,\ensuremath{\mathbb{Z}})$ be a primitive hyperbolic rank two sublattice containing $\ensuremath{\mathbf v}$.
Let $\ensuremath{\mathcal W} \subset \mathop{\mathrm{Stab}}\nolimits^\dag(X,\alpha)$ be a potential wall associated to $\ensuremath{\mathcal H}$ (see Definition \ref{def:potentialwall}).
The set $\ensuremath{\mathcal W}$ is a totally semistable wall if and only if there exists either an isotropic class
$\ensuremath{\mathbf w} \in \ensuremath{\mathcal H}$ with $(\ensuremath{\mathbf v}, \ensuremath{\mathbf w}) = 1$, or an effective spherical class $\ensuremath{\mathbf s} \in C_\ensuremath{\mathcal W} \cap \ensuremath{\mathcal H}$ with $(\ensuremath{\mathbf s}, \ensuremath{\mathbf v}) < 0$.
In addition:
\begin{enumerate}
\item \label{enum:niso-divisorial}
The set $\ensuremath{\mathcal W}$ is a wall inducing a divisorial contraction if one of the following three conditions
hold:
\begin{description*}
\item[(Brill-Noether)] there exists a spherical class $\ensuremath{\mathbf s} \in \ensuremath{\mathcal H}$ with $(\ensuremath{\mathbf s}, \ensuremath{\mathbf v}) = 0$, or
\item[(Hilbert-Chow)] there exists an isotropic class $\ensuremath{\mathbf w} \in \ensuremath{\mathcal H}$
with $(\ensuremath{\mathbf w}, \ensuremath{\mathbf v}) = 1$, or
\item[(Li-Gieseker-Uhlenbeck)] there exists an isotropic class $\ensuremath{\mathbf w} \in \ensuremath{\mathcal H}$
with $(\ensuremath{\mathbf w}, \ensuremath{\mathbf v}) = 2$.
\end{description*}
\item \label{enum:niso-flop}
Otherwise, if $\ensuremath{\mathbf v}$ can be written as the sum $\ensuremath{\mathbf v} = \ensuremath{\mathbf a} + \ensuremath{\mathbf b}$ of two positive\footnote{In the
sense of Definition \ref{def:PW}.} classes, or if there exists
a spherical class $\ensuremath{\mathbf s} \in \ensuremath{\mathcal H}$ with $0 < (\ensuremath{\mathbf s}, \ensuremath{\mathbf v}) \le \frac{\ensuremath{\mathbf v}^2}2$,
then $\ensuremath{\mathcal W}$ is a wall corresponding to a flopping contraction.
\item \label{enum:niso-fakeornothing}
In all other cases, $\ensuremath{\mathcal W}$ is either a fake wall (if it is a totally semistable wall), or it is
not a wall.
\end{enumerate}
\end{Thm}
The Gieseker-Uhlenbeck morphism from the moduli space of Gieseker semistable sheaves to the Uhlenbeck space of slope-semistable vector bundles was constructed in \cite{JunLi:Uhlenbeck}.
Many papers deal with birational transformations between moduli spaces of twisted Gieseker
semistable sheaves, induced by variations of the polarization.
In particular, we refer to \cite{Thaddeus:GIT-flips, DolgachevHu:Variation} for the general theory of variation of GIT quotients and \cite{EllingsrudGottsche:Variation, FriedmanQin:Variation, MatsukiWenthworth:TwistedVariation} for the case of sheaves on surfaces.
Theorem \ref{thm:walls} can be thought of as a generalization and completion of these results in the case of K3 surfaces.
\subsection*{Proof outline}
The proof of the above theorem will be broken into four sections. We will distinguish
two cases, depending on whether $\ensuremath{\mathcal H}$ contains isotropic classes:
\begin{Def} \label{def:isotropicwall}
We say that $\ensuremath{\mathcal W}$ is an \emph{isotropic} wall if $\ensuremath{\mathcal H}_\ensuremath{\mathcal W}$ contains an isotropic class.
\end{Def}
In Section \ref{sec:noniso-totsemistable}, we analyze totally semistable non-isotropic walls,
and Section \ref{sec:divisorialcontraction} describes non-isotropic walls corresponding to divisorial
contractions. In Section
\ref{sec:iso}, we use a Fourier-Mukai transform to reduce the treatment of isotropic walls to
the well-known behavior of the Li-Gieseker-Uhlenbeck morphism from the Gieseker moduli space to the Uhlenbeck
space. For the remaining cases, Section \ref{sec:flopping} describes whether it is a
flopping wall, a fake wall, or no wall at all.
To give an example of the strategy of our proof, consider a wall with
a divisor $D \subset
M_{\sigma_+}(\ensuremath{\mathbf v})$ of objects that become strictly semistable on the wall. We
use the contraction morphism $\pi^+$ of Theorem \ref{thm:contraction}; Theorem
\ref{thm:NamikawaWierzba} implies $\mathop{\mathrm{dim}}\nolimits \pi^+(D) \ge \mathop{\mathrm{dim}}\nolimits D-1 = \ensuremath{\mathbf v}^2$. Recall
that $\pi^+$ contracts a curve if the associated objects
have the same Jordan-H\"older factors. Intuitively, this means that the sum of the
dimensions of the moduli spaces parameterizing the Jordan-H\"older factors is at least
$\ensuremath{\mathbf v}^2$; a purely lattice-theoretic argument (using that moduli spaces
always have expected dimension) leads to a contradiction except
in the cases listed in the Theorem. To make this
argument rigorous, we use the relative Harder-Narasimhan filtration with respect to $\sigma_-$ in the
family parameterized by $D$; it induces a rational map from $D$ to a product of moduli
spaces of $\sigma_-$-stable objects.
The most technical part of our arguments deals with totally semistable walls induced by a spherical class.
We use a sequence of spherical twists to reduce to the previous cases,
see Proposition \ref{prop:sphericaltotallysemistable}.
\section{Totally semistable non-isotropic walls}
\label{sec:noniso-totsemistable}
In this section, we will analyze \emph{totally semistable walls}; while some of our intermediate
results hold in general, we will focus on the case where $\ensuremath{\mathcal H}$ does not contain an isotropic class.
The relevance of this follows from Theorem \ref{thm:nonempty}: in this case,
if the dimension of a moduli space $M_{\sigma}(\mathbf{u})$ is positive, then it is given by $\mathbf{u}^2 + 2$.
We will first describe the possible configurations of effective spherical classes in $C_\ensuremath{\mathcal W}$, and of
corresponding spherical objects with $\ensuremath{\mathbf v}(S) \in \ensuremath{\mathcal H}_\ensuremath{\mathcal W}$.
We start with the following classical argument of Mukai (cf.~\cite[Lemma 5.2]{Bridgeland:K3}):
\begin{Lem}[Mukai]\label{lem:Mukai}
Consider an exact sequence
$0\to A \to E \to B \to 0$
in the heart of a bounded t-structure $\ensuremath{\mathcal A} \subset \mathrm{D}^{b}(X, \alpha)$ with $\mathop{\mathrm{Hom}}\nolimits(A,B)=0$.
Then
\begin{equation*}
\mathrm{ext}^1(E,E)\geq\mathrm{ext}^1(A,A) + \mathrm{ext}^1(B,B).
\end{equation*}
\end{Lem}
The following is a well-known consequence of Mukai's lemma (cf.~\cite[Section 2]{HMS:generic_K3s}):
\begin{Lem} \label{lem:JHspherical}
Assume that $S$ is a $\sigma$-semistable object with $\mathop{\mathrm{Ext}}\nolimits^1(S, S) = 0$.
Then any Jordan-H\"older filtration factor of $S$ is spherical.
\end{Lem}
\begin{proof}
Pick any stable subobject $T \subset S$ of the same phase. Then there exists a short exact sequence
$ \widetilde T \ensuremath{\hookrightarrow} S \ensuremath{\twoheadrightarrow} R $
with the following two properties:
\begin{enumerate}
\item The object $\widetilde T$ is an iterated extension of $T$.
\item $\mathop{\mathrm{Hom}}\nolimits(T, R) = 0$.
\end{enumerate}
Indeed, this can easily be constructed inductively: we let $R_1 = S/T$. If $\mathop{\mathrm{Hom}}\nolimits(T, S/T) = 0$,
the subobject $\widetilde T = T$ already has the desired properties. Otherwise, any
non-zero morphism $T \to R_1$ is necessarily injective; if we let $R_2$ be its quotient, then the
kernel of $S \ensuremath{\twoheadrightarrow} R_2$ is a self-extension of $T$, and we can proceed inductively.
It follows that $\mathop{\mathrm{Hom}}\nolimits(\widetilde T, R) = 0$, and we can apply Lemma \ref{lem:Mukai} to conclude
that $\mathop{\mathrm{Ext}}\nolimits^1(\widetilde T, \widetilde T) = 0$. Hence
$(\ensuremath{\mathbf v}(\widetilde T), \ensuremath{\mathbf v}(\widetilde T)) < 0$, which also implies $(\ensuremath{\mathbf v}(T), \ensuremath{\mathbf v}(T)) < 0$. Thus $\ensuremath{\mathbf v}(T)$ is
spherical, too.
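The former inequality can be made explicit: by Serre duality on the (twisted) K3 surface and $\mathop{\mathrm{Ext}}\nolimits^1(\widetilde T, \widetilde T) = 0$, the Mukai pairing satisfies
\[ (\ensuremath{\mathbf v}(\widetilde T), \ensuremath{\mathbf v}(\widetilde T)) = \mathop{\mathrm{ext}}\nolimits^1(\widetilde T, \widetilde T) - 2 \mathop{\mathrm{hom}}\nolimits(\widetilde T, \widetilde T) = -2 \mathop{\mathrm{hom}}\nolimits(\widetilde T, \widetilde T) < 0, \]
and it implies the latter since $\ensuremath{\mathbf v}(\widetilde T)$ is a positive integer multiple of $\ensuremath{\mathbf v}(T)$.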
The lemma follows by induction on the length of $S$.
\end{proof}
\begin{Prop} \label{prop:stablespherical}
Let $\ensuremath{\mathcal W}$ be a potential wall associated to the primitive hyperbolic lattice $\ensuremath{\mathcal H}$, and
let $\sigma_0 = (Z_0, \ensuremath{\mathcal P}_0) \in \ensuremath{\mathcal W}$ be a generic stability condition with
$Z_0(\ensuremath{\mathcal H}) \subset \ensuremath{\mathbb{R}}$. Then $\ensuremath{\mathcal H}$ and $\sigma_0$ satisfy
one of the following mutually exclusive conditions:
\begin{enumerate}
\item \label{case:0spherical} The lattice $\ensuremath{\mathcal H}$ does not admit a spherical class.
\item \label{case:1spherical} The lattice $\ensuremath{\mathcal H}$ admits, up to sign, a unique spherical class, and
there exists a unique $\sigma_0$-stable object $S \in \ensuremath{\mathcal P}_0(1)$ with $\ensuremath{\mathbf v}(S) \in \ensuremath{\mathcal H}$.
\item \label{case:2spherical} The lattice $\ensuremath{\mathcal H}$ admits infinitely many spherical classes, and there exist
exactly two $\sigma_0$-stable spherical objects $S, T \in \ensuremath{\mathcal P}_0(1)$ with $\ensuremath{\mathbf v}(S), \ensuremath{\mathbf v}(T) \in \ensuremath{\mathcal H}$.
In this case, $\ensuremath{\mathcal H}$ is not isotropic.
\end{enumerate}
\end{Prop}
\begin{proof}
Given any spherical class $\ensuremath{\mathbf s} \in \ensuremath{\mathcal H}$, by Theorem \ref{thm:nonempty} there exists a
$\sigma_0$-semistable object $S$ with $\ensuremath{\mathbf v}(S) = \ensuremath{\mathbf s}$ and $S \in \ensuremath{\mathcal P}_0(1)$. If $\ensuremath{\mathcal H}$ admits a unique
spherical class, then by Proposition \ref{prop:HW} and Lemma \ref{lem:JHspherical}, $S$ must be
stable.
Hence it remains to consider the case where $\ensuremath{\mathcal H}$ admits two linearly independent spherical classes.
If we consider the Jordan-H\"older filtrations of $\sigma_0$-semistable objects of the
corresponding classes, and apply Proposition \ref{prop:HW} and Lemma \ref{lem:JHspherical},
we see that there must be two $\sigma_0$-stable objects $S, T$ whose Mukai vectors
are linearly independent.
Now assume that there are three stable spherical objects $S_1, S_2, S_3 \in \ensuremath{\mathcal P}_0(1)$, and
let $\ensuremath{\mathbf s}_i = \ensuremath{\mathbf v}(S_i)$. Since they are stable of the same phase, we have
$\mathop{\mathrm{Hom}}\nolimits(S_i, S_j) = 0$ for all $i \neq j$, as well as $\mathop{\mathrm{Ext}}\nolimits^k(S_i, S_j) = 0$ for $k < 0$.
Combined with Serre duality, this implies $(\ensuremath{\mathbf s}_i, \ensuremath{\mathbf s}_j) = \mathop{\mathrm{ext}}\nolimits^1(S_i, S_j) \ge 0$.
However, a rank two lattice of signature $(1, -1)$ can never contain
three spherical classes $\ensuremath{\mathbf s}_1, \ensuremath{\mathbf s}_2, \ensuremath{\mathbf s}_3$ with $(\ensuremath{\mathbf s}_i, \ensuremath{\mathbf s}_j) \ge 0$ for $i \neq j$. Indeed, we may assume
that $\ensuremath{\mathbf s}_1, \ensuremath{\mathbf s}_2$ are linearly independent. Let $m := (\ensuremath{\mathbf s}_1, \ensuremath{\mathbf s}_2) \ge 0$; since $\ensuremath{\mathcal H}$ has signature
$(1, -1)$, we have $m \ge 3$. If we write $\ensuremath{\mathbf s}_3 = x \ensuremath{\mathbf s}_1 + y \ensuremath{\mathbf s}_2$, we get the following implications:
\begin{eqnarray*}
(\ensuremath{\mathbf s}_1, \ensuremath{\mathbf s}_3 ) \ge 0 & \Rightarrow & y \ge \frac 2m x \\
(\ensuremath{\mathbf s}_2, \ensuremath{\mathbf s}_3 ) \ge 0 & \Rightarrow & y \le \frac m2 x \\
(\ensuremath{\mathbf s}_3, \ensuremath{\mathbf s}_3) = -2 & \Rightarrow & -2x^2 + 2m xy - 2y^2 < 0
\end{eqnarray*}
However, by solving the quadratic equation for $y$, it is immediate that
the term in the last inequality is positive in the range $\frac 2m x \le y \le \frac m2 x$ (see also
Figure \ref{fig:HWplanenohyperbola}).
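Concretely, the first two constraints force $x > 0$ (for $x < 0$ they are incompatible, and $x = 0$ would force $\ensuremath{\mathbf s}_3 = 0$), and evaluating $Q$ on the two boundary rays gives
\[ Q\!\left(x, \tfrac{2}{m}x\right) = \left(2 - \tfrac{8}{m^2}\right)x^2 > 0, \qquad
Q\!\left(x, \tfrac{m}{2}x\right) = \left(\tfrac{m^2}{2} - 2\right)x^2 > 0 \]
for $m \ge 3$; since $Q(x, \cdot)$ is a concave quadratic in $y$, it is positive on the whole interval $\frac 2m x \le y \le \frac m2 x$, contradicting $(\ensuremath{\mathbf s}_3, \ensuremath{\mathbf s}_3) = -2$.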
Finally, if $\ensuremath{\mathcal H}$ admits two linearly independent spherical classes $\ensuremath{\mathbf s}, \ensuremath{\mathbf t}$, then the group
generated by the associated reflections $\rho_{\ensuremath{\mathbf s}}, \rho_{\ensuremath{\mathbf t}}$ is infinite; the orbit of $\ensuremath{\mathbf s}$
consists of infinitely many spherical classes. Additionally, an isotropic class would be a rational
solution of $-2x^2 + 2m xy - 2y^2 = 0$, but the discriminant $m^2 - 4$ can never be a square when
$m$ is an integer $m \ge 3$.
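(For the last assertion, note that for any integer $m \ge 3$ we have
\[ (m-1)^2 = m^2 - 2m + 1 < m^2 - 4 < m^2, \]
so $m^2 - 4$ lies strictly between two consecutive perfect squares.)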
\end{proof}
\begin{wrapfigure}{r}{0.42\textwidth}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\clip(-2.9,-1.9) rectangle (3.2,2.2);
\draw[->,color=black] (-3.5,0) -- (3.2,0);
\foreach \x in {-3,-2,-1,1,2,3}
\draw[shift={(\x,0)},color=black] (0pt,2pt) -- (0pt,-2pt);
\draw[->,color=black] (0,-2.5) -- (0,2.2);
\foreach \y in {-2,-1,1,2}
\draw[shift={(0,\y)},color=black] (2pt,0pt) -- (-2pt,0pt);
\draw (0.5,0.9) node[anchor=north west] {$y = r_1 x$};
\draw (0.38,1.55) node[anchor=north west] {$ y = r_2 x $};
\draw [domain=-3.5:3.5] plot(\x,{(-0-2.23*\x)/-0.6});
\draw [domain=-3.5:3.5] plot(\x,{(-0--0.6*\x)/2.23});
\draw (0.67,2.1) node[anchor=north west] {$Q(x, y) > 0$};
\draw (-2.5,-0.72) node[anchor=north west] {$Q(x, y) > 0$};
\draw (-2.58,1.98) node[anchor=north west] {$Q(x, y) < 0$};
\draw (0.62,-1.35) node[anchor=north west] {$Q(x, y) < 0$};
\fill [color=black] (1,0) circle (1.5pt);
\draw[color=black] (1,-0.35) node {$S$};
\fill [color=black] (0,1) circle (1.5pt);
\draw[color=black] (-0.26,1) node {$T$};
\fill [color=black] (-1,0) circle (1.5pt);
\draw[color=black] (-0.99,0.29) node {$S[1]$};
\fill [color=black] (0,-1) circle (1.5pt);
\draw[color=black] (0.56,-1.07) node {$T[1]$};
\end{tikzpicture}
\caption{$\ensuremath{\mathcal H}_\ensuremath{\mathcal W}$, as oriented by $\sigma_+$}
\label{fig:HWplanenohyperbola}
\end{wrapfigure}
Whenever we are in case \eqref{case:2spherical}, we will denote the two $\sigma_0$-stable
spherical objects by $S, T$. We may assume that $S$ has smaller phase than $T$ with respect
to $\sigma_+$; conversely, $S$ has bigger phase than $T$ with respect to $\sigma_-$.
We will also write $\ensuremath{\mathbf s} := \ensuremath{\mathbf v}(S), \ensuremath{\mathbf t} := \ensuremath{\mathbf v}(T)$, and $m := (\ensuremath{\mathbf s}, \ensuremath{\mathbf t}) > 2$.
We identify $\ensuremath{\mathbb{R}}^2$ with $\ensuremath{\mathcal H}_\ensuremath{\mathcal W} \otimes \ensuremath{\mathbb{R}}$ by sending the standard basis
to $(\ensuremath{\mathbf s}, \ensuremath{\mathbf t})$; under this identification, the ordering of phases in $\ensuremath{\mathbb{R}}^2$
will be consistent with the ordering induced by $\sigma_+$.
We denote by $Q(x, y) = -2x^2 + 2mxy - 2y^2$ the pull-back of the quadratic form
induced by the Mukai pairing on $\ensuremath{\mathcal H}_\ensuremath{\mathcal W}$.
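Indeed, with respect to this basis the Mukai pairing gives
\[ (x \ensuremath{\mathbf s} + y \ensuremath{\mathbf t}, x \ensuremath{\mathbf s} + y \ensuremath{\mathbf t}) = x^2 \ensuremath{\mathbf s}^2 + 2xy\,(\ensuremath{\mathbf s}, \ensuremath{\mathbf t}) + y^2 \ensuremath{\mathbf t}^2 = -2x^2 + 2mxy - 2y^2 = Q(x, y). \]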
Let $r_1 < r_2$ be the two solutions of $-2r^2 + 2mr - 2 = 0$; they are both positive and irrational
(as $m^2 - 4$ cannot be a square for $m \ge 3$ integral). The positive cone $P_\ensuremath{\mathcal H}$ is thus the cone
between the two lines $y = r_i x$, and the effective cone $C_\ensuremath{\mathcal W}$ is the upper right quadrant
$x, y \ge 0$.
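Explicitly, the roots are
\[ r_{1,2} = \frac{m \mp \sqrt{m^2 - 4}}{2}, \qquad \text{with} \quad r_1 r_2 = 1 \quad \text{and} \quad r_1 + r_2 = m, \]
which makes their positivity evident; their irrationality is equivalent to $m^2 - 4$ not being a perfect square.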
We will first prove that the condition for the existence of totally semistable walls given in
Theorem \ref{thm:walls} is necessary in the case of non-isotropic walls. We start with an easy
numerical observation:
\begin{Lem} \label{lem:numericaldimensions}
Given $l > 1$ positive classes $\ensuremath{\mathbf a}_1, \dots, \ensuremath{\mathbf a}_l \in P_\ensuremath{\mathcal H}$ with $\ensuremath{\mathbf a}_i^2 > 0$, set
$\ensuremath{\mathbf a} = \ensuremath{\mathbf a}_1 + \dots + \ensuremath{\mathbf a}_l$. Then
\[
\sum_{i=1}^l \left(\ensuremath{\mathbf a}_i^2 + 2\right) < \ensuremath{\mathbf a}^2.
\]
\end{Lem}
\begin{proof}
Since the $\ensuremath{\mathbf a}_i$ are integral classes, and $\ensuremath{\mathcal H}_\ensuremath{\mathcal W}$ is an even lattice, we have $\ensuremath{\mathbf a}_i^2 \ge 2$.
If $\ensuremath{\mathbf a}_i \neq \ensuremath{\mathbf a}_j$, then $\ensuremath{\mathbf a}_i, \ensuremath{\mathbf a}_j$ span a lattice of signature $(1, -1)$, which gives
\[
(\ensuremath{\mathbf a}_i, \ensuremath{\mathbf a}_j) > \sqrt{\ensuremath{\mathbf a}_i^2 \ensuremath{\mathbf a}_j^2} \ge 2, \quad \text{and thus} \quad
\ensuremath{\mathbf a}^2 > \sum_{i=1}^l \ensuremath{\mathbf a}_i^2 + 2 l (l-1) \ge \sum_{i=1}^l \ensuremath{\mathbf a}_i^2 + 2l. \]
\end{proof}
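As a quick numerical illustration (with hypothetical classes): for $l = 2$, $\ensuremath{\mathbf a}_1^2 = 2$ and $\ensuremath{\mathbf a}_2^2 = 4$, the argument above gives $(\ensuremath{\mathbf a}_1, \ensuremath{\mathbf a}_2) > \sqrt{8}$, hence $(\ensuremath{\mathbf a}_1, \ensuremath{\mathbf a}_2) \ge 3$ by integrality, and thus
\[ \ensuremath{\mathbf a}^2 = \ensuremath{\mathbf a}_1^2 + 2(\ensuremath{\mathbf a}_1, \ensuremath{\mathbf a}_2) + \ensuremath{\mathbf a}_2^2 \ge 2 + 6 + 4 = 12 > 10 = \sum_{i=1}^2 \left(\ensuremath{\mathbf a}_i^2 + 2\right). \]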
\begin{Lem} \label{lem:nottotallysemistable}
Assume that the potential wall $\ensuremath{\mathcal W}$ associated to $\ensuremath{\mathcal H}$ satisfies the following conditions:
\begin{enumerate}
\item \label{enum:noniso} The wall is non-isotropic.
\item \label{enum:nonegs} There does not exist an effective spherical class $\ensuremath{\mathbf s} \in C_\ensuremath{\mathcal W}$ with $(\ensuremath{\mathbf s}, \ensuremath{\mathbf v}) < 0$.
\end{enumerate}
Then $\ensuremath{\mathcal W}$ cannot be a totally semistable wall.
\end{Lem}
In other words, there exists a $\sigma_0$-stable object of class $\ensuremath{\mathbf v}$. Note that by Lemma
\ref{lem:nonprimitive}, this statement automatically holds in the case of non-primitive $\ensuremath{\mathbf v}$ as well.
\subsubsection*{Proof}
We will consider two maps from the moduli space $M_{\sigma_+}(\ensuremath{\mathbf v})$.
On the one hand, by Theorem \ref{thm:contraction}, the line bundle
$\ell_{\sigma_0}$ on $M_{\sigma_+}(\ensuremath{\mathbf v})$ induces a birational morphism
\[ \pi^+ \colon M_{\sigma_+}(\ensuremath{\mathbf v}) \to \overline{M}. \]
The curves contracted by $\pi^+$ are exactly curves of S-equivalent objects.
For the second map, first assume for simplicity that $M_{\sigma_+}(\ensuremath{\mathbf v})$ is a fine moduli space,
and let $\ensuremath{\mathcal E}$ be a universal family.
Consider the relative HN filtration for $\ensuremath{\mathcal E}$ with respect to $\sigma_-$
given by Theorem \ref{thm:HNfamily}. Let $\ensuremath{\mathbf a}_1, \dots, \ensuremath{\mathbf a}_m$ be the Mukai vectors of the semistable
HN filtration quotients of a generic fiber $\ensuremath{\mathcal E}_m$ for $m \in M_{\sigma_+}(\ensuremath{\mathbf v})$; by assumption
\eqref{enum:noniso}, we have $\ensuremath{\mathbf a}_i^2 \neq 0$. On the open subset
$U$ of Theorem \ref{thm:HNfamily}, the filtration quotients
$\ensuremath{\mathcal E}^i/\ensuremath{\mathcal E}^{i-1}$ are flat families of $\sigma_-$-semistable objects of class $\ensuremath{\mathbf a}_i$; thus
we get an induced rational map
\[
\mathrm{HN} \colon M_{\sigma_+}(\ensuremath{\mathbf v}) \dashrightarrow M_{\sigma_-}(\ensuremath{\mathbf a}_1) \times \dots \times
M_{\sigma_-}(\ensuremath{\mathbf a}_m).
\]
Let $I \subset \{1, 2, \dots, m\}$ be the subset of indices $i$ with $\ensuremath{\mathbf a}_i^2 > 0$, and let
$\ensuremath{\mathbf a} = \sum_{i \in I} \ensuremath{\mathbf a}_i$.
Our first claim is $\ensuremath{\mathbf a}^2 \le \ensuremath{\mathbf v}^2$, with equality if and only if $\ensuremath{\mathbf a} =\ensuremath{\mathbf v}$, i.e., if there are no
classes with $\ensuremath{\mathbf a}_i^2 < 0$: Let $\ensuremath{\mathbf b} = \ensuremath{\mathbf v} - \ensuremath{\mathbf a} = \sum_{i \notin I} \ensuremath{\mathbf a}_i$.
If $\ensuremath{\mathbf b}^2 \ge 0$ (and hence $\ensuremath{\mathbf b}^2 \ge 2$, since the lattice is even and contains no isotropic classes), the claim follows from
$(\ensuremath{\mathbf a}, \ensuremath{\mathbf b}) > 0$:
\begin{equation} \label{eq:a2lessthanv2-1}
\ensuremath{\mathbf v}^2 = \ensuremath{\mathbf a}^2 + 2(\ensuremath{\mathbf a}, \ensuremath{\mathbf b}) + \ensuremath{\mathbf b}^2 \ge \ensuremath{\mathbf a}^2 + 4.
\end{equation}
Otherwise, $\ensuremath{\mathbf b}^2 \le -2$, since the lattice is even and contains no isotropic classes. Observe that by assumption \eqref{enum:nonegs},
$(\ensuremath{\mathbf v}, \underline{\hphantom{A}})$ is non-negative on all effective classes; in particular, $(\ensuremath{\mathbf v}, \ensuremath{\mathbf b}) \ge 0$.
We obtain
\begin{equation} \label{eq:a2lessthanv2-2}
\ensuremath{\mathbf a}^2 = \ensuremath{\mathbf v}^2-2(\ensuremath{\mathbf v}, \ensuremath{\mathbf b}) + \ensuremath{\mathbf b}^2 \le \ensuremath{\mathbf v}^2 -2.
\end{equation}
Lemma \ref{lem:numericaldimensions} then implies
\begin{equation} \label{eq:dimcomparison}
\ensuremath{\mathbf v}^2 + 2 \ge \ensuremath{\mathbf a}^2 + 2 \ge \sum_{i \in I} \left(\ensuremath{\mathbf a}_i^2 + 2\right),
\end{equation}
with equality in the second inequality if and only if $\abs{I} = 1$, and in the first if and only if $\ensuremath{\mathbf a} = \ensuremath{\mathbf v}$.
By Theorem \ref{thm:nonempty}, part \eqref{enum:dimandsquare},
this says that the target of the rational map $\mathrm{HN}$ has at most the dimension of the source:
\begin{equation}\label{eq:InequalityDimensionHNFactors}
\mathop{\mathrm{dim}}\nolimits M_{\sigma_+}(\ensuremath{\mathbf v}) \ge \sum_{i = 1}^m \mathop{\mathrm{dim}}\nolimits M_{\sigma_-}(\ensuremath{\mathbf a}_i).
\end{equation}
However, if $\mathrm{HN}(E_1) = \mathrm{HN}(E_2)$, then $E_1, E_2$ are S-equivalent: indeed, they
admit Jordan-H\"older filtrations that are refinements of their HN filtrations with
respect to $\sigma_-$, which have the same filtration quotients.
It follows that any curve contracted by $\mathrm{HN}$ is also
contracted by $\pi^+$; therefore
\[
\sum_{i = 1}^m \mathop{\mathrm{dim}}\nolimits M_{\sigma_-}(\ensuremath{\mathbf a}_i) \ge \mathop{\mathrm{dim}}\nolimits \overline{M} = \mathop{\mathrm{dim}}\nolimits M_{\sigma_+}(\ensuremath{\mathbf v}).
\]
Hence we have equality in each step of the above inequalities, the relative HN
filtration is trivial, and the generic fiber $\ensuremath{\mathcal E}_m$ is $\sigma_-$-stable. In other words, the
generic object of $M_{\sigma_+}(\ensuremath{\mathbf v})$ is also $\sigma_-$-stable, which proves the claim.
In case $M_{\sigma_+}(\ensuremath{\mathbf v})$ does not admit a universal family, we can construct $\mathrm{HN}$ by first
passing to an \'etale neighborhood $f \colon U \to M_{\sigma_+}(\ensuremath{\mathbf v})$ admitting a universal family; the
rational map from $U$ given by the relative HN filtration then factors via $f$.
\hfill$\Box$
We recall some theory of Pell's equation in the language of spherical reflections of the
hyperbolic lattice $\ensuremath{\mathcal H}$:
\begin{PropDef} \label{prop:pellequation}
Let $G_\ensuremath{\mathcal H} \subset \mathop{\mathrm{Aut}}\nolimits \ensuremath{\mathcal H}$ be the group generated by spherical reflections $\rho_{\ensuremath{\mathbf s}}$ for
effective spherical classes $\ensuremath{\mathbf s} \in C_\ensuremath{\mathcal W}$.
Given a positive
class $\ensuremath{\mathbf v} \in P_\ensuremath{\mathcal H} \cap \ensuremath{\mathcal H}$, the $G_\ensuremath{\mathcal H}$-orbit $G_\ensuremath{\mathcal H}.\ensuremath{\mathbf v}$ contains a unique class $\ensuremath{\mathbf v}_0$ such that
$(\ensuremath{\mathbf v}_0, \ensuremath{\mathbf s}) \ge 0$ for all effective spherical classes $\ensuremath{\mathbf s} \in C_\ensuremath{\mathcal W}$.
We call $\ensuremath{\mathbf v}_0$ the \emph{minimal class} of the orbit $G_\ensuremath{\mathcal H}.\ensuremath{\mathbf v}$.
\end{PropDef}
Note that the notion of minimal class depends on the potential wall $\ensuremath{\mathcal W}$, not just on the lattice
$\ensuremath{\mathcal H}$.
\begin{proof}
Again, we only treat the case \eqref{case:2spherical} of Proposition \ref{prop:stablespherical}, the
other cases being trivial. Since every effective spherical class in $C_\ensuremath{\mathcal W}$ lies in the upper right
quadrant spanned by $\ensuremath{\mathbf s}$ and $\ensuremath{\mathbf t}$, it is sufficient to find a class $\ensuremath{\mathbf v}_0$ in the orbit with
$(\ensuremath{\mathbf v}_0, \ensuremath{\mathbf s}) \ge 0$ and $(\ensuremath{\mathbf v}_0, \ensuremath{\mathbf t}) \ge 0$. If $(\ensuremath{\mathbf v}, \ensuremath{\mathbf s}) < 0$, then
$\rho_\ensuremath{\mathbf s}(\ensuremath{\mathbf v}) = \ensuremath{\mathbf v} - \abs{(\ensuremath{\mathbf v}, \ensuremath{\mathbf s})}\cdot \ensuremath{\mathbf s}$ is still in the upper right quadrant, with
smaller $x$-coordinate than $\ensuremath{\mathbf v}$ and the same $y$-coordinate; similarly if $(\ensuremath{\mathbf v}, \ensuremath{\mathbf t}) < 0$.
Proceeding inductively, the procedure must terminate, reaching the desired class $\ensuremath{\mathbf v}_0$.
The uniqueness follows from Proposition \ref{prop:orbitlist} below.
\end{proof}
Assume additionally that $\ensuremath{\mathcal H}$ admits infinitely many spherical classes, so we are in case
\eqref{case:2spherical} of Proposition \ref{prop:stablespherical}. The hyperbola
$\ensuremath{\mathbf v}^2 = -2$ intersects the upper right quadrant $x, y \ge 0$ in two branches, starting at $\ensuremath{\mathbf s}$ and
$\ensuremath{\mathbf t}$, respectively. Let $\ensuremath{\mathbf s}_0 = \ensuremath{\mathbf s}, \ensuremath{\mathbf s}_{-1}, \ensuremath{\mathbf s}_{-2}, \dots$ be the integral spherical classes on the
lower branch starting at $\ensuremath{\mathbf s}$, and $\ensuremath{\mathbf t}_1 = \ensuremath{\mathbf t}, \ensuremath{\mathbf t}_2, \ensuremath{\mathbf t}_3, \dots$ be those on the upper branch
starting at $\ensuremath{\mathbf t}$, see also Figure \ref{fig:orbitlist}. The $\ensuremath{\mathbf s}_i$ can be defined recursively by
$\ensuremath{\mathbf s}_{-1} = \rho_\ensuremath{\mathbf s}(\ensuremath{\mathbf t})$, and $\ensuremath{\mathbf s}_{k-1} = \rho_{\ensuremath{\mathbf s}_{k}}(\ensuremath{\mathbf s}_{k+1})$ for $k \le -1$; similarly
for the $\ensuremath{\mathbf t}_i$.
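As a quick consistency check that the recursion stays on the hyperbola: using
$\rho_\ensuremath{\mathbf s}(\ensuremath{\mathbf v}) = \ensuremath{\mathbf v} + (\ensuremath{\mathbf v}, \ensuremath{\mathbf s})\, \ensuremath{\mathbf s}$ and writing $k = (\ensuremath{\mathbf s}, \ensuremath{\mathbf t})$, we have
$\ensuremath{\mathbf s}_{-1} = \rho_\ensuremath{\mathbf s}(\ensuremath{\mathbf t}) = \ensuremath{\mathbf t} + k \ensuremath{\mathbf s}$, and indeed
\[
\ensuremath{\mathbf s}_{-1}^2 = \ensuremath{\mathbf t}^2 + 2k (\ensuremath{\mathbf s}, \ensuremath{\mathbf t}) + k^2 \ensuremath{\mathbf s}^2 = -2 + 2k^2 - 2k^2 = -2;
\]
the same holds at every step, as the reflections $\rho$ are isometries of $\ensuremath{\mathcal H}$.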
\begin{Prop} \label{prop:orbitlist}
Given a minimal class $\ensuremath{\mathbf v}_0$ of a $G_\ensuremath{\mathcal H}$-orbit, define $\ensuremath{\mathbf v}_i, i \in \ensuremath{\mathbb{Z}}$ via
$\ensuremath{\mathbf v}_{i} = \rho_{\ensuremath{\mathbf t}_{i}}(\ensuremath{\mathbf v}_{i-1})$ for $i > 0$, and $\ensuremath{\mathbf v}_i = \rho_{\ensuremath{\mathbf s}_{i+1}}(\ensuremath{\mathbf v}_{i+1})$ for $i < 0$.
Then the orbit $G_\ensuremath{\mathcal H}.\ensuremath{\mathbf v}_0$ is given by $\stv{\ensuremath{\mathbf v}_i}{i \in \ensuremath{\mathbb{Z}}}$, where the latter are ordered according to
their slopes in $\ensuremath{\mathbb{R}}^2$.
\end{Prop}
Note that these classes may coincide pairwise, in case $\ensuremath{\mathbf v}_0$ is orthogonal to $\ensuremath{\mathbf s}$ or $\ensuremath{\mathbf t}$.
\begin{proof}
The group $G_\ensuremath{\mathcal H}$ is the free product $\ensuremath{\mathbb{Z}}_2 \star \ensuremath{\mathbb{Z}}_2$, generated by $\rho_\ensuremath{\mathbf s}$ and $\rho_\ensuremath{\mathbf t}$.
It is straightforward to check that with $\ensuremath{\mathbf v}_i$ defined as above, we have
\[
\ensuremath{\mathbf v}_{-1} = \rho_\ensuremath{\mathbf s} (\ensuremath{\mathbf v}_0), \quad \ensuremath{\mathbf v}_{-2} = \rho_\ensuremath{\mathbf s} \rho_\ensuremath{\mathbf t} (\ensuremath{\mathbf v}_0), \quad
\ensuremath{\mathbf v}_{-3} = \rho_\ensuremath{\mathbf s} \rho_\ensuremath{\mathbf t} \rho_\ensuremath{\mathbf s} (\ensuremath{\mathbf v}_0), \ldots,
\]
and similarly $\ensuremath{\mathbf v}_1 = \rho_{\ensuremath{\mathbf t}} (\ensuremath{\mathbf v}_0)$ and so on. This list contains $g(\ensuremath{\mathbf v}_0)$ for all $g \in \ensuremath{\mathbb{Z}}_2
\star \ensuremath{\mathbb{Z}}_2$. That the $\ensuremath{\mathbf v}_i$ are ordered by slopes is best seen by drawing a picture; see also
Figure \ref{fig:orbitlist}.
\end{proof}
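For a concrete illustration, with a hypothetical Gram matrix chosen to match the points marked in
Figure \ref{fig:orbitlist}, suppose $(\ensuremath{\mathbf s}, \ensuremath{\mathbf t}) = 4$, and write classes as $x \ensuremath{\mathbf s} + y \ensuremath{\mathbf t}$.
Then spherical classes correspond to integral solutions of the Pell-type equation
\[
(x \ensuremath{\mathbf s} + y \ensuremath{\mathbf t})^2 = -2x^2 + 8xy - 2y^2 = -2,
\quad \text{i.e.,} \quad x^2 - 4xy + y^2 = 1,
\]
whose non-negative solutions $(1,0), (0,1), (4,1), (1,4), (15,4), (4,15), \dots$ are precisely the
spherical classes $\ensuremath{\mathbf s}_0, \ensuremath{\mathbf t}_1, \ensuremath{\mathbf s}_{-1}, \ensuremath{\mathbf t}_2, \dots$ on the two branches.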
\begin{figure}
\definecolor{qqqqcc}{rgb}{0,0,0.8}
\definecolor{uququq}{rgb}{0.2509803922,0.2509803922,0.2509803922}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\draw[->,color=black] (-0.8,0) -- (8.5,0);
\foreach \x in {,1,2,3,4,5,6,7,8}
\draw[shift={(\x,0)},color=black] (0pt,2pt) -- (0pt,-2pt);
\draw[->,color=black] (0,-0.8) -- (0,4.5);
\foreach \y in {,1,2,3,4}
\draw[shift={(0,\y)},color=black] (2pt,0pt) -- (-2pt,0pt);
\clip(-0.8,-0.8) rectangle (8.5,4.5);
\draw [samples=50,domain=0:0.99,rotate around={135:(0,0)},xshift=0cm,yshift=0cm] plot
({0.5773502692*(1+\x^2)/(1-\x^2)},{1*2*\x/(1-\x^2)});
\draw [samples=50,domain=0:0.99,rotate around={135:(0,0)},xshift=0cm,yshift=0cm] plot
({0.5773502692*(1+\x^2)/(1-\x^2)},{-2*\x/(1-\x^2)});
\draw [samples=50,domain=0:0.99,rotate around={135:(0,0)},xshift=0cm,yshift=0cm] plot
({0.5773502692*(-1-\x^2)/(1-\x^2)},{2*\x/(1-\x^2)});
\draw [samples=50,domain=0:0.99,rotate around={135:(0,0)},xshift=0cm,yshift=0cm] plot
({0.5773502692*(-1-\x^2)/(1-\x^2)},{(-2)*\x/(1-\x^2)});
\draw [samples=50,domain=0:0.99,rotate around={-135:(0,0)},xshift=0cm,yshift=0cm] plot
({1.4142135624*(-1-\x^2)/(1-\x^2)},{0.8164965809*(-2)*\x/(1-\x^2)});
\draw [samples=50,domain=0:0.99,rotate around={-135:(0,0)},xshift=0cm,yshift=0cm] plot
({1.4142135624*(-1-\x^2)/(1-\x^2)},{0.8164965809*2*\x/(1-\x^2)});
\draw (5.5,1.3) node[anchor=north west] {$ Q(x,y) = -2 $};
\draw [dash pattern=on 2pt off 2pt,color=qqqqcc] (7.4604462269,2.0766418445)--
(4.3852498909,1.3078427605)-- (0.8461211513,1.3078427605)-- (0.8461211513,2.0766418445)--
(4.3852498909,16.2331568031);
\begin{scriptsize}
\fill [color=black] (1,0) circle (1.5pt);
\draw[color=black] (1.,-0.3516959583) node {$\mathbf s_0 = \mathbf s$};
\fill [color=black] (0,1) circle (1.5pt);
\draw[color=black] (0.46394913,0.855968031) node {$\mathbf t_1 = \mathbf t$};
\fill [color=uququq] (0.8461211513,2.0766418445) circle (1.5pt);
\draw[color=uququq] (1.6381894244,2.127953402) node {$\mathbf v_1 = \rho_{\mathbf t} (\mathbf v)$};
\fill [color=black] (0.8461211513,1.3078427605) circle (1.5pt);
\draw[color=black] (1.1013621151,1.4386989547) node {$\mathbf v_0$};
\fill [color=uququq] (4.3852498909,1.3078427605) circle (1.5pt);
\draw[color=uququq] (4.3852174029,1.6460115384) node {$\mathbf v_{-1} = \rho_{\mathbf s}(\mathbf
v)$};
\fill [color=uququq] (4,1) circle (1.5pt);
\draw[color=uququq] (4,0.7327083255) node {$\mathbf s_{-1}$};
\fill [color=uququq] (1,4) circle (1.5pt);
\draw[color=uququq] (0.7,3.9999740551) node {$\mathbf t_2$};
\fill [color=uququq] (7.4604462269,2.0766418445) circle (1.5pt);
\draw[color=uququq] (7.2,2.5114733858) node {$\mathbf v_{-2} = \rho_{\mathbf
s_{-1}}(\mathbf v_{-1})$};
\fill [color=uququq] (0,0) circle (1.5pt);
\end{scriptsize}
\end{tikzpicture}
\caption{The orbit of $\ensuremath{\mathbf v}_0$}
\label{fig:orbitlist}
\end{figure}
For $i > 0$, let $T_i^+ \in \ensuremath{\mathcal P}_0(1)$ be the unique $\sigma_+$-stable object with $\ensuremath{\mathbf v}(T_i^+) =
\ensuremath{\mathbf t}_i$; similarly for $S_i^+$ with $\ensuremath{\mathbf v}(S_i^+) = \ensuremath{\mathbf s}_i$ for $i \le 0$. We also write $T_i^-$ and
$S_i^-$ for the corresponding $\sigma_-$-stable objects.
\begin{Prop} \label{prop:sphericaltotallysemistable}
Let $\ensuremath{\mathcal W}$ be a potential wall, and assume there is an effective spherical class $\tilde \ensuremath{\mathbf s} \in C_\ensuremath{\mathcal W}$
with $(\ensuremath{\mathbf v}, \tilde \ensuremath{\mathbf s}) < 0$. Then $\ensuremath{\mathcal W}$ is a totally semistable wall.
Additionally, let $\ensuremath{\mathbf v}_0$ be the minimal class in the orbit $G_\ensuremath{\mathcal H}.\ensuremath{\mathbf v}$, and write
$\ensuremath{\mathbf v} = \ensuremath{\mathbf v}_l$ as in Proposition \ref{prop:orbitlist}.
If $\phi^+(\ensuremath{\mathbf v}) > \phi^+(\ensuremath{\mathbf v}_0)$, then
\[
\mathop{\mathrm{ST}}\nolimits_{T_l^+} \circ \mathop{\mathrm{ST}}\nolimits_{T_{l-1}^+} \circ \dots \circ \mathop{\mathrm{ST}}\nolimits_{T_1^+} (E_0)
\]
is $\sigma_+$-stable of class $\ensuremath{\mathbf v}$,
for every $\sigma_0$-stable object $E_0$ of class $\ensuremath{\mathbf v}_0$.
Similarly, if $\phi^+(\ensuremath{\mathbf v}) < \phi^+(\ensuremath{\mathbf v}_0)$, then
\[
\mathop{\mathrm{ST}}\nolimits_{S_{-l+1}^+}^{-1} \circ \mathop{\mathrm{ST}}\nolimits_{S_{-l+2}^+}^{-1} \circ \dots \circ \mathop{\mathrm{ST}}\nolimits_{S_0^+}^{-1} (E_0)
\]
is $\sigma_+$-stable of class $\ensuremath{\mathbf v}$ for every $\sigma_0$-stable object $E_0$ of class $\ensuremath{\mathbf v}_0$.
The analogous statement holds for $\sigma_-$.
\end{Prop}
Note that when we are in case \eqref{case:1spherical} of Proposition \ref{prop:stablespherical}, the
above sequence of stable spherical objects will consist of just one object.
Before the proof, we recall the following statement (see {\cite[Lemma 5.9]{localP2}}):
\begin{Lem}
\label{lem:stableextension}
Assume that $A, B$ are simple objects in an abelian category.
If $E$ is an extension of the form
\[
A \ensuremath{\hookrightarrow} E \ensuremath{\twoheadrightarrow} B^{\oplus r}
\]
with $\mathop{\mathrm{Hom}}\nolimits(B, E) = 0$, then any proper quotient of $E$ is of the form $B^{\oplus r'}$.
Similarly, given an extension
\[
A^{\oplus r} \ensuremath{\hookrightarrow} E \ensuremath{\twoheadrightarrow} B
\]
with $\mathop{\mathrm{Hom}}\nolimits(E, A) = 0$, any proper subobject of $E$ is of the form $A^{\oplus r'}$.
\end{Lem}
\begin{proof}
We consider the former case, i.e., an extension $A \ensuremath{\hookrightarrow} E \ensuremath{\twoheadrightarrow} B^{\oplus r}$; the latter case
follows by dual arguments.
Let $E \ensuremath{\twoheadrightarrow} N$ be any quotient of $E$.
Since $A$ is a simple object, the composition
$\psi \colon A \ensuremath{\hookrightarrow} E \ensuremath{\twoheadrightarrow} N$ is either injective, or zero.
If $\psi = 0$, then $N$ is a quotient of $B^{\oplus r}$, and the claim
follows.
If $\psi$ is injective, let $M$ be the kernel of $E \ensuremath{\twoheadrightarrow} N$. Then $M \cap A = 0$, and so
$M$ is a subobject of $B^{\oplus r}$. Since $B$ is a simple object,
$M$ is of the form $B^{\oplus r'}$; since $\mathop{\mathrm{Hom}}\nolimits(B, E) = 0$, this forces
$M = 0$, i.e., $N = E$, so the quotient is not proper.
\end{proof}
\subsubsection*{Proof of Proposition \ref{prop:sphericaltotallysemistable}}
Continuing with the convention of Proposition \ref{prop:stablespherical}, we use the
$\widetilde \mathop{\mathrm{GL}}_2^+(\ensuremath{\mathbb{R}})$-action to assume
$Z_0(\ensuremath{\mathcal H}) \subset \ensuremath{\mathbb{R}}$, and $Z_0(\ensuremath{\mathbf v}) \in \ensuremath{\mathbb{R}}_{<0}$.
Consider the first claim.
By assumption, we may find an effective spherical class $\tilde{\ensuremath{\mathbf s}}$ such that $(\ensuremath{\mathbf v},\tilde{\ensuremath{\mathbf s}})<0$.
Pick a $\sigma_0$-semistable object $S$ with $\ensuremath{\mathbf v}(S)=\tilde{\ensuremath{\mathbf s}}$.
By considering its Jordan-H\"older filtration, and using Lemma \ref{lem:JHspherical},
we may find a $\sigma_0$-\emph{stable} spherical object $\widetilde S$ with $(\ensuremath{\mathbf v}, \ensuremath{\mathbf v}(\widetilde S)) < 0$.
Assume, for a contradiction, that $\ensuremath{\mathcal W}$ is not a totally semistable wall.
Then there exists a $\sigma_0$-\emph{stable} object $E$ of class $\ensuremath{\mathbf v}$.
By stability, since $E$ and $\widetilde{S}$ have the same phase, we have $\mathop{\mathrm{Hom}}\nolimits(\widetilde S, E) = \mathop{\mathrm{Hom}}\nolimits(E, \widetilde S) = 0$ and, by Serre duality,
$\mathop{\mathrm{Ext}}\nolimits^2(\widetilde S, E) \cong \mathop{\mathrm{Hom}}\nolimits(E, \widetilde S)^* = 0$; hence
$(\ensuremath{\mathbf v}, \ensuremath{\mathbf v}(\widetilde S)) = \mathop{\mathrm{ext}}\nolimits^1(\widetilde S, E) \ge 0$, a contradiction.
To prove the construction of $\sigma_+$-stable objects, let us assume that we are in the case
of infinitely many spherical classes. Let us also assume that $\phi^+(\ensuremath{\mathbf v}) > \phi^+(\ensuremath{\mathbf v}_0)$; the other
case is analogous. In the notation of Proposition \ref{prop:orbitlist}, this means
$\ensuremath{\mathbf v} = \ensuremath{\mathbf v}_l$ for some $l > 0$.
We define $E_i$ inductively by
\[
E_i = \mathop{\mathrm{ST}}\nolimits_{T_i^+}(E_{i-1}).
\]
Since the spherical twist $\mathop{\mathrm{ST}}\nolimits_{T}$, for $T$ a spherical object, acts on Mukai vectors as the reflection
$\rho_{\ensuremath{\mathbf v}(T)}$, Proposition \ref{prop:orbitlist} gives $\ensuremath{\mathbf v}(E_i) = \ensuremath{\mathbf v}_i$.
Lemma \ref{lem:stableextension} shows that $E_1$ is $\sigma_+$-stable; however, for the
following induction steps, we cannot simply use Lemma \ref{lem:stableextension} again, as neither $E_i$
nor $T_i^+$ are simple objects in $\ensuremath{\mathcal P}_0(1)$.
\begin{wrapfigure}{r}{0.39\textwidth}
\vspace{-1em}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\clip(-1.2,-2.8) rectangle (4.3,4.61);
\draw[->,color=black] (-3,0) -- (4.3,0);
\foreach \x in {-2,-1,1,2,3,4,5}
\draw[shift={(\x,0)},color=black] (0pt,2pt) -- (0pt,-2pt);
\draw[->,color=black] (0,-2.8) -- (0,4.6);
\foreach \y in {-2,-1,1,2,3,4}
\draw[shift={(0,\y)},color=black] (2pt,0pt) -- (-2pt,0pt);
\draw [samples=50,domain=-0.99:0.99,rotate around={135:(0,0)},xshift=0cm,yshift=0cm] plot ({0.58*(1+(\x)^2)/(1-(\x)^2)},{1*2*(\x)/(1-(\x)^2)});
\draw [samples=50,domain=-0.99:0.99,rotate around={135:(0,0)},xshift=0cm,yshift=0cm] plot ({0.58*(-1-(\x)^2)/(1-(\x)^2)},{1*(-2)*(\x)/(1-(\x)^2)});
\draw [samples=50,domain=-0.99:0.99,rotate around={-135:(0,0)},xshift=0cm,yshift=0cm] plot ({2*(-1-(\x)^2)/(1-(\x)^2)},{1.15*(-2)*(\x)/(1-(\x)^2)});
\draw [dash pattern=on 2pt off 2pt,domain=-5.6:11.61] plot(\x,{(-0--4*\x)/1});
\draw (0.01,3) node[anchor=north west] {$\mathcal{T}_1$};
\draw [shift={(0.27,2.23)},dotted] plot[domain=-0.18:2.96,variable=\t]({1*0.28*cos(\t r)+0*0.28*sin(\t r)},{0*0.28*cos(\t r)+1*0.28*sin(\t r)});
\draw [dash pattern=on 2pt off 2pt,domain=-5.6:11.61] plot(\x,{(-0--4.61*\x)/1.34});
\draw [shift={(0.44,3.18)},dotted] plot[domain=-0.33:2.82,variable=\t]({1*0.47*cos(\t r)+0*0.47*sin(\t r)},{0*0.47*cos(\t r)+1*0.47*sin(\t r)});
\draw (0.12,4.16) node[anchor=north west] {$\mathcal{T}_2$};
\draw [shift={(-0.01,-0.03)},dotted] plot[domain=-1.55:1.325,variable=\t]({1*2.2*cos(\t r)+0*2.2*sin(\t r)},{0*2.2*cos(\t r)+1*2.2*sin(\t r)});
\draw (1.51,-0.3) node[anchor=north west] {$\mathcal{A}_1$};
\draw [shift={(0.01,0.02)},dotted] plot[domain=-1.80:1.29,variable=\t]({1*2.63*cos(\t r)+0*2.63*sin(\t r)},{0*2.63*cos(\t r)+1*2.63*sin(\t r)});
\draw (2.28,-0.67) node[anchor=north west] {$\mathcal{A}_2$};
\draw [shift={(0.01,0.02)},dotted] plot[domain=0:1.57,variable=\t]({1.5*cos(\t r)},{1.5*sin(\t r)});
\draw (0.6,1.2) node[anchor=north west] {$\mathcal{A}_0$};
\begin{scriptsize}
\fill [color=black] (1,0) circle (1.5pt);
\draw[color=black] (1.1,-0.25) node {$S_1$};
\fill [color=black] (0,1) circle (1.5pt);
\draw[color=black] (-0.3,1.1) node {$T_1$};
\fill [color=black] (0,-1) circle (1.5pt);
\draw[color=black] (0.59,-1.07) node {$T_1[-1]$};
\fill [color=black] (4,1) circle (1.5pt);
\draw[color=black] (4.1,0.75) node {$S_0$};
\fill [color=black] (1,4) circle (1.5pt);
\draw[color=black] (0.8,4.2) node {$T_2$};
\end{scriptsize}
\end{tikzpicture}
\caption{The categories $\mathcal{A}_i$}
\vspace{-2em}
\label{fig:tilt}
\end{wrapfigure}
Instead, we will need a slightly stronger induction statement.
Using Proposition \ref{prop:HW}, in particular part \eqref{enum:HNfactorsinHW}, we can define
a torsion pair $(\ensuremath{\mathcal T}_i, \ensuremath{\mathcal F}_i)$ in $\ensuremath{\mathcal A}_0 := \ensuremath{\mathcal P}_0(1)$ as follows:
we let $\ensuremath{\mathcal T}_i$ be the extension closure of all $\sigma_+$-stable objects
$F \in \ensuremath{\mathcal A}_0$ with $\phi^+(F) > \phi^+(T_{i+1})$. By Theorem \ref{thm:nonempty},
the Mukai vector of any stable object has self-intersection $\geq -2$, while the Mukai vector of any such $F$ has self-intersection $< 0$; hence every such $F$ is spherical, and
$\ensuremath{\mathcal T}_i$ is the extension closure $\ensuremath{\mathcal T}_i = \langle T_1^+, \dots, T_i^+ \rangle$.
Then let $\ensuremath{\mathcal A}_i = \langle \ensuremath{\mathcal F}_i, \ensuremath{\mathcal T}_i[-1] \rangle$ (see Figure \ref{fig:tilt}).
We can also describe $\ensuremath{\mathcal A}_{i+1}$ inductively as the tilt of $\ensuremath{\mathcal A}_i$ at the torsion pair $(\ensuremath{\mathcal T}, \ensuremath{\mathcal F})$
with $\ensuremath{\mathcal T} = \langle T_{i+1}^+ \rangle$ and $\ensuremath{\mathcal F} = \langle T_{i+1}^+ \rangle^\perp$.
\begin{description*}
\item[Induction claim]
We have $E_i \in \ensuremath{\mathcal F}_i$, and both $E_i$ and $T_{i+1}^+$ are simple objects of $\ensuremath{\mathcal A}_i$.
\end{description*}
By construction of the torsion pair $(\ensuremath{\mathcal T}_i, \ensuremath{\mathcal F}_i)$, this also shows that $E_i$ is
$\sigma_+$-stable.
Indeed, since $E_i \in \ensuremath{\mathcal F}_i$, we have $\mathop{\mathrm{Hom}}\nolimits(F, E_i) = 0$ for all $\sigma_+$-stable objects $F$ with $\phi^+(F) > \phi^+(T_{i+1})$.
Since $E_i$ is simple in $\ensuremath{\mathcal A}_i$, we also have $\mathop{\mathrm{Hom}}\nolimits(F, E_i) = 0$ for all $\sigma_+$-stable objects $F \neq E_i$ with $\phi^+(E_i) \leq \phi^+(F) \leq \phi^+(T_{i+1})$.
By definition, this means that $E_i$ is $\sigma_+$-stable.
The case $i = 0$ follows from the assumption that $E_0$ is $\sigma_0$-\emph{stable}.
To prove the induction step, we first consider $T_{i+1}^+$.
By stability, we have $T_{i+1}^+ \in \ensuremath{\mathcal T}_i^\perp = \ensuremath{\mathcal F}_i$.
Using stability again, we also see that any non-trivial
quotient of $T_{i+1}^+$ is contained in $\ensuremath{\mathcal T}_i$, so $T_{i+1}^+$ is a simple object of $\ensuremath{\mathcal F}_i$.
Since $T_{i+1}^+$ is stable of maximal slope in $\ensuremath{\mathcal F}_i$, there also cannot be a short exact sequence
as in \eqref{eq:sesFFT} below. Therefore, Lemma \ref{lem:simpleintilt} shows that $T_{i+1}^+$ is a simple
object of $\ensuremath{\mathcal A}_i$.
Since $E_i$ (by induction assumption) is also a simple object in
$\ensuremath{\mathcal A}_i$, this shows $\mathop{\mathrm{Hom}}\nolimits(E_i, T_{i+1}^+) = \mathop{\mathrm{Hom}}\nolimits(T_{i+1}^+, E_i) = 0$. So
$\mathop{\mathbf{R}\mathrm{Hom}}\nolimits(T_{i+1}^+, E_i) = \mathop{\mathrm{Ext}}\nolimits^1(T_{i+1}^+, E_i)[-1]$, and
$E_{i+1} = \mathop{\mathrm{ST}}\nolimits_{T_{i+1}^+}(E_i)$ fits into a short exact sequence
\[
0 \to E_i \ensuremath{\hookrightarrow} E_{i+1} \ensuremath{\twoheadrightarrow} T_{i+1}^+ \otimes \mathop{\mathrm{Ext}}\nolimits^1(T_{i+1}^+, E_i) \to 0.
\]
In particular, $E_{i+1}$ is also an object of $\ensuremath{\mathcal A}_i$.
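As a consistency check on Mukai vectors: since $\mathop{\mathbf{R}\mathrm{Hom}}\nolimits(T_{i+1}^+, E_i)$ is concentrated in degree one, and the Mukai pairing satisfies $(\ensuremath{\mathbf v}(A), \ensuremath{\mathbf v}(B)) = -\chi(A, B)$, we have
\[
\mathop{\mathrm{ext}}\nolimits^1(T_{i+1}^+, E_i) = -\chi(T_{i+1}^+, E_i) = (\ensuremath{\mathbf t}_{i+1}, \ensuremath{\mathbf v}_i),
\]
and the short exact sequence above gives
\[
\ensuremath{\mathbf v}(E_{i+1}) = \ensuremath{\mathbf v}_i + (\ensuremath{\mathbf t}_{i+1}, \ensuremath{\mathbf v}_i)\, \ensuremath{\mathbf t}_{i+1} = \rho_{\ensuremath{\mathbf t}_{i+1}}(\ensuremath{\mathbf v}_i) = \ensuremath{\mathbf v}_{i+1},
\]
in agreement with Proposition \ref{prop:orbitlist}.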
Note that
\[
\mathop{\mathbf{R}\mathrm{Hom}}\nolimits(T_{i+1}^+, E_{i+1}) = \mathop{\mathbf{R}\mathrm{Hom}}\nolimits(\mathop{\mathrm{ST}}\nolimits_{T_{i+1}^+}^{-1}(T_{i+1}^+), \mathop{\mathrm{ST}}\nolimits_{T_{i+1}^+}^{-1}(E_{i+1}))
= \mathop{\mathbf{R}\mathrm{Hom}}\nolimits(T_{i+1}^+[1], E_i)
\]
is concentrated in degree $2$; this shows both that
$E_{i+1} \in (T_{i+1}^+)^\perp \subset \ensuremath{\mathcal A}_i$, and that there are no extensions
$E_{i+1} \ensuremath{\hookrightarrow} F' \ensuremath{\twoheadrightarrow} (T_{i+1}^+)^{\oplus k}$. Applying Lemma \ref{lem:simpleintilt} via the inductive
description of $\ensuremath{\mathcal A}_{i+1}$ as a tilt of $\ensuremath{\mathcal A}_i$, this proves the induction claim.
\hfill$\Box$
\begin{Lem} \label{lem:simpleintilt}
Let $(\ensuremath{\mathcal T}, \ensuremath{\mathcal F})$ be a torsion pair in an abelian category $\ensuremath{\mathcal A}$, and let $F \in \ensuremath{\mathcal F}$ be an object
that is simple in the quasi-abelian category $\ensuremath{\mathcal F}$, and that admits no non-trivial short exact sequences
\begin{equation} \label{eq:sesFFT}
0 \to F \ensuremath{\hookrightarrow} F' \ensuremath{\twoheadrightarrow} T \to 0
\end{equation}
with $F' \in \ensuremath{\mathcal F}$ and $T \in \ensuremath{\mathcal T}$.
Then $F$ is a simple object in the tilted category $\ensuremath{\mathcal A}^\sharp = \langle \ensuremath{\mathcal F}, \ensuremath{\mathcal T}[-1] \rangle$.
\end{Lem}
\begin{proof}
Consider a short exact sequence $A \ensuremath{\hookrightarrow} F \ensuremath{\twoheadrightarrow} B$ in $\ensuremath{\mathcal A}^{\sharp}$. The long exact cohomology
sequence with respect to $\ensuremath{\mathcal A}$ is
\[
0 \to \ensuremath{\mathcal H}^0_\ensuremath{\mathcal A}(A) \ensuremath{\hookrightarrow} F \to F' \ensuremath{\twoheadrightarrow} \ensuremath{\mathcal H}^1_\ensuremath{\mathcal A}(A) \to 0
\]
with $F' := \ensuremath{\mathcal H}^0_\ensuremath{\mathcal A}(B) \in \ensuremath{\mathcal F}$, $\ensuremath{\mathcal H}^0_\ensuremath{\mathcal A}(A) \in \ensuremath{\mathcal F}$ and
$\ensuremath{\mathcal H}^1_\ensuremath{\mathcal A}(A) \in \ensuremath{\mathcal T}$. Assume the sequence is non-trivial; since $F$ is a simple object in $\ensuremath{\mathcal F}$, we must have
$\ensuremath{\mathcal H}^0_\ensuremath{\mathcal A}(A) = 0$, and hence $\ensuremath{\mathcal H}^1_\ensuremath{\mathcal A}(A) \neq 0$. Thus we get a non-trivial short exact sequence as in \eqref{eq:sesFFT}, a contradiction.
\end{proof}
\section{Divisorial contractions in the non-isotropic case}
\label{sec:divisorialcontraction}
In this section we examine Theorem \ref{thm:walls} in the case of divisorial contractions when the lattice $\ensuremath{\mathcal H}_\ensuremath{\mathcal W}$ does not contain isotropic classes.
The goal is to prove the following proposition.
\begin{Prop} \label{prop:sphericaldivisorialwall}
Assume that the potential wall $\ensuremath{\mathcal W}$ is non-isotropic. Then $\ensuremath{\mathcal W}$ is a divisorial wall if and
only if there exists a spherical class $\tilde \ensuremath{\mathbf s} \in \ensuremath{\mathcal H}_\ensuremath{\mathcal W}$ with $(\tilde \ensuremath{\mathbf s}, \ensuremath{\mathbf v}) = 0$.
If we choose $\tilde \ensuremath{\mathbf s}$ to be effective, then the class of the contracted divisor $D$
is given by $D \equiv \theta(\tilde \ensuremath{\mathbf s})$.
If $\widetilde S$ is a stable spherical object of class $\ensuremath{\mathbf v}(\widetilde S) = \tilde \ensuremath{\mathbf s}$, then
$D$ can be described as a Brill-Noether divisor of $\widetilde S$: it is given
either by the condition $\mathop{\mathrm{Hom}}\nolimits(\widetilde S, \underline{\hphantom{A}}) \neq 0$, or by
$\mathop{\mathrm{Hom}}\nolimits(\underline{\hphantom{A}}, \widetilde S) \neq 0$.
\end{Prop}
One can use more general results of Markman in \cite{Eyal:prime-exceptional} to show that a
divisorial contraction implies the existence of an orthogonal spherical class in the non-isotropic
case. We will instead give a categorical proof in our situation.
We first treat the case in which there exists a $\sigma_0$-\emph{stable} object of class $\ensuremath{\mathbf v}$:
\begin{Lem} \label{lem:notdivisorial}
Assume that $\ensuremath{\mathcal H}$ is non-isotropic, and that $\ensuremath{\mathcal W}$ is a potential wall associated to $\ensuremath{\mathcal H}$.
If $\ensuremath{\mathbf v}$ is a minimal class of a $G_\ensuremath{\mathcal H}$-orbit, and if there is no spherical class
$\tilde \ensuremath{\mathbf s} \in \ensuremath{\mathcal H}_\ensuremath{\mathcal W}$ with $(\tilde \ensuremath{\mathbf s}, \ensuremath{\mathbf v}) = 0$, then
the set of $\sigma_0$-\emph{stable} objects in $M_{\sigma_+}(\ensuremath{\mathbf v})$ has complement of codimension
at least two.
\end{Lem}
In particular, $\ensuremath{\mathcal W}$ cannot induce a divisorial contraction.
\begin{proof}
The argument is similar to the proof of Lemma \ref{lem:nottotallysemistable}; additionally, it uses
Namikawa's and Wierzba's characterization of divisorial contractions recalled in
Theorem \ref{thm:NamikawaWierzba}.
Assume, for a contradiction, that there is an irreducible divisor $D \subset M_{\sigma_+}(\ensuremath{\mathbf v})$ of objects
that are strictly semistable with respect to $\sigma_0$.
Let $\pi^+ \colon M_{\sigma_+}(\ensuremath{\mathbf v}) \to \overline{M}$ be the morphism induced by
$\ell_{\sigma_0}$; it is either an isomorphism or a divisorial contraction.
The divisor $D$ may or may not be contracted by $\pi^+$;
by Theorem \ref{thm:NamikawaWierzba}, we have
$\mathop{\mathrm{dim}}\nolimits \pi^+(D) \ge \mathop{\mathrm{dim}}\nolimits D - 1 = \mathop{\mathrm{dim}}\nolimits M_{\sigma_+}(\ensuremath{\mathbf v}) - 2 = \ensuremath{\mathbf v}^2$ in either case.
On the other hand, consider the restriction of
the universal family $\ensuremath{\mathcal E}$ on $M_{\sigma_+}(\ensuremath{\mathbf v})$ to the divisor $D$,
and its relative HN filtration with respect to $\sigma_-$. As before, this induces
a rational map
\[
\mathrm{HN}_D \colon D \dashrightarrow M_{\sigma_-}(\ensuremath{\mathbf a}_1) \times \dots \times M_{\sigma_-}(\ensuremath{\mathbf a}_l).
\]
Again, let $I \subset \{1, \dots, l\}$ be the subset of indices $i$ with $\ensuremath{\mathbf a}_i^2 > 0$, and
$\ensuremath{\mathbf a} = \sum_{i \in I} \ensuremath{\mathbf a}_i$. The arguments leading to inequalities \eqref{eq:a2lessthanv2-1}
and \eqref{eq:a2lessthanv2-2} still apply, and show $\ensuremath{\mathbf a}^2 \le \ensuremath{\mathbf v}^2$.
If $I \neq \{1, \dots, l\}$, there exists a class $\ensuremath{\mathbf a}_j$ appearing in the HN filtration
of the form
$\ensuremath{\mathbf a}_j = m \tilde \ensuremath{\mathbf s}$ with $\tilde \ensuremath{\mathbf s}^2 = -2$. Since $\ensuremath{\mathbf v}$ is a minimal class, $(\tilde \ensuremath{\mathbf s}, \ensuremath{\mathbf v}) \ge 0$; since by assumption no spherical class is orthogonal to $\ensuremath{\mathbf v}$, we now have the \emph{strict}
inequality $(\tilde \ensuremath{\mathbf s}, \ensuremath{\mathbf v}) > 0$. Thus, in equation \eqref{eq:a2lessthanv2-2}, we also have $(\ensuremath{\mathbf v}, \ensuremath{\mathbf b}) > 0$, and so
$\ensuremath{\mathbf a}^2 \le \ensuremath{\mathbf v}^2 - 4$ in all cases.
Otherwise, if $I = \{1, \dots, l\}$, we have $\abs{I} > 1$, and we can apply Lemma
\ref{lem:numericaldimensions}; in either case we obtain
\[
\sum_{i=1}^l \mathop{\mathrm{dim}}\nolimits M_{\sigma_-}(\ensuremath{\mathbf a}_i) = \sum_{i \in I} (\ensuremath{\mathbf a}_i^2 + 2) < \ensuremath{\mathbf v}^2 \le \mathop{\mathrm{dim}}\nolimits \pi^+(D).
\]
As before, this is a contradiction to the observation that any curve contracted by
$\mathrm{HN}_D$ is also contracted by $\pi^+$.
\end{proof}
The case of totally semistable walls can be reduced to the previous one:
\begin{Cor} \label{cor:notdivisorial}
Assume that $\ensuremath{\mathcal H}$ is non-isotropic, and that there does not exist a spherical class
$\tilde \ensuremath{\mathbf s} \in \ensuremath{\mathcal H}$ with $(\tilde \ensuremath{\mathbf s}, \ensuremath{\mathbf v}) = 0$. Then a potential wall associated to $\ensuremath{\mathcal H}$ cannot
induce a divisorial contraction.
\end{Cor}
In fact, we will later see that all potential walls associated to $\ensuremath{\mathcal H}$ are mapped to the same wall in
the movable cone of the moduli space; thus they have to exhibit identical birational behavior.
\begin{proof}
As before, consider the minimal class $\ensuremath{\mathbf v}_0$ of the orbit $G_\ensuremath{\mathcal H}.\ensuremath{\mathbf v}$, in the sense of
Proposition \ref{prop:pellequation}.
By Lemma \ref{lem:notdivisorial}, there is an open subset
$U \subset M_{\sigma_+}(\ensuremath{\mathbf v}_0)$ of objects that are $\sigma_0$-\emph{stable} that
has complement of codimension at least two.
Let $\Phi$ be the composition of spherical twists given
by Proposition \ref{prop:sphericaltotallysemistable}, such that
$\Phi(E_0)$ is $\sigma_+$-stable of class $\ensuremath{\mathbf v}$ for every $[E_0] \in U$. Observe that
$\Phi(E_0)$ has a Jordan-H\"older filtration such that $E_0$ is one of its filtration factors (the
other factors are stable spherical objects). Therefore, the induced map
$\Phi_* \colon U \to M_{\sigma_+}(\ensuremath{\mathbf v})$ is injective, and the image does not contain any curve
of S-equivalent objects with respect to $\sigma_0$. Also, $\Phi_*(U)$ has complement of codimension
at least two (see e.g. \cite[Proposition 21.6]{GrossHuybrechtsJoyce}).
Since $\ell_{\sigma_0}$ does not contract any curves in $\Phi_*(U)$, it
cannot contract any divisors in $M_{\sigma_+}(\ensuremath{\mathbf v})$.
\end{proof}
The next step is to construct the divisorial contraction when there exists an orthogonal spherical
class. To clarify the logic, we first treat the simpler case of a wall that is not totally semistable:
\begin{Lem} \label{lem:minimaldivisorialcontraction}
Assume $\ensuremath{\mathcal H}$ is non-isotropic, $\ensuremath{\mathcal W}$ a potential wall associated to $\ensuremath{\mathcal H}$, and that $\ensuremath{\mathbf v}$ is a minimal
class of a $G_\ensuremath{\mathcal H}$-orbit. If there exists a spherical class $\tilde \ensuremath{\mathbf s} \in \ensuremath{\mathcal H}$ with
$(\tilde \ensuremath{\mathbf s}, \ensuremath{\mathbf v}) = 0$, then $\ensuremath{\mathcal W}$ induces a divisorial contraction.
If we assume that $\tilde \ensuremath{\mathbf s}$ is effective, then
the contracted divisor $D \subset M_{\sigma_+}(\ensuremath{\mathbf v})$ has class $\theta(\tilde \ensuremath{\mathbf s})$.
The HN filtration of a generic element $[E] \in D$ with respect to $\sigma_-$ is of the
form
\[
0 \to \widetilde S \ensuremath{\hookrightarrow} E \ensuremath{\twoheadrightarrow} F \to 0 \quad \text{or} \quad
0 \to F \ensuremath{\hookrightarrow} E \ensuremath{\twoheadrightarrow} \widetilde S \to 0,
\]
where $\widetilde S$ and $F$ are $\sigma_0$-stable objects of class $\tilde \ensuremath{\mathbf s}$ and
$\ensuremath{\mathbf v} - \tilde \ensuremath{\mathbf s}$, respectively.
\end{Lem}
\begin{proof}
As before, we only treat the case when $\ensuremath{\mathcal H}$ admits infinitely many spherical classes.
In that case, we must have $\tilde \ensuremath{\mathbf s} = \ensuremath{\mathbf s}$ or $\tilde \ensuremath{\mathbf s} = \ensuremath{\mathbf t}$; we may assume $\tilde \ensuremath{\mathbf s} = \ensuremath{\mathbf s}$, and
the other case will follow by dual arguments.
We first prove, by a straightforward computation, that $\ensuremath{\mathbf v} -\ensuremath{\mathbf s}$ is a minimal class in its
$G_\ensuremath{\mathcal H}$-orbit. If $\ensuremath{\mathbf v}^2 = 2$, then $(\ensuremath{\mathbf v}-\ensuremath{\mathbf s})^2 = 0$, contradicting the assumption that $\ensuremath{\mathcal H}$ is non-isotropic; therefore $\ensuremath{\mathbf v}^2 \ge 4$.
If we write $\ensuremath{\mathbf v} = x \ensuremath{\mathbf s} + y \ensuremath{\mathbf t}$, then $(\ensuremath{\mathbf v}, \ensuremath{\mathbf s}) = 0$ gives $ y = \frac 2m x$. Plugging in $\ensuremath{\mathbf v}^2 \ge 4$ gives
$x^2 \left(1 - \frac{4}{m^2}\right) \ge 2$. Since $m \ge 3$, we obtain
\[
x^2\left(1 - \frac 4{m^2}\right)^2 >
x^2\left(1 - \frac 4{m^2}\right) \frac 12 \ge 1,
\]
and therefore
\[
(\ensuremath{\mathbf t}, \ensuremath{\mathbf v} - \ensuremath{\mathbf s}) = m (x-1) - 2 \frac 2m x
= m x \left(1 - \frac 4{m^2}\right) - m \ge 0.
\]
Also, $(\ensuremath{\mathbf s}, \ensuremath{\mathbf v}-\ensuremath{\mathbf s}) = 2 > 0$, and therefore $\ensuremath{\mathbf v}-\ensuremath{\mathbf s}$ has positive pairing with every effective spherical
class.
By Lemma \ref{lem:nottotallysemistable}, the generic element $F \in M_{\sigma_+}(\ensuremath{\mathbf v}-\ensuremath{\mathbf s})$ is also
${\sigma_0}$-\emph{stable}. Since $(\ensuremath{\mathbf s}, \ensuremath{\mathbf v}-\ensuremath{\mathbf s}) = 2$ and $\mathop{\mathrm{Hom}}\nolimits(F, S) = \mathop{\mathrm{Hom}}\nolimits(S, F) = 0$, there is
a family of extensions
\[ 0 \to S \ensuremath{\hookrightarrow} E_p \ensuremath{\twoheadrightarrow} F \to 0 \]
parameterized by $p\in \ensuremath{\mathbb{P}}^1 \cong \ensuremath{\mathbb{P}}(\mathop{\mathrm{Ext}}\nolimits^1(F, S))$. By Lemma \ref{lem:stableextension}, they are
$\sigma_+$-stable. Since all $E_p$ are S-equivalent to each other, the morphism
$\pi^+ \colon M_{\sigma_+}(\ensuremath{\mathbf v}) \to \overline{M}$ associated to $\ensuremath{\mathcal W}$
contracts the image of this rational curve. Varying $F \in M_{\sigma_0}^{st}(\ensuremath{\mathbf v}-\ensuremath{\mathbf s})$, these span a
family of dimension $1 + (\ensuremath{\mathbf v}-\ensuremath{\mathbf s})^2 + 2 = \ensuremath{\mathbf v}^2 + 1$; this is a divisor
in $M_{\sigma_+}(\ensuremath{\mathbf v})$ contracted by $\pi^+$.
Since $\pi^+$ has relative Picard rank one, it cannot contract any other component.
\end{proof}
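The lattice arithmetic in the preceding proof can be checked mechanically. The sketch below is purely illustrative and not part of the argument: it fixes the Gram data $\ensuremath{\mathbf s}^2 = \ensuremath{\mathbf t}^2 = -2$, $(\ensuremath{\mathbf s}, \ensuremath{\mathbf t}) = m$ assumed above, and verifies, for small integral choices of $m \ge 3$ and $\ensuremath{\mathbf v} = x\ensuremath{\mathbf s} + \tfrac{2x}{m}\ensuremath{\mathbf t}$ with $\ensuremath{\mathbf v}^2 \ge 4$, the identities $(\ensuremath{\mathbf v}, \ensuremath{\mathbf s}) = 0$, $(\ensuremath{\mathbf v}-\ensuremath{\mathbf s})^2 = \ensuremath{\mathbf v}^2 - 2$, $(\ensuremath{\mathbf s}, \ensuremath{\mathbf v}-\ensuremath{\mathbf s}) = 2$, and $(\ensuremath{\mathbf t}, \ensuremath{\mathbf v}-\ensuremath{\mathbf s}) \ge 0$ (the sample ranges for $m$ and $x$ are assumptions).

```python
from fractions import Fraction

def pair(a, b, m):
    """Mukai pairing on the rank-2 lattice spanned by s, t,
    with s^2 = t^2 = -2 and (s, t) = m; vectors as (x, y) = x*s + y*t."""
    (x1, y1), (x2, y2) = a, b
    return -2 * x1 * x2 + m * (x1 * y2 + x2 * y1) - 2 * y1 * y2

# Check the computation for a range of m >= 3 and integral v with v^2 >= 4.
for m in range(3, 10):
    for x in range(1, 50):
        y = Fraction(2 * x, m)          # (v, s) = 0 forces y = (2/m) x
        if y.denominator != 1:
            continue                    # v = x*s + y*t must be integral
        v = (x, int(y))
        s, t = (1, 0), (0, 1)
        if pair(v, v, m) < 4:
            continue                    # the proof assumes v^2 >= 4
        assert pair(v, s, m) == 0                      # v orthogonal to s
        vs = (v[0] - 1, v[1])                          # the class v - s
        assert pair(vs, vs, m) == pair(v, v, m) - 2    # (v-s)^2 = v^2 - 2
        assert pair(s, vs, m) == 2                     # (s, v-s) = 2
        assert pair(t, vs, m) >= 0                     # minimality of v - s
print("all lattice identities verified")
```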
The following lemma treats the general case, for which we will first set up notation.
As before, we let $\ensuremath{\mathbf v}_0$ be the minimal class in the $G_\ensuremath{\mathcal H}$-orbit of $\ensuremath{\mathbf v}$. By
$\tilde \ensuremath{\mathbf s}_0$ we denote the effective spherical class with $(\ensuremath{\mathbf v}_0, \tilde \ensuremath{\mathbf s}_0) = 0$;
we have $\tilde \ensuremath{\mathbf s}_0 = \ensuremath{\mathbf t}$ or $\tilde \ensuremath{\mathbf s}_0 = \ensuremath{\mathbf s}$.
Accordingly, in the list of the $G_\ensuremath{\mathcal H}$-orbit of $\ensuremath{\mathbf v}$ given by Proposition
\ref{prop:orbitlist}, we have either $\ensuremath{\mathbf v}_{2i} = \ensuremath{\mathbf v}_{2i+1}$, or $\ensuremath{\mathbf v}_{2i} = \ensuremath{\mathbf v}_{2i-1}$ for all
$i$, since $\ensuremath{\mathbf v}_0$ is fixed under the reflection $\rho_{\tilde \ensuremath{\mathbf s}_0}$ at $\tilde \ensuremath{\mathbf s}_0$.
We choose $l$ such that $\ensuremath{\mathbf v} = \ensuremath{\mathbf v}_l$, and such that the corresponding sequence
of reflections sends $\tilde \ensuremath{\mathbf s}_0$ to $\tilde \ensuremath{\mathbf s}$:
\[
\tilde \ensuremath{\mathbf s} =
\begin{cases}
\rho_{\ensuremath{\mathbf t}_l} \circ \rho_{\ensuremath{\mathbf t}_{l-1}} \circ \dots \circ \rho_{\ensuremath{\mathbf t}_0}(\tilde \ensuremath{\mathbf s}_0)
& \text{if $l > 0$} \\
\rho_{\ensuremath{\mathbf s}_l} \circ \rho_{\ensuremath{\mathbf s}_{l-1}} \circ \dots \circ \rho_{\ensuremath{\mathbf s}_{-1}}(\tilde \ensuremath{\mathbf s}_0)
& \text{if $l < 0$}
\end{cases}
\]
Depending on the
ordering of the slopes $\phi^+(\ensuremath{\mathbf v}), \phi^+(\ensuremath{\mathbf v}_0)$, we let $\Phi$ be the composition of
spherical twists appearing in Proposition \ref{prop:sphericaltotallysemistable}.
\begin{Lem} \label{lem:sphericaldivisorialwall}
Assume that $\ensuremath{\mathcal H}$ is non-isotropic, and let $\ensuremath{\mathcal W}$ be a corresponding potential wall. If there
is an effective spherical $\tilde \ensuremath{\mathbf s} \in C_\ensuremath{\mathcal W}$ with $(\ensuremath{\mathbf v}, \tilde \ensuremath{\mathbf s}) = 0$, then $\ensuremath{\mathcal W}$ induces
a divisorial contraction.
The contracted divisor $D$ has class $\theta(\tilde \ensuremath{\mathbf s})$. For $E \in D$ generic, there are $\sigma_+$-stable objects
$F$ and $\widetilde S$ of class $\ensuremath{\mathbf v}-\tilde \ensuremath{\mathbf s}$ and $\tilde \ensuremath{\mathbf s}$, respectively,
and a short exact sequence
\begin{equation} \label{eq:divisorialextension}
0 \to \widetilde S \ensuremath{\hookrightarrow} E \ensuremath{\twoheadrightarrow} F \to 0 \quad \text{or} \quad
0 \to F \ensuremath{\hookrightarrow} E \ensuremath{\twoheadrightarrow} \widetilde S \to 0.
\end{equation}
The inclusion $\widetilde S \ensuremath{\hookrightarrow} E$ or $F \ensuremath{\hookrightarrow} E$ appears as one of the filtration steps in a Jordan-H\"older
filtration of $E$.
In addition, there exists an open subset $U^+ \subset M_{\sigma_+}(\ensuremath{\mathbf v}_0)$, with complement of codimension
two, such that $\Phi(E_0)$ is $\sigma_+$-stable for every $\sigma_+$-stable object
$E_0 \in U^+$.
\end{Lem}
\begin{proof}
We rely on the construction in the proof of Proposition \ref{prop:sphericaltotallysemistable}.
Let $\widetilde S_0$ be the stable spherical object of class $\tilde \ensuremath{\mathbf s}_0$; we have
$\widetilde S_0 = S$ or $\widetilde S_0 = T$.
As in the proof of Lemma \ref{lem:minimaldivisorialcontraction},
one shows that $\ensuremath{\mathbf v}_0 - \tilde \ensuremath{\mathbf s}_0$ is the minimal class in its $G_\ensuremath{\mathcal H}$-orbit.
Let $F_0$ be a generic $\sigma_0$-\emph{stable} object of class $\ensuremath{\mathbf v}_0 - \tilde \ensuremath{\mathbf s}_0$.
Applying Proposition \ref{prop:sphericaltotallysemistable} to the class $\ensuremath{\mathbf v} - \tilde \ensuremath{\mathbf s}$,
we see that $F := \Phi(F_0)$ is $\sigma_+$-stable of that class.
We may again assume that $\Phi$ is of the form $\mathop{\mathrm{ST}}\nolimits_{T_l^+} \circ \dots \circ \mathop{\mathrm{ST}}\nolimits_{T_1^+}$; the
other case follows by dual arguments.
Inductively, one shows that
$\Phi(S) = T_{l+1}^+$ and
$\Phi(T) = T_l^+[-1]$. These are both simple objects of the category $\ensuremath{\mathcal A}_l$ defined by tilting
in the proof of Proposition \ref{prop:sphericaltotallysemistable}; therefore,
$\widetilde S := \Phi(\widetilde S_0)$ is simple in $\ensuremath{\mathcal A}_l$. By the induction claim in the proof of Proposition \ref{prop:sphericaltotallysemistable},
$F = \Phi(F_0)$ is also a simple object in this category. In particular,
$\mathop{\mathrm{Hom}}\nolimits(\widetilde S, F) = \mathop{\mathrm{Hom}}\nolimits(F, \widetilde S) = 0$ and $\mathop{\mathrm{ext}}\nolimits^1(\widetilde S, F) = 2$.
Applying Lemma \ref{lem:stableextension}
again, and using the compatibility of $\ensuremath{\mathcal A}_l$ with stability, we obtain a stable extension
of the form \eqref{eq:divisorialextension}.
This gives a divisor contracted by $\pi^+$, and we can proceed as in the previous lemma.
Let $D_0 \subset M_{\sigma_+}(\ensuremath{\mathbf v}_0)$ be the contracted divisor for the class $\ensuremath{\mathbf v}_0$. The above proof
also shows that for a generic object $E_0 \in D_0$ (whose form is given by Lemma
\ref{lem:minimaldivisorialcontraction}), the object $\Phi(E_0)$ is $\sigma_+$-stable (and contained
in the contracted divisor $D$). Thus we can take $U^+$ to be the union of the set of
$\sigma_0$-\emph{stable} objects in $M_{\sigma_+}(\ensuremath{\mathbf v}_0)$ with the open subset of
$D_0$ of objects of the form given in Lemma \ref{lem:minimaldivisorialcontraction}.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop:sphericaldivisorialwall}]
The statements follow from Corollary \ref{cor:notdivisorial} and Lemma
\ref{lem:sphericaldivisorialwall}.
\end{proof}
\section{Isotropic walls are Uhlenbeck walls}\label{sec:iso}
In this section, we study potential walls $\ensuremath{\mathcal W}$ in the case where $\ensuremath{\mathcal H}$ admits an isotropic class
$\ensuremath{\mathbf w} \in \ensuremath{\mathcal H}, \ensuremath{\mathbf w}^2 = 0$.
Following an idea of Minamide, Yanagida, and Yoshioka \cite{MYY2}, we study the wall $\ensuremath{\mathcal W}$ via a
Fourier-Mukai transform after which $\ensuremath{\mathbf w}$ becomes the class of a point. Then $\sigma_+$ corresponds
to Gieseker stability and, as proven in \cite{Jason:Uhlenbeck}, the wall corresponds to the
contraction to the Uhlenbeck compactification, as constructed by Jun Li in \cite{JunLi:Uhlenbeck}.
Parts of this section are well-known.
In particular, \cite[Proposition 0.5]{Yoshioka:Irreducibility} deals with the existence of stable locally-free sheaves.
For other general results, see \cite{Yoshioka:Abelian}.
\subsection*{The Uhlenbeck compactification}
We let $(X,\alpha)$ be a twisted K3 surface.
For divisor classes $\beta,\omega \in \mathrm{NS}(X)_{\ensuremath{\mathbb{Q}}}$, with $\omega$ ample, and for a vector $\ensuremath{\mathbf v}\in H^*_\mathrm{alg}(X,\alpha,\ensuremath{\mathbb{Z}})$, we denote by $M_{\omega}^{\beta}(\ensuremath{\mathbf v})$ the moduli space of $(\beta,\omega)$-Gieseker semistable $\alpha$-twisted sheaves on $X$ with Mukai vector $\ensuremath{\mathbf v}$. Here, $(\beta,\omega)$-Gieseker stability is defined via the Hilbert polynomial formally twisted by $e^{-\beta}$ (see \cite{MatsukiWenthworth:TwistedVariation,Yoshioka:TwistedStability, Lieblich:Twisted}). When $\beta=0$, we obtain the usual notion of $\omega$-Gieseker stability. In such a case, we will omit $\beta$ from the notation.
We start with the following observation:
\begin{Lem}\label{lem:StabilityIsotropic}
Assume that there exists an isotropic class in $\ensuremath{\mathcal H}$.
Then there are two effective, primitive, isotropic classes $\ensuremath{\mathbf w}_0$ and $\ensuremath{\mathbf w}_1$ in $\ensuremath{\mathcal H}$,
such that, for a generic stability condition $\sigma_0 \in \ensuremath{\mathcal W}$, we have
\begin{enumerate}
\item \label{enum:sigma0} $M_{\sigma_0}(\ensuremath{\mathbf w}_0)=M_{\sigma_0}^{st}(\ensuremath{\mathbf w}_0)$, and
\item either $M_{\sigma_0}(\ensuremath{\mathbf w}_1)=M_{\sigma_0}^{st}(\ensuremath{\mathbf w}_1)$, or
there exists a $\sigma_0$-stable spherical object $S$, with Mukai vector $\ensuremath{\mathbf s}$, such that $(\ensuremath{\mathbf s}, \ensuremath{\mathbf w}_1)<0$ and $\ensuremath{\mathcal W}$ is a totally semistable wall for $\ensuremath{\mathbf w}_1$.
\end{enumerate}
Any positive class $\ensuremath{\mathbf v}' \in P_\ensuremath{\mathcal H}$ satisfies $(\ensuremath{\mathbf v}', \ensuremath{\mathbf w}_i) \ge 0$ for $i = 0, 1$.
\end{Lem}
\begin{proof}
Let $\tilde \ensuremath{\mathbf w} \in \ensuremath{\mathcal H}$ be a primitive isotropic class; up to replacing $\tilde \ensuremath{\mathbf w}$ by $-\tilde \ensuremath{\mathbf w}$, we
may assume it to be effective.
We complete $\tilde \ensuremath{\mathbf w}$ to a basis $\{\tilde \ensuremath{\mathbf v},\tilde \ensuremath{\mathbf w}\}$ of $\ensuremath{\mathcal H}_\ensuremath{\mathbb{Q}}$.
Then, for all $(a,b)\in\ensuremath{\mathbb{Q}}^2$, we have
\[
(a\tilde \ensuremath{\mathbf v} + b \tilde \ensuremath{\mathbf w})^2 = a \cdot \left( a {\tilde \ensuremath{\mathbf v}}^2 + 2 b (\tilde \ensuremath{\mathbf v},\tilde \ensuremath{\mathbf w}) \right).
\]
This shows the existence of a second integral isotropic class. If we choose it to be effective,
then the positive cone $P_\ensuremath{\mathcal H}$ is given by
$\ensuremath{\mathbb{R}}_{\ge 0}\cdot \ensuremath{\mathbf w}_0 + \ensuremath{\mathbb{R}}_{\ge 0}\cdot \ensuremath{\mathbf w}_1$. The claim
$(\ensuremath{\mathbf v}', \ensuremath{\mathbf w}_i) \ge 0$ follows easily.
By Theorem \ref{thm:nonempty}, we have $M_{\sigma_0}(\tilde{\ensuremath{\mathbf w}})\neq\emptyset$. If $\ensuremath{\mathcal W}$ does not coincide
with a wall for $\tilde \ensuremath{\mathbf w}$, then we can take $\ensuremath{\mathbf w}_0 = \tilde \ensuremath{\mathbf w}$, and claim
\eqref{enum:sigma0} will be satisfied.
Otherwise, let $\sigma\in\mathop{\mathrm{Stab}}\nolimits^\dagger(X,\alpha)$ be a generic stability condition near $\ensuremath{\mathcal W}$;
by \cite[Lemma 7.2]{BM:projectivity}, we have
$M_{\sigma}(\tilde \ensuremath{\mathbf w})=M^{st}_{\sigma}(\tilde \ensuremath{\mathbf w})\neq\emptyset$.
Up to applying a Fourier-Mukai equivalence, we may assume that $\tilde \ensuremath{\mathbf w} = (0, 0, 1)$ is the
Mukai vector of a point on a twisted K3 surface; then we can apply
the classification of walls for isotropic classes in \cite[Theorem 12.1]{Bridgeland:K3}, extended to
twisted surfaces in \cite{HMS:generic_K3s}.
If $\ensuremath{\mathcal W}$ is a totally semistable wall for $\tilde \ensuremath{\mathbf w}$, then we are in case $(A^+)$ or $(A^-)$ of \cite[Theorem 12.1]{Bridgeland:K3}:
there exists a spherical $\sigma_0$-stable twisted vector bundle $S$ such that $S$ or $S[2]$
is a JH factor of the skyscraper sheaf $k(x)$, for every $x \in M_{\sigma}(\tilde \ensuremath{\mathbf w})$;
moreover, the other non-isomorphic JH factor is either $\mathop{\mathrm{ST}}\nolimits_S(k(x))$, or $\mathop{\mathrm{ST}}\nolimits^{-1}_S(k(x))$.
In both cases, the Mukai vector $\ensuremath{\mathbf w}_0$ of the latter JH factor is primitive and isotropic, and $\ensuremath{\mathcal W}$ is not a wall for $\ensuremath{\mathbf w}_0$.
Finally, if $\ensuremath{\mathcal W}$ is a wall for $\tilde \ensuremath{\mathbf w}$, but not a totally semistable
wall, it must be a wall of type $(C_k)$, still in the notation of \cite[Theorem
12.1]{Bridgeland:K3}: there is a rational curve $C \subset M_{\sigma}(\tilde \ensuremath{\mathbf w})$ such that
$k(x)$ is strictly semistable if and only if $x \in C$.
But then the rank two lattice associated to the wall is negative semi-definite by \cite[Remark 6.3]{BM:projectivity};
on the other hand, by Proposition \ref{prop:HW}, claim \eqref{enum:JHfactorsinHW},
it must coincide with $\ensuremath{\mathcal H}$, which has signature $(1, 1)$. This is a contradiction.
\end{proof}
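The key identity in the proof, $(a\tilde \ensuremath{\mathbf v} + b \tilde \ensuremath{\mathbf w})^2 = a \left( a {\tilde \ensuremath{\mathbf v}}^2 + 2 b (\tilde \ensuremath{\mathbf v},\tilde \ensuremath{\mathbf w}) \right)$, and the second isotropic class it produces, can be verified by a short computation. The following sketch is illustrative only; the sample ranges for ${\tilde \ensuremath{\mathbf v}}^2$ and $(\tilde \ensuremath{\mathbf v}, \tilde \ensuremath{\mathbf w})$ are assumptions, not data from the text.

```python
def square(a, b, qv, qvw):
    """(a*v + b*w)^2 in a rank-2 lattice with w isotropic:
    v^2 = qv, (v, w) = qvw, w^2 = 0."""
    return a * a * qv + 2 * a * b * qvw

for qv in range(-6, 8, 2):        # sample even values for v^2
    for qvw in range(1, 6):       # sample values for (v, w)
        for a in range(-4, 5):
            for b in range(-4, 5):
                # the identity from the proof: the square factors through a
                assert square(a, b, qv, qvw) == a * (a * qv + 2 * b * qvw)
        # a second isotropic direction: 2*(v, w)*v - v^2*w
        assert square(2 * qvw, -qv, qv, qvw) == 0
print("identity and second isotropic class verified")
```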
Let $\ensuremath{\mathbf w}_0,\ensuremath{\mathbf w}_1 \in C_\ensuremath{\mathcal W}$ be the effective, primitive, isotropic classes given by the above lemma,
and let $Y:=M_{\sigma_0}(\ensuremath{\mathbf w}_0)$.
Then $Y$ is a K3 surface and, by \cite{Mukai:BundlesK3,Caldararu:NonFineModuliSpaces,Yoshioka:TwistedStability,HuybrechtsStellari:CaldararuConj}, there exist a class $\alpha'\in\mathrm{Br}(Y)$ and a Fourier-Mukai transform
\[
\Phi \colon \mathrm{D}^{b}(X,\alpha)\xrightarrow{\sim} \mathrm{D}^{b}(Y,\alpha')
\]
such that $\Phi(\ensuremath{\mathbf w}_0)=(0,0,1)$. By construction, skyscraper sheaves of points on $Y$
are $\Phi_*(\sigma_0)$-stable. By Bridgeland's Theorem \ref{thm:BridgelandK3geometric}, there exist divisor classes
$\omega, \beta \in \mathop{\mathrm{NS}}\nolimits(Y)_\ensuremath{\mathbb{Q}}$, with $\omega$ ample, such that up to the $\widetilde{\mathop{\mathrm{GL}}}^+_2(\ensuremath{\mathbb{R}})$-action,
$\Phi_*(\sigma_0)$ is given by $\sigma_{\omega, \beta}$.
In particular, the category $\ensuremath{\mathcal P}_{\omega, \beta}(1)$ is the extension-closure of
skyscraper sheaves of points, and the shifts $F[1]$ of $\mu_{\omega}$-stable torsion-free sheaves $F$
with slope $\mu_{\omega}(F) = \omega\cdot\beta$.
Since $\sigma_0$ by assumption does not lie on any other wall with
respect to $\ensuremath{\mathbf v}$, the divisor $\omega$ is generic with respect to $\Phi_*(\ensuremath{\mathbf v})$.
By abuse of notation, we will from now on write $(X,\alpha)$ instead of $(Y,\alpha')$, $\ensuremath{\mathbf v}$ instead of $\Phi_*(\ensuremath{\mathbf v})$, and
$\sigma_0$ instead of $\sigma_{\omega, \beta}$.
Let $\sigma_+ = \sigma_{\omega, \beta - \epsilon}$ and
$\sigma_- = \sigma_{\omega, \beta + \epsilon}$; here $\epsilon$ is a sufficiently small positive
multiple of $\omega$.
\begin{Prop}[\cite{Jason:Uhlenbeck, LoQin:miniwalls}]\label{prop:Jason}
An object of class $\ensuremath{\mathbf v}$ is $\sigma_+$-stable if and only if it is the shift $F[1]$ of a
$(\beta,\omega)$-Gieseker stable sheaf $F$ on $(X, \alpha)$; the shift $[1]$ induces
the following identification of moduli spaces:
\[
M_{\sigma_+}(\ensuremath{\mathbf v}) = M_{\omega}^{\beta}(-\ensuremath{\mathbf v}).
\]
Moreover, the contraction morphism $\pi^+$ induced via Theorem \ref{thm:contraction} for generic
$\sigma_0 \in \ensuremath{\mathcal W}$ is the Li-Gieseker-Uhlenbeck morphism to the Uhlenbeck compactification.
Finally, an object $E$ of class $\ensuremath{\mathbf v}$ is $\sigma_-$-stable if and only if
it is of the form $E = F^\vee[2]$, the shifted derived dual of a $(-\beta,\omega)$-Gieseker stable
sheaf $F$ on $(X, \alpha^{-1})$.
\end{Prop}
\begin{proof}
The identification of $M_{\sigma_+}(\ensuremath{\mathbf v})$ with the Gieseker moduli space is well-known,
and follows with the same arguments as in \cite[Proposition 14.2]{Bridgeland:K3}.
For $\sigma_0$, two torsion-free sheaves $E, F$ become S-equivalent if and only if they have the same image in the
Uhlenbeck space (\cite[Theorem 3.1]{Jason:Uhlenbeck}, \cite[Section 5]{LoQin:miniwalls}): indeed,
if $E_i$ are the Jordan-H\"older factors of $E$ with respect to slope-stability, then
$E$ is S-equivalent to
\[ \bigoplus E_i^{**} \oplus \left(E_i^{**}/E_i\right), \]
precisely as in \cite[Theorem 8.2.11]{HL:Moduli}.
Thus, Theorem \ref{thm:contraction} identifies $\pi^+$ with the morphism to the Uhlenbeck space.
The claim about $\sigma_-$-stability follows from the case of $\sigma_+$-stability by
Proposition \ref{prop:dualstability}; see also
\cite[Proposition 2.2.7]{Minamide-Yanagida-Yoshioka:wall-crossing} in the case $\alpha = 1$.
\end{proof}
In other words, the coarse moduli space $M_{\sigma_0}(\ensuremath{\mathbf v})$ is isomorphic to the Uhlenbeck
compactification (\cite{JunLi:Uhlenbeck, Yoshioka:TwistedStability}) of the moduli space of
slope-stable vector bundles on $(X,\alpha)$.
Given a $(\beta,\omega)$-Gieseker stable sheaf $F \in M_{\omega}^{\beta}(-\ensuremath{\mathbf v})$, the
$\sigma_+$-stable object $F[1]$ becomes strictly semistable with respect to
$\sigma_0$ if and only if $F$ is not locally free, or if $F$ is not slope-\emph{stable}.
In particular, when the rank of $-\ensuremath{\mathbf v}$ equals one, the contraction morphism $\pi^+$
is the Hilbert-Chow morphism $\mathrm{Hilb}^n(X) \to \mathop{\mathrm{Sym}}^n(X)$; see also \cite[Example
10.1]{BM:projectivity}.
\subsection*{Totally semistable isotropic walls}
We start with the existence of a unique spherical stable object in the case where the wall is totally semistable:
\begin{Lem}\label{lem:UniqueSpherical}
Assume that $\ensuremath{\mathcal W}$ is a totally semistable wall for $\ensuremath{\mathbf v}$.
\begin{enumerate}
\item \label{enum:TotSStIsotropic1} There exists a unique spherical $\sigma_0$-stable object $S\in\ensuremath{\mathcal P}_{\sigma_0}(1)$.
\item \label{enum:TotSStIsotropic2} Let $E\in M_{\sigma_+}(\ensuremath{\mathbf v})$ be a generic object.
Then its HN filtration with respect to $\sigma_{-}$ has length $2$ and takes the form
\begin{equation}\label{eq:TotSStIsotropic}
0 \to S^{\oplus a} \ensuremath{\hookrightarrow} E \ensuremath{\twoheadrightarrow} F \to 0 \quad \text{ or } \quad 0 \to F \ensuremath{\hookrightarrow} E \ensuremath{\twoheadrightarrow} S^{\oplus a} \to 0,
\end{equation}
with $a\in\ensuremath{\mathbb{Z}}_{>0}$.
The $\sigma_-$-semistable object $F$ is generic in $M_{\sigma_-}(\ensuremath{\mathbf v}')$, for $\ensuremath{\mathbf v}':=\ensuremath{\mathbf v}(F)$, and $\mathop{\mathrm{dim}}\nolimits M_{\sigma_-}(\ensuremath{\mathbf v}') = \mathop{\mathrm{dim}}\nolimits M_{\sigma_+}(\ensuremath{\mathbf v}) = \ensuremath{\mathbf v}^2 +2$.
\end{enumerate}
\end{Lem}
The idea of the proof is very similar to the one in Lemma \ref{lem:nottotallysemistable}.
The only difference is that we cannot use a completely numerical criterion like Lemma \ref{lem:numericaldimensions} and we will replace it by Mukai's Lemma \ref{lem:Mukai}.
\begin{proof}[Proof of Lemma \ref{lem:UniqueSpherical}]
We first prove \eqref{enum:TotSStIsotropic1}.
We consider again the two maps
\begin{align*}
&\pi^+\colon M_{\sigma_+}(\ensuremath{\mathbf v}) \to \overline{M},\\
&\mathrm{HN}\colon M_{\sigma_+}(\ensuremath{\mathbf v}) \dashrightarrow M_{\sigma_-}(\ensuremath{\mathbf a}_1)\times \dots \times M_{\sigma_-}(\ensuremath{\mathbf a}_m).
\end{align*}
The first one is induced by $\ell_{\sigma_0}$ and the second by the existence of relative HN filtrations.
By \cite[Section 4.5]{HL:Moduli}, we have, for all $i=1,\dots,m$ and for all $A_i\in M_{\sigma_-}(\ensuremath{\mathbf a}_i)$,
\[
\mathop{\mathrm{dim}}\nolimits M_{\sigma_-}(\ensuremath{\mathbf a}_i) \leq \mathop{\mathrm{ext}}\nolimits^1(A_i,A_i).
\]
Hence, by Mukai's Lemma \ref{lem:Mukai}, we deduce
\begin{equation}\label{eq:InequalityDimensionHNFactorsIsotropic}
\mathop{\mathrm{dim}}\nolimits M_{\sigma_+}(\ensuremath{\mathbf v}) \geq \sum_{i=1}^m \mathop{\mathrm{dim}}\nolimits M_{\sigma_-}(\ensuremath{\mathbf a}_i).
\end{equation}
Equation \eqref{eq:InequalityDimensionHNFactorsIsotropic} is the analogue of \eqref{eq:InequalityDimensionHNFactors} in the non-isotropic case.
Since any curve contracted by $\mathrm{HN}$ is also contracted by $\pi^+$, it follows that
\[
\sum_{i = 1}^m \mathop{\mathrm{dim}}\nolimits M_{\sigma_-}(\ensuremath{\mathbf a}_i) \ge \mathop{\mathrm{dim}}\nolimits \overline{M} = \mathop{\mathrm{dim}}\nolimits M_{\sigma_+}(\ensuremath{\mathbf v}).
\]
Therefore equality holds, and $\mathrm{HN}$ is a dominant map.
This shows that the projections
\[
M_{\sigma_+}(\ensuremath{\mathbf v}) \dashrightarrow M_{\sigma_-}(\ensuremath{\mathbf a}_i)
\]
are dominant.
By Theorem \ref{thm:KLS}, $M_{\sigma_-}(\ensuremath{\mathbf a}_i)$ has symplectic singularities.
Hence, we deduce that either $M_{\sigma_-}(\ensuremath{\mathbf a}_i)$ is a point, or $\mathop{\mathrm{dim}}\nolimits M_{\sigma_-}(\ensuremath{\mathbf a}_i) = \mathop{\mathrm{dim}}\nolimits M_{\sigma_+}(\ensuremath{\mathbf v}) = \ensuremath{\mathbf v}^2+2$.
Since $m\geq 2$, by Lemma \ref{lem:JHspherical} this shows the existence of a spherical $\sigma_0$-stable object in $\ensuremath{\mathcal P}_{\sigma_0}(1)$.
By Proposition \ref{prop:stablespherical}, there can only be one such spherical object.
To prove \eqref{enum:TotSStIsotropic2}, we first observe that by uniqueness (and by Lemma \ref{lem:JHspherical} again), all $\sigma_-$-spherical objects appearing in a HN filtration of a generic element $E \in M_{\sigma_+}(\ensuremath{\mathbf v})$ must be $\sigma_0$-stable as well.
As a consequence, the length of a HN filtration of $E$ with respect to $\sigma_-$ must be $2$ and have the form \eqref{eq:TotSStIsotropic}.
Since the maps $M_{\sigma_+}(\ensuremath{\mathbf v}) \dashrightarrow M_{\sigma_-}(\ensuremath{\mathbf a}_i)$ are dominant, the
$\sigma_-$-semistable object $F$ is generic.
\end{proof}
We can now prove the first implication for the characterization of totally semistable walls in the isotropic case.
We let $\ensuremath{\mathbf s}:=\ensuremath{\mathbf v}(S)$, where $S$ is the unique $\sigma_0$-stable object in $\ensuremath{\mathcal P}_{\sigma_0}(1)$.
\begin{Prop}\label{prop:TotSStWallIsotropicHasNumericalProps}
Let $\ensuremath{\mathcal W}$ be a totally semistable wall for $\ensuremath{\mathbf v}$. Then either
there exists an isotropic vector $\ensuremath{\mathbf w}$ with $(\ensuremath{\mathbf w},\ensuremath{\mathbf v})=1$, or the effective
spherical class $\ensuremath{\mathbf s}$ satisfies $(\ensuremath{\mathbf s},\ensuremath{\mathbf v})<0$.
\end{Prop}
\begin{proof}
We continue to use the notation of Lemma \ref{lem:UniqueSpherical}; in particular,
let $a > 0$ be as in the short exact sequence \eqref{eq:TotSStIsotropic}, and
$\ensuremath{\mathbf v}' = \ensuremath{\mathbf v} - a \ensuremath{\mathbf s}$.
If $(\ensuremath{\mathbf v}')^2>0$, then by Lemma \ref{lem:UniqueSpherical} and Theorem \ref{thm:nonempty}\eqref{enum:dimandsquare}, we have $(\ensuremath{\mathbf v}')^2 = \ensuremath{\mathbf v}^2$.
Since $\ensuremath{\mathbf v}' = \ensuremath{\mathbf v} - a \ensuremath{\mathbf s}, a>0$, this implies $(\ensuremath{\mathbf s},\ensuremath{\mathbf v})<0$.
So we may assume $(\ensuremath{\mathbf v}')^2 = 0$. Then
$\ensuremath{\mathbf v}^2 = 0 + 2a (\ensuremath{\mathbf v}', \ensuremath{\mathbf s}) - 2a^2$, and it follows that $(\ensuremath{\mathbf v}', \ensuremath{\mathbf s}) > 0$.
In the notation of Lemma \ref{lem:StabilityIsotropic}, this means that $\ensuremath{\mathbf v}'$ is a positive multiple
of $\ensuremath{\mathbf w}_0$, which we can take to be the class of a point:
$\ensuremath{\mathbf v}' = c \ensuremath{\mathbf w}_0 = c (0, 0, 1)$.
Then the coarse moduli space $M_{\sigma_0}(\ensuremath{\mathbf v}')$ is the symmetric product
$\mathop{\mathrm{Sym}}^c X$; if we define $n$ by $\ensuremath{\mathbf v}^2 = 2n-2$, then
the equality of dimensions in Lemma \ref{lem:UniqueSpherical} becomes $c = n$.
Therefore
\[
2n-2 = \ensuremath{\mathbf v}^2 = (a \ensuremath{\mathbf s} + n \ensuremath{\mathbf w}_0)^2 = -2 a^2 + 2 a n (\ensuremath{\mathbf s}, \ensuremath{\mathbf w}_0)
\]
or, equivalently,
\begin{equation}\label{eq:Columbus0902II}
n-1 = a \bigl(n (\ensuremath{\mathbf s}, \ensuremath{\mathbf w}_0) - a\bigr).
\end{equation}
Recall that $(\ensuremath{\mathbf s}, \ensuremath{\mathbf w}_0) > 0$. If the right-hand side is positive, then $a$ and
$n (\ensuremath{\mathbf s}, \ensuremath{\mathbf w}_0) - a$ are positive integers with sum $n (\ensuremath{\mathbf s}, \ensuremath{\mathbf w}_0)$, so their product is at
least $n(\ensuremath{\mathbf s}, \ensuremath{\mathbf w}_0) - 1$. Thus, \eqref{eq:Columbus0902II} only has solutions if $(\ensuremath{\mathbf s}, \ensuremath{\mathbf w}_0) = 1$, in
which case they are $a = 1$ and $a = n-1$.
In the former case, $(\ensuremath{\mathbf v},\ensuremath{\mathbf w}_0)=1$. In the latter case, observe that $\ensuremath{\mathbf w}_1 = \ensuremath{\mathbf w}_0 + \ensuremath{\mathbf s}$,
and $(\ensuremath{\mathbf v},\ensuremath{\mathbf w}_1)=1$ follows directly.
\end{proof}
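Equation \eqref{eq:Columbus0902II} can also be solved by brute force. Writing $r$ for $(\ensuremath{\mathbf s}, \ensuremath{\mathbf w}_0)$, the following illustrative sketch (the search ranges are assumptions, chosen large enough by the estimate in the proof) confirms that for $n \ge 2$ the only solutions are $r = 1$ with $a = 1$ or $a = n - 1$:

```python
def solutions(n):
    """All (r, a) with r = (s, w0) >= 1 and a >= 1 satisfying n - 1 = a*(n*r - a)."""
    return [(r, a) for r in range(1, n + 2)
                   for a in range(1, n * r + 1)
                   if a * (n * r - a) == n - 1]

# As in the proof: for n >= 2, the only solutions have r = 1 and a in {1, n-1}.
for n in range(2, 40):
    expected = {(1, 1), (1, n - 1)}
    assert set(solutions(n)) == expected, (n, solutions(n))
print("only r = 1, a in {1, n-1} solve n - 1 = a(n r - a)")
```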
The converse statement follows from Proposition \ref{prop:sphericaltotallysemistable} above, and Lemma \ref{lem:NumericalPropertiesImplyTotSStIsotropicII} below.
\begin{Lem}\label{lem:NumericalPropertiesImplyTotSStIsotropicII}
Let $\ensuremath{\mathcal W}$ be a potential wall.
If there exists an isotropic class $\ensuremath{\mathbf w} \in \ensuremath{\mathcal H}_\ensuremath{\mathcal W}$ with $(\ensuremath{\mathbf w},\ensuremath{\mathbf v})=1$, then $\ensuremath{\mathcal W}$ is a totally semistable wall.
\end{Lem}
\begin{proof}
Note that by Lemma \ref{lem:StabilityIsotropic}, the primitive class $\ensuremath{\mathbf w}$ is automatically
effective.
Let $\sigma_0\in\ensuremath{\mathcal W}$ be a generic stability condition.
If $M_{\sigma_0}^{st}(\ensuremath{\mathbf w})\neq\emptyset$, then we can assume $\ensuremath{\mathbf w}=(0,0,1)$.
In this case $-\ensuremath{\mathbf v}$ has rank one, $M_{\sigma_+}(\ensuremath{\mathbf v})$ is the Hilbert scheme,
and $\ensuremath{\mathcal W}$ is the Hilbert-Chow wall discussed in
\cite[Example 10.1]{BM:projectivity}; in particular, it is totally
semistable.
Otherwise, $M_{\sigma_0}^{st}(\ensuremath{\mathbf w})=\emptyset$; hence, in the notation
of Lemma \ref{lem:StabilityIsotropic}, we are in the case $\ensuremath{\mathbf w} = \ensuremath{\mathbf w}_1$, and there exists a
$\sigma_0$-stable spherical object $S$, with Mukai vector $\ensuremath{\mathbf s}$, such that $(\ensuremath{\mathbf s}, \ensuremath{\mathbf w}_1)<0$.
Write $\ensuremath{\mathbf w}_1 = \ensuremath{\mathbf w}_0 + r \ensuremath{\mathbf s}$, where $r = (\ensuremath{\mathbf s}, \ensuremath{\mathbf w}_0) \in\ensuremath{\mathbb{Z}}_{>0}$.
Then
\[
1 = (\ensuremath{\mathbf v},\ensuremath{\mathbf w}_1) = (\ensuremath{\mathbf v},\ensuremath{\mathbf w}_0) + r (\ensuremath{\mathbf v},\ensuremath{\mathbf s}).
\]
By Lemma \ref{lem:StabilityIsotropic}, $(\ensuremath{\mathbf v},\ensuremath{\mathbf w}_0) \ge 0$; the inequality is strict, since
otherwise $\ensuremath{\mathbf v}$ would be proportional to $\ensuremath{\mathbf w}_0$, contradicting $\ensuremath{\mathbf v}^2 > 0$. Hence
$(\ensuremath{\mathbf v},\ensuremath{\mathbf s})\leq 0$. If this inequality is strict, Proposition \ref{prop:sphericaltotallysemistable}
applies. Otherwise, $(\ensuremath{\mathbf v}, \ensuremath{\mathbf s}) = 0$ and $(\ensuremath{\mathbf v},\ensuremath{\mathbf w}_1)=(\ensuremath{\mathbf v},\ensuremath{\mathbf w}_0)=1$; thus we are again
in the case of the Hilbert-Chow wall, and $\ensuremath{\mathcal W}$ is a totally semistable wall for $\ensuremath{\mathbf v}$.
\end{proof}
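The class $\ensuremath{\mathbf w}_1 = \ensuremath{\mathbf w}_0 + r \ensuremath{\mathbf s}$ used above is isotropic precisely because $r = (\ensuremath{\mathbf s}, \ensuremath{\mathbf w}_0)$. The sketch below is a sanity check under the assumed Gram data $\ensuremath{\mathbf w}_0^2 = 0$, $\ensuremath{\mathbf s}^2 = -2$, $(\ensuremath{\mathbf w}_0, \ensuremath{\mathbf s}) = r$, verified for a sample range of $r$:

```python
def pair(u, v, r):
    """Pairing on the lattice spanned by w0, s with
    w0^2 = 0, s^2 = -2, (w0, s) = r; vectors as (x, y) = x*w0 + y*s."""
    (x1, y1), (x2, y2) = u, v
    return r * (x1 * y2 + x2 * y1) - 2 * y1 * y2

for r in range(1, 10):
    w0, s = (1, 0), (0, 1)
    w1 = (1, r)                     # w1 = w0 + r*s
    assert pair(w0, s, r) == r      # r = (s, w0) by definition
    assert pair(w1, w1, r) == 0     # w1 is again isotropic
print("w1 = w0 + r*s is isotropic for r = (s, w0)")
```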
\subsection*{Divisorial contractions}
We now deal with divisorial contractions for isotropic walls.
The case of a flopping wall, a fake wall, and no wall will be examined in Section \ref{sec:flopping}.
\begin{Prop}\label{prop:DivisorialWallIsotropicHasNumericalProps}
Let $\ensuremath{\mathcal W}$ be a wall inducing a divisorial contraction.
Assume that $(\ensuremath{\mathbf v},\ensuremath{\mathbf w})\neq1,2$, for all isotropic vectors $\ensuremath{\mathbf w}\in \ensuremath{\mathcal H}$.
Then there exists an effective spherical class $\ensuremath{\mathbf s}\in \ensuremath{\mathcal H}$ with $(\ensuremath{\mathbf s},\ensuremath{\mathbf v})=0$.
\end{Prop}
\begin{proof}
The proof is similar to that of Lemma \ref{lem:notdivisorial}: in particular, we are going to use Theorem \ref{thm:NamikawaWierzba}.
Let $D\subset M_{\sigma_+}(\ensuremath{\mathbf v})$ be an irreducible divisor contracted by $\pi^+\colon M_{\sigma_+}(\ensuremath{\mathbf v})\to\overline{M}$.
We know that $\mathop{\mathrm{dim}}\nolimits \pi^+(D) = \ensuremath{\mathbf v}^2$.
Consider the rational map
\[
\mathrm{HN}_D\colon D\dashrightarrow M_{\sigma_-}(\ensuremath{\mathbf a}_1)\times\dots\times M_{\sigma_-}(\ensuremath{\mathbf a}_l)
\]
induced by the relative HN filtration with respect to $\sigma_-$.
We let $I \subset \{1, \dots, l\}$ be the subset of indices $i$ with $\ensuremath{\mathbf a}_i^2 > 0$, and
$\ensuremath{\mathbf a} = \sum_{i \in I} \ensuremath{\mathbf a}_i$. We can assume $\abs{I} < l$; otherwise, the proof is identical to that
of Lemma \ref{lem:notdivisorial}.
\medskip
\noindent
{\bf Step 1.} We show that there is an $i$ such that $\ensuremath{\mathbf a}_i$ is a multiple of a
spherical class $\ensuremath{\mathbf s}$.
Assume otherwise.
Then we can write $\ensuremath{\mathbf v} = n_0 \ensuremath{\mathbf w}_0 + n_1 \ensuremath{\mathbf w}_1 + \ensuremath{\mathbf a}$.
By symmetry, we may assume $n_1 \ge n_0$; in particular $n_1 \neq 0$.
Also note that for $i = 0, 1$ we have $(\ensuremath{\mathbf w}_0,\ensuremath{\mathbf w}_1)\geq1$,
$(\ensuremath{\mathbf v},\ensuremath{\mathbf w}_i)\geq3$ and $(\ensuremath{\mathbf w}_i, \ensuremath{\mathbf a}) \ge 1$ as long as $\ensuremath{\mathbf a} \neq 0$.
In case $\abs{I} \ge 1$, i.e., $\ensuremath{\mathbf a} \neq 0$, we obtain a contradiction by
\begin{align}
\ensuremath{\mathbf v}^2 & = \bigl( (\ensuremath{\mathbf a} + n_0 \ensuremath{\mathbf w}_0) + n_1 \ensuremath{\mathbf w}_1\bigr)^2
= \ensuremath{\mathbf a}^2 + 2 n_0 (\ensuremath{\mathbf a}, \ensuremath{\mathbf w}_0) + 2 n_1 (\ensuremath{\mathbf v}, \ensuremath{\mathbf w}_1) \nonumber\\
& \ge \ensuremath{\mathbf a}^2 + 2 n_0 + 6 n_1
> \ensuremath{\mathbf a}^2 + 2 + 2 n_0 + 2 n_1 \label{dumb2} \\
& \ge \sum_{i \in I} (\ensuremath{\mathbf a}_i^2 + 2) + 2 n_0 + 2 n_1 = \sum_{i = 1}^l \mathop{\mathrm{dim}}\nolimits M_{\sigma_-}(\ensuremath{\mathbf a}_i) \ge
\mathop{\mathrm{dim}}\nolimits \pi^+(D) = \ensuremath{\mathbf v}^2,
\label{dumb3}
\end{align}
where we used the numerical observations in \eqref{dumb2},
and Lemma \ref{lem:numericaldimensions} for the case $\abs{I} > 1$ in \eqref{dumb3}.
Otherwise, if $\abs{I} = 0$, then $\ensuremath{\mathbf v} = n_0 \ensuremath{\mathbf w}_0 + n_1 \ensuremath{\mathbf w}_1$ with $n_0, n_1 > 0$ and
$n_i ( \ensuremath{\mathbf w}_0, \ensuremath{\mathbf w}_1) \ge 3$ by the assumption $(\ensuremath{\mathbf v}, \ensuremath{\mathbf w}_i) \ge 3$. We get
a contradiction from
\begin{align*}
\ensuremath{\mathbf v}^2 = 2n_0 n_1 (\ensuremath{\mathbf w}_0, \ensuremath{\mathbf w}_1)
& = 2n_0 + 2n_1 + 2 (n_0 -1 ) (n_1-1) -2 + 2n_0n_1 \bigl((\ensuremath{\mathbf w}_0, \ensuremath{\mathbf w}_1) -1\bigr) \\
& > 2n_0 + 2n_1 = \sum_{i = 1}^l \mathop{\mathrm{dim}}\nolimits M_{\sigma_-}(\ensuremath{\mathbf a}_i) \ge \ensuremath{\mathbf v}^2.
\end{align*}
\medskip
{\bf Step 2.} We show $(\ensuremath{\mathbf s},\ensuremath{\mathbf v})\leq0$.
Assume for a contradiction that $(\ensuremath{\mathbf s},\ensuremath{\mathbf v})>0$.
Using $\ensuremath{\mathbf w}_1=\rho_{\ensuremath{\mathbf s}}(\ensuremath{\mathbf w}_0)$ we can write $\ensuremath{\mathbf v} = a\ensuremath{\mathbf s} + b\ensuremath{\mathbf w}_0 + \ensuremath{\mathbf a}$.
By Step 1, we have $a>0$.
In case $\ensuremath{\mathbf a} \neq 0$, we use $(\ensuremath{\mathbf a}, \ensuremath{\mathbf w}_0) > 0$ to get
\begin{align*}
\ensuremath{\mathbf a}^2 & = \bigl( \left(\ensuremath{\mathbf v} -a\ensuremath{\mathbf s}\right) - b\ensuremath{\mathbf w}_0\bigr)^2 \\
&= \ensuremath{\mathbf v}^2 -2a(\ensuremath{\mathbf v},\ensuremath{\mathbf s}) -2a^2 -2b (\ensuremath{\mathbf a} + b\ensuremath{\mathbf w}_0 ,\ensuremath{\mathbf w}_0) \\
&\leq \ensuremath{\mathbf v}^2 -2a(\ensuremath{\mathbf v},\ensuremath{\mathbf s}) -2a^2 -2b.
\end{align*}
This leads to a contradiction:
\[
\ensuremath{\mathbf v}^2 >\ensuremath{\mathbf v}^2 -2a(\ensuremath{\mathbf v},\ensuremath{\mathbf s}) -2a^2 +2 \ge \ensuremath{\mathbf a}^2 +2 + 2b
\ge \sum_{i=1}^l \mathop{\mathrm{dim}}\nolimits M_{\sigma_-}(\ensuremath{\mathbf a}_i) \ge \ensuremath{\mathbf v}^2.
\]
If $\ensuremath{\mathbf a} = 0$, our assumptions give $a(\ensuremath{\mathbf s},\ensuremath{\mathbf w}_0) = (\ensuremath{\mathbf v}, \ensuremath{\mathbf w}_0) > 2$ and
$-2a + b(\ensuremath{\mathbf s}, \ensuremath{\mathbf w}_0) = (\ensuremath{\mathbf s}, \ensuremath{\mathbf v}) > 0$. This leads to
\[
\ensuremath{\mathbf v}^2 = -2a^2 + 2ab (\ensuremath{\mathbf s}, \ensuremath{\mathbf w}_0) >
ab(\ensuremath{\mathbf s}, \ensuremath{\mathbf w}_0) > 2b =
\sum_{i=1}^l \mathop{\mathrm{dim}}\nolimits M_{\sigma_-}(\ensuremath{\mathbf a}_i) \ge \ensuremath{\mathbf v}^2.
\]
\medskip
\noindent
{\bf Step 3.} We show $(\ensuremath{\mathbf s},\ensuremath{\mathbf v})=0$.
Assume for a contradiction that $(\ensuremath{\mathbf s},\ensuremath{\mathbf v})<0$.
By Proposition \ref{prop:sphericaltotallysemistable}, $\ensuremath{\mathcal W}$ is a totally semistable wall for $\ensuremath{\mathbf v}$.
We consider $\ensuremath{\mathbf v}'=\rho_{\ensuremath{\mathbf s}}(\ensuremath{\mathbf v})$ as in Lemma \ref{lem:UniqueSpherical}.
The wall $\ensuremath{\mathcal W}$ induces a divisorial contraction for $\ensuremath{\mathbf v}$ if and only if it induces one for $\ensuremath{\mathbf v}'$.
But, since $(\ensuremath{\mathbf v},\ensuremath{\mathbf w})\neq1,2$, for all $\ensuremath{\mathbf w}$ isotropic, then $(\ensuremath{\mathbf v}',\ensuremath{\mathbf w})\neq1,2$ as well.
Moreover, $(\ensuremath{\mathbf s},\ensuremath{\mathbf v}')>0$.
This is a contradiction, by Step 2.
\end{proof}
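The algebraic rewriting used in the $\abs{I}=0$ case of Step 1 is elementary but easy to mistype; the following brute-force check (an illustration, not part of the argument) confirms the identity and the resulting strict inequality $\ensuremath{\mathbf v}^2 > 2n_0 + 2n_1$ under the constraints $n_0, n_1 > 0$ and $n_i(\ensuremath{\mathbf w}_0,\ensuremath{\mathbf w}_1) \ge 3$, over a sampled range.

```python
# Exhaustive check over small values (illustration only) of the identity
#   2*n0*n1*c == 2*n0 + 2*n1 + 2*(n0-1)*(n1-1) - 2 + 2*n0*n1*(c-1)
# from Step 1, with c = (w0, w1), and of the strict inequality
#   v^2 = 2*n0*n1*c > 2*n0 + 2*n1
# under the numerical assumptions n0*c >= 3 and n1*c >= 3.
for n0 in range(1, 20):
    for n1 in range(1, 20):
        for c in range(1, 20):
            lhs = 2 * n0 * n1 * c
            rhs = 2*n0 + 2*n1 + 2*(n0 - 1)*(n1 - 1) - 2 + 2*n0*n1*(c - 1)
            assert lhs == rhs                  # the algebraic identity
            if n0 * c >= 3 and n1 * c >= 3:    # assumptions (v, w_i) >= 3
                assert lhs > 2*n0 + 2*n1       # v^2 exceeds the sum of dimensions
print("identity and inequality hold on the sampled range")
```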
The converse of Proposition \ref{prop:DivisorialWallIsotropicHasNumericalProps} is a consequence of the following three lemmas:
\begin{Lem}\label{lem:NumericalPropertiesImplyDivisorialWallI}
Assume that $(\ensuremath{\mathbf v},\ensuremath{\mathbf w}_0)=2$.
Then $\ensuremath{\mathcal W}$ induces a divisorial contraction.
\end{Lem}
\begin{proof}
It suffices to show that $M_{\omega}^{\beta}(-\ensuremath{\mathbf v})$ contains a divisor of non-locally free sheaves.
Since $(\ensuremath{\mathbf v},\ensuremath{\mathbf w}_0)=2$, we can write $\ensuremath{\mathbf v} = -(2,D,s)$, where $D$ is an integral divisor which is either primitive or $D=0$.
Consider the vector $\ensuremath{\mathbf v}'=-(2,D,s+1)$
with $(\ensuremath{\mathbf v}')^2 = \ensuremath{\mathbf v}^2 -4 \geq-2$. By Theorem \ref{thm:nonempty}, we get
$M_{\omega}^{\beta}(-\ensuremath{\mathbf v}')\neq\emptyset$.
Given a sheaf $F \in M_{\omega}^{\beta}(-\ensuremath{\mathbf v}')$ and a point $x \in X$, the
surjections $F \ensuremath{\twoheadrightarrow} k(x)$ induce a $\ensuremath{\mathbb{P}}^1$ of extensions
\[
k(x) \to E[1] \to F[1] \to k(x)[1]
\]
of objects in $M_{\sigma_+}(\ensuremath{\mathbf v})$ that are S-equivalent with respect to $\sigma_0$. Dimension counting shows
that they sweep out a divisor.
\end{proof}
\begin{Lem}\label{lem:NumericalPropertiesImplyDivisorialWallII}
Assume that there exists an effective spherical class $\ensuremath{\mathbf s} \in \ensuremath{\mathcal H}$ such that $(\ensuremath{\mathbf v},\ensuremath{\mathbf s})=0$.
Then $\ensuremath{\mathcal W}$ induces a divisorial contraction.
\end{Lem}
\begin{proof}
Let $S$ be the unique $\sigma_0$-stable spherical object with Mukai vector $\ensuremath{\mathbf s}$.
Let $\ensuremath{\mathbf a}=\ensuremath{\mathbf v}-\ensuremath{\mathbf s}$; then
\[
\ensuremath{\mathbf a}^2 = (\ensuremath{\mathbf v}-\ensuremath{\mathbf s})^2 = \ensuremath{\mathbf v}^2 -2 \quad \text{and} \quad
(\ensuremath{\mathbf a},\ensuremath{\mathbf s}) = -\ensuremath{\mathbf s}^2 =2.
\]
If $\ensuremath{\mathbf v}^2>2$, then $\ensuremath{\mathbf a}^2>0$.
By Lemma \ref{lem:StabilityIsotropic}, $\ensuremath{\mathbf w}_1 = b\ensuremath{\mathbf s} +\ensuremath{\mathbf w}_0$ with $b>0$; hence $(\ensuremath{\mathbf w}_1,\ensuremath{\mathbf a})>(\ensuremath{\mathbf w}_0,\ensuremath{\mathbf a})$.
If $(\ensuremath{\mathbf w}_0,\ensuremath{\mathbf a})\geq2$, then Proposition \ref{prop:TotSStWallIsotropicHasNumericalProps} implies that $\ensuremath{\mathcal W}$ is not a totally semistable wall for $\ensuremath{\mathbf a}$, since $(\ensuremath{\mathbf a},\ensuremath{\mathbf s})=2$.
Hence, given $A\in M_{\sigma_0}(\ensuremath{\mathbf a})$, all the extensions
\[
S \to E \to A
\]
give a divisor $D\subset M_{\sigma_+}(\ensuremath{\mathbf v})$, which is a $\ensuremath{\mathbb{P}}^1$-fibration over $M_{\sigma_0}^{st}(\ensuremath{\mathbf a})$ and which gets contracted by crossing the wall $\ensuremath{\mathcal W}$.
If $(\ensuremath{\mathbf w}_0,\ensuremath{\mathbf a})=1$, then there is a spherical class of the form $\ensuremath{\mathbf a} + k \ensuremath{\mathbf w}_0$. By the uniqueness
up to sign, $\ensuremath{\mathbf s}$ must be of this form; hence also $(\ensuremath{\mathbf w}_0,\ensuremath{\mathbf s})=1$.
From this we get $(\ensuremath{\mathbf w}_0,\ensuremath{\mathbf v})=2$, and so $\ensuremath{\mathcal W}$ induces a divisorial contraction by Lemma \ref{lem:NumericalPropertiesImplyDivisorialWallI}.
Finally, assume that $\ensuremath{\mathbf v}^2=2$. Then $\ensuremath{\mathbf a}$ is an isotropic vector with
$(\ensuremath{\mathbf a},\ensuremath{\mathbf v}) = (\ensuremath{\mathbf a},\ensuremath{\mathbf s}) =2$.
But this implies that $(\ensuremath{\mathbf w}_0,\ensuremath{\mathbf v})=1,2$.
Indeed, by Lemma \ref{lem:StabilityIsotropic}, the fact that $\ensuremath{\mathbf a}$ is an effective class with $(\ensuremath{\mathbf a},\ensuremath{\mathbf s})>0$ implies that $\ensuremath{\mathbf a}$ has to be a positive multiple of $\ensuremath{\mathbf w}_0$.
The case $(\ensuremath{\mathbf w}_0,\ensuremath{\mathbf v})=2$ is again Lemma \ref{lem:NumericalPropertiesImplyDivisorialWallI};
and if $(\ensuremath{\mathbf w}_0, \ensuremath{\mathbf v})=1$, then $-\ensuremath{\mathbf v}$ has rank $1$, and we are in the case of the Hilbert-Chow wall.
\end{proof}
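The computation at the start of the last proof can be spot-checked in a concrete lattice (illustration only; the Gram matrix and the family of test classes are arbitrary choices): for $\ensuremath{\mathbf a} = \ensuremath{\mathbf v} - \ensuremath{\mathbf s}$ with $(\ensuremath{\mathbf v},\ensuremath{\mathbf s})=0$ and $\ensuremath{\mathbf s}^2=-2$, one indeed finds $\ensuremath{\mathbf a}^2 = \ensuremath{\mathbf v}^2 - 2$ and $(\ensuremath{\mathbf a},\ensuremath{\mathbf s}) = 2$.

```python
# Illustration only: rank-two lattice in the basis (s, w0) with
# s^2 = -2, w0^2 = 0, (s, w0) = 1; an arbitrary choice of Gram matrix.
G = [[-2, 1], [1, 0]]

def pair(x, y):
    return sum(x[i] * G[i][j] * y[j] for i in range(2) for j in range(2))

s = (1, 0)
for k in range(1, 6):
    v = (k, 2 * k)                       # these classes are orthogonal to s
    assert pair(v, s) == 0
    a = (v[0] - s[0], v[1] - s[1])       # a = v - s
    assert pair(a, a) == pair(v, v) - 2  # a^2 = v^2 - 2
    assert pair(a, s) == 2               # (a, s) = -s^2 = 2
print("ok")
```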
\begin{Lem}\label{lem:NumericalPropertiesImplyDivisorialWallIII}
Let $\ensuremath{\mathcal W}$ be a potential wall.
If there exists an isotropic class $\ensuremath{\mathbf w}$ such that $(\ensuremath{\mathbf v},\ensuremath{\mathbf w})\in \{1,2\}$,
then $\ensuremath{\mathcal W}$ induces a divisorial contraction.
\end{Lem}
\begin{proof}
By Lemma \ref{lem:StabilityIsotropic}, the class $\ensuremath{\mathbf w}$ is automatically effective.
By Lemma \ref{lem:NumericalPropertiesImplyDivisorialWallI}, the only remaining case is
$\ensuremath{\mathbf w} = \ensuremath{\mathbf w}_1$, with $\ensuremath{\mathbf w}_1=b\ensuremath{\mathbf s}+\ensuremath{\mathbf w}_0$, $b>0$, where $\ensuremath{\mathbf s}$ is the class of the unique $\sigma_0$-stable spherical object.
By Lemma \ref{lem:NumericalPropertiesImplyDivisorialWallII}, we can assume that $(\ensuremath{\mathbf s},\ensuremath{\mathbf v})\neq0$.
If $(\ensuremath{\mathbf s},\ensuremath{\mathbf v})>0$, then
\[
(\ensuremath{\mathbf w}_1, \ensuremath{\mathbf v}) = b (\ensuremath{\mathbf s},\ensuremath{\mathbf v}) + (\ensuremath{\mathbf w}_0,\ensuremath{\mathbf v}) \in \{1, 2\}.
\]
Since $(\ensuremath{\mathbf w}_0,\ensuremath{\mathbf v})>0$ and $b>0$, this is possible only if $(\ensuremath{\mathbf w}_0,\ensuremath{\mathbf v})=1$, which corresponds to the Hilbert-Chow contraction.
Hence, we can assume $(\ensuremath{\mathbf s},\ensuremath{\mathbf v})<0$.
By Proposition \ref{prop:sphericaltotallysemistable}, $\ensuremath{\mathcal W}$ is a totally semistable wall for $\ensuremath{\mathbf v}$,
and $\ensuremath{\mathcal W}$ induces a divisorial contraction with respect to $\ensuremath{\mathbf v}$ if and only if it
induces one with respect to $\ensuremath{\mathbf v}' = \rho_\ensuremath{\mathbf s}(\ensuremath{\mathbf v})$.
But then $(\ensuremath{\mathbf v}',\ensuremath{\mathbf w}_0)=(\ensuremath{\mathbf v},\ensuremath{\mathbf w}_1) \in \{1,2\}$.
Again, we can use Lemma \ref{lem:NumericalPropertiesImplyDivisorialWallI} to finish the proof.
\end{proof}
\section{Flopping walls}
\label{sec:flopping}
This section deals with the remaining case of a potential wall $\ensuremath{\mathcal W}$: assuming that $\ensuremath{\mathcal W}$ does not
correspond to a divisorial contraction, we describe in which cases it is a flopping wall, a fake
wall, or not a wall. This is the content of Propositions \ref{prop:flops} and \ref{prop:noflops}.
\begin{Prop} \label{prop:flops}
Assume that $\ensuremath{\mathcal W}$ does not induce a divisorial contraction. If either
\begin{enumerate}
\item \label{enum:sum2positive}
$\ensuremath{\mathbf v}$ can be written as the sum
$\ensuremath{\mathbf v} = \ensuremath{\mathbf a}_1 + \ensuremath{\mathbf a}_2$ of two positive classes $\ensuremath{\mathbf a}_1, \ensuremath{\mathbf a}_2 \in P_\ensuremath{\mathcal H} \cap \ensuremath{\mathcal H}$, or
\item \label{enum:sphericalflop}
there exists
a spherical class $\tilde \ensuremath{\mathbf s} \in \ensuremath{\mathcal W}$ with $0 < (\tilde \ensuremath{\mathbf s}, \ensuremath{\mathbf v}) \le \frac{\ensuremath{\mathbf v}^2}2$,
\end{enumerate}
then $\ensuremath{\mathcal W}$ induces a small contraction.
\end{Prop}
\begin{Lem} \label{lem:findparallelogram}
Let $M$ be a lattice of rank two, and $C \subset M \otimes \ensuremath{\mathbb{R}}$ be a convex cone not containing a
line. If a primitive lattice element $\ensuremath{\mathbf v} \in M \cap C$ can be written as the sum $\ensuremath{\mathbf v} = \ensuremath{\mathbf a} + \ensuremath{\mathbf b}$ of two classes
$\ensuremath{\mathbf a}, \ensuremath{\mathbf b} \in M \cap
C$, then it can be written as a sum $\ensuremath{\mathbf v} = \ensuremath{\mathbf a}' + \ensuremath{\mathbf b}'$ of two classes $\ensuremath{\mathbf a}', \ensuremath{\mathbf b}' \in M \cap C$ in such a way that
the parallelogram with vertices $0, \ensuremath{\mathbf a}', \ensuremath{\mathbf v}, \ensuremath{\mathbf b}'$ does not contain any other lattice point besides
its vertices.
\end{Lem}
\begin{proof}
If the parallelogram $0, \ensuremath{\mathbf a}, \ensuremath{\mathbf v}, \ensuremath{\mathbf b}$ contains an additional lattice point $\ensuremath{\mathbf a}'$, we may replace
$\ensuremath{\mathbf a}$ by $\ensuremath{\mathbf a}'$ and $\ensuremath{\mathbf b}$ by $\ensuremath{\mathbf v}-\ensuremath{\mathbf a}'$. This procedure terminates, since the area of the parallelogram strictly decreases at each step.
\end{proof}
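The procedure in the proof of Lemma \ref{lem:findparallelogram} can be phrased as a small algorithm. The sketch below (an illustration, not part of the argument) works in $\ensuremath{\mathbb{Z}}^2$, assumes $\ensuremath{\mathbf v} = \ensuremath{\mathbf a} + \ensuremath{\mathbf b}$ primitive so that the parallelogram never degenerates onto its diagonal, and keeps replacing $(\ensuremath{\mathbf a},\ensuremath{\mathbf b})$ until no extra lattice point remains; the final parallelogram has area $1$.

```python
from fractions import Fraction

def in_parallelogram(p, a, b):
    """True if p = s*a + t*b with 0 <= s, t <= 1 (p lies on or inside the
    parallelogram with vertices 0, a, a+b, b)."""
    det = a[0]*b[1] - a[1]*b[0]
    if det == 0:
        return False
    s = Fraction(p[0]*b[1] - p[1]*b[0], det)   # Cramer's rule for p = s*a + t*b
    t = Fraction(a[0]*p[1] - a[1]*p[0], det)
    return 0 <= s <= 1 and 0 <= t <= 1

def shrink(a, b):
    """Replace (a, b) as in the lemma until the parallelogram 0, a, a+b, b
    contains no lattice point besides its vertices; a + b stays fixed."""
    while True:
        v = (a[0] + b[0], a[1] + b[1])
        verts = {(0, 0), a, v, b}
        xs = [p[0] for p in verts]
        ys = [p[1] for p in verts]
        extra = None
        for x in range(min(xs), max(xs) + 1):
            for y in range(min(ys), max(ys) + 1):
                if (x, y) not in verts and in_parallelogram((x, y), a, b):
                    extra = (x, y)
                    break
            if extra is not None:
                break
        if extra is None:
            return a, b
        # replace a by the extra lattice point a', b by v - a'; the area
        # |det(a, b)| strictly decreases, so the loop terminates
        a, b = extra, (v[0] - extra[0], v[1] - extra[1])

a1, b1 = shrink((3, 1), (1, 2))                  # v = (4, 3) is primitive
assert (a1[0] + b1[0], a1[1] + b1[1]) == (4, 3)  # v is unchanged
assert abs(a1[0]*b1[1] - a1[1]*b1[0]) == 1       # empty parallelogram has area 1
print(a1, b1)
```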
\begin{Lem} \label{lem:parallelogram}
Let $\ensuremath{\mathbf a}, \ensuremath{\mathbf b}, \ensuremath{\mathbf v} \in \ensuremath{\mathcal H} \cap C_\ensuremath{\mathcal W}$ be effective classes with $\ensuremath{\mathbf v} = \ensuremath{\mathbf a} + \ensuremath{\mathbf b}$.
Assume that the following conditions are satisfied:
\begin{itemize}
\item The phases of $\ensuremath{\mathbf a}, \ensuremath{\mathbf b}$ satisfy $\phi^+(\ensuremath{\mathbf a}) < \phi^+(\ensuremath{\mathbf b})$.
\item The objects $A, B$ are $\sigma_+$-stable with $\ensuremath{\mathbf v}(A) = \ensuremath{\mathbf a}, \ensuremath{\mathbf v}(B) = \ensuremath{\mathbf b}$.
\item The parallelogram in $\ensuremath{\mathcal H} \otimes \ensuremath{\mathbb{R}}$ with vertices $0, \ensuremath{\mathbf a}, \ensuremath{\mathbf v}, \ensuremath{\mathbf b}$ does not contain any other
lattice point.
\item The extension $A \ensuremath{\hookrightarrow} E \ensuremath{\twoheadrightarrow} B$ satisfies $\mathop{\mathrm{Hom}}\nolimits(B, E) = 0$.
\end{itemize}
Then $E$ is $\sigma_{+}$-stable.
\end{Lem}
\begin{proof}
Since $A$ and $B$ are $\sigma_+$-stable, they are $\sigma_0$-semistable.
Therefore, the extension $E$ is also $\sigma_0$-semistable.
Let $\ensuremath{\mathbf a}_i$ be the Mukai vector of the $i$-th HN factor of $E$ with respect to $\sigma_+$.
By Proposition \ref{prop:HW} part \eqref{enum:semistablehasfactorsinHW} and Remark \ref{rem:HW}, we
have $\ensuremath{\mathbf a}_i \in \ensuremath{\mathcal H}$.
We have $E \in \ensuremath{\mathcal P}_+([\phi^+(\ensuremath{\mathbf a}), \phi^+(\ensuremath{\mathbf b})])$, and hence $\ensuremath{\mathbf a}_i$ is contained in the
cone generated by $\ensuremath{\mathbf a}, \ensuremath{\mathbf b}$. Since the same holds for $\ensuremath{\mathbf v} - \ensuremath{\mathbf a}_i = \sum_{j \neq i} \ensuremath{\mathbf a}_j$,
$\ensuremath{\mathbf a}_i$ is in fact contained in the parallelogram with vertices $0, \ensuremath{\mathbf a}, \ensuremath{\mathbf v}, \ensuremath{\mathbf b}$. Since
it is also a lattice point, the assumption on the
parallelogram implies $\ensuremath{\mathbf a}_i \in \{\ensuremath{\mathbf a}, \ensuremath{\mathbf b}, \ensuremath{\mathbf v}\}$.
Assume that $E$ is not $\sigma_+$-stable, and let $A_1 \subset E$ be the first HN filtration factor. Since
$\phi^+(\ensuremath{\mathbf a}_1) > \phi^+(\ensuremath{\mathbf v})$, we must have $\ensuremath{\mathbf a}_1 = \ensuremath{\mathbf b}$. By the stability of $A, B$ we
have $\mathop{\mathrm{Hom}}\nolimits(A_1, A) = 0$, and $\mathop{\mathrm{Hom}}\nolimits(A_1, B) = 0$ unless $A_1 \cong B$. Either of these is
a contradiction, since $\mathop{\mathrm{Hom}}\nolimits(A_1,E)\neq0$ and $\mathop{\mathrm{Hom}}\nolimits(B,E)=0$.
\end{proof}
\subsubsection*{Proof of Proposition \ref{prop:flops}}
We first consider case \eqref{enum:sum2positive}, so $\ensuremath{\mathbf v} = \ensuremath{\mathbf a}_1 + \ensuremath{\mathbf a}_2$ with
$\ensuremath{\mathbf a}_1, \ensuremath{\mathbf a}_2 \in P_\ensuremath{\mathcal H}$. Using Lemma
\ref{lem:findparallelogram}, we may assume that the parallelogram with vertices
$0, \ensuremath{\mathbf a}_1, \ensuremath{\mathbf v}, \ensuremath{\mathbf a}_2$ does not contain any lattice point other than its vertices. In particular,
$\ensuremath{\mathbf a}_1, \ensuremath{\mathbf a}_2$ are primitive. We may also assume that $\phi^+(\ensuremath{\mathbf a}_1) < \phi^+(\ensuremath{\mathbf a}_2)$.
By the signature of $\ensuremath{\mathcal H}$ (see the proof of Lemma \ref{lem:numericaldimensions}),
we have $(\ensuremath{\mathbf a}_1, \ensuremath{\mathbf a}_2) > 2$.
By Theorem \ref{thm:nonempty}, there exist $\sigma_+$-stable objects $A_i$ of class
$\ensuremath{\mathbf v}(A_i) = \ensuremath{\mathbf a}_i$. The inequality for the Mukai pairing implies
$\mathop{\mathrm{ext}}\nolimits^1(A_2, A_1) > 2$. By Lemma \ref{lem:parallelogram}, any extension
\[ 0 \to A_1 \ensuremath{\hookrightarrow} E \ensuremath{\twoheadrightarrow} A_2 \to 0 \]
of $A_2$ by $A_1$ is $\sigma_+$-stable of class $\ensuremath{\mathbf v}$. As all these extensions are S-equivalent to
each other with respect to $\sigma_0$, we obtain a projective space of dimension at least two
that gets contracted by $\pi^+$.
Now consider case \eqref{enum:sphericalflop}.
First assume that $\tilde \ensuremath{\mathbf s}$ is an effective class. Note that $(\ensuremath{\mathbf v} - \tilde \ensuremath{\mathbf s})^2 \ge -2$
and $(\tilde \ensuremath{\mathbf s}, \ensuremath{\mathbf v} - \tilde \ensuremath{\mathbf s}) = (\tilde \ensuremath{\mathbf s}, \ensuremath{\mathbf v}) - \tilde \ensuremath{\mathbf s}^2 > 2$.
Consider the parallelogram $\mathbf{P}$ with vertices $0, \tilde \ensuremath{\mathbf s}, \ensuremath{\mathbf v}, \ensuremath{\mathbf v}-\tilde \ensuremath{\mathbf s}$,
and the function $f(\ensuremath{\mathbf a}) = \ensuremath{\mathbf a}^2$ for $\ensuremath{\mathbf a} \in \mathbf{P}$. By homogeneity, its minimum is obtained
on one of the boundary segments; thus
\[ \bigl( \tilde \ensuremath{\mathbf s} + t (\ensuremath{\mathbf v} - \tilde \ensuremath{\mathbf s}) \bigr)^2 > -2 + 4t - 2t^2 > -2\]
for $0 < t < 1$, along with a similar computation for the other line segments, shows
$f(\ensuremath{\mathbf a}) > -2$ unless $\ensuremath{\mathbf a} \in \{\tilde \ensuremath{\mathbf s}, \ensuremath{\mathbf v}-\tilde \ensuremath{\mathbf s}\}$. In particular, if there is
any lattice point $\ensuremath{\mathbf a} \in \mathbf{P}$ other than one of its vertices, then
$\ensuremath{\mathbf a}^2 \ge 0$ and $(\ensuremath{\mathbf v} - \ensuremath{\mathbf a})^2 \ge 0$. Thus $\ensuremath{\mathbf v} = \ensuremath{\mathbf a} + (\ensuremath{\mathbf v}-\ensuremath{\mathbf a})$ can be
written as the sum of positive classes, and the claim follows from the previous paragraph.
Otherwise, let $\widetilde S$ be the
$\sigma_+$-stable object of class $\tilde \ensuremath{\mathbf s}$, and $F$ any $\sigma_+$-stable object of class
$\ensuremath{\mathbf v} - \tilde \ensuremath{\mathbf s}$; then $\mathop{\mathrm{ext}}\nolimits^1(\widetilde S, F) = \mathop{\mathrm{ext}}\nolimits^1(F, \widetilde S) > 2$. Thus, with the same
arguments we obtain a family of $\sigma_+$-stable objects parameterized by a projective space that
gets contracted by $\pi^+$.
We are left with the case where $\tilde \ensuremath{\mathbf s}$ is not effective. Set
$\tilde \ensuremath{\mathbf t} = - \tilde \ensuremath{\mathbf s}$, which is an effective class.
With the same reasoning as above, we may assume that the parallelogram
with vertices $0, \tilde \ensuremath{\mathbf t}, \ensuremath{\mathbf v}, \ensuremath{\mathbf v} - \tilde \ensuremath{\mathbf t}$ contains no additional lattice points.
Set
\[
\ensuremath{\mathbf v}' = \rho_{\tilde \ensuremath{\mathbf t}}(\ensuremath{\mathbf v}) - \tilde \ensuremath{\mathbf t} = \ensuremath{\mathbf v} - \bigl((\tilde \ensuremath{\mathbf s},\ensuremath{\mathbf v}) + 1\bigr)\tilde \ensuremath{\mathbf t},
\]
and consider the parallelogram $\mathbf P$ with vertices $0$, $\bigl((\tilde \ensuremath{\mathbf s},\ensuremath{\mathbf v}) + 1\bigr)
\tilde \ensuremath{\mathbf t}$, $\ensuremath{\mathbf v}$, $\ensuremath{\mathbf v}'$ (see Figure \ref{fig:snoteffective}).
We have $\ensuremath{\mathbf v}'^2 \ge -2$ and $(\tilde \ensuremath{\mathbf t}, \ensuremath{\mathbf v}') = (\tilde \ensuremath{\mathbf s}, \ensuremath{\mathbf v}) + 2 > 2$. The lattice points
of $\mathbf P$
are given by $k \tilde \ensuremath{\mathbf t}$ and $\ensuremath{\mathbf v}' + k \tilde \ensuremath{\mathbf t}$ for $k \in \ensuremath{\mathbb{Z}}$, $0 \le k \le (\tilde \ensuremath{\mathbf s}, \ensuremath{\mathbf v}) + 1$
(otherwise, already the parallelogram with vertices $0, \tilde \ensuremath{\mathbf t}, \ensuremath{\mathbf v}, \ensuremath{\mathbf v}-\tilde \ensuremath{\mathbf t}$ would contain
additional lattice points).
\begin{wrapfigure}{r}{0.26\textwidth}
\vspace{-1em}
\definecolor{zzttqq}{rgb}{0.6,0.2,0}
\definecolor{tttttt}{rgb}{0.2,0.2,0.2}
\definecolor{uququq}{rgb}{0.25,0.25,0.25}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\clip(-1,-1.2) rectangle (3,5.0);
\fill[line width=0.4pt,dash pattern=on 4pt off 4pt,color=zzttqq,fill=zzttqq,fill opacity=0.1] (0,0)
-- (0,4.2) -- (2.1,4.84) -- (2.1,0.64) -- cycle;
\fill[line width=0.4pt,dotted,pattern color=zzttqq,fill=zzttqq,pattern=north west lines] (0,0) --
(0,0.84) -- (2.1,4.84) -- (2.1,4) -- cycle;
\draw [->] (0,0) -- (0,0.84);
\draw [->] (0,0) -- (2.1,0.64);
\draw [->] (0,0) -- (0,-0.84);
\draw [line width=0.4pt,dash pattern=on 4pt off 4pt,color=zzttqq] (0,0)-- (0,4.2);
\draw [line width=0.4pt,dash pattern=on 4pt off 4pt,color=zzttqq] (0,4.2)-- (2.1,4.84);
\draw [line width=0.4pt,dash pattern=on 4pt off 4pt,color=zzttqq] (2.1,4.84)-- (2.1,0.64);
\draw [line width=0.4pt,dash pattern=on 4pt off 4pt,color=zzttqq] (2.1,0.64)-- (0,0);
\draw [line width=0.4pt,dotted,color=zzttqq] (0,0)-- (0,0.84);
\draw [line width=0.4pt,dotted,color=zzttqq] (0,0.84)-- (2.1,4.84);
\draw [line width=0.4pt,dotted,color=zzttqq] (2.1,4.84)-- (2.1,4);
\draw [line width=0.4pt,dotted,color=zzttqq] (2.1,4)-- (0,0);
\fill [color=uququq] (0,0) circle (1.5pt);
\draw[color=uququq] (-0.28,0.02) node {$0$};
\fill [color=tttttt] (0,0.84) circle (1.5pt);
\draw[color=tttttt] (-0.25,0.84) node {$\tilde \ensuremath{\mathbf t}$};
\fill [color=black] (2.1,0.64) circle (1.5pt);
\draw[color=black] (2.5,0.64) node {$\ensuremath{\mathbf v}'$};
\fill [color=uququq] (2.1,1.48) circle (1.5pt);
\draw[color=uququq] (2.6,1.5) node {$\rho_{\tilde \ensuremath{\mathbf t}}(\ensuremath{\mathbf v})$};
\fill [color=uququq] (0,1.68) circle (1.5pt);
\fill [color=uququq] (0,2.52) circle (1.5pt);
\fill [color=uququq] (0,3.36) circle (1.5pt);
\fill [color=uququq] (0,4.2) circle (1.5pt);
\draw[color=uququq] (0.3,4.78) node {$((\tilde \ensuremath{\mathbf s}, \ensuremath{\mathbf v})+1)\tilde \ensuremath{\mathbf t}$};
\fill [color=uququq] (2.1,4.84) circle (1.5pt);
\draw[color=uququq] (2.3,4.84) node {$\ensuremath{\mathbf v}$};
\fill [color=uququq] (2.1,4) circle (1.5pt);
\fill [color=uququq] (2.1,3.16) circle (1.5pt);
\fill [color=uququq] (2.1,2.32) circle (1.5pt);
\fill [color=uququq] (0,-0.84) circle (1.5pt);
\draw[color=uququq] (-0.25,-0.84) node {$\tilde \ensuremath{\mathbf s}$};
\end{tikzpicture}
\vspace{-25pt}
\caption{$-\tilde \ensuremath{\mathbf s}$ effective.} \label{fig:snoteffective}
\end{wrapfigure}
Let $\widetilde T$ and $F$ be $\sigma_+$-stable objects of class $\tilde \ensuremath{\mathbf t}$ and $\ensuremath{\mathbf v}'$, respectively.
Let us assume $\phi^+(\tilde \ensuremath{\mathbf t}) > \phi^+(\ensuremath{\mathbf v})$; the other case follows by dual arguments.
Any subspace $\mathfrak V \subset \mathop{\mathrm{Ext}}\nolimits^1(\widetilde T, F)$ of dimension
$(\tilde \ensuremath{\mathbf s}, \ensuremath{\mathbf v}) + 1$ defines an extension
\[ 0 \to F \ensuremath{\hookrightarrow} E \ensuremath{\twoheadrightarrow} \widetilde T \otimes \mathfrak V \to 0 \phantom{\hspace{5cm}} \]
such that $E$ is of class $\ensuremath{\mathbf v}(E) = \ensuremath{\mathbf v}$, and satisfies $\mathop{\mathrm{Hom}}\nolimits(\widetilde T, E) = 0$. If $E$ were
not $\sigma_+$-stable, then the class of the maximal destabilizing subobject $A$ would have to be
a lattice point in $\mathbf P$ with $\phi^+(\ensuremath{\mathbf v}(A)) > \phi^+(\ensuremath{\mathbf v})$;
therefore, $\ensuremath{\mathbf v}(A) = k \tilde \ensuremath{\mathbf t}$. The only $\sigma_+$-semistable object of this class is
$\widetilde T^{\oplus k}$, and we get a contradiction. Thus, we have constructed a family of
$\sigma_+$-stable objects of class $\ensuremath{\mathbf v}$ parameterized by the Grassmannian
$\mathrm{Gr}((\tilde \ensuremath{\mathbf s}, \ensuremath{\mathbf v}) + 1, \mathop{\mathrm{ext}}\nolimits^1(\widetilde T, F))$ that become S-equivalent with respect to
$\sigma_0$.
\hfill$\Box$
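Two of the computations in the preceding proof can be spot-checked numerically (illustration only; the Gram matrix and the test class are arbitrary choices): the lower bound $-2 + 4t - 2t^2 > -2$ on $(0,1)$, and the identity $(\tilde \ensuremath{\mathbf t}, \ensuremath{\mathbf v}') = (\tilde \ensuremath{\mathbf s}, \ensuremath{\mathbf v}) + 2$ for $\ensuremath{\mathbf v}' = \rho_{\tilde \ensuremath{\mathbf t}}(\ensuremath{\mathbf v}) - \tilde \ensuremath{\mathbf t}$.

```python
# (1) the bound used on the boundary segment: 4t - 2t^2 = 2t(2 - t) > 0 on (0,1)
for i in range(1, 100):
    t = i / 100.0
    assert -2 + 4*t - 2*t*t > -2

# (2) v' = rho_t(v) - t = v - ((s,v)+1) t satisfies (t, v') = (s, v) + 2.
# Illustration only: basis (t, w) with t^2 = -2, (t, w) = 1, w^2 = 0,
# and v an arbitrary test class with (s, v) > 0, where s = -t.
G = [[-2, 1], [1, 0]]
def pair(x, y):
    return sum(x[i]*G[i][j]*y[j] for i in range(2) for j in range(2))

t_vec = (1, 0)
v = (2, 3)
sv = -pair(t_vec, v)                 # (s, v) = -(t, v); equals 1 > 0 here
rho_v = tuple(v[i] + pair(t_vec, v) * t_vec[i] for i in range(2))  # rho_t(v)
v_prime = tuple(rho_v[i] - t_vec[i] for i in range(2))
assert v_prime == tuple(v[i] - (sv + 1) * t_vec[i] for i in range(2))
assert pair(t_vec, v_prime) == sv + 2
print("ok")
```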
It remains to prove the converse of Proposition \ref{prop:flops}:
\begin{Prop} \label{prop:noflops}
Assume that $\ensuremath{\mathcal W}$ does not induce a divisorial contraction. Assume that $\ensuremath{\mathbf v}$ cannot be written
as the sum of two positive classes in $P_\ensuremath{\mathcal H}$, and that there is no spherical class
$\ensuremath{\mathbf s} \in \ensuremath{\mathcal H}$ with $0 < (\ensuremath{\mathbf s}, \ensuremath{\mathbf v}) \le \frac{\ensuremath{\mathbf v}^2}2$.
Then $\ensuremath{\mathcal W}$ is either a fake wall, or not a wall.
\end{Prop}
\subsubsection*{Proof}
First consider the case where $\ensuremath{\mathbf v} = \ensuremath{\mathbf v}_0$ is the minimal class in its orbit $G_\ensuremath{\mathcal H}.\ensuremath{\mathbf v}$. We will prove that
every $\sigma_+$-stable object $E$ of class $\ensuremath{\mathbf v}_0$ is also $\sigma_0$-\emph{stable}.
Assume otherwise, so $E$ is strictly $\sigma_0$-semistable, and therefore $\sigma_-$-unstable.
Let $\ensuremath{\mathbf a}_1, \dots, \ensuremath{\mathbf a}_l$ be the Mukai vectors
of the HN filtration factors of $E$ with respect to $\sigma_-$. If all classes $\ensuremath{\mathbf a}_i$ are positive,
$\ensuremath{\mathbf a}_i \in P_\ensuremath{\mathcal H}$, then we have an immediate contradiction to the assumptions.
Otherwise, $E$ must have
a spherical destabilizing subobject, or a spherical destabilizing quotient.
Let $\tilde \ensuremath{\mathbf s}$ be the class of this spherical object. If there is only one $\sigma_0$-stable
spherical object, then it is easy to see that $\ensuremath{\mathbf v}_0 - \tilde \ensuremath{\mathbf s}$ is in the positive cone;
therefore, $(\tilde \ensuremath{\mathbf s}, \ensuremath{\mathbf v}_0) < \frac{\ensuremath{\mathbf v}_0^2}2$ in contradiction to our assumption.
\begin{wrapfigure}{r}{0.4\textwidth}
\vspace{-1em}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.5cm,y=1.5cm]
\clip(-0.3,-0.3) rectangle (3.5,2.5);
\draw[->,color=black] (-1,0) -- (3.5,0);
\foreach \x in {-1,1,2,3,4}
\draw[shift={(\x,0)},color=black] (0pt,2pt) -- (0pt,-2pt);
\draw[->,color=black] (0,-1) -- (0,2.5);
\foreach \y in {-1,1,2,3}
\draw[shift={(0,\y)},color=black] (2pt,0pt) -- (-2pt,0pt);
\draw [samples=50,domain=-0.99:0.99,rotate around={-135:(-1,-1)},xshift=-1.5cm,yshift=-1.5cm] plot ({2*(1+(\x)^2)/(1-(\x)^2)},{1.15*2*(\x)/(1-(\x)^2)});
\draw [samples=50,domain=-0.99:0.99,rotate around={-135:(-1,-1)},xshift=-1.5cm,yshift=-1.5cm] plot ({2*(-1-(\x)^2)/(1-(\x)^2)},{1.15*(-2)*(\x)/(1-(\x)^2)});
\draw [samples=50,domain=-0.99:0.99,rotate around={135:(1,0)},xshift=1.5cm,yshift=0cm] plot ({0.58*(1+(\x)^2)/(1-(\x)^2)},{1*2*(\x)/(1-(\x)^2)});
\draw [samples=50,domain=-0.99:0.99,rotate around={135:(0,1)},xshift=0cm,yshift=1.5cm] plot ({0.58*(-1-(\x)^2)/(1-(\x)^2)},{1*(-2)*(\x)/(1-(\x)^2)});
\draw (1.4,0.68) node[anchor=north west] {$\mathbf{v}^2=\mathbf{v}_0^2$};
\draw (1.4,1.3) node[anchor=north west] {$ (\mathbf{v}-\mathbf{t})^2=-2$};
\draw (0.4,2.2) node[anchor=north west] {$(\mathbf{v}-\mathbf{s})^2=-2$};
\draw (0.37,0.31) node[anchor=north west] {$ \mathbf{v}_0 $};
\fill [color=black] (1,0) circle (1.5pt);
\draw[color=black] (1.0,-0.18) node {$S_1$};
\fill [color=black] (0,1) circle (1.5pt);
\draw[color=black] (-0.18,1) node {$S_2$};
\fill [color=black] (0.44,0.39) circle (1.5pt);
\end{tikzpicture}
\caption{Proof of Proposition \ref{prop:noflops}}
\label{fig:NoFlop}
\vspace{-10pt}
\end{wrapfigure}
If there are two $\sigma_0$-stable spherical objects of classes $\ensuremath{\mathbf s}, \ensuremath{\mathbf t}$, consider the two vectors
$\ensuremath{\mathbf v}_0-\ensuremath{\mathbf s}$ and $\ensuremath{\mathbf v}_0 -\ensuremath{\mathbf t}$. The assumptions imply
$(\ensuremath{\mathbf v}_0-\ensuremath{\mathbf s})^2 < -2$ and $(\ensuremath{\mathbf v}_0-\ensuremath{\mathbf t})^2 < -2$; on the other hand, $\ensuremath{\mathbf v}_0 - \tilde \ensuremath{\mathbf s}$ is effective;
using Lemma \ref{lem:JHspherical}, this implies that $\ensuremath{\mathbf v}_0-\ensuremath{\mathbf s}$ or $\ensuremath{\mathbf v}_0-\ensuremath{\mathbf t}$ must be effective.
We claim that this leads to a simple numerical contradiction.
Indeed, $(\ensuremath{\mathbf v}_0-\ensuremath{\mathbf t})^2 < -2$ constrains $\ensuremath{\mathbf v}_0$ to lie below a concave down hyperbola, and $(\ensuremath{\mathbf v}_0-\ensuremath{\mathbf s})^2 < -2$
to lie above a concave up hyperbola; the two hyperbolas intersect at the points $0$ and $\ensuremath{\mathbf s} + \ensuremath{\mathbf t}$.
Therefore, if we write $\ensuremath{\mathbf v}_0 = x\ensuremath{\mathbf s} + y\ensuremath{\mathbf t}$, we have $x,y < 1$. Thus, neither $\ensuremath{\mathbf v}_0-\ensuremath{\mathbf s}$ nor $\ensuremath{\mathbf v}_0-\ensuremath{\mathbf t}$
can be effective (see Figure \ref{fig:NoFlop}).
In the case where $\ensuremath{\mathbf v}$ is not minimal, $\ensuremath{\mathbf v} \neq \ensuremath{\mathbf v}_0$,
let $\Phi$ be the sequence of spherical twists given by
Proposition \ref{prop:sphericaltotallysemistable}. Since the assumptions of our proposition are
invariant under the $G_\ensuremath{\mathcal H}$-action, they are also satisfied by $\ensuremath{\mathbf v}_0$. By the previous case,
we know that every $\sigma_+$-stable object $E_0$ of class $\ensuremath{\mathbf v}_0$ is also $\sigma_0$-\emph{stable}.
Thus $\Phi$ induces a morphism
$\Phi_* \colon M_{\sigma_+}(\ensuremath{\mathbf v}_0) \to M_{\sigma_+}(\ensuremath{\mathbf v})$; since $\Phi_*$ is injective and
the two spaces are smooth projective varieties of the same dimension, it is an isomorphism.
The S-equivalence class of $\Phi(E_0)$ is determined by that of $E_0$; since S-equivalence is a
trivial equivalence relation on $M_{\sigma_+}(\ensuremath{\mathbf v}_0)$, the same holds for
$M_{\sigma_+}(\ensuremath{\mathbf v})$, and thus $\pi^+$ is an isomorphism.
\hfill$\Box$
Proposition \ref{prop:noflops} finishes the proof of Theorem \ref{thm:walls}.
\section{Main theorems}\label{sec:MainThms}
We will first complete the proof of Theorem \ref{thm:birational-WC}.
\begin{proof}[Proof of Theorem \ref{thm:birational-WC}, part \eqref{enum:birationalautoequivalence}]
We consider a wall $\ensuremath{\mathcal W}$ with nearby stability conditions $\sigma_\pm$, and $\sigma_0 \in \ensuremath{\mathcal W}$.
Since $M_{\sigma_\pm}(\ensuremath{\mathbf v})$ are $K$-trivial varieties, it is sufficient to find
an open subset $U \subset M_{\sigma_\pm}(\ensuremath{\mathbf v})$ with complement of codimension two, and an
(anti-)autoequivalence $\Phi_\ensuremath{\mathcal W}$ of $\mathrm{D}^{b}(X,\alpha)$, such that
$\Phi_\ensuremath{\mathcal W}(E)$ is $\sigma_-$-stable for all $E \in U$.
We will distinguish cases according to Theorem \ref{thm:walls}.
First consider the case when $\ensuremath{\mathcal W}$ corresponds to a flopping contraction, or when $\ensuremath{\mathcal W}$
is a fake wall.
If $\ensuremath{\mathcal W}$ does not admit an effective spherical class $\ensuremath{\mathbf s} \in \ensuremath{\mathcal H}_\ensuremath{\mathcal W}$ with $(\ensuremath{\mathbf s}, \ensuremath{\mathbf v}) < 0$
then we can choose $U$ to be the open subset of $\sigma_0$-\emph{stable}
objects; its complement has codimension two, and there is nothing to prove.
Otherwise, there exists a spherical object
destabilizing every object in $M_{\sigma_+}(\ensuremath{\mathbf v})$. Let $\ensuremath{\mathbf v}_0 \in \ensuremath{\mathcal H}_\ensuremath{\mathcal W}$ be the
minimal class of the $G_\ensuremath{\mathcal H}$-orbit of $\ensuremath{\mathbf v}$, in the sense of Definition \ref{prop:pellequation}.
The subset $U$
of $\sigma_0$-\emph{stable} objects in $M_{\sigma_\pm}(\ensuremath{\mathbf v}_0)$ has complement of codimension two.
Then the sequence of spherical twists of Proposition \ref{prop:sphericaltotallysemistable}, applied
for $\sigma_+$ and $\sigma_-$, identifies $U$ with subsets of $M_{\sigma_+}(\ensuremath{\mathbf v})$
and $M_{\sigma_-}(\ensuremath{\mathbf v})$ via derived equivalences $\Phi^+, \Phi^-$; then the composition
$\Phi^- \circ \left(\Phi^+\right)^{-1}$ has the desired property.
Next assume that $\ensuremath{\mathcal W}$ induces a divisorial contraction. We have three cases to consider:
\begin{description}[leftmargin=0.5cm]
\item[Brill-Noether] Again, we first assume that $\ensuremath{\mathbf v}$ is minimal, namely there is no
effective spherical class $\ensuremath{\mathbf s}$ with $(\ensuremath{\mathbf s}, \ensuremath{\mathbf v}) < 0$.
The contracted divisor is described in Proposition
\ref{prop:sphericaldivisorialwall}, and the HN filtration of the destabilized
objects in Lemma \ref{lem:minimaldivisorialcontraction}. We may assume that we are in
the case where the Brill-Noether divisor in $M_{\sigma_+}(\ensuremath{\mathbf v})$ is described by
$\mathop{\mathrm{Hom}}\nolimits(\widetilde S, \underline{\hphantom{A}}) \neq 0$.
Now consider the spherical twist
$\mathop{\mathrm{ST}}\nolimits_{\widetilde S}$ at $\widetilde S$, applied to objects $E \in M_{\sigma_+}(\ensuremath{\mathbf v})$. Note that
by $\sigma_+$-stability, we have $\mathop{\mathrm{Ext}}\nolimits^2(\widetilde S, E) = \mathop{\mathrm{Hom}}\nolimits(E, \widetilde S)^\vee = 0$
for any such $E$;
since $(\ensuremath{\mathbf v}(\widetilde S), \ensuremath{\mathbf v}(E)) = 0$, it follows that $\hom(\widetilde S, E) = \mathop{\mathrm{ext}}\nolimits^1(\widetilde S, E)$.
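Explicitly, the equality of dimensions follows from the Euler characteristic computation
\[
\hom(\widetilde S, E) - \mathop{\mathrm{ext}}\nolimits^1(\widetilde S, E) + \mathop{\mathrm{ext}}\nolimits^2(\widetilde S, E) = \chi(\widetilde S, E) = -\bigl(\ensuremath{\mathbf v}(\widetilde S), \ensuremath{\mathbf v}(E)\bigr) = 0,
\]
combined with the vanishing $\mathop{\mathrm{ext}}\nolimits^2(\widetilde S, E) = 0$ established above.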
If $E$ does not lie on the Brill-Noether divisor, then $\mathop{\mathbf{R}\mathrm{Hom}}\nolimits(\widetilde S, E) = 0$, and so
$\mathop{\mathrm{ST}}\nolimits_{\widetilde S}(E) = E$. Also, for generic such $E$ (away from a codimension two subset), the
object $E$ is also $\sigma_-$-stable.
If $E$ is a generic element of the Brill-Noether divisor, then $\mathop{\mathrm{Hom}}\nolimits(\widetilde S, E)
\cong \ensuremath{\mathbb{C}} \cong \mathop{\mathrm{Ext}}\nolimits^1(\widetilde S, E)$, and hence we have an exact triangle
\[
\widetilde S \oplus \widetilde S[-1] \to E \to \mathop{\mathrm{ST}}\nolimits_{\widetilde S}(E).
\]
Its long exact cohomology sequence with respect to the t-structure of $\sigma_0$ induces
two short exact sequences
\[ \widetilde S \ensuremath{\hookrightarrow} E \ensuremath{\twoheadrightarrow} F \quad \text{and} \quad F \ensuremath{\hookrightarrow} \mathop{\mathrm{ST}}\nolimits_{\widetilde S}(E) \ensuremath{\twoheadrightarrow} \widetilde S. \]
By Lemma \ref{lem:sphericaldivisorialwall}, the former is the HN filtration of $E$
with respect to $\sigma_-$; the latter is the dual extension, which is a $\sigma_-$-stable
object by Lemma \ref{lem:stableextension}.
Thus, in both cases, $\mathop{\mathrm{ST}}\nolimits_{\widetilde S}(E)$ is $\sigma_-$-stable.
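Spliced together, the two short exact sequences give a four-term exact sequence in the heart of $\sigma_0$,
\[
0 \to \widetilde S \to E \to \mathop{\mathrm{ST}}\nolimits_{\widetilde S}(E) \to \widetilde S \to 0,
\]
in which $F$ appears as the image of the middle map $E \to \mathop{\mathrm{ST}}\nolimits_{\widetilde S}(E)$.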
This gives a birational map $M_{\sigma_+}(\ensuremath{\mathbf v})\dashrightarrow M_{\sigma_-}(\ensuremath{\mathbf v})$ defined in codimension two and induced by the autoequivalence $\mathop{\mathrm{ST}}\nolimits_{\widetilde S}$, which is the claim we wanted to prove.
If instead there is an effective spherical class $\ensuremath{\mathbf s}$ with $(\ensuremath{\mathbf s}, \ensuremath{\mathbf v}) < 0$, we reduce
to the previous case, similarly to the situation of flopping contractions:
Let $\ensuremath{\mathbf v}_0$ again denote the minimal class
in the orbit $G_\ensuremath{\mathcal H}.\ensuremath{\mathbf v}$; note that $\ensuremath{\mathcal W}$ also induces a divisorial contraction of
Brill-Noether type for $\ensuremath{\mathbf v}_0$. In this case, Lemma \ref{lem:sphericaldivisorialwall}
states that the sequence $\Phi$ of spherical twists identifies
an open subset $U^+ \subset M_{\sigma_+}(\ensuremath{\mathbf v}_0)$ (with complement of codimension two)
with an open subset of $M_{\sigma_+}(\ensuremath{\mathbf v})$; similarly for $U^- \subset M_{\sigma_-}(\ensuremath{\mathbf v}_0)$.
Combined with the single spherical twist identifying a common open subset of
$M_{\sigma_\pm}(\ensuremath{\mathbf v}_0)$, this implies the claim.
\item[Hilbert-Chow]
Here $\ensuremath{\mathcal W}$ is an isotropic wall and there exists an isotropic primitive vector $\ensuremath{\mathbf w}_0$ with $(\ensuremath{\mathbf w}_0,\ensuremath{\mathbf v})=1$.
As shown in Section
\ref{sec:iso}, we may assume that shift by one identifies
$M_{\sigma_+}(\ensuremath{\mathbf v})$ with the $(\beta,\omega)$-Gieseker moduli space
$M_{\omega}^{\beta}(-\ensuremath{\mathbf v})$ of stable sheaves of rank one on a twisted K3 surface $(Y, \alpha')$.
After tensoring with a line bundle, we may assume that objects in $M_{\sigma_+}(\ensuremath{\mathbf v})$
are exactly the shifts $I_Z[1]$ of ideal sheaves of 0-dimensional subschemes $Z \subset Y$.
In the setting of Proposition \ref{prop:Jason}, we have $\beta = 0$. Since there are
line bundles on $(Y, \alpha')$, the Brauer group element $\alpha'$ is trivial. By the last
statement of the same Proposition, the moduli space $M_{\sigma_-}(\ensuremath{\mathbf v})$ parameterizes
the shifts of derived duals of ideal sheaves. Thus there is a natural isomorphism
$M_{\sigma_-}(\ensuremath{\mathbf v}) \cong M_{\sigma_+}(\ensuremath{\mathbf v})$ induced by the derived anti-autoequivalence
$(\underline{\hphantom{A}})^\vee[2]$.
\item[Li-Gieseker-Uhlenbeck]
Here $\ensuremath{\mathcal W}$ is again isotropic, but $(\ensuremath{\mathbf w}_0,\ensuremath{\mathbf v})=2$.
We will argue along similar lines as in the previous case; unfortunately, the details are more
involved. The first difference
is that we cannot assume $\beta = 0$. Instead,
first observe that $M_{\sigma_+}(\ensuremath{\mathbf v}) = M_{\omega}^{\beta}(-\ensuremath{\mathbf v})$
parameterizes $(\beta,\omega)$-Gieseker stable sheaves $F$ of rank $2 = (\ensuremath{\mathbf v}, \ensuremath{\mathbf w}_0)$, and of slope
$\mu_{\omega}(F) = \omega.\beta$. If we assume $\omega$ to be generic, then
Gieseker stability is independent of the choice of $\beta$; we can consider
$M_{\sigma_+}(\ensuremath{\mathbf v}) = M_{\omega}(-\ensuremath{\mathbf v})$ to be the moduli space of shifts $F[1]$ of
$\omega$-Gieseker stable sheaves $F$.
Since $(Y, \alpha')$ admits rank two vector bundles, the order of $\alpha'$ in the Brauer group is
one or two; in both cases, we can identify $(Y, \alpha')$ with $(Y, (\alpha')^{-1})$, and thus
the derived dual $E \mapsto E^\vee$ defines an anti-autoequivalence of $\mathrm{D}^{b}(Y, \alpha')$.
Write $-\ensuremath{\mathbf v} = (2, c, d)$, and let $\ensuremath{\mathcal L}$ be the line bundle with $c_1(\ensuremath{\mathcal L}) = c$.
From the previous discussion it follows
that $\Phi(\underline{\hphantom{A}}) = (\underline{\hphantom{A}})^\vee \otimes \ensuremath{\mathcal L}[2]$ is the desired functor:
Indeed, any object in $M_{\sigma_+}(\ensuremath{\mathbf v})$ is of the form $F[1]$ for a $\omega$-Gieseker stable sheaf
$F$ of class $-\ensuremath{\mathbf v}$. Then $\Phi(F[1]) = F^\vee \otimes \ensuremath{\mathcal L}[1]$ is the shift of the twisted derived dual
of a Gieseker stable sheaf, and has class $\ensuremath{\mathbf v}$. By Proposition \ref{prop:Jason}, this is an object of
$M_{\sigma_-}(\ensuremath{\mathbf v})$.
\end{description}
\end{proof}
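As a consistency check for the Hilbert-Chow case of the preceding proof (where $\ensuremath{\mathbf v} = -(1, 0, 1-n)$ and the stable objects are the shifts $I_Z[1]$): the derived dual acts on Mukai vectors by $(r, c, s)^\vee = (r, -c, s)$, while a shift by one changes the overall sign. Hence
\[
\ensuremath{\mathbf v}\bigl((I_Z[1])^\vee[2]\bigr) = \ensuremath{\mathbf v}\bigl(I_Z^\vee[1]\bigr) = -(1, 0, 1-n)^\vee = -(1, 0, 1-n) = \ensuremath{\mathbf v},
\]
so the anti-autoequivalence $(\underline{\hphantom{A}})^\vee[2]$ indeed preserves the class $\ensuremath{\mathbf v}$.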
Consider two adjacent chambers $\ensuremath{\mathcal C}^+, \ensuremath{\mathcal C}^-$ separated by a wall $\ensuremath{\mathcal W}$; as always,
we pick stability conditions $\sigma_{\pm} \in \ensuremath{\mathcal C}^{\pm}$, and a stability condition
$\sigma_0 \in \ensuremath{\mathcal W}$.
By the identification of N\'eron-Severi groups induced
by Theorem \ref{thm:birational-WC}, we can think of the corresponding maps
$\ell_{\pm}$ of equation \eqref{eq:elltoAmp} as maps
\[
\ell_{\pm} \colon \ensuremath{\mathcal C}^{\pm} \to \mathop{\mathrm{NS}}\nolimits(M_{\sigma_+}(\ensuremath{\mathbf v})).
\]
They can be written as the following composition of maps
\[
\mathop{\mathrm{Stab}}\nolimits^\dag(X,\alpha) \xrightarrow{\ensuremath{\mathcal Z}} H^*_{\mathrm{alg}}(X,\alpha,\ensuremath{\mathbb{Z}}) \otimes \ensuremath{\mathbb{C}} \xrightarrow{I} \ensuremath{\mathbf v}^\perp
\xrightarrow{\theta_{\ensuremath{\mathcal C}^\pm}} \mathop{\mathrm{NS}}\nolimits(M_{\sigma_+}(\ensuremath{\mathbf v}))
\]
where $\ensuremath{\mathcal Z}$ is the map defined in Theorem \ref{thm:Bridgeland_coveringmap},
$I$ is given by $I(\Omega_Z) = \Im \frac{\Omega_Z}{-(\Omega_Z, \ensuremath{\mathbf v})}$, and where
$\theta_{\ensuremath{\mathcal C}^\pm}$ are the Mukai morphisms, as reviewed in Remark \ref{rmk:comparison}.
Our next goal is to show that these two maps behave as nicely as one could hope; we will
distinguish two cases according to the behavior of the contraction morphism
\[
\pi^+ \colon M_{\sigma_+}(\ensuremath{\mathbf v}) \to \overline{M}^+
\]
induced by $\ensuremath{\mathcal W}$ via Theorem \ref{thm:contraction}:
\begin{Lem} \label{lem:ellandreflections}
The maps $\ell_+, \ell_-$ agree on the wall $\ensuremath{\mathcal W}$ (when extended by continuity).
\begin{enumerate}
\item \label{enum:isomorsmall} (Fake or flopping walls)
When $\pi^+$ is an isomorphism, or a small contraction, then
the maps $\ell_+, \ell_-$ are analytic continuations of each other.
\item \label{enum:divisorial} (Bouncing walls)
When $\pi^+$ is a divisorial contraction, then the analytic continuations of
$\ell_+, \ell_-$ differ by the reflection $\rho_D$ in $\mathop{\mathrm{NS}}\nolimits(M_{\sigma_+}(\ensuremath{\mathbf v}))$ at the
divisor $D$ contracted by $\ell_{\sigma_0}$.
\end{enumerate}
\end{Lem}
As a consequence, in case \eqref{enum:isomorsmall} the wall $\ensuremath{\mathcal W}$ is a fake wall when $\pi^+$ is
an isomorphism, and induces a flop when $\pi^+$ is a small contraction; in case
\eqref{enum:divisorial}, corresponding to a divisorial contraction, the moduli spaces
$M_{\sigma_\pm}(\ensuremath{\mathbf v})$ for the two adjacent chambers are isomorphic.
\begin{proof}
We have to prove $\theta_{\ensuremath{\mathcal C}^-} = \theta_{\ensuremath{\mathcal C}^+}$ in case \eqref{enum:isomorsmall}, and
$\theta_{\ensuremath{\mathcal C}^-} = \rho_D \circ \theta_{\ensuremath{\mathcal C}^+}$ in case \eqref{enum:divisorial}. We
assume for simplicity that the two moduli spaces admit universal families; the arguments
apply identically to quasi-universal families.
Consider case \eqref{enum:isomorsmall}. If the wall is not totally semistable, then the two
moduli spaces $M_{\ensuremath{\mathcal C}^\pm}(\ensuremath{\mathbf v})$ share a common open subset, with complement of codimension two,
on which the two universal families agree.
By the projectivity of the moduli spaces, the maps $\theta_{\ensuremath{\mathcal C}^\pm}$ are determined by their restriction
to curves contained in this subset; this proves the claim. If the wall is instead totally
semistable, we additionally have to use Proposition \ref{prop:sphericaltotallysemistable}.
Let $\Phi^+$ and $\Phi^-$ be the two sequences of spherical twists, sending $\sigma_0$-stable
objects of class $\ensuremath{\mathbf v}_0$ to $\sigma_+$- and $\sigma_-$-stable objects of class $\ensuremath{\mathbf v}$, respectively.
The autoequivalence inducing the birational map $M_{\sigma_+}(\ensuremath{\mathbf v}) \dashrightarrow
M_{\sigma_-}(\ensuremath{\mathbf v})$ is given by $\Phi^- \circ (\Phi^+)^{-1}$.
As the classes of the spherical objects occurring in $\Phi^+$ and $\Phi^-$ are identical, this
does not change the class of the universal family in the $K$-group; therefore, the Mukai morphisms
$\theta_{\ensuremath{\mathcal C}^+}, \theta_{\ensuremath{\mathcal C}^-}$ agree.
Now consider the case of a Brill-Noether divisorial contraction;
we first assume that there is no effective spherical class $\ensuremath{\mathbf s}' \in \ensuremath{\mathcal H}_\ensuremath{\mathcal W}$ with
$(\ensuremath{\mathbf s}', \ensuremath{\mathbf v}) < 0$.
The contraction is induced by a spherical object $S$ with Mukai vector $\ensuremath{\mathbf s} := \ensuremath{\mathbf v}(S) \in \ensuremath{\mathbf v}^\perp$.
By Lemma \ref{lem:sphericaldivisorialwall}, the class of the contracted divisor is given by
$\theta_{\ensuremath{\mathcal C}^\pm}(\ensuremath{\mathbf s})$ on either side of the wall.
The universal families differ (up to a subset of codimension two) by the spherical twist
$\mathop{\mathrm{ST}}\nolimits_S(\underline{\hphantom{A}})$. This induces the reflection at $\ensuremath{\mathbf s}$ in $H^*_\mathrm{alg}(X,\alpha,\ensuremath{\mathbb{Z}})$; thus the Mukai morphisms differ by
reflection at $\theta(\ensuremath{\mathbf s})$, as claimed.
If in addition to $\ensuremath{\mathbf s} \in \ensuremath{\mathbf v}^\perp$, there does exist
an effective spherical class $\ensuremath{\mathbf s}' \in \ensuremath{\mathcal H}_\ensuremath{\mathcal W}$ with $(\ensuremath{\mathbf s}', \ensuremath{\mathbf v}) < 0$, we have to rely
on the constructions of Lemma \ref{lem:sphericaldivisorialwall}, as
in the proof of Theorem \ref{thm:birational-WC}. We have a common open subset
$U \subset M_{\sigma_\pm}(\ensuremath{\mathbf v}_0)$, such that the two universal families $\ensuremath{\mathcal E}^{\pm}|_U$
are related by the spherical twist at a spherical object $S_0$ of class $\ensuremath{\mathbf s}_0$. Let
$\Phi^{\pm}$ be the sequences of spherical twists obtained from Lemma
\ref{lem:sphericaldivisorialwall}, applied to $\sigma_+$ or $\sigma_-$, respectively.
Their induced maps $\Phi_*^{\pm} \colon H^*_\mathrm{alg}(X,\alpha,\ensuremath{\mathbb{Z}}) \to H^*_\mathrm{alg}(X,\alpha,\ensuremath{\mathbb{Z}})$ on the Mukai lattice are identical,
as they are obtained by twists at spherical objects of the same classes; this common map sends
$\ensuremath{\mathbf v}_0$ to $\ensuremath{\mathbf v}$, and thus $\ensuremath{\mathbf s}_0$ to $\pm\ensuremath{\mathbf s}$. Therefore, the composition
$\Phi^- \circ \mathop{\mathrm{ST}}\nolimits_{S_0} \circ (\Phi^+)^{-1}$ induces the reflection at $\ensuremath{\mathbf s}$, as claimed.
It remains to consider divisorial contractions of Hilbert-Chow and Li-Gieseker-Uhlenbeck type. We
may assume $M_{\sigma_+}(\ensuremath{\mathbf v})$ is the Hilbert scheme, or a moduli space of Gieseker stable sheaves of
rank two.
By the proof of Theorem \ref{thm:birational-WC}, there is a line bundle
$\ensuremath{\mathcal L}$ on $X$ such that
\[
\mathop{\mathbf{R}\mathcal Hom}\nolimits_{M_{\sigma_\pm}(\ensuremath{\mathbf v}) \times X}(\ensuremath{\mathcal E}, (p_X)^*\ensuremath{\mathcal L}[2])
\]
is a universal family with respect to
$\sigma_-$ on $M_{\sigma_-}(\ensuremath{\mathbf v}) = M_{\sigma_+}(\ensuremath{\mathbf v})$. We use equation \eqref{eq:definetheta}
to compare $\theta_{\ensuremath{\mathcal C}^\pm}$ by evaluating their degree on a test curve $C \subset M_{\sigma_\pm}(\ensuremath{\mathbf v})$.
Let $i$ denote the inclusion $i \colon C \times X \ensuremath{\hookrightarrow} M_{\sigma_\pm}(\ensuremath{\mathbf v}) \times X$, and
$p$ the projection $p \colon C \times X \to X$. This yields the following chain of equalities
for $\ensuremath{\mathbf a} \in \ensuremath{\mathbf v}^\perp$:
\begin{align}
\theta_{\ensuremath{\mathcal C}^-}(\ensuremath{\mathbf a}).C &=
\Bigl(\ensuremath{\mathbf a}, \ensuremath{\mathbf v}\bigl(p_* i^*(\ensuremath{\mathcal E}^-)\bigr) \Bigr)
= \Bigl(\ensuremath{\mathbf a}, \ensuremath{\mathbf v}\bigl(p_* \mathop{\mathbf{R}\mathcal Hom}\nolimits_{C \times X}(i^*\ensuremath{\mathcal E}, \ensuremath{\mathcal O}_C \boxtimes \ensuremath{\mathcal L}[2])\bigr) \Bigr) \label{eqdual} \\
&= \Bigl(\ensuremath{\mathbf a}, \ensuremath{\mathbf v}\bigl(p_* \mathop{\mathbf{R}\mathcal Hom}\nolimits_{C \times X}(i^*\ensuremath{\mathcal E}, \omega_C[1] \boxtimes \ensuremath{\mathcal L}[1])\bigr) \Bigr) \label{eqavperp} \\
&= -\Bigl(\ensuremath{\mathbf a}, \ensuremath{\mathbf v}\bigl(\mathop{\mathbf{R}\mathcal Hom}\nolimits_X(p_*i^*\ensuremath{\mathcal E}, \ensuremath{\mathcal L})\bigr) \Bigr) \label{Grothendieck} \\
& = -\Bigl(\ensuremath{\mathbf a}^\vee \cdot \mathop{\mathrm{ch}}\nolimits(\ensuremath{\mathcal L}), \ensuremath{\mathbf v}(p_*i^*\ensuremath{\mathcal E})\Bigr)
= - \theta_{\ensuremath{\mathcal C}^+}\bigl(\ensuremath{\mathbf a}^\vee \cdot \mathop{\mathrm{ch}}\nolimits(\ensuremath{\mathcal L})\bigr).C \label{eq:Columbus20121222}
\end{align}
Here we used compatibility of duality with base change in \eqref{eqdual}, $\ensuremath{\mathbf a} \in \ensuremath{\mathbf v}^\perp$ in
\eqref{eqavperp}, and Grothendieck duality in \eqref{Grothendieck}. In \eqref{eq:Columbus20121222},
we wrote $\ensuremath{\mathbf a}^\vee$ for the class corresponding to $\ensuremath{\mathbf a}$ under duality $(\underline{\hphantom{A}})^\vee$.
In the Hilbert-Chow case, with $\ensuremath{\mathbf v} = -(1, 0, 1-n)$, the class of the contracted divisor
$D$ is proportional to $\theta_{\ensuremath{\mathcal C}^+}(1, 0, n-1)$, and we have $\ensuremath{\mathcal L} \cong \ensuremath{\mathcal O}_X$;
in the Li-Gieseker-Uhlenbeck case, we can write
$\ensuremath{\mathbf v} = (2, c, d)$, the class of the contracted divisor $D$ is a multiple of
$\theta_{\ensuremath{\mathcal C}^+}(2, c, \frac{c^2}2-d)$, and $c_1(\ensuremath{\mathcal L}) = c$. In both cases,
the reflection $\rho_D$ is compatible with the above chain of equalities:
\begin{equation}\label{eq:Columbus20130813}
\rho_D\left(\theta_{\ensuremath{\mathcal C}^+}(\ensuremath{\mathbf a})\right) = -\theta_{\ensuremath{\mathcal C}^+}\left(\ensuremath{\mathbf a}^\vee\cdot \mathop{\mathrm{ch}}\nolimits(\ensuremath{\mathcal L})\right).
\end{equation}
Indeed, in the HC case, we can test \eqref{eq:Columbus20130813} for $\ensuremath{\mathbf a}_1=(1,0,n-1)$ and classes of
the form $\ensuremath{\mathbf a}_2=(0,c',0)\in \theta_{\ensuremath{\mathcal C}^+}^{-1}\left(D^\perp\right)$:
since $\ensuremath{\mathbf a}_1^\vee=\ensuremath{\mathbf a}_1$ and $\ensuremath{\mathbf a}_2^\vee = - \ensuremath{\mathbf a}_2$, and since such classes
span $\ensuremath{\mathbf v}^\perp$, the equality follows.
Similarly, in the LGU case, we can use $\ensuremath{\mathbf a}_1=(2,c,\frac{c^2}2-d)$ and
$\ensuremath{\mathbf a}_2=(0,c',\frac{c'.c}{2})$.
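For instance, for the classes $\ensuremath{\mathbf a}_2$ of the LGU case, equation \eqref{eq:Columbus20130813} follows from a direct multiplication in the cohomology ring: since $\mathop{\mathrm{ch}}\nolimits(\ensuremath{\mathcal L}) = (1, c, \frac{c^2}{2})$ and $\ensuremath{\mathbf a}_2^\vee = (0, -c', \frac{c'.c}{2})$, we obtain
\[
\ensuremath{\mathbf a}_2^\vee \cdot \mathop{\mathrm{ch}}\nolimits(\ensuremath{\mathcal L}) = \left(0,\ -c',\ \frac{c'.c}{2} - c'.c\right) = -\ensuremath{\mathbf a}_2,
\]
so the right-hand side of \eqref{eq:Columbus20130813} equals $\theta_{\ensuremath{\mathcal C}^+}(\ensuremath{\mathbf a}_2)$, in agreement with the fact that $\theta_{\ensuremath{\mathcal C}^+}(\ensuremath{\mathbf a}_2) \in D^\perp$ is fixed by $\rho_D$.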
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:MAP}, \eqref{enum:piecewise}, \eqref{enum:imagemovable}, \eqref{enum:allMMP}]
Lemma \ref{lem:ellandreflections} proves part \eqref{enum:piecewise}. Part \eqref{enum:allMMP}
follows directly from the positivity $\ell_\ensuremath{\mathcal C}(\ensuremath{\mathcal C}) \subset \mathrm{Amp} M_\ensuremath{\mathcal C}(\ensuremath{\mathbf v})$ once
we have established part \eqref{enum:imagemovable}.
Consider a big class in the movable cone, given as $\theta_\sigma(\ensuremath{\mathbf a})$ for some class
$\ensuremath{\mathbf a} \in \ensuremath{\mathbf v}^\perp, \ensuremath{\mathbf a}^2 > 0$; we have to show that it is in the image of $\ell$.
Recall the definition of $\ensuremath{\mathcal P}_0^+(X,\alpha)$ given in the
discussion preceding Theorem \ref{thm:Bridgeland_coveringmap}. If we set
\[
\Omega'_\ensuremath{\mathbf a} := \sqrt{-1}\ensuremath{\mathbf a} - \frac{\ensuremath{\mathbf v}}{\ensuremath{\mathbf v}^2} \in H^*_\mathrm{alg}(X,\alpha,\ensuremath{\mathbb{Z}}) \otimes \ensuremath{\mathbb{C}},
\]
then clearly $\Omega'_\ensuremath{\mathbf a} \in \ensuremath{\mathcal P}(X,\alpha)$.
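Indeed, using $(\ensuremath{\mathbf a}, \ensuremath{\mathbf v}) = 0$, $\ensuremath{\mathbf a}^2 > 0$ and $\ensuremath{\mathbf v}^2 > 0$, the real and imaginary parts of $\Omega'_\ensuremath{\mathbf a}$ span a positive definite two-plane:
\[
(\Re \Omega'_\ensuremath{\mathbf a})^2 = \frac{1}{\ensuremath{\mathbf v}^2} > 0, \qquad (\Im \Omega'_\ensuremath{\mathbf a})^2 = \ensuremath{\mathbf a}^2 > 0, \qquad (\Re \Omega'_\ensuremath{\mathbf a}, \Im \Omega'_\ensuremath{\mathbf a}) = -\frac{(\ensuremath{\mathbf v}, \ensuremath{\mathbf a})}{\ensuremath{\mathbf v}^2} = 0;
\]
moreover, $(\Omega'_\ensuremath{\mathbf a}, \ensuremath{\mathbf v}) = -\frac{\ensuremath{\mathbf v}^2}{\ensuremath{\mathbf v}^2} = -1$ and $\Im \Omega'_\ensuremath{\mathbf a} = \ensuremath{\mathbf a}$.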
In case there is a spherical class $\ensuremath{\mathbf s} \in H^*_\mathrm{alg}(X,\alpha,\ensuremath{\mathbb{Z}})$ with $(\Omega'_\ensuremath{\mathbf a}, \ensuremath{\mathbf s}) = 0$, we modify
$\Omega'_\ensuremath{\mathbf a}$ by a small real multiple of $\ensuremath{\mathbf s}$ to obtain $\Omega_\ensuremath{\mathbf a} \in \ensuremath{\mathcal P}_0(X,\alpha)$, otherwise we
set $\Omega_\ensuremath{\mathbf a} = \Omega'_\ensuremath{\mathbf a}$; in either case, we have
$\Omega_\ensuremath{\mathbf a} \in \ensuremath{\mathcal P}_0(X,\alpha)$ with $(\Omega_\ensuremath{\mathbf a}, \ensuremath{\mathbf v}) = -1$ and $\Im \Omega_\ensuremath{\mathbf a} = \ensuremath{\mathbf a}$.
In addition, the fact that $\theta(\ensuremath{\mathbf a})$ is contained in the positive cone gives
$\Omega_\ensuremath{\mathbf a} \in \ensuremath{\mathcal P}_0^+(X,\alpha)$.
Let $\Omega_\sigma \in \ensuremath{\mathcal P}_0^+(X,\alpha)$ be the central charge for the chosen basepoint
$\sigma \in \mathop{\mathrm{Stab}}\nolimits^\dag(X,\alpha)$. Then there is a path $\gamma \colon [0,1] \to \ensuremath{\mathcal P}_0^+(X,\alpha)$ starting at
$\Omega_\sigma$ and ending at $\Omega_\ensuremath{\mathbf a}$ with the following additional property: for all $t \in
[0,1]$, the class
\[
\theta_\sigma\Bigl(\Im \frac{\gamma(t)}{-(\gamma(t), \ensuremath{\mathbf v})}\Bigr)
\]
is contained in the
movable cone of $M_\sigma(\ensuremath{\mathbf v})$.
By Theorem \ref{thm:Bridgeland_coveringmap}, there is a lift $\sigma \colon [0,1] \to
\mathop{\mathrm{Stab}}\nolimits^\dag(X,\alpha)$ of $\gamma$ starting at $\sigma(0) = \sigma$. By
the above assumption on $\gamma$, this will never hit a wall of the movable cone
corresponding to a divisorial contraction; by Lemma \ref{lem:ellandreflections}, the
map $\ell$ extends analytically, with $\theta_\sigma = \theta_{\sigma(0)}
= \theta_{\sigma(1)}$. Therefore,
\[
\ell_{\sigma(1)}(\sigma(1)) = \theta_{\sigma(1)}(\ensuremath{\mathbf a}) = \theta_\sigma(\ensuremath{\mathbf a})
\]
as claimed.
\end{proof}
Now recall the action of the Weyl group $W_{\mathrm{Exc}}$ of Proposition \ref{prop:Weylmovablecone}.
The exceptional chamber of a hyperbolic reflection group intersects every orbit
exactly once. Thus there is a map
\[
W \colon \mathop{\mathrm{Pos}}(M_\sigma(\ensuremath{\mathbf v})) \to {\mathop{\mathrm{Mov}}\nolimits}(M_\sigma(\ensuremath{\mathbf v}))
\]
sending any class to the intersection of its $W_{\mathrm{Exc}}$-orbit with the fundamental domain.
Lemma \ref{lem:ellandreflections} and Theorem \ref{thm:MAP} immediately give
the following global description of $\ell$:
\begin{Thm} \label{thm:ellandgroupaction}
The map $\ell$ of Theorem \ref{thm:MAP} can be given as the composition of the following
maps:
\[
{\mathop{\mathrm{Stab}}\nolimits}^\dagger(X,\alpha) \xrightarrow{\ensuremath{\mathcal Z}} H^*_\mathrm{alg}(X,\alpha,\ensuremath{\mathbb{Z}}) \otimes \ensuremath{\mathbb{C}}
\xrightarrow{I} \ensuremath{\mathbf v}^\perp \xrightarrow{\theta_{\sigma,\ensuremath{\mathbf v}}} \mathop{\mathrm{Pos}}(M_\sigma(\ensuremath{\mathbf v}))
\xrightarrow{W} {\mathop{\mathrm{Mov}}\nolimits}(M_\sigma(\ensuremath{\mathbf v})).
\]
\end{Thm}
To complete the proof of Theorem \ref{thm:MAP}, it remains to prove part \eqref{enum:AmpleCone}:
\begin{Prop}
Let $\ensuremath{\mathcal C} \subset \mathop{\mathrm{Stab}}\nolimits^\dag(X,\alpha)$ be a chamber of the chamber decomposition with
respect to $\ensuremath{\mathbf v}$. Then the image of $\ell_\ensuremath{\mathcal C}(\ensuremath{\mathcal C}) \subset \mathop{\mathrm{NS}}\nolimits(M_\ensuremath{\mathcal C}(\ensuremath{\mathbf v}))$ of the
chamber $\ensuremath{\mathcal C}$ is exactly the ample cone of the corresponding moduli space
$M_\ensuremath{\mathcal C}(\ensuremath{\mathbf v})$.
\end{Prop}
\begin{proof}
In light of Theorems \ref{thm:ampleness} and \ref{thm:MAP}, \eqref{enum:piecewise}, \eqref{enum:imagemovable}, \eqref{enum:allMMP}, the only potential problem is given
by walls $\ensuremath{\mathcal W} \subset \partial \ensuremath{\mathcal C}$ that do not get mapped to walls of the nef cone of
the moduli space. These are totally semistable fake walls induced by an effective
spherical class $\ensuremath{\mathbf s} \in \ensuremath{\mathcal H}_\ensuremath{\mathcal W}$ with $(\ensuremath{\mathbf s}, \ensuremath{\mathbf v}) < 0$. The idea is that there is always a potential
wall $\ensuremath{\mathcal W}'$, with the same lattice $\ensuremath{\mathcal H}_{\ensuremath{\mathcal W}'} = \ensuremath{\mathcal H}_\ensuremath{\mathcal W}$, for which all effective spherical classes have
positive pairing with $\ensuremath{\mathbf v}$. By Theorem \ref{thm:walls}, $\ensuremath{\mathcal W}'$ is not a wall, and it will have
the same image in the nef cone of $M_\ensuremath{\mathcal C}(\ensuremath{\mathbf v})$ as the wall $\ensuremath{\mathcal W}$.
Let $\sigma_0 = (Z_0, \ensuremath{\mathcal P}_0) \in \ensuremath{\mathcal W}$ be a very general stability condition on the given wall:
this means we can assume that $\ensuremath{\mathcal H}_\ensuremath{\mathcal W}$ contains all integral classes $\ensuremath{\mathbf a} \in H^*_\mathrm{alg}(X,\alpha,\ensuremath{\mathbb{Z}})$
with $\Im Z_0(\ensuremath{\mathbf a}) = 0$. If we
write $Z_0(\underline{\hphantom{A}}) = (\Omega_0, \underline{\hphantom{A}})$ as in Theorem \ref{thm:Bridgeland_coveringmap}, we may
assume that $\Omega_0$ is normalized by
$(\Omega_0, \ensuremath{\mathbf v}) = -1$ and $\Omega_0^2 = 0$, i.e., $(\Re \Omega_0, \Im \Omega_0) = 0$
and $(\Re \Omega_0)^2 = (\Im \Omega_0)^2$ (see \cite[Section 10]{Bridgeland:K3}). We will now
replace $\sigma_0$ by a stability condition whose central charge has real part given by
$(-\ensuremath{\mathbf v}, \underline{\hphantom{A}})$, and identical imaginary part.
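The equivalence of the normalization $\Omega_0^2 = 0$ with the two stated real conditions follows by expanding
\[
\Omega_0^2 = (\Re \Omega_0)^2 - (\Im \Omega_0)^2 + 2\sqrt{-1}\,(\Re \Omega_0, \Im \Omega_0)
\]
and taking real and imaginary parts.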
To this end, let $\sigma_1 \in \ensuremath{\mathcal C}$ be a stability condition nearby $\sigma_0$, whose central charge
is defined by $\Omega_1 = \Omega_0 +i\epsilon$, where $\epsilon \in H^*_\mathrm{alg}(X,\alpha,\ensuremath{\mathbb{Z}})\otimes \ensuremath{\mathbb{R}}$
is a sufficiently small vector with $(\epsilon, \ensuremath{\mathbf v}) = 0$; we may also assume that multiples
of $\ensuremath{\mathbf v}$ are the only integral classes $\ensuremath{\mathbf a} \in H^*_\mathrm{alg}(X,\alpha,\ensuremath{\mathbb{Z}})$ with $(\Im \Omega_1, \ensuremath{\mathbf a}) = 0$. Let
$\Omega_2 = -\ensuremath{\mathbf v} + i \Im \Omega_1$; then a straightforward computation shows that the straight path
connecting $\Omega_1$ with $\Omega_2$ lies completely within $\ensuremath{\mathcal P}_0^+(X,\alpha)$. Finally,
let $\Omega_3 = -\ensuremath{\mathbf v} + \Im \Omega_0$; by Theorem \ref{thm:walls}, there are no spherical classes
$\tilde \ensuremath{\mathbf s} \in \ensuremath{\mathcal H}_\ensuremath{\mathcal W}$ with $(\ensuremath{\mathbf v}, \tilde \ensuremath{\mathbf s}) = 0$, implying that the straight path from $\Omega_2$ to
$\Omega_3$ is also contained in $\ensuremath{\mathcal P}_0^+(X,\alpha)$.
By Theorem \ref{thm:Bridgeland_coveringmap}, there is a lift of the path
$\Omega_0 \mapsto \Omega_1 \mapsto \Omega_2 \mapsto \Omega_3$ to $\mathop{\mathrm{Stab}}\nolimits^\dag(X,\alpha)$; let $\sigma_2$ and $\sigma_3$
be the stability conditions corresponding to $\Omega_2$ and $\Omega_3$, respectively.
By choice of $\epsilon$, we may assume that the paths $\sigma_0 \mapsto \sigma_1$ and
$\sigma_2 \mapsto \sigma_3$ do not cross any walls. Since $(\Omega_1, \ensuremath{\mathbf v}) = (\Omega_2, \ensuremath{\mathbf v}) = -1$,
and since the imaginary part on the path $\Omega_1 \mapsto \Omega_2$ is constant, the same holds for
the path $\sigma_1 \mapsto \sigma_2$. Hence $\sigma_3$ is in the closure of the chamber
$\ensuremath{\mathcal C}$. In particular, $\sigma_3$ lies on a potential wall of $\ensuremath{\mathcal C}$ with hyperbolic lattice given by
$\ensuremath{\mathcal H}_\ensuremath{\mathcal W}$; by construction, any spherical class $\ensuremath{\mathbf s} \in \ensuremath{\mathcal H}_\ensuremath{\mathcal W}$ with $(\ensuremath{\mathbf v}, \ensuremath{\mathbf s}) < 0$ satisfies
$(\Omega_3, \ensuremath{\mathbf s}) > 0$, and thus $\ensuremath{\mathbf s}$ is not effective.
By Theorem \ref{thm:walls}, $\sigma_3$ does not lie on a wall. Since $\Im \Omega_3 = \Im \Omega_0$,
the images $\ell_\ensuremath{\mathcal C}(\sigma_0) = \ell_\ensuremath{\mathcal C}(\sigma_3)$ in the N\'eron-Severi group of $M_\ensuremath{\mathcal C}(\ensuremath{\mathbf v})$ agree.
\end{proof}
We conclude this section by proving Corollary \ref{cor:Mukaifull} for moduli spaces of
Bridgeland stable objects on twisted K3 surfaces:
\begin{proof}[Proof of Corollary \ref{cor:Mukaifull}]
$\eqref{enum:Markmancrit} \Rightarrow \eqref{enum:equivnumerical}$:
Let $\phi \colon H^*(X, \alpha, \ensuremath{\mathbb{Z}}) \to H^*(X', \alpha', \ensuremath{\mathbb{Z}})$ be a Hodge isometry sending $\ensuremath{\mathbf v}^{\perp, \mathop{\mathrm{tr}}\nolimits} \to
\ensuremath{\mathbf v}'^{\perp, \mathop{\mathrm{tr}}\nolimits}$. Up to composing with $[1]$, we may assume $\phi(\ensuremath{\mathbf v}) = \ensuremath{\mathbf v}'$.
If $\phi$ is orientation-preserving, then Theorem \ref{thm:derivedtorelli} gives an equivalence $\Phi$ with
$\Phi_* = \phi$.
Otherwise, the composition $(\underline{\hphantom{A}})^\vee \circ \phi$ defines an orientation-preserving Hodge
isometry
\[ \phi^\vee \colon H^*(X, \alpha, \ensuremath{\mathbb{Z}}) \to H^*(X', (\alpha')^{-1}, \ensuremath{\mathbb{Z}}) \quad
\text{with} \quad \phi^\vee(\ensuremath{\mathbf v}) = (\ensuremath{\mathbf v}')^\vee. \]
Again, Theorem \ref{thm:derivedtorelli} gives a derived equivalence
$\Psi \colon \mathrm{D}^{b}(X, \alpha) \to \mathrm{D}^{b}(X', (\alpha')^{-1})$; the composition with
$(\underline{\hphantom{A}})^\vee \colon \mathrm{D}^{b}(X', (\alpha')^{-1}) \to \mathrm{D}^{b}(X', \alpha')$ has the desired property.
$\eqref{enum:equivnumerical} \Rightarrow \eqref{enum:equivbirational}$:
Assume that $\Phi\colon \mathrm{D}^{b}(X,\alpha)\xrightarrow{\simeq}\mathrm{D}^{b}(X',\alpha')$ is an (anti-)equivalence
with $\Phi_*(\ensuremath{\mathbf v})=\ensuremath{\mathbf v}'$.
Consider moduli spaces $M_{\sigma}(\ensuremath{\mathbf v}), M_{\sigma'}(\ensuremath{\mathbf v}')$ of Bridgeland-stable objects.
We claim that we can assume the existence of
$\tau \in \mathop{\mathrm{Stab}}\nolimits^\dag(X',\alpha')$ such that an object $E \in \mathrm{D}^{b}(X, \alpha)$ of class $\ensuremath{\mathbf v}$ is
$\sigma$-stable if and only if $\Phi(E)$ is $\tau$-stable:
\begin{itemize*}
\item if $\Phi_*$ is orientation-preserving,
we may replace $\Phi$ by an equivalence satisfying the last claim of Theorem \ref{thm:derivedtorelli}, and set
$\tau = \Phi_*(\sigma) \in \mathop{\mathrm{Stab}}\nolimits^\dag(X', \alpha')$;
\item
otherwise, we may assume that
\[ (\underline{\hphantom{A}})^\vee \circ \Phi \colon \mathrm{D}^{b}(X, \alpha) \to \mathrm{D}^{b}(X', (\alpha')^{-1})
\]
satisfies the same claim, and we let
$\tau \in \mathop{\mathrm{Stab}}\nolimits^\dag(X', \alpha')$ be the stability condition dual
(in the sense of Proposition \ref{prop:dualstability}) to
$\left((\underline{\hphantom{A}})^\vee \circ \Phi\right)_* \sigma \in \mathop{\mathrm{Stab}}\nolimits^\dag(X',(\alpha')^{-1})$.
\end{itemize*}
By construction, $\Phi$ induces an isomorphism
\[ M_\sigma(\ensuremath{\mathbf v}) \cong M_{\tau}(\Phi_*(\ensuremath{\mathbf v})) = M_{\tau}(\ensuremath{\mathbf v}'). \]
Due to Theorem \ref{thm:birational-WC}, part \eqref{enum:birationalautoequivalence}, there is
an (anti-)autoequivalence $\Phi'$ of $\mathrm{D}^{b}(X',\alpha')$ inducing a birational map
$M_{\tau}(\ensuremath{\mathbf v}') \dashrightarrow M_{\sigma'}(\ensuremath{\mathbf v}')$. The composition
$\Phi' \circ \Phi$ has the desired properties.
\end{proof}
\section{Application 1: Lagrangian fibrations}\label{sec:Application1}
In this section, we will explain how birationality of wall-crossing implies
Theorem \ref{thm:SYZ}, verifying the Lagrangian fibration conjecture.
We will prove the theorem for any moduli space $M_\sigma(\ensuremath{\mathbf v})$ of Bridgeland-stable objects on
a twisted K3 surface $(X, \alpha)$, under the assumptions that $\ensuremath{\mathbf v}$ is primitive and that $\sigma$ is generic with respect to $\ensuremath{\mathbf v}$.
One implication in Theorem \ref{thm:SYZ} is immediate: if $f \colon M_\sigma(\ensuremath{\mathbf v}) \dashrightarrow Z$ is
a rational abelian fibration, then the pull-back $f^*D$ of any ample divisor $D$ on $Z$ has volume
zero; by equation \eqref{eq:BBform}, the self-intersection of $f^*D$ with respect to the
Beauville-Bogomolov form must also equal zero.
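For reference, the relation between the volume and the Beauville-Bogomolov square used in this step is the Fujiki relation, which we record schematically (with $c_M > 0$ the Fujiki constant of $M$ and $2n = \dim M$; we take this to be the shape of the relation invoked via \eqref{eq:BBform}):

```latex
% Fujiki relation: for a hyperkaehler variety M of dimension 2n
% there is a constant c_M > 0 such that
\int_M D^{2n} \;=\; c_M \cdot q(D)^n
\qquad \text{for all } D \in H^2(M, \ensuremath{\mathbb{R}}).
% Hence vol(f^*D) = 0 forces q(f^*D)^n = 0, i.e. q(f^*D) = 0.
```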
To prove the converse, we will first restate carefully the argument establishing part \ref{enum:birational}
of Conjecture \ref{conj:SYZ}, which was already sketched in the introduction; then we will explain
how to extend the argument to also obtain part \ref{enum:nef}.
Assume that there is an integral divisor $D$ on $M_\sigma(\ensuremath{\mathbf v})$ with $q(D)=0$.
Applying the inverse of the Mukai morphism $\theta_{\sigma, \ensuremath{\mathbf v}}$ of Theorem \ref{thm:ModuliSpacesAreHK},
we obtain a primitive vector $\ensuremath{\mathbf w} = \theta_{\sigma, \ensuremath{\mathbf v}}^{-1}(D) \in \ensuremath{\mathbf v}^\perp$ with
$ \ensuremath{\mathbf w}^2 = 0.$
After a small deformation, we may assume that $\sigma$ is also generic with respect to $\ensuremath{\mathbf w}$.
As in Section \ref{sec:iso},
we consider the moduli space $Y:=M_\sigma(\ensuremath{\mathbf w})$ of $\sigma$-stable objects, which is a
smooth K3 surface. There is a derived equivalence
\begin{equation} \label{eq:edinburgh0903}
\Phi \colon \mathrm{D}^{b}(X,\alpha) \xrightarrow{\sim} \mathrm{D}^{b}(Y,\alpha')
\end{equation}
for the appropriate choice of a Brauer class $\alpha'\in\mathrm{Br}(Y)$; as before, we have $\Phi_*(\ensuremath{\mathbf w}) = (0, 0, 1)$.
By the arguments recalled in Theorem \ref{thm:derivedtorelli}, we have $\Phi_*(\sigma) \in \mathop{\mathrm{Stab}}\nolimits^\dag(Y, \alpha')$.
By definition, $\Phi$ induces an isomorphism
\begin{equation}\label{eq:comparison}
M_{\sigma}(\ensuremath{\mathbf v}) \cong M_{\Phi_*(\sigma)}(\Phi_*(\ensuremath{\mathbf v})),
\end{equation}
where $\Phi_*(\sigma)$ is generic with respect to $\Phi_*(\ensuremath{\mathbf v})$.
\begin{Lem}\label{lem:isotropic}
The Mukai vector $\Phi_*(\ensuremath{\mathbf v})$ has rank zero.
\end{Lem}
\begin{proof}
This follows directly from $\Phi_*(\ensuremath{\mathbf w})=(0,0,1)$ and $(\Phi_*(\ensuremath{\mathbf w}),\Phi_*(\ensuremath{\mathbf v})) = (\ensuremath{\mathbf w},\ensuremath{\mathbf v})=0$.
\end{proof}
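The rank computation in the lemma can be made concrete. On a K3 surface with $\mathrm{Pic} = \ensuremath{\mathbb{Z}} \cdot H$ and $H^2 = 2d$, pairing any Mukai vector against the skyscraper class $(0,0,1)$ reads off minus its rank. A minimal numerical sketch (the rank-one encoding, the helper name, and all sample values are illustrative, not from the text):

```python
# Mukai pairing on a K3 with Pic(X) = Z·H and H^2 = 2d, encoding the
# Mukai vector (r, m·H, s) as the integer triple (r, m, s):
# ((r, m, s), (r', m', s')) = 2d·m·m' - r·s' - r'·s.
def mukai_pairing(u, w, d):
    (r1, m1, s1), (r2, m2, s2) = u, w
    return 2 * d * m1 * m2 - r1 * s2 - r2 * s1

d = 2                        # illustrative choice: H^2 = 4
skyscraper = (0, 0, 1)       # the class Phi_*(w) of point sheaves
for v in [(2, 1, 1), (3, 0, -1), (0, 5, 7)]:
    # pairing against (0, 0, 1) reads off minus the rank:
    assert mukai_pairing(skyscraper, v, d) == -v[0]
```

So orthogonality to $\Phi_*(\ensuremath{\mathbf w}) = (0,0,1)$ forces the rank of $\Phi_*(\ensuremath{\mathbf v})$ to vanish, exactly as in the proof.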
We write $\Phi_*(\ensuremath{\mathbf v})=(0,C,s)$, with $C\in\mathrm{Pic}(Y)$ and $s\in\ensuremath{\mathbb{Z}}$.
Since $\ensuremath{\mathbf v}^2>0$, we have $C^2>0$.
\begin{Lem} \label{lem:Cisample}
After replacing $\Phi$ by the composition $\Psi \circ \Phi$, where
$\Psi \in \mathop{\mathrm{Aut}}\nolimits(\mathrm{D}^{b}(Y, \alpha'))$, we may assume that $C$ is ample, and that $s \neq 0$.
\end{Lem}
\begin{proof}
Up to shift $[1]$, we may assume that $H'.C > 0$, for a given ample class $H'$ on $Y$. In
particular, $C$ is an effective class; it is ample unless there is a rational $-2$-curve
$D \subset Y$ with $C. D < 0$. Applying the spherical twist $\mathop{\mathrm{ST}}\nolimits_{\ensuremath{\mathcal O}_D}$ at the structure
sheaf\footnote{Note that the restriction of $\alpha'$ to any curve vanishes, hence
the structure sheaf $\ensuremath{\mathcal O}_D$ is a coherent sheaf on $(Y, \alpha')$.} of $D$ replaces
$C$ by its image $C'$ under the reflection at $D$, which satisfies $C'.D > 0$. This procedure
terminates, as the nef cone is a fundamental domain of the Weyl group
action generated by reflections at $-2$-curves.
Since tensoring with an (untwisted) line bundle on $Y$ induces an autoequivalence
of $\mathrm{D}^{b}(Y, \alpha')$, we may also assume $s \neq 0$.
\end{proof}
Let $H'\in\mathrm{Amp}(Y)$ be a generic polarization with respect to $\Phi_*(\ensuremath{\mathbf v})$.
The following is a small (and well-known) generalization of Beauville's integrable system
\cite{Beauville:ACIS}:
\begin{Lem}\label{lem:fibration}
The moduli space $M_{H'}(\Phi_*(\ensuremath{\mathbf v}))$ admits a structure of Lagrangian fibration
induced by global sections of $\theta_{H', \Phi_*(\ensuremath{\mathbf v})}((0,0,-1))$.
\end{Lem}
\begin{proof}
Let $M':=M_{H'}(\Phi_*(\ensuremath{\mathbf v}))$ and $L' := \theta_{H', \Phi_*(\ensuremath{\mathbf v})}((0,0,-1))$.
By an argument of Faltings and Le Potier (see \cite[Section 1.3]{LePotier:StrangeDuality}), we can construct sections of $L'$ as follows:
for all $y\in Y$, we define a section $s_y\in H^0(M',L')$ by its zero-locus
\[
Z(s_y) := \left\{ E\in M'\colon \mathop{\mathrm{Hom}}\nolimits(E,k(y))\neq0 \right\}.
\]
Whenever $y$ is not in the support of $E$, the section $s_y$ does not vanish at $E$;
hence the sections $\{s_y\}_{y\in Y}$ generate $L'$. Consider the morphism induced by this linear
system. The image of $E$ is determined
by its set-theoretic support; hence the image of $M'$ is the complete
linear system of $C$. By Matsushita's theorem \cite{Matsushita:Fibrations, Matsushita:Addendum},
the map must be a Lagrangian fibration.
\end{proof}
By Remark \ref{rem:Giesekerchamber}, there exists a generic stability condition
$\sigma'\in\mathop{\mathrm{Stab}}\nolimits^\dagger(Y,\alpha')$ with the property that $M_{H'}(\Phi_*(\ensuremath{\mathbf v}))=M_{\sigma'}(\Phi_*(\ensuremath{\mathbf v}))$.
On the other hand, by the birationality of wall-crossing,
Theorem \ref{thm:birational-WC}, the moduli spaces
$M_{\sigma'}(\Phi_*(\ensuremath{\mathbf v}))$ and $M_{\Phi_*(\sigma)}(\Phi_*(\ensuremath{\mathbf v}))$ are birational; combined
with the identification \eqref{eq:comparison}, this shows that $M_{\sigma}(\ensuremath{\mathbf v})$ is birational
to a Lagrangian fibration.
It remains to prove part \ref{enum:nef}, so let us assume that $D$ is nef and primitive.
Using the Fourier-Mukai
transform $\Phi$ as above, and after replacing $\sigma$ by $\Phi_*(\sigma)$, we may also assume that
$\ensuremath{\mathbf v}$ has rank zero, and that $\ensuremath{\mathbf w} = \theta_{\sigma, \ensuremath{\mathbf v}}^{-1}(D)$ is the class of skyscraper sheaves of
points.
Now consider the autoequivalence $\Psi \in \mathop{\mathrm{Aut}}\nolimits \mathrm{D}^{b}(Y, \alpha')$ of Lemma \ref{lem:Cisample}. Except
for the possible shift $[1]$, each autoequivalence used in the construction of $\Psi$ leaves the
class $\ensuremath{\mathbf w}$ invariant. Thus, in the moduli space $M_{\Psi_*\sigma}(\Psi_*(\ensuremath{\mathbf v})) = M_\sigma(\ensuremath{\mathbf v})$, the
divisor class $D$ is still given, up to sign, by $D = \pm \theta_{\Psi_*\sigma, \Psi_*(\ensuremath{\mathbf v})}(\ensuremath{\mathbf w})$.
Let $f \colon M_\sigma(\ensuremath{\mathbf v}) \dashrightarrow M_H(\ensuremath{\mathbf v})$ be the birational map to the Gieseker moduli
space $M_H(\ensuremath{\mathbf v})$ of torsion sheaves induced by a sequence of wall-crossings as above. The
Lagrangian fibration $M_H(\ensuremath{\mathbf v}) \to \ensuremath{\mathbb{P}}^n$ is induced by the divisor $\theta_{H,\ensuremath{\mathbf v}}(-\ensuremath{\mathbf w})$. By Theorem
\ref{thm:ellandgroupaction}, the classes $f_*D$ and $\theta_{H,\ensuremath{\mathbf v}}(-\ensuremath{\mathbf w})$ are (up to sign) in the
same $W_{\mathrm{Exc}}$-orbit. Since they are both nef on a smooth K-trivial birational model, they
are also in the closure of the movable cone (and in particular, their orbits agree, not just up to sign).
By Proposition \ref{prop:Weylmovablecone}, the closure of the movable cone is the closure of the
fundamental chamber of the action of $W_{\mathrm{Exc}}$ on $\mathop{\mathrm{Pos}}(M)$, which intersects
every orbit exactly once.
Therefore, the classes $f_*D$ and $\theta_{H,\ensuremath{\mathbf v}}(-\ensuremath{\mathbf w})$ have to be equal.
Since $M_\sigma(\ensuremath{\mathbf v})$ and $M_H(\ensuremath{\mathbf v})$ are isomorphic in codimension two, the section rings of
$D$ and $f_*D$ agree. In particular, $D$ is effective with Iitaka dimension $\frac{\ensuremath{\mathbf v}^2+2}{2}$.
As explained in \cite[Section 4.1]{Sawon:AbelianFibred}, it follows from
\cite{Verbitsky:Cohomology} that
the numerical Iitaka dimension of $D$ is also equal to $\frac{\ensuremath{\mathbf v}^2+2}{2}$.
Since $D$ is nef by assumption, $D$ is semi-ample by Kawamata's Theorem (see \cite[Theorem 6.1]{Kawamata:Pluricanonical}
and \cite[Theorem 1.1]{Fujino:KawamataThm}), and thus induces a morphism to $\ensuremath{\mathbb{P}}^n$.
This completes the proof of Theorem \ref{thm:SYZ}.
\begin{Rem} \label{Rem:SYZcon_movable}
In fact, the above proof shows the following two additional statements:
\begin{enumerate}
\item If $D \in \mathop{\mathrm{NS}}\nolimits(M_\sigma(\ensuremath{\mathbf v}))$ with $q(D) = 0$ lies in the closure of the movable cone,
then there is a birational Lagrangian fibration induced by $D$. (In particular, $D$ is movable.)
\item Any $W_{\mathrm{Exc}}$-orbit of divisors $D$ on $M_{\sigma}(\ensuremath{\mathbf v})$ satisfying
$q(D) = 0$ contains exactly one movable divisor, which induces a birational Lagrangian fibration.
\end{enumerate}
\end{Rem}
\section{Application 2: Mori cone, nef cone, movable cone, effective cone}
\label{sec:cones}
Let $\ensuremath{\mathbf v}$ be a primitive vector with $\ensuremath{\mathbf v}^2 > 0$, let $\sigma$ be a generic stability condition
with respect to $\ensuremath{\mathbf v}$, and let $M := M_\sigma(\ensuremath{\mathbf v})$ be the moduli space of $\sigma$-semistable objects.
In this section, we will completely describe the cones associated to the birational geometry of $M$ in terms of the Mukai lattice of $X$.
Recall that $\overline{\mathop{\mathrm{Pos}}}(M) \subset \mathop{\mathrm{NS}}\nolimits(M)_\ensuremath{\mathbb{R}}$ denotes the (closed) cone of positive classes
defined by the Beauville-Bogomolov quadratic form.
Let $\overline{\mathop{\mathrm{Pos}}}(M)_\ensuremath{\mathbb{Q}} \subset \overline{\mathop{\mathrm{Pos}}}(M)$ be the subcone generated by all rational classes
in $\overline{\mathop{\mathrm{Pos}}}(M)$; it is the union of the interior
$\mathop{\mathrm{Pos}}(M)$ with all rational rays in the boundary $\partial \mathop{\mathrm{Pos}}(M)$. We fix an ample divisor class
$A$ on $M$ (which can be obtained from Theorem \ref{thm:ampleness}).
In the following theorems, we will say that a subcone of $\overline{\mathop{\mathrm{Pos}}}(M)_\ensuremath{\mathbb{Q}}$ (or of its closure) is ``cut
out'' by a collection of linear subspaces if it is one of the closed chambers of the wall-and-chamber
decomposition of $\overline{\mathop{\mathrm{Pos}}}(M)_\ensuremath{\mathbb{Q}}$ whose walls are the given collection of subspaces. This is easily
translated into a more explicit statement as in the formulation of Theorem \ref{thm:nefcone} given
in the introduction.
\begin{Thm} \label{thm:nefcone}
The nef cone of $M$ is cut out in $\overline{\mathop{\mathrm{Pos}}}(M)$ by all linear subspaces of the form
$\theta(\ensuremath{\mathbf v}^\perp \cap \ensuremath{\mathbf a}^\perp)$, for all classes $\ensuremath{\mathbf a} \in H^*_\mathrm{alg}(X,\alpha,\ensuremath{\mathbb{Z}})$ satisfying $\ensuremath{\mathbf a}^2 \ge -2$
and $0 \le (\ensuremath{\mathbf v}, \ensuremath{\mathbf a}) \le \frac{\ensuremath{\mathbf v}^2}2$.
\end{Thm}
Via the Beauville-Bogomolov form we can identify the group $N_1(M)$ of curves up to
numerical equivalence with a lattice in the N\'eron-Severi group:
$N_1(M)_\ensuremath{\mathbb{Q}} \cong \left(N^1(M)_\ensuremath{\mathbb{Q}}\right)^\vee \cong N^1(M)_\ensuremath{\mathbb{Q}}$. In particular, we get an
induced rational pairing on $N_1(M)$; we then say that the \emph{cone of positive curves}
is the cone of classes $[C] \in N_1(M)_\ensuremath{\mathbb{R}}$ with $(C, C) > 0$ and $C.A > 0$.
Also, we obtain a dual Mukai isomorphism
\begin{equation} \label{eq:dualMukai}
\theta^\vee \colon H^*_\mathrm{alg}(X,\alpha,\ensuremath{\mathbb{Z}})/\ensuremath{\mathbf v} \otimes \ensuremath{\mathbb{Q}} \to N_1(M)_\ensuremath{\mathbb{Q}}.
\end{equation}
As the dual statement to Theorem \ref{thm:nefcone}, we obtain:
\begin{Thm} \label{thm:Moricone}
The Mori cone of curves in $M$ is generated by the cone of positive curves, and by all curve classes
$\theta^\vee(\ensuremath{\mathbf a})$, for all $\ensuremath{\mathbf a} \in H^*_\mathrm{alg}(X,\alpha,\ensuremath{\mathbb{Z}}), \ensuremath{\mathbf a}^2 \ge -2$ satisfying
$\abs{(\ensuremath{\mathbf v}, \ensuremath{\mathbf a})} \le \frac{\ensuremath{\mathbf v}^2}2$ and $\theta^\vee(\ensuremath{\mathbf a}).A > 0$.
\end{Thm}
Some of these classes $\ensuremath{\mathbf a}$ may not define a wall bordering the nef cone; in this case,
$\theta^\vee(\ensuremath{\mathbf a})$ is in the interior of the Mori cone (as it intersects every nef divisor
positively).
\begin{Thm} \label{thm:movable}
The movable cone of $M$ is cut out in $\overline{\mathop{\mathrm{Pos}}}(M)_\ensuremath{\mathbb{Q}}$ by the following two types of walls:
\begin{enumerate}
\item $\theta(\ensuremath{\mathbf s}^\perp \cap \ensuremath{\mathbf v}^\perp)$ for every spherical class $\ensuremath{\mathbf s} \in \ensuremath{\mathbf v}^\perp$.
\item $\theta(\ensuremath{\mathbf w}^\perp \cap \ensuremath{\mathbf v}^\perp)$ for every isotropic class $\ensuremath{\mathbf w} \in H^*_\mathrm{alg}(X,\alpha,\ensuremath{\mathbb{Z}})$ with $1 \le (\ensuremath{\mathbf w}, \ensuremath{\mathbf v}) \le 2$.
\end{enumerate}
\end{Thm}
\begin{Thm} \label{thm:effective}
The effective cone of $M$ is generated by $\overline{\mathop{\mathrm{Pos}}}(M)_\ensuremath{\mathbb{Q}}$ along with the following
exceptional divisors:
\begin{enumerate}
\item $D:= \theta(\ensuremath{\mathbf s})$ for every spherical class $\ensuremath{\mathbf s} \in \ensuremath{\mathbf v}^\perp$ with
$(D, A) > 0$, and
\item $D:= \theta(\ensuremath{\mathbf v}^2 \cdot \ensuremath{\mathbf w} - (\ensuremath{\mathbf v},\ensuremath{\mathbf w}) \cdot \ensuremath{\mathbf v})$ for every isotropic class $\ensuremath{\mathbf w} \in H^*_\mathrm{alg}(X,\alpha,\ensuremath{\mathbb{Z}})$
with $1 \le (\ensuremath{\mathbf w}, \ensuremath{\mathbf v}) \le 2$ and $(D, A) > 0$.
\end{enumerate}
\end{Thm}
Note that only those classes $D$ whose orthogonal complement $D^\perp$ is a wall of the movable cone
will correspond to irreducible exceptional divisors.
The movable cone has essentially been described by Markman for any hyperk\"ahler variety; more
precisely, \cite[Lemma 6.22]{Eyal:survey} gives the intersection of the movable cone with the
strictly positive cone $\mathop{\mathrm{Pos}}(M)$. While our methods give an alternative proof, the only new statement of
Theorem \ref{thm:movable} concerns rational classes $D$ with $D^2 = 0$ in the closure of the movable
cone; such a $D$ is movable due to our proof of the Lagrangian fibration conjecture in Theorem
\ref{thm:SYZ}.
Using the divisorial Zariski decomposition of \cite{Boucksom:Zariskidecomposition}, one can show
for any hyperk\"ahler variety that the pseudo-effective cone is dual to the closure of the movable
cone. In particular, Theorem \ref{thm:effective} could also be deduced from Markman's results and
Theorem \ref{thm:SYZ}.
\begin{proof}[Proof of Theorem \ref{thm:nefcone}]
Let $\ensuremath{\mathcal C}$ be the chamber of $\mathop{\mathrm{Stab}}\nolimits^\dag(X,\alpha)$ containing $\sigma$. By Theorem \ref{thm:MAP}, the boundary
of the ample cone inside the positive cone is equal to the union of the images $\ell(\ensuremath{\mathcal W})$, for all
walls $\ensuremath{\mathcal W}$ in the boundary of $\ensuremath{\mathcal C}$ that induce a non-trivial contraction morphism. (These are
walls that are not ``fake walls'' in the sense of Definition \ref{def:TypeOfWalls}.) Theorem
\ref{thm:walls} characterizes hyperbolic lattices corresponding to such walls.
For any such hyperbolic lattice $\ensuremath{\mathcal H}$, we get a class $\ensuremath{\mathbf a}$ as in Theorem \ref{thm:nefcone} as follows:
\begin{itemize}
\item
in the cases \eqref{enum:niso-divisorial} of divisorial contractions, we let $\ensuremath{\mathbf a}$ be the corresponding spherical or
isotropic class;
\item in the subcase of \eqref{enum:niso-flop} of a flopping contraction induced by a spherical class
$\ensuremath{\mathbf s}$, we also set $\ensuremath{\mathbf a} = \ensuremath{\mathbf s}$;
\item and in the subcase of \eqref{enum:niso-flop} of a flopping contraction induced by a
sum $\ensuremath{\mathbf v} = \ensuremath{\mathbf a} + \ensuremath{\mathbf b}$, we may assume $(\ensuremath{\mathbf v}, \ensuremath{\mathbf a}) \le (\ensuremath{\mathbf v}, \ensuremath{\mathbf b})$, which is equivalent
to $(\ensuremath{\mathbf v}, \ensuremath{\mathbf a}) \le \frac{\ensuremath{\mathbf v}^2}2$.
\end{itemize}
Stability conditions $\sigma = (Z, \ensuremath{\mathcal A})$ in the corresponding wall $\ensuremath{\mathcal W}$ satisfy
$\Im \frac{Z(\ensuremath{\mathbf a})}{Z(\ensuremath{\mathbf v})} = 0$, or, equivalently, $\ell(\sigma) \in \theta(\ensuremath{\mathbf v}^\perp \cap \ensuremath{\mathbf a}^\perp)$.
Conversely, given $\ensuremath{\mathbf a}$, we obtain a rank two lattice $\ensuremath{\mathcal H} := \langle \ensuremath{\mathbf v}, \ensuremath{\mathbf a} \rangle$. If $\ensuremath{\mathcal H}$ is
hyperbolic, then it is straightforward to check that it conversely induces one of the walls listed
in Theorem \ref{thm:walls}. Otherwise, $\ensuremath{\mathcal H}$ is positive-semidefinite. Then the orthogonal complement
$\ensuremath{\mathcal H}^\perp = \ensuremath{\mathbf v}^\perp \cap \ensuremath{\mathbf a}^\perp$ does not contain any positive classes, and thus its image
under $\theta$ in $\mathop{\mathrm{NS}}\nolimits(M)$ does not intersect the positive cone and can be ignored.
\end{proof}
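The dichotomy used at the end of the proof is a determinant condition on the Gram matrix of $\ensuremath{\mathcal H} = \langle \ensuremath{\mathbf v}, \ensuremath{\mathbf a} \rangle$: since $\ensuremath{\mathbf v}^2 > 0$, the lattice is hyperbolic exactly when $\ensuremath{\mathbf v}^2 \ensuremath{\mathbf a}^2 - (\ensuremath{\mathbf v}, \ensuremath{\mathbf a})^2 < 0$, and positive-semidefinite otherwise. A one-line numerical sketch (function name and sample values are illustrative):

```python
# The rank-two lattice H = <v, a> has Gram matrix [[v2, pva], [pva, a2]],
# where v2 = v^2 > 0, a2 = a^2, pva = (v, a).  Since v2 > 0, H is
# hyperbolic (signature (1,1)) iff det = v2*a2 - pva^2 < 0; otherwise it
# is positive-semidefinite and its orthogonal complement contains no
# positive classes.
def is_hyperbolic(v2, a2, pva):
    assert v2 > 0
    return v2 * a2 - pva ** 2 < 0

assert is_hyperbolic(2, -2, 0)       # spherical class orthogonal to v
assert is_hyperbolic(2, 0, 1)        # isotropic class with (v, a) = 1
assert not is_hyperbolic(2, 0, 0)    # isotropic, orthogonal: semidefinite
```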
\begin{proof}[Proof of Theorem \ref{thm:movable}.]
As already discussed in Section \ref{sec:MainThms}, the intersection
$\mathop{\mathrm{Mov}}\nolimits(M) \cap \mathop{\mathrm{Pos}}(M)$ follows directly from Theorem \ref{thm:MAP}; the statement of
Theorem \ref{thm:movable} is just an explicit description of the exceptional chamber
of the Weyl group action.
A movable class $D$ in the boundary of the positive cone, with $(D, D) = 0$,
automatically has to be rational. Conversely, by our proof of Theorem \ref{thm:SYZ}, if
we have a rational divisor with $(D, D) = 0$ that is in the closure of the movable cone, then
there is a Lagrangian fibration induced by $D$ on a smooth birational model of $M$;
in particular, $D$ is movable.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:effective}.]
We first claim that the class of an irreducible exceptional divisor is (up to a multiple) of the form described in the Theorem. For the
Brill-Noether case, this was proved in Proposition \ref{prop:sphericaldivisorialwall}. In the Hilbert-Chow and
Li-Gieseker-Uhlenbeck case, the class of the divisor of non-locally free sheaves can be computed
explicitly; alternatively, it is enough to observe that $\theta^{-1}(D)$ has to be a multiple of the orthogonal
projection of $\ensuremath{\mathbf w}$ to $\ensuremath{\mathbf v}^\perp$.
If $D$ is an arbitrary effective divisor, then $D$ can be written
as $D = A + E$ with $A$ movable and $E$ exceptional, see \cite[Section
4]{Boucksom:Zariskidecomposition}, \cite[Theorem 5.8]{Eyal:survey}.
The class of $A$ is a rational point of $\overline{\mathop{\mathrm{Pos}}}(M)$. Thus the effective cone is
contained in the cone described in the Theorem.
For the converse, first recall that $\mathop{\mathrm{Pos}}(M) \subset \mathop{\mathrm{Eff}}(M)$. Now consider a rational divisor $D$ with $D^2
= 0$. If $(D, E) < 0$ for some exceptional divisor $E$, then $D$ can be written as
the sum $D = \epsilon E + (D - \epsilon E)$ with $D - \epsilon E \in \mathop{\mathrm{Pos}}(M)$ for small $\epsilon > 0$; thus $D$ is in the effective cone. Otherwise
$(D, E) \ge 0$ for every exceptional divisor $E$. By Proposition \ref{prop:Weylmovablecone},
$D$ is in the closure of the movable cone; by Theorem \ref{thm:SYZ} and Remark
\ref{Rem:SYZcon_movable}, a multiple of $D$ induces a birational Lagrangian fibration, so $D$ is effective.
Finally, when $D$ is one of the classes listed explicitly in the Theorem, consider the orthogonal
complement $D^\perp$. If it does not intersect the movable cone in a face or in the interior, then the inequality
$(D, \underline{\hphantom{A}}) \ge 0$ is implied by the inequality $(E, \underline{\hphantom{A}}) \ge 0$ for all irreducible exceptional
divisors; hence $D$ is a positive linear combination of such divisors. Since the wall $D^\perp$ is
identical to one of the walls listed in Theorem \ref{thm:movable}, the only other possibility is
that $D^\perp$ defines a wall of the movable cone. The corresponding exceptional divisor is
proportional to $D$.
\end{proof}
\subsection*{Relation to Hassett-Tschinkel's conjecture on the Mori cone}
Hassett and Tschinkel gave a conjectural description of the nef and Mori cones via intersection
numbers of extremal rays in \cite{HassettTschinkel:ExtremalRays}. While their conjecture turned out
to be incorrect (see \cite[Remark 10.4]{BM:projectivity} and \cite[Remark 8.10]{KnutsenCiliberto}),
we will now explain that it is in fact very closely related to Theorem \ref{thm:Moricone}.
We first recall their conjecture. Via the identification
$N_1(M)_\ensuremath{\mathbb{Q}} \cong N^1(M)_\ensuremath{\mathbb{Q}}$ explained above, the Beauville-Bogomolov form extends to
a $\ensuremath{\mathbb{Q}}$-valued quadratic form on $N_1(M)$; we will also
denote it by $q(\underline{\hphantom{A}})$.
The following lemma follows immediately from this definition, and the definition
of $\theta^\vee$:
\begin{Lem} \label{lem:thetaveequadratic}
Consider the isomorphism $\ensuremath{\mathbf v}^\perp_\ensuremath{\mathbb{Q}} \cong N_1(M)_\ensuremath{\mathbb{Q}}$ induced by the dual Mukai morphism
$\theta^\vee$ of \eqref{eq:dualMukai}. This isomorphism respects the quadratic form on
either side.
\end{Lem}
Let $2n$ be the dimension of $M$, and as above let $A$ be an ample divisor.
Let $C \subset N_1(M)_\ensuremath{\mathbb{R}}$ be the cone generated by all integral curve classes
$R \in N_1(M)_\ensuremath{\mathbb{Z}}$ that satisfy $q(R) \ge -\frac{n+3}2$ and $R.A > 0$.
In \cite[Conjecture 1.2]{HassettTschinkel:ExtremalRays}, the authors conjectured that for
any hyperk\"ahler variety $M$ deformation equivalent to the Hilbert scheme of a K3
surface, the cone $C$ is equal to the Mori cone.
Our first observation shows that the Mori cone is contained in $C$:
\begin{Prop}\label{prop:RelationHT}
Let $R$ be the generator of an extremal ray of the Mori cone of $M$. Then
$(R, R) \ge -\frac{n+3}2$.
\end{Prop}
\begin{proof}
It is enough to prove the inequality for some effective curve on the extremal ray.
Let $\ensuremath{\mathcal W}$ be a wall inducing the extremal contraction corresponding to the ray generated by $R$, and
$\ensuremath{\mathcal H}_\ensuremath{\mathcal W} \subset H^*_\mathrm{alg}(X, \alpha, \ensuremath{\mathbb{Z}})$ its associated hyperbolic lattice. Let $\sigma_+$ be a nearby stability
condition in the chamber of $\sigma$, and $\sigma_0 \in \ensuremath{\mathcal W}$. Let $\ensuremath{\mathbf a} \in \ensuremath{\mathcal H}_\ensuremath{\mathcal W}$ be a
corresponding class satisfying the assumptions in Theorem \ref{thm:Moricone}: $\ensuremath{\mathbf a}^2 \ge -2$ and
$\abs{(\ensuremath{\mathbf v}, \ensuremath{\mathbf a})} \le \frac{\ensuremath{\mathbf v}^2}2$. Replacing $\ensuremath{\mathbf a}$ by $-\ensuremath{\mathbf a}$ if necessary, we can also
assume $(\ensuremath{\mathbf v}, \ensuremath{\mathbf a}) \ge 0$.
We first claim that there exists a contracted curve whose integral class is given by
$\pm\theta^\vee(\ensuremath{\mathbf a})$. We ignore the well-known case of the Hilbert-Chow contraction, and for
simplicity we also assume that $\ensuremath{\mathcal W}$ is not a totally semistable wall for any
class in $\ensuremath{\mathcal H}_\ensuremath{\mathcal W}$; the general case can be reduced to this one with the same methods as in the
previous sections. By assumption, we have both $\ensuremath{\mathbf a}^2 \ge -2$ and $(\ensuremath{\mathbf v}-\ensuremath{\mathbf a})^2 \ge -2$; therefore,
we can choose $\sigma_0$-\emph{stable} objects $A$ and $B$ of class $\ensuremath{\mathbf a}$ and $\ensuremath{\mathbf v} - \ensuremath{\mathbf a}$,
respectively. We further claim $(\ensuremath{\mathbf a}, \ensuremath{\mathbf v}) \ge 2 + \ensuremath{\mathbf a}^2$:
this claim is trivial when $\ensuremath{\mathbf a}^2 = -2$, amounts to the exclusion of the Hilbert-Chow case
when $\ensuremath{\mathbf a}^2 = 0$, and in case $\ensuremath{\mathbf a}^2 > 0$ it follows from the signature
of $\ensuremath{\mathcal H}_\ensuremath{\mathcal W}$ and the assumption on $(\ensuremath{\mathbf a}, \ensuremath{\mathbf v})$:
\[
(\ensuremath{\mathbf a}, \ensuremath{\mathbf v})^2 > \ensuremath{\mathbf a}^2 \ensuremath{\mathbf v}^2 \ge 2\ensuremath{\mathbf a}^2 (\ensuremath{\mathbf a},\ensuremath{\mathbf v}) \ge (\ensuremath{\mathbf a}^2 + 2) (\ensuremath{\mathbf a}, \ensuremath{\mathbf v}).
\]
Assume that $\phi^+(\ensuremath{\mathbf a}) < \phi^+(\ensuremath{\mathbf v}) < \phi^+(\ensuremath{\mathbf v}-\ensuremath{\mathbf a})$; in the opposite case we
switch the roles of $A$ and $B$. By the above claim, $\mathop{\mathrm{ext}}\nolimits^1(B, A) = (\ensuremath{\mathbf a}, \ensuremath{\mathbf v}-\ensuremath{\mathbf a}) \ge 2$.
Varying the extension class in $\mathop{\mathrm{Ext}}\nolimits^1(B, A)$ produces curves of objects in
$M_{\sigma_+}(\ensuremath{\mathbf v})$ that are S-equivalent with respect to $\sigma_0$;
in order to compute their class, we have to make the construction explicit.
Let $\ensuremath{\mathbb{P}}(\mathop{\mathrm{Ext}}\nolimits^1(B, A))$ be the projective space of one-dimensional subspaces of $\mathop{\mathrm{Ext}}\nolimits^1(B,A)$.
Choose a parameterized line $\ensuremath{\mathbb{P}}^1 \ensuremath{\hookrightarrow} \ensuremath{\mathbb{P}}(\mathop{\mathrm{Ext}}\nolimits^1(B, A))$, corresponding to a section $\nu$ of
\[
H^0(\ensuremath{\mathbb{P}}^1, \ensuremath{\mathcal O}(1) \otimes \mathop{\mathrm{Ext}}\nolimits^1(B,A))
= \mathop{\mathrm{Ext}}\nolimits^1_{\ensuremath{\mathbb{P}}^1 \times X}(\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1} \boxtimes B, \ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(1) \boxtimes A).
\]
Let $\ensuremath{\mathcal E} \in \mathrm{D}^{b}(\ensuremath{\mathbb{P}}^1 \times X)$ be the extension
$\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(1) \boxtimes A \to \ensuremath{\mathcal E} \to \ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1} \boxtimes B $
given by $\nu$. By Lemma \ref{lem:stableextension}, every fiber of $\ensuremath{\mathcal E}$ is
$\sigma_+$-stable. Thus we have produced a rational curve $R \subset M_{\sigma_+}(\ensuremath{\mathbf v})$
of S-equivalent objects.
To compute its class, it is sufficient to compute the intersection product
$\theta(D).R$ with a divisor $\theta(D)$, for any $D \in \ensuremath{\mathbf v}^\perp$. We have
\[
\theta(D).R = (D, \ensuremath{\mathbf v}(\Phi(\ensuremath{\mathcal O}_R))) = (D, \ensuremath{\mathbf v}(B) + 2 \ensuremath{\mathbf v}(A)) = (D, \ensuremath{\mathbf v} + \ensuremath{\mathbf a}) = (D, \ensuremath{\mathbf a}) =
\theta(D).\theta^\vee(\ensuremath{\mathbf a}),
\]
where $\Phi \colon \mathrm{D}^{b}(M_{\sigma_+}(\ensuremath{\mathbf v})) \to \mathrm{D}^{b}(X)$ denotes the Fourier-Mukai transform,
and where we used $D \in \ensuremath{\mathbf v}^\perp$ in the second-to-last equality.
Let $\ensuremath{\mathbf a}_0 \in \ensuremath{\mathbf v}^\perp_\ensuremath{\mathbb{Q}}$ denote the projection of $\ensuremath{\mathbf a}$ to the orthogonal complement of $\ensuremath{\mathbf v}$.
By Lemma \ref{lem:thetaveequadratic}, we have $(R, R) = \ensuremath{\mathbf a}_0^2$, and for the latter we
obtain:
\[
(\ensuremath{\mathbf a}_0, \ensuremath{\mathbf a}_0) = \left( \ensuremath{\mathbf a} - \frac{(\ensuremath{\mathbf v}, \ensuremath{\mathbf a})}{\ensuremath{\mathbf v}^2} \ensuremath{\mathbf v}, \ensuremath{\mathbf a} - \frac{(\ensuremath{\mathbf v}, \ensuremath{\mathbf a})}{\ensuremath{\mathbf v}^2} \ensuremath{\mathbf v}\right)
= \ensuremath{\mathbf a}^2 - \frac{(\ensuremath{\mathbf v}, \ensuremath{\mathbf a})^2}{\ensuremath{\mathbf v}^2} \ge -2 - \frac{\ensuremath{\mathbf v}^2}4 = - \frac{n+3}2.
\]
\end{proof}
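The final bound in the proof can be verified numerically over the ranges allowed by the Mori cone theorem. A brute-force sketch (the loop ranges are illustrative; `pva` stands for the pairing $(\ensuremath{\mathbf v}, \ensuremath{\mathbf a})$):

```python
from fractions import Fraction

# Check:  a0^2 = a^2 - (v,a)^2 / v^2  >=  -2 - v^2/4  =  -(n+3)/2,
# where v^2 = 2n - 2, over the ranges of the Mori cone theorem:
# a^2 even with a^2 >= -2, and 0 <= (v,a) <= v^2 / 2.
for n in range(2, 30):
    v2 = 2 * n - 2
    for a2 in range(-2, 40, 2):
        for pva in range(0, v2 // 2 + 1):     # pva = (v, a)
            a0_sq = a2 - Fraction(pva ** 2, v2)
            assert a0_sq >= Fraction(-(n + 3), 2)
```

Equality is attained exactly for $\ensuremath{\mathbf a}^2 = -2$ and $(\ensuremath{\mathbf v}, \ensuremath{\mathbf a}) = n - 1$, matching the extremal case of the inequality.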
\begin{Rem}\label{rmk:RelationHT}
When $M$ is the Hilbert scheme of points on $X$, we can make the comparison to Hassett-Tschinkel's
conjecture even more precise: in this case, it is easy to see that $\theta^\vee$ induces an
isomorphism
\[
H^*_\mathrm{alg}(X, \ensuremath{\mathbb{Z}})/\ensuremath{\mathbf v} \to N_1(M)
\]
of lattices, respecting the integral structures. Given a class $R \in N_1(M)$ satisfying the
inequality $(R, R) \ge -\frac{n+3}2$ of \cite{HassettTschinkel:ExtremalRays}, let
$\ensuremath{\mathbf a}_0 \in \ensuremath{\mathbf v}^\perp_\ensuremath{\mathbb{Q}}$ be the (rational) class with $\theta^\vee(\ensuremath{\mathbf a}_0) = R$.
Let $k$ be any integer satisfying $k \le n-1$ and
$k^2 \ge (2n-2)(-2-\ensuremath{\mathbf a}_0^2)$; by the assumptions, $k = n-1$ is always an example satisfying both
inequalities.
Then $\ensuremath{\mathbf a} := \ensuremath{\mathbf a}_0 + \frac{k}{2n-2} \ensuremath{\mathbf v}$ is a rational class in the algebraic Mukai lattice that
satisfies the assumptions appearing in Theorem \ref{thm:Moricone}. In addition, it
has integral pairing with $\ensuremath{\mathbf v}$ and with every integral class in $\ensuremath{\mathbf v}^\perp$; thus, it is
potentially an integral class. The Hassett-Tschinkel conjecture holds if and only if for every extremal
ray of $C$, there is a choice of $k$ such that $\ensuremath{\mathbf a}$ is an integral class.
\end{Rem}
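The assertion in the Remark that $k = n-1$ always satisfies both inequalities can be checked directly: in the worst allowed case $\ensuremath{\mathbf a}_0^2 = -\frac{n+3}{2}$, one has $(2n-2)(-2-\ensuremath{\mathbf a}_0^2) = (n-1)^2 = k^2$. A purely numerical sketch (the loop range is illustrative):

```python
from fractions import Fraction

# Sanity check of the choice k = n - 1 in the Remark: with
# a0^2 >= -(n+3)/2 and v^2 = 2n - 2, the value k = n - 1 satisfies
# k <= n - 1 and k^2 >= (2n-2)(-2 - a0^2), so a = a0 + k/(2n-2)·v has
# a^2 = a0^2 + k^2/(2n-2) >= -2 and (v, a) = k <= v^2/2.
for n in range(2, 50):
    v2 = 2 * n - 2
    k = n - 1
    a0_sq_min = Fraction(-(n + 3), 2)          # worst allowed value of a0^2
    assert k * k >= v2 * (-2 - a0_sq_min)
    assert a0_sq_min + Fraction(k * k, v2) >= -2
    assert k <= Fraction(v2, 2)
```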
If we are given a lattice $\ensuremath{\mathbf v}^\perp$ of small rank, then the algebraic Mukai lattice of $(X,
\alpha)$ can be any lattice in $\ensuremath{\mathbf v}^\perp_\ensuremath{\mathbb{Q}} \oplus \ensuremath{\mathbb{Q}} \cdot \ensuremath{\mathbf v}$ containing both $\ensuremath{\mathbf v}^\perp$ and
$\ensuremath{\mathbf v}$, as long as $\ensuremath{\mathbf v}$ and $\ensuremath{\mathbf v}^\perp$ are primitive. In general, the Hassett-Tschinkel conjecture
holds for some of these lattices, but not for others. The question is thus closely related to
the fact that a strong global Torelli statement needs the embedding $H^2(M) \ensuremath{\hookrightarrow} H^*(X)$, rather
than just $H^2(M)$.
\section{Examples of nef cones and movable cones}
\label{sec:examples}
In this section we examine examples of cones of divisors.
\subsection*{K3 surfaces with Picard number one}
Let $X$ be a K3 surface such that $\mathrm{Pic}(X) \cong \ensuremath{\mathbb{Z}} \cdot H$, with $H^2=2d$.
We let $M:=\mathrm{Hilb}^n(X)$, for $n\geq2$, and $\ensuremath{\mathbf v}=(1,0,1-n)$.
In this case, everything is determined by certain Pell equations.
We recall that a basis of $\mathrm{NS}(M)$ is given by
\begin{equation} \label{eq:thetaexplicit}
\widetilde H = \theta(0,-H,0) \quad \text{and} \quad B = \theta(-1,0,1-n).
\end{equation}
Geometrically, $\widetilde H$ is the big and nef divisor induced by the symmetric power of $H$ on
$\mathop{\mathrm{Sym}}^n(X)$, and $2B$ is the class of the exceptional divisor of the Hilbert-Chow morphism.
By Theorem \ref{thm:walls}, divisorial contractions can be divided in three cases:
\begin{description}
\item[Brill-Noether] If there exists a spherical class $\ensuremath{\mathbf s}$ with $(\ensuremath{\mathbf s},\ensuremath{\mathbf v})=0$.
\item[Hilbert-Chow] If there exists an isotropic class $\ensuremath{\mathbf w}$ with $(\ensuremath{\mathbf w},\ensuremath{\mathbf v})=1$.
\item[Li-Gieseker-Uhlenbeck] If there exists an isotropic class $\ensuremath{\mathbf w}$ with $(\ensuremath{\mathbf w},\ensuremath{\mathbf v})=2$.
\end{description}
Elementary substitutions show that the case of a BN-contraction is governed by
solutions to the Pell equation
\begin{equation}\label{eq:Pell(-2)}
(n-1) X^2 - d Y^2 = 1 \quad \text{via} \quad
\ensuremath{\mathbf s}(X, Y) = (X,-YH,(n-1)X).
\end{equation}
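For the reader's convenience, the two required properties of $\ensuremath{\mathbf s}(X,Y)$ can be checked directly from the Mukai pairing:
\[
\ensuremath{\mathbf s}(X,Y)^2 = 2dY^2 - 2(n-1)X^2 = -2
\quad \text{and} \quad
\bigl(\ensuremath{\mathbf v},\ensuremath{\mathbf s}(X,Y)\bigr) = -(n-1)X + (n-1)X = 0,
\]
where the first equality holds precisely when $(X,Y)$ solves \eqref{eq:Pell(-2)}, and the second holds identically.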
The cases of HC-contractions and LGU-contractions are governed by solutions of
\begin{equation}\label{eq:Pell(0)}
X^2 - d (n-1) Y^2 = 1 \quad \text{with $X+1$ divisible by $n-1$};
\end{equation}
we get a HC-contraction or LGU-contraction via
\[
\ensuremath{\mathbf w}(X,Y)=\left(\frac{X+1}{2(n-1)},-\frac Y2 H,\frac{X-1}2\right)
\quad \text{or} \quad
\ensuremath{\mathbf w}(X,Y)=\left(\frac{X+1}{n-1},-Y H,X-1\right)
\]
depending on whether $Y$ is even or odd.
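In both cases one verifies directly that $\ensuremath{\mathbf w}(X,Y)$ has the required properties: for $Y$ even,
\[
\ensuremath{\mathbf w}(X,Y)^2 = \frac{dY^2}2 - \frac{X^2-1}{2(n-1)} = 0
\quad \text{and} \quad
\bigl(\ensuremath{\mathbf v},\ensuremath{\mathbf w}(X,Y)\bigr) = \frac{X+1}2 - \frac{X-1}2 = 1
\]
by \eqref{eq:Pell(0)}, while for $Y$ odd the same computation gives $\ensuremath{\mathbf w}(X,Y)^2 = 0$ and $\bigl(\ensuremath{\mathbf v},\ensuremath{\mathbf w}(X,Y)\bigr) = 2$.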
The two equations determine the movable cone:
\begin{Prop}\label{prop:MovHilbSchemePic1}
Assume $\mathrm{Pic}(X)\cong \ensuremath{\mathbb{Z}} \cdot H$.
The movable cone of the Hilbert scheme $M=\mathrm{Hilb}^n(X)$ has the following form:
\begin{enumerate}
\item\label{enum:Columbus120912_1} If $d=\frac{k^2}{h^2}(n-1)$, with $k,h\geq1$, $(k,h)=1$, then
\[
\mathrm{Mov}(M) = \langle \widetilde H, \widetilde H - \frac kh B\rangle,
\]
where $q(h\widetilde H - kB)=0$, and it induces a (rational) Lagrangian fibration on $M$.
\item\label{enum:Columbus120912_2} If $d(n-1)$ is not a perfect square, and \eqref{eq:Pell(-2)} has a solution, then
\[
\mathrm{Mov}(M) = \langle \widetilde H, \widetilde H - d\frac{y_1}{x_1(n-1)}B\rangle,
\]
where $(x_1,y_1)$ is the solution to \eqref{eq:Pell(-2)} with $x_1, y_1 > 0$, and with smallest
possible $x_1$.
\item\label{enum:Columbus120912_3} If $d(n-1)$ is not a perfect square, and \eqref{eq:Pell(-2)} has no solution, then
\[
\mathrm{Mov}(M) = \langle \widetilde H, \widetilde H - d\frac{y_1'}{x_1'}B\rangle,
\]
where $(x_1',y_1')$ is the solution to \eqref{eq:Pell(0)} with smallest possible
$\frac{y_1'}{x_1'}>0$.
\end{enumerate}
\end{Prop}
\begin{proof}
Since $\widetilde H$ induces the divisorial HC contraction, it is an extremal ray of the movable
cone; to find the other extremal ray, we need to find $\Gamma > 0$ such that
$\widetilde H - \Gamma B$ lies on one of the walls described by Theorem \ref{thm:movable}, and such
that $\Gamma$ is as small as possible.
Also recall Proposition \ref{prop:Weylmovablecone}: the movable cone is a
fundamental domain for the action of the Weyl group $W_{\mathrm{Exc}}$ on $\mathop{\mathrm{Pos}}(M)$.
Any solution to \eqref{eq:Pell(-2)} or \eqref{eq:Pell(0)} determines a wall in
the positive cone via Theorem \ref{thm:movable}; one of its Weyl group translates
thus determines a wall bordering the movable cone.
Part \eqref{enum:Columbus120912_1} follows directly from Theorem \ref{thm:SYZ}.
To prove part \eqref{enum:Columbus120912_2}, it follows immediately from the previous discussion that
if \eqref{eq:Pell(-2)} has a solution, then
one of the solutions determines the second extremal ray. The claim thus follows from the observation
that
\[
D(X,Y) :=\widetilde H - d\frac{Y}{X(n-1)}B = \theta\left( \left(\frac{dY}{X(n-1)}, -H,
d\frac{Y}{X}\right) \right)
\]
is obtained as the image under $\theta$ of a class orthogonal to $\ensuremath{\mathbf s}(X, Y)$, and the fact that
$\frac YX$ is minimal if and only if $X$ is minimal.
A similar computation shows that given a solution of \eqref{eq:Pell(0)} (which always exists),
the vector
\[ D'(X, Y) = \widetilde H - d\frac YX B = \theta\left( \left( \frac{dY}{X}, -H,
(n-1)\frac{dY}X\right) \right) \]
is contained in $\theta(\ensuremath{\mathbf w}(X, Y)^\perp)$ in both
the HC and the LGU case; this proves part \eqref{enum:Columbus120912_3}.
\end{proof}
\begin{Ex}
If $d = n-2$, then \eqref{eq:Pell(-2)} has $X = 1, Y=1$ as a solution. Therefore
\[
\mathrm{Mov}(M)=\langle \widetilde H, \widetilde H - \frac{n-2}{n-1} B \rangle.
\]
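Indeed, $(n-1)\cdot1^2 - (n-2)\cdot1^2 = 1$, so $(x_1,y_1)=(1,1)$ solves \eqref{eq:Pell(-2)}, and Proposition \ref{prop:MovHilbSchemePic1}, part \eqref{enum:Columbus120912_2}, gives
\[
\Gamma = d\,\frac{y_1}{x_1(n-1)} = \frac{n-2}{n-1}.
\]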
\end{Ex}
For the nef cone, we start with the easy case $n=2$.
Consider the Pell equation
\begin{equation}\label{eq:(-2)flops_n=2}
X^2 - 4dY^2 = 5.
\end{equation}
The associated spherical class is $\ensuremath{\mathbf s}(X,Y) = \left(\frac{X+1}2,-Y H,\frac{X-1}2\right)$.
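Indeed, for $\ensuremath{\mathbf v}=(1,0,-1)$ one computes
\[
\ensuremath{\mathbf s}(X,Y)^2 = 2dY^2 - \frac{X^2-1}2 = -2 \iff X^2 - 4dY^2 = 5,
\quad \text{and} \quad
\bigl(\ensuremath{\mathbf v},\ensuremath{\mathbf s}(X,Y)\bigr) = \frac{X+1}2 - \frac{X-1}2 = 1.
\]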
\begin{Lem}\label{lem:n=2}
Let $M=\mathrm{Hilb}^2(X)$.
The nef cone of $M$ has the following form:
\begin{enumerate}
\item\label{enum:Columbus120913_1} If \eqref{eq:(-2)flops_n=2} has no solutions, then
\[
\mathrm{Nef}(M) = \mathrm{Mov}(M).
\]
\item\label{enum:Columbus120913_2} Otherwise, let $(x_1,y_1)$ be the positive solution of \eqref{eq:(-2)flops_n=2}
with $x_1>0$ minimal.
Then
\[
\mathrm{Nef}(M) = \langle \widetilde H, \widetilde H - 2d \frac{y_1}{x_1} B \rangle.
\]
\end{enumerate}
\end{Lem}
\begin{proof}
We apply Theorem \ref{thm:nefcone}.
The movable cone and the nef cone agree unless there is a flopping wall, described
in Theorem \ref{thm:walls}, part \eqref{enum:niso-flop}.
Since $\ensuremath{\mathbf v}^2 = 2$, the case $\ensuremath{\mathbf v} = \ensuremath{\mathbf a} + \ensuremath{\mathbf b}$ with $\ensuremath{\mathbf a}, \ensuremath{\mathbf b}$ positive
is impossible. This leaves only the case of a spherical class $\ensuremath{\mathbf s}$
with $(\ensuremath{\mathbf v},\ensuremath{\mathbf s}) = 1$; this exists if and only if \eqref{eq:(-2)flops_n=2} has a solution.
\end{proof}
\begin{Ex}\label{ex:CK}
Let $d=31$.
Then the nef cone for $M=\mathrm{Hilb}^2(X)$ is
\[
\mathrm{Nef}(M) = \langle \widetilde H, \widetilde H - \frac{3658}{657} B\rangle.
\]
In particular, this gives a negative answer to \cite[Question 8.4]{KnutsenCiliberto}.
Indeed, \eqref{eq:(-2)flops_n=2} has the smallest solution given by $x_1=657$ and $y_1=59$.
This gives a $(-2)$-class $\ensuremath{\mathbf s} = (329, -59 \cdot H, 328)$, which induces a flop, by Lemma \ref{lem:n=2}.
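As a consistency check: $657^2 - 4\cdot31\cdot59^2 = 431649 - 431644 = 5$; the class $\ensuremath{\mathbf s}$ satisfies
\[
\ensuremath{\mathbf s}^2 = 2\cdot31\cdot59^2 - 2\cdot329\cdot328 = 215822 - 215824 = -2,
\qquad
(\ensuremath{\mathbf v},\ensuremath{\mathbf s}) = 329 - 328 = 1;
\]
and $\widetilde H - \Gamma B = \theta\bigl((\Gamma,-H,\Gamma)\bigr)$ is orthogonal to $\ensuremath{\mathbf s}$ exactly when $3658 - 657\,\Gamma = 0$, i.e., $\Gamma = \frac{3658}{657}$.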
\end{Ex}
For $n>2$ the situation is more complicated, since the number of Pell equations to consider is larger.
But, in any case, everything is completely determined.
\begin{Ex}\label{ex:n=7}
Consider the case in which $d=1$ and $n=7$, $M=\mathrm{Hilb}^7(X)$.
This example exhibits a flop of ``higher degree'': it is induced by a
decomposition $\ensuremath{\mathbf v} = \ensuremath{\mathbf a} + \mathbf{b}$, with $\ensuremath{\mathbf a}^2, \mathbf{b}^2>0$, and not induced by a
spherical or isotropic class.
Indeed, if $\ensuremath{\mathbf v}=(1,0,-6)$, $\ensuremath{\mathbf a}=(1,-H,0)$ and $\ensuremath{\mathbf b}=(0,H,-6)$, then
the rank two hyperbolic lattice associated to this wall contains no spherical or isotropic classes.
The full list of walls in the movable cone is given in the table below.
We can write the nef divisor associated to a wall as $\widetilde H -\Gamma B$, for
$\Gamma\in\ensuremath{\mathbb{Q}}_{>0}$; as before, the value of
$\Gamma$ is determined from \eqref{eq:thetaexplicit} given an element of $\ensuremath{\mathbf v}^\perp\cap\ensuremath{\mathbf a}^\perp$.
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
&&&&\\[-6pt]
$\Gamma$ & $\ensuremath{\mathbf a}$ & $\ensuremath{\mathbf a}^2$ & $(\ensuremath{\mathbf v},\ensuremath{\mathbf a})$ & Type \\
&&&&\\[-6pt]
\hline
&&&&\\[-6pt]
$0$ & $(0,0,-1)$ & 0 & 1 & divisorial contraction \\
&&&&\\[-6pt]
\hline
&&&&\\[-6pt]
$\frac 14$ & $(1,-H,2)$ & -2 & 4 & flop \\
&&&&\\[-6pt]
\hline
&&&&\\[-6pt]
$\frac 27$ & $(1,-H,1)$ & 0 & 5 & flop \\
&&&&\\[-6pt]
\hline
&&&&\\[-6pt]
$\frac 13$ & $(1,-H,0)$ & 2 & 6 & flop \\
&&&&\\[-6pt]
\hline
&&&&\\[-6pt]
$\frac{6}{17}$ & $(2,-3H,5)$ & -2 & 7 & fake wall \\
&&&&\\[-6pt]
\hline
&&&&\\[-6pt]
$\frac{4}{11}$ & $(1,-2H,5)$ & -2 & 1 & flop \\
&&&&\\[-6pt]
\hline
&&&&\\[-6pt]
$\frac 38$ & $-(1,-3H,10)$ & -2 & 4 & flop\\
&&&&\\[-6pt]
\hline
&&&&\\[-6pt]
$\frac 25$ & $(1,-2H,4)$ & 0 & 2 & divisorial contraction\\
&&&&\\[-6pt]
\hline
\end{tabular}
\end{center}
\end{Ex}
\subsection*{...and higher Picard number}
Let $X$ be a K3 surface such that $\mathrm{Pic}(X) \cong \ensuremath{\mathbb{Z}} \cdot \xi_1 \oplus \ensuremath{\mathbb{Z}} \cdot \xi_2$.
\begin{Ex}
We let $M:=\mathrm{Hilb}^2(X)$, and $\ensuremath{\mathbf v}=(1,0,-1)$.
We assume that the intersection form (with respect to the basis $\xi_1,\xi_2$) is given by
\[
q =
\begin{pmatrix}
28 & 0\\
0 & -4
\end{pmatrix}.
\]
Such a K3 surface exists, see \cite{Morrison:K3,Kovacs:K3}.
We have:
\[
\mathrm{NS}(M) = \ensuremath{\mathbb{Z}} \cdot \mathbf{s} \oplus \mathrm{NS}(X),
\]
where $\mathbf{s} = (1,0,1)$.
Our first claim is
\begin{equation}\label{enum:HigherPic1}
\mathrm{Nef}(M) = \mathrm{Mov}(M).
\end{equation}
Indeed, by Theorem \ref{thm:walls}, a flopping contraction would have to come from a class $\ensuremath{\mathbf a}$
with $\ensuremath{\mathbf a}^2 \ge -2$ and $(\ensuremath{\mathbf v}, \ensuremath{\mathbf a}) = 1$; also, the corresponding lattice $\ensuremath{\mathcal H} = \langle \ensuremath{\mathbf v}, \ensuremath{\mathbf a} \rangle$
has to be hyperbolic, which implies $\ensuremath{\mathbf a}^2 \le 0$. In addition, $\ensuremath{\mathbf a}^2 = 0$ would correspond to the
Hilbert-Chow divisorial contraction, and thus $\ensuremath{\mathbf a}^2 = -2$ is the only possibility.
If we write $\ensuremath{\mathbf a} = (r, D, r-1)$ with $D = a\xi_1 + b\xi_2$, this gives
\[ -2r(r-1) + 28 a^2 - 4b^2 = -2.
\]
This equation has no solutions modulo 4.
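Indeed, $r(r-1)$ is even for every integer $r$, so
\[
-2r(r-1) + 28a^2 - 4b^2 \equiv 0 \pmod 4,
\]
while $-2 \not\equiv 0 \pmod 4$.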
The structure of the nef cone is thus determined by divisorial contractions.
These are controlled by the quadratic equation
\begin{equation}\label{eq:HigherPic2}
X^2 - 2 (7a^2 - b^2) = 1,
\end{equation}
via $\ensuremath{\mathbf a} = (X,D,X)$.
For example, the Hilbert-Chow contraction corresponds to the solution $a=b=0$ and $X=1$ to \eqref{eq:HigherPic2}.
Other contractions arise, for example, at $a=4$, $b=0$, $X=15$, or $a=2$, $b=2$, $X=7$, etc.
The nef cone is locally polyhedral but not finitely generated. Its walls have an
accumulation point at the boundary, coming from a solution to
\[
X^2 - 2 (7a^2 - b^2) = 0
\]
corresponding to a Lagrangian fibration.
\end{Ex}
We continue to consider the case where $X$ has Picard rank two.
To increase the flexibility of our examples, we now also consider a twist by a Brauer class $\alpha\in\mathrm{Br}(X)$.
We choose $\alpha = e^{\beta_0}$ for some $B$-field class
$\beta_0 \in H^2(X, \ensuremath{\mathbb{Q}})$ with
\[
\beta_0.\mathrm{NS}(X)=0\quad \text{ and } \quad \beta_0^2=0.
\]
(See \cite{HMS:generic_K3s} for more details; in particular, the existence of our examples follows as in \cite[Lemma 3.22]{HMS:generic_K3s}.)
\begin{Ex}\label{ex:RoundNef}
We assume that $2\beta_0$ is integral, and that the intersection form on
\[
H^*_{\mathrm{alg}}(X,\alpha,\ensuremath{\mathbb{Z}})=\mathrm{NS}(X) \oplus \ensuremath{\mathbb{Z}}\cdot (2,2\beta_0,0) \oplus \ensuremath{\mathbb{Z}} \cdot (0,0,-1)
\]
takes the form
\[
q =
\begin{pmatrix}
4 & 0 & 0 & 0\\
0 & -4 & 0 & 0\\
0 & 0 & 0 & 2\\
0 & 0 & 2 & 0
\end{pmatrix}.
\]
Consider the primitive vector $\ensuremath{\mathbf v}=(0,\xi_1,0)$, and let $M:=M_H(\ensuremath{\mathbf v})$ be the moduli space of $\alpha$-twisted $H$-Gieseker semistable sheaves on $X$, for $H$ a generic polarization on $X$.
Then:
\begin{enumerate}
\item\label{enum:RoundRational1} $\mathrm{Nef}(M)=\mathrm{Mov}(M)$;
\item\label{enum:RoundRational2} $\mathrm{Nef}(M)$ is a rational circular cone.
\end{enumerate}
To prove the above statements, observe that $\ensuremath{\mathbf v}^2 = 4$ and
$(\ensuremath{\mathbf v}, \ensuremath{\mathbf a}) \in 4\ensuremath{\mathbb{Z}}$ for all $\ensuremath{\mathbf a} \in H^*_\mathrm{alg}(X, \alpha, \ensuremath{\mathbb{Z}})$. According to Theorem
\ref{thm:walls}, the only possible wall in this situation would be given by
a Brill-Noether divisorial contraction, coming from a spherical class $\ensuremath{\mathbf s} \in \ensuremath{\mathbf v}^\perp$. But the
above lattice admits no spherical classes, and thus there are no walls.
Thus the nef cone and the closure of the movable cone are both equal to the positive cone.
Since $M$ obviously admits Lagrangian fibrations, the cone is rational.
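Explicitly, the pairings used above can be read off from the matrix $q$: with respect to the basis $\xi_1$, $\xi_2$, $(2,2\beta_0,0)$, $(0,0,-1)$, the vector $\ensuremath{\mathbf v}=(0,\xi_1,0)$ satisfies
\[
(\ensuremath{\mathbf v},\xi_1)=4,\qquad (\ensuremath{\mathbf v},\xi_2)=(\ensuremath{\mathbf v},(2,2\beta_0,0))=(\ensuremath{\mathbf v},(0,0,-1))=0,
\]
so $\ensuremath{\mathbf v}^2=4$ and $(\ensuremath{\mathbf v},\ensuremath{\mathbf a})\in4\ensuremath{\mathbb{Z}}$ for every $\ensuremath{\mathbf a}\in H^*_{\mathrm{alg}}(X,\alpha,\ensuremath{\mathbb{Z}})$.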
\end{Ex}
Modifying the previous example slightly, we obtain a moduli space with circular movable cone and locally polyhedral nef cone:
\begin{Ex}
Now assume $3\beta_0$ is integral, and that
\[
H^*_{\mathrm{alg}}(X,\alpha,\ensuremath{\mathbb{Z}})=\mathrm{NS}(X) \oplus \ensuremath{\mathbb{Z}}\cdot (3,3\beta_0,0) \oplus \ensuremath{\mathbb{Z}} \cdot (0,0,-1)
\]
has intersection form given by
\[
q =
\begin{pmatrix}
6 & 0 & 0 & 0\\
0 & -6 & 0 & 0\\
0 & 0 & 0 & 3\\
0 & 0 & 3 & 0
\end{pmatrix}.
\]
Consider the primitive vector $\ensuremath{\mathbf v}=(0,\xi_1,1)$, and let $M:=M_H(\ensuremath{\mathbf v})$.
Then:
\begin{enumerate}
\item\label{enum:MovRoundNefPoly1} $\mathrm{Nef}(M)$ is a rational locally-polyhedral cone;
\item\label{enum:MovRoundNefPoly2} $\mathrm{Mov}(M)$ is a rational circular cone.
\end{enumerate}
Indeed, \eqref{enum:MovRoundNefPoly2} follows exactly as in Example \ref{ex:RoundNef}: there are no
spherical classes, and, for all $\ensuremath{\mathbf a}\in H^*_{\mathrm{alg}}(X,\alpha,\ensuremath{\mathbb{Z}})$, $(\ensuremath{\mathbf a},\ensuremath{\mathbf v})\in3\ensuremath{\mathbb{Z}}$.
However, flopping contractions are induced by solutions to the quadratic equation
\[
a^2-b^2-2as+s=0,
\]
where we set $D=a\xi_1+b\xi_2$, and $\ensuremath{\mathbf a}=(3(2a-1),a\xi_1+b\xi_2+ 3(2a-1)\beta_0,s)$.
This has infinitely many solutions.
It is an easy exercise to deduce \eqref{enum:MovRoundNefPoly1} from this.
\end{Ex}
\section{The geometry of flopping contractions}
\label{sec:floppingeometry}
One can also refine the analysis leading to Theorem \ref{thm:walls} to give a precise description of
the geometry of the flopping contraction associated to a flopping wall $\ensuremath{\mathcal W}$.
As in Section \ref{sec:hyperbolic}, we let $\sigma_0 \in \ensuremath{\mathcal W}$ be a stability condition on the wall,
and $\sigma_+ \notin \ensuremath{\mathcal W}$ be sufficiently close to $\sigma_0$.
For simplicity, let us assume throughout this section that the hyperbolic lattice $\ensuremath{\mathcal H}_\ensuremath{\mathcal W}$ associated
to $\ensuremath{\mathcal W}$ via Definition \ref{def:potentialwall} does not admit spherical or isotropic classes; in particular,
$\ensuremath{\mathcal W}$ is not a totally semistable wall for any class $\ensuremath{\mathbf a} \in \ensuremath{\mathcal H}$, and does not induce
a divisorial contraction.
Let $\mathfrak P$ be the set of unordered partitions
$P = [\ensuremath{\mathbf a}_i]_i$ of $\ensuremath{\mathbf v}$ into a sum $\ensuremath{\mathbf v} = \ensuremath{\mathbf a}_1 + \dots + \ensuremath{\mathbf a}_m$ of positive classes
$\ensuremath{\mathbf a}_i \in \ensuremath{\mathcal H}$. We say that a partition $P$ is a refinement of another partition
$Q = [\ensuremath{\mathbf b}_i]_i$ if it can be obtained by choosing partitions of each $\ensuremath{\mathbf b}_i$.
This defines a natural partial order on $\mathfrak P$, with $P \prec Q$ if $P$ is a refinement of $Q$.
The trivial partition is the maximal element of $\mathfrak P$.
Given $P = [\ensuremath{\mathbf a}_i]_i \in \mathfrak P$, we let $M_P \subset M_{\sigma_+}(\ensuremath{\mathbf v})$ be the subset of objects $E$ such
that the Mukai vectors of the Jordan-H\"older factors $E_i$ of $E$ with respect to $\sigma_0$ are
given by $\ensuremath{\mathbf a}_i$ for all $i$. Using openness of stability and closedness of semistability in
families, one easily proves:
\begin{Lem} The disjoint union $M_{\sigma_+}(\ensuremath{\mathbf v}) = \coprod_{P \in \mathfrak P} M_P$ defines a
stratification of $M_{\sigma_+}(\ensuremath{\mathbf v})$ into locally closed subsets, such that
$M_P$ is contained in the closure of $M_Q$ if and only if $P \prec Q$.
\end{Lem}
In addition, our simplifying assumptions on $\ensuremath{\mathcal H}_\ensuremath{\mathcal W}$ give the following:
\begin{Lem} \label{lem:MAnonempty}
Assume that $P = [\ensuremath{\mathbf a}_1, \ensuremath{\mathbf a}_2]$ is a two-element partition of $\ensuremath{\mathbf v}$. Then $M_P \subset
M_{\sigma_+}(\ensuremath{\mathbf v})$ is non-empty, and of codimension $(\ensuremath{\mathbf a}_1, \ensuremath{\mathbf a}_2) - 1$.
\end{Lem}
\begin{proof}
Since $\ensuremath{\mathbf v}$ is primitive, we may assume that
$\ensuremath{\mathbf a}_1$ has smaller phase than $\ensuremath{\mathbf a}_2$ with respect to $\sigma_+$.
By assumption on $\ensuremath{\mathcal H}_\ensuremath{\mathcal W}$ and by Theorem \ref{thm:nonempty}, the generic element $A_i \in M_{\sigma_+}(\ensuremath{\mathbf a}_i)$ is
$\sigma_0$-\emph{stable} for $i= 1, 2$.
In particular, $\mathop{\mathrm{Hom}}\nolimits(A_1, A_2) = \mathop{\mathrm{Hom}}\nolimits(A_2, A_1) = 0$, and
therefore $\mathop{\mathrm{dim}}\nolimits \mathop{\mathrm{Ext}}\nolimits^1(A_2, A_1) = (\ensuremath{\mathbf a}_1, \ensuremath{\mathbf a}_2)$. By Lemma \ref{lem:stableextension}, any non-trivial
extension $A_1 \ensuremath{\hookrightarrow} E \ensuremath{\twoheadrightarrow} A_2$ is $\sigma_+$-stable. Using Theorem \ref{thm:nonempty} again, one
computes the dimension of the space of such extensions as
\[
\ensuremath{\mathbf a}_1^2 + 2 + \ensuremath{\mathbf a}_2^2 + 2 + (\ensuremath{\mathbf a}_1, \ensuremath{\mathbf a}_2) - 1 = \ensuremath{\mathbf v}^2 + 2 - \left((\ensuremath{\mathbf a}_1, \ensuremath{\mathbf a}_2) - 1\right).
\]
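The last equality uses $\ensuremath{\mathbf v}^2 = (\ensuremath{\mathbf a}_1+\ensuremath{\mathbf a}_2)^2 = \ensuremath{\mathbf a}_1^2 + \ensuremath{\mathbf a}_2^2 + 2(\ensuremath{\mathbf a}_1,\ensuremath{\mathbf a}_2)$; since $\mathop{\mathrm{dim}}\nolimits M_{\sigma_+}(\ensuremath{\mathbf v}) = \ensuremath{\mathbf v}^2+2$, the stratum $M_P$ has codimension $(\ensuremath{\mathbf a}_1,\ensuremath{\mathbf a}_2)-1$, as claimed.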
\end{proof}
For $P$ as above, the flopping contraction $\pi^+$ contracts $M_P$ to the product of moduli spaces
$M_{\sigma_0}^{st}(\ensuremath{\mathbf a}_1) \times M_{\sigma_0}^{st}(\ensuremath{\mathbf a}_2)$ of $\sigma_0$-\emph{stable} objects.
The exceptional locus of $\pi^+$ is the union of $M_P$ for all non-trivial partitions $P$.
In particular, when there is more than one two-element partition,
the stratification is only partially ordered, and the exceptional locus has multiple irreducible
components. This leads to a generalization of Markman's
notion of \emph{stratified Mukai flops} introduced in \cite{Markman:BNduality} (where the
stratification is indexed by a totally ordered set).
Using the above two Lemmas, it is easy to construct examples of flops where the exceptional locus
of the small contraction $\pi^+$ has $m$ intersecting irreducible components:
\begin{Ex} \label{ex:flopirreduciblecomponents}
Choose $M \gg m$ for which $x^2 + Mxy + y^2 = -1$ does not admit an integral solution.
We define the symmetric pairing on $\ensuremath{\mathcal H} \cong \ensuremath{\mathbb{Z}}^2$ via the matrix
$\begin{pmatrix}2 & M \\ M & 2 \end{pmatrix}$, and let $\ensuremath{\mathbf v} = \begin{pmatrix}1 \\ m-1\end{pmatrix}$.
The positive cone contains the upper right quadrant and is
bordered by lines of slopes approximately $-\frac 1M$ and $-M$.
Since $M \gg m$ (in fact, $M > 2m$ is enough), any partition of $\ensuremath{\mathbf v}$ into positive classes is in
fact a partition in $\ensuremath{\mathbb{Z}}_{\ge 0}^2$. Therefore, the two-element partitions are given by
$A_k = \left[\begin{pmatrix}1 \\ k\end{pmatrix}, \begin{pmatrix}0 \\ m-1-k\end{pmatrix}\right] $ for
$0 \le k \le m-1$. There is a unique minimal partition
$Q = \left[ \begin{pmatrix}1 \\ 0\end{pmatrix}, \begin{pmatrix}0 \\ 1\end{pmatrix}, \dots,
\begin{pmatrix}0 \\ 1\end{pmatrix} \right]$, with
$M_Q \subset \overline{M_{A_k}}$ for all $k$; thus, the exceptional locus has $m$ irreducible
components $\overline{M_{A_k}}$ intersecting in $M_Q$.
\end{Ex}
Similarly, one can construct flopping contractions with arbitrarily many
\emph{connected} components:
\begin{Ex} \label{ex:flopconnecteccomponents}
Let $m$ be an odd positive integer. Choose $M \gg m$ and define the lattice $\ensuremath{\mathcal H}$ by the matrix
$\begin{pmatrix}-4 & 2M \\ 2M & 4 \end{pmatrix}$. The positive cone lies between the lines of slope
approximately $+ \frac 1M$ and $-M$. We let $\ensuremath{\mathbf v} = \begin{pmatrix} m \\ 2 \end{pmatrix}$.
Any summand in a partition of $\ensuremath{\mathbf v}$ must be of the form $\begin{pmatrix} x \\ y \end{pmatrix}$
with $x \ge 0 $ and $y > 0$, and therefore $y = 1$. Besides the trivial element, the only partitions
occurring in $\mathfrak P$ are therefore of the form $A_k = \left[\begin{pmatrix} k \\ 1 \end{pmatrix},
\begin{pmatrix} m-k \\ 1 \end{pmatrix}\right]$, for $0 \le k < \frac m2$. Each corresponding stratum
$M_{A_k}$ is a connected component of the exceptional locus of $\pi^+$, as $A_k$ admits no further
refinement.
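The positivity of the summands can be checked directly: with the given pairing,
\[
\begin{pmatrix} k \\ 1 \end{pmatrix}^2 = -4k^2 + 4Mk + 4 > 0
\qquad \text{for } 0 \le k \le m,
\]
since $M \gg m$.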
\end{Ex}
\begin{Rem}
To show that the lattices $\ensuremath{\mathcal H}$ as above occur as the lattice $\ensuremath{\mathcal H}_\ensuremath{\mathcal W}$ associated to a wall,
we only have to find a K3 surface $X$ such that $\ensuremath{\mathcal H}$ embeds primitively into its Mukai lattice $H^*_{\mathrm{alg}}(X, \ensuremath{\mathbb{Z}})$.
For example, we can choose $\mathrm{Pic}(X) \cong \ensuremath{\mathcal H}$ and
$\ensuremath{\mathbf v} = (0, c, 0)$ for the corresponding curve class $c$. In particular, Example
\ref{ex:flopirreduciblecomponents} occurs in a relative Jacobian of curves on special double covers
$X \to \ensuremath{\mathbb{P}}^2$, and Example \ref{ex:flopconnecteccomponents} in special quartics $X \subset \ensuremath{\mathbb{P}}^3$.
This wall crossing already occurs for Gieseker stability with respect to a non-generic polarization
$H$. The morphism $\pi^+$ contracts sheaves supported on reducible curves $C = C_1 \cup C_2$ in the
corresponding linear system; it forgets the gluing data at the intersection points $C_1 \cap C_2$.
The induced flop preserves the Lagrangian fibration given by the Beauville integrable
system.
\end{Rem}
\section{Le Potier's Strange Duality for isotropic classes}\label{sec:SD}
In this section, we will explain a relation of Theorem \ref{thm:SYZ} to Le Potier's Strange Duality
Conjecture for K3 surfaces. We thank Dragos Oprea for pointing us to this application.
We first recall the basic construction from \cite{LePotier:StrangeDuality,MarianOprea:StrangeDualitySurvey}.
Let $(X,\alpha)$ be a twisted K3 surface and let $\sigma\in\mathop{\mathrm{Stab}}\nolimits^\dag(X,\alpha)$ be a generic stability condition.
Let $\ensuremath{\mathbf v},\ensuremath{\mathbf w}\in H^*_{\mathrm{alg}}(X,\alpha,\ensuremath{\mathbb{Z}})$ be primitive Mukai vectors with $\ensuremath{\mathbf v}^2,\ensuremath{\mathbf w}^2\geq0$.
We denote by $L_\ensuremath{\mathbf w}$ (resp., $L_\ensuremath{\mathbf v}$) the line bundle $\ensuremath{\mathcal O}_{M_\sigma(\ensuremath{\mathbf v})}(-\theta_\ensuremath{\mathbf v}(\ensuremath{\mathbf w}))$ (resp., $\ensuremath{\mathcal O}_{M_\sigma(\ensuremath{\mathbf w})}(-\theta_\ensuremath{\mathbf w}(\ensuremath{\mathbf v}))$).
We assume:
\begin{enumerate}
\item[(I)] $(\ensuremath{\mathbf v},\ensuremath{\mathbf w})=0$, and
\item[(II)] \label{assumII} for all $E\in M_\sigma(\ensuremath{\mathbf v})$ and all $F\in M_\sigma(\ensuremath{\mathbf w})$, $\mathop{\mathrm{Hom}}\nolimits^2(E,F)=0$.
\end{enumerate}
Then the locus
\[
\Theta = \left\{ (E,F)\in M_\sigma(\ensuremath{\mathbf v}) \times M_\sigma(\ensuremath{\mathbf w})\,:\, \mathop{\mathrm{Hom}}\nolimits(E,F)\neq0 \right\}
\]
gives rise to a section of the line bundle $L_{\ensuremath{\mathbf v},\ensuremath{\mathbf w}} := L_\ensuremath{\mathbf w} \boxtimes L_\ensuremath{\mathbf v}$ on $M_\sigma(\ensuremath{\mathbf v})
\times M_\sigma(\ensuremath{\mathbf w})$ (which may or may not vanish).
We then obtain a morphism, well-defined up to scalars,
\[
\mathrm{SD}\colon H^0(M_\sigma(\ensuremath{\mathbf v}),L_\ensuremath{\mathbf w})^\vee \longrightarrow H^0(M_\sigma(\ensuremath{\mathbf w}),L_\ensuremath{\mathbf v}).
\]
The two basic questions are:
\begin{itemize}
\item When is $h^0(M_\sigma(\ensuremath{\mathbf v}),L_\ensuremath{\mathbf w})=h^0(M_\sigma(\ensuremath{\mathbf w}),L_\ensuremath{\mathbf v})$?
\item If equality holds, is the map $\mathrm{SD}$ an isomorphism?
\end{itemize}
We answer the two previous questions in the case where one of the two vectors is isotropic:
\begin{Prop}\label{prop:StrangeDuality}
Let $(X,\alpha)$ be a twisted K3 surface and let $\sigma\in\mathop{\mathrm{Stab}}\nolimits^\dag(X,\alpha)$ be a generic stability condition.
Let $\ensuremath{\mathbf v},\ensuremath{\mathbf w}\in H^*_{\mathrm{alg}}(X,\alpha,\ensuremath{\mathbb{Z}})$ be primitive Mukai vectors with $(\ensuremath{\mathbf v}, \ensuremath{\mathbf w}) = 0$,
$\ensuremath{\mathbf v}^2\geq2$ and $\ensuremath{\mathbf w}^2=0$.
We assume that
$-\theta_\ensuremath{\mathbf v}(\ensuremath{\mathbf w})\in \mathrm{Mov}(M_\sigma(\ensuremath{\mathbf v}))$ and $-\theta_\ensuremath{\mathbf w}(\ensuremath{\mathbf v})\in \mathrm{Nef}(M_\sigma(\ensuremath{\mathbf w}))$.
Then
\begin{enumerate}
\item\label{enum:EqualityDim} $h^0(M_\sigma(\ensuremath{\mathbf v}),L_\ensuremath{\mathbf w})=h^0(M_\sigma(\ensuremath{\mathbf w}),L_\ensuremath{\mathbf v})$, and
\item\label{enum:SD} the morphism $\mathrm{SD}$ is either zero or an isomorphism.
\end{enumerate}
\end{Prop}
We will see that the case $\mathrm{SD} = 0$ is caused by totally semistable walls.
\begin{proof}
Let $Y:= M_{\sigma}(\ensuremath{\mathbf w})$.
By \cite{Mukai:BundlesK3, Caldararu:NonFineModuliSpaces,Yoshioka:TwistedStability}, there exist an
element $\alpha'\in\mathrm{Br}(Y)$ and a derived equivalence $\Phi\colon \mathrm{D}^{b}(X,\alpha)\xrightarrow{\simeq}\mathrm{D}^{b}(Y,\alpha')$.
Replacing $(X, \alpha)$ by $(Y, \alpha')$, we may
assume that $\ensuremath{\mathbf w}=(0,0,1)$ and $\ensuremath{\mathbf v}=(0,D,s)$, for some $s\in\ensuremath{\mathbb{Z}}$ and $D\in\mathrm{NS}(X)$, and that
$X = M_{\sigma}(\ensuremath{\mathbf w})$ is the moduli space of skyscraper sheaves.
Moreover, $D=-\theta_\ensuremath{\mathbf w}(\ensuremath{\mathbf v})\in \mathrm{Nef}(X)$ is effective, by assumption.
By stability and Serre duality, for all $E\in M_\sigma(\ensuremath{\mathbf v})$ and all $x\in X$,
$\mathop{\mathrm{Hom}}\nolimits^2(E,k(x))=\mathop{\mathrm{Hom}}\nolimits(k(x),E)^\vee=0$, verifying the assumption (II); thus the locus $\Theta$ gives a section of $L_\ensuremath{\mathbf w} \boxtimes L_\ensuremath{\mathbf v}$.
By Remark \ref{Rem:SYZcon_movable}, there exists a chamber $\ensuremath{\mathcal L}_{\infty}$ in the interior of the movable cone $\mathrm{Mov}(M_\sigma(\ensuremath{\mathbf v}))$ whose boundary contains $-\theta_\ensuremath{\mathbf v}(\ensuremath{\mathbf w})$.
Moreover, there exist a polarization $H$ on $X$ and a chamber $\ensuremath{\mathcal C}_{\infty}\subset \mathop{\mathrm{Stab}}\nolimits^\dag(X,\alpha)$ such that $\ell(\ensuremath{\mathcal C}_{\infty})=\ensuremath{\mathcal L}_{\infty}$, $M_H(\ensuremath{\mathbf v})=M_{\ensuremath{\mathcal C}_{\infty}}(\ensuremath{\mathbf v})$, and the Lagrangian fibration induced by $\ensuremath{\mathbf w}$ is the Beauville integrable system on $M_H(\ensuremath{\mathbf v})$.
The argument in \cite[Example 8]{MarianOprea:StrangeDualitySurvey} shows that $h^0(M_{H}(\ensuremath{\mathbf v}),L_\ensuremath{\mathbf w})=h^0(X,\ensuremath{\mathcal O}(D))$ and the morphism $\mathrm{SD}$ is an isomorphism.
Since $M_H(\ensuremath{\mathbf v})$ is connected to $M_{\sigma}(\ensuremath{\mathbf v})$ by a sequence of flops, which do not change the dimension of the spaces of sections of $L_\ensuremath{\mathbf w}$, we obtain immediately \eqref{enum:EqualityDim}.
To prove \eqref{enum:SD}, we need to study the behavior of the morphism $\mathrm{SD}$ under wall-crossing.
We pick a stability condition $\sigma_{\infty}\in\ensuremath{\mathcal C}_{\infty}$.
Both $\sigma$ and $\sigma_{\infty}$ belong to the open subset $U(X,\alpha)$ of Theorem \ref{thm:BridgelandK3geometric}.
The restriction of the map $\ensuremath{\mathcal Z}$ of Theorem \ref{thm:Bridgeland_coveringmap} to $U(X, \alpha)$
is injective up to the $\widetilde{\mathop{\mathrm{GL}}}_2^+(\ensuremath{\mathbb{R}})$-action (i.e., the map separates points that are in different
orbits). Now consider the map $\ell$ in the formulation of Theorem \ref{thm:ellandgroupaction},
restricted to $U(X, \alpha)$. The composition
\[
\theta_{\sigma,\ensuremath{\mathbf v}} \circ I \circ \ensuremath{\mathcal Z}|_{U(X, \alpha)} \colon U(X, \alpha) \to \mathop{\mathrm{Pos}}(M_{\sigma}(\ensuremath{\mathbf v}))
\]
generically has connected fibers. Since both $\sigma_\infty$ and $\sigma$ get mapped to a class in
the movable cone, we can find
a path $\gamma$ in $U(X,\alpha)$ connecting $\sigma$ and $\sigma_{\infty}$ whose image stays within the
movable cone. Thus $\gamma$ crosses no divisorial walls.
If $\gamma$ also crosses no totally semistable walls, then the morphism $\mathrm{SD}$ is compatible with the wall-crossing; since it induces an isomorphism at $\sigma_{\infty}$, it induces an isomorphism at $\sigma$.
Assume instead that there is a totally semistable wall.
We write $\sigma=\sigma_{\omega,\beta}$.
The straight path from $\sigma_{\infty}$ to $\sigma_{t\omega,\beta}$, for $t\gg0$, corresponds to a change of polarization for Gieseker stability, and thus does not cross any totally semistable wall.
Therefore, we may replace $\sigma_{\infty}$ with $\sigma_{t\omega,\beta}$, for $t\gg0$.
We claim that all objects $E$ in $M_{\sigma}(\ensuremath{\mathbf v})$ must be actual complexes.
Indeed, if there exists a sheaf $E$ in $M_{\sigma}(\ensuremath{\mathbf v})$, then the generic element is a sheaf.
Moreover, since $D$ is nef and big, it is globally generated, and we can assume that the support of $E$ is a smooth integral curve.
Stability in $U(X,\alpha)$ for torsion sheaves implies, in particular, that the sheaf is actually stable on the curve.
But then $E$ would be stable for $t\to\infty$.
This shows that we crossed no totally semistable wall.
So $E \in \ensuremath{\mathcal A}_{\omega, \beta}$ is an actual complex. Since $\mathop{\mathrm{rk}} (E) = 0$ and $\mathop{\mathrm{rk}}
\ensuremath{\mathcal H}^{-1}(E) > 0$, we must have $\mathop{\mathrm{rk}} \ensuremath{\mathcal H}^0(E) > 0$; hence
$\mathop{\mathrm{Hom}}\nolimits(E,k(x))\neq0$ for all $x\in X$.
This shows that $\Theta$ is nothing but the zero-section of $L_{\ensuremath{\mathbf v},\ensuremath{\mathbf w}}$ and the induced map $\mathrm{SD}$ is the zero map.
\end{proof}
In particular, the previous proposition holds for pairs of Gieseker moduli spaces.
\begin{Ex}
Let $X$ be a K3 surface such that $\mathrm{Pic}(X) = \ensuremath{\mathbb{Z}} \cdot H$, with $H^2=2$.
Let $\ensuremath{\mathbf v}=(1,0,-1)$ and $\ensuremath{\mathbf w}=-(1,-H,1)$.
Consider a stability condition $\sigma_{\infty}=\sigma_{tH,-2H}$, for $t\gg0$.
Then, as observed in \cite[Proposition 1.3]{Beauville:RationalCurvesK3}, $\mathrm{Hilb}^2(X)= M_{\sigma_{\infty}}(\ensuremath{\mathbf v})$ admits a flop to a Lagrangian fibration induced by the vector $\ensuremath{\mathbf w}$.
The assumptions of Proposition \ref{prop:StrangeDuality} are satisfied.
In this case, for all $E[1]\in M_{\sigma_{\infty}}(\ensuremath{\mathbf w})$, $E\cong I_{pt}(-H)$, and for all $\Gamma\in \mathrm{Hilb}^2(X)$, we have $\mathop{\mathrm{Hom}}\nolimits(I_\Gamma,E[1])\neq0$.
Hence, the map $\mathrm{SD}$ is the zero map.
\end{Ex}
The following example shows that the assumption in Proposition \ref{prop:StrangeDuality} is
necessary:
\begin{Ex}
Let $X$ be a K3 surface with $\mathrm{NS}(X)=\ensuremath{\mathbb{Z}} \cdot C_1 \oplus \ensuremath{\mathbb{Z}} \cdot C_2$ and intersection form
\[
q =
\begin{pmatrix}
-2 & 4\\
4 & -2
\end{pmatrix}.
\]
We assume the two rational curves $C_1$ and $C_2$ generate the cone of effective divisors on $X$.
Let $\ensuremath{\mathbf v}=(0,3C_1+C_2,1)$ and $\ensuremath{\mathbf w}=(0,0,1)$.
Then $\ensuremath{\mathbf v}^2=4$.
Pick a generic ample divisor $H$ on $X$.
We have
\[
H^0(M_H(\ensuremath{\mathbf v}),\theta_{\ensuremath{\mathbf v}}(\ensuremath{\mathbf w}))\cong \ensuremath{\mathbb{C}}^{\oplus 4}.
\]
For example, consider the totally semistable wall where $\ensuremath{\mathbf v}$ aligns with the spherical vector
$(0,C_1,0)$.
Then Proposition \ref{prop:sphericaltotallysemistable} induces a birational map
$M_H(\ensuremath{\mathbf v})\dashrightarrow M_H(\ensuremath{\mathbf v}_0)$ for $\ensuremath{\mathbf v}_0=(0,C_1+C_2,1)$, and a chain of isomorphisms
\[
H^0(M_H(\ensuremath{\mathbf v}),\theta_{\ensuremath{\mathbf v}}(\ensuremath{\mathbf w}))\cong H^0(M_H(\ensuremath{\mathbf v}_0),\theta_{\ensuremath{\mathbf v}_0}(\ensuremath{\mathbf w})) \cong H^0(\ensuremath{\mathbb{P}}^3,\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^3}(1))
\cong \ensuremath{\mathbb{C}}^{\oplus 4},
\]
where the middle isomorphism follows from Proposition \ref{prop:StrangeDuality}.
However,
\[
H^0(M_H(\ensuremath{\mathbf w}),\theta_\ensuremath{\mathbf w}(\ensuremath{\mathbf v}))\cong H^0(X,\ensuremath{\mathcal O}_X(3C_1+C_2))\cong\ensuremath{\mathbb{C}}^{\oplus 5}.
\]
The last isomorphism follows from the exact sequence
\[
0\to \ensuremath{\mathcal O}_X(2C_1+C_2) \to \ensuremath{\mathcal O}_X(3C_1+C_2) \to \ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(-2) \to 0,
\]
since $\ensuremath{\mathcal O}_X(2C_1+C_2)$ is big and nef and thus has no higher cohomology.
\end{Ex}
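As a quick sanity check, the two dimension counts in this example, the Mukai square $\ensuremath{\mathbf v}^2=4$ and $h^0(X,\ensuremath{\mathcal O}_X(3C_1+C_2))=5$, follow from the stated intersection form alone. A minimal numerical sketch (the helper name `self_int` is ours, introduced only for illustration):

```python
import numpy as np

# Intersection form on NS(X) in the basis (C1, C2), as stated above.
q = np.array([[-2, 4],
              [4, -2]])

def self_int(a, b):
    """Self-intersection of the divisor class a*C1 + b*C2."""
    d = np.array([a, b])
    return int(d @ q @ d)

# Mukai square of v = (0, 3C1+C2, 1): since the rank is zero, v^2 = D^2.
v_squared = self_int(3, 1)
print(v_squared)        # -> 4

# Riemann-Roch on a K3 for a big and nef divisor with no higher cohomology:
# h^0(O_X(D)) = 2 + D^2/2, applied to D = 2C1+C2; the exact sequence above
# gives the same value for h^0(O_X(3C1+C2)).
h0 = 2 + self_int(2, 1) // 2
print(h0)               # -> 5
```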
\section{Introduction}
Concerning the nuclear many-body problem the relativistic
Dirac-Brueckner-Hartree-Fock (DBHF) approach turned out to be
remarkably successful in describing the nuclear matter saturation mechanism.
The major improvement compared to non-relativistic treatments is based on an
additional density dependence introduced in the formalism by using a
self-consistent spinor basis.
To solve the Bethe-Salpeter equation, i.e. its three-dimensional reductions,
a variety of approaches relying on different techniques and various
bare nucleon-nucleon interactions have been developed over the last decade
\cite{horse87,thm87a,weigel88,nupp89,bm90,amorin92,boersma94,huber,sefu97,lee97}.
All these calculations describe nuclear matter properties reasonably
well, although not perfectly. In the meantime two main
results can be regarded as settled: first, the nuclear single particle
potential originates from the cancelation of large repulsive vector and
attractive scalar fields, and second, the magnitude
of the effective mass is reduced to a value of about $\sim 0.6 M$
at saturation density. Both findings are consistent with effective
hadron field theories \cite{sw86} where, e.g. the effective mass
can be determined from the spin-orbit interaction in
finite nuclei \cite{ring}.
Refined treatments which take into account hole-hole excitations
\cite{dejong96} or include an additional scaling
of meson properties in the nuclear medium \cite{rapp97} do not
significantly alter these results.
To test the DBHF approach over a wider range of physical problems
also finite nuclei have been studied within effective
Hartree-Fock calculations \cite{boersma94b} and within a density dependent
hadron field theory \cite{fuchs95,toki97}. For a successful
application of DBHF results to heavy ion collisions
using relativistic transport models, however, it is required
to go beyond a local density approximation and to properly account
for the non-equilibrium features of the highly anisotropic phase
space configurations in such reactions \cite{fuchs96}. As soon as nuclei
overlap and particles come close together in
configuration space, the momentum dependence of the nuclear
self-energy starts to play a crucial role
in the description of heavy ion collisions.
Unfortunately, the determination of the momentum dependence of the
nucleon self-energy is a subtle problem in the DBHF approach which has not
yet led to settled results. Generally, the techniques applied in the
standard relativistic Brueckner approach rely on a weak momentum dependence
of the self-energy inside the Fermi sea.
This assumption was supported by various calculations
in the past \cite{horse87,thm87a,weigel88}. In Ref.
\cite{nupp89} it was noted that the determination of the
self-energy leads to ambiguities arising from the
projection of the T-matrix onto positive energy states and that the
actual momentum dependence strongly depends on the choice of the
nucleon-nucleon interaction used \cite{sefu97}.
These ambiguities can be avoided when negative
energy states are taken into account in the calculation
\cite{weigel88,huber,fred97}. However, the conventional
nucleon-nucleon potentials \cite{mach89} are determined
for particle-particle scattering and an extrapolation to
anti-particles is in itself ambiguous.
To avoid the latter ambiguity we will
restrict our discussion solely to the particle sector.
In practice there exist various procedures to determine the self-energy.
In the method proposed by Horowitz and Serot \cite{horse87} one projects onto
Lorentz-invariant amplitudes of the T-matrix (or in-medium G-matrix)
\cite{horse87,thm87a,nupp89,boersma94,sefu97}. This
method is, however, not unique since pseudo-scalar (PS) and pseudo-vector (PV)
matrix elements are equivalent for on-shell particles in the positive
energy sector. Hence, it is impossible to
disentangle pseudo-scalar and pseudo-vector contributions in the
on-shell T-matrix. On the other hand, the pion exchange contribution is
known to react rather sensitively to the particular choice of its
representation \cite{tjon85,sw86}, i.e. PS or PV.
Thus in the past different choices for the representation
of the corresponding amplitude have been used in the relativistic Brueckner
scheme \cite{horse87,thm87a,nupp89}. We find that these choices strongly
influence the magnitude and in particular the momentum dependence
of the self-energy in the nuclear medium. In the present work we will discuss
various possibilities to decompose the T-matrix which are based on the
general representation of covariant amplitudes proposed in Refs.
\cite{horse87} and \cite{tjon85}. A strong momentum dependence of
the self-energy is found to originate from pseudo-scalar admixtures
due to pion-exchange contributions. These pseudo-scalar admixtures are still
present when the so-called 'pseudo-vector choice' is applied, as was done in
Refs. \cite{thm87a,nupp89,boersma94,sefu97}.
Suppressing the undesirable admixtures by making use of a
complete pseudo-vector representation, as proposed in Ref. \cite{tjon85}, we
find a much weaker momentum dependence of the self-energy components.
In an altogether different approach pursued in Refs. \cite{bm90,lee97},
the self-energy components are determined indirectly by a fit to the
single particle potential. Here one circumvents the ambiguities of
the projection methods. The problem is, however, only shifted to
another level of ambiguity, since one has to extract two functions out of
one. Thus, the fit method leads to highly ambiguous results
concerning the momentum dependence of the self-energy components.
The present paper is organized as follows: in section II we review
the structure of the nucleon self-energy in nuclear matter. Thereafter
we describe in section III the representation of the T-matrix by
Lorentz invariant amplitudes and discuss the different on-shell equivalent
choices used in the literature.
In section IV we present new results obtained for the
different choices of the T-matrix representation utilizing the Bonn A potential
as the bare nucleon-nucleon potential. There we also discuss, as a side remark,
the approach pursued in Refs. \cite{bm90,lee97}.
The paper ends with a summary and the conclusions of our work.
\section{Nucleon self-energy in nuclear matter}
The properties of dressed nucleons in nuclear matter are
expressed by the self-energy which enters
the in-medium nucleon propagator as the formal
solution of the Dyson equation
\begin{equation}
G(k) = \frac{1}{k \!\!\! / - M - \Sigma (k) +i\epsilon}
\quad .
\label{green}
\end{equation}
Due to translational and rotational invariance, parity conservation
and time reversal invariance the self-energy in isospin saturated
nuclear matter has the general form
$\Sigma = \Sigma_s - \gamma_\mu \Sigma^\mu$. It depends on the Lorentz
invariants $k^2$, $k\cdot j$ and $j^2$, with $j_\mu$ and $k_\mu$ being the
baryon current and the nucleon four-momentum, respectively \cite{sw86}.
The invariants can also be expressed in terms of
$k_0, |{\bf k}|$ and $k_{\mathrm F} $, where $k_{\mathrm F} $ denotes the Fermi momentum.
Furthermore the vector part of the self
energy has contributions proportional to $k^\mu$ and to the current
$j^\mu$. Defining the streaming velocity as
$u^\mu = j^\mu / \sqrt{j^2}$, the momentum $k^\mu$ can be decomposed
into contributions parallel and perpendicular to the streaming velocity, i.e.
$ k^\mu = (k\cdot u) u^\mu + \Delta^{\mu\nu}k_\nu$ with the
projector $\Delta^{\mu\nu} = g^{\mu\nu} - u^\mu u^\nu$.
The vector part of the self-energy can then be written covariantly
as \cite{amorin92,sehn96}
\begin{equation}
\Sigma^\mu = \Sigma_{\mathrm o} u^\mu + \Sigma_{\mathrm v} \Delta^{\mu\nu}k_\nu
\quad .
\label{sigvec1}
\end{equation}
Thus the full self-energy reads
\begin{eqnarray}
\Sigma (k,k_{\mathrm F} ) &=& \Sigma_{\mathrm s} (k,k_{\mathrm F} ) -\gamma_\mu \left[ \Sigma_{\mathrm o} (k,k_{\mathrm F} ) \, u^\mu
+ \Sigma_{\mathrm v} (k,k_{\mathrm F} ) \, \Delta^{\mu\nu}k_\nu \right]
\label{sig1} \\
&=& \Sigma_{\mathrm s} (k,k_{\mathrm F} ) -\gamma_0 \, \Sigma_{\mathrm o} (k,k_{\mathrm F} ) +
{\bf \gamma}\cdot {\bf k} \,\Sigma_{\mathrm v} (k,k_{\mathrm F} )\, |_{{\mathrm RF}}
\label{sig2}
\end{eqnarray}
where the subscript RF indicates the respective
expressions in the nuclear matter rest frame
($u^{\mu} = \delta^{\mu 0}$) \cite{horse87,thm87a}.
The $\Sigma_{\mathrm s} ,\Sigma_{\mathrm o} $ and $\Sigma_{\mathrm v} $ components are
Lorentz scalar functions which actually
depend on $k_0$,$|{\bf k}|$ and $k_{\mathrm F} $. They follow from
the self-energy matrix by taking the respective traces
\cite{sehn96}
\begin{eqnarray}
\Sigma_{\mathrm s} &=& \frac{1}{4} tr \left[ \Sigma \right]
\label{trace1}\\
\Sigma_{\mathrm o} &=& \frac{-1}{4} tr \left[ \gamma_\mu u^\mu \Sigma \right]
= \frac{-1}{4} tr \left[ \gamma_0 \, \Sigma \right]_{{\mathrm RF}}
\label{trace2}\\
\Sigma_{\mathrm v} &=& \frac{-1}{4\Delta^{\mu\nu}k_\mu k_\nu }
tr \left[\Delta^{\mu\nu}\gamma_\mu k_\nu \, \Sigma \right]
= \frac{-1}{4|{\bf k}|^2 }
tr \left[{\bf \gamma}\cdot {\bf k} \, \Sigma \right]_{{\mathrm RF}}
\label{trace3}
\quad .
\end{eqnarray}
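The projections (\ref{trace1})--(\ref{trace3}) are easily verified numerically in the rest frame with explicit Dirac matrices. The following sketch (Python/numpy; the component values and the three-momentum are purely illustrative) builds $\Sigma$ according to Eq. (\ref{sig2}) and recovers its components from the traces:

```python
import numpy as np

# Dirac matrices in the Dirac representation, metric signature (+,-,-,-)
I2 = np.eye(2)
sig = [np.array([[0, 1], [1, 0]]),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]])]
g0 = np.block([[I2, 0 * I2], [0 * I2, -I2]])
gi = [np.block([[0 * I2, s], [-s, 0 * I2]]) for s in sig]

Sig_s, Sig_o, Sig_v = -350.0, -280.0, 0.05    # illustrative components
k = np.array([120.0, -40.0, 250.0])           # nucleon three-momentum [MeV]
gk = sum(gi[i] * k[i] for i in range(3))      # gamma . k

# rest-frame self-energy matrix, Eq. (sig2)
Sigma = Sig_s * np.eye(4) - Sig_o * g0 + Sig_v * gk

# recover the Lorentz-scalar components via Eqs. (trace1)-(trace3)
s_rec = np.trace(Sigma).real / 4
o_rec = -np.trace(g0 @ Sigma).real / 4
v_rec = -np.trace(gk @ Sigma).real / (4 * (k @ k))

print(np.allclose([s_rec, o_rec, v_rec], [Sig_s, Sig_o, Sig_v]))   # -> True
```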
The Dirac equation for the in-medium spinor basis can be deduced from the
Green function (\ref{green}). Written in terms of effective
masses and momenta
\begin{eqnarray}
\quad M^* = M+ Re \, \Sigma_{\mathrm s} \quad , \quad k^{*}_\mu = k_\mu + Re \, \Sigma_\mu
\label{sig3}
\end{eqnarray}
the Dirac equation has the form
\begin{equation}
\left[ k^* \!\!\!\!\!\! / - M^* -i \, Im \, \Sigma \right] u(k) =0 .
\label{dirac}
\end{equation}
In the following we will work in the quasi-particle approximation
and therefore we neglect the imaginary part of the
self-energy from now on. Thus the effective nucleon four-momentum will be
on mass shell even above the Fermi surface, fulfilling the relation
$ k^{*}_\mu k^{*\mu} = M^{* 2}$.
Since we only deal with the real part
of the self-energy in the quasi-particle approximation we omit this in the
notation. In the nuclear matter rest frame the four-momentum
follows from Eq. (\ref{sig3})
\begin{eqnarray}
{\bf k}^* = {\bf k} (1+\Sigma_{\mathrm v} )
\quad , \quad
k^{*}_0 = E^* = \sqrt{ {\bf k}^2 (1+\Sigma_{\mathrm v} )^2 + M^{*2} }
\label{estar}
\end{eqnarray}
which allows one to eliminate the $\Sigma_{\mathrm v} $-term in the Dirac equation,
\begin{eqnarray}
\left[ ({\bf \alpha} \cdot {\bf k}) - \gamma^0 {\tilde M}^* \right] u(k) = {\tilde E}^* u(k)
\quad ,\label{dirac2}
\end{eqnarray}
by a rescaling of effective mass and
energy
\begin{eqnarray}
{\tilde M}^* = \frac{M^*}{1+\Sigma_{\mathrm v} } \quad , \quad {\tilde E}^* = \frac{E^*}{1+\Sigma_{\mathrm v} }
= \sqrt{{\bf k}^2 + {\tilde M}^{*2} }
\quad .
\label{red1}
\end{eqnarray}
From the Dirac equation in the form (\ref{dirac2})
one derives the in-medium relativistic Hamiltonian for nucleons
and thereby the operator for the single-particle potential within
the DBHF approximation, i.e. ${\hat U } = \gamma^0 \Sigma$.
The expectation value of ${\hat U}$, i.e. sandwiching ${\hat U}$ between
the effective spinor basis, yields the single particle potential
\begin{eqnarray}
U(k) = \frac{<u(k)|\gamma^0 \Sigma | u(k)>}
{< u(k)| u(k)>} =
\frac{M^{\ast}}{E^{\ast}({\bf k})}
\, <{\bar u(k)}| \Sigma | u(k)>
\label{upot1}
\end{eqnarray}
which can be evaluated as
\begin{eqnarray}
U(k,k_{\mathrm F} ) &=& \frac{M^*}{E^*} \Sigma_{\mathrm s} - \frac{ k_{\mu}^* \Sigma^\mu}{E^*} \\
&=& \frac{M^* \Sigma_{\mathrm s} }{\sqrt{ {\bf k}^2 (1+\Sigma_{\mathrm v} )^2 + M^{*2}}}
- \Sigma_{\mathrm o} + \frac{ (1+\Sigma_{\mathrm v} )\Sigma_{\mathrm v} {\bf k}^2}
{\sqrt{ {\bf k}^2 (1+\Sigma_{\mathrm v} )^2 + M^{*2}}}
\quad .
\label{upot2}
\end{eqnarray}
In many applications \cite{bm90,lee97} the single particle
potential is only given in terms of a scalar and zero-vector component.
This can be achieved in nuclear matter
by introducing reduced fields ${\tilde \Sigma_{\mathrm s} }$ and
${\tilde \Sigma_{\mathrm o} }$ as
\begin{eqnarray}
{\tilde \Sigma_{\mathrm s} } = {\tilde M}^* -M = \frac{\Sigma_{\mathrm s} - \Sigma_{\mathrm v} M}{1+\Sigma_{\mathrm v} }
\quad , \quad
{\tilde \Sigma_{\mathrm o} } = {\tilde E}^* -E = \Sigma_{\mathrm o} - {\tilde E}^* ({\bf k}) \Sigma_{\mathrm v}
\quad .
\label{red2}
\end{eqnarray}
Then the single particle potential has the form
\begin{equation}
U(k,k_{\mathrm F} ) = \frac{ {\tilde M}^* }{ {\tilde E}^* } {\tilde \Sigma_{\mathrm s} } - {\tilde \Sigma_{\mathrm o} }
\quad .
\label{upot3}
\end{equation}
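The equivalence of the two forms (\ref{upot2}) and (\ref{upot3}) of the single particle potential can be checked numerically; the sketch below (illustrative component values) evaluates both expressions for one momentum:

```python
import numpy as np

M = 938.9                                    # free nucleon mass [MeV]
Sig_s, Sig_o, Sig_v = -350.0, -280.0, 0.05   # illustrative self-energy components
k = 260.0                                    # |k| [MeV]

M_star = M + Sig_s                                   # Eq. (sig3)
E_star = np.sqrt((k * (1 + Sig_v))**2 + M_star**2)   # Eq. (estar)

# Eq. (upot2): potential from the projected components
U_proj = (M_star * Sig_s / E_star - Sig_o
          + (1 + Sig_v) * Sig_v * k**2 / E_star)

# Eqs. (red1), (red2) and (upot3): potential from the reduced fields
M_tilde = M_star / (1 + Sig_v)
E_tilde = E_star / (1 + Sig_v)
Sig_s_red = (Sig_s - Sig_v * M) / (1 + Sig_v)
Sig_o_red = Sig_o - E_tilde * Sig_v
U_red = M_tilde / E_tilde * Sig_s_red - Sig_o_red

print(np.isclose(U_proj, U_red))   # -> True
```

The agreement is exact, not numerical coincidence: inserting the definitions (\ref{red1}) and (\ref{red2}) into (\ref{upot3}) reproduces (\ref{upot2}) term by term.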
Frequently the reduced fields, Eq. (\ref{red2}), are
used rather than the projected components since they represent
the self-energy in a mean field or Hartree form. Thus they can
easily be related to effective hadron mean field theory
\cite{fuchs95,toki97}. Such a representation is
meaningful since the $\Sigma_{\mathrm v} $-contribution is a moderate
correction.
In contrast to the single particle potential which can easily
be derived from the T-matrix \cite{bm90}, the extraction of the
self-energy components is a subtle problem. In the latter case
one has to represent the T-matrix within the Clifford algebra
in the Dirac space which is not free from severe ambiguities.
Before we discuss this point in detail we briefly recall some
basic features of the relativistic Brueckner scheme. For more details
see e.g. Refs. \cite{horse87,thm87a,sefu97}.
The iteration of the Thompson equation requires the determination of
the self-consistent spinor basis, Eq. (\ref{dirac}).
In practice the problem is treated in terms of the
reduced effective mass $ {\tilde M}^* $ to avoid an explicit dependence
on the space-like $\Sigma_{\mathrm v} $ components (\ref{dirac2}).
Actually, the zero-vector component $\Sigma_{\mathrm o} $ does not enter
the self-consistency problem. In the standard treatment
\cite{horse87,thm87a,bm90,amorin92,sefu97}
the effective mass is assumed to
be entirely density dependent, i.e. a constant effective mass
$ {\tilde M}^* = {\tilde M}^* (|{\bf k}|=k_{\mathrm F} ,k_{\mathrm F} )$ generally taken at the Fermi-momentum
is used as the iteration parameter. The
effective mass evaluated at the Fermi momentum is reinserted
into the Thompson equation and this procedure is repeated until
convergence is reached. Such a treatment is self-consistent
concerning the density dependence of the effective interaction
screened by the medium and appears to
be justified if the self-energy is weakly momentum dependent
inside the Fermi-sea. If the self-energy is, however, strongly
momentum dependent, as it was observed in \cite{nupp89,sefu97},
one has, in principle, to go beyond the standard approach. Then one has to
include the momentum dependence of the effective mass in the
Thompson propagator as well as on the level of the interaction.
For the present investigations we will not include such an explicit
momentum dependence in the Brueckner scheme since first of all, this
is technically a rather involved problem. Secondly, we
will verify in the next section that the momentum dependence
of the self-energy is mainly governed by the treatment of the pion-exchange
in the T-matrix representation. Thus a more careful treatment
of the pion-exchange leads to a much weaker momentum dependence of
the self-energy, as is desirable
for the present self-consistency scheme.
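The structure of the standard iteration scheme can be sketched as follows. A simple scalar Hartree term with a hypothetical coupling constant stands in for the full G-matrix; only the fixed-point structure, with a constant effective mass taken at the Fermi momentum, is meant to be illustrated:

```python
import numpy as np

M, kF = 938.9, 268.0    # free mass and Fermi momentum [MeV]
C = 3.3e-4              # hypothetical scalar coupling (g_s/m_s)^2 [MeV^-2]

def rho_s(M_eff):
    """Scalar density of the filled Fermi sea (spin-isospin degeneracy 4)."""
    q = np.linspace(0.0, kF, 20001)
    f = q**2 * M_eff / np.sqrt(q**2 + M_eff**2)
    integral = (0.5 * (f[0] + f[-1]) + f[1:-1].sum()) * (q[1] - q[0])
    return 2.0 / np.pi**2 * integral

# fixed-point iteration: the effective mass at kF is fed back into the
# interaction until convergence, mimicking the standard DBHF treatment
M_eff = M
for _ in range(200):
    M_new = M - C * rho_s(M_eff)      # schematic Sigma_s = -C * rho_s
    if abs(M_new - M_eff) < 1e-8:
        break
    M_eff = M_new

print(0.0 < M_eff < M)   # -> True  (the effective mass is reduced)
```

With the coupling chosen above the iteration settles at $M^*/M \approx 0.6$; in the full scheme the schematic scalar term is replaced by the G-matrix folding at each iteration step.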
\section{Decomposition by Lorentz invariant amplitudes}
The self-energy for the nucleon $k$
follows from the T-matrix by integrating
over the occupied states $q$ in the Fermi sea
\begin{equation}
\Sigma (k) = -i \int \frac{d^4 q}{(2\pi )^4}
tr \left[ G_{{\mathrm D}}(q) {\hat T}(qk;qk - kq) \right] .
\label{sig4}
\end{equation}
Here ${\hat T}$ denotes the T-matrix (or G-matrix)
operator depending on four Dirac indices of the ingoing and outgoing
nucleons. Due to antisymmetrization ${\hat T}$ contains a
direct (Hartree) and an exchange (Fock)
contribution. The Dirac propagator
\begin{equation}
G_{{\mathrm D}}(q) = [q^{\ast}\!\!\!\!\! / + M^{\ast}] 2\pi i
\delta(q^{\ast 2} - M^{\ast 2})\Theta(q^{\ast 0}) \Theta(k_{\mathrm F} -|{\bf q}|)
\label{diracpr}
\end{equation}
projects onto positive energy states in the Fermi sea.
In order to project out the
self-energy components, Eqs. (\ref{trace1}--\ref{trace3}), the
T-matrix has to be decomposed into Lorentz invariants. Since we need to
consider only on-shell scattering of particles in Eq. (\ref{sig4}),
five invariant amplitudes with five covariants are sufficient to
represent the on-shell T-matrix \cite{tjon85}.
In this case, the scalar, vector, tensor, axial-vector and
pseudo-scalar terms
\begin{eqnarray}
S = 1\otimes 1 \quad , \quad
V = \gamma^{\mu}\otimes \gamma_{\mu} \quad , \quad
T = \sigma^{\mu\nu}\otimes\sigma_{\mu\nu} \quad , \quad
A = \gamma_5 \gamma^{\mu}\otimes \gamma_5 \gamma_{\mu}\quad , \quad
P = \gamma_5 \otimes \gamma_5
\end{eqnarray}
form a linearly independent, though not unique, set of covariants.
Using this special set, the on-shell T-matrix can be represented entirely
by 'direct' amplitudes
\begin{equation}
{\hat T} (\theta) =
F_1 (\theta) S
+ F_2 (\theta) V
+ F_3 (\theta) T
+ F_4 (\theta) A
+ F_5 (\theta) P
\quad .
\label{cov3}
\end{equation}
The covariant amplitudes $F_i$ are determined from anti-symmetrized
plane wave helicity matrix amplitudes
\cite{mach89} which obey the selection rule $(-)^{L+S+I} =-1$.
We solve the relativistic Thompson equation for the T-matrix
\cite{sefu97} consistently in the two-particle center-of-mass frame.
Since we need only diagonal matrix elements for
the self-energy (\ref{sig4}) the $F_i$ amplitudes
are required at the scattering angle
$\theta = 0$. They are
already anti-symmetrized and contain implicitly direct and
exchange contributions. Therefore the simple representation (\ref{cov3})
is sufficient to calculate the self-energy.
If we insert the T-matrix given by Eq. (\ref{cov3}) into
Eq. (\ref{sig4}) and take the trace over the Dirac propagator,
only the scalar and vector amplitudes $F_1$ and $F_2$ survive
and contribute to the self-energy,
\begin{equation}
\Sigma (k) = -i \int \frac{d^3 q}{(2\pi )^3}
\frac{ \Theta (k_{\mathrm F} - |{\bf q}|)}{ E^* ({\bf q})}
\left[ M^{\ast} F_1 (0) + q^{\ast}\!\!\!\!\! / \, F_2 (0) \right] .
\label{sig4b}
\end{equation}
The representation of ${\hat T}$ is, however, not uniquely defined
in the on-shell case and therefore various alternative possibilities
exist to construct the set of five independent covariants in the
subspace of positive energies. Although the different
representations discussed below are all equivalent if one works with
the pseudo-scalar covariant $P$, their difference becomes
crucial as soon as one switches from the pseudo-scalar to the
pseudo-vector representation. The PV covariant in the medium is defined as
\begin{equation}
PV = \frac{k_{2}^* \!\!\!\!\! / - k_{1}^* \!\!\!\!\! /}{2M^{\ast}}\gamma_5
\otimes
\frac{q_{2}^* \!\!\!\!\! / - q_{1}^* \!\!\!\!\! /}{2M^{\ast}}\gamma_5
\end{equation}
with $k_{1}^*,q_{1}^* $ the initial and $k_{2}^*,q_{2}^* $
the final momenta of the scattering particles. The ambiguity
of the decomposition procedure arises
from the fact that the PS and PV matrix elements are identical
(using the Dirac eq.) for on-shell
scattering of positive energy states, i.e.
\begin{equation}
{\overline u}(k_2 )\left(
\frac{k_{2}^* \!\!\!\!\! / - k_{1}^* \!\!\!\!\! /}{2M^{\ast}}
\right) \gamma_5 u(k_1 ) = {\overline u}(k_2 )\gamma_5 u(k_1 )
\quad .
\label{mat1}
\end{equation}
Thus the corresponding amplitudes are identical as well and
it is impossible to uniquely disentangle the PS and PV contributions
in the T-matrix.
However, it is known from meson theory of the
nuclear interaction that a pseudo-vector representation of
the $\pi N$ coupling is preferable \cite{mach89}.
The influence of the pion in the
relativistic theory has been discussed in detail, e.g. in
Refs. \cite{sw86,tjon85}. There it was shown that the one-pion exchange
contribution to the nuclear optical potential tends to increase
drastically at low momenta if the $\pi N$ vertex is treated as PS.
One reason for this behavior
is a strong PS coupling to negative energy states which is not apparent
in non-relativistic approaches. The PV vertex suppresses the coupling
to antiparticles since the overlap matrix elements vanish
for on-shell scattering
\begin{equation}
{\overline v}(k_2 )\left(
\frac{k_{2}^* \!\!\!\!\! / - k_{1}^* \!\!\!\!\! /}{2M^{\ast}}
\right) \gamma_5 u(k_1 ) = 0
\quad .
\label{mat2}
\end{equation}
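Both on-shell statements, Eqs. (\ref{mat1}) and (\ref{mat2}), can be verified directly with explicit spinors. The sketch below (Dirac representation; the momenta, spins and effective mass are arbitrary illustrative choices) evaluates the PS and PV matrix elements numerically:

```python
import numpy as np

I2 = np.eye(2)
sig = [np.array([[0, 1], [1, 0]]),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]])]
g0 = np.block([[I2, 0 * I2], [0 * I2, -I2]])
gi = [np.block([[0 * I2, s], [-s, 0 * I2]]) for s in sig]
g5 = np.block([[0 * I2, I2], [I2, 0 * I2]])

def slash(p, m):
    """Feynman slash of the on-shell four-momentum (E(p), p)."""
    E = np.sqrt(p @ p + m**2)
    return E * g0 - sum(gi[i] * p[i] for i in range(3))

def u(p, m, s):
    """Positive-energy Dirac spinor (Dirac representation)."""
    E, chi = np.sqrt(p @ p + m**2), np.eye(2)[:, s]
    low = sum(sig[i] * p[i] for i in range(3)) @ chi / (E + m)
    return np.sqrt(E + m) * np.concatenate([chi, low])

def v(p, m, s):
    """Negative-energy (antiparticle) Dirac spinor."""
    E, chi = np.sqrt(p @ p + m**2), np.eye(2)[:, s]
    up = sum(sig[i] * p[i] for i in range(3)) @ chi / (E + m)
    return np.sqrt(E + m) * np.concatenate([up, chi])

M = 600.0                             # illustrative effective mass [MeV]
k1, k2 = np.array([50.0, 0.0, 120.0]), np.array([-80.0, 30.0, 200.0])
u1, u2, v2 = u(k1, M, 0), u(k2, M, 1), v(k2, M, 1)

pv = (slash(k2, M) - slash(k1, M)) / (2 * M) @ g5   # PV vertex of Eq. (mat1)

# Eq. (mat1): PS and PV matrix elements coincide between positive-energy states
lhs = u2.conj() @ g0 @ pv @ u1
rhs = u2.conj() @ g0 @ g5 @ u1
print(np.isclose(lhs, rhs))                         # -> True

# Eq. (mat2): the PV vertex decouples the negative-energy states
print(abs(v2.conj() @ g0 @ pv @ u1) < 1e-6)         # -> True
```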
Therefore a PV vertex is more consistent with the
approximation scheme of the conventional Brueckner model where
the coupling to negative energy states is not taken into account.
The PV vertex also strongly suppresses the
pion contribution in particular at low energies which is more
in accordance with the empirical knowledge from proton-nucleus scattering
\cite{sw86,tjon85}.
Consequently, the $\pi N$ vertex of the bare interaction
is usually treated as PV \cite{mach89}.
Due to these facts the usage of a PV covariant in the
decomposition of the T-matrix is considered as preferable
\cite{thm87a,sefu97}. However, simply replacing $P$ by $PV$
in Eq. (\ref{cov3}) leads to identical results; firstly,
because the corresponding amplitudes are equal and secondly,
because only the scalar and vector amplitudes $F_1$ and $F_2$
contribute to the self-energy (\ref{sig4b}). Motivated by this
fact alternative representations of the T-matrix have been used
\cite{thm87a,nupp89,sefu97} which lead to an -- in principle
superfluous -- explicit splitting into 'direct' and 'exchange' contributions
and give different results when the
$PS \longmapsto PV $ replacement is performed.
\subsection{Symmetrized representations}
Based on Ref. \cite{gold}
Tjon and Wallace discussed a representation which accounts for
the structure of the exchange contributions in ${\hat T}$ in
a more transparent way \cite{tjon85}. To express exchange contributions
also the Dirac indices of the covariants are interchanged.
The transformation
${\tilde S}$ interchanges the Dirac indices of particles 1 and 2, i.e.
${\tilde S} u(1)_\sigma u(2)_\tau = u(1)_\tau u(2)_\sigma$. Thus one obtains
the interchanged covariants as
${\tilde S} = {\tilde S} S,{\tilde V} = {\tilde S} V, {\tilde T} = {\tilde S} T, {\tilde A} = {\tilde S} A$ and ${\tilde P} = {\tilde S} P$.
The interchanged covariants are related to the original covariants
(\ref{cov3}) by a Fierz transformation ${\cal F}$ \cite{tjon85}
\begin{eqnarray}
\left(
\begin{array}{c} {\tilde S} \\ {\tilde V} \\ {\tilde T} \\ {\tilde A} \\ {\tilde P} \end{array}
\right)
= \frac{1}{4}
\left(
\small{
\begin{array}{ccccc}
1 & 1 & \frac{1}{2} & -1 & 1 \\
4 & -2 & 0 & -2 & -4 \\
12 & 0 & -2 & 0 & 12 \\
-4 & -2 & 0 & -2 & 4 \\
1 & -1 & \frac{1}{2} & 1 & 1
\end{array}}
\right)
\left(
\begin{array}{c} S \\ V \\ T \\ A \\ P \end{array}
\right) .
\label{fierz}
\end{eqnarray}
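Since interchanging the Dirac indices twice restores the original order, the Fierz matrix must square to the identity, ${\cal F}^2 = 1$. This provides a quick numerical consistency check of the matrix above:

```python
import numpy as np

# Fierz matrix of Eq. (fierz), relating interchanged to direct covariants
F = 0.25 * np.array([
    [ 1.0,  1.0, 0.5, -1.0,  1.0],
    [ 4.0, -2.0, 0.0, -2.0, -4.0],
    [12.0,  0.0, -2.0, 0.0, 12.0],
    [-4.0, -2.0, 0.0, -2.0,  4.0],
    [ 1.0, -1.0, 0.5,  1.0,  1.0]])

# interchanging twice must give back the original covariants: F @ F = identity
print(np.allclose(F @ F, np.eye(5)))   # -> True
```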
For a definite scattering angle $\theta$ the T-matrix is now
represented by five symmetrized covariants \cite{tjon85}
\begin{equation}
{\hat T} (\theta) =
f_1 (\theta) (S - {\tilde S})
+ f_2 (\theta) \frac{1}{2}(T + {\tilde T})
- f_3 (\theta) (A - {\tilde A})
+ f_4 (\theta) (V + {\tilde V})
+ f_5 (\theta) (P - {\tilde P})
\quad .
\label{cov1}
\end{equation}
These symmetrized covariants are constructed so that the amplitudes
$f_i$ account for the Pauli principle in a transparent way.
Interchanging the outgoing particles,
anti-symmetrization requires the following symmetry
\begin{equation}
f^{\rm I}_i (\pi-\theta) = (-)^{\rm I+ i}f^{\rm I}_i (\theta)
\label{asym}
\end{equation}
for definite isospin $I=0,1$.
The relation between the five new amplitudes $f_i$ and the former amplitudes
$F_i$ is given by the transformation \cite{tjon85}
\begin{eqnarray}
\left(
\begin{array}{c} f_1 \\ f_2 \\ f_3 \\ f_4 \\ f_5 \end{array}
\right)
= \frac{1}{4}
\left(
\small{
\begin{array}{ccccc}
2 & -4 & -12 & 0 & 0 \\
1 & 0 & 4 & 0 & 1 \\
0 & -2 & 0 & -2 & 0 \\
1 & 2 & 0 & -2 & -1 \\
0 & 4 & -12 & 0 & 2
\end{array}}
\right)
\left(
\begin{array}{c} F_1 \\ F_2 \\ F_3 \\ F_4 \\ F_5 \end{array}
\right) .
\label{transform}
\end{eqnarray}
With relation (\ref{asym}) one can express ${\hat T}$ as the combination of
two terms which resemble a direct and an exchange contribution,
i.e. ${\hat T} = {\hat T}^{\rm D} - {\hat T}^{\rm X} $, by
\begin{eqnarray}
{\hat T}^{\rm D} (\theta) &:=&
f_1 (\theta) S
+ f_2 (\theta) \frac{1}{2} T
- f_3 (\theta) A
+ f_4 (\theta) V
+ f_5 (\theta) P \label{cov2}
\\
{\hat T}^{\rm X} (\theta) &:=& (-)^{\rm I+ 1} \left[
f_1 (\pi - \theta) {\tilde S}
+ f_2 (\pi -\theta) \frac{1}{2} {\tilde T}
- f_3 (\pi -\theta) {\tilde A}
+ f_4 (\pi -\theta) {\tilde V}
+ f_5 (\pi -\theta) {\tilde P} \right]
\quad .
\nonumber
\end{eqnarray}
However, it cannot be claimed that ${\hat T}^{\rm D}$ and
${\hat T}^{\rm X}$ are the real direct and exchange matrix elements;
only their combination yields the fully anti-symmetrized
matrix elements.
The representation proposed
by Horowitz and Serot \cite{horse87} and also
used in other works \cite{thm87a,sefu97} is of a similar
structure as the one above with the difference that it is
based on direct amplitudes
\begin{eqnarray}
{\hat T}^{\rm D} (\theta) &:=& \frac{1}{2} \left[
F_1 (\theta) S
+ F_2 (\theta) V
+ F_3 (\theta) T
+ F_4 (\theta) A
+ F_5 (\theta) P \right] \label{cov4}
\\
{\hat T}^{\rm X} (\theta) &:=& (-)^{\rm I+ 1} \frac{1}{2}\left[
F_1 (\pi -\theta) {\tilde S}
+ F_2 (\pi -\theta) {\tilde V}
+ F_3 (\pi -\theta) {\tilde T}
+ F_4 (\pi -\theta) {\tilde A}
+ F_5 (\pi -\theta) {\tilde P} \right]
\quad .
\nonumber
\end{eqnarray}
To be more precise, in this approach both sets of amplitudes $ F_i (\theta)$
and $ F_i (\pi -\theta)$
are obtained from the helicity matrix elements applying representation
(\ref{cov3}) at angle $\theta$ and $\pi -\theta$, respectively.
Doing this, the amplitudes $ F_i $
are essentially different from the $f_i$ amplitudes of Eq. (\ref{cov1}).
Instead of relation (\ref{asym}) anti-symmetrization requires now
that the exchange amplitudes are connected to the direct amplitudes
by the Fierz transformation (\ref{fierz})
\begin{equation}
F^{\rm I}_i (\pi-\theta) = (-)^{\rm I} {\cal F}_{ji} F^{\rm I}_j (\theta)
\label{asym2}
\quad .
\end{equation}
The difference between the representations (\ref{cov3}) and
(\ref{cov4}) lies in the fact that Eq. (\ref{cov4})
leads to an explicit splitting into direct and exchange contributions
which, however, can be regarded
as -- at least partially -- artificial. Since the amplitudes
$F_i (\theta)$ and $F_i (\pi -\theta)$ in (\ref{cov4}) are
determined from already anti-symmetrized helicity
matrix elements one has in that case the identity
\begin{equation}
{\hat T}^{\rm X} = - {\hat T}^{\rm D}
\quad .
\end{equation}
Hence the normalization
factor 1/2 which determines the splitting into 'direct' and
'exchange' parts in (\ref{cov4}) is a matter of convention and could also be
fixed differently by any normalized linear combination. In this
context we note that for the original work of Horowitz and
Serot \cite{horse87} this statement would not hold, because they
used non-antisymmetrized (unphysical) helicity states in their formalism
and therefore they had to anti-symmetrize explicitly the T-matrix
elements by splitting the representation into direct and exchange
contributions as done in Eq. (\ref{cov4}). As a consequence, their amplitudes
$F_i$ at angles $\theta$ and $\pi-\theta$ did not fulfill the anti-symmetry
relation (\ref{asym2}) and thus only the representation (\ref{cov4})
was physically meaningful.
Working with physical helicity states, however,
one retains relation (\ref{asym2}) and therefore all types of
decompositions, Eqs. (\ref{cov3}-\ref{cov4}), are equivalent as
long as one restricts oneself to a pseudo-scalar representation.
But if one replaces the PS by the PV covariant the
equivalence of the expressions
(\ref{cov3}), (\ref{cov1}) and (\ref{cov4}) is destroyed.
Our 'optimal choice' of the representation for using
the PV covariant will be discussed in the next section.
\subsection{Pseudo-vector representation}
The nucleon-nucleus potential is very sensitive to the
pseudo-scalar or pseudo-vector treatment of the
pion-nucleon interaction. As already mentioned,
the pion should be preferentially treated
with a pseudo-vector coupling in the bare
NN interaction. The corresponding coupling strength
is fixed in such a way that it
reproduces the on-shell PS coupling strength in the vacuum \cite{mach89}.
This already leads to a suppression of the vertex by
a factor $(\frac{M^*}{M})^2 $ inside the nuclear medium \cite{sw86}.
However, the major suppression of the pion exchange contribution
to the nucleon self-energy originates from the different cofactors
which arise if one inserts the Dirac propagator (\ref{diracpr})
and the T-matrix from (\ref{cov3}) or (\ref{cov4}) into the
Eq. (\ref{sig4}) for the self-energy \cite{thm87a,sefu97}, i.e.
\begin{eqnarray}
tr \left[ ( q^{\ast}\!\!\!\!\! / + M^{\ast} ) {\tilde P} \right]
&=& - ( q^{\ast}\!\!\!\!\! / + M^{\ast} )
\label{traceps}
\\
tr \left[ ( q^{\ast}\!\!\!\!\! / + M^{\ast} ) {\widetilde {PV} } \right]
&=& ( k^{\ast}\!\!\!\!\! / + M^{\ast} )
\left( \frac{ k^{*}_\mu q^{* \mu} }{2 M^{* 2}} - \frac{1}{2} \right)
\quad .
\label{tracepv}
\end{eqnarray}
The influence of the pion, in particular the sensitivity to the
PS or PV representation, is most clearly demonstrated at the
Hartree-Fock level. This means that
${\hat T}$ is replaced by the
bare NN interaction ${\hat V}$ and no further medium effects are
taken into account, i.e. bare nucleon masses are used and the
Pauli operator in the Thompson equation is neglected. Furthermore,
the Hartree-Fock expressions are known analytically \cite{muehf}
which provides also a straightforward check of the involved projection
techniques. It is a well known fact \cite{sw86} that the
PS treatment yields extremely large pion
contributions to the self-energy whereas the PV representation
suppresses these terms almost completely. This effect is demonstrated
in Fig. 1 where the Hartree-Fock contributions of the pion
only to the nuclear self-energy are shown. The calculations are
performed for a nuclear matter density of $\rho = 0.166 $ fm$^{-3}$ with
a PS and PV (denoted as 'full PV' in Fig. 1)
$\pi$NN coupling of $g^{2}_\pi / 4\pi = 14.9$
and the pion form factor taken from the Bonn A interaction \cite{mach89}.
The bare nucleon mass is used in this example.
It is seen that the PS description of the
pion exchange yields extremely large self-energy components at low
momenta which decrease rapidly with increasing momentum. The
$\Sigma_{\mathrm s} $ and $\Sigma_{\mathrm o} $ contributions are almost identical which leads
to a strong cancelation effect in the single particle potential. The
PV description (denoted as 'full PV' in Fig. 1)
suppresses the pion by nearly two orders of magnitude
and even on this new scale the momentum dependence is much less
pronounced. Remarkably, scalar and vector contributions have now
opposite signs. Thus they add up in the potential (\ref{upot3})
and the remaining
momentum dependence is not the remnant of a huge cancellation effect as
in the PS case.
For comparison we also show the results which are
obtained in the projection scheme when the so-called 'PV choice'
is adopted. In the 'PV choice' commonly used \cite{thm87a,sefu97}
we simply replace the pseudo-scalar covariants by the pseudo-vector
covariants $P,{\tilde P} \longmapsto PV,{\widetilde {PV} } $ in the decomposition of
the T-matrix (\ref{cov4}). Due to the on-shell equivalence of the respective
matrix elements, Eq. (\ref{mat1}), the amplitudes thereby remain unchanged.
Now the explicit choice of the T-matrix representation
starts to play a decisive role since
the PS and PV exchange terms contribute differently to the self-energy,
Eqs. (\ref{traceps},\ref{tracepv}).
Thus the strength of the PS $\longmapsto$ PV replacement,
determined by the respective amplitudes $f_5 (\theta)$ and
$F_5 (\pi -\theta)$, becomes important.
In most previous calculations \cite{thm87a,sefu97}
the representation (\ref{cov4}) was used. The effect of the
replacement can be seen in Fig. 1 where the Hartree-Fock contribution
of the pion exchange to the nucleon self-energy using the
PV choice is shown.
First of all, it should be noted that the result using the PV choice
for (\ref{cov4}) is identical to the result which one obtains
if one uses the replacement in the symmetrized representation,
Eq. (\ref{cov1}), since both amplitudes $f_5 (\theta)$ and
$F_5 (\pi-\theta)$ agree in the Hartree-Fock approach with
only pion exchange.
It is evident from Fig. 1 that within the
traditional PV choice the pion is not treated correctly
as a pseudo-vector particle. The PV choice representation
is rather a mixture of the PS and a complete PV
representation. Although the pion contribution to the nucleon self-energy
is suppressed by about a factor of two compared to the original PS case,
the structure of the self-energies, in particular the pronounced momentum
dependence, is still very similar. The reason for this behavior of the
self-energy is easy to understand. Even after replacing
$P,{\tilde P}$ by $PV,{\widetilde {PV} } $, both representations, Eqs. (\ref{cov1})
and (\ref{cov4}), still contain pseudo-scalar admixtures because
the Fierz transformation (\ref{fierz}) couples all covariants.
This leads to identities for the symmetrized
vector and tensor covariants \cite{tjon85}
\begin{eqnarray}
\frac{1}{2} ( T+{\tilde T} ) &=& S+{\tilde S} +P+{\tilde P}
\\
V+{\tilde V} &=& S+{\tilde S} -P-{\tilde P}
\label{iden}
\quad .
\end{eqnarray}
In order to completely remove the PS part from the interaction
one first should use the identities above which leads to the
following decomposition \cite{tjon85}
\begin{eqnarray}
{\hat T} (\theta) &=&
( f_1 + f_2 +f_4 ) S
- ( f_1 - f_2 -f_4 ) {\tilde S}
- f_3 (A - {\tilde A})
\nonumber \\
&&
+ ( f_5 + f_2 -f_4 ) P
- ( f_5 - f_2 +f_4 ) {\tilde P}
\quad .
\label{cov5}
\end{eqnarray}
If the replacement $P,{\tilde P} \longmapsto PV,{\widetilde {PV} } $ is now performed
in Eq. (\ref{cov5}) instead of Eq. (\ref{cov1}) or Eq. (\ref{cov4})
this yields a complete PV representation of the interaction which we will
call 'full PV' representation in Fig. 1 and in the following.
Such a decomposition can successfully describe the PV pion exchange
on the Hartree-Fock level, i.e. the results calculated with the
analytically known Hartree-Fock matrix elements \cite{muehf}
for the PV pion-exchange are reproduced.
In the present formalism this can not be achieved by the other
decompositions. On the other hand, however, e.g. the $\omega$ exchange is
no longer treated accurately in the full PV representation
since PS admixtures arising from the Fock part of the $\omega$
exchange, Eq. (\ref{iden}), are treated as PV. This bias will, however,
turn out to be small. In any case, all existing
decompositions are unable to handle the PV pion
exchange simultaneously with the remaining set of mesons
on the Hartree-Fock level. As we will see later on, the one-pion
exchange dominates the momentum dependence of the self-energy.
Hence, it is reasonable to require
that the PV pion-exchange is treated exactly on the Hartree-Fock level.
This constraint is fulfilled adopting the full PV treatment.
\section{Results}
In the present section we study the impact of the various
representations of the T-matrix (\ref{cov3}), (\ref{cov1}), (\ref{cov4})
and (\ref{cov5}) on the nucleon self-energy and
on related quantities.
\subsection{Momentum dependence of the self-energy}
On the level of the self-energy the effect of the different
choices can be summarized as
\begin{eqnarray}
\Sigma (k) = \Sigma^{{\rm PS}} (k) - \delta\Sigma (k)
= \Sigma^{{\rm PS}} (k)
+ i \int \frac{d^4 q}{(2\pi )^4}
f_R (kq;qk) ~tr \left[ G_{{\mathrm D}} (q) ( {\tilde P} -{\widetilde {PV} } ) \right]
\label{sig5}
\end{eqnarray}
with $\Sigma^{{\rm PS}}$ being the self-energy given in the pure PS
representation. The different approaches for the self-energy $\Sigma$
using for the T-matrix Eqs. (\ref{cov3}), (\ref{cov1}), (\ref{cov4})
or (\ref{cov5}) are only varying in the choice of $f_R$ explained
below. The strength of the suppression of the
pseudo-scalar contributions is moderated by the $f_R$ amplitude
which also determines the deviation $\delta\Sigma$ of the self-energy
from the PS case.
In principle this shift is only apparent in the
decomposition of $\Sigma$ in the scalar and vector components, Eq.
(\ref{sig1}), but vanishes when the complete matrix elements are built, i.e.
\begin{equation}
\langle {\bar u}(k)| \delta \Sigma (k) | u(k)\rangle = 0
\quad .
\label{iden2}
\end{equation}
For the same effective mass $M^* (k_{\mathrm F} )$ all approaches give
the same total single particle potential (\ref{upot1})
although they yield
quite different scalar and vector self-energy components.
However, the different approaches also
yield different values for the effective mass $M^* (k_{\mathrm F} )$ which
leads to a different in-medium spinor basis $|u\rangle$ used in
the self-consistent iteration procedure, and the equivalence
for the single particle potential is distorted.
Consequently, the different approaches lead to visible
changes also for those quantities which are built from
total matrix elements, i.e. the single particle potential and the
equation-of-state. If the value of $M^*$ is, however, kept
fixed the equivalence on the level of matrix elements is
exact and also numerically fulfilled with high accuracy.
The cases discussed in the previous section modify only $f_R$
in Eq. (\ref{sig5}) and can now be summarized as
\begin{eqnarray}
f_R = \left\{
\begin{array}{ccc}
0 & \quad , \quad & {\rm PS} \\
(-)^{\rm I+ 1} \frac{1}{2} F_5 (\pi-\theta)
& \quad , \quad & {\rm 'PV~choice'} \\
f_5 (\theta)- f_2 (\theta) +f_4 (\theta)
& \quad , \quad & {\rm 'full~PV'}
\end{array}
\right.
\label{choice}
\end{eqnarray}
Since earlier works on relativistic Brueckner theory which adhered
to the projection method \cite{thm87a,nupp89,sefu97} applied
the scheme proposed by Horowitz and Serot
(Eq. (\ref{cov4}) with $P$ and ${\tilde P}$ replaced by $PV$ and ${\widetilde {PV} }$,
respectively, our 'PV choice') we will discuss this
method first. Within this scheme the structure of the
self-energy, i.e. its density and momentum
dependence has been investigated in detail in Ref. \cite{sefu97}
with the Bonn potentials \cite{mach89} as
the bare NN-interaction. As the most prominent result we observed a
strong momentum dependence of the scalar and time-like
vector self-energy components around the Fermi momentum. These findings
are in qualitative agreement with a previous work by
Nuppenau {\it et al.} \cite{nupp89}.
Fig. \ref{fig2} shows the momentum dependence
of the three self-energy components $\Sigma_{\mathrm s} , \, \Sigma_{\mathrm o} ,\, k_{\mathrm F} \Sigma_{\mathrm v} $ at
nuclear matter density $\rho = 0.166\,\mathrm{fm}^{-3}$ obtained with the
Bonn A interaction. The space-like $\Sigma_{\mathrm v} $ component is found to be
relatively large inside the Fermi-sea and decreases
with increasing momentum. Furthermore we compare to the corresponding reduced
fields ${\tilde \Sigma_{\mathrm s} },\,{\tilde \Sigma_{\mathrm o} }$ where the $\Sigma_{\mathrm v} $-term is
effectively included, Eq. (\ref{red2}). It is seen that the
reduced fields are significantly
smaller in magnitude at low momenta and generally show a less pronounced
momentum dependence. The inclusion of the $\Sigma_{\mathrm v} $-term
counterbalances the strong momentum dependence to some extent.
The negative $\Sigma_{\mathrm v} $-contribution effectively weakens the
momentum dependence whereas in the opposite
case \cite{amorin92,fred97}
the momentum dependence will be enhanced. In the
limit of a vanishing $\Sigma_{\mathrm v} $-contribution reduced and projected
components are equal.
Fig.~\ref{fig3} demonstrates the influence of the various mesons
(using Bonn A).
Taking only $\sigma$ and $\omega$ exchange into account the
result is quite similar to that obtained in Ref. \cite{horse87},
i.e. the momentum dependence is flat inside the Fermi sea. Including
the pion we are already very close to the full result. The
strong momentum dependence of the present calculation originates to
a large extent from the pion-exchange. Hence, the calculation
still reflects the Hartree-Fock results (Fig.~\ref{fig1}) and
the strong momentum dependence originates mainly from
pseudo-scalar contributions of the pion.
In Fig.~\ref{fig4} the corresponding self-energies obtained
for the various decompositions are compared. Adopting the
full PV representation the space-like $\Sigma_{\mathrm v} $ contribution
turns out to be much smaller than in the PS or the
standard PV choice (see also Fig. \ref{fig2}).
Therefore we show the reduced self-energies
${\tilde \Sigma_{\mathrm s} }$ and ${\tilde \Sigma_{\mathrm o} }$ in which $\Sigma_{\mathrm v} $
is included for a better comparison.
Both the PS and the PV choice show a similar strong
momentum dependence which again reflects the pseudo-scalar nature of
the pion exchange. Consistent with the previous
considerations this momentum dependence vanishes to a large
extent when the pion contribution is suppressed by the full PV
representation of the T-matrix. At high energies the different
choices yield similar results since the influence of the pion
decreases. The pure PS and the full
PV representation can be regarded as the limiting
cases which give the range of uncertainty in the determination
of the self-energy.
The full PV representation has thereby the big advantage
that due to the weak momentum dependence the
standard treatment of the Thompson equation, i.e. to approximate
the effective mass by its constant value at the Fermi surface,
can be safely applied as done in our and almost all other works on
this subject. Furthermore this method ensures by
construction that it treats the pion correctly
at the Hartree-Fock level.
Thus the full PV representation comes closest of all discussed
representations to what we would call the optimal representation
of the T-matrix. It is also worthwhile to mention that the
results obtained within this representation agree well with
recent calculations which include negative energy
states and thus avoid the
projection procedure \cite{fred97}.
\subsection{Single particle potential and the fit method}
As discussed above, the single particle potential, Eq. (\ref{upot1}),
is in principle independent of the representation of the T-matrix, which
is again due to the on-shell equivalence of the corresponding
matrix elements, Eq. (\ref{mat1}).
However, significantly different values
of $M^*$ obtained in the different approaches lead
to different results. Although this effect is
reduced using the rescaled effective
mass $ {\tilde M}^* $, the equation-of-state
reacts sensitively to this $M^*$ dependence, as can be seen from
table 1. A suppression of the PS contributions causes a larger effective
mass and thus suppresses relativistic effects which
originate from the mixing of small and large components of the spinors.
A pure PS treatment leads to more binding
and shifts the saturation point to higher densities. The corresponding
equation-of-state is rather stiff. From the empirical knowledge of
the nuclear saturation properties the PS representation can be ruled out
since the saturation density is much too high and the effective mass
is much too small. The remaining two methods are in rough agreement
with the empirical constraints, however, the densities are also
slightly too high compared to the experimental Fermi momentum
of about $k_{\mathrm F} = 1.37\,\mathrm{fm}^{-1}$.
Here the larger value of the effective mass
favors the 'full PV' representation since it is in better
agreement with the constraints derived from the spin-orbit
splitting in finite nuclei \cite{ring}. Compared to the 'PV choice',
the 'full PV' representation leads to more binding and makes the
equation-of-state softer, in agreement with the incompressibility
derived from the isoscalar giant monopole resonance of about
K $\approx$ 230 MeV. One might assume that this is due to the
fact that the 'full PV' representation suppresses part of the repulsive
$\omega$-exchange. This is however, not the case since the inaccuracy
which arises in the $PS \longmapsto PV$ replacement
procedure concerning the vector and
tensor exchange amplitudes, Eqs. (\ref{iden}), has only a minor influence
on the final results. We checked this point by treating the
Hartree-Fock contribution of the one-pion-exchange separately in the
PV representation while for the remaining part of the T-matrix
the PS representation was retained. In terms of the self-energy this
means to set $f_R = f^{X}_\pi $ in (\ref{sig5}) with $f^{X}_\pi $
the Hartree-Fock amplitude of the one-pion exchange.
In view of the problems which arise in the determination of
the self-energy we now briefly discuss a frequently used and much simpler
approach first applied by Brockmann and Machleidt
\cite{bm90}. In this approach one tries to extract the self-energy components
directly from the single particle potential,
thus one avoids taking the explicit traces (\ref{trace1}--\ref{trace3}).
Therefore there is no need to decompose the T-matrix into its
Lorentz invariants.
Actually, Brockmann and Machleidt determined
only density dependent but not momentum
dependent mean
values for ${\tilde \Sigma_{\mathrm s} },\, {\tilde \Sigma_{\mathrm o} }$ by a fit to $U$
given by Eq. (\ref{upot3}). This
fit method works reasonably well as long as one restricts oneself to
density dependent observables on the one-body level.
The corresponding (reduced) effective
masses are relatively close to our results at $k_{\mathrm F} $
obtained within the 'PV choice' \cite{sefu97}. Thus it is
understandable that the resulting nuclear matter saturation
properties are similar in the two approaches of
Refs. \cite{bm90} and \cite{sefu97},
see Tab.1. However, any information on the
magnitude of the space-like $\Sigma_{\mathrm v} $-contribution and the
explicit momentum dependence of the self-energy is completely lost
in this approach. To overcome this drawback
the Stony Brook group \cite{lee97} extracted
momentum dependent fields from the single particle potential.
They assumed a functional
dependence of the (reduced) self-energies of the form
\begin{equation}
{\tilde \Sigma_{{\mathrm s,o}}} (k) =
\frac{ \alpha_{{\mathrm s,o}} }{ 1+\beta_{{\mathrm s,o}}
(k/k_{\mathrm F} )^{\gamma_{{\mathrm s,o}}} }
\label{fit1}
\end{equation}
at fixed density $k_{\mathrm F} $. The set of six parameters
$\{\alpha ,\beta , \gamma \}_{{\mathrm s,o}}$ was then determined by
a least square fit to $U$, i.e. by minimizing
\begin{equation}
\chi^2 = \int_{0}^{k_{\mathrm F} } dk k^2 \left[ U(k,k_{\mathrm F} ) -
\left(\frac{ {\tilde M}^* }{ {\tilde E}^* } {\tilde \Sigma_{\mathrm s} } - {\tilde \Sigma_{\mathrm o} } \right)
\right]^2
\quad .
\label{fit2}
\end{equation}
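As a concrete illustration of the fit procedure of Eqs. (\ref{fit1}) and (\ref{fit2}), the following self-contained Python sketch builds a single particle potential from a known parameter set of the trial form and evaluates the discretized $\chi^2$ integral. All numerical values (masses, Fermi momentum, parameter set) are purely illustrative assumptions and are not taken from the calculations of this work.

```python
import math

M = 939.0   # free nucleon mass in MeV (illustrative)
kF = 270.0  # Fermi momentum in MeV (illustrative)

def trial(k, alpha, beta, gamma):
    """Trial function of Eq. (fit1): alpha / (1 + beta (k/kF)^gamma)."""
    return alpha / (1.0 + beta * (k / kF) ** gamma)

def single_particle_potential(k, pars):
    """Schematic U(k) = (M*/E*) Sigma_s - Sigma_o built from trial self-energies."""
    a_s, b_s, g_s, a_o, b_o, g_o = pars
    sig_s = trial(k, a_s, b_s, g_s)   # scalar self-energy (attractive)
    sig_o = trial(k, a_o, b_o, g_o)   # time-like vector self-energy
    m_eff = M + sig_s
    e_eff = math.sqrt(k * k + m_eff * m_eff)
    return (m_eff / e_eff) * sig_s - sig_o

def chi2(pars, target, n=200):
    """Discretized version of Eq. (fit2): int_0^kF dk k^2 [U_target - U_trial]^2."""
    h = kF / n
    total = 0.0
    for i in range(1, n + 1):
        k = i * h
        diff = target(k) - single_particle_potential(k, pars)
        total += k * k * diff * diff * h
    return total

true_pars = (-350.0, 0.4, 2.0, -280.0, 0.3, 2.0)
target_U = lambda k: single_particle_potential(k, true_pars)

print(chi2(true_pars, target_U))   # vanishes by construction
perturbed = (-340.0, 0.4, 2.0, -280.0, 0.3, 2.0)
print(chi2(perturbed, target_U) > 0.0)
```

Note that minimizing this $\chi^2$ extracts six parameters for two independent functions from the single function $U(k)$, which is precisely the source of the arbitrariness discussed in the text.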
However, such a procedure suffers from a large amount of
arbitrariness since
one tries to extract two independent functions out of one function.
Consequently, the result is strongly influenced by the choice of the
trial functions. It is clear that the class of trial functions
given by Eq. (\ref{fit1}) will generally not provide solutions
of Eq. (\ref{sig5}), although the insertion of the -- a priori
unknown -- 'correct' amplitude $f_R$ should yield the correct
results for the self-energies.
To demonstrate
this aspect we compare in Fig. \ref{fig5} the fit procedure
according to Eqs. (\ref{fit1},\ref{fit2}) to the projection
method, using both the 'PV choice' and the 'full
PV' decomposition. The fitted self-energies are determined from the
single particle potential obtained with the 'PV choice'.
It is seen that the fitted self-energies
show a moderate momentum dependence which is in a
qualitative agreement with the 'full PV' representation,
but not with the 'PV choice' to which the fit was performed.
However, the asymptotic high momentum behavior of
the fitted self-energies
is completely different from the projected self-energies,
independently of which choice is used.
Also the slope of the curves at
low momenta seems to be distorted by the choice of the trial
function. The results of Ref. \cite{lee97} show a
similar behavior (which is due to
the choice of the same functional $k$-dependence (\ref{fit1})),
however, the momentum dependence
is a little more pronounced than our fitted results.
For a fair comparison one has to be
aware that in Ref. \cite{lee97} and
the present work different approximations to the Thompson
propagator were made. Actually, in
the present work the Thompson equation is solved in the
two-particle center-of-mass frame, whereas in Refs.
\cite{bm90,lee97} it is solved in the nuclear matter rest frame
thus avoiding the projection techniques. The main difference
is, however, that in Ref. \cite{lee97}
the effective mass entering into the Thompson equation
and the effective spinor basis is allowed to be
momentum dependent. This poses an involved problem which
requires a couple of additional approximations. If the momentum
dependence of the self-energy is very pronounced one has
to go beyond the present approximation scheme with $ {\tilde M}^* (k_{\mathrm F} )$
in order
to include this momentum dependence self-consistently. However,
performing the full PV decomposition (which we regard as the
most reliable one) the momentum dependence is actually very weak
(see Fig. 4). Thus the usage of a constant effective mass
$ {\tilde M}^* (|{\bf k}|=k_{\mathrm F} ,k_{\mathrm F} )$ is well justified.
In contrast to \cite{lee97} where it was argued that the improved
self-consistency suppresses the momentum dependence, we find that
the momentum dependence is mainly governed by the representation of
the T-matrix or -- in the case of Ref. \cite{lee97} -- by the
choice of the trial functions.
To illustrate this effect the single particle
potential is considered in Fig. \ref{fig6}.
It is seen that the different decompositions,
'PV choice' versus 'full PV' representation, yield
significantly different results for this quantity. The deviations
are due to the different modifications of the effective interaction
in the self-consistency scheme which yield also different
effective masses. The 'full PV' treatment
lowers the potential by about 8 MeV
compared to the standard treatment and thus leads to
more binding in the equation-of-state. We also
included the result of Ref. \cite{lee97} into this figure.
Remarkably, this calculation yields the same result as
the present 'PV choice'
although the underlying scalar and vector
self-energies are completely different.
This somewhat fortuitous agreement can be understood by the fact
that the self-energies coincide around the Fermi momentum.
In addition we show in Fig. \ref{fig6}
the single particle potential obtained as a result of the
fit procedure, Eq. (\ref{fit2}). $U$ is reasonably well
reproduced although the fitted and
projected self-energies in Fig. 5 strongly differ. Hence,
it is not possible to extract the self-energy components
from the single-particle potential in a reliable way.
This finding is also supported by a recent analysis
where it was shown that the fit method breaks down when applied to
isospin asymmetric nuclear matter \cite{ulrych97}. Recently
M\"uther, Ulrych and Toki \cite{muether98} also determined
momentum dependent scalar and vector self-energies components
directly from the single particle potential (\ref{upot3}). There
$ {\tilde M}^* $ was treated as an independent quantity
which allows one to generate more than one equation for $U(k,k_{\mathrm F} )$
to determine ${\tilde \Sigma_{\mathrm s} }$ and
${\tilde \Sigma_{\mathrm o} }$. This approach, however, neglects that only
the self-consistent $ {\tilde M}^* $ has a physical meaning. With the
method of \cite{muether98} one also obtains a very weak momentum
dependence of ${\tilde \Sigma_{\mathrm s} }$ and ${\tilde \Sigma_{\mathrm o} }$
similar to the 'full PV' approach of
the present work.
Another experimentally accessible observable is the Schr\"odinger equivalent
optical potential \cite{thm87a}. Here the explicit momentum
dependence of the self-energies provides an important correction
to the energy dependence of the optical potential, which is linear
in first order. Adopting the 'PV choice' we already found
in Ref. \cite{sefu97} a good agreement with
the empirical values of Ref. \cite{hama90} for the real part
up to energies around 800 MeV and an excellent agreement
up to the pion threshold for the imaginary part. As can be seen from Fig. 7
where the real part of the optical potential is shown the
agreement with the data is further improved at low energies when
the 'full PV' representation is used. This also indicates that
the depth of the corresponding single particle potential
is quite reasonable. At higher energies the two methods yield
almost identical results. On the other hand, with the
fields obtained by the fit method,
which are similar to those predicted in \cite{lee97}, it is
not possible to reproduce the high energy behavior of the
optical potential. This will of course have implications when
such fields, Eq. (\ref{fit1}), are applied to the
transport description of heavy ion collisions.
\section{Summary and Conclusions}
In the present work we investigated the momentum dependence of the
nuclear self-energy in the relativistic Brueckner approach. We applied
the standard treatment which projects onto positive energy states and
determines the self-energy components by a decomposition of the
T-matrix into its Lorentz invariants. The T-matrix is
represented by a set of five linearly independent covariants.
Since the set of covariants is not uniquely determined in the
subspace of positive energies one has some
freedom in the choice of the representation. It is therefore not
possible to determine the scalar and vector parts of the
relativistic nuclear self-energy in a unique way. This ambiguity
originates from the fact that the on-shell scattering of positive
energy states yields identical values for the
pseudo-scalar and the pseudo-vector
representation of the corresponding matrix elements and that
they connect differently to the negative energy states,
see also Refs. \cite{huber,fred97}.
In the standard treatment of relativistic Brueckner theory
one accounts for this fact by
choosing a particular type of a pseudo-vector representation.
To perform this 'PV choice', see Eq. (\ref{cov4}),
one has to decompose already
anti-symmetrized matrix elements into direct and exchange
terms which is not free from ambiguities.
Applying the Bonn potentials as the bare NN interaction
we find that the conventional
'PV choice' leads to a pronounced momentum dependence of
the nuclear self-energy. The spatial $\Sigma_{\mathrm v} $-contribution
of the self-energy is thereby found to be relatively large,
in particular inside the Fermi sea, and counterbalances
this momentum dependence to some extent on the mean field
level. However, this strong momentum dependence
makes the standard Brueckner approach questionable
since the Thompson equation (or alternative reductions) are iterated
using a self-consistent effective mass depending only on the Fermi
momentum.
The momentum dependence is found to be
completely dominated by the pion exchange.
It originates from the pseudo-scalar nature of the pion
which is still remnant adopting the 'PV choice' (\ref{cov4}) in the
conventional manner. To eliminate this insufficiency we
represented the T-matrix by a pure pseudo-vector decomposition and
called this 'full PV' representation.
The 'full PV' representation accounts
correctly for the desired pseudo-vector nature of the pion exchange
and suppresses the momentum dependence of the self-energy
almost completely. The two limiting cases, namely the full
pseudo-scalar and the full pseudo-vector representation, set the
range of uncertainty in the determination of the self-energy.
However, the 'full PV' representation is more consistent with the
usual approximation scheme of the Brueckner approach which
assumes that the screening of the effective interaction in the
medium introduces an additional density dependence, but is only
weakly depending on the momentum of the particles.
We further investigated a frequently
used method, namely to determine the scalar and vector
self-energy components directly by a fit to the single
particle potential. Although this method works reasonably well as long
as one restricts to density dependent observables it leads to highly
ambiguous results when applied to extract the full momentum dependence
of the fields. Thus we conclude that the projection onto covariant
amplitudes using thereby a complete pseudo-vector representation
is up to now the most reliable way to determine the scalar and
vector nucleon self-energy components as long as one works
exclusively with positive energy states.
\begin{acknowledgments}
We would like to thank H. M\"uther, E. Schiller and H. Lenske for
enlighting discussions. We further thank E. Schiller and H. M\"uther
for providing us with a relativistic Hartree-Fock
program which was very useful
in order to check the present results.
\end{acknowledgments}
\addcontentsline{toc}{section}{ Notations}
\ \vspace{-7mm}
{\small
$q_j$, $j=1,...,N$ -- positions of particles;
${\bar q}_j=q_j-q_0$ -- positions of particles in the center of mass frame, $q_0=(1/N)\sum_k q_k$;
$q_{ij} = q_i - q_j$;
$x_j=e^{q_j}$ or $x_j=e^{2\pi\imath q_j}$ in trigonometric or elliptic cases respectively;
$p_j$, $j=1,...,N$ -- the classical momenta of particles;
$\omega=e^{2\pi \imath \ti\tau}$ -- the elliptic modular parameter, controlling the ellipticity in momenta;
$p=e^{2\pi \imath \tau}$ -- the modular parameter, controlling the ellipticity in coordinates;
$q=e^\hbar$ -- exponent of the Planck constant;
$t=e^\eta$ -- exponent of the coupling constant;
$\lambda$ -- the spectral parameter (\ref{e1}) (sometimes also called $u$);
$z$ -- the second spectral parameter (\ref{s222});
$\mathcal{A}_{x,p}$ - the space of operators, generated by $\{x_1,.., \, x_N, q^{x_1 \partial_1},...,\,q^{x_N \partial_N}\}$;
$:\qquad:$ - normal ordering on $\mathcal{A}_{x,p}$, moving all shift operators in each monomial to the right (\ref{order});
$\hat{\mathcal O}(\lambda)$ -- the generating function of operators $\hat{\mathcal O}_n$ from \cite{Sh} (\ref{e1});
$\hat{\mathcal O}'(\lambda)=h^{-1}\hat{\mathcal O}(\lambda)h$ -- the generating function $\hat{\mathcal O}(\lambda)$ with
theta functions $\theta_p$ being replaced by the Jacobi theta functions $\vartheta$ (the function
$h$ equals $\prod_{i<j}e^{-\pi\imath\eta(q_i-q_j)/ \hbar} $, see (\ref{e1002}));
$\hat{\mathcal O}'(z,\lambda)$ -- extension of $\hat{\mathcal O}'(\lambda)$ by the second spectral parameter (\ref{s22});
${\hat H}(\lambda)$ -- generating function of quantum Dell Hamiltonians ${\hat H}_n=\hat{\mathcal O}_0^{-1}\hat{\mathcal O}_n$ (\ref{e2});
${\hat{\mathcal H}}(\lambda)$ -- alternative generating function of quantum Dell Hamiltonians ${\hat{\mathcal H}}_n=\hat{\mathcal O}^{-1}(1)\hat{\mathcal O}_n$ (\ref{e55});
$\Xi\in {\rm Mat}(N,\mathbb C) $ -- the intertwining matrix, $\det \Xi$ is proportional to the Vandermonde function (\ref{AppB});
$g=\Xi D^{-1}\in {\rm Mat}(N,\mathbb C) $ -- the normalized intertwining matrix with diagonal matrix $D$ (\ref{q54});
$ {\mathcal L}=\sum_{n \in \mathbb{Z} }
\omega^{\frac{n^2-n}{2}} (-\lambda)^n {L}^{RS}(q^n, t^n)$ -- the weighted average of the Ruijsenaars Lax matrix (\ref{e239});
$c$ -- the light speed parameter as in the Ruijsenaars-Schneider model (\ref{e077});
$L(z,\lambda)\in {\rm Mat}(N,\mathbb C) $ -- the $L$-matrix in the Manakov L-A-B triple (\ref{e69}).
All products of non-commuting operators should be understood as left ordered products.
By this we mean:
\begin{equation*}
\prod_{i=1}^N B_i = B_1 \cdot ... \cdot B_N
\end{equation*}
}
\newpage
\section{Introduction and summary}
\setcounter{equation}{0}
\subsection{Brief review}
The double elliptic (or Dell) model \cite{MM} is an integrable system with an elliptic
dependence on both -- positions of particles and their momenta. It extends the widely known
Calogero-Moser-Sutherland \cite{Calogero,Krich} and Ruijsenaars-Schneider \cite{RS} families of
many-body integrable systems.
Historically, the model was first derived as the elliptic self-dual system with respect to the Ruijsenaars
(or equivalently, p-q or action-angle) duality interchanging positions of particles and action variables \cite{Ruijs_d}.
At the classical level
the original group-theoretical Ruijsenaars construction was not applicable to the elliptic case.
Instead, a geometrical approach was used based on the studies of spectral curves and Seiberg-Witten differentials \cite{GKMMM}.
In this way the Dell Hamiltonians were proposed in terms of
higher genus theta-functions with dynamical period matrices. For this reason
a definition of the standard set of algebraic tools for integrable systems (including Lax pairs, $R$-matrix structures, exchange relations etc) appeared to be a complicated problem.
The classical Poisson structures underlying the Dell model
were studied in \cite{BGOR,Aminov}.
An alternative version of the Dell Hamiltonians was suggested recently in \cite{Sh}. The authors exploited the explicit form of the 6d Supersymmetric Yang-Mills partition functions with surface defects compactified on a torus, which are conjectured to serve as the wavefunctions for the corresponding Seiberg-Witten integrable systems \cite{NekBPS_IV, NekBPS_V, NekSh, AT}. The exact correspondence of their results with
the previous studies is an interesting open problem, though the matching has already been verified in a few simple cases.
In this paper we deal with the Koroteev-Shakirov version of the generating function for commuting Hamiltonians.
Namely, for the $N$-body system consider the following operator depending on the complex spectral parameter $\lambda$:
\begin{equation}\label{e1}
\displaystyle{
\hat{\mathcal O}(\lambda)= \sum_{n_1,...,n_N \in\, \mathbb{Z}} \omega^{\sum_i\frac{n_i^2 - n_i}{2}} (-\lambda)^{\sum_i n_i} \prod_{i < j}^N \frac{\theta_p (t^{n_i - n_j}\frac{x_i}{x_j})}{\theta_p (\frac{x_i}{x_j})} \prod_i^N q^{n_i x_i \partial_i} = \sum_{n \in \mathbb{Z}} \lambda^n \hat{\mathcal O}_n\,.
}
\end{equation}
%
This defines the infinite set of (non-commuting) operators $\hat{\mathcal O}_n$.
The positions of particles $q_i$ enter through $x_i=e^{q_i}$;
$t=e^{\eta}$ is the exponent of the coupling constant $\eta$; $q=e^\hbar$ is the exponent of the Planck constant $\hbar$; and $\partial_i=\partial_{x_i}$, so that $\partial_{q_i}=x_i\partial_i$. The constant $\omega$ is the second modular parameter (controlling the ellipticity in momenta) and $\lambda$ is the (spectral) parameter of the generating function.
The definition of the theta-function $\theta_p(x)$ with the constant
modular parameter $\tau$ ($p = e^{2 \pi i \tau}$) (controlling the ellipticity in coordinates) is given in (\ref{e81}).
The commuting Hamiltonians of the Dell system were conjectured and argued to be of the form:
\begin{equation}\label{e2}
\hat{H}_n = \hat{\mathcal O}_0^{-1} \hat{\mathcal O}_n\,, \qquad \qquad n = 1,...,N.
\end{equation}
A solution to the eigenvalue problem for $\hat{H}_n$ was suggested in \cite{Awata,Awata2} by extending the Shiraishi functions \cite{Shiraishi} -- solutions to a non-stationary Macdonald-Ruijsenaars quantum problem.
Our study, on the contrary, does not appeal to the explicit form of the wavefunctions and is mostly focused on the generating function itself. It is based on the use of the intertwining matrices $\Xi(z)$ of the IRF-Vertex correspondence (see (\ref{AppB}) for their explicit form) and
Hasegawa's factorization formula \cite{Hasegawa,Hasegawa2}:
%
\begin{equation}\label{e211}
\begin{array}{l}
\displaystyle{
{\hat L} ^{\rm RS}(z,q,t)=g^{-1}(z)g(z-N\eta)\,q^{{\rm diag}(\partial_{q_1},...,\partial_{q_N})}\in {\rm Mat}(N,\mathbb C)
}
\end{array}
\end{equation}
for
the ${\rm gl}_N$ elliptic Ruijsenaars-Schneider Lax operator
with spectral parameter $z$ \cite{RS}
\begin{equation}\label{e0}
\displaystyle{
{\hat L}^{RS}_{ij}(z, \eta, \hbar)= \frac{\vartheta( -\eta) \vartheta(z+q_{ij} - \eta)}{\vartheta(z) \vartheta(q_{ij} -\eta)}\,\prod_{k\neq j}\frac{\vartheta(q_{jk}+\eta)}{\vartheta(q_{jk})}\,e^{ \hbar \partial_{q_j}}\,,
\quad q_{ij}=q_i-q_j\,.
}
\end{equation}
The matrix $\Xi(z)=\Xi(z,x_1,...,x_N|p)$ enters the
normalized intertwining matrix $g(z,\tau)=\Xi(z)D^{-1}$ from (\ref{e211}), where $D(x_1,...,x_N)$ is a diagonal matrix used
for convenient normalization only, see (\ref{q54}). A key property of these matrices, used below, is that $\det\Xi$ is proportional to
the Vandermonde determinant.
The intertwining matrices are known from the IRF-Vertex correspondence at the quantum and classical levels \cite{Baxter-Jimbo,LOZ,LOZ2,ZV}. The IRF-Vertex correspondence provides a relation between
dynamical and non-dynamical quantum (or classical) $R$-matrices as a special twisted gauge transformation
with the matrix $g(z)$, thus relating the Lax operator (\ref{e0}) to one of the Sklyanin type \cite{Skl}.
\subsection{Outline of the paper and summary of results}
In this paper, using the Hamiltonians (\ref{e1}), we construct a generalization of the Macdonald determinant operator for the Dell system and study its applications.
We use a slightly modified and extended version $\hat{\mathcal O}'(z,\lambda)$ of the generating function (\ref{e1}), which depends on an additional spectral parameter $z$ and generates an equivalent\footnote{Details of the relation between $\hat{\mathcal O}_k'$ and $\hat{\mathcal O}_k$ are given in (\ref{AppD}).} set of operators $\hat{\mathcal O}_k'$:
\begin{equation}\label{s222}
\begin{array}{c}
\displaystyle{
\hat{\mathcal O}'(z,\lambda)= \sum_{k \in\, \mathbb{Z}} \frac{\vartheta(z-k\eta)}{\vartheta(z)} \lambda^k \hat{\mathcal O}_k'
=
}
\\ \ \\
\displaystyle{
=
\sum_{n_1,...,n_N \in\, \mathbb{Z}}
\frac{\vartheta(z-\eta\sum_{i=1}^N n_i)}{\vartheta(z)}
\omega^{\sum_i\frac{n_i^2 - n_i}{2}} (-\lambda)^{\sum_i^N\! n_i}\, \prod_{i < j}^N \frac{\vartheta(q_i-q_j+\eta(n_i - n_j)) }{\vartheta(q_i-q_j)} \prod_i^N e^{ n_i \hbar\partial_{q_i}}\,.
}
\end{array}
\end{equation}
The paper is organized as follows.
In \textbf{Section 2} we derive the expression for the generalized Macdonald determinant:
\begin{equation}
\displaystyle{
\hat{\mathcal O}'(z - Nq_0,\lambda) =
\frac{1}{\det \Xi(z)} \, \det_{1 \leq i,j \leq N} \Big\{ \sum_{n \in\, \mathbb{Z}} (-\lambda)^n \omega^{\frac{n^2-n}{2}} \Xi_{i}(q_j + n \eta,z) e^{n \hbar \partial_{q_j}} \Big\}\,,
}
\end{equation}
where $q_0$ is the center-of-mass coordinate.
The determinant is well defined since any two elements from different columns of the matrix commute.
For the precise form of the matrix $\Xi_{ij}=\Xi_{i}(q_j, z)$ see (\ref{AppB}).
In \textbf{Section 3} we express the generating function (\ref{s222}) in terms of the Lax matrix of the Ruijsenaars-Schneider model:
\begin{equation}\label{e4}
{\hat{\mathcal O}}'(z,\lambda) = \,:\det_{1 \leq i,j \leq N}
\Big\{ \hat{\mathcal L}^{\rm Dell}_{ij}(z,\lambda\,|\,q,t\,|\,\tau,\omega) \Big\}:\,,
\end{equation}
%
where
%
\begin{equation}\label{e5}
\hat{\mathcal L}^{\rm Dell}_{ij}(z,\lambda\,|\,q,t\,|\,\tau,\omega)=\sum_{k \in\, \mathbb{Z} }
\omega^{\frac{k^2-k}{2}} (-\lambda)^k {\hat L}^{RS}_{ij}(z\, | k\eta, k\hbar \,| \tau)
\end{equation}
%
and the normal ordering is defined in (\ref{order}).
The trigonometric and rational limits (for coordinate dependence) of (\ref{s222})-(\ref{e5}) are described as well.
In \textbf{Section 4} we study the eigenvalue problem for the operator $\hat{\mathcal O}(u)$ (\ref{e1}) in the (coordinate) trigonometric limit $p = 0$, which corresponds to the model dual to the elliptic Ruijsenaars model\footnote{The terminology ``dual to elliptic Ruijsenaars (or Calogero) model'' comes from the Mironov-Morozov description of the Dell model based on the p-q duality. Here and in what follows we use it to mean the trigonometric (or rational) $p=0$ limit of (\ref{e1}), though its relation to the p-q duality needs to be clarified.}, and compare our results to those known in the literature \cite{Sh,Awata}.
The main statement here is the following:
the operators $\hat{\mathcal O}(u)$ in the limit $p=0$ for different $u$ can be simultaneously brought to the upper triangular form in some basis,
their eigenvalues are labelled by Young diagrams $\lambda=(\lambda_1,...,\lambda_N)$ and are equal to:
\begin{equation}\label{e7}
E(u)_\lambda = \prod_{i=1}^{N} \theta_{\omega}(ut^{N-i} q^{\lambda_i})\,.
\end{equation}
In \textbf{Section 5} we study the classical limit of the Dell system. Using the classical analogue ${\mathcal L}(z,\lambda)$
of (\ref{e5}) we show that the $L$-matrix
\begin{equation}\label{e10}
L(z,\lambda)={\mathcal L}(z,1)^{-1}{\mathcal L}(z,\lambda)\in {\rm Mat}(N,\mathbb C)
\end{equation}
%
satisfies the Manakov triple representation \cite{Manakov,FT} (instead of the Lax equation):
\begin{equation} \label{e11}
\dot{L} = [L,A] + B L\,, \quad {\rm tr} B = 0\,.
\end{equation}
The conservation laws are generated by the function
$\det L(z,\lambda)$ only. It reduces to the expression for the spectral curve
of the Ruijsenaars-Schneider model in the $\omega\rightarrow 0$ limit.
In \textbf{Section 6} we describe the factorized structure of
the $L$-matrix $L(z,\lambda)$ (\ref{e10}). Up to an inessential modification
it is presented in a form
similar to the elliptic Kronecker function\footnote{The latter is used in the
widely known Lax pairs with spectral parameter \cite{Krich,RS} for many-body systems.} (\ref{e876}):
%
\begin{equation}\label{e12}
\begin{array}{c}
\displaystyle{
\check{L}(z,\lambda|\tau,\ti\tau)=\Phi[G(z,\tau),u|\ti\tau]:=\frac{\vartheta'(0|\ti\tau)}{\vartheta(u|\ti\tau)} \Big[\vartheta(-\,{\rm ad}_{N\eta\partial_z}|\ti\tau)\, G(z,\tau)\Big]^{-1}
\vartheta(u-\,{\rm ad}_{N\eta\partial_z}|\ti\tau)\, G(z,\tau)\,,
}
\\ \ \\
\displaystyle{
\quad u=\log(\lambda)\,,
\quad G(z,\tau)=g(z,\tau)\exp\Big(\frac{z}{Nc\eta}\,{\rm diag}(p_1,...,p_N)\Big)\in {\rm Mat}(N,\mathbb C) \,,
}
\end{array}
\end{equation}
%
thus generalizing the classical version of the factorization (\ref{e211}) to the double elliptic case.
The elliptic modulus $\ti\tau$ appears as $\omega=e^{2\pi\imath \ti\tau}$. It is responsible for the ellipticity in momenta,
while $\tau$ controls the ellipticity in the positions of particles.
We also describe the connection of the $L$-matrix with the Sklyanin Lax operators, and propose its
quantization
in terms of the elliptic quantum $R$-matrix in the fundamental representation of ${\rm GL}_N$.
Possible applications of the obtained results and future plans are discussed at the end of the paper. The Appendices contain
the definitions and properties of the elliptic functions, a description of the intertwining matrices $\Xi$, computations of ${\rm GL}_2$ examples and relations between different forms of the generating functions.
\section{Characteristic Macdonald determinant for the Dell system}\label{s2}
\setcounter{equation}{0}
In this Section we express the generating function $\hat{\mathcal O}(\lambda)$ (\ref{e1}) as a determinant of an $N \times N$ matrix.
The main idea is as follows. First, we introduce a certain bilinear pairing $\langle \quad | \quad \rangle: \mathcal{A}_x \times \mathcal{A}_p \rightarrow \mathcal{A}_{x,p}$ between operators depending on coordinates only and on momenta only ($\mathcal{A}_x$ and $\mathcal{A}_p$, respectively). The generating function $\hat{\mathcal O}(\lambda)$ is then expressed as a pairing between the Vandermonde function and a product of theta functions of shift operators. Finally, we note that the
Vandermonde function is a part of the determinant of a certain intertwining matrix $\Xi$. Using its properties we arrive at the determinant representation.
The set of intertwining matrices which we are going to use in the different cases is given in Appendix B. The elliptic coordinate case cannot be treated without a spectral parameter,
while this is possible in the rational and trigonometric cases.
\subsection{Determinant representation for the generating function of the Hamiltonians}
\subsubsection{The case without the spectral parameter}
The result will be proven in full detail in the trigonometric case. The rational case can be treated completely analogously.
The main statement of this Section is the following.
\begin{theorem}\label{th1}
Let $\Xi$ be the $N \times N$ Vandermonde matrix
\begin{equation}\label{e21}
\Xi_{ij} = \Xi_{i}(x_j) = x_j^{N-i}\,,
\end{equation}
so that
%
\begin{equation}\label{e22}
\displaystyle{
\det_{1 \leq i,j \leq N} x_j^{N-i} = \prod_{i<j} (x_i - x_j)\,.
}
\end{equation}
%
Then the generating function
$\hat{\mathcal O}^{\hbox{\tiny{trig}}}(\lambda)$ (\ref{e1}) in the coordinate trigonometric limit ($p = 0$)\footnote{Conjugated by $\prod_{i <j} x_j^{\frac{\ln{t}}{\ln{q}}}$.}:
\begin{equation}
\displaystyle{
\hat{\mathcal O}^{\hbox{\tiny{trig}}}(\lambda)= \sum_{n_1,...,n_N \in\, \mathbb{Z}} \omega^{\sum_i\frac{n_i^2 - n_i}{2}} (-\lambda)^{\sum_i n_i} \prod_{i < j}^N \frac{t^{n_i} x_i - t^{n_j} x_j}{x_i - x_j} \prod_i^N q^{n_i x_i \partial_i}
}
\end{equation}
is represented as follows:
\begin{equation}\label{e23}
\displaystyle{
\hat{\mathcal O}^{\hbox{\tiny{trig}}}(\lambda) = \frac{1}{\prod_{i<j} (x_i - x_j)} \det_{1 \leq i,j \leq N} \Big\{ \sum_{n \in\, \mathbb{Z}} (-\lambda)^n \omega^{\frac{n^2-n}{2}} \Xi_i(t^n x_j) \, q^{n x_j \partial_j} \Big\}
}
\end{equation}
%
or, equivalently,
\begin{equation}\label{e24}
\displaystyle{
\hat{\mathcal O}^{\hbox{\tiny{trig}}}(\lambda) = \frac{1}{\prod_{i<j} (x_i - x_j)} \det_{1 \leq i,j \leq N}
\Big\{
x_j^{N-i} \, \theta_\omega (\lambda t^{N-i} q^{x_j \partial_j}) \Big\}
\,.
}
\end{equation}
%
%
\end{theorem}
%
\noindent\underline{\em{Proof:}}\quad First, notice that the above determinant is well defined since any two
elements from different columns of the corresponding matrix commute.\\
Consider the space of difference operators generated by $\{x_1,...,x_N, q^{x_1 \partial_1},..., q^{x_N \partial_N}\}$.
We refer to it as $\mathcal{A}_{x, p}$.
Similarly, denote the spaces generated by $\{x_1,...,x_N\}$ only and by $\{ q^{x_1 \partial_1},..., q^{x_N \partial_N}\}$ only as $\mathcal{A}_{x}$ and $\mathcal{A}_{p}$, respectively.
Introduce the bilinear pairing:
\begin{equation} \label{pairing}
\langle \quad | \quad \rangle \, \, \, :\, \mathcal{A}_{x} \times \mathcal{A}_{p} \, \rightarrow \mathcal{A}_{x, p}
\end{equation}
by defining it on the basis elements
\begin{equation}\label{e26}
\displaystyle{
\langle \prod_{i =1}^N x_i^{k_i} |\, \prod_{j =1}^N q^{m_j x_j \partial_j}\rangle = \prod_{l =1}^N t^{m_lk_l} \prod_{i =1}^N x_i^{k_i} \, \prod_{j =1}^N q^{m_j x_j \partial_j} \qquad \forall \, k_i, \, m_i\,.
}
\end{equation}
%
Due to $x_i \, q^{x_j \partial_j} = q^{x_j \partial_j} \, x_i$ for $i \neq j$,
the pairing (\ref{e26}) satisfies an important property:
\begin{equation}\label{homomorphism}
\displaystyle{
\langle \prod_{i =1}^N x_i^{k_i} |\, \prod_{j =1}^N q^{m_j x_j \partial_j}\rangle = \prod_{i =1}^N \langle x_i^{k_i} | \, q^{m_i x_i \partial_i} \rangle \qquad \forall \, k_i, \, m_i\,.
}
\end{equation}
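For illustration (an example not present in the text), take $N=2$ with $k=(1,2)$ and $m=(1,0)$ in (\ref{e26}):
\begin{equation}
\displaystyle{
\langle x_1 x_2^2 \,|\, q^{x_1 \partial_1} \rangle = t^{1\cdot 1 + 0\cdot 2}\, x_1 x_2^2\, q^{x_1 \partial_1}
= \langle x_1 \,|\, q^{x_1 \partial_1} \rangle \, \langle x_2^2 \,|\, 1 \rangle\,,
}
\end{equation}
%
in agreement with the factorization property (\ref{homomorphism}).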
%
Then, the generating function (\ref{e1}) is represented as follows\footnote{We learned this result from \cite{Sh2}.}:
\begin{equation}\label{e29}
\displaystyle{
\hat{\mathcal O}^{\hbox{\tiny{trig}}}(\lambda) = \frac{1}{\prod_{i<j} (x_i - x_j)}\langle \prod_{i<j} (x_i - x_j) | \prod_k \theta_\omega (\lambda q^{x_k \partial_k}) \rangle\,.
}
\end{equation}
Next, we use the determinant property (\ref{e22}) and the linearity property of (\ref{e26}).
From (\ref{e29}) we conclude
%
\begin{eqnarray}\label{e30}
\displaystyle{
\hat{\mathcal O}^{\hbox{\tiny{trig}}}(\lambda) = \frac{1}{\prod_{i<j} (x_i - x_j)} \langle \det_{1 \leq i,j \leq N} \Xi_{i}(x_j) | \prod_k \theta_\omega (\lambda q^{x_k \partial_k}) \rangle =
}
\\
\displaystyle{
= \frac{1}{\prod_{i<j} (x_i - x_j)} \sum_{\sigma \in S_N} (-1)^{|\sigma|} \langle \prod_i \Xi_{\sigma(i)}(x_i) | \prod_k \theta_\omega (\lambda q^{x_k \partial_k}) \rangle\,,
}
\end{eqnarray}
%
and the property (\ref{homomorphism}) provides
%
\begin{equation}
\displaystyle{
\hat{\mathcal O}^{\hbox{\tiny{trig}}}(\lambda) = \frac{1}{\prod_{i<j} (x_i - x_j)} \sum_{\sigma \in S_N} (-1)^{|\sigma|} \sum_{n_1,...,n_N \in \mathbb{Z}} (-\lambda)^{\sum_j n_j} \omega^{\sum_j \frac{n_j^2 - n_j}{2}} \langle \prod_i x_i^{N - \sigma(i)} | \prod_k q^{n_k x_k \partial_k} \rangle=
}
\end{equation}
\begin{equation}
\displaystyle{
= \frac{1}{\prod_{i<j} (x_i - x_j)} \sum_{\sigma \in S_N} (-1)^{|\sigma|} \sum_{n_1,...,n_N \in \mathbb{Z}} (-\lambda)^{\sum_j n_j} \omega^{\sum_j \frac{n_j^2 - n_j}{2}} \prod_i \langle x_i^{N - \sigma(i)} | \, q^{n_i x_i \partial_i} \rangle
}
\end{equation}
Hence,
\begin{equation}\label{e31}
\displaystyle{
\hat{\mathcal O}^{\hbox{\tiny{trig}}}(\lambda) = \frac{1}{\prod_{i<j} (x_i - x_j)} \sum_{\sigma \in S_N} (-1)^{|\sigma|} \prod_i \langle \Xi_{\sigma(i)}(x_i) | \theta_\omega (\lambda q^{x_i \partial_i}) \rangle\,.
}
\end{equation}
%
Finally,
\begin{equation}\label{e32}
\displaystyle{
\hat{\mathcal O}^{\hbox{\tiny{trig}}}(\lambda) = \frac{1}{\prod_{i<j} (x_i - x_j)} \det_{1 \leq i,j \leq N} \langle \Xi_{i}(x_j) | \theta_\omega (\lambda q^{x_j \partial_j}) \rangle\,.
}
\end{equation}
The expression under the determinant is easily calculated:
\begin{equation}\label{e33}
\displaystyle{
\langle \Xi_{i}(x_j) | \theta_\omega (\lambda q^{x_j \partial_j}) \rangle=
\sum_{n \in\, \mathbb{Z}} (-\lambda)^n \omega^{\frac{n^2-n}{2}} \Xi_{i}(t^n x_j) q^{n x_j \partial_j}\,.
}
\end{equation}
This yields (\ref{e23}). Plugging the explicit expression for $\Xi$ into the r.h.s. of (\ref{e33})
and summing up over $n$ we get
\begin{equation}\label{e34}
\sum_{n \in\, \mathbb{Z}} (-\lambda)^n \omega^{\frac{n^2-n}{2}} \Xi_{i}(t^n x_j) q^{n x_j \partial_j}=
x_j^{N-i} \, \theta_\omega (\lambda t^{N-i} q^{x_j \partial_j})\,,
\end{equation}
%
that is (\ref{e24}).
$\blacksquare$
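For example (a case not written out above), for $N=2$ the representation (\ref{e24}) reads explicitly:
\begin{equation}
\displaystyle{
\hat{\mathcal O}^{\hbox{\tiny{trig}}}(\lambda) = \frac{1}{x_1 - x_2} \det
\left(\begin{array}{cc}
x_1\, \theta_\omega (\lambda t\, q^{x_1 \partial_1}) & x_2\, \theta_\omega (\lambda t\, q^{x_2 \partial_2})
\\ \ \\
\theta_\omega (\lambda\, q^{x_1 \partial_1}) & \theta_\omega (\lambda\, q^{x_2 \partial_2})
\end{array}\right),
}
\end{equation}
%
where the determinant is unambiguous since the two columns commute.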
Now let us write the answer for the rational case:
\begin{theorem}
Let $\Xi$ be the $N \times N$ Vandermonde matrix
\begin{equation}
\Xi_{ij} = \Xi_{i}(q_j) = (-q_j)^{i-1}\,,
\end{equation}
so that
\begin{equation}
\displaystyle{
\det_{1 \leq i,j \leq N} (-q_j)^{i-1} = \prod_{i<j} (q_i - q_j)\,.
}
\end{equation}
%
Then the generating function $\hat{\mathcal O}(\lambda)$ (\ref{e1}) in the coordinate rational limit:
\begin{equation}
\displaystyle{
\hat{\mathcal O}^{\hbox{\tiny{rat}}}(\lambda)= \sum_{n_1,...,n_N \in\, \mathbb{Z}} \omega^{\sum_i\frac{n_i^2 - n_i}{2}} (-\lambda)^{\sum_i n_i} \prod_{i < j}^N \frac{q_i - q_j + \eta(n_i -n_j)}{q_i - q_j} \prod_i^N e^{n_i \hbar \partial_{q_i}}
}
\end{equation}
is represented as follows:
\begin{equation}
\displaystyle{
\hat{\mathcal O}^{\hbox{\tiny{rat}}}(\lambda) = \frac{1}{\prod_{i<j} (q_i - q_j)} \det_{1 \leq i,j \leq N} \Big\{ \sum_{n \in\, \mathbb{Z}} (-\lambda)^n \omega^{\frac{n^2-n}{2}} (-q_j- n \eta)^{i-1} \, e^{n \hbar \partial_{q_j}} \Big\}
}
\end{equation}
or
\begin{equation}
\displaystyle{
\hat{\mathcal O}^{\hbox{\tiny{rat}}}(\lambda) = \frac{1}{\prod_{i<j} (q_i - q_j)} \det_{1 \leq i,j \leq N} \Big\{ \sum_{n \in\, \mathbb{Z}} (-\lambda)^n \omega^{\frac{n^2-n}{2}} \Xi_i(q_j+ n \eta) \, e^{n \hbar \partial_{q_j}} \Big\}\,.
}
\end{equation}
\end{theorem}
\noindent\underline{\em{Proof:}}\quad The proof is a word-for-word repetition of the trigonometric case. $\blacksquare$
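For example (a case not written out above), at $N=2$ the matrix elements are $\Xi_1(q_j + n\eta)=1$ and $\Xi_2(q_j + n\eta)=-q_j - n\eta$, so that
\begin{equation}
\displaystyle{
\hat{\mathcal O}^{\hbox{\tiny{rat}}}(\lambda) = \frac{1}{q_1 - q_2} \det
\left(\begin{array}{cc}
\theta_\omega(\lambda\, e^{\hbar \partial_{q_1}}) & \theta_\omega(\lambda\, e^{\hbar \partial_{q_2}})
\\ \ \\
\sum\limits_{n \in\, \mathbb{Z}} (-\lambda)^n \omega^{\frac{n^2-n}{2}} (-q_1 - n\eta)\, e^{n \hbar \partial_{q_1}} &
\sum\limits_{n \in\, \mathbb{Z}} (-\lambda)^n \omega^{\frac{n^2-n}{2}} (-q_2 - n\eta)\, e^{n \hbar \partial_{q_2}}
\end{array}\right).
}
\end{equation}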
%
%
%
These theorems generalize the generating functions for commuting Hamiltonians
for the quantum Calogero-Ruijsenaars family \cite{Sekiguchi,Hasegawa}.
\subsubsection{The case with the spectral parameter}
Let us proceed to the case with the spectral parameter. First of all, we need to introduce convenient notation.
In this Section, any of the following three functions can be substituted in place of the symbol $\theta_p(x)$:
\begin{equation}
\theta_p\Big(\frac{x_i}{x_j}\Big) = \left\{\begin{array}{l} -(q_i - q_j) \qquad \, \, \, \, \hbox{rational case}, \\ 1-\frac{x_i}{x_j} \qquad \qquad \, \, \hbox{trigonometric case}, \\ \theta_p\Big(\frac{x_i}{x_j}\Big) \qquad \qquad \, \hbox{elliptic case}. \end{array}\right.
\end{equation}
We will also use the odd theta function:
\begin{equation}
\vartheta(q_i-q_j) = \left\{\begin{array}{l} -(q_i - q_j) \qquad \qquad \, \, \, \, \quad \qquad \hbox{rational case}, \\ e^{(q_i-q_j)/2} - e^{-(q_i-q_j)/2} \qquad \, \, \, \hbox{trigonometric case}, \\ \vartheta(q_i-q_j) \qquad \qquad \qquad \qquad \hbox{elliptic case}. \end{array}\right.
\end{equation}
For the precise relation between them, see (\ref{e83}) and (\ref{AppD}).
In this Section, we will also need the normal ordering on $\mathcal{A}_{x,p}$, which moves all the shift operators to the right of all coordinates. Namely, on monomials
\begin{equation} \label{order}
:\prod_{\mathcal{I}} x^{k_\mathcal{I}} q^{n_\mathcal{I} \, x \partial}: = \prod_{\mathcal{I}} x^{k_\mathcal{I}} \prod_{\mathcal{I}} q^{n_\mathcal{I} \, x \partial}\,,
\end{equation}
where $\mathcal{I}$ is a multi-index.
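For instance (using the fact that $q^{x \partial}$ acts by $x \mapsto qx$):
\begin{equation}
\displaystyle{
q^{x \partial}\, x = q\, x\, q^{x \partial}\,, \qquad \hbox{while} \qquad :q^{x \partial}\, x: \, = \, x\, q^{x \partial}\,,
}
\end{equation}
%
i.e. the normal ordering discards the factors of $q$ produced by moving the shift operators through the coordinates.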
Let us formulate the main statement. Define the new generating function:
\begin{equation}\label{s22}
\begin{array}{c}
\displaystyle{
\hat{\mathcal O}'(z,\lambda)= \sum_{k \in\, \mathbb{Z}} \frac{\vartheta(z-k\eta)}{\vartheta(z)} \lambda^k \hat{\mathcal O}_k'
=
}
\\ \ \\
\displaystyle{
=
\sum_{n_1,...,n_N \in\, \mathbb{Z}} \omega^{\sum_i\frac{n_i^2 - n_i}{2}} (-\lambda)^{\sum_i^N n_i}
\frac{\vartheta(z-\eta\sum_{i=1}^N n_i)}{\vartheta(z)}
\prod_{i < j}^N \frac{\vartheta(q_i-q_j+\eta(n_i - n_j)) }{\vartheta(q_i-q_j)} \prod_i^N e^{ n_i \hbar\partial_{q_i}}\,.
}
\end{array}
\end{equation}
%
Its relation to the previous one is also explained in (\ref{AppD}).
\begin{theorem}\label{th2}
Let $\Xi$ be the $N \times N$ matrix of the St\"ackel form with the spectral parameter $z$
\begin{equation}
\Xi_{ij} = \Xi_{ij}(\bar{q}_j,z) = \Xi_{i}(\bar{q}_j - \frac{z}{N})\,,
\end{equation}
with ${\bar q}_j$ defined in (\ref{e3358}), and with the property
%
\begin{equation}
\displaystyle{
\det_{1 \leq i,j \leq N} \Xi_{ij}(\bar{q}_j,z) = c_N(\tau)\,\vartheta(z)\prod\limits_{i<j}\vartheta(q_i-q_j)\,.
}
\end{equation}
%
Then the generating function $\hat{\mathcal O}'(z,\lambda)$ (\ref{s22}) is represented as follows:
\begin{equation}
\displaystyle{
\hat{\mathcal O}'(z,\lambda) = \frac{1}{\det \Xi_{ij}(\bar{q}_j,z)} \, :\det_{1 \leq i,j \leq N} \Big\{ \sum_{n \in\, \mathbb{Z}} (-\lambda)^n \omega^{\frac{n^2-n}{2}} \Xi_{ij}(\bar{q}_j + n \eta,z) e^{n \hbar \partial_{q_j}} \Big\}:
}
\end{equation}
%
or, equivalently,
\begin{equation}
\displaystyle{
\hat{\mathcal O}'(z,\lambda) = \frac{1}{\det \Xi_{ij}(\bar{q}_j,z)} \, :\det_{1 \leq i,j \leq N}
\Big\{ \sum_{k \in\, \mathbb{Z}} \Xi_{ij,k}(z)\,
e^{(\alpha k + \sigma_{ij})q_j} \theta_\omega \Big(\lambda e^{\alpha k \eta + \sigma_{ij} \eta} e^{ \hbar \partial_{q_j}}\Big) \Big\}:\,,
}
\end{equation}
%
where the following expansions for the functions $\Xi_{ij}(\bar{q}_j,z)$ are assumed:
\begin{equation}
\Xi_{ij}(\bar{q}_j,z) = \sum_{k \in\, \mathbb{Z}} \Xi_{ij,k}(z)\, e^{(\alpha k + \sigma_{ij}) \bar{q}_j}
\end{equation}
%
for some $\mathbb C$-numbers $\alpha$ and $\sigma_{ij}$.
\end{theorem}
The explicit form of the matrices $\Xi$ is given in (\ref{e433}), (\ref{e423}) and (\ref{e86}). We use only these matrices in our study.
\noindent\underline{\em{Proof:}}\quad
In order to use the trick from Theorem \ref{th1}, let us define the shifted $\Xi$ matrix, denoted $\tilde{\Xi}$:
\begin{equation}
\tilde{\Xi}_{ij} =\Xi_{ij}(q_j ,z) = \Xi_{ij}(\bar{q}_j -q_0 ,z) = \Xi_{ij}(\bar{q}_j ,z-N q_0)\,.
\end{equation}
Its determinant is now equal to
\begin{equation}
\displaystyle{
\det_{1 \leq i,j \leq N} \tilde{\Xi}_{ij} = \det_{1 \leq i,j \leq N} \Xi_{ij}(q_j,z) = c_N(\tau)\,\vartheta(z-N q_0)\prod\limits_{i<j}\vartheta(q_i-q_j)\,.
}
\end{equation}
Now a matrix element $\tilde{\Xi}_{ij}$ depends on the coordinate $q_j$ only. Therefore, the following determinants
\begin{equation}
\displaystyle{
\frac{1}{\det \Xi_{ij}(q_j,z)} \, :\det_{1 \leq i,j \leq N} \Big\{ \sum_{n \in\, \mathbb{Z}} (-\lambda)^n \omega^{\frac{n^2-n}{2}} \Xi_{ij}(q_j + n \eta,z) e^{n \hbar \partial_{q_j}} \Big\}:
}
\end{equation}
%
or
\begin{equation}
\displaystyle{
\frac{1}{\det \Xi_{ij}(q_j,z)} \, :\det_{1 \leq i,j \leq N}
\Big\{ \sum_{k \in\, \mathbb{Z}} \Xi_{ij,k}(z)\,
e^{(\alpha k + \sigma_{ij})q_j} \theta_\omega (\lambda e^{\alpha k \eta + \sigma_{ij} \eta} e^{ \hbar \partial_{q_j}}) \Big\}:
}
\end{equation}
%
can be calculated as ordinary determinants since the elements from different columns commute.
Indeed,
%
\begin{equation}
\begin{array}{c}
\displaystyle{
:\det_{1 \leq i,j \leq N} \Big\{ \sum_{n \in\, \mathbb{Z}} (-\lambda)^n \omega^{\frac{n^2-n}{2}} \Xi_{ij}(q_j + n \eta,z) e^{n \hbar \partial_{q_j}} \Big\}: =
}
\\ \ \\
\displaystyle{
=
\sum_{\sigma \in S_N} (-1)^{|\sigma|} \sum_{n_1,...,n_N \in\, \mathbb{Z}} \omega^{\sum_i\frac{n_i^2 - n_i}{2}} (-\lambda)^{\sum_i^N n_i} \prod_{i=1}^N \Xi_{\sigma(i)i}(q_i + n_i \eta) \prod_{j=1}^N e^{ n_j \hbar \partial_{q_j}} =
}
\\ \ \\
\displaystyle{
=
\sum_{\sigma \in S_N} (-1)^{|\sigma|} \sum_{n_1,...,n_N \in\, \mathbb{Z}} \omega^{\sum_i\frac{n_i^2 - n_i}{2}} (-\lambda)^{\sum_i^N n_i} \prod_{i=1}^N \Xi_{\sigma(i)i}(q_i + n_i \eta) \, e^{ n_i \hbar \partial_{q_i}} =
}
\\ \ \\
\displaystyle{
= \det_{1 \leq i,j \leq N} \Big\{ \sum_{n \in\, \mathbb{Z}} (-\lambda)^n \omega^{\frac{n^2-n}{2}} \Xi_{ij}(q_j + n \eta,z) e^{n \hbar \partial_{q_j}} \Big\}\,.
}
\end{array}
\end{equation}
Applying the pairing trick to them as in the proof of Theorem \ref{th1}, we arrive at
\begin{equation}
\frac{1}{\det \Xi_{ij}(q_j,z)} \, \langle \det_{1 \leq i,j \leq N} \Xi_{ij}(q_j,z) | \prod_k \theta_\omega (\lambda e^{\hbar \partial_{q_k}}) \rangle\,.
\end{equation}
Substituting the explicit expression for the determinant of $\tilde{\Xi}$, one obtains
\begin{equation}
\frac{1}{\det \Xi_{ij}(q_j,z)} \, \langle c_N(\tau)\,\vartheta(z-N q_0)\prod\limits_{i<j}\vartheta(q_i-q_j) | \prod_k \theta_\omega (\lambda e^{\hbar \partial_{q_k}}) \rangle\,,
\end{equation}
which, after taking the pairing, equals
\begin{equation}
\displaystyle{
\sum_{n_1,...,n_N \in\, \mathbb{Z}} \omega^{\sum_i\frac{n_i^2 - n_i}{2}} (-\lambda)^{\sum_i^N n_i}
\frac{\vartheta(z-N q_0 - \eta\sum_{i=1}^N n_i)}{\vartheta(z-N q_0)}
\prod_{i < j}^N \frac{\vartheta(q_i-q_j+\eta(n_i - n_j)) }{\vartheta(q_i-q_j)} \prod_i^N e^{ n_i \hbar\partial_{q_i}}\,.
}
\end{equation}
So, one obtains:
\begin{equation}
\displaystyle{
\hat{\mathcal O}'(z - N q_0,\lambda) = \frac{1}{\det \Xi_{i}(q_j,z)} \, \det_{1 \leq i,j \leq N} \Big\{ \sum_{n \in\, \mathbb{Z}} (-\lambda)^n \omega^{\frac{n^2-n}{2}} \Xi_{i}(q_j + n \eta,z) e^{n \hbar \partial_{q_j}} \Big\}
}
\end{equation}
%
By the same argument as above, we can restore the normal ordering:
\begin{equation}
\displaystyle{
\hat{\mathcal O}'(z - N q_0,\lambda) = \frac{1}{\det \Xi_{i}(q_j,z)} \, :\det_{1 \leq i,j \leq N} \Big\{ \sum_{n \in\, \mathbb{Z}} (-\lambda)^n \omega^{\frac{n^2-n}{2}} \Xi_{i}(q_j + n \eta,z) e^{n \hbar \partial_{q_j}} \Big\}:
}
\end{equation}
Finally, by shifting the parameter $z$ to $z+N q_0$, we obtain the desired identity. $\blacksquare$
\subsection{Determinant representation in terms of the Ruijsenaars-Schneider \texorpdfstring{$L$-matrix}{L-matrix}} \label{RSLax}
%
In this paragraph we derive one more useful representation for the generating function. Consider (\ref{e23}).
Let us unify the Vandermonde determinant and the determinant of the sum
into a single one. For this purpose we impose the normal ordering.
From (\ref{e23}) we easily conclude
%
\begin{equation}\label{e231}
\begin{array}{c}
\displaystyle{
\hat{\mathcal O}^{\hbox{\tiny{trig}}}(\lambda) = \frac{1}{\prod_{i<j} (x_i - x_j) } \det_{1 \leq i,j \leq N} \Big\{ \sum_{n \in\, \mathbb{Z}} (-\lambda)^n \omega^{\frac{n^2-n}{2}}\, \Xi_{ij}(t^n x_j)\, q^{n x_j \partial_j} \Big\}=
}
\\ \ \\
\displaystyle{
=:\det_{1 \leq i,j \leq N} \Big\{ \sum_{n \in\, \mathbb{Z}}\sum\limits_{k=1}^N
(-\lambda)^n \omega^{\frac{n^2-n}{2}}\, \Xi_{ik}^{-1}\, \Xi_{kj}(t^n x_j)\, q^{n x_j \partial_j} \Big\}:\,.
}
\end{array}
\end{equation}
%
The matrix entering (\ref{e211}),
\begin{equation}
{\hat L} ^{\rm RS}_{ij}(q,t) = \sum_{k=1}^N\Xi_{ik}^{-1}\, \Xi_{kj}(t x_j)\, q^{x_j \partial_j}\,,
\end{equation}
is the (quantum)
trigonometric Ruijsenaars-Schneider Lax
matrix. In order to rewrite it in a convenient form one should also
perform the gauge transformation with the diagonal matrix $D_{ij}=\delta_{ij}\prod_{k\neq i}(x_i-x_k)$; see details in \cite{Hasegawa2,ZV}.
Under the normal ordering the gauge transformed Lax operator has the same determinant.
Hence, we arrive at
the following determinant representation:
%
\begin{equation}\label{e238}
\begin{array}{c}
\displaystyle{
\hat{\mathcal O}^{\hbox{\tiny{trig}}}(\lambda)
=:\det_{1 \leq i,j \leq N} \Big\{ \sum_{n \in\, \mathbb{Z}}
(-\lambda)^n \omega^{\frac{n^2-n}{2}} {\hat L} _{ij}^{RS}(t^n,q^n) \Big\}:=
:\det\hat{\mathcal L}(\lambda):\,,
}
\end{array}
\end{equation}
%
where
%
\begin{equation}\label{e239}
\begin{array}{c}
\displaystyle{
\hat{\mathcal L}(\lambda)
= \sum_{n \in\, \mathbb{Z}}
(-\lambda)^n\omega^{\frac{n^2-n}{2}} {\hat L} ^{RS}(t^n,q^n)\in {\rm Mat}(N,\mathbb C)
}
\end{array}
\end{equation}
%
is the averaged sum of the Ruijsenaars Lax matrices, the averaging being over $\mathbb Z$ with theta-function weights.
The explicit form of $ {\hat L} ^{RS}(t,q)$ to be substituted into
(\ref{e239}) is given in Section \ref{s4}, expression (\ref{e543}). Its generalization to the rational case is given by the expressions (\ref{e557}) and (\ref{e2391}).
In the case with the spectral parameter the generating function depends on $z$. However, the arguments above can be repeated without any complications. Thus, its determinant representation is:
%
\begin{equation}\label{e509}
{\hat{\mathcal O}'}(z,\lambda) = :\det_{1 \leq i,j \leq N} \Big\{ \sum_{n \in\, \mathbb{Z} } (-\lambda)^n\omega^{\frac{n^2-n}{2}} {\hat L}^{RS}_{ij}(z, q^n, t^n) \Big\}:= :\det_{1 \leq i,j \leq N} \hat{\mathcal L}_{ij}(z,\lambda):\,,
\end{equation}
%
%
\begin{equation}\label{e510}
\hat{\mathcal L}(z,\lambda)=\sum_{n \in \mathbb{Z} } (-\lambda)^n\omega^{\frac{n^2-n}{2}}\, {\hat L}^{RS}(z, q^n, t^n)\,,
\end{equation}
%
where ${\hat L}^{RS}(z, q, t)$ is the elliptic Ruijsenaars-Schneider Lax matrix given by (\ref{e52}).
We present an alternative direct proof of the statements (\ref{e238}), (\ref{e509}) without
the use of intertwining matrices in the next Section.
\section{Alternative proof of relation to quantum Ruijsenaars-Schneider Lax operators}\label{s4}
\setcounter{equation}{0}
In this Section we give an alternative proof of the result of Subsection \ref{RSLax}, using the elliptic Cauchy determinant formula.
\subsection{Double elliptic \texorpdfstring{${\rm GL}_N$ model}{GL(N) model}}
The definition (\ref{e1}) can be alternatively
written in terms of the standard odd Jacobi theta-function, see (\ref{AppD}):
%
\begin{equation}\label{e101}
\displaystyle{
\hat{\mathcal O}'(\lambda)= \sum_{n_1,...,n_N \in\, \mathbb{Z}} \omega^{\sum_i\frac{n_i^2 - n_i}{2}} (-\lambda)^{\sum_i n_i} \prod_{i < j}^N \frac{\vartheta(q_i-q_j+\eta(n_i - n_j)) }{\vartheta(q_i-q_j)} \prod_i^N e^{ n_i \hbar\partial_{q_i}} = \sum_{n \in\, \mathbb{Z}} \lambda^n \hat{\mathcal O}_n'\,.
}
\end{equation}
%
Therefore, the Hamiltonians ${\hat H}_n=(\hat{\mathcal O}_0')^{-1}\hat{\mathcal O}_n'$ also commute. The extension of (\ref{e101}) to the case with a spectral parameter $z$ is
given in (\ref{s22}).
We are now in a position to represent (\ref{s22}) in terms of the (quantum) elliptic Ruijsenaars-Schneider ${\rm GL}_N$ Lax matrix with spectral parameter \cite{RS,Hasegawa}:
\begin{equation}\label{e52}
\displaystyle{
{\hat L}^{RS}_{ij}(z, \eta, \hbar)= \frac{\vartheta( -\eta) \vartheta(z+q_{ij}- \eta)}{\vartheta(z) \vartheta(q_{ij}- \eta)}\,\prod_{k\neq j}\frac{\vartheta(q_{jk}+\eta)}{\vartheta(q_{jk})}\,e^{ \hbar \partial_{q_j}}\,,
}
\end{equation}
where $q_{ij}=q_i-q_j$.
\begin{theorem}
Let ${\hat L}^{RS}_{ij}(z, q, t)$ be the quantum Lax matrix for the elliptic Ruijsenaars-Schneider model ($z$ being its spectral parameter); then the generating function (\ref{s22}) for the $\hat{\mathcal O}_n'$ operators acquires the form:
\begin{equation}\label{e50}
{\hat{\mathcal O}}'(z,\lambda) = :\det_{1 \leq i,j \leq N} \Big\{ \sum_{n \in\, \mathbb{Z} } \omega^{\frac{n^2-n}{2}} (-\lambda)^n {\hat L}^{RS}_{ij}(z, n\eta, n\hbar) \Big\}:= :\det_{1 \leq i,j \leq N} \hat{\mathcal L}_{ij}(z,\lambda):
\end{equation}
%
with
\begin{equation}\label{e502}
\hat{\mathcal L}_{ij}(z,\lambda)=\sum_{k \in\, \mathbb{Z} } \omega^{\frac{k^2-k}{2}} (-\lambda)^k {\hat L}^{RS}_{ij}(z, k\eta, k\hbar)\,.
\end{equation}
%
%
\end{theorem}
%
\noindent\underline{\em{Proof:}}\quad
Consider the determinant
\begin{equation}\label{w1}
\displaystyle{
:\det\hat{\mathcal L}:=\sum\limits_{\sigma} (-1)^{|\sigma|} :\hat{\mathcal L}_{\sigma(1)1}\hat{\mathcal L}_{\sigma(2)2}\, ...\, \hat{\mathcal L}_{\sigma(N)N}:\,.
}
\end{equation}
%
and substitute into it the expression (\ref{e502}).
Let us represent it as a sum of determinants. For this purpose, collect all the terms with a fixed $\prod_i^N e^{n_i \hbar \partial_{q_i}}$:
\begin{equation}\label{w2}
\begin{array}{c}
\displaystyle{
:\det\hat{\mathcal L}:=\sum_{n_1,...,n_N \in\, \mathbb{Z}} \omega^{\sum_i\frac{n_i^2 - n_i}{2}} (-\lambda)^{\sum_i n_i}\times
}
\\ \ \\
\displaystyle{
\times
\sum\limits_{\sigma} (-1)^{|\sigma|}
:\hat{L}^{\rm RS}_{\sigma(1)1}(z,n_1\eta, n_1 \hbar)\hat{L}^{\rm RS}_{\sigma(2)2}(z,n_2\eta, n_2 \hbar)\, ...\,
\hat{L}^{\rm RS}_{\sigma(N)N}(z,n_N\eta, n_N \hbar) :
}
\\ \ \\
\displaystyle{
=\sum_{n_1,...,n_N \in\, \mathbb{Z}} \omega^{\sum_i\frac{n_i^2 - n_i}{2}} (-\lambda)^{\sum_i n_i}
:\det_{1 \leq i,j \leq N}\hat{L}_{ij}^{\rm RS}(z,n_j\eta, n_j \hbar):\,,
}
\end{array}
\end{equation}
%
where the matrix $\hat{L}_{ij}^{\rm RS}(z,n_j\eta, n_j \hbar)$ is constructed by combining
rows from different terms of the sum (\ref{e502}).
Using its explicit form (\ref{e52}), let us rewrite it in terms of the elliptic Cauchy matrix:
%
\begin{equation}\label{w3}
\displaystyle{
\hat{L}_{ij}^{\rm RS}(z,q_i-q_j,n_j\eta, n_j \hbar)=\vartheta(-n_j \eta){L}^{\rm Cauchy}_{ij}(z)\prod_{k:k\neq j}\frac{\vartheta({\tilde q}_{j}-q_k)}{\vartheta(q_{j}-q_k)}\,e^{n_j \hbar \partial_j}\,,
}
\end{equation}
%
where
%
\begin{equation}\label{w4}
\displaystyle{
{L}^{\rm Cauchy}_{ij}(z)=\frac{ \vartheta(z+q_{i}-{\tilde q}_j)}{\vartheta(z) \vartheta(q_{i}-{\tilde q}_j)}\,,
\qquad {\tilde q}_j=q_j+ n_j\eta\,.
}
\end{equation}
%
Therefore,
%
\begin{equation}\label{w5}
\begin{array}{c}
\displaystyle{
:\det_{1 \leq i,j \leq N}\hat{L}_{ij}^{\rm RS}(z,q_i-q_j,n_j\eta, n_j \hbar):\,=
}
\\ \ \\
\displaystyle{
=\det_{1 \leq i,j \leq N} {L}^{\rm Cauchy}_{ij}(z)
\prod_{k=1}^N \vartheta(-n_k \eta)
\prod_{k,j:k\neq j}\frac{\vartheta({\tilde q}_{j}-q_k)}{\vartheta(q_{j}-q_k)}
\prod_{k=1}^N e^{n_k \hbar\partial_k}\,.
}
\end{array}
\end{equation}
%
Plugging the Cauchy determinant (\ref{e875}) into (\ref{w5}) we get (\ref{s22}).
$\blacksquare$
The result of this Theorem remains valid in the trigonometric and rational cases as well. The degenerations are obtained
by the substitutions $\vartheta(u)\rightarrow\sinh(u)\rightarrow u$.
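As an independent sanity check (not part of the derivation above), the rational degeneration of the Cauchy-determinant step can be verified symbolically. The sketch below assumes the rational limit of the Frobenius-type formula (\ref{e875}), namely $\det_{ij}\,\frac{z+x_i-y_j}{z(x_i-y_j)}=\frac{z+\sum_i(x_i-y_i)}{z}\,\frac{\prod_{i<j}(x_i-x_j)(y_j-y_i)}{\prod_{i,j}(x_i-y_j)}$, with $x_i$, $y_j$ standing in for $q_i$, ${\tilde q}_j$; the variable names are ours.

```python
import sympy as sp

# Stand-ins for q_i and q~_j = q_j + n_j*eta (distinct, x_i != y_j):
z = sp.Rational(5, 7)
x = [sp.Integer(0), sp.Integer(1), sp.Integer(3)]
y = [sp.Rational(1, 2), sp.Rational(5, 2), sp.Rational(7, 2)]
N = len(x)

# Rational degeneration (theta(u) -> u) of the elliptic Cauchy matrix L^Cauchy_{ij}(z)
C = sp.Matrix(N, N, lambda i, j: (z + x[i] - y[j]) / (z * (x[i] - y[j])))

# Assumed rational limit of the Frobenius-type determinant formula (e875)
rhs = (z + sum(x) - sum(y)) / z
for i in range(N):
    for j in range(N):
        rhs /= (x[i] - y[j])
for i in range(N):
    for j in range(i + 1, N):
        rhs *= (x[i] - x[j]) * (y[j] - y[i])

diff = sp.simplify(C.det() - rhs)  # expected to vanish identically
```

The check is exact (rational arithmetic), so a nonzero `diff` would signal a wrong determinant formula rather than numerical noise.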
Consider particular examples corresponding to the intertwining matrices (\ref{e335}), (\ref{e423}) and (\ref{e331}), (\ref{e433}). These are the models (p-q) dual to the elliptic Ruijsenaars-Schneider and elliptic Calogero-Moser models respectively\footnote{Let us note once more that terminology such as ``dual to the elliptic Ruijsenaars (or Calogero) model'' comes from the Mironov-Morozov description of the Dell model based on the p-q duality. Here and in what follows we mean the trigonometric (or rational) $p=0$ limit of (\ref{e1}), though its relation to the p-q duality needs to be clarified.}.
\subsection{Dual to the elliptic Ruijsenaars model}
In the ${\rm GL}_N$ case the relations (\ref{e238})-(\ref{e239})
hold true for the Ruijsenaars-Schneider Lax matrix
\begin{equation}\label{e543}
{\hat L} _{ij}^{\rm RS}(q,t) = \delta_{ij} \prod_{k \neq i} \frac{t x_i - x_k}{x_i - x_k}\, q^{x_i \partial_i} + (1-\delta_{ij}) \frac{(1-t)x_j}{x_i - x_j} \prod_{k \neq i,j} \frac{t x_j - x_k}{x_j - x_k}\, q^{x_j \partial_j}\,.
\end{equation}
This expression corresponds to the intertwining matrix (\ref{e335}). Being substituted into (\ref{e502}) it gives the generating function
(\ref{e1}) with $\theta_p(x)=1-x$ (up to conjugation).
The intertwining matrix (\ref{e423}) provides
the Lax matrix with spectral parameter (see details in \cite{ZV}):
\begin{equation}
\label{q577}
\displaystyle{
{\hat L} ^{\rm RS}_{ij}(z) = -e^{-(N-2)\eta}\sinh(\eta) \Big(\coth(q_{i}-q_{j}-\eta) + \coth(z)\Big)
\prod \limits_{k:k \neq
j}^{N}\frac{\sinh(q_{j}-q_{k}+\eta)}{\sinh(q_{j}-q_{k})}\,e^{\hbar\partial_{q_j}}\,.
}
\end{equation}
To get this Lax matrix one should substitute $y_{j} = e^{-2q_{j}+2q_0+2z/N}$ into (\ref{e42310}). Then from
(\ref{e42311}) we obtain the following generating function of the Hamiltonians related to the Lax matrix (\ref{q577}):
\begin{equation}\label{e199}
\begin{array}{c}
\displaystyle{
\hat{\mathcal O}'(z,\lambda)= \sum_{k \in\, \mathbb{Z}} \frac{\sinh(z-k\eta)}{\sinh(z)} \lambda^k \hat{\mathcal O}_k'
=\sum_{n_1,...,n_N \in\, \mathbb{Z}} \omega^{\sum_i\frac{n_i^2 - n_i}{2}}
e^{(N-2)(z-\eta\sum_{i=1}^N n_i)}\,\times
}
\\ \ \\
\displaystyle{
\times\frac{\sinh(z-\eta\sum_{i=1}^N n_i)}{\sinh(z)}
(-\lambda)^{\sum_i^N n_i} \prod_{i < j}^N \frac{\sinh(q_i-q_j+\eta(n_i - n_j)) }{\sinh(q_i-q_j)} \prod_i^N e^{ n_i \hbar\partial_{q_i}}\,.
}
\end{array}
\end{equation}
\subsection{Dual to the elliptic Calogero-Moser model}
Consider the $\Xi$ matrix (\ref{e331}), so that $\det\Xi_{ij}=\prod\limits_{i<j}^N(q_i-q_j)$.
The Lax matrix of the rational Ruijsenaars-Schneider model generated
by this $\Xi$-matrix via (\ref{e211}) is of the form:
\begin{equation}\label{e557}
\displaystyle{
{\hat L} ^{\rm RS}_{ij}(q_i-q_j,\eta,\hbar)=\left(\frac{-\eta}{q_i-q_j-\eta}\right)\prod\limits_{k:k\neq j}^N\frac{q_j-q_k+\eta}{q_j-q_k}\,e^{\hbar\partial_{q_j}}\,.
}
\end{equation}
Being substituted into the averaged matrix
%
\begin{equation}\label{e2391}
\begin{array}{c}
\displaystyle{
\hat{\mathcal L}
= \sum_{k \in\, \mathbb{Z}}
(-\lambda)^k \omega^{\frac{k^2-k}{2}} {\hat L} ^{\rm RS}(k\eta,k\hbar)\in {\rm Mat}(N,\mathbb C)
}
\end{array}
\end{equation}
%
it provides the following rational analogue for (\ref{e1}):
\begin{equation}\label{e240}
\displaystyle{
\hat{\mathcal O}'(\lambda)= \sum_{n_1,...,n_N \in\, \mathbb{Z}} \omega^{\sum_i\frac{n_i^2 - n_i}{2}} (-\lambda)^{\sum_i n_i}\prod_{i < j}^N \frac{q_i-q_j+\eta(n_i - n_j) }{q_i-q_j} \prod_i^N e^{ n_i \hbar\partial_{q_i}}\,.
}
\end{equation}
%
In the case with spectral parameter $z$ we deal with intertwining matrix (\ref{e433}), which leads
to the following Lax operator (see details in \cite{ZV}):
%
\begin{equation}\label{e244}
\displaystyle{
{\hat L} ^{\rm RS}_{ij}(z,\eta,\hbar)=-\eta
\left(\frac1z+\frac{1}{q_i-q_j-\eta}\right)\prod\limits_{k:k\neq j}^N\frac{q_j-q_k+\eta}{q_j-q_k}e^{\hbar\partial_{q_j}}\,.
}
\end{equation}
Then
\begin{equation}\label{e245}
\displaystyle{
\hat{\mathcal O}'(z,\lambda)= \sum_{n_1,...,n_N \in\, \mathbb{Z}} \omega^{\sum_i\frac{n_i^2 - n_i}{2}} (-\lambda)^{\sum_i n_i}
\Big(\frac{z-\eta\sum_{i=1}^N n_i}{z}\Big)
\prod_{i < j}^N\frac{q_i-q_j+\eta(n_i - n_j) }{q_i-q_j} \prod_i^N e^{ n_i \hbar\partial_{q_i}}\,.
}
\end{equation}
%
The limit $z\rightarrow\infty$
of this answer yields the expression (\ref{e240}).
\section{Eigenvalues for the dual to elliptic Ruijsenaars model}\label{s3}
\setcounter{equation}{0}
Let us now proceed to the first possible application of our result. We are going to derive the general formula for the eigenvalues of the dual to elliptic Ruijsenaars model. It corresponds to the trigonometric limit $p=0$ of the Dell system.
So, the generating function looks as follows:
\begin{equation}\label{e36}
{\hat{\mathcal O}}^{\hbox{\tiny{trig}}}(u) =
\frac{1}{\prod_{i<j} (x_i - x_j)} \det_{1 \leq i, j \leq N} x_i^{N-j}
\theta_{\omega}(u t^{N-j} q^{x_i \partial_i})\,.
\end{equation}
By introducing the standard notation $\delta_j = N-j$ and denoting by $\Delta$ the Vandermonde determinant,
we write (\ref{e36}) as
%
\begin{equation}\label{e37}
\hat{\mathcal O}^{\hbox{\tiny{trig}}}(u) = \frac{1}{\Delta} \det_{1 \leq i, j \leq N} x_i^{\delta_j} \theta_{\omega}(ut^{\delta_j} q^{x_i \partial_i})\,,
\qquad \Delta=\prod_{i<j} (x_i - x_j)\,.
\end{equation}
%
%
By the same arguments as in the $\omega = 0$ case, we can see that the operator $\hat{\mathcal O}^{\hbox{\tiny{trig}}}(u)$ preserves the space $\Lambda_N$ of symmetric functions of the variables $x_1,...,x_N$.
Therefore, we can consider the eigenvalue problem for the operator $\hat{\mathcal O}^{\hbox{\tiny{trig}}}(u)$:
\begin{equation}
\hat{\mathcal O}^{\hbox{\tiny{trig}}}(u)\Psi=E(u)\Psi
\end{equation}
where $\Psi$ is an element of $\Lambda_N$.
\begin{theorem}
The eigenvalues of the operator $\hat{\mathcal O}^{\hbox{\tiny{trig}}}(u)$ are labeled by the Young diagrams $\lambda=(\lambda_1,...,\lambda_N)$.
The generating function of the eigenvalues takes the form:
\begin{equation}\label{e38}
E(u)_\lambda = \prod_{i=1}^{N} \theta_{\omega}(ut^{N-i} q^{\lambda_i})\,.
\end{equation}
\end{theorem}
%
\noindent\underline{\em{Proof:}}\quad
Let $m_\lambda$ be the monomial symmetric function:
\begin{equation}\label{e39}
m_\lambda = \frac{1}{|S_\lambda|}\sum_{\sigma \in S_N} x^{\sigma (\lambda)} = \frac{1}{|S_\lambda|}\sum_{\sigma \in S_N} x_1^{\sigma (\lambda_1)}... \, x_N^{\sigma (\lambda_N)}\,,
\end{equation}
where $|S_\lambda|$ is the order of the symmetry group of the diagram $\lambda$, namely, the product of the factorials of the multiplicities of its rows.
They form a basis in the space of symmetric functions.
Following Macdonald's book \cite{McD}, let us calculate their image under the action of the operator $\hat{\mathcal O}^{\hbox{\tiny{trig}}}(u)$:
\begin{eqnarray}\label{e40}
\frac{1}{\Delta} \det_{1 \leq i, j \leq N} x_i^{\delta_j} \theta_{\omega}(ut^{\delta_j} q^{x_i \partial_i}) m_\lambda = \frac{1}{\Delta} \frac{1}{|S_\lambda|}\sum_{(\sigma,\,\sigma') \in S_N \times S_N} (-1)^{|\sigma|} \prod_{i=1}^{N} \theta_{\omega}(ut^{\sigma(\delta_i)} q^{\sigma'(\lambda_i)}) x_i^{\sigma'(\lambda_i)+\sigma(\delta_i)}\,.
\end{eqnarray}
%
Next, change the summation variables by introducing $\pi$ with the property $\sigma' = \sigma \pi$.
Then, rearranging the factors in the product, we get
\begin{equation}\label{e41}
\hat{\mathcal O}^{\hbox{\tiny{trig}}}(u)m_\lambda = \frac{1}{\Delta}\frac{1}{|S_\lambda|}\sum_{(\pi, \sigma) \in S_N \times S_N} (-1)^{|\sigma|} \prod_{i=1}^{N} \theta_{\omega}(ut^{\delta_i} q^{\pi(\lambda_i)}) x_i^{\sigma(\pi(\lambda_i)+\delta_i)}\,,
\end{equation}
%
or, in terms of the Schur functions
\begin{equation}
s_{\lambda}= \frac{1}{\Delta}\det(x_i^{\lambda_j+\delta_j})\,,
\end{equation}
\begin{equation}\label{e42}
\hat{\mathcal O}^{\hbox{\tiny{trig}}}(u)m_\lambda = \frac{1}{|S_\lambda|}\sum_{\pi \in S_N}\prod_{i=1}^{N} \theta_{\omega}(ut^{\delta_i} q^{\pi(\lambda_i)}) s_{\pi(\lambda)}\,.
\end{equation}
%
%
Recall the following properties of the Schur functions:
1) $s_{\pi(\lambda)}$ is either zero or equal to $ \pm s_\mu$ for some $\mu < \lambda$ (unless $\pi(\lambda) = \lambda$);
2) their relation to the monomial symmetric functions is given by
\begin{equation}\label{e43}
s_{\lambda} = m_{\lambda} + \sum_{\mu < \lambda} u_{\lambda \mu} m_\mu
\end{equation}
for some numbers $u_{\lambda \mu}$. Therefore, the operator $\hat{\mathcal O}^{\hbox{\tiny{trig}}}(u)$ is upper triangular in the basis $\{m_\lambda\}$, and its eigenvalues have the form:
\begin{equation} \label{eigenvalue}
E(u)_\lambda = \prod_{i=1}^{N} \theta_{\omega}(ut^{N-i} q^{\lambda_i})=\sum\limits_{k\in\,\mathbb Z}u^k E_{k,\lambda}\,.
\end{equation}
This finishes the proof. $\blacksquare$\\
Hence, we conclude that, despite being non-commuting, the operators $\hat{\mathcal O}_n$ can be simultaneously brought to upper triangular form in the basis $\{m_\lambda\}$. It would be interesting to find out whether an analogous phenomenon takes place in the case $p \neq 0$.
\subsection*{Eigenvalues in the ${\rm GL}_2$ case and comparison to the known answer}
Let us write down the explicit expression for the eigenvalue $\mathcal{E}_{1, \lambda}$ of the first Hamiltonian (\ref{e2})
$\hat{H}_1 = {\hat{\mathcal O}}_0^{-1}{\hat{\mathcal O}}_1$.
Expanding the result (\ref{eigenvalue}) for the eigenvalues of ${\hat{\mathcal O}}(u)$ in powers of $u$ (and to the first order in $\omega$), we obtain the eigenvalue of $\hat{H}_1$:
\begin{equation}\label{e45}
\begin{array}{l}
\displaystyle{
\mathcal{E}_{1, \lambda} = \frac{E_{1, \lambda}}{E_{0, \lambda}}
=-\frac{\sum_i t^{N-i}q^{\lambda_i}}{1+ \omega \sum_{i \neq j} t^{-i+j} q^{\lambda_i - \lambda_j}} =
}
\\ \ \\
\displaystyle{
= -\sum_i t^{N-i}q^{\lambda_i}+ \omega \sum_{i \neq j} \sum_k t^{N-i+j-k} q^{\lambda_i - \lambda_j + \lambda_k} +...\,.
}
\end{array}
\end{equation}
%
For the ${\rm GL}_2$ case and the choice of $\lambda = (\lambda_1, \lambda_2) = (1,0)$ expression (\ref{e45}) yields
\begin{equation}\label{e46}
\mathcal{E}_{1, (1,0)} = -1-tq + \omega \frac{(1+qt) (1+q^2 t^2)}{qt} + ...\,.
\end{equation}
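As a sanity check (ours, not part of the proof), the expansion (\ref{e46}) can be reproduced symbolically from (\ref{eigenvalue}), assuming the theta series $\theta_\omega(x)=\sum_{n\in\mathbb Z}(-x)^n\omega^{(n^2-n)/2}$ implicit in the generating functions above; in the sketch `t`, `q`, `w` stand for $t$, $q$, $\omega$.

```python
import sympy as sp

t, q, w = sp.symbols('t q w')
nmax = 4  # truncation of the theta series; sufficient to order w^1

def E_coeff(k, lam, N=2):
    """Coefficient of u^k in prod_i theta_w(u t^{N-i} q^{lam_i}),
    with theta_w(x) = sum_n (-x)^n w^{(n^2-n)/2} (assumed form)."""
    xs = [t**(N - 1 - i) * q**lam[i] for i in range(N)]
    total = sp.S(0)
    for m in range(-nmax, nmax + 1):
        n = k - m
        if abs(n) <= nmax:
            total += ((-xs[0])**m * (-xs[1])**n
                      * w**((m*m - m)//2 + (n*n - n)//2))
    return sp.expand(total)

# GL_2, lambda = (1,0): ratio E_1/E_0 expanded to first order in w
E0, E1 = E_coeff(0, (1, 0)), E_coeff(1, (1, 0))
ratio = sp.series(E1 / E0, w, 0, 2).removeO()
expected = -(1 + t*q) + w*(1 + q*t)*(1 + q**2*t**2)/(q*t)
diff = sp.simplify(ratio - expected)  # expected to vanish
```

The truncation `nmax = 4` is harmless here because higher theta-series terms contribute only at order $\omega^2$ and beyond.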
%
Up to the factor $t^{\frac{1}{2}}$ the results (\ref{eigenvalue})-(\ref{e46}) coincide with those
obtained in \cite{Sh} and \cite{Awata}. The factor $t^{\frac{1}{2}}$ comes from a slightly different definition of the Hamiltonians.
\section{Classical mechanics: Manakov representation}\label{s5}
\setcounter{equation}{0}
In this section we describe the classical limit of our construction and derive from it the Manakov $L$-$A$-$B$ triple representation. The first step is to express the generating function of the Hamiltonians as the ratio of two determinants. In the classical limit these two determinants can be combined into one, thus giving the expression for the classical spectral curve and the corresponding $L$-matrix.
\subsection{One more generating function for the Dell Hamiltonians}
To accomplish the first step described above, let us introduce an alternative version of the generating function of the commuting
Dell Hamiltonians: put the operator $\hat{\mathcal O}(1)=\hat{\mathcal O}(\lambda)|_{\lambda=1}$ in (\ref{e2}) instead of $\hat{\mathcal O}_0$.
\begin{lemma}
The operator
\begin{equation}\label{e55}
\hat{\mathcal H}(\lambda) = {\hat{\mathcal O}}(1)^{-1} {\hat{\mathcal O}}(\lambda)=\sum_{n \in \mathbb{Z}} \lambda^n \hat{\mathcal H}_n\,,\quad
\hat{\mathcal H}_n={\hat{\mathcal O}}(1)^{-1} {\hat{\mathcal O}}_n
\end{equation}
%
is also a generating function of the commuting Hamiltonians,
so that commutativity of $\hat{\mathcal H}_n$ follows from commutativity of $\hat{H}_n$.
\end{lemma}
%
\noindent\underline{\em{Proof:}}\quad
First, let us notice that the operators
$
{\hat H}_{k \, n} = \hat{\mathcal O}_k^{-1} \hat{\mathcal O}_n = {\hat H}_k^{-1} {\hat H}_n
$
also commute with each other due to commutativity of $\hat{H}_k$.
Therefore, ${\hat H}_{mk} {\hat H}_{nk}={\hat H}_{nk} {\hat H}_{mk}$, or acting on this equality by ${\hat{\mathcal O}}_k^{-1}$ from the right
\begin{equation}\label{e62}
{\hat{\mathcal O}}_m^{-1} {\hat{\mathcal O}}_k {\hat{\mathcal O}}_n^{-1} = {\hat{\mathcal O}}_n^{-1} {\hat{\mathcal O}}_k {\hat{\mathcal O}}_m^{-1}\,.
\end{equation}
%
Next, summing up over $k\in\mathbb Z$ gives
%
\begin{equation}\label{e60}
{\hat{\mathcal O}}_m^{-1} {\hat{\mathcal O}}(1) {\hat{\mathcal O}}_n^{-1} = {\hat{\mathcal O}}_n^{-1} {\hat{\mathcal O}}(1) {\hat{\mathcal O}}_m^{-1}\,.
\end{equation}
%
By taking its inverse we get
%
\begin{equation}\label{e59}
{\hat{\mathcal O}}_n {\hat{\mathcal O}}(1)^{-1} {\hat{\mathcal O}}_m = {\hat{\mathcal O}}_m {\hat{\mathcal O}}(1)^{-1} {\hat{\mathcal O}}_n\,.
\end{equation}
%
Finally, multiplying both sides by ${\hat{\mathcal O}}(1)^{-1}$ from the left yields
\begin{equation}\label{e58}
{\hat{\mathcal O}}(1)^{-1} {\hat{\mathcal O}}_n {\hat{\mathcal O}}(1)^{-1} {\hat{\mathcal O}}_m = {\hat{\mathcal O}}(1)^{-1} {\hat{\mathcal O}}_m {\hat{\mathcal O}}(1)^{-1} {\hat{\mathcal O}}_n
\end{equation}
for any $n$ and $m$, which is equivalent to $[\hat{\mathcal H}_n,\hat{\mathcal H}_m]=0$. $\blacksquare$
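The algebra of this proof can be illustrated on a finite-dimensional toy model (our construction, not from the text): take $\hat{\mathcal O}_k=A D_k$ with a fixed invertible matrix $A$ and diagonal matrices $D_k$, so that the $\hat H_n=\hat{\mathcal O}_0^{-1}\hat{\mathcal O}_n$ are diagonal and hence commute, and check numerically that the $\hat{\mathcal H}_n=\hat{\mathcal O}(1)^{-1}\hat{\mathcal O}_n$ commute as well.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 4, 4  # matrix size, number of operators O_0,...,O_{K-1}

# Toy family: O_k = A D_k with fixed invertible A and diagonal D_k, so that
# H_n = O_0^{-1} O_n = D_0^{-1} D_n are diagonal and therefore commute
# (the hypothesis of the Lemma).
A = rng.standard_normal((N, N)) + 3 * np.eye(N)
D = [np.diag(rng.uniform(1, 2, N)) for _ in range(K)]
O = [A @ Dk for Dk in D]

O1 = sum(O)                                  # analogue of O(1) = sum_k O_k
H = [np.linalg.solve(O1, Ok) for Ok in O]    # analogue of H_n = O(1)^{-1} O_n

comm = max(np.abs(Hn @ Hm - Hm @ Hn).max()
           for Hn in H for Hm in H)          # all commutators should vanish
```

Here the sum defining $\hat{\mathcal O}(1)$ is finite by construction; the toy model only tests the algebraic chain (\ref{e62})-(\ref{e58}), not the operator-valued statement itself.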
Due to (\ref{e50}) the generating function of the quantum Hamiltonians (\ref{e55}) takes the form:
\begin{equation}\label{e64}
\hat{\mathcal H}(\lambda) = :(\det_{1 \leq i,j \leq N} \mathcal{L}_{ij}(z,1)):^{-1}\,
:\det_{1 \leq i,j \leq N} \mathcal{L}_{ij}(z,\lambda):\,.
\end{equation}
%
On the one hand, ${\hat{\mathcal O}}(1)$, compared to ${\hat{\mathcal O}}_0$, is hard to invert, since its Taylor series expansion in $\omega$ does not start with 1. On the other hand, the advantage of ${\hat{\mathcal O}}(1)$ is that it admits a determinant representation, while there is no natural way to find
a determinant representation for ${\hat{\mathcal O}}_0$.
\subsection{Spectral \texorpdfstring{$L$-matrix}{L-matrix}}
The operator ${\hat{\mathcal O}}(1)^{-1}$ in (\ref{e55}) acts on ${\hat{\mathcal O}}(\lambda)$ as a genuine quantum operator,
so that we cannot unify the normal orderings in (\ref{e64}).
At the same time in the classical limit (\ref{e64}) reduces to
\begin{equation}\label{e68}
{\mathcal H}(z,\lambda) =
\det_{N\times N}[ \mathcal{L}^{-1}(z,1)\mathcal{L}(z,\lambda)]\,,
\end{equation}
%
that is, the matrix
%
\begin{equation}\label{e69}
L(z,\lambda)=\mathcal{L}^{-1}(z,1)
\mathcal{L}(z,\lambda) \in {\rm Mat}(N,\mathbb C)
\end{equation}
%
with
%
\begin{equation}\label{e6930}
{\mathcal L}(z,\lambda)=\sum_{n \in \mathbb{Z} } (-\lambda)^n\omega^{\frac{n^2-n}{2}}\, {L}^{RS}(z, q^n, t^n)
\end{equation}
%
arises, whose determinant ${\mathcal H}(z,\lambda)$ is the generating function of the classical Hamiltonians.
They commute with respect to the canonical Poisson structure
%
\begin{equation}\label{e690}
\{p_i,q_j\}=\delta_{ij}\,.
\end{equation}
%
The expression ${\mathcal H}(z,\lambda)$ can be considered as an analogue of the expression $\det(\lambda-l(z))$ for the spectral curve
of an integrable system with the Lax matrix $l(z)$. This is easy to see in the limit $\omega=0$. Due to (\ref{e831}) we have
%
\begin{equation}\label{e691}
{\mathcal L}(z,\lambda)|_{\omega=0}= 1_N-\lambda L^{\rm RS}(z,q,t)\,,
\end{equation}
%
where $1_N$ is the identity $N\times N$ matrix.
Plugging (\ref{e691}) into (\ref{e69}) we get
%
\begin{equation}\label{e692}
L(z,\lambda)=
\mathcal{L}^{-1}(z,1)\mathcal{L}(z,\lambda)=
\lambda 1_N+(1-\lambda)\Big(1_N-L^{\rm RS}(z,q,t)\Big)^{-1}\,.
\end{equation}
%
Therefore, equation ${\mathcal H}(z,\lambda)|_{\omega=0} = 0$
is indeed the spectral curve of the elliptic Ruijsenaars-Schneider model (written in some complicated way).
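The matrix identity behind (\ref{e692}) is elementary: $(1_N-L)^{-1}(1_N-\lambda L)=\lambda 1_N+(1-\lambda)(1_N-L)^{-1}$ for any $L$ with $1_N-L$ invertible. A numerical sketch (ours), with a small random matrix standing in for $L^{\rm RS}(z,q,t)$:

```python
import numpy as np

rng = np.random.default_rng(2)
N, lam = 4, 0.37
L = 0.3 * rng.standard_normal((N, N))  # random stand-in for L^RS(z,q,t)
I = np.eye(N)

lhs = np.linalg.solve(I - L, I - lam * L)            # (1 - L)^{-1}(1 - lam*L)
rhs = lam * I + (1 - lam) * np.linalg.inv(I - L)     # r.h.s. of (e692)
max_dev = np.abs(lhs - rhs).max()                    # should be ~ machine eps
```

The scaling factor $0.3$ merely keeps $1_N-L$ safely invertible; the identity itself holds for any such $L$.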
In the general case $L(z,\lambda)$ is not a Lax matrix: its
eigenvalues do not commute with respect to (\ref{e690}).
Let us remark that the existence of the Manakov $L$-$A$-$B$ triple does
not contradict the possible simultaneous existence of some Lax pair.
If we had a true Lax matrix
for the Dell model, then $\det L(z,\lambda)$ would represent its
spectral curve. So, if the Lax representation exists,
we need to find a matrix $\breve{L}$ of size $M\times M$
(as was mentioned in \cite{MM} it is natural
to expect $M=\infty$)
and a change of variables $u=u(z,\lambda)$, $\zeta=\zeta(z,\lambda)$
satisfying
%
\begin{equation}\label{e693}
\det_{N\times N}{\mathcal L}(z,\lambda)=\det_{M\times M}\Big(u-\breve{L}(\zeta)\Big)\,.
\end{equation}
%
Another comment is about the geometrical meaning of ${\mathcal L}(z,\lambda)$ (\ref{e6930}) and the $L$-matrix (\ref{e69}).
In the Krichever-Hitchin approach to integrable systems these matrices are sections of (the Higgs) bundles
over a base spectral curve with a coordinate $z$. The classical analogue of the Ruijsenaars-Schneider Lax matrix (\ref{e0})
takes the form
\begin{equation}\label{e077}
\displaystyle{
{L}^{RS}_{ij}(z)= \frac{\vartheta( -\eta) \vartheta(z+q_{ij} - \eta)}{\vartheta(z) \vartheta(q_{ij} -\eta)}\,e^{p_j/c}\prod_{k\neq j}\frac{\vartheta(q_{jk}+\eta)}{\vartheta(q_{jk})}\,,
\quad q_{ij}=q_i-q_j\,.
}
\end{equation}
Here $c$ is the ``speed of light'' constant of the classical Ruijsenaars model.
Its quasi-periodic behaviour is as follows: ${L}^{RS}(z+1)= {L}^{RS}(z)$ and
\begin{equation}\label{e078}
\displaystyle{
{L}^{RS}(z+\tau)= e^{2\pi\imath\eta}{\rm Ad}_{\exp(-2\pi\imath\,\,{\rm diag}(q_1,...,q_N))}\, {L}^{RS}(z)\,.
}
\end{equation}
The first factor in (\ref{e078}) means that all terms in the sum (\ref{e6930}) have different quasi-periodic behaviour
on the lattice of periods $\langle 1,\tau\rangle$. Therefore, ${\mathcal L}(z,\lambda)$ is not a section of a bundle over the elliptic curve.
This can be easily corrected by the substitution
\begin{equation}\label{e079}
\displaystyle{
\lambda=\lambda'\,\frac{\vartheta(z)}{\vartheta(z-\eta)}\,.
}
\end{equation}
Then the first factor in (\ref{e078}) gets canceled, and we come to a quasi-periodic matrix ${\mathcal L}(z,\lambda)$.
The $L$-matrix (\ref{e69}) is quasi-periodic as well. The price for the change of variables (\ref{e079}) is as follows.
Initially the matrix ${\mathcal L}(z,\lambda)$ was not quasi-periodic, but had a single simple pole at $z=0$.
After the change of variables we obtain a quasi-periodic matrix function, but with higher order poles.
The terms with positive $n$ in the sum (\ref{e6930}) acquire a pole of order $n$ at $z=\eta$, and the
terms with negative $n$ acquire a pole of order $-n+1$ at $z=0$.
In what follows we do not use the substitution (\ref{e079})
keeping in mind that it can be done.
\subsection{\texorpdfstring{$L$-$A$-$B$ triple}{L-A-B triple}}
Consider the $L$-matrix (\ref{e69}).
It is easy to see that this matrix identically satisfies the following equations,
known as the Manakov representation \cite{Manakov}:
\begin{equation}\label{q2}
\frac{d}{dt_k}L(z,\lambda)=[L(z,\lambda),M_k(z)]+B_k(z,\lambda)L(z,\lambda)\,,
\end{equation}
where
\begin{equation}\label{q3}
{\rm tr} B_k(z,\lambda)=0\,,
\end{equation}
%
and the ``time'' derivatives will be specified later.
Indeed, by differentiating (\ref{e69}) we get
%
\begin{equation}\label{q4}
M_k(z)=\mathcal{L}^{-1}(z,\lambda)
\Big(\frac{d}{dt_k}\mathcal{L}(z,\lambda)\Big),
\end{equation}
%
so that the Manakov's $M_k(z)$ depends also on $\lambda$ in our case,
and
\begin{equation}\label{q5}
B_k(z,\lambda)=
\mathcal{L}^{-1}(z,\lambda)
\Big(\frac{d}{dt_k}\mathcal{L}(z,\lambda)\Big)
-
\mathcal{L}^{-1}(z,1)
\Big(\frac{d}{dt_k}\mathcal{L}(z,1)\Big)
\,.
\end{equation}
%
The property (\ref{q3}) follows from
\begin{equation}\label{q6}
\frac{d}{dt_k}\det L(z,\lambda)=\frac{d}{dt_k}\frac{\det \mathcal{L}(z,\lambda)}{\det \mathcal{L}(z,1)}\,.
\end{equation}
%
The l.h.s. of (\ref{q6}) equals zero since $\det L(z,\lambda)={\mathcal H}(z,\lambda)$, while the r.h.s. is proportional to the trace
of $B_k(z,\lambda)$ (\ref{q5}).
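The tracelessness (\ref{q3}) can be illustrated on a finite-dimensional toy model (our construction): take $\mathcal{L}(\lambda,t)=1_N+\lambda\, g(t)E_0\,g(t)^{-1}$, whose determinant is time-independent by similarity, compute $M$ as in (\ref{q7}) by finite differences, and check that ${\rm tr}\, B = {\rm tr}\,M(\lambda)-{\rm tr}\,M(1)$ vanishes, cf. (\ref{q8}).

```python
import numpy as np

rng = np.random.default_rng(3)
N, lam, dt, t0 = 4, 0.41, 1e-6, 0.2

E0 = 0.3 * rng.standard_normal((N, N))
X = rng.standard_normal((N, N))   # generator of the toy "time" flow

def Lmat(lmbd, t):
    # det Lmat = det(1 + lmbd*E0) is time-independent by similarity
    g = np.eye(N) + t * X
    return np.eye(N) + lmbd * g @ E0 @ np.linalg.inv(g)

def M(lmbd, t):
    # M = L^{-1} dL/dt, cf. (q7); derivative by central differences
    Ldot = (Lmat(lmbd, t + dt) - Lmat(lmbd, t - dt)) / (2 * dt)
    return np.linalg.solve(Lmat(lmbd, t), Ldot)

trB = np.trace(M(lam, t0) - M(1.0, t0))  # tr B, expected ~ 0
```

In this toy both traces vanish separately (the determinant is conserved for every $\lambda$), which is the simplest realization of (\ref{q8}).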
Alternatively, introduce
\begin{equation}\label{q7}
M_k(z,\lambda)=\mathcal{L}^{-1}(z,\lambda)
\Big(\frac{d}{dt_k}\mathcal{L}(z,\lambda)\Big)\,.
\end{equation}
Then it follows from the conservation of $\det L(z,\lambda)$ that
\begin{equation}\label{q8}
{\rm tr} M_k(z,\lambda)={\rm tr} M_k(z,1)\,.
\end{equation}
Equations (\ref{q2}) are, so far, identities. To make them equivalent to the equations of motion, one needs to replace the time derivative
in the r.h.s. by its value following from the Hamiltonian equations of motion generated by the $k$-th Hamiltonian
\begin{equation}\label{q9}
{\mathcal H}_k(p,q)=\mathop{\hbox{Res}}\limits_{z=0}\mathop{\hbox{Res}}\limits_{\lambda=0}
\Big(\frac{1}{z}\frac{1}{\lambda^{k+1}}\,\det L(z,\lambda)\Big)\,.
\end{equation}
Equations of motion are the standard Hamiltonian equations due to (\ref{e690}):
%
\begin{equation}\label{q10}
\frac{d q_i}{dt_k}=\{{\mathcal H}_k,q_i\}=\frac{\partial {\mathcal H}_k}{\partial p_i}\,,\quad
\frac{d p_i}{dt_k}=\{{\mathcal H}_k,p_i\}=-\frac{\partial {\mathcal H}_k}{\partial q_i}\,.
\end{equation}
Finally, by redefining (\ref{q7}) as
\begin{equation}\label{q11}
M_k(z,\lambda)=\mathcal{L}^{-1}(z,\lambda)\{\mathcal{L}(z,\lambda),{\mathcal H}_k\}
\end{equation}
we rewrite (\ref{q4}) and (\ref{q5}) as follows:
\begin{equation}\label{q12}
M_k(z)= M_k(z,\lambda)
\end{equation}
and
\begin{equation}\label{q13}
B_k(z,\lambda)=M_k(z,\lambda)-M_k(z,1)\,.
\end{equation}
The expression $\{\mathcal{L}(z,\lambda),{\mathcal H}_k\}$
in (\ref{q11}) as well as the r.h.s. of equations of motion
can be evaluated explicitly using the classical analogue of (\ref{e2}).
With the definitions (\ref{q12})-(\ref{q13}) the Manakov equations follow from the equations of motion.
Notice also that the Manakov equation (\ref{q2}) is easily rewritten as:
\begin{equation}\label{q121}
\frac{d}{dt_k}L(z,\lambda)=[L(z,\lambda),M_k(z,1)]+L(z,\lambda)B_k(z,\lambda)\,,
\end{equation}
or (which, in fact, follows directly from the definition of $L(z,\lambda)$)
\begin{equation}\label{q122}
\frac{d}{dt_k}L(z,\lambda)=L(z,\lambda)M_k(z,\lambda)-M_k(z,1)L(z,\lambda)\,.
\end{equation}
\section{Factorization of the \texorpdfstring{$L$-matrices}{L-matrices}}\label{s6}
\setcounter{equation}{0}
In this section we show how our result naturally embeds the Dell model into the standard factorized Lax matrix approach, which we describe in the next subsection.
\subsection{Classification of the factorized \texorpdfstring{$L$-matrices}{L-matrices}}
In \cite{MM} integrable many-body systems of the Calogero-Ruijsenaars family
were naturally classified by the types
of dependence on the coordinates and/or momenta. Each type admits
three possibilities: it can be either rational, trigonometric or elliptic.
For example, the choice (rational coordinates, trigonometric momenta) corresponds
to the rational Ruijsenaars-Schneider model, while the choice
(rational momenta, trigonometric coordinates) yields the trigonometric Calogero-Sutherland system.
In the coordinate part this classification
follows from solutions of the underlying functional equations \cite{Calogero,RS} -- the Fay identity (\ref{e878}) and its degenerations.
By interchanging the types of coordinate and momentum dependence (as in the example pair above) one gets a pair of systems related by the
Ruijsenaars duality transformation \cite{Ruijs_d}.
When both types coincide, the corresponding model is self-dual. These are the rational Calogero-Moser system,
the trigonometric Ruijsenaars-Schneider model, and finally the double elliptic model, whose existence
was predicted by these arguments.
Here we supplement the above classification with the precise substitutions corresponding
to the factorized Lax (or Manakov) $L$-matrices.
%
As was discussed in \cite{ZV} the factorized Lax matrices (with and without spectral parameter) for the systems
of Calogero-Ruijsenaars type can be specified
by a choice of two ingredients: the function $f$, and the intertwining matrix $\Xi(z)$:
\begin{equation}\label{r1}
\displaystyle{
L^{\rm CR}(z)=G^{-1}(z)\, f(-{\rm ad}_{N\eta\partial_z})\, G(z)\,,\qquad {\rm ad}_{\partial_z}*=[\partial_z,\,*\,]\,,
}
\end{equation}
%
where the matrix $G(z)$ is defined in terms of $\Xi(z)$:
%
\begin{equation}\label{r2}
\displaystyle{
G(z)=G(z,\tau)=\Xi(z)D^{-1}e^{ \frac{z}{N c\eta}P }\,,\qquad P={\rm diag}(p_1,...,p_N)
}
\end{equation}
%
with some diagonal matrix $D$ (see (\ref{q54}) below) and $c$ -- the
light speed parameter\footnote{The light speed provides the non-relativistic limit $c\rightarrow \infty$ together with the substitution $\eta=\nu/c$,
where $\nu$ is the non-relativistic coupling constant.} in the Ruijsenaars-Schneider model.
%
The function $f(w)$ is either:
%
\begin{equation}\label{r3}
\displaystyle{
\hbox{1) linear:}\ f(w)=w;
}
\end{equation}
%
\begin{equation}\label{r4}
\displaystyle{
\hbox{2) exponential:}\ f(w)=e^w\,.
}
\end{equation}
%
Substituted into (\ref{r1}), the first choice of the function $f$
provides the Lax matrices of the Calogero-Moser-Sutherland systems \cite{Calogero,Krich}. The second choice of $f$ gives rise to
the Lax matrices of the Ruijsenaars-Schneider models \cite{RS}.
%
%
The choices of $\Xi(z)$ in (\ref{r1}) are given by (\ref{e86}) in the elliptic case, by (\ref{e423}) in the trigonometric case, and by (\ref{e433}) in the rational case. In the trigonometric and rational cases
one can also use the Vandermonde matrices (\ref{e335}) and (\ref{e331}) as $\Xi_{ij}(z)=(z-q_j)^{i-1}$ and
$\Xi_{ij}(z)=(e^z\,x_j)^{N-i}$ respectively. The spectral parameter then cancels out, and we get
the Lax pairs of the Calogero-Ruijsenaars models without spectral parameter. See the review \cite{ZV} for details.
Based on (\ref{e1}) and
the Manakov $L$-matrix structure (\ref{e69})-(\ref{e6930})
we arrive at the elliptic version of the function $f$:
%
\begin{equation}\label{r5}
\hbox{3)}
\begin{array}{c}
\hbox{ratio of theta-functions:}
\end{array}
\displaystyle{
f_\lambda(w)=\frac{\theta_\omega(\lambda e^w)}{\theta_\omega(e^w)}\,.
}
\end{equation}
The latter result is explained in detail in the next subsection.
A more universal classification picture arises if we slightly change the definition of $f_\lambda(w)$,
as in the transition from $\theta_p(e^w)$ to $\vartheta(w)$ (\ref{e83}), together with an additional normalization factor
$\vartheta'(0)/\vartheta(\log(\lambda))$. Then the function $f_\lambda(w)$ turns into the Kronecker elliptic function \cite{Weil}
depending on the modulus $\ti\tau$ (defined through $\omega=e^{2\pi\imath\ti\tau}$):
%
\begin{equation}\label{r6}
\displaystyle{
f_u(w)\rightarrow \lambda^{1/2}f_u(w)\frac{\vartheta'(0|\ti\tau)}{\vartheta(u|\ti\tau)}=
\Phi(u,w|\ti\tau)=\frac{\vartheta'(0|\ti\tau)\vartheta(w+u|\ti\tau)}{\vartheta(u|\ti\tau)\vartheta(w|\ti\tau)}\,,
}
\end{equation}
where $u=\log(\lambda)$. In the trigonometric and rational limits (when ${\rm Im}(\ti\tau)\rightarrow 0$) it degenerates as follows:
%
\begin{equation}\label{r7}
\begin{array}{l}
\displaystyle{
\Phi^{\rm trig}(u,w)=\frac{\sinh(w+u)}{\sinh(w)\sinh(u)}=\coth(w)+\coth(u)\,,
}
\\ \ \\
\displaystyle{
\Phi^{\rm rat}(u,w)=\frac{w+u}{w\,u}=\frac{1}{u}+\frac{1}{w}\,.
}
\end{array}
\end{equation}
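The degenerations (\ref{r7}) are simple addition formulas and can be checked numerically (the test points below are arbitrary):

```python
import math

def Phi_trig(u, w):
    # trigonometric Kronecker function, sinh(w+u)/(sinh(w) sinh(u))
    return math.sinh(w + u) / (math.sinh(w) * math.sinh(u))

def Phi_rat(u, w):
    # rational degeneration, (w+u)/(w u)
    return (w + u) / (w * u)

errs = []
for u, w in [(0.3, 1.1), (-0.7, 0.45), (2.0, -0.9)]:
    errs.append(abs(Phi_trig(u, w) - (1 / math.tanh(w) + 1 / math.tanh(u))))
    errs.append(abs(Phi_rat(u, w) - (1 / u + 1 / w)))
max_err = max(errs)  # both identities hold up to rounding
```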
This function was used by I. Krichever \cite{Krich} to construct the Lax representation with
spectral parameter for the elliptic Calogero-Moser system. It is widely used in elliptic integrable systems
due to an addition theorem, also known as the genus one Fay identity (\ref{e878}). Considered as a functional equation,
its solutions (including degenerate versions) were
extensively studied in \cite{Calogero}.
In this way we arrive at a classification for the function $f_u(w)$ (responsible for the type of momentum dependence), which is parallel to
the well-known classification of the coordinate dependence without spectral parameter \cite{Calogero} and with
spectral parameter \cite{Krich}.
\subsection{Factorized structure of the Dell \texorpdfstring{$L$-matrix}{L-matrix}}
Recall that the Lax matrix of the Ruijsenaars-Schneider model (\ref{e52}) is factorized as
follows (\ref{e211}):
%
\begin{equation}\label{q511}
\begin{array}{l}
\displaystyle{
L^{\rm RS}(z)=
g^{-1}(z)g(z-N\eta)\,e^{P/c}\,,
}
\end{array}
\end{equation}
where
\begin{equation}\label{q52}
\begin{array}{l}
\displaystyle{
g(z)=g(z,\tau)=\Xi(z)D^{-1}
}
\end{array}
\end{equation}
%
with the intertwining matrix $\Xi_{ij}(z)$ (\ref{e86})
and the diagonal matrix
%
\begin{equation}\label{q54}
\begin{array}{c}
\displaystyle{
D_{ij}=\delta_{ij}D_{j}=\delta_{ij}
{\prod\limits_{k\neq j}\vartheta(q_j-q_k|\tau)}\,.
}
\end{array}
\end{equation}
A conjugation with the latter diagonal matrix $D$ is performed in order to have a convenient form for $L^{\rm RS}(z)$.
Consider the matrix $G(z)$ (\ref{r2}).
The Ruijsenaars-Schneider Lax matrix (\ref{q511}) takes the form
%
\begin{equation}\label{q21}
\displaystyle{
{\tilde L}^{\rm RS}(z)=G^{-1}(z)\,{\rm Ad}_{e^{-N\eta\partial_z}}\,G(z)
}
\end{equation}
up to a gauge transformation with the diagonal matrix $\exp{ (\frac{z}{ N c \eta}P) }$:
\begin{equation}\label{q211}
\displaystyle{
{\tilde L}^{\rm RS}(z)
=G^{-1}(z)G(z-N\eta)=e^{ -\frac{z}{N c\eta}P } L^{\rm RS}(z)\, e^{ \frac{z}{N c\eta}P }\,.
}
\end{equation}
Let us proceed to the double elliptic case. Plugging (\ref{q511}) into the matrix ${\mathcal L}(z,\lambda)$ (\ref{e6930})
we get
%
\begin{equation}\label{r30}
{\mathcal L}(z,\lambda)=g^{-1}(z)\sum_{k \in \mathbb{Z} } (-\lambda)^k\omega^{\frac{k^2-k}{2}}\,
g(z-kN\eta)\,e^{kP/c}\,,
\end{equation}
%
or
\begin{equation}\label{q22}
\displaystyle{
{{\mathcal L}}'(z,\lambda)= e^{ -\frac{z}{N c\eta}P }{\mathcal L}(z,\lambda)e^{ \frac{z}{N c\eta}P }=
G^{-1}(z)\theta_\omega\Big(\lambda {\rm Ad}_{e^{-N\eta\partial_z}}\Big)G(z)\,.
}
\end{equation}
Introducing also
%
\begin{equation}\label{q26}
\displaystyle{
\Theta(z,\lambda)=\theta_\omega\Big(\lambda {\rm Ad}_{e^{-N\eta\partial_z}}\Big)G(z)
=\sum_{k \in \mathbb{Z} } (-\lambda)^k\omega^{\frac{k^2-k}{2}}\,
g(z-kN\eta)\,e^{kP/c}e^{ \frac{z}{N c\eta}P }\in {\rm Mat}(N,\mathbb C) \,,
}
\end{equation}
%
%
we come to the following expression for the Manakov $L$-matrix (\ref{e69}):
%
\begin{equation}\label{q27}
\displaystyle{
L'(z,\lambda)=\Theta^{-1}(z,1)\Theta(z,\lambda)\,.
}
\end{equation}
%
It is also gauge equivalent to both ${\ti L}(z,\lambda)$ and ${ L}(z,\lambda)$. In terms of the Kronecker function (\ref{r6}) we may
write the Manakov $L$-matrix as
%
\begin{equation}\label{q28}
\displaystyle{
\check{L}(z,\lambda)=\Phi[G(z,\tau),u|\ti\tau]:
=\frac{\vartheta'(0|\ti\tau)}{\vartheta(u|\ti\tau)} \Big[\vartheta(-\,{\rm ad}_{N\eta\partial_z}|\ti\tau)\, G(z)\Big]^{-1}
\vartheta(u-\,{\rm ad}_{N\eta\partial_z}|\ti\tau)\, G(z)\,,
}
\end{equation}
%
where $u=\log(\lambda)$. From the point of view
of the classification of the factorized L-matrices discussed above, the expressions (\ref{q26}) and (\ref{q28}) can be considered
as
matrix analogues for theta-function and the Kronecker elliptic function respectively.
The Kronecker function in (\ref{q28}) is constructed
by means of the theta-operator $\Theta(z,\lambda)$ understood in a ``plethystic sense'' (\ref{q26}).
\subsection{Relation to Sklyanin Lax operators}
Due to the IRF-Vertex relation
one can equivalently use the Sklyanin type Lax operators
\cite{Skl} instead of the Ruijsenaars-Schneider one in (\ref{e5}).
Consider the gauge transformed Ruijsenaars Lax matrix (\ref{q511}):
%
\begin{equation}\label{q29}
\begin{array}{c}
\displaystyle{
L^{\rm Skl}(z)=g(z) L^{\rm RS}(z)g^{-1}(z)=
}
\\ \ \\
\displaystyle{
=g(z-N\eta)\,e^{P/c} g^{-1}(z)=\Xi(z-N\eta)\,e^{P/c}\,\Xi^{-1}(z)\,.
}
\end{array}
\end{equation}
%
It is the classical analogue for the representation of the quantum Sklyanin Lax operator \cite{Hasegawa2}:
%
\begin{equation}\label{q30}
\displaystyle{
{\hat L} ^{\rm Skl}(z)=\,:\Xi(z-N\eta)\,q^{{\rm diag}(\partial_{q_1},...,\partial_{q_N})/c}\,\Xi^{-1}(z):
\,=\sum\limits_{k=1}^N \Xi_{ik}(z-N\eta)\, \Xi_{kj}^{-1}(z)\,e^{(\hbar/c)\partial_{q_k}}\,.
}
\end{equation}
%
Consider first the classical case (\ref{q29}). Its generalization to the double elliptic model is performed
similarly to the previous paragraph. Define
%
%
\begin{equation}\label{q32}
\begin{array}{c}
\displaystyle{
{\mathcal L}^{\rm Dell}(z,\lambda)=g(z) {\mathcal L}(z,\lambda) g^{-1}(z)=
}
\\ \ \\
\displaystyle{
=\sum_{m \in\, \mathbb{Z} } (-\lambda)^m\omega^{\frac{m^2-m}{2}}\,
\Xi(z-mN\eta)\,e^{ mP/c}\,\Xi^{-1}(z)\,.
}
\end{array}
\end{equation}
%
Hence,
\begin{equation}
{\mathcal L}^{\rm Dell}(z,\lambda)
=\sum_{m \in\, \mathbb{Z} } (-\lambda)^m\omega^{\frac{m^2-m}{2}}\,
L^{\rm Skl}(z,\{p_i\},\{q_i\},m \eta,m c^{-1})\,.
\end{equation}
Thus, it can be expressed as a sum of Sklyanin type Lax matrices with different coupling constants.
For each of them, we know from \cite{LOZ2} that it can be alternatively represented in terms of the underlying elliptic $R$-matrix:
%
\begin{equation}\label{q319}
\displaystyle{
L^{\rm Skl}(z,\eta,S(\{p_i\},\{q_i\},\eta,c^{-1}))=
\sum\limits_{a,b,c,d=1}^N E_{ab}\,S_{dc}\,(\{p_i\},\{q_i\},\eta,c^{-1})\, R_{ab,cd}(\eta,z)\,,
}
\end{equation}
%
where $\{E_{ab};a,b=1,...,N\}$ is the standard basis in $ {\rm Mat}(N,\mathbb C) $, the variables $S_{ab}(\{p_i\},\{q_i\},\eta,c^{-1})$ are generators of the classical Sklyanin algebra\footnote{The classical Lax matrix (\ref{q319}) describes an integrable relativistic top. The explicit change of variables $S_{ab}=S_{ab}(\{p_i\},\{q_i\},\eta,c^{-1})$ relating it to the Ruijsenaars model appears in the case
of a rank one matrix $S$ (corresponding to special values of the Casimir functions). See details in \cite{LOZ2}.},
and $R_{ab,cd}(\eta,z)$ are the (properly normalized) weights of the quantum Baxter-Belavin $R$-matrix
(in the fundamental representation of ${\rm GL}_N$):
%
\begin{equation}\label{q51}
\begin{array}{c}
\displaystyle{
{R}^\eta_{12}(z)=\sum\limits_{a,b,c,d=1}^N E_{ab}\otimes E_{cd}\,
R^{\rm B}_{ab,cd}(\eta,z)\,.
}
\end{array}
\end{equation}
%
With (\ref{q319}) we get the Manakov $L$-matrix
%
\begin{equation}\label{q33}
\begin{array}{c}
\displaystyle{
L^{\rm Dell}(z,\lambda)={\mathcal L}^{\rm Skl}(z,1)^{-1}{\mathcal L}^{\rm Skl}(z,\lambda)
}
\end{array}
\end{equation}
%
with ${\mathcal L}^{\rm Skl}(z,\lambda)$ defined as
%
\begin{equation}\label{q340}
\displaystyle{
{\mathcal L}^{\rm Skl}(z,\lambda)=\sum_{k \in\, \mathbb{Z} } (-\lambda)^k\omega^{\frac{k^2-k}{2}}\,
\sum\limits_{a,b,c,d=1}^N E_{ab}\,S_{dc}(\{p_i\},\{q_i\},k\eta,kc^{-1}) R^{\rm B}_{ab,cd}(k\eta,z)\,.
}
\end{equation}
%
Expression (\ref{q33}) is gauge equivalent to the expression (\ref{r30}) defined through the Ruijsenaars-Schneider Lax matrix.
\paragraph{Quantization.} A natural quantization of (\ref{q33}), which gives the generating function (\ref{e64}),
is as follows:
%
\begin{equation}\label{q44}
\begin{array}{c}
\displaystyle{
{\hat L} ^{\rm Dell}(z,\lambda)=\Big(:\hat{\mathcal L}^{\rm Skl}(z,1):\Big)^{-1}:\hat{\mathcal L}^{\rm Skl}(z,\lambda):
}
\end{array}
\end{equation}
%
Consider the matrix operator $:\hat{\mathcal L}^{\rm Skl}(z,\lambda):$ (\ref{q30}).
In the special case $\eta=-\hbar/c$ the Sklyanin Lax operator is represented as
the quantum Baxter-Belavin $R$-matrix in the fundamental representation \cite{Baxter,Hasegawa2}.
The $R$-matrix coefficients are of the form:
%
\begin{equation}\label{q45}
\begin{array}{c}
\displaystyle{
R^{\rm B}_{ab,cd}(\eta,z)=\delta_{a+c,b+d\,{\rm mod}N}\,\frac{\theta^{(a-c)}(z+\eta)}{\theta^{(a-b)}(\eta)\theta^{(b-c)}(z)}
\prod\limits_{k=0}^N\frac{\theta^{(k)}(\eta)}{\theta^{(k)}(0)}\,,
}
\end{array}
\end{equation}
%
where using the definition (\ref{e87}) we introduced
%
\begin{equation}\label{q46}
\begin{array}{c}
\displaystyle{
\theta^{(j)}(u)=\vartheta{\left[ \begin{array}{c}
\frac{1}{2}-\frac{j}{N}\\
\frac{1}{2}
\end{array}
\right]}(u\,| N\tau )\,.
}
\end{array}
\end{equation}
%
In this way we come to the matrix representation for $ {\hat L} ^{\rm Dell}(z,\lambda)$ (\ref{q44}):
%
\begin{equation}\label{q36}
\begin{array}{c}
\displaystyle{
{\hat L} ^{\rm Dell}(z,\lambda) = {\bf R}_{12}(z,\lambda)={\mathcal R}_{12}(z,1)^{-1} \,{\mathcal R}_{12}(z,\lambda)\in {\rm Mat}(N,\mathbb C)^{\otimes 2} \,,
}
\end{array}
\end{equation}
%
with
%
\begin{equation}\label{q35}
\begin{array}{c}
\displaystyle{
{\mathcal R}_{12}(z,\lambda)=\sum\limits_{a,b,c,d=1}^N E_{ab}\otimes E_{cd}\,
{\mathcal R}^\eta_{ab,cd}(z,\lambda)
}
\end{array}
\end{equation}
%
and
%
\begin{equation}\label{q55}
\begin{array}{c}
\displaystyle{
{\mathcal R}^\eta_{ab,cd}(z,\lambda)=
\sum_{m \in\, \mathbb{Z} } (-\lambda)^m\omega^{\frac{m^2-m}{2}}R^{\rm B}_{ab,cd}(m\eta,z)\,.
}
\end{array}
\end{equation}
%
\section{Discussion}
\setcounter{equation}{0}
\begin{itemize}
\item The Manakov $L$-matrix (\ref{q33}) can be used as a building block for the (partial) monodromy matrix, as happens in integrable chains \cite{FT}.
Namely, construct the monodromy matrix for a chain of length $L$:
%
\begin{equation}\label{q34}
\begin{array}{c}
\displaystyle{
T(z,\lambda)=L^{\rm Skl}(z,\lambda,\{p_i^{(1)}\},\{q_i^{(1)}\})\,
L^{\rm Skl}(z,\lambda,\{p_i^{(2)}\},\{q_i^{(2)}\})\,...\,L^{\rm Skl}(z,\lambda,\{p_i^{(L)}\},\{q_i^{(L)}\})
}
\end{array}
\end{equation}
%
where $L^{\rm Skl}$ is taken from (\ref{q33}), and the $L$-matrix at each site depends on its own set of canonical variables.
Then $\det T(z,\lambda)$ provides a product of generating functions, one for each site. In \cite{Sh} this construction was called
the spin generalization of the double-elliptic model. Besides this construction, one can also study the averaging
of the spin Ruijsenaars-Schneider Lax operators.
\item The quantization of (\ref{q34}) can be performed in the usual way
%
\begin{equation}\label{q3477}
\begin{array}{c}
\displaystyle{
{\bf T}(z,\lambda)={\bf R}_{01}(z,\lambda){\bf R}_{02}(z,\lambda)...{\bf R}_{0L}(z,\lambda)
}
\end{array}
\end{equation}
%
with ${\bf R}(z,\lambda)$ defined in (\ref{q36})-(\ref{q55}). Presumably, a properly defined quantum determinant of ${\bf T}(z,\lambda)$ could be a generating function of commuting operators. Notice that we did not prove the Yang-Baxter equation for ${\bf R}(z,\lambda)$
(most likely it is not fulfilled, since the traces of $T(z,\lambda)$ do not commute), so that ${\bf R}(z,\lambda)$ is not an $R$-matrix. Finding an equation for ${\bf R}(z,\lambda)$ is another interesting problem.
\item Orthogonality of the eigenvectors.
To proceed further with our construction, in analogy with the treatment of the usual Macdonald symmetric functions, we need to know the analogue of the Macdonald measure, with respect to which the operators $\hat{H}_n$ are self-conjugate.
Possibly, expressing the eigenvectors of $\hat{H}_n$ as vector-valued characters of some elliptic algebras might work, in analogy with the paper \cite{EK}.
\item
The integrable many-body systems can also be described via commuting differential or difference KZ-type connections or Dunkl operators, using the Matsuo-Cherednik (or Heckman) projections respectively.
The Ruijsenaars duality in many-body problems then turns (or rather embeds) into the spectral duality,
which interchanges canonical coordinates and momenta written in the separated variables of the corresponding Gaudin models and/or spin chains. Much progress has been achieved in studies of these relations, including their elliptic version \cite{MMZZ}.
An interesting problem is to find the Dunkl-Cherednik like description for the double-elliptic models (and
define a double-elliptic version of the qKZ equations). We discuss these topics in our forthcoming paper \cite{GrZ2}.
\item Let us mention a recent paper \cite{Zabr}, where an elliptic integrable many-body system of Calogero type was obtained with a Manakov representation instead of a Lax representation. It was derived through a reduction from an integrable hierarchy. Presumably, it is a special limiting case, which can be deduced from the double-elliptic Manakov $L$-matrix.
\item Further study of algebraic structures underlying the double-elliptic model is another set of important problems. This includes $r$-matrix structures at classical and quantum levels (RTT relations) and extensions of the Sklyanin quadratic algebras by means of $R$-matrix type operators (\ref{q36})-(\ref{q35}).
\item The determinant formula (\ref{e4})-(\ref{e5}) can be naturally extended to the cases of many-body systems
associated to the root systems of ${\rm BC}_N$ type by replacing the Lax operators
with those of the Ruijsenaars–Schneider–van Diejen type. This can be performed using results of \cite{Feher}. We will discuss it in our future publication.
\item Another open problem is to describe the classical $L$-matrix (\ref{e69})-(\ref{e6930}) in the group-theoretical (or Krichever-Hitchin) approach.
As we mentioned in (\ref{e079}), one way is to consider the matrix-valued function with higher order poles at a pair of marked points. Another possibility is to consider the (block-matrix) direct sum
$\bigoplus\limits_{k\in\mathbb Z} L^{\rm RS}(z,q^k,t^k)$ embedded into ${\rm GL}(\infty)$. Each block is well defined, and the
weighted sum of all blocks can be viewed as a matrix-valued character.
We will discuss these questions in future publications.
\end{itemize}
\section{Appendix A: Elliptic functions} \label{AppA}
\def\theequation{A.\arabic{equation}}
\setcounter{equation}{0}
We use several different theta-functions. The first is the one used in \cite{Sh}:
\begin{equation}\label{e81}
\theta_p(x) = \sum_{n \in \mathbb{Z}} p^{\frac{n^2-n}{2}} (-x)^n\,,
\end{equation}
%
where the modulus of the elliptic curve $\tau \in \mathbb C$, ${\rm Im}\, \tau >0$, enters through
\begin{equation}\label{e80}
\begin{array}{c}
p = e^{2 \pi i \tau}\,.
\end{array}
\end{equation}
%
Another theta-function is the standard odd Jacobi one:
\begin{equation}\label{e811}
\begin{array}{c}
\displaystyle{
\vartheta(z)=\vartheta(z|\tau )=-i\sum_{k\in\,\mathbb Z}
(-1)^k e^{\pi i (k+\frac{1}{2})^2\tau}e^{\pi i (2k+1)z}}\,.
\end{array}
\end{equation}
They are easily related:
\begin{equation}\label{e83}
\theta_p(x) = i p^{-\frac{1}{8}} x^{\frac{1}{2}} \vartheta(w|\, \tau )\,,\quad x=e^{2 \pi i w}\,.
\end{equation}
In the trigonometric limit $p\rightarrow 0$
\begin{equation}\label{e831}
\theta_p(x) \rightarrow (1-x)\,,\qquad \vartheta(w) \rightarrow -i p^{\frac{1}{8}}\,(\sqrt{x}-1/\sqrt{x})\,.
\end{equation}
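As a quick numerical sanity check (ours, not part of the original text), the series (\ref{e81}) can be truncated and tested against three properties that follow directly from shifting the summation index: $\theta_p(1)=0$, the quasi-periodicity $\theta_p(px)=-x^{-1}\theta_p(x)$, and the trigonometric limit (\ref{e831}). The function name and the cutoff $K$ are our choices.

```python
# Truncated version of the series theta_p(x) = sum_n p^{(n^2-n)/2} (-x)^n.
# Double precision is reached quickly for |p| < 1, since the exponent
# (n^2-n)/2 >= 0 grows quadratically in |n|.
def theta_p(x, p, K=25):
    return sum(p ** ((n * n - n) // 2) * (-x) ** n for n in range(-K, K + 1))
```

The zero at $x=1$ comes from exact cancellation of the terms $n$ and $1-n$, and the quasi-periodicity from the shift $n \to n-1$.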
The Riemann theta-functions with characteristics
are defined as follows:
\begin{equation}\label{e87}
\vartheta{\left[ \begin{array}{c}
a\\
b
\end{array}
\right]}(w\,| \tau ) =\sum_{j\in\,\mathbb Z}
\exp\left(2\pi\imath(j+a)^2\frac\tau2+2\pi\imath(j+a)(w+b)\right)\,,\quad
a\,,b\in\frac{1}{N}\,\mathbb Z\,.
\end{equation}
%
In particular,
%
\begin{equation}\label{e871}
\vartheta{\left[ \begin{array}{c}
1/2\\
1/2
\end{array}
\right]}(w\,| \tau ) =-\vartheta(w|\, \tau )\,.
\end{equation}
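The identity (\ref{e871}) is easy to confirm numerically. The sketch below (function names, sample values and the truncation cutoff are ours) implements both series (\ref{e811}) and (\ref{e87}) and compares them, together with the oddness of $\vartheta$.

```python
import cmath, math

# Odd Jacobi theta, truncated at |k| <= K (the series above).
def jacobi_theta(w, tau, K=30):
    s = sum((-1) ** k * cmath.exp(1j * math.pi * tau * (k + 0.5) ** 2
                                  + 1j * math.pi * (2 * k + 1) * w)
            for k in range(-K, K + 1))
    return -1j * s

# Riemann theta with characteristics a, b, truncated at |j| <= K.
def theta_char(a, b, w, tau, K=30):
    return sum(cmath.exp(2j * math.pi * ((j + a) ** 2 * tau / 2
                                         + (j + a) * (w + b)))
               for j in range(-K, K + 1))
```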
%
For
%
\begin{equation}\label{e872}
\begin{array}{c}
\displaystyle{
V_{ij}(x_j)=
\vartheta\left[ \begin{array}{c}
\frac12-\frac{i}{N} \\ \frac N2
\end{array} \right] \left(Nx_j\left.\right|N\tau\right)
}
\end{array}
\end{equation}
we have
%
\begin{equation}\label{e873}
\begin{array}{c}
\displaystyle{
\det V=c_N(\tau)\,\vartheta(\sum\limits_{k=1}^N
x_k)\prod\limits_{i<j}\vartheta(x_j-x_i)\,,\qquad
c_N(\tau)=\frac{(-1)^{N-1}}{(\imath\,\eta_D(\tau))^{\frac{(N-1)(N-2)}{2}}}\,,
}
\end{array}
\end{equation}
where $\eta_D(\tau)$ is the Dedekind eta-function:
%
\begin{equation}\label{e874}
\begin{array}{c}
\displaystyle{
\eta_D(\tau)=e^{\frac{\pi\imath\tau}{12}}\prod\limits_{k=1}^\infty (1-e^{2\pi\imath\tau
k})\,.
}
\end{array}
\end{equation}
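As a hedged numerical check (ours), the truncated product below reproduces the classical special value $\eta_D(i)=\Gamma(1/4)/(2\pi^{3/4})$; the function name and cutoff are our choices.

```python
import cmath, math

# Dedekind eta as the truncated product above; K is our cutoff.
def eta_dedekind(tau, K=60):
    q = cmath.exp(2j * math.pi * tau)
    out = cmath.exp(1j * math.pi * tau / 12)
    for k in range(1, K + 1):
        out *= 1 - q ** k
    return out
```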
%
The determinant of the elliptic Cauchy matrix is given by
\begin{equation}\label{e875}
\displaystyle{
\det\limits_{1\leq i,j\leq N}\frac{\vartheta(z+u_i-w_j)}{\vartheta(z)\vartheta(u_i-w_j)}
=\frac{
{\vartheta} \Bigl (z +\sum\limits_{i=1}^{N} (u_i-w_i)\Bigr )}{{\vartheta} (z )}\,
\frac{\prod\limits_{p<q}^N{\vartheta} (u_p-u_q){\vartheta}
(w_q-w_p)}{\prod\limits_{r,s=1}^N{\vartheta} (u_r-w_s)}\,.
}
\end{equation}
Define the elliptic Kronecker function
\begin{equation}\label{e876}
\begin{array}{l}
\displaystyle{
\Phi(z,u)=\frac{\vartheta'(0)\vartheta(z+u)}{\vartheta(z)\vartheta(u)}
}
\end{array}
\end{equation}
and the first Eisenstein function \cite{Weil}
\begin{equation}\label{e877}
\begin{array}{c}
\displaystyle{
E_1(z)=\frac{\vartheta'(z)}{\vartheta(z)}\,.
}
\end{array}
\end{equation}
They satisfy the (genus one) Fay trisecant identity
\begin{equation}\label{e878}
\begin{array}{c}
\displaystyle{
\Phi(z,u_1)\Phi(w,u_2)=\Phi(z,u_1-u_2)\Phi(z+w,u_2)+\Phi(w,u_2-u_1)\Phi(z+w,u_1)
}
\end{array}
\end{equation}
and its degeneration
\begin{equation}\label{e879}
\begin{array}{c}
\displaystyle{
\Phi(z,u)\Phi(w,u)=\Phi(z+w,u)(E_1(z)+E_1(w)+E_1(u)-E_1(z+w+u))\,.
}
\end{array}
\end{equation}
%
Also,
%
\begin{equation}\label{e880}
\begin{array}{c}
\displaystyle{
E_1(z)+E_1(w)+E_1(u)-E_1(z+w+u)=\frac{\vartheta'(0)\vartheta(z+w)\vartheta(z+u)\vartheta(w+u)}{\vartheta(z)\vartheta(w)\vartheta(u)\vartheta(z+w+u)}\,.
}
\end{array}
\end{equation}
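The identities (\ref{e878})-(\ref{e880}) can be verified numerically from the series (\ref{e811}). The sketch below (our function names, a sample modulus $\tau=0.8i$ and truncation $K=30$) checks the Fay identity and (\ref{e880}); (\ref{e879}) then follows from the two.

```python
import cmath, math

TAU = 0.8j  # a sample modulus; any Im(tau) > 0 works

def th(z, K=30):  # odd Jacobi theta series
    return -1j * sum((-1) ** k * cmath.exp(1j * math.pi * TAU * (k + 0.5) ** 2
                                           + 1j * math.pi * (2 * k + 1) * z)
                     for k in range(-K, K + 1))

def dth(z, K=30):  # its derivative, term by term
    return -1j * sum((-1) ** k * 1j * math.pi * (2 * k + 1)
                     * cmath.exp(1j * math.pi * TAU * (k + 0.5) ** 2
                                 + 1j * math.pi * (2 * k + 1) * z)
                     for k in range(-K, K + 1))

def Phi(z, u):  # elliptic Kronecker function
    return dth(0) * th(z + u) / (th(z) * th(u))

def E1(z):  # first Eisenstein function
    return dth(z) / th(z)
```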
%
\section{Appendix B: Intertwining matrices} \label{AppB}
\def\theequation{B.\arabic{equation}}
\setcounter{equation}{0}
Following \cite{ZV} we consider the intertwining matrices
in two cases: with a spectral parameter and without a spectral parameter.
These matrices lead to the Ruijsenaars-Schneider Lax matrices via (\ref{e211})
with and without a spectral parameter, respectively.
\paragraph{The cases without spectral parameter.} Here we deal with the Vandermonde matrices
in the rational and trigonometric coordinates.
1) in the rational case:
\begin{equation}\label{e331}
\displaystyle{
\Xi_{ij}=(-q_j)^{i-1}\,,
}
\end{equation}
\begin{equation}\label{e3315}
\displaystyle{
\det\Xi=\prod\limits_{i<j}(q_i-q_j)\,.
}
\end{equation}
2) in the trigonometric case:
\begin{equation}\label{e335}
\displaystyle{
\Xi_{ij}=x_j^{N-i}\,,\quad x_j=e^{q_j}\,,
}
\end{equation}
\begin{equation}\label{e3355}
\displaystyle{
\det\Xi=\prod\limits_{i<j}(x_i-x_j)\,.
}
\end{equation}
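Both determinant formulas (\ref{e3315}) and (\ref{e3355}) are standard Vandermonde evaluations and can be spot-checked numerically; the Leibniz-formula determinant and the sample values below are our illustrative choices.

```python
from itertools import permutations
from math import exp, prod

# Leibniz-formula determinant; fine for the small sizes used here.
def det(M):
    n = len(M)
    total = 0.0
    for p in permutations(range(n)):
        sgn = (-1) ** sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
        total += sgn * prod(M[i][p[i]] for i in range(n))
    return total

N = 4
q = [0.3, 1.1, -0.7, 2.2]
x = [exp(v) for v in q]
# Rational case: Xi_ij = (-q_j)^{i-1} (0-based row index i below).
Xi_rat = [[(-q[j]) ** i for j in range(N)] for i in range(N)]
# Trigonometric case: Xi_ij = x_j^{N-i}.
Xi_trig = [[x[j] ** (N - 1 - i) for j in range(N)] for i in range(N)]
```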
\paragraph{The cases with a spectral parameter.} Here we deal with Vandermonde type matrices,
which degenerate at $z=0$. The special feature of these cases is that the simple zero at $z=0$
appears only in the center of mass frame. Introduce
\begin{equation}\label{e3358}
\displaystyle{
{\bar q}_j=q_j-q_0\,,\quad q_0 = \frac{1}{N}\sum\limits_{k=1}^{N}q_{k}\,.
}
\end{equation}
1) in the rational case:
\begin{equation}\label{e433}
\begin{array}{c}
\displaystyle{
\Xi_{ij}(z,q_j) =\Big(\frac{z}{N}-q_{j}\Big)^{\varrho(i)}
}
\end{array}
\end{equation}
with
\begin{equation}\label{e4341}
\varrho(i) = \left\{\begin{array}{l}
i-1 \;\, \hbox{for} \; 1 \leq i \leq N-1,\\
N\; \quad\ \hbox{for} \; i=N\,.
\end{array}\right.
\end{equation}
%
Then
%
\begin{equation}\label{e4346}
\displaystyle{
\det\Xi(z,q_j)=\Big(z-\sum\limits_{k=1}^{N}q_{k}\Big)\prod\limits_{i<j}(q_i-q_j)\,,
\qquad
\det\Xi(z,{\bar q}_j)=z\prod\limits_{i<j}(q_i-q_j)\,,
}
\end{equation}
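A quick numerical spot-check of (\ref{e4346}) for $N=3$; the helper names and sample values are ours.

```python
from itertools import permutations
from math import prod

def det(M):
    n = len(M)
    return sum((-1) ** sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
               * prod(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

N, z = 3, 0.77
rho = [0, 1, 3]          # the exponents varrho(i) for i = 1..3 defined above
q = [0.4, -1.3, 1.2]
q0 = sum(q) / N
qbar = [v - q0 for v in q]

def Xi(z, qq):           # Xi_ij(z, q_j) = (z/N - q_j)^{varrho(i)}
    return [[(z / N - qq[j]) ** rho[i] for j in range(N)] for i in range(N)]

vand = prod(q[i] - q[j] for i in range(N) for j in range(i + 1, N))
```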
2) in the trigonometric case we use
\begin{equation}
\label{e423}
\begin{array}{c}
\displaystyle{
\Xi_{ij}(y_j) =
y_{j}^{i-1}
+\delta_{iN}\frac{(-1)^{N}}{y_{j}}\,,
}
\end{array}
\end{equation}
%
\begin{equation}
\label{e42310}
\begin{array}{c}
\displaystyle{
\det\Xi=(-1)^{\frac{N(N-1)}{2}}
\Big(1-\frac{1}{y_1\,...\,y_N}\Big)\prod\limits_{i<j}(y_i-y_j)\,.
}
\end{array}
\end{equation}
%
Plugging $y_j=e^{-2q_j+2q_0+2z/N}$ into (\ref{e42310}) we get
%
\begin{equation}
\label{e42311}
\begin{array}{c}
\displaystyle{
\det\Xi=e^{(N-2)z}
(e^z-e^{-z})\prod\limits_{i<j}\Big( e^{q_i-q_j}-e^{q_j-q_i} \Big)\,.
}
\end{array}
\end{equation}
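Both (\ref{e42310}) and the substituted form (\ref{e42311}) can be spot-checked numerically for $N=3$; the sample values below are ours.

```python
from itertools import permutations
from math import exp, prod

def det(M):
    n = len(M)
    return sum((-1) ** sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
               * prod(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

N, z = 3, 0.37
q = [0.4, -1.3, 1.2]
q0 = sum(q) / N
y = [exp(-2 * q[j] + 2 * q0 + 2 * z / N) for j in range(N)]

# Xi_ij = y_j^{i-1} + delta_{iN} (-1)^N / y_j (0-based row index i below).
Xi = [[y[j] ** i + (1 if i == N - 1 else 0) * (-1) ** N / y[j]
       for j in range(N)] for i in range(N)]
```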
%
Notice that a possible difference between the Vandermonde determinant definitions, involving $\prod\limits_{i<j}(x_i-x_j)$
or $\prod\limits_{i<j}(1-x_i/x_j)$ or $\prod\limits_{i<j}(\sqrt{x_i/x_j}-\sqrt{x_j/x_i})$, can be easily removed
by multiplying $\Xi$ by an appropriate diagonal matrix.
For example, in order to match the trigonometric Vandermonde
function $\prod_{i<j}(x_i-x_j)$ to the trigonometric limit
of (\ref{e22}) with $\theta_p(x_i/x_j)=1-x_i/x_j$ one should
modify the trigonometric $\Xi$-matrix as
$\Xi_{ij}(x_j) \rightarrow \Xi_{ij}(x_j) x_j^{\frac{N+1}{2} -j}$.
3) in the elliptic case
%
\begin{equation}\label{e86}
\Xi_{ij}(z,{\bar q}_j)=
\vartheta\left[ \begin{array}{c}
\frac12-\frac{i}{N} \\ \frac N2
\end{array} \right] \left(z - N{\bar q}_j\left.\right|N\tau\right)\,.
\end{equation}
Due to (\ref{e873}) we obtain
%
\begin{equation} \label{e8655}
\begin{array}{c}
\displaystyle{
\det\Xi(z,q)=c_N(\tau)\,\vartheta(z)\prod\limits_{i<j}\vartheta(q_i-q_j)\,.
}
\end{array}
\end{equation}
%
\section{Appendix C: Determinant representations in the \texorpdfstring{${\rm GL}_2$ cases}{GL(2) cases}} \label{AppC}
\def\theequation{C.\arabic{equation}}
\setcounter{equation}{0}
\paragraph{Double elliptic ${\rm GL}_2$ case.} Consider the ${\rm GL}_2$ example explicitly.
Rewrite expression (\ref{e52}) for the Lax operator of the elliptic Ruijsenaars-Schneider model
in terms of the Kronecker function (\ref{e876}):
\begin{equation}\label{e721}
\displaystyle{
{\hat L}^{RS}_{ij}(z, \eta, \hbar)
=\frac{\vartheta(-\eta)}{\vartheta'(0)}\Phi(z,q_i-q_j-\eta)\,\prod_{k\neq j}\frac{\vartheta(q_{jk}+\eta)}{\vartheta(q_{jk})}\,e^{ \hbar \partial_j}\,,
}
\end{equation}
%
so that for $N=2$ we have
%
\begin{equation}\label{e722}
\displaystyle{
{\hat L} ^{\rm RS}(z, \eta, \hbar) =\frac{\vartheta(-\eta)}{\vartheta'(0)}
\mat{ \Phi(z,-\eta)\,\frac{\vartheta(q_{12}+\eta)}{\vartheta(q_{12})}\, e^{ \hbar \partial_1} }
{\Phi(z,q_{12}-\eta)\,\frac{\vartheta(q_{21}+\eta)}{\vartheta(q_{21})}\, e^{ \hbar \partial_2}}
{\Phi(z,q_{21}-\eta)\,\frac{\vartheta(q_{12}+\eta)}{\vartheta(q_{12})}\, e^{ \hbar \partial_1}}
{\Phi(z,-\eta)\,\frac{\vartheta(q_{21}+\eta)}{\vartheta(q_{21})}\, e^{ \hbar \partial_2}}\,.
}
\end{equation}
After the substitution into (\ref{e50}) one needs to calculate the expression
%
\begin{equation}\label{e723}
\begin{array}{c}
\displaystyle{
\Phi(z,-k_1\eta)\Phi(z,-k_2\eta)-\Phi(z,q_{21}-k_1\eta)\Phi(z,q_{12}-k_2\eta)\stackrel{(\ref{e879})}{=}
}
\\ \ \\
\displaystyle{
=\Phi(z,-(k_1+k_2)\eta)\Big( E_1(q_{12}+k_1\eta)+E_1(q_{21}+k_2\eta) -E_1(k_1\eta)-E_1(k_2\eta)\Big)\,.
}
\end{array}
\end{equation}
%
The expression in the brackets is simplified via (\ref{e880}), and the function $\Phi(z,-(k_1+k_2)\eta)$ should be rewritten
in terms of theta functions via (\ref{e876}). This yields the answer (\ref{s22}) for $N=2$.
\paragraph{Dual to the elliptic Ruijsenaars-Schneider ${\rm GL}_2$ case.}
The Lax matrix $ {\hat L} ^{\rm RS}$ of the trigonometric Ruijsenaars-Schneider model (\ref{e543}) is as follows:
\begin{equation}\label{e729}
\displaystyle{
{\hat L} ^{\rm RS} =
\mat{\frac{t x_1 - x_2}{x_1-x_2} q^{x_1 \partial_1}}
{\frac{(1-t) x_2 }{x_1-x_2} q^{x_2 \partial_2}}
{\frac{(1-t) x_1 }{x_2-x_1} q^{x_1 \partial_1}}
{\frac{t x_2 - x_1}{x_2-x_1} q^{x_2 \partial_2}}\,.
}
\end{equation}
Then the matrix $\hat{\mathcal L}(\lambda)$ (\ref{e239}) takes the form:
\begin{equation}\label{e753}
\displaystyle{
\hat{\mathcal L}(\lambda) = \sum_{n \in \mathbb{Z} } \omega^{\frac{n^2-n}{2}} (-\lambda)^n
\mat{\frac{t^n x_1 - x_2}{x_1-x_2} q^{nx_1 \partial_1}}
{\frac{(1-t^n) x_2 }{x_1-x_2} q^{nx_2 \partial_2}}
{\frac{(1-t^n) x_1 }{x_2-x_1} q^{nx_1 \partial_1}}
{\frac{t^n x_2 - x_1}{x_2-x_1} q^{nx_2 \partial_2}}
}\,.
\end{equation}
Computing its determinant one obtains:
\begin{equation}\label{e540}
\begin{array}{c}
\displaystyle{
\hat{\mathcal O}(\lambda) = \det \hat{\mathcal L}(\lambda) = \sum_{(n_1,n_2) \in\, \mathbb{Z} \times \mathbb{Z}} \omega^{\frac{n_1^2-n_1}{2}+ \frac{n_2^2-n_2}{2}} (-\lambda)^{n_1+n_2}\times
}
\end{array}
\end{equation}
%
%
$$
\begin{array}{c}
\displaystyle{
\times \frac{(t^{n_1}x_1 -x_2)(t^{n_2}x_2 - x_1) - (1-t^{n_1})(1-t^{n_2})x_1 x_2}{(x_1-x_2) (x_2 - x_1)} \, q^{n_1x_1 \partial_1 + n_2x_2 \partial_2}
}
\\ \ \\
\displaystyle{
= \sum_{(n_1,n_2) \in\, \mathbb{Z} \times \mathbb{Z}} \omega^{\frac{n_1^2-n_1}{2}+ \frac{n_2^2-n_2}{2}} (-\lambda)^{n_1+n_2} \frac{t^{n_1}x_1 - t^{n_2} x_2}{x_1 -x_2}\, q^{n_1x_1 \partial_1 + n_2x_2 \partial_2}\,,
}
\end{array}
$$
%
as it should be according to (\ref{e1}).
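The simplification of the numerator above is elementary algebra; a numerical spot-check (all sample values ours):

```python
# Scalar identity behind the 2x2 determinant computation:
# ((t^{n1}x1 - x2)(t^{n2}x2 - x1) - (1-t^{n1})(1-t^{n2}) x1 x2) / ((x1-x2)(x2-x1))
#   = (t^{n1}x1 - t^{n2}x2)/(x1-x2).
t, x1, x2 = 1.3, 0.7, 2.1

def lhs(n1, n2):
    num = ((t ** n1 * x1 - x2) * (t ** n2 * x2 - x1)
           - (1 - t ** n1) * (1 - t ** n2) * x1 * x2)
    return num / ((x1 - x2) * (x2 - x1))

def rhs(n1, n2):
    return (t ** n1 * x1 - t ** n2 * x2) / (x1 - x2)
```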
\paragraph{Dual to the elliptic Calogero-Moser ${\rm GL}_2$ case.}
In the $N=2$ case the Lax matrix $ {\hat L} ^{\rm RS}$ of the rational
Ruijsenaars-Schneider model (\ref{e244}) is as follows:
\begin{equation}\label{e529}
\displaystyle{
{\hat L} ^{\rm RS} =
\mat{\frac{z-\eta}{\eta}\,\frac{q_{12}+\eta}{q_{12}}\, e^{\hbar \partial_{1} }}
{\frac{\eta(z-\eta+q_{12})}{z\,q_{21}}\, e^{\hbar \partial_{2}}}
{\frac{\eta(z-\eta+q_{21})}{z\,q_{12}}\, e^{\hbar \partial_{1}}}
{ \frac{z-\eta}{\eta}\,\frac{q_{21}+\eta}{q_{21}}\,e^{\hbar \partial_{2}} }\,.
}
\end{equation}
Then the matrix $\hat{\mathcal L}(\lambda)$ (\ref{e239}) takes the form:
\begin{equation}\label{e53}
\displaystyle{
\hat{\mathcal L}(\lambda) = \sum_{k \in\, \mathbb{Z} } \omega^{\frac{k^2-k}{2}} (-\lambda)^k
\mat{\frac{z-k\eta}{k\eta}\,\frac{q_{12}+k\eta}{q_{12}}\, e^{k\hbar \partial_{1} }}
{\frac{\eta(z-k\eta+q_{12})}{z\,q_{21}}\, e^{k\hbar \partial_{2}}}
{\frac{\eta(z-k\eta+q_{21})}{z\,q_{12}}\, e^{k\hbar \partial_{1}}}
{\frac{z-k\eta}{k\eta}\,\frac{q_{21}+k\eta}{q_{21}}\,e^{k\hbar \partial_{2}} }\,.
}
\end{equation}
Computing its determinant one gets (\ref{e245}) for $N=2$.
\section{Appendix D: Relation between \texorpdfstring{$\mathcal{O}'(z, \lambda)$ and $\mathcal{O}(\lambda)$}{O'(z,w) and O(w)}} \label{AppD}
\def\theequation{D.\arabic{equation}}
The operators $\mathcal{O}_n'$ are generated by the function:
\begin{equation}\label{e1000}
\begin{array}{c}
\displaystyle{
\hat{\mathcal O}'(z,\lambda)= \sum_{k \in\, \mathbb{Z}} \frac{\vartheta(z-k\eta)}{\vartheta(z)} \lambda^k \hat{\mathcal O}_k'
=
}
\\ \ \\
\displaystyle{
=
\sum_{n_1,...,n_N \in\, \mathbb{Z}}
\frac{\vartheta(z-\eta\sum_{i=1}^N n_i)}{\vartheta(z)}
\omega^{\sum_i\frac{n_i^2 - n_i}{2}} (-\lambda)^{\sum_i^N\! n_i}\, \prod_{i < j}^N \frac{\vartheta(q_i-q_j+\eta(n_i - n_j)) }{\vartheta(q_i-q_j)} \prod_i^N e^{ \hbar \, n_i \partial_{q_i}}\,.
}
\end{array}
\end{equation}
They are related to the operators $\mathcal{O}_n$ generated by
%
\begin{equation}\label{e1001}
\displaystyle{
\hat{\mathcal O}(\lambda)= \sum_{n_1,...,n_N \in\, \mathbb{Z}} \omega^{\sum_i\frac{n_i^2 - n_i}{2}} (-\lambda)^{\sum_i n_i} \prod_{i < j}^N \frac{\theta_p (t^{n_i - n_j}\frac{x_i}{x_j})}{\theta_p (\frac{x_i}{x_j})} \prod_i^N q^{n_i x_i \partial_i} = \sum_{n \in \mathbb{Z}} \lambda^n \hat{\mathcal O}_n
}
\end{equation}
in the following way:
\begin{equation}\label{e1002}
\mathcal{O}_n' = h^{-1} \mathcal{O}_n h\,,
\end{equation}
where
\begin{equation}\label{e1003}
h = h(x_1,...,x_N) = \prod_{i < j} \Big( \frac{x_i}{x_j}\Big)^{\frac{\eta}{2 \hbar}}\,.
\end{equation}
This follows from the relation between the two theta functions (\ref{e83}).
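The mechanism behind this conjugation is that conjugating a shift operator by a power function produces only a constant factor; a toy numerical illustration (all values and names ours):

```python
import math

# Conjugating a q-shift operator by x^alpha gives the constant q^{n*alpha}:
# x^{-alpha} (q^n x)^{alpha} f(q^n x) = q^{n*alpha} f(q^n x).
q, n, alpha = 1.5, 2, 0.7
f = lambda x: math.sin(x) + 2.0

def conjugated_shift(x):   # x^{-alpha} * (shift applied to x^{alpha} f)
    return x ** (-alpha) * ((q ** n * x) ** alpha * f(q ** n * x))

def scaled_shift(x):       # q^{n*alpha} * (shift applied to f)
    return q ** (n * alpha) * f(q ** n * x)
```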
\section{Appendix E: Easier proof of the main theorems \ref{th1} and \ref{th2}}
In this Appendix we give one more proof of our main theorems, using a presentation which is very close to that of the book \cite{McD}.
\noindent\underline{\em{Proof of theorem 2.1:}}\quad Denote:
\begin{equation*}
\Delta(x) = \prod_{i < j}(x_i - x_j)
\end{equation*}
Let us expand the determinant:
\begin{gather*}
\frac{1}{\Delta(x)}\sum_{\sigma \in S_N} (-1)^{|\sigma|} x^{\sigma \delta} \prod_{i = 1}^N \theta_\omega(\lambda t^{(\sigma \delta)_i} q^{x_i \partial_i}) = \\ =
\frac{1}{\Delta(x)}\sum_{\sigma \in S_N} (-1)^{|\sigma|} x^{\sigma \delta} \prod_{i = 1}^N \Big(\sum_{n_i \in \mathbb{Z}}(-\lambda)^{n_i} \omega^{\frac{n_i^2-n_i}{2}} ( t^{(\sigma \delta)_i})^{n_i} q^{n_i x_i \partial_i} \Big) =
\end{gather*}
Gathering all terms in front of a fixed power of $\lambda$ one obtains:
\begin{gather*}
= \sum_{n_1,...,n_N \in \mathbb{Z}} (-\lambda)^{\sum_i n_i} \omega^{\sum_i\frac{n_i^2-n_i}{2}} \frac{1}{\Delta(x)} \sum_{\sigma \in S_N} (-1)^{|\sigma|} x^{\sigma \delta} \prod_{i=1}^N ( t^{(\sigma \delta)_i})^{n_i} \prod_{i=1}^N q^{n_i x_i \partial_i}
\end{gather*}
The multiplication by $\prod_{i=1}^N ( t^{(\sigma \delta)_i})^{n_i}$ can be expressed as a conjugation of $\Delta(x)$ by the shift operator $\prod_{i=1}^N t^{n_i x_i \partial_i}\, $:
\begin{gather*}
\sum_{n_1,...,n_N \in \mathbb{Z}} (-\lambda)^{\sum_i n_i} \omega^{\sum_i\frac{n_i^2-n_i}{2}} \frac{1}{\Delta(x)} \Big( \prod_{i =1}^N t^{n_i x_i \partial_i}\Big) \sum_{\sigma \in S_N} (-1)^{|\sigma|} x^{\sigma \delta} \Big( \prod_{i=1}^N t^{n_i x_i \partial_i}\Big)^{-1} \prod_{i=1}^N q^{n_i x_i \partial_i} = \\ =
\sum_{n_1,...,n_N \in \mathbb{Z}} (-\lambda)^{\sum_i n_i} \omega^{\sum_i\frac{n_i^2-n_i}{2}} \prod_{i < j} \frac{t^{n_i}x_i - t^{n_j}x_j}{x_i - x_j} \prod_{i =1}^N q^{n_i x_i \partial_i}\,, \qquad \blacksquare
\end{gather*}
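The combinatorial heart of the last step is the bialternant evaluation $\sum_{\sigma}(-1)^{|\sigma|}\prod_i y_i^{\delta_{\sigma(i)}}=\prod_{i<j}(y_i-y_j)$ applied at $y_i=t^{n_i}x_i$; a numerical spot-check for $N=3$ (sample values ours):

```python
from itertools import permutations
from math import prod

# Antisymmetrized sum over S_N with exponents delta = (N-1,...,1,0);
# evaluating at y_i = t^{n_i} x_i and dividing by Delta(x) yields
# prod_{i<j} (t^{n_i}x_i - t^{n_j}x_j)/(x_i - x_j).
def antisym(y):
    n = len(y)
    delta = list(range(n - 1, -1, -1))
    total = 0.0
    for p in permutations(range(n)):
        sgn = (-1) ** sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
        total += sgn * prod(y[i] ** delta[p[i]] for i in range(n))
    return total

t = 1.3
x = [0.5, 1.2, 2.0]
ns = [2, 0, -1]
y = [t ** ns[i] * x[i] for i in range(3)]
```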
\noindent\underline{\em{Proof of theorem 2.3:}}\quad We need to do the same thing as in the proof of theorem 2.1 but for the determinant:
\begin{equation*}
\displaystyle{
\frac{1}{\det \Xi_{ij}(q_j,z)} \, \det_{1 \leq i,j \leq N} \Big\{ \sum_{n \in\, \mathbb{Z}} (-\lambda)^n \omega^{\frac{n^2-n}{2}} \Xi_{ij}(q_j + n \eta,z) e^{n \hbar \partial_{q_j}} \Big\} =
}
\end{equation*}
\begin{equation*}
\displaystyle{
= \frac{1}{\det \Xi_{ij}(q_j,z)} \, \det_{1 \leq i,j \leq N}
\Big\{ \sum_{k \in\, \mathbb{Z}} \Xi_{ij,k}(z)\,
e^{(\alpha k + \sigma_{ij})q_j} \theta_\omega (\lambda e^{\alpha k \eta + \sigma_{ij} \eta} e^{ \hbar \partial_{q_j}}) \Big\}
}
\end{equation*}
\begin{gather*}
= \frac{1}{\det \Xi_{ij}(q_j,z)} \, \sum_{\sigma \in S_N} (-1)^{|\sigma|} \prod_{i=1}^N \Big[ \sum_{k_i \in \mathbb{Z}} \Xi_{\sigma(i)i,k}(z)\, e^{(\alpha k_i + \sigma_{\sigma(i)i}) q_i} \theta_\omega(\lambda e^{(\alpha k_i + \sigma_{\sigma(i)i}) \eta} e^{\hbar \partial_{q_i}}) \Big]
\end{gather*}
\begin{gather*}
= \frac{1}{\det \Xi_{ij}(q_j,z)} \, \sum_{\sigma \in S_N} (-1)^{|\sigma|} \prod_{i=1}^N \Big[ \sum_{k_i \in \mathbb{Z}} \sum_{n_i \in \mathbb{Z}} (-\lambda)^{n_i} \omega^{\frac{n_i^2 - n_i}{2}} \Xi_{\sigma(i)i,k}(z) \, e^{(\alpha k_i + \sigma_{\sigma(i)i}) q_i} \, e^{(\alpha k_i + \sigma_{\sigma(i)i}) n_i \eta} e^{n_i \hbar \partial_{q_i}} \Big]
\end{gather*}
\begin{gather*}
= \frac{1}{\det \Xi_{ij}(q_j,z)} \, \sum_{k_1,...k_N \in \mathbb{Z}} \sum_{n_1,...,n_N \in \mathbb{Z}} (-\lambda)^{\sum_i n_i} \omega^{\sum_i\frac{ n_i^2 - n_i}{2}} \times
\end{gather*}
\begin{gather*}
\times\sum_{\sigma \in S_N} (-1)^{|\sigma|} \Xi_{\sigma(i)i,k}(z) \, e^{(\alpha k_i + \sigma_{\sigma(i)i}) q_i} \,\prod_{i=1}^N e^{(\alpha k_i + \sigma_{\sigma(i)i}) n_i \eta} e^{n_i \hbar \partial_{q_i}}
\end{gather*}
By the same arguments as in the proof of theorem 2.1 above, this expression can be rewritten as:
\begin{gather*}
= \frac{1}{\det \Xi_{ij}(q_j,z)} \, \sum_{n_1,...,n_N \in \mathbb{Z}} (-\lambda)^{\sum_i n_i} \omega^{\sum_i\frac{ n_i^2 - n_i}{2}} \Big(\prod_{i=1}^N e^{n_i \eta \partial_{q_i}}\Big)\det \Xi_{ij}(q_j,z) \Big(\prod_{i=1}^N e^{n_i \eta \partial_{q_i}}\Big)^{-1} \,\prod_{i=1}^N e^{n_i \hbar \partial_{q_i}}
\end{gather*}
By evaluating the determinant $\det \Xi_{ij}(q_j,z)$ via (\ref{e8655}), one obtains:
\begin{gather*}
=\sum_{n_1,...,n_N \in \mathbb{Z}} (-\lambda)^{\sum_i n_i} \omega^{\sum_i\frac{ n_i^2 - n_i}{2}} \frac{\vartheta(z -N q_0 -\eta \sum_i n_i)}{\vartheta(z - N q_0)} \prod_{i<j}^N \frac{\vartheta(q_i - q_j + \eta n_i - \eta n_j)}{\vartheta(q_i - q_j)} \,\prod_{i=1}^N e^{n_i \hbar \partial_{q_i}} =\\
= \hat{\mathcal O}'(z - N q_0,\lambda)\,, \qquad \blacksquare
\end{gather*}
\subsection*{Acknowledgments}
\addcontentsline{toc}{section}{Acknowledgments}
We are grateful to
G. Aminov, A. Gorsky, A. Mironov, A. Morozov, V. Rubtsov, I. Sechin, Sh. Shakirov and especially to Y. Zenkevich for useful comments and discussions.
The work of A. Zotov
is supported by the Russian Science Foundation under grant 19-11-00062
and performed in Steklov Mathematical Institute of Russian Academy of Sciences.
\begin{small}
\section{Introduction}
On 1998 January 20, during an {\em R}XTE\ observation in the direction of the
Small Magellanic Cloud (SMC), a previously unknown X--ray source, namely
XTE J0055--724, was detected at a flux level (2--10 keV) of
$\sim$ 6.0 $\times$ 10$^{-11}$ erg s$^{-1}$ cm$^{-2}$. The source showed
pulsations at a period of $\sim$59\,s (Marshall \& Lochner, 1998a).
A previous {\em R}XTE\ observation of the same field performed on 1998 January 12
failed to detect the source.
In response to these findings, simultaneous \BSAX\ and {\em R}XTE\ observations
of a region including the {\em R}XTE\ error circle ($\sim$10$^{\prime}$ radius) of XTE
J0055--724 were carried out on 1998 January 28. The results of these observations
are reported elsewhere (Santangelo {\it et al. } 1998a; Marshall {\it et al. } 1998b).
Thanks to the spatial capabilities of the imaging X--ray concentrators on board
\BSAX, an improved position ($\sim$40$^{''}$ radius) was obtained for
the pulsating source, named 1SAX J0054.9--7226 (Santangelo {\it et al. }
1998a,b). The new \BSAX\ error circle contains only the previously
classified {\em ROSAT}\ and {\em Einstein}\ X--ray sources 1WGA J0054.9--7226 and 2E\,0053.2--7242, which
are likely the
same source. In the following we adopt the earliest source name, i.e.
{\em Einstein}'s.
2E\,0053.2--7242\ is a variable X--ray source in the SMC, which was already
considered a candidate High Mass X--ray Binary by Wang \& Wu (1992;
source \#35), Bruhweiler {\it et al. } (1987; source \#9) and by White {\it et al. }
(1994; in the WGACAT), based on its high spectral hardness.
We report in this letter on the results from the analysis of the
Position Sensitive Proportional Counter (PSPC) and High Resolution Imager (HRI)
observations from the {\em ROSAT}\ public archive.
\section{{\em ROSAT}\ observations}
\begin{table*}[tbh]
\begin{center}
\caption{{\em ROSAT}\ observations of 2E\,0053.2--7242\ }
\begin{tabular}{clrcccl}
\hline \noalign{\smallskip}
Pointing & Instr.&Expos.&Start Time&Stop Time&Count rate$^{\dag}$& Notes$^{\ddag}$ \\
Number&&(s)& & & (cts s$^{-1}$) & \\\noalign{\smallskip}
\hline\noalign{\smallskip}
600195A00 & PSPC &16978 &1991 Oct 08 03:10 & Oct 09 02:38 & 0.048$\pm$0.002 & T$_{\rm eff}$$=$11820s; [1]\\
600196A00 & PSPC & 1346 &1991 Oct 09 03:06 & Oct 10 04:47 &$<$ 0.080 & T$_{\rm eff}$$=$740s\\
600196A01 & PSPC &22223 &1992 Apr 15 15:30 & Apr 25 16:41 &$<$ 0.010 & T$_{\rm eff}$$=$10920s; [2]\\
600195A01 & PSPC & 9443 &1992 Apr 17 17:01 & Apr 27 16:28 & 0.008$\pm$0.001 & T$_{\rm eff}$$=$7202s \\
600452N00 & PSPC &14207 &1993 Apr 10 11:54 & Apr 25 01:20 &$<$ 0.030& T$_{\rm eff}$$=$6815s \\
600453N00 & PSPC &17593 &1993 May 09 07:17& May 12 20:14 &$<$ 0.005 & T$_{\rm eff}$$=$7330s \\
500142N00 & PSPC & 4909 &1993 May 12 22:41 & May 13 04:11 &$<$ 0.015 & T$_{\rm eff}$$=$2743s \\
600452A01 & PSPC & 16663& 1993 Oct 01 08:24 & Oct 14 16:54 & $<$ 0.014 & T$_{\rm eff}$$=$8509s; [3] \\
500250N00 & PSPC & 20845 &1993 Oct 14 21:10 & Oct 29 08:45 &$<$ 0.014 & T$_{\rm eff}$$=$10430s; [4]\\ \noalign{\smallskip}
\hline \noalign{\smallskip}
400237A01 & HRI & 1167 &1993 Apr 17 09:33& Apr 17 10:08 &$<$ 0.008 & \\
400237N00 & HRI & 1096 &1992 Oct 23 13:41 & Oct 24 08:45 &$<$ 0.010&\\
400237A02 & HRI & 1301 &1994 Apr 15 10:26 & Apr 15 11:05 &$<$ 0.008&\\
600810N00 & HRI & 6726 &1996 Apr 26 02:09 & Jun 10 14:45 & 0.004$\pm$0.001 & \\ \noalign{\smallskip}
\hline\noalign{\smallskip}
\end{tabular}
\end{center}
\noindent $^{\dag}$ Errors are at $1\,\sigma$ level and upper limits at
$3\,\sigma$.\\
\noindent $^{\ddag}$ The effective times, T$_{\rm eff}$, are vignetted and
exposure corrected.\\
\noindent [1] --- Pulsations at a period of 59.072\,s.\\
\noindent [2] --- Strong contamination by 1WGAJ0054.0--7226 and 1WGAJ0053.8--7226 due to
the degrading PSF at large off--axis angles.\\
\noindent [3] --- Contamination by 1WGAJ0053.8--7226.\\
\noindent [4] --- Strong contamination by 1WGAJ0053.8--7226.\\
\end{table*}
The PSPC (0.1--2.4 keV) and HRI (0.1--2.4 keV) detectors on board {\em ROSAT}\
observed the SMC field including 2E\,0053.2--7242\ several times. In the {\em ROSAT}\ public
archive there are 13 observations performed between 1991 October
and 1996 April: 9 were carried out with the PSPC and 4 with the HRI.
The PSPC images, spectra and light curves were accumulated and corrected
for the effective exposure map. This is particularly relevant to minimize the
effects of the wobble in the pointing direction on count rate measurements
of sources near the edge of the field of view or the ribs of
the detector window support structure.
The vignetting correction was also taken into account and
the effective exposure time $T_{\rm eff}$ obtained for each PSPC pointing.
A sliding cell detection algorithm was used to characterise the physical
parameters of 2E 0053.2--7242 when detected, such as position, count rate (90\%
confidence level) and S/N ratio, and to
obtain a $3\,\sigma$ count rate upper limit in the case of non--detections.
Table\,1 summarises the results of this analysis.
2E\,0053.2--7242\ was only detected on three occasions: twice with the PSPC
(1991 October 8--9 and 1992 April 17--27) and once with the HRI (1996
April 26 -- June 10).
\subsection{PSPC data}
The {\em ROSAT}\ event list and spectrum of 2E\,0053.2--7242\ were extracted from a circle of
$\sim$1$^{\prime}$.5 radius (corresponding to an encircled energy of
$\sim$95\%) around the X--ray position.
On 1991 October 8--9 (sequence number 600195A00)
the source count rate was 0.047 cts s$^{-1}$, while
on 1992 April 17--27 (600195A01) it had decreased to about 0.01 cts s$^{-1}$.
Of the $\sim$ 600 and $\sim$ 150 photons contained in the extraction circle
of each of the two pointings, we
estimated that about 45 and 20 photons derive from the background
around the source, respectively.
\begin{figure*}[tbh]
\centerline{\psfig{figure=OCT91dps.ps,width=12cm,height=8.cm}}
\caption{Oct 1991 power spectrum of the 0.1--2.4 keV {\em ROSAT}\ PSPC
light curve of 2E\,0053.2--7242. The preliminary $3\,\sigma$ detection threshold
is also shown. A folded light curve at the best period of 59.072\,s
is shown as an insert. The peak structures around a frequency of $\sim$2.5
$\times$ 10$^{-3}$ Hz are due to the wobble in the pointing direction}
\end{figure*}
The 1991 October observation is the only one that contains a sufficiently high
number of
photons to perform a detailed periodicity search. The photon arrival times
were corrected to the barycenter of the solar system and a 1\,s binned light
curve accumulated. The corresponding power spectrum, calculated over the entire
observation duration ($\sim$1 day), is shown in Fig.\,1, together with the
$3\,\sigma$ preliminary peak detection threshold described by Israel \& Stella
(1996). The search was performed over a period interval around that detected
by \BSAX\ and assuming a maximum $|\dot P|$ of $\sim$3 s yr$^{-1}$ (the highest
ever observed from an X--ray pulsar: GX1+4). This translates into a search
over an
interval of $\sim$1100 Fourier frequencies centered on 0.017 Hz.
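The size of this search interval can be cross-checked at the order-of-magnitude level; the $\sim$6.3 yr baseline to the \BSAX\ epoch and the $\sim$1 day pointing span used below are assumptions of this sketch, not values quoted in the text.

```python
# Order-of-magnitude sketch of the ~1100-Fourier-frequency search interval.
# Assumptions: ~6.3 yr baseline between the 1991 Oct ROSAT pointing and the
# BeppoSAX detection, and a ~1 day pointing span setting the Fourier
# resolution 1/T_span.
P = 59.072                  # detected period (s)
Pdot_max = 3.0              # s/yr, highest |Pdot| ever observed (GX1+4)
baseline_yr = 6.3           # ROSAT 1991 -> BeppoSAX 1998 (assumption)
T_span = 86400.0            # ~1 day observation span in seconds

dP = Pdot_max * baseline_yr         # maximum period drift, ~19 s
dnu = 2.0 * dP / P**2               # frequency interval covering P +/- dP
n_fourier = dnu * T_span            # number of independent Fourier spacings
# n_fourier comes out of order 10^3, consistent with the quoted ~1100.
```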
Significant peaks were detected around a frequency of $\sim$0.0169 Hz.
These are unique to 2E\,0053.2--7242. The multiple peak structure is due to the
sidelobes arising from the satellite orbital occultation.
The highest of these peaks has a significance of $7.4\,\sigma$ over the
explored frequency interval, and corresponds to a period of 59.072 $\pm$ 0.003 s
(90\% uncertainties are used throughout this letter).
The modulation is energy independent in the PSPC band (within the statistical
uncertainties). The shape is fairly asymmetric with a pulse fraction of $\sim $
40\% (Fig.\,1, inner panel).
By using the period measured by \BSAX\ a period derivative of -- 0.016 s yr$^{-1}$
was obtained.
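A minimal sketch of this estimate follows; the \BSAX\ period is not quoted in this section, so the value below is hypothetical, chosen to reproduce the stated mean derivative.

```python
# Mean period derivative between the two epochs.  The ROSAT period is
# 59.072 s (1991 Oct); the BeppoSAX period/epoch below are hypothetical
# values (assumptions) chosen to reproduce the quoted Pdot ~ -0.016 s/yr.
P_rosat, t_rosat = 59.072, 1991.77      # s, epoch in years
P_sax, t_sax = 58.97, 1998.05           # hypothetical BeppoSAX values
Pdot = (P_sax - P_rosat) / (t_sax - t_rosat)   # mean derivative, s/yr
```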
The PSPC Pulse Height Analyser (PHA) rates were grouped so as to contain a
minimum of 20 photons per energy bin.
The spectrum of the 1991 October observation was well fit with an
absorbed power--law model (see Fig.\,2; upper curve). The best fit
($\chi^2/$degree of freedom -- $dof$ = $21/24$) was obtained for a
photon index of $\Gamma$ = 0.90 $\pm^{0.62}_{0.85}$ and a column density
of N$_H$ = (1.3 $\pm^{1.6}_{0.8}$) $\times$ 10$^{21}$ cm$^{-2}$
(see Table\,2). Note that the Galactic hydrogen column in the direction
of the SMC is $\sim$ 6 $\times$ 10$^{20}$ cm$^{-2}$. The 0.1--2.4 keV
luminosity at the source is $L_X$ $\sim$ 4.2 $\times$ 10$^{35}$ erg s$^{-1}$
for an assumed distance of 65 kpc (Wang \& Wu 1992).
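The quoted luminosity can be cross-checked with $L_X = 4\pi d^2 F_X$. Note that the tabulated flux is the absorbed value while the quoted luminosity is unabsorbed, so this sketch is expected to reproduce only the approximate number.

```python
import math

# Sanity check of L_X = 4*pi*d^2*F_X for the 1991 October spectrum.
# F_X is the absorbed 0.1-2.4 keV flux from Table 2; the quoted L_X
# (~4.2e35 erg/s) is the unabsorbed value, so only the order of
# magnitude is expected to match.
PC_CM = 3.086e18                    # 1 pc in cm
d = 65e3 * PC_CM                    # 65 kpc in cm
F_X = 8.7e-13                       # erg cm^-2 s^-1 (Table 2, Oct 1991)
L_X = 4.0 * math.pi * d**2 * F_X    # erg s^-1, ~4.4e35
```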
We note that any further spectral component added to the power--law model
in order to fit the data excess around 1.2 keV did not significantly
improve the fit.
For the 1992 April observation the number of photons (150) is too low to obtain
an independent estimate of the spectral parameters. By keeping
the photon index fixed to the best fit value of the 1991 October observation,
an unabsorbed X--ray luminosity of $L_X$ $\sim$ 1.5 $\times$ 10$^{35}$ erg
s$^{-1}$ was derived (see Table\,2 and Fig.\,2).
\begin{table}[bht]
\begin{center}
\caption{{\em ROSAT}\ PSPC spectral results for 2E\,0053.2--7242\ }
\begin{tabular}{lll}
\hline \noalign{\smallskip} \noalign{\smallskip}
Parameter & Oct 1991 & Apr 1992 \\ \noalign{\smallskip}
\hline \noalign{\smallskip}
N$_H$ (10$^{22}$ cm$^{-2}$).......................... & 0.13$\pm_{0.08}^{0.16}$& 0.16$\pm_{0.16}^{0.55}$\\
$\Gamma$.................................................& 0.90$\pm^{0.62}_{0.85}$& 0.9\,(fixed) \\ \noalign{\smallskip}
Count rate (10$^{-3}$ counts s$^{-1}$)..... &45.0$\pm$2.0& 8.0$\pm$1.0 \\
F$_X$ (10$^{-13}$ erg cm$^{-2}$ s$^{-1}$)........... &8.7 &2.4\\
$L_X$ (N$_H$=0; 10$^{35}$ erg s$^{-1}$)...........& 4.2& 1.5\\ \noalign{\smallskip}
Reduced $\chi^2$/($dof$)....................... &0.96/(24)& ---\\ \noalign{\smallskip}
\hline
\end{tabular}
\end{center}
\noindent Note. --- the X--ray flux (absorbed) and the luminosity (unabsorbed)
refer to the 0.1--2.4 keV energy band.
\end{table}
\begin{figure}[tbh]
\centerline{\psfig{figure=59s_spalln.ps,width=8cm,height=7.5cm}}
\caption{{\em ROSAT}\ PSPC spectrum of 2E\,0053.2--7242\ during 1991 October and 1992 April.
The best fit power--law is shown, together
with the corresponding residuals. The 1992 Apr 17--27 data and residuals are
marked with triangles}
\end{figure}
\subsection{HRI data}
The {\em ROSAT}\ HRI observation during which 2E\,0053.2--7242\ was detected (sequence 600810;
1996 April--June) provided a significantly improved source position (Israel
1998). This was determined to be R.A. = 00$^h$ 54$^m$
56$^s$.5 and DEC = $-$72$^{\circ}$ 26$^{\prime}$ 47$^{\prime\prime}$.3 (equinox 2000; error radius of
1.1$^{\prime\prime}$ with a 90\% confidence level) by using both a sliding
cell and a Wavelet transform--based detection algorithm (Lazzati et al. 1998;
Campana et al. 1998).
However, due to the unknown boresight correction, the uncertainty radius
increases to 10$^{\prime\prime}$.
A 0.1--2.4 keV source flux level of $\sim$ 1.8$\times$
10$^{-13}$ erg cm$^{-2}$ s$^{-1}$ was determined assuming the best fit spectral
parameters of the 1991 October 8--9 observation. This translates into an unabsorbed
luminosity of $L_X \sim 8.5$ $\times$ 10$^{34}$ erg s$^{-1}$ (Israel 1998).
\section{{\em Einstein}\ observation}
2E\,0053.2--7242\ was discovered during a survey of the SMC by means of the
Imaging Proportional Counter (IPC) on board the {\em Einstein}\ spacecraft (Bruhweiler
{\it et al. }, 1987; Wang \& Wu, 1992) and classified as a hard X--ray source based
on its hardness ratio. It was observed at a count rate of
$\sim$0.009$\pm$0.002 cts s$^{-1}$ corresponding to an X--ray luminosity
of $\sim$1.5$\times$ 10$^{35}$ erg s$^{-1}$ (0.16--3.5 keV band).
\section{Discussion}
2E\,0053.2--7242\ was detected three times between 1991 and 1996 in the {\em ROSAT}\ archival
data. Highly significant pulsations at a period of 59.072\,s were detected
on 1991 October 8--9. These findings, together with the \BSAX\ results, yield
a mean period derivative of $\sim$ -- 0.016\, s yr$^{-1}$ between 1991 and
1998.
\begin{figure}[tbh]
\centerline{\psfig{figure=59s_dss.ps,width=8.cm,height=8.cm}}
\caption{ESO plate including the position of 2E\,0053.2--7242. The X--ray error circles
obtained from different instruments and satellites are shown}
\end{figure}
In one case a spectral analysis could be performed. The spectrum was found to
be consistent with a relatively flat, weakly absorbed power--law model that is
typical of accreting X--ray pulsars in this energy range.
The 0.1--2.4 keV luminosity of 2E\,0053.2--7242\ as observed with {\em ROSAT}\
ranges between $\sim$4.2$\times$10$^{35}$ erg
s$^{-1}$ (1991 October 8--9) and $\sim$8.5$\times$10$^{34}$ erg s$^{-1}$ (1996
April 26 -- June 10). Moreover {\em R}XTE\ detected 2E\,0053.2--7242\ at a
luminosity level of $\sim$3$\times$10$^{37}$ erg s$^{-1}$ in the 2--10 keV energy
band. Extrapolating to the {\em ROSAT}\ energy range the luminosity measured by {\em R}XTE\
on 1998 January 20, a 0.1--2.4 keV luminosity of $\sim$2.5$\times$10$^{36}$
erg s$^{-1}$ is derived, implying a pronounced long--term variability of
2E\,0053.2--7242\ (a factor of $>$30). This indicates that the source is probably a transient
X--ray pulsar in a high--mass binary containing a Be star.
A 10$^{\prime\prime}$ accurate position was obtained thanks to a {\em ROSAT}\ HRI
observation during which the source was detected (1996 April;
0.1--2.4 keV luminosity of $\sim$8.5$\times$10$^{34}$ erg s$^{-1}$).
The {\em ROSAT}\ HRI error circle contains only three stars in the ESO plates with
m$_V$ $>$ 15.5, one of which is the likely optical counterpart of 2E\,0053.2--7242 (see Fig.\,3).
Assuming a B$-$V colour of $-$0.2 and a
distance modulus of 19 mag, these optical counterpart candidates are
consistent with main sequence A9 -- B2 stars. We note that a similar
spectral--type star (B1.5Ve; m$_V$ = 16) is the companion of the nearby
X--ray source SMC X--2. Future optical follow--up observations
of these candidates should determine the counterpart of 2E\,0053.2--7242\
and its probable Be star X--ray transient nature. Optical and/or infrared
brightening of the counterpart during active phases would allow further
X--ray triggers and studies.
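The spectral-type argument can be sketched as follows; the main-sequence absolute magnitudes adopted for B2V and A9V stars are approximate literature values (assumptions of this sketch), not taken from the text.

```python
# Convert candidate apparent V magnitudes to absolute magnitudes with the
# quoted SMC distance modulus of 19 mag.  The main-sequence M_V values for
# B2V and A9V below are approximate literature values (assumptions).
DM = 19.0             # distance modulus quoted in the text
M_V_B2V = -2.5        # approx absolute magnitude of a B2V star (assumption)
M_V_A9V = +2.3        # approx absolute magnitude of an A9V star (assumption)

def apparent(M_V, dm=DM):
    """Apparent magnitude of a star of absolute magnitude M_V at the SMC."""
    return M_V + dm

m_B2 = apparent(M_V_B2V)   # ~16.5 mag
m_A9 = apparent(M_V_A9V)   # ~21.3 mag
# Both are fainter than the m_V > 15.5 limit quoted for the three
# candidates, consistent with A9--B2 main-sequence stars at the SMC.
```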
\begin{acknowledgements}
We would like to thank D. Dal Fiume, A.N. Parmar and L. Piro for their constant
help and scientific support.
This research has made use of data obtained through the High Energy
Astrophysics Science Archive Research Center (HEASARC), provided by
NASA's Goddard Space Flight Center. This work was par\-tial\-ly sup\-por\-ted
thro\-ugh grants from the Agenzia Spaziale Italiana.
\end{acknowledgements}
\section{Introduction}
The temperature anisotropy analysis of the cosmic microwave background~(CMB) has been successful in the past few decades.
The standard $\Lambda$CDM cosmology explains the observed linear fluctuations very well and demonstrates the validity of the inflationary paradigm.
A central next issue for modern cosmology is the determination of the concrete model of the early universe.
The CMB temperature bispectrum has been intensely investigated in this context, and the Planck satellite provides us tight constraints on the primordial non-Gaussianity~\cite{Ade:2015ava}.
In consideration of the future observations, the second order Boltzmann theory has been developed~\cite{Bartolo:2006cu,Pitrou:2007jy,Pitrou:2008ut,Beneke:2010eg,Naruko:2013aaa} because secondary effects may mimic the primordial non-Gaussianity~\cite{Bartolo:2003bz,Bartolo:2004ty,Bartolo:2011wb,Senatore:2008wk,Khatri:2008kb,Nitta:2009jp,Creminelli:2004pv,Boubekeur:2009uk,Lewis:2012tc,Creminelli:2011sq,Fidler:2014zwa}.
The smallness of the primary anisotropies makes it crucial to estimate the secondary anisotropies, and nowadays several numerical codes have been developed to remove such contaminations~\cite{Pitrou:2010sn,Su:2012gt,Huang:2012ub,Pettinari:2013he,Huang:2013qua,Saito:2014bxa}.
The anisotropies in the $y$ distortions are also discussed in the same context~\cite{Pitrou:2009bc,Renaux-Petel:2013zwa}.
On the other hand, spectral distortions of the CMB are also investigated for probes of
the thermal history of the early universe.
One example is the observed thermal Sunyaev-Zel'dovich (tSZ) effect due to hot gas in the intracluster medium of galaxy groups~\cite{Zeldovich:1969ff,Sunyaev:1970er,Aghanim:2015eva}.
More generally, spectral distortions are powerful tools to investigate energy injection from several non-trivial processes such as dark matter pair annihilation~\cite{Hu:1993gc,McDonald:2000bk,Chluba:2009uv}, evaporation of primordial black holes~\cite{Carr:2009jm} and dissipation of primordial fluctuations~\cite{Sunyaev:1970eu,Hu:1994bz,Chluba:2011hw,1991MNRAS.248...52B,Chluba:2012gq,Chluba:2012we,Khatri:2012rt,Khatri:2012tw,Khatri:2013dha,Clesse:2014pna,Ota:2014hha,Chluba:2014qia}.
Recently, anisotropies of the distortions from Silk damping have also been discussed as a probe of primordial non-Gaussianity~\cite{Pajer:2012vz,Pajer:2013oca,Ganc:2012ae,Ganc:2014wia,Ota:2014iva,Naruko:2015pva,Ota:2016mqd,Emami:2015xqa,Chluba:2016aln}.
The previous analysis is intuitively reasonable but ad hoc, since it is based on the thermodynamics of each local diffusion patch with a window function introduced by hand.
These anisotropies are at second order in the primordial curvature perturbations and can be understood as mode coupling effects in a framework of the second order Boltzmann theory.
In this paper, we give a unified view to the above two issues by explicitly solving the Boltzmann equations for momentum~(frequency) dependent temperature perturbations.
We note that the temperature perturbations acquire a momentum dependence at second order, in contrast to the zeroth- and first-order cases.
This leads to an infinite number of evolution equations, one for each value of the continuous momentum.
As pointed out, for example, in~\cite{Dodelson:1993xz}, one usually integrates over the momentum and defines the brightness perturbations at second order.
This simplification is preferred since the anisotropy experiments do not perform spectroscopy; however, we should keep in mind that a lot of information is hidden in the nontrivial momentum dependence.
We handle the infinite number of d.o.f. coming from the continuous momentum by replacing them with an infinite set of parameters describing spectral distortions~\footnote{A similar manipulation called \textit{moment expansion} was originally proposed in~\cite{Stebbins:2007ve,Pitrou:2014ota}.
The authors also tried to classify the spectral distortions in a systematic way.
In those works, concrete analyses were not performed in the presence of second or higher order collision terms, which are crucial for spectral distortions and are the central topics of this paper.}.
Fortunately, we find that only two parameters are necessary at second order, and the equations are closed.
In addition, we point out the possibility of solving the higher order Boltzmann equations systematically by introducing higher order spectral distortions such as the linear Sunyaev-Zel'dovich~(SZ) effect.
As an example, we construct the next leading order spectral distortions in our method.
Observing such tiny higher order spectral distortions may become feasible in the next few decades.
On the other hand, the method may be useful as a prescription for non-equilibrium physics or for the non-linear evolution of the large scale structure~(LSS), treated as the fluid dynamics of matter with gravitational interactions.
We organize this paper as follows.
In section \ref{sec:2}, we summarize the second and third order Boltzmann collision terms for the photon fluid.
The ansatz and the individual equations are discussed in section \ref{sec:3}.
We solve the equation for the $y$ distortion in section \ref{sec:4} and confirm the validity of the previous phenomenological estimates.
In section \ref{sec:5}, we comment on another spectral distortion called the $\mu$ distortion.
Section \ref{sec:6} is devoted to the extension of the method to higher orders.
The appendices provide several definitions and translations from the previous works.
We then conclude in the final section.
\section{Boltzmann equation}\label{sec:2}
We begin by deriving the higher order Boltzmann collision terms needed to investigate the spectral distortions and higher order temperature anisotropies.
\subsection{Set up}
Let $f$ and $g$ be the distribution functions for photon and electron, respectively.
Ignoring the Pauli blocking factors, the Boltzmann collision term is given as
\begin{align}
\mathcal C[f]=&\frac{1}{16\pi }\int
\frac{d\mathbf n'}{4\pi}d{\tilde p}'\int \frac{d^3\tilde q}{(2\pi)^3}\frac{1}{4}\sum_{\rm spins}|\mathcal M|^2\notag \\
&\times \frac{{\tilde p}'}{{\tilde p}}\frac 1{E(\mathbf {\tilde q})E(\mathbf {\tilde p}+\mathbf {\tilde q}-\mathbf {\tilde p}')}\delta[{\tilde p}+E(\mathbf {\tilde q})-{\tilde p}'-E(\mathbf {\tilde p}+\mathbf {\tilde q}-\mathbf {\tilde p}')]\notag \\
&\times \left\{g(\mathbf {\tilde q}')f(\mathbf {\tilde p}')[1+f(\mathbf {\tilde p})]-g(\mathbf {\tilde q})f(\mathbf {\tilde p})[1+f(\mathbf
{\tilde p}')]\right\},\label{Col:expand}
\end{align}
where tildes denote physical momenta, which are distinguished from the comoving momenta written without tildes~(e.g. $p(1+z)=\tilde p$ with the redshift $z$).
$\mathcal M$ is the invariant scattering amplitude and $\mathbf n'\equiv \mathbf p'/|\mathbf p'|$.
We shall now expand each part of (\ref{Col:expand}) by introducing two parameters: $\epsilon\sim \mathcal O(q/m_{\rm e})$ and $\eta=\mathcal O(|\mathbf {\tilde p}-\mathbf {\tilde p}'|/m_{\rm e})$~\footnote{We should note that there are two types of hierarchies: the inhomogeneity and the electron energy transfer.
The former is directly related to the primordial quantum fluctuations, and we usually consider that the magnitude is the order of $10^{-5}$ at first order if we assume their scale invariance.
In this paper, we use the terminologies ``first order'' and ``second order'' in terms of the inhomogeneity.
}.
We consider that $\epsilon$ is of the order of the electron bulk velocity $|\mathbf v|=\mathcal O(10^{-5})$ or of the thermal motion $\sqrt{T_{\rm e}/m_{\rm e}}$. The relation between the two parameters is given as $\epsilon^2 \sim \eta\ll 1$, since the photon occupation number is peaked at ${\tilde p}\sim T_\gamma$, and $T_\gamma\sim T_{\rm e}$ holds if we assume that the system is in kinetic equilibrium~\footnote{In the hot electron gas of the late universe, $\epsilon^2\gg \eta$ is possible.}.
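As a numerical illustration of this hierarchy, one can check that $\epsilon^2=\eta$ when $|\mathbf{\tilde p}-\mathbf{\tilde p}'|\sim \tilde p\sim T$; the temperature used below is an assumed illustrative value (roughly the recombination era), not taken from the text.

```python
import math

# Illustration of the hierarchy eps^2 ~ eta << 1 for a photon-electron
# plasma with T_gamma ~ T_e.  The temperature is an illustrative
# assumption, not a value quoted in the text.
m_e = 511e3                    # electron mass in eV
T = 0.3                        # common temperature in eV (assumption)
eps = math.sqrt(T / m_e)       # thermal velocity parameter, ~8e-4
eta = T / m_e                  # energy transfer for |p - p'| ~ p ~ T
# eps**2 equals eta by construction in this regime, and both are << 1
```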
Let us now summarize the calculations below.\\
\quad
\textit{The Scattering amplitude---.}
Let us denote the initial state 4-momenta with primes.
The scattering amplitude is then written as~\cite{Peskin:1995ev}
\begin{align}
\frac{1}{4}\sum_{\rm spins}|\mathcal M|^2=2{\rm e}^4\left[\frac{{\tilde q}\cdot {\tilde p}'}{{\tilde q}\cdot {\tilde p}}+\frac{{\tilde q}\cdot {\tilde p}}{{\tilde q}\cdot {\tilde p}'}-2m_{\rm e}^2\left(\frac{1}{{\tilde q}\cdot {\tilde p}}-\frac{1}{{\tilde q}\cdot {\tilde p}'}\right)+m_{\rm e}^4\left(\frac{1}{{\tilde q}\cdot {\tilde p}}-\frac{1}{{\tilde q}\cdot {\tilde p}'}\right)^2\right],\label{KN:first}
\end{align}
where we have used
\begin{align}
{\tilde p}\cdot {\tilde q}&={\tilde p}'\cdot {\tilde q}',\\
{\tilde p}\cdot {\tilde q}'&={\tilde p}'\cdot {\tilde q},
\end{align}
which are obtained with 4-momentum conservation ${\tilde p}+{\tilde q}={\tilde p}'+{\tilde q}'$.
Let us choose the frame to satisfy ${\tilde q}_\mu=(-\sqrt{m^2_{\rm e}+\mathbf {\tilde q}^2},\mathbf {\tilde q})$.
Using the above, we can expand (\ref{KN:first}) as follows:
\begin{align}
\frac{1}{4}\sum_{\rm spins}|\mathcal M|^2=|\mathcal M_0|^2+|\mathcal M_\epsilon |^2+|\mathcal M_{\eta^2}|^2 + |\mathcal M_{\epsilon^2}|^2+ |\mathcal M_{\epsilon^3}|^2,\label{KN:expand}
\end{align}
where we have defined
\begin{align}
\frac{|\mathcal M_0|^2}{2{\rm e}^4} &=1+\lambda^2, \\
\frac{|\mathcal M_\epsilon |^2}{2{\rm e}^4} &=2\lambda (\lambda-1)\left[\frac{\mathbf n\cdot \mathbf {\tilde q}}{m_{\rm e}}+\frac{\mathbf n'\cdot \mathbf {\tilde q}}{m_{\rm e}}\right], \\
\frac{|\mathcal M_{\eta^2}|^2}{2{\rm e}^4} &=(\lambda-1)^2\frac{{\tilde p}^2}{{m^2_{\rm e}}},\\
\frac{|\mathcal M_{\epsilon^2}|^2}{2{\rm e}^4} &=2\lambda(\lambda-1)\frac{{\tilde q}^2}{{m^2_{\rm e}}}+
(\lambda -1) (3 \lambda -1)
\left[\frac{(\mathbf n'\cdot \mathbf {\tilde q})^2}{{m^2_{\rm e}}}+ \frac{(\mathbf n\cdot \mathbf {\tilde q})^2}{{m^2_{\rm e}}}\right] \notag \\
&+2 (\lambda -1) (2 \lambda -1) \frac{(\mathbf n\cdot \mathbf {\tilde q})(\mathbf n'\cdot \mathbf {\tilde q})}{{m^2_{\rm e}}}.
\end{align}
\textit{The energy product---.}
On the other hand, the energy product part can be reduced as follows:
\begin{align}
\frac{{\tilde p}'}{{\tilde p}}\frac{1}{E(\mathbf {\tilde q})E(\mathbf {\tilde q}')}
= \frac{{\tilde p}'}{
{\tilde p}m_{\rm e}^2}\left(1-\mathcal E_{\epsilon^2}-\mathcal E_{\epsilon \eta}-\mathcal E_{\eta^2} \right) \label{En:expand},
\end{align}
where each part is written as
\begin{align}
\mathcal E_{\epsilon^2}&=\frac{{\tilde q}^2}{{m^2_{\rm e}}},\\
\mathcal E_{\epsilon \eta }&=\frac{(\mathbf {\tilde p}-\mathbf {\tilde p}')\cdot \mathbf {\tilde q}}{{m^2_{\rm e}}},\\
\mathcal E_{\eta^2}&=\frac{(\mathbf {\tilde p}-\mathbf {\tilde p}')^2}{{m^2_{\rm e}}}.
\end{align}
\textit{The delta function---.}
The delta functions are Taylor expanded around the photon energy difference; namely, we have
\begin{align}
&\delta({\tilde p}-{\tilde p}'+E(\mathbf {\tilde q})-E(\mathbf {\tilde q}'))\notag \\
&=\delta({\tilde p}-{\tilde p}')+\left(\mathcal D_{\epsilon}+\mathcal D_{\epsilon^3}+\mathcal D_{\eta}\right)\frac{\partial \delta({\tilde p}-{\tilde p}')}{\partial {\tilde p}'}\notag \\
&+\frac12\left(\mathcal D_{\epsilon}+\mathcal D_{\eta}\right)^2\frac{\partial^2 \delta({\tilde p}-{\tilde p}')}{\partial {\tilde p}'^2}+\frac{1}{3!}\mathcal D^3_{\epsilon}\frac{\partial^3 \delta({\tilde p}-{\tilde p}')}{\partial {\tilde p}'^3}\label{delta:expand},
\end{align}
where we have introduced
\begin{align}
\mathcal D_\epsilon &=\frac{(\mathbf {\tilde p}-\mathbf {\tilde p}')\cdot \mathbf {\tilde q}}{m_{\rm e}},\\
\mathcal D_{\epsilon^3} &=-\frac{{\tilde q}^2(\mathbf {\tilde p}-\mathbf {\tilde p}')\cdot \mathbf {\tilde q}}{2m_{\rm e}^3},\\
\mathcal D_\eta &=\frac{(\mathbf {\tilde p}-\mathbf {\tilde p}')^2}{m_{\rm e}}.
\end{align}
\textit{Product of distribution functions---.}
Expanding $g(\mathbf {\tilde q}')$ around $\mathbf {\tilde q}$, the product of the distribution functions can be written as
\begin{align}
\mathcal F(\mathbf {\tilde p}',\mathbf {\tilde p})&=f(\mathbf {\tilde p}')[1+f(\mathbf {\tilde p})]-\frac{g(\mathbf {\tilde q})}{g(\mathbf {\tilde q}')}f(\mathbf {\tilde p})[1+f(\mathbf
{\tilde p}')]\notag \\
&=\mathcal F_1(\mathbf {\tilde p}',\mathbf {\tilde p})+\left(
\alpha_{\eta/\epsilon}+\alpha_{\eta^2/\epsilon^2}
\right)\mathcal F_3(\mathbf {\tilde p}',\mathbf {\tilde p})\label{dist:expand},
\end{align}
where $\mathcal F_1$, $\mathcal F_3$ and $\alpha$'s are defined as
\begin{align}
\alpha_{\eta/\epsilon}&=-\frac{(\mathbf {\tilde q}-m_{\rm e} \mathbf v)\cdot (\mathbf {\tilde p}-\mathbf {\tilde p}')}{m_{\rm e}T_{\rm e}},\\
\alpha_{\eta^2/\epsilon^2}&=-\frac{(\mathbf {\tilde p}-\mathbf {\tilde p}')^2}{2m_{\rm e}T_{\rm e}},\\
\mathcal F_1(\mathbf {\tilde p}',\mathbf {\tilde p})&=f(\mathbf {\tilde p}')-f(\mathbf {\tilde p}),\\
\mathcal F_3(\mathbf {\tilde p}',\mathbf {\tilde p})&=f(\mathbf {\tilde p}')\left[1+f(\mathbf {\tilde p})\right].
\end{align}
\textit{Electron momentum integrals---.}
Let us write the $\mathbf {\tilde q}$ integral as follows:
\begin{align}
\langle \cdots \rangle&\equiv \int\frac{d^3 {\tilde q}}{(2\pi)^3}\cdots g(\mathbf {\tilde q}).
\end{align}
The integrals of powers of the momentum then reduce to
\begin{align}
\begin{split}
\langle {\tilde q}_i\rangle&=n_{\rm e}m_{\rm e}v_i,\\
\langle {\tilde q}_i{\tilde q}_j\rangle&=n_{\rm e}m_{\rm e} T_{\rm e} \delta_{ij}+n_{\rm e}m^2_{\rm e}v_iv_j,\\
\langle {\tilde q}_i{\tilde q}_j{\tilde q}_k\rangle&=n_{\rm e}m_{\rm e}\left(v_i\langle ({\tilde q}_j-m_{\rm e} v_j)({\tilde q}_k-m_{\rm e}v_k)\rangle+2{\rm perms.}\right)+n_{\rm e}m^3_{\rm e}v_iv_jv_k \\
&=n_{\rm e}{m^2_{\rm e}}T_{\rm e}(v_i \delta_{jk}+2{\rm perms.})+n_{\rm e}m^3_{\rm e}v_iv_jv_k.\label{electron:int}
\end{split}
\end{align}\\
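The Gaussian moments in (\ref{electron:int}) can be verified numerically. The following sketch draws non-relativistic electron momenta from a drifting Maxwell--Boltzmann distribution (the mass, temperature and bulk velocity values are arbitrary illustrations) and compares the sample moments with the quoted expressions, here normalized to $n_{\rm e}=1$.

```python
import numpy as np

# Monte Carlo check of the electron momentum moments: for a drifting
# Maxwell-Boltzmann distribution, <q_i q_j> should equal
# m_e T_e delta_ij + m_e^2 v_i v_j (with n_e = 1).  The numerical values
# of m_e, T_e and v are arbitrary illustrations, not from the text.
rng = np.random.default_rng(0)
m_e, T_e = 1.0, 1e-3                  # mass and temperature (arbitrary units)
v = np.array([1e-2, 0.0, 0.0])        # bulk velocity (assumption)

# Non-relativistic electrons: q_i ~ Normal(mean = m_e v_i, var = m_e T_e)
q = rng.normal(loc=m_e * v, scale=np.sqrt(m_e * T_e), size=(2_000_000, 3))

first = q.mean(axis=0)                # <q_i>     -> m_e v_i
second = q.T @ q / len(q)             # <q_i q_j> -> m_e T_e d_ij + m_e^2 v_i v_j
expected = m_e * T_e * np.eye(3) + m_e**2 * np.outer(v, v)
```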
\subsection{Expansions in terms of $\epsilon$ and $\eta$}
Now we are ready to write the collision terms with respect to $\epsilon$ and $\eta$:
\begin{align}
\mathcal C[f]=\mathcal C_{0,0}[f]+\mathcal C_{\epsilon,0}[f]+\mathcal C_{\epsilon^2,0}[f]+\mathcal C_{\epsilon^3,0}[f]+\mathcal C_{0,\eta}[f]+\mathcal C_{\epsilon,\eta}[f].
\end{align}
We shall now substitute (\ref{KN:expand}), (\ref{En:expand}), (\ref{delta:expand}) and (\ref{dist:expand}) into (\ref{Col:expand}).
Then, we integrate each momentum by using (\ref{electron:int}) and substitute ${\tilde p}=p(1+z)$.
For simplification of the notation, we introduce the following three angular parameters only for this subsection: $\lambda = \mathbf n\cdot \mathbf n'$, $\mu = \mathbf {\hat v}\cdot \mathbf n$ and $\mu' = \mathbf {\hat v}\cdot \mathbf n'$.
After tremendous but straightforward calculations, each term is obtained as follows:
\begin{align}
\mathcal C_{0,0}[f]=&\frac{3}{4} \left(\lambda ^2+1\right) \mathcal{F}_1(p\mathbf n',p\mathbf n)
\\
\mathcal C_{\epsilon,0}[f]=&\frac{3}{4} \Bigg[\left\{\left(4 \lambda ^2-2 \lambda +2\right) \mu '+\left(\lambda ^2-2 \lambda -1\right) \mu \right\} \mathcal{F}_1(p\mathbf n',p\mathbf n)\notag \\
&\left.
-\left(\lambda ^2+1\right) \left(\mu -\mu '\right) p\frac{\partial \mathcal{F}_1(p'\mathbf n',p\mathbf n)}{\partial p'}\bigg|_{p'=p}\right]
\\
\begin{split}
\mathcal C_{\epsilon^2,0}[f]
=&\frac{T_{\rm e}}{m_{\rm e}} \left[-\frac{3}{4}\left(\lambda ^3-\lambda ^2+\lambda -1\right) p^2 \frac{\partial^2 \mathcal{F}_1(p'\mathbf n',p\mathbf n)}{\partial p'^2}\bigg|_{p'=p}
\right.\\
&\left.-3 \left(\lambda ^3-\lambda ^2+\lambda -1\right) p \frac{\partial \mathcal{F}_1(p'\mathbf n',p\mathbf n)}{\partial p'}\bigg|_{p'=p}
\right.\\
&\left.+\frac{3}{2} \left(2 \lambda ^3-3 \lambda ^2-2 \lambda +1\right) \mathcal{F}_1(p\mathbf n',p\mathbf n)\right]
\\
&+
v^2 \bigg[\frac{3}{8} \left(\lambda ^2+1\right) \mu^2 p^2 \frac{\partial^2 \mathcal{F}_1(p'\mathbf n',p\mathbf n)}{\partial p'^2}\bigg|_{p'=p}+\frac{3}{8} \left(\lambda ^2+1\right) \mu '{}^2 p^2 \frac{\partial^2 \mathcal{F}_1(p'\mathbf n',p\mathbf n)}{\partial p'^2}\bigg|_{p'=p}
\\
&
-\frac{3}{4} \left(\lambda ^2+1\right) \mu \mu ' p^2\frac{\partial^2 \mathcal{F}_1(p'\mathbf n',p\mathbf n)}{\partial p'^2}\bigg|_{p'=p}-\frac{3}{4} \left(\lambda ^2-2 \lambda -1\right) \mu ^2 p \frac{\partial \mathcal{F}_1(p'\mathbf n',p\mathbf n)}{\partial p'}\bigg|_{p'=p}
\\
&
-\frac{3}{4} \left(-5 \lambda ^2+2 \lambda -3\right) \mu '{}^2 p\frac{\partial \mathcal{F}_1(p'\mathbf n',p\mathbf n)}{\partial p'}\bigg|_{p'=p}-3 \left(\lambda ^2+1\right) \mu \mu ' p \frac{\partial \mathcal{F}_1(p'\mathbf n',p\mathbf n)}{\partial p'}\bigg|_{p'=p}
\\
&
+\frac{3}{4} \left[\lambda ^2 \left(\mu ^2-3\right)-2 \lambda \left(\mu ^2-1\right)+\mu ^2-1\right] \mathcal{F}_1(p\mathbf n',p\mathbf n)
\\
&
+\frac{3}{2} \left(5 \lambda ^2-4 \lambda +2\right) \mu '{}^2 \mathcal{F}_1(p\mathbf n',p\mathbf n)+3 (\lambda -2) \lambda \mu \mu ' \mathcal{F}_1(p\mathbf n',p\mathbf n)\bigg]
\end{split}
\end{align}
\begin{align}
\begin{split}
&\mathcal C_{\epsilon^3,0}[f]\\
=&\frac{T_{\rm e}}{m_{\rm e}} v \Bigg[\frac{3}{4} \left(-11 \lambda ^3+13 \lambda ^2-11 \lambda +9\right) \mu ' p^2\frac{\partial^2 \mathcal{F}_1(p'\mathbf n',p\mathbf n)}{\partial p'{}^2}\bigg|_{p'=p}\\
&+\frac{3}{2} \left(2 \lambda ^3-\lambda ^2+2 \lambda -3\right) \mu p^2 \frac{\partial^2 \mathcal{F}_1(p'\mathbf n',p\mathbf n)}{\partial p'{}^2}\bigg|_{p'=p}
\\
&+\frac{3}{4} (\lambda -1) \left(\lambda ^2+1\right) \left(\mu -\mu '\right) p^3\frac{\partial^3 \mathcal{F}_1(p'\mathbf n',p\mathbf n)}{\partial p'{}^3}\bigg|_{p'=p}
\\
&
-\frac{3}{8} \left(40 \lambda ^3-47 \lambda ^2+56 \lambda -31\right) \mu 'p \frac{\partial \mathcal{F}_1(p'\mathbf n',p\mathbf n)}{\partial p'}\bigg|_{p'=p}\\
&-\frac{3}{8} \left(16 \lambda ^3-41 \lambda ^2+7\right) \mu p \frac{\partial \mathcal{F}_1(p'\mathbf n',p\mathbf n)}{\partial p'}\bigg|_{p'=p}
+\frac{3}{4} \left(20 \lambda ^3-46 \lambda ^2+5 \lambda +3\right) \mu ' \mathcal{F}_1(p\mathbf n',p\mathbf n)\\
&+\frac{3}{8} \left(16 \lambda ^3-41 \lambda ^2+34 \lambda +9\right) \mu \mathcal{F}_1(p\mathbf n',p\mathbf n)\Bigg]\\
&+v^3\Bigg[
\frac{1}{8} \left(-\lambda ^2-1\right) p^3 \left(\mu -\mu '\right) \mu '{}^2 \frac{\partial^3 \mathcal{F}_1(p'\mathbf n',p\mathbf n)}{\partial p'{}^3}\bigg|_{p'=p}
+\frac{1}{4} \left(\lambda ^2+1\right) \mu p^3 \left(\mu -\mu '\right) \mu ' \frac{\partial^3 \mathcal{F}_1(p'\mathbf n',p\mathbf n)}{\partial p'{}^3}\bigg|_{p'=p}\\
&+\frac{1}{8} \left(-\lambda ^2-1\right) \mu ^2 p^3 \left(\mu -\mu '\right) \frac{\partial^3 \mathcal{F}_1(p'\mathbf n',p\mathbf n)}{\partial p'{}^3}\bigg|_{p'=p}
+\frac{3}{8} \left(\lambda ^2-2 \lambda -1\right) \mu ^3 p^2 \frac{\partial^2 \mathcal{F}_1(p'\mathbf n',p\mathbf n)}{\partial p'{}^2}\bigg|_{p'=p}\\
&
+\frac{3}{4} \left(3 \lambda ^2-\lambda +2\right) p^2 \mu '{}^3 \frac{\partial^2 \mathcal{F}_1(p'\mathbf n',p\mathbf n)}{\partial p'{}^2}\bigg|_{p'=p}
+\frac{3}{8} \left(-11 \lambda ^2+2 \lambda -9\right) \mu p^2 \mu '{}^2 \frac{\partial^2 \mathcal{F}_1(p'\mathbf n',p\mathbf n)}{\partial p'{}^2}\bigg|_{p'=p}\\
&
+\frac{3}{4} \left(2 \lambda ^2+\lambda +3\right) \mu ^2 p^2 \mu ' \frac{\partial^2 \mathcal{F}_1(p'\mathbf n',p\mathbf n)}{\partial p'{}^2}\bigg|_{p'=p}\\
&
-\frac{3}{8} \mu p \left(\lambda ^2 \left(2 \mu ^2-7\right)-4 \lambda \left(\mu ^2-1\right)+2 \mu ^2-3\right) \frac{\partial \mathcal{F}_1(p'\mathbf n',p\mathbf n)}{\partial p'}\bigg|_{p'=p}\\
&
+\frac{3}{4} \left(15 \lambda ^2-10 \lambda +7\right) p \mu '{}^3 \frac{\partial \mathcal{F}_1(p'\mathbf n',p\mathbf n)}{\partial p'}\bigg|_{p'=p}-\frac{3}{2} \left(5 \lambda ^2+4\right) \mu p \mu '{}^2 \frac{\partial \mathcal{F}_1(p'\mathbf n',p\mathbf n)}{\partial p'}\bigg|_{p'=p}\\
&
-\frac{3}{8} p \left(\lambda ^2 \left(8 \mu ^2+7\right)-4 \lambda \left(4 \mu ^2+1\right)-4 \mu ^2+3\right) \mu ' \frac{\partial \mathcal{F}_1(p'\mathbf n',p\mathbf n)}{\partial p'}\bigg|_{p'=p}\\
&
+\frac{3}{8} \mu \left(\lambda ^2 \left(2 \mu ^2-7\right)+\lambda \left(14-4 \mu ^2\right)+2 \mu ^2-1\right) \mathcal{F}_1(p\mathbf n',p\mathbf n)\\
&
+3 \left(5 \lambda ^2-5 \lambda +2\right) \mu '{}^3 \mathcal{F}_1(p\mathbf n',p\mathbf n)+\frac{3}{2} \left(5 \lambda ^2-10 \lambda +2\right) \mu \mu '{}^2 \mathcal{F}_1(p\mathbf n',p\mathbf n)\\
&
+\frac{3}{4} \left(2 \lambda ^2 \left(2 \mu ^2-7\right)+\lambda \left(13-8 \mu ^2\right)+4 \mu ^2-5\right) \mu ' \mathcal{F}_1(p\mathbf n',p\mathbf n)
\Bigg]
\end{split}
\end{align}
\begin{align}
\begin{split}
\mathcal C_{0,\eta}[f]
=&\frac{3p(1+z)}{4m_{\rm e}} \left(\lambda ^3-\lambda ^2+\lambda -1\right) \left[p \frac{\partial \mathcal{F}_1(p'\mathbf n',p\mathbf n)}{\partial p'}\bigg|_{p'=p}-2p \frac{\partial \mathcal{F}_3(p'\mathbf n',p\mathbf n)}{\partial p'{}}\bigg|_{p'=p}\right]\\
&+\frac{3p(1+z)}{4 m_{\rm e}} \left(\lambda ^3-\lambda ^2+\lambda -1\right) \left[2 \mathcal{F}_1(p\mathbf n',p\mathbf n)-4 \mathcal{F}_3(p\mathbf n',p\mathbf n)\right]
\end{split}
\\
\begin{split}
\mathcal C_{\epsilon,\eta}[f]
=&v\frac{p(1+z)}{m_{\rm e}} \Bigg[\frac{3}{4} (\lambda -1) \left(-\lambda ^2-1\right) \left(\mu -\mu '\right) \left[p^2\frac{\partial^2 \mathcal{F}_1(p'\mathbf n',p\mathbf n)}{\partial p'{}^2}\bigg|_{p'=p}-2 p^2\frac{\partial^2 \mathcal{F}_3(p'\mathbf n',p\mathbf n)}{\partial p'{}^2}\bigg|_{p'=p}\right]\\
&+\frac{3}{4} (\lambda -1) \left[4 \left\{\left(-4 \lambda ^2+\lambda -3\right) \mu '+\left(\lambda ^2+\lambda +2\right) \mu \right\}p \frac{\partial \mathcal{F}_3(p'\mathbf n',p\mathbf n)}{\partial p'{}}\bigg|_{p'=p}
\right.\\
&\left.
-2 \left\{\left(-4 \lambda ^2+\lambda -3\right) \mu '+\left(\lambda ^2+\lambda +2\right) \mu \right\} p\frac{\partial \mathcal{F}_1(p'\mathbf n',p\mathbf n)}{\partial p'}\bigg|_{p'=p}\right]\\
&+\frac{3}{4} (\lambda -1) \left[2 \left\{\left(5 \lambda ^2-2 \lambda +3\right) \mu '+\left(\lambda ^2-2 \lambda -1\right) \mu \right\} \mathcal{F}_1(p\mathbf n',p\mathbf n)\right.\\
&\left.-4 \left\{\left(5 \lambda ^2-2 \lambda +3\right) \mu '+\left(\lambda ^2-2 \lambda -1\right) \mu \right\} \mathcal{F}_3(p\mathbf n',p\mathbf n)\right]\Bigg]
\end{split}
\end{align}
\subsection{Perturbative expansions}
Now we integrate these terms with respect to $\mathbf n'$ and take the Legendre coefficients.
(\ref{appen_legen_gattai}) is useful for converting $(\mathbf n\cdot \mathbf n')$ into combinations of $(\mathbf {\hat v}\cdot \mathbf n')$ and $(\mathbf {\hat v}\cdot \mathbf n)$. In this section, we ignore the vector and tensor sectors for simplicity and rewrite the physical momenta in terms of the comoving momenta.
Here we consider the following perturbative expansion:
\begin{align}
f=\sum_{n=0}^3f^{(n)},
\end{align}
and replace the bulk velocity as $\mathbf v\to \mathbf v^{(1)}+\mathbf v^{(2)}+\mathbf v^{(3)}$.\\
\textit{Zeroth and first order---.}
We immediately find the zeroth and first order collision terms without polarization in the Thomson scattering limit:
\begin{align}
(n_{\rm e}\sigma_{\rm T}a)^{-1}\mathcal C^{(0)}_{\rm T}[f]&=0,\label{col:zero}\\
(n_{\rm e}\sigma_{\rm T}a)^{-1}\mathcal C^{(1)}_{\rm T}[f]&=f^{(1)}_0-f^{(1)}(\mathbf p)-\frac{1}{2}f^{(1)}_2P_2 - p\frac{\partial f^{(0)}}{\partial p}(\mathbf v\cdot \mathbf n).\label{col:first}
\end{align}
\newpage
\textit{Second order---.}
The next-to-leading order has two parts.
The Thomson terms, which do not involve momentum transfer, become
\begin{align}
&(n_{\rm e}\sigma_{\rm T}a)^{-1}\mathcal C^{(2)}_{\rm T}[f]\notag\\
=&f^{(2)}_0-f^{(2)}(\mathbf p)-\frac{1}{2} f^{(2)}_2P_2 -p\frac{\partial f^{(0)}}{\partial p}(\mathbf v^{(2)}\cdot \mathbf n) \notag \\
&+v \left[(\mathbf{\hat v}\cdot \mathbf n) \left(f^{(1)}(\mathbf p)-p \frac{\partial f^{(1)}_0}{\partial p}+\frac{1}{5} p \frac{\partial f^{(1)}_2}{\partial p}-f^{(1)}_0+\frac{4}{5} f^{(1)}_2\right)+P_3 \left(\frac{3}{10} p \frac{\partial f^{(1)}_2}{\partial p}-\frac{3}{10} f^{(1)}_2\right)
\right.\notag \\
&\left.
+iP_2 \left(-\frac{1}{5} p \frac{\partial f^{(1)}_1}{\partial p}+\frac{3}{10} p \frac{\partial f^{(1)}_3}{\partial p}+\frac{1}{5} f^{(1)}_1+\frac{6}{5} f^{(1)}_3\right)-i p \frac{\partial f^{(1)}_1}{\partial p}-2 i f^{(1)}_1\right]\notag \\
&
+v^2 \left[P_2 \left(\frac{11}{30} p^2 \frac{\partial^2 f^{(0)}}{\partial p^2}+\frac{2}{3} p \frac{\partial f^{(0)}}{\partial p}\right)+\frac{1}{3} p^2 \frac{\partial^2 f^{(0)}}{\partial p^2}+\frac{4}{3} p \frac{\partial f^{(0)}}{\partial p}\right]\label{trans_sec_col}
\end{align}
On the other hand, the Kompaneets terms, whose momentum transfer is of second order in magnitude, can be written as follows.
\begin{align}
&(n_{\rm e}\sigma_{\rm T}a)^{-1}\mathcal C^{(0)}_{\rm K}[f]=\notag \\
&\frac{ T_{\rm e}}{m_{\rm e}}\left( p^2\frac{\partial^2 f^{(0)}}{\partial p^2}+4 p\frac{\partial f^{(0)}}{\partial p}\right)
+\frac{p(1+z)}{m_{\rm e}}\left(2 f^{(0)}p\frac{\partial f^{(0)}}{\partial p}+p\frac{\partial f^{(0)}}{\partial p}+4f^{(0)}{}^2+4f^{(0)}\right).\label{CK:0}
\end{align}
These terms are additional contributions to the averaged part of (\ref{trans_sec_col}).
\newpage
\textit{Cubic order---.}
The cubic order Thomson terms are obtained as follows:
\begin{align}
&(n_{\rm e}\sigma_{\rm T}a)^{-1}\mathcal C^{(3)}_{\rm T}[f]\notag \\
=&-f^{(3)}(\mathbf p)+f^{(3)}_0-\frac{1}{2} P_2 f^{(3)}_2-p\frac{\partial f^{(0)}}{\partial p}(\mathbf v^{(3)}\cdot \mathbf n)
\notag\\
&+\left[(\mathbf{\hat v}\cdot \mathbf n) \left(-\frac{7}{25} p^3 \frac{\partial^3 f^{(0)}}{\partial p^3}-\frac{47}{25} p^2\frac{\partial^2 f^{(0)}}{\partial p^2}-\frac{3}{2} p\frac{\partial f^{(0)}}{\partial p}\right)\right.\notag \\
&\left. +P_3 \left(-\frac{13}{150} p^3 \frac{\partial^3 f^{(0)}}{\partial p^3}-\frac{11}{50} p^2\frac{\partial^2 f^{(0)}}{\partial p^2}\right)\right] v^3
\notag \\
&+\left[\frac{1}{3} p^2\frac{\partial^2 f^{(1)}_0}{\partial p^2} -\frac{11}{30} p^2 \frac{\partial^2 f^{(1)}_2}{\partial p^2} +\frac{4}{3} p \frac{\partial f^{(1)}_0}{\partial p}-\frac{34}{15} p \frac{\partial f^{(1)}_2}{\partial p}-\frac{12}{5}f^{(1)}_2\right.\notag \\
&\left.
+P_4 \left(-\frac{3}{35} p^2 \frac{\partial^2 f^{(1)}_2}{\partial p^2} +\frac{6}{35} p \frac{\partial f^{(1)}_2}{\partial p}-\frac{6}{35}f^{(1)}_2\right)\right. \notag \\
&\left.
+(\mathbf{\hat v}\cdot \mathbf n) \left(\frac{27}{25} i p^2 \frac{\partial^2 f^{(1)}_1}{\partial p^2}-\frac{3}{25} i p^2 \frac{\partial^2 f^{(1)}_3}{\partial p^2}+\frac{108}{25} i p\frac{\partial f^{(1)}_1}{\partial p}-\frac{27}{25} i p\frac{\partial f^{(1)}_3}{\partial p}+\frac{42}{25} i f^{(1)}_1-\frac{48}{25} i f^{(1)}_3\right)\right.\notag \\
&\left.
+P_3 \left(\frac{3}{25} i p^2 \frac{\partial^2 f^{(1)}_1}{\partial p^2}-\frac{9}{50} i p^2 \frac{\partial^2 f^{(1)}_3}{\partial p^2}-\frac{3}{25} i p\frac{\partial f^{(1)}_1}{\partial p}-\frac{18}{25} i p\frac{\partial f^{(1)}_3}{\partial p}+\frac{3}{25} i f^{(1)}_1+\frac{18}{25} i f^{(1)}_3\right)\right. \notag \\
&
\left.
+P_2 \left(\frac{11}{30} p^2\frac{\partial^2 f^{(1)}_0}{\partial p^2} -\frac{11}{42} p^2 \frac{\partial^2 f^{(1)}_2}{\partial p^2} +\frac{3}{35} p^2 \frac{\partial^2 f^{(1)}_4}{\partial p^2}+\frac{2}{3} p \frac{\partial f^{(1)}_0}{\partial p}\right.\right.\notag \\
&\left.\left.
-\frac{22}{21} p \frac{\partial f^{(1)}_2}{\partial p}+\frac{6}{7} p \frac{\partial f^{(1)}_4}{\partial p}+\frac{9 f^{(1)}_2}{7}+\frac{12 f^{(1)}_4}{7}\right)\right] v^2\notag\\
&+\left[-2 i f^{(2)}_1-i p \frac{\partial f^{(2)}_1}{\partial p}+(\mathbf{\hat v}\cdot \mathbf n) \left(f^{(2)}(\mathbf p)-f^{(2)}_0+\frac{4 f^{(2)}_2}{5}-p \frac{\partial f^{(2)}_0}{\partial p}+\frac{1}{5} p \frac{\partial f^{(2)}_2}{\partial p}\right)\right.\notag\\
&
+\left.P_3 \left(\frac{3}{10} p \frac{\partial f^{(2)}_2}{\partial p}-\frac{3 f^{(2)}_2}{10}\right)+P_2 \left(\frac{1}{5} i f^{(2)}_1+\frac{6}{5} i f^{(2)}_3-\frac{1}{5} i p \frac{\partial f^{(2)}_1}{\partial p}+\frac{3}{10} i p \frac{\partial f^{(2)}_3}{\partial p}\right)\right] v\notag \\
&
+v^{(2)} \left[-2 i f^{(1)}_1-i p \frac{\partial f^{(1)}_1}{\partial p}+(\mathbf{\hat v}\cdot \mathbf n) \left(f^{(1)}(\mathbf p)-f^{(1)}_0+\frac{4 f^{(1)}_2}{5}-p \frac{\partial f^{(1)}_0}{\partial p}+\frac{1}{5} p \frac{\partial f^{(1)}_2}{\partial p}\right)
\right.\notag \\
&
\left.
+P_3 \left(\frac{3}{10} p \frac{\partial f^{(1)}_2}{\partial p}-\frac{3 f^{(1)}_2}{10}\right)+P_2 \left(\frac{1}{5} i f^{(1)}_1+\frac{6}{5} i f^{(1)}_3-\frac{1}{5} i p \frac{\partial f^{(1)}_1}{\partial p}+\frac{3}{10} i p \frac{\partial f^{(1)}_3}{\partial p}\right)
\right.\notag \\
&\left.
+v \left(\frac{2}{3} p^2\frac{\partial^2 f^{(0)}}{\partial p^2}+\frac{8}{3} p\frac{\partial f^{(0)}}{\partial p}+P_2 \left(\frac{11}{15} p^2\frac{\partial^2 f^{(0)}}{\partial p^2}+\frac{4}{3} p\frac{\partial f^{(0)}}{\partial p}\right)\right)\right].\label{col:cubic_thomson}
\end{align}
On the other hand, the next-to-leading-order Kompaneets terms are given as~\cite{Chluba:2012gq}
\begin{align}
&(n_{\rm e}\sigma_{\rm T}a)^{-1}\mathcal C^{(1)}_{\rm K}[f]\notag \\
&=\frac{p(1+z)}{m_{\rm e}} \left[2p \frac{\partial f^{(0)}}{\partial p} f^{(1)}(\mathbf p)+4 f^{(0)} f^{(1)}(\mathbf p)+2 f^{(1)}(\mathbf p)+4 f^{(0)} f^{(1)}_0 +p \frac{\partial f^{(1)}_0}{\partial p}+2 f^{(1)}_0+2 f^{(0)} p\frac{\partial f^{(1)}_0}{\partial p}\right.
\notag\\
&\left.
+(\mathbf{\hat v}\cdot \mathbf n)\left(\frac{24}{5} i f^{(0)} f^{(1)}_1+\frac{12}{5} i f^{(1)}_1+\frac{12i}{5} f^{(0)} p\frac{\partial f^{(1)}_1}{\partial p}+\frac{6 i}{5} p \frac{\partial f^{(1)}_1}{\partial p}\right)\right.\notag \\
&
\left.
+(\mathbf v\cdot \mathbf n) \left(-8 f^{(0)}{}^2-8 f^{(0)}-\frac{7}{5} p^2\frac{\partial^2 f^{(0)}}{\partial p^2}-\frac{14}{5} f^{(0)} p^2\frac{\partial^2 f^{(0)}}{\partial p^2}-\frac{31}{5} p \frac{\partial f^{(0)}}{\partial p}-\frac{62}{5} f^{(0)}p \frac{\partial f^{(0)}}{\partial p}\right)\right.\notag \\
&\left.
+P_2 \left(-2 f^{(0)} f^{(1)}_2-f^{(1)}_2- f^{(0)} p\frac{\partial f^{(1)}_2}{\partial p}-\frac{1}{2}p\frac{\partial f^{(1)}_2}{\partial p}\right)\right.\notag \\
&\left.
+P_3 \left(-\frac{3i}{5} f^{(1)}_3-\frac{6i}{5} f^{(0)} f^{(1)}_3-\frac{3 i}{5} f^{(0)} p \frac{\partial f^{(1)}_3}{\partial p}-\frac{3 i}{10} p\frac{\partial f^{(1)}_3}{\partial p}\right)\right]
\notag \\
&
+\frac{T_{\rm e}}{m_{\rm e}} \left[(\mathbf v\cdot \mathbf n) \left(-\frac{7}{5} p^3 \frac{\partial^3 f^{(0)}}{\partial p^3}-\frac{47}{5} p^2 \frac{\partial^2 f^{(0)}}{\partial p^2}-\frac{15}{2} p \frac{\partial f^{(0)}}{\partial p}\right)+\right.\notag \\
&\left.
(\mathbf{\hat v}\cdot \mathbf n) \left(\frac{6i}{5} p^2 \frac{\partial^2 f^{(1)}_1}{\partial p^2}+\frac{24i}{5} p \frac{\partial f^{(1)}_1}{\partial p}+\frac{6i}{5} f^{(1)}_1\right)+P_2 \left(-\frac{1}{2} p^2 \frac{\partial^2 f^{(1)}_2}{\partial p^2}-2 p \frac{\partial f^{(1)}_2}{\partial p}+3 f^{(1)}_2\right)\right.\notag \\
&\left.
+P_3 \left(-\frac{3i}{10} p^2 \frac{\partial^2 f^{(1)}_3}{\partial p^2}-\frac{6i}{5} p \frac{\partial f^{(1)}_3}{\partial p}+\frac{6i}{5} f^{(1)}_3\right)+p^2 \frac{\partial^2 f^{(1)}_0}{\partial p^2}+4 p \frac{\partial f^{(1)}_0}{\partial p}\right]\label{CK:1}.
\end{align}\\
\textit{Vishniac effects---.}
The electron number density also fluctuates.
This is taken into account by imposing
\begin{align}
n_{\rm e}\to n_{\rm e}(1+3\Theta^{(1)}_{\rm e}+3\Theta^{(2)}_{\rm e}+9\Theta_{\rm e}^{(1)}{}^2),
\end{align}
where $\Theta_{\rm e}$ is the electron temperature perturbation.
Then, we have the following additional terms:
\begin{align}
\left(3\Theta^{(1)}_{\rm e}+3\Theta^{(2)}_{\rm e}+9\Theta_{\rm e}^{(1)}{}^2\right)\mathcal C^{(1)}_{\rm T}[f],\qquad
3\Theta^{(1)}_{\rm e}\left(\mathcal C^{(0)}_{\rm K}[f]+\mathcal C^{(2)}_{\rm T}[f]\right).\label{vishniac}
\end{align}
\section{Solutions to the Boltzmann equation}\label{sec:3}
The cosmic microwave background radiation is almost isotropic and an ideal blackbody to high precision, but it deviates slightly from a perfect one.
In this section, we construct such deviations as solutions of the second order Boltzmann equation.
\subsection{Momentum functions}\label{U:e}
We start by introducing the following functional basis to expand the distribution function:
\begin{align}
\mathcal G(p)&=-p\frac{\partial f^{(0)}(p)}{\partial p},\label{def:G}\\
\mathcal Y(p)&=\frac{1}{p^2}\frac{\partial }{\partial p}p^4\frac{\partial f^{(0)}(p)}{\partial p}, \label{def:Y}\\
\mathcal M(p)&=T_0\frac{\partial f^{(0)}(p)}{\partial p},\label{def:M}
\end{align}
where we have defined
\begin{align}
f^{(0)}(p)=\frac{1}{e^{\frac{p}{T_0}}-1}.
\end{align}
The definite integrals of these functions are
\begin{align}
\int^\infty_0 \frac{dx}{2\pi^2}x^n f^{(0)}&=\mathcal I_n,\label{B:2}\\
\int^\infty_0 \frac{dx}{2\pi^2}x^n \mathcal G&=(n+1)\mathcal I_n,
\\
\int^\infty_0 \frac{dx}{2\pi^2}x^n \mathcal Y&=(n+1)(n-2)\mathcal I_n,\\
\int^\infty_0 \frac{dx}{2\pi^2}x^n \mathcal M&=-n\mathcal I_{n-1},
\end{align}
where $x=p/T_0$ and $\{\mathcal I_n\}_{n=1}^3=\{1/12,\zeta(3)/\pi^2,\pi^2/30\}$.
The following relations are also useful:
\begin{align}
p^2\frac{\partial^2 f^{(0)}(p)}{\partial p^2}&=\mathcal Y+4\mathcal G,\label{2.9}\\
p\frac{\partial}{\partial p}p\frac{\partial f^{(0)}(p)}{\partial p}&=\mathcal Y+3\mathcal G,\label{pdelp}\\
-p\frac{\partial f^{(1)}(p)}{\partial p}&=\Theta^{(1)}(3\mathcal G+\mathcal Y).\label{3.4}
\end{align}
More generally,
\begin{align}
p^n\frac{\partial }{\partial p}p^m\frac{\partial}{\partial p}p^lf^{(0)}=p^{n+m+l-2}\left[l(l+m-1)f^{(0)}+\mathcal Y+(4-m-2l)\mathcal G\right].
\end{align}
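The general relation can be spot-checked with nested finite differences. A sketch in plain NumPy, evaluated at an arbitrary sample point $x=1.3$ for a few choices of $(n,m,l)$:

```python
import numpy as np

# Closed forms for f0, G and Y in x = p/T_0 (worked out by hand)
f0 = lambda x: 1.0 / np.expm1(x)
G  = lambda x: x * np.exp(x) / np.expm1(x)**2                  # -x f0'
Y  = lambda x: (-(4*x + x**2) * np.exp(x) / np.expm1(x)**2
                + 2 * x**2 * np.exp(2*x) / np.expm1(x)**3)     # x^-2 (x^4 f0')'

def deriv(f, x, h=1e-4):
    """Central finite difference."""
    return (f(x + h) - f(x - h)) / (2 * h)

def lhs(n, m, l, x):
    """x^n d/dx [ x^m d/dx ( x^l f0 ) ] via nested differences."""
    inner = lambda t: t**m * deriv(lambda s: s**l * f0(s), t)
    return x**n * deriv(inner, x)

def rhs(n, m, l, x):
    """The closed form x^{n+m+l-2} [ l(l+m-1) f0 + Y + (4-m-2l) G ]."""
    return x**(n + m + l - 2) * (l*(l + m - 1)*f0(x) + Y(x) + (4 - m - 2*l)*G(x))
```

The case $(n,m,l)=(0,0,0)$ reduces to the relation $p^2\partial_p^2 f^{(0)}=\mathcal Y+4\mathcal G$ above.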
The functions $\mathcal G$, $\mathcal Y$ and $\mathcal M$ are linearly independent.
To see this, consider a linear combination of these functions that vanishes identically:
\begin{align}
a \mathcal G+ b\mathcal Y+ c \mathcal M=0.
\end{align}
We then multiply both sides by $x^n$, integrate over momentum, and obtain
\begin{align}
a(n+1)\mathcal I_n+b(n+1)(n-2)\mathcal I_n-cn\mathcal I_{n-1}=0.\label{proof:linear}
\end{align}
Since (\ref{proof:linear}) must hold for every $n$, only the trivial solution exists, so $\mathcal G$, $\mathcal Y$ and $\mathcal M$ are linearly independent.
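Equivalently, the $3\times3$ matrix built from the moment conditions (\ref{proof:linear}) for $n=2,3,4$ is non-singular ($n=1$ is avoided because it would involve the divergent $\mathcal I_0$). A numerical sketch, using $\mathcal I_n=\Gamma(n+1)\zeta(n+1)/2\pi^2$ for the extra moment $\mathcal I_4$:

```python
import numpy as np

# Moments I_n = Gamma(n+1) zeta(n+1) / (2 pi^2) of the Bose-Einstein function
zeta3, zeta5 = 1.2020569031595943, 1.0369277551433699
I = {1: 1/12, 2: zeta3 / np.pi**2, 3: np.pi**2 / 30, 4: 12 * zeta5 / np.pi**2}

# Row n of a(n+1)I_n + b(n+1)(n-2)I_n - c n I_{n-1} = 0, for n = 2, 3, 4
rows = [[(n + 1) * I[n], (n + 1) * (n - 2) * I[n], -n * I[n - 1]]
        for n in (2, 3, 4)]
det = np.linalg.det(np.array(rows))   # non-zero => only the trivial solution
```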
\subsection{First order}
The cosmic photon fluid is perturbed by the primordial fluctuations; however, it is considered to be a blackbody at each point.
Based on this assumption, the ansatz for the first order Boltzmann equation is usually written as a Planck distribution function with a spacetime dependent temperature.
Let us expand this function to linear order in the temperature perturbations.
Using (\ref{def:G}), we can write the linear term as
\begin{align}
f^{(1)}=\Theta^{(1)} \mathcal G,\label{linear_ansatz}
\end{align}
where we have defined $\Theta^{(1)}$ as the first order temperature fluctuations normalized by the fiducial temperature.
One then finds that both sides of the Boltzmann equation are proportional to $\mathcal G$.
This implies that the equation for $\Theta^{(1)}$ coincides with those for the energy density and number density perturbations, which are obtained by integrating over momentum.
At this stage, we have justified the assumption that the system is a blackbody at each point.
\subsection{Second order in the Thomson limit}
On the other hand, it is known that the above discussion is not applicable at second order~\cite{Dodelson:1993xz}.
In other words, the second~(or higher) order temperature perturbations are momentum dependent in general, and the fluid is no longer a blackbody even locally at second order.
This becomes apparent if we integrate the equation weighted by $p^n$.
We now have three strategies for solving the Boltzmann equation.
The first is to integrate out the momentum dependence and to write equations for the intensity and number density perturbations at second order.
This is a significant simplification, but it masks much of the information contained in the non-trivial momentum dependence of the distribution function.
The second is to treat the fully momentum dependent temperature.
This is complete, but far more complicated.
We therefore propose a third option: to take the momentum dependence into account partially, through the form of the spectral distortions.
In other words, we replace the infinite number of degrees of freedom coming from the continuous momentum with an infinite set of parameters describing the spectral deformations.
In fact, this kind of approach is routinely used to reduce the number of equations in a set of partial differential equations.
For example, we use the Boltzmann hierarchy equations instead of the equations with angular parameters.
In that case, the important contributions are related to the lower multipoles, and the infinite number of equations can be truncated to a few.
Actually, we have already applied this approach to our problem at linear order.
The first order spectral distortion is written as a momentum independent temperature perturbation of the form (\ref{linear_ansatz}); that is, the infinite number of degrees of freedom of the continuous momentum is reduced to a single local parameter at first order.
Our next step is to go to second and higher order.\\
Let us write an arbitrary distribution function in the form of the Planck distribution function with a momentum dependent temperature perturbation $\widetilde \Theta=\widetilde \Theta(\mathbf x, p\mathbf n, \eta)$:
\begin{align}
f(\mathbf x, p\mathbf n, \eta)&=\frac{1}{e^{\frac{p}{T_0}e^{-\widetilde\Theta}}-1},
\end{align}
where we define $T_0$ as the temperature of a time independent comoving blackbody~\footnote{$T_0$ is not the average temperature, which varies due to acoustic reheating and other heating processes at second order.}.
Then let us expand this function around $\widetilde \Theta=0$:
\begin{align}
\frac{1}{e^{\frac{p}{T_0}e^{-\widetilde\Theta}}-1}=\sum^{\infty}_{n=0}\frac{\widetilde \Theta^n}{n!}\left(-p\frac{\partial }{\partial p}\right)^n f^{(0)}(p).\label{planck_dist_exp}
\end{align}
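(\ref{planck_dist_exp}) is simply the Taylor series of $f^{(0)}(p\,e^{-\widetilde\Theta})$ in $\widetilde\Theta$, since each $\widetilde\Theta$ derivative brings down one factor of $-p\,\partial/\partial p$. A numerical sketch of the truncation error, with the first two derivative operators written in closed form via (\ref{def:G}) and (\ref{pdelp}):

```python
import numpy as np

f0 = lambda x: 1.0 / np.expm1(x)
G  = lambda x: x * np.exp(x) / np.expm1(x)**2                  # (-x d/dx) f0
Y  = lambda x: (-(4*x + x**2) * np.exp(x) / np.expm1(x)**2
                + 2 * x**2 * np.exp(2*x) / np.expm1(x)**3)
D2 = lambda x: Y(x) + 3 * G(x)                                 # (-x d/dx)^2 f0

def residual(x, th, order):
    """Exact f0(x e^{-th}) minus the series truncated at the given order."""
    series = f0(x)
    if order >= 1:
        series += th * G(x)
    if order >= 2:
        series += th**2 / 2 * D2(x)
    return f0(x * np.exp(-th)) - series
```

Truncating at second order leaves an $\mathcal O(\widetilde\Theta^3)$ remainder, visibly smaller than the first-order remainder.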
In our convention, the zeroth order distribution function is time independent, which simplifies the later calculations.
Here we should notice that $n$ is not the order of the perturbations, since the temperature perturbation has the following form:
\begin{align}
\widetilde \Theta=\sum_{n} \widetilde \Theta^{(n)},
\end{align}
where $n$ is the order of the perturbations.
We already know that $\widetilde \Theta^{(1)}=\Theta^{(1)}$ is momentum independent; however,
the $n>1$ terms are momentum dependent in principle.
In our definition, the higher order temperature perturbations have nonzero homogeneous components, since we fix the fiducial temperature $T_0$ as mentioned above.
At second order, we find that the momentum dependence separates as
\begin{align}
\widetilde \Theta^{(2)}(p)=\Theta^{(2)}+y\frac{\mathcal Y(p)}{\mathcal G(p)},
\end{align}
where $\Theta^{(2)}$ is the momentum independent part and $\mathcal Y$ is defined in (\ref{def:Y}).
The perturbative expansions are obtained as follows:
\begin{align}
f^{(1)}=&\Theta^{(1)} \mathcal G(p),\label{def:f1}\\
f^{(2)}=&\left[\Theta^{(2)}+\frac32\Theta^2\right] \mathcal G(p) + \left[y+\frac12\Theta^2\right]\mathcal Y(p)\label{def:f2},
\end{align}
where we have used (\ref{def:G}) and (\ref{def:Y}).
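For completeness, the intermediate step leading to (\ref{def:f2}) reads, upon substituting $\widetilde\Theta^{(2)}=\Theta^{(2)}+y\,\mathcal Y/\mathcal G$ and $(-p\,\partial/\partial p)^2f^{(0)}=\mathcal Y+3\mathcal G$, which follows from (\ref{pdelp}), into the $n\le2$ terms of (\ref{planck_dist_exp}):

```latex
\begin{align}
f^{(2)}=\widetilde\Theta^{(2)}\,\mathcal G+\frac{\Theta^2}{2}\left(\mathcal Y+3\mathcal G\right)
=\left[\Theta^{(2)}+\frac32\Theta^2\right]\mathcal G+\left[y+\frac12\Theta^2\right]\mathcal Y.
\end{align}
```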
\subsection{Collision terms}
Thanks to the expression (\ref{def:f2}), the calculations are simplified by writing the collision terms as
\begin{align}
\mathcal C^{(2)}_{\rm T}[f]=(\mathcal A^{(1)}+\mathcal A^{(2)})\mathcal G+\mathcal B^{(2)}\mathcal Y.\label{decomp:2nd}
\end{align}
Then, combining (\ref{decomp:2nd}), (\ref{def:f2}), (\ref{trans_sec_col}), (\ref{vishniac}) and (\ref{3.4}), we find
\begin{align}
(n_{\rm e}\sigma_{\rm T} a)^{-1}\mathcal A^{(1)}&=\mathbf v\cdot \mathbf n-\Theta+\Theta_0-\frac12P_2\Theta_2,\\
(n_{\rm e}\sigma_{\rm T} a)^{-1}\mathcal A^{(2)}&=-\frac{2v^2}{5}+\mathbf v^{(2)}\cdot \mathbf n+\frac{6(\mathbf v\cdot \mathbf n)^2}{5}+\mathbf v\cdot \mathbf n(\Theta+2\Theta_0+\Theta_2-2P_2\Theta_2)\notag \\
&+3\Theta_{\rm e}\left(\mathbf v\cdot \mathbf n-\Theta+\Theta_0-\frac12P_2\Theta_2\right)-iv\left[-\Theta_1+\frac15P_2\left(-4\Theta_1-\frac32\Theta_3\right)\right]\notag \\
&+\Theta^{(2)}_{0}-\Theta^{(2)}+\frac32[\Theta^2]_{0}-\frac32\Theta^2+\frac{1}{10}\sum_{m=-2}^2\left(\frac32[\Theta^2]_{2m}+\Theta^{(2)}_{2m}\right)Y_{2m},
\\
(n_{\rm e}\sigma_{\rm T} a)^{-1}\mathcal B^{(2)}&=
\frac{3}{20}v^2+\frac{11}{20}(\mathbf v\cdot \mathbf n)^2-\frac{\Theta^2}{2}+\frac{[\Theta^2]_{0}}{2}+\mathbf v\cdot \mathbf n\left(\Theta_0-\frac12P_2\Theta_2\right)\notag \\
&-iv\left[-\Theta_1+\frac15P_2\left(-\Theta_1+\frac32\Theta_3\right)\right]\notag \\
&+\frac{1}{20}\sum_{m=-2}^2[\Theta^2]_{2m}Y_{2m}\notag \\
&-y+y_{0}+\frac{1}{10}\sum_{m=-2}^2y_{2m}Y_{2m}.
\end{align}
Note that pure second order quantities such as $\Theta^{(2)}$ and $\mathbf v^{(2)}$ appear only in $\mathcal A^{(2)}$, while $\mathcal B^{(2)}$ is expressed by products of first order perturbations except for $y$.
The monopole component of $\mathcal A^{(2)}$ vanishes.
This implies that Compton scattering does not change the isotropic component of the photon number density even at second order.
\subsection{Liouville terms}
We will solve the above equation perturbatively; note, however, that we avoid writing the second order metric perturbations explicitly in the following discussion, since they do not appear in the final expressions for the spectral distortions.
So far we have discussed the r.h.s. of the Boltzmann equation.
Next, let us examine the Liouville term on the left.
Differentiating the distribution function (\ref{def:f2}) with respect to conformal time, we obtain
\begin{align}
f'=&f^{(0)'}+(\Theta'+3\Theta\Theta')\mathcal G+
\left(\Theta+\frac32\Theta^2\right)\mathcal G'\notag \\
&+(y'+\Theta\Theta')\mathcal Y+\left(y+\frac12\Theta^2\right)\mathcal Y',\label{fprime}
\end{align}
where $'\equiv d/d\eta$.
Since $f^{(0)}$ depends only on comoving momentum $p$, the derivative of the zeroth order part becomes
\begin{align}
f^{(0)'}=- (\ln p)'\mathcal G.\label{f0prime}
\end{align}
Note that $- (\ln p)'$ has no zeroth order part but contains both first and second order terms, which describe the gravitational redshift and lensing, since $p$ is the comoving momentum.
On the other hand, using (\ref{pdelp}) we can write the time derivative of $\mathcal G$ by
\begin{align}
\mathcal G'=-(\ln p)'(3\mathcal G+\mathcal Y),\label{Gprime}
\end{align}
and $\mathcal Y'$ can be neglected, since it is a first order quantity that multiplies second order perturbations in the equation.
Combining (\ref{fprime}), (\ref{f0prime}) and (\ref{Gprime}) up to second order, we obtain
\begin{align}
f'=(1+3\Theta)[\Theta'-(\ln p)']\mathcal G+\left( y'+
\Theta[\Theta'-(\ln p)']\right)\mathcal Y.\label{fprimeGY}
\end{align}
\subsection{Second order equations for the spectral distortion and acoustic reheating}
Next, let us write down equations order by order.
The first order Boltzmann equation is easily obtained as
\begin{align}
\Theta^{(1)'}-(\ln p)'^{(1)}&= \mathcal A^{(1)}.\label{bol:1st}
\end{align}
If we expand $(\ln p)'^{(1)}$ with respect to the metric perturbations, we obtain the first order Boltzmann equation for the temperature perturbations with the metric perturbations.
On the other hand, collecting second order terms, the second order equation can be written as
\begin{align}
\left[\Theta^{(2)'}-(\ln p)'^{(2)}+3\Theta\mathcal A^{(1)}\right]\mathcal G+
\left[y'+\Theta\mathcal A^{(1)}\right]\mathcal Y
&= \mathcal A^{(2)}\mathcal G +\mathcal B^{(2)}\mathcal Y,\label{bol:2nd}
\end{align}
where we have used (\ref{bol:1st}) to substitute $\mathcal A^{(1)}$ into the expression.
As we commented above, $\mathcal A^{(2)}$ has no monopole term since Compton scattering does not change the photon number; however, the temperature is raised by acoustic reheating, namely, the photon number is changed by the term $3\Theta \mathcal A^{(1)}$ on the left.
Integrals weighted by $p^n$ must always be consistent, even when they carry no direct physical implication, since the equation must hold at the level of the distribution function.
Therefore, the coefficients of $\mathcal G$ and $\mathcal Y$ must be equal separately, so that we obtain independent Boltzmann equations for $\Theta^{(2)}$ and $y$.
One then immediately finds
\begin{align}
\left[\Theta^{(2)'}-(\ln p)'^{(2)}+3\Theta\mathcal A^{(1)}\right]&=\mathcal A^{(2)},\label{sec:eq}\\
y'+\Theta\mathcal A^{(1)}&=\mathcal B^{(2)}.\label{y:eq}
\end{align}
In contrast to (\ref{bol:1st}), there are source terms on the l.h.s. of (\ref{sec:eq}); this implies that small scale perturbations generate large scale temperature perturbations at second order, whose homogeneous component was recently pointed out in~\cite{Jeong:2014gna,Nakama:2014vla}.
It is important that $\mathcal B^{(2)}$ contains no pure second order terms except $y$.
If one did not introduce $y$ into the distribution function from the beginning, (\ref{bol:2nd}) could not be satisfied, since both $\Theta\mathcal A^{(1)}$ and $\mathcal B^{(2)}$ are already determined at first order and do not coincide in general.
This also implies that $y$ is determined automatically by the linear perturbations.
Let us expand (\ref{y:eq}) by substituting the following form:
\begin{align}
\mathcal A^{(1)}&=-n_{\rm e}\sigma_{\rm T} a\left(\Theta -\Theta_0-\mathbf v\cdot \mathbf n+\frac12P_2\Theta_2\right),\\
\mathcal B^{(2)}&=-n_{\rm e}\sigma_{\rm T} a\bigg[
\frac{\Theta^2}{2}-\frac{[\Theta^2]_{0}}{2}-\frac{3}{20}v^2-\frac{11}{20}(\mathbf v\cdot \mathbf n)^2
\notag \\
&-\frac{1}{20}\sum_{m=-2}^2[\Theta^2]_{2m}Y_{2m}-\mathbf v\cdot \mathbf n\left(\Theta_0-\frac12P_2\Theta_2\right)\notag \\
&+iv\left\{-\Theta_1+\frac15P_2\left(-\Theta_1+\frac32\Theta_3\right)\right\}\notag \\
&
+y-y_{0}-\frac{1}{10}\sum_{m=-2}^2y_{2m}Y_{2m}\bigg].
\end{align}
Then we find
\begin{align}
\frac{\partial y}{\partial\eta}+\mathbf n\cdot \nabla y&=n_{\rm e}\sigma_{\rm T} a \left(\Theta -\Theta_0-\mathbf v\cdot \mathbf n+\frac12P_2\Theta_2\right)\Theta^{(1)}\notag \\
& -n_{\rm e}\sigma_{\rm T} a \bigg[
\frac{\Theta^2}{2}-\frac{[\Theta^2]_{0}}{2}-\frac{3}{20}v^2-\frac{11}{20}(\mathbf v\cdot \mathbf n)^2
\notag \\
&-\frac{1}{20}\sum_{m=-2}^2[\Theta^2]_{2m}Y_{2m}-\mathbf v\cdot \mathbf n\left(\Theta_0-\frac12P_2\Theta_2\right)\notag \\
&+iv\left\{-\Theta_1+\frac15P_2\left(-\Theta_1+\frac32\Theta_3\right)\right\}\bigg ]\notag \\
&-n_{\rm e}\sigma_{\rm T} a\left(y-y_{0}-\frac{1}{10}\sum_{m=-2}^2y_{2m}Y_{2m}\right).
\end{align}
On the other hand, the Fourier transform of the above equation becomes
\begin{align}
\frac{\partial y}{\partial\eta}+ik\lambda y&=n_{\rm e}\sigma_{\rm T} a \left(\Theta -\Theta_0-v\lambda+\frac12P_2\Theta_2\right) \Theta^{(1)}\notag \\
&-n_{\rm e}\sigma_{\rm T} a \bigg[
\frac{\Theta^2}{2}-\frac{[\Theta^2]_{0}}{2}-\frac{3}{20}v^2-\frac{11}{20}(\lambda v)^2
\notag \\
&-\frac{1}{20}\sum_{m=-2}^2[\Theta^2]_{2m}Y_{2m}-\lambda v \left(\Theta_0-\frac12P_2\Theta_2\right)\notag \\
&+iv \left\{-\Theta_1+\frac15P_2\left(-\Theta_1+\frac32\Theta_3\right)\right\}
\bigg ]\notag \\
&-n_{\rm e}\sigma_{\rm T} a\left(y-y_{0}-\frac{1}{10}\sum_{m=-2}^2y_{2m}Y_{2m}\right),\label{y:equation:fourier}
\end{align}
where $\lambda$ is the cosine of the angle between the Fourier momentum and the photon momentum, and products of perturbations are understood as convolutions.
Here we have also kept the $m\not =0$ components.
The equations for the spectral distortions do not include the other pure second order quantities such as the temperature and the metric perturbations.
Therefore, we do not have to integrate the full second order Boltzmann equation as long as we work only with the spectral distortions.
Note that the convolutions implicitly include the curvature perturbations, as discussed in appendix \ref{app_conv}.
Therefore, the integration with respect to the Fourier momentum is non-trivial in general.
\section{Inhomogeneous $y$ distortion}\label{sec:4}
We have introduced the $y$ distortion as the momentum dependent part of the second order temperature perturbations.
Its momentum dependence has the same form as the usual Compton $y$ parameter, given by
\begin{align}
y_{\rm C}= \int \frac{T_{\rm e}-T_{\gamma}}{m_{\rm e}}n_{\rm e}\sigma_{\rm T}ad\eta,\label{def:yc}
\end{align}
where $T_{\gamma}\equiv T_0(1+z)$.
We should note that our $y$ is a free parameter determined by the second order Boltzmann equation and has nothing to do with the inhomogeneity of $T_{\rm e}$, $T_{\gamma}$ and $n_{\rm e}$ in the integrand of (\ref{def:yc}).
To be more specific, our $y$ arises from the expansion associated with the Thomson collision terms, whereas $y_{\rm C}$ comes from the Kompaneets terms; their momentum dependences coincide accidentally.
Below we summarize the evolution equation for our $y$ and confirm the validity of the previous estimates.
\subsection{Hierarchy equations for the spectral distortion}
In this section, we write the hierarchy equations for multipole components of the distortion.
Here we ignore $m\neq 0$~(vector and tensor) components for simplicity.
(\ref{y:equation:fourier}) is then written as
\begin{align}
\dot y+ik\lambda y&= \mathcal S(k,\lambda)-n_{\rm e}\sigma_{\rm T} a\left(y-y_{0}+\frac12P_2(\lambda)y_2\right),
\end{align}
where $\cdot \equiv\partial/\partial\eta$ and the source term is defined by
\begin{align}
( n_{\rm e}\sigma_{\rm T} a)^{-1}\mathcal S(k,\lambda)=& \left(\Theta -\Theta_0-v\lambda+\frac12P_2\Theta_2\right) \Theta^{(1)}\notag \\
& -
\frac{\Theta^2}{2}+\frac{[\Theta^2]_{0}}{2}+\frac{3}{20}v^2+\frac{11}{20}(\lambda v)^2
\notag \\
&-\frac{1}{4}[\Theta^2]_{2}P_2+\lambda v \left(\Theta_0-\frac12P_2\Theta_2\right)\notag \\
&-iv \left\{-\Theta_1+\frac15P_2\left(-\Theta_1+\frac32\Theta_3\right)\right\}.
\end{align}
Using (\ref{Legendre}) we obtain the following hierarchy equations for the $y$ distortions:
\begin{align}
\dot y_l +\frac{k(l+1)}{2l+1} y_{l+1}-\frac{kl}{2l+1} y_{l-1}=\mathcal S_l-n_{\rm e}\sigma_{\rm T}a\left(1-\delta_{l0}-\frac{1}{10}\delta_{2l}\right)y_l.\label{y:heierarchy}
\end{align}
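As an illustration, the hierarchy (\ref{y:heierarchy}) with the sources switched off can be integrated numerically. The sketch below is a toy fixed-step RK4 integrator with a crude truncation at $l_{\max}$; the constant opacity and all numbers are illustrative assumptions, not a cosmological computation:

```python
import numpy as np

def rhs(y, k, ne_sT_a, lmax):
    """Right-hand side of the source-free hierarchy for y_0 .. y_lmax."""
    dy = np.zeros_like(y)
    for l in range(lmax + 1):
        up = y[l + 1] if l < lmax else 0.0          # crude truncation at lmax
        dn = y[l - 1] if l > 0 else 0.0
        # scattering factor 1 - delta_{l0} - delta_{l2}/10
        damp = 0.0 if l == 0 else (0.9 if l == 2 else 1.0)
        dy[l] = -k*(l + 1)/(2*l + 1)*up + k*l/(2*l + 1)*dn - ne_sT_a*damp*y[l]
    return dy

def evolve(y0, k, ne_sT_a, eta, steps=2000):
    """Classical RK4 integration over a conformal-time interval eta."""
    y = np.array(y0, dtype=float)
    lmax = len(y) - 1
    h = eta / steps
    for _ in range(steps):
        k1 = rhs(y, k, ne_sT_a, lmax)
        k2 = rhs(y + h/2*k1, k, ne_sT_a, lmax)
        k3 = rhs(y + h/2*k2, k, ne_sT_a, lmax)
        k4 = rhs(y + h*k3, k, ne_sT_a, lmax)
        y = y + h/6*(k1 + 2*k2 + 2*k3 + k4)
    return y
```

For $k\to0$ the monopole is exactly conserved while the higher multipoles are damped on the scattering time scale, in line with the statement that only the monopole survives on super-horizon scales.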
Up to $l=4$, the source functions can be expanded as
\begin{align}
( n_{\rm e}\sigma_{\rm T} a)^{-1}\mathcal S_0=&\frac{v^2}{3}+2iv\Theta_1-3 \Theta_1^2+\frac{9\Theta_2^2}{2}-7\Theta_3^2+9\Theta_4^2+\cdots,\\
( n_{\rm e}\sigma_{\rm T} a)^{-1}\mathcal S_1=&\frac{3}{5} i \Theta_2 v-\frac95 \Theta_1 \Theta_2+\frac{27}{10}\Theta_2\Theta_3-4\Theta_3\Theta_4+\cdots,\\
( n_{\rm e}\sigma_{\rm T} a)^{-1}\mathcal S_2=&-\frac{11}{150}v^2-\frac{11}{25} i\Theta_1 v+\frac{33}{50} i\Theta_3 v+\frac{33}{50}\Theta_1^2-\frac{9}{14}\Theta_2^2\notag\\
&-\frac{99}{50}\Theta_1\Theta_3+\frac{77}{75}\Theta_3^2+\frac{18}{7} \Theta_2\Theta_4-\frac{9}{7} \Theta_4^2+\cdots.
\end{align}
Let $k$ now be a super-horizon scale.
The $l>0$ linear perturbations are not significant before horizon entry.
Therefore, using (\ref{conv_result1}), each convolution is well approximated as
\begin{align}
(XY)_{\bf k}\sim \int \frac{dq}{q}X_{q} Y_{q}\mathcal P_{\mathcal R}(q).\label{convo:sub}
\end{align}
(\ref{convo:sub}) implies that the source terms induce $k$ independent transfer functions for the $y$ distortion on large scales.
Imposing $v=-3 i \Theta_1$ during the tight coupling regime, we obtain
\begin{align}
( n_{\rm e}\sigma_{\rm T} a)^{-1}\mathcal S_0=&\frac{9}{2} \Theta_2^2-7 \Theta_3^2+9 \Theta_4^2+\cdots
\\
( n_{\rm e}\sigma_{\rm T} a)^{-1}\mathcal S_1=&\frac{27}{10} \Theta_3 \Theta_2-4 \Theta_3 \Theta_4
+\cdots\\
( n_{\rm e}\sigma_{\rm T} a)^{-1}\mathcal S_2=&-\frac{9}{14}\Theta_2^2+\frac{18}{7} \Theta_4 \Theta_2+\frac{77}{75} \Theta_3^2-\frac{9}{7} \Theta_4^2
+\cdots.
\end{align}
Ignoring the gradient terms, we have
\begin{align}
\dot y_0 \sim& n_{\rm e}\sigma_{\rm T} a \left(\frac{9}{2} \Theta_2^2-7 \Theta_3^2+9 \Theta_4^2+\cdots\right )\label{S_0}\\
\dot y_1 \sim &- n_{\rm e}\sigma_{\rm T} ay_1+ n_{\rm e}\sigma_{\rm T} a \left(\frac{27}{10} \Theta_3 \Theta_2-4 \Theta_3 \Theta_4
+\cdots\right )\label{S_1}\\
\dot y_2 \sim &- n_{\rm e}\sigma_{\rm T} ay_2+ n_{\rm e}\sigma_{\rm T} a\left(-\frac{9}{14}\Theta_2^2+\frac{18}{7} \Theta_4 \Theta_2+\frac{77}{75} \Theta_3^2-\frac{9}{7} \Theta_4^2
+\cdots\right).\label{S_2}
\end{align}
The first term on the r.h.s. of (\ref{S_0}) is well known: $\Theta_2$ signals the emergence of the anisotropic stress, which induces frictional heating and thereby sources the distortion.
The term $- n_{\rm e}\sigma_{\rm T} ay_2$ suppresses the growth of $y_2$ through isotropization by Thomson scattering, and the same holds for the higher multipoles.
The term proportional to $\Theta_2^2$ in (\ref{S_2}) implies that the distortions also diffuse due to the anisotropic stress.
This effect was previously taken into account by a window function introduced by hand.
Substituting $l=0$ into (\ref{y:heierarchy}) and ignoring the gradient terms, we immediately find that only the monopole component of the $y$ distortion survives and is conserved on super-horizon scales after the $y$ era.
The hierarchy equations at late periods without the sources are given by
\begin{align}
\dot y_l +\frac{k(l+1)}{2l+1} y_{l+1}-\frac{kl}{2l+1} y_{l-1}=-n_{\rm e}\sigma_{\rm T}a\left(1-\delta_{l0}-\frac{1}{10}\delta_{2l}\right)y_l,
\end{align}
and from (\ref{S_0}), the initial condition of the distortion should be written as
\begin{align}
y_0(\eta_f,k)\sim & \int^{\eta_f}_{\eta_i} n_{\rm e}\sigma_{\rm T} a \left(\frac{v^2}{3}+2iv\Theta_1-3 \Theta_1^2+\frac{9\Theta_2^2}{2}-7\Theta_3^2+9\Theta_4^2+\cdots\right )d\eta\notag \\
\sim& \int^{\eta_f}_{\eta_i} n_{\rm e}\sigma_{\rm T} a \left(\frac{9}{2} \Theta_2^2-7 \Theta_3^2+9 \Theta_4^2+\cdots\right )d\eta.\label{y0:calc:def}
\end{align}
Thanks to (\ref{convo:sub}), the main part of $y_0(\eta_f,k)$ has exactly the same form as the homogeneous component previously evaluated, for example, in~\cite{Chluba:2012gq}.
This is reasonable, since the monopole and the homogeneous part are indistinguishable before horizon entry.
\subsection{Integral solutions and gauge dependence}
We shall now demonstrate a line-of-sight integral method for the $y$ distortion.
Near the last scattering surface, the sources are negligible and the equation can be written as
\begin{align}
\dot y+ik\lambda y&=-n_{\rm e}\sigma_{\rm T} a\left(y-y_{0}+\frac12P_2(\lambda)y_2\right).
\end{align}
The method completely parallels that for the temperature perturbations.
That is, the line-of-sight integral solution for the $y$ distortion is given by
\begin{align}
y(k,\lambda,\eta_0)=&\int^{\eta_0}_{\eta_f}d\eta \mathcal S_y(k,\eta)e^{-ik(\eta_0-\eta)\lambda}\\
\mathcal S_y(k,\eta)=&g\left(y_0+\frac{y_2}{4}\right)+\ddot g\frac{3y_2}{4k^2},
\end{align}
where the visibility function is introduced by $g=-\dot \tau e^{-\tau}$ with $\dot \tau$ being $-n_{\rm e}\sigma_{\rm T}a$.
The terms related to $y_2$ are new corrections.
The harmonic coefficient is also immediately obtained as
\begin{align}
a_{y,lm}&=4\pi(-i)^l\int\frac{d^3k}{(2\pi)^3}Y^*_{lm}(\hat k)\int^{\eta_0}_0 d\eta \mathcal S_y(k,\eta)j_l[k(\eta_0-\eta)].\label{yualm}
\end{align}
We can see that no metric perturbations and no second order temperature perturbations appear.
Therefore, $y$ experiences no redshift and has no cross-correlation with the ISW and lensing effects.
This simplifies the treatment of the $\mu\mu$ and $yy$ auto- and $\mu y$ cross-correlations, since we do not have to consider the curve-of-sight effects~\cite{Saito:2014bxa}.\\
To close this section, we shall comment on the gauge dependence of $y$.
The gauge transformation laws for $v$ and $-3i\Theta_1$ are the same as those given in~\cite{Ma:1995ey}, and the higher order multipoles are gauge invariant quantities.
The gauge invariance of $y$ is then manifest, since the velocity terms in the first line of~(\ref{y0:calc:def}) can be recast as $(v+3i\Theta_1)^2/3$~\cite{Chluba:2012gq}.
No metric perturbation appears in (\ref{y:equation:fourier}), so the $y$ distortion evolves gauge independently after its generation.
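The algebraic recast of the velocity terms can be checked symbolically; the following is a minimal SymPy sketch (the symbol names are ours, not from the text):

```python
import sympy as sp

v, Theta1 = sp.symbols('v Theta1')

# Velocity-dependent terms in the first line of the initial condition for y_0
velocity_terms = v**2/3 + 2*sp.I*v*Theta1 - 3*Theta1**2

# Gauge-invariant combination claimed in the text
combination = (v + 3*sp.I*Theta1)**2 / 3

# The two expressions agree identically
assert sp.expand(velocity_terms - combination) == 0
```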
\subsection{Homogeneous component of the $y$ distortion}
In our definition, $y$ is calculated independently of the usual Compton $y$ parameter defined in (\ref{def:yc}).
Let us combine these two $y$ distortions.
In terms of the Compton $y$ parameter, (\ref{CK:0}) is written as
\begin{align}
C_{\rm K}^{(0)}[f]=\dot y_{\rm C}\mathcal Y(p).
\end{align}
The ensemble average of the monopole component of the total $y$ distortion evolves as
\begin{align}
\frac{\partial \langle y_{\rm tot}\rangle}{\partial \eta }=n_{\rm e}\sigma_{\rm T}a\left[\frac{\langle v^2\rangle }{3}+2i\langle v\Theta_1\rangle-3 \langle\Theta_1^2\rangle +\frac{9\langle\Theta_2^2\rangle }{2}-7\langle\Theta_3^2\rangle+9\langle\Theta_4^2\rangle +\cdots\right]+\dot y_{\rm C}.
\end{align}
Therefore the total homogeneous component can be calculated as
\begin{align}
\langle y_{\rm tot}\rangle =-\int \dot \tau \left[\frac{T_{\rm e}-T_\gamma }{m_{\rm e}}+\frac{\langle v^2\rangle }{3}+2i\langle v\Theta_1\rangle-3 \langle\Theta_1^2\rangle +\frac{9\langle\Theta_2^2\rangle }{2}-7\langle\Theta_3^2\rangle+9\langle\Theta_4^2\rangle +\cdots\right]d\eta,\label{y:homo}
\end{align}
where the baryon bulk velocity and the temperature dipole cancel if we apply the tight coupling approximation, namely $v=-3i\Theta_1$.
On the other hand, the SZ effect can also be obtained from the above formula if we impose $T_{\rm e}\gg T_\gamma$ and $v\gg \Theta$.
\section{$\mu$ distortion}\label{sec:5}
\subsection{Definition}
We have shown that the $y$ is necessary for a set of equations to be consistent at second order; however, we have not yet commented on the chemical potential type distortion, the $\mu$ distortion.
During $5\times 10^4<z<2\times 10^6$, the $y$ distortions are converted into the $\mu$ distortion, and the system is considered to be in kinetic equilibrium.
This was investigated numerically in the previous studies~\cite{Hu:1994bz,Chluba:2012gq}.
Let us try to include the $\mu$ as well in our formulation.
One strategy to include the chemical potential may be to write a second order ansatz in the following form:
\begin{align}
\tilde \Theta^{(2)}(p)=\Theta^{(2)}+y\frac{\mathcal Y(p)}{\mathcal G(p)}+\mu\frac{\mathcal M(p)}{\mathcal G(p)}+\cdots,\label{ans:muiri}
\end{align}
where $\mathcal M$ is defined in (\ref{def:M}).
Let us substitute (\ref{ans:muiri}) into (\ref{trans_sec_col}) and (\ref{fprimeGY}).
We then obtain additional terms proportional to $\mathcal M$.
Reading off each coefficient, the evolution equation for the $\mu$ distortion is given as follows:
\begin{align}
\dot \mu +ik\lambda \mu&=(n_{\rm e}\sigma_{\rm T} a)\left[-\mu+\mu_{0}+\frac{1}{10}\sum_{m=-2}^2\mu_{2m}Y_{2m}\right].\label{mu_eq}
\end{align}
It is not surprising that the conversion of $y$ into $\mu$ is not seen even at second order, since we start with a momentum independent $\mu$ parameter and did not take the momentum transfer into account.
(\ref{mu_eq}) just tells us that the momentum independent chemical potential evolves independently of the $y$ distortion and the second order temperature perturbations once it is given at the initial time.
This implies that $\mu$ generation from $y$ should be treated in the full~(or higher order) Boltzmann equations with a momentum dependent chemical potential.
The other steps for the $\mu$ are completely parallel with those for the $y$, and the harmonic coefficient takes the same form as (\ref{yualm}), namely we have
\begin{align}
a_{\mu,lm}=&4\pi(-i)^l\int\frac{d^3k}{(2\pi)^3}Y^*_{lm}(\hat k)\int^{\eta_0}_0 d\eta \mathcal S_\mu(k,\eta)j_l[k(\eta_0-\eta)],\label{mualm}\\
\mathcal S_\mu(k,\eta)=&g\left(\mu_0+\frac{\mu_2}{4}\right)+\ddot g\frac{3\mu_2}{4k^2},
\end{align}
and the initial value of the $\mu$ distortion should be introduced by hand in this context.
\subsection{Instantaneous $\mu$ formation}
The full numerical analysis of $\mu$ generation is complicated.
Here we note that the chemical potential is a thermodynamic quantity, so we do not have to care about the details of the process provided the thermalization is rapid enough.
In this section we shall repeat a traditional explanation of the $\mu$ formation, with one additional comment.
Let us consider an initial state given by the solutions of the Thomson limit second order Boltzmann equation; namely, the second order number and energy densities are calculated as
\begin{align}
N^{(2)}_y&=0,\\
I^{(2)}_y&=4y\mathcal I_3,
\end{align}
where numerical factors $\mathcal I_{n}$ are defined in (\ref{B:2}).
Assuming that the thermalization is rapid compared to the typical time scale of the cosmic expansion, these quantities should have the following forms at the next moment:
\begin{align}
N^{(2)}_{\rm BE}&=3\mathcal I_2\Theta^{(2)}_{\rm BE}-2\mu \mathcal I_1,\\
I^{(2)}_{\rm BE}&=4\mathcal I_3\Theta^{(2)}_{\rm BE}-3\mu \mathcal I_2,
\end{align}
where the subscript ``BE'' indicates that these are the parameters associated with a Bose distribution function.
Then we can impose the number and the energy conservation laws:
\begin{align}
N^{(2)}_y&=N^{(2)}_{\rm BE}\label{N_equate}\\
I^{(2)}_y&=I^{(2)}_{\rm BE}\label{I_equate}
\end{align}
so that we obtain
\begin{align}
\mu=\left(\frac{2\mathcal I_1}{3\mathcal I_2}-\frac{3\mathcal I_2}{4\mathcal I_3}\right)^{-1}y.\label{e:mu}
\end{align}
The numerical constant evaluates to $\mu=1.40066\times 4y$, recovering the well known relation.
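The constant can be reproduced numerically, assuming the standard definition $\mathcal I_n=\int_0^\infty x^n/(e^x-1)\,dx=\Gamma(n+1)\zeta(n+1)$ for the factors defined in (\ref{B:2}) (this explicit form is our assumption here):

```python
import math

# Assumed standard definition: I_n = Gamma(n+1) * zeta(n+1)
zeta = {2: math.pi**2 / 6, 3: 1.2020569031595943, 4: math.pi**4 / 90}
I = {n: math.gamma(n + 1) * zeta[n + 1] for n in (1, 2, 3)}

# mu = [2 I_1/(3 I_2) - 3 I_2/(4 I_3)]^{-1} y
c = 1.0 / (2*I[1] / (3*I[2]) - 3*I[2] / (4*I[3]))
print(c / 4)  # agrees with the quoted 1.40066 to the digits shown
```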
We now add one comment on this matter.
(\ref{N_equate}) and (\ref{I_equate}) cannot hold at the level of the distribution functions; that is, the continuous evolution of the $\mu$ is never explained by this approach.
This is because the momentum integrals with any weight $p^n$ must be mutually consistent if we start from the Boltzmann equation.
If, for instance, we instead use the Boltzmann equations integrated with $p^4$ and $p^5$, we find a different numerical factor in (\ref{e:mu}).
Therefore, there exist time discontinuities on both sides of the equalities in (\ref{N_equate}) and (\ref{I_equate}).
Using (\ref{y0:calc:def}) and (\ref{e:mu}) we approximately obtain the following form of $\mu$ distortion from the scalar perturbations:
\begin{align}
\mu_0(\eta_f,k)\sim \int^{\eta_f}_{\eta_i} n_{\rm e}\sigma_{\rm T} a \left(\frac{9}{2} \Theta_2^2-7 \Theta_3^2+9 \Theta_4^2+\cdots\right )d\eta.\label{mu0:calc:def}
\end{align}
\subsection{Monopole formation}
We have mentioned that the $\mu$ formation is non-trivial and our prescription is not applicable to it.
Here let us revisit the monopole terms in (\ref{CK:1}).
These terms actually coincide with those in (\ref{CK:0}).
This implies that the previous numerical simulations based on (\ref{CK:0}) are also applicable to the inhomogeneous case as long as we ignore the higher order multipoles.
As discussed in the previous section, multipoles of the spectral distortions vanish, and only the long wavelength modes of the monopole component are dominant.
In this sense, we expect that the generation of the inhomogeneous distortions can be explained in the same manner as in the previous numerical calculations for the homogeneous $\mu$ distortions~\cite{Hu:1994bz,Chluba:2012gq}.
\subsection{Suppression from the Double Compton scattering}
So far we have introduced the initial redshift for the $\mu$ by referring to the previous numerical works~\cite{Hu:1994bz,Chluba:2012gq}.
It is determined by the double Compton effect, which is a cubic order QED interaction.
The process is crucial for the spectral distortions since it changes the number of photons and erases the distortions.
The derivation of the double Compton scattering collision term is complicated, so we avoid writing the term explicitly here.
Instead, we roughly estimate the time scales of these interactions.
Let $\Gamma_{\rm K}$ and $\Gamma_{\rm DC}$ be the energy transfer rates due to the Compton scattering and the double Compton scattering, respectively.
$\Gamma_{\rm K}$ should be proportional to the scattering event rate $n_{\rm e}\sigma_{\rm T}$.
Note, however, that a large $n_{\rm e}\sigma_{\rm T}$ alone does not imply energy transfer.
Significant energy transfer occurs only if the electrons are relativistic enough to exchange energy with the photons.
This should be characterized by $T_{\rm e}/m_{\rm e}$.
Then, we can estimate the energy transfer rate due to the Compton scattering as follows:
\begin{align}
\Gamma_{\rm K}\sim n_\text{e}\sigma_{\rm T}\frac{T_\gamma}{m_{\rm e}}.
\end{align}
In fact, we can reproduce this relation by using (\ref{CK:0}).
In analogy with the above discussion, we can roughly estimate $\Gamma_{\rm DC}$ as well.
First, the double Compton scattering interaction rate should be proportional to $n_{\rm e}\sigma_{\rm T}\alpha \epsilon^2$, with $\alpha$ being the fine structure constant.
This is because the process is a cubic order QED interaction, and in the $\epsilon \to0$ limit the electron does not emit the second photon due to the energy conservation law~\footnote{
Linear terms in $\epsilon$ do not exist since the scattering cross section should be a Lorentz scalar.}.
Then, the lowest order term can be estimated as
\begin{align}
\Gamma_{\rm DC}\sim n_\text{e}\sigma_T\alpha\left(\frac{T_{\gamma}}{m_\text{e}}\right)^2.\label{timsecale_dc}
\end{align}
Using $\Gamma_{\rm K}$ and $\Gamma_{\rm DC}$, we can estimate the suppression time scale of the $\mu$ distortion.
The $\mu$ distortion varies as a result of both the double Compton scattering and the Compton scattering.
Therefore, the suppression time scale can be given as the inverse of $\Gamma_\mu\sim \sqrt{\Gamma_{\rm DC}\Gamma_{\rm K}}$.
Employing these facts, we roughly obtain $\Gamma_{\mu}\sim 10^{-35}\times(1+z)^{\frac92} {\rm s}^{-1}$ and $\Gamma_{\rm K}\sim 10^{-29}\times(1+z)^{4} {\rm s}^{-1}$.
Comparing these with $H\sim 10^{-20}\times(1+z)^{2}{\rm s}^{-1}$, one finds that the window of the $\mu$ era opens during $\mathcal O(10^5)<z<\mathcal O(10^6)$.
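The crossing redshifts can be solved for directly from the quoted order-of-magnitude rates; the sketch below equates each rate with $H$ (the lower boundary comes out at a few $\times 10^4$, consistent at the order-of-magnitude level with the window quoted above):

```python
# Quoted rates (s^-1): Gamma_mu ~ 1e-35 (1+z)^{9/2},
#                      Gamma_K  ~ 1e-29 (1+z)^4,   H ~ 1e-20 (1+z)^2.
# The mu era closes where Gamma_mu = H (thermalization erases mu above this z)
# and opens where Gamma_K = H (kinetic equilibrium is lost below this z).

z_upper = 10 ** (15 / 2.5) - 1   # Gamma_mu = H  =>  (1+z)^{5/2} = 1e15
z_lower = 10 ** (9 / 2) - 1      # Gamma_K  = H  =>  (1+z)^2    = 1e9

print(f"mu era roughly {z_lower:.1e} < z < {z_upper:.1e}")
```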
\section{Higher order spectral distortions}\label{sec:6}
A product of distribution functions in (\ref{dist:expand}) can be expanded as
\begin{align}
&g(\mathbf{\tilde q}')f(\mathbf {\tilde p}')[1+f(\mathbf {\tilde p})]-g(\mathbf{\tilde q})f(\mathbf{\tilde p})[1+f(\mathbf{\tilde p}')]\notag \\
&=g(\mathbf{\tilde q})\left(f(\mathbf {\tilde p}')-f(\mathbf {\tilde p})+\left(-\frac{(\mathbf {\tilde p}-\mathbf {\tilde p}')^2}{2m_{\rm e}T_{\rm e}}-\frac{(\mathbf{\tilde q}-m_{\rm e} \mathbf v)\cdot (\mathbf {\tilde p}-\mathbf {\tilde p}')}{m_{\rm e}T_{\rm e}}\right)f(\mathbf {\tilde p}')\left[1+f(\mathbf {\tilde p})\right]+\cdots \right)\notag\\
&=g(\mathbf{\tilde q})\left[f(\mathbf {\tilde p}')-f(\mathbf {\tilde p})+\mathcal O\left(\frac{\eta}{\epsilon}\right)\right].
\end{align}
This implies that the collision terms are linear in $f$ if we ignore the momentum transfer corrections coming from $\tilde p/m_{\rm e}$ and $T_{\rm e}/m_{\rm e}$.
We start this section with this Thomson scattering limit.
\subsection{Cubic order ansatz at Thomson limit}
The only dimensionful quantity in the collision terms is the photon momentum.
Therefore, the derivative operators always appear in the form $p\partial/\partial p$.
The cubic order Thomson term $(n_{\rm e}\sigma_{\rm T}a)^{-1}\mathcal C^{(3)}_{\rm T}[f]$ should be written as a linear combination of
\begin{align}
f,\quad p\frac{\partial f}{\partial p},\quad \left(p\frac{\partial }{\partial p}\right)^2f ,\quad \left(p\frac{\partial }{\partial p}\right)^3 f,\label{dim:an}
\end{align}
and their Legendre coefficients with the baryon bulk velocity as explicitly shown in (\ref{col:cubic_thomson}).
Let us introduce the following momentum function:
\begin{align}
\mathcal K(p)=\left(-p\frac{\partial }{\partial p}\right)\mathcal Y(p),
\end{align}
where the momentum integral of $\mathcal K$ with weight $p^2$ vanishes, which motivates the definition of a higher order $y$ distortion.
Using this function, the third order derivative of the Planck distribution is given as
\begin{align}
\left(-p\frac{\partial }{\partial p}\right)^3 f^{(0)}(p)&=\mathcal K +3\mathcal Y+9\mathcal G.
\end{align}
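This identity can be verified numerically, assuming the standard explicit shapes $\mathcal G=-p\,\partial f^{(0)}/\partial p$ and $\mathcal Y=\mathcal G\,[p\coth(p/2)-4]$ (these forms are not restated in this section, so they are assumptions here):

```python
import sympy as sp

p = sp.symbols('p', positive=True)
f0 = 1 / (sp.exp(p) - 1)                # Planck distribution
D = lambda F: -p * sp.diff(F, p)        # the operator (-p d/dp)

G = D(f0)                               # assumed temperature-shift shape
Y = G * (p * sp.coth(p / 2) - 4)        # assumed y-distortion shape
K = D(Y)                                # the shape defined in the text

# Check (-p d/dp)^3 f0 = K + 3Y + 9G at several momenta
diff = D(D(D(f0))) - (K + 3*Y + 9*G)
for val in (0.5, 1.0, 2.5, 7.0):
    assert abs(diff.subs(p, val).evalf()) < 1e-10
```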
Combining the above with (\ref{planck_dist_exp}), one finds the following cubic order terms:
\begin{align}
f^{(3)}=\widetilde \Theta^{(3)}\mathcal G+ \widetilde \Theta^{(1)}\widetilde \Theta^{(2)}\left(3\mathcal G + \mathcal Y\right)
+\frac{\widetilde\Theta^{(1)3}}{3!}\left(9\mathcal G + 3 \mathcal Y+ \mathcal K\right).\label{cubic_expand}
\end{align}
Then we separate the momentum dependence of the temperature perturbations as
\begin{align}
\widetilde \Theta^{(1)}(p)&=\Theta^{(1)}\\
\widetilde \Theta^{(2)}(p)&=\Theta^{(2)}+\frac{\mathcal Y}{\mathcal G}y^{(2)}\\
\widetilde \Theta^{(3)}(p)&=\Theta^{(3)}-\frac{\mathcal Y^2}{\mathcal G^2}\Theta^{(1)} y^{(2)}+\frac{\mathcal Y}{\mathcal G}y^{(3)}+\frac{\mathcal K}{\mathcal G}\kappa^{(3)},
\end{align}
and we can write the cubic order terms as
\begin{align}
f^{(3)}&=\left[\Theta^{(3)}+3\Theta^{(1)}\Theta^{(2)}+\frac32\Theta^{(1)3}\right]\mathcal G\notag\\
&+\left[\Theta^{(1)}\Theta^{(2)}+\frac12\Theta^{(1)3}+3\Theta^{(1)}y^{(2)}+y^{(3)}\right]\mathcal Y\notag \\
&+\left[\frac1{3!}\Theta^{(1)3}+\kappa^{(3)} \right]\mathcal K.\label{cubic_ans}
\end{align}
This is our ansatz for the cubic order Thomson limit Boltzmann equation.
The former discussion suggests that closed equations for the higher order spectral distortions such as $\Theta^{(3)}$, $y^{(3)}$ and $\kappa^{(3)}$ are obtained systematically.
We can reconstruct the distribution function as the sum of a local blackbody and spectral distortions.
(\ref{cubic_expand}) and (\ref{cubic_ans}) yield
\begin{align}
f=\frac{1}{e^{\frac{p}{T_0}e^{-\Theta}}-1}+\left[(1+3\Theta^{(1)})y^{(2)}+y^{(3)}\right]\mathcal Y+\kappa^{(3)}\mathcal K,
\end{align}
where we have defined the momentum independent temperature perturbation as $\Theta = \Theta^{(1)}+\Theta^{(2)}+\Theta^{(3)}$.
The spectral shapes of the momentum basis are shown in Fig.~\ref{fig_1}.
Defining $\alpha=2\mathcal I_1/(3\mathcal I_2)$, the $\mu$ distortion is conventionally expressed not by $\mathcal M$ but by $\mathcal M+ \alpha \mathcal G$, which is the difference between Bose and Planck distributions with the same number density.
$y^{(3)}$ can be a subdominant part of $y^{(2)}$; however, $\kappa^{(3)}$ can be identified through its momentum dependence even if its magnitude is smaller.
\begin{figure}
\includegraphics[width=15cm]{dist_prod.pdf}
\caption{Spectral shapes of the photon number shift, $\mu$ distortion, $y$ distortion and higher order $y$ distortion.
They are rescaled to compare the shapes and the peaks; the rescaling factors are shown in the legend.}
\label{fig_1}
\end{figure}
\subsection{General ansatz at Thomson limit}
The same prescription is available at higher orders as long as the collision terms are linear in the distribution functions.
Let us introduce the $n$-th order momentum function whose momentum integral with weight $p^2$ vanishes:
\begin{align}
\mathcal Y^{(n+1)}(p)=\left(-p\frac{\partial }{\partial p}\right)^n\mathcal Y(p),
\end{align}
where $\mathcal Y^{(1)}=\mathcal Y$ and $\mathcal Y^{(2)}=\mathcal K$.
Then, the $l$-th term in (\ref{planck_dist_exp}) is expressed as
\begin{align}
\left(-p\frac{\partial }{\partial p}\right)^l f^{(0)}(p)&=\mathcal Y^{(l-1)}+3\mathcal Y^{(l-2)}+\cdots +3^{l-2}\mathcal Y^{(1)}+3^{l-1}\mathcal G\\
&=3^{l-1}\mathcal G(p)+\sum^{l-1}_{k=1}3^{l-k-1}\mathcal Y^{(k)}(p).
\end{align}
As discussed around (\ref{dim:an}), the momentum dependence is always expressed by a linear combination of $\mathcal G$, $\mathcal Y^{(1)},\cdots$ and $\mathcal Y^{(n-1)}$.
Using these functions, the $n$-th order distribution function should be written as
\begin{align}
f^{(n)}=\left[\Theta^{(n)}+\cdots\right]\mathcal G +\cdots +\left[\cdots +y^{(n-1,1)}\right]\mathcal Y^{(n-2)}+\left[\frac{1}{n!}\left(\Theta^{(1)}\right)^n+y^{(n,0)}\right]\mathcal Y^{(n-1)},
\end{align}
where $y^{(2)}=y^{(2,0)}$, $y^{(3)}=y^{(2,1)}$ and $\kappa^{(3)}=y^{(3,0)}$.
The number of new parameters appearing in the $n$-th order Thomson limit Boltzmann equations is $n$.
On the other hand, the time derivative of the momentum basis is calculated as
\begin{align}
\mathcal Y^{(n)'}=-(\ln p)'\mathcal Y^{(n+1)}.
\end{align}
Applying the same manipulation as for (\ref{fprimeGY}), the acoustic sources for the higher order distortions can be written as
\begin{align}
y^{(n,0)'}=-\frac{1}{(n-1)!}{\Theta^{(1)}}^{n-1}\mathcal A^{(1)}+\cdots.
\end{align}
Therefore, we always obtain higher order spectral distortions as a result of mode couplings, as in the case of the usual $y$ distortion.
\subsection{First order Kompaneets terms}
So far we have discussed the Thomson limit, ignoring the terms nonlinear in $f$ for simplicity.
The above prescription itself can be powerful since it is applicable to the same class of collision processes; however, in a realistic application to the CMB we should take into account not only the inhomogeneity but also the momentum transfer.
In the second order theory, the Kompaneets terms are comparable to the anisotropic Thomson parts; however, they are homogeneous and do not contribute to the perturbation equations.
The total average part is calculated by combining the result of the Thomson part with the SZ effects.
At cubic order, the momentum transfer is expected to be written as products of the Compton $y$ parameter and the first order anisotropies.
In this case, the linear Kompaneets terms are non-negligible for the perturbation equations.
We now discuss the momentum transfer coming from $p(1+z)/m_{\rm e}$ and $T_{\rm e}/m_{\rm e}$ at cubic order.
From (\ref{CK:0}) and (\ref{CK:1}), we have
\begin{align}
(n_{\rm e}\sigma_{\rm T}a)^{-1}\left(\mathcal C^{(0)}_{\rm K,0}[f]+\mathcal C^{(1)}_{\rm K,0}[f]\right)=\frac{1}{m_{\rm e}p^2}\frac{\partial }{\partial p}p^4\left(T_{\rm e}(1+{\Theta_{\rm e0}})\frac{\partial f_0}{\partial p}+f_0\left[1+f_0\right]\right),\label{617}
\end{align}
where $f_0=f^{(0)}+f^{(1)}_0$, and we replace $T_{\rm e}\to T_{\rm e}(1+{\Theta_{\rm e0}})$ to include the electron temperature perturbation.
The differentiated part can be calculated as
\begin{align}
T_{\rm e}(1+{\Theta_{\rm e0}})\frac{\partial f_0}{\partial p}+f_0\left[1+f_0\right]=\left[T_{\rm e}(1+\Theta_{\rm e0})-T_0 e^{\widetilde\Theta_0}\right]\frac{\partial f_0}{\partial p}-\frac{T_0e^{\widetilde\Theta_0}}{1-p\frac{\partial \widetilde\Theta_0}{\partial p}}\frac{\partial \widetilde\Theta_0}{\partial p}p\frac{\partial f_0}{\partial p}.\label{f0:kernel_Komp}
\end{align}
Therefore, (\ref{617}) yields
\begin{align}
(n_{\rm e}\sigma_{\rm T}a)^{-1}\mathcal C^{(1)}_{\rm K,0}[f]\simeq\frac{1}{m_{\rm e}p^2}\frac{\partial}{\partial p}p^4\left[T_{\rm e}(1+\Theta^{(1)}_{\rm e0})-T_{\gamma} (1+\Theta^{(1)}_0)\right]\frac{\partial f_0}{\partial p}.
\end{align}
The momentum independence of $\Theta^{(1)}$ is important for this expression, and we obtain nontrivial additional terms at higher order.
Thus, the monopole component of the first order Kompaneets equation is obtained as follows~\footnote{The terms proportional to $v$ do not contain the zeroth order distribution function, so they belong to the second order Kompaneets terms.}:
\begin{align}
(n_{\rm e}\sigma_{\rm T}a)^{-1}\mathcal C^{(1)}_{\rm K,0}[f]&=\frac{1}{m_{\rm e}p^2}\frac{\partial}{\partial p}p^4\left[(T_{\rm e}\Theta_{\rm e0}-T_{\gamma}\Theta_0)\frac{\partial f^{(0)}}{\partial p}+
(T_{\rm e}-T_{\gamma})\Theta_0 \frac{\partial \mathcal G}{\partial p}\right]\notag \\
&=\frac{T_{\rm e}\Theta_{\rm e0}-T_{\gamma}\Theta_0}{m_{\rm e}}\mathcal Y+\frac{T_{\rm e}-T_{\gamma} }{m_{\rm e}} \Theta_0 \mathcal K,\label{1st:Komp}
\end{align}
where we use
\begin{align}
\frac{1}{p^2}\frac{\partial }{\partial p}p^4\frac{\partial }{\partial p}=-3\left(-p\frac{\partial }{\partial p}\right)+\left(-p\frac{\partial }{\partial p}\right)^2,
\end{align}
and
\begin{align}
\frac{1}{p^2}\frac{\partial }{\partial p}p^4\frac{\partial }{\partial p}\mathcal G=\mathcal K.
\end{align}
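The operator identity used here can be checked for an arbitrary function with SymPy:

```python
import sympy as sp

p = sp.symbols('p', positive=True)
f = sp.Function('f')(p)

# (1/p^2) d/dp [ p^4 d/dp ]  acting on f
lhs = sp.diff(p**4 * sp.diff(f, p), p) / p**2

# -3(-p d/dp) + (-p d/dp)^2  acting on f
D = lambda F: -p * sp.diff(F, p)
rhs = -3*D(f) + D(D(f))

assert sp.expand(lhs - rhs) == 0
```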
The other cubic order terms should be linear combinations of $\mathcal G$, $\mathcal Y$ and $\mathcal K$, as pointed out above.
\subsection{Linear Sunyaev-Zel'dovich effect}
(\ref{1st:Komp}) has terms proportional to $\mathcal Y$ and $\mathcal K$.
The first term implies that there are additional sources for (\ref{y:eq}).
Assuming that $T_{\rm e}=T_{\gamma}$,
\begin{align}
\frac{T_{\rm e}\Theta_{\rm e0}-T_{\gamma}\Theta_0}{m_{\rm e}}=\frac{T_{\gamma}}{3m_{\rm e}}S_{\rm e \gamma},\label{iso_sz}
\end{align}
where we have defined the baryon isocurvature perturbation as
\begin{align}
S_{\rm e \gamma}=3(\Theta_{\rm e0}-\Theta_0)=\delta_{\rm e}-\frac34\delta_{\gamma}.
\end{align}
This implies that the $yT$ cross correlation function exists even for Gaussian perturbations, provided that there are baryon isocurvature perturbations and that they are cross correlated with the adiabatic ones.
The physical implication of (\ref{iso_sz}) is clear: fluctuations of the relative number density induce additional recoil effects.
These terms may be crucial since $T_{\gamma}/m_{\rm e}=\mathcal O(10^{-9})(1+z)$, which may be comparable to the acoustic source for $z\gtrsim \mathcal O(10^3)$.
On the other hand, for the adiabatic initial condition $\Theta_{\rm e0}=\Theta_0$, one finds
\begin{align}
n_{\rm e}\sigma_{\rm T}a\frac{T_{\rm e}\Theta_{\rm e0}-T_{\gamma}\Theta_0}{m_{\rm e}}=\dot y_{\rm C}\Theta_0.
\end{align}
This should be more important since $y_{\rm C}$ was recently estimated in~\cite{Hill:2015tqa}, and its magnitude is expected to be $10^{-6}$.
Therefore, we roughly expect
\begin{align}
y^{(3)}_0\sim \int d\eta \dot y_{\rm C} \Theta_0\sim y_{\rm C} \Theta_0(z_c),
\end{align}
where $z_c$ is the redshift when SZ effects occur.
The cross correlation with the temperature can be given as
\begin{align}
C^{y^{(3)} T}_{l}\sim 10^{-6}C_l^{TT}.\label{3yt}
\end{align}
Comparing (\ref{3yt}) with the $y^{(2)}T$ cross correlation of non-Gaussian origin, this corresponds to $f^{\rm loc}_{\rm NL}\sim 10$~\cite{Chluba:2016aln}; that is, we cannot ignore this contribution when probing non-Gaussianity with the $yT$ cross correlation. We also point out, however, that the systematic errors for the PIXIE experiment are $10^3$ times larger than the signal~\cite{Kogut:2011xw}, so the problem is not so simple.
The second term in (\ref{1st:Komp}) also implies the other higher order SZ effects.
As in the case with $y^{(3)}$, we estimate the higher order spectral distortion as
\begin{align}
\kappa^{(3)}_0\sim y_{\rm C}\Theta_0(z_c).
\end{align}
This implies that there are two types of linear SZ effects, and we can distinguish this higher order distortion from the former tSZ effect through its momentum dependence.
We should note that the anisotropies in the distortions are connected with the electron gas configuration, and it may be possible to obtain a 3D map of the linear perturbations by using the linear SZ effects.
\subsection{Higher order spectral distortions and a residual distortion}
Recently, a non-$\mu$, non-$y$ type spectral distortion called the \textit{residual distortion} was proposed for the purpose of classifying the actual observational data of the CMB intensity spectrum~\cite{Chluba:2013pya}.
The authors introduced an $n$ dimensional Euclidean space, with $n$ being the number of frequency channels, and characterized the distortions by linearly independent vectors in this space.
The residual distortion is the fourth direction, perpendicular to the temperature shift, $y$ distortion and $\mu$ distortion directions.
In fact, there are $n-3$ linearly independent directions for the residual distortion, and our higher order momentum basis $\mathcal Y^{(n)}$ should be included among them.
The residual distortion mainly describes the thermal history during the $\mu$-$y$ transition period, which cannot be treated in our method due to the non-linearity of the distribution functions in the Kompaneets terms, as seen in~(\ref{f0:kernel_Komp}).
A full parametrization of the residual distortion in a systematic approach should be important for future observational cosmology.
\section{Summary}
The second order temperature perturbations are momentum dependent, in contrast to the zeroth and first order ones.
The momentum is usually integrated out to obtain the second order brightness perturbations, so non-trivial configurations in the momentum spectrum have not been analyzed.
In this paper, we explicitly wrote down the second order Boltzmann equation for the Planck distribution function with a momentum dependent temperature and showed that such a dependence is separated into two ``linearly independent'' functions with corresponding parameters.
One of them is understood as the fluctuation of the local blackbodies, and we derived the evolution equation for the acoustic temperature rise in an explicit way.
The other takes the form of the well known $y$ distortion, which arises from Silk damping.
We derived the exact evolution equation of the distortion and combined it with the homogeneous component coming from the rest of the thermal history.
On the other hand, we also showed that the formation of the spectral $\mu$ distortion is not captured in our framework.
The $\mu$ is a result of frequent momentum transfer, and the momentum independent ansatz does not explain its generation.
In the last section, we also discussed the potential to extend our method to higher order.
In our cases, the linearity of the collision terms in the distribution functions is crucial.
As an example, we investigated the cubic order Boltzmann equations.
We derived the cubic order Thomson terms and the linear Kompaneets terms, and newly defined the higher order $y$ distortion to close the equations.
We also showed that the mode coupling arises as in the case of the $y$ distortion, and found linear SZ effects.
The above method has the potential to apply to wider classes of non-equilibrium physics or non-linear problems.
The basis functions may take different forms depending on the concrete collision terms; however, several classes can be solved systematically, as we have shown in this paper.
For example, Maxwell--Boltzmann distribution functions with several orders of spectral distortions may open another window for the analysis of large scale structure, and the Boltzmann equations for massive neutrinos might be solved in the same manner.
\acknowledgments
We would like to thank Enrico Pajer, Jens Chluba and Wayne Hu for many helpful discussions.
We would like to thank Masahide Yamaguchi for a lot of helpful comments.
We would like to thank Jens Chluba for careful reading of our manuscript.
The author is supported by a Grant-in-Aid for Japan Society for the Promotion of Science~(JSPS) Fellows.
\section{Introduction}
Many problems of classical Euclidean geometry explicitly or
implicitly consider objects up to similarity. Understanding and
using similarity is an important geometric competence for
schoolchildren.
Recall that two geometric figures $A$ and $B$ are similar if $B$
can be obtained from $A$ by a finite composition of
translations, rotations, reflections and dilations (homotheties).
Similarity is an equivalence relation and thus, for example, the
set of all triangles in a plane is partitioned into similarity
equivalence classes which can be identified with \sl similarity
types\rm\ of triangles.
In many areas of mathematics objects are studied up to equivalence
relations. Depending on the situation and traditions this is done
explicitly, implicitly or inadvertently. The problem of finding
distinguished (\sl canonical, normal\rm) representatives of
equivalence classes of objects is posed. Alternatively, it is the
problem of mapping the quotient set injectively back to the
original set. Let $X$ be a set with an equivalence relation $\sim$
or, equivalently, $R\subseteq X\times X$, and denote the equivalence
class of $x\in X$ by $[x]$. Let $\pi:X\rightarrow X/R$ such that
$\pi(x)=[x]$ be the canonical projection map. We call a map
$\sigma: X/R \rightarrow X$ a \sl normal object map\rm\ provided 1)
$\sigma$ is injective and 2) $\pi\circ \sigma=id_{X/R}$. For
example, there are various normal forms of matrices, such as the
Jordan normal form. See \cite{Sh} for examples of normal forms in
algebra and \cite{P} for a related recent work. Normal objects are
designed for educational, pure research (e.g. for classification)
and applied reasons. Normal objects are constructed as objects of
simple, minimalistic design, to show essential properties and
parameters of the original objects. Often it is easier to solve a
problem for normal objects first and extend the solution to
arbitrary objects afterwards. Normal objects which are initially
designed for educational, pure research or problem solving
purposes are also used to optimize computations.
In elementary Euclidean geometry the normal map approach does not
seem to be popular when working with simple discrete objects such
as triangles. This may be related to the traditional dominance of
synthetic geometry in school mathematics at the expense of the
coordinate/analytic approach. We can pose the problem of
introducing and using normal forms of triangles up to similarity.
This means describing a set $S$ of mutually non-similar triangles
such that any triangle in the plane is similar to a triangle
in $S$. We assume that Cartesian coordinates are introduced in the
plane; $S$ is designed using the Cartesian coordinates. For
triangles we offer three normal forms based on side lengths. Using
these normal forms the set of triangle similarity types is
bijectively mapped to a fixed plane domain bounded by lines and
circles. For these forms two vertices are fixed and the third
vertex belongs to this finite domain; we call them \sl the one
vertex normal forms.\rm\ The one vertex normal forms are also
generalized to quadrilaterals. Another normal form for triangles
is based on angles and circumscribed circles. For this form one
constant vertex is fixed on the unit circle and two other variable
vertices also belong to the unit circle; we call this form \sl the
circle normal form.\rm\
These normal forms may be useful in solving geometry problems
involving similarity and in teaching geometry. The paper may be
useful for mathematics educators.
\section{Main results}
\subsection{Normal forms of triangles}
\subsubsection{Notations}
Consider $\mathbb{R}^{2}$ with a Cartesian system of coordinates
$(x,y)$ and center $O$. We think of classical triangles as being
encoded by their vertices. Strictly speaking by the triangle
$\triangle XYZ$ we mean the multiset $\{\{X,Y,Z\}\}$ of three
points in $\mathbb{R}^2$ each point having multiplicity at most
$2$. A triangle is called degenerate if its points lie on a line.
Given $\triangle ABC$ we denote $\angle BAC=\alpha$, $\angle
ABC=\beta$, $\angle ACB=\gamma$, $|BC|=a$, $|AC|=b$, $|AB|=c$. We
exclude multisets of type $\{\{X,X,X\}\}$.
We will use the following affine transformations of
$\mathbb{R}^{2}$: 1) translations, 2) rotations, 3) reflections
with respect to an axis, 4) dilations (given by the rule
$(x,y)\rightarrow (cx,cy)$ for some $c\in \mathbb{R}\backslash
\{0\}$). It is known that these transformations generate the \sl
dilation group\rm\ of $\mathbb{R}^{2}$, denoted by some authors as
$IG(2)$, see \cite{H}, \cite{P}. Two triangles $T_{1}$ and $T_{2}$
are called similar if there exists $g\in IG(2)$ such that
$g(T_{1})=T_{2}$ (as multisets). If triangles $T_{1}$ and $T_{2}$
are similar, we write $T_{1}\sim T_{2}$ or $\triangle
X_{1}Y_{1}Z_{1}\sim \triangle X_{2}Y_{2}Z_{2}$.
The point $(x_{1},y_{1})$ is lexicographically smaller than the
point $(x_{2},y_{2})$ and denoted as $(x_{1},y_{1})\prec
(x_{2},y_{2})$ provided ($x_{1}<x_{2}$) or ($x_{1}=x_{2}$ and
$y_{1}<y_{2}$). The lexicographical order of points can be
extended to lexicographical ordering of sequences of points: the
sequence of points $[p_{1},p_{2}]$ is lexicographically smaller
than the sequence $[q_{1},q_{2}]$ denoted by $[p_{1},p_{2}]\prec
[q_{1},q_{2}]$ provided ($p_{1}\prec q_{1}$) or ($p_{1}=q_{1}$ and
$p_{2}\prec q_{2}$).
We use normal letters to denote fixed objects and calligraphic
letters, such as $\mathcal{A}$, to denote objects as function values.
\subsubsection{The $C$-vertex normal form}\label{1}
A normal form can be obtained by transforming the longest side of
the triangle into a unit interval of the $x$-axis. We call it \sl
the $C$-normal form.\rm\
In this subsection $A=(0,0)$ and $B=(1,0)$.
\begin{definition}
Let $S_{C}\subseteq \mathbb{R}^{2}$ be the domain in the first
quadrant bounded by the lines $y=0$, $x=\frac{1}{2}$ and the
circle $x^2+y^2=1$, see Figure 1.
\begin{center}
\epsfysize=70mm
\epsfbox{dom_C1_math.eps}
Fig.1. - the domain $S_{C}$.
\end{center}
In other terms, $S_{C}$ is the set of solutions of the system of
inequalities
$$
\left\{
\begin{array}{ll}
y\ge 0 \\
x\ge \frac{1}{2}\\
x^2+y^2\le 1. \\
\end{array}
\right.
$$
\end{definition}
\begin{theorem} Every triangle $UVW$ (including degenerate triangles)
in $\mathbb{R}^{2}$ is similar to a triangle $AB\mathcal{C}$,
where $A=(0,0)$, $B=(1,0)$ and $\mathcal{C}\in S_{C}$.
\end{theorem}
\begin{proof} Let $\triangle UVW$ have side lengths $a,b,c$
satisfying $a\le b\le c$. Perform the following sequence of
transformations:
\begin{enumerate}
\item translate and rotate the triangle so that the longest side
is on the $x$-axis, one vertex has coordinates $(0,0)$ and another
vertex has coordinates $(c,0)$, $c>0$;
\item if the third vertex has negative $y$-coordinate, reflect the
triangle with respect to the $x$-axis;
\item do the dilation with coefficient $\frac{1}{c}$, note that
the vertices on the $x$-axis have coordinates $(0,0)$ and $(1,0)$,
the third vertex has coordinates $(x'_{C},y'_{C})$, where
$x'^2_{C}+y'^2_{C}\le 1$ and $(x'_{C}-1)^2+y'^2_{C}\le 1$;
\item if $x'_{C}<\frac{1}{2}$, then reflect the triangle with
respect to the line $x=\frac{1}{2}$, denote the third vertex by
$\mathcal{C}=(x_{C},y_{C})$, by construction we have that
$\mathcal{C}\in S_{C}$.
\end{enumerate}
The image of the initial triangle $\triangle UVW$ is the triangle
$AB\mathcal{C}$, where $\mathcal{C}\in S_{C}$. All transformations
preserve similarity type therefore $\triangle UVW\sim \triangle
AB\mathcal{C}$.
\end{proof}
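For illustration, the algorithm can be traced on a concrete
triangle. Consider $U=(0,0)$, $V=(5,0)$, $W=(1,2)$, with side
lengths $|UW|=\sqrt{5}$, $|VW|=\sqrt{20}$, $|UV|=5$. The longest
side $UV$ already lies on the $x$-axis, so steps 1 and 2 change
nothing, and the dilation with coefficient $\frac{1}{5}$ sends $W$
to $(\frac{1}{5},\frac{2}{5})$. Since $\frac{1}{5}<\frac{1}{2}$,
the reflection with respect to the line $x=\frac{1}{2}$ gives
$\mathcal{C}=(\frac{4}{5},\frac{2}{5})$, and indeed
$\mathcal{C}\in S_{C}$ because
$(\frac{4}{5})^2+(\frac{2}{5})^2=\frac{4}{5}\le 1$. Note that
$(\frac{4}{5}-\frac{1}{2})^2+(\frac{2}{5})^2=\frac{1}{4}$, so
$\mathcal{C}$ lies on the semicircle
$(x-\frac{1}{2})^2+y^2=(\frac{1}{2})^2$, in agreement with the
fact that $\triangle UVW$ has a right angle at $W$.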
\begin{theorem} If $C_{1}\in S_{C}$, $C_{2}\in S_{C}$ and $C_{1}\neq C_{2}$, then $\triangle ABC_{1}\not\sim\triangle ABC_{2}$.
\end{theorem}
\begin{proof} If $\angle C_{1}AB=\angle C_{2}AB$ and $C_{1}\neq C_{2}$, then $\angle C_{1}BA\neq \angle
C_{2}BA$. By equality of angles for similar triangles it follows
that $\triangle ABC_{1}\not\sim\triangle ABC_{2}$.
Let $\angle C_{1}AB\neq \angle C_{2}AB$. The angle $\angle C_{i}AB$ is
the smallest angle in $\triangle ABC_{i}$. By equality of angles
for similar triangles it again follows that $\triangle
ABC_{1}\not\sim\triangle ABC_{2}$.
\end{proof}
\begin{definition} A point $\mathcal{C}\in S_{C}$ such that $\triangle AB\mathcal{C}\sim \triangle
UVW$ is called \sl the $C$-normal point\rm\ of $\triangle UVW$.
\end{definition}
\begin{definition} The $C$-vertex normal form of $\triangle UVW$ is
$\triangle AB\mathcal{C}$, where $\mathcal{C}\in S_{C}$ is the
$C$-normal point of $\triangle UVW$.
\end{definition}
\begin{remark} Denote by $R_{C}$ the intersection of the circle
$(x-\frac{1}{2})^2+y^2=(\frac{1}{2})^2$ and $S_{C}$. Points of
$R_{C}$ correspond to right angle triangles. Points below and
above $R_{C}$ correspond to, respectively, obtuse and acute
triangles, see Figure 2.
Points on the intersection of the line $x=\frac{1}{2}$ and $S_{C}$
correspond to isosceles triangles with $a=b$. Points on the
intersection of the circle $x^2+y^2=1$ and $S_{C}$ correspond to
isosceles triangles with $b=c$ (these are acute). Points in the interior of $S_{C}$
correspond to scalene triangles. The point
$(\frac{1}{2},\frac{\sqrt{3}}{2})$ corresponds to the equilateral
triangle. Points on the intersection of the line $y=0$ and $S_{C}$
correspond to degenerate triangles. $\mathcal{C}=B$ for triangles
having side lengths $0,c,c$.
\end{remark}
\begin{remark} A similar normal form can be obtained reflecting
$S_{C}$ with respect to the line $x=\frac{1}{2}$.
\end{remark}
\begin{center}
\epsfysize=60mm
\epsfbox{dom_C2_math.eps}
Fig.2. - the subdomains of $S_{C}$ corresponding to obtuse and
acute triangles.
\end{center}
\subsubsection{The $B$-vertex normal form}
Another normal form can be obtained by transforming the median
length side (in the sense of ordering) of the triangle into a unit
interval of the $x$-axis. By analogy it is called \sl the
$B$-normal form.\rm\
In this subsection $A=(0,0)$ and $C=(1,0)$.
\begin{definition}
Let $S_{B}\subseteq \mathbb{R}^{2}$ be the domain in the first
quadrant bounded by the line $y=0$ and the circles $x^2+y^2=1$ and
$(x-1)^2+y^2=1$, see Figure 3.
\begin{center}
\epsfysize=60mm
\epsfbox{dom_B1_math.eps}
Fig.3. - the domain $S_{B}$.
\end{center}
In other terms, $S_{B}$ is the set of solutions of the system of
inequalities
$$
\left\{
\begin{array}{ll}
y\ge 0 \\
x^2+y^2\ge 1\\
(x-1)^2+y^2\le 1.\\
\end{array}
\right.
$$
\end{definition}
\begin{theorem} Every triangle $UVW$ (including degenerate triangles)
in $\mathbb{R}^{2}$ is similar to a triangle $A\mathcal{B}C$,
where $A=(0,0)$, $C=(1,0)$ and $\mathcal{B}\in S_{B}$.
\end{theorem}
\begin{proof} Let $\triangle UVW$ have side lengths $a,b,c$
satisfying $a\le b\le c$. Perform the following sequence of
transformations:
\begin{enumerate}
\item translate and rotate the triangle so that the side of length
$b$ is on the $x$-axis, one vertex has coordinates $(0,0)$ and
another vertex has coordinates $(b,0)$, $b>0$;
\item if the third vertex has negative $y$-coordinate, reflect the
triangle with respect to the $x$-axis;
\item do the dilation with coefficient $\frac{1}{b}$, note that
the vertices on the $x$-axis have coordinates $(0,0)$ and $(1,0)$,
at this point the third vertex $\mathcal{B}$ has coordinates
$(x'_{B},y'_{B})$, where $y'_{B}\ge 0$ and either
$x'^2_{B}+y'^2_{B}\ge 1$ and $(x'_{B}-1)^2+y'^2_{B}\le 1$, or
$x'^2_{B}+y'^2_{B}\le 1$ and $(x'_{B}-1)^2+y'^2_{B}\ge 1$;
\item if $x'_{B}<\frac{1}{2}$, reflect the triangle with respect
to the line $x=\frac{1}{2}$ (this reflection interchanges the
circles $x^2+y^2=1$ and $(x-1)^2+y^2=1$), now the third vertex
$\mathcal{B}$ has new coordinates $(x_{B},y_{B})$, where
$x_{B}\ge \frac{1}{2}$, $y_{B}\ge 0$, $x^2_{B}+y^2_{B}\ge 1$,
$(x_{B}-1)^2+y^2_{B}\le 1$.
\end{enumerate}
The image of the initial triangle $\triangle UVW$ is the triangle
$A\mathcal{B}C$, where $\mathcal{B}\in S_{B}$. All transformations
preserve similarity type therefore $\triangle UVW\sim \triangle
A\mathcal{B}C$.
\end{proof}
\begin{theorem} If $B_{1}=(x_{1},y_{1})\in S_{B}$, $B_{2}=(x_{2},y_{2})\in S_{B}$ and $B_{1}\neq B_{2}$, then $\triangle
AB_{1}C\not\sim\triangle AB_{2}C$.
\end{theorem}
\begin{proof} The angle $\angle B_{i}AC$ is
the smallest angle in the triangle $\triangle AB_{i}C$.
If $\angle B_{1}AC\neq \angle B_{2}AC$, then, since these are
smallest angles in the triangles, it follows that $\triangle
AB_{1}C\not\sim \triangle AB_{2}C$.
If $\angle B_{1}AC=\angle B_{2}AC$ and $B_{1}\neq B_{2}$, then
$\angle AB_{1}C\neq \angle AB_{2}C$. Since the smallest angles of
the two triangles are equal while the angles at $B_{i}$ differ,
the angle triples of the triangles differ, therefore $\triangle
AB_{1}C\not\sim \triangle AB_{2}C$.
\end{proof}
\begin{definition} A point $\mathcal{B}\in S_{B}$ such that $\triangle A\mathcal{B}C\sim \triangle
UVW$ is called \sl the $B$-normal point\rm\ of $\triangle UVW$.
\end{definition}
\begin{definition} The $B$-vertex normal form of $\triangle UVW$ is
$\triangle A\mathcal{B}C$, where $\mathcal{B}\in S_{B}$ is the
$B$-normal point of $\triangle UVW$.
\end{definition}
\begin{remark} Denote by $R_{B}$ the intersection of the ray
$x=1$, $y\ge 0$ and $S_{B}$. Points of $R_{B}$ correspond to right
angle triangles. Points to the right and left of $R_{B}$
correspond to, respectively, obtuse and acute triangles, see
Figure 4.
\begin{center}
\epsfysize=60mm
\epsfbox{dom_B2_math.eps}
Fig.4. - the subdomains of $S_{B}$ corresponding to obtuse and
acute triangles.
\end{center}
Points on the intersection of the circle $x^2+y^2=1$ and $S_{B}$
correspond to isosceles triangles with $b=c$ (these are acute).
Points on the intersection of the circle $(x-1)^2+y^2=1$ and
$S_{B}$ correspond to isosceles triangles with $a=b$. Points in
the interior of $S_{B}$
correspond to scalene triangles. The point
$(\frac{1}{2},\frac{\sqrt{3}}{2})$ corresponds to the equilateral
triangle. Points on the intersection of the line $y=0$ and $S_{B}$
correspond to degenerate triangles. $\mathcal{B}=C$ for triangles
having side lengths $0,c,c$.
\end{remark}
\subsubsection{$A$-vertex normal form}
Finally a normal form can be obtained by transforming the shortest
side of the triangle into a unit interval of the $x$-axis. By
analogy it is called \sl the $A$-normal form.\rm\ In this case
again two vertices on the $x$-axis are $(0,0)$ and $(1,0)$, the
domain $S_{A}$ of possible positions of the third vertex is
unbounded.
In this subsection $B=(0,0)$ and $C=(1,0)$.
\begin{definition}
Let $S_{A}\subseteq \mathbb{R}^{2}$ be the unbounded domain in the
first quadrant bounded by the lines $y=0$, $x=\frac{1}{2}$ and the
circle $(x-1)^2+y^2=1$, see Figure 5.
In other terms, $S_{A}$ is the set of solutions of the system of
inequalities
$$
\left\{
\begin{array}{ll}
y\ge 0 \\
x\ge \frac{1}{2}\\
(x-1)^2+y^2\ge 1.\\
\end{array}
\right.
$$
\end{definition}
\begin{center}
\epsfysize=60mm
\epsfbox{dom_A2_math.eps}
Fig.5. - the domain $S_{A}$.
\end{center}
\begin{theorem} Every triangle $UVW$ (including degenerate
triangles but excluding the similarity type having side lengths
$0,c,c$) in $\mathbb{R}^{2}$ is similar to a triangle
$\mathcal{A}BC$, where $B=(0,0)$, $C=(1,0)$ and $\mathcal{A}\in
S_{A}$.
\end{theorem}
\begin{proof} Let $\triangle UVW$ have side lengths $a,b,c$
satisfying $a\le b\le c$. Perform the following sequence of
transformations:
\begin{enumerate}
\item translate and rotate the triangle so that the side of length
$a$ is on the $x$-axis, one vertex has coordinates $(0,0)$ and
another vertex has coordinates $(a,0)$, $a>0$, the side of length
$c$ is incident to the vertex $(0,0)$;
\item if the third vertex has negative $y$-coordinate, reflect the
triangle with respect to the $x$-axis;
\item do the dilation with coefficient $\frac{1}{a}$, note that
the vertices on the $x$-axis have coordinates $(0,0)$ and $(1,0)$;
\item if the third point has the $x$-coordinate less than
$\frac{1}{2}$, reflect the triangle with respect to the line
$x=\frac{1}{2}$, now the third vertex $\mathcal{A}$ has
coordinates $(x_{0},y_{0})$, where $x_{0}\ge \frac{1}{2}$,
$y_{0}\ge 0$, $(x_{0}-1)^2+y^2_{0}\ge 1$.
\end{enumerate}
The image of the initial triangle $\triangle UVW$ is the triangle
$\mathcal{A}BC$, where $\mathcal{A}\in S_{A}$. All transformations
preserve similarity type therefore $\triangle UVW\sim \triangle
\mathcal{A}BC$.
\end{proof}
\begin{theorem} Let $B=(0,0)$, $C=(1,0)$. If $A_{1}=(x_{1},y_{1})\in S_{A}$, $A_{2}=(x_{2},y_{2})\in S_{A}$ and $A_{1}\neq A_{2}$, then $\triangle
A_{1}BC\not\sim\triangle A_{2}BC$.
\end{theorem}
\begin{proof} The angle $\angle BCA_{i}$ is
the largest angle in the triangle $\triangle A_{i}BC$.
If $\angle BCA_{1}\neq \angle BCA_{2}$, then since these are
largest angles in the triangles it follows that $\triangle
A_{1}BC\not\sim \triangle A_{2}BC$.
If $\angle BCA_{1}=\angle BCA_{2}$ and $A_{1}\neq A_{2}$, then
$\angle BA_{1}C\neq \angle BA_{2}C$. $\angle BA_{i}C$ is the smallest
angle in $\triangle A_{i}BC$, therefore $\angle BA_{1}C\neq \angle
BA_{2}C$ implies $\triangle A_{1}BC\not\sim \triangle A_{2}BC$.
\end{proof}
\begin{definition} A point $\mathcal{A}\in S_{A}$ such that $\triangle \mathcal{A}BC\sim \triangle
UVW$ is called \sl the $A$-normal point\rm\ of $\triangle UVW$.
\end{definition}
\begin{definition} The $A$-vertex normal form of $\triangle UVW$ is
$\triangle \mathcal{A}BC$, where $\mathcal{A}\in S_{A}$ is the
$A$-normal point of $\triangle UVW$.
\end{definition}
\begin{remark} Denote by $R_{A}$ the intersection of the ray
$x=1$, $y\ge 0$ and $S_{A}$. Points of $R_{A}$ correspond to right
angle triangles. Points to the right and left of $R_{A}$
correspond to, respectively, obtuse and acute triangles, see
Figure 6.
\begin{center}
\epsfysize=60mm
\epsfbox{dom_A3_math.eps}
Fig.6. - the subdomains of $S_{A}$ corresponding to obtuse and
acute triangles.
\end{center}
Points on the intersection of the line $x=\frac{1}{2}$ and $S_{A}$
correspond to isosceles acute triangles. Points on the
intersection of the circle $(x-1)^2+y^2=1$ and $S_{A}$ correspond
to isosceles triangles with $a=b$. Points in the interior of $S_{A}$
correspond to scalene triangles. The point
$(\frac{1}{2},\frac{\sqrt{3}}{2})$ corresponds to the equilateral
triangle. Points on the intersection of the line $y=0$ and $S_{A}$
correspond to degenerate triangles excluding the similarity type
with side lengths $0,c,c$, which corresponds to the point at
infinity. In contrast to the $C$-vertex and $B$-vertex normal
forms, the triangles of the $A$-vertex normal form are not
bounded, since the domain $S_{A}$ is unbounded.
\end{remark}
\subsubsection{The circle normal form}
Consider $\mathbb{R}^{2}$ with a Cartesian system of coordinates
$(x,y)$ and center $O$. We also consider polar coordinates
$[r,\varphi]$ introduced in the standard way: the polar angle
$\varphi$ is measured from the positive $x$-axis going
counterclockwise.
Note that angles $\alpha,\beta,\gamma$ of a nondegenerate triangle
such that $\alpha\le \beta\le \gamma$ satisfy the system of
inequalities
$$
\left\{
\begin{array}{ll}
0<\alpha\le \frac{\pi}{3},\\
\alpha\le \beta\le \frac{\pi-\alpha}{2}.\\
\end{array}
\right.
$$
Similarity types of nondegenerate triangles are parametrized by
one point in the domain in $(\alpha,\beta)$-plane determined by
the system
$$
\left\{
\begin{array}{ll}
\alpha>0,\\
\beta\ge\alpha,\\
\beta\le\frac{\pi}{2}-\frac{\alpha}{2}.\\
\end{array}
\right.
$$
See Fig.7.
\begin{center}
\epsfysize=50mm
\epsfbox{alpha_beta_3.eps}
Fig.7. - parametrization of similarity types by $(\alpha,\beta)$.
\end{center}
For the normal form described in this subsection the vertex with
the largest angle will be fixed at $(1,0)$; to be consistent with
previous notations we define $C=(1,0)$. For this normal form only
nondegenerate triangles are considered.
In this case normal form triangles are inscribed in the unit
circle $\mathbb{U}=\{x^2+y^2=1\}$ having $C$ as one of the
vertices.
\begin{definition} A triangle $\triangle ABC$ inscribed in $\mathbb{U}$ is called \sl normal circle
triangle\rm\ if
\begin{enumerate}
\item $0<\alpha\le \frac{\pi}{3}$,
\item $\alpha\le \beta\le \frac{\pi}{2}-\frac{\alpha}{2}$,
\item $C=(1,0)$,
\item the point $A$ is above $x$-axis,
\item the point $B$ is below $x$-axis.
\end{enumerate}
\end{definition}
\begin{remark} For a normal triangle $\triangle ABC$ we have that
$\alpha\le \beta \le \gamma$.
\end{remark}
\begin{remark} A normal triangle with angles $\alpha\le \beta\le \gamma$ can be constructed in the
following way:
\begin{enumerate}
\item choose a point $B$ below the $x$-axis with the argument
equal to $-2\alpha$, where $0<2\alpha\le \frac{2\pi}{3}$;
\item draw the bisector of $\angle BOC$, denote the intersection
of this bisector with the arc $BC$ having central angle
$2\pi-2\alpha$ by $D$;
\item find the point $\widetilde{B}$ which is symmetric to $B$
with respect to the $x$-axis;
\item choose a point $A$ in the shorter arc $\widetilde{B}D$.
\end{enumerate}
See Figure 8.
\begin{center}
\epsfysize=70mm
\epsfbox{circle.eps}
Fig.8. - construction of a normal circle triangle.
\end{center}
\end{remark}
\begin{theorem} For every nondegenerate triangle $\triangle UVW$ there exists a
normal circle triangle $\triangle \mathcal{A}\mathcal{B}C$ such
that $\triangle UVW\sim \triangle \mathcal{AB}C$.
\end{theorem}
\begin{proof} Suppose $\triangle UVW$ has angles $\alpha\le \beta\le \gamma$.
Let $\mathcal{B}\in \mathbb{U}$ be the point with polar
coordinates $[1,-2\alpha]$. Let $\mathcal{A}\in \mathbb{U}$ be the
point with polar coordinates $[1,2\beta]$. Then since $\triangle
\mathcal{AB}C$ is inscribed in $\mathbb{U}$ we have that $\angle
\mathcal{BA}C=\alpha$, $\angle \mathcal{AB}C=\beta$ and thus
$\triangle \mathcal{AB}C\sim \triangle UVW$.
\end{proof}
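For example, for a triangle with angles
$\alpha=\beta=\frac{\pi}{4}$, $\gamma=\frac{\pi}{2}$ this
construction gives $\mathcal{B}=[1,-\frac{\pi}{2}]=(0,-1)$ and
$\mathcal{A}=[1,\frac{\pi}{2}]=(0,1)$; the side $\mathcal{AB}$ is
a diameter of $\mathbb{U}$, in agreement with the right angle at
$C=(1,0)$.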
\begin{theorem} Let $\triangle A_{1}B_{1}C$ and $\triangle A_{2}B_{2}C$ be
two distinct normal circle triangles: $A_{1}\neq A_{2}$ or
$B_{1}\neq B_{2}$. Then $\triangle A_{1}B_{1}C\not\sim\triangle
A_{2}B_{2}C$.
\end{theorem}
\begin{proof} If $B_{1}\neq B_{2}$, then $\angle B_{1}A_{1}C\neq \angle
B_{2}A_{2}C$. The angle $\angle B_{i}A_{i}C$ is the smallest angle of
$\triangle A_{i}B_{i}C$. We have that $\angle B_{1}A_{1}C\neq
\angle B_{2}A_{2}C$ implies $\triangle
A_{1}B_{1}C\not\sim\triangle A_{2}B_{2}C$.
If $A_{1}\neq A_{2}$ and $B_{1}=B_{2}$, then $\angle
A_{1}CB_{1}\neq \angle A_{2}CB_{2}$. The angle $\angle A_{i}CB_{i}$ is
the largest angle of $\triangle A_{i}B_{i}C$. We have that $\angle
A_{1}CB_{1}\neq \angle A_{2}CB_{2}$ in this case implies
$\triangle A_{1}B_{1}C\not\sim\triangle A_{2}B_{2}C$.
\end{proof}
\begin{remark} The only isosceles normal triangles are normal triangles of type $\triangle B\widetilde{B}C$ and $\triangle
BDC$. Right normal triangles are normal triangles with $AB$
passing through $O$. Acute (resp. obtuse) normal triangles are
normal triangles with $O$ inside (resp. outside) $\triangle ABC$.
In contrast to the one vertex normal forms, the side lengths of
circle normal triangles are not bounded from below.
\end{remark}
\begin{remark} Other normal forms of this type can be designed
choosing another point instead of $(1,0)$ and rearranging triangle
points.
\end{remark}
\subsubsection{Conversions}
\begin{definition} Given a triangle with side lengths $a,b,c$
define $N_{X}(a,b,c)$ to be the Cartesian plane coordinates of the
$X$-normal point ($X\in \{A,B,C\}$) corresponding to this
triangle. Note that $N_{X}$ is a symmetric function. We can also
think of arguments of $N_{X}$ as multisets and think that
$N_{X}(a,b,c)=N_{X}(L)$, where $L$ is the multiset
$\{\{a,b,c\}\}$.
\end{definition}
\begin{proposition} Let $\triangle ABC$ have side lengths $a\le b\le c$.
Then
\begin{enumerate}
\item $N_{C}(a,b,c)=\Big(\frac{-a^2+b^2+c^2}{2c^2},
\frac{\sqrt{-a^4-b^4-c^4+2(a^2b^2+a^2c^2+b^2c^2)}}{2c^2}\Big)$;
\item $N_{B}(a,b,c)=\Big(\frac{-a^2+b^2+c^2}{2b^2},
\frac{\sqrt{-a^4-b^4-c^4+2(a^2b^2+a^2c^2+b^2c^2)}}{2b^2}\Big)$;
\item $N_{A}(a,b,c)=\Big(\frac{a^2-b^2+c^2}{2a^2},
\frac{\sqrt{-a^4-b^4-c^4+2(a^2b^2+a^2c^2+b^2c^2)}}{2a^2}\Big)$.
\end{enumerate}
\end{proposition}
\begin{proof}
1. Translate, rotate and reflect $\triangle ABC$ so that
$A=(0,0)$, $B=(c,0)$ and $C=(x,y)$ is in the first quadrant. For
$(x,y)$ we have the system $$\left\{
\begin{array}{ll}
x^2+y^2=b^2\\
(c-x)^2+y^2=a^2\\
\end{array}
\right. $$
and find $$\left\{
\begin{array}{ll}
x=\frac{-a^2+b^2+c^2}{2c}\\
y=\frac{\sqrt{-a^4-b^4-c^4+2(a^2b^2+a^2c^2+b^2c^2)}}{2c}\\
\end{array}
\right. $$
After the dilation by coefficient $\frac{1}{c}$ we get the given
formula.
2. and 3. are proved in a similar way.
\end{proof}
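For example, for the right triangle with side lengths $a=3$,
$b=4$, $c=5$ we have
$-a^4-b^4-c^4+2(a^2b^2+a^2c^2+b^2c^2)=576$, hence
$N_{C}(3,4,5)=(\frac{16}{25},\frac{12}{25})$,
$N_{B}(3,4,5)=(1,\frac{3}{4})$ and $N_{A}(3,4,5)=(1,\frac{4}{3})$.
The point $N_{C}$ lies on the semicircle
$(x-\frac{1}{2})^2+y^2=(\frac{1}{2})^2$ and the points $N_{B}$,
$N_{A}$ lie on the ray $x=1$, $y\ge 0$, so all three normal forms
detect that the triangle is right.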
\begin{proposition}
Let a triangle $T$ have angles $\alpha\le\beta\le\gamma$.
Then
\begin{enumerate}
\item its $C$-normal point is $N_{C}(\frac{\sin\alpha}{\sin
\gamma},\frac{\sin\beta}{\sin\gamma},1)$;
\item if $T$ has the $C$-normal point $(x,y)$, then it has angles
$\alpha=\arctan{\frac{y}{x}}$, $\beta=\arctan{\frac{y}{1-x}}$,
$\gamma=\pi-\arctan{\frac{y}{x}}-\arctan{\frac{y}{1-x}}$.
\item its $B$-normal point is $N_{B}(\frac{\sin\alpha}{\sin
\beta},\frac{\sin\gamma}{\sin\beta},1)$;
\item if $T$ has the $B$-normal point $(x,y)$, then it has angles
$\alpha=\arctan\frac{y}{x}$,
$\beta=-\arctan\frac{y}{x}+\arctan\frac{y}{x-1}$,
$\gamma=\pi-\arctan\frac{y}{x-1}$;
\item its $A$-normal point is $N_{A}(\frac{\sin\beta}{\sin
\alpha},\frac{\sin\gamma}{\sin\alpha},1)$;
\item if $T$ has the $A$-normal point $(x,y)$, then it has angles
$\alpha=-\arctan\frac{y}{x}+\arctan\frac{y}{x-1}$,
$\beta=\arctan\frac{y}{x}$, $\gamma=\pi-\arctan\frac{y}{x-1}$.
\end{enumerate}
\end{proposition}
\begin{proof}
1. Let $\triangle ABC$ be the $C$-normal triangle with angles
$\alpha\le \beta \le \gamma$, i.e. $|AB|=1$. By the law of sines
we have $b=|AC|=\frac{\sin \beta}{\sin \gamma}$ and $a=\frac{\sin
\alpha}{\sin\gamma}$. By definition $C$ has coordinates
$N_{C}(\frac{\sin\alpha}{\sin\gamma},\frac{\sin\beta}{\sin\gamma},1)$.
2. Let $CD$ be a height of $\triangle ABC$. Formulas for angles
are obtained considering $\triangle ACD$ and $\triangle BCD$.
3., 4., 5., 6. are proved similarly.
\end{proof}
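For example, a triangle with angles $\alpha=\frac{\pi}{6}$,
$\beta=\frac{\pi}{3}$, $\gamma=\frac{\pi}{2}$ has the $C$-normal
point
$N_{C}(\frac{1}{2},\frac{\sqrt{3}}{2},1)=(\frac{3}{4},\frac{\sqrt{3}}{4})$,
and conversely
$\arctan\frac{\sqrt{3}/4}{3/4}=\frac{\pi}{6}$ and
$\arctan\frac{\sqrt{3}/4}{1-3/4}=\frac{\pi}{3}$, recovering the
angles.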
\subsection{Normal forms of quadrilaterals}
In this subsection we consider multisets of $4$ points in a plane.
A multiset of $4$ points can be interpreted as a quadrilateral. We
exclude the case of one point of multiplicity $4$. The multiset
$Q=\{\{X,Y,Z,T\}\}$ is also denoted as $\Box XYZT$. We define
$Q_{1}\sim Q_{2}$ provided there is an element of the dilation
group $g$ such that $g(Q_{1})=Q_{2}$.
A set of $4$ points defines a set of $6$ distances between these
points. Choosing any two points we can translate, rotate, reflect
and dilate the given $4$-point configuration so that the chosen
two points have coordinates $A=(0,0)$ and $B=(1,0)$. Different
normal forms can be obtained choosing pairs with different
relative metric properties. In this paper we consider only the
simplest case: the two points realizing the maximal distance are
mapped to the $x$-axis.
\subsubsection{Longest distance normal form} Suppose we are given a quadrilateral $\Box XYZT$ such that $|XY|\ge |XZ|$,
$|XY|\ge |XT|$, $|XY|\ge |YZ|$, $|XY|\ge |YT|$, $|XY|\ge |ZT|$. We
map $X$ and $Y$ by a dilation to the $x$-axis (to $A=(0,0)$ and
$B=(1,0)$) and determine the possible positions of the two remaining
vertices $C$ and $D$ so that $\Box XYZT \sim \Box ABCD$.
\begin{definition} Let $p\in \mathbb{R}^2$. The image of $p$ under the
reflections with respect to the $x$-axis and the line
$x=\frac{1}{2}$ that map $p$ into the domain $y\ge 0$,
$x\ge \frac{1}{2}$ is denoted by $p_{s}$.
\end{definition}
\begin{definition} Let $p, p'\in \mathbb{R}^2$. We say that
$p$ is \sl quasilexicographically\rm\ smaller or equal to $p'$,
denoted by $p\vartriangleleft p'$, provided $p_{s}\prec p'_{s}$ or
$p_{s}=p'_{s}$. Given two pairs $[p,q]$ and $[p',q']$ we define
$[p,q]\vartriangleleft [p',q']$ provided ($p_{s}\prec p'_{s}$) or
($p_{s}=p'_{s}$ and $q\vartriangleleft q'$).
\end{definition}
\begin{definition}
Let $S_{D}(x_{0},y_{0})\subseteq \mathbb{R}^2$ with
$(x_{0},y_{0})\in S_{C}$ (for the definition of $S_{C}$ see
section \ref{1}) be the set of solutions of the following system
of inequalities:
\begin{equation}\label{2}
\left\{
\begin{array}{ll}
x^2+y^2\le 1\\
(x-1)^2+y^2\le 1\\
(x-x_{0})^2+(y-y_{0})^2\le 1\\
|x-\frac{1}{2}|\le |x_{0}-\frac{1}{2}|\\
\text{if } |x-\frac{1}{2}|=|x_{0}-\frac{1}{2}|\text{, then } |y|\le |y_{0}|\\
\end{array}
\right.
\end{equation}
See Figure 9.
\begin{center}
\epsfysize=70mm
\epsfbox{quadr_6.eps}
Fig.9. - example of the domain $S_{D}$.
\end{center}
\end{definition}
\begin{remark} Conditions for $p\in S_{D}(x_{0},y_{0})$ consist of two
parts:
\begin{enumerate}
\item the distances from $p$ to $A$, $B$ and $(x_{0},y_{0})$ are
at most $1$;
\item $p_{s}\vartriangleleft (x_{0},y_{0})$.
\end{enumerate}
\end{remark}
\begin{theorem}\label{3} Every $\Box UVWZ$ (including
multisets with multiplicities at most $3$) in $\mathbb{R}^{2}$ is
similar to $\Box AB\mathcal{CD}$, where $A=(0,0)$, $B=(1,0)$,
$\mathcal{C}\in S_{C}$ and $\mathcal{D}\in S_{D}(\mathcal{C})$.
\end{theorem}
\begin{proof} Let $UVWZ$ be a multiset of points in $\mathbb{R}^{2}$ with at least two distinct elements.
Perform the following sequence of transformations:
\begin{enumerate}
\item translate and rotate the plane so that $2$ points with the
longest distance are on the $x$-axis, one vertex has coordinates
$(0,0)$ and another vertex has coordinates $(d,0)$, $d>0$; if
there is more than one possibility to choose two points with the
longest distance then choose this pair so that the remaining pair
is the largest in the quasilexicographic order;
\item do the dilation with coefficient $\frac{1}{d}$, note that
the vertices on the $x$-axis have coordinates $(0,0)$ and $(1,0)$,
suppose the other two vertices have coordinates
$(x_{C},y_{C})$ and $(x_{D},y_{D})$;
\item if $|x_{C}-\frac{1}{2}|\ne |x_{D}-\frac{1}{2}|$, then put
the point with the maximal $|x-\frac{1}{2}|$ value into $S_{C}$ by
reflections with respect to the $x$-axis and the line
$x=\frac{1}{2}$;
\item if $|x_{C}-\frac{1}{2}|=|x_{D}-\frac{1}{2}|$, then put the
point with the maximal value of $|y|$ into $S_{C}$ by reflections
with respect to the $x$-axis and the line $x=\frac{1}{2}$;
\item if $|x_{C}-\frac{1}{2}|=|x_{D}-\frac{1}{2}|$ and
$|y_{C}|=|y_{D}|$, then map any of the points into $S_{C}$.
\end{enumerate}
Denote the point which is mapped to $S_{C}$ by this sequence of
transformations by $C=(x_{C},y_{C})$ and the fourth point by
$D=(x_{D},y_{D})$. For any $C=(x_{0},y_{0})\in S_{C}$ we have that
$S_{D}(x_{0},y_{0})\neq \emptyset$.
We check that $D\in S_{D}(x_{C},y_{C})$. From the conditions
$|AD|\le 1$, $|BD|\le 1$, $|CD|\le 1$ it follows that $D$
satisfies the first three inequalities of the system \ref{2}. By
the choice of $C$ in steps 3--5 we have
$|x_{D}-\frac{1}{2}|\le|x_{C}-\frac{1}{2}|$, and if
$|x_{D}-\frac{1}{2}|=|x_{C}-\frac{1}{2}|$, then $|y_{D}|\le
|y_{C}|$, so the last two conditions hold as well.
\end{proof}
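For example, for the unit square $\Box XYZT$ with vertices
$(0,0)$, $(1,0)$, $(1,1)$, $(0,1)$ the longest distance $\sqrt{2}$
is realized by a diagonal. Mapping the diagonal $\{(0,0),(1,1)\}$
to $\{A,B\}$ sends the remaining vertices to
$(\frac{1}{2},\frac{1}{2})$ and $(\frac{1}{2},-\frac{1}{2})$; here
$|x_{C}-\frac{1}{2}|=|x_{D}-\frac{1}{2}|$ and $|y_{C}|=|y_{D}|$,
so by step 5 we may take $C=(\frac{1}{2},\frac{1}{2})\in S_{C}$
and $D=(\frac{1}{2},-\frac{1}{2})\in
S_{D}(\frac{1}{2},\frac{1}{2})$.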
\begin{definition} \sl The longest distance normal form\rm\ of $\Box UVWZ$
is $\Box ABCD$ with $A=(0,0)$, $B=(1,0)$, $C=(x_{C},y_{C})\in
S_{C}$ and $D\in S_{D}(x_{C},y_{C})$ constructed according to the
algorithm given in the proof of Theorem \ref{3}.
\end{definition}
\begin{proposition} Let $\Box ABC_{1}D_{1}$ and $\Box ABC_{2}D_{2}$ be two
quadrilaterals constructed according to the longest distance
normal form algorithm.
If $C_{1}\neq C_{2}$ or $D_{1}\neq D_{2}$, then $\Box ABC_{1}D_{1}
\not\sim \Box ABC_{2}D_{2}$.
\end{proposition}
\begin{proof} $D_{i} \vartriangleleft C_{i}$, therefore if $C_{1}\neq C_{2}$, then
$\Box ABC_{1}D_{1}\not\sim\Box ABC_{2}D_{2}$.
Suppose $C_{1}=C_{2}$ and $D_{1}\neq D_{2}$. Under any similarity
mapping $C_{1}$ must be mapped to $C_{2}$. If a similarity mapping
fixes three noncollinear points $A$, $B$ and $C_{i}$, then it must
fix any other point of the plane. If $A$, $B$ and $C_{i}$ are on
the $x$-axis, then $D_{i}$ must also be on the $x$-axis and must
be fixed. Therefore $D_{1}\neq D_{2}$ implies $\Box
ABC_{1}D_{1}\not\sim \Box ABC_{2}D_{2}$ in this case.
\end{proof}
\begin{remark} If $C_{s}=D_{s}$, then there
are the following possibilities for the number of similarity types
of quadrilaterals with a given $C\in S_{C}$: 1) $1$ similarity
type if $C=D$; 2) $2$ similarity types if $C\neq D$ and
$|AC|=|BC|$, or $C\neq D$ and $C$ belongs to the $x$-axis; 3)
$4$ similarity types in the other cases.
\end{remark}
\section{Possible uses of normal forms in education}
One vertex normal forms of triangles can be used to represent all
similarity types of triangles in a single picture with all
triangles having a fixed side, especially $C$-vertex and
$B$-vertex normal forms. It may be useful to have an example for
students showing that similarity type of triangle can be
parametrized by coordinates of a single point. One vertex normal
forms can also be used in considering quadrilaterals.
The circle normal form of triangles may be useful in teaching
properties of circumscribed circles, e.g. inscribing triangles
with given angles in a circle.
Normal forms of triangles can also be used to teach the idea of
normal (canonical) objects using simple and popular
geometric constructions.
\section{Conclusion and further development} It is relatively easy to define several normal forms of
triangles up to similarity. Since the main purpose of this work is
a contribution to mathematics education, only the simplest
approaches which may be used in teaching are considered in this
paper. One approach is to map one side to the $x$-axis and use
dilations and reflections to position the third vertex in a unique
way; in this approach normal triangles are parametrized by one
vertex. This
approach can be generalized for quadrilaterals. Another approach
considered in this paper is to design normal triangles as
triangles inscribed in a unit circle. Further development in this
direction may be related to using other figures related to a given
triangle, for example, the inscribed circle, medians, altitudes or
bisectors.
\section{Introduction}
We investigate local and metric geometry of a general class of
Carnot-Carath\'eodory spaces (see Definition \ref{cc-space}) which
generalize classical sub-Riemannian manifolds (see e.g. \cite{bel,
herm,jean, gro,lan,mit,mon} and references therein) and naturally
arise in different areas, in particular, geometric control theory,
harmonic analysis and subelliptic equations.
As it is well-known, if $X_1,X_2,\ldots,X_m$ are smooth
``horizontal'' vector fields on a smooth connected manifold
$\mathbb M$ ($\operatorname{dim}\mathbb M=N,\ m\leq N$), a
necessary and sufficient condition for a system
\begin{equation}\label{lin}\dot{x}=\sum\limits_{i=1}^ma_iX_i(x)
\end{equation} to be locally controllable
is that $X_1,X_2,\ldots,X_m$ span, together with their
commu\-ta\-tors up to some finite order $M$, the tangent space
$T_v\mathbb M$ at any point $v\in\mathbb M$ (H\"{o}rmander's
condition \cite{hor}), i.e. define a sub-Riemannian geometry on
$\mathbb M$. The existence of a controllable ``horizontal'' path
joining two arbitrary points $v,w\in\mathbb M$ is the content of
the Rashevsky-Chow connectivity theorem \cite{Chow,Rash}. This
theorem implies existence of an intrinsic Carnot-Carath\'eodory
metric $d_c(v,w)$ defined as the infimum of lengths of all
horizontal curves (with their tangent vectors belonging to the
subbundle $H\mathbb M=\operatorname{span}\{X_1,X_2,\ldots,X_m\}$)
joining $v$ and $w$. Investigation of local geometry of
sub-Riemannian manifolds is important e.g. for constructing
optimal motion planning algorithms for \eqref{lin} and studying
their complexity \cite{bel,bian,herm,jean,jean1,jean2}. In
particular, investigation of the algebraic structure of the
tangent cone (in Gromov's sense \cite{bur,gro1,gro,gro2})
$(\mathbb M, d_c^u)$ to the metric space $(\mathbb M,d_c)$ plays
a crucial role here, as well as obtaining estimates on comparison
of the metrics $d_c$ and $d_c^u$. In contrast to the Riemannian
case they are not bilipschitz-equivalent, but the following
estimate holds:
{\bf Theorem} (Local approximation theorem
\cite{bel,gro,jean,vk1}). If, for $u,v\in\mathbb M$,
$d_c(u,v)=O(\varepsilon)$ and $d_c(u,w)=O(\varepsilon)$, then
$|d_c(v,w)-d_c^u(v,w)|=O(\varepsilon^{1+\frac{1}{M}})$, where $M$
is the depth of the sub-Riemannian manifold $\mathbb M$.
If the right-hand side of a control system depends nonlinearly on
the control functions (see \cite{agmar,cor} and
references therein):
\begin{equation}\label{nonlin}\begin{cases}\dot{x}=f(x,a),\\x(0)=
x_0,\end{cases}\end{equation} where $x\in \mathbb M,a\in\mathbb
R^m$, then a sufficient (but not necessary) controllability
condition is that
$$\operatorname{span}\bigl\{h(0):h\in\operatorname{Lie}
\frac{\partial^{|\alpha|}} {\partial
a^{\alpha}}f(0,\cdot),\alpha\in\mathbb N^m \bigr\}=T_{x_0}\mathbb
M$$ for some $M\in\mathbb N$. Letting
$$F_{\nu}=\bigl\{\frac{\partial^{|\alpha|}}{\partial
a^{\alpha}}f(0,\cdot):|\alpha|\leq\nu\bigr\}$$ and
$$H_k(q)=\operatorname{span}\{[f_1,[f_2,\ldots,[f_{i-1},f_i]
\ldots](q):f_j\in
F_{\nu_j}, \nu_1+\nu_2+\ldots+\nu_i\leq k, i>0\},$$ one obtains a
weighted filtration $$\{0\}\subseteq H_1\subseteq
H_2\subseteq\ldots\subseteq H_M=T\mathbb M, \text{ such that }
[H_i,H_j]\subseteq H_{i+j},$$ of the tangent bundle. The condition
of having such a filtration is obviously weaker than
H\"{o}rmander's condition, and in this case it may happen that not
all points can be joined by a horizontal path (see Example 2),
i.e. the Rashevsky-Chow theorem fails to hold and the intrinsic
metric $d_c$ might not exist.
Other examples, where weighted Carnot-Carath\'eodory spaces
appear, stem from the theory of subelliptic equations
\cite{bram,cnsw,mm,nsw}. Besides weakening H\"ormander's
condition, an important line of generalization of sub-Riemannian
geometry is minimizing the smoothness assumptions on the vector
fields $X_i$ generating the space (see e.g.
\cite{bram,bram1,gre,karm,karm1,
karm2,vk,morbid,morbid2,dan,sus,sus1,str,Tao,vk1,vk2}).
In this paper we consider the following notion of a weighted
Carnot-Carath\'eodory space (this definition is close to the one
of the paper \cite{cnsw}). A smooth manifold $\mathbb M$ will be
called a {\it (weighted) Carnot-Carath\'eodory space} (shortly,
C-C space) if there are $C^{2M+1}$-smooth vector fields
$X_1,X_2,\ldots,X_q$ given on a domain $U\subseteq\mathbb M$ (the
number $M$ is defined below), endowed with formal weights
$\operatorname{deg}(X_i)=d_i$, $1\leq d_1\leq
d_2\leq\ldots\leq d_q$, $d_j\in \mathbb N$, with the following
properties. It is assumed that
$\operatorname{span}\{X_I(v)\}_{|I|_h\leq M}=T_v\mathbb M$ for all
$v\in U$ and some $M\in\mathbb N$, where
$$\operatorname{deg} X_I=|I|_h=d_{i_1}+\ldots+d_{i_k}$$ is the
homogeneous degree of the commutator
$X_I=[X_{i_1},[\ldots,[X_{i_{k-1}},X_{i_k}]\ldots]$. Letting
$H_j=\operatorname{span}\{X_I\}_{|I|_h\leq j}$ we get a weighted
filtration of the tangent bundle $H_1\subseteq
H_2\subseteq\ldots\subseteq H_M=T\mathbb M,$ which meets the
property $ [H_i,H_j]\subseteq H_{i+j}. $ A point $u\in U$ is called
{\it regular} if there is a neighborhood of $u$ in which the
dimensions of all $H_k$ are constant; otherwise this point is called
{\it nonregular}.
This notion of a C-C space is suitable for describing nonlinear
control systems \eqref{nonlin}. One of the peculiarities stemming
from the presence of a formal degree structure is that different
choices of weights may lead to different distributions of regular
and nonregular points on the space (see Example 1).
Because of the mentioned difficulties, new methods for studying
local geometry of such spaces are needed. In particular, since the
metric $d_c$ might not exist, we obtain all estimates w.r.t. the
following distance function, first introduced in \cite{nsw}, which
is actually not a metric, but a quasimetric, i.e. the triangle
inequality holds only in the generalized sense, with some
constant.
$$\rho(v,w)=\inf\Bigl\{\delta>0\mid \text{ there is a curve }
\gamma:[0,1]\to U\text{ such that }$$
$$\gamma(0)=v, \gamma(1)=w, \dot{\gamma}(t)=\sum\limits_{|I|_h\leq
M} w_I X_I(\gamma(t)), |w_I|<\delta^{|I|_h}\Bigr\}.$$
A crucial result on local geometry, which we prove in Section 5,
is the estimate on comparison of this quasimetric w.r.t. the
initial vector fields and the quasimetric $\rho^u$ (see Section
3), induced by their nilpotentizations $\widehat{X}_i^u$ at a
point $u$, which is possibly nonregular.
{\bf Theorem} (Theorem on divergence of integral lines). If $u,v\in
U$, $\rho(u,v)=O(\varepsilon)$ and $r=O(\varepsilon)$, then we
have
$$R(u,v,r)=O(\varepsilon^{1+\frac{1}{M}}),$$
where
$$R(u,v,r)=\max\{\underset{\widehat{y}\in
B^{\rho^u}(v,r)}\sup\{\rho^u(y, \widehat{y})\},\underset{y\in
B^{\rho}(v,r)}\sup\{\rho(y, \widehat{y})\}\}.$$ Here the points
$y$ and $\widehat{y}$ are defined as follows. Let $\gamma(t)$ be
an arbitrary curve such that
$$\begin{cases}\dot{\gamma}(t)=\sum\limits_{|I|_h\leq M}b_I
\widehat{X}^u_I(\gamma(t)),\\
\gamma(0)=v, \gamma(1)=\widehat{y},
\end{cases}$$
and $$\rho^u(v,\widehat{y})\leq\underset{|I|_h\leq
M}\max\{|b_I|^{1/|I|_h}\}\leq r.$$ Define
$y=\text{exp}(\sum\limits_{|I|_h\leq M}b_I X_I)(v)$. In this way,
the supremum is taken not only over $\widehat{y}\in
B^{\rho^u}(v,r)$, but also over the infinite set of admissible
$\{b_I\}_{|I|_h\leq M}$.
This theorem allows one to construct motion planning algorithms for
the system \eqref{nonlin}, as was done for \eqref{lin} in
\cite{bel,bian,herm,jean,jean1}, to prove an analog of the
local approximation theorem, and to study the algebraic
structure of the tangent cone.
{\bf Theorem} (Local approximation theorem). For any points $u\in
U$ and $v,w\in U$, such that $\rho(u,v)=O(\varepsilon)$,
$\rho(u,w)=O(\varepsilon)$, we have
$$|\rho(v,w)-\rho^u(v,w)|=O(\varepsilon^{1+\frac{1}{M}}).$$
{\bf Theorem} (Tangent cone theorem). The quasimetric space
$(U,\rho^u)$ is a local tangent cone at the point $u$ to the
quasimetric space $(U,\rho)$. The tangent cone is a homogeneous
space $G/H$, where $G$ is a nilpotent graded group with a weight
structure.
This theorem, see Section 6, generalizes an analogous fact for
sub-Riemannian manifolds, known as Mitchell's cone theorem.
Namely, it is known that, at a regular point, the tangent cone to
a sub-Riemannian manifold is a nilpotent stratified group
\cite{gro,mit}, while at a nonregular point it is a homogeneous
space \cite{bel,jean}.
The notion of the tangent cone to a quasimetric space, extending
Gromov's notion for metric spaces, was introduced and studied
recently in \cite{dan,smj}. Note that a straightforward
generalization of the Gromov-Hausdorff convergence theory would
make no sense for quasimetric spaces, since the Gromov-Hausdorff
distance between any two quasimetric spaces would be equal to
zero. However, such a generalization can be done for particular
classes of compact quasimetric spaces \cite{gre}.
All three of the results mentioned above are new even for the case of
``classical'' sub-Riemannian manifolds; moreover, the methods of their
proofs yield new proofs of the classical results for
sub-Riemannian manifolds (see Section 7). In particular, in
contrast to the proof of the Local approximation theorem in
\cite{bel}, we do not need special polynomial ``privileged''
coordinates and do not use Newton-type approximation methods.
The proofs of the main results of this paper heavily rely on
results of \cite{vk,vk1} for the case of regular C-C spaces, see
Definition \ref{cc-space-reg}, and on methods of submersion of a
C-C space into a regular one, \cite{rs,bel,cnsw,herm,jean}, as
well as on obtaining new geometric properties for the quasimetrics
$\rho$ and $\rho^u$ (Section 4).
This paper is essentially an extended version of the short notes
\cite{dan,izv}.
I am deeply grateful to Professor Sergey Vodopyanov for suggesting
this problem to me and for many fruitful discussions. I am also
grateful to Doctor Maria Karmanova for a consultation on her paper
\cite{karm1}.
\section{Basic definitions, examples and known facts}
Recall that locally any vector field $X_i$ on a manifold $\mathbb
M$ can be viewed as a first-order differential operator
$X_i=\sum\limits_{j=1}^Na_{ij}(x)\frac{\partial}{\partial x_j}$
acting on a function $f\in C^{\infty}(\mathbb M)$, and its
smoothness coincides with the smoothness of the coordinate
functions $a_{ij}(x)$. The commutator of two vector fields is the
vector field defined as $[X_i,X_j]=X_iX_j-X_jX_i$.
In this paper we will use the following definition of a weighted
Carnot-Carath\'eodory space (this definition is very close to the
one formulated in \cite{cnsw}). It is easy to see that this
definition can be reformulated in such a way that it involves only
first-order (not higher-order) commutators
of the vector fields $X_1,X_2,\ldots, X_q$
and thus can be applied to the case of $C^1$-smooth vector fields.
\begin{Definition}\label{cc-space} Let $X_1,X_2,\ldots,X_q$
be $C^{2M+1}$-smooth vector fields given on a domain $U$ in a
connected $C^{\infty}$-smooth manifold $\mathbb M$ (the number
$M$ is defined below) and associated with formal weights
$\operatorname{deg}(X_i)=d_i$, $1\leq d_1\leq
d_2\leq\ldots\leq d_q$, $d_j\in \mathbb N$. To the commutator
$X_I=[X_{i_1},[\ldots,[X_{i_{k-1}},X_{i_k}]\ldots]$ a weight equal
to its homogeneous degree is assigned:
\begin{equation}\label{deg}
\operatorname{deg} X_I=|I|_h=d_{i_1}+\ldots+d_{i_k}.
\end{equation}
It is assumed that $\operatorname{span}\{X_I(v)\}_{|I|_h\leq
M}=T_v\mathbb M$ for all $v\in U$. Letting $H_j=\operatorname{span}
\{X_I\}_{|I|_h\leq j}$ we get a weighted
filtration of the tangent bundle
\begin{equation}\label{filtr}
H_1\subseteq H_2\subseteq\ldots\subseteq H_M=T\mathbb
M,\end{equation} which meets the property
\begin{equation}\label{filtr_commut}
[H_i,H_j]\subseteq H_{i+j}.
\end{equation}
A manifold
$\mathbb M$ endowed with the described structure will be called a
{\it (weighted) Carnot-Carath\'eodory space} (shortly, C-C space).
The minimal number $M$ of the elements $H_i$ in the filtration
\eqref{filtr} is called the {\it depth} of the given
Carnot-Carath\'eodory space.
\end{Definition}
Note that \eqref{deg}, \eqref{filtr_commut} relate the natural
algebraic structure, induced by commutators of the vector fields
$X_1,X_2,\ldots,X_q$, with the additional formal degree structure.
If $X_j\in C^{\infty}(U)$ and $d_1=\dots=d_q=1$ then Definition
\ref{cc-space} coincides with the classical definition of a
sub-Riemannian manifold. The subbundle $H_1$ is then called {\it
horizontal} and generates, by commutation, the whole tangent
bundle ({\it H\"ormander's condition}).
\begin{Remark}
For simplicity of notation we will carry out all computations for
the basic case when $d_1=1$, $d_q=M$. All results of this
paper hold in the framework of Definition $\ref{cc-space}$,
replacing $\frac{1}{M}$ by
$\frac{d_1}{\operatorname{max}\{d_q,M\}}$.
\end{Remark}
\begin{Definition}\label{nereg} A point $u\in U$ of a
Carnot-Carath\'eodory space is called {\it regular} if there is a
neighborhood of $u$ in which the dimensions of all $H_k$ are
constant; otherwise this point is called {\it nonregular} or {\it
singular}.\end{Definition}
\begin{Definition} Let us consider a distance function on $U$
defined as
$$\rho(v,w)=\inf\Bigl\{\delta>0\mid \text{ there is a curve }
\gamma:[0,1]\to U\text{ such that }$$
\begin{equation}\label{rho}\gamma(0)=v, \gamma(1)=w,
\dot{\gamma}(t)=\sum\limits_{|I|_h\leq M} w_I X_I(\gamma(t)),
|w_I|<\delta^{|I|_h}\Bigr\}.\end{equation}\end{Definition}
The distance function \eqref{rho} was first introduced in
\cite{nsw} where it was proved that it is continuous and, for
``classical'' sub-Riemannian manifolds, equivalent to the
intrinsic Carnot-Carath\'eodory metric $d_c$ (Ball-Box theorem,
see also \cite{bel,jean,karm2,vk,morbid,morbid2}).
\begin{Definition}[\rm{\cite{Stein}}]\label{quasim} A {\it quasimetric
space}
$(X,d_X)$ is a topological space $X$ endowed with a quasimetric
$d_X$. A {\it quasimetric } is a mapping $d_X:X\times X\to \mathbb
R^+$ meeting the following properties
(1) $d_X(u,v)\geq 0$; $d_X(u,v)= 0$ iff $u=v$;
(2) $d_X(u,v)\leq c_Xd_X(v,u)$, where $1\leq c_X<\infty$ is a
constant independent of $u,v\in X$ (generalized symmetry
property);
(3) $d_X(u,v)\leq Q_X(d_X(u,w)+d_X(w,v))$, where $1\leq
Q_X<\infty$ is a constant independent of $u,v,w\in X$ (generalized
triangle inequality);
(4) $d_X(u,v)$ is upper-semicontinuous in the first argument.
If $c_X= Q_X=1$, then $(X,d_X)$ is a metric space.\end{Definition}
\begin{Proposition} $(U,\rho)$ is a quasimetric space.\end{Proposition}
\begin{proof}
Properties (1), (2) and (4) immediately follow from the properties
of solutions of ordinary differential equations (and we have
$\rho(v,w)= \rho(w,v)$). The generalized triangle inequality will
be proved below (Proposition \ref{treug}).
\end{proof}
The simplest examples of (regular) weighted Carnot-Carath\'eodory
spaces are Carnot groups endowed with an additional degree
structure.
\begin{Example}[\rm{\cite{fol,fs}}]
Consider the Heisenberg group $\mathbb H^n$: let $\mathbb
M=\mathbb R^N$, $N=2n+1$, with the coordinates
$(x_1,x_2,\ldots,x_n,y_1,y_2,\ldots,y_n,t) \in\mathbb R^N$.
Consider the vector fields
$$X_j=\partial_{x_j}-\frac{y_j}{2}\partial_{t},\
Y_j=\partial_{y_j}+\frac{x_j}{2}\partial_{t},\ T=\partial_t $$ with
commutator relations
$$[X_j,Y_j]=T.$$
Let us first assign to all of these vector fields the weights
naturally defined by their commutator table:
$$\text{deg}(X_j)=\text{deg}(Y_j)=1,\ \text{deg}(T)=2,$$
then for $w=\text{exp}\bigl(\sum\limits_{j=1}^n(x_jX_j+y_jY_j)+tT\bigr)(v)$ we have
$$\rho(v,w)=\max\{|x_1|,\ldots,|x_n|,|y_1|,
\ldots,|y_n|,|t|^{1/2}\}.$$ Now let $$\text{deg}(X_j)=a_j,\
\text{deg}(Y_j)=b_j,\ \text{deg}(T)=c,\text{ where } a_j+b_j=c$$
for all $j=1,\ldots,n$. Then
$$\rho(v,w)=\max\{|x_j|^{1/a_j},|y_j|^{1/b_j},
|t|^{1/c}\}$$ is a quasimetric not equivalent to the previous one.
In both cases all points of $\mathbb R^N$ are
regular.
\end{Example}
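As an illustrative aside (ours, not part of the original text), the commutator relation of this example and the anisotropic scaling of the quasimetric can be checked in a few lines of sympy, treating the vector fields as first-order differential operators:

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
f = sp.Function('f')(x, y, t)

# Heisenberg fields for n = 1, acting as first-order operators
X = lambda g: sp.diff(g, x) - y/2 * sp.diff(g, t)
Y = lambda g: sp.diff(g, y) + x/2 * sp.diff(g, t)

# [X, Y] f = X(Y f) - Y(X f); the claimed relation is [X, Y] = T = d/dt
bracket = sp.simplify(X(Y(f)) - Y(X(f)))   # equals Derivative(f, t)

# the quasimetric with unit weights scales by eps under the
# anisotropic dilation (a, b, c) -> (eps*a, eps*b, eps^2*c)
rho = lambda a, b, c: max(abs(a), abs(b), abs(c) ** 0.5)
scaled = rho(2 * 0.2, 2 * 0.3, 4 * 0.5)    # eps = 2
```

Here `bracket` simplifies to $\partial_t f$, confirming $[X_j,Y_j]=T$, and `scaled` equals twice the unscaled value.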
The next example illustrates that, for the C-C spaces from
Definition \ref{cc-space}, the Rashevsky-Chow theorem may fail to
hold, i.e. the intrinsic C-C metric might not exist.
\begin{Example}[\rm{\cite{Stein}}]\label{ex2}
Consider the Euclidean space $\mathbb R^N$ with the standard basis
$$\partial_{x_1},\partial_{x_2},\ldots,\partial_{x_N}$$ and let
$\text{deg}(\partial_{x_i})=d_j$ for
$i=k_j+1,k_j+2,\ldots,k_{j+1}$, where $k_1\leq k_2\leq\ldots\leq
k_M=N$.
Obviously, the subbundles
$$H_i=\text{span}\{\partial_{x_1},\partial_{x_2},\ldots,\partial_{x_i}\}$$
meet the condition $[H_i,H_j]\subseteq H_{i+j}$, since
$[H_i,H_j]=\{0\}$. At the same time, none of the subsets of the set
of vector fields
$\{\partial_{x_i}\}$ meets H\"ormander's condition, and, for
any ``horizontal'' subbundle, there are points of $\mathbb R^N$
which cannot be joined by a horizontal curve.
In the considered example all points of $\mathbb R^N$ are regular.
If $v,w\in\mathbb R^N$ and
$w-v=(x_1,x_2,\ldots,x_N),$ then
$$\rho(v,w)=\underset{i=1,\ldots,N}\max\{|x_i|^{1/d_i}\}.
$$
\end{Example}
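A small numeric aside (an illustration of ours, not a claim of the paper): since $r\mapsto r^{1/d}$ is subadditive for $d\geq 1$, the distance function of this example actually satisfies the genuine triangle inequality ($Q=1$), which we can test on random points:

```python
import random

def rho(v, w, degs):
    # rho(v, w) = max_i |x_i|^{1/d_i} with x = w - v, as in Example 2
    return max(abs(wi - vi) ** (1.0 / d) for vi, wi, d in zip(v, w, degs))

random.seed(0)
degs = [1, 1, 2, 3]          # sample weights d_i (our choice)
ok = True
for _ in range(1000):
    u, v, w = ([random.uniform(-5, 5) for _ in degs] for _ in range(3))
    # check rho(u, v) <= rho(u, w) + rho(w, v) up to float tolerance
    if rho(u, v, degs) > rho(u, w, degs) + rho(w, v, degs) + 1e-12:
        ok = False
```

The check passes because $|u_i-v_i|^{1/d_i}\leq|u_i-w_i|^{1/d_i}+|w_i-v_i|^{1/d_i}$ termwise.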
A further peculiarity of the considered weighted C-C spaces is that
different choices of weights $d_i$ may lead to different
combinations of regular and nonregular points.
\begin{Example}\label{reg_nereg}
Consider on the Euclidean space $\mathbb M=\mathbb R^3$ the vector
fields
$$\{X_1=\partial_y, X_2=\partial_x+y\partial_t,X_3=\partial_x\}$$
with the only nontrivial commutator relation $[X_1,X_2]=\partial_t$.
Let first $\text{deg}(X_i):=1$,
$i=1,2,3$. Then $\text{deg}([X_1,X_2])=2$ and
$$H_1=\text{span}\{X_1,X_2,X_3\},\
H_2=H_1+\text{span}\{[X_1,X_2]\}.$$ In this case $\{y=0\}$ is a
plane consisting of nonregular points. Indeed, for $y\neq 0$ we
have $\text{dim}(H_1)=3$, while for $y=0$ we have
$\text{dim}(H_1)=2$.
Now assume that $\text{deg}(X_1):=a,$ $\text{deg}(X_2):=b$ and
$\text{deg}(X_3):=a+b$, where $a\leq b$. Then
$\text{deg}([X_1,X_2])=a+b$, hence
$$H_a=\text{span}\{X_1\},\ H_b=H_a+\text{span}\{X_2\},\
H_{a+b}=H_a+H_b+\text{span}\{X_3,[X_1,X_2]\}.$$ In this
case all points of $\mathbb R^3$ are regular.
\end{Example}
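As a sanity check (our own illustration), the bracket $[X_1,X_2]=\partial_t$ and the rank drop of $H_1$ on $\{y=0\}$ for unit weights can be verified with sympy:

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
f = sp.Function('f')(x, y, t)

X1 = lambda g: sp.diff(g, y)                      # X1 = d/dy
X2 = lambda g: sp.diff(g, x) + y * sp.diff(g, t)  # X2 = d/dx + y d/dt
bracket = sp.simplify(X1(X2(f)) - X2(X1(f)))      # equals d f / d t

# coefficients of X1, X2, X3 in the coordinate frame (d/dx, d/dy, d/dt)
A = sp.Matrix([[0, 1, 0],      # X1
               [1, 0, y],      # X2
               [1, 0, 0]])     # X3 = d/dx
# det(A) = y, so the three fields span R^3 exactly where y != 0,
# and dim H_1 drops to 2 on the plane {y = 0}
rank_on_plane = A.subs(y, 0).rank()
```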
Let us now briefly recall the approach of the papers of S.
Vodopyanov and M. Karmanova
\cite{karm, karm1, vk, vk1, vk2}, devoted to regular C-C spaces
(particular cases of the weighted C-C spaces of Definition
\ref{cc-space}) under minimal smoothness assumptions, together with
those of their results on which the proofs of the main results
of the present paper heavily rely.
\begin{Definition}[\rm \cite{vk, vk1, vk2, gre,karm,karm1}, cf.
\cite{bel,gro,lan,mm,nsw, rs} etc.]\label{cc-space-reg}
Let $\mathbb M$ be a connected $C^{\infty}$-smooth Riemannian
manifold of dimension $N$. The manifold $\mathbb M$
is called a {\it regular Carnot-Carath\'eodory space},
if there is a filtration of its tangent bundle $T\mathbb M$
\begin{equation}\label{filtr1}
H\mathbb M=H_1\subsetneq \ldots\subsetneq H_i\subsetneq\ldots
$H_M=T\mathbb M,\end{equation} such that in some domain
$U\subseteq\mathbb M$ there are $C^{p}$-smooth vector fields
$X_1,\ldots,X_N$, where $p>1$, meeting the following conditions.
\noindent For all $u\in U$ we have
(i) $H_i(u)=\text{span}\{X_1(u),\ldots,X_{\text{dim} H_i}(u)\}$
is a subspace of $T_u\mathbb M$ of dimension
$\text{dim}H_i$,
$i=1,\ldots,M$;
(ii) The following decomposition holds
\begin{equation}\label{commut}[X_i,X_j](v)=
\sum\limits_{k:\deg X_k\leq \deg X_i+\deg
X_j}c_{ijk} (v) X_k(v),\end{equation} where
$\deg X_k=\min\{m|X_k\in H_m\}$ is the {\it degree}
of the vector field $X_k$.
The number $M$ is again called the {\it depth} of the C-C space
$\mathbb M$.
\end{Definition}
The condition (ii) corresponds to \eqref{filtr_commut} in Definition
\ref{cc-space}.
\begin{Remark}
In the present paper it suffices to have, for regular C-C spaces,
smoothness $p=M+1$, but most of the results of this section are
true for $C^{1,\alpha}$-smooth vector fields $X_1,\ldots,X_N$,
where $\alpha>0$ is the H\"older exponent of the first-order
derivatives. In this case, the expression $\frac{1}{M}$ in the
estimates below is replaced by $\frac{\alpha}{M}$.
\end{Remark}
Consider on $U\subseteq\mathbb M$ {\it canonical first-order coordinates}
defined in a neighborhood of a point
$g\in\mathbb M$ as
\begin{equation}\label{exp}
\theta_g(v_1,\ldots,v_N)=\exp\left(\sum\limits_{i=1}^Nv_iX_i\right)(g).
\end{equation}
From theorems on continuous dependence of the solutions of ODE on
the initial data (see e.g. \cite{pon}) it follows that $\theta_g$ is
a $C^1$-diffeomorphism of the Euclidean ball $B_E(0,r)\subset
\mathbb R^N$ to $\mathbb M$, where $0\leq r<r_g$ for a sufficiently
small $r_g>0$. Let $U_g=\theta_g(B_E(0,r_g))$. The tuple
$(v_1,v_2,\ldots, v_N)=\theta_g^{-1}(v)\in B_E(0,r_g)$ is called the
first-order coordinates of the point $v\in U_g$. Further we assume
that $U\subseteq\bigcup_{g\in U}U_g$.
In the regular case, the tuple $(v_1,v_2,\ldots, v_N)$ is
uniquely defined, thus the quasimetric \eqref{rho}, denoted
in the above-mentioned papers as $d_{\infty}$, is defined for points
$w,v\in {\cal U}$, such that
$v=\exp\left(\sum\limits_{i=1}^Nv_iX_i\right)(w)$, as
$$d_{\infty}(w,v)=\rho(w,v)=\max\limits_i\{|v_i|^{\frac{1}{\deg X_i}}\}. $$
The generalized triangle inequality for $d_{\infty}$ is proved in
\cite{karm,vk,vk2} in minimal smoothness assumptions and in \cite{nsw}
for sufficiently smooth vector fields (in the general case, not just
near regular points).
The balls w.r.t. the quasimetric $d_{\infty}$ will be denoted as
$\operatorname{Box}(u,r)=\{v\in U\ |\ d_{\infty}(v,u)<r\}.$
The vector fields $\widehat{X}_i^u$, obtained from the commutator
table \eqref{commut} by replacing the inequality by equality, i.e.
$$[\widehat{X}^u_i,\widehat{X}^u_j](v)=
\sum\limits_{k:\deg X_k= \deg X_i+\deg
X_j}c_{ijk} (u) \widehat{X}^u_k(v),$$ define a graded nilpotent Lie
algebra $V=V_1\oplus\ldots\oplus V_M$, where $[V_1,V_i]\subseteq
V_{i+1},\ i=1,\ldots,M-1$, due to the following result.
\begin{Theorem}[\rm \cite{vk}]\label{al}
For a fixed point $u\in U$ consider a family of coefficients
$$\{\bar{c}_{ij}^k\}=\{c_{ijk}(u)\}_{\operatorname{deg}X_k=
\operatorname{deg}X_i+\operatorname{deg}X_j}.$$
Then these constants $\{\bar{c}_{ij}^k\}$ meet the Jacobi identity
and hence define a Lie algebra. This Lie algebra is graded and nilpotent
and it can be defined by canonical
$C^{\infty}$-smooth vector fields
$\{(\widehat{X}_i^g)^{\prime}\}_{i=1}^N$ on $\mathbb R^N$, such that
$(\widehat{X}_i^g)^{\prime}(0)=e_i$.
By means of the exponential mapping \eqref{exp} the obtained
vector fields can be pushed into the initial space:
$D\theta_g\langle (\widehat{X}_i^g)^{\prime}
\rangle=\widehat{X}_i^g\in C^{\alpha}({\cal U})$ in such a way that
\begin{equation}\label{equal_g}\widehat{X}_i^g(g)=X_i(g),\quad i=1,
\ldots,N.\end{equation}
\end{Theorem}
\begin{Definition}\label{loc_carno} Denote the local Lie group,
corresponding to the Lie algebra $V$ generated by
$\{\widehat{X}_i^g\}_{i=1}^N$, as ${\cal G}^g=(U,\star)$ at the
point $g$. The group operation $\star$ is defined as follows: if
$$x=\exp\Bigl(\sum\limits_{i=1}^Nx_i\widehat{X}^g_i\Bigr)(g),\
y=\exp\Bigl(\sum\limits_{i=1}^Ny_i\widehat{X}^g_i\Bigr)(g),$$ then
$$x\star
y=\exp\Bigl(\sum\limits_{i=1}^Ny_i\widehat{X}^g_i\Bigr)\circ
\exp\Bigl(\sum\limits_{i=1}^Nx_i\widehat{X}^g_i\Bigr)(g) =
\exp\Bigl(\sum\limits_{i=1}^Nz_i\widehat{X}^g_i\Bigr)(g),$$ where
$z_i$ are calculated by means of the Campbell-Hausdorff
formula.\end{Definition}
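For a step-2 algebra the Campbell-Hausdorff series needed here truncates after the first bracket, $Z=A+B+\frac12[A,B]$. A hypothetical numeric check (our own model, not from the paper), using strictly upper-triangular $3\times 3$ matrices as a standard model of the Heisenberg algebra, where the exponential is a finite sum:

```python
import numpy as np

def expm_nilp(N):
    # exp for a 3x3 strictly upper-triangular matrix: N^3 = 0,
    # so exp(N) = I + N + N^2/2 exactly
    return np.eye(3) + N + N @ N / 2.0

def heis(a, b, c):
    # a*X + b*Y + c*T in the upper-triangular matrix model
    return np.array([[0.0, a, c],
                     [0.0, 0.0, b],
                     [0.0, 0.0, 0.0]])

A, B = heis(1.0, 2.0, 3.0), heis(-0.5, 4.0, 1.0)
comm = A @ B - B @ A                       # [A, B] is central here
lhs = expm_nilp(A) @ expm_nilp(B)
rhs = expm_nilp(A + B + comm / 2.0)        # truncated Campbell-Hausdorff
```

Since $[A,B]$ is central, all higher brackets vanish and `lhs` equals `rhs` exactly.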
Note that, if the H\"ormander's condition holds, ${\cal G}^g$ is a
local Carnot group, i. e. $V=V_1\oplus\ldots\oplus V_M$, where
$[V_1,V_i]= V_{i+1},\ i=1,\ldots M-1$.
The quasimetric on ${\cal G}^g$ is defined in a similar way as
$d_{\infty}$: for $u,v\in {\cal G}^g$ such that
$v=\exp\left(\sum\limits_{i=1}^Nv_i\widehat{X}^g_i\right)(u)$ let
$$d_{\infty}^g(u,v)=\rho^g(u,v)=
\max\limits_i\{|v_i|^{\frac{1}{\deg X_i}}\}.$$
In the present paper we will use the following results.
\begin{Theorem}[\rm \cite{vk}]\label{decomp}
For all $x\in U$, such that $|x_j|\leq
\varepsilon^{|I_j|}$, the following decompositions hold:
\begin{equation}\label{dec}
X_j(x)=\sum\limits_{k=1}^{\tilde{N}}a_{j,k}(x)\widehat{X}_k(x),
\end{equation}
where
$$ a_{j,k}=
\begin{cases}
\delta_{j,k}+O(\varepsilon),\quad&\operatorname{deg}(X_j)=\operatorname
{deg}(X_k),\\
O(\varepsilon),\quad&\operatorname{deg}(X_j)>\operatorname{deg}(X_k),\\
o(\varepsilon^{|I_k|-|I_j|}),\quad&\operatorname{deg}(X_k)>
\operatorname{deg}(X_j).
\end{cases}
$$
\end{Theorem}
Note that Theorem \ref{decomp} implies Gromov's nilpotentization
theorem, which is proved in \cite{gro,bel,rs,vk} for smooth vector
fields, in \cite{vk} for $C^1$ vector fields and depth $M=2$, in
\cite{gre} for $C^2$ vector fields, in \cite{karm1} for
$C^{1,\alpha}$ vector fields, where $\alpha>0$.
\begin{Theorem}[\rm Theorem on divergence of integral lines \cite{vk1}]
\label{int_lines_perem} Let $u,v\in U$,
$d_{\infty}(u,v)=C\varepsilon$, $C<\infty$. Consider the curves
$\gamma(t)$, $\widehat{\gamma}(t)$ in
$\operatorname{Box}(v,K\varepsilon)$, satisfying the equations
$$\begin{cases}\dot{\gamma}(t)=\sum\limits_{i=1}^Nb_i(t)X_i(\gamma(t)),\\
\gamma(0)=v,\end{cases}\quad\text{and}\quad
\begin{cases}\dot{\widehat{\gamma}}(t)=
\sum\limits_{i=1}^Nb_i(t)\widehat{X}^u_i(\widehat{\gamma}(t)),\\
\widehat{\gamma}(0)=v,\end{cases}$$ where
$$\int\limits_{0}^1|b_i(t)|dt<S\varepsilon^{\operatorname{deg}X_i},
S<\infty.$$
Denote $w=\gamma(1)$, $\widehat{w}=\widehat{\gamma}(1)$. Then
$$\max\{d_{\infty}(w,\widehat{w}),d_{\infty}^u(w,\widehat{w})\}=
O(\varepsilon^{1+\frac{1}{M}})$$ uniformly on $U$.
\end{Theorem}
Note that in \cite{karm,karm1} an analog of this result (with
constant coefficients $b_i$) is proved without using the
Campbell-Hausdorff formula and Gromov's nilpotentization theorem,
for the case of $C^{1,\alpha}$-smooth vector fields, by means of
estimates obtained in \cite{karm,vk}.
Theorem \ref{int_lines_perem} and its analogs have many important
corollaries; in particular, each of them allows one to prove the local
approximation theorem, under the smoothness assumptions considered in
each case, and also the Ball-Box theorem in the framework of the
following definition.
\begin{Definition} [\rm\cite{vk}]\label{cc-manifold}
If in Definition \ref{cc-space-reg} the following assumption (3)
holds, then $\mathbb M$ is called a {\it Carnot manifold}.
(3) The factor-mapping $[\,\cdot ,\cdot\,]:H_1\times
H_j/H_{j-1}\to H_{j+1}/H_{j}$, induced by the Lie bracket, is an
epimorphism for all $1\leq j<M$ (here it is assumed that
$H_0=\{0\}$).
In this case, the subbundle $H\mathbb M=H_1$ is called {\it horizontal}.
\end{Definition}
By means of Theorem \ref{int_lines_perem}, an analog of the
Rashevsky-Chow theorem is proved in \cite{karm,vk} for Carnot
manifolds defined by $C^{1,\alpha}$-smooth vector fields. Thus it
is possible to define the intrinsic C-C
metric\begin{equation}\label{dc}d_c(u,v)=\inf\limits_
{\substack{\dot{\gamma}\in H\mathbb M, \\ \gamma(0)=u,\
\gamma(1)=v}}\{L(\gamma)\}.\end{equation}
The following assertion is formulated and proved in \cite{vk1}, in
the proof of the local approximation theorem.
\begin{Theorem}\label{int_lines_dc}
Consider the curves $\gamma$ and $\widehat{\gamma}$, satisfying
the equations
$$\begin{cases}\dot{\gamma}(t)=\sum\limits_{i=1}^ma_i(t)
\xi_i(\gamma(t)),\\
\gamma(0)=v,
\end{cases}
\quad\text{and}\quad \begin{cases}\dot{\widehat{\gamma}}(t)=
\sum\limits_{i=1}^ma_i(t)\widehat{\xi}_i(\widehat{\gamma}(t)),\\
\widehat{\gamma}(0)=v.
\end{cases}$$
Denote $\gamma(1)=w$, $\widehat{\gamma}(1)=\widehat{w}$. If we
have $d_c(u,v)=O(\varepsilon)$ and $d_c(v,w)=O(\varepsilon)$,
then
\begin{equation}\label{int_lines_est}\max\{d_c(w,\widehat{w}),d_c^{u}(w,
\widehat{w})\}=O(\varepsilon^{1+\frac{1}{M}}).\end{equation}
\end{Theorem}
\begin{Theorem}[\rm Local approximation theorem \cite{vk1}]
\label{lat_dc} Uniformly on $u\in U$, $v,w\in
B^{d_c}(u,\varepsilon)$ the following estimate holds
$$\left|d_c(v,w)-d_c^u(v,w)\right|=
O(\varepsilon^{1+\frac{1}{M}}).$$
\end{Theorem}
\section{Choice of basis, nilpotent approximation and
a homo\-ge\-neous quasimetric}
\begin{Definition}\label{min_bas}
Among the vector fields $\{X_I\}_{|I|_h\leq M}$ we choose a basis
\begin{equation}\label{bas}
\{Y_1, Y_2,\ldots, Y_N\}
\end{equation}
as follows:
(i) the vector fields $Y_1,Y_2,\ldots, Y_N$ are linearly
independent at the point $u$ (hence, in some neighborhood of $u$);
(ii) the sum of their weights $\sum\limits_{i=1}^N \text{deg} Y_i$
is minimal;
(iii) the sum of orders $\sum\limits_{j=1}^N |I_j|$ of the
commutators $X_{I_j}$, corresponding to $Y_j$, is
minimal.
We say that the basis meeting conditions (i), (ii), (iii) is {\it
associated with the filtration} \eqref{filtr} at the point $u$.
\end{Definition}
Denote the dimension of the $k$-th element $H_k$ of filtration \eqref{filtr} at the point $u$ as $n_k=\operatorname{dim}H_k(u)$.
Then items (i), (ii) of Definition \ref{min_bas} are equivalent to
the fact that the vectors $\{Y_1(u),\ldots,Y_{n_k}(u)\}$ form
bases of $H_k(u)$ for all $k=1,\ldots,M$.
\begin{Remark}
Bases satisfying (i), (iii) were considered for ``classical''
sub-Riemannian geometry in \cite{bel,jean,mon} and other papers
(``normal'' or ``minimal'' frames), where (ii) and (iii) coincide. In
our case the necessity of considering both (ii) and (iii) can be
seen from Example \ref{reg_nereg}: using only (i), (ii) we could
choose both the basis $\{X_1,X_2,X_3\}$ and $\{X_1,X_2,[X_1,X_2]\}$,
and these bases define different algebraic structures. Adding both
conditions excludes such examples.
\end{Remark}
\begin{Proposition}\label{treug_prop}
For any vector field $X\in H_s$ we have
\begin{equation}\label{treugX}
X(v)=\sum\limits_{i=1}^N\xi_i(v)Y_i(v), \text{ where }\xi_i(u)=0
\text{ for }
\operatorname{deg}Y_i>s.\end{equation}\end{Proposition}
\begin{proof}
Indeed, by the choice of the basis \eqref{bas} the vectors
$Y_1(u),\ldots, Y_{n_s}(u)$ constitute a basis of $H_s(u)$, hence
$\xi_i(u)=0$ for $i>n_s$. Consequently, $\xi_i(u)=0$ for
$\operatorname{deg}Y_i>s$.
\end{proof}
\begin{Proposition}\label{treug_prop_reg} At a fixed point $u\in U$
the following identity holds:
\begin{equation}\label{treug}[Y_i,Y_j](u)=\sum\limits_
{\operatorname{deg}Y_k\leq
\operatorname{deg}Y_i+\operatorname{deg}Y_j}
c_{ijk}(u)Y_k(u).\end{equation} If the point $u$ is regular, this
identity holds not just in $u$, but in some neighborhood of $u$.
\end{Proposition}
\begin{proof}
The identity \eqref{treug} follows from the fact that
$[H_m,H_l]\subseteq H_{m+l}$.
In some neighborhood of a regular point we can choose the same
basis, satisfying (i), (ii), (iii), for all points, by definition of
regularity.
\end{proof}
\begin{Definition} Consider {\it second-kind canonical coordinates}
$\Phi^u:\mathbb R^N\to U$ on $U$ defined as
\begin{equation}\label{loc_coord}\Phi^u(x_1,\ldots,x_N)
=\operatorname{exp}(x_1Y_1)\circ
\operatorname{exp}(x_2Y_2)\circ\ldots\circ\operatorname{exp}(x_NY_N)
(u)\end{equation}
\end{Definition}
Due to the smoothness assumptions of Definition \ref{cc-space}
and theorems on continuous dependence of solutions of ordinary
differential equations on parameters \cite{pon}, the mapping $\Phi^u$
is a
$C^{M+1}$-diffeomorphism onto some neighborhood of zero
$V\subseteq\mathbb R^N$.
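A numeric sketch (ours, not part of the original text, using the fields of Example \ref{reg_nereg} with $Y_1=\partial_y$, $Y_2=\partial_x+y\partial_t$, $Y_3=\partial_t$): composing the three flows by Runge-Kutta integration reproduces the closed form $\Phi^0(x_1,x_2,x_3)=(x_2,x_1,x_3)$, which holds here because $y\equiv 0$ along the $Y_2$-flow started from the origin:

```python
import numpy as np

def flow(vf, time, p0, steps=200):
    # classical RK4 integration of dp/ds = vf(p) over s in [0, time]
    h = time / steps
    p = np.asarray(p0, dtype=float)
    for _ in range(steps):
        k1 = vf(p)
        k2 = vf(p + h/2 * k1)
        k3 = vf(p + h/2 * k2)
        k4 = vf(p + h * k3)
        p = p + h/6 * (k1 + 2*k2 + 2*k3 + k4)
    return p

Y1 = lambda p: np.array([0.0, 1.0, 0.0])       # d/dy
Y2 = lambda p: np.array([1.0, 0.0, p[1]])      # d/dx + y d/dt
Y3 = lambda p: np.array([0.0, 0.0, 1.0])       # d/dt

def Phi(x1, x2, x3, u=(0.0, 0.0, 0.0)):
    # Phi^u = exp(x1 Y1) o exp(x2 Y2) o exp(x3 Y3)(u): rightmost flow first
    return flow(Y1, x1, flow(Y2, x2, flow(Y3, x3, u)))

p = Phi(0.3, -1.2, 0.7)   # expected close to (-1.2, 0.3, 0.7)
```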
We will construct nilpotent approximations in these coordinates
\eqref{loc_coord} in the same way as it was done in
\cite{bel,herm}. Dilations are defined as in \cite{bel,fs,herm}:
on $\mathbb R^N$ let $\delta_{\varepsilon} (x_1,x_2,\ldots,
x_N)=(x_1\varepsilon^{\operatorname{deg}Y_1},x_2\varepsilon^
{\operatorname{deg}Y_2},
\ldots,x_N\varepsilon^{\operatorname{deg}Y_N}).$ The function
$f:\mathbb R^N\to\mathbb R$ is {\it homogeneous of order} $l$, if
$f(\delta_{\varepsilon}x)=\varepsilon^lf(x).$
\begin{Definition}\label{odnor_vf}
A vector field $X$ on $\mathbb R^N$ is homogeneous of order $s$,
if $\delta_{\varepsilon}^*X=\varepsilon^sX,$ where the action of
dilations on a vector field is defined as
$\delta_{\varepsilon}^*X(f\circ\delta_{\varepsilon})=(Xf)
\circ\delta_{\varepsilon}.$
\end{Definition}
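A quick sympy check of this definition on our running Heisenberg example (an illustration of ours, not from the paper): with weights $(1,1,2)$ and dilations $\delta_\varepsilon(x,y,t)=(\varepsilon x,\varepsilon y,\varepsilon^2 t)$, the field $X=\partial_x-\frac{y}{2}\partial_t$ satisfies $\delta_\varepsilon^*X=\varepsilon^{-1}X$, i.e. it is homogeneous of order $-1$; we verify the defining identity $X(f\circ\delta_\varepsilon)=\varepsilon\,(Xf)\circ\delta_\varepsilon$ on several test functions:

```python
import sympy as sp

x, y, t, eps = sp.symbols('x y t eps', positive=True)

def X(g):
    # X = d/dx - (y/2) d/dt
    return sp.diff(g, x) - y/2 * sp.diff(g, t)

def dilate(g):
    # f o delta_eps with weights (1, 1, 2)
    return g.subs({x: eps*x, y: eps*y, t: eps**2*t}, simultaneous=True)

ok = True
for f in (x*t, y**3 + t**2, sp.sin(x)*t + x*y):
    left = X(dilate(f))           # X (f o delta_eps)
    right = eps * dilate(X(f))    # eps * (X f) o delta_eps
    if sp.simplify(left - right) != 0:
        ok = False
```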
The proofs of the next Proposition \ref{vf_decomp} and Corollary
\ref{alg_prop} follow the scheme of \cite{herm} for $C^{\infty}$
vector fields meeting H\"ormander's condition. We briefly recall
the main steps of these proofs.
\begin{Proposition}\label{vf_decomp}
In coordinates $\Phi^u$ for the $C^{M+1}$-smooth vector field $X_I$
the following decomposition holds:
$$X_I^{\prime}(x):=(\Phi^u)^{-1}_*X_I(\Phi^u(x))=\sum\limits_{j=1}^Na_j(x)
\frac{\partial}{\partial x_j}=$$ $$=
\sum\limits_{j=1}^N\left(\sum\limits_{\substack{|\alpha|_h\geq
\operatorname{deg}Y_j- \operatorname{deg}X_I, \\ |\alpha|\leq
M}}f_{(j,\alpha)} x^{\alpha}+o(||x||^M)\right)
\frac{\partial}{\partial x_j} \text{ for }||x||\to 0,$$ where
$\alpha=(\alpha_1,\ldots,\alpha_N)$, $|\alpha|_h=
\sum\limits_{i=1}^N\alpha_ i\operatorname{deg}Y_i$, $|\alpha|=
\sum\limits_{i=1}^N\alpha_ i$, $f_{(j,\alpha)} \in\mathbb R$,
$||x||$ is the Euclidean norm in $\mathbb R^N$.
\end{Proposition}
\begin{proof} Applying to both parts of the obvious equality
$$\Phi^u_*X_I^{\prime}(x)=X_I(\Phi^u(x)),\ x\in V\subseteq
\mathbb R^N$$ the mapping $\operatorname{exp}(-x_NY_N)_*
\ldots\operatorname{exp}(-x_1Y_1)_*$ and carrying out all the
differentiations \cite{herm} in the obtained equality
$$\sum\limits_{j=1}^Na_j(x)\operatorname{exp}(-x_NY_N)_*
\ldots\operatorname{exp}(-x_1Y_{1})_*\Phi^u_*\left(\frac{\partial}
{\partial x_j}\right)=$$
\begin{equation}\label{tozhd}=\operatorname{exp}(-x_NY_N)_*
\ldots\operatorname{exp}(-x_1Y_1)_*X_I(\Phi^u(x)),\end{equation}
we get the identity
$$\sum\limits_{|\nu|=0}^M\frac{(-x_1)^{\nu_1}}{\nu_1!}\frac{(-x_2)^
{\nu_2}}{\nu_2!}
\ldots\frac{(-x_N)^{\nu_N}}{\nu_N!}(\operatorname{ad}^{\nu_N}Y_N\ldots
\operatorname{ad}^{\nu_2}Y_2
\operatorname{ad}^{\nu_1}Y_1,X_I)(u)+o(||x||^M)=$$
\begin{equation}\label{tozhd1}
=\sum\limits_{j=1}^Na_j(x)\sum\limits_{|\nu|=0}^M\frac{(-x_{j+1})^{\nu_{j+1}}}{\nu_{j+1}!}
\ldots\frac{(-x_N)^{\nu_N}}{\nu_N!}(\operatorname{ad}^{\nu_N}Y_N\ldots
\operatorname{ad}^{\nu_{j+1} }Y_{j+1},Y_j)(u)+o(||x||^M),
\end{equation}
where $$(\operatorname{ad}Z,Y)=[Z,Y];\
(\operatorname{ad}^{\nu+1}Z,Y)=[Z,(\operatorname{ad}^{\nu}Z,Y)];\
(\operatorname{ad}^0Z,Y)=Y.$$ According to Proposition
\ref{treug_prop}, the following decomposition holds:
$$(\operatorname{ad}^{\nu_N}Y_N\ldots\operatorname{ad}^{\nu_2}Y_2
\operatorname{ad}^{\nu_1}Y_1,X_I)(u)=\sum\limits_{k=1}^N\beta_{\nu}^k
Y_k(u),$$ where
\begin{equation}\label{beta}
\beta_{\nu}^k=0 \text{ for }
|\nu|_h=\sum\limits_{j=1}^N\nu_j\operatorname{deg}Y_j<\operatorname{deg}
Y_k-\operatorname{deg}X_I.
\end{equation}
Denoting
$$b_k(x)=\sum\limits_{|\nu|=0}^M\beta_{\nu}^k\frac{(-x_1)^{\nu_1}}
{\nu_1!}\frac{(-x_2)^{\nu_2}}{\nu_2!}
\ldots\frac{(-x_N)^{\nu_N}}{\nu_N!}$$
and
$$c_{jk}(x)=\sum\limits_{|\nu|=1}^M\frac{(-x_{j+1})^{\nu_{j+1}}}{\nu_{j+1}!}
\ldots\frac{(-x_N)^{\nu_N}}{\nu_N!}\gamma_{\nu,j}^k,\quad\text{where }(\operatorname{ad}^{\nu_N}Y_N\ldots\operatorname{ad}^{\nu_{j+1}
}Y_{j+1},Y_j)(u)=\sum\limits_{k=1}^N\gamma_{\nu,j}^kY_k(u),$$
we derive
\begin{equation}\label{lev}\sum\limits_{j=1}^Na_j(x)\left[Y_j(u)+
\sum\limits_{k=1}^Nc_{jk}(x)Y_k(u)\right]=\sum\limits_{k=1}^Nb_k(x)Y_k(u)+o(||x||^M)
\text{ for }||x||\to 0.
\end{equation}
Here $b_k(x)$ is a polynomial function whose expansion begins with terms $x_1^{\nu_1}\ldots x_N^{\nu_N}$ of order
$|\nu|_h\geq\operatorname{deg}Y_k-\operatorname{deg}X_I$, while $||(c_{jk}(x))||<1$ in some neighborhood of zero. Denoting
$$a(x)=(a_1(x),a_2(x),\ldots,a_N(x)),$$
$$b(x)=(b_1(x),b_2(x),\ldots,b_N(x)),\ C(x)=(c_{jk}(x))_{j,k=1}^N,$$
we finally obtain
$$a(x)=(I+C(x))^{-1}(b(x)+o(||x||^M))=b(x)-C(x)b(x)+o(||x||^M)
\text{ for }||x||\to 0,$$ whence, according to the properties
of $b_k(x)$, the proposition follows.
\end{proof}
Since $\operatorname{deg}X_I=|I|_h$ and the vector field $\frac{\partial}
{\partial x_j}$ is homogeneous of order $-\operatorname{deg}Y_j$, we have
\begin{Corollary}
The vector field $X_{I}^{\prime}\in C^{M}$, $|I|_h\leq M$, can be written as
$$X_{I}^{\prime}(x)=(X_{I}^{\prime})^{(-|I|_h)}(x)+(X_{I}^{\prime})^
{(-|I|_h+1)}(x)+
\ldots+(X_{I}^{\prime})^{(-|I|_h+M)}(x)+o(||x||^M) \text{ for
}||x||\to 0,$$ where the $C^{\infty}$-smooth vector field
$(X_{I}^{\prime})^{(-j)}$ is homogeneous of order $-j$.
\end{Corollary}
\begin{Corollary}\label{alg_prop}
The $C^{M+1}$-smooth vector fields
$\{\widehat{X}^u_{I}\}_{|I|_h\leq M}$ on $\mathbb M$,
where $\widehat{X}^u_{I}=
\Phi^u_*\langle(X_{I}^{\prime})^{(-|I|_h)}\rangle$,
constitute a nilpotent Lie algebra
\begin{equation}\label{lie}L=\operatorname{Lie}
\{\widehat{X}^u_1,\ldots,\widehat{X}^u_q\}\end{equation}
and we have $$H_l(u)=\widehat{H}_l(u), \quad\text{ where }
\widehat{H}_l=\operatorname{span}\{\widehat{X}^u_I\}_{|I|_h\leq l}.$$
The vector fields $\{\widehat{Y}^u_1,\widehat{Y}^u_2,\ldots,\widehat{Y}^u_N\},$
chosen from the commutators $\widehat{X}^u_{I}$ in the same way as the basis
\eqref{bas} from the commutators $X_{I}$, form a basis, associated with the filtration \eqref{filtr} on some neighborhood $\widehat{U}$ of the point $u$.
\end{Corollary}
\begin{proof}
The smoothness assertion follows from the fact that $\Phi^u$
is a $C^{M+1}$-diffeomorphism and that
$\Phi^u_*[X,Y]=[\Phi^u_*X,\Phi^u_*Y].$ The Lie algebra is nilpotent since for $|I|_h>M$ we have
$(X_I^{\prime})^{(-|I|_h)}=0$.
To prove the second part of the corollary it is sufficient to note
that $(\widehat{Y}_i^u)^{\prime}(0)=\frac{\partial}{\partial x_i}$
due to differentiation rules and homogeneity of the vector fields
$(\widehat{Y}_i^u)^{\prime}$. Thus the vector fields
$\widehat{Y}_i^u$ are linearly independent at the point $u$ and
hence in some neighborhood of it. Moreover, if
$$X_{I}(v)=\sum\limits_{i=1}^N\xi_i(v)Y_i(v),\
\widehat{X}_I(v)=\sum\limits_{i=1}^N\eta_i(v)Y_i(v),$$ then
$\xi_i(u)=\eta_i(u)$ for $n_{|I|_h-1}+1\leq i\leq N$. Indeed, in
coordinates \eqref{loc_coord} we have
$$X_I^{\prime}(0)=(\Phi^u_*)^{-1}X_I(u)=\sum\limits_{i=1}^N\xi_i(u)
\frac{\partial} {\partial x_i};\
\widehat{X}_I^{\prime}(0)=(\Phi^u_*)^{-1}\widehat{X}_I(u)=
\sum\limits_{i=1}^N\eta_i(u)\frac{\partial} {\partial x_i};$$
$$X_I^{\prime}(x)=\widehat{X}_I^{\prime}(x)+Z(x),$$ where the vector field
$Z(x)=\sum\limits_{i=1}^Nz_i(x)\frac{\partial} {\partial x_i}$
consists of summands having order of homogeneity bigger than
$-|I|_h$, hence $z_i(0)=0$ for $n_{|I|_h-1}+1\leq i\leq N$.
\end{proof}
Without loss of generality we assume that $U=\widehat{U}$.
\begin{Definition}
The vector fields $\{\widehat{X}^u_{I}\}_{|I|_h\leq M}$ are called
{\it nilpotent approximations} of the vector fields
$\{X_I\}_{|I|_h\leq M}$.
\end{Definition}
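\begin{Remark}
The following model example, which is not used in the sequel, illustrates Proposition \ref{vf_decomp} and the last definition. On $\mathbb R^2$ consider
$$X_1=\frac{\partial}{\partial x},\quad X_2=(x+x^2)\frac{\partial}{\partial y},\quad \operatorname{deg}X_1=\operatorname{deg}X_2=1,\quad u=0.$$
Here $Y_1=X_1$, $Y_2=[X_1,X_2]=(1+2x)\frac{\partial}{\partial y}$, $\operatorname{deg}Y_2=2$, and $\Phi^u(x_1,x_2)=(x_1,x_2(1+2x_1))$. A direct computation gives
$$X_1^{\prime}=\frac{\partial}{\partial x_1}-\frac{2x_2}{1+2x_1}\frac{\partial}{\partial x_2},\qquad
X_2^{\prime}=\frac{x_1+x_1^2}{1+2x_1}\frac{\partial}{\partial x_2}=(x_1-x_1^2+\ldots)\frac{\partial}{\partial x_2},$$
so, the coordinate $x_1$ having weight $1$ and $x_2$ weight $2$, the homogeneous parts of order $-1$ are
$$(X_1^{\prime})^{(-1)}=\frac{\partial}{\partial x_1},\qquad (X_2^{\prime})^{(-1)}=x_1\frac{\partial}{\partial x_2},$$
i.e. in the coordinates \eqref{loc_coord} the nilpotent approximation at $u=0$ is the Grushin structure.
\end{Remark}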
\begin{Definition}
Define a dilation group, associated with the basis \eqref{bas},
$\Delta^v_{\varepsilon}=\Phi^v\delta^v_{\varepsilon}(\Phi^v)^{-1}$
on $U$: if
$$w=\operatorname{exp}(w_1Y_1)
\circ\operatorname{exp}(w_2Y_2)\circ\ldots\circ\operatorname{exp}(w_NY_N)
(v),$$ then
\begin{equation}\label{dil}\Delta_{\varepsilon}^v w=\operatorname{exp}(
w_1\varepsilon^
{\operatorname{deg}Y_1}Y_1)\circ\operatorname{exp}(w_2\varepsilon^
{\operatorname{deg}Y_2}Y_2)
\circ\ldots\circ\operatorname{exp}(w_N\varepsilon^{\operatorname{deg}
Y_N}Y_N)(v).\end{equation}
\end{Definition}
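\begin{Remark}
For orientation, let $\mathbb M=\mathbb R^3$ with the Heisenberg vector fields
$$X_1=\frac{\partial}{\partial x}-\frac{y}{2}\frac{\partial}{\partial t},\quad
X_2=\frac{\partial}{\partial y}+\frac{x}{2}\frac{\partial}{\partial t},\quad
Y_3=[X_1,X_2]=\frac{\partial}{\partial t}.$$
Then $\operatorname{deg}Y_1=\operatorname{deg}Y_2=1$, $\operatorname{deg}Y_3=2$, and for $v=0$ a direct integration of the flows shows that the group \eqref{dil} reduces to the usual anisotropic dilations $\Delta^0_{\varepsilon}(x,y,t)=(\varepsilon x,\varepsilon y,\varepsilon^2 t)$, with respect to which each $X_i$ is homogeneous of order $-\operatorname{deg}Y_i$.
\end{Remark}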
From Proposition \ref{vf_decomp} we immediately obtain
\begin{Corollary}\label{gromov}
On $U$ the following convergence takes place:
$$(\Delta^u_{\varepsilon^{-1}})_*\varepsilon^{|I|_h}X_{I}
(\Delta^u_{\varepsilon}(v))\to\widehat{X}_{I}^u(v)
\text{ for } \varepsilon\to 0, |I|_h\leq M.$$
\end{Corollary}
\begin{proof}
Indeed, in coordinates \eqref{loc_coord} we have
$$(\delta_{\varepsilon^{-1}})_*\varepsilon^{|I|_h}X_{I}^{\prime}
(\delta_{\varepsilon}(x))=(\delta_{\varepsilon^{-1}})_*
\varepsilon^{|I|_h}\sum\limits_{j=1}^N\left(\sum
\limits_{\operatorname{deg}Y_j- |I|_h\leq|\alpha|_h\leq
M}f_{(j,\alpha)} (\delta_{\varepsilon} x)^{\alpha}+o(||\delta_{\varepsilon} x||^M)
\right)\frac{\partial}{\partial x_j}
$$
$$=
\varepsilon^{|I|_h}\sum\limits_{j=1}^N\varepsilon^{-\operatorname{deg}Y_j}
\left(\sum\limits_{\operatorname{deg}Y_j- |I|_h\leq|\alpha|_h\leq
M}f_{(j,\alpha)} \varepsilon^{|\alpha|_h}
x^{\alpha}+o(||\delta_{\varepsilon} x||^M) \right)\frac{\partial}{\partial
x_j}\to$$ $$\to \sum\limits_{j=1}^N
\left(\sum\limits_{|\alpha|_h=\operatorname{deg}Y_j-
|I|_h}f_{(j,\alpha)} x^{\alpha} \right)\frac{\partial}{\partial
x_j}=(X_I^{\prime})^{(-|I|_h)}$$ for $\varepsilon\to 0$.
\end{proof}
Introduce a distance function on $U$, generated by nilpotent
approximations, in a similar way as in \eqref{rho}:
$$\rho^u(v,w)=\inf\{\delta>0\mid \text{ there is a curve }
\gamma:[0,1]\to U \text{ such that }$$
\begin{equation}\label{rho_u}\gamma(0)=v, \gamma(1)=w,
\dot{\gamma}(t)=\sum\limits_{|I|_h\leq M} w_I
\widehat{X}^u_I(\gamma(t)), |w_I|<\delta^{|I|_h}\}.\end{equation}
In fact, $\rho^u$ is again a quasimetric; the generalized triangle
inequality will be proved in the next subsection.
\begin{Proposition}
The quasimetric $\rho^u$ meets the conical property
\begin{equation}\label{cone_prop}\rho^u(\Delta_{\varepsilon}^uv,\Delta_
{\varepsilon}^uw)= \varepsilon \rho^u(v,w).
\end{equation}
\end{Proposition}
\begin{proof}
By definition, $\rho^u(v,w)$ is the infimum of
$\underset{|I|_h\leq M}\max\{|a_I|^{1/|I|_h}\}$ over all curves
$\gamma$ such that
$$\begin{cases}
\dot{\gamma}(t)=\sum\limits_{|I|_h\leq M} a_I
\widehat{X}^u_I(\gamma(t)),\\ \gamma(0)=v, \gamma(1)=w.
\end{cases}$$
Consider the curve
\begin{equation}\label{gamma_eps}
\gamma_{\varepsilon}(t)=\Delta_{\varepsilon}^u\gamma(t).\end{equation}
Due to homogeneity of the vector fields $\widehat{X}^u_I$ we have
$$\begin{cases}
\dot{\gamma_{\varepsilon}}(t)=\sum\limits_{|I|_h\leq M}
a_I\varepsilon^{|I|_h}
\widehat{X}^u_I(\gamma_{\varepsilon}(t)),\\
\gamma_{\varepsilon}(0)=\Delta_{\varepsilon}^uv,
\gamma_{\varepsilon}(1)=\Delta_{\varepsilon}^uw.
\end{cases}$$ Note that all curves connecting the points
$\Delta_{\varepsilon}^uv$ and $\Delta_ {\varepsilon}^uw$ have the form
\eqref{gamma_eps}: indeed, if $\kappa(t)$ is an arbitrary curve connecting the points $\Delta_{\varepsilon}^uv$ and
$\Delta_ {\varepsilon}^uw$, then the curve $\gamma(t)=\Delta_
{\varepsilon^{-1}}^u\kappa(t)$ connects the points $v$ and $w$. Hence
$ \rho^u(\Delta_{\varepsilon}^uv,\Delta_ {\varepsilon}^uw)$ is the infimum of $\varepsilon\underset{|I|_h\leq
M}\max\{|a_I|^{1/|I|_h}\}$ over $\gamma$, from where the proposition follows.
\end{proof}
\section{The lifting construction and further properties of
the quasimetrics $\rho$ and $\rho^u$}
In this section we first recall the lifting construction proposed
by L.~Rothschild and E.~M.~Stein \cite{rs} and developed in many
other papers (\cite{good,hormel,cnsw, jean, bram1} etc.). We
present this construction in the form suitable for our purposes,
making essentially a synthesis of the ideas of papers \cite{cnsw}
and \cite{jean}, in order to get a (quasi)metric-decreasing
embedding of our C-C space into a regular one.
Using this embedding and results for regular quasimetric C-C
spaces \cite{vk,vk1} we will derive some important geometric
properties of the quasimetrics $\rho$ and $\rho^u$, in particular
prove the generalized triangle inequality for both of them.
Crucial for proving the main theorems of the next section is the
``rolling-of-the-box'' lemma (Proposition \ref{prokat}).
Let us recall the construction of a free nilpotent Lie algebra ${\cal
N}_{d_1,\ldots,d_q}^M$ with $q$ generators ${\cal X}_1,\ldots,{\cal
X}_q$ of weights $\{d_i\}_{i=1}^q$ and depth $M$ \cite{cnsw}.
Let ${\cal F}_q$ be a free (infinite-dimensional) Lie algebra with
$q$ generators, i.e. the only relations between commutators of the
vector fields $\{{\cal X}_i\}$ are skew-symmetry and the
Jacobi identity. Introduce on ${\cal F}_q$ dilations acting as
\begin{equation}\label{dil_n}\delta_{\varepsilon}
(\sum\limits_{j=1}^qc_j{\cal X}_j)=
\sum\limits_{j=1}^qc_j\varepsilon^{d_j}{\cal X}_j,\quad
\delta_{\varepsilon}({\cal X}_I)=\varepsilon^{|I|_h}{\cal
X}_I.\end{equation} Consider the subspaces ${\cal F}_q^l$
of elements homogeneous of order $l$
under the dilations \eqref{dil_n}. Then ${\cal
F}_q=\bigoplus\limits_{l=1}^{\infty}{\cal F}_q^l.$ Let
\begin{equation}\label{al_svob} {\cal N}={\cal
N}_{d_1,\ldots,d_q}^M={\cal F}_q/I_M,\quad\text{ where
}I_M=\bigoplus\limits_{l>M}{\cal F}_q^l\end{equation} is a Lie
algebra ideal in ${\cal F}_q$. Note that ${\cal F}_q/I_M$ is
isomorphic to the direct sum $\bigoplus\limits_{l\leq M}{\cal
F}_q^l.$
Let $\psi:{\cal N}\to \bigoplus\limits_{l\leq M}{\cal F}_q^l $ be a Lie algebra isomorphism and $ X_j=\psi({\cal X}_j)$.
Denote
\begin{equation}\label{dimN}\tilde{N}=\tilde{N}(d_1,\ldots,d_q,M)=
\operatorname{dim} {\cal N}_{d_1,\ldots,d_q}^M.\end{equation}
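For example, for $q=2$ generators of weights $d_1=d_2=1$ we have $\tilde{N}(1,1,2)=3$ (a basis is ${\cal X}_1,{\cal X}_2,[{\cal X}_1,{\cal X}_2]$, i.e. ${\cal N}_{1,1}^2$ is the Heisenberg algebra) and $\tilde{N}(1,1,3)=5$, since the third layer is spanned by $[{\cal X}_1,[{\cal X}_1,{\cal X}_2]]$ and $[{\cal X}_2,[{\cal X}_1,{\cal X}_2]]$.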
\begin{Definition} The vector fields $\tilde{X}_1,\tilde{X}_2,\ldots,
\tilde{X}_q$ on $\tilde{U}\subseteq\tilde{\mathbb M}$,
defining a filtration of the form \eqref{filtr}, are called {\it free up to the order $s$} at the point $u\in \tilde{U}$, if
$\operatorname{dim}H_s(u)=\tilde{N}(d_1,\ldots,d_q,s)$.\end{Definition}
\begin{Remark}[\rm\cite{rs,cnsw}]\label{svob_reg} If the vector fields
$\tilde{X}_1,\tilde{X}_2,\ldots, \tilde{X}_q$ on
$\tilde{U}\subseteq\tilde{\mathbb M}$ are free up to the order $M$
at the point $u\in\tilde{U}$, where $M$ is the depth of the C-C
space $\mathbb M$, then the point $u$ is regular.\end{Remark}
The proof of the next proposition follows the same lines as the
proof of a similar assertion in \cite{jean} for the case of smooth
vector fields satisfying H\"ormander's condition. We recall this
proof, since some of its details are needed below.
\begin{Proposition}\label{lift}Let all conditions of Definition
$\ref{cc-space}$ be satisfied and let $\tilde{N}$ be the
dimension \eqref{dimN} of the corresponding
free nilpotent Lie algebra. Consider the manifold
$\tilde{\mathbb M}=\mathbb M\times\mathbb
R^{\tilde{N}-N}$ of the dimension $\tilde{N}$.
Then there are a neighborhood $\tilde{U}$ of the point $(u,0)$
in $\tilde{\mathbb M}$,
a neighborhood $U$ of the point $u$, where $U\times\{0\}\subseteq\tilde{U}$, coordinates $(y,z)$ on $\tilde{U}$ and two systems of $C^M$-smooth vector fields
\begin{equation}\label{lift_vf}
\tilde{X}_k(y,z)=X_k(y)+\sum\limits_{j=N+1}^{\tilde{N}}b_{kj}(y,z)\frac
{\partial} {\partial z_j}
\text{ and }\widehat{\tilde{X}}_k(y,z)=\widehat{X}_k^u(y)+\sum\limits_
{j=N+1}^{\tilde{N}} b_{kj}(y,z)\frac{\partial} {\partial z_j},\
\end{equation}
$k=1,2,\ldots,q$, defining a C-C structure of depth $M$ on $\tilde{U}\subseteq\tilde{\mathbb M}$ and, hence, free up to the order $M$ on $\tilde{U}$. Here $b_{kj}(y,z)$ are polynomial functions on $\tilde{U}$, such that the vector fields
$\sum\limits_{j=N+1}^{\tilde{N}} b_{kj}(y,z)\frac{\partial}
{\partial z_j}$ are homogeneous of order $-d_k$, $k=1,2,\ldots,q$.
All points of some neighborhood $\tilde{V}=\tilde{V}(\tilde{u})
\subseteq\tilde{U}$ of the point $\tilde{u}=(u,0)$
are regular.
\end{Proposition}
\begin{proof}
Consider canonical vector fields $\{\widehat{\tilde{X^{\prime}}}_I\}_{|I|_h\leq M}\in C^{\infty}$ on $\mathbb R^{\tilde{N}}$ which generate the Lie algebra ${\cal N}$ defined in \eqref{al_svob} in such a way that ${\cal
F}_q^l=\text{span}\{\widehat{\tilde{X}}_I^{\prime}\}_{|I|_h\leq l}$,
$\widehat{\tilde{X^{\prime}}}_{I_j}(0)=e_j$,
$j=1,\ldots,\tilde{N}$ \cite{pos,fs,lan,vk}.
By definition of a free algebra, there is a surjective homomorphism of nilpotent Lie algebras $\Psi:{\cal N}\to
\text{Lie}\{\widehat{X}^u_1,\widehat{X}^u_2,\ldots,\widehat{X}^u_q\}$
such that
$\Psi(\widehat{\tilde{X^{\prime}}}_I)=\widehat{X}^{u}_I$,
$|I|_h\leq M$.
Let $G=\operatorname{exp}({\cal N})(0)$ be the corresponding Lie group, identified with $\mathbb R^{\tilde{N}}$. Define the action of $G$ on $\mathbb M$ by means of the homomorphism
$\Psi$: for
$g=\text{exp}\left(\sum\limits_{j=1}^{\tilde{N}}c_j
\widehat{\tilde{X^{\prime}}}_{I_j}\right)(0)\in G,\ v\in U$
let
$$g(v)=\text{exp}(\sum\limits_{j=1}^{\tilde{N}}c_j
\Psi(\widehat{\tilde{X^{\prime}}}_{I_j}))(v)=
\text{exp}(\sum\limits_{j=1}^{\tilde{N}}c_j
\widehat{X}_{I_j}^u)(v).$$
The isotropy subgroup $H=\{g\in G\mid g(u)=u\}\subseteq G$ is connected and invariant under dilations
\begin{equation}\label{dil_lift}
\tilde{\delta}_{\varepsilon}(x_1,x_2,\ldots,x_{\tilde{N}})=
(\varepsilon^{|I_1|_h}x_1, \varepsilon^{|I_2|_h}x_2,\ldots,
\varepsilon^{|I_{\tilde{N}}|_h}x_{\tilde{N}}),\end{equation} due to the
homogeneity of the vector fields $\widehat{\tilde{X^{\prime}}}_{I_j}$.
Moreover, its Lie algebra ${\cal H}$ satisfies
\begin{equation}\label{H_alg}{\cal
H}=\operatorname{span}\left\{\sum\limits_j
c_j\widehat{\tilde{X^{\prime}}}_{I_j}\mid \sum\limits_j
c_j\widehat{X}^u_{I_j}(u)=0\right\}.\end{equation}
Denote
by $\widehat{Z}_{N+1},\ldots,\widehat{Z}_{\tilde{N}}$
a basis of the subalgebra ${\cal H}$ consisting of vector fields homogeneous under the dilations.
The mapping
$\varphi_u:G\to U\subseteq \mathbb M$ defined as
$\varphi_u(g)=g(u)$ induces a diffeomorphism from the homogeneous space
$G/H=\{Hg\mid g\in G\}$ onto the neighborhood $U$:
$\varphi_u(Hg)=g(u).$ Consider on $G/H$ left-invariant vector fields
$$\widehat{X}_i^h(Hg)=\frac{d}{dt}[Hg\operatorname{exp}
(t\widehat{\tilde{X^{\prime}}}_i)(0)]\Big\vert_{t=0},\quad i
=1,\ldots,q.$$ Using the diffeomorphism $\varphi_u$, we identify them with the vector fields $\widehat{X}_i^u$:
$$(\varphi_u)_*\langle\widehat{X}_i^h\rangle(Hg)=\frac{d}{dt}
[\varphi_u(
Hg\operatorname{exp}(t\widehat{\tilde{X^{\prime}}}_i)(0))]
\Big\vert_{t=0}=$$
$$=\frac{d}{dt}[\operatorname{exp}(t\widehat{X}^u_i)(g(u))]\Big\vert_{t=0}
= \widehat{X}^u_i(g(u)).$$
Consider on $U$ the basis
\begin{equation}\label{bas_hat}
\widehat{Y}^u_1,\widehat{Y}^u_2,\ldots,\widehat{Y}^u_N,\end{equation}
consisting of the same commutators of the vector fields
$\{\widehat{X}_I^u\}_{|I|_h\leq M}$, as the basis \eqref{bas} of the commutators of $\{X_I\}_{|I|_h\leq M}$.
Taking into account \eqref{H_alg}, we see that the family of vector
fields $\{\widehat{Y}_i\}_{i=1}^N=\{(\varphi_u)^{-1}_*
\langle\widehat{Y}^u_i\rangle\}_{i=1}^N$ is a basis of the
algebraic complement to $\mathcal{H}$ in the Lie algebra
${\cal N}$, consisting of homogeneous vector fields.
Introduce on $G$ coordinates
\begin{equation}\label{odnor_coord}(y,z)\in\mathbb R^{\tilde{N}}\mapsto g=
\operatorname{exp}\left(\sum\limits_
{k=N+1}^{\tilde{N}}z_k\widehat{Z}_k\right)\operatorname{exp}
(y_{N}\widehat{Y}_{N})\ldots\operatorname{exp}
(y_{1}\widehat{Y}_{1}).\end{equation}
In these coordinates we have
\begin{equation}\label{decomp_group}
\widehat{\tilde{X^{\prime}}}_k(y,z)=\widehat{X}_k^h(y)+\sum
\limits_{j=N+1}^{\tilde{N}} b_{kj}(y,z)\frac{\partial} {\partial
z_j},\quad k=1,2,\ldots,q.
\end{equation}
Indeed,
$$\widehat{\tilde{X^{\prime}}}_i(g)=\frac{d}{dt}[g\operatorname{exp}
(t\widehat{\tilde{X^{\prime}}}_i)(0)]\Big\vert_{t=0};$$ in coordinates
\eqref{odnor_coord} we have
$$g\operatorname{exp}(t\widehat{\tilde{X^{\prime}}}_i)(0)=
\operatorname{exp}\left(\sum\limits_
{k=N+1}^{\tilde{N}}z_k\widehat{Z}_k\right)\operatorname{exp}
(y_{N}\widehat{Y}_{N})\ldots\operatorname{exp}
(y_{1}\widehat{Y}_{1})
\operatorname{exp}(t\widehat{\tilde{X^{\prime}}}_i)(0)=$$
$$=
\operatorname{exp}\left(\sum\limits_
{k=N+1}^{\tilde{N}}z_k\widehat{Z}_k\right) h(t)\operatorname{exp}
(c_N(y,t)\widehat{Y}_{N})\ldots\operatorname{exp}
(c_1(y,t)\widehat{Y}_{1}),$$ where $h(t)\in H$;
$$\hspace{-40.pt}Hg\operatorname{exp}(t\widehat{\tilde{X^{\prime}}}_i)(0)=
H\operatorname{exp}\left(\sum\limits_
{k=N+1}^{\tilde{N}}z_k\widehat{Z}_k\right) \operatorname{exp}
(y_{N}\widehat{Y}_{N})\ldots\operatorname{exp}
(y_{1}\widehat{Y}_{1})
\operatorname{exp}(t\widehat{\tilde{X^{\prime}}}_i)(0)=$$
$$=
H\operatorname{exp}
(c_{N}(y,t)\widehat{Y}_{N})\ldots\operatorname{exp}
(c_{1}(y,t)\widehat{Y}_{1}).$$ Thus the components of the vector
fields $\widehat{X}^h_i$ and $\widehat{\tilde{X^{\prime}}}_i$ with respect to
$\frac{\partial}{\partial y_k}$ coincide and are equal to
$\frac{d}{dt}c_k(y,t)\big\vert_{t=0}$. Hence, we have \eqref{decomp_group}.
Now define the vector fields $\tilde{X}_k$,
$\widehat{\tilde{X}}_k$ by formulas \eqref{lift_vf}. Since the
vector fields $\widehat{\tilde{X^{\prime}}}_k$ are homogeneous of
order $-d_k$, then the vector fields
$\sum\limits_{j=N+1}^{\tilde{N}} b_{kj}(y,z)\frac {\partial}
{\partial z_j},\ k=1,2,\ldots,q$ are homogeneous of the same
order. By construction, we have that
$\widehat{\tilde{X}}_k=\tilde{X}_k^{(-d_k)}$ w.r.t. the dilations
\eqref{dil_lift}. Thus the vector fields $\{\tilde{X}_k\}_{k=1}^q$
define a C-C structure of depth $M$ on $\tilde{\mathbb M}$ and
are free up to the order $M$ on $\tilde{U}$. The point $\tilde{u}$ and hence all
points in some neighborhood of it are regular, according to
Remark \ref{svob_reg}.
\end{proof}
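\begin{Remark}
A standard illustration of Proposition \ref{lift} (cf. \cite{rs}), not used in the sequel: the Grushin vector fields $X_1=\frac{\partial}{\partial x}$, $X_2=x\frac{\partial}{\partial y}$ on $\mathbb R^2$ (here $d_1=d_2=1$, $M=2$, $\tilde{N}=3$) lift to
$$\tilde{X}_1=\frac{\partial}{\partial x},\qquad \tilde{X}_2=x\frac{\partial}{\partial y}+\frac{\partial}{\partial z}$$
on $\mathbb R^3$. The fields $\tilde{X}_1$, $\tilde{X}_2$, $[\tilde{X}_1,\tilde{X}_2]=\frac{\partial}{\partial y}$ are linearly independent at every point, so the lifted fields are free up to the order $2$ and all points are regular, whereas for the original fields all points of the line $\{x=0\}$ are non-regular.
\end{Remark}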
\begin{Proposition}\label{lift_I}
For all multiindices $I$, such that $|I|_h\leq M$, the following decompositions hold:
\begin{equation}\label{lift_vf_I}
\tilde{X}_I(y,z)=X_I(y)+\sum\limits_{j=N+1}^{\tilde{N}}b_{Ij}(y,z)\frac
{\partial} {\partial z_j}\text{ and }
\widehat{\tilde{X}}_I(y,z)=\widehat{X}_I^u(y)+
\sum\limits_{j=N+1}^{\tilde{N}}
\widehat{b}_{Ij}(y,z)\frac{\partial} {\partial z_j},
\end{equation}
where $b_{Ij}(y,z),\widehat{b}_{Ij}(y,z)\in C^{M+1}(\tilde{U})$.
\end{Proposition}
\begin{proof}
Let us prove the first decomposition of \eqref{lift_vf_I} by
induction on the length of $I$ (the second decomposition is proved
in a similar way). Let \eqref{lift_vf_I} be true for all $J$, such
that $|J|_h\leq l$. By the Jacobi identity, any vector field
$\tilde{X}_I$, where $|I|_h\leq l+\min\{d_1,\ldots,d_q\}$, can be
represented as $\tilde{X}_I=[\tilde{X}_i,\tilde{X}_J]$, where
$i\in\{1,\ldots,q\}$ and $|J|_h\leq l$. By induction and taking into
account the identity \eqref{lift_vf}, we get
$$\tilde{X}_I(y,z)=[X_i,X_J](y)+[X_i,
\sum\limits_{j=N+1}^{\tilde{N}} b_{Jj}(y,z)\frac{\partial}
{\partial z_j}]+$$$$+[\sum\limits_{j=N+1}^{\tilde{N}}
b_{ij}(y,z)\frac{\partial} {\partial
z_j},X_J]+[\sum\limits_{j=N+1}^{\tilde{N}}
b_{ij}(y,z)\frac{\partial} {\partial
z_j},\sum\limits_{j=N+1}^{\tilde{N}} b_{Jj}(y,z)\frac{\partial}
{\partial z_j}]$$
$$=X_I(y)+\sum\limits_{j=N+1}^{\tilde{N}} \left(X_ib_{Jj}-X_Jb_{ij}+
\sum\limits_{m=N+1}^{\tilde{N}}\Bigl(b_{im}\frac{\partial b_{Jj}}{\partial
z_m}-b_{Jm}\frac{\partial b_{ij}}{\partial z_m}\Bigr)\right)\frac{\partial} {\partial z_j}.$$ Thus the vector field
$\tilde{X}_I$ has the desired form. The rest of the proposition
follows from the smoothness assumptions of Definition \ref{cc-space}.
\end{proof}
Consider the neighborhood $\tilde{U}$ and the vector fields
$\tilde{X}_I$ from Propositions \ref{lift}, \ref{lift_I}. Let
$\pi:\tilde{U}\to U$ be a {\it canonical projection} acting on an
arbitrary point $\tilde{v}=(v,y)$, such that $v\in U$,
$y\in\mathbb R^{\tilde{N}-N}$, as $\pi(\tilde{v})=v$.
The next proposition states that the projection is distance-decreasing (cf. \cite{bel,jean}).
\begin{Proposition}\label{ineq_prop} For any $v,w\in U$ and
$p,q\in\mathbb R^{\tilde{N}-N}$ the following inequalities hold:
\begin{equation}\label{ineq_rho}\rho(v,w)\leq \tilde{\rho}((v,p),(w,q)),\end{equation}
\begin{equation}\label{ineq_u_rho}\rho^u(v,w)\leq
\tilde{\rho}^{\tilde{u}}((v,p),(w,q)),\end{equation} where the
quasimetrics $\tilde{\rho}$, $\tilde{\rho}^{\tilde{u}}$ on the
regular C-C space $\tilde{U}$ are defined in a similar way as $\rho,\rho^u$ on
the initial neighborhood $U\subseteq\mathbb M$.
\end{Proposition}
\begin{proof} Let us show inequality \eqref{ineq_rho}.
Denote $\tilde{v}=(v,p)$, $\tilde{w}=(w,q)$. For arbitrarily
small $\zeta>0$ there are $\{w_I\}_{|I|_h\leq M}$ and a curve
$\tilde{\gamma}(t)$ such that
$$\begin{cases}\dot{\tilde{\gamma}}(t)=\sum\limits_{|I|_h\leq M}w_I
\tilde{X}_I(\tilde{\gamma}(t))=\sum\limits_{|I|_h\leq M}w_I
(X_I(\pi(\tilde{\gamma}(t)))+\sum\limits_{k=N+1}^{\tilde{N}}b_{Ik}
(\tilde{\gamma}(t))\frac{\partial}{\partial z_k}),\\
\tilde{\gamma}(0)=\tilde{v}, \tilde{\gamma}(1)=\tilde{w},
\end{cases}$$
and
$\underset{|I|_h\leq
M}\max\{|w_I|^{1/|I|_h}\}\leq\tilde{\rho}(\tilde{v},\tilde{w})+\zeta$. Let
$\gamma(t)=\pi(\tilde{\gamma}(t))$, then
$$\begin{cases}\dot{\gamma}(t)=\sum\limits_{|I|_h\leq M}w_I
X_I(\gamma(t)),\\
\gamma(0)=v, \gamma(1)=w.
\end{cases}$$
Thus, the curve $\gamma(t)$ lies in $U$ and joins the points $v$
and $w$, whence
$\rho(v,w)\leq\underset{|I|_h\leq
M}\max\{|w_I|^{1/|I|_h}\}\leq\tilde{\rho}(\tilde{v},\tilde{w})+\zeta$.
Since $\zeta>0$ is arbitrary, \eqref{ineq_rho} follows. The
inequality \eqref{ineq_u_rho} is proved in the same way.
\end{proof}
\begin{Proposition}[\rm Generalized triangle inequalities]
\label{treug_ineq} For any point $g\in U$ there are constants
$Q,Q_g>0$ such that, for all $u,v,w\in U$, we have
\begin{equation}\label{treug_rho}\rho(v,w)\leq
Q(\rho(u,v)+\rho(u,w)),\end{equation}
\begin{equation}\label{treug_rhou}
\rho^g(v,w)\leq Q_g(\rho^g(u,v)+\rho^g(u,w)).\end{equation}
\end{Proposition}
\begin{proof}
For any (arbitrarily small) $\zeta>0$ consider
$$\{a_I\}_{|I|_h\leq M}\text{ and }\{b_I\}_{|I|_h\leq M},$$ such that
$$v=\text{exp}(\sum\limits_{|I|_h\leq M}a_IX_I)(u),\
w=\text{exp}(\sum\limits_{|I|_h\leq M}b_IX_I)(u)$$ and
$$\underset{|I|_h\leq
M}\max\{|a_I|^{1/|I|_h}\}\leq\rho(u,v)+\zeta,\ \underset{|I|_h\leq
M}\max\{|b_I|^{1/|I|_h}\}\leq\rho(u,w)+\zeta.$$ Let
$\tilde{u}=(u,0)$ and consider on $\tilde{U}$ points
$$\tilde{v}=\text{exp}(\sum\limits_{|I|_h\leq
M}a_I\tilde{X}_I)(\tilde{u})\text{ and }
\tilde{w}=\text{exp}(\sum\limits_{|I|_h\leq
M}b_I\tilde{X}_I)(\tilde{u}).$$ Then we have $v=\pi(\tilde{v}),
w=\pi(\tilde{w})$ and
$$\tilde{\rho}(\tilde{u},\tilde{v})=\underset{|I|_h\leq
M}\max\{|a_I|^{1/|I|_h}\},\
\tilde{\rho}(\tilde{u},\tilde{w})=\underset{|I|_h\leq
M}\max\{|b_I|^{1/|I|_h}\}.$$ According to Proposition \ref{ineq_prop} and the
generalized triangle inequality for $\tilde{\rho}$ (in the
neighborhood of a regular point \cite{vk}) we have
$$\rho(v,w)\leq \tilde{\rho}(\tilde{v},\tilde{w})\leq
Q(\tilde{\rho}(\tilde{u},\tilde{v})+\tilde{\rho}(\tilde{u},\tilde{w}))\leq
Q(\rho(u,v)+\rho(u,w)+2\zeta),$$ from where \eqref{treug_rho}
follows; \eqref{treug_rhou} is proved in a similar way.
\end{proof}
\begin{Proposition}[\rm ``Rolling-of-the-box'' lemma]\label{prokat}
For all points $u,v\in U$ and $r,\xi>0$ for which both sides of the
following inclusions make sense (i.e. lie in $U$), we have
\begin{equation}\label{prokat_incl_u}
\underset{x\in B^{\rho^u}(v,r)}\bigcup B^{\rho^u}(x,\xi)\subseteq
B^{\rho^u}(v,r+C\xi),\end{equation}
\begin{equation}\label{prokat_incl} \underset{x\in
B^{\rho}(v,r)}\bigcup B^{\rho}(x,\xi)\subseteq
B^{\rho}(v,r+C\xi+O(r^{1+\frac{1}{M}})+O(\xi^{1+\frac{1}{M}})).
\end{equation}
\end{Proposition}
\begin{proof}
Let us prove \eqref{prokat_incl}. Fix points $x,z$, such that $\rho(v,x)< r$, $\rho(x,z)<
\xi$, and show that $\rho(v,z)<
r+C\xi+O(r^{1+\frac{1}{M}})+O(\xi^{1+\frac{1}{M}})$. For arbitrarily
small
$\zeta>0$
consider two curves $\gamma_1$, $\gamma_2$, such that
$$\begin{cases}\dot{\gamma_1}(t)=\sum\limits_{|I|_h\leq M}x_I
X_I(\gamma_1(t)),\\
\gamma_1(0)=v, \gamma_1(1)=x,
\end{cases}\quad\quad
\begin{cases}\dot{\gamma_2}(t)=\sum\limits_{|I|_h\leq M}z_I
X_I(\gamma_2(t)),\\
\gamma_2(0)=x, \gamma_2(1)=z,
\end{cases}
$$
and $$\underset{|I|_h\leq
M}\max\{|x_I|^{1/|I|_h}\}\leq\rho(v,x)+\zeta,\ \underset{|I|_h\leq
M}\max\{|z_I|^{1/|I|_h}\}\leq\rho(x,z)+\zeta.$$
Consider a point $\tilde{v}=(v,0)\in \tilde{U}$ and a curve
$\tilde{\gamma}_1$ such that
$$\begin{cases}\dot{\tilde{\gamma}}_1(t)=\sum\limits_{|I|_h\leq M}x_I
\tilde{X}_I(\tilde{\gamma}_1(t)),\\
\tilde{\gamma}_1(0)=\tilde{v}.\end{cases}$$ Since
$\gamma_1(t)=\pi(\tilde{\gamma}_1(t))$, we have
$\tilde{\gamma}_1(1)=(x,p)=:\tilde{x}\in\tilde{U}$, where $p\in\mathbb
R^{\tilde{N}-N}$. Moreover,
$$\tilde{\rho}(\tilde{v},\tilde{x})\leq\underset{|I|_h\leq
M}\max\{|x_I|^{1/|I|_h}\}<r+\zeta.$$
In a similar way, consider a curve $\tilde{\gamma}_2$ such that
$$\begin{cases}\dot{\tilde{\gamma}}_2(t)=\sum\limits_{|I|_h\leq M}z_I
\tilde{X}_I(\tilde{\gamma}_2(t)),\\
\tilde{\gamma}_2(0)=\tilde{x}.\end{cases}$$ Then
$\gamma_2(t)=\pi(\tilde{\gamma}_2(t))$, and hence
$\tilde{\gamma}_2(1)=(z,q)=:\tilde{z}\in\tilde{U}$, where $q\in\mathbb
R^{\tilde{N}-N}$, and
$$\tilde{\rho}(\tilde{x},\tilde{z})\leq\underset{|I|_h\leq
M}\max\{|z_I|^{1/|I|_h}\}<\xi+\zeta.$$
According to Remark \ref{svob_reg}, all points of $\tilde{U}$ are
regular w.r.t. the C-C structure induced by the vector fields $\{\tilde{X}_I\}_{|I|_h\leq M}$.
By the Campbell-Hausdorff formula \cite{lan}, for any vector fields
$X,Y\in C^{k_0+1}$ the following decomposition is true:
\begin{equation}\label{kh}\text{exp}(sY)\circ\text{exp}(tX)(v)=
\text{exp}(sY+tX+\frac{st}{2}[X,Y]+\end{equation}
$$+\sum\limits_{2\leq k+j\leq
k_0}s^kt^jC_{kj}(X,Y)+O(s^{k_0+1})+O(t^{k_0+1}))(v),$$ where
$C_{kj}(X,Y)$ are linear combinations of ($k+j-1$)-order commutators of $X$ and $Y$.
Applying \eqref{kh}, by simple computations, we get
$$\operatorname{exp}\left(\sum\limits_{|I|_h\leq M}z_I\tilde{X}_I
\right)\circ\operatorname{exp}\left(\sum\limits_{|I|_h\leq
M}x_I\tilde{X}_I
\right)(\tilde{v})=\operatorname{exp}\left(\sum\limits_{|I|_h\leq
M}v_I\tilde{X}_I \right)(\tilde{v}),$$ where
$$v_I=x_I+z_I+\sum\limits_{\substack {|\alpha+\beta|\leq M\\
|\alpha+\beta|_h\geq|I|_h}}F^I_{\alpha,\beta}x^{\alpha}z^{\beta}+
O(||x||^{M+1}) +O(||z||^{M+1}).$$
Consequently,
$$|v_I|\leq|x_I|+|z_I|+\sum\limits_{|\alpha+\beta|_h=|I|_h}
|F^I_{\alpha,\beta}|x^{\alpha}z^{\beta}+$$$$+\sum
\limits_{\substack{|\alpha+\beta|\leq M\\|\alpha+\beta|_h>|I|_h}}
|F^I_{\alpha,\beta}|x^{\alpha}z^{\beta}+O(||x||^{M+1})
+O(||z||^{M+1})\leq$$$$\leq(\tilde{r}+C_I\tilde{\xi})^{|I|_h}+O(\tilde{r}
^{|I|_h+1}) +O(\tilde{\xi}^{|I|_h+1})+O(\tilde{r}^{M+1})+
O(\tilde{\xi}^{M+1}),$$ where $\tilde{r}=r+\zeta$ and $\tilde{\xi}=\xi+\zeta$. It follows that
$$\tilde{\rho}(\tilde{v},\tilde{z})=\underset{|I|_h\leq
M}\max\{|v_I|^{1/|I|_h}\}\leq
\tilde{r}+C\tilde{\xi}+O(\tilde{r}^{1+\frac{1}{M}})+
O(\tilde{\xi}^{1+\frac{1}{M}}).$$
Applying \eqref{ineq_rho}, we finally obtain
$$\rho(v,z)\leq \tilde{\rho}(\tilde{v},\tilde{z})\leq r
+C\xi+O(r^{1+\frac{1}{M}})+O(\xi^{1+\frac{1}{M}})+O(\zeta),$$ from
where \eqref{prokat_incl} follows. The inclusion \eqref{prokat_incl_u} can be proved in a similar way.
\end{proof}
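\begin{Remark}
The lowest-order effect captured by \eqref{kh} can already be seen in the Heisenberg example $X=\frac{\partial}{\partial x}-\frac{y}{2}\frac{\partial}{\partial t}$, $Y=\frac{\partial}{\partial y}+\frac{x}{2}\frac{\partial}{\partial t}$, $[X,Y]=\frac{\partial}{\partial t}$, which is not used below: a direct integration of the flows gives
$$\operatorname{exp}(sY)\circ\operatorname{exp}(tX)(0)=\Bigl(t,s,\frac{st}{2}\Bigr)=\operatorname{exp}\Bigl(tX+sY+\frac{st}{2}[X,Y]\Bigr)(0),$$
and here the series \eqref{kh} terminates, since all higher-order commutators vanish.
\end{Remark}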
\section{Main theorems on local geometry}
\begin{Proposition}
Consider on $\tilde{U}$ bases $\{\tilde{X}_I\}_{|I|_h\leq M}\text{
and }
\{\widehat{\tilde{X}}_{I}\}_{|I|_h\leq M},$ consisting of commutators
of the vector fields defined in
\eqref{lift_vf}.
Then, in coordinates $x=(y,z)$ defined in \eqref{odnor_coord},
for all $x\in\tilde{U}$, such that $|x_j|\leq
\varepsilon^{|I_j|_h}$, the following decompositions hold:
\begin{equation}\label{dec1}
\tilde{X}_I(x)=\sum\limits_{|J|_h\leq M}a_{I,J}(x)
\widehat{\tilde{X}}_J(x),
\end{equation}
where
$$ a_{I,J}=
\begin{cases}
\delta_{I,J}+O(\varepsilon),\quad&|J|_h=|I|_h,\\
o(\varepsilon^{|J|_h-|I|_h}),\quad&|J|_h>|I|_h,\\
O(1),\quad&|J|_h<|I|_h.\\
\end{cases}
$$
\end{Proposition}
\begin{proof}
From Propositions \ref{lift_I} and \ref{vf_decomp} it follows that
$$\tilde{X}_{I}(x)=\widehat{\tilde{X}}_{I}(x)+R_I(x),$$ where
$x=(y,z)\in\mathbb R^{\tilde{N}}$, while the vector field $R_I$
consists of summands of homogeneity order, w.r.t. the dilations
\eqref{dil_lift}, bigger than
$-|I|_h$. Since the vector fields $\widehat{\tilde{X}}_J$
are homogeneous of order $-|J|_h$, we have
$$R_I(x)=\sum\limits_{|J|_h\leq M}\sum\limits_{|\alpha|_h>
|J|_h-|I|_h} c_{I,J,\alpha}x^{\alpha}\widehat{\tilde{X}}_J=$$
$$=\sum\limits_{|J|_h>|I|_h}\varepsilon^{|J|_h-|I|_h+1}
(O(1)+O(\varepsilon))\widehat{\tilde{X}}_J+$$$$+\sum\limits_{|J|_h=|I|_h}
\varepsilon(O(1)+O(\varepsilon))\widehat{\tilde{X}}_J+
\sum\limits_{|J|_h<|I|_h}(O(1)+O(\varepsilon))\widehat{\tilde{X}}_J=\sum
a_{I,J}\widehat{\tilde{X}}_J,$$
from where the proposition follows.
\end{proof}
Next we introduce an important characteristic of the C-C space
$\mathbb M$.
\begin{Definition}\label{int_lines_koef}
Let $u,v\in U$, $r>0$. {\it The divergence of integral lines}
with nilpotentizations centered at $u$ over a box of radius $r$
centered at $v$ is the value
\begin{equation}\label{R}
R(u,v,r)=\max\{\underset{\widehat{y}\in
B^{\rho^u}(v,r)}\sup\{\rho^u(y, \widehat{y})\},\underset{y\in
B^{\rho}(v,r)}\sup\{\rho(y, \widehat{y})\}\}.\end{equation} Here
the points $y$ and $\widehat{y}$ are defined as follows. Let
$\gamma(t)$ be an arbitrary curve defined as a solution of the
system of ODEs
$$\begin{cases}\dot{\gamma}(t)=\sum\limits_{|I|_h\leq M}b_I
\widehat{X}^u_I(\gamma(t)),\\
\gamma(0)=v, \gamma(1)=\widehat{y},
\end{cases}$$
and
\begin{equation}\label{b_I}\rho^u(v,\widehat{y})\leq\underset{|I|_h\leq
M}\max\{|b_I|^{1/|I|_h}\}\leq r.\end{equation} Define
$y=\text{exp}(\sum\limits_{|I|_h\leq M}b_I X_I)(v)$. In this way,
the supremum in the first expression of \eqref{R} is taken not
only over $\widehat{y}\in B^{\rho^u}(v,r)$, but also over the
infinite set of the possible $\{b_I\}_{|I|_h\leq M}$, satisfying
\eqref{b_I}. The
second expression is understood in a similar way.
\end{Definition}
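\begin{Remark}
As an illustration of the last definition (this model example is not used below), let $X_1=\frac{\partial}{\partial x}$, $X_2=(x+y)\frac{\partial}{\partial y}$ on $\mathbb R^2$ and $u=v=0$; here $M=2$, the coordinates \eqref{loc_coord} coincide with the original ones, and
$$\widehat{X}^0_1=\frac{\partial}{\partial x},\quad \widehat{X}^0_2=x\frac{\partial}{\partial y},\quad [\widehat{X}^0_1,\widehat{X}^0_2]=[X_1,X_2]=\frac{\partial}{\partial y}.$$
A direct integration of the flows shows that for controls $|b_I|\leq r^{|I|_h}$ the endpoints $y$ and $\widehat{y}$ have the same first coordinate, while their second coordinates differ by $O(r^3)$; since the second coordinate has weight $2$, both quasimetrics estimate this discrepancy by $O(r^{3/2})$, i.e. $R(0,0,r)=O(r^{3/2})=O(r^{1+\frac{1}{M}})$, in accordance with Theorem \ref{theorem_int_lines} below.
\end{Remark}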
\begin{Proposition}\label{lemma_incl}
Let $u,v\in U$ and $r>0$. Then the following inclusions are true:
\begin{equation}\label{incl1}B^{\rho}(v,r)
\subseteq B^{\rho^u}(v,r+CR(u,v,r)),\end{equation}
\begin{equation}\label{incl2}B^{\rho^u}(v,r)\subseteq
B^{\rho}(v,r+CR(u,v,r)+O(r^{1+\frac{1}{M}})+O(R(u,v,r)^{1+
\frac{1}{M}})),\end{equation}
where $R(u,v,r)$ is defined by
\eqref{R}.
\end{Proposition}
\begin{proof}
Let $y\in B^{\rho}(v,r)$, i.e. $\rho(v,y)<r$, and show that
$\rho^u(v,y)<r+CR(u,v,r)$ for some constant $C$.
By definition of the quasimetric $\rho$, for arbitrarily small
$\zeta>0$ there are $\{a_I\}_{|I|_h\leq M}$, such that
$$y=\text{exp}(\sum\limits_{|I|_h\leq M}a_IX_I)(v)$$ and
$$\underset{|I|_h\leq
M}\max\{|a_I|^{1/|I|_h}\}\leq\rho(v,y)+\zeta\leq r.$$
Consider a point
$\widehat{y}=\text{exp}(\sum\limits_{|I|_h\leq M}a_I\widehat{X}^u_I)(v).$ Then
$\widehat{y}\in B^{\rho^u}(v,r)$, since
$$\rho^u(v,\widehat{y})\leq\underset{|I|_h\leq
M}\max\{|a_I|^{1/|I|_h}\}\leq r.$$ Obviously,
$\rho^u(y,\widehat{y})<R(u,v,r)$. Hence, by \eqref{prokat_incl_u},
$$y\in\underset{x\in B^{\rho^u}(v,r)}\bigcup B^{\rho^u}(x,R(u,v,r))
\subseteq B^{\rho^u}(v,r+CR(u,v,r)),$$ and \eqref{incl1} is
proved.
The inclusion \eqref{incl2} is proved in the same way with the
application of \eqref{prokat_incl}.
\end{proof}
\begin{Theorem}[\rm Theorem on divergence of integral lines]
\label{theorem_int_lines} Let $u,v\in U$,
$\rho(u,v)=O(\varepsilon)$, $r=O(\varepsilon)$ and
$B^{\rho}(v,r)\cup B^{\rho^u}(v,r)\subseteq U$. Then we have the
following estimate on divergence of integral lines from Definition
$\ref{int_lines_koef}$:
$$R(u,v,r)=O(\varepsilon^{1+\frac{1}{M}}).$$
\end{Theorem}
\begin{proof}
For a fixed point $\widehat{y}\in B^{\rho^u}(v,r)$ and $\zeta>0$ we consider
arbitrary $\{b_I\}_{|I|_h\leq M}$ such that
$$\widehat{y}=\operatorname{exp}(\sum\limits_{|I|_h\leq
M}b_I\widehat{X}^u_I)(v)\quad\text{
and
}\quad\underset{|I|_h\leq
M}\max\{|b_I|^{1/|I|_h}\}\leq\rho^u(v,\widehat{y})+\zeta\leq r.$$ Let
$y=\operatorname{exp}\bigl(\sum\limits_{|I|_h\leq M}b_IX_I\bigr)(v)$ and
$v=\text{exp}\Bigl(\sum\limits_{|I|_h\leq M}v_IX_I\Bigr)(u)\in
U$. Consider points
$$\tilde{v}=\text{exp}\Bigl(\sum\limits_{|I|_h\leq
M}v_I\tilde{X}_I\Bigr)(u,0)\in \tilde{U}\quad\text{
and
}\quad\tilde{y}=\text{exp}\Bigl(\sum\limits_{|I|_h\leq
M}b_I\tilde{X}_I\Bigr)(\tilde{v})\in\tilde{U}.$$ Then
$$\tilde{\rho}(\tilde{v},\tilde{y})=\underset{|I|_h\leq
M}\max\{|b_I|^{1/|I|_h}\}=O(\varepsilon).$$ Let
$\widehat{\tilde{y}}:=\operatorname{exp}(\sum\limits_{|I|_h\leq
M}b_I\widehat{\tilde{X}}^{\tilde{u}}_I)(\tilde{v})$. Since all points of $\tilde{U}$ are
regular, from Theorem \ref{int_lines_perem} it follows that
$$\max\{\tilde{\rho}(\tilde{y},\widehat{\tilde{y}}),
\tilde{\rho}^{\tilde{u}} (\tilde{y},\widehat{\tilde{y}})\} =
O(\varepsilon^{1+\frac{1}{M}}),$$ whence, taking into account
Proposition 8, the required estimate follows. The application of
Theorem \ref{int_lines_perem} is possible due to Proposition 11.
\end{proof}
\begin{Remark}
In the paper \cite{vk1}, where Theorem \ref{int_lines_perem} was
proved, the nilpotentized vector fields satisfy estimates
\eqref{dec} which are stronger than \eqref{dec1}, namely, with
$O(\varepsilon)$ in place of $O(1)$ in the last estimate. Here we
cannot guarantee $O(\varepsilon)$ because, in contrast to the
case of regular points, not all of the values of the commutators
$\widehat{X}_I(u)$ at $u$ might coincide with the values $X_I(u)$
(see \cite{herm} and references therein). Nevertheless, a revision
of the proof of Theorem \ref{int_lines_perem} shows that it holds
also with these weaker estimates. Note also that Theorem
\ref{int_lines_perem} is true in any coordinates in which the
decomposition \eqref{dec} or \eqref{dec1} holds.
\end{Remark}
\begin{Theorem}[\rm Local approximation theorem]\label{lat_quasim}
For any points $u\in U$ and $v,w\in U$, such
that $\rho(u,v)=O(\varepsilon)$, $\rho(u,w)=O(\varepsilon)$, we
have
$$|\rho(v,w)-\rho^u(v,w)|=O(\varepsilon^{1+\frac{1}{M}}).$$
\end{Theorem}
\begin{proof}
In Proposition
\ref{lemma_incl} let $r:=\rho(v,w)$. Then
$w\in\bar{B}^{\rho}(v,r)$, hence
$$\rho^u(v,w)\leq\rho(v,w)+CR(u,v,r).$$ In the same way, setting
$r:=\rho^u(v,w)$, we obtain
$$\rho(v,w)\leq\rho^u(v,w)+CR(u,v,r)+O(r^{1+\frac{1}{M}})+
O(R(u,v,r)^{1+\frac{1}{M}}).$$ Due to Proposition \ref{treug}
(the generalized triangle inequality for $\rho$) we have
$r=O(\varepsilon)$ in both cases, and hence the assertion follows
from Theorem \ref{theorem_int_lines}.
\end{proof}
\section{The tangent cone theorems}
First we briefly recall the notion and basic properties of
convergence of a sequence of quasimetric spaces, as well as the
notion of the tangent cone to a quasimetric space, introduced in
\cite{dan,smj} as an extension of Gromov's theory for metric
spaces.
The {\it distortion} (see e.g. \cite{bur}) of a mapping
$f:(X,d_X)\to (Y,d_Y)$ is the
value
$$\operatorname{dis}(f)=\sup\limits_{u,v\in
X}|d_Y(f(u),f(v))-d_X(u,v)|,$$ which measures how far $f$ is from
being an isometry.
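For instance, if $f:[0,1]\to\mathbb R$ is the scaling map
$f(x)=\lambda x$, $\lambda>0$, with the standard metrics on both
spaces, then
$$\operatorname{dis}(f)=\sup\limits_{u,v\in[0,1]}
\bigl||\lambda u-\lambda v|-|u-v|\bigr|=|\lambda-1|,$$
so that $f$ is an isometry if and only if $\lambda=1$.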
\begin{Definition}[\cite{dan,smj}]\label{ghq-dist}
The {\it distance} $d_{qm}(X,Y)$ between quasimetric spaces
\linebreak $(X,d_X)$ and $(Y,d_Y)$ is defined as the infimum taken
over $\rho>0$ for which there exist (not necessarily continuous)
mappings $f:X\to Y$ and $g:Y\to X$ such that
$$\max\Bigl\{\operatorname{dis}(f),\,
\operatorname{dis}(g),\,\sup\limits_{x\in X}d_{X}(x,g(f(x))),\,
\sup\limits_{y\in Y}d_{Y}(y,f(g(y)))\Bigr\}\leq\rho.
$$
\end{Definition}
Note that for bounded quasimetric spaces the introduced distance is
obviously finite.
\begin{Proposition}\label{s1} The distance $d_{qm}$ possesses the following
properties$:$
$1)$ if quasimetric spaces $X$ and $Y$ are isometric, then
$d_{qm}(X,Y)=0$; if $X$ and $Y$ are compact and $d_{qm}(X,Y)=0$,
then $X$ and $Y$ are isometric $($nondegeneracy$)$.
$2)$ $d_{qm}(X,Y)=d_{qm}(Y,X)$ $($symmetry$)$.
$3)$ $d_{qm}(X,Y)\leq(Q_Z+1)(d_{qm}(X,Z)+d_{qm}(Z,Y))$ $($analog of
the generalized triangle inequality$)$.
\end{Proposition}
Note that the constant in 3) depends on the constant $Q_Z$.
By means of the (quasi)distance $d_{qm}$ one can introduce, for
compact quasimetric spaces, a convergence whose limit is unique up
to isometry, in the same way as for metric
spaces. Namely, for a sequence $\{X_n\}$ of compact quasimetric
spaces, we say that $X_n\to X$ if $d_{qm}(X_n,X)\to 0$ as
$n\to\infty$. Note that a straightforward generalization of
Gromov's definition of the distance $d_{GH}$ between two metric
spaces is possible only for a particular class of quasimetric
spaces \cite{gre1}.
For noncompact spaces we use the following more
general notion of convergence. A {\it pointed $($quasi$)$metric
space} is a pair $(X,p)$ consisting of a (quasi)metric space $X$
and a point $p\in X$. Whenever we want to emphasize what kind of
(quasi)metric is on $X$, we shall write the pointed space as a
triple $(X,p,d_X)$.
\begin{Definition}\label{km_conv_nc}
A sequence $(X_n,p_n,d_{X_n})$ of pointed quasimetric spaces {\it
converges} to the pointed space $(X,p,d_X)$, if there exists a
sequence of reals $\delta_n\to 0$ such that for each $r>0$ there
exist mappings $f_{n,r}:B^{d_{X_n}}(p_n,r+\delta_n)\to X,\
g_{n,r}:B^{d_{X}}(p,r+2\delta_n)\to X_n$ such that
1) $f_{n,r}(p_n)=p, \ g_{n,r}(p)=p_n$;
2) $\operatorname{dis}(f_{n,r})<\delta_n,\
\operatorname{dis}(g_{n,r})<\delta_n;$
3) $\sup\limits_{x\in
B^{d_{X_n}}(p_n,r+\delta_n)}d_{X_n}(x,g_{n,r}(f_{n,r}(x)))<\delta_n$.
\end{Definition}
Recall that a quasimetric space $X$ is {\it boundedly compact}, if
all closed bounded subsets of $X$ are compact. Two pointed
quasimetric spaces $(X,p)$ and $(Y,q)$ are called {\it isometric},
if there exists an isometry $\eta:Y\to X$ such that $\eta(q)=p$. The
following theorem $($see \cite{dan,smj} for details$)$ informally
states
that, for boundedly compact spaces, the limit is unique up to isometry.
\begin{Theorem}\label{ed_km}
1) Restricted to the case of metric spaces, the convergence of Definition \ref{km_conv_nc} is equivalent to the Gromov-Hausdorff convergence.
2) Let $(X,p),\ (Y,q)$ be two complete pointed quasimetric spaces
obtained as limits $($in the sense of Definition \ref{km_conv_nc}$)$
of the same sequence $(X_n,p_n)$ such that $|Q_{X_n}|\leq C$ for all
$n\in\mathbb N$. If $X$ is boundedly compact then $(X,p)$ and
$(Y,q)$ are isometric.
\end{Theorem}
The tangent cone is then defined as usual:
\begin{Definition}\label{cone_def} Let $X$ be a boundedly compact
(quasi)metric space, $p\in X$.
If the limit of pointed spaces
$\lim\limits_{\lambda\to\infty}(\lambda X,p)=(T_pX,e)$ (in the sense
of Definition \ref{km_conv_nc}) exists, then
$T_p X$ is called the {\it tangent cone} to $X$ at $p$.
Here $\lambda X=(X,\lambda\cdot d_X)$; the symbol
$\lim\limits_{\lambda\to\infty} (\lambda X,p)$ means that, for any
sequence $\lambda_n\to\infty$, there exists
$\lim\limits_{\lambda_n\to\infty}(\lambda_n X,p)$ which is
independent of the choice of sequence $\lambda_n\to\infty$ as
$n\to\infty$.
{\it A local tangent cone} is an arbitrary neighborhood
$U(e)\subseteq T_pX$ of a fixed point $e\in T_pX$.
\end{Definition}
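Two standard examples may clarify this definition.
If $X$ is a complete Riemannian manifold of dimension $N$ with its
length metric, the tangent cone at every point is isometric to the
Euclidean space $(\mathbb R^N,|\cdot|)$, i.~e. to the usual tangent
space. If $X$ is a nilpotent graded Lie group equipped with a
left-invariant homogeneous quasimetric $d$, then the tangent cone at
the identity $e$ is the group itself, since the group dilations
$\delta_\lambda$ satisfy
$d(\delta_\lambda x,\delta_\lambda y)=\lambda\,d(x,y)$ and hence
provide isometries $(\lambda X,e)\to(X,e)$.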
\begin{Remark}According to Theorem \ref{ed_km}, the tangent cone from
Definition
\ref{cone_def} is unique up to
isometry, i.~e. one should treat it as a
class of pointed quasimetric spaces isometric to each other. Note also that
the tangent cone is isometric to $(\lambda T_pX,e)$ for all
$\lambda>0$ and is completely defined by any (arbitrarily small) neighborhood
of the point.
\end{Remark}
\begin{Theorem}\label{cc_teor}
Let $\mathbb M$ be a C-C space from Definition \ref{cc-space}.
Then the quasimetric space $(U,\rho^u)$ is a local tangent cone at the point $u$ to the quasimetric space $(U,\rho)$, where the quasimetrics $\rho$ and $\rho^u$
are defined by \eqref{rho} and \eqref{rho_u}, respectively. The tangent cone is the homogeneous space $G/H$ constructed in the proof of Proposition
\ref{lift} (here $G$ is a nilpotent graded group).
\end{Theorem}
\begin{proof}
We have to verify Definition \ref{cone_def} for the spaces $X_n=(U,u,\lambda_n\cdot \rho)$, $X=(U,u,\rho^u)$, where
$\lambda_n\to\infty,\ \lambda_n\geq 0$ is an arbitrary sequence of reals (w.l.o.g. we assume
$\lambda_n\geq 1$). It is sufficient to take $$f_{n,r}=\Delta_{\lambda_n}^u,\
g_{n,r}=\Delta_{\lambda_n^{-1}}^u. $$ Due to the conical property
\eqref{cone_prop} and Theorem \ref{lat_quasim} we have the first assertion.
To verify the second assertion, we have to verify the left-invariance of $\rho^u$, i.e. to prove that
\begin{equation}\label{levoinv}\rho^u(g(v),g(w))=\rho^u(v,w),\end{equation}
where $g$ is defined in Proposition \ref{lift}.
Consider a curve $\gamma(t)$ such that
$$\begin{cases}\dot{\gamma}(t)=\sum\limits_{|I|_h\leq M}b_I
\widehat{X}^u_I(\gamma(t)),\\
\gamma(0)=v, \gamma(1)=w.
\end{cases}$$
Due to the left-invariance of the vector fields
$\{\widehat{\tilde{X^{\prime}}}^u_I\}_{|I|_h\leq M}$, introduced
in the proof of Proposition \ref{lift}, and the existence of the
homomorphism
$\Psi(\widehat{\tilde{X^{\prime}}}^u_I)=\widehat{X^{\prime}}^u_I$,
the curve $\gamma_g(t)=g(\gamma(t))$ is a solution of the system
of equations
$$\begin{cases}\dot{\gamma}_g(t)=\sum\limits_{|I|_h\leq M}b_I
\widehat{X}^u_I(\gamma_g(t)),\\
\gamma_g(0)=g(v),\ \gamma_g(1)=g(w).
\end{cases}$$
By definition of the quasimetric $\rho^u$, we get the required assertion.
\end{proof}
\begin{Corollary}At a regular point, the tangent cone to a weighted
C-C space is a nilpotent graded group.\end{Corollary}
\section{The case of H\"{o}rmander vector fields}
\begin{Definition}\label{horm_usl} The vector fields
$\{X_1,\ldots,X_m\}\in C^{p}$ on $U\subseteq\mathbb M$, $m\leq N$,
meet H\"ormander's condition of depth $M$, if they span, by their
commutators up to the order $M-1$, the whole tangent space
$T_u\mathbb M$ at any point $u\in U$, and $M$ is the minimal
number with such property.
\end{Definition}
Obviously, for the case of regular points, $\mathbb M$ is an
example of a Carnot manifold, see Definition \ref{cc-manifold}. In
this paper we assume that $p=2M+1$.
The homogeneous degree of the vector field $X_I$ is now equal to
its commutator order
$$\text{deg}(X_I)=\text{degalg}(X_I)=|I|=i_1+\ldots+i_k,$$
and the conditions (ii) and (iii) for the basis \eqref{bas}
coincide. Introduce the same local coordinates on $U$ as in
\eqref{loc_coord} and construct the nilpotent approximations
$\{\widehat{X}_I^u\}_{|I|\leq M}$, as in Proposition
\ref{vf_decomp}. The lifting construction is also carried out in a
similar way as before, see Proposition \ref{lift}. Here we have
$q=m$ and the Lie group of the free algebra ${\cal N}$ is a Carnot
group. These constructions and results of \cite{vk} for regular
points allow us to prove an analog of the Rashevsky-Chow theorem for
spaces from Definition \ref{horm_usl}. This result is, however,
not new; in particular, the existence of $d_c$ for the smoothness
class $C^{M-1,\alpha}$ was proved in \cite{bram} by other methods.
\begin{Theorem}\label{chow_nereg}
On $U$ there are finite metrics
\begin{equation}\label{dcu}d_c(v,w)=
\inf\limits_{\substack{\dot{\gamma}\in
H\mathbb M\\
\gamma(0)=v,\gamma(1)=w}}\{L(\gamma)\}\quad\text{ and }\quad
d_c^u(v,w)=\inf\limits_{\substack{\dot{\widehat{\gamma}}\in
\widehat{H}^u\mathbb M \\
\widehat{\gamma}(0)=v,\widehat{\gamma}(1)=w}}\{L(\widehat{\gamma})\}.
\end{equation}\end{Theorem}
\begin{proof}
Consider the manifold $\tilde{\mathbb M}$ and the vector fields
$\tilde{X}_i,\widehat{\tilde{X}}_i$ constructed in Proposition
\ref{lift}. Due to Remark \ref{svob_reg} and to the results of
\cite{vk} for regular points, on the neighborhood $\tilde{U}$
there are finite metrics $\tilde{d}_c$ and
$\tilde{d_c}^{\tilde{u}}$, defined by the horizontal vector fields
$\tilde{X}_i$ and $\widehat{\tilde{X}}_i$, respectively.
Denote by $\pi:\tilde{\mathbb M}\to\mathbb M$ the canonical
projection, i. e. $\pi(v,z)=v$, where $v\in\mathbb M$,
$z\in\mathbb R^{\tilde{N}-N}$.
Let $\tilde{\gamma}(t):[0,1]\to\tilde{U}$ be a geodesic of the
distance $\tilde{d_c}((v,0),(w,0))$. Consider the curve
$\gamma:[0,1]\to U$ defined as $\gamma(t)=\pi
(\tilde{\gamma}(t))$. Then, in coordinates \eqref{loc_coord}, we
have
\begin{equation}\label{lift_eq}\begin{cases}
\dot{\tilde{\gamma}}(t)=\sum\limits_{i=1}^ma_i(t)
\tilde{X}_i(\tilde{\gamma}(t))=\sum\limits_{i=1}^ma_i(t)
\left[X_i(\gamma(t))
+\sum\limits_{j=N+1}^{\tilde{N}}b_{ij}(\tilde{\gamma}(t))
\frac{\partial}{\partial
z_j}\right],
\\
\tilde{\gamma}(0)=(v,0),\ \tilde{\gamma}(1)=(w,0),
\end{cases}
\end{equation}
hence the curve $\gamma(t)$ connects the points $v,w\in U$ and is
horizontal w. r. t. the vector fields $X_1,\ldots, X_m$.
The proof for \eqref{dcu} is carried out in a similar way, with
help of the existence of the metric $\tilde{d_c}^{\tilde{u}}$.
\end{proof}
Since the vector fields $\{\widehat{X}_i\}$ are homogeneous of
order $-1$, the metric \eqref{dcu} meets the conical property:
\begin{equation}\label{cone_dcu}d_c^u(\Delta_{\varepsilon}^uv,\Delta_
{\varepsilon}^uw)= \varepsilon d_c^u(v,w).\end{equation}
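In particular, since the dilations $\Delta_{\varepsilon}^u$ fix the
point $u$, the property \eqref{cone_dcu} implies the self-similarity
of the balls centered at $u$:
$$B^{d_c^u}(u,\varepsilon r)=
\Delta_{\varepsilon}^u\bigl(B^{d_c^u}(u,r)\bigr),
\quad \varepsilon,r>0.$$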
The next two propositions are proved in the same way as in the
``classical'' $C^{\infty}$-smooth case \cite{rs,bel,jean}; we
write down the proofs for the convenience of the reader.
\begin{Proposition}\label{proj}
The projections of the balls w. r. t. the metric $\tilde{d}_c$
onto the initial neighborhood $U\subseteq\mathbb M$ coincide with
the balls w. r. t. the metric $d_c$, i. e.
\begin{equation}\label{ball_proj}
B^{d_c}(v,r)=\pi\left(B^{\tilde d_c}((v,z),r)\right),
\end{equation}
where $v\in U$, $z\in\mathbb R^{\tilde{N}-N}$, $\pi:\tilde{\mathbb
M}\to\mathbb M$ is the canonical projection $\pi(v,z)=v$.
\end{Proposition}
\begin{proof}
Let $\tilde{\gamma}(t):[0,1]\to\tilde{U}$ be any horizontal curve
starting from $(v,z)$. Then
\begin{equation}\begin{cases}
\dot{\tilde{\gamma}}(t)=\sum\limits_{i=1}^ma_i(t)
\tilde{X}_i(\tilde{\gamma}(t))=\sum\limits_{i=1}^ma_i(t)\left[X_i(\pi
(\tilde{\gamma}(t)))+\sum\limits_{j=N+1}^{\tilde{N}}b_{ij}
(\tilde{\gamma}(t))\frac{\partial}{\partial z_j}\right],
\\
\tilde{\gamma}(0)=(v,z).
\end{cases}
\end{equation}
Denote
\begin{equation}\label{proj_curve}\gamma(t)=\pi(\tilde{\gamma}(t)),
\end{equation}
then
\begin{equation}\label{tilde}\tilde{\gamma}(t)=\left(\begin{array}{c}
\gamma(t)\\
\tilde{\gamma}_{N+1}(t)\\
\tilde{\gamma}_{N+2}(t)\\ \vdots \\
\tilde{\gamma}_{\tilde{N}}(t)\end{array}\right)\end{equation} and
\begin{equation}\begin{cases}
\dot{\gamma}(t)=\sum\limits_{i=1}^ma_i(t)X_i(\gamma(t)),\\
\gamma(0)=v,
\end{cases}
\end{equation}
i. e. the curve $\gamma(t)=\pi(\tilde{\gamma}(t))$ is horizontal
w. r. t. the vector fields $X_1,X_2,\ldots, X_m$ and is of the
same length as $\tilde{\gamma}(t)$, i. e. the projections of
horizontal curves on $\tilde{\mathbb M}$ are horizontal curves on
$\mathbb M$.
Conversely, if $\gamma(t)$ is a horizontal curve on $\mathbb M$, a
horizontal curve $\tilde{\gamma}(t)$ on $\tilde{\mathbb M}$ can be
defined in such way that \eqref{proj_curve} holds. Indeed, it is
sufficient to define $\tilde{\gamma}(t)$ by \eqref{tilde}, where
the last $\tilde{N}-N$ components are computed as the solutions of
the Cauchy problem
$$\begin{cases}\dot{\tilde{\gamma}}_{N+j}(t)=\sum\limits_{i=1}^m
a_i(t)b_{ij}(\tilde{\gamma}(t)),\\
\tilde{\gamma}_{N+j}(0)=z_j.\end{cases}$$ In this way, the set of
horizontal curves on $\mathbb M$ coincides with the set of
projections of the horizontal curves on $\tilde{\mathbb M}$, hence
the equality of balls \eqref{ball_proj} is true.
\end{proof}
\begin{Proposition}\label{ineq_prop} The projection $\pi$ is
distance-decreasing, i. e. for any points $v,w\in U$,
$p,q\in\mathbb R^{\tilde{N}-N}$ the following inequalities hold:
\begin{equation}\label{ineq}d_c(v,w)\leq \tilde{d}_c((v,p),(w,q)),
\end{equation}
\begin{equation}\label{ineq_u}d^u_c(v,w)\leq \tilde{d}_c^u((v,p),(w,q)).
\end{equation}
\end{Proposition}
\begin{proof}
Denote $\tilde{v}=(v,p)$, $\tilde{w}=(w,q)$,
$r=\tilde{d_c}(\tilde{v}, \tilde{w})$. Obviously,
$\tilde{w}\in\bar{B}^{\tilde{d_c}}(\tilde{v},r)$. Since
$w=\pi(\tilde{w})$, then $w\in\bar{B}^{d_c}(v,r)$ due to
Proposition \ref{proj}, from where \eqref{ineq} follows. The
inequality \eqref{ineq_u} is proved in a similar way.
\end{proof}
The sketch of proof of the next theorem is similar to the proof of
its analog in \cite{bel}; the main difference lies in the method
of proof of the divergence of integral lines. In particular, we do
not need special polynomial ``privileged'' coordinates (though the
second-order coordinates, as well as coordinates constructed in
Proposition \ref{lift} are privileged as well) and do not use
Newton-type approximation methods.
\begin{Theorem}[\rm Local approximation theorem]\label{lat}
For points $u,v,w\in U$ such that $d_c(u,v)=O(\varepsilon)$
and $d_c(u,w)=O(\varepsilon)$, the following estimate holds:
$$|d_c(v,w)-d_c^u(v,w)|=O(\varepsilon^{1+\frac{1}{M}}).$$
\end{Theorem}
\begin{proof}
Let $\gamma:[0,1]\to\mathbb M$ be a geodesic for the distance
$d_c$, i. e.
$$\begin{cases}\dot{\gamma}(t)=\sum\limits_{i=1}^ma_i(t)X_i(\gamma(t)),\\
\gamma(0)=v,\ \gamma(1)=w
\end{cases}
$$
and $L(\gamma)=d_c(v,w)$. Consider a curve $\widehat{\gamma}(t)$
such that
$$\begin{cases}\dot{\widehat{\gamma}}(t)=\sum\limits_{i=1}^ma_i(t)
\widehat{X
}_i(\widehat{\gamma}(t)),\\
\widehat{\gamma}(0)=v
\end{cases}
$$ and denote $\widehat{w}=\widehat{\gamma}(1)$. Note that the lengths
of the curves $\gamma$ and $\widehat{\gamma}$ differ by a value of
order $O(\varepsilon^2)$ \cite{vk1}. Consequently,
$$d_c(v,w)= L(\gamma)= L(\widehat{\gamma})+O(\varepsilon^2)\geq
d_c^u(v,\widehat{w})+O(\varepsilon^2)\geq
d_c^u(v,w)-d_c^u(w,\widehat{w})+O(\varepsilon^2).$$ In a similar
way,
$$d_c^u(v,w)\geq d_c(v,w)-d_c(w,\widehat{w})+O(\varepsilon^2).$$
Taking into account Theorem \ref{int_lines_dc} and the estimates
\eqref{ineq} we get the required assertion.
\end{proof}
The following tangent cone result is proved in a similar way as
Theorem \ref{cc_teor} with the help of Theorem \ref{lat} and
the homogeneity of the vector fields $\widehat{X}_i^u$.
\begin{Theorem}[\rm\cite{izv}]\label{cc_teor_horm}
The metric space $(U,d_c^u)$ is a local tangent cone at $u$ to
the metric space $(U,d_c)$. The tangent cone has the
structure of a homogeneous space $G/H$, where $G$ is a Carnot
group.
If $u$ is a regular point, the tangent cone is isomorphic to a
Carnot group.
\end{Theorem}
\end{Theorem}
\section{Introduction}
The recently observed first neutron star (NS) merger event GW170817
\citep{merger}
has made it possible to substantially restrict the freedom in constructing NS
equations of state (EOS) compatible with observation.
In particular, the determination of the NS tidal deformability allowed
severe constraints to be derived on the NS radii and the related stiffness of
the NS EOS \citep{radice,pascha,drago4,jinbiao}.
It turns out that microscopic EOSs compatible with those constraints
are rather stiff and feature fairly large proton fractions in the stellar matter,
which in turn implies the presence of the extremely strong Direct Urca (DU)
neutrino cooling process over an extended range of density in NS matter.
This allows us to take a fresh look at the phenomenon of NS cooling,
where for a long time EOSs featuring DU cooling have been
disregarded in the so-called minimal cooling paradigm \citep{minimal},
which imposed a strong bias on the choice of the NS EOS.
In fact many microscopic nuclear EOSs do reach easily the required
proton fractions for the DU process \citep{zhl1,2010bs,zhl2,sel},
and we have already shown in \cite{2016MNRAS,2018MNRAS}
that a microscopic EOS featuring strong DU processes is very well able
to describe current cooling observations of isolated NSs
as well as the reheated cooling of the accreting NSs
in X-ray transients in quiescence \citep{yako14,Beznogov1,Beznogov2},
provided the DU process is dampened by the presence of nuclear pairing
throughout a sufficiently large (but not complete) set of the NS population.
The purpose of this work is to extend the
cooling simulations of \cite{2016MNRAS,2018MNRAS},
where only one specific microscopic EOS was used,
to three other microscopic EOSs that have also been identified as compatible
with the GW170817 event in \cite{drago4,jinbiao}.
All these EOSs fulfill upper and lower constraints on the tidal compressibility
derived from the interpretation of the merger event.
We investigate now which specific features of the EOSs determine the
NS cooling properties and compare the results obtained with the four EOSs.
This paper is organized as follows.
In Section~\ref{s:eos} we give a brief overview of the theoretical framework,
namely the Brueckner-Hartree-Fock (BHF) formalism adopted for the EOS,
the various cooling processes,
and the pairing gaps obtained with the same interactions.
Section~\ref{s:res} is devoted to the presentation and discussion
of the results for the cooling diagrams and the deduced NS mass distributions.
Conclusions are drawn in Section~\ref{s:end}.
\section{Formalism}
\label{s:eos}
\subsection{Equation of state}
One of the key ingredients of cooling simulations is the EOS.
In this paper we assume (without further justification)
the absence of exotic components like hyperons and/or quark matter,
such that the composition of the NS core is
asymmetric, charge-neutral, and beta-stable matter made of nucleons and leptons.
In our model, we calculate the EOS of nuclear matter within the BHF theoretical
approach \citep{1976Jeu,1999Book,2012Rep},
in which the starting point is the Brueckner-Bethe-Goldstone
equation for the in-medium $G$-matrix,
whose only input is the nucleon-nucleon (NN) bare potential $V$,
\begin{equation}
G[\rho;\omega] = V + \sum_{k_a k_b} V {{|k_a k_b\rangle Q \langle k_a k_b|}
\over {\omega - e(k_a) - e(k_b)}} G[\rho;\omega] \:,
\label{e:g}
\end{equation}
where
$\rho$ is the nucleon number density,
$\omega$ is the starting energy,
and the Pauli operator $Q$ determines the propagation of
intermediate baryon pairs.
The single-particle (s.p.) energy reads
($\hbar=c=1$)
\begin{equation}
e(k) = e(k;\rho) = {k^2\over 2m} + U(k;\rho) \:,
\label{e:en}
\end{equation}
where the s.p.~potential $U(k;\rho)$ is calculated in the so-called
{\em continuous choice} and is given by
\begin{equation}
U(k;\rho) = {\rm Re} \sum_{k'<k_F}
\big\langle k k'\big| G[\rho; e(k)+e(k')] \big| k k'\big\rangle_a \:,
\end{equation}
where the subscript $a$ indicates antisymmetrization of the matrix element.
Finally the energy per nucleon is expressed by
\begin{equation}
{E \over A} =
{3\over5}{k_F^2\over 2m} + {1\over{2\rho}} \sum_{k<k_F} U(k;\rho) \:.
\end{equation}
In this scheme, we use several different nucleon-nucleon potentials
as bare NN interaction $V$ in the Bethe-Goldstone equation~(\ref{e:g}),
in particular the Argonne $V_{18}$ \citep{v18},
the Bonn B (BOB) \citep{1987PhR,Machleidt1989},
and the Nijmegen 93 (N93) \citep{1978PRD_Nagels,1994PRC_Stoks}.
These two-body forces are supplemented by suitable three-nucleon forces (TBF),
which are introduced in order to reproduce correctly
the nuclear matter saturation point.
In particular, BOB and N93 have been supplemented by microscopic TBF
employing the same meson-exchange parameters as the NN potential
\citep{1989Grange,2002Zuo,Li2008bp,zhl1},
whereas $V_{18}$ is combined either with the microscopic
or a phenomenological TBF,
the latter consisting of an attractive term due to two-pion exchange
with excitation of an intermediate $\Delta$ resonance,
and a repulsive phenomenological central term
\citep{Carlson:1983kq,Schiavilla:1985gb,1997Pud,1997A&A}.
They are labeled as V18 and UIX, respectively,
throughout the paper and in all figures.
Further important ingredients in the cooling simulations are the
neutron and proton effective masses,
\begin{equation}
\frac{m^*(k)}{m} = \frac{k}{m} \left[ \frac{d e(k)}{dk} \right]^{-1} \:,
\end{equation}
which we derived consistently from the BHF s.p.~energy $e(k)$, Eq.~(\ref{e:en}),
see \cite{meff} for the numerical parametrizations.
Their effect is small compared to other uncertainties regarding
the cooling, and therefore in this paper we use the bare nucleon mass,
at variance with our previous simulations \citep{2016MNRAS},
where the effective masses were used in a consistent manner.
\subsection{Cooling processes}
\label{s:cp}
One important tool of analysis is the
temperature (or luminosity) vs.~age cooling diagram,
in which currently a few ($\sim20$) observed isolated NSs are located.
Over a vast domain of time
($10^{-10}-10^5\,\text{yr}$),
NS cooling is dominated by neutrino emission due to several microscopic processes
\citep{2001rep,2006ARNPS,2006PaWe,2007LatPra,potrev}.
The theoretical analysis of these reactions requires the knowledge of the
elementary matrix elements,
the relevant beta-stable nuclear EOS, and, most importantly,
the superfluid properties of the stellar matter, i.e.,
the gaps and critical temperatures in the different pairing channels.
The main contribution to the cooling comes from the
neutrons, protons, and electrons in the NS core.
In a non-superfluid NS, the most powerful neutrino process is the DU process,
which is in fact the neutron $\beta$-decay followed by its inverse reaction:
\begin{equation}
n \rightarrow p + e^-\! + \bar{\nu}_e
\qquad \textrm{and} \qquad
p+e^- \rightarrow n+\nu_e \:.
\label{e:DU}
\end{equation}
However, energy and momentum conservation imposes a density threshold
on this process \citep{1991LP}.
Various less efficient neutrino processes may be operating in the NS core
\citep{2001rep},
and dominate when the DU process is forbidden or strongly reduced.
The two main ones are the so-called modified Urca (MU) process:
\begin{equation}
n + N \rightarrow p + e^-\! + \bar{\nu}_e + N
\qquad \textrm{and} \qquad
p + e^-\! + N \rightarrow n + \nu_e + N \:,
\label{e:MU}
\end{equation}
where $N$ is a spectator nucleon that ensures momentum conservation,
and the nucleon-nucleon bremsstrahlung:
\begin{equation}
N+N \rightarrow N+N+ \nu+\bar{\nu} \:,
\end{equation}
with $N$ a nucleon and
$\nu$, $\bar{\nu}$ an (anti)neutrino of any flavor.
\begin{figure}
\vspace{-17mm}
\centerline{\hskip5mm\includegraphics[angle=0,scale=0.32]{coolmic1}}
\vspace{-42mm}
\caption{
DU onset condition (upper panel),
proton fraction (central panel),
and NS mass (lower panel)
vs.~the (central) baryon density for the different EOSs.
The vertical dotted lines indicate the threshold density for the DU process.}
\label{f:mrho}
\end{figure}
For the different EOSs used in this work,
the DU process sets in at slightly different values of the proton fraction $x_p$
due to the presence of muons,
as shown in Fig.~\ref{f:mrho}.
The threshold values $x_\text{DU}$ are calculated starting from Eq.~(\ref{e:DU}),
in which the momentum conservation imposes the triangle rule, i.e.,
\begin{equation}
k_F^{(n)} < k_F^{(p)} + k_F^{(e)} \:.
\label{e:thres}
\end{equation}
This is indicated by the vertical dotted lines in Fig.~\ref{f:mrho}
for the different EOSs,
and $x_\text{DU}$ lies in the range $0.133 < x_\text{DU} < 0.136$.
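The magnitude of these threshold values can be understood from a
simple estimate.
In the absence of muons, charge neutrality gives $n_e=n_p$ and thus
$k_F^{(e)}=k_F^{(p)}$, so that Eq.~(\ref{e:thres}) reduces to
$k_F^{(n)} < 2k_F^{(p)}$, i.e., $n_n < 8n_p$,
which yields the well-known threshold $x_p > 1/9 \approx 0.111$
\citep{1991LP}.
When muons are present, part of the negative charge is carried by
them, $n_p = n_e + n_\mu$, so that $k_F^{(e)} < k_F^{(p)}$
and the threshold proton fraction is pushed up,
approaching $x_p \approx 0.148$ in the limit $n_\mu = n_e$;
the values $0.133 < x_\text{DU} < 0.136$ obtained here lie between
these two bounds.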
It is important to determine at which corresponding value $\rho_\text{DU}$
of the nucleon density
of beta-stable and charge-neutral matter the DU process sets in,
because compact stars characterized by central densities larger than
$\rho_\text{DU}$ will cool down very rapidly.
This is also displayed in Fig.~\ref{f:mrho}
and occurs in a density range between 0.30 and 0.45 $\text{fm}^{-3}$
depending on the EOS.
In the lower panel we display the NS mass-central density relations
obtained by solving the Tolman-Oppenheimer-Volkoff equations
for hydrostatic equilibrium.
The NS masses $M_\text{DU}$ corresponding to the central densities $\rho_\text{DU}$
span a range between 0.82 (N93) and 1.56 (BOB) $M_\odot$,
above which the DU process can potentially operate.
We notice that in all cases the value of the maximum mass $M_\text{max}$
is larger than the current observational lower limits
\citep{demorest2010,heavy2,fonseca16}.
The main results for $x_\text{DU}$, $\rho_\text{DU}$, $M_\text{DU}$, and $M_\text{max}$
are also listed in Table~\ref{t:eos}.
We conclude that in all cases there is a wide range of NS masses
where the DU process is operative,
practically for all NSs with the V18 and N93 EOSs,
while only for the BOB EOS the threshold $M_\text{DU}=1.56\,M_\odot$ is high,
but in this case also $M_\text{max}=2.51\,M_\odot$ is very large.
\def\myc#1{\multicolumn{1}{c}{$#1$}}
\begin{table}
\renewcommand{\arraystretch}{0.9}
\begin{center}
\caption{
Characteristic properties of several EOSs:
DU onset proton fraction $x_\text{DU}$, density $\rho_\text{DU}$, and
corresponding NS mass with that central density $M_\text{DU}$.
Upper limit of the range of p1S0 pairing $\rho_{1S0}$
and NS mass with that central density $M_{1S0}$.
Maximum NS mass $M_\text{max}$.
Densities are given in $\text{fm}^{-3}$ and masses in $M_\odot$.}
\label{t:eos}
\medskip
\begin{tabular}{@{}lrrrrrr}
\hline
EOS & $x_\text{DU}$ & $\rho_\text{DU}$ & $M_\text{DU}$ & $\rho_{1S0}$ & $M_{1S0}$ & \myc{M_\text{max}} \\
\hline\\[-3mm]
BOB & 0.1357 & 0.41 & 1.56 & 0.59 & 2.23 & 2.51 \\
V18 & 0.1348 & 0.37 & 1.01 & 0.60 & 1.91 & 2.34 \\
N93 & 0.1331 & 0.30 & 0.82 & 0.52 & 1.59 & 2.13 \\
UIX & 0.1363 & 0.45 & 1.17 & 0.70 & 1.70 & 2.04 \\
\hline
\end{tabular}
\end{center}
\end{table}
We finally recall the
possible strong constraints on NS cooling imposed by the
conjectured very rapid cooling of the supernova remnant Cas~A
\citep{2009Nat,2010HeiHo,2013Elsha,2015HoPRC},
which we have studied in detail in \cite{2016MNRAS}.
As the observational claims remain highly debated \citep{casno1,casno2},
we do not consider this scenario in this work.
\subsection{Pairing gaps}
\label{s:gaps}
The effect of the neutron or proton superfluidity
in the dominant 1S0 and 3P2 channels
on the neutrino emissivity is twofold \citep{2001rep}.
On one hand, when the temperature decreases below the critical superfluid
temperature $T_c$ of a given type of baryons,
the neutrino emissivity of processes involving a superfluid baryon
is exponentially reduced,
together with the specific heat of that component.
For example, proton superfluidity in the core of a NS suppresses both Urca
processes but does not affect the neutron-neutron bremsstrahlung.
On the other hand, the pairing of baryons initiates a new type
of neutrino reactions called the pair breaking and formation (PBF) processes.
The energy is released in the form of a neutrino-antineutrino pair
when a Cooper pair of baryons is formed.
The process starts when $T=T_c$,
is maximally efficient when $T\approx0.8\,T_c$,
and is exponentially suppressed for $T\ll T_c$ (\citealt{2001rep}).
Therefore an essential ingredient for cooling simulations
is the knowledge of the pairing gaps for neutrons and protons
in beta-stable matter.
\begin{figure}
\vspace{-14mm}
\centerline{\includegraphics[angle=0,scale=0.32]{coolmic2}}
\vspace{-39mm}
\caption{
BCS gaps in the n1S0, p1S0, and n3P2 channels
in NS matter for the different EOSs.
The vertical dotted lines indicate the central density
of NSs with different masses
$M/M_\odot=1.0,1.1,\ldots$, up to the maximum mass value.
The shaded areas indicate the region between DU onset density $\rho_\text{DU}$
and vanishing of the p1S0 gap at $\rho_{1S0}$
(listed in Table~\ref{t:eos}),
i.e., where the DU process is blocked by pairing.
The orange vertical line represents the crust-core boundary.}
\label{f:gaps}
\end{figure}
In a previous paper \citep{2016MNRAS} we presented calculations obtained
using gaps computed consistently with the EOS, i.e.,
based on the same $V_{18}$ NN interaction and using the same medium effects
(TBF and effective masses), as shown in \cite{ourgaps}.
In this work we extend those studies by employing different nucleonic potentials.
Before illustrating the new results,
we remind the reader that the pairing gaps were computed on the BCS level
by solving the (angle-averaged) gap equation
\citep{bcsp1,bcsp2,bcsp3,bcsp4,bcsp5,bcsp6}
for the $L=0$ (1S0 gaps)
and the two-component $L=1,3$ (3P2 gaps) gap functions,
\begin{equation}
\left(\!\!\!\begin{array}{l} \Delta_1 \\ \Delta_3 \end{array}\!\!\!\right)\!(k)
= - {1\over\pi} \int_0^{\infty}\!\! dk' {k'}^2 {1\over E(k')}
\left(\!\!\!\begin{array}{ll}
V_{11}\! & \!V_{13} \\ V_{31}\! & \!V_{33}
\end{array}\!\!\!\right)\!(k,k')
\left(\!\!\!\begin{array}{l} \Delta_1 \\ \Delta_3 \end{array}\!\!\!\right)\!(k')
\label{e:gap}
\end{equation}
with
\begin{equation}
E(k)^2 = [e(k)-\mu]^2 + \Delta_1(k)^2 + \Delta_3(k)^2 \:,
\end{equation}
while fixing the (neutron or proton) density,
\begin{equation}
\rho = {k_F^3\over 3\pi^2}
= 2 \sum_k {1\over 2} \left[ 1 - { e(k)-\mu \over E(k)} \right] \:.
\label{e:rho}
\end{equation}
Here $e(k)$ are the BHF s.p.~energies, Eq.~(\ref{e:en}),
containing contributions due to two-body and three-body forces,
$\mu \approx e(k_F)$ is the chemical potential
determined self-consistently from Eqs.~(\ref{e:gap}--\ref{e:rho}),
and
\begin{equation}
V^{}_{LL'}(k,k') =
\int_0^\infty \! dr\, r^2\, j_{L'}(k'r)\, V^{TS}_{LL'}(r)\, j_L(kr)
\label{e:v}
\end{equation}
are the relevant potential matrix elements
($T=1$ and
$S=0$; $L,L'=0$ for the 1S0 channel,
$S=1$; $L,L'=1,3$ for the 3P2 channel).
The relation between the (angle-averaged) pairing gap at zero temperature
$\Delta \equiv \sqrt{\Delta_1^2(k_F)+\Delta_3^2(k_F)}$
obtained in this way and the critical temperature of superfluidity is then
$T_c \approx 0.567\Delta$.
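The angle-averaged gap equation~(\ref{e:gap}) can be solved by simple self-consistent iteration. The sketch below does this for the 1S0 channel only ($L=L'=0$), with a purely illustrative attractive Gaussian potential, a free single-particle spectrum, and $\mu$ fixed at $e(k_F)$ --- not the $V_{18}$ interaction and self-consistent BHF inputs used here, so the numbers are toy values:

```python
import numpy as np

# Toy solution of the angle-averaged 1S0 gap equation (L = L' = 0 only):
#   Delta(k) = -(1/pi) \int dk' k'^2 V(k,k') Delta(k') / E(k'),
#   E(k)^2   = (e(k)-mu)^2 + Delta(k)^2.
# Illustrative attractive Gaussian potential, NOT the V18 interaction;
# free single-particle spectrum and mu = e(k_F) instead of the
# self-consistent BHF inputs of the paper (units hbar = m = 1).

def solve_gap(kF=0.8, g0=-2.5, b=1.5, nk=200, kmax=6.0, tol=1e-8):
    k = np.linspace(1e-3, kmax, nk)
    dk = k[1] - k[0]
    e = 0.5 * k**2                 # free spectrum
    mu = 0.5 * kF**2               # mu ~ e(k_F)
    V = g0 * np.exp(-(k[:, None]**2 + k[None, :]**2) / b**2)
    Delta = np.full(nk, 1.0)       # initial guess
    for _ in range(500):
        E = np.sqrt((e - mu)**2 + Delta**2)
        new = -(1.0/np.pi) * (V * (k**2 * Delta / E)[None, :]).sum(axis=1) * dk
        if np.max(np.abs(new - Delta)) < tol:
            Delta = new
            break
        Delta = 0.5 * (Delta + new)   # damped iteration for stability
    gap = float(np.interp(kF, k, Delta))  # Delta(k_F)
    return gap, 0.567 * gap               # pairing gap and T_c estimate

gap, Tc = solve_gap()
```

The same damped iteration carries over to the coupled $L=1,3$ system of Eq.~(\ref{e:gap}) with the full potential matrix elements of Eq.~(\ref{e:v}).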
In \cite{2016MNRAS,2018MNRAS}
we concluded that a very good description of cooling properties can be obtained
by just using the (possibly scaled) BCS values in the p1S0 channel and
disregarding any pairing in the n3P2 channel.
The complete suppression of the 3P2 gaps could be caused by
polarization corrections \citep{pol1,ppol1,ppol2,pol,pol3,ppol3},
which for the 1S0 channel are known to be repulsive,
but for the 3P2 channel are still essentially unknown
and might change the value of these gaps even by orders of magnitude.
For illustration we display in Fig.~\ref{f:gaps} the BCS pairing gaps
and critical temperatures
as a function of baryonic density of beta-stable matter for the different EOSs.
In this way one can easily identify which range of gaps is active
in different stars,
whose central densities are shown by vertical dotted lines for given NS masses.
Regarding the p1S0 pairing,
an important piece of information is the density $\rho_{1S0}$ at which the
BCS gap disappears,
and the corresponding NS mass $M_{1S0}$ with that central density.
The DU process can only be completely blocked for NSs with $M<M_{1S0}$,
whereas for heavier stars it is active and unblocked in a certain domain
of the core, which leads to extremely fast cooling of these objects.
The value of $M_{1S0}$ is also listed in Table~\ref{t:eos}
and together with $M_\text{DU}$
determines the ranges of blocked and unblocked DU cooling.
We observe that for the BOB, V18, UIX, and N93 EOSs
the DU blocking terminates at $M/M_\odot=2.23,1.91,1.70,1.59$, respectively,
which implies very rapid cooling for heavier stars,
reflected in the following cooling diagrams.
The regions of blocked DU cooling are represented by shading
in Fig.~\ref{f:gaps}.
\begin{figure*}
\vspace{-40mm}
\centerline{\hskip-8mm\includegraphics[angle=0,scale=0.9,clip]{coolmic3}}
\vspace{-14mm}
\caption{
Cooling curves for different EOSs
without any pairing (upper panels),
with the inclusion of n1S0 and p1S0 BCS gaps (central panels),
and excluding the PBF processes in the latter case (lower panels)
for different NS masses $M/M_\odot=1.0,1.1,\ldots,M_\text{max}$
(decreasing curves).
The dashed green curves mark the NS mass $M_\text{DU}+0.01\,M_\odot$
at which the DU process has just set in,
the dash-dotted green curves mark the NS mass $M_{1S0}$
for which the p1S0 gap vanishes in the center of the star,
and the dotted green curves correspond to $M_\text{max}$ for each EOS.
The black curves are obtained with a Fe atmosphere and the shaded areas
cover the same results obtained with a light-elements
($\eta=10^{-7}$) atmosphere.
The data points are from \citep{Beznogov1}.
See text for more details.
}
\label{f:cool}
\end{figure*}
\section{Results}
\label{s:res}
Having quantitatively specified EOS and pairing gaps,
the NS cooling simulations are performed with the
widely used code {\tt NSCool} \citep{Pageweb},
which comprises all relevant cooling reactions:
DU, MU, PBF, and Bremsstrahlung,
including modifications due to pairing discussed before.
In order to assess the uncertainty of the heat-blanketing effect
of the atmosphere,
we compare in the following results obtained with
a non-accreted heavy-elements (Fe) atmosphere
and one containing also a maximum fraction
$\eta=\Delta M/M=10^{-7}$
of light elements from accretion, see \cite{potek}.
Our set of observational cooling data comprises the
(age, temperature) information
of the 19 isolated NS sources listed in \cite{Beznogov1},
where it was also pointed out that in many cases the distance to the object,
the composition of its atmosphere, and thus its luminosity,
and its age are estimated rather than measured.
In these cases we therefore use large error bars (factors of 0.5 and 2)
to reflect this uncertainty.
\subsection{Cooling diagrams}
\label{s:rescool}
For a better understanding,
we begin by discussing the simulations obtained with different EOSs
without including any superfluidity.
Results are displayed in Fig.~\ref{f:cool} (upper panels),
where the luminosity vs.~age is plotted for several NS masses
in the range $1.0,1.1,\ldots,M_\text{max}$.
The dashed green curves mark the NS mass $M_\text{DU}+0.01M_\odot$
at which the DU process has just set in,
whereas the dotted green curves correspond to the maximum mass $M_\text{max}$.
The results are clearly unrealistic,
as observed NSs would essentially be divided into very hot ones and very cold
ones by the DU threshold $M_\text{DU}$,
with very few stars in between:
In a NS with $M<M_\text{DU}$, the DU process is turned off
and therefore the total neutrino emissivity
is orders of magnitude smaller than for a NS with a mass above the DU threshold.
Consequently the former has, at a given age, a much higher luminosity
than the latter.
All NSs with $M<M_\text{DU}$ have a small neutrino emissivity,
hence their cooling curves are nearly indistinguishable on the scale
of Fig.~\ref{f:cool},
while for objects with $M>M_\text{DU}$,
the larger the mass, and thus the bigger the central region of the star
where the DU process operates,
the lower the luminosity; the cooling curves are then no longer superimposed.
This feature depends on the EOS as explained in Sect.~\ref{s:cp}.
For example, in the N93 case the onset takes place at a very small value
of the density and the related gravitational mass $M_\text{DU}=0.82\,M_\odot$,
and therefore all NS masses undergo DU processes.
In the V18 case, the DU process starts for a 1.01\,$M_\odot$ NS,
and therefore only the first upper curve is influenced by MU alone,
whereas the remaining ones are determined by DU cooling.
The other EOSs have higher threshold values of
$M_\text{DU}/M_\odot=1.17,1.56$ for the UIX and BOB, respectively.
We now discuss the cooling curves switching on the n1S0 and p1S0 gaps,
as shown in Fig.~\ref{f:cool} (central panels).
For the gaps used in this work,
the main effect of superfluidity on NSs with $M>M_\text{DU}$
(dashed green curves, partially covered)
is the strong quenching of the DU process,
and thus a substantial reduction of the total neutrino emissivity.
Hence those stars have a higher luminosity compared to the non-superfluid case.
On the other hand, if $M>M_{1S0}$
(dash-dotted green curves),
the complete blocking of the DU process disappears
and the star cools very rapidly again.
One observes results in line with these features in the figure,
namely between $M_\text{DU}$ and $M_{1S0}$ there is now a smooth dependence of
the luminosity on the NS mass for a given age.
The effect is qualitatively the same for all EOSs,
just the distribution of NS masses in the cooling diagram depends on the EOS,
which will be analyzed in the next section.
Regarding the effect of the atmosphere models,
we note that by assuming a proper atmosphere for any given data point,
all current cooling data could potentially be explained in
the present scenario,
by assigning
a Fe atmosphere (black curves) to the oldest objects
and an accreted light-elements atmosphere (shaded area) to the hottest ones.
The most extreme object currently known,
XMMU J1731-347,
($\log_{10}{t}\approx4.4$, $\log_{10}{L_\gamma^\infty}\approx34.2$),
is indeed supposed to have a carbon atmosphere,
see the discussions in \cite{Beznogov1,Beznogov2,2018MNRAS}.
Although in general the presence of superfluidity slows down the cooling,
the PBF processes might prevail in certain situations and provide
an accelerated cooling of certain stellar configurations
\citep{2011PRLPage,2011MNRASSHTE,2011MNRASYA,2016MNRAS}.
It is therefore of interest to show in Fig.~\ref{f:cool} (lower panels)
also results where the 1S0 PBF processes have been switched off by hand.
It can be seen, however, that the effect of PBF cooling in the 1S0 channels
is practically negligible.
We conclude that the BCS p1S0 gap alone is able to suppress sufficiently
the DU cooling and to yield realistic cooling curves,
provided that it extends over a large enough density/mass range.
This is the case for all considered EOSs,
which yield however different mass profiles in the luminosity vs.~age plane.
This illustrates the necessity of precise information
on the masses of the NSs in the cooling diagram,
without which no theoretical cooling model can be verified.
We study this issue in some detail in the following.
\begin{figure}
\vskip-13mm
\centerline{\includegraphics[angle=0,scale=0.33,clip]{coolmic4}}
\vskip-13mm
\caption{
Deduced NS mass distributions from the cooling diagrams in
Fig.~\ref{f:cool} (central panels)
for the different EOSs.
Maximum masses are also indicated.
The lowest panel shows some recent theoretical results
\citep{anton16,alsing18}.
}
\label{f:md}
\end{figure}
\begin{figure}
\vskip-4mm
\centerline{\includegraphics[angle=0,scale=0.31,clip]{coolmic5}}
\vskip-9mm
\caption{
Cooling curves obtained with the V18 EOS, Fe atmosphere,
and 1S0 BCS gaps scaled by
factors $s=0.2,0.5,1.0,2.0$.
The insets show the derived mass distributions.
}
\label{f:scale}
\end{figure}
\subsection{Neutron star mass distributions}
Assuming that the currently observed set of isolated NSs in the cooling
diagrams of Fig.~\ref{f:cool}
reflects the unbiased mass distribution of NSs in the Universe
(which is highly unlikely due to strong selection effects:
for example, very old and massive NSs are too faint to be observed
and would therefore never appear in the cooling diagrams,
and the mass distribution of isolated NSs could be very different from that
of NSs in binary systems),
one can extract straightforwardly the predicted mass distribution
of NSs from the figures.
For simplicity we disregard the error bars in this analysis.
The results are shown as histograms in Fig.~\ref{f:md},
obtained directly from the binning in mass intervals in Fig.~\ref{f:cool}.
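This "naive" extraction can be sketched as follows: each observed (age, luminosity) point is assigned the mass whose cooling curve passes closest at that age, and the assigned masses are binned. The power-law cooling curves and data points below are hypothetical placeholders, not {\tt NSCool} output or the data of \cite{Beznogov1}:

```python
import numpy as np

# Toy version of the "naive" mass assignment:
# each observed (log10 age, log10 L) point is given the mass whose
# cooling curve passes closest in log10 L at that age.

masses = np.linspace(1.0, 2.0, 11)         # mass grid M/Msun

def toy_logL(log_t, M):
    # hypothetical cooling curve: heavier stars cool faster
    return 34.0 - 0.5 * (log_t - 3.0) - 4.0 * (M - 1.0)

# hypothetical "observed" sources: (log10 age [yr], log10 L [erg/s])
data = [(3.0, 33.8), (4.0, 33.0), (5.0, 31.5), (5.5, 30.0)]

assigned = []
for log_t, log_L in data:
    resid = [abs(log_L - toy_logL(log_t, M)) for M in masses]
    assigned.append(masses[int(np.argmin(resid))])

# histogram in 0.1 Msun bins, as in the mass-interval binning of the text
counts, edges = np.histogram(assigned, bins=np.arange(0.95, 2.06, 0.1))
```

With real data the residual would be weighted by the observational error bars, which are disregarded here for simplicity, as in the text.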
One observes clear differences between the four EOSs,
with surprisingly small dependence on the atmosphere model.
The lowest panel provides for comparison
a compilation of recent theoretical results
for the NS mass distribution \citep{ozel12,kizil13,anton16,alsing18}.
We stress again, however, that there is no good reason that the
mass distribution extracted from the cooling data in this way
should be similar to the overall mass distribution of NSs in the Universe
or even to that of isolated NSs only.
Due to this problem and the scarcity of data,
no firm conclusions can be drawn for the moment,
apart from perhaps excluding the BOB model,
which does not predict any mass even close to the NS canonical value
of about $1.4\,M_\odot$.
This model also features a very large $M_\text{max}=2.51\,M_\odot$,
which seems to be in conflict with recent upper limits on $M_\text{max}$
derived from analysis of the NS merger event \citep{shiba17,marga17,rezz18}.
On the other hand,
the $M_\text{max}=2.04\,M_\odot$ of the UIX EOS appears too small,
which leaves as most realistic models either N93 or V18.
Clearly more data points, ideally with assigned known masses,
would be required for a more profound analysis of this kind.
To emphasize even more the value of data with well-assigned masses,
we point out that the mass distributions do not only depend on the EOS,
but also on the pairing gaps.
For that purpose we plot in Fig.~\ref{f:scale}
the results obtained with the V18 model and applying
different scaling factors $s = 0.2, 0.5, 1.0, 2.0$ to the 1S0 BCS gaps,
which could be motivated by the polarization effects discussed
in Sec.~\ref{s:gaps}.
One sees that while the overall coverage of the luminosity vs.~age plane
remains nearly unaffected,
the deduced NS mass distributions
(shown as insets)
depend sensitively on the gap scaling factor.
In this specific case,
one would be able to exclude very large scaling factors,
which is indeed physically reasonable.
Of course, by also modifying the density domain of the pairing,
similar variations would be obtained, in line with the considerations
in Sec.~\ref{s:rescool}.
But we think it is premature to try to resolve this issue
with the present set of cooling data.
\section{Conclusions}
\label{s:end}
Motivated by the recent availability of more stringent restrictions
on the NS EOS provided by a NS merger event,
we have analyzed in this work the predictions of some compatible
microscopic BHF EOSs for the cooling properties of isolated NSs.
All EOSs feature strong DU cooling for a wide range of masses
and the presence of superfluidity is required for realistic cooling scenarios.
We find that assuming the absence of n3P2 pairing and employing n1S0 and p1S0 BCS
gaps with possibly rather generous scaling factors,
a reproduction of all current cooling data for isolated NSs
can be achieved with any of the proposed EOSs.
A naive and straightforward analysis of the deduced NS mass distribution
would exclude only the stiffest BOB EOS,
which also predicts a fairly large maximum mass.
The combined and consistent analysis of different aspects of NS physics
like mergers, radius measurements, and cooling
will allow in the future an ever more accurate derivation of the nuclear EOS,
also in view of the possible presence of exotic components like quarks
or hyperons,
which was excluded from the start in the present work,
but will be addressed in the future.
\section*{Acknowledgments}
We acknowledge helpful discussions with M.~Fortin
and financial support from ``PHAROS,'' COST Action CA16214,
and the China Scholarship Council
(CSC File No.~201706410092).
\bibliographystyle{mnras}
\section{Introduction} In the simplest models of inflation, the
primordial density perturbations have a negligible degree of
nongaussianity. The parameter $f_{NL}$ which characterizes
nongaussianity is of the order of $|n-1|\ll 1$ (where $n$ is the
spectral index) in conventional inflation models
\cite{BMR}-\cite{SeeryLidsey}, whereas the current experimental
limit is $|f_{NL}|\mbox{\raisebox{-.6ex}{~$\stackrel{<}{\sim}$~}} 100$ \cite{WMAP3}; one can additionally
characterize the nongaussianity using the trispectrum, which is also
small in conventional models \cite{trispectrum}. Nevertheless,
there have been intense theoretical efforts to find models which
predict observably large levels \cite{otherNG} (see \cite{NGreview}
for a review). It has been difficult to find examples which give
large $f_{NL}$. In single field inflation models a small inflaton
sound speed is necessary to achieve large nongaussianity
\cite{large_nongauss}, as in the models of \cite{DBI,leblond}, unless the
inflaton potential has a sharp feature \cite{feature}.
The simplest multi-field models do not seem to give large
nongaussianity \cite{multi_field}, though it is not clear if this is
true also of more complicated models. Thus it is quite significant
that one of the most prevalent classes of models, hybrid inflation
\cite{hybrid}, is able to yield large nongaussianity for certain
ranges of parameters \cite{BC}. The effect is due to the growth of the
waterfall field---tachyonic preheating---which contributes to the
curvature perturbation, and hence the temperature anisotropy, only
starting at second order in cosmological perturbation theory.
We show that, depending on the values of certain model parameters,
two interesting effects are possible: $n=4$ distortion
of the spectrum or large nongaussianity. The same effect was observed
in \cite{preheatNG0}, though tachyonic preheating after hybrid inflation
was not considered in that paper.\footnote{Reference \cite{preheatNG0}
studied the behaviour of metric perturbations during preheating
in the model $V = \lambda \phi^4 / 4 + g^2\phi^2\chi^2/2 + \lambda' \chi^4/4$
and found that an $n=4$ contamination of the spectrum is generated
when the $\chi$ field is heavy throughout inflation, while large nongaussianity
is possible when the $\chi$ field is light. In this paper we consider a different
model, tachyonic preheating, finding results which are qualitatively
similar.}
Nongaussianity from preheating has also been studied
in \cite{preheatNG1}-\cite{preheatNG6}. The calculations are
complicated, and required numerical integrations over time and
wavenumbers; hence the results are not immediately intuitive. One
of our goals in the present paper is to give a better understanding
of this novel effect, and to present some new results concerning the
application of these results to popular models of hybrid inflation
including brane inflation \cite{KKLMMT} and P-term inflation
\cite{Pterm}, which is a synthesis of supergravity inflationary
models interpolating between F-term and D-term inflation.
We begin by reviewing the results of \cite{BC} in section \ref{II}.
In section \ref{III} we apply these results by establishing
constraints on the parameters of hybrid inflation, coming either from
the production of large nongaussianity, or from nonscale-invariant
contributions to the spectrum (as opposed to bispectrum). These
results extend and correct our previous limits \cite{BC}. We then
adapt them to the cases of brane-antibrane inflation in section
\ref{IV} and P-term inflation in section \ref{V}. We further extend
our analysis to the more realistic case of a complex tachyon field
in section \ref{VI}, showing that
the extra components of the tachyon add in a simple way and amplify
the real-field results by factors of order unity. Conclusions are
given in section \ref{VII}. Appendix A gives details about the
matching between early- and late-time WKB solutions of the tachyon
fluctuation mode functions, while appendix B gives details about
the source term of the curvature perturbation for complex tachyons.
\section{Review of previous results}
\label{II}
\subsection{Hybrid inflation}
The hybrid inflation model which we study is defined by the
potential
\begin{equation}
\label{pot}
V(\varphi,\sigma) =
\frac{\lambda}{4} \left( \sigma^2 - v^2 \right)^2
+ \frac{m_\varphi^2}{2}\varphi^2 + \frac{g^2}{2} \varphi^2 \sigma^2
\end{equation}
where $\varphi$ is the inflaton and $\sigma$ is the tachyonic field.
Its mass depends on $\varphi$ as
$m_\sigma^2 = -\lambda v^2 + g^2 \varphi^2$, which changes sign
when $\varphi$ reaches the critical value
$\varphi_c = (\sqrt{\lambda}/{g}) v$. At this time, fluctuations
in the tachyon field start to grow exponentially.
This phase of exponential growth is called tachyonic preheating
\cite{tachyonic1}-\cite{GarciaBellido}; see also \cite{preheating}
for a discussion of the general theory of preheating and
\cite{resonance} for a different type of tachyonic preheating.
During the early stages of preheating, before the fluctuations have
become nonperturbatively large and before the backreaction has set
in, the expansion of the universe will still be approximately de
Sitter. Once the tachyon fluctuations become sufficiently large
their backreaction modifies the expansion of the universe and brings
inflation to an end. This happens at a time $N_\star \equiv H
t_\star$ when the fluctuations in $\sigma$ grow to a certain value,
\begin{equation}
\label{end_of_inflation}
\left\langle \delta\sigma^2(N_*)\right\rangle
= \left. \int \frac{d^3k}{(2\pi)^3} |\xi_k|^2 \right|_{N = N_\star}
= \frac{v^2}{4}
\end{equation}
where $\xi_k$ is the mode function for the fluctuations (discussed
below). This happens at some time after the onset of the
instability. For a wide range of parameters (including the values
originally considered in \cite{hybrid}) one has $N_\star \ll 1$ so
that the symmetry breaking completes on a time scale short compared
to the Hubble time. This is the usual {\it waterfall} regime of
hybrid inflation. In the present work we will consider both the
possibilities that $N_\star \ll 1$ and also $N_\star \mbox{\raisebox{-.6ex}{~$\stackrel{>}{\sim}$~}} 1$.
We
find it convenient to measure time in terms of number of e-foldings,
taking $N=0$ to coincide with the onset of the instability,
when $m_\sigma^2=0$, and $N_*$
to be the end of inflation, defined by (\ref{end_of_inflation}).
Horizon crossing occurs at some $N_i<0$, so the number of e-foldings
since horizon crossing is $N_e = N_* - N_i$. We determine $N_e$
using the standard relation
\begin{equation}
\label{Ne2}
N_e = 62 - \ln\left(\frac{10^{16}\ \mathrm{GeV}}{V^{1/4}}\right)
- \frac{1}{3}\ln\left(\frac{V^{1/4}}{\rho_{\mathrm{r.h.}}^{1/4}}\right)
\end{equation}
with the energy density at reheating ($\rho_{\mathrm{r.h.}}\sim
T_{\mathrm{r.h.}}^4$) assumed to be limited by the gravitino bound
$T_{\mathrm{r.h.}}\mbox{\raisebox{-.6ex}{~$\stackrel{<}{\sim}$~}} 10^{10}$ GeV, though we have checked that this
assumption has little effect on our results. Given $N_*$ and $N_e$,
$N_i$ is determined by $N_i = N_*-N_e$.
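Equation~(\ref{Ne2}) is straightforward to evaluate; the sketch below takes $\rho_{\mathrm{r.h.}}^{1/4}\sim T_{\mathrm{r.h.}}$ and uses the gravitino-bound reheating temperature as the default:

```python
import math

# Number of e-foldings from Eq. (Ne2), with rho_rh^(1/4) ~ T_rh.
def efolds(V14_GeV, Trh_GeV=1e10):
    """V14_GeV: V^(1/4) in GeV; Trh_GeV: reheating temperature in GeV
    (default: the gravitino bound T_rh ~ 1e10 GeV)."""
    return (62.0
            - math.log(1e16 / V14_GeV)
            - (1.0/3.0) * math.log(V14_GeV / Trh_GeV))

# e.g. a GUT-scale potential, V^(1/4) = 1e16 GeV
Ne = efolds(1e16)   # ~ 57.4
```

Lowering either the inflation scale or the reheating temperature reduces $N_e$, consistent with the mild sensitivity to the reheating assumption noted above.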
\subsection{Second order curvature perturbation}
We work up to second order in perturbation theory, employing the longitudinal gauge
throughout. The expanded metric and Einstein equations can be found in \cite{BC}.
The matter content of the theory is expanded in perturbation theory as
\begin{eqnarray*}
\varphi(\tau,\vec{x}) &=& \varphi_0(\tau) + \delta^{(1)} \varphi(\tau,\vec{x})
+ \frac{1}{2}\delta^{(2)} \varphi(\tau,\vec{x}) \\
\sigma(\tau,\vec{x}) &=& \delta^{(1)} \sigma(\tau,\vec{x})
+ \frac{1}{2}\delta^{(2)} \sigma(\tau,\vec{x})
\end{eqnarray*}
As discussed in \cite{BC}, we are justified in dropping the homogeneous background
of the tachyon field $\langle \sigma(\tau,\vec{x}) \rangle \equiv \sigma_0(\tau) = 0$.
Conformal time, $\tau$, is related to cosmic time as $dt = ad\tau$. We denote derivatives
with respect to conformal time as $f' = \partial_\tau f$ and with respect to cosmic time
as $\dot{f} = \partial_t f$.
Similarly the gauge invariant curvature perturbation, $\zeta$, is expanded in perturbation
theory as
\[
\zeta = \zeta^{(1)} + \frac{1}{2}\zeta^{(2)}
\]
Because $\sigma_0 = 0$ the first order contribution $\zeta^{(1)}$ is identical to the
standard result from single field models. We split the second order curvature perturbation
into a component which is due to the inflaton field and a component which is due to the
tachyon field as
\[
\zeta^{(2)} = \zeta^{(2)}_\varphi + \zeta^{(2)}_\sigma
\]
The second order inflaton curvature perturbation, $\zeta^{(2)}_\varphi$, coincides with the
$\zeta^{(2)}$ in single field models. This contribution has been previously computed and
is known to be small and conserved on large scales \cite{BMR}-\cite{SeeryLidsey}.
The quantity of interest is $\zeta^{(2)}_\sigma$, the tachyon curvature perturbation.
Beyond linear order in perturbation theory there are nonadiabatic pressures in the model
which will source the time evolution of $\zeta^{(2)}$ on large scales. The contribution
$\zeta^{(2)}_\sigma$ is the term which is amplified during the preheating phase and which
will come to dominate $\zeta^{(2)}$ at late times. We therefore focus on $\zeta^{(2)}_\sigma$,
since any significant nongaussianity will arise due to this term.
One of the principal results of \cite{BC} was the computation of the
tachyonic contribution to the second order tachyon curvature perturbation in terms of
the first order tachyon fluctuations $\delta^{(1)}\sigma$:
\begin{eqnarray}
\zeta^{(2)}_{\sigma} &\cong& \frac{\kappa^2}{\epsilon}
\int_{\tau_i}^{\tau}d\tau' \left[
\frac{\left(\delta^{(1)}\sigma'\right)^2}{\mathcal{H}(\tau')} \right. \nonumber
\\ &-& \left. \frac{\mathcal{H}(\tau')^2}{\mathcal{H}(\tau)^3}\left( \left(\delta^{(1)}\sigma'\right)^2
- a^2 m_{\sigma}^2\left(\delta^{(1)}\sigma\right)^2\right) \right]
\label{final}
\end{eqnarray}
where $\kappa^2 = M_p^{-2} = 8\pi G_{N}$, $\epsilon$ is the slow
roll parameter, $\epsilon=\frac12 M_p^2 (V'(\varphi)/V)^2$, $\tau$ is the
conformal time, $\tau = -[H a (1-\epsilon)]^{-1}$, ${\cal H}$ is the
conformal time Hubble parameter, ${\cal H} = -[\tau(1-\epsilon)]^{-1}$,
and all factors in the integrand are evaluated at $\tau'$ unless
otherwise indicated. In deriving (\ref{final}), we have performed
partial integrations in which surface terms at the initial time
were dropped; hence (\ref{final}) is only valid for perturbations
which are dominated by the tachyonic growth at late times. For
such perturbations, there is little sensitivity to the value taken for
$\tau_i$. An analogous result was derived for hybrid inflation (not considering
preheating) in \cite{EV}.
Metric perturbations during preheating have also been discussed in
\cite{firstorderreheating}-\cite{FB2}.
Since $\zeta^{(2)}_\sigma$ depends only on the first order tachyon fluctuation $\delta^{(1)}\sigma$,
and not on $\delta^{(2)}\sigma$ \footnote{Indeed, as was shown in \cite{BC}, $\delta^{(2)}\sigma$ decouples
from the gauge invariant quantity up to second order in perturbation theory.} we can drop
the superscript $(1)$ and denote $\delta^{(1)}\sigma$ by $\delta\sigma$ in subsequent text.
The expression (\ref{final}) satisfies an important consistency check,
namely that it is a local expression. Nonlocal
operators $\triangle^{-n}$, powers of the inverse Laplacian, arise at intermediate steps in
the calculation, using the generalized longitudinal gauge,
which separates metric perturbations into scalar, vector and tensor
components. In the process of decoupling these to solve for the
curvature perturbation, one must apply $\triangle^{-1}$. The nonlocal terms
should cancel out of physical quantities, similarly to
electrodynamics in Coulomb gauge. The second order curvature
perturbation is related to the observable CMB temperature
fluctuations in a nontrivial way, so this by itself does not prove
that $\zeta^{(2)}$ must be local. However \cite{LangloisVernizzi}
has recently shown that under the conditions present in our model,
$\zeta^{(2)}$ should indeed be local.
Using (\ref{final}), it is possible to compute tachyonic
contributions to the spectrum and bispectrum of the curvature
perturbation,
\beqa
\label{spectrum}
\left\langle \zeta^{(2)}_{k_1}\, \zeta^{(2)}_{k_2}\,
\right\rangle &\equiv& \delta(\vec k_1 + \vec k_2) \, S(k_i)\\
\label{bispectrum}
\left\langle \zeta^{(2)}_{k_1}\, \zeta^{(2)}_{k_2}\,
\zeta^{(2)}_{k_3} \right\rangle
&\equiv& {\delta^{(3)}(\vec k_1 + \vec k_2 + \vec k_3)\over(2\pi)^{3/2}}\, \,
B(\vec k_1, \vec k_2 ,\vec k_3)\nonumber\\
\eeqa
\subsection{Tachyon mode functions}
To compute the correlations in (\ref{spectrum})-(\ref{bispectrum}),
we express the tachyon fluctuation in terms of creation and
annihilation operators,
\beq
\label{quantum}
\delta \sigma(\vec x,N) = \int {d^{\,3}k\over (2\pi)^{3/2}}
\,a_k \, \xi_k(N)\, e^{i\vec k\cdot\vec x}+ {\rm h.c.}
\eeq
where the mode functions obey the linearized tachyon equation of
motion. To make this equation more tractable, we have approximated
the tachyon mass dependence on time ($N$) as being linear,
$m_\sigma^2 \cong -c H^2 N$, so that
\begin{equation}
\label{mode}
\frac{d^2}{dN^2}\delta\sigma_k + 3 \frac{d}{dN}\delta\sigma_k +
\left[\hat{k}^2e^{-2N} - c N\right]\delta\sigma_k = 0
\end{equation}
where $\hat{k} = k / H$. We found that this technical assumption is
nearly always satisfied for model parameter values consistent with the
near scale-invariance of the CMB fluctuations; furthermore it is
always satisfied for parameters which lead to large nongaussianity.
The term $cN$ is proportional to the first term in the Taylor series
for $e^{2\eta N} - 1$, where
$\eta \cong 4 M_p^2 m^2_\varphi/ (\lambda v^4)$ is the usual slow roll
parameter for $\varphi$. We thus demand that
$2\eta|N|\ll 1$ throughout inflation. Notice that inflation is
ended by the tachyonic instability, not by the failure of the slow
roll conditions.
The coefficient $c$ is given by $c = 2\eta\lambda v^2/H^2$, and $H^2 = V/(3M_p^2)$, with $V\cong
\frac14\lambda v^4$. Using the COBE normalization
$V / (M_p^4 \epsilon) = 6\times 10^{-7}$ to eliminate the inflaton
mass $m_\varphi$, we find that $c= 2.2\times 10^4 g \, M_p/v$.
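This normalization can be checked numerically. The sketch below (in units $M_p=1$) eliminates $m_\varphi$ via the COBE normalization, assuming $\epsilon$ is evaluated at $\varphi\approx\varphi_c$, and reproduces the quoted coefficient independently of the chosen $g$, $v$, and $\lambda$:

```python
import math

# Numeric check of c ~ 2.2e4 g Mp/v, in units Mp = 1, using
#   c = 2*eta*lam*v^2/H^2,  eta = 4*m^2/(lam*v^4),  H^2 = lam*v^4/12,
# with m^2 fixed by the COBE normalization V/eps = 6e-7 and
# eps = (1/2)(V'/V)^2 evaluated at phi = phi_c = sqrt(lam)*v/g
# (an assumption; the text does not state where eps is evaluated).

def c_coefficient(g=1e-3, v=1e-3, lam=1.0):
    phi_c = math.sqrt(lam) / g * v
    V = lam * v**4 / 4.0
    eps = V / 6e-7                        # COBE: V/(Mp^4 eps) = 6e-7
    # eps = (1/2)(m^2 phi_c / V)^2  =>  m^2 = V*sqrt(2*eps)/phi_c
    m2 = V * math.sqrt(2.0 * eps) / phi_c
    eta = 4.0 * m2 / (lam * v**4)
    H2 = lam * v**4 / 12.0
    c = 2.0 * eta * lam * v**2 / H2
    return c * v / g                      # should be ~2.2e4, independent of g, v

coeff = c_coefficient()
```

Algebraically the combination reduces to $c\,v/g = 24\sqrt{2/(2.4\times10^{-6})} \approx 2.19\times10^4$, in agreement with the value quoted above.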
Although eq.\ (\ref{mode}) has exact solutions in terms of Airy
functions when $k=0$, for general $k$ no closed-form solutions exist.
We therefore constructed solutions, using the WKB approximation,
or alternatively
the adiabatic approximation. The WKB approximation is valid in the
limit of $N\to\pm\infty$; these solutions are matched to each other
at $N_k$, a $k$-dependent intermediate value of $N$. They have the
form
\beq
\label{solns}
\xi_k \cong \left\{ \begin{array}{ll}
\left({2 H\hat k^3}\right)^{-1/2}
\left(1 + i \hat k e^{-N}\right), & N < N_k \\
b_k\,{ e^{-\frac32 N + \frac{9}{4c}z^{3/2}}(1+|z|)^{-1/4} }, & N > N_k
\end{array} \right.
\eeq
with
\beq
\label{bk}
b_k = {1-i\sqrt{c|N_k|}\over \sqrt{2 H}(c|N_k|)^{3/4}}
{(1 + |z_k|)^{1/4} \over
\exp\left({9\over 4c}z_k^{3/2}\right)}
\eeq
where $z \equiv \left(1+\frac49 cN\right)$,
$z_k \equiv \left(1+\frac49 cN_k\right)$, and the dividing time
between the small- and large-$N$ behavior for a given mode is
implicitly defined
by $N_k = \ln(\hat k/\sqrt{c}) - \ln\sqrt{|N_k|}$. The matching time $N_k$ is
discussed in some detail in appendix A. The alternative
method, using the adiabatic approximation, will be reviewed in
section \ref{III}. See also \cite{falsevacuumdecay} for a discussion of the
solutions of the mode function equation.
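Although $N_k$ is only defined implicitly, the relation is easily solved by damped fixed-point iteration; the starting guess and iteration count below are arbitrary choices:

```python
import math

# Fixed-point solve of the implicit matching time
#   N_k = ln(khat/sqrt(c)) - ln(sqrt(|N_k|)),
# i.e. the moment when k^2/a^2 = |m_sigma^2| for the mode khat = k/H.
def matching_time(khat, c, n_iter=200):
    a = math.log(khat / math.sqrt(c))
    N = a if abs(a) > 0.1 else -0.1        # avoid starting at N = 0
    for _ in range(n_iter):
        N_new = a - 0.5 * math.log(abs(N))
        N = 0.5 * (N + N_new)              # damping for robustness
    return N

Nk = matching_time(khat=1.0, c=100.0)      # negative: superhorizon mode
# residual of the defining relation
resid = Nk - (math.log(1.0 / math.sqrt(100.0)) - 0.5 * math.log(abs(Nk)))
```

The undamped map already contracts with rate $\sim 1/(2|N_k|)$, so the damping mainly guards against small $|N_k|$.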
The solutions of (\ref{mode}) have been discussed in great detail in
\cite{BC}. However, a few comments are in order about the solutions
(\ref{solns}). At early times $N < N_k$ the gradient term in the
Klein-Gordon equation dominates over the mass term, $k^2/a^2 >
|m_\sigma^2|$, and the resulting mode functions look just like the
solutions for a massless field in de Sitter space. These ultraviolet
modes are redshifted by the expansion of the universe into the
instability band where the mass term dominates the dynamics
$|m_\sigma^2| > k^2/a^2$. (Because of the time dependence of
$m_\sigma$ the modes may reenter the massless regime for a brief
period of time; we have verified that this does not alter any of our
results.) In this infrared regime (where the mass term dominates)
the Airy function solutions are appropriate. We have checked the
solutions (\ref{solns}) against numerical solutions of (\ref{mode}),
and found good agreement.
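Such a numerical check can be reproduced with a simple fixed-step RK4 integration of (\ref{mode}); the toy initial data below (unit amplitude, zero velocity, rather than the Bunch-Davies normalization of the WKB solution) suffice to exhibit the tachyonic amplification for $N>0$:

```python
import math

# RK4 integration of the mode equation (Eq. mode):
#   sigma'' + 3 sigma' + (khat^2 e^{-2N} - c N) sigma = 0.
# Toy initial data, not the Bunch-Davies normalization.
def integrate_mode(khat=1.0, c=50.0, N0=-2.0, N1=2.0, h=1e-3):
    def rhs(N, s, ds):
        return ds, -3.0*ds - (khat**2 * math.exp(-2*N) - c*N) * s
    s, ds = 1.0, 0.0
    N = N0
    while N < N1 - 1e-12:
        k1s, k1d = rhs(N, s, ds)
        k2s, k2d = rhs(N + h/2, s + h/2*k1s, ds + h/2*k1d)
        k3s, k3d = rhs(N + h/2, s + h/2*k2s, ds + h/2*k2d)
        k4s, k4d = rhs(N + h, s + h*k3s, ds + h*k3d)
        s += h/6 * (k1s + 2*k2s + 2*k3s + k4s)
        ds += h/6 * (k1d + 2*k2d + 2*k3d + k4d)
        N += h
    return s

s_end = integrate_mode()          # amplified by the instability
s_mid = integrate_mode(N1=0.0)    # value at the onset of instability
```

For $N<0$ the mode oscillates and is damped by the Hubble friction; once $-cN$ dominates, the amplitude follows the $e^{-\frac32 N + \frac{9}{4c}z^{3/2}}$ envelope of (\ref{solns}).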
\subsection{Integrated results}
Using the solution (\ref{solns}) in (\ref{final}), and going to
the limit of vanishing wave numbers,
$\zeta^{(2)}_\sigma$ takes the form
\beqa
\label{timeint}
\zeta^{(2)}_{k=0}(N_*) &=& {\kappa^2\over\epsilon}
\int {d^3 p \over(2\pi)^{3/2}}
\left(\,a_p\, b_p\, + a^\dagger_p\, b_{-p}\right)^2
\nonumber\\&&
\times\int_{{\rm max}(N_p,N_i)}^{N_*} f(c,N,N_*)\, dN
\eeqa
where $f(c,N,N_*)$ is given by
\beqa
\label{feqn}
&& f(c,N,N_{\ast}) =
{e^{-3N + \frac{9}{2c}z^{3/2}}
\left(1 + |z|\right)^{-1/2}}\times\qquad\nonumber\\
&& \qquad\left[ \frac{9}{4}\left( 1-e^{3(N-N_{\ast})}\right)
\left|\sqrt{z} - 1 - \frac{2 c\,\, {\rm sign}(z)}{27(1+|z|)}
\right|^2 - \right. \nonumber\\
&& \left. \qquad \phantom{\left(\frac94\right)^2_2}
c N e^{3(N-N_{\ast})} \right]
\eeqa
It should be noted that the dominant time-dependence is determined
by the combination $-3N + \frac{9}{2c}z^{3/2}$ in the exponent.
The $e^{-3N}$ decay factor is typical of massive modes,
which redshift as $\delta\sigma_k\sim a^{-3/2}$. The
$e^{\frac{9}{2c}z^{3/2}}$ growth factor is a result of the tachyonic
instability. As we noted in the discussion following eq.\ (\ref{final}),
(\ref{timeint}) is valid only when the late-time behavior dominates.
Therefore an important consistency condition for all of our analysis
is that
\beq
\label{condition}
\frac{9}{2c}z_*^{3/2} \equiv
\frac{9}{2c}\left(1+\frac49 c N_*\right)^{3/2} > 3|N_i|\, .
\eeq
If this condition is violated then the preheating is not playing any role
in the dynamics and we can safely assume that no significant nongaussianity
is produced.
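As a concrete illustration, the consistency condition (\ref{condition}) is simple enough to check numerically. The following sketch (not part of the original analysis; the sample values of $c$, $N_*$ and $N_i$ are purely illustrative) implements the inequality as written:

```python
import math

def preheating_relevant(c, N_star, N_i):
    """Check the consistency condition (9/2c) z_*^{3/2} > 3|N_i|,
    with z_* = 1 + (4/9) c N_* as defined in the text."""
    z_star = 1.0 + (4.0 / 9.0) * c * N_star
    growth = (9.0 / (2.0 * c)) * z_star ** 1.5   # tachyonic growth exponent
    damping = 3.0 * abs(N_i)                     # accumulated e^{-3N} damping
    return growth > damping

# Illustrative values: a long instability easily satisfies the bound,
# a short one with moderate c does not.
print(preheating_relevant(c=10.0, N_star=20.0, N_i=-60.0))  # True
print(preheating_relevant(c=1.0, N_star=2.0, N_i=-60.0))    # False
```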
Using (\ref{feqn}), the correlation functions which give the spectrum and bispectrum
from (\ref{spectrum}) and (\ref{bispectrum}) can then be computed
as
\begin{equation}
\label{Seq}
S = 2{\kappa^4\over\epsilon^2}\int {d^{\,3}p\over (2\pi)^3} |b_p|^4
\left[\int_{\mathrm{max}(N_p, N_i)}^{N_*} dN\, f(c,N,N_*)\right]^2
\end{equation}
and
\beq
\label{Beq}
B = 8{\kappa^6\over\epsilon^3}\int {d^{\,3}p\over (2\pi)^3} |b_p|^6\left
[\int_{\mathrm{max}(N_p, N_i)}^{N_*}\!\!\!\!\!\!\!\! dN\, f(c,N,N_*)\right]^3
\eeq
The tachyonic contribution to the spectrum cannot exceed the
experimentally inferred inflaton power spectrum, $\sqrt{P_\varphi(k)}
\cong 2\pi \times 10^{-5} k^{-3/2}$, leading to the bound
$S<P_\varphi$. Moreover the bispectrum is related to the
nonlinearity parameter $f_{NL}$ by $B =
-\frac{6}{5}f_{NL}\left(P_\varphi(k_1)P_\varphi(k_2) +
\mathrm{perms} \right)$, which at equal momenta $k_i$ leads to
\beq
f_{NL} =-\frac{5}{18}\,{B\over P_\varphi^2}
\label{fnl}
\eeq
The current experimental constraint is $|f_{NL}|\mbox{\raisebox{-.6ex}{~$\stackrel{<}{\sim}$~}} 100$.
By analogy, we also define a parameter $f_L$ for the spectrum
as
\beq
f_L = {S\over P_\varphi}
\label{fl}
\eeq
and demand that $|f_L| < 1$.\footnote{One might consider being more conservative and imposing,
say, $|f_L| < 0.01$, rather than $|f_L| < 1$, as we have done. Because the effect turns
on exponentially fast, our exclusion plots are actually quite insensitive to the value assumed
for $f_{L}$ and $f_{NL}$. For example, the exclusion plots for $|f_{NL}| < 1$ are visually
hard to distinguish from those for $|f_{NL}| < 100$.}
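The definitions (\ref{fnl}) and (\ref{fl}) amount to simple bookkeeping once $S$, $B$ and the inflaton power spectrum are known. A minimal sketch; the numerical values below are placeholders for illustration, not model output:

```python
def f_L(S, P):
    """Linearity parameter, eq. (fl): tachyon spectrum over inflaton spectrum."""
    return S / P

def f_NL(B, P):
    """Nonlinearity parameter, eq. (fnl), at equal momenta."""
    return -(5.0 / 18.0) * B / P**2

# Placeholder values (illustrative only):
P = 1.0e-9
S = 2.0e-10
B = 1.0e-16
print(abs(f_L(S, P)) < 1)      # spectral-distortion bound
print(abs(f_NL(B, P)) < 100)   # nongaussianity bound
```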
\section{Constraints on hybrid inflation parameter space}
\label{III}
By numerically evaluating the integrals (\ref{Seq}) and (\ref{Beq})
and applying the experimental limits on the inflaton power spectrum
and bispectrum, we were able to find constraints on the hybrid
inflation parameter space. We update these bounds in the present
section.
\subsection{The issue of scale invariance}
In \cite{BC} we noted that it is possible for the tachyonic
contributions to the spectrum or bispectrum either to be nearly
scale invariant ($S\sim 1/k^3$, $B\sim 1/k^6$), or else to badly
violate scale invariance ($S$, $B\sim k^0$). In the latter case, the
spectral index for the tachyon contribution to the two-point function is
$n=4$. However we did not clearly differentiate between these two
regimes in the limits presented in \cite{BC}, an omission which we
rectify here.
The two regimes, scale-invariant and nonscale-invariant, can be
understood in reference to the condition (\ref{condition}) which
must be satisfied in order for tachyonic preheating to play any
significant role. There are two ways to satisfy
(\ref{condition}). One is to take $c N_\star \gg 1$, which
usually requires $c > 1$. This is the regime in which the
tachyon mass is not small compared to $H$ during most of
inflation, and so it corresponds to nonscale-invariant
fluctuations of $\delta\sigma$. The tachyon fluctuations are
Hubble damped as $\delta\sigma \sim a(t)^{-3/2}$ during
inflation, but this suppression can be overcome on large
scales if the amplification during the preheating phase is
sufficiently large, which typically requires very small values
of the self-coupling $\lambda \ll 1$. This nonscale-invariant
regime corresponds to a region of the parameter space where the
waterfall condition of hybrid inflation is satisfied.
The second way to satisfy (\ref{condition}) is to take $c|N| < 1$
for all $N \in \left[N_i,N_\star\right]$. This gives a
scale-invariant spectrum for the tachyon and also for
$\zeta^{(2)}_\sigma$, which is most easily seen by writing the
tachyon mass-squared as ${|m_\sigma^2|}/{H^2} = c |N|$.
It is clear that if $c|N| < 1$ for all $N \in
\left[N_i,N_\star\right]$ then the tachyon field will have been
light compared to the Hubble scale for all $\sim 60$ e-foldings
of inflation which ensures a nearly scale-invariant spectrum for
the tachyon. Also, in this case the instability will typically
take several e-foldings to complete so that $N_\star > 1$. This
scale-invariant regime corresponds to a region of the parameter
space where the usual waterfall condition of hybrid inflation is
violated.
\subsection{Nonscale-invariant case}
In the nonscale-invariant case $f_{L}$ and $f_{NL}$ depend on $k$. Because the
tachyon spectrum
is blue in this case the strongest constraint comes from evaluating $f_{L}$,
$f_{NL}$ at the
largest values of $k$ which are measured by the CMB. In deriving our
constraints we
conservatively take this to be $k = e^6 H e^{-N_e}$ where $N_e$ is the
total number of
e-foldings of inflation (\ref{Ne2}). The resulting constraints
in the plane of $\log_{10} g$ and $\log_{10}\lambda$
are shown in the right-hand region of figure \ref{figN}, for
several values of $\log_{10} v/ M_p$. We find that the most
stringent constraints come from $f_L$ rather than $f_{NL}$, so
we expect to see distortions of the spectrum rather than
nongaussianity at the left-hand boundaries of the excluded regions.
(The other boundaries are not excluded, for the different reasons
described in the next paragraph.) In
comparing these excluded regions to figure 12 of \cite{BC}, one
sees that they are smaller than in our previous work, in the upper
right-hand corner. This is due
to the correction of an error in \cite{BC}, in which we failed to apply
the condition (\ref{condition}) restricting the validity of our analysis.
\begin{figure}[htbp]
\bigskip \centerline{\epsfxsize=0.5\textwidth\epsfbox{nongauss3a.eps}}
\caption{Excluded regions of the hybrid inflation parameter
space, for $\log_{10}v/M_p = -1,-3,-5,-7,-9$.
}
\label{figN}
\end{figure}
In computing $f_{L}$, $f_{NL}$ over a wide range of
$g,\lambda,v/M_p$ we also checked that the additional
assumptions were respected: the tachyon mass-squared $m_\sigma^2$
varies linearly with the number of e-foldings, which was shown in
\cite{BC} to require $g {v}/{M_p} < 10^{-5}$; the false vacuum
energy density $\lambda v^4 /4$ dominates during inflation,
leading to the bound $g > 460 \lambda
\left({v}/{M_p}\right)^3$; the reheat temperature exceeds 100
GeV, so that baryogenesis can occur at least during the
electroweak phase transition, leading to the lower bounds on
$\lambda$.
\subsection{Scale-invariant case; the adiabatic approximation}
On the left-hand side of figure \ref{figN}, we display new
constraints for which the spectrum and bispectrum are
scale-invariant. In contrast to the right-hand side, $f_{NL}$
provides the dominant constraint here, so that nongaussianity
is playing the important role. To obtain these results,
we employed a different approximation for the tachyon mode
functions, namely the adiabatic approximation, described in
appendix F of \cite{BC}. Because the tachyon has a small mass
during the entirety of inflation (subsequent to horizon crossing),
its mass is changing slowly, and we can use the standard mode
functions for light fields, but with a time-dependent mass:
\beq
\label{appquantum}
\delta \sigma(x) = \int {d^{\,3}k\over (2\pi)^{3/2}}
{H\over\sqrt{2k^3}} (-k\tau)^{\eta_\sigma(\tau)}
e^{ikx}\,a_k + {\rm h.c.}
\eeq
Here $\eta_\sigma = M_p^2 V_{,\sigma\sigma}/V$ is the slow-roll
parameter for the tachyon, given by
\beq
\label{etas2}
{\eta_\sigma}(\tau) = 8\eta\, {M_p^2\over v^2}\,
\ln| H\tau|
\eeq
where $\eta = M_p^2 V_{,\varphi\varphi}/V$. We have also verified that the
solution
(\ref{appquantum}) can be reproduced by (\ref{solns}) in the appropriate limit.
Since the mode
functions have a simple form in the adiabatic approximation, it is possible to
go farther
analytically in this case. Notably, we could find an implicit
equation for $N_*$ after evaluating the integral
(\ref{end_of_inflation}):
\beq
\label{nstar}
N_* \cong { v/ M_p\over 15000\, N_e\,
g} \ln\left[ 1 + 2\times 10^6\,{N_*g}
\left({M_p\over v}\right)^{3}\right]
\eeq
This expression for $N_*$ is much easier to evaluate than the
one which arises in the WKB approximation since the latter
leads to a numerical integral $f(N; \lambda, g,v)$
which must be inverted to find the $N_*$ which satisfies
$f(N_*; \lambda, g,v)=v^2/4$.
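Equation (\ref{nstar}) is easily solved by fixed-point iteration, which is the sense in which it is much easier to evaluate than the WKB version. A hedged sketch; the parameter values are illustrative, not tied to any particular model point:

```python
import math

def solve_nstar(v_over_mp, g, Ne, tol=1e-12, max_iter=1000):
    """Iterate N -> (v/Mp)/(15000 Ne g) * ln(1 + 2e6 N g (Mp/v)^3),
    the implicit relation (nstar) for the end of the instability."""
    pref = v_over_mp / (15000.0 * Ne * g)
    N = 1.0  # initial guess
    for _ in range(max_iter):
        N_new = pref * math.log(1.0 + 2.0e6 * N * g / v_over_mp**3)
        if abs(N_new - N) < tol:
            return N_new
        N = N_new
    return N

# Illustrative inputs give N_* of order a few percent of an e-folding.
N_star = solve_nstar(v_over_mp=1e-3, g=1e-6, Ne=60)
print(N_star)
```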
Moreover, the time ($N$) integral in (\ref{final}) can be
evaluated explicitly using the saddle point method, since it is
dominated by the exponential growth near $N=N_*$. Namely, an
integral of the form $\int dN e^g$ is approximated by
$e^{g_*}/\sqrt{|g'_*|}$ where $g_*$ is the maximum value (at
$N=N_*$) and $g'_*$ is the derivative evaluated at the same
point. Defining $\eta_f$ to be $|\eta_\sigma|$ at $N=N_*$, this
results in the expression
\beqa
B(\vec k_i) &=& 4H^{6\eta_f} d_*^3\int
{d^{\,3}p\over (2\pi)^3}\,{|p|^{-3-2\eta_f}\,
|p-k_3|^{-3-2\eta_f}}\nonumber\\
&&\left({|p+k_2|^{-3-2\eta_f}} +
{|p+k_1|^{-3-2\eta_f}}\right)
\eeqa
for the bispectrum,\footnote{In \cite{BC}, the conformal time
when the instability starts is (perhaps confusingly named)
$\tau_* = -1/H$, due to our choice of $N=0$ for the beginning
of the instability.} where
$d_* = H^2 \kappa^2 \eta_f e^{2\eta_f N_*}/(2\epsilon)$. In the
limit of small $c\sim \eta_f$, this is manifestly nearly scale invariant,
$B\sim 1/k_i^{6(1+\eta_f)}$, by power-counting the divergent behavior of
the integral in the infrared. The divergence must be cut off
in the usual way, by ignoring modes with $p$ smaller than the
horizon. Numerically evaluating the remaining $p$ integral for
$k_i\sim k$, and using $k\tau_*\sim e^{N_i}$ for modes
near the horizon,\footnote{The horizon-crossing condition is
$k\tau_i=1$, and $\tau_*/\tau_i = e^{N_i}/e^0$.}
we find
\beq
B(k)\cong 45\, k_i^{-6(1+\eta_f)}\left({\kappa^2 H^2\eta_f
e^{2\eta_f N_e}\over 2\pi \epsilon}\right)^3
\eeq
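The saddle-point step relies on the integrand being dominated by the exponential growth near $N=N_*$. This is easy to confirm numerically for the dominant exponent $-3N + \frac{9}{2c}z^{3/2}$ quoted earlier; the values of $c$, $N_i$, $N_*$ below are illustrative only:

```python
import math

def g_exp(N, c):
    """Dominant exponent -3N + (9/2c) z^{3/2}, with z = 1 + (4/9) c N."""
    z = 1.0 + (4.0 / 9.0) * c * N
    return -3.0 * N + (9.0 / (2.0 * c)) * z ** 1.5

def integral(lo, hi, c, steps=20000):
    """Midpoint-rule quadrature of exp(g_exp) over [lo, hi]."""
    h = (hi - lo) / steps
    return h * sum(math.exp(g_exp(lo + (i + 0.5) * h, c)) for i in range(steps))

c, N_i, N_star = 5.0, 0.0, 10.0
full = integral(N_i, N_star, c)
tail = integral(N_star - 0.5, N_star, c)
print(tail / full)   # close to 1: the last half e-folding dominates
```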
Further using the
COBE normalization to write $\kappa^2 H^2/\epsilon = 2\times
10^{-7}$, we find that the nonlinearity parameter is
\beq
f_{NL} = -2.6\times 10^{-5} \left(\eta_f e^{2\eta_f N_e}
\right)^3.
\eeq
Moreover the COBE normalization implies $\eta_f = 7360 N_* g
M_p/v$. Demanding that $|f_{NL}|<100$ gives the new excluded
regions on the left-hand side of figure \ref{figN}. Unlike the
nonscale-invariant regions, these have nongaussianity as the
dominant effect, rather than the tachyon contribution to the
spectrum.
We have claimed that in the scale-invariant regime the dominant
constraint is coming from $f_{NL}$ and not from $f_L$, contrarily to
the nonscale-invariant regime. We
now justify this claim. Repeating the steps above for the tachyon spectrum,
$S$, one obtains
\begin{equation}
|f_L| \sim 10^{-6}\left(\eta_f e^{2\eta_f N_e}
\right)^2
\end{equation}
Thus, in the scale-invariant regime, the linearity and nonlinearity parameters
are related as
\begin{equation}
\label{nonlinearity_vs_linearity}
|f_{NL}| \sim 10^{6} |f_L|^{3/2}
\end{equation}
so that $|f_{NL}| > |f_L|$ except when $|f_L|$ is extremely small. This
demonstrates that it is indeed possible to obtain significant nongaussianity
in this region of the parameter space. We have verified that the result
(\ref{nonlinearity_vs_linearity}) can also be derived using the mode function
solutions (\ref{solns}).
\section{Implications for Brane-Antibrane Inflation}
\label{IV}
We now apply our results to a popular model of inflation from string
theory, brane-antibrane inflation, correcting and extending our
preliminary results in this direction in \cite{BC}. This can be done
by mapping the low-energy effective action for the brane-antibrane
system onto the hybrid inflation model. We focus on the popular
KKLMMT scenario \cite{KKLMMT,realistic,tye} which reconciles brane
inflation with modulus stabilization using warped geometries with
background fluxes for type IIB string theory vacua \cite{vacua}. In
this model, the antibrane is at the bottom of a Klebanov-Strassler
(KS) throat \cite{KS}, with warp factor $a_i\ll 1$, and the brane
moves down the throat.\footnote{See \cite{braneNG} for other discussions on
nongaussianity in string theory models of inflation.}\
Within the KS throat the geometry is well approximated by
\[
ds^2 = a(y)^2 g_{\mu\nu}dx^{\mu}dx^{\nu} + dy^2 + y^2 d\Omega_5^2
\]
where $y$ is the distance along the throat, $a(y)\cong e^{ky}$ is the warp
factor and $d\Omega_5^2$ is the metric on the base space of the corresponding
conifold singularity of the underlying Calabi-Yau space. In the subsequent analysis
we ignore the base space and treat the geometry as $AdS_5$.
In the following we compute only the nongaussianity which is due to
the preheating dynamics and ignore the possible effects of
the inflaton sound speed \cite{DBI,leblond,braneCs}; hence our results
may be thought of as a lower bound on the nongaussianity from brane
inflation.
In string theory the open string tachyon $T$ between a
D$3$-brane and antibrane,\footnote{We restrict ourselves to
inflation models driven by D$3$-branes since inflation driven by
higher-dimensional branes has problems with overclosure of the
universe by defect formation \cite{BBCS}.} separated by
a distance $y$, is described by the action \cite{Sen}
\begin{eqnarray}
S_{\mathrm{tac}} &=& -\int d^{4}x \, \sqrt{-g} \, \mathcal{L} \nonumber \\
\mathcal{L} &=& V(T,y) \, \sqrt{1 + (a_iM_s)^{-2}
g^{\mu\nu}\partial_{\mu}T^*\partial_{\nu}T}
\label{DBI}
\end{eqnarray}
Here the small-$|T|$ expansion of the potential is
\begin{eqnarray}
V(T,y) &=& 2\, a_i^4\,\tau_3 \left[ 1 +
\frac{1}{2} \left(\left(\frac{M_s y}{2\pi}\right)^2
- \frac{1}{2}\right)|T|^2 \right. \nonumber\\
&& \,\,\,\,\,\,\,\,\, + \left. \mathcal{O}(|T|^4) + \cdots
\phantom{\frac{(M_s y)^2}{(2\pi)^2}}
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\right].
\label{smallT}
\end{eqnarray}
where $M_s$ is the string mass scale, $\tau_3 =
g_s^{-1}M_s^4/(2\pi)^3$ is the D3-brane
tension, and $g_s$ is
the string coupling.
Notice that in the warped throat scenario the instability does not
set in until the
branes are separated by the unwarped string
length,\footnote{There is some confusion in the literature on this
point, with some papers having stated that the instability is
determined by the warped string length, but this is not the case \cite{leblond}.
We thank L.\ Leblond for pointing this out to us.}
$(a_i M_s)^{-1}$.
An interesting difference between the string tachyon and that of
ordinary hybrid inflation is that (at $y=0$) the tachyon potential $V(|T|)$
in the string case does not have a local minimum; rather
\beq
V(T,0) \sim \tau_3 \, e^{-|T|^2 /4}
\label{exp_pot}
\eeq
The potential is minimized as
$T\to\infty$. Therefore $T$ does not have a VEV. Nevertheless,
the unstable brane-antibrane system decays into closed strings
soon after the instability begins, and the large-$T$ part of the
potential is not meaningful for determining the actual evolution
of the tachyon. In hybrid inflation, it is also true that the
end of inflation occurs somewhat before the fluctuations of the
tachyon become as large as the VEV. We will see that even in the
absence of a $T^4$ coupling, we can still define the equivalent
of $\lambda$ and $v$ for the brane-antibrane system, by equating
$\frac14\lambda v^4$ to the false vacuum energy, and $\lambda v^2$ to the
tachyon mass scale. This amounts to replacing the condition
for the end of inflation (\ref{end_of_inflation}) by
\begin{equation}
\label{eoi2}
\left. \int \frac{d^3k}{(2\pi)^3} |\xi_k|^2 \right|_{N = N_\star} =
{\hbox{false vacuum energy}\over|\hbox{tachyon mass}|^2}
\end{equation}
Despite the fact that the tachyon potential is only minimized at
$|T| \rightarrow \infty$ the condition (\ref{eoi2}) is quite
reasonable. Detailed numerical simulations of the symmetry breaking
in the theory (\ref{DBI}) were performed in \cite{BBCS}. Comparing
to the analysis of \cite{BBCS} one finds that $N_\star$ as defined in
(\ref{eoi2}) roughly corresponds to the time at which
singularities in the spatial gradients of the tachyon field
form \cite{singularity1}. The appearance of
singularities within a finite time
corresponds to the formation of lower dimensional
branes \cite{singularity2} and hence by $N=N_\star$ the inflaton
field ceases to exist as a physical degree of freedom. This
means that, as in our previous analysis, for $N > N_\star$ there no
longer exists any nonadiabatic pressure (since only one field, the
tachyon, is dynamical) and the large scale curvature perturbation
becomes conserved to all orders in perturbation theory
\cite{conserved}.
The effective values of the couplings
can be found by rewriting the action in
terms of the canonically normalized fields $\sigma = a_i\sqrt{\tau_3} T
/ M_s$ and $\varphi = \sqrt{\tau_3} y$
(see equations 3.6, 3.10 or C.1 in \cite{KKLMMT}), and then matching
to the hybrid inflation potential (\ref{pot}).
This gives the correspondence
\begin{eqnarray}
v &=& \sqrt{\frac{2}{\pi^{3}}}\frac{a_i M_s}{\sqrt{g_s}}
\label{bi_v} \\
\lambda &=& \frac{\pi^3}{4} g_s \label{bi_lambda} \\
g &=& \sqrt{2\pi g_s}\, a_i \label{bi_g}
\end{eqnarray}
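The dictionary (\ref{bi_v})--(\ref{bi_g}) is a direct substitution. A small sketch of the map, with illustrative input values that are not tuned to any of the excluded regions:

```python
import math

def hybrid_from_string(g_s, a_i, Ms_over_Mp):
    """Map KKLMMT parameters (g_s, a_i, M_s/M_p) to the effective hybrid
    inflation parameters (v/M_p, lambda, g), eqs. (bi_v)-(bi_g)."""
    v = math.sqrt(2.0 / math.pi**3) * a_i * Ms_over_Mp / math.sqrt(g_s)
    lam = (math.pi**3 / 4.0) * g_s
    g = math.sqrt(2.0 * math.pi * g_s) * a_i
    return v, lam, g

# Illustrative inputs (small g_s, modest warping):
v, lam, g = hybrid_from_string(g_s=1e-10, a_i=1e-3, Ms_over_Mp=1e-2)
print(v, lam, g)
```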
For the analysis of the preceding sections to be valid,
the inflaton potential must be well-described by
$V_0 + \frac12 m_\varphi^2\varphi^2$ during
the relevant e-foldings of inflation.
The full potential can be written as
\begin{equation}
\label{inf_pot}
V_{\mathrm{inf}} = \frac{m_\varphi^2}{2}\varphi^2
+ V_0\left(1-\frac{\nu}{4\pi^2}\frac{V_0}{\varphi^4}\right)
\end{equation}
where $V_0 = 2 a_i^4 \tau_3$
and $\nu$ is a geometrical factor which
is given by $\nu = 27 / 16$ for the KS throat.
It is typical to parameterize the inflaton mass in terms of the dimensionless quantity
$\beta$ as $m_\varphi^2 = \beta H_0^2$ where
$H_0^2 = V_0/({3M_p^2})$. Using the COBE normalization, we find
that
\beq
\beta = 10^{7/2}\, a_i^3\,\left(M_s\over M_p\right)^3.
\eeq
Demanding that the mass term in (\ref{inf_pot}) dominate over the
Coulomb term even
when the branes are separated by the local string length yields a lower bound on
$\beta$:
\[
\beta > 324 \pi^4 g_s^2 a_i^{10} \left(\frac{M_p}{M_s}\right)^2
\]
The parameter $\beta$ is also bounded from above by the requirement that
$|n-1| \mbox{\raisebox{-.6ex}{~$\stackrel{<}{\sim}$~}} 10^{-1}$ which corresponds to
$g_s^2 a_i^{10}({M_p}/{M_s})^2 \ll 5\times10^{-6}$.
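The COBE value of $\beta$ and the two bounds just stated combine into a quick numerical sanity check; the parameter values here are illustrative assumptions, not derived limits:

```python
import math

def beta_cobe(a_i, Ms_over_Mp):
    """beta from the COBE normalization: beta = 10^{7/2} a_i^3 (M_s/M_p)^3."""
    return 10.0**3.5 * a_i**3 * Ms_over_Mp**3

def beta_floor(g_s, a_i, Ms_over_Mp):
    """Lower bound from mass-term dominance over the Coulomb term:
    beta > 324 pi^4 g_s^2 a_i^{10} (M_p/M_s)^2."""
    return 324.0 * math.pi**4 * g_s**2 * a_i**10 / Ms_over_Mp**2

# Illustrative point in parameter space:
g_s, a_i, Ms = 1e-10, 1e-3, 1e-2
ok = (beta_cobe(a_i, Ms) > beta_floor(g_s, a_i, Ms)
      and g_s**2 * a_i**10 / Ms**2 < 5e-6)   # near-scale-invariance bound
print(ok)
```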
Our results apply only in the case that $\beta > 0$; moreover
the case where $\beta<0$ does not make sense from the string
theory point of view, since $\varphi=0$ denotes the bottom of the
throat, and the brane must roll toward that point, not away from
it \cite{BCDF}.
Our preliminary results about this in \cite{BC} suffered from the
neglect of the condition (\ref{condition}); moreover we unduly
restricted the full string parameter space by assuming that the scale
of inflation was determined by the COBE normalization; that is not
the case. As in hybrid inflation, we still have three parameters
even after normalizing the spectrum, which we can take to be $g_s$,
$a_i$ and the ratio of the warped string scale to the Planck scale,
$a_i M_s/ M_p$. Taking into account the additional restrictions on
$\beta$, we find that the scale-noninvariant exclusions (right-hand
side of fig.\ \ref{figN}) do not survive at all in the KKLMMT model;
however all the scale-invariant ones do. Therefore this model has the
potential for producing large nongaussianity, and is even constrained
by producing too high levels of nongaussianity.
\begin{figure}[htbp]
\bigskip \centerline{\epsfxsize=0.5\textwidth\epsfbox{string-ex1newa.eps}}
\caption{Excluded regions of the KKLMMT brane-antibrane inflation
parameter space, in the plane of $\log_{10} a_i$ versus $\log_{10} g_s$
for $\log_{10}a_i M_s /M_p =-13,-11,\dots,-5$
from nongaussianity.
}
\label{string-ex1}
\end{figure}
The constraints in the string parameter space are shown in figure
\ref{string-ex1}. The excluded regions shown here correspond to very
small values of $g_s\mbox{\raisebox{-.6ex}{~$\stackrel{<}{\sim}$~}} 10^{-10}$. In the simplest way of
connecting type IIB string theory to low-energy phenomenology, the
gauge couplings of the Standard Model are related to $g_s$ by running
down from the string scale, which would render such small values of
$g_s$ incompatible with observations. However, type IIB strings are
dual to themselves under SL(2,Z) transformations which take $g_s\to
1/g_s$. In the dual picture, the string coupling is very large, and
the gauge dynamics at the string scale would be confining. It is
conceivable that the Standard Model arises as a remnant of a strongly
coupled gauge theory at the high scale, similar to technicolor models.
In this case, the small values of $g_s$ which give rise to large
nongaussianity could still be compatible with particle physics
constraints.
\section{Implications for P-term Inflation}
\label{V}
In realistic models of hybrid inflation from supergravity, the
potential is generated by F- or D-terms. P-term inflation is a
class of models which combines the two kinds of terms and can
interpolate between them \cite{Pterm}.
The potential for P-term inflation, along the inflationary trajectory,
is
\begin{equation}
\label{p_pot}
V_{\mathrm{inf}} = \frac{g^2 \xi^2}{2}\left(1
+ \frac{g^2}{8\pi^2}\ln\frac{\varphi^2}{\varphi_c^2}
+ \frac{f}{8}\frac{\varphi^4}{M_p^4} + \cdots \right)
\end{equation}
where $\cdots$ denotes terms of order $\varphi^6/M_p^6$ and higher.
Here $\varphi_c^2=\xi$, where $\xi =\sqrt{|\xi_+|^2+ \xi_3^2}$ is defined in terms of
two
Fayet-Iliopoulos parameters $\xi_+$ and $\xi_3$, and
in (\ref{p_pot})
$f$ must lie in the interval $0 \leq f \leq 1$, since it is defined
as $f = |\xi_+|^2/ \xi^2$. The limits $f=0$ and $1$ correspond to
D-term \cite{Dterm} and F-term \cite{Fterm} models, respectively.
We will consider each of these limits separately. We do not consider
the complications which arise when these models are coupled to moduli
fields \cite{ACDavis}.
As in the models previously considered the false vacuum energy dominates
during inflation and the Hubble scale is given by
\begin{equation}
\label{p_H}
H \cong \frac{g \,\xi}{\sqrt{6}M_p}
\end{equation}
The inflaton couples to two scalar fields $\Phi_\pm$, of which
one linear combination $\sigma$ is tachyonic. Its
mass-squared is given by
\begin{equation}
\label{p_m_sigma}
m_\sigma^2 = g^2\left(\varphi^2 - \xi\right)
\end{equation}
By comparing (\ref{p_H}) and (\ref{p_m_sigma}) to the hybrid inflation potential
(\ref{pot}) we can determine the hybrid inflation model parameters as
\begin{eqnarray}
\lambda &=& \frac{g^2}{2} \label{p_lambda} \\
v &=& \sqrt{2\xi} \label{v_lambda}
\end{eqnarray}
The coupling $g$ retains its original meaning in P-term inflation.
\subsection{D-term Inflation}
D-term inflation corresponds to taking $f=0$ in (\ref{p_pot}).
During a slow roll phase the inflaton field evolves as
\[
\varphi_0(t)^2 = \varphi_c^2 - \frac{g^3 \xi}{2\sqrt{6}\pi^2}(t-t_c)
\]
which implies that $m_\sigma^2$ varies linearly with the number of e-foldings. Scales
relevant for the CMB left the horizon when $\varphi = \varphi_N$ where
\[
\varphi_N^2 = \varphi_c^2 + \frac{g^2 N}{2\pi^2}M_p^2 = \xi + \frac{g^2 N}{2\pi^2}M_p^2
\]
Two distinct regimes are possible depending on the value of the coupling $g$.
It is often assumed that $g$ is relatively large so that $g^2 N/(2\pi^2) \gg \xi/M_p^2$
\cite{Dterm2} which gives the correct amplitude of density perturbations with
$\xi \cong 10^{-5} M_p^2$
and requires $g \mbox{\raisebox{-.6ex}{~$\stackrel{>}{\sim}$~}} 2\times 10^{-3}$ for consistency. In this regime
$\varphi_N \gg \varphi_c$ so that slow roll at the onset of the instability is not guaranteed
and our previous analysis of the tachyon mode functions does not apply. However, in this
regime it is also difficult to satisfy the constraints
coming from the cosmic string tension, to avoid overproduction of
cosmic strings, $\xi \mbox{\raisebox{-.6ex}{~$\stackrel{<}{\sim}$~}} 4\times 10^{-7} M_p^2$ \cite{Dterm_strings}.
(Ref. \cite{loophole} has pointed out that the constraints on the cosmic
string tension can be weakened by incorporating the effect which strings have
on the observed spectral index.)
We are therefore driven to consider D-term inflation in the regime of
small coupling $g^2$ so that $g^2 N/(2\pi^2) \ll \xi/M_p^2$ and $\varphi_N
\cong \varphi_c$. In this case we are guaranteed that the universe
will still be in a slow roll phase at the onset of the instability
and our previous analysis holds without modification. This
corresponds to very small couplings $g \ll 2\times 10^{-3}$; however,
there is no obstruction to taking such a small coupling since $g^2$
is not necessarily related to the gauge coupling constant in a GUT \cite{Pterm}.
In this regime the COBE normalization fixes $\xi \cong 7\times
10^{-4} g^{2/3} M_p^2$ so that $g$ is the only independent model
parameter. The cosmic string constraint $\xi \mbox{\raisebox{-.6ex}{~$\stackrel{<}{\sim}$~}} 4\times 10^{-7} M_p^2$
then restricts the coupling $g$ to be smaller than $g \mbox{\raisebox{-.6ex}{~$\stackrel{<}{\sim}$~}} 1.3
\times 10^{-5}$.
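The quoted bound $g \mbox{\raisebox{-.6ex}{~$\stackrel{<}{\sim}$~}} 1.3\times 10^{-5}$ follows from combining the two relations just stated; a one-line check using only the numbers given in the text:

```python
# xi ~ 7e-4 g^{2/3} Mp^2 (COBE) together with xi < 4e-7 Mp^2 (cosmic strings)
# gives g < (4e-7 / 7e-4)^{3/2}.
g_max = (4e-7 / 7e-4) ** 1.5
print(g_max)   # ~ 1.4e-5, consistent with the ~1.3e-5 quoted above
```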
Applying our previous analysis of hybrid inflation to D-term
inflation,\footnote{See \cite{Dterm_preheat} for further discussion of preheating
in D-term inflation.}\ including the additional constraints mentioned above,
we find that there is a range of couplings,
\beq
-10.0 < \log_{10} g \mbox{\raisebox{-.6ex}{~$\stackrel{<}{\sim}$~}} -8.7
\label{dterm-ex}
\eeq
which is ruled out because of the spectral distortion constraint,
in the scale-noninvariant region of figure \ref{figN}.
On the other hand there is no constraint coming from
nongaussianity for this model.
\subsection{F-term Inflation}
F-term inflation \cite{Fterm,Fterm2} corresponds to taking $f=1$ in (\ref{p_pot}). In this
case the
dynamics are somewhat more complicated than the D-term model. Again there are
two possible regimes: a large coupling regime where $\varphi_N \gg \varphi_c$ and
our previous analysis does not apply and also a small coupling regime where
$\varphi_N \cong \varphi_c$ and our previous analysis does apply. The large coupling
regime corresponds to $g \mbox{\raisebox{-.6ex}{~$\stackrel{>}{\sim}$~}} 2\times 10^{-3}$ and again the cosmic string tension
constraints are difficult to satisfy (see, however, \cite{enlarging}).
We are therefore driven to consider only the
small coupling regime, $g \mbox{\raisebox{-.6ex}{~$\stackrel{<}{\sim}$~}} 2\times 10^{-3}$. For couplings
$3\times 10^{-7} \ll g \mbox{\raisebox{-.6ex}{~$\stackrel{<}{\sim}$~}} 2\times 10^{-3}$ it can be shown that the quartic term
$\varphi^4/M_p^4$ in the potential (\ref{p_pot}) can be neglected and the dynamics
is identical to D-term inflation, which we have already considered. Thus we consider
only the F-term model for $g \ll 3 \times 10^{-7}$ since this is the only region of
parameter space for which the model differs significantly from the D-term model.
\footnote{We have neglected the intermediate regime $0.06 \mbox{\raisebox{-.6ex}{~$\stackrel{<}{\sim}$~}} g \mbox{\raisebox{-.6ex}{~$\stackrel{<}{\sim}$~}} 0.15$
which will not yield significant nongaussianity or spectral distortion.}
For $g \mbox{\raisebox{-.6ex}{~$\stackrel{<}{\sim}$~}} 3\times 10^{-7}$, so that the $f$-term dominates the
potential, the slow roll parameter is $\epsilon = \xi^3/(8 M_p^6)$,
and the COBE normalization fixes
$\xi \cong 6.7\times 10^{6} g^2 M_p^2$
and the cosmic string tension is within observational bounds for $g
\mbox{\raisebox{-.6ex}{~$\stackrel{<}{\sim}$~}} 2\times 10^{-7}$. Again applying the general hybrid inflation
constraints, we find the excluded region
\beq
-13.0 \mbox{\raisebox{-.6ex}{~$\stackrel{<}{\sim}$~}} \log_{10} g \mbox{\raisebox{-.6ex}{~$\stackrel{<}{\sim}$~}} -9.5
\label{fterm-ex}
\eeq
which, as in the case of D-term inflation, comes from the
$k^3$ spectral distortion effect rather than nongaussianity.
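As in the D-term case, the small-coupling numbers can be cross-checked directly from the relations given above:

```python
import math

# xi ~ 6.7e6 g^2 Mp^2 (COBE, f-term dominated) together with the cosmic
# string bound xi < 4e-7 Mp^2 gives g < sqrt(4e-7 / 6.7e6).
g_max = math.sqrt(4e-7 / 6.7e6)
print(g_max)   # ~ 2.4e-7, consistent with the ~2e-7 quoted above
```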
For more general P-term models with $0< f < 1$ we expect the
excluded regions to interpolate between (\ref{dterm-ex}) and
(\ref{fterm-ex}). In deriving our constraints we have been driven to
the small coupling regime by the requirement that the cosmic string tension
be within observational bounds. Our analysis does not give any significant
constraint on the string theoretic D3/D7 model \cite{D3/D7} since in this case the cosmic
strings are not stable and there is no motivation to consider the small values
of the coupling $g$ in (\ref{dterm-ex}) and (\ref{fterm-ex}). Indeed, such small
couplings are difficult to motivate from string theory \cite{D3/D7_small_g}.
\section{The Case of a Complex Tachyon}
\label{VI}
In the preceding sections we have applied the results of \cite{BC},
which were derived under the assumption that $\sigma$ is a real
field, to models in which the tachyon is actually complex. In doing
so we have assumed that the generalization of the analysis of
\cite{BC} to the case of a complex tachyon does not significantly
modify the exclusion plot, figure \ref{figN}. Here we verify this
claim.
\subsection{Cosmological Perturbation Theory for an $O(M)$ Multiplet}
Before restricting to the case of a complex tachyon we consider the somewhat
more general case of an $O(M)$ symmetric multiplet of tachyon fields $\sigma_A$
with $A = 1, \dots, M$. The matter sector is expanded in perturbation theory as
\begin{eqnarray}
\varphi(\tau,\vec{x}) &=& \varphi^{(0)}(\tau) + \delta^{(1)} \varphi(\tau,\vec{x})
+ \frac{1}{2}\delta^{(2)} \varphi(\tau,\vec{x}) \\
\sigma_A(\tau,\vec{x}) &=& \delta^{(1)} \sigma_A(\tau,\vec{x})
+ \frac{1}{2} \delta^{(2)} \sigma_A(\tau,\vec{x}).
\end{eqnarray}
As in \cite{BC} the time-dependent vacuum expectation values (VEVs) of the
tachyon fields are set to zero, $\langle \sigma_A \rangle \equiv \sigma^{(0)} = 0$,
which is a consequence of the $O(M)$ symmetry of the theory. Notice, however, that the
tachyon field \emph{does} develop an effective VEV for the radial component
\[
\langle |\sigma| \rangle \equiv \langle \sqrt{\sigma_A\sigma^A} \rangle
\not= 0
\]
We also assume that
\[
\frac{\partial V}{\partial \sigma_A} =
\frac{\partial^2 V}{\partial \sigma_A \partial \varphi} = 0
\]
but $V$ is, for the time being, otherwise arbitrary. Here and elsewhere the potential and
its derivatives are understood to be evaluated on background values of the fields
so that $V = V(\varphi^{(0)},\sigma^{(0)})$, for example.
We consider only the $\delta^{(2)} G^0_0 = \kappa^2 \delta^{(2)} T^0_0$,
$\partial_i \delta^{(2)} G^i_0 = \kappa^2 \partial_i \delta^{(2)} T^i_0$ and
$\delta^i_j \delta^{(2)} G^j_i = \kappa^2 \delta^i_j \delta^{(2)} T^j_i$
equations since the second order vector and
tensor fluctuations decouple from this system. In the case that $\sigma_A^{(0)} = 0$
the second
order tachyon fluctuations $\delta^{(2)} \sigma_A$ decouple from the inflaton and gravitational
fluctuations up to second order and hence we do not need to solve for
$\delta^{(2)} \sigma_A$. Note also that the Klein-Gordon equation for the inflaton fluctuations
is not necessary to close the system. In this section we
sometimes insert the slow roll parameters $\epsilon$ and $\eta$ explicitly though we do not
yet assume that they are small. We also introduce the shorthand notation
$m_{\varphi}^2 \equiv \partial^2 V / \partial \varphi^2$.
The second order $(0,0)$ equation is
\begin{eqnarray}
&& 3 \mathcal{H} \psi'^{(2)} +(3-\epsilon)\mathcal{H}^2 \phi^{(2)} - \partial^{k}\partial_{k} \psi^{(2)} \nonumber \\
&=& -\frac{\kappa^2}{2} \left[ \varphi'_0 \delta^{(2)} \varphi'
+ a^2 \frac{\partial V}{\partial \varphi } \delta^{(2)} \varphi \right]
+ \Upsilon_1
\label{00}
\end{eqnarray}
where $\Upsilon_1$ is constructed entirely from first order quantities. Dividing
$\Upsilon_1$ into inflaton and tachyon contributions we have
\[
\Upsilon_1 = \Upsilon_1^{\varphi} + \Upsilon_1^{\sigma}
\]
where
\begin{eqnarray}
\Upsilon_1^{\varphi} &=& 4(3-\epsilon)\mathcal{H}^2\left(\phi^{(1)}\right)^2
+ 2\kappa^2 \varphi'_0 \phi^{(1)} \delta^{(1)} \varphi' \nonumber \\
&-& \frac{\kappa^2}{2}\left( \delta^{(1)} \varphi' \right)^2
- \frac{\kappa^2}{2} a^2 m^2_{\varphi} \left( \delta^{(1)} \varphi \right)^2
- \frac{\kappa^2}{2} \left(\vec{\nabla} \delta^{(1)} \varphi \right)^2 \nonumber \\
&+& 8 \phi^{(1)} \partial^{k}\partial_{k} \phi^{(1)}
+ 3\left( \phi'^{(1)} \right)^2
+ 3\left( \vec{\nabla}\phi^{(1)}\right)^2
\label{Upsilon1varphi}
\end{eqnarray}
and
\begin{eqnarray}
\Upsilon_1^{\sigma} &=& - \frac{\kappa^2}{2} \left[ \delta^{(1)} \sigma'_A \delta^{(1)}\sigma'^A
+ \partial_i\delta^{(1)}\sigma_A\partial^i\delta^{(1)}\sigma^A \right. \nonumber \\
&+& \left. a^2 \frac{\partial^2 V}{\partial\sigma_A\partial\sigma_B}
\delta^{(1)}\sigma_A\delta^{(1)}\sigma_B \right].
\label{Upsilon1sigma}
\end{eqnarray}
The divergence of the second order $(0,i)$ equation is
\begin{equation}
\label{0i}
\partial^{k}\partial_{k} \left[ \psi'^{(2)} + \mathcal{H} \phi^{(2)} \right] =
\frac{\kappa^2}{2} \varphi'_0 \partial^{k}\partial_{k} \delta^{(2)} \varphi + \Upsilon_2
\end{equation}
where $\Upsilon_2 = \Upsilon_2^{\varphi} + \Upsilon_2^{\sigma}$ is constructed entirely from
first order quantities. The inflaton part is
\begin{eqnarray}
\Upsilon_2^{\varphi}
&=&
2\kappa^2 \varphi'_0 \partial_i \left( \phi^{(1)} \partial^i \delta^{(1)} \varphi \right)
+
\kappa^2 \partial_i \left( \delta^{(1)} \varphi' \partial^i \delta^{(1)} \varphi \right) \nonumber \\
&-& 8 \partial_i \left( \phi^{(1)} \partial^i \phi'^{(1)} \right)
- 2 \partial_i \left( \phi'^{(1)} \partial^i \phi^{(1)} \right)
\label{Upsilon2varphi}
\end{eqnarray}
and the tachyon part is
\begin{equation}
\label{Upsilon2sigma}
\Upsilon_2^{\sigma} =
\kappa^2 \partial_i \left( \delta^{(1)} \sigma'_A \partial^i \delta^{(1)} \sigma^A \right).
\end{equation}
The trace of the second order $(i,j)$ equation is
\begin{eqnarray}
&& 3\psi''^{(2)} + \partial^{k}\partial_{k} \left[ \phi^{(2)} - \psi^{(2)} \right]
+ 6 \mathcal{H} \psi'^{(2)} \nonumber \\
&+& 3 \mathcal{H} \phi'^{(2)} + 3(3-\epsilon) \mathcal{H}^2 \phi^{(2)} \nonumber \\
&=& \frac{3\kappa^2}{2}\left[ \varphi'_0 \delta^{(2)} \varphi'
- a^2 \frac{\partial V}{\partial \varphi } \delta^{(2)} \varphi\right] + \Upsilon_3
\label{ij}
\end{eqnarray}
where $\Upsilon_3 = \Upsilon_3^{\varphi} + \Upsilon_3^{\sigma}$ is constructed
entirely from first order quantities. The inflaton part is
\begin{eqnarray}
\Upsilon_3^{\varphi} &=& 12(3-\epsilon)\mathcal{H}^2 \left( \phi^{(1)}\right)^2
- 6 \kappa^2 \varphi'_0 \phi^{(1)} \delta^{(1)}\varphi' \nonumber \\
&+& \frac{3\kappa^2}{2}\left(\delta^{(1)} \varphi'\right)^2
- \frac{3 \kappa^2}{2} a^2 m^2_{\varphi} \left( \delta^{(1)} \varphi \right)^2
- \frac{\kappa^2}{2}\left(\vec{\nabla} \delta^{(1)} \varphi \right)^2 \nonumber \\
&+& 3 \left( \phi'^{(1)}\right)^2
+ 8 \phi^{(1)} \partial^{k}\partial_{k} \phi^{(1)}
+ 24 \mathcal{H} \phi^{(1)} \phi'^{(1)} \nonumber \\
&+& 7 \left( \vec{\nabla} \phi^{(1)} \right)^2
\label{Upsilon3varphi}
\end{eqnarray}
and the tachyon part is
\begin{eqnarray}
\Upsilon_3^{\sigma} &=& \kappa^2 \left[ \frac{3}{2}\delta^{(1)}\sigma'_A\delta^{(1)}\sigma'^A
- \frac{1}{2} \partial_i\delta^{(1)}\sigma_A\partial^i\delta^{(1)}\sigma^A\right. \nonumber \\
&-& \left. \frac{3}{2}a^2 \frac{\partial^2 V}{\partial\sigma_A\partial\sigma_B}
\delta^{(1)}\sigma_A\delta^{(1)}\sigma_B \right]
\label{Upsilon3sigma}
\end{eqnarray}
The derivation of the master equation which was presented in appendix B of \cite{BC} follows
here unmodified except for the new definitions of $\Upsilon_1^\sigma$, $\Upsilon_2^\sigma$
and $\Upsilon_3^\sigma$. The master equation is
\begin{equation}
\label{master}
\phi''^{(2)} + 2(\eta-\epsilon)\mathcal{H} \phi'^{(2)}
+ \left[ 2(\eta-2\epsilon)\mathcal{H}^2 - \partial^{k}\partial_{k} \right] \phi^{(2)} = J
\end{equation}
where the source is
\begin{eqnarray}
J &=& \Upsilon_1 - \Upsilon_3 + 4\triangle^{-1}\Upsilon'_2
+ 2(1-\epsilon+\eta)\mathcal{H} \triangle^{-1}\Upsilon_2 \nonumber \\
&+& \triangle^{-1} \gamma'' - (1 + 2\epsilon-2\eta)\mathcal{H}\triangle^{-1}\gamma'. \label{source}
\end{eqnarray}
and the quantity $\gamma$ is defined as
\begin{equation}
\label{gamma}
\gamma = \Upsilon_3 - 3 \triangle^{-1} \Upsilon'_2 - 6 \mathcal{H} \triangle^{-1} \Upsilon_2
\end{equation}
We can split the source into tachyon and inflaton contributions
$J = J^{\varphi} + J^{\sigma}$ in the obvious manner, by taking the tachyon
and inflaton parts of $\Upsilon_1,\Upsilon_2,\Upsilon_3,\gamma$.
In appendix B we prove the identity (see eqn. \ref{gamma_sigma2})
\begin{eqnarray*}
\gamma_\sigma &=&
-\frac{\kappa^2}{2}\left(\partial_i\delta^{(1)}\sigma_A\partial^i\delta^{(1)}\sigma^A\right)
\nonumber \\
&-& 3\kappa^2\triangle^{-1} \partial_i\left(\partial^{k}\partial_{k}\delta^{(1)}\sigma_A\partial^i\delta^{(1)}\sigma^A\right)
\end{eqnarray*}
which is analogous to the result for a real tachyon field, derived in \cite{BC}.
We now proceed to derive the tachyon curvature perturbation.
The derivation of $\zeta^{(2)}_\sigma$ presented in \cite{BC} follows unmodified except,
of course, for the change in the definitions of $\Upsilon_1^\sigma$, $\Upsilon_2^\sigma$,
$\Upsilon_3^\sigma$ and $\gamma_\sigma$. From this point onwards we assume
that $\epsilon,|\eta| \ll 1$. The leading contribution to the tachyon curvature
perturbation is
\begin{eqnarray*}
\zeta_\sigma^{(2)} &\cong& \frac{1}{\epsilon}\int_{\tau_i}^{\tau}d\tau'
\left[ - \frac{\Upsilon_1^\sigma}{\mathcal{H}(\tau')} +
\frac{1}{3}\frac{\Upsilon_3^\sigma}{\mathcal{H}(\tau')}\right.
\\
&-& \left. \frac{2}{3}\frac{\mathcal{H}(\tau')^2}{\mathcal{H}(\tau)^3}\Upsilon_3^{\sigma}\right]
\end{eqnarray*}
Now, using equations (\ref{Upsilon1sigma}) and (\ref{Upsilon3sigma}) we can write this in
terms of the tachyon fluctuation $\delta^{(1)}\sigma$ as
\begin{eqnarray}
\zeta^{(2)}_{\sigma} &\cong& \frac{\kappa^2}{\epsilon}\int_{-1/a_iH}^{\tau}d\tau' \left[
\frac{\delta^{(1)}\sigma'_A\delta^{(1)}\sigma'^A}{\mathcal{H}(\tau')} \right.
\nonumber \\
&-& \frac{\mathcal{H}(\tau')^2}{\mathcal{H}(\tau)^3}\left(
\delta^{(1)}\sigma'_A\delta^{(1)}\sigma'^A \right. \nonumber \\
&-& \left. \left. a^2 \frac{\partial^2 V}{\partial\sigma_A\partial\sigma_B}
\delta^{(1)}\sigma_A\delta^{(1)}\sigma_B \right) \right]
\label{zeta_sigma}
\end{eqnarray}
The corrections to (\ref{zeta_sigma}) are either total gradients or are subleading in the
slow roll expansion. In deriving (\ref{zeta_sigma}) we have restricted ourselves to the
preheating phase during which the fluctuations $\delta^{(1)}\sigma_A$ grow exponentially.
Using (\ref{zeta_sigma}) the second order tachyon curvature perturbation can be computed once
the fluctuations $\delta^{(1)}\sigma_A$ are determined. The first order tachyon fluctuations
are described by the perturbed Klein-Gordon equation
\begin{equation}
\label{multi_comp_KG}
\delta^{(1)} \sigma''_A + 2\mathcal{H} \delta^{(1)} \sigma'_A - \partial^{k}\partial_{k}\delta^{(1)}\sigma_A
+ a^2 \frac{\partial^2 V}{\partial\sigma_A\partial\sigma_B}\delta^{(1)} \sigma_B = 0
\end{equation}
\subsection{Complex Tachyon Mode Functions}
At this point we restrict our attention to the case with $M=2$ and the potential
\begin{eqnarray}
\label{complex_pot}
V &=&\frac{\lambda}{4}\left(\sigma_A\sigma^A - v^2\right)^2
+ \frac{g^2}{2}\varphi^2 \sigma_A\sigma^A \nonumber \\
&& + \frac{m_\varphi^2}{2}\varphi^2
\end{eqnarray}
For $\sigma^{(0)}_A = 0$ the mass matrix is diagonal
\begin{eqnarray*}
\frac{\partial^2 V}{\partial \sigma_A \partial \sigma_B}
&=& \left( -\lambda v^2 + g^2 \varphi_0^2 \right) \delta_{AB} \\
&\equiv& m_\sigma^2 \delta_{AB}
\end{eqnarray*}
so that the tachyon fluctuations with $A=1$ and $A=2$ evolve independently
(see eqn. \ref{multi_comp_KG}).
As previously the quantum mechanical solutions $\delta^{(1)}\sigma_A$ are written in terms of
annihilation and creation operators $a_k^A,\ a_k^{A\dagger}$ in the usual
way
\beq
\label{complex_quantum}
\delta^{(1)} \sigma_A(x) = \int {d^{\,3}k\over (2\pi)^{3/2}}
\,a_k^A \, \xi_k(t)\, e^{ikx}+ {\rm h.c.}
\eeq
Both components $A=1$ and $A=2$ have the same time dependence owing to the
fact that the mass matrix is proportional to the identity. The $\xi_k$ in (\ref{complex_quantum}) are thus
identical to the solutions of (\ref{mode}), which we have already studied.
\subsection{The End of Symmetry Breaking}
For a multi-component tachyon the condition defining $N_\star$
must be modified as $\langle \delta^{(1)}\sigma_A\delta^{(1)}\sigma^A \rangle(N=N_\star) = v^2 / 4$
which, for the case $M=2$, changes (\ref{end_of_inflation}) to
\[
\left. \int \frac{d^3k}{(2\pi)^3}|\xi_k|^2 \right|_{N=N_\star} = \frac{v^2}{8}
\]
\subsection{Tachyon Curvature Perturbation}
For the potential (\ref{complex_pot}) the tachyon curvature perturbation
$\zeta^{(2)}_\sigma$ decomposes into a sum of terms
\[
\zeta^{(2)}_\sigma = \sum_{A=1,2} \zeta^{(2)}_{A}
\]
where $\zeta^{(2)}_A$ is the contribution to $\zeta^{(2)}_\sigma$ coming from $\sigma_A$.
Consider, as an example, the spectrum of the tachyon curvature perturbation
\begin{eqnarray*}
\langle\zeta^{(2)}_{\sigma,k_1}\zeta^{(2)}_{\sigma,k_2}\rangle &=&
\langle\zeta^{(2)}_{1,k_1}\zeta^{(2)}_{1,k_2}\rangle +
\langle\zeta^{(2)}_{2,k_1}\zeta^{(2)}_{2,k_2}\rangle \\
&+& \langle\zeta^{(2)}_{1,k_1}\zeta^{(2)}_{2,k_2}\rangle +
\langle\zeta^{(2)}_{1,k_2}\zeta^{(2)}_{2,k_1}\rangle
\end{eqnarray*}
Because the annihilation/creation operators $a^1_k$ and $a^2_k$ are independent
the cross-terms on the last line do not contribute to the connected part of the
correlation function. This means that
\[
\langle\zeta^{(2)}_{\sigma,k_1}\zeta^{(2)}_{\sigma,k_2}\rangle =
2\, \langle\zeta^{(2)}_{1,k_1}\zeta^{(2)}_{1,k_2}\rangle
\]
The quantity $\langle\zeta^{(2)}_{1,k_1}\zeta^{(2)}_{1,k_2}\rangle =
\langle\zeta^{(2)}_{2,k_1}\zeta^{(2)}_{2,k_2}\rangle$ will be identical
to the $\langle\zeta^{(2)}_{\sigma,k_1}\zeta^{(2)}_{\sigma,k_2}\rangle$
which we have already computed.
We see, then, that the effect of having a complex tachyon field
(as opposed to a real field) is to multiply $f_L$ and $f_{NL}$
by a factor of $2$ and also to slightly reduce $N_\star$. The net
change in $f_L$, $f_{NL}$ is order unity and the new exclusion plot
is difficult to distinguish visually from figure \ref{figN}.
This justifies our previous claims that our constraints do not change significantly
when one considers a complex tachyon field.
\section{Conclusions}
\label{VII}
In this paper we have studied the evolution of the second order curvature perturbation
during tachyonic preheating at the end of hybrid inflation. We have found that, depending
on the values of certain model parameters, two interesting effects are possible:
\begin{itemize}
\item Preheating generates a scale-invariant contribution to the curvature perturbation. In
this case significant nongaussianity can be generated during preheating and the model
is even constrained by producing too high a level of nongaussianity.
\item Preheating generates a nonscale-invariant contribution to the curvature perturbation
with spectral index $n=4$. In this case the strongest constraint comes from the
distortion of the power spectrum and no significant nongaussianity can be produced.
\end{itemize}
In both cases one typically requires fairly small values of the dimensionless couplings
$g,\lambda$ in order to obtain a strong effect. Note that a small coupling $g$ does not
require fine tuning in the technical sense, since $g^2$ is only multiplicatively
renormalized: $\beta(g^2) \sim \mathcal{O}(g^2\lambda,g^4) / (16\pi^2)$. That is, if $g$
is small at tree level then loop corrections do not change its effective value significantly.
We have applied our constraints on hybrid inflation to several popular models: brane inflation,
D-term inflation and F-term inflation. In the case of brane inflation we have found that
significant nongaussianity from preheating is possible for sufficiently small values of the
warp factor. For both D- and F-term inflation we have shown that no nongaussianity is produced
during preheating, however, we still put interesting constraints on the model due to the
distortion of the spectrum by nonscale-invariant fluctuations.
We have also generalized the results of \cite{BC} to the case of a complex tachyon field,
confirming our previous claims that this modification does not significantly alter our
exclusion plots.
We should note that the model of hybrid inflation considered here always gives a small blue
tilt to the spectral index, $n > 1$, which is disfavoured by recent data \cite{WMAP3}.
One avenue for future study \cite{inprog} is to generalize our results to the case of
inverted hybrid inflation \cite{inverted} which always gives $n < 1$.
\bigskip{\bf Acknowledgments.}
This work was supported by NSERC of Canada and FQRNT of Qu\'ebec.
We are grateful to M.\ Sakellariadou for helpful discussions and
suggestions concerning our results for the complex tachyon.
We thank L.\ Leblond for alerting us to an error in the original
manuscript.
We also wish to thank R.\ Brandenberger, C.\ Byrnes, K.\ Dasgupta,
J.\ Garcia-Bellido, A.\ Linde, J.\ Magueijo, G.\ D.\ Moore, G.\ Rigopoulos,
T.\ Riotto, P.\ Shellard, G.\ Shiu, B.\ van Tent and F.\ Vernizzi for enlightening
discussions and correspondence.
\renewcommand{\theequation}{A-\arabic{equation}}
\setcounter{equation}{0}
\section*{APPENDIX A: The Matching Time $N_k$}
\label{A}
The matching time $N_k$ which determines the boundary between large- and small-scale behaviour
of the mode functions (\ref{solns}) is determined by the transcendental equation
\begin{equation}
|N_k|e^{2 N_k} = \frac{\hat{k}^2}{c}
\end{equation}
The solutions may be written exactly in terms of the branches
of the Lambert-W function. In the region $\hat{k} < \sqrt{c / (2 e)}$ the
solution is triple-valued (see figure 1 of \cite{BC}) and may be written as
\begin{equation}
\label{Nk_smallk}
N_k = \left\{ \begin{array}{lll}
\frac{1}{2} W_{-1}\left(-\frac{2\hat{k}^2}{c}\right) & \mbox{for the branch with
$N_k < -1$};\\
\frac{1}{2} W_{0}\left(-\frac{2\hat{k}^2}{c}\right) &
\mbox{for the branch with
$-1 < N_k < 0$};\\
\frac{1}{2} W_{0}\left(+\frac{2\hat{k}^2}{c}\right) & \mbox{for the branch with
$N_k > 0$}.
\end{array} \right.
\end{equation}
In the region $\hat{k} > \sqrt{c / (2e)}$ the solution is single valued and can be written
as
\begin{equation}
\label{Nk_largek}
N_k = \frac{1}{2} W_0\left(+\frac{2\hat{k}^2}{c}\right)
\end{equation}
One may derive some asymptotic expressions for $N_k$ in various regions of interest.
When $|N_k| \gg 1$ we have
\begin{equation}
\label{Nk_asym_large}
N_k \cong \ln\left(\frac{\hat{k}}{\sqrt{c}}\right)
\end{equation}
which describes $N_k$ at $\hat{k} \gg \sqrt{c / (2e)}$ and also the lower branch of $N_k$
at $\hat{k} \ll \sqrt{c / (2e)}$. For $\hat{k} \mbox{\raisebox{-.6ex}{~$\stackrel{<}{\sim}$~}} \sqrt{c / (2e)}$ there are two more
branches of the solution with approximate behaviour
\begin{equation}
\label{Nk_asym_small}
N_k \cong \pm\,\frac{\hat{k}^2}{c}
\end{equation}
In our analysis we have used the approximation that $N_k$ is a single-valued function,
described by
\begin{eqnarray*}
N_k^{\mathrm{s.v.}} &=& \frac{1}{2}\Theta\left(\sqrt{c/(2e)}-\hat{k}\right)
W_{-1}\left(-\frac{2\hat{k}^2}{c}\right) \\
&+& \frac{1}{2} \Theta\left(\hat{k}-\sqrt{c/(2e)}\right)
W_0\left(+\frac{2\hat{k}^2}{c}\right)
\end{eqnarray*}
where $\Theta(x)$ is the Heaviside step function. We have verified both numerically
\cite{BC} and analytically that the single-valued approximation does not significantly
alter our results.
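The single-valued approximation $N_k^{\mathrm{s.v.}}$ above is straightforward to evaluate numerically. The following Python sketch (the choice $c=1$ and the sample values of $\hat{k}$ are purely illustrative assumptions) uses \texttt{scipy.special.lambertw} and checks the defining relation $|N_k|e^{2N_k}=\hat{k}^2/c$ on both branches:

```python
import numpy as np
from scipy.special import lambertw

def Nk_single_valued(k_hat, c=1.0):
    """Single-valued matching time solving |N_k| exp(2 N_k) = k_hat^2 / c.

    The branch W_{-1} is used for k_hat < sqrt(c/(2e)), where N_k < -1,
    and the principal branch W_0 otherwise, where N_k > 0."""
    k_crit = np.sqrt(c / (2.0 * np.e))
    if k_hat < k_crit:
        return 0.5 * lambertw(-2.0 * k_hat**2 / c, -1).real
    return 0.5 * lambertw(+2.0 * k_hat**2 / c, 0).real

# check the defining transcendental equation on both branches
for k in (0.1, 2.0):
    N = Nk_single_valued(k)
    assert abs(abs(N) * np.exp(2.0 * N) - k**2) < 1e-8
```

Substituting $N_k = \frac{1}{2}W(\pm 2\hat{k}^2/c)$ into $|N_k|e^{2N_k}$ and using $W(x)e^{W(x)}=x$ recovers $\hat{k}^2/c$ exactly, which is what the loop verifies.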
\renewcommand{\theequation}{B-\arabic{equation}}
\setcounter{equation}{0}
\section*{APPENDIX B: An Identity Concerning $\gamma_\sigma$}
\label{B}
In this appendix we derive an identity concerning the tachyon source term
$\gamma_\sigma$ (\ref{gamma}):
\[
\gamma_\sigma = \Upsilon_3^\sigma - 3\triangle^{-1}\partial_\tau\Upsilon_2^\sigma
- 6\mathcal{H}\triangle^{-1}\Upsilon_2^\sigma.
\]
Using equations (\ref{Upsilon2sigma}) and (\ref{Upsilon3sigma}) we can write this
\begin{eqnarray*}
\gamma_\sigma &=& \kappa^2\triangle^{-1}\left[\frac{3}{2}\partial^{k}\partial_{k}(\delta^{(1)}\sigma'_A\delta^{(1)}\sigma'^A) \right. \\
&-& \frac{1}{2}\partial^{k}\partial_{k}(\partial_i\delta^{(1)}\sigma_A\partial^i\delta^{(1)}\sigma^A) \\
&-& \frac{3}{2}a^2 \frac{\partial^2 V}{\partial\sigma_A\partial\sigma_B}
\partial^{k}\partial_{k}(\delta^{(1)}\sigma_A\delta^{(1)}\sigma_B) \\
&-& 3\partial_\tau\partial_i(\delta^{(1)}\sigma'_A\partial^i\delta^{(1)}\sigma^A) \\
&-& \left. 6\mathcal{H}\partial_i(\delta^{(1)}\sigma'_A\partial^i\delta^{(1)}\sigma^A) \right]
\end{eqnarray*}
and, after some algebra, we have
\begin{eqnarray}
&& \gamma_\sigma = \kappa^2\triangle^{-1}\left[
-\frac{1}{2}\partial^{k}\partial_{k}(\partial_i\delta^{(1)}\sigma_A\partial^i\delta^{(1)}\sigma^A) \right. \label{intermediate}
\\
&-& 3 \partial_i\left(\delta^{(1)}\sigma''_A + 2\mathcal{H}\delta^{(1)}\sigma'_A +
a^2 \frac{\partial^2 V}{\partial\sigma_A\partial\sigma_B} \delta^{(1)}\sigma_B\right)
\partial^i\delta^{(1)}\sigma^A \nonumber \\
&-& \left. 3 \left(\delta^{(1)}\sigma''_A + 2\mathcal{H}\delta^{(1)}\sigma'_A
+ a^2 \frac{\partial^2 V}{\partial\sigma_A\partial\sigma_B}\delta^{(1)}\sigma_B\right)
\partial^{k}\partial_{k}\delta^{(1)}\sigma^A \right] \nonumber
\end{eqnarray}
In deriving this equation we have used the fact that
\[
\frac{\partial^2 V}{\partial\sigma_A\partial\sigma_B}
= \frac{\partial^2 V}{\partial\sigma_B\partial\sigma_A}
\]
which follows from the $O(M)$ symmetry of the theory.
The last two lines of (\ref{intermediate}) can be simplified using the equation of motion
for the tachyon fluctuation (\ref{multi_comp_KG}) which gives
\begin{eqnarray}
\gamma_\sigma
&=&
-\frac{\kappa^2}{2}\left(\partial_i\delta^{(1)}\sigma_A\partial^i\delta^{(1)}\sigma^A\right)
\nonumber \\
&-& 3\kappa^2\triangle^{-1} \partial_i\left(\partial^{k}\partial_{k}\delta^{(1)}\sigma_A\partial^i\delta^{(1)}\sigma^A\right)
\label{gamma_sigma2}
\end{eqnarray}
\section{Introduction}
The classical P\'olya urn was introduced by P\'olya and
Eggenberger \cite{Polya} describing contagious diseases.
The first model is as
follows: An urn contains balls of two colors at the start, white
and black. At each step, one picks a ball randomly and returns it
to the urn together with a ball of the same color.
Afterwards this model was generalized and it has become a simple
tool to describe several phenomena such as finance, clinical trials
(see \cite{Pages}, \cite{Wei}), biology (see \cite{bio}), computer
science, the internet (see
\cite{Mahmoud},\cite{Goldman}), etc. \\
Recently, H. Mahmoud, M.R. Chen, C.Z. Wei, M. Kuba and H. Sulzbach
\cite{Kuba-Mahmoud-Panholzer,Chen-Kuba,Chen-Wei,kuba-Zulzbach,Kuba-mahmoud,Kuba-Mahmoud2},
have focused on the multidrawing urn. Instead of picking a ball,
one picks a sample of $m$ balls ($m\geq 1$), say $l$ white and
$m-l$ black balls. The sample is returned to the urn together
with $a_{m-l}$ white and $b_{l}$ black balls, where $a_l$ and
$b_l, 0\leq l\leq m$ are integers. At first, they treated two
particular cases when
\{$a_{m-l}=c\times l \quad \text{and}\quad
b_{m-l}=c\times (m-l)$\} and when \{$a_{m-l}=c\times (m-l)$
\quad\text{and}\quad $b_{m-l}=c\times l$\}, where $c$ is a
positive constant. By different methods as martingales and moment
methods, the authors described the asymptotic behavior of the urn
composition. When considering the general case and in order to
ensure the existence of a martingale, they
supposed that $W_n$, the number of white balls in
the urn after $n$ draws, satisfies the affinity condition i.e,
there exist two deterministic sequences $(\alpha_n)$ and
$(\beta_n)$ such that, for all $n\geq 0$,
$\mathbb{E}[W_{n+1}|\mathcal{F}_n]=\alpha_n W_{n}+\beta_n$. Under
this condition, the authors focused on small and large index urns.
Later, the affinity condition was removed in the work of C.
Mailler, N. Lasmer and S. Olfa \cite{C.N.O}, who generalized
this model and considered the case of more than two
colors.\\
In the present paper, we deal with an unbalanced urn model, which
which has not been sufficiently addressed in the literature. It was
mainly dealt with in the works of R. Aguech \cite{R.Aguech}, S.
Janson \cite{S. Janson} and H. Renlund \cite{Renlund1, Renlund2}.
In \cite{R.Aguech} and \cite{S. Janson}, the authors dealt with a
model with a single pick, whereas in \cite{Renlund1,Renlund2} the
author considered a model with two picks and, under some
conditions, described the asymptotic behavior of the urn
composition.
In this paper, we aim to give a generalization of a recent work
\cite{A.L.O}. We deal with an unbalanced urn model with random
addition. We consider an urn containing two different colors white
and blue. We suppose that the urn is nonempty at time 0. Denote
by $W_n$ (resp. $B_n$) the number of white balls (resp. blue
balls) and by $T_n$ the total number of balls in the urn at time
$n$. Let $(X_n)_{n\geq 0}$ and $(Y_n)_{n\geq 0}$ be sequences of strictly
positive, independent, identically distributed discrete
random variables with finite means and variances. The model we
study is defined as follows: at each discrete time step, we pick out a
sample of $m$ balls from the urn (we suppose that $T_0=W_0+B_0\geq
m$) and, according to the composition of the sample, we return the
sample together with $Q_n(\xi_n,m-\xi_n)^t$ additional balls, where $Q_n$ is a $2\times
2$ matrix depending on the variables $X_n$ and $Y_n$ and $\xi_n$
is the number of white balls in the $n^{th}$ sample.
Let $(\mathcal{F}_n)_{n\ge 0}$ be the $\sigma$-field
generated by the first $n$ draws. We summarize
the evolution of
the urn by
the recurrence
\begin{equation}\label{recurrence}
\begin{pmatrix}
W_{n} \\
B_{n} \\
\end{pmatrix}\stackrel{\mathcal D}{=}
\begin{pmatrix}
W_{n-1} \\
B_{n-1} \\
\end{pmatrix}+Q_n
\begin{pmatrix}
\xi_{n} \\
m-\xi_{n} \\
\end{pmatrix}.
\end{equation}
Note that, with these notations, we have
\begin{equation*}\mathbb{P}[\xi_n=k|\mathcal{F}_{n-1}]=\displaystyle\frac{\binom {W_{n-1}} k \binom {B_{n-1}} {m-k}}{\binom {T_{n-1}} m}.\end{equation*}
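In other words, conditionally on $\mathcal{F}_{n-1}$ the number of white balls in the sample is hypergeometric. As a quick sanity check (the urn sizes below are arbitrary illustrative values), one can compare the displayed formula with \texttt{scipy.stats.hypergeom}:

```python
from math import comb
from scipy.stats import hypergeom

# illustrative sizes: W white balls, B blue balls, sample of m
W, B, m = 8, 12, 5
T = W + B

for k in range(m + 1):
    # pmf from the displayed formula
    p_formula = comb(W, k) * comb(B, m - k) / comb(T, m)
    # same law via scipy's hypergeometric parameterization (M=T, n=W, N=m)
    p_scipy = hypergeom.pmf(k, T, W, m)
    assert abs(p_formula - p_scipy) < 1e-12
```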
The paper is organized as follows. In Section \ref{Main results},
we give the main results of the paper. In the first paragraph of Section \ref{Proofs}, we develop
Theorem 1 of \cite{Renlund1} and apply it to our urn model. The rest
of this section is devoted to the proofs of the theorems.
\textbf{Notation:} For a random variable $R$, we denote
by $\mu_R=\mathbb{E}(R)$ and
$\sigma_R^2=\mathbb{V}ar(X)$. Note that $\mu_X, \mu_Y ,\sigma^2_X$
and $\sigma^2_Y$ are finite.\\
\section{Main Results}\label{Main results}
\begin{thm}\label{thmXopp}
Consider the urn model evolving by the matrix $Q_n=
\begin{pmatrix}
0 & X_n \\
X_n & 0 \\
\end{pmatrix}$. We have the following results:
\begin{enumerate}
\item \begin{equation}\label{asymp-T_n}T_n\stackrel{a.s}{=}m\mu_X n
+o(\sqrt{n}\ \ln(n)^\delta),\end{equation} \begin{equation}
W_n\stackrel{a.s}{=}\frac{m\mu_X }{2}n+o(\sqrt{n}\
\ln(n)^{\delta})\quad \text{and}\quad
B_n\stackrel{a.s}{=}\frac{m\mu_X }{2}n+o(\sqrt{n}\
\ln(n)^{\delta});\quad\delta>\frac{1}{2}.\end{equation}
\item \begin{equation}\frac{W_n-\frac{1}{2}T_n}{\sqrt{n}}
\stackrel{\mathcal{L}}{\longrightarrow}\mathcal{N}\Big(0,\frac{m(\sigma_X^2+\mu_X^2)}{12}\Big).\end{equation}
\item \begin{equation}\frac{W_n-\mathbb{E}(W_n)}{\sqrt{n}}\stackrel{\mathcal{L}}{\longrightarrow}
\mathcal{N}\Big(0,\frac{m(\sigma_X^2+\mu_X^2)+m^2\sigma^2_X}{12}\Big).\end{equation}
\end{enumerate}
\end{thm}
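The law of large numbers in Theorem \ref{thmXopp} is easy to illustrate by simulation. The sketch below follows the recurrence (\ref{recurrence}) with the matrix $Q_n$ above; the choices $m=3$ and $X_n$ uniform on $\{1,2,3\}$ (so $\mu_X=2$) are illustrative assumptions, not part of the model:

```python
import numpy as np

def simulate_opposite_urn(n_steps, m=3, W0=5, B0=5, seed=1):
    """Urn evolving by Q_n = [[0, X_n], [X_n, 0]]: drawing xi white balls
    among m adds X_n*(m - xi) white and X_n*xi blue balls."""
    rng = np.random.default_rng(seed)
    W, B = W0, B0
    for _ in range(n_steps):
        xi = rng.hypergeometric(W, B, m)  # white balls in the sample
        X = rng.integers(1, 4)            # X_n uniform on {1, 2, 3}
        W += X * (m - xi)
        B += X * xi
    return W, B

W, B = simulate_opposite_urn(4000)
# the theorem predicts W_n, B_n ~ (m mu_X / 2) n, hence W/(W+B) -> 1/2
assert abs(W / (W + B) - 0.5) < 0.05
```

Since each step adds exactly $mX_n$ balls in total, one can also check that $T_n/n$ is close to $m\mu_X = 6$ in this run.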
\begin{thm}\label{thmXself}
Consider the urn model evolving by the matrix $Q_n=
\begin{pmatrix}
X_n & 0 \\
0 & X_n \\
\end{pmatrix}$. There exists a positive random variable $\tilde W_\infty$, such that
\begin{equation}T_n\stackrel{a.s}{=}m\mu_X n
+o(\sqrt{n}\ \ln(n)^\delta),\quad W_n\stackrel{a.s}{=}\tilde
W_\infty n +o(n) \quad\mbox{and}\;\;B_n\stackrel{a.s}{=}
(m\mu_X-\tilde W_\infty)n+o(n).
\end{equation}
\end{thm}
\begin{rmq}
The random variable $\tilde W_\infty$ is absolutely continuous
whenever $X$ is bounded.
\end{rmq}
\begin{thm}\label{thmXYopp} Consider the urn model evolving by
the matrix $Q_n=
\begin{pmatrix}
0 & X_n \\
Y_n & 0 \\
\end{pmatrix}.$ Let $z:=\frac{\sqrt{\mu_X}}{\sqrt{\mu_X}+\sqrt{\mu_Y}}$. We
have the following results:
\begin{enumerate}
\item \begin{equation}\label{SL-Total}T_n\stackrel{a.s}{=}m\sqrt{\mu_X}\sqrt{\mu_Y}\
n+o(n),\end{equation}
\begin{equation} W_n\stackrel{a.s}{=}m\sqrt{\mu_X}\sqrt{\mu_Y}\ z\ n+o(n)\quad\text{and}\quad B_n\stackrel{a.s}{=}m\sqrt{\mu_X}\sqrt{\mu_Y}(1-z)\ n+o(n).\end{equation}
\item \begin{equation}\frac{W_n-z T_n}{\sqrt{n}}\stackrel{\mathcal{L}}{\longrightarrow}\mathcal{N}\Big(0,\frac{ G(z)}{3}\Big),\end{equation}
where, \begin{equation*} G(x)=\sum_{i=0}^4a_ix^i,\end{equation*}
with
\begin{eqnarray*}a_0=m^2(\sigma^2_X+\mu_X^2)&,&a_1=m(1-2m)(\sigma_X^2+\mu_X^2),\\
a_2=3m(m-1)(\sigma_X^2+\mu_X^2)-2m(m-1)\mu_X\mu_Y &,&
a_3=m\mathbb{E}(X-Y)^2-2(m^2-m)\bigl(\sigma_X^2+\mu_X^2-\mu_X\mu_Y\bigr)\\
\text{and}\quad a_4=m(m-1)\mathbb{E}(X-Y)^2.\end{eqnarray*}
\end{enumerate}
\end{thm}
\begin{thm}\label{thmXYself}
Consider the urn evolving by the matrix $Q_n=
\begin{pmatrix}
X_n & 0 \\
0 & Y_n \\
\end{pmatrix}.$ We have the following results:
\begin{enumerate}
\item If $\mu_X > \mu_Y$,
\begin{equation}
T_n\stackrel{a.s}{=}m\mu_Xn+o(n),\quad
W_n\stackrel{a.s}{=}m\mu_Xn+o(n)\quad \text{and}\quad
B_n\stackrel{a.s}{=}B_{\infty}n^\rho+o(n^\rho),
\end{equation}
where $\rho=\frac{\mu_Y}{\mu_X}$ and $B_{\infty}$ is a positive
random variable.
\item If $\mu_X=\mu_Y$,
\begin{equation}T_n\stackrel{a.s}{=}m\mu_Xn+o(n),\quad W_n\stackrel{a.s}{=}W_{\infty}n+o(n)\quad \text{and}\quad B_n\stackrel{a.s}{=}(\mu_Xm-W_{\infty})\ n+o(n),\end{equation}
where $W_\infty$ is a positive random variable.
\end{enumerate}
\end{thm}
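The dominance of the white colour when $\mu_X>\mu_Y$ in Theorem \ref{thmXYself} can also be seen numerically. The sketch below (illustrative assumptions: $m=3$, $X_n$ uniform on $\{2,3,4\}$ so $\mu_X=3$, and $Y_n\equiv 1$ so $\rho=\mu_Y/\mu_X=1/3$) is not a proof, only a check of the predicted sublinear growth of $B_n$:

```python
import numpy as np

def simulate_self_urn(n_steps, m=3, W0=5, B0=5, seed=2):
    """Urn evolving by Q_n = diag(X_n, Y_n): drawing xi white balls among m
    adds X_n*xi white and Y_n*(m - xi) blue balls."""
    rng = np.random.default_rng(seed)
    W, B = W0, B0
    for _ in range(n_steps):
        xi = rng.hypergeometric(W, B, m)
        W += rng.integers(2, 5) * xi   # X_n uniform on {2, 3, 4}, mu_X = 3
        B += 1 * (m - xi)              # Y_n = 1 deterministic, mu_Y = 1
    return W, B

W, B = simulate_self_urn(4000)
# with mu_X > mu_Y: W_n ~ m mu_X n grows linearly, B_n ~ n^{1/3} grows
# sublinearly, so the blue fraction B/(W+B) tends to zero
assert B / (W + B) < 0.1
```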
\begin{rmq}
The case when $\mu_X<\mu_Y$ is obtained by interchanging the
colors.
\end{rmq}
\textbf{Example:} Let $m=1$; this particular case was studied by
R. Aguech \cite{R.Aguech}.
Using martingales and branching processes, R. Aguech proved the following results:\\
if $\mu_X>\mu_Y$,
\begin{equation*}W_n=\mu_X n+o(n),\quad B_n=D n^\rho\quad \text{and}\quad T_n=\mu_X n+o(n),\end{equation*}
where $D$ is a positive random variable.\\
If $\mu_X=\mu_Y$,
\begin{equation*}W_n=\mu_X \frac{W}{W+B}n+o(n)\quad \text{and}\quad B_n=\mu_X\frac{B}{W+B}n+o(n),\end{equation*}
where $W$ and $B$ are positive random variables obtained by
embedding some martingales in continuous time.
\section{Proofs}\label{Proofs}
Stochastic approximation algorithms play a crucial role in the
proofs, in order to describe the asymptotic composition of the urn.
As many versions of stochastic approximation exist in the
literature (see \cite{Duflo} for example), we adapt the version of
H. Renlund in \cite{Renlund1, Renlund2}.
\subsection{A basic tool: Stochastic approximation}
\begin{df}\label{def-algo}
A stochastic approximation algorithm $(U_n)_{n\geq 0}$ is a
stochastic process taking values in $[0,1]$ and adapted to a
filtration $\mathcal{F}_n$ that satisfies
\begin{equation}\label{eq:algo_sto}
U_{n+1}-U_n = \gamma_{n+1}\big(f(U_n)+\Delta M_{n+1}\big),
\end{equation}
where $(\gamma_n)_{n\geq 1}$ and $(\Delta M_n)_{n\geq 1}$ are two
$\mathcal F_n$-measurable sequences of random variables, $f$ is a
function from $[0,1]$ to $\mathbb R$ and the following
conditions hold almost surely.
\begin{description}
\item [(i)]$\frac{c_l}{n}\leq \gamma_n \leq \frac{c_u}{n}$,
\item [(ii)]$|\Delta M_n|\leq K_u,$
\item [(iii)]$ |f(U_n)|\leq K_f,$
\item [(iv)]$ \big|\mathbb E[\gamma_{n+1}
\Delta M_{n+1}| \mathcal F_n]\big| \leq K_e \gamma_n^2,$
\end{description}
where the constants $c_l, c_u, K_u, K_f, $ and $K_e$ are positive
real numbers.
\end{df}
\begin{df} Let
$Q_f=\{x; f(x)=0\}.$ A zero $p\in Q_f$ will be called stable if
there exists a neighborhood $\mathcal{N}_p$ of $p$ such that
$f(x)(x-p)<0$ whenever $x\in \mathcal{N}_p\setminus\{p\}.$ If $f$
is differentiable, then $f'(p)$ is sufficient to determine that
$p$ is stable.
\end{df}
\begin{thm}[\cite{Renlund1}]\label{th:renlund}Let $U_n$ be a
stochastic algorithm defined in Equation (\ref{eq:algo_sto}). If
$f$ is continuous, then $\displaystyle\lim_{n\rightarrow +\infty}
U_n$ exists almost surely and is in $Q_f$. Furthermore, if $p$ is
a stable zero, then $\mathbb{P}\Big(U_n\longrightarrow p\Big)>0.$
\end{thm}
\begin{rmq}
The conclusion of Theorem \ref{th:renlund} holds if we replace the
condition $(ii)$ in Definition \ref{def-algo} by the following
condition $\mathbb{E}[\Delta M_{n+1}^2|\mathcal{F}_n]\leq K_u$.
\end{rmq}
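A minimal toy example of Definition \ref{def-algo} illustrates the convergence asserted in Theorem \ref{th:renlund}. All choices below are illustrative assumptions: $\gamma_n = 1/n$, $f(x) = \frac{1}{2} - x$, which has the single stable zero $p = \frac{1}{2}$ since $f(x)(x-\frac{1}{2}) = -(x-\frac{1}{2})^2 < 0$, and bounded mean-zero noise $\Delta M_n = \pm 0.1$:

```python
import numpy as np

# Toy stochastic approximation: gamma_n = 1/n, f(x) = 1/2 - x
# (single stable zero p = 1/2), bounded martingale increments +/- 0.1.
rng = np.random.default_rng(0)
U = 0.9
for n in range(1, 5001):
    gamma = 1.0 / n
    dM = 0.1 * rng.choice([-1.0, 1.0])  # mean zero, |dM| <= 0.1
    # clip to [0,1] so the process stays in the state space
    U = min(1.0, max(0.0, U + gamma * ((0.5 - U) + dM)))

assert abs(U - 0.5) < 0.05  # U_n converges to the stable zero 1/2
```

The conditions (i)-(iv) hold here with $c_l = c_u = 1$, $K_u = 0.1$ and $K_f = \frac{1}{2}$, so the theorem applies.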
\begin{proof}[Proof of Theorem \ref{th:renlund}]
For the convenience of the reader, we adapt the proof of Theorem
\ref{th:renlund} and show that, under the new condition $(ii)$,
namely $\mathbb{E}[\Delta M_{n+1}^2|\mathcal{F}_n]\leq
K_u$, the conclusion remains true. The following lemmas are useful.\\
\begin{lem}\label{V_n converges}
Let $V_n=\sum_{i=1}^n\gamma_i\Delta M_i$. Then, $V_n$ converges
almost surely.
\end{lem}
\begin{proof}
Set $A_i=\gamma_i\Delta M_i$ and $\tilde
A_i=\mathbb{E}[A_i|\mathcal{F}_{i-1}].$ Define the martingale
$C_n=\sum_{i=1}^n(A_i-\tilde A_i),$ then
\begin{eqnarray*}
\mathbb{E}(C_n^2)&\leq&\sum_{i=1}^n\mathbb{E}(A_i^2)=\sum_{i=1}^n\mathbb{E}(\gamma_i^2\Delta M_i^2)\\
&\leq&\sum_{i=1}^n\frac{c_u^2}{i^2}\mathbb{E}(\Delta M_i^2),
\end{eqnarray*}
and since, by assumption, there exists a positive constant $K_u$ such that
$\mathbb{E}[\Delta M_{n+1}^2|\mathcal{F}_n]\leq K_u$, we
conclude that $C_n$ is an $L^2$-martingale and thus converges almost surely.\\
Next, since, by condition $(iv)$, \[\sum_{i\geq
2}|\tilde A_i|\leq\sum_{i\geq
2}\frac{c_u^2}{(i-1)^2}K_e<+\infty,\] the series $\sum_{i\geq
1}A_i$ must also converge almost surely.
\end{proof}
\begin{lem}\label{Q_f}Let $U_\infty$ be the set of accumulation points of
$\{U_n\}$ and $Q_f=\{x; f(x)=0\}$ be the zeros of $f$. Suppose
$f$ is continuous. Then,
\[\mathbb{P}\Big(U_\infty \subseteq Q_f\Big)=1.\]
\end{lem}
\begin{proof}
See \cite{Renlund1}.
\end{proof}
Next, we prove the main result of the theorem.
If $\displaystyle\lim_{n\rightarrow +\infty} U_n$ does not exist, we can find two rational numbers in
the open interval $]\displaystyle\liminf_{n\rightarrow +\infty} U_n, \displaystyle\limsup_{n\rightarrow +\infty} U_n[$.\\
Let $p<q$ be two arbitrary different rational numbers. If we can
show that
\[\mathbb{P}\Big(\{\liminf U_n \leq p\}\cap\{\limsup U_n \geq q\}\Big)=0,\]
then, the existence of the limit will be established and the claim of
the theorem follows from Lemma \ref{Q_f}.\\
For this reason, we need to distinguish two cases, according to whether or not $p$
and $q$ are in the same connected component of $Q_f$.\\
\textbf{Case 1:
$p$ and $q$ are not in the same connected
component of $Q_f$.}\\
See the proof in \cite{Renlund1}.\\
\textbf{Case 2:
$p$ and $q$ are in the same connected component of
$Q_f$.}\\
Let $p$ and $q$ be two arbitrary rational numbers such that $p$ and
$q$ are in the same connected component of $Q_f$.
Assume that $\displaystyle\liminf_{n\rightarrow +\infty} U_n \leq p$ and fix an
arbitrary $\varepsilon$ such that $0< \varepsilon \leq
q-p$.\\
We aim to show that $\displaystyle\limsup_{n\rightarrow +\infty}
U_n\leq q$; i.e., it is sufficient to
show that $\displaystyle\limsup_{n\rightarrow +\infty} U_n\leq p+\varepsilon.$\\
In view of Lemma \ref{V_n converges},
$V_n=\sum_{i=1}^n\gamma_i\Delta M_i$ converges a.s.; hence, there is a
random $N_1> 0$ such that, for $n,m> N_1$, we have
$|V_n-V_m|<\frac{\varepsilon}{4}$ and
$\gamma_n\Delta M_n\leq \frac{\varepsilon}{4}$.\\
Let $N=\max(\frac{4K_f}{\varepsilon},N_1)$. By assumption, there is
some random $n>N$ such that $U_n-p<
\frac{\varepsilon}{2}$.\\
Let
\[\tau_1=\inf\{k\geq n; U_k\geq p\}\quad \text{and}\quad \sigma_1=\inf\{k>\tau_1; U_k<p\},\]
and define, for $n\geq 1,$
\[\tau_{n+1}=\inf \{k>\sigma_n; U_k\geq p\}\quad \text{and}\quad \sigma_{n+1}=\inf \{k>\tau_{n+1}; U_k<p\}.\]
Now, for all $k$ we have
\[U_{\tau_k}=U_{\tau_k-1}+\gamma_{\tau_k-1}(f(U_{\tau_k-1})+\Delta M_{\tau_k}).\]
Recall that $\gamma_{\tau_k-1}f(U_{\tau_k-1})\leq
\frac{K_f}{\tau_{k}-1}\leq \frac{K_f}{n}$; since $n\geq N\geq
\frac{4K_f}{\varepsilon}$, we have
$\gamma_{\tau_k-1}f(U_{\tau_k-1})\leq\frac{\varepsilon}{4}$. It
follows that
\[\gamma_{\tau_k-1}(f(U_{\tau_k-1})+\Delta M_{\tau_k})\leq \frac{K_f}{n}+\frac{\varepsilon}{4}\leq
\frac{\varepsilon}{4}+\frac{\varepsilon}{4}
=\frac{\varepsilon}{2}.\]
Note that $f(x)=0 $
when $x \in [ p,q ]$ ($p$ and $q$ are in $Q_f$). For $j$ such that
$\tau_k+j-1$ is a time before the exit time of the interval
$[p,q]$, we have
\[U_{\tau_k+j}=U_{\tau_k}+V_{\tau_k+j}-V_{\tau_k}.\]
As $|V_{\tau_k+j}-V_{\tau_k}|<\frac{\varepsilon}{4},$ we have
$U_{\tau_k+j}\leq
p+\frac{\varepsilon}{2}+\frac{\varepsilon}{4}\leq p+\varepsilon;$
the process will never exceed $p+\varepsilon$ before
$\sigma_{k+1}$. We conclude that $\sup_{k\geq n} U_k\leq
p+\varepsilon.$\\
To establish that the limit is a stable point, we refer the
reader to \cite{Renlund1} for a detailed proof.
\end{proof}
\begin{thm}[\cite{Renlund2}]\label{clt-renlund}
Let $(U_n)_{n\geq 0}$ be a process satisfying Equation~\eqref{eq:algo_sto} such
that $\displaystyle\lim_{n\to +\infty} U_n = U^\star$. Let $\hat
\gamma_n:= n\gamma_n \hat f(U_{n-1})$ where $\hat f(x) =
\frac{-f(x)}{x-U^\star}$. Assume that $\hat \gamma_n$ converges
almost surely to some limit $\hat \gamma$.
If $\hat\gamma > \frac12$ and $\mathbb E[(n\gamma_n \Delta
M_n)^2|\mathcal F_{n-1}] \to \sigma^2 > 0$, then \[\sqrt n (U_n
-U^\star) \to \mathcal N\Big(0, \frac{\sigma^2}{2\hat\gamma
-1}\Big).\]
\end{thm}
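Before turning to the proofs, the following numerical sketch (ours, not taken from \cite{Renlund1,Renlund2}; the function name and all parameter choices are illustrative) shows the behavior guaranteed by Theorem \ref{th:renlund} for the simple choice $\gamma_n=1/n$, $f(x)=1-2x$ (whose unique stable zero is $\frac12$) and bounded centered noise:

```python
import random

def stochastic_approximation(n_steps=200_000, seed=0):
    """Simulate U_{n+1} = U_n + gamma_n * (f(U_n) + dM_{n+1}) with
    gamma_n = 1/n, f(x) = 1 - 2x and bounded, centered noise dM."""
    rng = random.Random(seed)
    u = 0.9  # arbitrary starting point
    for n in range(1, n_steps + 1):
        noise = rng.uniform(-1.0, 1.0)  # E[dM | F_n] = 0 and |dM| <= 1
        u += (1.0 / n) * ((1.0 - 2.0 * u) + noise)
    return u

print(stochastic_approximation())  # close to the stable zero 1/2
```

For this choice, $\hat f(x)=2$, hence $\hat\gamma=2>\frac12$ and the central limit theorem of Theorem \ref{clt-renlund} applies as well.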
\subsection{Proof of the main results}
\begin{proof}[Proof of Theorem \ref{thmXopp}] Consider the
urn model defined in Equation (\ref{recurrence}) with $Q_n=
\begin{pmatrix}
0 & X_n \\
X_n & 0 \\
\end{pmatrix}$.
We have the following recursions:
\begin{equation}\label{recurrence-opp2}W_{n+1}=W_n+X_{n+1}(m-\xi_{n+1})\quad \text{and} \quad T_{n+1}=T_n+mX_{n+1}.\end{equation}
\textbf{Proof of claim 1}
\begin{lem}
Let $Z_n=\frac{W_n}{T_n}$ be the proportion of white balls in the
urn after $n$ draws. Then, $Z_n $ satisfies the stochastic
approximation algorithm defined by (\ref{eq:algo_sto}) with
$\gamma_n=\frac{1}{T_n}$, $f(x)=\mu_X m(1-2x)$ and $\Delta
M_{n+1}=X_{n+1}(m-\xi_{n+1}-mZ_n)-\mu_X m(1-2Z_n)$.
\end{lem}
\begin{proof}
We need to check the conditions of Definition \ref{def-algo}.\\
\begin{description}
\item [(i)] Recall that $T_n=T_0+m\sum_{i=1}^nX_i$, where
$(X_i)_{i\geq 1}$ are iid random variables. It follows, by
Rajchman's strong law of large numbers, that
\begin{equation}\label{asymp-T_n}T_n\stackrel{a.s}{=}\mu_X mn
+o(\sqrt{n}\ \ln(n)^\delta),\quad \delta
>\frac{1}{2}.\end{equation} Hence, for $n$ large enough,
$\frac{1}{(m\mu_X+1)n}\leq\frac{1}{T_n}\leq\frac{2}{m\mu_X n},$
so that $c_l=\frac{1}{m\mu_X+1}$ and $c_u=\frac{2}{m\mu_X},$
\item[(ii)] $\mathbb{E}[\Delta
M_{n+1}^2|\mathcal{F}_n]\leq (6m^2+m)\mathbb{E}(X^2)+9m^2\mu^2=K_u,$\\
\item[(iii)] $|f(Z_n)|=m\mu_X|1-2Z_n|\leq 3m\mu_X=K_f$,\\
\item[(iv)]$\mathbb{E}(\gamma_{n+1}\Delta
M_{n+1}|\mathcal{F}_n)\leq \frac{1}{T_n}\mathbb{E}(\Delta
M_{n+1}|\mathcal{F}_n)=0=K_e$.
\end{description}
\end{proof}
\begin{prop}\label{prop1/2}
The proportion of white balls in the urn
after $n$ draws, $Z_n$, converges almost surely to $\frac{1}{2}$.
\end{prop}
\begin{proof}[Proof of Proposition \ref{prop1/2}]
Since the process $Z_n$ satisfies the stochastic approximation
algorithm defined by Equation (\ref{eq:algo_sto}), we apply
Theorem \ref{th:renlund}. As the function $f$ is continuous we
conclude that $Z_n$ converges almost surely to $\frac{1}{2}$: the
unique stable zero of the function $f$.
\end{proof}
We apply the previous results to the urn composition. Writing
$\frac{W_n}{n}=\frac{W_n}{T_n}\frac{T_n}{n}$, we deduce from
Proposition \ref{prop1/2} and Equation (\ref{asymp-T_n}) that
$\frac{W_n}{n}\stackrel{a.s}{=}\bigl(\tfrac{1}{2}+o(1)\bigr)\Bigl(\mu_Xm+o\Bigl(\frac{\ln(n)^\delta}{\sqrt{n}}\Bigr)\Bigr),$
hence the following corollary:
\begin{cor}
The number of white balls in the urn after $n$ draws, $W_n$,
satisfies for $n$ large enough
\begin{equation*}W_n\stackrel{a.s}{=}\frac{\mu_X m}{2}n+o(\sqrt{n}\ \ln(n)^\delta),\quad \delta >\frac{1}{2}.\end{equation*}
\end{cor}
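Proposition \ref{prop1/2} and the corollary can be checked numerically. The simulation sketch below is ours (the distribution of $X$ and all parameter values are illustrative): with $X$ uniform on $\{1,2,3\}$ ($\mu_X=2$) and $m=3$, one should observe $Z_n\approx\frac12$ and $\frac{T_n}{n}\approx m\mu_X=6$.

```python
import random

def simulate_opposite_urn(n_draws=20_000, m=3, w0=4, b0=4, seed=0):
    """Urn with replacement matrix Q_n = [[0, X_n], [X_n, 0]]: after
    drawing xi white balls out of m, add X*(m - xi) white and X*xi
    black balls.  Here X is uniform on {1, 2, 3}."""
    rng = random.Random(seed)
    w, b = w0, b0
    for _ in range(n_draws):
        # sample xi ~ hypergeometric(w, b, m) by drawing sequentially
        xi, ww, bb = 0, w, b
        for _ in range(m):
            if rng.random() * (ww + bb) < ww:
                xi += 1
                ww -= 1
            else:
                bb -= 1
        x = rng.randint(1, 3)  # X_n, with mean mu_X = 2
        w += x * (m - xi)
        b += x * xi
    return w / (w + b), (w + b) / n_draws

z_n, t_n = simulate_opposite_urn()
print(z_n, t_n)
```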
\textbf{Proof of claim 2} We aim to apply Theorem
\ref{clt-renlund}. To this end, we need to compute the following limits:
\begin{equation*}
\lim_{n\rightarrow \infty}\mathbb{E}[\bigl(\frac{n}{T_n}\bigr)^2\Delta
M_{n+1}^2|\mathcal{F}_n]\quad \text{and}\quad
\lim_{n\rightarrow \infty}-\frac{n}{T_n}f'(Z_n).
\end{equation*}
We have
\begin{eqnarray*}
\mathbb{E}[\Delta
M_{n+1}^2|\mathcal{F}_n]&=&\mathbb{E}(X_{n+1}^2)\mathbb{E}[(m-\xi_{n+1}-mZ_n)^2|\mathcal{F}_n]+\mu_X^2\mathbb{E}[(m-2mZ_n)^2|\mathcal{F}_n]
\\ &&-2\mu_X^2\mathbb{E}[(m-\xi_{n+1}-mZ_n)(m-2mZ_n)|\mathcal{F}_n]\\
&=&(\sigma_X^2+\mu_X^2)\Big[m^2-4m^2Z_n+4m^2Z_n^2+mZ_n(1-Z_n)\frac{T_n-m}{T_n-1}\Big]-\mu_X^2[m^2+4m^2Z_n^2-4m^2Z_n].
\end{eqnarray*}
As $n$ tends to infinity, we have $Z_n
\stackrel{a.s}{\longrightarrow} \frac{1}{2}$ and
$\frac{T_n-m}{T_n-1}\stackrel{a.s}{\longrightarrow} 1$. Then,
\begin{equation*}\lim_{n\rightarrow \infty}\mathbb{E}[\Delta
M_{n+1}^2|\mathcal{F}_n]\stackrel{a.s}{=}(\sigma_X^2+\mu_X^2)\frac{m}{4}\quad
\text{and}\quad \lim_{n\rightarrow
\infty}-\frac{n}{T_n}f'(Z_n)\stackrel{a.s}{=}2.\end{equation*}
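For the reader's convenience, these two limits give the parameters of Theorem \ref{clt-renlund} explicitly:

```latex
\[
\sigma^2=\lim_{n\rightarrow\infty}\mathbb{E}\Big[\Big(\frac{n}{T_n}\Big)^2\Delta M_{n+1}^2\Big|\mathcal{F}_n\Big]
=\frac{1}{(\mu_X m)^2}\,(\sigma_X^2+\mu_X^2)\,\frac{m}{4}
=\frac{\sigma_X^2+\mu_X^2}{4m\mu_X^2},
\qquad \hat\gamma=2,
\]
\[
\text{so that}\qquad
\frac{\sigma^2}{2\hat\gamma-1}=\frac{\sigma_X^2+\mu_X^2}{12\,m\,\mu_X^2}.
\]
```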
According to Theorem \ref{clt-renlund},
$\sqrt{n}(Z_n-\frac{1}{2})$ converges in distribution to
$\mathcal{N}(0,\frac{\sigma_X^2+\mu_X^2}{12\mu_X^2m})$. Finally,
by writing
$\Big(\frac{W_n-\frac{1}{2}T_n}{\sqrt{n}}\Big)=\sqrt{n}(Z_n-\frac{1}{2})\frac{T_n}{n}$,
we conclude using Slutsky's theorem.\\
\textbf{Proof of claim 3} To prove this claim, we follow the proofs
of Lemma 3 and Theorem 2 in \cite{A.L.O}. Using the same methods,
we first show that the variables $(X_n(m-\xi_n))_{n\geq
0}$ are $\alpha$-mixing with a strong mixing coefficient
$\alpha(n)=o\Big(\frac{\ln(n)^\delta}{\sqrt{n}}\Big)$, $\delta
>\frac{1}{2}$. To conclude, we adapt the Bernstein method.
We keep the notation of Theorem 2 in \cite{A.L.O} and
define $S_n=\frac{1}{\sqrt{n}}\sum_{i=1}^n\tilde\xi_i$, where
$\tilde\xi_i=X_i(m-\xi_i)-\mu_X(m-\mathbb{E}(\xi_i))$. First,
we need to estimate the variance of $W_n$.
\begin{prop}\label{var}
The variance of $W_n$ satisfies
\begin{equation}\label{variance}\mathbb{V}ar(W_n)=\frac{m(\sigma_X^2+\mu_X^2)+m^2\sigma_X^2}{12}\ n+o(\sqrt{n}\ \ln(n)^\delta),\quad \delta>\frac{1}{2}.\end{equation}
\end{prop}
\begin{proof}[Proof of Proposition \ref{var}]
Recall that the number of white balls in the urn satisfies
Equation (\ref{recurrence-opp2}), then
\begin{equation*}\mathbb{V}ar(W_{n+1})=\mathbb{V}ar(W_n)+\mathbb{V}ar(X_{n+1}(m-\xi_{n+1}))+2\ \mathbb{C}ov(W_{n},X_{n+1}(m-\xi_{n+1})).\end{equation*}
We have
$\mathbb{V}ar(X_{n+1}(m-\xi_{n+1}))=(\sigma_X^2+\mu_X^2)\Big(\mathbb{V}ar(mZ_{n})
+\mathbb{E}\Big(mZ_{n}(1-Z_{n})\frac{T_{n}-m}{T_{n}-1}\Big)\Big)+\sigma_X^2\bigl(\mathbb{E}(m-\xi_{n+1})\bigr)^2.$
Using Equation (\ref{asymp-T_n}) and the fact that
$Z_n\stackrel{a.s}{\rightarrow}\frac{1}{2}$, we obtain
\begin{eqnarray*}\mathbb{V}ar(W_{n+1})&=&\Big(1-\frac
2n+o\Big(\frac{\ln(n)^\delta}{n^{\frac32}}\Big)\Big)\mathbb{V}ar(W_{n})
+\frac{m(\sigma_X^2+\mu_X^2)+m^2\sigma_X^2}{4}+o\Big(\frac{\ln(n)^\delta}{\sqrt
n}\Big)\\
&=&a_n\mathbb{V}ar(W_{n})+b_n,\end{eqnarray*} where
$a_n=\Bigl(1-\frac
2n+o\Big(\frac{\ln(n)^\delta}{n^{\frac32}}\Big)\Bigr)$ and
$b_n=\frac{m(\sigma_X^2+\mu_X^2)+m^2\sigma_X^2}{4}+o\Big(\frac{\ln(n)^\delta}{\sqrt
n}\Big).$\\
Thus, \begin{equation*} \mathbb{V}ar(W_n)=\Big(\prod_{k=1}^n
a_k\Big)\Big(\mathbb{V}ar(W_0)+\sum_{k=0}^{n-1}\frac{b_k}{\prod_{j=0}^ka_j}\Big).
\end{equation*}
There exists a constant $a$ such that
$\prod_{k=1}^na_k=\displaystyle\frac{e^{a}}{n^2}\Big(1+o\Big(\frac{\ln(n)^\delta}{\sqrt
n}\Big)\Big)$, which leads to
\begin{equation*}
\mathbb{V}ar(W_n)=\frac{m(\sigma_X^2+\mu_X^2)+m^2\sigma_X^2}{12}n+o(\sqrt
n\ln(n)^\delta),\quad \delta>\frac{1}{2}.
\end{equation*}
\end{proof}
Following the proof of Theorem 2 in \cite{A.L.O} and
using Equation (\ref{variance}), we conclude that
\begin{equation}\frac{W_n-\mathbb{E}(W_n)}{\sqrt{n}}\stackrel{\mathcal{L}}{\longrightarrow}
\mathcal{N}\Bigl(0,\frac{m(\sigma_X^2+\mu_X^2)+m^2\sigma_X^2}{12}\Bigr).
\end{equation}
\end{proof}
\begin{proof}[Proof of Theorem \ref{thmXself}]
Consider the urn model defined in (\ref{recurrence}) with $Q_n=
\begin{pmatrix}
X_n & 0 \\
0 & X_n \\
\end{pmatrix}$. The following recurrences hold:
\begin{equation}\label{W-self}
W_{n+1}=W_n+X_{n+1}\xi_{n+1}\quad \text{and}\quad
T_{n+1}=T_n+mX_{n+1}.
\end{equation}
As $T_n$ is a sum of iid random variables, it satisfies
\begin{equation}\label{totalself}T_n\stackrel{a.s}{=}\mu_Xm\,n+o(\sqrt{n}\ln(n)^\delta),\quad \delta>\frac{1}{2}.\end{equation}
The processes $\tilde
M_{n}=\prod_{k=1}^{n-1}\Big(\frac{T_k}{T_k+m\mu_X}\Big)W_n$ and
$\tilde N_n=\prod_{k=1}^{n-1}\Big(\frac{T_k}{T_k+m\mu_X}\Big)B_n$
are two positive $\mathcal{F}_n$-martingales. In view of
(\ref{totalself}), we have
$\prod_{k=1}^{n-1}\Big(\frac{T_k}{T_k+m\mu_X}\Big)\stackrel{a.s}{=}\displaystyle\frac{e^{\gamma}}{n}\Big(1+o\Big(\frac{\ln
(n)^\delta}{\sqrt n}\Big)\Big)$ for a positive constant $\gamma$.
Thus, there exist nonnegative random variables $\tilde W_\infty$
and $\tilde B_{\infty}$ such that $\tilde W_\infty+\tilde
B_\infty\stackrel{a.s}{=}m\mu_X$ and
\begin{equation*}\frac{W_n}{n}\stackrel{a.s}{\longrightarrow}\tilde W_\infty,\quad
\text{and}\quad \frac{B_n}{n}\stackrel{a.s}{\longrightarrow}\tilde
B_{\infty}.\end{equation*}
\textbf{Example: } In the original P\'olya urn model \cite{Polya},
when $m=1$ and $X=C$ (deterministic), the random variable $\tilde
W_\infty/C$ has a $Beta(\frac{B_0}{C},\frac{W_0}{C})$ distribution
\cite{Athreya-Ney,S. Janson}. M.R. Chen and M. Kuba
\cite{Chen-Kuba} considered the case where $X=C$ (non-random) and
$m>1$; they gave moments of all orders of $W_n$ and proved that
$\tilde W_\infty$ cannot be an ordinary $Beta$ distribution.
\begin{rmq}
Suppose that the random variable $X$ has moments of all orders,
and let $m_k=\mathbb{E}(X^k)$ for $k\ge 1$. We have, almost
surely, $W_n\le T_n$; then, by Minkowski's inequality, we obtain
$\mathbb{E}(W_n^{2k})\leq (mn)^{2k}\mathbb{E}(X^{2k})$. Using
Carleman's condition we conclude that, if
$\sum_{k\ge1}m_{2k}^{-\frac{1}{2k}}=\infty$, then the random
variable $\tilde W_\infty$ is determined by its moments.
Unfortunately, we are still unable to give exact expressions
for the moments of all orders of $W_n$. However, we can characterize the
distribution of $\tilde W_\infty$ in the case where the variable
$X$ is bounded.
\end{rmq}
\begin{lem}
\label{Abs_con} Assume that $X$ is a bounded random variable.
Then, for fixed $W_0$, $B_0$ and $m$, the random variable $\tilde
W_\infty$ is absolutely continuous.
\end{lem}
The proof that $\tilde W_\infty$ is absolutely continuous is very
close to that of Theorem 4.2 in \cite{Chen-Wei}. We state the main
proposition to make the proof clearer.
\begin{prop}\cite{Chen-Wei}
Let $\Omega_\ell$ be a sequence of increasing events such that
$\mathbb{P}(\cup_{\ell \ge 0}\Omega_\ell)=1$. If there exist
nonnegative Borel measurable functions $\{f_\ell\}_{\ell\geq 1}$
such that $\mathbb{P}\Big(\Omega_\ell\cap \tilde
W_\infty^{-1}(B)\Big)=\int_Bf_\ell (x)dx$ for all Borel sets $B$,
then $f=\displaystyle\lim_{\ell\rightarrow+\infty}f_\ell$ exists
almost everywhere and $f$ is the density of $\tilde W_\infty$.
\end{prop}
Let
$(\Omega,\mathcal F,\mathbb{P})$ be a probability space. Suppose
that there exists a constant $A$ such that, almost surely,
$X\le A$. \begin{lem} Define the events
\begin{equation*}
\Omega_{\ell}:=\{W_\ell\ge m A \;\mbox{and}\;B_\ell\ge mA\},
\end{equation*}
then $(\Omega_{\ell})_{\ell\geq 0}$ is a sequence of increasing
events; moreover, we have $\mathbb{P}(\cup_{\ell \ge
0}\Omega_\ell)=1$.
\end{lem}
Next, we just need to show that the restriction of $\tilde
W_\infty$ to $\Omega_{\ell,j}=\{\omega;\ W_\ell(\omega)=j\}$ has a
density for each $j$ with $Am\leq j\leq T_{\ell-1}$. Let
$(p_c)_{c\in\text{supp}(X)}$ denote the distribution of $X$.
\begin{lem}
For a fixed $\ell>0$, there exists a positive constant $\kappa$,
such that, for every $c\in\text{supp(X)}$, $n\ge \ell+1$, $Am\le
j\le T_{\ell-1}$ and $k\le Am(n+1)$, we have
\begin{equation}
\label{Inequality_WEI} \sum_{i=0}^m
\mathbb{P}(W_{n+1}=j+k|W_n=j+k-ci)\le p_c(1-\frac
1n+\frac{\kappa}{n^2}).
\end{equation}
\end{lem}
\begin{proof}
According to Lemma 4.1 in \cite{Chen-Wei},
for $Am \leq
j\leq T_{\ell -1}$, $n\geq \ell$ and $k\leq Am(n+1)$, the
following holds:
\begin{equation}\label{step2}\sum_{i=0}^m{j+c(k-i)\choose
i}{T_n-j-c(k-i)\choose
m-i}=\frac{T_n^m}{m!}+\frac{(1-m-2c)T_n^{m-1}}{2(m-1)!}+...,\end{equation}
which is a polynomial in $T_n$ of degree $m$ with coefficients
depending on $W_0, B_0, m$ and $c$ only.\\
Let $u_{n,k}(c)=\sum_{i=0}^m \mathbb{P}(W_{n+1}=j+k|W_n=j+k-ic)$.
Applying Equation (\ref{step2}) to our model we have
\begin{eqnarray}
\label{Majoration1} u_{n,k}(c)&=&p_c\sum_{i=0}^m{j+c(k-i)\choose
i}{T_n-j-c(k-i)\choose m-i}{T_n\choose m}^{-1} \nonumber
\\
&=&p_c{T_n\choose m}^{-1}\Big(\frac{T_n^{m}}{m!}+
\frac{(1-m-2c)}{(m-1)!}T_n^{m-1}+\ldots\Big)\Big(\frac{T_n^m}{m!}+\frac{(1-m)}{2(m-1)!}T_n^{m-1}+\ldots\Big)^{-1}\nonumber
\\
&\stackrel{a.s}{=}&
p_c\Big(1-\frac{1}{n}+O\Big(\frac{1}{n^2}\Big)\Big).
\end{eqnarray}
\end{proof}
We shorten the remainder of the proof by mentioning only the main
differences with Lemma 4.1 in \cite{Chen-Wei}. For a fixed $\ell$ and $n\ge \ell+1$, we denote $v_{n,j}=\displaystyle\max_{0\leq k\leq
Amn}\mathbb{P}\bigl(W_{\ell+n}=j+k|W_\ell=j\bigr)$. We have the
following inequality:
\begin{eqnarray*}
v_{n+1,j}&\le & \max_{0\le k\le
Am(n+1)}\Big\{\sum_{i=0}^m\sum_{c\in\text{supp}{(X)}}\mathbb{P}(W_{\ell+
n+1}=j+k|W_{\ell+n}=j+k-ci)\Big\}\nonumber
\\
&\le& \max_{0\le k\le Am(n+1)}\Big\{\sum_{i=0}^m\sum_{c\in\text{supp}{(X)}}\mathbb{P}(W_{\ell+n+1}=j+k|W_{\ell+n}=j+k-ci)\nonumber\\
&&\times \mathbb{P}(W_{\ell+n}=j+k-ci|W_\ell=j)\Big\}\nonumber\\
&\le&\max_{0\le k\le
Am(n+1)}\sum_{i=0}^m\sum_{c\in\text{supp}{(X)}}\mathbb{P}(W_{\ell+n+1}=j+k|W_{\ell+n}=j+k-ci)\\
&&\times \max_{0\leq \tilde k\leq
Amn}\mathbb{P}\bigl(W_{\ell+n}=j+\tilde
k|W_\ell =j\bigr)\\
&\le&\sum_{c\in\text{supp}{(X)}}p_c\Big(1-\frac{1}{n+\ell}+\frac{\kappa}{(n+\ell)^2}\Big)v_{n,j}\\&&=\Big(1-\frac{1}{n+\ell}+\frac{\kappa}{(n+\ell)^2}\Big)v_{n,j}.
\end{eqnarray*}
This implies that there exists some positive constant $C(\ell)$,
depending on $\ell$ only, such that, for a fixed $\ell$ and for
all $n\ge \ell+1$, we get
\begin{equation}
\max_{0\leq k\leq
m(n-\ell)}\mathbb{P}\bigl(W_n=j+k|W_\ell=j\bigr)\le\prod_{i=\ell}^n\Big(1-\frac1i+\frac{\kappa}{i^2}\Big)\le
\frac{C(\ell)}{n}.
\end{equation}
The rest of the proof follows.
\end{proof}
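A simulation sketch of the self-reinforcing scheme (ours; all parameters are illustrative) makes the statement of Theorem \ref{thmXself} concrete: the normalized total $\frac{W_n+B_n}{n}$ stabilizes near $m\mu_X$, while $\frac{W_n}{n}$ stabilizes at a value that varies from one run to another, reflecting the random limit $\tilde W_\infty$.

```python
import random

def simulate_self_urn(n_draws=20_000, m=3, w0=4, b0=4, seed=0):
    """Urn with replacement matrix Q_n = diag(X_n, X_n): add X*xi
    white and X*(m - xi) black balls; X uniform on {1, 2, 3}, so
    mu_X = 2 and m*mu_X = 6."""
    rng = random.Random(seed)
    w, b = w0, b0
    for _ in range(n_draws):
        # sample xi ~ hypergeometric(w, b, m) by drawing sequentially
        xi, ww, bb = 0, w, b
        for _ in range(m):
            if rng.random() * (ww + bb) < ww:
                xi += 1
                ww -= 1
            else:
                bb -= 1
        x = rng.randint(1, 3)
        w += x * xi
        b += x * (m - xi)
    return w / n_draws, b / n_draws

wn, bn = simulate_self_urn()
print(wn, bn)  # wn + bn is close to 6; wn itself depends on the seed
```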
\begin{proof}[Proof of Theorem \ref{thmXYopp}]
Consider the urn model evolving by the matrix $Q_n=
\begin{pmatrix}
0 & X_n \\
Y_n & 0 \\
\end{pmatrix}$. According to Equation (\ref{recurrence}), we have
the following recursions:
\begin{equation}\label{opposite-rec}W_{n+1}=W_n+X_{n+1}(m-\xi_{n+1})\quad \text{and}\quad T_{n+1}=T_n+mX_{n+1}+\xi_{n+1}(Y_{n+1}-X_{n+1}).\end{equation}
\begin{lem}
The proportion of white balls after $n$ draws, $Z_n$, satisfies the stochastic algorithm
defined by (\ref{eq:algo_sto}), where
$f(x)=m(\mu_X-\mu_Y)x^2-2\mu_Xmx+\mu_Xm$, $\gamma_n=\frac{1}{T_n}$
and $\Delta M_{n+1}=D_{n+1}-\mathbb{E}[D_{n+1}|\mathcal{F}_n]$,
with $D_{n+1}=\xi_{n+1}(Z_n(X_{n+1}-Y_{n+1})-X_{n+1})+mX_{n+1}(1-Z_n)$.
\end{lem}
\begin{proof}
We check the conditions of Definition \ref{def-algo}, indeed,
\begin{description}
\item[(i)] recall that
$T_n=T_0+m\sum_{i=1}^nX_i+\sum_{i=1}^n\xi_i(Y_i-X_i)$, then $\frac{T_n}{n}\leq\frac{T_0}{n}+\frac{m}{n}\sum_{i=1}^nX_i+\frac{m}{n}\sum_{i=1}^n|Y_i-X_i|.$
By the strong law of large numbers we have $\frac{T_n}{n}\leq
m(\mu_X+\mu_{|Y-X|})+1$. On the other hand, we have $T_n\geq \displaystyle\min_{1\leq i\leq n}(X_i, Y_i) m n,$
thus, the following bound holds
\begin{equation*}\frac{1}{(m(\mu_X+\mu_{|Y-X|})+1)n}\leq \frac{1}{T_n}\leq \frac{1}{m \displaystyle\min_{1\leq i\leq n}(X_i,Y_i) n},\end{equation*}
then $c_l=\frac{1}{m(\mu_X+\mu_{|Y-X|})+1}$ and $c_u=\frac{1}{m \displaystyle\min_{1\leq i\leq n}(X_i,Y_i)},$\\
\item[(ii)] $\mathbb{E}[\Delta M_{n+1}^2|\mathcal{F}_n]
\leq
(\mu_{(X-Y)^2}+3\mu_X)(m+m^2)+5m^2\mu_{X^2}+2m^2\mu_X\mu_Y+m^2(|\mu_X-\mu_Y|+3\mu_X)=K_u,$
\item[(iii)]$|f(Z_n)|\leq
m(|\mu_Y-\mu_X|+3\mu_X)=K_f,$
\item[(iv)] $\mathbb{E}[\frac{1}{T_{n+1}}\Delta
M_{n+1}|\mathcal{F}_n]\leq \frac{1}{T_n}\mathbb{E}[\Delta
M_{n+1}|\mathcal{F}_n]=0$
\end{description}
\end{proof}
\begin{prop}
The proportion of white balls in the urn after $n$ draws, $Z_n$,
satisfies as $n$ tends to infinity
\begin{equation}\label{conv-proportion}Z_n\stackrel{a.s}{\longrightarrow}z:=\frac{\sqrt{\mu_X}}{\sqrt{\mu_X}+\sqrt{\mu_Y}}.\end{equation}
\end{prop}
\begin{proof}
The proportion of white balls in the urn satisfies the stochastic
approximation algorithm defined in (\ref{eq:algo_sto}). As the
function $f$ is continuous, by Theorem \ref{th:renlund}, the
process $Z_n$ converges almost surely to
$z=\frac{\sqrt{\mu_X}}{\sqrt{\mu_X}+\sqrt{\mu_Y}}$,
the unique zero of $f$ with negative derivative.
\end{proof}
Next, we give an estimate of $T_n$, the total number of balls in
the urn after $n$ draws, in order to describe the asymptotic of
the urn composition. By Equation (\ref{opposite-rec}), we have
\begin{equation*}\frac{T_n}{n}=\frac{T_0}{n}+\frac{m}{n}\sum_{i=1}^nX_i
+\frac{m(\mu_Y-\mu_X)}{n}\sum_{i=1}^nZ_{i-1}+\frac{1}{n}\sum_{i=1}^n\Big[\xi_i(Y_i-X_i)-\mathbb{E}[\xi_i(Y_i-X_i)|\mathcal{F}_{i-1}]\Big].\end{equation*}
Since $(X_i)_{i\geq 1}$ are iid random variables, then by the
strong law of large numbers we have
$\frac{m}{n}\sum_{i=1}^nX_i\stackrel{a.s}{\rightarrow} m\mu_X$.
Via Ces\'aro's lemma, we conclude that
$\frac{1}{n}\sum_{i=1}^nZ_{i-1}$ converges almost surely, as $n$
tends to infinity, to $z$. Finally, we prove that the last term on the
right-hand side tends to zero as $n$ tends to infinity. In fact, let
$G_n=\sum_{i=1}^n\Big[\xi_i(Y_i-X_i)-\mathbb{E}[\xi_i(Y_i-X_i)|\mathcal{F}_{i-1}]\Big]$,
then $(G_n,\mathcal{F}_n)$ is a martingale difference sequence
such that
\begin{equation*}\frac{\langle G\rangle_n}{n}=\frac{1}{n}\sum_{i=1}^n\mathbb{E}[\nabla G_i^2|\mathcal{F}_{i-1}],\end{equation*}
where $\nabla
G_n=G_n-G_{n-1}=\xi_n(Y_n-X_{n})-\mathbb{E}[\xi_n(Y_n-X_{n})|\mathcal{F}_{n-1}]$ and $\langle G\rangle_n$ denotes the predictable quadratic variation of the martingale.\\
By a simple computation, we have the almost sure convergence of
$\mathbb{E}[\nabla G_i^2|\mathcal{F}_{i-1}]$ to $(mz
(1-z)+m^2z^2)(\sigma_Y^2+\sigma_X^2)$. Therefore, Ces\'aro's lemma
ensures that $\frac{\langle G\rangle_n}{n}$ converges to $(mz
(1-z)+m^2z^2)(\sigma_Y^2+\sigma_X^2)$ and
$\frac{G_n}{n}\stackrel{a.s}{\longrightarrow} 0$. Thus, as $n$
tends to infinity, we have
\begin{equation}\label{T_n-convergence}\frac{T_n}{n}\stackrel{a.s}{\longrightarrow}
m\sqrt{\mu_X}\sqrt{\mu_Y}.\end{equation} In view of Equation
(\ref{T_n-convergence}), we describe the asymptotic behavior of
the urn composition after $n$ draws. One can write
$\frac{W_n}{n}=\frac{W_n}{T_n}\frac{T_n}{n}$ and
$\frac{B_n}{n}=\frac{B_n}{T_n}\frac{T_n}{n}$;
using Equations (\ref{conv-proportion}) and (\ref{T_n-convergence}),
we have, as $n$ tends to infinity,
$\frac{W_n}{n}\stackrel{a.s}{\longrightarrow}m\sqrt{\mu_X}\sqrt{\mu_Y}
z$ and $\frac{B_n}{n}\stackrel{a.s}{\longrightarrow}
m\sqrt{\mu_X}\sqrt{\mu_Y}(1-z)$.\\
\textbf{Proof of claim 2}\\
Next, we aim to apply Theorem \ref{clt-renlund}. In our model, we have $\gamma_n=\frac{1}{T_n}$, so we need to
control the following asymptotic behaviors:
\begin{equation*}
\lim_{n\rightarrow +\infty}\mathbb{E}[\Big(\frac{n}{T_n}\Big)^2\Delta
M_{n+1}^2|\mathcal{F}_n]\quad \text{and}\quad
\lim_{n\rightarrow +\infty}-\frac{n}{T_n}f'(Z_n).
\end{equation*}
In fact, recall that $\frac{n}{T_n}$ converges almost surely to
$\frac{1}{m\sqrt{\mu_X}\sqrt{\mu_Y}}$ and $\mathbb{E}[\Delta
M_{n+1}^2|\mathcal{F}_n]=\mathbb{E}[D_{n+1}^2|\mathcal{F}_n]-\mathbb{E}[D_{n+1}|\mathcal{F}_n]^2$.
Since $\mathbb{E}[D_{n+1}|\mathcal{F}_n]^2$ converges almost
surely to $f(z)^2=0$, it remains to compute $\mathbb{E}[D_{n+1}^2|\mathcal{F}_n]$. We have
\begin{eqnarray*}\mathbb{E}[D_{n+1}^2|\mathcal{F}_n]&=&\mathbb{E}\Big[Z_n^2(X_{n+1}-Y_{n+1})^2-2Z_nX_{n+1}(X_{n+1}-Y_{n+1})+X_{n+1}^2\Big]
\mathbb{E}[\xi_{n+1}^2|\mathcal{F}_n]+m^2(1-Z_n)^2\mathbb{E}(X^2)\\&&+2m^2Z_n(1-Z_n)\Bigl(Z_n(\mathbb{E}(X^2)-\mu_X\mu_Y)-\mathbb{E}(X^2)\Bigr).\end{eqnarray*}
Using the fact that
$\mathbb{E}[\xi_{n+1}^2|\mathcal{F}_n]=mZ_n(1-Z_n)\frac{T_n-m}{T_n-1}+m^2Z_n^2$
and that $Z_n$ converges almost surely to $z$, we conclude that
$\mathbb{E}[D_{n+1}^2|\mathcal{F}_n]$ converges almost surely to
$G(z)>0.$ Applying Theorem \ref{clt-renlund}, we obtain the
following
\begin{equation}\sqrt{n}(Z_n-z)\stackrel{\mathcal{L}}{\longrightarrow}\mathcal{N}\Big(0,\frac{G(z)}{3m^2\mu_X\mu_Y}\Big). \end{equation}
Now, writing
$\frac{W_n-zT_n}{\sqrt{n}}=\sqrt{n}\bigl(\frac{W_n}{T_n}-z\bigr)\frac{T_n}{n}$,
it is enough to use Slutsky's theorem to conclude the proof.
\end{proof}
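The limits of Theorem \ref{thmXYopp} can also be checked numerically. In the sketch below (ours; parameters are illustrative) we take $m=2$, $X$ uniform on $\{3,5\}$ ($\mu_X=4$) and $Y\equiv 1$ (a degenerate random variable, $\mu_Y=1$), so that $z=\frac{\sqrt4}{\sqrt4+\sqrt1}=\frac23$ and $\frac{T_n}{n}\to m\sqrt{\mu_X\mu_Y}=4$.

```python
import random

def simulate_xy_urn(n_draws=50_000, m=2, w0=4, b0=4, seed=0):
    """Urn with replacement matrix Q_n = [[0, X_n], [Y_n, 0]]: add
    X*(m - xi) white and Y*xi black balls.  X uniform on {3, 5}
    (mu_X = 4) and Y = 1 (mu_Y = 1), so the predicted limit of Z_n
    is sqrt(4)/(sqrt(4)+sqrt(1)) = 2/3."""
    rng = random.Random(seed)
    w, b = w0, b0
    for _ in range(n_draws):
        # sample xi ~ hypergeometric(w, b, m) by drawing sequentially
        xi, ww, bb = 0, w, b
        for _ in range(m):
            if rng.random() * (ww + bb) < ww:
                xi += 1
                ww -= 1
            else:
                bb -= 1
        x = rng.choice((3, 5))
        y = 1
        w += x * (m - xi)
        b += y * xi
    return w / (w + b), (w + b) / n_draws

z_n, t_n = simulate_xy_urn()
print(z_n, t_n)
```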
\begin{proof}[Proof of Theorem \ref{thmXYself}] Consider
the urn model defined in (\ref{recurrence}) with $Q_n=
\begin{pmatrix}
X_n & 0\\
0& Y_n \\
\end{pmatrix}$. The process of the
urn satisfies the following recursions:
\begin{equation}\label{recurence-self1}
W_{n+1}=W_n+X_{n+1}\xi_{n+1}\quad \text{and} \quad
T_{n+1}=T_n+mY_{n+1}+\xi_{n+1}(X_{n+1}-Y_{n+1}).\end{equation}
\begin{lem}\label{algo}
If $\mu_X\neq \mu_Y$, the proportion of white balls in the urn
after $n$ draws satisfies the stochastic algorithm defined by
(\ref{eq:algo_sto}) where $\gamma_n=\frac{1}{T_n}$,
$f(x)=m(\mu_Y-\mu_X)x(x-1)$ and $\Delta
M_{n+1}=D_{n+1}-\mathbb{E}[D_{n+1}|\mathcal{F}_n]$ with
$D_{n+1}=\xi_{n+1}(Z_n(Y_{n+1}-X_{n+1})+X_{n+1})-mZ_nY_{n+1}$.\\
\end{lem}
\begin{proof}
We check that, if $\mu_X\neq\mu_Y$, the conditions of Definition
\ref{def-algo} hold. Indeed, \begin{description} \item[(i)] as
$T_n=T_0+m\sum_{i=1}^nY_i+\sum_{i=1}^n\xi_i(X_i-Y_i)$,
then via the strong law of large numbers we have $|\frac{T_n}{n}|\leq
m\mu_Y+m\mu_{|X-Y|}+1$.
On the other hand, we have $T_n\geq \min_{1\leq i\leq n}(X_i,Y_i)
m n$, thus,\begin{equation*}\frac{1}{(m\mu_Y+m\mu_{|X-Y|}+1)n}\leq
\frac{1}{T_n}\leq \frac{1}{\displaystyle\min_{1\leq i\leq
n}(X_i,Y_i) m n},\end{equation*}
\item[(ii)]$\mathbb{E}[\Delta
M_{n+1}^2|\mathcal{F}_n]\leq
(2m+m^2)(4\mu_{X^2}+\mu_{Y^2})+3m^2\mu_{Y^2}+2m^2\mu_X+2m^2\mu_X\mu_Y+4m^2(\mu_X-\mu_Y)^2=K_u,$\\
\item
[(iii)]$|f(Z_n)|=|m(\mu_Y-\mu_X)Z_n(Z_n-1)|\leq 2m |\mu_Y-\mu_X|=K_f,$\\
\item[(iv)] $\mathbb{E}[\gamma_{n+1}\Delta
M_{n+1}|\mathcal{F}_n]\leq \frac{1}{T_n}\mathbb{E}[\Delta
M_{n+1}|\mathcal{F}_n]=0=K_e.$
\end{description}
\end{proof}
\begin{prop}\label{prop-self}
The proportion of white balls in the urn after $n$ draws, $Z_n$,
satisfies almost surely\\
$\displaystyle\lim_{n\rightarrow \infty}Z_n=\left\{
\begin{array}{ll}
1, & \hbox{if $\mu_X>\mu_Y$;} \\
0, & \hbox{if $\mu_X<\mu_Y$;} \\
\tilde Z_\infty, & \hbox{if $\mu_X=\mu_Y$,} \\
\end{array}
\right.$\\
where $\tilde Z_\infty$ is a positive random
variable.
\end{prop}
\begin{proof}[Proof of Proposition \ref{prop-self}]
Recall that, if $\mu_X\neq \mu_Y$, $Z_n$ satisfies the stochastic
algorithm defined in Lemma \ref{algo}. As the function $f$ is
continuous, by Theorem \ref{th:renlund} we conclude that $Z_n$
converges almost surely to the stable zero of the function $f$,
that is, the zero with a negative derivative,
which is $1$ if $\mu_X>\mu_Y$ and $0$ if $\mu_X<\mu_Y.$\\
In the case when $\mu_X=\mu_Y$, we have
$Z_{n+1}=Z_n+\frac{P_{n+1}}{T_{n+1}}$, where
$P_{n+1}=X_{n+1}\xi_{n+1}-Z_n\bigl(mY_{n+1}+\xi_{n+1}(X_{n+1}-Y_{n+1})\bigr)$.
Since $\mathbb{E}[P_{n+1}|\mathcal{F}_n]=0$, the process $Z_n$ is a
positive
martingale which converges almost surely to a positive random variable $\tilde
Z_\infty$.
\end{proof}
As a consequence, we have the following corollary.
\begin{cor} If $\mu_X\geq \mu_Y$, the total number of balls in the
urn, $T_n$, satisfies, as $n$ tends to infinity,
\begin{equation*}
\frac{T_n}{n}\stackrel{a.s}{\longrightarrow}m\mu_X.
\end{equation*}
\end{cor}
\begin{proof}
In fact, let
$M_n=\sum_{i=1}^n\xi_i(X_i-Y_i)-\mathbb{E}[\xi_i(X_i-Y_i)|\mathcal{F}_{i-1}],$
we have
\begin{eqnarray*}\frac{T_n}{n}&=&\frac{T_0}{n}+\frac{m}{n}\sum_{i=1}^nY_i+\frac{1}{n}\sum_{i=1}^n\xi_i(X_i-Y_i)\\
&=&\frac{T_0}{n}+\frac{m}{n}\sum_{i=1}^nY_i+
\frac{m(\mu_X-\mu_Y)}{n}\sum_{i=1}^nZ_{i-1}+\frac{M_n}{n}.
\end{eqnarray*}
As was proved in the previous theorem, we have, as $n$
tends to infinity,
$\frac{M_n}{n}\stackrel{a.s}{\longrightarrow} 0$. Recall that, if
$\mu_X>\mu_Y$, $Z_n$ converges almost surely to $1$. Then, using
Ces\'aro's lemma, we obtain the required limit. If $\mu_X=\mu_Y$,
$\frac{1}{n}\sum_{i=1}^nY_i$ converges to $\mu_Y$ and the same conclusion holds.
\end{proof}
Using the results above, the convergence of the normalized number
of white balls follows immediately. Indeed, if $\mu_X>\mu_Y$, we
have, as $n$ tends to infinity,
\[\frac{W_n}{n}=\frac{W_n}{T_n}\frac{T_n}{n}\stackrel{a.s}{\longrightarrow}m\mu_X.\]
Let $\tilde
G_n=\Bigl(\prod_{i=1}^{n-1}(1+\frac{m\mu_Y}{T_i})\Bigr)^{-1}B_n,$
then $(\tilde G_n,\mathcal{F}_n)$ is a positive martingale. There
exists a positive number $A$ such that
$\prod_{i=1}^{n-1}(1+\frac{m\mu_Y}{T_i})\simeq A
n^{\rho}$, with $\rho=\frac{\mu_Y}{\mu_X}$. Then, as $n$ tends to infinity we have
\begin{equation*} \frac{B_n}{n^\rho}\stackrel{a.s}{\rightarrow} B_{\infty},\end{equation*}
where $B_\infty$ is a positive random variable.\\
If $\mu_X=\mu_Y$, the sequences
$\Bigl(\prod_{i=1}^{n-1}(1+\frac{m\mu_X}{T_i})\Bigr)^{-1}W_n$ and
$\Bigl(\prod_{i=1}^{n-1}(1+\frac{m\mu_Y}{T_i})\Bigr)^{-1}B_n$ are
positive $\mathcal{F}_n$-martingales such that
$\prod_{i=1}^{n-1}(1+\frac{m\mu_X}{T_i})\simeq B\,
n,$ where $B>0$; then, as $n$ tends to infinity,
we have
\begin{equation*}\frac{W_n}{n}\stackrel{a.s}{\rightarrow}
W_{\infty} \quad \text{and}\quad \frac{B_n}{n}\stackrel{a.s}{\rightarrow} \tilde B_{\infty},\end{equation*}
where $W_{\infty}$ and $\tilde B_{\infty}$ are positive random
variables satisfying $ \tilde B_{\infty}=m\mu_X-W_{\infty}.$
\end{proof}
\begin{rmq}
The case when $\mu_X<\mu_Y$ is obtained by interchanging the
colors. In fact we have the following results:
\begin{equation*}T_n\stackrel{a.s}{=}m\mu_Y n+o(n),\quad W_n\stackrel{a.s}{=}\tilde W_\infty n^\sigma+o(n^\sigma)\quad \text{and} \quad B_n\stackrel{a.s}{=}m\mu_Yn+o(n),\end{equation*}
where $\tilde W_\infty$ is a positive random variable and
$\sigma=\frac{\mu_X}{\mu_Y}.$
\end{rmq}
\section{Introduction}
The classical P\'olya urn was introduced by P\'olya and
Eggenberger \cite{Polya} to describe contagious diseases.
The original model is as
follows: an urn contains balls of two colors, white
and black, at the start. At each step, one picks a ball uniformly at
random and returns it to the urn together with a ball of the same color.
Afterward, this model was generalized and it has become a simple
tool to describe several phenomena in finance, clinical trials
(see \cite{Pages}, \cite{Wei}), biology (see \cite{bio}), computer
science and the internet (see
\cite{Mahmoud},\cite{Goldman}), etc. \\
Recently, H. Mahmoud, M.R. Chen, C.Z. Wei, M. Kuba and H. Sulzbach
\cite{Kuba-Mahmoud-Panholzer,Chen-Kuba,Chen-Wei,kuba-Zulzbach,Kuba-mahmoud,Kuba-Mahmoud2}
have focused on the multiple-drawing urn. Instead of picking one ball,
one picks a sample of $m$ balls ($m\geq 1$), say $l$ white and
$m-l$ black balls. The sample is returned to the urn together
with $a_{m-l}$ white and $b_{l}$ black balls, where $a_l$ and
$b_l$, $0\leq l\leq m$, are integers. At first, they treated two
particular cases: when
\{$a_{m-l}=c\times l$ and
$b_{m-l}=c\times (m-l)$\} and when \{$a_{m-l}=c\times (m-l)$
and $b_{m-l}=c\times l$\}, where $c$ is a
positive constant. Using different methods, such as martingale and
moment methods, the authors described the asymptotic behavior of the urn
composition. When considering the general case, in order to
ensure the existence of a martingale, they
supposed that $W_n$, the number of white balls in
the urn after $n$ draws, satisfies the affinity condition, i.e.,
there exist two deterministic sequences $(\alpha_n)$ and
$(\beta_n)$ such that, for all $n\geq 0$,
$\mathbb{E}[W_{n+1}|\mathcal{F}_n]=\alpha_n W_{n}+\beta_n$. Under
this condition, the authors focused on small- and large-index urns.
Later, the affinity condition was removed in the work of C.
Mailler, N. Lasmer and S. Olfa \cite{C.N.O}, who generalized
this model and considered the case of more than two
colors.\\
In the present paper, we deal with an unbalanced urn model, which
has not been sufficiently addressed in the literature. It was
mainly dealt with in the works of R. Aguech \cite{R.Aguech}, S.
Janson \cite{S. Janson} and H. Renlund \cite{Renlund1, Renlund2}.
In \cite{R.Aguech} and \cite{S. Janson}, the authors dealt with a
model with a single pick, whereas in \cite{Renlund1,Renlund2} the
author considered a model with two picks and, under some
conditions, described the asymptotic behavior of the urn
composition.
In this paper, we aim to give a generalization of a recent work
\cite{A.L.O}. We deal with an unbalanced urn model with random
addition. We consider an urn containing balls of two different
colors, white and blue, and we suppose that the urn is nonempty at
time $0$. We denote by $W_n$ (resp. $B_n$) the number of white balls
(resp. blue balls) and by $T_n$ the total number of balls in the urn
at time $n$. Let $(X_n)_{n\geq 0}$ and $(Y_n)_{n\geq 0}$ be strictly
positive sequences of independent identically distributed discrete
random variables with finite means and variances. The model we
study is defined as follows: at each discrete time, we pick out a
sample of $m$ balls from the urn (we suppose that $T_0=W_0+B_0\geq
m$) and, according to the composition of the sample, we return the
balls together with $Q_n(\xi_n,m-\xi_n)^t$ additional balls, where
$Q_n$ is a $2\times
2$ matrix depending on the variables $X_n$ and $Y_n$ and $\xi_n$
is the number of white balls in the $n^{th}$ sample.
Let $(\mathcal{F}_n)_{n\ge 0}$ be the $\sigma$-field
generated by the first $n$ draws. We summarize
the evolution of
the urn by
the recurrence
\begin{equation}\label{recurrence}
\begin{pmatrix}
W_{n} \\
B_{n} \\
\end{pmatrix}\stackrel{\mathcal D}{=}
\begin{pmatrix}
W_{n-1} \\
B_{n-1} \\
\end{pmatrix}+Q_n
\begin{pmatrix}
\xi_{n} \\
m-\xi_{n} \\
\end{pmatrix}.
\end{equation}
Note that, with these notations, we have
\begin{equation*}\mathbb{P}[\xi_n=k|\mathcal{F}_{n-1}]=\displaystyle\frac{\binom {W_{n-1}} k \binom {B_{n-1}} {m-k}}{\binom {T_{n-1}} m}.\end{equation*}
The paper is organized as follows. In Section \ref{Main results},
we give the main results of the paper. In the first paragraph of Section \ref{Proofs}, we develop
Theorem 1 of \cite{Renlund1} and apply it to our urn model. The rest
of the section is devoted to the proofs of the theorems.
\textbf{Notation:} For a random variable $R$, we denote
by $\mu_R=\mathbb{E}(R)$ and
$\sigma_R^2=\mathbb{V}ar(R)$. Note that $\mu_X, \mu_Y, \sigma^2_X$
and $\sigma^2_Y$ are finite.\\
\section{Main Results}\label{Main results}
\begin{thm}\label{thmXopp}
Consider the urn model evolving by the matrix $Q_n=
\begin{pmatrix}
0 & X_n \\
X_n & 0 \\
\end{pmatrix}$. We have the following results:
\begin{enumerate}
\item \begin{equation}T_n\stackrel{a.s}{=}m\mu_X n
+o(\sqrt{n}\ \ln(n)^\delta),\end{equation} \begin{equation}
W_n\stackrel{a.s}{=}\frac{m\mu_X }{2}n+o(\sqrt{n}\
\ln(n)^{\delta})\quad \text{and}\quad
B_n\stackrel{a.s}{=}\frac{m\mu_X }{2}n+o(\sqrt{n}\
\ln(n)^{\delta});\quad\delta>\frac{1}{2}.\end{equation}
\item \begin{equation}\frac{W_n-\frac{1}{2}T_n}{\sqrt{n}}
\stackrel{\mathcal{L}}{\longrightarrow}\mathcal{N}\Big(0,\frac{m(\sigma_X^2+\mu_X^2)}{12}\Big).\end{equation}
\item \begin{equation}\frac{W_n-\mathbb{E}(W_n)}{\sqrt{n}}\stackrel{\mathcal{L}}{\longrightarrow}
\mathcal{N}\Big(0,\frac{m(\sigma_X^2+\mu_X^2)+m^2\sigma^2_X}{12}\Big).\end{equation}
\end{enumerate}
\end{thm}
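The statements of Theorem \ref{thmXopp} can be checked informally by simulation; this is only a sanity check, with helper name and parameter choices of our own:

```python
import random

def simulate_opp(W0, B0, m, sample_X, n, rng):
    """Simulate the scheme with Q_n = [[0, X_n], [X_n, 0]]: each white
    ball drawn adds X_n blue balls and vice versa (illustrative)."""
    W, B = W0, B0
    for _ in range(n):
        x = sample_X(rng)
        T = W + B
        xi = sum(1 for i in rng.sample(range(T), m) if i < W)
        W += x * (m - xi)
        B += x * xi
    return W, B

rng = random.Random(42)
# X uniform on {1, 2}, so mu_X = 1.5; m = 2 balls drawn per step
W, B = simulate_opp(4, 4, 2, lambda r: r.choice([1, 2]), 20000, rng)
Z = W / (W + B)
assert abs(Z - 0.5) < 0.05                     # Z_n -> 1/2 a.s.
assert abs((W + B) / 20000 - 2 * 1.5) < 0.1    # T_n ~ m*mu_X*n
```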
\begin{thm}\label{thmXself}
Consider the urn model evolving by the matrix $Q_n=
\begin{pmatrix}
X_n & 0 \\
0 & X_n \\
\end{pmatrix}$. There exists a positive random variable $\tilde W_\infty$, such that
\begin{equation}T_n\stackrel{a.s}{=}m\mu_X n
+o(\sqrt{n}\ \ln(n)^\delta),\quad W_n\stackrel{a.s}{=}\tilde
W_\infty n +o(n) \quad\mbox{and}\;\;B_n\stackrel{a.s}{=}
(m\mu_X-\tilde W_\infty)n+o(n).
\end{equation}
\end{thm}
\begin{rmq}
The random variable $\tilde W_\infty$ is absolutely continuous
whenever $X$ is bounded.
\end{rmq}
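A quick simulation illustrates the random limit in Theorem \ref{thmXself}; this is an informal sketch, with helper name and parameters of our own:

```python
import random

def simulate_self(W0, B0, m, sample_X, n, rng):
    """Q_n = [[X_n, 0], [0, X_n]]: each drawn ball is reinforced by
    X_n balls of its own color (a randomized Polya-type scheme)."""
    W, B, xs = W0, B0, []
    for _ in range(n):
        x = sample_X(rng)
        xs.append(x)
        T = W + B
        xi = sum(1 for i in rng.sample(range(T), m) if i < W)
        W += x * xi
        B += x * (m - xi)
    return W, B, xs

rng = random.Random(7)
W, B, xs = simulate_self(3, 3, 2, lambda r: r.choice([1, 3]), 5000, rng)
# the total is deterministic given the X_n: T_n = T_0 + m * sum(X_i)
assert W + B == 6 + 2 * sum(xs)
# W_n / T_n approaches a *random* limit in (0, 1), not a constant
assert 0 < W / (W + B) < 1
```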
\begin{thm}\label{thmXYopp} Consider the urn model evolving by
the matrix $Q_n=
\begin{pmatrix}
0 & X_n \\
Y_n & 0 \\
\end{pmatrix}.$ Let $z:=\frac{\sqrt{\mu_X}}{\sqrt{\mu_X}+\sqrt{\mu_Y}}$. We
have the following results:
\begin{enumerate}
\item \begin{equation}\label{SL-Total}T_n\stackrel{a.s}{=}m\sqrt{\mu_X}\sqrt{\mu_Y}\
n+o(n),\end{equation}
\begin{equation} W_n\stackrel{a.s}{=}m\sqrt{\mu_X}\sqrt{\mu_Y}\ z\ n+o(n)\quad\text{and}\quad B_n\stackrel{a.s}{=}m\sqrt{\mu_X}\sqrt{\mu_Y}(1-z)\ n+o(n).\end{equation}
\item \begin{equation}\frac{W_n-z T_n}{\sqrt{n}}\stackrel{\mathcal{L}}{\longrightarrow}\mathcal{N}\Big(0,\frac{ G(z)}{3}\Big),\end{equation}
where, \begin{equation*} G(x)=\sum_{i=0}^4a_ix^i,\end{equation*}
with
\begin{eqnarray*}a_0=m^2(\sigma^2_X+\mu_X^2)&,&a_1=m(1-2m)(\sigma_X^2+\mu_X^2),\\
a_2=3m(m-1)(\sigma_X^2+\mu_X^2)-2m(m-1)\mu_X\mu_Y &,&
a_3=m\mathbb{E}(X-Y)^2-2(m^2-m)\bigl(\sigma_X^2+\mu_X^2-\mu_X\mu_Y\bigr)\\
\text{and}\quad a_4=m(m-1)\mathbb{E}(X-Y)^2.\end{eqnarray*}
\end{enumerate}
\end{thm}
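Again as an informal illustration only, the limits in Theorem \ref{thmXYopp} can be observed numerically (all names and parameter choices below are ours):

```python
import random

def simulate_cross(W0, B0, m, sample_X, sample_Y, n, rng):
    """Q_n = [[0, X_n], [Y_n, 0]]: drawn blue balls trigger white
    additions (X_n each) and drawn white balls trigger blue additions
    (Y_n each) -- an illustrative sketch of the scheme."""
    W, B = W0, B0
    for _ in range(n):
        x, y = sample_X(rng), sample_Y(rng)
        T = W + B
        xi = sum(1 for i in rng.sample(range(T), m) if i < W)
        W += x * (m - xi)
        B += y * xi
    return W, B

rng = random.Random(1)
# deterministic X = 1, Y = 4: z = 1/(1+2) = 1/3 and T_n/n -> m*sqrt(4)
W, B = simulate_cross(5, 5, 2, lambda r: 1, lambda r: 4, 20000, rng)
assert abs(W / (W + B) - 1 / 3) < 0.05
assert abs((W + B) / 20000 - 4) < 0.2
```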
\begin{thm}\label{thmXYself}
Consider the urn evolving by the matrix $Q_n=
\begin{pmatrix}
X_n & 0 \\
0 & Y_n \\
\end{pmatrix}.$ We have the following results:
\begin{enumerate}
\item If $\mu_X > \mu_Y$,
\begin{equation}
T_n\stackrel{a.s}{=}m\mu_Xn+o(n),\quad
W_n\stackrel{a.s}{=}m\mu_Xn+o(n)\quad \text{and}\quad
B_n\stackrel{a.s}{=}B_{\infty}n^\rho+o(n^\rho),
\end{equation}
where $\rho=\frac{\mu_Y}{\mu_X}$ and $B_{\infty}$ is a positive
random variable.
\item If $\mu_X=\mu_Y$,
\begin{equation}T_n\stackrel{a.s}{=}m\mu_Xn+o(n),\quad W_n\stackrel{a.s}{=}W_{\infty}n+o(n)\quad \text{and}\quad B_n\stackrel{a.s}{=}(\mu_Xm-W_{\infty})\ n+o(n),\end{equation}
where $W_\infty$ is a positive random variable.
\end{enumerate}
\end{thm}
\begin{rmq}
The case when $\mu_X<\mu_Y$ is obtained by interchanging the
colors.
\end{rmq}
\textbf{Example:} For $m=1$, this particular case was studied by
R. Aguech \cite{R.Aguech}, who, using martingales and branching processes, proved the following results:\\
If $\mu_X>\mu_Y$,
\begin{equation*}W_n=\mu_X n+o(n),\quad B_n=D n^\rho\quad \text{and}\quad T_n=\mu_X n+o(n),\end{equation*}
where $D$ is a positive random variable.\\
If $\mu_X=\mu_Y$,
\begin{equation*}W_n=\mu_X \frac{W}{W+B}n+o(n)\quad \text{and}\quad B_n=\mu_X\frac{B}{W+B}n+o(n),\end{equation*}
where $W$ and $B$ are positive random variables obtained by
embedding some martingales in continuous time.
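The dominance phenomenon of Theorem \ref{thmXYself} is easy to observe numerically; the following sketch, with parameters of our own choosing, is only illustrative:

```python
import random

def simulate_diag(W0, B0, m, sample_X, sample_Y, n, rng):
    """Q_n = [[X_n, 0], [0, Y_n]]: each color reinforces itself, white
    by X_n per white ball drawn, blue by Y_n per blue ball drawn."""
    W, B = W0, B0
    for _ in range(n):
        x, y = sample_X(rng), sample_Y(rng)
        T = W + B
        xi = sum(1 for i in rng.sample(range(T), m) if i < W)
        W += x * xi
        B += y * (m - xi)
    return W, B

rng = random.Random(3)
# mu_X = 3 > mu_Y = 1: white takes over, B_n grows only like n^(1/3)
W, B = simulate_diag(5, 5, 2, lambda r: 3, lambda r: 1, 20000, rng)
assert W / (W + B) > 0.9
assert abs((W + B) / 20000 - 2 * 3) < 0.3
```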
\section{Proofs}\label{Proofs}
Stochastic approximation plays a crucial role in the
proofs, in order to describe the asymptotic composition of the urn.
As many versions of stochastic approximation algorithms exist in the
literature (see \cite{Duflo} for example), we adapt the version of
H. Renlund in \cite{Renlund1, Renlund2}.
\subsection{A basic tool: Stochastic approximation}
\begin{df}\label{def-algo}
A stochastic approximation algorithm $(U_n)_{n\geq 0}$ is a
stochastic process taking values in $[0,1]$ and adapted to a
filtration $\mathcal{F}_n$ that satisfies
\begin{equation}\label{eq:algo_sto}
U_{n+1}-U_n = \gamma_{n+1}\big(f(U_n)+\Delta M_{n+1}\big),
\end{equation}
where $(\gamma_n)_{n\geq 1}$ and $(\Delta M_n)_{n\geq 1}$ are two
$\mathcal F_n$-measurable sequences of random variables, $f$ is a
function from $[0,1]$ into $\mathbb R$ and the following
conditions hold almost surely.
\begin{description}
\item [(i)]$\frac{c_l}{n}\leq \gamma_n \leq \frac{c_u}{n}$,
\item [(ii)]$|\Delta M_n|\leq K_u,$
\item [(iii)]$ |f(U_n)|\leq K_f,$
\item [(iv)]$ \mathbb E[\gamma_{n+1}
\Delta M_{n+1}| \mathcal F_n] \leq K_e \gamma_n^2,$
\end{description}
where the constants $c_l, c_u, K_u, K_f, $ and $K_e$ are positive
real numbers.
\end{df}
\begin{df} Let
$Q_f=\{x; f(x)=0\}.$ A zero $p\in Q_f$ will be called stable if
there exists a neighborhood $\mathcal{N}_p$ of $p$ such that
$f(x)(x-p)<0$ whenever $x\in \mathcal{N}_p\setminus\{p\}.$ If $f$
is differentiable, then $f'(p)<0$ is sufficient for
$p$ to be stable.
\end{df}
\begin{thm}[\cite{Renlund1}]\label{th:renlund}Let $U_n$ be a
stochastic approximation algorithm as defined in Equation (\ref{eq:algo_sto}). If
$f$ is continuous, then $\displaystyle\lim_{n\rightarrow +\infty}
U_n$ exists almost surely and is in $Q_f$. Furthermore, if $p$ is
a stable zero, then $\mathbb{P}\Big(U_n\longrightarrow p\Big)>0.$
\end{thm}
\begin{rmq}
The conclusion of Theorem \ref{th:renlund} holds if we replace the
condition $(ii)$ in Definition \ref{def-algo} by the following
condition $\mathbb{E}[\Delta M_{n+1}^2|\mathcal{F}_n]\leq K_u$.
\end{rmq}
\begin{proof}[Proof of Theorem \ref{th:renlund}]
For the convenience of the reader, we adapt the proof of Theorem
\ref{th:renlund} and show that, under the new condition $(ii)'$:
$\mathbb{E}[\Delta M_{n+1}^2|\mathcal{F}_n]\leq
K_u$, the conclusion remains true. The following lemmas are useful.
\begin{lem}\label{V_n converges}
Let $V_n=\sum_{i=1}^n\gamma_i\Delta M_i$. Then, $V_n$ converges
almost surely.
\end{lem}
\begin{proof}
Set $A_i=\gamma_i\Delta M_i$ and $\tilde
A_i=\mathbb{E}[A_i|\mathcal{F}_{i-1}].$ Define the martingale
$C_n=\sum_{i=1}^n(A_i-\tilde A_i),$ then
\begin{eqnarray*}
\mathbb{E}(C_n^2)&\leq&\sum_{i=1}^n\mathbb{E}(A_i^2)=\sum_{i=1}^n\mathbb{E}(\gamma_i^2\Delta M_i^2)\\
&\leq&\sum_{i=1}^n\frac{c_u^2}{i^2}\mathbb{E}(\Delta M_i^2),
\end{eqnarray*}
Since, by assumption, there exists a positive constant $K_u$ such that
$\mathbb{E}[\Delta M_{n+1}^2|\mathcal{F}_n]\leq K_u$, we
conclude that $C_n$ is an $L^2$-martingale and thus converges almost surely.\\
Next, since \[\sum_{i\geq 2}|\tilde A_i|\leq\sum_{i\geq
2}\frac{c_u^2}{(i-1)^2}K_e<+\infty,\] the series $\sum_{i\geq
1}A_i$ must also converge almost surely.
\end{proof}
\begin{lem}\label{Q_f}Let $U_\infty$ be the set of accumulation points of
$\{U_n\}$ and $Q_f=\{x; f(x)=0\}$ be the zeros of $f$. Suppose
$f$ is continuous. Then,
\[\mathbb{P}\Big(U_\infty \subseteq Q_f\Big)=1.\]
\end{lem}
\begin{proof}
See \cite{Renlund1}
\end{proof}
Next, we prove the main result of the theorem.
If $\displaystyle\lim_{n\rightarrow +\infty} U_n$ does not exist, we can find two rational numbers in
the open interval $]\displaystyle\liminf_{n\rightarrow +\infty} U_n, \displaystyle\limsup_{n\rightarrow +\infty} U_n[$.\\
Let $p<q$ be two arbitrary different rational numbers. If we can
show that
\[\mathbb{P}\Big(\{\liminf U_n \leq p\}\cap\{\limsup U_n \geq q\}\Big)=0,\]
then, the existence of the limit will be established and the claim of
the theorem follows from Lemma \ref{Q_f}.\\
For this reason, we distinguish two cases, according to whether or not $p$
and $q$ lie in the same connected component of $Q_f$.\\
\textbf{Case 1:
$p$ and $q$ are not in the same connected
component of $Q_f$.}\\
See the proof in \cite{Renlund1}.\\
\textbf{Case 2:
$p$ and $q$ are in the same connected component of
$Q_f$.}\\
Let $p$ and $q$ be two arbitrary rational numbers such that $p$ and
$q$ are in the same connected component of $Q_f$.
Assume that $\displaystyle\liminf_{n\rightarrow +\infty} U_n \leq p$ and fix an
arbitrary $\varepsilon$ such that $0< \varepsilon \leq
q-p$.\\
We aim to show that $\displaystyle\limsup_{n\rightarrow +\infty}
U_n\leq q$; that is, it suffices to
show that $\displaystyle\limsup_{n\rightarrow +\infty} U_n\leq p+\varepsilon.$\\
In view of Lemma \ref{V_n converges},
$V_n=\sum_{i=1}^n\gamma_i\Delta M_i$ converges a.s.; hence there is a
random $N_1> 0$ such that, for all $n,m> N_1$, we have
$|V_n-V_m|<\frac{\varepsilon}{4}$ and
$\gamma_n\Delta M_n\leq \frac{\varepsilon}{4}$.\\
Let $N=\max(\frac{4K_f}{\varepsilon},N_1)$. By assumption, there is
some random $n>N$ such that $U_n-p<
\frac{\varepsilon}{2}$.\\
Let
\[\tau_1=\inf\{k\geq n; U_k\geq p\}\quad \text{and}\quad \sigma_1=\inf\{k>\tau_1; U_k<p\},\]
and define, for $n\geq 1,$
\[\tau_{n+1}=\inf \{k>\sigma_n; U_k\geq p\}\quad \text{and}\quad \sigma_{n+1}=\inf \{k>\tau_{n+1}; U_k<p\}.\]
Now, for all $k$ we have
\[U_{\tau_k}=U_{\tau_k-1}+\gamma_{\tau_k-1}(f(U_{\tau_k-1})+\Delta M_{\tau_k}).\]
Recall that $\gamma_{\tau_k-1}f(U_{\tau_k-1})\leq
\frac{K_f}{\tau_{k}-1}\leq \frac{K_f}{n}$; for $n\geq N\geq
\frac{4K_f}{\varepsilon}$ we have
$\gamma_{\tau_k-1}f(U_{\tau_k-1})<\frac{\varepsilon}{4}$. It
follows,
\[\gamma_{\tau_k-1}(f(U_{\tau_k-1})+\Delta M_{\tau_k})\leq \frac{K_f}{n}+\frac{\varepsilon}{4}\leq
\frac{\varepsilon}{4}+\frac{\varepsilon}{4}
=\frac{\varepsilon}{2}.\]
Note that $f(x)=0$
when $x \in [p,q]$ (since $p$ and $q$ lie in the same connected
component of $Q_f$). For $j$ such that
$\tau_k+j-1$ is a time before the exit time of the interval
$[p,q]$, we have
\[U_{\tau_k+j}=U_{\tau_k}+V_{\tau_k+j}-V_{\tau_k}.\]
As $|V_{\tau_k+j}-V_{\tau_k}|<\frac{\varepsilon}{4},$ we have
$U_{\tau_k+j}\leq
p+\frac{\varepsilon}{2}+\frac{\varepsilon}{4}\leq p+\varepsilon,$
so the process never exceeds $p+\varepsilon$ before
$\sigma_{k+1}$. We conclude that $\sup_{k\geq n} U_k\leq
p+\varepsilon.$\\
To establish that the limit is a stable point with positive
probability, we refer the reader to \cite{Renlund1} for a detailed proof.
\end{proof}
\begin{thm}[\cite{Renlund2}]\label{clt-renlund}
Let $(U_n)_{n\geq 0}$ satisfy Equation~\eqref{eq:algo_sto} and be such
that $\displaystyle\lim_{n\to +\infty} U_n = U^\star$. Let $\hat
\gamma_n:= n\gamma_n \hat f(U_{n-1})$ where $\hat f(x) =
\frac{-f(x)}{x-U^\star}$. Assume that $\hat \gamma_n$ converges
almost surely to some limit $\hat \gamma$. If
$\hat\gamma > \frac12$ and $\mathbb E[(n\gamma_n \Delta
M_n)^2|\mathcal F_{n-1}] \to \sigma^2 > 0$, then \[\sqrt n (U_n
-U^\star) \to \mathcal N\Big(0, \frac{\sigma^2}{2\hat\gamma
-1}\Big).\]
\end{thm}
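To fix ideas, here is a toy instance of the recursion (\ref{eq:algo_sto}); the choices of $f$, $\gamma_n$ and the noise are our own illustrative ones, not taken from \cite{Renlund1,Renlund2}:

```python
import random

def stochastic_approx(u0, f, n, rng):
    """Toy sketch of U_{n+1} = U_n + gamma_{n+1}(f(U_n) + dM_{n+1}),
    with gamma_n = 1/n and bounded centered noise dM (illustrative)."""
    u = u0
    for k in range(1, n + 1):
        noise = rng.uniform(-0.5, 0.5)   # E[dM | past] = 0, |dM| bounded
        u += (1.0 / k) * (f(u) + noise)
        u = min(max(u, 0.0), 1.0)        # keep the process in [0, 1]
    return u

rng = random.Random(0)
# f(x) = 1 - 2x has the unique stable zero p = 1/2 (f'(p) = -2 < 0)
u = stochastic_approx(0.9, lambda x: 1 - 2 * x, 10000, rng)
assert abs(u - 0.5) < 0.05
```

The observed fluctuations around $1/2$ are of order $n^{-1/2}$, in line with the central limit theorem above.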
\subsection{Proof of the main results}
\begin{proof}[Proof of Theorem \ref{thmXopp}] Consider the
urn model defined in Equation (\ref{recurrence}) with $Q_n=
\begin{pmatrix}
0 & X_n \\
X_n & 0 \\
\end{pmatrix}$.
We have the following recursions:
\begin{equation}\label{recurrence-opp2}W_{n+1}=W_n+X_{n+1}(m-\xi_{n+1})\quad \text{and} \quad T_{n+1}=T_n+mX_{n+1}.\end{equation}
\textbf{Proof of claim 1}
\begin{lem}
Let $Z_n=\frac{W_n}{T_n}$ be the proportion of white balls in the
urn after $n$ draws. Then, $Z_n $ satisfies the stochastic
approximation algorithm defined by (\ref{eq:algo_sto}) with
$\gamma_n=\frac{1}{T_n}$, $f(x)=\mu_X m(1-2x)$ and $\Delta
M_{n+1}=X_{n+1}(m-\xi_{n+1}-mZ_n)-\mu_X m(1-2Z_n)$.
\end{lem}
\begin{proof}
We need to check the conditions of Definition \ref{def-algo}.\\
\begin{description}
\item [(i)] Recall that $T_n=T_0+m\sum_{i=1}^nX_i$, where the
$(X_i)_{i\geq 1}$ are iid random variables. It follows, by
Rajchman's strong law of large numbers, that
\begin{equation}\label{asymp-T_n}T_n\stackrel{a.s}{=}\mu_X mn
+o(\sqrt{n}\ \ln(n)^\delta),\quad \delta
>\frac{1}{2};\end{equation} hence, for $n$ large enough,
$\frac{1}{(m\mu_X+1)n}\leq\frac{1}{T_n}\leq\frac{2}{m\mu_X n},$
so that $c_l=\frac{1}{m\mu_X+1}$ and $c_u=\frac{2}{m\mu_X},$
\item[(ii)] $\mathbb{E}[\Delta
M_{n+1}^2|\mathcal{F}_n]\leq (6m^2+m)\mathbb{E}(X^2)+9m^2\mu_X^2=K_u,$\\
\item[(iii)] $|f(Z_n)|=m\mu_X|1-2Z_n|\leq 3m\mu_X=K_f$,\\
\item[(iv)]$\mathbb{E}(\gamma_{n+1}\Delta
M_{n+1}|\mathcal{F}_n)\leq \frac{1}{T_n}\mathbb{E}(\Delta
M_{n+1}|\mathcal{F}_n)=0=K_e$.
\end{description}
\end{proof}
\begin{prop}\label{prop1/2}
The proportion of white balls in the urn
after $n$ draws, $Z_n$, converges almost surely to $\frac{1}{2}$.
\end{prop}
\begin{proof}[Proof of Proposition \ref{prop1/2}]
Since the process $Z_n$ satisfies the stochastic approximation
algorithm defined by Equation (\ref{eq:algo_sto}), we apply
Theorem \ref{th:renlund}. As the function $f$ is continuous we
conclude that $Z_n$ converges almost surely to $\frac{1}{2}$: the
unique stable zero of the function $f$.
\end{proof}
We apply the previous results to the urn composition. As we can
write $\frac{W_n}{n}=\frac{W_n}{T_n}\frac{T_n}{n}$, we deduce from
Proposition \ref{prop1/2} and Equation (\ref{asymp-T_n}) that
$\frac{W_n}{n}\stackrel{a.s}{=}\bigl(\frac{1}{2}+o(1)\bigr)\Bigl(\mu_Xm+o\Bigl(\frac{\ln(n)^\delta}{\sqrt{n}}\Bigr)\Bigr),$
then this corollary follows:
\begin{cor}
The number of white balls in the urn after $n$ draws, $W_n$,
satisfies for $n$ large enough
\begin{equation*}W_n\stackrel{a.s}{=}\frac{\mu_X m}{2}n+o(\sqrt{n}\ \ln(n)^\delta),\quad \delta >\frac{1}{2}.\end{equation*}
\end{cor}
\textbf{Proof of claim 2} We aim to apply Theorem
\ref{clt-renlund}. For this reason, we need to compute the following limits:
\begin{equation*}
\lim_{n\rightarrow \infty}\mathbb{E}[\bigl(\frac{n}{T_n}\bigr)^2\Delta
M_{n+1}^2|\mathcal{F}_n]\quad \text{and}\quad
\lim_{n\rightarrow \infty}-\frac{n}{T_n}f'(Z_n).
\end{equation*}
We have
\begin{eqnarray*}
\mathbb{E}[\Delta
M_{n+1}^2|\mathcal{F}_n]&=&\mathbb{E}(X_{n+1}^2)\mathbb{E}[(m-\xi_{n+1}-mZ_n)^2|\mathcal{F}_n]+\mu_X^2\mathbb{E}[(m-2mZ_n)^2|\mathcal{F}_n]
\\ &&-2\mu_X^2\mathbb{E}[(m-\xi_{n+1}-mZ_n)(m-2mZ_n)|\mathcal{F}_n]\\
&=&(\sigma_X^2+\mu_X^2)\Big[m^2-4m^2Z_n+4m^2Z_n^2+mZ_n(1-Z_n)\frac{T_n-m}{T_n-1}\Big]-\mu_X^2[m^2+4m^2Z_n^2-4m^2Z_n].
\end{eqnarray*}
As $n$ tends to infinity, we have $Z_n
\stackrel{a.s}{\longrightarrow} \frac{1}{2}$ and
$\frac{T_n-m}{T_n-1}\stackrel{a.s}{\longrightarrow} 1$. Then,
\begin{equation*}\lim_{n\rightarrow \infty}\mathbb{E}[\Delta
M_{n+1}^2|\mathcal{F}_n]\stackrel{a.s}{=}(\sigma_X^2+\mu_X^2)\frac{m}{4}\quad
\text{and}\quad \lim_{n\rightarrow
\infty}-\frac{n}{T_n}f'(Z_n)\stackrel{a.s}{=}2.\end{equation*}
According to Theorem \ref{clt-renlund},
$\sqrt{n}(Z_n-\frac{1}{2})$ converges in distribution to
$\mathcal{N}(0,\frac{\sigma_X^2+\mu_X^2}{12\mu_X^2m})$. Finally,
by writing
$\Big(\frac{W_n-\frac{1}{2}T_n}{\sqrt{n}}\Big)=\sqrt{n}(Z_n-\frac{1}{2})\frac{T_n}{n}$,
we conclude using Slutsky's theorem.\\
\textbf{Proof of claim 3} To prove this claim, we follow the proof
of Lemma 3 and Theorem 2 in \cite{A.L.O}. Using the same methods,
we show in a first step that the variables $(X_n(m-\xi_n))_{n\geq
0}$ are $\alpha$-mixing variables with a strong mixing coefficient
$\alpha(n)=o\Big(\frac{\ln(n)^\delta}{\sqrt{n}}\Big)$, $\delta
>\frac{1}{2}$. To conclude, we adapt the Bernstein method.
Consider the same notation as in Theorem 2 of \cite{A.L.O}, and
define $S_n=\frac{1}{\sqrt{n}}\sum_{i=1}^n\tilde\xi_i$ where
$\tilde\xi_i=X_i(m-\xi_i)-\mu_X(m-\mathbb{E}(\xi_i))$. First,
we need to estimate the variance of $W_n$.
\begin{prop}\label{var}
The variance of $W_n$ satisfies
\begin{equation}\label{variance}\mathbb{V}ar(W_n)=\frac{m(\sigma_X^2+\mu_X^2)+m^2\sigma_X^2}{12}\ n+o(\sqrt{n}\ \ln(n)^\delta),\quad \delta>\frac{1}{2}.\end{equation}
\end{prop}
\begin{proof}[Proof of Proposition \ref{var}]
Recall that the number of white balls in the urn satisfies
Equation (\ref{recurrence-opp2}), then
\begin{equation*}\mathbb{V}ar(W_{n+1})=\mathbb{V}ar(W_n)+\mathbb{V}ar(X_{n+1}(m-\xi_{n+1}))+2\ \mathbb{C}ov(W_{n},X_{n+1}(m-\xi_{n+1})).\end{equation*}
We have
$\mathbb{V}ar(X_{n+1}(m-\xi_{n+1}))=(\sigma_X^2+\mu_X^2)\Big(\mathbb{V}ar(mZ_{n})
+\mathbb{E}\Big(mZ_{n}(1-Z_{n})\frac{T_{n}-m}{T_{n}-1}\Big)\Big)+\sigma_X^2\bigl(\mathbb{E}(m-\xi_{n+1})\bigr)^2.$
Using Equation (\ref{asymp-T_n}) and the fact that
$Z_n\stackrel{a.s}{\rightarrow}\frac{1}{2}$, we obtain
\begin{eqnarray*}\mathbb{V}ar(W_{n+1})&=&\Big(1-\frac
2n+o\Big(\frac{\ln(n)^\delta}{n^{\frac32}}\Big)\Big)\mathbb{V}ar(W_{n})
+\frac{m(\sigma_X^2+\mu_X^2)+m^2\sigma_X^2}{4}+o\Big(\frac{\ln(n)^\delta}{\sqrt
n}\Big)\\
&=&a_n\mathbb{V}ar(W_{n})+b_n,\end{eqnarray*} where
$a_n=\Bigl(1-\frac
2n+o\Big(\frac{\ln(n)^\delta}{n^{\frac32}}\Big)\Bigr)$ and
$b_n=\frac{m(\sigma_X^2+\mu_X^2)+m^2\sigma_X^2}{4}+o\Big(\frac{\ln(n)^\delta}{\sqrt
n}\Big).$\\
Thus, \begin{equation*} \mathbb{V}ar(W_n)=\Big(\prod_{k=1}^n
a_k\Big)\Big(\mathbb{V}ar(W_0)+\sum_{k=0}^{n-1}\frac{b_k}{\prod_{j=0}^ka_j}\Big).
\end{equation*}
There exists a constant $a$ such that
$\prod_{k=1}^na_k=\displaystyle\frac{e^{a}}{n^2}\Big(1+o\Big(\frac{\ln(n)^\delta}{\sqrt
n}\Big)\Big)$, which leads to
\begin{equation*}
\mathbb{V}ar(W_n)=\frac{m(\sigma_X^2+\mu_X^2)+m^2\sigma_X^2}{12}n+o(\sqrt
n\ln(n)^\delta),\quad \delta>\frac{1}{2}.
\end{equation*}
\end{proof}
Recall that we follow the proof of Theorem 2 in \cite{A.L.O};
using Equation (\ref{variance}), we conclude that
\begin{equation}\frac{W_n-\mathbb{E}(W_n)}{\sqrt{n}}\stackrel{\mathcal{L}}{\longrightarrow}
\mathcal{N}\Bigl(0,\frac{m(\sigma_X^2+\mu_X^2)+m^2\sigma_X^2}{12}\Bigr).
\end{equation}
\end{proof}
\begin{proof}[Proof of Theorem \ref{thmXself}]
Consider the urn model defined in (\ref{recurrence}) with $Q_n=
\begin{pmatrix}
X_n & 0 \\
0 & X_n \\
\end{pmatrix}$. The following recurrences hold:
\begin{equation}\label{W-self}
W_{n+1}=W_n+X_{n+1}\xi_{n+1}\quad \text{and}\quad
T_{n+1}=T_n+mX_{n+1}.
\end{equation}
As $T_n$ is a sum of iid random variables, $T_n$ satisfies the
following
\begin{equation}\label{totalself}T_n\stackrel{a.s}{=}m\mu_X n+o(\sqrt{n}\ln(n)^\delta),\quad \delta>\frac{1}{2}.\end{equation}
The processes $\tilde
M_{n}=\prod_{k=1}^{n-1}\Big(\frac{T_k}{T_k+m\mu_X}\Big)W_n$ and
$\tilde N_n=\prod_{k=1}^{n-1}\Big(\frac{T_k}{T_k+m\mu_X}\Big)B_n$
are two positive $\mathcal{F}_n$-martingales. In view of
(\ref{totalself}), we have
$\prod_{k=1}^{n-1}\Big(\frac{T_k}{T_k+m\mu_X}\Big)\stackrel{a.s}{=}\displaystyle\frac{e^{\gamma}}{n}\Big(1+o\Big(\frac{\ln
(n)^\delta}{\sqrt n}\Big)\Big)$ for a positive constant $\gamma$.
Thus, there exist nonnegative random variables $\tilde W_\infty$
and $\tilde B_{\infty}$ such that $\tilde W_\infty+\tilde
B_\infty\stackrel{a.s}{=}m\mu_X$ and
\begin{equation*}\frac{W_n}{n}\stackrel{a.s}{\longrightarrow}\tilde W_\infty,\quad
\text{and}\quad \frac{B_n}{n}\stackrel{a.s}{\longrightarrow}\tilde
B_{\infty}.\end{equation*}
\textbf{Example: } In the original P\'olya urn model \cite{Polya},
when $m=1$ and $X=C$ (deterministic), the random variable $\tilde
W_\infty/C$ has a $Beta(\frac{W_0}{C},\frac{B_0}{C})$ distribution
\cite{Athreya-Ney,S. Janson}. M.R. Chen and M. Kuba
\cite{Chen-Kuba} considered the case where $X=C$ (non random) and
$m>1$; they gave the moments of all orders of $W_n$ and proved that
$\tilde W_\infty$ cannot have an ordinary $Beta$ distribution.
\begin{rmq}
Suppose that the random variable $X$ has moments of all orders,
and let $m_k=\mathbb{E}(X^k)$ for $k\ge 1$. We have, almost
surely, $W_n\le T_n$; then, by Minkowski's inequality, we obtain
$\mathbb{E}(W_n^{2k})\leq (mn)^{2k}\mathbb{E}(X^{2k})$. Using
Carleman's condition we conclude that, if
$\sum_{k\ge1}m_{2k}^{-\frac{1}{2k}}=\infty$, then the random
variable $\tilde W_\infty$ is determined by its moments.
Unfortunately, we are still unable to give exact expressions for
the moments of all orders of $W_n$. However, we can characterize
the distribution of $\tilde W_\infty$ in the case when the
variable $X$ is bounded.
\end{rmq}
\begin{lem}
\label{Abs_con} Assume that $X$ is a bounded random variable,
then, for fixed $W_0,B_0$ and $m$ the random variable $\tilde
W_\infty$ is absolutely continuous.
\end{lem}
The proof that $\tilde W_\infty$ is absolutely continuous is very
close to that of Theorem 4.2 in \cite{Chen-Wei}. We give the main
proposition to make the proof clearer.
\begin{prop}\cite{Chen-Wei}
Let $(\Omega_\ell)_{\ell\ge 0}$ be a sequence of increasing events such that
$\mathbb{P}(\cup_{\ell \ge 0}\Omega_\ell)=1$. If there exist
nonnegative Borel measurable functions $\{f_\ell\}_{\ell\geq 1}$
such that $\mathbb{P}\Big(\Omega_\ell\cap \tilde
W_\infty^{-1}(B)\Big)=\int_Bf_\ell (x)dx$ for all Borel sets $B$,
then $f=\displaystyle\lim_{\ell\rightarrow+\infty}f_\ell$ exists
almost everywhere and $f$ is the density of $\tilde W_\infty$.
\end{prop}
Let
$(\Omega,\mathcal F,\mathbb{P})$ be a probability space. Suppose
that there exists a constant $A$ such that $X\le A$ almost
surely. \begin{lem} Define the events
\begin{equation*}
\Omega_{\ell}:=\{W_\ell\ge m A \;\mbox{and}\;B_\ell\ge mA\},
\end{equation*}
then, $(\Omega_{\ell})_{\ell\geq 0}$ is a sequence of increasing
events, moreover we have $\mathbb{P}(\cup_{\ell \ge
0}\Omega_\ell)=1$.
\end{lem}
Next, we just need to show that the restriction of $\tilde
W_\infty$ to $\Omega_{\ell,j}=\{\omega;\ W_\ell(\omega)=j\}$ has a
density for each $j$, with $Am\leq j\leq T_{\ell-1}.$ Let
$(p_c)_{c\in\text{supp}(X)}$ denote the distribution of $X$.
\begin{lem}
For a fixed $\ell>0$, there exists a positive constant $\kappa$,
such that, for every $c\in\text{supp(X)}$, $n\ge \ell+1$, $Am\le
j\le T_{\ell-1}$ and $k\le Am(n+1)$, we have
\begin{equation}
\label{Inequality_WEI} \sum_{i=0}^m
\mathbb{P}(W_{n+1}=j+k|W_n=j+k-ci)\le p_c(1-\frac
1n+\frac{\kappa}{n^2}).
\end{equation}
\end{lem}
\begin{proof}
According to Lemma 4.1 \cite{Chen-Wei},
for $Am \leq
j\leq T_{\ell -1}$, $n\geq \ell$ and $k\leq Am(n+1)$, the
following holds:
\begin{equation}\label{step2}\sum_{i=0}^m{j+c(k-i)\choose
i}{T_n-j-c(k-i)\choose
m-i}=\frac{T_n^m}{m!}+\frac{(1-m-2c)T_n^{m-1}}{2(m-1)!}+...,\end{equation}
which is a polynomial in $T_n$ of degree $m$ with coefficients
depending on $W_0, B_0, m$ and $c$ only.\\
Let $u_{n,k}(c)=\sum_{i=0}^m \mathbb{P}(W_{n+1}=j+k|W_n=j+k-ic)$.
Applying Equation (\ref{step2}) to our model we have
\begin{eqnarray}
\label{Majoration1} u_{n,k}(c)&=&p_c\sum_{i=0}^m{j+k\choose
i}{T_n-j-k\choose m-i}{T_n\choose m}^{-1} \nonumber
\\
&=&p_c{T_n\choose m}^{-1}\Big(\frac{T_n^{m}}{m!}+
\frac{(1-m-2c)}{(m-1)!}T_n^{m-1}+\ldots\Big)\Big(\frac{T_n^m}{m!}+\frac{(1-m)}{2(m-1)!}T_n^{m-1}+\ldots\Big)^{-1}\nonumber
\\
&\stackrel{a.s}{=}&
p_c\Big(1-\frac{1}{n}+O\Big(\frac{1}{n^2}\Big)\Big).
\end{eqnarray}
\end{proof}
We only sketch the rest of the proof, mentioning the main differences with
Lemma 4.1 in \cite{Chen-Wei}. For a fixed $\ell$ and $n\ge \ell+1$, we denote by $v_{n,j}=\displaystyle\max_{0\leq k\leq
Amn}\mathbb{P}\bigl(W_{\ell+n}=j+k|W_\ell=j\bigr)$. We have the
following inequality:
\begin{eqnarray*}
v_{n+1,j}&\le & \max_{0\le k\le
Am(n+1)}\Big\{\sum_{i=0}^m\sum_{c\in\text{supp}{(X)}}\mathbb{P}(W_{\ell+
n+1}=j+k|W_{\ell+n}=j+k-ci)\Big\}\nonumber
\\
&\le& \max_{0\le k\le Am(n+1)}\Big\{\sum_{i=0}^m\sum_{c\in\text{supp}{(X)}}\mathbb{P}(W_{\ell+n+1}=j+k|W_{\ell+n}=j+k-ci)\nonumber\\
&&\times \mathbb{P}(W_{\ell+n}=j+k-ci|W_\ell=j)\Big\}\nonumber\\
&\le&\max_{0\le k\le
Am(n+1)}\sum_{i=0}^m\sum_{c\in\text{supp}{(X)}}\mathbb{P}(W_{\ell+n+1}=j+k|W_{\ell+n}=j+k-ci)\\
&&\times \max_{0\leq \tilde k\leq
Amn}\mathbb{P}\bigl(W_{\ell+n}=j+\tilde
k|W_\ell =j\bigr)\\
&\le&\sum_{c\in\text{supp}{(X)}}p_c\Big(1-\frac{1}{n+\ell}+\frac{\kappa}{(n+\ell)^2}\Big)v_{n,j}\\&&=\Big(1-\frac{1}{n+\ell}+\frac{\kappa}{(n+\ell)^2}\Big)v_{n,j}.
\end{eqnarray*}
This implies that there exists some positive constant $C(\ell)$,
depending on $\ell$ only, such that, for a fixed $\ell$ and for
all $n\ge \ell+1$, we get
\begin{equation}
\max_{0\leq k\leq
Am(n-\ell)}\mathbb{P}\bigl(W_n=j+k|W_\ell=j\bigr)\le\prod_{i=\ell}^n\Big(1-\frac1i+\frac{\kappa}{i^2}\Big)\le
\frac{C(\ell)}{n}.
\end{equation}
The rest of the proof follows as in \cite{Chen-Wei}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thmXYopp}]
Consider the urn model evolving by the matrix $Q_n=
\begin{pmatrix}
0 & X_n \\
Y_n & 0 \\
\end{pmatrix}$. According to Equation (\ref{recurrence}), we have
the following recursions:
\begin{equation}\label{opposite-rec}W_{n+1}=W_n+X_{n+1}(m-\xi_{n+1})\quad \text{and}\quad T_{n+1}=T_n+mX_{n+1}+\xi_{n+1}(Y_{n+1}-X_{n+1}).\end{equation}
\begin{lem}
The proportion of white balls after $n$ draws, $Z_n$, satisfies the stochastic algorithm
defined by (\ref{eq:algo_sto}), where
$f(x)=m(\mu_X-\mu_Y)x^2-2\mu_Xmx+\mu_Xm$, $\gamma_n=\frac{1}{T_n}$
and $\Delta M_{n+1}=D_{n+1}-\mathbb{E}[D_{n+1}|\mathcal{F}_n]$,
with $D_{n+1}=\xi_{n+1}(Z_n(X_{n+1}-Y_{n+1})-X_{n+1})+mX_{n+1}(1-Z_n)$.
\end{lem}
\begin{proof}
We check the conditions of Definition \ref{def-algo}, indeed,
\begin{description}
\item[(i)] recall that
$T_n=T_0+m\sum_{i=1}^nX_i+\sum_{i=1}^n\xi_i(Y_i-X_i)$, then $\frac{T_n}{n}\leq\frac{T_0}{n}+\frac{m}{n}\sum_{i=1}^nX_i+\frac{m}{n}\sum_{i=1}^n|Y_i-X_i|.$
By the strong law of large numbers, for $n$ large enough, we have $\frac{T_n}{n}\leq
m(\mu_X+\mu_{|Y-X|})+1$. On the other hand, we have $T_n\geq \displaystyle\min_{1\leq i\leq n}(X_i, Y_i)\ m n,$
thus, the following bound holds
\begin{equation*}\frac{1}{(m(\mu_X+\mu_{|Y-X|})+1)n}\leq \frac{1}{T_n}\leq \frac{1}{m \displaystyle\min_{1\leq i\leq n}(X_i,Y_i)\ n},\end{equation*}
then $c_l=\frac{1}{m(\mu_X+\mu_{|Y-X|})+1}$ and $c_u=\frac{1}{m \displaystyle\min_{1\leq i\leq n}(X_i,Y_i)},$\\
\item[(ii)] $\mathbb{E}[\Delta M_{n+1}^2|\mathcal{F}_n]
\leq
(\mu_{(X-Y)^2}+3\mu_X)(m+m^2)+5m^2\mu_{X^2}+2m^2\mu_X\mu_Y+m^2(|\mu_X-\mu_Y|+3\mu_X)=K_u,$
\item[(iii)]$|f(Z_n)|\leq
m(|\mu_Y-\mu_X|+3\mu_X)=K_f,$
\item[(iv)] $\mathbb{E}[\frac{1}{T_{n+1}}\Delta
M_{n+1}|\mathcal{F}_n]\leq \frac{1}{T_n}\mathbb{E}[\Delta
M_{n+1}|\mathcal{F}_n]=0$
\end{description}
\end{proof}
\begin{prop}
The proportion of white balls in the urn after $n$ draws, $Z_n$,
satisfies as $n$ tends to infinity
\begin{equation}\label{conv-proportion}Z_n\stackrel{a.s}{\longrightarrow}z:=\frac{\sqrt{\mu_X}}{\sqrt{\mu_X}+\sqrt{\mu_Y}}.\end{equation}
\end{prop}
\begin{proof}
The proportion of white balls in the urn satisfies the stochastic
approximation algorithm defined in (\ref{eq:algo_sto}). As the
function $f$ is continuous, by Theorem \ref{th:renlund}, the
process $Z_n$ converges almost surely to
$z=\frac{\sqrt{\mu_X}}{\sqrt{\mu_X}+\sqrt{\mu_Y}}$,
the unique zero of $f$ in $[0,1]$, at which $f$ has a negative derivative.
\end{proof}
Next, we give an estimate of $T_n$, the total number of balls in
the urn after $n$ draws, in order to describe the asymptotic of
the urn composition. By Equation (\ref{opposite-rec}), we have
\begin{equation*}\frac{T_n}{n}=\frac{T_0}{n}+\frac{m}{n}\sum_{i=1}^nX_i
+\frac{m(\mu_Y-\mu_X)}{n}\sum_{i=1}^nZ_{i-1}+\frac{1}{n}\sum_{i=1}^n\Big[\xi_i(Y_i-X_i)-\mathbb{E}[\xi_i(Y_i-X_i)|\mathcal{F}_{i-1}]\Big].\end{equation*}
Since $(X_i)_{i\geq 1}$ are iid random variables, then by the
strong law of large numbers we have
$\frac{m}{n}\sum_{i=1}^nX_i\stackrel{a.s}{\rightarrow} m\mu_X$.
Via Ces\'aro lemma, we conclude that
$\frac{1}{n}\sum_{i=1}^nZ_{i-1}$ converges almost surely, as $n$
tends to infinity, to $z$. Finally, we prove that the last term on the
right-hand side tends to zero as $n$ tends to infinity. In fact, let
$G_n=\sum_{i=1}^n\Big[\xi_i(Y_i-X_i)-\mathbb{E}[\xi_i(Y_i-X_i)|\mathcal{F}_{i-1}]\Big]$,
then $(G_n,\mathcal{F}_n)$ is a martingale difference sequence
such that
\begin{equation*}\frac{<G>_n}{n}=\frac{1}{n}\sum_{i=1}^n\mathbb{E}[\nabla G_i^2|\mathcal{F}_{i-1}],\end{equation*}
where $\nabla
G_n=G_n-G_{n-1}=\xi_n(Y_n-X_{n})-\mathbb{E}[\xi_n(Y_n-X_{n})|\mathcal{F}_{n-1}]$ and $<G>_n$ denotes the quadratic variation of the martingale.\\
By a simple computation, we have the almost sure convergence of
$\mathbb{E}[\nabla G_i^2|\mathcal{F}_{i-1}]$ to $(mz
(1-z)+m^2z^2)(\sigma_Y^2+\sigma_X^2)$. Therefore, Ces\'aro lemma
ensures that $\frac{<G>_n}{n}$ converges to $(mz
(1-z)+m^2z^2)(\sigma_Y^2+\sigma_X^2)$ and
$\frac{G_n}{n}\stackrel{a.s}{\longrightarrow} 0$. Thus, as $n$
tends to infinity, we have
\begin{equation}\label{T_n-convergence}\frac{T_n}{n}\stackrel{a.s}{\longrightarrow}
m\sqrt{\mu_X}\sqrt{\mu_Y}.\end{equation} In view of Equation
(\ref{T_n-convergence}), we describe the asymptotic behavior of
the urn composition after $n$ draws. One can write
$\frac{W_n}{n}\frac{W_n}{T_n}\frac{T_n}{n}$ and
$\frac{B_n}{n}\stackrel{a.s}{=}\frac{B_n}{T_n}\frac{T_n}{n}$,
using Equations (\ref{conv-proportion}, \ref{T_n-convergence}) and
Slutsky theorem, we have, as $n$ tends to infinity,
$\frac{W_n}{n}\stackrel{a.s}{\longrightarrow}m\sqrt{\mu_X}\sqrt{\mu_Y}
z$ and $\frac{B_n}{n}\stackrel{a.s}{\longrightarrow}
m\sqrt{\mu_X}\sqrt{\mu_Y}(1-z)$.\\
\textbf{Proof of claim 2}\\
Next, we aim to apply Theorem \ref{clt-renlund}. In our model, we have $\gamma_n=\frac{1}{T_n}$, so we need to
control the following asymptotic behaviors
\begin{equation*}
\lim_{n\rightarrow +\infty}\mathbb{E}[\Big(\frac{n}{T_n}\Big)^2\Delta
M_{n+1}^2|\mathcal{F}_n]\quad \text{and}\quad
\lim_{n\rightarrow +\infty}-\frac{n}{T_n}f'(Z_n).
\end{equation*}
In fact, recall that $\frac{n}{T_n}$ converges almost surely to
$\frac{1}{m\sqrt{\mu_X}\sqrt{\mu_Y}}$ and $\mathbb{E}[\Delta
M_{n+1}^2|\mathcal{F}_n]=\mathbb{E}[D_{n+1}^2|\mathcal{F}_n]-\mathbb{E}[D_{n+1}|\mathcal{F}_n]^2$.
Since $\mathbb{E}[D_{n+1}|\mathcal{F}_n]^2$ converges almost
surely to $f(z)^2=0$, we have,
\begin{eqnarray*}\mathbb{E}[D_{n+1}^2|\mathcal{F}_n]&=&\mathbb{E}\Big[Z_n^2(X_{n+1}-Y_{n+1})^2-2Z_nX_{n+1}(X_{n+1}-Y_{n+1})+X_{n+1}^2|\mathcal{F}_n\Big]
\mathbb{E}[\xi_{n+1}^2|\mathcal{F}_n]+m^2\mathbb{E}(X^2)\\&&+2m^2\Bigl(Z_n^2(\mathbb{E}(X^2)-\mu_X\mu_Y)-Z_n\mathbb{E}(X^2)\Bigr).\end{eqnarray*}
Using the fact that
$\mathbb{E}[\xi_{n+1}^2|\mathcal{F}_n]=mZ_n(1-Z_n)\frac{T_n-m}{T_n-1}+m^2Z_n^2$
and that $Z_n$ converges almost surely to $z$, we conclude that
$\mathbb{E}[D_{n+1}^2|\mathcal{F}_n]$ converges almost surely to
$G(z)>0.$ Applying Theorem \ref{clt-renlund}, we obtain the
following
\begin{equation}\sqrt{n}(Z_n-z)\stackrel{\mathcal{L}}{\longrightarrow}\mathcal{N}\Big(0,\frac{G(z)}{3m^2\mu_X\mu_Y}\Big). \end{equation}
But, we can write
$\frac{W_n-zT_n}{\sqrt{n}}=\sqrt{n}\bigl(\frac{W_n}{T_n}-z\bigr)\frac{T_n}{n}$.
Thus, it is enough to use Slutsky's theorem to conclude the proof.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thmXYself}] Consider
the urn model defined in (\ref{recurrence}) with $Q_n=
\begin{pmatrix}
X_n & 0\\
0& Y_n \\
\end{pmatrix}$. The urn composition satisfies the following recursions:
\begin{equation}\label{recurence-self1}
W_{n+1}=W_n+X_{n+1}\xi_{n+1}\quad \text{and} \quad
T_{n+1}=T_n+mY_{n+1}+\xi_{n+1}(X_{n+1}-Y_{n+1}).\end{equation}
\begin{lem}\label{algo}
If $\mu_X\neq \mu_Y$, the proportion of white balls in the urn
after $n$ draws satisfies the stochastic algorithm defined by
(\ref{eq:algo_sto}) where $\gamma_n=\frac{1}{T_n}$,
$f(x)=m(\mu_Y-\mu_X)x(x-1)$ and $\Delta
M_{n+1}=D_{n+1}-\mathbb{E}[D_{n+1}|\mathcal{F}_n]$ with
$D_{n+1}=\xi_{n+1}(Z_n(Y_{n+1}-X_{n+1})+X_{n+1})-mZ_nY_{n+1}$.\\
\end{lem}
\begin{proof}
We check that, if $\mu_X\neq\mu_Y$, the conditions of Definition
\ref{def-algo} hold. Indeed, \begin{description} \item[(i)] as
$T_n=T_0+m\sum_{i=1}^nY_i+\sum_{i=1}^n\xi_i(X_i-Y_i)$,
then via the strong law of large numbers, for $n$ large enough, we have $\frac{T_n}{n}\leq
m\mu_Y+m\mu_{|X-Y|}+1$.
On the other hand, we have $T_n\geq \displaystyle\min_{1\leq i\leq n}(X_i,Y_i)\ m n$; thus,\begin{equation*}\frac{1}{(m\mu_Y+m\mu_{|X-Y|}+1)n}\leq
\frac{1}{T_n}\leq \frac{1}{\displaystyle\min_{1\leq i\leq
n}(X_i,Y_i)\ m n},\end{equation*}
\item[(ii)]$\mathbb{E}[\Delta
M_{n+1}^2|\mathcal{F}_n]\leq
(2m+m^2)(4\mu_{X^2}+\mu_{Y^2})+3m^2\mu_{Y^2}+2m^2\mu_X+2m^2\mu_X\mu_Y+4m^2(\mu_X-\mu_Y)^2=K_u,$\\
\item
[(iii)]$|f(Z_n)|=|m(\mu_Y-\mu_X)Z_n(Z_n-1)|\leq 2m |\mu_Y-\mu_X|=K_f,$\\
\item[(iv)] $\mathbb{E}[\gamma_{n+1}\Delta
M_{n+1}|\mathcal{F}_n]\leq \frac{1}{T_n}\mathbb{E}[\Delta
M_{n+1}|\mathcal{F}_n]=0=K_e.$
\end{description}
\end{proof}
\end{proof}
\begin{prop}\label{prop-self}
The proportion of white balls in the urn after $n$ draws, $Z_n$,
satisfies almost surely\\
$\displaystyle\lim_{n\rightarrow \infty}Z_n=\left\{
\begin{array}{ll}
1, & \hbox{ if }\mu_X>\mu_Y;
\\
0, & \hbox{ if }\mu_X<\mu_Y;
\\
\tilde Z_\infty, & \hbox{ if }\mu_X=\mu_Y,
\end{array}
\right. $
\\
where $\tilde Z_\infty$ is a positive random
variable.
\end{prop}
\begin{proof}[Proof of Proposition \ref{prop-self}]
Recall that, if $\mu_X\neq \mu_Y$, $Z_n$ satisfies the stochastic
algorithm defined in Lemma \ref{algo}. As the function $f$ is
continuous, by Theorem \ref{clt-renlund} we conclude that $Z_n$
converges almost surely to the stable zero of the function $f$,
that is, the zero at which $f$ has a negative derivative,
which is $1$ if $\mu_X>\mu_Y$ and $0$ if $\mu_X<\mu_Y.$\\
In the case when $\mu_X=\mu_Y$, we have
$Z_{n+1}=Z_n+\frac{P_{n+1}}{T_{n+1}}$, where
$P_{n+1}=X_{n+1}\xi_{n+1}-Z_n\bigl(mY_{n+1}+\xi_{n+1}(X_{n+1}-Y_{n+1})\bigr)$.
Since $\mathbb{E}[P_{n+1}|\mathcal{F}_n]=0$, then $Z_n$ is a
positive
martingale which converges almost surely to a positive random variable $\tilde
Z_\infty$.\\
As a consequence, we obtain the following corollary.
\begin{cor} The total number of balls in the urn, $T_n$,
satisfies, if $\mu_X\geq \mu_Y$, as $n$ tends to infinity,
\begin{equation*}
\frac{T_n}{n}\stackrel{a.s}{\longrightarrow}m\mu_X.
\end{equation*}
\end{cor}
\begin{proof}
In fact, let
$M_n=\sum_{i=1}^n\xi_i(X_i-Y_i)-\mathbb{E}[\xi_i(X_i-Y_i)|\mathcal{F}_{i-1}],$
we have
\begin{eqnarray*}\frac{T_n}{n}&=&\frac{T_0}{n}+\frac{m}{n}\sum_{i=1}^nY_i+\frac{1}{n}\sum_{i=1}^n\xi_i(X_i-Y_i)\\
&=&\frac{T_0}{n}+\frac{m}{n}\sum_{i=1}^nY_i+
\frac{m(\mu_X-\mu_Y)}{n}\sum_{i=1}^nZ_{i-1}+\frac{M_n}{n}.
\end{eqnarray*}
As in the proof of the previous theorem, we have, as $n$ tends to
infinity, $\frac{M_n}{n}\stackrel{a.s}{\longrightarrow} 0$. Recall
that, if $\mu_X>\mu_Y$, $Z_n$ converges almost surely to $1$. Then,
using Ces\'aro's lemma, we obtain the stated limit. If $\mu_X=\mu_Y$,
then $\frac{1}{n}\sum_{i=1}^nY_i$ converges almost surely to
$\mu_Y=\mu_X$ and the middle term vanishes, so the same limit follows.
\end{proof}
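As an illustrative numerical check (not part of the original analysis), the urn dynamics can be simulated. The sketch below assumes exponentially distributed reinforcements and approximates the draw of $m$ balls by sampling with replacement, which preserves $\mathbb{E}[\xi_{n+1}\,|\,\mathcal{F}_n]=mZ_n$:

```python
import random

def simulate_urn(mu_x, mu_y, m=5, n_steps=20000, w0=10.0, b0=10.0, seed=1):
    """Simulate the two-color urn. At each step, m balls are drawn
    (here with replacement, an approximation that preserves
    E[xi | F_n] = m * Z_n); each white ball drawn triggers the addition
    of X_{n+1} white balls and each black ball the addition of Y_{n+1}
    black balls, with X, Y exponential of means mu_x, mu_y."""
    random.seed(seed)
    white, total = w0, w0 + b0
    for _ in range(n_steps):
        z = white / total
        xi = sum(random.random() < z for _ in range(m))  # white balls drawn
        x = random.expovariate(1.0 / mu_x)               # reinforcement X_{n+1}
        y = random.expovariate(1.0 / mu_y)               # reinforcement Y_{n+1}
        white += x * xi
        total += x * xi + y * (m - xi)
    return white / total, total / n_steps

z_n, t_over_n = simulate_urn(mu_x=2.0, mu_y=1.0)
```

With $\mu_X=2>\mu_Y=1$ and $m=5$, the simulated proportion $Z_n$ approaches $1$ and $T_n/n$ approaches $m\mu_X=10$, in agreement with the proposition and the corollary.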
Using the results above, the convergence of the normalized number
of white balls follows immediately. Indeed, if $\mu_X>\mu_Y$, we
have, as $n$ tends to infinity,
\[\frac{W_n}{n}=\frac{W_n}{T_n}\frac{T_n}{n}\stackrel{a.s}{\longrightarrow}m\mu_X.\]
Let $\tilde
G_n=\Bigl(\prod_{i=1}^{n-1}(1+\frac{m\mu_Y}{T_i})\Bigr)^{-1}B_n,$
then $(\tilde G_n,\mathcal{F}_n)$ is a positive martingale. There
exists a positive number $A$ such that
$\prod_{i=1}^{n-1}(1+\frac{m\mu_Y}{T_i})\simeq A
n^{\rho}$, with $\rho=\frac{\mu_Y}{\mu_X}$. Then, as $n$ tends to infinity we have
\begin{equation*} \frac{B_n}{n^\rho}\stackrel{a.s}{\rightarrow} B_{\infty},\end{equation*}
where $B_\infty$ is a positive random variable.\\
If $\mu_X=\mu_Y$, the sequences
$\Bigl(\prod_{i=1}^{n-1}(1+\frac{m\mu_X}{T_i})\Bigr)^{-1}W_n$ and
$\Bigl(\prod_{i=1}^{n-1}(1+\frac{m\mu_Y}{T_i})\Bigr)^{-1}B_n$ are
$\mathcal{F}_n$-martingales. Moreover,
$\prod_{i=1}^{n-1}(1+\frac{m\mu_X}{T_i})\simeq B
n$ for some $B>0$; then, as $n$ tends to infinity,
we have
\begin{equation*}\frac{W_n}{n}\stackrel{a.s}{\rightarrow}
W_{\infty} \quad \text{and}\quad \frac{B_n}{n}\stackrel{a.s}{\rightarrow} \tilde B_{\infty},\end{equation*}
where $W_{\infty}$ and $\tilde B_{\infty}$ are positive random
variables satisfying $ \tilde B_{\infty}=m\mu_X-W_{\infty}.$
\end{proof}
\begin{rmq}
The case when $\mu_X<\mu_Y$ is obtained by interchanging the
colors. In fact we have the following results:
\begin{equation*}T_n\stackrel{a.s}{=}m\mu_Y n+o(n),\quad W_n=\tilde W_\infty n^\sigma+o(n^\sigma)\quad \text{and} \quad B_n=m\mu_Yn+o(n),\end{equation*}
where $\tilde W_\infty$ is a positive random variable and
$\sigma=\frac{\mu_X}{\mu_Y}.$
\end{rmq}
\section{Introduction}
Time Series Analysis (TSA) is a successful field of a cross-disciplinary character. In economics, climate science, geophysics and
statistics \cite{GRANGER1986,liming2013, NOAKES1988,KIM2016}, among others,
forecasting based
on the observation of the previous states of the system is of critical interest. Engineering and communications
are focused on the control and parameter detection of signals \cite{WERON2014,NOURY2019,Rowsseeuw2019}.
Classification, clustering or detection of abnormalities in a collection of samples are the main targets of data mining and machine learning \cite{SI2018,HUSSAIN2018}. A
time series is basically the collection of values of a given variable describing the activity of a system at different time points. This system may be
linear or nonlinear, and these properties are translated to the observed time series.
Regardless of the diversity of techniques to acquire and study a collection of temporal samples, the extraction of information about the underlying dynamical system from the
observation of the evolution of one (or several) of its variables is frequently a non-trivial task \cite{Fu2011,DeGooijer2006}.
Notwithstanding the difficulties, several methodologies and tools have been proposed to better capture and understand the properties of a system from the observation of its dynamics.
In general, they rely on the fact that time series have internal structures
along several dimensions (time, amplitude, phase, frequency, ...) that contain information about the dynamical response of a system. The way these structures are related
would help to describe the system's activity and predict its evolution. In this regard, TSA has broadly followed two perspectives: a parametric one, in which a few parameters can
describe a time series if it comes from a stationary stochastic process, and a non-parametric one, in which the estimation of the spectral density or of higher-order conditional
moments fully describes a collection of samples \cite{CHEN1997,Kocsis2017}. From another point of view, TSA methods can be grouped into univariate and multivariate ones. The latter
accounts for techniques that quantify the contribution of two or more variables to a single event, while the former aims at describing and inferring the
evolution of temporal structures based on a single variable \cite{HE2015,HOGA2017,ABOAGYESARFO2015,SALLES2019274}. Regarding univariate methods, a central
question is whether temporal structures are correlated with each other. Serial correlation quantifies the point-by-point
correlation of a signal with a delayed version of itself \cite{gubner2006}. Autocorrelation is then useful to accurately seek repetitive patterns in a signal.
However, when a signal is split into consecutive segments, autocorrelation fails to capture the interplay between them. Therefore, a naive but non-trivial question arises:
might the segments of a time series be related to each other?
In this context, this paper is focused on the hypothesis that consecutive segments of a time series may be inter-related, transferring information along them. To test this hypothesis, we unravel
how segments transmit information between them, with the final objective of gaining additional insight into the properties of time series and, ultimately, of the system behind them.
We quantify the communication levels among consecutive segments of a time series using a combination of
Networks Science (NS) and Symbolic Dynamics (SD).
The use of NS has reached a broad range of areas, such as social sciences, biology, ecology, neuroscience, epidemics, among others \cite{newman2010}.
Its main advantage relies on the ability of translating almost any system under interaction into nodes, which are endowed with properties that can be extracted from almost any variable of the system.
These nodes are connected by links, entities that account for any kind of interaction between them.
The mathematical background of NS is useful to design artificial models leading to different network structures, while its transversal nature allows these models to be
applied to different kinds of datasets \cite{newman2010,estrada2015}. NS has also been applied to data mining and machine learning
problems, and more recently it has been implemented as a concomitant tool of TSA \cite{ZANIN2016,CAMACHO2018,TANIZAWA2018,WANG2019,ZOU2019,Lacasa2008,Iacovacci2019,Masoller2015}.
In this context, publications using {\it visibility graphs} have introduced new perspectives on how to extract information from experimental time series \cite{LozanoPerez79,Lacasa2008}.
The idea behind visibility graphs is that a time series can be transformed into a network, where nodes are the different values of the time series and links are created between any two amplitudes as long as they can ``see'' each other without being covered by another intermediate sample, as if it were a series of concatenated hills and valleys \cite{Lacasa2008}.
Since the seminal work of Lozano-Perez et al. \cite{LozanoPerez79}, visibility graphs have been applied to a broad
range of problems, from planning collision-free paths to avoid polyhedral obstacles, characterization of random time series,
combinatorics on words, to three dimensional perspectives for image processing \cite{Nunez2012}.
Alternative techniques to map time series dynamics into a graph representation have been introduced by
means of symbolic dynamics. SD assumes a symbol as a perceptible idea that encloses common characteristics of a set of
samples in a time series. In other words, a symbol contains valuable information about a temporal segment but, at the same time,
it simplifies the analysis of the original variable. The definition of a symbol has led to
different approaches, many of them with interesting results \cite{Lin2003,Kennel2004,Daw2000,Martinez2018}. However,
among the diversity of definitions of a symbol, the one proposed by Bandt and Pompe is nowadays the most endorsed by
the community of researchers making use of symbolic time series \cite{Bandt2002}. The originality of this approach lies in a formalism that quantifies the information of a
time series by its transformation into ordinal patterns, which take into account the ranking between consecutive values.
An ordinal pattern $\pi$ is defined by comparing the relative amplitudes of a temporal segment of $D$ consecutive observations ($D$ is also called the dimension of the pattern), and mapping
them into a one-dimensional ordinal space $\{0,1,...,(D-1)\}$, where the highest value in the segment is tagged with $D-1$ and the
lowest is assigned $0$, with the rest of the values following in descending order. Thus, each element of the pattern $\pi$ only contains the order, disregarding its specific value.
For finite time series of length $M$, when the
cardinality $D$ of the temporal segments is fixed, the number of possible symbols is given by $D!$. This way,
the reduction of variables shortens the run-time of exploratory experiments. Next, the probability distribution of
finding a pattern $\pi$ in the symbol sequence is obtained and analysed in order to obtain information about the general properties of the underlying system. This methodology has been proved to be robust against noise, fully data-driven, adequate
under weak stationarity and computationally efficient with no further assumptions over the
datasets \cite{Martinez2018,Monetti2013,amigo2007,amigo2015,zanin2008,cazelles2004,rosso2007}. It
has been extensively used to characterize time series of different nature, as well as for mapping a collection
of samples into a directed graph.
In this regard, a directed graph containing $D!$ nodes can be created when different symbols $\pi$ of a
signal are consecutively connected as they appear in the time series \cite{Masoller2015}. Using this transformation of patterns into networks,
lasers and chaotic time series were characterized by means of the link entropy and
a non-normalized version of Shannon entropy of the resulting nodes. This work revealed the usefulness of mapping time series into a graph, and
opened the door to further studies about the structure of the resulting networks.
In this paper, we go one step further by studying the transfer of information along temporal segments of epidemic time series. We transform them into ordinal patterns, which are then
projected into nodes of a network whose links are based on consecutive appearances along the time series.
In particular, we construct symbolic networks from epidemic time series ($x_t$) of vector-borne (Dengue, Malaria) and air-borne (Influenza)
diseases reported at different countries (see Tab. \ref{tab:01} of Methods Section). We also introduce a family of five novel parameters that characterize the role of the nodes (patterns) of
the network, specifically: 1) the amount of entropy entering/leaving a node $\pi$, by means of the \textit{incoming/outgoing entropy} ($H_{in}, H_{out}$),
2) the amount of complexity entering and leaving a node $\pi$, by means of the \textit{incoming/outgoing complexity} ($C_{in}, C_{out}$),
3) the level of conductance of information of a node, by
means of the \textit{flux} of entropy/complexity ($\phi_{H,C}$),
4) how much a node amplifies or attenuates entropy/complexity ($A_{H,C}$), and 5) the internal \textit{fluctuation} ($f$) inside each pattern.
Using these five metrics we are able to identify differences between diseases and assign specific roles to the different patterns of the time series.
We explore the interplay between the fluctuation $f$ of each pattern $\pi$ and the role the corresponding node has on the network structure.
Next, we compare our results with a set of synthetic outputs of different complexity in order to infer the similarity of
these diseases to synthetic models. Our results demonstrate that this methodology unveils differences between air-borne and vector-borne diseases, and evidences how
these outbreaks share similarities with autoregressive processes of linear and nonlinear nature.
\section{Methods}
\subsection{Datasets}
Epidemiological cohorts are composed of nine time series $x_t$ of different lengths $M$, corresponding to three diseases in six different countries for the time periods shown in Tab. \ref{tab:01}. Each time series represents the weekly number of individuals infected with a specific disease. In order to account for the non-stationarity of the signals, we extracted their log-returns $y_t=\log(x_{t-1})-\log(x_t)$.
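This preprocessing step can be sketched as follows (a minimal illustration with hypothetical values; we assume the log-return convention $y_t=\log(x_{t-1})-\log(x_t)$):

```python
import math

def log_returns(x):
    """Log-returns y_t = log(x_{t-1}) - log(x_t) of a positive series."""
    return [math.log(prev) - math.log(curr) for prev, curr in zip(x, x[1:])]

# Hypothetical weekly case counts, for illustration only
y = log_returns([10.0, 20.0, 20.0, 5.0])
```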
\begin{table}[!h]
\centering
\caption{Datasets: Countries and diseases. For each country and disease, we show the number of weeks $M$ of each available epidemiological time series with the corresponding time period (in years) in parenthesis.}
\label{tab:01}
\begin{tabular}{l c c c}
\hline
{\bf Country} & {\bf Dengue} & {\bf Influenza} & {\bf Malaria} \\ \hline
{\it Australia}& & 974 $(1997-2015)$ & \\
{\it Colombia} & 626 $(2005-2016) $ & &626 $(2005-2016)$ \\
{\it Japan} & & 964 $(1998-2015)$ & \\
{\it Mexico} & 678 $(2000-2015)$ & 830 $(2000-2015)$ & \\
{\it Singapore}& 838 $(2000-2015)$ & & \\
{\it Venezuela}& 660 $(2002-2014)$ & & 669 $(2002-2014)$ \\
\hline
\end{tabular}
\end{table}
The datasets analyzed in this paper were extracted from online reports of the corresponding {\it Ministry of Health} of each country \cite{australiaER,colombiaER,japanER,mexicoER,singaporeER,venezuelaER}.
We studied the diseases in terms of the graph representation of their aggregated time series. Time series are cut into vectors of a specific length $D$, where each different vector corresponds to a node of the network. Nodes (vectors) are then consecutively connected in order of appearance, which leads to a directed graph. The pattern associated to each node is extracted from a symbolic transformation mapping the inner relative amplitudes of each vector into an ordinal representation. When two vectors co-occur sequentially more than once, a weight is given to the direct connection between the two vectors (nodes), leading to a weighted graph. Next, we analyze the networks of the diseases using a family of novel network parameters, consisting of a set of information-based metrics quantifying the entropy and complexity (\textit{in/out entropy}, \textit{in/out complexity}) of the nodes and a set of parameters assessing their dynamical role (\textit{flux}, \textit{amplification} and \textit{fluctuation}).
\subsection{Symbolic transformation}
Following the methodology proposed by Bandt \& Pompe \cite{Bandt2002}, we retrieve all different patterns $\pi$ of length $D$ appearing in a signal $x_t$. When defining an embedding dimension $D$, we are constrained by the size $D$ of the patterns, since $D!$ is the total number of possible patterns, e.g., $D=3$ leads to $D!=6$ possible symbols emerging from $x_t$. The number of symbols that actually appear is also closely related to the length $M$ of the time series, since extremely short time series do not provide enough statistics to guarantee a fully resolved distribution of patterns. For a statistically reliable estimation, we follow the condition $M-D \gg D!$ \cite{Tiana2010}. Once this condition is fulfilled, each symbol is constructed by considering consecutive samples of length $D$ extracted from $x_t$. Under this mapping, $x_t$ ($\forall$ $t=1,2,...,M$) is transformed into a restricted number of patterns encoding the relative inner amplitudes of the $D$-dimensional vector $\{x_t, x_{t+1}, ..., x_{t+D-1}\}$. Samples are arranged (or ranked) onto the permutation $\pi=(\pi_0, \pi_1, ..., \pi_{D-1})$ of $(0,1, ..., D-1)$ fulfilling $x_{t+\pi_0}\leq x_{t+\pi_1}\leq \cdots \leq x_{t+\pi_{D-1}}$. Hence, each permutation is a symbol of the full spectrum of available patterns. The full process is summarized, schematically, in Fig. \ref{fig:01}. This methodology has been proven to be computationally efficient, fully data-driven with no further assumptions over the data, robust against noise, and well-behaved under weak stationarity \cite{amigo2007,amigo2015,zanin2008,amigo2010,keller2014,cazelles2004,johann2018,zanin2012,rosso2007}.
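The symbolization step described above can be sketched as follows (a minimal implementation of our own, not the authors' code; ties are broken by order of appearance, a common convention):

```python
def ordinal_pattern(window):
    """Bandt-Pompe ordinal pattern of a window of D samples: each
    position receives the rank of its value (0 = smallest). Ties are
    broken by order of appearance (Python's sort is stable)."""
    order = sorted(range(len(window)), key=lambda i: window[i])
    pattern = [0] * len(window)
    for rank, idx in enumerate(order):
        pattern[idx] = rank
    return tuple(pattern)

def symbol_sequence(x, D=3, step=1):
    """Ordinal patterns of the length-D windows of x. step=1 gives
    overlapping windows; step=D gives the disjoint consecutive
    segments sketched in Fig. 1."""
    return [ordinal_pattern(x[t:t + D]) for t in range(0, len(x) - D + 1, step)]
```

For example, the vector $\{-0.3,1.2,0.5\}$ of Fig. \ref{fig:01}D is mapped to the pattern $(0,2,1)$.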
\begin{figure}[ht]
\centering
\includegraphics[width=.8\linewidth]{fig01}
\caption{Extracting ordinal patterns from a time series. {\bf A.} Signal amplitudes are split into consecutive vectors of length $D$ (in this example, $D=3$). {\bf B.} The values of the amplitude inside each vector are translated into a ranking. Consecutive patterns are connected through a direct link. {\bf C.} Directed graph derived from the time series and patterns. {\bf D.} For illustrative purposes, vector $\{-0.3,1.2,0.5\}$ is transformed into the ordinal pattern $\pi=\{0,2,1\}$ and its variability is obtained as $T_i=d_1+d_2=\sqrt{5}+\sqrt{2}=3.65$ (where $d_1$ and $d_2$ are the lengths of the lines connecting two consecutive rankings), corresponding to a fluctuation of $f=2$ (see Tab. \ref{tab:02} and the main text for details).}
\label{fig:01}
\end{figure}
\paragraph{Pattern Fluctuation.}
We propose to quantify the internal variations of each symbol by defining its \textit{fluctuation} (\textit{f}). With this aim, we quantify the total length $T$ of the lines connecting the ranking values inside each pattern, as shown in Fig. \ref{fig:01}D. Note that the larger the value of $T$, the higher the variability inside the pattern. Next, we define the fluctuation $f$ of a given pattern as the ranking of its corresponding value of $T$, from the lowest ($f=1$) to the highest (see Tab. \ref{tab:02} for details). In this way, we can group patterns in terms of their internal fluctuations, which gives interesting information about the temporal dynamics of the pattern. For instance, monotonic and periodic time series result in an abundance of patterns with low $f$, a parameter that can be included to complement the information about the specific features of a given node.
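The values of $T$ and $f$ in Tab. \ref{tab:02} can be reproduced with a short script (a sketch; unit horizontal spacing between consecutive samples is assumed, as in Fig. \ref{fig:01}D):

```python
import math
from itertools import permutations

def internal_variability(pattern):
    """Total length T of the polyline joining consecutive ranks,
    assuming unit horizontal spacing between samples (Fig. 1D)."""
    return sum(math.sqrt(1 + (b - a) ** 2) for a, b in zip(pattern, pattern[1:]))

# Rank the distinct values of T: the lowest gets f = 1
patterns = list(permutations(range(3)))
t_values = {p: round(internal_variability(p), 3) for p in patterns}
levels = sorted(set(t_values.values()))
fluctuation = {p: levels.index(t) + 1 for p, t in t_values.items()}
```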
\begin{table}[!h]
\centering
\caption{Pattern fluctuations for $D=3$. First row: all possible patterns $\pi$ of length $3$. Second row: the internal variability $T$ (obtained as the length of the connecting lines). Third row: the corresponding fluctuation parameter $f$. Note that different patterns can have the same fluctuation, since they have the same internal variability.}
\label{tab:02}
\begin{tabular}{lcccccc}
\hline
{\bf $\mathbf{\pi}$}
& \begin{tikzpicture}
\draw[thick,-] (0,0) -- (.5,.5);
\draw[thick,-] (.5,.5) -- (1,1);
\filldraw[fill=gray] (0,0) circle[radius=2pt];
\filldraw[fill=gray] (.5,.5) circle[radius=2pt];
\filldraw[fill=gray] (1,1) circle[radius=2pt];
\end{tikzpicture}
& \begin{tikzpicture}
\draw[thick,-] (0,1) -- (.5,.5);
\draw[thick,-] (.5,.5) -- (1,0);
\filldraw[fill=gray] (0,1) circle[radius=2pt];
\filldraw[fill=gray] (.5,.5) circle[radius=2pt];
\filldraw[fill=gray] (1,0) circle[radius=2pt];
\end{tikzpicture}
& \begin{tikzpicture}
\draw[thick,-] (0,1) -- (.5,0);
\draw[thick,-] (.5,0) -- (1,.5);
\filldraw[fill=gray] (0,1) circle[radius=2pt];
\filldraw[fill=gray] (.5,0) circle[radius=2pt];
\filldraw[fill=gray] (1,.5) circle[radius=2pt];
\end{tikzpicture}
& \begin{tikzpicture}
\draw[thick,-] (0,.5) -- (.5,0);
\draw[thick,-] (.5,0) -- (1,1);
\filldraw[fill=gray] (0,.5) circle[radius=2pt];
\filldraw[fill=gray] (.5,0) circle[radius=2pt];
\filldraw[fill=gray] (1,1) circle[radius=2pt];
\end{tikzpicture}
& \begin{tikzpicture}
\draw[thick,-] (0,.5) -- (.5,1);
\draw[thick,-] (.5,1) -- (1,0);
\filldraw[fill=gray] (0,.5) circle[radius=2pt];
\filldraw[fill=gray] (.5,1) circle[radius=2pt];
\filldraw[fill=gray] (1,0) circle[radius=2pt];
\end{tikzpicture}
& \begin{tikzpicture}
\draw[thick,-] (0,0) -- (.5,1);
\draw[thick,-] (.5,1) -- (1,.5);
\filldraw[fill=gray] (0,0) circle[radius=2pt];
\filldraw[fill=gray] (.5,1) circle[radius=2pt];
\filldraw[fill=gray] (1,.5) circle[radius=2pt];
\end{tikzpicture} \\ \hline
{\bf $\mathbf{T}$} & 2.828 & 2.828 & 3.650 & 3.650 & 3.650 & 3.650 \\
{\bf $\mathbf{f}$}& 1 & 1 & 2 & 2 & 2 & 2 \\ \hline
\end{tabular}
\end{table}
\paragraph{Network construction.}
We built the network representation of the time series by transforming each different pattern into a node, following the methodology proposed
by Masoller et al. \cite{Masoller2015}.
Consecutive symbols were connected through a direct link (see red arrows of Fig. \ref{fig:01}B) and auto-loops (links leaving and reaching the same node) were dismissed. In this way, sequential associations of patterns lead to a directed graph, while repetitions of pattern co-occurrence (e.g., when pattern $\pi_i$ and $\pi_j$ appear one after the other more than once) reinforce the weight $w_{ij}$ of the link going from node $i$ to $j$ (or, in other words, from pattern $\pi_i$ to $\pi_j$). The weight of a link $\pi_i\rightarrow \pi_j$, $w_{ij}$, is the relative number of times, in the sequence, symbol $i$ is followed by symbol $j$; in this way, the link weights are normalized in each node, i.e. $\sum_{j=1}^{D!}w_{ij}=1$. Figure \ref{fig:01}C shows the resulting network from the time series and patterns.
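A sketch of this network-construction step (our own minimal implementation, not the authors' code): transitions between consecutive patterns are counted, auto-loops are dismissed, and the outgoing weights of each node are normalized to sum to one.

```python
from collections import defaultdict

def build_pattern_network(symbols):
    """Weighted directed graph of consecutive pattern transitions:
    auto-loops are dismissed and the outgoing weights of every node
    are normalized so that they sum to one."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(symbols, symbols[1:]):
        if a != b:                              # dismiss auto-loops
            counts[a][b] += 1
    weights = {}
    for a, out in counts.items():
        total = sum(out.values())
        weights[a] = {b: c / total for b, c in out.items()}
    return weights

# A short illustrative symbol sequence (hypothetical)
symbols = [(0, 1, 2), (0, 2, 1), (0, 1, 2), (0, 2, 1), (0, 2, 1), (2, 1, 0)]
w = build_pattern_network(symbols)
```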
\begin{figure}[ht]
\centering
\includegraphics[width=.6\linewidth]{fig02}
\caption{
Qualitative representation of the entropy-complexity ($H\times C$) plane. Entropy $H$ increases with the disorder of the signal. Disequilibrium $Q$ decays as the signal approaches a completely disordered state. Statistical complexity $C=Q\cdot H$ reaches its maximum when the system stays between order and disorder. Note the region of positive correlation between $H$ and $C$, which is highlighted in blue, while the region of negative correlation is highlighted in yellow.}
\label{fig:02}
\end{figure}
\subsection{Global measures}
Entropy $H$ and statistical complexity $C$ are useful when characterizing the appearance of ordinal pattern populations along time series \cite{rosso2007,zanin2012}. A simple diagram of these measures globally characterizes the dynamics (see Fig. \ref{fig:02}). In this way, a region of positive correlation between $H$ and $C$ reveals that the system is close to an ordered phase, while a negative correlation arises in a region of high disorder \cite{amigo2010} (see Fig. \ref{fig:02}). The [$H,C$] plane has been widely used for the description of a diversity of natural and artificial systems, always departing from the transformation of time series into ordinal patterns \cite{amigo2007,amigo2010,amigo2015,zanin2008,keller2014,cazelles2004,carpi2010,zanin2012}.
$H[p]$ and $C[p]$ are defined from the empirical distribution $p$ of ordinal patterns extracted from $x_t$. The permutation entropy $H[p]$ is given by the ratio between the Shannon entropy $S[p]$ and the maximum entropy $S_{max}=S[p_e]$, where $p=\{p_i\}$ is the distribution of patterns and $p_e$ the uniform probability. In this way, $0\leq H[p]\leq 1$, with $H=0$ if $p_{i}=1$ for some $i$ and $H=1$ if ${p}_{i}=1/D!$ $\forall i$. The disequilibrium $Q$ evaluates the existence of preferred states among the accessible ones. It is defined as $Q[p]=Q_0D[p,p_e]$, with $Q_0$ a normalization constant and $D[p,p_e]$ the mean information for discriminating between $p$ and $p_e$ per observation $p_i$, defined as the symmetric form of the Kullback-Leibler relative entropy \cite{Kullback1951}. Low values of $Q[p]$ imply a distribution of patterns $p$ close to what one may expect at random, i.e. $p_e$. Otherwise, high values of $Q[p]$ reflect the existence of some privileged patterns. Finally, the statistical complexity $C[p]=H[p]\cdot Q[p]$ quantifies the interplay between order and disorder of the system. Note that $C[p]$ vanishes if the system is close to equilibrium (maximum disorder and minimum distance to the uniform probability) or in a state of high order (minimum entropy). The triad of dynamical properties $\{H,Q,C\}$ is commonly known as the generalized statistical complexity measures of a time series. Here, we propose to use these global parameters, which capture the properties of the time series of a system, and combine them with the topological properties of directed weighted graphs.
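The triad $\{H,Q,C\}$ can be sketched as follows (our own implementation, under the assumptions that natural logarithms are used throughout and that $Q_0$ is chosen so that $Q=1$ for a delta-like distribution, the standard normalization):

```python
import math
from collections import Counter

def shannon(p):
    """Shannon entropy (natural log) of a probability vector."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def js_divergence(p, q):
    """Jensen-Shannon divergence between two probability vectors."""
    m = [(a + b) / 2 for a, b in zip(p, q)]
    return shannon(m) - shannon(p) / 2 - shannon(q) / 2

def complexity_measures(symbols, n_patterns):
    """Permutation entropy H, disequilibrium Q and complexity C = H*Q
    from a sequence of symbols drawn from n_patterns possible patterns."""
    counts = Counter(symbols)
    p = [counts[k] / len(symbols) for k in counts]
    p += [0.0] * (n_patterns - len(p))          # unseen patterns
    pe = [1.0 / n_patterns] * n_patterns
    H = shannon(p) / math.log(n_patterns)
    N = n_patterns
    q0 = -2.0 / (((N + 1) / N) * math.log(N + 1)
                 - 2 * math.log(2 * N) + math.log(N))
    Q = q0 * js_divergence(p, pe)
    return H, Q, H * Q
```

A fully ordered sequence gives $H=0$, $Q=1$, $C=0$, while a uniform population of patterns gives $H=1$, $Q=0$, $C=0$, matching the qualitative picture of Fig. \ref{fig:02}.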
\subsection{Local measures}
In addition to the pattern fluctuation $f$, we also propose a collection of node features based on the global parameters. We define the incoming information content of a node $i$ as the probability distribution $P^i_{in}=\{ p_l\}$ of the inbound links, $\forall$ $l=\{1, ..., k_{in}\}$, where $k_{in}$ is the in-degree. To do that, the weight of each incoming link is divided by the total weight of the $l$ links entering $i$, such that $\sum_{l=1}^{k_{in}}p_l=1$. Reciprocally, we construct the outgoing information content $P^i_{out}=\{p_m\}$ upon the $m$ edges leaving $i$, $\forall$ $m=\{1, ..., k_{out}\}$, with $k_{out}$ the out-degree. All departing weights are divided by the total sum of the $m$ outbound weights, fulfilling $\sum_{m=1}^{k_{out}}p_m=1$. This turns a node $i$ into an entity endowed with the property of conveying information among ordinal patterns, which can be interpreted as the way temporal structures (patterns) communicate (or transmit information) along time.
\begin{table}
\caption{Definitions of the node entropies based on the distribution of incoming and outgoing links of a node $i$.}
\label{tab:03}
\centering
\rule{\textwidth}{.1pt}
\vspace{-0.9cm}
\begin{subequations}
\begin{flalign}
& \text{Nodal entropy} & H_{in}[p_l] = \frac{S_{in}}{S^{max}_{in}}
&& H_{out}[p_m] = \frac{S_{out}}{S^{max}_{out}} && \label{equ:h}
\end{flalign}
\begin{flalign}
& \text{Nodal complexity} & C_{in}[p_l] = H_{in}[p_l] \cdot Q_{in}[p_l]
&& C_{out}[p_m] = H_{out}[p_m] \cdot Q_{out}[p_m] && \label{equ:c}
\end{flalign}
\vspace{-0.1cm}
\end{subequations}
\rule{\textwidth}{.1pt}
\end{table}
\paragraph{Nodal entropy and complexity.}
Under this framework, we can measure the Shannon entropy arriving at node $i$ as $S_{in} = -\sum_{l=1}^{k_{in}} p_l\ln\left(p_l\right)$, and we can contrast it with the maximal entropy $S_{in}^{max}=\ln(k_{in})$ obtained for the uniform distribution $p_e=\{p_{l,e} \mid p_{l,e}=1/k_{in} ~\forall ~l\}$. The ratio between $S_{in}$ and $S_{in}^{max}$ leads to the \textit{incoming node entropy} $H_{in}[p_l] = S_{in}/S^{max}_{in}$, which is bounded by $0\leq H_{in}[p_l] \leq 1$. When only one link arrives at node $i$, i.e. $P^i_{in}=1$, the incoming node entropy is the lowest ($H_{in}=0$). On the other hand, when all possible links arrive at $i$ (i.e., node $i$ has $l=D!-1$ incoming links), the incoming node entropy is the highest ($H_{in}=1$) if all incoming links have the same probability $p_l=1/k_{in}$. Analogous reasoning applied to the $m$ outgoing links leads to the {\it outgoing node entropy} $H_{out}[p_m]$. The first row of Tab. \ref{tab:03} summarizes the definitions of the incoming and outgoing node entropies.
Similarly to the global measures, comparing with the uniform distribution $p_e$ one can measure the disequilibrium of the incoming links, $Q_{in}[p_l]=Q_0\cdot D[p_l,p_e]$. Here, $Q_0=-2\{ \frac{k_{in}+1}{k_{in}}\ln(k_{in}+1) - 2\ln(2k_{in}) + \ln(k_{in}) \}^{-1}$ is the normalization constant leading to $0\leq Q_{in}[p_l] \leq 1$, and $D[p_l,p_e]$ is the Jensen-Shannon divergence, defined in terms of $S_{in}$ as $D[p_l,p_e]=S_{in}[(p_l+p_e)/2]-S_{in}[p_l]/2-S_{in}[p_e]/2$.
By multiplying $H_{in}$ and $Q_{in}$ we finally obtain the \textit{incoming node complexity}, which is bounded by $0\leq C_{in}[p_l] \leq 1$. Following the same procedure as for the node entropy, we can define the {\it outgoing node complexity} $C_{out}[p_m]$ considering the $m$ outgoing links. The second row of Tab. \ref{tab:03} summarizes the definitions of the incoming and outgoing node complexities.
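The nodal entropy and complexity of Tab. \ref{tab:03} can be sketched for a single node as follows (our own implementation; the same function applies to the incoming or the outgoing weight distribution, and a node with a single link is assigned zero entropy by convention):

```python
import math

def shannon(p):
    """Shannon entropy (natural log) of a probability vector."""
    return -sum(x * math.log(x) for x in p if x > 0)

def node_entropy_complexity(weights):
    """Nodal entropy H and complexity C = H*Q of one node, given the
    normalized weight distribution of its incoming (or outgoing) links."""
    k = len(weights)
    if k < 2:
        return 0.0, 0.0            # a single link carries no uncertainty
    pe = [1.0 / k] * k
    H = shannon(weights) / math.log(k)
    m = [(w + e) / 2 for w, e in zip(weights, pe)]
    js = shannon(m) - shannon(weights) / 2 - shannon(pe) / 2
    q0 = -2.0 / (((k + 1) / k) * math.log(k + 1)
                 - 2 * math.log(2 * k) + math.log(k))
    return H, H * q0 * js
```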
The previous parameters shed light on how temporal structures in time series are temporally correlated with each other, and how different patterns are related to the increase of entropy and complexity along a time series.
\paragraph{Flux and Amplification of information.}
The flux is a vectorial quantity that describes the magnitude and direction of the flow of information passing through a node $i$. In directed graphs, the direction is trivially given by going from the incoming links of node $i$ to its outgoing ones. Therefore, the net amount of ``information'' passing through $i$ can be quantified in terms of two nodal properties:
the \textit{entropy flux $\phi_{H}$} and the \textit{complexity flux $\phi_{C}$}. The fluxes $\phi_{H}$ and $\phi_{C}$ are measures of the dynamical importance of a node, since they characterize which patterns behave as dynamical hubs by allowing larger amounts of information to enter and leave pattern $i$. They quantify the dynamical relevance of nodes by taking into account the entropies and complexities of the incoming and outgoing links. Table \ref{tab:04} contains the mathematical definitions of the entropy and complexity fluxes.
\begin{table}[ht]
\caption{Definitions of the flux $\phi$ and amplification $A$ of a node $i$, both for the entropy and complexity of the incoming and outgoing links.}
\label{tab:04}
\centering
\rule{\textwidth}{1pt}
\vspace{-0.9cm}
\begin{subequations}
\begin{flalign}
& \text{Flux} & \phi_H = \sqrt{H_{in}^2 + H_{out}^2}
&& \phi_C = \sqrt{C_{in}^2 + C_{out}^2} && \label{equ:f}
\end{flalign}
\vspace{-\baselineskip}
\begin{flalign}
& \text{Amplification} & A_{H} = \frac{H_{out}^i}{H_{in}^i}
&& A_{C} = \frac{C_{out}^i}{C_{in}^i} && \label{equ:a}
\end{flalign}
\vspace{-0.1cm}
\end{subequations}
\rule{\textwidth}{1pt}
\end{table}
Importantly, the fluxes allow us to identify \textit{basin nodes}, which retain all incoming information from other patterns and do not transmit information to other ones, i.e. nodes whose entropy (or complexity) flux is $\phi_H=\sqrt{H_{in}^2 + 0^2}=H_{in}$. On the other hand, \textit{generator nodes} have no incoming links, but convey information to other nodes ($\phi_H=\sqrt{0^2 + H_{out}^2}=H_{out}$). However, note that generator nodes may not exist in our kind of networks, since every pattern appears after a previous one (with the unique exception of the first pattern of the time series). Finally, the \textit{flux dynamical hubs} are those patterns receiving large amounts of information from other temporal structures (i.e., patterns that are reached from a large number of different patterns) and redistributing it equally (i.e., also leading to a diversity of patterns). For example, in the extreme case of $\phi_H=\sqrt{1^2 + 1^2}=\sqrt{2}$ we would have the most important hub for the entropy flux. As a consequence, by definition, the values of the fluxes are bounded by $0< \phi_{H,C} \leq \sqrt{2}$.
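The flux, the amplification and the node roles discussed above can be sketched as follows (role names follow the text; the tolerance used to decide whether an entropy is zero is our own choice):

```python
import math

def flux(h_in, h_out):
    """Flux phi = sqrt(h_in^2 + h_out^2); bounded above by sqrt(2)."""
    return math.hypot(h_in, h_out)

def amplification(h_in, h_out):
    """Amplification A = h_out / h_in (undefined for generator nodes,
    whose incoming entropy is zero)."""
    return h_out / h_in

def node_role(h_in, h_out, tol=1e-9):
    """Classify a node: basin (no outgoing information), generator
    (no incoming information), or transmitter otherwise."""
    if h_out <= tol:
        return "basin"
    if h_in <= tol:
        return "generator"
    return "transmitter"
```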
\begin{table}[!h]
\centering
\caption{Definitions of the dynamical roles of a node. The first row depicts five types of transitions for a pattern $\pi_i$. $P_l$ and $P_m$ are the probabilities associated with incoming and outgoing links. The second row corresponds to the role assigned to the nodes. The third row contains the limits and domains of the \textit{flux} $\phi_{H,C}$. Note that $\phi_{H,C}=\sqrt{2}$ only for dynamical hubs. The last row shows
the boundaries of $A_{H,C}$. \textbf{NA} stands for Not Applicable.}
\label{tab:05}
\begin{tabular}{l c c c c c}
\hline
& \begin{tikzpicture}[auto,node distance = 1.0cm,shorten > = 1pt,> = Stealth,semithick,
state/.style = {circle, fill=#1, draw=none,text=white},state/.default = gray
]
\node (A) [state] {$\pi_i$};
\node (B) [state=white, right of = A] {};
\node (C) [state=white,above right of = A] {};
\node (D) [state=white,below right of = A] {};
\path[->] (A) edge ["\hspace{5mm} $P_{m}$"] (B)
(A) edge [""] (C)
(A) edge [""] (D);
\end{tikzpicture}
& \begin{tikzpicture}[auto,node distance = 1.0cm,shorten > = 1pt,> = Stealth,semithick,
state/.style = {circle, fill=#1, draw=none,text=white},state/.default = gray
]
\node (A) [state] {$\pi_i$};
\node (B) [state=white, right of = A] {};
\node (C) [state=white,above right of = A] {};
\node (D) [state=white,below right of = A] {};
\node (E) [state=white,left of = A] {};
\path[->] (A) edge ["\hspace{5mm} $P_{m}$"] (B)
(A) edge [""] (C)
(A) edge [""] (D)
(E) edge [bend left, "$P_{l}$"] (A)
(E) edge [bend right] (A);
\end{tikzpicture}
& \begin{tikzpicture}[auto,node distance = 1.0cm,shorten > = 1pt,> = Stealth,semithick,
state/.style = {circle, fill=#1, draw=none,text=white},state/.default = gray
]
\node (A) [state] {$\pi_i$};
\node (B) [state=white, right of = A] {};
\node (C) [state=white,above right of = A] {};
\node (D) [state=white,below right of = A] {};
\node (E) [state=white,above left of = A] {};
\node (F) [state=white,below left of = A] {};
\node (G) [state=white,left of = A] {};
\path[->] (A) edge ["\hspace{5mm} $P_{m}$"] (B)
(A) edge [""] (C)
(A) edge [""] (D)
(E) edge [""] (A)
(F) edge [] (A)
(G) edge ["$P_{l}$ \hspace{5cm}"] (A);
\end{tikzpicture}
& \begin{tikzpicture}[auto,node distance = 1.0cm,shorten > = 1pt,> = Stealth,semithick,
state/.style = {circle, fill=#1, draw=none,text=white},state/.default = gray
]
\node (A) [state] {$\pi_i$};
\node (B) [state=white, right of = A] {};
\node (E) [state=white,above left of = A] {};
\node (F) [state=white,below left of = A] {};
\node (G) [state=white,left of = A] {};
\path[->] (A) edge [bend left, "$P_{m}$"] (B)
(A) edge [bend right] (B)
(E) edge [""] (A)
(F) edge [] (A)
(G) edge ["$P_{l}$ \hspace{5cm}"] (A);
\end{tikzpicture}
& \begin{tikzpicture}[auto,node distance = 1.1cm,shorten > = 1pt,> = Stealth,semithick,
state/.style = {circle, fill=#1, draw=none,text=white},state/.default = gray
]
\node (A) [state=white] {};
\node (B) [state, right of = A] {$\pi_i$};
\node (C) [state=white,above right of = A] {};
\node (D) [state=white,below right of = A] {};
\path[->] (A) edge ["$P_{l}$"] (B)
(C) edge [""] (B)
(D) edge [""] (B);
\end{tikzpicture}
\\ \hline
{\bf $\mathbf{Type}$} & \textit{Generator} & \textit{Amplifier} & \textit{Transmitter} & \textit{Attenuator} & \textit{Basin}
\\
\multirow{ 2}{*}{{\bf $\mathbf{\phi_{H,C}}$}} & $H_{out}$ & $(H_{out},\sqrt{2})$ & $\sqrt{2}$ & $(H_{in},\sqrt{2})$ & $H_{in}$ \\
& $C_{out}$ & $(C_{out},\sqrt{2})$ & $\sqrt{2}$ & $(C_{in},\sqrt{2})$ & $C_{in}$ \\
{\bf $A_{H,C}$} & \textbf{NA} & $>1$ & $1$ & $<1$ & $0$
\\ \hline
\end{tabular}
\end{table}
The last dynamical feature we introduce is the entropy $A_{H}$ and complexity $A_{C}$ amplifications, defined in Eq. \ref{equ:a} of Tab. \ref{tab:04}. While $\phi_{H,C}$ give us an idea of which patterns act as pipe-flows, $A_{H,C}$ tell us about the node gain, i.e., the ability to amplify or diminish the level of information a pattern is transmitting.
In the case of entropy, for instance, \textit{basin nodes} have zero amplification, $A_H=0/H_{in}=0$. \textit{Transmitter nodes}, which receive and distribute information in equal measure, reach $A_H=1$ when $H_{out}=H_{in}$, as in the case of \textit{dynamical hubs}. Nodes that increase the level of entropy have $A_H > 1$ and are called {\it amplifier nodes}: they amplify the entropy received through the incoming links, driving the system towards disorder. In the time series, they correspond to short periodic temporal structures followed by irregular patterns. In the case of complexity, an \textit{amplifier node} receives links endowed with either lower or higher entropy, but in both cases the outgoing links carry higher complexity levels. Conversely, a node might receive high levels of entropy and take the system down to lower ones. These \textit{attenuator nodes} ($A_H<1$) can be related to changes from irregular fluctuations to more periodic patterns. Likewise, in terms of $A_C$, \textit{attenuator nodes} reduce the complexity levels of the system.
Table \ref{tab:05} summarizes the node classification based on the incoming and outgoing probabilities $P_l$ and $P_m$. Each dynamical role is characterized by the two fluxes $\phi_{H,C}$ and the two amplifications $A_{H,C}$.
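The role assignment of Table \ref{tab:05} can be sketched as a simple decision rule on the incoming and outgoing entropies (a hypothetical helper of ours; the thresholds follow the amplification $A_H = H_{out}/H_{in}$):

```python
def node_role(h_in, h_out, tol=1e-9):
    """Assign a dynamical role from the incoming/outgoing entropy,
    following the amplification A_H = H_out / H_in."""
    if h_in < tol:
        return "generator"      # no incoming links, A_H not applicable
    if h_out < tol:
        return "basin"          # no outgoing links, A_H = 0
    a_h = h_out / h_in
    if abs(a_h - 1.0) < tol:
        return "transmitter"    # A_H = 1: in/out entropy balance
    return "amplifier" if a_h > 1.0 else "attenuator"

print(node_role(0.5, 0.9))  # amplifier
```

The same rule, applied to $C_{in}$ and $C_{out}$, yields the complexity-based roles.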
\section{Results}
Networks obtained from the same disease are grouped to visualize, first, the incoming and outgoing complexity of a node $i$, respectively, $C_{in}$ and $C_{out}$
(see Eq. \ref{equ:c} of Tab. \ref{tab:03} for details). As it is shown in Fig. \ref{fig:03}, we found positive correlations between these two variables in all
diseases, which means that the higher the complexity entering a node, the higher the complexity departing from it (note that dashed lines correspond to $C_{in}=C_{out}$).
Influenza (Fig. \ref{fig:03}B), an airborne disease, is the one reaching the highest levels of complexity, with 9 nodes whose
complexities exceed 0.10 (only 2 in the case of Dengue and none in Malaria).
In contrast, Malaria networks (Fig. \ref{fig:03}C) contain nodes with the lowest amounts of complexity.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{fig03}
\caption{Complexity entering a node (pattern), $C_{in}$, vs complexity leaving a node, $C_{out}$. Color codes are the same throughout the paper: red for Dengue (A), blue for Influenza (B) and green for Malaria (C).
Dashed lines correspond to $C_{out}=C_{in}$.
Solid lines are the regression lines. (A) Dengue coefficient of determination is $R^2=0.6$. (B) Influenza has $R^2=0.71$. (C) In Malaria, $R^2=0.3$.}
\label{fig:03}
\end{figure}
We have obtained the linear regressions accounting for the interplay between $C_{out}$ and $C_{in}$ for the three diseases.
Influenza has the highest coefficient of determination, $R^2=0.71$, which means that a linear model explains around 70$\%$ of the variability of $C_{out}$.
Vector-borne diseases have lower $R^2$ in comparison to Influenza. Interestingly, Malaria has the lowest value, $R^2=0.3$, which reveals a weak linear
correlation between $C_{out}$ and $C_{in}$.
Since dashed lines of Fig. \ref{fig:03} correspond to $C_{in}$=$C_{out}$, nodes (i.e., patterns)
lying above these lines are \textit{complexity amplifiers} (see classification of node roles at Tab. \ref{tab:05}), indicating that they distribute more complexity than the levels they receive.
On the contrary, nodes below dashed lines
act as \textit{complexity attenuators}, since they reduce the complexity existing in previous patterns.
In order to understand how the complexity and entropy of a disease evolve, and which patterns increase or decrease them, we obtain the flux $\phi$ and the amplification $A$ for both
$C$ and $H$ (see Methods for details). Figure \ref{fig:04} shows the interplay between the entropy and complexity fluxes, $\phi_H$ and $\phi_C$, that a node handles.
Panels of Fig. \ref{fig:04} show the behaviour of the three diseases, revealing negative correlations between entropy and complexity fluxes. This result indicates that we are in a region where the level of stochasticity is high, since, as we can see in Fig. \ref{fig:02}B of the Methods section, entropy and complexity only have positive correlations in situations of high disorder.
Also note that {\it entropy hubs}, i.e., those nodes handling high amounts of entropy, are located in the bottom-right side of the panels. These nodes receive the highest amount of entropy and distribute it among the rest of the patterns in the network, but fail as complexity distributors. Interestingly, Influenza and Malaria have two entropy hubs that reach the highest possible value ($\phi_H=\sqrt{2}$) and, as a consequence, their complexity flux decreases to $\phi_C=0$ (see Methods for an explanation of the boundaries of $\phi_H$ and $\phi_C$).
Similar to Fig. \ref{fig:03}, patterns in Influenza span their
$\phi_C$ along different levels of $\phi_H$, reaching the highest complexity values. This suggests that some specific patterns in Influenza behave as good ``conductors'' of complexity,
while others act as entropy transmitters. On the contrary, Malaria only has entropy hubs, while Dengue seems to lie between the other two diseases, having just one complexity hub with $\phi_C>1.2$.
Linear regressions of the interplay between $\phi_C$ and $\phi_H$ show high values of the coefficient of determination $R^2$ for the three diseases.
While the entropy and complexity fluxes $\phi_{H,C}$ allow us to determine the existence of hub patterns and their conductance level, the \textit{amplification} $A$ gives an estimate of how much entropy/complexity a node gains or loses in the network.
\begin{figure}[htb]
\centering
\includegraphics[width=\linewidth]{fig04}
\caption{Entropy and complexity fluxes in the ($\phi_H,\phi_C$) plane. Solid lines correspond to the linear regressions of the three diseases. The coefficients of determination are $R^2=0.93$ for Dengue (A), $R^2=0.97$
for Influenza (B) and $R^2=0.9$ for Malaria (C).}
\label{fig:04}
\end{figure}
\textit{Amplification} is defined as the ratio between the outgoing and incoming entropy/complexity that passes through a node $i$ (see Eq. \ref{equ:a} of Table \ref{tab:04} for the mathematical definition). In Fig. \ref{fig:05} we plot the ($A_H,A_C$) plane of the three diseases, where we can observe that the amplification parameter allows us to define four regions of interest (marked by dashed lines) that, in turn, assign different roles to the nodes of the disease network. Region $R1$ includes nodes that amplify both entropy ($A_H> 1$) and complexity ($A_C> 1$). Since, as we have previously seen, we are in a state where a negative correlation exists between entropy and complexity, there are no nodes lying within this region. Region $R2$ defines nodes that act as entropy attenuators ($A_H< 1$) and complexity amplifiers ($A_C> 1$). Region $R3$ allocates nodes that attenuate both entropy ($A_H < 1$) and complexity ($A_C< 1$). Finally, region $R4$ corresponds to those nodes that increase the entropy ($A_H>1$) while reducing the complexity ($A_C< 1$).
\begin{figure}[htb]
\centering
\includegraphics[width=\linewidth]{fig05}
\caption{Amplifications in the $A_H \times A_C$ plane. Solid lines correspond to the model fits. Dashed lines define the regions of interest R1, R2, R3, and R4, which allocate nodes of different characteristics. (\textbf{A}) The coefficient of determination for Dengue is $R^2=0.66$. (\textbf{B}) Influenza leads to $R^2=0.61$. (\textbf{C}) Malaria: $R^2=0.81$.}
\label{fig:05}
\end{figure}
The case of $A_H=A_C=1$ (i.e., no amplification) only occurs for nodes with $\phi_{H,C}=\sqrt{2}$, which is characteristic of patterns behaving as entropy and complexity hubs.
We can observe in all cases that $C$ undergoes the largest amplification (or attenuation) of its values (see the maximum and minimum values in Fig. \ref{fig:05}), while the entropy amplification is always bounded between $0.9$ and $1.15$.
In region R2, we observe how Dengue (Fig. \ref{fig:05}A) has three nodes that increase their incoming complexity around threefold, while slightly reducing their entropy. However, it is Influenza (Fig. \ref{fig:05}B) that has the node with the highest complexity amplification, which increases the incoming complexity up to 4 times. On the other hand, Malaria (Fig. \ref{fig:05}C) is the disease where the amplification of complexity is the lowest, with only one node with a value higher than 2.
In turn, $R4$ allocates nodes that decrease the level of complexity of their incoming patterns and, at the same time, increase their entropy. We can observe that, both in Dengue and Influenza, the nodes with the highest entropy amplification $A_H$ depart from the linear behavior that seems to exist in the interplay between $A_C$ and $A_H$.
In fact, Malaria's coefficient of determination is higher than those of Dengue and Influenza, but this can be attributed to the deviations from the linear trend reported at both ends of the distributions: nodes with the highest $A_H$ and $A_C$ behave differently from the rest, and since Malaria lacks nodes with such extreme values, its coefficient of determination is higher.
For the sake of a complete characterization, we now pay attention to the interplay between the entropy/complexity role of the nodes and their topological importance in the structure of the networks. With this aim, we use the eigenvector centrality ($ec$) to account for the node importance \cite{newman2010}. $ec$ assumes that central nodes are those that are, at the same time, (i) connected to many nodes and (ii) connected to well-connected nodes. In addition to $ec$, we obtain the \textit{pattern fluctuation} ($f$), which quantifies the variability inside each pattern: patterns whose elements increase and decrease one after the other have a high $f$, while patterns that monotonically increase or decrease have the lowest $f$ (see Methods). For each disease, assuming a pattern dimension of $D=4$, we obtain $24$ different patterns, whose variability $T$ results in 6 different levels of fluctuation. The higher the variability $T$, the higher the internal disorder of a pattern $\pi$ and the higher its $f$. Table \ref{tab:06} of the Supp. Info. shows how patterns with $D=4$ are grouped into the $6$ different values of $f$. Figure \ref{fig:06} depicts the fluctuation $f$ vs the topological importance $ec$, with nodes grouped in terms of their $f$. We can observe that central (important) nodes are those with low levels of fluctuation. In other words, patterns with low variability are the central ones in the network structure. On the contrary, as the internal fluctuation of a pattern increases, its probability of being a hub decreases. As a consequence, peripheral nodes are associated with the patterns having the highest fluctuations.
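The exact definition of $f$ is given in Methods; as an illustration, one variability measure that is consistent with the six levels reported for $D=4$ is the sum of squared successive differences within a pattern (an assumption on our part, not necessarily the paper's formula):

```python
from itertools import permutations

def variability(pattern):
    """Internal variability of an ordinal pattern, sketched as the sum of
    squared successive differences (assumed proxy for the fluctuation f).
    Monotonic patterns score lowest; alternating patterns score highest."""
    return sum((b - a) ** 2 for a, b in zip(pattern, pattern[1:]))

# The 24 permutations of D = 4 collapse into 6 distinct levels
levels = sorted({variability(p) for p in permutations(range(4))})
print(len(levels))  # 6
```

Under this proxy, the monotonic patterns $(0,1,2,3)$ and $(3,2,1,0)$ attain the minimum level, matching the behaviour described in the text.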
It is worth noting that for Dengue and Influenza, the nodes with the lowest fluctuations, i.e., those patterns associated with either a monotonic increase or decrease of the number of infected individuals, are the ones with the highest centrality. This fact is probably caused by the existence of abundant periods where the number of infected individuals monotonically increases or decreases, leading to a high number of appearances of the patterns
$(0,1,2,3)$ or $(3,2,1,0)$, which increases their number of links and, unavoidably, their eigenvector centrality. At the same time, when a node accumulates a great part of the centrality, it relegates the rest of the nodes
of the network to a secondary role. However, Malaria behaves in a different way, since the heterogeneity of its eigenvector centrality is not as high as in the other two diseases.
Next, we compare the previous results about the interplay between the node role and its centrality with those obtained with synthetic time series. The aim is to qualitatively observe whether the dynamical signatures of these diseases are similar to those of classical models. To this end, we simulated six models with different levels of complexity and entropy. Specifically:
\begin{itemize}
\item A Linear Gaussian Process (LGP, $\mathcal{N}(0,1)$), with the aim of having a signal with the highest entropy, and, as a consequence, with the lowest complexity.
\item A second-order auto-regressive model (AR(2), $x_{t+2}=0.7x_{t+1}+0.2x_t+\epsilon_t$) with $\epsilon_t$ being a Gaussian noise.
\item A Self-Exciting Threshold AR model (SETAR($k;p_1,p_2$)), with $k$ the number of regimes and $p_1, p_2$ the orders of the autoregressive parts.
SETAR models have been widely used to model ecological systems that oscillate between two non-linear regimes with different delays. This model attains higher levels of complexity by decreasing its entropy levels. We used a SETAR(2;2,2) switching between the regimes ($0.62+1.25x_{t-1}-0.43x_{t-2}+0.0381\epsilon_t$) if $x_{t-2}\leq3.25$ and ($2.25+1.52x_{t-1}-1.24x_{t-2}+0.0626\epsilon_t$) otherwise.
\end{itemize}
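The two autoregressive models above can be simulated directly from their recurrences; the sketch below uses arbitrary seeds and initial conditions (our choices, not the paper's):

```python
import random

def ar2(n, seed=1):
    """AR(2): x_{t+2} = 0.7 x_{t+1} + 0.2 x_t + eps_t, Gaussian noise."""
    rng = random.Random(seed)
    x = [rng.gauss(0, 1), rng.gauss(0, 1)]
    while len(x) < n:
        x.append(0.7 * x[-1] + 0.2 * x[-2] + rng.gauss(0, 1))
    return x

def setar(n, seed=1):
    """SETAR(2;2,2) with the two regimes given in the text, switching
    on the threshold x_{t-2} <= 3.25 (initial values are arbitrary)."""
    rng = random.Random(seed)
    x = [3.0, 3.0]
    while len(x) < n:
        e = rng.gauss(0, 1)
        if x[-2] <= 3.25:
            x.append(0.62 + 1.25 * x[-1] - 0.43 * x[-2] + 0.0381 * e)
        else:
            x.append(2.25 + 1.52 * x[-1] - 1.24 * x[-2] + 0.0626 * e)
    return x
```

The threshold switching keeps the SETAR trajectory bounded, alternating between the two locally unstable regimes.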
We also modeled three chaotic systems:
\begin{itemize}
\item A logistic map: $x_{t+1}=4x_t(1-x_t)$.
\item A R\"osler system: $\dot{x}=-y-z$, \hspace{0.2cm} $\dot{y}=x+0.2y$, \hspace{0.2cm} $\dot{z}=0.2+z(x-5.7)$.
\item A Lorenz system: $\dot{x}=10(y-x)$, \hspace{0.2cm} $\dot{y}=x(28-z)-y$, \hspace{0.2cm} $\dot{z}=xy-2.6667z$.
\end{itemize}
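The logistic map, the simplest of the three chaotic systems, illustrates how these trajectories are generated (the continuous-time R\"ossler and Lorenz systems additionally require a numerical integrator, e.g. Runge--Kutta):

```python
def logistic(n, x0=0.2):
    """Iterate the fully chaotic logistic map x_{t+1} = 4 x_t (1 - x_t).
    The initial condition x0 is an arbitrary value in (0, 1)."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

orbit = logistic(10_000)
print(min(orbit), max(orbit))  # the orbit stays inside [0, 1]
```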
We set the length of the time series generated by each model to $M=10^4$, after removing the first 1000 samples to avoid possible transients. We reconstructed the respective symbolic networks with $D=4$ and calculated both the patterns' centralities and fluctuations. Comparing the dynamics-vs-topology diagrams, we detected that the AR(2) and SETAR models behave similarly to the diseases shown in Fig. \ref{fig:06}. We observed that the SETAR model has a considerable gap between peripheral and central nodes (Fig. \ref{fig:07}A), something that was already observed for the Influenza networks (Fig. \ref{fig:06}B). At the same time, the AR(2) model shows a smoother decay combined with higher values of centrality for all patterns (Fig. \ref{fig:07}A), as is the case for Malaria (Fig. \ref{fig:06}C).
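The reconstruction of a symbolic network from a time series can be sketched as follows (pattern convention: argsort of the window; the paper may use the equivalent rank convention, and we assume a delay $\tau=1$):

```python
def ordinal_pattern(window):
    """Ordinal pattern of a window: the indices that sort its values."""
    return tuple(sorted(range(len(window)), key=window.__getitem__))

def symbolic_network(series, D=4):
    """Weighted directed graph: nodes are ordinal patterns of dimension D,
    edge weights count transitions between consecutive patterns."""
    pats = [ordinal_pattern(series[i:i + D]) for i in range(len(series) - D + 1)]
    edges = {}
    for a, b in zip(pats, pats[1:]):
        edges[(a, b)] = edges.get((a, b), 0) + 1
    return edges

# Toy prevalence series: a monotonic rise followed by up-down fluctuations
print(symbolic_network([4, 7, 9, 10, 6, 11, 3], D=4))
```

Normalizing the outgoing edge weights of each node gives the transition probabilities $P_m$ used in the entropy and complexity definitions.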
In both cases, peripheral nodes are the ones with the highest fluctuations, while hubs correspond to patterns having internal monotonic increase/decrease.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth,height=13cm]{fig06}
\caption{Interplay between the entropy/complexity role of the nodes and their centrality in the disease networks.}
\label{fig:06}
\end{figure}
We now reproduce the ($H,C$) plane of global measures for all synthetic models and compare them with the experimental results (see Fig. \ref{fig:02}B of Materials and Methods).
Figure \ref{fig:07}B depicts the ($H,C$) plane accounting for the interplay between these two variables and giving interesting information about the global organization of each disease's dynamics. Here, the properties of the time series are captured by the distribution of all patterns appearing in the time series for $D=4$ and visualized in the ($H,C$) phase diagram. All models are enclosed between the theoretical maximum and minimum complexity curves (black lines). These curves depend solely on the functional forms of the entropy and the disequilibrium, as well as on the dimension of the space of probabilities associated with the system. Specifically, obtaining the extreme (maximum or minimum) values of complexity is an optimization problem in which we compute the set of probability distributions that optimizes the disequilibrium for a specific value of entropy. This makes these curves useful when comparing, in the same ($H,C$) space, the dynamics of time series of different nature and/or lengths, provided the same dimension is used.
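For reference, the coordinates of this plane can be computed from a pattern distribution. The sketch below uses the normalized Shannon entropy and the standard Mart\'in--Plastino--Rosso statistical complexity $C = Q_J \cdot H$, which we assume matches the paper's definition of $C$:

```python
import math

def _shannon(p):
    """Shannon entropy of a probability distribution (natural log)."""
    return -sum(x * math.log(x) for x in p if x > 0)

def entropy_complexity(p):
    """Normalized entropy H and statistical complexity C = Q_J * H, with
    Q_J the normalized Jensen-Shannon disequilibrium between the pattern
    distribution p and the uniform distribution."""
    n = len(p)
    h = _shannon(p) / math.log(n)
    uniform = [1.0 / n] * n
    mix = [(pi + ui) / 2 for pi, ui in zip(p, uniform)]
    js = _shannon(mix) - 0.5 * _shannon(p) - 0.5 * _shannon(uniform)
    # Normalization: maximum JS divergence (delta vs uniform distribution)
    js_max = -0.5 * ((n + 1) / n * math.log(n + 1)
                     - 2 * math.log(2 * n) + math.log(n))
    return h, (js / js_max) * h

# Uniform distribution over the 24 patterns of D = 4:
# maximal entropy, zero complexity (the LGP limit)
print(entropy_complexity([1 / 24] * 24))  # approximately (1.0, 0.0)
```

A fully ordered series (a single pattern with probability one) gives $H=0$ and $C=0$, the other extreme of the plane.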
The R\"osler (variable y), Lorenz (variable z) and Logistic chaotic maps appear in the region of positive correlations between $H$ and $C$, which corresponds to high levels of complexity, which are, in turn, correlated with the low levels of entropy. Note that the region before the maximum showed by the theoretical curves, can be considered
as the division between ordered and disordered states and crucially determines the kind of correlation between entropy and complexity.
In the case of autoregressive models, all of them lie in the region of negative correlations. SETAR depicts a high level of complexity with a tendency to the disorder, something that is expected for highly non-linear coupled systems with Gaussian noise. As soon as noise begins to drive the dynamics, and the nonlinearity vanishes, which is the case of AR(2), complexity decreases while entropy increases. In the extreme case of the LGP, its disorder is the highest and its complexity goes to zero.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{fig07}
\caption{Comparison between real datasets and synthetic models.
Colors correspond to the AR(2) (orange), Logistic (purple), Lorenz (variable $z$) (yellow), LGP (brown), R\"ossler (variable $y$) (pink) and SETAR (gray) models. (A) SETAR and AR(2) share similarities with the behaviour reported in seasonal and vector-borne diseases (see Figure \ref{fig:06}). (B) ($H,C$) plane for all synthetic models and real datasets. Black lines represent the theoretical curves of maximum and minimum complexity for $D=4$.}
\label{fig:07}
\end{figure}
Finally, it is important to highlight that fluctuations are also valid for building the $H\times C$ diagrams, as well as for identifying the types of nodes. In the Supplementary Information, we reproduce the relationships between ($C_{in}$, $C_{out}$), ($\phi_{H}$, $\phi_{C}$) and ($A_{H}$, $A_{C}$). Without loss of generality, the figures of the supplementary material are consistent with the behaviour observed in Figs. \ref{fig:03}, \ref{fig:04} and \ref{fig:05}.
\section{Discussion}
In this work we investigated the problem of how entropy and complexity flow along the temporal evolution of different epidemic diseases. Time series containing the number of individuals infected with Dengue, Influenza and Malaria are encoded by ordinal patterns, i.e., symbols that are connected to each other sequentially in order to build symbolic graphs. To measure how entropy and complexity flow along ordinal patterns, we endowed symbolic networks with patterns (nodes) that have the ability to transfer their amounts of entropy/complexity to other patterns. Consequently, nodes, or ordinal patterns, might amplify or attenuate the entropy/complexity transmitted to other nodes.
In this way, we propose a family of five parameters to quantify how entropy/complexity flows among temporal patterns. Two of them quantify how information arrives at (departs from) a pattern in terms of its entropy and complexity. They are normalized, allowing quantitative comparisons between different diseases and synthetic models. Two other features quantify the level of conductance (flux) and the gain of information of a pattern (amplification). The last one (fluctuation) assesses the inner variability of the dynamics inside each pattern.
Using these metrics, we characterize disease outbreaks by converting epidemiological records into symbolic networks, where the temporal fluctuations of the signals might communicate with each other.
The aim of this approach is to unveil whether the exchange of information among each signal's patterns shows differences across diseases.
One may consider the incoming/outgoing complexity as information that enters/leaves a given pattern, which, in turn, consists of a dynamical state of the disease prevalence given by the combination of $D$ sequential values accounting for the number of infected individuals. In this context, we have seen in Fig. \ref{fig:03} how the incoming and outgoing complexity are positively correlated for the
three diseases. In this way, the more information a signal's segment receives from a previous one, the more information it sends to the following temporal structures. Of the three diseases, Influenza is the one reaching the highest correlation. In contrast, Malaria is the disease with the lowest correlation, suggesting the influence of other variables in the transmission of complexity levels.
To the best of our knowledge, this work presents the first evidence of the link between the internal dynamics of ordinal patterns and the role they play in a symbolic network. Thus, the fluctuation ($f$) seems an interesting quantity for measuring the internal dynamics of temporal structures in a signal. By associating node fluctuation with structural importance, we discover that hub patterns correspond to the nodes with the lowest fluctuation (i.e., hubs are nodes whose related patterns behave more monotonically). Reciprocally, the more peripheral a node is, the more its associated pattern fluctuates. Interestingly, this behaviour is reported in all diseases. However, the distribution of centrality in Malaria is more homogeneous, with the centrality of both hubs and peripheral nodes
between $0.1$ and $0.4$ (Fig. \ref{fig:06}C). On the other hand, the distribution of centrality in Dengue and, especially, Influenza is quite heterogeneous, with few nodes acquiring a centrality above $0.5$ (Fig. \ref{fig:06}A-B). This ``accumulation'' of centrality might be driven by the seasonal nature of Influenza, and the seasonal-like nature of the Dengue data for some of the countries considered: nodes with low $f$ appear regularly in the time series.
Importantly, when nodes are grouped in terms of their variabilities (see Supp. Info.), the fluctuation shows results comparable to those disclosed by the flux, the amplification and the nodal complexity.
For synthetic time series, we detected qualitative similarities between Influenza and nonlinear autoregressive models, while observing that Malaria more closely resembles linear AR processes; these relations were confirmed by a simple analysis of the entropy-complexity plane.
It is worthwhile to highlight that this family of novel nodal features is not only valid for symbolic networks, but for any kind of weighted directed graph.
For directed and weighted networks arising, e.g., in metabolic, genetic or systems-biology scenarios, one can directly apply the methodology proposed in this paper.
Nonetheless, further research is needed to better understand both the structure and dynamics of symbolic networks. On the one hand, the
construction of symbolic networks from time series should be evaluated when patterns contain delays $\tau\neq 1$. Additionally, working
with long time series is desirable, as it would allow reaching larger
dimensions $D$ for the pattern length. Hence, obtaining large connected networks would improve their statistical properties.
Apart from that, a detailed analysis of the robustness of this kind of networks is still pending and
would help in understanding the global properties of these particular graphs.
On the other hand, one of the weaknesses of our study was the poor quality of the available data. Unfortunately, the lack of organization in the acquisition and
storage of surveillance data from developing countries makes it difficult to draw deeper conclusions for public health policies.
Datasets usually lack complete information. For example, there exist weeks with zero reported cases in countries where the continuous
presence of an infected population is well known. In addition, epidemiological weeks do not reflect the real number of infected individuals. Due to human migrations, infected
subjects in a country could have been infected in foreign ones (imported cases), which is a problem when performing studies of spatio-temporal
differentiation. At the moment of retrieving the data, there were no other countries with available datasets for the three diseases we consider and, surprisingly,
Influenza is not well documented in most cases. In addition, although Dengue and Malaria are both vector-borne diseases, each involves different pathogen strains. Hence, a study with high-quality datasets could find differences among them, but it is mandatory
to have prevalence records of each strain long enough to build sufficiently large signals. In general, works with datasets of different nature would
shed light on how different/similar real systems are by means of these entropy/complexity features.
All in all, this work opens the door to new experimental designs to extract information from time series, as well as from directed
and weighted networks. This methodology should be used as a complementary tool for analyzing time series of real datasets. In general, the use of
these metrics and their related methodologies will provide new information to better understand
the interplay between temporal structures in natural and artificial collections of finite samples.
\dataccess{Diseases data available at: \href{https://github.com/JohannHM/Disease-Outbreaks-Data.git}{GitHub repository. Disease-Outbreaks-Data}.}
\aucontribute{JLHD and JHM worked on acquisition of data, JMB and JHM on the conception and experimental design, JLHD, JHM, MC and JMB worked on analysis and interpretation of data, JHM drafting the article, All authors equally collaborated on writing the final manuscript and revising it critically for intellectual content.}
\competing{We declare we have no competing interests.}
\funding{JHM thanks FAPESP grant 2016/01343-7 for funding his visit to ICTP-SAIFR from March--April 2019, where part of this work was done. JLHD was supported by the S\~ao Paulo Research Foundation (FAPESP) under Grants No. 2016/01343-7 and No. 2017/05770-0. JMB is funded by MINECO (project FIS2017-84151-P).}
\ack{JHM thanks Xavier Bosch-Capblanch (Swiss Tropical and Public Health Institute, University of Basel) and C. Mart\'inez for valuable conversations.}
\vskip2pc
\bibliographystyle{RS}